Upcoming Sessions

ILT - DSCI-272: Predicting with Cloudera Machine Learning - 4475423
Starting: 2025/10/07 @ 03:30 AM (GMT+02:00) Budapest
Ending: 2025/10/10 @ 11:30 AM (GMT+02:00) Budapest
Type: Multi-day Session

ILT - DSCI-272: Predicting with MLOps on Cloudera AI - 4677380
Starting: 2025/10/13 @ 09:00 AM Berlin
Ending: 2025/10/16 @ 05:00 PM Berlin

About This Training

Generative AI (GenAI) and Large Language Models (LLMs) are extremely powerful new tools that are changing every industry. To take full advantage of GenAI and LLMs, these new capabilities need to be combined with your existing enterprise data. This two-day course teaches how to use Cloudera AI to train, augment, fine-tune, and host LLMs to create powerful enterprise AI solutions.

What Skills You Will Gain

Through lecture and hands-on exercises, you will learn how to:
- Select the right LLM model for a use case
- Configure a prompt for an LLM
- Use Retrieval Augmented Generation (RAG)
- Fine-tune an LLM with enterprise data
- Use the AI Model Registry and host an LLM
- Create an AI agent with Crew AI

Who Should Take This Course

This course is designed for data scientists and machine learning engineers who need to understand how to use Cloudera AI to leverage the full power of their enterprise data, generative AI, and Large Language Models to deliver powerful business solutions.

DATE: November 24-25, 2025
Virtual Classroom, AMER
9:00 - 17:00 (CDT TIMEZONE)
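The RAG item in the skills list above pairs an LLM prompt with documents retrieved from enterprise data. As a rough illustration of that pattern only (this is not Cloudera AI's API; the documents, scoring function, and prompt wording are invented for the example), a minimal Python sketch might look like this:

```python
# Toy Retrieval Augmented Generation (RAG) sketch: score a few documents by
# keyword overlap with the question, then prepend the best matches to the
# prompt that would be sent to a hosted LLM. All data here is illustrative.

documents = [
    "Cloudera AI can host a fine-tuned LLM behind a serving endpoint.",
    "The AI Model Registry tracks model versions and their metadata.",
    "Fine-tuning adapts a base model to enterprise data.",
]

def retrieve(question: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents sharing the most words with the question."""
    q_words = set(question.lower().split())
    ranked = sorted(docs, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
    return ranked[:k]

question = "What does the AI Model Registry track?"
context = "\n".join(retrieve(question, documents))

# The augmented prompt grounds the model's answer in the retrieved context.
prompt = f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {question}"
print(prompt)
```

In practice the keyword scorer would be replaced by a vector search over embedded enterprise documents, but the shape of the flow (retrieve, assemble context, prompt the model) stays the same.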

This four-day hands-on training course delivers the key concepts and knowledge developers need to use Apache Spark to develop high-performance, parallel applications on the Cloudera Data Platform. Hands-on exercises allow students to practice writing Spark applications that integrate with Cloudera Data Platform core components. Participants will learn how to use Spark SQL to query structured data, how to use Hive features to ingest and denormalize data, and how to work with “big data” stored in a distributed file system. After taking this course, participants will be prepared to face real-world challenges and build applications that enable faster decisions, better decisions, and interactive analysis, applied to a wide variety of use cases, architectures, and industries.

What you'll learn

During this course, you will learn how to:
- Distribute, store, and process data in a cluster
- Write, configure, and deploy Apache Spark applications
- Use the Spark interpreters and Spark applications to explore, process, and analyze distributed data
- Query data using Spark SQL, DataFrames, and Hive tables
- Deploy a Spark application on the Data Engineering Service

What to expect

This course is designed for developers and data engineers. All students are expected to have basic Linux experience and basic proficiency with either the Python or Scala programming language. Basic knowledge of SQL is helpful. Prior knowledge of Spark and Hadoop is not required.

DATE: December 16-19, 2025
Virtual Classroom, APAC
9:30 - 17:30 (SGT TIMEZONE)
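As a taste of the "Query data using Spark SQL, DataFrames, and Hive tables" item above, here is a small PySpark sketch; the table and data are made up for illustration, whereas the course exercises work against data in cluster storage and Hive tables:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("spark-sql-sketch").getOrCreate()

# Illustrative in-memory data; real exercises read from distributed storage.
orders = spark.createDataFrame(
    [("o1", "alice", 120.0), ("o2", "bob", 75.5), ("o3", "alice", 42.0)],
    ["order_id", "customer", "amount"],
)
orders.createOrReplaceTempView("orders")

# The same aggregation expressed as Spark SQL and as the DataFrame API.
spark.sql("SELECT customer, SUM(amount) AS total FROM orders GROUP BY customer").show()
orders.groupBy("customer").sum("amount").show()

spark.stop()
```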

Apache Ozone is the next-generation hybrid storage service offering versatility and out-of-the-box compatibility. Ozone is an object store that overcomes the limitations of HDFS. This course teaches architecture, internal operations, installation, file system usage, best practices, security, maintenance, monitoring, tuning, and testing.

What you'll learn

This course teaches the Ozone internal architecture and how to install, use, maintain, monitor, tune, integrate, and test the Ozone service in a secure environment. Participants will gain the following skills:
- Understanding the Benefits of Using Ozone
- Installing and Configuring Secure Ozone
- Managing Files and Objects in Ozone
- Performance Tuning and Doing Baseline Tests
- Controlling Replication and Understanding Failover and Recovery
- Performing Maintenance Tasks
- Monitoring Ozone Using the Recon Service
- Integrating Hive, Impala, Spark, NiFi, and Flink with Ozone
- Migrating Data from HDFS to Ozone

What to expect

This advanced course is for administrators who are currently using CDP Private Cloud Base. The course will appeal to technicians, such as data engineers and application developers, who are migrating data and applications to Apache Ozone. Prior experience with Cloudera Data Platform, including HDFS, YARN, and Hive, is expected. Students must have access to the Internet to reach the classroom environments, which are located on Amazon Web Services.

DATE: December 9-12, 2025
Virtual Classroom, APAC
9:30 - 17:30 (SGT TIMEZONE)
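For a sense of how a Spark job can address data kept in Ozone, one common route is Ozone's Hadoop-compatible file system (the ofs:// scheme). The sketch below assumes a cluster already configured with the Ozone file system client; the service ID, volume, bucket, and path are placeholders, not names from the course:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("ozone-read-sketch").getOrCreate()

# Placeholder path of the form ofs://<ozone-service-id>/<volume>/<bucket>/<prefix>.
# The cluster must have the Ozone file system client configured for this to resolve.
df = spark.read.parquet("ofs://ozone1/vol1/bucket1/sales/")

df.printSchema()
print(df.count())

spark.stop()
```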

One of the most critical functions of a data-driven enterprise is the ability to manage ingest and data flow across complex ecosystems. Does your team have the tools and skill sets to succeed at this? This four-day course provides the fundamental concepts and experience necessary to automate the ingress, flow, transformation, and egress of data using Apache NiFi. The course also covers tuning, troubleshooting, and monitoring the dataflow process, as well as how to integrate a dataflow within the Cloudera CDP Hybrid ecosystem and with external systems.

What you'll learn

During this course, you learn how to:
- Define, configure, organize, and manage dataflows
- Transform and trace data as it flows to its destination
- Track changes to dataflows with NiFi Registry
- Use the NiFi Expression Language to control dataflows
- Optimize dataflows for better performance and maintainability
- Connect dataflows with other systems, such as Apache Kafka, Apache Hive, and HDFS
- Utilize the Data Flow Service

What to expect

This course is designed for developers, data engineers, administrators, and others with an interest in learning NiFi’s innovative no-code, graphical approach to data ingest. Although programming experience is not required, basic experience with Linux is presumed, and previous exposure to big data concepts and applications is helpful.

DATE: December 2-5, 2025
Virtual Classroom, APAC
9:30 - 17:30 (SGT TIMEZONE)

This course helps customers use Cloudera Data Platform to address data governance tasks, motivated by the need for compliance with regulations such as the European Union's General Data Protection Regulation (GDPR) and the United States' Health Insurance Portability and Accountability Act (HIPAA).

What you'll learn

Through instructor-led discussion, demonstrations, and hands-on exercises, you will learn how to:
- Identify which tools in Cloudera Data Platform (CDP) to use for key data governance activities
- Organize data objects using classifications and business glossary terms
- Find access history for data objects and policies
- Use Data Catalog Profilers in CDP to assist in organizing data objects
- Use Data Catalog to foster collaboration with colleagues
- View and interpret a data object's lineage
- Create and apply resource- and tag-based access control policies
- Create policies for data masking and row-level filtering

What to expect

This course is best suited for data stewards and others who are responsible for, or have an interest in, implementing regulatory compliance or performing typical data governance activities using the Cloudera Data Platform. Familiarity with basic data governance concepts is helpful, but not required.

DATE: November 27-28, 2025
Virtual Classroom, APAC
9:30 - 17:30 (SGT TIMEZONE)

Overview

This three-day hands-on training course delivers the key concepts and expertise developers need to optimize the performance of their Apache Spark applications. During the course, participants will learn how to identify common sources of poor performance in Spark applications, techniques for avoiding or solving them, and best practices for Spark application monitoring. Optimizing Apache Spark Applications presents the architecture and concepts behind Apache Spark and the underlying data platform, then builds on this foundational understanding by teaching students how to tune Spark application code. The course format emphasizes instructor-led demonstrations that illustrate both performance issues and the techniques that address them, followed by hands-on exercises that give students an opportunity to practice what they've learned in an interactive notebook environment.

What You'll Learn

Students who successfully complete this course will be able to:
- Understand Apache Spark's architecture, job execution, and how techniques such as lazy execution and pipelining can improve runtime performance
- Evaluate the performance characteristics of core data structures such as RDDs and DataFrames
- Select the file formats that will provide the best performance for your application
- Identify and resolve performance problems caused by data skew
- Use partitioning, bucketing, and join optimizations to improve Spark SQL performance
- Understand the performance overhead of Python-based RDDs, DataFrames, and user-defined functions
- Take advantage of caching for better application performance
- Understand how the Catalyst and Tungsten optimizers work
- Understand how Workload XM can help troubleshoot and proactively monitor Spark application performance
- Learn how the Adaptive Query Execution engine improves performance

What to Expect

This course is designed for software developers, engineers, and data scientists who have experience developing Spark applications and want to learn how to improve the performance of their code. This is not an introduction to Spark. Spark examples and hands-on exercises are presented in Python, and the ability to program in this language is required. Basic familiarity with the Linux command line is assumed. Basic knowledge of SQL is helpful.

DATE: November 19-21, 2025
Virtual Classroom, APAC
9:30 - 17:30 (SGT TIMEZONE)
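To make items such as caching and Adaptive Query Execution from the list above concrete, here is a minimal PySpark sketch; the configuration values and generated data are illustrative, not tuning recommendations for any particular cluster:

```python
from pyspark.sql import SparkSession

# Enable Adaptive Query Execution and set a starting shuffle partition count;
# both values are illustrative and would be tuned per workload in the course.
spark = (
    SparkSession.builder.appName("spark-tuning-sketch")
    .config("spark.sql.adaptive.enabled", "true")
    .config("spark.sql.shuffle.partitions", "200")
    .getOrCreate()
)

df = spark.range(0, 1_000_000).withColumnRenamed("id", "key")

# Cache a DataFrame that several downstream actions will reuse.
df.cache()
print(df.count())                          # first action materializes the cache
print(df.filter("key % 2 = 0").count())    # second action reads cached data
df.unpersist()

spark.stop()
```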