Cloudera Educational Services

Upcoming Sessions


This course introduces Apache Iceberg, a high-performance open table format for organizing petabyte-scale analytic datasets on a file system or object store, available on Cloudera Data Warehouse and Cloudera Data Engineering on both Private and Public Cloud. Combined with Cloudera Data Platform, Iceberg can enable users to build an open data lakehouse architecture for multi-function analytics and to deploy large-scale end-to-end pipelines. This course covers various aspects of Apache Iceberg, such as benefits, architecture, internal operation, read and write operations, and advanced functions, all while drawing comparisons to Hive and building on the students' existing knowledge and experience.

DATE: April 6-9, 2026, 9:00 - 17:00 (GMT+1 TIMEZONE)
Virtual Classroom, EMEA
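
For readers new to the format, the following minimal sketch (not part of the course material) shows what creating and inspecting an Iceberg table from PySpark can look like. It assumes the Iceberg Spark runtime is on the classpath and uses a hypothetical local Hadoop catalog named "local" with an illustrative warehouse path and table name, rather than a Cloudera deployment.

    # Minimal illustrative sketch: an Iceberg table managed through Spark SQL.
    # Catalog name, warehouse path, and table name are hypothetical.
    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .appName("iceberg-sketch")
        .config("spark.sql.catalog.local", "org.apache.iceberg.spark.SparkCatalog")
        .config("spark.sql.catalog.local.type", "hadoop")
        .config("spark.sql.catalog.local.warehouse", "/tmp/iceberg-warehouse")
        .getOrCreate()
    )

    # Create an Iceberg table and write a few rows.
    spark.sql("CREATE TABLE IF NOT EXISTS local.db.events (id BIGINT, name STRING) USING iceberg")
    spark.sql("INSERT INTO local.db.events VALUES (1, 'created'), (2, 'updated')")

    # Iceberg exposes table history through metadata tables such as the snapshot log,
    # which is the basis for features like time travel covered in the course.
    spark.sql("SELECT snapshot_id, committed_at FROM local.db.events.snapshots").show()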

Designing Edge to AI Applications is a 4-day learning event that addresses advanced big data architecture topics for building edge-to-AI applications, covering streaming, operational data processing, analytics, and machine learning. The workshop brings technical contributors together in a group setting to design and architect solutions to a challenging business problem. It addresses big data architecture problems in general and then applies them to the design of a challenging system. Throughout the highly interactive workshop, participants apply concepts to real-world examples, resulting in detailed, synergistic discussions. Participants learn techniques for architecting big data systems, drawing not only on Cloudera's experience but also on the experiences of fellow participants. More specifically, this workshop addresses advanced big data architecture topics, including data formats, transformation, transactions, real-time, batch, and machine learning processing, scalability, fault tolerance, security, and privacy, minimizing the risk of an unsound architecture and technology selection.

What you'll learn
• Cloudera Data Platform
• Big Data Architecture
• Building Scalable Applications
• Building Fault-Tolerant Solutions
• Security and Privacy
• Deployment on Public, Private, and Hybrid Cloud

What to expect
Participants should mainly be architects, developer team leads, big data developers, data engineers, senior analysts, DevOps admins, and machine learning developers who are working on big data or streaming applications and have an interest in how to design and develop such applications on CDP. To gain the most from the workshop, participants should have working knowledge of popular big data and streaming technologies such as HDFS, Spark, Kafka, Hive/Impala, data formats, and relational database management systems. Detailed API-level knowledge is not needed; there will be no programming activities, and the focus will instead be on architecture design. The workshop will be divided into small groups to discuss the problems, develop solutions, and present them.

DATE: March 23-26, 2026, 9:00 - 17:00 (GMT+1 TIMEZONE)
Virtual Classroom, EMEA

This four-day Analyzing with Data Warehouse course will teach you to apply traditional data analytics and business intelligence skills to big data. This course presents the tools data professionals need to access, manipulate, transform, and analyze complex data sets using SQL and familiar scripting languages.

What you'll learn
Through instructor-led discussion and interactive, hands-on exercises, participants will navigate the ecosystem, learning how to:
• Use Apache Hive and Apache Impala to access data through queries
• Identify distinctions between Hive and Impala, such as differences in syntax, data formats, and supported features
• Write and execute queries that use functions, aggregate functions, and subqueries
• Use joins and unions to combine datasets
• Create, modify, and delete tables, views, and databases
• Load data into tables and store query results
• Select file formats and develop partitioning schemes for better performance
• Use analytic and windowing functions to gain insight into their data (see the sketch after this listing)
• Store and query complex or nested data structures
• Process and analyze semi-structured and unstructured data
• Optimize and extend the capabilities of Hive and Impala
• Determine whether Hive, Impala, an RDBMS, or a mix of these is the best choice for a given task
• Utilize the benefits of CDP Public Cloud Data Warehouse

What to expect
This course is designed for data analysts, business intelligence specialists, developers, system architects, and database administrators. Some knowledge of SQL is assumed, as is basic Linux command-line familiarity.

DATE: February 2-5, 2026, 9:00 - 17:00 (GMT+2 TIMEZONE)
Virtual Classroom, EMEA
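
As a small taste of the analytic (window) functions listed above, the sketch below runs a windowed ranking query against Impala from Python. It assumes the impyla client is installed; the coordinator hostname and the sales table and its columns are hypothetical, and the same SQL would also run in Hive with minor differences.

    # Illustrative sketch only: a windowed aggregate run against Impala.
    # Host, table, and column names are hypothetical.
    from impala.dbapi import connect

    conn = connect(host="impala-coordinator.example.com", port=21050)
    cur = conn.cursor()

    # Rank products by total revenue within each region using an analytic function.
    cur.execute("""
        SELECT region,
               product,
               SUM(amount) AS revenue,
               RANK() OVER (PARTITION BY region ORDER BY SUM(amount) DESC) AS rnk
        FROM sales
        GROUP BY region, product
    """)
    for region, product, revenue, rnk in cur.fetchall():
        print(region, product, revenue, rnk)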

This three-day hands-on training course delivers the key concepts and expertise developers need to optimize the performance of their Apache Spark applications. During the course, participants will learn how to identify common sources of poor performance in Spark applications, techniques for avoiding or solving them, and best practices for Spark application monitoring. Optimizing Apache Spark Applications presents the architecture and concepts behind Apache Spark and the underlying data platform, then builds on this foundational understanding by teaching students how to tune Spark application code. The course format emphasizes instructor-led demonstrations that illustrate both performance issues and the techniques that address them, followed by hands-on exercises that give students an opportunity to practice what they have learned in an interactive notebook environment.

What to expect
This course is designed for software developers, engineers, and data scientists who have experience developing Spark applications and want to learn how to improve the performance of their code. This is not an introduction to Spark. Spark examples and hands-on exercises are presented in Python, and the ability to program in this language is required. Basic familiarity with the Linux command line is assumed. Basic knowledge of SQL is helpful.

DATE: April 27-29, 2026, 9:00 - 17:00 (GMT+2 TIMEZONE)
Virtual Classroom, EMEA
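
To illustrate the kind of tuning the course addresses, here is a minimal sketch of two common techniques: caching a reused DataFrame and broadcasting a small dimension table to avoid a shuffle join. It is not taken from the course; file paths and column names are hypothetical.

    # Illustrative sketch only: caching and broadcast joins in PySpark.
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import broadcast

    spark = SparkSession.builder.appName("tuning-sketch").getOrCreate()

    events = spark.read.parquet("/data/events")        # large fact table (hypothetical path)
    countries = spark.read.parquet("/data/countries")  # small dimension table (hypothetical path)

    # Cache a DataFrame that several downstream queries reuse, so the filter
    # is computed once rather than re-evaluated for every action.
    recent = events.filter("event_date >= '2026-01-01'").cache()

    # Hint Spark to broadcast the small table, replacing a shuffle-based
    # sort-merge join with a map-side broadcast hash join.
    joined = recent.join(broadcast(countries), "country_code")

    joined.groupBy("country_name").count().show()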

This four-day course teaches the architecture, deployment, and configuration of Cloudera Data Services on Embedded Containerized Services (ECS). Cloudera Data Services provide a state-of-the-art, low-code platform that unifies the entire data lifecycle, reducing development costs and accelerating the development and deployment of use cases.

The course starts by covering best practices for managing Docker images and containers. Students will then build a Docker private registry, which will be used to deploy a Data Services cluster on ECS. Students will install, configure, and validate Cloudera Data Engineering, Cloudera Data Warehouse, and Cloudera Machine Learning. Through hands-on exercises, students will gain experience with Kubernetes, install a Private Cloud Embedded Container Service (ECS), and deploy Cloudera Data Services. Additionally, the course covers networking and hardware requirements and explains how Kubernetes pods dynamically scale to support Cloudera Data Services.

Who should take this course
This immersive course is designed for Cloudera Administrators transitioning to managing Cloudera Data Services on premises. Students should have at least 3 to 5 years of system administration experience. Students must have proficiency in the Linux Command Line Interface and knowledge of Identity Management, including Transport Layer Security and Kerberos. Familiarity with SQL select statements is recommended. Prior experience with Cloudera products is required. Students need reliable internet access to connect to the Amazon Web Services environment used in this course.

Recommended prerequisite courses
• ADMIN-230: Administering Cloudera on premises
• ADMIN-332: Securing Cloudera on premises

DATE: March 30 - April 2, 2026, 9:00 - 17:00 (GMT+2 TIMEZONE)
Virtual Classroom, EMEA

This four-day instructor-led course begins by introducing Apache Kafka, explaining its key concepts and architecture, and discussing several common use cases. Building on this foundation, you will learn how to plan a Kafka deployment, and then gain hands-on experience by installing and configuring your own cloud-based, multi-node cluster running Kafka on the Cloudera Data Platform (CDP). You will then use this cluster in the more than 20 hands-on exercises that follow, covering a range of essential skills: starting with how to create Kafka topics, producers, and consumers, then continuing through progressively more challenging aspects of Kafka operations and development, such as those related to scalability, reliability, and performance. Throughout the course, you will learn and use Cloudera's recommended tools for working with Kafka, including Cloudera Manager, Schema Registry, Streams Messaging Manager, and Cruise Control.

DATE: March 30 - April 2, 2026, 9:00 - 17:00 (GMT+3 TIMEZONE)
Virtual Classroom, EMEA
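
For orientation, the sketch below shows the simplest possible Kafka producer and consumer using the confluent-kafka Python client. It is not course material; the broker address, topic name, and consumer group id are hypothetical, and the course itself works with Cloudera's tooling around Kafka rather than this client.

    # Illustrative sketch only: produce one message to a topic and read it back.
    # Broker address, topic, and group id are hypothetical.
    from confluent_kafka import Producer, Consumer

    conf = {"bootstrap.servers": "broker-1.example.com:9092"}

    # Produce a single keyed message.
    producer = Producer(conf)
    producer.produce("orders", key=b"order-42", value=b'{"status": "created"}')
    producer.flush()

    # Consume it back from the beginning of the topic.
    consumer = Consumer({**conf, "group.id": "orders-readers", "auto.offset.reset": "earliest"})
    consumer.subscribe(["orders"])
    msg = consumer.poll(timeout=10.0)
    if msg is not None and msg.error() is None:
        print(msg.key(), msg.value())
    consumer.close()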
