
Apache Iceberg Fundamentals

33m 32s
English
Paid

Course description

Modern data platforms need the flexibility of data lakes and the reliability of warehouses, and Apache Iceberg combines both. In this course, you will learn how this powerful open table format works, study its architecture, and learn to use its key features: schema evolution, "time travel," and high-performance analytics in Lakehouse systems. The course is built around practical examples from real data engineering work. You will set up a local lab with Docker, Spark, and MinIO, and create and manage Iceberg tables. From writing data and analyzing metadata to optimizing queries and restructuring partitions, you will gain the experience needed to work confidently with Iceberg in a production environment. By the end of the course, you will not only understand how Iceberg is structured internally but also have a working environment, ready-made notebooks for your projects, and a deep understanding of the table operations that are critical to Lakehouse architecture.

Why Iceberg?

Iceberg addresses long-standing big data problems: slow queries, complex schema changes, and the tight coupling of storage and compute. You'll learn why companies like Netflix, Stripe, and Apple have chosen Iceberg for their platforms and how to apply the same approaches in your own setup.

What you will do:

  1. Build a local Lakehouse lab based on Iceberg using Docker Compose, Spark, REST catalog, and MinIO.
  2. Create your first Iceberg table using a fun dataset (like one with Pokémon), define the schema, write data through PySpark, and explore how Iceberg manages metadata (a short PySpark sketch of this workflow follows the list).
  3. Master schema evolution: adding, renaming, and changing column types, as well as advanced partitioning techniques.
  4. Learn to perform point-in-time operations (such as deleting rows) and use the "time travel" feature to analyze past versions of data.
  5. Dive into Iceberg's architecture: Parquet files, manifests, snapshots, and catalogs.
  6. Use the MinIO UI to see how data and metadata are physically stored.
  7. Run analytical SQL queries on Iceberg tables through PySpark, using familiar operations like join, group by, and filter.


Watch Online

This is a demo lesson (10:00 remaining)

You can watch up to 10 minutes for free. Subscribe to unlock all 12 lessons in this course and access 10,000+ hours of premium content across all courses.


All Course Lessons (12)

  1. Intro Demo – 01:07
  2. Goals – 01:03
  3. Challenges – 04:10
  4. Iceberg & Lakehouses – 01:42
  5. Architecture Deep Dive – 02:02
  6. Iceberg Features – 02:45
  7. Architecture & Summary – 02:51
  8. Setup & Docker – 03:31
  9. Spark Iceberg Config – 02:31
  10. Write data to Iceberg – 01:32
  11. Inspect metadata & schema eval – 08:41
  12. Inspect data on MinIO & Outro – 01:37

Unlock unlimited learning

Get instant access to all 12 lessons in this course, plus thousands of other premium courses. One subscription, unlimited knowledge.

Learn more about the subscription


Similar courses

Case Study in Product Data Science

Sources: LunarTech
This course offers unique opportunities for students seeking to master key aspects of data analysis in product development. The course...
1 hour 4 minutes 47 seconds
Complete linear algebra: theory and implementation

Sources: udemy
You need to learn linear algebra! Linear algebra is perhaps the most important branch of mathematics for computational sciences, including machine learning, AI, data science, st...
32 hours 53 minutes 26 seconds
dbt for Data Engineers

Sources: Andreas Kretz
dbt (data build tool) is a SQL-first data transformation tool. It allows for simple and transparent transformation, testing, and documentation...
1 hour 52 minutes 55 seconds
Python for Data Science and Machine Learning Bootcamp

Sources: udemy
Are you ready to start your path to becoming a Data Scientist? This comprehensive course will be your guide to learning how to use the power of Python to analy
24 hours 49 minutes 42 seconds
Dimensional Data Modeling

Sources: Eka Ponkratova
In today's world, where data plays a key role, effective organization of information is the foundation for quality analytics and report building.
1 hour 37 minutes 57 seconds