
Data Engineering on AWS

4h 46m 38s
English
Paid

Embark on your journey to mastering cloud technologies with the "Data Engineering on AWS" course. This course is tailored for beginners looking to dive into Amazon Web Services (AWS), one of the leading platforms for data processing. Ideal for aspiring data engineers, it provides a solid foundation for starting a career in this dynamic field.

Course Overview

Over the duration of this course, you'll be involved in creating a comprehensive end-to-end project utilizing data from an online store. Through a step-by-step approach, you will learn how to model data, construct data pipelines, and navigate key AWS tools such as Lambda, API Gateway, Kinesis, DynamoDB, Redshift, Glue, and S3.

What to Expect in the Course

Data Work

  • Understand the structure and various types of data you'll handle. Establish clear project goals to ensure successful execution.

Platform and Pipeline Design

  • Gain insights into platform architecture and pipeline design. Learn to load data, store it in S3 (the data lake), and serve it from DynamoDB (NoSQL) and Redshift (the data warehouse). Build pipelines both for APIs and for data streaming.

Basics of AWS

  • Create an AWS account and familiarize yourself with access and security management (IAM). Discover CloudWatch and the Boto3 library for AWS operations using Python.
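To give a feel for what Boto3 code looks like, here is a minimal sketch. The client is passed in as an argument so the function can be exercised without live AWS credentials; in real code you would create it once with `boto3.client("s3")`:

```python
# With Boto3 installed and credentials configured (e.g. via `aws configure`),
# the client would be created with:
#   import boto3
#   s3 = boto3.client("s3")

def list_bucket_names(s3_client):
    """Return the names of all S3 buckets visible to the given client."""
    response = s3_client.list_buckets()  # standard Boto3 S3 API call
    return [bucket["Name"] for bucket in response["Buckets"]]
```

Injecting the client rather than creating it inside the function also makes unit testing easy, a pattern that recurs in the Lambda code later in the course.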

Data Ingestion Pipeline

  • Create an API using API Gateway, transmit data to Kinesis, configure IAM, and develop an ingestion pipeline with Python.
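The core of such an ingestion pipeline is a small Lambda function that takes the POST body delivered by API Gateway and forwards it to Kinesis. A sketch of that handler follows; the stream name and `customer_id` partition key are illustrative assumptions, and the Kinesis client is passed in so the handler can be tested offline (in a real Lambda you would create it once at module level with `boto3.client("kinesis")`):

```python
import json

STREAM_NAME = "shop-ingestion-stream"  # hypothetical stream name

def lambda_handler(event, context, kinesis_client):
    """Receive an API Gateway proxy event and forward its body to Kinesis."""
    payload = json.loads(event["body"])  # API Gateway puts the raw POST body here
    kinesis_client.put_record(
        StreamName=STREAM_NAME,
        Data=json.dumps(payload).encode("utf-8"),
        PartitionKey=str(payload.get("customer_id", "0")),  # spreads records over shards
    )
    return {"statusCode": 200, "body": json.dumps({"status": "accepted"})}
```

Records with the same partition key land on the same shard, so a key with many distinct values (such as a customer id) keeps the load balanced.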

Data Transfer to S3 (Data Lake)

  • Configure a Lambda function to receive data from Kinesis and store it in S3.
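Conceptually, that Lambda decodes each Kinesis record (which arrives base64-encoded inside the event) and writes it to the bucket. A sketch under assumed names (the bucket and key layout are illustrative, and the S3 client is injected for offline testing):

```python
import base64
from datetime import datetime, timezone

BUCKET = "shop-data-lake"  # hypothetical bucket name

def lambda_handler(event, context, s3_client):
    """Decode incoming Kinesis records and store each one in S3 as a raw object."""
    for record in event["Records"]:
        # Kinesis delivers the payload base64-encoded inside the event
        raw = base64.b64decode(record["kinesis"]["data"])
        # Date-partitioned keys keep the data lake browsable and query-friendly
        key = "raw/{}/{}.json".format(
            datetime.now(timezone.utc).strftime("%Y/%m/%d"),
            record["kinesis"]["sequenceNumber"],
        )
        s3_client.put_object(Bucket=BUCKET, Key=key, Body=raw)
    return {"records_written": len(event["Records"])}
```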

Data Transfer to DynamoDB

  • Set up a pipeline for transferring data from Kinesis to DynamoDB, a fast NoSQL database.
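The DynamoDB branch follows the same shape: decode each Kinesis record, then write it as an item. A sketch with an assumed table name; the table resource is passed in for testability, where a real Lambda would use `boto3.resource("dynamodb").Table(TABLE_NAME)`:

```python
import base64
import json

TABLE_NAME = "customer-orders"  # hypothetical table name

def lambda_handler(event, context, table):
    """Decode Kinesis records and upsert each one into a DynamoDB table."""
    for record in event["Records"]:
        item = json.loads(base64.b64decode(record["kinesis"]["data"]))
        # put_item overwrites any existing item with the same primary key,
        # so replayed stream records do not create duplicates
        table.put_item(Item=item)
    return {"items_written": len(event["Records"])}
```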

API for Data Access

  • Create an API to interact with database data. Understand why letting visualization tools query the database directly is discouraged.
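Such a read API is typically another small Lambda behind API Gateway: it looks up an item in DynamoDB and wraps it in an HTTP response, so consumers never touch the database directly. A sketch under assumed names (the path parameter and key schema are illustrative; the table resource is injected for offline testing):

```python
import json

def lambda_handler(event, context, table):
    """Look up one customer's data and return it as an API Gateway response."""
    # Assumes a route like GET /customers/{customer_id}
    customer_id = event["pathParameters"]["customer_id"]
    result = table.get_item(Key={"customer_id": customer_id})
    if "Item" not in result:
        return {"statusCode": 404, "body": json.dumps({"error": "not found"})}
    return {"statusCode": 200, "body": json.dumps(result["Item"])}
```

Putting this layer in front of the database lets you control access, shape the response, and change the storage backend without breaking consumers.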

Data Visualization in Redshift

  • Stream data to Redshift using Kinesis Firehose, establish a Redshift cluster, configure security, create tables, and set up Firehose. Integrate Power BI with Redshift for comprehensive data analysis.
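One detail worth previewing: Firehose concatenates records before copying them into Redshift, so each JSON record needs a trailing newline or the objects run together and the COPY step fails. A sketch with an assumed delivery stream name (the client is injected; a real script would use `boto3.client("firehose")`):

```python
import json

DELIVERY_STREAM = "shop-redshift-firehose"  # hypothetical delivery stream name

def send_to_firehose(firehose_client, payload):
    """Send one JSON record to Kinesis Firehose for delivery into Redshift."""
    # The trailing newline separates records in the batch Firehose builds
    data = json.dumps(payload) + "\n"
    return firehose_client.put_record(
        DeliveryStreamName=DELIVERY_STREAM,
        Record={"Data": data.encode("utf-8")},
    )
```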

Batch Processing: AWS Glue, S3, and Redshift

  • Learn the techniques of batch data processing. Configure and execute Glue to write data from S3 to Redshift, understand Crawler and data catalog functionalities, and develop debugging skills.
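Glue jobs can be launched programmatically, which is how batch runs are usually automated. A sketch of triggering a run via the Glue API, under assumed names (the job name and `--input_path` argument are illustrative; the client is injected, where a real script would use `boto3.client("glue")`):

```python
def run_glue_job(glue_client, job_name, s3_input_path):
    """Start an AWS Glue job run and return its run id.

    Arguments passed here reach the job script via getResolvedOptions;
    Glue requires the '--' prefix on argument names.
    """
    response = glue_client.start_job_run(
        JobName=job_name,
        Arguments={"--input_path": s3_input_path},  # hypothetical job parameter
    )
    return response["JobRunId"]
```

The returned run id can then be polled with `get_job_run` to check whether the batch succeeded.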

This course is designed to equip you with essential practical skills in creating both streaming and batch pipelines on AWS, and mastering the important tools necessary to work with cloud-based data.

About the Author: Andreas Kretz


I am a senior data engineer and trainer, a tech enthusiast, and a father. For more than ten years, I have been passionate about Data Engineering. Initially, I became a self-taught data engineer and then led a team of data engineers at a large company. When I realized the great demand for education in this field, I followed my passion and founded my own Data Engineering Academy. Since then, I have helped over 2,000 students achieve their goals.

Watch Online: 58 lessons

All Course Lessons (58)
1. Important: Before you start! (00:31, free demo)
2. Introduction (02:22)
3. Data Engineering (04:16)
4. Data Science Platform (05:21)
5. Data Types You Encounter (03:04)
6. What Is A Good Dataset (02:55)
7. The Dataset We Use (03:17)
8. Defining The Purpose (06:28)
9. Relational Storage Possibilities (03:47)
10. NoSQL Storage Possibilities (06:29)
11. Selecting The Tools (03:50)
12. Client (03:06)
13. Connect (01:19)
14. Buffer (01:30)
15. Process (02:43)
16. Store (03:42)
17. Visualize (03:02)
18. Data Ingestion Pipeline (03:01)
19. Stream To Raw Storage Pipeline (02:20)
20. Stream To DynamoDB Pipeline (03:10)
21. Visualization API Pipeline (02:57)
22. Visualization Redshift Data Warehouse Pipeline (05:30)
23. Batch Processing Pipeline (03:20)
24. Create An AWS Account (01:59)
25. Things To Keep In Mind (02:46)
26. IAM Identity & Access Management (04:08)
27. Logging (02:23)
28. AWS Python API Boto3 (02:58)
29. Development Environment (04:03)
30. Create Lambda for API (02:34)
31. Create API Gateway (08:31)
32. Setup Kinesis (01:39)
33. Setup IAM for API (05:01)
34. Create Ingestion Pipeline (Code) (06:10)
35. Create Script to Send Data (05:47)
36. Test The Pipeline (04:54)
37. Setup S3 Bucket (03:43)
38. Configure IAM For S3 (03:22)
39. Create Lambda For S3 Insert (07:17)
40. Test The Pipeline (04:02)
41. Setup DynamoDB (09:01)
42. Setup IAM For DynamoDB Stream (03:37)
43. Create DynamoDB Lambda (09:21)
44. Create API & Lambda For Access (06:11)
45. Test The API (04:48)
46. Setup Redshift Data Warehouse (08:09)
47. Security Group For Firehose (03:13)
48. Create Redshift Tables (05:52)
49. S3 Bucket & jsonpaths.json (03:03)
50. Configure Firehose (07:59)
51. Debug Redshift Streaming (07:44)
52. Bug-fixing (05:59)
53. Power BI (12:17)
54. AWS Glue Basics (05:15)
55. Glue Crawlers (13:10)
56. Glue Jobs (13:44)
57. Redshift Insert & Debugging (07:17)
58. What We Achieved & Improvements (10:41)