
PyTorch for Deep Learning

52h 27s
English
Paid

Course description

Learn PyTorch from scratch! This PyTorch course is your step-by-step guide to developing your own deep learning models using PyTorch. You'll learn Deep Learning with PyTorch by building a massive 3-part real-world milestone project. By the end, you'll have the skills and portfolio to get hired as a Deep Learning Engineer.


All Course Lessons (348)

# | Lesson Title | Duration
1
PyTorch for Deep Learning Demo
03:34
2
Course Welcome and What Is Deep Learning
05:54
3
Why Use Machine Learning or Deep Learning
03:34
4
The Number 1 Rule of Machine Learning and What Is Deep Learning Good For
05:40
5
Machine Learning vs. Deep Learning
06:07
6
Anatomy of Neural Networks
09:22
7
Different Types of Learning Paradigms
04:31
8
What Can Deep Learning Be Used For
06:22
9
What Is and Why PyTorch
10:13
10
What Are Tensors
04:16
11
What We Are Going To Cover With PyTorch
06:06
12
How To and How Not To Approach This Course
05:10
13
Important Resources For This Course
05:22
14
Getting Setup to Write PyTorch Code
07:40
15
Introduction to PyTorch Tensors
13:26
16
Creating Random Tensors in PyTorch
09:59
17
Creating Tensors With Zeros and Ones in PyTorch
03:09
18
Creating a Tensor Range and Tensors Like Other Tensors
05:18
19
Dealing With Tensor Data Types
09:25
20
Getting Tensor Attributes
08:23
21
Manipulating Tensors (Tensor Operations)
06:00
22
Matrix Multiplication (Part 1)
09:35
23
Matrix Multiplication (Part 2): The Two Main Rules of Matrix Multiplication
07:52
24
Matrix Multiplication (Part 3): Dealing With Tensor Shape Errors
12:58
25
Finding the Min, Max, Mean and Sum of Tensors (Tensor Aggregation)
06:10
26
Finding The Positional Min and Max of Tensors
03:17
27
Reshaping, Viewing and Stacking Tensors
13:41
28
Squeezing, Unsqueezing and Permuting Tensors
11:56
29
Selecting Data From Tensors (Indexing)
09:32
30
PyTorch Tensors and NumPy
09:09
31
PyTorch Reproducibility (Taking the Random Out of Random)
10:47
32
Different Ways of Accessing a GPU in PyTorch
11:51
33
Setting up Device Agnostic Code and Putting Tensors On and Off the GPU
07:44
34
PyTorch Fundamentals: Exercises and Extra-Curriculum
04:50
35
Introduction and Where You Can Get Help
02:46
36
Getting Setup and What We Are Covering
07:15
37
Creating a Simple Dataset Using the Linear Regression Formula
09:42
38
Splitting Our Data Into Training and Test Sets
08:21
39
Building a Function to Visualize Our Data
07:46
40
Creating Our First PyTorch Model for Linear Regression
14:10
41
Breaking Down What's Happening in Our PyTorch Linear Regression Model
06:11
42
Discussing Some of the Most Important PyTorch Model Building Classes
06:27
43
Checking Out the Internals of Our PyTorch Model
09:51
44
Making Predictions With Our Random Model Using Inference Mode
11:13
45
Training a Model Intuition (The Things We Need)
08:15
46
Setting Up an Optimizer and a Loss Function
12:52
47
PyTorch Training Loop Steps and Intuition
13:54
48
Writing Code for a PyTorch Training Loop
08:47
49
Reviewing the Steps in a Training Loop Step by Step
14:58
50
Running Our Training Loop Epoch by Epoch and Seeing What Happens
09:26
51
Writing Testing Loop Code and Discussing What's Happening Step by Step
11:38
52
Reviewing What Happens in a Testing Loop Step by Step
14:43
53
Writing Code to Save a PyTorch Model
13:46
54
Writing Code to Load a PyTorch Model
08:45
55
Setting Up to Practice Everything We Have Done Using Device-Agnostic Code
06:03
56
Putting Everything Together (Part 1): Data
06:09
57
Putting Everything Together (Part 2): Building a Model
10:08
58
Putting Everything Together (Part 3): Training a Model
12:41
59
Putting Everything Together (Part 4): Making Predictions With a Trained Model
05:18
60
Putting Everything Together (Part 5): Saving and Loading a Trained Model
09:11
61
Exercise: Imposter Syndrome
02:57
62
PyTorch Workflow: Exercises and Extra-Curriculum
03:58
63
Introduction to Machine Learning Classification With PyTorch
09:42
64
Classification Problem Example: Input and Output Shapes
09:08
65
Typical Architecture of a Classification Neural Network (Overview)
06:32
66
Making a Toy Classification Dataset
12:19
67
Turning Our Data into Tensors and Making a Training and Test Split
11:56
68
Laying Out Steps for Modelling and Setting Up Device-Agnostic Code
04:20
69
Coding a Small Neural Network to Handle Our Classification Data
10:58
70
Making Our Neural Network Visual
06:58
71
Recreating and Exploring the Insides of Our Model Using nn.Sequential
13:18
72
Setting Up a Loss Function, Optimizer and Evaluation Function for Our Classification Network
14:51
73
Going from Model Logits to Prediction Probabilities to Prediction Labels
16:08
74
Coding a Training and Testing Optimization Loop for Our Classification Model
15:27
75
Writing Code to Download a Helper Function to Visualize Our Model's Predictions
14:14
76
Discussing Options to Improve a Model
08:03
77
Creating a New Model with More Layers and Hidden Units
09:07
78
Writing Training and Testing Code to See if Our New and Upgraded Model Performs Better
12:46
79
Creating a Straight Line Dataset to See if Our Model is Learning Anything
08:08
80
Building and Training a Model to Fit on Straight Line Data
10:02
81
Evaluating Our Model's Predictions on Straight Line Data
05:24
82
Introducing the Missing Piece for Our Classification Model: Non-Linearity
10:01
83
Building Our First Neural Network with Non-Linearity
10:26
84
Writing Training and Testing Code for Our First Non-Linear Model
15:13
85
Making Predictions with and Evaluating Our First Non-Linear Model
05:48
86
Replicating Non-Linear Activation Functions with Pure PyTorch
09:35
87
Putting It All Together (Part 1): Building a Multiclass Dataset
11:25
88
Creating a Multi-Class Classification Model with PyTorch
12:28
89
Setting Up a Loss Function and Optimizer for Our Multi-Class Model
06:41
90
Going from Logits to Prediction Probabilities to Prediction Labels with a Multi-Class Model
11:03
91
Training a Multi-Class Classification Model and Troubleshooting Code on the Fly
16:18
92
Making Predictions with and Evaluating Our Multi-Class Classification Model
08:00
93
Discussing a Few More Classification Metrics
09:18
94
PyTorch Classification: Exercises and Extra-Curriculum
02:59
95
What Is a Computer Vision Problem and What We Are Going to Cover
11:48
96
Computer Vision Input and Output Shapes
10:09
97
What Is a Convolutional Neural Network (CNN)
05:03
98
Discussing and Importing the Base Computer Vision Libraries in PyTorch
09:20
99
Getting a Computer Vision Dataset and Checking Out Its Input and Output Shapes
14:31
100
Visualizing Random Samples of Data
09:52
101
DataLoader Overview: Understanding Mini-Batches
07:18
102
Turning Our Datasets Into DataLoaders
12:24
103
Model 0: Creating a Baseline Model with Two Linear Layers
14:39
104
Creating a Loss Function and an Optimizer for Model 0
10:30
105
Creating a Function to Time Our Modelling Code
05:35
106
Writing Training and Testing Loops for Our Batched Data
21:26
107
Writing an Evaluation Function to Get Our Model's Results
12:59
108
Setup Device-Agnostic Code for Running Experiments on the GPU
03:47
109
Model 1: Creating a Model with Non-Linear Functions
09:04
110
Model 1: Creating a Loss Function and Optimizer
03:05
111
Turning Our Training Loop into a Function
08:29
112
Turning Our Testing Loop into a Function
06:36
113
Training and Testing Model 1 with Our Training and Testing Functions
11:53
114
Getting a Results Dictionary for Model 1
04:09
115
Model 2: Convolutional Neural Networks High Level Overview
08:25
116
Model 2: Coding Our First Convolutional Neural Network with PyTorch
19:49
117
Model 2: Breaking Down Conv2D Step by Step
15:00
118
Model 2: Breaking Down MaxPool2D Step by Step
15:49
119
Model 2: Using a Trick to Find the Input and Output Shapes of Each of Our Layers
13:46
120
Model 2: Setting Up a Loss Function and Optimizer
02:39
121
Model 2: Training Our First CNN and Evaluating Its Results
07:55
122
Comparing the Results of Our Modelling Experiments
07:24
123
Making Predictions on Random Test Samples with the Best Trained Model
11:40
124
Plotting Our Best Model Predictions on Random Test Samples and Evaluating Them
08:11
125
Making Predictions Across the Whole Test Dataset and Importing Libraries to Plot a Confusion Matrix
15:21
126
Evaluating Our Best Model's Predictions with a Confusion Matrix
06:55
127
Saving and Loading Our Best Performing Model
11:28
128
Recapping What We Have Covered Plus Exercises and Extra-Curriculum
06:02
129
What Is a Custom Dataset and What We Are Going to Cover
09:54
130
Importing PyTorch and Setting Up Device-Agnostic Code
05:55
131
Downloading a Custom Dataset of Pizza, Steak and Sushi Images
14:05
132
Becoming One With the Data (Part 1): Exploring the Data Format
08:42
133
Becoming One With the Data (Part 2): Visualizing a Random Image
11:41
134
Becoming One With the Data (Part 3): Visualizing a Random Image with Matplotlib
04:48
135
Transforming Data (Part 1): Turning Images Into Tensors
08:54
136
Transforming Data (Part 2): Visualizing Transformed Images
11:31
137
Loading All of Our Images and Turning Them Into Tensors With ImageFolder
09:19
138
Visualizing a Loaded Image From the Train Dataset
07:19
139
Turning Our Image Datasets into PyTorch DataLoaders
09:04
140
Creating a Custom Dataset Class in PyTorch High Level Overview
08:01
141
Creating a Helper Function to Get Class Names From a Directory
09:07
142
Writing a PyTorch Custom Dataset Class from Scratch to Load Our Images
17:47
143
Comparing Our Custom Dataset Class to the Original ImageFolder Class
07:14
144
Writing a Helper Function to Visualize Random Images from Our Custom Dataset
14:19
145
Turning Our Custom Datasets Into DataLoaders
07:00
146
Exploring State of the Art Data Augmentation With Torchvision Transforms
14:24
147
Building a Baseline Model (Part 1): Loading and Transforming Data
08:16
148
Building a Baseline Model (Part 2): Replicating Tiny VGG from Scratch
11:25
149
Building a Baseline Model (Part 3): Doing a Forward Pass to Test Our Model Shapes
08:10
150
Using the Torchinfo Package to Get a Summary of Our Model
06:39
151
Creating Training and Testing Loop Functions
13:04
152
Creating a Train Function to Train and Evaluate Our Models
10:15
153
Training and Evaluating Model 0 With Our Training Functions
09:54
154
Plotting the Loss Curves of Model 0
09:03
155
Discussing the Balance Between Overfitting and Underfitting and How to Deal With Each
14:14
156
Creating Augmented Training Datasets and DataLoaders for Model 1
11:04
157
Constructing and Training Model 1
07:11
158
Plotting the Loss Curves of Model 1
03:23
159
Plotting the Loss Curves of All of Our Models Against Each Other
10:56
160
Predicting on Custom Data (Part 1): Downloading an Image
05:33
161
Predicting on Custom Data (Part 2): Loading In a Custom Image With PyTorch
07:01
162
Predicting on Custom Data (Part 3): Getting Our Custom Image Into the Right Format
14:08
163
Predicting on Custom Data (Part 4): Turning Our Model's Raw Outputs Into Prediction Labels
04:25
164
Predicting on Custom Data (Part 5): Putting It All Together
12:48
165
Summary of What We Have Covered Plus Exercises and Extra-Curriculum
06:05
166
What Is Going Modular and What We Are Going to Cover
11:35
167
Going Modular Notebook (Part 1): Running It End to End
07:41
168
Downloading a Dataset
04:51
169
Writing the Outline for Our First Python Script to Setup the Data
13:51
170
Creating a Python Script to Create Our PyTorch DataLoaders
10:36
171
Turning Our Model Building Code into a Python Script
09:19
172
Turning Our Model Training Code into a Python Script
06:17
173
Turning Our Utility Function to Save a Model into a Python Script
06:08
174
Creating a Training Script to Train Our Model in One Line of Code
15:47
175
Going Modular: Summary, Exercises and Extra-Curriculum
06:00
176
Introduction: What Is Transfer Learning and Why Use It
10:06
177
Where Can You Find Pretrained Models and What We Are Going to Cover
05:13
178
Installing the Latest Versions of Torch and Torchvision
08:06
179
Downloading Our Previously Written Code from Going Modular
06:42
180
Downloading Pizza, Steak, Sushi Image Data from Github
08:01
181
Turning Our Data into DataLoaders with Manually Created Transforms
14:41
182
Turning Our Data into DataLoaders with Automatically Created Transforms
13:07
183
Which Pretrained Model Should You Use
12:16
184
Setting Up a Pretrained Model with Torchvision
10:57
185
Different Kinds of Transfer Learning
07:12
186
Getting a Summary of the Different Layers of Our Model
06:50
187
Freezing the Base Layers of Our Model and Updating the Classifier Head
13:27
188
Training Our First Transfer Learning Feature Extractor Model
07:55
189
Plotting the Loss Curves of Our Transfer Learning Model
06:27
190
Outlining the Steps to Make Predictions on the Test Images
07:58
191
Creating a Function to Predict On and Plot Images
10:01
192
Making and Plotting Predictions on Test Images
07:24
193
Making a Prediction on a Custom Image
06:22
194
Main Takeaways, Exercises and Extra Curriculum
03:22
195
What Is Experiment Tracking and Why Track Experiments
07:07
196
Getting Setup by Importing Torch Libraries and Going Modular Code
08:14
197
Creating a Function to Download Data
10:24
198
Turning Our Data into DataLoaders Using Manual Transforms
08:31
199
Turning Our Data into DataLoaders Using Automatic Transforms
07:48
200
Preparing a Pretrained Model for Our Own Problem
10:29
201
Setting Up a Way to Track a Single Model Experiment with TensorBoard
13:36
202
Training a Single Model and Saving the Results to TensorBoard
04:39
203
Exploring Our Single Model's Results with TensorBoard
10:18
204
Creating a Function to Create SummaryWriter Instances
10:45
205
Adapting Our Train Function to Be Able to Track Multiple Experiments
04:58
206
What Experiments Should You Try
06:00
207
Discussing the Experiments We Are Going to Try
06:02
208
Downloading Datasets for Our Modelling Experiments
06:32
209
Turning Our Datasets into DataLoaders Ready for Experimentation
08:29
210
Creating Functions to Prepare Our Feature Extractor Models
15:55
211
Coding Out the Steps to Run a Series of Modelling Experiments
14:28
212
Running Eight Different Modelling Experiments in 5 Minutes
03:51
213
Viewing Our Modelling Experiments in TensorBoard
13:39
214
Loading In the Best Model and Making Predictions on Random Images from the Test Set
10:33
215
Making a Prediction on Our Own Custom Image with the Best Model
03:45
216
Main Takeaways, Exercises and Extra Curriculum
03:57
217
What Is a Machine Learning Research Paper?
07:35
218
Why Replicate a Machine Learning Research Paper?
03:14
219
Where Can You Find Machine Learning Research Papers and Code?
08:19
220
What We Are Going to Cover
08:22
221
Getting Setup for Coding in Google Colab
08:22
222
Downloading Data for Food Vision Mini
04:03
223
Turning Our Food Vision Mini Images into PyTorch DataLoaders
09:48
224
Visualizing a Single Image
03:46
225
Replicating a Vision Transformer - High Level Overview
09:54
226
Breaking Down Figure 1 of the ViT Paper
11:13
227
Breaking Down the Four Equations Overview and a Trick for Reading Papers
10:56
228
Breaking Down Equation 1
08:15
229
Breaking Down Equations 2 and 3
10:04
230
Breaking Down Equation 4
07:28
231
Breaking Down Table 1
11:06
232
Calculating the Input and Output Shape of the Embedding Layer by Hand
15:42
233
Turning a Single Image into Patches (Part 1: Patching the Top Row)
15:04
234
Turning a Single Image into Patches (Part 2: Patching the Entire Image)
12:34
235
Creating Patch Embeddings with a Convolutional Layer
13:34
236
Exploring the Outputs of Our Convolutional Patch Embedding Layer
12:55
237
Flattening Our Convolutional Feature Maps into a Sequence of Patch Embeddings
10:00
238
Visualizing a Single Sequence Vector of Patch Embeddings
05:04
239
Creating the Patch Embedding Layer with PyTorch
17:02
240
Creating the Class Token Embedding
13:25
241
Creating the Class Token Embedding - Less Birds
13:25
242
Creating the Position Embedding
11:26
243
Equation 1: Putting It All Together
13:26
244
Equation 2: Multihead Attention Overview
14:31
245
Equation 2: Layernorm Overview
09:04
246
Turning Equation 2 into Code
14:34
247
Checking the Inputs and Outputs of Equation
05:41
248
Equation 3: Replication Overview
09:12
249
Turning Equation 3 into Code
11:26
250
Transformer Encoder Overview
08:51
251
Combining Equation 2 and 3 to Create the Transformer Encoder
09:17
252
Creating a Transformer Encoder Layer with an In-Built PyTorch Layer
15:55
253
Bringing Our Own Vision Transformer to Life - Part 1: Gathering the Pieces of the Puzzle
18:20
254
Bringing Our Own Vision Transformer to Life - Part 2: Putting Together the Forward Method
10:42
255
Getting a Visual Summary of Our Custom Vision Transformer
07:14
256
Creating a Loss Function and Optimizer from the ViT Paper
11:27
257
Training Our Custom ViT on Food Vision Mini
04:30
258
Discussing What Our Training Setup Is Missing
09:09
259
Plotting a Loss Curve for Our ViT Model
06:14
260
Getting a Pretrained Vision Transformer from Torchvision and Setting It Up
14:38
261
Preparing Data to Be Used with a Pretrained ViT
05:54
262
Training a Pretrained ViT Feature Extractor Model for Food Vision Mini
07:16
263
Saving Our Pretrained ViT Model to File and Inspecting Its Size
05:14
264
Discussing the Trade-Offs of Using a Larger Model for Deployment
03:47
265
Making Predictions on a Custom Image with Our Pretrained ViT
03:31
266
PyTorch Paper Replicating: Main Takeaways, Exercises and Extra-Curriculum
06:51
267
What Is Machine Learning Model Deployment and Why Deploy a Machine Learning Model
09:36
268
Three Questions to Ask for Machine Learning Model Deployment
07:14
269
Where Is My Model Going to Go?
13:35
270
How Is My Model Going to Function?
08:00
271
Some Tools and Places to Deploy Machine Learning Models
05:50
272
What We Are Going to Cover
04:02
273
Getting Setup to Code
06:16
274
Downloading a Dataset for Food Vision Mini
03:24
275
Outlining Our Food Vision Mini Deployment Goals and Modelling Experiments
08:00
276
Creating an EffNetB2 Feature Extractor Model
09:46
277
Creating a Function to Make an EffNetB2 Feature Extractor Model and Transforms
06:30
278
Creating DataLoaders for EffNetB2
03:32
279
Training Our EffNetB2 Feature Extractor and Inspecting the Loss Curves
09:16
280
Saving Our EffNetB2 Model to File
03:25
281
Getting the Size of Our EffNetB2 Model in Megabytes
05:52
282
Collecting Important Statistics and Performance Metrics for Our EffNetB2 Model
06:35
283
Creating a Vision Transformer Feature Extractor Model
07:52
284
Creating DataLoaders for Our ViT Feature Extractor Model
02:31
285
Training Our ViT Feature Extractor Model and Inspecting Its Loss Curves
06:20
286
Saving Our ViT Feature Extractor and Inspecting Its Size
05:09
287
Collecting Stats About Our ViT Feature Extractor
05:52
288
Outlining the Steps for Making and Timing Predictions for Our Models
11:16
289
Creating a Function to Make and Time Predictions with Our Models
16:21
290
Making and Timing Predictions with EffNetB2
10:44
291
Making and Timing Predictions with ViT
07:35
292
Comparing EffNetB2 and ViT Model Statistics
11:32
293
Visualizing the Performance vs Speed Trade-off
15:55
294
Gradio Overview and Installation
08:40
295
Gradio Function Outline
08:50
296
Creating a Predict Function to Map Our Food Vision Mini Inputs to Outputs
09:52
297
Creating a List of Examples to Pass to Our Gradio Demo
05:27
298
Bringing Food Vision Mini to Life in a Live Web Application
12:13
299
Getting Ready to Deploy Our App: Hugging Face Spaces Overview
06:27
300
Outlining the File Structure of Our Deployed App
08:12
301
Creating a Food Vision Mini Demo Directory to House Our App Files
04:12
302
Creating an Examples Directory with Example Food Vision Mini Images
09:14
303
Writing Code to Move Our Saved EffNetB2 Model File
07:43
304
Turning Our EffNetB2 Model Creation Function Into a Python Script
04:02
305
Turning Our Food Vision Mini Demo App Into a Python Script
13:28
306
Creating a Requirements File for Our Food Vision Mini App
04:12
307
Downloading Our Food Vision Mini App Files from Google Colab
11:31
308
Uploading Our Food Vision Mini App to Hugging Face Spaces Programmatically
13:37
309
Running Food Vision Mini on Hugging Face Spaces and Trying It Out
07:45
310
Food Vision Big Project Outline
04:18
311
Preparing an EffNetB2 Feature Extractor Model for Food Vision Big
09:39
312
Downloading the Food 101 Dataset
07:46
313
Creating a Function to Split Our Food 101 Dataset into Smaller Portions
13:37
314
Turning Our Food 101 Datasets into DataLoaders
07:24
315
Training Food Vision Big: Our Biggest Model Yet!
20:16
316
Outlining the File Structure for Our Food Vision Big
05:49
317
Downloading an Example Image and Moving Our Food Vision Big Model File
03:34
318
Saving Food 101 Class Names to a Text File and Reading Them Back In
06:57
319
Turning Our EffNetB2 Feature Extractor Creation Function into a Python Script
02:21
320
Creating an App Script for Our Food Vision Big Model Gradio Demo
10:42
321
Zipping and Downloading Our Food Vision Big App Files
03:46
322
Deploying Food Vision Big to Hugging Face Spaces
13:35
323
PyTorch Model Deployment: Main Takeaways, Extra-Curriculum and Exercises
06:14
324
Introduction to PyTorch 2.0
06:02
325
What We Are Going to Cover and PyTorch 2 Reference Materials
01:22
326
Getting Started with PyTorch 2.0 in Google Colab
04:20
327
PyTorch 2.0 - 30 Second Intro
03:21
328
Getting Setup for PyTorch 2.0
02:23
329
Getting Info from Our GPUs and Seeing if They're Capable of Using PyTorch 2.0
06:50
330
Setting the Default Device in PyTorch 2.0
09:41
331
Discussing the Experiments We Are Going to Run for PyTorch 2.0
06:43
332
Creating a Function to Setup Our Model and Transforms
10:18
333
Discussing How to Get Better Relative Speedups for Training Models
08:24
334
Setting the Batch Size and Data Size Programmatically
07:16
335
Getting More Speedups with TensorFloat-32
09:54
336
Downloading the CIFAR10 Dataset
07:01
337
Creating Training and Test DataLoaders
07:39
338
Preparing Training and Testing Loops with Timing Steps
04:59
339
Experiment 1 - Single Run without Torch Compile
08:23
340
Experiment 2 - Single Run with Torch Compile
10:39
341
Comparing the Results of Experiments 1 and 2
11:20
342
Saving the Results of Experiments 1 and 2
04:40
343
Preparing Functions for Experiments 3 and 4
12:42
344
Experiment 3 - Training a Non-Compiled Model for Multiple Runs
12:45
345
Experiment 4 - Training a Compiled Model for Multiple Runs
09:58
346
Comparing the Results of Experiments 3 and 4
05:24
347
Potential Extensions and Resources to Learn More
05:51
348
Thank You!
01:18
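The training-loop lessons above (#45-52) revolve around the standard PyTorch fit pattern: forward pass, compute the loss, zero the gradients, backpropagate, then take an optimizer step. As a taste of what the course builds toward, here is a minimal sketch on a toy straight-line dataset; the dataset, model, learning rate and epoch count are illustrative choices, not the course's actual code:

```python
import torch
from torch import nn

# Toy linear-regression data in the spirit of the course's
# "known weight and bias" examples (illustrative values).
X = torch.arange(0, 1, 0.02).unsqueeze(dim=1)  # shape: [50, 1]
y = 0.7 * X + 0.3

model = nn.Linear(in_features=1, out_features=1)
loss_fn = nn.L1Loss()
optimizer = torch.optim.SGD(params=model.parameters(), lr=0.01)

for epoch in range(100):
    model.train()
    y_pred = model(X)           # 1. forward pass
    loss = loss_fn(y_pred, y)   # 2. calculate the loss
    optimizer.zero_grad()       # 3. zero accumulated gradients
    loss.backward()             # 4. backpropagation
    optimizer.step()            # 5. gradient descent step

model.eval()
with torch.inference_mode():    # no gradient tracking for evaluation
    test_pred = model(X)
```

The same five numbered steps recur throughout the course; only the model, the data and the loss function change from section to section.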


