Machine Learning with Hugging Face Bootcamp: Zero to Mastery
Course description
Learn to apply machine learning using the Hugging Face ecosystem — from scratch to professional level!
This practice-oriented course guides you through the entire journey, from training models to deploying them. We start with the basics, gradually build up to real machine learning engineering skills, and, most importantly, enjoy the process!
Why is this Machine Learning course with Hugging Face so awesome?
Because it is a thoroughly comprehensive, hands-on online course in which you master modern machine learning tools through real projects. Theory is kept to a minimum and practice to a maximum: you will not only understand how everything works but also be able to apply that knowledge to real-world tasks.
Hugging Face is a sort of "homepage" for artificial intelligence: here companies like OpenAI, Google, and Apple share their open models, and engineers and researchers create their own projects and portfolios.
What you will learn:
- Hugging Face Transformers — a powerful library for working with machine learning models in text, images, audio, video, and multimodal tasks.
- Hugging Face Datasets — a convenient tool for accessing datasets in NLP, Computer Vision, and Audio.
- Hugging Face Hub — an online platform for collaboration, model sharing, and project publication.
- And much more! (A short sketch of these tools in action follows this list.)
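To give a feel for how these pieces fit together, here is a minimal sketch that loads a public dataset with Hugging Face Datasets and runs a ready-made model from the Hub through a Transformers pipeline. It assumes the `datasets` and `transformers` packages are installed, and the dataset name (`imdb`) is an illustrative choice rather than the course's own material.

```python
from datasets import load_dataset
from transformers import pipeline

# Load a small slice of a public dataset from the Hugging Face Hub
# ("imdb" is just an illustrative choice, not the course dataset).
dataset = load_dataset("imdb", split="train[:100]")

# A ready-made text-classification pipeline; the default model is
# downloaded from the Hub automatically on first use.
classifier = pipeline("text-classification")

# Classify the first review, truncating long text to the model's limit.
print(classifier(dataset[0]["text"], truncation=True))
```

The same `pipeline` interface covers many other tasks (image classification, object detection, speech recognition, and more), which is one reason the ecosystem is so convenient for quick experiments.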
The entire course is built around real projects: you will write code, train, and fine-tune real ML models, working on everything from text classification and object detection to large language and multimodal models.
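As a rough preview of the training workflow the lessons below walk through (text classification with a pretrained model, `TrainingArguments`, and `Trainer`), fine-tuning might look something like the following sketch. The checkpoint, dataset, and output folder names are illustrative assumptions, not the course's exact choices.

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Illustrative dataset and checkpoint (the course uses its own data and models).
raw = load_dataset("imdb", split="train").shuffle(seed=42).select(range(1000))
dataset = raw.train_test_split(test_size=0.2)

checkpoint = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

def tokenize(batch):
    # Turn raw text into token IDs; padding is handled by the Trainer's
    # default data collator at batch time.
    return tokenizer(batch["text"], truncation=True)

tokenized = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="demo_text_classifier",  # hypothetical folder name
    num_train_epochs=1,
    per_device_train_batch_size=8,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],
    tokenizer=tokenizer,
)
trainer.train()
# trainer.push_to_hub()  # optionally publish the fine-tuned model to the Hub
```

Swapping in a different dataset, model head, or task (for example object detection, as in the second project) keeps the same overall structure: load data, preprocess it, configure `TrainingArguments`, and hand everything to `Trainer`.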
Ready to dive into the world of machine learning with Hugging Face and become an AI master? Then welcome to the course!
Watch Online
All Course Lessons (106)
| # | Lesson Title | Duration | Access |
|---|---|---|---|
| 1 | Machine Learning with Hugging Face Bootcamp: Zero to Mastery Demo | 01:49 | |
| 2 | Overview | 05:03 | |
| 3 | Introduction to Text Classification | 05:44 | |
| 4 | What We're Going To Build! | 07:22 | |
| 5 | Getting Set Up: Adding Hugging Face Tokens to Google Colab | 05:53 | |
| 6 | Getting Set Up: Importing Necessary Libraries to Google Colab | 09:36 | |
| 7 | Downloading a Text Classification Dataset from Hugging Face Datasets | 16:01 | |
| 8 | Preparing Text Data for Use with a Model - Part 1: Turning Our Labels into Numbers | 12:49 | |
| 9 | Preparing Text Data for Use with a Model - Part 2: Creating Train and Test Sets | 06:19 | |
| 10 | Preparing Text Data for Use with a Model - Part 3: Getting a Tokenizer | 12:54 | |
| 11 | Preparing Text Data for Use with a Model - Part 4: Exploring Our Tokenizer | 10:27 | |
| 12 | Preparing Text Data for Use with a Model - Part 5: Creating a Function to Tokenize Our Data | 17:58 | |
| 13 | Setting Up an Evaluation Metric (to measure how well our model performs) | 08:54 | |
| 14 | Introduction to Transfer Learning (a powerful technique to get good results quickly) | 07:11 | |
| 15 | Model Training - Part 1: Setting Up a Pretrained Model from the Hugging Face Hub | 12:20 | |
| 16 | Model Training - Part 2: Counting the Parameters in Our Model | 12:27 | |
| 17 | Model Training - Part 3: Creating a Folder to Save Our Model | 03:54 | |
| 18 | Model Training - Part 4: Setting Up Our Training Arguments with TrainingArguments | 15:00 | |
| 19 | Model Training - Part 5: Setting Up an Instance of Trainer with Hugging Face Transformers | 05:06 | |
| 20 | Model Training - Part 6: Training Our Model and Fixing Errors Along the Way | 13:35 | |
| 21 | Model Training - Part 7: Inspecting Our Model's Loss Curves | 14:40 | |
| 22 | Model Training - Part 8: Uploading Our Model to the Hugging Face Hub | 08:02 | |
| 23 | Making Predictions on the Test Data with Our Trained Model | 05:59 | |
| 24 | Turning Our Predictions into Prediction Probabilities with PyTorch | 12:49 | |
| 25 | Sorting Our Model's Predictions by Their Probability | 05:11 | |
| 26 | Performing Inference - Part 1: Discussing Our Options | 09:41 | |
| 27 | Performing Inference - Part 2: Using a Transformers Pipeline (one sample at a time) | 10:02 | |
| 28 | Performing Inference - Part 3: Using a Transformers Pipeline on Multiple Samples at a Time (Batching) | 06:39 | |
| 29 | Performing Inference - Part 4: Running Speed Tests to Compare One at a Time vs. Batched Predictions | 10:34 | |
| 30 | Performing Inference - Part 5: Performing Inference with PyTorch | 12:07 | |
| 31 | OPTIONAL - Putting It All Together: From Data Loading, to Model Training, to Making Predictions on Custom Data | 34:29 | |
| 32 | Turning Our Model into a Demo - Part 1: Gradio Overview | 03:48 | |
| 33 | Turning Our Model into a Demo - Part 2: Building a Function to Map Inputs to Outputs | 07:08 | |
| 34 | Turning Our Model into a Demo - Part 3: Getting Our Gradio Demo Running Locally | 06:47 | |
| 35 | Making Our Demo Publicly Accessible - Part 1: Introduction to Hugging Face Spaces and Creating a Demos Directory | 08:02 | |
| 36 | Making Our Demo Publicly Accessible - Part 2: Creating an App File | 12:15 | |
| 37 | Making Our Demo Publicly Accessible - Part 3: Creating a README File | 07:08 | |
| 38 | Making Our Demo Publicly Accessible - Part 4: Making a Requirements File | 03:34 | |
| 39 | Making Our Demo Publicly Accessible - Part 5: Uploading Our Demo to Hugging Face Spaces and Making it Publicly Available | 18:44 | |
| 40 | Summary, Exercises and Extensions | 05:56 | |
| 41 | Introduction | 10:04 | |
| 42 | Setting Up Google Colab with Hugging Face Tokens | 05:52 | |
| 43 | Installing Necessary Dependencies | 03:44 | |
| 44 | Getting an Object Detection Dataset | 07:38 | |
| 45 | Inspecting the Features of Our Dataset | 06:24 | |
| 46 | Creating a Colour Palette to Visualize Our Classes | 09:36 | |
| 47 | Creating a Helper Function to Halve Our Image Sizes | 04:25 | |
| 48 | Creating a Helper Function to Halve Our Box Sizes | 06:02 | |
| 49 | Testing our Helper Functions | 04:33 | |
| 50 | Outlining the Steps to Draw Boxes on an Image | 06:27 | |
| 51 | Plotting Bounding Boxes on a Single Image Step by Step | 19:05 | |
| 52 | Different Bounding Box Formats | 08:18 | |
| 53 | Getting an Object Detection Model | 06:16 | |
| 54 | Transfer Learning Overview | 06:09 | |
| 55 | Downloading our Model from the Hugging Face Hub and Trying it Out | 09:27 | |
| 56 | Inspecting the Layers of Our Model | 06:54 | |
| 57 | Counting the Number of Parameters in Our Model | 10:55 | |
| 58 | Creating a Function to Build Our Custom Model | 13:16 | |
| 59 | Passing a Single Image Sample Through Our Model - Part 1 | 15:47 | |
| 60 | OPTIONAL: Data Preprocessor Model Workflow | 08:46 | |
| 61 | Loading Our Model's Image Preprocessor and Customizing it for Our Use Case | 20:11 | |
| 62 | Exercise: Imposter Syndrome | 02:57 | |
| 63 | Discussing the Format Our Model Expects Our Annotations In (COCO) | 06:18 | |
| 64 | Creating Dataclasses to Hold the COCO Format | 09:55 | |
| 65 | Creating a Function to Turn Our Annotations into COCO Format | 12:06 | |
| 66 | Preprocessing a Single Image Sample and COCO Formatted Annotations | 07:27 | |
| 67 | Post Processing a Single Output | 12:03 | |
| 68 | Plotting a Single Post Processed Sample onto an Image | 12:45 | |
| 69 | OPTIONAL: Reproducing Our Model's Post Processed Outputs by Hand - Part 1: Overview | 10:45 | |
| 70 | OPTIONAL: Reproducing Our Model's Post Processed Outputs by Hand - Part 2: Replicating Scores by Hand | 28:33 | |
| 71 | OPTIONAL: Reproducing Our Model's Post Processed Outputs by Hand - Part 3: Replicating Labels by Hand | 12:33 | |
| 72 | OPTIONAL: Reproducing Our Model's Post Processed Outputs by Hand - Part 4: Replicating Boxes by Hand Overview | 10:24 | |
| 73 | OPTIONAL: Reproducing Our Model's Post Processed Outputs by Hand - Part 5: Replicating Boxes by Hand Implementation | 17:41 | |
| 74 | OPTIONAL: Reproducing Our Model's Post Processed Outputs by Hand - Part 6: Plotting Our Manual Post Processed Outputs on an Image | 06:44 | |
| 75 | Preparing Our Data at Scale - Part 1: Concept Overview | 09:22 | |
| 76 | Preparing Our Data at Scale - Part 2: Creating Train Validation and Test Splits | 12:14 | |
| 77 | Preparing Our Data at Scale - Part 3: Preprocessing Multiple Samples at a Time Overview | 08:17 | |
| 78 | Preparing our Data at Scale - Part 4: Making a Function to Preprocess Multiple Samples at a Time | 21:38 | |
| 79 | Preparing our Data at Scale - Part 5: Applying Our Preprocessing Function to Our Datasets | 09:38 | |
| 80 | Preparing Our Data at Scale - Part 6: Creating a Data Collation Function | 12:20 | |
| 81 | Training a Custom Model - Part 1: Overview | 07:43 | |
| 82 | Training a Custom Model - Part 2: Creating a Model and Folder to Save Our Model to | 04:12 | |
| 83 | Training a Custom Model - Part 3: Creating TrainingArguments for Our Model Overview | 12:54 | |
| 84 | Training a Custom Model - Part 4: Creating our First TrainingArguments | 11:12 | |
| 85 | Training a Custom Model - Part 5: Finishing Off the TrainingArguments | 12:40 | |
| 86 | Training a Custom Model - Part 6: OPTIONAL - Creating a Custom Optimizer for Different Learning Rates | 16:06 | |
| 87 | Training a Custom Model - Part 7: Creating an Evaluation Function for Our Model Overview | 13:09 | |
| 88 | Training a Custom Model - Part 8: Creating an Evaluation Function for Our Model Targets Processing | 22:50 | |
| 89 | Training a Custom Model - Part 9: Creating an Evaluation Function for Our Model Predictions Processing | 13:53 | |
| 90 | Training a Custom Model - Part 10: Training Our Model with Trainer | 12:54 | |
| 91 | Training a Custom Model - Part 11: Plotting Our Model's Loss Curves | 08:36 | |
| 92 | Evaluating Our Model on the Test Dataset | 11:14 | |
| 93 | Making Predictions on Test Data and Visualizing Them | 24:21 | |
| 94 | Plotting Our Model's Predictions vs. the Ground Truth Images | 12:01 | |
| 95 | Trying Our Model on Images from the Wild | 09:50 | |
| 96 | Uploading Our Trained Model to the Hugging Face Hub | 10:47 | |
| 97 | Turning Our Model into a Demo - Part 1: Gradio and Hugging Face Spaces Overview | 10:11 | |
| 98 | Turning Our Model into a Demo - Part 2: Creating an App File Overview | 07:10 | |
| 99 | Turning Our Model into a Demo - Part 3: Building the Main Function of Our App File | 27:33 | |
| 100 | Turning Our Model into a Demo - Part 4: Finishing Off Our App File and Testing Our Demo | 09:57 | |
| 101 | Turning Our Model into a Demo - Part 5: Creating a Readme and Requirements File | 03:32 | |
| 102 | Turning Our Model into a Demo - Part 6: Getting Example Images for Our Demo | 08:20 | |
| 103 | Turning Our Model into a Demo - Part 7: Uploading Our Demo to the Hugging Face Hub | 17:19 | |
| 104 | Turning Our Model into a Demo - Part 8: Embedding Our Demo into Our Notebook | 03:45 | |
| 105 | Summary, Extensions and Extra-Curriculum | 06:16 | |
| 106 | Thank You! | 01:18 |
Similar courses
Machine Learning with Spark ML
The Real-World ML Tutorial
Let’s Rust
Data Preparation & Cleaning for ML