TensorFlow Developer Certificate in 2023: Zero to Mastery
62h 43m 54s
English
Paid
Course description
Learn TensorFlow. Pass the TensorFlow Developer Certificate Exam. Get Hired as a TensorFlow developer. This course will take you from a TensorFlow beginner to being part of Google's Certification Network.
- Learn to pass Google's official TensorFlow Developer Certificate exam (and add it to your resume)
- Complete access to ALL interactive notebooks and ALL course slides as downloadable guides
- Understand how to integrate Machine Learning into tools and applications
- Build image recognition, object detection, and text recognition algorithms with deep neural networks and convolutional neural networks
- Apply Deep Learning to Time Series Forecasting
- Be recognized as a top candidate for recruiters seeking TensorFlow developers
- Build TensorFlow models using Computer Vision, Convolutional Neural Networks and Natural Language Processing
- Increase your skills in Machine Learning and Deep Learning
- Learn to build all types of Machine Learning Models using the latest TensorFlow 2 (a short sketch follows this list)
- Use real-world images of different shapes and sizes to visualize how an image travels through convolutions, understand how a computer "sees" information, and plot loss and accuracy
- Gain the skills you need to become a TensorFlow Certified Developer
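To make the list above concrete, here is a minimal sketch (not taken from the course materials, and assuming a standard TensorFlow 2 install) of the workflow the first modules build toward: creating tensors with tf.constant() and tf.Variable(), then creating, compiling and fitting a tiny regression model:

```python
import tensorflow as tf

# Creating tensors (lessons 10-11)
scalar = tf.constant(7)             # an immutable tensor
weights = tf.Variable([10.0, 7.0])  # a mutable tensor

# Made-up linear data for a neural network regression model (lessons 30 onwards)
X = tf.constant([[-7.0], [-4.0], [-1.0], [2.0], [5.0], [8.0], [11.0], [14.0]])
y = X + 10.0  # the relationship the model should learn

# The major steps in modelling with TensorFlow: create, compile, fit (lesson 34)
model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
model.compile(loss="mae", optimizer="sgd", metrics=["mae"])
model.fit(X, y, epochs=100, verbose=0)

print(model.predict(tf.constant([[17.0]])))  # true value is 27; results vary
```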
Watch Online
All Course Lessons (377)
| # | Lesson Title | Duration | Access |
|---|---|---|---|
| 1 | Course Outline | 05:22 | Demo |
| 2 | What is deep learning? | 04:39 | |
| 3 | Why use deep learning? | 09:39 | |
| 4 | What are neural networks? | 10:27 | |
| 5 | What is deep learning already being used for? | 08:37 | |
| 6 | What is and why use TensorFlow? | 07:57 | |
| 7 | What is a Tensor? | 03:38 | |
| 8 | What we're going to cover throughout the course | 04:30 | |
| 9 | How to approach this course | 05:34 | |
| 10 | Creating your first tensors with TensorFlow and tf.constant() | 18:46 | |
| 11 | Creating tensors with TensorFlow and tf.Variable() | 07:08 | |
| 12 | Creating random tensors with TensorFlow | 09:41 | |
| 13 | Shuffling the order of tensors | 09:41 | |
| 14 | Creating tensors from NumPy arrays | 11:56 | |
| 15 | Getting information from your tensors (tensor attributes) | 11:58 | |
| 16 | Indexing and expanding tensors | 12:34 | |
| 17 | Manipulating tensors with basic operations | 05:35 | |
| 18 | Matrix multiplication with tensors part 1 | 11:54 | |
| 19 | Matrix multiplication with tensors part 2 | 13:30 | |
| 20 | Matrix multiplication with tensors part 3 | 10:04 | |
| 21 | Changing the datatype of tensors | 06:56 | |
| 22 | Tensor aggregation (finding the min, max, mean & more) | 09:50 | |
| 23 | Tensor troubleshooting example (updating tensor datatypes) | 06:14 | |
| 24 | Finding the positional minimum and maximum of a tensor (argmin and argmax) | 09:32 | |
| 25 | Squeezing a tensor (removing all 1-dimension axes) | 03:00 | |
| 26 | One-hot encoding tensors | 05:47 | |
| 27 | Trying out more tensor math operations | 04:48 | |
| 28 | Exploring TensorFlow and NumPy's compatibility | 05:44 | |
| 29 | Making sure our tensor operations run really fast on GPUs | 10:20 | |
| 30 | Introduction to Neural Network Regression with TensorFlow | 07:34 | |
| 31 | Inputs and outputs of a neural network regression model | 09:00 | |
| 32 | Anatomy and architecture of a neural network regression model | 07:56 | |
| 33 | Creating sample regression data (so we can model it) | 12:47 | |
| 34 | The major steps in modelling with TensorFlow | 20:16 | |
| 35 | Steps in improving a model with TensorFlow part 1 | 06:03 | |
| 36 | Steps in improving a model with TensorFlow part 2 | 09:26 | |
| 37 | Steps in improving a model with TensorFlow part 3 | 12:34 | |
| 38 | Evaluating a TensorFlow model part 1 ("visualise, visualise, visualise") | 07:25 | |
| 39 | Evaluating a TensorFlow model part 2 (the three datasets) | 11:02 | |
| 40 | Evaluating a TensorFlow model part 3 (getting a model summary) | 17:19 | |
| 41 | Evaluating a TensorFlow model part 4 (visualising a model's layers) | 07:15 | |
| 42 | Evaluating a TensorFlow model part 5 (visualising a model's predictions) | 09:17 | |
| 43 | Evaluating a TensorFlow model part 6 (common regression evaluation metrics) | 08:06 | |
| 44 | Evaluating a TensorFlow regression model part 7 (mean absolute error) | 05:53 | |
| 45 | Evaluating a TensorFlow regression model part 8 (mean squared error) | 03:19 | |
| 46 | Setting up TensorFlow modelling experiments part 1 (start with a simple model) | 13:51 | |
| 47 | Setting up TensorFlow modelling experiments part 2 (increasing complexity) | 11:30 | |
| 48 | Comparing and tracking your TensorFlow modelling experiments | 10:21 | |
| 49 | How to save a TensorFlow model | 08:20 | |
| 50 | How to load and use a saved TensorFlow model | 10:16 | |
| 51 | (Optional) How to save and download files from Google Colab | 06:19 | |
| 52 | Putting together what we've learned part 1 (preparing a dataset) | 13:32 | |
| 53 | Putting together what we've learned part 2 (building a regression model) | 13:21 | |
| 54 | Putting together what we've learned part 3 (improving our regression model) | 15:48 | |
| 55 | Preprocessing data with feature scaling part 1 (what is feature scaling?) | 09:35 | |
| 56 | Preprocessing data with feature scaling part 2 (normalising our data) | 10:58 | |
| 57 | Preprocessing data with feature scaling part 3 (fitting a model on scaled data) | 07:41 | |
| 58 | Introduction to neural network classification in TensorFlow | 08:26 | |
| 59 | Example classification problems (and their inputs and outputs) | 06:39 | |
| 60 | Input and output tensors of classification problems | 06:22 | |
| 61 | Typical architecture of neural network classification models with TensorFlow | 09:37 | |
| 62 | Creating and viewing classification data to model | 11:35 | |
| 63 | Checking the input and output shapes of our classification data | 04:39 | |
| 64 | Building a not very good classification model with TensorFlow | 12:11 | |
| 65 | Trying to improve our not very good classification model | 09:14 | |
| 66 | Creating a function to view our model's not so good predictions | 15:09 | |
| 67 | Make our poor classification model work for a regression dataset | 12:19 | |
| 68 | Non-linearity part 1: Straight lines and non-straight lines | 09:39 | |
| 69 | Non-linearity part 2: Building our first neural network with non-linearity | 05:48 | |
| 70 | Non-linearity part 3: Upgrading our non-linear model with more layers | 10:19 | |
| 71 | Non-linearity part 4: Modelling our non-linear data once and for all | 08:38 | |
| 72 | Non-linearity part 5: Replicating non-linear activation functions from scratch | 14:27 | |
| 73 | Getting great results in less time by tweaking the learning rate | 14:48 | |
| 74 | Using the TensorFlow History object to plot a model's loss curves | 06:12 | |
| 75 | Using callbacks to find a model's ideal learning rate | 17:33 | |
| 76 | Training and evaluating a model with an ideal learning rate | 09:21 | |
| 77 | Introducing more classification evaluation methods | 06:05 | |
| 78 | Finding the accuracy of our classification model | 04:18 | |
| 79 | Creating our first confusion matrix (to see where our model is getting confused) | 08:28 | |
| 80 | Making our confusion matrix prettier | 14:01 | |
| 81 | Putting things together with multi-class classification part 1: Getting the data | 10:38 | |
| 82 | Multi-class classification part 2: Becoming one with the data | 07:08 | |
| 83 | Multi-class classification part 3: Building a multi-class classification model | 15:39 | |
| 84 | Multi-class classification part 4: Improving performance with normalisation | 12:44 | |
| 85 | Multi-class classification part 5: Comparing normalised and non-normalised data | 04:14 | |
| 86 | Multi-class classification part 6: Finding the ideal learning rate | 10:39 | |
| 87 | Multi-class classification part 7: Evaluating our model | 13:17 | |
| 88 | Multi-class classification part 8: Creating a confusion matrix | 04:27 | |
| 89 | Multi-class classification part 9: Visualising random model predictions | 10:43 | |
| 90 | What "patterns" is our model learning? | 15:34 | |
| 91 | Introduction to Computer Vision with TensorFlow | 09:37 | |
| 92 | Introduction to Convolutional Neural Networks (CNNs) with TensorFlow | 08:00 | |
| 93 | Downloading an image dataset for our first Food Vision model | 08:28 | |
| 94 | Becoming One With Data | 05:06 | |
| 95 | Becoming One With Data Part 2 | 12:27 | |
| 96 | Becoming One With Data Part 3 | 04:23 | |
| 97 | Building an end-to-end CNN Model | 18:18 | |
| 98 | Using a GPU to run our CNN model 5x faster | 09:18 | |
| 99 | Trying a non-CNN model on our image data | 08:52 | |
| 100 | Improving our non-CNN model by adding more layers | 09:53 | |
| 101 | Breaking our CNN model down part 1: Becoming one with the data | 09:04 | |
| 102 | Breaking our CNN model down part 2: Preparing to load our data | 11:47 | |
| 103 | Breaking our CNN model down part 3: Loading our data with ImageDataGenerator | 09:55 | |
| 104 | Breaking our CNN model down part 4: Building a baseline CNN model | 08:03 | |
| 105 | Breaking our CNN model down part 5: Looking inside a Conv2D layer | 15:21 | |
| 106 | Breaking our CNN model down part 6: Compiling and fitting our baseline CNN | 07:15 | |
| 107 | Breaking our CNN model down part 7: Evaluating our CNN's training curves | 11:46 | |
| 108 | Breaking our CNN model down part 8: Reducing overfitting with Max Pooling | 13:41 | |
| 109 | Breaking our CNN model down part 9: Reducing overfitting with data augmentation | 06:53 | |
| 110 | Breaking our CNN model down part 10: Visualizing our augmented data | 15:05 | |
| 111 | Breaking our CNN model down part 11: Training a CNN model on augmented data | 08:50 | |
| 112 | Breaking our CNN model down part 12: Discovering the power of shuffling data | 10:02 | |
| 113 | Breaking our CNN model down part 13: Exploring options to improve our model | 05:22 | |
| 114 | Downloading a custom image to make predictions on | 04:55 | |
| 115 | Writing a helper function to load and preprocess custom images | 10:01 | |
| 116 | Making a prediction on a custom image with our trained CNN | 10:09 | |
| 117 | Multi-class CNNs part 1: Becoming one with the data | 15:00 | |
| 118 | Multi-class CNNs part 2: Preparing our data (turning it into tensors) | 06:39 | |
| 119 | Multi-class CNNs part 3: Building a multi-class CNN model | 07:25 | |
| 120 | Multi-class CNNs part 4: Fitting a multi-class CNN model to the data | 06:03 | |
| 121 | Multi-class CNNs part 5: Evaluating our multi-class CNN model | 04:52 | |
| 122 | Multi-class CNNs part 6: Trying to fix overfitting by removing layers | 12:20 | |
| 123 | Multi-class CNNs part 7: Trying to fix overfitting with data augmentation | 11:47 | |
| 124 | Multi-class CNNs part 8: Things you could do to improve your CNN model | 04:24 | |
| 125 | Multi-class CNNs part 9: Making predictions with our model on custom images | 09:23 | |
| 126 | Saving and loading our trained CNN model | 06:22 | |
| 127 | What is and why use transfer learning? | 10:13 | |
| 128 | Downloading and preparing data for our first transfer learning model | 14:40 | |
| 129 | Introducing Callbacks in TensorFlow and making a callback to track our models | 10:02 | |
| 130 | Exploring the TensorFlow Hub website for pretrained models | 09:52 | |
| 131 | Building and compiling a TensorFlow Hub feature extraction model | 14:01 | |
| 132 | Blowing our previous models out of the water with transfer learning | 09:14 | |
| 133 | Plotting the loss curves of our ResNet feature extraction model | 07:36 | |
| 134 | Building and training a pre-trained EfficientNet model on our data | 09:43 | |
| 135 | Different Types of Transfer Learning | 11:41 | |
| 136 | Comparing Our Model's Results | 15:17 | |
| 137 | Introduction to Transfer Learning in TensorFlow Part 2: Fine-tuning | 06:17 | |
| 138 | Importing a script full of helper functions (and saving lots of space) | 07:36 | |
| 139 | Downloading and turning our images into a TensorFlow BatchDataset | 15:39 | |
| 140 | Discussing the four (actually five) modelling experiments we're running | 02:16 | |
| 141 | Comparing the TensorFlow Keras Sequential API versus the Functional API | 02:35 | |
| 142 | Creating our first model with the TensorFlow Keras Functional API | 11:39 | |
| 143 | Compiling and fitting our first Functional API model | 10:54 | |
| 144 | Getting a feature vector from our trained model | 13:40 | |
| 145 | Drilling into the concept of a feature vector (a learned representation) | 03:44 | |
| 146 | Downloading and preparing the data for Model 1 (1 percent of training data) | 09:52 | |
| 147 | Building a data augmentation layer to use inside our model | 12:07 | |
| 148 | Visualising what happens when images pass through our data augmentation layer | 10:56 | |
| 149 | Building Model 1 (with a data augmentation layer and 1% of training data) | 15:56 | |
| 150 | Building Model 2 (with a data augmentation layer and 10% of training data) | 16:38 | |
| 151 | Creating a ModelCheckpoint to save our model's weights during training | 07:26 | |
| 152 | Fitting and evaluating Model 2 (and saving its weights using ModelCheckpoint) | 07:15 | |
| 153 | Loading and comparing saved weights to our existing trained Model 2 | 07:18 | |
| 154 | Preparing Model 3 (our first fine-tuned model) | 20:27 | |
| 155 | Fitting and evaluating Model 3 (our first fine-tuned model) | 07:46 | |
| 156 | Comparing our model's results before and after fine-tuning | 10:27 | |
| 157 | Downloading and preparing data for our biggest experiment yet (Model 4) | 06:25 | |
| 158 | Preparing our final modelling experiment (Model 4) | 12:01 | |
| 159 | Fine-tuning Model 4 on 100% of the training data and evaluating its results | 10:20 | |
| 160 | Comparing our modelling experiment results in TensorBoard | 10:47 | |
| 161 | How to view and delete previous TensorBoard experiments | 02:05 | |
| 162 | Introduction to Transfer Learning Part 3: Scaling Up | 06:20 | |
| 163 | Getting helper functions ready and downloading data to model | 13:35 | |
| 164 | Outlining the model we're going to build and building a ModelCheckpoint callback | 05:39 | |
| 165 | Creating a data augmentation layer to use with our model | 04:40 | |
| 166 | Creating a headless EfficientNetB0 model with data augmentation built in | 08:59 | |
| 167 | Fitting and evaluating our biggest transfer learning model yet | 07:57 | |
| 168 | Unfreezing some layers in our base model to prepare for fine-tuning | 11:29 | |
| 169 | Fine-tuning our feature extraction model and evaluating its performance | 08:24 | |
| 170 | Saving and loading our trained model | 06:26 | |
| 171 | Downloading a pretrained model to make and evaluate predictions with | 06:35 | |
| 172 | Making predictions with our trained model on 25,250 test samples | 12:47 | |
| 173 | Unravelling our test dataset for comparing ground truth labels to predictions | 06:06 | |
| 174 | Confirming our model's predictions are in the same order as the test labels | 05:18 | |
| 175 | Creating a confusion matrix for our model's 101 different classes | 12:08 | |
| 176 | Evaluating every individual class in our dataset | 14:17 | |
| 177 | Plotting our model's F1-scores for each separate class | 07:37 | |
| 178 | Creating a function to load and prepare images for making predictions | 12:09 | |
| 179 | Making predictions on our test images and evaluating them | 16:07 | |
| 180 | Discussing the benefits of finding your model's most wrong predictions | 06:10 | |
| 181 | Writing code to uncover our model's most wrong predictions | 11:17 | |
| 182 | Plotting and visualizing the samples our model got most wrong | 10:37 | |
| 183 | Making predictions on and plotting our own custom images | 09:50 | |
| 184 | Introduction to Milestone Project 1: Food Vision Big™ | 05:45 | |
| 185 | Making sure we have access to the right GPU for mixed precision training | 10:18 | |
| 186 | Getting helper functions ready | 03:07 | |
| 187 | Introduction to TensorFlow Datasets (TFDS) | 12:04 | |
| 188 | Exploring and becoming one with the data (Food101 from TensorFlow Datasets) | 15:57 | |
| 189 | Creating a preprocessing function to prepare our data for modelling | 15:51 | |
| 190 | Batching and preparing our datasets (to make them run fast) | 13:48 | |
| 191 | Exploring what happens when we batch and prefetch our data | 06:50 | |
| 192 | Creating modelling callbacks for our feature extraction model | 07:15 | |
| 193 | Turning on mixed precision training with TensorFlow | 10:06 | |
| 194 | Creating a feature extraction model capable of using mixed precision training | 12:43 | |
| 195 | Checking to see if our model is using mixed precision training layer by layer | 07:57 | |
| 196 | Training and evaluating a feature extraction model (Food Vision Big™) | 10:20 | |
| 197 | Introducing your Milestone Project 1 challenge: build a model to beat DeepFood | 07:48 | |
| 198 | Introduction to Natural Language Processing (NLP) and Sequence Problems | 12:52 | |
| 199 | Example NLP inputs and outputs | 07:23 | |
| 200 | The typical architecture of a Recurrent Neural Network (RNN) | 09:04 | |
| 201 | Preparing a notebook for our first NLP with TensorFlow project | 08:53 | |
| 202 | Becoming one with the data and visualizing a text dataset | 16:42 | |
| 203 | Splitting data into training and validation sets | 06:27 | |
| 204 | Converting text data to numbers using tokenisation and embeddings (overview) | 09:23 | |
| 205 | Setting up a TensorFlow TextVectorization layer to convert text to numbers | 17:11 | |
| 206 | Mapping the TextVectorization layer to text data and turning it into numbers | 11:03 | |
| 207 | Creating an Embedding layer to turn tokenised text into embedding vectors | 12:28 | |
| 208 | Discussing the various modelling experiments we're going to run | 08:58 | |
| 209 | Model 0: Building a baseline model to try and improve upon | 09:26 | |
| 210 | Creating a function to track and evaluate our model's results | 12:15 | |
| 211 | Model 1: Building, fitting and evaluating our first deep model on text data | 20:52 | |
| 212 | Visualizing our model's learned word embeddings with TensorFlow's projector tool | 20:44 | |
| 213 | High-level overview of Recurrent Neural Networks (RNNs) + where to learn more | 09:35 | |
| 214 | Model 2: Building, fitting and evaluating our first TensorFlow RNN model (LSTM) | 18:17 | |
| 215 | Model 3: Building, fitting and evaluating a GRU-cell powered RNN | 16:57 | |
| 216 | Model 4: Building, fitting and evaluating a bidirectional RNN model | 19:35 | |
| 217 | Discussing the intuition behind Conv1D neural networks for text and sequences | 19:32 | |
| 218 | Model 5: Building, fitting and evaluating a 1D CNN for text | 09:58 | |
| 219 | Using TensorFlow Hub for pretrained word embeddings (transfer learning for NLP) | 13:46 | |
| 220 | Model 6: Building, training and evaluating a transfer learning model for NLP | 10:46 | |
| 221 | Preparing subsets of data for model 7 (same as model 6 but 10% of data) | 10:53 | |
| 222 | Model 7: Building, training and evaluating a transfer learning model on 10% data | 10:05 | |
| 223 | Fixing our data leakage issue with model 7 and retraining it | 13:43 | |
| 224 | Comparing all our modelling experiments evaluation metrics | 13:15 | |
| 225 | Uploading our model's training logs to TensorBoard and comparing them | 11:15 | |
| 226 | Saving and loading in a trained NLP model with TensorFlow | 10:26 | |
| 227 | Downloading a pretrained model and preparing data to investigate predictions | 13:25 | |
| 228 | Visualizing our model's most wrong predictions | 08:29 | |
| 229 | Making and visualizing predictions on the test dataset | 08:28 | |
| 230 | Understanding the concept of the speed/score tradeoff | 15:02 | |
| 231 | Introduction to Milestone Project 2: SkimLit | 14:21 | |
| 232 | What we're going to cover in Milestone Project 2 (NLP for medical abstracts) | 07:23 | |
| 233 | SkimLit inputs and outputs | 11:03 | |
| 234 | Setting up our notebook for Milestone Project 2 (getting the data) | 14:59 | |
| 235 | Visualizing examples from the dataset (becoming one with the data) | 13:19 | |
| 236 | Writing a preprocessing function to structure our data for modelling | 19:51 | |
| 237 | Performing visual data analysis on our preprocessed text | 07:56 | |
| 238 | Turning our target labels into numbers (ML models require numbers) | 13:16 | |
| 239 | Model 0: Creating, fitting and evaluating a baseline model for SkimLit | 09:26 | |
| 240 | Preparing our data for deep sequence models | 09:56 | |
| 241 | Creating a text vectoriser to map our tokens (text) to numbers | 14:08 | |
| 242 | Creating a custom token embedding layer with TensorFlow | 09:15 | |
| 243 | Creating a fast-loading dataset with the TensorFlow tf.data API | 09:50 | |
| 244 | Model 1: Building, fitting and evaluating a Conv1D with token embeddings | 17:22 | |
| 245 | Preparing a pretrained embedding layer from TensorFlow Hub for Model 2 | 10:54 | |
| 246 | Model 2: Building, fitting and evaluating a Conv1D model with token embeddings | 11:31 | |
| 247 | Creating a character-level tokeniser with TensorFlow's TextVectorization layer | 23:25 | |
| 248 | Creating a character-level embedding layer with tf.keras.layers.Embedding | 07:45 | |
| 249 | Model 3: Building, fitting and evaluating a Conv1D model on character embeddings | 13:46 | |
| 250 | Discussing how we're going to build Model 4 (character + token embeddings) | 06:05 | |
| 251 | Model 4: Building a multi-input model (hybrid token + character embeddings) | 15:37 | |
| 252 | Model 4: Plotting and visually exploring different data inputs | 07:33 | |
| 253 | Crafting multi-input fast loading tf.data datasets for Model 4 | 08:42 | |
| 254 | Model 4: Building, fitting and evaluating a hybrid embedding model | 13:19 | |
| 255 | Model 5: Adding positional embeddings via feature engineering (overview) | 07:19 | |
| 256 | Encoding the line number feature to be used with Model 5 | 12:26 | |
| 257 | Encoding the total lines feature to be used with Model 5 | 07:57 | |
| 258 | Model 5: Building the foundations of a tribrid embedding model | 09:20 | |
| 259 | Model 5: Completing the build of a tribrid embedding model for sequences | 14:09 | |
| 260 | Visually inspecting the architecture of our tribrid embedding model | 10:26 | |
| 261 | Creating multi-level data input pipelines for Model 5 with the tf.data API | 09:01 | |
| 262 | Bringing SkimLit to life!!! (fitting and evaluating Model 5) | 10:36 | |
| 263 | Comparing the performance of all of our modelling experiments | 09:37 | |
| 264 | Saving, loading & testing our best performing model | 07:49 | |
| 265 | Congratulations and your challenge before heading to the next module | 12:34 | |
| 266 | Introduction to Milestone Project 3 (BitPredict) & where you can get help | 03:54 | |
| 267 | What is a time series problem and example forecasting problems at Uber | 07:47 | |
| 268 | Example forecasting problems in daily life | 04:53 | |
| 269 | What can be forecast? | 07:58 | |
| 270 | What we're going to cover (broadly) | 02:36 | |
| 271 | Time series forecasting inputs and outputs | 08:56 | |
| 272 | Downloading and inspecting our Bitcoin historical dataset | 14:59 | |
| 273 | Different kinds of time series patterns & different amounts of feature variables | 07:40 | |
| 274 | Visualizing our Bitcoin historical data with pandas | 04:53 | |
| 275 | Reading in our Bitcoin data with Python's CSV module | 10:59 | |
| 276 | Creating train and test splits for time series (the wrong way) | 08:38 | |
| 277 | Creating train and test splits for time series (the right way) | 07:13 | |
| 278 | Creating a plotting function to visualize our time series data | 07:58 | |
| 279 | Discussing the various modelling experiments we're going to be running | 09:12 | |
| 280 | Model 0: Making and visualizing a naive forecast model | 12:17 | |
| 281 | Discussing some of the most common time series evaluation metrics | 11:12 | |
| 282 | Implementing MASE with TensorFlow | 09:39 | |
| 283 | Creating a function to evaluate our model's forecasts with various metrics | 10:12 | |
| 284 | Discussing other non-TensorFlow kinds of time series forecasting models | 05:07 | |
| 285 | Formatting data Part 2: Creating a function to label our windowed time series | 13:02 | |
| 286 | Discussing the use of windows and horizons in time series data | 07:51 | |
| 287 | Writing a preprocessing function to turn time series data into windows & labels | 23:36 | |
| 288 | Turning our windowed time series data into training and test sets | 10:02 | |
| 289 | Creating a modelling checkpoint callback to save our best performing model | 07:26 | |
| 290 | Model 1: Building, compiling and fitting a deep learning model on Bitcoin data | 16:59 | |
| 291 | Creating a function to make predictions with our trained models | 14:04 | |
| 292 | Model 2: Building, fitting and evaluating a deep model with a larger window size | 17:44 | |
| 293 | Model 3: Building, fitting and evaluating a model with a larger horizon size | 13:16 | |
| 294 | Adjusting the evaluation function to work for predictions with larger horizons | 08:35 | |
| 295 | Model 3: Visualizing the results | 08:45 | |
| 296 | Comparing our modelling experiments so far and discussing autocorrelation | 09:45 | |
| 297 | Preparing data for building a Conv1D model | 13:22 | |
| 298 | Model 4: Building, fitting and evaluating a Conv1D model on our Bitcoin data | 14:52 | |
| 299 | Model 5: Building, fitting and evaluating a LSTM (RNN) model on our Bitcoin data | 16:06 | |
| 300 | Investigating how to turn our univariate time series into multivariate | 13:53 | |
| 301 | Creating and plotting a multivariate time series with BTC price and block reward | 12:13 | |
| 302 | Preparing our multivariate time series for a model | 13:38 | |
| 303 | Model 6: Building, fitting and evaluating a multivariate time series model | 09:26 | |
| 304 | Model 7: Discussing what we're going to be doing with the N-BEATS algorithm | 09:40 | |
| 305 | Model 7: Replicating the N-BEATS basic block with TensorFlow layer subclassing | 18:39 | |
| 306 | Model 7: Testing our N-BEATS block implementation with dummy data inputs | 15:03 | |
| 307 | Model 7: Setting up hyperparameters for the N-BEATS algorithm | 08:51 | |
| 308 | Model 7: Getting ready for residual connections | 12:56 | |
| 309 | Model 7: Outlining the steps we're going to take to build the N-BEATS model | 10:06 | |
| 310 | Model 7: Putting together the pieces of the puzzle of the N-BEATS model | 22:23 | |
| 311 | Model 7: Plotting the N-BEATS algorithm we've created and admiring its beauty | 06:47 | |
| 312 | Model 8: Ensemble model overview | 04:44 | |
| 313 | Model 8: Building, compiling and fitting an ensemble of models | 20:05 | |
| 314 | Model 8: Making and evaluating predictions with our ensemble model | 16:10 | |
| 315 | Discussing the importance of prediction intervals in forecasting | 12:57 | |
| 316 | Getting the upper and lower bounds of our prediction intervals | 07:58 | |
| 317 | Plotting the prediction intervals of our ensemble model predictions | 13:03 | |
| 318 | (Optional) Discussing the types of uncertainty in machine learning | 13:42 | |
| 319 | Model 9: Preparing data to create a model capable of predicting into the future | 08:25 | |
| 320 | Model 9: Building, compiling and fitting a future predictions model | 05:02 | |
| 321 | Model 9: Discussing what's required for our model to make future predictions | 08:31 | |
| 322 | Model 9: Creating a function to make forecasts into the future | 12:09 | |
| 323 | Model 9: Plotting our model's future forecasts | 13:10 | |
| 324 | Model 10: Introducing the turkey problem and making data for it | 14:16 | |
| 325 | Model 10: Building a model to predict on turkey data (why forecasting is BS) | 13:39 | |
| 326 | Comparing the results of all of our models and discussing where to go next | 13:00 | |
| 327 | What is the TensorFlow Developer Certification? | 05:29 | |
| 328 | Why the TensorFlow Developer Certification? | 06:58 | |
| 329 | How to prepare (your brain) for the TensorFlow Developer Certification | 08:15 | |
| 330 | How to prepare (your computer) for the TensorFlow Developer Certification | 12:44 | |
| 331 | What to do after the TensorFlow Developer Certification exam | 02:14 | |
| 332 | What is Machine Learning? | 04:52 | |
| 333 | AI/Machine Learning/Data Science | 06:53 | |
| 334 | Exercise: Machine Learning Playground | 06:17 | |
| 335 | How Did We Get Here? | 06:04 | |
| 336 | Exercise: YouTube Recommendation Engine | 04:25 | |
| 337 | Types of Machine Learning | 04:42 | |
| 338 | What Is Machine Learning? Round 2 | 04:45 | |
| 339 | Section Review | 01:49 | |
| 340 | Section Overview | 02:39 | |
| 341 | Introducing Our Framework | 03:09 | |
| 342 | 6 Step Machine Learning Framework | 05:00 | |
| 343 | Types of Machine Learning Problems | 10:33 | |
| 344 | Types of Data | 04:51 | |
| 345 | Types of Evaluation | 03:32 | |
| 346 | Features In Data | 05:59 | |
| 347 | Modelling - Splitting Data | 05:23 | |
| 348 | Modelling - Picking the Model | 04:36 | |
| 349 | Modelling - Tuning | 03:18 | |
| 350 | Modelling - Comparison | 03:36 | |
| 351 | Experimentation | 09:33 | |
| 352 | Tools We Will Use | 04:01 | |
| 353 | Section Overview | 02:28 | |
| 354 | Pandas Introduction | 04:30 | |
| 355 | Series, Data Frames and CSVs | 13:22 | |
| 356 | Describing Data with Pandas | 09:49 | |
| 357 | Selecting and Viewing Data with Pandas | 11:09 | |
| 358 | Selecting and Viewing Data with Pandas Part 2 | 13:07 | |
| 359 | Manipulating Data | 13:57 | |
| 360 | Manipulating Data 2 | 09:57 | |
| 361 | Manipulating Data 3 | 10:13 | |
| 362 | How To Download The Course Assignments | 02:41 | |
| 363 | Section Overview | 07:44 | |
| 364 | NumPy Introduction | 05:18 | |
| 365 | NumPy DataTypes and Attributes | 14:06 | |
| 366 | Creating NumPy Arrays | 09:23 | |
| 367 | NumPy Random Seed | 07:18 | |
| 368 | Viewing Arrays and Matrices | 09:36 | |
| 369 | Manipulating Arrays | 11:32 | |
| 370 | Manipulating Arrays 2 | 09:45 | |
| 371 | Standard Deviation and Variance | 07:11 | |
| 372 | Reshape and Transpose | 07:27 | |
| 373 | Dot Product vs Element Wise | 11:46 | |
| 374 | Exercise: Nut Butter Store Sales | 03:34 | |
| 375 | Comparison Operators | 13:05 | |
| 376 | Sorting Arrays | 06:20 | |
| 377 | Turn Images Into NumPy Arrays | 07:38 |
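As one more concrete taste of the syllabus above, here is a minimal sketch (again not from the course materials, and assuming TensorFlow 2.6+, where TextVectorization lives under tf.keras.layers) of the text-to-numbers step that NLP lessons 204-207 revolve around:

```python
import tensorflow as tf

# A made-up toy corpus; the course works with real tweet and abstract datasets.
sentences = tf.constant([
    "TensorFlow turns text into tensors",
    "deep learning models require numbers, not words",
])

# TextVectorization maps raw strings to integer token ids (lessons 205-206).
vectorizer = tf.keras.layers.TextVectorization(
    max_tokens=1000,           # cap the vocabulary size
    output_sequence_length=8,  # pad/truncate every sentence to 8 tokens
)
vectorizer.adapt(sentences)    # build the vocabulary from the corpus

# An Embedding layer then turns those ids into dense vectors (lesson 207).
embedding = tf.keras.layers.Embedding(input_dim=1000, output_dim=16)
print(embedding(vectorizer(sentences)).shape)  # (2, 8, 16)
```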
Similar courses
DS4B 101-P: Python for Data Science Automation
Sources: Business Science University
Python for Data Science Automation is an innovative course designed to teach data analysts how to convert business processes to Python-based data science automations. The course...
27 hours 6 minutes 1 second
Statistics for Data Science and Business Analysis
Sources: udemy
Is statistics a driving force in the industry you want to enter? Do you want to work as a Marketing Analyst, a Business Intelligence Analyst, a Data Analyst, or...
4 hours 49 minutes 30 seconds
Machine Learning with Python: COMPLETE COURSE FOR BEGINNERS
Sources: udemy
Machine Learning and artificial intelligence (AI) are everywhere; if you want to know how companies like Google, Amazon, and even Udemy extract meaning and insight...
13 hours 12 minutes 31 seconds
dbt for Data Engineers
Sources: Andreas Kretz
dbt (data build tool) is a data transformation tool with a priority on SQL. It allows for simple and transparent transformation, testing, and documentation...
1 hour 52 minutes 55 seconds
Data Analysis for Beginners: Python & Statistics
Sources: zerotomastery.io
This course is your first step into the world of data analysis using one of the main tools for analysts - Python. Without complicated terms, advanced...
6 hours 34 minutes 20 seconds