If you're here, you already know the truth: Machine Learning is the future of everything. In the coming years, there won't be a single industry in the world untouched by Machine Learning. It is a transformative force: you can either choose to understand it now, or miss out on a wave of incredible change. You probably already use apps many times each day that rely upon Machine Learning techniques. So why stay in the dark any longer?
Machine Learning with Javascript
There are many courses on Machine Learning already available. I built this course to be the best introduction to the topic. No subject is left untouched, and no area is left in the dark. If you take this course, you will be prepared to enter and understand any sub-discipline in the world of Machine Learning.
A common question - Why Javascript? I thought ML was all about Python and R?
The answer is simple - ML with Javascript is just plain easier to learn than with Python. Although it is immensely popular, Python is an 'expressive' language, which is a code-word that means 'a confusing language'. A single line of Python can contain a tremendous amount of functionality; this is great when you understand the language and the subject matter, but not so much when you're trying to learn a brand new topic.
Besides making ML easier to understand, Javascript also opens new horizons for the apps you can build. Rather than being limited to deploying Python code on a server, you can build single-page apps, or even browser extensions, that run interesting algorithms right in the browser, opening the door to completely novel use cases!
Does this course focus on algorithms, or math, or Tensorflow, or what?!?!
Let's be honest - the vast majority of ML courses available online dance around the confusing topics. They encourage you to use pre-built algorithms and functions that do all the heavy lifting for you. Although this can lead to quick successes, in the end it will hamper your ability to understand ML. You can only understand how to apply ML techniques if you understand the underlying algorithms.
That's the goal of this course - I want you to understand the exact math and programming techniques that are used in the most common ML algorithms. Once you have this knowledge, you can easily pick up new algorithms on the fly, and build far more interesting projects and applications than other engineers who only understand how to hand data to a magic library.
Don't have a background in math? That's OK! I take special care to make sure that no lecture gets too far into 'mathy' topics without giving a proper introduction to what is going on.
A short list of what you will learn:
- Advanced memory profiling to enhance the performance of your algorithms
- Build apps powered by the Tensorflow JS library
- Develop programs that work either in the browser or with Node JS
- Write clean, easy-to-understand ML code, with no one-name variables or confusing functions
- Pick up the basics of Linear Algebra so you can dramatically speed up your code with matrix-based operations. (Don't worry, I'll make the math easy!)
- Learn how to adapt common algorithms to fit your unique use cases
- Plot the results of your analysis using a custom-built graphing library
- Learn performance-enhancing strategies that can be applied to any type of Javascript code
- Data loading techniques, both in the browser and Node JS environments
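As a flavor of the from-scratch approach described above, here is a tiny k-nearest-neighbor classifier in plain Javascript. This is purely an illustrative sketch of the general technique (all names and data structures here are my own assumptions, not the course's actual code):

```javascript
// Minimal k-nearest-neighbor classifier sketch (illustrative only).

// Euclidean distance between two feature arrays
function distance(a, b) {
  return Math.sqrt(a.reduce((sum, v, i) => sum + (v - b[i]) ** 2, 0));
}

// Classify `point` by majority vote among the k nearest labeled rows.
// Each row of `data` is [featureArray, label].
function knn(data, point, k) {
  const votes = data
    .map(([features, label]) => [distance(features, point), label])
    .sort((a, b) => a[0] - b[0])
    .slice(0, k)
    .map(([, label]) => label);

  // Tally the votes and return the most common label
  const counts = {};
  for (const label of votes) counts[label] = (counts[label] || 0) + 1;
  return Object.keys(counts).reduce((a, b) => (counts[a] >= counts[b] ? a : b));
}

// Example: two clusters along a single feature dimension
const data = [
  [[1], 'low'], [[2], 'low'], [[3], 'low'],
  [[10], 'high'], [[11], 'high'], [[12], 'high'],
];
console.log(knn(data, [2.5], 3)); // → low
```

The course goes well beyond a toy like this (normalization, feature selection, and a Tensorflow JS implementation), but the core idea fits in a few lines.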
Requirements:
- Basic understanding of terminal and command line usage
- Ability to read basic math equations

Who this course is for:
- Javascript developers interested in Machine Learning
What you'll learn:
- Assemble machine learning algorithms from scratch!
- Build interesting applications using Javascript and ML techniques
- Understand how ML works without relying on mysterious libraries
- Optimize your algorithms with advanced performance and memory usage profiling
- Use the low-level features of Tensorflow JS to supercharge your algorithms
- Grow a strong intuition of ML best practices
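Several of the points above lean on basic linear algebra, such as speeding up code with matrix-based operations. As a taste, here is a naive matrix multiplication helper in plain Javascript; it's an illustrative sketch, not code from the course:

```javascript
// Naive matrix multiplication in plain Javascript.
// Matrices are arrays of row arrays: a is (rows x shared), b is (shared x cols).
function matmul(a, b) {
  const rows = a.length;
  const shared = b.length;
  const cols = b[0].length;
  const out = Array.from({ length: rows }, () => new Array(cols).fill(0));

  for (let i = 0; i < rows; i++) {
    for (let k = 0; k < shared; k++) {
      for (let j = 0; j < cols; j++) {
        out[i][j] += a[i][k] * b[k][j];
      }
    }
  }
  return out;
}

// [[1,2],[3,4]] x [[5,6],[7,8]] = [[19,22],[43,50]]
console.log(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]));
```

In practice the course uses Tensorflow JS for operations like this, which is far faster than hand-rolled loops; the point of writing it by hand is to understand what the library is doing.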
About the Authors
Stephen Grider
Stephen Grider is one of the longest-running and most prolific instructors on Udemy, with a catalog covering essentially every major JavaScript framework, plus Docker, Kubernetes, AWS, and the broader full-stack development landscape. His teaching style is patient and project-oriented — most of his courses are structured around building a substantial application from scratch rather than working through disconnected tutorial examples.
The catalog covers React, Redux, Next.js, Vue, Angular, GraphQL, Node.js, Docker and Kubernetes, AWS infrastructure, React Native and Flutter for mobile, an algorithm and data-structure interview prep track, and the modern TypeScript, Bun, and Rust-adjacent material that working JavaScript developers increasingly encounter. Few independent instructors have maintained this breadth so consistently for so long.
The CourseFlix listing under this source carries over 25 Stephen Grider courses spanning that range. The material is paid; Stephen Grider courses are typically sold individually on Udemy. They are aimed primarily at developers picking up a specific technology by working through a complete project.
Udemy
Udemy is the largest open marketplace for online courses on the internet. Founded in 2010 by Eren Bali, Oktay Caglar, and Gagan Biyani and headquartered in San Francisco, the company went public on the Nasdaq in 2021 under the ticker UDMY. The platform hosts well over two hundred thousand courses across software development, IT and cloud, data science, design, business, marketing, and creative skills, taught by tens of thousands of independent instructors. Roughly seventy million learners use it worldwide, and the corporate arm — Udemy Business — supplies a curated subset of that catalog to enterprise customers.
Because Udemy is a marketplace rather than a single editorial publisher, the catalog is uneven by design. The strongest material lives in the long-form, project-based courses authored by working engineers — full-stack JavaScript, React, Node.js, Python data science, AWS, Docker and Kubernetes, mobile development with Flutter and React Native, and cloud certification preparation. The CourseFlix listing under this source is the slice of that catalog that has been mirrored here for offline-friendly viewing, organized by topic and updated as new releases land. Pricing on Udemy itself swings dramatically with the site's near-permanent sales, which is why the platform is best treated as a deep reference catalog: pick instructors with strong reviews and a track record of updating their material rather than buying on the headline price alone.
Watch Online: 183 lessons
| # | Lesson Title | Duration | Access |
|---|---|---|---|
| 1 | Getting Started - How to Get Help Demo | 00:58 | |
| 2 | Solving Machine Learning Problems | 06:05 | |
| 3 | A Complete Walkthrough | 09:54 | |
| 4 | App Setup | 02:02 | |
| 5 | Problem Outline | 02:54 | |
| 6 | Identifying Relevant Data | 04:12 | |
| 7 | Dataset Structures | 05:48 | |
| 8 | Recording Observation Data | 04:00 | |
| 9 | What Type of Problem? | 04:36 | |
| 10 | How K-Nearest Neighbor Works | 08:24 | |
| 11 | Lodash Review | 09:57 | |
| 12 | Implementing KNN | 07:17 | |
| 13 | Finishing KNN Implementation | 05:54 | |
| 14 | Testing the Algorithm | 04:49 | |
| 15 | Interpreting Bad Results | 04:13 | |
| 16 | Test and Training Data | 04:06 | |
| 17 | Randomizing Test Data | 03:49 | |
| 18 | Generalizing KNN | 03:42 | |
| 19 | Gauging Accuracy | 05:19 | |
| 20 | Printing a Report | 03:30 | |
| 21 | Refactoring Accuracy Reporting | 05:14 | |
| 22 | Investigating Optimal K Values | 11:39 | |
| 23 | Updating KNN for Multiple Features | 06:37 | |
| 24 | Multi-Dimensional KNN | 03:57 | |
| 25 | N-Dimension Distance | 09:51 | |
| 26 | Arbitrary Feature Spaces | 08:28 | |
| 27 | Magnitude Offsets in Features | 05:37 | |
| 28 | Feature Normalization | 07:33 | |
| 29 | Normalization with MinMax | 07:15 | |
| 30 | Applying Normalization | 04:23 | |
| 31 | Feature Selection with KNN | 07:48 | |
| 32 | Objective Feature Picking | 06:11 | |
| 33 | Evaluating Different Feature Values | 02:54 | |
| 34 | Let's Get Our Bearings | 07:28 | |
| 35 | A Plan to Move Forward | 04:32 | |
| 36 | Tensor Shape and Dimension | 12:05 | |
| 37 | Elementwise Operations | 08:19 | |
| 38 | Broadcasting Operations | 06:48 | |
| 39 | Logging Tensor Data | 03:48 | |
| 40 | Tensor Accessors | 05:25 | |
| 41 | Creating Slices of Data | 07:47 | |
| 42 | Tensor Concatenation | 05:29 | |
| 43 | Summing Values Along an Axis | 05:14 | |
| 44 | Massaging Dimensions with ExpandDims | 07:48 | |
| 45 | KNN with Regression | 04:57 | |
| 46 | A Change in Data Structure | 04:05 | |
| 47 | KNN with Tensorflow | 09:19 | |
| 48 | Maintaining Order Relationships | 06:31 | |
| 49 | Sorting Tensors | 08:01 | |
| 50 | Averaging Top Values | 07:44 | |
| 51 | Moving to the Editor | 03:27 | |
| 52 | Loading CSV Data | 10:11 | |
| 53 | Running an Analysis | 06:11 | |
| 54 | Reporting Error Percentages | 06:27 | |
| 55 | Normalization or Standardization? | 07:34 | |
| 56 | Numerical Standardization with Tensorflow | 07:38 | |
| 57 | Applying Standardization | 04:02 | |
| 58 | Debugging Calculations | 08:15 | |
| 59 | What Now? | 04:01 | |
| 60 | Linear Regression | 02:40 | |
| 61 | Why Linear Regression? | 04:53 | |
| 62 | Understanding Gradient Descent | 13:05 | |
| 63 | Guessing Coefficients with MSE | 10:20 | |
| 64 | Observations Around MSE | 05:57 | |
| 65 | Derivatives! | 07:13 | |
| 66 | Gradient Descent in Action | 11:47 | |
| 67 | Quick Breather and Review | 05:47 | |
| 68 | Why a Learning Rate? | 17:06 | |
| 69 | Answering Common Questions | 03:49 | |
| 70 | Gradient Descent with Multiple Terms | 04:44 | |
| 71 | Multiple Terms in Action | 10:40 | |
| 72 | Project Overview | 06:02 | |
| 73 | Data Loading | 05:18 | |
| 74 | Default Algorithm Options | 08:33 | |
| 75 | Formulating the Training Loop | 03:19 | |
| 76 | Initial Gradient Descent Implementation | 09:25 | |
| 77 | Calculating MSE Slopes | 06:53 | |
| 78 | Updating Coefficients | 03:12 | |
| 79 | Interpreting Results | 10:08 | |
| 80 | Matrix Multiplication | 07:10 | |
| 81 | More on Matrix Multiplication | 06:41 | |
| 82 | Matrix Form of Slope Equations | 06:22 | |
| 83 | Simplification with Matrix Multiplication | 09:29 | |
| 84 | How it All Works Together! | 14:02 | |
| 85 | Refactoring the Linear Regression Class | 07:41 | |
| 86 | Refactoring to One Equation | 08:59 | |
| 87 | A Few More Changes | 06:14 | |
| 88 | Same Results? Or Not? | 03:20 | |
| 89 | Calculating Model Accuracy | 08:38 | |
| 90 | Implementing Coefficient of Determination | 07:45 | |
| 91 | Dealing with Bad Accuracy | 07:48 | |
| 92 | Reminder on Standardization | 04:37 | |
| 93 | Data Processing in a Helper Method | 03:39 | |
| 94 | Reapplying Standardization | 05:58 | |
| 95 | Fixing Standardization Issues | 05:37 | |
| 96 | Massaging Learning Rates | 03:16 | |
| 97 | Moving Towards Multivariate Regression | 11:45 | |
| 98 | Refactoring for Multivariate Analysis | 07:29 | |
| 99 | Learning Rate Optimization | 08:05 | |
| 100 | Recording MSE History | 05:22 | |
| 101 | Updating Learning Rate | 06:42 | |
| 102 | Observing Changing Learning Rate and MSE | 04:18 | |
| 103 | Plotting MSE Values | 05:22 | |
| 104 | Plotting MSE History against B Values | 04:23 | |
| 105 | Batch and Stochastic Gradient Descent | 07:18 | |
| 106 | Refactoring Towards Batch Gradient Descent | 05:07 | |
| 107 | Determining Batch Size and Quantity | 06:03 | |
| 108 | Iterating Over Batches | 07:49 | |
| 109 | Evaluating Batch Gradient Descent Results | 05:42 | |
| 110 | Making Predictions with the Model | 07:38 | |
| 111 | Introducing Logistic Regression | 02:28 | |
| 112 | Logistic Regression in Action | 06:32 | |
| 113 | Bad Equation Fits | 05:32 | |
| 114 | The Sigmoid Equation | 04:32 | |
| 115 | Decision Boundaries | 07:48 | |
| 116 | Changes for Logistic Regression | 01:12 | |
| 117 | Project Setup for Logistic Regression | 05:52 | |
| 118 | Importing Vehicle Data | 04:28 | |
| 119 | Encoding Label Values | 04:19 | |
| 120 | Updating Linear Regression for Logistic Regression | 07:09 | |
| 121 | The Sigmoid Equation with Logistic Regression | 04:28 | |
| 122 | A Touch More Refactoring | 07:47 | |
| 123 | Gauging Classification Accuracy | 03:28 | |
| 124 | Implementing a Test Function | 05:17 | |
| 125 | Variable Decision Boundaries | 07:17 | |
| 126 | Mean Squared Error vs Cross Entropy | 05:47 | |
| 127 | Refactoring with Cross Entropy | 05:09 | |
| 128 | Finishing the Cost Refactor | 04:37 | |
| 129 | Plotting Changing Cost History | 03:25 | |
| 130 | Multinomial Logistic Regression | 02:20 | |
| 131 | A Smart Refactor to Multinomial Analysis | 05:08 | |
| 132 | A Smarter Refactor! | 03:46 | |
| 133 | A Single Instance Approach | 09:51 | |
| 134 | Refactoring to Multi-Column Weights | 04:40 | |
| 135 | A Problem to Test Multinomial Classification | 04:38 | |
| 136 | Classifying Continuous Values | 04:42 | |
| 137 | Training a Multinomial Model | 06:20 | |
| 138 | Marginal vs Conditional Probability | 09:57 | |
| 139 | Sigmoid vs Softmax | 06:09 | |
| 140 | Refactoring Sigmoid to Softmax | 04:43 | |
| 141 | Implementing Accuracy Gauges | 02:37 | |
| 142 | Calculating Accuracy | 03:16 | |
| 143 | Handwriting Recognition | 02:11 | |
| 144 | Greyscale Values | 05:12 | |
| 145 | Many Features | 03:30 | |
| 146 | Flattening Image Data | 06:07 | |
| 147 | Encoding Label Values | 05:45 | |
| 148 | Implementing an Accuracy Gauge | 07:27 | |
| 149 | Unchanging Accuracy | 01:56 | |
| 150 | Debugging the Calculation Process | 08:13 | |
| 151 | Dealing with Zero Variances | 06:16 | |
| 152 | Backfilling Variance | 02:37 | |
| 153 | Handling Large Datasets | 04:15 | |
| 154 | Minimizing Memory Usage | 04:51 | |
| 155 | Creating Memory Snapshots | 05:15 | |
| 156 | The Javascript Garbage Collector | 06:50 | |
| 157 | Shallow vs Retained Memory Usage | 05:51 | |
| 158 | Measuring Memory Usage | 08:30 | |
| 159 | Releasing References | 03:15 | |
| 160 | Measuring Footprint Reduction | 03:51 | |
| 161 | Optimizing Tensorflow Memory Usage | 01:32 | |
| 162 | Tensorflow's Eager Memory Usage | 04:41 | |
| 163 | Cleaning up Tensors with Tidy | 02:49 | |
| 164 | Implementing TF Tidy | 03:32 | |
| 165 | Tidying the Training Loop | 03:58 | |
| 166 | Measuring Reduced Memory Usage | 01:35 | |
| 167 | One More Optimization | 02:36 | |
| 168 | Final Memory Report | 02:45 | |
| 169 | Plotting Cost History | 04:04 | |
| 170 | NaN in Cost History | 04:19 | |
| 171 | Fixing Cost History | 04:47 | |
| 172 | Massaging Learning Parameters | 01:41 | |
| 173 | Improving Model Accuracy | 04:28 | |
| 174 | Loading CSV Files | 02:07 | |
| 175 | A Test Dataset | 02:01 | |
| 176 | Reading Files from Disk | 03:09 | |
| 177 | Splitting into Columns | 02:55 | |
| 178 | Dropping Trailing Columns | 02:31 | |
| 179 | Parsing Number Values | 03:37 | |
| 180 | Custom Value Parsing | 04:20 | |
| 181 | Extracting Data Columns | 05:36 | |
| 182 | Shuffling Data via Seed Phrase | 05:14 | |
| 183 | Splitting Test and Training | 07:45 | |
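A large stretch of the lessons above (roughly 60 through 110) is devoted to linear regression trained with gradient descent. As a flavor of that material, here is a minimal single-variable version in plain Javascript; the variable names and structure are illustrative assumptions, not the course's actual implementation:

```javascript
// Single-variable linear regression via batch gradient descent (sketch).
// Fits y = m * x + b by repeatedly nudging m and b down the MSE slope.
function train(xs, ys, learningRate = 0.01, iterations = 1000) {
  let m = 0; // slope guess
  let b = 0; // intercept guess
  const n = xs.length;

  for (let i = 0; i < iterations; i++) {
    // Slopes of MSE with respect to b and m
    let bSlope = 0;
    let mSlope = 0;
    for (let j = 0; j < n; j++) {
      const guess = m * xs[j] + b;
      bSlope += (2 / n) * (guess - ys[j]);
      mSlope += (2 / n) * xs[j] * (guess - ys[j]);
    }
    // Step both coefficients against their slopes
    m -= mSlope * learningRate;
    b -= bSlope * learningRate;
  }
  return { m, b };
}

// Fit points lying on y = 2x + 1
const xs = [0, 1, 2, 3, 4];
const ys = [1, 3, 5, 7, 9];
const { m, b } = train(xs, ys, 0.05, 5000);
console.log(m.toFixed(2), b.toFixed(2)); // ≈ 2.00 1.00
```

The course extends this same idea step by step: vectorizing the slope calculations with matrix multiplication, adding multiple features, batching, and tuning the learning rate automatically.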
Get instant access to all 183 lessons in this course, plus thousands of other premium courses. One subscription, unlimited knowledge.
Learn more about subscription

Related courses:

- Data Structures and Algorithms: Deep Dive Using Java (Udemy, updated 2y ago, 15h 53m): So you've worked with the basics of data structures and algorithms in Java (or another OO programming language) but feel like you need a deeper knowledge…
- Machine Learning with Python: COMPLETE COURSE FOR BEGINNERS (Udemy, updated 2y ago, 13h 12m): Machine Learning and artificial intelligence (AI) is everywhere; if you want to know how companies like Google, Amazon, and even Udemy extract meaning and insig…
- Cats (Rock the JVM, updated 2y ago, 10h 39m): We Scala programmers love abstractions and Cats is one of the most popular libraries. At the same time, Cats is notorious for having a steep learning curve. Fun…