Machine Learning A-Z: Become Kaggle Master
Want to become a good Data Scientist? Then this is the right course for you. This course has been designed by IIT professionals who have mastered Mathematics and Data Science. We cover complex theory, algorithms, and coding libraries in a simple way that any beginner can easily grasp, and we walk you step by step into the world of Machine Learning. With every tutorial you will develop new skills and improve your understanding of this challenging yet lucrative sub-field of Data Science, from beginner to advanced level.
We have solved a few Kaggle problems during this course and provided complete solutions, so that students can easily compete on real-world competition websites.
We have covered the following topics in detail in this course (a short code sketch follows the list, to give a flavour of the hands-on style):
1. Python Fundamentals
2. Numpy
3. Pandas
4. Some Fun with Maths
5. Inferential Statistics
6. Hypothesis Testing
7. Data Visualisation
8. EDA
9. Simple Linear Regression
10. Multiple Linear Regression
11. Hotstar/Netflix: Case Study
12. Gradient Descent
13. KNN
14. Model Performance Metrics
15. Model Selection
16. Naive Bayes
17. Logistic Regression
18. SVM
19. Decision Tree
20. Ensembles - Bagging / Boosting
21. Unsupervised Learning
22. Dimension Reduction
23. Advanced ML Algorithms
24. Deep Learning
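To give a flavour of the hands-on style, here is a minimal sketch of topics 9 and 12 above: a simple linear regression fitted by gradient descent. It is an illustration written for this page, not the course's own notebook; the synthetic data, learning rate, and iteration count are assumptions made for the example.

```python
import numpy as np

# Synthetic data: y = 3x + 5 plus noise (values chosen for illustration).
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100)
y = 3.0 * x + 5.0 + rng.normal(0, 1, size=100)

# Gradient descent on the mean squared error cost function.
w, b, lr = 0.0, 0.0, 0.01
for _ in range(2000):
    y_hat = w * x + b
    grad_w = -2 * np.mean((y - y_hat) * x)  # d(MSE)/dw
    grad_b = -2 * np.mean(y - y_hat)        # d(MSE)/db
    w -= lr * grad_w
    b -= lr * grad_b

print(f"fitted slope = {w:.2f}, intercept = {b:.2f}")  # should be close to 3 and 5
```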
- Any Beginner Can Start This Course
- Knowing 2+2 is more than sufficient, as we have covered almost everything from scratch.
- This course is meant for anyone who wants to become a Data Scientist
What you'll learn:
- Master Machine Learning in Python
- Learn to use Matplotlib for Python Plotting
- Learn to use Numpy and Pandas for Data Analysis (a short sketch follows this list)
- Learn to use Seaborn for Statistical Plots
- Learn All the Mathematics Required to Understand Machine Learning Algorithms
- Implement Machine Learning Algorithms Along with Their Mathematical Intuitions
- Kaggle-Level Projects Included with Complete Solutions
- Learn End-to-End Data Science Solutions
- All Advanced-Level Machine Learning Algorithms and Techniques, like Regularisation, Boosting, Bagging, and Many More
- Learn All the Statistical Concepts to Make You a Ninja in Machine Learning
- Real World Case Studies
- Model Performance Metrics
- Deep Learning
- Model Selection
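As a taste of the Numpy/Pandas/Seaborn workflow these bullets refer to, here is a minimal, self-contained sketch; the dataset and its column names (hours_studied, exam_score) are invented purely for illustration.

```python
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

# Build a small synthetic dataset with Pandas (the columns are made up).
rng = np.random.default_rng(42)
df = pd.DataFrame({"hours_studied": rng.uniform(0, 10, 200)})
df["exam_score"] = 40 + 5 * df["hours_studied"] + rng.normal(0, 5, 200)

print(df.describe())  # quick numeric summary, the first step of any EDA

# A statistical plot with Seaborn on top of Matplotlib.
sns.scatterplot(data=df, x="hours_studied", y="exam_score")
plt.title("Hours studied vs. exam score")
plt.show()
```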
Watch Online: Machine Learning A-Z: Become Kaggle Master
# | Title | Duration |
---|---|---|
1 | Introduction to the course | 13:59 |
2 | Introduction to Kaggle | 09:02 |
3 | Installation of Python and Anaconda | 09:02 |
4 | Python Introduction | 03:34 |
5 | Variables in Python | 15:05 |
6 | Numeric Operations in Python | 05:28 |
7 | Logical Operations | 02:25 |
8 | If else Loop | 08:16 |
9 | for while Loop | 10:18 |
10 | Functions | 11:19 |
11 | String Part1 | 12:43 |
12 | String Part2 | 03:02 |
13 | List Part1 | 03:06 |
14 | List Part2 | 10:49 |
15 | List Part3 | 08:53 |
16 | List Part4 | 08:11 |
17 | Tuples | 08:42 |
18 | Sets | 07:28 |
19 | Dictionaries | 07:36 |
20 | Comprehensions | 07:09 |
21 | Introduction | 06:20 |
22 | Numpy Operations Part1 | 19:21 |
23 | Numpy Operations Part2 | 24:27 |
24 | Introduction | 06:30 |
25 | Series | 07:59 |
26 | DataFrame | 07:54 |
27 | Operations Part1 | 01:24 |
28 | Operations Part2 | 05:11 |
29 | Indexes | 06:07 |
30 | loc and iloc | 07:28 |
31 | Reading CSV | 05:29 |
32 | Merging Part1 | 03:44 |
33 | groupby | 05:26 |
34 | Merging Part2 | 04:26 |
35 | Pivot Table | 03:25 |
36 | Linear Algebra : Vectors | 43:18 |
37 | Linear Algebra : Matrix Part1 | 15:44 |
38 | Linear Algebra : Matrix Part2 | 16:22 |
39 | Linear Algebra : Going From 2D to nD Part1 | 08:45 |
40 | Linear Algebra : 2D to nD Part2 | 06:54 |
41 | Inferential Statistics | 03:02 |
42 | Probability Theory | 13:16 |
43 | Probability Distribution | 05:00 |
44 | Expected Values Part1 | 04:53 |
45 | Expected Values Part2 | 03:15 |
46 | Without Experiment | 06:02 |
47 | Binomial Distribution | 04:12 |
48 | Cumulative Distribution | 02:25 |
49 | | 04:44 |
50 | Normal Distribution | 05:01 |
51 | z Score | 04:45 |
52 | Sampling | 09:42 |
53 | Sampling Distribution | 06:17 |
54 | Central Limit Theorem | 03:08 |
55 | Confidence Interval Part1 | 07:15 |
56 | Confidence Interval Part2 | 03:19 |
57 | Introduction | 08:30 |
58 | Null and Alternate Hypothesis | 06:29 |
59 | Examples | 05:47 |
60 | One/Two Tailed Tests | 08:02 |
61 | Critical Value Method | 04:19 |
62 | z Table | 07:37 |
63 | Examples | 03:18 |
64 | More Examples | 03:03 |
65 | p Value | 05:16 |
66 | Types of Error | 02:54 |
67 | t- distribution Part1 | 03:28 |
68 | t- distribution Part2 | 02:43 |
69 | Matplotlib | 19:55 |
70 | Seaborn | 20:26 |
71 | Case Study | 10:24 |
72 | Seaborn On Time Series Data | 04:27 |
73 | Introduction | 01:07 |
74 | Data Sourcing and Cleaning part1 | 05:07 |
75 | Data Sourcing and Cleaning part2 | 03:15 |
76 | Data Sourcing and Cleaning part3 | 04:00 |
77 | Data Sourcing and Cleaning part4 | 03:57 |
78 | Data Sourcing and Cleaning part5 | 03:31 |
79 | Data Sourcing and Cleaning part6 | 04:15 |
80 | Data Cleaning part1 | 14:42 |
81 | Data Cleaning part2 | 09:27 |
82 | Univariate Analysis Part1 | 22:23 |
83 | Univariate Analysis Part2 | 17:33 |
84 | Segmented Analysis | 06:47 |
85 | Bivariate Analysis | 13:00 |
86 | Derived Columns | 12:15 |
87 | Introduction to Machine Learning | 02:14 |
88 | Types of Machine Learning | 08:57 |
89 | Introduction to Linear Regression (LR) | 03:06 |
90 | How LR Works? | 09:18 |
91 | Some Fun With Maths Behind LR | 09:30 |
92 | R Square | 10:54 |
93 | LR Case Study Part1 | 14:49 |
94 | LR Case Study Part2 | 04:54 |
95 | LR Case Study Part3 | 04:26 |
96 | Residual Square Error (RSE) | 01:04 |
97 | Introduction | 03:16 |
98 | Case Study part1 | 07:38 |
99 | Case Study part2 | 10:38 |
100 | Case Study part3 | 06:05 |
101 | Adjusted R Square | 00:46 |
102 | Case Study Part1 | 07:09 |
103 | Case Study Part2 | 09:18 |
104 | Case Study Part3 | 06:37 |
105 | Case Study Part4 | 14:39 |
106 | Case Study Part5 | 04:52 |
107 | Case Study Part6 (RFE) | 06:22 |
108 | Introduction to the Problem Statement | 05:18 |
109 | Playing With Data | 09:30 |
110 | Building Model Part1 | 04:43 |
111 | Building Model Part2 | 07:41 |
112 | Building Model Part3 | 03:52 |
113 | Verification of Model | 03:36 |
114 | Pre-Req For Gradient Descent Part1 | 15:58 |
115 | Pre-Req For Gradient Descent Part2 | 09:00 |
116 | Cost Functions | 02:22 |
117 | Defining Cost Functions More Formally | 07:26 |
118 | Gradient Descent | 10:51 |
119 | Optimisation | 04:14 |
120 | Closed Form Vs Gradient Descent | 04:53 |
121 | Gradient Descent case study | 05:40 |
122 | Introduction to Classification | 12:55 |
123 | Defining Classification Mathematically | 07:31 |
124 | Introduction to KNN | 11:34 |
125 | Accuracy of KNN | 12:45 |
126 | Effectiveness of KNN | 12:54 |
127 | Distance Metrics | 12:21 |
128 | Distance Metrics Part2 | 08:31 |
129 | Finding k | 09:36 |
130 | KNN on Regression | 02:53 |
131 | Case Study | 07:56 |
132 | Classification Case1 | 22:16 |
133 | Classification Case2 | 15:03 |
134 | Classification Case3 | 13:35 |
135 | Classification Case4 | 12:38 |
136 | Performance Metrics Part1 | 21:16 |
137 | Performance Metrics Part2 | 15:17 |
138 | Performance Metrics Part3 | 05:09 |
139 | Model Creation Case1 | 11:37 |
140 | Model Creation Case2 | 07:39 |
141 | Gridsearch Case study Part1 | 11:36 |
142 | Gridsearch Case study Part2 | 15:03 |
143 | Introduction to Naive Bayes | 14:58 |
144 | Bayes Theorem | 10:55 |
145 | Practical Example from NB with One Column | 08:45 |
146 | Practical Example from NB with Multiple Columns | 11:31 |
147 | Naive Bayes On Text Data Part1 | 08:43 |
148 | Naive Bayes On Text Data Part2 | 05:11 |
149 | Laplace Smoothing | 04:11 |
150 | Bernoulli Naive Bayes | 01:38 |
151 | Case Study 1 | 08:41 |
152 | Case Study 2 Part1 | 06:52 |
153 | Case Study 2 Part2 | 02:10 |
154 | Introduction | 07:31 |
155 | Sigmoid Function | 10:19 |
156 | Log Odds | 10:01 |
157 | Case Study | 16:29 |
158 | Introduction | 15:06 |
159 | Hyperplane Part1 | 06:28 |
160 | Hyperplane Part2 | 14:06 |
161 | Maths Behind SVM | 07:38 |
162 | Support Vectors | 04:04 |
163 | Slack Variable | 09:59 |
164 | SVM Case Study Part1 | 06:25 |
165 | SVM Case Study Part2 | 06:49 |
166 | Kernel Part1 | 08:55 |
167 | Kernel Part2 | 12:34 |
168 | Case Study : 2 | 07:28 |
169 | Case Study : 3 Part1 | 08:46 |
170 | Case Study : 3 Part2 | 05:24 |
171 | Case Study 4 | 16:33 |
172 | Introduction | 07:21 |
173 | Example of DT | 07:51 |
174 | Homogeneity | 05:02 |
175 | Gini Index | 07:05 |
176 | Information Gain Part1 | 05:24 |
177 | Information Gain Part2 | 05:14 |
178 | Advantages and Disadvantages of DT | 04:11 |
179 | Preventing Overfitting Issues in DT | 09:59 |
180 | DT Case Study Part1 | 10:36 |
181 | DT Case Study Part2 | 09:06 |
182 | Introduction to Ensembles | 10:15 |
183 | Bagging | 13:10 |
184 | Advantages | 04:39 |
185 | Runtime | 03:53 |
186 | Case study | 05:41 |
187 | Introduction to Boosting | 06:06 |
188 | Weak Learners | 02:54 |
189 | Shallow Decision Tree | 02:31 |
190 | Adaboost Part1 | 07:49 |
191 | Adaboost Part2 | 06:45 |
192 | Adaboost Case Study | 04:47 |
193 | XGBoost | 04:28 |
194 | Boosting Part1 | 03:10 |
195 | Boosting Part2 | 06:49 |
196 | XGboost Algorithm | 08:36 |
197 | Case Study Part1 | 09:40 |
198 | Case Study Part2 | 10:45 |
199 | Case Study Part3 | 05:34 |
200 | Model Selection Part1 | 21:29 |
201 | Model Selection Part2 | 12:32 |
202 | Model Selection Part3 | 09:42 |
203 | Introduction to Clustering | 10:38 |
204 | Segmentation | 07:22 |
205 | Kmeans | 08:08 |
206 | Maths Behind Kmeans | 10:23 |
207 | More Maths | 02:22 |
208 | Kmeans++ | 10:11 |
209 | Value of K | 06:44 |
210 | Hopkins test | 02:32 |
211 | Case Study Part1 | 10:56 |
212 | Case Study Part2 | 06:48 |
213 | More on Segmentation | 04:13 |
214 | Hierarchical Clustering | 07:34 |
215 | Case Study | 05:35 |
216 | Introduction | 30:26 |
217 | PCA | 25:59 |
218 | Maths Behind PCA | 24:25 |
219 | Case Study Part1 | 05:16 |
220 | Case Study Part2 | 15:27 |
221 | Introduction | 07:20 |
222 | Example Part1 | 05:24 |
223 | Example Part2 | 09:07 |
224 | Optimal Solution | 15:23 |
225 | Case study | 03:25 |
226 | Regularization | 09:01 |
227 | Ridge and Lasso | 07:03 |
228 | Case Study | 08:51 |
229 | Model Selection | 05:32 |
230 | Adjusted R Square | 03:20 |
231 | Expectations | 02:42 |
232 | Introduction | 09:13 |
233 | History | 15:39 |
234 | Perceptron | 07:18 |
235 | Multi Layered Perceptron | 13:07 |
236 | Neural Network Playground | 10:27 |
237 | Introduction to the Problem Statement | 08:41 |
238 | Playing With The Data | 14:34 |
239 | Translating the Problem In Machine Learning World | 09:54 |
240 | Dealing with Text Data | 08:02 |
241 | Train, Test And Cross Validation Split | 10:24 |
242 | Understanding Evaluation Metric: Log Loss | 16:56 |
243 | Building A Worst Model | 08:43 |
244 | Evaluating Worst ML Model | 05:49 |
245 | First Categorical column analysis | 12:14 |
246 | Response encoding and one hot encoder | 05:07 |
247 | Laplace Smoothing and Calibrated classifier | 12:06 |
248 | Significance of first categorical column | 06:54 |
249 | Second Categorical column | 04:08 |
250 | Third Categorical column | 06:53 |
251 | Data pre-processing before building machine learning model | 04:24 |
252 | Building Machine Learning model :part1 | 13:12 |
253 | Building Machine Learning model :part2 | 11:39 |
254 | Building Machine Learning model :part3 | 03:18 |
255 | Building Machine Learning model :part4 | 03:14 |
256 | Building Machine Learning model :part5 | 03:49 |
257 | Building Machine Learning model :part6 | 06:33 |
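To illustrate the kind of evaluation work covered in lectures 241-242 above (train/test/cross-validation splits and the log-loss metric), here is a minimal scikit-learn sketch on a built-in toy dataset. It is an illustration written for this page, not the course's case-study notebook.

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)

# Hold out a test set, then carve a cross-validation set out of the rest
# (60% train / 20% cross-validation / 20% test overall).
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)
X_train, X_cv, y_train, y_cv = train_test_split(
    X_train, y_train, test_size=0.25, stratify=y_train, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Log loss penalises confident wrong predictions; lower is better.
print("CV log loss:  ", log_loss(y_cv, model.predict_proba(X_cv)))
print("Test log loss:", log_loss(y_test, model.predict_proba(X_test)))
```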