CourseFlix

Building a Real-Time ML System. Together

48h 20m 35s
English
Paid

Learn to design, develop, deploy, and scale end-to-end real-time ML systems using Python, Rust, LLMs, and Kubernetes. This course offers a hands-on approach to mastering the technologies that power real-time machine learning applications.

Course Highlights

What awaits you in this comprehensive program:

  1. 150+ hours of recorded sessions from the previous 4 cohorts, allowing you to learn at your own pace.
  2. Access to the complete source code of the projects, including a cryptocurrency price prediction system and a credit card fraud detection system, providing real-world examples for practice.
  3. 50 hours of live coding and practice for each cohort, ensuring a dynamic learning experience.

Course Overview

In this interactive, hands-on course, participants create a real-time machine learning system from scratch, covering deployment and scalability. Past cohorts built a cryptocurrency price predictor; the upcoming cohort will focus on a transaction fraud detection system.
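The first stage of the system the cohorts build is ingesting live trades and aggregating them into candles. As a rough illustration of that idea (the course itself does this with Quix Streams over Kafka; the names and shapes below are invented for the sketch):

```python
from dataclasses import dataclass

@dataclass
class Candle:
    """One OHLCV bar for a fixed time window (hypothetical shape)."""
    window_start: int  # epoch seconds, floored to the window
    open: float
    high: float
    low: float
    close: float
    volume: float

def aggregate_trades(trades, window_s=60):
    """Fold (timestamp, price, qty) trades into per-window candles.

    A plain-Python stand-in for the stateful Kafka transformation the
    course builds; assumes trades arrive roughly in time order.
    """
    candles = {}
    for ts, price, qty in trades:
        key = ts - ts % window_s  # floor the timestamp to its window
        c = candles.get(key)
        if c is None:
            candles[key] = Candle(key, price, price, price, price, qty)
        else:
            c.high = max(c.high, price)
            c.low = min(c.low, price)
            c.close = price  # last trade in the window sets the close
            c.volume += qty
    return [candles[k] for k in sorted(candles)]

trades = [(0, 100.0, 1.0), (30, 105.0, 0.5), (61, 99.0, 2.0)]
bars = aggregate_trades(trades, window_s=60)
# two 60-second bars: the first spans the trades at t=0 and t=30,
# the second opens with the trade at t=61
```

In the real system each candle is emitted to a downstream Kafka topic rather than collected in a list, which is what makes the pipeline real-time.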

Who Should Enroll?

This course is engineered for ML engineers, data scientists, and developers who possess a foundational understanding of machine learning—having trained at least one model—and are eager to advance from theoretical knowledge to practical application.

Key Learning Outcomes

  • Master the development of microservice architectures integrated with real-time ML capabilities.
  • Implement a robust universal approach: Feature → Training → Inference Pipeline.
  • Gain proficiency in leveraging modern tools such as Kafka, Feature Store, Experiment Tracker, Model Registry, and Kubernetes for efficient ML system operations.
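The Feature → Training → Inference split mentioned above means three decoupled stages that communicate only through shared stores (feature store, model registry), never by calling each other directly. A minimal sketch of the pattern, with all stage and store names invented for illustration:

```python
# Feature -> Training -> Inference: each stage reads/writes shared
# stores (here plain dicts standing in for a feature store and a
# model registry), so each can be deployed and scheduled independently.

def feature_pipeline(raw_rows, feature_store):
    """Turn raw events into features and persist them."""
    for row in raw_rows:
        feature_store[row["id"]] = {"x": row["price"] * row["qty"]}

def training_pipeline(feature_store, targets, model_registry):
    """Fit a trivial one-parameter model on stored features and
    push it to the registry (least-squares slope through the origin)."""
    xs = [f["x"] for f in feature_store.values()]
    ys = [targets[i] for i in feature_store]
    slope = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
    model_registry["latest"] = {"slope": slope}

def inference_pipeline(feature_store, model_registry, entity_id):
    """Score one entity with the latest registered model."""
    model = model_registry["latest"]
    return model["slope"] * feature_store[entity_id]["x"]

feature_store, model_registry = {}, {}
feature_pipeline([{"id": "a", "price": 2.0, "qty": 3.0}], feature_store)
training_pipeline(feature_store, {"a": 12.0}, model_registry)
print(inference_pipeline(feature_store, model_registry, "a"))  # 12.0
```

In the course the dicts are replaced by real infrastructure (RisingWave, MLflow), but the contract between the stages is the same.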

Why Choose This Course?

This is not a theoretical course offering "passive learning" opportunities. It is an immersive experience where you will build functional systems, thereby significantly boosting your career in the tech industry.

Additional

About the Author: Michael Guay


Michael Guay is a US-based software engineer and prolific independent instructor who publishes course material on the .NET / C# stack and the modern web frameworks adjacent to it.

The course catalog covers C# and .NET fundamentals, ASP.NET Core for back-end development, Entity Framework for data access, Blazor for full-stack C# web applications, plus the surrounding tooling and deployment patterns. The teaching style is patient and project-oriented, with each course typically building a working application end-to-end.

The CourseFlix catalog lists more than 20 Michael Guay courses spanning that range. The material is paid and aimed at developers picking up the .NET stack or extending their existing .NET experience into newer parts of the platform.

Watch Online · 188 lessons


You can watch up to 10 minutes for free. Subscribe to unlock all 188 lessons in this course and access 10,000+ hours of premium content across all courses.

All Course Lessons (188)
# | Lesson Title | Duration | Access
1
What's new in this cohort? Demo
07:24
2
How to install the tools
19:55
3
How to create the local Kubernetes cluster
08:26
4
How to open a Github issue
06:38
5
ML System design - Part 1
31:30
6
ML System design - Part 2
11:02
7
Dev and Prod environments - Say hi to Marius!
04:24
8
Bootstrap the uv workspace & Install Kafka in our dev environment
26:26
9
Install Kafka UI in our dev environment
14:37
10
Push fake data into Kafka
25:08
11
Push real trade data to Kafka - Part 1
31:11
12
Push real trade data to Kafka - Part 2
16:05
13
.gitignore
03:33
14
Why Docker?
04:23
15
Wrap up
03:05
16
3 questions for YOU
03:21
17
Working inside the devcontainer - Let's (re)create the dev cluster
15:38
18
Dockerfile for the trades service
10:49
19
How to deploy the trades service to the dev Kubernetes cluster
27:00
20
Debugging, debugging, debugging
30:36
21
Decouple config parameters from business logic code in the trades service
31:11
22
Let's recap
03:28
23
Pre-commits for automatic code linting and formatting
16:52
24
Candles service boilerplate code
26:10
25
Open question -> How to do a double port-forwarding?
01:38
26
Wrap up
01:57
27
Plan for today
04:36
28
Redeploy the trades service to the dev cluster
28:16
29
Add key=product_id to the trade messages - otherwise the candles service cannot process them
28:13
30
Deploy the candles service to the dev kubernetes cluster
14:21
31
Horizontal scaling of the candles service - Kafka consumer groups to the rescue!
14:22
32
Build and push the docker image for the candles to the production Github Container Registry
30:26
33
Deploy the candles service to PROD cluster
27:07
34
Technical-indicators service boilerplate code
21:13
35
Wrap up
01:51
36
Quick recap before we start office hours
10:59
37
How to spin up a PROD Kubernetes cluster
28:41
38
How to Monitor a Kubernetes cluster
06:26
39
One of my candles service replicas is waiting for messages. Why?
06:49
40
A detour around ZenML and Flyte
06:59
41
What's the project repo structure of this course?
11:59
42
If you like it, please recommend us :-)
03:35
43
We want you Marius
13:48
44
How to inspect docker build logs - How to do port-forwarding of services
14:40
45
More port forwarding
19:39
46
How to push data into a Kafka topic?
08:58
47
Again: 1 kafka topic partition and 2 consumer replicas means one of them will be IDLE
16:20
48
Problems building the Docker image?
22:14
49
Wrap up
00:24
50
Goals for today
07:07
51
What are technical indicators and how to compute them in real time
10:18
52
Custom stateful transformation to compute indicators - Part 1
24:11
53
Other libraries for real time data processing
12:02
54
Custom stateful transformation to compute indicators - Part 2
34:35
55
Custom stateful transformation to compute indicators - Part 3
09:42
56
Write Dockerfile with talib library
26:31
57
Some homework for you
14:15
58
Deploying the technical indicators service to Kubernetes
28:37
59
Wrap up
01:46
60
Goals for today
04:46
61
Installing RisingWave in Kubernetes
20:48
62
Pull-based ingestion from Kafka topic to RisingWave
14:11
63
Pull-based vs Push-based data ingestion
28:16
64
Installing Grafana and adding RisingWave as a data source
14:57
65
Plotting candles with Grafana
20:40
66
Ingesting historical trades from Kraken REST API
39:01
67
Custom stateful transformation to compute indicators - Part 2 + Homework
24:19
68
Wrap up
03:19
69
Goals for today
07:05
70
Bash script to build and push Docker images to either `dev` or `prod` Kubernetes cluster
22:07
71
Bash script to deploy services to either `dev` or `prod`
17:15
72
Squashing 2 bugs in the trades service
21:55
73
Deploying the trades-historical to Kubernetes - Part 1
13:44
74
Deploying the trades-historical to Kubernetes - Part 2
04:36
75
Adding custom timestamp extractor in our candles service
04:08
76
Deploying the whole backfill pipeline
30:16
77
ConfigMap for our backfill pipeline
38:02
78
How to scale the backfill pipeline to process 100x volume of trades
09:07
79
Wrap up
01:43
80
Goals for today
11:56
81
Installing MLflow in Kubernetes cluster
28:50
82
Start building the training pipeline -> Load data from RisingWave
23:22
83
Adding the target column to the dataframe
08:44
84
Data validation with Great Expectations
14:01
85
Automatic Exploratory Data Analysis (EDA)
17:10
86
Instrumenting our training runs with MLflow
30:47
87
Build a baseline model
24:31
88
Question -> Do we need GPUs to train our model?
01:22
89
Wrap up
05:57
90
Goals for today
06:31
91
Finding a good model candidate with LazyPredict
42:48
92
Logging model candidate table to MLflow
22:07
93
What is model hyper-parameter tuning?
10:56
94
Hyperparameter tuning with Optuna
27:02
95
Fixing things
28:06
96
Scaling the inputs for our regularised linear model (HuberRegressor) - Part 1
15:05
97
Scaling the inputs for our regularised linear model (HuberRegressor) - Part 2
16:01
98
Wrap up
01:57
99
Goals for today
09:02
100
Refactoring the model selection step of the training pipeline
19:06
101
Checking and dropping NaN values
34:22
102
Validating and pushing the model to the registry
15:34
103
Extracting training inputs into a TrainingConfig
14:04
104
Homework -> Add data and model drift reports to each training run using Evidently
09:13
105
Dockerfile for the training-pipeline
14:34
106
What is Kustomize?
18:57
107
Job manifest for the training-pipeline
18:55
108
Debugging the Job (without success so far)
13:22
109
Wrap up
01:26
110
Goals for today
04:23
111
Deploying the training pipeline as a CronJob
16:52
112
Adjust the deployment script to use kustomize build if there is a kustomization.yaml
05:55
113
Building the prediction generator - Part 1 - Bug fixing the name of the model in the registry
27:26
114
Building the prediction generator - Part 2 - Prediction handler
29:08
115
Building the prediction generator - Part 3 - Loading the model signature
22:49
116
Building the prediction generator - Part 4 - Saving predictions to RisingWave table
35:58
117
Building the prediction generator - Part 5 - Deploying the prediction generator
24:40
118
Wrap up
03:00
119
Goals for today
04:05
120
Fixing a bug in the prediction_handler
11:14
121
Question -> Where to log what?
03:16
122
Rebuild docker image and deploy prediction generator to Kubernetes
15:02
123
Let's build the prediction API in Rust - Part 1 - The tools you need
11:22
124
Let's build the prediction API in Rust - Part 2 - REST API skeleton
21:19
125
Let's build the prediction API in Rust - Part 3 - Unwrapping the unwrappable
16:21
126
Let's build the prediction API in Rust - Part 4 - Predictions endpoint
14:36
127
Let's build the prediction API in Rust - Part 5 - Connecting to PostgreSQL
21:37
128
Let's build the prediction API in Rust - Part 6 - Debugging
43:48
129
Let's build the prediction API in Rust - Part 7 - Squashing the bug
02:57
130
Wrap up
03:35
131
5/14/2025 11:02 AM CEST recording
01:21
132
5/14/2025 11:04 AM CEST recording
31:34
133
5/14/2025 11:35 AM CEST recording
05:22
134
5/14/2025 11:41 AM CEST recording
06:07
135
5/14/2025 11:48 AM CEST recording
15:02
136
5/14/2025 12:03 PM CEST recording
06:23
137
5/14/2025 12:10 PM CEST recording
08:10
138
5/14/2025 12:18 PM CEST recording
06:35
139
5/14/2025 12:34 PM CEST recording
35:50
140
5/14/2025 1:10 PM CEST recording
07:47
141
5/14/2025 1:18 PM CEST recording
15:00
142
5/14/2025 1:34 PM CEST recording
09:10
143
5/14/2025 1:43 PM CEST recording
09:03
144
5/14/2025 1:52 PM CEST recording
04:17
145
5/14/2025 1:57 PM CEST recording
01:42
146
Before we begin...
07:00
147
Goals for today
08:03
148
Questions
06:31
149
Adding the PgPool to the app State (so we don't need to recreate every time we get a request)
22:48
150
Creating a RisingWave materialized view with the latest predictions for each coin
16:14
151
Custom config object to load and hold env variable values
17:36
152
Adding the Config to the app State
09:48
153
Adding some logging
11:41
154
Dockerizing our Prediction API Rust service
23:09
155
Mad scientist experiment to reduce the Docker image size with a scratch layer
07:04
156
Deploying to Kubernetes
27:15
157
Plan for the last 3 sessions
03:42
158
Wrap up
01:44
159
Goals for today
08:12
160
How to download crypto news from a REST API (Cryptopanic)
26:21
161
Load Cryptopanic API using env variables and pydantic-settings
07:12
162
Custom Quixstreams Stateful Source to ingest news into Kafka
27:10
163
Inspecting the news messages - Kafka UI
18:18
164
News sentiment service - iteration 1
21:56
165
Unpacking sentiment scores as N kafka messages
21:28
166
BAML to build LLMs with structured output (like the sentiment-extractor we want to build!)
26:55
167
Testing our BAML function
05:39
168
Wrap up
02:02
169
A question I forgot to answer!
01:09
170
6.4.2025 11:03 AM CEST recording
04:01
171
6.4.2025 11:08 AM CEST recording
31:30
172
6.4.2025 11:40 AM CEST recording
10:01
173
6.4.2025 11:50 AM CEST recording
33:31
174
6.4.2025 12:35 PM CEST recording
24:07
175
6.4.2025 1:00 PM CEST recording
04:38
176
6.4.2025 1:05 PM CEST recording
09:07
177
6.4.2025 1:14 PM CEST recording
27:14
178
6.4.2025 1:42 PM CEST recording
17:16
179
6.4.2025 1:59 PM CEST recording
04:20
180
Goals for today
05:22
181
Implement the evaluation metric
22:00
182
Manual prompt improvement
07:06
183
Automatic prompt optimization
11:21
184
How to use open-weights LLMs with Ollama
34:36
185
Kubernetes manifests to deploy news and news-sentiment services
30:16
186
MARIUS -> How to set up a GPU node in a production Kubernetes cluster
44:51
187
Time to say (see you later)!
03:29
188
Wrap up
04:36
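Several of the sessions above (lessons 51 through 58) build a technical-indicators service on top of the candle stream. As a rough, hypothetical illustration of computing one such indicator incrementally (a simple moving average here; the course itself uses the talib library inside a streaming transformation):

```python
from collections import deque

class StreamingSMA:
    """Incremental simple moving average over the last n closes.

    Illustrative only: shows the stateful, one-value-at-a-time shape
    that real-time indicator computation takes, in plain Python.
    """
    def __init__(self, n):
        self.window = deque(maxlen=n)  # oldest close drops off automatically

    def update(self, close):
        self.window.append(close)
        return sum(self.window) / len(self.window)

sma = StreamingSMA(3)
values = [sma.update(c) for c in [10.0, 20.0, 30.0, 40.0]]
# -> [10.0, 15.0, 20.0, 30.0]
```

The state (the deque) is what a streaming framework would persist between messages, which is why the lessons emphasize *stateful* transformations.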
Unlock unlimited learning

Get instant access to all 188 lessons in this course, plus thousands of other premium courses. One subscription, unlimited knowledge.

Learn more about subscription

Books

Read Book Building a Real-Time ML System. Together

  1. slides_session_1
  2. slides_session_2
  3. slides_session_3
  4. slides_session_4
  5. slides_session_5
  6. slides_session_6
  7. slides_session_7
  8. slides_session_8
  9. slides_session_9
  10. slides_session_10
  11. slides_session_11
  12. slides_session_12
  13. slides_session_13
  14. slides_session_14
  15. slides_session_15


Related courses

  • AngularJS Fundamentals

    Sources: Ultimate Courses (Todd Motto)
    Start building modern AngularJS applications with component architecture and best practices.
    2 hours 41 minutes 33 seconds
  • AngularJS and Webpack for Modular Applications

    Sources: egghead.io
    How much work would it take for you to move all of your directives and their templates to several different new directories?
    43 minutes 56 seconds
  • AngularJS Pro

    Sources: Ultimate Courses (Todd Motto)
    Get advanced AngularJS skills for scalable apps. The only deep dive into the entire framework. Take your AngularJS skills to the Pro level.
    7 hours 23 minutes 55 seconds

Frequently asked questions

What is Building a Real-Time ML System. Together about?
Learn to design, develop, deploy, and scale end-to-end real-time ML systems using Python, Rust, LLMs, and Kubernetes. This course offers a hands-on approach to mastering the technologies that power real-time machine learning applications…
Who teaches Building a Real-Time ML System. Together?
Building a Real-Time ML System. Together is taught by Michael Guay. You can find more courses by this instructor on the corresponding source page.
How long is Building a Real-Time ML System. Together?
Building a Real-Time ML System. Together contains 188 lessons with a total runtime of 48 hours 20 minutes. All lessons are available to watch online at your own pace.
Is Building a Real-Time ML System. Together free to watch?
Building a Real-Time ML System. Together is part of CourseFlix's premium catalog. A CourseFlix subscription unlocks the full video player; the course description, table of contents, and preview information are available to everyone.
Where can I watch Building a Real-Time ML System. Together online?
Building a Real-Time ML System. Together is available to watch online on CourseFlix at https://courseflix.net/course/building-a-real-time-ml-system-together. The page hosts every lesson with the integrated video player; no download is required.