
Advanced AI: LLMs Explained with Math (Transformers, Attention Mechanisms & More)

4h 55m 29s
English
Paid

Course description

Dive into the mathematical foundations of transformer models such as GPT and BERT. From tokenization to attention mechanisms, analyze the algorithms that underpin modern AI systems. Sharpen your skills to innovate and become a leader in the field of machine learning.


All Course Lessons (32)

1. Advanced AI: LLMs Explained with Math Demo (03:01)
2. Creating Our Optional Experiment Notebook - Part 1 (03:22)
3. Creating Our Optional Experiment Notebook - Part 2 (04:02)
4. Encoding Categorical Labels to Numeric Values (13:25)
5. Understanding the Tokenization Vocabulary (15:06)
6. Encoding Tokens (10:57)
7. Practical Example of Tokenization and Encoding (12:49)
8. DistilBert vs. Bert Differences (04:47)
9. Embeddings In A Continuous Vector Space (07:41)
10. Introduction To Positional Encodings (05:14)
11. Positional Encodings - Part 1 (04:15)
12. Positional Encodings - Part 2 (Even and Odd Indices) (10:11)
13. Why Use Sine and Cosine Functions (05:09)
14. Understanding the Nature of Sine and Cosine Functions (09:53)
15. Visualizing Positional Encodings in Sine and Cosine Graphs (09:25)
16. Solving the Equations to Get the Values for Positional Encodings (18:08)
17. Introduction to Attention Mechanism (03:03)
18. Query, Key and Value Matrix (18:11)
19. Getting Started with Our Step by Step Attention Calculation (06:54)
20. Calculating Key Vectors (20:06)
21. Query Matrix Introduction (10:21)
22. Calculating Raw Attention Scores (21:25)
23. Understanding the Mathematics Behind Dot Products and Vector Alignment (13:33)
24. Visualizing Raw Attention Scores in 2D (05:43)
25. Converting Raw Attention Scores to Probability Distributions with Softmax (09:17)
26. Normalization (03:20)
27. Understanding the Value Matrix and Value Vector (09:08)
28. Calculating the Final Context Aware Rich Representation for the Word "River" (10:46)
29. Understanding the Output (01:59)
30. Understanding Multi Head Attention (11:56)
31. Multi Head Attention Example and Subsequent Layers (09:52)
32. Masked Language Learning (02:30)



Similar courses

Master the Model Context Protocol (MCP)

Sources: Kent C. Dodds
The most interesting thing in software right now is MCP. It's a protocol that turns applications into smart conversational partners: instead of clicking...
7h 23m 25s
Perplexity AI for Professionals

Sources: zerotomastery.io
Learn to use Perplexity AI to enhance research, automate tasks, and increase efficiency in the era of AI tools. The course is ideal...
56m 25s
Learn MCP (Model Context Protocol)

Sources: zerotomastery.io
If you are interested in AI that doesn't just talk but actually does something, this compact course is for you. Get ready to dive into the Model Context...
1h 7m 34s
Master AI for Work

Sources: Towards AI, Louis-François Bouchard
The course "Master AI for Work" is designed for those who want to achieve real results from using large language models (LLMs) in their professional...
2h 27m 56s
AI Engineering: Fine-Tuning LLMs

Sources: zerotomastery.io
If you're interested in an AI that actually works, not just sounds impressive, this compact course is just for you. Fine-tuning the GPT model is not just...
1h 35m 46s