AI Engineering Bootcamp: Building AI Applications (LangChain, LLM APIs + more)
Course description
This course is a practical path to becoming a generative AI engineer: not just using the technology, but building with it.
First, you will sharpen your Python skills: structuring modular code, working with APIs, and processing data. Then you'll dive into the fundamentals of large language models (LLMs): how they are built and trained, and how to interact with them effectively through advanced prompt engineering.
Next comes practice. You will build real AI applications on the OpenAI and Gemini APIs, including chat systems and tools that work with images and audio. You'll use LangChain to build agents and prompt chains, and LangGraph to manage multi-step workflows. You will add memory to your applications with embeddings and vector databases, and learn to debug, trace, and evaluate production systems with LangSmith.
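To give a sense of the level the API sections work at, here is a minimal sketch (not taken from the course materials) of a chat call with the official OpenAI Python SDK. It assumes the openai package (v1+) is installed and that OPENAI_API_KEY is set in the environment, for example via python-dotenv as the course does; the model name is only an illustrative choice.

```python
# Minimal illustrative chat call with the OpenAI Python SDK (v1+).
# Assumes OPENAI_API_KEY is set in the environment; the model is an example.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model; any chat-capable model works
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain what a context window is in one sentence."},
    ],
    temperature=0.7,
)
print(response.choices[0].message.content)
```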
Throughout the course, you will develop chatbots, intelligent image-processing tools, search-backed Q&A systems, and more. The final project brings all of these skills together: a research agent that uses search, tools, and reasoning to produce high-quality reports on real data.
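As a hint of what "prompt chains" look like in practice, here is a hedged sketch of a small LangChain chain in the LCEL style; the prompt text and model choice are illustrative, not the course's exact code, and it requires the langchain-openai and langchain-core packages plus an OpenAI key in the environment.

```python
# Small illustrative LangChain (LCEL) chain: prompt -> model -> string output.
# Not the course's exact code; model name and prompt are example choices.
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a concise research assistant."),
    ("human", "Summarize the key idea of {topic} in two sentences."),
])
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)  # example model

chain = prompt | llm | StrOutputParser()
print(chain.invoke({"topic": "retrieval-augmented generation"}))
```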
This course is a path from AI experiments to a real engineering approach.
Watch Online
# | Title | Duration |
---|---|---|
1 | AI Engineering Bootcamp: Building AI Applications with LLM APIs, LangChain + much more! | 02:02 |
2 | Using Jupyter Notebook | 09:24 |
3 | Using Virtual Environments (venv) | 10:07 |
4 | Getting Started with the requests and httpx Libraries in Python | 08:50 |
5 | Handling HTTP Errors | 04:39 |
6 | Managing HTTP Authentication and Headers (OpenAI API) | 09:59 |
7 | Setting Up the Environment: Jupyter Notebook and Pandas | 03:55 |
8 | Introduction to Pandas: Series and DataFrames | 06:09 |
9 | Importing and Exporting Data: Working with CSV Files | 06:38 |
10 | Exporting Data to Different Formats: Excel, JSON, SQL, YAML | 07:47 |
11 | Modifying Data: Adding and Dropping Columns and Rows | 06:05 |
12 | Accessing Data: Using df.iloc[] and df.loc[] | 05:43 |
13 | Sampling and Previewing Data: Using df.sample() and df.head() | 06:15 |
14 | Filtering Data: Masks and pandas.Series.between() | 07:15 |
15 | Sorting Data: Understanding Pandas Sorting Methods | 07:11 |
16 | Handling Missing Data | 04:44 |
17 | Aggregations and Grouping Data | 04:54 |
18 | Project: Analyzing Website Traffic Data | 04:33 |
19 | Time Series Data Manipulation in Pandas | 06:59 |
20 | Foundations of LLMs and Generative AI | 08:32 |
21 | Tokens, Context Windows and Cost | 05:26 |
22 | Exploring LLM APIs: AI as a Service | 09:23 |
23 | OpenAI Playground, Google AI Studio, and Anthropic Workbench | 06:06 |
24 | Challenges and Limitations of LLMs | 09:03 |
25 | The State of AI: Present and Future – The Good and the Bad | 10:06 |
26 | Pretraining Data (Internet) | 06:41 |
27 | Tokenization | 06:07 |
28 | Training the Neural Network | 09:26 |
29 | Post-Training: Supervised Fine-Tuning (SFT) and Reinforcement Learning (RL) | 08:26 |
30 | Reinforcement Learning (RL) | 05:30 |
31 | Becoming Better than Humans: AGI and ASI with RL | 07:32 |
32 | Reinforcement Learning with Human Feedback (RLHF) | 06:23 |
33 | How to Deal With Hallucinations | 07:37 |
34 | Using Tools: Internet Search, Interpreter, and Deep Search | 07:49 |
35 | Big Ideas Recap (Core Summary) | 09:51 |
36 | Authenticating to OpenAI using Python Dotenv | 08:17 |
37 | Chat Completions Endpoint | 06:58 |
38 | Developer Message | 04:31 |
39 | Streaming API Responses | 04:31 |
40 | Using Local Base64 Images as Input | 06:44 |
41 | Using Online Images as Input | 02:05 |
42 | Chat Completion API Parameters: Temperature and Seed | 06:14 |
43 | Chat Completion API Parameters: Top P, Max_Tokens, Penalties | 09:50 |
44 | Diving into OpenAI’s Reasoning Models (o1 and o3) | 07:56 |
45 | Best Practices for Prompting Reasoning Models | 05:26 |
46 | Transcriptions with Whisper | 05:48 |
47 | Translations with Whisper | 03:12 |
48 | Text-to-Speech (TTS) API | 07:03 |
49 | Generating Original Images Using DALL-E 3 | 10:50 |
50 | Creating Variations of Images with DALL-E | 03:05 |
51 | Editing Images with DALL-E | 05:40 |
52 | Intro to Prompt Engineering | 02:41 |
53 | Tactic 1: Position Instruction Clearly with Delimiters | 04:13 |
54 | Tactic 2: Provide Detailed Instructions for the Context | 06:38 |
55 | Tactic 3: Use the Rich Text Format (RTF) | 07:46 |
56 | Tactic 4: Few Shot Prompting | 08:13 |
57 | Tactic 5: Specify the Steps Required to Complete a Task | 05:17 |
58 | Tactic 6: Give Models Time to Think | 02:13 |
59 | Other Tactics and Principles for Better Prompting | 05:38 |
60 | Avoid Hallucinations Using Guarding | 03:07 |
61 | Summary | 02:07 |
62 | Project Introduction | 02:32 |
63 | Creating a Daily Meal Plan Using OpenAI API | 05:39 |
64 | Creating the Prompt | 08:43 |
65 | Running the Program | 03:24 |
66 | Generating Original Images for the Recipes using DALL-E | 11:54 |
67 | Narrate the Meals using the Text-to-Speech Model | 10:24 |
68 | Setting Up the Python SDK and Authenticating for Gemini API | 09:51 |
69 | Generating Text From Text Prompts | 04:15 |
70 | Streaming Gemini Responses | 02:59 |
71 | Generating Text From Images | 05:49 |
72 | Gemini API Generation Parameters: Controlling How the Model Generates Responses | 06:12 |
73 | Gemini API Generation Parameters Explained | 10:14 |
74 | Building Chat Conversations | 07:54 |
75 | Project: Building a Conversational Agent Using Gemini Pro | 07:19 |
76 | System Instructions | 05:43 |
77 | The File API: Prompting with Media Files | 06:09 |
78 | Tokens | 06:42 |
79 | Prompting with Audio | 04:21 |
80 | Project Requirements | 05:54 |
81 | Building the Application | 05:23 |
82 | Testing the Application | 01:49 |
83 | Streamlit: Transform Your Jupyter Notebooks into Interactive Web Apps | 02:49 |
84 | Creating the Web App Layout With Streamlit | 11:20 |
85 | Saving and Displaying the History Using the Streamlit Session State | 05:20 |
86 | Exercise: Imposter Syndrome | 02:57 |
87 | Project Introduction | 00:57 |
88 | Getting Images Using a Generator | 06:18 |
89 | Renaming Images Using Gemini | 09:35 |
90 | LangChain Demo | 05:06 |
91 | Introduction to LangChain | 05:10 |
92 | Working with the OpenAI Models | 08:43 |
93 | Caching LLM Responses | 04:57 |
94 | LLM Streaming | 02:58 |
95 | Prompt Templates | 05:36 |
96 | ChatPromptTemplate | 05:55 |
97 | Understanding Chains | 07:48 |
98 | Installing the Python Libraries for Gemini and Authenticating to Gemini | 04:31 |
99 | Integrating Gemini with LangChain | 06:02 |
100 | Using a System Prompt and Enabling Streaming | 06:32 |
101 | Multimodal AI With Gemini | 14:13 |
102 | LangChain Tools: DuckDuckGo and Wikipedia | 11:08 |
103 | Creating a ReAct Agent | 13:30 |
104 | Testing the ReAct Agent | 04:50 |
105 | Intro to OpenAI's Text Embeddings | 03:16 |
106 | Generating Simple Embeddings | 05:54 |
107 | Embedding the Dataset for Similarity Searches | 04:52 |
108 | Estimating Embedding Costs With Tiktoken | 05:12 |
109 | Performing Semantic Searches | 07:05 |
110 | Project Introduction | 06:09 |
111 | Loading Your Custom (Private) PDF Documents | 07:28 |
112 | Loading Different Document Formats | 05:13 |
113 | Public and Private Service Loaders | 04:38 |
114 | Chunking Strategies and Splitting the Documents | 06:39 |
115 | Intro to Vector Stores and Authenticating to Pinecone | 09:02 |
116 | Working with Pinecone Indexes | 09:32 |
117 | Working with Vectors | 08:43 |
118 | Pinecone Namespaces | 06:44 |
119 | Embedding and Uploading to a Vector Database (Pinecone) | 13:53 |
120 | Asking and Getting Answers | 11:43 |
121 | Using Chroma as a Vector DB | 11:11 |
122 | Adding Memory to the RAG System (Chat History) | 09:26 |
123 | Using a Custom Prompt | 08:10 |
124 | Introduction to Agents and ReAct | 04:20 |
125 | Creating the Agent Class | 02:42 |
126 | Creating the ReAct Prompt | 02:31 |
127 | Creating the Tools | 02:41 |
128 | Testing the Agent | 06:06 |
129 | Automating the Agent | 07:01 |
130 | LangGraph Concepts and Core Components | 05:43 |
131 | Building a Chatbot | 05:30 |
132 | Visualizing the Graph | 02:13 |
133 | Running the Chatbot | 01:32 |
134 | Tavily AI | 08:29 |
135 | Enhancing the ChatBot with Tools | 08:17 |
136 | Adding Memory to the Chatbot | 07:06 |
137 | Intro to Reflection | 02:14 |
138 | Generate | 04:16 |
139 | Reflect and Repeat | 02:33 |
140 | Define the Graph - Part 1 | 03:44 |
141 | Define the Graph - Part 2 | 02:49 |
142 | Running the App | 03:55 |
143 | Intro to LangSmith | 03:29 |
144 | Setting Up LangSmith | 01:55 |
145 | Tracing with LangSmith | 06:17 |
146 | Tracing the Reflective Agentic App | 03:51 |
147 | Project Overview | 01:48 |
148 | Defining the Agent State and the Prompts | 07:39 |
149 | Implementing Agents and Nodes | 09:39 |
150 | Defining the Conditional Edge | 01:27 |
151 | Defining the Graph | 04:25 |
152 | Running the App | 04:07 |
153 | Tracing the App with LangSmith | 02:51 |
154 | Note | 02:16 |
155 | Application Overview | 03:34 |
156 | Extracting Data from ArXiv with Pandas | 12:44 |
157 | Downloading Research Papers | 04:53 |
158 | Loading, Splitting and Expanding Data | 09:54 |
159 | Building a Knowledge Base for RAG | 05:35 |
160 | Creating a Pinecone Index | 07:17 |
161 | Loading the Knowledge Base and Deploying to Pinecone | 05:04 |
162 | Developing Custom Tools | 05:13 |
163 | Implementing the ArXiv Fetch Tool | 08:01 |
164 | Unlocking Web Search with Google SerpAPI | 03:29 |
165 | Building Google SerpAPI Tools | 04:26 |
166 | Creating RAG Tools | 06:20 |
167 | Implementing the Final Answer Generation Tool | 02:18 |
168 | Initializing the Oracle LLM | 11:02 |
169 | Testing the Ecosystem | 03:33 |
170 | Building a Decision-Making Pipeline | 08:34 |
171 | Defining the Agent State | 03:25 |
172 | Defining the Graph | 06:36 |
173 | Generating Reports | 04:27 |
174 | Building the Final Research Report | 05:20 |
175 | Concluding the Project | 06:23 |
176 | Understanding Python Modules | 06:17 |
177 | The OS Module | 07:57 |
178 | Advanced Import Techniques and Best Practices | 04:11 |
179 | Using __name__ == '__main__' for Modular and Reusable Code | 06:24 |
180 | Mastering Python Package Management with pip | 08:35 |
181 | Thank You! | 01:18 |