Responsive LLM Applications with Server-Sent Events
1h 18m 18s
English
Paid
November 23, 2024
Large Language Models (LLMs) are transforming entire industries, but integrating them into user interfaces with real-time data streaming comes with unique challenges. In this course, you will learn to seamlessly embed LLM APIs into applications and build AI interfaces for streaming text and chat using TypeScript, React, and Python. We will develop a fully functional AI application step by step, with high-quality code and a flexible implementation.
As part of the course, you will create an LLM application that includes:
- an autocompletion scenario (translating English to emoji),
- a chat,
- a retrieval-augmented generation scenario,
- AI agent scenarios (code execution, a data analysis agent).
This application can serve as a starting point for most projects, saving significant time, and its flexible design makes it easy to add new tools as needed.
By the end of the course, you will have mastered the end-to-end implementation of a flexible, high-quality LLM application and gained the knowledge and skills needed to build complex LLM-based solutions.
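The course builds the client-side streaming piece step by step (handling Server-Sent Events with JavaScript, building the useCompletion hook, rendering the output). As a rough sketch of the kind of code involved, the hook below consumes an SSE stream in TypeScript; the POST /api/completion endpoint and the OpenAI-style `data: ... [DONE]` framing are assumptions for illustration, not the course's actual implementation.

```tsx
import { useCallback, useState } from "react";

// Minimal sketch (not the course's actual code) of a useCompletion hook that
// streams Server-Sent Events from a hypothetical POST /api/completion endpoint.
// The request payload and the "data: ... / [DONE]" framing are assumptions.
export function useCompletion(endpoint = "/api/completion") {
  const [output, setOutput] = useState("");
  const [isStreaming, setIsStreaming] = useState(false);

  const complete = useCallback(
    async (prompt: string) => {
      setOutput("");
      setIsStreaming(true);
      try {
        const response = await fetch(endpoint, {
          method: "POST",
          headers: { "Content-Type": "application/json" },
          body: JSON.stringify({ prompt }),
        });
        if (!response.body) throw new Error("Response has no body to stream");

        const reader = response.body.getReader();
        const decoder = new TextDecoder();
        let buffer = "";

        while (true) {
          const { done, value } = await reader.read();
          if (done) break;
          buffer += decoder.decode(value, { stream: true });

          // SSE events are separated by blank lines; each event carries "data:" lines.
          const events = buffer.split("\n\n");
          buffer = events.pop() ?? ""; // keep any incomplete event for the next chunk
          for (const event of events) {
            for (const line of event.split("\n")) {
              if (!line.startsWith("data:")) continue;
              const data = line.slice(5).trim();
              if (data === "[DONE]") return;
              setOutput((prev) => prev + data);
            }
          }
        }
      } finally {
        setIsStreaming(false);
      }
    },
    [endpoint]
  );

  return { output, isStreaming, complete };
}
```

In a component, you would call `complete(prompt)` from an event handler and render `output` as it grows; the rendering, mocking, and testing lessons below cover roughly this flow.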
# | Title | Duration |
---|---|---|
1 | Introduction to AI Product Development | 03:48 |
2 | Picking the stack - Navigating JavaScript and Python | 06:10 |
3 | Designing a Hybrid Web Application Architecture with JavaScript and Python | 05:08 |
4 | Streaming events with Server-Sent Events and WebSockets | 06:31 |
5 | Discovering the OpenAI Completion API | 06:30 |
6 | Handling Server-Sent Events with JavaScript | 06:14 |
7 | Building the useCompletion hook | 07:01 |
8 | Rendering Completion Output | 01:26 |
9 | Mocking Streams | 03:29 |
10 | Testing the useCompletion hook | 03:11 |
11 | Creating a FastAPI server | 01:55 |
12 | Exploring asynchronous programming in Python | 03:42 |
13 | Integrating Langchain with FastAPI for Asynchronous Streaming | 04:34 |
14 | Testing with PyTest and LangChain | 01:02 |
15 | Building the useChat hook | 05:12 |
16 | Building the User Interface | 01:53 |
17 | Discovering Retrieval Augmented Generation | 03:19 |
18 | Building a Semantic Search Engine with Chroma | 03:37 |
19 | Adding Retrieval-Augmented Generation to the chat | 02:14 |
20 | Final words | 01:22 |
Similar courses to Responsive LLM Applications with Server-Sent Events
Title | Source | Duration |
---|---|---|
Classic React | Build UI | 4h 10m 15s
Scale React Development with Nx | egghead | 1h 34m 10s
The Automation Bootcamp: Zero to Mastery | zerotomastery.io | 22h 39m 15s
Async Techniques and Examples in Python | Talkpython | 5h 2m 11s
The React practice course, learn by building projects. | udemy | 43h 45m 48s
Understanding TypeScript - 2023 Edition | udemy (Academind Pro) | 14h 54m 54s
The Complete Guide to Advanced React Patterns (2020) | udemy | 6h 1m 51s
Build a Shopping Cart App | Reed Barger | 1h 41m 52s
API Testing with Python 3 & PyTest, Backend Automation 2022 | udemy | 13h 59m 14s
Angular Architecture. How to Build Scalable Web Applications | udemy | 7h 34m 45s