Responsive LLM Applications with Server-Sent Events
Duration: 1h 18m 18s
Language: English
Access: Paid
Large Language Models (LLMs) are transforming entire industries, but integrating them into user interfaces with real-time data streaming poses unique challenges. In this course, you will learn to embed LLM APIs into applications and build AI interfaces that stream text and chat responses, using TypeScript, React, and Python. Step by step, we will develop a fully functional AI application with high-quality, flexible code.
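To give a flavor of what "streaming text" over Server-Sent Events involves, here is a minimal sketch in TypeScript of parsing an SSE text buffer into events. The `event:`/`data:` field names and the blank-line separator come from the SSE specification; the `token` event name used in the example is an illustrative assumption, not the course's exact protocol.

```typescript
interface SseEvent {
  event: string;
  data: string;
}

// Split a raw SSE text buffer into events. Per the SSE wire format,
// events are separated by a blank line, and each line within an event
// is a "field: value" pair.
function parseSse(buffer: string): SseEvent[] {
  const events: SseEvent[] = [];
  for (const block of buffer.split("\n\n")) {
    if (!block.trim()) continue; // skip trailing empty chunk
    let event = "message"; // default event type per the SSE spec
    const data: string[] = [];
    for (const line of block.split("\n")) {
      if (line.startsWith("event:")) event = line.slice(6).trim();
      else if (line.startsWith("data:")) data.push(line.slice(5).trim());
    }
    events.push({ event, data: data.join("\n") });
  }
  return events;
}
```

In a browser, the built-in `EventSource` API handles this parsing for you; a hand-rolled parser like the sketch above becomes relevant when streaming over `fetch`, as the course's `useCompletion` hook scenario suggests.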
As part of the course, you will build an LLM application that includes:
- an autocompletion scenario (translating English text to emoji),
- a chat,
- a retrieval-augmented generation (RAG) scenario,
- AI agent scenarios (code execution, a data-analysis agent).
This application can serve as a starting point for most projects, saving significant time, and its flexible design makes it easy to add new tools as needed.
By the end of the course, you will have mastered the end-to-end implementation of a flexible, high-quality LLM application and gained the knowledge and skills to build complex LLM-based solutions.
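The server side of such an application performs the inverse of SSE parsing: framing each LLM token as an SSE message before writing it to the response. A hedged sketch of that framing step, in TypeScript for consistency with the client-side examples; the function name and the use of a custom event type are assumptions, not the course's exact API (the course implements the backend with FastAPI and LangChain).

```typescript
// Serialize a payload into one SSE frame: optional "event:" line,
// one "data:" line per line of payload, terminated by a blank line.
function toSseFrame(data: string, event: string = "message"): string {
  const lines = data.split("\n").map((l) => `data: ${l}`);
  return `event: ${event}\n${lines.join("\n")}\n\n`;
}
```

A streaming endpoint would call this once per generated token and flush each frame immediately, which is what lets the UI render text as it is produced rather than waiting for the full completion.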
Watch Online: Responsive LLM Applications with Server-Sent Events
| # | Title | Duration |
|---|---|---|
| 1 | Introduction to AI Product Development | 03:48 |
| 2 | Picking the stack - Navigating JavaScript and Python | 06:10 |
| 3 | Designing a Hybrid Web Application Architecture with JavaScript and Python | 05:08 |
| 4 | Streaming events with Server-Sent Events and WebSockets | 06:31 |
| 5 | Discovering the OpenAI Completion API | 06:30 |
| 6 | Handling Server-Sent Events with JavaScript | 06:14 |
| 7 | Building the useCompletion hook | 07:01 |
| 8 | Rendering Completion Output | 01:26 |
| 9 | Mocking Streams | 03:29 |
| 10 | Testing the useCompletion hook | 03:11 |
| 11 | Creating a FastAPI server | 01:55 |
| 12 | Exploring asynchronous programming in Python | 03:42 |
| 13 | Integrating Langchain with FastAPI for Asynchronous Streaming | 04:34 |
| 14 | Testing with PyTest and LangChain | 01:02 |
| 15 | Building the useChat hook | 05:12 |
| 16 | Building the User Interface | 01:53 |
| 17 | Discovering Retrieval Augmented Generation | 03:19 |
| 18 | Building a Semantic Search Engine with Chroma | 03:37 |
| 19 | Adding Retrieval-Augmented Generation to the chat | 02:14 |
| 20 | Final words | 01:22 |
Similar courses to Responsive LLM Applications with Server-Sent Events

Project React. Build a complex React project as a total beginner (Cosden Solutions)
Category: React.js
Duration: 16h 31m 5s
Course

Full Web Apps with FastAPI (Talkpython)
Category: Python, Django
Duration: 7h 12m 4s
Course

React Testing Library and Jest: The Complete Guide (Udemy, Stephen Grider)
Category: React.js
Duration: 7h 40m 24s
Course

Go Full Stack with Spring Boot and React (Udemy)
Category: React.js, Spring Boot, Spring Security
Duration: 11h 43m 36s
Course

AWS & Typescript Masterclass - CDK V2, Serverless, React (Udemy)
Category: TypeScript, AWS
Duration: 10h 48m 18s
Course

Modern APIs with FastAPI and Python Course (Talkpython)
Category: Python
Duration: 3h 53m 18s
Course

Storybook for building React apps (fullstack.io)
Category: React.js
Duration: 3h 16m 25s
Course

TypeScript Pro Essentials (Matt Pocock)
Category: TypeScript
Duration: 11h 2m 12s
Course

Data Science Jumpstart with 10 Projects Course (Talkpython)
Category: Python
Duration: 3h 12m 21s
Course

React Chrome Extension boilerplate | Shipped (Luca Restagno, shipped.club)
Category: React.js, boilerplates and ready-made solutions for development
Duration:
Course