Responsive LLM Applications with Server-Sent Events
Duration: 1h 18m 18s
Language: English
Paid
Large Language Models (LLMs) are transforming entire industries, but integrating them into user interfaces with real-time data streaming comes with unique challenges. In this course, you will learn to seamlessly embed LLM APIs into applications and create AI interfaces for streaming text and chat using TypeScript, React, and Python. Step by step, we will develop a fully functional AI application with high-quality code and a flexible implementation.
As part of the course, you will create an LLM application that includes:
- an autocompletion scenario (translation from English to emoji),
- a chat,
- a retrieval-augmented generation (RAG) scenario,
- AI agent scenarios (code execution, data analysis agent).
This application can serve as a starting point for most projects, saving a great deal of time, and its flexible design makes it easy to add new tools as needed.
By the end of the course, you will have mastered the end-to-end implementation of a flexible, high-quality LLM application and gained the knowledge and skills needed to build complex LLM-based solutions.
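To give a sense of the streaming pattern at the heart of the course, here is a minimal sketch of how a browser client might consume a token stream over Server-Sent Events. The `/api/completion` endpoint, its query parameter, and the `[DONE]` sentinel are assumptions for illustration, not the course's actual API.

```ts
// Minimal sketch: consume a Server-Sent Events token stream in the browser.
// The endpoint path, query parameter, and "[DONE]" sentinel are hypothetical.
function streamCompletion(
  prompt: string,
  onToken: (token: string) => void,
  onDone: () => void,
): () => void {
  const source = new EventSource(
    `/api/completion?prompt=${encodeURIComponent(prompt)}`,
  );

  // Each SSE message carries one chunk of generated text.
  source.onmessage = (event) => {
    const chunk = event.data as string;
    if (chunk === "[DONE]") {
      source.close();
      onDone();
      return;
    }
    onToken(chunk);
  };

  // Stop the default auto-reconnect behaviour on errors.
  source.onerror = () => {
    source.close();
    onDone();
  };

  // Return a cleanup function so callers (e.g. a React effect) can cancel early.
  return () => source.close();
}
```

Note that `EventSource` only issues GET requests; for chat-style POST payloads, the same idea can be implemented with `fetch` and a streamed response body, as sketched after the lesson list below.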
Watch Online: Responsive LLM Applications with Server-Sent Events
| # | Title | Duration (mm:ss) |
|---|---|---|
| 1 | Introduction to AI Product Development | 03:48 |
| 2 | Picking the stack - Navigating JavaScript and Python | 06:10 |
| 3 | Designing a Hybrid Web Application Architecture with JavaScript and Python | 05:08 |
| 4 | Streaming events with Server-Sent Events and WebSockets | 06:31 |
| 5 | Discovering the OpenAI Completion API | 06:30 |
| 6 | Handling Server-Sent Events with JavaScript | 06:14 |
| 7 | Building the useCompletion hook | 07:01 |
| 8 | Rendering Completion Output | 01:26 |
| 9 | Mocking Streams | 03:29 |
| 10 | Testing the useCompletion hook | 03:11 |
| 11 | Creating a FastAPI server | 01:55 |
| 12 | Exploring asynchronous programming in Python | 03:42 |
| 13 | Integrating LangChain with FastAPI for Asynchronous Streaming | 04:34 |
| 14 | Testing with PyTest and LangChain | 01:02 |
| 15 | Building the useChat hook | 05:12 |
| 16 | Building the User Interface | 01:53 |
| 17 | Discovering Retrieval Augmented Generation | 03:19 |
| 18 | Building a Semantic Search Engine with Chroma | 03:37 |
| 19 | Adding Retrieval-Augmented Generation to the chat | 02:14 |
| 20 | Final words | 01:22 |
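Lessons 6-10 revolve around handling Server-Sent Events in JavaScript and building a useCompletion hook around them. As a rough, hypothetical sketch of that pattern (the endpoint, payload shape, and SSE framing below are assumptions, not the course's actual code), such a hook could look like this:

```ts
import { useCallback, useState } from "react";

// Hypothetical useCompletion-style hook: POSTs a prompt and appends
// streamed "data:" lines to local state as they arrive.
export function useCompletion(endpoint = "/api/completion") {
  const [completion, setCompletion] = useState("");
  const [loading, setLoading] = useState(false);

  const complete = useCallback(
    async (prompt: string) => {
      setCompletion("");
      setLoading(true);
      try {
        const response = await fetch(endpoint, {
          method: "POST",
          headers: { "Content-Type": "application/json" },
          body: JSON.stringify({ prompt }),
        });
        if (!response.body) return;

        const reader = response.body.getReader();
        const decoder = new TextDecoder();

        // Simplified parsing: assumes each chunk contains whole "data: ..." lines.
        while (true) {
          const { value, done } = await reader.read();
          if (done) break;
          for (const line of decoder.decode(value, { stream: true }).split("\n")) {
            if (line.startsWith("data: ")) {
              setCompletion((prev) => prev + line.slice("data: ".length));
            }
          }
        }
      } finally {
        setLoading(false);
      }
    },
    [endpoint],
  );

  return { completion, loading, complete };
}
```

Lessons 9 and 10 then cover mocking such streams and testing the hook without a live model behind it.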
Similar courses to Responsive LLM Applications with Server-Sent Events

OpenAI Assistants with OpenAI Python API (Udemy)
Category: Python, ChatGPT
Duration: 4h 13m 2s

React and NestJS: A Practical Guide with Docker (Udemy)
Category: React.js, Docker, NestJS
Duration: 6h 54m 20s

Instagram Clone Coding 3.0 (Nomad Coders)
Category: React.js, Node.js, GraphQL, React Native
Duration: 20h 48m 39s

NFT Marketplace in React, Typescript & Solidity - Full Guide (Udemy)
Category: TypeScript, React.js, Decentralized Applications (dApps) / 'Web 3'
Duration: 16h 20m 55s

Build a Jira clone (Code With Antonio)
Category: React.js, Next.js
Duration: 16h 26m 4s

TinyHouse: A Fullstack React Masterclass with TypeScript and GraphQL (fullstack.io)
Category: TypeScript, React.js, GraphQL
Duration: 30h 50m 42s

AWS & Typescript Masterclass - CDK V2, Serverless, React (Udemy)
Category: TypeScript, AWS
Duration: 10h 48m 18s

React Simplified - Beginner (webdevsimplified.com)
Category: React.js
Duration: 10h 58m 46s

PHP Symfony 4 API Platform + React.js Full Stack Masterclass (Udemy)
Category: React.js, Symfony
Duration: 19h 24m 17s