Responsive LLM Applications with Server-Sent Events
Duration: 1h 18m 18s
Language: English
Access: Paid
Large Language Models (LLMs) are transforming entire industries, but integrating them into user interfaces with real-time data streaming poses unique challenges. In this course, you will learn to embed LLM APIs into applications and build AI interfaces for streaming text and chat using TypeScript, React, and Python. Step by step, we will develop a fully functional AI application with high-quality, flexible code.
As part of the course, you will create an LLM application that includes:
- an autocompletion scenario (translating English into emoji),
- a chat,
- a retrieval-augmented generation scenario,
- AI agent scenarios (code execution, a data-analysis agent).
This application can serve as a starting point for most projects, saving significant time, and its flexibility allows new tools to be added as needed.
By the end of the course, you will have mastered the end-to-end implementation of a flexible, high-quality LLM application and gained the knowledge and skills to build complex LLM-based solutions.
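To give a flavor of the streaming mechanism at the heart of the course, here is a minimal sketch (in Python, not code from the course) of how the Server-Sent Events wire format carries LLM token chunks: frames are separated by a blank line, and each frame's `data:` lines hold one payload.

```python
def parse_sse(raw: str) -> list[str]:
    """Parse an SSE stream into its event data payloads.

    Frames are separated by a blank line; each frame carries one or more
    `data:` lines whose values are joined with newlines.
    """
    events = []
    for frame in raw.split("\n\n"):
        data_lines = []
        for line in frame.split("\n"):
            if line.startswith("data:"):
                value = line[5:]
                if value.startswith(" "):
                    value = value[1:]  # the SSE spec strips one optional space after the colon
                data_lines.append(value)
        if data_lines:
            events.append("\n".join(data_lines))
    return events

# Token chunks as an LLM completion API might emit them over SSE:
stream = "data: Hello\n\ndata:  world\n\ndata: [DONE]\n\n"
print(parse_sse(stream))  # → ['Hello', ' world', '[DONE]']
```

In the course itself this parsing happens on the client (the `useCompletion` and `useChat` hooks), while the FastAPI server produces the stream.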
# | Title | Duration |
---|---|---|
1 | Introduction to AI Product Development | 03:48 |
2 | Picking the stack - Navigating JavaScript and Python | 06:10 |
3 | Designing a Hybrid Web Application Architecture with JavaScript and Python | 05:08 |
4 | Streaming events with Server-Sent Events and WebSockets | 06:31 |
5 | Discovering the OpenAI Completion API | 06:30 |
6 | Handling Server-Sent Events with JavaScript | 06:14 |
7 | Building the useCompletion hook | 07:01 |
8 | Rendering Completion Output | 01:26 |
9 | Mocking Streams | 03:29 |
10 | Testing the useCompletion hook | 03:11 |
11 | Creating a FastAPI server | 01:55 |
12 | Exploring asynchronous programming in Python | 03:42 |
13 | Integrating Langchain with FastAPI for Asynchronous Streaming | 04:34 |
14 | Testing with PyTest and LangChain | 01:02 |
15 | Building the useChat hook | 05:12 |
16 | Building the User Interface | 01:53 |
17 | Discovering Retrieval Augmented Generation | 03:19 |
18 | Building a Semantic Search Engine with Chroma | 03:37 |
19 | Adding Retrieval-Augmented Generation to the chat | 02:14 |
20 | Final words | 01:22 |
Similar courses to Responsive LLM Applications with Server-Sent Events

Course | Provider | Category | Duration |
---|---|---|---|
React Node AWS - Build infinitely Scaling MERN Stack App | udemy | React.js, AWS, Next.js, Node.js | 25h 1m 19s |
[Full Stack] Airbnb Clone Coding | Nomad Coders | Python, Django | 29h 47m 6s |
React and Laravel: Breaking a Monolith to Microservices | udemy | React.js, Docker, Laravel, Redis | 15h 7m 45s |
Statistics Bootcamp (with Python): Zero to Mastery | zerotomastery.io | Python, ChatGPT, Data processing and analysis | 20h 50m 51s |
Python and Django Full Stack Web Developer Bootcamp | udemy | Python, Django | 31h 54m 39s |
Fullstack Flask: Build a Complete SaaS App with Flask | fullstack.io | Python | 7h 33m 4s |
MERN Stack From Scratch | Brad Traversy | React.js, Node.js, MongoDB | 13h 32m 38s |
Build a Notion Clone with React and TypeScript | zerotomastery.io | TypeScript, React.js | 7h 57m 47s |
The complete React Fullstack course (2021 edition) | udemy | React.js, MongoDB | 76h 58m 6s |