Responsive LLM Applications with Server-Sent Events
1h 18m 18s
English
Paid
Large Language Models (LLMs) are transforming entire industries, but integrating them into user interfaces with real-time data streaming comes with unique challenges. In this course, you will learn to seamlessly embed LLM APIs into applications and build AI interfaces for streaming text and chat using TypeScript, React, and Python. Step by step, we will develop a fully functional AI application with high-quality, flexible code.
As part of the course, you will create an LLM application that includes:
- an autocompletion scenario (translation from English to emoji),
- a chat,
- a retrieval-augmented generation (RAG) scenario,
- AI agent scenarios (code execution, a data analysis agent).
This application can serve as a starting point for most projects, saving significant time, and its flexible design makes it easy to add new tools as needed.
By the end of the course, you will have mastered the end-to-end implementation of a flexible and high-quality LLM application. You will also gain the knowledge and skills necessary to build complex LLM-based solutions.
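At the heart of the course is streaming model output to the browser over Server-Sent Events. Purely as an illustrative sketch (not the course's actual code), the snippet below shows how a client might read a streamed completion with `fetch` and a `ReadableStream`; the `/api/completion` endpoint, request shape, and `[DONE]` sentinel are assumptions.

```typescript
// Illustrative sketch only: endpoint, payload shape, and "[DONE]" sentinel
// are assumptions, not the course's actual API.
async function streamCompletion(
  prompt: string,
  onToken: (token: string) => void
): Promise<void> {
  const response = await fetch("/api/completion", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt }),
  });
  if (!response.body) throw new Error("Response has no readable body");

  const reader = response.body.getReader();
  const decoder = new TextDecoder();

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    // SSE frames arrive as lines of the form "data: <payload>".
    // For simplicity, this assumes each chunk contains whole lines.
    for (const line of decoder.decode(value, { stream: true }).split("\n")) {
      if (!line.startsWith("data: ")) continue;
      const payload = line.slice("data: ".length).trim();
      if (payload === "[DONE]") return; // assumed end-of-stream marker
      onToken(payload);
    }
  }
}
```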
| # | Title | Duration |
|---|---|---|
| 1 | Introduction to AI Product Development | 03:48 |
| 2 | Picking the stack - Navigating JavaScript and Python | 06:10 |
| 3 | Designing a Hybrid Web Application Architecture with JavaScript and Python | 05:08 |
| 4 | Streaming events with Server-Sent Events and WebSockets | 06:31 |
| 5 | Discovering the OpenAI Completion API | 06:30 |
| 6 | Handling Server-Sent Events with JavaScript | 06:14 |
| 7 | Building the useCompletion hook | 07:01 |
| 8 | Rendering Completion Output | 01:26 |
| 9 | Mocking Streams | 03:29 |
| 10 | Testing the useCompletion hook | 03:11 |
| 11 | Creating a FastAPI server | 01:55 |
| 12 | Exploring asynchronous programming in Python | 03:42 |
| 13 | Integrating Langchain with FastAPI for Asynchronous Streaming | 04:34 |
| 14 | Testing with PyTest and LangChain | 01:02 |
| 15 | Building the useChat hook | 05:12 |
| 16 | Building the User Interface | 01:53 |
| 17 | Discovering Retrieval Augmented Generation | 03:19 |
| 18 | Building a Semantic Search Engine with Chroma | 03:37 |
| 19 | Adding Retrieval-Augmented Generation to the chat | 02:14 |
| 20 | Final words | 01:22 |
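Lessons 6-10 revolve around wrapping this streaming logic in a reusable React hook. As a hedged sketch of the general shape (the course's actual useCompletion implementation may differ), such a hook might accumulate streamed tokens into component state, reusing the `streamCompletion` helper sketched above:

```typescript
import { useCallback, useState } from "react";
// Assumed module path for the fetch-based SSE reader sketched earlier.
import { streamCompletion } from "./streamCompletion";

// Hypothetical shape of a useCompletion hook; names and behaviour are
// illustrative, not the course's implementation.
export function useCompletion() {
  const [completion, setCompletion] = useState("");
  const [isLoading, setIsLoading] = useState(false);

  const complete = useCallback(async (prompt: string) => {
    setCompletion("");
    setIsLoading(true);
    try {
      // Append each streamed token to the accumulated completion text.
      await streamCompletion(prompt, (token) =>
        setCompletion((prev) => prev + token)
      );
    } finally {
      setIsLoading(false);
    }
  }, []);

  return { completion, isLoading, complete };
}
```

A component can then call `complete(prompt)` and render `completion` as tokens arrive.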
Similar courses to Responsive LLM Applications with Server-Sent Events

REACT ROUTER (v6) by ui.dev (ex. Tyler McGinnis)
Category: React.js
Duration: 3h 15m 27s
Course

Web Performance Fundamentals by Nadia Makarevich
Category: React.js, Other (Frontend)
Book

Modern Python Projects by Talkpython
Category: Python
Duration: 8h 45m 6s
Course

Build and Deploy a SaaS AI Agent Platform by Code With Antonio
Category: React.js, Next.js, Other (AI)
Duration: 13h 24m 14s
Course

Build fancy landing pages with React and Threejs by Paul Henschel (@0xca0a)
Category: React.js, Three.js
Duration: 38m 9s
Course

Microservices with NodeJS, React, Typescript and Kubernetes by udemy
Category: TypeScript, React.js, Node.js, Kubernetes
Duration: 95h 13m 4s
Course

Python 3: Deep Dive (Part 3 - Hash Maps) by udemy
Category: Python
Duration: 20h 23m 50s
Course

React Js A-Z With Laravel - For Beginner to Advanced Level by udemy
Category: React.js, Laravel
Duration: 68h 1m 33s
Course

Epic React (Epic React Pro) by Kent C. Dodds
Category: React.js
Duration: 27h 57m 10s
Course

The Ultimate React Course 2024: React, Redux & More by udemy
Category: React.js, Redux
Duration: 83h 56m 37s
Course