Responsive LLM Applications with Server-Sent Events
1h 18m 18s
English
Paid
November 23, 2024
Large Language Models (LLMs) are transforming entire industries, but integrating them into user interfaces with real-time data streaming comes with unique challenges. In this course, you will learn to seamlessly embed LLM APIs into applications and build AI interfaces for streaming text and chat using TypeScript, React, and Python. We will develop a fully functional AI application step by step, with high-quality code and a flexible implementation.
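As a flavor of the streaming approach the course builds on, here is a minimal sketch of reading a Server-Sent Events response in TypeScript. The endpoint path (/api/completion), the request body, and the plain-text token payload are illustrative assumptions, not the course's actual API.

```typescript
// Minimal SSE consumption sketch (browser fetch + ReadableStream).
// NOTE: /api/completion and the { prompt } body are hypothetical placeholders.
async function streamCompletion(
  prompt: string,
  onToken: (token: string) => void,
): Promise<void> {
  const response = await fetch("/api/completion", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt }),
  });
  if (!response.ok || !response.body) {
    throw new Error(`Streaming request failed: ${response.status}`);
  }

  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  let buffer = "";

  while (true) {
    const { value, done } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });

    // SSE events are separated by a blank line; each payload line starts with "data: ".
    const events = buffer.split("\n\n");
    buffer = events.pop() ?? ""; // keep the trailing, possibly incomplete event
    for (const event of events) {
      for (const line of event.split("\n")) {
        if (line.startsWith("data: ")) {
          onToken(line.slice("data: ".length));
        }
      }
    }
  }
}
```

A hook in the spirit of the course's useCompletion would typically wrap a function like this and append each token to React state so the UI re-renders as the model streams.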
As part of the course, you will create an LLM application that includes:
- an autocompletion scenario (translating English to emoji),
- a chat,
- a retrieval-augmented generation scenario,
- AI agent scenarios (code execution, a data analysis agent).
This application can serve as a starting point for most projects, saving significant time, and its flexible design makes it easy to add new tools as needed.
By the end of the course, you will have mastered the end-to-end implementation of a flexible, high-quality LLM application, and you will have the knowledge and skills needed to build complex LLM-based solutions.
Watch Online: Responsive LLM Applications with Server-Sent Events
| # | Title | Duration |
|---|---|---|
| 1 | Introduction to AI Product Development | 03:48 |
| 2 | Picking the stack - Navigating JavaScript and Python | 06:10 |
| 3 | Designing a Hybrid Web Application Architecture with JavaScript and Python | 05:08 |
| 4 | Streaming events with Server-Sent Events and WebSockets | 06:31 |
| 5 | Discovering the OpenAI Completion API | 06:30 |
| 6 | Handling Server-Sent Events with JavaScript | 06:14 |
| 7 | Building the useCompletion hook | 07:01 |
| 8 | Rendering Completion Output | 01:26 |
| 9 | Mocking Streams | 03:29 |
| 10 | Testing the useCompletion hook | 03:11 |
| 11 | Creating a FastAPI server | 01:55 |
| 12 | Exploring asynchronous programming in Python | 03:42 |
| 13 | Integrating Langchain with FastAPI for Asynchronous Streaming | 04:34 |
| 14 | Testing with PyTest and LangChain | 01:02 |
| 15 | Building the useChat hook | 05:12 |
| 16 | Building the User Interface | 01:53 |
| 17 | Discovering Retrieval Augmented Generation | 03:19 |
| 18 | Building a Semantic Search Engine with Chroma | 03:37 |
| 19 | Adding Retrieval-Augmented Generation to the chat | 02:14 |
| 20 | Final words | 01:22 |
Similar courses to Responsive LLM Applications with Server-Sent Events
| Course | Platform | Category | Duration |
|---|---|---|---|
| Loopback 4: Modern ways to Build APIs in Typescript & NodeJs | udemy | TypeScript, Node.js, MongoDB | 5h 14m 32s |
| The Complete Guide to Django REST Framework and Vue JS | udemy | Python, Vue, Django | 13h 40m 40s |
| Python Interview Espresso | interviewespresso (Aaron Jack) | Preparing for an interview, Python | 5h 11m 29s |
| React Chrome Extension boilerplate \| Shipped | Luca Restagno (shipped.club) | React.js, Assemblies, ready-made solutions for development | |
| React Simplified - Advanced | webdevsimplified.com | React.js | 11h 29m 50s |
| React - The Complete Guide 2024 | udemy (Academind Pro) | React.js | 64h 33m 17s |
| A/B Testing for Data Science | LunarTech | Others, Python | 1h 47m 56s |
| Build a ChatBot with Nuxt, TypeScript and the OpenAI Assistants API | zerotomastery.io | TypeScript, ChatGPT, Nuxt | 2h 41m |
| Composing Layouts in React | fullstack.io | React.js, CSS | 4h 38m 12s |