
Responsive LLM Applications with Server-Sent Events

1h 18m 18s
English
Paid

Course description

Large Language Models (LLMs) are transforming entire industries, but integrating them into user interfaces with real-time data streaming comes with unique challenges. In this course, you will learn to embed LLM APIs into applications and build AI interfaces for streaming text and chat using TypeScript, React, and Python. Step by step, we will develop a fully functional AI application with high-quality code and a flexible implementation.
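To give a sense of the streaming pattern the course revolves around, here is a minimal sketch (an assumption for illustration, not the course's actual code) of reading a Server-Sent Events response in the browser with TypeScript. The /api/completion endpoint and the `data: <token>` payload format are placeholders.

```typescript
// Minimal sketch of consuming a token stream over Server-Sent Events.
// The endpoint path and payload format are assumptions for illustration.
async function streamCompletion(
  prompt: string,
  onToken: (token: string) => void
): Promise<void> {
  const response = await fetch("/api/completion", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt }),
  });
  if (!response.body) {
    throw new Error("Streaming responses are not supported in this environment");
  }

  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  let buffer = "";

  while (true) {
    const { value, done } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });

    // SSE events are separated by a blank line; "data:" lines carry the payload.
    const events = buffer.split("\n\n");
    buffer = events.pop() ?? "";
    for (const event of events) {
      for (const line of event.split("\n")) {
        if (line.startsWith("data: ")) {
          onToken(line.slice("data: ".length));
        }
      }
    }
  }
}
```

The built-in EventSource API would also work for GET endpoints; a fetch-based reader is sketched here because it allows a POST request with a JSON body.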

As part of the course, you will create an LLM application that includes:

  • an autocompletion scenario (translation from English to emoji),
  • a chat interface,
  • a retrieval-augmented generation scenario,
  • AI agent scenarios (code execution, a data analysis agent).

This application can serve as a starting point for most projects, saving significant time, and its flexible design allows new tools to be added as needed.
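As an illustration of that flexibility, the streaming helper sketched above could back a small React hook along the lines of the useCompletion hook built in the course; the shape below is an assumed sketch, not the course's actual API.

```typescript
import { useCallback, useState } from "react";

// Rough sketch of a useCompletion-style hook; the name, return shape, and the
// streamCompletion helper it calls are illustrative assumptions.
export function useCompletion() {
  const [output, setOutput] = useState("");
  const [isLoading, setIsLoading] = useState(false);

  const complete = useCallback(async (prompt: string) => {
    setOutput("");
    setIsLoading(true);
    try {
      // Append each streamed token to the rendered output as it arrives.
      await streamCompletion(prompt, (token) =>
        setOutput((previous) => previous + token)
      );
    } finally {
      setIsLoading(false);
    }
  }, []);

  return { output, isLoading, complete };
}
```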

By the end of the course, you will have mastered the end-to-end implementation of a flexible, high-quality LLM application. You will also gain the knowledge and skills necessary to create complex solutions based on LLMs.


All Course Lessons (20)

 1. Introduction to AI Product Development (03:48, free demo)
 2. Picking the stack - Navigating JavaScript and Python (06:10)
 3. Designing a Hybrid Web Application Architecture with JavaScript and Python (05:08)
 4. Streaming events with Server-Sent Events and WebSockets (06:31)
 5. Discovering the OpenAI Completion API (06:30)
 6. Handling Server-Sent Events with JavaScript (06:14)
 7. Building the useCompletion hook (07:01)
 8. Rendering Completion Output (01:26)
 9. Mocking Streams (03:29)
10. Testing the useCompletion hook (03:11)
11. Creating a FastAPI server (01:55)
12. Exploring asynchronous programming in Python (03:42)
13. Integrating LangChain with FastAPI for Asynchronous Streaming (04:34)
14. Testing with PyTest and LangChain (01:02)
15. Building the useChat hook (05:12)
16. Building the User Interface (01:53)
17. Discovering Retrieval Augmented Generation (03:19)
18. Building a Semantic Search Engine with Chroma (03:37)
19. Adding Retrieval-Augmented Generation to the chat (02:14)
20. Final words (01:22)



Similar courses

Mastering Next.js - 50+ Lesson Video Course on React and Next
Source: masteringnuxt.com
The premier video course for building static and server-side rendered applications with Next.js and React. Sign up now and get two videos instantly!
5 hours 9 minutes 45 seconds

React, NodeJS, Express & MongoDB - The MERN Fullstack Guide
Source: Udemy
Building fullstack applications (i.e. frontend + backend) with the MERN stack is very popular - in this course, you will learn it from scratch using the example of a complete proje...
18 hours 45 minutes 10 seconds

Microservices with NodeJS, React, Typescript and Kubernetes
Source: Udemy
In building large scale applications intended for growth, microservices architecture is the go-to solution. One issue for Javascript and NodeJS learners is the...
95 hours 13 minutes 4 seconds