
Local LLMs via Ollama & LM Studio - The Practical Guide

3h 52m 28s
English
Paid

Course description

AI assistants like ChatGPT and Google Gemini have become everyday tools. But when privacy, cost, offline operation, or deep customization matter, the best solution is to run powerful open large language models (LLMs) directly on your own computer. In this course, you will learn how to run and use local AI models such as Meta's Llama, Google's Gemma, and DeepSeek, even on a regular laptop - without cloud services, subscriptions, or data leaks.

Why Local and Open LLMs?

In a world dominated by cloud services, local LLMs offer a real advantage:

  • No subscriptions - use powerful models for free
  • 100% privacy - all data stays on your computer
  • Offline capability - run AI without an internet connection
  • Freedom from vendors - access to a rapidly growing ecosystem of open-source models
  • Cutting-edge capabilities - leading open models rank near the top of public benchmarks

What you will learn:

  • Overview of open LLMs: where to find them, how to choose them, and why they are important
  • Hardware requirements: what you need to run models locally
  • Model quantization: how to run even "heavy" AI on a regular PC
  • LM Studio: installation, setup, launch, and use of models
  • Ollama: a convenient way to manage LLMs from the terminal or API
  • Practice: image processing, PDF document summaries, text generation, few-shot prompting, and more
  • Integration into your own projects: working with APIs and automation

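The hardware-requirements and quantization topics above come down to simple arithmetic: a model's memory footprint is roughly its parameter count times bytes per weight, which is why quantization makes "heavy" models fit on ordinary machines. A minimal sketch (the 20% overhead factor for activations and KV cache is an illustrative assumption, not a figure from the course):

```python
def estimate_memory_gb(params_billion: float, bits_per_weight: int,
                       overhead: float = 1.2) -> float:
    """Rough memory needed to load a model: parameters x bytes per weight,
    plus an assumed ~20% overhead for activations and KV cache."""
    bytes_total = params_billion * 1e9 * (bits_per_weight / 8)
    return round(bytes_total * overhead / 1e9, 1)

# A 7B model in 16-bit precision vs. 4-bit quantization:
print(estimate_memory_gb(7, 16))  # ~16.8 GB
print(estimate_memory_gb(7, 4))   # ~4.2 GB
```

The same 7B model that needs a high-end GPU at full precision fits comfortably in a laptop's memory once quantized to 4 bits.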
Who the course is for:

  • Developers who want to integrate AI into applications
  • Enthusiasts and students interested in cutting-edge technologies
  • Those who value privacy and control over their data
  • Anyone who wants to save on subscriptions and explore the capabilities of modern LLMs
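For a taste of the "programmatic use" covered later in the course: both tools expose a local HTTP API (Ollama listens on port 11434 by default), so integrating a local model into your own project mostly means sending a small JSON request. A hedged sketch that only builds the payload for Ollama's /api/generate endpoint - the model name is an example, and actually sending it requires a running Ollama server:

```python
import json

def build_generate_request(model: str, prompt: str,
                           temperature: float = 0.7) -> str:
    """Build the JSON body for Ollama's /api/generate endpoint.
    POST it to http://localhost:11434/api/generate with Ollama running."""
    return json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # ask for one complete response instead of a stream
        "options": {"temperature": temperature},
    })

body = build_generate_request("llama3.2", "Summarize this text in three bullets.")
```

With a server running, any HTTP client (curl, requests, fetch) can POST this body and read the model's reply from the JSON response.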


All Course Lessons (54)

1. Welcome To The Course! (00:19, demo)
2. What Exactly Are "Open LLMs"? (06:28)
3. Why Would You Want To Run Open LLMs Locally? (06:53)
4. Popular Open LLMs - Some Examples (03:44)
5. Where To Find Open LLMs? (04:48)
6. Running LLMs Locally - Available Options (07:18)
7. Check The Model Licenses! (04:05)
8. Module Introduction (01:21)
9. LLM Hardware Requirements - First Steps (04:22)
10. Deriving Hardware Requirements From Model Parameters (05:35)
11. Quantization To The Rescue! (06:51)
12. Does It Run On Your Machine? (05:51)
13. Module Introduction (02:04)
14. Running Locally vs Remotely (01:09)
15. Installing & Using LM Studio (03:10)
16. Finding, Downloading & Activating Open LLMs (09:05)
17. Using the LM Studio Chat Interface (04:54)
18. Working with System Prompts & Presets (03:27)
19. Managing Chats (02:33)
20. Power User Features For Managing Models & Chats (06:29)
21. Leveraging Multimodal Models & Extracting Content From Images (OCR) (02:49)
22. Analyzing & Summarizing PDF Documents (03:28)
23. Onwards To More Advanced Settings (01:53)
24. Understanding Temperature, top_k & top_p (06:33)
25. Controlling Temperature, top_k & top_p in LM Studio (04:46)
26. Managing the Underlying Runtime & Hardware Configuration (04:18)
27. Managing Context Length (05:22)
28. Using Flash Attention (05:09)
29. Working With Structured Outputs (05:30)
30. Using Local LLMs For Code Generation (02:36)
31. Content Generation & Few-Shot Prompting (Prompt Engineering) (05:22)
32. Onwards To Programmatic Use (02:26)
33. LM Studio & Its OpenAI Compatibility (06:01)
34. More Code Examples! (05:05)
35. Diving Deeper Into The LM Studio APIs (02:11)
36. Module Introduction (01:42)
37. Installing & Starting Ollama (02:09)
38. Finding Usable Open Models (02:57)
39. Running Open LLMs Locally via Ollama (07:44)
40. Adding a GUI with Open WebUI (02:13)
41. Dealing with Multiline Messages & Image Input (Multimodality) (02:39)
42. Inspecting Models & Extracting Model Information (03:32)
43. Editing System Messages & Model Parameters (06:02)
44. Saving & Loading Sessions and Models (03:36)
45. Managing Models (05:43)
46. Creating Model Blueprints via Modelfiles (06:23)
47. Creating Models From Modelfiles (03:27)
48. Making Sense of Model Templates (06:40)
49. Building a Model From Scratch From a GGUF File (06:38)
50. Getting Started with the Ollama Server (API) (02:13)
51. Exploring the Ollama API & Programmatic Model Access (05:19)
52. Getting Structured Output (02:57)
53. More Code Examples! (04:54)
54. Roundup (01:45)

Unlock unlimited learning

Get instant access to all 54 lessons in this course, plus thousands of other premium courses. One subscription, unlimited knowledge.

Learn more about subscription

Similar courses

The Hidden Foundation of GenAI


Sources: Andreas Kretz
Generative AI is everywhere today, but few understand the fundamental concepts it is based on. "The Hidden Foundation of GenAI" is a starting point...
20 minutes 42 seconds
RAG for Real-World AI Applications


Sources: vueschool.io, Justin Schroeder, Daniel Kelly, Garrison Snelling
Study the RAG approach to enhance AI with your own data. Learn about vectors, embeddings, and integration. Apply the approach in real projects.
26 minutes 55 seconds
Build AI Agents with CrewAI


Sources: zerotomastery.io
Learn to build intelligent, collaboratively working AI agents with CrewAI. Master the organization of multi-agent workflows using...
2 hours 51 minutes 42 seconds
Building LLMs for Production


Sources: Towards AI, Louis-François Bouchard
"Building LLMs for Production" is a practical guide spanning 470 pages (updated in October 2024), designed for developers and specialists...
AI Design with Ideogram


Sources: designcode.io
Meet Ideogram - an image generation tool powered by artificial intelligence that turns your ideas into stunning visuals. Whether...
1 hour 3 minutes 49 seconds