Self-Host a local AI platform! Ollama + Open WebUI

Video Link: https://www.youtube.com/watch?v=RQFfK7xIL28

Check out Twingate and supercharge your security: https://bit.ly/3Y1OaZi

In this video, I'll show you my new self-hosted AI platform, deployed in my HomeLab with the free and open-source Ollama. I'll walk you through setting up Open WebUI as an easy-to-use web interface with advanced features even ChatGPT might envy, and securing it all with Traefik and Authentik. Plus, I'll share some valuable tips to avoid common pitfalls when building your own local AI server.
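
If you want to follow along, the basic flow looks roughly like this. It's just a minimal sketch based on the official Ollama install script and the Open WebUI Docker quick start; the port mapping and the llama3 model are only examples, so check the current docs before running anything:

# Install Ollama on Linux (official install script)
curl -fsSL https://ollama.com/install.sh | sh

# Pull and chat with a model (llama3 is just an example)
ollama pull llama3
ollama run llama3

# Run Open WebUI in Docker and point it at the local Ollama instance
docker run -d -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui --restart always \
  ghcr.io/open-webui/open-webui:main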

References

Ollama GPU Requirements: https://github.com/ollama/ollama/blob/main/docs/gpu.md
- AMD ROCm: https://rocm.docs.amd.com/projects/radeon/en/latest/docs/install/native_linux/install-radeon.html

Learn more

Traefik: https://www.patreon.com/posts/114924172
Authentik: https://www.patreon.com/posts/100779796

________________

💜 Support me and become a Fan!
https://christianlempa.de/patreon


💬 Join our Community!
https://christianlempa.de/discord

👉 Follow me everywhere
https://christianlempa.de/

________________

Read my Tech Documentation
https://christianlempa.de/docs

My Gear and Equipment
https://christianlempa.de/kit

________________

Timestamps:

00:00 Introduction
02:38 Hardware Requirements
06:58 Software Planning
08:35 Problems with Proxmox…
10:55 Installing a new LXC Container
14:44 Install Ollama on Linux
17:41 Ollama basics
19:56 Install Open WebUI
27:37 Open WebUI basics
31:05 Using AI models
35:01 Web Searching
37:18 Why I still don’t trust AI
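
As a quick reference for the Ollama basics chapter: once Ollama is running, it listens on port 11434 by default and you can talk to it over its HTTP API. A minimal sketch (llama3 is again just an example model):

# Ask a local model a question via the Ollama REST API
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Why is the sky blue?",
  "stream": false
}'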

________________
Some links in this description are affiliate links.