
Open source chat interfaces and assistant apps for private AI workflows.
ChatGPT alternatives
This page is for self-hosted ChatGPT-style products: a real chat interface, conversation history, model switching, private deployment, and enough admin or knowledge features to replace a hosted assistant for individuals or teams.
Not backend runtimes
Ollama, LocalAI, vLLM, and llama.cpp are model backends. They belong in the stack, but they are not ChatGPT UI replacements by themselves.
Intent matched
The curated list favors UI products: Open WebUI, LobeChat, LibreChat, AnythingLLM, Onyx, and Jan. Each solves a different self-hosted chat job.
Why it works
Open WebUI: best local-first default
Use it when you want a strong browser UI around Ollama or OpenAI-compatible endpoints with minimal friction.
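One minimal way to try this pairing, sketched here as a Docker deployment. Assumptions to verify against the current Open WebUI docs: the `ghcr.io/open-webui/open-webui:main` image, the `OLLAMA_BASE_URL` environment variable, and an Ollama server already running on the host's default port 11434.

```shell
# Run Open WebUI in Docker and point it at a local Ollama backend.
# OLLAMA_BASE_URL tells the UI where the model server lives;
# --add-host lets the container reach the host as host.docker.internal.
docker run -d \
  -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -e OLLAMA_BASE_URL=http://host.docker.internal:11434 \
  -v open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main
# Then browse to http://localhost:3000 and pick a model pulled via Ollama.
```

The named volume keeps chat history and settings across container restarts.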
LobeChat / LibreChat: polished chat portals
Use LobeChat for a polished modern chat UI and LibreChat when users, presets, agents, MCP, and enterprise-style controls matter.
AnythingLLM / Onyx / Jan: specialized workflows
Use AnythingLLM for document workspaces, Onyx for advanced team knowledge and connectors, and Jan for desktop-first offline chat.
Curated repositories
open-webui/open-webui
User-friendly AI Interface (Supports Ollama, OpenAI API, ...)
danny-avila/LibreChat
Enhanced ChatGPT Clone: Features Agents, MCP, DeepSeek, Anthropic, AWS, OpenAI, Responses API, Azure, Groq, o1, GPT-5, Mistral, OpenRouter, Vertex AI, Gemini, Artifacts, AI model switching, message search, Code Interpreter, langchain, DALL-E-3, OpenAPI Actions, Functions, Secure Multi-User Auth, Presets, open-source for self-hosting.
Mintplex-Labs/anything-llm
The all-in-one AI productivity accelerator. On-device and privacy-first, with no annoying setup or configuration.
onyx-dot-app/onyx
Open Source AI Platform - AI Chat with advanced features that works with every LLM
janhq/jan
Jan is an open source alternative to ChatGPT that runs 100% offline on your computer.
Related pages
Local model runtimes and inference servers
Private inference stacks for running models locally or exposing an OpenAI-compatible endpoint inside your own infrastructure.
Self-hosted RAG tools
Document search, connectors, and knowledge assistants for private corpora and retrieval-heavy AI products.
Vector databases and retrieval storage
Storage and search layers for embeddings, filtering, persistence, and semantic retrieval at scale.
Agents, workflows, and app builders
Workflow engines, agent systems, and app builders for repeatable internal automation instead of one-off chat.
AI developer tools
Self-hostable coding assistants and repo-aware tools for local or private developer workflows.
Self-hosted AI tools
Browse open source AI tools you can run on your own infrastructure, from local LLM apps to RAG, agents, inference, and production tooling.
FAQ
Which self-hosted ChatGPT alternative should I pick?
Open WebUI is the best default for most local-model users. LobeChat is strong for a polished modern chat experience. LibreChat is stronger as a multi-provider ChatGPT-style portal. AnythingLLM is better for document workspaces. Onyx is better when enterprise knowledge, connectors, and advanced RAG matter. Jan is best when you want a desktop-first offline app.
Do these chat apps run the models themselves?
Usually no. Most apps connect to model backends such as Ollama, LocalAI, vLLM, llama.cpp servers, OpenAI-compatible APIs, or hosted providers. The chat app is the user-facing layer, not always the inference engine.
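The reason one chat UI can sit in front of so many backends is the shared OpenAI-compatible wire format. A minimal sketch of the request a chat app assembles, assuming a hypothetical backend at `http://localhost:11434` (Ollama's default port) and a hypothetical model name `llama3`:

```python
import json

def build_chat_request(base_url: str, model: str, user_message: str) -> tuple[str, str]:
    """Build the URL and JSON body for an OpenAI-compatible
    /v1/chat/completions call. Any backend speaking this format
    (Ollama, LocalAI, vLLM, a llama.cpp server) accepts the same shape,
    which is why the UI layer stays backend-agnostic."""
    url = f"{base_url.rstrip('/')}/v1/chat/completions"
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "stream": False,  # set True for token-by-token streaming in a real UI
    })
    return url, body

url, body = build_chat_request("http://localhost:11434", "llama3", "Hello")
print(url)  # http://localhost:11434/v1/chat/completions
```

Swapping backends means changing only `base_url` and `model`; the message schema stays identical.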
Is Ollama a ChatGPT alternative?
Not on this page. Ollama is a local model runtime. It becomes a ChatGPT-like experience when paired with a UI such as Open WebUI, LobeChat, LibreChat, AnythingLLM, Jan, or another chat interface.