
Storage and retrieval infrastructure for embeddings and semantic search.
Retrieval infra
Use this page when retrieval quality, filtering, persistence, and semantic search infrastructure matter more than the chat UI.
Operational store
Qdrant, Chroma, Weaviate, Milvus, LanceDB, and pgvector cover the storage/search layer behind many private RAG deployments.
Backend first
These projects sit behind RAG apps and agent stacks; they are not chat interfaces or full assistants on their own.
Why it works
Operational vector stores
Qdrant, Chroma, and Weaviate are common choices when you need metadata filtering, predictable APIs, and strong product ergonomics.
Scale and performance
Milvus and LanceDB fit larger retrieval workloads, while pgvector keeps search close to Postgres-centric stacks.
RAG backends
These systems underpin private search, retrieval, and embedding storage for assistants and workflow apps.
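The contract these systems share can be sketched in a few lines of plain Python: store vectors with metadata payloads, filter candidates by payload, then rank by similarity. This is a toy brute-force version for illustration only; real engines such as Qdrant, Weaviate, or Milvus use ANN indexes and their own client APIs, and the class and method names below are invented for the sketch.

```python
# Toy sketch of an operational vector store: metadata filtering plus
# brute-force cosine ranking. Not how any real engine is implemented.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

class ToyVectorStore:
    def __init__(self):
        self.points = []  # list of (id, vector, payload) tuples

    def upsert(self, point_id, vector, payload):
        # Replace any existing point with the same id, then append.
        self.points = [p for p in self.points if p[0] != point_id]
        self.points.append((point_id, vector, payload))

    def search(self, query, top_k=3, where=None):
        # 'where' filters on payload metadata before scoring,
        # analogous to metadata filtering in Qdrant/Chroma/Weaviate.
        candidates = [
            p for p in self.points
            if where is None or all(p[2].get(k) == v for k, v in where.items())
        ]
        ranked = sorted(candidates, key=lambda p: cosine(query, p[1]), reverse=True)
        return [(pid, payload) for pid, _vec, payload in ranked[:top_k]]

store = ToyVectorStore()
store.upsert(1, [1.0, 0.0], {"lang": "en"})
store.upsert(2, [0.9, 0.1], {"lang": "de"})
store.upsert(3, [0.0, 1.0], {"lang": "en"})

# Filtered search: only English documents are candidates.
hits = store.search([1.0, 0.0], top_k=1, where={"lang": "en"})
# hits -> [(1, {"lang": "en"})]
```

The filter-then-rank shape is the key point: a dedicated store lets you combine structured predicates with similarity scoring in one query, which is what separates these backends from a plain embedding cache.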
Curated repositories
qdrant
Qdrant - High-performance, massive-scale Vector Database and Vector Search Engine for the next generation of AI. Also available in the cloud https://cloud.qdrant.io/
chroma-core
Data infrastructure for AI
weaviate
Weaviate is an open-source vector database that stores both objects and vectors, allowing for the combination of vector search with structured filtering with the fault tolerance and scalability of a cloud-native database.
milvus-io
Milvus is a high-performance, cloud-native vector database built for scalable vector ANN search
lancedb
Developer-friendly OSS embedded retrieval library for multimodal AI. Search More; Manage Less.
pgvector
Open-source vector similarity search for Postgres
Related pages
Self-hosted ChatGPT alternatives
Private assistant apps and team chat portals for people who want a familiar front end around local or private models.
Local model runtimes and inference servers
Private inference stacks for running models locally or exposing an OpenAI-compatible endpoint inside your own infrastructure.
Self-hosted RAG tools
Document search, connectors, and knowledge assistants for private corpora and retrieval-heavy AI products.
Agents, workflows, and app builders
Workflow engines, agent systems, and app builders for repeatable internal automation instead of one-off chat.
AI developer tools
Self-hostable coding assistants and repo-aware tools for local or private developer workflows.
Self-hosted AI tools
Browse open source AI tools you can run on your own infrastructure, from local LLM apps to RAG, agents, inference, and production tooling.
FAQ
What does a vector database do?
It stores embeddings and supports similarity search, filtering, and retrieval for RAG and semantic search applications.
Do I need a dedicated vector database for RAG?
No. Some stacks use Postgres with pgvector or hybrid search layers. A dedicated store helps when retrieval scale, filtering, or operational separation matters.
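The "hybrid search layers" mentioned above typically blend a lexical score with a vector-similarity score rather than relying on embeddings alone. A minimal sketch, with invented document data and an assumed blending weight `alpha`:

```python
# Toy hybrid search: fuse a keyword score with cosine similarity.
# Names, weights, and documents are invented for illustration.
import math

docs = {
    "a": {"text": "postgres index tuning", "vec": [1.0, 0.0]},
    "b": {"text": "vector search basics", "vec": [0.0, 1.0]},
}

def cosine(x, y):
    dot = sum(p * q for p, q in zip(x, y))
    nx = math.sqrt(sum(p * p for p in x))
    ny = math.sqrt(sum(q * q for q in y))
    return dot / (nx * ny)

def keyword_score(query_terms, text):
    # Crude term-frequency score; real stacks use BM25 or similar.
    words = text.split()
    return sum(words.count(t) for t in query_terms) / len(words)

def hybrid_search(query_terms, query_vec, alpha=0.5):
    # alpha blends lexical and semantic relevance (0 = keyword only).
    ranked = sorted(
        docs.items(),
        key=lambda kv: (1 - alpha) * keyword_score(query_terms, kv[1]["text"])
                       + alpha * cosine(query_vec, kv[1]["vec"]),
        reverse=True,
    )
    return [doc_id for doc_id, _ in ranked]

order = hybrid_search(["vector"], [0.0, 1.0])
# order -> ["b", "a"]
```

In a Postgres-centric stack the same fusion can be done in SQL by combining full-text ranking with pgvector's distance operators, which is often enough before a dedicated store is justified.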