Quick Start Guide

Get up and running with QDrant Loader in minutes! This guide walks you through your first document ingestion and AI tool integration.

🎯 What You'll Accomplish

In one flow, you will:

  • Install the packages
  • Start QDrant
  • Create a workspace
  • Ingest your first content
  • Connect AI tools through MCP

Estimated time: 10 to 15 minutes.

🔧 Prerequisites

  • Python 3.12+
  • Docker (or an existing QDrant instance)
  • One LLM provider key (OpenAI, Azure OpenAI, Ollama, or OpenAI-compatible)
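
Before continuing, you can sanity-check the hard requirements from the shell. This is a sketch; it assumes `python3` is on your PATH, and it only warns about Docker because a remote QDrant instance works just as well:

```shell
# Verify Python is 3.12+ (hard requirement)
pyver=$(python3 -c 'import sys; print("%d.%d" % sys.version_info[:2])')
python3 -c 'import sys; sys.exit(0 if sys.version_info >= (3, 12) else 1)' \
  && echo "Python $pyver: OK" \
  || echo "Python $pyver: too old, need 3.12+"

# Docker is only needed if you run QDrant locally
command -v docker >/dev/null && echo "Docker: found" || echo "Docker: not found (optional)"
```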

🚀 Step 1. Install packages

pip install qdrant-loader qdrant-loader-mcp-server

Verify:

qdrant-loader --version
mcp-qdrant-loader --version

If you need OS-specific install help, see the Installation Guide.

📄 Step 2. Start QDrant

Local Docker option:

docker run -p 6333:6333 -p 6334:6334 qdrant/qdrant

Or use QDrant Cloud and copy your cluster URL and API key.

🤖 Step 3. Create workspace

Recommended (wizard):

qdrant-loader setup --output-dir my-qdrant-workspace --mode default
cd my-qdrant-workspace

Alternative (manual):

mkdir my-qdrant-workspace
cd my-qdrant-workspace
qdrant-loader init --workspace .

Need more control over prompts and templates? See CLI setup command options.

🔧 Step 4. Configure environment

Create or edit .env:

QDRANT_URL=http://localhost:6333
QDRANT_COLLECTION_NAME=quickstart

LLM_PROVIDER=openai
LLM_BASE_URL=https://api.openai.com/v1
LLM_API_KEY=your-openai-key
LLM_EMBEDDING_MODEL=text-embedding-3-small
LLM_CHAT_MODEL=gpt-4o-mini
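
To see how these variables load, here is a self-contained sketch. It writes a throwaway `.env` (with a placeholder key) into a temp directory and fails fast if any required variable is missing — the required-variable list is an assumption, so check what your provider actually needs:

```shell
# Work in a temp dir so we don't clobber a real .env
cd "$(mktemp -d)"
cat > .env <<'EOF'
QDRANT_URL=http://localhost:6333
QDRANT_COLLECTION_NAME=quickstart
LLM_PROVIDER=openai
LLM_API_KEY=your-openai-key
EOF

# Export everything the file defines, then check the required names
set -a
. ./.env
set +a
for var in QDRANT_URL QDRANT_COLLECTION_NAME LLM_PROVIDER LLM_API_KEY; do
  eval "val=\${$var:-}"
  [ -n "$val" ] || { echo "Missing required variable: $var" >&2; exit 1; }
done
echo "Environment OK"
```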

For the full list of supported variables and provider options, see the Configuration reference.

📄 Step 5. Add a minimal config and ingest

Create config.yaml:

global:
  qdrant:
    url: "${QDRANT_URL}"
    collection_name: "${QDRANT_COLLECTION_NAME}"
  llm:
    provider: "${LLM_PROVIDER}"
    base_url: "${LLM_BASE_URL}"
    api_key: "${LLM_API_KEY}"
    models:
      embeddings: "${LLM_EMBEDDING_MODEL}"
      chat: "${LLM_CHAT_MODEL}"
    embeddings:
      vector_size: 1536

projects:
  quickstart:
    project_id: "quickstart"
    display_name: "Quick Start"
    sources:
      localfile:
        docs:
          base_url: "file://./docs"
          include_paths: ["**/*.md"]
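
The `sources` map can hold several entries of the same type. As a sketch reusing only the keys already shown above (the second source name and path here are illustrative):

```yaml
      localfile:
        docs:
          base_url: "file://./docs"
          include_paths: ["**/*.md"]
        notes:                        # hypothetical second source
          base_url: "file://./notes"
          include_paths: ["**/*.md"]
```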

Create sample content and ingest:

mkdir docs
printf "# Hello QDrant Loader\n\nThis is my first document.\n" > docs/sample.md
qdrant-loader ingest --workspace .
# Expected output:
# 📁 Scanning directory: docs/
# 📄 Processing: 1 file found
# ✅ Ingested: 1 document
# 🔍 Collection: quickstart

For Git/Confluence/Jira and advanced source filters, see Data Sources Guide.

📚 Step 6. Start MCP server

mcp-qdrant-loader
# Expected output:
# 🚀 QDrant Loader MCP Server starting...
# 📡 Server running on stdio
# 🔍 Available tools: search, hierarchy_search, attachment_search
# ✅ Ready for connections
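
Most MCP clients register the server through a JSON config file whose location varies by tool (Cursor, for example, reads `~/.cursor/mcp.json`). A sketch of the common `mcpServers` shape — the server name and the env entries shown are illustrative, so adjust them to your setup:

```json
{
  "mcpServers": {
    "qdrant-loader": {
      "command": "mcp-qdrant-loader",
      "env": {
        "QDRANT_URL": "http://localhost:6333",
        "QDRANT_COLLECTION_NAME": "quickstart",
        "LLM_API_KEY": "your-openai-key"
      }
    }
  }
}
```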

For tool-specific connection steps, see the detailed integration guides.

๐Ÿ” Step 7. Validate in your AI tool

In Cursor/Claude/Windsurf, run a query like: "Search my docs for QDrant Loader quick start notes"

If results are returned from ingested content, setup is complete.

🧪 Quick Success Checklist

  • qdrant-loader --version and mcp-qdrant-loader --version return successfully
  • qdrant-loader ingest --workspace . finishes without errors
  • MCP server starts with mcp-qdrant-loader
  • Your AI tool returns results from ingested documents

🎉 Quick Start Complete!

You're now ready to explore the full power of QDrant Loader. Next, review the Core Concepts summarized in Getting Started, or dive into the User Guides for specific features and workflows.