Configuration Reference

This section provides comprehensive documentation for configuring QDrant Loader. Learn how to set up data sources, optimize performance, configure security, and customize behavior for your specific needs.

Start here

  1. Environment variables: Environment Variables Reference
  2. LLM providers and model mapping: LLM Provider Guide
  3. Full YAML schema: Configuration File Reference
  4. Security practices: Security Considerations
  5. Runtime flags and setup modes: CLI Commands
  6. Workspace-vs-traditional config loading: Workspace Mode

Choose your path

Quick baseline

Use this minimal pair of files as a baseline, then extend it using the references above.

.env

QDRANT_URL=http://localhost:6333
QDRANT_COLLECTION_NAME=documents

LLM_PROVIDER=openai
LLM_BASE_URL=https://api.openai.com/v1
LLM_API_KEY=your-openai-key
LLM_EMBEDDING_MODEL=text-embedding-3-small
LLM_CHAT_MODEL=gpt-4o-mini
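The .env format above is plain KEY=VALUE lines. As an illustration only (QDrant Loader's own loading logic may differ), a minimal parser for this format looks like:

```python
import os

def parse_env_file(text):
    """Parse simple KEY=VALUE lines, skipping blanks and # comments."""
    values = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        values[key.strip()] = value.strip()
    return values

env_text = """
QDRANT_URL=http://localhost:6333
QDRANT_COLLECTION_NAME=documents
"""
parsed = parse_env_file(env_text)
# Export into the process environment so ${VAR} references in
# config.yaml can later resolve against these values
os.environ.update(parsed)
```

In practice you would read the file with `open(".env").read()` rather than an inline string; tools like python-dotenv handle edge cases (quoting, export prefixes) that this sketch ignores.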

config.yaml

global:
  qdrant:
    url: "${QDRANT_URL}"
    collection_name: "${QDRANT_COLLECTION_NAME}"
  llm:
    provider: "${LLM_PROVIDER}"
    base_url: "${LLM_BASE_URL}"
    api_key: "${LLM_API_KEY}"
    models:
      embeddings: "${LLM_EMBEDDING_MODEL}"
      chat: "${LLM_CHAT_MODEL}"
    embeddings:
      vector_size: 1536

projects:
  default:
    project_id: "default"
    display_name: "Default"
    sources:
      localfile:
        docs:
          base_url: "file://./docs"
          include_paths:
            - "**/*.md"

Notes

The vector_size of 1536 matches the default output dimension of OpenAI's text-embedding-3-small model. If you switch embedding models, set vector_size to that model's dimension (for example, 3072 for text-embedding-3-large).

Quick validation checklist

  • qdrant-loader config --workspace . loads without errors
  • Required env vars are set for your chosen provider
  • At least one project and one source are configured
  • QDrant URL and collection name are valid
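The environment-variable checks in this list can be partly automated. A minimal preflight sketch (the variable names follow the .env example earlier and assume the OpenAI provider; adapt them to your setup):

```python
import os
from urllib.parse import urlparse

# Assumed minimum for the baseline above; your provider may need more.
REQUIRED_VARS = [
    "QDRANT_URL",
    "QDRANT_COLLECTION_NAME",
    "LLM_PROVIDER",
    "LLM_API_KEY",
]

def preflight(env=os.environ):
    """Return a list of problems; an empty list means the basics look sane."""
    problems = [
        f"missing env var: {name}"
        for name in REQUIRED_VARS
        if not env.get(name)
    ]
    url = env.get("QDRANT_URL", "")
    parsed = urlparse(url)
    if url and parsed.scheme not in ("http", "https"):
        problems.append(f"QDRANT_URL has unexpected scheme: {url!r}")
    return problems

# With nothing set, every required variable is reported missing
print(preflight({}))
```

Run this before `qdrant-loader config --workspace .` to catch missing variables early; the config command remains the authoritative check.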