# Architecture

how the pieces fit together.
## The big picture
```text
┌─────────────┐   GET /collections   ┌──────────────┐
│   qdrant    │◄─────────────────────│ go exporter  │
│    :6333    │                      │    :9999     │
└─────────────┘                      └──────┬───────┘
                                            │
                                     scrape │ /metrics
                                            ▼
                                     ┌──────────────┐
                                     │  prometheus  │
                                     │    :9091     │
                                     └──────┬───────┘
                                            │
                                    queries │
                                            ▼
                                     ┌──────────────┐
                                     │   grafana    │
                                     │    :3000     │
                                     └──────────────┘
```

## components in the repo
- `main.go` — http server, entry point; wires the collector to the `/metrics` endpoint
- `client.go` — qdrant api client — reads `QDRANT_URL` and `QDRANT_API_KEY` from the env
- `collector.go` — implements `prometheus.Collector`; turns collection data into metrics
- `docker-compose.yml` — full local stack: qdrant, exporter, prometheus, grafana
- `Makefile` — shortcuts for local and cloud modes
- `python_exporter/` — python client that reads the go exporter's `/metrics` endpoint
- `qdrant_cloud_client/` — python client that talks to the qdrant api directly
- `examples/` — prometheus scrape config and grafana dashboard json
## request flow

- prometheus hits the exporter's `/metrics` every 15s
- the exporter calls qdrant's `/collections` api
- for each collection it fetches `/collections/{name}`
- collection data is turned into prometheus metrics with labels
- prometheus stores the samples, grafana queries them
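the per-collection step can be sketched as a tiny renderer for the prometheus text format; the struct fields and metric name below are assumptions for illustration, not necessarily what `collector.go` emits:

```go
package main

import "fmt"

// CollectionInfo holds the subset of GET /collections/{name} data
// this sketch uses (field names are assumptions for illustration).
type CollectionInfo struct {
	Name         string
	VectorsCount int64
}

// renderMetrics turns collection data into prometheus text-format
// samples with a "collection" label, roughly what the collector
// does via the prometheus client library.
func renderMetrics(infos []CollectionInfo) string {
	out := "# TYPE qdrant_collection_vectors_count gauge\n"
	for _, c := range infos {
		out += fmt.Sprintf("qdrant_collection_vectors_count{collection=%q} %d\n",
			c.Name, c.VectorsCount)
	}
	return out
}

func main() {
	fmt.Print(renderMetrics([]CollectionInfo{
		{Name: "docs", VectorsCount: 1200},
		{Name: "images", VectorsCount: 34000},
	}))
}
```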
## configuration
the exporter reads two env vars:
```bash
QDRANT_URL=http://localhost:6333     # or your cloud endpoint
QDRANT_API_KEY=your_api_key_if_any   # optional, only for cloud
```

when `QDRANT_API_KEY` is set, the exporter sends it as the `api-key` header on every request.
## two python sdks, one exporter

the two python sdks are intentionally separate:

- `python_exporter` scrapes the go exporter's `/metrics` — stays in sync with whatever the go side exposes
- `qdrant_cloud_client` talks to the qdrant api directly — no exporter needed