# Configuration

DirectorConfig is a dataclass that can be loaded from environment variables, a YAML file, or a named profile. All fields have sensible defaults.
## Loading
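Beyond `from_profile` (shown under Building Components below), the exact loader names are not listed in this section. The following is a minimal toy sketch of the environment-variable pattern a dataclass loader typically uses, not the real DirectorConfig implementation; the two field names come from the field groups below, while the `DIRECTOR_` prefix and the `from_env` name are assumptions:

```python
import os
from dataclasses import dataclass, fields

@dataclass
class SketchConfig:
    # Two fields borrowed from DirectorConfig's field list; defaults are illustrative.
    coherence_threshold: float = 0.5
    log_level: str = "INFO"

    @classmethod
    def from_env(cls, prefix: str = "DIRECTOR_") -> "SketchConfig":
        # Each field keeps its default unless PREFIX + FIELDNAME is set in the environment.
        kwargs = {}
        for f in fields(cls):
            raw = os.environ.get(prefix + f.name.upper())
            if raw is not None:
                kwargs[f.name] = f.type(raw)  # coerce the string to the field's type
        return cls(**kwargs)
```

Unset variables fall through to the dataclass defaults, so a partial environment still produces a complete config.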
## Profiles
| Profile | NLI | Threshold | Candidates | Metrics | Use Case |
|---|---|---|---|---|---|
| fast | Off | 0.5 | 1 | Off | Development, low latency |
| thorough | On | 0.6 | 3 | On | Production default |
| research | On | 0.7 | 5 | On | Evaluation, benchmarking |
| medical | On | 0.30 | 3 | On | Healthcare (measured on PubMedQA) |
| finance | On | 0.30 | 3 | On | Financial services (measured on FinanceBench) |
| legal | On | 0.30 | 3 | On | Legal document review (not yet measured) |
| creative | Off | 0.4 | 1 | Off | Creative writing (low halt rate) |
| customer_support | Off | 0.55 | 1 | Off | Support agents |
| summarization | On | 0.15 | 1 | On | Document summarization |
| lite | Off | 0.5 | 1 | Off | Zero-dependency fast path |
## Building Components
```python
config = DirectorConfig.from_profile("thorough")

# Build scorer with all config applied
scorer = config.build_scorer(store=my_store)

# Build vector store from config
store = config.build_store()
```
## Combining Profile + Overrides
```python
config = DirectorConfig.from_profile("medical")
config.nli_model = "lytang/MiniCheck-DeBERTa-L"
config.cache_size = 4096
```
Or combine YAML + env vars (env vars take precedence):
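The precedence rule can be sketched with plain dicts standing in for the parsed YAML file. This is a hypothetical helper, not the real DirectorConfig API (which is not shown in this section); the two field names are real, the values illustrative:

```python
import os

# Defaults use two real DirectorConfig field names; values are illustrative.
DEFAULTS = {"coherence_threshold": 0.5, "llm_provider": "mock"}

def merged_config(yaml_values: dict, prefix: str = "DIRECTOR_") -> dict:
    """Layer config sources: defaults < YAML file < environment variables."""
    config = dict(DEFAULTS)
    config.update(yaml_values)  # YAML overrides defaults
    for key, current in config.items():
        raw = os.environ.get(prefix + key.upper())
        if raw is not None:     # env vars take precedence over YAML
            config[key] = type(current)(raw)
    return config
```

For example, with `DIRECTOR_LLM_PROVIDER=openai` in the environment, a YAML value for `llm_provider` would be ignored while its `coherence_threshold` would still apply.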
## Key Field Groups
### Scoring

`coherence_threshold`, `hard_limit`, `soft_limit`, `use_nli`, `nli_model`, `scorer_backend`, `max_candidates`, `history_window`

### LLM Provider

`llm_provider` (mock | openai | anthropic | huggingface | local), `llm_api_key`, `llm_model`, `llm_temperature`, `llm_max_tokens`

### Vector Store

`vector_backend` (memory | chroma), `embedding_model`, `chroma_collection`, `chroma_persist_dir`, `reranker_enabled`

### Server

`server_host`, `server_port`, `server_workers`, `cors_origins`, `rate_limit_rpm`, `api_keys`

### Caching

`cache_size`, `cache_ttl`, `redis_url`

### Observability

`metrics_enabled`, `log_level`, `log_json`, `otel_enabled`
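Putting the groups together, a YAML file might look like the sketch below. The keys are taken from the field lists above, but the values are purely illustrative and the exact file schema should be checked against the API reference:

```yaml
# Illustrative config file; keys match DirectorConfig fields, values are examples.
coherence_threshold: 0.6
use_nli: true
max_candidates: 3

llm_provider: openai
llm_temperature: 0.2

vector_backend: chroma
chroma_persist_dir: ./chroma

server_host: 0.0.0.0
server_port: 8000

cache_size: 2048
metrics_enabled: true
log_level: INFO
```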
See DirectorConfig API Reference for the complete field table.