CoherenceAgent¶
Integrated orchestrator that combines a generator, scorer, ground truth store, and safety kernel into a single `process()` call. Handles candidate generation, scoring, fallback, and output interlock.
Usage¶
```python
from director_ai import CoherenceAgent

agent = CoherenceAgent(
    use_nli=True,
    fallback="retrieval",
)

result = agent.process("What is the refund policy?")
print(result.output)
print(result.coherence.score if result.coherence else None)
```
Constructor Parameters¶
| Parameter | Type | Default | Description |
|---|---|---|---|
| `llm_api_url` | str \| None | None | Direct URL to an OpenAI-compatible endpoint |
| `use_nli` | bool \| None | None | Enable NLI model scoring |
| `provider` | str \| None | None | `"openai"` or `"anthropic"` (reads the API key from the environment) |
| `fallback` | str \| None | None | Fallback mode: `"retrieval"`, `"disclaimer"`, or None |
| `disclaimer_prefix` | str | `"[Unverified] "` | Prefix for disclaimer fallback text |
**Mutual exclusivity:** `llm_api_url` and `provider` are mutually exclusive. Use one or the other.
Methods¶
process()¶
Generate candidates, score them, return the best approved response (or fallback).
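The candidate-selection step can be sketched as follows. This is a hypothetical outline, assuming a `scorer` callable that returns a score in [0, 1] and a fixed approval threshold; the library's actual logic may differ:

```python
def select_best(candidates, scorer, threshold=0.7):
    """Score each candidate and return the best one that clears the threshold.

    Returns None when every candidate fails, signalling the caller to apply
    the configured fallback mode.
    """
    scored = sorted(((scorer(c), c) for c in candidates), reverse=True)
    best_score, best = scored[0]
    if best_score < threshold:
        return None
    return best
```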
aprocess()¶
Async variant of process().
stream()¶
Async streaming with real-time coherence monitoring. Yields (token, coherence_score) tuples.
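Consuming the stream might look like this sketch, which uses a stand-in async generator where a real `agent.stream(prompt)` would go:

```python
import asyncio

async def fake_stream(prompt):
    # Stand-in for agent.stream(prompt): yields (token, coherence) tuples.
    for token, score in [("Refunds", 0.96), ("within", 0.93), ("30", 0.91), ("days.", 0.89)]:
        yield token, score

async def consume(prompt, floor=0.5):
    tokens = []
    async for token, score in fake_stream(prompt):
        if score < floor:
            break  # stop reading; tokens already delivered are not retracted
        tokens.append(token)
    return " ".join(tokens)

print(asyncio.run(consume("What is the refund policy?")))
```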
Fallback Modes¶
| Mode | Behavior |
|---|---|
| None | Reject if all candidates fail; returns an empty response with `halted=True` |
| `"retrieval"` | Return KB context when all candidates fail |
| `"disclaimer"` | Prepend a warning to the best rejected candidate |
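The three behaviors can be summarized in a small dispatch sketch. The names `best_rejected` and `kb_context` are hypothetical stand-ins for values the agent already holds, and the `halted` flag returned for disclaimer mode is an assumption:

```python
def apply_fallback(mode, best_rejected, kb_context, prefix="[Unverified] "):
    # Hypothetical sketch of the fallback table above; returns (output, halted).
    if mode is None:
        return "", True                       # reject outright
    if mode == "retrieval":
        return kb_context, True               # fall back to KB context
    if mode == "disclaimer":
        return prefix + best_rejected, False  # ship best candidate, flagged
    raise ValueError(f"unknown fallback mode: {mode!r}")
```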
```python
# Retrieval fallback
agent = CoherenceAgent(fallback="retrieval")
result = agent.process("What is the refund policy?")
if result.halted:
    print("Fell back to KB retrieval")
```

```python
# Disclaimer fallback
agent = CoherenceAgent(fallback="disclaimer")
result = agent.process("What is the refund policy?")
# Response is prefixed with "[Unverified] ..."
```
Full API¶
director_ai.core.agent.CoherenceAgent¶

```python
CoherenceAgent(
    llm_api_url=None,
    use_nli=None,
    provider=None,
    fallback=None,
    disclaimer_prefix='[Unverified] ',
    *,
    _scorer=None,
    _store=None,
)
```

Integrated coherence-verification agent.

Orchestrates:

- **Generator**: candidate response generation (mock or real LLM).
- **Scorer**: weighted NLI divergence scoring.
- **Ground Truth Store**: RAG-based fact retrieval.
- **Safety Kernel**: output interlock.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `llm_api_url` | str \| None | Direct URL to an OpenAI-compatible endpoint. | None |
| `use_nli` | bool \| None | Enable NLI model scoring. | None |
| `provider` | str \| None | `"openai"` or `"anthropic"`. Reads the API key from the environment. Mutually exclusive with `llm_api_url`. | None |
process¶
Process a prompt end-to-end and return the verified output.
aprocess (async)¶
Async version of `process()` via `run_in_executor`.
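The delegation pattern is standard asyncio. A minimal sketch, with a trivial stand-in for the synchronous `process()`:

```python
import asyncio

def process(prompt):
    # Stand-in for the synchronous end-to-end pipeline.
    return f"verified: {prompt}"

async def aprocess(prompt):
    # Run the blocking process() on the default executor so the event
    # loop stays free, as the run_in_executor note above describes.
    loop = asyncio.get_running_loop()
    return await loop.run_in_executor(None, process, prompt)

print(asyncio.run(aprocess("What is the refund policy?")))
```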
stream (async)¶
Stream tokens with StreamingKernel oversight.
Uses sliding window, trend detection, and hard/soft halt from
StreamingKernel. Yields (token, coherence) tuples.
Halting stops future tokens but does not retract delivered ones.