# Director-AI

Real-time LLM hallucination guardrail — NLI + RAG fact-checking with token-level streaming halt.

v3.14.0 — 5-tier scoring, 6 advanced RAG strategies, multi-agent swarm guardian, config wizard.
- **2-Line Integration** — Wrap any LLM SDK client with `guard()`. Duck-type detection for OpenAI-compatible, Anthropic, Bedrock, Gemini, and Cohere clients. Quickstart →
- **Token-Level Halt** — Catches hallucinations as they form, mid-stream, before the user sees incorrect information. Streaming →
- **Custom KB Grounding** — Bring your own facts via RAG. ChromaDB, FAISS, Qdrant, or in-memory backends. KB Ingestion →
- **Benchmarked Accuracy** — 75.6% balanced accuracy on LLM-AggreFact (29K samples, 11 datasets, #6 on the leaderboard; 77.76% with per-dataset tuning) using the FactCG-DeBERTa-v3-Large NLI model. 14.6 ms/pair with ONNX on GPU. SBOM on every release. Scoring →
- **Injection Detection** — Two-stage pipeline: regex pattern matching plus bidirectional NLI intent-drift scoring. Catches injection effects in the output regardless of encoding, with per-claim attribution. Injection Detector →
- **ProductionGuard** — Batteries-included entry point: calibrated scoring, human feedback loop, conformal confidence intervals, tool-call verification, and injection detection. Guard →
- **5-Tier Scoring** — From a zero-dependency rules engine (<1 ms) to embedding similarity (3 ms) to full NLI (14.6 ms). Choose your accuracy/latency trade-off. Scoring →
- **SaaS-Ready** — API-key auth and token-bucket rate-limiting middleware. Cloud Run Dockerfile included. Self-host or let us host.
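The token-bucket rate limiting mentioned under SaaS-Ready can be sketched in a few lines. This is a stand-alone illustrative sketch, not Director-AI's actual middleware; the class and method names are mine:

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter: refills `rate` tokens/s, bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)  # start full so an initial burst is allowed
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, clamped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

bucket = TokenBucket(rate=5.0, capacity=2)
results = [bucket.allow() for _ in range(3)]  # burst of 2 allowed, third denied
```

In middleware, `allow()` would typically be keyed per API key, returning HTTP 429 when it yields `False`.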
## Quick Example

```python
from director_ai import guard
from openai import OpenAI

client = guard(
    OpenAI(),
    facts={"refund_policy": "Refunds within 30 days only"},
    threshold=0.3,
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "What is the refund policy?"}],
)
```
If the LLM hallucinates, `guard()` raises `HallucinationError` with the coherence score and the contradicting evidence.
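Caller-side handling might look like the following. This is a self-contained sketch: the `HallucinationError` stand-in (and its `score`/`evidence` attribute names) and the `guarded_completion` stub are assumptions standing in for a real guarded client:

```python
class HallucinationError(Exception):
    """Stand-in for director_ai's exception; attribute names are assumed."""

    def __init__(self, score: float, evidence: list[str]):
        super().__init__(f"coherence {score:.2f} below threshold")
        self.score = score
        self.evidence = evidence

def guarded_completion(coherence_score: float, threshold: float = 0.3) -> str:
    """Stub simulating a guarded client call that scores its own output."""
    if coherence_score < threshold:
        raise HallucinationError(coherence_score, ["Refunds within 30 days only"])
    return "Refunds are available within 30 days of purchase."

try:
    answer = guarded_completion(coherence_score=0.12)
except HallucinationError as err:
    # Fall back to a grounded response built from the returned evidence.
    answer = f"Withheld (score {err.score:.2f}); evidence: {err.evidence[0]}"
```

The point of the pattern: rejection is not silent — the exception carries the score and evidence, so the caller can degrade gracefully instead of surfacing the hallucinated text.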
## How It Works

```mermaid
graph LR
    LLM["LLM Response"]:::input --> SC["CoherenceScorer"]:::core
    SC --> NLI["NLI Model<br/>(H_logical)"]:::nli
    SC --> RAG["RAG Retrieval<br/>(H_factual)"]:::rag
    NLI --> SCORE["coherence = 1 - (0.6·H_L + 0.4·H_F)"]:::core
    RAG --> SCORE
    SCORE --> GATE{score ≥ threshold?}:::gate
    GATE -->|Yes| APPROVE["Approved"]:::approve
    GATE -->|No| HALT["Halt + Evidence"]:::halt
    classDef input fill:#7c4dff,stroke:#333,color:#fff
    classDef core fill:#512da8,stroke:#333,color:#fff
    classDef nli fill:#1565c0,stroke:#333,color:#fff
    classDef rag fill:#00695c,stroke:#333,color:#fff
    classDef gate fill:#ff8f00,stroke:#333,color:#fff
    classDef approve fill:#2e7d32,stroke:#333,color:#fff
    classDef halt fill:#c62828,stroke:#333,color:#fff
```
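The scoring step in the diagram reduces to a weighted combination of the two hallucination signals followed by a threshold gate. A direct sketch of that formula (weights taken from the diagram; the function names are mine):

```python
def coherence(h_logical: float, h_factual: float) -> float:
    """coherence = 1 - (0.6 * H_logical + 0.4 * H_factual), per the diagram."""
    return 1.0 - (0.6 * h_logical + 0.4 * h_factual)

def gate(score: float, threshold: float = 0.3) -> str:
    """Approve when the score clears the threshold, otherwise halt with evidence."""
    return "approved" if score >= threshold else "halt"

# Strong contradiction signals from both the NLI model and RAG retrieval:
score = coherence(h_logical=0.9, h_factual=0.8)  # 1 - (0.54 + 0.32) = 0.14
decision = gate(score)  # "halt"
```

With both signals at zero the score is 1.0 (fully coherent); the NLI signal dominates because it carries the larger weight.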
## Competitive Positioning
| Feature | Director-AI | NeMo Guardrails | Guardrails-AI | LLM-Guard |
|---|---|---|---|---|
| Mid-stream halt | Yes | No | No | No |
| Async voice AI pipeline | Yes | No | No | No |
| Custom KB RAG | Yes | Partial | No | No |
| Token-level scoring | Yes | No | No | No |
| NLI contradiction detection | Yes | No | No | Partial |
| Evidence on rejection | Yes | No | No | No |
| Numeric verification | Yes | No | No | No |
| Agentic loop safety | Yes | No | No | No |
| Conformal prediction | Yes | No | No | No |
| EU AI Act Article 15 | Yes | No | No | No |
| Adversarial self-test | Yes | No | No | No |
| SDK integrations | 5 | 1 | 1 | 0 |
| Framework integrations | 6 | 1 | 1 | 0 |
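The conformal-prediction row above refers to distribution-free confidence guarantees on scores. A minimal split-conformal quantile sketch — illustrative only, not Director-AI's implementation:

```python
import math

def conformal_quantile(calibration_scores: list[float], alpha: float = 0.1) -> float:
    """Split-conformal quantile: given n held-out calibration scores, the
    ceil((n + 1) * (1 - alpha))-th smallest score bounds a fresh score
    with roughly (1 - alpha) coverage, with no distributional assumptions."""
    n = len(calibration_scores)
    k = math.ceil((n + 1) * (1 - alpha))
    return sorted(calibration_scores)[min(k, n) - 1]

# Hypothetical nonconformity scores from a calibration set:
scores = [0.02, 0.05, 0.07, 0.10, 0.12, 0.15, 0.20, 0.25, 0.30, 0.40]
q = conformal_quantile(scores, alpha=0.2)  # ceil(11 * 0.8) = 9th smallest -> 0.30
```

A guardrail can then flag any response whose nonconformity score exceeds `q`, with a calibrated false-alarm rate of about `alpha`.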
## Paths Forward
| Path | Time | What You Get |
|---|---|---|
| Quickstart | 2 min | Score a response, guard an SDK client |
| Why Director-AI | 5 min | Problem statement, decision matrix, cost comparison |
| Tutorials | 30 min | 16 Jupyter notebooks from basics to production |
| API Reference | — | Every public class and function |
| Production Guide | 15 min | Scaling, caching, monitoring, Docker |
| Domain Cookbooks | 10 min | Legal, medical, finance, support recipes |
| Voice AI | 10 min | Async streaming guard + TTS adapters for voice pipelines |
| Glossary | — | 35 terms defined and cross-linked |
## Install

```shell
pip install director-ai                        # base
pip install "director-ai[nli]"                 # + NLI model (recommended)
pip install "director-ai[server]"              # + REST API server
pip install "director-ai[nli,vector,server]"   # everything
```
PyPI: pypi.org/project/director-ai | Source: github.com/anulum/director-ai | Docs: anulum.github.io/director-ai
## Feedback & Bugs
- Bug reports: GitHub Issues
- Feature requests: GitHub Issues
- Security: SECURITY.md
- Commercial inquiries: anulum.li
## Used By
Early adopter logos coming soon. Get in touch to be featured.
## Contributing
See CONTRIBUTING.md for code style, test requirements, and PR workflow.
## License
AGPL-3.0 for open source / research. Commercial licensing available at anulum.li.
Contact: protoscience@anulum.li | GitHub Discussions | www.anulum.li
Maintained by Miroslav Šotek at Anulum. Current release: v3.14.0.
Developed by ANULUM / Fortis Studio