Agents act. Few meaningfully remember their mistakes.
DriftGuard sits between intent and execution — reviewing each proposed action against a semantic graph of past failures, surfacing warnings before damage is done, and recording outcomes so the graph grows smarter with every run.
```shell
pip install driftguard-ai
```
```python
from driftguard import DriftGuard

guard = DriftGuard()

# Review before acting
review = guard.before_step("increase salt")

if review.warnings:
    risk = review.warnings[0].risk
    print(f"⚠ {risk}")
    # → "too salty"

# Record what happened
guard.record(
    action="increase salt",
    feedback="too salty",
    outcome="dish ruined",
)
```

## How it works
### Capabilities

**Memory**
Stores causal chains — action → feedback → outcome — as a semantic graph. Similar mistakes merge automatically so agents don't re-learn the same lesson twice.

**Guard**
Every proposed action is reviewed against memory before execution. Warn, block, or require acknowledgement — configurable per policy without touching your planner.

**Graph**
Merges paraphrased variants, reinforces repeated signals, prunes stale weak memories on schedule. The graph stays healthy without manual curation.

**MCP**
Run as a standalone MCP server or drop the LangGraph review node directly into your planner graph. Works with any tool-calling agent pipeline.
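To make the Memory and Graph ideas concrete, here is a minimal, self-contained sketch of causal-chain storage with similarity-based merging. It is not DriftGuard's implementation: the names (`CausalChain`, `MemoryGraph`) are illustrative, and `difflib.SequenceMatcher` stands in for the semantic (embedding-based) similarity a real graph would use.

```python
from dataclasses import dataclass
from difflib import SequenceMatcher

@dataclass
class CausalChain:
    action: str
    feedback: str
    outcome: str
    weight: int = 1  # reinforced each time a similar chain is recorded

class MemoryGraph:
    """Toy stand-in for a semantic failure graph."""

    def __init__(self, merge_threshold: float = 0.8):
        self.chains: list[CausalChain] = []
        self.merge_threshold = merge_threshold

    def _similar(self, a: str, b: str) -> float:
        # Stand-in for semantic similarity; a real system would embed text
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()

    def record(self, action: str, feedback: str, outcome: str) -> CausalChain:
        # Merge into an existing chain instead of storing a near-duplicate
        for chain in self.chains:
            if self._similar(chain.action, action) >= self.merge_threshold:
                chain.weight += 1  # reinforce the repeated signal
                return chain
        chain = CausalChain(action, feedback, outcome)
        self.chains.append(chain)
        return chain

    def review(self, action: str) -> list[CausalChain]:
        # Past failures whose action resembles the proposed one
        return [c for c in self.chains
                if self._similar(c.action, action) >= self.merge_threshold]

graph = MemoryGraph()
graph.record("increase salt", "too salty", "dish ruined")
graph.record("increase the salt", "too salty", "dish ruined")  # merges
print(len(graph.chains), graph.review("increase salt")[0].weight)  # → 1 2
```

The design point this illustrates: merging happens at write time, so review stays a cheap lookup and repeated mistakes strengthen one node rather than multiplying near-duplicates.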
## Guard Policies

DriftGuard doesn't force a single response model. Pick the policy that matches your agent's risk tolerance — and change it per step if needed.
| Policy | Behavior |
| --- | --- |
| `warn` | Surface warning only — agent decides |
| `block` | Raise exception — hard stop |
| `acknowledge` | Require explicit confirmation |
| `record_only` | Store memory, skip review |

DriftGuard is open source, MIT licensed, and ready for early production experimentation.
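The four policies above amount to a small dispatch decision at review time. The sketch below shows one way that decision could look; `Policy`, `GuardBlocked`, and `apply_policy` are illustrative names, not DriftGuard's actual API.

```python
from enum import Enum

class Policy(Enum):
    WARN = "warn"                # surface warning only; agent decides
    BLOCK = "block"              # raise exception; hard stop
    ACKNOWLEDGE = "acknowledge"  # require explicit confirmation
    RECORD_ONLY = "record_only"  # store memory, skip review

class GuardBlocked(Exception):
    """Raised when a guard policy refuses to let the action proceed."""

def apply_policy(policy: Policy,
                 warnings: list[str],
                 acknowledged: bool = False) -> list[str]:
    """Decide what happens when a proposed action matches past failures."""
    if policy is Policy.RECORD_ONLY or not warnings:
        return []  # review skipped, or nothing in memory matched
    if policy is Policy.BLOCK:
        raise GuardBlocked(f"blocked: {warnings[0]}")
    if policy is Policy.ACKNOWLEDGE and not acknowledged:
        raise GuardBlocked("explicit confirmation required before proceeding")
    return warnings  # warn (or acknowledged): surface and let the agent decide

print(apply_policy(Policy.WARN, ["too salty"]))  # → ['too salty']
```

Because the policy is just an argument to the review step, a planner can run `block` for destructive tools and `warn` everywhere else without any structural changes.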