
Metrics & Observability

DriftGuard tracks runtime metrics for every review, record, prune, and graph mutation.

Get a snapshot

from driftguard import build_runtime

runtime = build_runtime()
snapshot = runtime.metrics_snapshot()

print(snapshot["counters"])
print(snapshot["gauges"])

Or via DriftGuard:

from driftguard import DriftGuard

guard = DriftGuard()
# ... run some reviews and records ...
snapshot = guard.runtime.metrics_snapshot()
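To illustrate the shape of the data a snapshot returns, here is a minimal sketch of a registry that produces the same `{"counters": ..., "gauges": ...}` structure. `MetricsRegistry` is a hypothetical name for illustration only, not DriftGuard's actual implementation:

```python
# Hypothetical sketch of a registry producing a snapshot shaped like
# runtime.metrics_snapshot(). Not DriftGuard's real implementation.
class MetricsRegistry:
    def __init__(self):
        self.counters = {}  # monotonically increasing integers
        self.gauges = {}    # point-in-time float values

    def inc(self, name, amount=1):
        self.counters[name] = self.counters.get(name, 0) + amount

    def set_gauge(self, name, value):
        self.gauges[name] = value

    def snapshot(self):
        # Return copies so callers cannot mutate the live metrics.
        return {"counters": dict(self.counters), "gauges": dict(self.gauges)}

registry = MetricsRegistry()
registry.inc("reviews_total")
registry.set_gauge("last_review_confidence", 0.87)
print(registry.snapshot())
```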

Counters

| Counter | Description |
| --- | --- |
| `reviews_total` | Total `before_step()` calls (non-skipped) |
| `review_warnings_total` | Total warnings surfaced across all reviews |
| `review_confidence_samples_total` | Total reviews counted toward the confidence average |
| `reviews_blocked_total` | Reviews that raised `GuardrailTriggered` |
| `reviews_ack_required_total` | Reviews that raised `GuardrailAcknowledgementRequired` |
| `reviews_skipped_total` | Reviews skipped due to a `record_only` policy |
| `records_total` | Total `record()` calls |
| `nodes_created_total` | New nodes added to the graph |
| `nodes_merged_total` | Nodes merged into existing nodes |
| `edges_created_total` | New edges created |
| `edges_reused_total` | Existing edges incremented |
| `prune_runs_total` | Total `prune()` calls |
| `prune_nodes_removed_total` | Nodes removed by pruning |
| `prune_edges_removed_total` | Edges removed by pruning |
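A few invariants among these counters can be sanity-checked from any snapshot. The sketch below uses made-up sample values, not output from a live runtime, and the invariants assume blocked and ack-required reviews are included in `reviews_total`:

```python
# Sanity checks over counter invariants implied by the table above.
# The snapshot dict is made-up sample data for illustration.
snapshot = {
    "counters": {
        "reviews_total": 42,
        "reviews_blocked_total": 3,
        "reviews_ack_required_total": 2,
        "nodes_created_total": 36,
        "nodes_merged_total": 24,
    }
}
c = snapshot["counters"]

# Blocked and ack-required reviews are subsets of all reviews
# (assuming they are counted in reviews_total).
assert c["reviews_blocked_total"] + c["reviews_ack_required_total"] <= c["reviews_total"]

# Nodes touched by record() calls are split between created and merged.
nodes_touched = c["nodes_created_total"] + c["nodes_merged_total"]
print(nodes_touched)  # → 60
```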

Gauges

| Gauge | Description |
| --- | --- |
| `last_review_confidence` | Confidence of the most recent review |
| `review_confidence_total` | Running sum of confidences (used for the average) |
| `review_confidence_average` | Rolling average confidence across all reviews |
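The relationship between the gauges is direct: the average is the running sum divided by the `review_confidence_samples_total` counter. A small sketch with made-up values:

```python
# Made-up values illustrating how the rolling-average gauge relates
# to the running-sum gauge and the samples counter.
review_confidence_total = 31.08       # gauge: running sum of confidences
review_confidence_samples_total = 42  # counter: reviews counted so far

review_confidence_average = review_confidence_total / review_confidence_samples_total
print(round(review_confidence_average, 2))  # → 0.74
```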

Via MCP

guard_metrics()

Returns the full counters and gauges snapshot as a JSON object, available to any MCP client, including Claude Desktop.

Example snapshot

{
  "counters": {
    "reviews_total": 42,
    "review_warnings_total": 18,
    "reviews_blocked_total": 3,
    "records_total": 12,
    "nodes_created_total": 36,
    "nodes_merged_total": 24,
    "edges_created_total": 24,
    "edges_reused_total": 18,
    "prune_runs_total": 5,
    "prune_nodes_removed_total": 6,
    "prune_edges_removed_total": 4
  },
  "gauges": {
    "last_review_confidence": 0.87,
    "review_confidence_average": 0.74
  }
}
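Derived rates are easy to compute from a snapshot like the one above, for example the share of reviews that were blocked, or how often edges were reused rather than created. The field names come from the tables above; the ratios themselves are not built-in metrics:

```python
# Compute derived rates from an example snapshot (sample values).
snapshot = {
    "counters": {
        "reviews_total": 42,
        "reviews_blocked_total": 3,
        "edges_created_total": 24,
        "edges_reused_total": 18,
    }
}
c = snapshot["counters"]

block_rate = c["reviews_blocked_total"] / c["reviews_total"]
edge_reuse_rate = c["edges_reused_total"] / (
    c["edges_created_total"] + c["edges_reused_total"]
)

print(f"block rate: {block_rate:.1%}")       # → block rate: 7.1%
print(f"edge reuse: {edge_reuse_rate:.1%}")  # → edge reuse: 42.9%
```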