Terminology

Core terms used across AITracer observability, governance, verification, and operational intelligence workflows.

Operational Intelligence

| Term | Definition | Why It Matters |
| --- | --- | --- |
| Cost Attribution | Tracking AI spend by user, workflow, action, or model. | Helps teams identify which features are driving AI costs. |
| Token Efficiency | Measuring how many tokens are consumed for each model interaction. | Helps identify bloated prompts and inefficient workflows. |
| Model Allocation | Distributing workloads across the appropriate model tiers. | Prevents simple tasks from being routed to unnecessarily expensive models. |
| P95 Latency | The latency threshold that 95% of requests complete within; only the slowest 5% exceed it. | Helps teams detect latency spikes before they impact users. |
| Anomaly Detection | Identifying unusual spikes in latency, cost, or model behavior. | Helps teams investigate unexpected operational failures. |
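As a minimal sketch of how a P95 latency figure is derived, the following computes the 95th percentile of raw request durations with the nearest-rank method. The function name and sample data are illustrative, not part of any AITracer API.

```python
import math

def p95_latency(durations_ms):
    """Return the 95th-percentile latency (nearest-rank method)."""
    ordered = sorted(durations_ms)
    rank = math.ceil(len(ordered) * 0.95)  # 1-based rank of the P95 value
    return ordered[rank - 1]

# Nine fast requests and one 2.2 s outlier: P95 surfaces the outlier
# even though the average would barely move.
durations = [120, 95, 110, 480, 100, 105, 130, 90, 115, 2200]
print(p95_latency(durations))  # → 2200
```

This is why P95 is preferred over averages for spike detection: a single slow request shifts the tail percentile immediately.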

Governance & Risk

| Term | Definition | Why It Matters |
| --- | --- | --- |
| Policy Evaluation | Reviewing AI activity against predefined governance rules. | Ensures workflows meet operational and compliance requirements. |
| Risk Detection | Identifying sensitive data patterns such as credentials, payment data, or PII. | Helps teams prevent risky outputs and compliance violations. |
| Audit Record | A stored record of AI execution activity. | Creates historical accountability for model behavior. |
| Governance Controls | Approval workflows and operational safeguards for high-risk AI activity. | Prevents unauthorized or risky actions. |
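A hedged sketch of pattern-based risk detection: scanning text for sensitive-data shapes such as emails or cloud credentials. The pattern set and return format here are illustrative only; production detectors use far broader rule sets and validation.

```python
import re

# Illustrative patterns, not an exhaustive or production rule set.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "aws_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
}

def detect_risks(text):
    """Return the names of sensitive-data patterns found in text."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]

print(detect_risks("Contact alice@example.com with key AKIA0123456789ABCDEF"))
# → ['email', 'aws_key']
```

Findings like these typically feed policy evaluation, which decides whether the activity is blocked, flagged, or routed for approval.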

Verification & Audit Vault

| Term | Definition | Why It Matters |
| --- | --- | --- |
| Audit Vault | AITracer’s storage layer for execution records. | Centralizes trace history for compliance and investigations. |
| SHA-256 Verification | Cryptographic hashing used to validate record integrity. | Detects unauthorized modifications. |
| Integrity Validation | Recalculating hashes to confirm records remain unchanged. | Proves records remain tamper-evident over time. |
| Execution Record | A complete record of a single AI action. | Captures model usage, latency, cost, policies, and verification metadata. |
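The mechanics of SHA-256 verification can be sketched as follows: hash a canonical form of the record at write time, then recompute later and compare. The record fields below are hypothetical, not an AITracer schema.

```python
import hashlib
import json

def record_hash(record):
    """SHA-256 digest over a canonical JSON serialization of a record."""
    canonical = json.dumps(record, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

record = {"model": "gpt-x", "latency_ms": 412, "cost_usd": 0.0031}
stored_digest = record_hash(record)  # saved alongside the record

# Integrity validation: recompute and compare.
assert record_hash(record) == stored_digest   # unchanged → verifies
record["cost_usd"] = 0.0001
assert record_hash(record) != stored_digest   # any edit → detected
```

Canonical serialization (sorted keys, fixed separators) matters: the same record must always hash to the same digest, regardless of key order.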

Trace Operations

| Term | Definition | Why It Matters |
| --- | --- | --- |
| Trace | A record of a complete AI interaction from request to response. | Helps teams understand what happened during execution. |
| Span | A smaller operation inside a trace. | Helps teams isolate bottlenecks and failures. |
| Model Invocation | A single call made to an LLM provider. | Tracks provider usage and execution behavior. |
| Workflow Execution | A sequence of AI tasks completed across a larger workflow. | Helps teams understand multi-step automation performance. |
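To make the trace/span relationship concrete, here is a minimal sketch: a trace covers one AI interaction end to end, and each span times a smaller operation inside it. Class and field names are illustrative, not an AITracer API.

```python
import time
import uuid

class Span:
    """One timed operation (e.g. a model invocation) inside a trace."""
    def __init__(self, name):
        self.name = name
        self.start = time.monotonic()
        self.duration_ms = None

    def end(self):
        self.duration_ms = (time.monotonic() - self.start) * 1000

class Trace:
    """A complete AI interaction, composed of one or more spans."""
    def __init__(self, name):
        self.trace_id = uuid.uuid4().hex
        self.name = name
        self.spans = []

    def span(self, name):
        s = Span(name)
        self.spans.append(s)
        return s

trace = Trace("answer-user-question")
s = trace.span("model-invocation")
time.sleep(0.01)   # stand-in for the LLM provider call
s.end()
print(trace.name, [(sp.name, sp.duration_ms) for sp in trace.spans])
```

Per-span durations are what make bottleneck isolation possible: a slow trace can be attributed to the specific span (retrieval, invocation, post-processing) that consumed the time.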