API reference
Interactive documentation for the trace HTTP API. Authentication and entitlement behavior are described in the spec overview.
Integration quick start
Start here: use workspace keys (akt_…) on the routes below — never confuse them with OpenAI or other model provider secrets. Full schemas live in the interactive API reference.
Install
npm install aitracer-openai-agents @openai/agents

Register the tracing bridge once at process startup. Spans flow to AITracer with your workspace key; use the interactive reference below for request bodies and auth headers.
Example
// npm install aitracer-openai-agents @openai/agents
import { registerAitracerTracing } from "aitracer-openai-agents";

registerAitracerTracing({
  baseUrl: "https://your-app.example.com",
  apiKey: process.env.AITRACER_API_KEY!, // akt_…
  environment: "prod",
});

Native ingest: POST /api/traces. Configure keys in Settings.
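If you skip the bridge, the native ingest route can be called directly. A minimal Python sketch, assuming a JSON body with `name`, `environment`, and `spans` fields and a `Bearer` workspace key in the `Authorization` header; confirm the real schema and auth scheme in the Live API Reference below:

```python
import json
import urllib.request


def build_trace_payload(name: str, spans: list[dict]) -> dict:
    # Hypothetical body shape -- check field names against the Live API Reference.
    return {"name": name, "environment": "prod", "spans": spans}


def post_trace(base_url: str, api_key: str, payload: dict) -> int:
    # Workspace keys (akt_...) are assumed to go in the Authorization header.
    req = urllib.request.Request(
        f"{base_url}/api/traces",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status


payload = build_trace_payload("checkout-agent", [{"name": "plan", "duration_ms": 12}])
```

The payload builder is separate from the transport so the body can be unit-tested without a network call.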
Official SDKs
First-party entrypoints for the same HTTP surface as the live reference below. The Python and TypeScript SDKs ship from the main repository; the OpenTelemetry path follows the collector configuration patterns in the same repo.
Python
Install
pip install aitracer-sdk

Batch jobs, FastAPI services, LangGraph nodes, and Python-first agent runtimes.
TypeScript
Install
npm install @noirstack/aitracer-sdk

Next.js routes, Node services, dashboards, and orchestration workers using fetch.
OpenTelemetry
Install
Collector + OTLP/HTTP JSON

Forward resourceSpans from existing pipelines; the Python helpers in aitracer-sdk are optional.
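The envelope a collector forwards over OTLP/HTTP JSON looks like the following. The service name and span values are illustrative, but the `resourceSpans` → `scopeSpans` → `spans` nesting, hex IDs, and nanosecond string timestamps follow the OTLP JSON encoding:

```python
import json

# Minimal OTLP/HTTP JSON envelope. Trace/span IDs are lowercase hex;
# timestamps are unix nanoseconds encoded as strings.
otlp_body = {
    "resourceSpans": [
        {
            "resource": {
                "attributes": [
                    {"key": "service.name", "value": {"stringValue": "agent-worker"}}
                ]
            },
            "scopeSpans": [
                {
                    "scope": {"name": "aitracer-demo"},
                    "spans": [
                        {
                            "traceId": "5b8aa5a2d2c872e8321cf37308d69df2",
                            "spanId": "051581bf3cb55c13",
                            "name": "llm_call",
                            "kind": 1,
                            "startTimeUnixNano": "1700000000000000000",
                            "endTimeUnixNano": "1700000000500000000",
                        }
                    ],
                }
            ],
        }
    ]
}

body = json.dumps(otlp_body)
```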
Guardrails
Install
pip install aitracer-sdk

Send normalized policy / classifier events to governance and audit via the guardrails ingest route.
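A policy or classifier event can be normalized before it is sent to the guardrails ingest route. Every field name in this sketch is an assumption to be checked against the Live API Reference; the point is validating and shaping the event once, at the edge:

```python
import datetime


def normalize_guardrail_event(policy: str, verdict: str, score: float) -> dict:
    # Hypothetical normalized shape for a policy / classifier event.
    if verdict not in {"allow", "block", "flag"}:
        raise ValueError(f"unexpected verdict: {verdict}")
    return {
        "policy": policy,
        "verdict": verdict,
        "score": round(score, 4),
        "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }


event = normalize_guardrail_event("pii-detector", "flag", 0.8731)
```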
OpenAI Agents
Install
npm install @noirstack/aitracer-sdk · pip install aitracer-sdk

Export structured spans from OpenAI Agents–style runtimes into AITracer traces.
Starter templates
Clone-ready folders in the noirstack/aitracer repo; faster to ship than reading reference docs alone.
- FastAPI + AITracer: workspace-key ingest from a minimal FastAPI service. View on GitHub →
- OpenAI Agents + AITracer: TypeScript bridge pattern for Agents span export payloads. View on GitHub →
- OTLP + AITracer: collector exporter snippet + JSON OTLP/HTTP alignment. View on GitHub →
- Next.js + AITracer: App Router API route posting a demo trace with @noirstack/aitracer-sdk. View on GitHub →
- LangGraph + AITracer: Python node loop with per-step trace.record calls. View on GitHub →
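The LangGraph template's per-step pattern can be sketched without the SDK. Here a local `StepRecorder` stands in for the SDK's `trace.record` call, whose real signature should be taken from the aitracer-sdk docs:

```python
from typing import Any, Callable


class StepRecorder:
    # Local stand-in for the SDK's trace.record call; collects records in memory.
    def __init__(self) -> None:
        self.records: list[dict[str, Any]] = []

    def record(self, step: str, **fields: Any) -> None:
        self.records.append({"step": step, **fields})


def run_graph(
    steps: list[tuple[str, Callable[[str], str]]],
    state: str,
    recorder: StepRecorder,
) -> str:
    # One record per node, mirroring the template's per-step loop.
    for name, fn in steps:
        state = fn(state)
        recorder.record(name, output_len=len(state))
    return state


recorder = StepRecorder()
result = run_graph(
    [("plan", str.upper), ("answer", lambda s: s + "!")],
    "hello",
    recorder,
)
```

Swapping `StepRecorder` for the real client keeps the graph code unchanged, since only the `record` call site differs.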
Explore the full API surface
Browse trace ingestion endpoints, integrations, authentication flows, model controls, and workspace APIs. Jump into the live reference below, or open a section directly.
Live API Reference
Interactive OpenAPI — same origin as your deployment.
Implementation examples
Minimal patterns for each integration path. Request and schema details live in the Live API Reference above.
import json
import time
import hashlib
import datetime
from typing import Any, Dict, Optional

from agents import (
    Agent,
    Runner,
    RunConfig,
    add_trace_processor,
    custom_span,
    trace,
)
from agents.tracing import TracingProcessor


class JsonlMirrorTraceProcessor(TracingProcessor):
    """Mirrors trace and span lifecycle events to a local JSONL file."""

    def __init__(self, file_path: str = "agent_spans.jsonl"):
        self.file_path = file_path

    def _write(self, payload: Dict[str, Any]) -> None:
        with open(self.file_path, "a", encoding="utf-8") as f:
            f.write(json.dumps(payload, ensure_ascii=False) + "\n")

    def on_trace_start(self, trace_obj) -> None:
        self._write({
            "event": "trace_start",
            "trace_id": trace_obj.trace_id,
            "name": trace_obj.name,
            "at": datetime.datetime.utcnow().isoformat() + "Z",
        })

    def on_trace_end(self, trace_obj) -> None:
        self._write({
            "event": "trace_end",
            "trace_id": trace_obj.trace_id,
            "name": trace_obj.name,
            "at": datetime.datetime.utcnow().isoformat() + "Z",
        })

    def on_span_start(self, span) -> None:
        self._write({
            "event": "span_start",
            "trace_id": span.trace_id,
            "span_id": span.span_id,
            "parent_id": span.parent_id,
            "span_type": span.span_data.type,
            "at": datetime.datetime.utcnow().isoformat() + "Z",
        })

    def on_span_end(self, span) -> None:
        self._write({
            "event": "span_end",
            "trace_id": span.trace_id,
            "span_id": span.span_id,
            "parent_id": span.parent_id,
            "span_type": span.span_data.type,
            "started_at": span.started_at,
            "ended_at": span.ended_at,
        })

    def force_flush(self) -> None:
        # Each append opens, writes, and closes the file; nothing is buffered.
        return

    def shutdown(self) -> None:
        return
class AgentsTraceabilityWrapper:
    def __init__(
        self,
        store_raw_prompt: bool = True,
        store_raw_response: bool = True,
        decision_log_file: str = "decision_records.jsonl",
    ):
        self.store_raw_prompt = store_raw_prompt
        self.store_raw_response = store_raw_response
        self.decision_log_file = decision_log_file
        # Mirror every span to JSONL alongside the default exporter.
        add_trace_processor(JsonlMirrorTraceProcessor())

    @staticmethod
    def _canonicalize(data: Any) -> str:
        # Stable serialization (sorted keys, no whitespace) so hashes are reproducible.
        return json.dumps(data, sort_keys=True, separators=(",", ":"))

    @staticmethod
    def _sha256(content: str) -> str:
        return hashlib.sha256(content.encode("utf-8")).hexdigest()

    async def run(
        self,
        *,
        prompt: str,
        workflow_name: str,
        group_id: Optional[str],
        context: Dict[str, Any],
        model: str = "gpt-4.1-mini",
    ) -> str:
        started = time.perf_counter()
        agent = Agent(
            name="Traceability Assistant",
            model=model,
            instructions="You answer clearly.",
        )
        with trace(workflow_name=workflow_name, group_id=group_id, metadata=context):
            # Pre-processing shows up as its own span in the trace tree.
            with custom_span("input_validation", {"prompt_len": len(prompt)}):
                cleaned_prompt = prompt.strip()
            result = await Runner.run(
                agent,
                cleaned_prompt,
                run_config=RunConfig(
                    workflow_name=workflow_name,
                    group_id=group_id,
                    trace_metadata=context,
                    trace_include_sensitive_data=(
                        self.store_raw_prompt and self.store_raw_response
                    ),
                ),
            )
        latency_ms = round((time.perf_counter() - started) * 1000, 2)
        response_text = str(result.final_output)
        # Append a decision record so each run is auditable by content hash
        # even when raw prompt/response storage is disabled.
        record = {
            "workflow": workflow_name,
            "group_id": group_id,
            "model": model,
            "latency_ms": latency_ms,
            "prompt_sha256": self._sha256(self._canonicalize(cleaned_prompt)),
            "response_sha256": self._sha256(self._canonicalize(response_text)),
            "at": datetime.datetime.utcnow().isoformat() + "Z",
        }
        with open(self.decision_log_file, "a", encoding="utf-8") as f:
            f.write(json.dumps(record, ensure_ascii=False) + "\n")
        return response_text
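The wrapper's hashing helpers depend on one property: canonical serialization makes the hash independent of key order, so decision records are comparable across runs. That property can be checked in isolation:

```python
import hashlib
import json


def canonicalize(data) -> str:
    # Same canonical form as the wrapper: sorted keys, no whitespace.
    return json.dumps(data, sort_keys=True, separators=(",", ":"))


def sha256_hex(content: str) -> str:
    return hashlib.sha256(content.encode("utf-8")).hexdigest()


# Two dicts with the same content but different key order hash identically.
a = sha256_hex(canonicalize({"prompt": "hi", "model": "gpt-4.1-mini"}))
b = sha256_hex(canonicalize({"model": "gpt-4.1-mini", "prompt": "hi"}))
```

Without `sort_keys` and fixed separators, semantically identical records could produce different digests and break audit comparisons.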
Start building with AITracer
Ship traced agents, audits, and OTLP pipelines with workspace-scoped keys and the same HTTP surface you just explored.