Scan History
| Scan | Workflow | Status | Current | Candidate | Change | Result | Time |
|---|---|---|---|---|---|---|---|
Workflows
Failure Analysis
Run a scan to see failure clusters
Select a cluster to view details
Rubrics
Rubric Library
Connect Your Agents ▸
Point your existing OpenTelemetry exporter at Spectral. Two environment variables, zero code changes. Spectral is never in the hot path: if we go down, your agents keep running.
Environment Variables
```shell
export OTEL_EXPORTER_OTLP_ENDPOINT=https://api.runspectral.com
export OTEL_EXPORTER_OTLP_HEADERS="X-Spectral-Key=sk-spectral-your-key-here"
```
Python (auto-instrument)
```python
# pip install opentelemetry-instrumentation-langchain
from opentelemetry.instrumentation.langchain import LangchainInstrumentor

LangchainInstrumentor().instrument()  # That's it. All spans flow to Spectral.
```
Works with LangChain, LlamaIndex, CrewAI, OpenAI Agents SDK, Vercel AI SDK, and any OTel-compatible framework.
Developer Tools ▸
Case Explorer
| Case ID | Agent | Config | Status | Score | Failure Clusters |
|---|---|---|---|---|---|
Experiments
Objective: Accuracy 1.0× · Hallucination 5.0× · Tail Risk 10.0× · Cost 0.2×
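The weights above suggest a linear objective that rewards accuracy and penalizes hallucination, tail risk, and cost. A minimal sketch of how such a score might combine per-metric rates; the combination formula, metric names, and example values are assumptions, only the weights come from the panel above:

```python
# Hypothetical linear scoring objective. Only the weights
# (Accuracy 1.0x, Hallucination 5.0x, Tail Risk 10.0x, Cost 0.2x)
# come from the UI; the formula and metrics are illustrative.
WEIGHTS = {
    "accuracy": 1.0,       # reward: fraction of cases passed
    "hallucination": 5.0,  # penalty: hallucination rate
    "tail_risk": 10.0,     # penalty: worst-case failure rate
    "cost": 0.2,           # penalty: normalized cost per case
}

def objective(metrics: dict) -> float:
    """Higher is better: reward accuracy, penalize everything else."""
    return (
        WEIGHTS["accuracy"] * metrics["accuracy"]
        - WEIGHTS["hallucination"] * metrics["hallucination"]
        - WEIGHTS["tail_risk"] * metrics["tail_risk"]
        - WEIGHTS["cost"] * metrics["cost"]
    )

baseline = {"accuracy": 0.80, "hallucination": 0.05, "tail_risk": 0.02, "cost": 0.10}
candidate = {"accuracy": 0.85, "hallucination": 0.02, "tail_risk": 0.01, "cost": 0.12}
print(objective(candidate) > objective(baseline))  # candidate scores higher here
```

The heavy Tail Risk weight means a candidate that is slightly more accurate on average but fails catastrophically more often can still lose to the baseline.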
Validation Suite
Run the 3-case proof pack: Weak Rescue, Decent Lift, False Defense
Run the validation suite to generate a trust report
Tests 3 scenarios: weak agent rescue, decent agent lift, and false improvement defense
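The three scenarios above map naturally onto assertions about before/after scores: a weak agent must be rescued, a decent agent must still improve, and a no-op change must not be reported as an improvement. A hypothetical sketch of such checks; the function names, thresholds, and example scores are all assumptions, not the product's actual implementation:

```python
# Hypothetical checks for the three validation scenarios.
# Scores are illustrative pass rates in [0, 1]; thresholds are assumptions.

def check_weak_rescue(before: float, after: float) -> bool:
    """A weak agent should improve substantially after changes."""
    return before < 0.5 and after - before >= 0.2

def check_decent_lift(before: float, after: float) -> bool:
    """An already-decent agent should still show a measurable lift."""
    return before >= 0.5 and after > before

def check_false_defense(before: float, after: float) -> bool:
    """A cosmetic change must NOT be reported as an improvement."""
    return abs(after - before) < 0.02

report = {
    "weak_rescue": check_weak_rescue(0.30, 0.65),
    "decent_lift": check_decent_lift(0.70, 0.78),
    "false_defense": check_false_defense(0.70, 0.71),
}
print(all(report.values()))
```

The third check is the interesting one: a trustworthy evaluator must be able to say "no change", not just rank candidates.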