# Getting Started
This guide walks you through setting up SideSeat and collecting your first traces from an AI agent.
## Prerequisites

- Rust 1.75 or higher
- Node.js 20.19+ or 22.12+
- Make
## Installation

### Clone and Build

```bash
# Clone the repository
git clone https://github.com/spugachev/sideseat.git
cd sideseat
```
```bash
# Install dependencies and build
make setup
```

### Start the Server
```bash
# Start in development mode
make dev
```

You'll see output like:
```
SideSeat v1.0.4
Local: http://127.0.0.1:5001/ui?token=abc123...
```

Click the URL to open the dashboard in your browser.
## Collecting Traces
SideSeat accepts traces via the standard OpenTelemetry Protocol (OTLP). Configure your agent to send traces to one of the following endpoints (a minimal smoke test follows the list):
- HTTP: `http://localhost:5001/otel/v1/traces`
- gRPC: `localhost:4317`
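To verify ingestion end to end before wiring up a real agent, you can hand-send a single span to the HTTP endpoint. A minimal sketch, assuming the `opentelemetry-sdk` and `opentelemetry-exporter-otlp-proto-http` packages are installed:

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

# Point the exporter at SideSeat's OTLP/HTTP endpoint
provider = TracerProvider()
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:5001/otel/v1/traces"))
)
trace.set_tracer_provider(provider)

# Emit one empty test span
with trace.get_tracer("smoke-test").start_as_current_span("hello-sideseat"):
    pass

# Flush so the span is exported before the script exits
provider.force_flush()
```

If the span shows up in the dashboard, the pipeline works and you can move on to the framework-specific setups below.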
### Python with Strands SDK

Strands is an AI agent framework with built-in OpenTelemetry support:
```python
from strands import Agent
from strands.models import BedrockModel
from strands.telemetry import StrandsTelemetry

# Configure telemetry to export to SideSeat
telemetry = StrandsTelemetry()
telemetry.setup_otlp_exporter(endpoint="http://localhost:5001/otel/v1/traces")

# Create model
model = BedrockModel(model_id="us.anthropic.claude-haiku-4-5-20251001-v1:0")

# Create agent with trace attributes for filtering
agent = Agent(
    name="my-assistant",
    model=model,
    trace_attributes={
        "session.id": "conversation-123",  # Group traces by session
        "user.id": "user-456",             # Track by user
        "environment": "development",      # Filter by environment
    },
)

# Run the agent
response = agent("What's the capital of France?")
print(response)

# Important: Flush telemetry before exiting
telemetry.tracer_provider.force_flush()
```

### Python with OpenTelemetry SDK
For any Python application, use the OpenTelemetry SDK directly:
```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.resources import Resource

# Configure resource attributes
resource = Resource.create({
    "service.name": "my-ai-agent",
    "deployment.environment": "development",
})

# Configure exporter
exporter = OTLPSpanExporter(endpoint="http://localhost:5001/otel/v1/traces")
provider = TracerProvider(resource=resource)
provider.add_span_processor(BatchSpanProcessor(exporter))
trace.set_tracer_provider(provider)

# Create tracer
tracer = trace.get_tracer(__name__)

# Create traces with GenAI attributes
with tracer.start_as_current_span("llm.completion") as span:
    span.set_attribute("gen_ai.system", "openai")
    span.set_attribute("gen_ai.request.model", "gpt-4")
    span.set_attribute("gen_ai.operation.name", "chat")

    # Your LLM call here
    response = call_openai()

    # Record token usage
    span.set_attribute("gen_ai.usage.input_tokens", 150)
    span.set_attribute("gen_ai.usage.output_tokens", 200)
```

### Using gRPC (Higher Throughput)
For higher throughput, use the gRPC endpoint:
```python
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

# gRPC exporter (insecure for local dev)
exporter = OTLPSpanExporter(endpoint="localhost:4317", insecure=True)
```

### Node.js with OpenTelemetry SDK
```javascript
const { NodeTracerProvider } = require('@opentelemetry/sdk-trace-node');
const { OTLPTraceExporter } = require('@opentelemetry/exporter-trace-otlp-http');
const { BatchSpanProcessor } = require('@opentelemetry/sdk-trace-base');
const { Resource } = require('@opentelemetry/resources');

// Configure resource
const resource = new Resource({
  'service.name': 'my-ai-agent',
  'deployment.environment': 'development',
});

// Configure exporter
const exporter = new OTLPTraceExporter({
  url: 'http://localhost:5001/otel/v1/traces',
});

// Set up provider
const provider = new NodeTracerProvider({ resource });
provider.addSpanProcessor(new BatchSpanProcessor(exporter));
provider.register();

// Create tracer
const tracer = provider.getTracer('my-ai-agent');

// Create spans
const span = tracer.startSpan('llm.completion');
span.setAttribute('gen_ai.system', 'anthropic');
span.setAttribute('gen_ai.request.model', 'claude-3-opus');
// ... your code ...
span.end();
```

## Viewing Traces
- Open the SideSeat dashboard at `http://localhost:5001/ui`
- Traces appear in real time as they're received
- Click a trace to see its spans and details
- Use filters to find specific traces by service, framework, or attributes
## Filtering Traces
Use the filter panel to narrow down traces:
- Service: Filter by service name (e.g., `my-ai-agent`)
- Framework: Filter by detected framework (strands, langchain, openai)
- Errors Only: Show only traces with errors
- Search: Full-text search on trace IDs and service names
- Attributes: Filter by indexed attributes (environment, user.id, etc.)
## Understanding Span Data
Each span shows the following; the sketch after the list shows how to populate these fields from your own code:
- Basic Info: Name, service, framework, duration
- GenAI Fields: Model, system, tokens used, TTFT (time to first token)
- Events: User messages, assistant responses, tool calls
- Attributes: Custom fields you've attached
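If you are instrumenting by hand, these fields come from the attributes and events you attach to each span. A sketch using the OpenTelemetry Python SDK, assuming a tracer provider is already configured as in the earlier examples; the `gen_ai.*` attribute keys match the earlier example, while the event names follow the OpenTelemetry GenAI semantic conventions and are an assumption about what SideSeat surfaces under Events:

```python
from opentelemetry import trace

tracer = trace.get_tracer(__name__)

with tracer.start_as_current_span("llm.completion") as span:
    # GenAI Fields: model, system, and token usage come from gen_ai.* attributes
    span.set_attribute("gen_ai.system", "openai")
    span.set_attribute("gen_ai.request.model", "gpt-4")
    span.set_attribute("gen_ai.usage.input_tokens", 150)
    span.set_attribute("gen_ai.usage.output_tokens", 200)

    # Events: conversation turns recorded as span events (names follow the
    # OTel GenAI conventions; whether SideSeat renders these is an assumption)
    span.add_event("gen_ai.user.message", {"content": "What's the capital of France?"})
    span.add_event("gen_ai.assistant.message", {"content": "Paris."})

    # Attributes: anything else you attach shows up in the Attributes section
    span.set_attribute("environment", "development")
```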
## Real-time Streaming
Subscribe to trace events programmatically using Server-Sent Events:
```javascript
const eventSource = new EventSource('http://localhost:5001/api/v1/traces/sse');

eventSource.onmessage = (event) => {
  const payload = JSON.parse(event.data);

  switch (payload.event.type) {
    case 'NewSpan':
      console.log('New span received:', payload.event.data);
      break;
    case 'TraceCompleted':
      console.log('Trace finished:', payload.event.data.trace_id);
      break;
  }
};

eventSource.onerror = (error) => {
  console.error('Connection error:', error);
  eventSource.close();
};
```

## Using Sessions
Sessions group related traces (e.g., a multi-turn conversation). Set the `session.id` attribute on your traces:
```python
# Strands
agent = Agent(
    model=model,
    trace_attributes={"session.id": "conversation-abc123"}
)

# OpenTelemetry SDK
span.set_attribute("session.id", "conversation-abc123")
```

Then view sessions in the dashboard or query via API:
```bash
# List all sessions
curl http://localhost:5001/api/v1/sessions

# Get traces for a session
curl http://localhost:5001/api/v1/sessions/conversation-abc123/traces
```

## Querying the API
SideSeat provides a REST API for programmatic access:
```bash
# List recent traces
curl http://localhost:5001/api/v1/traces

# Filter by service
curl "http://localhost:5001/api/v1/traces?service=my-ai-agent"

# Get a specific trace
curl http://localhost:5001/api/v1/traces/abc123def456

# Get spans for a trace
curl "http://localhost:5001/api/v1/spans?trace_id=abc123def456"

# Get filter options (for building UIs)
curl http://localhost:5001/api/v1/traces/filters
```

See the API Reference for complete documentation.
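The same endpoints are straightforward to call from Python as well; a sketch using the third-party `requests` package (any HTTP client works):

```python
import requests

BASE = "http://localhost:5001/api/v1"

# List recent traces for one service (same query as the curl example above)
resp = requests.get(f"{BASE}/traces", params={"service": "my-ai-agent"})
resp.raise_for_status()
print(resp.json())

# Discover the available filter options (handy when building UIs)
print(requests.get(f"{BASE}/traces/filters").json())
```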
## Configuration
### Custom Port
```bash
# CLI flag
sideseat start --port 8080

# Environment variable
SIDESEAT_PORT=8080 make dev
```

### Disable Authentication
For development, you can disable authentication:
```bash
sideseat start --no-auth
```

### Set Retention
Configure how long to keep trace data:
{ "otel": { "retention": { "days": 7 } }}See Configuration for all options.
## Troubleshooting
### Traces Not Appearing
- Check the endpoint URL - Ensure your exporter points to `http://localhost:5001/otel/v1/traces`
- Flush before exit - Call `force_flush()` on your tracer provider before the process exits (see the sketch after this list)
- Check server logs - Look for ingestion errors in the terminal running SideSeat
- Verify OTel is enabled - Check that `otel.enabled` is `true` in your config
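For the flush step, a sketch of an explicit flush, assuming the global tracer provider is the SDK `TracerProvider` set up in the examples above (the default no-op provider does not expose `force_flush`):

```python
from opentelemetry import trace

# Works when trace.set_tracer_provider(...) installed an SDK TracerProvider;
# force_flush blocks until pending spans are exported (or the timeout passes)
provider = trace.get_tracer_provider()
provider.force_flush(timeout_millis=5000)
```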
### High Memory Usage
Reduce buffer sizes if memory is a concern:
{ "otel": { "ingestion": { "buffer_max_spans": 1000, "buffer_max_bytes": 5242880 } }}Connection Refused
Section titled “Connection Refused”- Ensure SideSeat is running
- Check the port isn’t blocked by a firewall
- For gRPC, ensure you’re using the correct port (4317 by default)
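To rule out networking quickly, a plain TCP probe of the default ports is enough; a sketch using only the Python standard library:

```python
import socket

# connect_ex returns 0 when something is listening and reachable
for port in (5001, 4317):  # HTTP/UI and gRPC defaults
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(2)
        status = "open" if s.connect_ex(("localhost", port)) == 0 else "closed/blocked"
        print(f"localhost:{port} -> {status}")
```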
## Next Steps
- OpenTelemetry Collector - Advanced configuration for trace collection
- REST API Reference - Complete API documentation
- Configuration - All configuration options
- Authentication - Secure your instance