AI agents are hard to debug. Requests fly by, context builds up, and when something fails you’re left guessing. SideSeat captures every LLM call, tool call, and agent decision, then displays them in a web UI as they happen.

1. Start the server

   ```shell
   npx sideseat
   ```

2. Install the SDK

   ```shell
   pip install strands-agents sideseat
   # or
   uv add strands-agents sideseat
   ```

3. Initialize in your code

   ```python
   from strands import Agent
   from sideseat import SideSeat, Frameworks

   SideSeat(framework=Frameworks.Strands)  # add this

   agent = Agent()  # uses Amazon Bedrock by default
   agent("Analyze this dataset...")
   ```

Open http://localhost:5388. You’ll see a live timeline of each prompt, tool call, and model response.

```python
from sideseat import SideSeat, Frameworks
from strands import Agent

SideSeat(framework=Frameworks.Strands)

agent = Agent()
response = agent("What is 2+2?")
print(response)
```

See all supported frameworks.

SideSeat includes a built-in MCP server that gives your coding agent direct access to your agent’s traces, conversations, and costs. Connect it and let your coding tool optimize prompts, debug failures, and reduce costs using real data.

```shell
# Kiro CLI
kiro-cli mcp add --name sideseat --url http://localhost:5388/api/v1/projects/default/mcp

# Claude Code
claude mcp add --transport http sideseat http://localhost:5388/api/v1/projects/default/mcp

# OpenAI Codex
codex mcp add --transport http sideseat http://localhost:5388/api/v1/projects/default/mcp
```

See the MCP Server guide for Kiro, Cursor, and other clients.
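Some clients register MCP servers through a JSON configuration file rather than a CLI command (Cursor, for instance, reads a project-level `.cursor/mcp.json`). As a sketch, an equivalent entry would look roughly like the following; the exact schema, key names, and file location vary by client, so check your client's MCP documentation before copying it verbatim:

```json
{
  "mcpServers": {
    "sideseat": {
      "url": "http://localhost:5388/api/v1/projects/default/mcp"
    }
  }
}
```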