# Microsoft Agent Framework
Microsoft Agent Framework (agent-framework) is a Python SDK for building AI agents with built-in OpenTelemetry support. SideSeat captures runs from agents using OpenAI, Anthropic, and Azure OpenAI providers.
## Quick Start
1. **Start SideSeat**

   ```sh
   npx sideseat
   ```
2. **Install dependencies**

   ```sh
   pip install agent-framework sideseat
   # or with uv
   uv add agent-framework sideseat
   ```
3. **Add telemetry**

   ```python
   import asyncio

   from sideseat import SideSeat, Frameworks
   from agent_framework import Agent
   from agent_framework.openai import OpenAIChatClient

   SideSeat(framework=Frameworks.AgentFramework)

   client = OpenAIChatClient(model_id="gpt-5-nano-2025-08-07")
   agent = Agent(client=client, instructions="You are a helpful assistant.")

   result = asyncio.run(agent.run("What is the capital of France?"))
   print(result.text)
   ```
4. **View runs**

   Open http://localhost:5388 to see your runs.
## Without SideSeat SDK
1. **Start SideSeat**

   ```sh
   npx sideseat
   ```
2. **Set the endpoint**

   ```sh
   export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:5388/otel/default
   ```
3. **Install dependencies**

   ```sh
   pip install agent-framework opentelemetry-sdk opentelemetry-exporter-otlp
   # or with uv
   uv add agent-framework opentelemetry-sdk opentelemetry-exporter-otlp
   ```
4. **Add telemetry**

   ```python
   from agent_framework.observability import OBSERVABILITY_SETTINGS
   from opentelemetry import trace
   from opentelemetry.sdk.trace import TracerProvider
   from opentelemetry.sdk.trace.export import BatchSpanProcessor
   from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

   OBSERVABILITY_SETTINGS.enable_instrumentation = True
   OBSERVABILITY_SETTINGS.enable_sensitive_data = True

   provider = TracerProvider()
   provider.add_span_processor(BatchSpanProcessor(OTLPSpanExporter()))
   trace.set_tracer_provider(provider)
   ```
5. **View runs**

   Open http://localhost:5388 to see your runs.
## Providers

Agent Framework supports OpenAI, OpenAI Responses, and Anthropic:
```python
from agent_framework.openai import OpenAIChatClient

# Uses OPENAI_API_KEY env var
client = OpenAIChatClient(model_id="gpt-5-nano-2025-08-07")
```

```python
from agent_framework.openai import OpenAIResponsesClient

# Supports reasoning_effort for extended thinking
client = OpenAIResponsesClient(model_id="gpt-5-nano-2025-08-07")
```

```python
from agent_framework.anthropic import AnthropicClient

# Uses ANTHROPIC_API_KEY env var
client = AnthropicClient(model_id="claude-haiku-4-5-20251001")
```

## Tools

Define tools with the `@tool` decorator using `Annotated` type hints:
```python
from typing import Annotated

from agent_framework import Agent, tool
from pydantic import Field

@tool(approval_mode="never_require")
def get_weather(
    city: Annotated[str, Field(description="The city name")],
) -> str:
    """Get weather for a city."""
    return f"Sunny in {city}, 72°F"

agent = Agent(
    client=client,
    instructions="You are a weather assistant.",
    tools=[get_weather],
)

result = await agent.run("What's the weather in Paris?")
```

SideSeat shows each tool call with inputs and outputs in the message thread.
## Multi-Agent Concurrency

Run multiple specialized agents in parallel using `asyncio.gather()`:
```python
import asyncio

from agent_framework import Agent

researcher = Agent(client=client, instructions="You are a researcher.")
technical = Agent(client=client, instructions="You are a technical expert.")

prompt = "Evaluate building a weather app"

researcher_result, technical_result = await asyncio.gather(
    researcher.run(prompt),
    technical.run(prompt),
)
```
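The fan-out pattern can be sketched with stub coroutines in place of real agents (`agent_stub` is illustrative, not part of the SDK): `gather()` awaits both concurrently and returns results in argument order.

```python
import asyncio

# Stub coroutine standing in for Agent.run (illustrative, not part of the SDK)
async def agent_stub(name: str, prompt: str) -> str:
    await asyncio.sleep(0)  # yield to the event loop, as a network call would
    return f"{name}: {prompt}"

async def main() -> list[str]:
    # gather() runs both coroutines concurrently and preserves argument order
    return await asyncio.gather(
        agent_stub("researcher", "Evaluate building a weather app"),
        agent_stub("technical", "Evaluate building a weather app"),
    )

researcher_result, technical_result = asyncio.run(main())
print(researcher_result)  # researcher: Evaluate building a weather app
```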
## Structured Output

Extract structured data using Pydantic models via the `response_format` parameter:
```python
from pydantic import BaseModel

from agent_framework import Agent

class Person(BaseModel):
    name: str
    age: int
    city: str

agent = Agent(client=client, instructions="Extract structured information.")

result = await agent.run(
    "Jane Doe, 28, lives in New York.",
    response_format=Person,
)

person = result.try_parse_value(Person)
if person is not None:
    print(person.name)  # "Jane Doe"
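Conceptually, `response_format` constrains the model to emit JSON matching the schema, which is then validated and parsed. A stdlib-only sketch of that parse step (the dataclass and JSON payload are illustrative stand-ins, not the SDK's internals):

```python
import json
from dataclasses import dataclass

# Stand-in for the Pydantic model (illustrative)
@dataclass
class Person:
    name: str
    age: int
    city: str

# The kind of JSON a schema-constrained response might contain (illustrative)
raw = '{"name": "Jane Doe", "age": 28, "city": "New York"}'
person = Person(**json.loads(raw))
print(person.name)  # Jane Doe
```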
## Extended Thinking

Enable extended thinking for supported models via `additional_chat_options`:
```python
from agent_framework import Agent
from agent_framework.openai import OpenAIResponsesClient

client = OpenAIResponsesClient(model_id="gpt-5-nano-2025-08-07")
agent = Agent(client=client, instructions="Solve problems step by step.")

result = await agent.run(
    "Solve this logic puzzle...",
    additional_chat_options={"reasoning_effort": "medium"},
)
```

```python
from agent_framework import Agent
from agent_framework.anthropic import AnthropicClient

client = AnthropicClient(model_id="claude-sonnet-4-20250514")
agent = Agent(client=client, instructions="Solve problems step by step.")

result = await agent.run(
    "Solve this logic puzzle...",
    additional_chat_options={"thinking": {"type": "enabled", "budget_tokens": 4096}},
)
```
## Multimodal Input

Pass images and documents alongside text using `Message` and `Content`:
```python
from agent_framework import Agent, Content, Message

agent = Agent(client=client, instructions="You analyze images and documents.")

with open("photo.jpg", "rb") as f:
    image_bytes = f.read()
with open("report.pdf", "rb") as f:
    pdf_bytes = f.read()

message = Message(role="user", contents=[
    Content.from_text("Describe the image and summarize the PDF."),
    Content.from_data(data=image_bytes, media_type="image/jpeg"),
    Content.from_data(data=pdf_bytes, media_type="application/pdf"),
])

result = await agent.run(message)
```
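On the wire, providers generally receive binary attachments base64-encoded, often as a data URI; the framework takes care of this when you pass raw bytes. A stdlib sketch of that encoding step, with placeholder bytes standing in for a real file:

```python
import base64

# Placeholder bytes standing in for an image file (just the PNG magic header)
image_bytes = b"\x89PNG\r\n\x1a\n"

# Binary content is commonly transported as a base64 data URI
encoded = base64.b64encode(image_bytes).decode("ascii")
data_uri = f"data:image/png;base64,{encoded}"
print(data_uri)
```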
## MCP Servers

Connect to MCP servers using `MCPStdioTool`:
```python
import shutil

from agent_framework import Agent, MCPStdioTool

mcp_tool = MCPStdioTool(
    name="calculator",
    command=shutil.which("uv") or "uv",
    args=["run", "--directory", "/path/to/mcp-server", "mcp-calculator"],
    approval_mode="never_require",
)

agent = Agent(
    client=client,
    instructions="You help users calculate.",
    tools=[mcp_tool],
)
```
## What You’ll See

SideSeat shows a trace timeline with:
- A parent span for the agent session
- Child spans for each LLM call with model, tokens, and cost
- Tool calls and results inline in the message thread
- Images and documents rendered in multimodal conversations
- Error spans with exception details when failures occur
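As a rough sketch of the data behind those spans, each LLM child span carries attributes along the lines of the OpenTelemetry GenAI semantic conventions. The span name, keys, and token counts below are illustrative and vary by framework and semconv version:

```python
# Illustrative shape of one LLM child span (keys follow the OTel GenAI
# semantic conventions; exact names and values vary by version)
llm_span = {
    "name": "chat gpt-5-nano-2025-08-07",
    "attributes": {
        "gen_ai.request.model": "gpt-5-nano-2025-08-07",
        "gen_ai.usage.input_tokens": 12,   # hypothetical token counts
        "gen_ai.usage.output_tokens": 8,
    },
}
print(llm_span["attributes"]["gen_ai.request.model"])
```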
## Next Steps

- Python SDK — SDK reference
- Core Concepts — understanding runs, steps, and messages
- OpenAI — OpenAI provider details
- Anthropic — Anthropic provider details