
Microsoft Agent Framework

Microsoft Agent Framework (agent-framework) is a Python SDK for building AI agents with built-in OpenTelemetry support. SideSeat captures runs from agents using OpenAI, Anthropic, and Azure OpenAI providers.

Using the SideSeat SDK:

  1. Start SideSeat

    npx sideseat
  2. Install dependencies

    pip install agent-framework sideseat
  3. Add telemetry

    import asyncio
    from sideseat import SideSeat, Frameworks
    from agent_framework import Agent
    from agent_framework.openai import OpenAIChatClient
    SideSeat(framework=Frameworks.AgentFramework)
    client = OpenAIChatClient(model_id="gpt-5-nano-2025-08-07")
    agent = Agent(client=client, instructions="You are a helpful assistant.")
    result = asyncio.run(agent.run("What is the capital of France?"))
    print(result.text)
  4. View runs

    Open http://localhost:5388 to see your runs.

Using a manual OpenTelemetry setup:

  1. Start SideSeat

    npx sideseat
  2. Set the endpoint

    export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:5388/otel/default
  3. Install dependencies

    pip install agent-framework opentelemetry-sdk opentelemetry-exporter-otlp
  4. Add telemetry

    from agent_framework.observability import OBSERVABILITY_SETTINGS
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import BatchSpanProcessor
    from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
    from opentelemetry import trace
    OBSERVABILITY_SETTINGS.enable_instrumentation = True
    OBSERVABILITY_SETTINGS.enable_sensitive_data = True
    provider = TracerProvider()
    provider.add_span_processor(BatchSpanProcessor(OTLPSpanExporter()))
    trace.set_tracer_provider(provider)
  5. View runs

    Open http://localhost:5388 to see your runs.

Agent Framework supports OpenAI, OpenAI Responses, and Anthropic:

from agent_framework.openai import OpenAIChatClient
# Uses OPENAI_API_KEY env var
client = OpenAIChatClient(model_id="gpt-5-nano-2025-08-07")

Define tools with the @tool decorator using Annotated type hints:

from typing import Annotated
from agent_framework import Agent, tool
from pydantic import Field
@tool(approval_mode="never_require")
def get_weather(
    city: Annotated[str, Field(description="The city name")],
) -> str:
    """Get weather for a city."""
    return f"Sunny in {city}, 72°F"

agent = Agent(
    client=client,
    instructions="You are a weather assistant.",
    tools=[get_weather],
)
result = await agent.run("What's the weather in Paris?")

SideSeat shows each tool call with inputs and outputs in the message thread.

Run multiple specialized agents in parallel using asyncio.gather():

import asyncio
from agent_framework import Agent
researcher = Agent(client=client, instructions="You are a researcher.")
technical = Agent(client=client, instructions="You are a technical expert.")
prompt = "Evaluate building a weather app"
researcher_result, technical_result = await asyncio.gather(
    researcher.run(prompt),
    technical.run(prompt),
)
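The fan-out itself is plain asyncio; a minimal, SDK-free sketch of the same pattern, with stub coroutines standing in for `Agent.run()`:

```python
import asyncio

async def run_agent(name: str, prompt: str) -> str:
    # Stand-in for Agent.run(): awaiting yields control, so both
    # "agents" make progress concurrently on one event loop.
    await asyncio.sleep(0)
    return f"{name}: {prompt}"

async def main() -> list[str]:
    # gather() runs both coroutines concurrently and returns their
    # results in the same order as the call arguments.
    return await asyncio.gather(
        run_agent("researcher", "Evaluate building a weather app"),
        run_agent("technical", "Evaluate building a weather app"),
    )

results = asyncio.run(main())
```

Because `gather()` preserves argument order, unpacking into `researcher_result, technical_result` is safe regardless of which agent finishes first.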

Extract structured data using Pydantic models via the response_format parameter:

from pydantic import BaseModel
from agent_framework import Agent
class Person(BaseModel):
    name: str
    age: int
    city: str

agent = Agent(client=client, instructions="Extract structured information.")
result = await agent.run(
    "Jane Doe, 28, lives in New York.",
    response_format=Person,
)
person = result.try_parse_value(Person)
if person is not None:
    print(person.name)  # "Jane Doe"
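The validation step that `try_parse_value` performs is ordinary Pydantic work; a self-contained sketch of that contract, assuming Pydantic v2's `model_validate` and using a JSON string to stand in for the model's response (`try_parse` is our illustrative helper, not the SDK's):

```python
import json
from pydantic import BaseModel, ValidationError

class Person(BaseModel):
    name: str
    age: int
    city: str

def try_parse(raw_text: str, model: type[BaseModel]):
    # Mirror the "return None on failure" contract instead of raising,
    # covering both malformed JSON and schema mismatches.
    try:
        return model.model_validate(json.loads(raw_text))
    except (ValidationError, json.JSONDecodeError):
        return None

person = try_parse('{"name": "Jane Doe", "age": 28, "city": "New York"}', Person)
```

The `None` check in the example above exists for exactly this reason: the model may return text that fails to parse or validate.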

Enable extended thinking for supported models via additional_chat_options:

from agent_framework.openai import OpenAIResponsesClient
from agent_framework import Agent
client = OpenAIResponsesClient(model_id="gpt-5-nano-2025-08-07")
agent = Agent(client=client, instructions="Solve problems step by step.")
result = await agent.run(
    "Solve this logic puzzle...",
    additional_chat_options={"reasoning_effort": "medium"},
)

Pass images and documents alongside text using Message and the Content helpers:

from agent_framework import Agent, Content, Message
agent = Agent(client=client, instructions="You analyze images and documents.")
with open("photo.jpg", "rb") as f:
    image_bytes = f.read()
with open("report.pdf", "rb") as f:
    pdf_bytes = f.read()

message = Message(role="user", contents=[
    Content.from_text("Describe the image and summarize the PDF."),
    Content.from_data(data=image_bytes, media_type="image/jpeg"),
    Content.from_data(data=pdf_bytes, media_type="application/pdf"),
])
result = await agent.run(message)
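On the wire, most chat providers accept inline binary attachments as base64 data URIs, which is typically how raw bytes like these end up being serialized. A stdlib-only sketch of that encoding (the helper name is ours, not the SDK's):

```python
import base64

def to_data_uri(data: bytes, media_type: str) -> str:
    # Base64-encode the raw bytes and prefix the media type, the
    # inline-attachment format commonly accepted by chat APIs.
    encoded = base64.b64encode(data).decode("ascii")
    return f"data:{media_type};base64,{encoded}"

uri = to_data_uri(b"\x89PNG...", "image/png")
print(uri[:22])  # "data:image/png;base64,"
```

Decoding the part after the comma recovers the original bytes exactly, which is why the format is safe for binary payloads embedded in JSON.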

Connect to MCP servers using MCPStdioTool:

import shutil
from agent_framework import Agent, MCPStdioTool
mcp_tool = MCPStdioTool(
    name="calculator",
    command=shutil.which("uv") or "uv",
    args=["run", "--directory", "/path/to/mcp-server", "mcp-calculator"],
    approval_mode="never_require",
)

agent = Agent(
    client=client,
    instructions="You help users calculate.",
    tools=[mcp_tool],
)

SideSeat shows a trace timeline with:

  • A parent span for the agent session
  • Child spans for each LLM call with model, tokens, and cost
  • Tool calls and results inline in the message thread
  • Images and documents rendered in multimodal conversations
  • Error spans with exception details when failures occur