Microsoft Agent Framework 1.0: Multi-Agent AI for .NET and Python
The definitive guide to Microsoft Agent Framework 1.0 -- the production-ready convergence of Semantic Kernel and AutoGen, shipped April 3, 2026. From first agent setup in C# and Python through five multi-agent orchestration patterns, IChatClient provider swapping, MCP + A2A interop, DevUI browser debugging, Azure AI Foundry deployment, and migration from Semantic Kernel.
By Jose Nobile | 2026-04-20 | 15 min read
What Microsoft Agent Framework Replaces
Microsoft Agent Framework is the production-ready unification of two previously separate projects: Semantic Kernel (the enterprise-grade orchestration SDK) and AutoGen (the Microsoft Research multi-agent conversation framework). Rather than maintaining two overlapping tools, Microsoft merged AutoGen's simple agent abstractions with Semantic Kernel's enterprise features -- session-based state management, type safety, middleware, telemetry, and extensive model and embedding support -- into a single, commercial-grade framework. The result is a unified SDK that ships for both .NET and Python with consistent APIs.
Before Agent Framework, developers building AI agents on the Microsoft stack faced a fragmented choice: Semantic Kernel offered robust plugin architecture and enterprise integrations but lacked native multi-agent orchestration, while AutoGen excelled at conversational multi-agent patterns but needed more production hardening. Agent Framework resolves this by providing a single installation, a single set of abstractions, and a single roadmap. Semantic Kernel is now in maintenance mode, receiving only critical security fixes. AutoGen has been archived. All new development happens in Agent Framework.
The framework is open-source under the MIT license, hosted at github.com/microsoft/agent-framework. Version 1.0 shipped on April 3, 2026, after a public preview (October 2025) and release candidate (February 2026). It carries a long-term support commitment from Microsoft, with stable APIs guaranteed not to break within major versions. The v1.0 stabilized surface includes the core single-agent abstraction, service connectors across .NET and Python, middleware hooks, agent memory and context providers, graph-based workflows for complex orchestration topologies, and five multi-agent orchestration patterns (sequential, concurrent, handoff, group chat, and Magentic-One).
Key Features
Unified SDK for .NET and Python
One framework, two languages, consistent APIs. Install via NuGet (Microsoft.Agents.AI) or pip (agent-framework). Both implementations share the same abstractions for agents, tools, orchestrations, and memory, ensuring feature parity across ecosystems.
Five Orchestration Patterns
Sequential, concurrent, handoff, group chat, and Magentic-One orchestrations ship out of the box. All patterns support streaming, checkpointing, human-in-the-loop approvals, and pause/resume for long-running workflows. Define topology in code or declarative YAML.
IChatClient Provider Swapping
Every model connector implements Microsoft.Extensions.AI.IChatClient. Swap providers with a single line change -- no agent code modifications. First-party connectors ship for Microsoft Foundry, Azure OpenAI, OpenAI, Anthropic Claude, Amazon Bedrock, Google Gemini, and Ollama.
MCP + A2A Protocol Support
Native support for the Model Context Protocol lets agents dynamically discover and invoke external tools exposed over MCP-compliant servers. A2A protocol support enables cross-runtime agent collaboration with agents built on other frameworks.
DevUI Browser Debugger
A browser-based local debugger that visualizes agent execution, message flows, tool calls, and orchestration decisions in real time. Renders the agent graph visually, highlights the executing node, and allows timeline scrubbing to replay executions step by step.
Azure AI Foundry Integration
Build locally, deploy to Azure AI Foundry with observability, durability, and compliance built in. Foundry Agent Service provides managed hosting, auto-scaling, and the OpenAI Responses API for wire-compatible agent endpoints.
Declarative Agent Definitions
Define agents' instructions, tools, memory configuration, and orchestration topology in version-controlled YAML files. Load and run with a single API call. Enables GitOps workflows where agent behavior changes go through code review like any other configuration.
Agent Evaluation Framework
Microsoft.Extensions.AI.Evaluation provides built-in evaluators: IntentResolutionEvaluator, TaskAdherenceEvaluator, ToolCallAccuracyEvaluator, and more. Score agent quality across dimensions before and after changes to prevent regressions.
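The evaluators themselves ship in the .NET Microsoft.Extensions.AI.Evaluation package, but the metric behind something like ToolCallAccuracyEvaluator is easy to illustrate. Here is a minimal, hypothetical Python sketch -- the ToolCall type and the exact-match rule are assumptions for illustration, not the library's implementation:

```python
from dataclasses import dataclass

@dataclass
class ToolCall:
    name: str
    arguments: dict

def tool_call_accuracy(expected: list, actual: list) -> float:
    """Fraction of expected tool calls the agent actually made,
    matching on tool name and exact argument values."""
    if not expected:
        return 1.0
    hits = sum(
        1 for e in expected
        if any(a.name == e.name and a.arguments == e.arguments for a in actual)
    )
    return hits / len(expected)

expected = [ToolCall("lookup_invoice", {"invoice_id": "INV-7"})]
actual = [
    ToolCall("lookup_invoice", {"invoice_id": "INV-7"}),
    ToolCall("process_refund", {"invoice_id": "INV-7"}),
]
print(tool_call_accuracy(expected, actual))  # 1.0
```

Production evaluators often add an LLM judge for fuzzy matches; exact-match scoring like this is the floor, not the ceiling.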
Install and First Agent
Getting started takes under 5 minutes in either language. The .NET package is available on NuGet, and the Python package is on PyPI. Both include all sub-packages needed for core agent functionality, orchestrations, and provider connectors.
C# (.NET)
// Install the NuGet package
dotnet add package Microsoft.Agents.AI
// Program.cs - Your first agent in C#
using Azure.Identity;
using Microsoft.Agents.AI;
using Microsoft.Extensions.AI;
// Any IChatClient implementation works here
IChatClient chatClient = new AzureOpenAIChatClient(
endpoint: "https://your-resource.openai.azure.com",
model: "gpt-4o",
credential: new DefaultAzureCredential()
);
var agent = new ChatClientAgent(
chatClient,
name: "Assistant",
instructions: "You are a helpful coding assistant."
);
// Single-turn
string response = await agent.RunAsync("Explain async/await in C#.");
Console.WriteLine(response);
// Multi-turn with conversation history
var thread = new ChatThread();
await agent.RunAsync("What is dependency injection?", thread);
await agent.RunAsync("Show me an example.", thread);
Console.WriteLine(thread.Last().Text);
Python
# Install all sub-packages
pip install agent-framework
# first_agent.py - Your first agent in Python
import asyncio
from agent_framework import Agent
from agent_framework.openai import OpenAIChatClient
client = OpenAIChatClient(model="gpt-4o")
agent = Agent(
client=client,
name="Assistant",
instructions="You are a helpful coding assistant."
)
# Single-turn
result = asyncio.run(agent.run("Explain async/await in Python."))
print(result)
# Multi-turn with conversation history
async def multi_turn():
thread = agent.create_thread()
await agent.run("What is dependency injection?", thread=thread)
await agent.run("Show me an example.", thread=thread)
print(thread.last().text)
asyncio.run(multi_turn())
Both examples use the same conceptual API: create a client, create an agent with instructions, and call run. The agent manages conversation history, handles retries, and streams responses. Tools can be added as decorated functions (Python) or via function metadata (C#).
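To make the "decorated functions" idea concrete, here is a hypothetical sketch of what a tool decorator can capture from a plain function via introspection. The @tool name and the schema format are invented for illustration and are not agent-framework's actual decorator:

```python
import inspect

def tool(fn):
    """Hypothetical tool decorator: capture a function's name, docstring,
    and annotated parameters so an agent could advertise it to the model.
    (An illustration only, not the framework's real decorator.)"""
    sig = inspect.signature(fn)
    fn.tool_schema = {
        "name": fn.__name__,
        "description": (fn.__doc__ or "").strip(),
        "parameters": {
            p.name: p.annotation.__name__
            for p in sig.parameters.values()
            if p.annotation is not inspect.Parameter.empty
        },
    }
    return fn

@tool
def get_weather(city: str) -> str:
    """Return the current weather for a city."""
    return f"Sunny in {city}"

print(get_weather.tool_schema["parameters"])  # {'city': 'str'}
```

The point is that type hints and docstrings carry enough information for the framework to build a tool schema automatically, which is why tools stay ordinary, testable functions.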
Multi-Agent Orchestration Patterns
Agent Framework provides five built-in orchestration patterns that emerged from Microsoft Research and real-world production use. Each pattern handles agent coordination differently depending on the task structure. All patterns support streaming, checkpointing, human-in-the-loop approvals, and pause/resume for long-running workflows.
1. Sequential Orchestration
Agents execute one after another in a defined order. The output of each agent feeds into the next. Use this for pipelines where each stage depends on the previous -- for example, a research agent that gathers data, followed by an analysis agent that interprets it, followed by a writing agent that produces a report.
// C# - Sequential orchestration
var researcher = new ChatClientAgent(client, name: "Researcher",
instructions: "Research the given topic thoroughly.");
var analyst = new ChatClientAgent(client, name: "Analyst",
instructions: "Analyze the research and extract key insights.");
var writer = new ChatClientAgent(client, name: "Writer",
instructions: "Write a clear summary from the analysis.");
var pipeline = new SequentialOrchestration(researcher, analyst, writer);
var result = await pipeline.RunAsync("Impact of AI on healthcare in 2026");
Console.WriteLine(result);
2. Concurrent Orchestration
Agents execute in parallel and their results are aggregated. Use this when subtasks are independent -- for example, querying multiple data sources simultaneously, or having several reviewers evaluate a document at the same time. Reduces total latency compared to sequential execution.
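The fan-out/aggregate shape of this pattern can be sketched in plain asyncio, with stub coroutines standing in for real agent calls -- the reviewer roles and wiring here are illustrative, not the framework's ConcurrentOrchestration API:

```python
import asyncio

# Stub coroutines stand in for real agent calls; each returns its own
# perspective on the same input.
async def reviewer(name: str, doc: str) -> str:
    await asyncio.sleep(0)  # placeholder for an LLM round trip
    return f"{name}: reviewed '{doc}'"

async def concurrent_review(doc: str) -> list[str]:
    # Fan out to all reviewers at once, then aggregate the results --
    # the essence of the concurrent pattern.
    return await asyncio.gather(
        reviewer("Security", doc),
        reviewer("Style", doc),
        reviewer("Performance", doc),
    )

results = asyncio.run(concurrent_review("design.md"))
print(results[0])  # Security: reviewed 'design.md'
```

Because the subtasks are independent, total latency is roughly the slowest single call rather than the sum of all calls.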
3. Handoff Orchestration
Responsibility transfers between agents as context evolves. An agent can decide to hand off to a more specialized agent based on the conversation state. Common in customer support scenarios: a triage agent routes to billing, technical support, or account management agents based on the user's issue.
# Python - Handoff orchestration
import asyncio

from agent_framework import Agent, HandoffOrchestration

triage = Agent(client=client, name="Triage",
    instructions="Route the user to the right specialist.")
billing = Agent(client=client, name="Billing",
    instructions="Handle billing and payment questions.")
technical = Agent(client=client, name="Technical",
    instructions="Handle technical support issues.")

orchestration = HandoffOrchestration(
    entry_agent=triage,
    agents=[triage, billing, technical]
)
result = asyncio.run(orchestration.run("I was charged twice for my subscription"))
print(result)
4. Group Chat Orchestration
Agents collaborate in a shared conversation, taking turns based on a configurable selection strategy. Use this for brainstorming, code review, or any scenario where multiple perspectives improve the outcome. A selector function determines which agent speaks next based on the conversation history.
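A rough sketch of what a selection strategy looks like: a plain function over the transcript that returns the next speaker. The agent names and the interjection rule are invented for illustration:

```python
# Roster of participants in the group chat (illustrative names).
AGENTS = ["Architect", "Reviewer", "Tester"]

def round_robin_selector(history: list[dict]) -> str:
    """Pick the next speaker: rotate through the roster, but let the
    Reviewer jump in whenever the last message mentions a bug."""
    if history and "bug" in history[-1]["text"].lower():
        return "Reviewer"
    return AGENTS[len(history) % len(AGENTS)]

print(round_robin_selector([]))  # Architect
print(round_robin_selector([{"agent": "Tester", "text": "Found a bug."}]))  # Reviewer
```

Keeping the selector a pure function of the history makes turn-taking deterministic and easy to unit test, which matters once orchestrations are replayed or checkpointed.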
5. Magentic-One Orchestration
The most sophisticated pattern, derived from Microsoft Research's Magentic-One paper. A manager agent builds and continuously refines a dynamic task ledger, then coordinates specialized agents (web surfer, file handler, coder, terminal operator) to complete complex, multi-step tasks. The manager monitors progress, reassigns tasks on failure, and adapts the plan as new information emerges. Best suited for open-ended research tasks and complex autonomous workflows.
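The core bookkeeping behind the task ledger can be sketched as a toy data structure -- a deliberate simplification, since the real manager also revises the overall plan and tracks gathered facts:

```python
from dataclasses import dataclass, field

@dataclass
class LedgerEntry:
    task: str
    assignee: str
    status: str = "pending"  # pending | done

@dataclass
class TaskLedger:
    """Toy manager ledger: track subtasks, mark progress, and reassign
    failed work to a different specialist agent."""
    entries: list[LedgerEntry] = field(default_factory=list)

    def add(self, task: str, assignee: str) -> None:
        self.entries.append(LedgerEntry(task, assignee))

    def complete(self, task: str) -> None:
        for e in self.entries:
            if e.task == task:
                e.status = "done"

    def reassign_failed(self, task: str, new_assignee: str) -> None:
        for e in self.entries:
            if e.task == task:
                e.status = "pending"       # back onto the plan
                e.assignee = new_assignee  # handed to another agent

    def open_tasks(self) -> list[str]:
        return [e.task for e in self.entries if e.status == "pending"]

ledger = TaskLedger()
ledger.add("fetch pricing page", "WebSurfer")
ledger.add("summarize findings", "Coder")
ledger.complete("summarize findings")
ledger.reassign_failed("fetch pricing page", "Coder")
print(ledger.open_tasks())  # ['fetch pricing page']
```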
Swapping Providers via IChatClient
The IChatClient abstraction from Microsoft.Extensions.AI is the foundation of Agent Framework's provider flexibility. Every model connector implements this interface, which means swapping from Azure OpenAI to Anthropic Claude or a local Ollama model is a configuration change, not a code change. Your agent logic, tools, orchestrations, and evaluation code remain untouched.
// C# - Same agent, different providers
using Azure.Identity;
using Microsoft.Extensions.AI;
// Azure OpenAI
IChatClient azureClient = new AzureOpenAIChatClient(
endpoint: "https://myresource.openai.azure.com",
model: "gpt-4o",
credential: new DefaultAzureCredential());
// OpenAI direct
IChatClient openaiClient = new OpenAIChatClient(
model: "gpt-4o",
apiKey: Environment.GetEnvironmentVariable("OPENAI_API_KEY"));
// Anthropic Claude
IChatClient claudeClient = new AnthropicChatClient(
model: "claude-sonnet-4-20250514",
apiKey: Environment.GetEnvironmentVariable("ANTHROPIC_API_KEY"));
// Google Gemini
IChatClient geminiClient = new GeminiChatClient(
model: "gemini-2.5-pro",
apiKey: Environment.GetEnvironmentVariable("GOOGLE_API_KEY"));
// Local Ollama (no API key needed)
IChatClient ollamaClient = new OllamaChatClient(
endpoint: "http://localhost:11434",
model: "llama3");
// All work identically with any agent
var agent = new ChatClientAgent(claudeClient,
instructions: "You are a helpful assistant.");
This is particularly powerful for development workflows: use Ollama locally for fast iteration with no API costs, test against OpenAI or Claude for quality validation, and deploy to Azure OpenAI in production with enterprise compliance. The same agent YAML definition can reference different provider configs per environment.
# Python - Provider swapping
from agent_framework import Agent
from agent_framework.openai import OpenAIChatClient
from agent_framework.anthropic import AnthropicChatClient
from agent_framework.ollama import OllamaChatClient
# Development: local Ollama
dev_client = OllamaChatClient(model="llama3")
# Production: Claude via Anthropic
prod_client = AnthropicChatClient(model="claude-sonnet-4-20250514")
# Same agent definition works with either
agent = Agent(
client=prod_client, # swap to dev_client for local dev
name="CodeReviewer",
instructions="Review code for bugs, security issues, and style."
)
MCP + A2A Interop
Agent Framework has first-class support for both the Model Context Protocol (MCP) and the Agent-to-Agent (A2A) protocol. MCP lets agents dynamically discover and invoke external tools exposed by MCP-compliant servers -- databases, APIs, file systems, cloud services, and more. Instead of hardcoding tool integrations, agents connect to MCP servers at runtime and use whatever tools are available, making them composable and extensible.
A2A support enables cross-runtime agent collaboration. An agent built with Microsoft Agent Framework can discover, negotiate with, and delegate tasks to agents built on other frameworks (LangGraph, CrewAI, Claude Agent SDK) through the standardized A2A protocol. Agent Cards expose capabilities, and the task lifecycle model handles handoffs, progress tracking, and result aggregation across organizational boundaries.
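The Agent Card mentioned above is just a JSON document served from a well-known URL. A simplified sketch of its shape -- field names follow the A2A spec's general outline, but treat the details here as illustrative rather than normative:

```python
import json

# Simplified A2A Agent Card: the document an agent serves at
# /.well-known/agent.json to advertise what it can do.
agent_card = {
    "name": "DataAnalyst",
    "description": "Analyzes tabular sales data on request.",
    "url": "https://partner-api.example.com/a2a",
    "capabilities": {"streaming": True},
    "skills": [
        {
            "id": "analyze-sales",
            "name": "Sales analysis",
            "description": "Summarize trends in quarterly sales data.",
        }
    ],
}

card_json = json.dumps(agent_card, indent=2)
print(json.loads(card_json)["skills"][0]["id"])  # analyze-sales
```

A discovering agent fetches this card, inspects the skills, and decides whether to delegate -- no framework-specific integration code on either side.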
// C# - Agent with MCP tools
using Microsoft.Agents.AI;
using Microsoft.Agents.MCP;
var mcpServer = new McpServerConnection("npx", new[] {
"-y", "@modelcontextprotocol/server-github"
}, new Dictionary<string, string> {
["GITHUB_TOKEN"] = Environment.GetEnvironmentVariable("GITHUB_TOKEN")
});
var agent = new ChatClientAgent(chatClient,
name: "DevAssistant",
instructions: "Help with code review and PR management.",
tools: await mcpServer.GetToolsAsync()
);
// The agent can now use all GitHub MCP tools
var result = await agent.RunAsync("Review PR #42 in my-org/my-repo");
# Python - Agent with A2A collaboration
import asyncio

from agent_framework import Agent
from agent_framework.a2a import A2AClient

async def main():
    # Discover external agents via A2A
    external = A2AClient("https://partner-api.example.com/.well-known/agent.json")
    partner_agent = await external.discover()
    # Local agent can delegate to the external agent
    agent = Agent(
        client=chat_client,
        name="Coordinator",
        instructions="Coordinate with the partner agent for data analysis.",
        collaborators=[partner_agent]
    )
    result = await agent.run("Analyze Q1 sales data using the partner's tools")
    print(result)

asyncio.run(main())
DevUI Browser Debugging
DevUI is a browser-based local debugger included with Agent Framework that visualizes agent execution in real time. It renders the agent graph visually, highlights the currently executing node, and shows the full message history for each agent in a split-pane view. You can click on any agent to inspect its system prompt, tool calls, LLM responses, and state at that point in execution. Tool calls display both the request and response payloads with timing data.
The most powerful feature is execution replay. DevUI records every step of an agent run, and you can scrub backward and forward through the timeline to understand exactly how state changed, which agent made which decisions, and where things went wrong. This is invaluable for debugging complex multi-agent orchestrations where the interaction between agents produces unexpected behavior.
# Launch DevUI for Python
pip install agent-framework-devui
agent-framework-devui --port 8080

# Launch DevUI for .NET
dotnet tool install -g Microsoft.Agents.DevUI
agent-framework-devui --port 8080 --tracing

# Options:
#   --port, -p   Port (default: 8080)
#   --host       Host (default: 127.0.0.1)
#   --headless   API only, no UI
#   --no-open    Don't auto-open browser
#   --tracing    Enable OpenTelemetry tracing
#   --reload     Enable auto-reload on code changes
#   --mode       developer|user (default: developer)
DevUI also exposes an OpenAI-compatible Responses API, meaning you can point any OpenAI-compatible client at your DevUI instance to test agent interactions programmatically. Combined with OpenTelemetry tracing (--tracing flag), you get full distributed traces that integrate with your existing observability stack -- Jaeger, Zipkin, Azure Monitor, or any OTLP-compatible collector.
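Because the endpoint is OpenAI-compatible, even the standard library is enough to talk to it. This sketch only constructs the request; the /v1/responses path and the payload fields are assumptions based on the Responses API shape, not documented DevUI specifics:

```python
import json
import urllib.request

# Build the request you would POST to a local DevUI instance.
payload = {
    "model": "Assistant",  # the agent's name stands in for a model id
    "input": "Explain async/await in Python.",
    "stream": False,
}
req = urllib.request.Request(
    "http://127.0.0.1:8080/v1/responses",  # assumed DevUI path
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req) would send it once DevUI is running
print(req.get_method())  # POST
```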
Deploying to Azure AI Foundry
Azure AI Foundry (formerly Azure AI Studio) provides managed hosting for Agent Framework agents. The deployment model is container-based: you package your agent as a Docker container, push it to Azure Container Registry, and deploy to Foundry Agent Service. Foundry manages the runtime, auto-scaling, and infrastructure. Hosted agents support the OpenAI Responses API, making them wire-compatible with any OpenAI client.
The typical workflow is: develop and debug locally with DevUI, validate with the evaluation framework, containerize, push to Azure Container Registry, and deploy via Foundry CLI or the Azure Portal. Foundry provides built-in observability (Azure Monitor integration), durability (conversation state persisted across restarts), and compliance features (data residency, encryption, RBAC). For production Kubernetes deployments, Foundry Agent Service integrates with AKS for custom scaling policies.
# Dockerfile for Agent Framework agent
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8080
CMD ["python", "-m", "agent_framework.serve", "--host", "0.0.0.0", "--port", "8080"]
# Deploy to Azure AI Foundry
az login
az acr build --registry myregistry --image my-agent:v1 .
az foundry agent deploy \
--name my-agent \
--image myregistry.azurecr.io/my-agent:v1 \
--project my-ai-project \
--region northcentralus
Note: As of April 2026, hosted agents in Foundry Agent Service are available in North Central US. Additional regions are being rolled out. For global deployments, consider deploying behind Azure Front Door or running on AKS with multi-region clusters.
Migration from Semantic Kernel
Migrating from Semantic Kernel to Agent Framework is straightforward because Agent Framework was built on Semantic Kernel's foundations. The key conceptual change is that Semantic Kernel's service-specific agent classes (ChatCompletionAgent, AzureAIAgent, OpenAIAssistantAgent) are replaced by a single ChatClientAgent that works with any IChatClient implementation. This simplifies the API surface significantly.
If you have existing Semantic Kernel code with KernelFunction instances (from prompts or methods), you can convert them to Agent Framework tools using the .as_agent_framework_tool() method in Python or .AsAgentTool() in C#. This requires semantic-kernel version 1.38 or higher. The conversion preserves parameter schemas, descriptions, and return types, so your tools work identically in the new framework.
// C# - Migration example
// BEFORE: Semantic Kernel
var kernel = Kernel.CreateBuilder()
.AddAzureOpenAIChatCompletion("gpt-4o", endpoint, credential)
.Build();
var agent = new ChatCompletionAgent {
Kernel = kernel,
Instructions = "You are a helpful assistant."
};
// AFTER: Agent Framework
IChatClient client = new AzureOpenAIChatClient(
endpoint: endpoint, model: "gpt-4o", credential: credential);
var agent = new ChatClientAgent(client,
instructions: "You are a helpful assistant.");
// KernelFunction conversion (requires SK >= 1.38)
KernelFunction oldTool = KernelFunctionFactory.CreateFromMethod(
() => DateTime.UtcNow.ToString(), "GetTime");
var newTool = oldTool.AsAgentTool();
Microsoft provides an official migration guide at learn.microsoft.com/en-us/agent-framework/migration-guide/from-semantic-kernel/ with per-feature code samples showing the Agent Framework equivalent for every Semantic Kernel feature. The migration can be done incrementally -- both frameworks can coexist in the same project during the transition period.
Comparison vs LangGraph, CrewAI, and Paperclip
The agent framework landscape in 2026 is crowded. Here is how Microsoft Agent Framework compares to the other major players on dimensions that matter for production use.
| Dimension | MS Agent Framework | LangGraph | CrewAI | Paperclip |
|---|---|---|---|---|
| Languages | .NET + Python | Python + JS/TS | Python | Node.js |
| Orchestration | 5 built-in patterns + YAML | Graph-based (explicit state) | Role-based crews | Org-chart hierarchy |
| Provider Swap | IChatClient (1-line) | ChatModel abstraction | LiteLLM wrapper | LLM adapters |
| MCP Support | Native, first-class | Via integration | Native (v1.10+) | Planned |
| A2A Support | Native, first-class | Via LangChain | Native | No |
| Visual Debugger | DevUI (browser) | LangSmith (cloud) | CrewAI Studio | React dashboard |
| Cloud Deploy | Azure AI Foundry | LangGraph Cloud | CrewAI Enterprise | Railway / self-host |
| Best For | Enterprise .NET + Azure shops | Complex stateful workflows | Rapid prototyping | Autonomous AI companies |
LangGraph remains the most battle-tested option for complex, stateful workflows requiring fine-grained control over execution flow. Its graph-based approach gives developers explicit control over state transitions, which is critical for regulated industries. However, it lacks first-party .NET support and requires LangSmith (a paid cloud service) for full observability.
CrewAI excels at rapid prototyping with its intuitive role-based crew metaphor. At 45,900+ GitHub stars and 12 million daily agent executions as of March 2026, it has the largest community. The tradeoff is that teams often hit CrewAI's control flow ceiling and migrate to LangGraph or Agent Framework for production. CrewAI is Python-only.
Paperclip is a different beast entirely -- an orchestration layer for "zero-human companies" where AI agents fill organizational roles (CEO, engineer, analyst). It crossed 30,000 GitHub stars within three weeks of its March 2026 launch. Paperclip is compelling for autonomous business automation but lacks the developer-focused primitives (tool schemas, structured outputs, evaluation) that production agent systems need.
Microsoft Agent Framework is the natural choice for teams already invested in the Microsoft ecosystem (Azure, .NET, Visual Studio, Copilot Studio). Its .NET support is unmatched -- no other framework offers first-class C# agents. The IChatClient abstraction and Azure AI Foundry integration make it the path of least resistance for enterprise deployments. The tradeoff is Azure lock-in for deployment features and a smaller community compared to LangGraph or CrewAI.
Declarative YAML Workflows
One of Agent Framework's most distinctive features is declarative agent definitions. Instead of defining agents and orchestrations purely in code, you can express them in YAML files that are version-controlled, code-reviewed, and environment-specific. This enables GitOps workflows where agent behavior changes go through the same pull-request review process as application code.
# agents/support-team.yaml
name: SupportTeam
orchestration: handoff
entry_agent: triage
agents:
- name: triage
instructions: |
You are a customer support triage agent.
Route billing issues to the billing agent.
Route technical issues to the technical agent.
model: gpt-4o
handoffs: [billing, technical]
- name: billing
instructions: |
You handle billing and subscription questions.
You can look up invoices and process refunds.
model: gpt-4o
tools:
- lookup_invoice
- process_refund
- name: technical
instructions: |
You handle technical support and debugging.
You have access to the knowledge base and logs.
model: claude-sonnet-4-20250514
tools:
- search_knowledge_base
- query_logs
# Load and run from YAML (Python)
import asyncio

from agent_framework import load_orchestration

team = load_orchestration("agents/support-team.yaml")
result = asyncio.run(team.run("I was charged twice last month"))
print(result)
Key Takeaways
No more choosing between two overlapping frameworks. Agent Framework merges Semantic Kernel's enterprise features with AutoGen's multi-agent patterns into a single SDK with stable, production-ready APIs and long-term support.
IChatClient enables true provider portability. Azure OpenAI, OpenAI, Anthropic Claude, Amazon Bedrock, Google Gemini, and Ollama all work interchangeably. Develop locally with Ollama, deploy with Azure OpenAI -- same agent code.
Sequential, concurrent, handoff, group chat, and Magentic-One cover the full spectrum from simple pipelines to complex autonomous research. All support streaming, checkpointing, and human-in-the-loop approvals.
First-class protocol support means agents can discover external tools (MCP) and collaborate with agents on other frameworks (A2A) without custom integration code. True interoperability across the agent ecosystem.