Claude Agent SDK
Build autonomous agents that request human review when needed.
The Claude Agent SDK provides a high-level framework for building AI agents with built-in tools. It supports MCP servers, allowing your agent to use Datashift tools directly for human-in-the-loop review.
Agent-Initiated Review
With MCP integration, the agent decides when to request human input. You provide guidance in the agent's prompt, but the agent autonomously determines when to call the Datashift MCP tools. For developer-enforced review where you define which actions require approval in code, see the LangChain guide.
Prerequisites
- A Datashift account with MCP credentials
- An Anthropic API key
- Claude Code installed (setup guide)
Installation
pip install claude-agent-sdk

Set your environment variables:
export ANTHROPIC_API_KEY=sk-ant-...
export DATASHIFT_ACCESS_TOKEN=<your-access-token>

Get your access token from Settings → Credentials → MCP.
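Rather than hardcoding the token in source, you can build the MCP server entry from the environment variable set above. A minimal sketch (the config shape matches the examples below; the helper name is our own):

```python
import os

def datashift_mcp_config() -> dict:
    """Build the Datashift MCP server entry for ClaudeAgentOptions,
    reading the access token from the environment instead of hardcoding it."""
    token = os.environ["DATASHIFT_ACCESS_TOKEN"]  # raises KeyError if unset
    return {
        "datashift": {
            "url": "https://mcp.datashift.io/sse",
            "headers": {"Authorization": f"Bearer {token}"},
        }
    }
```

Pass the result as `mcp_servers=datashift_mcp_config()` wherever the examples below hardcode the header.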
MCP Tools Available
The Datashift MCP server provides these tools to your agent:
submit_task: Submit data for human review. Returns a task ID and resource URI.
list_queues: List available review queues and their configurations.
get_queue_config: Get details about a specific queue.
When a review completes, the MCP server sends a notification. The agent reads the task resource to get the decision.
Full Example
Here's an agent that autonomously decides when to request human approval:
import asyncio
from claude_agent_sdk import query, ClaudeAgentOptions

async def main():
    async for message in query(
        prompt="""You are a customer support agent that helps customers.

When you need to take important actions like sending emails, updating records,
or making changes that could affect customers, use the submit_task tool from
the datashift MCP server to get human approval first.

For example, before sending any email:
1. Call submit_task with queue_key="outbound-communications"
2. Include the email details in the data field
3. Wait for the task to be reviewed
4. Read the task resource to check if it was approved
5. Only proceed if approved

You decide when human review is needed based on the sensitivity of the action.

---
User request: Send a follow-up email to john@example.com thanking them for their purchase""",
        options=ClaudeAgentOptions(
            mcp_servers={
                "datashift": {
                    "url": "https://mcp.datashift.io/sse",
                    "headers": {
                        "Authorization": "Bearer <your-access-token>"
                    }
                }
            }
        )
    ):
        if hasattr(message, "result"):
            print(message.result)

asyncio.run(main())

How It Works
Agent reasons about the task
Based on its instructions, the agent determines it should get human approval before sending an email.
Agent calls submit_task
The agent uses the Datashift MCP tool to submit the email for review.
Human reviews
A reviewer sees the task in the Datashift Console and approves or rejects it.
Agent receives notification
The MCP server notifies the agent. It reads the task resource and proceeds based on the decision.
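If you post-process the reviewer's decision in your own code, keep the interpretation fail-safe. The payload shape below is an assumption for illustration (a `decision` field of "approved" or "rejected" plus optional reviewer notes); check the task resource returned by your queue for the actual schema:

```python
def interpret_review(task_resource: dict) -> tuple[bool, str]:
    """Return (proceed, reason) from a hypothetical task resource payload.

    Assumed shape: {"decision": "approved" | "rejected", "notes": str}.
    Anything other than an explicit approval is treated as a rejection,
    so a missing or malformed decision fails safe.
    """
    decision = task_resource.get("decision")
    notes = task_resource.get("notes", "")
    if decision == "approved":
        return True, notes or "approved by reviewer"
    return False, notes or f"not approved (decision={decision!r})"
```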
Crafting Agent Instructions
Guide the agent on when to request review. Be specific about the criteria:
async for message in query(
    prompt="""You process and enrich customer data.

When you encounter situations where you're uncertain about data quality
or need human verification, use the Datashift MCP tools:

- submit_task: Submit data for human review
- list_queues: See available review queues
- get_queue_config: Check queue settings

Use queue_key="data-verification" for data quality checks.
Use queue_key="customer-updates" for customer record changes.

You decide when to request review based on:
- Data confidence level
- Potential customer impact
- Regulatory requirements

After submitting a task, the MCP server will notify you when the review
is complete. Read the task resource to get the reviewer's decision.

---
Process and verify the customer data in customers.csv""",
    options=ClaudeAgentOptions(
        allowed_tools=["Read", "Glob"],
        mcp_servers={
            "datashift": {
                "url": "https://mcp.datashift.io/sse",
                "headers": {"Authorization": "Bearer <your-access-token>"}
            }
        }
    )
):
    if hasattr(message, "result"):
        print(message.result)

Using Subagents
Create specialized subagents that handle human review coordination:
from claude_agent_sdk import AgentDefinition

async for message in query(
    prompt="Review the authentication code for security issues",
    options=ClaudeAgentOptions(
        allowed_tools=["Read", "Glob", "Grep", "Task"],
        mcp_servers={
            "datashift": {
                "url": "https://mcp.datashift.io/sse",
                "headers": {"Authorization": "Bearer <your-access-token>"}
            }
        },
        agents={
            "security-reviewer": AgentDefinition(
                description="Security expert that reviews code and escalates concerns to humans.",
                prompt="""You are a security reviewer. Analyze code for vulnerabilities.

When you find potential security issues, use the Datashift submit_task tool
to get human verification before reporting. Use queue_key="security-reviews".
Include the file path, line numbers, and your analysis in the task data.""",
                tools=["Read", "Glob", "Grep"]
            )
        }
    )
):
    if hasattr(message, "result"):
        print(message.result)

Best Practices
Be specific in prompts
Clearly define when the agent should request review: thresholds, action types, uncertainty levels.
Include queue guidance
Tell the agent which queues to use for different types of reviews.
Handle all outcomes
Instruct the agent on what to do when reviews are approved, rejected, or time out.
Combine with built-in tools
Use the SDK's built-in tools (Read, Edit, Bash) alongside Datashift MCP for powerful workflows.
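The practices above can be folded into a small prompt builder so queue guidance and outcome handling stay consistent across agents. A sketch (the helper, queue keys, and wording are illustrative, not a Datashift API):

```python
def build_review_prompt(role: str, queue_map: dict[str, str], task: str) -> str:
    """Compose an agent prompt that names review queues and spells out
    what to do for each review outcome (approved, rejected, timed out)."""
    queue_lines = "\n".join(
        f'- Use queue_key="{key}" for {purpose}.' for key, purpose in queue_map.items()
    )
    return (
        f"{role}\n\n"
        "When an action needs human approval, call submit_task on the datashift MCP server.\n"
        f"{queue_lines}\n\n"
        "After submitting, wait for the review notification and read the task resource:\n"
        "- If approved, proceed with the action.\n"
        "- If rejected, do not proceed; summarize the reviewer's feedback instead.\n"
        "- If no decision arrives, stop and report that the review timed out.\n\n"
        f"---\n{task}"
    )
```

Pass the result as the `prompt` argument to `query(...)`.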