# LangChain

Add human review checkpoints to LangChain agents.

LangChain is a popular framework for building LLM applications. This guide shows how to add human review checkpoints to LangChain tools using the Datashift SDK.
## Developer-Enforced Review

This guide covers developer-enforced review: you add review checkpoints in your tool code, and you decide which tools require human approval. For agent-initiated review, where the agent itself decides when to request human input, see the MCP integration guide.
## Prerequisites

- A Datashift account with an API key
- An OpenAI API key (or another LLM provider)
- Python 3.9+
## Installation

```bash
pip install langchain langchain-openai datashift
```

Set your environment variables:

```bash
export OPENAI_API_KEY=sk-...
export DATASHIFT_API_KEY=ds_xxxxxxxxxxxxxxxx
```

## Full Example
Here's an agent that requires human approval before sending emails:
```python
import os
import time

from langchain_openai import ChatOpenAI
from langchain.agents import tool, AgentExecutor, create_openai_functions_agent
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from datashift import Datashift

# Initialize clients
llm = ChatOpenAI(model="gpt-4")
datashift = Datashift(api_key=os.environ["DATASHIFT_API_KEY"])

@tool
def send_email(to: str, subject: str, body: str) -> str:
    """Send an email to a customer. Use this when a customer needs follow-up communication."""
    # Submit for human review before executing
    task = datashift.submit_task(
        queue_key="outbound-communications",
        data={
            "action": "send_email",
            "to": to,
            "subject": subject,
            "body": body,
        },
        summary=f"Review email to {to}: {subject}",
    )

    # Wait for human decision
    result = datashift.get_task(task.id)
    while result.state in ["queued", "in_review"]:
        time.sleep(5)
        result = datashift.get_task(task.id)

    if "approved" in result.reviews[0].result:
        # Actually send the email here
        # send_actual_email(to, subject, body)
        return f"Email approved and sent to {to}"
    else:
        feedback = result.reviews[0].feedback or "No reason provided"
        return f"Email rejected by reviewer: {feedback}"

# Create the agent
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful customer support agent."),
    ("user", "{input}"),
    MessagesPlaceholder(variable_name="agent_scratchpad"),
])
agent = create_openai_functions_agent(llm, [send_email], prompt)
agent_executor = AgentExecutor(agent=agent, tools=[send_email], verbose=True)

# Run the agent
result = agent_executor.invoke({
    "input": "Send a thank you email to john@example.com for their recent purchase"
})
print(result["output"])
```

## How It Works
1. **Agent calls a tool.** The LLM decides to use the `send_email` tool based on the conversation.
2. **Tool submits for review.** Your tool code submits the action to Datashift before executing.
3. **Human reviews.** A reviewer sees the action in the Datashift Console and approves or rejects it.
4. **Agent receives result.** The tool returns the review result, and the agent continues accordingly.
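The "wait for human decision" step is identical in every tool, so it can be factored into a small helper. A minimal sketch, assuming the same task states and `get_task` call as the example above (the `timeout` parameter is an addition not in the original example):

```python
import time

def wait_for_decision(client, task_id: str, poll_interval: float = 5.0,
                      timeout: float = 3600.0):
    """Poll Datashift until the task leaves the queued/in_review states.

    Returns the final task object; raises TimeoutError if no reviewer
    decides within `timeout` seconds.
    """
    deadline = time.monotonic() + timeout
    task = client.get_task(task_id)
    while task.state in ("queued", "in_review"):
        if time.monotonic() > deadline:
            raise TimeoutError(f"No review decision for task {task_id} within {timeout}s")
        time.sleep(poll_interval)
        task = client.get_task(task_id)
    return task
```

Inside a tool, the polling loop then collapses to `result = wait_for_decision(datashift, task.id)`.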
## Reusable Approval Decorator

Create a decorator to easily add approval to any tool:
```python
import functools

def require_approval(queue_key: str):
    """Decorator factory that adds human approval to any tool."""
    def decorator(func):
        @functools.wraps(func)  # preserve name and docstring before @tool reads them
        def wrapped(*args, **kwargs):
            # Submit for review
            task = datashift.submit_task(
                queue_key=queue_key,
                data={"function": func.__name__, "args": args, "kwargs": kwargs},
                summary=f"Approve {func.__name__} call",
            )

            # Wait for decision
            result = datashift.get_task(task.id)
            while result.state in ["queued", "in_review"]:
                time.sleep(5)
                result = datashift.get_task(task.id)

            if "approved" in result.reviews[0].result:
                return func(*args, **kwargs)
            else:
                return f"Action rejected: {result.reviews[0].feedback}"
        # Apply @tool last so it picks up the wrapped name and docstring
        return tool(wrapped)
    return decorator

# Usage
@require_approval("critical-actions")
def delete_customer(customer_id: str) -> str:
    """Delete a customer record permanently."""
    # Actual deletion logic
    return f"Customer {customer_id} deleted"
```

## Conditional Review
Only require review for certain conditions:
```python
@tool
def update_customer(customer_id: str, updates: dict) -> str:
    """Update a customer record. Sensitive changes require approval."""
    sensitive_fields = ["email", "billing_address", "subscription_tier"]
    needs_review = any(field in updates for field in sensitive_fields)

    if needs_review:
        task = datashift.submit_task(
            queue_key="customer-updates",
            data={"customer_id": customer_id, "updates": updates},
            context={"current_record": get_customer(customer_id)},
            summary=f"Review update for customer {customer_id}",
        )

        result = datashift.get_task(task.id)
        while result.state in ["queued", "in_review"]:
            time.sleep(5)
            result = datashift.get_task(task.id)

        if "approved" not in result.reviews[0].result:
            return f"Update rejected: {result.reviews[0].feedback}"

    # Apply the update
    apply_update(customer_id, updates)
    return f"Customer {customer_id} updated"
```

## Best Practices
- **Identify sensitive tools.** Add review to tools that send communications, modify data, make purchases, or have other external effects.
- **Include context.** Pass relevant context so reviewers can make informed decisions quickly.
- **Handle rejections gracefully.** Return clear messages when actions are rejected so the agent can inform the user.
- **Use webhooks for production.** Replace polling with webhooks for better scalability.
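As a sketch of the webhook approach: register an HTTP endpoint with Datashift, have it record each decision, and let tool code block on an event instead of polling. The payload shape below (`task_id` and `result` fields) is an assumption; adapt it to the actual webhook format.

```python
import threading

class ReviewWaiter:
    """Collect webhook decisions and let tool code block until one arrives."""

    def __init__(self):
        self._events: dict[str, threading.Event] = {}
        self._results: dict[str, dict] = {}
        self._lock = threading.Lock()

    def _event_for(self, task_id: str) -> threading.Event:
        with self._lock:
            return self._events.setdefault(task_id, threading.Event())

    def handle_webhook(self, payload: dict) -> None:
        """Call this from your HTTP endpoint when Datashift posts a decision.

        NOTE: the payload keys here are hypothetical, not a documented format.
        """
        task_id = payload["task_id"]
        with self._lock:
            self._results[task_id] = payload
            self._events.setdefault(task_id, threading.Event()).set()

    def wait(self, task_id: str, timeout: float = 3600.0) -> dict:
        """Block until a decision arrives for task_id, then return it."""
        if not self._event_for(task_id).wait(timeout):
            raise TimeoutError(f"No decision for task {task_id} within {timeout}s")
        return self._results[task_id]
```

In a web framework, the endpoint handler calls `waiter.handle_webhook(request_json)`, and the tool calls `waiter.wait(task.id)` after `submit_task` in place of the polling loop.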