A modern starter template for building agentic applications using LangChain and createAgent. This template provides a clean foundation for building AI agents with tool calling, middleware support, and seamless LangGraph integration.
- LangChain API - Uses `createAgent` for a clean, simple interface
- Built-in Tools - Calculator, time, weather, and knowledge search examples
- Middleware Ready - Easily add summarization, human-in-the-loop, and more
- TypeScript First - Full type safety with Zod schemas
- LangGraph Studio Compatible - Visualize and debug your agent
- LangSmith Integration - Automatic tracing for debugging and evaluation

```bash
git clone https://github.com/langchain-ai/react-agent-js.git
cd react-agent-js
pnpm install
cp .env.example .env
```

Add your API key to `.env`:

```bash
# For Claude models (recommended)
ANTHROPIC_API_KEY=your-key-here

# OR for GPT models
OPENAI_API_KEY=your-key-here
```

```bash
# Run the example script
pnpm start
# Or use LangGraph Studio
# Open the project folder in LangGraph Studio
```

```
src/
├── agent.ts    # Main agent using createAgent
├── tools.ts    # Tool definitions with Zod schemas
├── prompts.ts  # System prompts and templates
└── index.ts    # CLI entry point for testing
```
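
For orientation, here is a minimal sketch of what a CLI entry point like `index.ts` could do. The invoke call follows the standard `createAgent` interface, but the import path and exact contents of this template's file are assumptions:

```typescript
import { HumanMessage } from "@langchain/core/messages";
import { agent } from "./agent.js"; // assumed export path

// Read a question from the command line, falling back to a demo prompt.
const question = process.argv.slice(2).join(" ") || "What is 12 * 34?";

// The agent expects a messages array and returns the updated state.
const result = await agent.invoke({
  messages: [new HumanMessage(question)],
});

// The last message in the returned state is the agent's final answer.
console.log(result.messages.at(-1)?.content);
```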

Create tools in `src/tools.ts` using the `tool` function:

```typescript
import { tool } from "langchain";
import { z } from "zod";

const myTool = tool(
  async ({ query }) => {
    // Your tool logic here
    return `Result for: ${query}`;
  },
  {
    name: "my_tool",
    description: "Description of what this tool does",
    schema: z.object({
      query: z.string().describe("The search query"),
    }),
  }
);

// Add to TOOLS array
export const TOOLS = [myTool, ...otherTools];
```
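
Tools built with `tool()` are standalone runnables, so you can usually smoke-test one directly before handing it to the agent; a quick, non-authoritative check using the `myTool` defined above:

```typescript
// Continues from the snippet above: invoke the tool with input that
// matches its Zod schema and inspect the raw string result.
const output = await myTool.invoke({ query: "langchain agents" });
console.log(output); // => "Result for: langchain agents"
```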

To change the model, update `src/agent.ts`:

```typescript
export const agent = createAgent({
  // Anthropic models
  model: "anthropic:claude-sonnet-4-5-20250929",

  // Or OpenAI models
  // model: "openai:gpt-4o",
  // model: "openai:gpt-4-turbo",

  tools: TOOLS,
  systemPrompt: SYSTEM_PROMPT,
});
```
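
If you need finer control over model parameters, `createAgent` also accepts a chat model instance instead of a provider string. A sketch, assuming `@langchain/anthropic` is installed (the constructor options shown are illustrative):

```typescript
import { createAgent } from "langchain";
import { ChatAnthropic } from "@langchain/anthropic";
import { TOOLS } from "./tools.js";
import { SYSTEM_PROMPT } from "./prompts.js";

// Configure the model explicitly rather than via the "anthropic:..." shorthand.
const model = new ChatAnthropic({
  model: "claude-sonnet-4-5-20250929",
  temperature: 0,
  maxTokens: 2048,
});

export const agent = createAgent({
  model,
  tools: TOOLS,
  systemPrompt: SYSTEM_PROMPT,
});
```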

LangChain supports middleware for advanced customization:

```typescript
import {
  createAgent,
  summarizationMiddleware,
  humanInTheLoopMiddleware,
} from "langchain";

export const agent = createAgent({
  model: "anthropic:claude-sonnet-4-5",
  tools: TOOLS,
  systemPrompt: SYSTEM_PROMPT,
  middleware: [
    // Auto-summarize long conversations
    summarizationMiddleware({
      model: "anthropic:claude-sonnet-4-5",
      trigger: { tokens: 4000 },
    }),

    // Require approval for sensitive operations
    humanInTheLoopMiddleware({
      interruptOn: {
        send_email: { allowedDecisions: ["approve", "reject"] },
      },
    }),
  ],
});
```

Edit `src/prompts.ts` to change the agent's behavior:

```typescript
export const SYSTEM_PROMPT = `You are a helpful AI assistant...`;
```
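
For example, a slightly richer prompt (purely illustrative wording) that pins the date and nudges tool use might look like this:

```typescript
// Illustrative alternative for src/prompts.ts; adjust the wording freely.
export const SYSTEM_PROMPT = `You are a helpful AI assistant.

Today's date is ${new Date().toISOString().slice(0, 10)}.

Use the available tools when a question involves calculation, the current
time, weather, or knowledge lookup; otherwise answer directly and concisely.`;
```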

LangGraph Studio provides a visual interface for:

- Visualizing your agent's graph structure
- Debugging tool calls and agent decisions
- Testing with interactive conversations
- Editing state to debug specific scenarios

Simply open this project folder in LangGraph Studio to get started.
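
Opening the folder works because Studio reads a `langgraph.json` at the project root. A sketch of what it typically looks like for a TypeScript project; the graph path and export name below are assumptions based on this template's layout:

```json
{
  "node_version": "20",
  "dependencies": ["."],
  "graphs": {
    "agent": "./src/agent.ts:agent"
  },
  "env": ".env"
}
```
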
Enable LangSmith for observability:

```bash
# In your .env file
LANGSMITH_API_KEY=your-key-here
LANGSMITH_TRACING=true
LANGSMITH_PROJECT=my-agent-project
```

All agent invocations will automatically be traced (see the example after this list), showing:
- Model calls and responses
- Tool invocations and results
- Token usage and latency
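
As an illustration (assuming the `agent` export from `src/agent.ts` and standard runnable invoke options), a run like the following will appear as a trace in the configured LangSmith project, with the optional tags and metadata attached:

```typescript
import { HumanMessage } from "@langchain/core/messages";
import { agent } from "./agent.js"; // assumed export path

// With LANGSMITH_TRACING=true in .env, this invocation is traced
// automatically; tags and metadata show up on the trace for filtering.
const result = await agent.invoke(
  { messages: [new HumanMessage("What's the weather in Paris?")] },
  { tags: ["readme-demo"], metadata: { source: "tracing-example" } }
);

console.log(result.messages.at(-1)?.content);
```
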
- LangChain Documentation
- LangGraph Documentation
- LangSmith Documentation
- LangChain v1 Migration Guide
MIT License - see LICENSE for details.
