- Calls tools when needed
- Survives crashes and resumes mid-reasoning
- Maintains conversation history
- Prevents duplicate API calls
How agents work
When you run an agent:

- LLM reasons about the task - The agent analyzes your request and decides what to do
- Calls tools if needed - If the agent needs information or wants to take action, it calls the appropriate tools
- Iterates until complete - The agent continues reasoning and calling tools until it has a final answer or hits a stop condition
- Returns the result - You get the final response
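The loop above can be sketched in a few lines. Everything here is illustrative: `fakeModel` stands in for the LLM, `get_weather` is a stub tool, and the names are not the framework's actual API.

```typescript
// Minimal sketch of the reason -> act -> iterate loop described above.
type ToolCall = { name: string; args: { city: string } };
type ModelReply = { toolCalls: ToolCall[]; answer?: string };

// A stub tool registry; a real agent would hold your registered tools.
const tools: Record<string, (args: { city: string }) => string> = {
  get_weather: (args) => `Sunny in ${args.city}`,
};

// Deterministic stand-in for an LLM: first requests a tool call,
// then answers once a tool result appears in the history.
function fakeModel(history: string[]): ModelReply {
  if (!history.some((m) => m.startsWith("tool:"))) {
    return { toolCalls: [{ name: "get_weather", args: { city: "NYC" } }] };
  }
  const last = history[history.length - 1];
  return { toolCalls: [], answer: last.replace("tool: ", "") };
}

function runAgent(input: string, maxSteps = 5): string {
  const history = [`user: ${input}`];
  for (let step = 0; step < maxSteps; step++) {
    const reply = fakeModel(history);     // 1. LLM reasons about the task
    if (reply.toolCalls.length === 0) {
      return reply.answer ?? "";          // 4. final answer: loop ends
    }
    for (const call of reply.toolCalls) { // 2. execute requested tools
      history.push(`tool: ${tools[call.name](call.args)}`);
    }                                     // 3. iterate with tool results
  }
  throw new Error("hit max_steps stop condition");
}

const answer = runAgent("What's the weather in NYC?");
```

A real implementation would make the model call asynchronous; the synchronous stub keeps the control flow visible.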
Running agents
Direct execution
Use agent.run() to generate a complete response from the LLM.
- Calls the LLM with the user input
- Executes tool calls suggested by the LLM - in this case, get_weather for NYC and get_weather for London
- Calls the LLM with the results (or errors) of the tool calls
- Returns the final LLM response if no more tool calls are needed
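The middle steps - executing several tool calls from one model turn and feeding back results or errors - can be sketched like this. Names are illustrative and `get_weather` is a stub that fails for London to show the error path.

```typescript
// One run() turn may request several tool calls; each result OR error
// string is collected and handed back to the LLM for the next turn.
type Call = { name: string; args: { city: string } };

const get_weather = (city: string): string => {
  if (city === "London") throw new Error("weather service timeout");
  return `Sunny in ${city}`;
};

function executeCalls(calls: Call[]): string[] {
  return calls.map((call) => {
    try {
      return `${call.args.city}: ${get_weather(call.args.city)}`;
    } catch (err) {
      // Errors go back to the LLM too, so it can retry or explain.
      return `${call.args.city}: error - ${(err as Error).message}`;
    }
  });
}

const results = executeCalls([
  { name: "get_weather", args: { city: "NYC" } },
  { name: "get_weather", args: { city: "London" } },
]);
```

Returning errors as data, rather than throwing, is what lets the loop continue instead of aborting the whole run.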
Streaming responses
Stream responses for a real-time user experience:

Tools
Tools give agents the ability to take actions. Define them with the @tool decorator:
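As a sketch of what a tool definition carries - the `defineTool` helper below is hypothetical, not the framework's actual decorator - a tool pairs the metadata the LLM sees (name, description, parameters) with the handler the agent executes:

```typescript
// Illustrative only: a tool is metadata for the LLM plus a handler.
interface ToolDef<A, R> {
  name: string;        // what the LLM calls the tool by
  description: string; // how the LLM decides when to use it
  handler: (args: A) => R;
}

function defineTool<A, R>(def: ToolDef<A, R>): ToolDef<A, R> {
  return def;
}

const getWeather = defineTool({
  name: "get_weather",
  description: "Get the current weather for a city",
  handler: ({ city }: { city: string }) => `Sunny in ${city}`,
});

const out = getWeather.handler({ city: "NYC" });
```

The decorator form the framework uses attaches the same kind of metadata to a plain function.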
Sandbox tools
Give agents the ability to write code, run shell commands, and explore a codebase inside an isolated environment. A single sandboxTools() call creates six tools (exec, read, write, edit, glob, grep):
Use env: 'docker' for isolated container execution, or env: 'local' to run directly on the host (with approval-based security by default). See Sandbox for the full reference.
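As an illustration of the shape such a factory might return - this is assumed, not the actual sandboxTools() implementation - one call yields six tools sharing the same environment configuration:

```typescript
// Illustrative shape only: one factory call, six tools, shared config.
type SandboxEnv = "docker" | "local";

interface SandboxTool {
  name: string;
  env: SandboxEnv;
}

function makeSandboxTools(env: SandboxEnv): SandboxTool[] {
  return ["exec", "read", "write", "edit", "glob", "grep"].map((name) => ({
    name,
    env,
  }));
}

const sandbox = makeSandboxTools("docker");
const names = sandbox.map((t) => t.name);
```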
Triggering agents from Slack
Agents can be triggered directly from Slack by @mentioning your bot. The output streams back to the originating thread.

Structured outputs
Instead of natural language, agents can return structured data.

Stop conditions
Control when an agent stops executing to prevent runaway costs or infinite loops:

- max_steps - Limit reasoning iterations
- max_tokens - Cap total token usage (input + output)
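A minimal sketch of how both limits might be enforced inside the agent loop - the names and the per-turn token accounting here are illustrative, not the framework's internals:

```typescript
// The loop ends on a final answer, on max_steps, or on max_tokens,
// whichever comes first.
type StopReason = "answer" | "max_steps" | "max_tokens";

function runWithLimits(
  turn: () => { tokens: number; done: boolean }, // one reason/act iteration
  maxSteps: number,
  maxTokens: number
): StopReason {
  let totalTokens = 0;
  for (let step = 0; step < maxSteps; step++) {
    const { tokens, done } = turn();
    totalTokens += tokens;                 // input + output for this turn
    if (totalTokens > maxTokens) return "max_tokens";
    if (done) return "answer";
  }
  return "max_steps"; // ran out of iterations before a final answer
}

// A stub turn that never finishes and spends 100 tokens each time:
const reason = runWithLimits(() => ({ tokens: 100, done: false }), 10, 450);
```

With a 450-token budget and 100 tokens per turn, the token cap trips before the step cap does.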
Conversational memory
Agents automatically maintain conversation history.

Using agents in workflows
Agents are workflows, so you can compose them with other workflows.

Human-in-the-loop
Combine agents with approval gates for sensitive operations.

Key takeaways
- Agents handle LLM reasoning automatically - you just define tools and let them work
- Run with agent.run() or stream with agent.stream()
- Tools (defined with @tool) give agents the ability to act
- Agents are durable - they survive crashes and resume from the last completed step
- Use structured outputs for reliable data extraction
- Stop conditions control execution and prevent runaway costs
- Conversational memory is maintained automatically
- Compose agents in workflows for complex multi-step tasks
- Sandbox tools let agents write and execute code in isolated environments
- Trigger agents from Slack with @mentions
Learn more
- Agent Guide – Advanced agent patterns and techniques
- Sandbox – Isolated execution environments
- Slack Integration – Trigger agents from Slack
- Examples – Real-world agent implementations