Tool Use in LLM Agents: Patterns, Pitfalls, and Best Practices
Tool use transforms LLMs from text generators into action-capable agents. This guide covers function calling, tool design principles, error handling, and security considerations.
What Is Tool Use?
Tool use (also called function calling) is the capability that allows a language model to invoke external functions, APIs, or services as part of generating its response. Instead of answering purely from parametric knowledge, the model can take actions in the world: search the web, run code, read files, call APIs, or query databases.
This capability is what makes an LLM an agent in the meaningful sense: a system that can affect the world, not just describe it.
How Function Calling Works
Most major model providers implement tool use through a structured interface:
- The developer declares available tools as JSON schemas (name, description, parameters).
- The model, in addition to generating text, can emit a tool call — a structured request to invoke a specific tool with specific arguments.
- The application executes the tool and returns the result to the model.
- The model incorporates the result and continues reasoning.
```json
{
  "type": "function",
  "function": {
    "name": "search_web",
    "description": "Search the web for current information on a topic.",
    "parameters": {
      "type": "object",
      "properties": {
        "query": {
          "type": "string",
          "description": "The search query"
        }
      },
      "required": ["query"]
    }
  }
}
```
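The declare/call/execute/return cycle can be sketched as a small orchestration loop. Everything here is illustrative: `call_model` stands in for any provider API that can return either plain text or a structured tool call, and the message format is an assumption, not a specific vendor's schema.

```python
def search_web(query):
    # Stub tool: a real implementation would hit a search API.
    return f"results for: {query}"

TOOLS = {"search_web": search_web}

def call_model(messages):
    # Fake model for illustration: requests one search, then answers
    # using the tool result it sees in the conversation.
    if not any(m["role"] == "tool" for m in messages):
        return {"tool_call": {"name": "search_web",
                              "arguments": {"query": "LLM agents"}}}
    return {"text": "Here is a summary based on the search results."}

def run_agent(user_prompt, max_steps=5):
    messages = [{"role": "user", "content": user_prompt}]
    for _ in range(max_steps):           # hard iteration cap
        reply = call_model(messages)
        if "tool_call" not in reply:
            return reply["text"]         # final answer, no tool requested
        call = reply["tool_call"]
        result = TOOLS[call["name"]](**call["arguments"])
        messages.append({"role": "tool", "content": result})
    return "Stopped: step limit reached."
```

The loop terminates either when the model stops requesting tools or when the step cap is hit; the cap matters for the loop pitfalls discussed later.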
Tool Design Principles
Good tools are a prerequisite for good agents. The principles below apply regardless of framework.
1. Clear, Unambiguous Descriptions
The model decides which tool to call and how to call it based entirely on the description and parameter names. Vague descriptions lead to incorrect usage.
Tip
Write tool descriptions as if explaining to a smart colleague who can't see your code. Include when to use the tool, what it returns, and its limitations.
2. Minimal, Orthogonal Tool Sets
Each tool should do one thing well. Overlapping capabilities confuse the model about which tool to select.
3. Idempotent Read Tools, Safe Write Tools
Read-only tools (search, retrieve) can be retried freely. Write tools (send email, create record) should be guarded with explicit confirmation steps or rate limits.
4. Structured Return Values
Return consistent, parseable data. The model must understand and act on the result. Unstructured blobs of text increase the chance of misinterpretation.
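One way to keep results consistent is to wrap every tool's return value in the same small envelope. The `status`/`data`/`error` fields below are an illustrative convention, not a standard, and `get_weather` is a stub:

```python
import json

def tool_result(status, data=None, error=None):
    # Uniform envelope: the model always sees the same top-level shape.
    return json.dumps({"status": status, "data": data, "error": error})

def get_weather(city):
    # Stub tool; a real version would call a weather API.
    if not city:
        return tool_result("error", error="city must be non-empty")
    return tool_result("ok", data={"city": city, "temp_c": 21})
```

Because every tool returns the same envelope, the model (and the orchestrator) can check `status` before trusting `data`.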
Common Pitfalls
Tool Call Loops
An agent can get stuck invoking the same tool repeatedly without making progress. Mitigations:
- Maximum iteration limits.
- Step-level logging to detect repetition.
- Explicit loop-detection checks in the orchestrator.
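A simple orchestrator-side check combining the last two mitigations is to fingerprint each (tool, arguments) pair and abort once the same call repeats too often. This is a sketch; a real system might allow legitimate repeats with changed context:

```python
import json

def make_loop_detector(max_repeats=2):
    seen = {}
    def should_abort(tool_name, arguments):
        # sort_keys makes equal argument dicts produce identical fingerprints.
        key = (tool_name, json.dumps(arguments, sort_keys=True))
        seen[key] = seen.get(key, 0) + 1
        return seen[key] > max_repeats
    return should_abort
```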
Hallucinated Arguments
Models sometimes fabricate parameter values — particularly for IDs, dates, and identifiers. Always validate tool inputs before execution.
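Validating arguments against the declared schema before execution catches fabricated values early. Below is a minimal hand-rolled check for required fields and basic types; production code might use a library such as `jsonschema` instead:

```python
def validate_args(schema, args):
    """Check args against a JSON-Schema-like dict: required fields and types."""
    errors = []
    props = schema.get("properties", {})
    for name in schema.get("required", []):
        if name not in args:
            errors.append(f"missing required argument: {name}")
    type_map = {"string": str, "integer": int, "boolean": bool}
    for name, value in args.items():
        if name not in props:
            errors.append(f"unexpected argument: {name}")
            continue
        expected = type_map.get(props[name].get("type"))
        if expected and not isinstance(value, expected):
            errors.append(f"{name}: expected {props[name]['type']}")
    return errors
```

Rejecting the call and returning the error list to the model usually prompts it to correct the arguments on the next turn.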
Prompt Injection via Tool Output
If tool output contains adversarial instructions, the model may follow them, hijacking the agent's behavior.
Warning
Never pass raw, unsanitized tool output directly back to the model for tasks with elevated permissions. Treat tool output like user input — potentially adversarial.
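One partial mitigation is to present tool output to the model as clearly delimited data rather than bare text. The tag format below is an illustrative convention; delimiting reduces, but does not eliminate, injection risk:

```python
def wrap_tool_output(name, text):
    # Frame tool output as quoted data, not instructions, before it
    # re-enters the model's context. A mitigation, not a cure.
    return f'<tool_output name="{name}">\n{text}\n</tool_output>'
```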
Parallel vs. Sequential Tool Calls
Modern models support parallel tool calls: emitting multiple tool calls in a single response. This is ideal for independent sub-tasks (for example, searching multiple sources simultaneously), but when one call's arguments depend on another's result, the orchestrator must sequence the calls itself.
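Independent calls from a single model turn can be executed concurrently; here with a standard-library thread pool. Both tools are stubs:

```python
from concurrent.futures import ThreadPoolExecutor

def search_web(query):
    return f"web results for {query}"   # stub

def search_docs(query):
    return f"doc results for {query}"   # stub

TOOLS = {"search_web": search_web, "search_docs": search_docs}

def execute_parallel(tool_calls):
    # Run independent tool calls concurrently; results keep request order,
    # so each result can be matched back to its originating call.
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(TOOLS[c["name"]], **c["arguments"])
                   for c in tool_calls]
        return [f.result() for f in futures]
```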
Error Handling
Robust agents handle tool failures gracefully:
- Pass error messages back to the model with enough context to retry or escalate.
- Distinguish between transient errors (retry) and permanent errors (abort and report).
- Set timeouts on all tool calls.
- Log every tool invocation and its result for debugging.
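The transient/permanent distinction can be modeled with distinct exception types and a bounded retry. The error classification here is an assumption; real code would map specific API failures (timeouts, rate limits, 4xx vs. 5xx) onto these types:

```python
import time

class TransientToolError(Exception):
    """Retryable failure, e.g. a timeout or rate limit (illustrative)."""

class PermanentToolError(Exception):
    """Non-retryable failure, e.g. invalid arguments (illustrative)."""

def call_with_retry(tool, *args, retries=3, backoff_s=0.0):
    for attempt in range(1, retries + 1):
        try:
            return tool(*args)
        except TransientToolError:
            if attempt == retries:
                raise                  # escalate after the last attempt
            time.sleep(backoff_s)      # backoff between retries
    # PermanentToolError is deliberately not caught: abort and report.
```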
Security Model
Before deploying tool-using agents, define a clear security model:
- Least privilege — tools should have the minimum permissions needed.
- Confirmation gates — require explicit user approval for destructive or irreversible actions.
- Audit logs — immutable records of every tool call and argument.
- Output sandboxing — especially critical for code execution tools.
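A confirmation gate can be a thin wrapper that refuses destructive calls unless an approval callback says yes. The tool names and return shape below are illustrative:

```python
DESTRUCTIVE = {"delete_record", "send_email"}   # illustrative set

def gated_execute(name, func, args, approve):
    # `approve` is a callback (e.g. prompting a human) invoked only
    # for tools flagged as destructive or irreversible.
    if name in DESTRUCTIVE and not approve(name, args):
        return {"status": "denied", "tool": name}
    return {"status": "ok", "result": func(**args)}
```

Keeping the gate in the orchestrator, rather than inside each tool, means the policy stays auditable in one place.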
Tool use dramatically expands what agents can do — and equally expands the attack surface. Treat it accordingly.