AI Tool Use in APIs Explained | Generated by AI

2025.08

What is Tool Use in AI APIs?

Tool use, often referred to as “tool calling” or “function calling,” is a feature in AI APIs that allows large language models (LLMs) to interact with external tools, functions, or APIs. Instead of generating responses solely based on their internal knowledge, the model can decide to call predefined functions to fetch real-time data, perform calculations, or execute actions. This makes the AI more dynamic and useful for tasks like querying weather, searching databases, or integrating with other services.

The process typically works like this:

1. You define a set of tools (function names, descriptions, and JSON Schema parameters) and send them along with the user's message.
2. The model either answers directly or returns a structured tool call containing a function name and JSON-encoded arguments.
3. Your code executes the corresponding function and appends the result to the conversation as a tool message.
4. You call the API again so the model can compose a final answer using the tool's result.

This flow was popularized by OpenAI's function calling API, and many providers, including Mistral and DeepSeek, support compatible implementations.
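Before looking at provider-specific SDKs, it helps to see the data format itself: a tool is declared as a JSON Schema description of its parameters, and the model returns arguments as a JSON string. The following is a minimal, provider-agnostic sketch; the `get_weather` name and the example arguments string are illustrative, not output from any real API call.

```python
import json

# A minimal tool definition in the OpenAI-style function-calling format.
tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {  # JSON Schema describing the arguments
            "type": "object",
            "properties": {
                "location": {"type": "string", "description": "City name"}
            },
            "required": ["location"],
        },
    },
}

# When the model decides to call the tool, its reply carries the
# arguments as a JSON string, which your code parses and acts on:
raw_arguments = '{"location": "Hangzhou"}'  # illustrative model output
args = json.loads(raw_arguments)
print(args["location"])  # Hangzhou
```

Both SDK examples below follow exactly this shape; only the client classes differ.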

Mistral or DeepSeek for Tool Use?

Both Mistral AI and DeepSeek AI support tool calling in their APIs, making them suitable for building agents or applications that require external integrations.

Ultimately, neither is strictly "better" for tool use; both work well. DeepSeek may have the edge on cost, while Mistral offers more polished agent integrations.

How to Use Tool Use

To use tool calling, you’ll need an API key from the respective provider (sign up at mistral.ai for Mistral or platform.deepseek.com for DeepSeek). Both use Python SDKs similar to OpenAI’s. Below are step-by-step examples for a simple weather query tool.

Using Tool Use with Mistral AI

Mistral's API supports tool calling through chat completions via the Mistral client class. Install the SDK with pip install mistralai.

Example Python Code (adapted from official and community sources):

import json

from mistralai import Mistral

# Initialize client with your API key
api_key = "YOUR_MISTRAL_API_KEY"
model = "mistral-large-latest"  # Supports tool calling
client = Mistral(api_key=api_key)

# Define tools (functions)
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the weather for a location.",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {"type": "string", "description": "The city, e.g., San Francisco"}
                },
                "required": ["location"]
            }
        }
    }
]

# User message
messages = [{"role": "user", "content": "What's the weather in Hangzhou?"}]

# First API call: Model decides if tool is needed
response = client.chat.complete(
    model=model,
    messages=messages,
    tools=tools,
    tool_choice="auto"  # Auto-decides tool use
)

# Check for tool calls
tool_calls = response.choices[0].message.tool_calls
if tool_calls:
    # Append the model's response to messages
    messages.append(response.choices[0].message)
    
    # Simulate executing the tool (in real code, call an actual API)
    tool_call = tool_calls[0]
    if tool_call.function.name == "get_weather":
        # Parse the JSON arguments safely (never eval model output)
        location = json.loads(tool_call.function.arguments)["location"]
        weather_result = "24°C and sunny"  # Replace with real function call
        
        # Append tool result
        messages.append({
            "role": "tool",
            "tool_call_id": tool_call.id,
            "name": tool_call.function.name,
            "content": weather_result
        })
    
    # Second API call: Model generates final response
    final_response = client.chat.complete(model=model, messages=messages)
    print(final_response.choices[0].message.content)
else:
    print(response.choices[0].message.content)

This code sends a query, checks for a tool call, executes it (simulated here), and gets the final answer. For agent-based setups, use Mistral’s beta agents API for more complex workflows.
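In a real application you would replace the simulated weather result with an actual function call. A common pattern is a dispatch table mapping tool names to local Python callables, so the same handler works for any number of tools. This is a generic sketch, not part of either SDK; `get_weather` here is a hypothetical local implementation.

```python
import json

# Hypothetical local implementation of the tool declared to the model.
def get_weather(location: str) -> str:
    # Stand-in for a real weather API call.
    return f"24°C and sunny in {location}"

# Dispatch table: tool name -> Python callable.
TOOL_REGISTRY = {"get_weather": get_weather}

def execute_tool_call(name: str, arguments_json: str) -> str:
    """Look up the named tool and invoke it with parsed JSON arguments."""
    func = TOOL_REGISTRY.get(name)
    if func is None:
        return f"Error: unknown tool '{name}'"
    try:
        args = json.loads(arguments_json)
    except json.JSONDecodeError as exc:
        return f"Error: invalid arguments: {exc}"
    return func(**args)

print(execute_tool_call("get_weather", '{"location": "Hangzhou"}'))
# 24°C and sunny in Hangzhou
```

Returning error strings as the tool result (rather than raising) lets the model see what went wrong and recover in its next turn.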

Using Tool Use with DeepSeek AI

DeepSeek’s API is OpenAI-compatible, so you can use the OpenAI Python SDK. Install with pip install openai.

Example Python Code (from official docs):

from openai import OpenAI

# Initialize client with DeepSeek base URL and API key
client = OpenAI(
    api_key="YOUR_DEEPSEEK_API_KEY",
    base_url="https://api.deepseek.com"
)

# Define tools
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get weather of a location, the user should supply a location first",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, e.g. San Francisco, CA",
                    }
                },
                "required": ["location"]
            },
        }
    },
]

# Function to send messages
def send_messages(messages):
    response = client.chat.completions.create(
        model="deepseek-chat",  # Supports tool calling
        messages=messages,
        tools=tools
    )
    return response.choices[0].message

# User message
messages = [{"role": "user", "content": "How's the weather in Hangzhou?"}]
message = send_messages(messages)
print(f"User>\t {messages[0]['content']}")

# Handle tool call (the model may also answer directly without one)
if message.tool_calls:
    tool = message.tool_calls[0]
    messages.append(message)

    # Simulate tool execution (replace with a real function call)
    messages.append({"role": "tool", "tool_call_id": tool.id, "content": "24℃"})

    # Get final response
    message = send_messages(messages)
print(f"Model>\t {message.content}")

This follows the same flow: query → tool call → execute → final response. Like any model, deepseek-chat has a finite context window, so keep the message history concise in long tool-calling loops.

For advanced usage, check rate limits, costs, and model-specific docs. Test in a sandbox to handle errors like invalid parameters.
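One error class worth handling explicitly is a tool call whose arguments don't match the declared schema (models occasionally omit required fields). A lightweight check against the schema's `required` list catches this before execution; the sketch below is a minimal validator, not a full JSON Schema implementation (a library like `jsonschema` would be the thorough option).

```python
import json

def validate_arguments(arguments_json, schema):
    """Check a tool call's JSON arguments against the declared schema's
    required fields. Returns (ok, error_message)."""
    try:
        args = json.loads(arguments_json)
    except json.JSONDecodeError as exc:
        return False, f"arguments are not valid JSON: {exc}"
    missing = [k for k in schema.get("required", []) if k not in args]
    if missing:
        return False, f"missing required parameters: {missing}"
    return True, ""

# The same parameters schema used in the weather examples above.
schema = {
    "type": "object",
    "properties": {"location": {"type": "string"}},
    "required": ["location"],
}

print(validate_arguments('{"location": "Hangzhou"}', schema))
print(validate_arguments('{}', schema))
```

When validation fails, feed the error message back to the model as the tool result; it will usually retry with corrected arguments.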
