
Ollama model host for ADK agents

Supported in ADK Python v0.1.0

Ollama is a tool that allows you to host and run open-source models locally. ADK integrates with Ollama-hosted models through the LiteLLM model connector library.

Get started

Use the LiteLLM wrapper to create agents with Ollama-hosted models. The following code example shows a basic implementation for using Gemma open models with your agents:

from google.adk.agents import Agent
from google.adk.models.lite_llm import LiteLlm

root_agent = Agent(
    # Route requests through LiteLLM to the locally hosted Ollama model.
    model=LiteLlm(model="ollama_chat/gemma3:latest"),
    name="dice_agent",
    description=(
        "hello world agent that can roll a dice of 8 sides and check prime"
        " numbers."
    ),
    instruction="""
      You roll dice and answer questions about the outcome of the dice rolls.
    """,
    tools=[
        roll_die,
        check_prime,
    ],
)
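
The roll_die and check_prime entries above are plain Python functions registered as tools. The implementations below are a minimal sketch based on the agent's description, not code from ADK itself:

import random

def roll_die(sides: int = 8) -> int:
    """Rolls a die with the given number of sides and returns the result."""
    return random.randint(1, sides)

def check_prime(number: int) -> bool:
    """Returns True if the given number is prime."""
    if number < 2:
        return False
    for i in range(2, int(number**0.5) + 1):
        if number % i == 0:
            return False
    return True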

Warning: Use the ollama_chat interface

Make sure you set the provider to ollama_chat instead of ollama. Using ollama can result in unexpected behavior such as infinite tool-call loops and the model ignoring previous context.
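
The only change needed is the provider prefix in the model string. For example (gemma3:latest stands in for whichever model you have pulled):

# Correct: routes through Ollama's chat endpoint
model = LiteLlm(model="ollama_chat/gemma3:latest")

# Avoid: the completion-style provider can loop on tool calls
# model = LiteLlm(model="ollama/gemma3:latest")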

Use OLLAMA_API_BASE environment variable

Although you can pass the api_base parameter to LiteLLM for generation, as of LiteLLM v1.65.5 the library relies on the environment variable for other API calls. Therefore, set the OLLAMA_API_BASE environment variable to your Ollama server URL to ensure all requests are routed correctly.

export OLLAMA_API_BASE="http://localhost:11434"
adk web
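
If you start your agent from a Python script instead of adk web, you can set the variable before the agent is constructed; a minimal sketch (the URL assumes a default local Ollama install):

import os

# Must be set before LiteLLM makes any Ollama API calls.
os.environ.setdefault("OLLAMA_API_BASE", "http://localhost:11434")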

Model choice

If your agent relies on tools, make sure you select a model with tool support from the Ollama website; models without it produce unreliable results. You can check a model's tool support using the following command:

ollama show mistral-small3.1
  Model
    architecture        mistral3
    parameters          24.0B
    context length      131072
    embedding length    5120
    quantization        Q4_K_M

  Capabilities
    completion
    vision
    tools

You should see tools listed under capabilities. You can also look at the template the model is using and tweak it based on your needs.

ollama show --modelfile llama3.2 > model_file_to_modify
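
After editing the exported file, you can build a new model from it with ollama create (the name llama3.2-modified here is just an example):

ollama create llama3.2-modified -f model_file_to_modify

Then point your agent at the new model, for example LiteLlm(model="ollama_chat/llama3.2-modified").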