Increase tool performance with parallel execution
Starting with Agent Development Kit (ADK) version 1.10.0 for Python, the framework attempts to run any agent-requested function tools in parallel. This behavior can significantly improve the performance and responsiveness of your agents, particularly for agents that rely on multiple external APIs or long-running tasks. For example, if you have 3 tools that each take 2 seconds, running them in parallel brings the total execution time closer to 2 seconds instead of 6 seconds. Running tool functions in parallel can improve the performance of your agents, particularly in the following scenarios:
- Research tasks: Where the agent collects information from multiple sources before proceeding to the next stage of the workflow.
- API calls: Where the agent accesses several APIs independently, such as searching for available flights using APIs from multiple airlines.
- Publishing and communication tasks: When the agent needs to publish or communicate through multiple, independent channels or to multiple recipients.
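For instance, in the flight-search scenario above, an agent might be given one asynchronous tool per airline so that the ADK can fan the calls out concurrently. The following is a minimal sketch, not a definitive implementation: the airline search functions, endpoint URLs, and model name are illustrative, and it assumes the Agent class from google.adk.agents.

import aiohttp
from google.adk.agents import Agent

async def search_airline_a(origin: str, destination: str) -> dict:
    # Hypothetical flight-search API for airline A; the request does not block the event loop
    async with aiohttp.ClientSession() as session:
        async with session.get(
            f"https://api.airline-a.example/flights?from={origin}&to={destination}"
        ) as response:
            return await response.json()

async def search_airline_b(origin: str, destination: str) -> dict:
    # Hypothetical flight-search API for airline B
    async with aiohttp.ClientSession() as session:
        async with session.get(
            f"https://api.airline-b.example/flights?from={origin}&to={destination}"
        ) as response:
            return await response.json()

flight_agent = Agent(
    name="flight_search_agent",
    model="gemini-2.0-flash",  # Illustrative model name
    instruction="Search all airlines in parallel and compare the results.",
    tools=[search_airline_a, search_airline_b],
)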
However, your custom tools must be built with asynchronous execution support to enable this performance improvement. This guide explains how parallel tool execution works in the ADK and how to build your tools to take full advantage of this processing feature.
Warning
Any ADK tool that uses synchronous processing in a set of tool function calls will block the other tools from executing in parallel, even if those other tools support parallel execution.
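As an illustration of this warning, the following sketch contrasts a synchronous tool, which blocks the event loop, with an asynchronous version of the same hypothetical check_inventory tool:

import asyncio
import time

def check_inventory_sync(item: str) -> dict:
    # Synchronous tool: time.sleep() blocks the event loop, so other tools
    # in the same batch of function calls cannot run until it returns
    time.sleep(2)
    return {"item": item, "in_stock": True}

async def check_inventory(item: str) -> dict:
    # Asynchronous tool: awaiting asyncio.sleep() yields control, so other
    # tools in the same batch can execute concurrently
    await asyncio.sleep(2)
    return {"item": item, "in_stock": True}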
Build parallel-ready tools
Enable parallel execution of your tool functions by defining them as
asynchronous functions. In Python code, this means using async def and await
syntax which allows the ADK to run them concurrently in an asyncio event loop.
The following sections show examples of agent tools built for parallel
processing and asynchronous operations.
Example of an HTTP web call
The following code example shows how to modify the get_weather() function to
operate asynchronously and allow for parallel execution:
import aiohttp

async def get_weather(city: str) -> dict:
    # aiohttp performs the HTTP request without blocking the event loop
    async with aiohttp.ClientSession() as session:
        async with session.get(f"http://api.weather.com/{city}") as response:
            return await response.json()
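To see the benefit outside of an agent, you can exercise the same tool concurrently with asyncio.gather(), which mirrors the fan-out the ADK performs when the model requests several tool calls in one turn. The city names below are placeholders, and the function hits the same example endpoint as above:

import asyncio

async def main():
    # The three calls run concurrently, so the total time is roughly that of
    # the slowest single request rather than the sum of all three
    results = await asyncio.gather(
        get_weather("London"),
        get_weather("Tokyo"),
        get_weather("New York"),
    )
    print(results)

asyncio.run(main())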
Example of a database call
The following code example shows how to write a database-calling function that operates asynchronously:
import asyncpg

async def query_database(query: str) -> list:
    # asyncpg connections are not async context managers, so close the connection explicitly
    conn = await asyncpg.connect("postgresql://...")
    try:
        return await conn.fetch(query)
    finally:
        await conn.close()
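If the tool is called frequently, opening a new connection per call can become the bottleneck; a connection pool is a common alternative. The following is only a sketch: the "postgresql://..." DSN is the same placeholder used above, and init_pool is an illustrative helper assumed to run once at application startup.

import asyncpg

_pool = None  # Shared connection pool, created once at startup

async def init_pool() -> None:
    # Create the pool once, for example during application startup
    global _pool
    _pool = await asyncpg.create_pool("postgresql://...")

async def query_database_pooled(query: str) -> list:
    # Acquiring a connection from the shared pool avoids per-call connection
    # setup while keeping each query fully asynchronous
    async with _pool.acquire() as conn:
        return await conn.fetch(query)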
Example of yielding behavior for long loops
In cases where a tool is processing multiple requests or numerous long-running requests, consider adding yielding code to allow other tools to execute, as shown in the following code sample:
在工具处理多个请求或大量长运行请求的情况下,考虑添加 yielding 代码以允许其他工具执行,如以下代码示例所示:
import asyncio

async def process_data(data: list) -> dict:
    results = []
    for i, item in enumerate(data):
        processed = await process_item(item)  # Yield point
        results.append(processed)
        # Add periodic yield points for long loops
        if i % 100 == 0:
            await asyncio.sleep(0)  # Yield control
    return {"results": results}
Important
Use the asyncio.sleep() function for pauses to avoid blocking
execution of other functions.
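For example, a tool that needs to pause, such as for a simple rate limit, should await asyncio.sleep() rather than call time.sleep(). The rate_limited_tool name below is illustrative:

import asyncio

async def rate_limited_tool(query: str) -> dict:
    # time.sleep(1) here would stall every other tool in the batch;
    # asyncio.sleep(1) pauses only this coroutine
    await asyncio.sleep(1)
    return {"query": query, "status": "ok"}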
Example of thread pools for intensive operations
When performing processing-intensive functions, consider creating thread pools for better management of available computing resources, as shown in the following example:
import asyncio
from concurrent.futures import ThreadPoolExecutor

async def cpu_intensive_tool(data: list) -> dict:
    loop = asyncio.get_event_loop()
    # Use a thread pool for CPU-bound work so the event loop stays responsive
    with ThreadPoolExecutor() as executor:
        result = await loop.run_in_executor(
            executor,
            expensive_computation,
            data
        )
    return {"result": result}
Example of process chunking
When processing long lists or large amounts of data, consider combining the thread pool technique with splitting the work into chunks and yielding control between chunks, as shown in the following example:
import asyncio
from concurrent.futures import ThreadPoolExecutor

async def process_large_dataset(dataset: list) -> dict:
    results = []
    chunk_size = 1000
    for i in range(0, len(dataset), chunk_size):
        chunk = dataset[i:i + chunk_size]
        # Process the chunk in a thread pool
        loop = asyncio.get_event_loop()
        with ThreadPoolExecutor() as executor:
            chunk_result = await loop.run_in_executor(
                executor, process_chunk, chunk
            )
        results.extend(chunk_result)
        # Yield control between chunks
        await asyncio.sleep(0)
    return {"total_processed": len(results), "results": results}
Write parallel-ready prompts and tool descriptions
When building prompts for AI models, consider explicitly specifying or hinting that function calls be made in parallel. The following example of an AI prompt directs the model to use tools in parallel:
When users ask for multiple pieces of information, always call functions in parallel.

Examples:
- "Get weather for London and currency rate USD to EUR" → Call both functions simultaneously
- "Compare cities A and B" → Call get_weather, get_population, get_distance in parallel
- "Analyze multiple stocks" → Call get_stock_price for each stock in parallel

Always prefer multiple specific function calls over single complex calls.
The following example shows a tool function description that hints at more efficient use through parallel execution:
import asyncio

async def get_weather(city: str) -> dict:
    """Get current weather for a single city.

    This function is optimized for parallel execution - call multiple times for different cities.

    Args:
        city: Name of the city, for example: 'London', 'New York'

    Returns:
        Weather data including temperature, conditions, humidity
    """
    await asyncio.sleep(2)  # Simulate API call
    return {"city": city, "temp": 72, "condition": "sunny"}
Next steps
For more information on building Tools for agents and function calling, see Function Tools. For more detailed examples of tools that take advantage of parallel processing, see the samples in the adk-python repository.