BigQuery Agent Analytics Plugin

Supported in ADK Python v1.21.0 (Preview)

Version Requirement

Use the latest version of the ADK (version 1.21.0 or higher) to make full use of the features described in this document.

The BigQuery Agent Analytics Plugin significantly enhances the Agent Development Kit (ADK) by providing a robust solution for in-depth agent behavior analysis. Using the ADK Plugin architecture and the BigQuery Storage Write API, it captures and logs critical operational events directly into a Google BigQuery table, giving you advanced capabilities for debugging, real-time monitoring, and comprehensive offline performance evaluation.

Version 1.21.0 introduces Hybrid Multimodal Logging, which lets you log large payloads (images, audio, blobs) by offloading them to Google Cloud Storage (GCS) while keeping a structured reference (ObjectRef) in BigQuery.

Preview release

The BigQuery Agent Analytics Plugin is in Preview release. For more information, see the launch stage descriptions.

BigQuery Storage Write API

This feature uses the BigQuery Storage Write API, which is a paid service. For information on costs, see the BigQuery documentation.

Use cases

  • Agent workflow debugging and analysis: Capture a wide range of plugin lifecycle events (LLM calls, tool usage) and agent-yielded events (user input, model responses) into a well-defined schema.
  • High-volume analysis and debugging: Logging operations are performed asynchronously through the Storage Write API, allowing high throughput and low latency.
  • Multimodal Analysis: Log and analyze text, images, and other modalities. Large files are offloaded to GCS, making them accessible to BigQuery ML via Object Tables.
  • Distributed Tracing: Built-in support for OpenTelemetry-style tracing (trace_id, span_id) to visualize agent execution flows.

The agent event data recorded varies based on the ADK event type. For more information, see Event types and payloads.

Prerequisites

  • Google Cloud Project with the BigQuery API enabled.
  • BigQuery Dataset: Create a dataset to store logging tables before using the plugin. The plugin automatically creates the necessary events table within the dataset if the table does not exist. (A setup sketch follows this list.)
  • Google Cloud Storage Bucket (Optional): If you plan to log multimodal content (images, audio, etc.), creating a GCS bucket is recommended for offloading large files.
  • Authentication:
    • Local: Run gcloud auth application-default login.
    • Cloud: Ensure your service account has the required permissions.
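The following minimal sketch, assuming the google-cloud-bigquery client library, verifies that Application Default Credentials resolve and creates the logging dataset if it does not already exist; the project and dataset names are placeholders:

import google.auth
from google.cloud import bigquery

# Resolve Application Default Credentials; this raises if none are configured.
credentials, detected_project = google.auth.default()

project_id = detected_project or "your-gcp-project-id"  # placeholder
client = bigquery.Client(project=project_id, credentials=credentials)

# Create the dataset that will hold the logging tables, if it is missing.
dataset = bigquery.Dataset(f"{project_id}.your-big-query-dataset-id")  # placeholder
dataset.location = "US"  # match the location you pass to the plugin
client.create_dataset(dataset, exists_ok=True)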

IAM permissions

For the agent to work properly, the principal (e.g., service account, user account) under which the agent is running needs these Google Cloud roles:

  • roles/bigquery.jobUser at the project level to run BigQuery queries (a smoke test follows this list).
  • roles/bigquery.dataEditor at the table level to write log/event data.
  • If using GCS offloading: roles/storage.objectCreator and roles/storage.objectViewer on the target bucket.
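As a quick smoke test of the roles/bigquery.jobUser grant, you can run a trivial query under the same principal; a minimal sketch, assuming the google-cloud-bigquery client library:

from google.cloud import bigquery

# Fails with a 403 error if the principal lacks roles/bigquery.jobUser
# on the project.
client = bigquery.Client(project="your-gcp-project-id")
rows = client.query("SELECT 1 AS ok").result()
print(list(rows))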

Use with agent

You use the BigQuery Agent Analytics Plugin by configuring it and registering it with your ADK agent's App object. The following example shows an implementation of an agent with this plugin, including GCS offloading:

my_bq_agent/agent.py
# my_bq_agent/agent.py
import os
import google.auth
from google.adk.apps import App
from google.adk.plugins.bigquery_agent_analytics_plugin import BigQueryAgentAnalyticsPlugin, BigQueryLoggerConfig
from google.adk.agents import Agent
from google.adk.models.google_llm import Gemini
from google.adk.tools.bigquery import BigQueryToolset, BigQueryCredentialsConfig

# --- Configuration ---
PROJECT_ID = os.environ.get("GOOGLE_CLOUD_PROJECT", "your-gcp-project-id")
DATASET_ID = os.environ.get("BIG_QUERY_DATASET_ID", "your-big-query-dataset-id")
LOCATION = os.environ.get("GOOGLE_CLOUD_LOCATION", "US")  # the plugin's default location is US
GCS_BUCKET = os.environ.get("GCS_BUCKET_NAME", "your-gcs-bucket-name")  # Optional

if PROJECT_ID == "your-gcp-project-id":
    raise ValueError("Please set GOOGLE_CLOUD_PROJECT or update the code.")

# --- CRITICAL: Set environment variables BEFORE Gemini instantiation ---
os.environ['GOOGLE_CLOUD_PROJECT'] = PROJECT_ID
os.environ['GOOGLE_CLOUD_LOCATION'] = LOCATION
os.environ['GOOGLE_GENAI_USE_VERTEXAI'] = 'True'

# --- Initialize the Plugin with Config ---
bq_config = BigQueryLoggerConfig(
    enabled=True,
    gcs_bucket_name=GCS_BUCKET,  # Enable GCS offloading for multimodal content
    log_multi_modal_content=True,
    max_content_length=500 * 1024,  # 500 KB limit for inline text
    batch_size=1,  # Default is 1 for low latency; increase for high throughput
    shutdown_timeout=10.0
)

bq_logging_plugin = BigQueryAgentAnalyticsPlugin(
    project_id=PROJECT_ID,
    dataset_id=DATASET_ID,
    table_id="agent_events_v2", # default table name is agent_events_v2 / 默认表名是 agent_events_v2
    config=bq_config,
    location=LOCATION
)

# --- Initialize Tools and Model ---
credentials, _ = google.auth.default(scopes=["https://www.googleapis.com/auth/cloud-platform"])
bigquery_toolset = BigQueryToolset(
    credentials_config=BigQueryCredentialsConfig(credentials=credentials)
)

llm = Gemini(model="gemini-2.5-flash")

root_agent = Agent(
    model=llm,
    name='my_bq_agent',
    instruction="You are a helpful assistant with access to BigQuery tools.",
    tools=[bigquery_toolset]
)

# --- Create the App ---
app = App(
    name="my_bq_agent",
    root_agent=root_agent,
    plugins=[bq_logging_plugin],
)

Run and test agent

Test the plugin by running the agent and making a few requests through the chat interface, such as "tell me what you can do" or "List datasets in my cloud project". These actions create events, which are recorded in the BigQuery instance of your Google Cloud project. Once the events have been processed, you can view their data in the BigQuery console using this query:

SELECT timestamp, event_type, content 
FROM `your-gcp-project-id.your-big-query-dataset-id.agent_events_v2`
ORDER BY timestamp DESC
LIMIT 20;

Configuration options

You can customize the plugin using BigQueryLoggerConfig.

  • enabled (bool, default: True): To disable the plugin from logging agent data to the BigQuery table, set this parameter to False.
  • clustering_fields (List[str], default: ["event_type", "agent", "user_id"]): The fields used to cluster the BigQuery table when it is automatically created.
  • gcs_bucket_name (Optional[str], default: None): The name of the GCS bucket to offload large content (images, blobs, large text) to. If not provided, large content may be truncated or replaced with placeholders.
  • connection_id (Optional[str], default: None): The BigQuery connection ID (e.g., us.my-connection) to use as the authorizer for ObjectRef columns. Required for using ObjectRef with BigQuery ML.
  • max_content_length (int, default: 500 * 1024): The maximum length (in characters) of text content to store inline in BigQuery before offloading to GCS (if configured) or truncating. The default is 500 KB.
  • batch_size (int, default: 1): The number of events to batch before writing to BigQuery (see the batching sketch after this list).
  • batch_flush_interval (float, default: 1.0): The maximum time (in seconds) to wait before flushing a partial batch.
  • shutdown_timeout (float, default: 10.0): Seconds to wait for logs to flush during shutdown.
  • event_allowlist (Optional[List[str]], default: None): A list of event types to log. If None, all events are logged except those in event_denylist. For a comprehensive list of supported event types, see the Event types and payloads section.
  • event_denylist (Optional[List[str]], default: None): A list of event types to skip logging. For a comprehensive list of supported event types, see the Event types and payloads section.
  • content_formatter (Optional[Callable[[Any, str], Any]], default: None): An optional function to format event content before logging.
  • log_multi_modal_content (bool, default: True): Whether to log detailed content parts (including GCS references).
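For high-throughput agents you can trade a little latency for fewer write calls by batching events. A minimal sketch using only the options listed above (the values are illustrative):

from google.adk.plugins.bigquery_agent_analytics_plugin import BigQueryLoggerConfig

# Batch up to 100 events per write; flush partial batches every 2 seconds.
high_throughput_config = BigQueryLoggerConfig(
    batch_size=100,            # default is 1 (lowest latency)
    batch_flush_interval=2.0,  # seconds to wait before flushing a partial batch
    shutdown_timeout=10.0,     # allow time for the final flush on exit
)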

The following code sample shows how to define a configuration for the BigQuery Agent Analytics plugin:

import json
import re
from typing import Any

from google.adk.plugins.bigquery_agent_analytics_plugin import BigQueryLoggerConfig

def redact_dollar_amounts(event_content: Any, event_type: str) -> str:
    """Custom formatter that redacts dollar amounts (e.g., $600, $12.50)
    and serializes dict input to JSON.

    The second positional argument matches the documented
    Callable[[Any, str], Any] signature for content_formatter.
    """
    if isinstance(event_content, dict):
        text_content = json.dumps(event_content)
    else:
        text_content = str(event_content)

    # Regex to find dollar amounts: $ followed by digits, optionally with
    # commas or decimals. Examples: $600, $1,200.50, $0.99
    return re.sub(r'\$\d+(?:,\d{3})*(?:\.\d+)?', 'xxx', text_content)

config = BigQueryLoggerConfig(
    enabled=True,
    event_allowlist=["LLM_REQUEST", "LLM_RESPONSE"],  # Only log these events
    # event_denylist=["TOOL_STARTING"],  # Skip these events
    shutdown_timeout=10.0,  # Wait up to 10s for logs to flush on exit
    client_close_timeout=2.0,  # Wait up to 2s for the BigQuery client to close
    max_content_length=500,  # Truncate content to 500 characters
    content_formatter=redact_dollar_amounts,  # Redact dollar amounts in logged content
)

plugin = BigQueryAgentAnalyticsPlugin(..., config=config)

Schema and production setup

The plugin automatically creates the table if it does not exist. However, for production, we recommend creating the table manually using the following DDL, which uses the JSON type for flexibility and REPEATED RECORDs for multimodal content.

Recommended DDL:

CREATE TABLE `your-gcp-project-id.adk_agent_logs.agent_events_v2`
(
  timestamp TIMESTAMP NOT NULL OPTIONS(description="The UTC time at which the event was logged."),
  event_type STRING OPTIONS(description="Indicates the type of event being logged (e.g., 'LLM_REQUEST', 'TOOL_COMPLETED')."),
  agent STRING OPTIONS(description="The name of the ADK agent or author associated with the event."),
  session_id STRING OPTIONS(description="A unique identifier to group events within a single conversation or user session."),
  invocation_id STRING OPTIONS(description="A unique identifier for each individual agent execution or turn within a session."),
  user_id STRING OPTIONS(description="The identifier of the user associated with the current session."),
  trace_id STRING OPTIONS(description="OpenTelemetry trace ID for distributed tracing."),
  span_id STRING OPTIONS(description="OpenTelemetry span ID for this specific operation."),
  parent_span_id STRING OPTIONS(description="OpenTelemetry parent span ID to reconstruct hierarchy."),
  content JSON OPTIONS(description="The event-specific data (payload) stored as JSON."),
  content_parts ARRAY<STRUCT<
    mime_type STRING,
    uri STRING,
    object_ref STRUCT<
      uri STRING,
      version STRING,
      authorizer STRING,
      details JSON
    >,
    text STRING,
    part_index INT64,
    part_attributes STRING,
    storage_mode STRING
  >> OPTIONS(description="Detailed content parts for multi-modal data."),
  attributes JSON OPTIONS(description="Arbitrary key-value pairs for additional metadata."),
  latency_ms JSON OPTIONS(description="Latency measurements (e.g., total_ms)."),
  status STRING OPTIONS(description="The outcome of the event, typically 'OK' or 'ERROR'."),
  error_message STRING OPTIONS(description="Populated if an error occurs."),
  is_truncated BOOLEAN OPTIONS(description="Flag indicating whether content was truncated.")
)
PARTITION BY DATE(timestamp)
CLUSTER BY event_type, agent, user_id;
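If you create the table from code rather than the console, you can execute the DDL with the google-cloud-bigquery client library; a minimal sketch (paste the full CREATE TABLE statement above into the placeholder string):

from google.cloud import bigquery

client = bigquery.Client(project="your-gcp-project-id")

ddl = """
-- Paste the full CREATE TABLE statement from above here.
"""

client.query(ddl).result()  # a DDL statement returns an empty result set on success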

Event types and payloads

The content column now contains a JSON object specific to the event_type. The content_parts column provides a structured view of the content, which is especially useful for images or offloaded data.

Content Truncation

  • Variable content fields are truncated to max_content_length (configured in BigQueryLoggerConfig, default 500 KB).
  • If gcs_bucket_name is configured, large content is offloaded to GCS instead of being truncated, and a reference is stored in content_parts.object_ref. Both modes are shown in the sketch after this list.
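Both behaviors are driven by the same two options; a minimal sketch contrasting them (the 10 KB threshold is illustrative):

from google.adk.plugins.bigquery_agent_analytics_plugin import BigQueryLoggerConfig

# Truncate-only: inline text longer than 10 KB is cut off in BigQuery.
truncate_config = BigQueryLoggerConfig(max_content_length=10 * 1024)

# Offload: oversized content is written to GCS instead, and a reference
# is stored in content_parts.object_ref.
offload_config = BigQueryLoggerConfig(
    max_content_length=10 * 1024,
    gcs_bucket_name="your-gcs-bucket-name",
)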

LLM interactions (plugin lifecycle)

These events track the raw requests sent to and responses received from the LLM.

Each event type below is listed with its content (JSON) structure, its attributes (JSON), and simplified example content.

LLM_REQUEST

Content (JSON) structure:

{
  "prompt": [
    {"role": "user", "content": "..."}
  ],
  "system_prompt": "..."
}

Attributes (JSON):

{
  "tools": ["tool_a", "tool_b"],
  "llm_config": {"temperature": 0.5}
}

Example content (simplified):

{
  "prompt": [
    {"role": "user", "content": "What is the capital of France?"}
  ],
  "system_prompt": "You are a helpful geography assistant."
}

LLM_RESPONSE

Content (JSON) structure:

{
  "response": "...",
  "usage": {...}
}

Attributes (JSON):

{}

Example content (simplified):

{
  "response": "The capital of France is Paris.",
  "usage": {
    "prompt": 15,
    "completion": 7,
    "total": 22
  }
}

LLM_ERROR

Content (JSON) structure:

null

Attributes (JSON):

{}

Example content (simplified):

null (see the error_message column)

Tool usage (plugin lifecycle)

These events track the execution of tools by the agent.

Each event type below is listed with its content (JSON) structure, its attributes (JSON), and example content.

TOOL_STARTING

Content (JSON) structure:

{
  "tool": "...",
  "args": {...}
}

Attributes (JSON):

{}

Example content:

{"tool": "list_datasets", "args": {"project_id": "my-project"}}

TOOL_COMPLETED

Content (JSON) structure:

{
  "tool": "...",
  "result": "..."
}

Attributes (JSON):

{}

Example content:

{"tool": "list_datasets", "result": ["ds1", "ds2"]}

TOOL_ERROR

Content (JSON) structure:

{
  "tool": "...",
  "args": {...}
}

Attributes (JSON):

{}

Example content:

{"tool": "list_datasets", "args": {}}

Agent lifecycle & Generic Events / 智能体生命周期和通用事件

Each event type is listed with its content (JSON) structure:

  • INVOCATION_STARTING: {}
  • INVOCATION_COMPLETED: {}
  • AGENT_STARTING: "You are a helpful agent..."
  • AGENT_COMPLETED: {}
  • USER_MESSAGE_RECEIVED: {"text_summary": "Help me book a flight."}

GCS Offloading Examples (Multimodal & Large Text)

When gcs_bucket_name is configured, large text and multimodal content (images, audio, etc.) are automatically offloaded to GCS. The content column will contain a summary or placeholder, while content_parts contains the object_ref pointing to the GCS URI.

Offloaded Text Example

{
  "event_type": "LLM_REQUEST",
  "content_parts": [
    {
      "part_index": 1,
      "mime_type": "text/plain",
      "storage_mode": "GCS_REFERENCE",
      "text": "AAAA... [OFFLOADED]",
      "object_ref": {
        "uri": "gs://haiyuan-adk-debug-verification-1765319132/2025-12-10/e-f9545d6d/ae5235e6_p1.txt",
        "authorizer": "us.bqml_connection",
        "details": {"gcs_metadata": {"content_type": "text/plain"}}
      }
    }
  ]
}

Offloaded Image Example

{
  "event_type": "LLM_REQUEST",
  "content_parts": [
    {
      "part_index": 2,
      "mime_type": "image/png",
      "storage_mode": "GCS_REFERENCE",
      "text": "[MEDIA OFFLOADED]",
      "object_ref": {
        "uri": "gs://haiyuan-adk-debug-verification-1765319132/2025-12-10/e-f9545d6d/ae5235e6_p2.png",
        "authorizer": "us.bqml_connection",
        "details": {"gcs_metadata": {"content_type": "image/png"}}
      }
    }
  ]
}
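If you want to fetch an offloaded object directly in Python rather than through a signed URL, you can parse the gs:// URI stored in content_parts.object_ref.uri. A minimal sketch, assuming the google-cloud-storage library and a hypothetical URI:

from google.cloud import storage

# Hypothetical URI copied from content_parts.object_ref.uri.
gcs_uri = "gs://your-gcs-bucket-name/2025-12-10/e-f9545d6d/ae5235e6_p2.png"

bucket_name, _, blob_name = gcs_uri.removeprefix("gs://").partition("/")
client = storage.Client()
data = client.bucket(bucket_name).blob(blob_name).download_as_bytes()
print(f"Downloaded {len(data)} bytes from {blob_name}")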

Querying Offloaded Content (Get Signed URLs)

SELECT
  timestamp,
  event_type,
  part.mime_type,
  part.storage_mode,
  part.object_ref.uri AS gcs_uri,
  -- Generate a signed URL to read the content directly (requires connection_id configuration)
  STRING(OBJ.GET_ACCESS_URL(part.object_ref, 'r').access_urls.read_url) AS signed_url
FROM `your-gcp-project-id.your-dataset-id.agent_events_v2`,
UNNEST(content_parts) AS part
WHERE part.storage_mode = 'GCS_REFERENCE'
ORDER BY timestamp DESC
LIMIT 10;

Advanced analysis queries

Trace a specific conversation turn using trace_id

SELECT timestamp, event_type, agent, JSON_VALUE(content, '$.response') as summary
FROM `your-gcp-project-id.your-dataset-id.agent_events_v2`
WHERE trace_id = 'your-trace-id'
ORDER BY timestamp ASC;

Token usage analysis (accessing JSON fields)

SELECT
  AVG(CAST(JSON_VALUE(content, '$.usage.total') AS INT64)) as avg_tokens
FROM `your-gcp-project-id.your-dataset-id.agent_events_v2`
WHERE event_type = 'LLM_RESPONSE';

Querying Multimodal Content (using content_parts and ObjectRef)

SELECT
  timestamp,
  part.mime_type,
  part.object_ref.uri as gcs_uri
FROM `your-gcp-project-id.your-dataset-id.agent_events_v2`,
UNNEST(content_parts) as part
WHERE part.mime_type LIKE 'image/%'
ORDER BY timestamp DESC;

Analyze Multimodal Content with BigQuery Remote Model (Gemini)

SELECT
  logs.session_id,
  -- Get a signed URL for the image
  STRING(OBJ.GET_ACCESS_URL(parts.object_ref, "r").access_urls.read_url) as signed_url,
  -- Analyze the image using a remote model (e.g., gemini-pro-vision)
  AI.GENERATE(
    ('Describe this image briefly. What company logo?', parts.object_ref)
  ) AS generated_result
FROM
  `your-gcp-project-id.your-dataset-id.agent_events_v2` logs,
  UNNEST(logs.content_parts) AS parts
WHERE
  parts.mime_type LIKE 'image/%'
ORDER BY logs.timestamp DESC
LIMIT 1;

Latency Analysis (LLM & Tools)

SELECT
  event_type,
  AVG(CAST(JSON_VALUE(latency_ms, '$.total_ms') AS INT64)) as avg_latency_ms
FROM `your-gcp-project-id.your-dataset-id.agent_events_v2`
WHERE event_type IN ('LLM_RESPONSE', 'TOOL_COMPLETED')
GROUP BY event_type;

Span Hierarchy & Duration Analysis / 跨度层次结构和持续时间分析

SELECT
  span_id,
  parent_span_id,
  event_type,
  timestamp,
  -- Extract duration from latency_ms for completed operations
  CAST(JSON_VALUE(latency_ms, '$.total_ms') AS INT64) as duration_ms,
  -- Identify the specific tool or operation
  COALESCE(
    JSON_VALUE(content, '$.tool'), 
    'LLM_CALL'
  ) as operation
FROM `your-gcp-project-id.your-dataset-id.agent_events_v2`
WHERE trace_id = 'your-trace-id'
  AND event_type IN ('LLM_RESPONSE', 'TOOL_COMPLETED')
ORDER BY timestamp ASC;

Additional resources