AI SDK

Access Metadata

Read AI metadata from your handler — persist it, surface it to end-users, bill against it, or stream incremental progress to the client.

The wide event already contains the full ai metadata, but you often want the same data inside your handler as well.

AILogger exposes three methods for that, with no need to touch internal state.

getMetadata() — final snapshot

Returns a structured AIMetadata object that mirrors the ai field on the wide event. Safe to call at any point, including after the run completes or inside the AI SDK's onFinish:

server/api/chat.post.ts
import { useLogger } from 'evlog'
import { createAILogger } from 'evlog/ai'
import { generateText } from 'ai'

export default defineEventHandler(async (event) => {
  const log = useLogger(event)
  const ai = createAILogger(log, {
    cost: { 'claude-sonnet-4.6': { input: 3, output: 15 } },
  })

  await generateText({
    model: ai.wrap('anthropic/claude-sonnet-4.6'),
    prompt: 'Summarize this document',
  })

  const metadata = ai.getMetadata()

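  // Persist per-request usage with your own storage layer (db here is illustrative)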
  await db.aiRuns.insert({
    userId: event.context.userId,
    model: metadata.model,
    inputTokens: metadata.inputTokens,
    outputTokens: metadata.outputTokens,
    estimatedCost: metadata.estimatedCost,
    finishReason: metadata.finishReason,
    responseId: metadata.responseId,
  })

  return { ok: true }
})

The snapshot is a fresh copy: mutating it never affects the underlying state or subsequent calls.
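For example, zeroing a field on one snapshot leaves later reads untouched (a quick sketch, assuming the snapshot's fields are plain writable properties):

const snapshot = ai.getMetadata()
snapshot.inputTokens = 0

// Only the local copy changed; the logger still reports the real totals
console.log(ai.getMetadata().inputTokens)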

getEstimatedCost() — quick cost check

Convenience for getMetadata().estimatedCost. Returns the cost in dollars, or undefined if no cost map was provided or the model is not in the map.

const ai = createAILogger(log, {
  cost: { 'claude-sonnet-4.6': { input: 3, output: 15 } },
})

await generateText({ model: ai.wrap('anthropic/claude-sonnet-4.6'), prompt })

const cost = ai.getEstimatedCost()
console.log(`This call cost $${cost?.toFixed(4)}`)

onUpdate(callback) — incremental updates

Subscribe to metadata updates. The callback fires every time the underlying state flushes:

  • Once per step in multi-step agent runs
  • Once per captureEmbed call
  • On model errors
  • On createEvlogIntegration's onFinish

Each invocation receives a fresh snapshot. Returns an unsubscribe function. Subscriber errors are isolated and never break the AI flow.

server/api/agent.post.ts
import { ToolLoopAgent, createAgentUIStreamResponse, stepCountIs } from 'ai'
import { useLogger } from 'evlog'
import { createAILogger } from 'evlog/ai'

export default defineEventHandler(async (event) => {
  const log = useLogger(event)
  const { messages } = await readBody(event)
  const ai = createAILogger(log)

  ai.onUpdate((metadata) => {
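    // Stream incremental progress to the client (pushToClient is an app-specific helper, e.g. SSE)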
    pushToClient(event, {
      type: 'ai-progress',
      step: metadata.steps,
      tokens: metadata.totalTokens,
      cost: metadata.estimatedCost,
    })
  })

  const agent = new ToolLoopAgent({
    model: ai.wrap('anthropic/claude-sonnet-4.6'),
    tools: { searchWeb, queryDatabase },
    stopWhen: stepCountIs(5),
  })

  return createAgentUIStreamResponse({ agent, uiMessages: messages })
})

To stop receiving updates, call the returned unsubscribe function:

const off = ai.onUpdate((metadata) => { /* ... */ })
// later
off()
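Because subscriber errors are isolated, a listener that throws will not abort the model call. A minimal sketch of that guarantee:

ai.onUpdate(() => {
  // Simulate a buggy listener; the error is caught internally and never reaches the AI flow
  throw new Error('broken progress handler')
})

// The call still completes normally
await generateText({ model: ai.wrap('anthropic/claude-sonnet-4.6'), prompt })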

AIMetadata shape

AIMetadata is a public type alias for the snapshot returned by getMetadata() and passed to onUpdate listeners. It has the same shape as the ai field on the wide event.

import type { AIMetadata, AIMetadataListener } from 'evlog/ai'

function handleProgress(metadata: AIMetadata) {
  console.log(`${metadata.calls} calls, $${metadata.estimatedCost ?? 0}`)
}

const listener: AIMetadataListener = handleProgress
ai.onUpdate(listener)

Captured Data Reference

Every field that may show up under ai.*:

Wide event field | Source | Description
ai.calls | Call count | Number of AI calls in this request
ai.model | response.modelId | Model that served the response
ai.models | All model IDs | Array of all models used (only when > 1)
ai.provider | model.provider | Provider (anthropic, openai, google, etc.)
ai.inputTokens | usage.inputTokens.total | Total input tokens across all calls
ai.outputTokens | usage.outputTokens.total | Total output tokens across all calls
ai.totalTokens | Computed | inputTokens + outputTokens
ai.cacheReadTokens | usage.inputTokens.cacheRead | Tokens served from prompt cache
ai.cacheWriteTokens | usage.inputTokens.cacheWrite | Tokens written to prompt cache
ai.reasoningTokens | usage.outputTokens.reasoning | Reasoning tokens (extended thinking)
ai.finishReason | finishReason.unified | Why generation ended (stop, tool-calls, etc.)
ai.toolCalls | Content / stream chunks | string[] of tool names by default, or Array<{ name, input }> when toolInputs is enabled
ai.responseId | response.id | Provider-assigned response ID (e.g. Anthropic's msg_...)
ai.steps | Step count | Number of LLM calls (only when > 1)
ai.stepsUsage | Per-step accumulation | Per-step token and tool call breakdown (only when > 1 step)
ai.msToFirstChunk | Stream timing | Time to first text chunk (streaming only)
ai.msToFinish | Stream timing | Total stream duration (streaming only)
ai.tokensPerSecond | Computed | Output tokens per second (streaming only)
ai.error | Error capture | Error message if a model call fails
ai.tools | TelemetryIntegration | Per-tool { name, durationMs, success, error? } (requires createEvlogIntegration)
ai.totalDurationMs | TelemetryIntegration | Total generation wall time (requires createEvlogIntegration)
ai.embedding | captureEmbed | Embedding metadata: { model?, tokens, dimensions?, count? }
ai.estimatedCost | Computed | Estimated cost in dollars (requires cost option)
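Because the snapshot mirrors these fields, derived metrics are easy to compute inside the handler. A small sketch that turns a snapshot into a cache hit rate and a per-output-token cost (field presence depends on the run, hence the guards):

const m = ai.getMetadata()

// Share of prompt tokens that were served from the provider's prompt cache
const cacheHitRate = m.inputTokens
  ? (m.cacheReadTokens ?? 0) / m.inputTokens
  : 0

// Rough cost per output token; only defined when a cost map was configured
const costPerOutputToken = m.estimatedCost && m.outputTokens
  ? m.estimatedCost / m.outputTokens
  : undefined

console.log({ cacheHitRate, costPerOutputToken })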