Deeper Telemetry
createAILogger covers tokens, model info, and streaming metrics. For deeper observability — per-tool execution timing, success/failure tracking, and total generation wall time — add createEvlogIntegration() on top. It implements the AI SDK's TelemetryIntegration interface and captures data middleware alone cannot see.
Combined with middleware (recommended)
When passed an AILogger, the integration shares its accumulator, so both paths write into the same ai.* fields:
```ts
import { generateText } from 'ai'
import { createAILogger, createEvlogIntegration } from 'evlog/ai'

export default defineEventHandler(async (event) => {
  const log = useLogger(event)
  const ai = createAILogger(log)

  const result = await generateText({
    model: ai.wrap('anthropic/claude-sonnet-4.6'),
    tools: { getWeather, searchDB },
    experimental_telemetry: {
      isEnabled: true,
      integrations: [createEvlogIntegration(ai)],
    },
  })

  return { text: result.text }
})
```
Your wide event now includes per-tool timing:
```json
{
  "ai": {
    "calls": 2,
    "steps": 2,
    "model": "claude-sonnet-4.6",
    "provider": "anthropic",
    "inputTokens": 3500,
    "outputTokens": 800,
    "totalTokens": 4300,
    "toolCalls": ["getWeather", "searchDB"],
    "tools": [
      { "name": "getWeather", "durationMs": 150, "success": true },
      { "name": "searchDB", "durationMs": 45, "success": true }
    ],
    "totalDurationMs": 2340,
    "msToFirstChunk": 180,
    "msToFinish": 2100,
    "tokensPerSecond": 380
  }
}
```
Standalone (without middleware)
If your model is already wrapped (e.g. by another middleware), pass the request logger directly:
```ts
import { generateText } from 'ai'
import { createEvlogIntegration } from 'evlog/ai'

const integration = createEvlogIntegration(log)

const result = await generateText({
  model: somePreWrappedModel,
  experimental_telemetry: {
    isEnabled: true,
    integrations: [integration],
  },
})
```
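In this standalone setup, the middleware-sourced fields (tokens, model info, streaming metrics) are absent, and only the integration-captured data lands in the event. A sketch of the resulting shape, assuming nothing else populates ai.*:

```json
{
  "ai": {
    "tools": [
      { "name": "getWeather", "durationMs": 150, "success": true }
    ],
    "totalDurationMs": 2340
  }
}
```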
What the integration captures
| Data | Source | Description |
|---|---|---|
| ai.tools[] | onToolCallFinish | Per-tool name, durationMs, success, and error (if failed) |
| ai.totalDurationMs | onStart → onFinish | Total wall time from generation start to completion |
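The hooks in the table describe the integration's whole job: record each tool call as it finishes and measure wall time from start to finish. A minimal sketch of that accumulation pattern — the hook names come from the table above, but the payload shapes and the exact TelemetryIntegration signatures are assumptions for illustration, not the AI SDK's real types:

```ts
// Illustrative only: what an integration like createEvlogIntegration accumulates.
// Hook names match the capture table; payload shapes are assumed.
type ToolRecord = { name: string; durationMs: number; success: boolean; error?: string }
type Sink = { tools: ToolRecord[]; totalDurationMs?: number }

function createSketchIntegration(sink: Sink) {
  let startedAt = 0
  return {
    // Called once when generation begins.
    onStart() { startedAt = Date.now() },
    // Called after each tool execution, success or failure.
    onToolCallFinish(info: ToolRecord) { sink.tools.push(info) },
    // Called when generation completes; records total wall time.
    onFinish() { sink.totalDurationMs = Date.now() - startedAt },
  }
}
```

The real integration writes into the shared AILogger accumulator rather than a plain object, which is what lets the middleware and the integration populate the same ai.* fields.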
The middleware captures tokens, model info, and streaming metrics. The integration captures tool execution timing. Together, they give you complete AI observability.
Composability
ai.wrap() works with models that are already wrapped by other tools. If you use supermemory, guardrails middleware, or any other model wrapper, pass the wrapped model to ai.wrap():
```ts
import { createAILogger } from 'evlog/ai'
import { withSupermemory } from '@supermemory/tools/ai-sdk'
import { createGateway } from 'ai'

const gateway = createGateway({ ... })
const ai = createAILogger(log)

const base = gateway('anthropic/claude-sonnet-4.6')
const model = ai.wrap(withSupermemory(base, 'your-org-id', { mode: 'full' }))
```
For explicit middleware composition, use createAIMiddleware to get the raw middleware and compose it yourself via wrapLanguageModel:
```ts
import { createAIMiddleware } from 'evlog/ai'
import { wrapLanguageModel } from 'ai'

const model = wrapLanguageModel({
  model: base,
  middleware: [createAIMiddleware(log, { toolInputs: true }), otherMiddleware],
})
```
createAIMiddleware returns the same middleware that createAILogger uses internally. The difference: createAIMiddleware does not include captureEmbed (embedding models don't use middleware). Use createAILogger for the full API, createAIMiddleware when you need explicit middleware ordering.
Metadata
Read AI metadata from your handler — persist it, surface it to end-users, bill against it, or stream incremental progress to the client.