Instrument your AI application with full tracing control using the TypeScript or Python SDK
The Adaline SDK is the recommended way to integrate your AI application with Adaline. Beyond sending traces and spans, the SDK gives you deployment management with automatic caching, smart buffering with batched flushes, built-in retries with exponential backoff, and health monitoring — all production-ready out of the box.
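The retry behavior described above can be sketched in a few lines. This is an illustrative pattern, not the SDK's actual internals, and the base delay and cap values are assumptions:

```typescript
// Illustrative sketch of retry with exponential backoff; base and cap
// values are assumptions, not the SDK's real defaults.
function backoffDelayMs(attempt: number, baseMs = 250, maxMs = 8_000): number {
  // 250ms, 500ms, 1000ms, ... capped at maxMs
  return Math.min(baseMs * 2 ** attempt, maxMs);
}

async function withRetries<T>(fn: () => Promise<T>, maxAttempts = 5): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt < maxAttempts - 1) {
        // Wait before the next attempt, doubling the delay each time
        await new Promise((resolve) => setTimeout(resolve, backoffDelayMs(attempt)));
      }
    }
  }
  throw lastError;
}
```

Doubling the delay between attempts spreads out retries during an outage, and the cap keeps the worst-case wait bounded.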
import { Adaline } from "@adaline/client";

const adaline = new Adaline({ apiKey: "your-api-key" });
from adaline.main import Adaline

adaline = Adaline(api_key="your-api-key")
Set the ADALINE_API_KEY environment variable and omit the apiKey parameter to avoid hardcoding secrets. The SDK reads from this variable automatically.
The SDK can fetch deployed prompt configurations — including the model, provider settings, messages, tools, and variables — so your application always uses the latest version without redeploying code.
For long-running services, use initLatestDeployment to set up a cached deployment that refreshes automatically (default every 60 seconds) in the background. When you deploy a new prompt version in Adaline, your application picks it up without a restart.
TypeScript
Python
const controller = await adaline.initLatestDeployment({
  promptId: "your-prompt-id",
  deploymentEnvironmentId: "your-environment-id",
});

// Use the cached deployment in request handlers
const deployment = await controller.get();

// Force a fresh fetch (bypasses cache)
const fresh = await controller.get(true);

// Check background refresh health
const status = controller.backgroundStatus();
// { stopped: false, consecutiveFailures: 0, lastError: null, lastRefreshed: Date }

// Stop the background refresh when shutting down
controller.stop();
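The caching pattern behind initLatestDeployment can be sketched as a small controller that serves from memory and refreshes in the background. The class below is an illustrative reimplementation, not the SDK's code; the fetch function and deployment shape are assumptions:

```typescript
// Sketch of the cache-and-refresh pattern: serve a cached value, refresh it
// on an interval, and keep serving the stale copy if a refresh fails.
// Illustrative only; not the SDK's actual implementation.
class CachedDeploymentController<T> {
  private cached: T | null = null;
  private timer: ReturnType<typeof setInterval> | null = null;
  public consecutiveFailures = 0;

  constructor(private fetchLatest: () => Promise<T>, refreshMs = 60_000) {
    this.timer = setInterval(() => void this.refresh(), refreshMs);
  }

  private async refresh(): Promise<void> {
    try {
      this.cached = await this.fetchLatest();
      this.consecutiveFailures = 0;
    } catch {
      this.consecutiveFailures++; // keep serving the stale copy
    }
  }

  // Return the cached value; pass fresh = true to bypass the cache.
  async get(fresh = false): Promise<T> {
    if (fresh || this.cached === null) await this.refresh();
    if (this.cached === null) throw new Error("no deployment available");
    return this.cached;
  }

  // Stop the background refresh when shutting down.
  stop(): void {
    if (this.timer) clearInterval(this.timer);
    this.timer = null;
  }
}
```

Serving the stale copy on a failed refresh is what lets a long-running service survive transient API outages without dropping requests.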
The monitor manages the lifecycle of traces and spans — buffering them in memory, batching them together, and flushing them to the Adaline API on a timer or when the buffer fills up.
Each operation inside a trace is a span. Spans carry a content type that tells Adaline what kind of operation it represents — an LLM call, a tool execution, a vector retrieval, and more.
Spans can contain child spans to model hierarchical workflows — an agent span containing tool call spans, or a RAG span containing embedding and retrieval sub-spans:
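A nested workflow might look like the structure below. The span shape here is hypothetical, used only to illustrate the hierarchy; apart from content.type (described next), the field names are assumptions, and the type values are placeholders:

```typescript
// Hypothetical span shape to illustrate parent/child nesting; the SDK's
// real span fields may differ. The content.type values are placeholders.
interface Span {
  name: string;
  content: { type: string; input: string; output: string };
  children: Span[];
}

const retrievalSpan: Span = {
  name: "vector-retrieval",
  content: {
    type: "retrieval", // placeholder type name
    input: JSON.stringify({ query: "refund policy" }),
    output: JSON.stringify({ documents: 3 }),
  },
  children: [],
};

const agentSpan: Span = {
  name: "agent-step",
  content: {
    type: "model", // placeholder type name
    input: JSON.stringify({ messages: [] }),
    output: JSON.stringify({ text: "..." }),
  },
  children: [retrievalSpan], // the retrieval is a child of the agent step
};
```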
The content.type field tells Adaline what kind of operation a span represents. Each type carries input and output as JSON strings, plus type-specific fields.
LLM chat completions and text generation. Captures the provider, model, cost, and optionally an expected output for evaluation. For the best experience, stringify the exact request payload you send to your AI provider as input and the full response as output. When you use a supported provider, Adaline automatically extracts token usage, calculates cost, and surfaces model metadata. See Span content: input and output for full details and examples. You can also use Adaline's own content schema for input and output, although this is more advanced and requires custom transformations.
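Stringifying the raw request and response might look like this. The payload below is an OpenAI-style chat request used for illustration; the content.type value is a placeholder, and only the stringify-as-input/output pattern comes from the text above:

```typescript
// Sketch: capture the exact provider request as content.input and the full
// response as content.output, both as JSON strings. Payload shapes are
// illustrative (OpenAI-style); the type value is a placeholder.
const request = {
  model: "gpt-4o",
  messages: [{ role: "user", content: "Summarize this document." }],
};

const response = {
  choices: [{ message: { role: "assistant", content: "Here is a summary..." } }],
  usage: { prompt_tokens: 12, completion_tokens: 9 },
};

const content = {
  type: "model", // placeholder; use the SDK's LLM content type
  input: JSON.stringify(request),   // exact payload sent to the provider
  output: JSON.stringify(response), // full provider response
};
```

Passing the untouched payloads is what lets Adaline extract token usage and cost automatically for supported providers.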
Attach variable values to spans so they flow into continuous evaluations and can be captured into datasets. Variables are set on the span’s content object (specifically on Model or ModelStream content types), not on logSpan() directly:
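A sketch of the shape this implies is below. The source states only that variables live on the span's content object (for Model or ModelStream content); the `variables` key name and the other fields shown are assumptions:

```typescript
// Sketch: variable values attached to the span's content object rather than
// passed to logSpan(). The `variables` key name and surrounding fields are
// assumptions for illustration.
const spanContent = {
  type: "model", // placeholder for the SDK's Model content type
  input: JSON.stringify({ prompt: "Translate {{text}} to French" }),
  output: JSON.stringify({ text: "Bonjour" }),
  variables: { text: "Hello" }, // flows into evaluations and datasets
};
```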
In production, handle process signals to flush remaining data before the process exits:
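In Node.js that looks roughly like the following. The stub monitor stands in for the SDK monitor; only the flush() method is assumed from the text above:

```typescript
// Sketch of flushing buffered data on shutdown signals (Node.js process API).
// The stub monitor stands in for the SDK monitor; only flush() is assumed.
const monitor = {
  flush: async (): Promise<void> => {
    /* send any remaining buffered traces and spans */
  },
};

async function shutdown(signal: string): Promise<void> {
  console.error(`received ${signal}, flushing remaining traces...`);
  await monitor.flush(); // drain the buffer before exiting
  process.exit(0);
}

process.on("SIGTERM", () => void shutdown("SIGTERM"));
process.on("SIGINT", () => void shutdown("SIGINT"));
```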
Serverless environments (AWS Lambda, Vercel Functions, Cloudflare Workers, etc.): The SDK flushes buffered traces and spans on a background interval, but serverless functions can exit before the next flush fires. Always call await monitor.flush() explicitly (the call is the same in TypeScript and Python) before your handler returns to ensure nothing is lost.