This SDK is available in an open beta, and its methods may change. We encourage you to reach out on Slack for help getting set up, and so we can communicate changes.

Overview

The Statsig Node AI SDK lets you manage your prompts, online and offline evals, and debug your LLM applications in production. It depends upon the Statsig Node Server SDK, but provides convenient hooks for AI-specific functionality.
1. Install the SDK

npm install @statsig/statsig-ai
If you have unique setup needs (like a frozen lockfile), take a look at the Node Server SDK docs; the AI SDK will install the Node Server SDK for you if you don't already have it.
2. Initialize the SDK

If you already have a Statsig instance, you can pass it into the SDK. Otherwise, we’ll create an instance for you internally.
Initialize the AI SDK with a Server Secret Key from the Statsig console.
Server Secret Keys should always be kept private. If you expose one, you can disable and recreate it in the Statsig console.
import { StatsigAI } from '@statsig/statsig-ai-node';

const statsigAI = new StatsigAI({ sdkKey: 'YOUR_SERVER_SECRET_KEY' });
await statsigAI.initialize();
Optionally, you can configure StatsigOptions and StatsigAIOptions:
import { StatsigAI, StatsigAIOptions } from '@statsig/statsig-ai-node';
import { StatsigOptions } from '@statsig/statsig-server-core-node';

// Optional: configure core Statsig options
const statsigOptions: StatsigOptions = {
  environment: 'production',
};

// Optional: configure StatsigAI options
const statsigAIOptions: StatsigAIOptions = {
  enableDefaultOtel: true,
};
const statsigAI = new StatsigAI({ sdkKey: 'YOUR_SERVER_SECRET_KEY', statsigOptions }, statsigAIOptions);
await statsigAI.initialize();

// You can also access the underlying Statsig instance for core SDK methods:
const gate = statsigAI.getStatsig().checkGate(statsigUser, 'my_gate');

Using the SDK

Getting a Prompt

Statsig can act as the control plane for your LLM prompts, allowing you to version and change them without deploying code. For more information, see the Prompts documentation.
import { StatsigUser } from '@statsig/statsig-ai-node';
import { OpenAI } from 'openai';

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// Create a user object
const user = new StatsigUser({ userID: 'a-user' });

// Get the prompt
const myPrompt = statsigAI.getPrompt(user, 'my_prompt');

// Get the live version of the prompt
const liveVersion = myPrompt.getLive();

// Get the candidate versions of the prompt
const candidateVersions = myPrompt.getCandidates();

// Use the live version of the prompt in a completion
const response = await openai.chat.completions.create({
  model: liveVersion.getModel({ fallback: 'gpt-4' }), // optional fallback
  temperature: liveVersion.getTemperature(),
  max_tokens: liveVersion.getMaxTokens(),
  messages: [{ role: 'user', content: 'Your prompt here' }],
});
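To compare candidate versions against the live prompt, you can run the same input through each candidate. A minimal sketch, assuming candidate versions expose the same getters as the live version:
// Run the same input through each candidate version for side-by-side comparison
for (const candidate of candidateVersions) {
  const candidateResponse = await openai.chat.completions.create({
    model: candidate.getModel({ fallback: 'gpt-4' }),
    temperature: candidate.getTemperature(),
    max_tokens: candidate.getMaxTokens(),
    messages: [{ role: 'user', content: 'Your prompt here' }],
  });
  // Grade candidateResponse offline, or log it as an eval result
  // (see "Logging Eval Results" below).
}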

Logging Eval Results

When running an online eval, you can log results back to Statsig for analysis. Provide a score between 0 and 1, along with the grader name and any useful metadata (e.g., session IDs). Currently, you must provide the grader manually; future releases will support automated grading options.
import { StatsigUser } from '@statsig/statsig-ai-node';

// Create a user object
const user = new StatsigUser({ userID: 'a-user' });

const livePromptVersion = statsigAI.getPrompt(user, 'my_prompt').getLive();

// Log the results of the eval
statsigAI.logEvalGrade(user, livePromptVersion, 0.5, 'my_grader', {
  session_id: '1234567890',
});

// Flush OTel events, or wait for the configured OTel exporter flush interval
await statsigAI.flush();
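Since grading is manual today, the score can come from any function that maps a model response to a number between 0 and 1. A minimal sketch with a hand-rolled exact-match grader (the exactMatch helper and modelOutput value are illustrative, not part of the SDK):
// Hypothetical grader: 1 if the output matches the expected answer, 0 otherwise
const exactMatch = (output: string, expected: string): number =>
  output.trim() === expected.trim() ? 1 : 0;

const modelOutput = response.choices[0]?.message?.content ?? ''; // from your completion call above
statsigAI.logEvalGrade(user, livePromptVersion, exactMatch(modelOutput, 'Paris'), 'exact_match', {
  session_id: '1234567890',
});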

OpenTelemetry (OTEL)

The AI SDK works with OpenTelemetry for sending telemetry to Statsig. You can either turn on the default OTel integration in the StatsigAIOptions, or set up your own OTel to send traces to Statsig. More advanced OTel configuration and exporter support are on the way.
import { StatsigAI, StatsigAIOptions } from '@statsig/statsig-ai-node';
import { BatchSpanProcessor } from '@opentelemetry/sdk-trace-base';
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';
import { NodeSDK } from '@opentelemetry/sdk-node';


const statsigAIOptions: StatsigAIOptions = {
  enableDefaultOtel: true,
};
const statsigAI = new StatsigAI({ sdkKey: 'YOUR_SERVER_SECRET_KEY' }, statsigAIOptions);
await statsigAI.initialize();

// Flush OTel events
await statsigAI.flush();

// Or, if you are using your own OTel instance, set up the export to Statsig with an OTLP exporter.
const statsigExporter = new OTLPTraceExporter({
  url: 'https://api.statsig.com/otlp/v1/traces',
  headers: {
    'statsig-api-key': 'YOUR_SERVER_SECRET_KEY',
  },
});

const otel = new NodeSDK({
  serviceName: 'my-service',
  spanProcessors: [new BatchSpanProcessor(statsigExporter)],
});
otel.start();

Wrapping OpenAI

The Statsig OpenAI Wrapper automatically adds tracing and log events to your OpenAI SDK usage, giving you in-console visibility with minimal setup.
import { wrapOpenAI, StatsigAI } from '@statsig/statsig-ai-node';
import { OpenAI } from 'openai';

// If you have your own OTel setup, you do not need a StatsigAI instance here.
// But to use the default OTel on StatsigAI, you must initialize the SDK.
const statsigAI = new StatsigAI({ sdkKey: 'YOUR_SERVER_SECRET_KEY' });
await statsigAI.initialize();

const client = wrapOpenAI(
  new OpenAI({
    apiKey: process.env.OPENAI_API_KEY,
  })
);

const response = await client.chat.completions.create({
  model: "gpt-4",
  messages: [{ role: "user", content: "Hello, world!" }],
});

AI Options

Options for configuring the AI SDK. Pass these into the StatsigAI constructor. See Initializing with Options.

Configuration Properties

  • enableDefaultOtel (boolean): Enables Statsig’s default OpenTelemetry setup for automatic tracing and instrumentation. Defaults to false.
  • statsigTracingConfig (StatsigTracingConfig): Configuration object for tracing and instrumentation settings, with the following fields:
      • serviceName (string): Name of the service for tracing identification. Used in OpenTelemetry spans and traces.
      • enableAutoInstrumentation (boolean): Enables automatic instrumentation of common libraries and frameworks for tracing. Defaults to false.
export interface StatsigTracingConfig {
  serviceName?: string;
  enableAutoInstrumentation?: boolean;
}

export interface StatsigAIOptions {
  enableDefaultOtel?: boolean;
  statsigTracingConfig?: StatsigTracingConfig;
}
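For example, a tracing config plugs into the constructor like this (a minimal sketch; the sdkKey field follows the initialization examples above):
import { StatsigAI, StatsigAIOptions } from '@statsig/statsig-ai-node';

const statsigAIOptions: StatsigAIOptions = {
  enableDefaultOtel: true,
  statsigTracingConfig: {
    serviceName: 'my-service', // appears on OpenTelemetry spans and traces
    enableAutoInstrumentation: true, // auto-instrument common libraries
  },
};

const statsigAI = new StatsigAI({ sdkKey: 'YOUR_SERVER_SECRET_KEY' }, statsigAIOptions);
await statsigAI.initialize();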

Using other SDK methods

Whether or not you passed in your own Statsig instance, you can access the underlying Statsig instance from the statsigAI instance and use any of its methods:
// Check a gate value
const gate = statsigAI.getStatsig().checkGate(statsigUser, 'my_gate');

// Log an event
statsigAI.getStatsig().logEvent(statsigUser, 'my_event', { value: 1 });
Refer to the Statsig Node SDK docs for more information on how to use the core Statsig SDK methods, plus information on advanced setup and singleton usage.