This SDK is available in an open beta, and its methods may change. We encourage you to reach out on Slack for help getting set up, and so that we can communicate changes to you.
The Statsig Node AI SDK lets you manage your prompts, run online and offline evals, and debug your LLM applications in production. It depends on the Statsig Node Server SDK, but provides convenient hooks for AI-specific functionality.
1. Install the SDK
npm install @statsig/statsig-ai-node
If you have unique setup needs, like a frozen lockfile, take a look at the Node Server SDK docs; the AI SDK will install the Node Server SDK if you don't already have it.
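For example, if your CI enforces a frozen lockfile, install locally first so the lockfile picks up the new dependency, then let CI install from it (a sketch assuming npm; adjust for pnpm or yarn):

# Install locally so package-lock.json records the new dependency
npm install @statsig/statsig-ai-node

# Commit the updated lockfile; CI can then install against it
npm ci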
2. Initialize the SDK
If you already have a Statsig instance, you can pass it into the SDK. Otherwise, we’ll create an instance for you internally.
Initialize the AI SDK with a Server Secret Key from the Statsig console. Server Secret Keys should always be kept private; if you expose one, you can disable and recreate it in the Statsig console.
If you don't use Statsig elsewhere:

import { StatsigAI } from '@statsig/statsig-ai-node';

const statsigAI = new StatsigAI({ sdkKey: 'YOUR_SERVER_SECRET_KEY' });
await statsigAI.initialize();
If you already have a Statsig instance, or want to configure the underlying one:

import { StatsigAI, StatsigAIOptions } from '@statsig/statsig-ai-node';
import { StatsigOptions } from '@statsig/statsig-server-core-node';

// If you want to configure any Statsig options, this is optional:
const statsigOptions: StatsigOptions = {
  environment: 'production',
};

// If you would like to configure any StatsigAI options, this is optional:
const statsigAIOptions: StatsigAIOptions = {
  enableDefaultOtel: true,
};

const statsigAI = new StatsigAI({ sdkKey: 'YOUR_SERVER_SECRET_KEY', statsigOptions }, statsigAIOptions);
await statsigAI.initialize();

// If you would like to use any Statsig methods, you can access the
// Statsig instance from the StatsigAI instance:
const gate = statsigAI.getStatsig().checkGate(statsigUser, 'my_gate');
Prompts

Statsig can act as the control plane for your LLM prompts, allowing you to version and change them without deploying code. For more information, see the Prompts documentation.
import { StatsigUser } from '@statsig/statsig-ai-node';
import { OpenAI } from 'openai';

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// Create a user object
const user = new StatsigUser({ userID: 'a-user' });

// Get the prompt
const myPrompt = statsigAI.getPrompt(user, 'my_prompt');

// Use the live version of the prompt
const liveVersion = myPrompt.getLive();

// Get the candidate versions of the prompt
const candidateVersions = myPrompt.getCandidates();

// Use the live version of the prompt in a completion
const response = await openai.chat.completions.create({
  model: liveVersion.getModel({ fallback: 'gpt-4' }), // optional fallback
  temperature: liveVersion.getTemperature(),
  max_tokens: liveVersion.getMaxTokens(),
  messages: [{ role: 'user', content: 'Your prompt here' }],
});
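You can run the same input through each candidate version to compare outputs against the live version. A minimal sketch, assuming the objects returned by getCandidates() mirror the live version's API:

// Run the same input through every candidate version of the prompt
for (const candidate of candidateVersions) {
  const candidateResponse = await openai.chat.completions.create({
    model: candidate.getModel({ fallback: 'gpt-4' }),
    temperature: candidate.getTemperature(),
    max_tokens: candidate.getMaxTokens(),
    messages: [{ role: 'user', content: 'Your prompt here' }],
  });
  // ...compare candidateResponse against the live version's output
}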
Online Evals

When running an online eval, you can log results back to Statsig for analysis. Provide a score between 0 and 1, along with the grader name and any useful metadata (e.g., session IDs). Currently, you must provide the grade manually; future releases will support automated grading options.
import { StatsigUser } from '@statsig/statsig-ai-node';

// Create a user object
const user = new StatsigUser({ userID: 'a-user' });

const livePromptVersion = statsigAI.getPrompt(user, 'my_prompt').getLive();

// Log the results of the eval
statsigAI.logEvalGrade(user, livePromptVersion, 0.5, 'my_grader', {
  session_id: '1234567890',
});

// Flush OTel events, or wait for the configured OTel exporter flush interval
await statsigAI.flush();
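The score can come from any grading logic you run in your application. A minimal sketch with a hypothetical exact-match grader (gradeExactMatch and modelOutput are illustrative, not part of the SDK):

// Hypothetical grader: 1 if the completion matches the expected answer, else 0
const gradeExactMatch = (output: string, expected: string): number =>
  output.trim() === expected.trim() ? 1 : 0;

const modelOutput = '...'; // the completion text returned by your LLM call
statsigAI.logEvalGrade(user, livePromptVersion, gradeExactMatch(modelOutput, 'expected answer'), 'exact_match', {
  session_id: '1234567890',
});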
OpenTelemetry

The AI SDK works with OpenTelemetry for sending telemetry to Statsig. You can either turn on the default OTel integration in the StatsigAIOptions, or set up your own OTel pipeline to send traces to Statsig. More advanced OTel configuration and exporter support are on the way.
import { StatsigAI, StatsigAIOptions } from '@statsig/statsig-ai-node';
import { BatchSpanProcessor } from '@opentelemetry/sdk-trace-base';
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';
import { NodeSDK } from '@opentelemetry/sdk-node';

// Option 1: turn on the default OTel integration
const statsigAIOptions: StatsigAIOptions = {
  enableDefaultOtel: true,
};

const statsigAI = new StatsigAI({ sdkKey: 'YOUR_SERVER_SECRET_KEY' }, statsigAIOptions);
await statsigAI.initialize();

// Flush OTel events
await statsigAI.flush();

// Option 2: if you are using your own OTel instance, set up the export
// to Statsig with an OTLP exporter.
const statsigExporter = new OTLPTraceExporter({
  url: 'https://api.statsig.com/otlp/v1/traces',
  headers: {
    'statsig-api-key': 'YOUR_SERVER_SECRET_KEY',
  },
});

const otel = new NodeSDK({
  serviceName: 'my-service',
  spanProcessors: [new BatchSpanProcessor(statsigExporter)],
});
otel.start();
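If you run your own OTel pipeline, make sure buffered spans are flushed before the process exits so they reach Statsig. A minimal sketch, assuming the otel NodeSDK instance created above:

// Flush buffered spans and shut the pipeline down cleanly on termination
process.on('SIGTERM', async () => {
  await otel.shutdown(); // NodeSDK.shutdown() flushes registered span processors
  process.exit(0);
});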
OpenAI Wrapper

The Statsig OpenAI Wrapper automatically adds tracing and log events to your OpenAI SDK usage, giving you in-console visibility with minimal setup.
import { wrapOpenAI, StatsigAI } from '@statsig/statsig-ai-node';
import { OpenAI } from 'openai';

// If you have your own OTel, you do not need a StatsigAI instance here.
// But if you want to use the default OTel on StatsigAI, you need to initialize the SDK.
const statsigAI = new StatsigAI({ sdkKey: 'YOUR_SERVER_SECRET_KEY' });
await statsigAI.initialize();

const client = wrapOpenAI(
  new OpenAI({
    apiKey: process.env.OPENAI_API_KEY,
  })
);

const response = await client.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'Hello, world!' }],
});
Core Statsig Methods

Whether or not you passed in a Statsig instance, you can access the underlying Statsig instance from the StatsigAI instance and use its many methods:
import { StatsigUser } from '@statsig/statsig-ai-node';

const statsigUser = new StatsigUser({ userID: 'a-user' });

// Check a gate value
const gate = statsigAI.getStatsig().checkGate(statsigUser, 'my_gate');

// Log an event
statsigAI.getStatsig().logEvent(statsigUser, 'my_event', { value: 1 });
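A common pattern is to combine the two, for example gating a candidate prompt version behind a feature gate. A minimal sketch using only the methods shown above (the gate and event names are illustrative):

// Roll out a candidate prompt version behind a feature gate
const prompt = statsigAI.getPrompt(statsigUser, 'my_prompt');
const useCandidate = statsigAI.getStatsig().checkGate(statsigUser, 'use_candidate_prompt');

// Fall back to the live version if the gate is off or no candidates exist
const version = useCandidate
  ? (prompt.getCandidates()[0] ?? prompt.getLive())
  : prompt.getLive();

// ...use `version` in a completion, as shown in the Prompts section
statsigAI.getStatsig().logEvent(statsigUser, 'prompt_version_selected', {
  gate_passed: String(useCandidate),
});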
Refer to the Statsig Node SDK docs for more information on how to use the core Statsig SDK methods, plus information on advanced setup and singleton usage.