This SDK is available in an open beta, and its methods may change. We encourage you to reach out on Slack for help getting set up, and so we can communicate changes.
Overview
The Statsig Python AI SDK lets you manage your prompts, run online and offline evals, and debug your LLM applications in production. It depends on the Statsig Python Server SDK, but provides convenient hooks for AI-specific functionality.
Install the SDK
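The install command is not shown here. As a sketch, assuming the package is published on PyPI as statsig-python-ai (the package name is an assumption; check the Statsig console or README for the exact one):

```bash
# Hypothetical package name; confirm against the official install instructions.
pip install statsig-python-ai
```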
Initialize the SDK
For initialization requirements in forking and WSGI servers, see the Statsig Python Server SDK docs.
Initialization covers two cases: you don't already use the Statsig Server SDK, or you already have a Statsig instance to reuse.
Initialize the AI SDK with a Server Secret Key from the Statsig console.
Server Secret Keys should always be kept private. If you expose one, you can disable and recreate it in the Statsig console.
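As a minimal sketch of initialization, assuming the AI SDK exposes a StatsigAI entry point that mirrors the server SDK's initialize-and-wait pattern (the module and class names below are assumptions, not confirmed API):

```python
import os

# Hypothetical import path; check the SDK's README for the real module name.
from statsig_ai import StatsigAI

# Read the Server Secret Key from the environment to keep it out of source control.
statsig_ai = StatsigAI(os.environ["STATSIG_SERVER_SECRET_KEY"])

# Block until initialization completes so the first requests see real config
# values instead of defaults (mirrors the server SDK's initialize().wait()).
statsig_ai.initialize().wait()
```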
Initializing With Options
Optionally, you can configure StatsigOptions for your Statsig instance:
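For example, a sketch assuming StatsigOptions behaves like the server SDK's options object, where fields such as environment tag outgoing events (the field and constructor usage here are illustrative):

```python
from statsig_ai import StatsigAI, StatsigOptions  # hypothetical import path

# Tag all events from this instance with a non-production environment tier.
options = StatsigOptions()
options.environment = "staging"

statsig_ai = StatsigAI("server-secret-key", options)
statsig_ai.initialize().wait()
```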
Using the SDK
Getting a Prompt
Statsig can act as the control plane for your LLM prompts, allowing you to version and change them without deploying code. For more information, see the Prompts documentation.
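As a sketch of what fetching a versioned prompt could look like, assuming a get_prompt method keyed by prompt name (the method and the returned object's fields are assumptions; see the Prompts documentation for the real API):

```python
# Continuing from the initialization sketch above; hypothetical method and fields.
prompt = statsig_ai.get_prompt("support-bot-system-prompt")

# The returned object carries the live version of the prompt, so editing it
# in the Statsig console takes effect without a code deploy.
messages = [{"role": "system", "content": prompt.text}]
```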
Logging Eval Results
When running an online eval, you can log results back to Statsig for analysis. Provide a score between 0 and 1, along with the grader name and any useful metadata (e.g., session IDs). Currently, you must provide the grader manually; future releases will support automated grading options.
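A sketch of logging an online eval result, assuming a logging method that accepts the 0-1 score, the grader name, and a metadata dict (the method name and parameters are illustrative, not confirmed API):

```python
# Continuing from the sketches above; hypothetical method and signature.
statsig_ai.log_eval_result(
    prompt=prompt,                        # the prompt version being evaluated
    score=0.85,                           # must be between 0 and 1
    grader="relevance-llm-judge",         # grader name, supplied manually for now
    metadata={"session_id": "sess_123"},  # any context useful for analysis
)
```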
OpenTelemetry (OTEL)
OTel is not supported in the Python AI SDK yet. Coming soon!
Once available, the AI SDK will work with OpenTelemetry for sending telemetry to Statsig: you will be able to either turn on the default OTel integration in the StatsigAIOptions, or set up your own OTel pipeline to send traces to Statsig. More advanced OTel configuration and exporter support are also on the way.