
Android On Device Evaluation

Getting Started

The following will outline how to get up and running with Statsig for Android On Device Evaluation.

:::warning
On-device evaluation SDKs are for Enterprise and Pro Tier companies only. If you are trying to follow these instructions but do not meet that criteria, some of the setup steps may not work.
:::

The AndroidOnDeviceEvaluations SDK uses a different paradigm than its precomputed counterpart (the Android Precomputed Evaluations SDK). It is a Client SDK that behaves more like a Server SDK: rather than requiring a user up front, you can check gates/configs/experiments for any set of user properties, because the SDK downloads a complete representation of your project and evaluates checks in real time.

Pros

  • No need for a network request when changing user properties - just check the gate/config/experiment locally
  • Can bring your own CDN or synchronously initialize with a preloaded project definition
  • Lower latency: configs are cached at the edge and downloaded as-is, rather than evaluated per user (per-user responses cannot be cached as effectively)

Cons

  • Entire project definition is available client side - the names and configurations of all experiments and feature flags accessible by your client key are exposed.
  • Payload size is strictly larger than what is required for the Android Precomputed Evaluations SDK.
  • Evaluation performance is slightly slower - rather than looking up a precomputed value, the SDK must actually evaluate targeting conditions and make an allocation decision
  • Does not support ID list segments with > 1000 IDs
  • Does not support IP or User Agent based checks (Browser Version/Name, OS Version/Name, IP, Country)

Create an Account

To work with the SDK, you will need a Statsig account. If you don't yet have an account, go ahead and sign up for a free account now.

You could skip this for now, but you will need an SDK key and some gates/experiments to use with the SDK in just a minute.

Installation

You can install the SDK using JitPack. See the latest version and installation steps at https://jitpack.io/#statsig-io/android-local-eval.
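
A minimal Gradle setup (Kotlin DSL) might look like the following sketch. The artifact coordinates follow JitPack's standard convention for the statsig-io/android-local-eval repository; confirm the exact coordinates and latest version tag on the JitPack page linked above before using them.

// settings.gradle.kts (or your root build.gradle.kts, depending on your project layout)
dependencyResolutionManagement {
    repositories {
        google()
        mavenCentral()
        maven { url = uri("https://jitpack.io") }
    }
}

// app/build.gradle.kts - replace <latest-version> with the version shown on JitPack
dependencies {
    implementation("com.github.statsig-io:android-local-eval:<latest-version>")
}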

Initialize the SDK

Initialize the SDK using a Client SDK key from the "API Keys" tab on the Statsig console. When creating the key, or using an existing key, you will need to add the "Allow Download Config Specs" scope. By default, client keys are not able to download the project definition needed for on-device evaluation. You must opt in to allow your client key to access your full project definition on our CDN.

When creating a new client key, select "Allow Download Config Specs"

Add DCS Scope to Existing Key

:::caution
Do NOT embed a Server Secret Key in client side applications.
:::

import com.statsig.androidsdk.*;
...

public class MainActivity extends AppCompatActivity implements IStatsigCallback {

    ...
    StatsigOptions options = new StatsigOptions();
    options.setTier(Tier.PRODUCTION);
    StatsigUser user = new StatsigUser("UUID");
    Statsig.initializeAsync(app, "client-key", user, this, options);
    ...
    // SDK is usable, but values will be from the cache or defaults (false for gates, {} for configs)
    // Once onStatsigInitialize fires, the latest values are available

    @Override
    public void onStatsigInitialize() {
        // SDK is initialized and has the most up to date values
    }

    @Override
    public void onStatsigUpdateUser() {
        // User has been updated and values have been refetched for the new user
    }
}

Working with the SDK

Checking a Gate

Now that your SDK is initialized, let's check a Feature Gate. Feature Gates can be used to create logic branches in code that can be rolled out to different users from the Statsig Console. Gates are always CLOSED or OFF (think return false;) by default.

StatsigUser user = new StatsigUser("user_id");
if (Statsig.checkGate(user, "new_homepage_design")) {
    // Gate is on, show new home page
} else {
    // Gate is off, show old home page
}

Reading a Dynamic Config

Feature Gates can be very useful for simple on/off switches, with optional but advanced user targeting. However, if you want to send a different set of values (strings, numbers, etc.) to your clients based on specific user attributes, e.g. country, Dynamic Configs can help you with that. The API is very similar to Feature Gates, but you get an entire JSON object you can configure on the server, and you can fetch typed parameters from it. For example:

StatsigUser user = new StatsigUser("user_id");
DynamicConfig config = Statsig.getConfig(user, "awesome_product_details");

// The 2nd parameter is the default value to be used in case the given parameter name does not exist on
// the Dynamic Config object. This can happen when there is a typo, or when the user is offline and the
// value has not been cached on the client.
String itemName = config.getString("product_name", "Awesome Product v1");
Double price = config.getDouble("price", 10.0);
Boolean shouldDiscount = config.getBoolean("discount", false);

Getting a Layer/Experiment

Then we have Layers/Experiments, which you can use to run A/B/n experiments. We offer two APIs, but we recommend the use of layers to enable quicker iterations with parameter reuse.

StatsigUser user = new StatsigUser("user_id");

// Values via getLayer

Layer layer = Statsig.getLayer(user, "user_promo_experiments");
String promoTitle = layer.getString("title", "Welcome to Statsig!");
Double discount = layer.getDouble("discount", 0.1);

// or, via getExperiment

DynamicConfig titleExperiment = Statsig.getExperiment(user, "new_user_promo_title");
DynamicConfig priceExperiment = Statsig.getExperiment(user, "new_user_promo_price");

String promoTitle = titleExperiment.getString("title", "Welcome to Statsig!");
Double discount = priceExperiment.getDouble("discount", 0.1);

...

Double price = msrp * (1 - discount);

Logging an Event

Now that you have a Feature Gate or an Experiment set up, you may want to track some custom events and see how your new features or different experiment groups affect these events. This is super easy with Statsig - simply call the Log Event API for the event, and you can additionally provide some value and/or an object of metadata to be logged together with the event:

StatsigUser user = new StatsigUser("user_id");
Statsig.logEvent(user, "purchase", 2.99, Map.of("item_name", "remove_ads"));

Statsig Options

You can pass in an optional parameter options in addition to sdkKey and user during initialization to customize the Statsig client. Here are the current options (we are always adding more to the list), with a short configuration sketch after the list:

  • configSpecAPI - String, default https://api.statsigcdn.com/v1/download_config_specs/

    • The endpoint to use for downloading config spec network requests. You should not need to override this (unless you have another API that implements the Statsig API endpoints)
  • eventLoggingAPI - String, default https://events.statsigapi.net/v1/rgstr

    • The endpoint to use for log events. You should not need to override this (unless you have another API that implements the Statsig API endpoints)
  • initTimeoutMs: Long, default 3000

    • used to decide how long the Statsig client waits for the initial network request to respond before calling the completion block. The Statsig client will return either cached values (if any) or default values if checkGate/getConfig/getExperiment is called before the initial network request completes.
    • if you always want to wait for the latest values fetched from Statsig server, you should set this to 0 so we do not timeout the network request.
    • unit is milliseconds.
  • overrideStableID: String?, default null

    • overrides the stableID in the SDK that is set for the user
  • loadCacheAsync: Boolean, default false

    • If false (the default), the SDK blocks on loading saved values from disk; set this to true to load the cached values asynchronously instead.
  • initializeValues: Map<String, Any>?, default null

    • Provide the initialize response values directly to the Android SDK to synchronously initialize the client. You can generate these values from a Statsig Server SDK like the NodeJS Server SDK
  • disableDiagnosticsLogging: Boolean?, default false

    • Prevent the SDK from sending useful debug information to Statsig
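
As a rough sketch, configuring a few of the options above might look like the following. The option names match the list, but whether they are exposed as mutable properties or constructor parameters can vary by SDK version, so treat this as illustrative rather than exact:

val options = StatsigOptions()
options.setTier(Tier.PRODUCTION)

// The following assume the options are mutable properties named exactly as in the list above;
// check the StatsigOptions class in your SDK version for the precise form.
options.initTimeoutMs = 5000L                    // wait up to 5s before falling back to cached/default values
options.overrideStableID = "my-custom-stable-id" // hypothetical ID for illustration
options.disableDiagnosticsLogging = true

// app, user, and the IStatsigCallback are the same as in the initialization example above
Statsig.initializeAsync(app, "client-key", user, callback, options)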

Methods

  • setTier | setEnvironmentParameter | getEnvironment
    • used to signal the environment tier the user is currently in.
    • setTier can be PRODUCTION, STAGING or DEVELOPMENT. e.g. passing in a value of Tier.STAGING will allow your users to pass any condition that passes for the staging environment tier, and fail any condition that only passes for other environment tiers.
    • setEnvironmentParameter can be used for custom tiers, e.g. options.setEnvironmentParameter("tier", "test"); see the sketch after this list
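
A small sketch using the methods above, on the same options object shown in the initialization example:

val options = StatsigOptions()

// Standard environment tier
options.setTier(Tier.STAGING)

// Or set a custom tier by name
options.setEnvironmentParameter("tier", "test")

// Read back the currently configured environment
val environment = options.getEnvironment()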

Shutting Statsig Down

To conserve users' data and battery, and to prevent logged events from being dropped, we keep event logs in the client cache and flush them periodically. Because of this, some events may not have been sent when your app shuts down.

To make sure all logged events are properly flushed or saved locally, you should tell Statsig to shutdown when your app is closing:

Statsig.shutdown();
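
For example, in an Activity-scoped integration you might flush events when the hosting Activity is destroyed. This is only a Kotlin sketch; where you call shutdown depends on your app's lifecycle:

override fun onDestroy() {
    super.onDestroy()
    // Flush any queued events before the app goes away
    Statsig.shutdown()
}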

Using Persistent Evaluations

If you want to ensure that a user's variant stays consistent while an experiment is running, regardless of changes to allocation or targeting, you can implement the UserPersistentStorageInterface and set it in StatsigOptions when you initialize the SDK.

Synchronous Persistent Evaluations

The UserPersistentStorageInterface exposes the following methods for synchronous persistent storage, which will be called by default when evaluating an experiment.

interface UserPersistentStorageInterface {
    suspend fun load(key: String): PersistedValues
    fun save(key: String, experimentName: String, data: String)
    fun delete(key: String, experiment: String)
    ...
}

The key string is a combination of ID and ID type, e.g. "123:userID" or "abc:stableID". The SDK constructs this key and uses it to load and save values by default.

You can use this interface to persist evaluations synchronously to local storage. If you need an async interface, read on.
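
As a rough sketch, a synchronous implementation backed by SharedPreferences might look like the following. The class name is hypothetical, and the sketch assumes PersistedValues behaves like a map from experiment name to the serialized data string handed to save(); check the actual typealias in your SDK version. It also implements loadAsync (covered in the next subsection) so the full interface is satisfied.

import android.content.Context
import android.content.SharedPreferences

class SharedPrefsUserPersistentStorage(context: Context) : UserPersistentStorageInterface {
    private val prefs: SharedPreferences =
        context.getSharedPreferences("statsig_persisted_values", Context.MODE_PRIVATE)

    // key is "<id>:<idType>", e.g. "123:userID"; each experiment is namespaced under it
    private fun read(key: String): PersistedValues =
        prefs.all
            .filterKeys { it.startsWith("$key|") }
            .map { (prefKey, value) -> prefKey.removePrefix("$key|") to value.toString() }
            .toMap()

    override suspend fun load(key: String): PersistedValues = read(key)

    override fun loadAsync(key: String, callback: IPersistentStorageCallback) {
        // SharedPreferences reads are cheap enough to do inline here
        callback.onLoaded(read(key))
    }

    override fun save(key: String, experimentName: String, data: String) {
        prefs.edit().putString("$key|$experimentName", data).apply()
    }

    override fun delete(key: String, experiment: String) {
        prefs.edit().remove("$key|$experiment").apply()
    }
}

You would then attach this implementation to StatsigOptions before initializing; the exact property or setter name on StatsigOptions may differ by SDK version.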

Asynchronous Persistent Evaluations

The UserPersistentStorageInterface also exposes a callback-based method for asynchronous persistent evaluations. Because the getExperiment call is synchronous, you must load the values first and pass them in as userPersistedValues.

interface UserPersistentStorageInterface {
    fun loadAsync(key: String, callback: IPersistentStorageCallback)
    fun save(key: String, experimentName: String, data: String)
    fun delete(key: String, experiment: String)
    ...
}

interface IPersistentStorageCallback {
    fun onLoaded(values: PersistedValues)
}

For your convenience, we've created a top level method to load the value for a given user and ID Type:

// Asynchronously load values (delivered to the callback)
Statsig.client.loadUserPersistedValuesAsync(
    user: StatsigUser,
    idType: String, // userID, stableID, customIDxyz, etc.
    callback: IPersistentStorageCallback
)

// Synchronously load values
val userPersistedValues = Statsig.client.loadUserPersistedValues(
    user: StatsigUser,
    idType: String, // userID, stableID, customIDxyz, etc.
)

Putting it all together, assuming you have implemented the UserPersistentStorageInterface and set it on StatsigOptions, your callsite will look like this:

// Asynchronous
val callback = object : IPersistentStorageCallback {
    override fun onLoaded(values: PersistedValues) {
        Statsig.getExperiment(user, "sample_experiment", GetExperimentOptions(userPersistedValues = values))
    }
}
Statsig.client.loadUserPersistedValuesAsync(user, "userID", callback)

// Synchronous
val user = StatsigUser(userID = "user123")
val userValues = Statsig.client.loadUserPersistedValues(user, "userID")
val experiment = Statsig.getExperiment(user, "the_allocated_experiment", GetExperimentOptions(userPersistedValues = userValues))

If you are using Java, you only need to override the loadAsync function and can leave the load function empty.

FAQ

How do I run experiments for logged out users?

See the guide on device level experiments