The Concept
As teams run a number of experiments, it becomes possible to glean learnings across those experiments. This is meta-analysis. Examples of the learnings people seek to derive include:
- How hard is a metric to move?
- Are there more sensitive proxies for the metric we care about?
- How are teams doing relative to each other?
Experiment Timeline View
This view lets you filter down to the experiments a team has run. At a glance you can answer questions like the following (a sketch of how these summaries might be computed follows the list):
- What experiments are running now?
- When are they expected to end?
- What % of experiments ship Control vs Test?
- What is the typical duration?
- Do experiments run for their planned duration, or much longer or shorter?
- Do experiments impact key business metrics, or only shallow or team-level metrics?
- How much do they impact key business metrics?
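To make these questions concrete, here is a minimal sketch of the summaries behind a timeline view. It assumes a hypothetical experiments table; the column names (start, planned_end, actual_end, shipped) and the data are illustrative, not from any real platform's schema.

```python
# A minimal sketch, assuming a hypothetical experiments table.
# Column names and values are illustrative.
import pandas as pd

experiments = pd.DataFrame({
    "name":        ["exp_a", "exp_b", "exp_c", "exp_d"],
    "start":       pd.to_datetime(["2024-01-01", "2024-01-10", "2024-02-01", "2024-02-15"]),
    "planned_end": pd.to_datetime(["2024-01-15", "2024-01-24", "2024-02-15", "2024-02-28"]),
    "actual_end":  pd.to_datetime(["2024-01-20", "2024-01-24", "2024-02-20", None]),
    "shipped":     ["control", "test", "test", None],  # None = still running
})

# What experiments are running now?
running = experiments[experiments["actual_end"].isna()]
print("Running:", running["name"].tolist())

# What % of decided experiments shipped Test vs Control?
decided = experiments.dropna(subset=["shipped"])
print(decided["shipped"].value_counts(normalize=True))

# Typical duration, and how far experiments overrun their plan.
finished = experiments.dropna(subset=["actual_end"])
actual_days = (finished["actual_end"] - finished["start"]).dt.days
planned_days = (finished["planned_end"] - finished["start"]).dt.days
print("Median duration (days):", actual_days.median())
print("Median overrun (days):", (actual_days - planned_days).median())
```

A timeline UI would run the same aggregations after applying whatever team or tag filter you choose.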
Metric Impact (Batting Average)
The “batting average” view lets you look at how easy or hard a metric is to move. You can filter to a set of shipped experiments and see how many of them moved a metric by 1% vs 10%. As with other meta-analysis views, you can filter down to a team, a tag, or even whether results were statistically significant. Common ways to use this include (see the sketch after this list):
- Sniff testing the claim that the next experiment will move this metric by 15%.
- Establishing reasonable goals, based on past ability to move this metric
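Here is a minimal sketch of the batting-average computation itself, assuming a hypothetical results table with one row per shipped experiment per metric. The column names (metric, lift, significant) and the numbers are illustrative assumptions, not any real platform's API.

```python
# A minimal sketch, assuming a hypothetical per-experiment results table.
# Column names and values are illustrative.
import pandas as pd

results = pd.DataFrame({
    "experiment":  ["exp_a", "exp_b", "exp_c", "exp_d", "exp_e"],
    "metric":      ["revenue"] * 5,
    "lift":        [0.004, 0.012, -0.003, 0.110, 0.021],  # relative change
    "significant": [False, True, False, True, True],
})

def batting_average(df: pd.DataFrame, metric: str, threshold: float,
                    require_significance: bool = True) -> float:
    """Fraction of shipped experiments that moved `metric` by at least `threshold`."""
    rows = df[df["metric"] == metric]
    moved = rows["lift"].abs() >= threshold
    if require_significance:
        moved &= rows["significant"]  # only count statistically significant movement
    return moved.mean()

print("Moved revenue by >= 1%: ", batting_average(results, "revenue", 0.01))  # 0.6
print("Moved revenue by >= 10%:", batting_average(results, "revenue", 0.10))  # 0.2
```

If the historical batting average at 10% is already low, a forecast that the next experiment will deliver 15% deserves scrutiny; the same numbers also anchor what a reasonable goal for this metric looks like.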