You can analyze A/B experiments with Context.ai.
Justify launch decisions and new feature releases with quantitative comparisons between experiment and control groups.
Most of the work to set up an A/B experiment takes place outside of Context.ai.
When you have a new feature or change you want to test, you’ll need to assign a subset of your users to receive the experimental version. Be sure to randomize this assignment to avoid selection bias.
We suggest you assign an equal percentage of your traffic to the control group, which will establish a baseline for comparison. For example, if 5% of your users are in the experiment group, we recommend allocating 5% of users to the control group.
Choosing a sample size depends on your overall traffic, the statistical power you’re seeking, and how long you’re willing to run the experiment. The more users or time you have, the smaller the share of traffic you need to allocate to the experiment.
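To make the traffic/power trade-off concrete, the standard two-proportion sample-size approximation shows how the required users per group shrinks as the detectable effect grows. This is a generic statistics sketch, not a Context.ai feature, and the baseline and target rates below are made up for illustration:

```python
from math import ceil
from statistics import NormalDist

def users_per_group(p_control: float, p_experiment: float,
                    alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate users needed in EACH group for a two-proportion z-test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # e.g. 1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)            # e.g. 0.84 for 80% power
    variance = p_control * (1 - p_control) + p_experiment * (1 - p_experiment)
    effect = p_experiment - p_control
    return ceil((z_alpha + z_beta) ** 2 * variance / effect ** 2)

# Detecting a lift from a 10% baseline to 12% needs roughly 3,800 users
# per group; detecting a lift from 10% to 15% needs far fewer.
n_small = users_per_group(0.10, 0.12)
n_large = users_per_group(0.10, 0.15)
```

If the required group size is a small fraction of your traffic, you can run a small allocation for longer; if not, allocate more traffic or accept lower power.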
Optionally, you can test more than one experimental version of your product. Just randomly assign an equal number of users to each experimental group and the control.
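The assignment steps above (randomizing users into the experiment, the control, and any additional variants) are commonly implemented with deterministic hashing, so each user always lands in the same group. This is a sketch under that assumption; the function name, the salt, and the 5% allocations are illustrative and not part of Context.ai:

```python
import hashlib

def assign_group(user_id: str, variants: list[str], allocation: float,
                 salt: str = "exp-1") -> str:
    """Deterministically assign a user to a variant, the control, or no group.

    `allocation` is the fraction of traffic given to EACH group
    (e.g. 0.05 means 5% per variant and a matching 5% control).
    """
    # Hash the user ID with a per-experiment salt to a stable number in [0, 1].
    digest = hashlib.sha256(f"{salt}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF

    groups = variants + ["control"]
    for i, group in enumerate(groups):
        if bucket < (i + 1) * allocation:
            return group
    return "none"  # user is not enrolled in the experiment

# Example: one experimental variant at 5%, plus a matching 5% control group.
group = assign_group("user-123", ["experiment"], 0.05)
```

Changing the salt per experiment re-shuffles users, so overlapping experiments don’t keep bucketing the same people together.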
That’s it. Once you log each user’s experiment group as metadata, your results will be available for analysis in Context.ai.
Inside Context.ai, you’ll be able to work with experiment IDs in the same way as any other metadata field.
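Concretely, the group label travels with each logged conversation as a metadata field. The exact call depends on your integration, so this is only an illustrative payload shape; every field name here except `experiment_id` is hypothetical:

```python
# Hypothetical logging payload: the key point is that the experiment group
# is attached to each conversation as an ordinary metadata field.
payload = {
    "conversation_id": "conv-42",  # illustrative identifier
    "metadata": {
        "experiment_id": "checkout_v2",  # or "control" for the control group
    },
}
```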
For the most direct comparison between experiment groups, it’s best to use the Workspace. Here’s how:
1. On the left-hand side, create a series for each experiment group and the control.
2. For each series, choose the metric of interest.
3. Set each series to show the appropriate `experiment_id` under Filters > Metadata.
Use metadata filters in the Workspace to compare experiment groups with the control.
You can also filter by topics or other metadata you’ve added for more granular analysis. This way, you can see if your experiment outperforms the control for certain topics or user groups.