A/B Testing

You can analyze A/B experiments with Context.ai.
Justify launch decisions and new feature releases with quantitative comparisons between experiment and control groups.

Configure Your Experiment

Most of the work to set up an A/B experiment takes place outside of Context.ai.
When you have a new feature or change you want to test, you’ll need to assign a subset of your users to receive the experimental version. Be sure to randomize this assignment to avoid selection bias.
We suggest you assign an equal percentage of your traffic to the control group, which will establish a baseline for comparison. For example, if 5% of your users are in the experiment group, we recommend allocating 5% of users to the control group.
The right sample size depends on your overall traffic, the statistical power you’re seeking, and how long you’re willing to run the experiment. The more users or time you have, the smaller your experiment groups can be.
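To put rough numbers on that trade-off, you can run a standard power calculation before the experiment starts. The following is a minimal sketch using statsmodels, assuming the metric you care about is a rate (for example, a resolution or conversion rate); the 20% baseline and 24% target are illustrative placeholders, not recommendations.

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Illustrative assumptions: 20% baseline rate, and you want to detect a lift to 24%.
baseline_rate = 0.20
target_rate = 0.24

effect_size = proportion_effectsize(target_rate, baseline_rate)

# Users needed per group for 80% power at a 5% significance level.
n_per_group = NormalIndPower().solve_power(
    effect_size=effect_size,
    alpha=0.05,
    power=0.80,
    ratio=1.0,               # equal-sized experiment and control groups
    alternative="two-sided",
)
print(f"~{int(round(n_per_group))} users per group")
```

Smaller expected effects or higher power targets will push the required sample size up, which is why low-traffic products usually need to run experiments for longer.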
Optionally, you can test more than one experimental version of your product. Just randomly assign an equal number of users to each experimental group and the control.
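As a concrete illustration of this kind of assignment, here is a minimal sketch (not part of Context.ai) that hashes each user ID into a bucket so the split is effectively random but stable across sessions. The group names, the salt, and the 5% allocations mirror the example above and are placeholders you would replace with your own.

```python
import hashlib

# Hypothetical allocation: 5% control, 5% per experiment arm,
# remaining users stay on the existing product.
GROUPS = [
    ("control", 0.05),
    ("experiment_a", 0.05),
    ("experiment_b", 0.05),
]

def assign_group(user_id: str, salt: str = "my-ab-test") -> str | None:
    """Deterministically map a user to an experiment group, or None if not enrolled."""
    # Hash the user ID with a per-experiment salt so the split is
    # effectively random but stable for a given user.
    digest = hashlib.sha256(f"{salt}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # uniform value in [0, 1]

    threshold = 0.0
    for group_id, share in GROUPS:
        threshold += share
        if bucket < threshold:
            return group_id
    return None  # not enrolled in the experiment
```

Hashing on a stable user ID (rather than drawing a fresh random number per request) keeps each user in the same group for the life of the experiment, which is what you want when comparing conversation metrics by cohort.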

Send Experiment IDs to Context.ai

For each experiment group or control group, assign a unique identifier that you can log to Context.ai. Then, each time you log a conversation, include a metadata field for experiment_id and populate it with that identifier.
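How you log conversations depends on which Context.ai ingestion path you already use, so the sketch below keeps that part abstract: log_conversation is a hypothetical stand-in for your existing logging call, and the only point it makes is that the metadata you attach includes an experiment_id key.

```python
# Hypothetical stand-in for however you already send conversations to
# Context.ai; only the shape of the metadata payload matters here.
def log_conversation(messages: list[dict], metadata: dict) -> None:
    ...  # your existing Context.ai logging code goes here

def log_with_experiment(
    messages: list[dict],
    user_id: str,
    experiment_id: str | None,  # e.g. "control", "experiment_a", or None
) -> None:
    metadata = {"user_id": user_id}
    if experiment_id is not None:
        # The identifier you chose for this user's group is attached to the
        # conversation as the experiment_id metadata field.
        metadata["experiment_id"] = experiment_id
    log_conversation(messages, metadata)
```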
That’s it. Your results will be available for analysis in Context.ai.

Analyze Your Results

Inside Context.ai, you’ll be able to work with experiment IDs in the same way as any other metadata field.
You can apply a global filter to view metrics or transcripts from a specific experiment group (or the control). This can give you a feel for user behavior in each cohort.
For the most direct comparison between experiment groups, it’s best to use the Workspace. Here’s how:
  1. On the left-hand side, create a series for each experiment group and the control.
  2. For each series, choose the metric of interest.
  3. Set each series to show the appropriate experiment_id under Filters > Metadata.
Use metadata filters in the Workspace to compare experiment groups with the control.
You can also filter by topics or other metadata you’ve added for more granular analysis. This way, you can see if your experiment outperforms the control for certain topics or user groups.