Hello everyone, is there a solution to make experiments sticky for users? When we change the percentage rollout, users see two different flows on the same journey, which is bad UX.
If we roll out an experiment 50:50 (50% control and 50% test), it works fine, but whenever we change the rollout percentage, users who fell into the control version may now see the test version. How can we prevent this behaviour? Changing the rollout percentage should only affect new users, not returning ones.
01/14/2024, 11:06 AM
Hello. Just to make sure I understand your use case, you are starting with a 50/50 split, the variation is winning so you want to increase traffic to it, slowly ramping up so eventually 100% of users are getting the winning variation instead of control. Is that correct?
If so, what's the purpose of holding back the users who were originally assigned control? Is it so you can continue to analyze results as you increase traffic to the variation to make sure it's still winning as you scale up?
01/14/2024, 4:28 PM
Here, our primary concern is not concluding the experiment, but rather that the same user is shown two distinct flows. We want to keep experiment assignments consistent: once a version is assigned to a user, it should remain unchanged even as traffic increases. We want this stickiness in every experiment.
01/15/2024, 5:43 PM
Our recommended approach is to keep an even split between variations for the entire experiment and instead adjust the overall traffic percent that is included to increase traffic.
For example, start with a 50/50 split on 10% of traffic, then increase that to 20%, etc. to send more traffic there.
When you do that, users in the experiment will not be reassigned, even without any stickiness or persistence. The hashing algorithm we use ensures that.
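To illustrate why no reassignment happens, here is a simplified Python sketch of range-based bucketing. This is not GrowthBook's actual implementation (names like `bucket`, `assign`, and the SHA-256 hash are illustrative assumptions); the key idea is that each variation owns a range with a fixed starting point, and raising coverage only widens each range, so users already assigned keep their variation.

```python
import hashlib

def bucket(user_id: str, experiment_key: str) -> float:
    # Deterministic hash of user + experiment key, mapped to [0, 1).
    # (GrowthBook uses a different hash; SHA-256 is just for illustration.)
    digest = hashlib.sha256(f"{experiment_key}:{user_id}".encode()).digest()
    return int.from_bytes(digest[:4], "big") / 2**32

def assign(user_id: str, experiment_key: str, coverage: float):
    """Return 'control', 'test', or None if the user is not yet included.

    With a 50/50 split, control owns [0, 0.5*coverage) and test owns
    [0.5, 0.5 + 0.5*coverage). Increasing `coverage` only widens these
    ranges; their starting points never move, so no one is reassigned.
    """
    h = bucket(user_id, experiment_key)
    if h < 0.5 * coverage:
        return "control"
    if 0.5 <= h < 0.5 + 0.5 * coverage:
        return "test"
    return None  # not included at this traffic level

# Ramping coverage from 10% to 20% only adds new users:
users = [f"user{i}" for i in range(1000)]
at_10 = {u: assign(u, "exp1", 0.10) for u in users}
at_20 = {u: assign(u, "exp1", 0.20) for u in users}
for u in users:
    if at_10[u] is not None:
        assert at_10[u] == at_20[u]  # everyone keeps their variation
```

The assertion at the end never fires: every user who was in the experiment at 10% coverage stays in the same variation at 20%, which is the behaviour described above.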
For more advanced changes beyond just increasing traffic, we did just launch a sticky bucketing solution you can check out. https://docs.growthbook.io/app/sticky-bucketing