Hi. I have a question about user splits.
We use the SDK to generate a variant based on user_id. Our developers say that the SDK returns the same variant value for the same user_id, so we can determine a user's variant even if they never see the experiment. For example, when the experiment is running on the Product page and a user lands on the Main page, we already know which variant we will show the user, even if they never go to the Product page.
My concern is the following example. Suppose 1000 users land on the Main page. The library assigns 498 users to variant A and 502 users to variant B. However, only 30 of the variant A users go on to the Product page, where the experiment is running, versus 100 of the variant B users. The resulting split among exposed users is 23/77, not 50/50.
I thought this could be resolved by setting IF conditions in the UI, but our developer says that doesn't influence the distribution, because the variant is determined by user_id regardless of where we run the experiment. Am I missing something? Can you assist, please? I'm confused about the final split we could end up with.
03/30/2023, 8:11 AM
Hi Serhii - there are two ways to solve this: 1. Bucket users as close to the exposure as possible, i.e. on the Product page. 2. If that is unavoidable, you can use an activation metric to filter the experiment results and only include users who saw the activation event (in this case, the event would be 'viewed product page').
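To illustrate option 2, here is a minimal sketch of what activation-metric filtering does conceptually; the record shape and field names (`variant`, `events`) are made up for this example, not from any SDK:

```python
# Sketch: keep only users who fired the activation event before
# analyzing variants. Data layout here is purely illustrative.

def activated_users(exposures, activation_event="viewed product page"):
    """Filter exposure records down to users who saw the activation event."""
    return [u for u in exposures if activation_event in u["events"]]

exposures = [
    {"user_id": 1, "variant": "A", "events": ["viewed main page", "viewed product page"]},
    {"user_id": 2, "variant": "B", "events": ["viewed main page"]},
    {"user_id": 3, "variant": "B", "events": ["viewed product page"]},
]

analyzed = activated_users(exposures)
# Only users 1 and 3 remain; user 2 was assigned a variant but never
# reached the Product page, so they are excluded from the analysis.
```

The point is that users who were assigned a variant on the Main page but never reached the Product page simply drop out of the comparison.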
Since the assignments are random, an imbalance of that size is unlikely to bias your sample. We have built-in SRM checks to alert you about any significant imbalances.
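For intuition, an SRM (sample ratio mismatch) check is typically a chi-square goodness-of-fit test against the expected split. A sketch, using the 30/100 counts from the example above (this is the standard statistical test, not necessarily the exact implementation in the product):

```python
# Chi-square goodness-of-fit statistic for a sample-ratio-mismatch check.

def srm_chi_square(observed, expected_ratio=(0.5, 0.5)):
    """Compare observed variant counts against the expected split."""
    total = sum(observed)
    expected = [total * r for r in expected_ratio]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

stat = srm_chi_square([30, 100])
# With 1 degree of freedom, a statistic above ~3.84 means p < 0.05.
print(round(stat, 2))  # 37.69 -> a 30/100 split would be flagged as SRM
```

So a split that lopsided among exposed users would indeed trigger an alert, which is why bucketing at the point of exposure (or filtering with an activation metric) matters.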
03/30/2023, 8:29 AM
Hi @fresh-football-47124. Thank you for your response.
1. Does that mean we need to generate a unique user_id for each experiment, and exactly on the experiment page? Otherwise I don't understand your point. For example, the library returns variant A for user_id 1234 on the Main page, and it returns the same value on the experiment page. So the final result will be the same no matter which page it is, if I understood correctly.
2. As I understand it, an activation metric is applied after we have already got the split.
Yeah, I see that you have alerts about imbalances; we're just trying to avoid them, because it's not clear what to do in that case and the experiment may fail because of it.
03/30/2023, 4:29 PM
I am interested in the solution as well. Does using an inline experiment on the experiment page solve this problem?
03/30/2023, 6:13 PM
No, you should not use a new id per experiment - you should always use the same id for the user if you can. The way we bucket users is to hash together the user id (or whatever you have as your hash attribute) and the experiment name/tracking key. This way, users get a fresh, independent assignment for every new experiment.
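A minimal sketch of that bucketing scheme: hash the user id together with the experiment key, so the same user gets a stable variant within one experiment but independent assignments across experiments. The FNV-1a hash here is illustrative; the actual SDK may use a different hash function.

```python
# Sketch: deterministic bucketing from hash(experiment_key + user_id).

def assign_variant(user_id: str, experiment_key: str, variants=("A", "B")):
    """Map a (user, experiment) pair to a stable variant."""
    data = f"{experiment_key}:{user_id}".encode()
    h = 2166136261  # FNV-1a 32-bit offset basis
    for byte in data:
        h = ((h ^ byte) * 16777619) & 0xFFFFFFFF
    bucket = h / 2**32  # map the hash into [0, 1)
    return variants[int(bucket * len(variants))]

# Same user + same experiment -> same variant, no matter which page
# the SDK is evaluated on:
assert assign_variant("1234", "product-page-test") == assign_variant("1234", "product-page-test")
```

Because the experiment key is part of the hash input, the same user_id lands in an unrelated bucket for a different experiment, so reusing one id across experiments does not correlate their assignments.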