User:
Hello, I have a question regarding user distribution in GrowthBook. Does GrowthBook divide users equally across each country by default? If not, is there an option to enable this? Our issue is that variant 1 seems to have more users from high-paying countries than the baseline. This imbalance skews our results, making it appear that variant 1 generates significantly more payments, which makes the results unreliable.
Support:
Hi there! There are a couple of different approaches you could take here; however, it's odd that this discrepancy occurred. Are you able to share a screenshot of the experiment setup and results, or at least how many users were in each group? In terms of solutions, dimensions or segments might help you interpret the data. A dimension is a user attribute that can have multiple values (like country) and can be used to explore experiment results. A segment is a specific group of users and can be used to filter experiment results.
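For illustration, here's roughly the kind of country-by-variation breakdown a dimension gives you, computed from hypothetical raw assignment and payment records (the data and field names below are made up for the example, not GrowthBook output):

```python
from collections import defaultdict

# Hypothetical assignment records: (user_id, variation, country, paid)
events = [
    ("u1", "control",  "US", True),
    ("u2", "variant1", "US", True),
    ("u3", "control",  "VN", False),
    ("u4", "variant1", "CA", True),
    ("u5", "variant1", "US", False),
]

# Count users and payers per (country, variation) cell
counts = defaultdict(lambda: {"users": 0, "payers": 0})
for _, variation, country, paid in events:
    cell = counts[(country, variation)]
    cell["users"] += 1
    cell["payers"] += int(paid)

for (country, variation), cell in sorted(counts.items()):
    print(country, variation, cell["users"], cell["payers"])
```

A breakdown like this makes it visible at a glance whether one arm got an outsized share of users from any single country.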
User:
Hi @strong-mouse-55694, thank you for your response. We use Mixpanel for analysis, filtering only tier-1 countries as shown in the attached screenshot. You can see that the number of users in variant A is higher than in the baseline. Users in tier-1 countries (US, CA, AU, ...) are more willing to pay than those in lower-tier countries, which is why I believe the “payment” metric results are unreliable. As I understand it, dimensions are only used to break down experiment results, but we want users to be evenly distributed across each country. I'm not sure how we could use a segment to achieve this, as I don't see any option for segments in the experiment's attribute targeting.
Support:
Thanks for sharing. How many total users were in the experiment?
User:
Hi @strong-mouse-55694, 6,782 total users so far.
Support:
Thanks! I'm chatting with our data engineers to see what they recommend.
User:
Thank you so much 🙏
Support:
Hi again! The data team said it's possible that the experiment assignment simply turned out unbalanced: an "unlucky" draw. That doesn't mean the results are invalid; they just aren't as precise as you'd like. In this case, as mentioned before, you can use dimensions to see how the experiment fared across countries. If it really was just an unlucky assignment, rerunning the experiment shouldn't reproduce the imbalance; if the imbalance persists, something is likely going awry in your assignment, and you'd need to debug what's happening. While we do have attribute targeting, which lets you include or exclude users based on attributes, the way sampling works means it's not possible to force an equal assignment of users per country.
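If you have the per-arm country counts, you can sanity-check whether the tier-1 imbalance is larger than chance would explain. Below is a plain two-proportion z-test sketch using only the standard library; the counts in the usage example are hypothetical, not the actual experiment data:

```python
import math

def tier1_imbalance_pvalue(tier1_a: int, total_a: int,
                           tier1_b: int, total_b: int) -> float:
    """Two-sided z-test: is the tier-1 share in arm A vs. arm B
    further apart than random 50/50 assignment would explain?"""
    p_a = tier1_a / total_a
    p_b = tier1_b / total_b
    pooled = (tier1_a + tier1_b) / (total_a + total_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF (via math.erf)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Hypothetical counts: 1200/3391 tier-1 users in control
# vs. 1350/3391 in variant 1
print(round(tier1_imbalance_pvalue(1200, 3391, 1350, 3391), 4))
```

A small p-value suggests the split is unlikely to be an "unlucky" draw and that assignment is worth debugging; a large one is consistent with plain randomization noise.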
User:
Thanks for the clarification. We ended up running the experiment only for tier-1 countries by adding a country condition in the Attribute Targeting section.
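For anyone finding this thread later, a country-restriction condition looks roughly like the dict below (GrowthBook's targeting conditions follow a MongoDB-like syntax with operators such as `$in`). The evaluator is only a minimal sketch of the matching idea, not GrowthBook's actual implementation, and the country list is a hypothetical subset:

```python
# Illustrative targeting condition restricting the experiment to
# tier-1 countries (hypothetical subset of the real list).
TIER1 = ["US", "CA", "AU"]
condition = {"country": {"$in": TIER1}}

def matches(attributes: dict, condition: dict) -> bool:
    """Minimal $in-only evaluator sketch: every conditioned
    attribute must take one of the allowed values."""
    return all(
        attributes.get(attr) in spec["$in"]
        for attr, spec in condition.items()
    )

print(matches({"id": "u1", "country": "US"}, condition))  # tier-1 user
print(matches({"id": "u2", "country": "VN"}, condition))  # excluded
```

Note that this only narrows who enters the experiment; within the included countries, the split between arms is still random, so per-country balance is still not guaranteed.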