# ask-ai
We ran an A/A test and the traffic was split evenly between variations: no Sample Ratio Mismatch (SRM) was detected and the SRM p-value is above 0.001. But the results for our goal metrics and guardrail metrics show differences between variations as large as -28%. This should not happen in an A/A test, so what might be the problem here? @brash-pillow-71627
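For context on what that SRM check is doing, here is a minimal sketch of a chi-square Sample Ratio Mismatch test on an expected 50/50 split, assuming the common convention of flagging SRM only when the p-value drops below 0.001. This is an illustration, not GrowthBook's implementation, and the user counts are made up.

```python
# Hypothetical SRM check: chi-square goodness-of-fit test of the observed
# traffic split against the expected 50/50 split. Not GrowthBook code.
from math import erfc, sqrt

def srm_p_value(observed_a: int, observed_b: int, expected_ratio: float = 0.5) -> float:
    total = observed_a + observed_b
    expected_a = total * expected_ratio
    expected_b = total * (1 - expected_ratio)
    chi2 = ((observed_a - expected_a) ** 2 / expected_a
            + (observed_b - expected_b) ** 2 / expected_b)
    # Survival function of the chi-square distribution with 1 degree of freedom.
    return erfc(sqrt(chi2 / 2))

# Made-up counts: roughly even traffic, so the p-value stays well above 0.001.
p = srm_p_value(50_320, 49_680)
print(p, "SRM suspected" if p < 0.001 else "no SRM detected")
```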
A message from kapa.ai
In an ideal world, an A/A test should not show any statistically significant differences between the variations, as they are identical. However, due to the inherent uncertainty in experimentation, it's possible to observe differences even in an A/A test[^1^]. Here are a few potential reasons for the observed differences:

1. Statistical fluctuations: Even in a correctly configured A/A test, it's possible to observe statistically significant differences purely due to random chance[^1^]. By default, GrowthBook's Bayesian statistics engine calls a result significant when the probability of a variation being higher or lower than the baseline is greater than 95% or less than 5%[^1^]. This means that about 10% of the time you may see a statistically significant effect on an A/A metric even if the test is set up correctly[^1^].
2. Sample size: The sample size of your test can also affect the results. If the sample size is small, the results are more easily swayed by random chance[^7^].
3. Implementation errors: Any errors in the implementation of the test, such as issues with the tracking setup, can lead to imbalances in the data[^1^].

If you're seeing large differences (like -28%) consistently, it's worth investigating further. Start by checking your test setup and tracking implementation. If everything looks fine, consider running the test for a longer period or increasing the sample size to see whether the results stabilize[^1^].

[^1^]: [Source](https://docs.growthbook.io/kb/experiments/aa-tests#problem-metrics-show-statistically-significant-lifts-in-the-aa-test)
[^7^]: [Source](https://linen.growthbook.io/t/12008781/hi-growthbook-team-may-i-have-a-bit-of-advice-i-wanted-to-st)
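To make that false-positive rate concrete, here is a quick standalone simulation of repeated A/A comparisons on a single conversion metric. It is plain Python, not GrowthBook's statistics engine; the sample sizes and base conversion rate are made up, and it uses a frequentist z-score as a rough stand-in for the 95%/5% decision rule described above.

```python
# Simulate many A/A tests on one conversion metric and count how often a
# "significant" difference appears purely by chance. Illustrative only.
import random

def simulate_aa_tests(n_tests=1000, users_per_variation=5000, base_rate=0.10):
    flagged = 0
    for _ in range(n_tests):
        # Both variations draw from the same true conversion rate (A/A).
        a = sum(random.random() < base_rate for _ in range(users_per_variation))
        b = sum(random.random() < base_rate for _ in range(users_per_variation))
        p_a, p_b = a / users_per_variation, b / users_per_variation
        # Normal-approximation z-score for the difference in proportions.
        pooled = (a + b) / (2 * users_per_variation)
        se = (2 * pooled * (1 - pooled) / users_per_variation) ** 0.5
        z = (p_b - p_a) / se if se else 0.0
        # |z| > 1.645 ~ "clear winner or clear loser at a one-sided 95% level
        # in either direction", roughly a 10% combined false-positive rate.
        if abs(z) > 1.645:
            flagged += 1
    return flagged / n_tests

print(simulate_aa_tests())  # typically prints a value close to 0.10
```

Across several goal and guardrail metrics per experiment, the chance that at least one of them shows a spurious lift is correspondingly higher.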
Is it possible that this is happening because of the random mix of user types entering the experiment? @brash-pillow-71627
A message from kapa.ai
Yes, it's possible that the differences you're seeing in an A/A test could be due to the types of users entering the experiment. If the users entering each variation are not similar, it could lead to differences in the results. This is why it's important to ensure that users are randomly assigned to each variation[^1^].

In GrowthBook, users are randomly assigned to different variations in a way that ensures that past experiments do not affect one variation more than any other[^2^]. However, if there are issues with the randomization process, it could lead to imbalances in the types of users in each variation[^2^]. For example, if you make changes to the targeting of an experiment or create a new phase without re-randomizing users, it's possible that the traffic to the experiment will become imbalanced[^1^]. This is because the users who were already in the experiment may continue to be in the same variation at differing rates[^1^].

To prevent this, whenever creating a new phase, you should re-randomize traffic when possible[^1^]. Only certain changes to an experiment's targeting conditions can be done without re-randomizing[^1^].

[^1^]: [Source](https://docs.growthbook.io/kb/experiments/troubleshooting-experiments#solution-4-minimize-experiment-targeting-changes-while-running)
[^2^]: [Source](https://docs.growthbook.io/kb/experiments/carryover-bias)
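To illustrate why re-randomizing matters, here is a simplified sketch of deterministic, hash-based bucketing. This is not GrowthBook's actual hashing algorithm, and the seed strings and user ids are made up; the point is only that with a fixed seed the same user always lands in the same variation, so reusing a seed across phases keeps earlier assignments sticky, while changing the seed re-randomizes everyone and washes out any accumulated imbalance.

```python
# Simplified hash-based bucketing sketch; not GrowthBook's actual algorithm.
import hashlib

def assign_variation(user_id: str, seed: str, num_variations: int = 2) -> int:
    # Hash the seed + user id into a stable value in [0, 1), then map it to
    # a variation index. Deterministic: same inputs always give the same output.
    digest = hashlib.sha256(f"{seed}:{user_id}".encode()).hexdigest()
    bucket = (int(digest, 16) % 10_000) / 10_000
    return int(bucket * num_variations)

users = [f"user-{i}" for i in range(1000)]

# Same seed in two phases -> identical, "sticky" assignments.
phase1 = [assign_variation(u, "exp-123-phase-1") for u in users]
assert phase1 == [assign_variation(u, "exp-123-phase-1") for u in users]

# New seed -> users are re-randomized from scratch.
phase2 = [assign_variation(u, "exp-123-phase-2") for u in users]
print(sum(a != b for a, b in zip(phase1, phase2)), "of", len(users), "users moved")
```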
Does GrowthBook have user groups or attributes that we can use? For example, including 10% male users in each variation and 50% of users over 30 years old. @brash-pillow-71627
A message from kapa.ai
Yes, GrowthBook allows you to specify user attributes, which can be used for feature targeting and assigning persistent variations in A/B tests[^1^]. These attributes can be any JSON data type - boolean, integer, float, string, list, or dict[^1^]. Here's an example of how you can set user attributes[^1^]:
```python
from growthbook import GrowthBook

attributes = {
    'id': "123",
    'loggedIn': True,
    'age': 21.5,
    'tags': ["tag1", "tag2"],
    'account': {
        'age': 90
    }
}

# Pass into constructor
gb = GrowthBook(attributes = attributes)

# Or set later
gb.set_attributes(attributes)
```
However, GrowthBook's SDKs currently only support random assignment of users to variations[^2^]. They do not support stratified sampling or cluster sampling, which would be required to ensure a certain percentage of a specific user group in each variation[^2^]. If you want to implement such a sampling strategy, you would need to do it outside of GrowthBook and then pass the assignment data into GrowthBook for analysis[^2^].

[^1^]: [Source](https://docs.growthbook.io/lib/python#attributes)
[^2^]: [Source](https://linen.growthbook.io/t/2683978/hi-again-quick-question-is-there-any-way-to-stratify-the-ran)
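If you did want the kind of balance asked about above (for example, the same share of male users or of users over 30 in every variation), the assignment would have to happen outside GrowthBook, as noted. Below is a hypothetical sketch of stratified assignment; the attribute names, user records, and `stratified_assignment` helper are invented for illustration and are not part of any GrowthBook SDK. The resulting assignments would then be logged (for example, to your data warehouse) so GrowthBook's analysis can use them.

```python
# Hypothetical stratified assignment done outside GrowthBook. Illustrative
# only; none of these names come from the GrowthBook SDK.
import random
from collections import defaultdict

def stratified_assignment(users, strata_key, variations=("control", "treatment")):
    """Shuffle each stratum separately and alternate variations within it,
    so every stratum is split roughly evenly across variations."""
    strata = defaultdict(list)
    for user in users:
        strata[strata_key(user)].append(user)

    assignments = {}
    for members in strata.values():
        random.shuffle(members)
        for i, user in enumerate(members):
            assignments[user["id"]] = variations[i % len(variations)]
    return assignments

users = [
    {"id": "1", "gender": "male", "age": 34},
    {"id": "2", "gender": "female", "age": 28},
    {"id": "3", "gender": "male", "age": 41},
    {"id": "4", "gender": "female", "age": 52},
]

# Stratify on gender and an over-30 flag so both groups land evenly in each variation.
assignments = stratified_assignment(users, lambda u: (u["gender"], u["age"] > 30))
print(assignments)
```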
Can I please get an expert opinion here? I'm not sure why the A/A test is showing differences. @strong-mouse-55694