# ask-ai
Does "Chance to Win" in GrowthBook experiment results compare the variations to the control, or does it compare both the control and the variations to past data to see if any of them would result in a win? @brash-pillow-71627
A message from kapa.ai
The "Chance to Win" in GrowthBook's experiment results is a comparison between the variations and the control. It is the probability that a variation is better than the control. It does not compare the control or the variations to past data. Anything above 95% (a customizable threshold set at the organization level) is highlighted green, indicating a very clear winner. Anything below 5% is highlighted red, indicating a very clear loser. Anything in between is grayed out, indicating the result is inconclusive. If that's the case, there's either no measurable difference or you haven't gathered enough data yet. For more information, you can refer to the [GrowthBook documentation](https://docs.growthbook.io/app/experiment-results#bayesian-engine) and this [GitHub PR](https://github.com/growthbook/growthbook/pull/1751).
what would be the expected chance to win in an A/A test? @brash-pillow-71627
A message from kapa.ai
In an A/A test, where there is no difference between variations, the expected "Chance to Win" would be around 50%. This is because both variations are identical, so there should be an equal probability of either variation "winning" over the other. However, it's important to note that due to the inherent uncertainty in experimentation, there may be times when you see a statistically significant effect in an A/A test even if the test is set up correctly. This could happen about 10% of the time, according to the [GrowthBook documentation](https://docs.growthbook.io/kb/experiments/aa-tests#problem-metrics-show-statistically-significant-lifts-in-the-aa-test). This is also confirmed in a [forum conversation](https://linen.growthbook.io/t/2578333/hey-there-wave-i-m-trying-to-understand-i-e-replicate-in-a-s) where a user correctly stated that in an A/A test scenario with sufficient data, most metrics would converge to ~0% for “Percent Change” and ~50% for “Chance to Beat Control”.
and what does it mean if the Chance to Win is minus 20–40% in the A/A test? @brash-pillow-71627
A message from kapa.ai
In GrowthBook, the "Chance to Win" is expressed as a percentage and typically ranges from 0% to 100%. If you're seeing a "Chance to Win" as a negative value like -20% to -40%, it might be a misunderstanding or a display error. In an A/A test, where both variations are identical, the "Chance to Win" should ideally be around 50%. However, due to the inherent randomness in any experiment, it's possible to see some variation from this ideal. If you're consistently seeing negative values or values far from 50%, it might indicate an issue with the setup of your A/A test or the way the data is being processed. It would be a good idea to review your experiment setup and consult the [GrowthBook documentation](https://docs.growthbook.io/kb/experiments/aa-tests#problem-metrics-show-statistically-significant-lifts-in-the-aa-test) or [forum](https://linen.growthbook.io/t/16090610/we-re-running-an-a-a-test-after-integrating-gb-to-measure-gb) for further guidance.