# announcements
i
Hi Team, this is in continuation of my question here. Now, somehow, even with less data, the 'chance to beat control' and 'percentage change' are being shown. However, given that the conversion rate for both is 100%, why is there a 'chance to beat control' at all? Am I interpreting the data wrong? What is 'control' here?
f
Whatever your metric is, it's selecting all of the audience for that experiment, hence the 100%
if you have it balanced at a 50/50 split you'd get closer to a 0% difference
i
Thanks @fresh-football-47124. And what would the 'control' be in 'chance to beat control'? The doc mentions one of the variants. However, both variants are performing identically. Where is the 77% 'chance to beat control' coming from?
f
have to check with @luke tomorrow
👍 1
it's still not significant
i
@fresh-football-47124 please let me know when you hear back from @helpful-application-7107 https://growthbookusers.slack.com/archives/C01T6PKD9C3/p1690523055491989?thread_ts=1690520924.045089&cid=C01T6PKD9C3
h
The reason for this is that 950/950 is stronger evidence for 100% than 208/208 is in a Bayesian context. We're more certain that the TG value here is actually 100% because we have more data in that group. This is one way that Bayesian posteriors will differ from frequentist difference-in-means tests.
🙏 1
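To make that concrete, here is a small sketch (not GrowthBook's exact implementation or priors, just the standard Beta-Binomial setup): with a flat Beta(1, 1) prior, 950/950 conversions gives a posterior of Beta(951, 1) and 208/208 gives Beta(209, 1). Both observed rates are 100%, but the posterior with more data concentrates closer to 1, so sampling both posteriors gives a "chance to beat control" well above 50%.

```python
# Assumption: flat Beta(1, 1) priors and a simple Beta-Binomial model;
# GrowthBook's engine may use different priors, so the number below
# illustrates the effect rather than reproducing the 77% from the thread.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Posterior draws for each arm: Beta(successes + 1, failures + 1)
variant = rng.beta(951, 1, size=n)   # 950/950 conversions
control = rng.beta(209, 1, size=n)   # 208/208 conversions

# Fraction of posterior draws where the variant's true rate exceeds
# the control's true rate = "chance to beat control"
chance_to_beat = (variant > control).mean()
print(round(chance_to_beat, 2))
```

The point is that even with two identical 100% observed rates, the arm with more observations has a tighter posterior near 1, so it "beats" the smaller arm in most posterior draws.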
i
Hi Team, during experiment analysis I am getting the 'Sample ratio mismatch' warning when running a target vs control experiment. Our user base is extremely large, and hence we take only a very small percentage of users for our global control group (which in absolute terms is still a few hundred thousand). Do you think this ratio mismatch would affect the results when using the Bayesian engine for analysis?
f
What p-value did you get?
We warn about SRM errors because it typically indicates an implementation problem that might bias the results.
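For reference, an SRM check is usually a chi-squared goodness-of-fit test comparing the observed user counts per arm against the intended split. A sketch with made-up numbers (not from this thread); the 1%/99% split and counts below are purely illustrative:

```python
# Hypothetical SRM check: chi-squared test of observed arm counts vs the
# intended allocation. A tiny p-value means the observed split is very
# unlikely under the intended split, which usually points at an
# assignment/implementation bug rather than random traffic noise.
import math

def srm_p_value(observed, expected_ratios):
    total = sum(observed)
    expected = [total * r for r in expected_ratios]
    chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    # For two arms there is 1 degree of freedom; the chi-squared(1)
    # survival function is erfc(sqrt(x / 2)).
    return math.erfc(math.sqrt(chi2 / 2))

# Intended 1% control / 99% target, but control came in noticeably light:
print(srm_p_value([9_000, 991_000], [0.01, 0.99]))   # tiny p-value -> SRM
```

A p-value reported as 0 just means it underflowed to something vanishingly small: the observed counts are essentially impossible under the configured split, so the assignment pipeline is worth auditing before trusting the results, Bayesian engine or not.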
i
@fresh-football-47124 it shows p-value = 0.