# announcements
r
Hi all, I'm trying to validate our setup with an A/A test, but the results seem pretty inconclusive. Variation 2 has gone from +6.9% with an 80% Chance to Beat Control to the current value. I'd have thought the amount of data would be enough to reach statistical significance, so I wonder if something is wrong with our implementation. For context, this is set up on our landing pages: we fire Experiment Viewed on every page, and the conversion is whenever the visitor clicks the Book Appointment button (also present on every page), which then takes them to another app to finish the booking process. Events are sent through Amplitude to a Redshift instance, and I can see the numbers matching.
I'm running 2 more A/A experiments with a similar setup to validate as well, but so far I've seen the same behavior. I wonder if maybe I should run this on just one landing page.
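For reference, a minimal sketch of the tracking wiring described above, assuming the @amplitude/analytics-browser SDK; the event names match the thread, but the experiment key, variation id, and button selector are placeholder assumptions:

```ts
// Sketch only, not the poster's actual code. Assumes @amplitude/analytics-browser;
// the experiment key, variation id, and selector below are made-up placeholders.
import * as amplitude from '@amplitude/analytics-browser';

amplitude.init('YOUR_AMPLITUDE_API_KEY');

// Hypothetical assignment, e.g. returned by whatever experimentation SDK is in use.
const experimentKey = 'landing-page-aa-test'; // placeholder
const variationId = '2';                      // placeholder

// Fired once per page view on every landing page.
amplitude.track('Experiment Viewed', {
  experiment_id: experimentKey,
  variation_id: variationId,
});

// Conversion: a click on the Book Appointment button, which then hands the
// visitor off to the separate booking app.
document
  .querySelector('#book-appointment')         // placeholder selector
  ?.addEventListener('click', () => {
    amplitude.track('Book Appointment Clicked', {
      experiment_id: experimentKey,
      variation_id: variationId,
    });
  });
```

In a setup like this the analysis is typically deduplicated per user, so firing Experiment Viewed on every page view is fine as long as the same user/device id is attached to both events.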
a
Perhaps run some that only have 1 variation, and for 4 weeks, so they get more power. 5k users per experience sounds on the low side; a rule of thumb is to aim for 1k+ conversions per experience (though some claim 400 or 500 may be OK). It also sounds wise to run 5+ A/A tests, so you don't just see one outlier.
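To put those numbers in context, here's a rough two-proportion sample-size sketch; the 3% baseline rate and +10% minimum detectable effect are assumed example values, not figures from the thread:

```ts
// Standard two-proportion sample-size formula; the example inputs are assumptions.
function sampleSizePerVariant(
  baselineRate: number,  // control conversion rate, e.g. 0.03
  relativeLift: number,  // minimum detectable effect, e.g. 0.10 for +10%
  zAlpha = 1.96,         // two-sided alpha = 0.05
  zBeta = 0.84           // power = 0.80
): number {
  const p1 = baselineRate;
  const p2 = baselineRate * (1 + relativeLift);
  const varianceSum = p1 * (1 - p1) + p2 * (1 - p2);
  return Math.ceil(((zAlpha + zBeta) ** 2 * varianceSum) / (p1 - p2) ** 2);
}

const n = sampleSizePerVariant(0.03, 0.10);
console.log(`${n} users per variant`);              // ~53k users
console.log(`${Math.round(n * 0.03)} conversions`); // ~1.6k conversions per variant
```

With those example inputs you'd need on the order of 50k users and roughly 1.6k conversions per variation, which lines up with the 1k+ conversions rule of thumb and is well beyond 5k users per experience.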
r
Good point, Chris. Alright, I'll add a couple of A/A tests with just 1 variation and be more patient, thank you!