Hi! We just integrated Growthbook and are about to do our first AB tests. I understand that the Bayesian approach frees us from calculating a sample size and therefore the test duration and enables us to continuously monitor our AB tests and make faster decisions. BUT how do we know when to stop a test? Should we define appropriate stopping rules to help us draw conclusions quickly but safely? If yes, do you have any tips? Thanks a lot
03/01/2023, 3:46 PM
Hi Victoria, if you're watching an experiment and you've set up email notifications (available if you're self-hosting), we'll send an email when a metric goes significant
There are a few main criteria users rely on when stopping a test: significance reached on their primary metrics, metric risk dropping below their risk thresholds, or guardrail metrics being unaffected. It all depends on what you're trying to do with the experiment: whether you want to measure the impact of your change, or you're already happy with the new version and just want to de-risk the rollout.
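To make those criteria concrete, here's a minimal sketch (my own illustration, not GrowthBook's actual engine) of a Bayesian stopping-rule check for a conversion metric. It uses Beta(1, 1) posteriors on each variation and Monte Carlo draws to estimate the chance to beat control and the expected risk of shipping B; the thresholds are assumptions you'd tune:

```python
# Hypothetical stopping-rule check for a Bayesian A/B test on a
# conversion metric. NOT the GrowthBook implementation; an illustration.
import random

def stopping_check(conv_a, n_a, conv_b, n_b,
                   prob_threshold=0.95, risk_threshold=0.0025,
                   draws=20000, seed=42):
    """Return (chance_to_beat_control, expected_risk, should_stop)."""
    rng = random.Random(seed)
    wins = 0
    loss = 0.0  # accumulated loss if we ship B when A is actually better
    for _ in range(draws):
        # Beta(1, 1) uniform prior updated with observed conversions.
        pa = rng.betavariate(1 + conv_a, 1 + n_a - conv_a)
        pb = rng.betavariate(1 + conv_b, 1 + n_b - conv_b)
        if pb > pa:
            wins += 1
        else:
            loss += pa - pb
    chance = wins / draws
    expected_risk = loss / draws
    # Stop either when B is very likely better, or when the expected
    # cost of being wrong about shipping B is negligible.
    should_stop = chance >= prob_threshold or expected_risk <= risk_threshold
    return chance, expected_risk, should_stop
```

For example, 100/1000 conversions on control vs 150/1000 on the variation gives a chance to beat control near 1, so the rule fires; identical arms hover around 0.5 and it doesn't.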
03/02/2023, 1:27 PM
Thanks Graham for your quick answers!
Can you indicate where I should set up my email to be notified when a metric goes significant? I would have expected the "watch" feature to do that.
Also, I thought the Bayesian approach frees us from calculating a sample size and therefore a test duration, BUT I see in the "Edit Metric" popup that I can change the default minimum sample size of 150. What's your recommendation? Should I calculate and set the sample size myself?
03/02/2023, 5:12 PM
Are you self-hosting?
03/03/2023, 9:15 AM
Hi Graham! Following up on my last request. Let me know if you need more details to be able to answer. Thanks!
Gentle reminder @fresh-football-47124 🙂 Thanks again !
Yes, although the Bayesian approach does free you from fixed-horizon experiments, drawing conclusions from small sample sizes will still increase your error rates. We added the minimum sample size so you can hide results that might just be noise.
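You can see this effect with a quick A/A simulation (my own sketch; the thresholds and peeking schedule are assumptions, not how GrowthBook computes results). Both arms share the same true conversion rate, we "peek" repeatedly at a naive chance-to-beat-control rule, and we count how often it crosses 95% at least once. Gating peeks behind a minimum sample size cuts down those early false alarms:

```python
# Illustrative A/A simulation: continuous peeking at small samples
# inflates false-positive "significant" calls; a minimum sample size
# gate reduces them. Not GrowthBook's actual statistics engine.
import random
from math import sqrt
from statistics import NormalDist

def chance_to_beat(conv_a, n_a, conv_b, n_b):
    # Normal approximation to the Beta(1+conv, 1+n-conv) posteriors.
    def moments(conv, n):
        a, b = 1 + conv, 1 + n - conv
        mean = a / (a + b)
        var = a * b / ((a + b) ** 2 * (a + b + 1))
        return mean, var
    ma, va = moments(conv_a, n_a)
    mb, vb = moments(conv_b, n_b)
    return NormalDist().cdf((mb - ma) / sqrt(va + vb))

def false_alarm_rate(min_n=0, sims=500, peek_every=50, max_n=1000,
                     true_rate=0.10, seed=7):
    """Fraction of A/A tests that ever look 'significant' while peeking."""
    rng = random.Random(seed)
    alarms = 0
    for _ in range(sims):
        conv_a = conv_b = 0
        fired = False
        for n in range(peek_every, max_n + 1, peek_every):
            # Both arms draw from the SAME true rate (A/A test).
            conv_a += sum(rng.random() < true_rate for _ in range(peek_every))
            conv_b += sum(rng.random() < true_rate for _ in range(peek_every))
            if n >= min_n and not fired:
                p = chance_to_beat(conv_a, n, conv_b, n)
                if p > 0.95 or p < 0.05:  # naive two-sided stopping rule
                    fired = True
        alarms += fired
    return alarms / sims
```

Running `false_alarm_rate(min_n=0)` vs `false_alarm_rate(min_n=500)` on the same seed shows the ungated version firing strictly more often, since every late false alarm is also counted by the ungated rule.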