# ask-questions
e
Hey all, I'm just starting to use GrowthBook and I'm having trouble setting up my first binomial test. What I did so far:
1. I created a datasource including all users (only logged-in users, no anonymous) with the experiments & variations they are in. I set the timestamp to their entrance into the A/B test.
2. Then, in my metric definition, I just list the users who activated a feature, with the first timestamp at which they activated it.
3. I created my experiment by setting the experiment ID, variation IDs, and start & end dates. I also set the conversion delay to between 0 and 24h, because that's how we want to follow up on this activation.
But I have an issue when it comes to analyzing the results of the A/B test: my two cohorts (default & test) both show 100% conversion on the activation of the feature. Have you seen something like this before? I guess I haven't set up my datasource/table properly, but I don't really understand how to deal with it right now. Any help would be more than welcome!
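The setup above can be sketched in miniature. This is an illustrative Python sketch with hypothetical users and timestamps (not GrowthBook's actual SQL, which it generates for you): each user's first activation only counts as a conversion if it falls inside the 0–24h window after their exposure to the experiment.

```python
from datetime import datetime, timedelta

# Hypothetical exposure rows: user_id -> (variation, first exposure timestamp)
exposures = {
    "u1": ("control", datetime(2023, 1, 1, 9, 0)),
    "u2": ("test",    datetime(2023, 1, 1, 10, 0)),
    "u3": ("test",    datetime(2023, 1, 1, 11, 0)),
}

# Hypothetical metric rows: user_id -> first feature-activation timestamp
activations = {
    "u1": datetime(2023, 1, 1, 10, 30),  # 1.5h after exposure -> inside window
    "u3": datetime(2023, 1, 3, 11, 0),   # 48h after exposure  -> outside window
}

def converted(user_id, delay_hours=(0, 24)):
    """A user converts only if their first activation lands in the conversion window."""
    if user_id not in activations:
        return False
    _, exposed_at = exposures[user_id]
    offset = activations[user_id] - exposed_at
    return timedelta(hours=delay_hours[0]) <= offset <= timedelta(hours=delay_hours[1])

# Tally (exposed, converted) per variation
by_variation = {}
for uid, (variation, _) in exposures.items():
    total, conv = by_variation.get(variation, (0, 0))
    by_variation[variation] = (total + 1, conv + converted(uid))

print(by_variation)  # {'control': (1, 1), 'test': (2, 0)}
```

The key point is the join on user ID plus the time-window filter: an activation recorded outside the window (like u3's) is excluded, which is why a misconfigured window or metric can make conversion look like 100%.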
f
Hi Kaveh, happy to help. On the experiment results, can you click the 3 dots next to "Update Data", click "View Queries" and paste in the full SQL here?
e
Hey Jeremy, we actually just figured out the issue and it's fixed. We misinterpreted the "Activation metric" in the Configure experiment menu and had set it to the conversion metric. So no wonder it was 100% šŸ˜„
Another question, though: could you explain more clearly what the different stats here mean (and their statistical definitions)? More precisely, "Risk of choosing", "Chance to beat control", and the percent change part?
f
Sure. "Percent change" shows you how much better (or worse) the variation is likely to be given the data collected so far. "Chance to beat control" is the probability that the change is positive (the green part of the graph). "Risk of choosing" is a little more complicated. It answers the question "If I stop the test now and I'm wrong, how much am I going to lose?"
So in your example, there's an 86% chance it's better (pretty good, but not quite at the typical 95% threshold). However, it's likely only ~0.2% better so not going to be a huge win. And because it's such a small change, the risk is really low so it's safe to stop the test now and pick a winner.
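The three quantities Jeremy describes can be approximated with a small Monte Carlo sketch over Beta posteriors. This is illustrative only: the counts are made up (not Kaveh's data), and the uniform Beta(1,1) prior is an assumption of this sketch, not necessarily what GrowthBook uses internally.

```python
import random

random.seed(0)

def beta_posterior_samples(conversions, exposures, n=20000):
    # Posterior for a conversion rate under a uniform Beta(1,1) prior:
    # Beta(1 + conversions, 1 + non-conversions)
    return [random.betavariate(1 + conversions, 1 + exposures - conversions)
            for _ in range(n)]

# Illustrative counts, not real experiment data
control = beta_posterior_samples(500, 1000)
variation = beta_posterior_samples(530, 1000)

# "Chance to beat control": probability the variation's true rate is higher
chance_to_beat = sum(v > c for v, c in zip(variation, control)) / len(control)

# "Risk of choosing" the variation: expected loss if it is actually worse,
# expressed relative to the control's mean rate
risk = sum(max(c - v, 0.0) for v, c in zip(variation, control)) / len(control)
risk_pct = risk / (sum(control) / len(control))

print(f"chance to beat control: {chance_to_beat:.1%}")
print(f"risk of choosing variation: {risk_pct:.2%} relative loss")
```

This mirrors the intuition in the answer above: even when the chance to beat control is below a decision threshold, the risk can be tiny if the effect size is small, which is what makes stopping safe.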
e
All clear šŸ™‚ thanks for your answers. Is there any documentation on how you actually calculate those?
f
e
Hi again Jeremy, and thanks for your answers. One more question regarding metric definitions: is there any way to create a "count" metric that focuses only on converted users? I haven't found one so far. I'm looking to have something like:
ā€¢ a binomial metric to calculate conversion to an event
ā€¢ a count metric to calculate reconnection to the event, for converted users only
Do you have any recommendation on how to proceed?
f
Yes, that's supported. Under the behavior settings for count metrics, there's an option to only include converted users
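As a rough sketch of what "only include converted users" changes, with hypothetical per-user data (the real computation happens in GrowthBook's generated SQL): restricting to converters changes the denominator of the count metric, not just the numerator.

```python
# Hypothetical per-user data: conversion flag plus reconnection counts
users = [
    {"id": "u1", "converted": True,  "reconnections": 3},
    {"id": "u2", "converted": False, "reconnections": 5},
    {"id": "u3", "converted": True,  "reconnections": 1},
]

# Plain count metric: averaged over every exposed user
overall_avg = sum(u["reconnections"] for u in users) / len(users)

# Converted-only count metric: denominator restricted to converters
converters = [u for u in users if u["converted"]]
converted_avg = sum(u["reconnections"] for u in converters) / len(converters)

print(overall_avg, converted_avg)  # 3.0 2.0
```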
e
Hey Jeremy, I have that option for revenue & duration metrics, but I don't have it for count metrics. Should I have it?
f
Ah, I see that option is disabled for count metrics right now. I'll enable it after doing some quick testing to make sure it works.
e
Ok, awesome šŸ™‚ thanks Jeremy. So it should be available soon.
f
Yep, just tested and it works fine. I'm pushing out the fix now and it should be live in about 10 minutes