h
How can I look at the absolute increase in a metric, rather than the increase in the rate? Or to not have a denominator? I have created a metric with type = count, however it still requires a denominator. I'd prefer not to have a denominator, or to have denominator = 1. This is because I want to measure the absolute uplift from the test variation, rather than the uplift to the metric divided by some denominator. What would you suggest, please?
f
On the experiment results page there is a toggle to show relative or absolute effects
The denominator (the number of users in the experiment) is needed in case the traffic split isn't equal
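The relative/absolute distinction Graham mentions can be sketched with the numbers from this thread (0.08 vs 0.074 views per user); this is an illustration, not GrowthBook's actual engine code:

```python
# Per-user means from the experiment results in this thread.
mean_control = 0.074    # product views per user, control
mean_treatment = 0.080  # product views per user, treatment

# Absolute effect: raw uplift in the metric per user.
absolute_effect = mean_treatment - mean_control
# Relative effect: uplift expressed as a fraction of control.
relative_effect = absolute_effect / mean_control

print(f"absolute: {absolute_effect:+.4f} views/user")
print(f"relative: {relative_effect:+.1%}")
```

The toggle switches which of these two numbers is displayed; the underlying per-user means are the same either way.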
h
Thanks Graham for the quick reply. That makes sense, thanks. You can see I'm currently comparing 0.08 vs 0.074, which are very small numbers, whereas I'd rather compare 1302 vs 1025.
Is it possible, @fresh-football-47124, or intentional?
f
@helpful-application-7107 thoughts?
s
Hi Joe, I'm a data scientist at GrowthBook. When you say you would like to "compare 1302 vs 1025", are you saying that you want to know what was the sum of "Product Viewers Count" for treatment vs control during the experiment?
h
Thanks for reaching out, Luke. I think I'm suggesting that I use the delta between 1302 and 1025 for my test. I currently find myself dealing with some very small numbers because a tiny proportion of my users click from the homepage into this one specific product. So I thought that, rather than comparing the click-through rate, I should instead compare the clicks (same as product page views), because those are larger numbers with which to compare the test cohort versus the control. Sorry if I'm not making much sense. Please do guide me on best practice.
s
Hi Joe, thanks for your response! I understand what you are saying. I think that absolute inference, which Graham suggested above, is best practice. The number of users in control is bigger than the number of users in treatment, so your test would be unfairly biased towards control if you compared unadjusted sums from the groups. Comparing mean clicks across the two groups ensures an unbiased comparison. Does that make sense?
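Luke's point about bias from unequal group sizes can be shown with a toy example (all numbers invented; per-user behaviour is identical in both arms):

```python
# Unequal traffic split: control received more users than treatment.
n_control, sum_control = 16000, 1280      # 0.08 clicks per user
n_treatment, sum_treatment = 12000, 960   # also 0.08 clicks per user

mean_control = sum_control / n_control
mean_treatment = sum_treatment / n_treatment

# Raw sums make treatment look worse (960 < 1280) purely because it got
# less traffic; the per-user means reveal the groups behave identically.
print(sum_control, sum_treatment)    # 1280 960
print(mean_control, mean_treatment)  # 0.08 0.08
```

This is why the denominator matters: dividing by the number of exposed users removes the effect of the split itself.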
h
It does make sense, thanks Luke. Have I got it set up correctly here? I can't tell whether it'll ever reach stat sig with numbers of this magnitude.
IMG_4619.jpg
s
I'm not sure if you have it set up correctly - I would check into the multiple exposures warning (please see here for guidance). We can't promise statistical significance, but if we momentarily ignore the potential multiple exposures issue, you currently have a 10.7% chance to win for treatment, so you have evidence that control is probably better. Also, the longer you run your experiment, the more information you will glean.
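The "chance to win" figure Luke quotes comes from GrowthBook's Bayesian engine. A rough sketch of the underlying idea, for a simple binomial metric with uniform Beta priors (the prior, model, and counts here are assumptions for illustration, not GrowthBook's actual implementation):

```python
import random

def chance_to_win(conv_t, n_t, conv_c, n_c, draws=100_000, seed=0):
    """Estimate P(treatment rate > control rate) by sampling from
    Beta(1 + conversions, 1 + non-conversions) posteriors."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(draws):
        p_t = rng.betavariate(1 + conv_t, 1 + n_t - conv_t)
        p_c = rng.betavariate(1 + conv_c, 1 + n_c - conv_c)
        wins += p_t > p_c
    return wins / draws

# Invented counts where treatment converts less often than control:
# the chance to win comes out well below 50%.
print(chance_to_win(80, 1000, 100, 1000))
```

A low chance to win, as in the 10.7% above, means most of the posterior mass favours control; running the experiment longer tightens both posteriors.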
h
Thank you @steep-dog-1694 and @fresh-football-47124 . I really appreciate the advice from both of you.
s
You are welcome Joe!