b
One more question (might be of interest to others, so posting here): we have a ratio as the target metric in many of our tests. How do you handle these best? Pick the Revenue type, as that one is continuous? To clarify: it's a bit like Count, but it should be in relation to another count, e.g. clicks on Add to Cart after checking the price on a product. Any user will check prices multiple times during a test (for multiple products), and we want the ratio of number of add-to-carts / number of price checks. Just looking at the number of add-to-carts as a Count is not the right metric in this case.
f
We have it on our roadmap to add ratios as a metric type. @future-teacher-7046 would be able to help you with the right thing to do in the meantime.
b
thanks
f
Hi Christian, the only real workaround until we add proper ratio support is to add both the numerator and denominator as separate count metrics in an experiment. That way you'll at least be able to see all the raw data and make some inferences in certain cases. For example, if the numerator has a 95% chance to beat the baseline and the denominator is flat, then the ratio probably has a high chance of winning as well.
b
@future-teacher-7046 thanks for that idea. I guess we'll keep analyzing ratios in Jupyter Notebooks then 😉
f
we've planned to do it in Nov.
b
oh that's awesome! Thanks for the heads-up.
Not sure if I'm misunderstanding, but I don't think the option we need is listed there. We would need the ratio to be per user, so that all users count equally in the analysis. For example, we would look at add_to_cart (count) in relation to number of searches, per user. So if user 1 performs 10 searches and 1 add to cart, and user 2 does 100 searches and 1 add to cart, we want to use 0.1 and 0.01 as the values in our analysis.
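To sketch what I mean in Python (the user IDs, event counts, and field names here are made up, just to illustrate the per-user math):

```python
# Hypothetical per-user event counts, purely for illustration.
events = {
    "user_1": {"searches": 10, "add_to_cart": 1},
    "user_2": {"searches": 100, "add_to_cart": 1},
}

# One ratio value per user, so every user carries equal weight.
per_user_ratio = {
    uid: counts["add_to_cart"] / counts["searches"]
    for uid, counts in events.items()
    if counts["searches"] > 0  # ratio is undefined without any searches
}

print(per_user_ratio)  # {'user_1': 0.1, 'user_2': 0.01}
```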
f
Yeah, doing this per-user does seem to make more sense. So the value for each user would be ``"items added to cart" / "number of searches"``. We obviously need to exclude users who did not have any searches to avoid dividing by zero. What about users who did not add items? Would you want those counted as zeros or excluded completely? I can see both being useful.
b
Maybe it makes sense to connect this to a 'test entry metric'. If users are only considered part of the test when they do a search, then it makes sense to exclude them if they didn't search (not part of the test), and use a 0 value when they did search but didn't add an item.
f
@breezy-crowd-53224 I updated that GitHub issue above with an alternate approach that I think will be cleaner and more flexible. Let me know what you think.
b
@future-teacher-7046 that's awesome, thanks. I only think the query that is summing the values could/should be adjusted, if I understand correctly. It's still not on a per-user basis, as it sums everything? I think it's important to be able to do this at the user level, to avoid putting emphasis on 'heavy users'. (You could imagine running a test that is very positive for only a minority of heavy users; the question is what you want to do with that. For now, we want to treat every user equally, hence the per-user ratios.)
f
That aggregation formula is on a per-user level. Then, once you get a single value per user, we always take the overall average and standard deviation across users to do the statistical analysis.
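In other words, something like this (simplified to plain Python; the per-user values below are made up for illustration):

```python
import statistics

# Made-up per-user ratios, i.e. the output of the per-user
# aggregation step: one value per user.
per_user = [0.1, 0.01, 0.0, 0.25]

mean = statistics.mean(per_user)   # overall average across users -> 0.09
sd = statistics.stdev(per_user)    # sample standard deviation

print(mean, sd)
```

Because the aggregation already collapsed each user to a single value, heavy users don't get any extra weight in the mean or the standard deviation.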