# give-feedback
Hi @future-teacher-7046 and @fresh-football-47124, we have a question regarding the metrics in GrowthBook. We use metrics to track conversions, as described in the tool, and we also visualise a metric under the path “/metric/{metric id}“. We found the way the average is calculated a bit confusing when you compare the numbers there to the numbers in experiments. On the metric page, the average is calculated over all users that appear in the conversion table, unless you actively write a query that includes non-converting users. In an experiment using the same metric, under the path “/experiment/{experiment id}“, it is instead calculated over all users taking part in the experiment, which explicitly includes users that never convert. Hence the experiment’s metric average is lower than in the metric view. Is it a conscious decision not to use the set of all experiment participants from all experiments to calculate the graphs in the metrics view? Our two cents: this view would be very helpful if the metric averages could be calculated over all users automatically, especially since the metric is defined on a data source containing all users, converting or not. This would also make for much simpler metric queries. It would be the same out-of-the-box usefulness GrowthBook provides elsewhere when it comes to programmatically combining queries in a correct and transparent way - definitely a killer feature imho 😉 FYI @adventurous-dream-15065
Hi Peter, I think this is a good suggestion. Allowing people to define a "user population" source, so that we know what "all users" consists of, and then averaging over that population would be a great option for the metrics page.
> Is it a conscious decision to not use the set of all experimenters from all experiments to calculate the graphs in the metrics view?
Doing this by default would be more expensive than simply averaging the values in the conversion table, and the original intent of that view was to ensure that the metric was being defined correctly. We agree that adding more functionality to the metrics page would be valuable (e.g. the ability to merge with dimensions for dimensional slices, or to average over "all users").
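To make the discrepancy concrete, here is a minimal sketch (hypothetical numbers, not GrowthBook code) of the two averages being discussed: the metric page divides by the users present in the conversion table, while the experiment page divides by every exposed user, counting non-converters as zero.

```python
# Hypothetical example data (not from GrowthBook):
conversions = {"u1": 10.0, "u2": 30.0}        # users who appear in the conversion table
all_exposed_users = ["u1", "u2", "u3", "u4"]  # every user assigned to the experiment

total = sum(conversions.values())  # 40.0

# Metric page: average over users in the conversion table only.
metric_page_avg = total / len(conversions)       # 40 / 2 = 20.0

# Experiment page: average over all exposed users; non-converters count as 0.
experiment_avg = total / len(all_exposed_users)  # 40 / 4 = 10.0

print(metric_page_avg, experiment_avg)
```

With half the exposed users never converting, the experiment view shows an average half the size of the metric view, which is exactly the confusion described above.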
Thanks for the answer! My best guess for your decision was efficiency and debugging purposes, too 😉 We found that people looking at the metrics page get confused by the choice of population. If implementing a population toggle would take a lot of resources, it would already help to tell users, e.g. in the UI, that these are not necessarily the metric values they see in an experiment. I didn’t find a hint about this in the documentation or the GUI. We have also observed that when people see the name of their favourite KPI next to a graph, they tend to jump straight to using that graph in their argumentation and reasoning.
That makes sense! Thanks for your feedback 🙂
(and sorry for any confusion)