Hey guys, a question and a request:
1. Question: We’re finding that we want to test the effect of a feature on new users and existing users as two separate groups, since new users would never know what it was like before, while existing users have to adapt to the change. What’s the best way to set this up in terms of feature flagging and experimentation? Right now, I’m creating two separate feature flags (one for new users, one for existing users), creating separate override rules for each as part of setting up the A/B test, and then creating two experiments in the experiments list (rough sketch of the client side below). This feels hacky; is there a better way to do this?
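For context, here’s roughly what the current two-flag setup looks like from the client side. This is just a minimal sketch assuming the JavaScript SDK; the isNewUser attribute, the two flag keys, and the client key are placeholders I made up:

```ts
import { GrowthBook } from "@growthbook/growthbook";

// Placeholder user record; in reality this comes from our auth/session layer.
const user = { id: "u_123", createdAt: new Date("2024-06-01") };
const featureLaunchDate = new Date("2024-05-15"); // when the change shipped

const gb = new GrowthBook({
  apiHost: "https://cdn.growthbook.io",
  clientKey: "sdk-abc123", // placeholder key
  attributes: {
    id: user.id,
    // "isNewUser" is a made-up attribute; each flag's override rule
    // targets one value of it so the two cohorts never overlap.
    isNewUser: user.createdAt > featureLaunchDate,
  },
});

await gb.init(); // loadFeatures() on older SDK versions

// Two separate flags today, one per cohort:
if (gb.isOn("new-feature-new-users")) {
  // render the new experience for post-launch signups
}
if (gb.isOn("new-feature-existing-users")) {
  // render the new experience for pre-launch users
}
```

The override rule on each flag targets one value of isNewUser, so the cohorts stay disjoint, but duplicating the flag, the rules, and the experiment is exactly what I’d love to avoid.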
2. Request: In the experiment results tab, in the metrics table, can we see what type each metric is (e.g. binomial, count, duration)? Especially when we turn on “converted only” for a count metric, it means something pretty different than it does for a binomial metric, and my non-technical team members may get confused.