# give-feedback
b:
Btw, unrelated to the above, but it'd be good if metrics could have additional conditions/variations added inline. For example, we have a simple `course_page_viewed` event, which has `course_key`, `chapter_key` and `page_id` fields. If we're testing whether a content change helps people progress further, we might want to see whether more people make it to the end of the course, or to the next chapter, or whatever. Currently we have to make an individual metric for every single thing we might want to look at. It'd be better if we could have a base `course_page_viewed` metric, maybe with some minimal initial filtering, and then optionally filter on extra fields. Or, as a rough equivalent, we could define a metric with specific parameters that have to be filled in when selecting it (not as powerful, but maybe more compatible with e.g. SQL-backed metrics).
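To make the duplication concrete: under the current setup, each question needs its own metric. The sketch below is purely illustrative; the `events` table and its `user_id`, `event` and `timestamp` columns, and the specific course/chapter values, are assumed for the example rather than taken from our actual schema.

```sql
-- Illustrative sketch only: assumed events table with user_id, event and timestamp,
-- plus the course_page_viewed fields mentioned above.

-- Metric A: "reached the end of the course"
SELECT user_id, timestamp
FROM events
WHERE event = 'course_page_viewed'
  AND course_key = 'intro-to-stats'
  AND chapter_key = 'final-chapter';

-- Metric B: "reached chapter 3" (a near-duplicate of Metric A)
SELECT user_id, timestamp
FROM events
WHERE event = 'course_page_viewed'
  AND course_key = 'intro-to-stats'
  AND chapter_key = 'chapter-3';
```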
f:
We've talked about ways to override metric settings within an experiment (e.g. change the conversion window). I can see us also letting you modify the SQL or fill in template variables.
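As a sketch of how the template-variable idea could work for a SQL-backed metric: a single base query with a placeholder that gets filled in when the metric is selected for an experiment. The `{{chapter_key}}` templating syntax and the table/column names are assumptions for illustration, not an existing feature.

```sql
-- Hypothetical base metric: one query, one template variable.
-- '{{chapter_key}}' would be supplied when the metric is added to an experiment,
-- e.g. 'final-chapter' for "finished the course" or 'chapter-3' for "reached chapter 3".
SELECT user_id, timestamp
FROM events
WHERE event = 'course_page_viewed'
  AND course_key = 'intro-to-stats'
  AND chapter_key = '{{chapter_key}}';
```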