# announcements
Another question (and sorry if I’m spamming - new around here!) - is there a best practice for running a test where something other than a user determines the exposure? In my case, we are testing variations of a widget, but a single user may see a different variation per session. I want to test the performance of each variation (based on some conversion metric). My initial thinking is that I can make the session_id the user_id, so each session is bucketed instead of each user.
you can use session id as the randomization unit
my concern about this is: if a user saw two variations, how would you know which variation caused the change in the metric?
practically, you can add any attribute as an 'identifier' and then set that attribute as the hashAttribute ("Assign value based on attribute") value when setting up the feature flag experiment
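to illustrate the idea, here's a minimal sketch of session-level bucketing: hash the session id (rather than the user id) together with the experiment key, so each session lands deterministically in one variation. note this is just a toy hash scheme to show the concept, not GrowthBook's actual bucketing implementation - the `assign_variation` helper and its parameters are made up for this example

```python
import hashlib

def assign_variation(session_id: str, experiment_key: str, num_variations: int = 2) -> int:
    """Deterministically bucket a session into a variation.

    Hashing session_id (instead of user_id) makes the session the
    randomization unit: the same session always sees the same variation,
    but a user's next session may be bucketed differently.
    """
    digest = hashlib.sha256(f"{experiment_key}:{session_id}".encode()).hexdigest()
    return int(digest, 16) % num_variations

# The same session id always resolves to the same variation
variant = assign_variation("sess-abc-123", "widget-test")
print(variant)
```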