# ask-questions
a
Hi team, I want to know how to see metrics from different datasources in the same experiment. In a big experiment, the client side and server side may each save their own conversion event data to a different datasource.
w
Hi. As of right now I don't think we allow that. We expect the metrics to be in the same datasource as the experiment assignment. Which datasource is the "Experiment Viewed" event from the trackingCallback going to?
a
Thx 4 ur reply!
So, even though the client side and server side may select from the same experiment assignment table, they cannot analyze their own metrics in the same experiment for now, right? If so, any suggestions to resolve the problem?
w
If the "Experiment Viewed" was being sent to both datasources then you could have two different experiments with each having the same tracking key and analyzing their own metrics. I don't know how feasible that is for your situation.
a
If we do this, we will have 2 experiments. Do we have a guarantee that the same userId is always bucketed into the same feature flag value?
w
If they are both using the same feature key, then yes.
a
ok
w
In a nutshell, GrowthBook hashes together the user id and the experiment trackingKey, which produces a decimal between 0 and 1. Each variation is assigned a range of values (e.g. 0 to 0.5 and 0.5 to 1.0) and the user is assigned to whichever one their hash falls into.
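For illustration, here is a minimal TypeScript sketch of that kind of deterministic bucketing. It is not GrowthBook's actual hashing code, just a demonstration of why the same userId plus the same trackingKey always lands in the same variation, on the client or the server:

```ts
// Simplified sketch of deterministic bucketing, NOT GrowthBook's exact algorithm.
// FNV-1a 32-bit hash of userId + trackingKey, mapped into [0, 1).
function fnv1a32(str: string): number {
  let hash = 0x811c9dc5;
  for (let i = 0; i < str.length; i++) {
    hash ^= str.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193) >>> 0;
  }
  return hash;
}

// Same inputs always produce the same decimal, so the same user gets the
// same variation for a given tracking key, wherever the SDK runs.
function getBucket(userId: string, trackingKey: string): number {
  return (fnv1a32(userId + trackingKey) % 1000) / 1000;
}

// ranges e.g. [[0, 0.5], [0.5, 1.0]] for a 50/50 split
function assignVariation(bucket: number, ranges: [number, number][]): number {
  return ranges.findIndex(([start, end]) => bucket >= start && bucket < end);
}

const bucket = getBucket("user-123", "my-experiment-key");
const variation = assignVariation(bucket, [[0, 0.5], [0.5, 1.0]]);
console.log({ bucket, variation });
```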
a
ok, I see. As long as the two experiments don't use different hash salts, we are good.
w
correct
a
When exactly is the trackingCallback function called? Immediately when I set attributes for a GrowthBook instance, or later when I actually call getFeatures() or isOn()? I mean, does the SDK decide which variation the user will enter immediately after the attributes are set?
w
The latter.
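To make that concrete, here is a sketch using the JavaScript/TypeScript SDK (feature key and attribute values are made up): setting attributes alone does not fire the trackingCallback; only evaluating the feature does.

```ts
import { GrowthBook } from "@growthbook/growthbook";

const gb = new GrowthBook({
  apiHost: "https://cdn.growthbook.io",
  clientKey: "sdk-abc123", // placeholder
  trackingCallback: (experiment, result) => {
    // Only fires when a feature evaluation actually runs an experiment
    console.log("assigned", experiment.key, result.variationId);
  },
});

await gb.loadFeatures();

// No trackingCallback here: attributes are just stored on the instance.
gb.setAttributes({ id: "user-123" });

// The experiment is evaluated (and the trackingCallback fired) at this point.
if (gb.isOn("my-feature")) {
  // ... variation-specific code
}
```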
a
One more thing: I have set up two experiment rules with the same tracking key (experiment key) for one feature. So now, when the client side and server side get the same feature, they will get the same value from the first experiment rule, right? The other experiment rule with the same experiment key is just used for analyzing the metrics from another datasource, right? I can even set different variations for the second experiment rule (image below) because it will never be matched, right?
the two experiment rules on the same feature have the same tracking key👆
Is this your suggestion to my initial problem? Did I get it right?
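If it helps, here is roughly what the SDK feature payload for that setup might look like. This is a hand-written sketch and the exact field names are my assumption; the point is only that rules are evaluated in order, so the first experiment rule always handles the user and the second one (same tracking key) is never reached.

```ts
// Hypothetical feature definition with two experiment rules sharing one tracking key.
const features = {
  "my-feature": {
    defaultValue: false,
    rules: [
      // Rule 1: evaluated first with 100% coverage, so it always assigns the user.
      { key: "my-experiment-key", variations: [false, true], weights: [0.5, 0.5], coverage: 1, hashAttribute: "id" },
      // Rule 2: same tracking key; it only exists so the second GrowthBook experiment
      // (pointing at the other datasource) has something to analyze. Never evaluated.
      { key: "my-experiment-key", variations: [false, true], weights: [0.5, 0.5], coverage: 1, hashAttribute: "id" },
    ],
  },
};
```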
w
Yes, this is along the lines of what I was thinking. However, it does require your trackingCallback function to send the assignment to both your front-end and back-end, and I'm not sure how easy that would be to do. I suppose the front end could always call the back end to send the request off, but the back end might not easily be able to do likewise.
Also, since this is a bit of an unorthodox setup, I'll run it by some colleagues later today to make sure I'm not suggesting anything incorrect.
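For example, a front-end trackingCallback along these lines could record the assignment in both places; the analytics URL and the /api/experiment-viewed endpoint below are hypothetical, just to show the shape of the idea:

```ts
import { GrowthBook } from "@growthbook/growthbook";

const userId = "user-123"; // however you identify users

const gb = new GrowthBook({
  attributes: { id: userId },
  trackingCallback: (experiment, result) => {
    const payload = {
      experimentId: experiment.key,
      variationId: result.variationId,
      userId,
    };

    // 1. Send to the tool feeding the front-end datasource (e.g. GA4 -> BigQuery).
    //    Replace with your real analytics call.
    fetch("https://analytics.example.com/collect", {
      method: "POST",
      body: JSON.stringify({ event: "experiment_viewed", ...payload }),
    });

    // 2. Also tell the back-end so it can write the same assignment to ClickHouse.
    //    "/api/experiment-viewed" is a made-up endpoint you would implement yourself.
    fetch("/api/experiment-viewed", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(payload),
    });
  },
});
```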
a
Ok, thx!
By the way, in my situation, our front-end and back-end each implement their own trackingCallback separately and save each experimentResult to a different datasource (GA/BigQuery and ClickHouse). But if they are running an experiment on the same feature, the experimentResult they save is actually identical, so I just wonder whether GrowthBook could support analyzing different metrics from different datasources in the future, as long as they use the same base attributes and can select the same experiment assignment records from their datasources?😃
w
Yes, others confirm what I said above. With your setup I would just caution that, for the analysis to work correctly, you would have to make sure the trackingCallback gets called at least once on both the frontend and the backend for each user in the experiment. Otherwise the two experiments won't be looking at the same set of users.
It only gets called when you actually try and evaluate the feature. In many experiments you may only be doing that in one place or another.