# ask-questions
Vincent:
Hi, we are using BigQuery with a custom event source and are trying to set up the Experiment Assignment Queries for the data source. Have some questions about that process:
1. We're confused about how to get the `viewed_experiment` table in BigQuery. Are the records in this table supposed to be events triggered by the SDK's `trackingCallback`?
2. We want to add Dimensions to this table (like the GrowthBook UI suggests). If Dimensions = Targeting Attributes, how do we get those into the same event that populates the `viewed_experiment` table? It doesn't seem like the `trackingCallback` returns the Targeting Attributes, so what's the best practice here?
Graham:
Hi Vincent:
1. Yes, you will use your event tracker to add that event, and it will end up in BigQuery (like GA4 does).
2. Dimensions can be the targeting attributes, but keep in mind that the attributes used in targeting may not be available in the BigQuery data warehouse automatically. Depending on your event tracker, you can append additional parameters to that exposure event. You don't have to add dimensions from the assignment/exposure query; you can also add them through the 'dimensions' page. Best practice depends on your data and whether those attributes are already in your data warehouse. If they are not, getting the dimensions from the assignment/exposure query is more efficient.
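For readers following this thread, here is a minimal sketch of appending targeting attributes to the exposure event with the GrowthBook JavaScript SDK. The `sendExposureEvent` helper, the collector URL, the attribute values, and the `clientKey` are placeholders for whatever custom pipeline loads rows into the `viewed_experiment` table in BigQuery; they are not part of the SDK.

```typescript
import { GrowthBook } from "@growthbook/growthbook";

// Targeting attributes for the current user (placeholder values)
const userAttributes = {
  id: "user-123",
  country: "US",
  plan: "pro",
};

// Hypothetical helper that forwards an event to your custom pipeline,
// which ultimately writes a row to the viewed_experiment table in BigQuery.
async function sendExposureEvent(payload: Record<string, unknown>) {
  await fetch("https://events.example.com/collect", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ event: "viewed_experiment", ...payload }),
  });
}

const gb = new GrowthBook({
  apiHost: "https://cdn.growthbook.io",
  clientKey: "sdk-abc123", // placeholder
  attributes: userAttributes,
  // Fired by the SDK whenever a user is assigned to an experiment variation
  trackingCallback: (experiment, result) => {
    void sendExposureEvent({
      user_id: userAttributes.id,
      experiment_id: experiment.key,
      variation_id: result.key,
      timestamp: new Date().toISOString(),
      // Append targeting attributes so they land in the same row
      // and are usable as dimensions known at assignment time
      country: userAttributes.country,
      plan: userAttributes.plan,
    });
  },
});
```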
Vincent:
Hi Graham, to clarify:
1. To get the `viewed_experiment` table in BigQuery, is this the flow? `trackingCallback` from the GrowthBook SDK --> our custom event source appends additional parameters --> send this exposure event to BigQuery?
2. The reason I'd want at least the Targeting Attribute dimensions to be in the assignment/exposure event is to guarantee that we know what those values were at the time of assignment. Adding dimensions from the 'dimensions' page wouldn't guarantee that. Is that accurate?
Graham:
1. Yes. Are you using an event tracker or your own custom event tracking?
2. No, it is not required to do it that way. For example, you could bucket users in a test based on an anonymous id and only later find out about some attribute you care about, like location or purchase intent. When you find out about this information, you also send it to your event tracker. As long as that data ends up in your warehouse somewhere, you can use GrowthBook to make that value a dimension, and we will join it back to the original exposed user. The reason you would do it on exposure is that if you're sending an event anyway, you can append things like browser, location, or whatever else is available. Adding it this way means the experiment query has one less join, but this doesn't generally affect performance.
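As an illustration of the "join it back later" approach described above, the sketch below sends a separate attribute event keyed on the same anonymous id once the attribute becomes known; the resulting table in the warehouse can then back a GrowthBook dimension that joins to the exposed users. The event name, helper, and endpoint are assumptions for a custom pipeline, not GrowthBook APIs.

```typescript
// Hypothetical helper: sends any event to your custom pipeline / BigQuery.
async function sendEvent(event: string, payload: Record<string, unknown>) {
  await fetch("https://events.example.com/collect", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ event, ...payload }),
  });
}

// Later, once an attribute (e.g. purchase intent) becomes known,
// emit it keyed on the same anonymous id used to bucket the user.
// In the warehouse this lands in its own table, and a dimension
// query can join it back to the original exposure rows.
void sendEvent("user_attribute", {
  anonymous_id: "anon-456", // same id used at assignment
  attribute: "purchase_intent",
  value: "high",
  timestamp: new Date().toISOString(),
});
```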
Vincent:
We're not using anything like Segment or RudderStack, does that mean we're using "custom event tracking"? Just want to make sure we're aligned on terminology here.
Graham:
yes
event pipelining can be tricky to set up well, so we recommend using a tool like those, or building your own pipeline that batches events and sends them asynchronously
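If you do build your own tracking rather than using something like Segment or RudderStack, a minimal batched, asynchronous sender might look like the sketch below. The flush interval, batch size, and collector URL are arbitrary placeholder choices, not recommendations from this thread.

```typescript
// Minimal batched, async event sender (a sketch, not production-ready).
type TrackedEvent = { event: string; payload: Record<string, unknown> };

const queue: TrackedEvent[] = [];
const MAX_BATCH = 50;           // flush when the buffer reaches this size
const FLUSH_INTERVAL_MS = 5000; // or at least this often

function track(event: string, payload: Record<string, unknown>) {
  queue.push({ event, payload });
  if (queue.length >= MAX_BATCH) void flush();
}

async function flush() {
  if (queue.length === 0) return;
  const batch = queue.splice(0, queue.length); // take everything buffered
  try {
    await fetch("https://events.example.com/batch", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(batch),
    });
  } catch {
    queue.unshift(...batch); // naive retry: put the batch back on failure
  }
}

setInterval(() => void flush(), FLUSH_INTERVAL_MS);
```

Inside `trackingCallback`, you would then call something like `track("viewed_experiment", {...})` rather than firing a network request per exposure.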
Vincent:
"The reason you would do it on exposure, is if you’re sending an event anyway" --> isn't it required to send an even on exposure? The event being triggered by
trackingCallback
from the SDK
Graham:
yes
you’re sending that one event, so other user info can tag along for ‘free’
Vincent:
So we send the user's Targeting Attributes to the SDK, the SDK's `trackingCallback` tells us what experiments a user is in, and from there we can append the user's Targeting Attributes to the event and send it on to BigQuery?
If a user is in multiple experiments, will the SDK fire one `trackingCallback` for each experiment they're in?