# ask-questions
t
Hi all, we want to make sure we're tracking events/Metrics that happen within the duration of our experiments. Making a new Metric and editing the SQL shows some existing logic where I think GrowthBook is pre-building in the handling to account for this. The data team and I are wondering if _table_suffix is a parameter of GB, or is this something we need to set up in our own data warehouse for the tables we're querying? cc @refined-addition-9216
b
Hi Matt - _TABLE_SUFFIX isn't a GrowthBook-specific parameter. This is most common with GA4/BigQuery Data Sources. Typically, GA4 stores events in Intraday tables, so most often the FROM clause in the query is wildcarded (e.g. it queries FROM events_*), and _TABLE_SUFFIX allows you to filter based on your metric's conversion window specifications.
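For reference, the default GA4/BigQuery metric SQL usually looks roughly like the sketch below (project, dataset, and event names here are just placeholders for your own export):
-- sketch only: replace the project/dataset/event names with your own
SELECT
  user_pseudo_id AS anonymous_id,
  TIMESTAMP_MICROS(event_timestamp) AS timestamp
FROM
  `my-project.analytics_123456.events_*`
WHERE
  event_name = 'purchase'
  -- the template variables below are filled in by GrowthBook when the query runs
  AND _TABLE_SUFFIX BETWEEN '{{date startDateISO "yyyyMMdd"}}' AND '{{date endDateISO "yyyyMMdd"}}'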
👍 1
r
@billions-xylophone-11752 and how do I interpret the latter half of the conditional text? Is this supposed to be a templating injection? Because we need to be able to see how to get the event within the duration of when the experiment is running… is the flow supposed to be, essentially: when the experiment begins, we just make it greater than or equal to that date for that event?
@billions-xylophone-11752 so if I flip the query to have this: ((timestamp BETWEEN '2023-12-05' AND '2023-12-07')) — that runs — so type-issue wise… are startDateISO and endDateISO parameters that are defined somewhere?
b
Hey, Daniel! You are correct that those are template variables, and they're defined by the Experiment phase. So when the experiment analysis queries run, it will replace the template placeholder with the experiment's phase dates, to ensure that you're only looking at metrics within the experiment start/end date. If you need more control, you can configure the Conversion Delay for a particular metric. More details on this can be found here.
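As a rough illustration using the dates from your example above: a templated filter like
_TABLE_SUFFIX BETWEEN '{{date startDateISO "yyyyMMdd"}}' AND '{{date endDateISO "yyyyMMdd"}}'
would render to something like
_TABLE_SUFFIX BETWEEN '20231205' AND '20231207'
for an experiment phase running from 2023-12-05 to 2023-12-07.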
t
Are "experiment analysis queries" the same as Metric queries? Or do you mean 'Experiment Assignment Queries'?
r
how would you modify this for a query where our date filter column is a date type (our equivalent of table_suffix), when filtering against the string-literal ISO param?
b
@tall-branch-42668 Sorry for the lack of clarity - when building an experiment analysis, we run a number of queries, including queries for the metrics and the experiment assignment queries. From the experiment analysis tab, you can click the "3 dot menu" and select "Show Queries" where you can see the actual queries we're running to provide the experiment analysis. And this is where you can see the template variables being replaced with the actual values.
@refined-addition-9216 Sorry - I don't fully understand your last question. Can you confirm you're using Google Analytics 4 with BigQuery as your data source? If so, you shouldn't need to customize that part of the query.
t
We are using GBQ and GA4
Anyone at GB free to hop on a quick Zoom to sort this out? @billions-xylophone-11752 @fresh-football-47124
b
@tall-branch-42668 I can't at the moment, unfortunately. If you outline what issue you're trying to solve, I can dig in a bit later today/first thing tomorrow. I'm just not exactly sure what the issue is.
t
We want to ensure the results of the experiment only include Metrics built for events we track, like trial_created or purchase, that happen during the duration of the experiment we're running. Right now we're not confident that the queries we have in place are doing this. We think we're close to understanding this, however the '{{date startDateISO "yyyyMMdd"}}' and '{{date endDateISO "yyyyMMdd"}}' parts are unclear as to where startDateISO and endDateISO are getting defined. When we keep those in place without editing them, we get an error in the Rendered SQL (screenshots attached). The startDateISO seems to grab today, and endDateISO grabs a date 2 days from now - why is this happening? Additionally, per the above, we don't have _TABLE_SUFFIX in our GA4 setup, so we had assumed that potentially using timestamp in lieu of this would work - which does not seem to be the case. But in lieu of _TABLE_SUFFIX, how can we set this up to only display results inside the duration of the test? @refined-addition-9216 Is this all simply because we don't have timestamp annotated as "yyyyMMdd"?
Screenshot 2023-12-05 at 4.44.36 PM.png,Screenshot 2023-12-05 at 4.44.51 PM.png
h
First things first:
"we don't have _TABLE_SUFFIX in our GA4 setup"
Just checking, is this because you have some custom export to BQ that has a different schema than the one we assume for most people with GA4?
t
That's a question for our Head of Data @refined-addition-9216, I'm unsure
h
I think you want to just have
timestamp BETWEEN '{{ startDate }}' AND '{{ endDate}}'
t
Should I include ISO in those?
h
If timestamp is DATETIME or TIMESTAMP, then you don't want to use the formatting we use for most people's GA4 schemas, where we need to filter the table_suffix down to ensure we aren't scanning too much data. The formatting for the string literal needs to be more like yyyy-MM-dd and come with the time as well. https://docs.growthbook.io/app/metrics#sql-templates
Yes, use startDateISO and endDateISO
"We want to ensure the results of the experiment only include Metrics built for events we track, like trial_created or purchase, that happen during the duration of the experiment we're running."
Also, to clarify, this particular filter we're talking about here only matters for improving query performance, to make sure we are only scanning relevant data and hopefully filtering on a column you have partitioning set up on. We handle the logic to make sure we only track users' metrics for the {exposure -> end of experiment} time window internally in GrowthBook.
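To sketch what that could look like for something like your trial_created metric (the table and column names below are purely illustrative, not your actual schema):
-- sketch only: swap in your own table and columns
SELECT
  user_pseudo_id AS anonymous_id,
  TIMESTAMP_MICROS(event_timestamp) AS timestamp
FROM
  `my-project.my_dataset.my_events`
WHERE
  event_name = 'trial_created'
  -- BigQuery won't let you reference the SELECT alias here, so filter on the raw column
  AND TIMESTAMP_MICROS(event_timestamp) BETWEEN '{{startDateISO}}' AND '{{endDateISO}}'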
t
Good! That's what we're after. An update: the edited query in the trial_created Metric I've set up is now fixed and seems happy with reducing the line to
timestamp BETWEEN '{{startDateISO}}' AND '{{endDateISO}}'
However, the results of the experiment we have this Metric set up on did not change upon updating the data. My next question is: do these start and end dates need to also exist in our Experiment Assignment Queries? We have some pretty basic queries set for the 'Experiment Assignment Queries'. For 'Anonymous Visitors', we have:
SELECT
  user_pseudo_id,
  TIMESTAMP_MICROS(event_timestamp) AS timestamp,
  experiment_name AS experiment_id,
  experiment_variation AS variation_id,
  geo.country AS country,
  traffic_source.source AS source,
  traffic_source.medium AS medium,
  device.category AS device,
  device.web_info.browser AS browser,
  device.operating_system AS os
FROM
  `puck-data-platform.analytics.int_fct_ga_events`
WHERE
  event_name = 'experiment-viewed'
  AND user_pseudo_id IS NOT NULL
For 'Logged-in Users', we have:
SELECT
  user_pseudo_id,
  TIMESTAMP_MICROS(event_timestamp) AS timestamp,
  experiment_name AS experiment_id,
  experiment_variation AS variation_id,
  geo.country AS country,
  traffic_source.source AS source,
  traffic_source.medium AS medium,
  device.category AS device,
  device.web_info.browser AS browser,
  device.operating_system AS os
FROM
  `puck-data-platform.analytics.int_fct_ga_events`
WHERE
  event_name = 'experiment-viewed'
  AND user_id IS NOT NULL
However, since we had set these up in GB before diving as deep as we have, we had commented out the templated (_TABLE_SUFFIX BETWEEN '{{date startDateISO "yyyyMMdd"}}' etc etc etc. So because this is all novel to us, the question is here again: do we need the startDateISO and endDateISO in our Assignment Queries as well?
h
You don't need them as we'll handle date filtering downstream of these queries. Furthermore, we even try to hit your partitions with an outer CTE, so adding them may do nothing. However, if you want to be safe and do your best to ensure BQ hits your partitions, you can also add them to your Assignment Queries.
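If you do want to add them, it would just be one extra condition in the WHERE clause of the assignment queries you pasted above, roughly like this (sketch only, assuming event_timestamp is the column your partitioning is based on):
-- optional: GrowthBook already filters by phase dates downstream, this just helps BigQuery prune data
AND TIMESTAMP_MICROS(event_timestamp) BETWEEN '{{startDateISO}}' AND '{{endDateISO}}'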
👍 1
t
When I go to set up a Visual Editor change for a new experiment I'm receiving this error: 'load-viz-changeset-failed' - any ideas how to remedy? I had set a new Admin Secret Key for the Chrome extension to implement changes with the Visual Editor but now I'm running into this
s
Hi @tall-branch-42668 I can help w/ the visual editor. Can you confirm what API host you're using in the visual editor options dialog?
t
Yeah the host I entered is:
<https://cdn.growthbook.io>
From our SDK Config -> SDK Connections
s
Ah so that's the issue - it should be pointing to your API host and not the CDN ... Do you know what that is? I can help you find it if needed
(If you're on cloud, it's <https://api.growthbook.io>)
t
Our API Host shows as CDN, should that change?
s
Oh yeah, that can be confusing. The SDKs use a different endpoint than our Visual Editor for caching reasons. That should probably be re-worded
t
Alrighty that worked, thank you!
s
Great no problem! One question I have - did you experience an error when trying to open the Visual Editor the first time that led you to entering the API credentials manually? It's intended to set those values for you automatically on first load.
If there was one, that would be helpful to know so we can investigate
t
Def re-word, for non-technical laymen like me it'd be helpful 🙃
👍🏾 1
I installed the Chrome extension, then went back to the Experiment page, clicked Open Visual Editor, it loaded my URL, and then I clicked the extension in my pinned extensions bar, which prompted me for the credentials
s
I see, so the visual editor did not show up immediately when you loaded the page, is that right?
t
Tbh I can't totally recall, it may have popped up and prompted me with a button to enter those things, which opened a new tab for the extension <chrome://extensions/?etcetc> where I entered that info
s
Ok, no worries. Would you mind sharing the URL that you were trying to open the visual editor with? I can investigate on my side. Really appreciate your help
t
s
Got it, thanks!
t
Additionally, we're trying to implement a visual editor change on a specific JS expression on our site: our paywall (not tied to any URL). My dev says:
I don't see a way to only run this test when the anon first paywall is visible. To do that, we need to be able to target the test to the result of a JS expression. I can only find ways to target based on URLs and attributes. If you see the same thing, I guess we need to check with GB but it doesn't seem possible from what I see so far
Is there another implementation route we can take for making this visual change for this test?
s
I think you'd need to set the result of the JS expression as an attribute to the GB SDK, visual experiment or not, in order to bucket the user into the right variation. Let me know if I misunderstood the scenario
Once you do that, you can make the visual changes you need on the variation that corresponds with that attribute value.
t
Yep that's exactly what we did and it works
Thanks!
🙌🏾 1
Another question now that we've got an experiment live on our production site: previewing the variation does not show the visual change from our JS, but running the same code in the console (where that variation should apply) does work as expected. So why does previewing the variation not work? (It worked when we tested this in Staging.) We're also unable to see the variation through several incognito-mode sessions that should show it 50% of the time.
Bug maybe?
This is resolved, chalked up to user error, thank you
🙌 1
🙌🏾 1
I have an additional question related to setting this up. For some reason, our results are returning a single variation_id as [object Object], when I've set 2 variation_ids for the experiment as bucket-a (for my Control) and bucket-b (for my Variant). Any idea what might be causing this? The experiment runs as designed on the site; we get control/variant to display properly. But something odd is happening with sending variation_ids to GA. This is happening with this particular experiment using GB's Visual Editor. We did not have this issue when setting up our A/A test that used GB's Feature Flags
f
this is the JS SDK?
t
Yes it is (also forgot to attach screenshot of error from Experiment results, now attached)
My dev speculates:
it may be that there is an error trying to run the experiment, and the tracking callback is sending the error object instead of a variation name in that case
Any thoughts?
h
It indeed looks like for some reason the trackingCallback is not sending the variation names as expected.
t
How do we resolve this?
h
I think we'd have to debug what your tracking callback looks like. Are you able to get the person who is in charge of that to share the information?
cc: @swift-helmet-3648 in case you've seen this before.
s
I haven't 😞 Sounds interesting though
t
Screenshot 2023-12-12 at 5.38.08 PM.png,Screenshot 2023-12-12 at 5.38.21 PM.png
Let me know if this is not what you're looking for
h
Ah, you're returning result.value for the variation id, but that's going to return the actual value rather than the variation key. Like this example shows, you want to submit result.key to gtag, not result.value
t
We had made the change from key to value a month ago based on a Zoom call we had with @fresh-football-47124... are we sure that's the change?
Do these vary based on type of test?
Also we had/have an A/A set up with the code screenshotted above that returned values correctly
h
No, it should be one implementation here that is the same across tests, but using value may have just "happened" to work before if the value for each feature was the same as the id for each feature. But I'm not 100% confident on what @fresh-football-47124 saw in that call.
t
Alright we're swapping back to the values we had originally, per your suggestion. Will let you know how that goes
We've made the change, however I'm still receiving the same error from GB in the results. Why are we only getting 1 id returned?
s
I noticed a console.log statement in your tracking callback. Could you copy/paste the output of that from your browser console?
f
you may still have that error unless you change the experiment name or date limit the experiment so that just new events are considered - or just look at the most recent rows and make sure the variation id is being tracked correctly - it's not going to update past data.
t
Would adding a new Phase accomplish this?
h
Yes, I think in this case just creating a new phase should resolve this issue.
t
Do I need to reset all of my Visual Editor changes? When I click "Add New Phase" there's a warning that reads "Starting a new phase will immediately affect all linked Feature Flags and Visual Changes" - how will these be affected?
h
No you shouldn't need to reset your changes. It should be fine.
t
Setting a new Phase seems to have cleared the issue
Does GB create a param or event around Phases and send it to GA? I want to make sure that if I look at our users in GA who've been exposed to this experiment, I can see which Phase they were a part of. For this particular instance, I'll know based on the different variation_ids, but for future tests we may change something that's not so obvious
h
Phases just rely on date to filter results. So we don't send phase information to the warehouse. The only way to reliably send this information is to change the experiment key altogether (often best done by creating an entirely new experiment, potentially linking the same features/visual changes).
t
Got it. How can I link the same features/visual changes from other experiments in GB? Or do you just mean manually replicating the same steps to set them up for the newly created experiment?
We're getting the proper Callback in the console.log() with this change - thanks all!
👍 1
A feature we had as part of the Google Optimize Visual Editor was a stock banner they provided that our marketing team really liked to implement when running promos. Does the GB Visual Editor allow the inclusion of something similar: implementing a block of content (like a promo banner) that doesn't currently exist on the site? Most of what I can see with the Visual Editor tool is the ability to alter what's currently present
f
we don't offer that currently
t
Got it thank you
A follow-up question around the different variation ids returned yesterday: we now have this warning displaying - is this because we're returning results from Phase 1 and Phase 2?
h
Yeah, it looks like some of the tracking callbacks from before your change are still sneaking through in the phase filter.
You could either (a) bump the phase start date forward and hope that that's enough to drop users that for some reason still fired a tracking callback in the date range you're analyzing, or (b) edit that Experiment Assignment Query to just drop any rows where the variation ID has a { in it (e.g. just drop rows that look like JSON). You can do that by adding something like the following to your experiment assignment query's WHERE clause:
experiment_variation NOT LIKE '{%}'
That will drop any rows where the experiment variation looks like JSON instead of just a string.
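Combined with the assignment query you shared earlier, the WHERE clause would then look roughly like this (sketch only):
WHERE
  event_name = 'experiment-viewed'
  AND user_pseudo_id IS NOT NULL
  -- drop rows where the callback sent an error object instead of a variation key
  AND experiment_variation NOT LIKE '{%}'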
Sorry for all this difficulty getting set up! Some bumps in the process here have sent us wayward but hopefully we're close.
t
No problem, good to work out all of these bumps! I was also wondering how the Conversion Window Hours affect our experiments and results, particularly in this case. I have one of our Metrics set to 96hrs for this, does that mean that the trackingCallback would return the original `result.value`s from Phase 1 for exposed users for that time duration? As in, should Phase 2 start 96hrs later to make sure that we don't get the incorrect variation ID returns from Phase 1? Additionally, somewhat separately, if we're running an experiment and monitoring Metrics for something broad like trial_conversions or purchases, we don't always have customers convert on Day 0, sometimes they take 30 days to convert. If we ran a test for 30 days, how does GB monitor results of users who experienced the experiment on, say, Day 28 but who then converted on, say, Day 45 when the experiment is no longer running? Should we wait some time period beyond the experiment stop date to update the data for a more accurate result?
h
"As in, should Phase 2 start 96hrs later to make sure that we don't get the incorrect variation ID returns from Phase 1?"
No. The phases will filter out the experiment exposures, and your 96-hour conversion window will start from the first experiment exposure we can find in the phase 2 date range.
"we don't always have customers convert on Day 0, sometimes they take 30 days to convert. If we ran a test for 30 days, how does GB monitor results of users who experienced the experiment on, say, Day 28 but who then converted on, say, Day 45 when the experiment is no longer running?"
Generally, I would advise against trying to end an experiment and then having data change afterwards. You can change the experiment attribution model to "Experiment Duration" to ignore conversion windows and get all values for each user from {first_exposure -> experiment_end_date}, or you can use Fact Table metrics to define conversion windows as on or off by metric.