# give-feedback
Dante:
Hi! I’ve come across what appears to be a bug in GrowthBook’s handling of conversion window settings for metrics. While the metric delay (burn-in) is correctly reflected in the auto-generated SQL (e.g. `m.timestamp >= d.timestamp + INTERVAL '1 hour'`), the conversion window (e.g. `AND m.timestamp <= d.timestamp + INTERVAL '24 hours'`) is consistently missing, even when it is clearly configured in the UI and stored in the metric’s JSON (inspected via the Web UI). I’ve tested:
• Updating existing metrics → the conversion window is ignored.
• Creating new metrics via the UI → still ignored.
• Metrics labeled “in 90d”, “in 48h”, etc. → no user-relative time cap is applied in the SQL, so we’ve been consistently misreporting these numbers under the assumption that the window was respected.
• Burn-in does work reliably, so this appears to be a partial implementation.
Interestingly, I found one older metric that does apply the conversion window correctly, so it might be related to how/when the metric was created. Let me know if you’d like reproduction steps or example SQL; happy to share more. We’re using GrowthBook Cloud w/ Build: 3.6.0+2d22835 (2025-06-10). Thanks in advance!
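For reference, here’s a rough sketch of the join condition I’d expect the generated SQL to contain once both the burn-in delay and the conversion window are applied (the table names and the `user_id` join key below are placeholders, not GrowthBook’s actual aliases):

```sql
-- Sketch only: table names and join key are placeholders, not GrowthBook's exact output.
SELECT
  d.user_id,
  m.value
FROM experiment_users d
JOIN metric_events m
  ON  m.user_id = d.user_id
  -- burn-in delay (this part is generated correctly today)
  AND m.timestamp >= d.timestamp + INTERVAL '1 hour'
  -- conversion window upper bound (this is the part that's missing)
  AND m.timestamp <= d.timestamp + INTERVAL '24 hours';
```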
Luke:
Hi Dante! I don't think there's a bug from what I can tell, but let's track down what's going on here. Reproduction steps, more SQL, and screenshots of the metric settings and experiment settings would be useful. From our side, conversion windows are working as expected for metrics. There are two likely candidates for why you're seeing this:
1. There is an experiment-level override that ignores all conversion windows. This is something of a legacy setting, but many orgs use it for one-off analyses without conversion windows. See image 1 and check whether your experiments are set to respect or ignore conversion windows. There is also an organization setting that controls this default; our default behavior is, of course, to respect conversion windows.
2. The conversion window isn't actually being set correctly on the metric, though it sounds like you've been pretty thorough here.
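To illustrate candidate 1 (again a sketch with placeholder table names and join key, not the exact SQL GrowthBook emits): when the experiment is set to ignore conversion windows, only the delay bound shows up in the join, which matches what you're seeing.

```sql
-- Experiment set to IGNORE conversion windows: only the burn-in delay appears.
-- (Respecting conversion windows would add the upper bound from the sketch above.)
SELECT d.user_id, m.value
FROM experiment_users d
JOIN metric_events m
  ON  m.user_id = d.user_id
  AND m.timestamp >= d.timestamp + INTERVAL '1 hour';
```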
Dante:
Hi Luke! You were absolutely right - option 1 was the issue. The *"Conversion Window Override"* option in the experiment analysis settings was set to *"Ignore Conversion Windows"*. Once I flipped it at the experiment level, the correct auto-generated SQL appeared 🙂 This was of course caused by our organization-level default, which was likewise set to ignore conversion windows, which explains why none of our conversion-window-dependent metrics ever worked the way we thought they did (🤦‍♂️). I was just granted admin access and was able to change the *"Default Conversion Window Override"* setting to *"Respect Conversion Windows"*, and everything works now! Really appreciate the support; great turnaround and super helpful!