# ask-questions
s
Hey! 👋 We're testing out Growthbook and I'm trying to wrap my head around the different SDKs and how they would work in our current setup. We have a Node application serving both static and React pages, with Fastly CDN in front. We rely heavily on caching pages as they're created by news editors. We've previously used Fastly to split traffic between users in A/B test experiments, but would like an easier-to-use setup.

Looking into the SDK for Fastly, it seems interesting with the KV store cache to avoid reaching out to Growthbook for the large volume of requests we get. However, I don't understand how the Fastly Growthbook Edge App sends feature flags to the origin. How are they passed along, and how would I retrieve them when developing different variants in the origin application? The important thing is to have the Fastly application help us do traffic splitting while maintaining a high cache hit ratio. I would think the Edge App should send all the feature flag information to the origin and the React SDK would pick that up instead of reaching out to the Growthbook server, but I'm not convinced how that would work. If somebody could fill me in on these details or knows of a knowledge article, it would be much appreciated.
Essentially, I think the big thing is how bucketing is done when running our setup of Fastly CDN in front of a Node backend, where SSR and caching are important factors.
h
Our Fastly Edge SDK evaluates experiments and feature flags at the edge and then proxies traffic to your origin based on the results of that evaluation. So you could set up a URL redirect test and the Edge SDK will split traffic to 2+ origin URLs based on that test. It uses the same bucketing system as our vanilla JS SDK (it actually runs the JS SDK within the edge runtime). The Edge SDK doesn't hold any page cache itself; it's a dynamic routing layer that sits on top of your origin. It does forward response headers, however, so you can set your own cache-control headers, which will be respected by the user's browser.
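To make the bucketing point concrete, here's a rough sketch with the plain JS SDK — the client key, visitor id, and feature key are just placeholders, and depending on SDK version the load call is `init()` or `loadFeatures()`:

```ts
// Rough sketch: deterministic bucketing with the GrowthBook JS SDK.
// The client key, visitor id and "ad-logic-experiment" key are placeholders.
import { GrowthBook } from "@growthbook/growthbook";

async function evaluateForVisitor(visitorId: string) {
  const gb = new GrowthBook({
    apiHost: "https://cdn.growthbook.io",
    clientKey: "sdk-abc123",          // placeholder SDK key
    attributes: { id: visitorId },    // same id => same bucket, wherever this runs
  });

  // Load the feature/experiment payload (this is what the KV store can cache)
  await gb.init();

  // Hashing the id against the experiment weights always yields the same
  // variation for the same visitor, at the edge or at the origin
  return gb.getFeatureValue("ad-logic-experiment", "control");
}
```

So as long as the edge and the origin evaluate with the same feature payload and the same hashed attribute (e.g. a visitor id cookie), they'll land on the same variation.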
The Fastly edge runtime can be unforgiving in terms of resource constraints (namely, CPU time per request), so if your origin webpage is complex it can sometimes exceed the computation limit and time out. If that's happening, another option is implementing our JS SDK directly and doing your own traffic splitting (redirects + headers) manually.
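If you go the manual route, it could look roughly like this on Fastly Compute — the backend names, cookie, header, and feature key below are all made-up placeholders, not an official contract:

```ts
/// <reference types="@fastly/js-compute" />
// Very rough sketch of doing the split yourself on Fastly Compute.
// Backend names ("growthbook", "origin"), the cookie "gb_id", the header
// "x-gb-variation" and the feature key are all made-up placeholders.
import { GrowthBook } from "@growthbook/growthbook";

addEventListener("fetch", (event) => event.respondWith(handle(event)));

async function handle(event: FetchEvent): Promise<Response> {
  const req = event.request;

  // Stable visitor id from a first-party cookie (set by the origin)
  const cookie = req.headers.get("cookie") ?? "";
  const visitorId = /(?:^|;\s*)gb_id=([^;]+)/.exec(cookie)?.[1] ?? "anonymous";

  // SDK payload; in practice you'd cache this in the KV store instead of
  // fetching it on every request
  const payloadRes = await fetch(
    "https://cdn.growthbook.io/api/features/sdk-abc123",
    { backend: "growthbook" }
  );
  const { features } = await payloadRes.json();

  // Same features + same id => same variation as anywhere else the SDK runs
  const gb = new GrowthBook({ features, attributes: { id: visitorId } });
  const variation = gb.getFeatureValue("ad-logic-experiment", "control");

  // Forward the assignment so the origin (and the cache key) can use it
  req.headers.set("x-gb-variation", String(variation));

  // Proxy to the origin; its Cache-Control headers pass through unchanged.
  // Varying the cache on x-gb-variation lets Fastly store each variant
  // separately while keeping the hit ratio high.
  return fetch(req, { backend: "origin" });
}
```

The key idea is that the assignment travels to the origin in a request header, and varying the cache on that header keeps each variant cacheable on its own.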
s
Thanks for the response @happy-autumn-40938 😄 Let's say I have an experiment with a feature flag that splits traffic into "Control", "A" and "B". The user hits the Fastly edge node and gets evaluated to "B". There's no cached page of variant "B", so the request goes to the origin. How does the origin know that the user should get variation "B"? Maybe that's only possible with URL redirects instead of more basic feature flags? I would expect the Edge SDK to evaluate which feature flag variation the user should receive, respond with it if it's cached, or generate it if not. But I feel like there's a big point I'm missing about the idea of the Edge SDK.

In many cases at TV2 Denmark, we just want to run an experiment where users are served four different variations of the same page, for example with different ad display logic. We would set it up so that the React application knows, based on the incoming request, how to render a page with a specific variation. Are URL redirect tests the only way to handle this? The dream scenario would be to have the Edge SDK split traffic for us, and then when the request hits the origin we just use the standard Node/React SDKs to get the feature flag information and let the application render based on our own logic.
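Something like this is what I'm imagining on the origin side — purely a sketch, where "x-gb-variation" is just whatever header the edge layer would set for us (I don't know if the Edge App actually does this) and renderPage() stands in for our React SSR:

```ts
// Hypothetical origin-side sketch (Express). The "x-gb-variation" header,
// cache settings and renderPage() are placeholders for our real setup.
import express from "express";

const app = express();

app.get("*", (req, res) => {
  // Assignment decided at the edge (fall back to control if it's missing)
  const variation = String(req.headers["x-gb-variation"] ?? "control");

  // The React app gets the variation as a prop / serialized state, so the
  // client never needs to call GrowthBook itself
  const html = renderPage({ adVariation: variation });

  // Let Fastly cache each variant separately without hurting the hit ratio
  res.set("Vary", "x-gb-variation");
  res.set("Cache-Control", "public, s-maxage=300");
  res.send(html);
});

// Stand-in for the real renderToString() pipeline
function renderPage(props: { adVariation: string }): string {
  return `<html><body data-ad-variation="${props.adVariation}">…</body></html>`;
}

app.listen(3000);
```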