gentle-london-51976
08/30/2023, 5:45 AM
autoRefresh properties on sdk.loadFeatures({ autoRefresh }). We couldn't find a place where the JS SDK allows us to configure or check what the default TTL is. Is that piece of info available?
fresh-football-47124
autorefresh is one of them, but you can also set `timeout: 30` or whatever you want.
gentle-london-51976
08/30/2023, 6:05 AM
autoRefresh: true in combination with timeout: 5000.
That means we should be expecting our singleton GrowthBook instance to be retrieving the latest snapshots from https://cdn.growthbook.io every 5s?
fresh-football-47124
gentle-london-51976
08/30/2023, 6:27 AM
fresh-football-47124
configureCache method:
const cacheSettings: CacheSettings = {
// Consider a fetch stale after 1 minute
staleTTL: 1000 * 60,
cacheKey: "gbFeaturesCache",
backgroundSync: true,
};
gentle-london-51976
08/30/2023, 6:31 AM
loadFeatures() 🤔 periodically?
fresh-football-47124
configureCache({staleTTL: 5000})
gentle-london-51976
08/30/2023, 6:35 AM
happy-autumn-40938
08/30/2023, 7:06 AM
• autoRefresh: true enables the SDK to subscribe to streaming updates from the server (SSE). If you're using cloud, this comes out of the box. For self-hosted, you'll need to set up the GrowthBook Proxy to do this. With streaming, there is no notion of polling or intervals; all updates are realtime.
• For the local SDK cache (which really only pertains to initial payload hydration via loadFeatures()), you can configure the TTL, as Graham mentioned, via configureCache({staleTTL: 5000}). This is an independent concept from streaming / autoRefresh, however.
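To illustrate how the two settings interact, here's a rough sketch of using them together (apiHost/clientKey are placeholders; option names follow the SDK docs for configureCache and loadFeatures, so double-check against your SDK version):
import { GrowthBook, configureCache } from "@growthbook/growthbook";

// Local payload cache TTL -- only affects how long a fetched payload is
// considered fresh, not how often streaming updates arrive
configureCache({ staleTTL: 5000 });

// Placeholder credentials -- substitute your own
const gb = new GrowthBook({
  apiHost: "https://cdn.growthbook.io",
  clientKey: "sdk-abc123",
});

// autoRefresh subscribes to SSE streaming updates; it's independent of staleTTL
await gb.loadFeatures({ autoRefresh: true });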
gentle-london-51976
08/30/2023, 7:16 AM
happy-autumn-40938
08/30/2023, 7:19 AM
gentle-london-51976
08/30/2023, 8:18 AM
So we'd be calling loadFeatures() periodically to keep it up to date, following a similar path to https://growthbookusers.slack.com/archives/C01T6PKD9C3/p1693377228197689
Just wanna confirm whether this is going to be an anti-pattern when using the SDK. If not, we'll be following that route (thanks in advance)
happy-autumn-40938
08/30/2023, 5:12 PM
npm install eventsource, and then in your app calling:
setPolyfills({
EventSource: require("eventsource"),
});
I don't know if I'd call the scheduled polling an anti-pattern, but I'd recommend going for the streaming solution if possible because it gives you the most current (instantaneous) results without you needing to set up a scheduler task on your end.
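For contrast, the scheduled-polling route discussed above is essentially a timer around loadFeatures(); a minimal sketch (the 30s interval and credentials are arbitrary, and a low staleTTL is assumed so the refetch actually goes to the network):
import { GrowthBook, configureCache } from "@growthbook/growthbook";

// Assume a short staleTTL so periodic loadFeatures() calls actually refetch
configureCache({ staleTTL: 5000 });

const gb = new GrowthBook({
  apiHost: "https://cdn.growthbook.io",
  clientKey: "sdk-abc123", // placeholder
});
await gb.loadFeatures();

// Poll for new feature definitions instead of streaming
setInterval(() => {
  gb.loadFeatures().catch((err) => console.error("feature refresh failed", err));
}, 30_000);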
gentle-london-51976
08/31/2023, 7:49 AM
A follow-up question on the cache design.
[context]
• One intuition our BE team had developed:
◦ It might be most efficient if a cluster of servers could share one cache (like Redis). All of them would read from the snapshot, while the refresh action would only be performed by a single node.
• Combining this with our existing setup of SSE refreshing + retrieving the flags via sdk.isOn():
◦ By checking the source code from GrowthBook.ts, it seems like all of the refreshed flag values were going to be stored inside private _ctx: Context; (with the above setup, not calling loadFeatures() periodically)
◦ Which means each server node would essentially maintain its own snapshot version of all the flags.
TL;DR: There seems to be no mechanism in place to have all server nodes share the same FF snapshot; each of them has to listen using SSE. Is my observation correct 🤔?
happy-autumn-40938
08/31/2023, 8:28 AM
setPolyfills({
EventSource: require("eventsource"),
localStorage: customRedisConnector,
});
the connector must then implement this interface:
interface LocalStorageCompat {
getItem(key: string): string | null | Promise<string | null>;
setItem(key: string, value: string): void | Promise<void>;
}
Of course, there is a local in-mem cache that sits in front of your shared cache. If you want to use the shared cache exclusively, with nothing in front, you can override the SDK's cache TTL to 0.
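As an illustration, a connector satisfying that interface could be as small as this (node-redis is used purely as an example client; any store exposing async get/set works):
import { createClient } from "redis";
import { setPolyfills } from "@growthbook/growthbook";

// Example shared store -- swap in your own client/connection details
const redis = createClient({ url: "redis://localhost:6379" });
await redis.connect();

// Satisfies the LocalStorageCompat interface above
const customRedisConnector = {
  getItem: (key: string) => redis.get(key), // Promise<string | null>
  setItem: async (key: string, value: string) => {
    await redis.set(key, value);
  },
};

setPolyfills({
  EventSource: require("eventsource"),
  localStorage: customRedisConnector,
});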
2. GrowthBook Proxy - preferred solution
Our proxy server can be configured as a backend that seeds your SDK instances via both REST calls and Streaming/SSE. It sits between your SDK and the GB features API. It features a shared cache and can be autoscaled to suit your needs -- you can even plug in a Redis instance or Redis cluster as the proxy's cache, and the GB Proxy will keep all instances in sync using Redis PubSub. We actually use the proxy under the hood to power our SSE service, but you can install your own hosted GB Proxy cluster even as a Cloud customer.
In this scenario your SDKs' shared cache would actually be their connection to your hosted proxy instance / cluster. As before, use SSE on the SDKs to keep them up to date, and/or set the TTL to a small number (or 0) to force your SDK instances to use the latest definitions at all times.
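Concretely, pointing an SDK instance at a self-hosted proxy is just a matter of swapping the apiHost; a sketch (the proxy URL below is a made-up internal endpoint):
import { GrowthBook } from "@growthbook/growthbook";

const gb = new GrowthBook({
  apiHost: "https://gb-proxy.internal.example.com", // your GrowthBook Proxy, not the CDN
  clientKey: "sdk-abc123",                          // placeholder
});

// REST fetches and SSE subscriptions now go through the proxy's shared cache
await gb.loadFeatures({ autoRefresh: true });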
----
For orgs where backend latency is important, running a local proxy cluster instead of hitting our CDN / SSE endpoints directly makes a lot of sense as it keeps the network distance much shorter / local. But I also wouldn't worry too much about keeping all SDK instances synced perfectly and would probably suggest not setting your TTL too low so that you can avoid unnecessary lookups (even if they're in a local/shared proxy or DB). The performance of using in-memory cache in front outweighs the need for perfectly synced SDK instances in most cases. In a microservice-heavy infrastructure, this might mean evaluating feature flags/experiments once per user request and sharing the results between services instead of having each service perform the same lookup.
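A rough sketch of that last "evaluate once per request, share the results" idea, assuming a hypothetical downstream service URL, header name, and flag keys:
import { GrowthBook } from "@growthbook/growthbook";

async function handleRequest(userId: string): Promise<void> {
  const gb = new GrowthBook({
    apiHost: "https://gb-proxy.internal.example.com", // placeholder
    clientKey: "sdk-abc123",                          // placeholder
    attributes: { id: userId },
  });
  await gb.loadFeatures();

  // Evaluate once at the entry-point service...
  const decisions = {
    "checkout-redesign": gb.isOn("checkout-redesign"), // made-up flag keys
    "new-pricing": gb.isOn("new-pricing"),
  };

  // ...and forward the results so downstream services don't repeat the lookup
  await fetch("https://orders.internal.example.com/start", {
    method: "POST",
    headers: { "x-feature-flags": JSON.stringify(decisions) },
  });
}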