# ask-questions
g
Hello, our team is currently utilizing the JS SDK. We're using the `autoRefresh` property on `sdk.loadFeatures({ autoRefresh })`, but we couldn't find a place where the JS SDK allows us to configure or check what the default TTL is. Is that piece of info available?
f
`loadFeatures` takes some options; `autoRefresh` is one of them, but you can also set `timeout: 30` or whatever you want.
g
Alright, so if we set `autoRefresh: true` in combination with `timeout: 5000`, does that mean we should expect our singleton GrowthBook instance to retrieve the latest snapshots from https://cdn.growthbook.io every 5s?
f
Actually, let me check the code.
Yeah, that timeout is not the TTL, it's the request timeout.
g
Hmm, ok, so I guess there isn't an option for SDK consumers to configure the auto-refresh frequency? (Mind telling me the default frequency for auto refreshing 😃?) (edit: thanks for checking)
f
Well, it's in the code. There is a `configureCache` method which has options for overriding the defaults:
```ts
const cacheSettings: CacheSettings = {
  // Consider a fetch stale after 1 minute
  staleTTL: 1000 * 60,
  cacheKey: "gbFeaturesCache",
  backgroundSync: true,
};
```
I’m just not sure it’s exposed
g
Or otherwise, if this auto-refresh behavior is still desired with a customizable interval, would the maintainers suggest we just write a timer ourselves to call `loadFeatures()` periodically 🤔?
f
You can try just calling `configureCache({ staleTTL: 5000 })`.
`const { configureCache } = require("@growthbook/growthbook");` for Node.
g
Ok lemme try
h
Wanted to chime in to clarify a few points:
• `autoRefresh: true` enables the SDK to subscribe to streaming updates from the server (SSE). If you're using Cloud, this comes out of the box. For self-hosted, you'll need to set up the GrowthBook Proxy to do this. With streaming, there is no notion of polling or intervals; all updates are realtime.
• For the local SDK cache (which really only pertains to initial payload hydration via `loadFeatures()`), you can configure the TTL, as Graham mentioned, via `configureCache({ staleTTL: 5000 })`. This is an independent concept from streaming/autoRefresh, however.
g
My bad for not spelling out the context. The department I work in is also interested in integrating this SDK in a server environment (Node.js). One observation I've had is that server-sent events (SSE) don't seem to work in the Node.js runtime without a polyfill.
h
That is correct, you will most likely need to add a polyfill for SSE within a Node context (see here).
g
Considering the amount of work, and our unfamiliarity with browser refresh & cache mechanisms on the server side, we might take the approach of having a flagging singleton class while maintaining a scheduler that periodically calls `loadFeatures()` to keep it up to date, following a similar path to https://growthbookusers.slack.com/archives/C01T6PKD9C3/p1693377228197689. Just want to confirm whether this would be an anti-pattern in using the SDK. If not, we'll follow that route (thanks in advance).
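The scheduler described above only needs a few lines. A sketch, where `refreshFn` stands in for a call like `() => sdk.loadFeatures()` on your singleton (the `sdk` name is an assumption for illustration):

```javascript
// Sketch: periodically re-hydrate a GrowthBook-style singleton.
// Errors from refreshFn are swallowed so one failed fetch doesn't
// kill the timer; a real app would log them and retry next tick.
function startPolling(refreshFn, intervalMs) {
  const id = setInterval(() => {
    Promise.resolve().then(refreshFn).catch(() => { /* log & retry next tick */ });
  }, intervalMs);
  return () => clearInterval(id); // call to stop polling (e.g. on shutdown)
}
```

Usage would look like `const stop = startPolling(() => sdk.loadFeatures(), 30_000);`, calling `stop()` on graceful shutdown.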
h
You're using Cloud, correct? In that case, there's almost no extra work to set up streaming. Just `npm install eventsource`, then in your app call:
```js
setPolyfills({
  EventSource: require("eventsource"),
});
```
I don't know if I'd call the scheduled polling an anti-pattern, but I'd recommend going for the streaming solution if possible because it gives you the most current (instantaneous) results without you needing to set up a scheduler task on your end.
g
Thanks, GrowthBook staff 🙏, the SSE solution works well after the polyfill was configured. A single server instance updates itself in a snap. One remaining question regarding the cache design. [context]
• One intuition our BE team developed: it might be most efficient if a cluster of servers could share one cache (like Redis). All of them would read from the snapshot, while the refresh action would only be performed by a single node.
• Combining that with our existing setup of SSE refreshing + retrieving the flags via `sdk.isOn()`:
◦ Checking the source code of GrowthBook.ts, it seems like all of the refreshed flag values are stored inside `private _ctx: Context;` (with the above setup, not calling `loadFeatures()` periodically).
◦ Which means each server node would essentially maintain its own snapshot of all the flags.
TL;DR: there seems to be no mechanism in place for all server nodes to share the same FF snapshot; each of them has to listen via SSE. Is my observation correct 🤔?
h
Out of the box, yes. But you can achieve what you're talking about in a few different ways with a bit of extra work. A few methods:
1. Polyfill LocalStorage - quick solution
You can write your own Redis (or whatever else) connector that uses the same interface as LocalStorage and then plug it into the SDK as a polyfill.
```js
setPolyfills({
  EventSource: require("eventsource"),
  localStorage: customRedisConnector,
});
```
the connector must then implement this interface:
```ts
interface LocalStorageCompat {
  getItem(key: string): string | null | Promise<string | null>;
  setItem(key: string, value: string): void | Promise<void>;
}
```
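As an illustration, a connector satisfying that interface can wrap any async key-value client. A sketch (the `client` shape with async `get`/`set` is an assumption for the example; you would adapt your actual Redis client to it):

```javascript
// Sketch of a LocalStorageCompat adapter over a generic key-value
// client. `client` is any object exposing async get(key) -> string|null
// and set(key, value); a Redis client can be adapted to this shape.
class KeyValueStorageCompat {
  constructor(client) {
    this.client = client;
  }
  async getItem(key) {
    const value = await this.client.get(key);
    return value ?? null; // LocalStorage returns null for missing keys
  }
  async setItem(key, value) {
    await this.client.set(key, value);
  }
}
```

An instance of this class would then be passed as the `localStorage` polyfill in `setPolyfills`, alongside the EventSource polyfill.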
Of course there is a local in-mem cache that sits in front of your shared cache. If you want to exclusively use the shared cache with nothing in front, you can override the SDK's cache TTL to 0.
2. GrowthBook Proxy - preferred solution
Our proxy server can be configured as a backend that seeds your SDK instances via both REST calls and streaming/SSE. It sits between your SDK and the GB features API. It features a shared cache and can be autoscaled to suit your needs -- you can even plug in a Redis instance or Redis cluster as the proxy's cache, and the GB Proxy will keep all instances in sync using Redis PubSub. We actually use the proxy under the hood to power our SSE service, but you can install your own hosted GB Proxy cluster even as a Cloud customer. In this scenario your SDKs' shared cache would actually be their connection to your hosted proxy instance/cluster. As before, use SSE on the SDKs to keep them up to date, and/or set the TTL to a small number (or 0) to force your SDK instances to use the latest definitions at all times.
For orgs where backend latency is important, running a local proxy cluster instead of hitting our CDN/SSE endpoints directly makes a lot of sense, as it keeps the network distance much shorter/local. But I also wouldn't worry too much about keeping all SDK instances synced perfectly, and would probably suggest not setting your TTL too low, so that you avoid unnecessary lookups (even if they hit a local/shared proxy or DB). The performance of using an in-memory cache in front outweighs the need for perfectly synced SDK instances in most cases. In a microservice-heavy infrastructure, this might mean evaluating feature flags/experiments once per user request and sharing the results between services instead of having each service perform the same lookup.