# ask-questions
m
Good afternoon everyone! We are planning on having a central self-hosted growthbook instance with multiple kubernetes clusters connecting to it. Is it possible to have multiple proxy servers? E.g. one in each of the clusters? According to the documentation it looks like you can only have one because you need to set the env variable
PROXY_HOST_PUBLIC=<proxy_url>
on the main growthbook instance.
Also wondering what happens if the main instance goes down? Will the proxy still be able to serve the feature flags to prevent downtime?
h
You can have multiple proxies, but we strongly recommend keeping them in their own cluster. `PROXY_HOST_PUBLIC` could be a load-balanced or dedicated entrypoint. If you are horizontally scaling your proxy server, then you really should be using a central store (Redis) to ensure each proxy instance has synchronized state. We use Redis's built-in pub/sub to let the proxy instances fan out and update each other's local state based on global state changes. In the proxy readme, check out these fields for setting up Redis pub/sub:
```
CACHE_ENGINE=redis
CACHE_CONNECTION_URL=<your redis connection url>
PUBLISH_PAYLOAD_TO_CHANNEL=1  # this is what enables pub/sub for syncing state on scaled proxy clusters
```
The proxy does keep a local cache of all your SDK payloads, which should help with GrowthBook app downtime (you can also autoscale GrowthBook to minimize this risk). You can customize the TTLs (`CACHE_STALE_TTL` and `CACHE_EXPIRES_TTL`) if you'd like to fine-tune cache availability.
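For instance (the values here are purely illustrative; check the proxy README for the actual defaults and units):

```shell
# Illustrative values only -- tune to your tolerance for stale flags.
CACHE_STALE_TTL=60      # serve the cached payload but refresh it after this long
CACHE_EXPIRES_TTL=600   # evict the cached payload entirely after this long
```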
🙌 1
m
Interesting, that could work, although our clusters are in different regions and a shared Redis would mean some cross-region network requests. But I think that would be OK if the proxies hold state locally as well.
h
Sure, you could set long TTLs to keep local state for as long as possible and then just use Redis for the occasional state sync plus pub/sub (which only triggers on GB feature changes). I'd be interested to know, if you decide to go this route, how it works out for you.
g
👋 I work with Marcus, so I'm following the thread. From what I've read, a config like that seems promising, but I'm wondering if `PROXY_HOST_PUBLIC` would still end up being problematic for us 🤔 Ideally the proxies would end up being served on different domains corresponding to the region they run in, because they really are separate environments (although for developer experience it would be very nice to have a central configuration). So each region would probably be configured to load configs from a different project + environment in GrowthBook. We are still pretty early in ironing this all out 😄 But GrowthBook looks pretty promising for us
h
Hi Markus. I want to make sure I'm on the same page with you both. In order to run separate proxy instances on multiple clusters (possibly with proxy autoscaling within each cluster), you would need to connect the dots on a handful of topics:

1. Decide whether each proxy can share the same set of SDK Connections or not. For example, each proxy cluster (A, B, and C) could provide proxying for all SDK Connections (1, 2, and 3), even if SDK 1 would only ever be used on Proxy A, SDK 2 on Proxy B, etc. This is the approach I'd recommend, as it reduces complexity in your setup. If you truly need these to be separate, it gets a bit more complicated: you'd need to hard-code the specific SDK Connections you want to allow on each proxy instance via .env vars. This can currently be done out of the box; it's an undocumented feature 🙂 (see `getConnectionsFromEnv` in helper.ts, which lets you specify things like `CONNECTION.1.API_KEY`, `CONNECTION.1.SIGNING_KEY`; `CONNECTION.2.API_KEY`, `CONNECTION.2.SIGNING_KEY`; etc.)
2. `PROXY_HOST_PUBLIC`. This goes back to point 1: if you allow all proxies to keep information about all SDK Connections, then you'd just need a fan-out mechanism to post updates from your GrowthBook app(s) to your proxy webhook endpoints (which is basically what `PROXY_HOST_PUBLIC` is). We don't currently have something out of the box for this, but you could probably rig something up at whatever URL you provide; you'd just need to roll your own fan-out. However...
3. Central vs. per-cluster Redis. ...if you made the concession that each proxy instance (even those in different clusters) could share a common central Redis store, then you would not need to roll your own fan-out: you could use the built-in Redis pub/sub mechanism, which GrowthBook proxy supports natively.

Most of what I've mentioned above will work mostly out of the box, with the exception of rolling your own fan-out. Let me know which way you're leaning and hopefully we can narrow in on a strategy.
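For point 2, a roll-your-own fan-out could be as simple as a small relay that receives the update ping from the GrowthBook app and re-posts it to each regional proxy. Here's a minimal sketch; the hostnames and the `/update` webhook path are placeholders I made up for illustration, not official GrowthBook routes, so substitute whatever your proxies actually expose:

```python
# Hypothetical fan-out relay sketch. PROXY_HOSTS and WEBHOOK_PATH are
# placeholders, not official GrowthBook proxy values.
import urllib.request

PROXY_HOSTS = [
    "https://proxy-a.example.com",  # cluster A (placeholder)
    "https://proxy-b.example.com",  # cluster B (placeholder)
]
WEBHOOK_PATH = "/update"  # placeholder path

def build_webhook_urls(hosts, path=WEBHOOK_PATH):
    """Normalize each host and append the webhook path."""
    return [h.rstrip("/") + path for h in hosts]

def fan_out(body: bytes, headers: dict) -> None:
    """Re-post the webhook body (including any signature headers)
    to every regional proxy so each cluster refreshes its cache."""
    for url in build_webhook_urls(PROXY_HOSTS):
        req = urllib.request.Request(url, data=body, headers=headers, method="POST")
        try:
            urllib.request.urlopen(req, timeout=5)
        except Exception as exc:
            # Don't let one unreachable region block the others.
            print(f"fan-out to {url} failed: {exc}")
```

You'd point `PROXY_HOST_PUBLIC` at whatever URL runs this relay instead of at a single proxy.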
🙌 1
g
Thanks for all the details Bryce! This is very helpful! We'll do some experimentation with this in the coming weeks and try some variants. Sounds like central Redis with pub/sub could be a neat way to go about it 🙂
a
Hello, did you guys figure out how to deploy GrowthBook on Kubernetes?