# ask-questions
b
I am trying to migrate from PostHog with the equivalent of
posthog.feature_enabled("feature-name", user_id, only_evaluate_locally=True)
. I can’t find a way to do both: 1. cache at the server level, and 2. get per-user feature flags. If I only use one instance, it will cache, but I won’t be able to check per-user flags since the user attributes are tied to the instance. If I use one instance per request, per the docs’ recommendation, I won’t be able to share the cache. I don’t want an http fetch every time I check a flag, and I’m OK with stale-while-revalidate. How do I do this?
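For context, the per-request pattern I mean looks roughly like this (a sketch with the growthbook SDK; the host, key, and flag name are placeholders):

```python
from growthbook import GrowthBook

def handle_request(user_id: str) -> bool:
    # One instance per request, per the docs recommendation
    gb = GrowthBook(
        api_host="https://cdn.growthbook.io",
        client_key="sdk-placeholder",
        attributes={"id": user_id},
    )
    gb.load_features()  # does this re-fetch every time, or share a cache?
    enabled = gb.is_on("feature-name")
    gb.destroy()
    return enabled
```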
f
so you want to do per-user feature flags only for specific flags?
b
Yes
And I don’t want to have to know ahead of time whether a flag will be global or per-user (progressively rolled out) and hard-code this
f
interesting
I can bring this up with the team
so you'd want to be able to define how to look up the flag from the flag state that's set through the platform UI?
b
😮 shouldn’t this be a super common use case, or am I misusing the platform? The use case is: I have a feature flag that I want to roll out. Before I enable the rollout, it should be zero-cost to check (globally false). Once the rollout starts, it can send the http request to check.
The most ideal version of this is if the rollout logic is cached server-side so it never needs to send an http request in the critical path (i.e. if it’s rolled out in 10% increments, the server-side sdk should just cache that this feature ships to 30% of users right now, so it does
hash(user_id) % 10 < 3
instead of an http request)
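Something like this, conceptually (my own sketch of the bucketing, not the SDK’s actual hash):

```python
import hashlib

def in_rollout(user_id: str, rollout_percent: int) -> bool:
    # Deterministic bucketing: the same user always lands in the same
    # bucket, so bumping 10% -> 30% only adds users, never removes any.
    digest = hashlib.sha256(user_id.encode("utf-8")).digest()
    bucket = int.from_bytes(digest[:4], "big") % 100
    return bucket < rollout_percent
```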
f
we support that latter case
b
oh, how does caching work there then if every request has its own gb instance?
or can the gb instances actually share a cache? for context, this is in a serverful environment
in python
f
it caches the feature payload, which contains the rules for assignment - so the hashing would happen every time, but that's super fast
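roughly, once you have the payload, evaluation is purely local (a sketch; `cached_features` and the flag name stand in for whatever your cache layer and config actually use):

```python
from growthbook import GrowthBook

def check_flag(user_id: str, cached_features: dict) -> bool:
    # Evaluate against an already-fetched feature payload:
    # the assignment hashing runs here, with no http request.
    gb = GrowthBook(
        attributes={"id": user_id},
        features=cached_features,
    )
    enabled = gb.is_on("feature-name")
    gb.destroy()
    return enabled
```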
b
ok, so in a serverful python environment, constructing and destroying gb instances per request will still allow new gb instances to share the cache?
f
I'm not an expert in python, I can check with the team on that
b
like there’s a singleton in the sdk I guess?
just reading the code, I don’t think this is the case
or no, actually I can see that this is the case
I think?
r
Hi Wesley, just a quick note to say this is still on our radar. I should have an update for you on Monday.