# give-feedback
b
^ we would also be keen to add some kind of rules/conditions around what can be set up by default, or perhaps e.g. some templates for how features should be set up
f
We are thinking about adding templates to experiment forms. How are you thinking this would work for features?
🤩 1
👍 1
b
So to give a general overview, currently I have to give people a fair number of very strict "you must not do this or it might break things badly"-style instructions. Even with those explained and documented in depth, it's still hard for people to consistently cover everything. Just as a few examples:
• when setting up a webflow redirect, you _must_:
  ◦ only select the webflow environment, deselect nonprod + prod
  ◦ select the redirects project
  ◦ use a string type
  ◦ ensure the default value & control variation are completely empty, incl no spaces
  ◦ add a targeting condition for which page to redirect from - or the entire site will break immediately
  ◦ A/B hashing attribute must be `mojo_device_vendor_id`
• when setting up a feature for the CMS or app:
  ◦ env must be prod + nonprod, or just nonprod - never webflow, and never prod by itself
  ◦ ensure the default & control values are falsey/off
  ◦ must have a rollout rule targeting staff, above any A/B experiment rule (to exclude staff from analysis)
  ◦ if any user will be exposed to a given value, at least 20% of staff must also be exposed to it - ie you can't roll out a feature to 100% of staff and 50% of users (because the control might be broken/buggy, but we won't notice when testing)
  ◦ A/B hashing attribute must be `mojo_user_id`
There are probably a few more I've missed, going from memory
This isn't a permission/trust issue - we 100% trust our staff on this kind of thing. But we need to make sure they have some chance of getting it right - making it almost impossible means they'll be scared to test
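(To make that first checklist concrete, here is a minimal sketch of a redirect feature that satisfies every rule above. It assumes a GrowthBook-style feature definition, but every field name - `project`, `environments`, `condition`, and so on - is an illustrative approximation, not the real GrowthBook schema.)

```typescript
// Editorial sketch only: an illustrative shape for a compliant webflow
// redirect feature. Field names approximate a GrowthBook-style feature
// definition and are not taken from the real API.
interface RedirectRule {
  type: "experiment";
  hashAttribute: string;
  condition: Record<string, unknown>; // targeting condition
  variations: string[];
}

interface RedirectFeature {
  key: string;
  project: string;
  valueType: "string";
  environments: string[]; // environments the flag is enabled in
  defaultValue: string;
  rules: RedirectRule[];
}

const exampleRedirect: RedirectFeature = {
  key: "pricing-page-redirect",
  project: "redirects",      // the redirects project, nothing else
  valueType: "string",       // must use a string type
  environments: ["webflow"], // webflow only; nonprod + prod deselected
  defaultValue: "",          // completely empty, no spaces
  rules: [
    {
      type: "experiment",
      hashAttribute: "mojo_device_vendor_id", // required hashing attribute
      // Targeting condition for the page to redirect from. Without this,
      // the redirect would apply site-wide and break the entire site.
      condition: { path: "/old-pricing" },
      variations: ["", "https://example.com/new-pricing"], // control stays empty
    },
  ],
};
```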
f
ok, interesting. I can discuss this with our team tomorrow.
ā¤ļø 1
b
I'd ideally have two templates - the first is prob pretty obvious.
The second would be one rule targeting staff defaulting to 80%, and another rule doing A/B testing for users defaulting to 50%.
With that, for 90% of our tests, probably only the feature name + values would need to be set - everything else would already be correct.
Ideally we'd flag specific fields as required to set/update, or at least default them to obviously-incorrect values like "[changeme]" etc.
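(A minimal sketch of what that second template's defaults might look like, reusing the same illustrative shape as before. The `is_staff` targeting attribute and the `rollout`/`experiment` rule types are assumptions, not GrowthBook's actual names.)

```typescript
// Editorial sketch: default rules for the "standard feature" template.
// Field names and rule types are illustrative, not GrowthBook's real schema.
const standardFeatureTemplate = {
  key: "[changeme]",                 // obviously-incorrect default that must be edited
  environments: ["nonprod", "prod"], // never webflow, never prod by itself
  defaultValue: false,               // default/control stays falsey
  rules: [
    {
      // Rule 1: staff rollout, placed above the experiment rule so staff
      // are excluded from the A/B analysis. 80% on also leaves 20% of
      // staff on the control value, satisfying the 20%-exposure rule.
      type: "rollout",
      condition: { is_staff: true }, // assumed staff-targeting attribute
      coverage: 0.8,                 // defaults to 80% of staff
      value: true,
    },
    {
      // Rule 2: the user-facing A/B test, defaulting to 50%.
      type: "experiment",
      hashAttribute: "mojo_user_id",
      coverage: 0.5,
      variations: [false, true],
    },
  ],
};
```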
Also, it's hard for me to explain why experiments have to be set up independently from features. I'd really like one form to cover the whole thing.
If they miss the experiment and set it up later, it's easy to get the start date wrong, and then it becomes confusing why there's so little data.
f
I know you said it's not a permission or trust issue, but it seems like an approval step before publishing feature changes might help. Some of those requirements and checks seem pretty custom, so I'm not sure there's a good way to automate all of it. The only thing I can think of is to let you write custom JavaScript validation code.
b
imo that papers over the real issue
Think about e.g. Google Optimize - it's much simpler and harder to break. GrowthBook is great, but it is complex and exposes a lot to users
Either way, to be clear, I don't really need GB to actually enforce those rules as such (although custom validation would be very neat - which you could probably achieve with JSON schemas)
If I just made a couple of base/example/template features, in 95%+ of cases, only the feature name would need to be chosen - everything else would be fine with the defaults
Every single feature flag looks exactly like this - incl all the numbers etc
Every single redirect test looks exactly like this.
The only inputs here would be the feature name, the from URL, and the to URL.
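(Building on the earlier redirect sketch, that template could reduce to a function of exactly those three inputs. Again, everything here is an illustrative approximation rather than the real schema.)

```typescript
// Editorial sketch: a redirect template as a function of the three inputs.
// Every other field is fixed by the template itself.
function makeRedirectFeature(key: string, fromPath: string, toUrl: string) {
  return {
    key,
    project: "redirects",
    valueType: "string" as const,
    environments: ["webflow"],
    defaultValue: "",
    rules: [
      {
        type: "experiment" as const,
        hashAttribute: "mojo_device_vendor_id",
        condition: { path: fromPath }, // the page to redirect from
        variations: ["", toUrl],       // control stays empty
      },
    ],
  };
}

// Usage: only the feature name, from URL, and to URL vary per test.
const redirectTest = makeRedirectFeature(
  "checkout-flow-redirect",
  "/checkout",
  "https://example.com/checkout-v2",
);
```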
f
Yeah, feature templates make sense. I think a lot of the issues you have are a side effect of the redirect hack you're currently doing. I can see us adding first-class support for those types of experiments in the future. So you could create a feature flag experiment, visual editor experiment, or redirect experiment.
b
Some carefully customised defaults would resolve the issue entirely
To clarify, we have only done a couple of the redirect ones. The issue is actually more pressing for the normal features - particularly making sure the rules are correct, staff are handled appropriately, etc
f
I was thinking about JSON Schema, but it's not the best when it comes to enforcing the order of array elements. So requiring a rollout rule before an experiment rule is hard to express.
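(For contrast, that same ordering constraint is a few lines of imperative code. A minimal sketch, assuming each rule carries a `type` field like the illustrative shapes above:)

```typescript
// Editorial sketch: the ordering constraint that is awkward to express in
// JSON Schema is a simple array scan in code. Rule shape is illustrative.
type Rule = { type: "rollout" | "experiment" };

function checkRuleOrder(rules: Rule[]): void {
  const firstExperiment = rules.findIndex((r) => r.type === "experiment");
  const firstRollout = rules.findIndex((r) => r.type === "rollout");
  // If there is an experiment rule, a rollout rule must appear before it.
  if (
    firstExperiment !== -1 &&
    (firstRollout === -1 || firstRollout > firstExperiment)
  ) {
    throw new Error("A staff rollout rule must come before any A/B experiment rule");
  }
}
```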
b
hmm yeah
in practice I think we probably wouldn't use custom validation:
1 - because it doesn't fix the usability, it just prevents mistakes (it would still be hard to know what you should do)
2 - because there are sometimes cases where we would decide to do something specific for a given feature, but which wouldn't be typical
f
If we did custom JavaScript validation, you could throw exceptions and we would show the error message to users. And we would have a way for you to ignore the validation if needed for a specific feature.
šŸ‘ 1