# ask-questions

square-yacht-46188

10/02/2023, 1:32 PM
Hi, are there any utilities or good practices for excluding crawlers from experiments? For example, the possibility of defining a "global" rule (in the administration panel or through the SDK)?

fresh-football-47124

10/03/2023, 8:36 AM
You can block crawlers, depending on the SDK, but there's nothing built in at the moment.
There are plenty of libraries that can help you detect them.
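For illustration, a minimal sketch of that detection step using the third-party `isbot` npm package (the package choice and the user agent string are assumptions for the example, not anything built into GrowthBook):

```typescript
import { isbot } from "isbot"; // recent versions of isbot use a named export

// Googlebot's desktop User-Agent, used here purely for illustration.
const ua =
  "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)";

// isbot matches the string against a maintained list of crawler patterns,
// so this logs "true" and the request could be excluded from experiments.
console.log(isbot(ua));
```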

cuddly-finland-73937

10/03/2023, 3:02 PM
I know this is not really related to what is being asked, but it made me wonder: is there a defined way to provide a custom robots.txt for the GrowthBook instance to use? The default install appears to return a 404 for it.

tall-sundown-70325

10/30/2023, 9:07 PM
Hmmmmm! I just joined this Slack to ask how we might do this. I saw that we can force a test based on an attribute. I feel it would be super handy if there was a 'bot' attribute. By chance, would this be possible with the `browser` attribute? It's difficult to know what that encompasses. Is it the user agent or something more general?

happy-autumn-40938

10/30/2023, 9:47 PM
Using a custom `isBot` attribute is probably a good way to do this. That way you can build bot detection into code rather than into targeting rules / regexes on browser attributes. Determining whether something is a bot or not is not always straightforward. There are some libraries like this one. There are also bot lists by IP address that Google publishes. And some bots are cloaked by design.
👍 1
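A minimal sketch of that approach with the GrowthBook JavaScript SDK, assuming browser code and using `isbot` for the detection (the id and client key are placeholders):

```typescript
import { GrowthBook } from "@growthbook/growthbook";
import { isbot } from "isbot";

const gb = new GrowthBook({
  apiHost: "https://cdn.growthbook.io",
  clientKey: "sdk-abc123", // placeholder client key
  attributes: {
    id: "user-123", // placeholder user id
    // Custom attribute: compute bot detection in code once, then
    // target experiments with a rule like "isBot is false".
    isBot: isbot(navigator.userAgent),
  },
});
```

You would also register `isBot` as an attribute in the GrowthBook app so it shows up when building targeting rules.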

tall-sundown-70325

10/31/2023, 4:59 PM
ohhhh I see. I didn't realize we could create custom attributes. thanks!

square-yacht-46188

11/08/2023, 10:40 AM
In our case, when we detect that it is a bot, we set the disable option to true on the GrowthBook instance. That way, the default value is always returned and we don't need to remember to set a custom rule on the features.
👍 1
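A sketch of that pattern, assuming the "disable" option referred to is the JavaScript SDK's `enabled` context flag (which globally turns off experiment assignment so defaults are served):

```typescript
import { GrowthBook } from "@growthbook/growthbook";
import { isbot } from "isbot";

// When the visitor looks like a bot, construct the instance disabled:
// features and experiments then fall back to their default values.
const gb = new GrowthBook({
  apiHost: "https://cdn.growthbook.io",
  clientKey: "sdk-abc123", // placeholder client key
  enabled: !isbot(navigator.userAgent),
});
```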