# request-features
MCP Request (Claude Summarized based on limitations I hit when wanting to interact with metrics)
## Feature Request: Add Metric Metadata Support to GrowthBook MCP Server

### Current Limitation

The GrowthBook MCP server provides experiment data including metric IDs in the `goals`, `secondaryMetrics`, and `guardrails` arrays, but it only returns opaque identifiers like `met_405opf1mkz00om4b` and `fact__4mzj622m1v7orjj`. There's no way to retrieve human-readable metric names, descriptions, or definitions through the MCP.

### Use Case

When analyzing experiment data via the MCP (e.g., "which metrics are used most frequently in recent experiments?"), users get unhelpful metric IDs instead of meaningful names like "Trial to Paid Conversion Rate" or "Ad Revenue per User."

### Requested Features

Add these MCP tools:

1. `get_metrics` - List all metrics with metadata:

   ```json
   {
     "id": "met_405opf1mkz00om4b",
     "name": "Trial to Paid Conversion",
     "description": "Users who convert from trial to paid subscription",
     "type": "binomial"
   }
   ```
2. `get_metric` - Get detailed info for a specific metric ID:

   ```json
   {
     "id": "met_405opf1mkz00om4b",
     "name": "Trial to Paid Conversion",
     "description": "Users who convert from trial to paid subscription",
     "sql": "SELECT ...",
     "type": "binomial",
     "tags": ["conversion", "growth"]
   }
   ```
3. Enhanced experiment responses - Include metric names inline:

   ```json
   "goals": [
     {
       "metricId": "met_405opf1mkz00om4b",
       "metricName": "Trial to Paid Conversion",
       "overrides": {}
     }
   ]
   ```
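To illustrate how the proposed tools would fit together, here is a minimal client-side sketch: given a `get_metrics`-style list, build an ID-to-name lookup and annotate experiment goals the way feature 3 describes. The helper names and exact field shapes are assumptions based on the JSON examples above, not a real GrowthBook API.

```python
# Hypothetical sketch: resolve opaque metric IDs using a get_metrics-style
# response. Field names mirror the JSON shapes quoted in this request and
# are assumptions, not confirmed GrowthBook MCP output.

def build_metric_lookup(metrics):
    """Map metric ID -> metadata dict from a get_metrics-style list."""
    return {m["id"]: m for m in metrics}

def annotate_goals(goals, lookup):
    """Attach a metricName to each goal, falling back to the raw ID."""
    return [
        {**g, "metricName": lookup.get(g["metricId"], {}).get("name", g["metricId"])}
        for g in goals
    ]

metrics = [
    {"id": "met_405opf1mkz00om4b", "name": "Trial to Paid Conversion", "type": "binomial"},
]
goals = [{"metricId": "met_405opf1mkz00om4b", "overrides": {}}]

annotated = annotate_goals(goals, build_metric_lookup(metrics))
print(annotated[0]["metricName"])  # Trial to Paid Conversion
```

With metadata available server-side (feature 3), this join would happen inside the MCP response instead of in every client.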
### Business Value

- Makes experiment analysis via AI/MCP much more useful
- Enables better experiment insights and metric usage analysis
- Reduces the need to context-switch to the web interface for basic metric understanding
- Supports automated experiment reporting and analysis workflows

Currently tested with the GrowthBook MCP server (`@growthbook/mcp@latest`) - this functionality would make experiment meta-analysis significantly more helpful for AI-assisted workflows.
@strong-mouse-55694
Hey @rapid-sundown-31376. Thanks for the request. This makes sense! I'll put this on my list for next week and update here when changes are live. I'd also be interested in hearing more about how you're using our MCP server, so let me know if you'd be interested in chatting more.
@rapid-sundown-31376
One of the things I was trying to figure out was which of our metrics get the most usage across experiments. It wasn't obvious how to do that any other way in the UI, so I thought the MCP might be able to help me figure it out. It was partially successful, but I got a result like this, which wasn't super interpretable (though I did figure out how to put those IDs in the URL to see which metric each one was).
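The meta-analysis described above can be sketched as a simple tally over experiment payloads: count how often each metric ID appears in `goals`, `secondaryMetrics`, and `guardrails`. The experiment dicts below are hypothetical stand-ins for what the MCP returns; the field names are taken from the request above.

```python
from collections import Counter

# Hypothetical sketch of "which metrics get the most usage": tally metric
# IDs across the goals, secondaryMetrics, and guardrails arrays. The
# experiment dicts are illustrative, not real MCP server output.

def metric_usage(experiments):
    """Count how many experiments reference each metric ID."""
    counts = Counter()
    for exp in experiments:
        for field in ("goals", "secondaryMetrics", "guardrails"):
            counts.update(exp.get(field, []))
    return counts

experiments = [
    {"goals": ["met_405opf1mkz00om4b"], "guardrails": ["fact__4mzj622m1v7orjj"]},
    {"goals": ["met_405opf1mkz00om4b"], "secondaryMetrics": []},
]
print(metric_usage(experiments).most_common(1))  # [('met_405opf1mkz00om4b', 2)]
```

With the proposed `get_metrics` tool, the counted IDs could then be mapped to readable names instead of being interpreted by pasting them into URLs.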