Experiment responses from the MCP include the goals, secondaryMetrics, and guardrails arrays, but they only contain opaque identifiers like met_405opf1mkz00om4b and fact__4mzj622m1v7orjj. There's no way to retrieve human-readable metric names, descriptions, or definitions through the MCP.
Use Case
When analyzing experiment data via the MCP (e.g., "which metrics are used most frequently in recent experiments"), users get unhelpful metric IDs instead of meaningful names like "Trial to Paid Conversion Rate" or "Ad Revenue per User."
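As a minimal illustration of that use case, counting metric usage across experiments today only produces opaque IDs. The goals/metricId shape below mirrors the examples later in this request and is an assumption about the actual response format:

```typescript
// Illustrative only: the goals/metricId shape is assumed from the examples
// in this request, not taken from the real MCP response schema.
type ExperimentGoal = { metricId: string };
type Experiment = { goals: ExperimentGoal[] };

function countMetricUsage(experiments: Experiment[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const exp of experiments) {
    for (const goal of exp.goals) {
      counts.set(goal.metricId, (counts.get(goal.metricId) ?? 0) + 1);
    }
  }
  // Keys are IDs like "met_405opf1mkz00om4b" -- unreadable without a metric lookup.
  return counts;
}
```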
Requested Features
Add these MCP tools (a rough implementation sketch follows the list):
1. get_metrics - List all metrics with metadata:
```json
{
  "id": "met_405opf1mkz00om4b",
  "name": "Trial to Paid Conversion",
  "description": "Users who convert from trial to paid subscription",
  "type": "binomial"
}
```
2. get_metric - Get detailed info for a specific metric ID:
```json
{
  "id": "met_405opf1mkz00om4b",
  "name": "Trial to Paid Conversion",
  "description": "Users who convert from trial to paid subscription",
  "sql": "SELECT ...",
  "type": "binomial",
  "tags": [
    "conversion",
    "growth"
  ]
}
```
3. Enhanced experiment responses - Include metric names inline:
```json
"goals": [
  {
    "metricId": "met_405opf1mkz00om4b",
    "metricName": "Trial to Paid Conversion",
    "overrides": {}
  }
]
```
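A minimal sketch of how these could work, assuming the new tools proxy GrowthBook's REST API. The endpoint paths, environment variable names, response shapes, and helper names below are illustrative assumptions, not the actual @growthbook/mcp implementation:

```typescript
// Sketch under assumptions: GrowthBook's REST API is assumed to expose metric
// listing/detail endpoints (e.g. /api/v1/metrics and /api/v1/metrics/{id});
// verify paths and response shapes against the current API docs.
const GB_API_HOST = process.env.GROWTHBOOK_API_HOST ?? "https://api.growthbook.io";
const GB_API_KEY = process.env.GROWTHBOOK_API_KEY ?? "";

interface MetricSummary {
  id: string;
  name: string;
  description?: string;
  type?: string;
  sql?: string;
  tags?: string[];
}

async function gbGet<T>(path: string): Promise<T> {
  const res = await fetch(`${GB_API_HOST}${path}`, {
    headers: { Authorization: `Bearer ${GB_API_KEY}` },
  });
  if (!res.ok) throw new Error(`GrowthBook API request failed: ${res.status}`);
  return (await res.json()) as T;
}

// get_metrics: list all metrics with human-readable metadata (assumed response shape).
async function getMetrics(): Promise<MetricSummary[]> {
  const body = await gbGet<{ metrics: MetricSummary[] }>("/api/v1/metrics");
  return body.metrics;
}

// get_metric: detailed info for a single metric ID (assumed response shape).
async function getMetric(id: string): Promise<MetricSummary> {
  const body = await gbGet<{ metric: MetricSummary }>(
    `/api/v1/metrics/${encodeURIComponent(id)}`
  );
  return body.metric;
}

// Enhanced experiment responses: resolve metricId -> metricName inline, falling
// back to the raw ID when a metric can't be found (e.g. fact metric IDs).
async function enrichGoals(
  goals: { metricId: string; overrides?: Record<string, unknown> }[]
): Promise<{ metricId: string; metricName: string; overrides?: Record<string, unknown> }[]> {
  const metrics = await getMetrics();
  const byId = new Map(metrics.map((m): [string, string] => [m.id, m.name]));
  return goals.map((g) => ({ ...g, metricName: byId.get(g.metricId) ?? g.metricId }));
}
```

The server could register getMetrics and getMetric as the two new tools and call something like enrichGoals when assembling experiment responses, so clients never have to work from bare metric IDs.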
Business Value
• Makes experiment analysis via AI/MCP much more useful
• Enables better experiment insights and metric usage analysis
• Reduces need to context-switch to web interface for basic metric understanding
• Supports automated experiment reporting and analysis workflows
Currently tested with the GrowthBook MCP server at @growthbook/mcp@latest; this functionality would make experiment meta-analysis significantly more helpful for AI-assisted workflows.