- **Clear file modification boundaries** — The "Modify these / Do NOT modify" structure immediately tells me where I can safely work vs. what's managed upstream. This is critical for avoiding wasted effort.
- **Concrete code examples** — The tool/resource/prompt snippets with actual decorators (`@dr_mcp_tool`, `@mcp.resource`) give me copy-paste starting points rather than abstract descriptions.
- **Command reference is complete** — All the `dr task run mcp_server:*` commands are listed with brief descriptions. I know how to install, test, lint, and run the server.
- **Auto-discovery explanation** — Knowing that `.py` files in `app/tools/` are automatically discovered means I don't need to hunt for registration code.
- **Port isolation note** — "Dev=9000, ETE=8082 (isolated; can run simultaneously)" prevents confusion when both are running.
- **Testing pattern** — The pytest example shows me the async pattern and import structure I need to follow.
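That async testing pattern is worth spelling out for first-time agents. A self-contained sketch of the shape such a test takes; `my_tool` here is a hypothetical undecorated stand-in (a real tool would carry `@dr_mcp_tool` and return `ToolResult`):

```python
import asyncio

# Hypothetical stand-in: real tools are decorated with @dr_mcp_tool
# and return ToolResult; a plain dict keeps this sketch self-contained.
async def my_tool(argument: str) -> dict:
    return {"result": argument}

def test_my_tool() -> None:
    # pytest-asyncio would normally drive the coroutine via a marker;
    # asyncio.run() does the same job in a standalone check.
    assert asyncio.run(my_tool("hello")) == {"result": "hello"}

test_my_tool()
```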
**Gap**: The first sentence mentions "Built on FastMCP with `datarobot_genai.drmcp` integration" but I have no idea what these are, whether they're internal libraries, or where to find their docs.
**Fix**: Add to the intro:
```markdown
## MCP Server Sub-project
DataRobot MCP server exposing tools/prompts/resources to AI agents via Model Context Protocol.

**Tech stack**:
- [FastMCP](https://github.com/jlowin/fastmcp) — Python framework for MCP servers
- `datarobot_genai.drmcp` — DataRobot's MCP integration library (internal)
- Model Context Protocol — Anthropic's standard for AI agent tool calling
```

**Gap**: Tools use `@dr_mcp_tool`, resources use `@mcp.resource`, prompts use `@mcp.prompt()`. Where does `mcp` come from? Do I import it differently?
**Fix**: Add to "Adding resources/prompts":
```python
# app/resources/my_resources.py
from app.main import mcp  # ← Import the server instance

@mcp.resource("resource://my-data/{id}")
async def my_resource(id: str) -> str:
    """Resource description."""
    return f"Data for {id}"
```

**Gap**: The constraints mention "Do not re-implement `predict_*`, `list_deployments`, `list_projects`, etc." but I don't know where these are defined or how to discover what already exists.
**Fix**: Add to "Critical constraints":
```markdown
- **Built-in tools**: Core DataRobot operations are pre-implemented in `datarobot_genai.drmcp`.
  Run `dr task run mcp_server:dev` and check startup logs, or see `docs/deployment_info_tools.md`
  for the full list. Do not re-implement `predict_*`, `list_deployments`, `list_projects`, etc.
```

**Gap**: "Add to `user-metadata.yaml` + `app/core/user_config.py` + `infra/infra/mcp_server_user_params.py`" — but I don't see `infra/` in the directory structure, and there's no example of what this looks like.
**Fix**: Either remove `infra/infra/mcp_server_user_params.py` if it's not in this sub-project, or add:

## Adding custom runtime parameters
1. Define schema in `user-metadata.yaml`:
   ```yaml
   custom_param:
     type: string
     description: "My custom parameter"
   ```
2. Wire in `app/core/user_config.py`:
   ```python
   def get_custom_param() -> str:
       return os.getenv("CUSTOM_PARAM", "default")
   ```
3. Use in tools:
   ```python
   from app.core.user_config import get_custom_param
   ```
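To make that wiring concrete, a self-contained sketch of the fallback behavior (`CUSTOM_PARAM` and the `"default"` value are assumptions carried over from the snippet above, not confirmed names):

```python
import os

def get_custom_param() -> str:
    # Returns the runtime parameter, falling back when it is unset
    return os.getenv("CUSTOM_PARAM", "default")

os.environ.pop("CUSTOM_PARAM", None)
assert get_custom_param() == "default"  # unset -> fallback value
os.environ["CUSTOM_PARAM"] = "tuned"
assert get_custom_param() == "tuned"    # set -> overrides the default
```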
### 5. **What's in `ToolResult`?**
**Gap**: The example shows `ToolResult(structured_content={"result": argument})` but I don't know what other fields exist or when to use them.
**Fix**: Add to "Adding a tool":
```python
from fastmcp.tools.tool import ToolResult

# Return structured data (preferred for agents)
return ToolResult(structured_content={"key": "value"})

# Return plain text
return ToolResult(text="Plain response")

# Return error
return ToolResult(error="Something went wrong")
```
**Gap**: Troubleshooting mentions `.env` and `DATAROBOT_API_TOKEN`, but I don't know if I need to create `.env` from `.env.template`, or if there are other required variables.
**Fix**: Add to "Commands" section:

## Setup
1. Copy `.env.template` to `.env`:
   ```shell
   cp mcp_server/.env.template mcp_server/.env
   ```
2. Edit `.env` and set:
   ```
   DATAROBOT_API_TOKEN=your_token_here
   DATAROBOT_ENDPOINT=https://app.datarobot.com/api/v2
   ```
3. Install dependencies:
   ```shell
   dr task run mcp_server:install
   ```
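The setup steps could also suggest a preflight check so agents fail fast on missing credentials. The variable names below are taken from the proposed Setup section; the helper itself is hypothetical:

```python
import os

# Assumed required variables, per the proposed Setup section above
REQUIRED_VARS = ("DATAROBOT_API_TOKEN", "DATAROBOT_ENDPOINT")

def missing_env_vars() -> list[str]:
    """Names of required variables that are unset or empty."""
    return [name for name in REQUIRED_VARS if not os.getenv(name)]

# Report before starting the server
if missing_env_vars():
    print(f"Set these in mcp_server/.env: {missing_env_vars()}")
```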
### 7. **What does "project root" mean?**
**Gap**: "Run from **project root**" — is that the repo root or `mcp_server/`? The directory structure shows `mcp_server/` as a subdirectory.
**Fix**: Clarify:
```markdown
Run from **repository root** (parent of `mcp_server/`):

dr task run mcp_server:test  # all tests (unit + integration + ete)
```

**Issue**: The comment says "unit + integration + ete" but the directory structure only shows unit tests mentioned. Are there integration tests? Where?
**Fix**:
```shell
dr task run mcp_server:test  # all tests (runs pytest with coverage)
dr task run mcp_server:unit  # unit tests only (tests/unit/)
dr task run mcp_server:ete   # end-to-end tests (starts server + Jaeger)
```

**Issue**: The tool example imports `ToolResult` but not the decorator:
**Fix**:
```python
# app/tools/user_tools.py
from datarobot_genai.drmcp import dr_mcp_tool
from fastmcp.tools.tool import ToolResult

@dr_mcp_tool(tags={"my_category"})
async def my_tool(argument: str) -> ToolResult:
    """Description shown to agent — be precise and actionable."""
    return ToolResult(structured_content={"result": argument})
```

Place this right after the intro:
## Quick Start
```shell
# 1. Setup environment
cp mcp_server/.env.template mcp_server/.env
# Edit .env with your DATAROBOT_API_TOKEN

# 2. Install dependencies
dr task run mcp_server:install

# 3. Run tests
dr task run mcp_server:test

# 4. Start dev server
dr task run mcp_server:dev
# Server runs on http://localhost:9000

# 5. Test interactively
dr task run mcp_server:test-interactive
```
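A final step could verify the dev server is actually listening. This socket probe is a hypothetical addition; the port comes from the "Dev=9000" note cited earlier:

```python
import socket

def dev_server_up(host: str = "localhost", port: int = 9000) -> bool:
    """True if something accepts TCP connections on the dev port."""
    try:
        with socket.create_connection((host, port), timeout=1):
            return True
    except OSError:
        return False

print("dev server reachable:", dev_server_up())
```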
### 2. **Add "Understanding the Architecture" section**
```markdown
## Architecture Overview
- **Tools** (`app/tools/`) — Functions AI agents can call (e.g., "create deployment")
- **Resources** (`app/resources/`) — Data agents can read (e.g., "deployment://123")
- **Prompts** (`app/prompts/`) — Reusable prompt templates agents can invoke
- **Core** (`app/core/`) — Server lifecycle, auth, config (DO NOT EDIT — managed by copier template)

**Request flow**: Agent → MCP client → this server → DataRobot API → response → agent
```
## Common Patterns

### Calling DataRobot API from a tool
```python
from datarobot_genai.drmcp import dr_mcp_tool
from fastmcp.tools.tool import ToolResult
import datarobot as dr

@dr_mcp_tool(tags={"deployments"})
async def get_deployment_status(deployment_id: str) -> ToolResult:
    """Get the current status of a deployment."""
    try:
        deployment = dr.Deployment.get(deployment_id)
        return ToolResult(structured_content={
            "id": deployment.id,
            "label": deployment.label,
            "status": deployment.status
        })
    except Exception as e:
        return ToolResult(error=f"Failed to get deployment: {str(e)}")
```

Always wrap DataRobot API calls in `try/except` and return `ToolResult(error=...)` on failure.
All tools must be `async def` even if they don't use `await` internally (FastMCP requirement).
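The async-without-await rule deserves its own minimal example; undecorated here so it runs standalone (a real tool would add `@dr_mcp_tool` and return `ToolResult`):

```python
import asyncio

async def echo_upper(argument: str) -> dict:
    # No await inside: the function is async only because FastMCP
    # requires tools to be coroutines.
    return {"result": argument.upper()}

print(asyncio.run(echo_upper("ok")))  # {'result': 'OK'}
```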
### 4. **Add "File Discovery Reference"**
```markdown
## What Gets Auto-Discovered

| Location | Pattern | Registered As |
|----------|---------|---------------|
| `app/tools/*.py` | `@dr_mcp_tool` decorated functions | MCP tools |
| `app/resources/*.py` | `@mcp.resource` decorated functions | MCP resources |
| `app/prompts/*.py` | `@mcp.prompt` decorated functions | MCP prompts |

**Exceptions**: `__init__.py` files are never scanned for tools.
```
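For readers curious how decorator-based auto-discovery usually works, a generic sketch (the project's real mechanism lives in `app/core/` and may differ): importing every module in a package is enough, because decorators register at import time.

```python
import importlib
import pkgutil

def discover(package_name: str) -> list[str]:
    """Import every non-dunder module in a package; decorators
    (e.g. @dr_mcp_tool) register themselves as a side effect."""
    package = importlib.import_module(package_name)
    found = []
    for _, mod_name, _ in pkgutil.iter_modules(package.__path__):
        if mod_name.startswith("__"):
            continue  # matches the __init__.py exception above
        importlib.import_module(f"{package_name}.{mod_name}")
        found.append(mod_name)
    return found

# Demonstrated on a stdlib package:
print(discover("json"))  # includes 'decoder', 'encoder', ...
```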
**NEEDS WORK** — The AGENTS.md provides good structural guidance and command reference, but lacks critical context about dependencies, imports, the broader architecture, and setup steps that would leave a first-time agent guessing or grepping the codebase for basic information like "where does `mcp` come from?" or "how do I set up credentials?"