August 22, 2025 · 6 min read

What Good MCP Tools Look Like

#mcp #ai-agents #tooling #developer-experience #api-design


The Protocol Is Not the Hard Part

Once you start building MCP servers, it becomes obvious that the protocol itself is not where most of the quality problems come from.

The hard part is tool design.

You can have a perfectly valid MCP server with technically correct tools that still feel terrible in practice because the tools are:

  • too broad
  • too vague
  • too side-effectful
  • too hard to reason about
  • too dependent on hidden human assumptions

That is not an MCP problem. It is an interface design problem.

Tools Need to Be Designed for Models, Not Just Humans

Humans are pretty good at filling in gaps.

If a CLI command is a little awkward, a human can often infer the right usage from docs, examples, and experience.

Models are different.

They need tools that are:

  • explicit
  • predictable
  • low-ambiguity
  • structured in their inputs and outputs

That means good MCP tools are usually more constrained than human-oriented interfaces, not less.

The First Rule: One Tool, One Clear Job

If a tool does five things depending on hidden flags or optional combinations, it is probably too broad.

Bad shape:

manage_project(action, input, options, flags, maybeFilters)

Better shape:

  • get_project
  • list_project_issues
  • create_project_update
  • archive_project

This is not about purity. It is about reducing ambiguity.

The more a model has to infer intent from a fuzzy surface, the more room there is for wrong tool choice and wrong parameters.
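To make the contrast concrete: MCP advertises each tool with a name, a description, and a JSON Schema for its input. Here is a minimal sketch of two of the "better shape" tools above in that form; the schemas and descriptions are illustrative assumptions, not a real server.

```python
# Hypothetical tool definitions in the shape MCP uses: each tool
# advertises a name, a description, and a JSON Schema ("inputSchema")
# for its arguments. Schemas here are assumptions for illustration.
TOOLS = [
    {
        "name": "get_project",
        "description": "Fetch a single project by its ID. Read-only.",
        "inputSchema": {
            "type": "object",
            "properties": {"projectId": {"type": "string"}},
            "required": ["projectId"],
        },
    },
    {
        "name": "archive_project",
        "description": "Archive a project. Mutating: the project becomes read-only.",
        "inputSchema": {
            "type": "object",
            "properties": {"projectId": {"type": "string"}},
            "required": ["projectId"],
        },
    },
]

# Each tool does exactly one job, so the model never has to guess
# which "action" value a catch-all manage_project would expect.
print([t["name"] for t in TOOLS])
```

Notice there is no `action` parameter anywhere: the choice of tool *is* the choice of action.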

Read Tools and Write Tools Should Feel Different

I strongly prefer a clean separation between read-only tools and mutating tools.

That helps with:

  • planning
  • safety
  • recoverability
  • agent reasoning about consequences

It is much better when a model can first inspect state with a clearly read-only tool and then choose a separate write tool only when it actually intends to change something.

That also makes guardrails easier to apply.
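One minimal way to make that separation enforceable, rather than just conventional, is to tag every tool as read-only or mutating and gate the mutating ones behind a policy check. A sketch, using the hypothetical tool names from earlier:

```python
# Hypothetical registry tagging each tool as read-only or mutating,
# so a guardrail can treat the two classes differently.
READ_ONLY = {"get_project", "list_project_issues"}
MUTATING = {"create_project_update", "archive_project"}


def requires_confirmation(tool_name: str) -> bool:
    """Mutating tools need explicit approval; read-only tools never do."""
    if tool_name in MUTATING:
        return True
    if tool_name in READ_ONLY:
        return False
    # Refusing unknown tools is itself a guardrail.
    raise ValueError(f"Unknown tool: {tool_name}")


print(requires_confirmation("get_project"))
print(requires_confirmation("archive_project"))
```

Because the classification lives in one place, adding a dry-run mode or an audit log for mutations later is a local change, not a rewrite.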

Inputs Should Be Concrete, Not Implicit

One of the easiest ways to create fragile tools is to rely on implied state.

Examples of inputs I distrust:

  • "current project"
  • "latest issue"
  • "active environment"
  • "default user"

Those can be convenient for humans in an interactive flow. For agent systems, they create hidden coupling and make retry behavior harder.

I would rather pass explicit identifiers.

{
  "projectId": "proj_123",
  "issueId": "ISSUE-441",
  "stateId": "in_progress"
}

That is boring. It is also much safer.
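That preference can be enforced in code: reject implicit aliases outright, so the caller has to resolve them with a read tool first. A sketch with assumed alias and field names:

```python
# Hypothetical input check: explicit identifiers only. Aliases like
# "current" or "latest" are rejected so the caller must resolve them
# into a concrete ID before making the call.
IMPLICIT_ALIASES = {"current", "latest", "active", "default"}


def require_explicit(field: str, value: str) -> str:
    """Return the value unchanged if it is explicit; raise otherwise."""
    if value.lower() in IMPLICIT_ALIASES:
        raise ValueError(
            f"{field} must be an explicit identifier, not '{value}'. "
            "Resolve it with a read tool first."
        )
    return value


print(require_explicit("projectId", "proj_123"))
```

The error message doubles as instruction: it tells the model exactly how to fix the call.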

Outputs Should Be Structured for Follow-Up

A good tool output should help the next step happen cleanly.

That means returning:

  • stable IDs
  • relevant status fields
  • enough metadata for the agent to continue
  • machine-usable structure, not just prose

Bad output:

Project updated successfully.

Better output:

{
  "id": "proj_123",
  "name": "Agent Rollout",
  "state": "started",
  "updatedAt": "2025-08-22T14:21:00Z"
}

The second version is immediately usable by the next step in a workflow.
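The payoff shows up when steps chain: a structured result feeds straight into the next call with no prose parsing. A small sketch, reusing the hypothetical `list_project_issues` tool from earlier:

```python
# The structured output from the example above, as a dict.
update_result = {
    "id": "proj_123",
    "name": "Agent Rollout",
    "state": "started",
    "updatedAt": "2025-08-22T14:21:00Z",
}

# The next step reads the stable ID directly; no text parsing needed.
next_call = {
    "tool": "list_project_issues",
    "args": {"projectId": update_result["id"]},
}
print(next_call)
```

With "Project updated successfully." as the output, there is nothing here to build on.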

Side Effects Should Be Obvious

The tool name, description, and schema should make it hard to miss whether an operation changes the world.

This is a place where subtlety is bad.

I want mutating tools to feel unmistakably mutating.

Examples:

  • create_issue
  • update_project
  • delete_document
  • add_user_to_team

Not:

  • manage_issue
  • handle_project
  • sync_document

The more clearly the side effect is represented, the easier it is for both humans and agents to apply judgment around it.
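This convention is cheap to lint for: require every mutating tool name to start with a concrete verb, and reject the vague catch-alls. A sketch with an assumed verb list:

```python
# Hypothetical naming lint for mutating tools: the leading verb must
# be concrete, and vague catch-all verbs are rejected outright.
CONCRETE_VERBS = {"create", "update", "delete", "add", "remove", "archive"}
VAGUE_VERBS = {"manage", "handle", "sync", "process", "do"}


def check_mutating_name(name: str) -> bool:
    """True if the tool name clearly signals its side effect."""
    verb = name.split("_", 1)[0]
    if verb in VAGUE_VERBS:
        return False
    return verb in CONCRETE_VERBS


print(check_mutating_name("create_issue"))  # concrete verb
print(check_mutating_name("manage_issue"))  # vague verb
```

Running a check like this in CI keeps a growing tool surface honest without relying on review-time vigilance.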

Validation Errors Need to Teach

When a tool call fails validation, the response should help the model correct itself.

Bad error:

Invalid request.

Better error:

{
  "error": "Missing required field 'projectId'",
  "expected": {
    "projectId": "string",
    "name": "string"
  }
}

The goal is not just correctness. It is recoverability.

If the tool teaches the model how to retry correctly, the whole system gets more robust.

Tool Surfaces Should Reflect Real Workflows

A good tool set should map to the natural steps of a task.

For example, a well-shaped issue-management server might support a workflow like:

  1. search issues
  2. get a specific issue
  3. list labels or workflow states
  4. update the issue
  5. comment on the issue

That flow is legible.

When the available tools do not line up with how work is actually done, the model starts improvising around the interface.

That is when awkward tool use shows up.

Narrow Tools Beat Clever Tools

I think teams overvalue cleverness in tool design.

They build giant Swiss Army Knife tools because they want fewer endpoints to maintain.

But the cost gets pushed into the agent's reasoning burden.

Narrow tools usually perform better because they reduce:

  • parameter ambiguity
  • hidden branches
  • side-effect uncertainty
  • recovery complexity

That is a trade I will usually make.

Documentation Still Matters

Even with structured schemas, descriptions matter a lot.

The most useful tool descriptions usually include:

  • what the tool does
  • when to use it
  • when not to use it
  • any important safety or sequencing constraints

That gives the model better decision support than a bare schema alone.

The strongest tool ecosystems combine:

  • strong schemas
  • clean naming
  • good descriptions
  • outputs that chain well into the next step

My Practical MCP Tool Design Rules

When I am designing tools for models, I want these properties.

  1. each tool has one obvious job
  2. read and write actions are separate
  3. inputs are explicit, not environment-dependent
  4. outputs include stable identifiers and next-step context
  5. side effects are obvious from the name
  6. validation errors help the model recover
  7. the tool set reflects actual workflow order

If a tool violates several of those, it is probably going to create friction in real use.

The Main Takeaway

Most MCP quality issues are not about transport, authentication, or protocol compliance.

They are about whether the tool surface is actually designed for model-driven work.

Good MCP tools are usually:

  • narrower
  • more explicit
  • more structured
  • easier to recover from
  • less clever than people first expect

That is a good thing.

The best tool ecosystems are the ones where both the human and the model can tell, very quickly, what each tool is for and what will happen when it is used.