Vorp Labs//MCP
December 11, 2025

Technical

Setting Up MCP Servers: A Practical Guide for AI Development

A practical walkthrough of configuring MCP servers for Claude Code—from basic setup to debugging common issues.

Phil Glazer, Founder
8 min read

MCP (Model Context Protocol) is quickly becoming a common way for AI clients to discover and call external tools. Anthropic announced on December 9, 2025 that MCP had over 10,000 active public servers and adoption across products including ChatGPT, Cursor, Gemini, Microsoft Copilot, and VS Code. If you're building AI-powered development tools or want to give your AI assistant access to external systems, understanding how MCP servers are configured is increasingly useful.

What MCP actually does (and why you'd want it)

MCP solves a mundane problem: models need access to tools and current data. MCP provides a standard way for AI clients to discover and invoke external capabilities—reading files, querying databases, controlling browsers, whatever you expose.

Under the hood, MCP uses JSON-RPC 2.0 for communication. If you've worked with any RPC-based systems, the request/response pattern is familiar. An MCP server exposes capabilities, and MCP clients (like Claude Code) discover and invoke those capabilities through a consistent interface.
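As a sketch of what that traffic looks like, here's a hypothetical tool invocation. The method name `tools/call` and the `name`/`arguments` params shape come from the MCP specification; the tool name and file path are invented for illustration:

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "read_file",
    "arguments": { "path": "/projects/app/README.md" }
  }
}
```

The server replies with a message carrying the same `id` and either a `result` or an `error` member—standard JSON-RPC request/response matching.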

Without tool access, an AI assistant is limited to what you paste into the context window. With MCP servers configured, it can read your actual files, interact with your development environment, and take concrete actions. The difference between "summarize what I pasted" and "operate on my codebase" is tool access.

We use MCP servers extensively in our Claude Code setup—filesystem access for navigating projects, Playwright for browser automation testing, and servers for working with Excel and PowerPoint when client work requires it. Each one started with the same configuration pattern.

The configuration basics: JSON files and environment variables

Server configuration is client-specific. Some clients use JSON config files (like Claude Code); others use TOML or UI-driven settings. For Claude Code, you edit a JSON configuration file that specifies which servers to load and how to run them. The examples below use Claude Code's format.

A basic server configuration looks like this:

{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/allowed/directory"],
      "env": {}
    }
  }
}

The structure is consistent across servers: a name, a command to run the server, arguments for that command, and environment variables. Most servers are distributed as npm packages, so npx is the common launcher—though you'll also see Python-based servers using uvx or direct executable paths.
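For comparison, a Python-distributed server follows the same shape with uvx as the launcher. This sketch assumes the reference SQLite server's package name and flags; check the server's own README for the exact invocation:

```json
{
  "mcpServers": {
    "sqlite": {
      "command": "uvx",
      "args": ["mcp-server-sqlite", "--db-path", "/path/to/app.db"],
      "env": {}
    }
  }
}
```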

Environment variables handle the sensitive bits. API keys, database credentials, and service tokens go in the env object rather than the args (where they'd show up in process listings). This isn't bulletproof—config files can still be committed accidentally, and env vars are visible to other processes—but it avoids the most obvious leaks:

{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "ghp_xxxxxxxxxxxx"
      }
    }
  }
}

Microsoft has announced it's collaborating with Anthropic on an official C# SDK for MCP, suggesting the ecosystem is expanding beyond Node and Python. For now, npm packages cover most use cases.

One thing that trips people up: the configuration file needs to be valid JSON, not JSON5 or "JSON with comments." Trailing commas and inline comments can cause parse errors, and depending on the client, the error message may be unhelpful. Run configs through a JSON validator before debugging anything else.
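A quick check that needs nothing beyond a stock Python install: `python3 -m json.tool` is a strict JSON parser, so trailing commas and comments both fail loudly.

```shell
# The trailing comma below is legal in JSON5 but not in JSON,
# so the parser rejects it and the fallback message prints.
echo '{"mcpServers": {},}' | python3 -m json.tool || echo "parse error"
```

Point the same command at your real config file (`python3 -m json.tool path/to/config.json`) before you start debugging anything deeper.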

Common setup problems and how to fix them

After setting up MCP servers across multiple machines and helping others do the same, we've seen the same problems repeatedly. Here's what actually breaks and how to fix it.

Path permissions are the most common issue in our setups. Filesystem servers need explicit access to directories, and they enforce this strictly. If you configure access to /projects but your code lives in /Users/you/projects, the server will refuse to read anything. Use absolute paths, and when debugging, start with a broader path to confirm the server works before narrowing permissions.

API keys in the wrong format. Some servers expect bare tokens, others expect them prefixed with "Bearer," and still others want them in specific environment variable names that aren't obvious from the documentation. When a server fails silently or returns authentication errors, check the server's README for exact environment variable names—they're not always standardized.

Timeout issues during startup. MCP servers need to initialize before they're available, and complex servers (especially those connecting to external services) can exceed default timeouts. If a server works when you run it manually but fails when launched through your AI client, look for timeout configuration options in your client settings.

npx caching problems. When you're iterating on server versions or switching between different server packages, npx's cache can serve stale versions. If you suspect cache weirdness, run npm cache verify or delete the _npx directory inside your npm cache (typically ~/.npm/_npx). (Note: npx clear-npx-cache works by running a third-party package, so treat it like any other dependency you execute.)

JSON-RPC communication failures. Since MCP uses JSON-RPC 2.0, malformed responses or unexpected error codes can be confusing if you're not familiar with the protocol. The MCP specification includes standard error codes, but server implementations don't always follow them precisely. When debugging, capturing the raw JSON-RPC traffic (if your client supports it) often reveals what's actually happening.
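For reference, a spec-compliant error response looks like the following. The -32602 code is JSON-RPC 2.0's standard "invalid params" code; the message text is invented for illustration and varies by server:

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "error": {
    "code": -32602,
    "message": "Invalid params: path is outside allowed directories"
  }
}
```

If a server returns something that doesn't fit this shape, or reuses standard codes for nonstandard conditions, that mismatch itself is often the bug.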

A debugging approach that's saved us time: run the MCP server manually in a terminal before configuring it in your AI client. If it works standalone, the problem is configuration or permissions. If it fails standalone, you know to focus on dependencies, environment variables, or network issues.

Real examples from our Claude Code workflow

Our working configuration has evolved through trial and error. Here's what we actually run and why.

The filesystem server is foundational—without it, Claude Code can't read or modify project files directly. It works as an allowlist: we grant access to our development directories, and anything outside those paths is inaccessible, which keeps sensitive locations out of reach:

{
  "filesystem": {
    "command": "npx",
    "args": ["-y", "@modelcontextprotocol/server-filesystem", 
             "/Users/team/projects", 
             "/Users/team/documents"],
    "env": {}
  }
}

For browser automation, we use a Playwright server that lets Claude Code interact with web applications during testing. This has been particularly useful for debugging frontend issues where we can say "click through the signup flow and tell me where it breaks" rather than describing steps manually.

The Excel and PowerPoint servers handle a specific need: client deliverables often have to ship in Office formats, and being able to generate or modify those documents programmatically through natural language saves significant time. These servers require no special configuration beyond the basic command structure.

One server we experimented with but removed: a general web research server. The latency added to every response that might need external information wasn't worth it for our use cases—we found it better to handle research as explicit requests rather than having it available implicitly.

When MCP servers are worth the setup overhead

MCP servers add complexity. Configuration files need maintenance, servers can fail, and every additional tool is another thing to debug when something goes wrong. The question isn't whether MCP is technically impressive—it is—but whether the capability justifies the overhead for your specific work.

MCP servers are clearly worth it when your AI assistant needs persistent access to external systems. If you're frequently pasting file contents, running commands and copying output, or manually bridging between your AI tool and other services, an MCP server eliminates that friction. The time investment in setup pays back quickly when you're doing something repeatedly.

They're also valuable when you need capabilities that don't exist in base models. Browser automation, database queries, API integrations—these aren't things language models can do from training alone. MCP servers add genuine new capabilities rather than just convenience.

They're probably not worth it for occasional or one-off tasks. If you need to check something in a spreadsheet once a month, manually copying data is fine. MCP servers shine for workflows you repeat, not tasks you do once.

The ecosystem's growth suggests we're past the "early adopter curiosity" phase. With Anthropic donating MCP to the Agentic AI Foundation to keep it open and neutral, and major platforms adopting it, the protocol looks like it's becoming standard infrastructure rather than one vendor's approach. Setting up MCP servers now is an investment in understanding how AI tooling will work going forward—not just an optimization for today's workflow.

If you're building AI-powered development tools or want to extend your existing AI assistant's capabilities, starting with one or two MCP servers on a real project is more instructive than reading about the protocol abstractly. The filesystem server is the obvious first choice: it's simple, immediately useful, and teaches you the configuration patterns you'll use for everything else.

Interested in exploring this further?

We're looking for early partners to push these ideas forward.