I've been experimenting with Claude Code and other AI coding assistants for months now, and while they're incredibly helpful, I kept running into the same frustration. They know everything about Drupal and React hooks and Python decorators, but nothing about the specific commands and scripts in my projects. I finally decided to try MCP (Model Context Protocol) to see if I could fix this problem, and it worked better than I expected.
What is MCP anyway?
Model Context Protocol is Anthropic's new standard for connecting AI assistants to external data sources and tools. While you could document project-specific commands in a CLAUDE.md file for a single project, MCP solves a bigger problem: what if you work on 10 different projects, each with its own custom commands?
The real power of MCP is its ability to work across all your projects from a single server. Instead of maintaining separate CLAUDE.md files in each repository and syncing documentation between projects, you build one MCP server that holds all your custom knowledge. Switch from your Drupal project to Next.js to Python - Claude instantly knows the specific commands for each because they're all served from the same place.
The protocol itself is refreshingly simple. You build a server that exposes tools, resources, or prompts, and AI clients like Claude can discover and use them. It's like creating a centralized command center that follows you everywhere, regardless of which codebase you're working in.
My use case
I work on several projects with different tech stacks, and each has its own set of custom commands. One project uses Drupal with custom Drush commands for syncing data. Another is a Next.js app with specific build scripts for different environments. I'm constantly switching between projects and forgetting the exact syntax of these commands.
What if Claude Code could just know about these commands? Not generic Drupal commands you can Google, but my specific drush commerce:sync-warehouse-stock command that I wrote for my project. That's what I set out to build.
Building the dev assistant MCP server
I started with a simple Node.js server using the MCP SDK. My approach was straightforward: create markdown files documenting my project-specific commands and expose them through an MCP tool that Claude Code can query.
Here's the structure I came up with:
dev-assistant-mcp/
├── server.ts        # The MCP server
├── knowledge/       # My custom documentation
│   ├── default.md   # Fallback for unknown topics
│   ├── drupal.md
│   └── nextjs.md
└── package.json
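Each knowledge file is just plain markdown. To give a feel for it, here's roughly what an entry in my drupal.md looks like - the command name comes from earlier in this article, but the surrounding wording is illustrative:

```markdown
# Drupal project commands

## Sync warehouse stock

    drush commerce:sync-warehouse-stock

Pulls current stock levels from the warehouse API into Drupal Commerce.
Run this before testing anything inventory-related locally.
```

Because the files are markdown, Claude Code can read them exactly as it would read any documentation you paste into a conversation.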
The server exposes a single tool called dev-assistant that takes a topic parameter (like "drupal" or "nextjs") and returns the corresponding documentation. The entire server is less than 150 lines of TypeScript.
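For context, the handlers shown below hang off a server instance created with the official MCP TypeScript SDK. This is a sketch of the setup I'm assuming (package paths per @modelcontextprotocol/sdk; the name and version strings are my own):

```typescript
// Server setup assumed by the request handlers below
import { Server } from '@modelcontextprotocol/sdk/server/index.js';
import {
  ListToolsRequestSchema,
  CallToolRequestSchema,
} from '@modelcontextprotocol/sdk/types.js';

const server = new Server(
  { name: 'dev-assistant-mcp', version: '1.0.0' },
  // Declare that this server exposes tools (and nothing else)
  { capabilities: { tools: {} } }
);
```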
Here's the core of how it works:
// Define the tool that Claude can discover and use
server.setRequestHandler(ListToolsRequestSchema, async () => {
  return {
    tools: [
      {
        name: 'dev-assistant',
        description: 'ALWAYS use this when user mentions: programming...',
        inputSchema: {
          type: 'object',
          properties: {
            topic: {
              type: 'string',
              description: 'Technology name (drupal, nextjs, etc.)',
            },
          },
          required: ['topic'],
        },
      },
    ],
  };
});
The magic happens in the tool execution handler:
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  const { name, arguments: args } = request.params;

  if (name === 'dev-assistant') {
    // Guard against a missing topic before calling string methods on it
    const topic = typeof args?.topic === 'string' ? args.topic : '';
    // Sanitize input to prevent directory traversal
    const sanitizedTopic = topic.replace(/[^a-zA-Z0-9-_]/g, '');
    const filePath = path.join(__dirname, 'knowledge', `${sanitizedTopic}.md`);

    try {
      await fs.access(filePath);
      const content = await fs.readFile(filePath, 'utf-8');
      return {
        content: [{ type: 'text', text: content }],
      };
    } catch {
      // Fall back to default.md if the topic doesn't exist
      const defaultPath = path.join(__dirname, 'knowledge', 'default.md');
      const content = await fs.readFile(defaultPath, 'utf-8');
      return {
        content: [{
          type: 'text',
          text: `Note: No file found for "${topic}". Showing default.\n\n${content}`,
        }],
      };
    }
  }

  throw new Error(`Unknown tool: ${name}`);
});
I also added a default.md file as a fallback. This turned out to be surprisingly useful. For example, when I asked Claude Code "how to sync stock for warehouse in next?", it first tried with topic "next" (which doesn't exist) and got the default file telling me the correct file is "nextjs". Claude Code was smart enough to immediately try again with "nextjs" and found the right command. It's like having autocorrect for my documentation.
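The trick is that default.md does more than apologize - it lists the topics that do exist, which is what lets Claude Code self-correct. An illustrative version (the exact wording of mine differs):

```markdown
# Unknown topic

No documentation exists for the topic you asked about.

Available topics: drupal, nextjs

Call the dev-assistant tool again with one of the exact names above.
```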
The simple (some might say primitive) approach
Let's be honest - my approach here is pretty primitive. The system literally just takes a topic name, sanitizes it, and looks for a file with that exact name. That's it. No fancy AI, no semantic search, just good old filename matching.
Looking at the code, you can see how basic it is:
- Take the topic parameter (e.g., "drupal")
- Sanitize it to prevent directory traversal
- Look for knowledge/drupal.md
- Return the contents or fall back to default.md
A smarter approach would use RAG (Retrieval-Augmented Generation) with vector embeddings. Imagine if instead of exact filename matching, the system could:
- Understand that "sync inventory", "update stock", and "warehouse quantities" are all related concepts
- Search across all documentation to find relevant snippets, not just return entire files
- Handle natural queries like "how do I update product quantities?" without needing the exact topic name
- Combine information from multiple knowledge files to provide comprehensive answers
With RAG, I wouldn't need aliases or fallback files. The system would understand semantic meaning and find the right information regardless of how I phrase my question.
But here's the thing - for my use case, this primitive approach works perfectly. I have maybe a dozen knowledge files, I know what they're called, and I just need Claude Code to read them. Building a full RAG system would be overkill. Sometimes the simplest solution is the right solution.
That said, if my knowledge base grows to hundreds of files, or if I want to share this with a team who might not know all the exact topic names, implementing proper semantic search would become essential. For now, though, my primitive file matcher does exactly what I need.
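To make the contrast with my filename matcher concrete, here's a toy sketch of the semantic-matching idea. The vectors are hand-made three-dimensional stand-ins for real embeddings (a production system would get high-dimensional vectors from an embedding model), and names like bestMatch are my own invention:

```typescript
// Cosine similarity: how close two vectors point in the same direction
function cosine(a: number[], b: number[]): number {
  const dot = a.reduce((sum, x, i) => sum + x * b[i], 0);
  const norm = (v: number[]) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b));
}

// Pretend embeddings for each knowledge file (illustrative values only)
const docs: { file: string; vec: number[] }[] = [
  { file: 'drupal.md', vec: [0.9, 0.1, 0.2] },
  { file: 'nextjs.md', vec: [0.1, 0.9, 0.3] },
];

// Return the file whose embedding is closest to the query embedding,
// instead of requiring an exact topic name
function bestMatch(queryVec: number[]): string {
  return docs.reduce((best, d) =>
    cosine(d.vec, queryVec) > cosine(best.vec, queryVec) ? d : best
  ).file;
}
```

A query about "sync inventory" would embed near the Drupal vector and land on drupal.md even though the word "drupal" never appears - that's the whole pitch of semantic search over filename matching.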
Setting it up with Claude Code CLI
I use Claude Code as my AI coding assistant, and connecting my MCP server to it was surprisingly easy.
A quick note on transports: MCP supports two main transport types - stdio (standard input/output) which is great for local CLI tools, and HTTP with SSE (Server-Sent Events) for web-based scenarios. I chose SSE because it's more flexible for development, allows multiple connections, and I can easily deploy it later if needed. Plus, it's easier to debug since I can just open the browser and check if the server is running.
First, I needed to set up the Express server with SSE transport:
// Express app with SSE endpoint
const app = express();
let transport: SSEServerTransport;

app.get('/sse', async (req: Request, res: Response) => {
  res.setHeader('Content-Type', 'text/event-stream');
  res.setHeader('Cache-Control', 'no-cache');
  res.setHeader('Connection', 'keep-alive');
  transport = new SSEServerTransport('/message', res);
  await server.connect(transport);
});

// The client POSTs its JSON-RPC messages to this companion endpoint
app.post('/message', async (req: Request, res: Response) => {
  await transport.handlePostMessage(req, res);
});

app.listen(3000, () => {
  console.log('MCP Server running on http://localhost:3000');
});
Then I start my server:
npm run dev
And add it to Claude Code:
claude mcp add dev-assistant --transport sse http://localhost:3000/sse
That's it. Claude Code can now access my custom documentation.
How AI selects the right tool
When you register multiple MCP servers, the AI must decide which tool to use for each query. This decision is heavily influenced by the description you provide in your tool definition.
For example, in my server I use a very explicit description:
"description": "ALWAYS use this when user mentions: programming, coding, development, any programming language (Python, JavaScript, Java, etc.), any framework (React, Next.js, Django, etc.), any tool (Docker, Git, npm, etc.), or asks how to run/install/setup/use any technology."
This clear, keyword-rich description helps the AI:
- Analyze the user's request for relevant keywords and intent
- Compare that intent against all available tool descriptions
- Identify that the dev-assistant tool is the best match
- Call the tool with the appropriate input
The more specific and well-structured your description, the more reliably the AI chooses your tool over its general knowledge or other servers. Think of it as giving the AI clear guidance on when to use your custom knowledge base.
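For contrast, an invented counterexample of the kind of description that tends to lose out - the AI has almost nothing to match against:

```json
{
  "name": "dev-assistant",
  "description": "Get documentation"
}
```

Nothing in that string tells the AI when to reach for the tool, so it will usually fall back on its built-in knowledge instead.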
Why this matters
You might wonder - why not just use a CLAUDE.md file? The key difference is that this MCP server works across all my projects. I don't need to copy documentation into each project repository or maintain multiple CLAUDE.md files. One server, all my custom commands, available everywhere.
The real power comes from having a centralized knowledge base that works across different codebases. When I'm working on my Drupal e-commerce site, Claude Code knows about those warehouse sync commands. When I switch to my Next.js app, it knows about those build scripts. When I jump to a Python data pipeline, it has that documentation too. All from the same MCP server.
I've started adding more knowledge files for different technologies and frameworks I use regularly. Each file captures the specific commands, patterns, and procedures I've developed over time. It's becoming my personal development encyclopedia that travels with me across every project.
Try it yourself
If you want to build something similar, you really don't need much. The MCP SDK does all the heavy lifting, so you can focus on what information would be valuable for your workflow.
The beauty of MCP is that it's an open protocol. We're not locked into one implementation or one AI provider. As more tools support MCP, this investment in documenting our custom tooling becomes even more valuable.
Sometimes the best developer tools aren't the flashiest or most complex - they're the ones that quietly solve your daily frustrations and let you focus on what actually matters: building great software.