residue

mcp: the missing syscall layer for language models

every sufficiently advanced ai system eventually becomes a distributed systems problem.

here's the thing nobody tells you about building with llms: the model isn't the hard part anymore. the hard part is the glue. every integration is a special snowflake. every api speaks its own dialect. every tool needs its own adapter. we're drowning in bespoke plumbing.

anthropic's model context protocol (mcp) is trying to fix this. let's understand what it actually is, why it matters, and what it tells us about where ai systems are headed.

the nĂ—m problem

imagine you have 5 ai assistants (claude, gpt, gemini, llama, mistral) and 10 tools they need to use (github, slack, postgres, stripe, etc). without a standard protocol, you need 50 custom integrations. with mcp, you need 15: each assistant speaks mcp, each tool speaks mcp, done.
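the arithmetic is worth writing down, because it's the entire pitch:

```python
# the n*m pitch in two functions: bespoke integrations grow
# multiplicatively, protocol implementations grow additively.
def bespoke(n_assistants: int, m_tools: int) -> int:
    return n_assistants * m_tools

def with_mcp(n_assistants: int, m_tools: int) -> int:
    return n_assistants + m_tools

print(bespoke(5, 10))   # 50 custom integrations
print(with_mcp(5, 10))  # 15 protocol implementations
```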

this isn't a new idea. we've seen this pattern before: the language server protocol did it for editors and languages, usb did it for peripherals, http did it for clients and servers.

mcp is trying to be the usb port for ai agents.

what mcp actually is

at its core, mcp is surprisingly simple. it's a client-server protocol using json-rpc over either stdio (for local tools) or server-sent events (for remote tools).

here's the mental model: the protocol defines three key primitives:

  1. tools: functions the ai can call

    {
      "name": "create_issue",
      "arguments": {"title": "...", "body": "..."}
    }
    
  2. resources: data the ai can access (files, logs, schemas)

  3. prompts: reusable instruction templates
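to make those primitives concrete, here's what each looks like on the wire. the field names follow the general shape of mcp's schemas, but treat this as an illustrative sketch rather than the spec (the uris and template names are made up):

```python
# a tool: a callable function described by a json schema
tool = {
    "name": "create_issue",
    "description": "open a github issue",
    "inputSchema": {  # json schema for the arguments
        "type": "object",
        "properties": {"title": {"type": "string"},
                       "body": {"type": "string"}},
        "required": ["title"],
    },
}

# a resource: addressable data the ai can read (hypothetical uri)
resource = {
    "uri": "postgres://db/schema",
    "name": "database schema",
    "mimeType": "text/plain",
}

# a prompt: a reusable instruction template (hypothetical name)
prompt = {
    "name": "triage_bug",
    "description": "turn a stack trace into a filed issue",
    "arguments": [{"name": "stack_trace", "required": True}],
}
```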

what's clever is the discovery mechanism. when a client connects, it asks "what can you do?" and gets back a full json schema of available operations. no more reading api docs.

a concrete example

let's trace through what actually happens when an ai uses mcp to file a github issue:

  1. discovery phase

    client → server: "tools/list"
    server → client: [schema for create_issue, list_repos, etc]
    
  2. llm gets the schema: the host injects it into the model's context. now the model knows what tools exist and their exact signatures.

  3. model generates a tool call: based on the user's request, the model outputs structured json:

    {
      "tool": "github",
      "method": "create_issue",
      "params": {
        "repo": "myapp",
        "title": "TypeError in worker.ts:12",
        "body": "Stack trace: ..."
      }
    }
    
  4. client executes: the call travels over stdio/sse, the response comes back, and the conversation continues.

the beauty is that this same pattern works for any tool. the model doesn't need tool-specific training. it just needs to understand json schemas.
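the four steps above can be sketched from the client's side in a few lines. `transport` and `ask_model` are hypothetical stand-ins for the stdio/sse plumbing and the llm call; the flow is the point, not the names:

```python
import json

def run_turn(transport, ask_model, user_request: str) -> dict:
    # 1. discovery: ask the server what it can do
    tools = transport.request("tools/list", {})

    # 2. inject the schemas into the model's context
    context = f"available tools:\n{json.dumps(tools)}\n\nuser: {user_request}"

    # 3. the model answers with structured json naming a tool
    call = json.loads(ask_model(context))

    # 4. execute over the transport and hand the result back
    return transport.request("tools/call",
                             {"name": call["method"],
                              "arguments": call["params"]})
```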

why stateful matters

mcp chose to be stateful, maintaining persistent connections. this is heavier than rest, but it buys things rest can't do cheaply: subscriptions to resource changes, server-initiated requests, and context that carries across calls. that's the key insight: ai agents aren't doing one-shot api calls. they're having conversations with tools.
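to make the difference concrete, here's a sketch of a stateful session: one discovery handshake, then arbitrarily many calls over the same connection. `transport` is a hypothetical stand-in for a persistent stdio/sse connection:

```python
# one handshake, many calls: the session caches discovery and keeps
# the connection open, instead of re-negotiating per request.
class Session:
    def __init__(self, transport):
        self.transport = transport
        self.tools = None  # filled on first discovery, reused after

    def list_tools(self):
        if self.tools is None:
            self.tools = self.transport.request("tools/list", {})
        return self.tools

    def call(self, name: str, arguments: dict):
        return self.transport.request("tools/call",
                                      {"name": name, "arguments": arguments})
```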

the security puzzle

here's where it gets tricky. mcp puts the host in charge of permissions, but this creates interesting challenges: every tool result is untrusted input (hello, prompt injection), credentials have to live somewhere outside the model, and remote servers need real authentication.

the current solution is somewhat awkward. for oauth flows, you run a wrapper server (npx mcp-remote) that handles the dance for you. but this feels like a band-aid.

the deeper question: how do you safely give an ai access to production systems? mcp's answer is "human in the loop" for now, but that won't scale.
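the "human in the loop" answer is simple to sketch: nothing runs until a person says yes. `ask_user` and `execute` here are hypothetical callbacks, not part of the mcp api:

```python
# a minimal approval gate: the host shows the call to a human and
# only executes on explicit consent.
def gated_call(tool_call: dict, ask_user, execute) -> dict:
    summary = f"{tool_call['method']}({tool_call['params']})"
    if not ask_user(f"allow {summary}?"):
        return {"error": "denied by user"}
    return execute(tool_call)
```

it's easy to see why this won't scale: a 50-step agent workflow means 50 approval dialogs, and users learn to click yes.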

latency and the speed of thought

every mcp hop adds overhead:

user → ai host → mcp client → transport → mcp server → actual api

in testing, each hop adds ~50-200ms. for a complex workflow with 10 tool calls, you're looking at seconds of overhead.
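back-of-the-envelope, treating the chain above as four mcp-relevant hops:

```python
# four hops per call at the quoted 50-200ms each, over 10 calls
per_hop_ms = (50, 200)  # range quoted above
hops = 4                # host -> client -> transport -> server -> api
calls = 10

low = per_hop_ms[0] * hops * calls / 1000
high = per_hop_ms[1] * hops * calls / 1000
print(f"overhead: {low:.0f}s to {high:.0f}s")  # overhead: 2s to 8s
```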

this matters because ai interactions should feel like "thinking at the speed of thought." every added second breaks the illusion.

some ideas for fixing this: batch multiple tool calls into a single round trip, cache discovery results so the handshake happens once per session, and run servers next to the client instead of across the network.

what mcp reveals about ai's future

mcp is a bet on several things:

  1. ai systems will be agentic: not just answering questions but taking actions
  2. interoperability matters: no single company will own all the tools
  3. context is everything: the more an ai knows, the more useful it becomes
  4. human oversight remains critical: at least for now

but more fundamentally, mcp represents a shift in how we think about ai integration. instead of teaching models about specific apis, we're creating a universal language for tool use.

the ecosystem play

right now, mcp is in the "early adopters" phase. you've got a handful of clients (claude desktop, cursor, zed) and a growing catalog of community servers. the chicken-and-egg problem is real. developers won't build mcp servers without clients. clients won't adopt without servers. anthropic is trying to bootstrap both sides, but it's unclear if they have enough gravity.

my prediction: mcp succeeds if it becomes boring infrastructure. like http or usb, you'll use it without thinking about it.

building with mcp today

if you want to experiment, here's the fastest path:

  1. pick a client: cursor is probably easiest
  2. install a server: try github or postgres to start
  3. understand the flow: watch the json-rpc messages
  4. build your own: wrap your api in an mcp server

the protocol is simple enough that you can implement a basic server in ~200 lines of code. the hard part isn't the protocol—it's deciding what operations to expose and how to handle errors gracefully.
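here's roughly what that looks like: a toy stdio server that answers tools/list and tools/call and returns a json-rpc error for everything else. a sketch of the shape, not a spec-compliant implementation:

```python
import json
import sys

# one example tool, described by a json schema
TOOLS = [{
    "name": "echo",
    "description": "return the input unchanged",
    "inputSchema": {"type": "object",
                    "properties": {"text": {"type": "string"}}},
}]

def handle(request: dict) -> dict:
    """dispatch one json-rpc request to a json-rpc response."""
    rid, method = request.get("id"), request.get("method")
    if method == "tools/list":
        return {"jsonrpc": "2.0", "id": rid, "result": {"tools": TOOLS}}
    if method == "tools/call":
        args = request.get("params", {}).get("arguments", {})
        return {"jsonrpc": "2.0", "id": rid,
                "result": {"content": [{"type": "text",
                                        "text": args.get("text", "")}]}}
    # clean errors are table stakes; this is where the real work lives
    return {"jsonrpc": "2.0", "id": rid,
            "error": {"code": -32601, "message": f"method not found: {method}"}}

def serve() -> None:
    """read one request per line from stdin, write one response per line."""
    for line in sys.stdin:
        if line.strip():
            print(json.dumps(handle(json.loads(line))), flush=True)
```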

what's missing

mcp v1 has some obvious gaps: no first-class auth standard, no registry for discovering servers, and no versioning story for tool schemas.

these feel solvable, but they'll determine whether mcp becomes critical infrastructure or remains a nice experiment.

the bigger picture

zoom out and mcp is part of a larger trend: we're building the system call interface for ai.

just like operating systems provide syscalls for processes to interact with hardware, we need standard interfaces for ai to interact with software. mcp is one attempt at this layer.

the alternative is the status quo: every ai company builds their own plugin system, developers write the same integrations n times, and we all pretend this is fine.

final thoughts

mcp feels like it's solving a real problem. the implementation is pragmatic—json-rpc isn't sexy but it works. the stateful design matches how agents actually operate. the security model at least acknowledges the problems even if it doesn't solve them.

will it succeed? depends on adoption. protocols are social constructs. they succeed when enough people agree to use them.

but the problem mcp addresses—the combinatorial explosion of ai-tool integrations—isn't going away. if not mcp, something like it will emerge. the only question is whether anthropic's version becomes the standard or just influences what comes next.

in the meantime, it's worth experimenting with. even if mcp itself doesn't win, understanding the problem space will matter as we build increasingly capable ai systems that need to interact with the messy real world.

because in the end, that's what this is about: teaching silicon to speak the language of software. and maybe, just maybe, making it as easy as plugging in a usb cable.