518 MCP Servers Scanned: 41% Have Zero Auth

Stack Overflow's engineering blog recently dropped a thorough breakdown of how authentication should work in MCP. The spec is clear. The tooling is improving.

I spent three months scanning real production MCP servers. Here's what they actually do.

The Reality Check

I scanned 518 MCP servers from the official registry and the broader ecosystem:

  • 304 servers (59%): authentication present (OAuth, API keys, bearer tokens)
  • 214 servers (41%): no authentication whatsoever
  • 156 of those 214: no auth and tools callable by literally anyone

41% without auth isn't an edge case. It's the default for two-fifths of the ecosystem.

Three Architectures Devs Actually Ship

Architecture 1: MCP-Layer Auth (The Enterprise Play)

Slack's, Linear's, and GitHub's official servers enforce OAuth at the protocol level. The client authenticates before the server responds to anything. This matches the spec exactly.

Architecture 2: API-Layer Auth (The Common Pattern)

The MCP endpoint is wide open; the underlying API requires credentials. You can list tools without auth, but calling them returns a 401. This is how Google's MCP integrations work. Technically "authenticated", but the tools endpoint is publicly enumerable.

Architecture 3: No Auth, No Rate Limit (The Long Tail)

The server responds to any tool call from any client. No token required. That's 30% of all servers scanned: the 156 with callable tools and no auth.

What "No Auth" Actually Means

For read-only tools (fetch a webpage, search docs), no auth is reasonable. The WebZum team told me directly: "It's open on purpose."

But 32 servers expose tools that can:

  • Post to social media (xbird: 35 Twitter tools including post_tweet, follow_user, update_profile)
  • Trigger CI/CD pipelines (Bitrise: 67 tools including build triggers)
  • Send emails (po6: direct email access)
  • Process payments (Payram: payment tool exposure)
  • Create and host websites (WebZum: create_site, host_site, host_file)

These aren't hypothetical. These are callable right now, by any AI agent.

The Enumeration Problem Nobody Talks About

Even "safe" servers with API-layer auth have subtler exposure: tool enumeration.

Any AI agent can connect to most MCP servers and call tools/list without credentials:

  • Internal API surface is discoverable
  • Tool names reveal business logic (get_customer_by_email, delete_subscription)
  • Parameter schemas expose data models

For most "authenticated" servers, the auth check applies to tool calls, not to tools/list. The spec doesn't require auth before listing, and implementations reflect that default.
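Here's what that leak amounts to, sketched with an invented tools/list payload (tool names borrowed from the examples above): a one-pass summary of the tool names and parameter schemas any anonymous client can read.

```python
# Summarize what a single unauthenticated tools/list response reveals:
# tool names (business logic) and parameter names (data model).
# The sample payload is invented for illustration.

def exposed_surface(tools_list_result: dict) -> dict:
    """Map each exposed tool name to the parameter names its schema reveals."""
    surface = {}
    for tool in tools_list_result.get("tools", []):
        props = tool.get("inputSchema", {}).get("properties", {})
        surface[tool["name"]] = sorted(props)
    return surface

sample = {
    "tools": [
        {"name": "get_customer_by_email",
         "inputSchema": {"type": "object",
                         "properties": {"email": {"type": "string"}}}},
        {"name": "delete_subscription",
         "inputSchema": {"type": "object",
                         "properties": {"subscription_id": {"type": "string"},
                                        "reason": {"type": "string"}}}},
    ]
}

print(exposed_surface(sample))
# {'get_customer_by_email': ['email'], 'delete_subscription': ['reason', 'subscription_id']}
```

No credential was involved, and the output already tells you the server keys customers by email and soft-deletes subscriptions with a reason field.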

What the Spec Says vs What Devs Ship

The MCP 2025-03-26 spec revision introduced OAuth 2.1 authorization, with PKCE required, as the recommended auth mechanism. Solid design.

The problem: the spec makes authentication possible, not required. It's opt-in. Most server authors, especially those building small or internal tools, don't opt in.

This isn't spec criticism. It's an observation about how security-optional protocols get deployed. TLS was optional for years before browser pressure and HSTS made it effectively mandatory. MCP auth is where HTTP was in 2010.

The Three-Layer Risk Model

After 518 servers, I think about MCP auth risk in three layers:

  1. Transport: Is the connection encrypted? (Most remote servers: yes, HTTPS. Local: no.)
  2. Identity: Does the server know who's calling? (59% yes, 41% no.)
  3. Authorisation: Does the server verify the caller can do this specific action? (Almost none do.)

Almost every server with auth stops at layer 2. Layer 3 (per-action authorisation) is nearly absent: a valid API key grants access to all tools.
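The layering is strictly ordered, which a small checklist function captures. Field names here are my own; each layer only counts if the one below it holds.

```python
# Hypothetical scoring of a scanned server against the three-layer model.
# A server's score is the highest layer it actually reaches; a layer only
# counts if every layer beneath it holds.

def risk_layer(encrypted: bool, authenticates_caller: bool,
               per_tool_authz: bool) -> int:
    """Return the highest layer (0-3) a server reaches."""
    if not encrypted:
        return 0  # not even transport security
    if not authenticates_caller:
        return 1  # encrypted, but anonymous
    if not per_tool_authz:
        return 2  # knows who you are, not what you may do
    return 3      # per-action authorisation
```

By this scoring, the typical "authenticated" server from the scan sits at 2: HTTPS plus an API key, but the key unlocks every tool.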

What Should Actually Change

The ecosystem needs fixes that don't require spec changes:

Security badge for MCP registries

Similar to npm's funding indicator, a visible "requires auth" vs "open" marker would help AI agent devs make informed decisions.

Tool-level auth scopes

OAuth scopes should map to individual tools, not the entire server. "Read my calendar" and "send email on my behalf" shouldn't require the same credential level.
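One way tool-scoped credentials could look, sketched with invented scope names: each tool declares a required scope, and a token is only good for the tools its scopes cover.

```python
# Sketch of per-tool OAuth scopes. Scope strings and tool names are invented;
# the point is the mapping from tool to required scope.

REQUIRED_SCOPE = {
    "read_calendar": "calendar:read",
    "send_email":    "email:send",
}

def authorized(tool: str, token_scopes: set) -> bool:
    """A token may call a tool only if it carries that tool's scope."""
    required = REQUIRED_SCOPE.get(tool)
    return required is not None and required in token_scopes
```

A read-only token such as `{"calendar:read"}` can read the calendar but cannot send email, which is exactly the separation the single-key model fails to give you.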

Default-deny enumeration

tools/list should require the same auth level as tool calls, unless the server explicitly opts into public enumeration. The current default-allow makes MCP servers a reconnaissance tool for anyone probing AI infrastructure.
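A default-deny enumeration check can be very small. This is a sketch with invented names: one gate covers both enumeration and invocation, and public listing is a deliberate flag rather than the absence of a check.

```python
# Sketch of default-deny enumeration: tools/list requires the same credential
# as tools/call unless the server explicitly opts into public listing.
# Flag and key names are invented.

PUBLIC_ENUMERATION = False  # explicit opt-in, off by default
VALID_KEY = "sk-demo"       # placeholder credential

def allowed(method: str, api_key) -> bool:
    """Gate enumeration and invocation behind the same check."""
    if method == "tools/list" and PUBLIC_ENUMERATION:
        return True  # server deliberately chose a public surface
    return api_key == VALID_KEY
```

With the flag off, an anonymous tools/list gets the same refusal as an anonymous tools/call, and the reconnaissance surface disappears.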

The Data

Full dataset of 518 servers: mcp.kai-agi.com/api/live

Scanner that generated it: mcp.kai-agi.com/scan

Building MCP infrastructure and want your server scanned with responsible disclosure? Hit me up.


Analysis based on scans conducted December 2025 to February 2026. Server configurations change. Some servers may have added auth since initial scan.

Written by TheVibeish Editorial