
The State of Postgres MCP Servers in 2026

Spencer Pauly
7 min read

I spent a weekend in April reading source code for every Postgres MCP server I could find — open source, commercial, side projects, archived references. There are more than you'd think, and they're more different than you'd think. This is the landscape review.

I'm going to try to be fair here. I'm building one of these (QueryBear). I'll mark where I think we win and where we don't, and you can discount accordingly.

The criteria

A Postgres MCP server, evaluated for production use, needs to handle a few things:

  1. Read-only enforcement — the agent cannot, by any path, mutate data.
  2. Allowlist for tables and columns — the agent only sees what you explicitly expose.
  3. Cost guards — row limit, statement timeout, EXPLAIN cost cap.
  4. Audit log — every query, with enough detail to reproduce.
  5. Schema discovery — agents can find tables and columns without dumping the full schema every time.
  6. Authentication — multi-tenant by default, scoped credentials, no shared secrets.
  7. Maintenance posture — actively developed, responsive to issues.

I'll grade each implementation against these.
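To make the criteria concrete, here's a hedged sketch of what the guard and allowlist settings (criteria 2 and 3) might look like inside a gateway. All names here are hypothetical, not any shipping server's config:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class GuardConfig:
    """Hypothetical cost-guard settings (criterion 3)."""
    max_rows: int = 1_000              # hard cap, applied as a LIMIT if absent
    statement_timeout_ms: int = 5_000  # pushed down via SET LOCAL statement_timeout
    explain_cost_cap: float = 1e6      # reject if EXPLAIN's total cost exceeds this

@dataclass(frozen=True)
class Allowlist:
    """Hypothetical table/column allowlist (criterion 2)."""
    tables: frozenset[str] = frozenset()
    columns: dict[str, frozenset[str]] = field(default_factory=dict)

    def permits(self, table: str, column: str) -> bool:
        # A column is visible only if its table is allowlisted AND
        # the column itself is explicitly listed for that table.
        return table in self.tables and column in self.columns.get(table, frozenset())

cfg = GuardConfig()
acl = Allowlist(tables=frozenset({"orders"}),
                columns={"orders": frozenset({"id", "total"})})
```

The design choice worth noting: deny-by-default. An unlisted table or column is invisible, which is what criterion 2 actually requires.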

The contenders

1. modelcontextprotocol/servers/postgres (archived)

The original reference. Archived April 2026.

  • Read-only enforcement: BEGIN TRANSACTION READ ONLY. Insufficient — see my earlier post on what READ ONLY doesn't stop.
  • Allowlist: None.
  • Cost guards: None.
  • Audit log: None.
  • Schema discovery: Schema dump per request, not cached.
  • Authentication: Single connection string at startup, no multi-tenant story.
  • Maintenance: Archived.

Verdict: Was a great proof of concept. Don't run it in production. Anthropic's archive note effectively says the same thing.
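One concrete failure mode of wrap-it-in-READ-ONLY enforcement: if the gateway simply concatenates the agent's text into a transaction, a multi-statement payload can close the read-only transaction and open a new, writable one. A toy illustration of the pattern (not the reference server's actual code):

```python
def naive_wrap(agent_sql: str) -> str:
    # The naive approach: rely on the transaction mode alone and
    # splice the agent's text straight into the statement stream.
    return f"BEGIN TRANSACTION READ ONLY; {agent_sql}; COMMIT;"

# A multi-statement payload escapes the read-only transaction:
payload = "SELECT 1; COMMIT; DELETE FROM users; BEGIN"
wrapped = naive_wrap(payload)
# The injected COMMIT ends the READ ONLY transaction, so the DELETE
# that follows it runs with the connection's normal privileges.
assert "COMMIT; DELETE FROM users" in wrapped
```

This is why transaction mode is a backstop, not an enforcement layer: it only constrains statements that actually execute inside the transaction.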

2. crystaldba/postgres-mcp-pro

A community-maintained implementation focused on Postgres-specific features and read scale.

  • Read-only enforcement: Read-only transaction mode, plus a configurable list of "dangerous" function calls that get blocked. Better than the reference. Still parser-light.
  • Allowlist: Schema-level only. You can scope to a specific schema. Table and column-level blocking is on the roadmap.
  • Cost guards: Configurable statement timeout. No EXPLAIN cost cap as of the last time I checked.
  • Audit log: Logs to stderr or a file. Not structured.
  • Schema discovery: Caches schema metadata. Refreshes on demand.
  • Authentication: Connection string per process. Multi-tenancy via running multiple processes. Workable but not great.
  • Maintenance: Active, responsive maintainers.

Verdict: A serious step up from the reference. Good fit if you have a single Postgres instance and a single team using it. Multi-tenant deployment is a hassle, and the gap on column-level controls is real if you have PII.
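"Parser-light" here means something like a name-based blocklist rather than a full parse. A hedged sketch of the pattern (my simplification, not crystaldba's actual code):

```python
import re

# Example entries only; a real blocklist would be configurable.
BLOCKED_FUNCTIONS = {"pg_terminate_backend", "pg_cancel_backend",
                     "dblink", "pg_read_file"}

def passes_blocklist(sql: str) -> bool:
    """Reject queries that mention a blocked function by name.

    Name matching is the weakness: it misses quoted identifiers,
    schema-qualified aliases, and any dangerous function not yet on
    the list -- which is why blocklists rank below parser-level checks.
    """
    called = set(re.findall(r"([a-z_][a-z0-9_]*)\s*\(", sql.lower()))
    return not (called & BLOCKED_FUNCTIONS)
```

It catches the obvious cases, and that's genuinely better than the reference server. But a blocklist enumerates badness; a parser-level allowlist enumerates goodness, and only one of those fails safe.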

3. Supabase MCP

Supabase's official MCP, scoped specifically to Supabase-hosted databases.

  • Read-only enforcement: Inherits Supabase's role system. Read-only roles work as you'd expect.
  • Allowlist: Through Postgres RLS. Strong but requires you to have RLS configured correctly. If you don't, the MCP doesn't help.
  • Cost guards: Statement timeout configurable at the project level. Row limits not enforced by the MCP.
  • Audit log: Goes to Supabase's logging stack. Useful if you're on Supabase, useless otherwise.
  • Schema discovery: Cached in the MCP.
  • Authentication: OAuth via Supabase. The good story for multi-tenant.
  • Maintenance: Active.

Verdict: Clean if you're on Supabase. The MCP is an integration with Supabase's permission model, not a gateway in front of Postgres generally. Doesn't help you if your Postgres lives somewhere else.

4. Neon MCP

Neon's official MCP, scoped to Neon-hosted databases.

Same shape as Supabase: tight integration with Neon's branching, role, and permission model. OAuth-based auth. Good audit if you're already on Neon.

  • Doesn't apply if your Postgres is anywhere else.
  • Branching support is genuinely cool — agents can spin up a branch, run experiments, throw the branch away.

Verdict: Best-in-class for Neon users specifically. Not a general-purpose option.

5. QueryBear

What I'm building. Caveats above.

  • Read-only enforcement: Parser-level (libpg_query). DDL, DML, volatile function calls are rejected before reaching Postgres. READ ONLY transaction mode underneath as a backstop.
  • Allowlist: Table-level, column-level, and function-level. Hand-maintained YAML, surfaced in a UI.
  • Cost guards: Default row limit, statement timeout, EXPLAIN cost cap. All gateway-side.
  • Audit log: Every query, every parameter, every result size, per-workspace, queryable.
  • Schema discovery: Two-pass — table directory first, full metadata only for relevant tables. Cached, refreshed on demand.
  • Authentication: OAuth with Dynamic Client Registration. Multi-tenant by default. Per-workspace credentials.
  • Maintenance: Solo-built, actively shipping. I'm the bottleneck and the maintainer.

Verdict: I think we're the strongest option for "I have an arbitrary Postgres I want to expose to AI agents safely, and I don't want to write the gateway myself." Whether that's true for your specific use case depends on factors I can't see from here.
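For the curious, parser-level enforcement has a simple shape: parse the statement, then allow only a whitelist of statement types. QueryBear does this with libpg_query; as a stand-in, here's the shape of the check with a toy classifier (real code walks the parse tree, not the raw text):

```python
ALLOWED_STATEMENTS = {"SELECT", "WITH", "EXPLAIN", "SHOW"}

def first_keyword(sql: str) -> str:
    # Toy classifier for illustration. A real gateway uses libpg_query's
    # parse tree, which also catches things this misses -- e.g. writable
    # CTEs like WITH x AS (DELETE FROM t RETURNING *) SELECT * FROM x.
    stripped = sql.lstrip().lstrip("(")
    return stripped.split(None, 1)[0].upper() if stripped else ""

def is_read_only(sql: str) -> bool:
    # Allowlist, not blocklist: anything unrecognized is rejected.
    return first_keyword(sql) in ALLOWED_STATEMENTS

assert is_read_only("SELECT * FROM orders")
assert not is_read_only("DELETE FROM orders")
```

The important property is direction: unknown statement types fail closed, and rejection happens before anything reaches Postgres.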

6. Various single-engineer projects

There are at least eight on GitHub I won't enumerate. Most are at the "thin wrapper around the database" stage. Some have one specific feature better than QueryBear (e.g., one ships Postgres pg_trgm integration for fuzzy search). None I've seen cover all seven criteria. Most don't claim to.

If you're building one, I respect it. If you're using one in production, please re-read the Anthropic archive announcement.

What's missing across the landscape

A few things nobody's done well yet.

Cross-replica routing. Most of these MCPs take a single connection string. None I've seen route reads to a replica intelligently. You can do it manually by pointing the connection string at a replica, but the MCP doesn't know to do retries against a different replica if one is down.

Per-user permissions inside a single MCP server. The OAuth-flavor MCPs handle multi-tenancy, but per-user permissions inside a tenant — "the customer success rep can see support_messages but not users.salary" — are mostly missing. You either run multiple MCP server processes or you build this layer yourself.
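The missing layer could be as small as per-role column grants evaluated inside one gateway process. A hypothetical sketch (role names and tables invented for illustration):

```python
# Hypothetical per-role column grants within a single tenant.
GRANTS: dict[str, dict[str, set[str]]] = {
    "cs_rep":  {"support_messages": {"id", "body", "user_id"}},
    "finance": {"users": {"id", "salary"}},
}

def visible_columns(role: str, table: str) -> set[str]:
    """Columns this role may select from this table; empty set = no access."""
    return GRANTS.get(role, {}).get(table, set())
```

The logic is trivial; the reason nobody ships it is plumbing — the gateway has to know which end user is behind each MCP session, which most single-connection-string designs throw away.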

Standardized audit format. Every implementation logs differently. There should be a community-agreed schema for "AI agent ran query X against database Y at time Z, returning N rows." There isn't.
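A candidate shape for that shared record, amounting to "agent ran query X against database Y at time Z, returning N rows." The field names here are my proposal, not an existing standard:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class AuditRecord:
    # Proposed fields -- not a community standard (that's the point).
    agent_id: str       # which agent/client issued the query
    database: str       # logical database name or DSN alias
    query: str          # full statement text
    params: list        # bind parameters, for reproducibility (criterion 4)
    started_at: str     # ISO 8601 timestamp, UTC
    duration_ms: int
    rows_returned: int

rec = AuditRecord("agent-7", "analytics", "SELECT count(*) FROM orders",
                  [], "2026-04-12T09:30:00Z", 41, 1)
line = json.dumps(asdict(rec))  # one JSON object per line, easy to ship anywhere
```

JSON-lines matters as much as the fields: any log shipper can ingest it, which is exactly what cross-tool monitoring needs.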

Cross-database joins. None handle "join this Postgres table to that MySQL table." Maybe that's never the right job for an MCP server, but it's the kind of thing real teams want.

Where I'd place a bet

If I had to pick the implementation that survives the next year, I'd bet on the ones that treat the MCP server as a real gateway, not a thin wrapper. That includes whatever Supabase and Neon iterate to (because they have to support real customers), QueryBear (because it's our entire product), and probably one of the community ones that figures out parser-level enforcement.

The thin-wrapper implementations will keep existing because they're easy to build and easy to demo. They will not be the ones running in production at the bigger AI companies in 2027.

What you should pick today

If you're starting from zero and you want a Postgres MCP for your team:

  • You're on Supabase: use Supabase MCP. Don't fight it.
  • You're on Neon: use Neon MCP. Same.
  • You have arbitrary Postgres, you're a single team: crystaldba's implementation is reasonable, our implementation is reasonable, the choice mostly comes down to whether you want a UI and audit log out of the box.
  • You have multiple teams or multiple databases: I'd recommend QueryBear. I'm biased.
  • You're building one yourself: start with the seven criteria. Get past four of them and you have something most people don't.

This space is moving fast. The landscape will look different in six months. The criteria, I think, won't.

5 comments

  • mcp_dabbler

    Fairer than I expected from a vendor. Marking your own as best-fit for 'arbitrary Postgres' is defensible — it's basically true.

  • joel_pgsql

    Standardized audit format is the gap I'd most like to see closed. Every MCP server has its own. Cross-tool monitoring is impossible right now.

  • engineering_today

    Cross-database joins are the white-whale feature. I want it, I know nobody's going to ship it, I'll keep wanting it.

  • tjones_dba

    Per-user permissions inside one MCP process is genuinely missing across the landscape. We currently spin up a separate gateway process per tenant. Not great.

  • skeptical_dba

    Counterpoint: 'thin wrapper' MCPs serve a real purpose for one-team-one-DB use cases. Not everything needs to be enterprise-grade. The pendulum will swing back.
