Pinterest Engineering · March 19, 2026

Pinterest's AI Agent Ecosystem with Model Context Protocol (MCP)

Pinterest engineered an ecosystem around the open-source Model Context Protocol (MCP) to enable AI agents to interact with internal tools and data sources. This system features multiple domain-specific MCP servers, a central registry for discovery and governance, and robust security mechanisms including two-layer authentication and human-in-the-loop controls. The architecture prioritizes cloud-hosted servers, unified deployment, and extensive observability to support safe and scalable AI agent automation.


Overview of the MCP Ecosystem Architecture

Pinterest's Model Context Protocol (MCP) ecosystem serves as a unified interface for large language models (LLMs) and AI agents to securely interact with diverse internal tools and data sources, moving away from fragmented, bespoke integrations. The core architectural decision was to favor internal cloud-hosted MCP servers over local ones, enabling centralized routing, security, and consistent deployment. This approach aligns with best practices for enterprise-grade AI infrastructure, ensuring scalability and compliance.

Key Architectural Decisions

  • Many Small Servers vs. Monolithic: Pinterest opted for multiple domain-specific MCP servers (e.g., Presto, Spark, Knowledge) instead of a single monolith. This allows for granular access control, prevents context crowding for the model, and enables teams to own smaller, more coherent toolsets.
  • Unified Deployment Pipeline: To streamline the creation of new MCP servers, a unified deployment pipeline was developed. This abstracts away infrastructure concerns like deployment and scaling, allowing domain experts to focus on business logic rather than operational mechanics.
  • Central MCP Registry: A critical component, the registry acts as the source of truth for approved MCP servers. It provides an API for AI clients to discover and validate servers and enables internal services to perform authorization checks. A web UI offers human operators visibility into server status, ownership, and security posture. This registry is foundational for governance and ensures only approved servers are used in production.
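The registry's role as a gatekeeper can be sketched as a simple lookup-and-validate step that an AI client performs before connecting to any MCP server. The class, field, and server names below are illustrative assumptions; Pinterest's internal registry API is not public.

```python
"""Sketch: client-side resolution against a central MCP registry.

A client asks the registry for a server by name and only receives an
endpoint if the server has passed the approval process. All identifiers
here (RegistryClient, ServerRecord, the example servers) are hypothetical.
"""
from dataclasses import dataclass


@dataclass(frozen=True)
class ServerRecord:
    name: str
    endpoint: str
    owner_team: str   # surfaced in the registry's web UI for operators
    approved: bool    # set after Security/Legal/GenAI review


class RegistryClient:
    def __init__(self, records):
        self._records = {r.name: r for r in records}

    def resolve(self, name: str) -> ServerRecord:
        """Return a server record only if it is registry-approved."""
        record = self._records.get(name)
        if record is None:
            raise LookupError(f"unknown MCP server: {name}")
        if not record.approved:
            raise PermissionError(f"server {name!r} is not approved for production")
        return record


# Usage: an agent framework resolves a server before opening a connection.
registry = RegistryClient([
    ServerRecord("presto-mcp", "https://presto-mcp.internal", "data-platform", True),
    ServerRecord("scratch-mcp", "https://scratch.internal", "experiments", False),
])
endpoint = registry.resolve("presto-mcp").endpoint  # succeeds
```

Centralizing this check means unapproved or decommissioned servers become unreachable by policy, not just by convention.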

Security and Governance Mechanisms

Given that AI agents interact with sensitive systems, security was paramount. The design incorporates a multi-layered security model and strict governance processes from the outset:

  • Dedicated MCP Security Standard: All production-bound MCP servers undergo a formal review process (Security, Legal/Privacy, GenAI) to define and enforce security policies.
  • Two-Layer Authentication & Authorization: Almost all MCP calls are governed by both end-user JWTs and mesh identities (SPIFFE-based).
  • Business-Group-Based Access Gating: For sensitive data systems (e.g., Presto MCP server), access is limited to specific business groups extracted from the user's JWT, ensuring least privilege. This prevents accidental data exposure even if a server is broadly reachable.
  • Human-in-the-Loop: For sensitive or expensive actions, agents propose actions using MCP tools, and humans must approve or reject them, adding a crucial safety net and preventing unintended automated changes. Elicitation is also used to confirm dangerous actions.
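The two-layer model described above can be illustrated as a single authorization gate that checks both identities before a tool call proceeds: the calling service's mesh (SPIFFE) identity and the end user's JWT, plus a business-group allowlist for sensitive servers like Presto. The claim names, SPIFFE IDs, and group names are assumptions for illustration, and JWT signature verification is presumed to happen upstream.

```python
"""Sketch: two-layer authorization for an MCP tool call.

Layer 1 checks the mesh (SPIFFE) identity of the calling service;
layer 2 checks the end-user JWT, including a business-group claim
for sensitive data systems. All identifiers are hypothetical.
"""
from dataclasses import dataclass

# Hypothetical allowlists a sensitive server (e.g. Presto MCP) might hold.
ALLOWED_SPIFFE_IDS = {"spiffe://pinterest/ns/agents/sa/mcp-gateway"}
ALLOWED_GROUPS = {"data-science", "analytics"}


@dataclass
class ToolCallRequest:
    jwt_claims: dict      # decoded and signature-verified upstream
    peer_spiffe_id: str   # taken from the mTLS connection, not the payload


def authorize(req: ToolCallRequest) -> None:
    """Raise PermissionError unless both layers pass."""
    # Layer 1: is the calling service a known mesh identity?
    if req.peer_spiffe_id not in ALLOWED_SPIFFE_IDS:
        raise PermissionError("unrecognized mesh identity")

    # Layer 2: is there a concrete end user behind the agent?
    user = req.jwt_claims.get("sub")
    if not user:
        raise PermissionError("missing end-user JWT subject")

    # Business-group gating: least privilege even if the server is reachable.
    groups = set(req.jwt_claims.get("business_groups", []))
    if not groups & ALLOWED_GROUPS:
        raise PermissionError(f"user {user} lacks an allowed business group")
```

Because the mesh identity comes from the transport layer while the user identity comes from the JWT, a compromised or misconfigured client cannot satisfy one check by forging the other.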

System Design Takeaway: Unified API for AI Agents

This article demonstrates a powerful pattern for building scalable and secure AI agent systems: by abstracting diverse backend tools behind a unified protocol and central registry, organizations can empower AI agents while maintaining strong governance and security. Key elements include domain-specific services, a centralized discovery and policy enforcement layer, and explicit human oversight for critical operations.

Tags: AI Agents, LLMs, Tooling, API, Authentication, Authorization, Microservices, Platform Engineering
