Pinterest engineered an ecosystem around the open-source Model Context Protocol (MCP) to enable AI agents to interact with internal tools and data sources. This system features multiple domain-specific MCP servers, a central registry for discovery and governance, and robust security mechanisms including two-layer authentication and human-in-the-loop controls. The architecture prioritizes cloud-hosted servers, unified deployment, and extensive observability to support safe and scalable AI agent automation.
Pinterest's Model Context Protocol (MCP) ecosystem serves as a unified interface for large language models (LLMs) and AI agents to securely interact with diverse internal tools and data sources, moving away from fragmented, bespoke integrations. The core architectural decision was to favor internal cloud-hosted MCP servers over local ones, enabling centralized routing, security, and consistent deployment. This approach aligns with best practices for enterprise-grade AI infrastructure, ensuring scalability and compliance.
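The centralized-routing idea can be illustrated with a small sketch. All class, server, and tool names below are hypothetical, not taken from Pinterest's implementation: a registry maps tool names to domain-specific servers, and agents invoke tools through that one interface rather than contacting backends directly.

```python
# Hypothetical sketch: a central registry routing agent tool calls to
# domain-specific MCP-style servers behind one unified interface.
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class McpServer:
    """One domain-specific server exposing a set of named tools."""
    domain: str
    tools: dict[str, Callable[..., Any]] = field(default_factory=dict)

class McpRegistry:
    """Central discovery and routing layer: agents call tools by name
    and never talk to backend servers directly."""
    def __init__(self) -> None:
        self._routes: dict[str, McpServer] = {}

    def register(self, server: McpServer) -> None:
        # Each tool name is routed to the server that owns it.
        for tool_name in server.tools:
            self._routes[tool_name] = server

    def call(self, tool_name: str, **kwargs: Any) -> Any:
        server = self._routes.get(tool_name)
        if server is None:
            raise KeyError(f"unknown tool: {tool_name}")
        return server.tools[tool_name](**kwargs)

# Usage: two domain servers discovered through one registry.
ads = McpServer("ads", {"get_campaign": lambda cid: {"id": cid, "status": "active"}})
search = McpServer("search", {"query_pins": lambda q: [f"pin:{q}"]})

registry = McpRegistry()
registry.register(ads)
registry.register(search)

print(registry.call("get_campaign", cid=42))    # routed to the ads server
print(registry.call("query_pins", q="recipes")) # routed to the search server
```

Because every call passes through the registry, discovery, routing, and policy checks live in one place, which is what makes the cloud-hosted approach easier to govern than per-agent local servers.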
Given that AI agents interact with sensitive systems, security was paramount. The design incorporates a multi-layered security model and strict governance processes from the outset, including two-layer authentication and human-in-the-loop approval for critical operations.
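A minimal sketch of that security flow, under assumed token stores and tool lists (none of these names come from Pinterest's implementation): the first layer verifies the calling agent's identity, the second verifies the acting user, and sensitive tools additionally require explicit human approval before executing.

```python
# Hedged sketch of two-layer authentication plus a human-in-the-loop
# gate; all credentials and tool names are invented for illustration.
from typing import Callable

TRUSTED_AGENTS = {"agent-token-abc"}   # layer 1: agent/service identities
ACTIVE_USERS = {"user-token-xyz"}      # layer 2: end-user sessions
SENSITIVE_TOOLS = {"delete_campaign"}  # operations gated on a human

def authorize(agent_token: str, user_token: str) -> bool:
    """Two-layer check: both the agent identity and the acting user
    must be valid before any tool runs."""
    return agent_token in TRUSTED_AGENTS and user_token in ACTIVE_USERS

def execute_tool(tool: str, agent_token: str, user_token: str,
                 approve: Callable[[str], bool]) -> str:
    if not authorize(agent_token, user_token):
        raise PermissionError("failed two-layer authentication")
    # Human-in-the-loop: destructive tools need an explicit yes.
    if tool in SENSITIVE_TOOLS and not approve(tool):
        return f"blocked: human denied {tool}"
    return f"executed {tool}"

# Usage: a read-only call passes; a destructive call is gated on a human.
print(execute_tool("get_campaign", "agent-token-abc", "user-token-xyz",
                   approve=lambda t: False))
print(execute_tool("delete_campaign", "agent-token-abc", "user-token-xyz",
                   approve=lambda t: False))
```

The key property is that approval is checked at execution time, per call, so an agent can never cache its way past the human gate.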
System Design Takeaway: Unified API for AI Agents
This article demonstrates a powerful pattern for building scalable and secure AI agent systems: by abstracting diverse backend tools behind a unified protocol and central registry, organizations can empower AI agents while maintaining strong governance and security. Key elements include domain-specific services, a centralized discovery and policy enforcement layer, and explicit human oversight for critical operations.
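The policy-enforcement element of the pattern can be sketched as follows. The agent names and tool allow-lists here are invented for illustration: the central layer checks a per-agent policy before dispatching anything, so governance lives in one place rather than being re-implemented in each backend service.

```python
# Hypothetical sketch: centralized per-agent policy enforcement at the
# discovery/routing layer.
POLICIES = {
    "shopping-agent": {"query_pins", "get_campaign"},
    "ops-agent": {"restart_service"},
}

def enforce(agent: str, tool: str) -> None:
    """Raise unless this agent's policy allows this tool."""
    if tool not in POLICIES.get(agent, set()):
        raise PermissionError(f"{agent} may not call {tool}")

enforce("shopping-agent", "query_pins")  # allowed: no exception
try:
    enforce("shopping-agent", "restart_service")  # denied centrally
except PermissionError as err:
    print(err)
```

Centralizing the check like this is what lets a single registry both enable discovery and act as the policy enforcement point the article describes.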