How to Use MCP Servers with Custom GPTs
Why Bring MCP into Your GPT Workflow?
Custom GPTs become powerful when they can interact with external tools and services. The Model Context Protocol (MCP) provides a standardized way for servers to expose tools, resources, and prompts that AI assistants can discover and invoke. By connecting MCP servers to Custom GPTs, you can unlock access to a growing ecosystem of MCP-compatible tools without building custom integrations for each one.
The Architecture Challenge
Custom GPTs and MCP servers operate on fundamentally different principles. Custom GPTs interact with tools through individual REST endpoints (`/api/weather`, `/api/search`, `/api/database`), each representing a specific capability. In contrast, MCP servers expose all their tools through a single JSON-RPC method called `tools/call`, where the tool name is passed as a parameter.
This architectural difference means you can't directly connect an MCP server to a Custom GPT. Instead, you need an HTTP gateway to bridge the gap.
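To make the contrast concrete, here is roughly what the same lookup looks like in each model. The endpoint path, tool name, and arguments below are hypothetical:

```typescript
// REST style (what a Custom GPT Action expects): one endpoint per capability.
// GET https://gateway.example.com/api/weather?city=Berlin

// MCP style: every tool is invoked through the single tools/call method.
// The JSON-RPC request body looks like this:
const mcpRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "get_weather",           // which tool to run
    arguments: { city: "Berlin" }, // tool-specific arguments
  },
};
```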
System Architecture
How the Gateway Translates Between Protocols
The HTTP gateway performs protocol translation between REST and MCP. On the Custom GPT side, it exposes individual REST endpoints like `/api/weather` or `/api/database-query`. On the MCP side, it consolidates all requests through the single `tools/call` JSON-RPC method.
This translation is essential: without it, Custom GPTs have no way to communicate with MCP's unified tool interface.
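As a minimal sketch of this translation, assuming an Express-based gateway and an MCP server reachable over the Streamable HTTP transport (the URL, route, and tool name are hypothetical, and the MCP initialization handshake and session handling are omitted for brevity):

```typescript
import express from "express";

const app = express();
app.use(express.json());

// Hypothetical MCP server endpoint speaking JSON-RPC over HTTP.
const MCP_URL = "https://mcp.example.com/mcp";

// One REST route per MCP tool: the tool name is fixed in the handler,
// and the HTTP request body becomes the tool's arguments.
app.post("/api/weather", async (req, res) => {
  const rpcResponse = await fetch(MCP_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json", Accept: "application/json" },
    body: JSON.stringify({
      jsonrpc: "2.0",
      id: Date.now(),
      method: "tools/call",
      params: { name: "get_weather", arguments: req.body },
    }),
  });

  // Unwrap the JSON-RPC envelope and translate it back into plain HTTP.
  const { result, error } = await rpcResponse.json();
  if (error) return res.status(502).json({ error: error.message });
  res.json(result);
});

app.listen(3000);
```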
Tool Discovery Mechanisms
| Custom GPT (OpenAPI) | MCP Server |
|---|---|
| Tools defined statically in OpenAPI spec | Tools discovered dynamically via `tools/list` method |
| ChatGPT reads spec once when Action is added | AI agents can query available tools at runtime |
| Fixed set of endpoints | Tools can change based on context/permissions |
MCP servers expose a `tools/list` method that returns available tools and their schemas dynamically. This enables AI clients to discover new capabilities at runtime. Custom GPTs, however, require static OpenAPI specifications defined when the Action is configured.
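For example, a client can enumerate a server's tools with a single JSON-RPC call. This sketch assumes a server on the Streamable HTTP transport at a hypothetical URL and skips the initialization handshake:

```typescript
const response = await fetch("https://mcp.example.com/mcp", {
  method: "POST",
  headers: { "Content-Type": "application/json", Accept: "application/json" },
  body: JSON.stringify({ jsonrpc: "2.0", id: 1, method: "tools/list" }),
});

// The result contains an array of { name, description, inputSchema }
// entries, one per tool currently exposed by the server.
const { result } = await response.json();
for (const tool of result.tools) {
  console.log(`${tool.name}: ${tool.description}`);
}
```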
Putting It All Together
To connect MCP servers to Custom GPTs, deploy an HTTP gateway that:
- Exposes each MCP tool as a separate REST endpoint
- Translates incoming REST requests to MCP `tools/call` invocations
- Converts MCP responses back to HTTP format
Once deployed, generate an OpenAPI specification for the gateway's REST endpoints and configure it as a Custom GPT Action. From ChatGPT's perspective, it is interacting with a standard REST API; the underlying MCP protocol remains invisible.
Implementation Guide
These steps apply whether you're running your own HTTP gateway or using a managed service.
Step 1: Create or Configure Your Custom GPT
Navigate to GPTs → + Create → Configure in ChatGPT. Name your GPT and optionally add an avatar. Scroll down to the Actions section and click Create new action.
Step 2: Configure Authentication
Choose an authentication method based on your deployment stage:
| Auth option | Good for… | Trade-offs |
|---|---|---|
| None | Disposable demos; public data | Anyone with the URL can invoke the tool; no audit trail |
| API Key (Basic/Bearer) | Solo testing; small-team pilots | Manual key rotation; a shared secret is visible to anyone who configures the GPT |
| OAuth 2.0 | Production roll-outs; per-user attribution | Requires an auth server and consent flow |
Note: MCP servers must be wrapped with an HTTP gateway before adding them as Actions. When configuring authentication, you'll need to provide your gateway's base URL and credentials based on the auth method you select.
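If you choose API Key authentication, the gateway must verify the key on every request. A minimal sketch, assuming an Express gateway, Bearer mode, and a hypothetical `GATEWAY_API_KEY` environment variable (in this mode, ChatGPT sends the key in the `Authorization: Bearer` header):

```typescript
import type { Request, Response, NextFunction } from "express";

// Illustrative bearer-token check for the gateway.
export function requireApiKey(req: Request, res: Response, next: NextFunction) {
  const header = req.get("Authorization") ?? "";
  const token = header.startsWith("Bearer ") ? header.slice(7) : "";

  if (!token || token !== process.env.GATEWAY_API_KEY) {
    return res.status(401).json({ error: "Invalid or missing API key" });
  }
  next();
}

// Usage (hypothetical): app.use("/api", requireApiKey);
```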
Production Considerations
When deploying to production:
- Authentication strategy: Start with API keys for prototypes, but migrate to OAuth 2.0 for production. OAuth provides per-user attribution and eliminates shared secrets
- Key rotation policies: If using API keys, implement rotation schedules tied to personnel changes. Rotate immediately when team members leave or change roles
- Secure credentials properly: Store API keys server-side and never expose them in client-side code. For OAuth, register your gateway as a client with your identity provider
- Implement comprehensive logging: Track all tool invocations with audit trails, and apply rate limiting at the gateway (see the sketch after this list)
- Standardize configurations: Define tools once and reuse across multiple Custom GPTs
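As a starting point for the logging item above, here is an illustrative Express middleware; a real deployment would write to durable storage and, with OAuth, attach a per-user identity to each entry:

```typescript
import type { Request, Response, NextFunction } from "express";

// Records which tool endpoint was invoked and how the call ended.
export function auditLog(req: Request, res: Response, next: NextFunction) {
  const startedAt = Date.now();
  res.on("finish", () => {
    console.log(JSON.stringify({
      time: new Date(startedAt).toISOString(),
      path: req.path,          // which tool endpoint was called
      status: res.statusCode,  // outcome of the invocation
      durationMs: Date.now() - startedAt,
    }));
  });
  next();
}
```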
Step 3: Add the OpenAPI Schema
Custom GPTs require an OpenAPI specification to understand available actions. You need to either import an existing spec or generate one for your HTTP gateway.
If your gateway already publishes an OpenAPI spec, paste it in or import it via URL in the Custom Actions configuration panel. Otherwise, you can use ChatGPT's "Actions GPT" helper to generate a spec from your API documentation.
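If you are building the gateway yourself, one approach is to derive the spec from the MCP server's `tools/list` response, since each tool already carries a JSON Schema for its input and OpenAPI 3.1 accepts JSON Schema directly. A rough sketch with hypothetical names:

```typescript
interface McpTool {
  name: string;
  description?: string;
  inputSchema: object; // JSON Schema, as returned by tools/list
}

// Builds a minimal OpenAPI 3.1 document with one POST path per tool.
function buildOpenApiSpec(tools: McpTool[]) {
  const paths: Record<string, unknown> = {};
  for (const tool of tools) {
    paths[`/api/${tool.name}`] = {
      post: {
        operationId: tool.name,
        description: tool.description ?? "",
        requestBody: {
          content: { "application/json": { schema: tool.inputSchema } },
        },
        responses: { "200": { description: "Tool result" } },
      },
    };
  }
  return {
    openapi: "3.1.0",
    info: { title: "MCP Gateway", version: "1.0.0" },
    servers: [{ url: "https://gateway.example.com" }], // hypothetical base URL
    paths,
  };
}
```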
Open source option: mcpo is a simple MCP-to-OpenAPI proxy that automatically generates OpenAPI schemas for any MCP server.
Managed gateways like MintMCP go further: they generate OpenAPI specs automatically, handle user management and SSO, and provide built-in audit logs. This eliminates the manual work of maintaining schemas and implementing authentication infrastructure.
Step 4: Share with Your Team
If you're on a ChatGPT Team or Enterprise plan, you can share your Custom GPT with team members. When sharing:
- OAuth-enabled GPTs: Each team member will be prompted to authenticate when they first use the GPT, ensuring individual access control and audit trails
- API Key GPTs: All users share the same credentials, making it impossible to track individual usage
- Documentation: Provide clear instructions on what the GPT can do; well-written action descriptions help the GPT select the right tools
This is why OAuth is recommended for team deployments: it provides per-user authentication and enables proper access control and auditing.
Next Steps
- Start prototyping with any MCP-compliant service wrapped in an HTTP gateway
- For production deployments, consider using a managed gateway that handles OAuth, logging, and policy enforcement
- Get access to MintMCP for a fully managed solution with automated OpenAPI generation, SSO, and audit logs