Streamlining Mapbox Development with the Model Context Protocol DevKit

Overview of Modern Location Development

Building mapping applications traditionally requires a constant context-switch between the IDE, documentation, and the console. You might be writing JavaScript in one window while jumping to a dashboard to create a new access token or adjust a map style in another. This fragmented workflow slows down the creative process. The Mapbox DevKit MCP server solves this by bringing the entire Mapbox ecosystem directly into the developer's conversation with an AI agent.

By implementing the Model Context Protocol (MCP) created by Anthropic, developers can now grant AI coding assistants such as Claude Code the ability to perform complex location-based tasks. This isn't just about code completion. It's about giving an LLM the specific tools it needs to generate styles, manage authentication, and process geographic data like GeoJSON without leaving the terminal. This approach, often called "vibe coding," allows for rapid prototyping through natural language, where the agent handles the heavy lifting of API orchestration.

Vibe coding with the Mapbox DevKit MCP Server

Prerequisites and Technical Foundation

To effectively use the Mapbox DevKit MCP server, you should have a solid footing in modern web development. Familiarity with TypeScript is helpful if you plan to extend the server, though not strictly required for general use. You will need a Mapbox account and a primary access token with specific scopes enabled.

Crucially, you must be comfortable using command-line interface (CLI) tools. The server operates best when paired with an MCP-compatible client. While Claude Code is the primary example used in many demonstrations, the protocol is open-source, meaning any tool that supports the Model Context Protocol standard can interact with these tools. You should also understand the basics of JWTs (JSON Web Tokens), as the server uses them to identify your Mapbox username and validate permissions for API calls.
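To make the JWT connection concrete, here is a minimal sketch of extracting a username from a Mapbox access token. It assumes Node's Buffer and the documented token layout (a JWT-like string whose base64url payload carries a "u" claim with the account username); the DevKit's own extraction logic may differ.

```typescript
// Sketch: pull the Mapbox username out of an access token.
// Mapbox tokens look like "pk.<base64url payload>.<signature>",
// and the payload's "u" claim holds the account username.
function getUsernameFromToken(token: string): string {
  const parts = token.split(".");
  if (parts.length < 2) throw new Error("Not a valid Mapbox token");
  // base64url -> base64, then decode the JSON payload
  const b64 = parts[1].replace(/-/g, "+").replace(/_/g, "/");
  const payload = JSON.parse(Buffer.from(b64, "base64").toString("utf8"));
  if (typeof payload.u !== "string") throw new Error("Token has no username claim");
  return payload.u;
}

// Synthetic token for illustration -- not a real credential:
const fakeToken =
  "pk." + Buffer.from(JSON.stringify({ u: "alice" })).toString("base64url") + ".sig";
```

Because the username is embedded in the token itself, the server can build per-user API URLs without a separate "who am I" request.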

Key Libraries and Architecture

The Mapbox DevKit MCP server is built on a modern stack designed for safety and speed. The architecture relies on three primary pillars:

  • @modelcontextprotocol/sdk: The official SDK for building MCP servers, created by Anthropic. It handles the low-level protocol communication and tool registration, allowing developers to focus on tool logic rather than connection management.
  • TypeScript: The entire codebase uses TypeScript to ensure type safety. This reduces runtime errors when the LLM attempts to pass arguments to various tools.
  • Zod: The server utilizes Zod schemas for runtime validation. These schemas serve a dual purpose: they validate the data coming from the LLM and provide the metadata (descriptions) that the LLM uses to understand how to call the tool.

Code Walkthrough: Token Creation Logic

Understanding how a tool is structured is key to mastering the DevKit. Let's look at the implementation of the token creation tool, which inherits from a base Mapbox API class. The structure follows a strict pattern to ensure the LLM knows exactly what inputs are required.

Defining the Schema

Every tool starts with a schema. This schema defines the parameters the LLM can manipulate. For a token, we need notes, scopes, and potentially an expiration time.

import { z } from 'zod';

const CreateTokenSchema = z.object({
  note: z.string().describe("A description of the token's purpose"),
  scopes: z.array(z.string()).describe("The Mapbox scopes to grant the token"),
  allowedUrls: z.array(z.string()).optional().describe("Restrict token to specific URLs"),
  expires: z.string().optional().describe("ISO 8601 timestamp for token expiration")
});

The .describe() methods are the most critical part here. They act as the "documentation" for the AI. When the agent reads the tool's manifest, it sees these descriptions and uses them to decide which user input should map to which JSON field.
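When the client requests the tool manifest, those descriptions surface as a JSON Schema. The shape below is illustrative of what an MCP client might receive for this tool; the exact output depends on the SDK's schema converter.

```typescript
// Illustrative: the input schema an MCP client might see for create_token,
// with the .describe() strings surfaced as "description" fields.
const createTokenManifestSchema = {
  type: "object",
  properties: {
    note: { type: "string", description: "A description of the token's purpose" },
    scopes: {
      type: "array",
      items: { type: "string" },
      description: "The Mapbox scopes to grant the token",
    },
    allowedUrls: {
      type: "array",
      items: { type: "string" },
      description: "Restrict token to specific URLs",
    },
    expires: { type: "string", description: "ISO 8601 timestamp for token expiration" },
  },
  required: ["note", "scopes"],
};
```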

Implementing the Tool Logic

The implementation class handles the actual HTTP request to the Mapbox Tokens API. It uses a base tool class to handle boilerplate like JWT extraction and error logging.

export class CreateTokenTool extends MapboxApiBaseTool {
  name = "create_token";
  description = "Creates a new Mapbox public access token with specified scopes.";

  async execute(input: z.infer<typeof CreateTokenSchema>) {
    const username = this.getUsernameFromToken();
    const url = `https://api.mapbox.com/tokens/v2/${username}`;

    const response = await fetch(url, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'Authorization': `Bearer ${this.accessToken}`
      },
      body: JSON.stringify(input)
    });

    if (!response.ok) throw new Error(`Failed to create token: ${response.status}`);
    return await response.json();
  }
}

This pattern separates the interface (the schema) from the implementation (the API call). The execute function only runs if the input matches the Zod schema, providing a robust layer of protection against malformed LLM outputs.
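The validate-then-execute gate can be sketched as follows, with a hand-rolled check standing in for Zod's schema.parse (the real DevKit uses Zod; the function name here is illustrative):

```typescript
// Sketch: reject malformed LLM output before execute() ever runs.
type CreateTokenInput = {
  note: string;
  scopes: string[];
  allowedUrls?: string[];
  expires?: string;
};

function parseCreateTokenInput(raw: unknown): CreateTokenInput {
  const obj = raw as Record<string, unknown>;
  if (typeof obj?.note !== "string") throw new Error("note must be a string");
  if (!Array.isArray(obj.scopes) || !obj.scopes.every((s) => typeof s === "string")) {
    throw new Error("scopes must be an array of strings");
  }
  // execute() only ever sees input that passed these checks
  return obj as unknown as CreateTokenInput;
}
```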

Syntax Notes and Conventions

When working within the MCP ecosystem, certain conventions help maintain compatibility. The DevKit implementation follows the snake_case naming convention for tool names (e.g., create_style, list_tokens), which is the standard expected by most MCP clients.

A notable pattern in the DevKit is the use of "Style Helpers." Instead of forcing the LLM to guess the entire Mapbox style specification, the server provides a helper that breaks down style creation into high-level features like "buildings," "roads," and "water." This abstraction makes it much easier for the LLM to generate valid styles without getting lost in the deep nesting of the GL JS style JSON format.
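A hypothetical sketch of that abstraction: high-level feature names map to a small set of GL paint overrides, so the agent only ever chooses from a fixed vocabulary. The names, colors, and structure below are illustrative, not the DevKit's actual API.

```typescript
// Hypothetical style helper: translate high-level feature names into
// paint overrides instead of exposing raw style JSON to the LLM.
const FEATURE_PAINT: Record<string, Record<string, unknown>> = {
  water: { "fill-color": "#1a1a2e" },
  roads: { "line-color": "#ff7518" },
  buildings: { "fill-extrusion-color": "#3c2a4d" },
};

function buildPaintOverrides(features: string[]): { feature: string; paint: Record<string, unknown> }[] {
  return features
    .filter((f) => f in FEATURE_PAINT) // silently drop unknown feature names
    .map((f) => ({ feature: f, paint: FEATURE_PAINT[f] }));
}
```

The design choice is the same either way: a small, enumerable surface is far easier for an LLM to use correctly than a deeply nested open-ended document.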

Practical Examples: The "Halloween Night" Workflow

Imagine you are building a holiday-themed landing page. Instead of manually picking hex codes for a dark map, you can prompt your coding agent: "Create a Halloween-themed style and apply it to my local index.html."

  1. Style Generation: The agent calls the style_helper tool. It identifies that "Halloween" implies dark backgrounds, orange labels, and purple accents. It sends these preferences to the Mapbox Styles API.
  2. Visualization: The agent then calls preview_style, which returns a URL. The agent can even open your browser automatically so you can inspect the "vibe."
  3. Local Integration: Once the style is created, the agent searches your local directory for an HTML file. It finds the mapboxgl.Map initialization and updates the style property with the new Style URL it just generated.
  4. Refinement: If the map is too dark, you simply say, "Make the labels more readable." The agent updates the existing style in-place. This iterative loop happens in seconds rather than minutes.
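The local integration step can be sketched as a simple text substitution. A single regex replace is an assumption for illustration; an agent may edit the file more carefully.

```typescript
// Sketch of step 3: swap the style URL inside a local mapboxgl.Map init.
function applyStyleUrl(html: string, styleUrl: string): string {
  return html.replace(
    /style:\s*['"]mapbox:\/\/styles\/[^'"]+['"]/,
    `style: '${styleUrl}'`
  );
}

const before = `new mapboxgl.Map({ container: 'map', style: 'mapbox://styles/mapbox/streets-v12' });`;
const after = applyStyleUrl(before, "mapbox://styles/alice/halloween-night");
```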

Tips and Gotchas

Security is paramount when working with AI and API keys. The DevKit intentionally blocks the creation of secret scopes through the LLM. You should never pass your secret keys into an LLM prompt; instead, provide them as environment variables when starting the server. This ensures the AI can use the key to perform actions without ever needing to expose the key itself in its output.
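In practice this means the server reads the credential from its environment at startup, so the token never transits a prompt or a model response. The variable name below is an assumption for illustration.

```typescript
// Sketch: load the access token from the environment, never from a prompt.
// MAPBOX_ACCESS_TOKEN is an assumed variable name.
function loadAccessToken(
  env: Record<string, string | undefined> = process.env
): string {
  const token = env.MAPBOX_ACCESS_TOKEN;
  if (!token) {
    throw new Error("Set MAPBOX_ACCESS_TOKEN before starting the server");
  }
  return token;
}
```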

Another common mistake is providing overly large GeoJSON files to the preview tool. Browsers have URL length limits, and since the preview tool often encodes the data directly into the URL for instant visualization, extremely large datasets may fail to load. For large data, it is better to use the Mapbox Tiling Service (MTS) tools, which are on the project's roadmap, to convert raw data into optimized vector tiles.
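A defensive check like the following can catch oversized payloads before they hit a URL length limit. The ~8 KB threshold is an illustrative assumption; real limits vary by browser.

```typescript
// Sketch: decide whether a GeoJSON payload is small enough to inline
// into a preview URL. The threshold is an illustrative assumption.
const MAX_URL_DATA_BYTES = 8000;

function canInlineGeojson(geojson: object): boolean {
  const encoded = encodeURIComponent(JSON.stringify(geojson));
  return encoded.length <= MAX_URL_DATA_BYTES;
}
```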
