Why We Replaced Our MCP Server With a Folder
TagManifest's initial approach to AI-assisted container auditing was an MCP (Model Context Protocol) server. The idea made sense on paper: register a set of tools that an AI agent could call, pass container data through structured API endpoints, and let the agent query specific aspects of the audit programmatically. Tool registration, typed inputs and outputs, a formal protocol.
In practice, the MCP server was an instruction set with infrastructure overhead. Every "tool" was a read operation: fetch the container profile, get the findings list, read the Custom HTML classifications, pull the consent overview. No tool modified anything, no tool called an external API, and no tool did computation that the agent couldn't do by reading a file. The protocol was a formality wrapped around data retrieval.
The MCP server had six tools. getProfile() returned the container profile. getFindings() returned the findings list. getCustomHtml() returned classified code blocks. getConsent() returned the consent overview. getEvents() returned the trigger-to-tag mapping. getReference() returned rule explanations. Each tool read from a data structure and returned JSON, with no computation, no external calls, and no state changes. The protocol overhead of connecting to a server, discovering tools, and making typed API calls was buying nothing that reading a file wouldn't provide.
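A minimal sketch of what those handlers amounted to, in Python. The dispatch table and file names below are illustrative, not the actual server code (the consent overview, for instance, may not have lived in its own consent.json); the point is that every tool body reduces to a file read:

```python
import json
from pathlib import Path

DATA_DIR = Path("export")  # hypothetical location of the audit data

# Each "tool" maps to nothing more than loading one JSON file.
TOOLS = {
    "getProfile": "profile.json",
    "getFindings": "findings.json",
    "getCustomHtml": "custom-html.json",
    "getConsent": "consent.json",
    "getEvents": "events.json",
    "getReference": "reference.json",
}

def call_tool(name: str) -> dict:
    """Dispatch a tool call: open a file, parse JSON, return it."""
    with open(DATA_DIR / TOOLS[name], encoding="utf-8") as f:
        return json.load(f)
```

When every handler looks like this, the protocol around it is pure ceremony.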
So we replaced it with a folder.
What's in the export folder
The export is a zip file containing structured JSON and one markdown file.
CLAUDE.md is the instruction set. It tells the agent what the files contain and how to read them in order: orient with the profile first (what is this container, how does it score, what's the consent status), then review Custom HTML code blocks (where the real discoveries happen), then verify findings against the code you've read. Three passes: orient, review code, verify rules. This is the same workflow a human consultant follows, and the agent doesn't need special tools to execute it. It needs context and a sequence.
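An abbreviated sketch of what such a CLAUDE.md might contain. The headings and wording here are illustrative, not the shipped file:

```markdown
# Container Audit Export

Read the files in this order:

1. **profile.json** — orient: what is this container, how does it
   score, and what is its consent status?
2. **custom-html.json** — review every classified code block; this
   is where the real discoveries happen.
3. **findings.json** — verify each finding against the code you
   have already read. Fix root causes first; symptoms resolve.

reference.json explains any rule ID you encounter along the way.
```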
profile.json contains the container profile: tag count, trigger count, variable count, functional health score, hygiene profile, consent model, and platform inventory. Everything a consultant would build in their first two minutes with a container, structured so the agent reads this file and understands what it's working with.
findings.json contains the audit findings grouped by severity tier (errors, warnings, suggestions, info) with cascade metadata. Root-cause findings are labeled, cascade symptoms reference their root cause, and the agent can read the structure and prioritize the same way a human would: fix root causes first, symptoms resolve.
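As a rough illustration, a findings file shaped this way might look like the following. Field names such as root_cause and caused_by are assumptions about the schema, not the documented format:

```json
{
  "errors": [
    {
      "id": "consent-partial-coverage",
      "root_cause": true,
      "message": "Consent checks cover 12 of 31 tags"
    },
    {
      "id": "tag-fires-before-consent",
      "root_cause": false,
      "caused_by": "consent-partial-coverage",
      "message": "GA4 event tag fires before consent is granted"
    }
  ],
  "warnings": [],
  "suggestions": [],
  "info": []
}
```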
custom-html.json contains classified code blocks from every Custom HTML tag. Each block includes the tag name, the code, and a classification label (loads external script, pushes to dataLayer, reads/writes cookies, manipulates DOM, uses eval, contains ad pixels). The agent can review the code with classification context already attached.
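A hypothetical entry, with the schema details assumed for illustration:

```json
{
  "tags": [
    {
      "name": "Facebook Pixel - Fallback",
      "classifications": ["loads external script", "contains ad pixels"],
      "code": "<script>/* tag code appears here verbatim */</script>"
    }
  ]
}
```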
events.json maps triggers to tags to destinations: which events fire which tags, and where the data goes. This is the container's data flow in structured form.
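Illustratively, one event's slice of such a mapping might look like this (names and structure assumed, not the documented schema):

```json
{
  "purchase": {
    "triggers": ["CE - purchase"],
    "tags": ["GA4 - Purchase", "Meta - Purchase"],
    "destinations": ["Google Analytics 4", "Meta Ads"]
  }
}
```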
audit.json is the complete export for anything the other files don't cover.
reference.json contains rule explanations and concept definitions. When the agent encounters a finding like consent-partial-coverage, it can look up what the rule checks, the reasoning behind it, and what the fix is.
Files versus protocol for read-only data
An MCP server requires a running process. The agent connects to it, discovers available tools, calls them with parameters, and receives responses. For a container audit, this means starting a server, maintaining a connection, and making sequential API calls to retrieve data that already exists in a known structure. If the server crashes or the connection drops, the agent loses access to the data. If the server's API changes, the agent's tool calls break. The protocol adds a dependency where none is needed.
A folder requires nothing. The agent reads files. Claude Code, ChatGPT with file upload, Cursor, Windsurf, or any AI tool that can read files can consume the export, with no server process, no connection management, no tool discovery, and no API versioning. The export works today and will work with whatever AI tools exist next year because file reading is the most universal capability any agent has.
The CLAUDE.md file does the work that tool descriptions would do in an MCP server, explaining what each file contains, what order to read them in, and what questions each file answers. But instead of being locked into a protocol specification, it's plain markdown that the agent reads as context. The instructions can be nuanced in ways that tool descriptions can't express: "Start with the profile, but if consent_model is 'none', jump to findings.json and look at consent-tier findings first." Or: "When reviewing Custom HTML, cross-reference any tags classified as 'pushes to dataLayer' with the PII-related findings in findings.json." These are workflow instructions, not API parameters. They work because the agent reads them as natural language and applies judgment, which is what agents are good at.
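That second instruction is concrete enough to express as a few lines of Python, which is worth doing to show how little machinery the cross-reference needs. The field names below are assumptions about the export schemas, not the documented format:

```python
import json

def pii_cross_reference(custom_html_path: str, findings_path: str) -> dict:
    """Pair each PII-related finding with the tags that push to the
    dataLayer, so a reviewer can check them against each other.

    Field names ("tags", "classifications", "id") are assumptions
    about the export schema, not the documented format.
    """
    with open(custom_html_path, encoding="utf-8") as f:
        tags = json.load(f)["tags"]
    with open(findings_path, encoding="utf-8") as f:
        findings = json.load(f)

    # Tags whose code writes to the dataLayer.
    pushers = [t["name"] for t in tags
               if "pushes to dataLayer" in t.get("classifications", [])]

    # Finding IDs that mention PII, across all severity tiers.
    pii_ids = [fnd["id"] for tier in findings.values() for fnd in tier
               if "pii" in fnd["id"]]

    return {finding_id: pushers for finding_id in pii_ids}
```

Because every file is on disk at once, this kind of join costs two reads and a loop; against an MCP server it would be a sequence of tool calls.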
The structured JSON files do the work that tool responses would do, but they're available all at once rather than requiring sequential API calls. The agent can cross-reference findings with Custom HTML code, check the profile while reviewing events, and build a complete picture without making round trips to a server. The data is there, in files, ready to read, and the agent processes it in whatever order makes sense for the specific container it's looking at.
When MCP is the right choice
MCP is the right choice when the agent needs to perform actions, not just read data. An MCP server that connects to the GTM API and can read container versions, compare changes, or push fixes would justify the protocol overhead, because the tools would do things the agent can't do by reading files: authenticate, make API calls, modify state.
For read-only data that's generated once and consumed by the agent, a folder with clear documentation is simpler, more portable, and works with every AI tool that exists today. The protocol adds complexity without adding capability.
There's an existing GTM API MCP server that handles the action side: reading container versions, comparing changes, managing workspaces. TagManifest's folder export complements it by providing the diagnostic layer. One tool diagnoses, the other executes. They don't need to be the same system.
The AI-ready export as a product pattern
The insight isn't specific to TagManifest or GTM auditing. Any product that generates structured analysis from data can export it as an AI-ready folder. The components are consistent: a markdown file that explains the workflow, structured data files that the agent reads, and reference documentation for domain-specific terms. Security scanners, performance audit tools, code quality analyzers, compliance checkers, accessibility auditors: each produces findings from automated analysis, and each would benefit from an export format that an AI agent can consume without custom integration.
The folder pattern has three properties that make it work.
First, it's self-contained. Every piece of data and every instruction needed to understand that data lives inside the folder, and the agent doesn't need external access, network connectivity, or authentication. Drop the folder into a conversation, and the agent has everything it needs.
Second, it's sequenced. The CLAUDE.md file tells the agent what to read first, what to read second, and the reasoning behind the order. Without sequencing, an agent looking at six JSON files has to guess which one to start with. With sequencing, the agent follows a workflow that mirrors how a human expert would approach the same data: orient first, then investigate, then verify. The sequence is the methodology, expressed as reading order.
Third, it's portable. The same folder works with any AI tool that reads files, with no integration work, no plugin development, and no SDK. The cost of making your product's output AI-consumable drops to zero because you're not building an integration. You're organizing data and writing instructions.
Build the MCP server when the agent needs to do things. Ship the folder when the agent needs to understand things.