
Making GTM containers readable for AI

A GTM container export is a JSON file designed for Google's systems, not yours. Here's the practical workflow for turning raw container data into something an AI agent can actually work with.

A GTM container export is a JSON file. For a mid-complexity container (80 to 150 tags), that's 300 to 500KB of nested objects describing every tag, trigger, variable, folder, and consent setting. You can export it from the Admin panel and open it in a text editor, but making sense of it requires understanding GTM's internal data model: entity types, template references, consent parameters, trigger conditions stored as nested arrays of condition objects. The format exists so Google Tag Manager can reconstruct the container. Not designed for humans. Not designed for AI agents.
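To make the nesting concrete, here is a minimal sketch of peeking inside a raw export with Python. The field names (`containerVersion` holding `tag`, `trigger`, and `variable` arrays) follow the typical GTM export layout, but verify them against your own file; the entities shown are made up.

```python
import json

# Tiny stand-in for a real container export, which would run to
# hundreds of kilobytes. Field names assume the usual GTM layout.
raw = json.loads("""
{
  "exportFormatVersion": 2,
  "containerVersion": {
    "tag": [
      {"name": "GA4 Config", "type": "gaawc"},
      {"name": "Conversion Linker", "type": "gclidw"}
    ],
    "trigger": [{"name": "All Pages", "type": "pageview"}],
    "variable": [{"name": "Page Path", "type": "v"}]
  }
}
""")

version = raw["containerVersion"]
for entity in ("tag", "trigger", "variable"):
    items = version.get(entity, [])
    print(f"{entity}s: {len(items)} -> {[i['name'] for i in items]}")
```

Even this toy version shows the problem: the export tells you what exists, but every question about meaning (is that trigger too broad? is that tag dead?) is left to the reader.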

The common approach right now is to paste the raw JSON into ChatGPT or Claude and ask "what's in this container?" For small containers, this works. Rough inventory, some observations about tag types, maybe a note about consent. It breaks down quickly as the container grows. The AI has to parse the structure, infer what matters, and guess at severity with no scoring framework, no rule set, no consent model to work from. Run the same prompt twice and you get different findings in a different order with different severity assessments. No consistency. No way to compare results over time.

The AI is fine. The input format is the bottleneck. Parsing a complex data structure and interpreting what the contents mean are two jobs. Separating them produces better results from both.

What structured GTM audit data looks like

TagManifest runs the container JSON through 85 rules and produces scored, categorized findings. Each finding has a rule ID, a severity tier (error, optimization, or info), a category (consent, GA4 data quality, advertising, dead code, and six others), the specific tags affected, evidence for why it was flagged, and a recommended action. The container gets a functional health score, a hygiene profile across five dimensions, and per-category subscores.
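A single finding, sketched as a Python dict, might carry the fields described above. This is illustrative only, not TagManifest's actual schema; the rule ID, tag names, and wording are hypothetical.

```python
# Hypothetical shape of one scored finding: rule ID, severity tier,
# category, affected tags, evidence, and a recommended action.
finding = {
    "rule_id": "CONSENT-012",          # hypothetical rule ID
    "severity": "error",               # error | optimization | info
    "category": "consent",
    "affected_tags": ["Google Ads Remarketing", "Floodlight Counter"],
    "evidence": "Advertising tags gated on analytics_storage, not ad_storage",
    "recommendation": "Add ad_storage to the tags' consent checks",
}

print(f"[{finding['severity'].upper()}] {finding['rule_id']}: "
      f"{len(finding['affected_tags'])} tags affected")
```

The point of the structure is that every field answers a question an agent would otherwise have to guess at: how bad, what kind, which tags, why, and what next.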

Raw JSON tells you what exists. Structured audit data tells you what matters, how much it matters, and what to do about it. An AI agent working from structured findings can focus entirely on interpretation and research instead of spending its context window on parsing.

The results are also consistent. Same container, same score, same findings. That consistency matters when you're comparing container versions, tracking remediation progress, or handing the data to an AI agent that needs a stable foundation to reason from.

Four export formats for GTM audit data

The scan results need to go somewhere useful. TagManifest produces four export formats, each built for a different way of working with the data.

Markdown report. A narrative walkthrough of the container: health score, top findings, category breakdown, work plan organized by effort. This reads like a document. An AI agent can load it as context and orient itself the same way a human would, starting with the overall picture and drilling into specifics. It's also the format that works best for sharing with stakeholders who want to understand the container's state without opening a spreadsheet.

CSV. Every finding in a flat row with rule ID, severity, effort tier, category, and affected tags. Good for sorting, filtering, and bulk analysis. If you want to count how many consent findings you have, filter by effort tier, or cross-reference findings against a client's priority list, CSV is the format that gets out of your way. It also loads cleanly into any spreadsheet, database, or data tool.
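Because the rows are flat, the standard library is enough for the kinds of slicing described above. The column names here are assumptions based on the fields listed, and the rows are invented:

```python
import csv
import io
from collections import Counter

# Stand-in for the CSV export; column names are assumptions.
csv_text = """rule_id,severity,effort,category,affected_tags
CONSENT-012,error,quick win,consent,Google Ads Remarketing
GA4-003,optimization,focused remediation,ga4 data quality,GA4 Event
DEAD-001,info,quick win,dead code,Old UA Pageview
"""

rows = list(csv.DictReader(io.StringIO(csv_text)))

# "How many consent findings?" and "what's the effort breakdown?"
consent_findings = [r for r in rows if r["category"] == "consent"]
by_effort = Counter(r["effort"] for r in rows)

print(f"consent findings: {len(consent_findings)}")
print(f"effort breakdown: {dict(by_effort)}")
```

The same two lines of filtering work whether the rows live in a spreadsheet, a SQLite table, or a pandas DataFrame.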

JSON. The raw audit results as structured data. Every score, every finding, every tag inventory item, every hygiene dimension. This is the format for programmatic consumption: feeding results into other tools, building dashboards, running custom analysis, or integrating the audit into a larger pipeline.
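A typical programmatic use is rolling findings up for a dashboard. The structure below is an assumption (a top-level `findings` list with `category` and `severity` on each entry), not the documented export schema:

```python
import json
from collections import defaultdict

# Stand-in for the JSON export; shape is an assumption for illustration.
export = json.loads("""
{"score": 72, "findings": [
  {"category": "consent", "severity": "error"},
  {"category": "consent", "severity": "info"},
  {"category": "advertising", "severity": "error"}
]}
""")

# Build a category-by-severity matrix, e.g. for a dashboard widget.
matrix = defaultdict(lambda: defaultdict(int))
for f in export["findings"]:
    matrix[f["category"]][f["severity"]] += 1

for category, severities in sorted(matrix.items()):
    print(category, dict(severities))
```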

MCP server. This generates a Model Context Protocol server that AI assistants can connect to directly. Claude, Cursor, Windsurf, any MCP-compatible tool. Instead of copy-pasting a report into a chat window, the agent queries the audit programmatically. "Show me all critical consent findings." "List tags firing on All Pages." "What's the effort breakdown for advertising findings?" The agent gets structured responses it can reason over without you acting as the intermediary.

MCP is an open protocol that standardizes how AI applications connect to external data sources and tools. The TagManifest MCP server exposes the audit results as queryable resources: the agent can browse findings by category, look up individual tags, check scores, and pull the work plan. It's read-layer access to the audit data, not a connection to the GTM API. The container JSON never leaves your machine, and the MCP server runs locally.
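On the wire, MCP clients and servers exchange JSON-RPC 2.0 messages, so a question like "show me all critical consent findings" ultimately becomes a `tools/call` request. The framing below follows the protocol; the tool name and arguments are hypothetical, not TagManifest's actual tool surface (a real server advertises its tools via `tools/list`):

```python
import json

# Generic MCP tool-call framing (JSON-RPC 2.0).
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "list_findings",   # hypothetical tool name
        "arguments": {"category": "consent", "severity": "error"},
    },
}

wire = json.dumps(request)
print(wire)

# Shape of the structured result the agent would reason over.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"content": [{"type": "text", "text": "2 findings"}]},
}
assert response["id"] == request["id"]
```

The practical upshot is that the agent asks for exactly the slice it needs instead of holding the whole report in its context window.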

Using AI with GTM audit findings

The practical sequence: scan the container on TagManifest, then connect the MCP server to your AI tool (or load the markdown export into a project). From there, the agent has full context. Not just raw JSON, but interpreted, scored, categorized findings with evidence and recommendations attached.

Ask the agent to explain what a specific finding means. "This container has six advertising tags with consent type set to analytics_storage instead of ad_storage. What does that mean for compliance?" The agent can research the consent type requirements for each platform and explain the practical impact. It's working from a specific, verified finding rather than trying to infer the problem from raw configuration.

Ask it to build a remediation plan. The findings already have effort tiers (quick win, focused remediation, structural work, strategic improvement), so the agent groups them into a realistic work plan: fix this afternoon, schedule for next week, scope as a project.
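The grouping step itself is mechanical once the tiers exist. A sketch, with the tier names taken from the text above and the findings invented:

```python
from collections import defaultdict

# Tier names from the audit's effort model; findings are made up.
TIER_ORDER = ["quick win", "focused remediation",
              "structural work", "strategic improvement"]

findings = [
    {"rule_id": "DEAD-001", "effort": "quick win"},
    {"rule_id": "CONSENT-012", "effort": "structural work"},
    {"rule_id": "GA4-003", "effort": "quick win"},
]

# Bucket findings by effort tier, then print in escalation order.
plan = defaultdict(list)
for f in findings:
    plan[f["effort"]].append(f["rule_id"])

for tier in TIER_ORDER:
    if plan[tier]:
        print(f"{tier}: {', '.join(plan[tier])}")
```

What the agent adds on top of this bucketing is judgment: sequencing the work, flagging dependencies between fixes, and translating tiers into calendar time.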

Ask it to draft client deliverables. A summary email for a marketing director looks different from a technical changelog for the developer who'll implement the fixes. The agent produces both from the same findings because the structured data gives it both the raw material and the context needed to frame the message for each audience.

Structured findings go in, contextual interpretation comes out. The audit handles the work that rules do well: identifying misconfigurations, scoring severity, cataloging affected tags. The AI handles the work that requires synthesis: explaining implications, researching fixes, adapting communication for different audiences.

Portable GTM audit data

The export formats were a design decision, not a feature bolted on after the audit engine was built. A health score that only exists inside a browser tab is useful for one person at one moment. A health score that exports as structured data becomes part of a workflow: feeding into reports, loading into AI agents, comparing across container versions, tracking over time.

The MCP server is the natural extension. The audit produces structured, queryable data. AI agents consume structured, queryable data. The protocol just formalizes the connection. Instead of copying a markdown report into Claude and hoping the context window holds, the agent queries exactly the findings it needs, in the structure it needs them.

The audit results are also portable. CSV works in Excel. JSON works in any programming language. Markdown works in any chat interface. MCP works with any compatible agent. The data moves wherever you need it to go.

A GTM container export is a configuration file designed for Google's systems. The gap between that raw format and something an AI agent can productively work with is where the friction lives. Structured, scored, categorized audit data closes that gap. It's the difference between an AI that gives you observations and one that gives you a work plan.

Audit your GTM container

TagManifest gives you an instant health score and prioritized fixes.

Scan Your Container