
What static analysis can and can't tell you about a GTM container

A GTM container export is a blueprint, not a recording. Here's what a JSON scan can reliably catch, where it reaches its limits, and how to use it as the first step in a proper audit.

When you export a GTM container, you get a JSON file describing every tag, trigger, variable, and consent setting: a complete structural definition. What it is not is a recording of what actually happens when those tags run on a live page. That distinction matters for understanding what a scan can tell you and where you need other tools to fill the gaps.

TagManifest is a static analysis tool. It reads the container definition and runs 85 rules against it. Some of those rules produce findings with high confidence: a tag referencing a Universal Analytics property is dead code, full stop. Others produce findings that need verification in the live environment before you act on them. The tool got more useful when it got more honest about which was which.

What the GTM container JSON contains

A GTM container export contains the full configuration: tag definitions, trigger conditions, variable declarations, consent settings, folder structure, and firing sequences. It's everything GTM needs to reconstruct the container. Google's container export documentation describes the format and what each section represents.
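Loading an export and taking an inventory of those sections is a one-liner once the JSON is parsed. A minimal sketch in Python, using a toy stand-in for a container (the `containerVersion` wrapper and the `tag`/`trigger`/`variable` arrays follow the real export format, but this fragment is illustrative, not a full container):

```python
import json

# Toy stand-in for a GTM container export.
export_text = """
{
  "containerVersion": {
    "tag":      [{"name": "GA4 - Page View", "type": "gaawe"}],
    "trigger":  [{"name": "All Pages", "type": "PAGEVIEW"}],
    "variable": [{"name": "DL - page_type", "type": "v"}]
  }
}
"""

version = json.loads(export_text)["containerVersion"]
summary = {section: len(items) for section, items in version.items()}
print(summary)  # {'tag': 1, 'trigger': 1, 'variable': 1}
```

Everything a scan can say comes from walking these arrays and the references between them.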

What the JSON doesn't contain is runtime state. It doesn't know whether your data layer exists on the page, whether it gets populated with the values your tags expect, or whether the CSS selectors in your click triggers still match the current DOM. It describes intent. What actually happens depends on the page, the browser, the consent banner, and a dozen other things outside the container.

This is obvious when you say it plainly, but it has real consequences for how you read scan results. A finding that says "this trigger uses a CSS selector that may not match any element" is the tool being honest: it can see the selector in the JSON, but it can't visit your site to check whether the element exists. That's a thread to pull, not a verdict.

What a GTM container scan catches

Static analysis is strongest when the answer lives entirely inside the JSON. No runtime context needed.

Dead code. Tags sending data to Universal Analytics properties are dead. Google sunset UA in July 2023, and those tags generate network requests that go nowhere. The JSON contains the tracking ID, and if it starts with UA-, the tag is definitively obsolete. Same logic applies to orphaned triggers (not connected to any tag) and orphaned variables (not referenced anywhere). The container defines these relationships explicitly; a scan can map them completely.
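Both dead-code checks reduce to simple passes over the export. A sketch, assuming tag and trigger shapes that approximate the real export format (TagManifest's actual rules are not public):

```python
tags = [
    {"name": "UA - Pageview", "firingTriggerId": ["1"],
     "parameter": [{"key": "trackingId", "value": "UA-12345-1"}]},
    {"name": "GA4 - Pageview", "firingTriggerId": ["1"],
     "parameter": [{"key": "measurementId", "value": "G-ABC123"}]},
]
triggers = [
    {"triggerId": "1", "name": "All Pages"},
    {"triggerId": "2", "name": "Old Promo Click"},  # referenced by no tag
]

def is_ua_tag(tag):
    """A UA- tracking ID marks a tag as definitively dead since July 2023."""
    return any(p["key"] == "trackingId" and str(p.get("value", "")).startswith("UA-")
               for p in tag.get("parameter", []))

dead_tags = [t["name"] for t in tags if is_ua_tag(t)]

referenced = {tid for t in tags for tid in t.get("firingTriggerId", [])}
orphaned_triggers = [tr["name"] for tr in triggers if tr["triggerId"] not in referenced]

print(dead_tags)          # ['UA - Pageview']
print(orphaned_triggers)  # ['Old Promo Click']
```

The same reference-walking generalizes to orphaned variables: collect every variable name mentioned in a tag or trigger, then diff against the declared list.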

Configuration errors. A GA4 event tag without a measurement ID won't send data. An advertising tag gated behind analytics_storage instead of ad_storage is using the wrong consent type. A waitForUpdate value of 0 means GTM isn't waiting for the consent banner before tags fire. These are configuration facts visible in the JSON. The tag either has the right value or it doesn't.
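Two of those checks as a hedged sketch. The tag types `gaawe` and `awct` are GTM's internal identifiers for GA4 event and Google Ads conversion tags; the flattened `consentSettings` shape here is an assumption for readability, since the real export nests it differently:

```python
def check_tag(tag):
    """Flag configuration facts visible directly in the tag definition."""
    findings = []
    params = {p["key"]: p.get("value") for p in tag.get("parameter", [])}
    # A GA4 event tag with no measurement ID sends nothing.
    if tag["type"] == "gaawe" and not params.get("measurementIdOverride"):
        findings.append("GA4 event tag without a measurement ID")
    # An advertising tag should be gated on ad_storage, not analytics_storage.
    consent = set(tag.get("consentSettings", {}).get("consentType", []))
    if tag["type"] == "awct" and consent and "ad_storage" not in consent:
        findings.append("conversion tag gated on the wrong consent type")
    return findings

misconfigured = {"type": "awct", "parameter": [],
                 "consentSettings": {"consentType": ["analytics_storage"]}}
print(check_tag(misconfigured))  # ['conversion tag gated on the wrong consent type']
```

Each check is a pure function of the JSON, which is exactly why these findings carry high confidence.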

Consent setup. The container JSON specifies what consent types each tag checks and whether Additional consent is configured on top of Built-In consent. A scan can detect the double-gate problem (conflicting consent layers that suppress conversions), missing Consent Mode v2 signals on advertising tags, and tags with no consent protection at all. The consent configuration is structural. Whether the CMP actually enforces it at runtime is a different question.

Architecture patterns. How many tags fire on the All Pages trigger. How many Custom HTML tags exist and whether they load jQuery. Whether there are multiple GA4 measurement IDs (intentional in some setups, accidental in most). How many contributor patterns show up in the naming conventions. These are container-level observations that help you understand the shape of what you're working with before you open Preview mode.
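Those container-level counts fall out of a couple of passes over the tag list. A sketch (the All Pages trigger ID `2147479553` is, to the best of my knowledge, GTM's built-in pageview trigger in exports; verify against your own file):

```python
from collections import Counter

ALL_PAGES_ID = "2147479553"  # GTM's built-in All Pages trigger (assumption)

tags = [
    {"name": "GA4 - Page View", "type": "gaawe", "firingTriggerId": [ALL_PAGES_ID]},
    {"name": "Chat Widget",     "type": "html",  "firingTriggerId": [ALL_PAGES_ID]},
    {"name": "Ads - Purchase",  "type": "awct",  "firingTriggerId": ["42"]},
]

on_all_pages = [t["name"] for t in tags if ALL_PAGES_ID in t["firingTriggerId"]]
by_type = Counter(t["type"] for t in tags)

print(f"{len(on_all_pages)} tags fire on All Pages")   # 2 tags fire on All Pages
print(f"{by_type['html']} Custom HTML tag(s)")         # 1 Custom HTML tag(s)
```

None of this tells you whether the setup is wrong, only what shape it has, which is the point of the orientation pass.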

Completeness gaps. If a container has GA4 page_view and purchase events but no add_to_cart, begin_checkout, or view_item events, the ecommerce funnel has gaps. Google's recommended ecommerce events define what a complete implementation looks like. A scan can compare what's configured against what's expected.
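The gap check itself is a set difference between the recommended funnel and what the container defines. A sketch using the events named above (the recommended list here is abbreviated; Google's full list is longer):

```python
# Abbreviated funnel from Google's recommended ecommerce events.
RECOMMENDED_FUNNEL = {"view_item", "add_to_cart", "begin_checkout", "purchase"}

configured_events = {"page_view", "purchase"}  # what the container defines

gaps = sorted(RECOMMENDED_FUNNEL - configured_events)
print(gaps)  # ['add_to_cart', 'begin_checkout', 'view_item']
```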

Where static analysis reaches its limits

The boundary is anywhere the answer depends on what happens outside the JSON.

Whether tags actually fire on real pages. The JSON says a tag should fire on All Pages with a DOM Ready trigger. Whether it does depends on whether the GTM snippet is installed, whether consent is granted, and whether a tag sequencing dependency failed. Tag Assistant and Preview mode exist for exactly this: watching tags fire (or not fire) on actual pages.

Whether the data layer provides expected values. A GA4 event tag might reference a data layer variable called ecommerce.transaction_id. The JSON confirms the variable is defined and the tag references it. Whether your checkout page actually pushes that value into the data layer is a question the container can't answer. Inspecting window.dataLayer on the live page or checking GA4's DebugView shows what actually arrives.
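The scan side of that check is just path resolution inside the container; the verification side means resolving the same dotted path against what the page actually pushed. A sketch against a hypothetical debug snapshot of `window.dataLayer`:

```python
def resolve(path, snapshot):
    """Walk a dotted data-layer path against a captured push."""
    current = snapshot
    for key in path.split("."):
        if not isinstance(current, dict) or key not in current:
            return None
        current = current[key]
    return current

# Hypothetical snapshot of what the checkout page pushed.
push = {"ecommerce": {"transaction_id": "T-1001"}}

print(resolve("ecommerce.transaction_id", push))  # T-1001
print(resolve("ecommerce.value", push))           # None: missing at runtime
```

The container can only tell you the first path is referenced; whether it resolves to a value is a live-page question.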

Whether consent banners actually block tags in the browser. TagManifest checks whether tags have the right consent types configured in GTM. But your CMP (Cookiebot, OneTrust, Usercentrics) maintains its own cookie classification. If _ga is classified as "Necessary" in the CMP dashboard, analytics cookies fire regardless of what GTM says. The CMP sits upstream of GTM in the consent stack, and cookie classifications can override tag-level settings. Verifying that requires a CMP audit, not a container scan.

Event volumes and data quality in GA4. A container can have a perfectly configured purchase event that fires on zero pages because the trigger condition never matches. Or it fires correctly but sends $0 revenue because the data layer value is missing. GA4's DebugView and BigQuery exports show what data actually arrives. The container only shows what was supposed to be sent.

Scripts and cookies outside GTM. Third-party scripts hardcoded in page templates, injected by CMS plugins, or loaded by embedded widgets create cookies that no GTM audit can see. Server-side cookies set in HTTP response headers bypass the client-side consent stack entirely. A container scan covers the container. The page has a larger surface area.

Findings that need live verification

An earlier version of the TagManifest audit was too aggressive. It read the JSON and drew conclusions that turned out to be wrong in practice. A tag with consentStatus: NOT_NEEDED got flagged as a consent bypass. For Custom HTML tags, that's accurate. For native Google tags with Built-In consent, it's the recommended configuration. The scan was flagging correct setups as problems.

The tool improved when it started distinguishing between high-confidence findings (this is wrong in the JSON, period) and contextual findings (this looks wrong, but verify it on the page). The consent audit now uses a Built-In consent map to evaluate findings against what each tag template enforces. The ecommerce checks note which gaps are definitive (no purchase event exists) versus which need runtime verification (purchase event exists but revenue parameter may not be populated).
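The shape of that Built-In consent map is easy to sketch, though the entries below are assumptions for illustration; TagManifest's actual map is not public:

```python
# Illustrative map: which consent types each native template already
# enforces on its own. Entries are assumptions for this sketch.
BUILT_IN_CONSENT = {
    "gaawe": {"analytics_storage", "ad_storage"},  # GA4 event tag
    "awct":  {"ad_storage"},                       # Ads conversion tag
    "html":  set(),                                # Custom HTML enforces nothing
}

def consent_finding(tag):
    """NOT_NEEDED is only a bypass when the template enforces nothing itself."""
    if tag.get("consentStatus") != "NOT_NEEDED":
        return None
    if BUILT_IN_CONSENT.get(tag["type"], set()):
        return None  # native tag: built-in consent already covers it
    return f"{tag['name']}: no built-in consent and no additional consent configured"

print(consent_finding({"name": "GA4 - Page View", "type": "gaawe",
                       "consentStatus": "NOT_NEEDED"}))  # None (correct setup)
print(consent_finding({"name": "Chat Widget", "type": "html",
                       "consentStatus": "NOT_NEEDED"}))  # flagged
```

The same `consentStatus` value produces opposite verdicts depending on the template, which is exactly the distinction the earlier version of the audit missed.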

Sometimes the fix is a configuration change in GTM. Sometimes it's a mismatched classification in a CMP dashboard. Sometimes the finding is technically correct in the JSON but intentional in practice, and the right response is to document why and move on. The scan surfaces findings. The person reading them decides what to do.

GTM audit workflow: Scan, verify, fix

Static analysis is step one, not the whole audit. Scan the container to understand its structure, use Preview mode and Tag Assistant to verify findings on live pages, then build the work plan based on confirmed problems.

The scan handles orientation: what's in the container, what looks misconfigured, what's dead, what's missing. A few minutes with a tool versus several hours without one. Live testing handles verification: which findings are real problems and which are correct configurations that just look unusual in the JSON. The work plan comes from the intersection.

Starting with static analysis means you go into Preview mode knowing what to look for. You're not clicking through 200 tags hoping to spot something. You have a list: does this trigger fire on the checkout page? Does the data layer provide the value this tag expects? Does the consent banner block this tag when consent is denied? Directed testing is faster and catches more than exploratory testing.

Scan your container for the structural picture and the threads to pull. What you do with those threads requires the live environment, and often tools and access that sit outside GTM entirely.

Audit your GTM container

TagManifest gives you an instant health score and prioritized fixes.

Scan Your Container