TagManifest processes your GTM container JSON entirely in the browser. Upload the file, the scan runs client-side, results render on screen. Nothing is sent to a server. Nothing is stored. Nothing leaves your machine.
That's not a temporary limitation while the "real" version gets built. It's the architecture. Everything else about the product follows from it.
Why GTM container data stays in your browser
GTM containers sometimes contain information you'd rather not share with a third party. Account IDs for Google Ads, Meta, and LinkedIn sit in tag configurations. Custom HTML tags reveal tracking architecture, pixel implementations, and conversion logic. Variables reference internal systems, staging environments, and endpoint URLs. PII patterns show up in tag configurations more often than most people expect.
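To make that concrete, here is a minimal sketch of how a client-side scanner might flag sensitive strings in an exported container. The patterns and function names are illustrative assumptions, not TagManifest's actual rules: a real ruleset would be far more thorough.

```javascript
// Illustrative sketch: scan raw container JSON text for strings you
// might not want leaving your machine. These regexes are examples,
// not TagManifest's actual detection rules.
const SENSITIVE_PATTERNS = {
  emailAddress: /[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}/g,
  googleAdsId: /\bAW-\d{9,11}\b/g, // Google Ads conversion IDs
  metaPixelInit: /\bfbq\(['"]init['"],\s*['"]\d{15,16}['"]/g, // Meta pixel init calls
  stagingUrl: /https?:\/\/[a-z0-9.-]*staging[a-z0-9.-]*\.[a-z]{2,}/gi,
};

function findSensitiveStrings(containerJsonText) {
  const findings = [];
  for (const [label, pattern] of Object.entries(SENSITIVE_PATTERNS)) {
    for (const match of containerJsonText.matchAll(pattern)) {
      findings.push({ label, value: match[0] });
    }
  }
  return findings;
}
```

Running this over `JSON.stringify(exportedContainer)` surfaces exactly the kind of material described above, and because the function is pure string-matching, it runs the same in browser memory as anywhere else.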
A consultant scanning a client's container during a kickoff call doesn't want that data hitting someone else's server. An agency running a pre-engagement audit doesn't want to explain where the data went. An in-house team evaluating their own setup shouldn't need to trust a third party with their tracking configuration to get a diagnostic.
Browser-only removes that question entirely. The JSON goes from your machine into browser memory, the scan runs, results display, and when you close the tab it's gone. No trust decision to make because there's no data transfer to evaluate.
This isn't a novel architecture. It follows the same principle behind tools like hat.sh for file encryption and Squoosh for image compression, both of which process files entirely in the browser without server uploads. The pattern works well for tools that handle sensitive input and produce immediate output.
How browser-only keeps the tool free
Browser-only architecture is also a cost decision, and the cost math is simple: zero.
No server processes each scan. No database stores results. No auth system manages users. No infrastructure scales with usage. The tool works for one person or a hundred thousand at the same cost because the compute happens on their machines.
That cost structure is what makes "free, permanently" a credible statement rather than a marketing claim. Free tools are usually loss leaders, ad-supported, or running on a timeline until hosting costs force a pricing decision. When the infrastructure cost per scan is literally zero, there's no usage threshold where the economics break. Netlify's analysis of Jamstack economics found that client-side architectures reduce infrastructure costs by 60-70%. For TagManifest, it's closer to 100%.
No accounts, no logins. Drop a JSON file on it, the scan runs. That's the entire onboarding experience.
What browser-only processing gives up
Every limitation in TagManifest traces back to the browser-only decision. These are tradeoffs, not oversights.
No container history. Each scan is independent. You can't compare this month's scan to last month's because nothing persists between sessions. If you want to track progress over time, you'd need to export the report from each scan and compare them manually. The tool answers a point-in-time question: what's in this container right now?
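"Compare them manually" can still be lightly scripted on your side. Assuming each scan can be exported as a JSON list of findings with stable `id` fields (an assumption about the export, not a documented TagManifest format), a diff is a few lines:

```javascript
// Sketch of manual scan-to-scan comparison: nothing persists in the
// tool, so trending means diffing two exported findings lists yourself.
// The `id` field on each finding is an assumed export shape.
function diffFindings(previous, current) {
  const prevIds = new Set(previous.map((f) => f.id));
  const currIds = new Set(current.map((f) => f.id));
  return {
    fixed: previous.filter((f) => !currIds.has(f.id)), // gone this month
    introduced: current.filter((f) => !prevIds.has(f.id)), // new this month
  };
}
```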
No API connection. The tool doesn't connect to the GTM API. You export the container JSON manually from GTM (Admin → Export Container) and upload the file. That's an extra step compared to an OAuth flow that pulls the container directly, but it also means TagManifest never has access to your GTM account.
No live behavior data. This is static analysis. The tool reads the container definition, not runtime behavior. It can tell you that a tag is configured with the wrong consent type, but it can't tell you whether that tag actually fired on your production site in the last 24 hours. It reads what the container says it will do, not what it did.
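A static check like the consent example reads straight from the tag objects. This sketch assumes the `consentSettings.consentStatus` field from the GTM API's tag resource and tolerates either casing of "not set"; it says nothing about whether the tag ever fired:

```javascript
// Static-analysis sketch: flag tags whose consent status is unset.
// consentSettings.consentStatus is assumed from the GTM API tag
// resource; this reads configuration, never runtime firing data.
function tagsWithUnsetConsent(tags) {
  return tags.filter((tag) => {
    const status = tag.consentSettings?.consentStatus ?? "notSet";
    return status.toLowerCase().replace("_", "") === "notset";
  });
}
```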
No collaboration features. No shared workspaces, no team views, no commenting on findings. The exported markdown report and CSV are the collaboration layer. Copy them into whatever project management tool your team uses.
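"The export is the collaboration layer" is easy to picture: findings go out as plain markdown that pastes anywhere. The structure below is illustrative, not TagManifest's actual report format:

```javascript
// Sketch of rendering findings as a pasteable markdown report.
// Report structure is illustrative, not the tool's actual format.
function toMarkdownReport(containerName, findings) {
  const lines = [`## Scan report: ${containerName}`, ""];
  for (const f of findings) {
    lines.push(`- **${f.severity}** ${f.tag}: ${f.message}`);
  }
  return lines.join("\n");
}
```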
Why not store GTM container history
The obvious next question is: why not store scans so people can track changes over time? Container history would make the tool more useful for ongoing maintenance. Trending scores over time would show whether a container is getting better or worse.
Persistence changes the nature of the product. Storing containers means a database. A database means authentication, because containers from different organizations need partitioning. Authentication means user management, password resets, session handling. And now you're storing GTM container data on a server, which circles back to the privacy question the browser-only architecture was designed to avoid.
What if PII slips through into the database? Container JSON sometimes contains email addresses, phone numbers, internal identifiers embedded in tag configurations. On a server, that's a data handling problem. In browser memory, it's gone when the tab closes.
That's a lot of complexity for a problem most people scanning a container don't actually have. The person uploading a JSON file wants to know what's in this container right now and what to fix first. Not a longitudinal study.
Future directions
Container history, third-party testing integrations, API connections to pull containers directly: all interesting directions. If those happen, they'd be a separate product with a separate architecture. The free, browser-only scanner stays as it is. It doesn't graduate into a platform.
TagManifest is simple, basic, and not trying to change how GTM auditing works. A helpful automation step for a task that consultants and in-house teams currently do manually or skip entirely. Every feature it chose not to build is a feature that would have changed what the product is.
Upload a container, scan it, read the results. That's the tool.