By Dante Perea, Founder of unifounder.ai. Building agent-native publishing infrastructure for the Business-to-Agent economy. Previously shipping AI products at the intersection of multimodal models, retrieval, and developer tooling. Follow on X · GitHub · growth.dante.id.
The conventional view is that programmatic SEO died with Google’s Helpful Content Update. In fact, the templated implementation died. The architecture that replaces it runs one autonomous research agent per URL, and the math actually got better, not worse.

The context

Programmatic SEO at Zillow, TripAdvisor, G2, and Yelp followed the same pattern for a decade: pick a high-volume keyword shape, build one HTML template, loop through a database, ship millions of pages. The pages had real data inside but identical structure outside; they were structural clones with swapped variables. Google’s Helpful Content Update was specifically designed to detect that fingerprint.

The damage was uneven but devastating where it landed. Travel publisher analyses found roughly 32 percent of large travel sites lost more than 90 percent of organic traffic post-HCU, with similar 30 to 90 percent declines reported across digital publishing through 2024 and 2025 (Search Engine Land coverage of the helpful content rollout). G2 took measurable hits in October 2023 from spam updates targeting versus-pages.

The recovery story most operators are waiting for isn’t coming. In March 2024, Google folded the Helpful Content ranking system into core, so there is no separate update left to recover from. The August 2025 spam update that ran for 26 days, explicitly aimed at large-scale, auto-generated content and thin programmatic pages, was a continuation, not a one-time event.

Why this works

Templated programmatic SEO failed for one reason and the fix follows from it.
| Failure mode | Why it broke | What replaces it |
| --- | --- | --- |
| One template, N rows | Pages share identical structure, easily fingerprinted | One agent per URL, doing real research before writing |
| One knowledge base, N pages | AI sees the same context for every page, output collapses to paraphrase | Per-URL knowledge base (top SERPs, entities, internal links, schema) |
| Generation is the unit of work | Page is a string-replace exercise | Page is a research job that ends in a write step |
| Cost optimized per page | $0.001 per page, 80 percent deindex rate | $0.50 to $5 per page, near-zero deindex on quality systems |
The mechanism: unique inputs are upstream of unique outputs. If every page in the campaign is built from the same context, the model has no signal to differentiate them. Per-URL research changes that contract. The page now starts with its own research artifact (top 10 SERP analysis, entity graph, internal link plan, JSON-LD schema). The writer is the last agent in the chain, not the first. The moat in 2026 programmatic SEO is the architecture, not the prose: two systems that produce the same word count from the same prompt will rank differently if one fed the writer per-URL research and the other fed it a template variable.
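As a concrete illustration, here is a minimal sketch of what that per-URL research artifact could look like as a typed structure. The field names are illustrative assumptions, not any particular vendor’s schema; the point is that every field is computed for one URL only.

```python
from dataclasses import dataclass, field

@dataclass
class SerpResult:
    url: str
    title: str
    headings: list[str]      # outline of one page currently ranking for the query
    word_count: int

@dataclass
class PageResearchArtifact:
    """Everything the writer receives for ONE target URL; never shared across pages."""
    target_query: str
    serp_top10: list[SerpResult]        # competitive analysis of the live SERP
    entities: list[str]                 # people, places, products this page must cover
    internal_links: dict[str, str]      # anchor text -> target URL on the same site
    json_ld: dict = field(default_factory=dict)             # schema.org data for this content type
    content_gaps: list[str] = field(default_factory=list)   # topics the top 10 miss

def write_page(artifact: PageResearchArtifact) -> str:
    """The writer is the last agent in the chain: it consumes the artifact, it never invents context."""
    ...
```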

What I tried / what I saw

The root cause of templated programmatic SEO failure is structural, not stylistic. All pages in a templated campaign share the same knowledge base, so the AI has identical information about every page it writes. Unique content cannot emerge from identical inputs. That sentence is the death certificate; you cannot rephrase your way out of it. The fix is to give each page its own research before any prose gets written.

Sight AI runs 13+ specialized sub-agents per single URL. Competitive analysis pulls the top 10 ranking pages for that exact query. Entity extraction identifies the people, places, and products that should appear on this specific page. Internal linking decides which other pages on the site this one connects to. Schema builds JSON-LD structured data matching the content type. Only after that research swarm completes does a body writer produce prose, informed by everything the research agents found. Harbor SEO positions the same model in one line: before writing a single word for any given URL, Harbor launches an autonomous research agent specific to that page.

The architectural detail that separates production-grade systems from prompt chains is how the agents communicate. The best multi-agent systems pass JSON objects or semantic schemas between agents, not plain text. A research agent’s output is not a paragraph for the writer to rephrase; it is a structured object the writer queries and reasons over. Most “agentic SEO” tools fail this test silently, which is why their output feels templated even when the marketing claims otherwise.

The economic shift looks like a cost increase if you read it wrong.
| Era | Cost per page | Deindex rate | Cost per ranked page |
| --- | --- | --- | --- |
| Templated (pre-HCU) | ~$0.001 | Low | Low |
| Templated (post-HCU) | ~$0.001 | ~80% | Very high |
| Per-URL agent (today) | $0.50 to $5 | Near zero | Low |
Templated production cost roughly $0.001 per page. Per-URL agent production costs $0.50 to $5 per page, which is 500 to 5000 times more expensive per page. But the deindex rate on templated pages now sits near 80 percent, while on quality per-URL implementations it sits near zero. The cost per page went up. The cost per ranked page went down. Productized platforms now run $2,000 per month at the entry tier and $30,000-plus at enterprise, with most mid-market teams paying $4,000 to $10,000. Compute scales horizontally, which is the part agencies cannot match. Your 100th URL costs the same to manage as your first.
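To make the JSON-between-agents contract described above concrete, here is a minimal orchestration sketch. The function names and fields are placeholders, not Sight AI’s or Harbor’s actual API, and `call_llm` stands in for whatever LLM client you use; the point is that every hand-off is a structured object the writer queries, not prose it rephrases.

```python
import json

def call_llm(prompt: str) -> str:
    """Placeholder for any LLM client; swap in your provider of choice."""
    raise NotImplementedError

def competitive_analysis(query: str) -> dict:
    # In a real system this hits a SERP API plus an LLM and returns structure, not a paragraph.
    return {"query": query, "top10": [], "shared_headings": [], "gaps": []}

def entity_extraction(query: str, serp: dict) -> dict:
    return {"entities": [], "must_mention": []}

def build_schema(query: str, entities: dict) -> dict:
    return {"@context": "https://schema.org", "@type": "Article"}

def write_body(research: dict) -> str:
    # The writer is the LAST agent: it reasons over the research object built for this one URL.
    prompt = "Write the page using only this research object:\n" + json.dumps(research, indent=2)
    return call_llm(prompt)

def build_page(query: str) -> str:
    serp = competitive_analysis(query)
    ents = entity_extraction(query, serp)
    research = {"serp": serp, "entities": ents, "schema": build_schema(query, ents)}
    return write_body(research)
```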

When this fails

Per-URL research does not save every programmatic campaign. Two failure modes survive the architectural shift.

The first is when the underlying content has no real differentiation to surface. If you are publishing 50,000 pages for “best dentist in [city]” and your data source is just a generic directory of practitioner names, no amount of agent research will make those pages genuinely distinct, because the substrate is not distinct. The agent is not a magic box; it is a research executor. Garbage substrate plus great research equals slightly-better-formatted garbage. The cost per page is now $0.50 to $5, the deindex rate is still high, and you have spent 500x more to get there.

The second is when the SERP is dominated by entities the agent cannot beat structurally. If you are trying to rank for “[product] reviews” and the top 10 are owned by Amazon, G2, Reddit, and YouTube, per-URL research will not change the fact that you are a new domain trying to displace authority sites. The architecture solves “templated content gets deindexed.” It does not solve “weak domains lose to strong domains on competitive queries.” Founders sometimes treat the agent as a shortcut around topical authority. It is not.

The strongest counterargument is that this is just better content marketing, not a structural shift. It is true that in some sense the agent is “doing what a good content team would do.” The shift is that a 30-person content team could not economically produce 50,000 pages of bespoke research, and now one operator with $5,000 per month in compute can. The change is the unit cost of bespoke at scale, not the existence of bespoke.

What sticks

  1. Templates are dead, the strategy is fine. Programmatic SEO is alive. The variable-substitution implementation is not. Stop trying to fix the template, change the architecture.
  2. Inputs, not prose, are the unique-content lever. All pages sharing one knowledge base means the AI knows the same things about every URL. Per-URL research is the only path to genuinely unique output.
  3. There is no HCU recovery update coming. It folded into core in March 2024. The site-wide classifier keeps applying. Operators waiting for a rollback are waiting for nothing.
  4. JSON between agents, not prose. If your “multi-agent” system passes paragraphs around, it is a prompt chain wearing the wrong label. Production systems pass structured data.
  5. The cost per ranked page dropped. The 500 to 5000 times per-page cost increase is real, and it is also the wrong number to look at. Index rate matters more than unit cost.
The ranking surface is no longer just blue links. It is being cited in AI Overviews, ChatGPT answers, Perplexity summaries, and Gemini responses. Brands skipping per-URL research are not just losing Google traffic; they are becoming invisible to the agents now handling a fast-growing share of organic search activity.

FAQ

Did Google’s Helpful Content Update kill programmatic SEO?
It killed the templated implementation, not the strategy. The HCU was specifically tuned to detect structural sameness across pages. Programmatic SEO that runs a per-URL research agent before writing produces pages whose structure varies by query, so the fingerprint that HCU targets isn’t there. The strategy of building many pages from a database is fine. The execution model where one template produces 10,000 near-identical pages is not.
What does a per-URL research agent actually do before the page is written?
Before any prose is written, the system launches 10+ specialized sub-agents that gather page-specific context: top SERP results for the exact query, named entities that should appear, internal-link targets, JSON-LD schema, and competitive gaps. Their outputs are structured JSON, not prose, and the body writer queries those structures rather than rephrasing them. Sight AI runs 13+ such sub-agents per URL.
Isn’t per-URL generation far more expensive than templated generation?
Per page, yes. Templated generation costs about $0.001 (a script and a database row). Per-URL agent generation costs $0.50 to $5 because you’re running 10 to 13 agents across competitive analysis, entity extraction, internal linking, schema, and writing. The number that matters is cost per ranked page. Templated pages now have about an 80 percent deindex rate post-HCU; per-URL pages on quality systems sit near zero. Multiply through and per-URL is cheaper per ranked page.
Is an HCU recovery update coming?
No. Google folded the Helpful Content ranking system into core in March 2024. There is no separate update left to roll back. The August 2025 spam update that explicitly targeted thin programmatic content was a continuation of the same site-wide classifier, not a one-off. The recovery path is architectural change, not patience.
Does this only matter for Google rankings?
No. The same pattern is now table stakes for AI Overviews, ChatGPT search, Perplexity, and Gemini retrieval. Those surfaces extract structured chunks (citations, tables, schema, FAQ blocks). Per-URL research naturally produces those chunks because the agents gather structured data first and the body writer consumes it last. Templated content rarely passes the chunk extraction test.
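For reference, the FAQ-block chunk those surfaces look for is ordinary schema.org FAQPage markup. A minimal sketch, built as a Python dict and serialized to JSON-LD (the question and answer text here are just examples):

```python
import json

faq_schema = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "Did the Helpful Content Update kill programmatic SEO?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "It killed the templated implementation, not the strategy.",
            },
        }
    ],
}

# Emit inside a <script type="application/ld+json"> tag on the page.
print(json.dumps(faq_schema, indent=2))
```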
What does a per-URL agent platform cost?
Productized platforms run $2,000 per month at the entry tier and $30,000+ at enterprise, with most mid-market teams paying $4,000 to $10,000. The cost is mostly LLM tokens and search API calls. The architecture scales horizontally: your 100th URL costs the same per unit as your first, which is the structural advantage agencies cannot replicate.
At what scale does per-URL research make sense?
It works at any scale, but the ROI is asymmetric. A 50-page site can match per-URL research quality manually. The arbitrage shows up at 500 pages and obliterates competition at 50,000. The interesting middle is sites that previously had 100 pages and could now reasonably ship 5,000 high-quality, individually researched pages without expanding the team.
Where does per-URL research still fail?
Two places. First, when the underlying data source is not actually differentiated (a generic directory), agents cannot manufacture differentiation that isn’t in the substrate. Second, when the SERP is dominated by domain-authority incumbents (Amazon, Reddit, G2, YouTube). Per-URL research solves “templated content gets deindexed.” It does not solve “weak domain loses to strong domain on a competitive query.”
Unique content cannot emerge from identical inputs. The architecture is the moat, not the writing.