By Dante Perea, founder of unifounder.ai, building agent-native publishing infrastructure for the Business-to-Agent economy. Previously shipped AI products at the intersection of multimodal models, retrieval, and developer tooling.
The conventional view is that Y Combinator’s Spring 2026 Request for Startups names ten distinct opportunities founders can pick from. In fact, three of them are the same opportunity from three angles, and the founders who recognize that are about to compound.

The context

YC released its Spring 2026 RFS on April 27, 2026. Three of the ten themes (AI-Native Service Companies by @gustaf, SaaS Challengers by @snowmaker, Software for Agents by @aaron_epstein) read like separate market bets when scanned individually. Read together, they describe one structural shift at three layers: agents doing the work for humans (services), agents replacing the tools humans used (SaaS challengers), and agents using software directly as the primary user (software for agents). The market has already priced this shift, even if most founders haven’t internalized it. In February 2026, approximately $285 billion vanished from SaaS company valuations in roughly 48 hours after Anthropic launched Claude Cowork. The market concluded that AI agents could replace entire categories of knowledge work that SaaS companies had been charging per seat to support. Per-seat pricing assumes humans sit in seats. Agents do not.

Why this works

The three themes stack because they target different layers of the same B2A surface, and each layer reinforces the next.
| Theme | Buyer | What is sold | Revenue model | Moat source |
| --- | --- | --- | --- | --- |
| AI-Native Services | Human business buyer | Outcome (closed books, filed taxes, processed claims) | Per-task / per-output | Embedded workflow + customer data |
| SaaS Challengers | Human team | Software replacing legacy SaaS | Per-seat / per-workspace | Speed of iteration + AI-native UX |
| Software for Agents | Agent (acting for a human) | API access, MCP server, agent-shaped tools | Per-call / per-token / usage | Agent-readable surface + deterministic behavior |
The mechanism that makes them stack: each layer’s cost structure feeds the next. Software for agents gets cheaper because AI inference cost drops roughly 10x per year; a task that cost $10 in 2024 costs $0.10 in 2026. SaaS challengers get cheaper because AI collapsed the cost of producing software 10–100x. AI-native services get cheaper because they’re built on top of SaaS challengers and agent infrastructure that already paid those collapses forward. The compounding is multiplicative, not additive. That math is also why per-seat SaaS lost $285B in two days. If the buyer can replace the seats with agents that are 10x cheaper to operate, paying $50 per seat per month for software the agent doesn’t need stops being a defensible economic position. The market priced the inversion before most founders did.
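The multiplicative-versus-additive claim can be sanity-checked with back-of-envelope arithmetic. The 10x/year inference figure and the low end of the 10–100x build-cost figure come from the text; the services-layer pass-through factor is an assumption for illustration, not a measured number:

```python
# Back-of-envelope: why stacked cost collapses multiply rather than add.
# inference_drop and build_drop come from the article's figures;
# service_drop is an assumed pass-through factor for illustration only.

inference_drop = 10   # software-for-agents layer: ~10x/year cheaper inference
build_drop = 10       # SaaS-challenger layer: 10-100x cheaper to build (low end)
service_drop = 5      # services layer: assumed savings passed through

additive = inference_drop + build_drop + service_drop        # 25x
multiplicative = inference_drop * build_drop * service_drop  # 500x
print(f"additive: {additive}x, multiplicative: {multiplicative}x")

# The article's inference example: a $10 task in 2024, two years of 10x drops.
cost_2026 = 10 / (inference_drop ** 2)
print(f"2026 cost: ${cost_2026:.2f}")  # $0.10
```

Even with a conservative services-layer factor, the stacked collapse is an order of magnitude larger than any single layer's.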

What I tried / what I saw

I read the three YC tweets back-to-back and noticed they’re written by three different YC partners who are not coordinating. @gustaf writes about service replacement, @snowmaker about SaaS rebuilds, @aaron_epstein about software for agents. None of the three explicitly names the stacking pattern.

The pattern shows up in the YC portfolio itself. The clearest worked example is Foaster (a YC company), which markets itself as an “AI-native partner for AI transformation.” That positioning hits all three layers at once: the work is done by agents (services layer); the internal stack is rebuilt for an AI-first workflow rather than retrofitting legacy SaaS (challenger layer); and the agents themselves run on agent-shaped infrastructure with MCP servers and tool-calling primitives rather than a human-clicking-buttons UI (software-for-agents layer). The customer pays for the outcome, not the tool.

The infrastructure for that stacking exists and is compounding fast. The Model Context Protocol went from zero to 97 million monthly SDK downloads across Python and TypeScript in its first year, with 10,000+ active MCP servers. OpenAI, Google DeepMind, Microsoft, and Cloudflare all adopted the protocol. Nango supports 700+ APIs in an agentic integrations platform where coding agents build the integrations and agents consume them via MCP, tool calls, webhooks, and data syncs. Arcade runs an MCP-first runtime for agent tool calling with 112 first-party integrations. Composio ships managed auth and a tool library wrapped in framework adapters (LangChain, CrewAI, Autogen, OpenAI Agents SDK). The signal in those numbers is not “MCP is hyped.” It is that the agent layer of the stack now has standardized primitives, which means founders no longer have to build it from scratch. Cloudflare’s Code Mode MCP server optimizes token usage when agents call large APIs. The cost to launch software for agents has dropped to roughly the cost to launch a Next.js side project.
That is the SaaS challenger thesis, exactly.
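Concretely, “agent-shaped” means a tool described by a machine-readable schema plus a deterministic dispatcher, which is the primitive MCP standardizes. Here is a minimal stdlib-only sketch of that shape; the tool name `close_books` and its fields are hypothetical, and a real server would use an MCP SDK rather than this hand-rolled registry:

```python
import json

# A hypothetical agent-shaped tool: a JSON-schema description an agent can
# read, plus a dispatcher it can call. MCP standardizes this pattern; the
# tool and fields here are illustrative, not from any real server.
TOOLS = {
    "close_books": {
        "description": "Close the monthly books for a given period.",
        "input_schema": {
            "type": "object",
            "properties": {"period": {"type": "string", "description": "YYYY-MM"}},
            "required": ["period"],
        },
    }
}

def close_books(period: str) -> dict:
    # Real accounting logic would live here; what matters for agents is a
    # deterministic, structured result rather than a UI state change.
    return {"period": period, "status": "closed"}

def dispatch(name: str, args: dict) -> str:
    # Validate the call against the advertised schema before executing.
    schema = TOOLS[name]["input_schema"]
    for field in schema["required"]:
        if field not in args:
            raise ValueError(f"missing required field: {field}")
    result = {"close_books": close_books}[name](**args)
    return json.dumps(result)  # agents consume structured output, not pixels

print(dispatch("close_books", {"period": "2026-04"}))
```

The point of the sketch: the entire “interface” is the schema and the JSON response, which is why interface polish stops being a moat once the primary caller is an agent.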

When this fails

The stacking thesis breaks in two specific cases. The first is regulated workflows where compliance requires human attestation. AI-native services in tax, audit, healthcare, or insurance still need a licensed human signature on the line. The agent does the work, but a human owns the legal risk. If the regulatory framework cannot be satisfied without a credentialed human in the loop, your unit economics regress toward services-business margins rather than software-business margins. YC named these verticals (insurance brokerage, accounting, tax and audit, compliance, healthcare administration) as targets, but the licensed-human constraint is the reason most of them are still services and not software.

The second is workflows where the moat was the user interface. Founders building yet another SaaS challenger to a category like CRM or design tools assume “10–100x cheaper to build” translates to “10–100x cheaper to win.” It does not. The moat in those categories was distribution, integrations, and brand, not engineering hours. AI agents flatten interface moats because they don’t care about the interface. An agent can navigate a clumsy UI just as easily as a sleek one, or bypass the UI entirely via API. That cuts both ways. You can’t beat Salesforce with prettier React components, and Salesforce can’t keep you out by shipping prettier React components either.

The strongest counterargument is that all moats are gone, so nothing is defensible. That overshoots. As Fortune put it after the SaaS selloff, “code alone was never a real moat.” What survives the AI cost collapse is SEO, brand, taste, speed, data, and trust. Four of those six (SEO, brand, taste, speed) are content and distribution, not engineering. The center of gravity for moat-building shifted from the codebase to the surrounding system, which is why “make your entire company queryable” matters more than another framework migration.

What sticks

  1. Pick a layer or stack the layers, but don’t ignore the framing. AI-native services, SaaS challengers, and software for agents are three sides of one B2A pyramid. Founders who pick one layer can win that surface. Founders who stack all three (build agent-shaped software, use it to challenge a legacy SaaS, wrap the result as a service) compound across all three.
  2. Services TAM is structurally larger than software TAM. Outsourced services (accounting, legal, compliance, HR ops) are 5–20x larger markets than the SaaS tools that support them. Most founders default to building tools because they’re engineers. The bigger market is doing the work, not selling a tool to do the work.
  3. Make the company queryable. YC explicitly named this as the pattern that separates AI-native winners from AI-using laggards: internal MCP servers expose company state (docs, customer history, compliance status, tooling) so any agent can act on the full system. This is the operational moat replacing the codebase moat.
  4. The next trillion users are not human. Every API endpoint that is not agent-readable (no machine-friendly schema, no rate limits sized for agent volume, no auth flow that survives autonomous use) is leaking the trillion-user audience. Most product teams still treat agent traffic as a noise category, not a buyer category. That is the gap.
  5. Per-seat pricing is structurally short. $285B in 48 hours is the short side. If your revenue model assumes the buyer keeps adding seats, ask what happens when the buyer replaces the seats with agents that operate at 10x lower cost. The pricing must follow the work, not the seat.
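Points 3 and 4 can be reduced to one design rule: every piece of company state sits behind a uniform, agent-readable query surface where every answer and every miss is structured. A minimal stdlib-only sketch, with entirely hypothetical state (`docs`, `customers`, `compliance` and their values are invented for illustration):

```python
import json

# Hypothetical internal state an agent would otherwise have to scrape
# from dashboards, Slack threads, and Notion pages. Illustrative only.
COMPANY_STATE = {
    "docs": {"onboarding": "See runbook v3"},
    "customers": {"acme": {"plan": "enterprise", "agents": 12}},
    "compliance": {"soc2": "current", "audit_due": "2026-09"},
}

def query(domain: str, key: str) -> str:
    """Uniform agent-readable entry point: every answer is JSON, and
    every miss is an explicit error an agent can branch on."""
    if domain not in COMPANY_STATE:
        return json.dumps({"error": f"unknown domain: {domain}",
                           "domains": sorted(COMPANY_STATE)})
    value = COMPANY_STATE[domain].get(key)
    if value is None:
        return json.dumps({"error": f"unknown key: {key}"})
    return json.dumps({"domain": domain, "key": key, "value": value})

print(query("compliance", "soc2"))  # structured answer
print(query("finance", "runway"))   # explicit, parseable miss
```

The test from the article applies directly: if an autonomous agent cannot answer an operational question through a surface like this, that state is invisible to the trillion non-human users.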

FAQ

Should you build an AI-native service or a SaaS challenger?

Both, ideally as a stack. Build the SaaS challenger as the agent-shaped tool, then wrap it in a services layer that delivers the outcome to a human buyer. The services layer captures the larger TAM. The SaaS challenger layer captures defensibility through workflow data. If you can only pick one, pick services, because services TAM is 5–20x software TAM and the buyer cares about outcomes, not tools.
What does “make your company queryable” actually require?

Internal MCP servers (or equivalent agent-readable APIs) that expose every meaningful piece of company state: documentation, customer records, compliance status, infrastructure metrics, support history, internal policies. The test is whether an autonomous agent inside your company can answer any operational question without a human in the loop. Most companies fail this test because their state lives in dashboards, Slack threads, and Notion pages no agent can parse.
Is MCP just hype?

97 million monthly SDK downloads across Python and TypeScript in the first year, 10,000+ active MCP servers, and adoption by OpenAI, Google DeepMind, Microsoft, and Cloudflare. Those are real numbers from a real protocol, documented at modelcontextprotocol.io. The hype question is the wrong question. The right question is whether your category will have agent-shaped primitives within 12 months, and the answer for almost every category is yes.
Why did per-seat SaaS lose $285B in 48 hours?

Anthropic launched Claude Cowork in February 2026, and the public market priced in that AI agents could replace categories of knowledge work that SaaS companies had been charging per seat to support. The seat assumption breaks once a single agent can do the work of a team. Per-seat pricing is structurally short on any workflow that is not regulatory-bound to a licensed human.
Where does the stacking thesis break?

Two places. First, regulated workflows where a credentialed human must sign off (tax, audit, healthcare, insurance) still need humans, so unit economics regress toward services margins rather than software margins. Second, categories where the moat was distribution or brand rather than engineering hours (CRM, design tools): the 10–100x build-cost collapse does not translate into a 10–100x win-probability gain.
Is there a hardware opportunity in this shift?

Yes. Current GPUs hit only 30–40% of peak utilization on agent workloads because the work is bursty, switching between memory-bound model calls, I/O-bound tool use, and CPU-bound orchestration. Purpose-built silicon designed for fast context switching between models, native speculative decoding, and memory built for KV caches that persist across an entire execution graph is an open category. YC named it explicitly in the Spring 2026 RFS.
What should an indie hacker build first?

Software for agents, then services. Software for agents has the lowest capital requirement (you can ship an MCP server or agent-shaped API in days), the fastest distribution loop (agents discover tools through registries and aggregators), and the cleanest pricing model (per-call, no sales motion). Services come second because they require more domain knowledge but reward it with much higher revenue per customer. SaaS challengers come last for indie hackers because the distribution moat is the hardest part to replicate.
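“Per-call, no sales motion” is simple enough to sketch end to end: meter calls per agent, bill the count times a rate. The $0.002/call rate below is an arbitrary placeholder, not a market figure, and `UsageMeter` is a hypothetical name:

```python
from collections import defaultdict

# Minimal per-call usage meter: pricing follows the work, not the seat.
# RATE_PER_CALL is an arbitrary placeholder, not a real market rate.
RATE_PER_CALL = 0.002

class UsageMeter:
    def __init__(self) -> None:
        self.calls = defaultdict(int)  # agent_id -> call count

    def record(self, agent_id: str, n: int = 1) -> None:
        self.calls[agent_id] += n

    def invoice(self, agent_id: str) -> float:
        # One line item: calls made. No seats, no tiers, no sales motion.
        return round(self.calls[agent_id] * RATE_PER_CALL, 6)

meter = UsageMeter()
meter.record("agent-42", 1500)
print(meter.invoice("agent-42"))  # 3.0
```

Contrast the revenue mechanics with per-seat: revenue here scales with agent activity, so replacing a team of seats with one busy agent grows the bill instead of zeroing it.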
The three YC themes are one shape: agents on every side of every transaction. Pick a layer, stack the layers, but stop treating them as separate bets.