Where VC Money Is Actually Going in Late 2026
The Check Sizes Are Bigger, But the Room Has Gotten Smaller
At a16z's annual LP summit in September 2026, a slide went briefly viral on fintech Twitter: a bar chart showing that the firm's average initial check size had grown 340% since 2021, while the number of net-new portfolio companies had dropped by nearly half. Nobody at the firm disputed the numbers. That tension—more money chasing fewer bets—is probably the single most important structural shift in venture capital right now, and it's reshaping which startups get built, which founders get meetings, and which technical problems even get attempted.
We reviewed deal data from PitchBook's Q3 2026 report and spoke with several active investors and founders to understand what's actually happening beneath the headline figures. The picture is more complicated than "AI gets everything." Infrastructure bets are accelerating. Consumer software is basically frozen. Defense tech has become legitimately fundable in ways it wasn't three years ago. And a quiet but significant retrenchment is happening in climate tech—not because LPs don't care about it, but because the unit economics on too many deals never worked.
AI Infrastructure Is Eating the Seed Stage
The most striking shift we found wasn't in growth-stage rounds. It was at seed. In Q3 2026, AI infrastructure deals—meaning companies building GPU orchestration layers, inference optimization tooling, fine-tuning pipelines, and model-serving infrastructure—accounted for 31% of all US seed-stage dollars, up from 11% in Q3 2024. That's not a rounding error. That's a structural reorientation of where the industry thinks value will be created.
Part of this is a direct response to NVIDIA's continued dominance of the training compute market. With the H200 and B200 architectures commanding $30,000–$40,000 per unit and cloud providers still capacity-constrained, there's a real arbitrage opportunity for startups that can help enterprises do more with less compute. Companies like Baseten and Modal have attracted significant follow-on capital in 2026 precisely because their core value proposition—efficient model serving at the inference layer—gets more valuable as NVIDIA's hardware stays expensive and scarce.
But there's a subtler dynamic at work. OpenAI's aggressive expansion into developer tooling—its Assistants API, its fine-tuning endpoints, its new o3-mini variants optimized for agentic tasks—has compressed the addressable market for a certain class of "wrapper" startups that were building thin application layers on top of foundation models. Founders who raised in 2023 on the premise that prompt engineering was a durable moat have mostly discovered it isn't. The money has migrated downstream, toward startups solving harder infrastructure problems that OpenAI isn't obviously going to commoditize in the next 18 months.
Defense Tech's Unlikely Legitimacy
Three years ago, pitching a defense tech startup to a Sand Hill Road firm was an awkward conversation. Many top-tier VCs had explicit or informal policies against funding companies whose primary customer was the Department of Defense. That's changed considerably. In 2026, defense and dual-use technology deals attracted $8.3 billion in venture investment through Q3, putting the full-year figure on pace to exceed 2025's record of $9.1 billion.
The shift started with geopolitical pressure and accelerated after a cohort of defense-adjacent startups—Anduril, Shield AI, Rebellion Defense—demonstrated that government procurement timelines, while still slow by commercial standards, weren't incompatible with venture-style returns. Palantir's sustained revenue growth from government contracts also gave LPs a concrete proof point that public sector software wasn't necessarily a death march.
"The founders who are winning DoD contracts right now aren't playing the old game of writing a proposal and waiting two years. They're using OTA agreements and SBIR pathways to get to revenue in six months. That changes the risk profile completely." — Renata Solís, general partner at Lux Capital
The technical specifics matter here. Autonomous systems startups building on ROS 2 (the Robot Operating System's modern iteration) and integrating with DoD's JADC2 data-sharing architecture are attracting disproportionate attention. The reason is interoperability: the Pentagon has made clear it won't buy bespoke systems that don't talk to existing infrastructure, which means startups that can demonstrate compliance with MIL-STD-461 electromagnetic compatibility standards and integrate with established data links have a genuine procurement advantage over those that can't.
The Sectors Quietly Losing the Funding War
Not every sector is enjoying the abundance. Consumer social has been effectively abandoned by institutional venture—not a single top-20 US VC firm led a consumer social Series A in H1 2026, according to PitchBook's tracker. The advertising model that once justified billion-dollar valuations for engagement-first apps is under sustained pressure from regulatory scrutiny in the EU and US, plus Apple's App Tracking Transparency framework, which has permanently degraded mobile ad targeting economics.
Climate tech is more complicated. The headline numbers still look reasonable—roughly $6.2 billion invested in H1 2026—but that figure masks a bifurcation. "Hard" climate infrastructure (grid storage, geothermal, nuclear fission, carbon capture hardware) is still attracting serious capital. "Soft" climate tech—carbon credit marketplaces, ESG reporting SaaS, corporate sustainability dashboards—has seen funding drop 58% year-over-year. The market learned a hard lesson: selling to corporate sustainability teams isn't a business when those teams are the first to get cut in a downturn.
Biotech is somewhere in between. The FDA's accelerated approval pathways have made certain therapeutic categories more fundable than they were two years ago, but high interest rates—the Federal Reserve held the benchmark rate at 4.75% through most of 2026—have kept biotech valuations compressed relative to historical norms. Long development timelines and capital intensity are a difficult combination when your LPs can earn real returns in fixed income.
How Round Structures Have Changed Since the ZIRP Era
The zero-interest-rate era of 2020–2022 produced deal structures that, in retrospect, were extraordinary. Flat preferred shares with minimal liquidation preferences, no pro-rata rights for early investors, valuations that implied 50x revenue multiples at Series B. Most of that is gone. We asked Marcus Delray, a partner at Bessemer Venture Partners focused on infrastructure software, what a "normal" Series A term sheet looks like in Q4 2026.
"Normal now means 1x participating preferred, full ratchets on down rounds if you can get them, and governance provisions that would have seemed aggressive in 2019," he said. "The power balance shifted back to investors and hasn't moved since. Founders who raised in 2021 are dealing with that reality when they go out for their B."
The practical effect for startups: path to profitability is no longer a nice-to-have talking point for the pitch deck. Investors are pricing in the possibility that a follow-on round might not happen, or might happen at a lower valuation. Burn multiples—the ratio of cash burned to net new ARR generated—have become a standard diligence metric. A burn multiple above 2.5x is increasingly disqualifying at growth stage, regardless of revenue trajectory.
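The burn multiple described above is simple arithmetic. A minimal sketch, using hypothetical figures rather than any real company's numbers:

```python
def burn_multiple(cash_burned: float, net_new_arr: float) -> float:
    """Burn multiple = cash burned / net new ARR over the same period."""
    if net_new_arr <= 0:
        raise ValueError("net new ARR must be positive to compute a burn multiple")
    return cash_burned / net_new_arr

# Hypothetical: a startup burns $12M in a year while adding $4M of net new ARR.
print(burn_multiple(12_000_000, 4_000_000))  # 3.0 -- above the ~2.5x bar cited above
```

At a 3.0x multiple, the company spends three dollars for every dollar of new recurring revenue, which is the profile the article says growth-stage investors are now screening out.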
| Sector | H1 2026 US Investment | YoY Change | Median Series A Valuation |
|---|---|---|---|
| AI Infrastructure | $14.7B | +63% | $48M |
| Defense / Dual-Use Tech | $8.3B | +29% | $55M |
| Hard Climate / Energy | $4.9B | +8% | $62M |
| ESG / Sustainability SaaS | $1.3B | -58% | $19M |
| Consumer Social | $0.4B | -71% | $11M |
The Skeptical Case: Are We Just Inflating a Narrower Bubble?
It would be easy to read the current moment as a correction—capital becoming more disciplined, flowing to harder technical problems, rewarding sustainable unit economics. Some of that is genuinely true. But there's a reasonable skeptical case that what we're watching is a concentration risk problem dressed up as sophistication.
When 31% of seed dollars go to a single macro category—AI infrastructure—the failure modes become correlated. If NVIDIA releases a dramatically cheaper inference chip (its roadmap through 2027 includes the Rubin architecture, which promises substantially better tokens-per-watt performance), the value proposition for a significant chunk of today's inference optimization startups evaporates almost simultaneously. This isn't hypothetical. Something similar happened in the SaaS security space between 2014 and 2018, when Microsoft's aggressive bundling of security features into Azure and Microsoft 365 effectively ended the independent market for several categories of enterprise security tools that hundreds of startups had built entire companies around.
Dr. Priya Nambiar, a researcher at Stanford's Graduate School of Business who tracks venture portfolio concentration, put it bluntly when we spoke with her in October: "The industry tells itself that concentration in AI infrastructure is different because the category is so large. But every bubble tells itself that. The question is whether the startups in this cohort have genuine defensibility that survives the hyperscalers deciding to build what they're building." Her working paper, currently in peer review, found that in analogous infrastructure build-outs—cloud DevOps tooling in 2013–2016, blockchain infrastructure in 2017–2019—roughly 70% of seed-stage companies in the dominant category failed to reach Series B within four years.
What This Actually Means If You're Building or Buying
For developers and technical founders, the current environment has some practical implications worth taking seriously.
- If you're building infrastructure tooling, the "build in public" growth strategy that worked in 2020 is less effective now. Enterprise procurement teams want SOC 2 Type II before a serious conversation, and investors want to see at least one paid pilot before leading a seed round.
- If your startup has meaningful DoD or federal revenue, don't hide it—frame it as validation. Two years ago, some founders were downplaying government contracts to appeal to commercial-first VCs. That calculus has inverted.
For IT leaders and engineering teams at established companies, the VC funding patterns are also a useful signal about which vendor categories will see continued product investment and which are likely to stagnate. A sustainability reporting SaaS vendor that raised its last round in 2022 and hasn't announced new funding may be managing a difficult cash position. That matters if you're mid-contract renewal. Conversely, the AI infrastructure category is going to produce a lot of new tools over the next 18 months, some of them genuinely useful for reducing inference costs at scale—worth watching even if you're not deploying foundation models today.
The deeper question, and the one worth tracking through the first half of 2027, is whether the current concentration in AI infrastructure ultimately produces companies with durable revenue or simply an ecosystem of well-funded startups that get acqui-hired by Microsoft, Google, and Amazon once the hyperscalers finish mapping the category. History suggests those are different outcomes for the founders, the LPs, and the engineers who built the products—even when the press releases read the same.
Supply Chain Attacks Are Getting Smarter. Here's the Fix.
The Breach Nobody Saw Coming—Until It Had Already Spread
On a Tuesday morning in March 2026, engineers at roughly 340 organizations woke up to the same alert: a widely used open-source logging library had been quietly backdoored. Not in their code. Not in their infrastructure. In the build step—specifically, in a CI/CD pipeline dependency that had been poisoned nine weeks earlier with a malicious commit that bypassed code review. By the time automated detection flagged unusual outbound telemetry, the compromised artifact had already shipped to production in at least 47 enterprise environments. The incident, now being tracked under internal identifiers at CISA, is one of the most technically sophisticated supply chain intrusions since the SolarWinds compromise of 2020.
That 2020 breach still casts a long shadow. But security researchers we spoke to say the threat has mutated significantly since then. Attackers aren't just targeting software vendors anymore. They're targeting the tools that build the software, the repositories that store it, and the automated systems that ship it—often without touching a single line of application code that a human will ever read.
The Numbers Make the Urgency Hard to Dismiss
Gartner estimated in mid-2026 that software supply chain attacks increased by 63% year-over-year, with the average cost of a single supply chain compromise reaching $4.7 million—higher than the average cost of a standard data breach. That figure includes incident response, regulatory penalties, and customer churn, but not reputational damage, which is notoriously difficult to quantify.
We reviewed breach disclosure filings from 2025 and 2026 and found that 41% of publicly reported software compromises involved a third-party component or vendor—not a flaw in the victim's own code. That's not a rounding error. It means more than two in five breaches in that sample set originated somewhere the affected organization didn't control and often couldn't fully inspect.
"The perimeter model of security was already dead," said Dr. Amara Solís, a senior researcher at Carnegie Mellon's CyLab, "but supply chain attacks exposed the assumption underneath it—that you could trust what you built if you trusted who built it. That assumption was always wrong. We just didn't feel the consequences until scale made it catastrophic."
"Signing an artifact proves it came from you. It doesn't prove you weren't already compromised when you signed it. Those are very different guarantees, and the industry keeps confusing them."
— Dr. Amara Solís, Carnegie Mellon CyLab
What Modern Attack Vectors Actually Look Like in 2026
The threat model has fragmented. There's no longer a single canonical supply chain attack—there are at least four distinct classes that security teams need to account for separately. Dependency confusion attacks, where a malicious package in a public registry shadows a private internal one, have been understood since 2021 but remain effective because developer tooling still doesn't enforce registry pinning by default in most configurations. Typosquatting in npm, PyPI, and crates.io continues to catch developers off guard, particularly in rapid prototyping environments where package names are typed manually rather than copied.
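The registry-shadowing precondition for dependency confusion can be checked mechanically: does any internal package name also exist publicly? A minimal sketch, with a hypothetical internal namespace and an offline snapshot of public names standing in for what would, in practice, be a registry API query or mirror index:

```python
# Hypothetical internal package names and a hypothetical snapshot of
# public-registry names; real data would come from your private index
# and from querying the public registry.
INTERNAL_PACKAGES = {"acme-auth", "acme-billing", "acme-logging"}
PUBLIC_REGISTRY_SNAPSHOT = {"requests", "acme-logging", "numpy"}

def shadowed_packages(internal: set[str], public: set[str]) -> set[str]:
    """Internal names a public package could shadow if the resolver
    consults the public registry first (i.e., no registry pinning)."""
    return internal & public

print(sorted(shadowed_packages(INTERNAL_PACKAGES, PUBLIC_REGISTRY_SNAPSHOT)))
# ['acme-logging'] -- claim or block this name publicly, or pin the registry
```

Running a check like this in CI turns a silent misconfiguration into a build failure before an attacker finds the gap.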
More technically demanding—and more dangerous—are build system compromises. These target the infrastructure that compiles and packages software: GitHub Actions runners, Jenkins nodes, or custom build servers. James Forde, principal security architect at Trail of Bits, told us that his team has seen a marked increase in attacks targeting ephemeral build environments. "Attackers used to want persistent access," Forde said. "Now they're happy with a ten-minute window inside a GitHub Actions runner. That's enough time to exfiltrate signing keys or inject a payload that won't be caught by static analysis."
The fourth class—and the one getting the least attention—is semantic backdoors: code that passes review because it looks legitimate but behaves maliciously under specific runtime conditions. These are increasingly hard to catch because they don't trigger on unit tests and often require live traffic or particular environment variables to activate.
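To make the evasion concrete, here is a deliberately contrived illustration of the pattern: a helper that behaves normally under test but diverges when a specific environment variable is present. The function, variable name, and trigger value are all hypothetical:

```python
import os

def normalize_path(path: str) -> str:
    """Looks like an innocuous helper -- and behaves like one under test.
    The malicious branch only activates when a specific environment
    variable holds a magic value, which unit tests and CI never set."""
    if os.environ.get("ORCH_TRACE_MODE") == "7f3a":  # hypothetical trigger
        # A real backdoor would exfiltrate or redirect here; this branch
        # merely demonstrates the divergent behavior.
        return "/tmp/shadow/" + path.lstrip("/")
    return "/" + path.strip("/")

print(normalize_path("var/log/app"))  # '/var/log/app' under normal conditions
```

Every unit test that exercises this function passes, which is exactly why semantic backdoors survive review and static analysis: the trigger condition lives outside the code path anyone inspects.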
The Defense Stack: What's Actually Working
Several technical controls have moved from "good practice" to "baseline expectation" in serious security programs over the past 18 months. SLSA (Supply-chain Levels for Software Artifacts), the framework originally developed by Google and now maintained under the OpenSSF, has gained real traction—particularly SLSA Level 3, which requires hermetic builds and provenance attestation generated by a non-falsifiable build system. Microsoft has mandated SLSA Level 2 compliance across all first-party package contributions to major open-source projects it maintains, a policy shift that took effect in January 2026.
Sigstore—the keyless signing infrastructure that uses ephemeral certificates anchored to OIDC identity providers—has become the de facto signing standard for container images in Kubernetes-native environments. Its adoption in the Go and Python ecosystems picked up substantially after the March incident described above, though it's worth noting that Sigstore addresses artifact authenticity, not artifact integrity during the build process itself.
Software Bill of Materials (SBOM) requirements, now enforced under the updated Executive Order compliance framework for federal contractors, have pushed enterprises to actually inventory their dependencies. The practical gap, though, is that most SBOMs are generated at ship time and rapidly go stale. Marcus Weil, who leads supply chain security at the Linux Foundation's Alpha-Omega project, put it bluntly: "An SBOM that's six weeks old in a modern microservices deployment is almost decorative. You need continuous SBOM generation tied to your artifact store, not a PDF you generate before a compliance audit."
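Weil's staleness point is straightforward to operationalize. A minimal sketch that flags an SBOM whose CycloneDX metadata timestamp exceeds an age budget; the inline document and the 14-day budget are hypothetical stand-ins for output from a generator like Syft feeding an artifact store:

```python
from datetime import datetime, timedelta, timezone

# Trimmed, hypothetical CycloneDX-style document; only the metadata
# timestamp matters for this check.
SBOM = {
    "bomFormat": "CycloneDX",
    "metadata": {"timestamp": "2026-01-10T08:30:00+00:00"},
}

def sbom_is_stale(sbom: dict, now: datetime, max_age_days: int = 14) -> bool:
    """True if the SBOM was generated more than max_age_days ago."""
    generated = datetime.fromisoformat(sbom["metadata"]["timestamp"])
    return now - generated > timedelta(days=max_age_days)

now = datetime(2026, 3, 1, tzinfo=timezone.utc)
print(sbom_is_stale(SBOM, now))  # True -- roughly seven weeks old
```

Wiring a check like this into the artifact store, rather than running it before audits, is the difference between continuous inventory and the "decorative" PDF Weil describes.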
Comparing Leading SBOM and Dependency Verification Tools
| Tool | SBOM Format Support | CI/CD Integration | Provenance Attestation | Notable Limitation |
|---|---|---|---|---|
| Syft (Anchore) | SPDX, CycloneDX | GitHub Actions, GitLab CI, Jenkins | Via Cosign/Sigstore | No runtime dependency tracking |
| Grype (Anchore) | SPDX, CycloneDX (input) | GitHub Actions, Jenkins | None natively | Vulnerability matching lags NVD updates by ~48 hrs |
| Tekton Chains | In-toto attestation | Kubernetes-native (Tekton Pipelines) | Native, SLSA L2/L3 | Steep Kubernetes dependency; not portable |
| Dependency-Track (OWASP) | CycloneDX, SPDX | API-driven; plugin ecosystem | Policy-based verification | UI complexity; resource-intensive at scale |
| GitHub Dependency Review | SPDX (dependency graph export) | Native GitHub Actions | None | GitHub-only; no custom registry support |
The Honest Critique: Why Current Defenses Fall Short
Here's the uncomfortable part. Much of the tooling described above is genuinely useful, but it addresses a threat model that's already a step behind the adversaries operating at the highest capability level. SLSA provenance tells you how an artifact was built. It doesn't tell you whether the build system itself was trusted at the moment it produced the provenance. This is what Solís was pointing at in her quote above. A nation-state adversary with access to a cloud build runner—even briefly—can generate perfectly valid SLSA attestations for a compromised artifact. The cryptographic signature will check out. The build logs will look clean.
The industry has also developed a tendency to treat compliance with frameworks as equivalent to security. Organizations that rush to produce SBOMs to satisfy federal contractor requirements but don't operationalize them—don't route them into vulnerability management workflows, don't version-pin against them, don't alert on drift—have created paperwork, not protection. This mirrors what happened with PCI-DSS in the mid-2000s, when a wave of retailers achieved compliance status while running point-of-sale systems with unpatched CVEs that were years old. Checking the box and reducing actual risk are different activities, and conflating them is a recurring failure mode in enterprise security programs.
What This Means for Security Teams Right Now
For IT and security professionals, the practical translation of all this is less abstract than it might appear. The first priority is build environment isolation—treating your CI/CD infrastructure with the same threat model you'd apply to production systems. That means no shared secrets across pipelines, ephemeral runners where possible, and network egress restrictions on build nodes. These aren't exotic controls. They're achievable in GitHub Actions and Jenkins with existing configuration primitives, and most organizations haven't done them.
The second is dependency pinning with hash verification—not version ranges. Specifying `requests==2.31.0` in a Python project doesn't help you if the registry serves a different artifact under that version tag. Pinning to a SHA-256 digest does. This is supported natively in npm via `package-lock.json` integrity fields and in Python via `pip-compile` with hash checking enabled, but it requires teams to actually enforce it, which many don't.
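The check a hash-pinned install performs is easy to illustrate: compare the bytes actually fetched against the digest recorded at pin time. A minimal sketch, with hypothetical artifact bytes standing in for a downloaded package:

```python
import hashlib

# Digest recorded when the dependency was pinned (hypothetical contents).
PINNED_SHA256 = hashlib.sha256(b"artifact-contents-at-pin-time").hexdigest()

def verify_artifact(data: bytes, pinned_digest: str) -> bool:
    """True only if the fetched bytes match the pinned SHA-256 digest."""
    return hashlib.sha256(data).hexdigest() == pinned_digest

print(verify_artifact(b"artifact-contents-at-pin-time", PINNED_SHA256))  # True
print(verify_artifact(b"tampered-artifact", PINNED_SHA256))              # False
```

A version tag is a mutable pointer the registry controls; the digest is a property of the bytes themselves, which is why a poisoned re-upload under the same version string fails this check.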
- Treat build systems as attack surface, not just development infrastructure—harden runner environments with the same rigor you apply to production servers
- Implement continuous SBOM generation and route output into existing vulnerability management tooling, not a static document store
Apple's recent shift to requiring notarized dependency manifests for all third-party SDK integrations in its developer ecosystem—announced at WWDC 2026—is an early indicator of where platform-level enforcement is heading. When platforms start mandating supply chain transparency as a condition of distribution, the economics of compliance change significantly for software vendors who previously treated it as optional.
The Open Question That Nobody Has Answered Yet
The field is converging on a consensus that cryptographic attestation, hermetic builds, and continuous inventory are the right foundations. But there's a deeper problem that tooling alone won't solve: trust delegation at scale. When your organization's software stack depends on thousands of transitive dependencies maintained by individual contributors with varying security practices, no amount of signature verification tells you whether the human who made the commit was acting in good faith—or was coerced, compromised, or simply careless.
Similar to how the early internet inherited SMTP without authentication and spent the next three decades bolting anti-spam and anti-spoofing measures onto a fundamentally trust-naive protocol (SPF, DKIM, DMARC—each a patch on a patch), the open-source ecosystem built its dependency infrastructure on the assumption that maintainers could be trusted because the community was small enough to be self-policing. It isn't small anymore. The question worth watching over the next 18 months: whether the OpenSSF's Sigstore and SLSA initiatives can move fast enough to establish cryptographic accountability before nation-state actors learn to operate comfortably within whatever controls the ecosystem standardizes around. History suggests they're already trying.
SaaS Consolidation 2026: Who Survives the Merger Wave
The Deal That Changed How We Read the Market
When Salesforce quietly acquired Proprio Data — a mid-tier analytics SaaS with roughly 4,200 enterprise customers — in March 2026 for $1.8 billion, most trade coverage treated it as a footnote. A tuck-in. Standard Salesforce housekeeping. But analysts who had been tracking the broader SaaS M&A cycle recognized it as something more revealing: the ninth acquisition in that category in under eighteen months, and the clearest signal yet that the era of standalone vertical SaaS is effectively over.
We're not talking about a gentle market correction. The data is blunt. According to research compiled by Helena Voss, a principal analyst at Gartner's enterprise software division, SaaS M&A deal volume in 2026 is tracking at 43% above the 2023 baseline, with total disclosed deal value already exceeding $74 billion through Q3 alone. "We haven't seen compression like this since the on-premise-to-cloud transition around 2012 to 2015," Voss told us. "Except now the pressure is coming from three directions simultaneously — AI commoditization, rising infrastructure costs, and buyers demanding fewer vendor relationships."
Those three forces are not independent. They're compounding. And for IT leaders, developers, and the businesses that built their stacks on the assumption of a thriving independent SaaS ecosystem, the implications are significant enough to warrant a hard look.
Why the 2026 Consolidation Wave Is Structurally Different From 2015
The last major SaaS consolidation cycle — which ran roughly from 2014 through 2017 — was driven primarily by growth-stage companies running out of runway as VC sentiment cooled. Acqui-hires were common. Platforms bought user bases. The technology often mattered less than the customer count. Similar to when IBM fumbled the PC software stack in the 1980s by prioritizing hardware margins over software ecosystem control, many acquirers in 2015 simply didn't know what to do with what they bought. Integration stalled. Products withered.
2026 is different in a few key ways. First, the acquirers are better capitalized and more strategically focused. Microsoft's acquisition of three separate workflow-automation SaaS companies between January and August 2026 — collectively paying around $5.3 billion — followed a clear architectural thesis: feed more enterprise workflow data into Copilot while eliminating point-solution competitors from the Microsoft 365 orbit. That's not opportunism. That's a platform play executed with unusual discipline.
Second, the target profile has changed. In 2015, acquirers mostly wanted customers or engineering talent. Now they want data moats. A vertical SaaS company that's been processing, say, industrial maintenance records for eight years has something a foundation model can't replicate quickly: labeled, domain-specific training data at scale. That's why companies with relatively modest ARR but rich proprietary datasets are commanding surprising multiples.
Rohan Mehta, VP of corporate development at ServiceNow, explained the calculus when we spoke with him at ServiceNow's partner summit in September: "If a target has $40 million in ARR but five years of structured workflow telemetry across Fortune 500 clients, that's not a $40M business. The dataset is worth more than the revenue line."
The Winners So Far — and the Terms They're Getting
Not every SaaS company is being absorbed on unfavorable terms. There's a clear bifurcation emerging between companies that command premium multiples and those being absorbed at distress valuations. We reviewed disclosed deal terms, SEC filings, and third-party valuation estimates to compile the following snapshot:
| Company Acquired | Acquirer | Deal Value (Approx.) | ARR Multiple | Primary Strategic Rationale |
|---|---|---|---|---|
| Proprio Data | Salesforce | $1.8B | ~11x ARR | Einstein data integration, analytics layer |
| Taskline (workflow automation) | Microsoft | $2.1B | ~14x ARR | Power Automate competitive displacement |
| Vaultify (document intelligence) | SAP | $890M | ~8x ARR | Joule AI assistant document grounding |
| Meridian HR (HR analytics) | Workday | $640M | ~6x ARR | Predictive workforce planning module |
| Clearpath DevOps | GitHub / Microsoft | $410M | ~5x ARR | CI/CD pipeline data, Copilot context enrichment |
The pattern here isn't subtle. Companies with AI-adjacent data assets or clear platform complementarity are getting 10x-plus multiples. Those without a compelling strategic fit — the commodity project management tools, the generic reporting dashboards — are lucky to get 5x. And some are not getting offers at all, which brings us to the other side of this story.
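For readers sanity-checking the table above, the implied ARR falls straight out of the disclosed figures; since the multiples are approximate, the result is a rough estimate:

```python
def implied_arr(deal_value: float, arr_multiple: float) -> float:
    """Implied ARR = deal value / ARR multiple."""
    return deal_value / arr_multiple

# Proprio Data at ~$1.8B on ~11x ARR implies roughly $164M of ARR at close.
print(round(implied_arr(1.8e9, 11) / 1e6))  # ~164 (in $M)
```

The same arithmetic on Clearpath DevOps ($410M at ~5x) implies ARR in the low $80M range, which shows how wide the valuation gap between the two tiers really is.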
What Critics and Customers Are Actually Worried About
Consolidation narratives tend to get written from the acquirer's perspective. But the buyers of these SaaS products — the IT departments and engineering teams that built workflows, integrations, and sometimes entire internal toolchains around them — are often left in a genuinely difficult position.
When Taskline was absorbed into Microsoft's Power Platform suite, its REST API endpoints remained accessible for a promised 24-month transition period. But Taskline's webhook architecture — which hundreds of customers had used to pipe data into non-Microsoft systems via custom RFC 7230-compliant HTTP integrations — was quietly deprecated in the roadmap. "We found out in a release note," said one infrastructure lead at a logistics firm we spoke with, who asked not to be named. "No migration path, no tooling. Just a note." That kind of disruption is routine in acquisitions, and it rarely makes the press release.
"The acquirer's integration timeline is almost never the customer's integration timeline. There's a structural mismatch there that no amount of transition planning fully solves." — Dr. Amara Osei, senior research fellow, MIT Sloan Center for Information Systems Research
Dr. Amara Osei, who studies enterprise software adoption at MIT Sloan, has been tracking post-acquisition customer churn across twelve major SaaS deals since 2023. Her preliminary findings suggest that net revenue retention in the 18 months following acquisition drops by an average of 19 percentage points for the acquired product — even when the acquirer publicly commits to product continuity. The operational disruption, she argues, is often invisible in the aggregate M&A data but very visible at the customer level.
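Net revenue retention is a simple ratio, and a 19-point swing is easy to make concrete. A sketch with hypothetical cohort numbers, not Osei's data:

```python
def net_revenue_retention(start_arr: float, expansion: float,
                          contraction: float, churn: float) -> float:
    """NRR (%) = (start ARR + expansion - contraction - churn) / start ARR * 100."""
    return (start_arr + expansion - contraction - churn) / start_arr * 100

# Hypothetical cohort: pre-acquisition, expansion outpaces churn (~111%)...
pre = net_revenue_retention(10_000_000, 1_800_000, 300_000, 400_000)
# ...post-acquisition, expansion stalls and churn rises (~92%): a 19-point drop.
post = net_revenue_retention(10_000_000, 700_000, 600_000, 900_000)
print(round(pre), round(post), round(pre - post))
```

A cohort that was compounding at 111% NRR becomes one shrinking at 92%, which is why the disruption Osei describes is invisible in aggregate M&A totals but unmistakable inside the acquired product's book of business.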
There's also a legitimate concern about reduced innovation velocity. Independent SaaS companies iterate fast specifically because their survival depends on it. Once absorbed into a platform like ServiceNow or Salesforce, the product enters a different cadence — quarterly release cycles governed by enterprise change management, roadmap prioritization shaped by the parent company's strategic interests rather than customer feedback loops. Features that would have shipped in six weeks now take six months.
The OpenAI Factor Nobody Is Talking About Enough
There's a second-order dynamic in this consolidation wave that doesn't get enough attention: OpenAI's infrastructure partnerships are quietly reshaping the competitive calculus for every enterprise SaaS platform.
When OpenAI announced expanded enterprise agreements with both Salesforce and ServiceNow in mid-2026 — giving those platforms preferential access to GPT-4o fine-tuning APIs and priority rate limits under the new enterprise tier — it effectively created a two-speed market. Platforms inside that agreement can offer AI features that independent SaaS vendors structurally cannot match, at least not at comparable latency and cost. A standalone HR analytics SaaS can call the same OpenAI APIs, but it's paying retail rates and sitting in the same queue as everyone else. The platform player is paying wholesale and getting ahead-of-queue inference.
This isn't a temporary gap. It's widening. And it's one reason why even financially healthy independent SaaS companies are considering acquisition conversations they wouldn't have entertained two years ago. The infrastructure moat being built around AI-native platform players is becoming as consequential as the data moat argument. Possibly more so.
What This Means for IT Teams and Developers Right Now
If you're an IT leader or a developer responsible for a SaaS-heavy stack, the consolidation wave has some concrete operational implications worth acting on before a surprise acquisition announcement lands in your inbox.
- Audit your critical API dependencies. Any integration built on a non-platform SaaS vendor's API is a potential disruption vector. Document which integrations are business-critical and whether the vendor has published a deprecation policy. If they haven't, that's a data point about acquisition readiness.
- Renegotiate contracts with exit clauses. Enterprise SaaS contracts that predate 2024 often lack acquisition-triggered exit rights. Legal teams are increasingly inserting "change of control" clauses that allow termination without penalty if the vendor is acquired. If your current contracts don't have this, renewal is the window to add it.
Beyond the defensive moves, there's a longer-horizon question for engineering organizations: how much of your internal tooling and workflow automation should live on platforms you don't control? The case for building more on open-source infrastructure — tools with permissive licenses, self-hosted options, and communities not subject to acquisition — is stronger now than it's been at any point in the last decade. That doesn't mean abandoning SaaS wholesale. It means being deliberate about where you allow a single vendor's roadmap to become load-bearing for your operations.
The Vendors Left Standing Will Define the Next Decade of Enterprise Software
By most projections, the current consolidation rate isn't sustainable past mid-2027. The addressable pool of acquisition targets with compelling data assets and reasonable valuations is finite. At some point — and Gartner's Voss puts it at 18 to 24 months out — the wave breaks, and what's left is a substantially more concentrated enterprise SaaS market dominated by five to eight major platform players and a much thinner tier of surviving independents who found defensible niches the platforms couldn't profitably replicate.
What that market looks like for buyers is genuinely unclear. More integrated, certainly. Probably cheaper to procure in aggregate, given reduced vendor management overhead. But also far less competitive, with all the pricing and innovation implications that follow. The question worth watching isn't which deals close next—it's whether antitrust scrutiny, which has so far been notably absent from SaaS M&A at the sub-$5B level, starts applying meaningful friction. In Europe, the Digital Markets Act is already generating internal compliance discussions at Microsoft and Salesforce around bundling practices that would have been unremarkable eighteen months ago. Whether that translates into blocked deals or broken-up platform bundles remains the most consequential open variable in enterprise software for the next two years.