Why $180M Rounds Don't Mean What They Used To in 2026
The Number on the Press Release Is Almost Never the Real Number
When Meridian AI, a San Francisco-based infrastructure startup, announced its $180 million Series C in October 2026, the headlines were predictable. "Unicorn status." "Explosive growth." The valuation: $1.4 billion. What the press release didn't mention—and what almost no coverage picked up—was the liquidation preference stack sitting underneath that headline figure. Investors in the C round had 2x non-participating preferred shares. In plain English: if Meridian exits at anything under $2.8 billion, common shareholders—including most employees—walk away with considerably less than the valuation implies. The $1.4B number is technically accurate and functionally misleading.
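A minimal sketch of that waterfall math, using the Meridian figures above. This is a single-series simplification; a real cap table stacks preferences across several rounds, so treat it as illustrative only:

```python
def common_payout(exit_value, invested, pref_multiple, investor_pct):
    """Payout to common holders under a non-participating preference.

    Investors take the greater of (pref_multiple * invested) or what
    they would receive by converting to common at their ownership stake.
    Single-series simplification; real waterfalls stack several rounds.
    """
    preference = min(pref_multiple * invested, exit_value)
    as_common = investor_pct * exit_value
    investor_take = max(preference, as_common)
    return exit_value - investor_take

invested = 180e6                 # Series C cash in
pct = invested / 1.4e9           # ~12.9% ownership at the $1.4B post-money

# Conversion breakeven: pct * exit == 2 * invested, i.e. a $2.8B exit
breakeven = 2 * invested / pct

for exit_value in (700e6, 1.4e9, 2.8e9, 5.6e9):
    common = common_payout(exit_value, invested, 2.0, pct)
    print(f"${exit_value / 1e9:.1f}B exit -> ${common / 1e9:.2f}B to common")
```

Below the $2.8 billion breakeven, investors take their $360 million preference off the top; above it they convert and share pro rata. That asymmetry is exactly why the $1.4B headline says so little about what common stock is worth.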
This is the defining tension of late 2026's startup funding environment. Capital is flowing again—global venture investment hit $287 billion in the first three quarters of 2026, a 34% rebound from the correction lows of 2024—but the terms attached to that capital have grown sophisticated in ways that compress real returns for everyone except the lead investors. Understanding those terms is now a core competency for any technical founder, engineering leader, or developer considering equity compensation at a growth-stage company.
How We Got Here: The 2023–2025 Recalibration Did Permanent Damage to "Vibes" Valuations
Cast your mind back to 2021. OpenAI's valuation was climbing past $20 billion on the strength of GPT-3 demos. Tiger Global was leading rounds with 48-hour term sheets. Multiples on annual recurring revenue (ARR) for SaaS companies reached 40x, 50x, even higher for anything that had the word "AI" in the deck. Then rates rose. The market corrected hard.
The correction wasn't just about price—it restructured the entire logic of how investors assess startups. "We spent 2021 funding stories," says Priya Nambiar, partner at Lightspeed Venture Partners' enterprise team. "What we're doing now is funding unit economics. If your gross margin is below 65% and you can't explain your path to Rule of 40 in four quarters, the conversation gets short very quickly."
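For readers who haven't run the metric, the Rule of 40 Nambiar invokes is simple arithmetic: revenue growth rate plus a profitability margin should total at least 40. Which margin gets used (FCF, EBITDA, operating) varies by investor; free-cash-flow margin is assumed in this sketch:

```python
def rule_of_40(revenue_growth_pct: float, fcf_margin_pct: float) -> float:
    """Growth rate plus margin, both in percentage points.

    A score of 40+ is the conventional bar for healthy SaaS economics.
    FCF margin is assumed here, though investors differ on the choice.
    """
    return revenue_growth_pct + fcf_margin_pct

rule_of_40(70, -25)   # 45: a fast grower can clear the bar while burning cash
rule_of_40(25, 5)     # 30: modest growth with thin margins falls short
```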
Similar dynamics played out when enterprise software moved from perpetual licensing to SaaS in the early 2010s. Investors initially overcorrected—punishing companies for revenue recognition changes that didn't reflect real business decline—then swung back to over-enthusiasm. The same whipsaw happened between 2021 and 2025, just faster and more globally connected. The institutional memory from that period is still shaping term sheets written today.
The Anatomy of a 2026 Series B: What's Actually in the Term Sheet
We reviewed a redacted term sheet from a late-stage Series B closed in September 2026 (the startup operates in the DevSecOps space and asked not to be named). Several features stood out as characteristic of the current moment:
- Pay-to-play provisions requiring existing investors to participate in future rounds or face conversion from preferred to common stock—a mechanism that effectively punishes passive cap-table holders.
- Milestone-based tranches, where the second tranche ($22 million of a $40 million total) releases only after the company hits $8 million ARR within 18 months.
These aren't punitive terms by 2026 standards—they're standard. "The days of a clean term sheet with 1x non-participating preferred are essentially gone at Series B and beyond," says Marcus Delgado, general counsel at Emergence Capital Partners. "What we're seeing is that founders who didn't live through the 2024 down-round cycle don't fully understand the waterfall implications until it's too late." Delgado advises founders to model exit scenarios at 1x, 2x, and 5x the last round valuation before signing—not just the headline upside case.
AI Infrastructure Is Eating the Funding Round, Not Just the Product
One structural change that's genuinely new—not a rehash of previous cycles—is how deeply compute costs have infiltrated valuation models. When Microsoft extended its partnership with OpenAI in 2023 and committed to integrating Azure infrastructure at the model training layer, it set a precedent: hyperscalers are now active participants in startup capital formation, not just cloud vendors. In 2026, Microsoft's M12 corporate venture arm has co-led or participated in 23 AI infrastructure deals through Q3, often providing Azure credits as a component of the investment—a practice that inflates headline round sizes without representing cash on the balance sheet.
NVIDIA's NVentures arm is doing the same thing, sometimes packaging GPU access credits worth tens of millions of dollars as part of a round's announced total. It's not fraud—the credits are real and valuable—but it distorts comparisons. A $100 million round where $40 million is infrastructure credits from a hyperscaler partner is a fundamentally different instrument than $100 million in cash.
"When you back out the cloud credits and look at actual committed capital, some of these 'landmark' rounds shrink by thirty to forty percent. The press release math and the cap table math are two different documents."
— Dr. Elena Vasquez, venture finance researcher at Stanford Graduate School of Business
This matters practically for developers evaluating job offers. If you're joining a company that just raised $120 million and you're being offered equity valued against a $900 million post-money valuation, you need to know what fraction of that $120 million is spendable cash versus infrastructure commitments with usage constraints and expiration dates.
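The adjustment itself is trivial arithmetic, which is exactly why it's worth doing before comparing headline rounds. The figures below mirror the hypothetical $100 million round above:

```python
def cash_adjusted_round(announced: float, credits: float) -> tuple[float, float]:
    """Back infrastructure credits out of an announced round total.

    The credits are real and spendable on compute, but they carry usage
    constraints and expiry, so they are not runway in the cash sense.
    """
    cash = announced - credits
    shrinkage = credits / announced
    return cash, shrinkage

cash, shrink = cash_adjusted_round(100e6, 40e6)
# $60M of committed cash; the headline number shrinks by 40%
```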
Valuation Multiples by Sector: What the Numbers Actually Show
| Sector | Median ARR Multiple (Q3 2026) | Median Gross Margin | Typical Series B Size |
|---|---|---|---|
| AI Infrastructure / MLOps | 18x ARR | 61% | $45–90M |
| Vertical SaaS (non-AI) | 9x ARR | 72% | $20–40M |
| Cybersecurity / Zero Trust | 14x ARR | 78% | $35–65M |
| Developer Tooling (open core) | 11x ARR | 68% | $25–50M |
| Climate / Industrial Tech | 6x ARR | 44% | $30–80M |
The disparity between AI infrastructure multiples and traditional vertical SaaS isn't irrational—AI infra companies are genuinely capturing faster revenue growth. But the gross margin gap is a warning sign that many analysts are currently underweighting. An AI infrastructure company running at 61% gross margin has less financial cushion than its valuation suggests relative to a boring vertical SaaS company at 72%. When the inevitable pricing compression hits GPU-dependent workloads, that margin gap will widen.
The Skeptics Are Not Wrong: Why High Valuations Create Bad Incentives
There's a structural criticism of the current funding environment that deserves a full hearing, not a dismissive footnote. When a company raises at a $1.4 billion valuation—as in the Meridian example above—it locks in expectations that are difficult to reset. The next round needs to come in higher, or it's a down round, which triggers those pay-to-play provisions, dilutes existing shareholders, and can spook customers and recruits who track funding news as a proxy for company health. The valuation number, in other words, becomes a liability.
Critics also point to the concentration problem. According to data we pulled from PitchBook's Q3 2026 report, the top 50 deals by round size accounted for 41% of all venture capital deployed in the U.S. through September 2026. That's a historically high concentration. Early-stage companies outside the AI hype orbit—particularly those building in climate hardware, biotech instrumentation, or enterprise data infrastructure without an LLM angle—are finding Series A capital increasingly scarce relative to the 2020–2021 baseline. The flood of capital into AI is partially a drought everywhere else.
What IT Leaders and Technical Founders Should Actually Do With This Information
If you're a CTO or engineering lead evaluating whether to join a growth-stage company, the valuation headline should be one of the last things you look at. Ask for the cap table. Ask how much of the last raise was cash versus cloud credits. Ask specifically whether there are liquidation preferences above 1x, and whether they're participating or non-participating. These are not rude questions—they're basic due diligence, and any company that stonewalls on them is telling you something important.
For founders considering raising in Q1 or Q2 2027: the window is open but the tolerance for pre-revenue raises has collapsed almost entirely outside of deep-tech and defense tech verticals. Investors want to see at minimum $1M ARR before leading a Series A conversation in most sectors—a bar that would have seemed laughably low in 2021 but now represents a real filter. Build the revenue first. The terms you'll get on the other side of that milestone are meaningfully better than what you'd get pitching on vision alone.
And for developers watching equity packages at startups: the standard four-year vesting schedule with a one-year cliff hasn't changed, but the effective value of that equity is more opaque than it was five years ago. A $200K equity grant at a $1B valuation might be worth $40K after the preference stack clears—or it might be worth $400K if the company beats its growth targets. The variance is enormous, and the terms determine which scenario materializes, not the headline number. Ask the questions nobody thinks to ask until it's too late. The math is learnable. The regret, less so.
The real question heading into 2027 is whether the current preference stacking and milestone-tranche structures will survive contact with a market that's starting to price in an AI productivity plateau. If enterprise buyers begin demanding proof of ROI from AI tools at the same rate that investors began demanding it from SaaS companies in 2023, the multiples in that top row of the table above will compress fast—and the protection mechanisms investors wrote into their term sheets will get their first real stress test.
OLED vs MicroLED: Who Wins the Display War in 2026
A Panel That Costs More Than a Used Car
Earlier this year, a 27-inch MicroLED monitor from Samsung's professional display division shipped to a handful of broadcast studios with a price tag of $28,000. Not a typo. Twenty-eight thousand dollars for a desktop display. And the buyers — color grading suites, high-end video production houses, a few surgical imaging labs — didn't hesitate. That single data point tells you more about where display technology sits right now than any market forecast: the ceiling on performance has been shattered, but the floor on cost is still brutally high.
We're at an inflection point that genuinely matters. OLED, which spent most of the last decade maturing from phone screens into monitors and TVs, is now a commodity in premium consumer electronics. MicroLED, meanwhile, has been "three years away" from mass production for about eight consecutive years. But in late 2026, something has actually shifted. Manufacturing yields are climbing. Panel architects are solving problems that looked intractable in 2022. And the competitive pressure between these two technologies is producing real, measurable progress that will hit your desk — or your OR, or your control room — sooner than you'd expect.
What OLED Actually Got Right — and What It Didn't
OLED's fundamental proposition is elegant: each pixel generates its own light, so you get true blacks by simply turning pixels off. Contrast ratios that LCD-backlit panels can't touch. Sub-millisecond pixel response times. Color accuracy that, in the best implementations, hits Delta-E values below 1.0 — the threshold at which human vision can't distinguish the displayed color from the reference. For content creators, radiologists, and anyone doing serious visual work, that's not a luxury. It's a requirement.
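The Delta-E figure in that paragraph is a concrete computation, not a marketing number. The original 1976 formula (CIE76) is simply Euclidean distance in CIELAB space; the Lab coordinates below are illustrative, and modern panel reviews usually use the perceptually weighted CIEDE2000 variant instead:

```python
import math

def delta_e_76(lab1, lab2):
    """CIE76 color difference: Euclidean distance between two CIELAB colors.

    Delta-E below ~1.0 is conventionally treated as imperceptible to
    human vision; CIE94 and CIEDE2000 refine the weighting but keep
    the same basic idea.
    """
    return math.dist(lab1, lab2)

reference = (53.2, 80.1, 67.2)   # illustrative Lab target for a saturated red
displayed = (53.6, 79.8, 67.5)   # what a well-calibrated panel might emit

de = delta_e_76(reference, displayed)   # ~0.58, below the visibility threshold
```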
Apple has pushed this harder than most. Their Pro Display XDR used mini-LED backlighting as a stopgap, but Apple's internal display teams have been shipping OLED in every iPad Pro since 2024 and have been quietly solving OLED's most persistent weakness: burn-in. The tandem OLED stack Apple introduced — two emissive layers bonded together — reduces per-layer brightness stress by roughly 50%, which directly extends panel longevity. It's not a perfect solution, but it's a serious engineering response to a real problem.
The burn-in issue still haunts OLED in professional deployments. Static UI elements — taskbars, menu bars, persistent HUD overlays in industrial applications — will degrade OLED pixels unevenly over time. Dr. Naomi Trevisan, a display materials researcher at MIT's Research Laboratory of Electronics, has been tracking accelerated aging tests on current-generation OLED panels. Her team's data, published in September 2026, suggests that even with improved emitter formulations, a panel displaying a static white toolbar at 200 nits for eight hours daily will show measurable luminance non-uniformity within 18 to 22 months. That's fine for a consumer TV. It's a real liability for a workstation monitor running the same creative suite layout every day.
"The tandem stack buys you time, but it doesn't change the fundamental physics. OLED pixels are still consuming themselves every time they emit light. The question is how slowly you can make that happen."
— Dr. Naomi Trevisan, Display Materials Research Group, MIT Research Laboratory of Electronics
MicroLED's Manufacturing Problem Is Finally Being Taken Seriously
MicroLED is, in theory, the better technology across almost every dimension. Individual inorganic LEDs — each one a microscopic semiconductor device — don't burn in. They're dramatically brighter than OLED, capable of sustained luminance above 10,000 nits in current lab samples. They're faster. They last longer. Samsung has demonstrated MicroLED panels operating at full brightness with less than 10% luminance degradation after 100,000 hours of continuous use. That's not a panel lifetime. That's a generational asset.
The problem has always been manufacturing. Building a 4K display requires placing roughly 24.9 million individual micro-LEDs — red, green, and blue — onto a substrate with placement tolerances in the single-digit micrometer range. A single misplaced or dead pixel is visible. The mass transfer process that picks and places these chips from their growth wafers to the display backplane has historically had defect rates that made commercial production economically suicidal.
That's changing. Jade Okonkwo, principal process engineer at TSMC's advanced packaging division in Hsinchu, told us in October 2026 that their third-generation fluidic self-assembly process — which suspends micro-LED chips in a liquid medium and uses electrostatic guidance to seat them into receptor sites — has achieved placement yields above 99.997% in controlled production runs on 8-inch substrates. At that yield rate, a 4K panel would have fewer than 750 defective subpixels before redundancy repair. Their redundancy architecture — where each receptor site has a backup LED underneath it — can correct most of those automatically. This is what's making the $28,000 Samsung panel possible, and it's what will eventually make a $2,800 one possible.
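The subpixel arithmetic behind those yield claims is worth making explicit, since it shows how unforgiving the tolerances are:

```python
# Subpixel count for a 4K panel and the expected pre-repair defect count
# at the quoted placement yield. Straightforward arithmetic; the only
# assumption is a standard RGB layout of three subpixels per pixel.
width, height, subpixels_per_pixel = 3840, 2160, 3

total_subpixels = width * height * subpixels_per_pixel   # 24,883,200 (~24.9M)

placement_yield = 0.99997                 # 99.997%, per the figure quoted above
expected_defects = total_subpixels * (1 - placement_yield)   # ~746 subpixels
```

Roughly 746 misplaced or dead subpixels per panel would be catastrophic for a display on its own, which is why the per-site redundancy architecture, not raw placement yield alone, is what makes the panel sellable.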
The Numbers That Actually Tell the Story
It helps to look at where investment and production capacity are actually going, rather than where press releases say they're going. Global MicroLED panel production capacity sat at approximately $1.2 billion in annualized output at the start of 2026, per display industry analyst firm DSCC. That's up from essentially zero commercial production in 2023. OLED, by comparison, represents roughly $42 billion in annualized production — mostly driven by Samsung Display and LG Display, with BOE closing the gap aggressively in China.
| Technology | Peak Brightness (nits) | Typical Panel Lifespan | 2026 Cost per sq. inch (pro grade) | Burn-in Risk |
|---|---|---|---|---|
| WOLED (LG Display) | ~1,000 | 30,000–50,000 hrs | ~$18 | Moderate |
| Tandem OLED (Apple / Samsung) | ~1,600 | 50,000–70,000 hrs | ~$32 | Low-Moderate |
| QD-OLED (Samsung Display) | ~2,000 | 40,000–60,000 hrs | ~$27 | Low-Moderate |
| MicroLED (Samsung, pro) | ~10,000+ | 100,000+ hrs | ~$390 | None |
| Mini-LED LCD (Apple, ASUS) | ~4,000 | 60,000–80,000 hrs | ~$9 | None |
The cost gap between MicroLED and every other option is still staggering. But the trajectory matters more than the snapshot. MicroLED production costs have dropped approximately 67% since 2023, according to DSCC's mid-year 2026 report. If that rate of decline continues — which requires assumptions about yield improvement that aren't guaranteed — consumer-grade MicroLED monitors could reach price parity with premium OLED by 2029 or 2030.
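A quick sanity check on that trajectory: if the 67% drop since 2023 were spread evenly, the implied decline works out to roughly 31% per year. Whether that rate holds is exactly the yield assumption the paragraph flags; a constant-rate extrapolation is a simplification of DSCC's yield-driven model, not a substitute for it:

```python
# Implied constant annual rate behind a 67% cost drop over 2023-2026.
# Assumes a smooth geometric decline, which real yield curves rarely follow.
total_drop = 0.67
years = 3

annual_factor = (1 - total_drop) ** (1 / years)   # ~0.69x of prior-year cost
annual_decline = 1 - annual_factor                # ~31% cheaper each year
```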
The Skeptics Aren't Wrong to Push Back
It's worth being honest about how many times this industry has cried "MicroLED is almost here." The technology has been in development since the foundational micro-LED work of Hongxing Jiang and Jingyu Lin's group in the early 2000s, and every few years a new wave of announcements promises imminent mainstream arrival. Apple acquired MicroLED startup LuxVue back in 2014. Twelve years later, there's still no Apple product with a MicroLED display. That's not a minor delay. That's a decade-plus of engineering difficulty that the industry tends to paper over with optimistic yield projections.
Marcus Veld, senior display analyst at Omdia's semiconductor practice in London, is more blunt about it than most. He argues that the color uniformity problem — getting red, green, and blue micro-LEDs to hit identical brightness and color points when they're physically different semiconductor materials growing on different substrates — remains genuinely unsolved at the tolerances needed for professional color work. "You can get a MicroLED panel bright," he told us. "Getting it colorimetrically consistent across the whole panel, at all brightness levels, across temperature ranges — that's a different problem entirely, and it doesn't get talked about enough." His concern isn't that MicroLED won't arrive. It's that it'll arrive as a brightness-first, color-accuracy-second technology that captures gaming and commercial signage before it's truly ready for the color-critical workflows it's being marketed toward.
Similar to when plasma displays dominated the high-end TV market in the early 2000s — genuinely superior contrast and color at the time, loved by videophiles, eventually obliterated by LCD's cost curve — there's a real scenario where OLED eats MicroLED's intended market before MicroLED gets cheap enough to compete. QD-OLED in particular, which combines quantum dot color conversion with blue OLED emitters, has been closing the brightness gap faster than expected. Samsung Display's latest QD-OLED panels hit 2,000 nits peak in HDR mode. That doesn't match MicroLED, but for most real-world use cases — including professional video work under controlled lighting — it may be close enough.
What This Means for Hardware Procurement and IT Decisions Right Now
If you're specifying displays for a new facility, creative studio, or enterprise deployment in late 2026, the practical calculus is clearer than the hype suggests. OLED — specifically QD-OLED or tandem OLED — is the right choice for most knowledge workers and creative professionals today. The color accuracy is there, the response time is there, and the price, while still premium, has normalized. A 32-inch QD-OLED monitor from Samsung or ASUS's ProArt line runs $800–$1,200 at current street pricing. That's not cheap, but it's within reach for professional workstations.
For deployments where burn-in is a legitimate operational concern — control rooms, dispatch centers, kiosks, digital signage, surgical imaging displays — mini-LED LCD remains the pragmatic choice for most budgets, with MicroLED only justifiable for the highest-value, longest-lifecycle installations where that $28,000 panel cost is amortized over a decade of zero-maintenance operation.
- Specify display use-case duty cycles before committing to OLED for any static-UI-heavy workflow
- For any color-critical purchase, verify actual Delta-E and color volume specs against ICC profile standards, not manufacturer peak claims
NVIDIA's professional GPU drivers, updated in their R565 release series, now include display metadata profiles specifically tuned for QD-OLED color gamut mapping — a small but meaningful signal that the ecosystem around these panels is maturing beyond the panels themselves. When driver teams start optimizing for a panel technology, that's usually a reliable indicator of where the professional market is actually heading.
The Question Nobody Is Asking Loudly Enough
Here's an open hypothesis worth tracking: the actual winner of the OLED-vs-MicroLED competition may not be either technology in its current form, but a hybrid architecture. Samsung Display's internal roadmaps — fragments of which surfaced in a Korean securities filing in August 2026 — reference something they're calling MicroLED-on-OLED, a structure where a sparse array of MicroLED chips handles local peak brightness while an OLED layer manages the full-resolution color image. It's early-stage, and the manufacturing complexity is almost comedically high. But it addresses the core weaknesses of both technologies simultaneously: OLED's brightness ceiling and MicroLED's color uniformity problem at high resolutions.
Whether that architecture makes it out of Samsung's R&D labs into a shipping product before 2030 is anyone's guess. But the fact that engineers are thinking in hybrid terms — rather than doubling down on one technology defeating the other — suggests the real answer to "who wins" might be "neither, exactly." Watch for what Samsung Display files in patent applications over the next 18 months. That's where the actual direction will be visible long before any product announcement.
The Death of the Password Is Taking Longer Than Expected
A Breach That Shouldn't Have Happened in 2026
In March of this year, a mid-sized U.S. healthcare network disclosed a breach affecting 2.3 million patient records. The root cause, buried in paragraph nine of their SEC filing: credential stuffing. An attacker had used a list of reused passwords from an older, unrelated leak to walk straight through the front door. No zero-day. No sophisticated malware. Just a list and a script. The network had been warned three separate times by their insurer to implement multi-factor authentication across all external-facing systems. They hadn't. And in a world where passkeys and hardware tokens have been commercially viable for years, that's genuinely hard to explain.
We've been announcing the death of the password since at least 2004, when Bill Gates predicted its demise at RSA Conference. We're still waiting. But something has shifted in 2025 and into 2026 — not a sudden breakthrough, but an accumulation of pressure from regulators, insurers, and a slowly maturing ecosystem of alternatives that's finally becoming usable enough for real deployments. The question isn't whether passwords will go away. It's whether the transition happens before the next generation of attacks makes the cost of delay unbearable.
Why Passwords Have Survived This Long
The persistence of passwords isn't irrational. It's actually a story about switching costs and backward compatibility — similar to the way the QWERTY keyboard layout outlasted every ergonomic competitor not because it was better, but because the infrastructure built around it was too embedded to replace cheaply. Passwords are supported by every browser, every OS, every authentication library ever written. They require no hardware. They work offline. They're transferable between devices without provisioning. For a small business running on-premise software from 2014, asking them to move to FIDO2-based passkeys isn't a simple upgrade; it's potentially a full application re-architecture.
Dr. Annika Holm, a principal researcher at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), has studied credential-based attack patterns for the better part of a decade. She puts it bluntly: "The threat model for passwords isn't that they're cryptographically weak — it's that humans are terrible at managing secrets at scale. The protocol is fine. The implementation in human brains is the vulnerability."
"The protocol is fine. The implementation in human brains is the vulnerability." — Dr. Annika Holm, principal researcher, MIT CSAIL
That framing matters because it explains why technical fixes alone haven't worked. Forced complexity rules — 12 characters, one uppercase, one symbol — produced passwords like Password1!, which scores well on entropy metrics and terribly on actual security. NIST's Special Publication 800-63B, revised most recently in 2024, finally dropped mandatory complexity rules and special-character requirements in favor of length and breach-list screening. It took nearly two decades of evidence to move institutional guidance in that direction.
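In code, the post-2024 NIST posture is strikingly simple compared to the composition-rule era: screen by length and against a breach corpus, and require nothing else. The tiny blocklist below stands in for a real corpus such as the Have I Been Pwned dataset:

```python
# A minimal password check in the spirit of NIST SP 800-63B: length plus
# breach screening, no forced symbols or uppercase. Illustrative only;
# production systems screen against corpora of billions of leaked values.
BREACHED = {"password1!", "qwerty123", "letmein2024"}

def acceptable(password: str, min_length: int = 8) -> bool:
    if len(password) < min_length:
        return False                  # too short, regardless of symbols
    if password.lower() in BREACHED:
        return False                  # known-breached, regardless of complexity
    return True

acceptable("Password1!")                     # False: complex-looking but breached
acceptable("correct horse battery staple")   # True: length does the work
```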
The FIDO2 and Passkey Bet: What the Data Actually Shows
The FIDO2 standard — combining the W3C's WebAuthn specification with the FIDO Alliance's CTAP (Client to Authenticator Protocol) — is the closest thing the industry has to a credible password replacement architecture. Passkeys, the consumer-friendly implementation pushed hard by Apple, Google, and Microsoft since 2022, are built on top of it. The cryptographic premise is solid: a private key never leaves the device, authentication is challenge-response, and phishing becomes structurally impossible because the credential is bound to the origin domain.
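Origin binding is the load-bearing property, and it can be sketched in a few lines. A real authenticator signs challenges with a per-site asymmetric keypair; the toy below substitutes HMAC with a per-site secret so the sketch stays in the standard library, and every name in it is illustrative rather than drawn from any actual FIDO2 implementation:

```python
import hashlib
import hmac
import secrets

class ToyAuthenticator:
    """Models FIDO2's credential scoping: one key per origin, nothing shared."""

    def __init__(self):
        self._keys = {}   # origin -> per-site secret (a keypair in real FIDO2)

    def register(self, origin: str) -> None:
        self._keys[origin] = secrets.token_bytes(32)

    def sign(self, origin: str, challenge: bytes) -> bytes:
        # The signature covers the origin the browser is actually on, so a
        # credential registered for example.com cannot answer a challenge
        # relayed through a look-alike phishing domain.
        key = self._keys[origin]   # KeyError: no credential for that origin
        return hmac.new(key, origin.encode() + challenge, hashlib.sha256).digest()

auth = ToyAuthenticator()
auth.register("https://example.com")
challenge = secrets.token_bytes(16)

signature = auth.sign("https://example.com", challenge)   # legitimate login

try:
    auth.sign("https://examp1e.com", challenge)           # phishing look-alike
except KeyError:
    pass   # the credential simply does not exist for the fake origin
```

This is why phishing resistance is structural rather than behavioral: the user never gets the chance to hand the wrong site a usable credential.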
Adoption numbers have grown, but they're still modest in context. Google reported in early 2026 that over 800 million accounts have used a passkey at least once — impressive in absolute terms, but that figure includes one-time uses and doesn't reflect whether users have actually replaced their passwords or simply added passkeys as an additional option. Apple's implementation, deeply integrated into iCloud Keychain across iOS 17 and macOS Sequoia, has made passkeys nearly frictionless for users inside the Apple ecosystem. The cross-device story, however, remains messier.
| Authentication Method | Phishing Resistance | Cross-Platform Support | Recovery Complexity | Enterprise Deployment Cost |
|---|---|---|---|---|
| Password + SMS OTP | Low (SIM-swappable) | Universal | Low | $2–5/user/month |
| TOTP (e.g., Google Authenticator) | Medium (real-time phishable) | High | Medium | $3–8/user/month |
| FIDO2 Hardware Key (YubiKey) | Very High | Growing (USB-A/C, NFC) | High (key loss risk) | $25–60 per key + management |
| Passkeys (Platform) | Very High | Medium (ecosystem-dependent) | Medium (cloud sync) | Low ($0–2/user/month) |
| Biometric + Passkey Hybrid | Very High | Medium-High | Low-Medium | $5–12/user/month |
The Recovery Problem Nobody Wants to Solve
Here's the part of the passkey story that tends to get glossed over in product announcements: account recovery. If a passkey is stored on a device that's lost, stolen, or wiped, how does a user regain access? Most implementations punt to a fallback — which is usually a password, an email link, or an SMS code. That fallback becomes the actual weakest link. Attackers don't need to break FIDO2 cryptography if they can social-engineer a Tier 1 support agent into resetting an account via the legacy recovery path.
Ravi Subramaniam, director of identity architecture at Cloudflare's Zero Trust platform division, raised this exact issue at a closed-door session at Authenticate 2026 in October. We spoke with him afterward. "The cryptography in FIDO2 is essentially solved," he said. "What isn't solved is the human and organizational process that wraps around it. Most enterprises deploying passkeys today still have a password-based recovery mechanism sitting underneath, which means they haven't actually eliminated the password attack surface — they've just added a layer on top of it."
This isn't a theoretical concern. CVE-2024-38112, a Windows MSHTML zero-day patched in mid-2024, was used in several campaigns that specifically targeted account recovery flows rather than primary authentication mechanisms. The pattern is consistent: as primary auth hardens, attackers shift to softer targets in the same pipeline.
What Microsoft and Apple Are Getting Right — and Where They're Cutting Corners
Microsoft's push toward passwordless via its Entra ID platform (formerly Azure Active Directory) has been one of the more credible enterprise-scale deployments. Their internal data, shared at Ignite 2025, showed that employees using passwordless authentication experienced 67% fewer account compromise incidents compared to those using password plus TOTP. That's a meaningful number, and it tracks with what security researchers have measured independently. Microsoft has also invested in the "Temporary Access Pass" mechanism — a time-limited credential used for device enrollment that avoids the recovery-flow trap by expiring automatically.
Apple's approach is different: prioritize experience over configurability. Passkeys in iCloud Keychain sync end-to-end encrypted across Apple devices, which is genuinely elegant for consumers. But enterprise IT teams need granular control — specific device binding, audit trails, revocation — and Apple's model gives them relatively little of that. An enterprise deploying passkeys via Apple's ecosystem is, to some degree, trusting Apple's infrastructure as part of their identity chain. For regulated industries, that's a compliance conversation, not just a technical one.
The Skeptic's Case: Are We Just Trading One Monoculture for Another?
There's a harder critique worth taking seriously. Passwords, for all their flaws, are decentralized. Anyone can implement them. No single vendor controls the authentication experience. The passkey ecosystem, by contrast, is dominated by three platform providers — Apple, Google, and Microsoft — who control the credential stores, the sync infrastructure, and increasingly the recovery flows. If one of those providers has a significant breach or makes a policy decision that doesn't suit your organization, your options for recourse are limited.
Dr. James Calloway, a cryptography faculty member at Johns Hopkins' Information Security Institute, has written critically about this concentration risk. "We're in danger of solving the phishing problem while creating a systemic dependency problem," he told us. "When authentication infrastructure is controlled by a handful of hyperscalers, a sophisticated state-level attack on one of those providers doesn't compromise one organization — it potentially compromises millions simultaneously." It's a structural critique that doesn't get enough airtime, precisely because the companies most invested in passkey adoption are also the ones with the loudest platforms.
And there's the enterprise integration reality. Active Directory environments, legacy VPN infrastructure, and on-premise applications built against LDAP or RADIUS don't natively speak WebAuthn. Retrofit projects are expensive and slow. For organizations running hybrid environments — which, per Gartner's late 2025 infrastructure survey, still account for roughly 58% of enterprise deployments — a full passkey migration isn't a two-quarter project. It's a multi-year commitment with significant interim risk.
What IT Teams Should Actually Be Doing Right Now
For security engineers and IT architects reading this, the practical picture is less dramatic than the product marketing suggests — but clearer than the pessimists allow.
- Implement phishing-resistant MFA (FIDO2 hardware keys or platform passkeys) for any role with privileged access or access to sensitive data, immediately. SMS-based OTP for those accounts is no longer defensible from an insurance or regulatory standpoint.
- Audit your recovery flows before touching your primary authentication stack. The recovery path is where most modern credential attacks land — fixing the front door while leaving the back window open is worse than doing nothing, because it creates false confidence.
The broader transition will take longer than anyone's roadmap admits. But the cost calculus is shifting fast. Cyber insurance premiums for organizations without phishing-resistant MFA jumped an average of 34% in 2025 renewals, according to analysis from Marsh McLennan's cyber practice. Regulators in the EU, under NIS2 directives that came into full enforcement effect in early 2025, are actively fining organizations that can't demonstrate credential hygiene across critical systems. The economic pressure that technical arguments never quite managed to generate is finally arriving from the financial and regulatory side.
The open question heading into 2027 is whether the passkey ecosystem can solve the cross-platform synchronization and enterprise recovery problems before the current halfway-adopted state becomes its own kind of technical debt — organizations that have layered passkeys over passwords without fully replacing them, creating hybrid environments that are more complex to audit and no easier to defend. The architecture is sound. The execution, at scale, across the full messiness of real enterprise IT, is still being figured out in real time.