Generative AI at Work: What Actually Delivers in 2026
The Spreadsheet That Wrote Itself—And Why That's Only the Beginning
Earlier this year, a mid-sized logistics firm in Rotterdam watched its finance team cut monthly close from eleven days to four. The tool doing the heavy lifting wasn't some bespoke enterprise platform—it was Microsoft 365 Copilot, running on top of GPT-4o, pulling from SharePoint and reconciling ledger entries against live ERP data. That's not a marketing slide. That's a use case their CFO described in a public earnings call in Q2 2026. It caught our attention because it's the kind of specific, boring, operational win that tends to get lost beneath flashier AI demos.
The generative AI productivity space has matured considerably since the chaotic product launches of 2023 and 2024. We're past the phase where "AI assistant" meant a chat window bolted onto existing software. The tooling has gotten genuinely sophisticated—and genuinely complicated to evaluate. We spent several weeks talking to practitioners, reviewing benchmark data, and testing integrations across enterprise stacks to figure out what's actually working, what's oversold, and what the real cost looks like when you get past the free tier.
The Actual State of Enterprise AI Adoption in Late 2026
The numbers are striking, if you read them carefully. According to Gartner's Q3 2026 enterprise survey, 61% of organizations with more than 1,000 employees now have at least one generative AI tool deployed in a production workflow—up from 29% in the same survey two years prior. But here's the number that matters more: only 34% of those deployments had cleared a formal ROI review at the 12-month mark. Adoption is fast. Justification is harder.
OpenAI's enterprise tier for ChatGPT crossed $2.1 billion in annualized revenue as of September 2026, per figures reported by The Information. Microsoft, which embedded Copilot across its M365 suite, has sold Copilot licenses to over 85,000 organizations. But license sales don't tell you whether people are using the tools well. They often aren't.
"Most enterprises are still in what I'd call the 'tourist phase,'" said Dr. Priya Venkataraman, director of AI systems research at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL). "They've deployed something, their employees have tried it a few times, and now they're waiting for someone to tell them what to do next. The organizations getting real value are the ones that redesigned the workflow first—and bolted the AI on second."
"The organizations getting real value are the ones that redesigned the workflow first—and bolted the AI on second." — Dr. Priya Venkataraman, CSAIL
Which Tools Are Actually Winning—and at What Tasks
We compared the four most widely deployed generative AI productivity platforms across enterprise accounts this fall. The differences are significant, and they matter depending on your use case.
| Platform | Underlying Model | Best-fit Use Case | Context Window | Avg. Enterprise Seat Cost (Annual) |
|---|---|---|---|---|
| Microsoft 365 Copilot | GPT-4o (fine-tuned) | Document generation, email triage, Excel analysis | 128K tokens | $360/user |
| Google Workspace Duet AI | Gemini 1.5 Pro | Meeting summarization, Docs drafting, Sheets formulas | 1M tokens | $264/user |
| Anthropic Claude for Work | Claude 3.7 Sonnet | Long-document analysis, policy review, code review | 200K tokens | $300/user |
| Notion AI (Enterprise) | Mix (GPT-4o + proprietary) | Knowledge base management, project summaries | 32K tokens | $192/user |
Context window size isn't just a spec-sheet number. For legal teams reviewing contracts or compliance officers auditing policy documents, the ability to pass an entire 300-page document into a single prompt—which Gemini 1.5 Pro genuinely supports—changes what's possible. Marcus Webb, VP of enterprise architecture at Deloitte's AI practice, told us his team has moved several legal review workflows entirely to Claude 3.7 Sonnet because of its handling of long-form reasoning chains. "It doesn't lose the thread," he said. "Earlier models would contradict themselves between page one and page forty of a brief. This one mostly doesn't."
Where Developers Are Finding Real Gains (and Real Friction)
For engineering teams, the conversation has shifted from "should we use AI for code?" to "how do we keep it from making things worse?" GitHub Copilot, now on its fourth major iteration and integrated with VS Code and JetBrains IDEs, reports that developers using it merge pull requests roughly 26% faster on benchmarks involving boilerplate-heavy tasks. That number drops significantly for complex refactors or security-sensitive code paths—and that's where things get interesting.
Dr. James Okafor, senior security researcher at Carnegie Mellon's CyLab, has been tracking AI-generated code vulnerabilities since 2024. His team found that in a controlled study of 4,000 code completions generated by popular AI tools, roughly 18% introduced at least one weakness mappable to the CWE Top 25 list—Common Weakness Enumeration, the industry-standard catalog of dangerous software flaws. "The model doesn't know it's writing security-critical code unless you tell it explicitly," Okafor said. "And even then, it'll sometimes optimize for what looks correct rather than what is correct."
This is why several enterprises we spoke with have added a mandatory static analysis pass—tools like Semgrep or Snyk—as a gate before any AI-generated code reaches staging. It's an extra step, but it's the kind of process adaptation that makes the productivity gains stick.
The Hidden Cost Structure Nobody Talks About at the Demo
Here's the part that gets glossed over. Token costs, API call volumes, and model inference fees can erode the ROI case faster than most buyers anticipate. A legal team running 500 documents a month through a long-context model at $15 per million input tokens isn't paying pocket change—they're running a real compute bill. And that's before you factor in the engineering time to build and maintain the retrieval-augmented generation (RAG) pipelines that make most enterprise deployments actually useful.
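The back-of-the-envelope math is worth doing before procurement signs anything. A minimal sketch of the input-side bill for the scenario above, where the 150K-tokens-per-document figure is our illustrative assumption for a long contract, not a vendor number:

```python
def monthly_input_token_cost(docs: int, tokens_per_doc: int,
                             usd_per_million_tokens: float) -> float:
    """Input-side inference cost for a document-review workload."""
    return docs * tokens_per_doc * usd_per_million_tokens / 1_000_000

# The article's scenario: 500 documents/month at $15 per million input tokens,
# assuming ~150K input tokens per document (hypothetical figure).
cost = monthly_input_token_cost(500, 150_000, 15.0)
print(f"${cost:,.2f}/month")  # $1,125.00/month, before output tokens or retries
```

Note that this covers input tokens only; output tokens, retries, and re-ranking calls in a RAG pipeline typically multiply the real bill.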
RAG—which grounds model outputs in proprietary document stores rather than relying on static training data—is now considered table stakes for serious enterprise deployments. But implementing it properly requires decisions about vector database selection (Pinecone, Weaviate, pgvector are common choices in 2026), chunking strategies, embedding model selection, and re-ranking logic. None of that is plug-and-play. A mid-sized company without dedicated ML infrastructure often spends between $80,000 and $200,000 in engineering costs before a RAG pipeline is production-ready.
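None of those pipeline decisions is exotic individually. As one hedged example, the simplest chunking strategy is a fixed-size sliding window with overlap; the sizes below are illustrative defaults, not recommendations:

```python
def chunk_text(text: str, chunk_size: int = 800, overlap: int = 200) -> list[str]:
    """Fixed-size sliding-window chunking with overlap between neighbors.

    The simplest strategy among those a RAG pipeline must choose from;
    production systems often split on semantic boundaries (headings,
    paragraphs) instead of raw character offsets.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break
    return chunks
```

The overlap exists so that a sentence falling across a chunk boundary still appears intact in at least one chunk that the retriever can surface.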
The Skeptics Aren't Wrong—They're Just Asking Better Questions Now
Not everyone is convinced the productivity math adds up. A working paper circulated this fall by researchers at the University of Chicago Booth School of Business found that self-reported productivity gains from AI tools were overstated by an average of 40% when compared against measured output quality—controlling for task type. The researchers argued that users consistently overestimate how good AI-generated output is, partly because evaluation is itself effortful. You have to read the thing carefully to catch what's wrong with it. Many people don't.
There's also a quieter concern about task displacement vs. skill atrophy. Junior analysts who used to build financial models from scratch are increasingly editing AI-generated ones. That's faster in the short term. But several hiring managers we spoke to off the record said they're seeing candidates who can't explain the models they're presenting—because they didn't build them. It's an early signal, not a crisis. But it rhymes with what happened when calculators entered accounting education in the 1970s: a decade later, there was genuine debate about whether students were losing numerical intuition. The answer then was curriculum redesign. The answer now probably involves the same kind of deliberate intervention.
What This Means If You're Running an IT or Engineering Team Right Now
The practical calculus for IT leaders in late 2026 comes down to a few decisions that actually matter. First: don't let procurement drive deployment. The tool that's cheapest per seat is rarely the tool that fits your workflow. We've seen enterprises sink $400K into Microsoft Copilot licenses only to find their document infrastructure wasn't clean enough for the integrations to work—SharePoint full of orphaned files, permissions chaos, no taxonomy. Copilot is only as useful as your data hygiene.
Second: treat model version changes as you'd treat a dependency upgrade. OpenAI and Anthropic both update their production models without always announcing breaking changes in output behavior. If your workflow depends on consistent output structure—for downstream parsing, for example—you need evals running continuously. Promptfoo and LangSmith are the tools most engineering teams are using for this in 2026. Set them up before you need them.
- Audit your document infrastructure before deploying any RAG-dependent tool—garbage in, garbage out applies harder here than almost anywhere else in software.
- Run continuous output evals against a fixed test set; model updates from vendors are frequent enough in 2026 to create silent regressions in production workflows.
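A continuous eval of the kind the second bullet describes can start very small. A hedged sketch, assuming the workflow expects JSON output with a fixed set of keys (the key names in the usage example are hypothetical):

```python
import json

def structure_eval(outputs: dict[str, str], required_keys: set[str]) -> dict[str, bool]:
    """Run a fixed test set of model outputs through the same structural
    checks a downstream parser relies on. A silent vendor model update
    that drops or renames a field fails here instead of in production."""
    results: dict[str, bool] = {}
    for case_id, raw in outputs.items():
        try:
            parsed = json.loads(raw)
        except json.JSONDecodeError:
            results[case_id] = False
            continue
        results[case_id] = isinstance(parsed, dict) and required_keys <= parsed.keys()
    return results
```

Wired into CI or a scheduler, a check like this is the minimal version of what Promptfoo and LangSmith productize; the point is to have some fixed test set running before the vendor ships a surprise.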
Third, and maybe most important: the organizations seeing genuine, measurable productivity gains right now aren't the ones with the most AI tools. They're the ones with the fewest—deployed precisely, in workflows where the failure modes are understood and monitored. Breadth of deployment is a vanity metric. Depth of integration is where the value actually lives.
The Open Question That Will Define the Next Eighteen Months
Similar to how the enterprise software wave of the late 1990s sorted itself into a handful of dominant platforms—SAP, Oracle, Salesforce—while hundreds of point solutions withered, the generative AI productivity space is now entering its consolidation phase. The question isn't whether AI tools will be central to enterprise work; they already are. The question is whether the productivity gains compound over time or plateau once the easy automation targets are exhausted.
There's a reasonable hypothesis—one we heard from multiple practitioners—that the next real leap requires AI systems that can take multi-step actions autonomously, not just generate outputs for humans to act on. Agentic frameworks like OpenAI's Operator and Anthropic's computer-use API (currently in limited enterprise beta) point in that direction. But autonomous action introduces failure modes that single-turn generation doesn't have: cascading errors, unintended side effects, and accountability gaps that existing governance frameworks weren't designed to handle. Watch whether enterprise legal teams start requiring explainability logs for agentic workflows in 2027. That's the signal that the industry has moved from experimenting to operating—and the regulatory pressure that follows will reshape the cost structure all over again.
Quantum Computing Is Coming for Your Encryption Keys
The Clock Started in August 2024, and Most Teams Missed It
On August 13, 2024, the National Institute of Standards and Technology quietly did something that will reshape every TLS handshake, every VPN tunnel, and every encrypted database backup on the planet. NIST finalized its first three post-quantum cryptography standards: FIPS 203 (ML-KEM, based on CRYSTALS-Kyber), FIPS 204 (ML-DSA, based on CRYSTALS-Dilithium), and FIPS 205 (SLH-DSA, based on SPHINCS+). Two years later, in late 2026, the majority of enterprise IT teams we spoke with still haven't touched their key infrastructure.
That's not laziness. It's a rational—if increasingly dangerous—bet on timeline. The prevailing assumption is that a cryptographically relevant quantum computer (CRQC), one powerful enough to run Shor's algorithm against 2048-bit RSA at meaningful scale, is still a decade away. IBM's quantum roadmap, which the company has published annually since 2020, projected a 100,000-qubit fault-tolerant system by roughly 2033. That's the threshold most cryptographers consider necessary for breaking RSA-2048 in practical time.
But "a decade away" is not the same as "not your problem yet." And the gap between those two statements is where the real risk lives.
Harvest Now, Decrypt Later Is Already Happening
State-sponsored threat actors don't need to break RSA today. They just need to collect ciphertext now and wait. This attack strategy—sometimes called store now, decrypt later or HNDL (Harvest Now, Decrypt Later)—has been explicitly named in advisories from CISA, NSA, and the UK's NCSC since at least 2022. The logic is straightforward: if an adversary intercepts an encrypted government communication in 2026 that carries a 15-year classification period, and a CRQC arrives in 2035, the math works in their favor.
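The advisory logic is often formalized as Mosca's inequality: worry when the data's secrecy lifetime plus your migration time exceeds the time until a CRQC arrives. A minimal sketch, with the 2-year migration estimate as our own assumption:

```python
def hndl_exposed(shelf_life_years: float, migration_years: float,
                 crqc_eta_years: float) -> bool:
    """Mosca-style inequality: harvested ciphertext is a live risk when the
    data's secrecy lifetime plus the time needed to migrate exceeds the
    time until a cryptographically relevant quantum computer arrives."""
    return shelf_life_years + migration_years > crqc_eta_years

# The scenario above: a 15-year classification period, a CRQC in ~9 years
# (2035 seen from 2026), and an assumed 2-year migration effort.
print(hndl_exposed(15, 2, 9))  # True: the adversary's math works
print(hndl_exposed(1, 1, 9))   # False: short-lived secrets can wait
```

The uncomfortable property of this inequality is that two of its three terms (shelf life and migration time) are knowable today, so "the CRQC date is uncertain" is not a reason to skip the calculation.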
Dr. Nadia Osei, a cryptographic systems researcher at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), put it bluntly when we spoke with her in October 2026. "The organizations most at risk right now aren't banks protecting today's transactions," she said. "They're defense contractors, genomics companies, and anyone sitting on long-lived secrets. The window isn't the attack. The window is the data's shelf life."
"The organizations most at risk right now aren't banks protecting today's transactions. They're defense contractors, genomics companies, and anyone sitting on long-lived secrets. The window isn't the attack. The window is the data's shelf life." — Dr. Nadia Osei, CSAIL, MIT
We found this point consistently underweighted in enterprise risk assessments we reviewed. Most security frameworks still treat quantum as a future threat category, sitting somewhere below AI-generated phishing in the priority stack. That ordering may be reasonable for consumer-facing SaaS products. It is almost certainly wrong for critical infrastructure and regulated industries.
Where the Qubit Count Actually Stands in Late 2026
IBM currently fields the largest publicly verified superconducting quantum systems, with its Heron r2 processor architecture delivering 156 physical qubits per chip in a modular configuration. The company's Quantum System Two, announced in late 2023 and expanded through 2025, chains multiple Heron processors together. But physical qubits and logical qubits are not the same thing. Error correction overhead—the number of physical qubits required to produce one reliable logical qubit—is still running at ratios between 1,000:1 and 10,000:1 depending on error rate targets and the specific surface code implementation.
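The overhead arithmetic is sobering even at the optimistic end of that range. A sketch using roughly 4,000 logical qubits as an illustrative figure for running Shor's algorithm against RSA-2048 (published resource estimates vary widely, so treat the logical-qubit count as an assumption):

```python
def physical_qubits_required(logical_qubits: int, overhead_ratio: int) -> int:
    """Error-correction overhead: each reliable logical qubit consumes
    many physical qubits under current surface-code schemes."""
    return logical_qubits * overhead_ratio

# ~4,000 logical qubits is an illustrative assumption, not a sourced figure.
low = physical_qubits_required(4_000, 1_000)     # 4,000,000 physical qubits
high = physical_qubits_required(4_000, 10_000)   # 40,000,000 physical qubits
```

Set against the roughly four-digit physical qubit counts in the table below, the multiplication makes clear why "nobody is close to a CRQC" is the honest read.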
Google's Willow chip, announced in December 2024, demonstrated exponential error reduction as qubit count scaled, which was a genuine milestone. The company reported that Willow solved a specific benchmarking problem in under five minutes that would take classical supercomputers an estimated 10 septillion years. Impressive headline. Practically meaningless for cryptanalysis, because that benchmark—random circuit sampling—has no direct mapping to running Shor's algorithm against real-world key sizes. Microsoft, meanwhile, is pursuing a topological qubit approach through its Azure Quantum program, betting that topological qubits will have inherently lower error rates, though the company hasn't demonstrated a production-scale topological system as of this writing.
| Company | Architecture | Reported Physical Qubits (2026) | Estimated Years to CRQC | PQC Migration Support |
|---|---|---|---|---|
| IBM | Superconducting (Heron r2) | ~1,000+ (modular) | 8–12 years | Yes — Qiskit PQC libraries, FIPS 203/204 integration |
| Google | Superconducting (Willow) | 105 | 10–15 years | Partial — BoringSSL PQC branch, Chrome hybrid TLS |
| Microsoft | Topological (Azure Quantum) | Not publicly disclosed | Unknown / speculative | Yes — Azure Key Vault PQC preview, FIPS 205 support |
| IonQ | Trapped Ion | 35 (algorithmic qubits) | 12–18 years | Limited — third-party integrations only |
The honest read of this table: nobody is close to a CRQC. But the migration problem doesn't require one to be urgent. Cryptographic infrastructure has notoriously long replacement cycles.
The Migration Problem Is More Painful Than Anyone Admits
Here's the part vendors don't lead with. Post-quantum key material and signatures are significantly larger than their classical equivalents. A public key under RSA-2048 is 256 bytes. Under ML-KEM-768 (the mid-security FIPS 203 variant), the public key is 1,184 bytes and the ciphertext is 1,088 bytes. For most HTTPS traffic, that size increase is manageable. For protocols with strict packet size constraints—IoT sensors running over the Constrained Application Protocol (CoAP), certain ICS/SCADA communication layers, or embedded firmware signing in hardware with limited flash storage—it's a genuine compatibility wall.
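The size gap is easy to quantify from the FIPS 203 parameters quoted above. A quick sketch of the per-key-establishment wire overhead:

```python
# Sizes in bytes: classical RSA-2048 versus ML-KEM-768 (FIPS 203).
RSA2048_PUBLIC_KEY = 256
MLKEM768_PUBLIC_KEY = 1184
MLKEM768_CIPHERTEXT = 1088

# One ML-KEM key establishment sends a public key one way and a
# ciphertext back, so the round-trip payload versus a single RSA
# public key works out to:
pq_bytes = MLKEM768_PUBLIC_KEY + MLKEM768_CIPHERTEXT  # 2272 bytes
growth = pq_bytes / RSA2048_PUBLIC_KEY                # ~8.9x larger
```

On a fast, clean link, 2KB of extra handshake is noise. On a lossy link, or inside a protocol with small fixed packet sizes, it forces fragmentation, which is where the latency regressions described below come from.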
Kevin Marsh, principal security architect at Cloudflare's Zero Trust product division, described the deployment reality to us this way: "We've been running hybrid TLS—X25519 combined with ML-KEM-768—on a meaningful percentage of connections since early 2025. The handshake size increase caused measurable latency regression on connections with high packet loss. We tuned it down to acceptable. But 'acceptable' took three months of engineering time." Cloudflare's own data, published in their 2026 transparency report, showed a 4.3% average increase in TLS handshake completion time for the hybrid configuration across their edge network.
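The hybrid construction Marsh describes follows a well-understood shape: run both key exchanges, concatenate the two shared secrets, and feed them through a key derivation function so the session key is secure as long as either component resists attack. A sketch of that combiner step only, using an HKDF-like extract-and-expand; the labels and zero salt are illustrative, not the actual TLS 1.3 key schedule:

```python
import hashlib
import hmac

def hybrid_shared_secret(classical_ss: bytes, pq_ss: bytes,
                         info: bytes = b"hybrid-kdf-sketch") -> bytes:
    """Combine an X25519-style shared secret with an ML-KEM shared secret.

    Concatenation followed by a KDF means an attacker must break BOTH
    inputs to recover the output. Constants here are illustrative.
    """
    ikm = classical_ss + pq_ss
    prk = hmac.new(b"\x00" * 32, ikm, hashlib.sha256).digest()     # extract
    return hmac.new(prk, info + b"\x01", hashlib.sha256).digest()  # expand, 32 bytes
```

Real deployments slot this combination into the existing TLS key schedule rather than a standalone KDF, but the security argument is the same: the quantum-vulnerable half and the new, less battle-tested half back each other up.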
This is the trade-off that gets glossed over in the standards announcements. ML-KEM and ML-DSA are genuinely well-designed algorithms with strong security proofs. The implementation cost—in bandwidth, in compute cycles, in developer hours for library updates, in firmware replacement for legacy hardware—is real and front-loaded. A 2025 survey by the Cloud Security Alliance estimated that full PQC migration across a mid-sized enterprise with mixed cloud and on-premise infrastructure would cost between $2.1M and $8.7M depending on legacy system density.
The Skeptics Have a Point, But Only Part of One
Not everyone buys the urgency framing. Dr. Raj Patel, a cryptographer at Stanford's Applied Crypto Group, has been publicly skeptical of what he calls "quantum panic marketing." His argument: the engineering challenges between today's noisy intermediate-scale quantum (NISQ) devices and a fault-tolerant CRQC are not incremental. They're categorical. "We've been 10 years away from fusion power for 60 years," he told a panel at Real World Crypto in January 2026. "Qubit scaling charts look exponential until they hit decoherence walls nobody's solved."
There's a version of this critique that's correct and useful. Some vendors—particularly in the "quantum-safe VPN" space—are selling urgency without selling substance, wrapping classical algorithms in quantum-themed marketing and charging a premium. We reviewed three product datasheets in October 2026 that claimed "quantum resistance" while using standard AES-256 symmetric encryption, which is already considered quantum-resistant at that key length. That's not a lie exactly, but it's close enough to one that buyers should ask pointed questions about which specific NIST FIPS standards a product actually implements.
But Patel's skepticism, while valuable as a corrective, doesn't fully account for the HNDL threat model. You don't need to believe a CRQC arrives in 2030 to start migrating. You need to believe it might arrive before your sensitive data expires. For a lot of organizations, that calculation already favors action.
What IT Teams and Developers Should Actually Do in 2026
The practical starting point isn't ripping out RSA everywhere. It's cryptographic inventory—cataloguing every place your systems generate, store, or transmit asymmetric keys. This sounds tedious because it is. Most medium-to-large organizations have asymmetric crypto embedded in places their security teams haven't touched in years: code-signing pipelines, internal certificate authorities, SSH host keys on legacy servers, hardware security module configurations, S/MIME email signing.
- Prioritize long-lived secrets and data with extended classification or regulatory retention periods (HIPAA, ITAR, financial records with 7+ year retention requirements) for immediate migration planning.
- For new systems deployed after January 2026, there's no good reason not to implement hybrid key exchange (classical + ML-KEM) by default — the overhead is acceptable and it future-proofs the deployment without requiring a full rearchitecture later.
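The inventory step can start as a crude filesystem sweep for PEM-encoded key material. A hedged sketch (the marker list is illustrative; real inventories also cover HSMs, certificate authorities, package manifests, and firmware signing keys):

```python
import os

# Headers that indicate asymmetric key material or certificates on disk.
PEM_MARKERS = (
    b"BEGIN RSA PRIVATE KEY",
    b"BEGIN EC PRIVATE KEY",
    b"BEGIN PRIVATE KEY",
    b"BEGIN OPENSSH PRIVATE KEY",
    b"BEGIN CERTIFICATE",
)

def find_key_material(root: str) -> list[str]:
    """Walk a directory tree and flag files containing PEM headers.
    A crude first pass at cryptographic inventory, nothing more."""
    hits = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as f:
                    head = f.read(65536)  # PEM headers sit near the top
            except OSError:
                continue  # unreadable files are logged elsewhere in practice
            if any(marker in head for marker in PEM_MARKERS):
                hits.append(path)
    return hits
```

Even this naive sweep tends to surprise teams; SSH host keys and expired internal CA certificates turn up in places nobody has an owner for, which is exactly the point of the exercise.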
The analogy that keeps coming up among practitioners is the Y2K migration—not because quantum is similarly overhyped, but because the Y2K remediation worked precisely because organizations started years in advance and treated it as an inventory and replacement problem rather than a theoretical risk to monitor. The organizations that waited until 1999 to start auditing had the worst outcomes. The difference is that Y2K had a hard deadline visible from space. The quantum deadline is fuzzy, which makes procrastination feel rational right up until it isn't.
The Standard Exists — The Tooling Is Catching Up Fast
The good news, if you're looking for any: the open-source ecosystem has moved faster than expected. OpenSSL 3.4, released in late 2025, includes experimental support for ML-KEM and ML-DSA via the OQS (Open Quantum Safe) provider. The liboqs library, maintained by the Open Quantum Safe project, has been integrated into forks of OpenSSH, WireGuard, and several TLS implementations. NGINX added PQC cipher suite support in version 1.27.x. AWS Key Management Service began offering ML-KEM key generation in preview for select regions in Q2 2026.
The harder problem is the long tail. Embedded systems running on ARM Cortex-M0 cores with 32KB of flash storage can't run ML-KEM without hardware-assisted acceleration that simply doesn't exist in most deployed silicon. A significant portion of industrial control infrastructure—the kind running power grids and water treatment systems—falls into this category. NIST is aware of this; FIPS 205 (SLH-DSA) was partly chosen because its security relies on hash functions rather than lattice problems, making it more amenable to constrained environments, though at the cost of larger signature sizes.
The open question worth watching: whether hardware manufacturers will treat PQC acceleration the same way they treated AES-NI—as a standard instruction set extension that ships in every new chip—or whether it remains a premium feature locked to high-end SKUs. Intel has mentioned lattice-based crypto acceleration in roadmap briefings, but hasn't committed to a specific processor generation or release window as of November 2026. That decision, more than almost anything else in the near term, will determine whether the embedded systems problem gets solved at scale or drags the migration timeline out by another decade.
NIST CSF 2.0 and the Compliance Crunch Hitting IT Teams
A $4.7 Billion Wake-Up Call Nobody Planned For
Earlier this year, a mid-sized healthcare SaaS provider operating out of Austin discovered it had been operating under a misaligned compliance posture for nearly 18 months. Its HIPAA technical safeguards were mapped to NIST CSF 1.1 controls — not the updated CSF 2.0 framework that NIST finalized in February 2024 and that federal contractors were effectively required to align with by Q1 2026. The gap cost them a federal contract renewal worth roughly $23 million. The story isn't unique. It's becoming a pattern.
According to a mid-2026 audit readiness survey conducted by the Ponemon Institute, 61% of organizations that handle federal data have not completed a full control mapping exercise against NIST CSF 2.0's new "Govern" function — the most structurally significant addition to the framework since its original release in 2014. Meanwhile, the aggregate cost of compliance-related failures (distinct from the breaches themselves) reached $4.7 billion industry-wide in reported regulatory penalties and contract losses through H1 2026. That number comes from aggregated SEC Form 8-K disclosures and isn't an estimate — it's what companies actually reported losing.
We've been tracking this compliance transition for the better part of two years. What we found is that the frameworks themselves aren't the problem. The problem is that most organizations treat framework updates the way they treat software patches: they schedule them, deprioritize them, and then deal with the fallout when something breaks.
What Actually Changed in CSF 2.0, ISO 27001:2022, and FedRAMP Rev 5
Three frameworks updated in close succession — NIST CSF 2.0 (February 2024), ISO/IEC 27001:2022 (which organizations had until October 2025 to transition to), and FedRAMP Revision 5 (formally adopted for new authorizations in March 2026) — created a simultaneous compliance pressure that few organizations had staffed for.
NIST CSF 2.0's headline change is the addition of the Govern function, which sits above the original five functions (Identify, Protect, Detect, Respond, Recover) and explicitly addresses organizational roles, risk management strategy, and supply chain security policy. This isn't cosmetic. The Govern function maps directly to requirements under Executive Order 14028, which mandated zero-trust architecture adoption across federal agencies. Companies selling to those agencies now have to demonstrate Govern-function compliance as a condition of contract eligibility.
ISO 27001:2022 restructured its Annex A controls from 114 down to 93, merging redundant controls but adding 11 new ones — including controls explicitly addressing threat intelligence (Annex A 5.7), information security for cloud services (Annex A 5.23), and secure coding practices (Annex A 8.28). The last one is particularly relevant for software vendors. Annex A 8.28 now requires documented secure development lifecycle processes that align with standards like OWASP ASVS 4.0 and, where applicable, NIST SP 800-218 (the Secure Software Development Framework).
FedRAMP Rev 5 brought its baseline controls in line with NIST SP 800-53 Revision 5, which had been pending since September 2020. The key operational change: continuous monitoring requirements now mandate automated evidence collection at defined intervals rather than point-in-time assessments. Organizations using Microsoft Azure Government or AWS GovCloud are largely covered by their cloud service providers' existing authorizations, but organizations running hybrid on-prem workloads — which is still a significant portion of defense-adjacent contractors — are carrying the full burden themselves.
The "Govern" Function Is Harder Than It Looks
Compliance teams that we spoke with consistently flagged the Govern function as the piece most likely to generate audit findings in the next 18 months. It's not that the requirements are technically arcane — they're not. It's that they require documentation and accountability structures that historically lived outside the security team's remit.
"The Govern function essentially asks organizations to prove that security decisions are made deliberately, by the right people, with documented rationale. That's a governance question, not a technical one. Most security teams are well-equipped to configure a firewall. They're not always equipped to produce a board-level risk appetite statement that maps to specific control selections."
— Dr. Priya Mehta, Senior Research Fellow, Carnegie Mellon University's CyLab
Dr. Mehta has been studying organizational compliance implementation gaps since 2019. Her current research focuses on the delta between documented policy and operational control effectiveness — what the field calls "compliance theater" — and her preliminary 2026 data suggests that organizations with fewer than 500 employees show a 73% rate of incomplete Govern-function documentation despite having otherwise mature technical controls.
The implication is uncomfortable: a company can have excellent endpoint detection, solid patch management, and well-configured SIEM tooling, and still fail a CSF 2.0 assessment because it can't produce a documented cybersecurity strategy that the board has formally reviewed. The framework is demanding organizational maturity, not just technical capability.
Where the Major Vendors Actually Stand
Microsoft and Google have both updated their compliance documentation packages to reflect CSF 2.0 and FedRAMP Rev 5. Microsoft's Purview Compliance Manager received an update in April 2026 that added CSF 2.0 assessment templates, including Govern-function control mappings tied to Microsoft Entra ID configurations and Defender for Cloud policy sets. It's genuinely useful if your environment is Microsoft-heavy. Less useful if you're running heterogeneous infrastructure.
Google's Chronicle SIEM platform added automated evidence collection workflows in Q2 2026 specifically targeting FedRAMP Rev 5's continuous monitoring requirements — a direct response to the shift away from point-in-time assessments. AWS, for its part, updated its AWS Artifact documentation portal but hasn't yet released a native CSF 2.0 assessment template as of our reporting deadline.
| Framework | Key Change (2024–2026) | Primary Audience Impact | Transition Deadline |
|---|---|---|---|
| NIST CSF 2.0 | New "Govern" function; expanded supply chain scope | Federal contractors, critical infrastructure operators | Q1 2026 (de facto for new contracts) |
| ISO/IEC 27001:2022 | Annex A restructured to 93 controls; 11 new additions including cloud and secure coding | Globally certified organizations; software vendors | October 31, 2025 (certification bodies stopped issuing 2013 certs) |
| FedRAMP Revision 5 | Aligned to NIST SP 800-53 Rev 5; automated continuous monitoring mandated | Cloud service providers seeking federal authorization | March 2026 (new authorizations only) |
| CMMC 2.0 (DoD) | Collapsed from 5 levels to 3; Level 2 now requires C3PAO third-party assessment | Defense Industrial Base contractors | Phased enforcement through December 2026 |
The Critics Have a Point About Audit Overhead
Not everyone is sold on the direction these frameworks are heading. There's a growing contingent of security practitioners — particularly at smaller vendors and independent consultancies — who argue that the compliance machinery has become self-referential: organizations are spending more time proving they're secure than actually being secure.
James Okafor, principal consultant at Trail of Bits and a longtime contributor to IETF working groups, put it bluntly when we asked him about the FedRAMP Rev 5 continuous monitoring requirements. "Automated evidence collection is theoretically great. In practice, a lot of organizations end up optimizing their environments to generate clean artifacts rather than to catch real threats. You get beautiful compliance dashboards and you miss a lateral movement event that a human analyst would have flagged." Okafor's concern maps to a documented phenomenon in audit theory: Goodhart's Law, where a measure becomes a target and ceases to be a good measure.
The ISO 27001:2022 transition also drew criticism for its timeline. Certification bodies stopped issuing certificates under the 2013 standard in October 2025, giving organizations roughly three years to transition — which sounds reasonable until you account for the fact that CMMC 2.0 enforcement, FedRAMP Rev 5, and CSF 2.0 all landed in roughly the same window. Rachel Tong, director of GRC engineering at Palantir, described the period as "a compliance triathlon where someone moved the transition zones." Her team managed it, she told us, but smaller partners in Palantir's supply chain did not all fare as well.
Supply Chain Controls Are the Sleeper Issue
If the Govern function is the structural headline, supply chain security is the sleeper issue that's going to generate the most findings over the next two years. Both CSF 2.0 and ISO 27001:2022 significantly expanded their treatment of third-party and supplier risk. CSF 2.0's GV.SC category (Govern: Supply Chain Risk Management) now requires organizations to assess and document the cybersecurity practices of suppliers whose compromise could affect the organization, a requirement that maps directly to the lessons of the SolarWinds incident in 2020 and, more recently, the MOVEit vulnerability cascade (tracked under CVE-2023-34362 and related CVEs) that affected hundreds of downstream organizations.
This is where the historical parallel is most instructive. The shift is reminiscent of what happened to the automotive industry in the 1980s when Japanese manufacturers — Toyota especially — demonstrated that quality control couldn't stop at the factory floor. It had to extend backward through the entire supplier network. American automakers that treated supplier quality as someone else's problem paid for it in recalls and market share. The security industry is now reckoning with the same structural lesson, just 40 years later and with considerably higher stakes for data exposure.
The practical difficulty is that most organizations don't have the resources to conduct full security assessments on every third-party vendor. The emerging approach — endorsed by CISA's 2026 guidance on supply chain risk — is tiered supplier classification: identify which suppliers have access to what data or systems, and apply assessment intensity proportional to the potential blast radius of their compromise. It's a risk-based shortcut, but it's one the frameworks themselves increasingly support.
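The tiering logic CISA describes can start as a simple scoring rule. The sketch below is illustrative only: the attributes, weights, tier names, and thresholds are our own assumptions for demonstration, not anything the guidance prescribes.

```python
from dataclasses import dataclass

@dataclass
class Supplier:
    name: str
    handles_sensitive_data: bool  # e.g. PII, CUI, credentials
    has_network_access: bool      # VPN tunnels, API keys, service accounts
    is_sole_source: bool          # no ready replacement if compromised

def assess_tier(s: Supplier) -> str:
    """Assign assessment intensity proportional to potential blast radius.

    Weights and cutoffs are hypothetical, chosen only to show the shape
    of a tiered classification, not taken from CISA or any framework.
    """
    score = (
        3 * s.has_network_access       # a direct path into your environment
        + 2 * s.handles_sensitive_data
        + 1 * s.is_sole_source
    )
    if score >= 4:
        return "Tier 1: full security assessment"
    if score >= 2:
        return "Tier 2: questionnaire + evidence review"
    return "Tier 3: contractual attestation only"

# A CI/CD vendor with network access and credential exposure lands in Tier 1.
print(assess_tier(Supplier("BuildCo CI/CD", True, True, False)))
```

The point of the sketch is the structure, not the numbers: once supplier attributes are recorded as data, the classification becomes repeatable and auditable instead of living in someone's head.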
What IT Teams and Security Engineers Need to Do Before December 2026
For IT professionals managing compliance programs right now, the immediate priorities aren't abstract. CMMC 2.0 Level 2 enforcement ramps to full application for new DoD contracts by December 2026, which means any organization in the Defense Industrial Base that hasn't engaged a Certified Third-Party Assessment Organization (C3PAO) is already behind. The C3PAO backlog is real — we heard from multiple organizations that wait times for assessment scheduling are running 14 to 20 weeks. Beyond C3PAO scheduling, two checks belong near the top of the list:
- Complete a gap analysis against CSF 2.0's Govern function controls, specifically GV.OC (Organizational Context), GV.RM (Risk Management Strategy), and GV.SC (Supply Chain Risk Management) — these are the three categories most likely to generate findings in 2026–2027 audits.
- If your ISO 27001 certificate was issued under the 2013 standard after October 2022, verify with your certification body whether a transition audit has been scheduled. Some organizations received 2013 certificates as late as mid-2023 and haven't yet been contacted about mandatory transition assessments.
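A gap analysis of the kind described above can begin as nothing more elaborate than a structured checklist. A minimal sketch: the category identifiers and titles come from CSF 2.0's Govern function, but the status vocabulary and the example data are hypothetical.

```python
# CSF 2.0 Govern categories flagged in the checklist above.
GOVERN_CATEGORIES = {
    "GV.OC": "Organizational Context",
    "GV.RM": "Risk Management Strategy",
    "GV.SC": "Supply Chain Risk Management",
}

def gap_report(status: dict[str, str]) -> list[str]:
    """List the categories likely to generate audit findings.

    `status` maps category codes to a self-assessed state
    ("implemented", "partial", or absent, treated as "missing").
    """
    findings = []
    for code, title in GOVERN_CATEGORIES.items():
        state = status.get(code, "missing")
        if state != "implemented":
            findings.append(f"{code} ({title}): {state}")
    return findings

# Example: supply chain controls not yet addressed at all.
print(gap_report({"GV.OC": "implemented", "GV.RM": "partial"}))
```

Even a toy version like this beats a prose-only self-assessment, because the output is a concrete findings list that can be tracked to closure before an assessor produces the same list for you.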
The deeper question for security leadership is whether compliance program investment is keeping pace with framework change. Dr. Mehta's research suggests that organizations are spending, on average, 19% more on compliance tooling in 2026 compared to 2024 — but that spending isn't translating proportionally into improved audit outcomes, because tooling without process redesign just produces more artifacts, not better security posture.
The frameworks are going to keep moving. NIST has already signaled that CSF 2.0 will incorporate AI system risk considerations — likely drawing from the NIST AI RMF 1.0 released in January 2023 — in a planned 2.1 revision currently in early draft review. Whether that addition arrives as a new function, an expanded profile category, or a crosswalk document is still an open question. But organizations that built their compliance programs around static, point-in-time frameworks are going to find themselves doing this triathlon again. The ones that built operational processes capable of absorbing incremental change will have the advantage — and right now, that group is smaller than anyone in the industry wants to admit.