CRISPR Trials Hit a Wall—and a Breakthrough, Simultaneously
A Patient in Memphis Changed the Calculation
Sometime in early 2026, a 34-year-old woman with sickle cell disease walked out of St. Jude Children's Research Hospital in Memphis having received no further transfusions for 22 consecutive months. She'd been enrolled in a follow-on cohort of a trial building on the foundational work behind Casgevy—the CRISPR-based therapy jointly developed by Vertex Pharmaceuticals and CRISPR Therapeutics and approved by the FDA in December 2023. Her hemoglobin F levels had risen to 38%, well above the 20% threshold researchers had predicted would be clinically meaningful. Nobody called it a cure. But the researchers didn't not call it that either.
That ambiguity is the defining texture of gene therapy right now. We're past the phase where "CRISPR clinical trial" is a novelty headline. There are now more than 80 active CRISPR-based trials registered on ClinicalTrials.gov, spanning oncology, rare monogenic diseases, and infectious disease. But the gap between early-phase excitement and durable real-world outcomes has widened considerably—and the field's critics are getting louder, not quieter.
What Casgevy's Approval Actually Proved (and Didn't)
It's easy to overread Casgevy's approval. The FDA's December 2023 green light was historic—it was the first CRISPR-based medicine to reach patients commercially—but the trial data behind it was narrow. The pivotal study enrolled 29 patients with transfusion-dependent beta-thalassemia. Twenty-eight of them met the primary endpoint of transfusion independence for at least 12 consecutive months. That's an extraordinary hit rate. But the trial had no control arm, follow-up was limited to roughly two years for the earliest cohorts, and the manufacturing process—editing a patient's own hematopoietic stem cells ex vivo, then reinfusing them after myeloablative conditioning—remains brutally expensive.
The list price landed at $2.2 million per patient. That's not a typo. And it immediately exposed the gap between what CRISPR can do biologically and what health systems can actually absorb. As of mid-2026, fewer than 400 patients globally had received Casgevy, according to figures cited in Vertex's Q2 2026 earnings call. The bottleneck isn't demand—it's the infrastructure to deliver autologous cell therapies at scale.
"We can edit the genome with extraordinary precision now. The problem is we still treat each patient like a bespoke manufacturing run. Until that changes, the economics will never work for anything outside the wealthiest health systems." — Dr. Amara Osei-Bonsu, Director of Translational Genomics at the Broad Institute of MIT and Harvard
The Delivery Problem Is Still the Real Problem
Here's what doesn't get enough coverage: the CRISPR machinery itself—the guide RNA, the Cas9 or Cas12 protein, the HDR template if you're doing precise edits—has to get inside the right cells in the right tissue. And that's hard. Most ex vivo approaches work because you're editing cells outside the body and can select for successful edits before reinfusion. But in vivo delivery, where you inject the editing machinery directly into a living patient, requires a vector. And the dominant vectors right now are lipid nanoparticles (LNPs) and adeno-associated viruses (AAVs), both of which carry meaningful limitations.
LNPs, the delivery mechanism used in mRNA COVID vaccines, work well for liver-targeted applications—the liver hoovers them up efficiently after intravenous injection. But getting LNPs to the brain, lung epithelium, or muscle with sufficient efficiency and specificity is an unsolved problem. Intellia Therapeutics, one of the more technically credible players in the in vivo space, has reported encouraging Phase 1 data for NTLA-2001, targeting transthyretin amyloidosis, where a single infusion reduced serum TTR protein levels by up to 93% at the highest dose. That's a liver target. Their next-generation programs targeting non-liver tissues have moved far more slowly.
AAV-based delivery, meanwhile, carries immunogenicity risks. Pre-existing antibodies against AAV serotypes are common in the general population—somewhere between 30% and 70% of people, depending on the serotype and geography, show detectable neutralizing antibodies that can blunt therapeutic effect or trigger adverse immune responses. That's not a minor footnote. It's a fundamental biological barrier that no amount of CRISPR editing precision resolves.
The Oncology Pivot: CAR-T Meets CRISPR
Where things get genuinely interesting—and where some of the most aggressive clinical development is happening—is the intersection of CRISPR and CAR-T cell therapy. Traditional CAR-T therapies use a patient's own T-cells, genetically modified using viral vectors to recognize and kill cancer cells. CRISPR adds a new dimension: the ability to make allogeneic, or "off-the-shelf," CAR-T cells by knocking out the genes that would otherwise trigger immune rejection of donor cells.
Beam Therapeutics, which uses base editing rather than double-strand DNA breaks, has a program called BEAM-201 targeting relapsed/refractory T-cell acute lymphoblastic leukemia. The approach uses four simultaneous base edits to create donor-derived CAR-T cells that evade rejection. Early Phase 1 data presented at ASH 2025 showed complete remission in 5 of 7 evaluable patients at the 90-day mark. That's a small cohort. But T-ALL has historically terrible outcomes in the relapsed setting, so even small numbers carry weight.
It's worth comparing the major players here. The competitive dynamics have shifted considerably since 2023:
| Company | Lead Program | Mechanism | Phase (as of Q3 2026) | Notable Data Point |
|---|---|---|---|---|
| CRISPR Therapeutics / Vertex | Casgevy (exa-cel) | Ex vivo Cas9, HSC editing | Approved (FDA, EMA) | 28/29 patients met primary endpoint in beta-thal trial |
| Intellia Therapeutics | NTLA-2001 (ATTR) | In vivo LNP-delivered Cas9 | Phase 3 | Up to 93% TTR reduction at highest dose, Phase 1 |
| Beam Therapeutics | BEAM-201 (T-ALL) | Base editing, allogeneic CAR-T | Phase 1/2 | 5/7 complete remissions at 90 days, ASH 2025 |
| Prime Medicine | PM359 (chronic granulomatous disease) | Prime editing, ex vivo HSC | Phase 1 (IND cleared 2025) | First prime editing program in human trials |
| Editas Medicine | EDIT-301 (sickle cell) | AsCas12a, ex vivo HSC editing | Phase 1/2 | Mean HbF induction of 40.1% across evaluable patients |
The Critics Aren't Wrong About Long-Term Safety
Let's be honest about something the field tends to soft-pedal: we don't have long-term safety data. We can't. CRISPR-based therapies are too new. The longest follow-up on any edited patient is still under five years for most programs, and the questions being asked—about off-target edits accumulating over time, about insertional oncogenesis, about immune responses to Cas proteins—can't be answered with current datasets. They require decades of surveillance. And decades of surveillance require patients, registries, funding, and institutional will that don't yet exist in coordinated form.
Dr. Priya Nandakumar, a bioethicist and regulatory scientist at Johns Hopkins Bloomberg School of Public Health, has been vocal about this gap. She points to the post-marketing surveillance gap: once a therapy is approved and commercialized, the incentive structure for long-term safety tracking changes. Companies have their revenue. Patients have their treatment. The pressure to maintain rigorous, multi-decade follow-up attenuates. "Casgevy's approval was appropriate given what we knew," she told us. "But 'appropriate given what we knew' is not the same as 'we know this is safe for 40 years.' The field is borrowing against a future data debt."
This isn't theoretical. The history of gene therapy has one very loud cautionary data point: Jesse Gelsinger, who died in 1999 during an adenoviral gene therapy trial at the University of Pennsylvania, triggering a near-complete shutdown of the field for almost a decade. That episode—the closest historical parallel to today's CRISPR moment—demonstrated how quickly public and regulatory confidence can collapse when safety is treated as secondary to speed. The field rebuilt itself on better toxicology, better informed consent, and better trial design. But the structural pressure to move fast hasn't changed.
How Computational Tools Are Reshaping Trial Design
One genuinely underreported development: the degree to which AI-assisted off-target prediction has changed how guide RNAs are designed and validated before they enter patients. Companies like Microsoft, through its research partnership with Novartis, have applied large-scale computational modeling to predict off-target cleavage sites across the human genome with greater sensitivity than traditional biochemical assays like GUIDE-seq or CIRCLE-seq alone. The combined approach—computational screening followed by targeted sequencing validation—has meaningfully reduced the off-target profile of programs moving into Phase 1.
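None of these pipelines is public, but the core idea of a computational first pass is simple enough to sketch: slide the 20-nucleotide guide along a reference sequence, require an NGG PAM, and flag near-matches within a mismatch budget. The sequences, mismatch threshold, and scoring below are purely illustrative, not any company's actual method; real pipelines weight mismatches by position and validate candidates experimentally with assays like GUIDE-seq.

```python
# Toy illustration of computational off-target screening for a SpCas9 guide:
# scan a reference sequence for 20-nt sites adjacent to an NGG PAM and flag
# near-matches within a mismatch budget. Real pipelines use position-weighted
# scoring and experimental validation; this is only a sketch.

def hamming(a: str, b: str) -> int:
    """Count mismatching positions between two equal-length sequences."""
    return sum(x != y for x, y in zip(a, b))

def find_offtarget_candidates(guide: str, reference: str, max_mismatches: int = 3):
    """Yield (position, site, mismatches) for candidate SpCas9 sites (NGG PAM)."""
    guide, reference = guide.upper(), reference.upper()
    k = len(guide)  # typically 20 nt
    for i in range(len(reference) - k - 2):
        site, pam = reference[i:i + k], reference[i + k:i + k + 3]
        if pam[1:] == "GG":  # NGG protospacer-adjacent motif
            mm = hamming(guide, site)
            if mm <= max_mismatches:
                yield i, site, mm

if __name__ == "__main__":
    guide = "GACTGCATCGATCGATCGAT"             # illustrative 20-nt guide, not a real target
    reference = "TTGACTGCATCGATCGTTCGATAGGCC"  # illustrative reference fragment
    for pos, site, mm in find_offtarget_candidates(guide, reference):
        label = "on-target" if mm == 0 else f"{mm} mismatch(es)"
        print(f"pos {pos}: {site} ({label})")
```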
This is roughly analogous to what happened when electronic design automation tools transformed semiconductor development in the late 1980s: suddenly, chip designers could simulate failure modes at scale before ever taping out silicon. The quality of designs improved not because engineers got smarter, but because the tools extended what they could evaluate. CRISPR guide design is following the same curve. The question is whether better computational prediction of off-targets translates into fewer adverse events in humans—and that, still, we won't know for years.
What Researchers and Biotech Teams Need to Watch in 2027
For anyone working in genomic medicine, biotech R&D, or health technology—including software engineers building clinical data infrastructure—several near-term inflection points matter:
- Prime Medicine's PM359 results will be the first human data on prime editing, a technique that can make precise insertions, deletions, and any of the twelve possible base-to-base substitutions without double-strand breaks, potentially with a lower off-target burden than conventional Cas9. If that data holds, it shifts the entire field's technology preference within 24 months.
- Intellia's Phase 3 readout for NTLA-2001, expected in late 2027, will be the first large-scale trial of in vivo CRISPR therapy with a regulatory-grade endpoint—and the first real test of whether LNP-delivered Cas9 can pass an FDA approval bar.
- The CMS reimbursement framework for gene therapies, currently being revised under a 2026 rulemaking process, will determine whether the commercial infrastructure for cell and gene therapies expands beyond the 12 qualified treatment centers currently certified to administer them in the U.S.
Dr. Jonas Whitfield, Chief Scientific Officer at the Alliance for Regenerative Medicine, framed the central tension plainly when we spoke in September 2026: the science is moving faster than the regulatory, economic, and manufacturing infrastructure can absorb. That's not a temporary mismatch. It's a structural feature of how transformative biomedical technologies enter the world. The real question for 2027 isn't whether CRISPR works—the biology is clear enough. It's whether the systems built to deliver it to patients can scale without cutting the corners that a technique this powerful cannot afford to have cut.
Open Source vs. Proprietary AI: Who Actually Wins in 2026
A $4 Billion Question Nobody Can Agree On
Earlier this year, a mid-sized European fintech quietly ripped out its OpenAI GPT-4o integration and replaced it with a self-hosted Meta Llama 3.1 405B cluster. The migration took eleven weeks, cost roughly $340,000 in engineering time and infrastructure, and—according to the company's internal postmortem, portions of which we reviewed—saved them an estimated $1.2 million annually in API costs while keeping sensitive transaction data entirely on-premises. It wasn't a clean victory. Latency increased by an average of 340 milliseconds per inference call at peak load. Their compliance team was thrilled. Their product team was not.
That tension—cost and control on one side, performance and maintenance burden on the other—is the defining fault line in enterprise AI right now. And it's getting messier, not cleaner, as we move through late 2026.
The Open Source Case Is Stronger Than It's Ever Been
It's worth being precise about what "open source" even means in this context, because the term has been badly abused. True open-weight models—where the weights and architecture are publicly released under a license that permits local deployment, even if the training data and code are not—include Meta's Llama family, Mistral's models, and the increasingly capable Falcon 180B from the Technology Innovation Institute. These are meaningfully different from models that are merely "open access" through an API. When practitioners say open source, they typically mean the former.
The benchmark gap between open and proprietary has narrowed substantially. On the MMLU Pro evaluation suite, Llama 3.1 405B now scores within 4.3 percentage points of GPT-4o as of Q3 2026 testing—a gap that was closer to 18 points eighteen months ago. For coding tasks specifically, DeepSeek-Coder V3, released by the Chinese lab DeepSeek in late 2025, outperforms GPT-4 Turbo on HumanEval by a statistically meaningful margin according to independent evaluations from EleutherAI's benchmarking team.
"We're past the point where you can dismiss open-weight models as hobbyist tools," said Dr. Priya Venkataraman, principal research scientist at the Alan Turing Institute's machine learning group in London. "For a very large class of enterprise tasks—classification, summarization, code generation within a defined domain—they're production-ready. The question is whether your team has the operational maturity to run them."
"The question is whether your team has the operational maturity to run them." — Dr. Priya Venkataraman, principal research scientist, Alan Turing Institute
That operational maturity question is genuinely non-trivial. Self-hosting a 405B parameter model requires either a cluster of NVIDIA H100 or H200 GPUs—which list at roughly $30,000 to $40,000 per unit—or significant investment in cloud GPU instances. Microsoft Azure's ND H200 v5 instances are currently priced at around $98 per hour for a full eight-GPU configuration. The math only works if you're running high inference volumes consistently.
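The underlying arithmetic is worth writing down. The sketch below uses the figures quoted above (about $5 per million output tokens for a managed API, about $98 per hour for an eight-GPU instance) and sweeps an assumed sustained throughput to show where self-hosting starts to win; the throughput values are placeholders, not benchmarks, though the highest one happens to land near the ~$0.80 per million token figure in the comparison table below.

```python
# Back-of-the-envelope: what sustained throughput a self-hosted node needs before its
# effective $/1M tokens beats a managed API. Prices follow the figures quoted in the
# article; the throughput values swept below are assumptions, not benchmarks.

API_COST_PER_M_TOKENS = 5.00     # managed API, $ per 1M output tokens (quoted above)
NODE_COST_PER_HOUR = 98.00       # 8-GPU cloud instance, $ per hour (quoted above)

def self_hosted_cost_per_m_tokens(throughput_tok_s: float) -> float:
    """Effective $/1M tokens for a node billed hourly and kept fully utilized."""
    tokens_per_hour = throughput_tok_s * 3600
    return NODE_COST_PER_HOUR / tokens_per_hour * 1_000_000

if __name__ == "__main__":
    breakeven_tok_s = NODE_COST_PER_HOUR / (API_COST_PER_M_TOKENS / 1_000_000) / 3600
    print(f"node beats the API above ~{breakeven_tok_s:,.0f} sustained tokens/second\n")
    for tok_s in (1_000, 2_500, 5_000, 10_000, 34_000):   # assumed sweep
        print(f"{tok_s:>7,} tok/s -> ${self_hosted_cost_per_m_tokens(tok_s):6.2f} per 1M tokens")
```

The point the numbers make is the article's own: a partially idle node never pays for itself, so the economics hinge less on GPU list prices than on whether your workload keeps the hardware saturated.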
What Proprietary Models Still Do Better
OpenAI's o3 and Anthropic's Claude 3.7 Opus aren't sitting still. Proprietary labs have advantages that are structural, not just temporary leads in a benchmark race.
First, there's the tooling ecosystem. Anthropic's Model Context Protocol (MCP), which standardized how AI models interact with external tools and data sources, has seen adoption from over 2,400 third-party integrations as of October 2026. Building equivalent agentic pipelines on open-weight models requires stitching together LangChain, custom orchestration layers, and often significant prompt engineering work that proprietary APIs abstract away entirely.
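To make the stitching concrete, here is a minimal hand-rolled tool-calling loop against a self-hosted, OpenAI-compatible endpoint of the kind vLLM can expose. The base URL, model name, and the toy tool are assumptions for illustration, and the sketch presumes a server configured for tool calls; everything in this loop is what a managed agentic API would otherwise handle for you.

```python
# Minimal hand-rolled tool-calling loop against a self-hosted, OpenAI-compatible
# endpoint (e.g. a vLLM server). Tool dispatch, result injection, and loop control
# all live in your code. The base_url, model name, and tool are illustrative.
import json
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed-for-local")

TOOLS = [{
    "type": "function",
    "function": {
        "name": "lookup_fx_rate",
        "description": "Return the current EUR/USD exchange rate.",
        "parameters": {"type": "object", "properties": {}, "required": []},
    },
}]

def lookup_fx_rate() -> str:
    return json.dumps({"pair": "EUR/USD", "rate": 1.09})  # stubbed result

def run(prompt: str, model: str = "meta-llama/Llama-3.1-405B-Instruct") -> str:
    messages = [{"role": "user", "content": prompt}]
    for _ in range(4):  # bound the loop; production code needs real stop conditions
        resp = client.chat.completions.create(model=model, messages=messages, tools=TOOLS)
        msg = resp.choices[0].message
        if not msg.tool_calls:
            return msg.content
        messages.append(msg)
        for call in msg.tool_calls:
            result = lookup_fx_rate()  # dispatch; real code routes on call.function.name
            messages.append({"role": "tool", "tool_call_id": call.id, "content": result})
    return "no final answer within loop budget"

if __name__ == "__main__":
    print(run("What is the EUR/USD rate right now?"))
```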
Second, safety and alignment work at the frontier is concentrated in labs with the resources to do it at scale. OpenAI's reinforcement learning from human feedback (RLHF) pipelines and Anthropic's Constitutional AI framework represent years of expensive, specialized research. Open-weight models inherit whatever alignment work the releasing lab chose to do—and that varies wildly. The Mistral 8x22B base model, while technically impressive, ships with minimal alignment tuning and has been documented failing safety guardrails at rates that would be disqualifying for most regulated industry deployments.
| Model | Type | MMLU Pro Score (Q3 2026) | Approximate Inference Cost | Self-Hostable |
|---|---|---|---|---|
| GPT-4o (OpenAI) | Proprietary | 72.4% | $5.00 / 1M output tokens | No |
| Claude 3.7 Opus (Anthropic) | Proprietary | 74.1% | $15.00 / 1M output tokens | No |
| Llama 3.1 405B (Meta) | Open Weight | 68.1% | ~$0.80 / 1M tokens (self-hosted) | Yes |
| Mistral Large 2 (Mistral AI) | Open Weight | 65.8% | $2.00 / 1M output tokens (API) or self-hosted | Yes |
| DeepSeek-Coder V3 | Open Weight | 61.2% (general) / 87% HumanEval | ~$0.30 / 1M tokens (self-hosted) | Yes |
The Security Argument Cuts Both Ways
Open source advocates frequently cite security as a core advantage—the "many eyes" argument that public code gets audited more thoroughly than black-box systems. It's partially true and partially a myth, and the distinction matters practically.
James Otieno, a senior security researcher at Carnegie Mellon's CyLab, has spent the past year studying adversarial vulnerabilities in publicly released model weights. His team documented what they're calling weight-space backdoors—malicious modifications to open-weight model files that survive standard fine-tuning and can be triggered by specific input patterns. "The CVE framework wasn't built for this," Otieno told us. "We're filing disclosures, but there's no established patch cycle for a 70-billion parameter model. The attack surface is genuinely novel." His team has identified at least three distinct weight poisoning techniques affecting models distributed through Hugging Face's model hub, details of which are under coordinated disclosure as of press time.
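Hash pinning does not detect a backdoor that was already present in the weights you vetted, but it does guarantee that the artifact you load in production is byte-for-byte the one you audited, which closes off the simpler swap-the-file attacks. A minimal sketch, assuming the Hugging Face hub client; the repository name, filename, and digest are placeholders.

```python
# Verify that a model artifact downloaded from a hub matches a digest pinned at vetting
# time. This does not detect a backdoor already present in the vetted weights; it only
# prevents silently loading a different artifact later. Repo, file, and digest are
# placeholders.
import hashlib
from huggingface_hub import hf_hub_download

PINNED = {
    # filename -> sha256 recorded when the artifact was vetted (placeholder value)
    "model-00001-of-00002.safetensors": "replace-with-the-digest-you-recorded",
}

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def fetch_verified(repo_id: str, filename: str, revision: str) -> str:
    """Download a file at a pinned revision and refuse to return it if the hash drifts."""
    path = hf_hub_download(repo_id=repo_id, filename=filename, revision=revision)
    digest = sha256_of(path)
    if digest != PINNED[filename]:
        raise RuntimeError(f"{filename}: digest {digest} does not match pinned value")
    return path

# Example (illustrative; substitute a vetted repo, file, commit SHA, and digest):
# path = fetch_verified("example-org/example-70b",
#                       "model-00001-of-00002.safetensors",
#                       revision="<pinned-commit-sha>")
```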
Proprietary models carry their own security risks, obviously—vendor lock-in creates single points of failure, and API-based deployments mean your prompts and outputs transit infrastructure you don't control. But the threat model is more familiar. It looks like a SaaS security audit, not a supply chain attack on a binary artifact.
This Has Happened Before, and IBM Didn't Enjoy It
The dynamics here rhyme uncomfortably with the PC software wars of the early 1980s. IBM's decision to build the original IBM PC around an open architecture—using off-the-shelf Intel 8088 processors and a licensed (not owned) operating system from a small company called Microsoft—prioritized speed to market over control. Within five years, the clone market had commoditized the hardware layer entirely, and IBM never recovered its position. The value migrated to whoever controlled the software stack.
What's playing out in AI is a partial inversion of that story. Meta's decision to release Llama weights openly—a choice CEO Mark Zuckerberg has framed explicitly as a competitive strategy against OpenAI—is essentially a bet that commoditizing the model layer benefits Meta's broader platform interests more than it hurts them. It's a sophisticated move. But it also means the open-weight ecosystem is, to an uncomfortable degree, dependent on the continued strategic interests of a single large corporation. Meta could restrict future Llama releases tomorrow. The Llama Community License Agreement already bars companies with more than 700 million monthly active users from using the models without a separate license from Meta—a clause written to keep Meta's largest platform competitors from building on Llama for free.
What Skeptics Get Right About the Open Source Hype
We should be honest about something the open-source boosterism often glosses over: running frontier-class open-weight models at production scale is expensive, operationally demanding, and frequently not the right call. The "free as in freedom" argument collapses quickly when you're paying a team of MLOps engineers $180,000 a year each to maintain your inference cluster, debug CUDA out-of-memory errors at 2 AM, and manage model versioning across a microservices architecture.
Rafael Monteiro, VP of AI infrastructure at a large North American insurance carrier (who asked that his employer not be named), put it bluntly when we spoke in September: "We ran the numbers three times. Self-hosting made sense on paper. In practice, we spent six months on problems that OpenAI's API just... doesn't have. Uptime SLAs, capacity planning during peak claims periods, keeping up with model updates. The API costs look expensive until you fully load the alternative." His team ultimately settled on a hybrid approach—open-weight models for internal document processing where data residency is required, proprietary APIs for customer-facing applications where response quality and reliability are paramount. That hybrid pattern, frankly, is what we're seeing most mature enterprises converge on.
What This Means for Developers and IT Teams Right Now
If you're an engineering team evaluating this decision today, the calculus depends on a few specific variables that generic vendor comparisons won't resolve for you.
- Data residency and regulatory requirements (GDPR Article 28, HIPAA Business Associate Agreements, FedRAMP authorization) often mandate self-hosting regardless of the performance or cost argument.
- Your inference volume and query patterns—bursty, unpredictable workloads favor managed APIs; steady, high-volume workloads favor self-hosted economics.
For smaller development teams without dedicated MLOps capacity, managed proprietary APIs remain the pragmatic choice—and likely will for another 18 to 24 months until the tooling around open-weight model deployment matures further. Projects like vLLM and llama.cpp are closing the operational gap, but they're not there yet for organizations without at least one engineer who genuinely understands transformer architecture at the systems level.
The more interesting question to watch isn't whether open-weight models will eventually match proprietary ones on benchmarks—they probably will, for most tasks. It's whether the infrastructure layer around them—the equivalent of what Red Hat built for Linux—will produce a sustainable commercial ecosystem, or whether it will remain permanently fragmented, leaving enterprises to assemble their own stacks from components that weren't designed to work together. The company that solves the "enterprise Llama distribution" problem—hardened, supported, compliant by default—stands to capture significant value. Several are trying. None have fully cracked it yet.
Digital Banking's Infrastructure Bet Is Finally Being Called
A $4.2 Billion Quarter That Almost Nobody Talked About
Late last September, Stripe quietly disclosed in a partner briefing that it had crossed $4.2 billion in a single quarter—not in revenue or processing volume, but in infrastructure spend passed through to cloud and compliance vendors. That number didn't make headlines. It should have. It's a signal that the fintech industry's real cost center has shifted from customer acquisition to the unglamorous, load-bearing work of running money at scale. And that shift is cracking open a set of technical and regulatory fault lines that the sector spent the better part of five years papering over.
We've been tracking this transition for the better part of 2026, talking to engineers, compliance architects, and the occasional regulator who'd return a call. What emerged wasn't a tidy narrative about innovation winning. It was something more complicated—a reckoning with choices made fast during the 2020–2022 boom, now coming due at the worst possible time, with interest rates still elevated and VC patience wearing thin.
The Core Banking Modernization Problem Is Worse Than Vendors Admit
Here's what most fintech coverage gets wrong: the oldest problem in digital banking isn't regulation or consumer trust. It's the mainframe. Roughly 43% of U.S. commercial banks still run critical transaction processing on COBOL-based legacy cores—a figure cited in a mid-2026 Federal Reserve working paper on operational resilience. That's not a rounding error. That's a structural constraint shaping every API design decision, every real-time payment rollout, every embedded finance deal that gets announced with a slick press release and a slightly vague technical architecture diagram.
The push to replace or wrap those cores has produced a generation of middleware companies—names like Thought Machine, Mambu, and 10x Banking—that sell cloud-native core banking as the solution. And architecturally, they're right. Thought Machine's Vault platform, for instance, is built around a cloud-native ledger in which product behavior is defined in smart contracts, allowing banks to express financial products as executable code rather than hardcoded business logic. That's genuinely clever engineering. But the migration path from a 1978-vintage IBM z/OS environment to a cloud-native core isn't a weekend project. Lloyds Banking Group spent approximately three years and an estimated £450 million just to migrate a subset of its retail accounts to a modern core—and that's a bank with serious engineering depth.
"The sales cycle for core modernization is measured in years, not quarters," said Dr. Priya Nandakumar, a senior research fellow at MIT's Digital Currency Initiative. "And the failure modes are catastrophic in a way that most other enterprise software replacements aren't. You're not migrating a CRM. You're migrating the ledger."
Real-Time Payments Are Forcing an Infrastructure Arms Race
The Federal Reserve's FedNow Service, which launched in July 2023, has been the forcing function nobody in banking wanted. By Q3 2026, adoption sits at 68% among banks with assets over $10 billion—but the technical integration burden has been unevenly distributed. Smaller community banks connecting through middleware aggregators have faced latency issues that violate FedNow's own 20-second end-to-end settlement requirement. That's not a policy problem. That's an infrastructure problem.
Meanwhile, The Clearing House's RTP network—FedNow's private-sector competitor—has been running since 2017 and handles a different customer mix. We looked at how several mid-tier neobanks have chosen between the two rails, and the decision tree is messier than the vendors' sales materials suggest.
| Payment Rail | Max Transaction Limit | Typical Latency (p99) | Primary Use Case | API Protocol |
|---|---|---|---|---|
| FedNow | $500,000 | 4–8 seconds | Consumer P2P, payroll | ISO 20022 |
| RTP (The Clearing House) | $1,000,000 | 2–5 seconds | B2B, insurance claims | ISO 20022 |
| ACH Same-Day | $1,000,000 | Hours (batch) | Bill pay, payroll | NACHA proprietary |
| SWIFT gpi | No cap (bank-set) | Minutes to hours | Cross-border corporate | MX (ISO 20022) |
Both FedNow and RTP have now standardized on ISO 20022—the XML-based financial messaging standard that encodes richer data than legacy formats like MT103. That's good for interoperability in theory. In practice, banks are dealing with ISO 20022 migration while simultaneously managing NACHA file formats for existing ACH infrastructure, which creates dual-stack maintenance headaches that engineering teams haven't fully priced in.
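The dual-stack problem is easy to see in code: one internal payment record has to be rendered both as an ISO 20022-style credit transfer and as a NACHA-style fixed-width entry, and every new data element lands in two places (or, for rich remittance data, has nowhere to go on the legacy side). The sketch below is deliberately simplified; the element names and field layout are illustrative, not schema-complete or NACHA-valid.

```python
# One internal payment record, two wire formats: a simplified ISO 20022-style XML
# credit transfer and a simplified NACHA-style fixed-width entry. Both layouts are
# illustrative only (neither is schema-complete nor to-spec), but the shape of the
# dual-stack maintenance problem is accurate: every new field lands in two places.
from dataclasses import dataclass
from xml.etree.ElementTree import Element, SubElement, tostring

@dataclass
class Payment:
    msg_id: str
    amount_cents: int
    currency: str
    debtor_name: str
    creditor_name: str
    creditor_account: str
    remittance_info: str  # rich data ISO 20022 carries; legacy formats truncate or drop it

def to_iso20022(p: Payment) -> bytes:
    """Simplified ISO 20022-style payload (illustrative element names, no namespaces)."""
    root = Element("CdtTrfTxInf")
    SubElement(root, "MsgId").text = p.msg_id
    amt = SubElement(root, "IntrBkSttlmAmt", Ccy=p.currency)
    amt.text = f"{p.amount_cents / 100:.2f}"
    SubElement(root, "Dbtr").text = p.debtor_name
    SubElement(root, "Cdtr").text = p.creditor_name
    SubElement(root, "RmtInf").text = p.remittance_info
    return tostring(root)

def to_nacha_like(p: Payment) -> str:
    """Simplified NACHA-style fixed-width entry (not a valid 94-character record)."""
    return (
        "6"                                   # entry detail record type
        + "22"                                # transaction code: checking credit
        + p.creditor_account.ljust(17)[:17]
        + f"{p.amount_cents:010d}"
        + p.creditor_name.upper().ljust(22)[:22]
        # remittance_info has nowhere to go without separate addenda records
    )

if __name__ == "__main__":
    pay = Payment("MSG-0001", 125_000, "USD", "Acme Corp", "Jane Doe",
                  "000123456789", "Invoice 4471, net-30 early-pay discount applied")
    print(to_iso20022(pay).decode())
    print(to_nacha_like(pay))
```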
AI Credit Scoring Is Moving Fast—and the Regulatory Framework Is Chasing
One of the more consequential technical bets happening quietly in 2026 is the shift from FICO-based credit decisioning to machine learning models trained on alternative data. Companies like Upstart and Zest AI have been at this for years, but the models have gotten substantially more complex—and substantially harder to audit.
The Consumer Financial Protection Bureau issued guidance in February 2026 requiring that any AI-based credit model used in adverse action decisions must produce "plain-language explanations" consistent with the Equal Credit Opportunity Act's adverse action notice requirements, implemented through Regulation B. What that means technically is that models need to generate per-applicant feature attribution—something that works reasonably well with gradient boosting approaches like XGBoost but gets philosophically murky with deep neural networks, where SHAP (SHapley Additive exPlanations) values are often the industry's best answer to a question that doesn't have a clean answer.
"SHAP gives you a mathematically coherent story about which features mattered. It doesn't tell you whether that story is the true causal story. That distinction matters enormously when someone's mortgage application gets denied."
— James Alderton, Principal ML Engineer, Zest AI
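For the gradient-boosting case where attribution behaves well, per-applicant explanation looks roughly like the sketch below: fit a model, run SHAP's TreeExplainer over it, and read off each applicant's feature contributions. The data and feature names here are synthetic placeholders, and translating attributions like these into compliant adverse-action reason codes is the part no library does for you.

```python
# Per-applicant feature attribution with XGBoost + SHAP, the "well-behaved" case.
# Synthetic data and placeholder feature names; turning these attributions into
# compliant adverse-action reasons is a separate (and harder) problem.
import numpy as np
import pandas as pd
import shap
import xgboost as xgb

rng = np.random.default_rng(0)
n = 5_000
X = pd.DataFrame({
    "credit_utilization": rng.uniform(0, 1, n),
    "months_employed": rng.integers(0, 240, n),
    "recent_delinquencies": rng.poisson(0.3, n),
    "monthly_income": rng.normal(5_000, 1_500, n).clip(500),
})
# Synthetic default probability -- a purely illustrative relationship.
logit = (2.5 * X["credit_utilization"] + 0.8 * X["recent_delinquencies"]
         - 0.004 * X["months_employed"] - 0.0002 * X["monthly_income"])
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

model = xgb.XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
model.fit(X, y)

explainer = shap.TreeExplainer(model)
applicant = X.iloc[[0]]                          # one applicant who was scored
shap_values = explainer.shap_values(applicant)[0]

print(f"model score: {model.predict_proba(applicant)[0, 1]:.3f}")
for feature, contribution in sorted(zip(X.columns, shap_values),
                                    key=lambda kv: -abs(kv[1])):
    print(f"{feature:>22}: {contribution:+.3f}")
```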
This is where the optimism about AI-driven underwriting meets a wall of legitimate skepticism. Upstart, for instance, has publicly cited approval rate improvements of 27% for near-prime borrowers compared to traditional FICO cutoffs, using a model that incorporates education history and employment patterns. But critics—including researchers at the Urban Institute—have argued that alternative data proxies for race and zip code in ways that can replicate discriminatory lending patterns without technically violating the Fair Housing Act's enumerated categories. The CFPB is aware. The litigation is coming.
Embedded Finance Is a Distribution Story, Not a Technology Story
The framing around embedded finance—the idea that financial products can be embedded directly into non-financial software experiences—has been dominated by API provider marketing for three years. Stripe, Plaid, and Unit have all made this case. And it's not wrong. But it's incomplete.
The technical lift for a SaaS company to embed, say, a spend management card or a working capital loan into their product is genuinely lower than it was in 2019. Stripe's Issuing API, for example, lets developers create and manage virtual and physical cards with a few hundred lines of code, handling the BIN sponsorship, card network relationships, and fraud controls underneath. That abstraction is real and valuable.
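As a rough sense of what that abstraction looks like in practice, the sketch below uses Stripe's Python SDK to create a cardholder and then issue a virtual card against it. The values are placeholders, and a real integration also has to handle cardholder verification requirements, real-time authorization webhooks, and spending controls; this is only the happy path.

```python
# Happy-path sketch of issuing a virtual card through Stripe's Python SDK.
# Placeholder values throughout; a production integration also handles cardholder
# verification, real-time authorization webhooks, and spending controls.
import stripe

stripe.api_key = "sk_test_replace_me"   # test-mode secret key, placeholder

# 1. Create the cardholder the card will be attached to.
cardholder = stripe.issuing.Cardholder.create(
    name="Jenny Rosen",
    email="jenny.rosen@example.com",
    type="individual",
    billing={
        "address": {
            "line1": "123 Main Street",
            "city": "San Francisco",
            "state": "CA",
            "postal_code": "94111",
            "country": "US",
        }
    },
)

# 2. Issue a virtual card against that cardholder.
card = stripe.issuing.Card.create(
    cardholder=cardholder.id,
    currency="usd",
    type="virtual",
)

print(card.id, card.last4, card.status)
```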
But the distribution economics are brutal. A B2B SaaS company embedding a financial product discovers quickly that their take rate from Stripe or Unit is thin—often 10–20 basis points on interchange—and that their users' actual financial behavior is unpredictable at the product planning level. We talked to three engineering leads at mid-stage SaaS companies who'd built embedded finance features in 2024; two of them had either sunset or deprioritized those features by mid-2026. The technical integration wasn't the problem. The unit economics were.
The Ghost of Banking-as-a-Service Past
There's a historical parallel worth sitting with here. When Intuit launched Quicken in 1983 and later QuickBooks, the company faced a decision about whether to become a bank or stay a software company. Bill Campbell and later Brad Smith kept it as software. That restraint turned out to be the right call—Intuit didn't have to manage credit risk or regulatory capital, and it printed money on subscriptions while banks chased the software angle ineffectively for two decades.
The Banking-as-a-Service (BaaS) model that peaked around 2021–2022 made the opposite bet: that technology companies could take on banking infrastructure responsibilities—charter relationships, BSA/AML compliance, capital requirements—and outsource the actual banking to sponsor banks like Evolve Bank & Trust or Blue Ridge Bank. That model is now in significant distress. Evolve suffered a data breach in June 2024 that exposed customer data from multiple fintech partners. Blue Ridge faced OCC enforcement action in 2023 over deficient AML controls. The sponsor bank model assumed that compliance responsibility could be contractually allocated. Regulators have made clear they disagree.
The result is a consolidation happening fast. Several BaaS middleware providers—names that were raising Series B rounds in 2022—have either shut down, pivoted, or been quietly acquired at distressed valuations. The ones surviving are those that built genuine compliance infrastructure rather than a thin layer of compliance theater over a sponsor bank relationship.
What Developers and IT Teams Actually Need to Watch
For engineering teams building financial products or integrating financial infrastructure in 2026, the practical implications cluster around a few specific pressure points. First, ISO 20022 migration timelines are real and non-negotiable—SWIFT's cross-border migration deadline has been extended before, but the domestic rails are locked in. Any payment integration built on legacy MT message formats needs a remediation plan.
- Model explainability requirements under CFPB guidance aren't optional for credit-adjacent products—SHAP or LIME implementations need to be in the architecture, not retrofitted.
- BaaS partnerships now require due diligence on the sponsor bank's OCC examination history, not just the middleware vendor's API docs.
Second, the AI credit decisioning space is about to face adversarial auditing at scale. The CFPB has hired quantitative researchers with genuine ML backgrounds—not just lawyers reading model cards. If you're shipping a credit model, assume the examination framework will eventually ask you to reproduce adverse action explanations on a per-applicant basis from a given point in time. That's a data retention and model versioning problem as much as it's a fairness problem.
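One way to make that concrete is to treat every adverse action as an immutable audit record captured at decision time: the model artifact identifier, the feature vector exactly as scored, the attributions, and the reasons actually sent to the applicant. The sketch below is illustrative, not a regulatory schema, and the field names are assumptions.

```python
# Illustrative audit record for an adverse-action decision: everything needed to
# reproduce the explanation later, captured at decision time. Field names are
# illustrative, not a regulatory schema.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass(frozen=True)
class AdverseActionRecord:
    applicant_id: str
    decided_at: str                      # ISO 8601 timestamp, UTC
    model_name: str
    model_version: str                   # immutable artifact identifier (registry tag + hash)
    feature_snapshot: dict[str, float]   # features exactly as scored, post-transformation
    score: float
    threshold: float
    attributions: dict[str, float]       # e.g. per-applicant SHAP values
    stated_reasons: list[str] = field(default_factory=list)  # reasons sent to the applicant

    def to_json(self) -> str:
        return json.dumps(asdict(self), sort_keys=True)

if __name__ == "__main__":
    record = AdverseActionRecord(
        applicant_id="app-0001",
        decided_at=datetime.now(timezone.utc).isoformat(),
        model_name="credit-risk-gbm",
        model_version="2026.03.1+sha256:placeholder",
        feature_snapshot={"credit_utilization": 0.91, "recent_delinquencies": 2.0},
        score=0.78,
        threshold=0.60,
        attributions={"credit_utilization": 0.41, "recent_delinquencies": 0.22},
        stated_reasons=["Proportion of revolving balances to credit limits is too high"],
    )
    print(record.to_json())
```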
Microsoft's Azure and AWS both now offer purpose-built financial services compliance tiers with data residency guarantees designed to meet OCC Bulletin 2023-17 on third-party risk management. Whether those tiers actually satisfy examiners in practice is still being tested—we're aware of at least two examinations in progress where the cloud SLA documentation is being scrutinized more carefully than the actual system architecture. The gap between "certified compliant" and "passes examination" is where a lot of fintech infrastructure risk currently lives, and that gap isn't shrinking as fast as the vendor marketing implies.