The $5.5 Billion Admission: Enterprise AI's Real Bottleneck Is Everything Around the Model
$5.5 billion. That is the combined capital committed this month to new ventures from OpenAI and Anthropic, all aimed at a single problem: enterprises cannot turn capable AI models into reliable production systems. OpenAI announced the OpenAI Deployment Company (DeployCo) on May 11. Anthropic had moved a week earlier, on May 4. Read the announcements carefully, and only one conclusion holds: the models aren't the problem.
OpenAI's Deployment Company is majority-owned and controlled by OpenAI, with total investment exceeding $4 billion from TPG, Advent, Bain Capital, and Brookfield. Anthropic's joint venture draws on Blackstone, Hellman & Friedman, Goldman Sachs, Apollo Global Management, General Atlantic, GIC, Leonard Green, and Sequoia Capital — combined commitment ~$1.5 billion.
| | OpenAI DeployCo | Anthropic Joint Venture |
|---|---|---|
| Total committed | ~$4B | ~$1.5B |
| Launch | May 11, 2026 | May 4, 2026 |
| Lead investors | TPG, Advent, Bain, Brookfield | Blackstone, Hellman & Friedman, Goldman Sachs |
| Structure | Majority OpenAI-owned | JV with founding partners |
| Premise | Data, workflow, governance gap — not a model gap | Same |
Both ventures are built around the same structural premise. As OpenAI CRO Denise Dresser put it: "The challenge now is helping companies integrate these systems into the infrastructure and workflows that power their businesses. DeployCo is designed to help organizations bridge that gap." That gap is not a model gap. It is a data, workflow, and governance gap — and the AI labs are now spending billions to say so explicitly.
Blackstone President Jon Gray framed the problem as breaking down "one of the most significant bottlenecks to enterprise AI adoption" — the scarcity of engineers who can implement frontier AI systems at speed. Gartner analysts have been more direct, noting that "a lot of customers are not seeing clear value, and some of that is primarily because they don't have the internal expertise."
The Root Cause Forward-Deployed Engineers Won't Fix: Your Knowledge Base
The arrival of forward-deployed engineers (FDEs) from DeployCo or Anthropic's venture will be a meaningful step forward. They will connect models to business workflows, instrument evaluation pipelines, and build integrations between AI systems and existing toolchains. What they cannot do is retroactively clean up years of accumulated knowledge base drift.
A 2025 analysis found that 73% of enterprise RAG deployments fail within the first year — not due to model or retrieval algorithm problems, but due to knowledge base maintenance failures: stale documents, coverage gaps, and poor extraction quality from source files. The architecture of enterprise RAG makes this problem structural: when an employee asks a question, the AI retrieves from your knowledge base and generates a response grounded in what it finds. If what it finds is wrong, outdated, or missing, no amount of prompt engineering fixes the output.
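To see why, it helps to look at the loop itself. Below is a minimal sketch of retrieve-then-generate in Python; the in-memory store, the keyword-overlap ranking, and the stand-in `generate` function are illustrative simplifications, not any vendor's API. The point survives the simplification: the answer can only be as current as the top-ranked document.

```python
from dataclasses import dataclass

@dataclass
class Doc:
    doc_id: str
    text: str
    last_updated: str  # ISO date: stale documents are the silent failure mode

# Toy knowledge base: the refund policy changed, but the old copy was never removed.
STORE = [
    Doc("policy-2023", "The refund window is 30 days.", "2023-01-10"),
    Doc("policy-2026", "The refund window is 14 days.", "2026-02-01"),
]

def retrieve(query: str, k: int = 1) -> list[Doc]:
    """Rank documents by naive keyword overlap with the query."""
    q_tokens = set(query.lower().split())
    return sorted(
        STORE,
        key=lambda d: len(q_tokens & set(d.text.lower().split())),
        reverse=True,
    )[:k]

def generate(query: str, context: list[Doc]) -> str:
    """Stand-in for the LLM call: the answer is grounded in whatever was retrieved."""
    return "Based on our records: " + " ".join(d.text for d in context)

# Both policy versions score identically on this query, so the winner is decided
# by tie-breaking (stable sort order), not by truth.
print(generate("what is the refund window", retrieve("what is the refund window")))
```

Because Python's sort is stable, the stale 2023 policy outranks the current one here. What a user would report as a hallucination is the model faithfully summarising a superseded document.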
The Three Failure Modes Compounding in Your Knowledge Base Right Now
Retrieval quality — not model quality — determines whether enterprise AI gives correct answers about your organisation. Misdiagnosed hallucinations are a particularly costly version of this problem: what looks like the LLM inventing information is frequently a retrieval system returning a stale or conflicting document — the model then accurately describes something that is no longer true.
Three failure modes tend to compound each other in production (a detection sketch follows the list):
• Document rot: Policies, procedures, and product information that were accurate at ingestion but have since been superseded — without the knowledge base being updated.
• Coverage gaps: Topics where user queries exist but no source documents do, forcing the model into speculation.
• Extraction failures: Scanned PDFs, garbled tables, and improperly parsed files that produce corrupted chunks — noise that degrades retrieval precision across the entire index.
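All three modes can be screened for before deployment with unsophisticated heuristics. The sketch below is a starting point under stated assumptions: the one-year freshness threshold, the alphanumeric-ratio test for garbled chunks, and the vocabulary-overlap test for coverage are all illustrative choices, not standards.

```python
import re
from datetime import date, timedelta

MAX_AGE = timedelta(days=365)   # assumption: this content class is reviewed yearly
MIN_ALPHA_RATIO = 0.6           # assumption: below this, a chunk is likely garbled

def is_stale(last_updated: date, today: date) -> bool:
    """Document rot: accurate at ingestion, never revisited since."""
    return (today - last_updated) > MAX_AGE

def is_garbled(chunk: str) -> bool:
    """Extraction failure: scanned PDFs and broken tables yield low-alpha noise."""
    stripped = chunk.strip()
    if not stripped:
        return True
    alnum = len(re.findall(r"[A-Za-z0-9]", stripped))
    return alnum / len(stripped) < MIN_ALPHA_RATIO

def coverage_gaps(queries: list[str], doc_texts: list[str]) -> list[str]:
    """Coverage gap: users ask, but no document shares even basic vocabulary."""
    return [
        q for q in queries
        if not any(set(q.lower().split()) & set(t.lower().split()) for t in doc_texts)
    ]

print(is_stale(date(2024, 3, 1), today=date(2026, 5, 12)))          # True: rot
print(is_garbled("|| 0x1F ... ,,, ___ %%"))                          # True: noise
print(coverage_gaps(["parental leave policy"], ["The refund window is 30 days."]))
```

Real pipelines would use embedding similarity rather than token overlap, but even crude checks like these surface the bulk of the rot before an FDE ever arrives.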
Current RAG implementations fail at enterprise scale because they treat knowledge infrastructure as separate from security, governance, and observability. Seventy percent of RAG systems still lack systematic evaluation frameworks, making it impossible to detect quality regressions before they surface in production. FDEs can build the pipeline, but they arrive expecting the source content already to be production-ready.
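A systematic evaluation framework does not have to be elaborate to catch regressions. One minimal pattern, sketched below with assumed names and thresholds, is a golden set of query-to-expected-document pairs run on every index rebuild; if recall@k falls below the last measured baseline, the build fails before users see the regression.

```python
# Minimal regression gate for a retrieval index (illustrative names, not a framework).
# `retrieve` is any callable returning ranked documents with a .doc_id attribute,
# e.g. the toy retriever from the earlier sketch.
GOLDEN_SET = {
    "what is the refund window": "policy-2026",
    "how do I reset my password": "kb-auth-04",
}
BASELINE_RECALL = 0.90  # assumption: recall@3 measured at the last good release

def recall_at_k(retrieve, k: int = 3) -> float:
    hits = sum(
        1 for query, expected in GOLDEN_SET.items()
        if expected in [doc.doc_id for doc in retrieve(query, k)]
    )
    return hits / len(GOLDEN_SET)

def regression_gate(retrieve) -> None:
    """Run on every index rebuild; fail the pipeline before users see the drop."""
    score = recall_at_k(retrieve)
    if score < BASELINE_RECALL:
        raise RuntimeError(f"retrieval recall regressed: {score:.2f} < {BASELINE_RECALL:.2f}")
```

Wired into CI, a gate like this turns "retrieval quality" from an opinion into a release criterion.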
The Governance Gap: Only 1 in 4 Organisations Are Actually Ready
Even if your knowledge base content is clean, the governance layer surrounding it determines whether your AI deployment is auditable, defensible, and scalable. The current state of enterprise AI governance is not encouraging.
Seventy-two percent of organisations have AI in production, yet only 9% have mature governance. Research from AuditBoard found that only 1 in 4 organisations have fully operational AI governance, despite widespread awareness of new regulations. Fewer than 1 in 10 integrate AI risk and compliance reviews directly into development pipelines.
The downstream consequences are already measurable. McKinsey's State of AI report found nearly half of organisations encountered measurable governance or ethical lapses linked to GenAI projects. Per IBM's 2025 Cost of a Data Breach Report, 13% of organisations reported breaches involving AI models or applications — and among those, 97% had no proper AI access controls in place.
The core problem: You cannot audit, explain, or scale AI if your data catalogue is incomplete, your lineage unknown, or your quality metrics opaque.
Effective AI data governance requires knowing which sources feed which systems, who has access to what, and whether the content those systems retrieve is accurate and current. Most enterprise knowledge bases — spread across help centers, CRMs, internal wikis, and legacy document repositories — have never been systematically audited for any of these properties.
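That audit can start as something very small: a machine-checkable catalogue. The sketch below encodes the three properties just described, with illustrative field names, and flags any source that cannot answer for its ownership, access list, or verification date.

```python
from datetime import date

# Illustrative catalogue: one entry per knowledge source feeding an AI system.
CATALOGUE = [
    {"source": "zendesk-help-center", "feeds": ["support-bot"],
     "owner": "support-ops", "acl": ["support", "cx"],
     "last_verified": date(2026, 4, 20)},
    {"source": "legacy-sharepoint", "feeds": ["support-bot", "hr-assistant"],
     "owner": None, "acl": [], "last_verified": None},
]

def ungoverned(entries: list[dict]) -> list[str]:
    """A source is ungoverned if ownership, access, or freshness is unknown."""
    return [
        e["source"] for e in entries
        if e["owner"] is None or not e["acl"] or e["last_verified"] is None
    ]

print(ungoverned(CATALOGUE))  # ['legacy-sharepoint']: unauditable by definition
```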
The EU AI Act Omnibus: A Regulatory Clock Now Ticking on Knowledge Governance
For European enterprises — and any organisation with EU operations — a significant regulatory development arrived just days ago. On 7 May 2026, the European Parliament and the Council of the European Union reached a provisional political agreement under the European Commission's "Digital Omnibus" package to amend and streamline aspects of the EU AI Act.
The agreement restructures key deadlines, extending the compliance window for many high-risk AI obligations. That extension should not be read as an invitation to pause AI governance efforts: the AI Act is already in force, organisations are still expected to prepare for compliance now, and the penalty framework is unchanged.
Non-compliance with the prohibitions in Article 5 attracts fines of up to €35 million or 7% of worldwide annual turnover, whichever is higher; for an organisation with €1 billion in turnover, the ceiling is therefore €70 million, not €35 million. Non-compliance with high-risk obligations and with the transparency requirements of Article 50 attracts fines of up to €15 million or 3%.
Article 10 of the EU AI Act specifically mandates data governance practices for high-risk AI systems — including examination for bias, ensuring training and retrieval data is relevant and error-free, and maintaining detailed records for regulatory compliance. RAG systems in regulated industries — healthcare, finance, legal — are squarely in scope.
Critically, not everything was delayed. The provisional agreement reduces the grace period for providers to implement transparency solutions for artificially generated content from 6 months to 3 months, moving the deadline to 2 December 2026. Compliance is not a separate workstream from deployment readiness. It is the same workstream.
The Pre-Deployment Checklist Your AI Vendor Won't Hand You
The arrival of DeployCo and Anthropic's enterprise venture signals a new phase: model access is commoditised, and implementation quality is the differentiator. Both ventures will send engineers on-site to connect AI to your systems. The organisations that get the most from those engagements will be the ones that show up prepared.
Before deploying any RAG-based AI system, three questions must be answered with evidence rather than assumption (a sketch of one automated check follows the list):
1. Are your source documents fresh, complete, and properly extractable? Scanned PDFs, policy documents not updated in 18 months, and product documentation that predates your last major release are not edge cases; they make up the bulk of most enterprise knowledge bases.
2. Are there conflicting or inconsistent records across distributed knowledge bases? Salesforce, ServiceNow, Zendesk, SharePoint, and internal wikis often contain contradictory answers to the same question. Your AI will retrieve and present all of them with equal confidence.
3. Have coverage gaps been mapped against actual query patterns? The topics your customers and employees ask about most often are frequently the least documented.
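Question 2 in particular is mechanically checkable once answers are exported from each system. In the sketch below, the canonical question keys and the flat (system, key, answer) export are simplifying assumptions; real content would need normalisation first. Any key with more than one distinct answer is a conflict your retriever will happily serve.

```python
from collections import defaultdict

# Illustrative export: (system, canonical question key, answer) triples.
RECORDS = [
    ("salesforce", "refund-window", "30 days"),
    ("zendesk",    "refund-window", "14 days"),
    ("sharepoint", "mfa-reset",     "contact IT service desk"),
]

def conflicts(records):
    """Group answers by question key; any key with >1 distinct answer is a conflict."""
    by_key = defaultdict(set)
    for system, key, answer in records:
        by_key[key].add(answer)
    return {k: v for k, v in by_key.items() if len(v) > 1}

print(conflicts(RECORDS))  # {'refund-window': {'30 days', '14 days'}}
```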
Informatica's CDO Insights 2025 survey identifies the top obstacles to AI success: data quality and readiness (43%), lack of technical maturity (43%), and shortage of skills (35%). Winning programmes invert typical spending ratios, earmarking 50–70% of the timeline and budget for data readiness, and organisations that adopt systematic evaluation frameworks before deployment report reductions in post-deployment issues of a comparable magnitude.
The winners won't be the organisations with access to the best models; model access is being commoditised. The winners will be those that have systematically captured institutional knowledge, made it accessible through sophisticated retrieval architectures, and built governance frameworks that enable safe deployment at scale.
The model rarely fails; the knowledge base feeding it does. Upgrading the model while leaving the data untouched produces more confident wrong answers, not fewer.
Nor is this only a European concern: EU AI Act enforcement applies to any organisation deploying AI used by EU residents, and US state-level AI legislation is also expanding rapidly.
And a one-time audit is not enough. It captures the state of your knowledge base on a single day; continuous governance catches regressions as documents change, new content is added, and business rules evolve. That is the only way to keep production AI trustworthy over time.