Private equity warned over fragile AI foundations
Software Improvement Group has warned that private equity firms may overestimate the strength of their AI plans as software quality, security and governance lag behind investment narratives.
The Amsterdam-based software analysis and advisory firm has published research on AI adoption in private equity, finding a widening gap between pilot activity and repeatable deployment across portfolio companies.
AI is now a common element in deal theses and portfolio plans, according to the report. Yet many businesses have operational foundations that are too opaque or brittle for AI to scale consistently.
Roughly two-thirds of general partners report AI pilots across their portfolios, while only about 40% see AI embedded across multiple business processes. SIG described that gap as a sign that expectations are outpacing delivery.
SIG linked the issue to a tougher buyout market, with higher pricing pressure on assets and closer scrutiny at exit. That environment can increase the perceived value of technology differentiation claims, while raising the downside when those claims do not withstand diligence.
Moats Under Pressure
A central theme in the research is the changing nature of defensibility for software-led businesses. As generative AI accelerates the creation of standard features, SIG argued that "generic functionality" is becoming easier to replicate.
Defensibility, it said, increasingly depends on less visible assets such as proprietary data, domain knowledge and software architecture. Within organisations, those assets are often undocumented and poorly governed.
That shift matters for private equity because the investment model relies on predictable improvement during the hold period and credible positioning at exit. If differentiation rests on ungoverned or poorly understood assets, buyers and lenders may apply discounts or demand remediation plans.
Low AI Maturity
SIG's analysis suggests production-scale AI remains limited despite widespread discussion. Of all production systems it analysed in 2025, around 1.5% qualified as AI systems.
SIG presented the figure as evidence that many portfolios remain early in their AI adoption, even as the focus shifts from experimentation to systems running in production.
Quality and maintainability also emerged as core concerns. SIG found that 72% of AI systems scored below its recommended build-quality threshold, linking low scores to higher risks in maintainability, security and compliance.
Code And Security
The research also examined AI-assisted software development and found that outcomes vary widely: reported productivity effects ranged from a 19% slowdown to a 26% speed-up, depending on context and controls.
In experiments cited by SIG, AI-generated code produced roughly twice as many security-risk violations as comparable human-written projects. SIG also warned that AI introduces new "attack surfaces" linked to data, models, prompts and external dependencies.
That dynamic adds exposure on top of existing cyber risk rather than replacing it. For private equity owners, SIG said, AI adoption tends to expand the scope of technical diligence and ongoing oversight rather than reduce it.
"AI is changing the way we do business fundamentally, and reliance on technology increases to previously unseen levels," said Luc Brandts, CEO of Software Improvement Group. "Yet, no clear oversight exists. So, it's not a new problem, but AI is increasing tenfold. Can we trust what is being built, do we invest in the right places, how ready are we for AI? Questions any investor has, and that need a clear answer."
Governance And Regulation
SIG highlighted governance as a weak point as AI rules diverge across jurisdictions. It pointed to enforcement action in the European Union against misleading AI claims and to emerging AI-specific obligations globally.
SIG said AI narratives now face scrutiny similar to financial disclosures, particularly at exits and refinancings. That raises the stakes for private equity firms that promote AI-led value creation when marketing assets to buyers or the capital markets.
International standards can be a practical tool for investors operating across borders, SIG argued. It said standards can provide a shared approach for demonstrating control over how AI systems are built, governed, monitored and corrected.
Across the private equity lifecycle, SIG said AI maturity is increasingly a determinant of value. Weak visibility at entry can obscure remediation costs and inflate perceived moats.
During the hold period, poor governance can slow delivery while increasing exposure to security and compliance risks. At exit, SIG said, buyers are increasingly testing whether AI systems are controllable, traceable, secure and defensible.
SIG added that scrutiny can extend to investment committees and limited partners, with expectations that AI claims and technical moats are backed by independent evidence. Brandts said oversight has not kept pace with adoption, and investors are seeking clearer answers on readiness and investment priorities.