Only 14% of firms have clear AI strategy, study finds
Altimetrik and HFS Research have published research showing that only 14% of Global 2000 companies have a documented AI strategy with clear goals.
The findings are based on a survey of more than 500 senior executives across Global 2000 organisations in five industries.
The study highlights a gap between the adoption of artificial intelligence tools and the governance structures meant to oversee them. Many large businesses are using AI in decisions on hiring, capital allocation, compliance and operations without clearly defining who is responsible for the outcomes.
That issue appears particularly stark given the scale of adoption. The Global 2000 includes 68 UK-based companies, and the report suggests most of the index has moved ahead with deployment before establishing formal accountability.
Strategy Gap
According to the research, just one in seven Global 2000 companies has a clear AI business strategy. Only 13% were classified as highly mature, a term the report uses for organisations that treat AI as a governed, enterprise-wide function rather than a series of team-led projects.
This more mature group was more than twice as likely to report faster, more accurate decisions, along with measurable effects on customers and revenue. The rest continue to deal with long execution cycles, unclear ownership and governance models built before AI systems became part of day-to-day business operations.
Those findings add to a wider boardroom debate over whether companies have moved too quickly from experimentation to operational use. In many organisations, AI systems are no longer confined to narrow pilots but now influence decisions that affect staffing, spending and regulatory responses.
Raj Sundaresan, chief executive officer of Altimetrik, said the problem was not only the spread of AI but the lack of clear responsibility around it.
“AI is accelerating decisions across the enterprise, but, done well, it requires deep engineering discipline. Too many organisations are scaling AI without redesigning accountability, which risks scaling bad decisions faster. Putting humans at the helm is about ensuring every AI-driven decision is governed with the same engineering rigour, ownership, and scrutiny we expect from any critical business system. Without that accountability, you're scaling risk instead of intelligence,” Sundaresan said.
Workforce Concerns
The survey also points to employee concerns about how AI is being introduced. More than half of those surveyed (52%) said fear of replacement was their biggest barrier to engaging with AI.
Training levels also appear limited. Nearly 80% of employees receive fewer than 10 hours of training each year, even as businesses ask staff to work alongside systems that increasingly shape recommendations and decisions.
One of the more striking findings relates to judgement and oversight. The ability to challenge AI outputs ranked last among the skills executives said they value, despite the report arguing that it is central to safe and effective oversight.
This creates a mismatch between deployment and workforce readiness. If employees are not trained to question outputs, and if managers have not defined where human responsibility starts and ends, companies risk creating cultures in which staff follow algorithmic recommendations without testing them.
The research also found that 75% of organisations said their teams defer to external partners because they lack the confidence to push back. This suggests many companies remain dependent on outside advisers and suppliers when assessing how AI systems should be used and governed.
Phil Fersht, founder and chief analyst of HFS Research, said the issue had become as much a workforce problem as a technology one.
“Enterprises are scaling AI faster than accountability, and that gap is now a workforce crisis. When leaders don't define what AI decides and what humans own, employees stop questioning it. That's not augmentation, it's abdication. Fix it now, or you're not building an intelligent organisation. You're scaling unmanaged risk,” Fersht said.
Governance Pressure
The findings come as large companies face growing pressure to show that AI systems are subject to the same controls as other critical business processes. While many boards now discuss AI regularly, the research suggests formal structures for ownership, review and escalation are still missing in most organisations.
That matters because AI tools are increasingly embedded in routine workflows rather than used as stand-alone software. Once systems influence decisions on recruitment, spending or compliance, the absence of documented rules can raise operational and management risks.
The report argues that the divide is no longer between companies that use AI and those that do not. It is now between organisations that have built internal discipline around AI decision-making and those still relying on fragmented experiments, with unclear lines of control and limited staff confidence.