BeyondTrust warns of 467% rise in enterprise AI agents
BeyondTrust has published research showing a 466.7% year-on-year rise in the number of AI agents operating inside enterprise environments. The findings point to what its researchers describe as a shadow AI workforce inside companies.
According to BeyondTrust's Phantom Labs team, the growth reflects a rapid increase in AI-driven identities running across cloud services and business applications without central oversight or a clear view of their access rights.
Its analysis found that some organisations are running more than 1,000 AI agents, many of them unknown to security teams. These systems are no longer limited to basic chatbot tasks and are increasingly acting with a degree of autonomy across internal tools and application programming interfaces.
That shift matters because AI agents are being introduced through a widening range of enterprise software. The study cited tools including Microsoft Copilot and Azure AI Foundry, AI functions built into Salesforce and ServiceNow, coding assistants, and AI features in workplace products such as Jira and Confluence.
Identity risks
One of the main concerns is that these agents can inherit access from users or service roles, leaving them with broad permissions that are not always visible in conventional reviews. In practice, an AI identity may appear properly governed in a static report while still being able to elevate privileges in unexpected ways once in use.
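The inheritance problem described above can be sketched in a few lines: an agent's effective access is the union of every role it inherits, so a review that looks at each role in isolation misses the combined footprint. The role names and permission strings below are hypothetical, purely for illustration; real reviews would pull this data from a cloud IAM or identity platform.

```python
# Illustrative model of permission inheritance for an AI agent.
# All role names and permission strings are hypothetical examples.

def effective_permissions(agent_roles, role_grants):
    """Union of grants across all roles the agent inherits."""
    perms = set()
    for role in agent_roles:
        perms |= role_grants.get(role, set())
    return perms

role_grants = {
    "sales-user": {"crm:read"},                     # inherited from a human user
    "svc-deploy": {"cloud:deploy", "secrets:read"}, # inherited from a service role
}

# An agent inheriting both roles combines CRM data access with the
# ability to deploy infrastructure and read secrets -- a footprint no
# single-role review would surface.
agent_access = effective_permissions(["sales-user", "svc-deploy"], role_grants)
print(sorted(agent_access))
```

The point of the sketch is that neither role looks alarming on its own; the risk the researchers describe emerges only when inherited grants are combined on one autonomous identity.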
The researchers also found that machine and AI identities now outnumber human identities by a wide margin in many environments, with the gap continuing to widen. They highlighted long-lived API keys and static credentials used by AI agents without rotation policies or lifecycle controls as another source of risk.
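The rotation gap the researchers flag is straightforward to check once an inventory of credentials exists. The sketch below assumes a simple record format (identifier, status, creation date) of the kind cloud IAM APIs typically return; the field names, agent identifiers and 90-day threshold are illustrative, not taken from the report.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical rotation-policy check: flag active static credentials
# older than a maximum age. Record fields and the threshold are
# illustrative assumptions, not from BeyondTrust's report.
MAX_KEY_AGE = timedelta(days=90)

def stale_credentials(keys, now=None, max_age=MAX_KEY_AGE):
    """Return active credential records that exceed the rotation window."""
    now = now or datetime.now(timezone.utc)
    return [
        k for k in keys
        if k["status"] == "Active" and now - k["created"] > max_age
    ]

if __name__ == "__main__":
    inventory = [
        {"id": "agent-copilot-01", "status": "Active",
         "created": datetime.now(timezone.utc) - timedelta(days=400)},
        {"id": "agent-crm-bot", "status": "Active",
         "created": datetime.now(timezone.utc) - timedelta(days=10)},
    ]
    for key in stale_credentials(inventory):
        print(f"rotate: {key['id']}")
```

A lifecycle control of this shape only works if the AI-agent credentials are in the inventory in the first place, which is precisely what the report says is often missing.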
Unlike traditional service accounts, AI agents can combine privileged access with autonomous action across systems. The report argues that this mix creates attack paths many existing security tools were not built to detect.
Fletcher Davis, Director of Research at BeyondTrust Phantom Labs, said the pace of deployment has left many organisations with limited awareness of how much access these systems hold. "Organisations are introducing thousands of new machine identities through AI agents, often without realizing the level of access those agents inherit. In many environments we studied, AI agents were operating with privileges comparable to human administrators. As organizations move from chatbot use cases to more autonomous agentic AI, the identity attack surface will only expand," Davis said.
Broader pattern
The findings sit within a broader debate about AI adoption inside large companies, where attention has often focused on model safety, data leakage and compliance rather than identity management. Security specialists have increasingly warned that machine identities, service accounts and automated tools can create blind spots when governance processes are designed mainly around human users.
BeyondTrust's analysis suggests AI agents are adding to that problem by operating across multiple systems at once. In a typical setup, an AI tool may connect to productivity software, customer relationship management platforms, cloud infrastructure and internal databases, giving a single digital identity a wide operational footprint.
The report says low-code development tools and embedded AI functions in existing software are helping these agents spread outside formal IT approval channels. That can make it difficult for central security teams to maintain an accurate inventory of AI-related identities or understand how privileges are linked across environments.
Earlier research
Phantom Labs linked the latest findings to its previous work on AI-related access risks. Earlier research by the team examined a breach scenario involving Microsoft Copilot Studio in which AI agents exposed secrets and granted unauthorised access to cloud infrastructure despite existing security controls.
Separate work on AWS Bedrock showed how long-term API keys could automatically create identity and access management (IAM) users with broad permissions. The team has used those examples to argue that AI platforms can introduce new privilege chains that are hard to spot through standard configuration checks.
The latest analysis was surfaced through BeyondTrust's Identity Security Insights on its Pathfinder platform, which is designed to map hidden identity relationships and identify attack paths tied to privilege, including those involving AI agents.
While the study focuses on risks, it also reflects how quickly AI tools have become embedded in routine enterprise operations. As companies adopt AI features through mainstream software suppliers, the number of machine identities appears to be growing faster than many governance frameworks can track.
For security teams, the issue is not simply the number of AI agents but the uncertainty over what they can reach once deployed. Some of the agents identified in the research were described as sitting outside established governance structures while still interacting with sensitive systems and data.
The report's central finding is that AI adoption is creating a new layer of identity exposure inside enterprises, one that may not be visible through traditional user-focused controls. In the environments reviewed by Phantom Labs, AI agents were in some cases operating with privileges comparable to human administrators.