IT Brief Ireland - Technology news for CIOs & IT decision-makers

Entro launches AI agent governance tool for enterprises

Thu, 19th Mar 2026

Entro Security has launched a governance product designed to help companies track and control how artificial intelligence agents connect to corporate systems, as businesses struggle to understand which tools are in use, what data they can access and which identities sit behind them.

The product, Agentic Governance & Administration, or AGA, is aimed at security and identity teams managing growing use of AI assistants, agent platforms and locally run agents across enterprise environments.

It addresses a problem emerging as organisations adopt AI tools quickly: access often begins with a simple connection made by a developer, employee or business team, but oversight of those links can lag. That leaves security teams trying to determine which applications and systems an AI agent can reach, what permissions it has and whether those permissions are still appropriate.

AGA applies established identity governance principles to AI-related access, including inventory, ownership, least-privilege access, auditability and enforcement. Entro argues that conventional identity governance tools do not fully address AI agents because the acting entity is often not a human user, but a service, local agent or software process using tokens, service accounts, API keys or secrets.

Three Layers

The system builds what Entro describes as an AI agent profile by combining three sets of data: the sources where agents are identified, the enterprise targets they touch and the identities used to access them.

Those sources include endpoint telemetry, agent development platforms, cloud environments where non-human identities are used and MCP servers. Targets are the enterprise applications, assets and systems an agent interacts with. Identities include human and non-human accounts, as well as the secrets used to authenticate access.

By bringing those elements together, Entro aims to give customers a single view of how an AI agent operates across the organisation, rather than treating endpoint activity, cloud behaviour and identity management as separate issues.
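Entro has not published the internal data model, but the three-layer profile described above can be sketched as a simple structure that folds discovery events from endpoints, cloud environments and identity systems into one record per agent. All names here (`AIAgentProfile`, `merge_observation`) are illustrative, not Entro's API:

```python
from dataclasses import dataclass, field

@dataclass
class AIAgentProfile:
    """Hypothetical single view of one AI agent: where it was seen,
    what it touches, and which identities it uses."""
    agent_id: str
    sources: set = field(default_factory=set)     # e.g. EDR telemetry, agent platforms, MCP servers
    targets: set = field(default_factory=set)     # enterprise apps, assets and systems it reaches
    identities: set = field(default_factory=set)  # human/non-human accounts, secrets, API keys

def merge_observation(profile, source=None, target=None, identity=None):
    """Fold one discovery event into the agent's cross-environment profile."""
    if source:
        profile.sources.add(source)
    if target:
        profile.targets.add(target)
    if identity:
        profile.identities.add(identity)
    return profile
```

The point of the shape is that endpoint, cloud and identity findings land in the same record, rather than living in three separate consoles.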

This matters because many AI deployments rely on non-human identities, including service accounts, secrets and machine credentials rather than employee logins. In these cases, risk may depend less on a single user session and more on broad OAuth permissions, integrations, data syncing and automated workflows.
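A least-privilege review of that kind of non-human access often reduces to comparing the scopes a credential was granted against an approved baseline. A minimal sketch, with illustrative scope names (not drawn from Entro's product):

```python
def excess_scopes(granted, approved):
    """Return OAuth scopes granted to a non-human identity
    beyond the approved least-privilege baseline."""
    return sorted(set(granted) - set(approved))

# A hypothetical integration that was granted broad scopes
granted = ["Files.ReadWrite.All", "Mail.Read", "Sites.Read.All"]
approved = ["Sites.Read.All"]
# excess_scopes(granted, approved) -> ["Files.ReadWrite.All", "Mail.Read"]
```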

Shadow AI

One part of the product is designed to uncover what Entro calls shadow AI. That includes not only unsanctioned use of AI software-as-a-service products and large language model tools, but also locally running agents, workstation-based AI clients and agents created inside cloud and agent-building platforms.

AGA integrates with endpoint detection and response tools to identify AI clients and local runtimes on employee devices. It also connects with agent foundries including Amazon Bedrock and Copilot Studio, as well as cloud service providers, to find agents and the non-human identities they depend on, such as OAuth applications, IAM roles and service accounts.

The second part focuses on monitoring and enforcement. Entro says the product gives customers visibility into MCP activity, the tools agents invoke and the services they connect to while running. It also provides policy controls for approved MCP targets and AI client behaviour, along with audit trails showing allowed and blocked activity and controls intended to reduce exposure of sensitive data and secrets.
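The enforcement pattern described above, an allow-list of approved MCP targets plus an audit trail of allowed and blocked activity, can be sketched as follows. This is a generic policy-check sketch under those assumptions, not Entro's implementation:

```python
def evaluate_mcp_call(target, tool, approved_targets, audit_log):
    """Permit the agent's call only if the MCP target is approved;
    record the decision either way for later review."""
    allowed = target in approved_targets
    audit_log.append({
        "target": target,
        "tool": tool,
        "decision": "allowed" if allowed else "blocked",
    })
    return allowed

# Usage: every tool invocation passes through the policy check
audit_log = []
evaluate_mcp_call("approved-mcp-server", "search_docs", {"approved-mcp-server"}, audit_log)
evaluate_mcp_call("unknown-mcp-server", "run_query", {"approved-mcp-server"}, audit_log)
```

The audit trail, rather than the block itself, is often what security teams need: it lets them review after the fact which agents attempted what.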

That approach reflects a wider shift in enterprise security, where discovery alone is no longer enough. Security teams increasingly want to know not just that an AI tool exists, but whether it is operating within policy, whether its access can be limited and whether activity can be reviewed after the fact.

Itzik Alvas, co-founder and CEO of Entro Security, said companies often find themselves trying to answer basic questions only after AI adoption has spread across departments.

"Enterprise AI adoption rarely starts with a strategy deck. It starts with a connection," said Itzik Alvas, Co-Founder and CEO of Entro Security. "A developer connects a tool to an LLM, a team installs an AI app in SaaS, or someone authenticates an agent against SharePoint, GitHub, Salesforce, or internal APIs. It works, spreads fast, and then security teams get questions they can't answer fast enough.

"Who connected what, to which systems, with what permissions, and using which identities? Our AGA helps teams regain clarity and control as AI access becomes the default."

The launch highlights how identity management vendors are adapting to the spread of autonomous and semi-autonomous AI systems inside large organisations. Traditional identity governance and administration products were designed mainly around human users and established application access patterns. AI agents, by contrast, can be deployed quickly, run continuously and change their behaviour or reach as teams add integrations and automate tasks.

For security teams, that creates a governance challenge spanning endpoint security, cloud visibility and identity management. Entro's new product is intended to bring those strands together as organisations try to put guardrails around AI use without blocking adoption altogether.

AGA is now available as part of the Entro platform. Entro positions it as a way for security and identity teams to map AI connections, review permissions and enforce policy as AI use spreads across enterprise systems.