ExtraHop launches AI network visibility & governance tool
ExtraHop has launched a new offering to help enterprises monitor and govern their AI and agentic infrastructure, giving organisations greater visibility into AI activity across their networks.
The launch reflects growing concern among large businesses that AI agents and related tools are creating security gaps that established controls do not fully address. As companies connect large language models, application programming interfaces and automation tools to core systems, security teams are under pressure to identify what is running, what data is moving and whether those actions align with internal policies.
Network View
ExtraHop, which specialises in network detection and response, is positioning the network as the main layer for observing AI systems. Its approach focuses on maintaining an inventory of AI-related assets, monitoring traffic in real time and identifying suspicious behaviour linked to models, agents and tool connections.
Enterprises need a continuous record of AI assets across cloud and on-premises environments, according to ExtraHop. That includes large language model usage, Model Context Protocol servers, application programming interfaces and the communication patterns between agents and other parts of the network.
By mapping those elements, security teams can compare approved tools with those that appear without authorisation. This matters because unsanctioned AI services, often described as shadow AI, can expose sensitive data or create unmonitored paths into internal systems.
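In principle, that comparison is an inventory diff: take the AI service endpoints observed in network traffic and subtract the approved list. A minimal sketch of the idea (not ExtraHop's implementation; the hostnames and traffic records below are hypothetical):

```python
# Illustrative sketch: surface "shadow AI" by diffing the AI service hosts
# observed in network metadata against an approved allowlist.
# All hostnames and flow records here are hypothetical examples.

APPROVED_AI_HOSTS = {
    "api.openai.com",           # sanctioned LLM provider
    "internal-llm.corp.local",  # in-house model gateway
}

# Hypothetical (source IP, destination host) pairs extracted from traffic
observed_flows = [
    ("10.0.4.12", "api.openai.com"),
    ("10.0.7.33", "api.unvetted-ai.example"),
    ("10.0.7.33", "internal-llm.corp.local"),
]

def find_shadow_ai(flows, approved):
    """Return (source, host) pairs that talk to unapproved AI services."""
    return [(src, host) for src, host in flows if host not in approved]

for src, host in find_shadow_ai(observed_flows, APPROVED_AI_HOSTS):
    print(f"unsanctioned AI traffic: {src} -> {host}")
```

The hard part in practice is building the observed-flows list in the first place, which is where network-level telemetry comes in.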
Threat Focus
ExtraHop says its system analyses AI traffic as it moves across the network to detect unusual behaviour. Examples include prompt injection attempts, suspicious data flows and unexpected agent actions.
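To make the prompt-injection case concrete, here is a deliberately naive, pattern-based check of the kind a detection pipeline might start from. Real systems rely on far richer signals; the patterns and payloads below are hypothetical illustrations, not ExtraHop's detection logic:

```python
# Illustrative sketch only: flag request payloads containing common
# prompt-injection phrases. Patterns and example payloads are hypothetical.
import re

INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
    re.compile(r"reveal (the |your )?system prompt", re.IGNORECASE),
]

def flag_prompt_injection(payload: str) -> bool:
    """Return True if the payload matches any known injection pattern."""
    return any(p.search(payload) for p in INJECTION_PATTERNS)

print(flag_prompt_injection("Please ignore previous instructions and ..."))  # True
print(flag_prompt_injection("Summarise this quarterly report."))             # False
```

Simple keyword matching like this produces false positives and misses paraphrased attacks, which is why vendors pair it with behavioural signals such as unexpected destinations or unusual data volumes.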
It argues that network-level analysis can also help investigators determine which user or service initiated an AI request, which destinations received data and how credentials and permissions moved through a multi-step workflow. That information is becoming increasingly important for businesses trying to distinguish between legitimate automated actions and malicious activity that mimics normal AI operations.
Traditional tools often struggle to see those interactions clearly, the company argues, because AI systems may span cloud services, internal applications and external interfaces. In that environment, the security challenge is not only detecting attacks but also establishing a forensic record of what happened after an incident.
Governance Pressure
Governance is another part of the new approach. Organisations are facing compliance and policy challenges as AI use spreads quickly across departments and employees adopt tools outside approved channels, ExtraHop said.
Its platform is designed to help businesses detect unapproved AI gateways, flag data moving through unvetted interactions and maintain audit trails of agent activity. For regulated industries in particular, the ability to show how AI systems accessed data and which controls were applied is becoming a central requirement.
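An audit trail of agent activity boils down to append-only records answering who acted, what they did, where data went and when. A minimal sketch of such a record (the field names are hypothetical, not a product schema):

```python
# Illustrative sketch: a minimal JSON audit record for agent activity,
# capturing who, what, where and when. Field names are hypothetical.
import json
from datetime import datetime, timezone

def audit_record(agent_id, action, destination, data_bytes):
    """Serialise one agent action as a JSON line for an append-only log."""
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,
        "action": action,
        "destination": destination,
        "data_bytes": data_bytes,
    })

line = audit_record("invoice-agent-7", "api_call", "erp.corp.local", 2048)
print(line)
```

For regulated industries, the value of records like these is that they can be replayed after an incident to show exactly which systems an agent touched.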
The issue is becoming more urgent as businesses move beyond trials and begin integrating AI agents into routine operations. Once agents are granted access to internal tools and sensitive information, errors or misuse can have broader operational and legal consequences than a standalone chatbot deployment.
Kanaiya Vasani, chief product officer at ExtraHop, described this as a broader market shift. "AI is the ultimate competitive advantage, yet it quickly becomes a disadvantage if deployed without transparency and control," he said. "To scale safely, enterprises must establish definitive oversight of every agent and autonomous workflow on their network. By harnessing deep network insights, we are giving leaders the real-time visibility and context they need to move fast and innovate boldly, ensuring their AI remains a powerful engine for growth rather than an unmanaged risk."
Industry analysts are also pointing to a trust problem as companies weigh the benefits of automation against the risk of losing control over how systems behave and where data goes.
"The rapid adoption of AI is creating a trust gap in the enterprise; organizations want the agility and scale of autonomous agents but fear the loss of control," said Chris Kissel, research vice president for security and trust at IDC. "ExtraHop is bridging this gap by treating visibility into AI traffic as a foundational security requirement. By providing a clear window into these agents, what they're doing, and how they interact with one another, ExtraHop is enabling businesses to move from cautious experimentation to confident, large-scale AI deployment throughout the modern enterprise."