IT Brief Ireland - Technology news for CIOs & IT decision-makers

Miggo expands runtime defence for AI agents & tools

Wed, 25th Mar 2026

Miggo has expanded its Runtime Defence Platform to cover AI and agent-based software, adding tools to observe and respond to runtime activity across AI components and toolchains.

The release includes an AI Bill of Materials, runtime guardrails, and Agentic Detection & Response features intended to give security teams visibility into AI agents, Model Context Protocol (MCP) toolchains, and shadow AI in production environments.

Miggo is framing the update around the view that AI security problems emerge during execution rather than in source code alone. It argues that AI applications and agents choose models, call tools, and access data dynamically at runtime, making static checks and conventional controls less effective at identifying misuse or compromise.

The platform update targets organisations deploying AI assistants and agent frameworks, including environments built with tools such as LangChain and MCP-connected systems. It uses execution data gathered during live operation to track which agents exist, how they behave, what they can access, and how that behaviour changes over time.

The move follows Miggo's recent security research into indirect prompt injection affecting Google Gemini integrations. That work examined how a malicious calendar invitation could alter downstream AI behaviour through trusted context, highlighting how attacks can emerge through connected workflows rather than direct interaction with a model.

At the centre of the product is what Miggo calls AI-BOM discovery and execution visibility. This automatically identifies AI components across applications, MCP toolchains, and agent runtimes, and maps reasoning and execution paths as models, tools, and data sources are invoked.

Another part of the release focuses on behavioural drift detection. The platform establishes a baseline for agent behaviour, then flags changes over time with supporting security context to help teams determine whether an agent is acting outside expected patterns.
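In general terms, baseline-then-flag drift detection of this kind can be sketched as follows. This is an illustrative outline only; the function names and data model are assumptions for the example, not Miggo's implementation.

```python
from collections import Counter

# Hypothetical sketch of behavioural drift detection: build a baseline
# of tool usage during a learning window, then flag anything new.

def build_baseline(tool_calls: list[str]) -> Counter:
    """Count tool invocations observed while the baseline is established."""
    return Counter(tool_calls)

def detect_drift(baseline: Counter, window: list[str]) -> list[str]:
    """Flag any tool the agent never used during baselining."""
    return sorted(set(window) - set(baseline))

baseline = build_baseline(["search", "search", "calendar.read"])
alerts = detect_drift(baseline, ["search", "shell.exec"])
# "shell.exec" was never seen at baseline, so it is flagged.
```

A production system would also weigh frequency shifts and sequence changes, but the core idea is the same: compare live behaviour against a recorded norm.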

Runtime guardrails are designed to let security teams approve or reject behavioural changes by enforcing rules around which models, tools, and permissions are allowed. The platform can also trace tool calls, model and artefact loading, system actions, file access, and network activity to identify compromise paths involving AI agents.
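An allowlist-style guardrail of the kind described can be sketched as below. The policy structure and field names are assumptions made for the illustration, not Miggo's API.

```python
# Illustrative runtime guardrail: an allowlist policy over models,
# tools, and permissions. All names here are placeholder values.

ALLOWED = {
    "models": {"model-a", "model-b"},
    "tools": {"search", "calendar.read"},
    "permissions": {"net:outbound:api.internal"},
}

def enforce(action: dict) -> bool:
    """Approve a runtime action only if it falls inside the allowed sets."""
    kind, name = action["kind"], action["name"]
    return name in ALLOWED.get(kind, set())

enforce({"kind": "tools", "name": "search"})      # approved
enforce({"kind": "tools", "name": "shell.exec"})  # rejected
```

The appeal of enforcing at runtime is that the check applies to what the agent actually attempts, not just what its configuration says it should do.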

For organisations using MCP-based toolchains, the system adds monitoring tailored to protocol-mediated tool use. It is designed to detect abnormal access, risky chaining patterns, and execution paths that could have a large operational impact.
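One risky chaining pattern, stated generically, is a sensitive read followed later in the same execution path by an outbound send. A minimal sketch of such a check, with assumed tool names, might look like this:

```python
# Illustrative detector for one risky chaining pattern in an MCP-style
# tool-call trace: sensitive data access followed by external egress.
# The tool names are placeholders, not a real protocol vocabulary.

SENSITIVE = {"secrets.read", "db.export"}
EGRESS = {"http.post", "email.send"}

def risky_chain(trace: list[str]) -> bool:
    """Return True if any egress call happens after a sensitive read."""
    seen_sensitive = False
    for call in trace:
        if call in SENSITIVE:
            seen_sensitive = True
        elif call in EGRESS and seen_sensitive:
            return True
    return False
```

Ordering matters here: egress before any sensitive access is not flagged, which is what distinguishes chaining analysis from a simple per-tool blocklist.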

The update also extends Miggo's WAF Copilot product into AI-related application protection. The system correlates AI functionality with runtime execution context to generate detection and policy rules intended to address missing guardrails and unintended exposure in live systems.

Risk scoring and incident analysis are also part of the release. The platform correlates events into a timeline and ranks risk according to factors such as blast radius, data access, and internet exposure, with the aim of helping security teams prioritise triage and response.
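A weighted scoring scheme over factors like those named can be sketched as follows; the weights and factor names are assumptions for the example, not Miggo's model.

```python
# Illustrative risk scoring: combine normalised factors (0..1) with
# fixed weights to produce a single value for triage ranking.

WEIGHTS = {"blast_radius": 0.5, "data_access": 0.3, "internet_exposure": 0.2}

def risk_score(incident: dict) -> float:
    """Weighted sum of the incident's risk factors."""
    return sum(WEIGHTS[k] * incident.get(k, 0.0) for k in WEIGHTS)

incidents = [
    {"id": "a", "blast_radius": 0.9, "data_access": 0.2, "internet_exposure": 1.0},
    {"id": "b", "blast_radius": 0.1, "data_access": 0.9, "internet_exposure": 0.0},
]
ranked = sorted(incidents, key=risk_score, reverse=True)
# Incident "a" ranks first: wide blast radius plus internet exposure
# outweighs "b"'s higher data access under these weights.
```

The ranking, rather than the absolute score, is what drives triage: analysts work down the list from the highest-scoring incident.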

Miggo also said the product can provide runtime evidence to support internal AI policy requirements and emerging regulatory frameworks, including the EU AI Act. The claim reflects a broader push in the security market to link AI governance requirements with operational evidence from production systems rather than relying solely on design-time documentation.

"AI risk materializes at runtime," said Daniel Shechter, CEO of Miggo Security.

"For teams using popular agent frameworks, like LangChain, and MCP-connected toolchains, this architecture makes runtime execution the primary attack surface.

"I'm proud of the technology we've built at Miggo, which has always been centered around deep context - and by extending our patented DeepTracing capabilities, we're now bringing robust AI and agentic defense directly into modern environments," Shechter said.