IT Brief Ireland - Technology news for CIOs & IT decision-makers

Manifest flags AI readiness gap between execs & AppSec

Wed, 4th Mar 2026

Manifest has published research pointing to a gap between executive confidence in AI readiness and the view from application security teams, with implications for software and AI supply chain risk management.

The report, Beyond the Black Box: How AI Is Forcing a Rethink of the Software Supply Chain, finds that 80% of executives believe their organisations are prepared for AI-related supply chain threats, while only 40% of application security practitioners agree. It describes this as a "readiness gap" that could increase exposure as AI adoption spreads across products, datasets, and third-party services.

The research says organisations are integrating AI systems without consistent inventory and policy enforcement, raising risks tied to licensing, provenance, and third-party dependencies. It also highlights limited visibility into what is running in production environments and within vendor software.

SBOM usage

Software bills of materials (SBOMs) feature heavily in the findings. The report says 60% of organisations generate SBOMs, but more than half of those do not consume or manage them in practice. That suggests a gap between producing compliance artefacts and using them operationally.
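What "consuming" an SBOM operationally might look like can be sketched in a few lines: reading a component list and enforcing a policy against it, rather than filing the document away. The simplified CycloneDX-style structure, the component names, and the licence allowlist below are illustrative assumptions, not the full schema or any organisation's real policy.

```python
# Minimal sketch of consuming an SBOM rather than just generating it:
# check each component in a simplified CycloneDX-style document against
# a licence allowlist. Structure and data are illustrative assumptions.

ALLOWED_LICENSES = {"MIT", "Apache-2.0", "BSD-3-Clause"}

sbom = {
    "bomFormat": "CycloneDX",
    "components": [
        {"name": "requests", "version": "2.31.0", "license": "Apache-2.0"},
        {"name": "left-pad", "version": "1.3.0", "license": "WTFPL"},
    ],
}

def flag_license_violations(doc, allowed):
    """Return components whose licence is not on the allowlist."""
    return [
        c for c in doc.get("components", [])
        if c.get("license") not in allowed
    ]

for c in flag_license_violations(sbom, ALLOWED_LICENSES):
    print(f"policy violation: {c['name']} {c['version']} ({c['license']})")
```

The same pattern extends to provenance checks or known-vulnerability lookups; the point is that the artefact feeds an automated decision instead of sitting unread.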

Larger enterprises report higher SBOM adoption (59%) than small organisations (nearly 32%). The report links that difference to regulatory pressure, which has pushed more companies to document software components and dependencies.

SBOMs are positioned as one part of a broader transparency picture. The report groups SBOMs with provenance records and signed binaries as examples of "verifiable transparency data" that vendors can provide to customers.

Shadow AI

The research also highlights "shadow AI": AI tools and services deployed without formal approval, oversight, or consistent controls. It finds that 63% of respondents acknowledge shadow AI within their organisations.

Governance appears uneven. The report says 42.4% of teams handle AI separately from standard software governance rather than integrating AI components into existing review and risk management processes. That split can make it harder for security teams to maintain a single view of software components and AI-related dependencies.

The report describes models, datasets, and third-party AI services entering organisations through multiple routes. It also notes that ownership is often fragmented, which can slow decisions and complicate accountability when incidents occur.

Tooling concerns

Legacy approaches to software composition analysis (SCA) also come under scrutiny. The report says 56% of participants believe SCA tools are noisy and delay development teams, and links those perceptions to "cynicism" about whether the tools reduce software-related risk in practice.

SCA has become a common control in modern development pipelines, scanning codebases and dependencies for known vulnerabilities and policy issues. The research suggests that operational burden and false positives can reduce trust and limit adoption across development teams.
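The triage burden described above can be pictured as a filtering step: suppressing findings below a severity threshold before they interrupt developers. The findings list, field names, and severity policy here are assumptions for illustration, not the output format of any particular SCA tool.

```python
# Illustrative sketch: reduce SCA noise by filtering raw findings down to
# those worth escalating to a development team. Data and policy are
# assumptions for demonstration, not a real scanner's output.

SEVERITY_RANK = {"low": 0, "medium": 1, "high": 2, "critical": 3}

findings = [
    {"id": "CVE-2024-0001", "package": "libfoo", "severity": "low"},
    {"id": "CVE-2024-0002", "package": "libbar", "severity": "critical"},
    {"id": "CVE-2024-0003", "package": "libbaz", "severity": "medium"},
]

def triage(raw, minimum="high"):
    """Keep only findings at or above the minimum severity."""
    floor = SEVERITY_RANK[minimum]
    return [f for f in raw if SEVERITY_RANK[f["severity"]] >= floor]

actionable = triage(findings)
print(f"{len(actionable)} of {len(findings)} findings escalated")
```

A severity cut-off is of course a blunt instrument; the report's point is that without some such operational filter, the raw signal volume erodes trust in the tooling itself.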

Rather than pointing to a shortage of security tooling, the report argues the more pressing issue is operational alignment. It highlights disconnected workflows and the lack of a shared system of record as factors that limit an organisation's ability to turn security signals into measurable risk reduction.

Transparency effect

The report links vendor-provided transparency data to operational efficiency, saying organisations that receive verifiable transparency data see "huge efficiencies," including quicker adoption of new technology and faster resolution of security issues.

It finds that 64% of organisations with access to such data report quicker implementation of new technology, while 61.6% report faster resolution of security issues. The report contrasts this with organisations that lack vendor transparency, which it says face a "transparency tax" in the form of extra time and cost spent investigating opaque software.

The report also connects limited visibility to broader organisational pressures, including audit readiness, incident response coordination, and vendor risk oversight. It positions AI systems as an amplifier of these challenges because they introduce new types of components and dependency chains.

Daniel Bardenstein, Chief Executive at Manifest, said: "This report surfaces a hard truth. Executive confidence in AI readiness does not match what AppSec teams are dealing with day to day. Leaders believe governance is in place, but practitioners are seeing unmanaged AI usage, unclear ownership, and blind spots in what is actually running across products and vendors. AI is scaling faster than enterprise visibility and accountability. To close the gap, organizations need operational control, a unified way to inventory AI components, understand how they enter the environment, and enforce consistent decisions across teams. Without that, the disconnect between strategy and execution will continue to widen."