Tue, 24th Mar 2026

OpenText has published research with the Ponemon Institute indicating that many organisations are deploying generative AI (GenAI) without matching security and governance measures. The survey found that 52% of enterprises have fully or partially deployed GenAI.

The results highlight a gap between adoption and oversight as companies expand their use of AI in cybersecurity and other operations. The study surveyed 1,878 IT and IT security practitioners across North America, Asia-Pacific, Europe, the Middle East, Africa and Latin America, spanning sectors including financial services, healthcare, technology, energy and manufacturing.

Governance Gap

Only one in five enterprises has reached what the study describes as AI maturity in cybersecurity, meaning AI is fully deployed across security activities and its risks have been assessed. Nearly eight in ten organisations (79%) have not reached that stage.

Policy adoption also remains limited. Just 41% of organisations have AI-specific data privacy policies in place, while 43% have adopted a risk-based governance approach covering issues such as bias, security threats and ethical concerns.

The report indicates that the pace of implementation is outstripping internal controls. Nearly six in ten respondents (59%) said AI makes it harder to comply with privacy and security regulations, yet most organisations have not introduced dedicated privacy rules for AI systems.

Operational concerns were also prominent. Fifty-eight per cent of respondents said prompt and input risks, including those that lead to misleading or harmful responses, were very or extremely difficult to minimise. More than half (56%) reported challenges in managing user risks, including the unintended spread of misinformation.

Trust Issues

The study also examined whether AI systems are delivering the expected benefits in security operations. Just 51% of respondents said AI is effective in reducing the time needed to detect anomalies or emerging threats.

Confidence was lower for more advanced uses. Fewer than half (48%) rated AI as effective for threat detection and hunting, surfacing deeper insights while reducing manual work.

Bias and reliability remain major obstacles. Nearly two-thirds of respondents (62%) said it is very or extremely difficult to minimise model and bias risks, including unfair or discriminatory outputs.

Other barriers relate to how AI systems are built and used. Forty-five per cent cited errors in AI decision rules as a leading problem, while 40% pointed to errors in the data fed into AI systems.

For many businesses, fully autonomous AI still appears some way off. Only 47% said their AI models can learn robust norms and make safe decisions autonomously, while 51% said human oversight is needed in AI governance because attackers can adapt quickly.

Muhi Majzoub, EVP of Product & Engineering at OpenText, said the issue goes beyond adoption. "AI maturity isn't just about adopting AI tools; it's about doing it responsibly," he said.

He said security and governance need to be built in early. "Security and governance are foundational to getting real value from AI. When they're built into AI systems from the start, organisations can operate with greater transparency, monitor systems continuously, and trust the outcomes AI delivers."

Regional Sample

The survey drew responses from executives, decision-makers and practitioners involved in IT security, engineering, infrastructure, risk and compliance, and other areas linked to AI and security strategy. The cross-regional sample was intended to capture views from organisations of different sizes and sectors as AI adoption broadens.

For businesses considering further AI deployments, the figures suggest implementation is moving ahead faster than the controls needed to manage risk. That is particularly significant as AI tools become more embedded in day-to-day operations and more closely tied to critical business processes.

Majzoub said organisations best placed for the next stage of adoption will be those that address oversight from the outset. "The leaders in this next phase of AI adoption will be those who build transparency and control into AI from the start," he said. "As AI becomes embedded in day-to-day operations, organisations need secure information management as the foundation: clear governance frameworks, policy-based controls, and continuous monitoring that ensure AI systems remain trustworthy and compliant. Just as important is aligning AI with the right data, security practices, and oversight from the outset so innovation can scale responsibly and deliver measurable business value."