Irish firms warned on custom AI agents replacing software
Irish businesses that replace established software platforms with custom-built AI agents could expose themselves to significant legal and operational risk, according to Bobby Brown of Dublin consultancy Nucleo. He said boards are pursuing the shift in search of clearer returns on AI spending.
Brown's warning focuses on a growing effort to cut costs by cancelling software-as-a-service subscriptions and building internal AI tools instead. He argued that many organisations, particularly in regulated sectors, do not yet have the data controls needed to do that safely.
The issue comes as company leaders face pressure to show that AI investment is delivering savings. Research cited by Nucleo from PwC's Irish CEO survey found that 51 per cent of Irish leaders see the pace of technological change as their main concern, while 23 per cent say they have achieved cost reductions from AI so far.
That gap has prompted some boards to seek savings elsewhere in technology budgets. One target is long-running software licences, which can be costly but usually come with vendor responsibility, contractual protections and established compliance processes.
Companies that develop their own AI workflows take direct responsibility for any resulting failures, Brown said. That can include inaccurate outputs, poor oversight of training data and potential breaches of privacy or sector rules.
"Cutting enterprise licences looks great on the balance sheet for 2026, but it is a false economy," said Bobby Brown, founder and chief executive of Nucleo.
"Most organisations simply do not have the rigorous data governance required to build bespoke AI agents. If your underlying data is messy, building a custom bot to run your operations is just a very fast way to automate your errors."
Accountability Shift
Brown argues that the main risk lies in the transfer of accountability from supplier to customer. With licensed software, businesses can look to the vendor when systems fail or promised controls do not work as expected. With an internally built AI agent, that protection may be limited or absent.
"When you buy a software licence, you have a vendor to hold accountable," he said.
"If a bespoke agent hallucinates a financial forecast or breaches GDPR, the company owns that error 100 per cent. Under the EU AI Act, non-compliance can trigger fines of up to €35 million or 7 per cent of global turnover. There is no vendor to fall back on and nowhere to hide."
The warning comes amid wider investor concern about the outlook for traditional software groups. Nucleo pointed to a sharp sell-off in listed software stocks following updates to Anthropic's Claude model, which it said wiped more than $285 billion from the value of global software stocks within days as investors weighed the impact of AI agents on licence-based business models.
Whether that market reaction proves lasting or not, the debate has reached boardrooms in Ireland. Companies are under pressure to decide whether AI should sit alongside existing software estates or replace parts of them.
For Brown, the answer depends less on enthusiasm for AI than on the state of a company's data architecture. Businesses with fragmented records, inconsistent governance or weak audit trails may struggle to explain how an AI tool arrived at a decision, especially if that decision affects customers, finance or compliance.
The concern extends to the board. Directors who approve bespoke AI systems without a clear grasp of the underlying data and control structure may create governance problems under Irish company law, where they are expected to show care and skill in oversight, Brown said.
Nucleo works with mid-sized and large organisations in sectors including financial services, utilities, home care and the public sector, where compliance demands are typically stricter and audit requirements more extensive. In those environments, replacing established platforms with internally developed AI systems may increase operational complexity and legal exposure.
Data First
Brown said companies should focus first on improving data quality and governance before replacing core software tools with custom AI agents. He argued that existing secure platforms may offer a safer route for deployment if the underlying information is reliable.
"The goal isn't to avoid AI; it's to avoid building a liability," he said. "Success starts with getting your data in order so you can deploy AI within the secure platforms your business already trusts."