IT Brief Ireland - Technology news for CIOs & IT decision-makers

Where are the godmothers of AI in the tech industry?

Fri, 6th Mar 2026

International Women's Day exists to focus on rights, recognition and structural inequality. But at its heart, it asks a simple question: who holds power, and how is it exercised? 

Similarly, artificial intelligence (AI) is often described in dramatic terms: disruptive, transformative, even existential. What we talk about far less is that AI is inherently political. It reflects choices about priorities, trade-offs and acceptable risk, both at a societal level and within individual organisations. 

AI systems do not appear fully formed; there is no magic wand waved to create them. They are built through thousands of human decisions: what data to use, what outcomes to optimise for, which risks are tolerable and which are not. That process requires judgment. And judgment is where power sits. After all, depending on who is making these choices, it can be very easy for bias to slip in. 

At present, much of that power does not sit with women. And whilst we have strong examples of women leading the charge in AI, expert figures like Daniela Amodei, Mira Murati and Amanda Askell, they are woefully underrepresented compared to their male counterparts. 

So where are the rest of the "Godmothers of AI"? Where are the experienced leaders in governance, compliance, behavioural science and risk? The professionals who have spent decades assessing unintended consequences and balancing competing interests? These are precisely the skills required to shape systems that increasingly influence opportunity and access. 

AI is no longer experimental or optional. It sits inside everyday decision-making: screening CVs, determining creditworthiness, informing medical triage, detecting fraud and allocating resources. In each of these contexts, AI is participating in judgment. 

Judgment without broad representation is not neutral, and it carries the risk of embedding structural bias at scale. 

In the AI economy, meaningful rights must include meaningful participation in the design and governance of these systems. A genuine seat at the table matters, not as symbolism but because design choices have direct effects on real-world outcomes. 

On top of this, accountability is vital. When systems discriminate or cause harm, there must be enforceable consequences. Without that, governance becomes merely a statement of intent rather than an active mechanism for protection. 

Behavioural science offers a useful lens here: people optimise for what they are rewarded for. If founders are rewarded primarily for speed and valuation, speed and valuation will dominate their choices. Likewise, if boards are measured mainly on growth, growth will guide their practices even when doing so creates risk. Unless fairness, transparency and inclusion are built into incentives, they will remain secondary to the primary objective of the AI model. 

This is where the disconnect becomes clear. We celebrate women in technology once a year, yet female-led AI ventures remain structurally underfunded. We see diversity on panels, but authority over capital allocation, product design and governance often sits elsewhere. 

Representation without decision-making power does little to reshape systems and operates from a place of vanity rather than practicality. This isn't just a moral or feminist argument; it's also an economic one. 

From a risk perspective, this is not only a question of equity. AI models trained on incomplete or historically biased data can produce flawed outputs. And those same outputs can drive poor decisions, create legal exposure and invite regulatory scrutiny. In regulated sectors, that is a material business risk. 

Inclusion in AI design is therefore not simply a social aspiration; it needs to be part of sound governance. 

To be truly meaningful, progress should look practical rather than performative. Think procurement standards that demand transparency around training data and model governance; AI risk committees with clear mandates and accountability; impact assessments, including gender impact analysis, embedded into product development lifecycles; and capital that genuinely supports women building AI solutions.