Governments to deploy AI agents, tighten oversight by 2029
Most governments will use AI agents for routine decision-making within the next few years as public sector leaders seek faster transaction handling and more consistent outcomes. Gartner forecasts that at least 80% of governments will deploy AI agents for routine decisions by 2028, reflecting a broader shift from pilots to operational use across public services.
Gartner also expects stronger oversight requirements. By 2029, it forecasts that 70% of government agencies will require explainable AI and human-in-the-loop mechanisms for automated decisions that affect citizen service delivery. The aim is greater transparency, auditability, and clear routes to challenge when decisions have real-world consequences.
AI agents are software systems that take actions based on goals and rules, often through conversational interfaces. In government, they can handle tasks with consistent decision logic, such as eligibility checks, triage, case routing, and appointment scheduling. They can also draw on multiple data sources, including text and images, expanding where automation can be applied.
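The "consistent decision logic" described here can be pictured as a fixed set of policy rules applied to a case record. A minimal sketch in Python, where every name and threshold is hypothetical rather than drawn from any real government system:

```python
from dataclasses import dataclass

@dataclass
class Application:
    """A hypothetical citizen application record."""
    age: int
    annual_income: float
    resident: bool

def check_eligibility(app: Application) -> tuple[str, str]:
    """Apply fixed policy rules and return (decision, reason).

    The rules and the 30,000 income threshold are illustrative only,
    not real policy.
    """
    if not app.resident:
        return "ineligible", "applicant is not a resident"
    if app.age < 18:
        return "ineligible", "applicant is under the minimum age"
    if app.annual_income > 30_000:
        return "ineligible", "income exceeds the programme threshold"
    return "eligible", "all policy rules satisfied"

decision, reason = check_eligibility(
    Application(age=34, annual_income=22_000, resident=True))
print(decision, "-", reason)
```

Because the rules are explicit and deterministic, every case with the same inputs gets the same outcome and a stated reason, which is what makes tasks like eligibility checks and triage candidates for agent automation.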
Gartner links the growth of AI agents to advances in multimodal AI, which processes different types of data, and to conversational systems. Daniel Nieto, a senior director analyst at Gartner, said pressure is rising on government technology leaders to deploy quickly while meeting public accountability standards.
"Government CIOs are under growing pressure to embed AI into decision-making capabilities rapidly and responsibly," Nieto said. "The rise of multimodal AI, alongside conversational and agentic systems, has expanded what public organisations can automate, understand, and anticipate."
Fragmentation issues
Structural barriers within government remain a key constraint on how quickly AI can move from individual projects to shared services. Gartner identified internal fragmentation as one of the most persistent obstacles to delivering measurable value from AI in the public sector.
In a Gartner survey of 138 government respondents worldwide, conducted between July and September 2025, 41% cited siloed strategies as a key challenge in adopting and implementing digital solutions. Another 31% pointed to legacy systems as a major issue. The results suggest many agencies still struggle to standardise decision processes and data flows across departments.
"Technology modernisation alone has not resolved these issues," Nieto said.
The survey findings also highlight investment priorities. Digital transformation programmes often focus on replacing ageing systems, improving data integration, and migrating services to cloud platforms. These steps can make AI easier to adopt, but they do not automatically resolve inconsistent rules, duplicated workflows, or competing mandates across organisations.
Decision governance
Gartner argues that public sector AI governance is shifting from a focus on models and algorithms to a focus on decisions. In this view, what matters is not only how an AI system is built, but how a decision is defined, when it is executed, how it is monitored, and how it is audited.
Gartner calls this approach decision intelligence. It treats decision-making as an operational asset that can be designed and tested, with greater emphasis on clear decision pathways. In government, that can be critical to legitimacy, particularly when automated decisions affect access to services or the handling of sensitive cases.
Gartner reported that 39% of survey respondents cited improved service delivery and citizen satisfaction as the primary reasons for investing in citizen trust. This links trust to outcomes such as timeliness, accuracy, and consistency in service delivery.
Nieto said oversight must extend beyond technical controls when automated decisions shape people's interactions with government. "By governing decisions, rather than just isolated AI components, governments can better balance automation with human judgment, particularly in high-stakes or rights-impacting contexts," he said. "Regulated industries and governments cannot rely on opaque 'black box' systems for consequential decisions. DI elevates explainability from a technical requirement to a governance imperative."
Explainable AI refers to methods and processes that make it possible to inspect and communicate how an automated system reached an outcome. Human-in-the-loop designs place human decision-makers in the process, often for exception handling, appeals, and high-risk assessments. Gartner positions both as central to public sector use, particularly when outcomes can be contested.
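One common way human-in-the-loop designs are wired up is an escalation rule: automated decisions below a confidence threshold, or flagged as high-risk, are routed to a human reviewer, and every decision carries an explanation for audit. A sketch under those assumptions (the threshold, risk flag, and record schema are all hypothetical):

```python
from dataclasses import dataclass

@dataclass
class DecisionRecord:
    """Illustrative audit record for one decision."""
    case_id: str
    outcome: str
    explanation: str
    reviewed_by_human: bool

def decide(case_id: str, model_outcome: str, confidence: float,
           high_risk: bool, threshold: float = 0.9) -> DecisionRecord:
    """Automate only confident, low-risk cases; escalate the rest.

    The 0.9 threshold is an example governance control, not a
    description of any specific system.
    """
    if high_risk or confidence < threshold:
        # A human makes the final call; the record says why it was escalated.
        return DecisionRecord(
            case_id, "escalated",
            f"sent to human review (confidence={confidence:.2f}, "
            f"high_risk={high_risk})",
            reviewed_by_human=True)
    return DecisionRecord(
        case_id, model_outcome,
        f"automated decision at confidence {confidence:.2f}",
        reviewed_by_human=False)

print(decide("C-101", "approve", confidence=0.95, high_risk=False).outcome)
print(decide("C-102", "approve", confidence=0.62, high_risk=False).outcome)
```

The per-decision explanation string stands in for the explainability requirement: an auditor, or a citizen appealing an outcome, can see not just what was decided but on what basis and by whom.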
Citizen experience
Efficiency still underpins many automation business cases, but Gartner said citizen experience is becoming a stronger measure of value. Half of government respondents ranked improved citizen experience among their top three priorities, according to the survey.
As more services are delivered automatically, with fewer direct interactions, citizen experience increasingly depends on perceived fairness, reliability, and transparency. That shift also raises the need for clear explanations when outcomes affect entitlements, services, or compliance processes.
"As AI and decision intelligence increasingly automate and streamline service delivery, the traditional notion of 'citizen experience' evolves," Nieto said. "When citizens receive what they need from the government automatically, direct interactions may decrease, making trust in the system's reliability, fairness, and transparency even more critical. That trust increases the need to anticipate potential needs that could reshape how government digital services are delivered."
Gartner's decision intelligence framing also points to changes in service design. Agencies can map decision flows across services and identify where rules can be standardised, where discretion is required, and where automated triage is appropriate. In practice, that can shift service delivery from reactive processing toward earlier identification of needs, using available data and policy rules as inputs.
The forecast suggests the next phase of public sector AI deployment will be defined less by experimentation and more by governance that can withstand scrutiny, with explainability and human review becoming common requirements as AI agents take on a larger share of routine decisions.