Use of Artificial Intelligence in Government Agencies: Balancing Innovation and Accountability
Overview
Few terms today are as ubiquitous—or as often misunderstood—as Artificial Intelligence (AI). It is at the forefront of conversations about the near- and long-term transformation of the world’s economy, and government is no exception. Leaders are focused on how to regulate and deploy AI, and how to procure it responsibly. Even as that debate continues, state and federal agencies are working to identify and capitalize on the current state of the technology and its associated best practices to harness AI for their missions. Disclosed federal AI use cases more than doubled in 2024, and states are accelerating adoption as well.
The term itself can be misleading. It is often used to describe everything from AI chatbots to large language models, robotic process automation, machine learning, and deep data analytics. That is why some prefer the term “intelligent automation,” recognizing that automation is at the core of most of today’s “AI” implementations and that it can involve a range of components.
Whatever we call it, we’re already using “AI” – in customer support, healthcare data mapping, procurement automation, and even national security applications. At its core, AI is a tool that allows us to gather, analyze, and pull patterns from the whole of human-created data and tease out new inferences and understanding. That is powerful. But how, where, and why to apply it remains our choice. Context, mission knowledge, and familiarity with the systems underlying service delivery allow government agencies to strike the right balance between technology-driven efficiency and human oversight while ensuring program integrity. In other words, the effective deployment of AI today still requires significant human intervention; automation can only take us so far.
Data Driven, but Not Data Dominated
The early work of the Department of Government Efficiency (DOGE) fueled debate over how heavily agencies should rely on large data models and machine learning. Some early decisions driven solely by data (itself sometimes misunderstood) have already been reversed, underscoring the criticality of mission and systems knowledge. Decades of institutional knowledge and mission-specific application cannot be replaced by algorithms alone.
Although efficiency and customer service (i.e., effectiveness) are often related, they are distinct measures of performance. Government applications need speed and accuracy, but also effective outcomes that provide excellent service and build public trust. Recent federal deployments of AI verification tools, for example, have shown that duplicate identity cases can be reduced by more than half while keeping decisions on appeals subject to human review for those affected. The lesson is clear: AI can enhance, not replace, agency execution of state and federal programs.
High-Risk Decision Making
“High-risk AI” frameworks are intended to protect individuals from bias in private-sector decisions such as college admissions or mortgage approvals. As government integrates AI into benefits programs such as SNAP, Medicaid, and unemployment insurance, these same concerns arise and take on even greater weight. It’s our government, and we want it to be efficient, effective, and fair. The efficiency of all of these programs could be significantly improved by advanced technologies, data sharing, and automation. But human oversight and intervention are essential to ensure effectiveness and to safeguard fairness, privacy, and trust.
For context: the White House’s July 2025 report, “Winning the AI Race: America’s AI Action Plan”, outlines one approach to setting federal priorities for advancing AI capacity, meeting data center demand, streamlining regulation, and ensuring government models remain free of bias.
AI and the Blended Workforce
Introducing AI into how government meets the needs of the citizens it serves doesn’t have to be overly complex or expensive. Successful transformations need to start small, with specific goals, demonstrate success, and repeat. In the case of public benefits programs, we could start with automating routine, time-consuming tasks, freeing staff to focus on policymaking and citizen engagement, and then expand from there.
This kind of agile approach represents the best path toward achieving new levels of effectiveness and efficiency in benefits administration. As another example, dispute resolution processes in one agency were accelerated dramatically when intelligent automation was used to process routine cases, while complex matters remained subject to human review and action. This is the model for sustainably blending in this new tool – one created by humans to improve their own productivity, like every technological advance before it.
Conclusion
Intelligent automation is no longer optional—it is required to meet mission demands. This is especially true today, given new requirements in federal law, notably those in the “One Big Beautiful Bill” (OBBB), under which federal and state agencies face a host of changes and challenges in benefits administration. Let’s not make it overly complex. AI offers important opportunities to modernize government operations in powerful ways. Agencies and their contractors are at the forefront of this transformation and bear responsibility for ensuring its use delivers measurable, positive outcomes. As government teams adopt AI—whether as developers or deployers—they must maintain human oversight to protect data, prevent bias, and ensure improved citizen satisfaction.
The imperative is clear: AI in government must be implemented with precision, oversight, and accountability. Done right, it can cut costs, speed delivery, and improve outcomes—delivering the right services to the right people at the right time.
1. https://fedscoop.com/federal-government-discloses-more-than-1700-ai-use-cases
2. https://www.whitehouse.gov/wp-content/uploads/2025/07/Americas-AI-Action-Plan.pdf