Mitigating AI Risks in State Government 

State officials are leading the way to identify and tackle AI risks – from misinformation to data privacy concerns. In a December 2023 installment of an ongoing webinar series, the NGA Center for Best Practices brought together state leaders and national experts to examine major categories of AI risk in state government and highlight promising practices state governments are implementing to ensure AI technologies are deployed successfully and responsibly.

Co-organized with the Center for Scientific Evidence in Public Issues at the American Association for the Advancement of Science (AAAS EPI Center), the conversation provided an opportunity to share lessons learned and identify both challenges and opportunities.


The Reality of Current AI Harms

Alexandra Reeve Givens, CEO of the Center for Democracy and Technology, provided an overview of the types of AI risk state governments are confronting. Categories of risk include: security and surveillance; education; consumer fraud and abuse; commercial data practices; benefits and public health; and information harms and elections.

Illustrating the real-world impact of AI hazards, Reeve Givens shared examples related to the automated administration of public benefits systems, such as unemployment insurance, Supplemental Security Income (SSI) and Medicaid. While the use of automation in benefits systems isn’t new, its expanding use over the past 10 to 15 years has exposed risks that significantly affect citizens’ rights and expose state government agencies to litigation. In one instance, a system adopted to detect fraud in unemployment insurance programs appears to have wrongly accused 20,000 to 40,000 people of fraud due to a miscorrelation in data sets, leading to immediate termination of benefits and substantial punitive fines. Reeve Givens also cited examples of AI risks in the criminal justice system, including wrongful arrests based on faulty facial recognition technology, as well as bias in AI systems used to make decisions regarding bail and probation.


Elements of Trustworthy AI

As state and federal policymakers grapple with AI implementation, several guidance frameworks that have emerged in the past year pinpoint common elements as essential to ensuring trustworthy AI. “One of the key elements is how we move from lofty ideals [such as] ‘great AI systems shouldn’t discriminate’ to how governments and individual decision-makers operationalize that in practice,” Reeve Givens stated. Several state plans have drawn on recent frameworks such as the White House Blueprint for an AI Bill of Rights and the U.S. Commerce Department’s National Institute of Standards and Technology (NIST) Artificial Intelligence Risk Management Framework.

One of the latest models is a Proposed Memorandum for Federal Agency Use of AI released in November 2023 by the Office of Management and Budget (OMB). Reeve Givens outlined key elements of OMB’s guidelines that can be helpful to states.

Mandate risk management practices: Before developing or deploying an AI tool, it is critical to determine whether the system impacts rights or safety. If it does, states should require minimum practices, such as completing an AI impact assessment, testing performance in real-world contexts, independently evaluating the AI, conducting ongoing monitoring, specifying a threshold for human review, and ensuring adequate training for operators. For AI systems determined to impact rights, additional minimum practices include testing for equity and nondiscrimination (pre- and post-deployment), consulting impacted groups, and notifying impacted individuals when AI meaningfully influences the outcome of decisions concerning them.

Comply with due process and Administrative Procedure Act requirements: Most litigation arising from government use of AI has centered on violations of due process rights or Administrative Procedure Act (APA) obligations, typically through deployment of an automated tool that impacts rights without appropriate public notice or consideration, or in a manner that is inconsistent or of poor quality.

Require reporting & documentation: States can increase accountability and understanding around AI by directing agencies to inventory their uses of AI, designate which uses impact rights and safety, and issue templates for reporting outcomes in high-risk use cases so that both internal operators and the public will have access to and understanding of AI uses and impacts. 

Designate appropriate staff: OMB’s guidance recommends designating Chief AI Officers (which can be dual-function roles filled by existing staff, such as chief data or privacy officers) to develop best practices and serve as a central source of accountability.

Take specific steps on procurement: The procurement process is instrumental in shaping AI risk management, and OMB outlines several steps to help government agencies ask the right questions and use taxpayer dollars responsibly. An effective procurement process should include the ability to test the technology; retain sufficient agency control and ownership over data; ensure quality control, privacy, and security; and maintain adequate access and visibility so that due process requirements are met.

Develop strategies to counter harmful AI uses: To help protect the public from misinformation and consumer fraud scams, government officials must act to preserve their role as trusted sources of civic information. Strategies include maintaining consistent branding and trust indicators (e.g., the use of .gov domains); engaging in proactive messaging to “pre-bunk” and debunk false narratives; and establishing trusted channels of communication so reporters know whom to call when questionable information spreads.

Take AI-driven harms seriously: Ensure law enforcement is equipped to address consumer fraud, extortion, non-consensual intimate imagery (NCII), and election interference; address critical infrastructure and cybersecurity risks; provide guidance to the private sector on housing, civil rights, and consumer protection issues; ensure any AI funding requires responsible innovation; and advance a strategic legislative agenda that prioritizes AI risk management.


How States Are Managing AI Risk

Washington: Katy Ruckle, State Chief Privacy Officer

Virginia: Andrew Wheeler, Office of Regulatory Management Director

During a previous webinar, state speakers outlined ways their states are innovating with AI to improve efficiency and effectiveness across a variety of government functions. Numerous states have established advisory committees and working groups to study AI and issue guidelines. As these efforts expand, states continue to identify risks and develop best practices that both mitigate those risks and capitalize on opportunities.

You can watch each of the state speakers’ presentations at this link. A summary of key points made by each state is presented here:

Washington state launched an AI Community of Practice (CoP), which includes both state and local government agencies, to facilitate collaboration, identify best practices, enhance accountability and oversight, and promote alignment of new AI technologies with business and IT strategies. The CoP’s recommendations include guidelines for applying the existing Washington State Agency Privacy Principles to data privacy risks, including common risks such as data persistence (retaining data longer than needed) and data repurposing (using data for a purpose beyond that for which it was originally collected or intended).

Building on interim guidelines for the responsible use of generative AI, which the state published in August 2023, the CoP highlights several “dos and don’ts” for generative AI use: do review AI-generated content, including audiovisual content, for biases and inaccuracies before using it; do implement robust measures to protect resident data; don’t include sensitive or confidential information in prompts; and, when using chatbots or automated responses, don’t treat generative AI as a substitute for human interaction, and do provide mechanisms for residents to easily reach human assistance if the AI system cannot address their needs effectively.

In Virginia, Governor Glenn Youngkin issued an Executive Directive in September 2023 directing the state’s Office of Regulatory Management (ORM) to coordinate with the Virginia Information Technologies Agency (VITA) to develop standards and guidelines to ensure effective oversight of AI technology across four focus areas: legal protections, policy standards, IT safeguards, and K-12 and higher education implications. The effort generated findings organized into three categories: education (developing guidelines to guard against misuse of AI in schools and to equip students with the AI knowledge needed for future careers); economic development (determining strategies to attract AI companies, as well as companies outside the AI industry, to Virginia); and energy impact (examining the impact of AI on power generation requirements).

ORM and VITA were also tasked with identifying pilot projects, both internal and public-facing, that can be implemented to test the standards and make government services more efficient and effective. Examples include the use of chatbots to administer government services more efficiently, as well as a potential pilot using AI to help the housing department analyze the 700,000+ building codes active in Virginia and identify overlapping requirements, with a view toward streamlining regulations.


State AI Resources

The NGA Center for Best Practices has updated a State Resource List on Artificial Intelligence that provides links to federal-level activities, state executive branch activities, state legislative actions, local-level activities, and resources for technical assistance. As additional items are identified, the resource list will continue to be expanded.

Governors’ offices and other policy experts and stakeholders are encouraged to contact NGA to share input on the resource list or to suggest specific AI topics the NGA Center and the AAAS EPI Center can address in future events. NGA’s intended audience includes Governors’ advisors and staff; state policymakers and executive leaders; state procurement officials; state chief information and technology officers; and state offices that oversee automated systems such as public benefits distribution, government hiring, and fraud detection.

The AAAS EPI Center also shares AI resources on its website, including, for example, Foundational Issues in AI, a Glossary of AI Terms, and other useful materials.


Contacts