{"id":21696,"date":"2023-09-12T13:31:14","date_gmt":"2023-09-12T13:31:14","guid":{"rendered":"https:\/\/web3unplugged.io\/blog\/?p=21696"},"modified":"2023-09-12T13:31:16","modified_gmt":"2023-09-12T13:31:16","slug":"artificial-intelligence-for-retrospective-regulatory-review","status":"publish","type":"post","link":"https:\/\/web3unplugged.io\/blog\/artificial-intelligence-for-retrospective-regulatory-review\/","title":{"rendered":"Artificial Intelligence for Retrospective Regulatory Review"},"content":{"rendered":"\n<p>Statutes, executive orders, and good governance considerations alike impose a duty on many federal agencies to analyze past rules periodically and to remove duplicative, inconsistent, anachronistic, or otherwise ineffective regulations from their books. To comply, agencies perform \u201cretrospective reviews\u201d by re-assessing the costs and benefits of their regulations at some time after the regulations are issued.<\/p>\n\n\n\n<p>This longstanding practice of retrospective review has been endorsed in various Administrative Conference of the United States (ACUS) recommendations since 1995.<\/p>\n\n\n\n<p>Executive orders from the Obama, Trump, and Biden Administrations expanded many agencies\u2019 retrospective review obligations. More recently, agencies have begun to consider how technology might maximize the scope of the review they can perform, notwithstanding limited resources. 
For example, the Office of Management and Budget suggested in its November 2020 guidance that artificial intelligence (AI) might be an effective tool \u201cto promote retrospective analysis of rules that may be outmoded, ineffective, insufficient, or excessively burdensome\u201d and to \u201cmodify, streamline, expand, or repeal them in accordance with what has been learned.\u201d<\/p>\n\n\n\n<p>Observing this trend toward algorithmic review, one of us (Sharkey) proposed a study to ACUS and produced a report assessing how certain agencies now use, and how others plan to use, AI technologies in retrospective review. Prior to the ACUS study, little was known about agencies\u2019 use of algorithms to facilitate retrospective review, so Professor Sharkey first endeavored to conduct field studies to discover relevant uses of AI-enabled technology.<\/p>\n\n\n\n<p>Four representative case studies proved instructive: three from cabinet-level executive branch departments\u2014the U.S. Department of Health and Human Services (HHS), the U.S. Department of Transportation, and the U.S. Department of Defense\u2014and one of a project by the General Services Administration (GSA) in collaboration with the Centers for Medicare and Medicaid Services (CMS).<\/p>\n\n\n\n<p>The first case study on HHS\u2019s use of AI technology was particularly striking. The department christened its 2019 AI pilot project \u201cAI for Deregulation,\u201d a clear statement of its intent to use AI to remove rules rather than repair or replace them. In November 2020, HHS launched a \u201cRegulatory Clean Up Initiative\u201d through which the department has applied an AI and natural language processing (NLP) tool created by a private contractor to HHS regulations. This culminated in a final rule that corrected nearly 100 citations, deleted erroneous terms, and rectified many misspellings and typographical errors. 
The public only learned of the AI tool and its use by HHS upon the issuance of the rule.<\/p>\n\n\n\n<p>The second case study explored retrospective review at the Transportation Department, which reviews all of its regulations on a ten-year cycle. To date, the Transportation Department\u2019s largest AI-based retrospective review tool is its \u201cRegData Dashboard,\u201d which, according to internal documents, seeks to \u201capply a data-driven approach to analyzing regulations\u201d to \u201cinform policy decisions, analyze trends, provide management reports\/monitoring, and display the entire \u2018lifecycle\u2019 of regulatory actions.\u201d The Transportation Department\u2019s Office of the Chief Information Officer built the RegData Dashboard to implement QuantGov\u2014an open-source policy analytics platform developed by the Mercatus Center\u2014which uses data tools to analyze regulations and estimate their regulatory load.<\/p>\n\n\n\n<p>The Transportation Department\u2019s algorithmic tools exert a unique influence on the agency\u2019s rulemaking process. The department drafts its rules to fit a structured, agency-wide format designed to organize key meta-data elements, such as who or what the regulated entity is and who is responsible for enforcement. Compared to a less structured approach to drafting rules, the Transportation Department\u2019s consistent format makes it relatively straightforward for subject-matter experts to encode the substance of Transportation Department rules into a \u201cmachine-readable\u201d format, thus decreasing the cost of \u201cteaching\u201d the Transportation Department\u2019s algorithmic tools the semantic meaning of regulatory text and obviating the need for NLP.<\/p>\n\n\n\n<p>The Defense Department presents a third example of an agency\u2019s use of algorithmic tools to perform retrospective review. 
The Defense Department faced a thorny problem: a \u201cmountain of policies and requirements\u201d authored by various officials and officers across its constituent agencies and military services. The Defense Department\u2019s rules formed a sprawling, decentralized corpus that rendered many forms of coordination prohibitively difficult. In response, Congress required the Defense Department to create a \u201cframework and supporting processes\u201d to ensure retrospectively that the department\u2019s shared intelligence community \u201cmissions, roles, and functions\u201d are \u201cappropriately balanced and resourced.\u201d<\/p>\n\n\n\n<p>Thus, the department created \u201cGAMECHANGER,\u201d an AI and NLP tool that consolidated its entire catalogue of guidance into a structured, unified repository. GAMECHANGER also assists Defense Department staff in drafting new policy documents to avoid conflicting with or duplicating prior department positions. The Defense Department prototyped GAMECHANGER in-house before transferring development duties to a contractor tasked with supporting other Defense Department data analytics tools. According to one Defense Department official interviewed, GAMECHANGER cut the time needed to respond to policy-related queries \u201cfrom months to seconds\u201d and saved almost $11 million in annual costs.<\/p>\n\n\n\n<p>The final case study explored a GSA initiative to leverage its centralized technical expertise to provide shared software for government. GSA partnered with CMS\u2014an executive agency within HHS\u2014to conduct a pilot study of \u201ccross-domain analysis,\u201d which the agencies hoped would reduce burdens and avoid duplicative regulation by coordinating rules across regulatory domains.<\/p>\n\n\n\n<p>Since the Dutch government had implemented similar technology in its health care and immigration services, GSA and CMS sourced the technology for the project from two Dutch organizations. 
The ensuing three-month study investigated a test case of CMS regulations governing subsidies for portable oxygen systems. CMS had already identified inconsistencies in the selected rules, and the Dutch organizations\u2019 tools successfully located the expected contradictions.<\/p>\n\n\n\n<p>In addition to performing these case studies, Professor Sharkey interviewed officials from eight other agencies about their interest in using algorithmic tools for retrospective review. These interviews revealed that, for the most part, agencies\u2019 retrospective review methods are nowhere near automated. The U.S. Department of Education, for example, performs retrospective review by a completely manual process that one official described as \u201cpretty labor intensive.\u201d Agency representatives theorized that concerns about the government\u2019s capacity and resources to develop AI tools for retrospective review had stymied adoption, and none thought it would be realistic to develop such tools in-house.<\/p>\n\n\n\n<p>Nonetheless, officials at all but one agency were open to the use of AI-enabled tools in the retrospective review process, and several interviewees pointed out particular agency tasks that might lend themselves to automation, such as finding broken citation links or flagging long-standing, unedited regulations as ripe for review.<\/p>\n\n\n\n<p>To obtain another perspective, Professor Sharkey surveyed regulatory beneficiaries and regulated entities using a sample list provided by ACUS. Of the six regulatory beneficiaries and one regulated entity that agreed to be interviewed, most said their chief concerns with AI-enabled tools for retrospective review were AI trustworthiness and explainability. Indeed, two self-described \u201cAI skeptics\u201d contended that much regulatory text is too context-specific and difficult to unpack for an algorithm ever to replicate or replace an agency\u2019s expertise and experience. 
The other interviewees, however, were at least cautiously\u2014and some, enthusiastically\u2014supportive of exploring the use of AI in reviewing regulations. One interviewee said that \u201cAI is key in retrospective review, because no one wants to do the work, and it\u2019s low priority; so AI is perfect for that.\u201d<\/p>\n\n\n\n<p>The report and case studies are, we think, both encouraging and instructive for governmental creators and users of AI. One lesson learned is that the resources and technical expertise required to carry an AI project to the finish line are rare among federal agencies. Where internal capacity exists, agencies should consider launching pilot projects on algorithmic retrospective review and sharing their tools openly with other federal agencies. The Defense Department\u2019s AI tool, GAMECHANGER, for example, sparked interest in creating spinoff projects across the government, both at individual agencies, such as the U.S. Patent and Trademark Office, and at agencies with a cross-government focus, such as the Office of Management and Budget.<\/p>\n\n\n\n<p>Another position we take, more uncompromisingly, is that AI tools for retrospective review must be open source and able to operate in synergy with other government technology initiatives. GAMECHANGER again is a shining example: nearly all its code is available on open source platforms, so any agency interested in implementing it would incur relatively low startup costs. Likewise, GSA required that its pilot with CMS be open source and compatible with common government-used data architectures such as United States Legislative Markup. 
By choosing to prioritize interoperability and to stay open source, the agencies created non-proprietary, widely shared tools and developed internal technical capacity to insulate themselves against the possibility of being locked into contracts with a single vendor.<\/p>\n\n\n\n<p>Finally, we suggest that the case studies of agency experience with AI in retrospective review also shed light on the value of AI at the moment of prospective rulemaking. An example of how AI-enabled retrospective review could come into play at the time of rulemaking is the Transportation Department\u2019s practice of drafting regulations in a structured format which facilitates better comprehension of rules by computers. Algorithmic retrospective review tools could also be used throughout the lifecycle of crafting new regulations to ensure that the new rules are well-drafted, consistent, and non-duplicative.<\/p>\n\n\n\n<p>It may well be that, in the near future, AI will wear many hats in the rulemaking process, such as modeling the effects of policy choices or gauging their costs and benefits. Easing AI into prospective rulemaking by learning from and replicating its contributions to retrospective review is, in our view, a prudent first step.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Statutes, executive orders, and good governance considerations alike impose a duty on many federal agencies to analyze past rules periodically and to remove duplicative, inconsistent, anachronistic, or otherwise ineffective regulations from their books. 
To comply, agencies perform \u201cretrospective reviews\u201d by re-assessing the costs and benefits of their regulations at some time after the regulations are [&hellip;]<\/p>\n","protected":false},"author":2,"featured_media":21698,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_seopress_robots_primary_cat":"none","_seopress_titles_title":"","_seopress_titles_desc":"","_seopress_robots_index":"","footnotes":""},"categories":[2],"tags":[],"class_list":["post-21696","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-news"],"rttpg_featured_image_url":{"full":["https:\/\/web3unplugged.io\/blog\/wp-content\/uploads\/2023\/09\/Untitled-36.jpg",100,59,false],"landscape":["https:\/\/web3unplugged.io\/blog\/wp-content\/uploads\/2023\/09\/Untitled-36.jpg",100,59,false],"portraits":["https:\/\/web3unplugged.io\/blog\/wp-content\/uploads\/2023\/09\/Untitled-36.jpg",100,59,false],"thumbnail":["https:\/\/web3unplugged.io\/blog\/wp-content\/uploads\/2023\/09\/Untitled-36.jpg",100,59,false],"medium":["https:\/\/web3unplugged.io\/blog\/wp-content\/uploads\/2023\/09\/Untitled-36.jpg",100,59,false],"large":["https:\/\/web3unplugged.io\/blog\/wp-content\/uploads\/2023\/09\/Untitled-36.jpg",100,59,false],"1536x1536":["https:\/\/web3unplugged.io\/blog\/wp-content\/uploads\/2023\/09\/Untitled-36.jpg",100,59,false],"2048x2048":["https:\/\/web3unplugged.io\/blog\/wp-content\/uploads\/2023\/09\/Untitled-36.jpg",100,59,false],"post-thumbnail":["https:\/\/web3unplugged.io\/blog\/wp-content\/uploads\/2023\/09\/Untitled-36.jpg",100,59,false],"graptor-sq-xs":["https:\/\/web3unplugged.io\/blog\/wp-content\/uploads\/2023\/09\/Untitled-36.jpg",100,59,false]},"rttpg_author":{"display_name":"Admin CG","author_link":"https:\/\/web3unplugged.io\/blog\/author\/admin-cg\/"},"rttpg_comment":0,"rttpg_category":"<a href=\"https:\/\/web3unplugged.io\/blog\/category\/news\/\" rel=\"category 
tag\">news<\/a>","rttpg_excerpt":"Statutes, executive orders, and good governance considerations alike impose a duty on many federal agencies to analyze past rules periodically and to remove duplicative, inconsistent, anachronistic, or otherwise ineffective regulations from their books. To comply, agencies perform \u201cretrospective reviews\u201d by re-assessing the costs and benefits of their regulations at some time after the regulations are&hellip;","_links":{"self":[{"href":"https:\/\/web3unplugged.io\/blog\/wp-json\/wp\/v2\/posts\/21696","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/web3unplugged.io\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/web3unplugged.io\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/web3unplugged.io\/blog\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/web3unplugged.io\/blog\/wp-json\/wp\/v2\/comments?post=21696"}],"version-history":[{"count":1,"href":"https:\/\/web3unplugged.io\/blog\/wp-json\/wp\/v2\/posts\/21696\/revisions"}],"predecessor-version":[{"id":21699,"href":"https:\/\/web3unplugged.io\/blog\/wp-json\/wp\/v2\/posts\/21696\/revisions\/21699"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/web3unplugged.io\/blog\/wp-json\/wp\/v2\/media\/21698"}],"wp:attachment":[{"href":"https:\/\/web3unplugged.io\/blog\/wp-json\/wp\/v2\/media?parent=21696"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/web3unplugged.io\/blog\/wp-json\/wp\/v2\/categories?post=21696"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/web3unplugged.io\/blog\/wp-json\/wp\/v2\/tags?post=21696"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}