{"id":23153,"date":"2024-02-07T12:55:20","date_gmt":"2024-02-07T12:55:20","guid":{"rendered":"https:\/\/web3unplugged.io\/blog\/?p=23153"},"modified":"2024-02-07T12:55:22","modified_gmt":"2024-02-07T12:55:22","slug":"the-eus-artificial-intelligence-rulebook-explained","status":"publish","type":"post","link":"https:\/\/web3unplugged.io\/blog\/the-eus-artificial-intelligence-rulebook-explained\/","title":{"rendered":"The EU&#8217;s Artificial Intelligence Rulebook, Explained"},"content":{"rendered":"\n<p>The European Union&#8217;s&nbsp;Artificial Intelligence Act&nbsp;aims to rein in a technology whose influence grows by the day.<\/p>\n\n\n\n<p>The new rules are now on track to be approved in April and may enter into force later this year, following a lengthy debate on whether the EU should ban certain AI practices and constrain advanced AI.<\/p>\n\n\n\n<p>Here\u2019s your cheat sheet on what made it into the final text.&nbsp;<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">What it covers<\/h3>\n\n\n\n<p>The European Commission proposed the AI Act in 2021 to establish rules not merely for the technology, but for its use in scenarios where it could have grave consequences for the public.&nbsp;<\/p>\n\n\n\n<p>In other words, an AI tool that adds bunny ears to your Instagram shots does not need regulation; AI systems used to screen candidates for university admission definitely do. This distinction is known as the \u201crisk-based\u201d approach.<\/p>\n\n\n\n<p>The AI Act&#8217;s Article 5 outlines use cases that are outright banned. 
Meanwhile, the EU created rules for so-called general-purpose AI to address the rapid growth of systems like OpenAI&#8217;s ChatGPT, which have no single specific use and could be used to generate anything from recipes to propaganda.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>What&#8217;s forbidden<\/strong><\/h3>\n\n\n\n<p>Banned uses include subliminal, manipulative or deceptive AI-aided techniques aimed at influencing behavior, as well as the use of AI to exploit a person&#8217;s or group&#8217;s vulnerabilities.<\/p>\n\n\n\n<p>Using biometric information to ascertain a person&#8217;s race, sexual orientation, beliefs or trade union membership isn&#8217;t allowed. Nor is social scoring, which involves tracking a person&#8217;s behavior in a way that could result in their unfavorable treatment in an unrelated situation \u2014 for instance, being denied access to a public service because of past drug use.<\/p>\n\n\n\n<p>The use of real-time facial recognition (also known as \u201cremote biometric identification,\u201d or RBI) in public places is banned. There are exceptions: Law enforcement will still be able to use the technology when investigating serious crimes or searching for missing people, as long as they have a judge\u2019s authorization. Using AI to estimate a person&#8217;s likelihood to commit a crime based solely on personal characteristics \u2014 so-called predictive policing \u2014 isn&#8217;t allowed.<\/p>\n\n\n\n<p>AI tools cannot be used to create databases of facial images by scraping the internet or CCTV videos. An obvious example: the&nbsp;controversial U.S. 
company Clearview AI.<\/p>\n\n\n\n<p>Finally, using AI tools to infer a person\u2019s emotions in the workplace or an educational environment is forbidden.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Addressing high-risk AI &#8230;<\/h3>\n\n\n\n<p>Creators of AI systems used in \u201chigh-risk\u201d scenarios are required to follow data governance practices, including collecting training datasets ethically and ensuring those datasets are representative and as bias-free as possible.<\/p>\n\n\n\n<p>They must also draft technical documentation of the AI&#8217;s functionality and its risk-management measures, keep a record of the AI&#8217;s use to help monitor incidents, ensure that the AI&#8217;s use can be overseen by actual people, and guarantee appropriate levels of accuracy, robustness and cybersecurity.<\/p>\n\n\n\n<p>Anyone rolling out high-risk AI systems will also have to conduct an assessment of how the tool might affect fundamental rights under EU law.<\/p>\n\n\n\n<p>In practice, high-risk AI systems will likely be presumed compliant if they follow standards and specifications set by the Commission and standard-setting organizations.<\/p>\n\n\n\n<p>The areas considered high-risk, according to Annex III, include biometrics and facial recognition, where not explicitly forbidden under Article 5; critical infrastructure components; education and the workplace; and access to public services, benefits or essential private services like banking and insurance.<\/p>\n\n\n\n<p>Uses related to law enforcement, migration, justice and elections also qualify as high-risk.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">&#8230; and&nbsp;<strong>general-purpose AI<\/strong><\/h3>\n\n\n\n<p>The AI Act\u2019s rules target general-purpose \u201cmodels\u201d that underpin AI tools \u2014 not the customer-facing apps, but the software architecture that is integrated into different providers\u2019 products.&nbsp;<\/p>\n\n\n\n<p>Developers of these models \u2014 such as those 
powering ChatGPT or Google&#8217;s Bard \u2014 will have to keep detailed technical documentation; help the companies or people deploying their models understand the tools&#8217; functionality and limits; provide a summary of the copyrighted material (such as texts or images) used to train the models; and cooperate with the European Commission and the national enforcement authorities on compliance with the rulebook.<\/p>\n\n\n\n<p>Some general-purpose models are labeled a \u201csystemic risk\u201d because their power and reach could allow them to precipitate catastrophic events. Developers of these systems will also have to put mitigation strategies in place and pass on details of any incident to the Commission&#8217;s brand-new \u201cAI Office,\u201d which is lined up to police the rules.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>The European Union&#8217;s&nbsp;Artificial Intelligence Act&nbsp;aims to rein in a technology whose influence grows by the day. The new rules are now on track to be approved in April and may enter into force later this year, following a lengthy debate on whether the EU should ban certain AI practices and constrain advanced AI. 
Here\u2019s your [&hellip;]<\/p>\n","protected":false},"author":2,"featured_media":23155,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_seopress_robots_primary_cat":"none","_seopress_titles_title":"","_seopress_titles_desc":"","_seopress_robots_index":"","footnotes":""},"categories":[2],"tags":[],"class_list":["post-23153","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-news"],"rttpg_featured_image_url":{"full":["https:\/\/web3unplugged.io\/blog\/wp-content\/uploads\/2024\/02\/Untitled-21.jpg",900,600,false],"landscape":["https:\/\/web3unplugged.io\/blog\/wp-content\/uploads\/2024\/02\/Untitled-21.jpg",900,600,false],"portraits":["https:\/\/web3unplugged.io\/blog\/wp-content\/uploads\/2024\/02\/Untitled-21.jpg",900,600,false],"thumbnail":["https:\/\/web3unplugged.io\/blog\/wp-content\/uploads\/2024\/02\/Untitled-21-150x150.jpg",150,150,true],"medium":["https:\/\/web3unplugged.io\/blog\/wp-content\/uploads\/2024\/02\/Untitled-21-300x200.jpg",300,200,true],"large":["https:\/\/web3unplugged.io\/blog\/wp-content\/uploads\/2024\/02\/Untitled-21.jpg",900,600,false],"1536x1536":["https:\/\/web3unplugged.io\/blog\/wp-content\/uploads\/2024\/02\/Untitled-21.jpg",900,600,false],"2048x2048":["https:\/\/web3unplugged.io\/blog\/wp-content\/uploads\/2024\/02\/Untitled-21.jpg",900,600,false],"post-thumbnail":["https:\/\/web3unplugged.io\/blog\/wp-content\/uploads\/2024\/02\/Untitled-21.jpg",630,420,false],"graptor-sq-xs":["https:\/\/web3unplugged.io\/blog\/wp-content\/uploads\/2024\/02\/Untitled-21.jpg",100,67,false]},"rttpg_author":{"display_name":"Admin CG","author_link":"https:\/\/web3unplugged.io\/blog\/author\/admin-cg\/"},"rttpg_comment":0,"rttpg_category":"<a href=\"https:\/\/web3unplugged.io\/blog\/category\/news\/\" rel=\"category tag\">news<\/a>","rttpg_excerpt":"The European Union&#8217;s&nbsp;Artificial Intelligence Act&nbsp;aims to rein in a technology whose 
influence grows by the day. The new rules are now on track to be approved in April and may enter into force later this year, following a lengthy debate on whether the EU should ban certain AI practices and constrain advanced AI. Here\u2019s your&hellip;","_links":{"self":[{"href":"https:\/\/web3unplugged.io\/blog\/wp-json\/wp\/v2\/posts\/23153","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/web3unplugged.io\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/web3unplugged.io\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/web3unplugged.io\/blog\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/web3unplugged.io\/blog\/wp-json\/wp\/v2\/comments?post=23153"}],"version-history":[{"count":1,"href":"https:\/\/web3unplugged.io\/blog\/wp-json\/wp\/v2\/posts\/23153\/revisions"}],"predecessor-version":[{"id":23156,"href":"https:\/\/web3unplugged.io\/blog\/wp-json\/wp\/v2\/posts\/23153\/revisions\/23156"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/web3unplugged.io\/blog\/wp-json\/wp\/v2\/media\/23155"}],"wp:attachment":[{"href":"https:\/\/web3unplugged.io\/blog\/wp-json\/wp\/v2\/media?parent=23153"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/web3unplugged.io\/blog\/wp-json\/wp\/v2\/categories?post=23153"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/web3unplugged.io\/blog\/wp-json\/wp\/v2\/tags?post=23153"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}