{"id":22789,"date":"2023-12-27T13:21:00","date_gmt":"2023-12-27T13:21:00","guid":{"rendered":"https:\/\/web3unplugged.io\/blog\/?p=22789"},"modified":"2023-12-28T13:24:49","modified_gmt":"2023-12-28T13:24:49","slug":"does-it-make-sense-to-talk-about-the-safety-of-artificial-intelligence","status":"publish","type":"post","link":"https:\/\/web3unplugged.io\/blog\/does-it-make-sense-to-talk-about-the-safety-of-artificial-intelligence\/","title":{"rendered":"Does it make sense to talk about the safety of artificial intelligence?"},"content":{"rendered":"\n<p>There is one particularly interesting aspect of the&nbsp;<strong>video published by Google<\/strong>&nbsp;for Gemini\u2019s debut. Let\u2019s start with a now established fact:&nbsp;<strong>the content was heavily edited to exaggerate the system\u2019s capabilities<\/strong>&nbsp;and position it as a credible competitor to OpenAI. Moreover, after the soap opera between Sam Altman and the company behind ChatGPT, it was important for Big G to send a signal to the market, and thus to unveil a new product,&nbsp;<strong>regardless of its actual capabilities<\/strong>&nbsp;and its level of development.<\/p>\n\n\n\n<p>The&nbsp;<strong>rush to commercialization<\/strong>, with marketing mattering more than safety and research, also appears to have been among the causes of Altman\u2019s dismissal (later revoked) from OpenAI. According to many analysts, the risk in this commercial race is that we lose sight of safety,&nbsp;<strong>developing systems that are dangerous<\/strong>&nbsp;for companies and for users.<\/p>\n\n\n\n<p>And so today the point is precisely&nbsp;<strong>to understand what we are talking about<\/strong>&nbsp;when we talk about safe artificial intelligence.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">AI Safety, aligning artificial intelligence with human values<\/h2>\n\n\n\n<p>\u201cAn important distinction in the field of artificial intelligence, both in academia and in industry, is that between AI Safety and AI Ethics \u2013&nbsp;<strong>Giada Pistilli, Ethics Manager at Hugging Face<\/strong>, a French-American company that develops tools for machine learning, explained to us \u2013 On the one hand we have safety, as understood by companies like OpenAI and Anthropic: the idea that we must avoid long-term damage by optimizing systems\u201d.<\/p>\n\n\n\n<p>Let\u2019s start from here. The safety of artificial intelligence,&nbsp;<strong>AI Safety<\/strong>, is a concept very dear to Silicon Valley and&nbsp;<strong>to the schools of thought of effective altruism and longtermism<\/strong>. It has to do with the idea that so-called artificial general intelligence, that is, an AI more intelligent than a human being, may arrive sooner or later. The risk, in this context, could be extinction. And this is the&nbsp;<strong>starting point<\/strong>&nbsp;of the very concept of safety: creating technical barriers to ensure that the worst-case scenario does not occur.<\/p>\n\n\n\n<p>One AI Safety tool is&nbsp;<strong>what is called alignment&nbsp;<em>(here we highlighted its weak points)<\/em><\/strong>, a technique that should make AI reflect the values and intentions of humans. One alignment strategy is Reinforcement Learning From Human Feedback (RLHF), i.e. the review of an AI\u2019s outputs by groups of human beings, who choose among different options&nbsp;<strong>the most suitable or correct answer<\/strong>. It\u2019s a strategy that OpenAI is trying to automate: last July, the Sam Altman-led company&nbsp;<strong>announced its intention to allocate 20% of its computing power to building automatic alignment systems<\/strong>. 
AIs training AIs to think like humans, we might say.<\/p>\n\n\n\n<p>\u201cIn the alignment of artificial intelligence,&nbsp;<strong>there is an elephant in the room \u2013 researcher De Kai wrote in the New York Times<\/strong>&nbsp;\u2013 Alignment, but with what kinds of human goals? Philosophers, politicians and populations have long struggled with all the thorny trade-offs between different goals. Short-term instant gratification? Long-term happiness? Avoiding extinction? Individual freedoms? There is no universal consensus on these goals, let alone on&nbsp;<strong>even more burning issues<\/strong>&nbsp;such as gun rights, reproductive rights, or geopolitical conflicts. Indeed, the OpenAI saga amply demonstrates how impossible it is to align goals even among a small group of leaders in the same company.\u201d<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">AI Ethics, the responsible development of artificial intelligence<\/h2>\n\n\n\n<p>\u201cAI Ethics focuses on more immediate risks, such as bias, discrimination and manipulation \u2013 Pistilli clarified to us \u2013 And it starts from an assumption: there is no universal solution, no silver bullet, to solve all problems\u201d. According to&nbsp;<strong>Diletta Huyskes, head of advocacy at&nbsp;Privacy Network<\/strong>&nbsp;and CEO and co-founder of&nbsp;<strong>Immanence<\/strong>, \u201cthis vision is organic: creating cycles and value chains in the production of AI systems, where it is essential to ensure&nbsp;<strong>compliance with certain standards<\/strong>, such as the protection of fundamental rights\u201d.<\/p>\n\n\n\n<p>In other words, the&nbsp;<strong>difference<\/strong>&nbsp;lies in how safety is understood: the Safety approach provides for ex-post intervention, setting limits and engineering the risks of already developed systems, while ethics aims to intervene at every moment of the construction and development of artificial intelligence. 
Again: \u201cAt Immanence, the main activity is to support technical and governance teams in the implementation of AI systems, both in&nbsp;<strong>building Ethics by Design technologies<\/strong>&nbsp;and in evaluating already existing technologies. This involves analyzing the risks and impacts of these technologies and providing recommendations on how to mitigate them.\u201d<\/p>\n\n\n\n<p>Pistilli specified that \u201c<strong>this perspective can be applied at two key moments&nbsp;<\/strong>of the pipeline: in the development phase, where ethics can be integrated into specific decisions such as the choice of dataset or architecture, and in the deployment phase, where action can be taken on use cases and high-risk sectors\u201d. According to Huyskes, however, \u201call decisions made during every moment of AI design, from dataset creation to model choice, must be conscious. The idea is that specific ethical needs are determined by the&nbsp;<strong>context in which each project fits<\/strong>. For example, evaluating discrimination or impacts on certain social categories, or human interaction with software or a bot, requires careful consideration of the specific context.\u201d<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">How to intervene today?<\/h2>\n\n\n\n<p>The responsible development of systems also seems to be the target of&nbsp;<strong>Europe\u2019s AI Act<\/strong>, approved last December 9th. Starting from risk, the regulation provides for increasing degrees of supervision and restriction for artificial intelligence systems. 
In other words, companies that produce AI tools with greater potential for harm to individuals and society will have to provide regulators with&nbsp;<strong>risk assessments<\/strong>, details of what data was used to train the systems, and assurances that the software does not cause harm, such as perpetuating racial bias.<\/p>\n\n\n\n<p>However,&nbsp;<strong>the rules will not come into force for two years<\/strong>: an enormous amount of time, especially for artificial intelligence. In this period, users will have to find countermeasures and learn to live with systems trained on an enormous amount of data, including much&nbsp;<strong>copyrighted content&nbsp;<em>(here is an example)<\/em><\/strong>. Furthermore, the training material has (almost) never been disclosed and is an industrial secret, a sort of Coca-Cola formula for artificial intelligence: \u201cThese AIs often have one main problem \u2013 AI ethicist Enrico Panai explained to us \u2013 If the data has been collected incorrectly, with internal biases, discrepancies, or&nbsp;<strong>omitting certain types of information<\/strong>, the models work in a partial way, reproducing biases and particular visions of the&nbsp;world. Moreover, \u2018dato\u2019, the Italian word for data, is a past participle: it is something that happened in the past, which however risks being constantly repeated in the future\u201d.<\/p>\n\n\n\n<p>And this is&nbsp;<strong>what can happen with generative artificial intelligence<\/strong>: trained on much of the material on the Web, these systems can, among other things, replicate biases or produce misinformation. These are effects that every Internet user is already experiencing and that are already having consequences for the quality of information on the Web: \u201cLet\u2019s imagine we have three bottles in the kitchen:&nbsp;<strong>water, cheap white wine and an \u201988 Monbazillac<\/strong>&nbsp;\u2013 Panai told us \u2013 These bottles are magical: you enter the kitchen, say you want a new bottle and an automatic system creates a mixture. 
You will have an acceptable wine, but not an excellent bottle. We can always refine the instructions, i.e. the prompt\u201d. Again: \u201cLet\u2019s imagine this same situation with 100 million bottles, which are the parameters.&nbsp;<strong>You will never have an original bottle again<\/strong>, only ever poorer mixtures. The risk is that in the future the average quality of content on the Internet will decrease more and more, that everything will become a mixed bag.\u201d<\/p>\n\n\n\n<p>In conclusion, Pistilli clarified to us that \u201c<strong>palliative measures exist&nbsp;<\/strong>that can be adopted to guide these systems responsibly, for example by focusing on the value of consent, such as opt-out mechanisms for data, or watermarking to label AI-generated content. However, these are not easy solutions to implement. But what if the problem were the foundation models,&nbsp;<strong>the large, vast systems like ChatGPT<\/strong>? It seems to me that there is a growing need for AI models that carry out specific tasks, controlled both ethically and technically. And this could be a path towards a contextualized and more controllable use of artificial intelligence.\u201d<\/p>\n","protected":false},"excerpt":{"rendered":"<p>There is one particularly interesting aspect of the&nbsp;video published by Google&nbsp;for Gemini\u2019s debut. Let\u2019s start with a now established fact:&nbsp;the content was heavily edited to exaggerate the system\u2019s capabilities and position it as a credible competitor to OpenAI. 
Moreover, after the soap opera between Sam Altman and the ChatGPT company, it was important for [&hellip;]<\/p>\n","protected":false},"author":2,"featured_media":22791,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_seopress_robots_primary_cat":"none","_seopress_titles_title":"","_seopress_titles_desc":"","_seopress_robots_index":"","footnotes":""},"categories":[2],"tags":[],"class_list":["post-22789","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-news"],"rttpg_featured_image_url":{"full":["https:\/\/web3unplugged.io\/blog\/wp-content\/uploads\/2023\/12\/Untitled-14.jpg",1800,1020,false],"landscape":["https:\/\/web3unplugged.io\/blog\/wp-content\/uploads\/2023\/12\/Untitled-14.jpg",1800,1020,false],"portraits":["https:\/\/web3unplugged.io\/blog\/wp-content\/uploads\/2023\/12\/Untitled-14.jpg",1800,1020,false],"thumbnail":["https:\/\/web3unplugged.io\/blog\/wp-content\/uploads\/2023\/12\/Untitled-14-150x150.jpg",150,150,true],"medium":["https:\/\/web3unplugged.io\/blog\/wp-content\/uploads\/2023\/12\/Untitled-14-300x170.jpg",300,170,true],"large":["https:\/\/web3unplugged.io\/blog\/wp-content\/uploads\/2023\/12\/Untitled-14-1024x580.jpg",1024,580,true],"1536x1536":["https:\/\/web3unplugged.io\/blog\/wp-content\/uploads\/2023\/12\/Untitled-14-1536x870.jpg",1536,870,true],"2048x2048":["https:\/\/web3unplugged.io\/blog\/wp-content\/uploads\/2023\/12\/Untitled-14.jpg",1800,1020,false],"post-thumbnail":["https:\/\/web3unplugged.io\/blog\/wp-content\/uploads\/2023\/12\/Untitled-14.jpg",741,420,false],"graptor-sq-xs":["https:\/\/web3unplugged.io\/blog\/wp-content\/uploads\/2023\/12\/Untitled-14.jpg",100,57,false]},"rttpg_author":{"display_name":"Admin CG","author_link":"https:\/\/web3unplugged.io\/blog\/author\/admin-cg\/"},"rttpg_comment":0,"rttpg_category":"<a href=\"https:\/\/web3unplugged.io\/blog\/category\/news\/\" rel=\"category 
tag\">news<\/a>","rttpg_excerpt":"There is one aspect of the&nbsp;video published by Google&nbsp;for Gemini\u2019s debut particularly interesting. Let\u2019s start with a now established fact:&nbsp;the content has been largely altered, to oversize the system\u2019s capabilities, to position it as a credible competitor for OpenAI. Moreover, after the soap opera between Sam Altman and the ChatGPT company, it was important for&hellip;","_links":{"self":[{"href":"https:\/\/web3unplugged.io\/blog\/wp-json\/wp\/v2\/posts\/22789","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/web3unplugged.io\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/web3unplugged.io\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/web3unplugged.io\/blog\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/web3unplugged.io\/blog\/wp-json\/wp\/v2\/comments?post=22789"}],"version-history":[{"count":1,"href":"https:\/\/web3unplugged.io\/blog\/wp-json\/wp\/v2\/posts\/22789\/revisions"}],"predecessor-version":[{"id":22792,"href":"https:\/\/web3unplugged.io\/blog\/wp-json\/wp\/v2\/posts\/22789\/revisions\/22792"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/web3unplugged.io\/blog\/wp-json\/wp\/v2\/media\/22791"}],"wp:attachment":[{"href":"https:\/\/web3unplugged.io\/blog\/wp-json\/wp\/v2\/media?parent=22789"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/web3unplugged.io\/blog\/wp-json\/wp\/v2\/categories?post=22789"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/web3unplugged.io\/blog\/wp-json\/wp\/v2\/tags?post=22789"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}