Does it make sense to talk about the safety of artificial intelligence?

Admin CG, December 27, 2023

There is one aspect of the video Google published for Gemini’s debut that is particularly interesting. Let’s start with a now established fact: the content was heavily edited to exaggerate the system’s capabilities and to position it as a credible competitor to OpenAI. Moreover, after the soap opera between Sam Altman and the company behind ChatGPT, it was important for Big G to send a signal to the market, and to unveil a new product regardless of its actual capabilities and level of development.

The rush to commercialization and marketing, which matter more than safety and research, also appears to have been among the causes of Altman’s dismissal (later revoked) from OpenAI. According to many analysts, the risk in this commercial race is losing sight of safety and developing systems that are dangerous for companies and for users.

And so the point today is precisely to understand what we are talking about when we talk about safe artificial intelligence.

AI Safety, aligning artificial intelligence with human values

“An important distinction in the field of artificial intelligence, in both academia and industry, is that between AI Safety and AI Ethics,” Giada Pistilli, Ethics Manager at Hugging Face, a French-American company that develops tools for machine learning, explained to us. “On the one hand we have safety, as understood by companies like OpenAI and Anthropic: the idea that we must avoid long-term damage by optimizing systems.”

Let’s start from here. The safety of artificial intelligence, AI Safety, is a concept very dear to Silicon Valley and to the currents of thought of effective altruism and longtermism. It has to do with the idea that so-called artificial general intelligence, that is, an AI more intelligent than a human being, may sooner or later arrive. The risk, in this context, could be extinction. And this is the starting point of the very concept of safety: creating technical barriers to ensure that the worst-case scenario does not occur.

One AI Safety tool is what is called alignment (here we highlighted its weak points), a technique meant to make AI reflect the values and intentions of humans. One alignment strategy is Reinforcement Learning From Human Feedback (RLHF), i.e. the review of an AI’s outputs by groups of human beings, who choose among different options the most suitable or correct answer. It’s a strategy that OpenAI is trying to automate: last July, the Sam Altman-led company announced its intention to dedicate 20% of its computing resources to creating automatic alignment systems. AIs training AIs to think like humans, we might say.
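To make the mechanism a little more concrete, here is a minimal, purely illustrative Python sketch of the human-preference step that RLHF relies on: the model produces two candidate answers, a person picks the better one, and the resulting comparisons would later be used to train a reward model. All function and class names are invented for the example; this is not OpenAI’s actual pipeline.

```python
# Minimal, illustrative sketch of the human-preference step in RLHF.
# Names (generate_candidates, PreferencePair, ask_human) are hypothetical.

from dataclasses import dataclass
import random

@dataclass
class PreferencePair:
    prompt: str
    chosen: str      # answer the human judged better
    rejected: str    # answer the human judged worse

def generate_candidates(prompt: str) -> tuple[str, str]:
    """Stand-in for sampling two answers from a language model."""
    return (f"Answer A to: {prompt}", f"Answer B to: {prompt}")

def ask_human(prompt: str, a: str, b: str) -> str:
    """Stand-in for a human annotator choosing the better answer."""
    return random.choice([a, b])  # in reality, a person decides

def collect_preferences(prompts: list[str]) -> list[PreferencePair]:
    pairs = []
    for p in prompts:
        a, b = generate_candidates(p)
        best = ask_human(p, a, b)
        worst = b if best == a else a
        pairs.append(PreferencePair(prompt=p, chosen=best, rejected=worst))
    return pairs

if __name__ == "__main__":
    data = collect_preferences(["What is AI Safety?", "Explain alignment."])
    # These pairs would then be used to train a reward model that scores new outputs.
    for pair in data:
        print(pair.prompt, "->", pair.chosen)
```

The automation OpenAI is pursuing would, roughly speaking, replace the human step with another model; the comparison data itself remains the core ingredient.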

“In the alignment of artificial intelligence, there is an elephant in the room,” the researcher De Kai wrote in the New York Times. “Alignment, but with what kinds of human goals? Philosophers, politicians and populations have long struggled with all the thorny trade-offs between different goals. Short-term instant gratification? Long-term happiness? Avoiding extinction? Individual freedoms? There is no universal consensus on these goals, let alone on even more burning issues such as gun rights, reproductive rights, or geopolitical conflicts. Indeed, the OpenAI saga amply demonstrates how impossible it is to align goals even among a small group of leaders in the same company.”

AI Ethics, the responsible development of artificial intelligence

“AI Ethics focuses on more immediate risks, such as bias, discrimination and manipulation,” Pistilli clarified to us. “And it starts from an assumption: there is no universal solution, no silver bullet, that solves every problem.” According to Diletta Huyskes, head of advocacy at Privacy Network and CEO and co-founder of Immanence, “this vision is organic: creating cycles and value chains in the production of AI systems, where it is essential to ensure compliance with certain standards, such as the protection of fundamental rights.”

In other words, the difference lies in how safety is understood: the Safety approach provides for ex-post intervention, setting limits and engineering the risks of systems that have already been developed, while ethics aims to intervene at every moment of the construction and development of artificial intelligence. Again: “At Immanence, the main activity is to support technical and governance teams in the implementation of AI systems, both in building Ethics by Design technologies and in evaluating already existing technologies. This involves analyzing the risks and impacts of these technologies and providing recommendations on how to mitigate them.”

Pistilli specified that “this perspective can be applied at two key moments of the pipeline: in the development phase, where ethics can be integrated into specific decisions such as the choice of dataset or architecture, and in the deployment phase, where action can be taken on use cases and high-risk sectors”. According to Huyskes, however, “all decisions made at every moment of AI design, from dataset creation to model choice, must be conscious. The idea is that specific ethical needs are determined by the context in which each project fits. For example, evaluating discrimination or impacts on certain social categories, or human interaction with software or a bot, requires careful consideration of the specific context.”

How to intervene today?

Europe’s AI Act, approved last December 9th, also seems to target the responsible development of systems. The regulation, built around risk, provides for increasing degrees of supervision and restrictions for artificial intelligence systems. In other words, companies that produce AI tools with greater potential for harm to individuals and society will have to provide regulators with risk assessments, details of the data used to train the systems, and assurances that the software does not cause harm, such as by perpetuating racial biases.

However, the rules will not come into force for two years: an enormous amount of time, especially for artificial intelligence. In this period, users will have to find countermeasures and live with systems trained on an enormous amount of data, including much copyrighted content (here is an example). Furthermore, the training material has (almost) never been disclosed and is an industrial secret, a sort of Coca-Cola formula for artificial intelligence: “These AIs often have one main problem,” AI ethicist Enrico Panai explained to us. “If the data has been collected incorrectly, with internal biases, discrepancies, or omitting certain types of information, the models work in a partial way; they reproduce biases and particular visions of the world. Moreover, ‘datum’ is a past participle: it is something that happened in the past, which nevertheless risks being constantly repeated in the future.”
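Panai’s point about data collected with internal biases can be illustrated with a deliberately tiny audit: before training, one can at least measure how a sensitive attribute is distributed across the examples. The column names and records below are invented for illustration; real bias audits are far more nuanced.

```python
# Toy illustration of how a skewed training set bakes in a partial
# "vision of the world": we audit how a sensitive attribute is distributed.
# The field names and example data are invented for this sketch.

from collections import Counter

training_examples = [
    {"text": "CV of candidate 1", "gender": "male",   "label": "hired"},
    {"text": "CV of candidate 2", "gender": "male",   "label": "hired"},
    {"text": "CV of candidate 3", "gender": "female", "label": "rejected"},
    {"text": "CV of candidate 4", "gender": "male",   "label": "hired"},
]

def hire_rate_by_group(examples, attribute="gender"):
    """Share of 'hired' labels per group: a crude proxy for dataset bias."""
    totals, hired = Counter(), Counter()
    for ex in examples:
        group = ex[attribute]
        totals[group] += 1
        hired[group] += ex["label"] == "hired"
    return {g: hired[g] / totals[g] for g in totals}

print(hire_rate_by_group(training_examples))
# {'male': 1.0, 'female': 0.0} -> a model trained on this data will
# happily reproduce the past, which is exactly the risk Panai describes.
```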

And this is what can happen with generative artificial intelligence: trained on much of the material on the Web, these systems can, among other things, replicate biases or produce misinformation. These are all effects that every Internet user is already experiencing, and which are already affecting the quality of information on the Web: “Let’s imagine we have three bottles in the kitchen: water, cheap white wine and an ’88 Monbazillac,” Panai told us. “These bottles are magical: you enter the kitchen, say you want a new bottle, and an automatic system creates a blend. You will get an acceptable wine, but not an excellent bottle. We can always refine the instruction, i.e. the prompt.” Again: “Now imagine this same situation with 100 million bottles, which are the parameters. You will never have an original bottle again, only ever poorer blends. The risk is that in the future the average quality of content on the Internet will decline more and more, that everything will become a mixed bag.”
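Panai’s bottle metaphor can be turned into a toy numerical experiment: start from a set of distinct “bottles” (random values) and, at each generation, replace them with blends (averages) of the previous ones. The spread collapses toward a uniform average. This is only an illustrative sketch of the “ever poorer mixtures” he describes, not a model of any real training pipeline.

```python
# Toy numerical version of the bottle metaphor: each new "bottle" is a
# blend (average) of a few randomly picked old ones. Diversity shrinks
# generation after generation. Purely illustrative.

import random
import statistics

def blend_generation(bottles, n_new):
    """Each new bottle is the average of three randomly picked old ones."""
    return [statistics.mean(random.sample(bottles, k=3)) for _ in range(n_new)]

bottles = [random.uniform(0, 100) for _ in range(100)]  # the "original" content
for generation in range(5):
    print(f"gen {generation}: spread = {statistics.pstdev(bottles):.2f}")
    bottles = blend_generation(bottles, n_new=100)
# The spread (standard deviation) falls at every generation:
# ever poorer mixtures, never an original bottle again.
```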

In conclusion, Pistilli clarified to us that “palliative measures exist that can be adopted to guide these systems responsibly, for example by focusing on the value of consent, with opt-out mechanisms for data, or on watermarking to label AI-generated content. However, these are not easy solutions to implement. But what if the problem were the foundation models themselves, the large, general-purpose systems like ChatGPT? It seems to me that there is an increasing need for AI models that carry out specific tasks, controlled both ethically and technically. And this could be a path towards a contextualized and more controllable use of artificial intelligence.”
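As an example of how simple in principle (and how dependent on good metadata in practice) an opt-out mechanism can be, here is a minimal sketch: records carrying a consent flag are filtered out before training. The flag name and record format are assumptions made for this illustration; real opt-out signals and registries are considerably more complex.

```python
# Minimal sketch of a consent-aware filter for training data.
# The "allow_training" flag and record structure are assumptions made
# for illustration; real opt-out mechanisms are more involved.

documents = [
    {"url": "https://example.org/a", "text": "sample text", "allow_training": True},
    {"url": "https://example.org/b", "text": "sample text", "allow_training": False},  # opted out
    {"url": "https://example.org/c", "text": "sample text", "allow_training": True},
]

training_corpus = [doc for doc in documents if doc["allow_training"]]
print(f"kept {len(training_corpus)} of {len(documents)} documents for training")
```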

