Scientists have designed a new form of the Turing test for the era of artificial intelligence

Admin CG, January 11, 2024

Philip Nicholas Johnson-Laird of Princeton University and Marco Ragni of the Technical University of Chemnitz argue that, in the era of artificial intelligence, it is already clear that programs can successfully pass as human. The more pressing question, they suggest, is a different one: does the program think in the way that people think?
The classic Turing test does not require an AI to be self-aware or to understand its own reasoning. It focuses only on external interactions and reactions, neglecting the introspective aspect of human cognition.
The new evaluation framework to determine whether an AI truly thinks like a human has three main components:

1. Testing in psychological experiments
The researchers propose subjecting AI programs to a battery of psychological experiments designed to distinguish human-like reasoning from standard logical processes. These experiments examine various aspects of reasoning, including how people infer possibilities from compound statements and how they condense consistent possibilities into a single one. In general, people often deviate from formally correct logical procedures. (Note: this also raises the question of whether we really want artificial intelligence to reproduce human mistakes.)
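As a sketch of what such an experiment might look like in practice, the snippet below scores a program's answers on the classic Wason selection task, where the formally correct choice (the "E" and "7" cards) differs from the answer most people give (the "E" and "4" cards). The task set, scoring scheme, and answer encoding are illustrative assumptions, not part of the authors' protocol.

```python
# Illustrative sketch of step 1: compare a program's answers on a
# reasoning task against both the human-typical response and the
# formally correct one. The task list here is a hypothetical example.

TASKS = [
    {
        # Wason selection task: "If a card has a vowel on one side,
        # it has an even number on the other." Which of the cards
        # E, K, 4, 7 must be turned over to test the rule?
        "id": "wason",
        "formally_correct": {"E", "7"},  # the logically required choice
        "human_typical": {"E", "4"},     # the answer most people give
    },
]

def score(answers):
    """Count matches with the human-typical vs. formally correct
    responses; a human-like reasoner scores high on the former."""
    result = {"human_like": 0, "logic_like": 0}
    for task in TASKS:
        chosen = answers.get(task["id"], set())
        if chosen == task["human_typical"]:
            result["human_like"] += 1
        if chosen == task["formally_correct"]:
            result["logic_like"] += 1
    return result

print(score({"wason": {"E", "4"}}))  # a typical human answer pattern
```

Under this (invented) scoring, a program that reproduces human answer patterns would score high on `human_like` even where those answers are logically wrong.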

2. Self-reflection
The goal of this step is to assess how well the program understands its own way of thinking, a critical aspect of human cognition. The program should be able to introspect on its thought processes and explain its decisions. By asking questions that require awareness of one's own ways of thinking, the researchers try to see whether the AI exhibits human-like introspection. (Note: what is the relationship between introspection and self-awareness or consciousness? Today's chatbots can sometimes give explanations of their processes, yet deny that they are conscious beings.)
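One hedged way to operationalize this step: ask the program for an answer, then ask it to describe how it arrived at that answer, and check that the stated explanation contains actual reasoning content rather than boilerplate. The `model` callable below is a stand-in stub of my own invention; a real test would run against an actual AI system and use a far richer consistency check.

```python
# Sketch of step 2: probe whether a program can report on its own
# reasoning. `model` is a hypothetical stand-in mapping a prompt to
# a text reply; `answer_keywords` lists reasoning terms we expect a
# genuine self-explanation to mention.

def introspection_probe(model, question, answer_keywords):
    """Ask for an answer, then for an explanation, and check that the
    explanation mentions at least one expected reasoning step."""
    answer = model(question)
    explanation = model(f"How did you arrive at your answer to: {question}")
    explains = any(k in explanation.lower() for k in answer_keywords)
    return {"answer": answer, "explains_itself": explains}

# A trivial stub model, for demonstration only.
def stub_model(prompt):
    if prompt.startswith("How did you arrive"):
        return "I eliminated the options that contradicted the premises."
    return "Option B"

result = introspection_probe(
    stub_model,
    "Which option follows from the premises?",
    answer_keywords=["eliminat", "premise"],
)
print(result)
```

The keyword check is deliberately crude; the point is only the two-pass structure (answer, then self-report) that an introspection test would need.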

3. Examining the source code
In the last step, the researchers dive into the program's source code (Note: or rather, the structure and behavior of the components of the relevant neural network? See the comparison with brain imaging below.). The key here is detecting components known to simulate human performance, such as systems for rapid inference, for deliberate reasoning, and for interpreting concepts based on context and general knowledge. If a program's source code reflects these principles, the program is said to think in a human-like manner.
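To make the idea of such components concrete, here is a minimal sketch of the dual-process structure this step looks for: a fast, intuition-like pass backed by a slower deliberate pass. The class, the rounding heuristic, and the arithmetic task are invented for illustration and do not come from the paper; only the fast/deliberate split is the point.

```python
# Sketch of step 3: the kind of component structure a code inspection
# might look for. A fast, fallible heuristic pass stands in for rapid
# inference; an exact pass stands in for deliberate reasoning.

class DualProcessReasoner:
    def fast_guess(self, a, b):
        """Rapid inference: cheap but fallible. Here, round both
        operands to integers before multiplying."""
        return round(a) * round(b)

    def deliberate(self, a, b):
        """Deliberate reasoning: exact but (notionally) costly."""
        return a * b

    def answer(self, a, b, effort=False):
        """Default to the fast path; engage deliberation only on
        demand, mirroring how people recruit effortful thought
        selectively."""
        return self.deliberate(a, b) if effort else self.fast_guess(a, b)

r = DualProcessReasoner()
r.answer(1.5, 2.5)               # fast path: 2 * 2 = 4 (Python rounds halves to even)
r.answer(1.5, 2.5, effort=True)  # deliberate path: 3.75
```

An inspector applying step 3 would ask whether a program's internals contain an analogue of this division of labor, not whether it matches any particular implementation.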
By treating AI as a participant in cognitive experiments and even subjecting its code to analysis similar to a brain imaging study, the authors try to get closer to understanding whether AI systems really do think in a human-like way.

