Artificial intelligence is experiencing a remarkable breakthrough. New research indicates that AI is beginning to “think” in a way that resembles human thought. Traditionally, the focus of AI development has been on precision and the ability to perform large-scale tasks. However, a new group of researchers is exploring how AI makes decisions, seeking to make it more like the human mind.
To solve the problem of AI “hallucinations”, where the system generates incorrect answers, these researchers are introducing more humanized decision-making processes. Convolutional neural networks, for example, are essential for computers to understand images, identifying shapes and patterns. However, even these advanced networks lack the nuances of the human decision-making process.
To overcome this limitation, researchers developed RTNet, a neural network that incorporates cognitive models from neuroscience. Consisting of five convolutional layers and three fully connected layers, RTNet combines the image-processing capabilities of AI with the dynamic, stochastic reasoning of humans. The system processes each image several times, using a Bayesian neural network that mimics the variability of human neurons; the researchers refer to this as “noisy accumulation”.
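The core idea of noisy accumulation — repeatedly sampling noisy evidence and summing it until one option crosses a decision threshold — can be illustrated with a minimal NumPy sketch. This is a toy stand-in, not RTNet's actual implementation: the function name, the evidence means, and the threshold and noise values are all illustrative assumptions.

```python
import numpy as np

def noisy_accumulation(mean_evidence, threshold=5.0, noise=1.0,
                       max_steps=1000, rng=None):
    """Accumulate noisy per-class evidence until one total crosses the threshold.

    `mean_evidence` stands in for the average output of one stochastic
    forward pass; each step adds that mean plus fresh Gaussian noise.
    Returns (decision, steps taken, final evidence totals).
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    totals = np.zeros_like(mean_evidence, dtype=float)
    for step in range(1, max_steps + 1):
        totals += mean_evidence + rng.normal(0.0, noise, size=totals.shape)
        if totals.max() >= threshold:
            break
    return int(totals.argmax()), step, totals

# Toy example: digit 3 has slightly stronger mean evidence than the rest.
means = np.full(10, 0.1)
means[3] = 0.5
decision, steps, totals = noisy_accumulation(means)
```

Because each run draws fresh noise, the same input can yield different decisions and different response times — the trial-to-trial variability the researchers were after.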
RTNet was tested on the MNIST dataset, a database of handwritten digits widely used in machine learning experiments. To make the task harder, visual noise was added to the images, making the digits more difficult to read. During development, the team “didn’t just check whether the model correctly determined the digit the image showed; they also checked how it compared to a group of 60 real humans who performed the same task more than 960 times each.” The study produced one of the largest datasets on human responses to MNIST, addressing a known gap. As Farshad Rafiei, author of the study, noted: “In general, we don’t have enough human data in the existing computer science literature.”
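The noise-injection step is simple to sketch in NumPy. The paper does not specify the exact noise procedure used here; this example assumes additive Gaussian pixel noise, and the "digit" is a crude synthetic stand-in rather than a real MNIST image.

```python
import numpy as np

def add_visual_noise(image, noise_std=0.3, rng=None):
    """Add Gaussian pixel noise and clip back to the valid [0, 1] range."""
    rng = rng if rng is not None else np.random.default_rng(42)
    noisy = image + rng.normal(0.0, noise_std, size=image.shape)
    return np.clip(noisy, 0.0, 1.0)

# Stand-in for a 28x28 MNIST digit with pixel values in [0, 1].
digit = np.zeros((28, 28))
digit[8:20, 12:16] = 1.0  # a crude vertical stroke, roughly a "1"
noisy_digit = add_visual_noise(digit, noise_std=0.5)
```

Raising `noise_std` makes the digit progressively harder to read, for humans and networks alike, which is what makes the task a useful probe of decision-making under uncertainty.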
A crucial aspect of the research was the inclusion of the “speed-accuracy trade-off” (SAT). This trade-off reflects the balance between the time spent solving a problem and the accuracy of the answer. In addition to speed and accuracy, confidence was a third important criterion. Unlike conventional AI models, which avoid responding when uncertain, RTNet was able to assign a confidence rating to each decision, similar to human behavior.
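In an accumulation model, the speed-accuracy trade-off falls out of a single knob: the decision threshold. The self-contained sketch below (again an illustrative toy, not RTNet's code; the evidence means, thresholds, and softmax-style confidence are all assumptions) compares a low and a high threshold over many simulated trials.

```python
import numpy as np

def decide(mean_evidence, threshold, noise=1.0, rng=None, max_steps=10_000):
    """Accumulate noisy evidence until one class total crosses the threshold;
    return the decision, the steps taken, and a softmax-style confidence."""
    rng = rng if rng is not None else np.random.default_rng()
    totals = np.zeros_like(mean_evidence, dtype=float)
    for step in range(1, max_steps + 1):
        totals += mean_evidence + rng.normal(0.0, noise, size=totals.shape)
        if totals.max() >= threshold:
            break
    probs = np.exp(totals - totals.max())          # softmax over evidence
    return int(totals.argmax()), step, float(probs.max() / probs.sum())

rng = np.random.default_rng(0)
means = np.array([0.4, 0.1, 0.1, 0.1])  # class 0 is the correct answer
summary = {}
for threshold in (2.0, 10.0):
    trials = [decide(means, threshold, rng=rng) for _ in range(200)]
    accuracy = np.mean([d == 0 for d, _, _ in trials])
    mean_steps = np.mean([s for _, s, _ in trials])
    summary[threshold] = (accuracy, mean_steps)
```

A low threshold yields fast, error-prone answers; a high threshold yields slower but more accurate ones — and the confidence value provides a graded "how sure am I" signal for each decision, rather than a refusal to answer.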
RTNet not only produced correct answers but also mimicked the pattern of human decision-making, including variation in responses to the same stimulus. The more time it spent on a decision, the more accurate its answers became. RTNet’s evidence-accumulation system also allowed for robust empirical validation, suggesting that future iterations could come even closer to human brain behavior.
One of RTNet’s most intriguing features is the way it handles confidence in its answers. Traditional AI models often avoid giving an answer when they are unsure, but RTNet assigns a level of confidence to each decision, reflecting the likelihood of it being correct. This closely matches human behavior, where people also assign levels of confidence to their decisions, even when they are not completely sure.
RTNet’s ability to mimic human decision-making is particularly noteworthy in situations where the same stimulus can generate different responses depending on the context or the time spent analyzing it. This was demonstrated during the tests with MNIST, where RTNet showed that the more time it spent analyzing an image, the more accurate its response became. This characteristic is a direct reflection of human behavior, where reflection and time can lead to more accurate decisions.
In addition, RTNet’s evidence-accumulation design lends itself to empirical validation in a way other AI models do not. The network not only processes information efficiently, but also weighs its responses in a way that approximates how humans check and re-evaluate their own decisions. This continuous process of evaluation and re-evaluation is crucial to accurate and reliable decision-making.
The article, published in the journal Nature Human Behaviour under the title “The neural network RTNet exhibits the signatures of human perceptual decision-making”, highlights the importance of this advance. The research suggests that future development of RTNet could include more recurrent systems, further increasing its ability to predict and imitate human behavior. This evolution could allow RTNet to extrapolate from simple past instances to solve more complex problems.
In summary, RTNet represents a significant advance in the field of machine learning, integrating aspects of human reasoning into its decision-making processes. This development not only improves the accuracy and reliability of AI, but also opens up new possibilities for the application of neural networks in areas that require a deep and nuanced understanding of decision-making. RTNet demonstrates that AI is getting closer to replicating the complexity of human thought, marking an important step towards the future of artificial intelligence.