- Researchers developed a method to reduce AI overconfidence and improve response accuracy
- The approach uses a warm-up phase with random noise before training on real data
- This method helps AI models recognize when they do not know an answer, mimicking human learning
Artificial Intelligence models often generate vague, random, repetitive, and sometimes wrong responses when they do not know the correct answer, which leads to confusion. Now, researchers claim to have developed a way to make AI models say "no" on their own. A group of South Korean researchers has proposed a new approach to the problem of "overconfidence", which, according to the team from the Korea Advanced Institute of Science and Technology (KAIST), is one of the most critical risks of AI in areas such as autonomous driving and medical diagnosis.
The researchers developed a training method intended to reduce overconfidence by helping a model better assess what it does and does not know. As per the press release, a research team led by Professor Se-Bum Paik from the Department of Brain and Cognitive Sciences identified that random initialisation may be a fundamental cause of overconfidence in AI.
In the study, published in the journal Nature Machine Intelligence, the researchers proposed a "warm-up" strategy to address the issue: the neural network is briefly trained on random noise before learning from real data.
This mirrors human development, in which the brain generates spontaneous signals without external input, even before birth, and babies explore uncertainty through random movements before learning specific skills. Likewise, the researchers gave AI models a "warm-up phase" with random noise inputs before actual training.
The process simply means that before learning from real data, the AI model first learns the state of "I don't know anything yet", which results in better accuracy and more reliable confidence estimates.
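To make the idea concrete, here is a minimal, hedged sketch of such a warm-up phase. This is not the authors' code: it assumes a standard PyTorch classifier and pairs the noise inputs with random labels, so the network starts out producing near-uniform, low-confidence predictions before it ever sees real data. The function name `warmup_on_noise` and all hyperparameters are purely illustrative.

```python
# Illustrative sketch of a "warm-up on random noise" phase (not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

def warmup_on_noise(model, num_classes, input_shape, warmup_steps=500, batch_size=64, lr=1e-3):
    """Briefly train the model on random noise inputs paired with random labels.

    The intended effect, per the article's description, is to start the model
    in an "I don't know anything yet" state: since the noise carries no usable
    structure, minimising cross-entropy pushes outputs toward uniform,
    low-confidence probabilities.
    """
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    model.train()
    for _ in range(warmup_steps):
        # Random inputs and random labels: nothing meaningful to memorise here.
        x = torch.randn(batch_size, *input_shape)
        y = torch.randint(0, num_classes, (batch_size,))
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        optimizer.step()
    return model

# Usage: warm up a small MLP classifier before the usual training on real data.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 256), nn.ReLU(), nn.Linear(256, 10))
model = warmup_on_noise(model, num_classes=10, input_shape=(1, 28, 28))
# ...then proceed with the normal training loop on the real dataset.
```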
This study suggests the possibility that AI can go beyond simply producing correct answers and develop the ability to distinguish "what it knows" from "what it does not know".
"This study demonstrates that by incorporating key principles of brain development, AI can recognise its own knowledge state in a way that is more similar to humans," Professor Se-Bum Paik said.
"This is important because it helps AI understand when it is uncertain or might be mistaken, not just improve how often it gives the right answer."