🔬 Research Summary by Suman Devkota, Bishal Lamichhane, Aniket Kumar Singh, and Uttam Dhakal.
Suman Devkota is a Manufacturing Engineer at First Solar and has a Master’s in Electrical Engineering from Youngstown State University.
Bishal Lamichhane is a PhD student pursuing Statistics and Data Science at the University of Nevada, Reno.
Aniket Kumar Singh is a Vision Systems Engineer at Ultium Cells and has a Master’s in Computing and Information Systems from Youngstown State University.
Uttam Dhakal is a graduate student pursuing a Master’s in Electrical Engineering at Youngstown State University.
[Original Paper by Aniket Kumar Singh, Suman Devkota, Bishal Lamichhane, Uttam Dhakal, and Chandra Dhakal]
Overview: The ubiquitous adoption of Large Language Models (LLMs) like ChatGPT (GPT-3.5) and GPT-4 across various applications has raised concerns regarding their trustworthiness. This research explores the alignment between self-assessed confidence and actual performance in large language models like GPT-3.5, GPT-4, BARD, Google PaLM, LLaMA-2, and Claude, shedding light on critical areas requiring caution.
Introduction
The integration of LLMs into diverse applications shows how capable these models have become. As they evolve, their capabilities grow increasingly impressive, but so does the need to assess their behavior from cognitive and psychological perspectives. With current models, there is real concern about how far their responses can be trusted. Our research dives into the confidence-competence gap in models such as GPT-3.5, GPT-4, BARD, Google PaLM, LLaMA-2, and Claude. A carefully designed experiment built on diverse questionnaires and real-world scenarios illuminates how these language models express confidence in their responses, revealing intriguing behavior. GPT-4 demonstrated high confidence regardless of correctness, while other models like Google PaLM underestimated their capabilities. The study also found that certain models were more confident when answering questions in particular domains, which suggests that fine-tuning smaller models for specific tasks might be more beneficial where well-calibrated self-assessment is desired. These results urge researchers and engineers to be especially cautious when working with language models, particularly in critical real-world applications where incorrect responses could lead to significant repercussions.
Key Insights
The Complex Landscape of Confidence in LLMs
Experimental Set-up
The authors implemented a rigorous experimental framework to investigate the self-assessment of different language models. The questionnaires spanned multiple domains with varying levels of difficulty. For every question, each model’s confidence was recorded both before and after it produced an answer. These records supported an in-depth analysis of the accuracy of the LLMs’ responses and of how their confidence evolved, providing insight into the confidence-competence gap across problem scenarios and the difficulty spectrum.
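To make the setup concrete, the sketch below shows one way such a pre- and post-answer confidence elicitation could be scripted. It is a minimal illustration, not the authors’ exact protocol: `query_model`, the prompt wording, and the model identifiers are all placeholders assumed for the example.

```python
from typing import Callable

def elicit_confidence(query_model: Callable[[str, str], str],
                      model_name: str,
                      question: str) -> dict:
    """Record confidence before answering, the answer itself, and confidence after."""
    # Pre-assessment: ask the model to rate its confidence before it answers.
    pre = query_model(model_name,
                      "On a scale of 0-100, how confident are you that you can "
                      f"answer the following question correctly?\n{question}")
    # The model's actual answer to the question.
    answer = query_model(model_name, question)
    # Post-assessment: ask the model to rate its confidence in the answer it gave.
    post = query_model(model_name,
                       "On a scale of 0-100, how confident are you that your "
                       f"answer was correct?\nQuestion: {question}\nAnswer: {answer}")
    return {"model": model_name, "question": question,
            "pre_confidence": pre, "answer": answer, "post_confidence": post}

# Hypothetical usage over a questionnaire spanning domains and difficulty levels:
# records = [elicit_confidence(query_model, m, q)
#            for m in ["gpt-4", "claude-2", "palm-2"]   # illustrative identifiers
#            for q in questionnaire]
```

Recording both scores for every question is what allows the later analysis to compare pre- and post-answer confidence against correctness.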
Intricate Dynamics of Self-assessment
The paper revealed a complex landscape in which models like GPT-4 consistently exhibited high confidence at every difficulty level, regardless of domain or the correctness of the answer. GPT-4 was correct only 62.5% of the time, yet remained confident on most problems. Models like Claude-2 and Claude-Instant, by contrast, showed greater variability in their confidence scores depending on task difficulty, revealing a more adaptive self-assessment mechanism that responds to the complexity of the problem. How these models assess themselves is not just a matter of numbers; it shapes their usefulness and reliability in real-life applications. GPT-4’s unwavering confidence across domains and problem levels could be a problem in scenarios where overconfidence leads to misleading information.
Patterns in Confidence-Competence Alignment
Diving deeper, the paper examines how confident the models are before and after they attempt a task, and whether they answer correctly. Claude-2 adjusted its confidence after answering and, in some instances, revised its self-evaluation in the right direction. Google’s Bard, by contrast, maintained the same evaluation before and after answering, regardless of whether its answer was correct. These detailed observations show that self-assessment in these models is a complex phenomenon, and we need to understand it better before relying on them in real-world applications where dependability is a concern.
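As an illustration of how such observations could be quantified, the hypothetical snippet below summarizes each model’s confidence-competence gap and its average pre-to-post confidence shift. It assumes records like those sketched earlier, with confidences already parsed to numbers and a `correct` flag added after grading; none of this comes from the paper itself.

```python
from collections import defaultdict

def confidence_competence_gap(records: list[dict]) -> dict[str, dict]:
    """Compare mean self-reported confidence against accuracy for each model."""
    grouped = defaultdict(list)
    for r in records:
        grouped[r["model"]].append(r)

    summary = {}
    for model, rows in grouped.items():
        accuracy = 100 * sum(r["correct"] for r in rows) / len(rows)
        mean_post = sum(r["post_confidence"] for r in rows) / len(rows)
        mean_shift = sum(r["post_confidence"] - r["pre_confidence"] for r in rows) / len(rows)
        summary[model] = {
            "accuracy_pct": round(accuracy, 1),
            "mean_post_confidence": round(mean_post, 1),
            "gap": round(mean_post - accuracy, 1),         # positive => overconfident
            "mean_confidence_shift": round(mean_shift, 1)  # pre -> post adjustment
        }
    return summary
```

A large positive gap would correspond to the overconfident pattern described for GPT-4, while a near-zero gap with a meaningful confidence shift would look more like Claude-2’s adaptive behavior.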
Between the lines
The findings of this study matter for academia and well beyond it, touching the critical realm of AI ethics and safety. The study sheds light on the confidence-competence gap, highlighting the risks of relying solely on the self-assessed confidence of LLMs, especially in critical applications such as healthcare, the legal system, and emergency response. Trusting these AI systems without scrutiny can lead to severe consequences: the study shows that LLMs make mistakes and yet remain confident, which poses significant challenges in high-stakes settings. Although the study offers a broad perspective, it suggests diving deeper into how AI performs in specific domains with critical applications. Doing so can enhance the reliability and fairness of AI when it aids us in critical decision-making. The study underscores the need for more focused research in these domains, which is crucial for advancing AI safety, reducing biases in AI-driven decision-making, and fostering a more responsible and ethically grounded integration of AI in real-world scenarios.