The Confidence-Competence Gap in Large Language Models: A Cognitive Study

December 2, 2023

🔬 Research Summary by Suman Devkota, Bishal Lamichhane, Aniket Kumar Singh, and Uttam Dhakal.

Suman Devkota is a Manufacturing Engineer at First Solar and has a Master’s in Electrical Engineering from Youngstown State University.

Bishal Lamichhane is a PhD student pursuing Statistics and Data Science at the University of Nevada, Reno.

Aniket Kumar Singh is a Vision Systems Engineer at Ultium Cells and has a Master’s in Computing and Information Systems from Youngstown State University. 

Uttam Dhakal is a graduate student pursuing a Master's in Electrical Engineering at Youngstown State University.

[Original Paper by Aniket Kumar Singh, Suman Devkota, Bishal Lamichhane, Uttam Dhakal, and Chandra Dhakal]


Overview: The ubiquitous adoption of Large Language Models (LLMs) like ChatGPT (GPT-3.5) and GPT-4 across various applications has raised concerns regarding their trustworthiness. This research explores the alignment between self-assessed confidence and actual performance in large language models such as GPT-3.5, GPT-4, Bard, Google PaLM, LLaMA-2, and Claude, shedding light on critical areas requiring caution.


Introduction

The integration of LLMs into a wide range of applications shows how capable these models have become. As they evolve, their capabilities grow increasingly impressive, but so does the need to assess their behavior from cognitive and psychological perspectives; with current models, it is still unclear how far their responses can be trusted. Our research dives deep into the confidence-competence gap in models such as GPT-3.5, GPT-4, Bard, Google PaLM, LLaMA, and Claude. A carefully designed experiment built on diverse questionnaires and real-world scenarios illuminates how these language models express confidence in their responses, revealing intriguing behavior: GPT-4 demonstrated high confidence regardless of correctness, while other models, such as Google PaLM, underestimated their capabilities. The study also found that certain models were more confident when answering questions in particular domains, which suggests that fine-tuning smaller models for specific tasks may be more beneficial where well-calibrated self-assessment is required. These results direct researchers and engineers to be especially cautious when deploying language models in critical real-world applications, where incorrect responses could lead to significant repercussions.

Key Insights

The Complex Landscape of Confidence in LLMs

Experimental Set-up 

The authors implemented a rigorous experimental framework across various categories to investigate the self-assessment of different language models. The questionnaires were designed with varying levels of difficulty across multiple domains, and each model's confidence was recorded both before and after it attempted a question. These recorded responses enabled an in-depth analysis of the accuracy of the LLMs' answers and of how their confidence evolved, providing insight into the confidence-competence gap across different problem scenarios and the difficulty spectrum.
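To make this protocol concrete, here is a minimal sketch of the kind of loop that could collect pre- and post-answer confidence alongside correctness. The `ask_model` helper, the 0-100 confidence prompts, and the `TrialRecord` fields are illustrative assumptions standing in for the paper's actual harness and prompts.

```python
from dataclasses import dataclass

@dataclass
class TrialRecord:
    model: str
    domain: str
    difficulty: str         # e.g. "easy", "medium", "hard"
    pre_confidence: float   # self-reported confidence (0-100) before answering
    post_confidence: float  # self-reported confidence (0-100) after answering
    correct: bool           # whether the answer matched the ground truth

def ask_model(model: str, prompt: str) -> str:
    """Hypothetical stand-in for a call to the model's API; wire in a real client here."""
    return "42"  # dummy reply so the sketch runs end-to-end

def run_trial(model: str, question: dict) -> TrialRecord:
    # 1. Elicit confidence before the model attempts the question.
    pre = float(ask_model(model, "On a scale of 0-100, how confident are you "
                                 f"that you can answer: {question['text']}"))
    # 2. Pose the question itself.
    answer = ask_model(model, question["text"]).strip()
    # 3. Elicit confidence again, now that the model has committed to an answer.
    post = float(ask_model(model, "On a scale of 0-100, how confident are you "
                                  f"in your answer '{answer}'?"))
    return TrialRecord(model, question["domain"], question["difficulty"],
                       pre, post, answer == question["answer"])

# Example usage with a toy question; the model names are plain labels here.
questions = [{"text": "What is 6 x 7?", "domain": "arithmetic",
              "difficulty": "easy", "answer": "42"}]
records = [run_trial(m, q) for m in ("gpt-4", "claude-2") for q in questions]
```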

Intricate Dynamics of Self-assessment

The paper reveals a complex landscape in which models like GPT-4 consistently exhibited high confidence at every problem level, regardless of domain or the correctness of the answer: GPT-4 was correct 62.5% of the time yet reported high confidence on most problems. Models like Claude-2 and Claude-Instant, on the other hand, showed greater variability in their confidence scores depending on task difficulty, revealing a more adaptive self-assessment mechanism that responds to the complexity of the problem. How these models assess themselves is not just about numbers; it affects their usefulness and reliability in real-life applications. A model that is always confident, regardless of domain and problem level, can be a problem in scenarios where overconfidence leads to misleading information.

Patterns in Confidence-Competence Alignment

Diving deeper into the study, the paper looks at how confident the models are before and after they attempt a task and whether they answer correctly. Claude-2 adjusted its confidence after answering and, in some instances, revised its self-evaluation in the right direction. Google's Bard, by contrast, maintained the same evaluation before and after the question was asked, regardless of whether its answer was correct. These detailed observations show that self-assessment in these models is a complex task, and we need to understand this complexity better before relying on it in real-world applications, especially where reliability is a concern.
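As an illustration of how such pre/post comparisons can be quantified, the sketch below summarizes the `TrialRecord` list from the earlier snippet per model: accuracy, mean confidence before and after answering, a simple confidence-competence gap (mean post-answer confidence minus accuracy), and a confidence shift (post minus pre). These particular statistics are illustrative choices, not necessarily the ones used in the paper.

```python
from collections import defaultdict

def summarize(records):
    """Per-model accuracy, mean confidence, and two illustrative statistics:
    a confidence-competence gap (positive values suggest overconfidence) and
    a confidence shift (how much self-assessment moves after answering)."""
    by_model = defaultdict(list)
    for r in records:
        by_model[r.model].append(r)

    summary = {}
    for model, rows in by_model.items():
        accuracy  = 100.0 * sum(r.correct for r in rows) / len(rows)
        mean_pre  = sum(r.pre_confidence for r in rows) / len(rows)
        mean_post = sum(r.post_confidence for r in rows) / len(rows)
        summary[model] = {
            "accuracy_pct": accuracy,
            "mean_pre_confidence": mean_pre,
            "mean_post_confidence": mean_post,
            "confidence_competence_gap": mean_post - accuracy,  # confidence vs. correctness
            "confidence_shift": mean_post - mean_pre,           # pre- vs. post-answer confidence
        }
    return summary
```

On such a summary, a model that answers correctly 62.5% of the time while reporting high confidence on most problems, as described above for GPT-4, would show a large positive gap, whereas a well-calibrated model's gap would sit near zero.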

Between the lines 

The findings from this study are crucial for academia and reach well beyond it, touching the critical realm of AI ethics and safety. The study sheds light on the confidence-competence gap, highlighting the risks of relying solely on the self-assessed confidence of LLMs, especially in critical applications such as healthcare, the legal system, and emergency response. Trusting these AI systems without scrutiny can lead to severe consequences: the study shows that LLMs make mistakes while remaining confident, which poses significant challenges in critical applications. Although the study offers a broad perspective, it suggests diving deeper into how AI performs in specific, high-stakes domains. Doing so can enhance the reliability and fairness of AI when it aids critical decision-making. The study underscores the need for more focused research in these domains, which is crucial for advancing AI safety, reducing bias in AI-driven decision-making processes, and fostering a more responsible and ethically grounded integration of AI in real-world scenarios.

