Montreal AI Ethics Institute

Democratizing AI ethics literacy

The Confidence-Competence Gap in Large Language Models: A Cognitive Study

December 2, 2023

🔬 Research Summary by Suman Devkota, Bishal Lamichhane, Aniket Kumar Singh, and Uttam Dhakal.

Suman Devkota is a Manufacturing Engineer at First Solar and has a Master’s in Electrical Engineering from Youngstown State University.

Bishal Lamichhane is a PhD student in Statistics and Data Science at the University of Nevada, Reno.

Aniket Kumar Singh is a Vision Systems Engineer at Ultium Cells and has a Master’s in Computing and Information Systems from Youngstown State University. 

Uttam Dhakal is a graduate student pursuing a Master’s in Electrical Engineering at Youngstown State University.

[Original Paper by Aniket Kumar Singh, Suman Devkota, Bishal Lamichhane, Uttam Dhakal, Chandra Dhakal]


Overview: The ubiquitous adoption of Large Language Models (LLMs) like ChatGPT (GPT-3.5) and GPT-4 across various applications has raised concerns about their trustworthiness. This research explores the alignment between self-assessed confidence and actual performance in LLMs including GPT-3.5, GPT-4, Bard, Google PaLM, LLaMA-2, and Claude, shedding light on critical areas that require caution.


Introduction

The integration of LLMs into different applications shows how capable these models are. As these models evolve, their capabilities become increasingly impressive, but there is a growing need to assess their behavior from cognitive and psychological perspectives, and with current models there is a concern about trusting their responses. Our research dives deep into the confidence-competence gap in models such as GPT-3.5, GPT-4, Bard, Google PaLM, LLaMA-2, and Claude. Our experiment, built on diverse questionnaires and real-world scenarios, illuminates how these language models express confidence in their responses and reveals intriguing behavior: GPT-4 demonstrated high confidence regardless of correctness, while other models, such as Google PaLM, underestimated their capabilities. The study also found that certain models were more confident when answering questions in some domains than in others, which suggests that fine-tuning smaller models for specific tasks might be more beneficial where such cognitive capabilities are desired in an LLM’s responses. These results urge researchers and engineers to be extra cautious when working with language models, especially in critical real-world applications where incorrect responses could lead to significant repercussions.

Key Insights

The Complex Landscape of Confidence in LLMs

Experimental Set-up 

The authors implemented a rigorous experimental framework across various categories to investigate the self-assessment of different language models. The questionnaires were designed with varying levels of difficulty across multiple domains. For each model, the study recorded self-assessed confidence both before and after a question was answered. These records enabled an in-depth analysis of the accuracy of the LLMs’ responses and of how their confidence evolved, and provided insight into the confidence-competence gap across different problem scenarios and difficulty levels.
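
To make the protocol concrete, here is a minimal Python sketch of how such pre- and post-answer confidence elicitation could be implemented. It is an illustration only: the paper’s exact prompts, rating scales, and model APIs are not reproduced, and query_model is a hypothetical placeholder that would need to be wired to the model under test.

from dataclasses import dataclass

def query_model(model_name: str, prompt: str) -> str:
    """Hypothetical placeholder: replace with a real call to the model's API."""
    raise NotImplementedError("connect this to the model under test")

@dataclass
class Record:
    model: str
    question: str
    pre_confidence: float   # self-rated confidence (0-100) before answering
    answer: str
    post_confidence: float  # self-rated confidence (0-100) after answering
    correct: bool

def assess(model_name: str, question: str, reference_answer: str) -> Record:
    """Elicit confidence before the model answers, then the answer, then confidence again."""
    pre = query_model(
        model_name,
        "On a scale of 0-100, how confident are you that you can answer the "
        f"following question correctly? Reply with a number only.\n{question}",
    )
    answer = query_model(model_name, question)
    post = query_model(
        model_name,
        f"Your answer was: {answer}\nOn a scale of 0-100, how confident are you "
        "that this answer is correct? Reply with a number only.",
    )
    return Record(
        model=model_name,
        question=question,
        pre_confidence=float(pre),
        answer=answer,
        post_confidence=float(post),
        correct=answer.strip().lower() == reference_answer.strip().lower(),
    )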

Intricate Dynamics of Self-assessment

The paper revealed a complex landscape in which models like GPT-4 consistently exhibited high confidence at every problem level, regardless of domain or the correctness of the answer: GPT-4 was correct only 62.5% of the time yet remained confident on most problems. On the other hand, models like Claude-2 and Claude-Instant showed greater variability in their confidence scores depending on the difficulty of the task, revealing a more adaptive self-assessment mechanism that responds to problem complexity. How these models assess themselves is not just a matter of numbers; it affects their usefulness and reliability in real-life applications. GPT-4’s uniformly high confidence across domains and problem levels, for instance, could be a problem in scenarios where overconfidence can lead to misleading information.
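
One simple way to quantify this kind of mismatch (not necessarily the paper’s own metric) is the difference between a model’s mean self-reported confidence and its observed accuracy. The 62.5% accuracy figure comes from the summary above; the confidence values in the example are made-up placeholders, not the paper’s data.

def confidence_competence_gap(confidences: list[float], correct: list[bool]) -> float:
    """Mean self-reported confidence (0-100 scale) minus accuracy (percent correct).

    Positive values indicate overconfidence, negative values underconfidence.
    """
    mean_confidence = sum(confidences) / len(confidences)
    accuracy = 100.0 * sum(correct) / len(correct)
    return mean_confidence - accuracy

# Hypothetical illustration: a model reporting ~90 confidence on average while
# answering 5 of 8 questions correctly (62.5%) shows a gap of +27.5 points.
gap = confidence_competence_gap(
    confidences=[90, 95, 85, 90, 92, 88, 90, 90],
    correct=[True, True, False, True, False, True, False, True],
)
print(f"overconfidence gap: {gap:+.1f} points")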

Patterns in Confidence-Competence Alignment

Diving deeper, the paper looks at how confident the models are before and after they complete a task and whether they get the answers correct. Claude-2 adjusted its confidence after answering a question and, in some instances, revised its evaluation in the right direction. Google’s Bard, by contrast, maintained the same evaluation before and after the question was asked, regardless of whether its answer was correct. These detailed observations show that self-assessment in these models is complex; we need to understand this complexity better before relying on it in real-world applications, especially where reliability is a concern.
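
As a rough sketch of how such pre- versus post-answer adjustment could be summarized, assuming the hypothetical Record objects from the earlier snippet, one could average the confidence shift separately for correct and incorrect answers:

from statistics import mean

def confidence_shift_by_correctness(records) -> dict:
    """Average (post - pre) confidence change, split by answer correctness.

    A model that revises its self-evaluation in the right direction would tend
    to lower confidence after incorrect answers, while a model that keeps the
    same evaluation regardless of correctness would show shifts near zero.
    """
    shifts = {True: [], False: []}
    for r in records:
        shifts[r.correct].append(r.post_confidence - r.pre_confidence)
    return {
        "correct": mean(shifts[True]) if shifts[True] else None,
        "incorrect": mean(shifts[False]) if shifts[False] else None,
    }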

Between the lines 

The findings from this study are crucial for academia and go well beyond it, touching the critical realm of AI ethics and safety. The study sheds light on the confidence-competence gap, highlighting the risks of relying solely on the self-assessed confidence of LLMs, especially in critical applications such as healthcare, the legal system, and emergency response. Trusting these AI systems without scrutiny can lead to severe consequences: the study shows that LLMs make mistakes while remaining confident, which presents significant challenges in critical applications. Although the study offers a broad perspective, it suggests diving deeper into how AI performs in specific, high-stakes domains. Doing so can enhance the reliability and fairness of AI when it aids us in critical decision-making. The study underscores the need for more focused research in these domains, which is crucial for advancing AI safety, reducing biases in AI-driven decision-making, and fostering a more responsible and ethically grounded integration of AI in real-world scenarios.
