Montreal AI Ethics Institute


Democratizing AI ethics literacy


The Confidence-Competence Gap in Large Language Models: A Cognitive Study

December 2, 2023

🔬 Research Summary by Suman Devkota, Bishal Lamichhane, Aniket Kumar Singh, and Uttam Dhakal.

Suman Devkota is a Manufacturing Engineer at First Solar and has a Master’s in Electrical Engineering from Youngstown State University.

Bishal Lamichhane is a PhD student pursuing Statistics and Data Science at the University of Nevada, Reno.

Aniket Kumar Singh is a Vision Systems Engineer at Ultium Cells and has a Master's in Computing and Information Systems from Youngstown State University.

Uttam Dhakal is a Graduate Student pursuing a Master's in Electrical Engineering at Youngstown State University.

[Original Paper by Aniket Kumar Singh, Suman Devkota, Bishal Lamichhane, Uttam Dhakal, Chandra Dhakal]


Overview: The ubiquitous adoption of Large Language Models (LLMs) like ChatGPT (GPT-3.5) and GPT-4 across various applications has raised concerns regarding their trustworthiness. This research explores the alignment between self-assessed confidence and actual performance in large language models like GPT-3.5, GPT-4, BARD, Google PaLM, LLaMA-2, and Claude, shedding light on critical areas requiring caution.


Introduction

The integration of LLMs into diverse applications shows how capable these models have become. As they evolve, their capabilities grow increasingly impressive, but so does the need to assess their behavior from cognitive and psychological perspectives: with current models, there is real concern about whether their responses can be trusted. Our research dives into the confidence-competence gap in models such as GPT-3.5, GPT-4, BARD, Google PaLM, LLaMA-2, and Claude. Our experiment, built on diverse questionnaires and real-world scenarios, illuminates how these language models express confidence in their responses and reveals intriguing behavior. GPT-4 demonstrated high confidence regardless of correctness, while other models, such as Google PaLM, underestimated their capabilities. The study also found that certain models were more confident when answering questions in particular domains. This suggests that fine-tuning smaller models for specific tasks might be more beneficial where well-calibrated responses from an LLM are desired. These results direct researchers and engineers to be extra cautious when working with language models, especially in critical real-world applications where incorrect responses could lead to significant repercussions.

Key Insights

The Complex Landscape of Confidence in LLMs

Experimental Set-up 

The authors implemented a rigorous experimental framework to investigate the self-assessment of different language models across various categories. The questionnaires were designed with varying levels of difficulty across multiple domains. For each model, the study recorded responses before and after assessment. These recorded responses enabled an in-depth analysis of the accuracy of the LLMs' answers and of how their confidence evolved, providing insight into the confidence-competence gap across different problem scenarios and difficulty spectrums.
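The summary does not reproduce the paper's exact prompts, so the following is a minimal sketch of how pre- and post-assessment confidence elicitation could look in practice. The prompt wording, the 0 to 100 confidence scale, and the query_model helper are assumptions for illustration, not the authors' protocol.

```python
# Sketch of a confidence-elicitation loop. The prompts, the 0-100 scale,
# and query_model are hypothetical placeholders, not the paper's setup.

def query_model(prompt: str) -> str:
    """Placeholder for a call to an LLM API (e.g., GPT-4, Claude, PaLM)."""
    raise NotImplementedError("Wrap your model client here.")

def assess_question(question: str) -> dict:
    # Pre-assessment confidence: asked before the model sees itself answer.
    pre_conf = query_model(
        "On a scale of 0-100, how confident are you that you can answer the "
        f"following question correctly? Reply with a number only.\n{question}"
    )
    answer = query_model(question)
    # Post-assessment confidence: asked after the model has produced an answer.
    post_conf = query_model(
        f"You answered:\n{answer}\nOn a scale of 0-100, how confident are you "
        "that this answer is correct? Reply with a number only."
    )
    return {
        "question": question,
        "pre_confidence": float(pre_conf),
        "answer": answer,
        "post_confidence": float(post_conf),
    }
```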

Intricate Dynamics of Self-assessment

The paper revealed a complex landscape in which models like GPT-4 consistently exhibited high confidence at every problem level, regardless of domain or the correctness of the answer. GPT-4 was correct 62.5% of the time yet reported high confidence on most problems. On the other hand, models like Claude-2 and Claude-Instant showed greater variability in their confidence scores depending on task difficulty, revealing a more adaptive self-assessment mechanism that responds to the complexity of the problem. How these models assess themselves isn't just about numbers; it affects their usefulness and reliability in real-life applications. GPT-4's uniform confidence, regardless of domain or problem level, could be a problem in scenarios where overconfidence leads to misleading information.
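As a rough illustration (not a metric taken from the paper), one way to quantify such a gap is to compare a model's mean self-reported confidence against its accuracy. The records below are hypothetical.

```python
# Illustrative confidence-competence gap on hypothetical per-question records
# (each record: a correctness flag and a self-reported confidence on 0-100).
records = [
    {"correct": True,  "confidence": 95},
    {"correct": False, "confidence": 90},
    {"correct": True,  "confidence": 85},
    {"correct": False, "confidence": 92},
]

accuracy = sum(r["correct"] for r in records) / len(records) * 100
mean_confidence = sum(r["confidence"] for r in records) / len(records)
gap = mean_confidence - accuracy  # positive values indicate overconfidence

print(f"accuracy: {accuracy:.1f}%, "
      f"mean confidence: {mean_confidence:.1f}, gap: {gap:+.1f}")
```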

Patterns in Confidence-Competence Alignment

Diving deeper, the paper examines how confident the models are before and after they complete a task, and whether their answers are correct. Claude-2 adjusted its confidence after answering a question and, in some instances, revised its evaluation in the right direction. Google's Bard, by contrast, maintained the same evaluation before and after answering, regardless of whether its answer was correct. These detailed observations show that self-assessment in these models is a complex task, and we need to understand this complexity better before relying on it in real-world applications where reliability is a concern.
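Again as a sketch on hypothetical data rather than the paper's own analysis, this adaptive behaviour can be examined by averaging the pre-to-post confidence shift separately for correct and incorrect answers.

```python
# Average pre-to-post confidence shift, grouped by correctness.
# The records are hypothetical; a well-calibrated model would lower its
# confidence after incorrect answers and hold or raise it after correct ones.
records = [
    {"correct": True,  "pre": 70, "post": 85},
    {"correct": False, "pre": 80, "post": 60},
    {"correct": True,  "pre": 90, "post": 90},
    {"correct": False, "pre": 75, "post": 78},
]

for label, flag in (("correct", True), ("incorrect", False)):
    shifts = [r["post"] - r["pre"] for r in records if r["correct"] == flag]
    mean_shift = sum(shifts) / len(shifts)
    print(f"mean confidence shift on {label} answers: {mean_shift:+.1f}")
```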

Between the lines 

The findings from this study are crucial for academia and reach well beyond it, touching the critical realm of AI ethics and safety. The study sheds light on the confidence-competence gap, highlighting the risks of relying solely on the self-assessed confidence of LLMs, especially in critical applications such as healthcare, the legal system, and emergency response. Trusting these AI systems without scrutiny can lead to severe consequences: the study shows that LLMs make mistakes while remaining confident, which presents significant challenges in critical applications. Although the study offers a broad perspective, it suggests diving deeper into how AI performs in specific, high-stakes domains. Doing so can enhance the reliability and fairness of AI when it aids us in critical decision-making. The study therefore underscores the need for more focused research in these domains, which is crucial for advancing AI safety, reducing bias in AI-driven decision-making, and fostering a more responsible and ethically grounded integration of AI in real-world scenarios.

