Montreal AI Ethics Institute

Democratizing AI ethics literacy

Is ChatGPT for everyone? Seeing beyond the hype toward responsible use in education

January 3, 2023

✍️ Column by Marianna Ganapini, Pamela Lirio, and Andrea Pedeferri.

Dr. Marianna Ganapini is our Faculty Director and Assistant Professor in Philosophy at Union College.

Dr. Pamela Lirio is an Associate Professor in the Faculty of Arts and Sciences – School of Industrial Relations at Université de Montréal.

Dr. Andrea Pedeferri is an instructional designer and leader in higher education (faculty at Union College), and founder of Logica, which helps learners become more efficient thinkers.


ChatGPT is the latest OpenAI chatbot, able to interact with human agents conversationally. You can ask it questions, many of which it will answer in seconds. Syntactically, this chatbot writes like a pro: its sentences are usually well-structured and grammatically correct. The tone of its writing sounds – for the most part – professional, courteous, and well-polished. Often, the answers it generates sound legitimate: it feels like ChatGPT knows what it’s talking about!

But is this AI ethical? Can it be used responsibly? What harm might it generate?

Let’s first note that ChatGPT seems to be yet another example of Big Tech (OpenAI) making headlines and deciding what technology is new and cool. The tech giants dominate the market with advanced AI solutions that flow steadily out of their labs and into our workplaces, institutions, and homes. However, these Big Tech firms – like the field of AI overall – do not accurately reflect our diverse society of tech users. They still lack diversity in their workforce of data scientists, engineers, and developers, thereby overrepresenting the inherent biases of a dominant majority (cisgender hetero men). With the quick adoption of ChatGPT, we fear that AI will continue to lack diversity and inclusivity.

Furthermore, if more power and resources were given to other actors in AI, such as smaller tech players, we would hear a broader range of voices and ideas. Today, Big Tech firms decide what is interesting and worth pursuing. Still, it is far from clear that their products are where we, as a society of users, should be directing our time, money, and energy. In other words, should we prioritize building yet another chatbot when the same resources could be devoted to building more responsible and impactful technologies?

Among the range of reactions to the deployment of this new technology is a growing number of discussions in the academic world about the alleged pedagogical danger that ChatGPT might pose. In particular, the worry is that students will use the chatbot to write their class papers. There are also general concerns about the “passivity” this will produce in students from secondary school to university: students might use the software to get answers for quizzes and tests, becoming passive receivers rather than active intellectual learners. Some educators and professors are also worried about plagiarism and have rushed to change the format of the assignments and exams they give to students.

While it is important to approach any new technology with critical skepticism, we believe that many of these concerns are misdirected. They remind us of the “doomsday” worries about intellectual competence, knowledge acquisition, and the like that accompanied the arrival of the Google search engine and Wikipedia. We think the real problem lies elsewhere.

Think about it: learning is complex, and research requires intellectual skills and serious work. This new chatbot sounds like the latest and greatest, but it only gives you a picture of online discourse rather than any actual knowledge. It effectively scrapes the Internet with little guidance; it does not fact-check or understand what it is saying. On top of this, we know very little about how this tech works. Users have no idea whether it is reliable, nor what goals and values it represents. As any good epistemologist could tell you, this product offers no reliable testimony: it sounds great, it looks professional, and it writes clearly, but it is yet another black box that does not inherently deserve our epistemic trust. Hence, rather than worrying about plagiarism, we should make sure that students (and all other users, for that matter) do not take this tech to be a reliable source of information just because, at face value, it sounds legitimate. The chatbot is not telling or teaching us anything; it merely collects and repackages information from disparate publicly available sources.

Bottom line? ChatGPT is an experimental tool. It can help us gather our thoughts and find ways to express ourselves when we can’t find the right words. Great! However, writing requires research, deep thinking, and building arguments, something a chatbot can’t yet do. This technology cannot be a source of knowledge unless you can fact-check what it says. Relying on ChatGPT would be like asking an alien who just landed on Earth for directions to the nearest metro station: they would have no idea where they are, so you would be better off asking the person next to you.
