Montreal AI Ethics Institute

Democratizing AI ethics literacy


Is ChatGPT for everyone? Seeing beyond the hype toward responsible use in education

January 3, 2023

✍️ Column by Marianna Ganapini, Pamela Lirio, and Andrea Pedeferri.

Dr. Marianna Ganapini is our Faculty Director and Assistant Professor in Philosophy at Union College.

Dr. Pamela Lirio is an Associate Professor in the Faculty of Arts and Sciences – School of Industrial Relations at Université de Montréal.

Dr. Andrea Pedeferri is an instructional designer and leader in higher ed (Faculty at Union College), and founder at Logica, helping learners become more efficient thinkers.


ChatGPT is the latest OpenAI chatbot, able to interact with human users conversationally. You can ask it questions, many of which it will answer in seconds. Syntactically, this chatbot writes like a pro: its sentences are usually well-structured and grammatically correct. The tone of its writing sounds, for the most part, professional, courteous, and well-polished. Often, the answers it generates sound legitimate: it feels like ChatGPT knows what it's talking about!

But is this AI ethical? Can it be used responsibly? What harm might it generate?

Let's first note that ChatGPT seems to be yet another example of Big Tech (OpenAI) making headlines and deciding what technology is new and cool. The tech giants dominate the market with advanced AI solutions that flow steadily out of their labs and into our workplaces, institutions, and homes. However, these Big Tech firms, like the field of AI overall, do not accurately reflect our diverse society of tech users. They still lack diversity in their workforce of data scientists, engineers, and developers, thereby overrepresenting the inherent biases of a dominant majority (cisgender hetero men). With the quick adoption of ChatGPT, we fear that AI will continue to lack diversity and inclusivity.

Furthermore, if more power and resources were given to other actors in AI, such as smaller tech players, we would hear a broader range of voices and ideas. Big Tech firms decide what is interesting and worth pursuing, but it is far from clear that their products are where we, as a society of users, should be directing our time, money, and energy. In other words, should we prioritize building yet another chatbot when the same resources could be devoted to building more responsible and impactful technologies?

Among the reactions to this new technology are a growing number of discussions in the academic world about the pedagogical danger that ChatGPT might pose. In particular, the worry is that students will use the chatbot to write their class papers. There are also general concerns about the "passivity" it might produce in students from secondary school to university: students might use the software to get answers for quizzes and tests, thus becoming passive receivers rather than active intellectual learners. Some educators and professors, worried about plagiarism, have rushed to change the format of the assignments and exams they give to students.

While it is important to approach any new technology with critical skepticism, we believe that many of these concerns are misdirected; they remind us of the "doomsday" worries about intellectual competence, knowledge acquisition, and the like that circulated at the arrival of the Google search engine and Wikipedia. We think the real problem lies elsewhere.

Think about it: learning is complex, and research requires intellectual skills and serious work. This new chatbot sounds like the latest and greatest, but it only gives you a picture of the online discourse rather than any actual knowledge. It is effectively scraping the Internet with little guidance; it does not fact-check or understand what it is saying. On top of this, we know very little about how this technology works. Users have no idea whether it is reliable, or what goals and values it represents. As any good epistemologist could tell you, this product offers no reliable testimony: it sounds great, it looks professional, and it writes clearly, but it is yet another black box that does not inherently deserve our epistemic trust. Hence, rather than worrying about plagiarism, we should make sure that students (and all other users, for that matter) do not take this technology to be a reliable source of information just because, at face value, it sounds legitimate. The chatbot is not telling or teaching us anything; it merely collects and repackages information from disparate publicly available sources.

Bottom line? ChatGPT is an experimental tool. It can help us gather our thoughts and find ways to express ourselves when we can't find the right words. Great! But writing requires research, deep thinking, and building arguments, something a chatbot can't yet do. This technology cannot be a source of knowledge unless you can fact-check what it says. Relying on ChatGPT would be like asking an alien who just landed on Earth for directions to the closest metro station: they would have no idea where they are, so you would be better off asking the nearest person instead.

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.


  • © 2025 MONTREAL AI ETHICS INSTITUTE.
  • This work is licensed under a Creative Commons Attribution 4.0 International License.
  • Learn more about our open access policy here.