✍️ Column by Marianna Ganapini, Pamela Lirio, and Andrea Pedeferri.
Dr. Marianna Ganapini is our Faculty Director and Assistant Professor in Philosophy at Union College.
Dr. Pamela Lirio is an Associate Professor in the Faculty of Arts and Sciences – School of Industrial Relations at Université de Montréal.
Dr. Andrea Pedeferri is an instructional designer and leader in higher ed (Faculty at Union College), and founder at Logica, helping learners become more efficient thinkers.
ChatGPT is the latest OpenAI chatbot, able to interact with human agents conversationally. You can ask questions, many of which will be answered in seconds. Syntactically this chatbot writes like a pro: its sentences are usually well-structured and grammatically correct. The tone of its writing sounds – for the most part – professional, courteous, and well-polished. Often, the answers generated sound legitimate: it feels like ChatGPT knows what it’s talking about!
But is this AI ethical? Can it be used responsibly? What harm might it generate?
Let’s first note that ChatGPT seems to be yet another example of Big Tech (OpenAI) making headlines and deciding what technology is new and cool. The tech giants dominate the market with advanced AI solutions that flow steadily out of their labs and into our workplaces, institutions, and homes. However, these Big Tech firms – like the field of AI overall – do not accurately reflect our diverse society of tech users. They still lack diversity in their workforce of data scientists, engineers, and developers, thereby overrepresenting the inherent biases of a dominant majority (cisgender hetero men). With the quick adoption of ChatGPT, we fear that AI will continue to lack diversity and inclusivity.
Furthermore, if more power and resources were given to other actors in AI, such as smaller tech players, we would be able to hear a broader range of voices and ideas. Big Tech firms decide what is interesting and worth pursuing. Yet it is far from clear that their products are where we, as a society of users, should be directing our time, money, and energy. In other words, should we prioritize building yet another chatbot when the same resources could be devoted to building more responsible and impactful technologies?
Among the range of reactions to the deployment of this new technology are a growing number of discussions in the academic world about the alleged pedagogical danger that ChatGPT might pose. In particular, the worry is that students will use the chatbot to write their class papers. There are some general concerns about the “passivity” that this will produce in students from secondary school to university. For example, students might use the software to get answers for quizzes and tests, thus becoming passive receivers of information rather than active intellectual learners. Some educators and professors are also worried about plagiarism and have rushed to change the format of the assignments and exams they give to students.
While it is important to approach any new technology with a critical skepticism, we believe that many of those concerns are largely misdirected and remind us of some of the “doomsday” worries about intellectual competence, knowledge acquisition, etc., that circulated around the arrival of the Google search engine and Wikipedia. We think that the real problem lies elsewhere.
Think about it: learning is complex, and research requires intellectual skills and serious work. This new chatbot sounds like the latest and greatest, but it only gives you a picture of the online discourse rather than any actual knowledge. It is effectively scraping the Internet with little guidance; it does not fact-check or understand what it is saying. On top of this, we know very little about how this tech works. Users have no idea whether it is reliable, nor what goals and values it represents. As any good epistemologist could tell you, this product offers no reliable testimony: it sounds great, it looks professional, and it writes clearly, but it is yet another black box that does not inherently deserve our epistemic trust. Hence, rather than worrying about plagiarism, we should make sure that students (and all other users, for that matter) do not take this tech to be a reliable source of information just because, at face value, it sounds legitimate. The chatbot is not telling or teaching us anything; it merely collects and repackages information from disparate publicly available sources.
Bottom line? ChatGPT is an experimental tool. It can help us gather our thoughts and find ways to express ourselves when we can’t find the right words. Great! However, writing requires research, deep thinking, and building arguments – something a chatbot can’t yet do. This technology cannot be a source of knowledge unless you can fact-check what it says. Relying on ChatGPT would be like asking an alien who just landed on Earth for directions to the closest metro station: they would have no idea where they are, so you would be better off asking the person next to you.