🔬 Research Summary by Dimitri Ognibene, Emily Theophilou, Alessia Telari, Alessia Testa, Davide Taibi, Davinia Hernández-Leo, Mona Yavari, and Cansu Koyuturk.
Dimitri Ognibene, Associate Professor of Human Technology Interaction at Università degli Studi di Milano-Bicocca
Emily Theophilou, PhD Candidate in Educational Technologies at Universitat Pompeu Fabra
Alessia Telari, PhD Student in Social Psychology at Università degli Studi di Milano-Bicocca
Alessia Testa, Post Lauream Intern at Università degli Studi di Milano-Bicocca
Davide Taibi, Senior Researcher, Institute for Educational Technology – National Research Council of Italy
Davinia Hernández-Leo, Full Professor, ICT Department, Universitat Pompeu Fabra
Mona Yavari, MSc Student in Applied Experimental Psychological Sciences at Università degli Studi di Milano-Bicocca
Cansu Koyuturk, MSc Student in Applied Experimental Psychological Sciences at Università degli Studi di Milano-Bicocca
[Original paper by Emily Theophilou, Cansu Koyuturk, Mona Yavari, Sathya Bursic, Gregor Donabauer, Alessia Telari, Alessia Testa, Raffaele Boiano, Davinia Hernandez-Leo, Martin Ruskov, Davide Taibi, Alessandro Gabbiadini, and Dimitri Ognibene]
Overview: To fully exploit the potential of Large Language Models (LLMs), it is crucial to acknowledge their fallibility and limitations. Doing so makes a critical approach to their output possible and helps reduce the fear and negative attitudes that may impair the societal benefits of LLMs and AI. A pilot educational intervention in which high school students engaged in hands-on, non-trivial interactions with ChatGPT showed promising results, including improved interaction skills, decreased negativity, and increased understanding.
Recent progress in Large Language Models (LLMs) and new AI capabilities has sparked hyped discussions, ranging from super-intelligent computers that would help achieve incredible civilizational advances and answer the most disparate questions (finally, not with 42) to apocalyptic extinction scenarios. During a symposium in Rome on this debate, an AI-generated presentation about AI and human self-worth offered a striking, carefully selected example of its own limitations by choosing a picture of a leopard as a portrait of the Italian poet Giacomo Leopardi. LLMs’ tendency to make up responses contrasts with the current hype about AI capabilities and could have wide-scale nefarious effects if unaware users dogmatically spread such unintentionally generated misinformation. AI literacy interventions are necessary to improve users’ proficiency with these technologies and to raise awareness of their limitations. They could also counter fear and other negative effects of the current hype, allowing for larger societal benefits from AI. A pilot intervention involving the presentation of high-level concepts about intelligence, AI, LLMs, and prompting, as well as direct hands-on practice with ChatGPT on a non-trivial task, improved students’ skills, their sentiments toward AI, and their understanding of ChatGPT’s limitations, specifically in reliability, understanding of commands, and presentation flexibility.
Motivation: ChatGPT and the AI hype
The rapid advancement of AI has transformed many aspects of our lives, promising limitless opportunities and benefits. Alongside this progress, however, a surge of hype around AI technologies has contributed to misconceptions and unrealistic beliefs about AI’s capabilities. This hype can obscure crucial limitations of AI advancements, increasing the impact of AI errors on human decisions and the spread of the misinformation AI generates through hallucinations.
Acknowledging these limitations is important to counteract the impact of blind overconfidence in AI’s potentially erroneous outputs. This can be achieved through extended AI literacy interventions that allow the public to understand such LLM limits and learn to use these models more effectively. At the same time, AI literacy interventions can help reduce fear and other negative attitudes toward AI. By fostering this type of literacy, users can be empowered to utilize AI more effectively, minimizing the spread of misinformation and the negative impact of AI errors on decision-making. Furthermore, promoting literacy and fostering a positive attitude toward AI is crucial to democratizing access to this technology and preventing an increase in inequality.
A hands-on interaction with ChatGPT to educate about AI limitations
With this aim, a pilot educational intervention was conducted in a high school with 30 students. The intervention was designed to give students a hands-on encounter with AI limitations, allowing them to discover these boundaries firsthand. Students first completed a preliminary survey to collect their initial attitudes toward, and knowledge of, AI. They were then presented with high-level concepts about intelligence, AI, and LLMs. After the presentation, students were instructed to use ChatGPT to create a natural educational conversation. During this interaction, students found it difficult to reach a successful conclusion.
The next step of the intervention introduced students to prompting strategies, an approach designed to empower users to shape their interactions with AI and optimize outcomes. Armed with this newfound knowledge, students repeated the same interaction as before. This time, their interactions with the AI were more meaningful and produced more purposeful outputs. After the intervention, a final survey assessed the students’ resulting attitudes and knowledge.
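The summary does not reproduce the specific prompting strategies taught to the students. As a rough, hypothetical sketch of the general idea, a structured prompt can assign the model a role, state the task explicitly, and list constraints, rather than posing a bare question (the helper and example wording below are illustrative assumptions, not the study’s materials):

```python
def build_prompt(role, task, constraints=None):
    """Assemble a structured prompt for an LLM chat interface.

    Hypothetical illustration of common prompting strategies:
    role assignment, an explicit task statement, and constraints.
    """
    parts = [f"You are {role}.", f"Task: {task}"]
    if constraints:
        parts.append("Constraints:")
        parts.extend(f"- {c}" for c in constraints)
    return "\n".join(parts)

# A bare prompt vs. a structured one for a tutoring-style exercise:
naive = "Explain photosynthesis."
structured = build_prompt(
    role="a patient biology tutor for high school students",
    task="Explain photosynthesis through a short question-and-answer dialogue.",
    constraints=[
        "Ask the student one question at a time.",
        "Keep each turn under three sentences.",
        "If unsure of a fact, say so instead of guessing.",
    ],
)
print(structured)
```

The structured version gives the model context about audience, format, and failure behavior, which is the kind of steering the intervention aimed to teach.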
The educational intervention utilizing ChatGPT as a learning tool to educate about AI limitations was successful. This simple exercise improved students’ skills in using an LLM agent and fostered a positive shift in their sentiments toward AI. Moreover, this practical interaction offered them a deeper understanding of ChatGPT’s limitations, particularly regarding reliability, comprehension of commands, and presentation flexibility.
Utilizing an LLM as a learning tool to explore the boundaries and potential of AI systems exemplifies an educational approach that embraces AI advancements in the classroom while mitigating fears and bringing limitations to light. Students themselves showed a positive outlook on incorporating ChatGPT into their learning journey. However, as we embrace this new approach to AI literacy interventions, several critical considerations come to the forefront.
Ethical AI issues, including copyright, privacy, and access to sensitive information, are crucial within AI literacy interventions. By incorporating these debates into the educational framework, students can become more critical when utilizing these tools for learning, cultivating responsible AI usage.
Nonetheless, teaching prompting techniques presents its own set of challenges. The lack of transparency and the absence of a clear formal grammar and semantics can pose difficulties for an educational approach. Educators must skillfully navigate the dependence of prompting on task, domain, and tool to instill effective prompting skills in students.
Furthermore, evaluating users’ interactions with AI systems, their prompting skills, and LLM responses calls for a multifaceted approach. Capturing errors depends on context, pragmatics, and conversational style, which is difficult to do in a generalized way. Automating this process may require systems more reliable than the LLMs themselves to ensure comprehensive evaluations.
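To illustrate why automating such evaluation is hard, consider a crude, hypothetical checker that verifies only surface constraints on a model reply (turn length, whether it asks a question, whether it hedges uncertainty). These formal checks are easy to automate; the contextual, pragmatic, and factual errors the paragraph above describes are not:

```python
def surface_checks(reply, max_sentences=3):
    """Crude surface-level checks on a tutoring-style LLM reply.

    These catch only formal constraint violations; contextual,
    pragmatic, and factual errors still require human judgment
    (or a stronger automated evaluator), which is the hard part.
    """
    # Naive sentence split: treat '?', '!', and '.' as terminators.
    normalized = reply.replace("?", ".").replace("!", ".")
    sentences = [s for s in normalized.split(".") if s.strip()]
    hedges = ("i'm not sure", "i am not sure", "i don't know")
    return {
        "within_length": len(sentences) <= max_sentences,
        "asks_question": "?" in reply,
        "hedges_uncertainty": any(h in reply.lower() for h in hedges),
    }

report = surface_checks(
    "Chlorophyll absorbs light. What color is it? "
    "I'm not sure about the exact wavelengths."
)
print(report)
```

A reply could pass all three checks while being factually wrong or pedagogically useless, which is exactly the gap between automatable surface evaluation and the comprehensive evaluation the intervention would need.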
Between the lines
The rapidly evolving AI technology landscape makes it challenging to develop effective educational interventions for maximum societal benefit and safety. In particular, users’ awareness of AI fallibility must be raised to counter dogmatic overconfidence in potentially erroneous AI suggestions and their dissemination. This pilot study highlights the potential of interactive learning activities to improve users’ proficiency, understanding of AI limitations, and attitudes toward AI. However, for broader impact, the intervention must be replicated with a larger population and transformed into a more controlled and accessible format.
Debates on ethical issues, such as copyright, privacy, and access to sensitive information in Large Language Models (LLMs), have yet to be integrated into most AI literacy interventions. Additionally, skills learned for the current generation of AI tools may quickly become outdated. For instance, while LLMs and their applications are generally versatile, evaluating users’ prompting skills and LLMs’ responses remains challenging and highly task-dependent.
Prompting is a significant new method of interacting with computers. However, it remains an emerging art rather than a transparent, predictable procedure with a formal grammar and semantics. While it allows tasks to be specified in natural language, finding effective wording for a given task remains a challenge.