🔬 Research Summary by Wayne Holmes, a learning sciences and innovation researcher who teaches at University College London, is a consultant researcher on Artificial Intelligence (AI) and education for UNESCO, and is a member of the Education Scientific Committee for IRCAI (the International Research Centre for Artificial Intelligence under the auspices of UNESCO). Wayne’s research interests focus on a critical studies perspective to the connections between AI and education, and their ethical, human and social justice implications.
[Original paper by Wayne Holmes (UK), Kaśka Porayska-Pomsta (UK), Ken Holstein (USA), Emma Sutherland (UK), Toby Baker (UK), Simon Buckingham Shum (Australia), Olga C. Santos (Spain), Mercedes T. Rodrigo (Philippines), Mutlu Cukurova (UK), and Ig Ibert Bittencourt (Brazil)]
Overview: It is almost certainly the case that all members of the Artificial Intelligence in Education (AIED) research community are motivated by ethical concerns, such as improving students’ learning outcomes and lifelong opportunities. However, as has been seen in other domains of AI application, ethical intentions are not by themselves sufficient, as good intentions do not always result in ethical designs or ethical deployments (e.g., Dastin, 2018; Reich & Ito, 2017; Whittaker et al., 2018). Significant attention is required to understand what it means to be ethical, specifically in the context of AIED. The educational contexts which AIED technologies aspire to enhance highlight the need to differentiate between doing ethical things and doing things ethically, to understand and to make pedagogical choices that are ethical, and to account for the ever-present possibility of unintended consequences, along with many other considerations. However, addressing these and related questions is far from trivial, not least because it remains true that “no framework has been devised, no guidelines have been agreed, no policies have been developed, and no regulations have been enacted to address the specific ethical issues raised by the use of AI in education” (Holmes et al., 2018, p. 552).
While other AI communities are increasingly attending to ethical considerations around the design and deployment of AI-based technologies, the ethical dimensions of AIED do not yet appear to be a central area of focus for many in the AIED community. Since Aiken & Epstein published their ethical guidelines two decades ago to begin a conversation around the ethics of AIED (Aiken & Epstein, 2000), there has been a striking paucity of published work in the AIED community that explicitly focuses on ethics. Moreover, the potential impact of AIED designs and deployment methods on students, teachers, and wider society has yet to be fully worked out.
Nonetheless, it is generally accepted within the community that AIED raises far-reaching questions with important ethical implications for students, educators, parents, policymakers, and other stakeholders. Ethical concerns permeate many of the community’s core interests, including but not limited to the accuracy of diagnoses of learners interacting with AIED systems; choices of pedagogies employed by AIED systems; predictions of learning outcomes made by those systems; issues of fairness, accountability, and transparency; and questions related to the influence of AI and learning analytics on teachers’ decision making.
In many ways, AIED is itself a response to each of those issues and more – with most being addressed, one way or another, in work conducted in the varied research subdomains of AIED. For example, questions around data ownership and control over their interpretations have long been recognized as critical in AIED (e.g., in the context of open learner modeling: Bull & Kay, 2016; Conati et al., 2018). However, what is currently missing is a basis for the meaningful ethical reflection necessary for innovation in AIED to help researchers determine how they might best contribute towards addressing those challenges explicitly. This requires a deep engagement both with recent ethics-related debates and frameworks in other AI research subdomains and with ethical questions specific to the AIED subdomain itself (e.g., issues around pedagogy), many of which remain unasked and unanswered (although there are some exceptions – e.g., Aiken & Epstein, 2000; Friedman & Kahn Jr, 1992; Holmes, Bialik, et al., 2019; Sharkey, 2016).
The Ethics of AI in General
As with any transformative technology, some AI applications may raise new ethical and legal questions, for example, related to liability or potentially biased decision-making. The ethics of artificial intelligence, in general, has received a great deal of attention from researchers (e.g., Boddington, 2017; Floridi, 2019; Jobin et al., 2019; Whittaker et al., 2018; Winfield & Jirotka, 2018) and more widely (e.g., the European Union, 2019; the UK’s House of Lords, 2018; UNESCO, 2019; and the World Economic Forum, 2019), with numerous other AI ethics initiatives emerging in recent years (e.g., Ada Lovelace Institute, 2019; AI Ethics Initiative, 2017; AI Now Institute, 2017; DeepMind Ethics & Society, 2017; Future of Life Institute, 2013; The Institute for Ethical AI & Machine Learning, 2018).
All of these efforts principally focus on data (involving issues such as informed consent, data privacy, and biased data sets) and how that data is analyzed (involving issues such as biased assumptions, transparency, and statistical apophenia—finding patterns where no meaningful, causal patterns are present). The Montréal Declaration for Responsible Development of Artificial Intelligence (2018), for example, offers a comprehensive approach involving ten human-centered principles, encompassing: well-being, respect for autonomy, protection of privacy, solidarity, democratic participation, equity, diversity, prudence, responsibility, and sustainable development. No such declaration currently exists for the specific issues raised by AIED.
The Ethics of Educational Data
Similarly, the ethics of educational data and learning analytics have also been the focus of much research (e.g., Ferguson et al., 2016; Slade & Prinsloo, 2013; Potgieter, 2020). This work is far too extensive to summarise here; however, some key issues can be noted. First, because the field is still emerging, exactly what the ethics of learning analytics should include remains the subject of debate (Ferguson et al., 2016). Second, the ethics of learning analytics involves several types of questions, including but not limited to: informed consent and privacy, the interpretation of data, the management of data, and perspectives on data (e.g., institutional versus individual), as well as much broader issues such as power relations, surveillance, and the purpose of education (Slade & Prinsloo, 2013). Third, it has been argued that ‘educational data mining […] is not the superconductor of truth that some of its proponents believe […] and the transformative impact that it will have on the autonomy of learners is cause for concern’ (Potgieter, 2020, pp. 3, 6).
The learning analytics community has endeavored to agree on principles against which learning analytics research and practice can judge itself and be judged. The DELICATE checklist (Drachsler & Greller, 2016), for example, comprises guidance centered on:

- determining added value and the rights of participants;
- being open about intentions and objectives;
- legitimizing the collection of data;
- involving all stakeholders, including the data subjects;
- ensuring consent is genuinely informed and freely given;
- ensuring that data is truly anonymized;
- establishing and implementing procedures to guarantee individual privacy; and
- adopting clear and transparent obligations with any external agencies that might be involved with the data.

The clear overlaps between learning analytics and AIED, which are centered on educational data, suggest that the ethics of AIED might usefully draw on approaches such as the DELICATE checklist. However, there are also clear differences between the two fields, “with an emphasis on agents and tutors for AIED, […] and visualization for LA” (Labarthe et al., 2018, p. 70). This active engagement/passive representation distinction, although radically oversimplified, suggests that a comprehensive ethics of AIED is likely to have additional requirements.
The Ethics of AIED in Particular
As with AI in general, concerns exist about the large volumes of data collected to support AIED (such as the recording of student competencies, inferred emotional states, strategies, and misconceptions). Who owns and who can access these data, what are the privacy concerns, and who should be considered responsible if something goes wrong?
Other major AIED ethical concerns, as with AI in general, center on computational approaches. How should the data be analyzed, interpreted, shared, and acted upon? How should biases (conscious or unconscious) that might negatively impact the civil rights of individual students be prevented or ameliorated – especially given that the scale of AIED in the coming years is likely to amplify any design biases (e.g., about gender, age, race, social status, income inequality…)? Finally, as the Facebook and Cambridge Analytica data scandal showed, data is vulnerable to hacking and manipulation: ‘it’s impossible to have personal privacy and control at scale, so it is critical that the uses to which data will be put are ethical – and that the ethical guidelines are clearly understood’ (Tarran, 2018, pp. 4–5).
However, the ethics of AIED cannot be reduced to questions about data or computational approaches alone (Holmes, Bialik, et al., 2019). AIED research also needs to account for the ethics of education, which, although the subject of decades of research, is often overlooked. For example, AIED research needs to explicitly address issues such as (1) the purpose of the learning (e.g., to prepare students to pass exams or to help them self-actualize), (2) the choice of pedagogy (with a common approach, instructionism, being contested by the learning sciences community), (3) the role of the technology in relation to teachers (to replace or to augment human functions), and (4) access to education (often seen by the community through the ethical dimension of fairness and equity). In addition, there remains limited research into what teachers and students want from AIED systems – such as requirements around student agency and privacy, about which teachers and students might disagree (Holstein et al., 2019). Furthermore, where AIED interventions target behavioral change (such as by ‘nudging’ individuals towards a particular course of action), the entire sequence of AIED-enhanced pedagogical activity needs to be ethically warranted in the context of the broader activities within which AIED systems are being deployed.
To highlight just some of the potential breadth of issues, AIED ethical questions include:
- How does the transient nature of student goals, interests, and emotions impact the ethics of AIED?
- How can K-12 students give genuinely informed consent for their involvement with AIED tools?
- What are the AIED ethical obligations of private organizations (developers of AIED products) and public authorities (schools and universities involved in AIED research)?
- How might schools, students, and teachers opt out from, or challenge, how they are represented in large datasets?
- What are the ethical implications of not being able to easily interrogate how some AIED decisions (e.g., those made by deep, multi-layer neural networks) are reached?
- What are the ethical consequences of encouraging students to work independently with AI-supported software (rather than with teachers or in collaborative groups)?
Notably, as mentioned above, some guidelines were proposed two decades ago (Aiken & Epstein, 2000) but have not been widely adopted by the AIED community. Aiken and Epstein start with a negative and a positive meta-principle related to the impact of AIED on ‘dimensions of human being’: the ethical, aesthetic, social, intellectual, and physical dimensions, together with psychological traits such as ‘the individual’s ability to lead a happy and fulfilling life’. Their negative meta-principle is that ‘(AIED) technology should not diminish the student along any of the fundamental dimensions of human being’, while the positive meta-principle is that ‘AIED technology should augment the student along at least one of the fundamental dimensions of human being’. They go on to provide and discuss ten ‘fundamental principles’ for educational technologies that incorporate AI methods. Some of these (such as ‘avoid information overload’) are essential tenets of good user experience and effective pedagogy for most educational technologies. The principles most specific to the application of AI in education are: ‘7. Develop systems that give teachers new and creative roles that might not have been possible before the use of technology. Systems should not attempt to replace the teacher.’ and ‘10. Avoid glorifying the use of computer systems, thereby diminishing the human role and the human potential for learning and growth.’
The survey results presented in the full paper indicate that AIED researchers recognize the importance and value of engaging with the ethics of their work. Nonetheless, there are nuances of opinion concerning what this might include and how it might be best achieved. With this in mind, in this section, we first summarise key emergent themes and identify some issues we believe are missing from the discussion. We propose a draft of an initial AIED ethics framework to help galvanize further debate.
As noted previously, there is no doubt that the AIED community does wish to explore, better understand, and engage with the ethics of the design and application of AI in educational contexts. However, some respondents appear to believe that we, as a community, are already ‘doing ethics’ simply by operating with the best intentions in the educational domain, on the assumption that working in education is in and of itself ethical. The reality, however, is that “no ethical oversight is required to deploy an e-learning system or an AIED system as part of the normal teaching process” (du Boulay).
Good intentions are not by themselves sufficient. As was acknowledged by several respondents, we need to understand more precisely the ethical risks and to be always on the lookout for unintended consequences that relate specifically to the pedagogical designs (including their readiness for real-world use) that are encapsulated in the AI systems we develop and deploy.
Such an understanding needs to be expressed in an actionable code of best practice that the community can rely on when designing and deploying AIED technologies in diverse educational contexts. However, although there appears to be a clear appetite for some kind of ethics of AIED framework that would build upon university research ethics, the community also recognizes that such a framework needs to be distinct from generally established research ethics approvals and procedures. In particular, such a framework would need to incorporate guidance addressing the many issues raised by respondents (including fairness, accountability, transparency, bias, autonomy, agency, and inclusion), specifically at the intersection of AI and the learning sciences, and ensure that AIED is ethical by design and not just by intention.
In this context, the respondents were also cognizant of the problems that such a framework might itself inadvertently entail. In particular, in line with Bietti (2020), such a framework might both stifle innovation and lead to accusations of, or actual, in-house “ethics washing”. So, while the community recognizes the critical importance of a code of ethical practice specific to AIED, what is meant by ethical AIED remains an open question. Thus, establishing a useful framework for the ethics of AI in education, although desirable, is likely to be a challenging and long-term task.
AIED’s explicit aim to foster behavior change in its users adds pressure on the community to embrace ethics and the related dimensions, both as a necessity and as a moral obligation. The flip side of this is that the field (alongside related fields such as educational data mining, learning analytics, and user modeling) has real potential to influence educational systems at the front line, precisely because it designs for and deploys in real-world contexts, and to contribute to broader approaches for designing and deploying AI in human contexts beyond education.
In particular, it might be useful to consider how AIED research may contribute to broader debates about how AI might impact human cognition, decision-making, and functioning. Given AIED’s focus on human learning and development, it is at least worth considering its potential role in informing those broader debates from its unique perspective of designing AI to influence human cognition. AIED’s ambition to support human learning, and the field’s proximity to the learning sciences, educational neuroscience, and educational practice, likely afford a more human-centric, human-developmental understanding of ethics and related dimensions of fairness, transparency, accountability, etc., than is afforded by the more general socio-political and legal considerations applied in other AI subfields.
The stakeholders (developers, educators, and policymakers) all need to be provided with key information about the pros and cons of specific AIED technologies – perhaps something in the style of the list of ingredients and allergy warnings on food, or the side effects listed on medicines. Such key information could include both the known limitations (e.g., in terms of pedagogies, biases of interpretation, privacy, etc.) and the benefits that are likely to emerge from the use of specific AIED systems. Could such an open approach better inform users’ choices and improve the transparency of what we create? Would it allow the community to become more accountable and more in touch with the broader developments and debates in AI and educational practice?
First steps towards a framework
As noted above, the ethics of AI raises a variety of complex issues centered on data (e.g., consent and data privacy) and how that data is analyzed (e.g., transparency and trust). However, it is also clear that the ethics of AIED cannot be reduced to questions about data and computational approaches alone. In other words, investigating the ethics of AIED data and computations is necessary but insufficient. Given that, by definition, AIED is the application of AI techniques and processes in education, the ethics of AIED also, as noted earlier, needs to account for the ethics of education (Holmes, Bialik, et al., 2019). Yet, while the ethics of education has been the focus of debate and research for more than 2000 years (e.g., Aristotle, 2009; Macfarlane, 2003; Peters, 1970), it is mostly unacknowledged and unaccounted for by the wider AIED community.
Because of this rich history, there is inevitably insufficient space here to provide a comprehensive account of the ethics of education. Instead, we will simply identify some pertinent issues, each of which continues to be the subject of debate: the ethics of teacher expectations, of resource allocations (including teacher expertise), of gender and ethnic biases, of behavior and discipline, of the accuracy and validity of assessments, of what constitutes useful knowledge, of teacher roles, of power relations between teachers and their students, and of particular approaches to pedagogy (teaching and learning, such as instructionism and constructivism).
The three foci identified—the ethics of data, computational approaches, and education—constitute the foundational level for a hypothesized comprehensive ethics of AIED framework (see Figure 1). There is, however, a second level, which is concerned with the overlaps between the three foci (as illustrated in Figure 1): the ethics of data in AI in general (which is, as discussed, an ongoing focus of much research; e.g., Boddington, 2017), the ethics of data in education (also an ongoing focus; e.g., Ferguson et al., 2016), and the ethics of algorithms applied in educational contexts. This third overlap remains the least developed area of research.
Fig. 1: A ‘strawman’ draft framework for the ethics of AIED.
However, a serious effort to develop a full ethics of AIED cannot be limited even to these six areas (data, computational approaches, education, and the overlaps between them). These constitute the ‘known unknowns’ of AIED ethics, but what about the ‘unknown unknowns’, the ethical issues raised by AIED that have yet to be even identified (i.e., those issues at the central intersection of data, computation, and education, and the specific interaction between AI system use and human cognition at the individual level, indicated by the question mark in Figure 1)? Any sufficient ethics of AIED needs to involve horizon scanning, interdisciplinary conversations that explicitly take into account insights from the learning sciences and cognitive and educational neuroscience, and philosophical introspection. All of these are necessary to help us identify and explore the unknown unknowns and so establish a comprehensive framework for the ethics of AIED. Establishing such a framework, however, is only the first step. If our efforts in this area are to have genuine and lasting value for the AIED community, teachers, students, policymakers, and other stakeholders, considerable effort must be focused on how that framework might best be implemented in practice.
Finally, but no less importantly, we should recognize another perspective on AIED ethical questions: ethics is not just about stopping ‘unethical’ activities. The ethical cost of inaction and failure to innovate must be balanced against the potential for AIED innovation to deliver real benefits for learners, educators, educational institutions, and broader society. In other words, the ethics of AIED cannot just be preventative – it cannot just be about stopping researchers and developers from ‘doing harm’. Instead, it needs to provide proactive, foundational guidance within which to ground AIED research and development, guidance that is both protective and facilitative, to help ensure the best outcomes for all stakeholders from the inevitable push of AI into educational contexts.
Between the lines
Many AIED researchers recognize the importance and value of engaging with the ethics of their work (indeed, there is no evidence of AIED work that is deliberately unethical). However, as the survey responses demonstrated, this engagement now needs to be surfaced, the nuances of opinion need to be discussed in depth, and issues around data, human cognition, and pedagogical choices need to be investigated, challenged, and resolved. In particular, the AIED community needs to debate the value and usefulness of developing an ethical framework and practical guidelines to inform our ongoing research and to ensure that the AIED tools that we develop and the approaches we take are, in the widest sense, ethical by design. It is also clear that, without a more targeted approach to the ethics of AIED, the work conducted by the community may remain largely invisible to the rest of the AI subfields and related policies, potentially stifling the impact of AIED research on the increasingly human-oriented, real-world applications of AI. With its deep understanding of the human users of AI, and of AI’s potential to support human learning and behavior change, AIED offers a critical perspective on the way that people interact with and are changed by AI systems, and on the potential benefits and pitfalls of such engagement. The time is ripe to bring this perspective into the open and to allow for cross-fertilization across AI science, taking the benefits for human learning and development, as investigated for decades within the AIED field, as guiding principles for AIED and beyond.