Montreal AI Ethics Institute

Democratizing AI ethics literacy

Recess: Is AI in Law School a Helpful Tool or a Hidden Trap?

January 5, 2026

✍️ By Emma Edney from Encode Canada.

Emma is a BCL/JD McCall MacBain Scholar candidate at McGill University. Her interests include ethical issues surrounding personal information in technology and the impact of AI on litigation work. Emma is a writer at Encode Canada and a junior editor for the McGill Health and Law Journal.


📌 Editor’s Note: This piece is part of our Recess series, featuring university students from Encode’s Canadian chapter at McGill University. The series aims to share insights from university students on current issues in AI ethics. In this article, Emma Edney examines how to navigate the ongoing entanglement between studying AI and the law.

Photo credit: Giammarco Boscaro on Unsplash.


Artificial Intelligence (AI) has become a “hot topic” across nearly every field of work and education. Its use in the legal profession is growing, beginning in law schools and rippling into the courtroom. Although AI improves law students’ access to case information, it risks dulling the critical engagement that classes are meant to build, such as logical reasoning and research skills. This is problematic because practising lawyers cannot depend on AI in the courtroom, and relying on it during law school undermines the foundational skills the profession demands.

It’s no secret that AI is being used and promoted in law schools, most visibly through tools such as Lexis+ AI (LexisNexis, 2025). Lexis+ AI is a platform that helps law students and lawyers search case law, draft documents, and handle countless other tasks across the legal sphere (LexisNexis Legal & Professional, 2023). In fact, more than a third of lawyers (36%) and almost half of law students (44%) report having used such tools either personally or professionally (LexisNexis Legal & Professional, 2023). This reliance limits students’ independent development throughout law school and into their careers as lawyers. After graduating from law school and being called to the bar, lawyers cannot rely solely on AI to develop their arguments, in part because of ethical concerns around the confidentiality of client information (Goodman, 2019). The Federal Court of Canada has even made declarations of AI-generated content mandatory, precisely because undisclosed AI use might undermine public confidence in the administration of justice (Federal Court of Canada, 2024). That confidence rests not only on reasoned judgments but also on the relationship between lawyers and their clients (Fine & Marsh, 2024). Building client rapport cannot be taught by AI; it is an interpersonal skill learned over time during law school, or at least it should be (Goodman, 2019). Consequently, students who depend on AI risk entering practice without the personal and emotional skills needed to function effectively.

The Canadian legal profession upholds high standards of integrity, equality, and collegiality, as exemplified in the Canadian Bar Association’s principles of conduct (Canadian Bar Association, 2025). Can this “high standard” be considered satisfied if law students and lawyers resort to AI, even if it improves time efficiency for clients (Westfahl & Wilkins, 2017)? If so, shouldn’t clients be paying lower fees for the reduced time spent on their files? Skills such as analytical thinking, legal reasoning, and the ability to react quickly during litigation cannot be handed off to AI (So, 2025). An American study found that ChatGPT produced legal hallucinations at least 58% of the time, and that large language models often fail to correct users’ incorrect legal assumptions or to predict their own hallucinations (Dahl, Magesh, Suzgun, & Ho, 2024). This matters because the purpose of law school is to produce competent lawyers who possess these skills; outsourcing them to AI would undermine the profession as a whole.

Some law schools could adopt campus Wi-Fi systems that block AI portals, but this does not stop law students from accessing them at home. Instead, law schools should integrate AI-focused courses that teach students the uses and limitations of AI, treating it as a tool rather than a crutch (Goswami, 2025). We already see this kind of integration in law firms such as Gowling WLG, which publicly announced its firm-wide adoption of AI in the name of client transparency (Gowling WLG, 2025). This matters because the legal system depends on public trust. Lawyers are expected to protect client information, given their training and oath. If clients learned their data was being fed into an AI system, it could be seen as an ethical breach (Linna & Muchman, 2020). Clients might then question why they paid for legal services at all, arguing they could have used the technology themselves and saved both time and money (McGinnis & Pearce, 2014).

As discussed, there are limits to what AI can replicate for law students, as well as biases it can introduce (Linna & Muchman, 2020). Once law students complete their education, most provinces require them to pass a bar exam, while others, such as Alberta, instead require completion of the Canadian Centre for Professional Legal Education’s (CPLED) Practice Readiness Education Program (PREP) (CPLED, 2025; Ferguson, 2021). The bar exam uses written questions to test each candidate’s understanding of the law; CPLED is similar but relies on online modules rather than a single formal exam (CPLED, 2025). Both routes lead to the same qualification for admission to the profession. To counter over-reliance on AI in law schools, Canadian bar exams could adopt a practical skills component, such as oral advocacy, negotiation, and client interaction, that tests the very thing AI cannot replicate: “people skills” (Legg, 2024). In the United States, a pilot project to add practical skills to the bar exam is already underway and could serve as inspiration for Canada (Green, 2025). Recently, Ontario announced that its bar exam may be scrapped and replaced with a skills-based course, mirroring Alberta’s approach (Weingarten, 2025). British Columbia will also join Alberta in CPLED’s program as of September 2026 (Carolino, 2025). Adopting a skills-based approach to Canadian bar exams and licensing courses suggests that AI need not be avoided altogether, provided the proper guardrails are in place (McKeith, 2023). I invite readers to advocate for law schools and law firms to offer training on the use of AI, so that future lawyers are neither afraid of a technology that is already everywhere nor replaced by it (McKeith, 2023).

In the end, AI is here to stay. Its rapid growth puts a spotlight on the younger generation entering the legal profession and raises the question: how much can we hand over to technology before it undermines the profession and its ethical obligations? From an AI ethics perspective, law schools need to ensure that AI is used as a tool to assist with case research, not something students depend on, since client information can slip through the cracks when training is not at the forefront. Otherwise, society risks producing law students who can generate answers but lack judgment and communication skills at the moments that matter most, whether with clients or in a courtroom before a judge. The real issue is whether we will draw that line now or only after it has already been crossed.


References

Bennett Moses, L., & Misel. (2023, March 23). ChatGPT Is Putting the Future of Grad Lawyers Under the Microscope. Law Society Journal. https://lsj.com.au/articles/chat-gpt-is-putting-the-future-of-grad-lawyers-under-the-microscope/.

Canadian Bar Association. (2025). Principles of Conduct. https://www.cba.org/about-us/governance/operational-policies/principles-of-conduct/. 

Canadian Centre for Professional Legal Education (CPLED). (2025). CPLED. https://cpled.ca/.

Canadian Centre for Professional Legal Education. (2025). Practice Readiness Education Program (PREP). CPLED. https://cpled.ca/students/cpled-prep/.

Carolino, B. (2025, October 28). BC Law Society to Launch Practice Readiness Education Program in September 2026. Canadian Lawyer. https://www.canadianlawyermag.com/resources/professional-regulation/bc-law-society-to-launch-practice-readiness-education-program-in-september-2026/393274. 

Dahl, M., Magesh, V., Suzgun, M., & Ho, D. E. (2024). Large Legal Fictions: Profiling Legal Hallucinations in Large Language Models. Journal of Legal Analysis. https://arxiv.org/abs/2401.01301.  

Federal Court of Canada. (2024). Notice to the parties and the profession: Update to the use of artificial intelligence in court proceedings.  https://www.fct-cf.ca/Content/assets/pdf/base/FC-Updated-AI-Notice-EN.pdf. 

Federal Court of Canada. (2025). Artificial intelligence. https://www.fct-cf.ca/en/pages/law-and-practice/artificial-intelligence. 

Ferguson, D. D. (2021, February 22). Douglas D. Ferguson on Legal Education. CanLII Commentary. https://canlii.org/en/commentary/doc/2021CanLIIDocs471. 

Fine, A., & Marsh, S. (2024, June 28). Judicial Leadership Matters (Yet Again): The Association Between Judge and Public Trust for Artificial Intelligence in Courts. Discover Artificial Intelligence. https://doi.org/10.1007/s44163-024-00142-3. 

Goodman, C. C. (2019). Impacts of Artificial Intelligence in Lawyer-Client Relationships. Oklahoma Law Review, 72(1), 149–184.

Goswami, P. (2025, April 3). Revolutionizing legal education: The Role of Artificial Intelligence in Shaping the Future of Law Teaching and Learning. Social Science Research Network. https://doi.org/10.2139/ssrn.5123719. 

Gowling WLG. (2025, July 11). Innovation in Action: Gowling WLG Becomes First Canadian Law Firm to Roll out Harvey AI Enterprise-Wide. Gowling WLG. https://gowlingwlg.com/en/news/firm-news/2025/gowling-wlg-becomes-first-canadian-law-firm-to-roll-out-harvey-ai-enterprise-wide. 

Legg, M. (2024, November 12). Better Than a Bot – Instilling Ethical Judgement into the Lawyers of the Future in the Age of AI. Griffith Law Review, 33(3), 273–293. https://doi.org/10.1080/10383441.2025.2493493. 

LexisNexis. (2025). Lexis+ AI legal research platform & AI assistant. LexisNexis. https://www.lexisnexis.com/en-int/products/lexis-plus-ai 

LexisNexis Legal & Professional. (2023). Generative AI & the legal profession: 2023 survey report. LexisNexis. https://www.lexisnexis.com/pdf/ln_generative_ai_report.pdf 

Linna, D. W., Jr., & Muchman, W. J. (2020). Ethical Obligations to Protect Client Data When Building Artificial Intelligence Tools: Wigmore Meets AI. Professional Lawyer, 27–38.

McGinnis, J. O., & Pearce, R. G. (2014). The Great Disruption: How Machine Intelligence Will Transform the Role of Lawyers in the Delivery of Legal Services. Fordham Law Review, 3041–3066.

So, J. (2025, September 9). Pre-Law Student Survey Unmasks Fears of Artificial Intelligence Taking Over Legal Roles. Canadian Lawyer. https://www.canadianlawyermag.com/news/international/to-come-international-2/393028. 

Westfahl, S. A., & Wilkins, D. B. (2017). The Leadership Imperative: A Collaborative Approach to Professional Development in The Global Age of More for Less. Stanford Law Review. 69(6). https://review.law.stanford.edu/wp-content/uploads/sites/3/2017/06/69-Stan.-L.-Rev.-1667.pdf. 

Weingarten, N. (2025, November 21). Ontario Bar Exam for Future Lawyers Could Be Scrapped, Replaced with Skills-Based Course. CBC News. https://www.cbc.ca/news/canada/toronto/ontario-bar-exam-replaced-9.6987640.

Wilbur, T. (2025, July 25). AI in Law Firms Should be a Training Tool, Not a Threat, for Young Lawyers. Canadian Lawyer. https://www.canadianlawyermag.com/news/opinion/ai-in-law-firms-should-be-a-training-tool-not-a-threat-for-young-lawyers/392807. 
