Montreal AI Ethics Institute

Democratizing AI ethics literacy


AI Policy Corner: Automating Licensed Professions: Assessing Health Technology and Other Industries

February 16, 2026

✍️ By Alexandria Workman

Alexandria is an undergraduate student majoring in Political Science and minoring in Business at Indiana University, and an Undergraduate Affiliate at the Governance and Responsible AI Lab (GRAIL) at Purdue University.


📌 Editor’s Note: This article is part of our AI Policy Corner series, a collaboration between the Montreal AI Ethics Institute (MAIEI) and the Governance and Responsible AI Lab (GRAIL) at Purdue University. This article explores how AI policy may affect labor outcomes in the United States by analyzing state and federal policies and relevant court opinions. Proposals that promote AI safety and support industry growth are common, but a recurring question is how much human oversight will be required. The Healthy Technology Act of 2025 illustrates how these oversight questions play out in practice.


In recent years, there has been an influx of AI policy allowing companies to expand their capabilities by using AI systems to support their work. This trend is especially prominent in the healthcare industry, where legislators are offering support through policy.

In January 2025, Congress introduced the Healthy Technology Act of 2025, a bill that would allow artificial intelligence systems to serve as legal drug prescribers. It would require that the FDA authorize the AI as a regulated medical device and that individual states permit AI prescribing within their jurisdictions. The bill signals a shift in regulatory approach: by permitting AI to perform prescribing functions independently, it allows AI into roles that once required human judgment.

Because the bill allows AI to perform the prescribing function, a single AI deployment could fulfill the work of multiple human prescribers, operating at a scale and cost that licensed human providers cannot match. Proponents would argue that these financial gains outweigh the resulting job losses.

The Act includes oversight provisions based on existing FDA approval processes. However, the opacity of AI systems could make it difficult for those processes to detect emerging concerns, such as bias.

The bill doesn’t address how it would affect currently licensed prescribers or how humans would be kept in the loop when AI prescribes medication. On liability, the bill does not indicate what would happen if the AI prescribes incorrectly.

Proposals like those in the Healthy Technology Act are appearing in other industries too. In transportation and legal services, recent bills and regulatory proposals are also reconsidering human oversight requirements and professional licensing standards.

The AMERICA DRIVES Act (2025) would preempt state laws that require a human driver in fully automated commercial trucks operating in interstate commerce, allowing these vehicles to run driverless across state lines. If adopted, this could increase pressure on workers who hold commercial driver's licenses (CDLs). However, businesses may see AI-driven commercial trucking as a benefit because it would lower costs and potentially improve road safety.

Tennessee is also looking closely at how the legal profession should adapt to AI. AI is already being used in business law to draft contracts, review deal documents, and track compliance rules, while the Tennessee Bar Association emphasizes that lawyers must verify outputs, protect confidentiality, and supervise AI use. The Tennessee Supreme Court has also sought public input on regulatory reform. Although the request does not mention AI, it sits in the same reform push that AI is accelerating: cheaper delivery of legal services, new service models, and pressure on licensing. Even so, the Court’s inquiry suggests AI may reshape how we view lawyer licensing and future policy.

Not all policymakers’ responses have focused on deregulation, however. In November 2025, Sens. Mark Warner and Josh Hawley released the AI-Related Job Impacts Clarity Act (2025), which would require major companies and federal agencies to report AI-related layoffs to the Department of Labor. Similar trends appear at the state level: in Utah, the “Utah’s Pro-Human Leadership in the Age of AI” summit brought together leaders from government, business, and academia to promote “pro-human” AI that supports workers and human values instead of replacing people.

Even though government AI policy is still new, we’re seeing a growing number of bills across the political spectrum, from stricter AI safety regulation to measures aimed at supporting business growth. The real question is not whether AI will transform work, but whether policymakers will shape that transformation around workers or around AI.

Further Readings:

  • Two federal judges say use of AI led to errors in US court rulings | Reuters
  • House passes bill to ease permits for building out AI infrastructure
  • Florida Senate backs “Artificial Intelligence Bill of Rights” amid tech group’s opposition, Trump’s AI push

Photo credit: https://news.harvard.edu/gazette/story/2025/03/how-ai-is-transforming-medicine-healthcare/

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.

© 2025 Montreal AI Ethics Institute. This work is licensed under a Creative Commons Attribution 4.0 International License. Learn more about our open access policy here.