Montreal AI Ethics Institute

Democratizing AI ethics literacy


AI Policy Corner: Automating Licensed Professions: Assessing Health Technology and Other Industries

February 16, 2026

✍️ By Alexandria Workman

Alexandria Workman is an undergraduate student majoring in Political Science and minoring in Business at Indiana University, and an Undergraduate Affiliate at the Governance and Responsible AI Lab (GRAIL) at Purdue University.


📌 Editor’s Note: This article is part of our AI Policy Corner series, a collaboration between the Montreal AI Ethics Institute (MAIEI) and the Governance and Responsible AI Lab (GRAIL) at Purdue University. It explores how AI policy may affect labor outcomes in the United States by analyzing state and federal policies and relevant court opinions. Proposals that promote AI safety and support industry growth are common, but a recurring question is how much human oversight will be required. The Healthy Technology Act of 2025 shows how that question plays out in practice.


In recent years, there has been an influx of AI policy that allows companies to expand their capabilities by using AI systems to support their work. This trend is most pronounced in the healthcare industry, where legislators are offering support through policy.

In January 2025, Congress introduced the Healthy Technology Act of 2025, a bill that would allow artificial intelligence systems to serve as legal drug prescribers. The bill would require that the FDA authorize the AI as a regulated medical device and that individual states permit AI prescribing within their jurisdictions. It signals a shift in regulatory approach: by permitting AI to perform prescribing functions independently, it admits AI into roles that once required human judgment.

Because the bill allows AI to perform the prescribing function, a single AI deployment could replace the work of multiple human prescribers, operating at a scale and cost that licensed human providers cannot match. Proponents would argue that these financial gains outweigh the resulting job losses.

The Act includes oversight provisions based on existing FDA approval processes, but the opacity of AI systems could make it difficult for those processes to detect emerging concerns such as bias.

The bill does not address how it would affect employees currently licensed in this practice, how humans would be kept in the loop when AI prescribes medication, or who would be liable if the AI is incorrect.

The questions raised by the Healthy Technology Act extend to other industries as well. In transportation and legal services, recent bills and regulatory proposals are also reconsidering human oversight requirements and professional licensing standards.

The AMERICA DRIVES Act (2025) would preempt state laws that require a human driver in fully automated commercial trucks operating in interstate commerce, allowing these vehicles to run driverless across state lines. If adopted, it could increase pressure on workers who hold commercial driver’s licenses (CDLs). However, businesses may see AI-driven commercial trucking as a benefit, since it could lower costs and potentially improve road safety.

Tennessee is also looking closely at how the legal profession should adapt to AI. AI is already being used in business law to draft contracts, review deal documents, and track compliance rules, while the Tennessee Bar Association emphasizes that lawyers must verify outputs, protect confidentiality, and supervise AI use. The Tennessee Supreme Court has also sought public input on regulatory reform. Although the request does not mention AI, it sits within the same reform push that AI is accelerating: cheaper delivery of legal services, new service models, and pressure on licensing. The Court’s inquiry suggests AI may reshape how we view lawyer licensing and future policy.

However, not all policy responses have focused on deregulation. In November 2025, Sens. Mark Warner and Josh Hawley released the AI-Related Job Impacts Clarity Act, which would require major companies and federal agencies to report AI-related layoffs to the Department of Labor. Similar trends appear at the state level: in Utah, the “Utah’s Pro-Human Leadership in the Age of AI” summit brought together leaders from government, business, and academia to promote “pro-human” AI that supports workers and human values rather than replacing people.

Even though government AI policy is still new, we’re seeing a growing number of bills across the political spectrum, from stricter AI safety regulation to measures aimed at supporting business growth. The real question is not whether AI will transform work, but whether policymakers will shape that transformation around workers or around AI.

Further Readings:

  • Two federal judges say use of AI led to errors in US court rulings | Reuters
  • House passes bill to ease permits for building out AI infrastructure
  • Florida Senate backs “Artificial Intelligence Bill of Rights” amid tech group’s opposition, Trump’s AI push

Photo credit: https://news.harvard.edu/gazette/story/2025/03/how-ai-is-transforming-medicine-healthcare/

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.


© 2025 Montreal AI Ethics Institute. This work is licensed under a Creative Commons Attribution 4.0 International License.