
✍️By Alexandria Workman.
Alexandria is an undergraduate student majoring in Political Science and minoring in Business at Indiana University, and an Undergraduate Affiliate at the Governance and Responsible AI Lab (GRAIL) at Purdue University.
📌 Editor’s Note: This article is part of our AI Policy Corner series, a collaboration between the Montreal AI Ethics Institute (MAIEI) and the Governance and Responsible AI Lab (GRAIL) at Purdue University. This article explores how AI policy may affect labor outcomes in the United States by analysing state and federal policies and relevant court opinions. Proposals that promote AI safety and support industry growth are common, but a recurring question is how much human oversight will be required. The Healthy Technology Act of 2025 illustrates how these oversight questions play out in practice.
In recent years, there has been an influx of AI policy allowing companies to expand their capabilities by using AI programs to support their work. This trend is especially visible in the healthcare industry, where legislators are offering support through policy.
In January 2025, Congress introduced the Healthy Technology Act of 2025, a bill that would allow artificial intelligence systems to serve as legal drug prescribers. It would require that the FDA authorize the AI as a regulated medical device and that individual states permit AI prescribing within their jurisdictions. The bill signals a shift in regulatory approach: by permitting AI to perform prescribing functions independently, it allows AI into roles that once required human judgment.
Because the bill allows AI to perform the prescribing function, a single AI deployment could replace the work of multiple human prescribers. These systems could be deployed at a scale and cost that licensed human providers cannot compete with. Proponents would argue that these financial gains will outweigh the resulting job losses.
The Act includes oversight provisions based on existing FDA approval processes. However, the opacity of AI systems could make it difficult for those processes to detect emerging concerns, such as bias.
The bill does not address how it would affect currently licensed prescribers or how humans will be kept in the loop when AI prescribes medication. On liability, the bill does not indicate what would happen if the AI makes an incorrect prescription.
Proposals similar to those in the Healthy Technology Act are appearing in other industries as well. In transportation and legal services, some recent bills and regulatory proposals are also reconsidering human oversight requirements and professional licensing standards.
The AMERICA DRIVES Act (2025) would preempt state laws that require a human driver in fully automated commercial trucks operating in interstate commerce, allowing these vehicles to run driverless across state lines. If adopted, this could increase pressure on workers who hold a commercial driver’s license (CDL). Businesses, however, may see AI-driven commercial trucking as a benefit because it would lower costs and potentially improve road safety.
Tennessee is also looking closely at how the legal profession should adapt to AI. AI is already used in business law to draft contracts, review deal documents, and track compliance rules, while the Tennessee Bar Association emphasizes that lawyers must verify outputs, protect confidentiality, and supervise AI use. The Tennessee Supreme Court has also sought public input on regulatory reform. Although the request does not mention AI, it sits within the same reform push that AI is accelerating: cheaper delivery of legal services, new service models, and pressure on licensing. The Court’s inquiry suggests AI may reshape how lawyer licensing and future policy are viewed.
However, not all policymakers’ responses have focused on deregulation. For example, in November 2025, Sens. Mark Warner and Josh Hawley released the AI-Related Job Impacts Clarity Act (2025), which would require major companies and federal agencies to report AI-related layoffs to the Department of Labor. Similar trends appear at the state level, as in Utah, where the “Utah’s Pro-Human Leadership in the Age of AI” summit brought together leaders from government, business, and academia to promote “pro-human” AI that supports workers and human values instead of replacing people.
Even though government AI policy is still new, a growing number of bills span the political spectrum, from stricter AI safety regulation to measures aimed at supporting business growth. The real question is not whether AI will transform work, but whether policymakers will shape that transformation around workers or around AI.
Further Reading:
- Two federal judges say use of AI led to errors in US court rulings | Reuters
- House passes bill to ease permits for building out AI infrastructure
- Florida Senate backs “Artificial Intelligence Bill of Rights” amid tech group’s opposition, Trump’s AI push
Photo credit: https://news.harvard.edu/gazette/story/2025/03/how-ai-is-transforming-medicine-healthcare/
