
Research summary: Legal Risks of Adversarial Machine Learning Research

July 20, 2020

Summary contributed by Sundar Narayanan, Director at Nexdigm and an ethics & compliance professional.

*Authors of full paper & link at the bottom


Mini-summary:

Adversarial attacks on machine learning systems have existed for years, yet there is still no structured legal framework for dealing with them, whether in the terms of service that govern ML use or in regulation of the ML environments and models themselves.

The paper analyses the confusing legal landscape that awaits legal practitioners and the risks that ML researchers face in the current environment.
To drive this point home, the authors examine a specific regulation, the Computer Fraud and Abuse Act (CFAA), and the specific clauses that define the offence: intentionally accessing a computer without authorization or exceeding authorized access; obtaining any information from a protected computer; and intentionally causing damage by knowingly transmitting a program, information, code, or command.

The paper also reflects on how the courts are divided between a broad interpretation (accessing information for an improper purpose, such as in breach of an agreement or terms of service, is itself "exceeding authorized access") and a narrow interpretation (only bypassing technological access barriers counts).
The researchers classify attacks into exploratory attacks, poisoning attacks, attacks on ML environments, and attacks on software dependencies. They conclude that, in their view, the Supreme Court is likely to take the narrow view, and note that such a reading may encourage ML security researchers to pursue these exploits, improving the robustness of ML environments and models.


Full summary:

Context

  • For legal practitioners, the paper describes the complex and confusing legal landscape of applying the CFAA to adversarial ML.
  • For adversarial ML researchers, it describes the potential legal risks of conducting adversarial ML research.

About the Computer Fraud and Abuse Act (CFAA)

  • Enacted in 1986 and amended multiple times; the most recent amendment was in 2008.
  • Has very broad definitions.
  • The CFAA prohibits intentionally accessing a computer without authorization or in excess of authorization, but fails to define what “without authorization” means.
  • Distribution of malicious code and denial-of-service attacks were later added as offences.
  • There was an attempt to strengthen the law in 2015 under the Obama administration, but critics argued the proposal would be detrimental to many legitimate internet activities.
  • In the past, many security researchers have been embroiled in enforcement actions under the CFAA (e.g., Weev vs. AT&T).

Source: Wikipedia, NACDL 

Key clauses considered for CFAA analysis

  • Intentional access without or exceeding authorization, Section 1030(a)(2). Important aspects:
    • accessing a computer without authorization or exceeding authorized access
    • obtaining any information
    • on a protected computer
  • Damage by knowing transmission, Section 1030(a)(5). Important aspects:
    • knowingly transmitting a program, information, code, or command
    • intentionally causing damage as a result

Narrow and broad interpretations of the clauses

Broad interpretation: “exceed authorized access” includes accessing information on a computer system for an “improper purpose,” which usually means breaching some agreement, policy, or terms of service.

Narrow interpretation: "exceeding authorized access" requires bypassing a technological access barrier (e.g., circumventing code-based restrictions); merely accessing information for an improper purpose, such as in breach of an agreement, policy, or terms of service, is not a violation.

The paper considers attacks in a black-box setting, in which the attacker has no direct access to the training data, no knowledge of the algorithm, and no knowledge of the features the model uses.

Types of attacks:

  1. Exploratory attacks: attacks that send queries to the model and observe its responses.
    • Evasion attack: tricking the ML system into misclassifying an input.
    • Model stealing: replicating an ML model by strategically querying it and observing the responses (a minimal sketch of this follows the list).
    • Model inversion / membership inference: inferring sensitive information about private training data by exploiting the confidence scores the model returns and reconstructing features.
    • Reprogramming the ML system: making the ML system perform a task its developer never intended.
  2. Poisoning attacks: attacks that taint the training data. ML models are often retrained on the outcomes they generate (sometimes along with human feedback) to address shifts in the data distribution; this retraining loop can be abused to poison the training data (the second sketch below illustrates this).
  3. Attacks on the ML environment: the attacker subverts the machine learning system by tampering with the source code, build processes, or update mechanisms (e.g., tampered pre-trained models).
  4. Exploiting software dependencies: the attacker exploits unpatched vulnerabilities in popular ML packages such as NumPy and TensorFlow.
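
To make the black-box setting and the exploratory attack concrete, here is a minimal model-stealing sketch. It is illustrative only and not taken from the paper: the "victim" model and every name in the snippet are hypothetical stand-ins for a prediction API that an attacker can query but not inspect.

```python
# Minimal sketch of a black-box model-stealing (exploratory) attack.
# Illustrative only: the "victim" stands in for any prediction API the
# attacker can query but cannot inspect; all names are hypothetical.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

# Victim: a model the attacker can only query, not inspect.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
victim = LogisticRegression(max_iter=1000).fit(X, y)

def query_victim(inputs):
    """Stands in for a remote prediction API that returns labels only."""
    return victim.predict(inputs)

# Attacker: strategically query, record responses, train a surrogate.
rng = np.random.default_rng(1)
queries = rng.normal(size=(5000, 10))    # attacker-chosen inputs
responses = query_victim(queries)        # observed outputs

surrogate = DecisionTreeClassifier().fit(queries, responses)

# The surrogate now approximates the victim's decision boundary,
# reconstructed purely from query/response pairs.
agreement = (surrogate.predict(X) == victim.predict(X)).mean()
print(f"Surrogate agrees with the victim on {agreement:.0%} of inputs")
```

A second sketch, under the same illustrative assumptions (a hypothetical pipeline that naively retrains on user feedback), shows how a poisoning attack can abuse a retraining loop: the attacker submits feedback with flipped labels, and the pipeline folds it back into the training set.

```python
# Minimal sketch of a poisoning attack on a hypothetical retraining loop.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, y_train, X_test, y_test = X[:1000], y[:1000], X[1000:], y[1000:]

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("clean accuracy:", accuracy_score(y_test, model.predict(X_test)))

# Attacker-controlled "feedback": near-duplicates of training points
# with deliberately flipped labels, naively appended before retraining.
rng = np.random.default_rng(2)
poison_X = X_train[:300] + rng.normal(scale=0.1, size=(300, 10))
poison_y = 1 - y_train[:300]

model = LogisticRegression(max_iter=1000).fit(
    np.vstack([X_train, poison_X]),
    np.concatenate([y_train, poison_y]),
)
print("poisoned accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

Neither snippet bypasses any technological barrier on its own; whether such querying or feedback submission "exceeds authorized access" is exactly the CFAA question the paper examines.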

Assessed impacts of various types of adversarial attacks

Conclusion

The paper concludes that a narrow interpretation, focused on hacking and bypassing technological barriers, is more consistent with how the Supreme Court is likely to view the statute.

It further concludes that if the narrow interpretation is adopted, ML security researchers will be less likely to be chilled from conducting tests and other exploratory work on ML systems, again leading to better security in the long term.

Future research the paper opens up:

The paper has several limitations that point to areas for future research:

  1. The paper looks at adversarial attacks in a limited sense. For example, it does not consider how these actions might be treated under general principles of law (a common-law perspective): extraction of information could be tried as theft, and replication of content or a model could be pursued under intellectual property or other common-law doctrines.
  2. The paper's impact assessment is limited to the Computer Fraud and Abuse Act. The act requires a determination of intent, and there are cases where intent has been deemed fraudulent even when security researchers were merely identifying a possible exploit.
  3. The paper is limited to the CFAA, but other regulations also bear on data privacy and security (the foundation of adversarial attacks), including laws on wiretapping, wire fraud, identity theft, access device fraud, unlawful access to stored communications, federal information security management, Gramm-Leach-Bliley, and HIPAA.
  4. The paper looks at exposure from the ML security attacker's perspective; however, there are also implications for the company whose ML system is attacked, in the context of data privacy and security requirements.
  5. Above all, it is important to remember that cases are decided on their particular facts and circumstances rather than on a general interpretation of such clauses.



Original paper by Ram Shankar Siva Kumar, Jonathon Penney, Bruce Schneier, Kendra Albert: https://arxiv.org/ftp/arxiv/papers/2006/2006.16179.pdf

