
Research summary: Legal Risks of Adversarial Machine Learning Research

July 20, 2020

Summary contributed by Sundar Narayanan, Director at Nexdigm and ethics & compliance professional.

*Authors of full paper & link at the bottom


Mini-summary:

Adversarial attacks on machine learning systems have existed for years, yet there is still no structured legal framework for dealing with them, whether in the terms of service governing ML use or in regulation of ML environments and models themselves.

The paper analyses the confusing landscape ahead for legal practitioners and the risks ML researchers face in the current environment.
To drive this point home, the authors examine a specific statute, the Computer Fraud and Abuse Act, and the clause that defines the offence: intentionally accessing a computer without authorization or exceeding authorized access; obtaining any information from a protected computer; and intentionally causing damage by knowingly transmitting a program, information, code, or command.

The paper also reflects on how the courts are divided between a broad interpretation (using access for an improper purpose, such as breaching an agreement or terms of service, itself "exceeds authorized access") and a narrow interpretation (only bypassing technological access barriers does).
The researchers also classify attacks into exploratory attacks, poisoning attacks, attacks on ML environments, and attacks on software dependencies. They conclude that, in their view, the Supreme Court is likely to adopt the narrow reading, and that a narrow reading may encourage ML security researchers to keep pursuing such exploits, improving the robustness of ML environments and models.


Full summary:

Context

  • For legal practitioners, the paper describes the complex and confusing legal landscape of applying the CFAA to adversarial ML. 
  • For adversarial ML researchers, it describes the potential risks of conducting adversarial ML research.

About the Computer Fraud and Abuse Act (CFAA)

  • Enacted in the 1980s and amended multiple times, most recently in 2008.
  • Has very broad definitions.
  • The CFAA prohibits intentionally accessing a computer without authorization or in excess of authorization, but fails to define what “without authorization” means.
  • Distribution of malicious code and denial-of-service attacks were later added as offences.
  • There was an attempt to expand the law in 2015 under the Obama administration, but the proposal was argued to be detrimental to many legitimate internet activities.
  • In the past, many security researchers have become embroiled in regulatory and enforcement tangles under the CFAA (e.g., the Weev vs. AT&T case).

Source: Wikipedia, NACDL 

Key clause considered for CFAA analysis

  • Intentional Access Without or Exceeding Authorization — Section 1030(a)(2)
  • Important aspects:
    • Without authorization or exceeds authorized access
    • Obtains any information 
    • On a protected computer
    • Intentionally causing damage
    • By knowingly transmitting a program, information, code or command

Broad and narrow interpretations of the clause

Broad interpretation: “exceed authorized access” includes accessing information on a computer system for an “improper purpose,” which usually means breaching some agreement, policy, or terms of service.

Narrow interpretation: merely using access for an improper purpose (such as breaching an agreement or terms of service) is not enough; “exceeding authorized access” requires bypassing a technological barrier or code-based access restriction.

The paper focuses on black-box attacks, in which the attacker has no direct access to the training data, no knowledge of the algorithm, and no knowledge of the features the algorithm uses.

Types of attacks:

  1. Exploratory attacks: Attacks that send queries to the model and observe its outputs.
  • Evasion attack: Tricking the ML system into misclassifying an input
  • Model stealing: Replicating an ML model by strategically querying it and observing the responses (see the query-and-surrogate sketch after this list)
  • Model inversion / membership inference: Inferring sensitive information about private training data by exploiting the confidence scores returned by the model and reconstructing features
  • Reprogramming the ML system: Making the ML system perform an activity its developer did not intend
  2. Poisoning attack: Attacks that taint the training data. ML models are often retrained on the outcomes they generate (sometimes along with human feedback) to address shifts in data distribution, and this retraining loop can be used to poison the training data (see the label-flipping sketch after this list).
  3. Attacks on the ML environment: The attacker subverts the machine learning system by tampering with the source code, build processes, or update mechanisms (e.g., via pre-trained models).
  4. Exploiting software dependencies: The attacker exploits unpatched vulnerabilities in popular ML packages such as numpy and tensorflow.
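
To make the black-box exploratory setting concrete, here is a minimal sketch of a model-stealing attack: the attacker only calls a query interface on a victim classifier, labels its own inputs with the victim's answers, and trains a surrogate model on those pairs. The synthetic dataset, model choices, and the `query_victim` helper are illustrative assumptions, not details from the paper.

```python
# Minimal sketch of a black-box model-stealing (extraction) attack.
# Everything here (synthetic data, model choices, query_victim) is
# illustrative; the paper describes the attack only at a high level.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# --- Victim side: a model the attacker cannot inspect, only query. ---
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_rest, y_train, _ = train_test_split(X, y, test_size=0.6, random_state=0)
X_query, X_test = X_rest[:2000], X_rest[2000:]
victim = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500,
                       random_state=0).fit(X_train, y_train)

def query_victim(inputs):
    """Black-box interface: the attacker sees only predicted labels."""
    return victim.predict(inputs)

# --- Attacker side: label attacker-chosen inputs through the query
# interface, then train a surrogate ("stolen") model on the pairs. ---
stolen_labels = query_victim(X_query)
surrogate = LogisticRegression(max_iter=1000).fit(X_query, stolen_labels)

# Agreement with the victim on held-out inputs indicates how faithfully
# the surrogate replicates the victim's behaviour.
agreement = accuracy_score(query_victim(X_test), surrogate.predict(X_test))
print(f"surrogate matches victim on {agreement:.1%} of held-out queries")
```

Note how little the attacker needs here: no training data, no model internals, only query access, which is what makes the CFAA's treatment of such probing so consequential for researchers.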
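
Similarly, a label-flipping experiment gives a rough feel for the poisoning attack in item 2. The flip rate, dataset, and model below are illustrative assumptions rather than details from the paper.

```python
# Minimal sketch of a label-flipping data-poisoning attack, assuming the
# attacker can corrupt a fraction of the data a model is (re)trained on.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=3000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)

clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Attacker flips the labels of 30% of the training points before retraining.
rng = np.random.default_rng(0)
poisoned = y_train.copy()
flip_idx = rng.choice(len(poisoned), size=int(0.3 * len(poisoned)), replace=False)
poisoned[flip_idx] = 1 - poisoned[flip_idx]  # binary labels: swap 0 <-> 1

poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned)

print("accuracy with clean labels:   ",
      round(accuracy_score(y_test, clean_model.predict(X_test)), 3))
print("accuracy with poisoned labels:",
      round(accuracy_score(y_test, poisoned_model.predict(X_test)), 3))
```

In a real feedback-retraining loop the attacker would inject corrupted points gradually through whatever channel feeds retraining, but the resulting degradation of the retrained model is of the kind this comparison shows.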

Assessed impacts of various types of adversarial attacks

For each attack type above, the paper assesses which CFAA provisions the attack may implicate under the broad and narrow readings of the statute.

Conclusion

The paper concludes that a narrow interpretation, focused on hacking and bypassing technological barriers, is more consistent with how the Supreme Court is likely to view the statute.

It further concludes that if the narrow interpretation is adopted, ML security researchers will be less likely to be chilled from conducting tests and other exploratory work on ML systems, leading to better security in the long term.

Future research the paper opens up:

The paper has certain limitations that point toward future research areas. The limitations include:

  1. The paper looks at adversarial attacks in a limited sense. For example, it does not consider how such actions could be treated as violations under general principles of law (a common-law perspective): extraction of information could be tried as theft, and replication of content or a model could be pursued under intellectual property or other common-law doctrines.
  2. The paper's assessment of impact is limited to the Computer Fraud and Abuse Act. The act requires a determination of intent, and there are cases where intent has been deemed fraudulent (or otherwise) even when security researchers merely identified a possible exploit.
  3. The paper is limited to the CFAA, but other regulations bear on data privacy and security in general (the basis of adversarial attacks), including laws on wiretapping, wire fraud, identity theft, access device fraud, unlawful access to stored communications, federal information security management, Gramm-Leach-Bliley, and HIPAA.
  4. The paper looks at exposure from the ML security attacker's perspective; however, there are also implications for the company whose ML system is attacked, in the context of data privacy and security requirements.
  5. Beyond these, it is important to remember that cases are decided on their facts and circumstances rather than on a generalized interpretation of such clauses.



Original paper by Ram Shankar Siva Kumar, Jonathon Penney, Bruce Schneier, Kendra Albert: https://arxiv.org/ftp/arxiv/papers/2006/2006.16179.pdf

