Montreal AI Ethics Institute


Structured access to AI capabilities: an emerging paradigm for safe AI deployment

June 19, 2022

🔬 Research Summary by Max Krueger, a consultant at Accenture with an interest in both the long- and short-term implications of AI on society.

[Original paper by Toby Shevlane]


Overview: As AI models grow more powerful, providing safe access to them becomes an important question. One paradigm addressing this is structured capability access (SCA), which aims to curb model misuse through strategies that limit end-users’ access to various parts of a given model.


Introduction

Access control is a crucial component of an effective, global AI governance strategy. Structured access is a framework in which developers limit and control how their AI systems are used, reproduced, and modified. SCA treats AI systems as both information and a tool; viewing access through this paradigm broadens the potential control mechanisms available to developers.

Key Insights

Fundamentally, structured capability access (SCA) is a safety mechanism in which the model developer provides – and takes responsibility for – access to their models. It can be thought of as model-as-a-service: you pay a company to use its model without seeing how it is built or maintained, while the developer controls how you use it. Crucially, for SCA to be effective, companies must have a reliable means of tracking and understanding how their models are being used. SCA aims to address the question: how can AI systems be deployed so that harm by users, whether intentional or not, is prevented? The author, Toby Shevlane, states:

The developer offers a controlled interaction with the AI system’s capabilities, using technical and sometimes bureaucratic methods to limit how the software can be used, modified, and reproduced.

There are two broad ways an entity can provide controls over a model: 1) use controls and 2) modification and reproduction controls.

Use Controls

Use controls take two forms: software-level controls and access-level controls. As the name suggests, use controls limit how an end-user can leverage the AI model. For example, careful design of an AI system (a software-level control) can reduce bias and misuse without any need to vet the end-use case. In conjunction with software-level controls, the developer can interpose an application programming interface (API) or user interface to limit who accesses a model and how frequently. This allows the controlling party to grant or revoke access to the model.
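The access-level idea can be made concrete with a small sketch. The following is a hypothetical API gate that checks a per-user policy (a set of permitted tasks plus an hourly quota) before serving a request; the names (`AccessPolicy`, `check_access`) and the policy fields are illustrative, not drawn from the paper:

```python
from dataclasses import dataclass, field


@dataclass
class AccessPolicy:
    """Per-user policy: which capabilities are permitted and how often."""
    allowed_tasks: set
    max_requests_per_hour: int
    request_times: list = field(default_factory=list)


def check_access(policy: AccessPolicy, task: str, now: float) -> bool:
    """Serve a request only if the task is permitted and the quota allows it."""
    if task not in policy.allowed_tasks:
        return False
    # Keep only timestamps from the last hour, then enforce the hourly quota.
    policy.request_times = [t for t in policy.request_times if now - t < 3600]
    if len(policy.request_times) >= policy.max_requests_per_hour:
        return False
    policy.request_times.append(now)
    return True
```

Revoking access is then as simple as deleting the user's policy on the server side, which is exactly the grant/revoke flexibility the paragraph above describes.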

Modification and Reproduction Controls

Modification and reproduction controls aim to limit how much of the model the end-user can change or copy. For example, developers may opt to keep the source code proprietary rather than open-sourcing it. This makes the model harder to modify, though closed code has other implications within the AI ethics domain. A developer may also implement sophisticated cybersecurity defenses to hinder modification and reproduction. However, as demonstrated in my previous research summary, even black-box models remain highly susceptible to adversarial attacks, especially reproduction (model-extraction) attacks.

Selective Disclosure

Selective disclosure in SCA builds off the concept of structured transparency. According to Shevlane, structured transparency “involves finding mechanisms, both technical and social, for granting access to certain pieces of information while keeping others private.” SCA takes this concept further by “governing what somebody can and cannot do with an AI system.” As the author points out, selective disclosure works well for the governance of data such as personally identifiable information. Still, it does not work well for dual-use technologies such as model code, which is both information and a tool.

Microsoft’s DialoGPT illustrates this issue. Researchers were concerned about inappropriate use of the model and therefore withheld an essential piece of code from the open-sourced codebase. This does not solve the problem: either the model is rendered useless, or end-users find a substitute for the missing code and regain full access to the model. In summation, “The lesson is that the developer cannot selectively filter the informational content of the software in a way that neatly discriminates between uses.” The researchers aimed to limit the use of the tool while providing access to the information, and the strategy failed because the model and its code are dual-use: both information and a tool. Selective disclosure does not seem to be a viable control mechanism.

Implementation of SCA

The author differentiates SCA mechanisms based on the model’s deployment method: local or cloud-based. Controls on local deployment are complicated to enforce. For example, a developer could use a licensing system to control who uses the product, though this is easily circumvented and hard to implement at scale. Modification and reproduction are likewise very difficult to police. Developers could build in piracy controls, deep-learning-specific encryption, or embed the software in particular hardware. While each of these might make reproduction and modification harder, a well-motivated adversary may be able to break these controls.
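To see why local licensing is easy to circumvent, consider a minimal, purely hypothetical HMAC-signed license check (this naive symmetric scheme is not from the paper). Because the verification code, and in this version even the signing secret, ships inside the locally deployed software, an adversary can patch the check out or read the key, which is precisely the enforcement weakness noted above:

```python
import hashlib
import hmac

# Hypothetical signing key. In this naive symmetric scheme it must ship with
# the client, so any adversary who inspects the binary can forge licenses.
SECRET = b"vendor-signing-key"


def issue_license(user_id: str) -> str:
    """Vendor side: tag the user id so the client can verify it offline."""
    tag = hmac.new(SECRET, user_id.encode(), hashlib.sha256).hexdigest()
    return f"{user_id}:{tag}"


def verify_license(license_key: str) -> bool:
    """Client side: recompute the tag and compare in constant time.
    An adversary controlling the local binary can simply bypass this call."""
    user_id, _, tag = license_key.partition(":")
    expected = hmac.new(SECRET, user_id.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected)
```

The check works against casual sharing, but since enforcement runs on hardware the adversary controls, it cannot stop a motivated attacker — the core reason the paper favors cloud deployment.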

Cloud-based deployment is far more secure. Software-level use controls excel with cloud-based models: developers can grant various levels of access to end-users and easily restrict it on a case-by-case basis, and cloud environments make it easier to monitor how a model is used. Modification and reproduction controls are also more easily implemented in the cloud. At one end of the spectrum, developers could wholly restrict access to the model code and parameters while imposing access quotas to curb model stealing; at the other, they could give end-users complete access. Cloud-based deployment thus offers greater granularity and flexibility in controlling access to a given model.
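As an illustration of the restrictive end of that spectrum, a cloud API can debit a per-user query budget and return only the top-1 label instead of full confidence scores — two simple server-side measures (a hypothetical sketch, not the paper's design) that raise the cost of model-extraction attacks:

```python
def serve_prediction(scores, user_budget, user, daily_quota=1000):
    """Return only the top-1 label and debit the user's daily query budget.

    Withholding full confidence scores and capping query volume both limit
    the information an adversary can harvest per query when trying to
    reproduce (steal) the model through the API.
    """
    if user_budget.get(user, 0) >= daily_quota:
        return None  # quota exhausted: refuse further queries today
    user_budget[user] = user_budget.get(user, 0) + 1
    # Reveal only the index of the highest-scoring class, not the scores.
    top = max(range(len(scores)), key=lambda i: scores[i])
    return f"class_{top}"
```

Because all of this runs on the developer's servers, the end-user cannot patch it out, in contrast to the local-deployment controls discussed earlier.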

Other Considerations

The author points to an apparent weakness of SCA: it inherently pushes power into the hands of AI developers. It is therefore imperative to consider SCA as part of a larger governance strategy. Organizations could, for instance, use SCA to comply with future government regulations; when combined with effective policy, the centralization of power is of less concern.

Between the lines

Access control remains an essential question in AI safety. SCA is one potential method for ensuring the appropriate use of AI technologies. Ultimately, access control will likely take many forms, with SCA being one part of the overarching solution. Cloud-based SCA seems like a promising control method given its flexibility to address several access regimes. A crucial part of this control mechanism is the developers’ ability to collect and analyze use data. This may be a significant challenge in a scenario with potentially hundreds of thousands of end-users. Developers must understand how end-users are using their platform and be able to detect inappropriate behaviors (think fraud detection for AI models). If developers can accurately and quickly identify fraudulent behavior, SCA has significant potential, especially when paired with other effective governance interventions (e.g., policy). At present, access control appears to come at the cost of transparency; it is vital to implement a mechanism that treats the two as complementary.
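The “fraud detection for AI models” idea could start as simply as outlier detection on usage logs. The sketch below is purely illustrative (nothing like it appears in the paper): it flags users whose request volume is a statistical outlier, whereas a real system would score many behavioral signals, not just raw volume:

```python
from statistics import mean, stdev


def flag_anomalous_users(requests_per_user, z_threshold=3.0):
    """Flag users whose request count is a z-score outlier among all users.

    A toy stand-in for misuse detection at scale: with hundreds of thousands
    of end-users, automated triage like this is what makes per-user review
    tractable.
    """
    counts = list(requests_per_user.values())
    if len(counts) < 2:
        return []
    mu, sigma = mean(counts), stdev(counts)
    if sigma == 0:
        return []  # all users behave identically; nothing to flag
    return [user for user, count in requests_per_user.items()
            if (count - mu) / sigma > z_threshold]
```

Flagged accounts would then feed into the grant/revoke machinery of the access controls described earlier, closing the loop between monitoring and enforcement.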


© 2025 Montreal AI Ethics Institute. This work is licensed under a Creative Commons Attribution 4.0 International License.