🔬 Research Summary by Grace Wright, Business Development Manager at a technology start-up who has worked in research roles focused on the responsible and ethical development and use of AI and other emerging technologies.
[Original paper by Maroussia Lévesque]
Overview: The public sector can play an important role in governing artificial intelligence, acting directly or indirectly to help shape AI systems that benefit society. This paper examines the various tools and approaches policymakers can leverage to generate AI governance strategies that promote fairness and transparency. By exploring the benefits and drawbacks of these policy tools in detail, the author aims to generate conversation amongst policymakers about which approaches are best suited to creating strong frameworks for AI governance.
Introduction
Many private sector actors are sceptical or openly critical of the public sector's ability to effectively govern emerging technologies. Artificial intelligence in particular continues to be a topic of debate in this regard, especially concerning the roles of private and public sector actors in ensuring AI systems remain fair and beneficial to society.
This paper explores the role of the public sector and how policymakers can effectively influence AI governance through an examination of two categories of policy tools: more direct, traditional policy interventions, and tools that alter the role of the public sector by placing greater emphasis on procedural safeguards.
To provide these insights, the author examined multiple policy instruments in these two categories and assessed their utility for enhancing the fairness of AI systems. The options explored in this paper suggest that each tool has benefits and drawbacks, and given their varied forms and impact on generating more fair and transparent AI systems, multiple tools should be leveraged to create a robust approach to AI governance.
Key Insights
The Importance of Governing AI Systems
The author, Maroussia Lévesque, argues that fair AI systems can ultimately contribute to the public good. However, AI development is primarily industry-driven, and commercial interests are not always aligned with public interests. In this respect, policymakers have a unique opportunity to influence AI policies that seek to guide the development of fair AI systems that benefit society.
Lévesque notes that concerns over the accuracy and fairness of AI systems have been ongoing and persist in systems used today. Not only is there the challenge of false negatives and false positives, but racial and gender biases are also significant issues of concern. For example, AI systems that predict recidivism rates (i.e., how likely someone is to repeat a criminal offense) have been criticized for being racially biased, predicting higher levels of recidivism amongst Black individuals. This raises significant concerns about AI fairness and transparency, how these systems generate their outcomes, and consequently, how these results impact society. Examples like these underscore the urgency for policymakers to be involved in ensuring these systems are developed in ways that do not result in potentially discriminatory practices.
Options for Policy Intervention
Lévesque outlines multiple avenues that can be used to develop more robust frameworks for AI governance. In particular, the author explores options for redress using traditional, direct intervention and options that are more adaptive and “reinvent” the role of the public sector, emphasizing flexible procedural safeguards that can keep pace with technological change. Each of these options and their related policy tools are outlined below:
Redress
- Rights and liabilities: Strengthening protections to deter harmful behavior from the private sector and uphold individual rights to equality and non-discrimination. This may also include enhancing transparency from the private sector in cases of suspected discrimination, which could help address issues of transparency of AI systems and dissuade harmful behaviours.
- Command and control: Imposing penalties against companies for not following certain safeguards or engaging in harmful practices.
- Administrative oversight: Designating specialized agencies to provide oversight of AI systems concerning their uncertainty, complexity, transparency, and impact.
- Incentives: Providing tax credits for certification, debiasing training, and other practices based on advancing fairer and more transparent systems.
- Market-harnessing controls: Stimulating AI research and development that is driven by non-economic goals.
- Public infrastructure: Building public AI infrastructure to inform its values and development from inception.
- Mandatory disclosures: Compelling private sector actors to disclose performance-related metrics of their AI systems in a way that preserves proprietary information.
- Public compensation: Having companies dedicate a portion of revenues to compensation for harms caused by AI systems.
Adapt – Reinventing the role of public actors
- Checks and balances to counter industry dominance: Drawing on the principles of constitutionalism to have the AI innovation agenda driven by multiple interests rather than primarily by the private sector.
- Co-regulation: Drafting standards and regulations together – both the public and private sector, including through negotiated rule-making and alignment with industry standards. This includes implementing approaches similar to the EU AI Act or developing technical standards that reflect best practices.
While direct policy interventions can help redress bias, they are limited in their ability to define and regulate fairness effectively. The author therefore suggests that defining fairness should be left to those implementing AI systems, with some level of oversight from the public sector. More adaptive procedural safeguards, on the other hand, aim to cultivate accountability and integrity and are more favorable because they can better adapt to a complex and rapidly evolving technology space.
Lévesque notes that no one policy option is perfect – each has its drawbacks and should be viewed as part of an entire toolbox for policymakers to draw from. The harms of AI are varied, and so too should be the policy instruments used to address them if effective change is to be made.
Between the lines
The paper draws out some crucial points of consideration for regulating AI and emerging technologies more broadly. Firstly, regulation must be addressed from multiple angles with multiple policy instruments. Effective policy requires using the many tools at the public sector’s disposal, especially given the complex and evolving challenge of regulating emerging technologies. Secondly, more adaptable, principle-based approaches appear better suited to rapidly changing policy spaces because they provide the flexibility and collaboration needed to solve complex challenges.
While the author does make some strong arguments in favor of public sector involvement in AI governance, this paper raises some thought-provoking areas for further research and discussion. For example, the public sector is often criticized for being lethargic, providing reactionary responses to challenges that it may not understand well. Given the quickly evolving nature of technology and the concentration of technical expertise in the private sector, how can the public sector be better equipped to develop robust governance frameworks? Are there more effective avenues for public and private sector collaboration on these issues that have yet to be explored, and if so, what are some practical ways of moving forward to test and adopt those approaches?