🔬 Research Summary by Guru Vamsi Policharla, a computer science PhD student at UC Berkeley.
[Original paper by Sanjam Garg, Aarushi Goel, Somesh Jha, Saeed Mahloujifar, Mohammad Mahmoody, Guru-Vamsi Policharla, and Mingyuan Wang]
Overview: Upcoming regulation on AI, such as the EU AI Act, requires impact assessment and risk management to ensure fairness, accountability, and transparency for “high-risk” AI systems. In practice, this means companies must give unfettered access to a third-party auditor, who issues a “seal of approval” before the AI system is deployed. This often creates tension between companies trying to protect trade secrets and auditors who need “white-box” access to the data and models. This work examines how cryptography can help resolve this tension and provide stronger transparency guarantees to the end user.
Introduction
Machine learning is expected to aid critical real-world decisions related to healthcare, recruitment, legal proceedings, finance, and beyond. Yet, the specter of bias and unfairness looms over AI deployments, exacerbated by the lack of transparency in many proprietary models. In response to these issues, upcoming regulations like the EU AI Act aim to ensure fairness, accountability, and transparency for “high-risk” AI systems. Although the technical mechanisms to do so have not yet been specified, the high-level goal is to ensure that a) the training data is of high quality and sufficiently representative and b) the machine learning model is safe and accurate.
In this work, we investigate how cryptography, in particular zero-knowledge proofs, can ensure regulatory compliance while preserving the privacy of sensitive training data and machine learning models. This approach can also provide stronger guarantees than third-party auditing.
Key Insights
The need for auditing
Imagine you are building a model that decides the terms of a health insurance plan. The Affordable Care Act protects consumers by stipulating that the only factors that may be used to determine insurance premiums are location, age, tobacco use, plan category, and whether the plan covers dependents. Any ML model used in this decision-making process must be audited to comply with this regulation. Furthermore, merely excluding protected attributes such as gender and race during the ML training process is insufficient. Researchers have shown that household income, number of dependents, age, and location are often very good proxies for a person’s gender and race, and it is indeed possible to build prediction models that successfully infer such demographics. Consequently, the entire training process must be audited to ensure that no unfairness or bias is introduced into the ML model, either inadvertently or maliciously.
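As a toy illustration of this proxy effect, consider the synthetic sketch below (the data, feature distributions, and use of scikit-learn are purely illustrative assumptions, not an experiment from the paper): a simple classifier trained only on “innocent” features can recover a protected attribute with accuracy well above chance.

```python
# Synthetic, illustrative example (not from the paper): seemingly neutral
# features can act as proxies for a protected attribute.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical protected attribute that is never given to the insurance model.
group = rng.integers(0, 2, size=n)

# Proxy features whose distributions are (artificially) correlated with it.
income = rng.normal(50 + 15 * group, 10, size=n)      # in thousands of dollars
dependents = rng.poisson(1 + group, size=n)
location = rng.normal(group, 0.8, size=n)              # stylized location index

X = np.column_stack([income, dependents, location])
X_train, X_test, y_train, y_test = train_test_split(X, group, random_state=0)

# A plain logistic regression recovers the protected attribute from proxies alone.
clf = LogisticRegression(max_iter=1_000).fit(X_train, y_train)
print(f"accuracy inferring protected attribute: {clf.score(X_test, y_test):.2f}")
```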
Limitations of third-party auditing
The use of AI in high-risk situations must come with guarantees that the model is compliant with regulations and actually delivers on its promises. A naïve solution for this problem is to give an auditor unrestricted access to the dataset, training procedure, and the final model. The auditor can then a) inspect the dataset to ensure that it is “sufficiently representative,” b) check that the objective functions and training procedures do not inadvertently lead to unfair outcomes, and c) query the final model to confirm its accuracy. The obvious issue here is that companies would be unwilling to provide unfettered access to their models and data, even to an auditor. Even if they were willing to do so, auditors have typically been for-profit organizations, paid by the companies they are meant to audit, thereby creating a serious conflict of interest.
Our Approach
Our work on “Experimenting with Zero-Knowledge Proofs of Training” takes a first step in this direction. By carefully designing zero-knowledge proofs tailored to the task at hand, we show that it is feasible for machine learning model owners to prove that:
- their training datasets satisfy various statistical properties
- their machine learning model was indeed trained on this dataset
- all queries made to the service provider by clients were answered according to the certified machine learning model
without revealing any additional information; a sketch of what such an interface might look like is given below.
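The following skeleton makes the three statements above concrete. It is a hypothetical interface, not the paper’s actual API: the names (DatasetCommitment, prove_training, and so on) and the commit-then-prove structure are assumptions chosen for illustration.

```python
# Hypothetical interface for zero-knowledge proofs of training (illustrative only;
# names and structure are not taken from the paper).
from dataclasses import dataclass

@dataclass
class DatasetCommitment:
    digest: bytes  # binding commitment to the private training data

@dataclass
class ModelCommitment:
    digest: bytes  # binding commitment to the private trained weights

class ModelOwner:
    def commit(self, dataset, model) -> tuple[DatasetCommitment, ModelCommitment]:
        """Publish commitments; the data and weights themselves stay private."""
        raise NotImplementedError

    def prove_dataset_property(self, dataset, predicate) -> bytes:
        """ZK proof that the committed dataset satisfies a statistical predicate,
        e.g. that every demographic group is represented above a threshold."""
        raise NotImplementedError

    def prove_training(self, dataset, model, training_config) -> bytes:
        """ZK proof that the committed model results from running the agreed
        training procedure on the committed dataset."""
        raise NotImplementedError

    def prove_inference(self, model, query, answer) -> bytes:
        """ZK proof that `answer` was produced by the certified (committed) model
        on the client's `query`."""
        raise NotImplementedError

class Auditor:
    def verify(self, commitments, statement, proof) -> bool:
        """Check any of the proofs above against the public commitments only."""
        raise NotImplementedError
```

An auditor or regulator interacting with such an interface sees only commitments, statements, and proofs, which is what makes the access effectively black-box.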
It is worth noting that the above framework is powerful enough to emulate white-box access to machine learning models and datasets while remaining completely black-box. This allows regulators and auditors to clearly specify the questions they want answered and the guarantees they wish to see from AI systems, even without any knowledge of cryptography.
Our work supports the feasibility of balancing privacy and accountability in machine learning. At the same time, it is only a first step, with substantial room for improvement in reducing the cryptographic overheads of the system and in scaling up to models beyond logistic regression.
Between the lines
Our main goal was to reduce the computational overhead an ML model owner pays to create these zero-knowledge proofs on top of training the machine learning model. There has been tremendous progress in improving the performance of zero-knowledge proof systems, particularly succinct zero-knowledge proofs (zkSNARKs), where the verifier time and proof size can be quite small. However, the proving costs of zkSNARKs remain prohibitively high for a circuit as massive as the one arising from machine learning training. An alternative proof system that achieves concretely much lower prover costs is MPC-in-the-Head (MPCitH). Unfortunately, MPCitH results in proof sizes and verifier times proportional to the size of the circuit being proved.
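To give a feel for the MPCitH paradigm, here is a toy proof for a linear statement (knowledge of w with A·w = y mod p). It is a deliberately simplified sketch, not the paper’s construction: the statement, the 3-party additive sharing, and the single Fiat-Shamir challenge are all assumptions made for illustration. It shows the structural reason MPCitH provers are cheap while proofs grow with the circuit: the prover simulates virtual parties, commits to each party’s view, and opens all but one.

```python
# Toy MPC-in-the-Head proof (illustrative, NOT the paper's construction) for the
# statement "I know w such that A @ w = y (mod p)".
import hashlib, secrets

P = 2**61 - 1          # a prime modulus (Mersenne, chosen for convenience)
N_PARTIES = 3

def commit(data: bytes, salt: bytes) -> bytes:
    return hashlib.sha256(salt + data).digest()

def share(value: int) -> list[int]:
    """Additive 3-out-of-3 secret sharing over Z_p."""
    s = [secrets.randbelow(P) for _ in range(N_PARTIES - 1)]
    s.append((value - sum(s)) % P)
    return s

def prove(A, y, w):
    # Each virtual party gets additive shares of every witness coordinate.
    shares = [share(wi) for wi in w]              # shares[j][i]: party i's share of w[j]
    views = []
    for i in range(N_PARTIES):
        w_i = [shares[j][i] for j in range(len(w))]
        # Party i's "broadcast": its additive share of the circuit output A @ w.
        out_i = [sum(a * w_i[k] for k, a in enumerate(row)) % P for row in A]
        views.append((w_i, out_i))
    salts = [secrets.token_bytes(16) for _ in range(N_PARTIES)]
    comms = [commit(repr(v).encode(), s) for v, s in zip(views, salts)]
    # Fiat-Shamir: the challenge picks one virtual party whose view stays hidden.
    hidden = int.from_bytes(hashlib.sha256(b"".join(comms)).digest(), "big") % N_PARTIES
    opened = [(i, views[i], salts[i]) for i in range(N_PARTIES) if i != hidden]
    broadcasts = [v[1] for v in views]            # all output shares are public
    return comms, broadcasts, opened

def verify(A, y, proof):
    comms, broadcasts, opened = proof
    hidden = int.from_bytes(hashlib.sha256(b"".join(comms)).digest(), "big") % N_PARTIES
    for i, view, salt in opened:
        if i == hidden or commit(repr(view).encode(), salt) != comms[i]:
            return False
        w_i, out_i = view
        recomputed = [sum(a * w_i[k] for k, a in enumerate(row)) % P for row in A]
        if recomputed != out_i or broadcasts[i] != out_i:
            return False
    # The output shares of all parties must reconstruct the public output y.
    return all(sum(b[j] for b in broadcasts) % P == y[j] for j in range(len(y)))

# Example: prove knowledge of w with A @ w = y (mod p) without revealing w.
A = [[3, 1, 4], [1, 5, 9]]
w = [7, 2, 6]
y = [sum(a * b for a, b in zip(row, w)) % P for row in A]
print(verify(A, y, prove(A, y, w)))               # -> True
```

A real system repeats this many times (a single challenge catches a cheating prover only with probability 1/3) and handles multiplication gates; the committed views then grow with the circuit, which is exactly why plain MPCitH proofs scale with circuit size.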
To balance prover time, verifier time, and proof size, we carefully combine MPCitH with techniques from the succinct proof literature. At a high level, we compress the largest part of the circuit (the training data) using polynomial commitments and emulate all operations on the data using queries to these commitments. This allows us to shrink the proof size to under 10% of the training data size, and the overhead to roughly 4000x, when proving the training of a logistic regression model on a 4 GB dataset. Our approach is also streaming-friendly and can be used to prove training on even larger datasets without running into memory issues. Finally, we believe that our techniques offer a promising direction to scale up to even more complex models used in the real world.
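The sketch below illustrates the commit-and-query idea in its simplest form. It is an assumption-laden stand-in: a Merkle tree plays the role of the polynomial commitment used in the paper, and the helper names are hypothetical. The point it captures is that the prover commits once to the large training dataset, and every subsequent proof only needs to open the few entries it actually touches.

```python
# Illustrative commit-and-query sketch: a Merkle tree stands in for the paper's
# polynomial commitment. Commit once to the dataset; open individual entries
# on demand with short authentication paths.
import hashlib

def H(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

def build_tree(leaves: list[bytes]) -> list[list[bytes]]:
    level = [H(leaf) for leaf in leaves]
    tree = [level]
    while len(level) > 1:
        if len(level) % 2:                       # duplicate last node on odd levels
            level = level + [level[-1]]
        level = [H(level[i], level[i + 1]) for i in range(0, len(level), 2)]
        tree.append(level)
    return tree

def open_at(tree, index: int) -> list[bytes]:
    """Authentication path for one dataset entry (size O(log n))."""
    path = []
    for level in tree[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        path.append(level[index ^ 1])
        index //= 2
    return path

def verify_opening(root: bytes, leaf: bytes, index: int, path: list[bytes]) -> bool:
    node = H(leaf)
    for sibling in path:
        node = H(node, sibling) if index % 2 == 0 else H(sibling, node)
        index //= 2
    return node == root

# A large dataset would be committed once; proofs then open only queried rows.
dataset = [f"row-{i}".encode() for i in range(8)]
tree = build_tree(dataset)
root = tree[-1][0]
proof = open_at(tree, 5)
print(verify_opening(root, dataset[5], 5, proof))  # -> True
```

In this toy, the commitment is computed once and each opening proof has size logarithmic in the number of dataset entries, which is the behavior that keeps proofs small relative to the data they are about.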