
The Larger The Fairer? Small Neural Networks Can Achieve Fairness for Edge Devices

October 30, 2022

Summary contributed by Yi Sheng, a Ph.D. student at George Mason University, advised by Weiwen Jiang, and interested in software and hardware co-design, AutoML, and dermatology diagnosis.

[Original paper by Yi Sheng, Junhuan Yang, Yawen Wu, Kevin Mao, Yiyu Shi, Jingtong Hu, Weiwen Jiang, Lei Yang]


Overview: As AI democratization progresses, neural networks are increasingly deployed on edge devices for a wide range of applications, and fairness concerns are emerging in areas such as face recognition and mobile medical applications. One fundamental question arises: what is the fairest neural architecture for edge devices? To address this challenge, a new work proposes a novel framework, FaHaNa.


Introduction

Many research efforts have addressed fairness issues in AI applications; however, they focus either on the model side, modifying neural networks to be fairer, or on fairness-aware data collection. While this can mitigate unfairness, a blind spot remains: on edge devices, neural networks must be small enough to fit within limited computation power and memory/storage. A new paper by Yi Sheng and co-authors seeks to solve this problem. By examining existing networks, the authors observe that larger neural networks are typically fairer. Can a model remain highly fair while its size is kept under control? The paper proposes a neural architecture search (NAS) framework that searches for neural networks with balanced fairness and accuracy while guaranteeing that hardware specifications are met. The resulting framework, FaHaNa, is shown to achieve high fairness and accuracy on a dermatology dataset.
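For readers who want the idea of an unfairness score made concrete, here is a minimal sketch of one common group-fairness measure, the accuracy gap between demographic groups. The paper's exact metric is not reproduced in this summary; the function below is an illustrative assumption, not the authors' code.

```python
import numpy as np

def unfairness_score(preds, labels, groups):
    """Accuracy gap across demographic groups; lower means fairer.

    preds, labels, and groups are equal-length 1-D arrays; `groups`
    holds a group id per sample (e.g., 0 = light skin, 1 = dark skin).
    """
    per_group_acc = [(preds[groups == g] == labels[groups == g]).mean()
                     for g in np.unique(groups)]
    return max(per_group_acc) - min(per_group_acc)
```

Under a measure like this, a model that is 95% accurate on one skin-tone group but 80% accurate on another scores 0.15, regardless of its overall accuracy.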

Key Insights

Motivation and observation 

  • Existing neural networks, including MobileNet, MnasNet, ProxylessNAS, and ResNet, are shown to carry non-trivial unfairness scores: they exhibit a fairness gap between light skin tones (the majority group) and dark skin tones (the minority group). At the same time, the data carry an inherent imbalance, since data from minority groups may be hard to collect for practical reasons. This result motivates the search for a more equitable model.
  • Fairness, accuracy, and hardware efficiency are equally crucial in edge AI applications such as medical AI; losing any one of them renders an architecture unusable. Existing networks either sacrifice accuracy (e.g., SqueezeNet) or are too large (e.g., MobileNet).
  • The intermediate features of different groups diverge as they pass through the network's layers: the front layers have little impact on fairness, while the tail layers affect it much more. The NAS therefore freezes the head and concentrates its search on the tail (a minimal sketch of head freezing follows this list).
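To illustrate the freezing idea, here is a minimal PyTorch sketch; the head/tail split, layer sizes, and class count below are hypothetical stand-ins, not the paper's actual backbone.

```python
import torch.nn as nn

# Hypothetical backbone: a fixed head (early layers) and a searchable tail.
head = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
)
tail = nn.Sequential(
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 7),  # e.g., 7 skin-disease classes (illustrative)
)

# Freeze the head so search and training focus on the tail,
# where the layers influence fairness the most.
for p in head.parameters():
    p.requires_grad = False

model = nn.Sequential(head, tail)
```

Because the frozen parameters drop out of both the gradient updates and the search space, freezing also shrinks the cost of each search step, which matches the acceleration claim discussed below.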

Framework 

The FaHaNa framework consists of a recurrent neural network (RNN) based controller, a block-based search space, a backbone architecture producer, a performance evaluator, and a trainer. The controller guides the optimization process: from the block-based search space, it selects an option for each searchable block in the backbone architecture to form a child network. The child network is then sent to the trainer to learn its weights while, in parallel, the evaluator measures its latency on the given hardware. Finally, a reward is generated to update the RNN in the controller.
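Schematically, this loop resembles standard reinforcement-learning-based NAS. The sketch below is written under that assumption; every interface name in it (sample, instantiate, fit, latency, reward_fn) is hypothetical rather than taken from the paper's code.

```python
def search(controller, backbone, trainer, evaluator, reward_fn, steps=1000):
    """Schematic RNN-controller search loop; interfaces are hypothetical."""
    for _ in range(steps):
        choices = controller.sample()           # RNN picks an option per block
        child = backbone.instantiate(choices)   # form the child network
        acc, fair = trainer.fit(child)          # trainer learns the weights
        ms = evaluator.latency(child)           # measured on target hardware
        # One scalar reward balances accuracy and fairness and penalizes
        # children that violate the latency specification.
        controller.update(reward_fn(acc, fair, ms))  # policy-gradient step
```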

Experimental results

FaHaNa, geared to run on mobile and edge devices, is evaluated on a dermatology dataset for diagnosing skin diseases. Two edge devices, a Raspberry Pi and an Odroid XU-4, serve as the testbed. Results show that FaHaNa improves fairness without compromising accuracy, while also reducing model size.
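The summary does not describe how latency was collected on these boards; the snippet below is a rough sketch of how one might time CPU inference on such devices, with illustrative input shape and run counts.

```python
import time
import torch

def mean_latency_ms(model, input_shape=(1, 3, 224, 224), runs=50):
    """Rough average CPU inference latency in milliseconds."""
    model.eval()
    x = torch.randn(input_shape)
    with torch.no_grad():
        for _ in range(5):      # warm-up runs to stabilize caches/clocks
            model(x)
        start = time.perf_counter()
        for _ in range(runs):
            model(x)
    return (time.perf_counter() - start) / runs * 1e3
```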

FaHaNa-fair is the fairest architecture in the FaHaNa series. FaHaNa-small is the smallest network that is suitable for edge devices.

Between the Lines

The findings of this research present some exciting takeaways. Fairness between light and dark skin tones is an essential concern in AI democratization, and FaHaNa integrates fairness into NAS for the first time to design fairer neural architectures. In addition, the proposed freezing method accelerates the NAS process. As a result, FaHaNa identifies a series of neural architectures that form a much better Pareto frontier on accuracy, fairness, and model size than existing networks. As a next step, it would be highly beneficial for researchers to pay more attention to architectural fairness.
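To make the Pareto-frontier comparison concrete, here is a small sketch of the dominance filter one might apply to candidate architectures; the objective convention (larger is better, so model size enters negated) and the sample numbers are illustrative, not the paper's results.

```python
def pareto_front(candidates):
    """Keep candidates not dominated on all objectives (larger = better).

    Each candidate is a tuple of objectives, e.g.
    (accuracy, fairness_score, -model_size_mb).
    """
    def dominates(a, b):
        return all(x >= y for x, y in zip(a, b)) and a != b

    return [c for c in candidates
            if not any(dominates(other, c) for other in candidates)]

# Illustrative numbers only:
cands = [(0.91, 0.80, -2.1), (0.89, 0.86, -1.4), (0.88, 0.78, -3.0)]
print(pareto_front(cands))  # -> the two non-dominated architectures
```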
