Montreal AI Ethics Institute

Democratizing AI ethics literacy


The Larger The Fairer? Small Neural Networks Can Achieve Fairness for Edge Devices

October 30, 2022

Summary contributed by Yi Sheng, a Ph.D. student at George Mason University, advised by Weiwen Jiang, and interested in software and hardware co-design, AutoML, and dermatology diagnosis.

[Original paper by Yi Sheng, Junhuan Yang, Yawen Wu, Kevin Mao, Yiyu Shi, Jingtong Hu, Weiwen Jiang, Lei Yang]


Overview: As AI democratization progresses, neural networks are increasingly deployed on edge devices for a wide range of applications. Fairness concerns consequently emerge in applications such as face recognition and mobile medical diagnosis. One fundamental question arises: what is the fairest neural architecture for edge devices? To address this challenge, a new work proposes a novel framework, FaHaNa.


Introduction

Many research efforts have addressed fairness issues in AI applications. However, they focus either on model interpretability, modifying neural network models to be fairer, or on fairness-aware data collection. While these approaches can mitigate unfairness, a blind spot remains: on edge devices, neural networks must be small enough to fit limited computation power and memory/storage space. A new paper by Yi Sheng and co-authors seeks to solve this problem. By examining existing networks, the authors observe that larger neural networks are typically fairer. Is it possible, then, for a model to be both very fair and small? The paper uses a new neural architecture search (NAS) framework to search for neural networks that balance fairness and accuracy while guaranteeing that hardware specifications are met. The resulting framework, FaHaNa, is shown to achieve high fairness and accuracy on a dermatology dataset.

Key Insights

Motivation and observation 

  • Existing neural networks, including MobileNet, MnasNet, ProxylessNAS, and ResNet, all exhibit measurable unfairness scores: they favour light skin tones (the majority group) over dark skin tones (the minority group). At the same time, there is an inherent data imbalance, since samples from minority groups may be harder to collect for practical reasons. This result motivates the search for a more equitable model.
  • Fairness, accuracy, and hardware efficiency are equally crucial in edge AI applications such as medical AI; losing any one of them renders an architecture unusable. Existing networks have either accuracy issues (SqueezeNet) or size issues (MobileNet).
  • The intermediate features of different groups diverge as they pass through the layers of a neural network: the front layers have little impact on fairness, while the layers toward the tail affect it much more. NAS therefore concentrates the search on the tail and freezes the head.
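To make the first observation concrete, one common way to quantify an unfairness score is the accuracy gap between sensitive groups. The sketch below is illustrative, not the paper's exact formulation; the toy data and group labels are assumptions.

```python
# Hedged sketch: group accuracy disparity as an unfairness score.
# The metric and toy data are illustrative, not the paper's exact setup.

def group_accuracies(preds, labels, groups):
    """Return {group: accuracy} for each sensitive group."""
    totals, correct = {}, {}
    for p, y, g in zip(preds, labels, groups):
        totals[g] = totals.get(g, 0) + 1
        correct[g] = correct.get(g, 0) + (p == y)
    return {g: correct[g] / totals[g] for g in totals}

def unfairness(preds, labels, groups):
    """Largest accuracy gap between any two groups (0 = perfectly fair)."""
    accs = group_accuracies(preds, labels, groups).values()
    return max(accs) - min(accs)

# Toy example: the model is right 3/4 of the time on "light" samples
# but only 2/4 of the time on "dark" samples.
preds  = [1, 0, 1, 1, 0, 1, 1, 0]
labels = [1, 0, 1, 0, 1, 1, 0, 0]
groups = ["light"] * 4 + ["dark"] * 4
print(unfairness(preds, labels, groups))  # 0.25
```

A model with equal accuracy across groups scores 0 on this measure; the larger the gap, the less fair the model.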

Framework 

The FaHaNa framework has five components: a recurrent neural network (RNN) based controller, a block-based search space, a backbone architecture producer, a performance evaluator, and a trainer. The controller guides the optimization process: from the block-based search space, it selects searchable blocks in the backbone architecture to form a neural network (the child network). The child network is then sent to the trainer to learn its weights, while the evaluator measures its latency on the given hardware. Finally, a reward is generated to update the RNN in the controller.
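The control flow of that loop can be sketched as follows. This is a simplified stand-in, not the paper's implementation: the RNN controller, trainer, and hardware evaluator are replaced with random sampling and synthetic scores, and all names, weights, and thresholds are illustrative assumptions. Only the sample–train–evaluate–reward structure follows the description above.

```python
# Hedged sketch of a FaHaNa-style search loop. Toy stand-ins replace the
# real controller/trainer/evaluator; only the control flow is faithful.
import random

SEARCH_SPACE = {"kernel": [3, 5, 7], "width": [16, 32, 64], "depth": [2, 3, 4]}

def sample_child():
    """Stand-in for the controller choosing searchable blocks."""
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def train_and_score(arch):
    """Stand-in for the trainer: returns synthetic (accuracy, fairness)."""
    acc = 0.5 + 0.004 * arch["width"] / arch["kernel"]
    fair = 0.6 + 0.05 * arch["depth"]
    return min(acc, 1.0), min(fair, 1.0)

def latency_ms(arch):
    """Stand-in for the hardware evaluator on the target device."""
    return arch["depth"] * arch["width"] * arch["kernel"] / 100

def reward(arch, max_latency=15.0):
    """Reward balancing accuracy and fairness under a hard latency cap."""
    acc, fair = train_and_score(arch)
    if latency_ms(arch) > max_latency:   # hardware spec must be met
        return -1.0
    return 0.5 * acc + 0.5 * fair

random.seed(0)
best = max((sample_child() for _ in range(50)), key=reward)
print(best)
```

In the real framework the reward updates the RNN controller via reinforcement learning rather than driving a random search, but the feedback signal plays the same role.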

Experimental results

FaHaNa, geared to run on mobile-class hardware, is evaluated on a dermatology dataset for diagnosing dermatological diseases, with two edge devices, a Raspberry Pi and an Odroid XU-4, serving as the testbed. Results show that FaHaNa improves fairness without compromising accuracy, while also reducing model size.

FaHaNa-fair is the fairest architecture in the FaHaNa series. FaHaNa-small is the smallest network that is suitable for edge devices.

Between the Lines

The findings of this research present some exciting takeaways. Fairness across skin tones is an essential consideration in AI democratization. FaHaNa integrates fairness into NAS for the first time to design fairer neural architectures, and it proposes a layer-freezing method to accelerate the search process. As a result, FaHaNa identifies a series of neural architectures forming a much better Pareto frontier over accuracy, fairness, and model size than existing neural architectures. In terms of next steps, it would be highly beneficial for researchers to pay more attention to architectural fairness.
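The Pareto-frontier comparison can be illustrated with a small sketch: an architecture is on the frontier if no other candidate is at least as good on all three axes and strictly better on one. The candidate numbers below are made up for illustration and are not the paper's results.

```python
# Hedged sketch: Pareto filtering over (accuracy, fairness, model size).
# Accuracy and fairness are maximized; size (MB) is minimized.

def dominates(a, b):
    """True if a is no worse than b on every axis and better on at least one."""
    no_worse = (a["acc"] >= b["acc"] and a["fair"] >= b["fair"]
                and a["size"] <= b["size"])
    better = (a["acc"] > b["acc"] or a["fair"] > b["fair"]
              or a["size"] < b["size"])
    return no_worse and better

def pareto_front(candidates):
    """Keep only candidates not dominated by any other candidate."""
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates if o is not c)]

candidates = [
    {"name": "A", "acc": 0.90, "fair": 0.80, "size": 12.0},
    {"name": "B", "acc": 0.88, "fair": 0.85, "size": 6.0},
    {"name": "C", "acc": 0.85, "fair": 0.70, "size": 9.0},  # dominated by B
]
print([c["name"] for c in pareto_front(candidates)])  # ['A', 'B']
```

A NAS method that pushes this frontier outward, as FaHaNa is reported to do, offers strictly better trade-offs than the architectures it is compared against.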


