Montreal AI Ethics Institute

Democratizing AI ethics literacy

Research Summary: Trust and Transparency in Contact Tracing Applications

June 28, 2020

Summary contributed by Allison Cohen, a researcher at MAIEI, consultant at AI Global, and previously an AI Strategy Consultant at Deloitte.

*Author & link to original paper at the bottom.


In a matter of days, a contact tracing application will be deployed in Ontario. An estimated 50–60% of the population must use the app for it to work as intended: warning individuals that they have been exposed to COVID-19. But how much do we really know about this technology? Automatic contact tracing can certainly be more accurate, efficient, and comprehensive than manual contact tracing at identifying and notifying individuals who have been exposed to the virus; but what are the trade-offs of this solution? To guide our thinking, the authors of “Trust and Transparency in Contact Tracing Applications” have developed FactSheets, a list of questions users should consider before downloading a contact tracing application.

According to the article, users should begin by asking which technology the app uses to track their location. If the app uses GPS, it identifies a user’s geographical location and pairs that data with a timestamp. In terms of efficacy, GPS is impeded when users are indoors or in a multi-storey building (e.g. an apartment building). Bluetooth, on the other hand, establishes contact events through proximity detection: devices register when they are near each other rather than where they are.

However, Bluetooth’s signal strength can be distorted by the orientation of the device, absorption of the signal by the human body, interference from other radio signals, and obstructions in buildings and trains. Neither GPS nor Bluetooth captures variables such as ventilation or the use of masks and gloves, which also affect the likelihood of transmission. Moreover, both technologies assume that the device belongs to a single individual and stays with them at all times; when either assumption fails, the app can falsely determine exposure.
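To make the Bluetooth limitations above concrete: proximity detection typically converts received signal strength (RSSI) into a rough distance estimate. A minimal sketch, assuming a log-distance path-loss model; the function names, calibration values, and the 2-metre threshold are illustrative assumptions, not details from the paper or any specific app:

```python
def estimate_distance_m(rssi_dbm: float,
                        tx_power_dbm: float = -59.0,
                        path_loss_exponent: float = 2.0) -> float:
    """Estimate distance in metres from RSSI via a log-distance path-loss model.

    tx_power_dbm: expected RSSI at 1 m (a device-specific calibration value).
    path_loss_exponent: ~2.0 in free space; effectively higher when the body
    absorbs the signal or obstacles attenuate it, which is why these
    estimates degrade indoors, on trains, and in crowds.
    """
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))


def is_contact_event(rssi_dbm: float, threshold_m: float = 2.0) -> bool:
    """Flag a 'contact event' when the estimated distance is within threshold."""
    return estimate_distance_m(rssi_dbm) <= threshold_m
```

Because attenuation by a body or a wall lowers the RSSI just as extra distance does, a person two metres away behind glass can look identical to a person ten metres away in open air: exactly the ambiguity the authors flag.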

In addition to accuracy concerns, users should consider:

  • Privacy: sensitive data users are asked to share with the application (health status, location details, social interactions, name, gender, age, health history)
  • Security: the vulnerability of the application to attack
  • Coverage: the number of users that will opt into the use of the application
  • Accessibility: whether the technology is accessible to the entire population (consider that 47% of people aged 65 and older do not have smartphones)
  • Accuracy: whether the limitations of Bluetooth and GPS location tracking will undermine the accuracy of the app
  • Asynchronous Contact Events: whether the app will capture risk of exposure from transmission in circumstances other than proximity to others (e.g. infected surfaces)
  • Device Impacts: the app’s impact on users’ devices (battery life, etc.)
  • Ability: users’ capacity to use the app as intended
  • Interoperability: whether contact tracing applications downloaded by the rest of the population can work with one another
  • Reluctance in Disclosure: whether users will submit information about their positive COVID-19 diagnosis

Check out FactSheets to obtain further details users should consider before downloading a contact tracing application.


Original paper by Stacy Hobson, Michael Hind, Aleksandra Mojsilović, and Kush R. Varshney: https://arxiv.org/pdf/2006.11356.pdf

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.
