Montreal AI Ethics Institute

Democratizing AI ethics literacy


Your wrist, your data, their access: Are you trading convenience for control?

January 5, 2026

✍️By Kennedy O’Neil from Encode Canada.

Kennedy is an undergraduate sociology student at McGill University, with a minor in psychology, and a writer for Encode Canada.


📌 Editor’s Note: This piece is part of our Recess series, featuring university students from Encode’s Canadian chapter at McGill University. The series aims to promote insights from university students on current issues in the AI ethics space. In this article, Kennedy O’Neil analyzes AI wearables and the ethical grey areas that arise regarding privacy and data collection.

Photo credit: Mudit Jain on Unsplash.


Earlier this month, I heard my front door creak open at an unusually late hour. Thirty minutes later, it creaked again as my roommate tiptoed back inside. The next morning, she laughed when I asked where she’d gone, explaining that she hadn’t met her activity goal for the day. Despite it being -10° Celsius outside, she couldn’t stand ending the day with her Apple Watch showing that she hadn’t moved enough. 

With approximately 281 million units sold since the Apple Watch’s debut in 2015, Apple has become a frontrunner in digital health tracking. By capitalizing on the watch’s seamless integration with the rest of the Apple ecosystem, the company has woven the device into aspects of users’ lives well beyond fitness. From texting capabilities to auto-generated messages, or “nudges,” that encourage movement, the Apple Watch represents a breakthrough in just how personal a device can be. This consolidation of functions into a single device is especially significant for Gen Z, whose members have grown up perceiving these watches as extensions of their identity rather than as mere fitness tools.

Wearable technologies such as Apple Watches use built-in sensors to collect data on a wide range of physiological metrics and translate it into health insights that can prompt changes in behaviour. This data is stored on the device and synced to a connected app or website. Through these cloud-based platforms, artificial intelligence (AI) programs streamline the interpretation of that information and generate personalized health assessments, drawing connections between data patterns and presenting user-friendly outputs such as sleep quality reports and recommendations (Zhang et al., 2020). Deep learning networks, AI systems designed to extract distinctive features from large datasets, are also used to scan the collected data for anomalies, improving the odds of detecting, and ultimately preventing, health risks early.
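The anomaly-scanning step described above can be illustrated with a much simpler stand-in. Commercial wearables rely on proprietary deep-learning models, but a rolling z-score against a personal baseline captures the same core idea: flagging readings that deviate sharply from one’s own history. The `flag_anomalies` function and the heart-rate numbers below are hypothetical illustrations, not Apple’s actual method.

```python
import statistics

# Illustrative sketch only: flag readings (e.g., resting heart rate
# in bpm) that deviate sharply from a rolling personal baseline.
# Real wearables use far richer models than this z-score heuristic.

def flag_anomalies(readings, window=7, threshold=3.0):
    """Return indices of readings whose z-score against the
    previous `window` readings exceeds `threshold`."""
    anomalies = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mean = statistics.mean(baseline)
        stdev = statistics.pstdev(baseline)
        if stdev == 0:  # flat baseline: no meaningful z-score
            continue
        z = abs(readings[i] - mean) / stdev
        if z > threshold:
            anomalies.append(i)
    return anomalies

# A week of typical resting rates, then one sudden spike:
week = [62, 64, 61, 63, 65, 62, 63, 95]
print(flag_anomalies(week))  # → [7]: the spike is flagged
```

The privacy point follows directly: even this toy version requires a continuous stream of intimate physiological data to build the baseline, which is exactly the data whose storage and sharing the rest of this piece questions.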

The integration of AI systems into wearable health technologies provides a valuable opportunity for improved health monitoring, but also poses serious privacy concerns, including unauthorized data sharing, data breaches, and de-anonymization of sensitive personal information. Therefore, it requires more comprehensive oversight and regulation than what is currently provided under Canada’s federal privacy law, the Personal Information Protection and Electronic Documents Act (PIPEDA).

The collection of data through these AI-based wearables raises ethical grey areas, including the risk of data breaches by hackers or advertisers who might exploit sensitive health information for malicious, strategic, or financial purposes. Individuals’ health data, including movement and biometric information, is vulnerable to being sold without their permission or awareness. We have seen examples of this ethical concern on a large scale, notably with Google’s acquisition of Fitbit, as users’ profiles on the two platforms were combined, creating a nearly complete picture of their private health and personal information. Many users fear that this comprehensive profile could enable unauthorized data sharing and targeted advertising (Radanliev, 2025). Similar inferences can be drawn from the integration of AI into Apple Watches, which offers Apple access to information about many facets of our lives.

It is generally agreed that safeguarding privacy and anonymity and regulating health data are vital to the success of AI integration in healthcare and personal health practices. In Canada, personal data is currently governed by the aforementioned Personal Information Protection and Electronic Documents Act (PIPEDA). Despite the importance of this regulatory framework, PIPEDA is outdated and does not adequately encompass the necessary regulation of AI. Additionally, PIPEDA lacks specificity regarding wearable health technology, particularly because it falls outside of the standard public healthcare system (Murdoch et al., 2022). Despite widespread calls for a federal bill regulating AI systems specifically, no such regulation has been adopted (Attard-Frost, 2025). One example of such a bill is the Artificial Intelligence and Data Act (AIDA), which was proposed as part of Bill C-27 to establish a comprehensive, risk-based approach to regulating AI systems at various levels. AIDA was designed to align with the definitions and concepts outlined in the EU’s AI Act, fostering international cohesion and enhanced understanding, but was deeply contested and ultimately shelved during prorogation. To mitigate privacy concerns, AIDA or an improved version thereof must be adopted, with increased detail on the government’s plan to incorporate private technology, public health, and AI-based consumer applications. These protections are critical for Gen Z, who are the earliest users of wearable tech and therefore accumulate years of personal data far sooner in life than previous generations did.

Beyond policy solutions, we must take personal steps to mitigate the issue. As young people, we place a tremendous amount of trust in the data that technology collects and regurgitates back to us. Although your Apple Watch can support your fitness goals and so much more, it is vital that you remain aware of who might be accessing your personal information. If you own an Apple Watch or a similar device, which you are virtually never separated from, I implore you to research its privacy policy. Do you know how your data is being used? Do you care? Do you know who is advocating for your right to privacy? It is up to you to consider these questions. 

Overall, Apple Watches have evolved into powerful AI-driven tools that know more about us than we might expect. For all their convenience and health insights, these devices carry privacy risks precisely because they know us so intimately. As the use of these devices and the discourse around this topic grow, our federal regulations must evolve in tandem. Looking ahead, our understanding of digital privacy and autonomy over our own health data will become increasingly essential, particularly for young people. As we continue to empower these devices by integrating them into our routines, we must also remain curious about how our data is being collected, stored, and used. The privacy of our data is not just a question for policy-makers and corporate CEOs; it concerns all of us.


References

Asimakopoulos, G., Asimakopoulos, S., & Spillers, F. (2024). “It tracks me!”: An analysis of Apple Watch nudging and user adoption mechanisms. Health Informatics Journal, 30(4). https://doi.org/10.1177/14604582241291405

Attard-Frost, B. (2025, January 17). The death of Canada’s Artificial Intelligence and Data Act: What happened, and what’s next for AI regulation in Canada? Montreal AI Ethics Institute. https://montrealethics.ai/the-death-of-canadas-artificial-intelligence-and-data-act-what-happened-and-whats-next-for-ai-regulation-in-canada

BBC News. (2021, January 14). Google tries to allay Fitbit-deal privacy fears. https://www.bbc.com/news/technology-55662659

Busch, F., Geis, R., Wang, Y.-C., Kather, J. N., Al Khori, N., Makowski, M. R., Kolawole, I. K., Truhn, D., Clements, W., Gilbert, S., Adams, L. C., Ortiz-Prado, E., & Bressem, K. K. (2025). AI regulation in healthcare around the world: What is the status quo? medRxiv. https://doi.org/10.1101/2025.01.25.25321061. Awaiting peer review.

Boldi, A., Silacci, A., Boldi, M.-O., Cherubini, M., Caon, M., Zufferey, N., Huguenin, K., & Rapp, A. (2024). Exploring the impact of commercial wearable activity trackers on body awareness and body representations: A mixed-methods study on self-tracking. Computers in Human Behavior, 151, 108036. https://doi.org/10.1016/j.chb.2023.108036

Zhang, C., Shahriar, H., & Riad, A. B. M. K. (2020). Security and privacy analysis of wearable health device. 2020 IEEE 44th Annual Computers, Software, and Applications Conference (COMPSAC), 1767–1772. https://doi.org/10.1109/COMPSAC48688.2020.00044

Doherty, C., Baldwin, M., Lambe, R., et al. (2025). Privacy in consumer wearable technologies: A living systematic analysis of data policies across leading manufacturers. npj Digital Medicine, 8, 363. https://doi.org/10.1038/s41746-025-01757-1

Gomstyn, A., & Jonker, A. (n.d.). Exploring privacy issues in the age of AI. IBM Think. https://www.ibm.com/think/insights/ai-privacy

Government of Canada, Department of Justice. (n.d.). Bill C-27: An Act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other Acts. https://www.justice.gc.ca/eng/csj-sjc/pl/charter-charte/c27_1.html

High-level summary of the AI Act. EU Artificial Intelligence Act. (2024, February 27). https://artificialintelligenceact.eu/high-level-summary/

Khan, N., Deshpande, N., Singh, P., et al. (2025). A review of AI-based health prediction using Apple Watch and Fitbit data. Discover Applied Sciences, 7, 1217. https://doi.org/10.1007/s42452-025-07783-8

Shaheen, M. Y. (2021). Applications of artificial intelligence (AI) in healthcare: A review. ScienceOpen Preprints. https://doi.org/10.14293/S2199-1006.1.SOR-.PPVRY8K.v1

Mone, V., & Shakhlo, F. (2023). Health Data on the Go: Navigating Privacy Concerns with Wearable Technologies. Legal Information Management, 23(3), 179–188. https://doi.org/10.1017/S1472669623000427

Mondal, H., & Mondal, S. (2024). Ethical and social issues related to AI in healthcare. In Methods in Microbiology (Vol. 55, pp. 247–281). https://doi.org/10.1016/bs.mim.2024.05.009

Murdoch, B., Jandura, A., & Caulfield, T. (2022). Privacy considerations in the Canadian regulation of commercially-operated healthcare artificial intelligence. Canadian Journal of Bioethics / Revue canadienne de bioéthique, 5(4), 44–52. https://doi.org/10.7202/1094696ar

Office of the Privacy Commissioner of Canada. (2020, November 12). A regulatory framework for AI: Recommendations for PIPEDA reform. https://www.priv.gc.ca/en/about-the-opc/what-we-do/consultations/completed-consultations/consultation-ai/reg-fw_202011/

Radanliev, P. (2025). Privacy, ethics, transparency, and accountability in AI systems for wearable devices. Frontiers in Digital Health, 7, 1431246. https://doi.org/10.3389/fdgth.2025.1431246

Scassa, T. (2020, December 23). Replacing Canada’s 20-year-old data protection law. Centre for International Governance Innovation. https://www.cigionline.org/articles/replacing-canadas-20-year-old-data-protection-law/

Shajari, S., Kuruvinashetti, K., Komeili, A., & Sundararaj, U. (2023). The Emergence of AI-Based Wearable Sensors for Digital Health Technology: A Review. Sensors, 23(23), 9498. https://doi.org/10.3390/s23239498

Sivakumar, C. L. V., Mone, V., & Abdumukhtor, R. (2024). Addressing privacy concerns with wearable health monitoring technology. WIREs Data Mining and Knowledge Discovery, 14(3), e1535. https://doi.org/10.1002/widm.1535

So, A. (2025, April 24). The Apple Watch just turned 10. Here’s how far it’s come. Wired. https://www.wired.com/story/apple-watch-turns-10/

