
Research summary: Social Work Thinking for UX and AI Design

July 18, 2020

Summary contributed by Victoria Heath (@victoria_heath7), Communications Manager at Creative Commons

*Author of full paper & link at the bottom


Mini-summary: What if tech companies dedicated as much energy and resources to hiring a Chief Social Work Officer as they do to hiring technical AI talent? If that were the case, argues Desmond Upton Patton (associate professor of social work, sociology, and data science at Columbia University, and director of SAFElab), they would more often ask: Who should be in the room when considering “why or if AI should be created or integrated into society?” By integrating “social work thinking” into the process of developing AI systems, these companies would be better equipped to anticipate how technological solutions would impact various communities.

The code of ethics that guides social workers, argues Patton, should be used to guide the development of AI systems, leading companies to create systems that actually help people in need, address social problems, and are informed by conversations with the communities most impacted by the system. In particular, before a technical solution is sought, the problem itself must be fully understood, especially as it’s “defined by the community.” These communities should be given the power to influence, change, or veto a solution. To integrate this social work thinking into UX and AI design, we must value individuals “beyond academic and domain experts.” Essentially, we must center humanity and acknowledge that, in the process of doing so, we may end up devaluing the power and role of the technology itself.

Full summary:

What if tech companies dedicated as much energy and resources to hiring a Chief Social Work Officer as they do to hiring technical AI talent (e.g., engineers and computer scientists)? If that were the case, argues Desmond Upton Patton (associate professor of social work, sociology, and data science at Columbia University, and director of SAFElab), they would more often ask: Who should be in the room when considering “why or if AI should be created or integrated into society?”

By integrating “social work thinking” into their ethos and their process of developing AI systems, these companies would be better equipped to anticipate how technological solutions would impact various communities. To genuinely and effectively pursue “AI for good,” there are significant questions that need to be asked and contradictions that need to be examined, which social workers are generally trained to do. For example, Google recently hired individuals experiencing homelessness on a temporary basis to help collect facial scans to diversify Google’s dataset for developing facial recognition systems. Although on the surface this was touted as an act of “AI for good,” the company didn’t leverage its AI systems to actually help end homelessness. Instead, these efforts served the sole purpose of creating AI systems for “capitalist gain.” It’s likely this contradiction would have been noticed and addressed if social work thinking had been integrated from the very beginning.

It’s especially difficult to effectively pursue “AI for good” when the field itself (and tech more broadly) remains largely racially homogeneous, male, and socioeconomically privileged, as well as restricted to those with “technical” expertise while other forms of expertise are largely devalued. Patton asks, “How might AI impact society in more positive ways if these communities [e.g., social workers, counselors, nurses, outreach workers, etc.] were consulted often, paid, and recognized as integral to the development and integration of these technologies…?”

Patton argues that systems and tools can be used both to help and to hurt a community. “I haven’t identified an ethical AI framework,” he wrote, “that wrestles with the complexities and realities of safety and security within an inherently unequal society.” Thus, an AI technology shouldn’t be deployed in a community unless a “more reflective framework” can be created that “privileges community input.” When developing these systems, it’s important to admit, as Patton does, that the technical solution may not be what’s needed to solve the problem.

Through his work at SAFElab, Patton has nurtured collaboration between natural language processing (NLP) and social work researchers to “study the role of social media in gun violence” and to create an AI system that predicts aggression and loss. Their approach was first to have social workers trained in annotation collect and analyze qualitative data; that analysis then informed the development of the “computational approach for analyzing social media content and automatically identifying relevant posts.” By working closely together, the social workers and the computer scientists were able to develop a more contextualized technical solution, one cognizant of the “real-world consequences of AI.”
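As a rough, hypothetical illustration of what the computational side of such a collaboration can look like, the sketch below trains a simple text classifier on posts that domain experts have already labeled. It is not SAFElab’s actual system; the example posts, the label set (“aggression”, “loss”, “other”), and the scikit-learn pipeline are assumptions made purely for illustration.

```python
# Hypothetical sketch, not SAFElab's pipeline: a simple classifier trained on
# posts whose labels come from social workers' contextualized annotations.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy expert-annotated examples (invented for illustration); in practice the
# labels would be produced by trained annotators, not by the data scientists.
posts = [
    "rest easy lil bro, can't believe you're gone",
    "pull up if you got a problem, we waiting",
    "great game with the squad last night",
]
labels = ["loss", "aggression", "other"]

# TF-IDF features feed a logistic regression classifier.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(posts, labels)

print(model.predict(["miss you every day bro"]))
```

In a setup like this, the value comes less from the model itself than from the annotated data it is trained on, which is exactly where the contextual expertise described above enters the pipeline.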

In order to effectively ask the right questions and deal with the inherent complexities, problems, and contradictions of developing “AI for good,” we need to change who we view as “domain experts.” For the project at SAFElab, for example, the team developed an “ethical annotation process” and hired youth from the communities they were researching in order to center “context and community voices in the preprocessing of training data.” They called this approach Contextual Analysis of Social Media (CASM). The approach begins with an annotator providing a baseline, contextualized interpretation of a social media post; disagreements on the labeled post are then debriefed, evaluated, and reconciled with the community expert and the social work researcher. Once that is done, the labeled dataset is given to the data science team to use in training the system. This approach eliminates the “cultural vacuum” that can otherwise exist in training datasets, from the beginning and throughout the entire development process.
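The following is a minimal sketch of what a CASM-style reconciliation step could look like in code. The record fields, the unanimity rule, and the `needs_debrief` flag are hypothetical stand-ins for the debrief-and-reconcile process described above, not details taken from the paper.

```python
# Hypothetical sketch of a CASM-style reconciliation step; field names and the
# reconciliation rule are invented for illustration.
from dataclasses import dataclass
from typing import Optional

@dataclass
class AnnotatedPost:
    text: str
    baseline_label: str          # annotator's contextualized assessment
    community_expert_label: str  # community expert's review
    researcher_label: str        # social work researcher's review
    final_label: Optional[str] = None

def reconcile(post: AnnotatedPost) -> AnnotatedPost:
    """Keep unanimous labels; flag disagreements for a debrief discussion."""
    votes = {post.baseline_label, post.community_expert_label, post.researcher_label}
    post.final_label = votes.pop() if len(votes) == 1 else "needs_debrief"
    return post

# Only posts with an agreed-upon label would be handed to the data science team.
example = AnnotatedPost(
    text="pull up if you got a problem",
    baseline_label="aggression",
    community_expert_label="aggression",
    researcher_label="aggression",
)
print(reconcile(example).final_label)  # -> "aggression"
```

The point of a structure like this is that disagreement is surfaced and resolved with community experts before any post reaches the training set, rather than being averaged away downstream.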

The code of ethics that guides social workers, argues Patton, should be used to guide the development of AI systems, leading companies to create systems that actually help people in need, address social problems, and are informed by conversations with the communities most impacted by the system. In particular, before a technical solution is sought, the problem itself must be fully understood, especially as it’s “defined by the community.” These communities should be given the power to influence, change, or veto a solution. To integrate this social work thinking into UX and AI design, we must value individuals “beyond academic and domain experts.” Essentially, we must center humanity and acknowledge that, in the process of doing so, we may end up devaluing the power and role of the technology itself.


Original paper by Desmond Upton Patton: https://dl.acm.org/doi/10.1145/3380535 

