ABScribe: Rapid Exploration of Multiple Writing Variations in Human-AI Co-Writing Tasks using Large Language Models

January 25, 2024

🔬 Research Summary by Mohi Reza, a Computer Science Ph.D. Candidate at the University of Toronto and a member of the Adaptive Experimentation Accelerator team, which won the Grand Prize in the $1M XPRIZE Digital Learning Challenge. He specializes in enhancing digital learning using field experiments powered by HCI, ML, and AI.

[Original paper by Mohi Reza, Nathan Laundry, Ilya Musabirov, Peter Dushniku, Zhi Yuan “Michael” Yu, Kashish Mittal, Tovi Grossman, Michael Liut, Anastasia Kuzminykh, and Joseph Jay Williams]


Overview: State-of-the-art large language models (LLMs) have the potential to transform how we compose and edit text. However, the prevalent chat-based interfaces for LLMs often complicate the management of multiple text variations, increasing workload and interrupting the writer’s creative process. This paper introduces ABScribe, a novel interface designed to support rapid yet visually structured exploration of writing variations in human-AI writing tasks while minimizing task workload and preserving writers’ flow.


Introduction

ABScribe Demo courtesy of the author team

“The only kind of writing is rewriting”

Ernest Hemingway, A Moveable Feast

In the age of Generative AI, as tools like ChatGPT become our writing companions, revising text with AI assistance becomes a battle against the clutter of lengthy, unwieldy chat logs. This research investigates ways to improve a fundamental aspect of writing with AI: the iterative, granular, and non-linear revision process. It presents ABScribe, a novel Human-AI co-writing interface that moves beyond the predominant chat-based UI of generative AI, enabling writers to manage multiple text variations effectively. ABScribe’s design enables the rapid generation and comparison of variations through an ensemble of novel interface elements, sidestepping the usual pitfalls of AI co-writing, such as excessive scrolling and cognitive overload. In a study with 12 writers, ABScribe significantly reduced subjective task workload and improved users’ perceptions of the revision process. Interfaces like ABScribe can help us harness the full potential of AI in writing, transforming an explosion of ideas into a fountain of well-organized creativity.

Key Insights

Unlocking Creativity with AI

Human-Computer Interaction (HCI) and traditional design practice encourage the parallel exploration of multiple variations to help avoid fixating on a single idea and to reduce the chances of eliminating rough but innovative ideas too early. Our research extends this principle to writing, proposing that AI can facilitate this exploration. However, the challenge lies in designing interfaces that prevent users from drowning in a sea of AI-generated content.

Moving beyond Chatbots

To ground our interface design, we distinguish between two kinds of Human-AI co-writing interfaces: conversational interfaces, exemplified by ChatGPT, which mimic human dialogue but can hinder the management of multiple text variations due to their linear structure, and ‘in-place’ interfaces, which blend AI suggestions directly into the document, enhancing the fluidity of text revision. Research and innovation in the latter kind of interface can help AI-supported writers become more productive as they revise text.

In our design, we adopt an in-place editing interface in a GPT-4 powered research prototype, offering a solution to overcome challenges surrounding the management of multiple text variations in human-AI co-writing tasks. We carefully construct a baseline interface representing current workflows, providing fresh empirical insights based on our interviews with writers. These insights help us understand user perceptions of the revision process and explore how differences between in-place editing and chat-based AI writing companions impact their workflow.

Designing and Evaluating ABScribe

ABScribe is built on four design requirements that support AI-assisted revision: minimizing task workload, visually organizing text variations, allowing context-sensitive comparison and revision, and enabling revision-centric, reusable AI prompts. We designed five interface elements to embody these requirements, allowing writers to seamlessly explore multiple writing variations:

  • Variation Components store multiple human- and AI-generated variations within flexible text segments in a non-linear manner, without overwriting text.
  • Hover Buttons reveal the corresponding version inside a Variation Component when users hover over them, allowing rapid comparisons without breaking text flow.
  • The Variation Accordion organizes all variations in a navigable format.
  • AI Buttons automatically encapsulate LLM instructions into reusable buttons that can be applied across different Variation Components.
  • AI Insert allows writers to insert LLM-generated text directly into the document.
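To make these elements more concrete, here is a minimal sketch of how a Variation Component and a reusable AI Button might be modeled, written in TypeScript. The names, fields, and the callLLM helper are illustrative assumptions made for this summary, not the authors’ actual implementation.

```typescript
// Minimal sketch of a data model for ABScribe-style variation management.
// All names and structures here are illustrative assumptions, not the
// authors' implementation.

interface Variation {
  id: string;
  text: string;
  source: "human" | "llm"; // who produced this variation
}

// A Variation Component: a flexible text segment that stores multiple
// variations side by side instead of overwriting text.
interface VariationComponent {
  variations: Variation[];
  activeId: string; // the variation currently shown in the document
}

// An AI Button: a reusable LLM instruction (e.g. "make this more persuasive")
// that can be applied to any Variation Component.
interface AIButton {
  label: string;
  prompt: string;
}

// Applying an AI Button appends a new LLM-generated variation rather than
// replacing existing text, preserving a non-linear revision history.
// `callLLM` is a hypothetical helper that sends the instruction and the
// active text to a language model and returns the generated rewrite.
async function applyAIButton(
  component: VariationComponent,
  button: AIButton,
  callLLM: (prompt: string, input: string) => Promise<string>,
): Promise<VariationComponent> {
  const active = component.variations.find(v => v.id === component.activeId);
  if (!active) throw new Error("No active variation found");
  const generated = await callLLM(button.prompt, active.text);
  const variation: Variation = {
    id: `v${component.variations.length + 1}`,
    text: generated,
    source: "llm",
  };
  return { ...component, variations: [...component.variations, variation] };
}
```

Under this reading, Hover Buttons and the Variation Accordion would simply switch or list the stored variations of a component, which is why comparing alternatives need not disrupt the surrounding text.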

To validate our design, we conducted a controlled evaluation study and interviews with 12 writers, comparing ABScribe with a widely used baseline workflow: a GPT-4-powered rich text editor paired with a chat-based AI assistant. Our findings demonstrate that ABScribe significantly reduces subjective task workload (d = 1.20, p < 0.001) and enhances user perceptions of the revision process (d = 2.41, p < 0.001) compared to the baseline.
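For readers interpreting the effect sizes above: Cohen’s d expresses the difference between two conditions in units of standard deviation, and values around 0.8 or higher are conventionally considered large. The sketch below shows the standard pooled-SD form of the statistic; this summary does not specify which variant the authors used (a within-subjects study may use a paired variant), so treat it only as an interpretive aid.

```typescript
// Standard Cohen's d for two independent samples (pooled standard deviation).
// Shown only to ground the interpretation of the reported effect sizes.

function mean(xs: number[]): number {
  return xs.reduce((a, b) => a + b, 0) / xs.length;
}

function sampleVariance(xs: number[]): number {
  const m = mean(xs);
  return xs.reduce((acc, x) => acc + (x - m) ** 2, 0) / (xs.length - 1);
}

// d = (mean1 - mean2) / pooled SD; |d| >= 0.8 is conventionally "large".
function cohensD(group1: number[], group2: number[]): number {
  const n1 = group1.length;
  const n2 = group2.length;
  const pooledSD = Math.sqrt(
    ((n1 - 1) * sampleVariance(group1) + (n2 - 1) * sampleVariance(group2)) /
      (n1 + n2 - 2),
  );
  return (mean(group1) - mean(group2)) / pooledSD;
}
```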

Between the lines

The text-editing interfaces in use today were designed before computers learned to write like humans. As such, researchers must critically reassess these interfaces as we integrate AI into our daily writing practices. As we transition from using computers as mere tools to embracing them as partners in creativity, the imperative is clear: We must develop the next generation of interfaces, such as ABScribe, to ensure they augment rather than usurp human creativity. Yet, questions linger: How will editing interfaces evolve to balance AI’s sophistication with the user’s need for simplicity? Can we maintain the authenticity of human expression amidst AI’s input? And what new forms of writing will emerge from this symbiosis?

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.
