
Effects of ROSS Intelligence and NDAS, highlighting the need for AI regulation

February 18, 2024

🔬 Research Summary by Lucian Schwartz-Croft, a European Law student specializing in AI systems in the EU.

[Original paper by Lucian Schwartz-Croft]


Overview: The text discusses the expensive nature of legal research in the USA, dominated by platforms like Westlaw, and contrasts it with the approach of countries where legal information is more accessible. It introduces ROSS Intelligence, an AI system aiming to democratize legal research, which faced a legal battle with Westlaw over copyright infringement. The text also mentions a merger between Fastcase and vLex, creating an AI assistant similar to ROSS, but with uncertain impact due to its funding sources. It highlights a legal case in Georgia as a step towards more open access to legal information and notes the challenges and potential of AI in legal research, especially in the USA’s common law system.


Introduction

In 2018, a team led by Andrew Arruda unveiled their solution to the problem of costly legal research, called 'ROSS Intelligence.' This revolutionary AI system was a much more intelligent version of Westlaw, saving hundreds of billable hours, all for a reasonable price. The big issue, however, was that ROSS was forced to rely on the Westlaw database due to the nature of legal research in the USA. Unsurprisingly, Westlaw responded by suing ROSS. This paper concerns the lengthy and complex legal battle between ROSS and Westlaw and its possible ramifications.

Key Insights

The David and Goliath of legal research

One of the founders of ROSS Intelligence was once quoted as saying that the aim of ROSS was the 'democratization of the law.' As inspiring as this sounds, it is easy to see that ROSS was bound for a David and Goliath legal battle from the start. Having blatantly copied Westlaw data en masse, ROSS was unsurprisingly sued for copyright infringement in 2020. Interestingly, much of the copied information, such as judicial opinions, is in the public domain. Westlaw therefore focused its suit on the details, such as the use of its headnotes and numbering system. ROSS denied using any of the protected data and tried to convince the court to dismiss the case, but this failed. The team then went on the offensive, arguing that Westlaw exercises a monopoly over the legal research market by implementing restrictive marketing practices.

This was a very interesting move, and although the courts dismissed parts of it, the portion of the claim concerning 'tying' (when a seller requires buyers to purchase a second product or service as a condition of obtaining the first) was upheld. In this situation, Westlaw tied access to the public case law database to its paid search tools. As of this writing, the case has yet to be resolved, with slow progress and no clear end in sight. Although this is not the breakthrough case some may have hoped for, it shows that the huge legal research companies dominating the market can be challenged, and it may yet set a precedent that serves as a tool against them.

The long road ahead: 'Vincent AI'

Recently, after the full version of this paper was published, a merger between Fastcase and vLex resulted in the creation of an AI assistant similar to ROSS Intelligence: 'Vincent AI.' The missions of vLex and Fastcase both echo that of ROSS Intelligence, to 'democratize the law' through affordable pricing. It is nevertheless unclear whether the merger will actually do any good in the world of legal research or simply produce another Westlaw or LexisNexis clone, especially since the merged company is funded by Oakley Capital, a large European private equity investor, and Bain Capital Credit, a global credit specialist. Given this funding background, it would be unsurprising if Vincent became another overpriced tool joining the ranks of companies like Westlaw.

However, there are small signs that the legal landscape of the USA may evolve to open the door for other, smaller AI projects. A promising example concluded in 2020 when, as mentioned in the overview, the state of Georgia sued a nonprofit for uploading the state code to its website and making it freely available to the public. This backfired: after several appeals, the Supreme Court ruled that the code was not copyrightable because it fell under the government edicts doctrine. Although the case relies on a rather vague precedent, it is nevertheless a step in the right direction.

Access to scholarly material has also improved in the last two decades with the spread of the internet. The booming scientific research industry has caused the prices of non-legal scholarly journals to skyrocket over the last three decades, creating significant barriers to equal access to information. The legal world, however, has seemingly escaped this trend: legal scholarly articles have not followed the same price curve, largely thanks to the strong community of student- and university-edited scholarly journals.

Between the lines

In a nutshell, for the time being, AI will be difficult to harness for legal research in the USA in the way it is being used in other sectors. Although there is a good chance of development in other countries, especially unitary civil law nations, it is precisely the more complex nature of legal research in common law systems that makes the idea of AI so promising there. Even the near future remains unclear, and we have yet to see what will come of both the ROSS Intelligence case and Vincent.
