Montreal AI Ethics Institute

Effects of ROSS Intelligence and NDAS, highlighting the need for AI regulation

February 18, 2024

🔬 Research Summary by Lucian Schwartz-Croft, a European Law student specializing in AI systems in the EU.

[Original paper by Lucian Schwartz-Croft]


Overview: The text discusses the expensive nature of legal research in the USA, dominated by platforms like Westlaw, and contrasts it with countries where legal information is more accessible. It introduces ROSS Intelligence, an AI system aiming to democratize legal research, which faced a legal battle with Westlaw over copyright infringement. It also covers a merger between Fastcase and vLex that created an AI assistant similar to ROSS, though with uncertain impact given its funding sources. Finally, it highlights a legal case in Georgia as a step towards more open access to legal information and notes the challenges and potential of AI in legal research, especially in the USA’s common law system.


Introduction

In 2018, a team led by Andrew Arruda unveiled their solution to the problem of costly legal research in the USA: ‘ROSS Intelligence.’ This AI system was pitched as a much more intelligent version of Westlaw, saving hundreds of billable hours at a reasonable price. The big issue, however, was that ROSS was forced to rely on the Westlaw database due to the nature of legal research in the USA. Unsurprisingly, Westlaw responded by suing ROSS. This paper concerns the lengthy and complex legal battle between ROSS and Westlaw and its possible ramifications.

Key Insights

The David and Goliath of legal research

One of the founders of ROSS Intelligence was once quoted as saying that the aim of ROSS was the ‘democratization of the law.’ As inspiring as this sounds, it is easy to see that ROSS was bound for a David and Goliath legal battle from the start. Because ROSS copied Westlaw data en masse, it was unsurprising that it was sued for copyright infringement in 2020. Interestingly, much of the copied information, such as judicial opinions, is in the public domain. Westlaw therefore focused its suit on the details, such as the use of its headnotes and numbering system. ROSS denied using any of the protected data and tried to convince the court to dismiss the case, without success. The team then went on the offensive, arguing that Westlaw exercises a monopoly over the legal research market through restrictive marketing practices.

This was an interesting move, and although the courts dismissed parts of the counterclaim, one part was upheld: the claim of ‘tying,’ which occurs when a seller requires buyers to purchase a second product or service as a condition of obtaining the first. In this case, Westlaw tied access to the public case law database to its paid search tools. As of this writing, the case has yet to be resolved, with slow progress and no clear end in sight. Although this is not the breakthrough case that some may have hoped for, it shows there is at least a small chance of standing up to the huge legal research companies that dominate the market, and it may set a precedent for countering them.

The long road ahead: ‘Vincent AI’

Recently, after the full version of this paper was published, a merger between Fastcase and vLex resulted in the creation of an AI assistant similar to ROSS Intelligence. The missions of vLex and Fastcase both echo that of ROSS Intelligence: to ‘democratize the law’ through affordable pricing. Still, it is unclear whether the merger will actually do any good in the world of legal research or simply produce another Westlaw or LexisNexis clone, especially since the new company will be funded by Oakley Capital, a large European private equity investor, and Bain Capital Credit, a global credit specialist. Given this funding background, it would be unsurprising if Vincent becomes another overpriced tool joining the ranks of Westlaw and its peers.

However, there are small signs that the legal landscape of the USA may evolve to open the door for other, smaller AI projects. A promising example, noted in the overview, concluded in 2020: the state of Georgia had sued a nonprofit for uploading the state code to its website and making it freely available to the public. This backfired, and after several appeals, the Supreme Court ruled that the code was not copyrightable because it fell under the government edicts doctrine. Although the case rests on a rather vague precedent, it is nevertheless a step in the right direction.

Access to scholarly material has also improved over the last two decades with the spread of the internet. In the sciences, the booming research industry has caused prices of nonlegal scholarly journals to skyrocket over the past three decades, creating significant barriers to equal access to information. The legal world has largely escaped this trend: legal scholarly articles have not followed the same price curve, thanks in large part to the strong community of student- and university-edited journals.

Between the lines

In a nutshell, AI will, for the time being, be difficult to harness for legal research in the USA in the way it is being used in other sectors. Although development is more likely in other countries, especially unitary civil law nations, it is precisely the more complex nature of legal research in common law systems that makes AI so promising there. Even the near future remains unclear, and we have yet to see what will come of both the ROSS Intelligence case and Vincent.

