Lady Justice and legal AI
Law Professor Samuel Dahan (2nd right) with three of his 2022-23 Conflict Analytics Lab (CAL) students – Mohamed Afify, Law’24, and Ingrid Kao and Solinne Jung, both Law’23. They are part of a team building OpenJustice, a new generative AI system that could change the legal profession. Although CAL’s main server has both a campus and a cloud location, every team member works on a personal computer, wherever they may be located. (Photo by Bernard Clark)
Law Professor Samuel Dahan (middle) with two of his three multidisciplinary co-founders of the new generative AI system called OpenJustice: David Liang, Law’21, (Smith School’s Program Manager of Analytics and AI Ecosystem); and Computer Engineering Professor Xiaodan Zhu. Not shown: Rohan Bhambhoria (PhD candidate in computer engineering). (Photo by Bernard Clark)

A revolutionary technology can now be used to create any type of content, from text, audio, and images to videos and simulations. Faculty, alumni, and students share their insights on how this new artificial intelligence can impact the legal profession, the justice system, legal education, and the field of intellectual property, and how a Queen’s team is leading the way with an innovation specializing in law. 

By Mark Witten

Artificial intelligence (AI) is being applied to (many people say “disrupting”) almost every industry and profession, and the legal profession is no exception. Since California company OpenAI released it last November, ChatGPT – an AI-powered chatbot that answers questions with convincingly human-like responses – has captured the public imagination and demonstrated the power of generative AI as a tool that could transform the delivery of legal services. Because a substantial part of lawyers’ work takes the form of written documents, generative AI’s ability to rapidly absorb huge amounts of information and then create original content from a user’s prompt suggests these technologies could change what lawyers do, and how they do it, in a multitude of ways.

In this multi-segment feature, Queen’s Law faculty, graduates, and students with expertise in relation to these issues share their perspectives, experience, and advice on the opportunities and risks of generative AI in three necessarily overlapping fields: 

  • the legal profession 
  • legal education 
  • the Canadian justice system.

“For the public, generative AI is a powerful tool that can potentially increase access to justice by empowering those who can’t afford a lawyer to pursue their own legal claims,” says Professor Samuel Dahan, Director of the Conflict Analytics Lab (CAL), a consortium for AI research on law, compliance, and conflict resolution, based at Queen’s Law and the Smith School of Business at Queen’s. “For lawyers and law firms, generative AI can augment their practice by providing more efficient ways to tackle problems and serve more clients. It can also help lawyers make better decisions by extracting and synthesizing knowledge from a sometimes-vast repository of data from their firm’s past work.”

Like many new, transformative technologies, generative AI also presents risks, flaws, and limitations that legal practitioners, law faculty, and students must address to realize the benefits. “As lawyers, legal educators, and researchers, we have to ensure we’re using generative AI to upskill, not deskill,” says Professor Bita Amani, whose specialization includes intellectual property law, information privacy, and data protection. “It would be a serious risk and a grave error to over-rely on these technologies, because generative AI doesn’t have understanding or judgement, and ChatGPT doesn’t care about the truth of the information it provides.”

OpenJustice is Coming

CAL’s innovation opens justice to the public

The idea for OpenJustice, which CAL opened to its first partners in May, started two years ago with a project dubbed the Smart Legal Clinic. Four innovators with a shared vision – Law Professor Samuel Dahan, CAL Director; David Liang, Law’21, Smith’s Program Manager of Analytics and AI Ecosystem; and, from Queen’s Computer Engineering, Professor Xiaodan Zhu and PhD candidate Rohan Bhambhoria – had been working on an AI project compiling answers to common legal questions asked by everyday Canadians on such popular online forums as Law Stack Exchange and Canadian law subreddits – “the kind of places people go for legal advice because they can’t afford a private law firm and they don’t meet the income requirements for legal aid,” says Liang.

“When OpenAI came out with ChatGPT, we realized it might be possible to train a large language model on the repository of information we had encoded, giving it the broad capability of answering many legal questions with sources,” he continues. “Fortunately, we had already been working on our legal database, and this technology came along at precisely the right time. We decided to move fast.”

And so OpenJustice was created – a specialized generative AI tool trained to perform legal tasks. It’s an interactive, natural-language-processing interface that allows users to ask common legal questions when they need guidance. “OpenJustice,” says Dahan, “will provide reliable, in-depth answers to legal questions and also address the shortcomings of generalized-language models like ChatGPT. In the first phase, the prototype will be trained and improved through collaborations with sophisticated partner-users with legal knowledge: law schools, top national law firms, legal scholars, and public-interest organizations. Once that fine-tuning process is done, OpenJustice will be open to the public.” That will be great news for some of the other legal stakeholders cited on the pages that follow.

In conducting troubleshooting research to help guide the development of OpenJustice, Dahan’s CAL students, including Solinne Jung, Law’23, tested ChatGPT and GPT-4 on various legal questions derived from popular online forums. Their findings, on which Dahan is writing a journal article, revealed some serious flaws and limitations in the chatbot’s responses. 

“One major issue was that ChatGPT would often provide the right answer but not any citations for the cases it was referencing,” Jung points out. “Or the underlying reasoning component was incorrect. Other times, it failed to provide accurate legal information in the specific context.” Additional flaws included superficial answers and outright fabrication, Liang adds. “ChatGPT seems very convincing, but it also has a habit of ‘hallucinating’ information that doesn’t exist. It creates false citations, for instance. These are some of the problems we’re trying to fix.” 

These and other fixes the CAL team is working on address concerns raised by the recent U.S. case Mata v. Avianca, Inc., a cautionary tale Professor Amani references in discussing the risks of relying on generative AI in the legal profession. In this well-publicized case, two now-sanctioned lawyers relied on ChatGPT to prepare the plaintiff’s court filing, only to discover that several of the case-law citations it generated were “bogus” – invented outright by the application.

CAL’s ingenious solution is to build OpenJustice as a hybrid system. “One of the main flaws of AI language models is their inability to output factual information,” says Bhambhoria. “Citations, facts, and reasoning are all problems noted by students testing ChatGPT that we’re aiming to address. Hybrid systems combine the capacities of language models and information retrieval systems, like search engines, to overcome that limitation in providing factual information, such as citations.”  
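In rough outline, such a hybrid pipeline can be pictured in a few lines of code. The Python sketch below is purely illustrative – the toy corpus, word-overlap scoring, and prompt template are invented assumptions, not CAL’s actual implementation, and a real system would use a proper search index and a live language-model call.

    # Minimal, invented sketch of a hybrid (retrieval + generation) pipeline.
    # Not CAL's code: the corpus, scoring, and prompt are illustrative only.
    CORPUS = [
        {"cite": "Smith v. Jones, 2019 ONSC 123 (hypothetical)",
         "text": "An employee dismissed without cause is entitled to reasonable notice."},
        {"cite": "Employment Standards Act, s. 54 (illustrative reference)",
         "text": "No employer shall terminate employment without notice or pay in lieu."},
    ]

    def retrieve(question, corpus, k=2):
        """Rank sources by naive word overlap with the question."""
        q_words = set(question.lower().split())
        return sorted(corpus,
                      key=lambda doc: len(q_words & set(doc["text"].lower().split())),
                      reverse=True)[:k]

    def build_prompt(question, sources):
        """Ground the model in retrieved text so its answer can cite sources."""
        context = "\n".join(f"[{i + 1}] {s['cite']}: {s['text']}"
                            for i, s in enumerate(sources))
        return ("Answer the legal question using ONLY the numbered sources, "
                "citing them by number.\n\nSources:\n" + context +
                "\n\nQuestion: " + question + "\nAnswer:")

    question = "What notice is an employee owed if terminated without cause?"
    # The printed prompt is what the system would send to the language model;
    # the retrieved citations travel with the answer so users can verify it.
    print(build_prompt(question, retrieve(question, CORPUS)))

Because the model is instructed to answer only from retrieved, citable sources, the idea is that every reference in the output can be traced back to a real document rather than conjured from thin air.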

Improved transparency, verifiability, and accuracy are key elements in CAL’s approach to overcoming the critical flaws and limitations of ChatGPT and other generative AI models. For example, OpenJustice will add citations to responses, enabling users to source supporting information in case law, legislation, and high-quality legal information sources in the public domain. “We want to provide users with the resources to verify everything themselves. That way they can make an informed decision about how to pursue their claim,” says Dahan, noting that over 50 per cent of Canadian court cases involve at least one self-represented litigant, who could potentially benefit from this cutting-edge, open-access legal app.

Multiplicity and diversity of outputs are other key components of OpenJustice. A main issue with generative AI is that it provides only one answer per question, often framing it as a universal truth. “This does not align well with the far more subtle way legal reasoning works,” explains Dahan. “In fact, a slightly complex legal question can call for several possible truths, depending on such unpredictable factors as timing, location, resources, and the adjudicator. Accordingly, our model is trained to provide a variety of possible answers or truths to the same legal question.”
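What “several possible truths” could look like in practice: condition the same question on different assumptions and return one answer per branch. In the hypothetical Python sketch below, ask() merely stands in for a real language-model call, and the assumptions are invented for illustration – this is not OpenJustice’s actual design.

    # Invented sketch: one question, several conditioned answers.
    # ask() stands in for a real model call (e.g., sampled with
    # temperature > 0); the assumptions are illustrative only.
    ASSUMPTIONS = [
        "the worker is federally regulated",
        "the worker is provincially regulated in Ontario",
        "a signed contract limits notice to the statutory minimum",
    ]

    def ask(question, assumption):
        """Stand-in for a language-model call conditioned on one assumption."""
        return f"[model's answer to {question!r} assuming {assumption}]"

    question = "How much notice of termination is owed?"
    for assumption in ASSUMPTIONS:
        # Each branch is offered as one possible answer, not a universal truth.
        print(f"If {assumption}: {ask(question, assumption)}")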

Since its inception five years ago, CAL has also been working to improve AI’s limited ability to do causal and counter-factual reasoning. “ChatGPT and other generative AI systems do a very poor job of applying law to facts,” Dahan says. “If we have enough legal data and computing power, we think OpenJustice can get close to good legal reasoning.”

CAL is now in the process of establishing collaborations with partners from across Canada, the U.S., and Europe to help train, improve, and strengthen the core OpenJustice technology. Among them are leading legal scholars and academic institutions (including Harvard, Leiden, Singapore, and Paris Dauphine), top national law firms and other private sector organizations, and public interest institutions. On behalf of CAL, Dahan is applying for a Social Sciences and Humanities Research Council (SSHRC) partnership grant of up to $2.5 million to develop and strengthen the core technology to make it accessible to the public. “We want our consortium of partners to contribute to the public version, which we aim to release by early 2024,” he says. Then CAL will also be able to help partners like law firms create their own customized versions trained on their data.

To sum up, OpenJustice’s primary goal is to become one of the main generative AI tools specializing in law, open to the public, including self-represented litigants and such public interest advocates as community legal clinics.

As Liang notes, “There is a massive access-to-justice crisis in Canada and worldwide. OpenJustice could be one potential solution, empowering self-represented litigants to pursue their legal claims without a lawyer’s help. This tool could also help public interest lawyers manage their massive caseloads and represent a larger number of clients effectively.”

As for aspiring legal professionals, new grad Solinne Jung says that working with CAL’s multidisciplinary team has given her opportunities to think about and apply legal principles in different ways by designing and creating tools from the perspective of the end-user. She’s confident that “OpenJustice will inspire a new generation of lawyers to not only rely on the research tools we’re accustomed to, but to seek out or develop innovative ways of researching or providing legal information to clients.” 

Partner for Innovation

Law practitioners, academics, and students can help fine-tune OpenJustice for its public release. Our Conflict Analytics Lab is looking for collaborators to train its generative AI bot that specializes in law. To get involved, please contact Professor Samuel Dahan at samuel.dahan@queensu.ca.  

Legal Profession Impact

Legal and justice professionals must assess risks and opportunities: LCO 

Nye Thomas, Law’89, Executive Director of the Law Commission of Ontario, is leading an LCO project on AI and Automated Decision-Making in the justice system. This project is addressing both AI’s well-documented risks and harms and its potential to improve fairness.

ChatGPT and generative AI technologies could have a transformative impact on the legal profession and legal service delivery, in terms of both opportunities and risks. Keeping a close eye on the situation is Nye Thomas, Law’89, Executive Director of the Law Commission of Ontario (LCO). “Generative AI technologies have extraordinary potential to ameliorate the access-to-justice crisis in Ontario and across Canada by making the provision of legal information and services more efficient, affordable, and accessible to the public,” he says, “but that won’t happen by itself.

“Regulators, the legal profession, judges, governments, and civil society organizations will have to think about how the technology can be used most effectively and appropriately. There are serious questions about the accuracy and reliability of generative AI, so verification and authentication processes are crucial to ensure that the answers users are getting reflect Ontario law.”

More positively, Thomas sees opportunities for using generative AI in public legal education. “A lot of people are shut out of the justice system due to the costs,” he explains. “Community legal clinics dedicated to public legal education could help to ensure the information and advice provided to self-represented individuals through generative AI tools will be accurate in Ontario and accessible. Self-represented individuals and lawyers could use this technology to write pleadings for tribunals. It could also be used to assist judges in writing decisions.”

Currently Thomas leads an LCO project on AI and Automated Decision-Making (ADM) in the justice system, addressing both AI’s well-documented risks and harms and its potential to improve fairness. The results will inform the LCO’s development of a framework for regulating AI, helping to make its use accountable in the justice system. A top priority will be to ensure that the use of AI respects human rights law. “Studies have shown AI systems have the potential to perpetuate or worsen biased or discriminatory decision-making in the justice system,” he says. “We’re working with the Ontario Human Rights Commission and the Canadian Human Rights Commission to develop an AI Human Rights Assessment tool.” 

Other issues the LCO will address include government use of generative AI to make decisions and ensuring that AI-powered decisions can be appealed. There are due process issues in decisions about people’s entitlements to government services and benefits, the right to know who makes these decisions, and the right to challenge a decision. What is the liability, should the generative AI system make a mistake, and who is responsible? Does the person affected sue the owner of the system?

“To develop effective and appropriate AI regulations, we’ll convene a multidisciplinary group that includes not just lawyers and judges, but also technology and privacy experts to identify potential risks in these systems and strategies to mitigate those risks,” says Thomas.

For legal professionals in any area of law, he believes it will be important to learn how to use generative AI as a tool to improve efficiency and enhance the quality of their work. “Generative AI technologies will change the practice of law over time,” he says. In that future, he’s certain of three things, just as CAL’s OpenJustice team at Queen’s Law is:
“Lawyers must become more technologically competent to understand how the technology works, both its benefits and limitations; generative AI will be used to draft or help draft documents such as contracts; and access to information for legal research will speed up.”

Thomas emphasizes, though, that AI systems won’t ever replace lawyers. “The real skill in using these systems is in the questions you ask through prompts, and lawyers are the best people to ask legal questions. Lawyers are also well equipped to evaluate and verify that the responses generated are accurate, reliable, and reflect the law in Ontario.”

Legal Education Impact

Exploiting potential and avoiding perils challenge both law faculty and students 

Professor Mohamed Khimji, Associate Dean (Academic Policy), sees ample opportunities for professors and students to use ChatGPT appropriately as a tool to improve critical thinking. (Photo by Greg Black)

ChatGPT is smart enough to pass law school exams. After completing 95 multiple-choice and 12 essay questions, the AI chatbot achieved a C+ passing grade overall on exams in four courses graded blindly by University of Minnesota Law School professors. Imagine how well future iterations of generative AI are likely to do within the next year or two!

Mohamed Khimji, Associate Dean (Academic Policy), who is responsible for dealing with academic integrity issues, outlines three key principles that will guide Queen’s Law’s approach to using generative AI.

“First, Queen’s will not ban or restrict the use of those technologies for learning purposes,” he says. “We see generative AI as a potentially valuable learning tool that can be used as a support to primary sources.”  

Second, inappropriate use of AI would constitute a departure from academic integrity, since it involves a misrepresentation of the student’s work and abilities. Among the core values of academic integrity are honesty in presenting one’s own academic work and acknowledging dependence on the ideas or words of any other source, and fairness, which involves full acknowledgement of sources. 

“We see inappropriate use of generative AI as no different than other forms of plagiarism, such as copying from a textbook without attribution and presenting it as your own work. Students should cite their sources clearly,” says Khimji, noting that one big challenge will be to prove students’ take-home assignments are the work of generative AI. “While tools have been developed to detect plagiarism using generative AI, these are not reliable, and we recommend instructors not use them yet.”

Third, instructors should indicate whether this technology can be used in a course and, if so, what the parameters of its use will be. “We want to give instructors the freedom to restrict or limit the use of generative AI in their course if they choose to,” explains Khimji. “We respect academic freedom, and they may have legitimate pedagogical reasons for restricting its use. For example, we want students to learn in first year how to extract legal principles from primary source materials, such as cases, and may want them to develop those skills on their own rather than by using generative AI.”

Khimji sees ample opportunities for professors and students to use ChatGPT appropriately as a tool to improve critical thinking. He gives an example from his Mergers and Acquisitions course: “Students could ask ChatGPT to produce an acquisition agreement and then analyze strengths and flaws in the document that’s been generated. These technologies can be used as a learning tool in any area of law. We know generative AI isn’t very good at performing a legal analysis in a hypothetical fact situation, so students could sharpen their analytical and legal reasoning skills by critiquing the responses generated to these types of legal questions,” he says, noting that law students are very open to using new technologies and new sources of information. “Students are excited and fascinated by ChatGPT. It’s an interactive resource, which has enhanced their engagement.”

To prepare for their legal careers, it will also be essential for students to know how to use the technology effectively and appropriately. “Law firms are thinking about how best to use AI and starting to do it,” Khimji says. “Once these technologies become more reliable, they will enhance efficiency and make legal services cheaper. Lawyers who can use AI will be more in demand than lawyers who can’t. But it's very important that our students learn how to evaluate the work of generative AI to ensure the quality and accuracy of the information isn’t compromised.”

Professor Bita Amani is a researcher and teacher specializing in intellectual property law, information privacy, data protection, and feminist legal studies. 

IP Law Impact

Generative AI creations raise new questions and challenges in IP law

Should AI-generated creative works such as songs, paintings, and text (novels or lyrics) be protected by copyright? Or, when the voices of Drake and The Weeknd are featured on AI-generated songs that rack up millions of views and streams without the artists’ participation or consent, does that constitute some form of intellectual property or personality rights infringement?

Right now, the answers to these new and tricky AI-triggered legal questions aren’t easy or clear. While creative work that doesn’t include an element of human authorship isn’t protected, the U.S. Copyright Office has issued guidance that artistic works created with the help of AI can be copyright-eligible. But how little or how much human involvement is needed for a creative work to be protected? Major record labels have been using their influence to get AI-generated music pulled from streaming services, but it’s not certain that an artist’s style or voice, which AI can copy, is protected by copyright in the way an individual’s existing works are.

“As reflected in the scholarly literature, from an IP perspective, generative AI raises important ontological questions about who can be an author or inventor and what an author or inventor is,” says Professor Bita Amani, whose teaching and research focuses on issues including intellectual property law, information privacy, and data protection.

As generative AI systems disrupt creative industry models, she sees pressure building for copyright laws and government regulations to be adapted and updated. Amani recommends more clarity and appropriate legal and regulatory reforms on issues such as copyright protection and copyright liability in Canada. 

“It’s important to resist calls to extend copyright protection to AI-generated creative works, and to maintain and confirm the existing requirement of human authorship and original expression as preconditions of copyright protection,” she believes. “We don’t need to incentivize or reward AI as we do human authors, and works generated by AI should remain in the public domain.” As for law reforms, she recommends they also confirm that the use of copyright-protected works for text and data mining does not infringe copyright and can be undertaken in Canada without the threat of potential copyright liability.

In April, the Office of the Privacy Commissioner of Canada launched an investigation into OpenAI, the operator of ChatGPT, in response to a complaint. By May 25, privacy authorities of three provinces – Quebec, B.C., and Alberta – had signed on to a joint investigation. “Widespread use of generative AI raises serious privacy concerns as people become more aware of what personal data is being collected, used, and disclosed,” says Amani. 

“In Italy, for example, the government temporarily blocked ChatGPT over privacy concerns until the company satisfied data protection conditions.” In mid-May, the U.S. Congress heard from OpenAI’s CEO that government intervention may be necessary to mitigate growing risks to privacy, technology, security, and the law.

Keith Spencer, Law’87, is counsel with Fasken in Vancouver specializing in information technology and intellectual property law. 

Keith Spencer, Law’87, counsel and a leading information technology and IP lawyer at Fasken, is not surprised that generative AI is already a disruptor in the music and other creative industries. “Using AI to create original works with minimal or no human intervention raises many questions about who owns the copyright. Until some of the cases are resolved in the courts, it can be difficult for lawyers to properly advise clients,” he says.

Spencer, who provides expert advice to startup and mature technology companies and serves on the boards of several early-stage private technology companies, is excited about AI’s potential to democratize access to information and reduce the time and cost of providing legal services. 

He also notes a much higher demand and expectation that lawyers serving high-technology companies will adopt innovative and efficient practices, such as generative AI. “In tech acquisitions, for example, I can envision this AI creating a work product that surveys the risks in a deal,” he says. “Lawyers could use it as a tool to go through a data room full of contracts, identify the highest-risk contracts, and summarize them in a memo at a fraction of the time and cost it might otherwise take with a conventional approach. Of course, human judgement will still be required for the final assessment and recommendation stage, but most of the heavy lifting currently done by people will be handled by the technology.”
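As a thought experiment only, the triage step Spencer describes might be sketched as below, with keyword weights standing in for a language model’s judgment. The risk terms, file names, and threshold are all invented for illustration; no real tool works exactly this way.

    # Toy sketch of data-room triage: surface the riskiest contracts first.
    # Keyword weights stand in for model-based analysis; all values invented.
    RISK_TERMS = {"change of control": 3, "indemnify": 2,
                  "terminate for convenience": 1}

    data_room = {
        "supplier_agreement.txt": "Either party may terminate for convenience...",
        "key_customer_contract.txt": ("Customer may terminate on a change of "
                                      "control and Supplier shall indemnify..."),
    }

    def risk_score(text):
        """Crude stand-in for model-based contract risk classification."""
        lowered = text.lower()
        return sum(weight for term, weight in RISK_TERMS.items() if term in lowered)

    # Rank contracts so the highest-risk ones reach the memo first.
    for name, text in sorted(data_room.items(),
                             key=lambda item: risk_score(item[1]), reverse=True):
        action = "summarize for memo" if risk_score(text) >= 2 else "low priority"
        print(f"{name}: risk {risk_score(text)} -> {action}")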

Conclusion

There is a consensus among legal and technology experts that generative AI will have a significant impact on law, the legal profession, and legal education. “Generative AI will be transformative, but the nature of the transformation isn’t yet clear,” says the LCO’s Nye Thomas. While leading Ontario’s regulatory efforts to ensure the technology will enhance public legal education and expand citizens’ access to justice, he also wants safeguards against AI’s potential risks and harms. 

Meanwhile, Professor Dahan and his Conflict Analytics Lab team are also helping to shape the direction of this transformation by building at Queen’s – with academic, public interest, and private sector collaborators – the generative AI system OpenJustice, which aims to become one of the main legal large language models open to the public.

“Generative AI technologies can empower individuals with the ability to pursue their own legal claims without the help of a lawyer and enable public interest lawyers to serve their clients more effectively,” he says. “For the profession, I’d say if lawyers and law firms don’t start using generative AI to perform legal tasks more efficiently and help make better decisions, they will be left behind.”