Justice Alison Harvison Young of the Court of Appeal for Ontario presents “Legal Ethics and the AI Revolution” to an audience of Queen’s Law community members in person in the student lounge and online.

Some members of the legal profession are wary of the growth of AI (artificial intelligence) technology; some embrace it. Still others, who have a more nuanced view, continue to weigh the pros and cons. Alison Harvison Young, the former dean of Queen’s Law (1998-2004) who now sits as a judge on the Court of Appeal for Ontario, includes herself in that last group. In her mind, the jury is still out when it comes to the benefits and perils of AI and chatbots such as ChatGPT.

“It’s an understatement to say that AI has taken the world by storm,” Justice Harvison Young told an audience of students, faculty, staff, and alumni when she returned to Queen’s Law on November 27 to deliver a hybrid lecture on “Legal Ethics and the AI Revolution.” 

She noted that in the post-pandemic world there’s no area of the legal profession – from law schools to practising lawyers to the courts – that’s unaffected by this burgeoning technology. “And we can’t put the genie back in the bottle.”

While the use of AI, which is still in its infancy, poses huge challenges, Harvison Young opined that as it evolves the technology holds the promise of enormous benefits. Nowhere is that truer than in areas of the law where access to legal counsel is sometimes limited or inadequate – for example, in smaller communities and in legal aid clinics that deal with refugee and immigration issues. But even in such venues there are dangers involved in using – much less relying on – AI and ChatGPT for help with legal research.

“When lawyers employ sophisticated AI tools, they have professional and legal obligations to use due diligence. In other words, they’re responsible for whatever advice they provide,” she said. That means counsel must critically examine any AI-generated information they rely on, provide to clients, or include in court filings to ensure that it’s accurate and complete. The dangers of not doing so are already far too real.

As evidence of that, Harvison Young recounted what happened when she experimented with ChatGPT, posing a preliminary legal research question. The AI software’s reply pointed out an inconsequential typo in her query. Intrigued by this, she rephrased her question, added more detail, and resubmitted it. What came back next was a new answer to her question that she knew “was off the mark.”

For her, that realization underscored the necessity of critically reviewing all AI-generated output. Harvison Young isn’t alone in making that assessment. “Across the country, provincial law societies and the Canadian Judicial Council are working hard to develop AI-use guidelines for lawyers and judges,” she said.

As Harvison Young suggests, there are at least a couple of possible approaches to the practice directives that might be developed to help guide legal decision makers. “One is the rules-based approach, while the other is principles-based,” she said. “I’m in favour of the latter. You start with basic principles such as integrity, responsibility, and so on rather than with a rules-based approach.”

Just as statutes are amended or repealed by legislators, and common law is constantly evolving, Harvison Young noted that AI technology is changing with quicksilver speed. And it will continue to do so. Embracing that reality rather than resisting it is vital to members of the legal profession and to law students. “They’re entering the profession at an incredibly challenging time,” she said. “However, the legal profession and the world are full of potential for those who are innovative, open-minded, and guided by principles.”

By Ken Cuthbertson, Law’83