The rise of generative artificial intelligence (“AI”) presents both great opportunity and potential catastrophe. Recently, several cases have emerged in Ontario involving lawyers who have misused AI, a technology known to produce fictitious legal citations—sometimes referred to as “hallucinations.”[1] In these instances, AI tools can generate citations or legal arguments that appear completely legitimate yet are entirely invented and have no grounding in reality. In this year alone, Ontario courts have addressed at least three cases on this very issue, making it clear that all court submissions, and their accompanying citations, must be verified by a human. These decisions serve as a crucial reminder to the legal profession that failing to verify one’s work can have serious consequences, including findings of contempt and potential licensing repercussions.
The R v Chand Decision
In R v Chand, 2025 ONCJ 282, Justice Kenkel identified issues with the submissions filed by defence counsel, Mr. Arvin Ross. Notably, one of the cases appeared to be fictitious, as the court could not locate any case matching that citation.[2] Several other citations led to entirely different cases. As a result, Justice Kenkel directed Mr. Ross to personally prepare and file a new set of submissions in accordance with specific guidelines:
- The paragraphs must be numbered;
- The pages must be numbered;
- Case citations must include a pinpoint cite to the paragraph that illustrates the point being made;
- Case citations must be checked and hyperlinked to CanLII or other site to ensure accuracy;
- AI or commercial legal software that uses AI must not be used for legal research for these submissions.[3]
This case highlights a critical point: the careless use of AI ultimately increased Mr. Ross’s workload. He was ordered to redo work that had already been submitted, potentially adding time and expense for both the lawyer and the client. While AI may appear to save time at the outset, careless use can ultimately create a heavier workload.
The Ko v Li Decision
In Ko v Li, 2025 ONSC 2766, the court identified several problems with the factum submitted by Jisuh Lee, counsel for Hanna Ko. One cited case referred to an entirely unrelated matter, another citation led only to a generic internet error message, and a third case was cited for a proposition it did not support.[4]
Justice Myers concluded that Ms. Lee’s factum appeared to have been generated by AI, and that Ms. Lee failed to verify whether the cited cases were genuine or actually supported the legal propositions advanced in her submissions.[5] Justice Myers ordered Ms. Lee to show why she should not be found in contempt of court.[6] As outlined by Justice Myers in paragraphs 15 to 20 of his decision:
[15] All lawyers have duties to the court, to their clients, and to the administration of justice.
[16] It is the lawyer’s duty to faithfully represent the law to the court.
[17] It is the lawyer’s duty not to fabricate case precedents and not to mis-cite cases for propositions that they do not support.
[18] It is the lawyer’s duty to use technology, conduct legal research, and prepare court documents competently.
[19] It is the lawyer’s duty to supervise staff and review material prepared for her signature.
[20] It is the lawyer’s duty to ensure human review of materials prepared by non-human technology such as AI.
It is a significant and consequential matter for a lawyer to be required by a judge to explain why they should not be held in contempt of court. Contempt is a serious finding, indicating that a lawyer has disobeyed or shown disrespect for the authority of the court.[7] Such an offence can carry significant penalties, including fines or imprisonment. Justice Myers delivered a subsequent decision addressing this critical issue, emphasizing that counsel who misstate the law, submit fake case precedents, or grossly misrepresent the holdings of cited cases are in violation of their duties to the court.[8]
In Ko v Li, 2025 ONSC 2965, Ms. Lee acknowledged that her factum had been prepared using AI. She apologized, asked that no finding of contempt of court be made against her, and agreed to complete at least six hours of Continuing Professional Development training focused on the professional use and potential risks of AI tools in legal practice.[9] Justice Myers stated:
The error was not using AI to assist in drafting the factum. Rather, Ms. Lee’s failure arose when she signed, delivered, and used the factum without ensuring that the cases were authentic and supported the legal arguments she was submitting to the court.[10]
Justice Myers concluded that Ms. Lee’s expressions of accountability and remorse, along with her withdrawal of the offending factum, were sufficient to resolve any contempt of court that might otherwise have been found.[11] By contrast, in the Hussein decision below, counsel did not offer an apology for the irresponsible use of AI, and the court responded quite differently.
The Hussein v Canada Decision
In Hussein v Canada (Immigration, Refugees and Citizenship), 2025 FC 1060, Justice Moore identified two hallucinated cases, a cited authority that was irrelevant to the matter, and a legal test that was fabricated. For this reason, the court directed the applicant to prepare a Book of Authorities.[12] The applicant submitted two separate Books of Authorities, but on both occasions failed to include copies of the two decisions they relied upon—precisely the decisions that were of concern to the court. The applicant explained that the two cases had merely been mis-cited, and “resolved” the issue by removing those cases from their submissions and replacing them with new authorities.[13]
The court expressed concern that these cases might not actually exist and that the citations could have been generated using AI tools.[14] Justice Moore instructed the applicant to produce copies of the cases in question or, if that was not possible, to explain how those citations ended up in the materials filed with the court.[15] Subsequently, applicant’s counsel informed Justice Moore that they had relied on Visto.ai, a professional legal research platform tailored for Canadian immigration and refugee law practitioners, for the citations and had not independently verified their accuracy.[16] Justice Moore stated:
The use of AI is increasingly common and a perfectly valid tool for counsel to use; however, in this Court, its use must be declared and as a matter of both practice, good sense and professionalism, its output must be verified by a human. The Court cannot be expected to spend time hunting for cases which do not exist or considering erroneous propositions of law.[17]
The court determined that costs should be awarded for this motion.[18] However, applicant’s counsel was given a chance to address this specific decision another day before a final decision was made.[19] Importantly, even purpose-built tools can produce hallucinations, and lawyers remain responsible for any AI-generated content they incorporate into their work.
Key Takeaway
When used responsibly, AI can deliver great results. It is a valuable tool for reviewing cases, drafting legal documents, assisting with legal research, or summarizing lengthy materials. However, it is crucial for lawyers and legal professionals to develop a responsible approach to using AI. Fortunately, numerous guidelines have recently emerged to assist in this process.
The Federal Court has issued a notice outlining the proper use of AI in court proceedings. The Court requires parties involved in cases before it to disclose, both to the Court and to opposing parties, if any submitted documents contain content created or generated by AI.[20] This disclosure must be made through a formal declaration:
Sample Declaration: “Artificial intelligence (AI) was used to generate content in this document at paragraphs 20-30.”
The Law Society of Ontario has also published a White Paper that provides guidance for licensees when using AI. For example, Rule 3.1-2 of the Rules of Professional Conduct requires lawyers to perform legal services for their clients to the standard of a competent lawyer.[21] To meet this obligation, licensees should take the time to thoroughly understand any AI tools they plan to use, including their capabilities, risks, and potential legal implications.[22]
Finally, the Rules of Civil Procedure state:
Rule 4.06.1 (2.1) A factum shall include a statement signed by the party’s lawyer, or on the lawyer’s behalf by someone the lawyer has specifically authorized, certifying that the person signing the statement is satisfied as to the authenticity of every authority cited in the factum.[23]
When used responsibly, AI offers significant benefits to the legal profession. However, it remains the duty of every lawyer and legal professional to verify that all cited cases are accurate and genuinely support the propositions for which they are referenced. The real challenge is not to abandon AI, but to master its use—ensuring that technological innovation strengthens, rather than compromises, the integrity of the legal profession.
The information and comments herein are for the general information of the reader and are not intended as advice or opinion to be relied upon in relation to any particular circumstances. For particular application of the law to specific situations, the reader should seek professional advice.
[1] Ko v. Li, 2025 ONSC 2766 (CanLII), at para 14.
[2] R. v. Chand, 2025 ONCJ 282 (CanLII), at para 2.
[3] Ibid at para 5.
[4] Ko v. Li, 2025 ONSC 2766 (CanLII), at paras 5, 6, 10, and 11.
[5] Ibid at para 14.
[6] Ibid at para 31.
[7] Ko v. Li, 2025 ONSC 2965 (CanLII), at para 13.
[8] Ibid at para 14.
[9] Ibid at paras 23-24.
[10] Ibid at para 59.
[11] Ibid at para 62.
[12] Hussein v. Canada (Immigration, Refugees and Citizenship), 2025 FC 1060 (CanLII), at para 35.
[13] Ibid at para 36.
[14] Ibid at para 37.
[15] Ibid at para 37.
[16] Ibid at para 38.
[17] Ibid at para 39.
[18] Ibid at para 43.
[19] Ibid.
[20] Federal Court, Notice to the Parties and the Profession: The Use of Artificial Intelligence in Court Proceedings.
[21] Law Society of Ontario, Rules of Professional Conduct (Ontario: Law Society of Ontario, 2000) ch 3.1-2.
[22] Law Society of Ontario, “White Paper” (April 2024) at 9.
[23] R.R.O. 1990, Reg. 194, r 4.06.1(2.1).