ChatGPT Before the Court: The Price of Blind Trust

22 December 2025

 

In the case Specter Aviation Limited v. Laprade, the Superior Court, for the first time in Quebec, sanctioned the inappropriate use of artificial intelligence in judicial proceedings [1]. This decision highlights the very real risks associated with the unregulated use of artificial intelligence within the judicial process.

 

A First Sanction for the Inappropriate Use of AI in Quebec

 

The defendant, a 74-year-old self-represented litigant, filed a pleading riddled with non-existent case law and doctrinal references generated by ChatGPT, an artificial intelligence tool.

 

When confronted with these irregularities, he admitted that he had “relied on all the power that artificial intelligence could offer him” in order to defend himself without a lawyer. While showing sensitivity to the difficulties faced by unrepresented parties, the Court nevertheless emphasized that access to justice cannot come at the expense of truthfulness.

 

Access to Justice and the Veracity of Information

 

“While access to justice calls for a certain degree of flexibility on the part of the courts toward citizens who must represent themselves without the assistance of a lawyer, such flexibility can never translate into tolerance for falsehoods. Access to justice can never accommodate fabrication or pretense.”

“Caution and human intervention at every stage to validate information—these are the lessons to be learned.”

 

AI: A Tool to Be Regulated Rather Than Stigmatized

 

“There is no need to stigmatize the use of artificial intelligence. Those who do so will quickly forget the fate reserved for those who resisted the promises and benefits that the Internet was expected to bring only a few years ago. Technological advances leave no room for complacency, and the judicial system must adapt proactively rather than reactively. Moreover, any technological measure that can enhance citizens’ access to the justice system should be welcomed and regulated, rather than prohibited and stigmatized.”

 

Artificial Intelligence and the Judicial System

 

“Artificial intelligence will spare neither the legal system nor the courts, which, in addition to being confronted by it, will have to contend with this new technology that claims to be revolutionary. While its intoxicating promises are matched only by the fears associated with its inappropriate use, artificial intelligence will seriously test the vigilance of the courts in the years to come.”

 

The Early Impact of AI on the Quebec Justice System

 

“We are clearly at the very beginning of artificial intelligence’s impact on the conduct of judicial proceedings. Our Court does not yet appear to have had the opportunity to rule on this issue, which promises to fill many pages of case law in the near future.”

 

The Litigant’s Responsibility Regarding Legal Citations

 

The Court concluded that the submission of fictitious citations constitutes a serious breach of the proper conduct of proceedings, justifying the imposition of a procedural sanction of $5,000. It emphasized that the gravity of such a breach does not depend on an intention to deceive, but on the responsibility incumbent upon every litigant—whether represented by a lawyer or self-represented—to verify the accuracy of the legal sources cited:

 

“Attempting to mislead the opposing party and the Court by submitting fictitious excerpts of case law or other authorities constitutes a serious breach. Whether this conduct is intentional or the result of mere negligence, the litigant is held to the highest standards with respect to the procedures filed with the Court.”

 

AI as a Tool, Not a Substitute for Human Judgment

 

Even in the absence of legal training, litigants remain bound by a duty of rigor: ignorance of the law or blind trust in artificial intelligence cannot serve as an excuse, as intellectual rigor remains a minimum requirement for participating in the judicial process. That said, the Court’s message is nuanced: Justice Morin cautions against stigmatizing the use of artificial intelligence while emphasizing the need for rigorous human oversight.

 

This approach aligns with the notice published by the Quebec Superior Court on October 24, 2023 [2], which had already warned practitioners about the risks of fabricated legal sources generated by artificial intelligence.

 

Finding the Right Balance Between Technology, Legal Rigor, and Human Responsibility

 

This decision marks a first in Quebec and forms part of an emerging trend in Canadian jurisprudence, where several courts—particularly in British Columbia and before the Federal Court—have sanctioned the use of “hallucinated citations,” notably in Zhang v. Chen and Lloyd’s Register Canada Ltd. v. Choi.

It serves as a reminder that, while artificial intelligence can assist legal analysis, it cannot replace human judgment or responsibility.

Ultimately, the Specter Aviation case demonstrates that technology can assist the law, but it does not supplant it: rigor, caution, and judgment remain the best guarantors of justice. Artificial intelligence has its merits, but true intelligence remains—and must remain—human.

 

 

 

Questions about a dispute involving artificial intelligence?

Contact us today.

Our litigation and dispute resolution lawyers are here to support you.


1. By applying Article 342 C.C.P., relating to significant breaches in the conduct of proceedings.
2. Superior Court, Notice to the legal community and the public: The integrity of submissions to the courts when using large language models.

 

 

Authors:

Amélie Gingras,
amelie.gingras@steinmonast.ca
418 210-3191
See the profile

Mathieu Ayotte,
mathieu.ayotte@steinmonast.ca
418 640-4459
See the profile

See more news and resources