AI Hallucinations are Proliferating, and IP is Not Exempt

“The core issues surrounding AI use in IP cases are pretty well the same as in litigation in general. But AI misuse could have broader repercussions in the IP context” – Camille Aubin

It was likely inevitable: artificial intelligence (AI) hallucinations are permeating intellectual property (IP) law.

Falsely generated legal research first came to the world’s attention in 2023, when an American lawyer appeared on the front page of the New York Times after including references to non-existent cases in his court documents. Similar mishaps were subsequently reported with increasing frequency around the globe, but only two of the 203 instances cited in Damien Charlotin’s AI Hallucination Cases database, the most widely cited source listing reported hallucinations, involved IP law.

The cases, Monster Energy Company v. Pacific Smoke International Inc. and Industria de Diseño Textil, S.A. v. Sara Ghassai, were both trademark opposition matters arising in 2024 before the Canadian Intellectual Property Office, and both involved fabricated citations.

Most recently, the U.K. Intellectual Property Office released a ruling on appeal, also in a trademark opposition matter (not yet listed in the database), in which a self-represented party, Dr. Mustapha Soufian, admitted using ChatGPT to generate citations that proved to be erroneous.

The adjudicator also criticised Victor Caddy, a trademark lawyer for the opposing party, for citations that detracted from rather than supported his arguments. Caddy’s inability to explain the citations fuelled the adjudicator’s suspicion that AI had been involved.

Camille Aubin, litigation practice leader at Montreal-based Robic LLP, a member of the IPH Limited Group, isn’t surprised that AI issues have infiltrated IP litigation.

“The core issues surrounding AI use in IP cases are pretty well the same as in litigation in general,” she says. “But AI misuse could have broader repercussions in the IP context.”

For example, Aubin notes that patent experts are often involved in explaining prior art.

“If experts misuse AI to cite references, expert reports based on hallucinated data could later be attacked.”

Similarly, AI content that emulates existing work could give rise to copyright issues, and AI-generated brand names or logos could infringe on existing marks.

But just as AI could be misused in formulating IP, it could be misused in analysing or attempting to invalidate IP assets.

“The question is whether certain types of AI analysis intended to replace the work of lawyers or experts, as opposed to simply being research tools, are desirable in applying the analytic standards dictated by law,” Aubin says.

By way of illustration, Aubin notes that the notional addressee for determining novelty and inventiveness is a knowledgeable person skilled in the relevant art.

“There’s a serious issue as to whether AI software could properly assess the information available from the point of view of that fictional knowledgeable person.”

In the trademark context, courts assess the likelihood of confusion from the perspective of “a consumer in a hurry with a vague recollection of the plaintiff’s trademark” who then encounters the defendant’s trademark.

“AI systems could conceivably be devised to compare trademarks and provide a determination as to the likelihood of confusion,” Aubin says. “But the Supreme Court of Canada has made it clear that judges are well equipped to determine confusion because they are often ‘consumers in a hurry’ themselves.”

What is clear is that, overall, the hallucination trend is accelerating: the majority of occurrences cited in Charlotin’s database arose in 2025, with barely half the year gone. That the IP sector, with its potential for misuse from various perspectives, can avoid this pattern seems doubtful.