London, England — The High Court of England and Wales has issued a warning to legal professionals regarding the risks associated with the use of artificial intelligence in legal research. Judge Victoria Sharp emphasized the need for lawyers to adopt stricter measures to mitigate potential misuse of AI tools, particularly generative models like ChatGPT.
In a recent ruling addressing two separate cases, Judge Sharp stated that generative AI tools are not capable of conducting reliable legal research. She noted that while these tools may generate responses that appear coherent and plausible, those responses can be entirely wrong. “The responses may sometimes assert confidently what is factually untrue,” she wrote.
Despite this caution, Judge Sharp said lawyers may still use AI in their research. She underscored, however, that they must verify the accuracy of AI-generated research against established, authoritative sources before relying on it in their professional work.
The ruling comes amid a growing number of instances in which lawyers have cited what appear to be AI-generated falsehoods, including fabricated case law. Judge Sharp said this trend shows that more must be done to ensure existing guidance is followed, and added that her decision will be forwarded to professional bodies including the Bar Council and the Law Society.
One case under consideration involved a lawyer seeking damages for a client against two banks. The attorney submitted a filing that contained 45 citations, of which 18 were found to be fabricated. Many other references either lacked proper quotations, failed to support the claims made, or were irrelevant to the case at hand.
In another incident, a lawyer representing a client facing eviction from their London residence cited five non-existent cases. Although the lawyer denied using AI in this instance, she acknowledged that some of the citations may have come from AI-generated summaries she encountered on platforms such as Google or Safari. Judge Sharp said that although the court chose not to initiate contempt proceedings, that decision should not be treated as a precedent for future cases.
The judge warned that failure to meet professional obligations could lead to severe repercussions for legal practitioners. Both lawyers involved in these cases have either been referred, or have referred themselves, to the regulatory bodies responsible for overseeing legal conduct.
Judge Sharp cautioned that when lawyers fail to uphold their duties to the court, the consequences can range from public admonishment to the imposition of costs, contempt proceedings, or even referral to the police.
The emphasis on verifying AI-generated information underscores a growing concern within the legal community about the reliability of technology in a field where precision is paramount.