England’s Legal System Takes a Cautious Step into the Future, Allowing Judges to Harness Artificial Intelligence for Rulings

LONDON (AP) — England’s legal system, steeped in centuries-old traditions of wigs and robes, has cautiously embraced artificial intelligence (AI) to aid in producing rulings. The Courts and Tribunals Judiciary recently granted judges the ability to use AI to draft opinions, with the caveat that it should not be employed for research or legal analyses due to its potential to generate fabricated, misleading, inaccurate, and biased information.

Master of the Rolls Geoffrey Vos, the second-highest ranking judge in England and Wales, stated that judges can utilize AI responsibly, but must prioritize protecting confidence and take full personal responsibility for their work.

This move comes at a time when legal scholars and experts are contemplating the role of AI in the legal profession, speculating whether it could replace lawyers, assist in jury selection, or even determine case outcomes. The approach outlined by the judiciary on December 11th reflects a measured response. However, it represents a proactive step for a profession that has been slow to embrace technological advancements, as both the government and industry grapple with the implications of rapidly evolving AI technology, which is viewed by some as a remedy and by others as a threat.

“There’s a vigorous public debate right now about whether and how to regulate artificial intelligence,” said Ryan Abbott, a law professor at the University of Surrey and author of “The Reasonable Robot: Artificial Intelligence and the Law.”

Legal experts, including Abbott, commended England and Wales for taking an initial step towards AI regulation, acknowledging that the guidance would be influential worldwide, as courts and jurists eagerly explore the use of AI or express concerns about its implications.

While England and Wales have now emerged as leaders in addressing AI in courts, they are not the first jurisdiction to do so. Five years ago, the European Commission for the Efficiency of Justice of the Council of Europe issued an ethical charter on the use of AI in court systems. Although the charter predates the latest wave of generative AI, it covered fundamental principles, such as accountability and risk mitigation, that judges are expected to adhere to.

The U.S. federal court system, by contrast, has yet to establish guidelines on AI, and state and county courts are too fragmented for a unified approach. Nevertheless, individual courts and judges at various levels have set their own rules, creating a patchwork landscape. Cary Coglianese, a law professor at the University of Pennsylvania, said the English-language guidelines for judges and their staff on the use of AI are among the first of their kind and will likely prompt discussion and internal assessment in other jurisdictions.

While the guidance demonstrates the courts’ acceptance of AI, it falls short of complete endorsement, according to Giulia Gentile, a lecturer at Essex Law School who specializes in AI’s application in legal and justice systems. Gentile questioned the lack of transparency and accountability mechanisms in the guidance and highlighted the importance of enforcement.

The guidance document underscores the limitations and potential risks of AI technology while emphasizing caution. It specifically advises against relying on chatbots like ChatGPT, which gained public attention for generating text, including legal briefs that cited fictional cases. To address privacy concerns, judges were instructed not to enter private or confidential information into AI chatbots, because anything typed into a public chatbot should be regarded as published to the world.

Furthermore, the guidance warns that much of the legal material used to train AI systems comes from the internet and is predominantly based on U.S. law. Still, AI can serve as a secondary tool for judges with heavy caseloads, helping them write background material, summarize information they already know, or quickly retrieve material they are familiar with but cannot immediately locate. Judges should not, however, use AI to find new information that cannot be independently verified, as the technology has yet to deliver convincing analysis or reasoning.

Appeals court judge Colin Birss recently praised ChatGPT, describing how it helped him draft a paragraph of a ruling in an area of law he knew well. He called the tool useful but stressed that judges bear ultimate responsibility for the content of their judgments.

England and Wales have thus taken an early lead in regulating the use of AI in the courts. Their cautious approach, setting out guidelines while acknowledging the technology's limitations, reflects an effort to balance innovation with the integrity of the justice system. As AI continues to advance and debate over its role in the legal profession intensifies, courts around the world will likely look to England and Wales for guidance.