Judges in England and Wales have been given cautious approval to use artificial intelligence when writing legal opinions.
England's 1,000-year-old legal system, still steeped in traditions such as the wearing of wigs and robes, has taken a careful step toward modern technology by allowing judges to use artificial intelligence to help produce rulings.
The Courts and Tribunals Judiciary recently stated that AI may be useful in writing opinions, but cautioned against relying on it for research or legal analysis because of its potential to generate false, inaccurate, and biased information.
Master of the Rolls Geoffrey Vos, the second-highest-ranking judge in England and Wales, said judges need not shun the cautious use of AI, but they must ensure it preserves confidence in the courts and must take full personal responsibility for their decisions.
At a time when scholars and legal experts are weighing a future in which AI could take over tasks traditionally performed by lawyers, from selecting jurors to deciding cases, the judiciary's December 11 announcement takes a cautious approach. Even so, it is a proactive move for a profession that has been slow to embrace technological change, as governments and industries grapple with a rapidly evolving technology that is alternately hailed as a solution and feared as a threat.
Ryan Abbott, a law professor at the University of Surrey and author of “The Reasonable Robot: Artificial Intelligence and the Law,” said there is a lively public debate over the regulation of artificial intelligence.
Abbott said there is a distinct concern about the use of AI in the judicial system, stressing the importance of keeping humans involved in decision-making and suggesting that adoption is likely to be slower and more cautious in the judiciary than in other fields.
Legal experts, including Abbott, praised the judiciary for addressing the latest advances in AI. They said the guidance would be well received by courts and jurists around the world who are eager to use AI or wary of its potential impact.
In taking what was billed as a first step, England and Wales moved toward the forefront of courts grappling with how to handle AI, though theirs is not the first guidance on the matter.
Five years ago, the Council of Europe's European Commission for the Efficiency of Justice issued an ethical charter on the use of AI in court systems. Although that document is not up to date with the latest technology, it did address core principles such as accountability and risk management that judges should abide by, said Giulia Gentile, a professor at Essex Law School who studies the use of AI in legal and justice systems.
Although Chief Justice John Roberts of the U.S. Supreme Court discussed the advantages and disadvantages of artificial intelligence in his annual report, the U.S. federal court system has no established framework for AI, and America's state and county courts are too fragmented for a unified approach, said Cary Coglianese, a law professor at the University of Pennsylvania. Individual courts and judges at both the federal and local levels, however, have set their own rules.
Coglianese said the guidance for England and Wales is likely one of the first, if not the first, broadly applicable sets of AI guidelines published in English that is aimed specifically at judges and their staff. Many judges, he added, may already have reminded their staffs of how existing policies on confidentiality and internet use apply to public-facing platforms such as ChatGPT.
Gentile said the guidance shows that the courts are willing to use the technology but not fully committed to it. She took issue with a passage stating that judges are not required to disclose their use of the technology, and questioned why there is no accountability mechanism.
Gentile called the document useful, but said how it will be implemented and enforced remains unclear. It does not specify who will monitor compliance or what consequences would follow a breach, and she questioned what could be done if there are no consequences.
The guidance emphasizes the importance of upholding the court’s integrity while also moving forward. It contains numerous cautions about the technology’s limitations and potential issues that may arise if a user is unfamiliar with its functionality.
At the top of the list is an admonition about chatbots, such as ChatGPT, the conversational tool that exploded into public view last year and has generated the most buzz over the technology because of its ability to swiftly compose everything from term papers to songs to marketing materials.
The pitfalls of using the technology in court have already gained notoriety: two New York attorneys relied on ChatGPT to write a legal brief that cited fictional cases, and they were fined by a displeased judge who called their work “legal nonsense.”
Judges in England and Wales were advised not to reveal any private or confidential information, as chatbots have the capability to store and recall questions and other details.
The guidance advised judges not to enter into a public AI chatbot any private information that is not already available to the general public, and to treat any data provided to such a chatbot as having been made available to everyone.
Other cautions include the fact that much of the legal material used to train AI systems comes from the internet and is often based heavily on U.S. law.
But jurists who carry heavy caseloads and routinely write decisions running dozens or even hundreds of pages can use AI as a secondary tool, particularly for drafting background material or summarizing information they already know, the courts said.
Judges were told the technology can also be used for emails and presentations, or for quickly retrieving material they are familiar with but do not have at hand. They were cautioned, however, against using it to find new information that cannot be independently verified, and told that it is not yet capable of providing convincing analysis or reasoning.
Appeals Court Justice Colin Birss recently praised ChatGPT for helping him draft a paragraph of a ruling in an area of law he knew well.
He told The Law Society that he had asked ChatGPT for a summary of that area of law and it gave him a paragraph. He had been about to write such a paragraph himself, he said, but the chatbot did it for him and he put it in his judgment, adding that it had proven quite useful.
Source: wral.com