The High Court of England and Wales says lawyers need to take stronger steps to prevent the misuse of artificial intelligence in their work.
In a ruling tying together two recent cases, Judge Victoria Sharp wrote that generative AI tools like ChatGPT “are not capable of conducting reliable legal research.”
“Such tools can produce apparently coherent and plausible responses to prompts, but those coherent and plausible responses may turn out to be entirely incorrect,” Judge Sharp wrote. “The responses may make confident assertions that are simply untrue.”
That doesn’t mean lawyers cannot use AI in their research, but she said they have a professional duty to “check the accuracy of such research by reference to authoritative sources before using it in the course of their professional work.”
Judge Sharp pointed to a growing number of cases in which lawyers (including, on the U.S. side, lawyers representing major AI platforms) have cited what appear to be AI-generated falsehoods.
In one of the cases, a lawyer representing a man seeking damages against two banks submitted a filing containing 45 citations. Eighteen of those cases did not exist, and many of the others “did not contain the quotations that were attributed to them” or “did not support the propositions for which they were cited.”
In the other case, a lawyer representing a man who had been evicted from his London home filed a court submission citing five cases that do not appear to exist. (The lawyer denied using AI, though she said the citations may have come from AI-generated summaries that appeared in “Google or Safari.”) Judge Sharp said that while the court decided not to initiate contempt proceedings, that decision is “not a precedent.”
“Lawyers who do not comply with their professional obligations in this respect risk severe sanction,” she added.
Both lawyers were either referred, or referred themselves, to professional regulators. Judge Sharp noted that when lawyers fail to meet their duties to the court, the court’s powers range from “public admonition” to the imposition of costs, contempt proceedings, or even “referral to the police.”