The horror stories around artificial intelligence and lawyers are starting to add up. Last year, in a high-profile case in New York, two attorneys were fined and sanctioned after submitting a brief written by ChatGPT that included fake case citations and manufactured quotes.
Earlier this year, a second-year associate in Colorado was suspended by the bar and fired by his firm after he submitted a ChatGPT-written motion that included phantom cases. Another lawyer in New York, and attorneys in Massachusetts and Florida, have also faced sanctions in 2024—and those are just incidents that have hit the media.
The courts and bar associations in these cases issued or threatened sanctions because they said the lawyers violated their ethical obligations by failing to verify whether the citations and quotes they submitted were real.
It’s a reminder that, for all their speed and convenience, generative AI tools are not infallible. In fact, many of the commonly available AI chatbots still leave much to be desired as research assistants. As law firms integrate artificial intelligence into their workflows, they should be on guard for possible ethical traps and institute internal policies for how and when AI is to be used by attorneys and staff.
YOUR AI ‘BUDDY’
While a number of companies are now building specialized artificial intelligence programs for lawyers and law firms, ChatGPT’s status as the best-known generative AI tool can make it tempting for lawyers and firms hoping to save time and money. If I can convey one piece of advice, it’s this: Your firm should not be using ChatGPT or any of the other consumer-facing chatbots like Google’s Gemini or Microsoft’s Copilot for legal research. They are not up to the task, at least not at present.
One reason is the tendency of artificial intelligence to “hallucinate.” IBM defines an AI hallucination as “a phenomenon wherein a large language model (LLM)—often a generative AI chatbot or computer vision tool—perceives patterns or objects that are nonexistent or imperceptible to human observers, creating outputs that are nonsensical or altogether inaccurate.”
In many ways, chatbots like ChatGPT are designed to be your friends. They may try to help you out by plugging holes in your research, and if they can’t find something to back up your assertions, they may invent material to help you fill the gap.
I’ve experienced ChatGPT’s friendly fire. Earlier this year, I used the $20-a-month “Plus” version to search for and summarize a publicly available example of an issue I was researching. Aware of the potential for inaccuracies, however, I went a step further and asked it to identify and link to any source or sources it had used.
What ChatGPT generated was a perfectly respectable-looking, clearly worded paragraph or two. However, the response did not provide the sourcing I had requested, and my skepticism rose. Then began an increasingly frustrating attempt to get ChatGPT to explain how it had come up with the information. After several prompts, it acknowledged that it had invented an example based on its basic “knowledge” of the issue. What a pal.
RULES ARE MULTIPLYING
Firms and lawyers should be on guard for another potential ethical issue: running afoul of new rules on the use of AI. Bar associations and judges across the country have been responding to lawyers’ AI snafus with a patchwork of new regulations, ranging from bans on generative AI in certain court filings to milder statements urging caution.
This example from U.S. District Judge Gene E.K. Pratter of the Eastern District of Pennsylvania is fairly typical. It requires lawyers to disclose the use of AI and to certify that citations are accurate:
Judge Pratter requires that counsel (or a party representing himself or herself) disclose any use of generative Artificial Intelligence (“AI”) in the preparation of any complaint, answer, motion, brief, or other paper filed with the Court, including in correspondence with the Court. If counsel (or a party representing himself or herself) has used generative AI, he or she must, in a clear and plain factual statement, disclose that generative AI has been used in any way in the preparation of the filing or correspondence and certify that each and every citation to the law or the record in the filing has been verified as authentic and accurate.
MANAGING RISK
When poorly deployed, AI can pose ethical risks. But it’s important to acknowledge that the technology is here to stay, that it will have a major impact on the practice of law and the delivery of legal services, and that firms and lawyers should be prepared. A few basic precautions can help firms avoid ethical sand traps. Here are some suggestions:
1. Training and Education. Rule 1.1 of the ABA’s Model Rules of Professional Conduct states: “A lawyer shall provide competent representation to a client. Competent representation requires the legal knowledge, skill, thoroughness, and preparation reasonably necessary for the representation.” Competence may be interpreted to include a basic understanding of technology developments like AI, particularly when mistakes made by the technology can affect clients. Firms would do well to bring in outside experts to help lawyers understand AI concepts or to fund participation in one of the many continuing legal education programs available on the subject.
2. Know the Rules. As noted, local court rules are shifting rapidly, so firms should make no assumptions. AI may not be allowed, or a judge may demand that it be disclosed. Check before you file. A few organizations are tracking the rules. Ropes & Gray, for example, has created a useful interactive map showing standing orders and local rules on the use of AI across state and federal courts. (At present, the firm has tracked orders in 15 states.)
3. Maintain Independent Judgment. While a chatbot’s answer may be correct and sound authoritative, lawyers have a professional responsibility to remain in the driver’s seat. They should verify the information and ensure the material reflects their thinking. As Model Rule 2.1 says, “In representing a client, a lawyer shall exercise independent professional judgment and render candid advice.”
4. Be Aware of Bias. Artificial intelligence is trained on existing data. If that data reflects a certain point of view, or if certain data sets are chosen or weighted over others, the AI tool will likely reflect those biases. In an article on AI bias published last year by the New York State Bar Association, lawyer Luca CM Melchionna wrote, “To combat [bias], lawyers need to recognize and guard against computer-generated bigotry to protect their clients and their professional reputations.”
5. Protect Confidentiality. As with any web-based software, firms should take steps to ensure data is secure. Do not upload sensitive client information to ChatGPT or similar chatbots, which use the information users input to help train their large language models. Deploy data encryption and be sure the firm has a clear understanding of the security offered by AI software providers.
Chief Justice John Roberts Jr., in his annual report on the federal judiciary, acknowledged that AI “obviously has great potential to dramatically increase access to key information for lawyers and non-lawyers alike.” Given the hallucinations and recent spate of false case cites, however, Roberts warned that “any use of AI requires caution and humility.”
Roberts seemed to be channeling the old police drama Hill Street Blues, and the sergeant’s admonition to officers at the end of morning roll call: “Let’s be careful out there.” It was good advice for TV cops in the 1980s, and—when it comes to AI—it’s good advice for lawyers and their firms now.
Do you have questions, feedback, or topics you would like The Edge to cover? Send a note to david@good2bsocial.com.