New Guidelines Aim to Balance Legal Ethics and Innovation for Firms Using Generative AI

BY DAVID L. BROWN

A few months back, I wrote about the possible ethical traps law firms face as they grapple with implementing generative artificial intelligence tools for use by attorneys and staff. Since then, the American Bar Association has weighed in, offering lawyers and law firms a more structured approach to navigating generative AI’s ethical issues.

In Formal Opinion 512, issued over the summer, the ABA acknowledged that generative AI tools are “a rapidly moving target—in the sense that their precise features and utility to law practice are quickly changing and will continue to change in ways that may be difficult or impossible to anticipate.”

The opinion aims to identify legal ethics issues related to AI and offer guidance for lawyers attempting to navigate the new technology. It also comes after several high-profile incidents in which courts and bar associations sanctioned lawyers for failing to verify information generated by AI. In a number of cases, AI tools created fake citations and quotes that ended up in documents filed with the courts. At the same time, pressure has been growing on law firms to cut costs and improve efficiency by embracing generative AI.

With this in mind, let’s examine the key aspects of the ABA’s opinion and what it means for firms striving to strike a balance between innovation and ethics.

Fundamental Duty Number One: Competence

The ABA’s Formal Opinion 512 centers on three fundamental duties for lawyers using AI: competence, confidentiality, and supervision.

In this context, “competence” goes beyond legal knowledge; it extends to technological literacy. According to the ABA, lawyers are required to “exercise reasonable diligence” when using AI tools, meaning they must understand how the technology works and the risks associated with it.

For example, a lawyer who uses generative AI to draft a motion must review the output thoroughly to ensure accuracy and legal soundness. The ABA makes it clear that lawyers cannot outsource their professional judgment to machines. This is particularly important given that AI tools can occasionally produce hallucinations: fabricated or inaccurate output that, if it goes unnoticed, could have disastrous consequences in a legal context.

The ABA also emphasizes that maintaining competence requires continuous education. Lawyers should stay informed about advancements in AI: the technology is evolving rapidly, and the opinion suggests that staying up to date with these changes is a matter of professional competence.

Fundamental Duty Number Two: Confidentiality

Many generative AI tools rely on cloud-based services, third-party vendors, and large-scale data processing, which can expose confidential client information to unauthorized access or data breaches. The ABA warns that lawyers must take reasonable steps to ensure that their use of AI does not compromise the confidentiality of client communications.

The risk is particularly acute when sensitive legal documents are processed through AI platforms. In one high-profile incident, a boutique firm in New York inadvertently exposed confidential client information when a third-party AI tool used in its practice failed to meet basic security standards. The breach resulted in a costly lawsuit for the firm.

To avoid such risks, the ABA suggests that firms implement robust data security protocols when using AI tools. This might include negotiating contracts with AI vendors that ensure compliance with privacy regulations and cybersecurity standards.

Fundamental Duty Number Three: Supervision

Law firms, the ABA said, must maintain strict oversight of AI-related work, especially when tasks are delegated to non-lawyers or outsourced to third-party vendors. Smaller and midsize firms may face particular challenges in this area, as they often operate with leaner teams and limited resources, making it harder to implement thorough review processes. Nonetheless, the ABA’s opinion holds all firms to the same standard, requiring “reasonable supervision” regardless of firm size.

This has significant implications for midsize and smaller firms. According to a recent Law360 report, many midsize firms are finding themselves at a crossroads: AI tools promise increased efficiency, but the need for stringent oversight and supervision could outweigh the perceived benefits, particularly if firms are not prepared to manage these tools effectively.

Citing earlier opinions on cloud technology and outsourcing of legal and nonlegal services that also emphasized lawyers’ supervisory responsibilities, the ABA said lawyers should:

  • “ensure that the [generative AI tool] is configured to preserve the confidentiality and security of information, that the obligation is enforceable, and that the lawyer will be notified in the event of a breach or service of process regarding production of client information.”
  • “investigate the [generative AI tool’s] reliability, security measures, and policies, including limitations on [the tool’s] liability.”
  • “determine whether the [generative AI tool] retains information submitted by the lawyer before and after the discontinuation of services or asserts proprietary rights to the information.”
  • “understand the risk that [generative AI tool servers] are subject to their own failures and may be an attractive target of cyber-attacks.”

What Firms Can Do

For law firm leaders looking to integrate AI tools while maintaining ethical standards, here are some key steps:

1. Develop AI Usage Policies: Establish clear guidelines on the acceptable uses of AI, and ensure that all team members are familiar with these policies.

2. Enhance Supervision: Implement a process where senior attorneys review all AI-generated outputs before they are presented to clients or used in court. This is especially crucial for smaller firms with fewer resources.

3. Prioritize Continuous Education: Make continuing education on AI and legal ethics mandatory for all attorneys in your firm. As technology evolves, so must your team’s knowledge and skills.

4. Vet AI Vendors Carefully: Conduct due diligence when selecting AI providers. Ensure that their systems meet the necessary data privacy and security standards to avoid compromising client information.

The ABA’s opinion emphasizes that law firms should take proactive measures to ensure their use of generative AI complies with ethics rules and that clients and their practices are protected. In doing so, they can embrace the potential benefits of artificial intelligence—and mitigate its risks.

Do you have questions, feedback, or topics you would like The Edge to cover? Send a note to david@good2bsocial.com.