California Bar Issues Guidance for Lawyers Using AI

Generative AI tools, such as OpenAI’s ChatGPT and Google’s Gemini (previously called “Bard”), are capable of performing tasks once thought to be solely within the province of human ability. They can analyze and categorize data and even create human-sounding text. As such, generative AI has clear applications in the practice of law and legal marketing. Some of the more obvious uses include document review, contract analysis, and even handling basic client communications.

As lawyers and other professionals have started looking for ways to leverage generative AI to do their jobs more efficiently, many observers have sounded alarm bells about the ethical and professional issues its use raises.

In fact, some of the early adopters of generative AI in the legal profession have been subject to sanctions, as the technology is known to “hallucinate” facts. In the case of two New York lawyers, ChatGPT made up case law out of thin air and then doubled down on its existence when asked to verify the cases it cited. Ultimately, the attorneys were sanctioned $5,000 and ordered to notify the judges who had been falsely identified as the authors of the fabricated opinions. Perhaps worse, their names were splashed all over the national media – from Forbes to CNN – for using ChatGPT and not fact-checking its output.

A little over a year after generative AI entered the mainstream, state bars are starting to develop guidance and rules regarding how lawyers use it. Given the concerns and uncertainties surrounding AI in the legal profession, this guidance is particularly valuable in helping attorneys leverage the efficiency of AI while upholding their ethical duties. Recently, California issued guidance that lawyers across the United States can benefit from. I discuss some of the highlights below.

The California Bar Guidance

As part of its guidance, the California Bar takes the position that AI is like any other technology that attorneys may leverage in their day-to-day professional activities. From the guidance:

Like any technology, generative AI must be used in a manner that conforms to a lawyer’s professional responsibility obligations, including those set forth in the Rules of Professional Conduct and the State Bar Act.

The guidance demonstrates ways that lawyers can use AI in a manner consistent with their professional responsibility obligations. Some of the obligations it addresses are discussed below.

Duty of Confidentiality

The California Bar cautions that the use of AI can have implications for the disclosure of confidential information. The guidance points out that many generative AI models use inputs to train the AI further, and the information that users upload may be shared with third parties. In addition, the models may lack adequate security for attorneys to input confidential information.

For this reason, the Bar advises that lawyers should not input any confidential information without first confirming that the model they are using has sufficient confidentiality and security protections. Furthermore, the Bar advises lawyers to consult with IT professionals to confirm that an AI model adheres to security protocols and to carefully review the Terms of Use or other applicable provisions.

Duties of Competence and Diligence

The use of generative AI can also raise issues related to the duties of competence and diligence. Because these models can produce false or misleading information, the California Bar advises that lawyers must:

  • Ensure competent use of the technology and apply diligence and prudence with respect to facts and law
  • Understand to a reasonable degree how the technology works and its limitations
  • Carefully scrutinize outputs for accuracy and bias

In addition, the Bar cautions that overreliance on AI is inconsistent with the active practice of law and the application of an attorney’s trained judgment. Furthermore, the guidance advises that an attorney’s professional judgment cannot be delegated to AI.

Duty to Supervise Lawyers and Non-lawyers, Responsibilities of Subordinate Lawyers

The Bar advises that supervisory and managerial attorneys should establish clear policies regarding the use of generative AI. In addition, they should make reasonable efforts to ensure that the firm adopts measures that provide reasonable assurance that its lawyers’ and non-lawyers’ conduct complies with professional obligations when using generative AI. This includes training on how to use AI and the ethical implications of using AI.

Using AI Can Also Have Implications for Law Firm Marketing

At Lexicon Legal Content, our sole focus is on generating keyword-rich content that helps law firms connect with their clients. While the California Bar’s guidance does not mention it directly, using generative AI to create marketing materials like social media or blog posts may also have implications related to the rules of professional conduct.

Under California Rule 7.1, a lawyer may not make a false or misleading statement about the lawyer or the lawyer’s services, and a statement is false or misleading if it contains a material misrepresentation of fact or law. Importantly, this is analogous to ABA Model Rule 7.1, which many states have adopted. In addition, under Model Rule 7.2, a lawyer should not call themselves a specialist or expert in any area of law unless they have been certified as a specialist by an organization approved by an appropriate authority of the state, the District of Columbia, or a U.S. Territory, or accredited by the American Bar Association.

These professional duties related to advertising make it critical to review any AI output a law firm intends to use in its marketing efforts. At Lexicon Legal Content, we are staffed by experienced legal professionals, including law school graduates and licensed attorneys, who understand these rules and ensure that all of the content we create – whether AI-assisted or not – is in compliance with advertising regulations in our clients’ states.

Should AI-Generated Content Be Watermarked?

Since November of 2022, the world has been captivated by ChatGPT, the artificial intelligence chatbot created by OpenAI. ChatGPT’s meteoric rise in popularity – reaching 100 million users in just two months – has brought attention to generative AI in general. Generative AI can create text, images, and other forms of content in seconds – content that some consider indistinguishable from what a human would create.

Unsurprisingly, generative AI has been hailed as both a productivity enhancer and a job destroyer, sometimes simultaneously. It has raised serious issues in academia, with some people suggesting that the college essay has become obsolete. In addition, AI’s generative capabilities may fundamentally change the way white-collar professionals work and may even threaten their jobs.

Should Readers Know Whether Content is AI-Generated?

One issue that surfaces regularly in conversations around AI is whether people should be told if content was created by a human or by a machine. Knowing the provenance of content seems like a fair request, especially if an individual is relying on the information for a serious matter such as their health, financial well-being, or safety.

One solution that has been floated is watermarking AI content, allowing people and search engines to recognize it as such. In fact, at Google I/O, the company said that it would voluntarily watermark images created by its generative AI so that people could spot fakes. Microsoft made a similar announcement a few weeks later.

Inaccurate and Fake Content Can Have Real World Effects

It is becoming increasingly clear that misleading content generated by AI can have real-world effects – and cause real-world harm. For example, on May 22 of this year, a false report of an explosion at the Pentagon, accompanied by an image likely generated by AI, caused a significant dip in the stock market.

Similarly, many experts consider content containing misinformation to pose a risk to elections. Speaking at a World Economic Forum event earlier this year, Microsoft’s chief economist, Michael Schwarz, cautioned that “Before AI could take all your jobs, it could certainly do a lot of damage in the hands of spammers, people who want to manipulate elections.”

Bad actors could generate misinformation at a scale never seen before in the form of social media posts, fake news stories, fake images, and even deep fake videos of candidates that are indistinguishable from reality.

Perhaps most troublingly, some observers think that the rise of generative AI risks a future of human incompetence. What does the world look like if all we have to do to demonstrate competence is to ask an AI to do it for us? As US DOJ National Security & Cybercrime Coordinator Matt Cronin recently put it in The Hill:

For even the most brilliant minds, mastering a domain and deeply understanding a topic takes significant time and effort. While ultimately rewarding, this stressful process risks failure and often takes thousands of hours. For the first time in history, an entire generation can skip this process and still progress (at least for a time) in school and work. They can press the magic box and suddenly have work product that rivals the best in their cohort. That is a tempting arrangement, particularly since their peers will likely use AI even if they do not.

Like most Faustian bargains, however, reliance on generative AI comes with a hidden price. Every time you press the box, you are not truly learning — at least not in a way that meaningfully benefits you. You are developing the AI’s neural network, not your own.

Cronin argues that incompetence will increase over time as we use AI, comparing using it to having someone else work out for you and expecting to get fit as a result.

Consider a hypothetical generation of surgeons who have been raised on AI and suddenly do not have internet access – do you want them operating on you? Do you want a lawyer who got through law school learning how to correctly “prompt” AI representing you in court? Of course, for most of us, the answer is “no.”

The fact is that generative AI allows people to seemingly demonstrate knowledge or expertise they do not have. While this clearly presents an issue in academia, where students are expected to demonstrate knowledge in writing assignments, it also raises the question of whether consumers can trust that knowledge-based professionals like lawyers, physicians, and mental health providers actually possess the skills their website content claims they have.

What Does Watermarking AI-Generated Content Look Like?

You are probably already familiar with the idea of watermarking as it relates to visual content. For an example, go to iStock and see how they display the pictures they have for sale. In order to prevent you from simply right-clicking and saving the image to your desktop, each image has “iStock by Getty Images” superimposed on top of it.
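
To make the idea concrete, here is a minimal sketch of that kind of visible overlay using Python and the Pillow imaging library. The label text, spacing values, and file names are placeholder assumptions for illustration, not iStock’s actual process:

```python
from PIL import Image, ImageDraw

def overlay_watermark(src: str, dst: str, label: str = "SAMPLE WATERMARK") -> None:
    """Tile a semi-transparent label across an image so cropping can't remove it."""
    img = Image.open(src).convert("RGBA")
    layer = Image.new("RGBA", img.size, (0, 0, 0, 0))  # fully transparent overlay
    draw = ImageDraw.Draw(layer)
    for y in range(0, img.size[1], 120):          # spacing values are arbitrary
        for x in range(0, img.size[0], 200):
            draw.text((x, y), label, fill=(255, 255, 255, 96))  # ~40% opacity
    Image.alpha_composite(img, layer).convert("RGB").save(dst)

# Example usage: overlay_watermark("photo.jpg", "photo_watermarked.jpg")
```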

Google is taking watermarking AI-generated images a step further and embedding data that will mark them as AI-generated. In a May 10th blog post on The Keyword, Google explained that:

“…as we begin to roll out generative image capabilities, we will ensure that every one of our AI-generated images has a markup in the original file to give you context if you come across it outside of our platforms. Creators and publishers will be able to add similar markups, so you’ll be able to see a label in images in Google Search, marking them as AI-generated. You can expect to see these from several publishers including Midjourney, Shutterstock, and others in the coming months.”
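
Embedded markup of this kind is ordinary file metadata rather than anything visible. Here is a minimal sketch of the concept, assuming a PNG workflow and a made-up “AIGenerated” text chunk – real systems use standardized, often cryptographically signed, metadata rather than a bare key-value pair:

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_as_ai_generated(src: str, dst: str, generator: str = "example-model") -> None:
    """Write hypothetical provenance fields into the PNG's metadata."""
    img = Image.open(src)
    meta = PngInfo()
    meta.add_text("AIGenerated", "true")   # illustrative key, not a standard
    meta.add_text("Generator", generator)  # hypothetical field for illustration
    img.save(dst, pnginfo=meta)

def read_labels(path: str) -> dict:
    """Return the PNG text chunks, if any (empty when the image is unlabeled)."""
    return dict(Image.open(path).text)
```

Note that a label like this survives only as long as the file’s metadata does – re-encoding or screenshotting strips it – which is part of why watermarking remains a hard problem.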

Watermarking Content Presents Special Challenges

Of course, watermarking AI-generated text is a different problem from watermarking images. One idea that has been discussed by AI developers like OpenAI and other stakeholders is cryptographic watermarking. This type of watermarking involves embedding a pattern or code into the text in a way that allows software to detect whether content was generated by AI.

Hany Farid, a Professor of Computer Science at the University of California, Berkeley, recently explained how watermarking text may work in a piece for GCN:

Generated text can be watermarked by secretly tagging a subset of words and then biasing the selection of a word to be a synonymous tagged word. For example, the tagged word “comprehend” can be used instead of “understand.” By periodically biasing word selection in this way, a body of text is watermarked based on a particular distribution of tagged words. This approach won’t work for short tweets but is generally effective with text of 800 or more words depending on the specific watermark details.
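
As a rough illustration of the synonym-biasing scheme Professor Farid describes, here is a minimal Python sketch. The synonym map, secret key, and biasing rule are invented for illustration – real proposals operate inside the model’s own word-selection step and, as Farid notes, need long passages to be statistically reliable:

```python
import hashlib

# Illustrative only: a tiny synonym map and a shared secret key. A real scheme
# would use a much larger word list derived from the key.
TAGGED_SYNONYMS = {
    "understand": "comprehend",
    "use": "utilize",
    "show": "demonstrate",
    "help": "facilitate",
}
SECRET_KEY = b"demo-watermark-key"  # assumption: known to embedder and detector

def _biased_position(index: int) -> bool:
    """Key-derived, deterministic choice of which word positions carry the tag."""
    digest = hashlib.sha256(SECRET_KEY + str(index).encode()).digest()
    return digest[0] % 2 == 0  # bias roughly half of the eligible positions

def embed_watermark(text: str) -> str:
    """Swap in the tagged synonym at key-selected positions (case handling omitted)."""
    words = text.split()
    for i, word in enumerate(words):
        if word.lower() in TAGGED_SYNONYMS and _biased_position(i):
            words[i] = TAGGED_SYNONYMS[word.lower()]
    return " ".join(words)

def tagged_fraction(text: str) -> float:
    """Detection signal: the share of tagged synonyms among eligible words."""
    tagged = set(TAGGED_SYNONYMS.values())
    eligible = set(TAGGED_SYNONYMS) | tagged
    words = [w.strip(".,;:!?").lower() for w in text.split()]
    hits = [w for w in words if w in eligible]
    return sum(w in tagged for w in hits) / len(hits) if hits else 0.0
```

Detection here is statistical: watermarked text shows a noticeably higher share of tagged synonyms than ordinary writing would produce by chance, which is exactly why the approach fails on short tweets and improves with longer passages.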

This idea has gained traction in many circles. Professor Farid believes that all AI-generated content should be watermarked, as does Matt Cronin (mentioned earlier in this article). Additionally, Fedscoop’s Nihal Krishan reports that the Deputy National Security Adviser for Cyber and Emerging Technology met privately with tech executives at the RSA Conference – including those from OpenAI and Microsoft – and urged them to consider watermarking any content their AI models generate.

Conclusion

While the future of AI-content watermarking remains unclear, what is clear is that generative AI can pose risks to individuals as well as society as a whole. Misinformation has been a problem before, but the difference now is the scale and speed with which it can be produced.

One way to handle the issue would be for AI companies to watermark all of the content they create so that everyone has a clear idea of its provenance. This would allow for the use of AI in academia without the fear of an incompetent workforce, the use of AI in journalism without eroding the public trust, and the use of AI in marketing with transparency. 

In light of the risks posed by the proliferation of AI-generated content and the potential erosion of human competence, watermarking provides a practical measure to ensure transparency and accountability. By implementing watermarking practices, content creators and publishers can contribute to a more informed and discerning society, enabling individuals to make better decisions based on the origin and authenticity of the content they encounter.

U.S. Copyright Office Issues Guidance on AI-Generated Works

Generative AI is happening, and it’s creating images, text, music, and other forms of content right now, as you read this. Some see this technology as a way to maximize human efficiency and productivity, while others view it as an existential threat to humanity. Regardless of where you come down on the issue, the reality is that generative AI is creating new content every day, and there are significant legal and ethical implications.

One of the most vexing questions is: who owns the material generated by AI? In other words, if you use AI to create content, is it copyrightable? If you ask ChatGPT, it tells you that:

[Screenshot of ChatGPT’s response]

Not a very clear answer – maybe, and maybe not. Fortunately for content creators, the U.S. Copyright Office issued guidance on the subject on March 16, 2023. The TLDR version is this: works generated solely by AI are not copyrightable, but works generated by humans using AI might be.

The Authorship Requirement in Copyright

In its guidance, the Office reiterated its position that the term “author” excludes non-humans. For this reason, copyright can only protect works that are the result of human creativity. In Article I, Section 8, the Constitution grants Congress the power to secure for “authors” the exclusive right to their “writings.”

The seminal case interpreting these terms is Burrow-Giles Lithographic Co. v. Sarony. In that case, Napoleon Sarony had taken photographs of Oscar Wilde, which the Burrow-Giles Lithographic Company copied and marketed. The company’s position was that photographs could not be copyrightable works because they were neither “writings” nor produced by an “author” – an argument that, if accepted, would have invalidated the Copyright Act Amendment of 1865, which explicitly granted copyright protection to photographs.

The Court rejected this argument, explaining that visual works such as maps and charts had been granted copyright protection since the first Copyright Act of 1790. In addition, even if “ordinary pictures” may simply be the result of a technical process, the fact that Sarony had made decisions regarding lighting, costume, setting, and other matters indicated that Sarony was, in fact, the author of an original work of art within the class of works for which Congress intended to provide copyright protection.

In the opinion, the Court defined an author as “he to whom anything owes its origin; originator; maker; one who completes a work of science or literature.” Furthermore, the decision repeatedly refers to authors as “persons” and “human,” and describes a copyright as “the exclusive right of a man to the production of his own genius or intellect.”

Relying on this decision as well as subsequent cases, the Copyright Office has required that a work have human authorship in order to be copyrightable.

How the Copyright Office Applies the Authorship Requirement

When the Copyright Office receives a hybrid work that contains both human-created and AI-generated material, it considers, on a case-by-case basis, whether the AI’s contributions are the result of “mechanical reproduction” or instead of the author’s “own original mental conception.”

The bottom line is that if a work is the result of simply feeding AI a prompt, it will not be copyrightable. From the guidance (footnotes omitted):

For example, if a user instructs a text-generating technology to “write a poem about copyright law in the style of William Shakespeare,” she can expect the system to generate text that is recognizable as a poem, mentions copyright, and resembles Shakespeare’s style. But the technology will decide the rhyming pattern, the words in each line, and the structure of the text. When an AI technology determines the expressive elements of its output, the generated material is not the product of human authorship. As a result, that material is not protected by copyright and must be disclaimed in a registration application.

Importantly, the guidance acknowledges that some works that include AI-generated material will have enough human authorship to qualify for copyright protection. According to the Office, if the material is sufficiently altered in a creative way, the end result could be a work of authorship subject to copyright protection.

Authors Can Use AI Tools to Create Copyrightable Works

The guidance makes clear that authors can create copyrightable works using AI and other technical tools. That said, people applying for copyright registration should disclose their use of AI and explain how the human author contributed to the work.

The emergence of generative AI has brought with it complex legal and ethical questions regarding ownership and copyright of AI-generated works. The recent guidance issued by the U.S. Copyright Office provides some clarity on the matter, but content creators should make an effort to stay informed about this rapidly evolving area of law.

Note: ChatGPT was used in the editing process – and helped with the conclusion.

 

FAQs: Can Lawyers Use AI-Generated Content for Marketing?

Right now, it’s nearly impossible to have a discussion about digital marketing without mentioning ChatGPT or AI generally. The technology is undoubtedly amazing; it’s capable of answering questions, creating a business plan, and even writing essays. One of the most obvious potential use cases for the newest generation of AI tools is content creation – but is it a good idea to use it? Let’s dig in and see what the issues are.

Can I Post AI-Generated Content on My Website?

Yes, you can. That said, the last thing you should do is post AI-generated content on your legal site without significant oversight and review. On February 8, 2023, Google clarified its position on AI-generated content. In short, it said that using AI-generated content is not against its guidelines. Like other forms of content, it will rank well if it is helpful for people searching for information. However, if you use AI to create content in an attempt to “game” SEO, your site will likely be penalized.

Should I Use AI Content?

As the old adage goes, just because you can do something doesn’t mean you should. If your site deals with topics that can affect your money or your life (YMYL, in Google’s parlance), Google will scrutinize its content more closely. Specifically, it will look for signals that demonstrate experience, expertise, authoritativeness, and trustworthiness (E-E-A-T).

YMYL sites include sites that relate to topics like medicine, finance, and law. As a result, it’s critical for lawyers to ensure that the content on their site is accurate, helpful, and in compliance with the rules of professional conduct. If you are using AI to generate content, it’s imperative that you (or someone with the necessary expertise) review every word of it before you post it on your website. At that point, it becomes a legitimate question as to whether using AI to create long-form legal content is truly more efficient than human writing.

If you need 100-word product descriptions for kitchen appliances, you’re likely fine to use AI to generate them and post them with a cursory review. If you are creating long-form content on complicated legal topics, you probably want to have more human involvement and oversight in content creation.

How Can AI Help in the Content Creation Process?

That said, there are certainly ways in which AI tools can help content creators make the process more efficient. Some of the ways that you can use AI to help in content creation ethically and without creating more work include:

  • Blog topic ideation
  • Client persona identification
  • Keyword research
  • Content outlining
  • Basic legal research
  • Getting over writer’s block

Is AI-Content Well-Written?

Whether you think AI-generated content is well-written depends on what you believe makes content “good.” To many people, it’s just too generic and “clean” to qualify as good content. The reality is that law firms and other professional service providers have a brand identity that they want their content to reflect, and content generated by artificial intelligence lacks the personality that achieves that goal.

Is AI-Content Bar-Compliant?

There is no guarantee that the content created by AI will be compliant with the rules of your state bar. It may make statements that inadvertently guarantee a favorable outcome, it may suggest that you are a “specialist” or an “expert,” and it may even provide incorrect information. Furthermore, it’s possible that some state bars may hold the position that using AI-generated content without oversight is, per se, a violation of the rules of professional conduct. 

In Conclusion…

If you are a law firm or a digital marketing agency that works with law firms, AI can certainly help you in your efforts. That said, you should be certain that there is a significant amount of expert oversight in the process. Using AI to mass-produce content and posting without review can land you in hot water with Google and even your state bar.

Is AI the Answer to Law Firms’ Legal Content Creation Needs?

Unless you’ve been living under a very large rock, you’ve heard about ChatGPT, OpenAI’s new chatbot that can perform a variety of tasks – including creating content that is very close to what a human could create.

Its abilities have set the marketing world abuzz, with many observers predicting that it will fundamentally change the way we do business across all industries. Recently, ChatGPT has passed the multiple-choice portion of the bar exam (the MBE), a Wharton Business School test, and the U.S. Medical Licensing Exam. In fact, an AI-powered “lawyer” is set to appear in court next month, telling the defendant what to say through headphones. It’s undoubtedly a very exciting technology, and many people are looking into how to leverage it to cut costs and improve efficiencies in their daily processes.
