What is Google’s Position on AI Content?

1. Google Doesn’t Care How Content is Produced

Google has made it clear that using AI in the content creation process is not against its policies. In its guidance about AI-generated content, it says that “appropriate use of AI or automation is not against our guidelines.”

Let’s consider these other statements in the guidance:

  • Our focus on the quality of content, rather than how content is produced, is a useful guide that has helped us deliver reliable, high-quality results to users for years.
  • Using automation—including AI—to generate content with the primary purpose of manipulating ranking in search results is a violation of our spam policies.
  • AI has the ability to power new levels of expression and creativity, and to serve as a critical tool to help people create great content for the web.
  • However content is produced, those seeking success in Google Search should be looking to produce original, high-quality, people-first content demonstrating qualities E-E-A-T.
  • Appropriate use of AI or automation is not against our guidelines. This means that it is not used to generate content primarily to manipulate search rankings, which is against our spam policies.

It’s pretty clear that Google does not have a per se prohibition against AI-generated content. 

2. Google Doesn’t Want Low-Quality Scaled Content

Google has also been very clear here: creating content at scale to manipulate the search results is against its policies. It refined this position in the announcement of its March 5th Core Update, which it expects will reduce spam by 40 percent. Specifically, when discussing “Scaled Content Abuse,” the announcement explains that:

We’ve long had a policy against using automation to generate low-quality or unoriginal content at scale with the goal of manipulating search rankings. This policy was originally designed to address instances of content being generated at scale where it was clear that automation was involved.

Today, scaled content creation methods are more sophisticated, and whether content is created purely through automation isn’t always as clear. To better address these techniques, we’re strengthening our policy to focus on this abusive behavior — producing content at scale to boost search ranking — whether automation, humans or a combination are involved. This will allow us to take action on more types of content with little to no value created at scale, like pages that pretend to have answers to popular searches but fail to deliver helpful content.

Note that Google will treat ALL low-quality, high-volume content as spam, regardless of whether it comes from a human or AI. The issue is not human vs. AI production; the point is simply that if you use AI to generate low-quality content at scale, your site may be penalized.

3. Google Just Wants Your Content to Demonstrate E-E-A-T

Finally, Google has been crystal clear about the type of content it will reward with good rankings: “original, high-quality, people-first content demonstrating qualities E-E-A-T” (experience, expertise, authority, and trust) – regardless of how it is produced.

In Conclusion…

There is currently a debate going on in the content marketing world. On one side, there are AI enthusiasts who believe it is the future; on the other, there are people who believe that using AI will lead to mass unemployment and other societal ills and, as a result, Google and other interested parties will regulate it out of existence.

In my opinion, AI is just another tool that can augment human creativity. The other day, I suggested to one of our writers that she could safely use AI to rewrite calls-to-action for a client that had ordered several blogs; her response was, “I already feel like an AI when I do it manually.” Similarly, when we needed to list the symptoms of TBI (traumatic brain injury) in a blog about brain injuries, we’d visit the Mayo Clinic website and rewrite their list of symptoms as original content.

There is no question that AI can accomplish tasks like these more quickly than humans. That said, there is also no question that AI-generated content is often generic and lacks the empathy and emotional depth that connect with readers. In addition, without a human touch, AI-generated content inherently lacks experience, expertise, authority, and trust, so there is little chance it will rank well on its own.

So – if you’re a content creator, using AI as an extremely competent assistant can make you more efficient. Ask it to provide topic ideas, meta descriptions, and social media summaries, or even ask it to provide an introduction to get over writer’s block. But make sure you make your content your own, add value for your readers, and fact-check everything AI spits out.

Google Announces Changes to Search: What Legal Content Marketers Need to Know

On March 5th, Google announced changes to its policies and systems in an effort to fight attempts to game its results with low-quality content. The announcement is almost certainly a response to the rise of spammy AI-generated content and reports that the quality of search results was degrading.

The announcement detailed two specific changes:

  • Google is updating its core ranking system algorithm to surface the most helpful content and reduce unoriginal content in the results
  • It is updating its spam policies to keep the lowest-quality results out of Search, such as obituary spam and expired websites that have been repurposed as spam repositories

Google Plans to Reduce Low-Quality and Unoriginal Results

In the announcement, Google states that it is updating its core ranking systems to better identify websites that are unhelpful, have poor UX, or seem like they were created for search engines instead of human readers – including sites that were created to match specific search queries.

Scaled Content Abuse

In addition, Google is changing its spam policies to address new abusive practices that lead to “low-quality or unoriginal content at scale with the goal of manipulating search rankings.” Google acknowledges that this policy was originally designed to address content produced by automation at scale, but also concedes that it cannot always tell whether automation was involved.

Moving forward, Google plans to focus on “abusive behavior” regardless of whether content was produced by humans, automation, or some hybrid method. According to Google, this will allow it to take action on pages that “pretend to have answers to popular searches but fail to deliver helpful content.”

Site Reputation Abuse

Google also plans to crack down on reputable sites that host low-quality third-party content. Publishers do this in an attempt to piggyback on the host site’s reputation. The search engine’s concern is that this practice can mislead users, who have certain expectations for the content on a given site. Moving forward, low-value third-party content produced primarily for ranking purposes will be treated as spam. This policy will not be enforced until May 5th of this year, giving site owners time to take remedial action.

Expired Domain Abuse

Finally, Google is taking action against publishers who purchase expired domains in an attempt to boost poor-quality or unoriginal content. According to Google, this has the potential to mislead users who believe the content was published on the older site. The search engine will now treat such domains as spam.

Takeaways for Law Firm Content Marketing

To a large extent, these updated policies refine what we already knew – Google wants to surface high-quality content that is helpful to its users. Here are some specific takeaways for legal content marketers moving forward:

  • Google is not focusing on how content is produced. It is focusing on penalizing attempts to game its ranking systems. As a result, make sure that all of the content you publish is original and helpful to readers.
  • It’s okay to use AI in your content generation process, but the content you publish needs to be helpful to your readers and add something to the conversation.
  • If you use AI in your content generation process, make sure the final product does not look like unedited AI output.
  • Remember that the content on law firm websites has significant real-world effects on readers, and it is within a category Google calls “Your Money or Your Life” (YMYL). More than ever, your content should demonstrate E-E-A-T: experience, expertise, authority, and trust.
  • Consider information gain when creating content – are you adding new, relevant information that is helpful to your readers? If so, your content will have a good chance of ranking. If it simply reworks existing information, it may not appear in the search results.

ChatGPT for Law Firm Content: Best Practices

ChatGPT and other generative AI models are set to revolutionize many industries, including law. While there are certainly practice-related use cases for AI, such as legal research or contract analysis, ChatGPT also has clear applications in the business of law, such as in creating marketing materials.

One of the most effective legal marketing channels in recent years has been content marketing, which involves creating content (such as blogs, social media posts, videos, and white papers) to build brand awareness and connect with potential clients. Think of it this way: if someone Googles “How Much is My Personal Injury Case Worth?” and lands on your blog, there’s a good chance they’ll pick up the phone to call you or shoot you an email.

ChatGPT and other generative AI models can create reasonably good content in seconds. As a result, it’s no surprise that many law firms and their marketing teams have been looking into whether they can use AI to create law firm marketing content at scale.

Can You Use ChatGPT to Create Law Firm Content?

First things first – as we’ve said before, you can certainly use ChatGPT to create law firm content, provided there is significant expert oversight. You can use it the same way you can use Grammarly, Jasper, or any other AI-based tool on the market. That said, there are some substantial and worrisome problems with relying on AI to create law firm content without substantial human review. They include the following:

  • ChatGPT and other LLM AIs are known to hallucinate information. In other words, AIs confidently output incorrect facts when they do not “know” the information they need to generate an accurate response. For a cautionary tale, all you need to do is consider the two New York attorneys who faced stiff sanctions for submitting a brief citing nonexistent case law after relying on ChatGPT for legal research.
  • ChatGPT often creates very similar-sounding content in response to similar queries. As a result, the content it generates is unlikely to stand out as demonstrating high experience, expertise, authority, and trust (E-E-A-T), which is critical to a high-quality page. In fact, based on how its creators trained it, its output is average by design.
  • According to guidance issued by the United States Copyright Office, you do not have any intellectual property interest in content generated by ChatGPT or any other generative AI without “sufficient human authorship.”

These problems notwithstanding, you can certainly use ChatGPT or other generative AI models to speed up the content creation process. What you can’t do in legal marketing is ask ChatGPT to give you a blog on a legal topic and then copy and paste the output onto your website.

So, how can lawyers and marketing teams use ChatGPT to create law firm content? Here are some best practices to ensure that the content on your law firm website is accurate, unique, and compliant with the rules of professional conduct.

Use the Right Prompts

Using ChatGPT involves “prompting” the model to provide an output. In plain terms, a prompt is the instruction or question you give the model to tell it what you want.

Prompts can be long or short. As a general rule, the more complex you want the output to be, the longer the prompt should be. So, for example, let’s say you wanted to update your website to advertise the fact that you are now taking Chapter 7 cases. Here’s the kind of reasonably detailed prompt you could use:
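“Act as a content writer for a consumer bankruptcy law firm. Write a 300-word announcement for the firm’s website explaining that we are now accepting Chapter 7 bankruptcy cases. Briefly explain what Chapter 7 bankruptcy is and who may qualify, keep the tone professional and reassuring, avoid the words ‘expert’ and ‘specialist,’ and end with a call to action inviting readers to schedule a free consultation.”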

ChatGPT handles a prompt like this reasonably well. That said, you may not like a word like “compassionate” to describe your firm. Furthermore, an assertion that the firm’s attorneys will work “tirelessly to eliminate your debts” could be construed as promising a specific outcome and violate the advertising rules in your jurisdiction. So, even with good prompt engineering, you still need to be sure to…

Read Every Word

If you are using ChatGPT to create law firm content, you (or a qualified expert) must read every word of its output. ChatGPT can provide nonsensical answers and does not really “know” anything. In addition, it sometimes blatantly disregards the instructions in the prompt: even if you tell it not to use the words “expert” or “specialist,” it may do so anyway. Publishing AI-generated content without sufficient review could put inaccurate, poorly worded, or even non-compliant content on your website.

Double-Check Any Factual Statements or Statements of Law

While you are reading every word, make sure to verify any factual statements or legal assertions that the AI makes. ChatGPT has a knowledge cutoff date of September 2021, so it may provide outdated information. Erin Fitzgerald of Lexicon Legal Content recently posted a blog highlighting this point. While experimenting with ChatGPT, she discovered that it insists that the statute of limitations for personal injury in Florida is four years, despite it being lowered to two years in early 2023.

In addition, ChatGPT will simply make up facts if it does not know them. In the case of the New York lawyers who used ChatGPT for legal research, the AI fabricated Varghese v. China Southern Airlines Co., Ltd. Not only did it fabricate Varghese, it also fabricated the cases that the nonexistent Varghese “court” used to reach its opinion! Perhaps most frighteningly, when the attorney – Steven A. Schwartz of Levidow, Levidow & Oberman – asked if the cases were real, the chatbot replied that they were. As reported by Ars Technica:

Schwartz provided an excerpt from ChatGPT queries in which he asked the AI tool whether Varghese is a real case. ChatGPT answered that it “is a real case” and “can be found on legal research databases such as Westlaw and LexisNexis.” When asked if the other cases provided by ChatGPT are fake, it answered, “No, the other cases I provided are real and can be found in reputable legal databases such as LexisNexis and Westlaw.”

The moral of the story: if you use ChatGPT as a tool in law firm content creation, verify everything.

Make it Your Own

Now that you’re sure that your content is accurate and up-to-date, you should brand it to your law firm. On a fundamental level, ChatGPT uses its training data to predict what word should come next. As a result, the content it creates tends to be generic. You should revise the content to reflect your brand’s voice and marketing messaging.

Once you’re confident the content is accurate and reflects your law firm the way you want it to, it’s time to optimize it for search. Optimizing content for search involves:

  • Adding keyword phrases at an appropriate density (see the sketch after this list)
  • Using appropriate header tags for your title and subheadings
  • Adding internal links to other pages on your website
  • Adding external links to authoritative sources
  • Making your content easy to scan
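On the first item, a quick way to sanity-check keyword density is a few lines of Python. This is a rough sketch: the draft file name is hypothetical, and the 1 to 3 percent target range is a common rule of thumb, not anything Google publishes.

```python
import re

def keyword_density(text: str, phrase: str) -> float:
    """Fraction of the document's words accounted for by the phrase."""
    words = re.findall(r"[a-z']+", text.lower())
    target = phrase.lower().split()
    n = len(target)
    hits = sum(words[i:i + n] == target for i in range(len(words) - n + 1))
    return (hits * n) / max(len(words), 1)

# "draft.txt" is a hypothetical file containing your draft post.
draft = open("draft.txt", encoding="utf-8").read()
density = keyword_density(draft, "personal injury lawyer")
print(f"Keyword density: {density:.1%}")
# The 1-3% range below is a rule of thumb, not a Google requirement.
if not 0.01 <= density <= 0.03:
    print("Consider adjusting how often the key phrase appears.")
```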

Post Your Content and Engage in On-Site Optimization

At this point, it’s time to post your content and engage in some basic on-site SEO. Assuming your content is a blog post, you should copy and paste the content into a “new post” on the backend of your website, ensuring that the formatting is correctly transferred from the platform you were writing on to your website. Next, you should:

  • Find an appropriate image for your post (posts with images get 94 percent more engagement)
  • Add an alt tag to the image describing it with text
  • Write a custom meta description
  • Categorize and tag your post

At this point, your blog is ready to go live. Once it’s posted, as a final step, you should submit it to Google Search Console to ensure it is indexed as soon as possible.

What Else Can Lawyers Use ChatGPT For?

Using ChatGPT and other generative AI models to create content can be risky for obvious reasons. That said, there are certainly other ways that lawyers can use ChatGPT for certain practice and marketing-related tasks. Some of the more obvious use cases of ChatGPT for lawyers include:

  • Content topic ideation
  • Responding to emails
  • Legal research (remember to double-check any cases or laws it cites)
  • Analyzing and summarizing documents
  • Drafting legal documents

Interested in ChatGPT for Law Firm Content? Call Lexicon Today

At Lexicon Legal Content, we have been creating content for law firms and marketing agencies for over a decade. We are presently experimenting with integrating AI into our content creation workflows to improve efficiency while still creating the same accurate and compelling legal content. That said, we are still offering our 100-percent human-written content and are transparent with our clients about how we’re using generative AI.

 To discuss your law firm content needs with a legal content marketing expert, call our office today or contact us online.

When ChatGPT Gets Legal Content Wrong

ChatGPT is unquestionably a useful tool for certain aspects of legal content marketing. The AI platform can generate complex blog topics, section headers, and more to increase efficiency in the content writing process. However, problems can quickly arise when lawyers try to cut corners by using ChatGPT to write the bulk of their content and rely on the output as accurate.

ChatGPT Doesn’t Know About Current Changes in the Law

On March 24, 2023, Governor Ron DeSantis signed a Tort Reform bill, changing several critical aspects of Florida personal injury law. One key change is the reduction of the personal injury statute of limitations from four years to two years. 

Despite the wide publication of these changes, in July 2023 ChatGPT confidently insisted that the Florida personal injury statute of limitations is still four years.

This is because ChatGPT has a knowledge cutoff of September 2021, so any changes since then will not be reflected in its output. 

Imagine if a personal injury attorney fails to carefully check the ChatGPT output and posts this information as-is on their blog. It’s certainly foreseeable that an accident victim may find the content and rely on it when deciding how quickly to proceed with their case. If the victim misses the actual two-year window as a result, the attorney could face possible sanctions – and if a client relied on the website’s incorrect information, the attorney could also potentially face a malpractice lawsuit.

Using ChatGPT without the proper attention and editing can lead to problems for an attorney – and it can also significantly affect the lives of potential legal clients who rely on misinformation. Readers reasonably assume they can trust what they read on a legal professional’s website, and they stand to lose a lot when the information is incorrect.

A Hybrid Model Works Best

Fortunately, there is a simple solution that allows content creators to harness the power of generative AI for more efficient content writing while ensuring legal accuracy. You should always have someone with detailed legal knowledge review any material produced by ChatGPT before you publish it. 

Many attorneys have already tried this approach – using AI to write content and then reviewing and editing it. However, this process is far more time-consuming than it might initially seem. In fact, in some cases, it may be faster to write from scratch than to review and verify everything that ChatGPT spits out.

In order to safely and ethically use AI to create law firm marketing materials, one should:

  • Read the content mindfully and identify any possible errors
  • Double-check the accuracy of any factual statements or statements of law
  • Ensure that the content does not contain any verbiage that is disallowed in your jurisdiction, such as “expert” or “specialist”
  • Make sure the content does not guarantee specific outcomes or timelines
  • Brand the post to your firm, using your marketing messaging
  • Ensure the post follows SEO best practices with regard to keyword usage and density
  • Change the point of view from third person to first person if the firm wants a more personalized tone

More and more attorneys and their marketing teams are finding that they do not have the time to carefully complete the above steps, which are necessary for reliable and effective content. As a result, many firms that left content providers to try their hand at ChatGPT have allowed their content goals and schedules to lapse.

The Third Option: Hybrid AI/Human Content Services

Fortunately, there is a third option besides paying for fully human-generated content or devoting the time and attention necessary to use ChatGPT on your own. At Lexicon Legal Content, we offer an AI/human hybrid content creation service: we use AI to do the heavy lifting, followed by extensive review and editing by an editor with a JD or similar legal knowledge. Ultimately, this process results in content that is accurate, unique, and SEO-optimized to help our clients connect with potential clients and improve their SERP rankings.

If you are a law firm or digital marketing agency needing help with content development, you’re in the right place. Lexicon Legal Content has been creating accurate, compelling, and SEO-focused legal content for more than a decade. We’re committed to staying on top of developments in generative AI to ensure that we can leverage this new technology responsibly and in a way that provides value to our clients. To learn more, call us today or contact us online.

Written by Erin Fitzgerald, JD

Co-Founder

Should AI-Generated Content Be Watermarked?

Since November of 2022, the world has been captivated by ChatGPT, the artificial intelligence chatbot created by OpenAI. ChatGPT’s meteoric rise in popularity – reaching 100 million users in just two months – has brought attention to generative AI in general. As the name suggests, generative AI is capable of creating text, images, and other forms of content in seconds – content that some consider indistinguishable from what a human would create.

Unsurprisingly, generative AI has been hailed as both a productivity enhancer and a job destroyer, sometimes simultaneously. It has raised serious issues in academia, with some people suggesting that the college essay has become obsolete. In addition, AI’s generative capabilities may fundamentally change the way white-collar professionals work and may even threaten their jobs.

Should Readers Know Whether Content is AI-Generated?

One issue that appears to surface regularly in the conversations around AI is whether people should know whether content was created by a human or AI. Knowing the provenance of content seems like a fair request, especially if an individual is relying on the information for a serious matter such as their health, financial well-being, or safety. 

One solution that has been thrown around is the idea of watermarking AI content, allowing people and search engines to recognize it as such. In fact, at Google I/O, the company said that it would voluntarily watermark images created by its generative AI so that people could spot fakes. Microsoft made a similar announcement a few weeks later.

Inaccurate and Fake Content Can Have Real World Effects

It is becoming more and more clear that misleading content generated by AI can have real-world effects – and cause real-world harm. For example, on May 22 of this year, a false report of an explosion at the Pentagon, accompanied by an image likely generated by AI, caused a significant dip in the stock market.

Similarly, many experts consider content containing misinformation to pose a risk to elections. Speaking at a World Economic Forum event earlier this year, Microsoft’s chief economist, Michael Schwarz, cautioned that “Before AI could take all your jobs, it could certainly do a lot of damage in the hands of spammers, people who want to manipulate elections.”

Bad actors could generate misinformation at a scale never seen before in the form of social media posts, fake news stories, fake images, and even deep fake videos of candidates that are indistinguishable from reality.

Perhaps most troublingly, some observers think that the rise of generative AI risks a future of human incompetence. What does the world look like if all we have to do to demonstrate competence is to ask an AI to do it for us? As put by US DOJ National Security & Cybercrime Coordinator Matt Cronin recently in The Hill:

For even the most brilliant minds, mastering a domain and deeply understanding a topic takes significant time and effort. While ultimately rewarding, this stressful process risks failure and often takes thousands of hours. For the first time in history, an entire generation can skip this process and still progress (at least for a time) in school and work. They can press the magic box and suddenly have work product that rivals the best in their cohort. That is a tempting arrangement, particularly since their peers will likely use AI even if they do not.

Like most Faustian bargains, however, reliance on generative AI comes with a hidden price. Every time you press the box, you are not truly learning — at least not in a way that meaningfully benefits you. You are developing the AI’s neural network, not your own.

Cronin argues that incompetence will increase over time as we use AI, comparing using it to having someone else work out for you and expecting to get fit as a result.

Consider a hypothetical generation of surgeons who have been raised on AI and suddenly do not have internet access – do you want them operating on you? Do you want a lawyer who got through law school learning how to correctly “prompt” AI representing you in court? Of course, for most of us, the answer is “no.”

The fact is that generative AI allows people to seemingly demonstrate knowledge or expertise they do not have. While this clearly presents an issue in academia, where students are expected to demonstrate knowledge in writing assignments, it also raises an issue as to whether consumers can trust that knowledge-based professionals like lawyers, physicians, and mental health providers actually possess the skills they claim to have in their website content. 

What Does Watermarking AI-Generated Content Look Like?

You are probably already familiar with the idea of watermarking as it relates to visual content. For an example, go to iStock and see how they display the pictures they have for sale. In order to prevent you from simply right-clicking and saving the image to your desktop, each image has “iStock by Getty Images” superimposed on top of it.

Google is taking watermarking AI-generated images a step further and embedding data that will mark them as AI-generated. In a May 10th blog post on The Keyword, Google explained that:

“. . . as we begin to roll out generative image capabilities, we will ensure that every one of our AI-generated images has a markup in the original file to give you context if you come across it outside of our platforms. Creators and publishers will be able to add similar markups, so you’ll be able to see a label in images in Google Search, marking them as AI-generated. You can expect to see these from several publishers including Midjourney, Shutterstock, and others in the coming months.”

Watermarking Content Presents Special Challenges

Of course, watermarking AI-generated text would be different from watermarking images. One idea that has been discussed by AI creators like OpenAI and other stakeholders is cryptographic watermarking. This type of watermarking involves embedding a pattern or code into the text in a way that allows software to detect whether the content was generated by AI.

Hany Farid, a Professor of Computer Science at the University of California, Berkeley, recently explained how watermarking text may work in a piece for GCN:

Generated text can be watermarked by secretly tagging a subset of words and then biasing the selection of a word to be a synonymous tagged word. For example, the tagged word “comprehend” can be used instead of “understand.” By periodically biasing word selection in this way, a body of text is watermarked based on a particular distribution of tagged words. This approach won’t work for short tweets but is generally effective with text of 800 or more words depending on the specific watermark details.
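To make the mechanics concrete, here is a toy Python sketch of the synonym-biasing scheme Farid describes. The synonym table, the “tagged” word list, and the bias probability are all invented for illustration; real proposals choose tags statistically and operate over much larger vocabularies.

```python
import random

# Invented synonym pairs; the replacement words are the secret "tags."
SYNONYMS = {
    "understand": "comprehend",
    "use": "utilize",
    "big": "substantial",
    "show": "demonstrate",
}
TAGGED = set(SYNONYMS.values())
BIAS = 0.9  # probability of choosing the tagged synonym (illustrative)

def watermark(text: str) -> str:
    """Bias word selection toward tagged synonyms."""
    out = []
    for word in text.split():
        if word in SYNONYMS and random.random() < BIAS:
            out.append(SYNONYMS[word])
        else:
            out.append(word)
    return " ".join(out)

def tagged_fraction(text: str) -> float:
    """Detector: what share of the text comes from the tagged set?"""
    words = text.split()
    return sum(w in TAGGED for w in words) / max(len(words), 1)

plain = "we use data to show one big reason readers understand the law"
marked = watermark(plain)
print(marked)
# A long watermarked document shows an unusually high tagged fraction.
print(f"plain: {tagged_fraction(plain):.0%}  marked: {tagged_fraction(marked):.0%}")
```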

This idea has gained traction in many circles. Professor Farid believes that all AI-generated content should be watermarked, as does Matt Cronin (mentioned earlier in this article). Additionally, FedScoop’s Nihal Krishan reports that the Deputy National Security Adviser for Cyber and Emerging Technology met privately with tech executives at the RSA Conference – including those from OpenAI and Microsoft – and urged them to consider watermarking any content their AI models generate.

Conclusion

While the future of AI-content watermarking remains unclear, what is clear is that generative AI can pose risks to individuals as well as society as a whole. Misinformation has been a problem before, but the difference now is the scale and speed with which it can be produced.

One way to handle the issue would be for AI companies to watermark all of the content they create so that everyone has a clear idea of its provenance. This would allow for the use of AI in academia without the fear of an incompetent workforce, the use of AI in journalism without eroding the public trust, and the use of AI in marketing with transparency. 

In light of the risks posed by the proliferation of AI-generated content and the potential erosion of human competence, watermarking provides a practical measure to ensure transparency and accountability. By implementing watermarking practices, content creators and publishers can contribute to a more informed and discerning society, enabling individuals to make better decisions based on the origin and authenticity of the content they encounter.

Can You Use ChatGPT to Write Legal Blogs?

ChatGPT – the conversational AI model released by OpenAI in November 2022 – has captured the attention of people in every industry throughout the world. According to some, ChatGPT and other generative AIs are about to fundamentally change the world, take our jobs, and usher in a dystopian future. On the other hand, some observers think that ChatGPT is a waste of time that people are going to mostly use as a toy.

As is usually the case, the reality of generative AI’s impact is probably somewhere in the middle of these two positions. That said, one thing is crystal clear – ChatGPT is capable of generating human-like content on a wide range of topics in a matter of seconds. This generative capability clearly has wide-ranging implications in academia as well as the workplace – and these implications are particularly salient for people in white-collar positions in which their work product is typically written material. 

Since ChatGPT doesn’t have a law license (despite doing really well on the UBE), the lawyers are safe for now. That said, law firms and marketing teams are looking into whether it can do other non-practice-oriented tasks, such as creating marketing materials.

So, can you use ChatGPT for law firm marketing? Let’s take a look and find out.

What is ChatGPT?

While you’ve undoubtedly heard of it, what exactly is ChatGPT? One way to find out is to ask ChatGPT itself:

[Screenshot: ChatGPT responding to a prompt to explain what ChatGPT is]

Okay, here’s the plain-English version: ChatGPT is an AI built on a large language model that can provide human-like responses to human inputs. For example, you could ask it to provide information, create a meal plan, plan a vacation, or even answer legal questions.

Spending a few minutes playing with ChatGPT is an eye-opening experience. At first blush, it’s easy to see how generative AI could change everything. For many people, using ChatGPT for the first time results in a series of existential questions – What happens to the college essay? What is the point of learning anything? Will this take my job? Do humans have to do anything anymore?

Undoubtedly, the tech has the potential to be extremely disruptive, but some of the apocalyptic prognosticating seems to already be dying down. For example, rather than banning it, some teachers have started to integrate ChatGPT into their lesson plans. People have realized that students can still demonstrate their knowledge by in-class testing or essay writing. Furthermore, many people now view AI as a way for professionals to improve their efficiency rather than a replacement for human expertise.

So, back to the question – can lawyers use ChatGPT or other generative AI models for legal marketing? The short answer is yes – provided there is significant human oversight.

In fact, lawyers may have an ethical duty to stay on top of generative AI technology like ChatGPT. Most states have adopted Comment 8 to Model Rule 1.1 of the Model Rules of Professional Conduct, which provides that a lawyer should “keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology, engage in continuing study and education and comply with all continuing legal education requirements to which the lawyer is subject” [emphasis added].

ChatGPT and other generative AI models can help lawyers and their marketing teams accomplish certain tasks more quickly. That said, you should be aware of the limitations of the technology and the ways in which using it may cause problems.

It Can Provide Incorrect Information

ChatGPT can provide incorrect information – a problem that its creators like to call “hallucination.” They are well aware of this problem and even have a small disclaimer at the bottom of the page alerting users to this fact.

[Screenshot: a disclaimer explaining that ChatGPT can provide incorrect information]

If you are posting content on a law firm’s website, it has to be accurate. Incorrect information could result in significant consequences, including disciplinary action from the bar or even a malpractice lawsuit from a client who relied on it.

The Content May Be Plagiarized

ChatGPT runs on a large language model (LLM), a type of AI that uses huge data sets to understand inputs and predict new content. In other words, ChatGPT uses existing content to figure out what word should come next. As a result, there is a substantial possibility that ChatGPT’s output will be extremely similar to existing content on the internet.
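As a radically simplified illustration of that next-word prediction, here is a toy Python model that only counts which word follows which in a tiny corpus. Real LLMs use neural networks trained on vast swaths of the internet, but the underlying task is the same.

```python
import random
from collections import defaultdict

# A tiny "training corpus"; real models train on much of the internet.
corpus = (
    "the statute of limitations for personal injury claims varies by "
    "state and the statute of limitations may bar a late claim"
).split()

# Record which words follow which in the training text.
next_words = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    next_words[current].append(following)

# Generate text by repeatedly sampling a plausible next word.
word = "the"
generated = [word]
for _ in range(7):
    options = next_words.get(word)
    if not options:
        break
    word = random.choice(options)
    generated.append(word)

print(" ".join(generated))
# Because the model can only recombine what it has seen, its output
# tends to echo the training text -- the source of the plagiarism risk.
```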

Additionally, it can produce very similar-sounding content in response to similar prompts. So, if you and another law firm (or hundreds of other law firms, as the case may be) ask it to spit out a 500-word legal blog post on “Common Types of Medical Malpractice,” the content it produces will likely be very similar to what other users are getting.

You Do Not Own the Content ChatGPT Produces

According to guidance issued by the United States Copyright Office in March of 2023, content produced by generative AI like ChatGPT is not eligible for copyright protection based on the “human authorship requirement.” In addition, there are multiple pending lawsuits from content creators alleging that generative AI’s use of their works to create new content is infringing. As Jonathan Grabb, Ethics Counsel for the Florida Bar, puts it, “utilizing an A.I. program to draft documents may not be risk free” when it comes to copyright and plagiarism issues.

ChatGPT Content Will Probably Not Demonstrate E-E-A-T without Significant Editing

When evaluating the quality of a page, Google looks at the extent to which the content demonstrates experience, expertise, authority, and trust – E-E-A-T, in industry parlance. This is particularly true for sites that deal with issues related to the health, happiness, financial stability, and safety of users and society at large, which Google calls Your Money or Your Life (YMYL) sites.

Google makes it clear that the most important element of E-E-A-T is trust, and there is a substantial possibility that posting AI-generated content without any oversight will result in untrustworthy pages. Some of the ways you can ensure that your content demonstrates E-E-A-T include:

  • Ensuring that any facts contained in the output are accurate
  • Linking to authoritative sources
  • Ensuring that the content provides useful information and is not just regurgitating existing content
  • Highlighting your expertise or credentials

Despite the significant issues raised above, ChatGPT and generative AI do have a place in law firm marketing. Some of the best use cases for the technology include the following:

Coming Up with Content Ideas

When it comes to regular content creation, the hardest part can be coming up with topics to write about. This is true whether you are regularly adding practice area pages, posting blogs, or updating your social media accounts. ChatGPT is great at helping come up with content topics – you just need to know how to prompt it correctly. Fortunately, the team at Lexicon Legal Content has done the work for you. We’ve developed a free, AI-powered Legal Blog Topic Generator that is designed for use by lawyers and law firm marketing companies. 

Outlining Content

Another part of the content creation process where AI can really help is in outlining a piece of content. It can provide headings and subheadings and can even help identify ancillary topics that your piece of content could address to make it more comprehensive. Here’s an example of its outline for a blog on what people should do after a bicycle accident:

[Screenshot: ChatGPT providing an outline for a blog about what to do after a bicycle accident]

Of course, not all of these points may be relevant or appropriate to post on a law firm blog, but it’s a good start. Once you have a good outline in place, it can make the process of content creation much faster.

Overcoming Writer’s Block

Writer’s block is real. Everyone on the Lexicon team has struggled with staring at a blank page and asking, “what the hell should I talk about?” at one point or another. Asking ChatGPT to create an outline or even write an introduction can help get past writer’s block and into the content creation process.

Summarizing Material & Generating Short-Form Content

Finally, another great use case for ChatGPT is summarizing longer content and generating short-form content that you can use for social media posts, meta descriptions, or other areas where you only need one or two sentences. ChatGPT’s generic and terse output is actually a benefit for short-form blurbs, as you are often dealing with word-count restrictions and need to be as efficient as possible.

For more than 10 years, Lexicon Legal Content has been helping law firms connect with legal consumers through the power of content marketing. We’ve developed millions of words of content for law firms and digital marketing agencies throughout the United States and Canada, and we’re committed to staying on the cutting edge of content marketing trends and tech. To learn more about our services, call our office today or send us an email through our online contact form.

Generative AI for Law Firm Content? A Quick and Dirty Guide

It’s May of 2023, which means that professionals across all industries are working out how they can incorporate AI into their workflows to improve efficiency. Everyone knows the legal field adopts technology more slowly than other industries, but that doesn’t mean that lawyers and law firms are not trying to figure out how they can use it for non-practice tasks like creating marketing materials.

It’s true that generative AI can create fairly convincing human-sounding content, so law firms and their marketing managers may wonder whether they can use it to churn out content at scale. AI is a great assistant, but it still needs a human at the helm – especially in a high-stakes area like law. 

Below are some guidelines as to how law firms can currently use generative AI models like ChatGPT to help in their marketing efforts.

Do Not Rely on It to Create a Finished Product By Itself

The first thing that lawyers and law firm marketing directors should realize is that you cannot rely on AI models to create a finished piece of content without human intervention. AI is a very convincing liar, and it is known to “hallucinate” answers that are just flat-out wrong.

It doesn’t take much to recognize that this can be a serious issue when creating legal content. Providing incorrect information could result in bar complaints or even a malpractice suit if someone who became a client used the information on your site as the basis for a specific course of action.

Additionally, even if you teach AI your brand voice, the fact is that AI-generated content does not capture the intricacy and personality of human writing. If you really want to make a connection with your readers, make sure there is a human touch to the final product.

Know What AI Does Best

Now that we’ve addressed some of the significant issues with AI content creation, it’s important to address the things that it can do extremely well. There is zero doubt that – when used correctly – AI can improve productivity and make the process of creating law firm marketing content easier. Some of the best use-cases for AI in legal content marketing include:

Topic Ideation

Sometimes, the hardest part of creating content is figuring out what to write about. After all, you can only package “why you need a [insert your practice area] attorney” in so many different ways. The fact is, however, that there is plenty to talk about in the legal field, and many questions that provide you an opportunity to connect with clients online.

Getting ChatGPT to spit out strong blog topics takes a little prompt engineering. For example, you need to narrow its output to consumer-facing matters (have you ever met a client who really wants to know the difference between assumption of the risk and comparative negligence?) and give it some other details.

Fortunately, the legal professionals at Lexicon Legal Content have done the hard part for you and created a legal industry-specific AI-Powered Legal Blog Topic Generator that you can use for free.

Getting Past Writer’s Block

So now you have some topics, but you are still looking at a blank page with no idea where to start. In cases like these, AI can help you get started. You can ask it to provide a basic introduction for your topic, which is often enough to get past writer’s block and put something on the page.

Outlining Your Content

Another place AI shines is in creating content outlines. Sometimes, it is as simple as asking it to provide headers for an article of a given length on your chosen topic. In other cases, you could ask it to get more granular and summarize the main ideas to cover in each section.

Read Every Word

When it comes to AI content, it is critical that someone with legal expertise (preferably someone with a JD) reads every single word of the output. A light edit adding some personal or brand flavor here and there is not going to cut it. As mentioned above, it is common knowledge that AI spits out incorrect information, and even a slight error could result in professional and legal consequences. 

In addition, AI may create content that is noncompliant with the advertising rules in your jurisdiction. A stray “specialist” or false statement about your experience could result in marketing materials that could land you in hot water with your state bar.

Run it Through a Plagiarism Checker

To vastly oversimplify the technology, generative AI uses advanced algorithms and available internet content to predict what word should come next. Because it draws on existing material to create new text, lawyers should be very nervous that what it generates may be extremely close to content that already exists on the internet.

If you and some law firm across the street or across the country ask it to generate content on a similar topic, it may spit out very similar answers. For this reason, you should always run any AI-generated content through a plagiarism checker before publishing it. 
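A dedicated plagiarism service is the right tool for this, but as a rough illustration of the underlying idea, Python’s standard library can flag suspiciously similar passages. The sample texts and the 0.8 threshold below are invented for the example:

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Rough 0-1 similarity ratio between two texts."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

draft = (
    "Medical malpractice occurs when a provider deviates "
    "from the accepted standard of care."
)
existing_page = (
    "Medical malpractice happens when a provider deviates "
    "from the accepted standard of care."
)

score = similarity(draft, existing_page)
print(f"Similarity: {score:.0%}")
if score > 0.8:  # illustrative threshold; real checkers scan the whole web
    print("Too close to existing content -- consider rewriting.")
```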

Keep in Mind that Without Significant Human Intervention, AI Content is Not Protected by Copyright

Earlier this year, the United States Copyright Office issued guidance regarding whether AI-generated content is subject to copyright protections. Feel free to read the entire document here, but the TLDR version is this: a work is not copyrightable when an AI generates content without human involvement, and providing a prompt is not sufficient human involvement to make a work copyrightable. In other words, if you tell an AI to “generate a blog on car accident law,” proofread it, and publish it on your website, you do not own it.

Outsource Content Creation to Legal Professionals

If this sounds like a lot to worry about when using AI to create content, it is. The reality is that, in many cases, it is quicker to just write content from scratch the old-fashioned way than it is to have AI generate it and then clean it up. That said, when used correctly, AI can make parts of the content process more efficient and improve productivity.

At Lexicon Legal Content, we leverage AI to create legal content for our clients that turns website visitors into clients. To learn more, call us today or send us an email.

Lessons from DoNotPay: The Ethical Implications of AI in the Legal Industry

Some shortcuts and hacks are worth it in life and business, and some simply aren’t. Some seem harmless, appealing, less expensive, and time-saving at first, yet they end up causing more problems and hassle than the original situation would have presented without the shortcut. As one previously aspiring attorney is finding out, AI in the legal realm is more of the latter – at least for now. So if you are an attorney or a marketing professional who works with attorneys, you’ll want to make note of this case and learn from another’s mistakes instead of venturing down that path, or similar ones, yourself.

“The World’s First Robot Lawyer”

San Francisco’s DoNotPay is “the world’s first robot lawyer,” according to founder, CEO, and software engineer Joshua Browder. The tech company was founded in 2016 by Browder, a Stanford University undergraduate and 2018 Thiel Fellow who has received a remarkable amount of media attention in his short career. Browder says he started the company after moving to the U.S. from the U.K. and receiving many parking tickets that he couldn’t afford to pay. Instead, he looked for loopholes in the law he could use to his advantage to find ways out of paying them.

He claims that the government and other large corporations have conflicting rules and regulations that only stand to rip off consumers. With DoNotPay, his goal is to give a voice to the consumer without consumers having to pay steep legal fees. According to the company’s website, they use artificial intelligence (AI) to serve approximately 1,000 cases daily. Parking ticket cases have a success rate of about 65 percent, while Browder claims many other case types are 100 percent successful.

DoNotPay claims to have the ability to:

  • Fight corporations
  • Beat bureaucracy
  • Find hidden money
  • Sue anyone
  • Automatically cancel free trials

The company has an entire laundry list on its website of legal problems and matters its AI can handle, such as:

  • Jury duty exemptions
  • Child support payments
  • Clean credit reports
  • Defamation demand letters
  • HOA fines and complaints
  • Warranty claims
  • Lien removals
  • Neighbor complaints
  • Notice of intent to homeschool
  • Insurance claims
  • Identity theft
  • Filing a restraining order
  • SEC complaint filings
  • Egg donor rights
  • Landlord protection
  • Stop debt collectors

DoNotPay: Plagued with Problems

While his intentions might seem noble to some, they are landing Browder in his own legal hot water – for which there may currently be no robot lawyer to represent him.

State Bars Frown on AI in the Courtroom

In February, a California traffic court was set to see its first “robot lawyer,” as Browder planned to have an AI-powered robot argue a defendant’s traffic ticket case in court. If his plan had come to fruition, the defendant would have worn smart glasses to record court proceedings while using a small speaker near their ear, allowing the AI to dictate appropriate legal responses to them.

This unique and innovative system relied on AI text generators, including the new ChatGPT and DaVinci. While in the courtroom, the AI would process and understand what was being said and generate real-time responses for the defendant. Essentially, defendants could act as their own lawyers with the help of DoNotPay’s robot lawyer – a technology that had never been used in a courtroom.

Many state bars and related entities quickly expressed their extreme disapproval when they learned about Browder’s plans. Multiple state bars threatened the business, some even threatening prosecution and prison time. For example, one state bar official reminded him that the unauthorized practice of law is a misdemeanor in certain states that can carry a punishment of up to six months in county jail.

State bars license and regulate lawyers in their respective states, ensuring those in need of legal assistance hire lawyers who understand the law and know what they are doing. According to them, Browder’s AI technology intended for courtroom use is clearly an “unauthorized practice of law.”

DoNotPay is now under investigation by several state bars, including the California State Bar. AI in the courtroom is also problematic because courtroom rules in federal and many state courts currently don’t allow the recording of proceedings. Even still, Browder’s company offered $1 million to any lawyer willing to let its chatbot handle a U.S. Supreme Court case. To date, no one has accepted his offer.

DoNotPay Accused of Fraud

As if being reprimanded by several state bars isn’t bad enough, Browder and DoNotPay are now facing at least one, if not multiple, class action suits. The silver lining is that perhaps Browder will finally get to test his robot lawyer in court.

On February 13, 2023, Seattle paralegal Kathryn Tewson filed a petition with the New York Supreme Court requesting an order for DoNotPay and Browder to preserve evidence and seeking pre-action discovery. She plans to file a consumer rights suit alleging that the company is a fundamental fraud.

What’s even more interesting is that Tewson notes in her filing that she consents to Browder using his robot lawyer to represent himself in this case and even seems to dare him to do so:

For what it is worth, Petitioner does and will consent to any application Respondents make to use their “Robot Lawyer” in these proceedings. And she submits that a failure to make such an application should weigh heavily in the Court’s evaluation of whether DoNotPay actually has such a product.

Through her own research, Tewson has accused Browder of not using AI at all, but of piecing different documents together to produce legal documents for consumers who believe they are receiving either AI-generated or real attorney-generated content. And if DoNotPay is actually using AI, as Browder claims, it’s obviously not producing quality work product – and consumers are starting to notice.

A Potential Class Action Lawsuit

As if these legal issues weren’t already enough, next on the DoNotPay docket is a potential class action lawsuit. On March 6, 2023, Jonathan Faridian of Yolo County filed a lawsuit in San Francisco seeking damages for alleged violations of California’s unfair competition law. Faridian alleges he wouldn’t have subscribed to DoNotPay’s services if he had known that the company was not actually a real lawyer. He asks the court to certify a class of all people who have purchased a subscription to DoNotPay’s service.

Faridian’s lawyer, Jay Edelson, filed the complaint on his behalf, alleging that Faridian subscribed to DoNotPay and used the service for a variety of legal tasks, such as:

  • Drafting demand letters
  • Drafting an independent contractor agreement
  • Small claims court filings
  • Drafting two LLC operating agreements
  • An Equal Employment Opportunity Commission job discrimination complaint 

Faridian says he “believed he was purchasing legal documents and services that would be fit for use from a lawyer that was competent to provide them.” He further claims that the services he received were “substandard and poorly done.”

Edelson has successfully sued Google, Amazon, and Apple for billions. The NYT refers to him as the “most feared lawyer in Silicon Valley.”

When asked directly whether DoNotPay would be hiring a lawyer for its defense or defending itself in court using its own tools, Browder said, “I apologize given the pending nature of the litigation, I can’t comment further.” Even still, he recently tweeted, “We may even use our robot lawyer in the case.”

What Lawyers and Marketing Professionals Can Learn From DoNotPay’s Mistakes

Stanford professors say that Browder is “not a bad person. He just lives in a world where it is normal not to think twice about how new technology companies could create harmful effects.” Whether this is true or not remains to be seen. In the meantime, attorneys and marketing professionals have a lot they can glean from Browder’s predicaments. They certainly need to think twice about the potentially harmful effects of AI technology use for several reasons.

DoNotAI

The overarching theme that we can take away from Browder and his business’s legal predicaments is that AI isn’t something that law firms or attorneys (or even those aspiring to be in the legal profession) should dabble in, at least for now. It isn’t worth relying on AI, such as ChatGPT or Google’s new Bard, whether for online form completion like DoNotPay or for marketing content like blogs or newsletters. You don’t want to give the impression that something was drafted or reviewed by a licensed attorney when, in reality, it was essentially written by a robot. On the other hand, you also don’t want to be accused of piecing legal documents together or performing shoddy work as an attorney because you are using AI.

While relying on AI might seem harmless in some areas, it could later prove problematic, as it has for Browder. For example, using AI for any of your work or marketing content could:

  • Tarnish your reputation in your community and with your colleagues and network
  • Lead to your actions being called into question by your state bar association
  • Provide consumers with wrong or simply worthless information, proving disastrous for your marketing and SEO efforts
  • Lower your SEO rankings and decrease your potential client leads
  • Lead to legal action for malpractice or fraud

Adhere to Professional Standards

Always remember to adhere to your professional standards and codes of conduct. If anything related to the use of AI seems questionable or unethical, treat it as such and steer clear of it. The use of AI as a substitute for the advice and counsel of a bona fide attorney, whether online, in the courtroom, or in representing your clients, isn’t acceptable under any state bar at the current time. Taking shortcuts that rely on AI isn’t worth facing professional consequences up to and including having your license suspended or revoked.

What This Means for Legal Content 

AI is permissible and even valuable for some minor legal content generation tasks, such as determining keywords or composing an outline. However, these new and still emerging technologies shouldn’t be used to draft entire blog posts, white papers, newsletters, eBooks, landing pages, or other online marketing copy. There are several reasons to avoid this:

  • AI-generated content may soon carry a watermark detectable by web browsers
  • We don’t yet know how Google will react to such content—although Google currently claims the quality of the content is more important than how it is produced, AI may not be generating quality content, and Google could change its stance at any point
  • State bars may view AI-generated marketing content as unethical or fraudulent
  • The use of AI-generated content could constitute the unauthorized practice of law in some states
  • AI content may provide incorrect information and come across as cold or impersonal, something attorneys definitely want to avoid when marketing to potential clients

Do You Need Help Producing Original Content?

If you are an attorney or marketing professional who needs help producing legal content, Lexicon Legal Content can help. Don’t cut corners and put yourself at risk by turning to AI-generated content. Our team of attorney-led writers can produce valuable content for your website or other marketing efforts that passes not only plagiarism detection but also AI detection. All content is either written or reviewed by a licensed attorney. Talk to a content expert today about how we can meet your legal content needs.

U.S. Copyright Office Issues Guidance on AI-Generated Works

Generative AI is happening, and it’s creating images, text, music, and other forms of content right now, as you read this. Some see this technology as a way to maximize human efficiency and productivity, while others view it as an existential threat to humanity. Regardless of where you come down on the issue, the reality is that generative AI is creating new content every day, and there are significant legal and ethical implications.

One of the most vexing questions is who owns the material generated by AI. In other words, if you use AI to create content, is it copyrightable? If you ask ChatGPT, it tells you that:

[Screenshot of ChatGPT’s response]

Not a very clear answer – maybe, and maybe not. Fortunately for content creators, the U.S. Copyright Office issued guidance on the subject on March 16, 2023. The TL;DR version is this: works generated solely by AI are not copyrightable, but works generated by humans using AI might be.

The Authorship Requirement in Copyright

In its guidance, the Office reiterated its position that the term “author” excludes non-humans. For this reason, copyright can only protect works that are the result of human creativity. Article I, Section 8 of the Constitution grants Congress the power to secure for “authors” an exclusive right to their “writings.”

The seminal case interpreting these terms is Burrow-Giles Lithographic Co. v. Sarony. In that case, Napoleon Sarony had taken photographs of Oscar Wilde, which the Burrow-Giles Lithographic Company copied and marketed. The company’s position was that photographs could not be copyrightable works because they were not “writings” or produced by an “author” – and that the Copyright Act Amendment of 1865, which explicitly granted copyright protection to photographs, was therefore invalid.

The Court rejected this argument, explaining that visual works such as maps and charts had been granted copyright protection since the first Copyright Act of 1790. In addition, even if “ordinary pictures” may simply be the result of a technical process, the fact that Sarony had made decisions regarding lighting, costume, setting, and other issues indicated that Sarony was, in fact, the author of an original work of art within the class of works for which Congress intended to provide copyright protection.

In the opinion, the Court defined an author as “he to whom anything owes its origin; originator; maker; one who completes a work of science or literature.” Furthermore, the decision repeatedly refers to authors as “persons” and as “human,” and to a copyright as “the exclusive right of a man to the production of his own genius or intellect.”

Relying on this decision as well as subsequent cases, the Copyright Office has required that a work have human authorship in order to be copyrightable.

How the Copyright Office Applies the Authorship Requirement

When the Copyright Office receives a hybrid work containing both human-created and AI-generated material, it considers, on a case-by-case basis, whether the AI’s contributions are the result of mechanical reproduction or of the author’s own original idea.

The bottom line is that if a work is the result of simply feeding AI a prompt, it will not be copyrightable. From the guidance (footnotes omitted):

For example, if a user instructs a text-generating technology to “write a poem about copyright law in the style of William Shakespeare,” she can expect the system to generate text that is recognizable as a poem, mentions copyright, and resembles Shakespeare’s style. But the technology will decide the rhyming pattern, the words in each line, and the structure of the text. When an AI technology determines the expressive elements of its output, the generated material is not the product of human authorship. As a result, that material is not protected by copyright and must be disclaimed in a registration application.

Importantly, the guidance acknowledges that some works that include AI-generated material will have enough human authorship to qualify for copyright protection. According to the Office, if the material is sufficiently altered in a creative way, the end result could be a work of authorship subject to copyright protection.

Authors Can Use AI Tools to Create Copyrightable Works

The guidance makes clear that authors can create copyrightable works using AI and other technical tools. That said, people applying for copyright registration should disclose their use of AI and explain how the human author contributed to the work.

The emergence of generative AI has brought with it complex legal and ethical questions regarding ownership and copyright of AI-generated works. The recent guidance issued by the U.S. Copyright Office provides some clarity on the matter, but content creators should make an effort to stay informed about this rapidly evolving area of law.

Note: ChatGPT was used in the editing process – and helped with the conclusion.


FAQs: Can Lawyers Use AI-Generated Content for Marketing?

Right now, it’s nearly impossible to have a discussion about digital marketing without mentioning ChatGPT or AI generally. The technology is undoubtedly amazing; it’s capable of answering questions, creating a business plan, and even writing essays. One of the most obvious potential use cases for the newest generation of AI tools is content creation – but is it a good idea to use it? Let’s dig in and see what the issues are….

Can I Post AI-Generated Content on My Website?

Yes, you can. That said, the last thing you should do is post AI-generated content on your legal site without significant oversight and review. On February 8, 2023, Google clarified its position on AI-generated content. In short, it said that using AI-generated content is not against its guidelines. Like other forms of content, it will rank well if it is helpful to people searching for information. However, if you use AI to create content in an attempt to “game” SEO, your site will likely be penalized.

Should I Use AI Content?

As the old adage goes, just because you can do something doesn’t mean you should. If your site deals with topics that can affect your money or your life (YMYL, in Google’s parlance), Google will scrutinize your site’s content more closely. Specifically, it will look for signals that demonstrate experience, expertise, authority, and trustworthiness (E-E-A-T).

YMYL sites include those relating to topics like medicine, finance, and law. As a result, it’s critical for lawyers to ensure that the content on their site is accurate, helpful, and in compliance with the rules of professional conduct. If you are using AI to generate content, it’s imperative that you (or someone with the necessary expertise) review every word of it before you post it on your website. At that point, it’s a legitimate question whether using AI to create long-form legal content is truly more efficient than human writing.

If you need 100-word product descriptions for kitchen appliances, you’re likely fine to use AI to generate them and post them with a cursory review. If you are creating long-form content on complicated legal topics, you probably want to have more human involvement and oversight in content creation.

How Can AI Help in the Content Creation Process?

That said, there are certainly ways in which AI tools can make the content creation process more efficient. Some of the ways you can ethically use AI in content creation, without creating more work for yourself, include:

  • Blog topic ideation
  • Client persona identification
  • Keyword research
  • Content outlining
  • Basic legal research
  • Getting over writer’s block

Is AI Content Well-Written?

Whether you think AI-generated content is well-written depends on what you believe makes content “good.” To many people, it’s just too generic and “clean” to qualify as good content. The reality is that law firms and other professional service providers have a brand identity that they want their content to reflect, and content generated by artificial intelligence lacks the personality that achieves that goal.

Is AI Content Bar-Compliant?

There is no guarantee that content created by AI will comply with the rules of your state bar. It may make statements that inadvertently guarantee a favorable outcome, it may suggest that you are a “specialist” or an “expert,” and it may even provide incorrect information. Furthermore, some state bars may take the position that using AI-generated content without oversight is, per se, a violation of the rules of professional conduct.

In Conclusion…

If you are a law firm or a digital marketing agency that works with law firms, AI can certainly help you in your efforts. That said, you should be certain that there is a significant amount of expert oversight in the process. Using AI to mass-produce content and posting it without review can land you in hot water with Google and even with your state bar.