AI in Marketing: 7 Risks to Be Aware Of

A few years ago, most marketers had no idea how to use AI in their work. Suddenly, we’re all writing LinkedIn posts about the best prompts to whisper into ChatGPT.

Artificial intelligence is a welcome addition to our daily work. McKinsey believes AI can unlock $2.6 trillion of value for marketers.

But our rapid adoption of AI may be outpacing our ability to answer important questions about ethics, law, and operations, exposing marketers to risks they’ve never had to consider before. (Can we be sued for what we tell AI to write?)

AI has kicked up dust all over marketing, and it won’t settle for years. No matter how hard you squint, you can’t see every possible pitfall of using machine learning and large language models to create and manage ads.

In this article, we’ll take a high-level look at seven of the biggest risks of using AI in marketing. Experts weigh in with advice for mitigating each one, and we’ve added resources to help you dig deeper into your own questions.

Risk #1: Machine learning bias

Machine learning algorithms can sometimes produce results that are unfairly biased for or against something or someone. This is called machine learning bias, or AI bias, and it’s a problem that affects even the most sophisticated deep neural networks.

There’s a problem with the data

AI models aren’t inherently prejudiced. The problem is the data they’re fed.

Machine learning algorithms work by identifying patterns and calculating the probability of an outcome, such as whether a certain group of customers will like your product.

But what if the data used to train the AI is biased toward a certain race, gender, or age group? The AI will conclude that those people are better matches and adjust ad creative or placement accordingly.
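To make the mechanics concrete, here’s a deliberately tiny Python sketch (using scikit-learn and made-up numbers) of how a skewed training set produces skewed targeting. Nothing here reflects a real ad platform; it just shows the pattern-matching at work.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical historical data. Feature: 1 = male, 0 = female.
# Label: 1 = "clicked the job ad". Clicks skew toward men simply
# because the existing applicant pool was mostly male.
X = np.array([[1]] * 80 + [[0]] * 20)                    # 80 male, 20 female examples
y = np.array([1] * 70 + [0] * 10 + [1] * 5 + [0] * 15)   # 70/80 male clicks, 5/20 female

model = LogisticRegression().fit(X, y)

# Predicted click probability for a man vs. a woman:
print(model.predict_proba([[1], [0]])[:, 1])

The model now “prefers” showing the ad to men, not because women dislike the job, but because the training data under-represented their clicks. Biased data in, bias out.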

Here’s a real-world illustration. Researchers recently checked Facebook ad targeting for gender bias. They placed an ad recruiting Pizza Hut delivery drivers, along with a similar one for Instacart.

Facebook showed the Pizza Hut ad to men in disproportionate numbers because the existing pool of Pizza Hut drivers skews male. Instacart employs more women, so its ad was shown to more women. But there’s no reason women wouldn’t want to hear about Pizza Hut’s jobs, which makes this a serious targeting mistake.

AI bias is common

The problem goes beyond Facebook. Researchers at USC examined two large AI databases and found that more than 38% of the data was biased. ChatGPT’s own documentation warns that some of its output may be associated with “negative stereotypes about black women.”

Machine learning bias has several implications for marketing, not the least of which is poor ad performance. An ad-targeting platform that excludes large segments of the population isn’t the best option if you want to reach as many potential customers as possible.

There are, of course, more serious consequences if we unfairly exclude or target certain groups. You could run afoul of the Fair Housing Act or the Federal Trade Commission if your real estate ad discriminates against protected groups. You could also miss the inclusive advertising boat.

How to avoid AI bias

So what do we do if our AI tools go rogue? There are a few steps you can take to ensure your ads are fair to everyone.

Alaura Weaver, senior manager of content and community at Writer, advises first making sure a person checks your content. While AI technology is advancing, she explains, it still lacks critical thinking and decision-making abilities. Having human editors review and fact-check AI-written content ensures it’s free of bias and follows ethical standards.

Human oversight of paid ad campaigns will also reduce the risk of negative outcomes.

As Brett McHale, founder of Empiric Marketing, puts it: “AI is most effective when it is given accurate inputs by organic intelligence, which has already amassed vast amounts of experience and data.”

Risk #2: Factual fallacies

Google recently cost its parent company $100 billion in market value when Bard, its AI chatbot, gave an incorrect answer in a promotional tweet.

Google’s mistake highlights one of AI’s biggest limitations, and one of its biggest risks: AI does not always tell the truth.

AI hallucinates

Ethan Mollick is a professor at the Wharton School of Business. He recently described AI-powered systems such as ChatGPT as “omniscient interns who are eager to please and sometimes lie.”

Of course, AI isn’t sentient, despite what one Google engineer famously claimed. It’s not trying to trick us. It may, however, suffer from “hallucinations,” which can lead it to make things up.

AI is a prediction machine. It tries to guess the next word or phrase that will answer your prompt. But it’s not self-aware; AI has no gut-check logic to determine whether the words and phrases it’s stringing together actually make sense.
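Here’s a minimal, hypothetical sketch of that guessing process in Python. Real large language models use neural networks trained on billions of tokens, but the core loop is the same: emit the statistically likely next word, with no step that checks whether the result is true.

from collections import Counter, defaultdict

corpus = "argentina won the cup argentina won the match spain won the cup".split()

# Count which word follows each word in the training text.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str) -> str:
    # Return the most frequent follower: plausible, never verified.
    return follows[word].most_common(1)[0][0]

print(predict_next("argentina"))  # -> "won", because that's the common pattern

The model outputs whatever is probable given its data; truth never enters into it. That’s exactly why confident-sounding errors slip through.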

This isn’t a bias problem, either. Even when a network has the correct information, it may still give us the wrong answer.

Take this example: a user asked ChatGPT, “How many times has Argentina won the FIFA World Cup?” The bot answered once, pointing to the 1978 win. The user then asked which team won the 1986 World Cup.

The chatbot answered that it was Argentina, with no explanation for its earlier gaffe.

The troubling part is that AI’s erroneous answers are often written so confidently that they blend into the text around them, seeming completely plausible. They can also be elaborate, as detailed in a lawsuit filed against OpenAI, in which ChatGPT allegedly concocted an entire story of embezzlement that was then shared by a journalist.

How to avoid AI’s hallucinations

While AI can lead you astray with even single-word answers, it’s more likely to go off the rails when writing longer texts.

“From a single prompt, AI can generate a blog or an eBook. Yes, that’s amazing – but there’s a catch,” Weaver warns. “The more it generates, the more editing and fact-checking you’ll have to do.”

To reduce the chances that your AI tool starts spinning hallucinatory narratives, Weaver says it’s best to create an outline and have the bot tackle it one section at a time. And then, of course, have a person review the facts and stats it adds.
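In practice, that workflow might look like the Python sketch below. The generate function is a hypothetical stand-in for whichever AI writing tool you actually use, and the outline topics are just examples.

outline = [
    "What machine learning bias is",
    "How biased training data skews ad targeting",
    "Steps for keeping human editors in the loop",
]

def generate(prompt: str) -> str:
    # Hypothetical stand-in for a call to your AI writing tool.
    return f"[draft text for: {prompt!r}]"

draft = []
for section in outline:
    # Short, focused prompts leave the model less room to hallucinate
    # than a single "write me an eBook" prompt does.
    draft.append(generate(f"Write 150 words for a blog section titled: {section}"))

# A human editor still reviews every section's facts and stats before publishing.
print("\n\n".join(draft))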

Risk #3: Misapplication of AI tools

Every morning we wake up to a new crop of AI tools that seemingly sprouted overnight like mushrooms after a rainstorm.

But not every platform is built for all marketing functions, and some marketing challenges can’t (yet) be solved by AI.

AI tools have limitations

ChatGPT is a great example. The belle of the AI ball is fun to play with (like asking it to explain, in the style of the King James Bible, how to remove a peanut butter sandwich from a VCR). And it can churn out some surprisingly well-written short-form answers that bust up writer’s block. But don’t ask it to help you do keyword research.

ChatGPT falls short here because of its relatively old data set, which only includes information from before 2022. Ask it to suggest keywords for “AI marketing” and its answers won’t jibe with what you’ll find in dedicated tools like Twinword or Contextminds.

Meanwhile, both Google and Facebook have new AI-powered tools to help marketers create ads, optimize ad spend, and personalize the ad experience. A general-purpose chatbot can’t solve those challenges.

Google announced a slew of AI upgrades to its search and ads management products at the 2023 Google Marketing Live event.

You can overuse AI

Give an AI tool a single task and it can over-index on that one goal, losing sight of the bigger picture. Nick Abbene, a marketing automation expert, sees this often with companies focused on improving their SEO.

“The biggest problem I see is using SEO tools blindly, over-optimizing for search engines, and disregarding customer search intent,” Abbene says. “SEO tools are great for signaling to search engines quality content. But ultimately, Google wants to match the searcher’s ask.”

How to avoid misapplication of AI tools

A wrench isn’t the best option for pounding nails. Likewise, an AI writing assistant may not be good for creating web pages. Before you go all in on any one AI option, Abbene says to get feedback from the tool’s builder and other users.

“In order to avoid mis-selection of AI tools, understand if other marketers are using the tool for your use case,” he says. “Feel free to request a product demo, or trial it alongside some other tools that offer the same functionality.”

Websites like Capterra let you quickly compare multiple AI platforms.

And once you find the right AI tool stack, use it to aid the process, not take it over. “Don’t be afraid to use AI tools to augment your workflow, but use them just for that,” Abbene says. “Begin each piece of content from first principles, with quality keyword research and understanding search intent.”

Risk #4: Homogeneous content

AI can write an entire essay in about 10 seconds. But as impressive as generative AI has become, it lacks the nuance to be truly creative, leaving its output often feeling, well, robotic.

“While AI is great at producing content that’s informative, it often lacks the creative flair and engagement that humans bring to the table,” Weaver says.

AI is made to imitate

Ask a generative AI writing bot to pen your book report, and it’ll easily spin up 500 words that competently explain the main theme of Catcher in the Rye (assuming it doesn’t hallucinate Holden Caulfield as a bank robber).

It can do that because it’s absorbed thousands of texts about J. D. Salinger’s masterpiece.

Now ask your AI pal to write a blog post that explains a concept core to your business in a way that encapsulates your brand, audience, and value proposition. You might be disappointed. “AI-generated content doesn’t always account for the nuances of a brand’s personality and values and may produce content that misses the mark,” Weaver says.

In other words, AI is great at digesting, combining, and reconfiguring what’s already been created. It’s not great at creating something that stands out against existing content.

Generative AI tools are also not good at making content engaging. They’ll happily churn out huge blocks of words with nary an image, graph, or bullet point to give weary eyes a rest. They won’t pull in customer stories or hypothetical examples to make a point more relatable. And they’d struggle to connect a news story from your industry to a benefit your product provides.

How to avoid homogeneous content

Some AI tools, like Writer, have built-in features to help writers maintain a consistent brand personality. But you’ll still need an editor to “review, and edit the content for brand voice and tone to ensure that it resonates with the audience and reinforces the organization’s messaging and objectives,” Weaver advises.

Editors and writers can also see an article the way other humans will. If there’s an impenetrable block of words, they’ll be the ones to break it up and add a little visual zhuzh.

Use AI content as a starting point, a way to kickstart your creativity and research. But always add your own personal touch.

Risk #5: Loss of SEO

Google’s stance on AI content has been a little murky. At first, it seemed the search engine would penalize posts written with AI.


[Image: tweet from John Mueller on AI]

More recently, Google’s developer blog said that AI-generated content is OK in its book. But that blessing comes with a significant caveat: only “content that demonstrates qualities of what we call E-E-A-T: expertise, experience, authoritativeness, and trustworthiness” will impress the human search raters who continually evaluate Google’s ranking systems.

Trust is clutch for SEO

Among Google’s E-E-A-T factors, the one that rules them all is trust.


We’ve already discussed that AI content is prone to factual fallacies, making it inherently untrustworthy without human supervision. It also fails to meet the supporting requirements: by its nature, it isn’t written by someone with expertise, authority, or experience on the topic.

Take a blog post about baking banana bread. An AI bot will give you a recipe in about two seconds. But it can’t wax poetic on the chilly winter days spent baking for its family. Or talk about the years it spent experimenting with various types of flour as a commercial baker. Those perspectives are what Google’s search raters look for.

It seems to be what people crave, too. That’s why so many of them are turning to real people on TikTok to learn things they used to find on Google.

How to avoid losing SEO

The great thing about AI is it doesn’t mind sharing bylines. So when you do use a chatbot to speed up content production, make sure you reference a human author with credentials.

This is especially true for sensitive subjects like healthcare and personal finance, which Google calls Your Money, Your Life topics. “If you’re in a YMYL vertical, prioritize authority, trust and accuracy above all else in your content,” advises Elisa Gabbert, Director of Content and SEO for WordStream and LocaliQ.

When writing about healthcare, for example, have your posts reviewed by a medical professional and reference them in the post. That’s a strong signal to Google that your content is trustworthy, even if it was started in a chatbot.

Risk #6: Legal challenges

Generative AI learns from work created by humans, then creates something new(ish). Copyright law is unclear on both the input and the output of these AI content models.

Existing work is likely fair game for AI

To illustrate (pun intended) the copyright question for works that feed machine learning models, we turn to a case reported by technologist Andy Baio. As Baio explains, LA-based artist Hollie Mengert learned that 32 of her illustrations had been absorbed into an AI model, then offered via open license to anyone who wanted to recreate her style.

Caption: a collection of artist Hollie Mengert’s illustrations (left) compared with AI-generated illustrations based on her style, as curated by Andy Baio.

The story gets more complicated when you learn that she created many of her images for clients like Disney, who actually own the rights to them.

Can illustrators (or writers or coders) who find themselves in the same spot as Mengert successfully sue for copyright infringement?

There’s not yet a clear answer to the question. “I see people on both sides of this extremely confident in their positions, but the reality is nobody knows,” Baio told The Verge. “And anyone who says they know confidently how this will play out in court is wrong.”

If the AI you use to create an image or article was trained on thousands of works from many creators, you’re not likely to lose a court case. But if you feed the machine ten Stephen King books and tell the bot to write a new one in that style, you could be in trouble.

Disclaimer: We aren’t lawyers so please get legal advice if you’re unsure.

Your AI content may not be protected either

What about content you create using a chatbot? Is it covered by copyright law? For the most part, no, unless you’ve done considerable work to edit it. That means you’d have little recourse if someone repurposed (read: stole) your posts for their own blog.

And for content that is protected, it may be the AI’s programmer, not you, who holds the rights. Many countries consider the maker of the tool that produced a work to be its creator, not the person who typed the prompt.

How to avoid legal challenges

Start by using a reputable AI content creation tool. Find one with plenty of positive reviews that clearly addresses its stance on copyright.

Also, use your good judgment to decide if you’re intentionally copying a creator’s work or simply using AI to augment your own.

And if you want a fighting chance in court to protect what you produce, make lots of substantial changes. Or use AI to help create an outline, but write most of the words yourself.

Risk #7: Security and privacy breaches

AI tools present marketers with a broad range of potential threats to their system’s security and data privacy. Some are direct attacks from malicious actors. Others are simply users unwittingly giving sensitive information to a system designed to share it.

Security risks from AI tools

“There are plenty of products out there that look, feel, and behave like legitimate tools, but are in fact malware,” Elaine Atwell, the Senior Editor of Content Marketing at endpoint security provider Kolide, told us. “They’re extremely difficult to differentiate from legitimate tools and you can find them in the Chrome store right now.”

Type any version of “AI tools” into the Google Chrome store and you’ll find no shortage of options.

Atwell wrote about these risks on the Kolide blog. In her article, she referenced an incident where a Chrome extension called “Quick access to Chat GPT” was actually a ruse. Once downloaded, the software hijacked users’ Facebook accounts and swiped all of the victims’ cookies, even those used for security. Over 2,000 people downloaded the extension every day, Atwell reported.

Privacy unprotected

Atwell says even a legitimate AI tool can present a security risk. “…right now, most companies don’t even have policies in place to assess the types and levels of risk posed by different extensions. And in the absence of clear guidance, people all over the world are installing these little helpers and feeding them sensitive data.”

Let’s say you’re writing an internal financial report to be shared with investors. Remember that AI networks learn from what they’re given in order to produce outputs for other users. Any data you place in an AI chatbot could be fair game for people outside your company, and it may pop up if a competitor asks about your bottom line.

How to avoid privacy and security risks

The first line of defense is to make sure a piece of software is what it claims to be. Beyond that, be cautious about how you use the tools you choose. “If you’re going to use AI tools (and they do have uses!) don’t feed them any data that could be considered sensitive,” Atwell says.
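Here’s one way to put that advice into practice: a small Python sketch that scrubs obviously sensitive values before any text gets pasted into a third-party tool. The patterns are illustrative only, not a complete data-loss-prevention system.

import re

PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "MONEY": re.compile(r"\$\d[\d,]*(?:\.\d+)?"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    # Replace each match with a labeled placeholder before it leaves your machine.
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

report = "Q3 revenue hit $4,200,000. Questions? Email cfo@example.com."
print(redact(report))
# -> Q3 revenue hit [MONEY REDACTED]. Questions? Email [EMAIL REDACTED].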

Also, while you’re reviewing AI tools for usefulness and bias, ask about their privacy and security policies.

Mitigate the risks of using AI for marketing

AI is advancing at an incredible rate. In less than a year, ChatGPT has already seen significant boosts in its capabilities. It’s impossible to know what we’ll be able to do with AI in even the next six to twelve months, nor can we anticipate the potential problems.

Here are several ways, drawn from the expert advice above, to improve your AI marketing outcomes while avoiding the most common risks:

  1. Have human editors review, fact-check, and de-bias everything AI produces
  2. Generate long content one outlined section at a time
  3. Vet each tool against your specific use case before going all in
  4. Add your brand voice, visuals, and credentialed bylines to AI drafts
  5. Make substantial edits to AI output if you want to protect your copyright
  6. Never feed sensitive data to third-party AI tools

We’d like to thank Elaine Atwell, Brett McHale, Nick Abbene, and Alaura Weaver for contributing to this post.

To recap, let’s review our list of risks that come with using AI for marketing:

  1. Machine learning bias
  2. Factual fallacies
  3. Misapplication of AI tools
  4. Homogeneous content
  5. Loss of SEO
  6. Legal challenges
  7. Security and privacy breaches

