By Ryan Turner, Founder of EcommerceIntelligence.com
A major part of effective Ecommerce marketing involves writing search engine optimized content, a job that can be rather tedious. It is no wonder some writers attempt to simplify the task with artificial intelligence (AI) generated content. However, this raises the question of whether AI copywriting tools can actually produce helpful content that solves the consumer’s problem by answering their search intent. Is their contribution to the web a net positive or a net negative?
AI content generators rely on pattern identification by crawling through billions of sentences online. The tools then use a transformer model to generate predictive text based on the learning samples; that’s where the major concern lies for critics of AI writing tools. This article will delve into the ethics and risks of using AI-generated content in Ecommerce marketing and how it could potentially be harmful for your brand in the future.
How do AI content writers work?
AI content generators rely on a set of inputs, such as keywords or topic headlines, to predict entirely ‘new’ content word by word. At the very core of the generator is a collection of machine learning algorithms that identify patterns in human language. These language models rely on mathematical functions arranged as neural networks, wired in a way loosely similar to the neurons in the brain. During training, the model adjusts the strengths of its neural connections to reduce prediction error, a process that is heavily parallelized. These models are pre-trained on billions of pages containing all manner of content from the internet.
A large number of AI content generators rely on the GPT-3 language model, which uses deep learning to create human-like text. The key phrase here is ‘human-like’. When it comes to predicting text, the model uses generative pre-training, meaning its predictions are based on the patterns it learned from the training data. While the model may come up with uncanny and sometimes almost genius sentences, it is still driven by statistics, not actual human-level intelligence.
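The word-by-word prediction described above can be illustrated with a deliberately tiny sketch. The Python snippet below is a toy bigram model, not how GPT-3 actually works (GPT-3 uses a transformer over learned token representations), but it shows the core idea: learn which words tend to follow which from training text, then generate ‘new’ text by sampling from those statistics. The corpus and function names here are invented for illustration.

```python
import random
from collections import defaultdict

# Tiny corpus standing in for the billions of pages a real model trains on.
corpus = (
    "our store ships fast and our store ships free "
    "our customers love fast shipping and free returns"
).split()

# Learn the pattern: record which word follows each word in the training text.
next_words = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    next_words[current].append(following)

def generate(seed, length=5, rng=None):
    """Predict text word by word by sampling from learned word-pair statistics."""
    rng = rng or random.Random(0)  # fixed seed for repeatable output
    words = [seed]
    for _ in range(length):
        candidates = next_words.get(words[-1])
        if not candidates:
            break  # the model has no learned continuation for this word
        words.append(rng.choice(candidates))
    return " ".join(words)

print(generate("our"))
```

Notice that the generator can only ever recombine what it has seen: every word and every transition comes straight from the training text, which is exactly why biases and errors in the training data resurface in the output.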
Ethical concerns and risks associated with AI-generated content marketing
Since AI content generators learn from both supervised and unsupervised sources, there’s an inherent risk that the model will be exposed to biased and toxic content. Keep in mind that the models cannot fully comprehend what the content they are trained on really means. Think of it as a big game of word association: sooner or later, the algorithm learns to associate words that frequently appear together in similar contexts in the training data.
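That “word association” idea can be made concrete with a small sketch. The snippet below counts co-occurrences in a handful of invented toy documents; the most strongly associated word falls out of raw frequency alone, with no understanding of meaning involved, which is how skewed training data turns into skewed associations. All document text and names here are hypothetical.

```python
from collections import Counter
from itertools import combinations

# Toy documents; the model never knows what the words mean,
# it only sees which words appear together.
docs = [
    "nurse caring gentle",
    "nurse caring kind",
    "engineer logical analytical",
    "engineer logical precise",
]

# Count how often each pair of words co-occurs in the same document.
pair_counts = Counter()
for doc in docs:
    for a, b in combinations(sorted(doc.split()), 2):
        pair_counts[(a, b)] += 1

def most_associated(word):
    """Return the word most frequently seen alongside `word`."""
    scores = Counter()
    for (a, b), n in pair_counts.items():
        if a == word:
            scores[b] += n
        elif b == word:
            scores[a] += n
    return scores.most_common(1)[0][0]

print(most_associated("nurse"))     # strongest association is pure frequency
print(most_associated("engineer"))
```

If the toy documents had paired a profession with a stereotype or a slur instead, the exact same counting would dutifully learn that association too, which is the risk the paragraph above describes.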
While AI writing tools can create well-structured content, there’s still the likelihood that they will spew hate speech in the middle of an otherwise normal passage. Researchers attribute this to the presence of hate speech-related words in the training data, which leads the algorithm to form statistical relationships between the phrases it is trained on. Still, it doesn’t fully understand the context or meaning behind them.
It goes without saying that if this kind of content is published without being thoroughly checked and edited by a human, it could have a real negative impact on any Ecommerce brand using AI to generate web content for the purpose of promoting their brand.
While most large language models train on billions of parameters in what is best described as brute-force scale, there are still scenarios where their predictions do not make any sense at all. Professor Emily M. Bender, a computational linguist at the University of Washington, referred to these models as “stochastic parrots” owing to their echo chamber-like ability to make ridiculous yet comprehensible statements. Because the algorithms merely introduce randomness into remixed versions of existing content, they retain the biases of the training data.
When you prompt a writing tool to generate text on a topic that’s not very common on the internet, chances are the model will have even less data to learn from. This leads to results of varying quality, and often the generated text will contain filler that has nothing to do with the topic at hand. The predictions may also contain statements that are not fluent and paragraphs that lack a logical flow of ideas.
Potential for AI content to see poor search engine performance
In a recent blog post, Google stated that its upcoming update will focus on helpful content in an effort to close the loopholes that websites have been using to game the ranking system. The fix involves changes to the search engine’s ranking signals to better assess content worthiness. The update is expected to change how the algorithms evaluate a website’s content and how well it satisfies the searcher’s intent.
With this update, Google will give top priority to people-first content, making quality evaluation paramount. This is bound to negatively impact websites with AI-generated content that doesn’t clearly demonstrate depth of knowledge and expertise on the topics covered. While the new signal is automated and based on machine-learning models, it will not mark content as spam or trigger a manual action. Instead, Google’s ranking algorithm will consider the signal when ranking websites in search engine results pages (SERPs). If you are looking to rank higher, it’s high time you got rid of unhelpful content, especially if you use extensive automation to write content on many topics.
Ryan Turner, founder of Ecommerce marketing agency EcommerceIntelligence.com, said the following when asked about the trend of Ecommerce businesses using AI to make content marketing production faster and cheaper: “It is something we’re wary of for sure. Many brands we speak with have ambitious content publishing goals which are potentially focused too much on quantity instead of quality. We haven’t seen search engines take any kind of definitive action against AI content yet, but it is something many in the industry feel will happen at some point in the near future.”
Misinformation and disinformation in AI-generated content
AI writing tools are bound to repeat inaccurate information that already exists in the training data. In this case, the tools generate inaccurate information without the intention of causing harm, which is commonly referred to as misinformation. However, as AI tools advance in complexity, there is a fear that they could be used to deliberately generate false information, a common disinformation tactic. This is already the case with AI tools that write news articles convincing enough to dupe human readers.
In an effort to stay competitive with larger brands in their market, some Ecommerce marketers publish AI-generated articles without much human proofreading or editing. This mass publication of AI-generated content is more likely to spread disinformation by repeating existing malicious information in a never-ending cycle. Some researchers have estimated that 99% of the internet will be based on AI-generated content by 2025 if we continue at the current rate of adoption, raising concerns about just how accurate all that information will be.
Summary for Ecommerce marketers
There’s no denying that artificial intelligence is here to stay. While AI writing tools have improved drastically over the last couple of years, there still exist some serious ethical and operational risks associated with publishing AI-generated content, particularly when it represents a premium brand online. AI writing tools can help generate article and blog post ideas and headlines, as well as the overall structure of a piece. However, it is probably wise to rely on real humans to do most of the actual writing.