Using the term 'AI' in product descriptions reduces purchase intentions
A study from Washington State University found that mentioning "artificial intelligence" in product descriptions can reduce consumer purchase intentions by lowering emotional trust, particularly for high-risk items.
A study conducted by researchers at Washington State University reveals that using the term "artificial intelligence" in product descriptions can negatively impact consumer purchase intentions. The research, published in the Journal of Hospitality Marketing & Management, involved over 1,000 U.S. adults and examined how AI disclosure affects consumer behavior.

The findings indicate that mentioning AI tends to lower emotional trust among consumers, which in turn decreases their likelihood of making a purchase. This effect was particularly pronounced for high-risk products, such as expensive electronics and medical devices, where consumers already feel uncertain. In experiments, participants shown product descriptions that included the term "artificial intelligence" expressed less interest in purchasing the items than those who saw descriptions without it.

The study suggests that marketers should reconsider how they present AI in their product descriptions, focusing on the features and benefits of the products rather than emphasizing AI. The lead author, Mesut Cicek, stressed the importance of emotional trust in consumer perceptions of AI-powered products and recommended strategies to strengthen it. The research highlights the potential pitfalls of using AI terminology in marketing, especially for products that carry higher perceived risks.
Related
What is 'AI washing' and why is it a problem?
Companies engaging in AI washing exaggerate or misrepresent AI use in products. Regulators in the US act against false claims, while the UK has rules like the Advertising Standards Authority's code. Experts foresee AI losing marketing appeal as it becomes common.
AI washing: Silicon Valley's big new lie
AI washing is a deceptive marketing practice in Silicon Valley, exaggerating AI's role in products. It misleads by promoting AI as solving all problems independently, distorting funding priorities and creating unrealistic expectations.
Unstoppable AI scams? Americans admit they can't tell what's real anymore
Americans are feeling vulnerable to scams with AI integration. 48% feel less "scam-savvy," struggling to identify scams, especially if impersonating someone they know. Concerns include fake news, robo-callers, and phishing attempts. Financial sector needs more protection. 31% have privacy, data, and fraud concerns despite some positive views on AI. 69% believe AI significantly impacts financial scams, with only 25% seeing a positive impact on financial safety. Recommendations include verifying identities and using advanced algorithms to prevent fraud. Vigilance and regulation are needed as AI technology advances and scammers adapt.
All the existential risk, none of the economic impact. That's a shitty trade
Despite high expectations, AI advancements have not significantly impacted productivity or profits. Concerns about creating highly intelligent entities pose potential existential threats, urging careful monitoring and management of AI implications.
You can market a product as "cheap" in the sense of "you as a customer don't have to pay much"; people like not spending money.
You can't market a product as "cheap" in the sense of "no expense spent"; people look at that and think, "then why are you charging me for this?"
We've all seen the failure modes of LLMs and art generators, so AI comes across as the latter rather than the former, despite the success stories. It isn't sufficient to point out that AI can do some things better than any human, because those things are very narrow domains like chess, protein folding, and millisecond stock-market trading. Nor is it sufficient to compare AI against the average human, because of our division of labour: AI may be better at architecture than an accountant and better at accountancy than an architect, but you wouldn't hire someone for either role unless they were skilled at that role.
I assume, like me, most consumers have been saturated with AI, tried it a few times, and found it doesn’t deliver on simplifying/improving anything. They tried it, it hasn’t helped and they’ve adjusted their mindset accordingly.
It needs to be kept back as a toolset that humans use to address human problems, not a front-line feature. It's not trustworthy, it holds no responsibility, it's not predictable, and it makes the company seem less invested in humanity itself.
For items with menu pictures, it gives a definitive description, e.g., "Marinated pork with onions, cilantro, and pineapple, folded in a grilled tortilla. Served with lime wedges and a side of grilled jalapeño and sautéed onion."
For items without pictures, it hedges: "Folded flour tortilla filled with seasoned chicken and melted cheese, typically includes a blend of Mexican cheeses."
Part of me finds it pretty neat, but the other part wonders how long it will last.
That makes sense. Anyone who has used ChatGPT knows that while it is a great tool, it occasionally makes mistakes. In some products, mistakes can have significant consequences, so even infrequent mistakes are not allowed. In other products, infrequent mistakes may not be an issue.
With that said, your whole value proposition can’t be just “we have AI.” You should still talk about the desired outcomes your product helps achieve. The fact that it uses AI is just part of “how it works,” which is secondary.
And sometimes the “how it works” is better left unsaid. For example, Amazon’s logistics operation is out of this world, with robots and AI and incredibly sophisticated supply chains… But people only care that their stuff arrives in 2 days.
Of course, there are some industries and markets that desire the capabilities only AI can provide. But that's the point: analysis should precede the message. We should market the benefits. I've seen a few people claim that AI isn't a benefit but a feature. I'd argue it's not even a feature; it's an implementation detail, like using an object-oriented programming language or a relational database. It has advantages and disadvantages.
Focus on the needs of the customer and the industry. Describe the benefits. For customers and investors alike, remove the veil of opacity around AI by describing simply what the AI is doing, and how and why.
It's interesting to see a study that seems to corroborate my anecdotal experiences. It's a marketing study though, so it shouldn't be overly generalized until more studies reproduce the results. Studies about human behavior tend to be difficult to reproduce and can yield conflicting conclusions. I wouldn't be surprised to see another study with slightly different questions or methods come to the opposite conclusion, especially if they don't control for consumer segments, industries, or types of products.
I think there's space for AI generation. I see a lot, and I do mean a lot, of popular AI content: people use it in their YouTube videos to generate images, and people generate silly videos of cats set to a meow version of Billie Eilish. And that's about as much as people want to use AI for.
Now, I do think there are good AI products out there, like Copilot, and Apple's AI on the iPhone seemed interesting. But most AI implementations just feel too jarring and too "okay, but it's literally ChatGPT on your data" to be useful.
At the middle level, managers want to build AI products so they appear innovative and get promoted.
At the low level, designers want to build AI products to broaden their skill set so they can get a better job.
Consumers generally couldn't care less. There are a few products that work better because they use neural networks, but that's an implementation detail. Does the TV look nice? Great, I'll take it; I couldn't care less whether it's innovative.
Even six months ago, there were still "AGI" discussions happening, and talk about how "over" it was for white-collar jobs, etc.
Seems there's an increasingly negative sentiment around "AI" now, especially (or largely) among the non-technical general public.
When was this conducted? Relatively recently, after the commercial boom of LLMs, or prior, when AI was less front and center in the minds of an average consumer?
What are the characteristics of the study participants? Career choices or technical knowledge would be most interesting to pick apart as it relates to findings.
Anecdote Corner:
Prior to the LLM boom and OpenAI, I was real bearish on companies throwing about "AI capabilities," because it was, from my few decades of engineering experience, just marketing mumbo jumbo. I'm more bullish on the direction and viability of "AI" these days, and I'm really excited to see what the future holds, but I'm even more skeptical of "AI" as a product capability, for a few reasons:
1. I honestly don't care if something is powered by AI or not. Solve my problem, don't attempt to distract me with "AI" as a selling point.
2. I've seen what's possible with contemporary LLMs - it's awesome, but as many posts here on HN have shown, the technology is far from perfect. I'm being patient.
Give me AI when I _want_ AI, don't give it to me as a knock-off replacement for something else.
The same happens with AI. I want to know whether you're going to solve my problem, not how. If AI won't do the job, I want you to use something else.
I guess many companies worried they would be left out, so they oversold it a bit. I think AI tools are great and will continue to improve and help us improve.
But the hype was a bit too much; many companies just rebranded their products as "AI" without really changing much, if anything at all.
And chatbots are still horrible on 100% of support sites. Why aren't chatbots AI already?
> In the experiments, the researchers included questions and descriptions across diverse product and service categories. For example, in one experiment, participants were presented with identical descriptions of smart televisions, the only difference being the term “artificial intelligence” was included for one group and omitted for the other.
I mean yeah, what consumer is going to jump at the chance to own a TV with "artificial intelligence"?
I'd be interested to know the full range of "diverse product and service categories" they asked about here. Not interested enough to pay for access to the paper though!
> More that's tight means more to see, more for them, not more for me
> That can't help me climb a tree in ten seconds flat
-Dar Williams