Brands should avoid the term 'AI'. It's turning off customers
A study found that labeling products as "AI-powered" decreases purchase intentions due to trust issues and privacy concerns. Companies should focus on transparent messaging to improve consumer acceptance of AI.
A recent study published in the Journal of Hospitality Marketing & Management reveals that labeling products as "AI-powered" can deter customers from purchasing them. The research, conducted by Dogan Gursoy and his team, involved participants evaluating various products, with one group seeing them described as "high tech" and the other as using AI. Results showed a significant decrease in purchase intention for items labeled with AI, regardless of the product type. This hesitance stems from two main factors: cognitive trust, where consumers expect AI to be error-free, and emotional trust, which is influenced by limited understanding of AI technology. Additionally, concerns about privacy and data management further contribute to negative perceptions. Gursoy emphasizes that companies should avoid using "AI" as a mere buzzword and instead focus on transparent messaging that explains how AI benefits consumers. The study highlights a disconnect between the rapid advancements in AI technology and consumer acceptance, suggesting that brands need to address fears and misconceptions to improve trust and engagement.
- Labeling products as "AI-powered" can reduce customer purchase intentions.
- Consumer trust in AI is affected by expectations of error-free performance and emotional perceptions.
- Privacy concerns regarding data management contribute to negative views on AI.
- Companies should provide clear, transparent messaging about AI benefits rather than using it as a buzzword.
- There is a significant gap between AI advancements and consumer acceptance.
Related
What is 'AI washing' and why is it a problem?
Companies engaging in AI washing exaggerate or misrepresent AI use in products. Regulators in the US act against false claims, while the UK has rules like the Advertising Standards Authority's code. Experts foresee AI losing marketing appeal as it becomes common.
AI washing: Silicon Valley's big new lie
AI washing is a deceptive marketing practice in Silicon Valley, exaggerating AI's role in products. It misleads by promoting AI as solving all problems independently, distorting funding priorities and creating unrealistic expectations.
Using the term 'AI' in product descriptions reduces purchase intentions
A study from Washington State University found that mentioning "artificial intelligence" in product descriptions can reduce consumer purchase intentions by lowering emotional trust, especially for high-risk items.
Study: Consumers Actively Turned Off by AI
A study in the Journal of Hospitality Marketing & Management found that mentioning "artificial intelligence" in product descriptions reduces emotional trust and purchase intentions, especially for high-risk items.
- Many commenters associate the "AI" label with unreliable or overly complicated products, leading to decreased purchase intentions.
- There is a consensus that companies should focus on clear communication about the actual benefits of their products rather than relying on buzzwords like "AI."
- Some believe that the marketing of AI is more aimed at investors than consumers, suggesting a disconnect between product development and consumer needs.
- Several users express a desire for innovation that genuinely improves products rather than superficial enhancements labeled as AI.
- Comments highlight a growing fatigue with the overuse of the term "AI," comparing it to past marketing trends that ultimately failed to deliver real value.
The “AI” label also indicates the solution is way over complicated and simpler ways to improve the product have been ignored. For instance, Confluence now has an “AI” chatbot. Search is still substantially worse than grep.
You can try to wow the customer with a bunch of words but it’s all fluff - and everyone is implementing AI now, and usually these “implementations” are ChatGPT with RAG on the docs or something else that everyone’s done before. What you end up getting is only slightly better than typing in chatgpt.com.
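The "ChatGPT with RAG on the docs" pattern the comment dismisses really is about this small. A toy sketch to illustrate the point — the retriever here is naive keyword overlap and the doc snippets are invented; a real setup would swap in embeddings and an actual LLM API call:

```python
import re

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank doc chunks by how many words they share with the query (toy retriever)."""
    q = set(re.findall(r"\w+", query.lower()))
    score = lambda d: len(q & set(re.findall(r"\w+", d.lower())))
    return sorted(docs, key=score, reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Stuff the top chunks into a prompt template -- the whole 'implementation'."""
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# Hypothetical doc snippets for illustration.
docs = [
    "To reset your password, open Settings and choose Security.",
    "Invoices are emailed on the first business day of each month.",
]
print(build_prompt("How do I reset my password?", docs))
```

Everything of value is in the docs and the model; the "AI product" layer is a ranking function and a string template.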
If you’ve managed to build something that solves a problem, just explain how it solves it.
AI is the same thing. It's a means to an end. You need to be talking about what you deliver with it. Not about how you deliver it.
In the case of CNN, which hosts this article, a lot of their content is probably passing through some LLMs at this point. They don't have to advertise that, of course. If they do their job right, you barely notice it. Arguably, they should be doing a lot more with AI than they already are. The news business is fiercely competitive; margins are small, and they have to produce more content with fewer people. LLMs can help them do that.
But then this year they came out with "Apple Intelligence", which people will just see as "AI". So I guess they finally gave up on that.
For those interested in the actual study instead of the headline, here are the links to the original paper:
Paper page: https://www.tandfonline.com/doi/full/10.1080/19368623.2024.2...
PDF: https://www.tandfonline.com/doi/epdf/10.1080/19368623.2024.2...
Ubiquitous design and UX beat throwing around nebulous technical cant that 99.999% of people don't understand and are half terrified will take their jobs. While automating the navigation of ambiguous requests and delivering more open-ended results with less exhaustive and tedious coding is cool, these features have to deliver useful advantages to users to be essential, or they're just going to come off as "me too" bandwagon jumping.
"Self-hosted", "open architecture", "cloud optional" are terms I like to see but only because I'm weirdo who tinkers with things sometimes but don't necessarily want to spend all of my time fixing or supporting fragile hacks.
My perspective: consumers have seen new tech emerge a thousand times and now favor reliability over the flavor-of-the-month tech, especially when it was designed to sell, not to solve real problems.
This seems just like TVs, where many people long for dumb TVs with a quality display: they are faster and more reliable than what the smart-TV market is providing.
Then you have Google writing letters for your children or showing how their camera AI integrations can help you live a lie. Frankly I'm glad to see data showing consumers are turned off.
Oh! And then for the executives they have Matthew McConaughey and Idris Elba talking about data security and productivity.
- VMware
- Azure
- IBM
etc.
Nowadays:
- Get your employees Copilot-assisted Intel/Windows/etc. to boost productivity
We’re currently doing a bunch of “AI at the Edge” projects, even though it’s hardly justified (“edge” in this case is just an on-prem datacenter), but you need to use buzzwords like these to convince executives.
For existing companies with AI features: more useful, but mostly an LLM bolted on with the same use cases. They can improve the product if used right, but to me it's most often just a gimmick.
The problem with LLMs: they mostly generate stuff they have already seen and are bad at truly new stuff. They make mistakes, so you need to put time and energy into reviewing the output.
For stuff that transforms data it’s useful. Like rewriting a piece of text.
It’s also useful for search queries on the corpus the LLM is trained on.
They’re good at pattern recognition and, lastly, human-like voice interfaces.
But for generating novel stuff: good luck reviewing it.
Blindly copy-pasting the output of an LLM is quite dangerous; the result can be plain wrong.
At least that’s my experience.
I'm a huge crypto enthusiast, but whenever I see the term "crypto" being used somewhere I just cringe.
Meanwhile another conference I am going to has several “machine learning” talks which could have been titled something more informative like “image analysis” or “regression modelling”.
* Nondeterministic
* Untrustworthy
* Uncertain
* Marketing buzzword
* Gimmick
* Probably requires an external service or expensive hardware
* Probably collecting my data
If other people generally have the same perception, I'm not surprised it would drive people away.
If that’s all you have in your sales pitch then you are failing.
By all means, please use "AI" everywhere.
Sell benefits not features.
Haha. So they made a splash talking about AI. Well played.
Every time that happens, I go back to that meme created after Google I/O, when Sundar Pichai said "AI, AI, AI..." like 113 times.
This should represent a Cambrian Explosion of delightful innovation on par with anything that followed the emergence of the personal computer, or the web, or the smartphone. And there is in fact a ton of amazingly cool stuff happening under the tidal wave of shitty monetization and financialization.
But the robber barons, hustlers, and opportunists have gone for the jugular: on how quickly and completely this event can be politicized (the lobbying and laws, and the speed and ruthlessness around them, are an embarrassment); on how quickly it can be “monetized” via LLM spam and pump-and-dump Mag7 market-cap manipulation; and on how directly it can be converted into minimum-cost, offshore-customer-service-style aspirations, where consumers get a broken chatbot instead of a person while simultaneously facing pressure, real or perceived, that their job is about to be replaced by some inferior “agent” that isn’t done.
Machine learning is an amazing technology that should be strictly delightful in the hands of people who live to build awesome things that make people happy and prosperous and safe. “AI” has come to mean LLM spam, Thiel/Altman-style TESCREAL fascist politics, a massive surge in ubiquitous digital surveillance (which somehow still had headroom), and the next turn of the crank on the enshittification of modern life courtesy of the Battery Club.
The socially useful and technically exciting future of AI is just waiting on the fall of the “AI” people. It’s so close.
This seems more relevant to physical consumer goods
I share the sentiment but I don’t think this is tech related.
* The AI Browser
* AI note taking app
* AI photo gallery
* AI habit tracker
* AI budget planner
* AI music player
* AI bank
* AI hike planner
* AI icon pack(wtf?!)
* AI launcher
* AI camera
* AI news
* AI PDF reader
* AI comics viewer
* AI Maps
* AI food delivery
* AI shopping experience
* AI calorie tracker
* AI video editor
* AI backup
* AI share
* AI Authenticator(??)
* AI partner
* AI RoboAdviser(?)
* AI Wallpapers
* AI package tracker(?!)
* AI health tracker
And plenty of other things I'm forgetting. Even things that already had AI before are now new "AI." This is getting out of hand.
Seriously though, it seems AI is being marketed toward investors, and that wherever it's included in a product, it's just so they can say it's on the product roadmap. If you've been in the game long enough, like myself, you get to recognize these hype bubbles (CORBA, anyone?) that claim to be about to take over the world and then fizzle out.
I guess that is what Tech is nowadays.
So yes when I see 'AI' mentioned in products I shake my head before looking at details. Case in point: iOS 18.1 Beta. The Great Apple Intelligence. It does nothing. Just... you won't even notice it.
The irony… I cannot imagine a more hilariously negative way to reduce a title than to say someone is the Taco Bell professor.
You see, a calm, factual, truthful, and informed conversation about different technologies' actual maturity, merits, and risks is in nobody's interest. /s
> McKinsey research estimates that gen AI could add to the economy between $2.6 trillion and $4.4 trillion annually while increasing the impact of all artificial intelligence by 15 to 40 percent.
When McKinsey releases a report, all product managers will present it in the next marketing slide and the ad budget will follow. If it fails to materialize, it's never their fault of course. The leader in market research is to blame.
Google has no business advertising Gemini on TV. Like, who is the target really? But an enterprise not embracing AI after the biggest market research firm says it's the future? I'm putting my money on Long Island Iced Tea AI next.
[0]: https://www.mckinsey.com/industries/technology-media-and-tel...