Study: Consumers Actively Turned Off by AI
A study in the Journal of Hospitality Marketing & Management found that mentioning "artificial intelligence" in product descriptions reduces emotional trust and purchase intentions, especially for high-risk items.
A recent study published in the Journal of Hospitality Marketing & Management reveals that consumers are increasingly turned off by products marketed with the term "artificial intelligence." The research, which involved 1,000 respondents, found that mentioning AI in product descriptions significantly lowers emotional trust and decreases purchase intentions. Lead author Mesut Cicek from Washington State University noted that emotional trust is crucial in how consumers perceive AI-powered products. In experiments, participants showed a marked preference for a smart television when the term "artificial intelligence" was omitted from its description. This aversion was particularly strong for high-risk purchases, such as expensive electronics and medical devices, where consumers are more concerned about potential financial loss or safety risks.

The study's findings suggest a broader trend of growing consumer skepticism towards AI, coinciding with a report from Gartner indicating that the hype surrounding generative AI has peaked. Companies are increasingly incorporating AI claims into their products despite unresolved issues and high costs, leading to consumer fatigue. Cicek advises marketers to reconsider how they present AI in product descriptions, suggesting that emphasizing features and benefits without using AI buzzwords may be more effective, especially for high-risk items.
Related
What is 'AI washing' and why is it a problem?
Companies engaging in AI washing exaggerate or misrepresent AI use in products. Regulators in the US act against false claims, while the UK has rules like the Advertising Standards Authority's code. Experts foresee AI losing marketing appeal as it becomes common.
AI washing: Silicon Valley's big new lie
AI washing is a deceptive marketing practice in Silicon Valley, exaggerating AI's role in products. It misleads by promoting AI as solving all problems independently, distorting funding priorities and creating unrealistic expectations.
All the existential risk, none of the economic impact. That's a shitty trade
Despite high expectations, AI advancements have not significantly impacted productivity or profits. Concerns about creating highly intelligent entities pose potential existential threats, urging careful monitoring and management of AI implications.
Using the term 'AI' in product descriptions reduces purchase intentions
A study from Washington State University found that mentioning "artificial intelligence" in product descriptions can reduce consumer purchase intentions by lowering emotional trust, especially for high-risk items.
So I told myself: why don't I add AI features, like an AI assistant as the primary way of interacting with the app? Everyone loves AI, it's the future!
Then app retention tanked, along with install numbers, and no purchases were made after that. People didn't even bother leaving bad reviews.
Maybe we are in this strange situation where the people who make the products are hyped about this new tech while the consumers really hate it.
Like the artificial sweeteners maybe?
"0 calories and the same taste with the sugar? Why would anybody ever use sugar again, right? Lets short the sugar cane and corn fields and put all our money into artificial sweeteners production equipment and chemicals"
When I see AI assistance as a travel feature I assume it is not only going to be useless, but actively disruptive to my experience.
Branding the mistakes LLMs make as "hallucinations" was sort of brilliant in my opinion; it was a good way to disguise the fact that LLMs are mostly just really lucky. In my anecdotal experience it hasn't worked out too well though, so maybe it wasn't that brilliant.

Anyway, part of how AI has impacted our business (solar energy + investment banking) has been through things like how Microsoft Teams was supposed to be capable of transcribing meetings with AI. Now, we're an international organisation where English is at best the second language for people, and usually the third, so that probably has an impact, but it's so bad. I know I'm not personally the clearest speaker, especially if I'm bored, but the transcripts the AI makes of me are so hilariously bad that they often become the foundation for the weekly Friday meme in IT. Which may be innocent enough, but it hasn't been very confidence-building for the higher-ups who thought they wouldn't have to have someone write a summary of their meetings, and in typical top-brass style didn't read the transcripts until there was some contract issue.
This, along with how often AI "hallucinates", has meant that our top decision makers have decided to stop any AI within the Microsoft platform. Well, everything except the thing that makes PowerPoint presentations pretty. So our operations staff has had to roll back Copilot for every non-IT employee. I don't necessarily agree with this myself; I use GPT quite a lot, and while GitHub Copilot might "just" be fancy auto-complete, it's still increased my productivity quite a lot as well as lowered my mental load of dealing with most snippets. That isn't how the rest of the organisation sees it, though: they see the mistakes AI makes in areas where no mistakes are allowed, and they consider it untrustworthy. The whole Microsoft debacle (I used ChatGPT to tell me how to write "debaclable") where they wanted to screenshot everything all the time really sunk trust with our decision makers.
I'd be interested to see real studies into consumer satisfaction with AI features in existing products. My gut feeling is that people don't like (visible) AI in things they use, but that's biased by my reading online reporting about the failings of these features. I wouldn't be too surprised if it turns out people mostly like them.
Eventually everybody grew into it, and we can't imagine life without the Internet. AI is having that same moment right now. The breathless hype bombardment will eventually give way to normalcy just the same.
I think this goes beyond just consumers. If you listen in on a random company's strategic planning... any project with "AI" in its title is likely to be bullschtick.
"AI enabled" is our 2024 "now with electrolytes."
The thing about AI is it reminds me of what Steve Jobs said about speeds and feeds.
People care about "1000 songs in your pocket" not "30GB hdd". AI seems to be the "30GB hdd" and people don't always relate AI to how it's going to help them.
AI is the shovelware wii game of the 2020s
I wonder what this will do to the YC S24 batch... https://www.ycombinator.com/companies?batch=S24
Consider how many atomic-themed products there were in the '60s. We can be thankful that the only way most of them were actually atomic is that they were made of atoms. The naivety of those days has gone. Information (true or not) travels so quickly now that everyone is a bit jaded. Many are outright nihilistic.
It's worth noting that people's opinion of AI in a product is distinct from the actual AI itself.
I'm not in a position to find the reference right now, but I remember reading about a study which showed that when people were given artworks to judge, they felt the ones they were told were by AI were inferior. This was independent of whether each piece was actually by an AI or a human.
AI is a keyword for investors currently, that's all. Like all the companies that previously sprouted a blockchain unrelated to their core product.
I also like general purpose chat/writing/instruct models and they're a nice curiosity, even the image generation ones are nice for either some placeholder assets, texture work, or some sometimes goofy art. HuggingFace has some nice models and the stuff that people come up with on Civitai is sometimes cool! Video generation is really jank for now, I wonder where we'll be in 20 years.
Overall, I'm positive about "AI", but maybe that's because I seek it out and use it as a tool, as opposed to some platform shoving it down my throat in the form of a customer service bot that just throws noise at me while having no actual power to do anything. There are use cases that are pleasant, also stuff like summarizing, text prediction, writing improvements, but there are also those that nobody asked for.
And that's where the whole chatbot thing, AI or no, falls a little flat. No matter how smarmy or even genuinely helpful that chat thing is, it doesn't shake my hand, won't do me favours, probably won't take me to the shops after work, or ask me whether I need help or why I look sad. Nor compliment me on my eyes.
I don't know; at least in me, the herd animal roars loudly enough that I feel unvalued whenever I am forced to interact non-physically. The distance can be felt. And to me, that is not a "protective distance" but an "excluding" one. Makes me more lonely.
But I don’t think it’s a signal to consumers as much as a signal to investors and competitors. Since when does the consumer care about which technology was used to build a product? Has a consumer ever said “wow this product is so good it must have been built with Rust!” The bottom line is it shouldn’t matter and it doesn’t. People need features, not technology, even though many of them confuse the two.
For example, Notion. They have an AI feature, and it's hilariously useless. It can barely even summarize things, let alone write the rest of the document for you. Yet they push it onto you constantly, with no way of disabling the crap.
What I have noticed is that the marketing people are in love with it, because it lets them generate the useless drivel they spam people with easier than before. I suspect they never used their brains much, but now they don't even have to at all!
Coincidentally, scammers also love it for similar reasons...
AI/Copilot/ChatGPT was mentioned 100 times in the 34 minutes I was in it. (I was tallying, because I'm sick of it.)
And that's not including text on the presentation slides, just the times the words were said.
As a consumer, why would I choose a company optimizing for their margins at the expense of my experience? Touting AI has become a warning sign that the company is doing this.
1. cheaper
2. worse
It's invariably used as a cost-cutting measure that is quick, cheap, and worse. Companies use AI art because they are too cheap to hire a real artist that would make better art. Companies use AI chatbots because they are too cheap to hire real customer service agents who could actually help people.
If a company slapped a label on a bag of chips that said "Now with fewer chips and less flavor!", I'm sure that would turn off consumers as well.
Everyone knows this -- even the people pushing it. We know what it is doing to art, what it is doing to just being able to find a book on amazon, what it is doing to reviews, to website searches, to authenticity everywhere.
Splitting hairs about whether people think it's the terminology or the functionality is absurd. Stop making generative AI products that are the cultural equivalent of breaking into the community pool solely to piss in it.
People don't really want this. They have an instinct that a lot of AI products are little more than grift (trained by their experiences with "Web 3.0"). And this study is showing that.
For example:
* You enter an ambiguous search term into a search engine. It shows you some results, but also some buttons to filter by meaning. For example, if you've entered "universal", it could offer to filter out all pages where "universal" appears as the name of the film studio, not through a hack like excluding "universal films", but by actually deciding based on context.
* If you've touched up a few photos the same way, an image processing program could give you a suggestion for a touch-up for the next photo, along with options to tweak it
* In an email program, if you've moved a few emails to a folder, it could suggest a selection of other emails to move to the same folder, or maybe suggest to you a mail rule that would do it for you in future.
... and please, provide an option to switch it off.
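The mail-folder suggestion above can be sketched as a tiny nearest-centroid classifier over word counts. This is only an illustration of the idea, not any mail client's actual implementation; the folder names and sample texts are made up, and a real feature would use far richer signals than bag-of-words.

```python
# Sketch: suggest a folder for a new email based on emails the user
# already moved, by cosine similarity of word-count vectors.
from collections import Counter
import math

def vectorize(text):
    """Lowercased bag-of-words counts."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two Counter vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def suggest_folder(filed, new_text):
    """filed: {folder_name: [texts of emails already moved there]}.
    Returns the folder whose filed mail looks most like new_text."""
    new_vec = vectorize(new_text)
    best, best_score = None, -1.0
    for folder, texts in filed.items():
        centroid = vectorize(" ".join(texts))  # one centroid per folder
        score = cosine(centroid, new_vec)
        if score > best_score:
            best, best_score = folder, score
    return best

filed = {
    "invoices": ["invoice attached payment due", "your invoice for march"],
    "newsletters": ["weekly newsletter product updates", "monthly digest news"],
}
print(suggest_folder(filed, "payment reminder invoice overdue"))  # invoices
```

The appeal of this kind of feature is exactly what the comment asks for: a quiet suggestion the user can accept, tweak, or switch off, rather than an assistant taking over.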
Secondly, I am slightly burnt out from looking at all the "blockchain", "distributed", "crypto" products over roughly the last decade, and now suddenly the same products (and products from the same people) have swapped "crypto" or "blockchain" for "AI". Needless to say, my mind is so jaded from all those scammy blockchain peddlers that when I see an "AI" product (e.g. TodoAI; like, wtf? I just want to add some text and dates, check items off when I'm done, or get a notification if I forgot), I just feel like someone is trying to scam me.
Also, I noticed that some products suddenly hiked their price after adding "AI" to their feature set, which by itself wouldn't feel scammy (inflation happened), but when I see some crappification of the product post-AI, it leaves such a bad taste that the next time I see anything with "AI" as a feature, I assume it is also crap. I know, don't judge a book by its cover, and children must not be judged by the deeds/sins of their fathers, but what can I say.
Also, most of these products are clearly proxying requests to ChatGPT with a system prompt; you can actually sense it if you have used the OpenAI APIs. Which causes me extra pain, because c'mon, you are selling me a proxy with a prompt and charging me $5.99/month!
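For a sense of how thin that wrapper can be, here is a hedged, stdlib-only sketch of the "system prompt + proxy" pattern the comment describes, built around the OpenAI chat completions endpoint. The product name "TodoBot", the prompt text, and the model name are placeholders I made up, not taken from any real product.

```python
# Illustrative sketch of a "proxy with a prompt" product: a fixed system
# prompt wrapped around a single HTTP call to the OpenAI chat API.
import json
import os
import urllib.request

# Hypothetical product: the entire "AI feature" lives in this one string.
SYSTEM_PROMPT = "You are TodoBot. Rewrite the user's note as a todo item."

def build_request(user_text, model="gpt-4o-mini"):
    """Everything the 'product' adds: a canned prompt plus the user's text."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_text},
        ],
    }

def call_openai(payload):
    """Forward the payload upstream; requires OPENAI_API_KEY in the env."""
    req = urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        },
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

That's the whole "app": marshal the user's text behind a fixed prompt, forward it, and bill monthly for the privilege.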
People associate the use of AI in an existing area with a poor-quality facsimile of the real thing, whatever the "real thing" is. That, or an unnecessary addition causing annoyance (aka Clippy).
On the other hand, for genuinely new use-cases where AI is central and beneficial, I'd be surprised if there was a negative reaction. It is "new shiny thing" vs "cheap plastic imitation".
Reminds me of The Graduate "One word. Plastics."
1. Making really, really, REALLY, shitty "art"
2. Writing nonsensical and boring short stories or bland written-by-committee memos that make you sound like a soulless AI
3. Creating summaries that are pathetic compared to the first paragraph of any wikipedia article on literally any topic (also, they are often 100% wrong)
4. Acting as a useless (actually worse than useless, being both unhelpful and time-wasting) "assistant" wall between you and a human who can actually do something
5. Looking at images and telling me if there's a cat or a fruit in them
6. Being a worse chatbot than ELIZA was 50 years ago
7. Writing code that, if it is anything more complex than something you can copy and paste from Stack Overflow, you have to spend more time fixing than if you had just written it yourself
But it is very good at bombarding users with an infinite stream of garbage content that is cheap and effortless to create so it will eventually devour everything.
And even for those who think there is real meaning behind it: do you really think people want "intelligence" everywhere, artificial or not? People don't want their toaster to be intelligent; they just want it to toast. So what is an AI-powered toaster? A toaster you have to argue with about how you want your bread toasted? And in many cases that's what happens when you replace a push button with an "AI assistant", so people are not wrong about it either.
AI isn’t going to magically help solve problems with technology. Frankly I’m tired of technology. I miss people.