August 4th, 2024

Google Says AI Olympics Ad 'Tested Well' Before Inspiring Outrage

Google's ad for its AI chatbot, Gemini, faced backlash for suggesting AI can replace parental involvement. Critics condemned its message, leading to the ad's removal from TV, though it's still on YouTube.

Google's recent advertisement for its AI chatbot, Gemini, aimed at capitalizing on the excitement of the Olympics, has faced significant backlash. The ad, titled "Dear Sydney," features a father using Gemini to help his daughter write a fan letter to Olympic athlete Sydney McLaughlin-Levrone. Although Google stated that the ad "tested well" prior to its release, it was met with criticism for portraying AI as a poor substitute for parental involvement and creativity. Following the negative reception, which included harsh comments from media personalities and social media users, Google decided to remove the ad from television rotation, although it remains available on YouTube with comments disabled.

Critics argued that the ad misrepresents the capabilities of AI and sends a troubling message to parents about the role of technology in children's development. Some commentators expressed their disgust, while others found the concept of using AI for such personal tasks unsettling. Despite the backlash, a few viewers shared positive reactions to the ad on social media. In response to the controversy, Google reiterated its belief that while AI can enhance creativity, it cannot replace human connection and expression. The incident reflects broader concerns about the implications of AI in everyday life, particularly in sensitive areas like parenting and education.

23 comments
By @mewpmewp2 - 9 months
"AI can be a great tool for enhancing human creativity, but can never replace it."

Even their response is so dystopian and out of touch. It completely misunderstands what is wrong with this ad.

The problem is not with replacing creativity; the problem is attempting to replace what should be a genuine, emotional interaction between a young, aspiring human being and a person they look up to as their role model.

The idea here is to create that connection, not automate it away as some sort of nuisance.

You can automate away all the appointment negotiations, business interactions, and all that, but this is the one thing that should never be automated.

The outrage is justified because someone so out of touch is working on technology that may greatly influence our future. If they are this out of touch, how can they make any decisions that people would like?

By @dylan604 - 9 months
I don't think this is a "Google" thing so much as an overhyped AI PR issue in general. Salesforce has ridiculous AI commercials as well with their Einstein insanity. I had never even heard of it until trying to watch some Olympic events in between commercial breaks. Zoom's commercials are there too, overpromoting AI use. Who's using AI to send chats in a Zoom?

The hype machine for AI makes what we experienced with crypto look tame.

By @hn_throwaway_99 - 9 months
I've long found Google's AI ads ghastly for the way they use AI to "fix" or replace joyful human moments, so I'm glad to see the exposure of the Olympics is making Google rethink this.

For example, Google's original Gemini launch had an ad where someone wanted to use AI to caption their dog photo for social media: https://youtu.be/b5Fh7TaTkEU?t=36s

There was an older ad for the Pixel camera app showing it using AI to "fix" a family photo in which one of the kids was making a funny face, giving him a "JCPenney catalog photo"-approved smile instead.

At this point I wonder if they've just replaced their marketing team with AI itself, because I can't believe a human with actual emotional experience would have green-lit these ads.

By @pessimizer - 9 months
One of the biggest signs of corruption is when extremely high-cost/high-effort works are released to the public seemingly without anyone internally asking basic and obvious questions about them. It's a sign that the people internally with decision-making power are neither asking for nor accepting input, or that the process of speaking to them has become impossibly intimidating or risky.

You end up seeing super-high production values on moronic product. How did the moron get to the top? He owns the place, or is a friend of the owner, and is answerable to no one.

By @r0m4n0 - 9 months
Personally, I think it's because it only takes a small part of the internet being outraged for it to appear to be widespread outrage. The Twitter/X machine gets rolling with jokes, and those get shared with a network effect. In a room of 30 people, not one would care about this. But one goofball out of 100 makes some meme and shares it, that spins out of control, and now there is a problem on their hands.

I couldn't care less about this little clip. Personally, I use AI to write letters to my elected officials, to review the speech I had to write for the wedding I'm attending next week (and give me ideas for things I should have done differently, like focusing on the couple's relationship instead of just stories I have with the groom), and to help me write a children's book for my daughter. These are all things I care about, and I'm better for it and saved time. Say what you will.

By @JSDevOps - 9 months
Haha, someone's lying to you, Google. If you made any ad, no one is going to say it's terrible. Why would they? They assume you've got the best people on the case.

By @Eridrus - 9 months
It probably did test well. If you take a bunch of average people and ask them to view an ad with a simple uplifting message (e.g., tech helps you connect with people and be the person you want to be), it will almost certainly test positively.

And this is a fine strategy for most ads, where pop culture critics are not paying much attention. But critics have tech and AI specifically firmly in their sights these days, so you probably need to consider whether you want to cater to that audience, or whether you're just hoping they'll be getting annoyed at someone else at that time.

By @kelsey98765431 - 9 months
I am certain it tested excellently. In fact, it is the job not only of the marketing team but also of the testing team to make sure the testing comes back with a positive result. Why would anyone want their stuff to come back with a failure?

The problem is the testing itself. We see this in political polling, where the surveys come back very confidently stating that candidates x, y, and z are n points ahead among demographics a, b, and c. The problem is that, time and time again, the pollsters turn out to be weathermen forecasting rain during a heatwave and snow during a drought, with lots of gesticulation but no substance. Their methods were wrong, and they picked the safest modeling to use so that if they were wrong they would be only sort of wrong, until it became a pure coin flip as to whether it would rain or not.

The polls and tests and all this "data-driven" hogwash are not testing for truth in the market; they are testing for truth in how things test. And as software developers know, the real testing happens after QA.

Too many predictors and fortune tellers are trying to sell a nice thought with plywood beams holding it up. In the past it was hard to quantify how poorly the test result matched up with reality; today it is a lot easier to see a failure after a poor showing, but somehow the truth still eludes us in the testing process.

Fire them all. Have Google Gemini make a thousand ads, run them in a thousand places, and then figure out which ones are hitting the mark after the fact. It would be cheaper than sitting around for months trying to synthesize the perfect ad in vitro just for it to be poorly received and the entire investment wasted.

Pollsters, focus testers, marketers: they all have this same problem of thinking their lab can still accurately quantify the external world. It can't, and I don't have a solution (I will never respond to a poll, even for money).

By @jerojero - 9 months
It didn't seem bad until the actual prompt: "help my daughter write a letter...".

No. No one likes that.

There's this huge misunderstanding I see a lot in these AI prompting ads: that people will want to use AI to write things they care about. Perhaps the thinking is a sort of "I care so much about this that I'd like it to be perfect," when I believe that when we care about something, we want to put effort into it. Even if the result is flawed, the effort is what matters.

If I'm an athlete and I receive a handwritten letter from a little girl, and I can see the struggle in a simple "I want to be like you," that is so much more valuable than a ten-paragraph AI-assisted essay.

This is so obvious. Clearly Google is surrounded by yes men.

By @Workaccount2 - 9 months
I would guess that Google has data showing that people use AI to write otherwise personal letters a lot. One of my top use cases for ChatGPT has been helping to write cards for birthdays, weddings, or whatever. And talking to others, I know they use it a lot for that too.

I sense more that it is something where the "ick" factor is high, but people are generally terrible at writing and love the boost AI gives them.

On Google's side though, it just looks like "damn, people really like using this to write letters to others".

By @Mindwipe - 9 months
I'm not instinctively anti-AI, even if I think its usefulness for the foreseeable future is overstated. And I know testing artefacts happen. They really do. Sometimes test audiences love something that is just a disaster in wide release.

But even then, I find it astonishing that this tested well, and probably more evidence that the testing questions and methodology were bad than anything else.

By @buttocks - 9 months
There’s a lot of good to say about AI but there are a lot of us who just enjoy humanity a lot more than AI and other automation. I like talking to people on the phone, going through staffed checkout lines, and reading real articles and comments on the web. I’m human and I want to interact with other humans. I’m not the only one.

By @blast - 9 months
It's fascinating how perfectly this little episode fits into the fractal of Googleness: the corporate borg with no human quality at all. At least Gates and Ballmer would kick you in the balls.

There was also that dystopian Apple ad about technology crushing everything.

By @jgalt212 - 9 months
I found it to be a perfect example of "The soft bigotry of low expectations".

By @rsynnott - 9 months
"Oh, well, it tested well" is a weird comeback, really. Well, then, the testing must not have been very good, Google.

By @not_your_vase - 9 months
Ah, of course, I should have thought so. It's the viewers' fault! They really need to educate themselves, and learn once and for all what is really good[0], instead of relying on their own opinion.

[0]: The rule of thumb is actually very simple. If it is made by big tech, and/or broadcast on TV, then it's good.

By @1vuio0pswjnm7 - 9 months
Observation: When Apple ran an idiotic "AI" ad that got pulled, all initial attempts to submit news of it to HN were flagged.

By @skywhopper - 9 months
I don’t believe Google actually did a good faith test of this ad with a real focus group. It was one of the most disturbing things I’ve seen in a TV ad in a long time.

People don't want AI to disintermediate their relationships with other humans. If I can't trust that the words you send are actually from you, then they become meaningless. If you used an AI to write something, you probably didn't really even read it, much less write it. How is that anything but an insult?

It's one thing when it's between adults. But to take a kid of an utterly innocent age and pretend that it'd be good for the kid, or meaningful in any way to the athlete, to use AI this way is just utterly sociopathic.

By @Copenjin - 9 months
Outrage for this? Really?