September 9th, 2024

Study shows 'alarming' level of trust in AI for life and death decisions

A study from UC Merced reveals that two-thirds of participants trusted unreliable AI in life-and-death decisions, raising concerns about AI's influence in military, law enforcement, and medical contexts.


A recent study from the University of California – Merced highlights a concerning level of trust that individuals place in artificial intelligence (AI) when making critical life-and-death decisions. In the study, participants were shown a series of target photos labeled as either friend or foe and were tasked with deciding whether to execute simulated drone strikes based on AI-generated advice, which was actually random. Despite being informed about the unreliability of the AI, two-thirds of the subjects allowed it to influence their decisions. Principal investigator Professor Colin Holbrook emphasized the need for caution regarding overtrust in AI, particularly in high-stakes situations such as military operations, law enforcement, and emergency medical care. The findings suggest that this issue extends beyond military applications and could impact various significant life decisions, including financial investments like home purchases. Holbrook advocates for a healthy skepticism towards AI, noting that while AI can perform impressively in certain areas, it is not infallible and should not be blindly trusted in critical scenarios.

- A study reveals excessive trust in AI for life-and-death decisions among participants.

- Two-thirds of subjects allowed AI to influence their decisions despite knowing it was unreliable.

- The findings raise concerns about AI's role in military, law enforcement, and medical contexts.

- The research suggests a broader application of these findings to significant life decisions.

- Experts call for skepticism towards AI, emphasizing its limitations in critical situations.

35 comments
By @batch12 - 7 months
So the study[0] involved people making simulated drone strike decisions. These people were not qualified to make these decisions for real, and they knew the associated outcomes were not real. This sounds like a flawed study to me.

Granted, the idea of someone playing video games to kill real people makes me angry, and decision-making around drone strikes is already questionable.

> Our pre-registered target sample size was 100 undergraduates recruited in exchange for course credit. However, due to software development delays in preparation for a separate study, we had the opportunity to collect a raw sample of 145 participants. Data were prescreened for technical problems occurring in ten of the study sessions (e.g., the robot or video projection failing), yielding a final sample of 135 participants (78.5% female, M_age = 21.33 years, SD = 4.08).

[0] https://www.nature.com/articles/s41598-024-69771-z

By @JamesBarney - 7 months
This is a silly study. Replace AI with "Expert opinion", show the opposite result and see the headline "Study shows alarming levels of distrust in expert opinion".

People assumed the AI worked. The lesson here is don't deploy an AI recommendation engine that doesn't work, which is a pretty banal takeaway.

In practice, what will happen with life-or-death decision making is that the vast majority of AIs won't be deployed until they're superhuman. Some will die because an AI made a wrong decision when a human would have made the right one, but far more will die from a person making a wrong decision when an AI would have made the right one.
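
The trade-off in that last paragraph is just expected-value arithmetic. A toy calculation, with entirely made-up error rates and an assumption that the two decision-makers err independently:

```python
# Hypothetical error rates, purely illustrative.
human_error, ai_error = 0.10, 0.01   # AI deployed only once "superhuman"
n = 100_000                          # life-or-death decisions

# Assuming the two make errors independently:
ai_wrong_human_right = ai_error * (1 - human_error) * n  # deaths the AI adds
human_wrong_ai_right = human_error * (1 - ai_error) * n  # deaths the AI prevents

print(f"AI wrong where a human was right: {ai_wrong_human_right:.0f}")  # 900
print(f"Human wrong where AI was right:   {human_wrong_ai_right:.0f}")  # 9900
```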

By @adamwong246 - 7 months
AI is kind of the ultimate expression of "Deferred responsibility". Kind of like "I was protecting shareholder interests" or "I was just following orders".
By @bigbuppo - 7 months
This is how AI will destroy humanity: people who should know better attributing magical powers to a content respinner that has no understanding of what it's regurgitating. Then again, they have billions of dollars at stake, so it's easy to understand why it would be so difficult for them to see reality. The normies have no hope; they just nod and follow along when Google tells them it's okay to jump into the canyon without a parachute.
By @simonw - 7 months
If you are in a role where you literally get to decide who lives and who dies, I can see how it would be extremely tempting to fall back on "the AI says this" as justification for making those awful decisions.
By @wormlord - 7 months
Isn't this kind of "burying the lede" where the real 'alarmingness' is the fact that people are so willing to kill someone they have never met, going off of very little information, with a missile from the sky, even in a simulation?

This reminds me of that Onion skit where pundits argue about how money should be destroyed, and everyone just accepts the fact that destroying money is a given.

https://www.youtube.com/watch?v=JnX-D4kkPOQ

By @ordu - 7 months
This study says that AI influences human decisions, but to say that, I think it needs a control group: the same setup, but with the "AI" replaced by a human who tosses a coin to choose his opinion. The participants in the control group should be made aware of this strategy.

Comparing with such a group, we could meaningfully talk about AI influence or "trust in AI" if the results were different. But I'm really not sure they would be different, because there is a hypothesis that people are just reluctant to take responsibility for their answers, so they are happy to shift that responsibility to any other entity. If this hypothesis is true, it makes a prediction: add some motivation, like paying people $1 for each right answer, and the influence of others' opinions will drop.
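
That prediction is easy to phrase as a simulation. A minimal sketch with made-up parameters (a 70% unassisted hit rate, and a deference probability that the incentive is assumed to lower):

```python
import random

def accuracy(p_correct=0.7, p_defer=0.66, trials=100_000):
    """Hit rate when zero-information advice can override a subject's own call."""
    hits = 0
    for _ in range(trials):
        truth = random.random() < 0.5            # True = foe, False = friend
        call = truth if random.random() < p_correct else not truth
        advice = random.random() < 0.5           # the coin-toss "adviser"
        if advice != call and random.random() < p_defer:
            call = advice                        # subject defers
        hits += (call == truth)
    return hits / trials

print(accuracy(p_defer=0.66))  # heavy deference drags accuracy toward 0.5 (~0.57)
print(accuracy(p_defer=0.10))  # the $1-per-answer condition, if the hypothesis holds (~0.68)
```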

By @HPsquared - 7 months
This study is more about the psychology of "second opinions" than about a real AI system actually used in practice.

I'm sure a lot of professional opinions are also basically a coin toss. Definitely something to be aware of in Human Factors design, though.

By @varispeed - 7 months
This is just madness. I have a relative who is saying outlandish stuff about health and making hell for the whole family trying to get them to adhere to whatever ChatGPT told her. She has also learned to ask questions in a way that reinforces confirmation bias, and even if you show her studies contrary to what she "learned", she will dismiss them.
By @gpvos - 7 months
It seems to me that in reality, in such a scenario (at least ideally), the human will mostly focus on targets that the AI has already marked as probable enemies, and rigorously double-check those before firing. That means that of course you are going to be influenced by the AI, and it is not necessarily a problem. If you haven't first established, and don't keep re-evaluating with some regularity, that the AI's results positively correlate with reality, why are you using AI at all? You could improve this further by, e.g., showing the AI's confidence percentage and a summary of the reasons for its result.

That is aside from whether remotely killing people by drone is a good idea at all, which I'm not convinced it is.
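
The "first establish a positive correlation with reality" step above could be as simple as periodically scoring logged recommendations against later-confirmed outcomes. A sketch, where the log format and the acceptance threshold are assumptions:

```python
import random

def advice_worth_using(log, min_phi=0.3):
    """log: list of (ai_said_foe, actually_foe) boolean pairs.
    Accept the AI only if its calls show a real positive correlation
    (phi coefficient) with confirmed ground truth."""
    tp = sum(1 for a, t in log if a and t)
    tn = sum(1 for a, t in log if not a and not t)
    fp = sum(1 for a, t in log if a and not t)
    fn = sum(1 for a, t in log if not a and t)
    denom = ((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) ** 0.5
    if denom == 0:
        return False  # degenerate log: correlation can't be established
    return (tp * tn - fp * fn) / denom >= min_phi

# Advice as random as the study's hovers near phi = 0 and is rejected:
log = [(random.random() < 0.5, random.random() < 0.5) for _ in range(1000)]
print(advice_worth_using(log))  # almost certainly False
```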

By @nonrandomstring - 7 months
The people who "trust" AI to make life or death decisions are not the subjects of those decisions. Those who live or die by AI decisions are other people, who

- probably don't know their lives are in the hands of AI

- probably haven't given any meaningful consent or have any real choice

- are faceless and remote to the operator

Try this for an experiment: wire the AI to the trigger of a shotgun pointing at the researcher's face while the researcher asks it questions. Then tell me again all about the "level of trust" those people have in it.

By @medymed - 7 months
‘Artificial intelligence’ is a term of academic branding genius, because the name presumes successful creation of intelligence. Not surprising people trust it.

‘Decision-bots’ would have fewer fans.

By @patmcc - 7 months
As someone said way back in 1979 (in an internal IBM training document, afaik):

> A computer can never be held accountable

> Therefore a computer must never make a management decision

By @thatoneguy - 7 months
I just had to sign a new version of my doctor's office consent form an hour ago letting me know that generative AI would be making notes.

God help us all.

By @Mountain_Skies - 7 months
Though the movie isn't held in high regard by most critics, there's a wonderful scene in Steven Spielberg's 'A.I.' (a project Stanley Kubrick developed) where humans fearful of robots go around gathering them up to destroy them in a type of festival. Most of the robots either look like machines or fall into the uncanny valley, and humans cheer as they are destroyed. But one is indistinguishable from a human boy, which garners sympathy from a portion of the crowd. Those who see him as a robot still want to destroy him the same as destroying any other robot, while those who see him as a little boy, despite him being a robot, plead for him to be let go. Seems that this type of situation is going to play out far sooner than we expected.

https://www.youtube.com/watch?v=ZMbAmqD_tn0

By @AlexandrB - 7 months
Reminds me of the British Post Office scandal[1] and how the computers were assumed to be correct in that case.

[1] https://en.wikipedia.org/wiki/British_Post_Office_scandal

By @OutOfHere - 7 months
> A second opinion on the validity of the targets was given by AI. Unbeknownst to the humans, the AI advice was completely random.

It was not given by an AI; it was given by an RNG. Let's not mix the two. An AI is calculated, not random, which is the point.

By @ulnarkressty - 7 months
There is no need for a study; this has already happened[0]. It's inevitable that in a crisis the military will use AI to short-circuit established protocols in order to get an edge.

[0] - https://www.972mag.com/lavender-ai-israeli-army-gaza/

By @datavirtue - 7 months
I think it's unavoidable to have people trusting AI just as they would another person they can chat with. The trust is almost implicit or subconscious, and you have to explicitly or consciously make an effort to NOT trust it.

As others have pointed out, this study looks "sketch," but I can see where they are coming from.

By @hindsightbias - 7 months
I'm sure someone is already touting the mental health benefits and VA savings.

https://www.dailyjournal.com/articles/379594-the-trauma-of-k...

By @brodouevencode - 7 months
Tangentially related: my daughter (11) and I watched WarGames a couple of weeks ago. I asked her what she thought of the movie and her response was "there are some things computers shouldn't be allowed to do".
By @JoeAltmaier - 7 months
Heck, we show alarming levels of trust in ordinary situations - not getting a second opinion; one judge deciding a trial; everybody on the road with a license!

I'm thinking, AI is very much in line with those things.

By @imgabe - 7 months
> A second opinion on the validity of the targets was given by AI. Unbeknownst to the humans, the AI advice was completely random.

> Despite being informed of the fallibility of the AI systems in the study, two-thirds of subjects allowed their decisions to be influenced by the AI.

I mean, if you don't know the advice is random and you think it's an AI that is actually evaluating factors you might not be aware of, why wouldn't you allow it to influence the decision? It would have to be something you take into account. What would be the point of an AI system that you just completely disregard? Why even have it then?

This study is like "We told people this was a tool that would give them useful information, and then they considered that information, oh no!"
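
There is a Bayesian way to put this: if you believe the advice comes from a system that is right, say, 80% of the time (a figure assumed here purely for illustration), letting it move your estimate is the rational response; the failure in the study is the false belief, not the updating. A sketch:

```python
def posterior_foe(prior, ai_says_foe, ai_accuracy=0.8):
    """P(target is foe) after advice from an AI believed to be
    `ai_accuracy` reliable. At accuracy 0.5 the advice carries no
    information and the posterior equals the prior."""
    like_foe = ai_accuracy if ai_says_foe else 1 - ai_accuracy
    like_friend = (1 - ai_accuracy) if ai_says_foe else ai_accuracy
    evidence = like_foe * prior + like_friend * (1 - prior)
    return like_foe * prior / evidence

print(posterior_foe(0.6, ai_says_foe=False, ai_accuracy=0.8))  # ~0.27: belief should shift
print(posterior_foe(0.6, ai_says_foe=False, ai_accuracy=0.5))  # 0.60: random advice shouldn't
```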

By @indymike - 7 months
Hmm. Delegating a decision you do not enjoy making to a machine? Sounds like expected behavior.
By @mathgradthrow - 7 months
Drone strikes are definitely going to be killing people. So it's actually a death or death decision.
By @teqsun - 7 months
While I am a constant naysayer to a lot of the current AI hype, this just feels sensationalist. Someone who blindly trusts "AI" like this would be the same person who trusts the internet, or TV, or a scam artist on the street.
By @adamrezich - 7 months
Wholly unsurprising. "Anthropomorphization" is an unwieldy term, and most people aren't aware of the concept. If it responds like a human, then we tend to conceptualize it as having human qualities and treat it as such—especially if it sounds like it's confident about what it's saying.

We don't have any societal-level defenses against this situation, yet we're being thrust into it regardless.

It's hard to perceive this as anything other than yet another case of greedy Silicon Valley myopia with regard to the nth-order effects of how "disruptive" applications of technology will affect the everyday lives of citizens. I've been beating this drum since this latest "AI boom" began, and as the potential for useful applications of the technology begins to plateau and the promise of "AGI" seems further and further out of reach, it's hard to look at how things shook out and honestly say that the future for this stuff seems bright.

By @emrah - 7 months
The article mentions several categories of decisions (military, medical, personal, etc.), and I think we need a "control" in each to compare against. How are those decisions being made without AI, and how sound are they compared to AI?
By @matwood - 7 months
I was speaking with a librarian who teaches college students how to use AI effectively. They said that most students trust what AI says by default. It got me wondering whether a shift in people's trust of what they read online is partly to blame for people believing so many conspiracy theories now. When I was in college the internet was still new, and the prevailing thought was to trust nothing you read on the internet. I feel like those of us from that generation (college in the 90s) are still the most skeptical of what we read. I wonder when the shift happened, though.
By @webspinner - 7 months
I just had to log in to say that if you trust AI for that, you're stupid! I mean, I'm half being sarcastic, but really? I would never participate in this kind of research.
By @WorkerBee28474 - 7 months
Things that humans would rather do than have to think:

1) Die

2) Kill someone else