June 22nd, 2024

An Anatomy of Algorithm Aversion

The study examines algorithm aversion: people favor human judgment over algorithms even when the algorithms perform better. Contributing factors include a desire for agency, emotional reactions, and ignorance of why algorithms perform well. Addressing these could enhance algorithm acceptance.

The paper titled "An Anatomy of Algorithm Aversion" by Cass R. Sunstein and Jared Gaffe explores the phenomenon where individuals prefer human forecasters or decision-makers over algorithms, despite algorithms generally outperforming humans in accuracy and optimal decision-making. The aversion stems from various factors: a desire for agency, negative emotional reactions to algorithmic judgments, belief in unique human expertise, ignorance about why algorithms perform well, and asymmetrical forgiveness toward algorithmic errors. Understanding these mechanisms provides insight into how algorithm aversion might be overcome, and into its boundary conditions. The study suggests that addressing these factors could increase acceptance of and trust in algorithmic decision-making.

10 comments
By @mjburgess - 4 months
Or: whenever you automate a decision process, you take all the resilience out of it. Human social institutions are built to survive all kinds of dramatic environmental change; the kinds of machine decision-making available are not.

In particular, algorithms do not offer advice. Advice is a case where your own goals, ambitions, preferences, and desires have been understood -- and, more so, the ones you aren't aware of, the needs you might have that aren't met -- and these are lined up with plausible things you can do that are in your interest.

There is no algorithmic 'advice'.

By @kemitchell - 4 months
Reading just the syllabus, I was surprised to see no mention of accountability. Quick Ctrl+F searches for "accountability", "appeal", and "review" gave no results. "Reputation" appears, but in a section rather harshly titled "Herding and Conformity", about the reputations of the people not trusting algorithms, not the people making or deploying them.

In my own experience, human forecasters and decision-makers tend to be much easier to hold accountable for bad forecasts and decisions. At a minimum, they stake their reputations, just by putting their names to their actions. With algorithms, by contrast, there's often no visible sign of who created them or decided to use them. There's often no effective process for review, correction, or redress at all.

The fact that high-volume, low-risk decisions tend to get automated more often may partly explain this. But it may also, in turn, partly explain general attitudes toward algorithms.

By @oldgradstudent - 4 months
> (4) ignorance about why algorithms perform well;

Au contraire. It is the correct understanding, born of deep expertise, that algorithms, outside very structured artificial environments, often do not work well at all.

Even provably correct algorithms fail if there is even the slightest mismatch between the assumptions and reality: imperfect data, noisy sensors, or a myriad of other problems. Not to mention that the implementations of these provably correct algorithms are often buggy.
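
A minimal Python sketch of that point (my own toy example, not from the paper): binary search is provably correct, but the proof lives entirely inside its sortedness precondition, and it fails silently the moment the input violates that assumption.

    from bisect import bisect_left

    def contains(items, target):
        """Provably correct -- but only if items is actually sorted."""
        i = bisect_left(items, target)
        return i < len(items) and items[i] == target

    print(contains([1, 3, 5, 7, 9], 7))   # True: the precondition holds
    print(contains([9, 1, 7, 3, 5], 9))   # False: 9 is right there at index 0,
                                          # but unsorted input voids the proof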

When algorithms are based on user input, users learn very quickly how to manipulate the algorithm to produce the results they actually want.

By @oldgradstudent - 4 months
Weird, I have never encountered a single case of aversion to Booth's multiplication algorithm, quicksort, binary search, DFS, BFS, the Miller–Rabin primality test, or Tarjan's strongly connected components algorithm.

Is there something special about the algorithms people are averse to? Maybe not actually working?
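
For contrast, a rough Python sketch (mine, unvetted) of the Miller–Rabin test mentioned above: it is probabilistic, yet uncontroversial, because its error bound -- a composite escapes detection with probability at most 4**-k over k rounds -- is stated exactly.

    import random

    def is_probable_prime(n, k=20):
        """Miller-Rabin: misclassifies a composite with probability <= 4**-k."""
        if n < 2:
            return False
        for p in (2, 3, 5, 7, 11, 13):
            if n % p == 0:
                return n == p
        d, r = n - 1, 0
        while d % 2 == 0:          # write n - 1 as d * 2**r with d odd
            d //= 2
            r += 1
        for _ in range(k):
            x = pow(random.randrange(2, n - 1), d, n)
            if x in (1, n - 1):
                continue
            for _ in range(r - 1):
                x = pow(x, 2, n)
                if x == n - 1:
                    break
            else:
                return False       # found a witness: n is definitely composite
        return True

    print(is_probable_prime(2**61 - 1))   # True: a known Mersenne prime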

By @gurjeet - 4 months
OP > Algorithm aversion is a product of diverse mechanisms, including ... (5) asymmetrical forgiveness, or a larger negative reaction to algorithmic error than to human error.

Related:

The legal rule that computers are presumed to be operating correctly https://news.ycombinator.com/item?id=40052611

> In England and Wales, courts consider computers, as a matter of law, to have been working correctly unless there is evidence to the contrary. Therefore, evidence produced by computers is treated as reliable unless other evidence suggests otherwise.

By @12_throw_away - 4 months
Wow, this paper is ... mystifyingly awful. It reads like some crank's blog, but it's actually written by two Harvard lawyers, including a pretty famous one [1].

[1] https://en.wikipedia.org/wiki/Cass_Sunstein

By @foundart - 4 months
The note on the primary author's name says 'We intend this essay as a preliminary “discussion draft” and expect to revise it significantly over time', so if you have cogent revisions to suggest, you should strongly consider sending them.

By @mcint - 4 months
"Humans approximating human taste preferences perform worse on the validation set".

It's a sort of lazy argument: one can imagine a homo economicus who might make better decisions on a proxy variable, or, less lazily, bemoan that people don't optimize the authors' preferred measurables.

It shows self-awareness at times:

> It is worth noting, however, that the algorithm in the study was designed to optimize system-wide utilization rather than individual driver income. The algorithm’s design weakens any conclusion about algorithm aversion, for individual drivers may have been better off optimizing for themselves rather than the system.

It has the air of a future cudgel. The title works as a punchline, and as for the strength of the argument, well, it's published (posted at all) online, isn't it?

By @jbandela1 - 4 months
I think part of the reason is that people understand that while in games such as chess the entire state of the “universe” of the problem is provided to the algorithm, in the real world they don't have that confidence.

There are all sorts of confounders for algorithms in the real world, and an expert human is better at dealing with unexpected confounders than an algorithm. Given the number of possible confounders, in real-world use it is likely that there will be at least one.
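
A toy Python illustration of that point (hypothetical numbers, mine): a model fit while a confounder happens to sit still looks flawless in validation, then is wrong on every input once the confounder shifts.

    # Truth: y depends on x and on a confounder c the model never observes.
    def world(x, c):
        return 2.0 * x + 5.0 * c

    # All training/validation data was collected while c happened to equal 1,
    # so the best model over that data is simply y = 2x + 5.
    def model(x):
        return 2.0 * x + 5.0

    val_err = max(abs(model(x) - world(x, c=1)) for x in range(100))
    dep_err = max(abs(model(x) - world(x, c=0)) for x in range(100))

    print(val_err)   # 0.0 -- flawless on the data the model was built from
    print(dep_err)   # 5.0 -- off by 5 everywhere once the confounder moves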

By @advael - 4 months
As someone who spends almost all of my productive time on earth trying to solve problems via algorithms, I think this paper is the kind of take that should get someone fired. God, I forget how much stupid shit academics can get away with writing. Right from the abstract, this is hot garbage.

> algorithms even though (2) algorithms generally outperform people (in forecasting accuracy and/or optimal decision-making in furtherance of a specified goal).

Bullshit. Algorithm means any mechanical method, and while there are some of those that outperform humans, we are nowhere near the point where this is true generally, even if we steelman this by restricting it to the class of algorithms that institutions have deployed to replace human decision-makers.

If you want an explanation for "algorithm aversion", I have a really simple one: Most proposed and implemented algorithms are bad. I get it. The few good ones are basically the fucking holy grail of statistics and computer science, and have changed the world. Institutions are really eager to deploy algorithms because they make decisions easier even if they are being made poorly. Also, as other commentators point out, the act of putting some decision in the hands of an algorithm is usually making it so no one can question, change, be held accountable for, or sometimes even understand the decision. Most forms of algorithmic decision-making that have been deployed in places that are visible to the average person have been designed explicitly to do bigoted shit.

> Algorithm aversion also has "softer" forms, as when people prefer human forecasters or decision-makers to algorithms in the abstract, without having clear evidence about comparative performance.

Every performance metric is an oversimplification made for the convenience of researchers. Worse, it's not a matter of law or policy that's publicly accountable, even when the algorithm it results in is deployed in that context (and certainly not when deployed by a corporate institution). At best, to the person downstream of the decision, it's an esoteric detail in a whitepaper written by someone who is thinking of them as a spherical cow in their fancy equations. Performance metrics are even more gameable and unaccountable than the algorithms they produce.
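
One stock illustration (hypothetical numbers, mine) of how gameable a headline metric can be: on imbalanced data, a "model" that never flags anything still posts an accuracy figure that looks great in a whitepaper.

    # 1,000 cases, of which only 20 are true positives (a 2% base rate).
    labels = [1] * 20 + [0] * 980

    # Game the metric by always predicting the majority class.
    preds = [0] * len(labels)

    accuracy = sum(p == y for p, y in zip(preds, labels)) / len(labels)
    recall = sum(p == 1 and y == 1 for p, y in zip(preds, labels)) / 20

    print(accuracy)  # 0.98 -- the headline metric looks excellent
    print(recall)    # 0.0  -- it never catches a single real case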

> Algorithm aversion is a product of diverse mechanisms, including (1) a desire for agency; (2) a negative moral or emotional reaction to judgment by algorithms;

In other words, because they are rational adults.

> (3) a belief that certain human experts have unique knowledge, unlikely to be held or used by algorithms;

You have to believe this to believe the algorithms should work in the first place. Algorithms are tools built and used by human experts. Automation just hides that expert behind at least two layers of abstraction (usually a machine and an institution).

> (4) ignorance about why algorithms perform well; and

Again, this ignorance is a feature, not a bug, of automated decision-making in practice, with essentially no exceptions.

> (5) asymmetrical forgiveness, or a larger negative reaction to algorithmic error than to human error.

You should never "forgive" an algorithm for making an error. Forgiveness is a mechanism that is part of negotiation, which only works on things you can negotiate with. If a human makes a mistake and I can talk to them about it, I can at least try to fix the problem. If you want me to forgive an algorithm, give me the ability to reprogram it, or fuck off with this anthropomorphizing nonsense.

> An understanding of the various mechanisms provides some clues about how to overcome algorithm aversion, and also of its boundary conditions.

I don't want to solve this problem. Laypeople should be, on balance, more skeptical of the outputs of computer algorithms than they currently are. "Algorithm aversion" is a sane behavior in any context where you can't audit the algorithm. Like, the institutions deploying these tools are the ones we should hold accountable for their results, and zero institutions doing so have earned the trust in their methodology that this paper seems to want.