An Anatomy of Algorithm Aversion
The study examines algorithm aversion, the tendency to favor human judgment over algorithms despite algorithms' superior performance. Contributing factors include a desire for agency, negative emotional reactions, and ignorance of how well algorithms perform. Addressing these could enhance algorithm acceptance.
The paper titled "An Anatomy of Algorithm Aversion" by Cass R. Sunstein and Jared Gaffe explores the phenomenon where individuals prefer human forecasters or decision-makers over algorithms, despite algorithms generally outperforming humans in accuracy and optimal decision-making. The aversion to algorithms stems from various factors such as a desire for agency, negative emotional reactions to algorithmic judgments, belief in unique human expertise, ignorance about algorithm performance, and asymmetrical forgiveness towards algorithmic errors. Understanding these mechanisms provides insights into overcoming algorithm aversion and its limitations. The study suggests that addressing these factors could help increase acceptance and trust in algorithmic decision-making processes.
Related
We no longer use LangChain for building our AI agents
Octomind switched from LangChain due to its inflexibility and excessive abstractions, opting for modular building blocks instead. This change simplified their codebase, increased productivity, and emphasized the importance of well-designed abstractions in AI development.
OpenAI and Anthropic are ignoring robots.txt
Two AI startups, OpenAI and Anthropic, are reported to be disregarding robots.txt rules, allowing them to scrape web content despite claiming to respect such regulations. TollBit analytics revealed this behavior, raising concerns about data misuse.
Lessons About the Human Mind from Artificial Intelligence
In 2022, a Google engineer claimed AI chatbot LaMDA was self-aware, but further scrutiny revealed it mimicked human-like responses without true understanding. This incident underscores AI limitations in comprehension and originality.
AI can't fix what automation already broke
Generative AI aids call center workers by detecting distress and providing calming family videos. Criticism arises on AI as a band-aid solution for automation-induced stress, questioning its effectiveness and broader implications.
The Encyclopedia Project, or How to Know in the Age of AI
Artificial intelligence challenges information reliability online, blurring real and fake content. An anecdote underscores the necessity of trustworthy sources like encyclopedias. The piece advocates for critical thinking amid AI-driven misinformation.
In particular, algorithms do not offer advice. Advice is a case where your own goals, ambitions, preferences, and desires have been understood -- and more so, which ones you aren't aware of, what needs you might have that aren't met... and these are lined up with plausible things you can do that are in your interest.
There is no algorithmic 'advice'
In my own experience, human forecasters and decision-makers tend to be much easier to hold accountable for bad forecasts and decisions. At a minimum, they stake their reputations, just by putting their names to their actions. With algorithms, by contrast, there's often no visible sign of who created them or decided to use them. There's often no effective process for review, correction, or redress at all.
The fact that high-volume, low-risk decisions tend to get automated more often may partly explain this. It may also, as a consequence, partly explain general attitudes toward algorithms.
Au contraire. It is the correct understanding, born of deep expertise, that algorithms, outside very structured artificial environments, often do not work well at all.
Even provably correct algorithms fail if there is even the slightest mismatch between their assumptions and reality: imperfect data, noisy sensors, or a myriad of other problems. Not to mention that the implementations of these provably correct algorithms are often buggy.
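To make that concrete, here's a minimal sketch (Python, toy data): the algorithm below is textbook-correct, but the correctness proof holds only under the assumption that the input is sorted. Violate that assumption slightly -- an assumption mismatch, not a bug -- and it goes silently wrong.

    # Binary search: provably correct *assuming* sorted input.
    def binary_search(xs, target):
        lo, hi = 0, len(xs) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            if xs[mid] == target:
                return mid
            if xs[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return -1

    print(binary_search([1, 3, 5, 7, 9], 9))  # 4: assumption holds
    print(binary_search([1, 9, 5, 7, 3], 9))  # -1: 9 is right there at index 1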
When algorithms are based on user input, users learn very quickly how to manipulate the algorithm to produce the results they actually want.
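A toy sketch of that dynamic, with a hypothetical scoring rule and made-up documents: a ranker that scores by raw keyword count invites keyword stuffing.

    # Hypothetical scorer: rank documents by how often the query appears.
    def score(doc, query):
        return doc.lower().split().count(query.lower())

    honest = "We sell handmade shoes and boots"
    stuffed = "shoes shoes shoes shoes buy shoes"
    # Once users learn the rule, stuffing beats honesty.
    print(score(honest, "shoes"), score(stuffed, "shoes"))  # 1 5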
Is there something special about the algorithms people are averse to? Maybe that they don't actually work?
Related:
The legal rule that computers are presumed to be operating correctly https://news.ycombinator.com/item?id=40052611
> In England and Wales, courts consider computers, as a matter of law, to have been working correctly unless there is evidence to the contrary. Therefore, evidence produced by computers is treated as reliable unless other evidence suggests otherwise.
It's a sort of lazy argument: one can imagine a homo economicus who might make better decisions on a proxy variable, and then, less lazily, bemoan that people don't optimize the authors' preferred measurables.
It shows self-awareness at times:
> It is worth noting, however, that the algorithm in the study was designed to optimize system-wide utilization rather than individual driver income. The algorithm's design weakens any conclusion about algorithm aversion, for individual drivers may have been better off optimizing for themselves rather than the system.
It has the air of a future cudgel. The title works as a punchline, and as for the strength of the argument, well, it's published (posted at all) online, isn't it?
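To see why the quoted caveat matters, here's a toy dispatch example (payoffs are made-up numbers): the system-wide optimum and an individual driver's optimum can point in different directions.

    from itertools import permutations

    # Illustrative payoffs: income[driver][ride]
    income = {"A": {"r1": 10, "r2": 4},
              "B": {"r1": 9,  "r2": 1}}

    # System-optimal dispatch: maximize total income across both drivers.
    best = max(permutations(["r1", "r2"]),
               key=lambda rides: sum(income[d][r] for d, r in zip("AB", rides)))
    print(dict(zip("AB", best)))  # {'A': 'r2', 'B': 'r1'}: total 13

    # Driver A earns 4 under the system optimum but 10 when
    # self-optimizing, so refusing the dispatcher is rational for A.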
There are all sorts of confounders to algorithms in the real world, and an expert human is better at dealing with unexpected confounders than an algorithm. Given the number of possible confounders, in real-world use it is likely that there will be at least one.
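A rough way to quantify that intuition (the probabilities here are made up; the arithmetic is not):

    # Back-of-the-envelope: with n independent potential confounders,
    # each present with probability p,
    # P(at least one) = 1 - (1 - p)**n
    p, n = 0.05, 50
    print(round(1 - (1 - p) ** n, 2))  # 0.92 -- near-certain in practice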
> algorithms even though (2) algorithms generally outperform people (in forecasting accuracy and/or optimal decision-making in furtherance of a specified goal).
Bullshit. "Algorithm" means any mechanical method, and while there are some of those that outperform humans, we are nowhere near the point where this is true generally, even if we steelman the claim by restricting it to the class of algorithms that institutions have deployed to replace human decision-makers.
If you want an explanation for "algorithm aversion", I have a really simple one: Most proposed and implemented algorithms are bad. I get it. The few good ones are basically the fucking holy grail of statistics and computer science, and have changed the world. Institutions are really eager to deploy algorithms because they make decisions easier even if they are being made poorly. Also, as other commenters point out, putting a decision in the hands of an algorithm usually makes it so no one can question, change, be held accountable for, or sometimes even understand the decision. Most forms of algorithmic decision-making that have been deployed in places that are visible to the average person have been designed explicitly to do bigoted shit.
> Algorithm aversion also has "softer" forms, as when people prefer human forecasters or decision-makers to algorithms in the abstract, without having clear evidence about comparative performance.
Every performance metric is an oversimplification made for the convenience of researchers. Worse, it's not a matter of law or policy that's publicly accountable, even when the algorithm it results in is deployed in that context (and certainly not when deployed by a corporate institution). At best, to the person downstream of the decision, it's an esoteric detail in a whitepaper written by someone who is thinking of them as a spherical cow in their fancy equations. Performance metrics are even more gameable and unaccountable than the algorithms they produce.
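One standard illustration of how a headline metric oversimplifies (made-up labels, but the effect is the classic class-imbalance trap):

    # On a dataset where 95% of cases are negative, a "model" that
    # always predicts negative scores 95% accuracy while detecting nothing.
    labels = [0] * 95 + [1] * 5
    preds = [0] * 100
    accuracy = sum(p == y for p, y in zip(preds, labels)) / len(labels)
    print(accuracy)  # 0.95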
> Algorithm aversion is a product of diverse mechanisms, including (1) a desire for agency; (2) a negative moral or emotional reaction to judgment by algorithms;
In other words, because they are rational adults.
>(3) a belief that certain human experts have unique knowledge, unlikely to be held or used by algorithms;
You have to believe this to believe the algorithms should work in the first place. Algorithms are tools built and used by human experts. Automation is just hiding that expert behind at least two layers of abstraction (usually a machine and an institution).
> (4) ignorance about why algorithms perform well; and
Again, this ignorance is a feature, not a bug, of automated decision-making in practice, with essentially no exceptions.
> (5) asymmetrical forgiveness, or a larger negative reaction to algorithmic error than to human error.
You should never "forgive" an algorithm for making an error. Forgiveness is a mechanism that is part of negotiation, which only works on things you can negotiate with. If a human makes a mistake and I can talk to them about it, I can at least try to fix the problem. If you want me to forgive an algorithm, give me the ability to reprogram it, or fuck off with this anthropomorphizing nonsense.
> An understanding of the various mechanisms provides some clues about how to overcome algorithm aversion, and also of its boundary conditions.
I don't want to solve this problem. Laypeople should be, on balance, more skeptical of the outputs of computer algorithms than they currently are. "Algorithm aversion" is a sane behavior in any context where you can't audit the algorithm. Like, the institutions deploying these tools are the ones we should hold accountable for their results, and zero institutions doing so have earned the trust in their methodology that this paper seems to want.