The Asymmetry of Nudges
The article explores the asymmetry of nudges using Google Chrome's Manifest V3 API as a case study. It discusses the impact on ad blockers, privacy, and user choice, highlighting how changes can favor business interests.
The article discusses the concept of the asymmetry of nudges, using the development of the Manifest V3 API in Google Chrome as an example. The proposal aimed to revamp the permission model for browser extensions and drew criticism for potentially hindering ad blockers. The article argues, however, that Manifest V3 was a necessary step to address the privacy and security risks posed by browser extensions. It highlights how even well-intentioned changes can unintentionally benefit companies like Google in the long run, affecting user choice and experience. The author emphasizes that such projects tend to unfold in ways that align with business interests, creating a bias toward changes that do not harm the bottom line. This asymmetry of nudges limits certain choices and can lead to outcomes that do not prioritize user welfare. The article concludes by discussing the challenges engineers face in proposing changes that conflict with company revenue goals, a complex interplay between intentions, outcomes, and business interests in the tech industry.
Related
Mozilla is an advertising company now
Mozilla acquires Anonym, a privacy-focused advertising company founded by ex-Facebook executives. Integration aims to balance privacy and advertising. Critics question Mozilla's advertising shift, prompting users to explore alternative privacy-centric browsers.
Distrust in Big Tech Fuels Adblocker Usage Among 52% of Americans
Amid rising data privacy concerns, 52% of Americans use adblockers primarily for privacy. Tech entities like Google and OpenAI face trust issues. Consumers demand data control and transparency for trust.
What we need are external checks and balances. These can come in many forms, from market competition to government regulation to watchdog groups. Putting pressure on individuals to change massively powerful systems from within is a fool's errand.
That is an unsatisfying conclusion as the general structure of Google is unlikely to ever change, but it does seem correct to me.
The real structural problem is that the needs of the shareholders, and by extension the needs of the high-level executives and managers at Google, are simply not aligned with the needs of the users. This is why the "nudges" inch along in a direction that is often at odds with the needs of the users.
The solution to this broad class of structural problem in our economy, as argued by economists like Richard Wolff, is to build our economy out of firms that are largely cooperative in structure, where the workers and members of the co-op are representative of the users of the product or service. For example, if your local water company is a user co-op with cooperative decision-making power, its members aren't going to vote to raise their own water rates unnecessarily.
A middle ground in many cases is unions. So if anything this article is unintentionally making a case for a tech workers union at Google. This would change the structure at Google in the most significant way currently possible under today’s legal system.
I think the idea that engineers should take more responsibility is a noble one, but it’s not the real problem here. The problem is the structure of the firm.
The article seems to be trying to argue that company leadership are not the ones responsible for the "evil" things that companies do. But this:
> If you’re an engineer at Google, Facebook, Apple, or Microsoft, it’s always easier to propose architectural changes that don’t hurt the bottom line, or perhaps bolster it by accident. Conversely, if your proposal stands to wipe out a good chunk of revenue, you either self-censor and don’t bring it up — or you end up getting sucked into endless, futile arguments.
strongly implies that company leadership are indeed the ones responsible.
Like, I think what the article is trying to say is that Manifest V3 was designed due to real-world privacy concerns, not for profit motives. It just happened to get the right amount of support and buy-in from leadership because it was something that -also- aided profit motives.
In other words, when a company leader has a variety of possible projects to invest in, she will naturally tend to invest in the ones with a long-term profit motive for the company. This also necessarily means -not- investing in other, potentially good and helpful and consumer-positive projects, that simply aren't as promising from a profit perspective. This phenomenon is what the article calls the "asymmetry of nudges".
But I guess what I'm failing to grasp is how this means it was the engineers' doing and not leadership. Yes, the engineers came up with the idea. But in this scenario, it seems like the engineers were the ones who were well-meaning, and just doing their jobs. Whereas leadership were the ones chasing dollar signs at all costs. This is precisely in alignment with what most people posit when they say that big corporations are evil, no?
From the perspective of a typical HN reader, Google and Mozilla have turned into Internet nanny states with Fisher-Price browsers. How far can they go in the name of "safety" before it's too far?
Not to mention the problem the article highlights: their motives aren't pure. The more control they give themselves, and the more inconvenient third parties they marginalize, the more money they stand to make.
Also, it's not a perfect A or B between flexibility and security. They could require extensions to be more open and inspectable so users could catch bad behavior. They could better police the extension store to catch malware faster. They could add more layers of warnings and permissions dialogs to prevent accidental compromise.
At any rate, whether due to incompetence or malice, the situation is not as one-sided as Google pretends it is.
For example, Boeing moving its headquarters, so the decision makers are far away from the reality on the ground. This pattern is visible in less extreme ways in most companies: CxOs are typically on a different floor than everyone else.
The idea is clear: They don't want to know what happens in reality. They want to be able to deny anything, while nudging everyone in the right direction.
As in, a content filter extension (or anything that interacts with a content filter) is run in a WASM sandbox without any access to the network or underlying system? So it's hermetically sealed from the rest of the extension, which might well need to make external requests to function.
If your company has an incentive to make products hard for the layperson to fix, then over time it will make decisions that lead to precisely that, even if most of the individuals involved had, by themselves, a principled stance toward the quality of their designs, products, and repairability.
That means the only reliable way I, as a customer, can trust that a company means this for real is if something within its structure disincentivizes selling out its good reputation for short-term gains by creating shittier, less repairable products.
The problem is that in capitalism, most organizations are structured around incentives that disregard long-term effects on the environment, on society, or even on the company itself.
This does not follow from the rest of the article at all. I'll begin by acknowledging the concept of the "asymmetric nudge" as a useful thought. It explains and grounds a feeling familiar to engineers within large corporate structures: somehow, all of your good ideas turn user-hostile. The author fails to sufficiently answer the follow-up question, though. Why are the nudges asymmetric, and who holds responsibility for that?
This is where the "sociopathic" executive comes in. The executive does not make technical decisions. Instead they make human decisions, like what projects to fund, what form of communication to accept, and what sorts of arguments to listen to.
The power of the executive is not to censor designs; it's to instill in you the values that steer your self-censorship.
> One of these had to give, and Manifest V3 was the most elegant technical approach. Far from being the brainchild of a sociopathic executive, its architecture was devised by well-meaning engineers on the Chrome team.
The Chrome team has some very competent engineers. lcamtuf is a well-respected security engineer. I would expect such a group, trying to solve a problem of poorly behaved extensions, to develop a nice privacy-respecting API to block requests.
For example, there could be a way for an extension to run a portion of itself in a sandbox, such that the sandbox could inspect a request, decide whether to allow it, and output only an indication of whether to allow it. No further outgoing communication, including to the rest of the extension, would be allowed.
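A minimal sketch of what such a contract could look like, in TypeScript. All names here are hypothetical, invented for illustration; the point is only that the sandboxed module receives request metadata and can hand back nothing but an allow/block verdict:

```typescript
// Hypothetical contract for a sandboxed filter module. The host would
// enforce that code behind this interface has no network, storage, or
// messaging capability -- its only output channel is the verdict.
interface RequestMeta {
  url: string;
  type: "script" | "image" | "xmlhttprequest" | "main_frame";
}

type FilterVerdict = { allow: boolean };

// Illustrative filter logic that could run inside such a sandbox.
// The blocklist is a stand-in, not a real filter list.
const blockedHosts = new Set(["ads.example.com", "tracker.example.net"]);

function decide(req: RequestMeta): FilterVerdict {
  const host = new URL(req.url).hostname;
  return { allow: !blockedHosts.has(host) };
}

// Host side (sketch): forwards each request to the sandbox and reads
// back only the verdict, so the filter cannot exfiltrate browsing data.
function onBeforeRequest(req: RequestMeta): boolean {
  return decide(req).allow;
}

console.log(onBeforeRequest({ url: "https://ads.example.com/banner.js", type: "script" })); // false
```

The design choice being sketched: the filter still gets full request visibility (which ad blocking needs), but the capability boundary, not a review process, is what prevents that visibility from becoming surveillance.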
But instead we got Manifest V3, and I simply don't believe it's a meaningful privacy improvement. Read the docs: https://developer.chrome.com/docs/extensions/reference/api/w...
> Note: As of Manifest V3, the "webRequestBlocking" permission is no longer available for most extensions. Consider "declarativeNetRequest", which enables use of the declarativeNetRequest API. Aside from "webRequestBlocking", the webRequest API is unchanged and available for normal use.
Did well-meaning engineers on the Chrome team really come up with a security improvement in which extensions can read request and response headers but not block the requests? I'd love to see an explanation, but to me it seems that the security "improvement" is pretty narrowly tailored to prevent ad-blocking without meaningfully improving privacy.
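For context on what replaced blocking: under declarativeNetRequest, the extension ships static rules and the browser applies them itself, so the extension code never decides per-request. A minimal blocking rule looks roughly like this (the domain is illustrative):

```json
[
  {
    "id": 1,
    "priority": 1,
    "action": { "type": "block" },
    "condition": {
      "urlFilter": "||ads.example.com^",
      "resourceTypes": ["script", "image"]
    }
  }
]
```

This is exactly the asymmetry the comment above is pointing at: the declarative model caps rule counts and expressiveness in ways that constrain ad blockers, while the observational webRequest surface (reading headers) remains available.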
> In fiction and in journalism, the fault almost always lies with the executives
> we — the well-meaning engineers — shoulder the blame
This is a weird take, to be honest. Company culture is the responsibility of the executives, and however we put it, ultimately the blame lies with them.
Is the road to hell paved with good intentions? Yes, surely, and there's a need to be critical of the impact of one's work. We could fault people for not taking a step back to look at it from a distance.
But the reward ("nudges") system the article is focusing on isn't that; it's the incentives put in place by the company. Whoever set up those incentives should get the blame when shit hits the fan.