AI: Markets for Lemons, and the Great Logging Off
The article explores AI's impact on social interactions and market dynamics, predicting a rise in demand for genuine connections, verified accounts, and offline community engagement, which may boost real estate values.
The article discusses the implications of AI technology on social interactions and market dynamics, particularly focusing on the concept of a "Market for Lemons." This economic principle illustrates how the presence of low-quality offerings (lemons) can devalue high-quality ones (plums) in a market, leading to a decline in overall quality. The author warns that as AI enables the creation of numerous fake online personas, the internet may become saturated with inauthentic interactions, prompting users to disengage or "log off." This shift could lead to a preference for verified human accounts and a rise in private social networks, as users seek genuine connections. Additionally, there may be a cultural shift towards offline activities and community engagement, driven by concerns over the addictive nature of online platforms. The author posits that while some individuals may retreat from the digital world, others may become more entrenched in it, leading to a complex evolution of human behavior and societal structures. Ultimately, the article suggests that real estate values will continue to rise as communities prioritize face-to-face interactions and as productivity gains influence housing demand.
- The rise of AI may lead to an increase in fake online interactions, causing users to disengage.
- A preference for verified human accounts and private social networks is likely to emerge.
- Cultural shifts towards offline activities and community engagement may occur.
- Real estate prices are expected to rise due to increased demand for close-knit communities.
- The evolution of human behavior in response to technology will be complex and multifaceted.
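As a rough illustration of the "Market for Lemons" dynamic described above, here is a minimal sketch (hypothetical numbers, not from the article): buyers who cannot tell plums from lemons will only pay the average value of what is on offer, the owners of above-average goods withdraw, and the market unravels round by round.

```python
# Toy "market for lemons" (adverse selection) sketch -- illustrative only.
# Assumptions: 100 goods with true values uniform in [0, 1]; buyers cannot
# observe quality, so they offer the average value of goods still for sale;
# any seller whose good is worth more than that offer withdraws from the market.
import random

random.seed(0)
on_market = [random.random() for _ in range(100)]  # true value of each good

for rnd in range(1, 6):
    price = sum(on_market) / len(on_market)            # buyers pay the average
    remaining = [v for v in on_market if v <= price]   # above-average sellers exit
    print(f"round {rnd}: offered price {price:.2f}, goods left {len(remaining)}")
    if len(remaining) == len(on_market):               # no one left with a reason to exit
        break
    on_market = remaining
```

Each round the average quality, and with it the price buyers will pay, falls further; the article's worry is that cheap AI-generated "lemons" do the same thing to online interaction, until genuine accounts stop showing up at all.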
Related
Jack Dorsey says we won't know what is real anymore in the next 5-10 years
Jack Dorsey and Elon Musk express concerns about AI-generated deep fakes blurring reality. OpenAI's efforts fall short, emphasizing the importance of human intervention in maintaining authenticity amidst AI advancements.
There's Just One Problem: AI Isn't Intelligent, and That's a Systemic Risk
AI mimics human intelligence but lacks true understanding, posing systemic risks. Over-reliance may lead to failures, diminish critical thinking, and fail to create enough jobs, challenging economic stability.
What comes after the AI crash?
Concerns about a generative AI bubble highlight potential market corrections, misuse of AI technologies, ongoing harms like misinformation, environmental issues from data centers, and the need for vigilance post-crash.
The Problem with the Job Market
The job market faces challenges from company inefficiencies and AI's rise, causing employee dissatisfaction. However, a gradual transition to a new work paradigm may lead to a more fulfilling future.
The Continued Trajectory of Idiocy in the Tech Industry
The article critiques the tech industry's hype cycles, particularly around AI, which distract from past failures. It calls for accountability and awareness of ethical concerns regarding user consent in technology.
Which isn't to say we're not still headed for something like this. I do think some of those most sensitive to this sort of crass misuse of platforms already got off when human-created ads and bots became prevalent, so the question is whether LLM-based bots are notably worse in a way that people care about differently.
I'm already a few steps down the "logging off" path at this point, but it's unclear to me that I'm not the outlier here. I worry that there are many people who are lonely and don't have a better way to interact socially, and who are totally fine talking to some catfishing LLM that wants their money in some way or other.
It seems clear to me that a disconcerting number of LLM startups are founded by, or are willing to take money from, people who see a relentlessly positive/comforting/sexually open faux human as their best way forward in life. Every week brings some new "it's a social network where everyone's an LLM that's nice to you" or "this app is your best friend". It feels predatory to me in a way where, even if it's well intended, it amounts to an admission or exploitation of a serious ill in our society.
One thing I did take exception to is one of his suggested influences that could lead us to avoid a death of online interaction. "That AI technologies won't be perfect substitutes for actual human-to-human contact" is, on its face, a compelling argument, but the fact is that humans rarely look for "perfect" solutions to anything, especially in the social sphere. There is a non-trivial number of people right now using virtual companions driven by the questionably convincing ChatGPT LLM, and they howled in anger and pain when the company that created and sold the tech decided to alter the companion to be less overtly sexual. Sure, even in a pre-Internet era some of those fixated on such companions would have found some other evolutionary dead end, but a large number of them would have engaged in satisficing behavior, found another human, and reproduced.
What we face now is a noteworthy population of humans that opt out of the complexity of human relationships and reproduction because an over-powered chatbot weaponized their empathy against them and made them a genetic dead-end.
I'm hardly advocating for some Butlerian Jihad, but I would suggest we need to think about the potential ramifications of a social Market for Lemons operating in parallel with a technology that offers a satisfactory empathetic alternative to that market, an alternative that is itself a lemon for humans in the long term.
If nothing else, it provides a neat basis for a sci-fi premise.
This has become 100% true for me. I used to use Instagram, Twitter, Facebook, Reddit, etc, but now I only use Discord and Hacker News.
I think that, unfortunately, they will eventually become better substitutes. We are at a real crossroads where people are actively choosing to forgo human interaction for purely digital interaction. Add real digital intelligence that does a better job of interacting with you than real people do, and it isn't hard to see how things are likely to go.
How'd I do on my predictions, folks?
How Western Media and Advertising work is the root cause.
Cutting and pasting the first sentence from MIT's class on the Attention Economy:
"In Understanding Media, Marshall McLuhan proposes that in paying for space and time in newspapers and magazines, on radio and television, advertisers are effectively buying a piece of the reader, listener, or viewer. And he wryly observes that the ad agencies “would gladly pay the reader, listener, or viewer directly for his time and attention if they knew how to do so.” The absurdity of this proposition underscores the essentially mediated nature of human attention."
The class clearly shows how Human Attention was being over-exploited well before AI showed up.
Human Attention is mostly directed towards Consumption, Status Signaling, and Accumulation of Wealth through Mass Media/Advertising/Marketing/PR/Influencers, etc.
There are examples in history where Human Attention at large scales has been directed towards things other than Consumption, Status Signaling, and Accumulation of Wealth. A big one is Gandhi, who influenced a huge variety of people to live simple lives.
But of course, the moment charismatic leaders like that exit the stage, Attention Allocation defaults back to the status quo.
So we need very different leaders and a very different kind of Media and Advertising ecosystem to cause shifts in how finite Global Human Attention gets allocated. A lot of imagination is required. Unfortunately, our most imaginative minds are in service of financial overlords and rent collectors, selling the plebs new iPhones and more Star Wars movies, "experiences", and merch.
That's why I left twitter...
Which really showcases what it is like to casually browse modern social media. The most concerning part is how some of that absolute garbage is getting thousands...tens of thousands of likes and replies. And it's all from other bots.
It's pretty much the dead internet theory.
And somehow pages like FB can't moderate or filter away this garbage. Imagine what it will look like ten years from now.
Someone predicted that the future of the internet will be the "closed" internet: places that are highly vetted for human use only. The rest will be some scorched AI wasteland.
Also, imagine all the dollars wasted: dollars wasted generating the stuff, dollars wasted accommodating the traffic. There has got to be some incentive to stop the slop.
{..guiltily tries to remember the last time I touched grass with my hand..}
The oligopolization and enshittification of all digital infrastructure has happened even as digitization of all of society became a one-way street.
This fundamental contradiction is a monumental failure of governance that will haunt society for a long time.
There will have to be some sort of resolution, as the status quo is untenable: a crippling stalemate that limits innovation and stokes great risks.
But it is not clear which way things will evolve. It's not like we have been here before. Make no mistake though, the stakes are very high. When the means of communication and information processing change so dramatically, none of the certainties of the past can be relied upon.