August 22nd, 2024

An Age of Hyperabundance

Laura Preston's article recounts her role as the contrarian speaker at the Project Voice conference, addressing ethical concerns about conversational AI, including its impact on vulnerable populations and human interaction.

The article "An Age of Hyperabundance" by Laura Preston reflects on her experience at the Project Voice conference, a significant event in the conversational AI sector. Preston describes the atmosphere dominated by over-groomed men and the buzz of tech jargon, where she was invited as the "honorary contrarian speaker." Her role was to voice concerns about the implications of conversational AI, which encompasses a wide range of technologies beyond just popular models like ChatGPT. The conference showcased various companies and their products, including photorealistic avatars and AI-driven customer service tools. One notable exhibit featured an avatar named Chatty, designed to assist users in various settings, including healthcare. The article highlights the potential for AI to engage with vulnerable populations, such as the elderly, while also cautioning against manipulative design practices that could mislead users. Preston's observations raise questions about the ethical implications of AI technology and its impact on human interaction, particularly in sensitive contexts like elder care. The conference served as a microcosm of the broader conversation surrounding the rapid advancement of AI and its societal consequences.

- The Project Voice conference focuses on advancements in conversational AI, attracting developers and entrepreneurs.

- Laura Preston served as the "honorary contrarian speaker," tasked with voicing concerns about AI's cultural impact.

- The event showcased various AI applications, including avatars for customer service and healthcare.

- Ethical considerations, such as manipulative design practices in AI, were highlighted as significant concerns.

- The article emphasizes the need for critical dialogue about the implications of AI technology on human relationships.

6 comments
By @lapcat - about 2 months
> Loneliness was a problem, but loneliness had a solution, and the solution was conversation. But don’t talk with your elders, and not with the front desk, and certainly not with the man on the corner, though he might know where the pizza is. (“Noise-canceling is great, especially if you live urban,” said the earbuds guy. “There’s a lot of world out there.”) Idle chitchat was a snag in daily living. We’d rather slip through the world as silent as a burglar, seen by no one except our devices.

Under the guise of 24/7 "connectedness", we've become more disconnected than ever from other people. AI is just the next step in this technological dystopia: you'll never have to talk to another person, and indeed you won't be allowed to.

By @matthewdgreen - about 2 months
This section was pretty horrifying:

> The CEO leaned an elbow on the podium. “I’ll tell you a story,” he said.

> A woman wrote to VERA about her elderly dog, who was having diarrhea.

> “Your dog is at the end of his life,” said VERA. “I recommend euthanasia.”

> The woman was beside herself. She told VERA she wasn’t ready to say goodbye. Her dog was her only companion.

> VERA knew the woman’s location. She sent a list of nearby clinics that could get the job done. Still, the woman was unconvinced. Euthanasia was so expensive. She’d never be able to afford it. VERA sent another list, this time of nearby shelters. “If you relinquish your dog to a shelter, they will euthanize him at no cost,” she said.

> The woman did not respond. But some days later she sent VERA a long and effusive message. She had taken VERA’s advice and euthanized her dog. She wanted to thank VERA for the support during the most difficult moment of her life.

> The CEO regarded us with satisfaction for his chatbot’s work: that, through a series of escalating tactics, it had convinced a woman to end her dog’s life, though she hadn’t wanted to at all. “The point of this story is that the woman forgot she was talking to a bot,” he said. “The experience was so human.”

By @eig - about 2 months
This essay resonated with me, particularly its point about the "mediocrity" of our future relationship with computer assistants.

I loved the paragraph about feeling "scammed," though I would've called it being "faked." The AI doctor can never use the stethoscope around her neck. She is hijacking totems of professionalism to appear more comforting, without the capabilities to back them up. The fake veterinarian can suggest a diagnosis but can't actually treat anyone. The real-estate chatbot cannot try to help a domestic violence victim.

Maybe that's why I interact with AI assistants the same way I interact with psychopaths. I'm comfortable interacting with them in jobs where the law incentivizes them to behave well. But for things like teaching, or medicine, or personal matters, I prefer someone with empathy.

By @kushie - about 2 months
> Bradley had read my essay “HUMAN_FALLBACK” in n+1’s Winter 2022 issue in which I described my year impersonating a chatbot for a real estate start-up.

HN seemed to like this writing (I really did)

https://news.ycombinator.com/item?id=33966059

By @datadrivenangel - about 2 months
"What really frightened me was the future of mediocrity they suggested: the inescapable screens, the app-facilitated antisocial behavior, the assumptions advanced as knowledge, and above all the collective delusion formulated in high offices and peddled to common people that all this made for an easier life."

The singularity is here, but it's actually the singularity of mediocrity. Which is ironic, because we have more technological prowess than ever.

By @lmaoguy - about 2 months
Society needs to collapse.