June 26th, 2024

Jack Dorsey says we won't know what is real anymore in the next 5-10 years

Jack Dorsey and Elon Musk express concerns about AI-generated deepfakes blurring reality. OpenAI's mitigation efforts fall short, underscoring the importance of human intervention in maintaining authenticity amid AI advancements.

Jack Dorsey, former Twitter CEO, warns that in the next 5-10 years, distinguishing between real and fake content will become increasingly difficult due to the rise of AI-generated deepfakes. He emphasizes the importance of personal experience and vigilance in verifying authenticity. OpenAI has taken steps such as watermarking images to address this issue, but acknowledges that watermarking is not a complete solution. Dorsey also discusses the impact of AI on job automation, highlighting the need for human intervention to maintain quality and authenticity in tasks like writing. The proliferation of deepfakes and AI-generated content poses a significant challenge, with Dorsey suggesting that the future may feel like living in a simulation where everything appears manufactured. Elon Musk echoes similar concerns, questioning whether we are already in that state. The AI revolution is advancing rapidly, with tools like OpenAI's GPT-4o showcasing reasoning capabilities across various media types. Despite AI's job displacement effects, there is a growing trend of hiring writers to humanize automated content, indicating an ongoing need for human input in content creation.

Related

We need an evolved robots.txt and regulations to enforce it

In the era of AI, the robots.txt file faces limitations in guiding web crawlers. Proposals advocate for enhanced standards to regulate content indexing, caching, and language model training. Stricter enforcement, including penalties for violators like Perplexity AI, is urged to protect content creators and uphold ethical AI practices.

Lessons About the Human Mind from Artificial Intelligence

In 2022, a Google engineer claimed AI chatbot LaMDA was self-aware, but further scrutiny revealed it mimicked human-like responses without true understanding. This incident underscores AI limitations in comprehension and originality.

The Encyclopedia Project, or How to Know in the Age of AI

Artificial intelligence challenges information reliability online, blurring real and fake content. An anecdote underscores the necessity of trustworthy sources like encyclopedias. The piece advocates for critical thinking amid AI-driven misinformation.

My Memories Are Just Meta's Training Data Now

Meta's use of personal content from Facebook and Instagram for AI training raises privacy concerns. European response led to a temporary pause, reflecting the ongoing debate on tech companies utilizing personal data for AI development.

Hackers 'jailbreak' powerful AI models in global effort to highlight flaws

Hackers exploit vulnerabilities in AI models from OpenAI, Google, and xAI, sharing harmful content. Ethical hackers challenge AI security, prompting the rise of LLM security start-ups amid global regulatory concerns. Collaboration is key to addressing evolving AI threats.

18 comments
By @bigiain - 4 months
Pretty sure Jack lost the ability to distinguish reality from his own hype _way_ more than 5-10 years ago
By @arnaudsm - 4 months
The original Twitter verification process was a great solution to this back in the day!

It's a shame its brand was destroyed to make a quick buck. Most blue checks I see nowadays are spam and trolls.

By @tmnvix - 4 months
Not just what, but who.

I expect to see a rise in use of small scale networks/groups, ideally self hosted in some way, where participants are all known to each other in real life. For years now I've been seeing people migrate to WhatsApp groups largely for this reason (I know each of you, this conversation is not public, and we're not going to have some weird bot or 'curator' turn up to shill, steal, or spray slop around).

Really curious to know about any co-housing/living communities or the like that have found a useful solution along the lines of a calendar/notification/chat/messaging/file sharing platform that isn't some mashup of big tech offerings.

Who's offering real life communities a 'social platform in a box'?

By @jrflowers - 4 months
Chump numbers. If you are committed with your regular DMT usage you can get there within weeks.
By @MichaelRo - 4 months
I already don't know what's real anymore when reading historical snippets accompanied by pictures. Especially Facebook content, it's worse than the bottom of the barrel paper printed tabloids of the 90s, with the hen that gave birth to live chicks and all.

Eventually I'll have no option but to re-subscribe to paid content, while there's a trace of quality and objectivity left.

By @freetanga - 4 months
The next planned step from caring governments and big tech: mandatory online identification.

Sow disinformation to drive crowds, zero in and silence dissidents. 1984 is served.

I just wish that Zuck, Thiel, Sergei, Elon, Jack and all the richest-beyond-any-practical-sense are remembered in history as the ones that did most of the work for this to happen.

Hell, I am no Stallman fan, but I would take his view over the crowd any day.

By @specialist - 4 months
Jack would know. He helped knock away the few remaining safeguards we had left.

Whereas Square used AI to prevent fraud, Twitter used AI to create an algorithmic hate machine. Any remaining authentic signal is overwhelmed by the noise of bots, trolls, agitprop, memes, and sockpuppets.

By @namaria - 4 months
I'm not sure that many of the people out there in the world can or ever could.
By @maxehmookau - 4 months
This is already the case for many people who fall for half-truth clickbait on a daily basis.

I agree it will get worse, but to suggest that this is a future problem is wrong.

By @r-spaghetti - 4 months
Is that the real Jack Dorsey?
By @Havoc - 4 months
10 years?

This feels more like a next-12-months problem to me at the current pace.

By @pharos92 - 4 months
AI needs to be put back in the box and buried.
By @curiousdeadcat - 4 months
Time to re-read "Fall, or Dodge in Hell".

I feel it might be more unsettling than when I read it ~5 years ago.

By @jarsin - 4 months
I already think over the top positive comments are AI.
By @ilaksh - 4 months
We already live in alternate realities depending on what media we consume. Which is determined by our political group. Which is largely determined by location or socio-economic status, etc.

I mean literally having wildly varying beliefs about what is true or not. Especially political beliefs, each group believes that the other groups or their leaders are incompetent and corrupt. This goes for splits inside of a country but also in regards to other countries.

Technology is an amplifier. But it shouldn't be blamed for problems that humans create due to the nature of humanity and the poor organization of society and poor integration of information.

By @vouaobrasil - 4 months
An alternative is to join a coalition that refuses to use AI, and only consume content created by the coalition.
By @aredox - 4 months
Reminder this is the guy who had a good time meditating in the middle of the Rohingya genocide.