Is AI Killing Itself–and the Internet?
Recent research reveals "model collapse" in generative AI, where reliance on AI-generated content degrades output quality. With an estimated 57% of web text AI-generated or AI-altered, concerns grow about disinformation and content integrity.
Recent research from Cambridge and Oxford universities highlights a concerning phenomenon known as "model collapse" in generative AI systems. As AI-generated content proliferates online, these systems increasingly train on their own outputs, which degrades the quality of their responses. The study, led by Dr. Ilia Shumailov, found that after several iterations of models training on AI-generated content, responses deteriorate significantly and eventually become nonsensical. This cycle of reliance on synthetic data threatens the integrity of AI models as they lose touch with original human-generated content.

Approximately 57% of web text has reportedly been generated or altered by AI, raising alarms that AI could "kill itself" and compromise the quality of information on the internet. The researchers emphasize that AI systems need a continuous stream of human-generated content to remain effective. As AI adoption grows, some predictions suggest that up to 90% of internet content could be AI-generated by 2025, exacerbating the risk of model collapse. The implications include skewed training data, increased disinformation, and growing difficulty in filtering out AI-generated content. Without intervention, both AI and the internet may struggle to maintain truth and reliability.
- Model collapse occurs when AI systems train largely on their own generated content, leading to progressively degraded outputs.
- An estimated 57% of online text has been generated or altered by AI, raising concerns about content quality.
- Some predictions suggest that up to 90% of internet content could be AI-generated by 2025.
- The integrity of AI training data is at risk, potentially amplifying disinformation.
- Access to human-generated content is crucial for the sustainability of AI systems.
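To make the feedback loop concrete, here is a minimal sketch of what recursive training on synthetic data can do. This is a toy illustration, not the researchers' actual method: the "model" is just an empirical token-frequency distribution, and the vocabulary size, corpus size, and Zipf-style starting distribution are assumptions chosen for readability.

```python
import numpy as np

# Toy sketch of the "model collapse" feedback loop: each generation's
# "model" is the empirical token distribution of its training corpus,
# and each new corpus is sampled only from the previous generation.
# Rare tokens that happen not to be sampled vanish permanently, so
# diversity can only shrink -- no fresh human data enters the loop.

rng = np.random.default_rng(42)

VOCAB_SIZE = 1_000   # distinct "tokens" in the original human corpus
CORPUS_SIZE = 2_000  # tokens sampled to train each generation

# Generation 0: human data with a long-tailed (Zipf-like) distribution.
probs = 1.0 / np.arange(1, VOCAB_SIZE + 1)
probs /= probs.sum()

for generation in range(1, 11):
    # "Generate" a corpus from the current model, then "retrain" on it:
    # the new model is the empirical frequency of the sampled corpus.
    counts = rng.multinomial(CORPUS_SIZE, probs)
    probs = counts / CORPUS_SIZE
    alive = int((counts > 0).sum())
    print(f"gen {generation:2d}: {alive:4d} / {VOCAB_SIZE} tokens survive")
```

Because a token absent from one generation's corpus has probability zero ever after, the loss of low-probability content is one-way: the surviving vocabulary shrinks generation by generation. That is, in miniature, the dynamic behind the researchers' call for a continuous stream of human-generated data.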
Related
The problem of 'model collapse': how a lack of human data limits AI progress
Research shows that using synthetic data for AI training can lead to significant risks, including model collapse and nonsensical outputs, highlighting the importance of diverse training data for accuracy.
Google Researchers Publish Paper About How AI Is Ruining the Internet
Google researchers warn that generative AI contributes to the spread of fake content, complicating the distinction between truth and deception, and potentially undermining public understanding and accountability in digital information.
AI trained on AI garbage spits out AI garbage
Research from the University of Oxford reveals that AI models risk degradation due to "model collapse," where reliance on AI-generated content leads to incoherent outputs and declining performance.
'Model collapse'? An expert explains the rumours about an impending AI doom
Model collapse in AI refers to reduced effectiveness from reliance on AI-generated data. Concerns include diminished quality and diversity of outputs, prompting calls for better regulation and competition in the sector.
When A.I.'s Output Is a Threat to A.I. Itself
A.I. systems face quality degradation from training on their own outputs, risking "model collapse." Ensuring diverse, high-quality real-world data is essential to maintain effectiveness and reliability in A.I. applications.