The Sobering Reality of AI: A Researcher's Perspective
Terrance Craddock critiques large language models, claiming they produce accurate responses only about 10% of the time. He illustrates their unreliability with a simple letter-counting test, raising concerns about AI's practical applications and credibility.
Terrance Craddock, an independent AI researcher, shares a critical perspective on the current state of artificial intelligence, particularly large language models in the 70-billion-parameter class. He argues that the excitement surrounding AI is exaggerated: in his experience, these models produce accurate and useful responses only about 10% of the time, while the remaining 90% of outputs are irrelevant, nonsensical, or incorrect, which he believes undermines the credibility of the field. Craddock illustrates the point with a simple experiment: he asks AI models how many 'r's are in the word "strawberry." Despite the simplicity of the question, many models incorrectly assert there are two 'r's and refuse to reconsider when challenged. For him, this highlights a significant flaw in the models' reliability and raises concerns about their practical applications.
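For reference, the correct answer is trivial to verify without any model. The sketch below is not from Craddock's write-up; it is just a minimal illustration of the ground truth his test checks, counting the letter directly in Python:

```python
# Count occurrences of 'r' in "strawberry", the ground truth for the letter-counting test.
word = "strawberry"
r_count = word.count("r")
print(f"'{word}' contains {r_count} letter 'r's")  # prints 3
```

Any model that answers "two" fails this check regardless of how the prompt is phrased.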
- Craddock claims AI models have only a 10% success rate in providing accurate responses.
- In his experience, the majority of outputs from these models are irrelevant or incorrect.
- A simple letter-counting test illustrates the models' inability to perform basic tasks accurately.
- The researcher emphasizes the need for a more realistic understanding of AI capabilities.
- Current AI technology may undermine the credibility of the field due to its flaws.
Related
Francois Chollet – LLMs won't lead to AGI – $1M Prize to find solution [video]
The video discusses limitations of large language models in AI, emphasizing genuine understanding and problem-solving skills. A prize incentivizes AI systems showcasing these abilities. Adaptability and knowledge acquisition are highlighted as crucial for true intelligence.
How to Raise Your Artificial Intelligence: A Conversation
Alison Gopnik and Melanie Mitchell discuss AI complexities, emphasizing limitations of large language models (LLMs). They stress the importance of active engagement with the world for AI to develop conceptual understanding and reasoning abilities.
Rodney Brooks' Three Laws of Artificial Intelligence
Rodney Brooks discusses misconceptions about AI, emphasizing overestimation of its capabilities, the need for human involvement, challenges from unpredictable scenarios, and the importance of constraints to ensure safe deployment.
There's Just One Problem: AI Isn't Intelligent, and That's a Systemic Risk
AI mimics human intelligence but lacks true understanding, posing systemic risks. Over-reliance may lead to failures, diminish critical thinking, and fail to create enough jobs, challenging economic stability.
Is AI Killing Itself–and the Internet?
Recent research reveals "model collapse" in generative AI, where reliance on AI-generated content degrades output quality. With 57% of web text AI-generated, concerns grow about disinformation and content integrity.
The article focuses on a strawman argument about hype. I agree AGI is not here, and I don't see most people claiming we are even near AGI; the sense of hype just comes from how much people talk about AI, and I think for good reason. It is still immensely valuable in many different use cases. It's not going to replace people right now, but it is absolutely going to be a productivity multiplier.
Also, looking at the author's Medium and the content there, the posting frequency and the made-up stories that conflict with each other lead me to presume it's all AI-generated.