Data from macaque monkeys reveals flaws in deep neural networks
Research from Harvard shows that deep neural networks struggle to generalize, with accuracy dropping by about 80% on unfamiliar, out-of-distribution data. The finding highlights the need for collaboration between AI and neuroscience to improve understanding of both artificial and biological intelligence.
Research conducted by a team at Harvard's John A. Paulson School of Engineering and Applied Sciences has revealed significant limitations in deep neural networks' ability to generalize, particularly when tested on "out-of-distribution" data. The study, presented at the Neural Information Processing Systems conference, used data from macaque monkeys to assess how well these models could predict neuronal responses to novel images. The researchers found that while the models performed well on familiar data, their accuracy dropped dramatically, by about 80%, when faced with unfamiliar conditions such as changes in image contrast or hue (a minimal sketch of this kind of out-of-distribution test follows the key points below). This points to a fundamental flaw in current brain-modeling techniques that rely on deep neural networks: they fail to replicate the brain's remarkable ability to generalize across varying conditions. The findings suggest that the generalization challenges facing artificial intelligence extend to neuroscience as well, underscoring the need for collaboration between the two fields. The research, which involved extensive data collection from the monkeys, aims to improve understanding of both artificial and biological intelligence.
- Deep neural networks struggle with generalization, particularly with out-of-distribution data.
- The study used data from macaque monkeys to assess neuronal responses to novel images.
- Models performed well on familiar data but poorly on unfamiliar conditions, indicating a significant limitation.
- The findings suggest a need for collaboration between AI and neuroscience to tackle generalization challenges.
- The research highlights the broader implications of AI's limitations for understanding biological intelligence.
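
To make the evaluation protocol concrete, here is a minimal sketch of an out-of-distribution test, assuming entirely synthetic data, a simple linear encoding model, and a contrast-and-brightness shift as the distribution change. None of the data, model choices, or numbers reflect the study's actual pipeline or results; the point is only the shape of the comparison.

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, n_pixels, n_neurons = 1000, 500, 256, 20

# Synthetic ground truth: each simulated "neuron" responds nonlinearly to a
# weighted sum of pixel values (a crude stand-in for real neuronal tuning).
true_w = rng.normal(size=(n_pixels, n_neurons)) / np.sqrt(n_pixels)

def neuron_responses(images):
    """Simulated responses: saturating nonlinearity plus trial noise."""
    drive = images @ true_w
    return np.tanh(drive) + 0.1 * rng.normal(size=drive.shape)

# In-distribution images: zero-mean pixel values with unit contrast.
train_imgs = rng.normal(size=(n_train, n_pixels))
test_imgs = rng.normal(size=(n_test, n_pixels))

# Out-of-distribution images: the same test images with reduced contrast and
# a brightness shift, loosely analogous to the contrast/hue changes described.
ood_imgs = 0.3 * test_imgs + 1.0

# Fit a simple linear encoding model on in-distribution data only.
train_resp = neuron_responses(train_imgs)
w_hat, *_ = np.linalg.lstsq(train_imgs, train_resp, rcond=None)

def mean_prediction_corr(images):
    """Mean per-neuron correlation between predicted and observed responses."""
    observed = neuron_responses(images)
    predicted = images @ w_hat
    corrs = [np.corrcoef(predicted[:, i], observed[:, i])[0, 1]
             for i in range(n_neurons)]
    return float(np.mean(corrs))

print(f"in-distribution correlation:     {mean_prediction_corr(test_imgs):.2f}")
print(f"out-of-distribution correlation: {mean_prediction_corr(ood_imgs):.2f}")
```

A large gap between the two printed correlations is the kind of generalization failure the study reports for far more capable deep network models of the visual cortex.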
Related
Loss of plasticity in deep continual learning
A study in Nature reveals that standard deep learning methods lose plasticity over time, leading to performance declines. It proposes continual backpropagation to maintain adaptability without retraining from scratch.
When computer vision works more like a brain, it sees more like people do
MIT researchers developed a computer vision model that mimics human brain processing. Trained on monkey IT cortex data, it improves object recognition and resistance to adversarial attacks, encouraging collaboration between neuroscience and AI.
LLMs don't do formal reasoning
A study by Apple researchers reveals that large language models struggle with formal reasoning and rely instead on pattern matching. The authors suggest neurosymbolic AI may be needed to improve reasoning capabilities.
Apple researchers ran an AI test that exposed a fundamental 'intelligence' flaw
Apple researchers found that many AI models struggle with basic arithmetic when irrelevant data is included, highlighting a lack of genuine logical reasoning and cautioning against overestimating AI's intelligence.