September 8th, 2024

Novel Chinese computing architecture 'inspired by human brain' can lead to AGI

Scientists in China have developed a brain-inspired computing architecture that focuses on internal complexity, potentially leading to artificial general intelligence (AGI) and more efficient AI systems.

Scientists in China have developed a novel computing architecture inspired by the human brain, which they believe could pave the way for artificial general intelligence (AGI). Current AI models, particularly large language models (LLMs), rely on vast neural networks that loosely mimic brain function but are constrained by their training data and limited reasoning capabilities. The new architecture aims to address these limitations by focusing on "internal complexity" rather than simply scaling up existing models: each artificial neuron is given a richer internal structure, taking a cue from the human brain, which packs roughly 100 billion neurons and 1,000 trillion synaptic connections while keeping energy consumption low.

The researchers implemented a Hodgkin-Huxley (HH) network model, which simulates neuronal activity with high biological accuracy and proved able to perform complex tasks efficiently. Their findings suggest that smaller models built on this architecture can match the performance of much larger conventional models, potentially leading to more efficient AI systems. While AGI remains a theoretical goal, some experts believe it could be realized within a few years, though researchers disagree on which strategy will get there.
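
To make "internal complexity" concrete: the Hodgkin-Huxley model describes a neuron not as a single weighted sum plus activation, but as a small system of coupled differential equations with its own gating dynamics. The sketch below is not the paper's implementation, just a minimal single-neuron simulation in Python using the textbook squid-axon parameters, integrated with forward Euler:

```python
import numpy as np

# Standard squid-axon Hodgkin-Huxley parameters (Hodgkin & Huxley, 1952).
C_M = 1.0                               # membrane capacitance, uF/cm^2
G_NA, G_K, G_L = 120.0, 36.0, 0.3       # max conductances, mS/cm^2
E_NA, E_K, E_L = 50.0, -77.0, -54.387   # reversal potentials, mV

# Voltage-dependent opening/closing rates for the three gating variables.
def alpha_m(v): return 0.1 * (v + 40.0) / (1.0 - np.exp(-(v + 40.0) / 10.0))
def beta_m(v):  return 4.0 * np.exp(-(v + 65.0) / 18.0)
def alpha_h(v): return 0.07 * np.exp(-(v + 65.0) / 20.0)
def beta_h(v):  return 1.0 / (1.0 + np.exp(-(v + 35.0) / 10.0))
def alpha_n(v): return 0.01 * (v + 55.0) / (1.0 - np.exp(-(v + 55.0) / 10.0))
def beta_n(v):  return 0.125 * np.exp(-(v + 65.0) / 80.0)

def simulate_hh(i_ext=10.0, t_max=50.0, dt=0.01):
    """Forward-Euler integration of one HH neuron.

    i_ext: injected current (uA/cm^2); t_max and dt in ms.
    Returns the membrane-voltage trace in mV.
    """
    steps = int(t_max / dt)
    v, m, h, n = -65.0, 0.05, 0.6, 0.32   # typical resting values
    trace = np.empty(steps)
    for i in range(steps):
        # Each gating variable has its own slow dynamics -- exactly the
        # per-neuron "internal complexity" a point neuron lacks.
        i_na = G_NA * m**3 * h * (v - E_NA)
        i_k  = G_K * n**4 * (v - E_K)
        i_l  = G_L * (v - E_L)
        v += dt * (i_ext - i_na - i_k - i_l) / C_M
        m += dt * (alpha_m(v) * (1.0 - m) - beta_m(v) * m)
        h += dt * (alpha_h(v) * (1.0 - h) - beta_h(v) * h)
        n += dt * (alpha_n(v) * (1.0 - n) - beta_n(v) * n)
        trace[i] = v
    return trace

# Count upward crossings of 0 mV as spikes.
spikes = (np.diff((simulate_hh() > 0.0).astype(int)) == 1).sum()
print(f"spikes in 50 ms: {spikes}")
```

A single HH unit already produces spiking, refractoriness, and threshold behavior that a ReLU neuron has to approximate with many units, which is the intuition behind trading network scale for per-neuron richness.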

- A new computing architecture inspired by the human brain may lead to AGI.

- The approach emphasizes "internal complexity" over simply scaling up existing neural networks.

- The Hodgkin-Huxley model used in the research simulates neuronal activity with high accuracy.

- Smaller models based on this architecture can perform as well as larger conventional models.

- AGI is still a theoretical goal, but some researchers predict it could be achieved in the near future.

2 comments
By @proc0 - 8 months
Interesting, it's adding some complexity to each neuron in the network, which lines up with observations from neuroscience. Looks like they also used backprop for training, but it's not clear at first glance how the gradients are computed (since presumably you'd either have to differentiate through the inner complexity of each neuron or ignore it somehow).

Saving the link for ref. https://www.nature.com/articles/s43588-024-00674-9.epdf?shar...
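
One plausible answer to the training question above (an assumption, not confirmed from the paper's methods): because the HH equations are smooth, the simulation loop can be unrolled and differentiated end to end with ordinary autograd, so the inner complexity is differentiated through rather than ignored. A toy PyTorch sketch with a single hypothetical learnable input weight w:

```python
import torch

# Textbook HH rate functions written with torch ops so autograd can
# differentiate through them. Hypothetical sketch, not the paper's code.
def a_m(v): return 0.1 * (v + 40.0) / (1.0 - torch.exp(-(v + 40.0) / 10.0))
def b_m(v): return 4.0 * torch.exp(-(v + 65.0) / 18.0)
def a_h(v): return 0.07 * torch.exp(-(v + 65.0) / 20.0)
def b_h(v): return 1.0 / (1.0 + torch.exp(-(v + 35.0) / 10.0))
def a_n(v): return 0.01 * (v + 55.0) / (1.0 - torch.exp(-(v + 55.0) / 10.0))
def b_n(v): return 0.125 * torch.exp(-(v + 65.0) / 80.0)

w = torch.tensor(1.0, requires_grad=True)  # hypothetical learnable weight
x = torch.full((2000,), 10.0)              # 20 ms of constant drive, dt = 0.01 ms

dt = 0.01
v = torch.tensor(-65.0)                    # resting potential, mV
m, h, n = torch.tensor(0.05), torch.tensor(0.6), torch.tensor(0.32)
for t in range(x.shape[0]):
    # Ionic currents with the standard squid-axon conductances/reversals.
    i_ion = (120.0 * m**3 * h * (v - 50.0)
             + 36.0 * n**4 * (v + 77.0)
             + 0.3 * (v + 54.387))
    v = v + dt * (w * x[t] - i_ion)        # C_m = 1 uF/cm^2, folded in
    m = m + dt * (a_m(v) * (1 - m) - b_m(v) * m)
    h = h + dt * (a_h(v) * (1 - h) - b_h(v) * h)
    n = n + dt * (a_n(v) * (1 - n) - b_n(v) * n)

loss = (v + 65.0) ** 2                     # toy objective on the final voltage
loss.backward()                            # BPTT through all 2000 HH steps
print(w.grad)                              # gradient reached w through the ODE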

By @janalsncm - 8 months
AGI is probably overstating things here, since it will probably take more than an architectural advance to reach the broad, transferable capabilities of even human problem solving.

But it might be worth scaling up some older architectures that were mostly abandoned. A huge GRU/LSTM would be really interesting; there may be emergent properties we don’t know about.
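
For reference, the GRU mentioned above is a gated recurrent cell from 2014 (Cho et al.); scaling it up would mean widening and stacking cells like this one. A minimal numpy sketch of the standard formulation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUCell:
    """Minimal GRU cell (Cho et al., 2014 formulation), numpy only."""
    def __init__(self, n_in, n_hidden, rng=np.random.default_rng(0)):
        init = lambda *shape: rng.normal(0.0, 0.1, shape)
        # One stacked matrix per gate: [0] update z, [1] reset r, [2] candidate.
        self.W = init(3, n_hidden, n_in)      # input weights
        self.U = init(3, n_hidden, n_hidden)  # recurrent weights
        self.b = np.zeros((3, n_hidden))

    def step(self, x, h):
        z = sigmoid(self.W[0] @ x + self.U[0] @ h + self.b[0])  # update gate
        r = sigmoid(self.W[1] @ x + self.U[1] @ h + self.b[1])  # reset gate
        h_cand = np.tanh(self.W[2] @ x + self.U[2] @ (r * h) + self.b[2])
        return (1.0 - z) * h + z * h_cand  # blend old state with candidate

# Toy usage: run a 5-step random sequence through one cell.
cell = GRUCell(n_in=8, n_hidden=16)
h = np.zeros(16)
for x in np.random.default_rng(1).normal(size=(5, 8)):
    h = cell.step(x, h)
print(h.shape)  # (16,)
```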