July 11th, 2024

General Theory of Neural Networks

The article explores Universal Activation Networks (UANs) bridging biological gene regulatory networks and artificial neural networks. It discusses their evolution, structure, computational universality, and potential to advance research in both fields.

Skepticism · Criticism · Curiosity

The article discusses the General Theory of Neural Networks, focusing on Universal Activation Networks (UANs) that span biological gene regulatory networks to artificial neural networks. These networks consist of nodes that integrate inputs and activate at a threshold, leading to actions or broadcasts. The evolution of UANs showcases evolvability and generative open-endedness, distinguishing them from other dynamic networks. The article highlights the historical development of networks from prokaryotes to modern artificial neural networks, emphasizing the recurring theme of network structures across species. It proposes that UANs represent a fundamental phenomenon bridging natural and formal sciences, offering explanatory power surpassing existing complex system theories. The text delves into the insights gained from studying artificial gene regulatory networks, demonstrating how network topology determines function and the potential reduction of biological complexity to Boolean circuits. The article also touches on the evolution of gene networks towards critical topologies for optimal function, drawing parallels to thermodynamics. The author suggests a series of conjectures regarding UANs' computational universality and the explainability of their operations. Overall, the article advocates for a unified understanding of UANs to advance research in biological and artificial networks.
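
To make the Boolean-circuit view mentioned above concrete, here is a minimal sketch (not from the article; gene count, wiring, and update rules are arbitrary) of a random Boolean network of the kind used to model gene regulation, where the topology and per-node truth tables fully determine the dynamics:

```python
import numpy as np

# Minimal sketch of a Boolean gene regulatory network (hypothetical genes and rules).
# Each gene is ON (1) or OFF (0); its next state is a Boolean function of its inputs.
rng = np.random.default_rng(0)

n_genes = 5
# Random topology: each gene reads the states of two other genes (chosen arbitrarily).
inputs = [rng.choice(n_genes, size=2, replace=False) for _ in range(n_genes)]
# Random Boolean update rule per gene: a lookup table over the 4 input combinations.
rules = [rng.integers(0, 2, size=4) for _ in range(n_genes)]

def step(state):
    """Synchronously update all genes from the current Boolean state."""
    new = np.zeros_like(state)
    for g in range(n_genes):
        a, b = state[inputs[g][0]], state[inputs[g][1]]
        new[g] = rules[g][2 * a + b]   # index into the gene's truth table
    return new

state = rng.integers(0, 2, size=n_genes)
for t in range(8):                     # a finite Boolean network always falls into a cycle (attractor)
    print(t, state)
    state = step(state)
```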

Related

Trying Kolmogorov-Arnold Networks in Practice

Recent interest in Kolmogorov-Arnold networks (KANs) stems from claims of improved accuracy and faster training. However, practical testing revealed that while KANs can match neural networks' performance, they require complex implementation and tuning. Even after efforts to optimize KANs, simpler neural networks consistently outperformed them. Alternative activation functions were explored, leading to the conclusion that neural networks are more effective with less effort. While KANs may excel in niche cases, neural networks remain the stronger default choice, underscoring the value of exploring alternatives in AI research.
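
As a rough sketch of the structural difference being tested (not the post's code; a cubic polynomial stands in for the splines real KANs use): an MLP layer learns a linear map and applies a fixed nonlinearity, while a KAN-style layer learns a separate univariate function on every edge and sums them.

```python
import numpy as np

def mlp_layer(x, W, b):
    # Standard MLP layer: learned linear map followed by a fixed nonlinearity (ReLU).
    return np.maximum(0.0, W @ x + b)

def kan_layer(x, coeffs):
    # KAN-style layer (toy version): each edge (i -> j) carries its own learnable
    # univariate function, here a cubic polynomial instead of a spline.
    # coeffs has shape (out_dim, in_dim, 4); output_j = sum_i phi_ij(x_i).
    powers = np.stack([np.ones_like(x), x, x**2, x**3])   # (4, in_dim)
    return np.einsum('oik,ki->o', coeffs, powers)

rng = np.random.default_rng(0)
x = rng.normal(size=3)
print(mlp_layer(x, rng.normal(size=(2, 3)), rng.normal(size=2)))
print(kan_layer(x, rng.normal(size=(2, 3, 4))))
```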

The moment we stopped understanding AI [AlexNet] [video]

The video discusses high-dimensional embedding spaces in AI models like AlexNet and ChatGPT. It explains AlexNet's convolutional blocks for image analysis and ChatGPT's use of transformers to generate responses, emphasizing the evolution of AI models and the challenges of visualizing activations.

Training of Physical Neural Networks

Physical Neural Networks (PNNs) leverage physical systems for computation, offering potential in AI. Research explores training larger models for local inference on edge devices. Various training methods are investigated, aiming to revolutionize AI systems by considering hardware physics constraints.

We need new metaphors that put life at the centre of biology

The article discusses the limitations of genetic frameworks in biology post-Human Genome Project. It highlights the significance of non-coding RNA genes in gene regulation, challenging traditional genetic narratives. RNA's role as a 'computational engine' is emphasized.

AI: What people are saying
The article on Universal Activation Networks (UANs) bridging biological and artificial neural networks has sparked diverse reactions.
  • Some commenters argue that the concept of universal function approximators is not unique to neural networks and highlight the importance of parameter efficiency.
  • Others draw parallels to metaphysical theories and Markov Chains, suggesting broader implications for understanding consciousness and network behavior.
  • Critics point out the grandiose prose and lack of originality, with some questioning the scientific validity of the claims.
  • There are discussions on the variety of activation functions in neural networks and the importance of topology in network models.
  • Several comments emphasize the need for caution, noting that all models have limitations and should be viewed critically.
16 comments
By @AIorNot - 3 months
What's wild to me is that Donald Hoffman is also proposing a similar foundation for his metaphysical theory of consciousness, i.e., that it is a fundamental property that exists outside of spacetime and leads via a Markov chain of conscious agents (in a network as described above).

I.e., everything that exists may be the result of some kind of uber-network existing outside of space and time.

It's a wild theory, but the fact that these networks keep popping up and recurring at level upon level wherever agency and intelligence are needed is crazy.

https://youtu.be/yqOVu263OSk?si=SH_LvAZSMwhWqp5Q

By @lumost - 3 months
The existence of a universal function approximator or function representation is not particularly unique to neural networks. Fourier transforms can represent any function as a (potentially) infinite vector on an orthonormal basis.

What would be particularly interesting is if there were a proof that some universal approximators are more parameter efficient than others. The simplicity of the neural representation would suggest that it may be a particularly useful, if inscrutable, approximator.
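
A toy illustration of the point about bases and parameter counts (the target function and harmonic counts are arbitrary choices for this sketch): a truncated Fourier series approximates a 1D function, with the coefficient vector playing the role of the parameters.

```python
import numpy as np

# Approximate a target function with a truncated Fourier basis and report
# how the error falls as the number of coefficients ("parameters") grows.
x = np.linspace(0.0, 1.0, 1000)
f = np.where(x < 0.5, x, 1.0 - x)          # triangle wave: the function to approximate

def fourier_fit(f, x, n_harmonics):
    """Least-squares fit of f onto {1, cos(2*pi*k*x), sin(2*pi*k*x)}."""
    cols = [np.ones_like(x)]
    for k in range(1, n_harmonics + 1):
        cols += [np.cos(2 * np.pi * k * x), np.sin(2 * np.pi * k * x)]
    basis = np.stack(cols, axis=1)          # (len(x), 2*n_harmonics + 1)
    coeffs, *_ = np.linalg.lstsq(basis, f, rcond=None)
    return basis @ coeffs, coeffs

for n in (1, 3, 10):
    approx, coeffs = fourier_fit(f, x, n)
    err = np.max(np.abs(approx - f))
    print(f"{len(coeffs):3d} parameters -> max error {err:.4f}")
```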

By @LarsDu88 - 3 months
There are a whole lot more activation functions used nowadays in NNs

https://dublog.net/blog/all-the-activations/

The author is extrapolating way too much: the simplest model of X is similar to the simplest model of Y, therefore the common element must be deep and insightful, rather than mathematical modelers simply being rationally parsimonious.
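
For context, a minimal sketch of a few of the activation functions surveyed in the linked post, using their standard formulas (the GELU here is the common tanh approximation):

```python
import numpy as np

# Standard definitions of a few common activation functions.
def relu(x):    return np.maximum(0.0, x)
def sigmoid(x): return 1.0 / (1.0 + np.exp(-x))
def tanh(x):    return np.tanh(x)
def gelu(x):    # widely used tanh approximation of GELU
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))
def silu(x):    return x * sigmoid(x)       # a.k.a. swish

x = np.linspace(-3, 3, 7)
for fn in (relu, sigmoid, tanh, gelu, silu):
    print(f"{fn.__name__:8s}", np.round(fn(x), 3))
```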

By @AndrewKemendo - 3 months
This is another example of Markov chains in the wild, so that's what he's seeing.

The general NN is a discrete implementation of that.

https://en.m.wikipedia.org/wiki/Markov_chain
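
For readers unfamiliar with the reference, a minimal Markov chain sketch (the transition probabilities are arbitrary): the next state distribution depends only on the current one, via a fixed transition matrix.

```python
import numpy as np

# Minimal two-state Markov chain; transition probabilities are arbitrary.
P = np.array([[0.9, 0.1],     # P[i, j] = probability of moving from state i to state j
              [0.4, 0.6]])

dist = np.array([1.0, 0.0])   # start entirely in state 0
for step in range(5):
    print(step, dist)
    dist = dist @ P           # Markov property: the next distribution depends only on the current one

# Stationary distribution: left eigenvector of P with eigenvalue 1.
vals, vecs = np.linalg.eig(P.T)
stationary = np.real(vecs[:, np.argmax(np.real(vals))])
print("stationary:", stationary / stationary.sum())
```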

By @rdlecler1 - 3 months
Despite vast implementation constraints spanning diverse biological systems, a clear pattern emerges: the repeated and recursive evolution of Universal Activation Networks (UANs). These networks consist of nodes (Universal Activators) that integrate weighted inputs from other units or environmental interactions and activate at a threshold, resulting in an action or an intentional broadcast. Minimally, Universal Activator Networks include gene regulatory networks, cell networks, neural networks, cooperative social networks, and sufficiently advanced artificial neural networks.

Evolvability and generative open-endedness define Universal Activation Networks, setting them apart from other dynamic networks, complex systems, or replicators. Evolvability implies robustness and plasticity in both structure and function, differentiable performance, inheritable replication, and selective mechanisms. They evolve, they learn, they adapt, they get better, and their open-endedness lies in their capacity to form higher-order networks subject to a new level of selection.
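
A minimal sketch of the "Universal Activator" node described above (weights, threshold, and inputs are illustrative, not taken from the article): integrate weighted inputs and fire when a threshold is crossed.

```python
import numpy as np

def universal_activator(inputs, weights, threshold):
    """Integrate weighted inputs; fire (1) if the total reaches the threshold, else stay silent (0)."""
    return 1 if np.dot(weights, inputs) >= threshold else 0

# Illustrative values: a node listening to three upstream units.
weights   = np.array([0.6, -0.3, 0.9])
threshold = 0.5
print(universal_activator(np.array([1, 0, 0]), weights, threshold))  # 1: excitation alone reaches threshold
print(universal_activator(np.array([1, 1, 0]), weights, threshold))  # 0: the inhibitory input pulls the sum below it
print(universal_activator(np.array([1, 1, 1]), weights, threshold))  # 1: combined input fires the node
```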

By @t_serpico - 3 months
"Topology is all that matters" --> bold statement, especially when you read the paper. The original authors were much more reserved in terms of their conclusions.
By @sixo - 3 months
God this grandiose prose style is insufferable. Calm down.

Anyway, this doesn't even try to make the case that that equation is universal, only that "learning" is a general phenomenon of living systems, which can probably be modeled in many different ways.

By @Imnimo - 3 months
How does the attention operator in transformers, in which input data is multiplied by input data (as opposed to other neural network operations, in which input data is multiplied by model weights), fit into the notion of a universal activator?
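
For reference, a minimal scaled dot-product attention sketch (shapes and values arbitrary) showing the data-by-data product the comment asks about: the attention scores come from multiplying input-derived queries by input-derived keys, rather than inputs by fixed weights.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def attention(X, Wq, Wk, Wv):
    # Q, K, V are linear projections of the input (input times weights, the usual case)...
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    # ...but the attention scores multiply input-derived Q by input-derived K:
    # this is the data-by-data product the comment asks about.
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    return softmax(scores) @ V

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))               # 4 tokens, 8-dim embeddings (arbitrary)
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(attention(X, Wq, Wk, Wv).shape)     # (4, 8)
```
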
By @smokel - 3 months
People seem to be obsessed with finding fundamental properties in neural networks, but why not simply marvel at the more basic incredible operations of addition and multiplication, and stop there?

By @cfgauss2718 - 3 months
There are some interesting parallels between the ideas in this article and IIT. The focus on parsimony in networks, and on pruning connections that are redundant to reveal the minimum topology (and the underlying computation), is reminiscent of parts of IIT: I'm thinking of the computation of the maximally irreducible concept structure via searching for a network partition which minimizes the integrated cause-effect information in the system. Such redundant connections are necessarily severed by the partition.
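
Not IIT's actual partition search, but a toy sketch of the pruning idea (network and threshold are made up): drop connections whose weights are negligible and confirm the computation is essentially unchanged, revealing the minimal topology.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy layer whose weight matrix mixes strong connections with near-zero (redundant) ones.
W = rng.normal(size=(4, 6)) * rng.choice([1.0, 1e-4], size=(4, 6), p=[0.5, 0.5])

def layer(x, W):
    return np.maximum(0.0, W @ x)          # simple ReLU layer

# Prune connections whose weights are negligible, revealing the minimal topology.
W_pruned = np.where(np.abs(W) > 1e-2, W, 0.0)

x = rng.normal(size=(6,))
print("connections kept:", int(np.count_nonzero(W_pruned)), "of", W.size)
print("max output change after pruning:", np.max(np.abs(layer(x, W) - layer(x, W_pruned))))
```
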
By @flufluflufluffy - 3 months
We must always remember that all models are wrong, though some are useful.

By @29athrowaway - 3 months
In biology there are families of neurons, each with a different morphology.

By @xiaodai - 3 months
Can't rule out that it was generated by ChatGPT.

By @hnax - 3 months
I switched off at paragraph two:

"Prokaryotes emerged 3.5 billion years ago, their gene networks acting like rudimentary brains. These networks controlled chemical reactions and cellular processes, laying the foundation for complexity."

... for which there is no evidence at all. Pseudo-science, aka fantasy.

By @macilacilove - 3 months
If there is a 'god equation' it will almost certainly include a+b=c because we use it all the time to describe "diverse biological systems with vast implementation constraints".

This article is lacking originality and insight to such a degree that I suspect it is patentable.