General Theory of Neural Networks
The article explores Universal Activation Networks (UANs) bridging biological gene regulatory networks and artificial neural networks. It discusses their evolution, structure, computational universality, and potential to advance research in both fields.
The article presents a General Theory of Neural Networks built around Universal Activation Networks (UANs), a class spanning biological gene regulatory networks to artificial neural networks. These networks consist of nodes that integrate their inputs and activate once a threshold is crossed, triggering actions or broadcasts to other nodes. Evolvability and generative open-endedness distinguish UANs from other dynamic networks. The article traces the historical development of such networks from prokaryotes to modern artificial neural networks, emphasizing the recurrence of network structures across species, and proposes that UANs represent a fundamental phenomenon bridging the natural and formal sciences, with explanatory power surpassing existing theories of complex systems.

The text then turns to insights gained from studying artificial gene regulatory networks, showing how network topology determines function and how biological complexity might be reduced to Boolean circuits. It also touches on the evolution of gene networks towards critical topologies for optimal function, drawing parallels to thermodynamics, and the author offers a series of conjectures on the computational universality of UANs and the explainability of their operations. Overall, the article advocates a unified understanding of UANs to advance research in both biological and artificial networks.
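To make the integrate-and-threshold node and the Boolean-circuit reduction concrete, here is a minimal sketch; the three-gene wiring, weights, and thresholds below are invented for illustration and are not taken from the article.

```python
import numpy as np

# A minimal "universal activation network" node: integrate weighted inputs,
# fire (output 1) when the sum reaches a threshold, otherwise stay silent.
def threshold_node(inputs, weights, threshold):
    return 1 if np.dot(inputs, weights) >= threshold else 0

# A toy Boolean-circuit view of a hypothetical 3-gene regulatory network.
# Each gene's next state depends only on the current states of its regulators;
# the topology (who regulates whom) fully determines the dynamics.
def step(state):
    a, b, c = state
    return (
        threshold_node([b, c], [1, -1], 1),  # gene A: activated by B, repressed by C
        threshold_node([a], [1], 1),         # gene B: activated by A
        threshold_node([a, b], [1, 1], 2),   # gene C: needs both A and B
    )

state = (1, 0, 0)
for t in range(6):
    print(t, state)
    state = step(state)
```

Running the toy network from this initial condition settles into a short limit cycle, which is exactly the kind of topology-determined behavior the reduction-to-Boolean-circuits argument relies on.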
Related
Trying Kolmogorov-Arnold Networks in Practice
Recent interest in Kolmogorov-Arnold networks (KANs) stems from claims of improved accuracy and faster training. However, practical testing revealed that even when KANs match neural networks' performance, they require complex implementation and tuning. Even with efforts to optimize KANs, simpler neural networks consistently outperformed them. Alternative activation functions were also explored, leading to the conclusion that neural networks are more effective with less effort. While KANs may excel in niche cases, neural networks remain a stronger default choice, underscoring the value of exploring alternatives for AI advancements.
The moment we stopped understanding AI [AlexNet] [video]
The video discusses high-dimensional embedding spaces in AI models like AlexNet and ChatGPT. It explains AlexNet's convolutional blocks for image analysis and ChatGPT's use of transformers to generate responses, emphasizing the evolution of AI models and the challenges of visualizing their activations.
Training of Physical Neural Networks
Physical Neural Networks (PNNs) leverage physical systems for computation, offering potential in AI. Research explores training larger models for local inference on edge devices. Various training methods are investigated, aiming to revolutionize AI systems by considering hardware physics constraints.
We need new metaphors that put life at the centre of biology Essays
The article discusses the limitations of genetic frameworks in biology post-Human Genome Project. It highlights the significance of non-coding RNA genes in gene regulation, challenging traditional genetic narratives. RNA's role as a 'computational engine' is emphasized.
- Some commenters argue that the concept of universal function approximators is not unique to neural networks and highlight the importance of parameter efficiency.
- Others draw parallels to metaphysical theories and Markov Chains, suggesting broader implications for understanding consciousness and network behavior.
- Critics point out the grandiose prose and lack of originality, with some questioning the scientific validity of the claims.
- There are discussions on the variety of activation functions in neural networks and the importance of topology in network models.
- Several comments emphasize the need for caution, noting that all models have limitations and should be viewed critically.
I.e., everything that exists may be the result of some kind of Uber Network existing outside of space and time.
It’s a wild theory, but the fact that these networks keep popping up, recurring at level upon level wherever agency and intelligence are needed, is crazy.
What would be particularly interesting is a proof that some universal approximators are more parameter-efficient than others. The simplicity of the neural representation suggests that it may be a particularly useful, if inscrutable, approximator.
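As a rough illustration of what parameter efficiency means here, one can simply count the parameters two universal approximators spend on the same mapping. The layer width and the assumption of eight spline coefficients per KAN edge are arbitrary choices made for a concrete comparison, not figures from the thread.

```python
# Rough parameter counts for two universal approximators of a map R^n -> R,
# each with one hidden layer of the same width.
def mlp_params(n_in, hidden):
    # dense weights + biases for the hidden layer, then a scalar output unit
    return n_in * hidden + hidden + hidden + 1

def kan_params(n_in, hidden, coeffs_per_edge=8):
    # every edge carries its own learnable univariate function
    # (assumed here to cost `coeffs_per_edge` spline coefficients)
    edges = n_in * hidden + hidden * 1
    return edges * coeffs_per_edge

n = 10
print("MLP, 64 hidden units:", mlp_params(n, 64))  # 640 + 64 + 64 + 1 = 769
print("KAN, 64 hidden units:", kan_params(n, 64))  # (640 + 64) * 8 = 5632
```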
https://dublog.net/blog/all-the-activations/
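For context, many of the activation functions surveyed in posts like the one linked above are one-liners. A minimal NumPy sketch follows; the particular selection is mine, not necessarily the post's.

```python
import numpy as np

# A handful of common activation functions as plain NumPy one-liners.
relu    = lambda x: np.maximum(0.0, x)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
tanh    = np.tanh
gelu    = lambda x: 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))
swish   = lambda x: x * sigmoid(x)  # also known as SiLU

x = np.linspace(-3, 3, 7)
for name, f in [("relu", relu), ("sigmoid", sigmoid), ("tanh", tanh), ("gelu", gelu), ("swish", swish)]:
    print(f"{name:8s}", np.round(f(x), 3))
```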
The author is extrapolating way too much. The simplest model of X is similar to the simplest model of Y, therefore the common element is deep and insightful, rather than mathematical modelers simply being rationally parsimonious.
The general nn is a discrete implementation of that
Evolvability and generative open-endedness define Universal Activation Networks, setting them apart from other dynamic networks, complex systems, or replicators. Evolvability implies robustness and plasticity in both structure and function, differentiable performance, inheritable replication, and selective mechanisms. They evolve, they learn, they adapt, they get better, and their open-endedness lies in their capacity to form higher-order networks subject to a new level of selection.
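A toy mutate-and-select loop makes "differentiable performance, inheritable replication, and selective mechanisms" concrete; the OR-gate task, population size, and mutation scale below are arbitrary illustrative choices, not anything specified by the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Task: a single threshold unit should reproduce the 2-input OR function.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 1])

def act(w, b, x):
    return (x @ w + b >= 0).astype(int)  # integrate inputs, fire at threshold

def fitness(genome):
    w, b = genome[:2], genome[2]
    return (act(w, b, X) == y).mean()     # fraction correct: a graded measure selection can act on

# Inheritable replication with mutation, plus truncation selection.
population = [rng.normal(size=3) for _ in range(20)]
for generation in range(30):
    scored = sorted(population, key=fitness, reverse=True)
    parents = scored[:5]  # selection
    population = [p + rng.normal(scale=0.1, size=3)  # replication + mutation
                  for p in parents for _ in range(4)]

best = max(population, key=fitness)
print("best fitness:", fitness(best), "genome:", np.round(best, 2))
```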
Anyway, this doesn't even try to make the case that that equation is universal, only that "learning" is a general phenomenon of living systems, which can probably be modeled in many different ways.
"Prokaryotes emerged 3.5 billion years ago, their gene networks acting like rudimentary brains. These networks controlled chemical reactions and cellular processes, laying the foundation for complexity."
... for which there is no evidence at all. Pseudo-science, aka Fantasy.
This article is lacking originality and insight to such a degree that I suspect it is patentable.