
What Deep Learning can Learn from Cybernetics

Unfortunately, most DL researchers focus their attention on the artificial neuron and not the heavenly glory of Cybernetics. Allow me to further explain the deep wisdom found in Cybernetics.

Alan Turing had in fact explored Connectionist thinking, but his papers on the subject were not published until 14 years after his untimely death in 1954. Norbert Wiener, who had collaborated with Turing, passed away a decade later (1964). So Turing's thoughts did not see the light of day until 1968, just as symbolist thinking was beginning to emerge.

If you are coming from the classical perspective of AI (both GOFAI and DL), the left side of the diagram above serves as your mental model for AI. This is the pervasive model of AI, and it has been indoctrinated into its disciples for decades. It doesn't matter whether you are a Symbolist or a Connectionist; the mental model has been cleansed of any remnants of the ancient teachings of Cybernetics.

I’ve seen the above diagram several times, but it took me a while of study to truly appreciate what it meant. Allow me, therefore, to decipher its meaning from the perspective of current Deep Learning research.

Let’s begin at the top of the diagram.

Cognitive systems are autonomous. To understand this, we have to realize what distinguishes biological life from inanimate objects. Biological life is autonomous; organisms exhibit their own intentional behavior. That is, they are all cognitive systems whose autonomous behavior evolved toward surviving within their adapted environments. I explore this in more detail in "nano-intentionality".

Organisms map through an environment back into themselves. To understand this, we have to begin with the viewpoint that all cognition originates from embodied learning. An organism learns by interacting with its environment. There is, however, a relationship between environment and organism that involves both memory and representation. An organism has bounded rationality; as a consequence, it employs its environment as a way to offload cognitive load. An organism does not remember or represent everything; it leaves much of that to the environment. What it learns are affordances, the only information that is useful, and it uses that information in conjunction with what it observes in the environment to predict its next action.
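As a loose illustration of this offloading (a toy sketch of my own, not from any source discussed here; all names are invented), consider an agent that keeps no internal map at all: it marks the cells it visits in the environment itself, and reads those marks back as affordances when choosing its next move.

```python
def next_move(pos, marks, width):
    """Affordance-driven choice: an unmarked neighbour 'affords' moving into it."""
    for step in (1, -1):
        candidate = (pos + step) % width
        if candidate not in marks:
            return candidate
    return (pos + 1) % width  # everything is marked: just drift onward

def explore(width=5, steps=10):
    marks = set()  # stored in the *environment*, not inside the agent
    pos = 0
    for _ in range(steps):
        marks.add(pos)                    # offload memory: leave a mark in the world
        pos = next_move(pos, marks, width)
    return marks

visited = explore()  # the agent covers the whole world without holding a map
```

The agent's only "representation" is the rule that unmarked cells afford entry; everything else it needs is read back from the world at decision time.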

Nervous systems reproduce adaptive relationships. As a starting point, we established that all biological life is autonomous and that autonomy leads to its own adaptability. That is, even the simplest single-cell organisms have built-in autonomy and adaptiveness. In the diagram above, at the intersection between memory and reality, the same adaptiveness that is pervasive in biological life is simulated in biological brains.
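The cybernetic core of this adaptiveness is the negative feedback loop. Here is a minimal sketch, with a made-up regulated variable, set point, and gain chosen purely for illustration:

```python
def feedback_loop(set_point=37.0, gain=0.5, steps=50):
    """Homeostasis in miniature: act to cancel the observed deviation."""
    state = 30.0      # current value of the regulated variable
    history = []
    for _ in range(steps):
        error = set_point - state   # observe the deviation from the goal
        action = gain * error       # corrective action proportional to error
        state += action             # the system responds to the action
        history.append(state)
    return history

trace = feedback_loop()  # the state converges toward the set point
```

Nothing here stores a model of the world; regulation emerges from the loop between observation and action, which is the relationship Cybernetics puts at the center.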

Social agreement is primary objectivity. The intersection of knowledge and reality can be understood within the framework of Semiotics. The gist of the argument is that knowledge is captured by icons, indexes, and symbols, and that our cognitive development needs to be grounded by icons. Indexes are learned affordances. Symbols arise from words, whose meanings originate from their use. I've explored this in more detail in Deep Learning and Semiotics.

Intelligence resides in observed conversations. The most advanced form of intelligence is one that gains knowledge through conversations. The gist of this is that our complete human intelligence arises from our ability to manage conversations within a social environment. More broadly, the concept of a conversation can represent the dynamic interplay of interactions between organisms. The ability to track these interactions and arrive at predictions is the highest form of general intelligence. I've explored this more extensively in Conversational Cognition.

Why then is this Cybernetic perspective better as compared to the conventional AI perspective depicted on the right? The primary difference is that AI seems to ignore the holistic nature of organisms and ecosystems. Everything in AI is framed from a mechanistic and objective point of view where there are absolutes, information manipulation, information storage, formal ontologies, and strict boundaries. The thinking is that intelligence can be independent of the environment or context. These are all artifacts of GOFAI thinking, but unfortunately they have infected connectionist thinking as well.

Second-order cybernetics, which introduces the observer into its discourse, provides a richer foundation for understanding learning than the disembodied and context-free viewpoint of classical AI. In fact, this second-order notion maps cleanly onto ideas found in meta-learning. Deep Learning advances reveal a viewpoint that is more compatible with what's found in Cybernetics. This should not be a surprise; after all, Cybernetics is inspired by biology, and both are the inspirations for the artificial neuron.
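To make the mapping to meta-learning concrete, here is a deliberately tiny sketch: an inner learner runs gradient descent, while an outer "observer" loop adjusts the inner learner's step size based on the outcome it observes. The setup (a quadratic loss and a handful of candidate step sizes) is my own illustrative assumption, not a real meta-learning algorithm.

```python
def inner_learn(lr, target=3.0, steps=20):
    """First-order learner: gradient descent on the loss (w - target)^2."""
    w = 0.0
    for _ in range(steps):
        grad = 2.0 * (w - target)   # d/dw of (w - target)^2
        w -= lr * grad
    return (w - target) ** 2        # final loss after inner training

def outer_adapt(candidate_lrs):
    """Second-order 'observer': watches each learning process and keeps
    the step size that produced the best observed outcome."""
    return min(candidate_lrs, key=inner_learn)

best_lr = outer_adapt([0.001, 0.01, 0.1, 0.5])
```

The outer loop never touches the weights; it learns about the *learning process* itself, which is the observer-of-the-observer move that second-order cybernetics makes.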

Haidt’s theory of moral intuitions argues that our sense of morality is intuitive and natural, which explains the difficulty of persuading others through rational argument without appealing to their personal intuitions.

The GOFAI intuition has its source in the analytic traditions of engineering and mathematics. However, complex systems like biology and the mind are known to be not engineered (or designed) but grown. So there is indeed a cognitive dissonance in applying a maximalist engineering mindset to biological-scale complexity. This also explains why the Cybernetic viewpoint seems to employ language that is so alien to many in the hard sciences, which is unfortunate considering that Norbert Wiener was himself a mathematician.

Despite Cybernetics’ demise as a narrative for AI, it has influenced other fields of study that involve complex systems and culture.

Deep Learning will make accelerated progress when ideas from adjacent fields such as evolutionary biology, non-linear dynamics, and complexity theory are incorporated into its research vocabulary. It is indeed curious that Norbert Wiener’s Cybernetics covers a rich variety of topics: groups, statistical mechanics, communication, feedback, oscillation, gestalt, information, language, learning, self-replication, and self-organization. It is perhaps required reading for any present-day Deep Learning researcher.

Norbert Wiener had such a deep understanding of the interplay of cognitive machines and humans that he wrote a follow-up book exploring it in greater detail:

In “The Human Use of Human Beings”, Wiener explores the same social issues that we are only now beginning to collectively take seriously. Wiener warned (68 years ago) that there is danger in entrusting decisions to automation, which is unlikely to identify with human values that are not purely utilitarian. We are unfortunately just beginning to realize the harmful effects of misaligning automation (i.e., governments, corporations, the internet, and AI) with human values. Cybernetics has always emphasized the interaction of humans and machines, and thus Deep Learning practitioners can discover in it ideas that reach beyond the conventional technical horizon.
