What AI researchers can learn from the self-assembling brain




The history of artificial intelligence is filled with theories and attempts to study and replicate the workings and structure of the brain. Symbolic AI systems tried to copy the brain’s behavior through rule-based modules. Deep neural networks are designed after the neural activation patterns and wiring of the brain.

But one idea that hasn’t gotten enough attention from the AI community is how the brain creates itself, argues Peter Robin Hiesinger, professor of neurobiology at the Free University of Berlin (Freie Universität Berlin).

In his book The Self-Assembling Brain, Hiesinger suggests that instead of looking at the brain from an endpoint perspective, we should study how information encoded in the genome is transformed to become the brain as we grow. This line of study might yield new ideas and research directions for the AI community.

The Self-Assembling Brain is organized as a series of seminar presentations interspersed with discussions between a robotics engineer, a neuroscientist, a geneticist, and an AI researcher. The thought-provoking conversations help illuminate each field's views and blind spots on topics related to the mind, the brain, intelligence, and AI.

Biological brain vs artificial neural networks

Many of the mind's secrets remain locked away. But what we do know is that the genome, the program that builds the human body, does not contain detailed information about how the brain will be wired. The initial state does not provide the information needed to directly compute the end result. That result can only be obtained by computing the function step by step, running the program from start to end.

As the genome's program runs, the brain develops new states, and those new states form the basis of the next stages of development.

As Hiesinger describes the process in The Self-Assembling Brain, “At each step, bits of the genome are activated to produce gene products that themselves change what parts of the genome will be activated next — a continuous feedback process between the genome and its products. A specific step may not have been possible before and may not be possible ever again. As growth continues, step by step, new states of organization are reached.”

Therefore, our genome contains the information required to create our brain. That information, however, is not a blueprint that describes the brain, but an algorithm that develops it with time and energy. In the biological brain, growth, organization, and learning happen in tandem. At each new stage of development, our brain gains new learning capabilities (common sense, logic, language, problem-solving, planning, math). And as we grow older, our capacity to learn changes.
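
To make the distinction between a blueprint and an algorithm concrete, here is a deliberately toy sketch (my own illustration, not a model from the book or of real gene regulation): the "genome" below is just three fixed rules, yet which rule fires at each step depends on the state that earlier steps have produced, so the structure that grows out of it contains far more detail than the rules spell out directly.

```python
# Toy illustration of growth as a feedback loop between a "genome" and its
# products: the rules are fixed, but which rule is activated at each step
# depends on the state produced by the previous steps.
# (Hypothetical names; for intuition only, not a model of real biology.)

genome = {
    "start":  lambda state: state + ["progenitor"],
    "divide": lambda state: state + ["neuron"] * len(state),
    "wire":   lambda state: state + [f"synapse_{i}" for i in range(len(state))],
}

def active_gene(state):
    # The current state of the organism decides which rule runs next.
    if not state:
        return "start"
    if state.count("neuron") < 8:
        return "divide"
    return "wire"

state = []
for step in range(6):
    state = genome[active_gene(state)](state)

print(len(state))  # far more structure than the three rules describe directly
```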


Self-assembly is one of the key differences between biological brains and artificial neural networks, the currently popular approach to AI.

“ANNs are closer to an artificial brain than any approach previously taken in AI. However, self-organization has not been a major topic for much of the history of ANN research,” Hiesinger writes.

Before learning anything, ANNs start with a fixed structure and a predefined number of layers and parameters. In the beginning, the parameters contain no information and are initialized to random values. During training, the neural network gradually tunes the values of its parameters as it reviews numerous examples. Training stops when the network reaches acceptable accuracy in mapping input data into its proper output.
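
As a minimal sketch of that paradigm (my own toy example, not code from the book), the network below has a fixed, hand-picked structure, starts from random weights, and acquires all of its information by repeatedly nudging those weights against training examples:

```python
# Minimal fixed-architecture neural network trained by gradient descent.
# The structure (2 -> 4 -> 1) is chosen in advance; the parameters begin as
# random noise and carry no information until training adjusts them.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.5, size=(2, 4))   # random initial weights
b1 = np.zeros(4)
W2 = rng.normal(scale=0.5, size=(4, 1))
b2 = np.zeros(1)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # toy XOR task
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):                  # learning = gradual weight adjustment
    h = np.tanh(X @ W1 + b1)              # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)   # backward pass (squared-error loss)
    d_h = (d_out @ W2.T) * (1 - h ** 2)
    W2 -= lr * h.T @ d_out / len(X)
    b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * X.T @ d_h / len(X)
    b1 -= lr * d_h.mean(axis=0)

print(out.round(2))  # should approach [[0], [1], [1], [0]] after training
```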

In biological terms, the ANN development process is the equivalent of letting a brain grow to its full adult size and then switching it on and trying to teach it to do things.

“Biological brains do not start out in life as networks with random synapses and no information content. Biological brains grow,” Hiesinger writes. “A spider does not learn how to weave a web; the information is encoded in its neural network through development and prior to environmental input.”

In reality, while deep neural networks are often compared to their biological counterparts, their fundamental differences put them on two totally different levels.

“Today, I dare say, it appears as unclear as ever how comparable these two really are,” Hiesinger writes. “On the one side, a combination of genetically encoded growth and learning from new input as it develops; on the other, no growth, but learning through readjusting a previously random network.”

Why self-assembly is largely ignored in AI research


“As a neurobiologist who has spent his life in research trying to understand how the genes can encode a brain, the absence of the growth and self-organization ideas in mainstream ANNs was indeed my motivation to reach out to the AI and Alife communities,” Hiesinger told TechTalks.

Artificial life (Alife) scientists have been exploring genome-based developmental processes in recent years, though progress in the field has been largely eclipsed by the success of deep learning. In these approaches, a neural network goes through a developmental process that iteratively builds its architecture and adjusts its weights. Because that process is more complex than traditional deep learning training, the computational requirements are also much higher.
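
A toy neuroevolution loop gives the flavor of such approaches (this is my simplified illustration, not any particular Alife system): each "genome" below encodes a small network, and mutation plus selection, rather than gradient descent, shape its weights and occasionally grow its architecture.

```python
# Toy neuroevolution sketch (illustrative only): genomes encode a tiny
# network's weights; selection and mutation replace gradient descent, and a
# mutation can occasionally grow the hidden layer.
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)           # toy XOR fitness task

def random_genome(hidden=3):
    return {"W1": rng.normal(size=(2, hidden)), "W2": rng.normal(size=hidden)}

def fitness(g):
    h = np.tanh(X @ g["W1"])
    out = np.tanh(h @ g["W2"])
    return -np.mean((out - y) ** 2)               # higher is better

def mutate(g):
    child = {k: v + rng.normal(scale=0.3, size=v.shape) for k, v in g.items()}
    if rng.random() < 0.1:                        # rare structural mutation: add a hidden unit
        child["W1"] = np.hstack([child["W1"], rng.normal(size=(2, 1))])
        child["W2"] = np.append(child["W2"], rng.normal())
    return child

population = [random_genome() for _ in range(30)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                     # keep the fittest genomes
    children = [mutate(parents[rng.integers(len(parents))]) for _ in range(20)]
    population = parents + children

print(fitness(population[0]))                     # should approach 0.0
```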

“This kind of effort needs some justification — basically a demonstration of what true evolutionary programming of an ANN can produce that current deep learning cannot. Such a demonstration does not yet exist,” Hiesinger said. “It is shown in principle that evolutionary programming works and has interesting features (e.g., in adaptability), but the money and focus go to the approaches that make the headlines (think MuZero and AlphaFold).”

In a fashion, what Hiesinger says is reminiscent of the state of deep learning before the 2000s. At the time, deep neural networks were theoretically proven to work. But limits in the availability of computational power and data prevented them from reaching mainstream adoption until decades later.

“Maybe in a few years new computers (quantum computers?) will suddenly break a glass ceiling here. We do not know,” Hiesinger said.

Searching for shortcuts to AI

Peter Robin Hiesinger, professor of neurobiology at the Free University of Berlin (Freie Universität Berlin) and author of The Self-Assembling Brain.

Another reason the AI community pays little attention to self-assembly is disagreement over which aspects of biology are relevant to replicating intelligence. Scientists always try to find the lowest level of detail that provides a fair explanation of their subject of study.

In the AI community, scientists and researchers are constantly trying to take shortcuts and avoid implementing unnecessary biological details when creating AI systems. We do not need to imitate nature in all its messiness, the thinking goes. Therefore, instead of trying to create an AI system that creates itself through genetic development, scientists try to build models that approximate the behavior of the final product of the brain.

“Some leading AI researchers go as far as saying that the 1GB of genome information is obviously way too little anyway, so it has to be all learning,” Hiesinger said. “This is not a good argument, since we of course know that 1GB of genomic information can produce much, much more information through a growth process.”

There are already several experiments showing that with a small body of data, an algorithm, and enough execution cycles, we can create extremely complex systems. A telling example is the Game of Life, a cellular automaton created by British mathematician John Conway. The Game of Life is a grid of cells whose states shift between “dead” and “alive” based on a few simple rules: any live cell with two or three live neighbors stays alive in the next step, any dead cell with exactly three live neighbors comes to life, and all other live cells die.
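
Those rules fit in a few lines of code. The sketch below is a standard NumPy formulation of Conway's update rule (my example, not code from the book), stepping a small grid seeded with a “glider,” one of the many structures that emerge from the rules:

```python
# One generation of Conway's Game of Life on a wrapping (toroidal) grid.
import numpy as np

def life_step(grid):
    """Advance a 2-D array of 0s (dead) and 1s (alive) by one generation."""
    # Count live neighbors by summing the eight shifted copies of the grid.
    neighbors = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # Survival: a live cell with 2 or 3 neighbors stays alive.
    # Birth: a dead cell with exactly 3 neighbors comes to life.
    return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(int)

# A "glider" pattern that travels across the grid as the rule is applied.
grid = np.zeros((10, 10), dtype=int)
grid[1, 2] = grid[2, 3] = grid[3, 1] = grid[3, 2] = grid[3, 3] = 1
for _ in range(4):
    grid = life_step(grid)
```

The parallel to Hiesinger's point about the genome is that the rule itself is tiny, yet the structures it can generate over many steps are far richer than anything written into the rule directly.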

The Game of Life and other cellular automata such as Rule 110 are Turing complete, which means they are capable of universal computation.

“All kinds of random stuff happening around us could — in theory — all be part of a deterministic program looked at from within, because we can’t look at the universe from the outside,” Hiesinger said. Although this is a very philosophical argument that cannot be proven one way or the other, Hiesinger says, experiments like Rule 110 show that a system based on a super-simple genome can, given enough time, produce infinite complexity and may look as complicated from the inside as the universe we see around us.

Likewise, the brain starts with a very basic structure and gradually develops into a complex entity that surpasses the information capacity of its initial state. Therefore, dismissing the study of genetic development as irrelevant to intelligence can be an erroneous conclusion, Hiesinger argues.

“There is a bit of an unfortunate lack of appreciation for both information theory and biology in the case of some AI researchers that are (understandably) dazzled by the successes of their pure learning-based approaches,” Hiesinger said. “And I would add: the biologists are not helping, since they also are largely ignoring the information theory question and instead are trying to find single genes and molecules that wire brains.”

New ways to think about artificial general intelligence


In The Self-Assembling Brain, Hiesinger argues that when it comes to replicating the human brain, you can’t take shortcuts and you must run the self-assembling algorithm in its finest detail.

But is such an undertaking necessary?

In their current form, artificial neural networks suffer from serious weaknesses, including their need for numerous training examples and their sensitivity to changes in their environment. They don’t have the biological brain’s capacity to generalize skills across many tasks and to unseen scenarios. But despite these shortcomings, artificial neural networks have proven extremely effective at specific tasks where training data is available in sufficient quantity and represents the distribution the model will encounter in the real world. In some applications, neural networks even surpass humans in speed and accuracy.

So, do we want to grow robot brains, or should we rather stick to shortcuts that give us narrow AI systems that can perform specific tasks at a super-human level?

Hiesinger believes that narrow AI applications will continue to thrive and become an integral part of our daily lives. “For narrow AIs, the success story is absolutely obvious and the sky is the limit, if that,” he said.

Artificial general intelligence, however, is a bit more complicated. “I do not know why we would want to replicate humans in silico. But this may be a little like asking why we want to fly to the moon (it is not a very interesting place, really),” Hiesinger said.

But while the AI community continues to chase the dream of replicating human brains, it needs to adjust its perspective on artificial general intelligence.

“There is no agreement on what ‘general’ is supposed to really mean. Behave like a human? How about butterfly intelligence (all genetically encoded!)?” Hiesinger said, pointing out that every lifeform, in its own right, has a general intelligence that is suited to its own survival.

“Here is where I see the problem: ‘human-level intelligence’ is actually a bit non-sensical. ‘Human intelligence’ is clear: that’s ours. Humans have a very human-specific type of intelligence,” he said.

And that type of intelligence cannot be measured by the level of performance at one or multiple tasks such as playing chess or classifying images. Instead, the breadth of areas in which humans can operate, decide, and solve problems is what makes them intelligent in their own unique way. As soon as you start to measure and compare levels of intelligence on specific tasks, you take away the human aspect of it, Hiesinger believes.

“In my view, artificial general intelligence is not a problem of ever-higher ‘levels’ of current narrow approaches to reach a human ‘level.’ There really is no such thing. If you want to really make it human, then it is not about making current level-oriented task-specific AIs faster and better, but it is about getting the type of information into the network that makes human brains human,” he said. “And that, as far as I can see, has currently only one known solution and path — the biological one we know, with no shortcuts.”

This story originally appeared on Bdtechtalks.com. Copyright 2021
