At the dawn of the 20th century, Ivan Pavlov was studying the digestive system. This work led him to conditional reflexes – whether, and how, a physiological response such as salivation could be conditioned to occur in response to stimuli associated with food, like its sight or smell. In his most famous experiment, Pavlov trained dogs to associate the arrival of food with the sound of a ringing bell, but his interests extended to conditional reflexes in human beings as well. Pavlov’s work was significant – it had important implications for physiology, biology, psychology, philosophy, and many other fields.
In the first half of the 20th century, Edward Tolman published Purposive Behavior in Animals and Men, whose thesis was that, beyond the simple “behavioral unit” of the conditioned reflex, animals create complex systems of memory, behavior, cognitive processes, and actions to achieve goals. This is manifest in, e.g., a rat running a maze to obtain a cheese reward at its end.
Pavlov’s experiments showed that pathways in the memory of animals can be synthesized. Tolman showed that these pathways can become complex in scope when they are made goal-oriented. This concept of animal memory could be applied and implemented for no-power storage of information – organic computer memory. Through conditional reflex, simple memory “building blocks” can be created, and these building blocks can be assembled to synthesize more complex memory systems. Much as a long Shakespearean soliloquy can be broken down into a set of simple hooks, and those hooks linked through mnemonic association, so too could a complex set of actions be ingrained in the memory of an animal – providing a means for external storage of information.
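One way to picture this chaining of conditioned-reflex “building blocks” is as a sketch in code – purely illustrative, with every class and function name invented here. Each unit models one learned stimulus→response association, and units are chained so that each response becomes the cue for the next, the way mnemonic hooks link the parts of a soliloquy:

```python
class ReflexUnit:
    """One conditioned-reflex 'building block': after enough pairings,
    the stimulus alone evokes the response."""

    def __init__(self, stimulus, response, threshold=3):
        self.stimulus = stimulus
        self.response = response
        self.threshold = threshold  # pairings needed before conditioning holds
        self.pairings = 0

    def condition(self):
        """Pair the stimulus with the unconditioned trigger once."""
        self.pairings += 1

    def evoke(self, stimulus):
        """Return the response if conditioning has taken hold, else None."""
        if stimulus == self.stimulus and self.pairings >= self.threshold:
            return self.response
        return None


def recall_chain(units, cue, max_steps=10):
    """Chain units: each evoked response becomes the cue for the next,
    reconstructing a stored sequence from a single starting hook."""
    sequence = []
    for _ in range(max_steps):  # cap guards against accidental cycles
        hit = None
        for u in units:
            hit = u.evoke(cue)
            if hit is not None:
                break
        if hit is None:
            break
        sequence.append(hit)
        cue = hit
    return sequence


# Store the sequence bell -> food -> salivate as two chained units.
units = [ReflexUnit("bell", "food"), ReflexUnit("food", "salivate")]
for u in units:
    for _ in range(3):
        u.condition()

print(recall_chain(units, "bell"))  # ['food', 'salivate']
```

The point of the sketch is that no single unit “knows” the whole sequence; the stored information lives in the assembly of simple associations.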
Just as computers are built on unit “building blocks” – logical operations on zeroes and ones – which are assembled, at increasing levels of complexity, into high-level languages and systems such as compilers or template programming languages, so too could an organic memory system be devised in which a set of unit “building blocks” – conditional reflex responses in an animal – is used to assemble more elaborate systems of memory and information storage.
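The layered-abstraction side of this analogy can be made concrete with a standard digital-logic example (not specific to this essay): a single primitive gate, NAND, suffices to build every other gate, and those gates in turn compose into an arithmetic unit:

```python
def nand(a, b):
    """Universal primitive: every gate below is built from it alone."""
    return 0 if (a and b) else 1

def not_(a):
    return nand(a, a)

def and_(a, b):
    return not_(nand(a, b))

def or_(a, b):
    return nand(not_(a), not_(b))

def xor(a, b):
    return and_(or_(a, b), nand(a, b))

def half_adder(a, b):
    """One level up: composed gates form an arithmetic unit.
    Returns (sum, carry)."""
    return xor(a, b), and_(a, b)

print(half_adder(1, 1))  # (0, 1): one plus one is binary 10
```

The organic proposal swaps the primitive: where silicon starts from a gate, the organic system would start from a conditioned reflex, and build upward in the same layered fashion.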
Networks of mazes, multiple animals, or varieties of animals could all be used to construct increasingly elaborate systems. As the animals become smaller, their range of operational building blocks becomes narrower and simpler, but the number of blocks that can be used becomes massive. (Once the creatures are so small that they have no mnemonic capacity, or no brains at all, the building blocks would become biochemical in nature, with reflexive responses triggered by changes in biochemical conditions – e.g., pH, temperature, protein counts, ion availability, electrochemical gradients – so-called “biochemical computing.”)
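The biochemical “building block” in the parenthetical can be pictured as a threshold switch – a purely illustrative sketch, with the thresholds and names invented here – that flips when a chemical condition crosses a set point, much as a pH indicator changes color:

```python
def biochemical_switch(ph, threshold=7.0):
    """Reflexive 0/1 response driven by a chemical condition:
    'on' (1) when the environment is acidic relative to the threshold."""
    return 1 if ph < threshold else 0

# Two switches with different set points give a crude two-bit
# readout of where the ambient pH falls.
readout = [biochemical_switch(6.5, t) for t in (6.0, 7.0)]
print(readout)  # [0, 1]: pH 6.5 is above 6.0 but below 7.0
```

Banks of such switches, each keyed to a different condition or set point, would play the role that conditioned reflexes play in the brained version of the scheme.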
Clearly, animal brains are very dissimilar to computer chips, so a classical binary logic system, or any system whose unit “building blocks” are too simple, would be inefficient and difficult to implement in practice. But animal brains, like human brains and unlike computers, are very good at certain tasks, such as visual recognition, spatial judgement, and reflexive response to sensory stimulation. In other words, the most efficient unit operations would themselves be complex. These organic memory systems would therefore be suited to different applications than those at which conventional computer memory excels.
There’s no telling what kinds of applications organic memory could have. But the central concept, common to many no-power computing ideas, is to use natural systems that have natural unit “building blocks,” and to treat those as the conceptual bricks with which to build elaborate structures.