14. Neurons And The Mind


Before we can talk about the potential origin of God, we must first consider the concepts of thought and intelligence.
   The human brain is the most remarkable intelligence-producing organ we know of on Earth. It contains tens of billions of specialized cells called “neurons” that use electrical impulses and chemical signals to transmit information. A neuron resembles a tree, with an “axon” for a trunk and branch-like “dendrites.” Neurons communicate by sending chemicals across “synapses,” the junctions between axon terminals and the dendrites of other neurons.
   One neuron is capable of connecting to thousands of other neurons, with the result that the human brain forms a vast communications network made up of trillions of connections.
   A memory isn’t stored in a single neuron; if it were, the brain’s capacity would be limited by the number of neurons. Instead, a memory is encoded in a pattern of connections between neurons. Since an almost unlimited number of these patterns can be formed, our memory capacity is unknown. Certainly it seems to far exceed what we need to remember within a human lifespan.
   The workings of a neuron might be somewhat complex, but its overall function can be described in fairly simple terms. A neuron receives electrical and chemical signals from other neurons. If the combined strength of these input signals exceeds a certain threshold, the neuron fires off an output signal, which in turn can become the input for other neurons. Learning is a result of the strengthening of connections between groups of neurons.
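   This threshold behavior is simple enough to sketch in a few lines of code. The following Python fragment is only an illustration; the inputs, weights, and threshold are made-up stand-ins for signal strengths and connection strengths, not a biological model:

```python
# A toy threshold neuron: fire only when the combined weighted input
# from other neurons exceeds a set threshold.

def neuron_fires(inputs, weights, threshold):
    """Return True if the weighted sum of input signals exceeds the threshold."""
    total = sum(signal * weight for signal, weight in zip(inputs, weights))
    return total > threshold

# Three incoming signals; the second connection is the strongest.
print(neuron_fires([1.0, 1.0, 0.0], [0.2, 0.7, 0.4], threshold=1.0))  # False
print(neuron_fires([1.0, 1.0, 1.0], [0.2, 0.7, 0.4], threshold=1.0))  # True
```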
   Thoughts are the result of neurons working together. I think intelligence, and particularly human intelligence, springs from the ability to develop thoughts, and to have thoughts about those thoughts. Whatever the case, out of this vast sea of neurons, along with the flow of electrical and chemical activity across the synapses, human intelligence somehow emerges.
   We can also create simpler, artificial versions of neurons with computers. In a computer, each artificial neuron receives a set of inputs and sends an output if the combined value of the inputs exceeds a certain level. Feedback can be added, so that the connection strengths adjust and the system learns and adapts. A system like this is called a “neural network,” and training large, many-layered networks is known as “deep learning”; together they are often the basis for artificial intelligence.
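   As a rough sketch of this kind of learning, here is a classic “perceptron”: a single artificial neuron that strengthens or weakens its connections using feedback on its own errors. The task (learning the logical AND function) and the learning rate are arbitrary choices for illustration:

```python
# A single artificial neuron that learns from feedback (a perceptron).
# Illustrative only; the task, learning rate, and epoch count are arbitrary.

def train_perceptron(samples, labels, learning_rate=0.1, epochs=20):
    weights = [0.0] * len(samples[0])
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in zip(samples, labels):
            total = sum(w * x for w, x in zip(weights, inputs)) + bias
            output = 1 if total > 0 else 0
            error = target - output  # feedback: how wrong was the output?
            # Strengthen or weaken each connection in proportion to the error.
            weights = [w + learning_rate * error * x for w, x in zip(weights, inputs)]
            bias += learning_rate * error
    return weights, bias

# Teach the neuron the logical AND function from four examples.
samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 0, 0, 1]
weights, bias = train_perceptron(samples, labels)
for inputs in samples:
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    print(inputs, "->", 1 if total > 0 else 0)
```

   After a few passes over the examples, the connection strengths settle so that the neuron outputs 1 only when both inputs are 1.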
   Computers can process information incredibly fast. Brains are much slower at doing this, but human brains can do something that neural networks struggle with. They can think in abstract and conceptual ways.
   For example, after a lot of training, a neural network may be able to recognize a chair as a flat object with four legs underneath. Show it yet another picture of a flat object with four legs, and it will probably identify it as a chair, because it is good at pattern recognition.
   However, children can do something even better. If you show a child an object that doesn’t fit the typical chair shape, the child may still identify it as a chair, because humans have the ability to think conceptually.
   Once we understand the concept that chairs are for sitting on, and therefore don’t have to be flat with four legs, we can see chairs everywhere. We can see a bean bag as a chair, while a neural network might not.
   This makes us far more creative than computer neural networks, and is also perhaps a major source of new thoughts and our imagination. This is how children can think of stacks of cardboard boxes as forts and dens, rather than packaging material. Human brains can think outside the box, and sometimes inside the box, depending on who we’re hiding from. The previous sentence is also an example of how we can apply words and concepts in literal as well as abstract ways.
   We have a capacity for deep learning, like neural networks, but also for deep understanding, because we can reflect on our own thoughts. We can also use new information to change our thoughts and knowledge. In other words, humans are good not only at thinking but also at understanding, at least at a conceptual level. This is why children can quickly grasp concepts that computers struggle with.
   As another example, consider the game of chess. Some of the most advanced computers can beat chess grandmasters, because they are able to make vastly more calculations in the same amount of time. However, humans may still compete to a certain extent, because they are able to think about their goals and objectives in an abstract manner, even if they haven’t calculated every step along the way. Furthermore, change the rules of chess, and a computer may be lost, while humans could still play well because the more abstract principles of winning may still apply.
   There are other things that could potentially act like neurons. In electronics, a “memristor” is an electrical component that can remember the amount of charge that has flowed through it. Memristors behave somewhat like synapses in this regard, and other materials have been found to act like memristors, including some polymers, which are large molecules made up of many repeated units.
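   To make the idea concrete, here is a toy model of that memory effect. The numbers and the simple “drift” rule are invented for illustration; real memristor physics is more involved:

```python
# A toy memristor: its resistance depends on the net charge that has
# flowed through it, so the device "remembers" its own history.
# The constants and the linear drift rule are illustrative, not measured.

class ToyMemristor:
    def __init__(self, r_on=100.0, r_off=16_000.0):
        self.r_on, self.r_off = r_on, r_off
        self.state = 0.0  # 0.0 = fully "off", 1.0 = fully "on"

    def pass_current(self, current, dt):
        # Charge in one direction pushes the state toward "on";
        # reversing the current drifts it back toward "off".
        self.state = min(1.0, max(0.0, self.state + 0.05 * current * dt))

    @property
    def resistance(self):
        return self.r_off + (self.r_on - self.r_off) * self.state

m = ToyMemristor()
print(f"start: {m.resistance:.0f} ohms")
for _ in range(10):
    m.pass_current(current=1.0, dt=1.0)   # forward charge lowers resistance
print(f"after forward charge: {m.resistance:.0f} ohms")
for _ in range(5):
    m.pass_current(current=-1.0, dt=1.0)  # reverse charge raises it again
print(f"after reverse charge: {m.resistance:.0f} ohms")
```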
   Researchers have also discovered that networks of “atomic switches” made of silver and copper possess properties similar to the synapses between neurons. When a voltage is applied, filaments of silver grow at the atomic level, creating a switch that allows current to flow, and this enables the network to form memories. When current flows in the other direction, the silver bridges shrink and the switch turns off.
   This network of atomic switches exhibits properties that have also been detected in the human brain. For example, in the brain, groups of neurons trigger others in a cascade of activity. If there is too much activity, the brain overloads; if there is too little, the signals flicker out. The delicate balance between these two states is called “criticality.” Just like the human brain, the network of atomic switches can find this balance by itself, as well as operate like a neural network.
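   A toy simulation can give a feel for this balance. In the sketch below (my own illustration, not the researchers’ model), each active unit triggers, on average, a fixed number of successors in the next step. An average below one makes the activity flicker out, an average above one makes it overload, and an average of exactly one sits at criticality:

```python
import random

# A toy cascade: each active unit can excite each of two downstream units,
# so the average number of successors per unit equals `branching`.
# An illustration of criticality, not a model of the silver networks.

def cascade_size(branching, start=10, max_steps=50, cap=100_000):
    active, total = start, start
    for _ in range(max_steps):
        if active == 0 or total > cap:  # died out, or "overloaded"
            break
        active = sum(
            1
            for _ in range(active)
            for _ in range(2)
            if random.random() < branching / 2
        )
        total += active
    return total

random.seed(1)
for branching in (0.5, 1.0, 1.5):  # below, at, and above the critical point
    sizes = [cascade_size(branching) for _ in range(20)]
    print(f"branching {branching}: average cascade size {sum(sizes) / len(sizes):.0f}")
```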
   In other words, while researchers provided the raw materials, such as silver and a voltage, neuron-like behavior arises by itself out of those materials, and on a larger scale, so do properties such as criticality. The ability of a network of atomic switches to act like a biological neural network is a product neither of creation nor of evolution, but of emergence.
   The term “emergence” refers to how new and often unexpected behaviors can arise when a number of simpler entities operate in an environment, forming more complex behaviors as a group.
   John Conway’s “Game of Life” is a good example of how emergence works. The game plays out automatically on a grid of cells, following very simple rules: each cell “lives” or “dies” in the next round depending on how many of its neighboring cells are alive. Over time, the game produces interesting shapes and patterns that look like they could be alive.
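   The rules themselves fit in a sentence: a live cell survives if two or three of its eight neighbors are alive, and a dead cell comes alive if exactly three of its neighbors are. Here is a minimal Python sketch of the game, seeded with a famous pattern called a “glider” that travels across the grid on its own:

```python
from collections import Counter

# Conway's Game of Life, with live cells stored as (x, y) coordinates.

def step(live_cells):
    """Advance the game by one generation."""
    # Count how many live neighbors every candidate cell has.
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for x, y in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Survive with 2 or 3 live neighbors; be born with exactly 3.
    return {
        cell
        for cell, count in neighbor_counts.items()
        if count == 3 or (count == 2 and cell in live_cells)
    }

# A "glider": a five-cell pattern that crawls across the grid by itself.
cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    print(sorted(cells))
    cells = step(cells)
```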
   The shapes aren’t directly created by the game designer, and neither do they evolve, at least in the Darwinian sense of the word. Instead, they emerge from the underlying rules and initial setup of the game. I suppose it could be argued that they are a product of design, since the rules of the game were designed in advance, but it’s still more accurate to say the shapes and patterns themselves emerge, because it’s hard to predict from the initial setup exactly which shapes will form.
   The concept of emergence is important, because it shows how complexity can arise out of a large number of simple parts. This will play an important role in “The Neuroverse Hypothesis” I will now introduce, on the potential origin of God’s mind.

 
