A Soliton and its owned Bions (Awareness and Mind) – part 1

These Intelligent Particles are how we Survive Death


12th edition

Copyright Table

2017 (12th ed.) 2006 (11th ed.) 2005 (10th ed.)
2004 (9th ed.) 2003 (8th ed.) 2002 (7th ed.)
2001 (6th ed.) 1999 (5th ed.) 1998 (4th ed.)
1996 (3rd ed.) 1994 (2nd ed.) 1993 (1st ed.)

by Kurt Johmann

Note: For the first eleven editions of this book
(1st ed. thru 11th ed. in the above Copyright Table),
the title of this book was The Computer Inside You

This Work is placed in the Public Domain

September 17, 2017: I, Kurt Johmann, the author and copyright owner, hereby place the entire text of A Soliton and its owned Bions (Awareness and Mind), 12th edition, in the public domain. The photo of myself as a 57-year-old man I’m also placing in the public domain.

Brief Overview

This book proposes in detail an old idea: that the universe is a computed reality generated by an underlying network of computing elements. In particular, this book uses this reality model to explain the otherwise unexplained: ESP, afterlife, mind, UFOs and their occupants, organic development, and such.

About the Author

I, Kurt Johmann, was born November 16, 1955, in Elizabeth, New Jersey, USA (United States of America). I obtained a BA in computer science from Rutgers University in 1978. From 1978 to 1988 I worked first as a systems analyst and then as a PC software developer. I entered graduate school in August 1988. In December 1989 I received an MS, and in May 1992 a PhD, both in computer science from the University of Florida in Gainesville, Florida. I then returned to software development work, continuing such work until the end of 2005, and also taking time as needed to work on the first ten editions of this book, and various other writings.

Beginning in early 2006 I had to start helping my parents at their home in Gainesville, Florida, because of their decline due to the infirmities of old age, so I retired from programming work to help them. After writing the 11th edition of this book in mid-2006, I did no further work on this book until late 2012, when I started work on the 12th edition, but I soon stopped because in 2013 I had a substantially increased workload caring for my dad during his final year (he died at home at the end of 2013; my mother had already died at home at the end of 2011). After my father’s death there were various estate matters to handle, and also my own relocation to a different part of Florida. In June 2015 I resumed work on this book’s 12th edition. At the completion of this 12th edition on September 17, 2017, I am 61 years old.

Below is a photo (without my glasses that I normally wear) of myself, Kurt Johmann, when I was 57 years old, taken February 2013. I had started work on the 12th edition of this book a few months before this photo was taken, and I had this photo taken with the intention of including it in the 12th edition. My tan is from the Florida sun:

photo of Kurt Johmann, at age 57

Preface

At the time of Isaac Newton’s invention of the calculus in the 17th century, the mechanical clock was the most sophisticated machine known. The simplicity of the clock allowed its movements to be completely described with mathematics. Newton described with mathematics not only the clock’s movements, but also the movements of the planets and other astronomical bodies. Because of the success of the Newtonian method, a mathematics-based model of reality resulted.

In modern times, a much more sophisticated machine than the clock has appeared: the computer. A computer includes a clock but has much more, including programmability. Because of its programmability, the actions of a computer are arbitrarily complex. Assuming a complicated program, the actions of a computer cannot be described in any useful way with mathematics.

To keep pace with this advance from the clock to the computer, civilization should upgrade its thinking and adjust its model of reality accordingly. This book is an attempt to help smooth the transition from the old conception of reality—that allowed only mathematics to describe particles and their interactions—to a computer-based conception of reality.

Introduction

A reality model is a means for understanding the universe as a whole. Based on the reality model one accepts, one can classify things as either possible or impossible.

The reality model of 20th-century science is the mathematics-only reality model. This is a very restrictive reality model that rejects as impossible any particle whose interactions cannot be described with mathematical equations.

If one accepts the mathematics-only reality model, then there is no such thing as an afterlife, because according to that model a man only exists as the composite form of the simple mathematics-obeying common particles composing that man’s brain—and death is the permanent end of that composite form. For similar reasons the mathematics-only reality model denies and declares impossible many other psychic phenomena.

The approach taken in this book is to assume that deepest reality is computerized. Instead of, in effect, mathematics controlling the universe’s particles, computers control these particles. This is the computing-element reality model. This model is presented in detail in chapter 1.

With particles controlled by computers, particles can behave in complicated, intelligent ways. Thus, intelligent particles are a part of the computing-element reality model. And with intelligent particles, psychic phenomena, such as the afterlife, are easy to explain.

Of course, one can object to the existence of computers controlling the universe, because, compared to the mathematics-only reality model—which conveniently ignores questions about the mechanism behind its mathematics—the computing-element reality model adds complexity to the structure of deepest reality. However, this greater complexity is called for by both the scientific and other evidence presented in this book.

1 The Computing-Element Reality Model

This chapter presents the computing-element reality model. The chapter sections are:

  • 1.1 Constraints for any Reality Model
  • 1.2 Overview of the Model
  • 1.3 Components of the Model
  • 1.4 Particles
  • 1.5 Living Inside Computed Reality
  • 1.6 Common Particles and Intelligent Particles

1.1 Constraints for any Reality Model

The world is composed of particles. The visible objects that occupy the everyday world are aggregates of particles. This fact was known by the ancients: a consequence of seeing large objects break down into smaller ones.

Particles that are not composed of other particles are called elementary particles. Philosophically, one must grant the existence of elementary particles at some level, to avoid an infinite regress.

For the physics known as quantum mechanics, the old idea of the continuous motion of particles—and the smooth transition of a particle’s state to a different state—is replaced by discontinuous motion and discontinuous state changes. A particle moves in discrete steps (for example, the movement of an electron to a different orbital), and a particle’s state changes in discrete steps (for example, the change of a photon’s spin).

For the particles studied by physics, the state of a particle is the current value of each attribute of that particle. A few examples of particle attributes are position, velocity, and mass. For certain attributes, each possible value for that attribute has an associated probability: the probability that that particle’s state will change to that value. The mathematics of quantum mechanics allows computation of these probabilities, thereby predicting specific state changes.

Various physics experiments, such as the double-slit experiments done with electrons and also neutrons, contradict the old idea that a particle is self-existing independent of everything else. For the particles studied by physics, these experiments show that the existence of a particle, knowable only thru observation, is at least partly dependent on the structure of the observing system.

Other physics experiments, such as the EPR experiments that test Bell’s theorem, demonstrate that widely separated particles can simultaneously, synchronously change state. Given the distance between the particles and the extent to which the synchronous state changes are measured as being simultaneous, it appears necessary that an instantaneous much-faster-than-lightspeed communication is involved in coordinating these synchronous state changes for the widely separated particles.

In summary, physics places the following three constraints on any reality model of the universe:

  1. A particle moves in discrete steps. And a particle’s state changes in discrete steps.

    Thus, as a particle moves from some point A to some point B, that particle occupies, at most, only a finite number of different positions between those two points, instead of an infinite number of different positions.

    Similarly, as a particle changes state, from some state A to some state B, there are, at most, only a finite number of different in-between states, instead of an infinite number of different in-between states.

  2. Self-existing particles—that have a reality independent of everything else—do not exist.

  3. Instantaneous communication occurs.

    Regarding the actual speed of this instantaneous communication, it is at least 20 billion times the speed of light.[1]


footnotes

[1] The force of gravity is an example of instantaneous communication. Astronomer Tom Van Flandern computes a lower bound on the speed of gravity as being not less than 20 billion times the speed of light (2×10^10 c). (Van Flandern, Tom. “The speed of gravity—What the experiments say.” Physics Letters A, volume 250 (21 December 1998): pp. 1–11)

Van Flandern’s article also debunks both Special Relativity and General Relativity, which are two physical theories that have been dominant in the 20th century, more for political reasons than reasons of merit.

Similarly, the Big Bang is a physical theory that has been dominant in the 20th century, for political reasons instead of reasons of merit. See my essay Big-Bang Bunk at https://solitoncentral.com/big-bang-bunk.

Note that the computing-element reality model that is detailed in the remainder of this chapter is not dependent on the truth or falsity of any particular physical theory, because any physical theory that is useful can be computed (section 1.5).

Although the computing-element reality model does not depend on specific physical theories, the model can be helpful in constructing physical theories. For example, consider the fact that time slows for an object as that object moves faster. Given the computing-element reality model, one can suggest, for example, that the faster an object moves thru the array of computing elements (section 1.2), the more of the available computing time is devoted to moving that object, with less computing time available for interacting that object’s particles with each other and with the outside environment. Thus, if the object is a clock, then that clock runs more slowly, because all of that clock’s particles are moving more slowly relative to each other.
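As an illustration only, the following short Python sketch models this suggestion. The fixed per-step compute budget and the linear cost of movement are assumptions invented for the sketch; the slowdown that physics actually measures follows the Lorentz factor, so the sketch illustrates only the qualitative idea that more movement leaves less computing time for a clock’s internal workings.

  # Toy model (illustrative assumptions only): each computing element has a
  # fixed compute budget per tick; the faster an object moves, the more of
  # that budget goes to moving it, leaving less for internal interactions.

  def relative_clock_rate(speed_fraction):
      """speed_fraction: the object's speed as a fraction of lightspeed."""
      budget = 1.0                            # total compute budget per tick
      spent_moving = speed_fraction           # assumed linear cost of movement
      return max(budget - spent_moving, 0.0)  # what remains runs the "clock"

  for v in (0.0, 0.5, 0.9, 0.99):
      print(f"speed {v:.2f}c -> relative clock rate {relative_clock_rate(v):.2f}")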

In general, the computing-element reality model provides a framework in which physics can use algorithms to explain physical phenomena, instead of limiting itself to using only mathematics.


1.2 Overview of the Model

The computing-element reality model states that the universe’s particles are controlled by computers. Specifically, the computing-element reality model states that the universe is a vast, space-filling, three-dimensional array of tiny, identical, computing elements.[2]

A computing element is a self-contained computer, with its own memory. Each computing element can communicate with its adjacent computing elements, and each computing element runs its own copy of the same large and complex program—called the computing-element program.

Each elementary particle in the universe exists only as a block of information that is stored as data in the memory of a computing element. Thus, all particles are both manipulated as data and moved about as data by these computing elements. Consequently, the reality that people experience is a computer-generated, computed reality.[3],[4]

In our human world with its man-made physical computers, thinking about the computing elements is necessarily influenced by what we know about those man-made computers. However, the actual composition and history of the computing elements, in terms of what they are made of and how they came into being, is necessarily unknowable to us, because the computing elements that generate our reality are not inside that generated reality with us, which means that we cannot directly—with or without our scientific tools and instruments—see and examine these computing elements. Nevertheless, knowing about physical computers, and reasoning by analogy, one can say that a computing element has a processor and memory and can run programs stored in its memory, but one cannot say what any of these components in a computing element are made of, or how they came into being.


footnotes

[2] One can ask how these computing elements came into existence, but this line of questioning faces the problem of infinite regress: if one answers the question as to what caused these computing elements to come into existence, then what caused that cause, and so on. At some point a reality model must declare something as bedrock for which causation is not sought. For the mathematics-only reality model its bedrock is mathematics; for the computing-element reality model its bedrock is the computing element.

A related line of questioning asks what existed before the universe, and what exists outside the universe. For these two questions the term universe includes the bedrock of whatever reality model one chooses. Both questions ask, in effect, what lies outside the containing framework of reality that is defined by one’s given reality model. The first question assumes that something lies outside in terms of time, and the second question assumes that something lies outside in terms of space. And both questions implicitly suggest that whatever lies outside in terms of time or space may be a fundamentally different reality model, because if it is just more of the same reality model, then why ask the question? However, because we cannot see or think apart from whatever actual underlying reality model gives us our existence, speculating about alternative reality models that may exist elsewhere in time or space is just imagination and guesswork with no practical value.

[3] Thruout the remainder of this book the word particle, unless stated or implied otherwise, denotes an elementary particle. An elementary particle is a particle that is not composed of other particles. In physics, prime examples of elementary particles are electrons, quarks, and photons. Also, the intelligent particles—bions and solitons, which are described later in this book—are elementary particles.

[4] The three-dimensional array of computing elements is, in effect, the universe and space itself. However, except in imagination it is not possible for anyone to see—with or without instruments—any part of this array of computing elements, for the following reason: mankind and its instruments are composed of particles, and particles are data stored in computing elements, so those particles, being only an effect of those computing elements, cannot directly probe those computing elements.


1.3 Components of the Model

Today, computers are commonplace and the basics of programs and computers are widely known. Given the hypothesized computing elements that lie at the deepest level of the universe, overall complexity is minimized by assuming the following:

  • Each computing element is structurally identical, and there is only one type of computing element.

  • Each computing element runs the same program—called the computing-element program—and there is only one program; each computing element runs its own copy of this program.

Among other things, the computing-element program includes code that supports message transmission from a computing element to its adjacent computing elements, allowing, in effect, messages to travel thru 3D space. Section 3.8 covers messaging in detail. Also, assuming that gravity is communicated by messages, and assuming that Tom Van Flandern is correct that the speed of gravity is at least 20 billion times the speed of light (section 1.1), then the messages that result in gravity are probably moving thru 3D space at a speed that is at least 20 billion times the speed of light.

Regarding the shape and spacing of the computing elements, this question is unimportant: whatever the answer might be, there is no obvious impact on any other question of interest. From the standpoint of what is esthetically pleasing, one can imagine that the computing elements are cubes that are packed together without intervening space.

Regarding the size of the computing elements, the required complexity of the computing-element program can be reduced by reducing the maximum number of particles that a computing element simultaneously stores and manipulates in its memory. In this regard the computing-element program is most simplified if that maximum number is one. Given this maximum of one, if one then assumes that no two particles can be closer than 10^−16 centimeters apart—and consequently that each computing element is a cube 10^−16 centimeters wide—then each cubic centimeter of space contains 10^48 computing elements.[5] The value of 10^−16 centimeters is used because this is an upper bound on the size of an electron, which is an elementary particle.

Regarding computing-element processing speed, it’s possible to compute lower bounds by making a few assumptions: For example, assume a computing element only needs to process a total of 10,000 program instructions to determine that it should transfer an information block to an adjacent computing element. In addition, assume that this information block represents a particle moving at lightspeed, and the distance to be covered is 10^−16 centimeters. With these assumptions there are about 10^−26 seconds for the transfer of the information block to take place, and this is all the time that the computing element has to process those 10,000 instructions, so the MIPS rating of each computing element is at least 10^24 MIPS (millions of instructions per second). For comparison, the first edition of this book was composed on a personal computer that had an 8-MIPS microprocessor.
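Both numbers above can be checked with a few lines of Python; the element width, the lightspeed figure, and the 10,000-instruction count are the assumptions just stated:

  element_width_cm = 1e-16          # assumed width of a computing element
  per_cc = (1.0 / element_width_cm) ** 3
  print(f"computing elements per cubic centimeter: {per_cc:.0e}")   # 1e+48

  lightspeed_cm_per_s = 3e10        # speed of light, about 3x10^10 cm/s
  transfer_time_s = element_width_cm / lightspeed_cm_per_s   # ~3x10^-27 s
  instructions = 10_000             # assumed instructions per transfer
  mips = instructions / transfer_time_s / 1e6
  print(f"MIPS rating: at least {mips:.0e}")   # ~3e+24, i.e., over 10^24 MIPS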


footnotes

[5] In this book very large numbers and very small numbers are given in scientific notation. The exponent is the number of terms in a product of tens. A negative exponent means that 1 is divided by that product of tens. For example, 10^−16 is equivalent to (1 ÷ 10,000,000,000,000,000) which is 0.0000000000000001; and, for example, 3×10^8 is equivalent to 300,000,000.


1.4 Particles

Regarding the first two constraints that physics places on any reality model of the universe (section 1.1):

  1. A particle moves in discrete steps. And a particle’s state changes in discrete steps.

    This is a consequence of the computing-element reality model, given the small size of the computing element, and given the finite resources of the computing element. These finite resources include such things as a finite processing speed, a finite memory, and a finite register size.

    Computing an infinity of different positions, or an infinity of different states, requires an infinity of time when the processing speed is finite. Thus, in the computing-element reality model, nothing is computed to an infinite extent. Everything is finite and discrete.

  2. Self-existing particles—that have a reality independent of everything else—do not exist.

    This is a consequence of the computing-element reality model, given that particles, being data, cannot exist apart from the computing elements that both store and manipulate that data.

A particle in the computing-element reality model exists only as a block of information stored as data in the memory of a computing element. A particle’s information block is the current, complete representation of that particle in the memory of whichever computing element currently holds that particle. A particle’s state information identifies the particle’s type and, depending on that type, includes a group of variables; every particle of a given type has a value set for each variable in its state information. For each particle type, its state information has a fixed format that is, in effect, defined by the computing-element program. For simplicity, one can assume that a particle’s state information is at the beginning of that particle’s information block.
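As a programmer’s sketch of the above description, one might lay out an information block as follows; the field names and Python types are illustrative assumptions, not the model’s actual format:

  from dataclasses import dataclass, field

  @dataclass
  class StateInfo:
      particle_type: str                 # identifies the particle's type
      variables: dict = field(default_factory=dict)   # type-dependent values

  @dataclass
  class InformationBlock:
      state: StateInfo                   # state information at the beginning
      remainder: bytes = b""             # whatever else the block holds

  # example: a common particle with simple attribute values
  electron = InformationBlock(
      StateInfo("electron", {"position": (0, 0, 0), "spin": "up"}))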

1.5 Living Inside Computed Reality

In effect, the computing-element reality model explains personally experienced reality as a computer-generated, computed reality. Similarly, modern computers are often used to generate a computed reality for game players. However, there is an important difference between a computed reality generated by a modern computer and the ongoing computed reality generated by the computing elements. From a personal perspective, the computed reality generated by the computing elements is reality itself; the two are identical. Put another way, one inhabits that computed reality; it is one’s reality.

For the last few centuries scientists have often remarked and puzzled about the fact that so much of the world can be described with mathematics. Physics textbooks are typically littered with equations that wrap up physical relationships in nice neat formulas. Why is there such a close relationship between mathematics and the workings of the world? This question is frequently asked.

Mathematics is, in effect, the product of computation. At its base, mathematics is counting (numbers are counts): a simple algorithm. For a reality that flows directly from an underlying computation layer—the essence of the computing-element reality model—mathematics is a natural part of that reality. A finite reality that results from finite computations is going to have relationships and patterns within it that can be quantified by equations.

Note that the high degree of order and structure in our reality is a direct reflection of the high degree of order and structure in the computing-element program. To help make this clear, imagine a simple reality-generating program that generates, in effect, nothing but noise: a sequence of random numbers. For that kind of reality, even though it is finite, the only relationship or pattern within that reality that applies over a wide area is the trivial one regarding its randomness. Thus, for example, in that reality you will not find the relationships and patterns described by the equations of our physics, such as, for example, Newton’s laws.[6]

Regarding what the computing-element reality model allows as possible within the universe: Because all the equations of physics describing particle interactions can be computed, either exactly or approximately, everything allowed by the mathematics-only reality model is also allowed by the computing-element reality model.[7]

The mathematics-only reality model disallows particles whose interactions cannot be expressed or explained with equations. By moving to the computing-element reality model, this limitation of the mathematics-only reality model is avoided.


footnotes

[6] For a formal treatment of the relationship between a program and its output when given the order and structure of that output, see, for example:

Chaitin, Gregory. “Information-Theoretic Computational Complexity.” In New Directions in the Philosophy of Mathematics, Thomas Tymoczko, ed. Princeton University Press, Princeton, 1998.

[7] Equations that cannot be computed are useless to physics, because they cannot be validated. For physics, validation requires computed numbers that can be compared with measurements made by experiment.


1.6 Common Particles and Intelligent Particles

A programmed computer can behave in ways that are considered intelligent. In computer science, the Turing Hypothesis states that all intelligence can be reduced to a single program, running on a simple computer, and written in a simple language. The universe contains at least one example of intelligence: ourselves. The computing-element reality model offers an easy explanation for this intelligence, because all intelligence in the universe can spring from the computing elements and their computing-element program.

At this point one can make the distinction between two classes of particles: common particles and intelligent particles. Classify all the particles of physics as common particles. Prime examples of common particles are electrons, photons, and quarks. In general, a common particle is a particle with simple state information, consisting only of attribute values. This simplicity of the state information allows the interactions between common particles to be expressed with mathematical equations. This satisfies the requirement of the mathematics-only reality model, so both models allow common particles.

Besides common particles, the computing-element reality model allows the existence of intelligent particles. In general, an intelligent particle is a particle whose information block includes a lot more than just simple state information and associated data, if any. Instead, a typical intelligent particle’s information block includes learned programs (section 3.6) and data stored and/or used by those learned programs. Regarding the state information of an intelligent particle, one can assume that among other things it includes a pointer to a linked list of the learned programs that that intelligent particle currently has in its information block.

Only an intelligent particle can have learned programs, and, in general, an intelligent particle’s learned programs are a major factor that determines how that intelligent particle interacts with other particles. Because different intelligent particles can have different learned programs, and the learned programs themselves can be very complex, expressing with mathematical equations the interactions involving intelligent particles is impossible. This explains why intelligent particles are absent from the mathematics-only reality model.
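Continuing the same kind of sketch for an intelligent particle; again, all names are illustrative assumptions, including representing the linked list of learned programs as a Python list:

  from dataclasses import dataclass, field

  @dataclass
  class LearnedProgram:
      name: str                          # hypothetical identifier
      code: bytes = b""                  # the learned program itself
      stored_data: bytes = b""           # data stored and/or used by it

  @dataclass
  class IntelligentParticleBlock:
      particle_type: str                 # "bion" or "soliton"
      state_variables: dict = field(default_factory=dict)
      learned_programs: list = field(default_factory=list)   # the linked list

  bion = IntelligentParticleBlock("bion")
  bion.learned_programs.append(LearnedProgram("gradient-following"))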

Regarding the movement of a particle thru 3D space, this movement happens in finite steps. Each step is done by copying that particle’s information block from the computing element that currently holds that particle to an adjacent computing element that becomes the new holder of that particle, and then, in effect, deleting that particle’s information block from the memory of the computing element that no longer holds that particle.
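In code, one movement step is just a copy followed by a deletion; the ComputingElement class below is an assumption made for the sketch:

  class ComputingElement:
      def __init__(self):
          self.held_block = None        # memory slot for one particle's block

  def move_particle(source, destination):
      """One finite step: copy the block to an adjacent element, then delete."""
      destination.held_block = source.held_block   # copy to the new holder
      source.held_block = None                     # delete from the old holder

  a, b = ComputingElement(), ComputingElement()
  a.held_block = {"particle_type": "photon"}
  move_particle(a, b)                   # the photon is now held by b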

Regarding the organization of a computing element’s memory, one can assume that each computing element has the same amount of internal memory, and that each computing element allocates the same part of its internal memory for holding a particle’s information block, with this allocation having the same size in each computing element (by “same size” is meant the same number of bytes or whatever unit of memory is used by computing elements). The size of this memory allocation for holding a particle’s information block is a limit on the size of any particle’s information block. Because the size of a common particle’s information block is tiny compared to the size of a typical intelligent particle’s information block, in practice only an intelligent particle can, in effect, grow its information block so that it is using all or nearly all of the available memory allocated for holding a particle’s information block.

There are two different kinds of intelligent particles, bions and solitons (described later in this book), and references in this book to a bion’s memory, or to a soliton’s memory, are always implicitly referring to that same-sized memory allocation for holding a particle’s information block that each computing element has. For example, storing data—aka saving data or writing data—in a bion’s memory means that that data is written into the memory of whichever computing element currently holds that bion, becoming a part of that bion’s current information block. In general, one can assume that the computing-element program, which is like an operating system with regard to learned programs, manages the memory in an intelligent particle’s information block, so that, for example, a bion’s learned programs can’t write over and corrupt that bion’s state information, and can’t write over and corrupt any of that bion’s learned programs.

For a computing element holding a common particle, that computing element can run that part of its computing-element program that determines how that type of particle will interact, if at all, with whatever other common particles, if any, are in the nearby surrounding environment found in nearby computing elements. The common particles of interest in this surrounding environment can be determined by that computing element sending a short-range message to those nearby computing elements, asking, in effect, if they are holding any particles of a type that can interact with its held particle, and if so, then send back relevant details regarding those held particles. Then, with that received information, that computing element can interact its held common particle with one or more of those nearby common particles. In general, the actual size of the neighborhood examined for the presence of common particles by a computing element depends on the type of common particle it is holding and/or that held particle’s state information.
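A sketch of this neighbor-query step follows; the neighborhood list and the interaction test are stand-ins, assumed for the sketch, for the short-range messages and the type-dependent rules of the computing-element program:

  def query_neighbors(held_particle, neighborhood, can_interact):
      """Return the relevant details of nearby particles able to interact.

      held_particle: the particle held by this computing element.
      neighborhood:  particles (or None) held by the nearby elements; its
                     size would depend on the held particle's type (assumed).
      can_interact:  type-dependent test, standing in for the rules of the
                     computing-element program (assumed).
      """
      return [p for p in neighborhood
              if p is not None and can_interact(held_particle, p)]

  # usage: an electron asking about nearby photons (illustrative)
  nearby = [{"particle_type": "photon"}, None, {"particle_type": "electron"}]
  replies = query_neighbors({"particle_type": "electron"}, nearby,
                            lambda held, other: other["particle_type"] == "photon")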

2 Biology and Bions

This chapter presents some of the evidence that each cell—in this book, the word cell means an organic living cell—is inhabited and controlled by an intelligent particle that makes that cell alive. The chapter sections are:

  • 2.1 The Bion
  • 2.2 Cell Movement
  • 2.3 Cell Division
  • 2.4 Generation of Sex Cells
  • 2.5 Bions and Cell Division
  • 2.6 Multicellular Development

2.1 The Bion

The bion is an intelligent particle that has no associated awareness.[8] Assume there is one bion associated with each cell. For any bion, its association, if any, with cells and cellular activity depends on the details of its learned programs (section 3.6).[9] Depending on its learned programs, a bion can interact with both intelligent particles and common particles.


footnotes

[8] By “no associated awareness” for the bion is meant that the bion is always an unconscious particle (likewise, common particles are always unconscious particles). In the reality model presented in this book, the only intelligent particles that are conscious when awake are solitons, which are described later in this book (sleep for an intelligent particle is explained in section 9.3; common particles do not sleep).

[9] The word bion is a coined word which I made up as follows: bi from the word biology, and the on suffix to denote a particle. Most of the bions active in our physical human world are directly involved with cells, hence the reason I incorporated the word biology into the name I chose for these unconscious intelligent particles. However, for those bions that compose our human minds, none of those bions have any of the learned programs for making cells alive (see section 3.7). Another example of bions that have none of the learned programs for making cells alive are the bions that compose the bion bodies of the Caretakers (section 7.6).


2.2 Cell Movement

The ability to move either toward or away from an increasing chemical concentration is a coordinated activity that many single-cell organisms can do. Single-cell animals and bacteria typically have some mechanical means of movement. Some bacteria use long external whip-like filaments called flagella. Flagella are rotated by a molecular motor to cause propulsion thru water. The larger single-cell animals may use flagella similar to bacteria, or they may have rows of short filaments called cilia, which work like oars, or they may move about as amebas do. Amebas move by extruding themselves in the direction they want to go.

The Escherichia coli bacterium has a standard pattern of movement when searching for food: it moves in a straight line for a while, then it stops and turns a bit, and then continues moving in a straight line again. This pattern of movement is followed until the presence of food is detected. The bacterium can detect molecules in the water that indicate the presence of food. When the bacterium moves in a straight line, it continues longer in that direction if the concentration of these molecules is increasing. Conversely, if the concentration is decreasing, it stops its movement sooner, and changes direction. Eventually this strategy gets the bacterium to a nearby food source.

Amebas that live in soil feed on bacteria. One might not think that bacteria leave signs of their presence in the surrounding water, but they do. This happens because bacteria make small molecules, such as cyclic AMP and folic acid. There is always some leakage of these molecules into the surrounding water thru the cell membrane. Amebas can move in the direction of increasing concentration of these molecules, and thereby find nearby bacteria. Amebas can also react to the concentration of molecules that identify the presence of other amebas. The amebas themselves leave telltale molecules in the water, and amebas move in a direction of decreasing concentration of these molecules, away from each other.

The ability of a cell to follow a chemical concentration gradient is hard to explain using chemistry alone. The easy part is the actual detection of a molecule. A cell can have receptors on its outer membrane that react when contacted by specific molecules. The other easy part is the means of cell movement. Either flagella, or cilia, or self-extrusion is used. However, the hard part is to explain the control mechanism that lies between the receptors and the means of movement.

In the ameba, one might suggest that wherever a receptor on the cell surface is stimulated by the molecule to be detected, there is an extrusion of the ameba at that point. This kind of mechanism is a simple reflexive one. However, this reflex mechanism is not reliable. Surrounding the cell at any one time could be many molecules to be detected. This would cause the cell to move in many different directions at once. And this reflex mechanism is further complicated by the need to move in the opposite direction from other amebas. This would mean that a stimulated receptor at one end of the cell would have to trigger an extrusion of the cell at the opposite end.

A much more reliable mechanism to follow a chemical concentration gradient is one that takes measurements of the concentration over time. For example, during each time interval—of some predetermined fixed length, such as during each second—the moving cell could count how many molecules were detected by its receptors. If the count is decreasing over time, then the cell is probably moving away from the source. Conversely, if the count is increasing over time, then the cell is probably moving toward the source. Using this information, the cell can change its direction of movement as needed.

Unlike the reflex mechanism, there is no doubt that this count-over-time mechanism would work. However, this count-over-time mechanism requires a clock and a memory, and a means of comparing the counts stored in memory. This sounds like a computer, but such a computer is extremely difficult to design as a chemical mechanism, and no one has done it. On the other hand, the bion, an intelligent particle, can provide these services.
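To make the count-over-time mechanism concrete, here is a short Python sketch of it steering the run-and-tumble movement pattern described earlier for the E. coli bacterium; the interval counts and the turn rule are illustrative assumptions. Note that the sketch needs exactly what was just said: a clock (the fixed intervals), a memory (the previous count), and a comparison of the counts.

  import random

  def next_heading(previous_count, current_count, heading):
      """Compare receptor counts from two successive fixed-length intervals.

      If the concentration is increasing, keep moving in a straight line;
      if it is decreasing, stop sooner and turn a bit (run-and-tumble).
      """
      if current_count >= previous_count:
          return heading                            # keep going straight
      return heading + random.uniform(-3.14, 3.14)  # turn in a new direction

  heading = 0.0
  counts = [5, 9, 14, 11, 13]           # molecules detected per interval
  for previous, current in zip(counts, counts[1:]):
      heading = next_heading(previous, current, heading)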

2.3 Cell Division

All cells reproduce by dividing: one cell becomes two. When a cell divides, it divides roughly in half. The division of water and proteins between the dividing cell halves does not have to be exactly even. Instead, a roughly even distribution of the cellular material is acceptable. However, there is one important exception: the cell’s DNA, which is known to code the structure of individual proteins, and may contain other kinds of information. The DNA of a cell is like a single massive book. This book cannot be torn in half and roughly distributed between the two dividing cell halves. Instead, each new cell needs its own complete copy. Therefore, before a cell can divide, it must duplicate all its DNA, and each of the two new cells must receive a complete copy of the original DNA.

All multicellular organisms are made out of eucaryotic cells. Eucaryotic cells are characterized by having a well-defined cellular nucleus that contains all the cell’s DNA. Division for eucaryotic cells has three main steps. In the first step all the DNA is duplicated, and the chromosomes condense into clearly distinct and separate groupings of DNA. For a particular type of cell, such as a human cell, there is a fixed and unchanging number of condensed chromosomes formed; ordinary human cells always form 46 condensed chromosomes before dividing.

During the normal life of a cell, the chromosomes in the nucleus are sufficiently decondensed so that they are not easily seen as being separate from each other. During cell division, each condensed chromosome that forms—hereafter simply referred to as a chromosome—consists of two equal-length strands that are joined. The place where the two strands are joined is called a centromere. Each chromosome strand consists mostly of a long DNA molecule wrapped helically around specialized proteins called histones. For each chromosome, each of the two strands is a duplicate of the other, coming from the preceding duplication of DNA. For a human cell there are a total of 92 strands comprising 46 chromosomes. The 46 chromosomes comprise two copies of all the information coded in the cell’s DNA. One copy will go to one half of the dividing cell, and the other copy will go to the other half.

The second step of cell division is the actual distribution of the chromosomal DNA between the two halves of the cell. The membrane of the nucleus disintegrates, and simultaneously a spindle forms. The spindle is composed of microtubules, which are long, thin rods made of chained proteins. The spindle can have several thousand of these microtubules. Many of the microtubules extend from one half of the cell to the chromosomes, and a roughly equal number of microtubules extend from the opposite half of the cell to the chromosomes. Each chromosome’s centromere becomes attached to microtubules from both halves of the cell.

When the spindle is complete, and all the centromeres are attached to microtubules, the chromosomes are then aligned together. The alignment places all the centromeres in a plane that is oriented at a right angle to the spindle. The chromosomes are at their maximum contraction. All the DNA is tightly bound so that none will break off during the actual separation of each chromosome. The separation itself is caused by a shortening of the microtubules. In addition, in some cases the separation is caused by the two bundles of microtubules moving away from each other. The centromere, which held together the two strands of each chromosome, is pulled apart into two pieces. One piece of the centromere, attached to one chromosome strand, is pulled into one half of the cell. And the other centromere piece, attached to the other chromosome strand, is pulled into the opposite half of the cell. Thus, the DNA is equally divided between the two halves of the dividing cell.

The third step of cell division involves the construction of new membranes. Once the divided DNA has reached the two respective cell halves, a normal-looking nucleus forms in each cell half: at least some of the spindle’s microtubules first disintegrate, a new nuclear membrane assembles around the DNA, and the chromosomes become decondensed within the new nucleus. Once the two new nuclei are established, a new cell membrane is built in the middle of the cell, dividing the cell in two. Depending on the type of cell, the new cell membrane may be a shared membrane. Or the new cell membrane may be two separate cell membranes, with each membrane facing the other. Once the membranes are completed, and the two new cells are truly divided, the remains of the spindle disintegrate.

2.4 Generation of Sex Cells

The dividing of eucaryotic cells is impressive in its precision and complexity. However, there is a special kind of cell division used to make the sex cells of most higher organisms, including man. This special division process is more complex than ordinary cell division. For organisms that use this process, each ordinary cell (ordinary in the sense of not being a sex cell) has half its total DNA from the organism’s mother, and the other half from the organism’s father. Thus, within the cell are two collections of DNA. One collection originated from the mother, and the other collection originated from the father. Instead of this DNA from the two origins being mixed, the separateness of the two collections is maintained within the cell. When the condensed chromosomes form during ordinary cell division, half the chromosomes contain all the DNA that was passed by the mother, and the other half contain all the DNA that was passed by the father. In any particular chromosome, all the DNA came from only one parent, either the mother or the father.

Regarding genetic inheritance, particulate inheritance requires that each inheritable characteristic be represented by an even number of genes.[10] Genes are specific sections of an organism’s DNA. For any given characteristic encoded in the DNA, half the genes come from the mother, and the other half come from the father. For example, if the mother’s DNA contribution has a gene for making hemoglobin, then there is a gene for making hemoglobin in the father’s DNA contribution. The actual detail of the two hemoglobin genes may differ, but for every gene in the mother’s contribution, there is a corresponding gene in the father’s contribution. Thus, the DNA from the mother is always a rough copy of the DNA from the father, and vice versa. The only difference is in the detail of the individual genes.

Sex cells are made four-at-a-time from an original cell.[11] The original cell divides once, and then the two newly formed cells each divide, producing the final four sex cells. The first step for the original cell is a single duplication of all its DNA. Then, ultimately, this DNA is evenly distributed among each resultant sex cell, giving each sex cell only half the DNA possessed by an ordinary nondividing cell. Then, when the male sex cell combines with the female sex cell, the then-fertilized egg has the normal amount of DNA for a nondividing cell.

The whole purpose of sexual reproduction is to provide a controlled variability of an organism’s characteristics, for those characteristics that are represented in that organism’s DNA. Differences between individuals of the same species give natural selection something to work with—allowing, within the limits of that variability, an optimization of that species to its environment.[12] To help accomplish this variability, there is a mixed selection in the sex cell of the DNA that came from the two parents. However, the DNA that goes into a particular sex cell cannot be a random selection from all the available DNA. Instead, the DNA in the sex cell must be complete, in the sense that each characteristic specified by that organism’s DNA is specified in that sex cell, and the number of genes used to specify each such characteristic is only half the number of genes present for that characteristic in ordinary nondividing cells. Also, the order of the genes on the DNA must remain the same as it was originally—conforming to the DNA format for that species.

The mixing of DNA that satisfies the above constraints is partially accomplished by randomly choosing from the four strands of each functionally equivalent pair of chromosomes. Recall that a condensed chromosome consists of two identical strands joined by a centromere. For each chromosome that originated from the mother, there is a corresponding chromosome with the same genes that originated from the father. These two chromosomes together are a functionally equivalent pair. One of the chromosomes from each functionally equivalent pair of chromosomes is split between two of the sex cells. And the other chromosome from that pair is split between the other two sex cells. In addition to this mixing method, it would improve the overall variability if at least some corresponding sequences of genes on different chromosomes are exchanged with each other. And this exchange method is in fact used. Thus, a random exchanging of corresponding sequences of genes within a functionally equivalent pair of chromosomes, followed by a random choosing of a chromosome strand from each functionally equivalent pair of chromosomes, provides good overall variability, and preserves the DNA format for that species.
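The two mixing methods just described can be sketched in a few lines of Python; the gene labels are illustrative, and for brevity each exchange swaps a single corresponding gene rather than a sequence of genes:

  import random

  def exchange_corresponding(maternal, paternal, exchanges=3):
      """Randomly exchange corresponding genes between the equivalent pair.

      Swapping at the same position keeps the order of the genes on the
      DNA unchanged, preserving the DNA format for the species.
      """
      m, p = list(maternal), list(paternal)
      for _ in range(exchanges):
          i = random.randrange(len(m))   # a corresponding position
          m[i], p[i] = p[i], m[i]
      return m, p

  mother_chrom = ["A1", "B1", "C1", "D1"]    # genes that came from the mother
  father_chrom = ["A2", "B2", "C2", "D2"]    # corresponding genes, same order
  m, p = exchange_corresponding(mother_chrom, father_chrom)
  for_this_sex_cell = random.choice([m, p])  # random choice from each pair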

Following are the details of how the sex cells get their DNA: The original cell, as already stated, duplicates all its DNA. The same number of condensed chromosomes are formed as during ordinary cell division. However, these chromosomes are much longer and thinner than chromosomes formed during ordinary cell division. These chromosomes are stretched out, so as to make the exchanging of sequences of genes easier.

Once these condensed, stretched-out chromosomes are formed, each chromosome, in effect, seeks out the other functionally equivalent chromosome and lines up with it, so that corresponding sequences of genes are directly across from each other. Then, on average, for each functionally equivalent pair of chromosomes, several random exchanges of corresponding sequences of genes take place.

After the exchanging is done, the next step has the paired chromosomes move away somewhat from each other. However, they remain connected in one or more places. Also, the chromosomes themselves undergo contraction, losing their stretched-out long-and-thin appearance. As the chromosomes contract, the nuclear membrane disintegrates, and a spindle forms. Each connected pair of contracted chromosomes lines up so that one centromere is closer to one end of the spindle, and the other centromere is closer to the opposite end of the spindle. The microtubules from each end of the spindle attach to those centromeres that are closer to that end. The two chromosomes of each connected pair are then pulled apart, moving into opposite halves of the cell. It is random as to which chromosome of each functionally equivalent pair goes to which cell half. Thus, each cell half gets one chromosome from each pair of what was originally mother and father chromosomes, but which have since undergone random exchanges of corresponding sequences of genes.

After the chromosomes have been divided into the two cell halves, there is a delay, the duration of which depends on the particular species. During this delay—which may or may not involve the forming of nuclei and the construction of a dividing cell membrane—the chromosomes remain unchanged. After the delay, the final step begins. New spindles form—in each cell half if there was no cell membrane constructed during the delay; or in each of the two new cells if a cell membrane was constructed—and the final step divides each chromosome at its centromere. The chromosomes line up, the microtubules attach to the centromeres, and the two strands of each chromosome are pulled apart in opposite directions. Four new nuclear membranes form. The chromosomes become decondensed within each new nucleus. The in-between cell membranes form, and the spindles disintegrate. There are now four sex cells, and each sex cell contains a well-varied blend of that organism’s genetic inheritance which originated from its two parents.


footnotes

[10] The exception to this rule, and the exception to the rules that follow, are genes and chromosomes that are sex-specific, such as the X and Y chromosomes in man. There is no further mention of this complicating factor.

[11] In female sex cells, four cells are made from an original cell, but only one of these four cells is a viable egg (this viable egg has most of the original cell’s cytoplasm). The other three cells are not viable eggs and they disintegrate. There is no further mention of this complicating factor.

[12] The idea of natural selection is that differences between individuals translate into differences in their ability to survive and reproduce. If a species has a pool of variable characteristics, then those characteristics that make individuals of that species less likely to survive and reproduce, tend to disappear from that species. Conversely, those characteristics that make individuals of that species more likely to survive and reproduce, tend to become common in that species.

A species is characterized by the ability of its members to interbreed. It may appear that if one had a perfect design for a particular species, then that species would have no need for sexual reproduction. However, the environment could change and thereby invalidate parts of any fixed design. In contrast, the mechanism of sexual reproduction allows a species to change as its environment changes.


2.5 Bions and Cell Division

As one can see, cell division is a complex and highly coordinated process that consists of a sequence of well-defined steps. So, can cell division itself be exclusively a chemical phenomenon? Or would it be reasonable to believe that bions are involved?

Cells are highly organized, but there is still considerable random movement of molecules, and there are regions of more or less disorganized molecules. Also, the organized internal parts of a cell are suspended in a watery gel. And no one has been able to construct, either by designing on paper or by building in practice, any computer-like control mechanisms that are made—as cells are—from groups of organized molecules suspended in a watery gel.[13] Also, the molecular structure of cells is already known in great—albeit incomplete—detail, and computer-like control mechanisms composed of molecules have not been observed. Instead, the only major computer component seen in cells is DNA, which, in effect, is read-only memory. But a computer requires an instruction processor, which is a centralized machine that can do each action corresponding to each program instruction stored in memory. And this required computer component has not been observed in cells. Given all these difficulties for the chemical explanation, it is reasonable to conclude that for each cell a bion controls its cell-division process.[14]


footnotes

[13] The sequence of well-defined steps for cell division is a program. For running such a moderately complex program, the great advantage of computerization over non-computer solutions—in terms of resource requirements—is discussed in section 3.3.

[14] The bion also explains the otherwise enigmatic subject of biological transmutations. Organic life is able to perform a number of different transmutations of elements into different elements, and this has been shown by many different experiments (Kervran, C. Louis. Biological Transmutations. Beekman Publishers, Woodstock NY, 1998):

In chemistry we are always referred to a law of Lavoisier’s formulated at the end of the 18th century. “Nothing is lost, nothing is created, everything is transformed.” This is the credo of all the chemists. They are right: for in chemistry this is true. Where they go wrong is when they claim that nature follows their laws: that Life is nothing more than chemistry. [Ibid., p. viii; Herbert Rosenauer]

Included among the many different examples of biological transmutations are such things as the production of calcium by hens (Ibid., pp. 15, 60–61), the production of iodine by algae (Ibid., p. 69), and the production of copper by lobsters (Ibid., pp. 120–122). In general, it appears that plants, animals, and smaller organisms such as bacteria, are all engaged in the production of certain elements.

Although there is much experimental evidence for biological transmutations, there has been no explanation within the framework of physics and chemistry. However, given the bion, biological transmutations can be explained as being done by bions.


2.6 Multicellular Development

For most multicellular organisms, the body of the organism develops from a single cell. How a single cell can develop into a starfish, tuna, honeybee, frog, dog, or man, is obviously a big question. Much research and experimentation has been done on the problems of development. In particular, there has been much focus on early development, because the transition from a single cell to a baby is a much more radical step than the transition from a baby to an adult, or from an adult to an old adult.

In spite of much research on early development, there is no real explanation of how it happens, except for general statements of what must be happening. For example, it is known that some sort of communication must be taking place between neighboring cells—and molecules are typically guessed as the information carrier—but the mechanism is unknown. In general, it is not hard to state what must be happening. However, the mathematics-only reality model allows only a chemical explanation for multicellular development, and given this restriction, there has been little progress. There is a great mass of data, but no explanation of the development mechanism.

Alternatively, given the computing-element reality model and the bion, multicellular development is explained as a cooperative effort between bions. During development, the cooperating bions read and follow as needed whatever relevant information is recorded in the organism’s DNA.[15]


footnotes

[15] As an analogy, consider the construction of a house from a set of blueprints. The blueprints by themselves do not build the house. Instead, a construction crew, which can read the blueprints, builds the house. And this construction crew, besides being able to read the blueprints, also has inside itself a great deal of additional knowledge and ability—not in the blueprints—needed to construct the house.

For a developing organism, its DNA is the set of blueprints and the organic body is the house. The organism’s bions are the construction crew. The learned programs in those bions, and associated data, are the additional knowledge and ability—not in the blueprints—needed to construct the house.

Note that at present it is not known how complete the DNA blueprints are, because the only code in DNA that has been deciphered so far is the code that specifies the structure of individual proteins. However, there is probably additional information in the DNA which is written in a language currently unknown:

So-called “junk” DNA, regions of genetic material (accounting for 97% of the human genome) that do not provide blueprints for proteins and therefore have no apparent purpose, have been puzzling to scientists. Now a new study shows that these non-coding sequences seem to possess structural similarities to natural languages. This suggests that these “silent” DNA regions may carry biological information, according to a statistical analysis of DNA fragments by researchers … [Physics News Update, American Institute of Physics, 1994, at: http://www.aip.org/enews/physnews/1994/split/pnu202-1.htm]


3 The Brain and the Mind

This chapter considers both the brain and the mind, and the involvement of bions in both. Also, learned programs are explained. And the last section presents in detail various algorithms, data structures, and code, that, among other things, support the development of multicellular animals including the development of one’s own physical body. The chapter sections are:

3.1 Neurons

Every mammal, bird, reptile, amphibian, fish, and insect, has a brain. The brain is at the root of a tree of sensory and motor nerves, with branches thruout the body. The building block of any nervous system, including the brain, is the nerve cell. Nerve cells are called neurons. All animal life shows the same basic design for neurons. For example, a neuron from the brain of a man uses the same method for signal transmission as a neuron from a jellyfish.

Neurons come in many shapes and sizes. The typical neuron has a cell body and an axon along which a signal can be transmitted. An axon has a cylindrical shape, and resembles an electrical wire in both shape and purpose. In man, axon length varies from less than a millimeter to more than a meter.

A signal is transmitted from one end of the axon to the other end, as a chemical wave involving the movement of sodium ions across the axon membrane. During the wave, the sodium ions move from outside the axon to inside the axon. Within the neuron is a chemical pump that is always working to transport sodium ions to the outside of the cell. A neuron waiting to transmit a signal sits at a threshold state. The sodium-ion imbalance that exists across the axon membrane waits for a trigger to set the wave in motion. Neurons with a clearly defined axon can transmit a signal in only one direction.

The speed of signal transmission thru an axon is very slow compared to electrons moving thru an electrical wire. Depending on the axon, a signal may move at a speed of anywhere from ½ to 120 meters per second. The fastest transmission speeds are obtained by axons that have a myelin sheath: a fatty covering. The long sensory and motor nerves that connect the brain thru the spinal cord to different parts of the body are examples of myelinated neurons. In comparison to the top speed of 120 meters per second, an electrical current in a wire can move more than a million times faster. Besides speed, another consideration is how quickly a neuron can transmit a new signal. At best, a neuron can transmit about one thousand signals per second. One may call this the switching speed. In comparison, the fastest electrical circuits can switch more than a million times faster.
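
As a rough check on these comparisons (my arithmetic, shown only to connect the figures): an electrical signal in a wire propagates at an appreciable fraction of the speed of light, on the order of 200,000,000 meters per second, and 200,000,000 ÷ 120 is roughly 1.7 million, consistent with the more-than-a-million-times-faster claim. Likewise, a circuit switching a million times faster than a neuron’s one thousand signals per second would switch 1,000,000,000 times per second, i.e., at gigahertz rates, which modern electronics achieve.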

One important way that neurons differ from each other is by the neurotransmitters that they make and respond to. In terms of signal transmission, neurotransmitters are the link that connects one neuron to another. The sodium-ion wave is not directly transferred from one neuron to the next. Instead, the sodium-ion wave travels along the axon, and spreads into the terminal branches which end with synapses. There, the synapses release some of the neurotransmitter made by that neuron. The released neurotransmitter quickly reaches those neurons whose dendrites adjoin those synapses, provoking a response to that released neurotransmitter. There are three possible responses: a neuron could be stimulated to start its own sodium-ion wave, inhibited from starting its own sodium-ion wave, or left with no response.[16],[17]
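
To make the preceding description concrete, here is a minimal sketch, in Python, of a threshold neuron of the kind just described. The threshold value, the input strengths, and the names are my own illustrative assumptions, not measured values; the point is only that excitatory inputs push the neuron toward its threshold, inhibitory inputs push it away, and nothing fires until the threshold is crossed:

    # A toy threshold neuron: excitatory inputs add to the membrane
    # "charge", inhibitory inputs subtract from it, and the neuron
    # fires (starts its own sodium-ion wave) only when the threshold
    # is reached. All values are illustrative.

    THRESHOLD = 10.0

    def neuron_fires(synaptic_inputs):
        """synaptic_inputs: list of (kind, strength) pairs, where kind
        is 'excite', 'inhibit', or 'none' (no response)."""
        charge = 0.0
        for kind, strength in synaptic_inputs:
            if kind == 'excite':
                charge += strength
            elif kind == 'inhibit':
                charge -= strength
            # 'none': the released neurotransmitter has no effect here
        return charge >= THRESHOLD

    # Example: enough excitation, minus some inhibition, crosses threshold.
    print(neuron_fires([('excite', 7.0), ('excite', 6.0), ('inhibit', 2.0)]))  # True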


footnotes

[16] In the human brain there are many different neurotransmitters. Certain functionally different parts of the brain use different neurotransmitters. The subject of neurotransmitters raises the larger question of the effect of various drugs on the mind.

Although it is clear that certain chemicals affect the mind, it does not follow that the mind is a product of chemistry. As an analogy, consider the case of yourself and your physical environment: In your physical environment—including where you live, where you work, where you sleep, and so on—you are surrounded by physical objects, and you interact with many of these physical objects on a regular basis. Now, what happens when your physical environment changes? Depending on the specifics of the changes and on how you normally interact with the objects in question, the changes may or may not affect you. Can an outside observer logically conclude that the part of you that produces your reactions to changes in your physical environment is the same as, or is constructed from, the objects that you are reacting to? Obviously, no. And likewise, just because certain changes in the chemical landscape of the brain can affect the mind, it does not logically follow that the mind is a product of chemistry, or is composed of chemicals.

To generalize the argument: Given that object A is affected by object B, it does not logically follow that object A, or any part of object A, is composed of the same materials as object B.

Regarding psychedelic drugs (Grinspoon, Lester, and James Bakalar. Psychedelic Drugs Reconsidered. The Lindesmith Center, New York, 1997):

The fact that a simple compound like nitrous oxide as well as the complex organic molecule of a drug like LSD can produce a kind of psychedelic mystical experience suggests that the human organism has a rather general capacity to attain the state and can reach it by many different biological pathways. It should be clear that there is no simple correlation between the chemical structure of a substance and its effect on consciousness. The same drug can produce many different reactions, and the same reaction can be produced by many different drugs. [Ibid., p. 36]

Regarding psychiatric drugs (Breggin, Peter, and David Cohen. Your Drug May Be Your Problem. Perseus Books, Reading MA, 1999):

Psychiatric drugs do not work by correcting anything wrong in the brain. We can be sure of this because such drugs affect animals and humans, as well as healthy people and diagnosed patients, in exactly the same way. There are no known biochemical imbalances and no tests for them. That’s why psychiatrists do not draw blood or perform spinal taps to determine the presence of a biochemical imbalance in patients. They merely observe the patients and announce the existence of the imbalances. The purpose is to encourage patients to take drugs.

Psychiatric drugs “work” precisely by causing imbalances in the brain—by producing enough brain malfunction to dull the emotions and judgment or to produce an artificial high. [Ibid., p. 41]

It is perhaps interesting to note that just as one might react to a sudden surplus or deficit of some regularly used physical object in one’s physical environment by taking actions to return that object to its normal quantity and/or effect, there is the same kind of reaction to chemical imbalances in the brain caused by certain drugs. For example:

All four drugs [Prozac, Zoloft, Paxil, and Luvox], known as selective serotonin reuptake inhibitors (SSRIs), block the normal removal of the neurotransmitter serotonin from the synaptic cleft—the space between nerve cells. The resultant overabundance of serotonin then causes the system to become hyperactive. But the brain reacts against this drug-induced overactivity by destroying its capacity to react to stimulation by serotonin. This compensatory process is known as “downregulation.” Some of the receptors for serotonin actually disappear or die off.

To further compensate for the drug effect, the brain tries to reduce its output of serotonin. This mechanism is active for approximately ten days and then begins to fail, whereas downregulation continues indefinitely and may become permanent. Thus, we know in some detail about two of the ways in which the brain tries to counterbalance the effects of psychiatric drugs. There are other compensatory mechanisms about which we know less, including counterbalancing adjustments in other neurotransmitter systems. But, overall, the brain places itself in a state of imbalance in an attempt to prevent or overcome overstimulation by the drugs. [Ibid., p. 46]

Regarding changes to the affected nerve cells, such as the “downregulation” and reduced output of serotonin mentioned in the above quote, these changes are not done by one’s mind, but instead are done by the cell-controlling bions that occupy those affected nerve cells.

[17] The best known psychedelic drug is probably LSD, first synthesized by the Swiss chemist Albert Hofmann in 1938 while working for the drug company Sandoz. In 1943, Hofmann, as described in his book, LSD: My Problem Child, inadvertently absorbed some of the drug and had the following experience:

Last Friday, April 16, 1943, I was forced to interrupt my work in the laboratory in the middle of the afternoon and proceed home, being affected by a remarkable restlessness, combined with a slight dizziness. At home I lay down and sank into a not unpleasant intoxicated-like condition, characterized by an extremely stimulated imagination. In a dreamlike state, with eyes closed (I found the daylight to be unpleasantly glaring), I perceived an uninterrupted stream of fantastic pictures, extraordinary shapes with intense, kaleidoscopic play of colors. After some two hours this condition faded away. [Hofmann, Albert. LSD: My Problem Child. Multidisciplinary Association for Psychedelic Studies, Santa Cruz, 2009, p. 47]

After that Friday experience with LSD, three days later he took a larger, measured dose of LSD, and he describes the visual effects:

Everything in the room spun around, and the familiar objects and pieces of furniture assumed grotesque, threatening forms. They were in continuous motion, animated, as if driven by an inner restlessness. The lady next door, whom I scarcely recognized, brought me milk—in the course of the evening I drank more than two liters. She was no longer Mrs. R., but rather a malevolent, insidious witch with a colored mask. [Ibid., p. 49. Hofmann said he asked for and drank milk because he thought he had taken too much LSD, and milk would be a “nonspecific antidote to poisoning”.]

Now, little by little I could begin to enjoy the unprecedented colors and plays of shapes that persisted behind my closed eyes. Kaleidoscopic, fantastic images surged in on me, alternating, variegated, opening and then closing themselves in circles and spirals, exploding in colored fountains, rearranging and hybridizing themselves in constant flux. It was particularly remarkable how every acoustic perception, such as the sound of a door handle or a passing automobile, became transformed into optical perceptions. Every sound generated a vividly changing image, with its own consistent form and color. [Ibid., p. 50]

To understand what is happening with LSD, let’s start with a consideration of visual imagination. I have, as far as I know, an ordinary visual imagination when compared to others of my nationality: I can visualize something in my mind, either a made-up construction or a recollection of what something real in my life looks like, and I have conscious control over it, because I can consciously make changes to what I am seeing of it, and I can also choose to see it either as a static image or animated in some way. However, what I see from my visual imagination, when I am awake in my physical body, is always faint. This faint imagery is composited onto my ordinary vision when my eyes are open in a lit environment, and composited onto a dark background when my eyes are closed (note that making an image faint, and also compositing one image onto another image or background, are both simple, low-cost computations). From what I’ve read, and also from talking with others, some people get imagery from their visual imagination that is substantially less faint than what I get; good examples of such people probably include at least some of those who work as graphic artists, and also those who have so-called photographic memory.
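
The parenthetical claim above, that making an image faint and compositing it onto another image or background are simple, low-cost computations, can be illustrated with a few lines of code. In the following Python sketch (the pixel values and the faintness factor are my own illustrative choices), making an image faint is one multiply per pixel, and compositing is one multiply-and-add per pixel:

    # Make an image faint (scale its intensity), then composite it onto
    # a background. Images here are simple lists of grayscale pixel
    # values in the range 0.0 to 1.0.

    def make_faint(image, faintness):
        # faintness near 0.0 leaves almost nothing; near 1.0 leaves it vivid
        return [p * faintness for p in image]

    def composite(foreground, background, alpha):
        # standard alpha blend: result = alpha*fg + (1 - alpha)*bg
        return [alpha * f + (1.0 - alpha) * b
                for f, b in zip(foreground, background)]

    imagination = [0.9, 0.8, 1.0, 0.7]   # vivid imagery from the imagination
    sight       = [0.2, 0.3, 0.25, 0.2]  # imagery from the physical eyes

    faint = make_faint(imagination, 0.15)   # the "making faint" step
    seen  = composite(faint, sight, 0.5)    # what is consciously seen
    print(seen)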

Let’s assume that we each have, as a part of our human mind, the programming for a visual imagination that can, among other things, generate the complex imagery that is seen during an LSD trip. My guess as to what the LSD drug is doing is that it interferes in some way with the normal activity of at least some of the nerve cells in one’s brain that are involved with seeing thru one’s physical eyes (perhaps affecting some specific neurotransmitter). The end result is that the cell-controlling bions in one’s brain that send vision data to one’s mind end up sending vision data that differs in some way from what they normally send. In reaction to this distorted or corrupted vision data from one’s brain, the vision-processing programs in one’s mind compensate, in effect, by giving less importance to the vision data received from the brain, and more importance to what can be seen with the visual imagination. This change in importance is done by one’s mind, in effect, skipping over the normal step in the vision process that most of us experience while awake: the step that makes imagery from one’s visual imagination faint before compositing that imagery with either sight imagery or a dark background, as stated in the previous paragraph.

For a typical person who tries LSD and gets the visual results typically described for that drug, the full power of one’s visual imagination is exposed, without its produced imagery being made faint before one consciously sees it. One realizes that one’s mind has hidden abilities one didn’t know were there, and this realization is personally enlightening. Note that I have no experience with such drugs myself, because they have been illegal in the USA for my entire adult life. However, if it were legal and one could get the drug pure and untainted, then I would probably try it. Hopefully a better society in the future will not criminalize a non-addictive drug that allows a person to gain some insight into oneself.

Sections 9.6 and 9.7 go into detail regarding the fact that the human mind has mental abilities that go far beyond what most of us consciously experience as adults during our physically embodied human life. Also, with regard to one’s allocation plan, which is discussed in those two sections, note that the degree to which the imagery output from one’s visual imagination is made faint, before being composited with other imagery or background and then sent to one’s visual field to be consciously seen, does not affect or change in any way the allocation of awareness-particle input channels for one’s visual field. The final imagery that is sent to one’s visual field can be anything, constructed by any processing means (including such methods as making an image faint, and compositing one image onto another image or background), without affecting or changing in any way that allocation. However, the size of each final image sent to the awareness is constrained to the current size of one’s visual field. Also, for any pixel of a final image sent to the awareness, the brightness range and range of colors of that pixel are constrained to what the sent-to pixel in one’s visual field can show to one’s awareness. Finally, because the allocation of awareness-particle input channels for one’s visual field is not affected by the degree to which the imagery from one’s visual imagination is made faint, and given that this degree can differ from person to person, it follows that this is a potential mental difference between two persons without it being the result of an allocation-plan difference between them.

Regarding one’s visual imagination and the degree to which the imagery from one’s visual imagination is made faint, I expect that for a typical person during his time in the afterlife (section 6.3), the imagery from his visual imagination is made substantially less faint than it was during his adult human life. When we are in our physical human bodies, our sight is substantially more important than our visual imagination, because our physical bodies have constant needs and are easily damaged, and one’s sight is very important for satisfying those needs and guarding one’s physical body against damage. In the afterlife, there is no physical body with its needs and its potential for being damaged. During the bion-body stage of the afterlife, one has a bion-body, but it has no needs and cannot be damaged. During the lucid-dream stage of the afterlife, one is just one’s awareness/mind (defined in chapter 5), without a body. In either case, bion-body stage or lucid-dream stage, the imagery from one’s visual imagination during the afterlife can be made substantially less faint without increasing the risk to oneself. And, during the afterlife, perhaps one can consciously switch between just seeing the output of one’s visual imagination (without its produced imagery being made faint), and just seeing one’s external environment (there are three different ways to see one’s external environment during the afterlife—vision of physical matter as illuminated by physical light, vision of bions, and vision of d-common atoms—all detailed elsewhere in this book).


3.2 The Cerebral Cortex

There is ample evidence that the cerebrum’s thin, gray covering layer, called the cortex, is the major site of human intelligence. More specifically, and with regard to this book, the cortex is, in effect, the interface between the mind and the physical body. Specific cell-controlling bions occupying neurons in the sense-handling parts of the cortex, such as the visual cortex, send sensory-data messages to the mind. And, to get the muscle movements that the mind wants, the mind sends messages to specific cell-controlling bions occupying neurons in the motor cortex.

Beneath the cortex is the bulk of the cerebrum. This is the white matter whose white appearance is caused by the presence of fatty sheaths protecting nerve-cell fibers—much like insulation on electrical wire.

The white matter is primarily a space thru which an abundance of nerve pathways, called tracts, pass. Hundreds of millions of neurons are bundled into different tracts, just as wires are sometimes bundled into larger cables. Tracts are often composed of long axons that stretch the entire length covered by the tract.

As an example of a tract, consider the optic nerve, which leaves the back of the eye as a bundle of about a million axons. The supporting cell bodies of these axons are buried in the retina of the eye. The optic tract passes into the base of a thalamus, which is primarily a relay station for incoming sensory signals. There, a new set of neurons—one outgoing neuron for each incoming neuron—forms a second optic tract, called the optic radiation. This optic radiation connects from the base of the thalamus to a wide area of cerebral cortex in the lower back of the brain.

There are three main categories of white-matter tracts, corresponding to the parts of the brain that the tracts connect. Projection tracts connect areas of cortex with the brainstem and the thalami. Association tracts connect, on the same cerebral hemisphere, one area of cortex with a different area of cortex. Commissural tracts connect, on opposite cerebral hemispheres, one area of cortex with a different area of cortex. Altogether, there are many thousands of different tracts. It seems that all tracts in the white matter have either their origin, destination, or both, in the cortex.

The detailed structure of the cortex shows general uniformity across its surface. In any square millimeter of cortex, there are about 100,000 neurons. This gives a total count of about fifteen billion neurons for the entire human cortex. To contain this many neurons in the cortex, the typical cortex neuron is very small, and does not have a long axon. Many neurons whose cell bodies are in the cortex do have long axons, but these axons pass into the white matter as fibers in tracts. Although fairly uniform across its surface, the cortex is not uniform thru its thickness. Instead, when seen under a microscope, there are six distinct layers. The main visible difference between these layers is the shape and density of the neurons in each layer.
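
Taken together, these two figures imply (my arithmetic, shown only to connect them) a total cortical surface area of roughly 15,000,000,000 ÷ 100,000 = 150,000 square millimeters, which is about 0.15 square meters; the cortex fits this large an area into the skull by being heavily folded.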

There is only very limited sideways communication thru the cortex. When a signal enters the cortex thru an axon, the signal is largely confined to an imaginary column of no more than a millimeter across. Different areas of widely spaced cortex do communicate with each other, but by means of tracts passing thru the white matter.

The primary motor cortex is one example of cortex function. This cortex area is in the shape of a strip that wraps over the middle of the cerebrum. As the name suggests, the primary motor cortex plays a major part in voluntary movement. This cortex area is a map of the body, and the map was determined by neurologists touching electrodes to different points on the cortex surface, and observing which muscles contracted. This map represents the parts of the body in the order that they occur on the body. In other words, any two adjacent parts of the body are motor-controlled by adjacent areas of primary motor cortex. However, the map does not draw a good picture of the body, because the body parts that are under fine control get more cortex. The hand, for example, gets about as much cortex area as the whole leg and foot. This is similar to the primary visual cortex, in which more cortex is devoted to the center-of-view than to peripheral vision.

There are many tracts carrying signals into the primary motor cortex, including tracts coming from other cortex areas, sensory tracts from the thalami, and tracts thru the thalami that originated in other parts of the brain. The incoming tracts are spread across the motor cortex strip, and the axons of those tracts terminate in cortex layers 1, 2, 3, and 4. For example, sensory-signal axons terminate primarily in layer 4 of the motor cortex. Similarly, the optic-radiation axons terminate primarily in layer 4 of the primary visual cortex.

Regarding the outgoing signals of the primary motor cortex, the giant Betz cells are big neurons with thick myelinated axons, which pass down thru the brainstem into the spinal cord. Muscles are activated by signals passed thru these Betz cells. The cell bodies of the Betz cells lie in layer 5 of the primary motor cortex. Besides the Betz cells, there are smaller outgoing axons that originate in layers 5 and 6. These outgoing axons, in tracts, connect to other areas of cortex, and elsewhere.

Besides the primary motor cortex and the primary visual cortex, there are many other areas of cortex for which definite functions are known. This knowledge of the functional areas of the cortex did not come from studying the actual structure of the cortex, but instead from two other methods: by electrically stimulating different points on the cortex and observing the results, and by observing individuals who have specific cortex damage.

The study of cortex damage has been the best source of knowledge about the functional areas of the cortex. Among the possible causes of localized cortex damage are head wounds, strokes, and brain tumors. The basic picture that emerges from studies of cortex damage is that the cortex, in terms of how it relates to the mind, is divided into many different functional parts, and these functional parts exist at different areas of cortex.

Clustered around the primary visual cortex, and associated with it, are other cortex areas, known as association cortex. In general, association cortex borders each primary cortex area. The primary area receives the sense-signals first, and from the primary area the same sense-signals are transmitted thru tracts to the association areas.

Each association area attacks a specific part of the total problem. Thus, an association area is a specialist. For example, for the primary visual cortex there is a specific association area for the recognition of faces. If this area is destroyed, the person suffering this loss can still see and recognize other objects, but cannot recognize a face.

Some other examples of cortex areas are Wernicke’s area, Broca’s area, and the prefrontal area. When Wernicke’s area is destroyed, there is a general loss of language comprehension. The person suffering this loss can no longer make any sense of what is read or heard, and any attempt to speak produces gibberish. Broca’s area is an association area of the primary motor cortex. When Broca’s area is destroyed, the person suffering this loss can no longer speak, producing only noises. The prefrontal area is beneath the forehead. When this area is destroyed, there is a general loss of foresight, concentration, and the ability to form and carry out plans of action.

3.3 Mental Mechanisms and Computers

There is a great deal of wiring in the human brain, done by the neurons. But what is missing from the preceding description of brain structure is any hint of the mental mechanisms that accomplish human intelligence. However, regardless of how the computers are composed, human intelligence is most likely accomplished by computers, for the following three reasons:

  1. The existence of human memory implies computers, because memory is a major component of any computer. In contrast, hardwired control mechanisms—a term used here to represent any non-computer solution—typically work without memory.

  2. People have learning ability—even single-cell animals show learning ability—which implies the flexibility of computers using data saved in memory to guide future actions. In contrast, hardwired control mechanisms are almost by definition incapable of learning, because learning implies restructuring the hardwired, i.e., fixed, design.

  3. A hardwired solution has hardware redundancy when compared to a functionally equivalent computers-and-programs solution. The redundancy happens because a hardwired mechanism duplicates, at each occurrence of an algorithmic instruction, the relevant hardware needed to execute that instruction. In effect, a hardwired solution trades the low-cost redundancy of stored program instructions for the high-cost redundancy of hardware. Thus, total resource requirements are much greater if mental processes are hardwired instead of computerized (see the sketch after this list).
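
The trade-off described in reason 3 can be sketched in code. In the following Python toy (entirely my own illustration), a program is just stored data, a list of instructions, and a single interpreter loop supplies the "hardware" for every instruction; a hardwired equivalent would instead need a dedicated piece of mechanism at each occurrence of each instruction:

    # A stored-program solution: the algorithm is cheap, redundant DATA
    # (the instruction list), executed by ONE shared piece of "hardware"
    # (the interpreter loop). A hardwired solution would duplicate the
    # adding/multiplying mechanism at every step of the algorithm.

    def run(program, x):
        for op, arg in program:        # the one shared instruction processor
            if op == 'add':
                x += arg
            elif op == 'mul':
                x *= arg
        return x

    # The same 'add' hardware is reused at each occurrence of 'add':
    program = [('add', 3), ('mul', 2), ('add', 1), ('add', 10)]
    print(run(program, 5))   # (5+3)*2 + 1 + 10 = 27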

3.4 Composition of the Computers

Human intelligence can be decomposed into functional parts, which in turn can be decomposed into programs that use various algorithms. In general, for the purpose of guiding a computer, each algorithm must exist in a form where each elementary action of the algorithm corresponds with an elementary action of the computer. The elementary actions of a computer are known collectively as the instruction set of that computer.

Regarding the composition of the computers responsible for human intelligence, if one tries to hypothesize a chemical computer made of organic molecules suspended in a watery gel, then an immediate difficulty is how to make this computer’s instruction set powerful enough to do the actions of the many different algorithms used by mental processes. For example, how does a program add two numbers by catalyzing some reaction with a protein? If one assumes that, instead of an instruction set similar in power to those found in modern computers, the instruction set of the organic computer is much less powerful—that a refolding of some protein, for example, is an instruction—then one has merely transferred the complexity of the instruction set to the algorithms: instead of, for example, a single add-two-numbers instruction, an algorithm would need some large number of less-powerful instructions to accomplish the same thing.
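
The point about transferring complexity from the instruction set to the algorithms can be made concrete. In this Python sketch (my own illustration), the same task of adding 1234 and 5678 takes one instruction on a machine with an add-two-numbers instruction, but thousands of instruction executions on a machine whose only arithmetic instruction is increment-by-one:

    # Rich instruction set: one 'add' instruction does the whole job.
    def add_rich(a, b):
        return a + b            # a single add-two-numbers instruction

    # Impoverished instruction set: 'increment' is the only arithmetic
    # instruction, so adding b requires executing it b times.
    def add_poor(a, b):
        steps = 0
        for _ in range(b):
            a += 1              # one increment instruction per step
            steps += 1
        return a, steps

    print(add_rich(1234, 5678))   # 6912, using one instruction
    print(add_poor(1234, 5678))   # (6912, 5678): 5678 instructions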

For those who apply the mathematics-only reality model, confining themselves to a chemical explanation of mental processes, there has been little progress. As with the control mechanisms for cell movement, cell division, and multicellular development, all considered in chapter 2, there is the same problem: no one knows how to build computer-like control mechanisms satisfying cellular conditions. And the required computer component, an instruction processor, has not been observed in cells.

Alternatively, the computing-element reality model offers intelligent particles. Instead of one’s intelligence being the result of chemistry, one’s intelligence is instead the result of a group of bions (collectively one’s mind), and the programming (learned programs) and stored data in that mind.

3.5 Memory

People have a rich variety of memories, such as memories of sights, sounds, and factual data.[18] The whole question of memory has been frustrating for those who have sought its presence in physical substance. During much of the 20th century, there was a determined search, by many different researchers, for memory in physical substance. However, these researchers were unable to localize memory in any physical substance.

An issue related to memory is the frequently heard claim that neural networks are the mechanism responsible for human intelligence—in spite of their usefulness being limited to pattern recognition. However, and regardless of usefulness, without both a neural-network algorithm and input-data preprocessing—requiring memory and computational ability—neural networks do nothing. Thus, before invoking physical neural networks to explain any part of human intelligence, memory and computational ability must first exist as part of the physical substance of the brain—which does not appear to be the case.

In the latter part of the 20th century, the most common explanation of memory was that it is stored, in effect, by relative differences between individual synapses. Although this explanation has the advantage of not requiring any memory molecules—which have not been found—there must still be a mechanism that records and retrieves memories from this alleged storage medium. This requirement of a storage and retrieval mechanism raises many questions. For example:

  1. How does a sequence of single-bit signals along an axon—interpreting, for example, the sodium-ion wave moving along an axon and into the synapses as a 1, and its absence as a 0—become meaningfully encoded into the synapses at the end of that axon?

  2. If memory is encoded into the synapses, then why is the encoded memory not recalled every time the associated axon transmits a signal; or, conversely, why is a memory not encoded every time the associated axon transmits a signal?

  3. How do differences between a neuron’s synapses become a meaningful sequence of single-bit signals along those neurons whose dendrites adjoin those synapses?

The above questions have no answer. Thus, the explanation that memory is stored by relative differences between individual synapses pushes the problem of memory elsewhere, making it worse in the process, because synapses—based on their physical structure—are specialized for neurotransmitter release, not memory storage and retrieval.

Alternatively, given bions, and given that each bion has its own memory (see the definition of a bion’s memory, in section 1.6), one’s memories are located somewhere in the collective memory of those bions that collectively form one’s mind.


footnotes

[18] The conscious memories of sights, sounds, and factual data, are high-level representations of memory data that have already undergone extensive processing into the forms that awareness receives.


3.6 Learned Programs

Regarding the residence of the programs of the mind, and with the aim of minimizing the required complexity of the computing-element program, assume that the computing-element program provides various learning algorithms—such as learning by trial and error, learning by analogy, and learning by copying—which, in effect, allow intelligent particles to program themselves. Specifically, with this assumption, each program in one’s mind—such as the program that recognizes faces—exists in the memory of one or more of those bions that collectively form one’s mind.

For reasons of efficiency, assume that the overall learning mechanism provided by the computing-element program includes a high-level programming language in which learned programs are written. In effect, the computing-element program runs (executes) a learned program by reading it and doing what it says. In this programming language for learned programs, besides the usual control statements found in any programming language (such as the if test) and the usual jump statements (such as return and go to) and the usual math operators (such as add, subtract, multiply, and divide), there are also many statements that are, in effect, calls of routines that exist in the computing-element program (the computing-element program is like an operating system with many routines that other programs, in this case learned programs, can call).
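
As a way to picture this arrangement, here is a minimal sketch, in Python, of an interpreter for a toy learned-program language. Everything here (the statement names, the routine table) is my own illustrative assumption; the sketch only shows the general shape described above: ordinary control and math statements, plus statements that are calls into routines supplied by the computing-element program, much as programs call an operating system:

    # A toy interpreter for "learned programs". A learned program is a
    # list of statements; the interpreter reads it and does what it says.
    # 'call' statements invoke routines supplied by the computing-element
    # program, here modeled as a simple routine table.

    routines = {                      # stand-ins for computing-element routines
        'send_message': lambda env: print('sending:', env['msg']),
    }

    def run_learned_program(program, env):
        pc = 0                                    # program counter
        while pc < len(program):
            stmt = program[pc]
            op = stmt[0]
            if op == 'set':                       # ('set', name, value)
                env[stmt[1]] = stmt[2]
            elif op == 'add':                     # math: ('add', name, amount)
                env[stmt[1]] += stmt[2]
            elif op == 'if_goto':                 # jump if variable is nonzero
                if env[stmt[1]]:
                    pc = stmt[2]
                    continue
            elif op == 'call':                    # call a computing-element routine
                routines[stmt[1]](env)
            elif op == 'return':
                break
            pc += 1

    run_learned_program(
        [('set', 'msg', 'hello'), ('call', 'send_message'), ('return',)],
        {})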

Once a specific learned program is established and in use by one or more bions, other bions nearby can potentially copy that learned program from any of those bions that already have it, and then, over time, potentially evolve that copied learned program further by using the various learning algorithms.[19],[20],[21]

Regarding learned programs within moving particles, motion thru space is the rule for particles. In general, as an intelligent particle moves thru space, each successive computing element that holds that particle continues running whichever of the particle’s learned programs are supposed to be running, continuing from the final execution state those learned programs were in when the previous computing element handed the particle over.[22]
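
A minimal sketch of this handoff, under my own simplifying assumptions: if a running learned program’s execution state is just data (a program counter plus its variables), then each computing element can run the program briefly and hand the state, unchanged, to the next computing element, which simply continues from where the previous one left off:

    # Each "computing element" runs the particle's program for a few
    # steps, then hands the particle, including the program's execution
    # state, to the next computing element, which resumes seamlessly.

    def step(state):
        state['counter'] += 1         # stand-in for one instruction executed

    particle = {'counter': 0}         # the execution state travels with the particle

    path = ['element_A', 'element_B', 'element_C']   # successive computing elements
    for element in path:
        for _ in range(3):            # this element runs the program briefly...
            step(particle)
        # ...then gives the particle (state and all) to the next element
        print(element, 'hands off at counter =', particle['counter'])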


footnotes

[19] In the discussion of rebirth in section 6.3, regarding transitioning from one animal type to a different animal type, the specific example given is that of a chimpanzee in Africa that in its next incarnation becomes an African human. The difference between a chimpanzee mind and a human mind is assumed to be large enough to require a complete replacement of the programming of its chimpanzee mind with the programming of a human mind, most likely copied from the mind of its human mother.

For a typical new human in his first human life, after the complete-replacement copying that gave him a human mind, my guess is that as long as that person keeps reincarnating as a human, any copying of mental programming from another mind will either not happen at all, or be very infrequent. Instead, it will be dependent on that person’s awareness and what that awareness wants, along with the non-copying learning algorithms of the computing-element program, to evolve improvements and/or changes, if any, to the programming of that person’s human mind. Thus, over the course of many human lifetimes, including all the in-between time in the afterlife, at least some customization of the programming of one’s human mind is, I think, likely. The area of customization that I think is most likely is, in effect, choosing the kind of allocation plan (section 9.6) that one prefers as an adult in one’s human lives. For example, regarding one’s human lives, does one want to emphasize being more average, more intelligent, or more athletic? There are advantages and disadvantages to each of these.

As discussed later in this book, the programming of the human mind includes, in effect, both genders in terms of their psychology, and, in general, one’s current allocation plan (section 9.6) determines which parts of one’s human mind manifest, and how strongly they manifest, to one’s awareness in one’s current human life, including which emotions can manifest and how intensely they can manifest.

[20] In effect, learned programs undergo evolution by natural selection: the environment of a learned program is, at one end, the input datasets that the learned program processes, and, at the other end, the positive or negative feedback, if any, from whatever uses the output of that learned program, being either one or more other learned programs in the same or other bions, and/or the soliton described later in this book.

It is its environment, in effect, that determines the rate of evolutionary change in a learned program. The changes themselves are made by the aforementioned learning algorithms in the computing-element program. Presumably these learning algorithms, when they run, will use whatever recent feedback there was, if any, from the user(s) of the output of that learned program, to both control the rate of change, and to guide both the type and location of the changes made to that learned program. Within these learning algorithms, negative feedback from a soliton probably carries the most weight in causing these learning algorithms to make changes to a learned program.
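
As an illustration only (nothing here is claimed to be the actual learning algorithms), here is one simple way, in Python, that recent feedback could control a learned program’s rate of change, with negative feedback from a soliton weighted most heavily:

    # A toy rate-of-change rule: recent feedback accumulates into a
    # "pressure to change", and negative feedback from the soliton is
    # weighted most heavily. All weights are illustrative assumptions.

    WEIGHTS = {
        ('soliton', 'negative'): 5.0,   # counts the most
        ('soliton', 'positive'): 2.0,
        ('bion',    'negative'): 2.0,
        ('bion',    'positive'): 1.0,
    }

    def change_pressure(feedback_events):
        """feedback_events: list of (source, sign) pairs."""
        pressure = 0.0
        for source, sign in feedback_events:
            if sign == 'negative':
                pressure += WEIGHTS[(source, sign)]      # push toward change
            else:
                pressure -= WEIGHTS[(source, sign)]      # push toward stability
        return max(pressure, 0.0)

    # Mostly positive feedback: no pressure to change the program.
    print(change_pressure([('bion', 'positive'), ('soliton', 'positive')]))
    # Soliton disapproval dominates: high pressure to change.
    print(change_pressure([('soliton', 'negative'), ('bion', 'positive')]))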

Note that evolutionary change can include simply replacing the currently used version of a learned program, by copying a different version of that learned program, if it is available, from those bions that already have it. The sharing of learned programs among bions appears to be the rule—and, in effect, cooperative evolution of a learned program is likely.

[21] An example of a learned program that is widely shared is the learned program (or programs) for vision.

Although one may imagine that vision is a simple process of merely seeing the image that falls on the eye, that is not the case at all (note that the fact that we all see things alike, because we are all using the same program(s), adds to this illusion of simplicity). Instead, the process of human vision—converting what falls on our eyes into what we consciously see in our minds—is very complex, with many rules and algorithms (Hoffman, Donald. Visual Intelligence. W. W. Norton, New York, 1998):

Perhaps the most surprising insight that has emerged from vision research is this: Vision is not merely a matter of passive perception, it is an intelligent process of active construction. What you see is, invariably, what your visual intelligence constructs. [Ibid., p. xii]

The fundamental problem of vision: The image at the eye has countless possible interpretations. [Ibid., p. 13]

The fundamental problem of seeing depth: The image at the eye has two dimensions; therefore it has countless interpretations in three dimensions. [Ibid., p. 23]

About our senses, it isn’t just what we see that is a construction of our minds. Instead, as Hoffman says:

I don’t want to claim only that you construct what you see. I want to claim that, at a minimum, you also construct all that you hear, smell, taste, and feel. In short, I want to claim that all your sensations and perceptions are your constructions.

And the biggest impediment to buying that claim comes, I think, from touch. Most of us believe that touch gives us direct contact with unconstructed reality. [Ibid., p. 176]

To prove this idea that our sense perceptions are mental constructions, one only needs to point at experiments that show a person experiencing some sense perception that has no basis in physical reality. For vision, there are many different optical illusions that cause one to see something that is not in the physical image. For touch, Hoffman cites experimental results regarding an effect that was “discovered by accident in the early 1970s by Frank Geldard and Carl Sherrick” (Ibid., p. 180). These experiments consist of making during a short time interval a small number of taps at different points on a test subject’s forearm. Depending on the location and timing of the different taps, the subject will feel one or more interpolated taps at locations where no physical taps were made. For example, Hoffman describes an experiment that delivers two quick physical taps at one point, quickly followed by one physical tap at a second point, and the subject reports feeling the three taps but with the second tap lying between those two points instead of being at the first point where the actual second physical tap was made (Ibid., p. 181). As Hoffman notes, this means that the entire perception of the three taps was constructed by the mind after the three physical taps had happened, because the interpolated tap point is dependent on knowing the two end-points for the interpolation, and the second end-point is only known when the third and final physical tap happens.
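
The interpolation Hoffman describes is computationally trivial, which fits the idea that the mind constructs the percept after the fact. A minimal sketch (the coordinates and tap count are my own illustrative choices): given the locations of the first and last physical taps, the felt location of each intermediate tap can be computed by linear interpolation along the forearm:

    # The "cutaneous rabbit": given the first and last physical tap
    # locations on the forearm, the mind places intermediate felt taps
    # at interpolated positions. Positions here are centimeters along
    # the forearm; all numbers are illustrative.

    def felt_tap_positions(first_pos, last_pos, num_taps):
        # num_taps includes the first and last taps
        step = (last_pos - first_pos) / (num_taps - 1)
        return [first_pos + i * step for i in range(num_taps)]

    # Two physical taps at 5 cm, then one at 20 cm: three felt taps,
    # with the middle tap felt BETWEEN the two points, not at 5 cm.
    print(felt_tap_positions(5.0, 20.0, 3))   # [5.0, 12.5, 20.0]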

[22] It is reasonable to assume that each intelligent particle has a small mass—i.e., its mass attribute has a positive value—making an intelligent particle subject to both gravity and inertia. This assumption is consistent with how the intelligent particles currently associated with the Earth, including those cell-controlling bions that currently occupy and control organic cells, stay with the Earth as the Earth moves thru space at high speed due to a combination of gravitational and inertial effects including the rotation of the Earth, the Earth’s revolution around the Sun, and the revolution of the solar system around the galactic core.


3.7 The Mind

Each neuron in the brain is a cell, and is therefore occupied by a bion that has the learned programs for controlling cells and making them alive. Call the bions that occupy the nerve cells of the brain, brain bions. And call the bions that collectively form a mind, mind bions (this group of bions, taken as a whole, has all the learned programs for the mental abilities of that mind).

To explain one’s intelligence, one could say that taken as a whole, one’s brain bions, besides having cell-controlling learned programs, also have all the programming (the learned programs) for one’s mental abilities. To justify this explanation, one could point out that brain bions are in the perfect location to read and process the sodium-ion signals moving along their neurons from sensory sources, and brain bions are also in the perfect location to start sodium-ion signals that transmit thru nerves to muscles, activating those muscles and causing movement. However, among the reasons to reject this explanation that mind bions are also brain bions are limited computing resources and the benefits of specialization:

In general, each bion has both a finite processing speed and a finite memory, and specialization, in general, has advantages. Given these facts, instead of combining both cell-controlling abilities (learned programs for controlling a cell) and mental abilities (learned programs for mental abilities) into the same group of bions, it would be more efficient, in both evolutionary and operational terms, and less subject to conflicts (such as usage conflicts over the limited computing resources available to two groups of learned programs that are extremely different in purpose: the programming for the mental abilities of the human mind, in contrast to the programming for controlling cells), if the learned programs for our various mental abilities occupy a different, separate group of bions than our brain bions. Assuming this separation, there must be interface programming, existing in one or more learned programs on the brain side, and in one or more learned programs on the mind side, that interfaces one’s mind with one’s brain.

The interface programming would be the means by which specific brain bions can send data from sensory sources, such as one’s eyes and ears, to one’s mind (an example would be specific brain bions in the visual cortex sending vision data to one’s mind, and one or more specific mind bions receiving that sent vision data, which ultimately will be processed in one’s mind into what one will consciously see). And likewise, interface programming would be the means by which one’s mind can send commands to specific brain bions in the motor cortex; after those brain bions receive those commands and cause their neurons to signal, the ultimate result is the wanted muscle movements.
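
The interface programming described above amounts to message passing between two groups of bions. Here is a minimal sketch, in Python, under my own naming assumptions, of the two directions of traffic: a brain bion in the visual cortex sending vision data to the mind, and the mind sending a movement command to a brain bion in the motor cortex:

    # Toy message-passing interface between brain bions and mind bions.
    # The message names and fields are illustrative assumptions.

    from collections import deque

    to_mind = deque()     # messages from brain bions to the mind
    to_brain = deque()    # messages from the mind to brain bions

    # Brain side: a bion in the visual cortex sends vision data.
    to_mind.append({'from': 'visual-cortex bion', 'type': 'vision-data',
                    'payload': [0.2, 0.3, 0.25]})

    # Mind side: receive and process sensory data, then send a command.
    msg = to_mind.popleft()
    print('mind received', msg['type'], 'from', msg['from'])
    to_brain.append({'to': 'motor-cortex bion', 'type': 'move-command',
                     'payload': 'contract finger flexors'})

    # Brain side: a motor-cortex bion receives the command and would then
    # make its neuron signal, ultimately contracting the muscle.
    cmd = to_brain.popleft()
    print(cmd['to'], 'received', cmd['type'], ':', cmd['payload'])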