

A-Space, E-Space, Gaia, and Soul

written by Kurt R. Johmann

Copyright © 1988 Kurt R. Johmann



BEGIN of note written by Kurt Johmann in May 2021:

A-Space, E-Space, Gaia, and Soul is the first book I wrote, and I’ve already written about that first book in my last book A Soliton and its owned Bions (Awareness and Mind) (my last book is also known as the 12th edition of The Computer Inside You). More specifically, what I wrote there about A-Space, E-Space, Gaia, and Soul is in footnote 165, and I now quote from it:

begin quote

All of the editions of The Computer Inside You were published, unlike my first attempt at such a book, titled A-Space, E-Space, Gaia, and Soul, which I wrote beginning in the Fall of 1987 at age 31, and completed in March 1988 at age 32. I never published that book, but I did register a copyright for it (the FORM TX that I have, says that the Effective Date of Registration is April 6, 1988).

Much of what I had written in A-Space, E-Space, Gaia, and Soul—regarding meditation, the syllable Om, and the Upanishads, and out-of-body projections including Oliver Fox and Sylvan Muldoon, and the kundalini injury, and the soul and soul projections—I copied into the first edition of The Computer Inside You (in the first edition of The Computer Inside You, I renamed the soul and soul projections as the soliton and solitonic projections). Most memorable to me about my A-Space, E-Space, Gaia, and Soul book, is that during my first semester in graduate school in the Fall of 1988, about six months after I had finished writing A-Space, E-Space, Gaia, and Soul, I realized that I had made a major error in that book, because I imagined, in effect, two different kinds of computing elements with their own programming, that I named A-Space and E-Space, instead of the more simple and efficient single kind of computing element with its computing-element program. Quoting from my unpublished book:

The deductive leap is to say that there are two types of space, and they are very finely nested with each other. The pieces of space which support the physical world will be called A-space. The pieces of space which support the hidden world will be called E-space. This is the theory of A-space and E-space. In a large volume of space, such as a cubic meter, we could say that half of the cubic meter is A-space and the other half is E-space. We would also say the two spaces are very finely nested together. In fact, the two spaces are probably nested together in a very regular and orderly pattern.

Looking back at that first book and the many editions of this book, writing a book that puts computation at the root of reality has been a long, drawn-out process for me, with many mistakes made by me along the way, beginning with A-Space, E-Space, Gaia, and Soul, and continuing with the different editions of The Computer Inside You.

end quote

My first computer was an IBM PC, which I bought near the end of 1984, and I typed my first book, A-Space, E-Space, Gaia, and Soul, into that computer. And in 1988, after I had finished writing that book, I filled out a FORM TX for it (a printed FORM TX was free for the asking from the USA’s Copyright Office), and mailed that filled-out FORM TX back to the Copyright Office along with a printed copy of my book. Then, after what I recall was a few weeks, the Copyright Office mailed back to me the same filled-out FORM TX I had sent them, but now that FORM TX had some new info on it (including, among other things, the Effective Date of Registration of 4/6/88, as stated in the above quote), added by the Copyright Office in places on the FORM TX that are, in effect, reserved for office use.

Regarding that printed copy of my book that I sent to the Copyright Office in 1988, I actually made several printed copies of A-Space, E-Space, Gaia, and Soul back in 1988, done as follows: From my IBM PC I printed the digital file that I had for my book, onto whatever printer I was using back then in 1988, and the result was a stack of loose sheets of 8½ × 11-inch paper (a USA-standard paper size, aka Letter size); I then used a USA-standard three-hole punch for Letter-size paper, on 3 or 4 of those paper sheets at a time, until all those paper sheets had been three-hole punched; then I lined-up all those paper sheets with regard to the three holes in each sheet, so that I could use the type of paper fastener I wanted to use to bind that stack of paper sheets together; then I bound together that lined-up stack of paper sheets, using paper fasteners I had already bought, after I knew how thick that paper stack was going to be.

Regarding the two previous paragraphs, the reason I was able to give that level of detail, especially with regard to FORM TX, is not that my memory is that good, because it isn’t (those recounted events of 1988 were about 33 years ago). Instead, it’s because I kept that FORM TX, and I also kept one of those printed copies of A-Space, E-Space, Gaia, and Soul that I made in 1988.

In all the editions of The Computer Inside You, I’ve given my name using just my first and last name, “Kurt Johmann”, without including my middle initial which is the letter R. So then, why, for this book, A-Space, E-Space, Gaia, and Soul, am I including in my name my middle initial? Specifically, see the two lines before this May 2021 note, which say: “written by Kurt R. Johmann” and “Copyright © 1988 Kurt R. Johmann”. So, why the middle initial? Because the cover sheet on my kept printed copy of A-Space, E-Space, Gaia, and Soul that I made in 1988 literally says, among other things, “A Nonfiction Book by Kurt R. Johmann” and “Copyright 1988 Kurt R. Johmann”.

END of note written by Kurt Johmann in May 2021.

Contents

Introduction
Chapter 1: A-space and E-space
Chapter 2: The Brain
Chapter 3: Evolution
Chapter 4: Development
Chapter 5: Gaia
Chapter 6: Hinduism and E-space
Chapter 7: Projections into E-space
Chapter 8: The Soul
Bibliography

Introduction

There are several viewpoints from which one may understand our human experience and life in general. One such viewpoint is that there exists only the world of the physical: All that we are, is a collection of physical atoms subject to physical laws, and nothing more. Many will recognize this view as that of a materialist. An inescapable conclusion of a materialist is that death is something final and all-embracing. The dissolution of the atom-composed body means the dissolution of the personality, and, of course, consciousness. All is lost because the man and the body are one-and-the-same thing. Such mysterious things as self-awareness are just a strange consequence of atomic, molecular, or neural arrangements in the brain.

It is easy to see that materialism would be the dominant view when we are surrounded by the technological fruits of science. Matter, in the form of machines, serves us more than living things do. We drive cars instead of riding horses, we eat packaged food kept in refrigerators instead of buying fresh food in some open-air market, and so on. While they last, the fruits of science are preferable to the old ways that relied more heavily on living things.

What this book will try to do, is to show an alternative to the materialist view. The approach taken will be scientific, and this book will read more like a science book than a religious, occult, or philosophy book. The reason for such an approach is that science, for the most part, does an excellent job of describing the physical universe. Its correctness has been proved a thousand times over. Only when it comes to the question of planetary life, does the strict materialism of science encounter difficulties. We will focus on these difficulties and offer a full, non-materialist theory to account for them.

There is at present some turmoil in the house of science, but not much. Subject areas such as physics, chemistry, engineering, astronomy, astrophysics, geology, meteorology, and in general, all the non-life sciences, are tranquil and self-satisfied. These subjects have no need of a spirit world or even a God. There are no experimental results in any of the non-life sciences that offer even the slightest opportunity for recourse to supernatural explanations. Quite the contrary, old notions of supernatural control of non-life phenomena have been completely discredited and proven false. Mankind is certainly the better for this knowledge. It is better to believe that a thunderbolt, or an earthquake, is a natural phenomenon which has nothing to do with oneself, than to believe that such a thing is a supernatural sign or warning.

The materialist approach in science has done wonders. It has conquered everywhere, almost. It has proven that there is indeed a physical universe that exists in its own right and doesn’t need, or care about, our existence, or our notions of a supernatural universe. With so much success under their belts, it’s only natural that the materialists will demand that this self-existing, physical universe account fully for our human selves as well. At worst, they can only be partly wrong.

So today, materialist science is rather calm, confident, and self-satisfied. Only a few of the life sciences are in trouble, and we will examine these trouble-spots later. By contrast to the calm of science, religion in the industrial countries is in a definite crisis. The major Western religion, Christianity, has lost all credibility. It has been officially banned in the Soviet Union, and elsewhere the intellectual classes have condemned it. Its chief document, the Bible, is an archaic book written by men who embraced the supernatural to the exclusion of the physical. The Bible is full of falsehoods. The first page of the Bible claims that the whole physical universe was created in six days by some supernatural superbeing, known as God. At the other end of the Bible we are treated to such wild claims as a man-who-is-God turning water into wine, and multiplying bread and fish out of thin air. Such “miracles” violate energy-and-mass conservation laws and cannot possibly be true. However, the poor, smitten believers will defend the Bible by claiming it must be true because it is the word-of-God. Instead of the word-of-God, the Bible presents us with the self-interested, lying words of men who concocted tall-tales in a pre-scientific age. The Bible is so porous and full of holes that a potential convert must either turn off his reason and accept it blindly on an emotional basis, or turn elsewhere for spiritual satisfaction.

Most people have neither the time nor inclination to sit down and develop their own world view, and even if they did, it would probably be wrong. Many people feel a need to explain themselves as something spiritual, and yet their only recourse is to accept some ready-made system from the society in which they live. Science in its present form tells people they have no spiritual part. Some people can live with this. Christianity tells people they do have a spiritual part, but at the same time burdens them with so much archaic rubbish that many can’t accept it. Those who can’t accept it are left to seek alternatives. Eastern religions have scored gains. So have occult writings, New Age mish-mash, and astrology. People want answers and they want security of belief. However, none of the spiritual systems currently available seem adequate. Their fundamental flaw is that they don’t mesh well with scientific knowledge. This should not surprise us, as most spiritual systems currently offered were developed in pre-scientific times. What is needed is an explanation of the spiritual in the light of scientific knowledge and technique. Actually, a great deal can be said about the spiritual thanks to the findings of science. An example has already been given regarding the water-into-wine miracle. Science and materialism, in essence, tell us that the spiritual world treads lightly when it interacts with the physical. And the reason for this is not so much deliberate restraint, but rather actual weakness, for science has shown us that the physical world is real and that it runs its own show. It doesn’t need the hand of an outsider to move it.

One may criticize materialism for its one-world view, but do not make the mistake of embracing the opposite extreme. When people believe that the physical world is illusion, and one’s own mind or spirit is all that counts, then the result is often abuse or neglect of one’s own physical body. Science has shown us that the physical body can be damaged, and that only physical methods, not mind-power, prayer, self-will, or whatever, can restore the body. For example, a deep cut will be much better served by stitches at the hand of a doctor, than by praying about it, or trying to will the wound to heal, or any such spiritual method. In the light of scientific realism, one can only pity the religious devotee who deliberately damages his own body, such as by whipping it, or starving it.

If spiritual extremism threatens the physical body, does it follow then that physical extremism threatens the spiritual mind? This is less certain because bodily damage is readily visible while mental damage often isn’t. So it is hard to say. One can only make the observation that the majority of mankind, including those in the industrial countries, reject physical extremism and want some measure of spiritual recognition.

Certainly, at present, a great deal more is known about the physical world than about the spiritual world. Our knowledge of the spiritual world is limited to our own personal, inner experiences, and to those experiences of other people as told to us or as recorded in books. Even so, a lot can be said about the spiritual world, as we shall see. However, as an example of how little most people who express spiritual belief actually know about it, let’s consider the question of God. Most Westerners who accept the spiritual, accept some sort of God, but what do these people actually know about their God? Haven’t we all heard that God is everywhere, all-powerful, all-knowing, all-this, and all-that? These are statements of ignorance. Where is God? If you don’t know, then the best answer is everywhere. How powerful is God? If you don’t know God’s limitations, then the best answer is all-powerful. Such an answer immediately stops the question dead in its tracks. The do-it-all, be-everywhere, know-it-all God, is a part of Christianity not because such a superbeing really exists, but because the priests of the religion don’t want questions they can’t answer. It is this sort of deception that has soured many people on the concept of a superbeing. And such an all-this-and-that God stands on very shaky ground in our scientific age, and much more so than in pre-scientific times. In pre-scientific times, the universe was conceived as being much smaller than it is today. Before Copernicus, the Earth was the center of the universe, the lights in the sky were just lights and not gigantic stars, and most people did not even realize that the Earth is as big as it actually is. With the notion of such a shrunken universe, an everywhere God would be more believable. But now we know that the universe is immense, and very old too. Because the Christian priests were ignorant, they had no choice but to let their God expand to fill this new space as it was discovered. However, in the face of such immensity of time and space, the old ploy of an all-this-and-that God has grown rather thin.

So far, we have been using the words “spiritual” and “God.” However, too many people equate these words with the erroneous conceptions of pre-scientific systems. Therefore, instead of “spiritual,” we shall use the word E-space (pronounced ee-space), and instead of “God,” we shall use the word Gaia (pronounced guy-ah).

E-space and its companion A-space are new terms given and defined in this book. However, Gaia is an old Greek word for Mother Earth. Scientist James Lovelock has recently popularized the word Gaia, and its use in this book therefore seems appropriate.

Chapter 1: A-space and E-space

We need to examine what is known about our universe so that we can see where a hidden, invisible world can fit in.

In the beginning, about fifteen billion years ago, there was the Big Bang. The Big Bang theory is well established by observed evidence. The most telling piece of evidence is the fact that the universe is expanding. This expansion was first observed by astronomer Edwin Hubble in the 1920s. The light from every galaxy was found to be red-shifted, which meant those galaxies were moving away from our own. If galaxies are moving away from each other as time goes forward, then it is not hard to imagine that if one were to run time backward, then the galaxies would move closer and closer together until at some point they would all be clumped together in one large mass. Additional evidence came in 1965 when Arno Penzias and Robert Wilson discovered a very homogeneous microwave radiation coming from space. No matter where they pointed their antenna at the sky, they found the same radiation at the same intensity. There are several reasons this radiation is considered a relic or left-over of the early days of the Big Bang. Firstly, there are no plausible radiators of this radiation. In other words, it can’t be coming from the galaxies in space. The radiation is both too uniform, and too strong for that. The book, The Left Hand of Creation, by John Barrow and Joseph Silk, provides some interesting numbers to help make the point that the radiation didn’t come from galactic radiators. Taking the microwave radiation, which is composed of photons, into account, gives a photon-to-baryon ratio in the current universe of about 10¹⁰. When a star burns from start to finish, it only generates about 10⁶ photons during its lifetime for every baryon in the star. Even if the star finishes its life with a supernova explosion, the lifetime photon production will be only 10⁷ photons per baryon. So one can see that the microwave radiation couldn’t possibly be from stars, because there is too much of it.
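
To make the shortfall concrete, here is a small arithmetic sketch in the Python programming language, using only the figures quoted above from Barrow and Silk:

```python
# Rough arithmetic for the photon-budget argument, using the figures
# quoted above from Barrow and Silk.

observed_photons_per_baryon  = 1e10  # photon-to-baryon ratio today
star_photons_per_baryon      = 1e6   # produced over a star's lifetime
supernova_photons_per_baryon = 1e7   # upper bound, star ending in supernova

shortfall = observed_photons_per_baryon / supernova_photons_per_baryon
print(f"stars fall short by a factor of about {shortfall:.0e}")
# -> about 1e+03: even supernovae leave 99.9% of the microwave
#    photons unaccounted for.
```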

One may wonder, if so many photons are tied up in this microwave energy, why we aren’t all cooked by it as if we were in a microwave oven. The reason is that the radiation fills the gigantic space of the present-day universe, so that the energy density per cubic meter is very low. If one were to run the clock backwards and shrink the universe, the energy density for the same radiation would necessarily increase; the early universe was very hot. The microwave radiation fits the expectations of a big-bang model very well. In fact, the model of George Gamow and two co-workers in 1948 predicted the existence of this microwave radiation.

On the twin pillars of an expanding universe, and the microwave radiation, rests the Big Bang theory. That there was an actual beginning to the universe, and that the universe is of finite size, is now considered fact. Acceptance of the Big Bang is demanded by the observed facts. Competing theories such as an infinite, steady-state universe, have fallen by the wayside in spite of the best efforts of those who find the Big Bang unappealing. Just as nowadays people point to the 16th century as the demise of the geocentric universe, so will the people of future centuries point back to the 20th century as the demise of the steady-state universe, and its replacement by the Big Bang.

With the Big Bang firmly entrenched, we need to look at some of its details. Firstly, what did the Big Bang explode into? A common fallacy is to imagine some vast void of empty space, and at some point in that empty space there was a great explosion of matter and energy out of nothing. This is completely wrong. A much better conception is to imagine a vast void of nothing, and at some point in that nothing there was a great explosion of space. The initial space contained latent energy, some of which quickly condensed out as matter. The key idea here is to realize that the empty space that fills our universe is not a true or absolute nothing. No, not at all. Remove all matter and energy from a cubic meter of space. Is that nothing? The answer from the physicists is a resounding no! We will examine space in detail a little further on. For the moment, just imagine empty space as a real something, a virtual sea of potential being.

The expansion of the universe means that new space is constantly being created. Whereas mass and energy in the universe are conserved, there is no equivalent conservation of space. Perhaps one may say that space multiplies, or that the existence of space breeds more space, and that space is self-replicating. All the energy and matter that currently exists in the universe, is probably no more or less than what was latent in the very first bit of space that heralded the beginning of the universe.

The question of how the first bit of space happened, is inevitable. As one may guess, there is no answer. How can there be? No matter what cause one may give, one can then come back and ask what caused the cause, and so on ad infinitum. The easy-out answer of “God” doesn’t work: What caused God?

Another inevitable question has to do with the edge of the universe. If the universe is finite in size, which it is, then it must have a boundary. It is not unreasonable to imagine the universe as a gigantic sphere. At the extreme surface of this sphere, there must be the final layer of the kind of space that fills our universe. On one side, towards the center of the sphere, is more of the same space. On the other side, away from the center, is alien territory. It may be nothing or it may be something, but whatever it is, it is completely foreign, alien, and strange to the space of our own universe. Now imagine a man transported from inside the universe to outside the universe. What will happen to him? He will just cease to exist. Totally disappear. Even his soul will vanish. (This will become clear when we look at space in greater detail a little further on.) However, there is no cause for alarm. For it is impossible for anything inside the universe to get outside of it. Imagine a light beam originating very close to the edge of the universe, and that the light beam is pointed away from the center towards the edge. What will happen when the light reaches the final layer of space? When the light enters the final layer, the final layer has no choice but to alter the straight-line course of the light and reflect it back at some unknown angle, perhaps the angle of incidence. The space that fills our universe cannot hand the contents of space (such as a light beam) off to some strange, alien other. For all practical purposes, the final layer of space is an impenetrable wall beyond which nothing may pass in either direction. Nothing can get out, and nothing can get in. So a spaceship with a man inside heading for the wall would crash and break up into who knows how many pieces.

To gain insight into the space of our universe, we must examine the findings of quantum mechanics. Quantum mechanics replaced the classical conceptions of the atomic and sub-atomic realms which persisted from ancient times to the beginning of the 20th century. The concept of atoms as extremely small bits of matter, goes back to at least the time of the Greek philosophers. The classical conception is that atoms, and later the parts of atoms once they were discovered, such as the electron, are real little objects that exist in their own right. Apart from any other reality, these little objects exist, and at any point in time the given little object has both a position and a momentum or motion, which may be zero. When, for example, an electron moves from point A to point B, one could infinitely divide the intervening space and say that at such-and-such a time, the electron was at a particular point on the connecting line, and was moving with a certain motion towards B. This is all part of the classical conception. It follows from the everyday, macroscopic events that are directly visible to our eyes. Throwing a ball, dropping a book, or just waving our hands: the objects in question are solid and real, and they certainly move very smoothly through space. There is no perceived jerkiness of motion, and one could certainly divide up the space into fine increments and observe the object at each point and say that yes, the object is there and it is moving with a definite motion. So the classical notion is the very common-sense notion that one could scale down the solidity and smooth motion of macroscopic objects ad infinitum and cover any atomic or sub-atomic realms in the process. However, this common-sense notion is wrong. Things just aren’t solid and smooth-moving at the atomic and sub-atomic levels.

Using the imaginary downscaling of macroscopic happenings as the basis for understanding and describing atomic and sub-atomic happenings, has a solid history of failure. For example: The wave-nature of light was well established early in the 19th century by different experiments, done by Thomas Young, showing interference and diffraction. Macroscopic waves, such as sound waves and water waves, require a medium substance for their propagation. It was only common sense to suppose that light waves must propagate through some real substance also. Newton himself had suggested the existence of the ether. By the mid 19th century, the wave-nature of light was so well supported by experiments, such as those by Fresnel, that the supposed ether was readily believed. In 1881, Albert Michelson began his famous experiments to detect the ether. As we all know, it wasn’t there. Light still had a wave nature, but it didn’t need a medium as common sense based on macroscopic happenings would suppose.

Another notable failure of downscaling had to do with the black-body radiator. A major problem posed in the 19th century was to accurately describe the energy distribution by wavelength for a closed system at thermal equilibrium, such as for an evenly heated, evacuated oven. Experimental data had been gathered and the problem was to account for the data with a descriptive theory. Different theories—which assumed that radiative energy would vary smoothly during absorption and emission, just as one could, for example, smoothly adjust the wick in a lamp, or as might be suggested by the smooth motion of objects—were proposed and failed. The breakthrough came in 1900 when Max Planck offered a theory which fit the data and was based on the idea of discrete, discontinuous energy steps during absorption and emission. His equation makes first use of that famous constant that bears his name: Planck’s constant. The common-sense notion of smooth action was contradicted.

Another downscaling failure is bigger and broader than the two previous examples, and at the same time is less obvious. It has to do with the appearance of sub-atomic phenomena. The wave-nature of light has already been mentioned, and yet there is solid experimental evidence, such as the photoelectric effect, that light is composed of tiny particles. This particle-wave duality is not confined to light, or to electromagnetic radiation in general, but has been found true for sub-atomic particles as well. Even though an electron is preferentially spoken of as a particle, and light is preferentially spoken of as a wave, there are simple experiments which show an electron to be a wave, just as there are experiments which show light to be a particle. The conclusion of Niels Bohr, as part of the Copenhagen Interpretation given in 1927, is that the wave description and particle description are equally valid for sub-atomic phenomena. As to what in particular is observed, either wave or particle, that depends on the experiment done. What makes this strange, of course, is that nowhere in our macroscopic world do we see anything like this. A baseball does not act like a wave under any condition. Nor does a sound ever refuse to fill the air and instead strike at one particular spot only. So once again, downscaling fails us.

In the early 20th century, a major effort was made to explain in detail the experimentally observed absorption and emission of radiation by individual atoms. The simplest atom is hydrogen, so it was the logical starting point for theoretical explanation. The basic layout of the atom had been established by Ernest Rutherford in 1911. His famous experiment was to shoot alpha particles against a very thin sheet of gold. Most of the particles went through unimpeded, but occasionally an alpha particle would bounce back. Rutherford’s interpretation was that atoms have a small, hard nucleus, about which the electrons orbit at some distance from the nucleus. Thus, the alpha particles could pass through the mostly empty space where the electrons were, but occasionally they would hit a nucleus and bounce back. With the notion of electron orbits, one may say that Rutherford had downscaled the workings of our solar system. Building on the Rutherford model, Niels Bohr did his best to make the model with its electron orbits work. The work of Bohr, and other theorists, in spite of using such non-solar-system notions as electrons jumping discontinuously from one fixed orbit to another, ultimately failed to explain all the data.

The breakthroughs came in 1925 and 1926. In 1925, Werner Heisenberg developed a new mathematical approach called matrix mechanics, and in 1926, Erwin Schrödinger published his wave equation. Heisenberg’s approach presumed particles, and Schrödinger’s approach presumed waves. Both approaches worked equally well in accurately explaining the atomic data. The downscaling idea of electron orbits was dead. Heisenberg soon went to work with Bohr and the two presented their Copenhagen Interpretation in 1927. This Interpretation provides guidelines for understanding the atomic and sub-atomic realms. In general, the work done by Heisenberg, Schrödinger, Bohr, and others at that time, is covered by the term quantum mechanics.

Included in quantum mechanics is the Uncertainty Principle. The “uncertainty” relates to the inability to accurately measure both the position and momentum of a sub-atomic particle. This uncertainty has an actual mathematical representation and is not a vague statement. First noticed by Max Born, the uncertainty is a direct mathematical consequence of Heisenberg’s matrix mechanics. In a nutshell, the Uncertainty Principle says that the more accurately one states the position of a given particle such as an electron, the less accurately one can state the particle’s momentum, and vice versa (in modern notation, Δx · Δp ≥ ħ/2). Apparently, Heisenberg at first tried to interpret this mathematical uncertainty to mean that the particle, such as an electron, does have a real position and momentum at any instant in time, but accurate measurement itself is impossible because the measuring device would have to use some form of radiation to detect the particle and this detection itself must inevitably upset the particle. So the more accurately one measures the position of an electron by bouncing some form of radiation off it, the more one simultaneously would alter its momentum. Heisenberg’s explanation was a clear attempt to save the common-sense notion that a particle possesses a true position and motion at all times. However, by the time of the Copenhagen Interpretation in 1927, this explanation had been abandoned. It is true that the act of measurement will affect the particle, but that is not the point. The point is that a particle does not always have a position and a momentum. This is the stand taken by the Copenhagen Interpretation.

In 1932, mathematician John von Neumann, in his book, The Mathematical Foundations of Quantum Mechanics, presented a proof that sub-atomic particles do not exist in their own right with intrinsic position and motion. The work of von Neumann was one more nail in the coffin of yet another downscaling attempt that failed. Just take a basketball and shrink it down a billion or a trillion times, and one will have something like an electron, one might say. However, it just isn’t so.

It isn’t just mathematics that denies our common-sense notion of a particle. If one insists on believing that a sub-atomic particle, such as an electron, must exist by itself and always have a position and motion, then the following experiment can’t possibly give the results which it does. The experiment shoots electrons one-at-a-time towards two very narrow, closely-spaced slits. On the other side of the wall with the slits, is some sort of detecting film or screen. This setup is similar to the early experiments done to show the interference of light. If one were to shoot many electrons at once towards the slits, one would not be too surprised to see a definite interference pattern on the detector. Because the particle electrons also have a wave nature, this is just what happens. The interference pattern is observed. However, now do the experiment shooting only one electron at a time. With only one electron at a time, one would expect each electron to pass through one of the slits and impact somewhere on the detector in a narrow band behind that particular slit through which it passed. No interference would be expected, since there was no other electron to interfere with. However, the results of the experiment are the same whether one shoots many electrons at once, or only one at a time. The same interference pattern is observed. The standard explanation is that the single electron went through both slits at once and interfered with itself. Therefore, the notion of a true self-existing particle with position and motion cannot be correct.
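
The standard explanation can be illustrated numerically. In the following toy sketch (illustrative numbers only, not a rigorous solution of the wave equation), each slit contributes a wave amplitude at a point on the detector, and the arrival probability is the squared magnitude of the summed amplitudes; the fringes appear even though the probability is computed for one electron at a time:

```python
import math, cmath

# Toy illustration of single-particle interference: each slit contributes
# a complex amplitude at a detector point; the arrival probability is the
# squared magnitude of the SUM of the two amplitudes.

wavelength      = 1.0    # all lengths in arbitrary units
slit_separation = 5.0
screen_distance = 100.0

def arrival_probability(x):
    # Path lengths from each slit to detector position x.
    r1 = math.hypot(screen_distance, x - slit_separation / 2)
    r2 = math.hypot(screen_distance, x + slit_separation / 2)
    k = 2 * math.pi / wavelength
    amplitude = cmath.exp(1j * k * r1) + cmath.exp(1j * k * r2)  # both slits at once
    return abs(amplitude) ** 2

for x in range(0, 50, 10):
    print(f"x = {x:2d}  relative probability = {arrival_probability(x):.2f}")
# The printed values oscillate between near 0 and near 4: fringes appear
# even though no second electron was present to interfere with.
```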

To summarize what has happened in physics regarding atomic and sub-atomic phenomena, one may say that when theorists held to common-sense notions of self-existing particles with position and motion, they came up with calculated results that did not fit the facts observed by experiment. On the other hand, when theorists abandoned these common-sense notions, they were able to develop theories whose calculated results did fit the facts. Nowadays, all physicists accept quantum mechanics.

To some extent, we have been discursive with all this physics history. However, the purpose is to set the stage for a new idea presented in this book. We are ultimately concerned with the existence of a hidden world, and such things as our souls. We must suggest some sort of deep reality that will support the existence of these things, and it cannot contradict the physicists. The deep reality must also allow for our physical world as it is known by the physicists. At the moment, this may seem like a tall order, but there is a simple way as we shall see.

The findings of quantum mechanics make the task easier than it would be if particles were self-existent with position and motion. Suppose for the sake of argument that one does indeed have an immortal soul that survives death, and that this soul accounts for our self-awareness and consciousness. Supposing only this much, what can we say must be true about this soul? Firstly, if the soul survives death then it cannot be a direct consequence of the physical brain or body. So one must say the soul is non-physical. Now one may ask, where is this soul? We all have the common experience of our self-awareness being situated in our heads. It would certainly be a cheat to say it does not reside in the head. So the soul resides in the head. The soul shares some of the same macroscopic space as the head does. Because the shared space was created as a consequence of the Big Bang, then it must be that the soul is a part of the universe that happened because of the Big Bang. The soul, even though it is non-physical, is still a part of our universe. Nothing that is outside our universe can get in, and nothing inside can get out. The soul is obviously receiving sensory input from our bodies, so if our bodies are in our universe, then so must our souls be. The point being made is that the hidden world, and our souls and such, are just as much a part of the Big Bang and a part of the resultant universe, as are our bodies and the ground under our feet. Among other things, this means our souls cannot be older than the Big Bang, which happened about fifteen billion years ago.

Now the big question is what is meant by non-physical? We know the soul occupies at least some of the same volume of space as our head does, and in general the whole hidden world—as occult tradition assures us—occupies the same volume of space as does our physical world. The idea that two completely different worlds, operating in completely different fashions, can occupy the same volume of space, violates our common sense. However, we have already seen the limitations of applying common-sense thinking to atomic and sub-atomic happenings.

Inside the volume of space occupied by the brain, there is a mass of nerve cells and other tissue, all bathed in a constant flow of liquid blood. There is a lot of dynamic motion going on in the volume of space occupied by the brain. How can a soul also reside in that space, with fluids sweeping past in constant motion, electrical phenomena, and all sorts of chemical, cellular activity going on? One may also ponder the center of the Earth. There, a superhot core of iron is under great pressure. How could a hidden world occupying the same volume of space be oblivious to the presence of such a great mass of metal?

Let’s suppose the physicists are wrong and common sense is right. Let’s imagine a universe where space is both truly empty, and infinitely divisible, and particles are self-existent with position and motion. If the universe were like this, then it would be very hard to find a place for either a soul or a hidden world to hide. Even if one suggested that there would be two classes of particles, with one class physical and the other class non-physical, and suggested further that each class of particles had its own set of forces to which it reacted and that these forces for the most part did not affect the particles in the other class, there would still remain the problem of collisions between particles of the two different classes. With only a single space, and self-existent particles, there is no way to avoid collisions. Collisions would happen. So many collisions would happen that no hidden world could remain hidden. And how could any complex structure made up of the non-physical particles, such as a thinking mind, occupy the same space as the brain? Even assuming that the forces are different and largely transparent to each other, there would still be constant collisions of the self-existing particles themselves. This collision problem is impossible to avoid.

However, fortunately, the physicists are right and the collision problem which we just encountered in the common-sense universe, doesn’t have to be faced. The self-existent particle with position and motion doesn’t exist. And this means the empty and infinitely-divisible space needed to support such a particle, is not a requirement. Space does not have to be completely smooth and continuous.

We are now ready for a deductive leap. We are faced with the need to fit two very different worlds into the same volume of space, and we know that space does not have to be a perfectly smooth, unbroken continuum. The deductive leap is to say that there are two types of space, and they are very finely nested with each other. The pieces of space which support the physical world will be called A-space. The pieces of space which support the hidden world will be called E-space. This is the theory of A-space and E-space. In a large volume of space, such as a cubic meter, we could say that half of the cubic meter is A-space and the other half is E-space. We would also say the two spaces are very finely nested together. In fact, the two spaces are probably nested together in a very regular and orderly pattern.

Probably the best way to think about a piece of A-space or E-space, is to imagine a small rectangular or cubic block like a brick. Now imagine these bricks laid down in alternating order, forming a whole volume of space. In the volume, each A-space brick would be surrounded on each of its six sides by an E-space brick, and vice versa. Only at the edges of a brick would contact be made with the same kind of brick. It is not being said that this is the actual arrangement of A-space and E-space, but it seems the most likely because it is the most symmetrical. Notice that a given block of space touches other blocks of the same space along their edges, and not at their sides or faces. If one were to hypothesize that a tiny block of space can never make side or face contact with another block of the same space, then one could say that two different spaces are actually necessary.
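
This alternating-brick arrangement is a three-dimensional checkerboard, and its properties can be checked mechanically. In the following sketch, block type is assigned by coordinate parity, which is one concrete way to realize the arrangement described above:

```python
# The alternating-brick arrangement as a 3-D checkerboard: assign a block
# at integer coordinates (x, y, z) to A-space or E-space by the parity
# of x + y + z.

def space_type(x, y, z):
    return "A-space" if (x + y + z) % 2 == 0 else "E-space"

home = (0, 0, 0)

# The six face neighbors differ in exactly one coordinate: always the
# opposite kind of space.
face_neighbors = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]
assert all(space_type(dx, dy, dz) != space_type(*home)
           for dx, dy, dz in face_neighbors)

# Edge neighbors differ in two coordinates: always the SAME kind, which
# is why same-kind blocks touch only along their edges.
edge_neighbors = [(1,1,0), (1,-1,0), (0,1,1), (1,0,-1)]
assert all(space_type(dx, dy, dz) == space_type(*home)
           for dx, dy, dz in edge_neighbors)

print("face neighbors: opposite space; edge neighbors: same space")
```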

Assuming a block of A-space and a block of E-space are the same size, just how small are these blocks? Obviously, they must be extremely tiny. So as not to overcomplicate what goes on in a single block, it is reasonable to assume that only a single, fundamental particle may exist within a given block at any one time. The electron is considered a fundamental particle, and it occupies a volume smaller than a cube 10⁻¹⁶ centimeters on a side. This is known from accelerator experiments, and future experiments using larger accelerators may push this maximum electron size even lower. The block of space must be larger than the largest fundamental particle, and smaller than the smallest particle that is composed of fundamental particles. The proton is a very small particle which is supposedly composed of even smaller particles called quarks. A proton is about 10⁻¹³ centimeters across, so the block of space must be smaller than this. Obviously, a block of space is very small. If we assume a cubic block size of 10⁻¹⁶ cm on a side, then in a cubic centimeter of macroscopic space there would be 10⁴⁸ blocks. Half the blocks would be A-space and half would be E-space.
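
The block count follows from simple division, as this arithmetic sketch confirms (assuming the 10⁻¹⁶ cm block size):

```python
# Counting blocks under the assumed block size of 1e-16 cm per side.

block_side_cm = 1e-16
blocks_per_cm = 1 / block_side_cm           # 1e16 blocks along one centimeter
blocks_per_cubic_cm = blocks_per_cm ** 3    # 1e48 blocks per cubic centimeter

print(f"blocks per cubic centimeter: {blocks_per_cubic_cm:.0e}")
print(f"A-space blocks (half):       {blocks_per_cubic_cm / 2:.1e}")
# -> 1e+48 blocks per cubic centimeter; a full cubic meter (100 cm on
#    a side) would hold a million times more, about 1e+54 blocks.
```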

With the theory of A-space and E-space, it is easy to see how two different worlds can occupy the same macroscopic space. Consider an electron moving about in A-space. The electron derives its existence from whichever block of A-space it currently occupies. The electron is a definition within A-space and can only exist there since we assume the electron is not a recognized definition in E-space. The electron is passed from one block of A-space to another, not as a self-existent particle which it isn’t, but instead as a piece of information. Blocks communicate with other blocks. When an electron moves across a meter of macroscopic space, it is being passed along by more-or-less successive blocks of A-space, so the electron is moving in tiny jumps from one block to the next. The intervening blocks of E-space do not manifest the electron because the electron is not defined to E-space. However, perhaps the E-space blocks help carry communications between the A-space blocks, and vice versa. Or perhaps communication between blocks of A-space, when passing information about an electron, for example, happens only at the edges where other blocks of A-space touch. One really can’t say more on this question of communication, except that in at least a few ways, A-space and E-space do communicate with each other. This much we know, because our souls, which reside in E-space, are able to ultimately perceive happenings in A-space. So some communication is necessary.

That which is physical happens in A-space, and that which is non-physical happens in E-space. If something is defined in one space but not defined in the other, then that something could never appear or manifest in the other space. The two spaces have some interaction with each other, and perhaps share some common definitions, but overall there is little interaction, and so the two different spaces are largely invisible to each other. To an electron, the nearest E-space is never more than 10⁻¹⁶ cm away (assuming 10⁻¹⁶ cm as the block size for both A-space and E-space), and yet, that electron can never go there, and its effect on that E-space can be no more than what a block of E-space will allow. The hidden world is never more than 10⁻¹⁶ cm away from any point in the physical world, and yet there is no conflict, and no collisions.

What is going on within a block of A-space or E-space? Considering the prevalence of computers, an effective view is to imagine a block as a tiny data-processor running a large program. A program is just a list of rules and instructions. All blocks of A-space would have the same identical program. What makes E-space different from A-space is that E-space uses a different program than A-space. One may say that the only difference between any block of A-space and any block of E-space, is that they run different programs. The processor itself, and the communication channels, are probably identical in every block of space. One could say that the hardware design is fixed and only comes in one model, but there are two different programs available: an A-space program, and an E-space program. One may say that the ultimate purpose of the processor is to control what happens or manifests in the block of space. For example, based on communications from other nearby processors, and the running of its own program, a particular block of space causes a fundamental particle, such as an electron, to just appear or manifest. Based on further processing of the data, the block sends a communication to a nearby block and causes the particle to disappear. It is now up to the nearby block to remanifest the particle if its program tells it to do so based on the given data. If we could see an English translation of the A-space program, it would probably impress us with its size and heavy use of mathematics. The same could be said for the E-space program. If we were to compare the two programs, we would probably find that the language is the same, but the rules and instructions used are very different, with very little in common. The physical world and the hidden world speak the same language, but each plays by a different set of rules.
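
To make the processor picture concrete, here is a deliberately tiny sketch of the idea. Everything in it, the message format, the rules, the names, is invented for illustration; nothing is claimed about the real programs:

```python
# A toy sketch of the block-as-processor idea. The hardware (the Block
# class) is identical everywhere; only the loaded program differs.
# All names and rules here are invented for illustration.

A_SPACE_PROGRAM = {"electron"}   # particle definitions this program knows
E_SPACE_PROGRAM = set()          # the electron is not defined in E-space

class Block:
    def __init__(self, program, label):
        self.program = program   # the one difference between A- and E-space
        self.label = label

    def process(self, particle):
        """Run the program on an incoming message about a particle."""
        if particle in self.program:
            print(f"{self.label}: manifests the {particle}, then passes it on")
            return True          # particle accepted, handed to the next block
        print(f"{self.label}: '{particle}' is not defined here; nothing appears")
        return False

# An electron crossing macroscopic space is passed block to block.
path = [Block(A_SPACE_PROGRAM, f"A-block {i}") for i in range(3)]
for block in path:
    block.process("electron")

Block(E_SPACE_PROGRAM, "E-block").process("electron")  # invisible to E-space
```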

To continue the computer analogy, let’s consider MIPs (millions of instructions per second). A MIPs rating for a computer gives a general idea of its processing power. A supercomputer operates in the neighborhood of a thousand MIPs. (We are going to calculate an estimated MIPs rating for a block of A-space. All the numbers are given as powers of ten.) Let’s assume an A-space block only needs to process 10⁵ (100,000) program instructions to determine that it should send information to the next block of A-space regarding the transfer of an electron. Assume the electron is moving along through macroscopic space at one-third the speed of light (10¹⁰ cm/sec), and that the next block of A-space is 10⁻¹⁶ cm away. This gives the electron only 10⁻²⁶ seconds to make its move, and this is all the time the 10⁵ instructions would have to be processed. Under these assumptions, the MIPs rating for a block of A-space would be about 10²⁵ MIPs. Compared to this, the thousand MIPs of a supercomputer is moving at far less than a snail’s pace.
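
The arithmetic behind that estimate is easy to check, using the same assumed numbers:

```python
# Checking the MIPs estimate with the assumed numbers given above.

instructions_per_hop = 1e5     # program instructions per electron transfer
electron_speed_cm_s  = 1e10    # one-third the speed of light, in cm/sec
block_size_cm        = 1e-16   # distance to the next block of A-space

time_per_hop_s = block_size_cm / electron_speed_cm_s             # 1e-26 s
instructions_per_second = instructions_per_hop / time_per_hop_s  # 1e31
mips = instructions_per_second / 1e6                             # 1e25 MIPs

print(f"time per hop: {time_per_hop_s:.0e} s, rating: {mips:.0e} MIPs")
# -> about 1e+25 MIPs, some 22 orders of magnitude beyond a
#    thousand-MIPs supercomputer.
```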

The computer analogy has been convenient because computers are familiar objects in our technical world. However, it is not necessary to refer to computers to understand a block of A-space or E-space. There are only three essential components in a block. These components are a communication channel, a rule book, and something to follow the rules and send and receive the communications. One could imagine a clerk sitting at a desk. On the desk is a telephone, and one large book of rules and instructions. Perhaps also on the desk is a calculator to help the clerk with the mathematics. The telephone rings and the clerk answers. Based on what the clerk is told, the large book is consulted about what to do. The clerk reads the instructions and does exactly what they say, which probably includes making at least one outgoing call and perhaps manifesting a particle. Thus, a clerk at a desk can simulate a block of A-space or E-space.

The speed of light is accepted as a limit on the speed of anything that manifests in A-space. There is probably a similar limit in E-space. However, these limits are a limit on the manifestations of matter and energy, and not a limit on the internal workings of the space itself, such as the processing within blocks and the communication between blocks. In the deep reality of an A-space or E-space block, faster-than-light processing, and faster-than-light communication, must be taking place. The theory of A-space and E-space requires FTL (faster-than-light) for its communications and internal workings. The 10²⁵ MIPs figure mentioned earlier certainly requires FTL.

In 1964, physicist John Bell proved what is known as Bell’s theorem. Bell’s theorem requires that FTL communication is actually happening in the deep, underlying reality of our universe. The theorem takes as its starting point something known as the EPR experiment, and applies some arithmetic to reach its conclusion of FTL. The first experimental test of Bell’s theorem, by John Clauser, was announced in 1972. A more precise and exact experiment by Alain Aspect was completed in 1982. The Aspect experiment, like Clauser’s, confirmed the quantum-mechanical predictions on which Bell’s conclusion rests. The conclusion of Bell’s theorem is that regardless of what model of deep reality one may choose for our universe, it must include FTL communication. Bell’s theorem does not prove the theory of A-space and E-space in particular, but it does add support for it, since it mandates FTL which is a requirement of the theory.

Returning to the Big Bang, in the very beginning there must have been one block of A-space and one block of E-space. Perhaps within each block, as part of an initial condition, there was the definition of a very massive particle. One could think of the particles as being, in effect, nothing more than very large initial values or numbers, used as a starting point by the programs of A-space and E-space. Somehow, the space blocks multiplied, perhaps through self-replication, and the initial condition was communicated and distributed to other blocks. Because of the conservation laws known to exist in A-space, the initial massive particles probably represented all the matter and energy which we find in the universe today.

At this point we have presented the theory of A-space and E-space. It is not really important whether or not the reader actually accepts this view of deep reality. However, what is important is that for one to say that science has proved that a hidden world, or our souls, can’t exist, is wrong. On the contrary, it is possible to use both the facts and theory of science, and physics in particular, to find a place where our souls, and an alternate world, can reside unmolested in the same large volume of space as our physical world. Later on in this book we will get a better picture of just how different and strange the rules of E-space are when compared to what we are used to in the physical world. However, realizing that a different program runs E-space, one should be able to refrain from saying “impossible.”

Chapter 2: The Brain

The word “brain” clearly refers to the physical end-bundle of nerves found in the head. Every mammal, bird, reptile, amphibian, fish, and insect, has a brain. The brain sits at the base of a tree of sensory and motor nerves that spreads through the body. There is no doubt that the purpose of any brain is to provide a command-center for the control of the animal’s movements in its environment. Plants and trees, which do not move about, have no brain and no nerves at all. They do not need to control movements which they cannot make. One may also add, although it digresses a bit, that plants and trees have no senses such as eyes or ears. There is no point in having such senses, because the plant or tree, being unable to move, could not productively use what it might learn from them. The ultimate purpose of any brain and nervous system is to control the movements of the animal.

The word “mind” is less clear than “brain.” When people speak of mind, they usually mean what goes on in their heads from their own self-conscious perspective. Someone who would say “I think, therefore I am” in one sentence, would be more likely to use the word “mind” instead of “brain” in the next. Whereas “brain” clearly refers to a physical object, there is disagreement on what “mind” refers to. In particular, is the mind really just the brain? Are they one and the same thing?

There is a tendency to take extreme positions on this question. One extreme is to say that brain and mind are identical. The other extreme is to say the mind is a non-physical, separate entity, and its only connection with the brain is through a narrow interface of sensory inputs going in, and motor control coming out, which is to say only the bare minimum of brain participation. A typical materialist will adopt the first position with the justification that there is no alternative because a non-physical mind can’t be anywhere. However, we know about E-space, and this is a place where a non-physical mind can be. We have already placed the soul in E-space, but in this chapter we are not concerned with the phenomenon of self-awareness or consciousness. Instead, our concern is with answering the question of just how much of what goes on in the head—with the ultimate purpose of controlling the body—is due to the brain, and how much is due to a mind residing in E-space.

The best way to find out where the brain leaves off and an E-space mind begins, is to examine what is known about the brain. A great deal is known about the brain. We will proceed with the facts and draw conclusions where possible.

Amnesia is the loss of a person’s remembrance of who he is. The personal and related memories are more-or-less forgotten. The following case was taken from a 1987 newspaper story: “A 33-year-old Union woman, missing for two days and whose family feared she had been abducted from the Livingston Mall, turned up yesterday in Lyndhurst with a bump on the head and almost no recollection of who she was or where she had been.… [The woman’s sister said the woman] still did not know her own identity, nor did she recognize her husband, Gregory, or her 11-year-old son. ‘She doesn’t seem to remember any of us,’ said [the sister], who also lives in Union. ‘She’s very upset. She’s very scared.’”[1] This occurrence of amnesia sounds fairly typical. Obviously the head suffered a blow of some sort that left the bump as evidence. The blow gave the brain a jolt and somehow the amnesia resulted.

With this one simple example, we can immediately comment on the extreme position that the mind is isolated from the brain except for the minimal requirements of senses going in and motor-control coming out. This minimalist position is contradicted by the facts. The woman’s misfortune was not a loss of sensory input, nor was it a loss of motor control. Instead, personal memories were lost, and the loss is obviously brain related.

The building block of any nervous system, including the brain, is the nerve cell. Nerve cells are called neurons. The neuron probably made its first appearance in the early history of multicellular life over 600 million years ago in Precambrian times. All life at that time was in the seas, and the first animal with neurons may well have been a jellyfish. Along with the neurons, there would have been muscle cells, as the two always go together. Muscle is useless without nerves, and vice versa. An examination of all currently-living animal life shows the same fundamental design for neurons. For example, comparing a neuron from the brain of a man with a neuron from a modern-day jellyfish would show the same signal-conduction method used.

The purpose of a neuron, regardless of which animal or where in the body it came from, is always the same. The purpose is to—under the right conditions—conduct a signal from one end of the neuron to the other. A neuron is like a piece of wire, but with the proviso that it is normally inhibited from transmitting a signal.

Neurons come in many shapes and sizes. A typical neuron has a cell body which supports the single axon along which a signal can be transmitted. Axon length can vary from about a millimeter to a full meter. A signal is transmitted from one end of the axon to the other as a sort of chemical wave involving the movement of sodium ions across the axon membrane. During the wave, the sodium ions move from outside the axon to its inside. Within the neuron is a chemical “pump” that is always working to transport sodium ions back to the outside. The sodium-ion wave is the common signal method used by all neurons. A neuron waiting to conduct a signal sits at a threshold state. The sodium-ion imbalance across the axon membrane is just waiting for a trigger to set the wave in motion. Nerve cells with a clearly-defined axon transmit in one direction only. They are one-way streets.

The speed of signal transmission through an axon is very slow compared to electrons moving through a wire. Depending on the axon, a signal may move anywhere from half a meter per second to 120 meters per second. The faster transmission speeds are obtained by axons that have what is known as a myelin sheath. The long sensory and motor nerves that connect the brain through the spinal cord to different parts of the body are examples of myelinated neurons. Speed along these long nerves is important, and a myelinated neuron can transmit its signal ten times faster than it could without the myelin sheath. In contrast to a top speed of 120 meters per second, an electrical current in a wire can move at near light speed, which is 300,000,000 meters per second. Besides speed, another consideration is how quickly a neuron can transmit successive signals. The answer is that, at best, a neuron can transmit a thousand separate signals per second. One may refer to this as the switching speed. By contrast, the switching speed of a semiconductor is easily measured in millions of separate signals per second. Comparing the best neuron against a modest electrical circuit containing a single semiconductor that switches in a millionth of a second, we see a speed difference of about six orders of magnitude, and a switching difference of three orders of magnitude.
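
The two comparisons can be checked with a few lines of arithmetic. The following sketch, in Python, uses only the figures quoted above:

    import math

    # Conduction-speed and switching-rate figures from the text.
    neuron_speed = 120             # meters per second, fastest myelinated axon
    wire_speed = 300_000_000       # meters per second, near light speed
    neuron_rate = 1_000            # signals per second, best-case neuron
    transistor_rate = 1_000_000    # switches per second, a modest circuit

    print(math.log10(wire_speed / neuron_speed))      # about 6.4: six orders
    print(math.log10(transistor_rate / neuron_rate))  # 3.0: three orders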

There are sensory neurons, motor neurons, and the rest of the neurons are called interneurons. A sensory neuron has some sort of specialized sensor structure at one end of its axon. A motor neuron at one end of its axon terminates with what are called motor end plates. The other end of the axon terminates with a number of thin branches called dendrites. Dendrites also terminate the other end of a sensory neuron and both ends of an interneuron. Every neuron has a receiving end and a sending end. The signal always travels from the receiving end to the sending end. The place where the sending dendrite of a neuron touches at its end against the surface of another neuron, is called a synapse or synaptic connection. At the tip of a sending dendrite is a concentration of a special chemical called a neurotransmitter. It should be pointed out that there is not just one neurotransmitter. In the human nervous system there are at least fifty chemically-different neurotransmitters, and there may actually be hundreds. One of the important ways neurons differ from each other is by the neurotransmitters that they make and respond to. The neurotransmitter is the only real link that connects one neuron to another. The sodium-ion wave mentioned earlier is not directly transferred from one neuron to the next. Instead, what happens is that the sodium-ion wave travels through the axon and spreads into the sending dendrites. This causes the dendrites to release some of the particular neurotransmitter made by that neuron. The released neurotransmitter quickly makes contact with whatever part of the other neuron the dendrite is touching. The neuron touched by the dendrite will tend to react to the released neurotransmitter in one of three ways: it will be stimulated to start its own sodium-ion wave signal, or it will be inhibited, or it will have no reaction or only a weak reaction to that particular neurotransmitter.

The existence of neurotransmitters gives the nervous system flexibility. For example, suppose a neuron, labeled A, at its sending end is touching through its dendrites three other neurons labeled B, C, and D. For any given neuron, only one neurotransmitter is created. So every time a sodium-ion wave travels through the axon of neuron A and washes into its dendrites, the same chemical neurotransmitter will always be released. One of the ways neurons B, C, and D may differ from each other is in how they respond to the neurotransmitter that is created by neuron A. If B were stimulated by it, C inhibited, and D only weakly stimulated, then the effect of neuron A’s signal would vary with the receiving neuron.
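
As an illustration, here is a minimal sketch of the point just made (the labels and response values are illustrative numbers, not measurements): neuron A always releases the same neurotransmitter, and the variety of effects comes entirely from the receiving side.

    # Neuron A's signal is always the same; its effect is not.
    # The response values are arbitrary illustrative numbers.
    RESPONSES = {
        "B": +1.0,   # strongly stimulated by A's neurotransmitter
        "C": -1.0,   # inhibited by it
        "D": +0.1,   # only weakly stimulated
    }

    def fire_neuron_A():
        """Release the one neurotransmitter A makes; report each effect."""
        for name, effect in RESPONSES.items():
            print(f"neuron {name}: effect {effect:+.1f}")

    fire_neuron_A()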

Consider the following practical design problem. Suppose we have two interneuron pathways, A and B. When pathway A is transmitting, we want pathway B to be inhibited, and when pathway B is transmitting, we want pathway A to be inhibited. In other words, we do not want both pathways A and B to be transmitting at the same time, perhaps because there would be confusion somewhere in the system if they did. Four of the most widely used neurotransmitters in the human nervous system are acetylcholine, norepinephrine, serotonin, and dopamine. Another major neurotransmitter, GABA, usually affects other neurons as an inhibitor. Now suppose both pathways A and B are creating and reacting to the same neurotransmitter, acetylcholine, to transmit a signal through the pathway, and that GABA will be an inhibitor. So we are saying that both pathways use the same kind of interneuron. On the receiving end of the interneuron, the dendrites will be stimulated by the presence of acetylcholine and start the sodium-ion wave rolling down the axon. At the sending end, the sodium-ion wave washes into dendrites which are stimulated to release acetylcholine. The released acetylcholine in turn stimulates the next interneuron in the pathway, and so on. Now we know that each pathway creates and releases acetylcholine when actively transmitting its signal, and we also know that the presence of GABA will inhibit the transmission. What we need for our design goal is the existence of a second kind of interneuron. We want a neuron that will be stimulated by acetylcholine on its receiving end, but create and release GABA on its sending end. At some point along pathway A, we place this second kind of neuron so that its receiving end makes contact with some of the dendrites of pathway A where acetylcholine is released, and we place the sending end, where the inhibitor GABA is created and released, against the receiving dendrites of a neuron of pathway B. With this new connection between pathways A and B, whenever pathway A is transmitting, the transmission inhibitor GABA will be released onto pathway B. To complete our design we need only make another cross-pathway connection as before, but in directional reverse, so that GABA will be released onto pathway A when pathway B is transmitting.
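
The completed design can be sketched as a toy simulation. The following sketch is purely illustrative: the thresholds and amounts are arbitrary, and a real circuit would need asymmetry or timing to pick a winner when both pathways are driven at once.

    # Toy simulation of the mutually-inhibiting pathways A and B.
    # "Stimulation" and "GABA" are abstract numbers, not chemistry.
    def transmits(stimulation, gaba, threshold=1.0):
        """A pathway transmits only if stimulation overwhelms inhibition."""
        return stimulation - gaba >= threshold

    def step(input_a, input_b):
        # Each active pathway drives a GABA-releasing neuron aimed at
        # the other pathway (the cross-connection described above).
        gaba_onto_b = 1.0 if transmits(input_a, 0.0) else 0.0
        gaba_onto_a = 1.0 if transmits(input_b, 0.0) else 0.0
        return transmits(input_a, gaba_onto_a), transmits(input_b, gaba_onto_b)

    print(step(1.5, 0.0))  # (True, False): only A transmits
    print(step(0.0, 1.5))  # (False, True): only B transmits
    print(step(1.5, 1.2))  # (False, False): never both at once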

On the receiving end of an interneuron, the general rule is that there is always some active presence of an inhibiting neurotransmitter. An interneuron has what may be called an active-off state, as opposed to a passive-off state. Before an interneuron can transmit its signal, the inhibiting neurotransmitter must be overwhelmed by stimulating neurotransmitter. Thus, in the preceding design problem, although not described, there would always be a minimum presence of an inhibiting neurotransmitter, such as GABA, along both pathways A and B.

Once a neurotransmitter molecule is released from a sending dendrite, it will either be absorbed by a receiving dendrite or cell body and be quickly broken down, or reabsorbed by a sending dendrite and reused, or absorbed by a nearby glial cell, or lost to the bloodstream and broken down there. One way or another, neurotransmitters, once released, are not lingering about. If a neuron is to be able to transmit a signal in tandem with its stimulating neuron many times per second, then neurotransmitter levels must be quickly reset to the normal balance of the active-off state in which inhibitor overwhelms stimulator.

The human brain is a higher-animal brain. Although larger than most mammal brains, there is nothing structurally unique about the human brain as far as large-scale features visible to the naked eye are concerned. Since the appearance of the first nerve cell over 600 million years ago, the major features of the brain have appeared in more-or-less distinct stages. Interestingly enough, it does not seem that there was ever a completely new brain design during the history of life. Instead, over the long period of time, as major new groups of animals appeared, the “old” brain would still be used, but perhaps with the addition of one-or-more new structures just built on top of, or alongside, the old structure.

The higher-animal brain is often divided into three different parts. These are the forebrain, midbrain, and hindbrain. The hindbrain consists of the brainstem and cerebellum. The hindbrain is several hundred million years old. For example, sharks have well-developed hindbrains, and sharks were already a diverse group in the Devonian period more than 360 million years ago. The midbrain is the topmost part of the brainstem and connects the hindbrain to the forebrain. The well-developed forebrain appeared over one hundred million years ago. The most important part of the forebrain is the two cerebral hemispheres, collectively known as the cerebrum. Considered part of the forebrain, and clustered around the midbrain, are some rather oddly-shaped, small structures. The odd structures comprising the limbic system are mostly concerned with direct regulation of the body. Such things as body temperature, blood pressure, and heart rate are controlled here. The other odd structures are the thalami (plural of thalamus) and basal ganglia. The thalamus is mostly a relay station for incoming sensory signals. Almost all sensory signals that terminate in the cerebral cortex are relayed through the thalamus. The basal ganglia are mostly concerned with motor control, done in a cooperative fashion with other parts of the brain such as the cerebral motor cortex and the cerebellum.

The cerebrum is the darling of brain researchers. There is ample proof that its thin covering-layer is the major site for things we normally consider as intelligence. In the human brain, the cerebrum represents the bulk of the forebrain. The two large, gray, heavily-wrinkled hemispheres are the most visible part of a fully-exposed brain. All other parts of the brain are small in comparison. There is a left hemisphere and a right hemisphere. The two hemispheres look very similar, but they are not completely symmetrical. The right hemisphere (on the right side of the body) is usually a little wider than the left, and also the front of the right hemisphere extends a bit more forward than the left, while the back of the left hemisphere extends a bit more backward than the right. One of the major surface folds, known as the sylvian fissure, is longer on the left hemisphere than on the right.

The gray appearance of the cerebrum is actually confined to the thin layer which covers it. This layer is the cerebral cortex. Often, it is just called the cortex. The word “cortex” is a Latin word which means bark. Just as the bark of a tree is the outer covering of the tree, so the cerebral cortex is the outer covering of the cerebrum. However, unlike the bark of a tree, the cortex is not a protective covering. Instead, it is the actual site where intelligence seems to be taking place.

Beneath the cortex is the bulk of the cerebrum. This is the white matter. It is called white matter because of its white appearance, caused by the presence of fatty sheaths protecting nerve-cell fibers, much like insulation on a wire. The white matter is primarily a volume of space through which an abundance of nerve pathways, called tracts, are running. Hundreds of millions of neurons are bundled into different tracts, much as single wires are bundled into larger cables. The tracts are often composed of long axons that stretch the entire length covered by the tract. As an example, consider the optic nerve. It leaves the back of the eye as a bundle of about a million axons. The supporting cell bodies of the axons are buried back in the retina of the eye. The optic tract, as one may call it, runs into the base of the thalamus, and there a new set of neurons, one outgoing neuron for each incoming neuron, comprises a new optic tract referred to as the optic radiation. The optic radiation connects from the base of the thalamus to a wide area of cerebral cortex in the lower back of the brain. Thus, from the back of the eye to the back of the brain is a distance of roughly ten centimeters, and it is covered by the use of only two neurons per “wire.” Almost all of this length is due to the long axons.

The white matter is a volume of space largely filled with different tracts. Besides tracts, there is also the usual complement of nerve-supporting cells known as glia (Greek for glue), and a host of blood vessels. However, the white matter seems to have no other purpose than as a nest of wires. The tracts always come from somewhere, and go somewhere else. Each tract has a definite place of origin, and place of destination. There are three main categories of white-matter tracts, based on which parts of the brain the tracts are connecting. Association tracts connect one area of cortex with a different area of cortex on the same hemisphere. Commissural tracts connect one area of cortex with a different area of cortex on the opposite hemisphere. All the commissural tracts come together into a single bundle of nerves known as the corpus callosum. The corpus callosum joins the two hemispheres together. Projection tracts connect areas of cortex with the brainstem and the thalamus. It seems that all tracts in the white matter have either their origin, destination, or both, in the thin cortex layer. Altogether there are many thousands of different tracts.

The cortex is a sheet varying from two to five millimeters in thickness. The total surface area is about 1,500 square centimeters. There is universal agreement that the reason the cerebrum-covering cortex is so wrinkled, is to allow a greater area of cortex than could be had if the surface of the cerebrum were smooth. This wrinkling implies that the cortex is important to the operation of the brain. All the tracts which make connections with the cortex, also lend support to the apparent importance of the cortex.

The detailed structure of the cortex shows general uniformity across its surface on both hemispheres. A small patch of cortex under a microscope is going to look mostly the same no matter where it is taken from. Brodmann’s map, which divides the cortex of a hemisphere into about fifty different areas, is based on differences in such things as the relative density of neurons in the six different layers of the cortex, and the thickness of both the cortex and its individual layers. In any square millimeter of cortex, there are roughly 100,000 neurons. This gives a total count of roughly fifteen billion neurons for the entire cortex. To contain such a number of neurons in the cortex, the typical cortex neuron is very small and does not have a long axon. However, many neurons whose cell bodies are in the cortex do have long axons, but these axons pass into the white matter as fibers in tracts. Although fairly uniform across its surface, the cortex is not uniform through its thickness. There are six distinct layers that show under a microscope. The main difference from one layer to the next is the shape and density of the neurons in each layer. However, the layers are not insulated from each other, as there are cross-connections between the layers made by tiny axons and dendrites.
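
The neuron-count arithmetic is simple enough to verify directly (a sketch using only the figures just given):

    # Fifteen billion cortex neurons, from the figures in the text.
    area_cm2 = 1_500                  # total cortex surface area
    mm2_per_cm2 = 100                 # square millimeters per square centimeter
    neurons_per_mm2 = 100_000         # neurons under each square millimeter
    print(area_cm2 * mm2_per_cm2 * neurons_per_mm2)   # 15000000000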

Nobelists David Hubel and Torsten Wiesel have studied the primary visual cortex in detail. This is the area of cortex that receives the optic-radiation tract mentioned earlier. The individual axons from the tract enter the cortex and spread their dendrites in cortex layer four, which is the fourth layer counting down from the top. What the two researchers discovered was that the fundamental structure of the primary visual cortex is a tiny column which runs through the thickness of the cortex and is about a millimeter wide measured across the cortex surface. Using microelectrodes, and experimenting on monkeys, it was found that the neuronic activity within one column is largely isolated from the neuronic activity in surrounding columns. Into each column would run hundreds of axons from the optic radiation, and coming out of the column would be perhaps ten times as many axons. The outgoing axons originate from neurons whose receiving ends are buried in layers two, three, five, and six. The outgoing axons (tracts) from layers two and three connect to other areas of cortex. The tracts from layer five connect to the midbrain. The tracts from layer six connect back to the thalamus from which the optic radiation came.

Entering a column, the incoming axons from the optic radiation are evenly divided between left eye and right eye, and represent the same small patch of retina in each eye. Thus, a single column is receiving only a small piece of the total picture which the eyes can see. The actual size of the picture-piece which a particular column receives depends on where in the retina the picture-piece is coming from. A picture-piece from the part of the retina where the center-of-view is focused will be spread over thirty-five times as much area of cortex as a peripheral picture-piece of the same size, because there will be about thirty-five times more optic-radiation axons representing that picture-piece. Concerning the processing of the picture-piece that goes on in the column, it was found that definite line extraction is taking place. What this means is that certain neurons in the column are sending signals only if the incoming picture-piece has a definite line feature at a certain position and at a certain angle of orientation. Many different line positions and angles within the picture-piece can be individually detected by different neurons which are presumably physically oriented in some corresponding fashion. One may imagine, for example, a neuron with its receiving dendrites spread out in a more-or-less straight line, making connections with the sending dendrites from a number of incoming optic-radiation axons. One must then assume that the line-detection neuron will not signal unless it is stimulated at the same time by several of the incoming signals along its line. For this scheme to work, the line-detection neuron must be triggered by a line of incoming signals that corresponds to a true straight line in the picture-piece.
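
The imagined line-detection scheme is easy to sketch. In the following illustration the patch size, threshold, and layout are all invented for the example; the detector fires only when enough point-signals along its own line are active at once.

    # A toy picture-piece: '#' marks an active incoming point-signal.
    patch = [
        "..#..",
        "..#..",
        "..#..",
        "..#..",
        "..#..",
    ]

    def line_detector(cells, required=4):
        """Signal only if enough of this detector's cells are active,
        mimicking a neuron whose dendrites lie along one line."""
        active = sum(1 for row, col in cells if patch[row][col] == "#")
        return active >= required

    vertical = [(r, 2) for r in range(5)]     # dendrites along a vertical line
    horizontal = [(2, c) for c in range(5)]   # dendrites along a horizontal line
    print(line_detector(vertical))            # True: the line is present
    print(line_detector(horizontal))          # False: only one cell active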

The column described is actually not sharply defined. There is no physical, insulating wall built around the column and thereby defining it. Instead, the column bounds are determined in one direction as the combined width of a strip of left-eye optic radiation and an adjoining, visually corresponding, strip of right-eye optic radiation. In the other direction, the column bound is the distance over which a microelectrode determines that line-detection is taking place for a line in the visual field that is being rotated through its possible angles. What this means, is that talk of columns in the cortex is somewhat subjective. The real basis for thinking of the cortex as being arranged into many small, functional columns, each about a millimeter wide, is the fact that there is very limited sideways communication through the cortex. When a signal enters the cortex through an axon, the signal is largely confined to an imaginary column of no more than a millimeter across. This is what is found in actual practice. There are not a lot of neurons, axons, or dendrites, to spread a locally-occurring signal laterally through a layer of cortex. Different areas of widely-spaced cortex do communicate with each other, but by means of tracts passing through the white matter, not by means of sideways connections running directly through the cortex.

As an example of another cortex area, consider the primary motor cortex. This cortex area is in the shape of a strip that wraps across the middle of the cerebrum. As the name suggests, this site plays a big part in voluntary movement. The area is a definite map of the body. Determining the existence and layout of the map has been rather easy for the neurologists. All that is done, is to touch an electrode to the cortex surface and observe which muscles contract. In general, the map represents parts of the body in the order they occur on the body. Thus, moving across the length of the strip, one can move from the toes to the ankle, knee, hip, trunk, and so on in that order. However, the map does not draw a very good picture of the body because those body parts that are under fine control get more cortex. The hand, for example, gets about as much cortex area as the whole leg and foot. This is analogous to the primary visual cortex where more cortex is devoted to the center-of-view than to peripheral vision.

There are many tracts that carry signals into the primary motor cortex. These include tracts coming from other cortex areas, both from the same hemisphere and the opposite hemisphere (association and commissural tracts), sensory tracts from the thalamus, and tracts through the thalamus which ultimately connect with the basal ganglia, cerebellum, and brainstem. The incoming tracts are spread across the strip, and the actual axons terminate in layers one, two, three, and four. Sensory-signal axons terminate primarily in layer four, similar to the primary visual cortex. On the outgoing side, there are the giant Betz cells. There are only about 34,000 Betz cells in each hemisphere along the motor strip. These are big neurons with thick, myelinated axons which pass down through the brainstem all the way into the spinal cord. The muscles are activated by signals passed through these Betz cells. The Betz cell bodies lie in layer five of the motor cortex. Besides the Betz cells, there are smaller outgoing axons, which originate in layers five and six. These outgoing tracts connect to other areas of cortex, and elsewhere.

Besides the primary visual cortex and the primary motor cortex, there are many other areas of cortex for which definite functions are known. When the function of the area is characterized by distinctive sensory or motor neurons, as with the primary visual cortex and its optic radiation, and with the primary motor cortex and its giant Betz cells, then there is a match between function and an area on Brodmann’s map. However, when there are no distinctive sensory or motor neurons for a known functional area, then there is often no match with Brodmann’s areas. What this means, for example, is that quite different mental functions can be taking place on different areas of the cortex which Brodmann’s map would consider identical. For example, many complex functions are taking place at the prefrontal cortex, and yet Brodmann’s map shows only four areas. With the exception of certain sensory and motor areas, Brodmann’s map does not correlate well with known function.

The knowledge of the functional areas of the cortex did not come from studying the actual physical structure of the cortex, as Brodmann did. Instead, there are two main ways that the many different functional areas have been identified. One way is by electrical stimulation of different spots of cortex, and observing the results. The other way is by observing individuals who have specific cortex damage. Actually, by far, the study of the effects of specific cortex damage has been the best source of knowledge on cortex functions. Localized damage has mostly come from either head wounds, or strokes and tumors. During times of war, there have been many head wounds that destroyed small areas of cortex, and some of these cases have been studied. Strokes are fairly common in older people, and they often destroy a small area of cortex. Strokes happen when a blood vessel in the brain bursts, forming a clot, or when a blood vessel becomes blocked. Either way, when a clot forms or the blood stops flowing, surrounding tissue, such as neurons, dies. Brain tumors are rare, but some cases of tumors destroying cortex have been studied. The basic picture that emerges from all these studies of damage is that mental processing is broken into many different pieces, and these pieces exist at different sites or patches of the cortex.

Clustered around the primary visual cortex are other cortex areas that receive sensory signals from the primary area. In general, around the borders of any primary sensory area, where the sensory signals first arrive, there are association or secondary areas. The primary area receives the sense-signals first, but from the primary area the same sense-signals (more-or-less) are transmitted to the different association areas through tracts. The individual association areas are also getting signals from elsewhere, such as from the thalamus. What a single association area does is attack a specific part of the total problem. An association area is a specialist. In the case of vision, for example, there is a specific association area which is necessary for the recognition of faces. If this area is destroyed, the individual can still see, and recognize other objects, but cannot recognize a face.

Discussing the functional areas of the cortex in detail is actually rather difficult because there is a lot of subjectivity involved. In the case of the facial-recognition area, for example, all that is really known about it is that if the area is destroyed, then the only obvious effect is that the individual can no longer recognize a face. One just can’t say in detail what is going on at that site, or just how that site is interacting with other sites.

Although it is hard to discuss functional cortex areas in detail, some of the common examples given are Wernicke’s area, Broca’s area, and the prefrontal areas. Wernicke’s area is an area of cortex on the side of the cerebrum. When Wernicke’s area is destroyed, there is a general loss of language comprehension. One can no longer make any sense out of what one reads or hears, and if such a person tries to speak, only gibberish comes out. Broca’s area, an association area of the primary motor cortex, is near Wernicke’s area but more towards the front. When Broca’s area is destroyed, there is a loss of speech. One can still understand the written and spoken word, and also write, but one can no longer speak. Instead of words when one tries to speak, only noises come out. The prefrontal area is at the front of the cerebrum. When this area is destroyed, there is a general loss of foresight and concentration, and of the ability to form and carry out plans of action. The person with prefrontal damage loses the sense of a future and becomes a creature of the moment. Moods will change quickly as the momentary stimulus changes. Such a person is easily distracted. Lower animals do not have functional areas comparable to Wernicke’s area and Broca’s area, because only man has a complex language ability. However, lower animals do have the same kind of function as man in their prefrontal cortex. A lower animal will become a creature of the moment, easily distracted, just as a man would be, when its prefrontal cortex is destroyed.

All the tracts running through the white matter comprise roughly a hundred million different “wires.” The “wires” or neurons can only transmit a signal in one direction. Thus, when considering a specific patch of cortex and all the “wires” that connect to it, it is accurate to divide the “wires” into input (sending signals into the patch) and output (sending signals out of the patch). No doubt the vast majority of the “wires” are carrying very low-level data such as sensory and motor data. Because of the existence of association areas, more-or-less the same sensory or motor data is transmitted to a number of different areas, with each area requiring its own set of “wires.” For example, the primary visual cortex is sending its point data to other areas, and it is probably also sending its line data as well. The point data originated from the retina, while the line data originated in the primary visual cortex. However, besides the low-level data, there is evidence for what might be called high-level data. For example, researcher E. Rolls located in a monkey brain neurons that signaled only when the monkey saw edible food, and also neurons that signaled only when the monkey saw a familiar object. The locations of these neurons were away from the actual visual cortex and its association areas, so it is reasonable to assume those cortex areas were the site where these high-level recognitions were made, and that the recognitions were then transmitted elsewhere along dedicated high-level “wires” signaling, in effect, “I see food” and “I see a familiar object.”

At this point we are done with a technical overview of the brain. In particular, we have looked at both neurons and the cerebral cortex in detail. Neurons are the building blocks, and the cortex is the known site for most of what we consider intelligence. We are now ready to attack the central question of just how much of our minds is physical, and how much is non-physical residing in E-space. We will consider several different arguments for some E-space participation.

The building block, the neuron, is clearly a signal transmitter. There is no doubt that it makes a good wire, albeit a very slow wire when compared to electrical wire. The neuron is an all-or-nothing, one-way, slow-speed signaler. It either sends a signal, or it doesn’t. When it does send a signal, the end result is always the release of the same neurotransmitter. Whether or not a neuron actually signals depends on the balance of stimulating and inhibiting neurotransmitters at its receiving end. When someone who maintains that the mind is all physical is confronted with building a working model of the mind out of the building-block neurons, the common reaction seems to be to hide behind the great number of neurons in the human brain and use that as an excuse for not trying. This sounds like a good excuse, but what if we consider a much smaller brain, such as the brain of an insect? The honeybee has only about 7,000 neurons in its brain. This is not overwhelming, and compared to a microprocessor which has many more components, it would seem that if a more-complex microprocessor can be built, then it should be possible to build, on paper, a less-complex 7,000-neuron honeybee controller. A bee is a social insect and has duties within its community. The worker bee can see, walk about, recognize and feed larvae, build and repair comb, fly beyond direct visual range of the nest and then use the sun to navigate back to the nest, find and collect nectar, pollen, water, and resin, and communicate the whereabouts of these finds to other bees by means of a dance, recognize and remove debris from the hive, and defend the hive from intruders.

Trying to build a working bee mind from neurons alone, is like trying to build a computer from a bucket of sand. Sure, a computer uses silicon and sand has silicon, but trying to arrange the sand and get a computer is hopeless. One could have a whole beach of sand to work with and it wouldn’t make a difference. There is more to a working computer than just silicon. Likewise, it seems there is more to a bee’s mind than just neurons.

The believers in a completely physical mind have found other excuses to hide behind besides the neuron number. If the number of neurons is not big enough, even for the human brain which has roughly fifteen billion, then the new excuse is to greatly boost the number by claiming all the individual dendrites, and perhaps even individual synapses, are important in their own right. For example, using a guess of a thousand dendrites per neuron, and saying that the way each dendrite connects with other neurons is very important, one could boost the bee-brain number from a lowly 7,000 to a more intimidating 7,000,000. However, there is just no basis in fact for this. There is only an on-off signal transmitted through the axon, so there is nothing to base self-important dendrites or synapses on. A single neuron releases only one neurotransmitter because there is only one kind of signal. Either a sodium-ion wave has passed along the axon, or it hasn’t. This is a single bit of information. There is no way for the different dendrites or synapses to act differently from each other in any meaningful way when all they have to go on is the same single bit of information.
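
The one-bit argument can be put in a few lines of code. In this sketch (the fan-out count is arbitrary), every dendrite or synapse of a firing neuron receives the identical bit, so fan-out multiplies copies, not information.

    def broadcast(fired, synapse_count):
        """Every synapse sees the same single on-off event."""
        return [fired] * synapse_count

    signals = broadcast(True, 1_000)
    print(len(signals), set(signals))   # 1000 {True}: many copies, one bit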

Another excuse sometimes used, is to invoke hidden cellular processes that make the neuron much more capable than it appears. No researcher has found any such hidden process, but a more persuasive argument against hidden processes is the same argument used against the dendrites. There is only one bit of information to go on, so how could any hidden process increase the information in that bit?

However, one could still try to save these excuses by introducing the idea of a built-in clock that somehow orders the single bit based on its appearance over time. This scheme would be like burying a digital computer in every neuron and claiming the axon is just the external I/O (input/output) channel. This excuse-saver is hard to refute because it is really a deus ex machina. However, there is no physical evidence for such a complex mechanism, and any self-respecting molecular biologist would have to pronounce a digital computer within a neuron as impossible.

So we are back to a 7,000-neuron bee brain. If the mind of a honeybee were completely physical, then one could reasonably expect that the computer, a completely-physical device, could easily outperform a bee brain. Given a bee-brain simulator program running on a supercomputer, it should be no contest. The computer should win easily. We have already seen how the best neuron compares against a modest electrical circuit. Although we only gave a three-orders-of-magnitude difference in switching speed, that comparison pitted a modest circuit against a neuron signaling at its absolute maximum rate. Comparing the much-faster processor circuits in a supercomputer against a more realistic neuron that is not constantly signaling at the maximum rate, it is completely fair to say that a supercomputer has an overall six-orders-of-magnitude speed advantage over a single neuron. Besides the speed advantage, the computer is a much better, more flexible processor than a single neuron is. Compared to the neuron’s single on-off state, the computer processor has a rich instruction set, including math and memory operations. A neuron has none of these and no memory at all. A computer with a tiny program of just a few instructions can easily simulate a neuron, but a neuron cannot simulate a computer. In every way, a computer is better than a neuron. When comparing a supercomputer to a neuron, because of the roughly six-orders-of-magnitude speed advantage, it is fair to say that one supercomputer is worth one million neurons. The advantage is much greater when taking into account the instruction set and memory which the computer has. However, let’s just use the million-to-one figure. If a supercomputer is worth a million neurons, and a bee brain has only 7,000, then a supercomputer is one hundred times more capable than a bee brain. This means the supercomputer should be able to do the job of the bee brain roughly one hundred times faster than the bee brain itself.
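
The arithmetic behind the “one hundred times” figure, using only the numbers above:

    # One supercomputer is taken as worth a million neurons.
    neurons_per_supercomputer = 1_000_000
    bee_brain_neurons = 7_000
    print(neurons_per_supercomputer / bee_brain_neurons)   # ~143: "one hundred times"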

However, there is no actual working comparison between bee brain and supercomputer to point to, because no bee-mind simulator program has been written. The reason no such program has been written is that it would be a big programming job and there is no good reason to do it. In fact, for those familiar with software engineering and AI (artificial intelligence), it would be a gigantic program. It would certainly take many more instructions than the bee has neurons. The number of instructions needed is certainly more than a million, but probably less than a billion. Even though the bee has only primitive insect eyes, the problem of programming its vision system would be a tremendous project all by itself. With such a large program needed, it would be hard to expect the program to run in real time, even on a supercomputer. A honeybee moves about and does its chores rather quickly. Instead of running one hundred times faster than the bee, our hypothetical program on a supercomputer is much more likely to run one hundred times slower.

The idea that the bee mind is completely physical, is contradicted by the expected poor showing of the computer when given the task of bee-mind simulation. The computer, a physical device, is far superior to the neuron, another physical device. If the computer can’t easily and quickly simulate the bee mind, then the best explanation is that the bee mind is only partly neurons and brain. The rest of the bee mind must be mental structures in E-space. The physical computer would then be trying to simulate happenings in E-space, where the rules are very different from the A-space in which the computer exists.

However, ignoring what has been said so far, if we consider the bee mind as completely physical, we must try to find a place for its memories. It certainly seems the bee has memories, because it must have a memory of its flight to a food source if it is then going to do a dance about the food’s location for the other bees. The bee must also have built-in memories of some form, against which it compares different sensory data, if it is to do such things as find its food. Of course, if someone wants to say that a honeybee has no memories at all, of any kind whatsoever, then we must still address the question of where memories are stored, because no one can deny that we people have memories, including long-term memories.

The whole question of memory has been frustrating for those who have sought its presence in physical substance. Just about any intellectual process requires the use of memory. Memory is a common denominator. In the case of humans there is a rich store of memories, of many different kinds. Besides the more obvious memories of sight, sound, and factual data, there are less obvious memories such as all the motor memories for walking, driving a car, and so on. The importance of memory is well demonstrated by a computer. Without its memory storage, a computer is totally useless.

Because of the obviousness and prevalence of memory, there has been a determined search for it in physical substance. The hypothetical physical memory is called a trace or engram. A Nobel prize is still waiting for the first researcher who can find a real engram. If engrams were real, then it should be possible to selectively destroy them. Many experiments were done with monkeys and other animals in an effort to destroy engrams. For example, an animal would be taught a maze, and then bits of brain would be destroyed in an effort to cause partial memory failure. However, memory seemed to be all-or-nothing, and this agrees with the evidence of human brain damage. Entire, large, related groups of memories have been lost due to damage, but not more-or-less individual and unrelated memories. An amnesiac loses whole blocks of related memories, such as memories of personal past, but no one has, to anyone’s knowledge, suffered brain damage or injury and retained all memories except for a few more-or-less random ones. To give a concrete example, no one has been bumped on the head and forgotten only about his grandmother. If engrams were real, then why don’t they individually disappear? The fact that engrams can’t be made to selectively disappear suggests that they aren’t real. We are once again faced with E-space. It is probably safe to say that all memories reside in E-space.

Besides the experiments that rebuff engrams, there is the weakness of engram theory itself: the currently favored account of how engrams would work is none other than a claim of importance for individual synapses. We have already shown that because the axon carries only one bit of information, the individual dendrites and synapses cannot have meaningful, informational differences from each other. Therefore, both the experimental results and the lack of a credible theory strongly suggest that physical memory is a fiction. The previously favored theory of memory, which claimed that memories were signals circulating through closed loops of neurons, is known to be invalid because all signaling has been stopped in brains by such means as cooling, anesthesia, oxygen cutoff, and blood-flow stoppage, and yet upon reactivation of the brain the memories remain.

At this point we have presented our arguments for E-space participation in the mind. The logic of the arguments has been that when one assumes that the mind is completely physical, then contradictions result. This is the well-known reductio ad absurdum method. It is now time to present an actual theory or model of the human mind using both A-space and E-space. We will justify the theory as to why both spaces are used, and then test the theory against such things as Penfield’s results, mind-affecting drugs, and mental illness.

The theory presented here, is that for the human mind there are many separate, specialized processors existing in E-space, and these processors, or pieces, as we may call them, are tied together by the neurons existing in A-space. A good analogy is to consider an electronic circuit board. On the board are separate IC (integrated circuit) packages. These separate ICs are like the processor pieces in E-space. The thin wires running along the surface of the circuit board connect the ICs together. These connecting wires are like the neurons in A-space.

The cerebral cortex resembles a circuit board. The great abundance of tracts connecting to the cortex is just like the connecting wires of a circuit board. The “columns” of the cortex are much like the connector pins of a circuit board. The IC package is plugged onto the connector pins, and in a similar fashion the E-space processor is plugged onto the “columns” of the cortex, although there may be many separate, individual connections within a single “column.” Whereas an IC may have a few dozen, or a few hundred, pin connections, an E-space processor typically has many more, perhaps having millions of individual connections. However, most of the connections would be communicating low-level data.
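
The circuit-board analogy can be made concrete with a small sketch. Everything here (the Wire and Processor classes, the processor names, the single-link example) is invented for illustration of the model, not a claim about actual anatomy.

    # Sketch of the model proposed above: specialist processors (in
    # E-space) joined by one-way, one-bit "wires" (neurons in A-space).
    class Wire:
        """A neuron: one-way, carrying a single on-off signal."""
        def __init__(self, source, destination):
            self.source, self.destination = source, destination
            self.signal = False

    class Processor:
        """A specialist E-space piece with its own separate memory."""
        def __init__(self, name):
            self.name = name
            self.memory = []                  # each piece stores its own memories
            self.inputs, self.outputs = [], []

    vision = Processor("vision")
    faces = Processor("face recognition")
    link = Wire(vision, faces)                # a tract through the white matter
    vision.outputs.append(link)
    faces.inputs.append(link)
    link.signal = True                        # vision signals the face specialist
    print(link.destination.name)              # face recognition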

As to the number of processors or pieces, it is hard to say. There is probably one piece for each functional area of cortex, but there is uncertainty over how many functional areas there are. A rough guess would be fifty functional areas, and thus fifty E-space pieces. Besides the cerebral cortex, there are a few other places in the brain where specialist E-space processors are probably connected. One such place would be the cerebellum, but in general we will confine our interest to the cerebral cortex, where, no doubt, the majority of the E-space pieces in the brain are connected.

Each E-space piece is a specialist processor concentrating on a set of related functional problems. Among other things, each piece would have its own separate memory storage. None of these pieces are self-aware because self-awareness is a special quality of a special E-space object called the soul. There is only one soul per human mind, and the soul will be covered in detail in a later chapter. However, the soul still has to make its connection with the brain, and through the neurons of the brain the soul is in communication with at least some of the E-space processors. As for just where in the brain the soul is connected, the most likely site would be either the reticular formation in the brainstem, or one of the thalami. Both possible sites are known to be important to the general level of consciousness.

We now have our theory of mind using both A-space and E-space. A good question is why the mind is divided in this way. Why isn’t more of the mind physical? This is easy to answer. Although neurons make good “wires,” they are inadequate as building blocks for a real data processor. The neurons are used for what they are good at, and that is making connections.

A more difficult question is why the mind isn’t more fully in E-space. Obviously, within an E-space processor there are many internal connections taking place in E-space. So why is there a need for the connections between different E-space processors to be made by neurons existing in A-space? What this question suggests is a simpler model of mind where there is just one large, do-everything processor in E-space, complete with direct E-space connections to the soul, that would have only sensory neurons as input and only motor neurons as output. This design would seem to have the advantage of greater damage resistance, because neuron connections are kept to a minimum. So the question is, why is the more complex, less damage-resistant design of a mind in pieces preferred?

This question is basically an engineering question. Although greater damage resistance is an advantage, it must be outweighed by advantages of the alternative design in actual use. One reason the E-space portion of the mind is in many separate pieces, could be the same reason the circuits of an electronic system are often spread over many different silicon chips, instead of being placed all on one large chip. If there is a flaw in one of the circuits, then the chip containing that flawed circuit may have to be discarded or replaced. If there are many small chips and there is one circuit flaw, then only one of the small chips is defective and can be individually replaced, but if there is only one large chip, then that whole large chip must be replaced. This is the basic problem of putting all of one’s eggs in one basket.

Another possible reason for having the E-space portion of the mind in pieces, is that some of the pieces can remain with the brain while other pieces are temporarily removed away from the physical body. This will be discussed in a later chapter. A third possible reason is to get a better overall synchronization between body and mind. The neuron connections probably represent a bottleneck that helps to slow down the E-space pieces and keep the mind running at a rate that meshes well with the physical body and its world. A fourth possible reason is that separate functional pieces promote interchangeability. Some of the E-space pieces may be very well standardized and used in other animals besides just the human animal. Also, the capability of one individual mind can differ from another’s simply by using a different piece, while keeping the other pieces the same.

At this point we have presented our theory of mind and justified it. We will now assume this theory is correct and discuss a few things in light of it. Wilder Penfield was a well-known neurologist. During brain surgery, he would, as an experiment, touch different places on the cerebral cortex with an electrode and ask the conscious patient to describe the reaction, if any. Once in a while during temporal-lobe stimulation, a patient would experience an impressive, vivid replay of some personal experience from memory. This sort of reaction is easily explained. The electrode causes the neurons near it to signal. If the electrode were to trigger a neuron that is a high-level data input to an E-space processor holding personal memories, and that input neuron, when signaling, happened to mean something like “Recall personal experience and send to awareness,” then it would be done. As to what particular experience would be sent, that could well be random, because the other data-input neurons, which would carry the identity of the experience to be recalled, would not be sending meaningful signals. The same can also be said for how vividly the recalled experience may be presented to the awareness. Without coordinated, controlling input, the output is unpredictable. Occasionally, such random stimulation would produce unusual results, as Penfield found. Every E-space processor is going to connect to neurons that carry signals which direct the action of that processor. The individual processor will do what it can to satisfy the input signals, regardless of why the signals are happening.

There are many physical substances which, when taken into the body, can cause a reaction somewhere in the nervous system. Such substances are drugs. Some of the better-known drugs are alcohol, caffeine (in coffee), nicotine (in tobacco), morphine (in poppies), tetrahydrocannabinol (in marihuana), LSD, and the antipsychotic drugs chlorpromazine and haloperidol. All drugs that affect the nervous system do so by either increasing or decreasing the likelihood that certain tracts or pathways will signal. Not only are there different neurotransmitters in the nervous system which allow different tracts to respond differently to the chemical presence of a particular neurotransmitter, but many neurons also have receptors for, and respond to, other chemicals which are not neurotransmitters.

Because not all neurons have the same reaction to a particular chemical, a drug can have a selective effect. In fact, the drug takes advantage of a deliberate bodily system. Functionally related neurons or tracts often have their own chemical stimulants or depressants, so that the mind or body can itself selectively influence a function by just dumping the right chemical into the bloodstream. A whole subsystem in the nervous system can be either boosted (signaling easier) or depressed (signaling harder) relative to the other brain subsystems by the simple expedient of creating and releasing into the blood a chemical which targets that subsystem.

All that is necessary for a drug to work is that it either duplicate or more-or-less closely approximate the actual chemical structure or shape of one of the selective-control chemicals used in the nervous system. Morphine, for example, has a shape which somewhat resembles the enkephalin chemicals which the body actually uses. Neurons which can signal pain have receptors for enkephalins. The presence of enkephalins will depress or inhibit the neuron’s ability to signal. The morphine molecule, although not an enkephalin, will still fit the neuron’s receptor for enkephalin, and cause the inhibition because of the similar shape. It is not hard to see why the body might have such control over the pain subsystem. Pain is often the result of bodily injury. If the injury happens during a fight or struggle, and enkephalins are released, then there would be a chance to either finish the fight, or flee to safety, before there is disabling pain from the injury.

It is not known how many chemically-affected subsystems there are in the human nervous system, but there are certainly many. Having a chemical control on a subsystem is like having a volume switch on a TV or radio. It adds flexibility. It is a good engineering practice. Instead of a subsystem having the same relative importance all the time, its importance can be turned down (volume down) or turned up (volume up) so as to meet the special needs of the moment.
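
The volume-switch idea can be sketched in a few lines (the gain values and threshold are invented for the example): a blood-borne chemical scales how easily one subsystem signals, leaving the other subsystems alone.

    def signals(stimulation, gain, threshold=1.0):
        """A subsystem fires when its gain-scaled stimulation beats threshold."""
        return stimulation * gain >= threshold

    pain_gain = 1.0
    print(signals(1.2, pain_gain))   # True: pain gets through normally
    pain_gain = 0.5                  # enkephalin (or morphine) turns the volume down
    print(signals(1.2, pain_gain))   # False: same injury, no disabling pain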

This practice of chemically-affected subsystems is not unique to humans, but rather is a very common practice throughout the animal kingdom. An interesting aside is the reason why many plants make nerve-affecting chemicals. We have already mentioned the coffee, tobacco, poppy, and marihuana plants. A plant has no nervous system of its own, so the nerve-affecting chemical it makes can’t be for its own use. Instead, the chemical is made to serve as a kind of poison for animals that would eat the plant. So the plant produces the chemical as an act of self-defense in an effort to discourage different animals, or perhaps insects, from eating it.

The power of drugs to affect the mind is easily accommodated by our theory. Because the neurons serve as connectors between all the E-space processors, and also between E-space processors and the awareness or soul, the neurons obviously have great importance to the mind as a whole. Whatever affects the ability of a group of neurons to signal is going to affect the mind, although perhaps in an unconscious way.

A rather spectacular class of drugs are the hallucinogens. Mescaline comes from the peyote cactus and causes visual hallucinations when taken. The hallucinations are not completely random, but instead tend to fall into distinct categories or constant types. In general, the hallucinations caused by mescaline are typically colorful geometric designs. Among the E-space processors it is easy to imagine one that is especially concerned with spatial symmetry and symmetrical shapes and patterns. Mescaline probably just turns the output volume way up on this particular processor.

Perhaps the most famous hallucinogen is LSD. LSD does not come from a plant. Instead, it is made in a laboratory. A dose of LSD will not only typically produce geometric patterns such as mescaline does, but it will also often produce a wide range of very complex imagery. Pieces of personal memory, such as images of faces and places, can appear in the imagery, although often in cartoon or caricature form. Images from memory are often combined with geometrical patterns. The subject content of LSD hallucinations can be influenced to some extent by suggestion. If a hallucinating subject is asked to think of cars, then imagery containing cars is more likely. LSD hallucinations can become so intense that subjects sometimes report that the hallucinations have completely preoccupied their awareness and they no longer feel connected with their bodies.

The LSD experience is easily explained using our theory. Just as mescaline turns up the output volume for the E-space processor concerned with spatial symmetry, so LSD must be doing the same. However, LSD must also be turning up the input volume for signals entering the symmetry processor and coming from a different processor storing everyday images such as faces and places. This second processor can still receive normal controlling input, and that is why a verbal suggestion can influence the output which it sends to the symmetry processor. As for when the hallucinating subject reports that the hallucinations are becoming all that he is aware of, this is just a matter of the output volume being turned up so high that other signals to the awareness, such as normal sensory signals, are drowned out.
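
In terms of the volume model sketched earlier, the drowning-out effect can be illustrated like this. The signal names and numbers are invented, and the rule that awareness notices only signals near the loudest one is a simplifying assumption.

    def noticed(signals):
        """Awareness notices only signals within range of the loudest one."""
        loudest = max(signals.values())
        return [name for name, level in signals.items() if level >= loudest * 0.5]

    normal = {"senses": 1.0, "symmetry": 0.3, "imagery": 0.3}
    tripping = {"senses": 1.0, "symmetry": 9.0, "imagery": 7.0}
    print(noticed(normal))     # ['senses']
    print(noticed(tripping))   # ['symmetry', 'imagery']: the senses are drowned out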

One thing probably true about E-space processors, is that they are never truly off. They probably are always running to some extent. Just as a car sits at a traffic light, and its engine idles, so the E-space processor probably idles. There may always be some more-or-less random and meaningless output being generated by the processor, although not loudly enough to be signaled elsewhere. A hallucinogen such as mescaline, may, in effect, just be showing to the awareness what that E-space symmetry processor is generating while it idles away, waiting for meaningful work to do.

It is estimated that schizophrenia affects one percent of the world’s human population, and is found in all human societies. The term schizophrenia is a sort of catchall for chronic mental illness. One common feature of perhaps the majority of schizophrenics, is that they hear voices. Far from being friendly, the voices are hostile. We will let clinical psychologist Wilson Van Dusen explain: “They will suggest lewd acts and then scold the patient for considering them. They find a weak point of conscience and work on it interminably. For instance, one man heard voices teasing him for three years over a ten-cent debt he had already paid. They call the patient every conceivable name, suggest every lewd act, steal memories or ideas right out of consciousness, threaten death, and work on the patient’s credibility in every way.”[2] Needless to say, this description sounds horrible. In olden times it was commonly thought that these voices were coming from truly separate, demonic spirits. Nowadays, the voices are considered to be coming from a patient’s own mind.

There is good evidence that the voices come from the mind. To quote Van Dusen again: “The vocabulary and range of ideas of the lower order [the voices] is limited … They never have a personal identity though they accept most names or identities given them. They either conceal or have no awareness of personal memories.… Their voice quality can change or shift, leaving the patient quite confused as to who might be speaking.”[3] Regarding the voices, Van Dusen concludes, “They seem imprisoned in the lowest level of the patient’s mind, giving no real evidence of a personal world or any higher-order thinking or experiencing.”[4] The pioneer drug chlorpromazine, derived from a family of compounds first used as dyes, was found to be effective in treating schizophrenics. Among other things, it could stop the voices. Chlorpromazine, and later such drugs as haloperidol, were responsible for the widespread deinstitutionalization of mental patients that began in the 1950s.

In the light of our theory, it looks like the voices are coming from an E-space processor. The drugs shut the voices up because they probably inhibit the neurons which carry the signal of the voices from the processor to the awareness. (A lesser possibility is that the drugs work because they affect an input signal which stops the creation of the voices by the processor.) But the question still remains: why is the processor acting so outrageously? The actual processor involved is probably one responsible for what is known as conscience. Conscience gives one a sense of right and wrong about one’s actions or possible actions. However, in the case of mental illness, it seems we are dealing with a conscience processor run amok and perhaps quite defective. The conscience processor is probably located in the forward or frontal section of the cortex, and there may actually be more than a single processor involved.

If an E-space processor is defective and malfunctioning, the question is: why is this particular processor defective? Why isn’t mental illness happening where, for example, vision or motor-control processors are involved? Part of the answer as to why the conscience processor seems to be the main culprit in mental illness, may be the very newness of the processor itself. In the long history of large-animal life, which spans several hundred million years, it is probably safe to say that the human animal, which is only about 100,000 years old, is the first animal to use the conscience processor which it has. Probably there are several processors used in the human animal that are very new, and to a large extent, experimental. A new E-space processor is probably like a large new piece of computer software. The software may work pretty well most of the time, but there are invariably some bugs which can cause the software to fail completely or produce wrong results. The bugs occur in the first place because the problem-domain which the software was supposed to fully cover, is often too complex for the designers to see every interaction that must be programmed for. Likewise, the conscience processor is covering a complex problem-domain and because of its newness, probably contains design flaws which haven’t been worked out yet. Unfortunately, when an E-space conscience processor fails due to a bug, the result may be some form of schizophrenia. However, it may also be that some schizophrenia is strictly a neurotransmitter problem: the persistent voices are produced as the processor idles and were never meant to be heard by the awareness, but a neurotransmitter fault causes them to be heard.


footnotes

[1] McDaniel, Jay. "Union woman, missing for two days, is found in Lyndhurst with amnesia." Star-Ledger [Newark, New Jersey] 23 Nov. 1987, p. 17.

[2] Van Dusen, Wilson. "Hallucinations as the World of Spirits." Frontiers of Consciousness. Ed. John White. Julian Press-Crown Publishing Group, New York, 1985. p. 57.

[3] Ibid., p. 58.

[4] Ibid.


Chapter 3: Evolution

The physical universe, in general, shows a very low level of self-organization. Recall that the total universe is composed of both A-space and E-space. We say that A-space life is exceptional because it is getting a helping hand from E-space. So let’s ignore life for the moment and consider only the physical half of the universe, A-space, by itself.

The observations of the astronomers show a universe where a single, large-scale, one-way, indiscriminate force is at work. A galaxy typically has huge, amorphous gas clouds, and tiny balls of condensed gas called stars and planets. Gravity takes part of a gas cloud on a one-way trip of contraction and coalescence. The same spherical shape is always the result. If the gravitationally-collapsed ball is large enough, thermonuclear ignition results, and we have a star. The radiant energy of the star helps to sweep small debris away from the star and just leave the larger objects such as planets. Because gravity causes heavier elements to sink relative to lighter elements, there will have been more heavy elements close to the star than away from it when the star formed. This helps to explain why the inner planets are more dense than the outer planets, just as the Earth with all its rock and iron is much more dense than the giant gas ball of Jupiter. And the same fact of heavier elements sinking, also accounts for the gross structure of the individual planets. In the Earth, the heavy iron and the heat-generating radioactive elements sank to the center with the lighter rocks on top, and the still lighter water and gases at the very top, forming the oceans and atmosphere.

Nowhere do astronomers see any exceptions to the rule of gravity and its constant, uniform results. The physical universe at large shows only the simplest of self-organizing principles at work.

The observations of the chemists show that the elements of the physical universe like to find a stasis or equilibrium position of minimal energy, and then stay there. As an experiment, one can mix any combination of elements together, apply energy such as heat, and stir. The elements may or may not react with each other, and the mixture will soon reach equilibrium where reactions have ceased. The same experiment always gives the same results. The forces that control chemical reactions are like gravity. They are simple and one-way. The one-way flight to equilibrium is self-organization at its lowest.

The physicist has no trouble understanding the observations of the astronomers and chemists. The physicist’s observations show that there are only a few fundamental forces at work in the physical universe, and none of these forces are directed at building large, complex structures. No physicist would claim that a fundamental force acts as though it were intelligent. At the moment, there are four fundamental forces which the physicists accept. They are gravity, the strong nuclear force, the weak force, and electromagnetism. None of these forces are going to deliberately design and build a large-scale structure that has some distinct, large-scale purpose. By the same token, none of these forces are going to act on a substance differently just because that substance may be part of a large-scale structure that has some large-scale purpose. For example, the forces of gravity, the strong nuclear force, the weak force, and electromagnetism, will not discriminate between an atom that forms part of a telephone, and an atom that forms part of a rock. All atoms get treated the same regardless of their structural context.

Still ignoring life and its handiwork for the moment, the ordinary man-in-the-street will agree with the astronomers, chemists, and physicists. Looking about him at such things as rocks, clouds, and streams of water, he sees either no movement, or one-way movement of essentially structureless things that show no evidence of any complex self-organization.

Of course, standing in sharp contrast to the normal physical universe, is life itself. Obviously, in life we see very complex self-organization complete with purpose. The big question is what caused this life? For those who want to maintain a completely physical basis for life, one approach would be to find examples of comparable self-organization in some non-living things. If this could be done, then the complexity of life would seem less special.

We are all familiar with crystals and their simple geometric shapes. There are also snowflakes, which are ice crystals. Although pretty, the level of self-organization in crystals is still extremely low when compared to a bacterium. Nobelist Ilya Prigogine did his best to find examples of complex structure in non-living things. His two best examples, it seems, are these. The first example is that of water. When a cup of water is heated, at certain critical temperatures a stable pattern of convection currents will appear on the surface as a pattern of rough hexagons. The second example has to do with a chemical reaction discovered in 1958. Known as the Belousov-Zhabotinsky reaction, it involves a mixture of malonic acid, bromate, cerium ions, and sulfuric acid. With the right mix and the right critical temperature, pretty circular and spiral-shaped patterns form on the surface. Perhaps the only thing that would seem mysterious in these two examples is the hexagon shapes in the water example. When many side-by-side, same-diameter columns are each pushing outward, the rough hexagon shape occurs naturally because surrounding any one column will be six other columns, and thus six roughly-equal walls will result. It is interesting to note that this is the same explanation for the hexagonal cells of honeybee comb.

In spite of best efforts, there are no examples of non-living things that can compare with a bacterium. Thus, living things stand truly apart from the rest of the physical universe in terms of their complex self-organization. Because there are no large-scale, selectively-organizing forces in the physical universe, there was only one possibility left open to those who would keep life completely physical. The only possibility left was randomness.

Randomness was accepted only as a last resort. Charles Darwin’s book, On the Origin of Species, appeared in the mid 1800s. He advocated random changes coupled with a struggling-for-survival population that would welcome and spread random changes that helped it to survive, while rejecting all other changes. He called the change-selecting mechanism “natural selection.” Darwin claimed that random change or mutation, coupled with natural selection, could account for all life, and nothing else would be needed. This claim, or theory, is known as Darwinism. Thus, Darwin comes down completely on the side of A-space when explaining earthly life.

Darwin was not happy with the randomness part of his theory. It was an easy target for his critics, but he clung fast to randomness anyway rather than give any opening to the Bible-thumping advocates of Genesis-style creation. Genesis is the first book of the Bible, and a tribute to pre-scientific ignorance. Scientist Fred Hoyle has this to say about Darwinism versus Christianity:

Undoubtedly, however, the biggest thing going for Darwinism was that it finally broke the tyranny in which Christianity had held the minds of men for many centuries. Christianity as it is practiced today is a rather mild social philosophy, but in medieval times it bestrode in the most dreadful way the whole range of intellectual thought. It did this by imposing on the brain a set of concepts that were false and by then insisting (on pain of extreme punishment) that all subsequent thinking be made consistent with those false concepts.[5]

Darwinism probably wouldn’t have been so bold in its claims about randomness if it hadn’t been facing the fundamentalist creationists. The biblical creationists are still active today. If their position were something like: “We accept science, but do believe that life is partly non-physical,” then we could sympathize with them. Unfortunately, their position is quite different, and gives any talk of creationism a bad name. The fundamentalist position gives us a 6,000-year-old Earth, which had all life on it created more-or-less instantly and effortlessly by an all-powerful God. Among other things, the 6,000-year-old Earth has experienced a gigantic flood, not because there is any evidence for it, but because the Bible tells us so. Needless to say, the fundamentalist position represents a retreat away from knowledge, and towards ignorance.

Darwinism is a theory of how evolution has happened. One should not confuse Darwinism with evolution itself. The idea of evolution is much older than Darwinism. Evolution simply states that new living things are derived from older living things. Often the derivation involves an increase in complexity, but this is not a requirement of evolution. The only alternative to evolution, is that every time a new living thing appears, it does so as an act of spontaneous generation. Can anyone seriously believe that the first elephant was generated complete out of some mud or such, or even out of thin air? Isn’t it so much easier to believe that the first elephant was born of an animal that already looked more like an elephant than any other animal at that time?

Spontaneous generation used to be believed possible for very small animals, such as one-celled animals, but the experiments of Louis Pasteur in the mid 1800s put a final end to that. No life in the world today forms by spontaneous generation. Instead, there is always a living parent, or egg from a parent. The complete lack of spontaneous generation, combined with the very long history of life on Earth, leaves no doubt that evolution has happened. Because the fossil record shows the complexity of life decreasing the further back one goes, it is reasonable to assume that all planetary life has derived from a single, first, simplest, form of life.

The origin of that first, simplest form of life, presents a very special problem for all those who accept only A-space. E-space, unlike A-space, does have large-scale, selectively-organizing forces. Thus, the first life was designed in E-space based on knowledge of A-space, and then implemented in A-space as a one-time act of spontaneous generation, because there was no other way to do it. From then on, once the first life was established, it would always be much easier and more economical to implement any new life-design by making a variation in an already-existing living thing. Although E-space has organizing abilities, there is no reason to believe that the design of the first living thing was an easy task. This broader question of just how E-space is responsible for planetary life will be addressed in the “Gaia” chapter. However, it is perhaps worth mentioning now that one should not think that an entire universe of E-space was devoted to establishing life on our planet. Instead, only the E-space world that exists and moves in tandem through space with our physical world, has been concerned with this planet’s life.

The quest of those who would make the origin of life a completely physical event, has been fruitless. Darwinism itself does not enter the picture, because in the beginning there was no natural selection. In the beginning there was only chemistry, and randomness, to play with. At present, the simplest living thing is a virus. However, a virus is a parasite which requires a host cell for it to be able to reproduce itself. Because in the beginning, there were no host cells, the first living thing could not have been a virus. A fundamental requirement of the first life is that it be able to reproduce itself. Without self-reproduction, anything else that might possibly develop is meaningless, because it’s condemned to disappear without a trace. The simplest living thing in existence today, that can self-reproduce without the need to parasitize other living things, is the bacterium.

Any self-reproducing thing in the physical universe that is going to be more than a crystal, must meet certain theoretical requirements. The self-reproducing thing must have a wall to protect and hold together its contents. Within the walls it needs a power plant to run its machinery. Machinery is needed to bring in raw materials from outside the walls and transform the raw materials into all the components needed to build a copy of the self-reproducing thing. The machinery requires some sort of guidance since there must be some coordinated assembly of the manufactured components into the new copy of the self-reproducing thing. The guidance mechanism cannot be too trivial, since its complexity must include a construction plan in some form for the self-reproducing thing. Thus, the complexity of the guidance mechanism is somewhat proportional to the structural complexity of the self-reproducing thing.

The requirements of a wall, power plant, machinery, and guidance mechanism, all working together to bring about self-reproduction, are not easily met. Consider the fact that there are no man-made, self-reproducing things to point to. The bacterium is only simple when compared to other, larger living things. The bacterium is extremely complex when compared with any crystal, or when examined as a collection of molecules.

The guidance mechanism for a bacterium is considered to be, for the most part, its DNA. DNA is a long molecule composed of nucleotides strung together like links on a chain. There are four nucleotides used in DNA, so each link will be one of four possible nucleotides. The order or sequence of the nucleotides is very important, as for one thing, they code the structure of other molecules such as proteins. A bacterium will typically have many strands of DNA, containing all together hundreds of thousands, or millions, of ordered nucleotides. Besides the DNA, the most prominent class of molecules is the protein class. The proteins are the machinery and power plant. A protein is a long, although folded, molecule, somewhat similar to DNA. Just as DNA is composed of a sequence of smaller building blocks, so is a protein. However, whereas the building blocks of DNA are four nucleotides, the building blocks of proteins are twenty amino acids. Although a protein has more choices per link than DNA, a protein rarely exceeds several thousand links. A very short protein of less than twenty links is called a peptide. A bacterium typically has dozens of different peptides. Of the longer proteins, the bacterium has several thousand different ones. The average length of these different proteins would be somewhere in the hundreds of links. Just as with DNA, the order of the links is known to be important. The wall of the bacterium is made of chained sugars. Within the wall is the DNA, proteins, other molecules, and a great number of water molecules which give the DNA, proteins, and so on, the room they need to do their work. About two-thirds of a bacterium’s mass is water.

Having taken a brief look at a bacterium from a molecular standpoint, we can actually do some meaningful calculations on the odds of a bacterium forming randomly. Let’s imagine a rich soup of the four nucleotides, and twenty amino acids, and whatever else the bacterium might need. The odds against getting a precise DNA chain of 100,000 links, would be 4^100,000 to one, or roughly 10^60,200 to one during a single trial of allowing a DNA strand to form randomly to a length of 100,000 links. The odds against getting a precise set of one thousand proteins of one hundred links each, would be 20^100,000 to one, or roughly 10^130,100 to one during a single trial of allowing a thousand proteins to form randomly to the set length of one hundred links each. If one were to be very generous and assume a short DNA strand of only 10,000 links, and that at any link, any one of two nucleotides would do just as well, then the odds against getting the DNA chain would be only 2^10,000 to one, or roughly 10^3,010 to one during a single trial. We will use this smaller number. We don’t need the higher numbers of 10^60,200 and 10^130,100, and we don’t need to consider the fact that a DNA strand by itself, even in a rich soup, does not make a self-reproducing thing.

To put our low number of 10^3,010 in perspective, we need only compare it against a few cosmic numbers. There are only about 10^80 atoms in the physical universe. If instead there were ten times the number of atoms, then that would only make the number 10^81. The age of the universe is only about 10^18 seconds. If one were to suppose that all the atoms of the universe could make roughly 10^71 flasks of nucleotide soup, and then allow a million trials per second for the entire age of the universe, this would only give us 10^95 chances or trials to get a particular DNA chain. Our simple chain needs 10^3,010 trials before it has an even chance of happening. Having 10^95 trials could get us just about any exact DNA sequence of fewer than 160 links. However, no one can seriously believe that there theoretically exists an exact DNA chain of less than 160 nucleotides, which in a soup would make a self-reproducing thing. Such a tiny DNA chain could only code for one small protein, or a few peptides at most. Comparing 10^95, which is extremely generous on the random-creation side, with 10^3,010, which is extremely generous on the needs side, does not give a mathematical zero of likelihood, but it does give a practical zero. There is no chance that either a known bacterium, or an imaginary, much-simpler bacterium, could have been created by randomness, even when allowing a rich soup and a lot of time.
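
For readers who want to check the arithmetic above, here is a short Python sketch that recomputes the exponents using base-ten logarithms. Every input figure is one already given in the text; nothing new is assumed.

```python
import math

# Odds against one precise DNA chain of 100,000 links,
# with 4 nucleotide choices per link: 4^100,000.
dna_exponent = 100_000 * math.log10(4)       # about 60,206, i.e. roughly 10^60,200

# Odds against a precise set of 1,000 proteins of 100 links each,
# with 20 amino-acid choices per link (100,000 links in total): 20^100,000.
protein_exponent = 100_000 * math.log10(20)  # about 130,103, i.e. roughly 10^130,100

# The generous case: 10,000 links, either of 2 nucleotides acceptable per link.
generous_exponent = 10_000 * math.log10(2)   # about 3,010, i.e. roughly 10^3,010

# Trials available: 10^71 flasks x 10^6 trials/second x 10^18 seconds.
trials_exponent = 71 + 6 + 18                # 95, i.e. 10^95 trials

# Longest exact chain 10^95 trials could find with even odds: solve 4^n = 10^95.
max_links = 95 / math.log10(4)               # about 158, i.e. fewer than 160 links

print(dna_exponent, protein_exponent, generous_exponent, trials_exponent, max_links)
```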

As if to add insult to injury, scientific opinion of early-Earth conditions now holds that the soup never existed, at least not on a large scale. The sort of atmosphere needed to help generate a soup, would be an atmosphere of hydrogen-rich compounds such as methane and ammonia. Such an atmosphere, if it ever did exist, would have been quickly destroyed by sunlight. Instead, the early atmosphere was probably a mix of nitrogen, carbon dioxide, hydrogen sulfide, and water. Such an atmosphere won’t generate a soup. However, even if one were to assume that the building blocks of nucleotides, amino acids, and such, were still being generated somehow, it does not seem that they could have built up in concentration in water. Instead, the building blocks themselves would be unstable and would either clump and settle as tar, or revert to methane, or be absorbed by minerals. Overall, the prospects for an abundant, rich soup on the early-Earth, look very poor. One could also add that in spite of the impossible odds and the lack of soup, the first bacteria didn’t seem to waste much time making their appearance. Bacteria have been found as fossils that are 3.5 billion years old. The Earth is only 4.6 billion years old.

Confronted with the hopeless odds, lack of soup, and quick appearance of bacteria, scientists have reacted in different ways. Many have simply denied there is a problem. They use a backwards reasoning based on a leap of faith. They state that because there is life and it can only have a physical cause, then there must have been a soup and the odds were beaten. Because life exists now, then these other things must have happened. The obvious flaw in their reasoning is that they are not questioning their assumption that the origin of life can only have a physical cause. In an effort to improve the overall chance for life, some scientists have embraced an extraterrestrial origin for it. Both Fred Hoyle and Nobelist Francis Crick have advocated this. However, the chances do not improve much, even when the whole physical universe is thrown in. It is interesting to note that Fred Hoyle finally abandoned randomness as life’s designer, and decided that some form of intelligent design was necessary. He speculates that perhaps the intelligent design was done by a computer. Of course, he then has the problem of finding the designer of the computer. Hoyle knows this, but he still prefers intelligent design over randomness. As we know, the place to look for an intelligent designer is not A-space, but E-space.

Biologists often make a distinction between small evolutionary change, and large evolutionary change. Microevolution refers to the evolutionary changes within a species. Macroevolution refers to the evolutionary changes that produce new species. Megaevolution refers to the evolutionary changes that produce new genera, families, orders, classes, phyla, and kingdoms. A species is a group of common organisms that can interbreed with each other, producing fertile offspring. Mankind, for example, is considered to be a single species.

We want to determine just how evolution has happened. The answer presented here is not a simple one. We neither embrace nor condemn Darwinism. Instead, we want to find its rightful place. Darwinism is part of the answer as to how evolution happens, but it is not the complete answer. To help us find the complete answer, we must introduce a biological concept. It is the sort of concept which is really no more than a focusing on differences that are out in plain view for everyone to see. The problem for us, at the moment, is that the categories of taxonomy—species, genera, families, and so on—are not the best way to try to pigeonhole Darwinism. However, biologists do use the taxonomic categories when they debate the range of Darwinism. For example, some biologists say Darwinism can only account for microevolution; others say it can account for both microevolution and macroevolution but not megaevolution; and still others say it can account for all evolution. It would be nice if Darwinism fitted into one of these three schemes, but it doesn’t.

The concept we need, is that of novelties. By a novelty we mean a separate and independent functional structure or mechanism found in one or more organisms. The dictionary meaning of novelty is something new and unusual, an innovation. We want to give this word a specific biological meaning. A few examples of biological novelties would be the insect eye, a muscle cell, bone, calcium shells, the circulatory system, pigment to dye exposed surfaces, a feather, a hand, hair, gills, eggs, sexual reproduction, an intestinal tract, DNA, proteins, neurons, cuticle, skin, defensive spines, particulate inheritance, kidneys, liver, spleen, cartilage, teeth, vocal cords, wings, and beaks. A complete organism, such as a dog, or a man, is not a novelty. Instead, they are composed of novelties.

To be a novelty, there must be a definite function. Thus, a single molecule such as acetylcholine, is a novelty, because its function is to serve as a neurotransmitter. Most novelties are very old, often hundreds of millions of years old. For any novelty, there would always be a time in Earth history when it made its first appearance in planetary life. A good question is: can Darwinism account for novelties?

Darwinism, with its natural selection, could certainly spread a novelty throughout a population once the novelty has appeared, but can it also make the novelty appear in the first place? For creation, Darwinism has only randomness to work with. In other words, it is up to the novelty to appear on its own, by chance. What Darwinism as a theory tries to do when confronted by a complex novelty, is break it down into simpler novelties which would seem more likely to occur randomly. There are many thousands of novelties that by themselves cannot be expected to appear for the first time by chance. Darwin himself, unable to explain the first appearance of even one novelty, resorted to a clever dodge. Instead of the burden of explaining novelties being on his theory where it belonged, Darwin shifted the burden to his would-be critics. He claimed they lacked sufficient imagination to explain novelties by randomness. However, Darwin claimed his theory explained all life and thus every novelty. The burden is Darwin’s.

It must be that novelties are designed in E-space, although it is possible that an occasional novelty has happened randomly without the help of E-space. It is interesting to note that a great many books and articles have been written against Darwinism, and the key argument is always the impotence of randomness when confronted by novelties. Darwin’s clever dodge has worked to perfection. His critics all do their best to present what they consider killer examples of complex novelties which can’t be broken down into simpler novelties. In the meantime, while Darwin’s critics are busy presenting their negative theses, Darwin’s supporters dwell on the positive, successful side of Darwinism, which is natural selection.

Natural selection itself is a novelty, albeit an abstract one. Its normal source of random changes in a sexually-reproducing organism, is provided by particulate inheritance, which is another novelty. The whole process of sexual reproduction is geared to providing a controlled, random variability within an interbreeding population. Natural selection then decides which variabilities get favored at any one time, depending on environmental factors.

Particulate inheritance works as follows. An individual organism has many, perhaps many thousands, of different characteristics which are defined by genes. Genes are coded sections of DNA. Genes are the particles of inheritance, hence the term particulate inheritance. For every characteristic, there is always an even number of genes carried by an individual organism to control that characteristic. The smallest number of genes defining a characteristic would be two, the next number would be four, and so on. Although each individual organism carries a small set number of genes for a given characteristic, there may actually be a much larger number of different genes defined for the characteristic in the total gene pool of the population. For example, characteristic X may have four genes in any individual, but there may actually be a hundred different genes for characteristic X more-or-less randomly distributed throughout the population.

When a sex cell is created, for each characteristic exactly half of the required number of genes are carried in the sex cell. For example, if a given characteristic is represented in an individual by six genes, then that individual’s sex cell will have only three genes for that characteristic. A random process will determine which three of the six genes are found in a particular sex cell. When the male sex-cell unites with the female sex-cell, there is once again a full complement of genes for each characteristic. When a new organism develops from the united sex cells, there are very definite rules that determine just how the genes for a given characteristic will be weighed together, and collectively expressed as the characteristic. The basic rules were discovered by Gregor Mendel and are sometimes referred to as Mendelian laws.
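
As an illustration of this halving and reuniting, here is a minimal Python sketch of the six-gene example just given. The gene names and the two parents' gene sets are invented purely for illustration.

```python
import random

# Hypothetical parents, each carrying six genes for one characteristic.
father_genes = ["g1", "g2", "g3", "g4", "g5", "g6"]
mother_genes = ["g2", "g4", "g7", "g8", "g9", "g10"]

def make_sex_cell(parent_genes):
    # A sex cell carries exactly half the parent's genes for the
    # characteristic, with the half chosen by a random process.
    return random.sample(parent_genes, len(parent_genes) // 2)

# The union of the male and female sex cells restores the full
# complement of six genes for the characteristic.
offspring_genes = make_sex_cell(father_genes) + make_sex_cell(mother_genes)
print(offspring_genes)  # e.g. ['g1', 'g4', 'g6', 'g7', 'g2', 'g9']
```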

Individual genes are novelties, particulate inheritance is a novelty, the Mendelian laws are a novelty, and sexual reproduction as a whole is a novelty. From individual genes to sexual reproduction, all are the product of design occurring in E-space, although an occasional gene may be randomly mutated, such as by a copy error or radiation damage, and then possibly enter the population gene pool. Natural selection is another novelty and is as powerful a selection mechanism as its proponents claim. However, for natural selection to work, it obviously needs a source of variety or variability. The variety comes from the cooperating novelties of genes, particulate inheritance, Mendelian laws, and sexual reproduction.

The whole design of controlled variety sifted by natural selection, is one large mechanism the purpose of which is to fine-tune a population to its environment. When the environment changes, the fine-tuning mechanism is automatic and can bring about a corrective adjustment. This automatic mechanism relieves E-space of the burden of having to constantly evaluate the fitness of a population, and then make special changes when needed.

There are several ways the environment can change for a population, but perhaps the most important way is when part of a population moves, or tries to move, into a new territory where conditions are different. The different conditions could involve many things, such as climate, competitors, food supply, predators, and diseases and parasites. Subjected to the new conditions, natural selection may well favor some different variations of existing characteristics. For example, the presence of a visually-acute predator may put the pressure on for a better-camouflaged population. The presence of a new, unexploited food source will favor changes to the population that allow it to consume the new food. The case of Darwin’s finches is a classic example. At some point in the history of the Galapagos Islands, either a lone finch, or a pair of finches, made its way to the islands and found many unexploited food sources (although perhaps not all at once), and a severe lack of predators. The result was that several different versions of the bird developed to best exploit the different food sources. One common way the versions differ, is in their beaks. Some have ordinary finch beaks, others have parrot-like beaks, some have long, slender beaks, and so on. Because the beak shape is important in feeding, it is not hard to see why different beaks have developed and been preserved by natural selection. For example, the parrot-like beak is good for eating seeds, while the long, slender beak is good for picking insects off of trees. However, before natural selection can favor certain beak designs, it is first necessary that the beak design in the original population can vary. This raises a general question: just how much variety within a population is possible?

There is actually a lot of potential variety in many populations. The exact nature of the variability of individual characteristics, and the ease with which the different varieties can appear, is determined by the number and type of genes involved, the distribution of these genes in the population, and how these genes are interpreted by both the Mendelian laws and any other laws or rules that might apply. In general, let’s consider the question of species fixity. A species is a reproductively-isolated population. Many species just don’t show much overall variety. For example, common forest animals such as deer, mice, moles, raccoons, and such, just don’t seem to vary within their species. If you’ve seen one deer, mouse, mole, or raccoon of a certain species, then you’ve seen them all for that species. The evidence of the fossil record also shows species fixity. A given species typically appears unchanged over its entire life of several million years. There are even so-called living fossils which appear unchanged over hundreds of millions of years. A good example of a living fossil is the coelacanth fish. It survives unchanged, although its former range has been greatly reduced.

Species fixity is probably best explained as a real loss of variability, and in particular a loss of unwanted, expressive genes. For example, suppose a particular deer species in its early years, has genes for both a red coat and a brown coat, and these genes are about evenly distributed throughout the population, and neither gene is strongly dominant over the other. Initially, about half the deer would have red coats, and half would have brown coats. Now suppose the red coat is easier to spot for predators, and consequently a deer with a red coat is more likely to die before it could reproduce. With the higher loss-rate of red deer, the total proportion of red-coat genes in the population will decline. Eventually, only a few deer will have red coats, and finally, none. It is possible that the red-coat gene could disappear entirely from the population. If that happened, there would be a permanent loss of variability. Even if conditions changed and a red coat became preferable, it would be too late. Without a gene for it, there cannot be a red coat. However, genes can be sheltered from extinction by having the environmentally-favored gene as strongly dominant over the unfavored gene. The tradeoff is that a sheltered gene is a less-expressive gene, and thus will manifest as a characteristic less often in the population. In general, the more sheltered a gene is, the less often it will actually be expressed. The typical species probably has more overall variability during its early years, and then loses some, or perhaps all of its easy variability, as it ages. Compared to the total lifetime of a species, which can be several million years, the loss of easy variability is probably more-or-less complete after only fifty or a hundred thousand years, or perhaps in much less time than that. The end result would be the species fixity we commonly observe, although it is always possible the particular species never had much easy variability to begin with. However, in spite of fixity, sheltered genes preserve the hope of some future variability if it is ever needed. If a sheltered gene were to become favored by the environment, then the added success of the occasional individual expressing that sheltered gene could result in a small sub-population developing, in which the sheltered gene becomes dominant due to a loss from the sub-population of the previously dominant gene. It is in this way that sub-species and races develop.
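
The decline of the red-coat gene can be watched in a toy simulation. In the Python sketch below, the population size, the 20-percent survival penalty for showing a red coat, and the rule that a mixed gene pair shows red half the time (since neither gene is strongly dominant) are all assumptions made only for illustration.

```python
import random

POP, GENERATIONS = 2000, 60
# Each deer carries a pair of coat genes: "R" (red) or "B" (brown).
population = [random.choice("RB") + random.choice("RB") for _ in range(POP)]

def shows_red(deer):
    # "RR" always shows red; "RB" shows red half the time (an assumption
    # for this sketch, since neither gene is strongly dominant); "BB" never.
    reds = deer.count("R")
    return reds == 2 or (reds == 1 and random.random() < 0.5)

for _ in range(GENERATIONS):
    # Predation: a deer showing a red coat is 20% more likely to die
    # before it can reproduce.
    survivors = [d for d in population
                 if not (shows_red(d) and random.random() < 0.2)]
    # Breeding: each offspring draws one gene at random from each of
    # two randomly chosen surviving parents.
    population = [random.choice(random.choice(survivors)) +
                  random.choice(random.choice(survivors))
                  for _ in range(POP)]

red_fraction = sum(d.count("R") for d in population) / (2 * POP)
print(f"red-coat gene frequency after {GENERATIONS} generations: {red_fraction:.3f}")
```

Run repeatedly, the red-coat gene frequency falls steadily from its starting value of one-half, and given enough generations the gene is often lost from the population outright, which is the permanent loss of variability described above.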

The art of selective breeding is an old one. The dog is a well-known product of selective breeding. Everyone agrees that the wolf is the ancestor of the dog. Selective breeding of captive wolves probably began over fourteen thousand years ago. For the wolf to be the ancestor of all the different dogs in the world today, there obviously must have been a lot of latent, unexpressed variability in the wolf. The existence of sheltered genes is the obvious answer, in spite of the wolf’s apparent fixity in the wild.

The question of what makes a new species in the first place is not easily answered. Species are defined as being reproductively isolated. The question then becomes, how does reproductive isolation happen? What is the actual mechanism? If one organism cannot reproduce with another organism, there can be several reasons for it. Firstly, the organisms may simply refuse to mate with each other because of behavioral differences, or be unable to mate because of physical differences. However, even if a mating, perhaps forced or artificial, takes place, the male sex cell may be unable to physically unite with the female sex cell because of some sort of biochemical incompatibility. However, if the two cells do unite, perhaps done artificially in a lab, at some point during embryonic development the development may just stop. If the structural and developmental differences between the two organisms cannot be reconciled, then the development stops somewhere along the developmental line, and perhaps at the very beginning so that the first division of the egg-cell never even happens. However, if all these barriers are overcome, and an actual hybrid organism develops and reaches maturity, the hybrid may be sterile. This can happen as a consequence of the hybrid’s parents having dissimilar quantities and groupings of genetic material.

Natural selection, working with the variability already present within an existing species, could perhaps cause a splitting of the species into two or more sub-species that would refuse to mate with each other, or be physically unable to do so. This could result in a de facto reproductive isolation. However, to be a truly new species, biologists insist on isolation even if mating is forced or artificial. One cannot say for certain that the initial variability in a new species does not include genes that can cause the sort of reproductive isolation the biologists demand. Thus, one cannot rule out the possibility that natural selection by itself, working on a new species, could create other new species from the original species. Although natural selection may be responsible for some new species, it can only work with the variability it is initially given. The initial variability is set by E-space. Besides initial variability, E-space must also be controlling the introduction of novelties.

Over the long history of planetary life, millions of different species have made their appearance. For any one particular species, there would typically be several, or perhaps many, closely-related species that either appear more-or-less all at once, or in sequence over time. The follow-on appearance of closely-related species after the initial species has appeared, may well be, in most instances, the result of the automatic mechanism of natural selection. However, at many branches in the great tree of life, E-space must have intervened in ad hoc fashion so as to create the new branch. Wherever one or more novelties were introduced, E-space must have been active.

E-space probably takes the following steps to introduce a novelty into a sexually-reproducing, multicellular organism. First it chooses an existing species as the best base on which to graft the novelty. Next it determines all the changes it must make to the genetic material so as to be able to add the novelty. Besides designing new genes for the novelty, there may also be a complete evaluation of existing genes and characteristics, and changes planned for them so as to better support the novelty. At some point in time, the planned changes are complete. E-space then probably chooses particular female individuals from the base species, and before a fertilized egg cell can start dividing, temporarily suspends the normal progress of the egg and then makes physical changes to its genetic material. Probably many altered offspring are concentrated in a small geographical area so as to give the new species a chance to get going on its own. Once the new species with the novelty is established, E-space stops its interference with the base species.

When E-space itself creates a new species, it probably pays special attention to the variability the new species will start with. For the automatic mechanism of natural selection to work well, it must have good variability to work with. To get a lot of easy variability with a minimum of effort, one thing E-space can do, is increase the expressivity of currently sheltered genes. This probably requires no more than a minor code change for either the sheltered gene itself, or for the dominant gene or genes that currently overshadow it. By altering the expressivity of a gene, in effect one creates a new gene, and if the original form of the altered gene is still retained, then one has actually increased the number of different genes. Thus, E-space can have its cake and eat it too. When creating a new species, it can, in effect, create new genes and a lot of easy variability for natural selection to play with, by altering the expressivity or dominance for only some of the occurrences in the population of particular genes.
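
A quick calculation shows how much expressed variety such a minor change can buy. In the Python sketch below, the ten-percent share held by the sheltered gene in the population's gene pool is an invented figure, and an individual's gene pair is assumed drawn at random from that pool.

```python
# Fraction of the gene pool, for one characteristic, that is gene "b".
b = 0.10

# While "b" is fully sheltered (the alternative gene "B" is strongly
# dominant), an individual expresses "b" only when both of its randomly
# drawn genes are "b".
sheltered = b ** 2              # 0.01 -- about 1 individual in 100

# If a minor code change makes "b" the dominant gene instead, the very
# same gene pool expresses "b" whenever at least one copy is present.
unsheltered = 1 - (1 - b) ** 2  # 0.19 -- about 19 individuals in 100

print(sheltered, unsheltered)
```

No new genetic material has been added; only the rule of expression has changed, yet the expressed variety rises nearly twentyfold.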

As a species ages, natural selection will discard those expressive genes that do not work as well for the species’ survival as alternative expressive genes. Recall the example of the deer with the red-coat and brown-coat genes, both genes equally expressive. Eventually, the red-coat gene will be lost. The end-result of natural selection’s work is the commonly-observed species fixity. Species fixity has the advantage of not producing individuals that have been proven by natural selection as less likely to survive. However, the disadvantage of species fixity is that the species may be unable to meet the challenge of environmental changes. The end result could be extinction of the species.

Extinction does in fact happen. Probably the most famous extinction was that of the dinosaurs. Mankind itself has forced many species into extinction. However, confronted by the kind of extinction mankind represents, species fixity is not a factor. Consider the case of the Dodo, a large flightless bird on an island. No amount of easy variability could have saved the Dodo from extinction, because the super-predator man appeared suddenly in the Dodo’s territory and quickly wiped it out. Long before the Dodo could shrink in size and regain its wings, it was gone, extinct. Aside from the mass extinction currently happening in the world because of mankind, there have been several episodes of mass extinction in the past. The common culprit seems to be global cooling. Confronted with global cooling, species fixity would not be a factor if a species goes extinct. All organisms are more-or-less temperature sensitive, and many organisms survive within a narrow temperature range. For an organism which lacks hair, surviving at a substantially different temperature range would typically require a major reworking of the organism. There is too much complexity involved, as well as intense natural-selection pressure, for temperature-tolerance to be an easily changed characteristic in a species. A further complication is that temperature changes to a particular territory can happen quickly and alter completely the food and predator situation, as well as the disease and parasite situation. What this all means is that E-space doesn’t try to save a species, or group of species, when environmental conditions become too adverse. Instead, they are allowed to perish, and E-space will create new species, if needed, to fill the gap.

Within a single species there is a definite limit as to how much environmental change it can cope with. Although species fixity may make extinction more likely, there are environmental changes possible which no amount of easy variability could cope with. The sudden arrival of a super-predator, and global climate changes, are two major examples. Thus, because species can become extinct anyway, it doesn’t seem that the disadvantage of species fixity can ever outweigh the advantage of species fixity, which is the production of optimal individuals as proven by past natural selection. Instead of trying to perpetuate old species, which may not be possible, E-space seems to concentrate on making new species. It may be that E-space never does anything to prevent extinction, or delay it.


footnotes

[5] Hoyle, Fred, and Chandra Wickramasinghe. Evolution From Space. Touchstone-Simon and Schuster, New York, 1984. p. 133.


Chapter 4: Development

Development from a single egg cell to a complex multicellular organism is an everyday phenomenon. It happens all the time in our own species when a woman learns she is pregnant, swells up, and gives birth to a baby. The baby then develops further and eventually reaches maturity. Development doesn’t stop at maturity, but continues on to final death.

Every multicellular organism, at some point, began as a single cell. How a single cell can develop into a starfish, tuna, honeybee, frog, dog, or man, is obviously a big question. Much research and experimentation has been done on development. In particular, there has been a lot of focus on early development. Embryology is concerned with the initial development from egg to self-supporting organism, or from egg to birth. The biggest puzzle is certainly in the embryo stage. From a single cell to a baby is a much more radical step than from baby to adult, or from adult to death.

In spite of all the focus on early development, there is no real explanation of how it happens, except for general statements of what must be happening. For example, it is known that some sort of communication must be taking place between neighboring cells, but the mechanism is unknown. In general, it is not hard to make statements about what must be happening. The hard part is actually identifying and describing any of the real cellular mechanisms one assumes must be there.

Although it never seems to be mentioned, there is a close connection between development and Darwinism. The ignorance of the actual mechanisms of development has been a silent support for the assertion that randomness accounts for all novelties. As long as the mechanisms of development are unknown, it will never be possible to make accurate calculations of the probability that a particular novelty could arise by chance. For example, suppose one wanted to calculate the odds against a squirrel that has no loose skin connecting its arms and legs acquiring that skin, and thus becoming able to glide during jumps from one tree branch to another. One cannot say what changes must be made to the DNA, because one does not know how the DNA is translated in its entirety into the final, complete organism. Thus, it is impossible to calculate the odds in any convincing way. Someone who wants to believe that randomness is sufficient, will just maintain that the odds are low, while someone who cannot accept randomness, will maintain that the odds are high. An impasse results and both sides are reduced to using general arguments and analogies to defend their views. The randomness aspect of Darwinism gets a big boost because it cannot be disproved by mathematics, even though randomness implies probability, which is a well-understood mathematical discipline.

Our purpose in this chapter is to determine what role, if any, E-space plays in development. The cellular mechanisms that control development may be so elusive for molecular-biology researchers because they exist in E-space rather than A-space. However, before we jump to any conclusions, we must first examine the evidence. There have been many experimental results that narrow the possibilities as to how development is controlled. However, before considering development, let’s first examine the building block of development.

The building block of development is the cell. The first cells on Earth were the bacteria. They first appeared 3.5 billion years ago. Bacteria have DNA, just as all cells do. However, bacterial DNA is loosely distributed within the bacterium, instead of being kept in a well-defined cellular nucleus. All bacteria, and the related blue-green algae which first appeared 2.5 billion years ago, are known as procaryotes, and are characterized by the lack of a cellular nucleus. About 1.4 billion years ago, a radically different kind of cell appeared. The new cell was much bigger than a procaryote cell, and it had its DNA grouped together in a well-defined nucleus. There were other differences as well, and in general the new cell was substantially more complex than a procaryote cell. All cells that have a well-defined nucleus are called eucaryotes. All multicellular organisms are made out of eucaryotes. No multicellular organism is made out of procaryotes, although blue-green algae cells can clump together into stromatolites, for example, and look like a multicellular organism.

All cells reproduce by dividing. One cell becomes two. When a cell divides, it will divide roughly in half. The division of water, and proteins, and such, need not be exactly fifty-fifty. A fairly even distribution of the cellular material will do. However, there is one very important exception. The DNA is the information storehouse of a cell. Among other things, the DNA is a direct code for all the possible proteins the cell can make. If the cell is part of a multicellular organism, then the DNA also contains a developmental map, or plan, in some form or other, for the complete organism. The DNA of a cell is like a single, massive book. The book can’t just be torn in half and roughly distributed between the two dividing parts. What good would half a book be to each of the two new cells? Instead, each new cell needs its own complete book. Thus, before a cell can divide, it must duplicate all its DNA and make sure each new cell gets a complete set of the DNA. There can be no room for error. All of the DNA must be precisely duplicated, and there can be no mixups when the DNA is divided between the two forming cells. In other words, it wouldn’t do if during duplication of the book, page fifty-eight were missed, or during division of the pages, both copies of page ninety-two went to the same half of the dividing cell.

Division for eucaryote cells has three main steps. In the first step, all the DNA is duplicated. Chromosomes are distinct and separate groupings of DNA that form towards the end of the duplication process. For a particular type of cell, such as a human cell, there will be a fixed and unchanging number of chromosomes formed. The ordinary human cell always forms forty-six chromosomes before it divides. A mouse cell forms forty chromosomes, a chicken cell seventy-eight, a chimpanzee cell forty-eight, and a cricket cell twenty-two. It should be made clear that the chromosomes form as a consequence of the duplication of the DNA for cell division. During the normal life of a cell, its DNA is more-or-less loosely distributed in the nucleus, and not confined to the distinct chromosomal bodies that form for the purpose of cell division. The chromosomes that form, always consist of two separate, equal-length strands that are joined together. The place where the two strands are joined together is called a centromere. Each strand consists of a long DNA molecule wrapped helically around specialized proteins called histones. One strand is a functional duplicate of the other. The information coded in the DNA of one strand is duplicated in the other strand on the same chromosome. For a human cell there would be a total of ninety-two strands comprising forty-six chromosomes. The forty-six chromosomes would represent two copies of all the information coded in the cell’s DNA. One copy will go to one half of the cell, and the other copy will go to the other half.

The second step of cell division is the actual distribution of the chromosomal DNA between two separate halves of the cell. The membrane of the nucleus disintegrates and at the same time the spindle is forming. The spindle is composed of microtubules which are long thin rods made of chained proteins. The spindle may have several thousand of these microtubules. A large number of the microtubules extend from one half of the cell into the nucleus, and a roughly equal number of microtubules extend from the opposite half of the cell into the nucleus. Many of those microtubules extending into the nucleus, will attach to the centromere region of the chromosomes. Each chromosome’s centromere will be attached by microtubules from both halves of the cell. When the spindle is complete and all the chromosomes have been attached to the microtubules, the chromosomes are then aligned together so that all the centromeres are in a plane oriented at a right angle to the spindle. At this stage of alignment, the chromosomes are at their maximum contraction. All the DNA is tightly bound so that none will break off and be lost during the actual separation. The separation itself is caused by a shortening of the microtubules, and perhaps also by the two bundles of microtubules moving away from each other and the nucleus. The centromere, which held together the two separate strands of each chromosome, is pulled apart into two pieces. One piece of the centromere attached to one chromosomal strand is pulled into one half of the cell, while the other centromere piece attached to the other chromosomal strand is pulled into the opposite half of the cell. Thus, the DNA is equally divided between the two halves of the dividing cell.
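
The bookkeeping of this separation step can be pictured in a few lines of Python. The strand labels are invented; the count of forty-six chromosomes is the human figure given above.

```python
# Each chromosome is two duplicate strands joined at a centromere:
# 46 chromosomes, 92 strands in all, for a dividing human cell.
chromosomes = [(f"chr{i}-copyA", f"chr{i}-copyB") for i in range(1, 47)]

half_one, half_two = [], []
for copy_a, copy_b in chromosomes:
    # Pulling the centromere apart sends one strand to each cell half.
    half_one.append(copy_a)
    half_two.append(copy_b)

# Each half now holds one complete copy of the cell's information:
# exactly one strand from every one of the 46 chromosomes, none lost.
assert len(half_one) == 46 and len(half_two) == 46
```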

Once the divided DNA has reached the two respective cell halves, a normal-looking nucleus forms in each half. This means that at least some of the microtubules disintegrate, a new nuclear membrane assembles around the DNA, and the DNA itself becomes more-or-less loosely distributed within the nucleus again, instead of being tightly bound up in chromosome strands. Once the two new nuclei are established, the third and final step of cell division takes place: A new cell wall is built in the middle of the cell and this divides the cell in two. The new cell wall may be a shared wall, or it may actually be two separate cell walls with each wall facing the other. Once the walls are done and the two new cells are truly divided, the remains of the spindle disintegrate.

With the exception of an egg cell and its first few divisions, a newly divided cell won’t divide again until it reaches the standard size for that cell. There must be a period of growth before any new division can happen. Experiments done with amebas help show this. Amebas are tiny, single-celled animals. If during division, amebas are shaken, then sometimes there will be one large cell and one small cell formed from a single division. Even though their starting sizes are not normal, both the small and large, newly-formed amebas, will not themselves divide until they reach the same standard size. Another experiment done with amebas is the frequent removal of bits of an ameba’s cell mass. The ameba is physically prevented from reaching its standard size by this removal, and it will not divide, even though much time, perhaps many months, has passed since it would have divided if not interfered with. Once the interfering mass-removal stops, and the ameba reaches its standard size, it divides normally.

The most time-consuming step of division is the duplication of the DNA. All the DNA in the typical eucaryote cell consists of billions of links. Every one of these links must be duplicated. The duplication itself occurs simultaneously at many places along the giant DNA molecules. In spite of massive, parallel duplication, the duplication step is still the slowest step of division. As an example, certain newly-divided cells from a mouse’s intestinal lining will spend about nine hours growing, then seven hours duplicating the DNA, and then only two hours to complete the rest of the division process. Another example comes from root-tip cells from a plant. About two hours are spent growing, then six hours to duplicate the DNA, then about three hours waiting, and then two hours to complete the rest of the division process.

Overall, cell division is very complex and coordinated. Can cell division itself be strictly an A-space phenomenon, or would it be reasonable to believe that E-space must be involved? There are two related aspects of cell division that make it difficult for the controlling mechanisms to be exclusively in A-space. The first aspect is timing. The several steps of division follow each other in a precise order. The second aspect is recognition that a step has been completed. Only when one step is finished can the next step begin. Probably the most difficult thing for an exclusively A-space theory to explain is the recognition that a step has completed.

All the A-space theorist has to work with is basic chemistry, and chemicals, such as proteins, only “see” and react to their immediate surroundings. How can a chemical mechanism be constructed that would signal when the cell reaches its standard size? The overall size of the cell is much beyond the “visual range” of a molecule. How can a molecule detect and quantify the cell size?

How can a chemical mechanism be constructed that will signal when all the DNA has been duplicated, and not before? Once again, the “vision” of a molecule is much too short when compared to the wide view it would need to see that DNA duplication is done. However, one could try to get around this difficulty by suggesting that the cell has a clock, and it allows a fixed amount of time for the DNA duplication with the expectation that all duplication will finish within the allotted time. The clock idea sounds good, but then the problem becomes how to construct an accurate chemical clock. Even so, the theorist may have an easier time with the clock explanation. However, a clock approach makes the big assumption that the DNA duplication will always be complete in the fixed time allowed.

After the chromosomes form, and the spindle develops, the microtubules can’t shorten until all the chromosomes have been connected at their centromeres to microtubules from both cell halves. How can a chemical mechanism be constructed that will signal when all the chromosomes have been connected? The same problem of “short-sighted” molecules always presents itself. Once again, the A-space theorist may prefer the clock approach with the assumption that so many microtubules enter the nucleus region that all the chromosomes are certain to be properly connected given enough time.

Perhaps the most convincing and plausible explanation for cell division that an A-space theorist could offer would be one that involves a series of clocks. When each clock in the sequence runs down, it triggers both the next clock and the activation of specific genes that will manufacture the proteins needed by the next step. Specific proteins will do the DNA duplication during the duplication step. Different proteins will self-assemble into microtubules during the spindle-creation step. And so on. Instead of somehow checking that each step is complete, it is simply assumed that each step completes in the time allotted by its clock.
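
To make the clock-cascade idea concrete, here is a minimal sketch in Python; the step names and durations are illustrative placeholders, not measured values:

    # A clock cascade: each step of division gets a fixed time allotment,
    # and the running-down of one clock triggers the gene activation for
    # the next step. Nothing ever checks that a step actually finished.
    DIVISION_STEPS = [
        ("grow to standard size",        9.0),   # hours, illustrative
        ("duplicate the DNA",            7.0),
        ("form chromosomes and spindle", 1.0),
        ("separate the chromosomes",     0.5),
        ("build the dividing cell wall", 0.5),
    ]

    def run_division_clocks():
        elapsed = 0.0
        for step_name, allotted_hours in DIVISION_STEPS:
            print(f"t={elapsed:4.1f}h  activate genes for: {step_name}")
            elapsed += allotted_hours   # assume the step finishes in time
        print(f"t={elapsed:4.1f}h  division assumed complete")

    run_division_clocks()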

A possible alternative explanation of the control of cell division would be to rely on chemical concentrations instead of clocks. A specific example of a control mechanism by chemical concentration is provided by bacteria. The E. coli bacterium normally feeds on glucose, but can also feed on lactose if it is present. The enzymes (catalytic proteins) needed to feed on glucose are always present in the cell, but the enzymes needed to feed on lactose are not. There are three enzymes needed to feed on lactose, and the code for them is in one continuous section of DNA. This section of DNA has, in effect, two separate locks on it that prevent the RNA transcription (an intermediate step) necessary to make the three enzymes. The two locks are two adjacent sections of DNA which themselves are adjacent to the DNA section that codes the three enzymes. The DNA links of the first lock can bind with a special protein that is always present in the cell. When this protein is bound to the first lock, it serves as a physical block preventing RNA transcription of the three enzymes. However, this same protein that will bind to the first lock will also preferentially bind with a derivative of lactose and forsake the binding at the lock. Thus, if lactose is present in the cell, then the physical block at the first lock will be removed. The second lock works in reverse fashion from the first lock. Whereas the first lock requires the removal of a protein to be unlocked, the second lock requires the binding of a protein. The DNA links of the second lock must be bound with a special protein before RNA transcription of the three enzymes can happen. This special protein will bind to the second lock only if there is not a high concentration of glucose in the cell.

The first lock means the lactose-feeding enzymes will not be made unless they are actually needed. The second lock means that as long as the preferred food, glucose, is present, lactose will be ignored even if it is in the cell. The automatic control this two-lock system provides is demonstrated by considering a bacterium in action. Suppose an E. coli has landed on a small food source that consists of both glucose and lactose. Both the glucose and lactose molecules will pass through the cell wall and build up a concentration in the cell. The glucose is immediately attacked by the glucose-feeding enzymes that are always present in the cell. The presence of the glucose will keep the special protein at the second lock from binding, and thereby prevent the production of lactose-feeding enzymes, even though the presence of the lactose has unlocked the first lock. When the glucose is exhausted, the second lock unlocks. With both the first and second locks unlocked, the lactose-feeding enzymes are finally made. As the enzymes are produced, they break down the lactose. When the lactose supply is exhausted, the first lock is relocked and the production of lactose-feeding enzymes ceases. The lactose-feeding enzymes that were already made, and are now no longer needed, will break down on their own in a short time.
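
The two-lock control just described reduces to a simple condition. Here is a toy model in Python; the threshold value is an arbitrary placeholder:

    # The lactose-feeding enzymes are made only when the first lock is open
    # (lactose pulls the blocking protein off) and the second lock is open
    # (low glucose lets the helper protein bind).
    GLUCOSE_THRESHOLD = 0.1   # below this, glucose counts as exhausted

    def lactose_enzymes_made(lactose_present, glucose_level):
        first_lock_open = lactose_present
        second_lock_open = glucose_level < GLUCOSE_THRESHOLD
        return first_lock_open and second_lock_open

    print(lactose_enzymes_made(True, 0.9))    # False: glucose still plentiful
    print(lactose_enzymes_made(True, 0.05))   # True: glucose gone, lactose present
    print(lactose_enzymes_made(False, 0.05))  # False: enzymes not needed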

The method of locks used by E. coli is not unique to E. coli, nor is it limited to lactose feeding. Instead, the method of locks is in widespread use by non-eucaryote cells such as bacteria. One may wonder if some form of the lock method could be used to control eucaryote cell division. Consider the DNA duplication step. Special proteins are needed to do the DNA duplication. One could suppose that as long as DNA duplication was actually taking place, then the concentration of the special proteins would be low elsewhere in the cell. Once the duplication is over, then one could expect the concentration of the special proteins to rise. This rising concentration could perhaps lock shut the production of the DNA-duplication proteins and at the same time unlock the next step in the cell-division process.

However, the great difficulty with using the lock method for eucaryote cell division is the lack of precision. The lock method is clumsy because the local concentration of a particular molecule in the vicinity of a lock site can be highly variable. Although it may be less likely during DNA duplication that a locking molecule will wander over to the lock site and bind, it can still happen. One could try to get around this difficulty by suggesting that there are many locking sites for each step, and it is the overall average of how many sites are locked, or unlocked, that matters. Each lock site would be responsible for producing only a tiny fraction of all the special protein actually needed by a particular step. However, the problem with this spread-the-control approach is that it would cause a lot of fuzziness and overlap for the different steps. For example, it could easily happen that before the first step is finished, all the following steps are more-or-less active.
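
A toy simulation in Python shows the tradeoff; the probability used is arbitrary. One lock site gives an all-or-nothing but highly variable signal, while the average over many sites is steady but graded, which is exactly the fuzziness described above:

    import random

    p_unlocked = 0.6   # chance that any one site is unlocked at this moment

    def site_is_unlocked():
        return random.random() < p_unlocked

    one_site = site_is_unlocked()   # True or False; flips from run to run
    many_sites = sum(site_is_unlocked() for _ in range(1000)) / 1000

    print(one_site)      # all-or-nothing, highly variable
    print(many_sites)    # near 0.6 every run: steady, but never cleanly on or off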

The A-space theorist has his hands full trying to explain the control mechanism for eucaryote cell division. None of the possible schemes are wholly satisfactory. At the root of the problem is always the “short-sightedness” of molecules. The A-space theorist must explain large-scale, coordinated phenomena, and has only loose molecules suspended in water to work with. E-space has at least one large-scale, selectively-organizing force to work with, while A-space has none. The control mechanism for eucaryote cell division looks like a good example of something that may be very easy for E-space to do, and very hard for A-space. Probably at least part of the actual control mechanism is implemented in E-space. However, if at least part of the control mechanism can be effectively implemented in A-space, then it would be.

A-space life is a cooperative venture between A-space and E-space. The Golden Rule of this joint venture is that whatever can be effectively done in A-space, will be done in A-space. One should look for an E-space mechanism only as a last resort. E-space wants to minimize its energy expenditure on behalf of A-space life, and thus it always tries to design whatever might be needed as an A-space mechanism. We saw an example of this in the “Brain” chapter: neurons are used to connect different E-space processors even though the connections could have been made in E-space. Another example was in the “Evolution” chapter: natural selection coupled with a controlled random variability relieves E-space of the burden of fine-tuning a species and creating closely-related species.

The Golden Rule is important. There would be nothing to A-space life if everything were done in E-space. The more that can be done in A-space, the more real A-space life becomes. If a particular mechanism can be implemented satisfactorily in A-space, then it will be implemented in A-space, even if the mechanism would work better if implemented in E-space.

We should keep the Golden Rule in mind and not be quick to say some unknown mechanism exists in E-space rather than A-space. Only when the A-space theorists are unable to give a good explanation for how the mechanism could actually work, should we suspect E-space.

The dividing of eucaryote cells is impressive. However, there is a special kind of division that is used to create the sex cells. This special division process is even more precise and complex than ordinary division. Obviously, if A-space theorists have trouble explaining the control mechanism for ordinary division, then their difficulties are compounded by this special division. Once again, help from E-space seems necessary.

An ordinary eucaryote cell that is part of a sexually-reproducing multicellular organism will have about half of its total DNA from the organism’s mother, and the other half from the organism’s father. Within the cell are two different collections or sets of DNA. One set of DNA originated from the mother, and the other set from the father. Instead of this DNA from the two different origins being mixed up together, the separateness is maintained. When the chromosomes form during ordinary cell division, half of those chromosomes contain all the DNA that was passed by the mother. The other chromosomes contain all the DNA that was passed by the father. There is no mixup. In any particular chromosome, all the DNA either came from the mother, or the father, but not from both.

Particulate inheritance (explained in the “Evolution” chapter) requires that every characteristic be represented by an even number of genes. The genes are just sections of coded DNA. For a given characteristic, half the genes come from the mother, and half from the father. Whatever coded characteristic is found in one set will also be found in the other set, though its specification may be different. Thus, the information coded in the DNA from the mother is matched characteristic by characteristic by the DNA from the father. For example, if the mother’s DNA contribution has a gene for making hemoglobin, then there will also be a gene to make hemoglobin in the father’s DNA contribution. The actual detail of the two hemoglobin genes may be different from each other, but for every gene in the mother’s contribution there will be a corresponding gene in the father’s contribution. Thus, the DNA from the mother will always be a rough copy of the DNA from the father, and vice versa. The only difference is in the fine detail of individual genes. Among other things, this means that if there were no sexual reproduction, then the same organisms would need only half as much DNA as they currently have.

When the chromosomes form during ordinary cell division, because half the chromosomes are DNA from the mother and half from the father, and because the two respective sets of DNA are rough copies of each other, it is possible to pair any chromosome with its opposite-sex equivalent. For example, the ordinary human cell forms forty-six chromosomes. These forty-six chromosomes can be thought of as twenty-three pairs. Each pair consists of one chromosome that originated from the mother, and another, corresponding chromosome that originated from the father. If one were to unwind all the DNA from each chromosome in the pair and compare one chromosome’s DNA with the other’s, one would find the same genes in the same order on both. For example, if gene number twenty on one DNA codes for hair color, then gene number twenty on the other DNA will also code for hair color. The detail code of the mother gene may specify black hair, while the father gene specifies brown, but both genes still specify hair color.

Sex cells are created four-at-a-time from an original cell. The original cell will divide once, and then the two newly-formed cells will each divide, thus producing the final four sex cells. The original cell itself was created by ordinary cell division and somehow becomes committed to develop into the sex cells. Of course, sex-cell production only takes place at one or more specialized sites in the organism’s body. If the original cell is going to produce sperm, then there will be four equally-viable sperm cells at the end. However, if the original cell is going to produce an egg, then there will still be a total of four cells produced, as far as the handling of the DNA is concerned, but three of the four cells are little more than pinched-off nucleuses from the original cell body. Only one nucleus with its DNA will remain in the original cell body, and this cell becomes the single egg. The three pinched-off nucleuses just disintegrate. Regardless of whether sperm or egg are formed, the original cell undergoes the exact same process as far as the way its DNA is duplicated, rearranged, and ultimately divided up among the four resultant sex cells.

The first step the original cell undergoes, is a duplication of all its DNA. There will be just this one duplication of DNA. Once the duplication is completed, the original cell will have twice as much DNA as an ordinary non-dividing cell in the same organism would have. There will be no further duplication of DNA even though the original cell will ultimately divide into four separate sex cells. Because the DNA will be evenly distributed among each resultant sex cell, this means each sex cell will end up with only half of the DNA possessed by an ordinary non-dividing cell. And this is exactly what is wanted, because when the male sex cell combines with the female sex cell, the new cell will have the normal amount of DNA for a non-dividing cell. However, the DNA that goes into a particular sex cell cannot just be a random selection from all the available DNA. Instead, it must be a complete and single set of DNA for all the characteristics of the organism. Every characteristic must be represented and there must be only half the total number of genes used for each characteristic. Also, the order of the genes on the DNA must remain the same as they were originally.

Each sex cell must have a standardized collection of DNA. The only variability allowed is in the detail of each particular gene. The DNA in any sex cell must be a rough copy of the DNA that came from each of the organism’s parents. The basic DNA pattern that exists for a particular species must be followed consistently by all members of that species. This requirement could be met by just having a sex cell receive either all the DNA that came from the mother, or all the DNA that came from the father. This could be accomplished by the original cell not duplicating its DNA and instead just dividing directly into two sex cells where one of the sex cells will get all the chromosomes from the mother and the other sex cell will get all the chromosomes from the father. This scheme is simple, but the obvious drawback is that there would be no variability. Recall that the whole purpose of sexual reproduction is to provide a controlled variability for natural selection to work with. Therefore, there must be some mixup of the DNA from both parents.

This mixup can be accomplished by randomly choosing from each functional pair of chromosomes, one chromosome to be split between two sex cells and the other chromosome in the pair to be split between the other two sex cells. The human has twenty-three pairs of chromosomes. Within each pair is one chromosome that originated from the mother, and one chromosome that originated from the father. If four sex cells are formed, and each sex cell gets one chromosome strand chosen at random from each pair, then there would be 2^23, or 8,388,608, different sex cells possible. This provides the wanted mixup while at the same time preserving the standardized format of the DNA. This mixup method is actually used, as we shall see. However, there is still a drawback. The drawback or limitation of this method is that there is no way to change the genes on a particular chromosome. Because of the small number of chromosomes, and the large number of genes, each chromosome will carry many different genes on it. It would improve the overall variability if at least some of the corresponding genes on different chromosomes could be exchanged or swapped with each other. For example, suppose a particular chromosome from the mother has a gene for hair color on it. On the corresponding chromosome from the father, there will also be, at the same position along the DNA, a gene for hair color. Assuming the detail of the two genes is different, it would improve variability if the two genes were physically swapped with each other before the chromosome strands are divided among the sex cells. This method of swapping corresponding genes is also, in fact, used. Thus, a random swapping of corresponding genes, along with a random choosing of a chromosome strand from each chromosome pair, provides good overall variability. By using these two methods, each sex cell will get a good mixup of genes, while at the same time the standardized format of the DNA for that species is preserved.
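
The two mixup methods are easy to state as a procedure. Here is a minimal sketch in Python, with toy chromosomes; the swap probability is an arbitrary placeholder:

    import random

    def make_sex_cell(mother_set, father_set, swap_probability=0.1):
        # mother_set and father_set: lists of chromosomes, each a list of genes.
        cell = []
        for mother_chrom, father_chrom in zip(mother_set, father_set):
            m, f = list(mother_chrom), list(father_chrom)
            # Random swapping of corresponding genes within the pair.
            for i in range(len(m)):
                if random.random() < swap_probability:
                    m[i], f[i] = f[i], m[i]
            # Random choice of one chromosome strand from the pair.
            cell.append(random.choice([m, f]))
        return cell

    mother = [["black hair", "type O"], ["tall"]]   # two toy chromosomes
    father = [["brown hair", "type A"], ["short"]]
    print(make_sex_cell(mother, father))

    # The random choice alone, over twenty-three pairs, already allows:
    print(2**23)   # 8388608 different sex cells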

Now that we know what is going to happen and why, we can at last look at the actual details of how the sex cells get their DNA. The original cell, as already mentioned, duplicates all its DNA. The same chromosomes are formed as would be done during ordinary cell division. For example, if it is a human cell, then there will be the same forty-six chromosomes, and each chromosome will be composed of two functionally-identical strands connected together by a centromere. However, there is a difference in that the chromosomes will be much longer and thinner than they are during ordinary cell division. The reason the chromosomes are stretched out like this, is to make the swapping of genes easier, as we shall see.

Once the chromosomes are formed, the next step is rather amazing (at least to an A-space theorist). Each chromosome seeks out, and lines up exactly, with its functionally-corresponding chromosome. This means the two chromosomes of each pair find each other, and unite along their lengths. The two chromosomes of each pair unite in such a way that corresponding genes are directly across from each other. Obviously, this is all taking place for the sake of gene swapping. During this step, some sort of random, corresponding-gene swapping actually happens.

After the swapping, the next step is that the paired chromosomes pull away from each other, although they still remain connected together in one or more places. Also, the chromosomes themselves undergo contraction and lose their stretched-out, long-and-thin appearance. It is no longer needed as the swapping is over. As the chromosomes contract, the nuclear membrane disintegrates and a spindle forms. Each connected pair of contracted chromosomes now lines up so that one centromere is closer to one pole of the spindle, while the other centromere is closer to the opposite pole of the spindle. The microtubules from each pole of the spindle attach to those centromeres that are closer to that pole. The two chromosomes of each pair are then pulled apart as they move into opposite halves of the cell. It is random as to which chromosome of a given pair goes to which cell half. The only thing certain is that exactly one complete chromosome from each pair will move to one half of the cell, while the other complete chromosome of that pair will move to the opposite half of the cell. Thus, each cell half will end up with a random selection of mother and father chromosomes which have already undergone some random, corresponding-gene swapping.

After the chromosomes have been divided into the two cell halves, there is a delay, the duration of which depends on the particular species. During the delay—which may or may not involve the forming of nucleuses and the construction of a dividing cell wall—the chromosomes remain unchanged. After the delay, which may be zero in some species, the final step happens. New spindles form in each cell half (or in each cell if a dividing cell wall was constructed), and this final step will simply divide each chromosome at its centromere. Recall that when the DNA was duplicated in the first step, each chromosome consisted of two functionally-identical strands connected together by a centromere. The first division of the chromosomes only divided whole pairs of chromosomes. It is now necessary to divide the two strands of each chromosome. It is worth noting that at this point the two strands may no longer be functionally identical because of the corresponding-gene swapping that took place earlier.

The chromosomes line up, the microtubules attach to the centromeres, and the two strands of each chromosome are pulled apart in opposite directions. Four new nuclear membranes form, the DNA becomes loose within them, the dividing cell walls form, and the spindles disintegrate. There are now four sex cells, and each one contains a well-varied blend of the organism’s genetic inheritance from its two parents. And at the same time, the standardized format of the DNA for the species has been preserved.

Besides cell division, another coordinated activity which individual cells can do is to move either towards, or away from, an increasing chemical concentration. In this case we are concerned with single-celled animals and even bacteria. Single-celled animals and bacteria typically have some mechanical means of movement. Bacteria either use long, external, whip-like filaments called flagella, which are actually rotated by a molecular motor to cause propulsion through water, or they somehow glide. The larger single-celled animals may use flagella similar to bacteria, or they may have rows of short filaments called cilia, which work like oars, or they may move about as amebas do, by extruding themselves in the direction they want to go.

The E. coli bacterium, when searching for food, has a normal pattern of movement. The cell moves in a straight line for a while, then stops and turns a bit, and then continues moving in a straight line again. If there is no detectable food in the water, then the cell just makes this random search. However, if the cell detects food molecules, then when it moves in a straight line it will continue longer in that direction if the concentration is increasing along its path. If the concentration is decreasing, then it will stop its movement sooner and change direction. Eventually, this strategy will get the bacterium to a nearby food source.

Amebas living in soil feed on bacteria. Although one may not think that bacteria would leave signs of their presence in the surrounding water, they in fact do. This happens because the bacteria create small molecules such as cyclic AMP and folic acid. There is always some leakage of these small molecules through the cell membrane and into the surrounding water. Amebas are able to move in the direction of increasing concentration for these small molecules, and thus find nearby bacteria. Amebas can also react to the concentration of a different molecule that has to do with the presence of other amebas. The amebas themselves leave tell-tale molecules in the water, and individual amebas will move in a direction of decreasing concentration of these molecules. Therefore, nearby amebas will spread out, and thus search for food more efficiently by covering a larger area.

The ability of a cell to actively move in the direction of a gradient is hard to explain using A-space alone. The easy part is the actual detection of the gradient molecule. A given cell may have receptors on its outer membrane that will react in some fashion when contacted by the gradient molecule. The other easy part is the means of cell movement itself. Either flagella, cilia, or self-extrusion is used. However, what is hard to explain is the control mechanism that sits between the receptors and means of movement.

In the case of the ameba, one could suggest that wherever a receptor on the cell surface is stimulated, there will be an extrusion of the cell at that point. This kind of mechanism would be a simple reflexive one, and could probably be accommodated in A-space. However, this reflex mechanism may not work because surrounding the cell at any one time there could be many of the gradient molecules. The cell could try to move in a number of different directions at once. An ameba is so tiny that there is no reason for a gradient to be consistently strong at one end of the cell, and weak at the opposite end. Instead, the cell will be encountering gradient molecules from all sides.

The reflex mechanism is further complicated by the need to move in the opposite direction from other amebas. This would mean that a stimulated receptor at one end of the cell would have to trigger an extrusion of the cell at the opposite end. Although it is unclear that a reflex mechanism would actually work, we consider it anyway as a possibility. Bearing in mind the Golden Rule, we will not insist that the control mechanism for cell movement is in E-space. However, it seems likely. The way the E-space mechanism would probably work is to take measurements of concentration over time. For example, every few seconds the moving cell could check its receptors and count how many gradient molecules have been encountered. If the count is decreasing over time, then the cell must be moving away from the source. If the count is increasing over time, then the cell must be moving towards the source. Based on this information, the cell can change its direction of movement as needed. Unlike the reflex mechanism, there is no doubt that this count-over-time mechanism would work. Among other things, the count-over-time mechanism requires a clock and a memory, and a means of comparing the counts stored in memory. This sounds like a tiny computer and would be extremely difficult to design as an A-space cellular mechanism.
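
A small simulation in Python shows the count-over-time mechanism at work; the one-dimensional environment and all the numbers are illustrative assumptions:

    import random

    def receptor_count(position):
        # Receptor hits are proportional to concentration, which peaks at x = 0.
        concentration = max(0.0, 100.0 - abs(position))
        return sum(1 for _ in range(int(concentration)) if random.random() < 0.5)

    def chemotaxis(position=80.0, steps=300):
        direction = random.choice([-1.0, 1.0])
        last_count = receptor_count(position)   # the mechanism's memory
        for _ in range(steps):                  # each pass = one tick of the clock
            position += direction
            count = receptor_count(position)    # the new measurement
            if count < last_count:              # count falling: moving away,
                direction = -direction          # so change direction
            last_count = count
        return position

    print(chemotaxis())   # tends to finish near 0, the concentration peak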

We have considered cell division, and to a lesser extent cell movement. A case has been made for at least some E-space participation at the cell level. The alternative to E-space is to insist on the existence of several hypothetical A-space mechanisms. When the development of a large, multicellular organism such as a horse from a single cell is considered, there is a great increase in the number of hypothetical A-space mechanisms needed. Science has a principle known as Occam’s razor. The idea is to keep the number of hypotheses to a minimum. There are many different things that cells do which seem to require some sort of computer-like controller. If we say that everything a cell does must fit the Procrustean bed of A-space, then we present ourselves with many extremely difficult problems to solve. Every time a cell does something that calls for precise, coordinated control, one is forced to hypothesize the existence of some chemical mechanism which no one can imagine or describe. It is precisely for this reason that there has been no progress on the question of multicellular development. Informative and valuable experiments concerning development have been going on since the late 19th century. There is a great mass of data, and still there is no credible theory of development. As long as one allows only A-space, there will never be a credible theory.

Instead of many hypotheses that have no basis in fact, we need only one hypothesis. For every cell capable of division, we hypothesize the existence of a controller residing in E-space. This E-space cell controller occupies a volume of macroscopic space no larger than the cell itself. The cell resides in A-space and the computer-like controller resides in E-space. Recall that A-space and E-space are nested together. The nesting is extremely fine and the E-space controller has no problem getting close to its associated cell. For example, between the quarks in a proton contained in an atom residing in a large protein molecule, there is E-space. The controller is right on top of things, so to speak.

The controller is best thought of as a specialized computer which is always running its program. The DNA of a cell is of crucial importance to the cell. The DNA is a storage method for information, and it also doubles as a template for making proteins. Much of a typical cell’s DNA is not a code for proteins. This non-protein DNA has been called meaningless and garbage, but at least some of it probably has special meaning for the cell controller. There are probably many places in the typical cell-controller program where the instructions say, in effect, “Read the DNA.” Based on the information in the DNA, the program will determine what it should do. The DNA will, in effect, customize the controller to that particular cell.

Although the controller is likened to a computer, the analogy only goes so far. A computer resides in A-space, while the controller resides in E-space. The fundamental laws of physics are different in the two spaces. The laws of E-space favor the construction of intelligent-acting devices much more so than do the laws of A-space. As an example, consider the fact that E-space has at least one large-scale, selectively-organizing force to work with, while A-space has none. Among other things, this force probably allows instant, meaningful comparisons of different complex patterns. The lack of this force in A-space means patterns must be compared bit-by-bit using customized algorithms. A single low-level program statement in E-space may say something like, “Compare the two patterns,” and execute instantly. In A-space, the equivalent to that single statement may be something like a customized program of many statements that will take several seconds on a supercomputer to compare the two patterns in the same way.
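
For contrast, here is what the A-space style of comparison looks like: without a large-scale organizing force, two patterns must be walked element by element. A trivial Python illustration:

    def compare_bit_by_bit(a, b):
        # Fraction of positions at which the two patterns agree.
        matches = sum(1 for x, y in zip(a, b) if x == y)
        return matches / len(a)

    print(compare_bit_by_bit([1, 0, 1, 1], [1, 1, 1, 0]))   # 0.5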

As to how the cell and its controller come together, there are two possibilities. The first possibility is that the controllers are stockpiled in E-space and are somehow assigned to newly-formed cells as needed. The second possibility is that the controller is created along with the cell. The first possibility has obvious drawbacks. It is wasteful of resources and requires some omnipresent lookout always watching for newly-formed cells. Therefore, this first possibility can be dismissed as impractical. The second possibility is much more reasonable. The point at which a new cell is created is when it divides from an already existing cell. There is no other way. Recall that there is no spontaneous generation for any cell of any kind anywhere on the planet. Without exception, every cell that comes into existence on this planet does so by being the divided half of an already-existing cell. The already-existing cell has a controller which, among other things, will guide any cell division that takes place. Why can’t this controller, as part of the total division process, duplicate itself and attach this duplicate controller to one of the divided cell halves while it retains control of the other? What is being suggested, is that the E-space cell controllers are themselves self-replicating. Every time a cell divides and becomes two, so does the controller.

There are many thousands of different species on the planet. However, this does not mean that there are many thousands of different cell-controller models, one for each species. Different species have different information coded in their DNA. The information in the DNA, as already mentioned, is used by the cell controller to guide at least some of its actions. The DNA thus customizes the controller. Instead of many different cell-controller models, there are probably just a few. A single model is probably used for all procaryote cells. A more complex model is probably used for all eucaryote cells that do not form part of a larger multicellular organism. There are perhaps a handful of very complex models that are used for cells that do form part of a multicellular organism. As an example, the controller model used for cells in a human is probably the exact same model that is used for horse and raccoon cells. However, a different model is probably used for tree cells: the structural differences between a human and a tree are so great that it would seem wasteful for a controller to be customizable by the DNA to such an extreme, because large parts of such a controller would always go unused in any one organism.

Although the rule is that every new cell gets a controller, there may be exceptions. A red blood cell has no nucleus or DNA, and is little more than a sack of hemoglobin molecules. It may be that this cell has no controller attached to it for the simple reason that it doesn’t need one. However, if it does have a controller, then it would be the same model as used in the other cells of the organism.

Viruses are tiny parasites, and one can reasonably argue that they are non-living, because viruses never divide or feed. The typical virus is just a piece of DNA or RNA residing in a protective shell. Receptors on the shell will attach to certain target cells, and the viral DNA or RNA will then enter the target cell. Once invaded, the target cell will assemble new viruses and often die in the process. Considering what viruses are, and how they work, it doesn’t seem that they need a controller. All the work of building the virus is done by the target cell. Once the viral DNA or RNA is in the target cell, the controller for that cell apparently treats that DNA or RNA as if it were its own, and acts according to the information contained therein. Cell controllers seem to have no test for foreign DNA. Instead, they assume that any DNA in the cell belongs there. Viruses have taken advantage of this lack of defense. However, it is not hard to see why cell controllers lack such discrimination. To identify the foreign DNA, the controller would have to constantly check the cell membrane for penetration and passage of foreign DNA or RNA. Most cells go through their lives with no such invasion happening. Therefore, it would be a waste of controller energy to be constantly checking for a rare and unlikely event.

When a cell dies, the associated cell controller probably disintegrates. Cell death itself must be detectable by the controller. As part of any controller program, there must be a section that can determine whether or not the cell is sufficiently damaged or destroyed so that it can no longer function. This part of the program need only be run when some other function of the controller is blocked. For example, the controller may be trying unsuccessfully to divide the cell. As part of the problem determination, the controller may run the program part that checks for cell death. In the event of cell death, the controller is probably programmed for its own disintegration. Besides checking for cell death, the typical controller can cause cell death. Many cells have to die according to schedule, or have a time-limit on how long they can live. However, the procaryotes, such as bacteria, are an exception. Procaryotes last indefinitely until they are destroyed by environmental conditions.

When considering development, one has to consider the possibility that there are hierarchies of controllers involved. For example, when a horse develops from a single cell, will there be a separate controller to build its intestines, or its brain, or its heart? If one says there is a hierarchy, then there should be one controller that integrates all the organ controllers and builds the overall organism. This would be the highest controller at the top of the hierarchy pyramid. The whole justification for a hierarchy would be that the individual cell controllers are not enough. Instead, one reasons that there is a need for controllers that can see a great number of cells at once, and guide them to achieve structures that are much bigger than a single cell.

Although it may seem appealing, and even necessary, to have a hierarchy of controllers, in fact it is a poor idea. Before defying Occam’s razor and hypothesizing new kinds of controllers besides the cell controller, we must consider the merits of the case. An immediate point against a hierarchy is that these organ controllers must be stockpiled somewhere and brought to a given site as needed. These higher controllers can’t just spring from a cell controller, because to do so would mean all their power and complexity is already in the cell controller. Instead, an omnipresent watcher is needed to detect the need for controllers at a developing organism and make sure they get there. One could try to avoid this by supposing the cell controllers send signals for the higher controllers as needed, and the higher controllers are automatically attracted to the source of the signals. However, this isn’t much better than the watcher, because it hypothesizes the need for a complex communication and guidance system. Besides this, the need to have controllers waiting to be used implies idleness and the waste of resources.

Another problem with the idea of a hierarchy, is that if it really existed, then it would be hard to reconcile with known malformations. For example, how could a chemical, such as thalidomide, cause large-scale fetal abnormalities, such as missing limbs, if the development of the fetus were being controlled by high-level controllers that could see whole organs, and with the highest-level controller presumably seeing the whole organism? These controllers would have a wide view that extends over large masses of cells. How are we to imagine such wide-seeing controllers being so blinded or confused by the presence of a certain chemical that they can’t even tell that entire limbs are missing? Besides an example such as thalidomide, what about all the birth defects that involve large, visible, structural flaws? For example, spina bifida is a split-spine condition where one or more vertebrae are incompletely formed and the bone does not completely enclose the spinal cord. How could the skeleton controller, or whatever high-level controller we imagine, make such a big, easily-seen mistake?

Besides malformation, there are experimental results that speak against the existence of hierarchies. If the hierarchy actually existed, then it would seem the first thing a high-level controller would do, is properly position and orient itself within the developing organism. In the case of a developing embryo, if one were to take a bit of tissue from one site and transplant it to another site, one would not expect this to alter the placement of large, developing structures. However, this actually happens. For example, at some point during its development, a chick embryo will develop leg buds. The leg bud is the first visible step in the construction of a bird leg, but the bud itself has no bone or muscle in it. Instead, it is just a small mass of undifferentiated cells. If this leg bud is cut off from one embryo and transplanted to the body cavity of a different embryo, then a complete leg with its bone and muscle will develop from that bud in a place where it doesn’t belong. Are we to suppose that there exists a separate leg controller and that this leg controller moved with the transplanted bud? If so, why didn’t this controller remain where it originally was, since it was properly positioned there? Why would it follow a bit of tissue? In addition, instead of a leg controller, wouldn’t it seem more reasonable to have skeleton controllers, and muscle controllers, and so on? One would assume the skeleton controller would be indivisible. Surely it would stay with the mass of the organism and not be pulled off by the removal of a small bit of tissue.

Another experiment that contradicts the idea of development hierarchies is one done with salamander eggs. A fertilized salamander egg-cell, before it undergoes its first division, develops a so-called gray crescent. This is a crescent-shaped collection of gray material that forms along one side of the egg. If some of this gray-crescent material is removed from one salamander egg and transplanted into another salamander egg, but at a place somewhat across from this second egg’s already-formed gray crescent, then the result, after the egg develops, will be a salamander larva with two nervous systems and sometimes even two complete heads. How can this be reconciled with the assumed existence of high-level controllers? Are we to suppose that somehow, multiple high-level controllers doing the exact same constructions are tricked into working together to create a two-headed larva? Can’t they see that something is wrong?

In general, there are many experiments that give evidence against the idea that there is a hierarchy of controllers. There don’t seem to be any experimental results that support the idea. The hierarchy notion is contradicted by the facts, and it also looks poor from an engineering standpoint. We have already seen the need for stockpiling controllers and moving them about. Everything would be more elegant and simple if the cell controller by itself were sufficient for the task of development.

This may seem like a tall order for such a tiny thing, but we must remember that E-space plays by a different set of rules than A-space. Actually, the cell controller only needs a few things to be able to handle large-scale development. The cell controller needs an accurate, synchronized clock. The cell controller must be able to communicate with its neighboring cells, and determine both its relative position, and also what the neighboring cells are doing. And most important, the cell controller must be able to construct an image of what the entire organism should look like at different stages of development, based on the controller’s programming and the information contained in the cell’s DNA. The cell controller will use the image of the organism at the appropriate development stage, along with its relative position and consultation with neighboring cells, to determine just what it should be doing, and at some point what kind of cell it should specialize into.
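
Collected as an interface, the needed abilities might look like the following Python sketch; the names are conveniences for this discussion, not claims about the actual machinery:

    class CellController:
        def __init__(self, master_clock, dna, position_estimate):
            self.master_clock = master_clock    # synchronized across the organism
            self.dna = dna                      # customizes this controller
            self.position = position_estimate   # refined by consulting neighbors

        def consult_neighbors(self, neighbors):
            """Exchange position and commitment information with adjacent cells."""
            ...

        def image_of_organism(self, stage, region):
            """Construct what the given region should look like at the given
            stage, from the controller program plus the cell's DNA."""
            ...

        def decide_action(self, neighbors):
            stage = self.master_clock.elapsed
            plan = self.image_of_organism(stage, self.position)
            ...   # compare the plan with the neighbors' reports, then commit or act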

Of the things which the cell controller needs, only the ability to image how the entire organism should look at different projected stages of development seems daunting. However, this really isn’t so difficult for E-space. The large-scale, selectively-organizing force or forces which E-space has give it powerful pattern and image capabilities. Only a part of the total image need be created at any one time by a cell controller, but it must be able to image any part or place in the organism, because a given cell could be anywhere.

We are going to take a firm stand and say that there are no development hierarchies anywhere, for any organism. The development of anything—from a tree to a whale—requires only a cooperative effort by the cell controllers. There is no other controller besides the cell controller. For an organism with a brain, the only thing that would be added once it reaches a certain stage of development, would be an E-space mind.

Let’s now review the same examples that were used against the idea of hierarchies, and see if they can be explained by our cell-controller-only theory. The fact that certain chemicals, such as thalidomide, can cause development abnormalities is easily explained. When one cell controller communicates with a neighboring cell, the communication will happen in A-space, both because of the Golden Rule, and also because doing so guarantees both that a physical cell is present as a neighbor, and that the communication is happening with that neighbor and not with a more distant cell. The communication will take place across the cell walls that separate two adjoining cells. The most likely means of communication is the exchange of coded molecules. One need only suppose that a chemical like thalidomide interferes with a cell’s ability to communicate with certain coded molecules. For example, suppose thalidomide leaves most of the coded molecules alone, but somehow reacts with or imitates one or more of them. This could cause garbled communication and some sort of abnormal development as a consequence.

Birth defects, such as spina bifida, are hard to explain because there is no obvious causative agent such as a chemical like thalidomide. However, the fact that these abnormalities happen does not weigh against the cell controllers as it did against the hypothetical skeleton controller. The individual cell controller never sees the whole organism as it actually is. It could see what the organism should look like, but it can’t actually see what it does look like. This is an admitted weakness, but it does explain how large visible defects can happen without any apparent attempt at correction. The individual cell controllers don’t know the defect is there because they can’t see it. By contrast, the whole idea of the development hierarchy was that there was a need for a wide-seeing controller that could see large structures. If such controllers existed, they would certainly be able to see the large abnormality and either prevent it from happening in the first place, or correct it if it did happen.

The fact that a chick-embryo leg bud can be transplanted into the body of a different embryo and grow into a complete leg there, is easily explained. The cell controllers in the leg bud had already agreed amongst themselves that they would develop as a leg. When a cell knows what it is going to do, but hasn’t done it yet, this is known as commitment. Before the leg bud was cut off and transplanted, the cells in the bud had already committed. The cell controllers in the bud had already determined where they were in the developing embryo, and when they compared that position with the image of what the organism should look like, they saw that they should develop into a leg, and at that point committed to do so. Once committed, there is no reason to constantly reevaluate that commitment. Instead, the cells must concentrate on fulfilling their commitment, and they proceed to do so. Because a leg is a structure that has contact with the organism at only one place, at one end, there is no need for most of the cells in the developing leg to communicate with cells that aren’t likewise committed to forming the leg. Thus, when the leg bud is transplanted, the cells in the leg bud, except perhaps for those at the cut, never realize that something is wrong. To fulfill their commitment, they only have to communicate with other leg-bud cells. They continue doing so and develop into a complete leg.

The gray-crescent transplant, and the two-headed salamander larva that results, are easily explained. The gray crescent is part of the salamander’s overall plan of development. Either the formation of the gray crescent and its subsequent use is directly programmed as part of the cell controller, or, more likely, it is called for by the DNA. Regardless of where the creation and use of the gray crescent is actually specified, either in the program or the DNA, the cell controller will do what it is told. Once the egg cell has divided several times, only some of the cells will have the gray-crescent material in them. It must be that part of the overall development plan for the organism specifies an early commitment for those cells with gray-crescent material. It seems these cells are committed to form the upper body of the salamander larva. The transplant of gray-crescent material to a part of the egg away from where a gray crescent has already formed, means that after the egg cell has divided several times, there will be two separate groups of cells with gray-crescent material. Both groups will commit to develop as an upper body and the final result will be the two-headed larva.

At this point we have made our case for E-space cell controllers and their role in development. However, there is still more that can be said about them. The remainder of this chapter will be devoted to further explaining the workings of these controllers as they bring about development.

When neighboring cells consult together on how to accomplish their development task, it is essential that they are all looking at the exact same plan. For this to happen, the cell controllers must be identical, the master clock in each cell controller must be showing the exact same time, and the DNA in each cell must be identical. The cell controllers will be identical because any one is just a direct duplicate of another. The time on the master clock is probably set to zero once the egg cell is either fertilized or starts dividing. As the cell divides, the cell controller would be duplicated without any reset of its master clock. Whenever a cell divides in the organism, the time on the master clock should be duplicated along with the rest of the cell controller. This way, every cell in the organism will always show the exact same elapsed time on its master clock. The time on the clock is used to determine the current stage of development.
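
A minimal sketch in Python of the duplication rule, with illustrative names: the clock is copied along with everything else, and is never reset:

    import copy

    class Controller:
        def __init__(self):
            self.elapsed = 0.0    # zeroed when the egg cell starts dividing

        def tick(self, hours):
            self.elapsed += hours

        def divide(self):
            # Duplicate the whole controller, elapsed time included.
            return copy.deepcopy(self)

    egg = Controller()
    egg.tick(36.5)
    daughter = egg.divide()
    print(egg.elapsed == daughter.elapsed)   # True: the clocks agree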

The need for the DNA to be identical in each cell appears to be satisfied. Examination of dividing cells in different parts of an organism shows the same number and form of chromosomes. Based on this evidence, it is reasonable to suppose that each cell will have the same DNA. The only exceptions would be special cases such as sex cells and red blood cells. However, these exceptions do not themselves divide further but instead are strictly end products. The exact sameness of the DNA in different cells is demonstrated by the following two experiments.

Cells can be scraped off a carrot root and placed in a culture where they will undergo division. As the number of cells increases, they start to form roots and shoots. As long as this developing clump of cells is properly cared for, it can ultimately develop into a complete carrot plant with nothing missing. Obviously, the carrot-root cell still has all the DNA information needed to develop a complete plant.

The egg cell of a toad can have its nucleus destroyed and then replaced by a nucleus taken from an intestinal-lining cell in an already-developed tadpole. The egg cell, if it develops, will become a complete toad with nothing missing. Once again, the DNA in a specialized cell is shown to retain all the information needed to develop the complete organism.

With duplicate controllers, the same time on their master clocks, and the same DNA, two neighboring cells will both be constructing and looking at the same development plans for their region or place in the organism. The variable that can make two neighboring cells at some point in time act differently from each other, is their actual position. No two cells can have the exact same position. The need for a cell to be able to determine its position is obvious. There are different possible schemes for determining position, and we can only guess as to how it is actually done. However, positional information probably flows from the outer layers of the organism inward. When a cell has no neighbors on one side, then the cell knows it is an outermost cell. Also, instead of a lack of neighbors on one side, a cell could have one or more neighbors that have committed or specialized into a different structure. For example, a committed liver cell could determine that it is an outermost liver cell if it is bordered by a committed outer-lining cell for the liver. Cells could communicate to their neighbors what they know about their own position, if they know something. For example, the outermost liver cell could communicate to its neighbors that it is outermost. If a given neighbor receiving this communication isn’t already an outermost liver cell itself, then it must be one-away from being outermost. Having determined as much, this cell in turn can communicate to its neighbors, and so on. It may be that accurate counts are actually kept and communicated. For example, a liver cell may communicate to its neighbors that it is two thousand and twenty-eight cells away from the outermost liver cell. With information such as this coming from different directions, it may be that a cell can actually triangulate its position. Another possibility, and one probably used, is that cells start off having as much positional information as the cell they divided from. This would give a cell a good idea of where it is to begin with. This starting knowledge could then be verified or updated as needed by communicating with its neighbors.
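
The outermost-inward flow of positional information can be made concrete with a short Python sketch; the tissue here is reduced to a toy one-dimensional row of cells:

    def distances_from_outermost(n_cells):
        # Outermost cells (no neighbor on one side) know they are at distance 0.
        dist = [None] * n_cells
        dist[0] = dist[-1] = 0
        changed = True
        while changed:                    # repeated rounds of neighbor messages
            changed = False
            for i in range(n_cells):
                for j in (i - 1, i + 1):  # hear from each neighbor
                    if 0 <= j < n_cells and dist[j] is not None:
                        candidate = dist[j] + 1   # "one-away from what you are"
                        if dist[i] is None or candidate < dist[i]:
                            dist[i] = candidate
                            changed = True
        return dist

    print(distances_from_outermost(9))   # [0, 1, 2, 3, 4, 3, 2, 1, 0]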

The actual plan that a cell controller can generate deserves further consideration. The cell controller can image what any part of the organism should look like at any stage in the organism’s development. However, this image or plan is not an extremely-detailed, cell-by-cell map. An organism may easily contain many billions of cells. The plan is not going to show every cell that the organism should have. Quite the contrary: the plan is no doubt detailed, but in a way that avoids redundancy of description and allows flexibility in design. For example, the skin of an animal is going to be fairly uniform over the whole surface of the animal. All the plan need do is show the structure of a small patch of skin and specify it as the model by which all the skin is to be made. Any exceptions to the basic model would be noted at the positions where they occur.

Any many-celled organism is going to have standard patterns or structures that are repeated. Besides repeated patterns or structures, there are only going to be a relatively small number of specialized cell types. For example, a human body has only about two hundred different cell types. The detailed specification for each cell type need only be given once. Probably these specifications are in the DNA. The model for a given pattern or structure need only identify by name the cell types needed. The exact shape or size of a given cell will be determined as the pattern or structure is actually made.

Some large structures, such as a skeleton, are probably very detailed in the plan. Each bone must have a precise shape, and this shape must be followed closely when the bones are actually made. On the other hand, a large structure such as all the blood vessels that make up the circulatory system, is most likely specified as a set of rules to follow. There would be no detailed map showing every blood vessel that is to be made. Instead, there would be a rough map of the major blood vessels only. Every cell in the body must have some contact with the blood. Contact may be either direct, or indirect, such as with neurons shielded by glial cells. The need for contact means blood passages are everywhere. However, all the fine blood vessels will be made on an as-needed basis. In general, as any large pattern or structure is made, there will be a constant, regularly-spaced forming of passages, and blood-vessel walls, for the needed contact with blood.

We have said that the cell controllers can image what any part of the organism should look like at any stage in the organism’s development. We have just considered what is meant by “image.” Let’s now consider what is meant by “stage.” Here “stage” means time, and time can be very finely divided. The overall development plan of a human must cover at least as long as the human can live. Let’s assume the overall development plan covers one hundred years. If the development stages were minute-by-minute, then there would be over fifty-two million stages and associated plans. Of course, fine timing is only needed during early development. Even so, there will be a large number of different identifiable stages of development during the lifetime of the organism. Most of these stages will be during early development. Every time a new structure starts to form, one can say that it is a new stage. And that structure itself may have distinct stages of growth. For example, a human skeleton continues growing for about twenty years, and its rate of growth varies over that time.

The linear growth of a structure, where a structure just gets bigger without changing its shape or function, does not require a large number of separate plans to show or describe it. Instead, all that is needed is a scaling factor and some simple mathematics. An object that is mapped onto Cartesian space can be changed in size without any distortion, by just multiplying every coordinate by the same scaling factor. For example, a cube in three-dimensional space could be specified by eight points such as (1,1,1), (2,1,1), (1,2,1), (2,2,1), (1,1,2), (2,1,2), (1,2,2), and (2,2,2). If one wanted the same cube, only three times as large (with a volume twenty-seven times greater), then all that is needed is to multiply every single coordinate by the scaling factor of three. For example, (3,3,3), (6,3,3), (3,6,3), (6,6,3), (3,3,6), (6,3,6), (3,6,6), and (6,6,6). If instead, one wanted a cube only half as large as the original, then multiply by one half. For example, (.5,.5,.5), (1,.5,.5), (.5,1,.5), (1,1,.5), (.5,.5,1), (1,.5,1), (.5,1,1), and (1,1,1). Although a cube is a very simple shape, it does not matter how many points describe a shape, or how complex the shape is. Any shape can be scaled larger or smaller, without distortion, by multiplying every coordinate by the scaling factor.
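
The scaling operation itself is easy to write out. Here is a minimal sketch in Python, using the cube coordinates from the text; the function name is chosen just for the example.

    # Scale a shape, given as a list of (x, y, z) points, by a single factor.
    def scale(points, factor):
        return [(x * factor, y * factor, z * factor) for (x, y, z) in points]

    cube = [(1, 1, 1), (2, 1, 1), (1, 2, 1), (2, 2, 1),
            (1, 1, 2), (2, 1, 2), (1, 2, 2), (2, 2, 2)]

    print(scale(cube, 3))    # the same cube, three times as large on each side
    print(scale(cube, 0.5))  # the same cube, half as large on each side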

The cell controller can certainly do simple mathematics, and thus can scale any structure. The specification for an entire stage of growth for a particular structure could require as few as two numbers. One number would be the scaling factor representing the total change in size from the beginning of the stage. The other number would be the time period over which the scaling is to take place. For example, the plan for the development of a particular organism’s skeleton from years one to five, may be just the numbers 1.6 and 4. These numbers would mean to grow at a steady rate so that the skeleton will be 60% bigger than its current size after four years. However, the bone cells would not follow this part of the plan until a year has elapsed on their master clocks. By the further application of some simple mathematics, individual bone cells could calculate how often they would have to divide to achieve the planned growth.
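
To show the sort of simple mathematics involved, here is a sketch in Python. It assumes, purely for the example, that the planned growth comes entirely from cell division and that no cells die. If the cell count must multiply by 1.6 over four years at a steady rate, and each division doubles one cell into two, the average interval between divisions follows directly.

    import math

    scaling_factor = 1.6  # planned growth over the stage
    stage_years = 4.0     # duration of the stage

    # Steady growth means the cell count N(t) = N0 * scaling_factor ** (t / stage_years).
    # One division turns one cell into two, so a per-cell division interval T
    # must satisfy 2 ** (t / T) = scaling_factor ** (t / stage_years), giving:
    interval = stage_years * math.log(2) / math.log(scaling_factor)
    print(round(interval, 1))  # about 5.9 years between divisions, on average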

Of course, a major growth stage for something as complex as a skeleton will require more specification than just the two numbers cited. However, those two numbers may well be sufficient to specify the overall pattern of development. All the exceptions to the overall pattern would have to be individually given. And the growth of a large structure such as a skeleton would certainly have to be coordinated with the growth of other things as well, such as the rest of the organism.

The ability to handle linear growth with simple mathematics, greatly reduces the total information needed to plan all the stages of development. Stages can be specified as discontinuous steps. Each step would mark the appearance of a new structure, pattern, or process. At any one time during development, there could be many different stages active in different parts of the developing organism. For example, all the stages of development for an eye are going to be mostly independent of, and different from, all the stages of development for a brain. Only when the eye needs to be connected to the brain will there be some overlap and synchronization. In general, development plans are both time and location dependent. Only at the very beginning of an organism’s development will a single plan cover the entire organism. Thus, by a “stage of development,” not just time, but also a particular part of the organism is usually meant.

The regulatory eggs of the starfish, sea urchin, and dragonfly, are interesting examples of the cell controller’s ability to scale the size of an entire organism. By regulation, it is meant that the organism can recover from partial destruction or loss and still develop as normal. However, what is interesting is that the size of the organism will change. For example, a starfish egg-cell will divide repeatedly as part of its normal development. At the eight-cell stage, if one were to destroy any seven of the eight cells, the remaining cell will be able to go on and form a complete starfish larva, but this larva will be only one-eighth normal size by volume. For the sea-urchin egg, one can destroy cells at either the two-cell, or four-cell stage, and see a similar scaling in size of the organism that develops from the surviving cell or cells. In addition, if two sea-urchin embryos during very-early development are joined together, the result will be a sea urchin roughly twice normal size by volume.

An insect egg is typically a small cylinder containing mostly yolk. The yolk is the raw material needed to develop the insect inside the egg. Along with the yolk, there is a single, centrally-positioned nucleus from which the whole insect will ultimately develop. In the case of the dragonfly egg, once development starts, the nucleus divides in two, and these two nucleuses migrate to opposite halves of the egg cylinder. Then, each of these nucleuses divides, and the new nucleuses move apart. At this stage there are four nucleuses in the egg. Two nucleuses are in each half. If the egg is now tied around the middle, in effect two eggs will be made, and two complete, although half-sized dragonfly larvae, will develop.

These examples of regulation show the plans for an entire organism being scaled to a different size. It seems that in the case of the starfish, sea urchin, and dragonfly, there is a count of the number of cells or nucleuses at a particular early stage, and the plans for the whole organism are scaled based on this count. However, there doesn’t seem to be any good practical reason why this is done. The fact that regulation of this kind is exceptional, is probably for the reason that it doesn’t serve any good purpose. (However, perhaps there was a purpose in the early history of these organisms. For example, over 280 million years ago there were gigantic dragonflies. Perhaps these giant dragonflies had the option to ligate their own eggs and thus double the number of their offspring.) Although it isn’t clear why these particular organisms have the programming or DNA instructions to regulate like this, the fact that they do, clearly shows the cell controller’s ability to scale its plans to different sizes.

So far, we have only considered the master clock as a means of determining when a development stage should begin. Of course, many stages would just follow the completion of a previous stage. These follow-on stages don’t need a clock to initiate them, but for those stages that are started based on the clock, there is probably often some leeway. For example, a stage may be programmed to start at an elapsed time of fourteen years on the master clock. However, the actual start may perhaps be delayed if there are unfavorable conditions for it at that time.

Using the master clock is one way that the cells in an organism can be synchronized to act together. However, there is another way to synchronize the action of the cells. This other way is to release a commonly-recognized chemical that has a programmed meaning to the cells. Such a chemical is called a hormone. One of the major uses of hormones is to allow an organism to adjust its development to environmental conditions. Many organisms make use of hormones to help control their overall growth. Although the actual growth of an organism is preprogrammed, this same preprogramming often allows for some moderation of the planned growth by hormones. The release of these hormones would in turn depend on environmental factors. As part of the specification for each cell type in an organism, there would be a description of how the cell should respond to the different hormones it might be exposed to.

Chapter 5: Gaia

The total volume of the universe is half A-space and half E-space, with the two spaces finely nested together. In A-space, both matter and energy are unevenly distributed. The same is true for E-space. What happens in A-space is mostly independent of what happens in E-space, and vice versa. The only interaction allowed between manifestations in the two spaces, is just what is allowed by the two space-block programs that govern A-space and E-space. Although the fundamental forces in A-space are mostly different than the fundamental forces in E-space, there is one force the two spaces have in common. This is the force of gravity.

There is a simple logic to the statement that gravity is a crossover force, shared by both A-space and E-space. The Earth is a large ball moving through A-space. It is not stationary in A-space. Consider a stationary object on Earth, such as a glass of water. At any one instant in time, the object is being manifested or represented by a large number of stationary blocks of A-space. However, just a second later the same stationary object is now manifested by a completely different set of blocks of A-space. The blocks of A-space that “held” the glass just a second ago, are now many kilometers away. The manifestation of the glass is constantly being passed along to different blocks of A-space. The reason for this is that the Earth is in absolute motion with regard to the underlying blocks of A-space.

The Earth revolves about the sun at a speed of thirty kilometers per second. In addition, the whole solar system is revolving about the galactic center at a speed of 250 kilometers per second. Our Milky Way galaxy is moving towards the Andromeda galaxy at a speed of 100 kilometers per second. Another measured motion is that of the Earth relative to the microwave background radiation. This motion is 390 kilometers per second. Overall, the Earth is very distant from the place in the universe it occupied just a second ago.

The primary cause of all the motion discussed so far, is gravity. However, there is another motion the Earth undergoes which is not due to gravity. This is the rotation of the Earth. The Earth’s circumference at the equator is 40,074 kilometers. Because the Earth makes one complete rotation in twenty-four hours, this means a point on the surface of the Earth is moving at a speed of roughly half a kilometer per second. Thus, an object in A-space at the surface of the Earth will move half a kilometer per second just because of the Earth’s rotation.
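
As a check on the arithmetic: 40,074 kilometers divided by the 86,400 seconds in a day gives about 0.46 kilometers per second.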

The rotation of the Earth came about due to both gravity and the conservation of momentum. Gravity pulled the Earth together, and conservation of momentum preserved the overall momentum or motion of the pulled-together pieces. Conservation of momentum is part of a broader law which states that mass and energy are conserved. The sum total of all the matter and energy in the universe is always the same. Matter may convert to energy, and vice versa, but the overall amount of matter and energy together never changes during the life of the universe. We have already stated in the “A-space and E-space” chapter that at the very beginning of the universe, when there was a single block each of A-space and E-space, the sum total of all matter and energy that the universe would contain was just a large number in each block. The conservation of matter and energy is not a force, but a law. The fact that A-space and E-space have different forces, does not mean that there must be a difference between their conservation laws. The most reasonable view is that both spaces have the exact same conservation law. The sum total of matter and energy in the total universe, comprising both A-space and E-space, does not change.

The conservation law actually simplifies the programs for both A-space and E-space. The reason for this is that an individual block of space never has to run any extra programming to determine if some manifestation should be created or destroyed independently of what is transferred to or from that block from other blocks. Also, if matter and energy were not conserved, it is hard to see how a gross one-sidedness wouldn’t develop. For example, if the presence of matter caused the creation of new matter, then the universe would fill with matter and one would expect some sort of exponential increase. Overall, the conservation law both simplifies the programming, and guarantees that a runaway, self-feeding condition, doesn’t either empty or fill the universe. We know the conservation law applies to A-space. We assume with good reason that it also applies to E-space.
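
As a toy illustration of why conservation simplifies the programming, here is a sketch, in Python, of a one-dimensional ring of blocks that only ever transfer a quantity between neighbors. The ring, the transfer fraction, and the starting amounts are all assumptions made for the example; the point is only that the total never changes, because no block ever creates or destroys anything on its own.

    # Each block holds some amount of a conserved quantity. At every step, a
    # block hands a fixed fraction of its amount to its right-hand neighbor
    # (wrapping around at the end). Mere transfers cannot change the total.
    blocks = [10.0, 0.0, 5.0, 0.0]

    for step in range(1000):
        moved = [b * 0.25 for b in blocks]               # what each block gives away
        blocks = [b - m for b, m in zip(blocks, moved)]  # subtract what was given
        for i, m in enumerate(moved):                    # hand it to the neighbor
            blocks[(i + 1) % len(blocks)] += m

    print(sum(blocks))  # always 15.0, up to floating-point rounding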

The whole point of considering planetary motion, gravity, and conservation, is to answer the question of how an E-space Earth could form around the A-space Earth, and then move synchronously with it. It must be that gravity is a crossover force. The same force of gravity must be present in both spaces, and the matter in one space can gravitationally attract the matter in the other space. Thus, the total mass of the Earth is not just all the matter in A-space. It also includes all the matter in E-space that occupies the same macroscopic volume of space as the physical Earth. The crossover force of gravity caused the two worlds to condense together in the same large space, and it keeps them together. And the same conservation of momentum keeps the two worlds turning synchronously with each other.

Although it isn’t apparent from anything that has been said so far, it seems the actual mass of the E-space Earth is much less than the mass of the A-space Earth. The matter of the E-space Earth is much lighter. Mass is a relative measurement. The force of gravity on an object is directly proportional to the mass of the object. At the same time, the amount of force needed to change the momentum of an object is directly proportional to the mass of the object. There is an interesting mathematical relationship here. One might think that if the E-space Earth were less massive than the A-space Earth, then it wouldn’t be moved by gravity the same way. However, this isn’t true. Although the force of gravity on the E-space Earth will be less, at the same time the force needed to move the E-space Earth will also be less, and by the same amount. Thus, the two worlds can remain in lock step, even though the one world is substantially less massive or lighter than the other.
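
The cancellation just described can be written out in one step of Newtonian mechanics. The gravitational force on a body of mass m from a mass M at distance r is F = GMm/r^2, and the acceleration this force produces is a = F/m = GM/r^2. The mass m cancels out, so a light E-space Earth and a heavy A-space Earth, sitting in the same gravitational field, are accelerated identically.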

There are actually two reasons we say the E-space Earth is much lighter than the A-space Earth. One reason is that if it weren’t, then its presence would show up in different physics experiments. Because of the crossover of gravity, and the fact that at the surface of the Earth E-space objects are moving independently of A-space objects, if the E-space objects were massive, they would cause apparently random fluctuations in gravitational measurements done by physicists. The other reason for believing E-space objects to be much less massive than A-space objects of comparable size, is the evidence of direct observation. The subject of observing E-space directly is covered in the next two chapters, but suffice it to say that large E-space objects both act as though they were light as a feather, and at the same time can quickly accelerate to great speeds and just as quickly decelerate without any evidence of great energy expenditure.

One would expect the more massive part of the E-space Earth to be at the center, just as it is for the A-space Earth with its iron core. This is probably so, but nothing is really known about it. However, it is likely the overall lightness of the E-space Earth persists right to its core. The surface of the A-space Earth is characterized by the transition from solids and liquids, to gas. However, it is not clear where the surface of the E-space Earth actually is. For the sake of convenience, we’ll assume the surface of the E-space Earth occurs where it does in the A-space Earth.

The matter that forms the E-space Earth was pulled together by gravity from the same giant cloud of matter from which the whole solar system was formed. Because of the crossover force of gravity, wherever a lot of A-space matter is found in the universe, there is probably some E-space matter there too. Every celestial A-space object probably has associated E-space matter. In our solar system, the sun and planets must have associated E-space masses. There must be a great deal more E-space matter associated with the sun than there is with the Earth. Just as the solar system is largely clear of A-space matter, except where the sun and planets are, it is probably the same for the E-space matter.

There is an interesting problem in astronomy known as “the missing mass.” Astronomers have two different ways of estimating the mass of galaxies. The first way is to estimate the mass based on the amount of radiation received from the galaxy. The distance to the galaxy is taken into account, since the intensity of the radiation will decline with the square of the distance. Astronomers express the estimated mass in units of solar mass. Our sun is the unit of measurement and has a mass of one solar mass. A typical galaxy will have a mass of many billions of solar masses.

The other way to estimate the mass of a galaxy, is to study its dynamics and then apply the law of gravity to calculate what the mass must be. This method has already been used to determine the masses in our solar system. The starting point is to determine the mass of the Earth. This can be done mathematically using the law of gravity and the radius of the Earth. The distance from the Earth to the sun can be accurately measured by triangulation. With the known quantities of the Earth’s mass, its distance from the sun, and its time to revolve about the sun, the mass of the sun can then be calculated using the law of gravity. Once the sun’s mass is known, the mass of all the other planets can be calculated by using their distances from the sun and their times to revolve around it. In similar fashion, the mass of a galaxy can be calculated based on distances and times of revolution.
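
As an illustration of the last step described, here is a sketch in Python: given the Earth-sun distance and the Earth’s orbital period, the law of gravity yields the sun’s mass. The numerical values are modern textbook figures, included only for the example.

    import math

    G = 6.674e-11           # gravitational constant, m^3 / (kg s^2)
    r = 1.496e11            # mean Earth-sun distance, meters
    T = 365.25 * 24 * 3600  # Earth's orbital period, seconds

    # For a nearly circular orbit, gravity supplies the centripetal force:
    # G * M * m / r^2 = m * (2 * pi / T)^2 * r, where m is the Earth's mass.
    # Solving for M, the sun's mass (m itself cancels out of this step):
    M_sun = 4 * math.pi ** 2 * r ** 3 / (G * T ** 2)
    print(f"{M_sun:.3e}")   # about 1.989e+30 kilograms, one solar mass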

The “missing mass” happens when one compares the estimated mass of a galaxy based on radiation, with the estimated mass of the same galaxy based on dynamics. For every galaxy where astronomers have made such a comparison, the dynamics mass is always at least ten times the radiation mass. The dynamics of large galaxy clusters often show a mass of more than a hundred times the radiation mass for the clusters. The verdict of the astronomers is that overall, at least ninety percent of the mass of the universe is missing. The methods of the astronomers are very sophisticated. One should not think that they are overlooking some easy answer, such as that the missing mass is just large gas clouds, or a large number of black holes, or dim stars. These and other possibilities have been considered and seem doubtful. The basic problem is that the missing matter, whatever its form, should be radiating some detectable energy. The energy doesn’t have to be visible light. Instead, it can be anywhere on the electromagnetic spectrum, from radio waves to gamma rays. The astronomers have detectors for the whole range of the electromagnetic spectrum, and yet they can’t find the missing mass.

Perhaps the missing mass is in E-space. Matter in E-space is not going to radiate energy in A-space. However, the presence of the E-space matter will contribute to the overall gravitation observed in A-space. An interesting fact about the missing mass is that the amount of the missing mass increases with the distance from the galactic center. For example, the nearby Andromeda galaxy has about ninety percent of its calculated mass in its inner regions missing, and ninety-nine percent of its calculated mass in its outer regions missing. It seems to be an unbroken rule that the percentage of missing mass rises as the distance from the galactic center increases. The much-lighter E-space matter is suggestive. One might think that gravity will cause the more dense A-space matter to sink more to the center of a galaxy than the E-space matter. An analogy would be the Earth with its iron core at the center, and gaseous atmosphere at the surface. However, the E-space matter is not actually displaced by the A-space matter. Instead, it seems that A-space matter is more easily compressed by gravity into a smaller volume than E-space matter is. The original hydrogen gas cloud of a galaxy must have been compressed by gravity to a much smaller volume than the original E-space cloud of matter. This would explain the increasing proportion of E-space matter to A-space matter as one moves away from the galactic center.
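
The dynamics side of this comparison can be put in a single formula. For matter orbiting at speed v at distance r from the galactic center, the mass that must lie inside the orbit is roughly M = v^2 * r / G. Measured orbital speeds in galaxies stay roughly flat as r increases, so the inferred mass keeps growing in proportion to r even though the visible light thins out, and the missing fraction therefore rises with distance from the center.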

If the missing mass of the universe really is in E-space, then this would mean the overall mass of the E-space universe is ten times or more the overall mass of the A-space universe. This statement would seem to conflict with the lightness of E-space matter. However, the conflict is easily resolved by considering the tremendous volume of a galaxy, and how little of that total volume is actually filled with dense, A-space matter. What the E-space matter loses with density, it regains with volume. For example, a cubic meter of iron is much more massive than a cubic meter of air, but a billion cubic meters of air is much more massive than that cubic meter of iron.

It should be apparent by now that only a very minuscule fraction of the total E-space matter in the universe is actually associated with our Earth. One should avoid thinking that our Earth, and the life on it, is something special and important in this universe, because it isn’t. It is special and important to us because we are a part of it, but it means nothing to the universe as a whole. There are, no doubt, many planets with life similar to our own. However, looking at A-space, one must say that life is the exception, not the rule. For the most part, A-space is dead and lifeless. Even so, the universe itself is teeming with life, but this life is confined almost exclusively to E-space. Our own souls reside in E-space. Although our Earth has A-space life, that A-space life is always partly due to E-space. E-space life can and does exist by itself, but there is no such thing as A-space life existing by itself.

If one were to enter the E-space world of the sun, one would find it teeming with life. However, that life would be exclusively E-space life. None of the forms one might see would look like images of Earth life. It is hard to speculate what life would be like at the sun. However, because intelligence and awareness both reside in E-space, it would seem reasonable to say there would be, among other things, intelligent and self-aware beings. These beings, though, would know nothing about Earth life, and their concerns, if they had any, would be very different from our own, and also their intelligence would be specialized very differently. Although the sun in A-space is a very hot and energetic body, the associated E-space world is oblivious to it. The heat and radiation in A-space stays in A-space. There is no penetration into E-space. Thus, the E-space life at the sun is probably no different than the E-space life at the dead planets in our solar system, or most anywhere else in the universe.

The only obvious exception to the widespread sameness of E-space life, would be our Earth and any other life-supporting, A-space planets in the universe. Thus, in the universe there is a great abundance of E-space life, and only a thin smattering of E-space-supported A-space life. One could say that E-space-supported A-space life represents a leakage of E-space life into A-space. Put poetically, the E-space cup runneth over.

There is an opinion, called the Anthropic Principle, which in its strong version states that the universe exists as it does so that intelligent life could ultimately come about and exist. However, the Anthropic Principle, as currently advocated, only considers A-space. The crux of the Anthropic argument is that if any of several physics parameters were changed slightly, then life as we know it would be impossible. The reasoning of the Anthropic Principle is that the physics parameters must have been set at the universe’s beginning with life in mind.

If the Anthropic Principle were to broaden its horizons to include E-space, then the truth of the principle would be almost self-evident. When one considers how few and far between life is in A-space, and how fragile it is, and how painful, then the Anthropic Principle without E-space appears suspect. If life is the goal, then why is so much done for so little? A whole universe, and so little life to show for it. Mankind itself has only been around for about 100,000 years, and the universe is already fifteen billion years old. However, when one looks at E-space, life is the rule, not the exception. If the missing mass of the universe is in E-space, then not only is life abundant in E-space, but it is also what most of the total mass of the universe is devoted to. Under these conditions, it would be reasonable to conclude that life is indeed the purpose of the universe, just as the strong version of the Anthropic Principle would have it.

It is important to realize that what happens on Earth is confined to Earth. As should be apparent from what has already been said, there is no grand conspiracy of E-space to bring life to our Earth. The life on Earth is solely due to the E-space Earth that is associated with the A-space Earth. If it had not been possible for life to develop on the physical Earth, then life in the associated E-space Earth would be just like it is elsewhere in the universe. Instead, because conditions on Earth were favorable, a part of the E-space Earth is preoccupied with A-space life.

From before the time of the first cell to the present, part of the E-space Earth has been devoted to the development and maintenance of A-space life. It is not unreasonable to refer to this devoted part of E-space Earth as a single intelligent superbeing, named Gaia (pronounced guy-ah). Gaia has been at work on behalf of A-space life for about 3.5 billion years. Although Gaia is a superbeing, this is only relative to ourselves and one should not be overimpressed. Gaia’s actual structure is probably shifting, and changes as needed. Gaia, today, is probably much larger and more complex than it was in the beginning before the first cell. An interesting question is whether Gaia is self-aware or not. Probably the correct answer is that Gaia is not only self-aware with a soul, but in fact has many souls connected at different places in its large mind. The mind of Gaia is probably much too large for a single soul to service it alone.

Although Gaia is certainly huge compared to ourselves, it must be only a tiny part of the total E-space matter that forms the E-space Earth. Consider that the mind of a man, including his soul, fits in a small brain case. This would be a volume of roughly a thousand cubic centimeters. By contrast, the volume of the E-space Earth is at least as large as that of the A-space Earth, and the solid part of the physical Earth has a volume of roughly 10^27 cubic centimeters. Looking at these numbers, it is easy to suppose Gaia comprises much less than one percent of the total mass of the E-space Earth. In fact, Gaia probably isn’t even a trillionth of the E-space Earth.
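
The arithmetic behind that last claim is simple enough to show. A sketch in Python, using the round figures from the text:

    brain_volume = 1.0e3   # cubic centimeters, a rough human brain case
    earth_volume = 1.0e27  # cubic centimeters, roughly the solid Earth

    one_trillionth = earth_volume / 1.0e12
    print(one_trillionth / brain_volume)  # 1e12: even a trillionth of the
                                          # Earth's volume would hold a
                                          # trillion brain cases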

With the exception of gravity, there is very little interaction between the two spaces. Events in E-space are normally invisible to A-space, and vice versa. However, somehow there can be a weak, limited interaction between the two spaces, but the actual force or mechanism of this interaction is unknown. Besides gravity, A-space has three prominent forces. These are electromagnetism, the strong nuclear force, and the weak force. E-space is so different from A-space that it is very doubtful that E-space has any of these three forces. E-space has a different set of forces defined for it. We have already mentioned the existence of at least one large-scale, selectively-organizing force in E-space. Aside from this, it is hard to say anything specific. Because we really don’t know in detail the physics of E-space, we can’t explain just how the crossover into A-space can happen.

Somehow, Gaia in E-space was able to observe events in A-space. Among other things, Gaia observed and learned both the chemistry and physics of A-space. In addition, Gaia could actually manipulate A-space matter. This crossover capability of E-space is already apparent when considering souls, mind pieces, and cell controllers. Interaction between the two spaces is obviously going on, but just how it happens is unknown. The only thing that can be said with certainty is that the interaction is typically weak and limited.

One of the more interesting discoveries of the 20th century is that the composition of the Earth’s atmosphere is the result of life. Not only did the current atmosphere originate from life processes, but it is actively maintained by life. In other words, the manipulation of the atmosphere is not just something that happened long ago. Instead, it is an ongoing process. James Lovelock, an atmospheric scientist, gets major credit for both suggesting and providing supporting evidence that life maintains the Earth’s atmosphere for its own benefit and to suit its own needs.

The surface of the Earth, and the atmosphere, were very different 4.6 billion years ago, shortly after the Earth had formed. For one thing, there probably wasn’t any solid ground. The whole Earth was molten from radioactive-decay heating. A solid crust had to wait a few hundred million years before it could form. Besides great heat, the surface of the early Earth was pelted with meteorites both large and small. The atmosphere of the early Earth came about from the buildup of gases released from the molten Earth. This would be similar to the gases released by volcanoes. Major gases from volcanoes are chlorine, water, carbon dioxide, nitrogen, and hydrogen sulfide.

Sometime before 3.8 billion years ago, the temperature at the surface of the Earth dropped below the boiling point of water. Up until this time, there were no oceans and all the water was steam in the thick atmosphere. With a drop in temperature due to a continuous falloff in the radioactive-decay heating, the steam was able to condense. Over a period of several million years the water left the atmosphere and formed the oceans.

Besides the radioactive-decay heating, one of the things that helped to keep the surface temperature of the Earth relatively high, was the carbon dioxide in the early atmosphere. Carbon dioxide is important to temperature because it absorbs infrared radiation and converts it to heat which is transferred by collision to the other gases in the atmosphere. Most of the infrared radiation comes from matter which is heated by sunlight and then radiates this heat as infrared. Instead of letting the infrared energy escape into space, the gaseous carbon dioxide traps this energy and recycles it. The net result is that the Earth retains the solar energy that strikes it for a longer period of time. Regardless of carbon-dioxide levels, the incoming solar energy will be balanced by the outgoing energy radiated into space, and the overall temperature at the Earth’s surface will be in equilibrium. However, the whole point of the carbon dioxide is that it allows a higher equilibrium temperature.
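
The equilibrium being described can be illustrated with the standard radiation-balance calculation, sketched below in Python. The particular numbers are modern measured values, included only for the example. With no heat-trapping gases at all, the balance point for the Earth works out to roughly minus eighteen degrees centigrade, well below the observed average; the difference is what gases such as carbon dioxide supply.

    S = 1361.0       # solar energy arriving at Earth, watts per square meter
    albedo = 0.3     # fraction of sunlight reflected straight back to space
    sigma = 5.67e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

    # Balance the absorbed sunlight against the outgoing thermal radiation:
    # S * (1 - albedo) / 4 = sigma * T**4, then solve for the temperature T.
    T = (S * (1 - albedo) / (4 * sigma)) ** 0.25
    print(round(T - 273.15, 1))  # about -18.6 degrees centigrade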

At present, the amount of carbon dioxide in the atmosphere appears minuscule. Only .03% of the atmosphere is carbon dioxide. However, this small amount makes a big difference. It has been calculated that if all the carbon dioxide were removed from the atmosphere, then the oceans would freeze solid. The early atmosphere had much more carbon dioxide than the present atmosphere. In fact, carbon dioxide was a major component of the early atmosphere, and the total mass of the early atmosphere was probably several times the mass of the present atmosphere. This great mass of carbon dioxide helped to keep the early Earth much warmer than it is today, even though there was less energy from the sun. The sun has been steadily increasing its energy output since it formed. Four billion years ago, the output of the sun may have been as much as thirty percent less than it is today. However, with regard to temperature, this lower energy was more than compensated for by the abundant carbon dioxide.

The planet Venus is a good example of what the atmosphere of a lifeless Earth would be like. Venus is a dry, acidic planet with a waterless atmosphere. All the water Venus once had, was destroyed in the upper atmosphere by sunlight. Today, the atmosphere is ninety-eight percent carbon dioxide and many times more massive than Earth’s present atmosphere. The overall composition of Venus is similar to the Earth’s. For example, the density of Venus is 5.2 grams per cubic centimeter while the density for Earth is 5.5. Both planets are about the same size, with Venus being the smaller of the two. Both planets formed at the same time. However, Venus is closer to the sun and receives about twice as much solar energy per unit area as the Earth does. The average surface temperature of Venus is 477 degrees centigrade, while the average temperature for the Earth is only thirteen degrees centigrade. Most of this temperature difference is due to the great mass of carbon dioxide enshrouding Venus.

Two and a half billion years ago, Venus would have been less hot than it is today. The main reason is that there would have been less carbon dioxide in the atmosphere, because a lot of carbon dioxide has been added during the last 2.5 billion years due to continuing volcanism and outgassing. However, unlike Venus, at least 2.5 billion years ago on Earth the amount of carbon dioxide was declining. This was due to life, but before life could make a big impact on the atmosphere and temperature of the Earth, it first had to gain a toehold. This toehold, among other things, required a minimum temperature somewhat less than the boiling point of water. E-space can’t create the conditions needed. It can only watch and see if they occur. E-space can act to create life in A-space only if the opportunity of favorable conditions appears.

It seems the surface temperature of Venus never dropped below the boiling point of water. There never was a toehold for life on Venus, and E-space could do nothing about it. The E-space world of Venus never had the opportunity to establish any life on the associated A-space world. However, unlike Venus, the opportunity did present itself on Earth. Since the time the oceans formed, over 3.8 billion years ago, the surface temperature of the Earth has been below the boiling point of water. The greater distance from the sun made the difference. However, if nothing had been done about the carbon-dioxide atmosphere, and the buildup had continued, then it is estimated that all the carbon dioxide, along with the more energetic sun, would put the surface temperature of the Earth today at roughly 300 degrees centigrade. Since the boiling point of water is 100 degrees centigrade, it would be impossible for any life on Earth to exist.

The oldest life is known to be bacteria, and it first appeared about 3.5 billion years ago. Actually, removal of carbon dioxide from the atmosphere may have started at this time. This first life lived in the oceans. Gases dissolve in water, and a part of the atmosphere will always be dissolved in the oceans. When cells living in the oceans remove dissolved gases from the water, it is the same thing as removing the gases directly from the atmosphere. Atmospheric gases will be absorbed by the water just as quickly as the gases are removed from the water by bacteria and converted into non-gaseous compounds. There are bacteria living today that create their food, glucose, by reacting carbon dioxide and hydrogen sulfide with light. The actual reaction is six molecules of carbon dioxide, plus twelve molecules of hydrogen sulfide, plus light energy, gives one glucose molecule, plus six water molecules, plus twelve sulfur atoms. The first bacteria may well have used this reaction because both carbon dioxide and hydrogen sulfide were present in the early atmosphere. Some or all of the carbon atoms in the created glucose molecule may ultimately find their way back into the atmosphere as carbon dioxide, but the overall net effect of bacteria using this reaction would be to reduce the level of atmospheric carbon dioxide.
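
Written out as a balanced chemical equation, the reaction just described is:

    6 CO2 + 12 H2S + light energy -> C6H12O6 + 6 H2O + 12 S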

Although the removal by bacteria of carbon dioxide and hydrogen sulfide may have started 3.5 billion years ago, the actual production of oxygen by blue-green algae didn’t begin until 2.5 billion years ago. Perhaps the supply of hydrogen sulfide had been exhausted by this time, and life needed a new reaction to survive. The new reaction is the standard reaction of photosynthesis. Six molecules of carbon dioxide, plus six molecules of water, plus light energy, gives one glucose molecule, and six oxygen molecules. The blue-green algae use this reaction, and so do all the plants and trees. In terms of gases, this reaction converts carbon-dioxide molecules to oxygen molecules on a one-for-one basis. Unlike carbon dioxide, oxygen is a very reactive gas. Although carbon dioxide was being removed from the atmosphere, there was not a parallel buildup of atmospheric oxygen. Instead, the oxygen was reacting with iron and other elements.
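
Written the same way, the standard reaction of photosynthesis is:

    6 CO2 + 6 H2O + light energy -> C6H12O6 + 6 O2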

The production of oxygen by microorganisms is known to have begun 2.5 billion years ago, because the oldest deposits of oxidized iron (ferric oxide) date from that time. After about two hundred million years of oxygen production, the Earth had its first known glacial episode, 2.3 billion years ago. The carbon-dioxide level must have been low for this to happen. As far as temperature is concerned, the Earth seemed ready for the next step: the eucaryote cell from which all multicellular life is made. However, this next step didn’t happen until 1.4 billion years ago. This is a gap of 900 million years. There are two possible reasons for this great delay. The first reason is that although conditions allowed an earlier appearance, Gaia hadn’t designed the eucaryote cell yet. The second reason is that Gaia’s design of the eucaryote cell was ready, but the overall environmental conditions necessary for its survival, were not. There just isn’t enough information to know why this long delay of 900 million years happened.

One should not attribute too much foresight to Gaia at those early times. There is no good reason to believe that Gaia foresaw the future of multicellular life long before it made it. The possibilities probably weren’t realized until the first, simple, multicellular organisms had been successfully made about 800 million years ago. This is a gap of 600 million years since the first eucaryote cells appeared 1.4 billion years ago, although modifications were probably necessary to those first eucaryote cells before multicellular life could be made from them. If there was any long-range planning by Gaia during the first 2.1 billion years of life on Earth, when there were only procaryote cells, it was probably limited to the survival and prosperity of procaryotes. However, environmental conditions that are good for procaryotes will tend to be good for eucaryotes, and hence multicellular life.

At this point let’s consider the present Earth atmosphere. This atmosphere is not self-sustaining. It is not an equilibrium atmosphere that would persist if all planetary life were removed. Instead, our atmosphere is both a product of life, and actively maintained in its present condition by life. The composition of the atmosphere is 78% nitrogen, 21% oxygen, 1% argon, and .03% carbon dioxide. Other gases are present, but in relatively small amounts. If all life on Earth were eliminated, then the nitrogen and oxygen would leave the atmosphere. The oxygen would slowly react with the nitrogen and form nitrates which would dissolve in the oceans. After a million years or so, the Earth would have its equilibrium atmosphere. The argon would remain, and there would be much more carbon dioxide, but all the oxygen would definitely be gone, along with most, if not all of the nitrogen. However, instead of moving to this equilibrium state, the atmosphere is maintained in disequilibrium by the coordinated activities of the worldwide ecosystem.

Let’s consider each gas individually. Carbon dioxide is a stable gas, and if its concentration in the atmosphere is to remain the same, it is only necessary that life return as much as it takes. The removal of carbon dioxide by photosynthesis is counterbalanced by the addition of carbon dioxide from the respiration of planetary life. The next gas, argon, is a non-reactive gas and it plays no part in life processes. Concerning oxygen, the oxygen level has a definite tendency to drop, and it must be actively propped up. The oxygen level was built up in the first place by an excess of photosynthesis over respiration. As long as carbon dioxide is added to the atmosphere by volcanoes, then this excess carbon dioxide can be used to make oxygen without depleting the already existing amount of atmospheric carbon dioxide. The carbon that is separated from the oxygen is ultimately buried under sediments and thus prevented from recombining with oxygen. Concerning nitrogen, the reaction of nitrogen with oxygen, producing nitrates, is counteracted by special bacteria that break the nitrates back into nitrogen and oxygen.

One of the more interesting examples of precise control over the atmosphere is the production of ammonia. About a billion tons of ammonia are produced by bacteria every year. Ammonia is a base, and its presence in the atmosphere counteracts the acids produced by the oxidation of nitrogen and sulfur. It is estimated that if there were no ammonia production by bacteria, then rainwater would be as acidic as vinegar. Instead, there is just enough ammonia made to counteract the acids and keep the rainwater close to neutral.

Because Gaia designed and introduced all the novelties of earthly life, it is ultimately responsible for the life which keeps the atmosphere the way it is. For the most part, the entire worldwide ecosystem is running on automatic. There is no need for Gaia to constantly intervene to maintain the atmosphere. The only thing Gaia might do, is take occasional measurements of the atmosphere and look for unwanted trends in gas concentrations. If it sees a problem developing, it could perhaps introduce a new or modified type of bacterium to compensate. To automate things as much as possible, it may be that some of the types of existing life can themselves make accurate measurements of gas concentrations and will adjust their own reactions accordingly. The instructions for such measurements would be coded in the DNA and carried out by the cell controller.

The atmosphere is global, and so is the realm of Gaia, so it makes some sense to put the two together. However, managing the atmosphere takes up very little of Gaia’s capacity for work. Gaia is a self-appointed specialist-manager of earthly life. Nowadays, a lot of its work must be concerned with us. It’s probably doing a lot of observing, evaluating, managing, and perhaps designing. It was suggested earlier, in the “Brain” chapter, that some of the new mind pieces for mankind have flaws. Part of Gaia is probably working to improve the designs for these pieces. As to how Gaia’s designs or plans actually get turned into usable product, that is unknown.

Chapter 6: Hinduism and E-space

Both our souls and our mind pieces reside in E-space. They are made of E-space matter and are subject to E-space forces. However, our normal perceptions are of A-space. We have already seen how a great deal of the wiring of our minds exists in A-space as neurons. Among other things, the neurons are connected to A-space sensors such as our eyes and ears. As human beings, we are a mixed bag, being part A-space and part E-space. However, in order to preserve our fragile A-space bodies, we must be constantly aware of A-space.

The possibility of damage to our E-space part seems to be minimal, and thus there is no need to be aware of E-space happenings. In contrast, there is the constant possibility of damage to our A-space part. Our A-space bodies require constant attention and maintenance. Our minds have been specialized to care for our bodies and look after their needs, as well as being concerned with the production and upbringing of children, which are replacement bodies. The point that is being made, is that there is a very good practical reason why we do not normally perceive E-space. Any time spent perceiving, or even thinking about E-space, is time during which our A-space bodies are being neglected. Neglecting our bodies and their needs is a sure way to lessen our chances of survival. There can be no doubt that natural selection will work against any kind of E-space perception.

For humanity, E-space perception is both unnatural and exceptional, because it is impractical. E-space perception works against survival, instead of for it. It is for this reason, one may assume, that ordinary men are hostile to any suggestion that they concern themselves with E-space. People who claim to have had one or more E-space perceptions, quickly learn to keep their mouths shut. Ordinary people don’t want to hear about it. They profess disbelief. It doesn’t matter that there are many books, and a lot of evidence, that E-space is there and that people can sometimes perceive it; the fundamental concern must always be that such perceptions are contrary to survival.

The most radical type of E-space perception is where one’s soul, and at least part of one’s mind, temporarily leave the physical body and go on a journey of some kind in E-space. We will approach this subject with the sort of intellectual curiosity that is characteristic of science. Although it is said that curiosity killed the cat, many of us are curious and want to know about things even if they aren’t good for us. In fact, many people have been driven by curiosity to explore E-space for themselves. Although we will examine a tried-and-true method that allows journeys into E-space, its use is not recommended. However, for the extremely curious who can’t resist, try it at your own risk. As we shall see, there is a very big risk. Fortunately, there is no need to see E-space now, as we all get a look at it upon our deaths. The conditions of the afterlife will be briefly covered in the “Soul” chapter.

Having put our subject in proper perspective, and even issued a warning, we can now examine the facts and learn something. Of all the major world religions, there is only one that is seriously concerned with experiencing E-space before one’s death. This religion is Hinduism. Hinduism has its holy books, called the Vedas. Most of the content of the Vedas is archaic and has no place in our modern world. Like all holy books, they suffer from old age and their pre-scientific origins. The Vedas had many authors and were probably written over a period of several centuries. It is not known with any certainty when the Vedas were written, but a good guess would be that the oldest books were written about 1,000 B.C. Hinduism was already established at the time of Buddha, and Buddha died in 487 B.C.

In the beginning of a religion, there is just an idea or two, a story, an outlook, or something. Nothing is certain and carved in stone yet. The founders of the religion write about it, and this is the start of the religion’s fossilization. The writings increase greatly in importance once the founders are dead. Assuming the religion is successful, an organization is established to perpetuate the teachings of the founders. Because the founders are gone, there is no choice but to rely on their writings. To prevent squabbling and contention over both the teachings and the meaning of the new religion, the early writings are greatly elevated in status, and perhaps proclaimed sacred, infallible, holy, complete, and final. To avoid the criticism that the works of men are subject to error, the new priesthood might even go so far as to proclaim the written works as done by God himself. It is a credit to the Hindu priests that they didn’t resort to this lie. Unlike the Christians, the Hindus never claim anything but a human origin for their holy books. However, their books don’t tell the tall tales that the Bible does. The falsity of the Bible is so great that there was probably no choice but to claim divine origin and thus shield its fraudulent statements from the criticism and condemnation they deserve.

To become established, a religion must have some sort of hook that will attract people. It must offer something people want. Among the several possible hooks, there are the promise of eternal life, access to a powerful god that will listen to one’s needs and provide help, promise of freedom from the wheel of rebirth, promise of a paradise after death, and an offer of knowledge and experience of the supernatural. Of the five hooks just listed, notice that three promise some advantage after death, and two promise advantage before death. It is an easy task to identify which hooks are used by each of the major religions. Christianity uses the eternal-life hook, and the helping-god hook. Mohammedanism (Islam) uses the paradise-afterlife hook, and the helping-god hook. (Note the strong resemblance between Christianity and Mohammedanism in terms of the hooks they use. This is no coincidence. Mohammed was both familiar with, and influenced by Christianity.) Hinduism uses the knowledge-and-experience hook. Buddhism uses the escape-the-wheel-of-rebirth hook. Just as Mohammedanism was influenced by Christianity, so was Buddhism influenced by Hinduism. Part of the knowledge of the Hindus is that the essential part of man is subject to rebirth over and over again without any definite end. This is termed the wheel of rebirth. Buddha obviously did not like this idea of endless rebirth. Buddha was not alone in his dislike, as the spread of his religion shows. His escape-the-wheel-of-rebirth hook was a big success.

Of the four major religions mentioned, only Hinduism appears modern in terms of its hook. It has the scientific spirit of seeking knowledge. Hinduism claims that knowledge is power, and so does science. This appeal to knowledge is no doubt the cause of Hinduism’s greater respectability among many educated and cultured Westerners. However, this is not to say that Hinduism is a desirable religion to practice. The basic idea of Hinduism is okay, but the way the religion is actually practiced in India and elsewhere, isn’t.

The only parts of the Hindu holy books that hold any interest for the modern reader are the Upanishads. The Upanishads are a collection of ancient writings which embody the philosophy of Hinduism. The hook of Hinduism is the advantage of knowledge. However, how is the individual to gain this knowledge? If it were just a matter of reading a book, and gaining the knowledge that way, then it would be hard to see how that could be the basis for a religion. If the knowledge came strictly from a book, and anyone could read the book and know all it says, then how could that be of any great personal advantage? If the knowledge in the book is so valuable, and anyone could have it by just reading it, then everyone will read it and no one could possibly have any advantage.

No religion says that one only has to read a book. Quite the contrary, all religions demand some sort of personal action or payment to gain the hooks the religion offers. Hinduism is no exception. The Upanishads are certainly recommended by Hindus for study, but this is only a starting point. The Upanishads hint at the knowledge available, but they do not exhaustively detail that knowledge. For the knowledge to be advantageous, it is necessary that the devotee or student of the religion get the knowledge himself. One sure way to get knowledge about the supernatural, is to directly experience the supernatural. This is the approach used by Hinduism. But how does one experience the supernatural? If Hinduism had said nothing practical about how to experience it, then it would never have succeeded as a religion. However, the whole reason Hinduism developed and prospered in the first place, was because it had a very specific and powerful means for an individual to experience the supernatural for himself.

The Upanishads speak clearly about the means, or method, to gain knowledge. They do not try to conceal it. It is important for the success of the religion that the method be made known to everyone. If the reader’s curiosity about this method is building, then prepare yourself for an amazingly simple method. It can be stated in a single short sentence: Repeat over and over again the sound OM. This so-called sacred syllable, OM, is the whole pillar of Hinduism. So that there is no confusion, let’s be clear about the pronunciation. The sound OM rhymes with Rome, and home. The O sound is short, and the M sound is typically drawn out, so the syllable would be sounded more like OMMM.

The importance of OM to Hinduism cannot be overstated. It is the very essence of Hinduism. Without it, the religion would be nothing but a philosophy. The discovery of OM was the beginning of Hinduism.

The word which all the Vedas rehearse,
And which all austerities proclaim,
Desiring which men live the life of religious studentship—
That word to thee I briefly declare.
That is Om!

That syllable, truly, indeed, is Brahma!
That syllable indeed is the supreme!
Knowing that syllable, truly, indeed,
Whatever one desires is his!

That is the best support.
That is the supreme support.
Knowing that support,
One becomes happy in the Brahma-world.[6]

This verse is from the Katha Upanishad. The excellent translation from the Sanskrit original is by Robert Hume, who translated the thirteen principal Upanishads. In this verse, we see the praises that are heaped upon OM. There is also a promise of desires fulfilled and happiness attained.

Om!—This syllable is the whole world.
Its further explanation is:—
The past, the present, the future—
Everything is just the word Om.
And whatever else that transcends threefold time—
That, too, is just the word Om.[7]

This verse is at the beginning of the Mandukya Upanishad. As one can easily see, great emphasis is put on the importance of the sound OM. The verse is an obvious exaggeration, but OM is the whole crux of the religion, so it is hard to overemphasize it. Instead of being condemned, the exaggeration is welcomed by the religion’s devotees.

Taking as a bow the great weapon of the Upanishad,
One should put upon it an arrow sharpened by meditation.
Stretching it with a thought directed to the essence of That,
Penetrate that Imperishable as the mark, my friend.

The mystic syllable Om is the bow. The arrow is the soul.
Brahma is said to be the mark.
By the undistracted man is It to be penetrated.
One should come to be in It, as the arrow [in the mark].[8]

This verse is from the Mundaka Upanishad. It uses a clever analogy, but is obviously dated. Nowadays guns and bullets are used, instead of bows and arrows. Notice how OM is identified as the bow in the fifth line, and in the first line the bow is called the great weapon. The power of OM is made clear. A straightforward interpretation of the verse is that the use of OM will launch the soul into E-space.

As the material form of fire when latent in its source
Is not perceived—and yet there is no evanishment of its subtle form—
But may be caught again by means of the drill in its source,
So, verily, both are in the body by the use of Om.

By making one’s own body the lower friction-stick
And the syllable Om the upper friction-stick,
By practicing the friction of meditation,
One may see the god who is hidden, as it were.[9]

This verse is from the Svetasvatara Upanishad. It uses an anachronistic analogy, just as the previous verse did. Before matches, lighters, and self-igniting stoves, mankind started fires by such means as rapidly spinning a stick of wood (the drill) whose end is pressed against a stationary piece of wood. The beginning of the verse is scientifically inaccurate as it seems to be saying that the fire exists in the wood in some subtle form. This is not true, but it is excusable since the Upanishads are pre-scientific writings.

The meaning of this verse starts with the fourth line. The first three lines make the claim that fire has both an open, explicit form, and also a subtle, hidden form. The remaining lines make the claim that there is something similar in the human body. This something in the body has both a hidden form, and an explicit form. Normally it is hidden, just as the writer of the verse supposed fire is hidden in the stick. But by using OM, one can draw out this hidden something and make it known to one’s own awareness.

There are two Brahmas to be known:
Sound-Brahma, and what higher is.
Those people who sound-Brahma know,
Unto the higher Brahma go.

Now, it has elsewhere been said: “The sound-Brahma is the syllable Om.”[10]

The meaning of this verse is clear. By use of OM, one goes to the higher Brahma. The word Brahma is a technical term which occurs frequently in the Upanishads. Brahma is E-space.

This verse, and the following verses and prose we shall quote, are from the Maitri Upanishad. The Maitri is certainly an ancient work, but many Hindu scholars do not consider it as valuable as the Upanishads in either the traditional group of ten or the group of twelve. Probably the reason for this lower status is that the Maitri speaks too plainly. It is too open and obvious about what it is saying. From a religious standpoint, there are two problems with plain speaking. The first problem is that a clearly-stated position is more easily disagreed with. The second problem is that plain speaking makes the religion more simple. The people who make their livings from a religion don’t want a simple religion. Quite the contrary, the priests and scholars want enough complexity so that they will be needed to interpret it for the common man.

Who is both higher and lower,
That god, known by the name of Om.
Soundless and void of being, too—
Thereon concentrate in the head![11]

This verse praises OM, and exaggerates by calling the sound a god. The third line is interesting because it immediately contradicts the second line. OM is a sound, but it is called soundless, and OM, which was called a god, is now said to have no existence. This contradiction is the sort of deliberate obscurity and complexity which the priests and scholars of a religion love. Although we said the Maitri is more plain-speaking than the Upanishads in the group of twelve, it still has its share of double talk. If it didn’t have some obscurity, the Maitri Upanishad would have been discarded by its caretakers long ago.

The final line of this verse is an explicit statement about how OM is to be used. It is not a complete instruction, but it does say to concentrate on OM in the head.

Now, it has elsewhere been said: “The body is a bow. The arrow is Om. The mind is its point. Darkness is the mark. Having pierced through the darkness, one goes to what is not enveloped in darkness.”[12]

This prose is similar to the verse quoted from the Mundaka Upanishad. The same sort of bow-and-arrow analogy is used. In the Mundaka, the bow was OM and the soul was the arrow. In this prose from the Maitri, the analogy is less elegant, but the meaning is still the same. The Mundaka verse says that the mark or target is Brahma, which is E-space. Although this prose from the Maitri is not as elegant as the Mundaka verse, it does tell us something factual about entering Brahma. It implicitly says we go from the darkness of using OM (the eyes are normally closed during the repetition of OM), to the visible, non-dark world of Brahma. The implication of this prose is that we can see in Brahma.

Whereas one thus joins breath and the syllable Om
And all the manifold world—
Or perhaps they are joined!—
Therefore it has been declared to be Yoga.

The oneness of the breath and mind,
And likewise of the senses,
And the relinquishment of all conditions of existence—
This is designated as Yoga.[13]

This verse gives us a definition of yoga. It is not a very good definition, because it is deliberately obscure, but it does feature the syllable OM.

If a man practices Yoga for six months,
And is constantly freed,
The infinite, supreme, mysterious
Yoga is perfectly produced.

But if a man is afflicted with Passion and Darkness,
Enlightened as he may be—
If to son and wife and family
He is attached—for such a one, no, never at all![14]

We know from the previous verse that yoga involves use of OM. A common, popular image of India, is that of a yogi sitting still in some lotus posture and quietly meditating. What most outsiders don’t realize, is that if the yogi is actually meditating properly, then he is repeating the syllable OM over and over again to himself mentally. This is the key that will unlock the door. However, it is more easily accomplished lying down than sitting up.

The meaning of this verse is obvious as long as one keeps in mind that yoga is the use of OM, and OM is the means of entry into E-space. In the second line, the word “freed” is used. Freed from what? Freed from A-space. Instead of always perceiving A-space, the practice of yoga allows the soul to perceive E-space. The second half of this verse is interesting because it shows the basic conflict between seeking E-space perception, and taking care of one’s body and its needs. As already stated, E-space perception is not normal because it is not practical. The fragile A-space body requires our undivided attention. Of course, Hinduism cannot admit that its main goal is impractical, so it does its best to belittle the normal A-space concerns when compared with the seeking of Brahma.

The founders of Hinduism had somehow discovered OM. By repeating OM many times, the founders were able to become perceptive of E-space and move about in it. The method, in full, is to lie down comfortably on a bed, preferably at night before sleeping. The room should be quiet. Close the eyes and mentally repeat the sound, OM, over and over again at whatever seems like a normal pace. Do not say the sound out loud. Avoid stray thoughts and try not to feel the body. Although movement should be avoided, by all means do move if it will correct any physical discomfort. Normally, the attention has to settle somewhere, so a good place to focus the attention on, is the center of the forehead. If stray thoughts intrude, then try to make the thoughts about gaining entry into E-space, which is the reason for using OM.

There is no guarantee that the use of OM will produce results. Many hours of using OM, spread over many days, may be necessary before there are any results. A good question at this point is to ask just what results are possible. One possibility is that there will be a better and more frequent recall of dreams. The use of OM tends to enhance dream remembrance upon waking from sleep. Another possibility is that upon sleeping, there will be lucid dreaming. A lucid dream is where one is conscious and more-or-less mentally normal during the dream. Lucid dreams are actual E-space experiences. Another possibility is that during sleep, there will be an onset of lucidness and a direct perception of a non-physical body. Often this E-space body is either pulling out of, or reentering, the physical body. This possibility is similar to a lucid dream, but the addition of a tangible, non-physical body, which is often in close proximity to the physical body and capable of independent motion, is what convinces those who experience it that they are truly exterior to the physical body. A final possibility is that something is actually felt in the body during the use of OM. There may be a vibration felt, or a loss of sensation in the limbs, or a kind of shrinking feeling.

Of the first three possibilities mentioned, notice that the E-space experiences occur during sleep, and not during the actual use of OM. The prior use of OM brings about an enhancement during sleep of both awareness and memory. Sleep is a time when awareness is normally absent or disconnected, and long-term memory recording is not taking place. What OM does, is make the aware soul more connected to the mind during sleep than it normally is. OM also turns on the memory recorder, so to speak.

If we are going to have any E-space perceptions at all, the best time to have them is when we are asleep. When asleep, our bodies have their lowest need for the services of our minds. If part of the mind were to wander off and leave the body alone, then hopefully the body wouldn’t miss it. However, in spite of this opportunity to perceive E-space, the normal human doesn’t. The reason is that any remembered E-space experience could distract the mind from its normal duties to the body during waking hours. However, it seems that Gaia has put a deliberate loophole into the overall human mechanism.

When Gaia designed the human, curiosity was included. However, this presented a problem. An intelligent, curious human is inevitably going to ask questions about his own existence. What am I? Who am I? Where did I come from? Where am I going? And so on. These questions can cause frustration if the human can’t answer them. To be answered correctly, a human needs to know something about E-space. However, E-space perception is bad for the human, just as it would be for any other animal with an A-space body. This causes a dilemma. The satisfaction of human curiosity requires E-space perception, but any E-space perception distracts the mind from serving the body. Overall, this design problem facing Gaia was resolved in favor of the body. Under normal conditions, there would be no E-space perception. However, it seems that Gaia built in a loophole to the no-perception rule. The loophole is OM, and Gaia probably communicated its use to one or more of the founders of Hinduism.

So as to restrict its use, the results of OM have a high threshold. A single sounding of OM is useless. Instead, it must be repeated a great many times. Making OM available to mankind was an experiment by Gaia. Considering the history of Hinduism and the tremendous corruption of the religion, it is hard to declare the experiment a success. However, the actual evaluation of the experiment is Gaia’s responsibility and not ours.

It was never intended that everyone use OM. Instead, only the most curious would persevere with the method and go on to learn about E-space. These individuals would in turn explain what they had learned to the rest of the people. This way, everyone could satisfy their curiosity about themselves. There would be a handful of teachers, and only they would actually need E-space experience. Everyone else could get the facts secondhand and thus be spared the penalty of E-space perception. This must have been the general game-plan which Gaia had in mind.

The response to OM must be programmed into one or more of the mind pieces. There is nothing intrinsically special about the sound OM. There is no reason to believe that OM has some self-existing importance. Instead, it is only the programming in a mind piece that gives the sound OM its special potential. Gaia selected the sound, and then altered the programming of a mind piece so that it would recognize the sound and treat it as special under certain conditions. The programming change was probably made in a piece that analyzes words as part of the overall language processing. Among other things, the programming change probably checks for correct pronunciation, and counts the number of times the sound is repeated. When the programmed requirements are met, the piece probably sends a signal to whatever other piece controls the organization and activation of the mind during sleep.
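Purely as an illustration of the kind of programming just described, the recognition logic can be sketched as a small program. Everything in the sketch below is hypothetical: the names, the threshold value, and the signaling mechanism are invented here for clarity, and nothing said above specifies them.

# A hypothetical sketch, in Python, of the OM-recognition logic
# described above. Names and the threshold value are invented
# for illustration only.

class OmDetector:
    def __init__(self, threshold=10000):
        # A single sounding of OM is useless; the response has a
        # deliberately high threshold, so many repetitions are needed.
        self.threshold = threshold
        self.count = 0

    def hear(self, syllable):
        # The programming change probably checks for correct
        # pronunciation; anything other than OM is ignored.
        if syllable.strip().lower() != "om":
            return
        self.count += 1
        if self.count >= self.threshold:
            self.signal_sleep_controller()
            self.count = 0

    def signal_sleep_controller(self):
        # Notify the piece that organizes and activates the mind
        # during sleep (a hypothetical interface).
        print("signal: enhance awareness and memory during sleep")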

There is no way of knowing if this programming change was universally made throughout humanity. Therefore, not only must OM be used a lot before there is any indication that it is working, but there is no guarantee for a particular individual that it would ever work. The fact that OM never traveled much beyond India may be because it just won’t work for other peoples, or at least some of them.

At this point we know about OM. We know what it does, and why it was introduced. From such a simple beginning—the use of OM to learn about E-space—Hinduism, over the centuries, has undergone a shocking corruption. It is worthwhile to briefly review how and why Hinduism changed into the monstrosity it is today.

The major cause of the religion’s corruption was the needs of the people who made their living from the religion, or who hoped to make their living from it. For the few at the very top, there is a normal resistance against any change to a religion. These people have already gained the pinnacle of material rewards which the religion has to offer, and they do not want any changes that may threaten their security. Although this is the attitude at the top, there is often an opposite attitude at the bottom. Those members at the bottom of a religious organization, normally hope to reach the top someday. Of course, only a few can make it. One strategy to reach the top is to play ball and support completely the conservative dictates from the top. However, the problem with this strategy is that everyone else is doing it too, so there is no real advantage. An alternative strategy is to be an innovator. This is a high-risk strategy that usually fails but occasionally pays off big.

Hinduism’s original method of personal enlightenment was simple: Use OM and experience Brahma (E-space) for yourself. In addition to using OM, one should also learn from an experienced teacher or guru. Those who don’t want to experience Brahma directly can just take the guru’s word for it. The scholarly types can read the writings of the gurus. For the great majority of people who can’t be bothered too much with religion, the religious organization provides temples, priests, rituals, festivals, and a watered-down Hinduism which ordinary people without any experience of Brahma could digest and absorb. A good example of watered-down Hinduism is that Brahma, which is clearly E-space based on a reading of the Upanishads, was very early in the history of Hinduism turned into a human-like god which ordinary people could understand and relate to. However, the watered-down Hinduism presented to the masses is not a corruption of Hinduism. Because knowledge and experience of Brahma is inherently impractical, one can’t expect the masses to take a great interest in it, or be overly concerned with the truthfulness of the watered-down version they receive. As long as the version of the religion for the masses meets their limited curiosity about themselves, then they are satisfied. It would actually do them a disservice to try to force on them a more accurate teaching which they don’t want or need.

The real corruption of Hinduism has been in the means to personal enlightenment. Since Hinduism’s founding, there has been an endless parade of would-be innovators on the OM method. The monstrous yoga practices of today are testimony to this. Perhaps the best-known innovator was Patanjali, who wrote a book about yoga. The book is over two thousand years old. Patanjali’s yoga has eight parts to it. These are righteous living and thinking, daily performance of religious rituals, correct posture during meditation, correct breathing during meditation, correct subjugation of the senses during meditation, correct concentration during meditation, correct timing of meditation, and correct emotional state (preferably ecstatic) during meditation. Notice all the additions Patanjali makes to the original method of just repeating OM. Patanjali’s yoga is called Raja yoga and is the principal yoga taught in India today. Besides Raja yoga, there is a grotesque variant called Hatha yoga. Hatha yoga puts extra emphasis on breathing control, strange postures, and the holding of one’s breath. Hatha yoga goes to dangerous extremes and sometimes results in death.

With Raja yoga, the use of OM recedes into the background. For many practitioners, the syllable is not used at all, or if it is used, then it is used in combination with other sounds. The actual goal of Raja yoga, enlightenment, is poorly understood by its practitioners. They don’t know what real enlightenment about Brahma is, or that enlightenment comes during sleep and not during actual meditation. The poor fools spend part of their lives chasing after an enlightenment they will never have, because they are using the wrong method. The gurus who teach Raja yoga are the blind leading the blind. The gurus often lie and claim that enlightenment will come and it will be wonderful, a truly beautiful experience, and so on. Many gurus, perhaps all, claim to have experienced enlightenment themselves. They all agree on what a great and beautiful experience it is. The gurus were once hoodwinked students themselves. However, as they got older they learned how to play the game and live well from the hopes of a new crop of students. To help attract students, they must, of course, promise great things. It is actually in the guru’s self-interest to keep the student on the hook for as long as possible. If the student never gains enlightenment, then that’s just fine with the guru.

In summary, one could say that enlightenment methods were developed that would serve the interests of the teacher, instead of the student. The original OM method was not a good method when considered from the standpoint of a teacher that needs to earn a living. With hindsight, one could say it was inevitable that a simple, straightforward method would be replaced by a complex method that tortures the student and wastes his time.

Although there is no guarantee that the original OM method will actually work for a particular individual, it is a reasonable certainty that any made-up method never will. The correct teaching to a seeker of Brahma would be this: Try the original OM method, and if after a few weeks there are no results, then just assume your mind piece lacks the programming for it, and therefore restrict your seeking to secondhand methods, such as the reading of books and the hearing of teachers.

As far as the repetition of sounds is concerned, there is only the one sound, OM, which has the potential to open up E-space perception during sleep. The reason for this, as already stated, is because the sound is explicitly programmed for. However, the success of OM has spawned countless attempts over the centuries to introduce new mantras, as they are called. All of these mantras are worthless time-wasters. Some of the better known mantras are: om mani padme hum, which means, om: the jewel in the lotus; om tat sat, which means, om: that is; soham, which means, he is I; and hansah, which means, I am he. Some gurus like to give secret and personal mantras to their students. The Transcendental Meditation fad, which swept the US in the 1970s, was based on personalized mantras. A different approach to mantras is the singing mantra used by the members of the Hare Krishna movement. This sect also had its high-point in the US during the 1970s. Its members wandered city streets dressed in orange robes with their heads shaved. These devotees would bounce up and down like pogo sticks and chant-sing their mantra: hare krishna, hare krishna, krishna krishna, hare hare, hare rama, hare rama, rama rama, hare hare.

Before one grieves too much for all these misled students and seekers, it is worth remembering that direct perception of E-space has no practical value. The most it can do is satisfy curiosity. The worst it can do is something very horrible called kundalini. Both the OM method, and meditation in general, can, after long use, cause a devastating psychic injury known as kundalini. This injury happens during the actual meditation, and not during sleep. In a nutshell, the cause of the injury is too much meditation. However, no one knows exactly why the injury happens. Perhaps it is a programmed response to put a stop to excessive meditation, or perhaps it is a true injury to the E-space body analogous to an athlete pulling a muscle or having cramps from too much exercise. Regardless of the true reason for the kundalini injury, it is something to be avoided.

The details of the kundalini injury are these: At some point during meditation, without any warning, there will be a strong sensation at the spine in the lower back. There will then be a sensation of something pushing up the spine from the point of the original sensation. How far up the spine the sensation gets is variable, and also somewhat dependent on what the person does while it is happening. Aside from these strange sensations, there is no pain, yet. The onset of the pain is also variable, but it follows the kundalini injury quickly, within a day or two. The pain of the kundalini injury is always the same. It is a burning sensation across the back, and the pain may also cover other parts of the body such as the head. The pain is very real, and sometimes intense. The pain may come and go over a period of months and eventually fade away, or it may burn for years without relief.

The common reaction to the kundalini injury is bewilderment about what is happening. Meditation is stopped once the pain has started, and it may be permanently discontinued. The poor sufferer is helpless. The kundalini injury is not a physical injury (although the pain signals are probably carried by nerves), and trips to the doctor, and medications, are of limited value. As the pain continues month after month, the typical sufferer develops a strong aversion for meditation and regrets having meditated. The Indian Gopi Krishna suffered the kundalini injury in December 1937, at the age of thirty-four. He had a habit of meditating for about three hours every morning, and he did this for seventeen years. He apparently did not use OM, but instead would just concentrate on a spot centered on his forehead. He was a practitioner of Raja yoga and apparently had no E-space experiences. In his case, the sensation rose all the way up his spine and into his head. The pain he would suffer lasted decades. The Indian Krishnamurti, who had been groomed as the World Teacher of The Theosophical Society, suffered the kundalini injury at the age of twenty-seven, during August 1922. He had been meditating. His suffering lasted several years and the pain would come and go. In one of his letters of 1925, Krishnamurti wrote, “I suppose it will stop some day but at present it is rather awful. I can’t do any work etc. It goes on all day and all night now.”[15]

What, one may wonder, is the reaction of the gurus to this kundalini? They, of course, talk it up as a wonderful experience. There are even liars that will say they had a beautiful kundalini experience and are now enlightened because of it. There is also an old but elaborate humbug philosophy which has been developed around the kundalini injury. However, in spite of all this, there is no evidence that the kundalini injury is anything but a painful injury which can be avoided by not meditating in the first place.


footnotes

[6] Hume, Robert (1977) The Thirteen Principal Upanishads, 2nd ed. Oxford University Press, Oxford. pp. 348–349.

[7] Ibid., p. 391.

[8] Ibid., p. 372.

[9] Ibid., p. 396.

[10] Ibid., p. 438.

[11] Ibid.

[12] Ibid.

[13] Ibid., p. 439.

[14] Ibid., p. 441.

[15] Lutyens, Mary (1983) Krishnamurti, The Years of Awakening. Avon Books, New York. p. 216.


Chapter 7: Projections into E-space

The Hindus have OM, and by its proper use they can sometimes have E-space perceptions. Offhand, one might think Hinduism would have abundant records about E-space as experienced by its gurus. However, because of the corruption of the religion, this is not the case. In fact, the writings of the Hindus are a very poor source for E-space information. Part of the problem is a deliberate secrecy and obscurity, but the biggest reason is that very few Hindus have any E-space perceptions. This is due to the widespread abandonment more than two thousand years ago of the proper OM method, as was explained in the previous chapter.

Fortunately, the Hindus and their poor records are not needed. Instead, there is an abundance of good records that have been written in Europe and the United States during the 20th century. Many people have had isolated experiences of E-space, and reports of these experiences have sometimes been collected and published by researchers. However, the best records are the handful of books that have been written by individuals who have had a large number of personal E-space experiences. The habitual projectors, as we may call them because of their ability to be self-aware and remember their experiences while projected away from their A-space bodies, had their ability to project thrust upon them. Instead of being seekers, they just started having spontaneous projection experiences.

Such spontaneous projectors are rare, but there are a few around, and occasionally one of them writes a book. The typical book has some personal history, and a description of different E-space experiences. The book will also have some attempt at analysis by the projector. As a rule, the analysis is poor and can be safely ignored. Although the projector who writes a book is bound to be intelligent, this alone does not qualify him as an able commentator on his own experiences. Thus, one should not blindly accept the opinions or conclusions of these projectors. The value of their writings lies only with the detailed descriptions of their experiences, not with their analyses of their experiences.

In 1920, the personal account of Hugh Calloway, writing under the pseudonym Oliver Fox, was published in the British journal Occult Review. About two decades later, Fox, as we shall call him, wrote a book which recounts his experiences more fully. Fox was a lucid dreamer, which means he would sometimes become conscious or self-aware during his dreams. Many dreams, whether remembered or not, take place in one’s own head and are not projection experiences. However, sometimes, perhaps often, a dream is an actual projection of part of the mind away from the A-space body. Most people have dreams of this kind, but they do not become self-aware during them. What made Fox different was that he often became self-aware.

Fox had his first lucid dream at the age of sixteen in 1902. He dreamed he was standing outside his home. In the dream, the sun was rising and the nearby ocean was visible, along with trees and nearby buildings. Fox walked towards his home and looked down at the stone-covered walkway. Although similar, the walkway in the dream was not exactly like the real-life, A-space walkway it imitated. During the dream, Fox noticed this difference and wondered about it. The explanation that he was dreaming occurred to him, and at that point he became lucid. His dream ended shortly afterwards.

After his first lucid dream, lucid dreaming became a frequent occurrence. He would be asleep and dreaming, and at some point he would become lucid. Fox noted two interesting things about his lucid dreams. Firstly, he could move about within the dream, such as by gliding across an apparent surface. Secondly, he found that the substance that formed the objects in the dream, could be molded by thought.

His lucid dreams were typically short, and Fox would do his best to prolong them. Oddly enough, he claims he would feel a pain in his dream-head and that this signaled the need to return to his body. As the initially weak pain grew, he would then experience a dual-perception of both his dream sensations and his body’s sensations. A sort of tug-of-war resulted, with the body normally winning. These experiences had all taken place during the first year or so after the first lucid-dream experience at age sixteen. Fox was still a teenager.

Fox had wondered what would happen if he resisted the body’s return call. Most people who have lucid dreams never report having such a choice. At some point the lucid dream just ends for them and they awake. Unlike most people, Fox had a choice and decided to experiment. About a year after his first lucid dream, he became lucid in another of his apparently typical, walk-around-the-town dreams. He had the warning pain and ignored it. The dual-perception occurred and he successfully willed to retain the dream perception. Next there was a growing pain in his dream-head, which peaked and then disappeared. At this point Fox was free to continue his dream.

As Fox’s lucid dream continued, he soon wanted to awaken, but nothing happened; his dream continued. He then became fearful and tried to concentrate on returning to his body. Suddenly, he was back in his body, but found himself paralyzed. His bodily senses were working, but he was unable to make any motor movements. Fortunately, this condition did not last long and he was soon able to move again. However, immediately afterwards he was queasy, and he felt sick for three days. This experience deterred him for a while, but a few weeks later he ignored the return call again during a lucid dream, and the same pattern resulted. He says the sickness was less this second time, and that memory of the dream was lost. After this second experience, Fox no longer fought against the return call.

Fox remarks that years later he learned that if he had just relaxed and fallen asleep when he was paralyzed in his body, then the consequent sickness would not have occurred, and the body would have been fine. This sounds reasonable. The fact that the body was ultimately able to move, shows that there was no real damage. The delay in motor-control reconnection may have been due to some loss of immediate-reconnect ability, because Fox had stayed in his dream too long. Perhaps the initial warning pain occurs in the first place because immediate-reconnect ability is about to be lost. It may be that if certain mind pieces are away from the brain for too long, then they will need some time to reestablish their connections with all the neuronic input and output lines they must connect to. Putting pressure on the reconnection process might well be the cause of the sickness which Fox experienced.

Fox was still a teenager, and a student at a technical college. He remembers his school-days fondly, and like all older people, longs for his lost youth. The world was full of promise, and Fox remarks that his lucid dreaming made him feel special, like an explorer discovering new territory for mankind. During his teens and twenties, Fox continued having lucid dreams, and he noticed a pattern. His lucid dreams often failed to reach the warning-pain stage, because he would do something that would cut them short and cause him to awake. Fox gives some examples of what he means. He mentions ordering a meal in a restaurant and then eating it; trying to taste the food would cause him to awake. While watching a theatrical play, a growing interest in the play would cause him to awake. If Fox encountered an attractive woman, he could converse with her, but as soon as Fox thought of an embrace or such, he would awake. In an effort to prolong a lucid dream, Fox suggests the following projectionist’s motto: “I may look, but I must not get too interested—let alone touch!”[16]

For anyone who has had lucid dreams, Fox’s experience of premature-ending will sound familiar. The most common end to a lucid dream is when the dreamer tries to react to the dream in some personal way. Although the reason for this was perplexing to Fox, a surprisingly clear explanation is available. Recall that in the “Brain” chapter, we learned that the human mind is in many separate pieces. It was suggested that one of the several advantages of having the mind in pieces, is that the mind could be split in two, with one mind-part remaining with the body, and the other mind-part leaving the body. This must be what’s happening during a lucid dream. The mind of a lucid dreamer is not the complete mind available to that person once he is awake. It is this absence of certain pieces that causes the end of the lucid dream. As soon as the dreamer tries to think or do something that requires one or more of the missing pieces, then the two mind parts are automatically rejoined so as to fulfill the functional request. This rejoining, of course, means a return to the A-space body where the other mind-part remains, so the dreamer awakes.
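To make the rejoining rule concrete, here is a minimal sketch of the split mind, assuming for illustration that the mind can be modeled as a set of named pieces. The piece names and the set representation are inventions of this sketch; only the rule itself, that requesting a function whose pieces stayed behind forces the two mind-parts back together, comes from the discussion above.

# A minimal Python model of the split mind during a lucid dream.
# Piece names are hypothetical; only the rejoining rule is from the text.

FULL_MIND = {"vision", "sound", "speech", "taste", "touch",
             "smell", "reading", "long-term-memory"}

# Pieces assumed to leave with the dreamer; taste, touch, smell,
# and reading ability stay behind with the body.
projected = {"vision", "sound", "speech", "long-term-memory"}

def attempt(pieces_needed):
    # Try to use a mental function during the lucid dream.
    if pieces_needed <= projected:
        return "dream continues"
    # A missing piece was requested: the two mind-parts rejoin
    # automatically, so the dreamer awakes.
    return "dreamer awakes"

print(attempt({"vision", "sound"}))  # dream continues
print(attempt({"taste"}))            # dreamer awakes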

Some of the mind pieces that accompany the soul of the lucid dreamer can be identified. Several of the vision-processing pieces, and several of the sound-processing pieces, are present, although not necessarily all of them. And some sort of long-term memory is present, since the lucid-dream experience can sometimes be remembered for years. However, the long-term memory may not be a separate piece. Instead, it could be just a function in one or more of the vision and sound pieces that are present.

The vision and sound pieces are certainly present because seeing and hearing are the two senses of the lucid dreamer that work just as well in the dream as they do in the body. The typical lucid dreamer sees clearly in color and has no problem hearing sounds and words. He can also speak. Although conversation during a lucid dream is infrequent, it does happen. (The larger question of what sight and sound in E-space really is, will be covered later in this chapter.) In contrast to seeing and hearing, the other senses are noticeably absent. The lucid dreamer has no sense of taste, touch, or smell. Any attempt to use these senses during a lucid dream will result in the automatic rejoining of the split mind. Also apparently absent are one or more of the pieces needed to understand writing. Fox remarks on how he always had trouble reading any writing he might encounter. He could see the writing, and he knew it was writing, but he couldn’t read it, except occasionally with difficulty. Fox says other people told him that they had the same problem reading dream-writings.

Sooner or later, lucid dreamers wonder whether their dreams are all in their heads or not. Are lucid dreams actually adventures in a dream world that is truly external to the body? Typically, the lucid dreamer comes to the conclusion that the seeming adventures in an external dream world are just that. One obvious clue for the lucid dreamer which supports the dream-world interpretation, is the frequent motion or movement of the dreamer within the dream. Instead of being an idle spectator just watching the world go by, the lucid dreamer is frequently in motion. The dreamer may be moving slowly by walking or floating, or he may be moving more quickly through the dream-scenery by a kind of flying. However, the most spectacular means of motion for the lucid dreamer is the sudden acceleration to a great speed. The dreamer may be at either a relative standstill, or flying, when the sudden acceleration starts. As the acceleration quickly builds, the sight will go black, and often there will be a loss of awareness. The next thing the lucid dreamer knows, is that he is somewhere else with his vision restored. The reasonable explanation is that the sudden acceleration happens when a large distance has to be traveled. There is reason to believe that the projected mind-part can accelerate to a top speed of several hundred kilometers per second in a brief time of only a few seconds. The evidence for such a great acceleration comes from sources other than Fox, where transcontinental and transoceanic distances have been traveled by the lucid dreamer.
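To put rough numbers on this claim: taking a top speed of 300 kilometers per second reached in 3 seconds (illustrative values chosen from within the ranges stated above, not figures from any report), the implied acceleration and travel time work out as follows.

# Rough arithmetic on the acceleration claim. The specific numbers
# (300 km/s top speed, 3 s ramp-up, a 5,000 km transcontinental trip)
# are illustrative values only.

top_speed = 300_000.0   # m/s, "several hundred kilometers per second"
ramp_time = 3.0         # s, "a few seconds"
trip = 5_000_000.0      # m, a transcontinental distance

acceleration = top_speed / ramp_time   # 100,000 m/s^2
g_force = acceleration / 9.81          # roughly 10,000 times Earth gravity
travel_time = trip / top_speed         # about 17 s, ignoring ramp-up

print(round(acceleration), round(g_force), round(travel_time))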

Although the motion of the lucid dreamer is an impressive clue that the dream-world is real, the most decisive evidence comes from dream encounters with people known to the dreamer. These dream encounters are sometimes independently confirmed when the awakened dreamer later talks with the people in question. For example, Fox tells the following story: He had been discussing dreams with two friends and the three of them agreed to try to meet together that night in their dreams. Fox remembered meeting only one of the friends in a dream that night. The next day the three compared experiences. The friend whom Fox met in the dream also recalled meeting Fox. Both Fox and this friend agreed that they never saw the third friend. This third friend in turn claimed to have no memory of his dreams that night.

For Fox himself, the most personally convincing evidence that the dream-world is real involved a girlfriend of his in the summer of 1905, when he was nineteen. Fox had talked about his lucid-dream experiences with the girl, and her attitude was that such things were wicked. Fox tried to overcome her objections by claiming she was ignorant and that he could teach her. Her reaction was that she already knew about such things, and could actually appear in his room at night if she wanted to. He doubted her claim and she became determined to prove it. That night the girl made good on it. Fox had what he calls a False Awakening. This is where, sometime during sleep, Fox would become self-aware in close proximity to his body. He was not really awake, but he thought he was. In this condition he still had his dream-vision and hearing.

While Fox was in this condition, his girlfriend made a sudden, dazzling appearance in his bedroom. She appeared fully formed and wearing a nightdress. A large ovoid of colorful light surrounded her. She said nothing, but looked about the room. After a while, Fox tried to speak to her, but she disappeared and at the same time Fox awoke.

Fox’s experience probably terminated because his attempt to speak to his girlfriend required a joining of his separated mind-parts. If Fox had kept quiet, the experience probably would have continued longer. The following day Fox met with his girlfriend to compare experiences. She greeted him enthusiastically with the news of her success. Without having been in his room before, she successfully described both its appearance and contents in sufficient detail to convince Fox of the reality of her visit. Fox remarks that his girlfriend said his eyes were open during the visit.

In describing his projections, Fox often shows an apparent confusion between dream-world objects, and physical objects. For example, he seems to think that his girlfriend saw his physical bedroom, and that is why he makes the remark about her saying that she saw his eyes open during the visit. He is quite sure his physical eyes were closed. He finally concludes that she probably saw the open eyes of his dream appearance. Although Fox seems unsure, it can be firmly stated that everything seen during a lucid dream is an E-space object. When Fox’s girlfriend visited his room, she was having a lucid dream, and she saw only an E-space replica of his room. The replica occupied the same macroscopic space as the physical room, but it was still just a replica.

It is very common for E-space objects to duplicate the shape and coloring of physical objects. The appearances of other people seen during a lucid dream are just imitations of the physical appearances of those people. When Fox’s girlfriend made her appearance that night, the only thing in that room that was really her, was a mind-part that probably occupied a volume of less than 100 cubic centimeters. If Fox had seen only the real her that was present, then he would have seen a small, oddly-shaped object that he would never have recognized as his girlfriend. Instead of being awed by the whole experience, he would have been repelled.

A valid question is: what causes the E-space material to assume these shapes and colorings which imitate physical objects? It would be a complete mistake to say Gaia, or some other superbeing, did it. There is no watching intelligence involved, except the people themselves. At least some of the material in E-space can be molded into shape by just thinking about it. The actual mechanisms or forces are unknown, but this does happen. The only thing that shaped, colored, and clothed Fox’s girlfriend during her appearance, was the girlfriend herself. In other words, the mind-part of the girlfriend that was present in the room constructed the appearance which Fox saw.

A general rule would be that each mind-part, when away from its body, is responsible for its own appearance. This accounts for the appearance of individuals, but what about inanimate objects such as Fox’s replica room? The replica room was probably part of a larger, replica house or building. It must be that these replicas are made by the people that are associated with the structures in question. The house or building Fox was living in, may have been originally replicated in E-space by the minds of the men who built the original structure. Once established, the replica could be altered or modified as needed by the inhabitants of the structure. For example, the details of Fox’s room were probably done by Fox himself. Since he was anticipating a visit from his girlfriend that night, it is likely that Fox unconsciously made sure his replica room closely matched his physical room. The actual manipulation of the E-space material is done by one or more of the mind pieces and is normally an unconscious procedure. However, sometimes a lucid dreamer consciously orders a change in some nearby E-space object, and actually sees the change happen.

Once an E-space replica is made, it will stay put and retain its form until some E-space force is exerted to change or destroy it. It would be a mistake to think that an E-space replica-structure is closely tied to, and must imitate exactly, the A-space object which it is based on. For example, some occultists make the mistake of thinking that the physical matter itself attracts and holds the so-called “astral” matter. On the surface this seems like a reasonable belief, but it is wrong. For one thing, the replicas in E-space are often either totally absent, or noticeably different in appearance from the physical objects they represent. Another consideration is that the copycat E-space object often reproduces the surface colorings and any painted or printed details of the imitated physical object. If the physical object itself were attracting and holding the E-space material, and thus creating a parallel object in E-space, then one would expect the parallel E-space object to have a uniform surface without reproducing the coloring and any painted or printed detail. After all, the coloring of a physical object is due to the interplay of light with the surface of the object, and with how the reflected light is actually processed in the eye and mind of the observer. How is the supposedly attracted E-space material supposed to detect and correctly interpret all this? There is no reason to believe that it can.

Actually, the typical replica in E-space will look very different from its physical model, if one could see pictures of each in a side-by-side comparison. However, the lucid dreamer normally doesn’t notice all the differences. This failure to notice differences is not unusual. After all, it happens all the time with strictly physical objects. For example, an object can be moved to a different place in a room, or removed altogether, and a person will often not notice upon reentering the room. A good friend can get a haircut, or shave a beard, or somehow quite change his outward appearance, and yet this may go unnoticed, or only vaguely noticed. Overall, the visual part of our mind does not require an exact match for an object to be recognized. Our minds are quite tolerant of appearance changes.

If Fox’s home had suddenly been blown up or knocked down, the associated E-space home would still stand as it was. It wouldn’t be blown apart or knocked down with the physical structure. Instead, it would be up to the individuals associated with the destroyed home to make corresponding changes to the associated E-space home. However, there is no law that says this must be done. The E-space home could be left standing.

When moving a physical object that has an associated E-space object occupying the same macroscopic space, the associated E-space object is not going to move with the physical object, unless it is actually moved along with it by some E-space force. It may be that the E-space object will be repositioned automatically by a mind piece without any communication to the awareness. It may also be that a physical object moved by direct bodily contact will have any associated E-space object similarly moved by contact with the E-space cell controllers that fill the body. However, these possibilities are just speculation, and there is no evidence for them. It may actually be very common for associated E-space objects to be at some distance from the physical objects they imitate. There is no law that says they must move together in lock step, or always occupy the same macroscopic space. For example, an experiment often reported by lucid dreamers is that they successfully move some object which they think corresponds to a familiar physical object, but once they are awake and check the physical object, they always find it unmoved.

The main thing about E-space objects, is that they are both made and moved about by E-space, not by A-space. Apart from the effect of gravity, which keeps both the A-space Earth and E-space Earth spherical and together, any similarity between an A-space object and an E-space object, in either appearance or position, is solely due to E-space forces which are intelligently guided to mimic things in A-space. We humans are strongly impressed by our A-space environment. We then in turn impress these A-space perceptions on the surrounding E-space. Therefore, E-space is full of objects that resemble A-space objects. Probably wherever humans typically are, or go to, in the E-space world, they alter the surrounding E-space to look like their familiar A-space world.

Fox mentions the existence in the dream-world, which is E-space, of a whole city, an imitation London, which he visited and explored. Along with imitation buildings which looked normal, there were also buildings and monuments which Fox knew had no equivalents in the real city of London. Fox concludes by saying that in his experience, repeated trips to the same dream-world town or city will show the same buildings and monuments, including those that have no counterpart in the real town or city. These observations show two things. Firstly, that the imitation city is not molded by the physical buildings themselves, because the imitation city has some structures which have no physical equivalent. Secondly, once made, a large E-space object, such as an imitation city, will persist more-or-less intact over time. It takes energy to make an E-space object. Once made, the object will retain its form without additional energy. However, to alter or destroy an E-space object, more energy is needed. Man-made E-space objects tend to accumulate. They get made, but no one wants to waste energy destroying them without good reason.

The fact that one can see and hear in a lucid dream, raises the question of just how this seeing and hearing happens. In the physical world, seeing relies on light. Different frequencies of light are variously absorbed, transmitted, or reflected, by the different physical objects the light strikes. Without light, there would be no sight in A-space. One would naturally wonder if E-space has a similar kind of light, and that this would be how objects are seen in E-space. It may be that E-space does have something like light, but at the very least there are some big differences. For one thing, there never seems to be any light source in E-space. The physical Earth has a sun, but there is no sun in E-space. In E-space, such as during a lucid dream, everything seems uniformly lit. There is never any sign of a light source, whether natural like the sun, or artificial like a street lamp. There is also a total absence of shadow. There is absolutely nothing to indicate external lighting. Instead, if E-space does have something like light, it must be that the E-space objects themselves are self-luminous and provide their own lighting. If E-space objects were to continuously radiate their own light, and at the same time completely absorb any light striking them from any other E-space object, then this would explain what is actually seen. E-space objects never show any signs of transparency or reflectivity. There never seems to be any E-space glass or mirrors. There is never any indication that the lighting of one E-space object has any influence on the lighting of any other E-space object.
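In computer-graphics terms (an analogy introduced here, not drawn from the book), the optics just described amount to purely emissive shading: each surface shows only its own emitted color, with no terms for external lighting, shadows, reflection, or transparency. A minimal sketch, with invented names and values:

# Analogy only: the described E-space optics as a purely emissive
# shading model. A-space shading depends on external lights;
# E-space shading does not.

def shade_a_space(surface, lights):
    # Physical appearance: reflected fractions of external light.
    return sum(surface["reflectance"] * light for light in lights)

def shade_e_space(surface):
    # Self-luminous and fully absorbing: the visible color is the
    # object's own emission, uninfluenced by any other object.
    return surface["emission"]

wall = {"reflectance": 0.5, "emission": 0.8}
print(shade_a_space(wall, [1.0, 0.3]))  # varies with the lighting: 0.65
print(shade_e_space(wall))              # always the same: 0.8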

In A-space, sounds are transmitted as vibrations in some tangible medium. Air is the sound medium we are most familiar with. Solid objects, and liquids such as water, can also transmit sounds. For example, the water-living dolphin relies more heavily on its sense of hearing than on its sense of sight. It may be that the sounds in E-space, such as hearing someone else talking during a lucid dream, may also be transmitted as vibrations in some medium. However, there is no evidence for what this medium might be. There is no indication that there is any sort of air in E-space. Perhaps there is some sort of air-like medium present, but no one ever feels it. There never seems to be any wind. However, the lucid dreamer has no sense of touch, so even if there were an air-like medium in E-space, it would probably go undetected.

One fact that speaks against an air-like medium, is the absence of any perceived Doppler-shift in a sound. In spite of the frequent motion of the lucid dreamer, no one ever reports anything that sounds like a Doppler-shift happening. Another fact about sounds in E-space, is that they never seem to be obstructed. Whenever the lucid dreamer hears something, it is always loud and clear. There is never the sense of some sound-source being far away, or hidden behind an E-space object, or otherwise diminished or obstructed. There is also a definite absence of noise in E-space. It seems one doesn’t hear things in E-space unless one is meant to hear them. Overall, it doesn’t seem that there is a common sound-conducting medium in E-space. Instead, it seems that sounds are communicated directly to the intended receiver by some unknown means.

An interesting question concerning the two principal senses of seeing and hearing, is to ask where they developed first. Did they first occur in A-space life, and then were copied in E-space, or is it the other way around? Since our planetary A-space life is mostly the result of intelligent design by Gaia, it would seem Gaia had to already understand seeing and hearing before it could design for it. That the capacity for seeing and hearing must have been preexistent in E-space, is understandable when one considers just what these two senses are. Time and space are two fundamentals we all understand. Everyone directly experiences both time and space. Quite simply, hearing is our principal time sense, and seeing is our principal space sense. Hearing is the variation of a signal over time, while seeing is the variation of a signal over space. Although the two exist together, time and space are mutually exclusive, just as the two senses, hearing and seeing, are mutually exclusive.
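The distinction can be pictured with the usual computational representation of signals (again an illustration of mine, not the author's): a sound is a one-dimensional array indexed by time, while an image is a two-dimensional array indexed by spatial position.

# Hearing as a time-indexed signal, seeing as a space-indexed signal.
# The values here are arbitrary placeholders.

sound = [0.0, 0.4, 0.9, 0.4, 0.0]   # amplitude at successive instants
image = [[0, 1, 0],                 # brightness at (row, column)
         [1, 1, 1],
         [0, 1, 0]]

amplitude_then = sound[2]       # one index: a moment in time
brightness_there = image[1][2]  # two indices: a position in space
print(amplitude_then, brightness_there)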

The mental processing of sight and sound is the same whether the signal source is in A-space, or E-space. This is because both time and space are the same whether in A-space or E-space. Although the mental processing is the same, the actual collection of the signals in the first place, isn’t. The signal collection is done by sensors, and the A-space sensors are the eye and ear for sight and sound respectively. However, the A-space sensors won’t work for E-space signals. To receive E-space signals for sight and sound, there must be a separate set of sensors. The design of these sensors and how they work is unknown. However, based on the preceding discussion of sight and sound in E-space, it seems likely that the E-space sensors are radically different from their A-space equivalents. For example, the E-space sight sensor probably has no lens, and the E-space sound sensor probably has no drum in contact with a vibrating medium.

The E-space sight sensor is probably an integral part of one of the mind pieces. The same can be said for the E-space sound sensor. As a rule, there is no conscious perception from either of these sensors during the time one is awake. Regarding the other senses, the two A-space senses of smell and taste have no equivalent in E-space. Both these senses translate contact with certain molecules into pleasant or unpleasant sensations. However, E-space life does not breathe and does not eat, and there seems to be no atmosphere, so there is no need for such senses in E-space. The sense of touch is an important A-space sense due to the fragility of the A-space body. Although the lucid dreamer has no touch because he has no real body, there is a different kind of projection where the projector has an E-space body complete with a limited sense of touch.

Overall, if life had never developed on the A-space Earth, then life on the E-space Earth would still see and hear, because these are the principal senses of space and time. However, there would definitely be no taste and smell, just as those senses are absent now, but the sense of touch would be ambiguous. Perhaps certain E-space life-forms would have a definite E-space body, and a sense of touch, but there is no way to really know about this.

Besides having lucid dreams, Fox also had what may be called E-body experiences. The starting point for Fox was his False Awakening. This would occur during sleep and often be preceded by a dream, sometimes lucid. Fox would become conscious in what seemed to be his physical body. However, his mind-part wasn’t really reconnected to the physical body in normal fashion. Instead, the mind-part would apparently be connected to an E-body which exists in the same macroscopic space as the physical body. Fox was already having the False Awakening when he was nineteen. A few years later, in July 1908 when he was twenty-two, Fox made his first transition directly from being awake to a projection experience, without any intervening sleep. Fox was apparently tired and was lying down on a sofa with his eyes closed. His dream-sight suddenly began operating, and Fox, willing himself out of his body, had what seems, from his description, to be a lucid dream. About a year later, Fox had a similar experience. However, it wasn’t until July 1912, when Fox was twenty-six, that he had what seems to be his first E-body experience from a wakened state. In this case, once he realized he was in a condition similar to the False Awakening, he just sat up in bed and then got off the bed and walked around in his room, only it wasn’t his physical body he was in. As soon as he tried to leave his room, the experience ended as he was quickly reconnected to his physical body.

Ultimately, Fox had two ways of having an E-body projection. In the first way, Fox had to wait until he found himself in the False Awakening condition during sleep. Once in this condition, he was sometimes able to have an E-body projection. The second way Fox had, was to have something similar to the False Awakening condition while he was awake. Once in this condition, he was sometimes able to have an E-body projection, especially by actively willing it.

Fox remarks that during his early experiences of the False Awakening, he would sometimes feel a hand pressing against him, or grabbing hold of him. In one case, he was given a painful bear hug. Fox found these experiences frightening. Fox was still in his body on these occasions as he had not yet learned that once he was in the False Awakening condition, he could willfully leave his physical body behind and have either a lucid dream or an E-body projection. However, what these mysterious hands show us, is that from the outset of the False Awakening, Fox’s mind-part is connected to his E-body and not to his physical body. The hands must have been part of a complete, human E-body. The only way Fox could feel the touch from E-body hands, would be to have his mind-part connected to his own E-body. When all of one’s mind is properly connected to the physical body, then the only sense of touch one has is what comes from the physical sensory nerves. These nerves are completely A-space, and they signal only A-space happenings. Even if one’s E-space body were given a bear hug, if one is awake when it happens, then it would never be felt.

Overall, Fox was primarily a lucid dreamer. His E-body experiences seem to have been very infrequent. A much better source of information about the human E-body, is Sylvan Muldoon, whose story will be covered shortly. The E-body can vary in its apparent mass and substantiality, and unlike Muldoon, Fox never seems to have had an E-body experience in which his E-body felt substantial and blocked off the normal E-space senses of sight and sound. Instead, the few times Fox was projected in his E-body, it always seems to have been a rather flimsy E-body with the E-space senses still functioning.

Fox summarizes his own E-body experiences and compares them with lucid dreaming. Fox says they are more real than lucid dreaming. This is to be expected since the standard of realness is being fully awake. The closer one comes to being like oneself when fully awake, the more real the experience will be judged. Having an E-body with a sense of touch is closer to the waking experience than a lucid dream is, so it should seem more real. In addition, it may be that the number of mind pieces that can leave the physical body is variable, and that more pieces are projected during an E-body experience than during a lucid dream. The more mind pieces one has during a projection, the more real the experience is likely to seem.

Fox states that during his E-body projections, he was ignored and apparently not noticed by the other people he met. The question is: who are these people, and why don’t they respond to Fox’s presence? Although Fox claims to be earthbound during these experiences, he does not actually mean that he is seeing the physical world. Instead, he is just seeing the E-space imitations of physical objects. Fox remarks how he is much more in control during an E-body projection than during a lucid dream. In other words, he was more able to consciously control his experience, and there was a minimum of automatic, unconscious control.

Actually, this conscious control is probably the reason for his going unnoticed by the people he would meet. The other people were probably just ordinary, non-lucid dreamers projected from their bodies. To get their attention, Fox thinks all he has to do is to be seen by them, and this means just standing in front of them. This would be true for everyday waking life, but Fox overlooks the fact that the typical person he sees is just an apparition constructed from E-space material by a small mind-part which is not seen directly. The direction the apparition is facing has no real importance, because the apparition’s eyes are non-functional. To get the attention of the mind-part which is creating the appearance of a whole physical person, it seems that it has to be signaled directly. This signaling is not something one consciously knows how to do. The fact that unconscious control is mostly disabled during the E-body projections, is why Fox couldn’t get the attention of the people (apparitions) he would meet. Fox’s E-body may well have been seen by the mind-parts he encountered, but without the direct signaling, he was not recognized as someone to interact with. Fox’s own self-conscious desire to interact with the apparitions was not adequately communicated to the mind-parts behind them.

Of course, an alternative explanation to Fox’s being ignored by the people he met, is that these people are the appearance in E-space of wide-awake, physical people who are going about their business in town. If this were the case, then it would be obvious why the people ignored him. They would be seeing only the physical world around them while Fox would be seeing them as they appear in the surrounding E-space. Since Fox says he is earthbound during his E-body projections, this explanation is tempting. However, it seems this could never be more than a partial explanation. Based on Fox’s descriptions, there is no way to know if any of the people he met were really wide-awake in A-space. Perhaps some of them were, but there is no definite evidence for it.

Fox was certainly unusual in terms of his self-aware, E-space projections, but he remarks how the memories were fleeting. To counter this, he would often write down an account of his projection as soon as he awoke. In his book, he wonders why these memories aren’t more permanent. Of course, the memory of ordinary dreams is very fleeting too. Occasionally a projection or dream makes an impression on long-term memory, but this is the exception, not the rule. Considered from a practical standpoint, the reason for the fleeting memories is obvious. Such memories are impractical, so it would be a waste to store them for long-term recall.

How can the remembrance of projections and dreams serve the A-space needs of the individual? Our struggle for survival and personal advantage takes place in A-space, not E-space. Memories of E-space experiences are clearly a waste. There may be occasional exceptions, but overall there would be no advantage to the individual if he could recall a projection or dream that took place a week, or a month, or a year ago. Instead of giving an advantage, such long-term recall would create a disadvantage. If the experiences don’t exist as long-term memory, then the individual can’t waste time recalling them. However, if the projections and dreams did exist as long-term memory, then the individual could waste time recalling them and thus disadvantage himself. Natural selection will always work against long-term recall for dreams and projections. In addition, one can say that natural selection will work against projectionists such as Fox.

Not surprisingly, Fox wasted part of his waking life trying to make sense of his projection experiences. He took time to record and write about his experiences. He also discussed them with friends. And he spent time studying occult literature. It seems Fox embraced Theosophy, complete with its Master worship. Fox also wasted his time on automatic writing, which is writing without conscious control. As part of his Theosophical beliefs, Fox thought a master was doing the automatic writing through him. Overall, Fox wasted part of his life on disadvantageous pursuits, all because of his projections. If Fox had been like most people, and never had any projections, then he wouldn’t have wasted part of his life as he did. The time lost could have been used working for material advantage.

At this point we are done with Fox. The next subject of interest is Sylvan Muldoon. Muldoon was born in America in 1903, and spent his life in the Midwest. In November 1927, he sent a letter to Hereward Carrington, who was a well-known writer on occult subjects. Muldoon had read one of Carrington’s books, and wanted to let him know that he, Muldoon, knew a lot more about the projection of the astral body (the E-body) than did the sources which Carrington used in his book. Muldoon gave some particulars and Carrington was so impressed that he wrote Muldoon back and invited him to write a book which he, Carrington, would edit and write an introduction for. Because Carrington was already an established writer on occult subjects, this must have encouraged Muldoon to accept the offer and persevere with writing a book about his own experiences. The end result was The Projection of the Astral Body, published in London in 1929.

Muldoon was even more unusual than Fox. In the general population, lucid dreams are common compared to E-body projections. Muldoon had only E-body projections. He apparently never had a lucid dream. His projected E-body was also much more substantial than in the case of Fox and similar projectors who often have lucid dreams and only occasionally have E-body projections. Carrington knew he had struck gold with Muldoon as far as the E-body was concerned. Muldoon’s very first experience is a treasure-trove of information.

Muldoon was only twelve when it first happened. His mother had taken him along with her to a camp of gathered Spiritualists in Iowa. His mother was interested in Spiritualism. Muldoon slept in a nearby house that night, along with other people from the camp. He had been asleep for several hours when he slowly awoke. At first, he didn’t know where he was, and everything was dark. Eventually he realized he was lying down on the bed, but he couldn’t move. Soon he felt his whole body vibrating, and felt a pulsing pressure in the back of his head. At the same time he had the sensation of floating.

While all these sensations were happening, Muldoon gained his sight and hearing. He then realized that he was floating about a meter above the bed. This was his E-body floating, although he didn’t realize it yet. He still couldn’t move. He continued to float upward, and once it was about two meters above the bed, his rigid E-body was moved upright and placed standing on the floor. Muldoon estimates he was frozen in this standing position for about two minutes. After the two minutes, the E-body became relaxed and Muldoon could now consciously control it.

The first thing Muldoon did, was turn around and look at the bed. He saw himself (his physical body) lying on the bed. He also saw what he calls a cable, extending from between the eyes of his physical body on the bed. This cable ran to the back of his E-body head, which is where he continued to feel some pressure. At the moment, Muldoon was about two meters from his physical body. His E-body was not firmly held down by gravity and it tended to sway back and forth, in spite of Muldoon’s efforts to stabilize it.

Not surprisingly, Muldoon was both bewildered and upset by all this. He thought he had died, so he resolved to let the other people in the house know what had happened to him. He walked up to the door of the room, intending to open it, but he walked right through it. Muldoon then went from one room to another, and tried to wake the people in them, but was unable to. His hands passed through the people he tried to grab and shake. Muldoon remarks that in spite of this inability to make contact with physical objects, he could still see and hear them clearly. Muldoon says that at one point during his movements in the house, he both saw and heard a car that passed by the house. Muldoon also says that he heard a clock strike two, and upon looking at the hands of the clock, verified that it was two.

Muldoon gave up trying to wake the other people in the house, and instead wandered around in the house for about fifteen minutes. At the end of this time, he noticed that the cable in the back of his head was resisting his movements. The resistance increased, and Muldoon soon found himself being pulled backwards toward his physical body, which was still lying on the bed. He lost all conscious control of his E-body and it was automatically repositioned, as before, above his physical body. The E-body then lowered, began vibrating again, and reconnected to the physical body. Upon reconnection, Muldoon felt a sharp pain. The experience was clearly over. Muldoon concludes his story by saying: “I was physically alive again, filled with awe, as amazed as fearful, and I had been conscious throughout the entire occurrence.”[17]

Over the years that followed, up to the time he wrote his book, Muldoon says he had several more projections similar to this first one, where he is conscious from the very beginning of the projection to its very end. In addition, he says he has had several hundred other projections where he was conscious for only part of the time during a projection. Typically, he would become conscious after the E-body had already been separated and moved upright at a distance from the physical body. In all cases, the order of events established by his very first experience was maintained, as far as he could tell. His situation in terms of his sight, hearing, E-body, cable connection, and such, was the same from one experience to the next.

As already stated, it seems Muldoon never had a lucid dream. He had dreams, but not lucid ones. Muldoon actually comes down hard on lucid dreamers and claims that their experiences are not conscious projections. This is an ironic position for him to take, because he elsewhere complains about the difficulty of getting people to believe his own experiences. Muldoon argues that only his kind of E-body projection is possible. Anything else, such as a lucid dream, is confined to either one’s physical or E-body head. Muldoon’s naivete on this point is not unique. Quite the contrary, most of his book is his own faulty analysis of projection in general. He gives only a handful of his own experiences, and yet these accounts are the only thing of any value in the book.

Because Muldoon’s very first experience is also the one he best describes in his book, we will consider it at length. The first question that confronts us is just what is this E-body? Muldoon, of course, doesn’t know. However, the answer is the cell controllers which were explained in the “Development” chapter. Although there may be certain exceptions, every cell in the body has an associated E-space cell controller. The human body has trillions of cells, and therefore trillions of E-space cell controllers. If some fraction of these trillions of controllers were to move apart from the physical body, then this would explain the E-body.

To say that a projectionist’s E-body is always a set of cell controllers drawn from his physical body is an explanation that agrees well with the facts. Let’s consider the facts in detail. Projectionists who have E-body experiences always state that the E-body is a normal part of the everyday human body. They reach this conclusion because they directly experience the E-body pulling out of the physical body, and then at the end of the projection the E-body reenters the physical body. The fact that the E-body both comes from the same space as the physical body, and returns to it, is convincing to the projectionist that it is a normal part of the human body. So far, we know from the earlier consideration of the life sciences that the human being has three components that exist in E-space. These are one self-awareness or soul, many mind pieces, and trillions of cell controllers. Soul, mind pieces, and cell controllers: these are the three components. Of these three, only the cell controllers can explain the E-body.

Some projectionists experience the E-body with different densities. During one projection, the E-body may be quite flimsy, while during another, it may seem much more substantial. Muldoon’s E-body seems to have been consistently dense. Overall, the experience of the projectionists indicates an E-body that can occur in different grades of apparent density, ranging all the way from barely noticeable, to very dense and body-like. This variability is perfectly explained by the E-body being cell controllers. Because individual cell controllers are independent of each other, their number in the E-body can be variable. The total number of cell controllers in the E-body could range from perhaps only a billion or so, all the way up to trillions.

The cable that connects the E-body with the physical body is more commonly called a cord, and it has been noticed by many E-body projectors. What is this cord and what does it connect to? The cell-controller explanation provides an easy answer. The cord is just more cell controllers. Back at the physical body, the cord is connected to the cell controllers that are still with the physical body. In a sense, the cord does not exist as a really separate structure. Instead, there are just two body-shaped masses of cell controllers which are joined or bridged together by still more cell controllers in the shape of a cord. The sum total at any one time of all the cell controllers in the E-body, cord, and physical body will always be the number of cell controllers in the non-projected, fully-awake individual both before and after the projection. One could say that the number of cell controllers is conserved during a projection. If a particular cell controller is no longer in the physical body during a projection, then it must be in either the cord or the E-body. At the end of the projection, all cell controllers are with the physical body again.
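
To make this conservation rule concrete, the bookkeeping can be written out as a small program sketch (here, in Python). The counts used below are hypothetical illustrations; nothing in the projection reports supplies actual numbers.

    # A sketch of the conservation rule for cell controllers during an
    # E-body projection. The total and the projected counts below are
    # hypothetical illustrations, not measured quantities.

    TOTAL_CONTROLLERS = 50 * 10**12   # trillions of cell controllers

    def is_conserved(in_physical, in_cord, in_e_body):
        # The sum across physical body, cord, and E-body must always
        # equal the total for the non-projected, fully-awake individual.
        return in_physical + in_cord + in_e_body == TOTAL_CONTROLLERS

    # Fully awake and non-projected: every controller is with its cell.
    assert is_conserved(TOTAL_CONTROLLERS, 0, 0)

    # During a dense projection like Muldoon's, some trillions form the
    # E-body and a much smaller number form the cord.
    e_body, cord = 5 * 10**12, 10**11
    assert is_conserved(TOTAL_CONTROLLERS - e_body - cord, cord, e_body)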

The only possible objection to the cell-controller explanation is that the cell controllers can’t leave their associated physical cells without the cells becoming damaged due to neglect. Without its controller, a cell would be just a collection of A-space chemicals that will tend towards some sort of equilibrium, and that could be destructive. At the very least, the cell would become functionally inactive without its controller. This is a valid argument, but some cells are more demanding of attention than others. Neurons must be ready to signal at all times, so it seems unlikely that their controllers could be spared. Any cells undergoing division will need their controllers. Muscles must be ready to contract. The muscle cells of the heart, and those cells needed for breathing and such, certainly must keep their controllers at all times. However, there are structural cells, such as skin and epithelium cells, which could probably go unattended by their controllers for brief intervals, as long as they aren’t currently undergoing cell division. The individual cell controller will know, as part of its program, whether or not it can abandon its cell for a while. The cell type, and the cell’s current status, will be the factors which the cell controller, via its program, will consider. It may be that the DNA code which describes a particular cell type explicitly includes a time interval during which the controller can safely disengage from the cell. For example, a certain type of skin cell may have, as part of its DNA code, the information that the cell can be ignored for X minutes without serious damage to the cell or risk to the organism. With this kind of information, each cell controller could decide whether or not it could leave its cell, and for how long. We already learned in the “Development” chapter that each cell controller has an accurate clock. If a cell controller does leave its cell, it can use its clock to measure how long it is away.
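
The decision rule just described lends itself to a program sketch. The cell types and minute values below are invented for illustration; the text only supposes that some such limits are encoded in the DNA.

    # A sketch of the leave-or-stay decision each cell controller might
    # make. The cell types and minute values are invented examples.

    SAFE_ABSENCE_MINUTES = {
        "skin": 30,         # structural cells: brief neglect tolerated
        "epithelium": 20,
        "neuron": 0,        # must be ready to signal at all times
        "heart_muscle": 0,  # must be ready to contract at all times
    }

    def minutes_may_leave(cell_type, is_dividing):
        # Returns how many minutes the controller may be away from its
        # cell; 0 means it must stay. Dividing cells always need their
        # controllers, whatever their type.
        if is_dividing:
            return 0
        return SAFE_ABSENCE_MINUTES.get(cell_type, 0)

    # Each controller has an accurate clock (see the "Development"
    # chapter), so it can time its absence against this limit.
    assert minutes_may_leave("skin", is_dividing=False) == 30
    assert minutes_may_leave("neuron", is_dividing=False) == 0
    assert minutes_may_leave("skin", is_dividing=True) == 0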

During an E-body projection, it often happens that the E-body will briefly return to the physical body at regular intervals. Sometimes this brief return may actually be felt as a kind of pumping sensation. The E-body will quickly reenter and recoincide with the physical body, and during the brief time of about two or three seconds that the E-body is with the physical, the projectionist may feel the whole E-body pumping, as it were. Muldoon, as well as other projectionists, has interpreted these brief returns of the E-body as a recharging or reenergizing of the body. This is the fuel-is-low and batteries-are-run-down kind of explanation.

However, our cell-controller explanation has no need for the fuel-is-low kind of answer. The reason for the brief returns of the E-body to the physical is most likely the need of at least some of the cell controllers in the E-body to get back to their cells. The pumping sensation is probably caused by cell controllers both leaving and entering the E-body synchronously in droves. During the brief return, those cell controllers whose time is up can leave the E-body and reassociate with their physical cells. At the same time, different cell controllers currently associated with their cells can leave the cells and join the E-body. In other words, a swap of used for unused cell controllers takes place. If, during a return, there are not enough unused cell controllers to replace the used ones, then the whole projection experience may terminate at that point.
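
The swap during a brief return can likewise be sketched as a program. Representing each controller by its remaining minutes of safe absence, and the idea of a minimum E-body size, are hypothetical conveniences, not reported facts.

    # A sketch of the swap of used for unused controllers during a brief
    # return of the E-body to the physical body.

    def brief_return(e_body, resting, minimum_size):
        # Controllers whose time is up leave the E-body and go back to
        # their cells; an equal number of fresh controllers join it.
        used = [c for c in e_body if c <= 0]
        staying = [c for c in e_body if c > 0]
        fresh = resting[:len(used)]
        new_e_body = staying + fresh
        new_resting = resting[len(used):] + used
        if len(new_e_body) < minimum_size:
            return None   # not enough unused controllers: projection ends
        return new_e_body, new_resting

    # Two controllers projected, one exhausted; one fresh one available.
    result = brief_return(e_body=[0, 5], resting=[30], minimum_size=2)
    assert result == ([5, 30], [0])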

The consistent shape of the E-body is suggestive of its cell-controller composition. The E-body is always a match of the physical body in terms of its general outline. No projectionist ever reports an incomplete E-body, or an E-body that can alter or transform its shape. This is different from what is possible during a lucid dream. Lucid dreamers have much more flexibility in their apparent bodies, because such bodies are constructed on the spot out of E-space material that has no connection with the physical body. Lucid dreamers sometimes report having no body, or an incomplete body, or a non-human body, or seeing someone else undergo a transformation of their apparent human form. However, this kind of changeability is never reported for the E-body. The E-body is always shaped like the associated physical body. It is always a complete body in its outline, and it never loses its shape or transforms in any way during the projection. If a cell controller were to leave its cell, one would expect it to return to the exact same cell it left. The cell controller must have a way of finding its cell upon its return. An obvious and simple scheme a cell controller could use would be to leave only with some of the nearby cell controllers and then keep its position relative to them the same. This way, the problem of finding its way back to its cell becomes the problem of the group of cell controllers it is traveling with finding its way back. By extension of this group togetherness, if no cell controller leaves its cell unless at least some of its nearby neighbors do too, then the end result is the entire body in outline, which is just what we get. Because the cell controllers maintain their positions relative to each other throughout the projection experience, when the E-body reassociates with the physical, it is a quick and easy task for the individual cell controllers to find and reassociate with their cells. This maintaining of relative position accounts well for the E-body’s failure to ever transform its shape. Maintaining position means that body shape will be preserved.
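
The group-togetherness scheme amounts to each cell controller recording its offset within the traveling group and keeping that offset fixed. A minimal sketch, assuming hypothetical coordinates, makes the point: if the offsets never change, every controller recovers its own cell’s position the moment the group returns.

    # A sketch of the relative-position scheme. Each controller records
    # its offset from a shared anchor when the group leaves, keeps that
    # offset fixed throughout the projection, and so recovers its own
    # cell's position exactly when the group returns. The integer
    # coordinates are hypothetical.

    def form_e_body(cell_positions, anchor):
        # Record each controller's fixed offset from the anchor.
        return [(x - anchor[0], y - anchor[1], z - anchor[2])
                for (x, y, z) in cell_positions]

    def reassociate(offsets, anchor):
        # Back at the physical body's anchor, every controller finds
        # its own cell again just by reapplying its offset.
        return [(dx + anchor[0], dy + anchor[1], dz + anchor[2])
                for (dx, dy, dz) in offsets]

    cells = [(1, 2, 3), (2, 2, 3), (1, 3, 3)]
    anchor = (1, 2, 3)
    offsets = form_e_body(cells, anchor)   # unchanged while projected
    assert reassociate(offsets, anchor) == cells

Because the offsets never change, the body’s outline is preserved automatically, which is just the observed behavior of the E-body.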

The typical E-body projectionist finds himself in a flimsy E-body and cannot be away from the physical body for long without at least a brief return. These projectionists make no connection between physical health and E-body projection ability, except to claim that good health promotes projection. Muldoon, of course, was not the typical E-body projectionist. His E-body was consistently dense and his projections were long lasting. It is interesting that Muldoon takes a very decisive position on the relationship between physical health and projection ability. He claims sickness promotes projection, and health has the opposite effect. His basis for this claim was his own personal experience. Muldoon, it seems, was often sick. According to Carrington, Muldoon wrote his book from his sickbed.

Muldoon’s identification of sickness with projection ability may be accurate in his case, but not in the way he thought. Muldoon’s thinking is that the sickness comes first, and then the projections follow. Considering that the E-body is made of cell controllers, and that Muldoon’s projections kept a lot of cell controllers away from their cells for a comparatively long time, it seems more reasonable to suppose the projections came first, followed by the sickness. Muldoon’s projections probably caused some cell damage, and this in turn would make him sick. Muldoon recognized the relationship between his health and his projections, but he chose to overlook the true cause and effect between them. If he had believed that the projections were making him sick, then he would have suffered a letdown, since the value of his projections would have been diminished. Instead, Muldoon chose to believe his projections were benign.

Overall, all the facts known about the E-body, as experienced and recorded by projectionists, are well explained by the E-body being made of cell controllers. Now that we’ve established what the E-body is, we can continue with our consideration of Muldoon’s first experience. Most E-body projectionists do not have the experience of the E-body being slowly separated from the physical without any conscious control, as Muldoon did. Instead, at the outset of the projection, the typical E-body projectionist just pulls his E-body as best he can away from the physical. Muldoon was certainly different. The first thing he noticed was the vibration of his E-body. The E-body is known to vibrate at times. In all cases, when the E-body vibrates, it is probably a real vibration that has a single cause. Because the E-body is made up of a large number of cell controllers, and a vibration can be caused by fast-moving waves, it is reasonable to suppose some sort of wave is moving back and forth through the cell controllers. The frequency of the E-body’s vibration, when it can be felt, seems to range from a low of a few cycles per second, to a high of perhaps a few hundred cycles per second. Even when it can’t be felt, it may be that the E-body is vibrating all the time, but at a frequency too high to be noticed.

The wave that passes through the cell controllers is probably their means of keeping together. The wave is probably a wave of communication, a kind of coordinated feeling of one’s neighbors. The wave would move in one direction through the E-body until it culminates at the extreme endpoint or end-plane of the E-body in that direction. With the wave’s forward motion impeded, the cell controllers at the endpoint or end-plane reverse the direction of their communication and the wave now starts back in the direction it came, having been reflected, as it were. Although this explanation of the vibration of the E-body is somewhat speculative, one can be sure that there is a practical reason for the vibration, and that it has to do with the needs of the cell controllers. The major need of the E-body cell controllers is to keep together. To a lesser extent, it may be that the vibrations are also used to communicate information between the E-body and the mind-part which is in the E-body. The E-body can often imitate physical-body motions such as walking. These movements may well be communicated from the mind-part into the wave that passes through the cell controllers.
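
Under this somewhat speculative picture, the felt vibration frequency would follow from the length of the E-body and the speed of the reflected wave, as the following sketch shows. The wave speed used is an arbitrary stand-in; no actual value is known.

    # A sketch of the reflected communication wave. One full vibration
    # cycle is a round trip of the wave along the E-body and back, so
    # frequency follows from body length and wave speed.

    def vibration_frequency(body_length_m, wave_speed_m_per_s):
        round_trip_m = 2.0 * body_length_m   # down the body and back
        return wave_speed_m_per_s / round_trip_m

    # For a 1.8-meter E-body, a felt vibration of 300 cycles per second
    # would imply, under this simple model, a wave speed of about one
    # kilometer per second. The speed here is an arbitrary stand-in.
    assert round(vibration_frequency(1.8, 1080.0)) == 300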

The occult literature of the 20th century has a standard explanation for the vibrations of the E-body. The explanation is that there are different invisible planes of existence, and these planes of existence operate at different frequencies, and the vibration-rate of the E-body determines which of these invisible planes will become visible and accessible to the projectionist. However, there is only A-space and E-space, so any notion of there being more than a single, invisible plane of existence is wrong. Still, there are two reasons this occult explanation came about. Firstly, it is the common observation that when the vibrations are felt to be increasing in frequency, then separation of the E-body from the physical will either happen, or continue if it has already happened. Conversely, when the vibrations are felt to be decreasing in frequency, then reassociation of the E-body with the physical is likely, and even inevitable if the vibrational frequency becomes too low. Secondly, projectionists often report experiences that are very different from each other. To some people this suggests different planes of existence. For example, lucid dreams would be happening on one plane, while E-body projections would be happening on a different plane.

The notion of planes of existence, and frequency-based access to them, is an idea that did make some sense of the facts, but in light of our understanding of E-space, it is clearly passé. The vibration of the E-body is a necessary function of the E-body to keep itself together. The vibrations have nothing to do with tuning in alternate realities, as though the E-body were a radio tuner or television tuner switching stations and channels. However, the correlation of decreasing frequency with physical reassociation, and increasing frequency with physical disassociation, does strongly suggest that when the E-body is separated from the physical, and the projectionist does not feel any vibration, then the E-body is indeed vibrating, but at a frequency too high to be felt or otherwise noticed.

After the onset of the vibrations, Muldoon felt himself floating. Until Muldoon was later moved upright at a distance from his physical body, he had no conscious control over what was happening to him. The question is, what was controlling this projection? We know man has three E-space components. These are the soul, mind pieces, and cell controllers. As Muldoon clearly states, the projection controller was not his self-awareness or soul. We must also eliminate the cell controllers, as they are just along for the ride. Therefore, one of the mind pieces must be controlling the projection, and only relinquishing control to Muldoon’s self-awareness when it deems such conscious control appropriate. This mind piece is also capable of regaining exclusive control when it wants, as the end of Muldoon’s projection shows.

As Muldoon was floating up, his senses of hearing and seeing became active. The exact quality of his two primary senses is of great interest, because unlike most projectionists, Muldoon was able to both see and hear real physical objects. Instead of seeing and hearing only things in E-space, Muldoon had E-space sensors that could directly observe A-space. We already know, in general, that some things in E-space can sense things in A-space, since A-space life would be impossible otherwise. However, the fact remains that most projectionists, including E-body projectionists, never have direct conscious perceptions of A-space. They see and hear only E-space. Once again, Muldoon is exceptional.

To try to understand what Muldoon’s senses were like, it is best to quote him directly. “When the sense of hearing first begins to manifest, the sounds seem far away. When the eyes first begin to see, everything seems blurred and whitish. Just as the sounds become more distinct, so does the sense of sight become clearer and clearer.”[18] “As is often the case, everything at first seemed blurred about me, as though the room were filled with steam, or white clouds, half transparent; as though one were looking through an imperfect window-pane, seeing blurry objects through it. This condition is but temporary, however—lasting, as a rule, about a minute in practically all conscious projections.”[19] “Once you are exteriorized, and your sense of sight working, the room, which was dark to your physical eyes, is no longer dark—for you are using your astral eyes, and there is a ‘foggish’ light everywhere, such as you see in your dreams, a diffused light we might call it, a light which seems none too bright, and yet is not too dim, apparently sifting right through the objects of the material world.”[20]

These senses which Muldoon describes are very different from the normal E-space senses. A lucid dreamer, for example, never has a hazy warm-up period. The senses always work perfectly from the very beginning. Obviously, Muldoon is using different sensors than most other projectionists. During his projections, there is a dense E-body along with a soul and a mind-part. What makes Muldoon different from most other projectionists is his dense E-body. We must ask: what, or where, is the sensor Muldoon is using to perceive A-space? It isn’t his soul. We already know that one or more of the mind pieces have sensors for E-space sight and sound. However, the fact that lucid dreamers never perceive A-space does suggest that there are no sensors among the mind pieces that can respond to A-space sights and sounds. The only logical choice is the dense E-body. This must be the sensor Muldoon is using.

Although Muldoon doesn’t exactly say so, it is clear that his vision is not the same as it is when he is awake in his physical body. For one thing, while projected, Muldoon can see certain E-space objects, such as his own E-body, as well as A-space objects. Also, the ability to see physical objects in an otherwise dark room, indicates either an extremely sensitive light sensor, or a sensor that measures some other portion of the electromagnetic spectrum. Nowhere does Muldoon say that he sees in color while projected. There is an absence of color from his descriptions, and it may be that he saw physical objects in a black-and-white gray-scale. The fact that his sensor was not relying on normal levels of A-space light, would make it probable that he could not perceive color directly. If he did see any physical objects colored, this coloring may have been inferred by the vision-processing mind piece.

A dense E-body is a plausible sensor for both A-space sight and sound. Because the E-body is made of cell controllers, which are able to both sense and manipulate physical cells, it seems likely that the E-body as a whole could somehow react or respond to the surrounding, nearby A-space. As usual, the appropriate mind piece will interpret the sensor’s data and create the final sights and sounds that are seen and heard by the awareness.

The cable which Muldoon noticed during his first projection was a common feature of his later projections. He often studied this cable when he was projected. Many E-body projectionists never notice any cable or cord connecting their E-body to their physical body. However, it may be that every E-body projection has some sort of cord connection, whether noticed or not. We have already said that the cord or cable must itself be composed of cell controllers, just as the E-body is. Muldoon’s opinion was that the cable was made of the same substance as the E-body.

Muldoon describes what he calls cord-activity range. The cord remains thick out to a somewhat variable distance of a few meters from the physical body. As long as the cord appears thick, then the E-body is still within range of the physical, and is strongly influenced by it. Within range, Muldoon felt any happenings in the physical body reproduced in the E-body. For example, one time a pet dog jumped on the bed and snuggled against Muldoon’s physical body while he was projected within range. He felt the dog as though the dog were pressing against his E-body. In general, Muldoon would both feel his physical body’s sensations, and even control its breathing when he was within range.

The cord-activity range was defined by the thickness of the cord. As Muldoon would move further from his physical body, the cord would at some point become very thin like a thread. Once the cord was thin, then the happenings of the physical body were no longer felt. Muldoon claims the cord would keep its thread-like thinness out to whatever distance he might move to, even to a distance of many kilometers. The cord, one must assume, is like a life line. It is the cell controllers’ guarantee of getting back to their cells. It seems to also serve as a communication link. If there is any distress back at the physical body, then all the cell controllers in the E-body can be recalled.

There is no evidence for any kind of cord during a lucid dream. A lucid dream involves the projection of only the soul and some mind pieces. There are no cell controllers, and thus no cord. Somehow, this bundle of soul and mind pieces is able to navigate in E-space and ultimately find its way back to the physical body. This navigation is handled by one of the mind pieces without any conscious participation. The lucid dreamer is subject to recall by the physical body, so there must be some sort of communication link. This communication link is probably also involved in the guidance back to the physical body. The method of communication used is probably analogous to radio in A-space, instead of being a wire, like the cord.

Individual cell controllers are not as sophisticated as the much larger mind pieces. There is no reason to believe that if a cell controller were individually removed far from its cell, it would be able to find its way back to that cell. The cell controllers probably consider the cord a necessity for the E-body to find its way back to the physical body. Apparently, no assumptions are made about the presence in the E-body of a mind piece that could navigate back using a non-cord method. Although there is no way to know for sure, if the cord were hypothetically broken, then perhaps the navigating mind piece (assuming it is present in the E-body) could get the whole E-body back to the physical.

One might wonder if there is a limit on how far away an E-body could move from the physical body, since it is trailing a cord behind it. There is good evidence that lucid dreamers can move thousands of kilometers away from their bodies. There is no good evidence that an E-body projectionist has ever moved such a distance away. It is probably safe to say that the range of the E-body projectionist is substantially less than the range of the lucid dreamer.

During Muldoon’s first projection, he tried to make contact with the other people in the house. He saw their physical bodies lying in bed, but his E-body hands passed right through them. When Muldoon walked through his first door, it was expected. The projected Muldoon is only E-space substance, so passage through A-space substance is to be expected. Because of the fine nesting together of A-space and E-space, any passage of an E-space object through an A-space object (or vice versa) is really a passage around the other object. However, when Muldoon tried to grab the body of a sleeper, it was not obvious that his hands would pass through. The reason is that a sleeper’s body must contain a great number of E-space cell controllers.

Muldoon’s E-body hands are cell controllers, so why didn’t they encounter resistance from the cell controllers in the sleeper’s body? To suggest that the sleeper’s body did not contain E-space material is unacceptable. The cell controllers must have been there. The only possible explanation is that there is enough room between individual cell controllers so that the one group was able to pass through the other without any substantial resistance. However, this does not mean that one E-body cannot make contact with another, or that it was truly impossible for Muldoon to have made contact with a sleeper, or even the door.

The normal E-body, during conscious control of it, will contact other E-bodies, but not sleepers or physical objects. The awareness only has as much control over the E-body as it is allowed to have by the intermediating mind pieces. Muldoon remarks how frustrated he was that he could never make contact with physical objects. In all the many projections he had, his E-body never made contact with a physical object while he was consciously in control. However, there were a few rare instances where Muldoon knew that his E-body had made contact with a physical object while he was unconscious. Because the E-body is made of cell controllers, and cell controllers can manipulate physical cells, it is not unreasonable to suppose that the E-body could touch or apply pressure against a physical object. Normally, it never does, but the potential is still there.

On the night of February 26, 1928, Muldoon had a serious stomach sickness that caused him great pain. At close to midnight he was overcome with pain and called out to his mother for help. She was asleep in an upstairs bedroom and didn’t hear him. Muldoon struggled out of bed, still calling, and fainted from the pain and effort. After regaining consciousness, only to struggle and faint again, the next time Muldoon regained consciousness, he was in his projected E-body. His E-body was moving without conscious control up the stairs, through a wall, and into the room where his mother and small brother were sleeping. Muldoon saw both of them sound asleep on the bed. At this point Muldoon lost consciousness for a brief period. Upon regaining consciousness, Muldoon saw his mother and small brother excitedly talking about being rolled out of bed by an uplifted mattress. After witnessing this scene for a while, Muldoon’s E-body was drawn back and reconnected to the physical. Back in the flesh, he called to his mother again, and this time she heard him and came downstairs. Ignoring the fact that he was lying on the floor, she excitedly told him how “spirits” had lifted the mattress several times and that she was, of course, frightened by it.

This seems to be Muldoon’s best example of his E-body making contact with a physical object. His awareness was turned off or disconnected at the critical moment when the contact actually happened. In general, there doesn’t seem to be any evidence that any E-body under conscious control has ever made contact with a physical object. No reliable projectionist has ever reported such a thing. (In a recent popular book by Robert Monroe, Journeys Out of the Body, Monroe claims such an experience, but there are reasons to doubt it.) It must be that the intermediating mind pieces have strict instructions or programming to never allow such conscious control. The ultimate origin of this restriction would be Gaia. Gaia must have decided that the E-body would not be allowed to affect the physical, including sleepers or awake persons containing cell controllers, while the awareness is on. In keeping with this restriction, Muldoon’s awareness was turned off by the appropriate mind piece while the physical contact was made. However, even without conscious participation, physical contact seems to be a great rarity. The internal programming of the cell controllers and the mind pieces must largely forbid such contact, except where appropriate as when a cell controller is with its cell.

The fact that the E-body is restricted from physical contact, including contact with other cell controllers in a physical body, is obviously for the common good. It isn’t hard to imagine the chaos possible if there were no such restriction. After all, who wants to be touched or otherwise disturbed by “spirits” while they are awake or sleeping? And imagine the abuse possible if the E-body projectionist could make contact with physical objects at will. It wouldn’t be long before all sorts of crimes and disturbances were committed invisibly.

Apparently, the only contact allowed is what may be called fair contact. The only fair contact possible for an E-body, would be contact with other projected E-bodies or E-bodies that have no physical association. Because they are meeting on equal terms, the two E-bodies will make contact with each other even when under conscious control. In fact, most E-body projectionists sooner or later have encounters with other E-bodies. Struggles and fights are often reported, and the opponent E-body is not always human. These encounters can be both frightening and very painful. Muldoon gives one example of this kind of encounter.

In 1923, Muldoon listened to a conversation between his mother and another woman who lived in town. This other woman described what an awful man her very-recently dead husband had been. The stories he heard angered Muldoon against the man. That night, Muldoon was asleep when he had a conscious projection. Upon turning to look at his physical body, Muldoon was shocked to see the E-body of the man who had been talked about earlier in the day. Muldoon describes the man as having a savage look, and being determined for revenge. This man quickly proceeded to attack the projected Muldoon. There was a fight, and Muldoon was getting the worst of it, as well as being cursed at. However, the fight soon ended when Muldoon was drawn back into his physical body. Once reconnected with the physical, Muldoon no longer felt or heard the attack of his enemy.

Muldoon remarks how his attacker clung to him and continued his attack while Muldoon was being slowly drawn back towards his physical body. Muldoon says his attacker was completely powerless to prevent, or even slow down, this drawing back and reconnection process. This failure to delay Muldoon’s reconnection is easily explained. Although Muldoon found his opponent to be stronger than himself, the fact remains that the total mass of his attacker’s E-body was extremely low when compared to the mass of Muldoon’s physical body. Even if Muldoon had only weighed fifty kilograms, that is still a mass of fifty thousand grams. By comparison, his attacker’s E-body probably had a mass of less than a gram. With his physical body serving as an anchor, it is easy to see why Muldoon won the tug-of-war.
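
The arithmetic of this tug-of-war is worth setting down explicitly; the sub-gram E-body mass is, of course, an estimate rather than a measurement.

    # The anchoring argument in numbers. The sub-gram E-body mass is the
    # text's estimate, not a measurement.

    physical_body_g = 50 * 1000   # fifty kilograms, in grams
    e_body_g = 1                  # upper bound: less than a gram

    # Even a light physical body outweighs the attacker's E-body by a
    # factor of at least fifty thousand.
    assert physical_body_g / e_body_g >= 50_000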

At this point we are done with our examination of Sylvan Muldoon and his E-body projections. In this chapter we have considered in detail both lucid dreams, and E-body projections. These are two of the three kinds of projection that are possible to the human being. The human has three E-space components. These are the soul, the mind pieces, and the cell controllers. We are only interested in conscious projections, so the soul must always be present during the projection. If the soul is projected by itself, then this is a soul projection, which is covered in the next chapter. For normal projections having both sight and sound senses, the presence of at least some of the mind pieces is required, so any normal projection must include both the soul and some of the mind pieces. This is the minimum requirement for a normal projection. The only possible addition to the minimum requirement from among the three E-space components, is the cell controllers in the form of an E-body. Thus, if only the minimum requirement is met, then the projection is a lucid dream, and if the E-body is added, then the projection is an E-body projection.
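
This classification follows mechanically from which E-space components are present, and it can be stated as a short rule. The function below is only a restatement of the text in program form; the labels are the chapter’s own terms.

    # Which kind of projection results from which combination of the
    # three E-space components.

    def classify_projection(soul, mind_pieces, e_body):
        if not soul:
            return "not a conscious projection"
        if not mind_pieces:
            return "soul projection"       # covered in the next chapter
        if e_body:
            return "E-body projection"     # minimum requirement plus E-body
        return "lucid dream"               # the minimum requirement alone

    assert classify_projection(True, True, False) == "lucid dream"
    assert classify_projection(True, True, True) == "E-body projection"
    assert classify_projection(True, False, False) == "soul projection"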

Both lucid dreams and E-body projections are real to the awareness that experiences them. However, one can argue that the E-body projection is more real, because it is more like everyday life: there is the addition of a body that has some resemblance, in both movement and feeling, to the physical body. In practical terms, however, lucid dreams are much more desirable than E-body projections. In fact, E-body projections can be downright loathsome. The reason for this is that lucid dreams are always painless and often quite fascinating, but E-body projections are often painful, and sometimes excruciatingly so. Because the E-body has a sense of touch, its feelings can unfortunately be translated into pain for the soul, just as the physical body can cause pain. Besides the pain, E-body projections sometimes have periods of darkness and disorientation. In addition, the potential for fear and emotional upset is much greater during an E-body projection than it is during a lucid dream. Overall, E-body projections are undesirable, and one should not be pleased to have them.


footnotes

[16] Fox, Oliver (1980) Astral Projection. Citadel Press, Secaucus. p. 44.

[17] Muldoon, Sylvan, and Carrington, Hereward (1980) The Projection of the Astral Body. Samuel Weiser, New York. p. 53.

[18] Ibid., p. 233.

[19] Ibid., p. 255.

[20] Ibid., p. 204.


Chapter 8: The Soul

The total man has four components. Three of these components exist in E-space. These are one soul, many mind pieces, and trillions of cell controllers. The whole of these three components taken together probably has a mass of less than a gram. The fourth component exists in A-space. This is the physical body. Compared to the E-space components, its mass is immense. The great mass of A-space objects makes their acceleration both energy-expensive and slow, when compared to E-space objects. Popular and poetic imagination conceives of the soul as a light and airy thing. This low-mass conception is certainly right.

Both the existence, as well as some of the qualities, of the mind pieces and cell controllers have been deduced by a study of the physical evidence. The physical evidence is so clear and supportive that both the mind pieces and the cell controllers were introduced and considered in this book in a scientific context. However, there will be no pretending that the soul can be handled in the same way. Unlike the mind pieces and cell controllers, there is no physical evidence for the existence of self-awareness or consciousness, and hence the soul. The soul is outside the realm of science. The only source of evidence for it is human experience.

In considering the soul, we can begin with what is common knowledge about it. Each human recognizes in himself the existence of consciousness and self-awareness. The state or condition of this awareness is variable, but everyone agrees that it is always unitary. Each human has only one awareness. For this reason we say that a man has only one soul. Such human abnormalities as multiple personalities do not contradict this conclusion. The mind pieces are much more responsible for personality than the soul is.

Because there is no physical evidence, this chapter can’t help but be the most speculative in this book. There is no way to prove statements about the soul. However, in spite of this, some very definite statements will be made. They all rely on human observation, and on logical thinking used to interpret the observations. Having said this, we can jump right in with our boldest statement on the soul.

The soul is a small sphere roughly two centimeters in diameter. The awareness exists like a point at the center of this sphere. The evidence for this description of the soul is a very rare type of projection experience. Most projectionists, who are a rarity to begin with, never have what may be called a soul projection. We have already covered lucid dreaming and E-body projection, but nothing so far has been said about the extremely rare soul projection. To adequately understand a soul projection, it is first necessary to assume as a precondition what is learned from the soul projection. What one learns from the soul projection, besides such things as shape and perhaps size, is that awareness exists independently of all sensory and mental inputs.

When all, or almost all of the sensory and mental inputs are detached or cut off, and at the same time the awareness is still active or turned on, then the most extraordinary experience results. One finds oneself existing as a completely bodiless and largely mindless self-aware thing at the center of a sphere. All sensory inputs are definitely cut off. Most of the mental inputs are also cut. Obviously, no one could report the experience unless it were remembered, and such an experience is usually remembered for life, so a connection to long-term memory is indicated. For this reason, it doesn’t seem to be possible for a soul projection to happen without there still being some, albeit minimal, type of mental connection.

The perception of there being a surrounding spherical shell around the point-like awareness is a common feature of a soul projection. What this shell is, no one knows. It may be something analogous to an event horizon. In other words, it may not be an actual shell of E-space matter. Instead, the shell may represent the limit of the soul’s direct perception when it is in the soul-projection state. The volume of space between the point-like awareness and the shell is just a few cubic centimeters, and apparently unoccupied, although once the soul is back in the physical head, presumably cell controllers must be able to occupy the space.

The size of the shell is estimated from how large it feels as it is reintegrated amongst the mind pieces and cell controllers of the human head. This felt reintegration is not a common feature of a soul projection. The soul may be reintegrated while it is non-aware, or reintegrated to its mind pieces while it is still away from the body. Because soul projections are so rare to begin with, one can’t say with any certainty how often one has a chance to feel the size of the sphere. The only thing certain is that many soul projections give no indication of how large or small the sphere actually is. Although size data is minimal, the two-centimeter estimate is believable because one would assume the soul is smaller than the head it inhabits. During conscious reintegration, the soul is definitely felt as being in the head, and apparently centered between the left and right sides.

The state of awareness during a soul projection is benign. There is no sensation of either pain or pleasure. There is no emotion. These are external inputs that have been cut off. The interior lighting of the sphere seems to be variable. There may even be darkness. However, there is no fear because all emotion has been cut off. There may be the facility of the interior voice. In other words, one may be able to actually think to oneself during the soul projection. However, the full power of one’s mind is likely to be absent and the thoughts childish. It seems both the interior lighting, and the capacity for thought, are external to both the awareness and the soul in general, because they are not common to every soul projection.

The one common feature of a soul projection is the perception of a sphere surrounding one’s bodiless awareness. There is no sensory input such as sight or sound, although if the interior of the sphere is lit, one can sort of see the shell, and some sound like static may be heard. Soul projections are typically short in duration, lasting less than a minute, or perhaps only a few seconds. The awareness at the center of the sphere is passive and isolated. There is no feeling or sense of any power over anything. The awareness does not control its own situation. However, there is a definite sense of identity and existence. The awareness is most definitely aware of itself.

The basic picture that emerges from the scanty reports of soul projectionists is that residing within a small sphere is a passive awareness that has no mental qualities, emotions, or senses of its own. The soul lives, so to speak, in cooperation with objects exterior to it. The soul is along for the ride, but it isn’t the driver. In the human mind, the drivers are the mind pieces.

Everyone experiences episodes of unconsciousness. There is no doubt that awareness ceases at times, such as during sleep. The soul is passive and only aware of what the mind pieces allow it to be aware of. The question is, what is non-awareness or unconsciousness? There is no indication that the soul controls its own awareness. No such power over itself is reported by soul projectionists. The fact that the soul is a passive consumer of external stimuli suggests that unconsciousness will occur if all external stimuli are cut off. This would mean that complete isolation would cause the awareness to cease and become dormant.

The soul projection is probably very close to being unconscious, and not in terms of a dimmed awareness, but in terms of a starved awareness that is close to turning itself off automatically. One might say the soul is like a light bulb. It can be turned on and off very easily by either allowing or blocking the needed stimulus, which in the case of a light bulb is an electric current. During a soul projection, all the stimuli a human soul normally receives are blocked, with the exception of a stimulus connection to memory, and perhaps a few other somewhat minor or trivial connections. One might say the experience of being a point-like awareness in a sphere is the first threshold of awareness. A certain minimum stimulation can turn the awareness on, and let it perceive little more than itself and its immediate surroundings or event horizon.
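
The light-bulb analogy can be put as a simple threshold rule: below a minimum stimulation the awareness is dormant; at the bare minimum it perceives little more than itself; with full input there is normal awareness. The numeric levels in the sketch below are arbitrary illustrations.

    # A sketch of the light-bulb analogy: awareness as a threshold
    # device driven by external stimulation. The numeric levels are
    # arbitrary illustrations.

    FIRST_THRESHOLD = 1   # bare minimum: awareness of little but itself
    NORMAL_LEVEL = 10     # full sensory and mental input

    def state_of_awareness(stimulation):
        if stimulation < FIRST_THRESHOLD:
            return "dormant"            # unconsciousness: the bulb is off
        if stimulation < NORMAL_LEVEL:
            return "soul projection"    # the first threshold of awareness
        return "normal awareness"

    assert state_of_awareness(0) == "dormant"
    assert state_of_awareness(1) == "soul projection"
    assert state_of_awareness(10) == "normal awareness"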

The normal working of the human being is that the soul is either always stimulated past its event-horizon threshold, or its stimulation is completely cut off so that the soul becomes dormant and awareness ceases. A soul projection is clearly abnormal. It serves no practical purpose except to satisfy human curiosity. It sometimes happens to ordinary people without a history of the more common projections, but this seems extremely rare. More frequently, soul projections happen to experienced lucid dreamers and E-body projectionists. The use of the OM method, which is the basis of Hinduism, can elicit an occasional soul projection, along with the more frequent lucid dreams and E-body projections.

The great rarity of soul projections is evidenced by a reading of the principal Upanishads. There is definite evidence of confusion when the Upanishads talk about the soul. However, unlike the majority of the Upanishads, the Katha Upanishad seems to have a clear idea of what it is talking about.

Know thou the soul as riding in a chariot,
The body as the chariot.
Know thou the intellect as the chariot-driver,
And the mind as the reins.

The senses, they say, are the horses;
The objects of sense, what they range over.
The self combined with senses and mind
Wise men call “the enjoyer.”[21]

This verse clearly distinguishes the soul from the mental qualities and senses. The soul is also portrayed as a passive passenger along for the ride. This verse agrees well with what is known from soul projections.

Though He is hidden in all things,
That soul shines not forth.
But he is seen by subtle seers
With superior, subtle intellect.[22]

A certain wise man, while seeking immortality,
Introspectively beheld the Soul face to face.[23]

These two verses, both from the Katha, are probably talking about soul projections. The “certain wise man” mentioned may well have been the writer of the Katha, or a close acquaintance of the writer. Notice the pat on the back in the third and fourth lines of the first verse. It is common for seers to rate both their experiences and themselves highly. The Upanishads are full of such self-praise. However, there is no reason, except personal vanity, to believe that a soul projection has any value other than to satisfy personal curiosity. A soul projection is an abnormal state, and nothing more.

The men who wrote the Upanishads were either seers themselves or students of seers. The method to become a seer is OM, as was explained in the “Hinduism and E-space” chapter. The typical seer, if successful, would have lucid dreams and E-body projections while asleep. The Upanishads are fairly consistent about the characteristics of OM, Brahma (E-space), and the Brahma-world (E-space Earth). This consistency is a product of either the writers or their teachers having similar projection experiences. However, there is a lot of inconsistency in the Upanishads about the soul. Confusing lists of soul qualities often include the entire mind. What this suggests is that soul projections were a rarity, and the typical Upanishad writer was ignorant of them.

To fill this gap in their knowledge, the early Hindus developed a doctrine of a small soul, the size of a thumb, that lives in the heart. Nowadays, we know the heart to be just a muscle that pumps blood. However, in pre-scientific times the heart was often considered the source of emotions. Even so, it is strange that the Hindus would place the soul in the heart, because common sense says the awareness is in the head. In spite of the Katha Upanishad’s apparent show of understanding about the soul, it too claims the soul lives in the heart. This heart doctrine is probably the single biggest error the Upanishads make, and it is a clear indication that the soul was poorly understood, even by seers.

Hinduism’s defective soul-knowledge gave Buddhism the opening it needed. A poorly-understood soul was the big chink in Hinduism’s armor. Buddhism countered with an organized and consistent view of the mind. The individual person was recognized as being a composite of several different things. The major components of man, according to Buddhism, are the physical body, senses, reason, will, and awareness. The major argument of Siddhartha Gautama, who was Buddhism’s founder, is that none of the components of man show true permanence. The physical body dies, decays, and rots away. The senses can be blinked on and off, or silenced. The reason is unreliable. The will can drift and wane. The awareness regularly ceases with sleep. Extremes of humanity, such as idiots and the insane, show how fragile the mind is. Overall, Gautama concluded that Hinduism’s glorious and eternal soul was unlikely.

Gautama confused the question of permanent states with the question of permanent objects. Everyone agrees that the awareness does not exist in a permanent, steady state. There is obvious variance in what one is aware of. The subjects of awareness are constantly changing. There is also universal agreement that awareness ceases on a more-or-less regular basis during sleep. However, this constantly changing state or condition of awareness does not mean there isn’t a permanent object such as the soul in which the awareness exists, even when it is dormant and turned off. After all, a light bulb and a microprocessor still exist, whether or not they receive any electric current. No doubt the confusion in Hinduism about the soul encouraged Gautama to conclude that impermanent states meant there were no permanent, underlying objects.

Buddhism is a remarkably consistent religion. Its major ideas all dovetail nicely with each other. The human-being-as-impermanent-bundle idea was joined with the idea that personal existence could truly cease. This is the idea of Nirvana. If there are no permanent objects underlying the individual, then it seems very reasonable that the individual can become extinct. All that is necessary for extinction is that the external factors or causes that keep the individual in existence be removed. If the individual has no permanent, underlying objects of support, the reasoning goes, then the individual must be propped up or supported from outside. Kick away those external props and the individual will disappear.

Normally, people would be repelled by the idea that they are completely impermanent and subject to total extinction. However, Buddhism turns this apparent weakness into its great strength. According to Buddhism, the world is full of suffering. Everything in existence suffers, mankind suffers, individuals suffer. Therefore, extinction is highly desirable because the only alternative is suffering. Instead of something to be afraid of, Buddhism turns extinction into a great goal. Gautama himself is said to have reached the goal upon his death. However, Gautama did not attain extinction just because his physical body died. No, not at all. In spite of Buddhism’s claim that there is nothing permanent underlying man, it is still necessary to Buddhism that some part of the individual survive physical death. After all, if death ended everything, then everyone would reach Nirvana as soon as they die, and there would be no reason for people to do anything special in an effort to work towards, and ultimately reach, Nirvana.

Besides needing after-death survival, Buddhism also needs a continuing reason for the desirability of Nirvana. If the individual survives death and yet Nirvana is still desirable, then it must be that the individual’s suffering will continue. If the individual never came back to live on Earth, then it would be hard to justify a claim that the suffering will continue. The obvious answer is to bring the individual back to Earth for a new physical life and another round of suffering. Thus, to be workable, Buddhism needs the doctrine of rebirth or reincarnation. Rebirth was already a doctrine of Hinduism, so Buddhism had it from the outset both ready-made and right-at-hand, so to speak.

At this point in our discussion, Buddhism is almost a complete system. The obvious suffering of earthly life is combined with rebirth to give the doctrine of endless suffering. The impermanent-man argument implies the possibility of Nirvana, which is true extinction. The focus on eternal suffering makes Nirvana both desirable and a great goal. However, Buddhism needs just one more element to make itself complete. As presented so far, Buddhism has a weak link in its reasoning. There is a definite conflict between rebirth and the claim that there are no permanent objects in man. If there are no permanent objects, then what is reborn? This is the vexing question for Buddhism.

Buddhism solves this logical difficulty, and actually kills two birds with one stone, by borrowing another doctrine from Hinduism. This is the doctrine of karma. Karma means deeds, and the Hindu idea is that both good and bad deeds in life can have a carry-over effect into the next life. Buddhism took this idea and adapted it for its own use. Buddhism uses karma as the glue that holds the impermanent bundle together from one lifetime to the next. Deeds, once done, are clearly external to the individual, and yet are claimed by Buddhism to still be attached and a part of the individual because of a great, universal web of cause and effect. This cause-and-effect argument is necessarily vague. It hides behind the size and complexity of the universe. However, it is needed to justify karma as the glue that holds the impermanent bundle of the individual together.

Besides being the necessary glue, karma also justifies personal effort to work towards gaining the great goal, Nirvana. The reasoning is simple. If deeds hold the bundle together, then deeds can tear the bundle apart. The right deeds can destroy the bundle and thus gain Nirvana, the ceasing of existence, for the individual. With the addition of karma, Buddhism is now complete. The impermanent man is held together by his deeds in an existence of eternal suffering. By changing his deeds, the man can escape eternal suffering by becoming non-existent. This is the Buddhist system.

Gautama, of course, had to state what deeds were good (leading towards Nirvana) and what deeds were bad (ensuring continued existence). According to Gautama, bad deeds are all types of desire. This is a reasonable villain, because one can’t attain one’s desires if one does not continue to exist. Thus, to desire anything is to implicitly want continued existence so that the desire can be fulfilled. Gautama cleverly includes the desire for Nirvana as a bad deed. There are no exceptions. All desires are bad. By condemning desires, Gautama is implicitly saying that one must train oneself to reject the future and prepare for extinction. Besides identifying the bad deeds, Gautama also had to identify the good deeds that lead towards personal extinction. This is his Eightfold Path. Right view, right purpose, right speech, right action, right livelihood, right effort, right mindfulness, and right concentration are the good deeds. Obviously, the qualification of “right” means there is a great deal of leeway and personal interpretation as to just exactly what a good deed is.

After this overview of Buddhism, we can easily understand why the soul as a real object is rejected by Buddhism. If the soul were real, then Nirvana would be unlikely. Without Nirvana as the escape, there is no point in emphasizing suffering. Without Nirvana and suffering, there is no Buddhism. The soul is denied for the sake of building a consistent religious system. To deny the soul, Gautama used the logical argument of the variable awareness, but this is the only evidence against the soul that Buddhism has. Contradicting this argument are the real experiences of soul projectionists.

Buddhism as a religion is not interested in E-space. It has a very narrow focus directed towards personal liberation from suffering by true extinction. Buddhism is indifferent to any knowledge, such as knowledge about E-space, unless the knowledge will help in the quest for Nirvana. Gautama started the Buddhist tradition of avoiding questions that are considered beside the point. He openly refused to answer questions that didn’t directly relate to the great goal. Any questions about E-space, known as Brahma to the Hindus, would have been ignored by Gautama. Unlike Hinduism, Buddhism is basically anti-knowledge. This anti-knowledge approach becomes extreme in a variant of Buddhism known as Zen. A Zen teacher replies to most questions with a deliberately meaningless answer.

The fact that during a soul projection the awareness exists independently of all the stimulation one is normally aware of, shows clearly that the awareness is not a product or consequence of that stimulation. Instead of being a consequence of stimulation, the awareness shows itself to be a consumer of stimulation. It feeds constantly on whatever it is given. Without its food, it becomes dormant (unconscious). The soul-projection state is very close to starvation and dormancy, and yet the awareness is fully awake and aware, just as much as when it is flooded with stimulation. Stimulated below its event-horizon threshold, the awareness is not dimmed, tired, or sleepy. Instead, it is fully awake, although the perception of being bodiless and without senses is remarkably different from what is normal for us. The awareness very much seems to have only two states. It is either on, or off. There doesn’t seem to be any scale of intensity or strength. Instead, the great variable for the awareness is the stimulation it receives.

The best evidence that the soul is a real object, is the spherical shell surrounding the awareness during a soul projection. As already stated, this shell itself may not be real E-space matter, but instead only an event horizon. Let’s consider these two possibilities. If the shell is real E-space matter, then the soul is immediately a real, tangible object in E-space. The soul would be a tiny ball with the awareness encased inside. The other possibility is that the surrounding spherical shell is a non-material event horizon. The fact that the awareness is at the center of the event horizon means there is something real at the center which is responsible for the event horizon. To try to explain it, it is reasonable to invoke the word “force.” A force at the center creates the surrounding event horizon. The spherical shape would be caused by a force that has no directional preference. In trying to understand what awareness is, it is not unreasonable to suppose that awareness or consciousness is one of the fundamental forces of E-space. This would explain the spherical event horizon. The focal point for the force must be an E-space particle of some kind. This particle is real, and the force is real, so the soul, which would be everything together, is a real object with its own independent existence in E-space.

Of the two possible explanations for the spherical shell around the awareness during a soul projection, the preferred possibility is the awareness-as-fundamental-force, because it is both simpler and more elegant than the something-in-a-ball alternative. By analogy with A-space physics, there must be a particle, in other words, some kind of matter that justifies or causes the presence of the force. Force needs matter to manifest. For the sake of convenience we will call this particle the soul-particle. The particle itself is not awareness, but it is needed for the awareness-force to manifest.
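A one-line geometric check supports this preference. If the strength of the awareness-force depends only on distance from the soul-particle, and not on direction, then every surface of constant force is a sphere centered on the particle. In symbols (the notation is borrowed from ordinary field theory purely for illustration, and is not a claim about actual E-space physics):

    V(x) = f(|x - x0|)   implies that each level surface   V(x) = Vc   is the sphere   |x - x0| = rc

where x0 is the position of the soul-particle, f is the assumed monotonic falloff of the force field with distance, and rc is the radius at which the critical value Vc is reached. The event horizon would then be nothing more than the level surface at the threshold value.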

With this better, although obviously speculative, understanding of the soul, we can attempt to answer questions about the soul’s origin and longevity. The fundamental force of awareness must have been with E-space from the very beginning. It would be part of the programming of each block of E-space. Therefore, the potential for awareness exists as long as there is E-space. To answer our questions, instead of considering the force, we must consider the particle. Nothing is known about this particle except that it appears to be stable over time. The best one can do is to speculate that the soul-particles were created early in the history of the Big Bang, just as the stable particles of A-space, such as the proton and electron, were. If this is true, and it is only a guess, then every soul would be about the age of the universe, which is roughly fifteen billion years old. Perhaps these soul-particles can be annihilated, but there is no evidence of such annihilation going on now in E-space. Certainly there is no evidence of any threat to the existence of our own soul-particles. Therefore, perhaps they will persist until the end of the universe. However, the mere existence of the soul-particle does not mean that there will be any associated awareness. As we know, there must be a certain minimum of external stimulation.

There must be a tremendous number of soul-particles distributed throughout E-space. However, only a very few would ever find themselves mixed up with A-space life. Probably one soul-particle is just as good for any purpose as the next soul-particle. There is no reason to believe there is more than one kind of soul-particle. They are probably all identical, just as all protons are identical, and all electrons are identical. There are big differences from one man to the next, but these differences are not to be explained as differences between their soul-particles, because these soul-particles are most likely identical.

The soul has no personality. It is sexless and characterless. It has no memory of its own. It can’t think by itself. All the qualities that make us what we are, are external to the soul, with the single exception of the awareness. An interesting question: what would happen if, while a man was unconscious, his soul were exchanged for another? All souls are the same, so upon regaining consciousness the man would be the same as before. The soul has no memory of its own, so the new soul would be unaware of its newness to the situation. However, this switching of souls is unlikely because it serves no practical purpose. This line of reasoning brings to mind Buddhism and the whole question of escape from suffering. Suppose one considered one’s own existence to be filled with suffering, and one wanted to escape from it. Now let’s suppose one’s soul were removed and replaced with another soul while one is asleep. Does this mean one has escaped from the former existence and its suffering? Logically, the answer is yes. The former existence and its suffering would still continue, but with a different soul. However, unless the escaped soul were attached to a new mind that specifically remembered the escape, the escaped soul would never know that it had escaped, nor what it had escaped from.
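The interchangeability reasoning can also be put in a short sketch. In the following Python fragment, the soul object deliberately carries no state of its own, so swapping two souls between two minds changes nothing that either mind could ever report. All names and fields are illustrative inventions.

    # Illustrative sketch: identical, stateless souls are interchangeable.

    class Soul:
        pass  # deliberately empty: no memory, no character, no identifying marks

    class Person:
        def __init__(self, memories, soul):
            self.memories = memories  # the personality lives in the mind pieces
            self.soul = soul

        def report(self):
            # Everything reportable comes from the mind; nothing comes from the soul.
            return list(self.memories)

    first = Person(["memory A1", "memory A2"], Soul())
    second = Person(["memory B1"], Soul())

    before = (first.report(), second.report())
    first.soul, second.soul = second.soul, first.soul  # the swap, done "while asleep"
    after = (first.report(), second.report())

    assert before == after  # no observable difference, and neither mind can record the swap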

Admittedly, the soul by itself is too depersonalized. The fact that our souls will probably exist for the age of the universe is really meaningless and a cold comfort. After all, our souls are probably already fifteen billion years old, and what difference does that make, what comfort does that bring? Instead, our concerns are really about our larger personality. There is no need to be concerned about the soul or awareness itself.

In what does the personality reside? The four components of man are one soul, many mind pieces, trillions of cell controllers, and one physical body. We have already eliminated the soul as a repository for personality. The physical body clearly has a strong influence on the personality, but the personality resides in E-space, not A-space. The cell controllers are only concerned with their cells, so they are no home for the personality. By a process of elimination, we are left with the mind pieces. The personality resides in the mind pieces. The mind pieces are complex constructions of E-space matter. Both their age and longevity are extremely short compared to the soul. Thus, our concern for the survival of the personality is justified.

Our present personalities are human. However, humans have only been on Earth for roughly 100,000 years. Therefore, it is reasonable to say that no human personality is older than 100,000 years. Someday, humanity will become extinct. This rendezvous with extinction may be many hundreds of thousands of years away, but it will happen, eventually. Once humanity is extinct, the human personality will be obsolete. These obsolete personalities may persist for a while, but eventually they will either disintegrate on their own, or more likely be forcibly broken down and restructured for a new use. By the time the last human personality has ceased to exist in our E-space world, the oldest human personality will be at most a few million years old, and possibly much younger. Once obsolete, human personalities will probably be restructured by Gaia for use in a different species. By removing, adding, and replacing certain mind pieces, a human personality could be transformed into a non-human personality suitable for use in a different species.

Buddhists who want their personalities to disintegrate need do nothing special. All they have to do is be patient, and their personalities are guaranteed to become extinct in the not-too-distant future. As for being able to hasten the ultimate disintegration by deeds, that is doubtful. There is no evidence that the mind is designed with a self-destruct button. Perhaps personalities are broken down in E-space if they are truly damaged or defective, but Buddhist practices do not aim at causing deliberate damage. Besides, it is easy to cause physical damage, including brain damage, and the personality can certainly appear damaged when there is physical damage, but that does not mean there is actual damage to any of the E-space mind pieces. However, as discussed in the “Brain” chapter, certain kinds of damaged personality may in fact be due to actual flaws or damage in one or more mind pieces. Perhaps when there is real E-space damage, the personality will be broken down and reconstructed.

With our discussion of the human personality so far, we are obviously assuming it survives the death of the physical body. Let’s consider in detail this question of survival. Of the four components of man, it is only obvious that the physical body does not survive. The other three components reside in E-space, so the dissolution of the A-space component will affect them only indirectly. What the loss of the physical body means to the E-space components, is a loss of purpose. Without cells, the cell controllers have nothing to do. It has already been suggested in the “Development” chapter that upon the destruction of its associated cell, the cell controller will self-destruct after a more-or-less short period of time.

There is abundant evidence that the cell controllers do survive the death of the physical body for a short, somewhat variable period of time. A very dense E-body typically stays intact after physical death. Unlike an E-body projection when the cells are still alive, there is no need to return to the physical body after death. It seems the first stage of after-death experience for the typical person is a continuous E-body projection. The average duration of this condition is uncertain, but it seems to be comparatively brief, perhaps just a few weeks or months. It may also be that the average duration of the after-death E-body projection shortens with physical age. In other words, the older one is when one dies, the shorter the E-body projection will be. There is some evidence for this inverse relationship, and it may be that the timing of the cell controllers’ self-destruction is somewhat dependent on their age. There is also evidence that a violent or sudden death will tend to prolong the E-body projection. If certain ghost stories are to be believed, then it would seem in rare instances that the E-body projection can last for more than a century. However, such a long continuance of the E-body projection would be very abnormal, and may actually never happen.

The after-death E-body projection is similar to the experience of those E-body projectionists who project while still physically alive. In general, it is an undesirable condition. The senses may seem impaired. Things may often look dark. The feeling of pain is a real possibility. There is the possibility of being attacked by other E-bodies, including non-human E-bodies. Instead of being a time of joy that they are still alive, many newly dead find their E-body projection a time of confusion and suffering. Fortunately, the E-body does not last for long.

One way or another, the typical person is soon freed from his E-body, which is doomed to disintegration. At this point, of course, the person isn’t as complete as he used to be. The soul, and probably all of the mind pieces, are what remain. After the E-body projection, the next logical stage is the lucid dream. It seems this lucid-dream stage can continue unbroken for many years, even centuries. Once again, there is no body to return to, so there is no reason for this projection to quickly end. The loss of the E-body is a great benefit because with it goes the possibility of pain. By contrast, there is no pain or suffering during the lucid-dream stage. Instead, one leads a benign and perhaps enjoyable existence.

The next stage that follows the lucid dream seems to be a reuse or recycling of that bundle of surviving soul and mind pieces which is experiencing the lucid dream. Considered from the standpoint of economy, it makes good sense to recycle human personalities rather than destroy the old ones and create new ones. We can assume Gaia is economical. Minds should be reused, as long as there is A-space life that needs them. The only obstacle to reuse would be all the old memories and any built-in development clocks and such. To be economical, Gaia would have designed the mind pieces for reuse, so memories can be erased and any development clocks can be reset. Certain mind pieces may also have switches that allow certain qualities within the mind pieces to be adjusted. Mental abilities, character, and such, may all be adjustable within limits.
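In programming terms, this recycling scheme is simple object reuse. The following Python sketch illustrates the economy of the idea; the class, its fields, and the reset operation are illustrative inventions, not claims about how Gaia actually restructures mind pieces.

    # Illustrative sketch: a mind piece designed for reuse between lifetimes.

    class MindPiece:
        def __init__(self, kind):
            self.kind = kind            # e.g. "language", "vision", "character"
            self.memories = []          # lifetime-specific contents
            self.development_clock = 0  # advances over a lifetime
            self.switches = {}          # adjustable qualities, within limits

        def reset_for_reuse(self, new_switch_settings=None):
            self.memories.clear()       # old memories erased
            self.development_clock = 0  # clock reset for a new childhood
            if new_switch_settings:
                self.switches.update(new_switch_settings)  # optional retuning

    # Reuse is cheap: the structure of the piece survives; only its
    # lifetime-specific contents are cleared and a few settings changed.
    piece = MindPiece("character")
    piece.memories.append("a lifetime of experience")
    piece.development_clock = 80
    piece.reset_for_reuse({"temperament": "calmer"})
    assert piece.memories == [] and piece.development_clock == 0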

Obviously, the subject of reuse raises the question of rebirth. As already noted, both Hinduism and Buddhism accept rebirth. Besides the evidence for rebirth, it just seems practical. Gaia is an engineer, and reuse of unworn parts is a good engineering practice. Somehow, the typical person will find himself being reused and back in a baby’s body. The old memories have been erased, development clocks have been reset, and perhaps some switch settings have been changed. However, one can assume the same soul is intact, since there is no reason to replace it with another soul. Such a replacement would be a waste of energy since one soul is as good as another, and they are all identical. As already stated, the personality resides in the mind pieces, not the soul, but the same soul will be experiencing the reused mind pieces.

One of the most pernicious beliefs about rebirth is that the individual is becoming perfected, and as part of the perfection process, must have the full range of human experience over many lifetimes. The advocates of this perfection notion will say that a rich man must be born a poor man, a man must be born as a woman and a woman as a man, a genius must be born an idiot, and so on. However, there is no evidence whatsoever to support this belief. It is pure fancy with perhaps a touch of malice and envy. The reuse of the same bundle of mind pieces and soul suggests that the pattern established in one lifetime is likely to be somewhat repeated in the next lifetime, because the same elements are involved, although details of the personality may be changed somewhat and the exact circumstances of the new life will be different.

Most men want to be reborn as men, and most women want to be reborn as women. Why shouldn’t these wishes be fulfilled? A pattern has already been established. After all, following an old path is easier than cutting a new one. A rich man in one life may not be rich the next, because few are rich and there is always great competition for wealth, but the typical self-made rich man certainly has a higher probability of being rich in the next life than does the typical poor man. More so than wealth, it is to be expected that an intelligent person in one life will have a comparable intelligence in the next life, although the details and emphasis of the intelligence may be different.

The idea of perfection makes sense when considered from a species standpoint. Natural selection will slowly perfect the human species in terms of its survival fitness, but this kind of perfection is just a fine-tuning. Perfection in general implies fixity, just as natural selection tends towards species fixity. How is the fixity of perfection to be reconciled with the suggested large swings and changes? An athlete doesn’t become perfect in his sport by having a leg broken, neither does an intelligent man become perfect by being an idiot, or a man become perfect by being a woman, or a hard-working, prosperous man become perfect by being impoverished. Thus, mixing the idea of perfection with large swings and changes, is contradictory.

While on the subject of rebirth, the notion of karma or deeds is worth considering. To what extent must one either suffer or enjoy the consequence of one’s actions in a previous life? To understand karma it is helpful to consider how the idea came about in the first place. Obviously, karma is argued for by analogy with the consequence of deeds in the current life. We all know what the likely consequences are of certain actions. For example, if one is hostile towards another person, then that other person is more likely to respond with hostility rather than friendliness. We all live under laws, and we know that if we break them, then some form of punishment is likely, or at least possible. Our actions, to a large extent, are guided by the results we expect. Karma is just an extension over time of the obvious action-causes-consequence happenings in our everyday lives.

However, karma has a great weakness that makes its importance minimal. The weakness of karma is the passage of time itself. As we know so well from our current lives, the more time that passes after a particular action, the less likely there will be any further consequence from that action. One cannot dismiss karma entirely, but its impact must typically be slight. The passage of too much time has cut off the consequence of past-life actions. The present is mostly concerned with the immediate past, not the distant past. This will be even more true once the memories are erased in preparation for the next life. Although karma is weak from the standpoint of logic, it is strong from the standpoint of religious expediency. Karma is often used as a weapon to beat religious devotees into submission. The devotee is taught that he must be good and obey the guru, or church, or whatever, or he will suffer for it in a future life. Threatening future suffering based on current actions is a common feature of Christianity, Islam, Buddhism, and Hinduism.
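The time-decay argument can be stated as a simple formula. Suppose, purely for illustration, that the expected consequence of a deed falls off exponentially with elapsed time (both the exponential form and the decay constant tau are assumptions, not anything measured):

    w(t) = w0 * exp(-t / tau)

where w0 is the initial weight of the deed’s consequence and tau is the time over which most of that weight dissipates. For t much larger than tau, as with deeds done in a previous lifetime, w(t) is negligible. Any monotonically decreasing w(t) yields the same conclusion; the exponential is merely the simplest choice.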

Although we have concluded that all souls are identical and devoid of personality, there is still the question of which A-space life-forms have souls. For example, does a cat have a soul, or a mouse, or a worm? To answer this question we must consider what practical value a soul has to an A-space species. How does the soul’s awareness enhance the survival of an A-space species?

Firstly, the soul is associated with the mind pieces and not the cell controllers. It seems reasonable to conclude that if an A-space species has no mind pieces, then it has no soul. All one-celled organisms, all plants and trees, and all brainless animals can be immediately ruled out as having souls. Only an animal with a brain can have one or more mind pieces. However, just having a few mind pieces doesn’t mean there would be any benefit from having a soul. As we know from our own experience, mind pieces are capable of doing very complex things without involving the awareness.

A characteristic of the soul is that it is many-channeled. The human soul is subjected to a lot of simultaneous stimulation. This use of the soul is a clue as to what the value of the soul is. The soul must help to tie together and regulate, in a cooperative and balanced fashion, the many different functions of the mind. This suggests there must be a minimum complexity before a soul could be of benefit.

The minds of insects, worms, and the like are probably too simple to need a soul for regulation and balance. Another consideration is the apparent size of the soul. With an apparent diameter of about two centimeters, a soul would be much too large to be accommodated in an insect or worm, or even a mouse. It seems likely that the head size of an animal is a valid consideration in determining whether an animal has a soul. Of course, one can’t be sure about the soul’s diameter at times other than a soul projection. Perhaps the event horizon shrinks as stimulation increases, but no such shrinking of the event horizon has been reported. Overall, it seems very unlikely that the smaller animals have souls. They lack both the mental complexity and the space.
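The size argument is easy to check with rough arithmetic: a sphere two centimeters in diameter has a volume of about 4.2 cubic centimeters. The following Python sketch compares that figure with order-of-magnitude cranial volumes; the cranial numbers are rough estimates supplied for illustration, not precise measurements.

    # Illustrative sketch: does a 2 cm soul fit inside a given head?

    import math

    soul_diameter_cm = 2.0
    soul_volume = (4.0 / 3.0) * math.pi * (soul_diameter_cm / 2.0) ** 3
    print(f"soul volume: about {soul_volume:.1f} cubic cm")  # about 4.2

    # Rough, illustrative cranial volumes in cubic centimeters.
    rough_cranial_volumes = {"mouse": 0.5, "cat": 30.0, "human": 1400.0}

    for animal, volume in rough_cranial_volumes.items():
        fits = volume > soul_volume
        print(f"{animal}: {volume} cubic cm -> soul fits: {fits}")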

Cats and dogs are often considered to have souls. This seems likely. Their heads are big enough and they show mental complexity somewhat comparable to our own. Besides cats and dogs, there are many large animals that are likely to have souls. Giraffes, horses, lions, buffalo, owls, dolphins, whales, elephants, chimpanzees, and so on, are all likely candidates for having souls. It is sometimes said that certain animals have a soulful look in their eyes. The presence of a soul may well give a certain pace and balance that will show in the animal’s appearance and actions.


footnotes

[21] The Thirteen Principal Upanishads, 2nd ed., p. 351.

[22] Ibid., p. 352.

[23] Ibid., p. 353.


Bibliography


END OF BOOK