

A Soliton and its owned Bions (Awareness and Mind)

These Intelligent Particles are how we Survive Death

12th edition

Copyright Table ©

2017 (12th ed.) 2006 (11th ed.) 2005 (10th ed.)
2004 (9th ed.) 2003 (8th ed.) 2002 (7th ed.)
2001 (6th ed.) 1999 (5th ed.) 1998 (4th ed.)
1996 (3rd ed.) 1994 (2nd ed.) 1993 (1st ed.)

by Kurt Johmann

Note: For the first eleven editions of this book
(1st ed. thru 11th ed. in the above Copyright Table),
the title of this book was The Computer Inside You

This Work is placed in the Public Domain

September 17, 2017: I, Kurt Johmann, the author and copyright owner, hereby place the entire text of A Soliton and its owned Bions (Awareness and Mind), 12th edition, in the public domain. I also place in the public domain the photo of myself as a 57-year-old man.

Brief Overview

This book proposes in detail an old idea: that the universe is a computed reality generated by an underlying network of computing elements. In particular, this book uses this reality model to explain the otherwise unexplained: ESP, afterlife, mind, UFOs and their occupants, organic development, and such.

About the Author

I, Kurt Johmann, was born November 16, 1955, in Elizabeth, New Jersey, USA (United States of America). I obtained a BA in computer science from Rutgers University in 1978. From 1978 to 1988 I worked first as a systems analyst and then as a PC software developer. I entered graduate school in August 1988. In December 1989 I received an MS, and in May 1992 a PhD, both in computer science from the University of Florida in Gainesville, Florida. I then returned to software development work, continuing such work up until the end of 2005, and also taking time as needed to work on the first ten editions of this book, and various other writings.

Beginning in early 2006 I had to start helping my parents at their home in Gainesville, Florida, because of their decline in various ways due to the infirmities of old age, so I retired from programming work to help them. After writing the 11th edition of this book in mid-2006, I did no further work on this book until late 2012, when I started work on the 12th edition, but I soon stopped because in 2013 I had a substantially increased workload caring for my dad during his final year (he died at home at the end of 2013; my mother had already died at home at the end of 2011). After my father’s death there were various estate matters to handle, and also my own retirement move to a different part of Florida. In June 2015 I resumed work on this book’s 12th edition. At the completion of this 12th edition on September 17, 2017, I am 61 years old.

Below is a photo (without the glasses that I normally wear) of myself, Kurt Johmann, when I was 57 years old, taken in February 2013. I had started work on the 12th edition of this book a few months before this photo was taken, and I had this photo taken with the intention of including it in the 12th edition. My tan is from the Florida sun:

photo of Kurt Johmann, at age 57


Mathematica® is a registered trademark of Wolfram Research, Inc.

Table of Contents

Preface
Introduction
1 The Computing-Element Reality Model
1.1 Constraints for any Reality Model
1.2 Overview of the Model
1.3 Components of the Model
1.4 Particles
1.5 Living Inside Computed Reality
1.6 Common Particles and Intelligent Particles
2 Biology and Bions
2.1 The Bion
2.2 Cell Movement
2.3 Cell Division
2.4 Generation of Sex Cells
2.5 Bions and Cell Division
2.6 Multicellular Development
3 The Brain and the Mind
3.1 Neurons
3.2 The Cerebral Cortex
3.3 Mental Mechanisms and Computers
3.4 Composition of the Computers
3.5 Memory
3.6 Learned Programs
3.7 The Mind
3.8 Identifier Blocks, the send() Statement, and Multicellular Development
3.8.1 Identifier Blocks
3.8.2 The Learned-Program send() Statement: Parameters
3.8.3 Coordinates in 3D Space, and Message Transmission thru 3D Space
Two different Message-Transmission Algorithms
A Message-Transmission Algorithm for Sending a Message to a Specific Computing Element
An Efficient Gravity Algorithm
Regarding the above Gravity Algorithm: Approximations and Efficiency
3.8.4 The Learned-Program send() Statement: the message_instance
3.8.5 The Learned-Program send() Statement: Algorithmic Details
Handling the Special Case of a Recipient Particle being Moved when the Message Arrives
The Sphere-Filling Message-Transmission Algorithm
Several Properties of this Sphere-Filling Message-Transmission Algorithm
3.8.6 Multicellular Development
Regarding the Particle Details returned by the various get_relative_location…() learned-program statements described in this Book
Avoid Unreasonable Assumptions when Designing Algorithms that will Run on Computing Elements
Timers and Keeping Track of Elapsed Time
3.8.7 The Learned-Program Statements for Seeing and Manipulating Physical Matter have a very Short Range
3.8.8 Bions Seeing and Manipulating Atoms and Molecules
3.8.9 How Cell-Controlling Bions Stay with their Cells
How does a Sleeping Cell-Controlling Bion stay with its Cell, and how does a Sleeping Bion in a Bion-Body stay with that Bion-Body
4 Experience and Experimentation
4.1 Psychic Phenomena
4.2 Obstacles to Observing Bions
4.3 Meditation
4.4 Effects of Om Meditation
4.5 The Kundalini Injury
5 Out-of-Body Travels
5.1 Internal Dreams and External Dreams
5.1.1 The Soliton Directory
5.1.2 External Dreams aka Lucid Dreams
5.2 Movement when Out-of-Body
5.2.1 Out-of-Body Movement during a Lucid Dream
5.2.2 Vision and Movement during my Bion-Body Projections
5.2.3 How One’s Projected Bion-Body Maintains its Human Shape
Moving my Projected Bion-Body’s Limbs
5.3 Lucid-Dream Projections ~ Oliver Fox
5.4 Bion-Body Projections ~ Sylvan Muldoon
6 Awareness and the Soliton
6.1 The Soliton
6.2 Solitonic Projections
6.3 The Afterlife
The Bion-Body Stage of the Afterlife
The Lucid-Dream Stage of the Afterlife
Transitioning from one Animal Type to a different Animal Type
6.3.1 Birds of a Feather, Flock Together
6.3.2 How a Mind Connects with a Brain before Birth
7 The Lamarckian Evolution of Organic Life
7.1 Evolution
7.2 Explanation by the Mathematics-Only Reality Model of the Evolution of Organic Life
7.3 Darwinism
7.4 Darwinism Fails the Probability Test
The First Self-Reproducing Bacterium
7.5 Darwinism Fails the Behe Test
7.6 Explanation by the Computing-Element Reality Model of the Evolution of Organic Life, and the Existence of the Caretaker Civilization
Learned Programs and Organic Life
8 Caretaker Activity
8.1 The UFO
8.2 The UFO according to Hill
My Analysis
8.3 The UFO Occupants
8.4 Identity of the UFO Occupants
8.5 Interstellar Travel
9 The Human Condition
9.1 The Age of Modern Man according to Cremo and Thompson
9.2 The Gender Basis of the Three Races
Some Additional Evidence for the Gender Basis of the Three Races
9.3 The Need for Sleep
9.4 A Brief Analysis of Christianity
9.5 Karma
9.6 Orgasm
9.7 Allocation Changes during Growth and Aging
10 A Brief Autobiography of myself Kurt Johmann
10.1 My own Relevant Experiences regarding Lucid-Dream Projections, Bion-Body Projections, Solitonic Projections, and the Kundalini Injury
10.1.1 My One Dense Bion-Body Projection
10.1.2 My Two Solitonic Projections
10.1.3 My Kundalini Injury
10.2 Motivation and Means for Writing this Book
10.3 Some Details of my Early Life
10.4 Some Details of my Later Life
Bibliography

Preface

At the time of Isaac Newton’s invention of the calculus in the 17th century, the mechanical clock was the most sophisticated machine known. The simplicity of the clock allowed its movements to be completely described with mathematics. Newton not only described the clock’s movements with mathematics, but also the movements of the planets and other astronomical bodies. Because of the success of the Newtonian method, a mathematics-based model of reality resulted.

In modern times, a much more sophisticated machine than the clock has appeared: the computer. A computer includes a clock but has much more, including programmability. Because of its programmability, the actions of a computer are arbitrarily complex. Assuming a complicated program, the actions of a computer cannot be described in any useful way with mathematics.

To keep pace with this advance from the clock to the computer, civilization should upgrade its thinking and adjust its model of reality accordingly. This book is an attempt to help smooth the transition from the old conception of reality—that allowed only mathematics to describe particles and their interactions—to a computer-based conception of reality.

Introduction

A reality model is a means for understanding the universe as a whole. Based on the reality model one accepts, one can classify things as either possible or impossible.

The reality model of 20th-century science is the mathematics-only reality model. This is a very restrictive reality model that rejects as impossible any particle whose interactions cannot be described with mathematical equations.

If one accepts the mathematics-only reality model, then there is no such thing as an afterlife, because according to that model a man only exists as the composite form of the simple mathematics-obeying common particles composing that man’s brain—and death is the permanent end of that composite form. For similar reasons the mathematics-only reality model denies and declares impossible many other psychic phenomena.

The approach taken in this book is to assume that deepest reality is computerized. Instead of, in effect, mathematics controlling the universe’s particles, computers control these particles. This is the computing-element reality model. This model is presented in detail in chapter 1.

With particles controlled by computers, particles can behave in complicated, intelligent ways. Thus, intelligent particles are a part of the computing-element reality model. And with intelligent particles, psychic phenomena, such as the afterlife, are easy to explain.

Of course, one can object to the existence of computers controlling the universe, because, compared to the mathematics-only reality model—which conveniently ignores questions about the mechanism behind its mathematics—the computing-element reality model adds complexity to the structure of deepest reality. However, this greater complexity is called for by both the scientific and other evidence presented in this book.

1 The Computing-Element Reality Model

This chapter presents the computing-element reality model. The chapter sections are:

1.1 Constraints for any Reality Model
1.2 Overview of the Model
1.3 Components of the Model
1.4 Particles
1.5 Living Inside Computed Reality
1.6 Common Particles and Intelligent Particles

1.1 Constraints for any Reality Model

The world is composed of particles. The visible objects that occupy the everyday world are aggregates of particles. This fact was known to the ancients, a consequence of seeing large objects break down into smaller ones.

Particles that are not composed of other particles are called elementary particles. Philosophically, one must grant the existence of elementary particles at some level, to avoid an infinite regress.

For the physics known as quantum mechanics, the old idea of the continuous motion of particles—and the smooth transition of a particle’s state to a different state—is replaced by discontinuous motion and discontinuous state changes. A particle moves in discrete steps (for example, the movement of an electron to a different orbital), and a particle’s state changes in discrete steps (for example, the change of a photon’s spin).

For the particles studied by physics, the state of a particle is the current value of each attribute of that particle. A few examples of particle attributes are position, velocity, and mass. For certain attributes, each possible value for that attribute has an associated probability: the probability that that particle’s state will change to that value. The mathematics of quantum mechanics allows computation of these probabilities, thereby predicting specific state changes.

Various physics experiments, such as the double-slit experiments done with electrons and also neutrons, contradict the old idea that a particle is self-existing independent of everything else. For the particles studied by physics, these experiments show that the existence of a particle, knowable only thru observation, is at least partly dependent on the structure of the observing system.

Other physics experiments, such as the EPR experiments that test Bell’s theorem, demonstrate that widely separated particles can simultaneously, synchronously change state. Given the distance between the particles and the extent to which the synchronous state changes are measured as being simultaneous, it appears necessary that an instantaneous much-faster-than-lightspeed communication is involved in coordinating these synchronous state changes for the widely separated particles.

In summary, physics places the following three constraints on any reality model of the universe:

  1. A particle moves in discrete steps. And a particle’s state changes in discrete steps.

    Thus, as a particle moves from some point A to some point B, that particle occupies, at most, only a finite number of different positions between those two points, instead of an infinite number of different positions.

    Similarly, as a particle changes state, from some state A to some state B, there are, at most, only a finite number of different in-between states, instead of an infinite number of different in-between states.

  2. Self-existing particles—that have a reality independent of everything else—do not exist.

  3. Instantaneous communication occurs.

    Regarding the actual speed of this instantaneous communication, it is at least 20 billion times the speed of light.[1]


footnotes

[1] The force of gravity is an example of instantaneous communication. Astronomer Tom Van Flandern computes a lower bound on the speed of gravity of not less than 20 billion times the speed of light (2×10^10 c). (Van Flandern, Tom. “The speed of gravity—What the experiments say.” Physics Letters A, volume 250 (21 December 1998): pp. 1–11)

Van Flandern’s article also debunks both Special Relativity and General Relativity, which are two physical theories that have been dominant in the 20th century, more for political reasons than reasons of merit.

Similarly, the Big Bang is a physical theory that has been dominant in the 20th century, for political reasons instead of reasons of merit. See my essay Big-Bang Bunk.

Note that the computing-element reality model that is detailed in the remainder of this chapter is not dependent on the truth or falsity of any particular physical theory, because any physical theory that is useful can be computed (section 1.5).

Although the computing-element reality model does not depend on specific physical theories, the model can be helpful in constructing physical theories. For example, consider the fact that time slows for an object as that object moves faster. Given the computing-element reality model, one can suggest, for example, that the faster an object moves thru the array of computing elements (section 1.2), the more of the available computing time is devoted to moving that object, and the less computing time is available for interacting that object’s particles with each other and with the outside environment. Thus, if the object is a clock, then that clock runs more slowly, because all of that clock’s particles are moving more slowly relative to each other.
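
As a toy illustration of this suggestion (only a toy, with arbitrary numbers, and not the actual quantitative relation), the following C sketch gives each computing step a fixed budget of computing time, pays the movement cost first, and lets the object’s internal clock advance only with whatever computing time remains:

    #include <stdio.h>

    /* Toy model: per computing step, a fixed budget of computing time is
       split between moving an object and interacting its particles. */
    int main(void)
    {
        const double budget = 1.0;   /* total computing time per step (arbitrary units) */
        double speed;
        for (speed = 0.0; speed <= 1.0; speed += 0.25) {
            double movement_cost = speed;                /* faster object: more time spent moving it */
            double clock_rate = budget - movement_cost;  /* time left for internal interactions */
            printf("speed %.2f -> internal clock rate %.2f\n", speed, clock_rate);
        }
        return 0;
    }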

In general, the computing-element reality model provides a framework in which physics can use algorithms to explain physical phenomena, instead of limiting itself to using only mathematics.


1.2 Overview of the Model

The computing-element reality model states that the universe’s particles are controlled by computers. Specifically, the computing-element reality model states that the universe is a vast, space-filling, three-dimensional array of tiny, identical, computing elements.[2]

A computing element is a self-contained computer, with its own memory. Each computing element can communicate with its adjacent computing elements, and each computing element runs its own copy of the same large and complex program—called the computing-element program.

Each elementary particle in the universe exists only as a block of information that is stored as data in the memory of a computing element. Thus, all particles are both manipulated as data and moved about as data by these computing elements. Consequently, the reality that people experience is a computer-generated, computed reality.[3],[4]

In our human world with its man-made physical computers, thinking about the computing elements is necessarily influenced by what we know about those physical computers. However, the actual composition and history of the computing elements, in terms of what they are made of and how they came into being, is necessarily unknowable to us, because these computing elements that generate our reality are not inside that generated reality with us, which means that we cannot directly—with or without our scientific tools and instruments—see and examine these computing elements. Nevertheless, knowing about physical computers, and reasoning by analogy, one can say that a computing element has a processor and memory and can run programs stored in its memory, but one cannot say what any of these components in a computing element are made of, or how they came into being.


footnotes

[2] One can ask how these computing elements came into existence, but this line of questioning faces the problem of infinite regress: if one answers the question as to what caused these computing elements to come into existence, then what caused that cause, and so on. At some point a reality model must declare something as bedrock for which causation is not sought. For the mathematics-only reality model its bedrock is mathematics; for the computing-element reality model its bedrock is the computing element.

A related line of questioning asks what existed before the universe, and what exists outside the universe. For these two questions the term universe includes the bedrock of whatever reality model one chooses. Both questions ask, in effect, what lies outside the containing framework of reality that is defined by one’s given reality model. The first question assumes that something lies outside in terms of time, and the second question assumes that something lies outside in terms of space. And both questions implicitly suggest that whatever lies outside in terms of time or space may be a fundamentally different reality model, because if it is just more of the same reality model, then why ask the question? However, because we cannot see or think apart from whatever actual underlying reality model gives us our existence, speculating about alternative reality models that may exist elsewhere in time or space is just imagination and guesswork with no practical value.

[3] Thruout the remainder of this book the word particle, unless stated or implied otherwise, denotes an elementary particle. An elementary particle is a particle that is not composed of other particles. In physics, prime examples of elementary particles are electrons, quarks, and photons. Also, the intelligent particles—bions and solitons, which are described later in this book—are elementary particles.

[4] The three-dimensional array of computing elements is, in effect, the universe and space itself. However, except in imagination it is not possible for anyone to see—with or without instruments—any part of this array of computing elements, for the following reason: mankind and its instruments are composed of particles, and particles are data stored in computing elements; being only an effect of those computing elements, those particles cannot directly probe those computing elements.


1.3 Components of the Model

Today, computers are commonplace and the basics of programs and computers are widely known. Given the hypothesized computing elements that lie at the deepest level of the universe, overall complexity is minimized by assuming the following:

Among other things, the computing-element program includes code that supports message transmission from a computing element to its adjacent computing elements, allowing, in effect, messages to travel thru 3D space. Section 3.8 covers messaging in detail. Also, assuming that gravity is communicated by messages, and assuming that Tom Van Flandern is correct that the speed of gravity is at least 20 billion times the speed of light (section 1.1), then the messages that result in gravity are probably moving thru 3D space at a speed that is at least 20 billion times the speed of light.

Regarding the shape and spacing of the computing elements, this question is unimportant: whatever the answer about shape and spacing might be, there is no obvious impact on any other question of interest. From the standpoint of what is esthetically pleasing, one can imagine that the computing elements are cubes that are packed together without intervening space.

Regarding the size of the computing elements, the required complexity of the computing-element program can be reduced by reducing the maximum number of particles that a computing element simultaneously stores and manipulates in its memory. In this regard the computing-element program is most simplified if that maximum number is one. Given this maximum of one, if one then assumes that no two particles can be closer than 10^-16 centimeters apart—and consequently that each computing element is a cube 10^-16 centimeters wide—then each cubic centimeter of space contains 10^48 computing elements.[5] The value of 10^-16 centimeters is used, because this is an upper bound on the size of an electron, which is an elementary particle.

Regarding computing-element processing speed, it’s possible to compute lower bounds by making a few assumptions: For example, assume a computing element only needs to process a total of 10,000 program instructions to determine that it should transfer an information block to an adjacent computing element. In addition, assume that this information block represents a particle moving at lightspeed, and the distance to be covered is 10^-16 centimeters. With these assumptions there are about 10^-26 seconds for the transfer of the information block to take place, and this is all the time that the computing element has to process those 10,000 instructions, so the MIPS rating of each computing element is at least 10^24 MIPS (millions of instructions per second). For comparison, the first edition of this book was composed on a personal computer that had an 8-MIPS microprocessor.
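
For readers who want to check this arithmetic, here is a short C program that reproduces both lower-bound calculations of this section; the element width and the instruction count per transfer are the assumed values already stated above:

    #include <stdio.h>

    int main(void)
    {
        double element_width_cm = 1e-16;   /* assumed width of one cubic computing element */
        double per_cm = 1.0 / element_width_cm;     /* elements along one centimeter */
        double per_cc = per_cm * per_cm * per_cm;   /* elements per cubic centimeter: 10^48 */

        double lightspeed_cm_per_s = 3e10;          /* speed of light, in cm/sec */
        double transfer_time_s = element_width_cm / lightspeed_cm_per_s;  /* about 10^-26 sec */
        double instructions = 1e4;                  /* assumed instructions per transfer */
        double mips = (instructions / transfer_time_s) / 1e6;

        printf("computing elements per cubic centimeter: %g\n", per_cc);
        printf("time available for one transfer: %g seconds\n", transfer_time_s);
        printf("required processing speed: %g MIPS\n", mips);
        return 0;
    }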


footnotes

[5] In this book very large numbers and very small numbers are given in scientific notation. The exponent is the number of terms in a product of tens. A negative exponent means that 1 is divided by that product of tens. For example, 10^-16 is equivalent to (1 ÷ 10,000,000,000,000,000) which is 0.0000000000000001; and, for example, 3×10^8 is equivalent to 300,000,000.


1.4 Particles

Regarding the first two constraints that physics places on any reality model of the universe (section 1.1):

  1. A particle moves in discrete steps. And a particle’s state changes in discrete steps.

    This is a consequence of the computing-element reality model, given the small size of the computing element, and given the finite resources of the computing element. These finite resources include such things as a finite processing speed, a finite memory, and a finite register size.

    Computing an infinity of different positions, or an infinity of different states, requires an infinity of time when the processing speed is finite. Thus, in the computing-element reality model, nothing is computed to an infinite extent. Everything is finite and discrete.

  2. Self-existing particles—that have a reality independent of everything else—do not exist.

    This is a consequence of the computing-element reality model, given that particles, being data, cannot exist apart from the computing elements that both store and manipulate that data.

A particle in the computing-element reality model exists only as a block of information stored as data in the memory of a computing element. A particle’s information block is the current, complete representation of that particle in the memory of whichever computing element currently holds that particle. A particle’s state information identifies the particle’s type and, depending on that type, includes a group of variables, with every particle of that type having a value set for each of those variables. For each particle type, its state information has a fixed format that is, in effect, defined by the computing-element program. For simplicity, one can assume that a particle’s state information is at the beginning of that particle’s information block.
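
As a concrete sketch of this layout, consider the following C fragment. The field names, the chosen attributes, and the block size are only illustrative assumptions; the text specifies only that a fixed-format block of state information comes first, followed by the rest of the information block:

    /* Illustrative layout of a particle's information block; all field
       names and sizes are assumptions made for this sketch. */
    typedef struct {
        int particle_type;     /* identifies the particle's type */
        double position[3];    /* example attribute: position */
        double velocity[3];    /* example attribute: velocity */
        double mass;           /* example attribute: mass */
    } state_information;

    typedef struct {
        state_information state;   /* fixed-format state information, at the start */
        unsigned char rest[4080];  /* remainder of the block (little used by a common particle) */
    } information_block;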

1.5 Living Inside Computed Reality

In effect, the computing-element reality model explains personally experienced reality as a computer-generated, computed reality. Similarly, modern computers are often used to generate a computed reality for game players. However, there is an important difference between a computed reality generated by a modern computer and the ongoing computed reality generated by the computing elements. From a personal perspective, the computed reality generated by the computing elements is reality itself; the two are identical. Put another way, one inhabits that computed reality; it is one’s reality.

For the last few centuries scientists have often remarked and puzzled about the fact that so much of the world can be described with mathematics. Physics textbooks are typically littered with equations that wrap up physical relationships in nice neat formulas. Why is there such a close relationship between mathematics and the workings of the world? This question is frequently asked.

Mathematics is, in effect, the product of computation. At its base, mathematics is counting (numbers are counts): a simple algorithm. For a reality that flows directly from an underlying computation layer—the essence of the computing-element reality model—mathematics is a natural part of that reality. A finite reality that results from finite computations is going to have relationships and patterns within it that can be quantified by equations.

Note that the high degree of order and structure in our reality is a direct reflection of the high degree of order and structure in the computing-element program. To help make this clear, imagine a simple reality-generating program that generates, in effect, nothing but noise: a sequence of random numbers. For that kind of reality, even though it is finite, the only relationship or pattern within that reality that applies over a wide area is the trivial one regarding its randomness. Thus, for example, in that reality you will not find the relationships and patterns described by the equations of our physics, such as, for example, Newton’s laws.[6]

Regarding what the computing-element reality model allows as possible within the universe: Because all the equations of physics describing particle interactions can be computed, either exactly or approximately, everything allowed by the mathematics-only reality model is also allowed by the computing-element reality model.[7]

The mathematics-only reality model disallows particles whose interactions cannot be expressed or explained with equations. By moving to the computing-element reality model, this limitation of the mathematics-only reality model is avoided.


footnotes

[6] For a formal treatment of the relationship between a program and its output when given the order and structure of that output, see, for example:

Chaitin, Gregory. “Information-Theoretic Computational Complexity.” In New Directions in the Philosophy of Mathematics, Thomas Tymoczko, ed. Princeton University Press, Princeton, 1998.

[7] Equations that cannot be computed are useless to physics, because they cannot be validated. For physics, validation requires computed numbers that can be compared with measurements made by experiment.


1.6 Common Particles and Intelligent Particles

A programmed computer can behave in ways that are considered intelligent. In computer science, the Turing Hypothesis states that all intelligence can be reduced to a single program, running on a simple computer, and written in a simple language. The universe contains at least one example of intelligence: ourselves. The computing-element reality model offers an easy explanation for this intelligence, because all intelligence in the universe can spring from the computing elements and their computing-element program.

At this point one can make the distinction between two classes of particles: common particles and intelligent particles. Classify all the particles of physics as common particles. Prime examples of common particles are electrons, photons, and quarks. In general, a common particle is a particle with simple state information, consisting only of attribute values. This simplicity of the state information allows the interactions between common particles to be expressed with mathematical equations. This satisfies the requirement of the mathematics-only reality model, so both models allow common particles.

Besides common particles, the computing-element reality model allows the existence of intelligent particles. In general, an intelligent particle is a particle whose information block includes much more than just simple state information and associated data, if any. Instead, a typical intelligent particle’s information block includes learned programs (section 3.6) and data stored and/or used by those learned programs. Regarding the state information of an intelligent particle, one can assume that, among other things, it includes a pointer to a linked list of the learned programs that that intelligent particle currently has in its information block.
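
Continuing the sketch from section 1.4, an intelligent particle’s state information could point to a linked list of learned programs, along the following lines; again, every name here is an illustrative assumption, not something specified by the model:

    /* Illustrative sketch: an intelligent particle's learned programs kept
       as a linked list, reachable from its state information. */
    typedef struct learned_program {
        struct learned_program *next;  /* next learned program in the list, or NULL */
        unsigned int code_size;        /* size of this learned program's code */
        unsigned char *code;           /* the learned program itself */
        unsigned char *data;           /* data stored and/or used by this learned program */
    } learned_program;

    typedef struct {
        int particle_type;             /* identifies this intelligent particle's type */
        learned_program *programs;     /* head of this particle's learned-program list */
        /* ... whatever other state-information variables this particle type has ... */
    } intelligent_state_information;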

Only an intelligent particle can have learned programs, and, in general, an intelligent particle’s learned programs are a major factor that determines how that intelligent particle interacts with other particles. Because different intelligent particles can have different learned programs, and the learned programs themselves can be very complex, expressing with mathematical equations the interactions involving intelligent particles is impossible. This explains why intelligent particles are absent from the mathematics-only reality model.

Regarding the movement of a particle thru 3D space, this movement happens in finite steps. Each step is done by simply copying that particle’s information block from the computing element that currently holds that particle to an adjacent computing element, which becomes the new holder of that particle; the computing element that no longer holds that particle then, in effect, deletes that particle’s information block from its memory.
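
A minimal C sketch of this movement step follows. The structure and the fixed allocation size are assumptions made for the sketch (the next paragraph discusses that same-sized allocation):

    #include <string.h>

    #define BLOCK_BYTES 4096   /* assumed size of each element's particle-block allocation */

    typedef struct {
        int holds_particle;    /* 1 if this element currently holds a particle */
        size_t block_size;     /* bytes currently used by the held information block */
        unsigned char particle_block[BLOCK_BYTES];
    } computing_element;

    /* One movement step: copy the information block to an adjacent element,
       which becomes the new holder, then delete the original copy. */
    void move_particle(computing_element *from, computing_element *to)
    {
        memcpy(to->particle_block, from->particle_block, from->block_size);
        to->block_size = from->block_size;
        to->holds_particle = 1;

        from->holds_particle = 0;   /* in effect, delete the block from the old holder */
        from->block_size = 0;
    }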

Regarding the organization of a computing element’s memory, one can assume that each computing element has the same amount of internal memory, and each computing element allocates the same part of its internal memory for holding a particle’s information block, and this allocation has the same size in each computing element (by “same size” is meant the same number of bytes or whatever unit of memory is used by computing elements). The size of this memory allocation for holding a particle’s information block is a limit on the size of any particle’s information block. Because the size of a common particle’s information block is tiny compared to the size of a typical intelligent particle’s information block, in practice only an intelligent particle can, in effect, grow the current size of its information block so that it is using all or nearly all of the available memory allocated for holding a particle’s information block.

There are two different kinds of intelligent particles, bions and solitons (described later in this book), and references in this book to a bion’s memory, or to a soliton’s memory, are always implicitly referring to that same-sized memory allocation for holding a particle’s information block that each computing element has. For example, storing data—aka saving data or writing data—in a bion’s memory means that that data is written into the memory of whichever computing element currently holds that bion, becoming a part of that bion’s current information block. In general, one can assume that the computing-element program, which is like an operating system with regard to learned programs, manages the memory in an intelligent particle’s information block, so that, for example, a bion’s learned programs can’t write over and corrupt that bion’s state information, and can’t write over and corrupt any of that bion’s learned programs.

For a computing element holding a common particle, that computing element can run the part of its computing-element program that determines how that type of particle will interact, if at all, with whatever other common particles, if any, are held by nearby computing elements. The common particles of interest in this surrounding environment can be determined by that computing element sending a short-range message to those nearby computing elements, asking, in effect, if they are holding any particles of a type that can interact with its held particle, and, if so, to send back relevant details regarding those held particles. Then, with that received information, that computing element can interact its held common particle with one or more of those nearby common particles. In general, the actual size of the neighborhood examined for the presence of common particles by a computing element depends on the type of common particle it is holding and/or that held particle’s state information.
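
The following C sketch shows the shape of this query-and-interact step for a single held common particle. The reply structure, the interaction test, and the neighborhood size are all assumptions made just for this sketch:

    #include <stdio.h>

    typedef struct {
        int held;            /* 1 if this nearby element reports a held particle */
        int particle_type;   /* type of that held particle */
    } neighbor_reply;

    /* Placeholder rule for the sketch: here, only like types interact. */
    static int can_interact(int type_a, int type_b)
    {
        return type_a == type_b;
    }

    /* Scan the replies from nearby elements and interact the held particle
       with each reported particle of an interactable type (here, just count). */
    static int interact_with_neighborhood(int held_type,
                                          const neighbor_reply *replies, int n)
    {
        int interactions = 0;
        for (int i = 0; i < n; i++) {
            if (replies[i].held && can_interact(held_type, replies[i].particle_type))
                interactions++;   /* real code would apply that particle type's physics here */
        }
        return interactions;
    }

    int main(void)
    {
        neighbor_reply replies[4] = { {1, 7}, {0, 0}, {1, 7}, {1, 9} };
        printf("interactions: %d\n", interact_with_neighborhood(7, replies, 4));
        return 0;
    }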

2 Biology and Bions

This chapter presents some of the evidence that each cell—in this book, the word cell means an organic living cell—is inhabited and controlled by an intelligent particle that makes that cell alive. The chapter sections are:

2.1 The Bion
2.2 Cell Movement
2.3 Cell Division
2.4 Generation of Sex Cells
2.5 Bions and Cell Division
2.6 Multicellular Development

2.1 The Bion

The bion is an intelligent particle that has no associated awareness.[8] Assume there is one bion associated with each cell. For any bion, its association, if any, with cells and cellular activity depends on the details of its learned programs (section 3.6).[9] Depending on its learned programs, a bion can interact with both intelligent particles and common particles.


footnotes

[8] By “no associated awareness” for the bion, is meant that the bion is always an unconscious particle (likewise, common particles are always unconscious particles). In the reality model presented in this book, the only intelligent particles that are conscious when awake are the intelligent-particle solitons which are described later in this book (sleep for an intelligent particle is explained in section 9.3; common particles do not sleep).

[9] The word bion is a coined word which I made up as follows: bi from the word biology, and the on suffix to denote a particle. Most of the bions active in our physical human world are directly involved with cells, hence the reason I incorporated the word biology into the name I chose for these unconscious intelligent particles. However, for those bions that compose our human minds, none of those bions have any of the learned programs for making cells alive (see section 3.7). Another example of bions that have none of the learned programs for making cells alive are the bions that compose the bion bodies of the Caretakers (section 7.6).


2.2 Cell Movement

The ability to move either toward or away from an increasing chemical concentration is a coordinated activity that many single-cell organisms can do. Single-cell animals and bacteria typically have some mechanical means of movement. Some bacteria use long external whip-like filaments called flagella. Flagella are rotated by a molecular motor to cause propulsion thru water. The larger single-cell animals may use flagella similar to bacteria, or they may have rows of short filaments called cilia, which work like oars, or they may move about as amebas do. Amebas move by extruding themselves in the direction they want to go.

The Escherichia coli bacterium has a standard pattern of movement when searching for food: it moves in a straight line for a while, then it stops and turns a bit, and then continues moving in a straight line again. This pattern of movement is followed until the presence of food is detected. The bacterium can detect molecules in the water that indicate the presence of food. When the bacterium moves in a straight line, it continues longer in that direction if the concentration of these molecules is increasing. Conversely, if the concentration is decreasing, it stops its movement sooner, and changes direction. Eventually this strategy gets the bacterium to a nearby food source.

Amebas that live in soil feed on bacteria. One might not think that bacteria leave signs of their presence in the surrounding water, but they do. This happens because bacteria make small molecules, such as cyclic AMP and folic acid. There is always some leakage of these molecules into the surrounding water thru the cell membrane. Amebas can move in the direction of increasing concentration of these molecules, and thereby find nearby bacteria. Amebas can also react to the concentration of molecules that identify the presence of other amebas. The amebas themselves leave telltale molecules in the water, and amebas move in a direction of decreasing concentration of these molecules, away from each other.

The ability of a cell to follow a chemical concentration gradient is hard to explain using chemistry alone. The easy part is the actual detection of a molecule. A cell can have receptors on its outer membrane that react when contacted by specific molecules. The other easy part is the means of cell movement. Either flagella, or cilia, or self-extrusion is used. However, the hard part is to explain the control mechanism that lies between the receptors and the means of movement.

In the ameba, one might suggest that wherever a receptor on the cell surface is stimulated by the molecule to be detected, then there is an extrusion of the ameba at that point. This kind of mechanism is a simple reflexive one. However, this reflex mechanism is not reliable. Surrounding the cell at any one time could be many molecules to be detected. This would cause the cell to move in many different directions at once. And this reflex mechanism is further complicated by the need to move in the opposite direction from other amebas. This would mean that a stimulated receptor at one end of the cell would have to trigger an extrusion of the cell at the opposite end.

A much more reliable mechanism to follow a chemical concentration gradient is one that takes measurements of the concentration over time. For example, during each time interval—of some predetermined fixed length, such as during each second—the moving cell could count how many molecules were detected by its receptors. If the count is decreasing over time, then the cell is probably moving away from the source. Conversely, if the count is increasing over time, then the cell is probably moving toward the source. Using this information, the cell can change its direction of movement as needed.

Unlike the reflex mechanism, there is no doubt that this count-over-time mechanism would work. However, this count-over-time mechanism requires a clock and a memory, and a means of comparing the counts stored in memory. This sounds like a computer, but such a computer is extremely difficult to design as a chemical mechanism, and no one has done it. On the other hand, the bion, an intelligent particle, can provide these services.
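
To make the count-over-time mechanism concrete, here is a small C sketch of it, reduced to one dimension. The clock is the fixed interval, the memory is the saved count, and the comparison decides whether to keep or reverse the current direction; all names and numbers are illustrative:

    #include <stdio.h>

    typedef struct {
        int previous_count;   /* receptor hits counted during the previous interval */
        int heading;          /* +1 or -1 along the gradient, in this one-dimensional toy */
    } cell_state;

    /* At the end of each fixed time interval, compare the new count with the
       remembered count; if the concentration is falling, reverse direction. */
    void end_of_interval(cell_state *cell, int current_count)
    {
        if (current_count < cell->previous_count)
            cell->heading = -cell->heading;
        cell->previous_count = current_count;
    }

    int main(void)
    {
        cell_state cell = { 0, +1 };
        int counts[6] = { 3, 5, 4, 2, 6, 9 };   /* made-up receptor counts per interval */
        for (int i = 0; i < 6; i++) {
            end_of_interval(&cell, counts[i]);
            printf("interval %d: count %d, heading %+d\n", i, counts[i], cell.heading);
        }
        return 0;
    }

Simple as this sketch is, note that it still needs a timer, a stored count, and a comparison, which is the point of the paragraph above: these are computer services, and the bion can supply them.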

2.3 Cell Division

All cells reproduce by dividing: one cell becomes two. When a cell divides, it divides roughly in half. The division of water and proteins between the dividing cell halves does not have to be exactly even. Instead, a roughly even distribution of the cellular material is acceptable. However, there is one important exception: the cell’s DNA, which is known to code the structure of individual proteins, and may contain other kinds of information. The DNA of a cell is like a single massive book. This book cannot be torn in half and roughly distributed between the two dividing cell halves. Instead, each new cell needs its own complete copy. Therefore, before a cell can divide, it must duplicate all its DNA, and each of the two new cells must receive a complete copy of the original DNA.

All multicellular organisms are made out of eucaryotic cells. Eucaryotic cells are characterized by having a well-defined cellular nucleus that contains all the cell’s DNA. Division for eucaryotic cells has three main steps. In the first step all the DNA is duplicated, and the chromosomes condense into clearly distinct and separate groupings of DNA. For a particular type of cell, such as a human cell, there is a fixed and unchanging number of condensed chromosomes formed; ordinary human cells always form 46 condensed chromosomes before dividing.

During the normal life of a cell, the chromosomes in the nucleus are sufficiently decondensed so that they are not easily seen as being separate from each other. During cell division, each condensed chromosome that forms—hereafter simply referred to as a chromosome—consists of two equal-length strands that are joined. The place where the two strands are joined is called a centromere. Each chromosome strand consists mostly of a long DNA molecule wrapped helically around specialized proteins called histones. For each chromosome, each of the two strands is a duplicate of the other, coming from the preceding duplication of DNA. For a human cell there are a total of 92 strands comprising 46 chromosomes. The 46 chromosomes comprise two copies of all the information coded in the cell’s DNA. One copy will go to one half of the dividing cell, and the other copy will go to the other half.

The second step of cell division is the actual distribution of the chromosomal DNA between the two halves of the cell. The membrane of the nucleus disintegrates, and simultaneously a spindle forms. The spindle is composed of microtubules, which are long, thin rods made of chained proteins. The spindle can have several thousand of these microtubules. Many of the microtubules extend from one half of the cell to the chromosomes, and a roughly equal number of microtubules extend from the opposite half of the cell to the chromosomes. Each chromosome’s centromere becomes attached to microtubules from both halves of the cell.

When the spindle is complete, and all the centromeres are attached to microtubules, the chromosomes are then aligned together. The alignment places all the centromeres in a plane that is oriented at a right angle to the spindle. The chromosomes are at their maximum contraction. All the DNA is tightly bound so that none will break off during the actual separation of each chromosome. The separation itself is caused by a shortening of the microtubules. In addition, in some cases the separation is caused by the two bundles of microtubules moving away from each other. The centromere, which held together the two strands of each chromosome, is pulled apart into two pieces. One piece of the centromere, attached to one chromosome strand, is pulled into one half of the cell. And the other centromere piece, attached to the other chromosome strand, is pulled into the opposite half of the cell. Thus, the DNA is equally divided between the two halves of the dividing cell.

The third step of cell division involves the construction of new membranes. Once the divided DNA has reached the two respective cell halves, a normal-looking nucleus forms in each cell half: at least some of the spindle’s microtubules first disintegrate, a new nuclear membrane assembles around the DNA, and the chromosomes become decondensed within the new nucleus. Once the two new nuclei are established, a new cell membrane is built in the middle of the cell, dividing the cell in two. Depending on the type of cell, the new cell membrane may be a shared membrane. Or the new cell membrane may be two separate cell membranes, with each membrane facing the other. Once the membranes are completed, and the two new cells are truly divided, the remains of the spindle disintegrate.

2.4 Generation of Sex Cells

The dividing of eucaryotic cells is impressive in its precision and complexity. However, there is a special kind of cell division used to make the sex cells of most higher organisms, including man. This special division process is more complex than ordinary cell division. For organisms that use this process, each ordinary cell (ordinary in the sense of not being a sex cell) has half its total DNA from the organism’s mother, and the other half from the organism’s father. Thus, within the cell are two collections of DNA. One collection originated from the mother, and the other collection originated from the father. Instead of this DNA from the two origins being mixed, the separateness of the two collections is maintained within the cell. When the condensed chromosomes form during ordinary cell division, half the chromosomes contain all the DNA that was passed by the mother, and the other half contain all the DNA that was passed by the father. In any particular chromosome, all the DNA came from only one parent, either the mother or the father.

Regarding genetic inheritance, particulate inheritance requires that each inheritable characteristic be represented by an even number of genes.[10] Genes are specific sections of an organism’s DNA. For any given characteristic encoded in the DNA, half the genes come from the mother, and the other half come from the father. For example, if the mother’s DNA contribution has a gene for making hemoglobin, then there is a gene for making hemoglobin in the father’s DNA contribution. The actual detail of the two hemoglobin genes may differ, but for every gene in the mother’s contribution, there is a corresponding gene in the father’s contribution. Thus, the DNA from the mother is always a rough copy of the DNA from the father, and vice versa. The only difference is in the detail of the individual genes.

Sex cells are made four-at-a-time from an original cell.[11] The original cell divides once, and then the two newly formed cells each divide, producing the final four sex cells. The first step for the original cell is a single duplication of all its DNA. Then, ultimately, this DNA is evenly distributed among each resultant sex cell, giving each sex cell only half the DNA possessed by an ordinary nondividing cell. Then, when the male sex cell combines with the female sex cell, the then-fertilized egg has the normal amount of DNA for a nondividing cell.

The whole purpose of sexual reproduction is to provide a controlled variability of an organism’s characteristics, for those characteristics that are represented in that organism’s DNA. Differences between individuals of the same species give natural selection something to work with—allowing, within the limits of that variability, an optimization of that species to its environment.[12] To help accomplish this variability, there is a mixed selection in the sex cell of the DNA that came from the two parents. However, the DNA that goes into a particular sex cell cannot be a random selection from all the available DNA. Instead, the DNA in the sex cell must be complete, in the sense that each characteristic specified by that organism’s DNA is specified in that sex cell, and the number of genes used to specify each such characteristic is only half the number of genes present for that characteristic in ordinary nondividing cells. Also, the order of the genes on the DNA must remain the same as it was originally—conforming to the DNA format for that species.

The mixing of DNA that satisfies the above constraints is partially accomplished by randomly choosing from the four strands of each functionally equivalent pair of chromosomes. Recall that a condensed chromosome consists of two identical strands joined by a centromere. For each chromosome that originated from the mother, there is a corresponding chromosome with the same genes that originated from the father. These two chromosomes together are a functionally equivalent pair. One of the chromosomes from each functionally equivalent pair of chromosomes is split between two of the sex cells. And the other chromosome from that pair is split between the other two sex cells. In addition to this mixing method, it would improve the overall variability if at least some corresponding sequences of genes on different chromosomes are exchanged with each other. And this exchange method is in fact used. Thus, a random exchanging of corresponding sequences of genes within a functionally equivalent pair of chromosomes, followed by a random choosing of a chromosome strand from each functionally equivalent pair of chromosomes, provides good overall variability, and preserves the DNA format for that species.
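
The two mixing methods just described can be modeled in a few lines of C. In this sketch a chromosome is reduced to an array of gene variants, one random exchange of a corresponding run of genes stands in for the several exchanges that occur on average, and a coin flip stands in for the random choosing; everything here is an illustrative simplification:

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define GENES 8

    /* Exchange one randomly chosen run of corresponding genes between the
       two chromosomes of a functionally equivalent pair. */
    static void exchange_genes(int mother[GENES], int father[GENES])
    {
        int start = rand() % GENES;
        int len = 1 + rand() % (GENES - start);
        for (int i = start; i < start + len; i++) {
            int tmp = mother[i];
            mother[i] = father[i];
            father[i] = tmp;
        }
    }

    int main(void)
    {
        srand((unsigned)time(NULL));
        int mother[GENES] = { 1, 1, 1, 1, 1, 1, 1, 1 };  /* gene variants from the mother */
        int father[GENES] = { 2, 2, 2, 2, 2, 2, 2, 2 };  /* gene variants from the father */

        exchange_genes(mother, father);                /* the exchange method */
        int *chosen = (rand() % 2) ? mother : father;  /* the random-choosing method */

        for (int i = 0; i < GENES; i++)   /* gene order, i.e. the DNA format, is preserved */
            printf("%d ", chosen[i]);
        printf("\n");
        return 0;
    }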

Following are the details of how the sex cells get their DNA: The original cell, as already stated, duplicates all its DNA. The same number of condensed chromosomes are formed as during ordinary cell division. However, these chromosomes are much longer and thinner than chromosomes formed during ordinary cell division. These chromosomes are stretched out, so as to make the exchanging of sequences of genes easier.

Once these condensed, stretched-out chromosomes are formed, each chromosome, in effect, seeks out the other functionally equivalent chromosome and lines up with it, so that corresponding sequences of genes are directly across from each other. Then, on average, for each functionally equivalent pair of chromosomes, several random exchanges of corresponding sequences of genes take place.

After the exchanging is done, the next step has the paired chromosomes move away somewhat from each other. However, they remain connected in one or more places. Also, the chromosomes themselves undergo contraction, losing their stretched-out long-and-thin appearance. As the chromosomes contract, the nuclear membrane disintegrates, and a spindle forms. Each connected pair of contracted chromosomes lines up so that one centromere is closer to one end of the spindle, and the other centromere is closer to the opposite end of the spindle. The microtubules from each end of the spindle attach to those centromeres that are closer to that end. The two chromosomes of each connected pair are then pulled apart, moving into opposite halves of the cell. It is random as to which chromosome of each functionally equivalent pair goes to which cell half. Thus, each cell half gets one chromosome from each pair of what was originally mother and father chromosomes, but which have since undergone random exchanges of corresponding sequences of genes.

After the chromosomes have been divided into the two cell halves, there is a delay, the duration of which depends on the particular species. During this delay—which may or may not involve the forming of nuclei and the construction of a dividing cell membrane—the chromosomes remain unchanged. After the delay, the final step begins. New spindles form—in each cell half if there was no cell membrane constructed during the delay; or in each of the two new cells if a cell membrane was constructed—and the final step divides each chromosome at its centromere. The chromosomes line up, the microtubules attach to the centromeres, and the two strands of each chromosome are pulled apart in opposite directions. Four new nuclear membranes form. The chromosomes become decondensed within each new nucleus. The in-between cell membranes form, and the spindles disintegrate. There are now four sex cells, and each sex cell contains a well-varied blend of that organism’s genetic inheritance which originated from its two parents.


footnotes

[10] Exceptions to this rule, and to the rules that follow, are genes and chromosomes that are sex-specific, such as the X and Y chromosomes in man. There is no further mention of this complicating factor.

[11] In female sex cells, four cells are made from an original cell, but only one of these four cells is a viable egg (this viable egg has most of the original cell’s cytoplasm). The other three cells are not viable eggs and they disintegrate. There is no further mention of this complicating factor.

[12] The idea of natural selection is that differences between individuals translate into differences in their ability to survive and reproduce. If a species has a pool of variable characteristics, then those characteristics that make individuals of that species less likely to survive and reproduce, tend to disappear from that species. Conversely, those characteristics that make individuals of that species more likely to survive and reproduce, tend to become common in that species.

A species is characterized by the ability of its members to interbreed. It may appear that if one had a perfect design for a particular species, then that species would have no need for sexual reproduction. However, the environment could change and thereby invalidate parts of any fixed design. In contrast, the mechanism of sexual reproduction allows a species to change as its environment changes.


2.5 Bions and Cell Division

As one can see, cell division is a complex and highly coordinated process that consists of a sequence of well-defined steps. So, can cell division itself be exclusively a chemical phenomenon? Or would it be reasonable to believe that bions are involved?

Cells are highly organized, but there is still considerable random movement of molecules, and there are regions of more or less disorganized molecules. Also, the organized internal parts of a cell are suspended in a watery gel. And no one has been able to construct, either by designing on paper or by building in practice, any computer-like control mechanisms that are made—as cells are—from groups of organized molecules suspended in a watery gel.[13] Also, the molecular structure of cells is already known in great—albeit incomplete—detail, and computer-like control mechanisms composed of molecules have not been observed. Instead, the only major computer component seen in cells is DNA, which, in effect, is read-only memory. But a computer requires an instruction processor, which is a centralized machine that can do each action corresponding to each program instruction stored in memory. And this required computer component has not been observed in cells. Given all these difficulties for the chemical explanation, it is reasonable to conclude that for each cell a bion controls its cell-division process.[14]


footnotes

[13] The sequence of well-defined steps for cell division is a program. For running such a moderately complex program, the great advantage of computerization over non-computer solutions—in terms of resource requirements—is discussed in section 3.3.

[14] The bion also explains the otherwise enigmatic subject of biological transmutations. Organic life is able to perform a number of different transmutations of elements into different elements, and this has been shown by many different experiments (Kervran, C. Louis. Biological Transmutations. Beekman Publishers, Woodstock NY, 1998):

In chemistry we are always referred to a law of Lavoisier’s formulated at the end of the 18th century. “Nothing is lost, nothing is created, everything is transformed.” This is the credo of all the chemists. They are right: for in chemistry this is true. Where they go wrong is when they claim that nature follows their laws: that Life is nothing more than chemistry. [Ibid., p. viii; Herbert Rosenauer]

Included among the many different examples of biological transmutations are such things as the production of calcium by hens (Ibid., pp. 15, 60–61), the production of iodine by algae (Ibid., p. 69), and the production of copper by lobsters (Ibid., pp. 120–122). In general, it appears that plants, animals, and smaller organisms such as bacteria, are all engaged in the production of certain elements.

Although there is much experimental evidence for biological transmutations, there has been no explanation within the framework of physics and chemistry. However, given the bion, biological transmutations can be explained as being done by bions.


2.6 Multicellular Development

For most multicellular organisms, the body of the organism develops from a single cell. How a single cell can develop into a starfish, tuna, honeybee, frog, dog, or man, is obviously a big question. Much research and experimentation has been done on the problems of development. In particular, there has been much focus on early development, because the transition from a single cell to a baby is a much more radical step than the transition from a baby to an adult, or from an adult to an old adult.

In spite of much research on early development, there is no real explanation of how it happens, except for general statements of what must be happening. For example, it is known that some sort of communication must be taking place between neighboring cells—and molecules are typically guessed as the information carrier—but the mechanism is unknown. In general, it is not hard to state what must be happening. However, the mathematics-only reality model allows only a chemical explanation for multicellular development, and given this restriction, there has been little progress. There is a great mass of data, but no explanation of the development mechanism.

Alternatively, given the computing-element reality model and the bion, multicellular development is explained as a cooperative effort between bions. During development, the cooperating bions read and follow as needed whatever relevant information is recorded in the organism’s DNA.[15]


footnotes

[15] As an analogy, consider the construction of a house from a set of blueprints. The blueprints by themselves do not build the house. Instead, a construction crew, which can read the blueprints, builds the house. And this construction crew, besides being able to read the blueprints, also has inside itself a great deal of additional knowledge and ability—not in the blueprints—needed to construct the house.

For a developing organism, its DNA is the blueprints and the organic body is the house. The organism’s bions are the construction crew. The learned programs in those bions, and associated data, are the additional knowledge and ability—not in the blueprints—needed to construct the house.

Note that at present it is not known how complete the DNA blueprints are, because the only code in DNA that has been deciphered so far is the code that specifies the structure of individual proteins. However, there is probably additional information in the DNA which is written in a language currently unknown:

So-called “junk” DNA, regions of genetic material (accounting for 97% of the human genome) that do not provide blueprints for proteins and therefore have no apparent purpose, have been puzzling to scientists. Now a new study shows that these non-coding sequences seem to possess structural similarities to natural languages. This suggests that these “silent” DNA regions may carry biological information, according to a statistical analysis of DNA fragments by researchers … [Physics News Update, American Institute of Physics, 1994, at: http://www.aip.org/enews/physnews/1994/split/pnu202-1.htm]


3 The Brain and the Mind

This chapter considers both the brain and the mind, and the involvement of bions in both. Also, learned programs are explained. And the last section presents in detail various algorithms, data structures, and code that, among other things, support the development of multicellular animals, including the development of one’s own physical body. The chapter sections are:

3.1 Neurons
3.2 The Cerebral Cortex
3.3 Mental Mechanisms and Computers
3.4 Composition of the Computers
3.5 Memory
3.6 Learned Programs
3.7 The Mind
3.8 Identifier Blocks, the send() Statement, and Multicellular Development
3.8.1 Identifier Blocks
3.8.2 The Learned-Program send() Statement: Parameters
3.8.3 Coordinates in 3D Space, and Message Transmission thru 3D Space
Two different Message-Transmission Algorithms
A Message-Transmission Algorithm for Sending a Message to a Specific Computing Element
An Efficient Gravity Algorithm
Regarding the above Gravity Algorithm: Approximations and Efficiency
3.8.4 The Learned-Program send() Statement: the message_instance
3.8.5 The Learned-Program send() Statement: Algorithmic Details
Handling the Special Case of a Recipient Particle being Moved when the Message Arrives
The Sphere-Filling Message-Transmission Algorithm
Several Properties of this Sphere-Filling Message-Transmission Algorithm
3.8.6 Multicellular Development
Regarding the Particle Details returned by the various get_relative_location…() learned-program statements described in this Book
Avoid Unreasonable Assumptions when Designing Algorithms that will Run on Computing Elements
Timers and Keeping Track of Elapsed Time
3.8.7 The Learned-Program Statements for Seeing and Manipulating Physical Matter have a very Short Range
3.8.8 Bions Seeing and Manipulating Atoms and Molecules
3.8.9 How Cell-Controlling Bions Stay with their Cells
How does a Sleeping Cell-Controlling Bion stay with its Cell, and how does a Sleeping Bion in a Bion-Body stay with that Bion-Body

3.1 Neurons

Every mammal, bird, reptile, amphibian, fish, and insect, has a brain. The brain is at the root of a tree of sensory and motor nerves, with branches thruout the body. The building block of any nervous system, including the brain, is the nerve cell. Nerve cells are called neurons. All animal life shows the same basic design for neurons. For example, a neuron from the brain of a man uses the same method for signal transmission as a neuron from a jellyfish.

Neurons come in many shapes and sizes. The typical neuron has a cell body and an axon along which a signal can be transmitted. An axon has a cylindrical shape, and resembles an electrical wire in both shape and purpose. In man, axon length varies from less than a millimeter to more than a meter.

A signal is transmitted from one end of the axon to the other end, as a chemical wave involving the movement of sodium ions across the axon membrane. During the wave, the sodium ions move from outside the axon to inside the axon. Within the neuron is a chemical pump that is always working to transport sodium ions to the outside of the cell. A neuron waiting to transmit a signal sits at a threshold state. The sodium-ion imbalance that exists across the axon membrane waits for a trigger to set the wave in motion. Neurons with a clearly defined axon can transmit a signal in only one direction.

The speed of signal transmission thru an axon is very slow compared to electrons moving thru an electrical wire. Depending on the axon, a signal may move at a speed of anywhere from ½ to 120 meters per second. The fastest transmission speeds are obtained by axons that have a myelin sheath: a fatty covering. The long sensory and motor nerves that connect the brain thru the spinal cord to different parts of the body are examples of myelinated neurons. In comparison to the top speed of 120 meters per second, an electrical current in a wire can move more than a million times faster. Besides speed, another consideration is how quickly a neuron can transmit a new signal. At best, a neuron can transmit about one thousand signals per second. One may call this the switching speed. In comparison, the fastest electrical circuits can switch more than a million times faster.

One important way that neurons differ from each other is by the neurotransmitters that they make and respond to. In terms of signal transmission, neurotransmitters are the link that connects one neuron to another. The sodium-ion wave is not directly transferred from one neuron to the next. Instead, the sodium-ion wave travels along the axon, and spreads into the terminal branches which end with synapses. There, the synapses release some of the neurotransmitter made by that neuron. The released neurotransmitter quickly reaches those neurons whose dendrites adjoin those synapses, provoking a response to that released neurotransmitter. There are three possible responses: a neuron can be stimulated to start its own sodium-ion wave, inhibited from starting its own sodium-ion wave, or have no response.[16],[17]


footnotes

[16] In the human brain there are many different neurotransmitters. Certain functionally different parts of the brain use different neurotransmitters. The subject of neurotransmitters raises the larger question of the effect of various drugs on the mind.

Although it is clear that certain chemicals affect the mind, it does not follow that the mind is a product of chemistry. As an analogy, consider the case of yourself and your physical environment: In your physical environment—including where you live, where you work, where you sleep, and so on—you are surrounded by physical objects, and you interact with many of these physical objects on a regular basis. Now, what happens when your physical environment changes? Depending on what the changes are and how you normally interact with the objects in question, the changes may or may not affect you. Can an outside observer logically conclude that the part of you that produces your reactions to changes in your physical environment is the same as, or is constructed from, the objects that you are reacting to? Obviously, no. And likewise, it does not logically follow that just because certain changes in the chemical landscape of the brain can affect the mind, that the mind is a product of chemistry, or is composed of chemicals.

To generalize the argument: Given that object A is affected by object B, it does not logically follow that object A, or any part of object A, is composed of the same materials as object B.

Regarding psychedelic drugs (Grinspoon, Lester, and James Bakalar. Psychedelic Drugs Reconsidered. The Lindesmith Center, New York, 1997):

The fact that a simple compound like nitrous oxide as well as the complex organic molecule of a drug like LSD can produce a kind of psychedelic mystical experience suggests that the human organism has a rather general capacity to attain the state and can reach it by many different biological pathways. It should be clear that there is no simple correlation between the chemical structure of a substance and its effect on consciousness. The same drug can produce many different reactions, and the same reaction can be produced by many different drugs. [Ibid., p. 36]

Regarding psychiatric drugs (Breggin, Peter, and David Cohen. Your Drug May Be Your Problem. Perseus Books, Reading MA, 1999):

Psychiatric drugs do not work by correcting anything wrong in the brain. We can be sure of this because such drugs affect animals and humans, as well as healthy people and diagnosed patients, in exactly the same way. There are no known biochemical imbalances and no tests for them. That’s why psychiatrists do not draw blood or perform spinal taps to determine the presence of a biochemical imbalance in patients. They merely observe the patients and announce the existence of the imbalances. The purpose is to encourage patients to take drugs.

Psychiatric drugs “work” precisely by causing imbalances in the brain—by producing enough brain malfunction to dull the emotions and judgment or to produce an artificial high. [Ibid., p. 41]

It is perhaps interesting to note that just as one might react to a sudden surplus or deficit of some physical object that one uses regularly, by taking actions to return that physical object to its normal quantity and/or effect, the brain reacts in the same kind of way to chemical imbalances caused by certain drugs. For example:

All four drugs [Prozac, Zoloft, Paxil, and Luvox], known as selective serotonin reuptake inhibitors (SSRIs), block the normal removal of the neurotransmitter serotonin from the synaptic cleft—the space between nerve cells. The resultant overabundance of serotonin then causes the system to become hyperactive. But the brain reacts against this drug-induced overactivity by destroying its capacity to react to stimulation by serotonin. This compensatory process is known as “downregulation.” Some of the receptors for serotonin actually disappear or die off.

To further compensate for the drug effect, the brain tries to reduce its output of serotonin. This mechanism is active for approximately ten days and then begins to fail, whereas downregulation continues indefinitely and may become permanent. Thus, we know in some detail about two of the ways in which the brain tries to counterbalance the effects of psychiatric drugs. There are other compensatory mechanisms about which we know less, including counterbalancing adjustments in other neurotransmitter systems. But, overall, the brain places itself in a state of imbalance in an attempt to prevent or overcome overstimulation by the drugs. [Ibid., p. 46]

Regarding changes to the affected nerve cells, such as the “downregulation” and reduced output of serotonin mentioned in the above quote, these changes are not done by one’s mind, but instead are done by the cell-controlling bions that occupy those affected nerve cells.

[17] The best known psychedelic drug is probably LSD, first synthesized by the Swiss chemist Albert Hofmann in 1938 while working for the drug company Sandoz. In 1943, Hofmann, as described in his book, LSD: My Problem Child, inadvertently absorbed some of the drug and had the following experience:

Last Friday, April 16, 1943, I was forced to interrupt my work in the laboratory in the middle of the afternoon and proceed home, being affected by a remarkable restlessness, combined with a slight dizziness. At home I lay down and sank into a not unpleasant intoxicated-like condition, characterized by an extremely stimulated imagination. In a dreamlike state, with eyes closed (I found the daylight to be unpleasantly glaring), I perceived an uninterrupted stream of fantastic pictures, extraordinary shapes with intense, kaleidoscopic play of colors. After some two hours this condition faded away. [Hofmann, Albert. LSD: My Problem Child. Multidisciplinary Association for Psychedelic Studies, Santa Cruz, 2009, p. 47]

After that Friday experience with LSD, three days later he took a larger, measured dose of LSD, and he describes the visual effects:

Everything in the room spun around, and the familiar objects and pieces of furniture assumed grotesque, threatening forms. They were in continuous motion, animated, as if driven by an inner restlessness. The lady next door, whom I scarcely recognized, brought me milk—in the course of the evening I drank more than two liters. She was no longer Mrs. R., but rather a malevolent, insidious witch with a colored mask. [Ibid., p. 49. Hofmann said he asked for and drank milk because he thought he had taken too much LSD, and milk would be a “nonspecific antidote to poisoning”.]

Now, little by little I could begin to enjoy the unprecedented colors and plays of shapes that persisted behind my closed eyes. Kaleidoscopic, fantastic images surged in on me, alternating, variegated, opening and then closing themselves in circles and spirals, exploding in colored fountains, rearranging and hybridizing themselves in constant flux. It was particularly remarkable how every acoustic perception, such as the sound of a door handle or a passing automobile, became transformed into optical perceptions. Every sound generated a vividly changing image, with its own consistent form and color. [Ibid., p. 50]

To understand what is happening with LSD, let’s start with a consideration of visual imagination. I have, as far as I know, an ordinary visual imagination when compared to others: I can visualize something in my mind—either a made-up construction, or a recollection of what something real in my life looks like—and I have conscious control over it, because I can consciously make changes to what I am seeing of it, and I can see it either as a static image or animated in some way, also of my conscious choosing. However, what I see from my visual imagination—when I am awake in my physical body—is always faint, and this faint imagery is composited onto my ordinary vision when my eyes are open in a lit environment, and composited onto a dark background when my eyes are closed (note that making an image faint, and also compositing one image onto another image or background, are both simple, low-cost computations). From what I’ve read, and also from talking with others, some people get imagery from their visual imagination that is substantially less faint than what I get; good examples of such people probably include at least some of those who work as graphic artists, and also those who have so-called photographic memory.
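
To illustrate the remark above that making an image faint and compositing images are simple, low-cost computations, here is a minimal sketch in Python. It assumes an image is a flat list of 8-bit brightness values (0 thru 255); the function names and the faintness representation are my own illustrative assumptions:

    # Making an image faint, and compositing one image onto another,
    # sketched as cheap per-pixel operations. Assumes flat lists of
    # 8-bit brightness values; all names here are illustrative.

    def make_faint(image, faintness):
        # faintness near 0.0 gives very faint imagery; 1.0 leaves the
        # image at full strength
        return [int(pixel * faintness) for pixel in image]

    def composite(foreground, background):
        # overlay the (possibly faint) foreground onto the background,
        # here by simple brightness addition clamped to the 8-bit range
        return [min(255, f + b) for f, b in zip(foreground, background)]

    # Normal waking vision, per the description above:
    #   seen = composite(make_faint(imagination, 0.1), sight)
    # The LSD conjecture in the next paragraph amounts, in effect, to
    # skipping make_faint()—i.e., compositing at full strength.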

Let’s assume that we each have, as a part of our human mind, the programming for a visual imagination that can, among other things, generate the complex imagery that is seen during an LSD trip. My guess as to what the LSD drug is doing is that it interferes in some way with the normal activity of at least some of the nerve cells in one’s brain that are involved with seeing thru one’s physical eyes (perhaps affecting some specific neurotransmitter). The end result is that the cell-controlling bions in one’s brain that send vision data to one’s mind end up sending vision data that differs in some way from what they normally send. In reaction to this distorted or corrupted vision data from one’s brain, the vision-processing programs in one’s mind compensate, in effect, by giving less importance to the vision data received from the brain, and more importance to what can be seen with the visual imagination. This change in importance is done by one’s mind, in effect, skipping the normal step in the vision process—experienced by most of us while awake—that makes imagery from one’s visual imagination faint before compositing that imagery with either sight imagery or a dark background, as stated in the previous paragraph.

For a typical person who tries LSD and gets the visual results typically described for that drug—the full power of one’s visual imagination exposed, without its produced imagery being made faint before one consciously sees it—the realization that one’s mind has hidden abilities one didn’t know were there is personally enlightening. Note that I have no experience with such drugs myself, because they have been illegal in the USA for my entire adult life. However, if it were legal and one could get the drug pure and untainted, then I would probably try it. Hopefully a better society in the future will not criminalize a non-addictive drug that allows a person to gain some insight into oneself.

Sections 9.6 and 9.7 go into detail regarding the fact that the human mind has mental abilities that go far beyond what most of us consciously experience as adults during our physically embodied human life. Also, with regard to one’s allocation plan, which is discussed in those two sections, note that the degree to which the imagery output from one’s visual imagination is made faint—before being composited with other imagery or background, and then sent to one’s visual field to be consciously seen—does not affect or change in any way the allocation of awareness-particle input channels for one’s visual field. The final imagery that is sent to one’s visual field can be anything, constructed by any processing means—including such methods as making an image faint, and compositing one image onto another image or background—without affecting or changing that allocation. However, the size of each final image sent to the awareness is constrained to the current size of one’s visual field. Also, for any pixel of a final image sent to the awareness, the brightness range and range of colors of that pixel is constrained to what the sent-to pixel in one’s visual field can show to one’s awareness. Also, because the allocation of awareness-particle input channels for one’s visual field is not affected by the degree to which the imagery from one’s visual imagination is made faint, and given that this degree of faintness can differ between persons, it follows that this is a potential mental difference between two persons without it being the result of an allocation-plan difference between them.

Regarding one’s visual imagination and the degree to which the imagery from one’s visual imagination is made faint, I expect that for a typical person during his time in the afterlife (section 6.3), the imagery from his visual imagination is made substantially less faint than it was during his adult human life. When we are in our physical human bodies, our sight is substantially more important than our visual imagination, because our physical bodies have constant needs and are easily damaged, and one’s sight is very important for satisfying those needs and guarding one’s physical body against damage. In the afterlife, there is no physical body with its needs and its potential for being damaged. During the bion-body stage of the afterlife, one has a bion-body, but it has no needs and cannot be damaged. During the lucid-dream stage of the afterlife, one is just one’s awareness/mind (defined in chapter 5), without a body. In either case, bion-body stage or lucid-dream stage, the imagery from one’s visual imagination during the afterlife can be made substantially less faint without increasing the risk to oneself. And, during the afterlife, perhaps one can consciously switch between just seeing the output of one’s visual imagination (without its produced imagery being made faint), and just seeing one’s external environment (there are three different ways to see one’s external environment during the afterlife—vision of physical matter as illuminated by physical light, vision of bions, and vision of d-common atoms—all detailed elsewhere in this book).


3.2 The Cerebral Cortex

There is ample evidence that the cerebrum’s thin, gray, covering layer, called the cortex, is the major site for human intelligence. More specifically, and with regard to this book, the cortex is, in effect, the interface between the mind and the physical body. Specific cell-controlling bions occupying neurons in the sense-handling parts of the cortex, such as the visual cortex, send sensory-data messages to the mind. And, to get the muscle movements that the mind wants, the mind sends messages to specific cell-controlling bions occupying neurons in the motor cortex.

Beneath the cortex is the bulk of the cerebrum. This is the white matter whose white appearance is caused by the presence of fatty sheaths protecting nerve-cell fibers—much like insulation on electrical wire.

The white matter is primarily a space thru which an abundance of nerve pathways, called tracts, pass. Hundreds of millions of neurons are bundled into different tracts, just as wires are sometimes bundled into larger cables. Tracts are often composed of long axons that stretch the entire length covered by the tract.

As an example of a tract, consider the optic nerve, which leaves the back of the eye as a bundle of about a million axons. The supporting cell bodies of these axons are buried in the retina of the eye. The optic tract passes into the base of a thalamus, which is primarily a relay station for incoming sensory signals. There, a new set of neurons—one outgoing neuron for each incoming neuron—comprises a second optic tract, called the optic radiation. This optic radiation connects from the base of the thalamus to a wide area of cerebral cortex in the lower back of the brain.

There are three main categories of white-matter tracts, corresponding to the parts of the brain that the tracts connect. Projection tracts connect areas of cortex with the brainstem and the thalami. Association tracts connect, on the same cerebral hemisphere, one area of cortex with a different area of cortex. Commissural tracts connect, on opposite cerebral hemispheres, one area of cortex with a different area of cortex. Altogether, there are many thousands of different tracts. It seems that all tracts in the white matter have either their origin, destination, or both, in the cortex.

The detailed structure of the cortex shows general uniformity across its surface. In any square millimeter of cortex, there are about 100,000 neurons. This gives a total count of about fifteen billion neurons for the entire human cortex. To contain this many neurons in the cortex, the typical cortex neuron is very small, and does not have a long axon. Many neurons whose cell bodies are in the cortex do have long axons, but these axons pass into the white matter as fibers in tracts. Although fairly uniform across its surface, the cortex is not uniform thru its thickness. Instead, when seen under a microscope, there are six distinct layers. The main visible difference between these layers is the shape and density of the neurons in each layer.

There is only very limited sideways communication thru the cortex. When a signal enters the cortex thru an axon, the signal is largely confined to an imaginary column of no more than a millimeter across. Different areas of widely spaced cortex do communicate with each other, but by means of tracts passing thru the white matter.

The primary motor cortex is one example of cortex function. This cortex area is in the shape of a strip that wraps over the middle of the cerebrum. As the name suggests, the primary motor cortex plays a major part in voluntary movement. This cortex area is a map of the body, and the map was determined by neurologists touching electrodes to different points on the cortex surface, and observing which muscles contracted. This map represents the parts of the body in the order that they occur on the body. In other words, any two adjacent parts of the body are motor-controlled by adjacent areas of primary motor cortex. However, the map does not draw a good picture of the body, because the body parts that are under fine control get more cortex. The hand, for example, gets about as much cortex area as the whole leg and foot. This is similar to the primary visual cortex, in which more cortex is devoted to the center-of-view than to peripheral vision.

There are many tracts carrying signals into the primary motor cortex, including tracts coming from other cortex areas, sensory tracts from the thalami, and tracts thru the thalami that originated in other parts of the brain. The incoming tracts are spread across the motor cortex strip, and the axons of those tracts terminate in cortex layers 1, 2, 3, and 4. For example, sensory-signal axons terminate primarily in layer 4 of the motor cortex. Similarly, the optic-radiation axons terminate primarily in layer 4 of the primary visual cortex.

Regarding the outgoing signals of the primary motor cortex, the giant Betz cells are big neurons with thick myelinated axons, which pass down thru the brainstem into the spinal cord. Muscles are activated by signals passed thru these Betz cells. The Betz cells originate in layer 5 of the primary motor cortex. Besides the Betz cells, there are smaller outgoing axons that originate in layers 5 and 6. These outgoing axons, in tracts, connect to other areas of cortex, and elsewhere.

Besides the primary motor cortex and the primary visual cortex, there are many other areas of cortex for which definite functions are known. This knowledge of the functional areas of the cortex did not come from studying the actual structure of the cortex, but instead from two other methods: by electrically stimulating different points on the cortex and observing the results, and by observing individuals who have specific cortex damage.

The study of cortex damage has been the best source of knowledge about the functional areas of the cortex. Among the possible causes of localized cortex damage are head wounds, strokes, and brain tumors. The basic picture that emerges from studies of cortex damage is that the cortex, in terms of how it relates to the mind, is divided into many different functional parts, and these functional parts exist at different areas of cortex.

Clustered around the primary visual cortex, and associated with it, are other cortex areas, known as association cortex. In general, association cortex borders each primary cortex area. The primary area receives the sense-signals first, and from the primary area the same sense-signals are transmitted thru tracts to the association areas.

Each association area attacks a specific part of the total problem. Thus, an association area is a specialist. For example, for the primary visual cortex there is a specific association area for the recognition of faces. If this area is destroyed, the person suffering this loss can still see and recognize other objects, but cannot recognize a face.

Some other examples of cortex areas are Wernicke’s area, Broca’s area, and the prefrontal area. When Wernicke’s area is destroyed, there is a general loss of language comprehension. The person suffering this loss can no longer make any sense of what is read or heard, and any attempt to speak produces gibberish. Broca’s area is an association area of the primary motor cortex. When Broca’s area is destroyed, the person suffering this loss can no longer speak, producing only noises. The prefrontal area is beneath the forehead. When this area is destroyed, there is a general loss of foresight, concentration, and the ability to form and carry out plans of action.

3.3 Mental Mechanisms and Computers

The neurons provide a great deal of wiring in the human brain. But what is missing from the preceding description of brain structure is any hint of what the mental mechanisms are that accomplish human intelligence. However, regardless of how the computers are composed, human intelligence is most likely accomplished by computers, for the following three reasons:

  1. The existence of human memory implies computers, because memory is a major component of any computer. In contrast, hardwired control mechanisms—a term used here to represent any non-computer solution—typically work without memory.

  2. People have learning ability—even single-cell animals show learning ability—which implies the flexibility of computers using data saved in memory to guide future actions. In contrast, hardwired control mechanisms are almost by definition incapable of learning, because learning implies restructuring the hardwired, i.e., fixed, design.

  3. A hardwired solution has hardware redundancy when compared to a functionally equivalent computers-and-programs solution. The redundancy happens because a hardwired mechanism duplicates at each occurrence of an algorithmic instruction the relevant hardware needed to execute that instruction. In effect, a hardwired solution trades the low-cost redundancy of stored program instructions, for the high-cost redundancy of hardware. Thus, total resource requirements are much greater if mental processes are hardwired instead of computerized.
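
To make reason 3 concrete, here is a minimal sketch in Python of a stored-program computer: a single instruction processor executes instructions held in memory, so the machinery for each kind of instruction exists once and is reused at every occurrence of that instruction in the program. The instruction set shown is invented for illustration:

    # A toy instruction processor. Each program instruction is a tuple;
    # the "hardware" for ADD exists once (the "+" below) no matter how
    # many ADD instructions the stored program contains.

    def run(program, memory):
        pc = 0  # program counter
        while pc < len(program):
            op, a, b, dest = program[pc]
            if op == "ADD":
                memory[dest] = memory[a] + memory[b]
            elif op == "SUB":
                memory[dest] = memory[a] - memory[b]
            elif op == "JUMP_IF_ZERO":
                if memory[a] == 0:
                    pc = dest
                    continue
            pc += 1
        return memory

    # Example: sum memory cells 0 and 1 into cell 2, then double cell 2;
    # both ADDs reuse the same instruction-processing machinery.
    print(run([("ADD", 0, 1, 2), ("ADD", 2, 2, 2)], [3, 4, 0]))

A hardwired solution would instead physically duplicate the adding mechanism at each point in the process where an add occurs—the high-cost redundancy of hardware described in reason 3.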

3.4 Composition of the Computers

Human intelligence can be decomposed into functional parts, which in turn can be decomposed into programs that use various algorithms. In general, for the purpose of guiding a computer, each algorithm must exist in a form where each elementary action of the algorithm corresponds with an elementary action of the computer. The elementary actions of a computer are known collectively as the instruction set of that computer.

Regarding the composition of the computers responsible for human intelligence, if one tries to hypothesize a chemical computer made of organic molecules suspended in a watery gel, then an immediate difficulty is how to make this computer’s instruction set powerful enough to do the actions of the many different algorithms used by mental processes. For example, how does a program add two numbers by catalyzing some reaction with a protein? If one assumes that, instead of an instruction set similar in power to those found in modern computers, the instruction set of the organic computer is much less powerful—that a refolding of some protein, for example, is an instruction—then one has merely transferred the complexity of the instruction set to the algorithms: instead of, for example, a single add-two-numbers instruction, an algorithm would need some large number of less-powerful instructions to accomplish the same thing.
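
To illustrate this complexity transfer, here is a short sketch in Python in which the only available “instruction” is as weak as increment-by-one (standing in, say, for one protein refolding). The step counting is illustrative only:

    # Addition built from a single weak primitive. What a powerful
    # instruction set does in one add-two-numbers instruction, a weak
    # instruction set does in a long run of primitive steps.

    def add_with_weak_instructions(x, y):
        steps = 0
        while y > 0:
            x += 1          # the lone weak instruction: increment by one
            y -= 1
            steps += 2      # roughly two primitive operations per pass
        return x, steps

    print(add_with_weak_instructions(3, 1000))  # (1003, 2000)

The work has not been removed, only relocated: a weaker instruction set simply means longer algorithms.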

For those who apply the mathematics-only reality model, confining themselves to a chemical explanation of mental processes, there has been little progress. As with the control mechanisms for cell movement, cell division, and multicellular development, all considered in chapter 2, there is the same problem: no one knows how to build computer-like control mechanisms satisfying cellular conditions. And the required computer component, an instruction processor, has not been observed in cells.

Alternatively, the computing-element reality model offers intelligent particles. Instead of one’s intelligence being the result of chemistry, one’s intelligence is instead the result of a group of bions (collectively one’s mind), and the programming (learned programs) and stored data in that mind.

3.5 Memory

People have a rich variety of memories, such as memories of sights, sounds, and factual data.[18] The whole question of memory has been frustrating for those who have sought its presence in physical substance. During much of the 20th century, many different researchers conducted a determined search for memory in physical substance. However, these researchers were unable to localize memory in any physical substance.

An issue related to memory is the frequently heard claim that neural networks are the mechanism responsible for human intelligence—in spite of their usefulness being limited to pattern recognition. However, and regardless of usefulness, without both a neural-network algorithm and input-data preprocessing—requiring memory and computational ability—neural networks do nothing. Thus, before invoking physical neural networks to explain any part of human intelligence, memory and computational ability must first exist as part of the physical substance of the brain—which does not appear to be the case.

In the latter part of the 20th century, the most common explanation of memory was that it is stored, in effect, by relative differences between individual synapses. Although this explanation has the advantage of not requiring any memory molecules—which have not been found—there must still be a mechanism that records and retrieves memories from this alleged storage medium. This requirement of a storage and retrieval mechanism raises many questions. For example:

  1. How does a sequence of single-bit signals along an axon—interpreting, for example, the sodium-ion wave moving along an axon and into the synapses as a 1, and its absence as a 0—become meaningfully encoded into the synapses at the end of that axon?

  2. If memory is encoded into the synapses, then why is the encoded memory not recalled every time the associated axon transmits a signal; or, conversely, why is a memory not encoded every time the associated axon transmits a signal?

  3. How do differences between a neuron’s synapses become a meaningful sequence of single-bit signals along those neurons whose dendrites adjoin those synapses?

The above questions have no answer. Thus, the explanation that memory is stored by relative differences between individual synapses, pushes the problem of memory elsewhere, making it worse in the process, because synapses—based on their physical structure—are specialized for neurotransmitter release, not memory storage and retrieval.

Alternatively, given bions, and given that each bion has its own memory (see the definition of a bion’s memory, in section 1.6), one’s memories are located somewhere in the collective memory of those bions that collectively form one’s mind.


footnotes

[18] The conscious memories of sights, sounds, and factual data, are high-level representations of memory data that have already undergone extensive processing into the forms that awareness receives.


3.6 Learned Programs

Regarding the residence of the programs of the mind, and with the aim of minimizing the required complexity of the computing-element program, assume that the computing-element program provides various learning algorithms—such as learning by trial and error, learning by analogy, and learning by copying—which, in effect, allow intelligent particles to program themselves. Specifically, with this assumption, each program in one’s mind—such as the program that recognizes faces—exists in the memory of one or more of those bions that collectively form one’s mind.

For reasons of efficiency, assume that the overall learning mechanism provided by the computing-element program includes a high-level programming language in which learned programs are written. In effect, the computing-element program runs (executes) a learned program by reading it and doing what it says. In this programming language for learned programs, besides the usual control statements found in any programming language (such as the if test) and the usual jump statements (such as return and go to) and the usual math operators (such as add, subtract, multiply, and divide), there are also many statements that are, in effect, calls of routines that exist in the computing-element program (the computing-element program is like an operating system with many routines that other programs, in this case learned programs, can call).

Once a specific learned program is established and in use by one or more bions, other bions nearby can potentially copy that learned program from any of those bions that already have it, and then, over time, potentially evolve that copied learned program further by using the various learning algorithms.[19],[20],[21]

Regarding learned programs within moving particles, motion thru space is the rule for particles. In general, as an intelligent particle moves thru space, each successive computing element that holds that intelligent particle continues running whichever of that particle’s learned programs are supposed to be running, continuing from the final execution state those learned programs were in when the previous holding computing element gave that intelligent particle to the current holding computing element.[22]
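
This hand-off is easy to picture if a learned program’s execution state is itself just data. Here is a minimal sketch in Python under that assumption; the class and function names are hypothetical:

    # A learned program's execution state is just data (a program
    # counter plus the program's variables), so each successive holding
    # computing element can resume exactly where the previous one
    # stopped. All names here are hypothetical.

    class ExecutionState:
        def __init__(self):
            self.pc = 0          # which statement of the program runs next
            self.variables = {}  # the learned program's current data

    class IntelligentParticle:
        def __init__(self, running_programs):
            self.running_programs = running_programs
            self.states = [ExecutionState() for _ in running_programs]

    def run_one_tick(particle):
        # a holding computing element advances each running learned
        # program by one step
        for program, state in zip(particle.running_programs, particle.states):
            program(state)   # updates state.pc and state.variables

    def hand_off(particle, next_element_queue):
        # the particle, including every execution state, moves as data;
        # the next computing element just keeps calling run_one_tick()
        next_element_queue.append(particle)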


footnotes

[19] In the discussion of rebirth in section 6.3, regarding transitioning from one animal type to a different animal type, the specific example given is of transitioning from being a chimpanzee in Africa to being, in its next incarnation, an African human. The difference between a chimpanzee mind and a human mind is assumed to be large enough to require a complete replacement of the programming of its chimpanzee mind with the programming of a human mind, most likely copying that human-mind programming from the mind of its human mother.

For a typical new human in his first human life, after the complete-replacement copying that gave him a human mind, my guess is that as long as that person keeps reincarnating as a human, any copying of mental programming from another mind will either not happen at all, or be very infrequent. Instead, it will be dependent on that person’s awareness and what that awareness wants, along with the non-copying learning algorithms of the computing-element program, to evolve improvements and/or changes, if any, to the programming of that person’s human mind. Thus, over the course of many human lifetimes, including all the in-between time in the afterlife, at least some customization of the programming of one’s human mind is, I think, likely. The area of customization that I think is most likely is, in effect, choosing the kind of allocation plan (section 9.6) that one prefers as an adult in one’s human lives. For example, regarding one’s human lives, does one want to emphasize being more average, or more intelligent, or more athletic? There are advantages and disadvantages to each of these.

As discussed later in this book, the programming of the human mind includes, in effect, both genders in terms of their psychology, and, in general, one’s current allocation plan (section 9.6) determines which parts of one’s human mind manifest, and how strongly they manifest, to one’s awareness in one’s current human life, including which emotions can manifest and how intensely they can manifest.

[20] In effect, learned programs undergo evolution by natural selection: the environment of a learned program is, at one end, the input datasets that the learned program processes, and, at the other end, the positive or negative feedback, if any, from whatever uses the output of that learned program, being either one or more other learned programs in the same or other bions, and/or the soliton described later in this book.

It is its environment, in effect, that determines the rate of evolutionary change in a learned program. The changes themselves are made by the aforementioned learning algorithms in the computing-element program. Presumably these learning algorithms, when they run, will use whatever recent feedback there was, if any, from the user(s) of the output of that learned program, to both control the rate of change, and to guide both the type and location of the changes made to that learned program. Within these learning algorithms, negative feedback from a soliton probably carries the most weight in causing these learning algorithms to make changes to a learned program.

Note that evolutionary change can include simply replacing the currently used version of a learned program, by copying a different version of that learned program, if it is available, from those bions that already have it. The sharing of learned programs among bions appears to be the rule—and, in effect, cooperative evolution of a learned program is likely.
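
As a heavily speculative sketch of what this footnote describes—feedback controlling the rate of change, negative soliton feedback carrying the most weight, and copying a different version as one form of change—consider the following Python fragment. Every weight, threshold, and name in it is invented for illustration:

    # Feedback-driven revision of a learned program; all numbers and
    # names are invented for illustration.

    SOLITON_FEEDBACK_WEIGHT = 3.0   # soliton feedback carries the most weight
    OTHER_FEEDBACK_WEIGHT = 1.0

    def mutate(program, rate):
        # stand-in for the non-copying learning algorithms named in
        # section 3.6 (trial and error, analogy); details unknown
        return program

    def revise(program, recent_feedback, available_copies):
        # recent_feedback: list of (source, score) pairs; a negative
        # score is negative feedback from a user of this program's output
        pressure = 0.0
        for source, score in recent_feedback:
            weight = (SOLITON_FEEDBACK_WEIGHT if source == "soliton"
                      else OTHER_FEEDBACK_WEIGHT)
            pressure += weight * max(-score, 0)
        if pressure == 0:
            return program                # no negative feedback: no change
        if available_copies:
            return available_copies[0]    # replace by copying another version
        return mutate(program, rate=pressure)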

[21] An example of a learned program that is widely shared is the learned program (or programs) for vision.

Although one may imagine that vision is a simple process of merely seeing the image that falls on the eye, that is not the case at all (note that the fact that we all see things alike, because we are all using the same program(s), adds to this illusion of simplicity). Instead, the process of human vision—converting what falls on our eyes into what we consciously see in our minds—is very complex, with many rules and algorithms (Hoffman, Donald. Visual Intelligence. W. W. Norton, New York, 1998):

Perhaps the most surprising insight that has emerged from vision research is this: Vision is not merely a matter of passive perception, it is an intelligent process of active construction. What you see is, invariably, what your visual intelligence constructs. [Ibid., p. xii]

The fundamental problem of vision: The image at the eye has countless possible interpretations. [Ibid., p. 13]

The fundamental problem of seeing depth: The image at the eye has two dimensions; therefore it has countless interpretations in three dimensions. [Ibid., p. 23]

About our senses, it isn’t just what we see that is a construction of our minds. Instead, as Hoffman says:

I don’t want to claim only that you construct what you see. I want to claim that, at a minimum, you also construct all that you hear, smell, taste, and feel. In short, I want to claim that all your sensations and perceptions are your constructions.

And the biggest impediment to buying that claim comes, I think, from touch. Most of us believe that touch gives us direct contact with unconstructed reality. [Ibid., p. 176]

To prove this idea that our sense perceptions are mental constructions, one only needs to point at experiments that show a person experiencing some sense perception that has no basis in physical reality. For vision, there are many different optical illusions that cause one to see something that is not in the physical image. For touch, Hoffman cites experimental results regarding an effect that was “discovered by accident in the early 1970s by Frank Geldard and Carl Sherrick” (Ibid., p. 180). These experiments consist of making during a short time interval a small number of taps at different points on a test subject’s forearm. Depending on the location and timing of the different taps, the subject will feel one or more interpolated taps at locations where no physical taps were made. For example, Hoffman describes an experiment that delivers two quick physical taps at one point, quickly followed by one physical tap at a second point, and the subject reports feeling the three taps but with the second tap lying between those two points instead of being at the first point where the actual second physical tap was made (Ibid., p. 181). As Hoffman notes, this means that the entire perception of the three taps was constructed by the mind after the three physical taps had happened, because the interpolated tap point is dependent on knowing the two end-points for the interpolation, and the second end-point is only known when the third and final physical tap happens.

[22] It is reasonable to assume that each intelligent particle has a small mass—i.e., its mass attribute has a positive value—making an intelligent particle subject to both gravity and inertia. This assumption is consistent with how the intelligent particles currently associated with the Earth, including those cell-controlling bions that currently occupy and control organic cells, stay with the Earth as the Earth moves thru space at high speed due to a combination of gravitational and inertial effects including the rotation of the Earth, the Earth’s revolution around the Sun, and the revolution of the solar system around the galactic core.


3.7 The Mind

Each neuron in the brain is a cell, and is therefore occupied by a bion that has the learned programs for controlling cells and making them alive. Call the bions that occupy the nerve cells of the brain, brain bions. And call the bions that collectively form a mind, mind bions (this group of bions, taken as a whole, has all the learned programs for the mental abilities of that mind).

To explain one’s intelligence, one could say that taken as a whole, one’s brain bions, besides having cell-controlling learned programs, also have all the programming (the learned programs) for one’s mental abilities. To justify this explanation, one could point out that brain bions are in the perfect location to read and process the sodium-ion signals moving along their neurons from sensory sources, and brain bions are also in the perfect location to start sodium-ion signals that transmit thru nerves to muscles, activating those muscles and causing movement. However, among the reasons to reject this explanation that mind bions are also brain bions are limited computing resources and the benefits of specialization:

In general, each bion has both a finite processing speed and a finite memory, and specialization has, in general, its advantages. The two groups of learned programs in question are extremely different in terms of their purpose (the programming for the mental abilities of the human mind, in contrast to the programming for controlling cells). Thus, instead of combining both cell-controlling abilities and mental abilities into the same group of bions, it would be more efficient—in both evolutionary and operational terms—and less subject to conflicts, such as usage conflicts over the limited computing resources available to those two groups of learned programs, if the learned programs for our various mental abilities occupy a different, separate group of bions than our brain bions. Assuming this separation, then there must be interface programming—existing in one or more learned programs on the brain side, and existing in one or more learned programs on the mind side—that interfaces one’s mind with one’s brain.

The interface programming would be the means by which specific brain bions can send data from sensory sources, such as one’s eyes and ears, to one’s mind (an example would be specific brain bions in the visual cortex sending vision data to one’s mind, and one or more specific mind bions receiving that sent vision data which ultimately will be processed in one’s mind into what one will consciously see). And likewise, interface programming would be the means by which one’s mind can send commands to specific brain bions in the motor cortex, which, after those brain bions receive those commands and then cause their nerves to signal, will ultimately result in the wanted muscle movements.
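
As a sketch of this interface traffic, the following Python fragment shows the two directions of flow, using a stub for the send() statement that section 3.8 develops. The identifier values and message formats are hypothetical:

    # The two directions of mind/brain interface traffic described
    # above. send() here is only a stub for the learned-program send()
    # statement of section 3.8; identifiers and formats are hypothetical.

    outbox = []  # stands in for actual message transmission

    def send(list_of_bions, message_text):
        outbox.append((list_of_bions, message_text))

    def brain_to_mind(vision_data, mind_vision_bion_ids):
        # a brain bion in the visual cortex forwards vision data to the
        # mind bion(s) that do vision processing
        send(mind_vision_bion_ids, ("VISION_DATA", vision_data))

    def mind_to_brain(movement_command, motor_cortex_bion_ids):
        # the mind commands specific brain bions in the motor cortex to
        # signal their neurons, producing the wanted muscle movement
        send(motor_cortex_bion_ids, ("MOVE_MUSCLES", movement_command))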

3.8 Identifier Blocks, the send() Statement, and Multicellular Development

The interface programming mentioned in section 3.7 implies that there is a way to send messages from one bion to another, and that bions can be individually recognized. The easiest way to allow recognition of individual bions is by each bion having its own unique identifier. This and much more is covered in the following subsections.

3.8.1 Identifier Blocks

Assume that the state information for an intelligent particle always includes a separate identifier block that has a fixed format composed of a small number of integers (probably less than twenty integers). The parts of this identifier block are described in the following list, and a sketch of the block as a data structure follows the list:

  1. Let the first integer in the identifier block identify that intelligent-particle’s type: either a bion or a soliton. Note that the soliton is the awareness particle, and besides the discussion of the soliton in this section, there is much more regarding the soliton in other parts of this book.

  2. Let the second integer in the identifier block uniquely identify that intelligent particle within its intelligent-particle type:

    For bions and solitons, assume that for all bions and solitons in existence, whenever that intelligent particle was initially created by the computing-element program, it was given a sufficiently long and randomly generated integer that will serve as a unique identifier for that intelligent particle (by “sufficiently long and randomly generated” is meant that given a very large group of bions and solitons, such as all the bions and solitons in our solar system, it will be very unlikely that there are two or more intelligent particles in that group that have the same identifier value). This identifier value, once given, cannot be changed, and it will be a permanent part of that intelligent particle’s identifier block. Also, this identifier value will be referred to as being a unique identifier for that intelligent particle, even though given a large enough group of bions and solitons, such as all the bions and solitons in our galaxy, it may not actually be unique within that group.

    Note that there are several good algorithms published in the computer-science literature for generating a sequence of random numbers given some initial arbitrary seed value as the starting point for generating that sequence. And there are many different problem domains in which solution algorithms make use of randomly generated numbers. Thus, it is reasonable to assume that the computing-element program has within it one or more routines for generating random numbers, and one can also assume that random-number generation is always available when considering the kind of algorithms that might be found among learned programs.

    Likewise, one can also assume that the computing-element program has within it one or more routines for generating a seed value given the current environment of the computing element that is executing that generate-seed routine. Note that the environmental source(s) used to generate the seed value are not necessarily external to that computing element, but instead can be internal. As an example of how a very random and very-unlikely-to-be-repeated-anytime-soon seed value can be generated from within a computing element: if the generate-seed routine is being called when that computing element is currently occupied by an intelligent particle, then that generate-seed routine can compute a hash value by reading thru the entire state information of that intelligent particle, and then use the current clock-tick count—or whatever, for this purpose, is equivalent to a clock-tick count in a computing element—to either combine with or manipulate further that computed hash value to get the final seed value returned by that generate-seed routine.

  3. Let the third integer in the identifier block be either a null value or, if the intelligent particle is a bion and that bion belongs to a soliton, then this third integer is the unique identifier for that soliton.

    I think it likely, and assume that it is so, that when a soliton is created by the computing-element program, a large number of bions are also created at the same time in nearby computing elements, and each of these created bions will have as its third integer in its identifier block that soliton’s unique identifier.

    For all the bions owned by a soliton, assume that the computing-element program, in effect, will keep these owned bions together with both themselves and their owning soliton, never allowing any of these intelligent particles to be further away from each other than some fixed distance (let the constant SOLITON_AND_ITS_OWNED_BIONS_MAX_SEPARATION_DISTANCE be this fixed distance). I estimate that the value of SOLITON_AND_ITS_OWNED_BIONS_MAX_SEPARATION_DISTANCE is about one inch (an inch is about 2½ centimeters), and this estimate is based on my second solitonic projection (see subsection 10.1.2).

    Also assume that every soliton has the same number of owned bions, and that the soliton can only interact with its owned bions, and cannot directly interact with any common particle and cannot directly interact with any intelligent particle other than its owned bions. At the same time, the soliton is, in effect, invisible to all particles in existence with the sole exception of its owned bions. The soliton’s owned bions are, in effect, that soliton’s mind. Also assume that after its creation, a soliton cannot own any other bions than those initially created for it, but neither does a soliton ever lose any of its owned bions.

  4. Beginning with the fourth integer in the identifier block, the remaining integers in the identifier block are, for a bion, user settable by that bion. Refer to these integers in the identifier block as user-settable identifiers, and refer to this sub-block within the identifier block as the user-settable identifiers block. Thus, for bions, there exists a learned-program statement that can change any of that bion’s user-settable identifiers to either a null value or some integer value. Also, when a bion is created by the computing-element program, assume that these user-settable identifiers are initialized to null values.

    I assume there is only a small number of user-settable identifiers, perhaps less than a dozen, because for the purpose of multicellular development discussed further below, I see only seven user-settable identifiers as needed. However, in the case of organizing a soliton’s owned bions (its mind), a few more than seven user-settable identifiers might be useful. For solitons themselves, I don’t see any need for user-settable identifiers, and one can assume that these user-settable identifiers always have null values for solitons.
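
Here is the identifier block sketched as a data structure in Python, following the four items above. The field names, the 128-bit size of the unique identifier, and the generate_seed() details are my assumptions; the hash-plus-clock-tick idea for the seed comes from item 2:

    # The identifier block of subsection 3.8.1 as a data structure.
    # Field names and sizes are assumptions.

    import random

    BION, SOLITON = 1, 2           # item 1: intelligent-particle type
    NULL = 0
    NUM_USER_SETTABLE = 7          # item 4: seven suffice for development

    def generate_seed(state_information, clock_tick):
        # item 2: hash the particle's entire state information, then
        # combine the result with the current clock-tick count
        return hash(state_information) ^ clock_tick

    def create_identifier_block(particle_type, seed, owning_soliton=NULL):
        random.seed(seed)
        return {
            "type": particle_type,                        # item 1: permanent
            "unique_id": random.getrandbits(128),         # item 2: permanent
            "owning_soliton": owning_soliton,             # item 3
            "user_settable": [NULL] * NUM_USER_SETTABLE,  # item 4
        }

    # A bion created for a soliton gets that soliton's unique_id as its
    # owning_soliton value; a free bion, and every soliton, has NULL there.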

3.8.2 The Learned-Program send() Statement: Parameters

As mentioned above, bions need a way to communicate with each other. Additionally, owned bions need a way to communicate with their soliton, and their soliton needs a way to communicate with them. To meet these needs, assume that there is a learned-program send() statement for sending a message. Also assume that each bion and each soliton has a message queue in which messages that were sent to that intelligent particle are stored there by the computing-element program to await processing by that recipient intelligent particle. However, if a recipient of a sent message is either a bion or soliton that is currently asleep (section 9.3), then assume that that message is, in effect, ignored by that recipient and not put in that recipient’s message queue. Also assume that when a bion or soliton goes to sleep, that any messages in its message queue are discarded.

Note: In this book, the data being sent to the intended recipient(s) of a message, is often referred to as the “message text”. The message text will be a part of the sent message, but not the entire message (subsection 3.8.4 details the other components of a sent message). The message text is given to the send() statement as a parameter.
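
A minimal sketch of these message-queue rules, in Python, with assumed names: a sleeping recipient’s arriving messages are ignored, and going to sleep discards whatever is queued:

    # Message-queue behavior for a bion or soliton, per the rules above.

    class MessageQueue:
        def __init__(self):
            self.asleep = False
            self.messages = []

        def deliver(self, message):
            # the computing-element program stores an arriving message
            # only if the recipient is awake
            if not self.asleep:
                self.messages.append(message)

        def go_to_sleep(self):
            self.asleep = True
            self.messages.clear()   # queued messages are discarded

        def wake_up(self):
            self.asleep = False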

Any time a learned program calls the send() statement to send a message, that sending bion or soliton must identify the intended recipient(s) of that message. When the intended recipient(s) are one or more bions, assume that the intended recipient(s) can be specified to the send() statement in one of two ways:

  1. For each intended recipient bion, give the send() statement that bion’s unique identifier which was given to that bion when it was created (this unique identifier is item 2 in the above description of a bion’s identifier block). Thus, the send() statement is given a parameter, named list_of_bions, that is a list of one or more of these unique identifiers.

    The size of list_of_bions is presumed to be limited to a small number of unique identifiers. I estimate this limit at about ten unique identifiers, which makes the largest possible list_of_bions about the same size as the user_settable_identifiers_block parameter, which is the other way that bion recipient(s) can be specified to the send() statement. There are two reasons for limiting list_of_bions to a small size.

    OR

  2. Give the send() statement a user-settable identifiers block that has at least one of its values non-null. When the send() statement is called with this parameter, name this parameter user_settable_identifiers_block. In this case, the recipient bion(s) of this message will be those bions that have the same non-null values in the same locations in that bion’s user-settable identifiers block.

    For example, if the user_settable_identifiers_block parameter has in its first position a non-null integer value X, and in its second position a non-null integer value Y, and the remaining positions have a null value, then each recipient bion within its own user-settable identifiers block must have the same X integer value in its first position, and the same Y integer value in its second position.
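
Here is a minimal Python sketch of this matching rule, assuming seven user-settable identifiers and using None for a null value; the function name and the example values of X and Y are illustrative:

from typing import List, Optional

def matches_user_settable_block(recipients_block: List[Optional[int]],
                                messages_block: List[Optional[int]]) -> bool:
    # every non-null position in the message's block must have the same value
    # at the same position in the recipient's own block
    return all(m is None or m == r
               for m, r in zip(messages_block, recipients_block))

X, Y = 41, 42  # illustrative values
msg_block = [X, Y, None, None, None, None, None]
assert matches_user_settable_block([X, Y, 7, None, None, None, 99], msg_block)
assert not matches_user_settable_block([X, 0, None, None, None, None, None], msg_block)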

Regarding messages to and from a soliton, to protect the communications integrity of the soliton, assume the following rules:

For all the bions owned by a soliton, which collectively form that soliton’s mind, it is reasonable to assume that while awake there will be a lot of within-the-mind communication going on, in the form of messages sent within that mind. To protect the integrity of a soliton’s mind, assume that for any bion that is owned by a soliton, any call of the send() statement by that bion to send a message to one or more other bions must, in effect, pass a parameter to that send() statement specifying whether this is a within-the-mind communication or not: if within-the-mind, then the only bion(s) that can possibly receive the sent message are bion(s) that are also owned by that sender’s soliton; if instead, not within-the-mind is specified, then the only bion(s) that can possibly receive the sent message are bion(s) that are not owned by that sender’s soliton.

To accomplish this within-the-mind-or-not restriction on which bions can potentially receive into their message queues the sent message, regardless of how the recipient bion(s) are specified to the send() statement, assume that this within-the-mind-or-not restriction is the last test done by the computing-element program to determine if it will add the sent message to a bion’s message queue, after already determining that that bion qualifies as a recipient bion given the way the recipient bion(s) were specified to the send() statement when that message was sent.
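
Here is a minimal Python sketch of the within-the-mind-or-not restriction as that last test, applied only after the bion has already qualified as a recipient by the addressing test (list_of_bions or user_settable_identifiers_block); the names are illustrative:

def passes_within_the_mind_test(senders_soliton, recipients_soliton,
                                within_the_mind: bool) -> bool:
    # sender and recipient are in the same mind only when both bions are
    # owned by the same soliton
    same_mind = (senders_soliton is not None
                 and senders_soliton is recipients_soliton)
    # if within-the-mind was specified, only same-mind bions can receive the
    # message; otherwise, only bions not owned by the sender's soliton can
    return within_the_mind == same_mind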

For any bion that has received a message into its message queue from some other bion, assume that the complete identifier block of the sender is made available to the receiving bion along with that message, and also the parameter given to the send() statement by the sender to identify the recipient bion(s), either list_of_bions or user_settable_identifiers_block, is also made available to the receiving bion. After thinking over security considerations, and given what is already said above in this subsection about restrictions placed on the send() statement, and also to make processing the message easier, I believe it is best to expose all of the sender’s identifier block to the receiving bion(s) and not hide any of it.

In the case of a direct call of the send() statement by a learned program, where the message to be sent is not being sent to or from a soliton, and is not being sent within the mind as per the within-the-mind-or-not parameter, an additional parameter for the send() statement is the use_this_send_distance parameter: an integer that is used, after editing, to set the value of send_distance for the message to be sent (send_distance is explained in subsection 3.8.4). The use_this_send_distance parameter is also a parameter for some of the other learned-program statements presented in this book, including the get_relative_locations_of_bions() statement which is detailed in subsection 3.8.6. In general, a use_this_send_distance parameter value is edited before it is assigned to send_distance so that its value is not less than 1 and not more than the MAX_SEND_DISTANCE_ALLOWED_FOR_… value for that statement (for example, not more than MAX_SEND_DISTANCE_ALLOWED_FOR_SEND_STATEMENT for the send() statement, and not more than MAX_SEND_DISTANCE_ALLOWED_FOR_LOCATING_BIONS for the get_relative_locations_of_bions() statement).
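
Here is a minimal Python sketch of this editing of a use_this_send_distance value; the function and parameter names are illustrative:

def edit_send_distance(use_this_send_distance: int, max_send_distance_allowed: int) -> int:
    # clamp the caller-supplied value into the range [1, max_send_distance_allowed],
    # where max_send_distance_allowed is the MAX_SEND_DISTANCE_ALLOWED_FOR_…
    # value for the statement being called
    return max(1, min(use_this_send_distance, max_send_distance_allowed))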

Regarding the distance represented by the value of MAX_SEND_DISTANCE_ALLOWED_FOR_SEND_STATEMENT, my opinion is that this distance is at least hundreds of miles if not several thousand miles (1 mile is about 1.6 kilometers).

3.8.3 Coordinates in 3D Space, and Message Transmission thru 3D Space

Assume that, in effect, each computing element, which is extremely tiny (see chapter 1), is a cube, and these cubes are, in effect, packed tight together into a gigantic 3D array of computing elements, and this gigantic 3D array of computing elements is the 3D space within which we and the rest of our universe exists (footnote 23 gives a reason why this gigantic 3D array of computing elements itself has a cube shape). Assume that each computing element has defined in its state information this_CE's_XYZ (CE means “computing element”), which is its 3D coordinate within this gigantic 3D array of computing elements. A computing element is uniquely identified by its XYZ coordinate, and is the only computing element in our universe that has that XYZ coordinate. As long as our universe exists, the XYZ coordinate of each computing element in our universe never changes.

The value of this_CE's_XYZ has three components, an X coordinate, a Y coordinate, and a Z coordinate, and these three components can be accessed individually as this_CE's_XYZ.X, this_CE's_XYZ.Y, and this_CE's_XYZ.Z, respectively. A computing element’s XYZ coordinate consists of three non-negative integers: a non-negative integer X for its X coordinate along the X axis, a non-negative integer Y for its Y coordinate along the Y axis, and a non-negative integer Z for its Z coordinate along the Z axis. Also, any two computing elements that are adjacent to each other along an axis (either the X, Y, or Z axis), will have their respective coordinate along that axis differ by 1. In effect, the computing elements have a standard XYZ coordinate system, the same as taught in math books, but without negative coordinates for the computing elements in our universe.

For any two computing elements, the distance between their respective XYZ coordinates can be computed using the distance formula for the distance between two XYZ points. For any two points in 3D space whose coordinates are (x1, y1, z1) and (x2, y2, z2), the distance between those two points is given by the distance formula:

square_root_of((x2 − x1)² + (y2 − y1)² + (z2 − z1)²)

For the distance formula, its arithmetic is signed, and each point with its three components can have one or more negative components. For example, if point A is (2, −4, 8), and point B is (1, 3, −9), then the distance between points A and B is square_root_of((1 − 2)² + (3 − −4)² + (−9 − 8)²), which reduces to square_root_of((−1)² + (7)² + (−17)²) which is square_root_of(1 + 49 + 289), which is square_root_of(339), which is about 18.412. Note that exchanging the two points in the distance formula reverses signs, but because (−n)² = (n)², one gets the same answer: square_root_of((2 − 1)² + (−4 − 3)² + (8 − −9)²) reduces to square_root_of((1)² + (−7)² + (17)²) which is square_root_of(1 + 49 + 289), the same as before.
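
Here is a minimal Python sketch of the distance formula, reproducing the worked example above:

import math

def distance(point_a, point_b):
    (x1, y1, z1), (x2, y2, z2) = point_a, point_b
    return math.sqrt((x2 - x1) ** 2 + (y2 - y1) ** 2 + (z2 - z1) ** 2)

print(distance((2, -4, 8), (1, 3, -9)))  # about 18.412
print(distance((1, 3, -9), (2, -4, 8)))  # exchanging the two points: same answer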

In a few places in this book, likely non-integer results are assigned to the X, Y, and Z of an XYZ coordinate variable. In these cases, just assume any non-integer result is rounded to the nearest integer before that assignment. Also, in a few places in this book, an XYZ coordinate variable is said to be signed or the computation(s) involving it are said to be signed, which means that its X, Y, and Z components can have negative values.

Regarding the unit of distance used in the computing-element program: Given what is said above about the 3D coordinate system for the computing elements, the unit of distance is the side-width of a computing element, which chapter 1 estimates is 10⁻¹⁶ centimeters wide. Thus, about 10¹⁶ distance units is about 1 centimeter in length. Also, to be consistent with the above 3D coordinate system for the computing elements, and consistent with the distance formula applied to that 3D coordinate system, it follows that the side-width of a computing element is also the distance unit used thruout the computing-element program, including the various distance constants, variables, and parameters that I give in this book. For example, assuming that the side-width of a computing element is 10⁻¹⁶ centimeters, if the distance constant SOLITON_AND_ITS_OWNED_BIONS_MAX_SEPARATION_DISTANCE has a value equivalent to 2.5 centimeters, then the actual integer value of SOLITON_AND_ITS_OWNED_BIONS_MAX_SEPARATION_DISTANCE is (2.5 × 10¹⁶).

Two different Message-Transmission Algorithms

Our universe is finite, and our galaxy appears to be far away from any edge in our universe where there are no computing elements. Thus, for our galaxy’s location in this universe, let’s assume that every computing element is surrounded on all sides by other computing elements. Let’s now define what is meant in the message-transmission algorithms by the phrase adjacent computing elements: Assuming a cube shape for each computing element, a cube has six sides, and in our galaxy each computing element shares each of its six sides with an adjacent computing element. Also, assume that message transmission only happens across a shared side. Thus, each computing element in our galaxy has a total of six adjacent computing elements that it can transfer messages to and from.

Regarding message transmission thru 3D space, assume that the computing-element program has two different algorithms for message transmission:

  1. A sphere-filling message-transmission algorithm : Every computing element within send_distance of the originating computing element from which the message was sent, will receive a copy of that message (note that send_distance has the same unit of distance discussed above for the computing elements and their 3D coordinate system, which is the side-width of a computing element). In effect, a sphere of 3D space, with the center of that sphere being that originating computing element, will receive that sent message. The details of this message-transmission algorithm are given in subsection 3.8.5.

    With the sole exception of the gravity algorithm given in footnote 23, which uses the second message-transmission algorithm to send mass messages and gravity-info messages, all the message sending in this book uses this sphere-filling message-transmission algorithm to send the message. The reason, in general, is that all particles in our galaxy appear to be moving thru 3D space, and 3D space is composed of computing elements. Also, nearby particles are typically moving relative to each other. The end result of all this particle movement is that any message sent from one particle to one or more other particles is, in effect, best sent in every direction, filling a sphere, so that the designated recipient particle(s) can, in effect, be found by that message, for any recipient particle that is within the send_distance range of that message.

  2. A message-transmission algorithm that sends the message to a specific computing element : This algorithm sends the message along the shortest path from the originating computing element to the destination computing element that is identified by its XYZ coordinate. This message-transmission algorithm is only used in this book to send mass messages and gravity-info messages (see footnote 23), and the details of this message-transmission algorithm are given at the beginning of footnote 23.

In the attached footnote, I give an efficient algorithm for computing the force of gravity at each computing element, and this gravity algorithm uses the sphere-filling message-transmission algorithm to send gravity messages, and uses the other message-transmission algorithm, that sends the message to a specific computing element, to send mass messages and gravity-info messages.[23]

Note: To avoid excessive repetition, anywhere in this book where I say that a computing element does some specific action, or a particle does some specific action, it is actually the computing-element program that is, in effect, doing that action. The only exception to this rule is the soliton (awareness) which has an agency (its so-called free will) that is independent of the computing-element program. A soliton’s agency, in effect, can decide on the messages that the computing-element program will send to that soliton’s mind (the soliton’s owned bions). (As a side note regarding the soliton, the only reason for the existence of our universe that I can see, is that our universe is a structured playground for all the awarenesses that, in effect, live in our universe: the universe exists for the sake of all these awarenesses.)


footnotes

[23] In this footnote, first a message-transmission algorithm is given for sending a message to a specific computing element, and then an efficient algorithm is given for gravity. This footnote ends with a description of two approximations in the gravity algorithm, and a discussion of the gravity algorithm’s efficiency.

A Message-Transmission Algorithm for Sending a Message to a Specific Computing Element

There is no learned-program statement for sending a message to a specific computing element. Thus, intelligent particles cannot send a message to a specific computing element. Only the computing-element program can send a message to a specific computing element.

Assume the following format for any message whose recipient is a specific computing element identified by its XYZ coordinate:

  1. The message begins with a code that identifies this message to the computing-element program as using this message-transmission algorithm.

  2. sending_CE_XYZ : The XYZ coordinate of the computing element that sent this message. The sending computing element sets sending_CE_XYZ to this_CE's_XYZ, which is that sending computing element’s XYZ coordinate.

  3. X_steps_remaining_until_at_the_recipient_CE
    Y_steps_remaining_until_at_the_recipient_CE
    Z_steps_remaining_until_at_the_recipient_CE

    The following code is executed by the sending computing element to set these three before this message is sent (recipient_CE_XYZ is the XYZ coordinate of the computing element that this message is being sent to):

    set X_steps_remaining_until_at_the_recipient_CE to (recipient_CE_XYZ.X − sending_CE_XYZ.X)
    set Y_steps_remaining_until_at_the_recipient_CE to (recipient_CE_XYZ.Y − sending_CE_XYZ.Y)
    set Z_steps_remaining_until_at_the_recipient_CE to (recipient_CE_XYZ.Z − sending_CE_XYZ.Z)
  4. The message type : The computing-element program has predefined the allowed message types for a message sent to a specific computing element. The message type, in effect, identifies the format and meaning of the message text.

  5. The message text : The data being sent to the recipient computing element.

The following block of code is executed by the computing-element program in this_CE which is the computing element that currently holds the message, and this includes the sending computing element when it is ready to send this message. In effect, this block of code is the message-transmission algorithm for any message having the above format:

if X_steps_remaining_until_at_the_recipient_CE is a negative number then
Add 1 to X_steps_remaining_until_at_the_recipient_CE and transfer this message to that adjacent computing element whose XYZ coordinate has the same Y and Z coordinates as this_CE, but that adjacent computing element’s X coordinate is one less than this_CE’s X coordinate.
else if X_steps_remaining_until_at_the_recipient_CE is a positive number then
Subtract 1 from X_steps_remaining_until_at_the_recipient_CE and transfer this message to that adjacent computing element whose XYZ coordinate has the same Y and Z coordinates as this_CE, but that adjacent computing element’s X coordinate is one more than this_CE’s X coordinate.
else if Y_steps_remaining_until_at_the_recipient_CE is a negative number then
Add 1 to Y_steps_remaining_until_at_the_recipient_CE and transfer this message to that adjacent computing element whose XYZ coordinate has the same X and Z coordinates as this_CE, but that adjacent computing element’s Y coordinate is one less than this_CE’s Y coordinate.
else if Y_steps_remaining_until_at_the_recipient_CE is a positive number then
Subtract 1 from Y_steps_remaining_until_at_the_recipient_CE and transfer this message to that adjacent computing element whose XYZ coordinate has the same X and Z coordinates as this_CE, but that adjacent computing element’s Y coordinate is one more than this_CE’s Y coordinate.
else if Z_steps_remaining_until_at_the_recipient_CE is a negative number then
Add 1 to Z_steps_remaining_until_at_the_recipient_CE and transfer this message to that adjacent computing element whose XYZ coordinate has the same X and Y coordinates as this_CE, but that adjacent computing element’s Z coordinate is one less than this_CE’s Z coordinate.
else if Z_steps_remaining_until_at_the_recipient_CE is a positive number then
Subtract 1 from Z_steps_remaining_until_at_the_recipient_CE and transfer this message to that adjacent computing element whose XYZ coordinate has the same X and Y coordinates as this_CE, but that adjacent computing element’s Z coordinate is one more than this_CE’s Z coordinate.
else this_CE is the recipient and the transfer of this message to the recipient computing element is complete.
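
Here is a minimal Python sketch of this message-transmission algorithm; the dict keys are illustrative shorthand for the three steps_remaining_until_at_the_recipient_CE counters:

def step_toward_recipient(msg, this_ce_xyz):
    # returns the XYZ of the adjacent computing element to transfer the
    # message to, or None when this_CE is the recipient
    x, y, z = this_ce_xyz
    for axis, key in ((0, 'X_steps'), (1, 'Y_steps'), (2, 'Z_steps')):
        steps = msg[key]
        if steps != 0:
            direction = 1 if steps > 0 else -1
            msg[key] = steps - direction
            next_xyz = [x, y, z]
            next_xyz[axis] += direction
            return tuple(next_xyz)
    return None  # all three counters are zero

# illustrative example: route a message from (2, 5, 7) to (0, 6, 7)
msg = {'X_steps': 0 - 2, 'Y_steps': 6 - 5, 'Z_steps': 7 - 7}
pos = (2, 5, 7)
while True:
    next_pos = step_toward_recipient(msg, pos)
    if next_pos is None:
        break
    pos = next_pos
print(pos)  # (0, 6, 7)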

An Efficient Gravity Algorithm

Note: In various places in this gravity algorithm, to avoid excessive repetition, I say that a computing element, or a parent, or a child, or this_CE, does some specific action. However, in all such cases, it is actually the computing-element program that is running on that computing element, that is, in effect, doing that action.

Note: Because of many references, both forward and backward, to specific paragraphs within this gravity algorithm, each paragraph in this gravity algorithm, not including the paragraphs in lists and code, is prefixed with a paragraph number, beginning with the next paragraph:

p1:: In this gravity algorithm, the level number, n, is a non-negative integer. Assume that the computing-element program defines a hierarchy of cubes for gravity. Each cube at level (n + 1) in the hierarchy is three times wider than the width of the cube at level n. The lowest level in this hierarchy is level 0, and each cube at level 0 is a single computing element, and its width is one computing-element wide. Each cube at level 1 is three computing elements wide and contains a total of 3×3×3 = 27 computing elements. The width of each cube at level n is 3ⁿ computing-element widths, and each cube at level n contains 3^(3 × n) computing elements.

p2:: There is a finite number of these levels defined in the computing-element program. To get an approximation of what the largest level number is, the largest and most massive objects in our universe that appear to be held together by gravity, are the galaxies. The width of our Milky Way galaxy is roughly 100,000 light years, and our galaxy is roughly average in terms of galaxy size, so let’s guess that, for the purpose of creating gravity, the largest cube defined by the computing-element program has a width of 100,000 light years. Assuming that the width of a computing element is 10⁻¹⁶ centimeters wide (chapter 1), and given that one light year is a distance of about 10¹⁷ centimeters, the number of computing-element widths in 100,000 light years is ((10¹⁷ × 100,000) ÷ 10⁻¹⁶) which is 10³⁸ computing-element widths. Then, to compute the level number for a cube that has a side width of roughly 100,000 light years, find the nearest integer value for n in the equation 3ⁿ = 10³⁸. Using my calculator, n is 80. Assume that the level number n ranges from 0 to 80 inclusive.
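
This level-number estimate can be checked with a short Python computation:

import math

target = (1e17 * 100_000) / 1e-16  # 10³⁸ computing-element widths
n = round(math.log(target, 3))     # nearest integer n in the equation 3ⁿ = 10³⁸
print(n)  # 80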

p3:: Also, to avoid edge cases regarding this hierarchy of cubes for gravity, let’s assume that our universe is contained within a cube of computing elements, and this universe-containing cube’s side width is an integer multiple of the side width of a cube at level 80 (assuming that 80 is the largest level number).

p4:: Each cube at level (n > 0) contains exactly 27 cubes at level (n − 1).

p5:: For each cube at level (n > 0), denote the single computing element at the center of that cube as the parent of that cube.

p6:: For each cube at level 1, the 27 computing elements in that cube are the children of that cube’s level 1 parent—one of those 27 children being that parent itself.

p7:: For each cube at level (n > 1), that level n cube’s parent has 27 children (each of these 27 children is a computing element)—one of those 27 children being that parent itself. And each of those 27 children is itself a parent at level (n − 1).

p8:: Each mass message (see paragraphs p10 and p12) is sent by the message-transmission algorithm given at the beginning of this footnote, because the recipient of each mass message is a specific computing element identified by its XYZ coordinate. The message type identifies the message as a mass message, and the message text has only two components: mm_com_XYZ which is a center-of-mass coordinate, and mm_tm which is a total-mass amount (assume that mm_tm is an integer variable). IMPORTANT: When a child sends a mass message to its parent (see paragraphs p10 and p12), and that child is its parent (the child and its parent are the same computing element), then that mass message, in effect, is sent to itself, but without actually using a message-transmission algorithm to send it to itself.

p9:: A computing element’s status of being or not being a parent at level n, for levels 1 thru 80, is only dependent on that computing element’s XYZ coordinate. Assume that shortly after each computing element came into existence, its computing-element program computed for that computing element (denote as this_CE) the levels, if any, at which this_CE is a parent (see paragraph p29 for the code to do this). And, for each level (n < 80) at which this_CE is a parent, also computed is the XYZ coordinate of that parent’s parent at level (n + 1) (see paragraph p30 for the code to compute a parent’s parent). And all this computed parent info is saved in this_CE’s state information for use by this gravity algorithm. And, immediately after computing and storing all this parent info, also assume that for each level n at which this_CE is a parent, that memory is allocated and initialized for saving the most recently received mass message from each of its 27 children at level (n − 1) (see paragraph p12; assume that each of the 27 entries is initialized to null). Also, if this_CE is not a parent at level 1, then save in this_CE’s state information the XYZ coordinate of this_CE’s parent at level 1 (see paragraph p29 for the code to do this), because this_CE will send a mass message to that parent at level 1 whenever the mass currently held by this_CE changes (see paragraph p10).

p10:: Each computing element at level 0 sends a mass message to its parent at level 1 whenever the mass currently held by that computing element changes. The mass message’s mm_com_XYZ is set to that computing element’s this_CE's_XYZ, and mm_tm is set to the mass currently held by that computing element (this mm_tm value will be set to zero if this computing element was most recently holding a particle with a nonzero mass, but is currently not holding a particle with a nonzero mass).

p11:: Note: In this gravity algorithm, the phrase “a particle with a nonzero mass” is equivalent to the phrase “a particle with a positive mass”, because I see no reason for the computing-element program to define or allow a particle or particle type to have a negative mass as its mass attribute. Also, physics has no experimental evidence for the existence of anything, with respect to gravity, having a negative mass. Thus, assume that particles having a negative mass do not exist.

p12:: Each parent at level (n > 0) can potentially receive at any time a mass message from any of its 27 children at level (n − 1). That parent (a computing element) saves in its allocated memory for level n (see paragraph p9) the most recently received mass message from each of its 27 children at level (n − 1). Periodically, that parent examines the 27 entries in that allocated memory for level n (any null entries—see paragraph p9—are ignored), and adds up the masses into a total-mass value tm, and, if the value of tm is greater than zero, computes a center-of-mass coordinate com_XYZ for that tm value (note: this computed com_XYZ coordinate will lie within the level n cube for which it was computed). If this level n is not the maximum level number 80, then this parent at level n sends a mass message to its parent at level (n + 1): mm_tm is set to the computed tm; if the value of tm is greater than zero, then mm_com_XYZ is set to the computed com_XYZ, otherwise mm_com_XYZ is set to null. (Note: Computing the center-of-mass coordinate for 27 masses whose center-of-mass XYZ coordinate for each of those 27 masses is known, is a simple low-cost computation. For the formula, see, for example, Center of Mass for Particles, extended to three dimensions, at http://hyperphysics.phy-astr.gsu.edu/hbase/cm.html.)
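
Here is a minimal Python sketch of this periodic computation over the 27 saved entries; the names are illustrative, and None stands for a null entry:

def total_mass_and_center_of_mass(saved_entries):
    # saved_entries holds the most recently received mass message from each
    # of the 27 children: each entry is None or a (mm_com_XYZ, mm_tm) pair
    entries = [e for e in saved_entries if e is not None]
    tm = sum(mm_tm for _, mm_tm in entries)
    if tm == 0:
        return 0, None
    com_XYZ = tuple(round(sum(mm_tm * com[axis] for com, mm_tm in entries) / tm)
                    for axis in range(3))  # mass-weighted average, per the formula
    return tm, com_XYZ

# illustrative example: only two children currently hold mass
print(total_mass_and_center_of_mass([((0, 0, 0), 3), ((9, 9, 9), 1)] + [None] * 25))
# prints (4, (2, 2, 2))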

p13:: Also, immediately after each parent at level (n > 0) periodically computes tm and com_XYZ (see paragraph p12), if that level n is at least MIN_LVL_FOR_SENDING_A_GRAVITY_MESSAGE, then that parent at level n sends a gravity-related message, which is either a gravity-info message or a gravity message (see paragraph p15). MIN_LVL_FOR_SENDING_A_GRAVITY_MESSAGE is an integer constant. I don’t know what value for MIN_LVL_FOR_SENDING_A_GRAVITY_MESSAGE would be best overall, but for the sake of being able to give a value for MIN_LVL_FOR_SENDING_A_GRAVITY_MESSAGE that is probably not too far from its actual value, let’s assume that a level n cube must be at least one centimeter wide before the parent of that level n cube will send gravity-related messages for that level n cube. And, assuming 10⁻¹⁶ centimeters is the side-width of a computing element, that means a line along an axis, of 10¹⁶ computing elements, has a length of one centimeter. And, given that the width of each cube at level n is 3ⁿ computing-element widths (see paragraph p1), then MIN_LVL_FOR_SENDING_A_GRAVITY_MESSAGE is the smallest integer such that (10¹⁶ <= 3^MIN_LVL_FOR_SENDING_A_GRAVITY_MESSAGE) (note: <= is less than or equal). Using my calculator, MIN_LVL_FOR_SENDING_A_GRAVITY_MESSAGE is 34.
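
This value of MIN_LVL_FOR_SENDING_A_GRAVITY_MESSAGE can be checked with a short Python computation:

import math

# the smallest integer n such that a level n cube, 3ⁿ computing-element
# widths wide, is at least one centimeter (10¹⁶ computing-element widths) wide
min_lvl = math.ceil(math.log(1e16, 3))
assert 3 ** (min_lvl - 1) < 1e16 <= 3 ** min_lvl
print(min_lvl)  # 34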

p14:: Only the computing-element program, when running this gravity algorithm, can send a gravity-related message. Learned programs cannot send a gravity-related message, nor can they send a mass message. Regarding the gravity message specifically, assume that a gravity message begins with a code that identifies that message as a gravity message to the computing-element program. Note that a gravity message does not specify any recipients, because every computing element within range of a sent gravity message is a recipient and will receive that gravity message and examine its content. The criteria by which a computing element decides if it will accept or ignore a received gravity message is given in paragraph p18. Each gravity message has four components in its message text: parent's_XYZ, parent's_level, center_of_mass_XYZ, and total_mass (these are the four values that, as paragraph p15 says, are set by the originating parent).

p15:: The parent's_XYZ, parent's_level, center_of_mass_XYZ, and total_mass values of a gravity-related message are set by the originating parent. However, the gravity message itself will be sent by the computing element at the center_of_mass_XYZ coordinate in that gravity message, which will most likely be a different computing element than the originating parent. Sending the gravity message from the computing element at the center_of_mass_XYZ coordinate, is done for reasons of overall efficiency, to minimize the total number of computing elements that will receive that sent gravity message. The following procedure is done whenever a parent at level (n >= MIN_LVL_FOR_SENDING_A_GRAVITY_MESSAGE) is ready to send a gravity-related message, which is either a gravity-info message or a gravity message (note: >= is greater than or equal):

  1. If the center_of_mass_XYZ coordinate is not the same XYZ coordinate as the parent's_XYZ coordinate (not being the same is most likely), then the parent uses the message-transmission algorithm given at the beginning of this footnote to send a gravity-info message to the computing element at that center_of_mass_XYZ coordinate. The message type identifies the message as a gravity-info message, and the message text has four components: parent's_XYZ, parent's_level, center_of_mass_XYZ, and total_mass.

    When the computing element at that center_of_mass_XYZ coordinate receives this gravity-info message, the gravity algorithm at that computing element puts together a complete gravity message, with the received parent's_XYZ, parent's_level, center_of_mass_XYZ, and total_mass values copied into the message text of that gravity message. Also, the send_distance for this gravity message is computed using the same “set send_distance” formula shown below in step 2. And then this gravity message is sent by that computing element using the message-transmission algorithm given in subsection 3.8.5, that, in effect, sends the message out into 3D space, filling a sphere whose radius is the send_distance for that message. Also, that computing element, in effect, sends that gravity message to itself, but without actually using a message-transmission algorithm to send it to itself.

  2. If the center_of_mass_XYZ coordinate is the same XYZ coordinate as the parent's_XYZ coordinate (not likely, but it can happen), then complete the gravity message to be sent. Because the force of gravity between two masses decreases with the square of the distance between those two masses, compute the send_distance for this gravity message as:

    set send_distance to ((a constant defined in the computing-element program’s gravity algorithm) × square_root_of(this gravity message’s total_mass))

    The idea here is to not send a gravity message further out into 3D space than its total_mass amount can gravitationally affect in a significant way a particle of average mass.

    The above (send_distance = ((a constant) × square_root_of(total_mass))) formula was derived by simple algebra applied to Newton’s formula (see paragraph p21 for Newton’s formula for the gravitational force between two masses): First, the other mass in Newton’s formula is assumed here to be a nonzero-mass particle of average mass (perhaps this average mass is roughly the mass of a hydrogen atom), thus, in effect, the other mass is a constant (denote as k1). Also, the gravitational force on the left side of Newton’s formula is assumed here to be, in effect, another constant (denote as k2), because we want the minimum gravitational force that can affect a nonzero-mass particle of average mass in a significant way. Thus, in this case, with two constants, Newton’s formula reduces to (k2 = ((k1 × total_mass) ÷ (d²))). Then, multiply both sides by d², divide both sides by k2, and then take the square-root of both sides, giving us (send_distance = ((square_root_of(k1) ÷ square_root_of(k2)) × square_root_of(total_mass))), and, because (square_root_of(k1) ÷ square_root_of(k2)) is itself a constant, we get the above (send_distance = ((a constant) × square_root_of(total_mass))) formula (a small code sketch of this send_distance computation appears after this list).

    After computing the send_distance, the parent sends the gravity message using the message-transmission algorithm given in subsection 3.8.5, that, in effect, sends the message out into 3D space, filling a sphere whose radius is the send_distance for that message. Also, that parent, in effect, sends that gravity message to itself, but without actually using a message-transmission algorithm to send it to itself.

    Presumably, this limitation on the send_distance for a gravity message, along with a less frequent send rate as the level number n increases (see paragraph p16), prevents each computing element in our universe from being overwhelmed by too many gravity messages being received by that computing element.
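
As referenced above, here is a minimal Python sketch of this send_distance computation; the value of the constant in the computing-element program is unknown, so the 1.0 in the example is purely illustrative:

import math

def gravity_send_distance(total_mass, constant):
    # send_distance = ((a constant) × square_root_of(total_mass))
    return round(constant * math.sqrt(total_mass))

# quadrupling total_mass doubles the send_distance, as expected from the
# inverse-square law that the formula was derived from
assert gravity_send_distance(400, 1.0) == 2 * gravity_send_distance(100, 1.0)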

p16:: In paragraph p12, I say that the parent at level n, for n > 0, periodically computes a total-mass and center-of-mass, and—if n < 80—sends this computed info in a mass message to that parent’s parent, and—see paragraph p13—sends this computed info in a gravity-related message if n >= MIN_LVL_FOR_SENDING_A_GRAVITY_MESSAGE. But how often is “periodically”? More specifically, how many times per second (denote as tps) is “periodically”? It makes sense that tps decreases as n increases. For example, at the two extremes for level n > 0, 1 and 80, I think it likely that for a parent at level 1, tps is more than a billion times per second, but for a parent at level 80, tps is less than one time per second. Note that the effect of gravity has all the mass in our universe in motion relative to the computing elements (for example, our Earth is moving thru space at roughly 1/245th of lightspeed—see the footnote in subsection 3.8.6). The computing elements, in effect, are the 3D space within which we and our universe exists. Thus, the movement of an object thru 3D space is movement of that object’s particles thru the computing elements that compose that 3D space. The higher the level number n, the larger its level n cube (see paragraphs p1 and p2), and because of this greater cube size, gravitationally significant and substantial change to the periodically computed total-mass and center-of-mass for that cube by its parent, will, in general, happen over a slower time frame than for a smaller cube size.

p17:: Every computing element in our universe is a potential recipient of gravity messages. Assume that shortly after each computing element came into existence, its computing-element program computed for that computing element (denote as this_CE) the XYZ coordinates of the originating parents from which this_CE will accept gravity messages. More specifically, compute at each level n, for levels MIN_LVL_FOR_SENDING_A_GRAVITY_MESSAGE thru 80 inclusive, the XYZ coordinate of the parent for the level n cube that contains this_CE, and the XYZ coordinates of the 26 parents for the level n cubes that immediately surround the level n cube that contains this_CE (to make clear what is meant by “immediately surround”, these 27 cubes together have the shape of a cube; see paragraph p32 for the code to compute the XYZ coordinates of these 27 parents at level n). The computing-element program saves into this_CE’s state information an initialized gmr entry—gmr stands for “gravity message received”—for each of these 27 parents at level n, for levels MIN_LVL_FOR_SENDING_A_GRAVITY_MESSAGE thru 80 inclusive (assuming MIN_LVL_FOR_SENDING_A_GRAVITY_MESSAGE is 34 and the maximum level number is 80, there are ((80 − 34) + 1) × 27 = 1,269 gmr entries in this_CE’s state information). Each gmr entry has five components: from_parent_XYZ, level_num, time_received, center_of_mass_XYZ, and total_mass. Each gmr entry is initialized as follows: from_parent_XYZ is set to the parent’s XYZ coordinate, level_num is set to the parent’s level number, and time_received, center_of_mass_XYZ, and total_mass, are each initialized to null. After all the gmr entries have been initialized, they are then sorted into ascending key order (the key of each gmr entry has two components: the from_parent_XYZ value and the level_num value). The reason for this sort is so that an efficient binary search can be used in paragraph p18 whenever a computing element receives a gravity message, to determine if that computing element has a gmr entry for that gravity message. After all the gmr entries have been initialized, and then sorted, this_CE is ready to receive gravity messages.
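
Here is a minimal Python sketch, with illustrative names, of initializing the gmr entries, sorting them into ascending key order, and the binary search done when a gravity message is received (Python’s bisect module provides the binary search):

import bisect

def make_gmr_entries(parents_by_level):
    # parents_by_level maps each level number (MIN_LVL_FOR_SENDING_A_GRAVITY_MESSAGE
    # thru 80) to the 27 parent XYZ coordinates for that level
    entries = [{'from_parent_XYZ': xyz, 'level_num': lvl, 'time_received': None,
                'center_of_mass_XYZ': None, 'total_mass': None}
               for lvl, parents in parents_by_level.items() for xyz in parents]
    entries.sort(key=lambda e: (e['from_parent_XYZ'], e['level_num']))
    keys = [(e['from_parent_XYZ'], e['level_num']) for e in entries]
    return entries, keys

def find_gmr_entry(entries, keys, parents_XYZ, parents_level):
    # binary search on the sorted (from_parent_XYZ, level_num) key
    i = bisect.bisect_left(keys, (parents_XYZ, parents_level))
    if i < len(keys) and keys[i] == (parents_XYZ, parents_level):
        return entries[i]
    return None  # no matching gmr entry: ignore the received gravity message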

p18:: After the gmr entries have been initialized and then sorted (see paragraph p17), whenever a computing element (denote as this_CE) receives a gravity message, it does the following: If one of this_CE’s gmr entries has the same from_parent_XYZ and level_num values as that gravity message’s parent's_XYZ and parent's_level values, respectively, then set that gmr entry’s center_of_mass_XYZ and total_mass values to that gravity message’s center_of_mass_XYZ and total_mass values, respectively, and set time_received to the current time on this_CE’s internal clock; otherwise, ignore that received gravity message.

p19:: In preparation for paragraphs p21, p22, p24, and p26: First, a brief description of force vectors and the terminology used in this footnote: The tail-point XYZ coordinate and the head-point XYZ coordinate are the two end points of a force vector, and the direction of the force vector—the direction of the force—relative to the tail-point, is towards the head-point. Also, the length of the force vector (the distance between the tail-point and the head-point) is the amount of force. Thus, a force vector, described by two XYZ coordinates, represents an amount of force and the direction of that force relative to the tail-point. Also, the unit of force must be the same for any two force vectors added together, and this is directly relevant to paragraph p22 in which different kinds of force vectors are added together (each of these force vectors can contribute to moving the nonzero-mass particle held by this_CE, and this_CE’s XYZ coordinate is the tail-point for each of these force vectors). Note: the head-to-tail method for adding together force vectors is described in paragraph p26.

p20:: The current values of center_of_mass_XYZ and total_mass in each of this_CE’s gmr entries are used to compute the current gravitational force on the particle currently held by this_CE whenever this_CE begins holding a particle with a nonzero mass. For this gravitational-force calculation, ignore any gmr entry whose current total_mass value is null or zero, or whose current center_of_mass_XYZ value is the same XYZ coordinate as this_CE’s XYZ coordinate. For each gmr entry not already ignored by the immediately preceding tests, check that gmr entry’s time_received value as shown in the following code, and ignore that gmr entry if too much time has passed since that gmr entry, in effect, last received a gravity message (note: the text between “/*” and “*/” is a comment, not code):

/*
Note: This code is only concerned with gmr entries, and gmr entries are only for parents at level (n >= MIN_LVL_FOR_SENDING_A_GRAVITY_MESSAGE).

Note: It is assumed that the internal clock of each computing element ticks at about the same rate.

Referring to paragraph p16, for a given level n, for n > 0, the gravity algorithm specifies a constant that is a time interval (denote as k) for that level n (assume that for the gravity algorithm, the time intervals are measured in clock ticks, and k is an integer that is a specific number of clock ticks). For level n, k is the wanted time interval between any two successively sent gravity-related messages by a parent at that level n.

Two reasons for the safety_factor in the below code:
  1. Note that the computing-element program is like an operating system and is assumed to be multitasking, and, even though one can assume that the gravity algorithm runs at a high priority, it is always possible that a specific parent at level n has one or more other high-priority tasks to do at the same time that it is supposed to be doing the work needed to prepare and send that next gravity-related message. Thus, even though the k value for level n is an exact and unchanging constant in the gravity algorithm, one can only assume that a given parent at level n will be sending a gravity-related message approximately every k clock ticks and not exactly every k clock ticks. Thus, k is an approximation, albeit probably a close approximation, of the time interval between two successively sent gravity-related messages by a given parent at level n.

  2. For a given gmr entry and its from_parent_XYZ value, even though the distance between this_CE’s XYZ coordinate and that parent’s from_parent_XYZ coordinate is fixed and unchanging, the most recently received gravity message in that gmr entry, was sent from the computing element at that gmr entry’s center_of_mass_XYZ coordinate (see paragraph p15), and this center_of_mass_XYZ coordinate, for that gmr entry, can change from one received gravity message to the next, which means that the message transit time for each of two successively received gravity messages for that gmr entry, can be different (this message transit time includes the transit time of the gravity-info message, if any, that resulted in that gravity message).

    Note: Reason 1 is why I chose a value of 2 for the below safety_factor. If instead, there was only reason 2 to consider, then a safety_factor of 1.01 would, I think, be more than enough.

The below code, which, in effect, ignores an “old” gravity message, is needed for the following reason: For a given gmr entry, which specifies a specific parent and level number, the most recently sent gravity message that originated from that parent at that level, won’t be received by this_CE if this_CE is not within range of that most recently sent gravity message, even though this_CE has received in the past, perhaps very recently, gravity messages that originated from that parent at that level (denote the most recently received gravity message from that parent at that level, whose center_of_mass_XYZ and total_mass values are currently in that gmr entry, as RR). This situation can happen because:
  1. The total_mass value in that most recently sent gravity message is now smaller (compared to RR) which reduces the send_distance for that most recently sent gravity message (see paragraph p15).

    OR

  2. The center_of_mass_XYZ coordinate in that most recently sent gravity message, which is the coordinate of the computing element from which that most recently sent gravity message was sent (see paragraph p15), is now further away (compared to RR) from this_CE.

    OR

  3. Both 1 and 2.

Thus, unless it is “aged” and, in effect, discarded when too old, the center_of_mass_XYZ and total_mass from an old gravity message—whose center_of_mass_XYZ and/or total_mass values are no longer correct when compared to the most recently sent gravity message that originated from that parent at that level—will remain in that gmr entry until the next gravity message is received from that parent at that level, which, in the worst case, can be very far in the future or never.
*/
set elapsed_time to ((the current time on this_CE’s internal clock) − (this gmr entry’s time_received))

set safety_factor to 2  /* See the above comments regarding this safety_factor. */

set clock_ticks_between_sends to (the gravity algorithm’s k value for this gmr entry’s level_num)  /* See the above comments regarding k. */

set allowed_elapsed_time_until_ignore to (safety_factor × clock_ticks_between_sends)

if elapsed_time > allowed_elapsed_time_until_ignore
then
Ignore this gmr entry. And also set its total_mass to zero so that this gmr entry will have to, in effect, receive another gravity message before this code can be executed again for this gmr entry.
end if

p21:: For each gmr entry not ignored (see paragraph p20): First use the distance formula to compute the distance d between this_CE’s XYZ coordinate and that gmr entry’s center_of_mass_XYZ coordinate. Then, use Newton’s formula for computing the force of gravity between two masses: set temp to (((the mass of the particle currently held by this_CE) × (that gmr entry’s total_mass value)) ÷ (d²)). Then compute that gmr entry’s gravitational force on this_CE’s held particle as being (temp × (a constant defined in the computing-element program for converting the temp value so that its unit of measurement is the force unit used within the computing-element program for those forces that can contribute to moving a nonzero-mass particle; see paragraphs p19 and p22)). The gravitational force vector representing that gmr entry’s gravitational force on the particle currently held by this_CE, begins at this_CE’s XYZ coordinate (this is the vector’s tail-point), and the vector’s length is that gmr entry’s above computed gravitational force, and this vector lies on the line that runs thru these two XYZ points: this_CE’s XYZ coordinate, and that gmr entry’s center_of_mass_XYZ coordinate (computing this vector’s head-point is detailed in paragraph p24). Then, after all these gravitational force vectors have been computed (as many as 1,269 gravitational force vectors, assuming each computing element has 1,269 gmr entries; see paragraph p17), these gravitational force vectors are added together to get a single total-gravitational-force vector that represents the net effect of all the above computed gmr gravitational force vectors on the particle currently held by this_CE.

p22:: After the total-gravitational-force vector is computed (see paragraph p21), that total-gravitational-force vector is added together with other currently applicable force vectors, if any, that can contribute to moving the nonzero-mass particle currently held by this_CE (a final composite force vector is the result of these added-together force vectors). As an example of possible force vectors added to the total-gravitational-force vector, there is probably a momentum force vector for that particle currently held by this_CE, and if that particle is a bion, there may still be an active force vector that resulted from one of that bion’s learned programs very recently calling the learned-program statement move_this_bion(). After this_CE has computed this final composite force vector for moving the particle that this_CE currently holds, this_CE uses this final composite force vector as input for the computing-element program’s algorithm for determining how long this_CE will hold the particle it is currently holding. This final composite force vector is also used, along with some history info (described in paragraph p23), as input into a different algorithm that determines which of this_CE’s six adjacent computing elements to copy that particle’s information block to when this_CE ends its current hold of that particle.

p23:: Regarding paragraph p22 and the “history info” mentioned: Assuming that this_CE is not at the edge of the universe, this_CE has six adjacent computing elements. The final composite force vector computed by this_CE (see paragraph p22) can be pointing in any direction, but there are only six directions in which to move the particle currently held by this_CE. Thus, a correction is needed so that, even though the particle will likely be moving in a very jagged path on the scale of the computing elements, on a larger scale that particle will be moving along the line or curve that the sequence of computed final composite force vectors would have that particle moving along. In addition to the final composite force vector computed by this_CE, also needed as input into the algorithm that decides which of this_CE’s six adjacent computing elements to move that held particle to, is a history of the final composite force vector computed by each of the most recent q computing elements that have held that particle, for some integer number q. Note that each final composite force vector in this history info has as its tail-point the XYZ coordinate of the computing element that computed that final composite force vector. After this history info—that this_CE got from the previous computing element that held the particle that this_CE is currently holding—is used by that algorithm, along with the final composite force vector computed by this_CE, to determine which of this_CE’s six adjacent computing elements to move that held particle to, this history info is then updated: The oldest final composite force vector in this history info is deleted, and the final composite force vector computed by this_CE is added. Then, when this_CE finally moves that particle to an adjacent computing element, this updated history info is also passed along with that particle to that adjacent computing element.

p24:: Regarding paragraph p21 and the gravitational force vector for a gmr entry that is not ignored (denote this gravitational force vector as vector V), vector V’s head-point is computed as follows: Vector V’s tail-point is this_CE’s XYZ coordinate (denote the X, Y, and Z components of this tail-point as x1, y1, and z1, respectively). Also, denote the X, Y, and Z components of the gmr entry’s center_of_mass_XYZ value as x2, y2, and z2, respectively. Note that vector V lies on the line that runs thru the two points (x1, y1, z1) and (x2, y2, z2). Also, the length of vector V (denote as L) was computed in paragraph p21, and this computed length L will always be greater than zero because of the assumption of no negative masses (see paragraph p11) and the conditions for ignoring gmr entries (see paragraph p20). Note that the unknown regarding vector V is its head-point (denote the X, Y, and Z components of vector V’s head-point as x3, y3, and z3, respectively). Thus, given point (x1, y1, z1) which is both this_CE’s XYZ coordinate and vector V’s tail-point, and given point (x2, y2, z2) which is the gmr entry’s center_of_mass_XYZ, and given L (the length of vector V), we need to compute the head-point (x3, y3, z3) of vector V. Note that the computed value of (x3, y3, z3) must satisfy two different equations. The first equation is the equation of a line when two different points on that line—in this case, (x1, y1, z1) and (x2, y2, z2)—are known: ((x3 − x1) ÷ (x2 − x1)) = ((y3 − y1) ÷ (y2 − y1)) = ((z3 − z1) ÷ (z2 − z1)). The other equation is the distance formula: L = square_root_of((x3 − x1)² + (y3 − y1)² + (z3 − z1)²). In June 2017, I used the latest version of a commercially available program (Mathematica ®) to solve this. In the below rules and formula, R identifies either x3, y3, or z3 as the value to be computed by the below formula:

Let d be the distance, computed using the distance formula, between the two points (x1, y1, z1) and (x2, y2, z2) (note that d is always greater than zero, because the conditions for ignoring gmr entries—see paragraph p20—guarantee that these two points are different). Then, with r1 and r2 denoting the components of the tail-point and of center_of_mass_XYZ that correspond to R (for example, when R is y3, r1 is y1 and r2 is y2), the formula is:

R = (r1 + ((L ÷ d) × (r2 − r1)))

The computed (x3, y3, z3) satisfies both of the above equations: for each of the three components, ((R − r1) ÷ (r2 − r1)) is (L ÷ d), so the line equation’s equalities hold, and substituting ((L ÷ d) × (r2 − r1)) for each (R − r1) in the distance formula gives ((L ÷ d) × d), which is L.

p25:: For example, if L is 2.7 and (x1, y1, z1) is (2, 5, 7) and (x2, y2, z2) is (9, 3, 6), then (accurate to 5 decimal places) x3 is 4.57196, y3 is 4.26515, and z3 is 6.63258. Note: Confirming that the computed head-point (x3, y3, z3) was computed correctly can be done by plugging the computed x3, y3, and z3 values into the two equations given in paragraph p24 (the line equation and the distance formula). For the line equation, confirm that the equalities hold (if the denominator is zero—division by zero—for any of the three terms in the line equation that are supposed to be equal to each other, then ignore that term when checking that the equalities hold). For the distance formula, confirm that the computed distance is the L value. (I’ve already used Mathematica to compute many test cases and confirm that the above rules and formula for computing vector V’s head-point are flawless. I assume that the above rules and formula are nothing new, and they may already be on the internet somewhere, but my attempt to find them online was unsuccessful, so I had to do the work myself with Mathematica’s help, because I didn’t want to leave any part of this gravity algorithm unfinished.)
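
Here is a minimal Python sketch of the above head-point formula, reproducing this example:

import math

def head_point(tail_XYZ, toward_XYZ, L):
    # head-point of a vector with the given tail-point and length L, lying on
    # the line thru the two points and directed at toward_XYZ
    (x1, y1, z1), (x2, y2, z2) = tail_XYZ, toward_XYZ
    d = math.sqrt((x2 - x1) ** 2 + (y2 - y1) ** 2 + (z2 - z1) ** 2)
    f = L / d
    return (x1 + f * (x2 - x1), y1 + f * (y2 - y1), z1 + f * (z2 - z1))

x3, y3, z3 = head_point((2, 5, 7), (9, 3, 6), 2.7)
print(round(x3, 5), round(y3, 5), round(z3, 5))  # 4.57196 4.26515 6.63258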

p26:: Regarding paragraphs p21 and p22, the fastest and lowest-computation-cost way to add together force vectors is the head-to-tail vector addition method (for a description of this method, see, for example, Vectors at https://www.mathsisfun.com/algebra/vectors.html). To show how this head-to-tail vector addition method would be done in the case of the force-vector additions done in paragraphs p21 and p22, first note that all these force vectors have the same tail-point which is this_CE’s XYZ coordinate, which is the XYZ coordinate of the particle that these force vectors will act upon. Next, because paragraph p21 has many more force vectors to be added together than paragraph p22, let’s assume that we have 1,269 gravitational force vectors to add together (see paragraph p21). The order in which the vectors are added together doesn’t matter, because the same final head-point XYZ is the result. Note: the word “translate” with regard to a vector means to move that vector without changing its orientation and length. To add a vector, translate that vector so that its tail-point XYZ is moved to the current head-point XYZ of the growing vector chain (denote the current vector to be added to the vector chain as vector G, and denote vector G’s tail-point and head-point as G_TAIL_XYZ and G_HEAD_XYZ, respectively; also denote the current head-point XYZ of the growing vector chain as VC_HEAD_XYZ, and note that the first VC_HEAD_XYZ value is the head-point XYZ of whichever of the 1,269 force vectors is chosen as the first link in the vector chain—the other 1,268 force vectors are then added in sequence to that vector chain). To add vector G to the current vector chain, using the head-to-tail vector addition method, first set diff_X to (VC_HEAD_XYZ.X − G_TAIL_XYZ.X), set diff_Y to (VC_HEAD_XYZ.Y − G_TAIL_XYZ.Y), and set diff_Z to (VC_HEAD_XYZ.Z − G_TAIL_XYZ.Z). Then, compute the new head-point XYZ of the vector chain: set VC_HEAD_XYZ.X to (G_HEAD_XYZ.X + diff_X), set VC_HEAD_XYZ.Y to (G_HEAD_XYZ.Y + diff_Y), and set VC_HEAD_XYZ.Z to (G_HEAD_XYZ.Z + diff_Z). Then, after all 1,269 gravitational force vectors are in the vector chain, the total-gravitational-force vector is the vector whose tail-point XYZ is the XYZ coordinate of this_CE, and its head-point XYZ is the final VC_HEAD_XYZ value of that vector chain, and the net amount of that vector chain’s gravitational force on the particle held by this_CE is the length of that total-gravitational-force vector, which is the distance, computed using the distance formula, between that total-gravitational-force vector’s tail-point XYZ and its head-point XYZ.
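
Here is a minimal Python sketch of this head-to-tail addition for force vectors that all share the same tail-point (this_CE’s XYZ coordinate); the names are illustrative:

def add_force_vectors(tail_XYZ, head_XYZs):
    # translate each vector so that its tail-point sits on the chain's current
    # head-point; the chain's new head-point is the translated vector's head-point
    chain_x, chain_y, chain_z = tail_XYZ
    for hx, hy, hz in head_XYZs:
        chain_x += hx - tail_XYZ[0]
        chain_y += hy - tail_XYZ[1]
        chain_z += hz - tail_XYZ[2]
    return (chain_x, chain_y, chain_z)

# illustrative example: two unit-length force vectors, along +X and along +Y
print(add_force_vectors((0, 0, 0), [(1, 0, 0), (0, 1, 0)]))  # (1, 1, 0)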

Detailed Code for computing Parents, a Parent’s Parent, and Surrounding Parents

p27:: Paragraphs p9 and p17 describe initializations for this gravity algorithm, done by each computing element shortly after that computing element came into existence. Paragraph p9 says:

Assume that shortly after each computing element came into existence, its computing-element program computed for that computing element (denote as this_CE) the levels, if any, at which this_CE is a parent (see paragraph p29 for the code to do this). And, for each level (n < 80) at which this_CE is a parent, also computed is the XYZ coordinate of that parent’s parent at level (n + 1) (see paragraph p30 for the code to compute a parent’s parent). … Also, if this_CE is not a parent at level 1, then save in this_CE’s state information the XYZ coordinate of this_CE’s parent at level 1 (see paragraph p29 for the code to do this), because this_CE will send a mass message to that parent at level 1 whenever the mass currently held by this_CE changes (see paragraph p10).

p28:: A cube has eight corner points. Assuming our universe is a giant cube of computing elements (see paragraph p3), our universe has eight corner points, and each of these eight corner points is a computing element that has an XYZ coordinate. To compute the XYZ coordinates of parents as needed by paragraphs p9 and p17, we have to know which of those eight corner points in our universe has the lowest X, Y, and Z values. I think there are only two reasonable choices for the lowest-coordinate corner point in our universe: either XYZ coordinate (0, 0, 0) or XYZ coordinate (1, 1, 1). My own preference, and it makes the following math a little simpler, is XYZ coordinate (0, 0, 0), and the following math assumes that (0, 0, 0) is the XYZ coordinate of the computing element at the lowest-coordinate corner point in our universe.

p29:: For the following math, (x, y, z) is the XYZ coordinate of this_CE. And, n is the level number, for levels 1 thru 80. The code for computing the XYZ coordinate of the parent at level n, of the cube at level n that inclusively contains the computing element this_CE whose XYZ coordinate is (x, y, z), follows:

/*
This code assumes that the computing element at the lowest-coordinate corner point in our universe has XYZ coordinate (0, 0, 0).

x, y, and z are the X, Y, and Z components, respectively, of this_CE’s XYZ coordinate.

A few examples should make clear how the math operator integer_part_of() works: integer_part_of(0) is 0, integer_part_of(3) is 3, integer_part_of(3.001) is 3, integer_part_of(3.999) is 3, integer_part_of(3.9999999999999999999999999) is 3. Note: I’m assuming that the computing-element program can do all the math operations in this code with sufficient precision so that every computing element in our universe will have its parent info computed correctly.

As an example of how the below code works, if the level number n is 2 and this_CE’s XYZ coordinate (x, y, z) is (23, 6, 17), then cube_side_width is 9 and the low-corner XYZ is (18, 0, 9) = (9 × 2, 9 × 0, 9 × 1) and the add_this is 4 and the parent XYZ is (22, 4, 13) and this_CE is not a parent at level 2.
*/
/*
Part 1 of this code:
*/
set cube_side_width to 3^n  /* The width of a level n cube, measured in computing-element widths. */

set low_corner_X to (cube_side_width × integer_part_of(x ÷ cube_side_width))
set low_corner_Y to (cube_side_width × integer_part_of(y ÷ cube_side_width))
set low_corner_Z to (cube_side_width × integer_part_of(z ÷ cube_side_width))

set add_this to integer_part_of(cube_side_width ÷ 2)

set parent_X to low_corner_X + add_this
set parent_Y to low_corner_Y + add_this
set parent_Z to low_corner_Z + add_this

/*
Part 2 of this code:
*/
if x equals parent_X
and y equals parent_Y
and z equals parent_Z
then
this_CE is a parent at level n
else
this_CE is not a parent at level n
if n is 1  /* this_CE, which is not a parent at level 1, needs to know where to send its mass messages. */
then
Save in this_CE’s state information the XYZ coordinate of this_CE’s parent at level 1, which is the XYZ coordinate (parent_X, parent_Y, parent_Z).
end if
end if

p30:: When this_CE is a parent at level n, and n is less than the assumed maximum level number which is 80, to compute the XYZ coordinate of that parent’s parent at level (n + 1), just add 1 to n and compute Part 1 in the above code: the resulting XYZ coordinate (parent_X, parent_Y, parent_Z) is the XYZ coordinate of that parent’s parent.
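
For readers who want to experiment with the above computation, here is a short Python sketch (my illustration; Python’s floor division on nonnegative integers plays the role of integer_part_of(), and exact integer arithmetic sidesteps the precision concern raised in the code’s comment). It checks the worked example from paragraph p29 and the parent’s-parent computation from paragraph p30:

def parent_at_level(x, y, z, n):
    # Part 1: the XYZ coordinate of the parent of the level n cube that
    # inclusively contains the computing element at (x, y, z).
    # Coordinates are assumed nonnegative, with (0, 0, 0) at the
    # lowest-coordinate corner point of our universe (paragraph p28).
    w = 3 ** n  # cube_side_width: the width of a level n cube
    low_corner = (w * (x // w), w * (y // w), w * (z // w))
    add_this = w // 2
    return tuple(c + add_this for c in low_corner)

def is_parent_at_level(x, y, z, n):
    # Part 2: this_CE is the parent at level n exactly when it sits at
    # the parent XYZ coordinate of its own level n cube.
    return (x, y, z) == parent_at_level(x, y, z, n)

# The example from paragraph p29: level 2, this_CE at (23, 6, 17).
assert parent_at_level(23, 6, 17, 2) == (22, 4, 13)
assert not is_parent_at_level(23, 6, 17, 2)

# Paragraph p30: a level n parent's parent at level (n + 1) is computed
# by running Part 1 again with n + 1.
px, py, pz = parent_at_level(23, 6, 17, 2)
print(parent_at_level(px, py, pz, 3))  # (13, 13, 13)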

p31:: Paragraph p17 says:

Assume that shortly after each computing element came into existence, its computing-element program computed for that computing element (denote as this_CE) the XYZ coordinates of the originating parents from which this_CE will accept gravity messages. More specifically, compute at each level n, for levels MIN_LVL_FOR_SENDING_A_GRAVITY_MESSAGE thru 80 inclusive, the XYZ coordinate of the parent for the level n cube that contains this_CE, and the XYZ coordinates of the 26 parents for the level n cubes that immediately surround the level n cube that contains this_CE (to make clear what is meant by “immediately surround”, these 27 cubes together have the shape of a cube; see paragraph p32 for the code to compute the XYZ coordinates of these 27 parents at level n).

p32:: The XYZ coordinate of the parent for the level n cube that contains this_CE is computed by Part 1 in the above code, and this parent’s XYZ coordinate is (parent_X, parent_Y, parent_Z). The XYZ coordinates of the other 26 parents at level n, that “immediately surround” (parent_X, parent_Y, parent_Z), are:

set m to 3^n  /* The width of a level n cube, measured in computing-element widths. */

/*
The XYZ coordinates of the other 26 parents at level n, that “immediately surround” (parent_X, parent_Y, parent_Z), are:
*/
(parent_X + m, parent_Y, parent_Z)
(parent_X − m, parent_Y, parent_Z)
(parent_X, parent_Y + m, parent_Z)
(parent_X, parent_Y − m, parent_Z)
(parent_X, parent_Y, parent_Z + m)
(parent_X, parent_Y, parent_Z − m)

(parent_X + m, parent_Y + m, parent_Z)
(parent_X + m, parent_Y − m, parent_Z)
(parent_X − m, parent_Y + m, parent_Z)
(parent_X − m, parent_Y − m, parent_Z)

(parent_X + m, parent_Y, parent_Z + m)
(parent_X + m, parent_Y, parent_Z − m)
(parent_X − m, parent_Y, parent_Z + m)
(parent_X − m, parent_Y, parent_Z − m)

(parent_X, parent_Y + m, parent_Z + m)
(parent_X, parent_Y + m, parent_Z − m)
(parent_X, parent_Y − m, parent_Z + m)
(parent_X, parent_Y − m, parent_Z − m)

(parent_X + m, parent_Y + m, parent_Z + m)
(parent_X + m, parent_Y + m, parent_Z − m)
(parent_X + m, parent_Y − m, parent_Z + m)
(parent_X + m, parent_Y − m, parent_Z − m)
(parent_X − m, parent_Y + m, parent_Z + m)
(parent_X − m, parent_Y + m, parent_Z − m)
(parent_X − m, parent_Y − m, parent_Z + m)
(parent_X − m, parent_Y − m, parent_Z − m)
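
Equivalently, these 26 XYZ coordinates are every combination of adding −m, 0, or +m to the three components of (parent_X, parent_Y, parent_Z), excluding the combination that adds 0 to all three components. A short Python sketch (my illustration) that generates the list:

def surrounding_parents(parent_x, parent_y, parent_z, n):
    m = 3 ** n  # the width of a level n cube, in computing-element widths
    return [(parent_x + i * m, parent_y + j * m, parent_z + k * m)
            for i in (-1, 0, 1)
            for j in (-1, 0, 1)
            for k in (-1, 0, 1)
            if (i, j, k) != (0, 0, 0)]

assert len(surrounding_parents(22, 4, 13, 2)) == 26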

p33:: Regarding edge cases, if any of the above computed XYZ coordinates of the other 26 parents at level n are outside our universe, then it’s still okay for this_CE to have a gmr entry for that parent’s XYZ coordinate at that level n, because that gmr entry’s total_mass was initialized to null when that gmr entry was made (see paragraph p17), and there will never be a received gravity message from that XYZ coordinate because it’s not in our universe, which means that that gmr entry’s total_mass will always be null and that gmr entry will always be ignored (see paragraph p20). Note: Assuming that the computing element at the lowest-coordinate corner point in our universe has XYZ coordinate (0, 0, 0), any XYZ coordinate whose X, Y, or Z component is negative is outside our universe. However, I’m not making any assumption that the gravity algorithm or any other part of the computing-element program knows and uses the XYZ coordinate of the highest-coordinate corner point in our universe, because such knowledge simply isn’t needed.

Regarding the above Gravity Algorithm: Approximations and Efficiency

The above gravity algorithm has two separate approximations that make the computed gravitational force on the particle currently held by this_CE an approximation. The first approximation happens in the part of the algorithm where each cube at level (n > 0) adds up the mass in the 27 cubes at level (n − 1) that compose that level n cube, and computes a single center-of-mass XYZ coordinate for that sum of masses. Newton’s formula for computing the force of gravity between two masses assumes that the two masses are point masses, but only the mass of the particle held by this_CE (denote this mass as M1) is a point mass. The other mass (denote this mass as M2) is a sum of 27 individual masses, with a computed center-of-mass XYZ coordinate for that sum of 27 individual masses. In this case, the computed gravitational force vector between masses M1 and M2 will most likely be an approximation compared to the more accurate result that would be obtained by computing a separate gravitational force vector between M1 and each of the 27 separate masses at their original XYZ coordinates, and then adding together those 27 gravitational force vectors into a single gravitational force vector as the gravitational force between mass M1 and all of the 27 masses that were combined into mass M2. Although just an approximation, combining masses into a single mass at a single center-of-mass XYZ coordinate is a requirement for any efficient gravity algorithm within the computing-element reality model, because the alternative is to compute at each computing element holding a nonzero-mass particle, as many gravitational force vectors as there are nonzero-mass particles within whatever range one gives to the force of gravity. Given that our Earth alone has an estimated 10^50 atoms (How many atoms are there in the world? at http://education.jlab.org/qa/mathatom_05.html), the necessity of combining masses—before computing the gravitational force vectors acting on the particle currently held by this_CE—should be obvious.
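
To get a feel for the size of this first approximation, the following Python sketch (my illustration, using arbitrary made-up masses and positions) compares the exact sum of 27 individual gravitational force vectors against the single force vector computed from the combined mass placed at its center-of-mass XYZ coordinate; the farther away the 27-mass cluster is relative to its own width, the smaller the relative error:

import math
import random

G = 6.674e-11  # Newton's gravitational constant

def force_on(m1, p1, m2, p2):
    # The gravitational force vector acting on mass m1 at p1, due to mass m2 at p2.
    dx, dy, dz = p2[0] - p1[0], p2[1] - p1[1], p2[2] - p1[2]
    d = math.sqrt(dx * dx + dy * dy + dz * dz)
    f = G * m1 * m2 / (d * d)
    return (f * dx / d, f * dy / d, f * dz / d)

random.seed(1)
m1, p1 = 1.0, (0.0, 0.0, 0.0)  # the point mass held by this_CE

# A cluster of 27 masses, centered roughly 1,000 units away, about 10 units wide.
cluster = [(random.uniform(1.0, 2.0),
            (1000.0 + random.uniform(-5, 5),
             random.uniform(-5, 5),
             random.uniform(-5, 5)))
           for _ in range(27)]

# Exact: add together the 27 separate gravitational force vectors.
forces = [force_on(m1, p1, m, p) for (m, p) in cluster]
exact = tuple(sum(f[i] for f in forces) for i in range(3))

# Approximate: one force vector from the combined mass at its center of mass.
total_mass = sum(m for (m, p) in cluster)
com = tuple(sum(m * p[i] for (m, p) in cluster) / total_mass for i in range(3))
approx = force_on(m1, p1, total_mass, com)

relative_error = math.dist(exact, approx) / math.dist((0, 0, 0), exact)
print(f"relative error of the center-of-mass approximation: {relative_error:.2e}")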

The other approximation in the above gravity algorithm is that the parent of a level n cube, for n > MIN_LVL_FOR_SENDING_A_GRAVITY_MESSAGE, adds up the mass in each of the 27 level (n − 1) cubes that that level n cube contains, and computes the center-of-mass XYZ coordinate for that added-up mass, and then sends a gravity-related message for that level n cube reporting that added-up mass and its center-of-mass XYZ coordinate. However, each of that level n cube’s 27 level (n − 1) cubes has its own parent that has already sent a gravity-related message reporting that level (n − 1) cube’s share of that added-up mass, and the enclosing level n cube’s gravity-related message reports that share again. Thus, assuming 80 is the maximum level number—and ignoring time delays regarding message transmission, and the periodic wait by a level n parent before it sends a gravity-related message for that level—the mass of a single particle will be included in the total_mass of ((80 − MIN_LVL_FOR_SENDING_A_GRAVITY_MESSAGE) + 1) different gravity messages, each at a different level (if MIN_LVL_FOR_SENDING_A_GRAVITY_MESSAGE is 34, then 47 different gravity messages). (Perhaps this multiple counting of mass by the above gravity algorithm explains the so-called “missing mass” that astrophysicists say our galaxy has. However, this is just a suggestion, because I haven’t done any math or other work to support this suggestion, nor have I spent substantial time studying this “missing mass” subject and its justification.)

Regarding the above two approximations, I don’t think any physical experiment can be done on a small scale here on our Earth that would be able to reveal that the gravitational force vector that applies to an elementary particle of nonzero mass (denote this particle as particle X) is being imperfectly computed by the underlying reality. The reason is that it is impossible to, in effect, freeze time and stop all motion, and then precisely determine the mass and relative location in 3D space (relative to particle X) of every particle that is within gravitational range of that particle X, and then compute the gravitational force vector between each of those particles and particle X, and then add up all those computed gravitational force vectors to get the total-gravitational-force vector, and then compare this perfect total-gravitational-force vector with the exactly measured actual gravitational force that is acting on particle X. Also, any attempt to detect by physical experiment the existence of the cubes defined by the above gravity algorithm is impractical for multiple reasons, including the fact that our solar system is moving thru 3D space, and thus moving thru the computing elements, at roughly 1/245th of lightspeed, which is more than a million meters per second (see the footnote in subsection 3.8.6), which means that any physical experiment would be moving thru the smaller cubes at a very fast rate.

Regarding the efficiency of the above gravity algorithm, for those familiar with time complexity for an algorithm (see, for example, Time complexity at https://en.wikipedia.org/wiki/Time_complexity), all the operations done in the above gravity algorithm have constant-time cost, and some of the multiplying constants involved with those operations are small, such as 80 levels (see paragraph p2), each parent having 27 children (see paragraphs p6 and p7), and at most 1,269 gravitational force vectors—one for each non-ignored gmr entry—computed by a computing element after that computing element begins holding a particle with a nonzero mass (see paragraph p21). The only large multiplying constants for a computing element are the number of messages being sent and/or received by that computing element regarding the gravity algorithm. In the case of mass messages, assuming there are a lot of nonzero-mass particles moving thru a level 1 cube in a short period of time, such as when our Earth is passing thru a level 1 cube, there could easily be a large number of mass messages sent per second to that level 1 cube’s parent. However, mass messages are simple, with a low constant-time cost to prepare and send a mass message (see paragraphs p10 and p12), and a low constant-time cost to receive and process a mass message (see paragraph p12). In the case of gravity-related messages sent by a parent at the lowest level number that can send gravity-related messages, my guess is that billions of gravity-related messages per second are sent by that parent. However, computing the center_of_mass_XYZ and total_mass for 27 children is a simple operation with a low constant-time cost (see paragraph p12), and then preparing and sending a gravity-related message is also a simple operation with a low constant-time cost (see paragraph p15). In the case of a computing element receiving gravity messages, my guess is that a computing element currently within our Earth is receiving at least billions of gravity messages per second (recall that the range of a gravity message is dependent on the total_mass value in that gravity message, which is why I specifically said “a computing element currently within our Earth”, because a computing element currently far out in empty space would be receiving a much smaller number of gravity messages per second). However, receiving and processing a gravity message (see paragraph p18) is a simple operation with a low constant-time cost.

Assuming that the computing elements are computing at many, many orders of magnitude faster than our fastest physical computers, it follows that handling billions of simple messages per second, with a low constant-time cost for each message to either prepare and send that message or receive and process that message, is not a significant burden on a computing element, even if instead of billions of messages per second, the number of these messages per second is actually several orders of magnitude higher (such as trillions per second instead of billions per second). I think it likely that each computing element in our universe uses, per second, less than 1% of its computing power on behalf of the above gravity algorithm, even for the computing elements that have the most gravity-related work to do, which is each computing element at the exact center of a level 80 cube, assuming 80 is the maximum level number (the computing element at the exact center of a level 80 cube is a parent at levels 1 thru 80 inclusive). Note: a parent at level (n > 1) is also a parent at each lower level, all the way down to level 1, but I didn’t mention this detail in the above gravity algorithm because an explicit statement about it wasn’t needed (this detail is a consequence of how a parent and the cube hierarchy are defined in the above gravity algorithm).


3.8.4 The Learned-Program send() Statement: the message_instance

The message_instance is copied by the message-transmission algorithm from a computing element to adjacent computing elements, and so on, to send the message out into 3D space in search of the intended recipient(s) of that message. The following parameters of the send() statement are included in its sent message_instance:

The message_instance also includes the following six items (these six items are always a part of any message_instance, regardless of whether that message_instance was put together within the send() statement’s routine or within some other routine):

3.8.5 The Learned-Program send() Statement: Algorithmic Details

Regarding how I’ve coded the routines in this book: A comment helps explain the code, and begins with “/*” and ends with “*/”. I’ve borrowed various keywords from other programming languages and these borrowed keywords are boldface, and also boldface are a few other commands in the code, such as those regarding semaphores. In the code, the names of variables are italicized (elsewhere, outside of the code, including in comments, the names of variables and routines are both italicized, but the names of routines are always suffixed with “()” to distinguish them from variables). A named constant consists of two or more words separated by underscores, and all letters in the words are capitalized. The variable this_CE in the code is the computing element that is currently executing the routine, and this_bion in the code is the bion that is executing the routine. An if can be nested inside another if, in which case the end if always ends the nearest un-ended preceding if. Many programming languages use what is known as “dot notation” to show that the variable name that is suffixed after the dot is contained in the data structure whose variable name is prefixed before that dot. For example, message_instance.transfer_count refers to the value of a variable named transfer_count that is contained in the data structure named message_instance.

The computing-element program is like an operating system and is assumed to be multitasking, and can have different threads of execution ongoing at the same time. And each thread of execution will have a priority: the higher its priority compared to whatever other threads of execution are currently ongoing, the bigger its time slice compared to the time slices for those other threads of execution that are currently ongoing (a thread’s time slice is how long that thread can run (execute) before its execution is paused by the operating system so that one or more other ongoing threads can run; this explanation of multitasking assumes that the computing element is, in effect, a uniprocessor, and can only be executing one thread at any instant of time).

In general, regarding the speed of message transmission thru 3D space, it is reasonable to assume that message transmission has at least as high a priority in the computing-element program as the priority for moving a photon thru 3D space (photons move at or near lightspeed). Assuming an average-sized message, one can assume that that message will, in effect, from whichever computing element the intelligent particle was in when its message was sent by the send() statement, radiate outward, filling a sphere of radius send_distance, at a speed that’s at least as fast as lightspeed, and probably much, much faster than lightspeed (see chapter 1 regarding instantaneous communication, and the speed of gravity). Because of its importance and its need to be fast, message transmission has a very high priority.

Within the send() statement, after the message_instance has been constructed, the next step copies that message_instance to each computing element that is adjacent to the computing element that the sender, an intelligent particle, is currently occupying during this step in the processing of that send() statement (assume that during this step the computing-element program will not move that sender out of its currently occupied computing element). As soon as at least one of these adjacent computing elements has that message_instance, that is the point at which the message-transmission algorithm presented in this subsection becomes involved with that message_instance.

Handling the Special Case of a Recipient Particle being Moved when the Message Arrives

For the message-transmission algorithm presented in this subsection, a message_instance is copied from computing elements that currently have the message_instance to certain adjacent computing elements, until every computing element within message_instance.send_distance of the message_instance.sender's_XYZ has received that message_instance (filling a sphere-shaped volume of 3D space—but it’s not a math-perfect sphere because the computing elements that compose 3D space have a finite cube size). However, this copying process takes finite time, and during this finite time to fill that sphere, it is possible that some particle X that is a recipient of that message_instance, is in the process of being moved from one computing element to an adjacent computing element when that message_instance is, in effect, being transmitted thru one or both of those two computing elements.

The code in examine_a_message_instance() covers this possibility, so that recipient particle X won’t miss receiving that message_instance even though particle X is currently being moved from one computing element to an adjacent computing element when that message_instance arrives. To support this, assume that the state information of each computing element (denote as this_CE) includes a variable named held_particle_status which always has a value, and there are four different values: NOT_CURRENTLY_HOLDING_A_PARTICLE, CURRENTLY_HOLDING_A_PARTICLE, CURRENTLY_MOVING_HELD_PARTICLE_TO_AN_ADJACENT_CE, and CURRENTLY_RECEIVING_A_PARTICLE_FROM_AN_ADJACENT_CE:

The Sphere-Filling Message-Transmission Algorithm

As was stated in subsection 3.8.3, each computing element is a cube with six sides, and, because our galaxy is apparently far away from any edge of our universe where there are no adjacent computing elements on one side, the code further below in pass_this_message_along() ignores edge cases and assumes that each computing element has six adjacent computing elements. However, one can assume that the complete code in the computing-element program, in effect, handles these edge cases.

For this message-transmission algorithm, whenever a computing element (denote as this_CE) receives (has copied to it) a message_instance from an adjacent computing element, this_CE does the following three steps:

  1. The first thing that this_CE does with the message_instance is increment message_instance.transfer_count by adding 1 to it. Note: this is the only change to the message_instance that this_CE will make.

  2. Then, this_CE starts two different threads of execution:

  3. When both of the threads started by step 2 have completed, then the message_instance is deleted from this_CE’s memory.

Code for the two routines, examine_a_message_instance() and pass_this_message_along(), follows. The examine_a_message_instance() routine is given first:

examine_a_message_instance(message_instance)
{
if this message is a gravity message
then
Process this gravity message as described in paragraph p18 in footnote 23.
return  /* exit this routine */
end if

if held_particle_status is NOT_CURRENTLY_HOLDING_A_PARTICLE
then
return  /* exit this routine */
end if

/*
Note: when held_particle_status is CURRENTLY_RECEIVING_A_PARTICLE_FROM_AN_ADJACENT_CE, then that means that this_CE, which is calling this examine_a_message_instance(), is the move_to_CE described above.
*/
if held_particle_status is CURRENTLY_MOVING_HELD_PARTICLE_TO_AN_ADJACENT_CE
or held_particle_status is CURRENTLY_RECEIVING_A_PARTICLE_FROM_AN_ADJACENT_CE
then
/*
An example of a particle that is not a recipient of the message but can be affected by that message, is a physical atom and the message_instance.special_handling_non_locate is PUSH_PHYSICAL_MATTER (see subsection 3.8.8).
*/
Examine the message_instance and determine if the particle that is being moved is either a recipient of the message or can be affected by the message. If either is true, then copy the message_instance to this_CE’s messages_for_particle_being_moved list.
return  /* exit this routine */
end if

/*
The held_particle_status is CURRENTLY_HOLDING_A_PARTICLE.
*/
if message_instance.special_handling_locate is null
and message_instance.special_handling_non_locate is null
and this_CE is currently holding an intelligent particle that is not asleep
and that intelligent particle qualifies as a recipient of the message  /* Examine the message_instance and also that intelligent particle’s identifier block to determine this. */
then
if that intelligent particle’s message queue is currently full  /* no room left in that message queue */
then
Delete the oldest message in that message queue.
end if
From the message_instance, add to that intelligent particle’s message queue an entry with the following components: the message text; the sender's_identifier_block; and, if this intelligent particle is a bion, then include the sender’s selection criteria for determining the recipient(s) of this message (either the user_settable_identifiers_block parameter or the list_of_bions parameter, if either was given by the sender to identify the recipient(s) of this message); a distance value named distance_between_sender_and_receiver, which is the distance between message_instance.sender's_XYZ and this_CE's_XYZ (computed using the distance formula).
return  /* exit this routine */
end if

if message_instance.special_handling_locate is null
and message_instance.special_handling_non_locate is null
and this_CE is holding a common particle
and that common particle is a recipient of the message
then
/*
I haven’t defined in this book any messages that, in effect, end up here, but the computing-element program in its code for interacting common particles with each other may send messages that would end up here, in which case one can assume that that code, or a call to that code, for handling such messages would be here.
*/
return  /* exit this routine */
end if

if message_instance.special_handling_locate is null
and message_instance.special_handling_non_locate is null
then
return  /* exit this routine */
end if

/*
Handle the “special handling” messages.
*/
if (message_instance.special_handling_locate is either GET_LOCATIONS_OF_BIONS or LOCATION_REPLY_FROM_BION)
and this_CE is currently holding a bion that is not asleep
and that bion qualifies as a recipient of the message  /* Examine the message_instance and also that bion’s identifier block to determine this. */
then
if message_instance.special_handling_locate is LOCATION_REPLY_FROM_BION
then
process_a_location_reply_from_a_bion(message_instance)  /* this routine is detailed in subsection 3.8.6 */
else
reply_to_this_location_request_bions(message_instance)  /* this routine is detailed in subsection 3.8.6 */
end if
return  /* exit this routine */
end if

/*
The code to be added to this examine_a_message_instance() routine, that is given in subsections 3.8.8 Bions Seeing and Manipulating Atoms and Molecules, 5.2.1 Out-of-Body Movement during a Lucid Dream, and 5.2.3 How One’s Projected Bion-Body Maintains its Human Shape, goes here.
*/

return  /* exit this routine */
}

Code for the pass_this_message_along() routine follows:

/*
Determine which adjacent computing elements, if any, to copy the message_instance to.
*/
pass_this_message_along(message_instance)
{
set give_to_list to an empty list

/* (sx, sy, sz) is the sender’s XYZ coordinate. */
set sx to message_instance.sender's_XYZ.X
set sy to message_instance.sender's_XYZ.Y
set sz to message_instance.sender's_XYZ.Z

/* (cx, cy, cz) is this_CE’s XYZ coordinate. */
set cx to this_CE's_XYZ.X
set cy to this_CE's_XYZ.Y
set cz to this_CE's_XYZ.Z

/*
Referring to the three set statements after this comment, what is the fastest way to set random_bit_1, random_bit_2, and random_bit_3?

If one assumes that the computing elements encode integer values in binary (the same encoding used by our physical computers), then from a single random number composed of at least 3 bits, the three low-order bits can be extracted and each of those three bits assigned to random_bit_1, random_bit_2, and random_bit_3, respectively.

Because of how random_bit_1, random_bit_2, and random_bit_3 are used in this routine, and the purpose that they serve (see the explanation of the M_determine_which_of_these_two_ifs_executes() macro that is further below), as long as the probability is even-chance (aka 50-50 or 1-in-2 or 0.5 or ½) that the value of random_bit_1 in this routine for this message_instance for this_CE will be the same value for random_bit_1 in this routine for this message_instance when this message_instance is processed by a computing element that is adjacent to this_CE, then that will serve the intended purpose of random_bit_1 (and likewise this same even-chance requirement for random_bit_2 and random_bit_3).

A likely good source for setting random_bit_1, random_bit_2, and random_bit_3, that will give the even-chance probability explained in the previous paragraph, is to use the three low-order bits of this_CE’s internal clock to set random_bit_1, random_bit_2, and random_bit_3, respectively. Note that the three low-order bits would be the three fastest-changing bits as clock ticks are counted by that internal clock.
*/
In effect, set random_bit_1 randomly to either 0 or 1.
In effect, set random_bit_2 randomly to either 0 or 1.
In effect, set random_bit_3 randomly to either 0 or 1.

/*
In the below selection code, “M_append(an XYZ coordinate)” is a macro that represents code that does the following: If the distance (computed using the distance formula) between the given XYZ coordinate and the message_instance.sender's_XYZ is not greater than message_instance.send_distance, then append that given XYZ coordinate to the give_to_list.

Note that the XYZ coordinate given to M_append() in the below selection code will always be the XYZ coordinate of a computing element that is adjacent to this_CE, assuming that this_CE has six adjacent computing elements. The six adjacent computing elements are: (cx + 1, cy, cz), (cx − 1, cy, cz), (cx, cy + 1, cz), (cx, cy − 1, cz), (cx, cy, cz + 1), (cx, cy, cz − 1).

Also, note that in the above description of macro M_append(), and also in the below selection code, I am ignoring edge cases for which this_CE is at an edge in our universe and as a result has less than six adjacent computing elements. However, one can assume that the computing-element program and its computing element, one way or another, handles these edge cases correctly.
*/
/*
This comment describes and explains the macro M_determine_which_of_these_two_ifs_executes(), which appears in three places in the below selection code. The only purpose of this macro is to make equal, for each of the three axes, X, Y, and Z, the average time needed for a message of size s to go from sender to receiver along an axis, regardless of whether the message is moving along that axis in a direction that increases that axis value or decreases that axis value. Making this average transfer time the same in both directions along an axis, eliminates directional bias, and doing this is only important in the case of those learned-program statements, such as get_relative_locations_of_bions(), that determine the relative locations of particles.

In the case of learned-program statements that determine the relative locations of particles, a message of size s is sent that results in reply messages that each have size s. In general, whatever direction the sent message had to move along the three axes, X, Y, and Z, to reach a replying particle, that particle’s reply message, to reach that sender, will have to move in the opposite direction along each of those three axes to reach that sender. The computation of relative locations, detailed in the code of process_a_location_reply_from_a_bion() in subsection 3.8.6, implicitly assumes the same average time to move a message_instance of size s in either direction along each of the three axes.

Regarding the message-transmission code in this routine: Along an axis, there are two directions in which to move, and this direction depends on the result of an expression that compares this_CE’s axis coordinate to the sender’s axis coordinate. Thus, there is an if-then-else statement involved: If the expression is true then move in one direction along that axis, else move in the opposite direction along that axis. The fundamental difficulty in preventing a directional bias is this if-then-else statement and its execution time. Assume that this if-then-else statement is coded in machine code that will execute as fast as possible (by “machine code” is meant the actual code that will execute on a computing element). And note that the execution time for moving in one direction along that axis (denote as A), and the execution time for moving in the opposite direction along that axis (denote as B), are exactly the same. However, the total execution time for executing this if-then-else statement will presumably be different depending on whether A or B is done:
if (v1 > v2)
then
A
else
B
end if
To understand why there is this difference in total execution time depending on whether A or B is done, even though A and B each take the exact same amount of execution time, the following is a description of what the machine code will be doing, assuming that the machine code of a computing element works the same way that our physical-computer machine code works, which means that the machine code is stored sequentially in addressable memory, and there is a memory pointer that points at the current location in memory where the next machine-code instruction to be executed is at:
1) Compare the value of v1 to the value of v2.

2) If the comparison result is not greater than—in other words, v1 not > v2—then advance the memory pointer that points at the next machine instruction to execute, to point at the beginning of the machine code that executes B.

3) The machine code that executes A is here.

4) Advance the memory pointer that points at the next machine instruction to execute, to skip over the machine code that executes B.

5) The machine code that executes B is here.
The presumption here, and the reason for the M_determine_which_of_these_two_ifs_executes() macro, is that the two different execution pathways thru the above if-then-else statement (one pathway executes A, and the other pathway executes B), have different execution times (let etd denote the absolute value of the difference between the following two execution times):
If the above presumption is wrong in the case of the computing elements, which means that the value of etd is zero, then there is no need for the M_determine_which_of_these_two_ifs_executes() macro. In this case, one could just replace each occurrence of this macro in the below selection code with the first (or second) if-then-else statement given as a parameter to that macro. However, if etd is nonzero, then the presumption is that the pathway that executes A takes etd more time than the pathway that executes B.

To make clear how macro M_determine_which_of_these_two_ifs_executes() works, the following is the first occurrence of this macro in the below selection code:
M_determine_which_of_these_two_ifs_executes(
    random_bit_1,
    if (cx > sx) then M_append(cx + 1, cy, cz) else M_append(cx − 1, cy, cz) end if,
    if (cx < sx) then M_append(cx − 1, cy, cz) else M_append(cx + 1, cy, cz) end if
)
Note that the two if-then-else statements, which are the second and third parameter of the macro occurrence, are functionally equivalent in terms of what, if anything, will be appended to the give_to_list (this same functional equivalence is true for the if-then-else statements in the other two occurrences of this macro in the below selection code; the only difference between the if-then-else statements in the three macro occurrences is the axis—either X, Y, or Z—involved). The first parameter is, in effect, a random value whose value is either 0 or 1. If its value is 1, then the first if-then-else statement (parameter 2) is executed; otherwise, the second if-then-else statement (parameter 3) is executed.

Also regarding the two if-then-else statements in the macro: By analogy with physical-computer machine code, the computing-element machine-code for the comparison (cx > sx) will take the exact same execution time that the computing-element machine-code for the comparison (cx < sx) will take. And also by analogy with physical-computer machine code, adding 1 to a variable will take the exact same execution time as subtracting 1 from that variable, which means, in the above macro occurrence, that the execution time to compute (cx + 1) will be the exact same execution time to compute (cx − 1). This means that each of the M_append() occurrences in the above macro, will take the exact same execution time. Thus, the A and B parts of each of these two if-then-else statements have the exact same execution time. This only leaves the time difference etd regarding how much more execution time each of the two if-then-else statements need in total to do the comparison and execute just the A part, compared to doing the comparison and executing just the B part.

In the below selection code, the M_determine_which_of_these_two_ifs_executes() macro is used for all three axes, X, Y, and Z. The end result is that, as the separation distance along an axis between the sender and receiver increases (denote this separation distance as n), the etd time penalty when executing part A instead of part B will be more evenly distributed between the two directions along that axis, so that each direction along that axis will have a total time penalty from executing part A, of about (½ × n × etd). Note that the value of n is the absolute value of ((sender’s coordinate for that axis) − (receiver’s coordinate for that axis)), which is the number of computing elements along that axis that the sent message_instance will have to pass thru to ultimately get to the receiver, and at each of these pass-thru computing elements—with the sole exception of whichever computing element on the path from sender to receiver is adjacent to the sender—this pass_this_message_along() routine will be executed for that sent message_instance. In conclusion, because of the M_determine_which_of_these_two_ifs_executes() macro, directional bias along each of the three axes is removed, and neither direction along an axis has an execution-time advantage. And this is what is wanted for those learned-program statements that determine the relative locations of particles.
*/
/*
The below selection code has the following structure:
if (condition 1)
    indented-code-block 1
else
    indented-code-block 2
end if
If (condition 1) is true, then indented-code-block 1 is done. If (condition 1) is false, then indented-code-block 2 is done.

Regarding what indented-code-block 1 is doing: Note that the sender’s computing element will give the message to all six of its adjacent computing elements, and indented-code-block 1, in terms of what is appended to the initially empty give_to_list, first moves along the X-axis line that runs thru that sender’s computing element (this XYZ coordinate is appended if it is not more than message_instance.send_distance from the sender). Then, indented-code-block 1 appends the XYZ coordinates of these four adjacent computing elements (if they are not more than message_instance.send_distance from the sender): (cx, cy + 1, cz), (cx, cy − 1, cz), (cx, cy, cz + 1), and (cx, cy, cz − 1). As a result of condition 1 and indented-code-block 1, indented-code-block 2 will always have the same starting point, regardless of the cx value (regardless of whether cx equals sx, or not), in terms of which computing elements in the circle have already gotten the message (or will get the message) from indented-code-block 1.

Regarding what indented-code-block 2 is doing: indented-code-block 2 will append the XYZ coordinate of each computing element—that is not the sender nor any of the six computing elements adjacent to the sender nor any computing element that will, in effect, get the message from indented-code-block 1—that is in the circle that has a width of one computing element, and this circle’s center is at the XYZ coordinate (cx, sy, sz), and this circle is perpendicular to the X axis of the computing elements. Note: At cx equals sx, the radius of the circle is message_instance.send_distance.

Note: There are three different versions of this selection code that are symmetrically the same: the circle of indented-code-block 2 is perpendicular to the X axis of the computing elements, which is the version shown below; the circle of indented-code-block 2 is perpendicular to the Y axis of the computing elements; the circle of indented-code-block 2 is perpendicular to the Z axis of the computing elements. For the two versions not shown, the other parts of the selection code would be changed accordingly. Also, note that for each of these three different selection-code versions, indented-code-block 2 itself has two different versions (in the selection-code version shown below, the ifs in indented-code-block 2 that compare cz with sz, can be changed to ifs that compare cy with sy, and the rest of the code in indented-code-block 2 would be changed accordingly).
*/
/*
To make clear how the below selection code works, here are several examples:
if (cy equals sy) and (cz equals sz) and (cx < sx) then the give_to_list will have these five XYZ coordinates (assuming the distance from the sender to each is not more than message_instance.send_distance): (cx − 1, cy, cz), (cx, cy + 1, cz), (cx, cy − 1, cz), (cx, cy, cz + 1), (cx, cy, cz − 1).

if (cz equals sz) and (cy < sy) then the give_to_list will have these three XYZ coordinates (assuming the distance from the sender to each is not more than message_instance.send_distance): (cx, cy − 1, cz), (cx, cy, cz + 1), (cx, cy, cz − 1).

if (cy < sy) and (cz > sz) then the give_to_list will have one XYZ coordinate (assuming the distance from the sender to it is not more than message_instance.send_distance): (cx, cy, cz + 1).
*/
if ((cy equals sy) and (cz equals sz))  /* condition 1 */
/*
indented-code-block 1

Note: (cx not equal sx) here, because this_CE is not the sender. Thus, at this point in the code either (cx > sx) or (cx < sx).
*/
M_determine_which_of_these_two_ifs_executes(
    random_bit_1,
    if (cx > sx) then M_append(cx + 1, cy, cz) else M_append(cx − 1, cy, cz) end if,
    if (cx < sx) then M_append(cx − 1, cy, cz) else M_append(cx + 1, cy, cz) end if
)
M_append(cx, cy + 1, cz)
M_append(cx, cy − 1, cz)
M_append(cx, cy, cz + 1)
M_append(cx, cy, cz − 1)
else
/* indented-code-block 2 */
if (cz equals sz)
    /* Note: (cy not equal sy) because otherwise condition 1 above is true. */
    M_determine_which_of_these_two_ifs_executes(
        random_bit_2,
        if (cy > sy) then M_append(cx, cy + 1, cz) else M_append(cx, cy − 1, cz) end if,
        if (cy < sy) then M_append(cx, cy − 1, cz) else M_append(cx, cy + 1, cz) end if
    )
    M_append(cx, cy, cz + 1)
    M_append(cx, cy, cz − 1)
    go to label:skip_over
end if
/* Note: at this point in the code, (cz not equal sz). */
M_determine_which_of_these_two_ifs_executes(
    random_bit_3,
    if (cz > sz) then M_append(cx, cy, cz + 1) else M_append(cx, cy, cz − 1) end if,
    if (cz < sz) then M_append(cx, cy, cz − 1) else M_append(cx, cy, cz + 1) end if
)
label:skip_over
end if

/*
Note: In July 2017, I wrote a program in Mathematica’s programming language to test the above selection code to make sure that it was correct and flawless and that it filled without duplicates a sphere of radius send_distance. Mathematica is a commercially available program which I bought for my own use in June 2017 to help me with my gravity algorithm (see the mention of Mathematica in footnote 23). The testing was successful and the above selection code is correct and flawless. Note: “filled without duplicates” means that each computing element within the sphere of radius send_distance that is centered on the sender’s computing element, is given the message exactly once (excluding the sender, and the sender’s six adjacent computing elements which got that message from that sender and not from the above selection code). Note: this test program did not implement nor test the random selection between two functionally equivalent if-then-else statements done by the M_determine_which_of_these_two_ifs_executes() macro.
*/

/*
Copy the message_instance to each adjacent computing element whose XYZ coordinate is in the give_to_list.

The below for goes in order, beginning with the first element in give_to_list, if any, and ending with its last element. However, note that the order of the XYZ coordinates in give_to_list is unimportant, and also note that the give_to_list can be empty if the XYZ coordinate(s) that would otherwise be in give_to_list are more than message_instance.send_distance from the sender.
*/
for each adjacent computing element whose XYZ coordinate is in the give_to_list
do
Give that adjacent computing element the message_instance by copying that message_instance to that adjacent computing element.
end for

return  /* exit this routine */
}
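
For readers who want to reproduce a test like the Mathematica test described in the comment near the end of the above routine, the following Python sketch (my own re-creation under the same assumptions, not the author’s program) simulates the above selection code on a small grid and checks the same two properties: every computing element within send_distance of the sender, excluding the sender itself, receives the message, and receives it exactly once. Like the author’s test, it does not implement the random choice made by the M_determine_which_of_these_two_ifs_executes() macro; it always executes the first of the two functionally equivalent if-then-else statements:

import math
from collections import deque

def give_to_list(c, s, r):
    # The selection code of pass_this_message_along(), for a computing element
    # at c, a sender at s, and a send_distance of r.
    cx, cy, cz = c
    sx, sy, sz = s
    out = []
    def m_append(p):  # M_append(): append p only if it is within r of the sender
        if math.dist(p, s) <= r:
            out.append(p)
    if cy == sy and cz == sz:
        # indented-code-block 1: continue along the X axis away from the
        # sender, and spawn into the four perpendicular directions.
        m_append((cx + 1, cy, cz) if cx > sx else (cx - 1, cy, cz))
        m_append((cx, cy + 1, cz))
        m_append((cx, cy - 1, cz))
        m_append((cx, cy, cz + 1))
        m_append((cx, cy, cz - 1))
    elif cz == sz:
        # indented-code-block 2, first half: continue along the Y axis away
        # from the sender, and spawn into the two Z directions.
        m_append((cx, cy + 1, cz) if cy > sy else (cx, cy - 1, cz))
        m_append((cx, cy, cz + 1))
        m_append((cx, cy, cz - 1))
    else:
        # indented-code-block 2, second half: continue along the Z axis
        # away from the sender.
        m_append((cx, cy, cz + 1) if cz > sz else (cx, cy, cz - 1))
    return out

def fill_sphere(s, r):
    receipts = {}  # computing element -> number of times it got the message
    # The sender first gives the message to its six adjacent computing elements.
    queue = deque((s[0] + dx, s[1] + dy, s[2] + dz)
                  for dx, dy, dz in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                                     (0, -1, 0), (0, 0, 1), (0, 0, -1)))
    while queue:
        c = queue.popleft()
        receipts[c] = receipts.get(c, 0) + 1
        queue.extend(give_to_list(c, s, r))
    return receipts

sender, r = (0, 0, 0), 6
receipts = fill_sphere(sender, r)
in_sphere = {(x, y, z)
             for x in range(-r, r + 1)
             for y in range(-r, r + 1)
             for z in range(-r, r + 1)
             if (x, y, z) != sender and math.dist((x, y, z), sender) <= r}
assert set(receipts) == in_sphere              # the sphere is filled...
assert all(n == 1 for n in receipts.values())  # ...without duplicates
print(len(receipts), "computing elements each received the message exactly once")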

Several Properties of this Sphere-Filling Message-Transmission Algorithm

Given a message of size s (s is the size of the message_instance), a send_distance, a sender computing element (the computing element from which this message was sent), and a receiving computing element (the receiver is a computing element that is not more than send_distance from the sender, and is holding a particle that is a recipient of the sent message), the following are several properties of the sphere-filling message-transmission algorithm given above:

The selection code in pass_this_message_along() is optimal in the following way: Regardless of what the XYZ coordinates of the sender and receiver are, the message_instance received by that receiver will have moved along a shortest path thru adjacent computing elements between that sender and receiver. More specifically, the message_instance.transfer_count at the receiver will have the smallest transfer_count possible, going from that sender to that receiver. (Note: the number of shortest paths between the sender and receiver, each having the same message_instance.transfer_count at the receiver, is either 1, 2, or 6, depending on the XYZ coordinates of the sender and receiver: regarding the X, Y, and Z components of an XYZ coordinate, if two of these three components have the same value for both the sender and receiver—for example, the X component of both the sender and receiver is 8, and the Z component of both the sender and receiver is 13—then there is only 1 shortest path between the sender and receiver; if only one of these three components has the same value for both the sender and receiver, then there are 2 shortest paths between the sender and receiver; if none of these three components has the same value for both the sender and receiver, then there are 6 shortest paths between the sender and receiver, depending on which of the three axes is moved along first, and then which of the remaining two axes is moved along next, and then moving along the one remaining axis to the receiver. Note that in the case of there being 6 shortest paths, the version of the selection code given in pass_this_message_along() above, always moves along the X axis first, then along the Y axis, and finally along the Z axis to get to the receiver.)

Consider the following question: How much time is needed for the message to move from the sender to the receiver? Presumably, the time needed to copy a message of size s is proportional to s. Thus, the larger the message, the more time needed to copy that message from a computing element to an adjacent computing element. Also, because more time is needed to copy a larger message, the computing-element program imposes a limit on the maximum size of a message, so that message transfer is always a fast process. Given the “very high priority” at which pass_this_message_along() runs, there will be an average time for transferring a message of size s from a computing element to an adjacent computing element (denote this average transfer time for a message of size s, as time T). The time needed for the message to move from the sender to the receiver is approximated by:

(message_instance.transfer_count at the receiver) × T

Denote the XYZ coordinate of the sender as (x1, y1, z1), and denote the XYZ coordinate of the receiver as (x2, y2, z2), and note that the X, Y, and Z components are integers and the transfer_count is an integer. With regard to the XYZ coordinates of the sender and receiver, what will the transfer_count at the receiver be? The distance d between the sender and receiver is given by the distance formula:

d = square_root_of((x2 − x1)^2 + (y2 − y1)^2 + (z2 − z1)^2)

Let a = (x2 − x1), let b = (y2 − y1), and let c = (z2 − z1). The rewritten distance formula is:

d = square_root_of(a^2 + b^2 + c^2)

Regarding the pass_this_message_along() routine, note the following:

  1. Given the selection code in pass_this_message_along(), the following equation is always true for the receiver:

    message_instance.transfer_count = (abs(a) + abs(b) + abs(c))

    Note that abs() is a math operator that returns the absolute value of the given number. For example abs(0) is 0, abs(8) is 8, and abs(−8) is 8. For any real number r, abs(+r) is r, and abs(−r) is r.

    As an example regarding the sender and receiver, if the sender’s XYZ coordinate is (4, 13, 1), and the receiver’s XYZ coordinate is (3, 8, 5), then the message_instance.transfer_count at the receiver will be (abs(3 − 4) + abs(8 − 13) + abs(5 − 1)), which is (abs(−1) + abs(−5) + abs(4)), which is (1 + 5 + 4), which is 10.
  2. Denote the integer value of the message_instance.transfer_count at the receiver as n. The distance between the sender and receiver is maximized when only one of the three terms abs(a), abs(b), and abs(c), is nonzero.

    In this case, the distance d between the sender and receiver (computed using the distance formula) is simply the absolute value of whichever term, a, b, or c, is nonzero, which gives a distance of n.

    For example, if the message_instance.transfer_count at the receiver is 9, and abs(a) is 9, and abs(b) and abs(c) are both zero, then the distance between the sender and receiver is square_root_of(9^2) which is 9.

  3. Denote the integer value of the message_instance.transfer_count at the receiver as n. If n is at least 3 and n is a multiple of 3, then the distance between the sender and receiver is minimized when abs(a) = abs(b) = abs(c), in which case the value of each of these three equal terms is (n ÷ 3).

    In this case, the distance d between the sender and receiver (computed using the distance formula) is square_root_of(3 × ((n ÷ 3)^2)), which simplifies to (n ÷ square_root_of(3)).

    For example, if the message_instance.transfer_count at the receiver is 9, and abs(a), abs(b), and abs(c) each have the value of 3, then the distance between the sender and receiver is square_root_of(3^2 + 3^2 + 3^2), which is 5.19615 (accurate to 5 decimal places).

Regarding notes 2 and 3 above, and assuming the same message_instance.transfer_count value—the same n value—for each of the two notes, the ratio of the largest distance between the sender and receiver (note 2), divided by the smallest distance between the sender and receiver (note 3), is:

n ÷ square_root_of(3 × ((n ÷ 3)^2))

The above ratio for n > 0, simplifies to square_root_of(3), which is 1.73205 (accurate to 5 decimal places). Denote this ratio constant as SQUARE_ROOT_OF_3, which is used in subsection 3.8.6 to compute how much time to allow for all the replies to a sent query message to be received by the bion that sent that query message.
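
Notes 1 thru 3 above, and the SQUARE_ROOT_OF_3 ratio, are easy to check numerically; a short Python sketch (my illustration):

import math

def transfer_count(sender, receiver):
    # Note 1: the transfer_count at the receiver is (abs(a) + abs(b) + abs(c)).
    return sum(abs(r - s) for s, r in zip(sender, receiver))

assert transfer_count((4, 13, 1), (3, 8, 5)) == 10  # the example in note 1

n = 9
d_max = n                            # note 2: all of n along a single axis
d_min = math.sqrt(3 * (n / 3) ** 2)  # note 3: n split equally over three axes
print(d_min)          # 5.196152... which is (9 ÷ square_root_of(3))
print(d_max / d_min)  # 1.732050... which is SQUARE_ROOT_OF_3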

Given a sphere of radius send_distance that is centered on the sender’s computing element from which the message was sent: A receiver at that sphere’s surface that is on the shortest path from that sphere’s center to that sphere’s surface (note 2 above, applies in this case), will receive the sent message after approximately this much elapsed time since the message was sent (time T is defined above as the average time needed to transfer a message of size s from a computing element to an adjacent computing element; presumably, given the size s of the message, the computing-element program can compute a close approximation of this average transfer time T):

send_distance × T

And a receiver at that sphere’s surface that is on the longest path from that sphere’s center (note 3 above, applies in this case), will receive the sent message after approximately this much elapsed time since the message was sent:

SQUARE_ROOT_OF_3 × send_distance × T

3.8.6 Multicellular Development

Consider the development from a single fertilized egg cell (a zygote) to a complete human baby. That baby has a complex three-dimensional shape and physical composition including a variety of complex internal structures such as: the different organs and the internal structure of each organ, and how these organs are placed and connected in the body; the skeleton and all its bones, joints, muscles, and tendons; the nervous system including the brain; the circulatory system including the lymphatic system. The complete human baby has trillions of cells and their cell-controlling bions, and all this three-dimensional complexity during the development of that baby’s physical body implies the existence in those cell-controlling bions of a means by which a cell-controlling bion can determine its 3D spatial relationship with various other cell-controlling bions in that body.

To allow a cell-controlling bion to determine its location relative to certain other cell-controlling bions in the same developing body, let’s make a few assumptions:

Assume that on our planet Earth, for a cell-controlling bion whose cell is a part of a multicellular body, the learned programs that have evolved on our planet for controlling these cells, make use of the cell-controlling bion’s user-settable identifiers block in the following way:

And let’s also assume that there is a learned-program statement get_relative_locations_of_bions(). In general, this get_relative_locations_of_bions() routine lets a bion learn about other bions in its environment, and where those other bions are in 3D space relative to itself. More specifically, and assuming the user-settable identifiers block detailed immediately above, a cell-controlling bion whose cell is a part of a multicellular body can call get_relative_locations_of_bions() to learn about other cells in that multicellular body, and where those other cells are in 3D space relative to that bion and its cell—assuming that bion is with its cell when it called get_relative_locations_of_bions(). The examine_a_message_instance() routine detailed in subsection 3.8.5 includes calling two routines, reply_to_this_location_request_bions() and process_a_location_reply_from_a_bion(), that are there to support get_relative_locations_of_bions().

The detail of the reply_to_this_location_request_bions() routine follows (note that this routine is called by the examine_a_message_instance() routine that is detailed in subsection 3.8.5):

/*
Note: The parameter requester's_message_instance is the location request that this routine is replying to. The value of requester's_message_instance.special_handling_locate is GET_LOCATIONS_OF_BIONS.
*/
reply_to_this_location_request_bions(requester's_message_instance)
{
/*
Set the message text of this reply, which has three components:
*/
set reply_mt.XYZ to this_CE's_XYZ  /* Note that in this context, this_CE's_XYZ is the XYZ coordinate of the computing element that currently holds the recipient bion that is, in effect, replying to the location request. */

set reply_mt.transfer_count_to_location to requester's_message_instance.transfer_count  /* In effect, this is how far the requester’s message traveled from the sender’s location (a computing element) when it sent this message, to this recipient bion’s location (a computing element). */

set reply_mt.requester's_instance_identifier to the value of requester's_message_instance.instance_identifier

/* set other items */
set message_instance.special_handling_locate to LOCATION_REPLY_FROM_BION
set message_instance.send_distance to requester's_message_instance.send_distance  /* Set the reply message to have the same send_distance as the request message. */

The intended recipient of this LOCATION_REPLY_FROM_BION message is the bion that made the location request that resulted in this reply, and that bion’s unique identifier is contained in the requester's_message_instance parameter. So, extract it from there and set it in the message_instance as the sole recipient of this LOCATION_REPLY_FROM_BION message.

/*
Complete and send this LOCATION_REPLY_FROM_BION message.
*/
Assume that any items defined as being in this message_instance but not explicitly set above are set as stated in subsection 3.8.4 (for example, transfer_count is set to 0). Then, to send this LOCATION_REPLY_FROM_BION message_instance into 3D space, use the same code the send() statement uses to offer a message_instance to the adjacent computing elements.

return  /* exit this routine */
}
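
The following Python sketch (my illustration; the data-structure layout and field names are my own assumptions, since the book leaves the underlying message plumbing unspecified) shows the shape of the reply construction done by the above routine:

from dataclasses import dataclass, field
from typing import Optional
import random

@dataclass
class MessageInstance:
    # A much-simplified message_instance; only the fields used below are modeled.
    special_handling_locate: Optional[str] = None
    send_distance: int = 0
    transfer_count: int = 0
    instance_identifier: Optional[int] = None
    sender_identifier: Optional[int] = None  # unique identifier of the sending bion
    sole_recipient: Optional[int] = None     # set when one specific bion is the recipient
    message_text: dict = field(default_factory=dict)

def reply_to_this_location_request_bions(request, this_ce_xyz, my_identifier):
    # Build the three-component message text (reply_mt) of the reply.
    reply_mt = {
        "XYZ": this_ce_xyz,  # where the replying bion currently is
        "transfer_count_to_location": request.transfer_count,  # how far the request traveled
        "requesters_instance_identifier": request.instance_identifier,
    }
    # The reply has the same send_distance as the request, starts with a
    # transfer_count of 0, and is addressed solely to the requesting bion.
    return MessageInstance(
        special_handling_locate="LOCATION_REPLY_FROM_BION",
        send_distance=request.send_distance,
        transfer_count=0,
        instance_identifier=random.getrandbits(32) + 1,  # cannot be null
        sender_identifier=my_identifier,
        sole_recipient=request.sender_identifier,
        message_text=reply_mt,
    )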

The detail of the learned-program statement get_relative_locations_of_bions() follows:

/*
Only a single call of this get_relative_locations_of_bions() routine can be ongoing at any one time, so assume that calling this get_relative_locations_of_bions() routine is protected by a semaphore.

This get_relative_locations_of_bions() routine defines a group of global variables that are visible to the process_a_location_reply_from_a_bion() routine. Access to these globals is protected by a semaphore named grl_globals_exclusive_access.

The get_relative_locations_of_bions() routine, which is a learned-program statement, has three parameters: user_settable_identifiers_block, get_details_for_this_many_nearest_recipients, and use_this_send_distance.
As an example of how the first two of the above three parameters would be set, assume the caller is a cell-controlling bion in our world that is already a part of a multicellular body and is currently occupying its cell, and that bion, in its learned program, wants to know the details of the nearest 100 bions that are also cell-controlling bions currently occupying their cells and are also a part of that multicellular body. In this example, get_details_for_this_many_nearest_recipients is set to 100 and the integers in user_settable_identifiers_block are set as follows: the USID_2 value is set to WITH_MY_CELL, and the USID_4 value is set to the unique identifier for that multicellular body, and the remaining integers in user_settable_identifiers_block are set to null.
*/
get_relative_locations_of_bions(user_settable_identifiers_block, get_details_for_this_many_nearest_recipients, use_this_send_distance)
{
/*
Prepare the message_instance for the GET_LOCATIONS_OF_BIONS message to be sent. Note that there is no message text for a GET_LOCATIONS_OF_BIONS message.

Although the code for this editing is not shown, before use_this_send_distance is assigned below to message_instance.send_distance, it is first edited to make sure that its value is not less than 1 and not more than MAX_SEND_DISTANCE_ALLOWED_FOR_LOCATING_BIONS. One can also assume, if one wants, that if use_this_send_distance is null, then its edited value is some default value appropriate for the get_relative_locations_of_bions() statement.
*/
set message_instance.special_handling_locate to GET_LOCATIONS_OF_BIONS
set message_instance.send_distance to the edited value of use_this_send_distance
set message_instance.instance_identifier to (generate a random number; cannot be null)

The parameter user_settable_identifiers_block identifies the recipient bions, so copy it into the message_instance in the same place that the send() statement would put it.

/*
To make the average transfer time from one computing element to the next be the same for both GET_LOCATIONS_OF_BIONS messages and LOCATION_REPLY_FROM_BION messages, two things are done: 1) the macro M_determine_which_of_these_two_ifs_executes()—in the pass_this_message_along() routine in subsection 3.8.5—eliminates directional bias regarding the execution time needed to move a message_instance along an axis in a positive direction (the value of that axis coordinate is increasing) compared to moving that message_instance along that axis in a negative direction (the value of that axis coordinate is decreasing); and 2) both the GET_LOCATIONS_OF_BIONS message_instance and the LOCATION_REPLY_FROM_BION message_instance will have the same size:
*/
In the process_a_location_reply_from_a_bion() routine detailed further below, the transfer_count to and from each recipient bion is used, in effect, as a precise timer, with the implication that the average transfer time from one computing element to the next will be the same for both the GET_LOCATIONS_OF_BIONS message sent out to the specified recipients and the LOCATION_REPLY_FROM_BION messages sent back by those recipients. To make this average transfer time sufficiently the same for both these two message types, the GET_LOCATIONS_OF_BIONS message_instance is made the same size as the LOCATION_REPLY_FROM_BION message_instance (by “same size” is meant the same number of bytes or whatever unit of memory is used by computing elements).

Comparing the message_instance sizes, each LOCATION_REPLY_FROM_BION message_instance has as its message text the fixed-format fixed-size reply_mt (defined in the reply_to_this_location_request_bions() routine above), whereas the GET_LOCATIONS_OF_BIONS message_instance has no message text. Let AA be the size of reply_mt. Also, the LOCATION_REPLY_FROM_BION message_instance has only a single bion as its recipient, which is identified by a bion’s fixed-format fixed-size unique identifier (let BB be the size of a bion’s unique identifier), whereas the GET_LOCATIONS_OF_BIONS message_instance has a fixed-format fixed-size user_settable_identifiers_block to identify its recipients (let CC be the size of a user_settable_identifiers_block). Note that I’m assuming CC > BB, and AA > (CC − BB). Thus, to make the two message_instance sizes the same, add dummy padding in the amount of (AA − (CC − BB)) to increase the size of the GET_LOCATIONS_OF_BIONS message_instance so that it will be the same size as each LOCATION_REPLY_FROM_BION message_instance.

/*
At this point in this routine assume the message_instance to be sent is complete and ready to be offered to the adjacent computing elements.

Now initialize global variables that will be visible to the process_a_location_reply_from_a_bion() routine which is detailed further below.
*/
get-the-semaphore grl_globals_exclusive_access

set match_this_instance_identifier to the value of message_instance.instance_identifier
set to zero: sum_all_relative_X, sum_all_relative_Y, sum_all_relative_Z, total_replies, nearest_recipients_count
set requested_nearest_count to the value of the get_details_for_this_many_nearest_recipients parameter

allocate enough memory for a list that can hold as many elements as specified by requested_nearest_count. Each element in this list has three components: the replying bion’s identifier block; the replier's_XYZ_relative_to_000 that is computed in the process_a_location_reply_from_a_bion() routine; and the computed distance between (0, 0, 0) and replier's_XYZ_relative_to_000.

set pointer_to_nearest_recipients_list to point at the location in memory of the just-allocated list

/*
Note: The following short block of code—beginning immediately below with setting XYZ_start, and ending immediately after the GET_LOCATIONS_OF_BIONS message is sent—should be executed with the same high priority as is used for message transmission.
*/
set XYZ_start to this_CE's_XYZ

/*
Initialization of globals is complete.
*/
release-the-semaphore grl_globals_exclusive_access

/*
Complete and send the GET_LOCATIONS_OF_BIONS message.
*/
Assume that any items defined as being in this message_instance but not explicitly set above are set as stated in subsection 3.8.4 (for example, transfer_count is set to 0). Then, to send this GET_LOCATIONS_OF_BIONS message_instance into 3D space, use the same code the send() statement uses to offer a message_instance to the adjacent computing elements.

/*
At this point the GET_LOCATIONS_OF_BIONS message has been sent out, and this routine will now wait for the replies to come in. The replies are processed by the process_a_location_reply_from_a_bion() routine. While waiting for the replies to come in, this_CE can execute other threads of execution depending on priorities.

The reason for the below multiplier (2 × SQUARE_ROOT_OF_3), which has the value 3.46410 (accurate to 5 decimal places), is explained above in the remarks regarding the get_details_for_this_many_nearest_recipients parameter.
*/
set estimated_time_to_wait to ((2 × SQUARE_ROOT_OF_3) × (the edited value of use_this_send_distance) × (an estimated average time for transferring the message_instance from a computing element to an adjacent computing element))
halt execution of this thread and wait estimated_time_to_wait, during which time other threads of execution can run.

/*
At this point all replies, in the time allowed, have been received.
*/
get-the-semaphore grl_globals_exclusive_access

set match_this_instance_identifier to null  /* no more replies to the sent message accepted */

/*
Compute the centroid for all the replying bions. The centroid is easy to compute and can be useful in the case of multicellular development (a small numeric sketch of this centroid arithmetic, and of the wait-time formula above, is given just after this routine). Note that centroid_XYZ_relative_to_000 is a local variable, not a global variable.
*/
if total_replies is greater than 0  /* guard against dividing by zero when no replies were received */
then
set centroid_XYZ_relative_to_000.X to (sum_all_relative_X ÷ total_replies)
set centroid_XYZ_relative_to_000.Y to (sum_all_relative_Y ÷ total_replies)
set centroid_XYZ_relative_to_000.Z to (sum_all_relative_Z ÷ total_replies)
else set centroid_XYZ_relative_to_000 to null
end if

/*
For those globals whose current values will be returned, copy those values from those global variables to local variables, and then return these local variables.
*/
set ret_total_replies to total_replies
set ret_nearest_recipients_count to nearest_recipients_count

set ret_pointer_to_nearest_recipients_list to pointer_to_nearest_recipients_list
set pointer_to_nearest_recipients_list to null  /* this global no longer has access to that list */

release-the-semaphore grl_globals_exclusive_access

/*
return the results
*/
return ret_total_replies, centroid_XYZ_relative_to_000, ret_nearest_recipients_count, ret_pointer_to_nearest_recipients_list  /* exit this routine */
}
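
Here is a small numeric sketch, in Python, of the wait-time formula and the centroid arithmetic used in the routine above. The send-distance and per-hop transfer-time values are made-up placeholders, since this book gives no actual values for them:

import math

SQUARE_ROOT_OF_3 = math.sqrt(3)
edited_send_distance = 10_000  # a hypothetical edited use_this_send_distance value
avg_transfer_time = 1.0e-12    # a hypothetical average per-hop transfer time, in seconds

estimated_time_to_wait = (2 * SQUARE_ROOT_OF_3) * edited_send_distance * avg_transfer_time
print(round(2 * SQUARE_ROOT_OF_3, 5))  # prints 3.4641 (that is, 3.46410 to 5 decimal places)
print(estimated_time_to_wait)

# The centroid of the replies, guarding against the case of zero replies:
replies = [(3, -2, 7), (1, 0, -4), (5, 2, 0)]  # hypothetical replier's_XYZ_relative_to_000 values
total_replies = len(replies)
if total_replies > 0:
    centroid_XYZ_relative_to_000 = tuple(sum(axis) / total_replies for axis in zip(*replies))
else:
    centroid_XYZ_relative_to_000 = None  # no replies were received
print(centroid_XYZ_relative_to_000)      # (3.0, 0.0, 1.0)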

The detail of the process_a_location_reply_from_a_bion() routine follows (note that this routine is called by the examine_a_message_instance() routine that is detailed in subsection 3.8.5):

/*
Note: The parameter message_instance is a reply to a location request made by this_bion. The value of message_instance.special_handling_locate is LOCATION_REPLY_FROM_BION.
*/
process_a_location_reply_from_a_bion(message_instance)
{
set XYZ_stop to this_CE's_XYZ

get-the-semaphore grl_globals_exclusive_access

if message_instance.reply_mt.requester's_instance_identifier is not the same as match_this_instance_identifier
then
release-the-semaphore grl_globals_exclusive_access

return  /* ignore this message_instance and exit this routine */
end if

/*
The purpose of the next three labeled steps is for the computed value in replier's_XYZ_relative_to_000 to be the replying bion’s XYZ location relative to this_bion’s computed XYZ location when that replying bion’s XYZ location—the value in message_instance.reply_mt.XYZ—was set in the reply_to_this_location_request_bions() routine.

The idea of step 1 is that we want to know how far along on the vector from XYZ_start to XYZ_stop this_bion was when the replying bion copied its XYZ location into message_instance.reply_mt.XYZ. As stated in the code for the above get_relative_locations_of_bions() routine, the average transfer time from one computing element to the next is the same for the two message types involved (a GET_LOCATIONS_OF_BIONS message_instance and a LOCATION_REPLY_FROM_BION message_instance), and this means that the transfer_count_to, which came from a GET_LOCATIONS_OF_BIONS message_instance, and the transfer_count_from, which came from this LOCATION_REPLY_FROM_BION message_instance being processed by this routine, have the same unit of time. Thus, ((transfer_count_to + transfer_count_from) × that average transfer time from one computing element to the next) is the total time that this_bion took to go from XYZ_start to XYZ_stop, and dividing transfer_count_to by (transfer_count_to + transfer_count_from) gives how far along on that (XYZ_start, XYZ_stop) vector this_bion was when the replying bion copied its XYZ location into message_instance.reply_mt.XYZ.

Regarding the value of vector_fraction computed in step 1, in our world the value will be at or close to ½ (0.5), assuming that messages move thru 3D space much, much faster than our Earthly biosphere is moving thru 3D space. In general, the greatest deviation of the computed vector_fraction from being ½ will be when the replying bion lies on the line that runs thru XYZ_start and XYZ_stop, and the least deviation of the computed vector_fraction from being ½ will be when the line that runs thru this_bion and the replying bion is at a right angle to the line that runs thru XYZ_start and XYZ_stop.

Step 2 computes the location of this_bion when the replying bion’s message_instance.reply_mt.XYZ value was set. Note that this computation assumes that this_bion moved thru 3D space in a straight line from XYZ_start to XYZ_stop, when in actuality on our Earth, this_bion moved along a curve because of the rotation of our Earth and the effect of gravity on our Earth. This error is considered separately after the following three steps are given.

Step 3 simply subtracts step 2’s this_bion's_XYZ from message_instance.reply_mt.XYZ, giving replier's_XYZ_relative_to_000, which is, in effect, the replying bion’s location relative to this_bion's_XYZ being relocated to XYZ coordinate (0, 0, 0). Note that the actual math is to subtract this_bion's_XYZ from both this_bion's_XYZ (which gives (0, 0, 0)) and from message_instance.reply_mt.XYZ (which gives replier's_XYZ_relative_to_000).

For steps 2 and 3, the arithmetic is signed. Also, any computed .X, .Y, or .Z value for this_bion's_XYZ in step 2 that is not an integer is rounded to the nearest integer before assignment to that .X, .Y, or .Z. For example, if vector_fraction is 0.5, and XYZ_start.X is 9, and XYZ_stop.X is 5, and message_instance.reply_mt.XYZ.X is 6, then this_bion's_XYZ.X is (9 + (0.5 × −4)) which is 7, and replier's_XYZ_relative_to_000.X is (6 − 7) which is −1. In this same example, if instead vector_fraction is 0.498, then this_bion's_XYZ.X is (9 + (0.498 × −4)) which is (9 + −1.992) which is 7.008 which rounds to 7, thus this_bion's_XYZ.X is set to 7.
*/
/* step 1: */
set transfer_count_to to message_instance.reply_mt.transfer_count_to_location
set transfer_count_from to message_instance.transfer_count
set vector_fraction to (transfer_count_to ÷ (transfer_count_to + transfer_count_from))

/* step 2: */
set this_bion's_XYZ.X to (XYZ_start.X + (vector_fraction × (XYZ_stop.X − XYZ_start.X)))
set this_bion's_XYZ.Y to (XYZ_start.Y + (vector_fraction × (XYZ_stop.Y − XYZ_start.Y)))
set this_bion's_XYZ.Z to (XYZ_start.Z + (vector_fraction × (XYZ_stop.Z − XYZ_start.Z)))

/* step 3: */
set replier's_XYZ_relative_to_000.X to (message_instance.reply_mt.XYZ.X − this_bion's_XYZ.X)
set replier's_XYZ_relative_to_000.Y to (message_instance.reply_mt.XYZ.Y − this_bion's_XYZ.Y)
set replier's_XYZ_relative_to_000.Z to (message_instance.reply_mt.XYZ.Z − this_bion's_XYZ.Z)

/*
At this point we have in replier's_XYZ_relative_to_000 the replying bion’s XYZ coordinate relative to this_bion being at coordinate (0, 0, 0). Let’s now consider, in the attached footnote, the error introduced in step 2 above where this_bion's_XYZ is computed, because the physical biosphere on our Earth is not moving thru space in a perfectly straight line, but instead is moving along a curve (note that the size of this error in step 2 carries over into step 3, which subtracts this_bion's_XYZ from the replying bion’s XYZ).[24] As shown in the footnote, the error is insignificant for the real-world needs of multicellular development. (A small sketch reproducing steps 1 thru 3 is given just after this routine.)
*/

increment total_replies  /* add 1 to it */

/*
Note: sum_all_relative_X, sum_all_relative_Y, and sum_all_relative_Z will be used in the get_relative_locations_of_bions() routine, after all replies have been received, to compute the centroid for all the replying bions.
*/
set sum_all_relative_X to (sum_all_relative_X + replier's_XYZ_relative_to_000.X)
set sum_all_relative_Y to (sum_all_relative_Y + replier's_XYZ_relative_to_000.Y)
set sum_all_relative_Z to (sum_all_relative_Z + replier's_XYZ_relative_to_000.Z)

/*
If conditions are met, insert the reply—more specifically, insert together three relevant details regarding the replying bion—into the current nearest-recipients list.
*/
if requested_nearest_count is greater than 0
then
set distance to (use the distance formula to compute the distance between (0, 0, 0) and replier's_XYZ_relative_to_000)

/*
Regarding the nearest-recipients list, assume that pointer_to_nearest_recipients_list[1] is the first element in that list, pointer_to_nearest_recipients_list[2] is the second element in that list, and so on. The nearest_recipients_count is the current number of nearest-recipients in this nearest-recipients list.

Without showing the code to do an “insertion sort”, assume that an insertion sort is done here. For a description of the insertion sort, see, for example, Insertion sort at https://en.wikipedia.org/wiki/Insertion_sort.

At the end of the code that goes here (not shown), the reply—more specifically, its relevant details; see the allocate statement in get_relative_locations_of_bions() for a description of the three components in each pointer_to_nearest_recipients_list[] element—has either been inserted into the nearest-recipients list or not. As long as nearest_recipients_count is less than requested_nearest_count, the reply is inserted into the nearest-recipients list and nearest_recipients_count is incremented by 1. But if nearest_recipients_count already equals requested_nearest_count, then, to be inserted, the reply’s distance, computed above, has to be less than the distance of the last element in the nearest-recipients list, which is pointer_to_nearest_recipients_list[nearest_recipients_count].distance. If so, the reply is inserted into the nearest-recipients list, and this insertion will, in effect, push what was the last element in the nearest-recipients list out of that list (deleting it from that list), leaving nearest_recipients_count still equal to requested_nearest_count. (A sketch of this bounded insertion is given just after this routine.)

If the reply was inserted, then at the end of this insertion, the elements currently in the nearest-recipients list will always be in ascending distance order. More specifically, if nearest_recipients_count is greater than 1, then for all values of n between 1 and (nearest_recipients_count − 1) inclusive, the following will always be true: the value of pointer_to_nearest_recipients_list[n].distance is less than or equal to the value of pointer_to_nearest_recipients_list[n + 1].distance.

For those familiar with time complexity for an algorithm, inserting a reply into this nearest-recipients list has linear-time cost, proportional to the current value of nearest_recipients_count, which will equal the get_details_for_this_many_nearest_recipients parameter as soon as that many replies have been received. Because of this linear-time cost, and because the replies need to be processed quickly so that reply messages do not pile up in a big way awaiting processing by this bion, one can assume that the computing-element program imposes a limit on how large the value of the get_details_for_this_many_nearest_recipients parameter can be. For a guess of what this limit is, see the description of the get_details_for_this_many_nearest_recipients parameter.
*/
end if

release-the-semaphore grl_globals_exclusive_access

return  /* exit this routine */
}
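
The following is a minimal sketch, in Python, of steps 1 thru 3 above, and of the bounded insertion into the nearest-recipients list (the insertion code that the comments above leave unshown). The function names are mine, and this is only an illustration under the stated assumptions, not the computing-element program's actual code:

import bisect
import math

def relative_location(xyz_start, xyz_stop, reply_xyz, tc_to, tc_from):
    # step 1: how far along the (XYZ_start, XYZ_stop) vector this_bion was
    vector_fraction = tc_to / (tc_to + tc_from)
    # step 2: this_bion's XYZ when the reply's XYZ was recorded, rounded to integers
    bion_xyz = tuple(round(s + vector_fraction * (e - s)) for s, e in zip(xyz_start, xyz_stop))
    # step 3: the replying bion's XYZ relative to this_bion being at (0, 0, 0)
    return tuple(r - b for r, b in zip(reply_xyz, bion_xyz))

# Reproduces the worked example in the comments above (X components 9, 5, and 6):
print(relative_location((9, 0, 0), (5, 0, 0), (6, 0, 0), tc_to=50, tc_from=50))  # (-1, 0, 0)

# The bounded insertion into the nearest-recipients list: keep at most
# requested_nearest_count replies, in ascending distance order. As in the
# routine above, this is only called when requested_nearest_count > 0.
def insert_nearest(nearest_list, requested_nearest_count, reply_detail, rel_xyz):
    distance = math.dist((0, 0, 0), rel_xyz)
    if len(nearest_list) == requested_nearest_count and distance >= nearest_list[-1][0]:
        return  # not among the nearest; ignore this reply
    idx = bisect.bisect([d for d, _, _ in nearest_list], distance)
    nearest_list.insert(idx, (distance, reply_detail, rel_xyz))
    if len(nearest_list) > requested_nearest_count:
        nearest_list.pop()  # push the old last element out of the list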

Among other things, the detailed data structures and routines presented above in section 3.8 provide a firm foundation for the observed multicellular development that happens in our world, and show that it is possible to explain something as mysterious as multicellular development by assuming an underlying computation layer (the computing elements) and programming (the computing-element program and learned programs). Of course, regarding all this complexity—a computation layer and a lot of programming in the computing-element program—why should all this underlying complexity exist? To this kind of question I am reminded of the answer my dad gave me when I was very young and asked him a philosophical question—I no longer remember exactly what I asked, but his reply has stayed with me: “Why should anything exist?” he answered. But a lot obviously does exist, and an underlying computation layer along with a lot of programming is the best way to explain the more complex objects in existence, such as ourselves.

Regarding the Particle Details returned by the various get_relative_location…() learned-program statements described in this Book

This book names and describes a total of eight different learned-program statements that each compute the relative location—the replier's_XYZ_relative_to_000—of each replying particle. These eight learned-program statements are:

For each of these eight learned-program statements, whenever the detail of one or more replying particles is returned by a call of that learned-program statement, the returned detail of a replying particle always has three components: identifying details for the replying particle (for a bion, its identifier block; for an atom, its unique identifier and related info); that particle’s replier's_XYZ_relative_to_000; and the computed distance between (0, 0, 0) and that replier's_XYZ_relative_to_000.

A typical use by the calling bion of a replying particle’s replier's_XYZ_relative_to_000 is when that calling bion, in effect, decides to move closer to that particle. In general, a bion has to be able to move in 3D space. To move itself, assume there is a move_this_bion() learned-program statement. At the very least, any call of move_this_bion() must specify as a parameter the wanted direction of movement (additional parameters probably include how far to move and how fast to move, and perhaps also whether to move in the opposite direction (see below)).

To be consistent with replier's_XYZ_relative_to_000, assume that move_this_bion()’s direction-to-move parameter (denote as XYZ_head) is an XYZ coordinate relative to point (0, 0, 0). In effect, (0, 0, 0) is the tail-point and XYZ_head is the head-point of this direction-to-move vector. Then, the first thing the code in move_this_bion() would do is translate this direction-to-move vector (translating a vector moves the vector without changing its orientation and length) so that that vector’s tail-point is the current location of the bion that is calling move_this_bion():

/* Translate the direction-to-move vector. */
set dtm_tail_XYZ.X to this_CE's_XYZ.X
set dtm_tail_XYZ.Y to this_CE's_XYZ.Y
set dtm_tail_XYZ.Z to this_CE's_XYZ.Z

set dtm_head_XYZ.X to (XYZ_head.X + this_CE's_XYZ.X)
set dtm_head_XYZ.Y to (XYZ_head.Y + this_CE's_XYZ.Y)
set dtm_head_XYZ.Z to (XYZ_head.Z + this_CE's_XYZ.Z)

As an example of using move_this_bion(): If a bion calls get_relative_location_of_one_physical_atom() and then wants to move closer to that atom, that bion can then call move_this_bion() after setting the XYZ_head parameter to the value of that atom’s replier's_XYZ_relative_to_000. If, for some reason, that bion wants to instead move in the opposite direction to that atom, that bion, after setting the XYZ_head parameter to the value of that atom’s replier's_XYZ_relative_to_000, can then set another move_this_bion() parameter, that, in effect, tells move_this_bion() to move in the opposite direction of the given direction-to-move vector, in which case move_this_bion() will reverse the signs of the X, Y, and Z components of the XYZ_head parameter before doing the above vector translation. For example, if the value of XYZ_head is (−13, 8, 17), then after the sign reversals its value would be (13, −8, −17).
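
Here is a minimal Python sketch of this vector handling: the translation shown in the pseudocode above, plus the optional sign reversal. The function name, and the this_CE's_XYZ value used in the example, are hypothetical:

def translate_direction_to_move(xyz_head, this_ce_xyz, move_opposite=False):
    # Optionally reverse the signs of the X, Y, and Z components of XYZ_head:
    if move_opposite:
        xyz_head = tuple(-c for c in xyz_head)
    # Translate: the tail-point becomes the calling bion's current location,
    # and the head-point is XYZ_head shifted by that same location.
    dtm_tail_XYZ = this_ce_xyz
    dtm_head_XYZ = tuple(h + c for h, c in zip(xyz_head, this_ce_xyz))
    return dtm_tail_XYZ, dtm_head_XYZ

# Reproduces the sign-reversal example above: XYZ_head (-13, 8, 17) becomes (13, -8, -17).
print(translate_direction_to_move((-13, 8, 17), (100, 200, 300), move_opposite=True))
# ((100, 200, 300), (113, 192, 283))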

Avoid Unreasonable Assumptions when Designing Algorithms that will Run on Computing Elements

When I first thought about how to get the relative locations of bions, I realized I needed an absolute coordinate system for 3D space, rooted in the computing elements, and that is why I assume that for any specific computing element, it will have a unique XYZ coordinate that cannot be changed. If one grants the existence of the computing elements, then it is a small step and not unreasonable to also grant that a computing element’s state information includes an XYZ coordinate, that, in effect, gives that computing element’s location within the great number of computing elements that, in effect, comprise the 3D space of our universe.

Also, when I was thinking about how to get the relative locations of bions, I realized that if one assumed that each computing element had a precise time clock, and all the computing elements had the same time on their precise time clocks, then I could simplify my solution (given above in subsections 3.8.5 and 3.8.6) for getting the relative locations of bions, because making both the GET_LOCATIONS_OF_BIONS message and the LOCATION_REPLY_FROM_BION messages have the same average transfer time from one computing element to the next is only necessary because of how I compute vector_fraction. If instead all the computing elements had the same time on their precise time clocks, then each LOCATION_REPLY_FROM_BION message could include a timestamp of when the message_instance.reply_mt.XYZ value was set, and vector_fraction could then be computed as (((that timestamp time) − (the time when XYZ_start was set)) ÷ ((the time when XYZ_stop was set) − (the time when XYZ_start was set))).

However, I rejected this all-clocks-have-the-same-time approach because it seemed too unreasonable: either one assumes something like each computing element having a perfect high-precision time clock with zero time drift, where whenever a specific computing element came into existence its time clock started with whatever the current time was on the time clocks of all the other computing elements, if any, already in existence; or, even more unreasonable, one assumes something like a single high-precision time clock that is separate from the computing elements but whose current time can somehow be read instantly at any time by any computing element.

That a computing element has a time clock is reasonable, but that all computing elements have the exact same time on their time clocks is unreasonable. There are clock-synchronization algorithms already in use in our world of physical computers, so that computers that are widely separated geographically can have the same or nearly the same clock time, depending on how accurate the clocks are supposed to be. However, in the case of the computing elements, even if one assumes that the most efficient clock-synchronization algorithm possible is being run by the computing-element program to keep computing-element clocks synchronized, it seems to me like a big cost—in terms of the ongoing computing needed at every computing element running that clock-synchronization algorithm—for the small gain of being able to mix, in certain algorithms, timestamps that came from different computing elements.

Timers and Keeping Track of Elapsed Time

In the above code for get_relative_locations_of_bions() is the line “halt execution of this thread and wait estimated_time_to_wait, during which time other threads of execution can run.” Keeping track of elapsed time would be easy if one could always refer to the same accurate clock, but all the bions on our Earth are moving thru 3D space at roughly 1/245th of lightspeed (see the footnote in this subsection), which means that the computing element that a specific bion on Earth was in a second ago is now more than 1.22 million meters distant from the computing element that bion is in one second later (1.22 million meters is about 758 miles). How can the above-quoted “wait estimated_time_to_wait” be done without assuming that widely separated computing elements have the same time on their clocks (as stated above, I reject that idea)? Although I allow computing-element clocks to show different times when compared with each other, I assume—regarding the solution given in the next paragraph—that all computing-element clocks are ticking at very close to the same rate, and this assumption is reasonable.

A basic solution is to assume that a bion’s state information includes a big integer named this_bion's_accumulated_clock_ticks, which was initialized to zero when that bion was created. And in the computing-element program is code that does the following: As already stated in section 1.6, a particle is moved from a computing element (denote as CE1) to an adjacent computing element (denote as CE2) by “simply copying that particle’s information block from the computing element that currently holds that particle, to an adjacent computing element that becomes the new holder of that particle, and then at the computing element that no longer holds that particle: in effect, delete that particle’s information block from that computing element’s memory.” In the case of moving a bion, immediately before starting that copying of that bion’s information block from CE1 to CE2, the following is done at basically the same time in CE1 and CE2: in CE2, a big integer in CE2’s memory, named time_this_CE_got_this_bion, is set to the current time on CE2’s internal clock; in CE1, add to the bion’s current this_bion's_accumulated_clock_ticks value the number of clock ticks that CE1 has held that bion, which is, for the purpose of this calculation, defined as ((current time on CE1’s internal clock) − (CE1’s time_this_CE_got_this_bion)).

With each bion having its own this_bion's_accumulated_clock_ticks, which is maintained by the computing-element program as detailed in the previous paragraph, it is easy for each bion to keep track of elapsed time. For example, assume there is a learned-program statement wait_this_long() that has one parameter named total_clock_ticks_to_wait which is an integer whose value is set to the number of clock ticks to wait. Then, for a bion, the computing-element program does the following to execute wait_this_long(): it halts execution of the thread that called wait_this_long(), and saves in that bion’s memory regarding this execution thread a big integer named when_to_resume_execution that is set to (total_clock_ticks_to_wait + this_bion's_accumulated_clock_ticks). And, regarding how and when the end of the wait time is determined: If the bion is not currently asleep, then, immediately after the bion’s information block has been copied to CE2 (see the previous paragraph) and CE2’s computing-element program is ready to resume running the bion’s current execution threads, CE2’s computing-element program does the following check for each of the bion’s current execution threads, if any, that have when_to_resume_execution set: if this_bion's_accumulated_clock_ticks is greater than the thread’s when_to_resume_execution value, then end that wait and resume execution of that thread; otherwise, that execution thread remains waiting and won’t be checked again until a different computing element holds this bion and this bion is not asleep.
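
To make this bookkeeping concrete, here is a toy Python model of the tick accounting just described (the class and function names are mine; only the arithmetic matters). Note that the two clocks never need to agree with each other:

class ComputingElement:
    def __init__(self, clock):
        self.clock = clock                      # current time on this CE's internal clock
        self.time_this_CE_got_this_bion = None

class Bion:
    def __init__(self):
        self.accumulated_clock_ticks = 0        # this_bion's_accumulated_clock_ticks

def hand_off_bion(bion, ce1, ce2):
    # Done at basically the same time in CE1 and CE2, immediately before
    # copying the bion's information block from CE1 to CE2:
    ce2.time_this_CE_got_this_bion = ce2.clock
    bion.accumulated_clock_ticks += ce1.clock - ce1.time_this_CE_got_this_bion

# Example: the two clocks show wildly different times (100 versus 55,000),
# but only the ticks accumulated while CE1 held the bion (7 ticks) matter:
ce1, ce2, bion = ComputingElement(clock=100), ComputingElement(clock=55_000), Bion()
ce1.time_this_CE_got_this_bion = 93
hand_off_bion(bion, ce1, ce2)
print(bion.accumulated_clock_ticks)             # 7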

The following is an example of using wait_this_long() to impose a delay of three months between doing action A and doing action B:

do action A.
wait_this_long(NUMBER_OF_CLOCK_TICKS_IN_THREE_MONTHS).
do action B if all other requirements for doing action B, if any, are met.

In procedures and learned programs given later in this book, where it is said to wait a specific amount of time—instead of waiting for a specific event to happen, such as waiting to receive a specific message—in these cases, one can assume that the learned-program statement wait_this_long() is used to wait that specific amount of time. For example, in step 5 of the first “basic procedure” given in subsection 5.2.1, it says: “Given the returned distance and how fast one’s awareness/mind is moving towards that bion, compute the estimated time to reach that bion. Then wait a fraction of that estimated time—a fraction close to, but less than 1 (for example, 97/100ths would be good)—before doing step 6”. In this example, the time to wait, in terms of clock ticks, is first computed, and then wait_this_long() is called with that computed time to wait as its parameter.

In addition to the learned-program statement wait_this_long(), also assume there is a learned-program statement get_this_bion's_accumulated_clock_ticks() that returns the current value of this_bion's_accumulated_clock_ticks. This learned-program statement, get_this_bion's_accumulated_clock_ticks(), can be used to time or delay an event that involves more than a single execution thread and/or learned program. And, get_this_bion's_accumulated_clock_ticks() also gives a bion’s learned programs a way to timestamp data when that data is stored in that bion’s memory, by storing along with that data a timestamp set to the value returned by get_this_bion's_accumulated_clock_ticks(). Then, the current age of that stored data can be determined whenever that stored data is accessed at a later time by any of that bion’s learned programs: the current age of that stored data is simply the elapsed time computed as (get_this_bion's_accumulated_clock_ticks() − (that stored data’s timestamp)).

In one’s human mind, it should be obvious that many of one’s memories are, in effect, timestamped, such as one’s personal memories of events in one’s life, so that one can consciously know when a given remembered event happened. Also, by timestamping memories, one’s mind can delete old memories to make room for new memories. Besides one’s various memories being timestamped, each entry in one’s soliton directory (subsection 5.1.1) has at least two timestamps: one timestamp set to get_this_bion's_accumulated_clock_ticks() when that entry was added to one’s soliton directory (this timestamp allows the age of that entry to be computed); and a different timestamp set to get_this_bion's_accumulated_clock_ticks() each time that entry’s total_relationship_score is changed by a significant amount (this timestamp allows the age of the last significant update to that score to be computed). These two ages would be computed and used by the algorithm that determines which entry in the soliton directory to delete when room is needed for adding a new entry.
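
This deletion algorithm is left unspecified here; the following Python sketch shows just one way the two ages could be combined (the sum-of-ages scoring rule is only an assumption, nothing more):

def pick_entry_to_delete(entries, now):
    # For each entry, compute the two ages described above; the combined
    # scoring rule used here (the sum of the two ages) is purely an assumption.
    def staleness(entry):
        age_of_entry = now - entry["added_at"]
        age_of_last_significant_update = now - entry["last_significant_update_at"]
        return age_of_entry + age_of_last_significant_update
    return max(entries, key=staleness)

# Example with two hypothetical entries; the timestamps are values that were
# returned by get_this_bion's_accumulated_clock_ticks() when they were set:
entries = [
    {"name": "entry A", "added_at": 1_000, "last_significant_update_at": 90_000},
    {"name": "entry B", "added_at": 40_000, "last_significant_update_at": 41_000},
]
print(pick_entry_to_delete(entries, now=100_000)["name"])   # entry B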


footnotes

[24] Regarding how our world is moving thru 3D space, let’s use the following numbers (from the 2010 article by Marshall Brain (an apparent pseudonym), titled Good question – How fast are you moving through the universe right now? at http://www.brainstuffshow.com/blog/good-question-how-fast-are-you-moving-through-the-universe-right-now/): the Earth’s diameter is 7,920 miles; the Earth orbits the sun at a distance of 93,000,000 miles; and our solar system orbits the gravitational center of our Milky Way galaxy at a distance of 25,000 light years (one light year is 5,878,500,000,000 miles), completing that orbit once every 250,000,000 years.

To estimate the error in replier's_XYZ_relative_to_000 that results from the combination of our Earth’s rotation, our Earth’s orbit around the sun, and our solar system’s orbit around our galaxy’s gravitational center, an upper bound for each of these three errors is computed separately below, and then all three of these upper bounds are added together to get a composite upper bound on the error. Also, regarding multicellular development, note that, in general, the closer two cells are to each other, the more important it is to have a precise location of the other cell, and conversely, the further away two cells are from each other, the less important it is to have a precise location of the other cell. However, the closer two cells are to each other, the quicker messages will travel between their occupying bions because of the shorter distance between them, which means less time for the three errors under consideration to accumulate and become significant. So, to give these three errors more of a chance to appear significant, assume a large separation distance of 3 inches (7.62 centimeters).

To compute the error, we have to make an assumption about how fast the messages (the sent GET_LOCATIONS_OF_BIONS message, and the replying LOCATION_REPLY_FROM_BION messages) are moving thru 3D space. First we compute the error assuming these messages are moving at lightspeed, and then we compute the error assuming these messages are moving at half the speed of gravity messages:

For a message moving at lightspeed, its speed is 186,282 miles per second × 5,280 feet in a mile × 12 inches in a foot = 11,802,827,520 inches per second. For the message to go 3 inches takes 3 ÷ 11,802,827,520 = 0.000000000254 seconds (rounded to 3 significant digits). The roundtrip—the message from this_bion to the replying bion and the replying bion’s message back to this_bion—covers 6 inches and takes 0.000000000508 seconds.

(In this paragraph, to use the same orbit terminology, consider the rotation of the Earth at its surface on the Equator as an orbit:) To compute for each of the three cases—the Earth’s rotation, the Earth’s orbit around the sun, and our solar system’s orbit around our galaxy’s gravitational center—an upper bound for the error in replier's_XYZ_relative_to_000 that is caused by that case, a good upper bound—good in the sense of not being too far above the smallest upper bound that can be proven—can be computed as follows: Given the diameter of the orbit, divide that diameter by the time needed to complete half that orbit, to get the speed that this_bion is moving in the direction of the orbited center. Let S be this speed in inches per second, and let Derr be the wanted upper bound of the error resulting from this case. Then, given 3 inches as the separation distance between this_bion and the replying bion, Derr is simply S × the time needed for the 6-inch roundtrip which is 0.000000000508 seconds. Computation of this upper bound for the three cases follows:

  1. Case 1, rotation of the Earth: S = (7,920 miles × 5,280 feet in a mile × 12 inches in a foot) ÷ ((24 hours ÷ 2) × 3,600 seconds in an hour) = 501,811,200 inches ÷ 43,200 seconds = 11,616 inches per second

    Derr = 11,616 inches per second × 0.000000000508 seconds = 0.0000059 inches

  2. Case 2, the Earth orbiting the sun: S = ((93,000,000 miles × 2) × 5,280 feet in a mile × 12 inches in a foot) ÷ ((365 days ÷ 2) × 24 hours in a day × 3,600 seconds in an hour) = 11,785,000,000,000 inches ÷ 15,768,000 seconds = 747,400 inches per second

    Derr = 747,400 inches per second × 0.000000000508 seconds = 0.00038 inches

  3. Case 3, our solar system orbiting the gravitational center of our Milky Way galaxy: S = (((25,000 light years × 2) × 5,878,500,000,000 miles in a light year) × 5,280 feet in a mile × 12 inches in a foot) ÷ ((250,000,000 years ÷ 2) × 365 days in a year × 24 hours in a day × 3,600 seconds in an hour) = 18,623,000,000,000,000,000,000 inches ÷ 3,942,000,000,000,000 seconds = 4,724,300 inches per second

    Derr = 4,724,300 inches per second × 0.000000000508 seconds = 0.0024 inches

Adding together the above three Derr values gives us:

0.0000059 inches + 0.00038 inches + 0.0024 inches = 0.0028 inches (0.0071 centimeters)
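
The arithmetic above is easy to check; here is a short Python computation that reproduces the three Derr upper bounds and their sum:

roundtrip_seconds = 6 / (186_282 * 5_280 * 12)   # the 6-inch roundtrip at lightspeed

# Upper-bound speed S (inches per second) for each case: the orbit diameter
# divided by the time needed to complete half the orbit.
speeds = {
    "Earth rotation":  (7_920 * 5_280 * 12) / (12 * 3_600),
    "Earth orbit":     (93_000_000 * 2 * 5_280 * 12) / (182.5 * 24 * 3_600),
    "galactic orbit":  (25_000 * 2 * 5_878_500_000_000 * 5_280 * 12)
                       / (125_000_000 * 365 * 24 * 3_600),
}
derr = {case: s * roundtrip_seconds for case, s in speeds.items()}
for case, d in derr.items():
    print(case, d)                               # about 0.0000059, 0.00038, and 0.0024 inches
print("total:", round(sum(derr.values()), 4))    # total: 0.0028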

So, in the case of multicellular development, and two bions (and their cells) that are 3 inches apart from each other, the error—assuming that the messages are moving at lightspeed—for either bion using the get_relative_locations_of_bions() routine to get the other bion’s location is at worst about 0.0028 inches, which means that replier's_XYZ_relative_to_000, which is the computed point in 3D space for that other bion’s location relative to this_bion being at point (0, 0, 0), may be as much as 0.0028 inches distant, in some unknown direction, from the actual location of that other bion in 3D space relative to this_bion being at point (0, 0, 0). Also note that the error is linear with the separation distance between the two bions: for example, if the separation distance is doubled to 6 inches, the error is doubled to 0.0056 inches; if the separation distance is halved to 1.5 inches, the error is halved to 0.0014 inches. It seems reasonable to conclude that the error is small enough that multicellular development will not be negatively affected by it, and there is no need for a more precise algorithm for computing replier's_XYZ_relative_to_000 than the one I give in this book.

Instead of the assumption in the previous paragraph that the messages are moving at lightspeed, assume that the messages are moving at half the speed of gravity messages. As stated in section 1.1, the actual speed of gravity—which I assume involves the transmission of gravity messages—has been computed by astronomer Tom Van Flandern as being not less than 20 billion times the speed of light (2 × 10¹⁰ × c). Thus, moving at half the speed of gravity messages is moving at least as fast as (10¹⁰ × lightspeed), and the above computed error of 0.0028 inches must be divided by 10¹⁰ assuming this faster speed for transmission thru 3D space of the GET_LOCATIONS_OF_BIONS and LOCATION_REPLY_FROM_BION messages, which gives an estimated relative-location error of 0.00000000000028 inches instead of 0.0028 inches, for the two bions (and their cells) separated by 3 inches.


3.8.7 The Learned-Program Statements for Seeing and Manipulating Physical Matter have a very Short Range

By physical matter is meant physical atoms and molecules. The purpose of this subsection is to give the reasons for concluding that the learned-program statements for directly seeing and manipulating physical matter have a very short range, in terms of the maximum distance allowed between an atom and a bion that calls a learned-program statement to see or manipulate that atom (an exception to this short range is getting the relative location of a single physical atom identified by its unique identifier, described in subsection 3.8.9). I estimate that this short range is less than one-tenth of a millimeter (less than 1/250th of an inch).

Calling the learned-program statement get_relative_locations_of_physical_atoms(), which is detailed in subsection 3.8.8, is how a bion can directly see the physical matter in its surrounding environment. This learned-program statement returns a count of the replying atoms and their centroid relative to the calling bion, and can also return, in a sorted list, details about the nearest replying atoms, including each replying atom’s location in 3D space relative to the calling bion (see the description of the get_details_for_this_many_nearest_recipients parameter in subsection 3.8.6; the returned list is in ascending order of the distance between the replying atom and the calling bion). The furthest away that an atom can be from that calling bion and still be seen by that call is MAX_SEND_DISTANCE_ALLOWED_FOR_LOCATING_ATOMS, whose value I estimate as being less than one-tenth of a millimeter (less than 1/250th of an inch).

Before proceeding further, note the following common-sense rule, which exists because a bion should not be able to directly manipulate a physical atom that lies beyond the surrounding environment in which that bion can directly see physical matter:

The distance at which a bion can directly manipulate one or more physical atoms by calling a learned-program statement, cannot exceed the maximum distance at which that bion can directly see any and all of the physical atoms surrounding that bion by calling the get_relative_locations_of_physical_atoms() learned-program statement.

Thus, the distance limit for get_relative_locations_of_physical_atoms(), which is MAX_SEND_DISTANCE_ALLOWED_FOR_LOCATING_ATOMS, is also the distance limit for any learned-program statement that directly manipulates physical atoms. Besides learned-program statements that, in effect, move physical atoms and/or affect chemical bonds, directly manipulating physical atoms would also include any learned-program statements that transmute physical atoms or materialize physical atoms, if any such learned-program statements exist.

The following three reasons each argue for a short range for directly seeing and manipulating physical matter:

  1. Given the above rule that the distance limit for directly seeing physical matter is also the distance limit for directly manipulating physical matter, it is only necessary to argue for a short range for directly seeing physical matter:

    The reason for such a short range for directly seeing physical atoms is to keep the computation cost of calling get_relative_locations_of_physical_atoms() reasonable: In our reality, it appears that physical atoms greatly outnumber intelligent particles. For example, an adult human body has about 50 trillion cells, and if one assumes one cell-controlling bion per cell, that’s about 50 trillion bions, or 5 × 10¹³ bions. In comparison, an adult human body weighing 70 kilograms (154 pounds) is estimated to have 7 × 10²⁷ atoms (see How many atoms are in the human body? at http://education.jlab.org/qa/mathatom_04.html). Generalizing for a multicellular organism: there are about 10¹⁴ atoms per cell, and assuming one cell-controlling bion per cell, about 10¹⁴ atoms (100 trillion atoms) for every cell-controlling bion in that multicellular organism.

    The computation cost for a bion calling get_relative_locations_of_physical_atoms() is proportional to the number of atoms whose reply messages are received by that bion during that call. Because there can be so many atoms in such a tiny volume, reducing the volume of space that can be examined by a call of get_relative_locations_of_physical_atoms() is the most direct way to limit the computation cost of that call, and this means a short distance for MAX_SEND_DISTANCE_ALLOWED_FOR_LOCATING_ATOMS.

    Regarding get_relative_locations_of_physical_atoms(), another consideration is the possibility of too many reply messages in too short a time for the calling bion—the bion that called get_relative_locations_of_physical_atoms(), which sent the GET_LOCATIONS_OF_PHYSICAL_ATOMS message—to process all those replies. Let x be the transfer_count of the received GET_LOCATIONS_OF_PHYSICAL_ATOMS message_instance at a replying atom, and let y be the transfer_count of that atom’s reply message_instance when it is received by the calling bion. If the sum (x + y) for replying atom A is the same or nearly the same as the sum (x + y) for replying atom B, then atom A’s reply and atom B’s reply will both be received at the calling bion at nearly the same time (note: assuming that a computing element can have only one message_instance copied to it at a time, it’s not possible for the calling bion to receive both atom A’s reply and atom B’s reply at exactly the same time).

    For the math given in this paragraph, assume the same uniform distribution of replying atoms in the sphere filled by the GET_LOCATIONS_OF_PHYSICAL_ATOMS message, regardless of the send_distance value. In this paragraph, also assume that “nearly the same time” is a very short time interval of fixed size t, and “maximum number of reply messages received by the calling bion at nearly the same time” means the largest number of replies received during time interval t anywhere in the timeframe that is inclusively bounded by the first reply message received by the calling bion and the last reply message received by the calling bion. The maximum number of reply messages received by the calling bion at nearly the same time will increase with the square of the send_distance used to send the GET_LOCATIONS_OF_PHYSICAL_ATOMS message. If at a send_distance of d the maximum number of reply messages received by the calling bion at nearly the same time is j, then, if the send_distance is increased by a factor of m (m > 1), the send_distance becomes (m × d) and a close approximation of the maximum number of reply messages that will be received by the calling bion at nearly the same time is (m² × j). For example, if send_distance is 10,000 and the maximum number of reply messages received by the calling bion at nearly the same time is 13, then if send_distance were 30,000 instead of 10,000 (m is 3), a close approximation of the maximum number of reply messages that will be received by the calling bion at nearly the same time is (3² × 13) which is 117 (a quick numeric check of this square-law scaling is given in the sketch after this list). The maximum allowed value for send_distance for get_relative_locations_of_physical_atoms() is MAX_SEND_DISTANCE_ALLOWED_FOR_LOCATING_ATOMS. Considering the huge number of physical atoms in a single cell in one’s physical body, a short distance for MAX_SEND_DISTANCE_ALLOWED_FOR_LOCATING_ATOMS makes the possibility of overwhelming the calling bion with too many reply messages at nearly the same time much less likely.

  2. The mere fact that our physical world is so filled with multicellular life, where the cells are very tiny, points to a very limited range for those learned-program statements that can manipulate physical matter. If, hypothetically, any bion had available to it learned-program statements that gave it the ability to see and manipulate physical matter at a maximum distance from that bion of, say, an inch (about 2.5 centimeters), then why, for example, wouldn’t we see microscopic pond life (both single-celled and multi-celled) using that long range of an inch to see food particles and then move them toward itself to feed on them? Instead, what we actually see is that both single-celled and multi-celled pond life only consumes food that comes into contact with the surface of that pond life, and that the only two feeding strategies for cellular pond life are to either anchor and wait for the food to contact its surface, or to actively swim about in the pond water so as to increase its chances of running into food that it can then consume. Given what is seen with how pond life actually feeds, it is reasonable to conclude a very short range for those learned-program statements that can manipulate physical matter. Also, why would learned programs for multicellular life evolve and develop at all in the first place, when, with that hypothetical one-inch range, it would be possible to evolve and develop instead learned programs for manipulating physical matter on a much larger scale than what is done with physical cells, which are very tiny?

    Thus, given the fact that multicellular life is so abundant on our Earth, and given the idea of survival of the fittest in this world (which, given the learning algorithms in section 3.6, would also apply to the evolution of learned programs that can manipulate physical matter), it is reasonable to conclude that the multicellular life we see is the fittest, and that the learned-program statements that can manipulate physical matter have a very short range for a bion to directly see and manipulate physical matter, and this very short range is probably less than one-tenth of a millimeter (less than 1/250th of an inch). Note that most of the cells in our human bodies are less than one-tenth of a millimeter in diameter.

  3. Another reason to believe that the learned-program statements for directly seeing and manipulating physical matter have a very short range, or at least a range that is too short to be useful to our human minds in our physical human bodies, is the absence of any mental ability to directly see and manipulate physical matter with our minds alone. The learned-program statements for manipulating physical matter are too limited in what they can do to have become a part of our minds. In addition to us humans, none of the other higher animals in our physical world show any psychokinetic ability.
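
Regarding the square-law scaling described in reason 1 above, here is a quick numeric check in Python, using the numbers from that example:

d, j = 10_000, 13    # send_distance, and the max near-simultaneous replies at that distance
m = 3                # the factor by which send_distance is increased (to 30,000)
print((m ** 2) * j)  # 117 -- the approximate new max, scaling with the square of send_distance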

3.8.8 Bions Seeing and Manipulating Atoms and Molecules

In this subsection, learned-program statements are presented that allow a bion to see and manipulate physical atoms and molecules.

The physical structure of a cell is composed of many different kinds of molecules (atoms are bonded together with other atoms forming molecules). For a bion to be able to control a cell, it needs a way to see atoms and how those atoms are bonded to other atoms (chemical bonds). And a bion also needs a way to move an atom (and if that atom is a part of a molecule, and chemical bonds are preserved, then moving that atom will also move that molecule).

There are more than 100 different kinds of atoms in our world (reference the Periodic table of the chemical elements; three examples of atoms are hydrogen, oxygen, and carbon atoms). An atom is known to have subatomic structure, being composed of electrons, protons, and neutrons, and those protons and neutrons in turn are composed of quarks. However, for simplicity, let’s assume that in our everyday world—not the world of, for example, particle accelerators doing high-energy collisions of atoms—an individual atom at any point in time only occupies a single computing element, and all the state information for that atom is stored in that computing element’s memory. This state information has all the relevant information for that atom, including a large random number that serves as a unique identifier for that atom. Also, the state information for any atom A includes info about the other atoms, if any, that are currently chemically bonded to atom A (this info would include for each of these other atoms, the other atom’s unique identifier, and also that other atom’s kind—which chemical element it is).

For a bion to see an atom, assume there is a learned-program statement get_relative_locations_of_physical_atoms(). This routine and its supporting routines and code can be designed in the same way as was done for the get_relative_locations_of_bions() routine and its supporting routines and code, with only a small number of changes: For example, corresponding to GET_LOCATIONS_OF_BIONS and LOCATION_REPLY_FROM_BION there would be two new values for special_handling_locate, GET_LOCATIONS_OF_PHYSICAL_ATOMS and LOCATION_REPLY_FROM_PHYSICAL_ATOM, and there would be new code inserted near the end of the examine_a_message_instance() routine (a comment in the examine_a_message_instance() routine shows where the following code would be placed):

if message_instance.special_handling_locate is GET_LOCATIONS_OF_PHYSICAL_ATOMS
and this_CE is currently holding an atom
then
/*
The reply_to_this_location_request_atoms() routine has the same design as the reply_to_this_location_request_bions() routine, but with atom details in the message text to be sent, instead of bion details.
*/
reply_to_this_location_request_atoms(message_instance)  /* The detail of this routine is not given. */
return  /* exit this routine */
end if

if message_instance.special_handling_locate is LOCATION_REPLY_FROM_PHYSICAL_ATOM
and this_CE is currently holding a bion that is not asleep
and that bion qualifies as a recipient of the message  /* Examine the message_instance and also that bion’s identifier block to determine this. */
then
/*
The process_a_location_reply_from_an_atom() routine has the same design as the process_a_location_reply_from_a_bion() routine, but with atom details in the received message text instead of bion details.
*/
process_a_location_reply_from_an_atom(message_instance)  /* The detail of this routine is not given. */
return  /* exit this routine */
end if

For the reply_to_this_location_request_atoms() routine, the message text of the reply will have the current XYZ location of the atom, along with other info about that atom including that atom’s unique identifier and that atom’s type, and whether that atom is chemically bonded to one or more other atoms and if so then for each of those other atoms what is its unique identifier and its type. The process_a_location_reply_from_an_atom() routine will save details of those atoms that are nearest to the bion that called get_relative_locations_of_physical_atoms(), in the same way that process_a_location_reply_from_a_bion() saves certain details of those bions that are nearest to the bion that called get_relative_locations_of_bions(). Also, get_relative_locations_of_physical_atoms() has the same get_details_for_this_many_nearest_recipients parameter that get_relative_locations_of_bions() has.

For get_relative_locations_of_physical_atoms(), the maximum value allowed for its use_this_send_distance parameter is MAX_SEND_DISTANCE_ALLOWED_FOR_LOCATING_ATOMS, and my estimate is that this maximum value is less than one-tenth of a millimeter (less than 1/250th of an inch). The reasons for such a short maximum distance for locating atoms are given in subsection 3.8.7, which also gives an important rule stated as follows:

The distance at which a bion can directly manipulate one or more physical atoms by calling a learned-program statement, cannot exceed the maximum distance at which that bion can directly see any and all of the physical atoms surrounding that bion by calling the get_relative_locations_of_physical_atoms() learned-program statement.

Applying the above rule: The value of MAX_SEND_DISTANCE_ALLOWED_FOR_LOCATING_ATOMS also becomes the maximum transfer distance allowed for any learned-program statement that, in effect when its sent message is received and acted upon, moves and/or otherwise manipulates one or more atoms. Two such learned-program statements are given below in this subsection: move_a_physical_atom() and push_against_physical_matter(), and for each the maximum allowed value for its use_this_send_distance parameter is MAX_SEND_DISTANCE_ALLOWED_FOR_LOCATING_ATOMS.

The get_relative_locations_of_physical_atoms() routine can be made more useful by assuming it has the option to be given parameters that specify the kinds of atoms—including an option to specify chemical bonds for those atoms—that the bion calling get_relative_locations_of_physical_atoms() wants replies from. For example, if these parameters were set to only want replies from carbon atoms that are chemically bonded to two hydrogen atoms, then only those computing elements holding a carbon atom that is chemically bonded to two hydrogen atoms would reply back by calling reply_to_this_location_request_atoms().
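
As a sketch of how such a selection test might look (the data layout and function name below are hypothetical, since no exact format for these parameters is fixed in this book), a computing element holding an atom could apply a predicate like the following before calling reply_to_this_location_request_atoms():

def atom_matches_request(atom, wanted_kind, wanted_bond_counts):
    # The atom's state information (per the text above) includes its kind and,
    # for each chemically bonded atom, that other atom's kind. The dict layout
    # here is a hypothetical stand-in for that state information.
    if atom["kind"] != wanted_kind:
        return False
    bond_counts = {}
    for bonded_kind in atom["bonded_atom_kinds"]:
        bond_counts[bonded_kind] = bond_counts.get(bonded_kind, 0) + 1
    return all(bond_counts.get(kind, 0) == n for kind, n in wanted_bond_counts.items())

# The example from the text: reply only if this is a carbon atom chemically
# bonded to (exactly) two hydrogen atoms; other bonds are not constrained.
atom = {"kind": "carbon", "bonded_atom_kinds": ["hydrogen", "hydrogen", "oxygen"]}
print(atom_matches_request(atom, "carbon", {"hydrogen": 2}))   # True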

In effect, the learned-program statement get_relative_locations_of_physical_atoms() allows a bion to see the atoms and molecules that are nearby, and know where in 3D space relative to itself those atoms and molecules are. Besides being able to see atoms and molecules, to control a cell a bion needs a way to move itself around in 3D space (the move_this_bion() learned-program statement provides this capability), and also a way to move atoms and molecules in 3D space.

For a bion to move an atom in 3D space, assume there is a move_a_physical_atom() learned-program statement. Calling move_a_physical_atom() results in sending a message whose special_handling_non_locate is set to MOVE_PHYSICAL_ATOM. With regard to this sent message, the recipient computing element (denoted below as recipient-CE) is whichever computing element is holding the specified atom to be moved when the message instance is offered to that computing element. The move_a_physical_atom() routine has the following four parameters in addition to its use_this_send_distance parameter:

  1. The unique identifier of the atom to be moved (denote this atom as atom M). Presumably this unique identifier, and also the chemical-bonds info for parameters 3 and 4 below, were gotten from a previous call of get_relative_locations_of_physical_atoms().

  2. If specified, a vector that gives the wanted direction of movement for atom M in 3D space.

  3. If atom M is currently part of a molecule, then this parameter is a list of zero or more unique identifiers, each of which is the unique identifier of an atom that is currently bonded to atom M. For each of these listed atoms, its chemical bond with atom M will be preserved during the move of atom M.

  4. If atom M is currently part of a molecule, then this parameter is a list of zero or more unique identifiers, each of which is the unique identifier of an atom that is currently bonded to atom M. For each of these listed atoms, its chemical bond with atom M is to be broken as a result of the move of atom M.

Although one might imagine a parameter that specifies how far to move the atom (and perhaps there is such a parameter, albeit with limitations), in the case of an atom that is part of a molecule, it may be better to assume that the computing-element program, which has all the code for implementing a move_a_physical_atom() request, will only move atom M a very short distance: enough to satisfy the preserving and breaking of bonds specified by parameters 3 and 4 when move_a_physical_atom() was called. Note that moving atom M will involve, in effect, the state information of atom M transiting across some number of computing elements beginning at recipient-CE (this state information will be updated during the move process as needed to reflect changes, if any, in atom M’s chemical bonds). Also, let’s assume that when the final outcome of the move request is known, whichever computing element ends up holding atom M will send a reply message to the bion that called move_a_physical_atom(), giving the result of that move request, either success or failure, and if failure then some additional detail as to why the move request wasn’t doable as specified (for this reply message, its send_distance can be set to MAX_SEND_DISTANCE_ALLOWED_FOR_LOCATING_ATOMS, or set to a lesser value if a lesser value can be calculated from the available information).

Chemistry is a subject that I know very little about, and I know nothing about the strength of chemical bonds and what that strength depends on. Because of my ignorance, there may well be a better overall design for move_a_physical_atom() and its parameters than what I’ve given above. Also, besides move_a_physical_atom(), there may be other learned-program statements for manipulating physical matter in a way that can alter chemical bonds, but given my ignorance I’ll leave to others any further consideration of move_a_physical_atom() and any other learned-program statements that involve chemical bonds.
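
That said, the parameter layout described above can still be made concrete. Below is a toy sketch in Python, in which the function body merely validates its inputs and fabricates the kind of success-or-failure reply described above; every name and number in this sketch is hypothetical, and the sketch is not the computing-element program itself:

# Arbitrary stand-in value for the distance limit discussed in this subsection.
MAX_SEND_DISTANCE_ALLOWED_FOR_LOCATING_ATOMS = 250

def move_a_physical_atom(atom_id, direction, preserve_bond_ids, break_bond_ids,
                         use_this_send_distance):
    """Toy model: check the parameters, then return a reply like the one a
    computing element would send back (success, or failure with a reason)."""
    if use_this_send_distance > MAX_SEND_DISTANCE_ALLOWED_FOR_LOCATING_ATOMS:
        return {"result": "failure", "reason": "send distance exceeds maximum"}
    if set(preserve_bond_ids) & set(break_bond_ids):
        return {"result": "failure",
                "reason": "an atom cannot be in both bond lists"}
    return {"result": "success", "moved_atom": atom_id, "direction": direction}

# Parameters 1 thru 4 as described above, plus use_this_send_distance.
reply = move_a_physical_atom(
    atom_id=42,                  # parameter 1: unique identifier of atom M
    direction=(0.0, 0.0, 1.0),   # parameter 2: wanted direction of movement
    preserve_bond_ids=[7, 9],    # parameter 3: bonds to preserve during the move
    break_bond_ids=[13],         # parameter 4: bonds to break as a result of the move
    use_this_send_distance=100)
print(reply)  # {'result': 'success', 'moved_atom': 42, 'direction': (0.0, 0.0, 1.0)}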

Besides learned-program statements that can directly work with individual atoms and their chemical bonds, there is a need for a learned-program statement that can “push” against physical matter (any atoms and/or molecules encountered by this pushing force) with a specified force in a specified direction. Assume there is a learned-program statement push_against_physical_matter(). The push_against_physical_matter() routine has the following four parameters, in addition to its use_this_send_distance parameter:

  1. The radius of a right circular cylinder, denoted RCCradius. Regarding the allowed range of values for this radius, my guess is that the smallest value is the size of an atom (assuming, as already stated above, that in our everyday world an individual atom at any point in time occupies only a single computing element, the size of an atom is the side-width of a computing element, which has distance value 1), and the largest allowed value would be (½ × use_this_send_distance).

  2. The length of a right circular cylinder, denoted RCClength. This length is the distance between the two end-caps of the cylinder, as measured from the center of each circular end-cap. Regarding the allowed range of values for this length, the smallest allowed value would be 1, and the largest allowed value would be use_this_send_distance.

  3. A vector V that gives the wanted direction of the push in 3D space. Denote the two endpoints of vector V as A and B, with the direction of the vector being from point A to point B.

  4. An integer F specifying how much force to apply pushing against each atom that will be pushed against as a result of this push_against_physical_matter() call.

The two parameters RCCradius and RCClength define a right circular cylinder, and the push_against_physical_matter() routine will compute the location of this right circular cylinder in 3D space, such that the center of one of the circular end-caps is at the current location in 3D space of this_bion, and the line that runs thru the centers of those two end-caps is parallel to vector V and has the same orientation in 3D space.

After computing the location of that right circular cylinder in 3D space, a message is then sent with the relevant details. For any computing element that gets the sent message, is currently holding an atom, and has its location in 3D space within the bounds of that right circular cylinder, that computing element (more specifically, the computing-element program that it runs) will apply force F against that atom in the direction that is parallel to vector V. In other words, the volume of space defined by that right circular cylinder, that is within message-reach for the given use_this_send_distance value, is where the force F will be applied against any atom currently in that volume of space, and the direction of that applied force is parallel to vector V.

The detail of the push_against_physical_matter() routine follows:

push_against_physical_matter(RCCradius, RCClength, V, F, use_this_send_distance)
{
/*
Define a new vector (A_endcap_center, B_temp) that translates vector V so that instead of beginning at point A, it begins at point A_endcap_center.

For computing the three coordinates (X, Y, and Z) of B_temp, the arithmetic is signed.
*/
set A_endcap_center to this_CE's_XYZ  /* the current location of this_bion */

set B_temp.X to (B.X + (A_endcap_center.X − A.X))
set B_temp.Y to (B.Y + (A_endcap_center.Y − A.Y))
set B_temp.Z to (B.Z + (A_endcap_center.Z − A.Z))

/*
At this point the vector (A_endcap_center, B_temp) has the same length as vector V, and is parallel to vector V, and points in the same direction as vector V, but for the line that passes thru points A_endcap_center and B_temp we want to find the point B_endcap_center on that line that is RCClength distant from point A_endcap_center such that the vector (A_endcap_center, B_endcap_center) points in the same direction as vector V.
*/
set V_length to the distance between points A and B  /* Use the distance formula for computing the distance between two points in 3D space. */
set ratio to (RCClength ÷ V_length)

set B_endcap_center.X to (A_endcap_center.X + (ratio × (B_temp.X − A_endcap_center.X)))
set B_endcap_center.Y to (A_endcap_center.Y + (ratio × (B_temp.Y − A_endcap_center.Y)))
set B_endcap_center.Z to (A_endcap_center.Z + (ratio × (B_temp.Z − A_endcap_center.Z)))

/*
Prepare the message to be sent, and send it.
*/
/* set the message text */
set message_instance.mt.rcc.radius to RCCradius
set message_instance.mt.rcc.length to RCClength
set message_instance.mt.rcc.A_endcap_center to A_endcap_center
set message_instance.mt.rcc.B_endcap_center to B_endcap_center
set message_instance.mt.V to V
set message_instance.mt.F to F

/* set other items */
set message_instance.special_handling_non_locate to PUSH_PHYSICAL_MATTER
set message_instance.send_distance to use_this_send_distance

/*
Note that the intended recipients of this PUSH_PHYSICAL_MATTER message are computing elements holding physical atoms. Complete and send this message.
*/
Assume that any items defined as being in this message_instance but not explicitly set above are set as stated in subsection 3.8.4 (for example, transfer_count is set to 0). Then, to send this PUSH_PHYSICAL_MATTER message_instance into 3D space, use the same code the send() statement uses to offer a message_instance to the adjacent computing elements.

return  /* exit this routine */
}

For the two values of special_handling_non_locate given in this subsection, MOVE_PHYSICAL_ATOM and PUSH_PHYSICAL_MATTER, there would be new code inserted near the end of the examine_a_message_instance() routine (a comment in the examine_a_message_instance() routine shows where the following code would be placed):

if message_instance.special_handling_non_locate is MOVE_PHYSICAL_ATOM
and this_CE is currently holding the specified atom  /* The unique identifier of the atom to be moved is in the message_instance. */
then
process_a_move_atom_request(message_instance)  /* The detail of this routine is not given. */
return  /* exit this routine */
end if

if message_instance.special_handling_non_locate is PUSH_PHYSICAL_MATTER
and this_CE is currently holding an atom  /* any physical atom */
then
/*
The is_point_within_right_circular_cylinder() routine returns "yes" if the given point lies within the given right circular cylinder; otherwise it returns "no". Although the detail of this routine is not given in this book, the math needed to answer this question can be found, for example, in Determining if a given point is inside a right circular cylinder at https://www.physicsforums.com/threads/determining-if-a-given-point-is-inside-a-right-circular-cylinder.200082/
*/
set this_atom_XYZ to this_CE's_XYZ  /* the location of the atom held by this_CE */

set returned_value to is_point_within_right_circular_cylinder(this_atom_XYZ, message_instance.mt.rcc)  /* The detail of this routine is not given. */
if returned_value is "yes"
then
/*
A and B are the two points in vector V (in message_instance.mt.V), and the direction of the vector is from point A to point B.

Define a new vector (this_atom_XYZ, B_temp) that translates vector V so that instead of beginning at point A, it begins at point this_atom_XYZ. Note that this new vector will be parallel to vector V and point in the same direction as vector V, and this is the direction in which the held atom is to be pushed with force message_instance.mt.F

For computing the three coordinates (X, Y, and Z) of B_temp, the arithmetic is signed.
*/
set B_temp.X to (B.X + (this_atom_XYZ.X − A.X))
set B_temp.Y to (B.Y + (this_atom_XYZ.Y − A.Y))
set B_temp.Z to (B.Z + (this_atom_XYZ.Z − A.Z))

process_a_push_atom_request(this_atom_XYZ, B_temp, message_instance.mt.F)  /* The detail of this routine is not given. */
end if
return  /* exit this routine */
end if
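
Although the detail of is_point_within_right_circular_cylinder() is not given above, the needed math is straightforward and worth showing. Below is a minimal runnable sketch in Python: project the point onto the cylinder’s axis, check that the projection lies between the two end-caps, and then check the point’s distance from the axis against the radius (the parameter names are mine, chosen to match the pseudocode above):

import math

def is_point_within_right_circular_cylinder(point, cap_a, cap_b, radius):
    """Return True if point lies inside the right circular cylinder whose
    circular end-caps are centered at cap_a and cap_b. All points are
    (x, y, z) tuples."""
    axis = tuple(b - a for a, b in zip(cap_a, cap_b))      # end-cap to end-cap
    to_point = tuple(p - a for a, p in zip(cap_a, point))  # cap_a to the point
    axis_length_squared = sum(c * c for c in axis)
    # The fraction of the way along the axis where the point projects.
    t = sum(v * w for v, w in zip(to_point, axis)) / axis_length_squared
    if t < 0.0 or t > 1.0:
        return False  # the point is beyond one of the two end-caps
    # Compare the point's distance from the axis line against the radius.
    closest = tuple(a + t * c for a, c in zip(cap_a, axis))
    return math.dist(point, closest) <= radius

# A cylinder of radius 2 whose axis runs from (0, 0, 0) to (0, 0, 10):
print(is_point_within_right_circular_cylinder((0, 1, 5), (0, 0, 0), (0, 0, 10), 2))  # True
print(is_point_within_right_circular_cylinder((0, 3, 5), (0, 0, 0), (0, 0, 10), 2))  # False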

Regarding physical matter, Newton’s third law of motion states: “For every action there is an equal and opposite reaction.” However, this law applies to physical matter interacting with itself, and does not apply to an intelligent particle that is, in effect, moving or pushing physical matter by calling a learned-program statement. Thus, when a bion, in effect, pushes against physical matter, there is no push-back against that bion, and no resulting movement of that bion.

3.8.9 How Cell-Controlling Bions Stay with their Cells

Aside from gravity, which affects both intelligent particles and common particles, physical cells are subject to physical forces that do not affect intelligent particles. In particular, cells can be moved about by physical forces that do not move the bions controlling those cells. The question then becomes: how does a cell-controlling bion stay with its cell when that cell is moved in 3D space by one or more physical forces that do not also move its cell-controlling bion?

There are many different kinds of moving forces, from many different sources, that a physical cell can undergo in a large multicellular organism, including, for us humans, such things as: all the voluntary movements we can make with our limbs and other parts of our body, including, for example, moving every cell in our body by walking; breathing, which moves a lot of cells in our upper body; the beating of our heart and the pulsing of major blood vessels, which also move a lot of cells in addition to the blood cells that move with the circulating blood; other involuntary muscle movements, such as movements in our digestive tract, including movements by our stomach processing a meal; and all the ways a physical body and its cells can be moved or buffeted about by external physical forces, including such things as floating in water or being a passenger in a vehicle, such as a car or jet aircraft. In all such movement cases, a cell-controlling bion needs a way to stay with its cell.

Because the earliest cells in our world billions of years ago presumably existed as single cells in watery solutions, subject to being moved about by movements of the surrounding water, a learned program to keep cell-controlling bions with their cells would have evolved in the earliest stage of organic life.

Because each physical atom has a unique identifier, and cells are collections of physical atoms, the easiest solution is to, in effect, keep a cell-controlling bion close to a specific atom in its cell, identifying that specific atom by its unique identifier. Name this learned program that all cell-controlling bions in our world have, LP_keep_this_bion_close_to_this_physical_atom (LP is short for learned program). This learned program has two inputs:

  1. The unique identifier of the physical atom that this bion is to stay close to.

  2. A distance, denoted stay_within_this_distance, that is the maximum distance this bion should be from that atom.

Regarding the detail of this learned program, assume there is a learned-program statement get_relative_location_of_one_physical_atom(), and besides having the use_this_send_distance parameter, its other parameter is the unique identifier of a physical atom. As already explained in subsection 3.8.7, get_relative_locations_of_physical_atoms() has a very short range, and its MAX_SEND_DISTANCE_ALLOWED_FOR_LOCATING_ATOMS is estimated as less than one-tenth of a millimeter (less than 1/250th of an inch). The root reason for its very short range is the potentially enormous number of recipient atoms for the sent message when a bion calls get_relative_locations_of_physical_atoms(). However, a call of get_relative_location_of_one_physical_atom() has only a single recipient atom, so its MAX_SEND_DISTANCE_ALLOWED_FOR_LOCATING_ONE_ATOM can easily be a distance of thousands of miles (thousands of kilometers), and I assume this is its maximum range.

Given get_relative_location_of_one_physical_atom(), the learned program LP_keep_this_bion_close_to_this_physical_atom can work as follows: Many times per second, get_relative_location_of_one_physical_atom() is called. For efficiency, assume the use_this_send_distance parameter is initially set to stay_within_this_distance, and is increased in steps as needed if no reply is received back from the specified atom in the time allowed for that reply; then, when a reply is received and the bion moves closer to that atom, use_this_send_distance can be decreased accordingly. Assuming a reply from the atom, the info returned by that call of get_relative_location_of_one_physical_atom() includes the distance to that atom and the replier's_XYZ_relative_to_000 for that atom. If that distance is further away than stay_within_this_distance, then move_this_bion() is called with its direction-to-move parameter set to the value of that atom’s replier's_XYZ_relative_to_000, so that this call of move_this_bion() will move the bion in the direction of that atom. Thus, when a cell-controlling bion is running its learned program LP_keep_this_bion_close_to_this_physical_atom, many times per second that bion checks the location of that atom relative to itself, and moves closer to that atom as needed so as to keep within the stay_within_this_distance (for a typical cell, stay_within_this_distance is probably set to a value short enough to keep that cell-controlling bion either mostly or completely within the confines of its cell).
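
As a toy illustration of this keep-close behavior, below is a short self-contained sketch in Python. The two learned-program statements are replaced here by direct arithmetic on (x, y, z) positions, so the sketch models only the distance check and the corrective move, not the underlying message traffic:

import math

def keep_bion_close_to_atom(bion_pos, atom_pos, stay_within_this_distance):
    """One iteration of the keep-close loop: if the atom is farther away than
    stay_within_this_distance, move the bion toward the atom just far enough
    to be back within that distance. Positions are (x, y, z) tuples."""
    # Stand-in for get_relative_location_of_one_physical_atom(): the atom's
    # location relative to the bion, and the distance to the atom.
    relative = tuple(a - b for b, a in zip(bion_pos, atom_pos))
    distance = math.sqrt(sum(c * c for c in relative))
    if distance > stay_within_this_distance:
        # Stand-in for move_this_bion(): step in the direction of the atom.
        excess = distance - stay_within_this_distance
        step = tuple(c * (excess / distance) for c in relative)
        bion_pos = tuple(b + s for b, s in zip(bion_pos, step))
    return bion_pos

# The atom has drifted 12 units away; one iteration brings the bion back to
# within stay_within_this_distance of that atom.
bion = keep_bion_close_to_atom((0.0, 0.0, 0.0), (0.0, 0.0, 12.0), 5.0)
print(bion)  # (0.0, 0.0, 7.0), which is 5 units from the atom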

When a cell-controlling bion is within its cell, its learned program LP_keep_this_bion_close_to_this_physical_atom is probably always running, with changes as needed to its two inputs while it is running, as that bion moves around in its cell manipulating atoms and molecules as needed to keep that cell alive and functioning. A cell-controlling bion will stop running its learned program LP_keep_this_bion_close_to_this_physical_atom if that bion is leaving its cell either temporarily or permanently. An example of temporarily leaving its cell is if that cell-controlling bion becomes a part of a projected bion-body (see the description of bion-body projections in chapter 5); at the end of a bion-body projection, to get back to their cells, each cell-controlling bion in that projected bion-body resumes running its learned program LP_keep_this_bion_close_to_this_physical_atom. An example of a cell-controlling bion permanently leaving its cell is if that cell is no longer physically viable, because either that cell itself was destroyed by some poison or some external physical force, or that cell is part of a larger multicellular body that has died and is no longer maintaining the physical environment that cell needs, such as circulating oxygenated blood (as when we humans physically die and our heart stops beating and we stop breathing).

How does a Sleeping Cell-Controlling Bion stay with its Cell, and how does a Sleeping Bion in a Bion-Body stay with that Bion-Body?

When a bion is asleep, none of its learned programs are running (section 9.3). In the case of a cell-controlling bion that is currently with its cell because its learned program LP_keep_this_bion_close_to_this_physical_atom is currently running, when that cell-controlling bion falls asleep, how does that bion stay with its cell if that cell moves away from that bion during that bion’s sleep?

A similar problem is how a bion that is currently a part of a bion-body, such as a part of one’s afterlife bion-body (section 6.3) or a part of a Caretaker’s bion-body (section 7.6), stays with that bion-body if that bion-body moves while that bion is asleep. The answer to both these problems is to assume the following code in the computing-element program:

/*
If this_bion is not owned by a soliton, then assume the following code is executed by the computing-element program for this_bion when the computing-element program is about to put this_bion to sleep:
*/
if this_bion very recently called get_relative_location_of_one_physical_atom()
and this_bion is currently near whatever atom was specified by this_bion’s most recent call of get_relative_location_of_one_physical_atom()
then
For the duration of this_bion being asleep, keep this_bion close to that specified atom.
else if this_bion is currently close to one or more other bions
then
Select one of those nearby bions, and for the duration of this_bion being asleep, keep this_bion close to that selected bion.
end if

4 Experience and Experimentation

This chapter considers psychic phenomena and the related subject of meditation. The chapter sections are:

4.1 Psychic Phenomena
4.2 Obstacles to Observing Bions
4.3 Meditation
4.4 Effects of Om Meditation
4.5 The Kundalini Injury

4.1 Psychic Phenomena

Unlike the mathematics-only reality model, the computing-element reality model is tolerant of human experience, because much more is possible in a universe with intelligent particles. For example, the learned-program send() statement allows direct communication between two or more minds, and this communication does not use the physical senses nor physical substance to carry the messages that are sent from one mind to one or more other minds. Thus, ESP (extrasensory perception) is possible using the send() statement; and it is not only possible, it happens a lot in the human world. Given what was said in section 3.8, the owned bions of one’s soliton (awareness) are one’s mind, and a soliton can only send messages to, and receive messages from, its owned bions (its mind), and not to or from any other mind. The end result is that for humanity as a whole, probably the majority of all the send()-statement messaging between different human minds is not brought to the awareness of any of the humans involved (they remain unconscious of it). However, for some people in some cases, one’s mind might give one’s awareness (one’s soliton) a thought or feeling of some sort about some specific communication with one or more other minds.[25],[26],[27]

In contrast to the computing-element reality model, the mathematics-only reality model cannot accommodate ESP. With only common particles to work with, ESP cannot be explained, and the mathematics-only reality model states that ESP does not exist.

Besides ESP, there are many other human experiences that are denied by the mathematics-only reality model. However, these experiences are explained by the computing-element reality model. For example, out-of-body experiences, an afterlife, and communication with the dead are all allowed by the computing-element reality model. An afterlife is possible, because a soliton and its owned bions, and also the cell-controlling bions in one’s physical body, are elementary particles. In general, the breakdown of a structure leaves intact the elementary particles composing that structure. Because one’s human memories are located somewhere in the collective memory of those bions—the owned bions of one’s soliton—that collectively form one’s mind, one’s memories can also survive the death and destruction of one’s physical body. For the same reason, because intelligent particles have an existence separate from physical matter and are not dependent on physical matter, out-of-body experiences are possible. And also, because of the afterlife and the learned-program send() statement, communication with the dead is possible.

One psychic power that we don’t have, although popular in fiction, is a psychokinetic power that would allow one to move physical objects at a substantial distance from oneself, just by using one’s mind. The range of the learned-program statements that can manipulate physical matter is very short (estimated to be less than one-tenth of a millimeter), which means that distant movement of physical matter by one’s mind is impossible (see subsection 3.8.7).


footnotes

[25] Broadly, ESP (extrasensory perception) is perception by psychic means. Most often, ESP refers to the ability to feel what other people are thinking or doing. An example of ESP is the commonly reported experience of feeling when one is being stared at by a stranger: upon turning around and looking, the feeling is confirmed.

Precognition is a consequence of ESP. The most likely explanation for successful instances of precognition is non-physical messaging between minds using the learned-program send() statement. For example, a person feels that the phone is about to ring, and then the phone rings. When this happens, typically the caller and callee are already known to each other, being, for example, relatives or friends. The most likely explanation is that the two minds involved in the call unconsciously communicated between themselves that the call would happen, and the callee’s mind gave the callee’s awareness a feeling that the phone was about to ring.

Synchronicity or coincidence is another consequence of ESP. Because our minds can communicate with other minds by using the learned-program send() statement, without us having any conscious awareness of this communication, meaningful coincidences can be arranged with one or more other minds. For example, the minds of two persons can arrange between themselves a meeting without either person having any conscious knowledge that their two minds are going to bring them together. Depending on the detail of how the meeting is brought about, one or both persons may later see their meeting as extraordinary and/or very unlikely, and then explain it as destiny, or the will of God (actually, the will of their two minds).

Claims of time travel by psychic means—viewing alleged past or future events—are sometimes made, but are necessarily erroneous. The computing-element reality model does not allow time travel. In general, the passage of time is due to the continued processing by the underlying network of computing elements that fill the universe. Because all particles are just a consequence of that processing—and not its cause—there is nothing any particle or group of particles can do to alter the underlying network’s processing order (corresponding to the direction of time) and processing rate (corresponding to the flow of time). Thus, there is nothing we or any machine we might build can do, in a real and not an imaginary sense, to move forward or backward in time relative to the current processing order and processing rate of the underlying computing-element network that fills the universe.

[26] I believe the following is an example of ESP from my own experience: During my late thirties I once went to a psychic fair offering readings by professional psychics (I had seen an ad for this in the local newspaper). Interested in a personal demonstration, I selected one of the available psychics. To avoid helping her during the reading, I did not ask questions, give personal information, comment on her reading’s accuracy, or even look at her. Nevertheless, the reading she gave me was a personally convincing demonstration of direct communication between minds, where the communications were brought to awareness in the mind of the psychic.

After the reading was over, the psychic remarked that I was very easy to read, and that sometimes she gets very little or nothing from the person being read. The explanation follows: During a reading, bions in the psychic’s mind are receiving information communicated (sent) by bions in the mind of the person being read. If that person’s mind refuses to communicate, or is unable to, then that psychic draws a blank, and must either admit defeat or rely on some secondary means, such as interpreting tarot cards according to fixed rules, and/or making guesses based on whatever clues are available, such as a person’s age and appearance. Thus, a skeptic who wants “proof” that a psychic is fake can get that “proof” by unconsciously refusing to communicate, or by unconsciously communicating false information.

Psychic readings, when genuine, provide a means to consciously learn about hidden plans and/or expectations in one’s own mind, circumventing the normal paths to awareness which are restricted and heavily filtered. Channeling, when the source is not merely the channel’s own mind, is a closely related talent which many psychics have. When a psychic channels communications from another mind, such as from the mind of a dead person, the same direct communication between minds is taking place. For some psychics, channeling and doing a psychic reading are the same thing, in which the mind of a dead person acts as an intermediary who telepathically talks to the psychic and provides information about the person being read; the psychic then repeats more or less what the intermediary says.

Regarding the various props that psychics use, such as tarot cards, tea leaves, crystal balls, astrological charts, personal effects held by the psychic (psychometry), and such, working psychics have commented as follows: “I read tarot cards for people one-on-one, in person, or over the phone. They’re just a point of concentration. I could use a crystal ball or goat innards, but tarot cards are lighter than a ball and less messy than the goat innards!” (Cooper, Paulette, and Paul Noble. The 100 Top Psychics in America. Pocket Books, New York, 1996. p. 266), and, “Sometimes I use cards because then the person doesn’t become preoccupied with ‘Where the hell is she coming up with this stuff from?’ It’s easier to blame it on the cards.” (Ibid., p. 250). Regarding what is brought to awareness in the mind of the psychic, this depends on the psychic and the circumstances—or, more specifically, the received communications and the way those communications are processed in the mind of the psychic and then sent to that psychic’s awareness—but, in general, “pictures, sounds, and symbols that the psychic verbalizes” (Ibid., p. 297).

[27] Despite the apparent usefulness of ESP in general, overt displays of ESP are not widespread in human society, and it appears that different evolutionary forces are at work to suppress such psychic phenomena. For example, social forces are at work. In Europe during the Middle Ages, women who were overtly psychic were murdered as witches by the religious establishment.

Another factor, perhaps the dominating factor, that limits the conscious display of ESP, including conscious direct communication with another mind, is that having these capabilities, without perceptual confusion regarding the source of the perception, has a cost in terms of requiring an allocation of awareness-particle input channels (section 9.6). More specifically, for the awareness to have extra-sensory perceptions and simultaneously know that they are extra-sensory perceptions, either a separate allocation is needed for carrying these extra-sensory perceptions to the awareness, or, if the same allocation is used to carry both extra-sensory and sensory perceptions (for example, telepathic hearing and normal hearing carried by the same allocation), then a separate allocation is needed for carrying a feeling, sent to the awareness at the same time as the extra-sensory perception, that alerts the awareness that the current perception has an extra-sensory origin.


4.2 Obstacles to Observing Bions

Experimentation is an important part of the scientific method. Because bions are particles, one might expect to observe bions directly with some kind of physical instrument. However, observing bions with an instrument made of common particles is unlikely, because bions are selective about how they interact with common particles.[28] For example, if a bion, in effect, chooses to ignore a man-made detection instrument, then that detection instrument will not detect that bion.[29]

Being partly composed of bions, a man can be his own instrument for observing bions. However, because of the fragility of the physical body and its overriding needs, most people cannot directly observe bions without some kind of assistance, such as meditation.


footnotes

[28] Of course, the computing-element program decides all particle interactions (the only exception is the soliton, which has an agency of its own)—either directly in the case of common particles, or indirectly thru learned programs in the case of intelligent particles—and all particles are blocks of information manipulated by the computing elements that run the computing-element program. However, as a literary convenience, intelligent particles will sometimes be spoken of as having their own volition. This avoids excessive repetition of the details of the computing-element reality model.

[29] Regarding computation, note that ignoring other particles and not interacting with them is always easiest, because interaction requires computation, whereas non-interaction requires nothing in terms of computation. Thus, for example, bions passing thru a wall is computationally easier for those bions than being repelled by that wall. And bions remaining invisible to ordinary sight is computationally easier for those bions than having and running a learned program to reflect and/or absorb and/or emit light, and thereby be seen.


4.3 Meditation

The ancient books of Hinduism are collectively known as the Vedas. It is not known with any certainty when the Vedas were written, but typical estimates are that the oldest books were written 3,000 years ago.

Among the Vedas are the Upanishads, a collection of ancient writings which embody the philosophy of Hinduism. The Upanishads speak clearly about a means to experience psychic phenomena. It is an amazingly simple method: mentally repeat, over and over, the sound Om (rhymes with the words Rome and home). The o sound is short and the m sound is typically drawn out. Robert Hume, in his book The Thirteen Principal Upanishads, translates from the original Sanskrit:

The word which all the Vedas rehearse,
And which all austerities proclaim,
Desiring which men live the life of religious studentship—
That word to thee I briefly declare.
That is Om!

That syllable, truly, indeed, is Brahma!
That syllable indeed is the supreme!
Knowing that syllable, truly, indeed,
Whatever one desires is his!

That is the best support.
That is the supreme support.
Knowing that support,
One becomes happy in the Brahma-world.[30]

The above verse is from the Katha Upanishad. In this verse, praises are heaped upon Om. There is also a promise of desires fulfilled and happiness attained. The word Brahma is a technical term which occurs frequently in the Upanishads, and often refers to the experiences one can have as a result of using Om.

Taking as a bow the great weapon of the Upanishad,
One should put upon it an arrow sharpened by meditation.
Stretching it with a thought directed to the essence of That,
Penetrate that Imperishable as the mark, my friend.

The mystic syllable Om is the bow. The arrow is the soul.
Brahma is said to be the mark.
By the undistracted man is It to be penetrated.
One should come to be in It, as the arrow [in the mark].[31]

The above verse is from the Mundaka Upanishad. The syllable Om is identified as a bow in the fifth line, and in the first line the bow is called the great weapon. By this bow-and-arrow analogy, the power of Om is expressed. A straightforward interpretation of this verse is that the use of Om can launch the awareness into an out-of-body experience.

As the material form of fire when latent in its source
Is not perceived—and yet there is no evanishment of its subtle form—
But may be caught again by means of the drill in its source,
So, verily, both are in the body by the use of Om.

By making one’s own body the lower friction-stick
And the syllable Om the upper friction-stick,
By practicing the friction of meditation,
One may see the God who is hidden, as it were.[32]

The above verse is from the Svetasvatara Upanishad. It uses an outdated analogy, as did the previous verse. Before matches and lighters, man started fires by such means as rapidly spinning a stick of wood called a drill, the pointed end of which, surrounded by kindling, was pressed against a wooden block; the heat from the friction then ignited the kindling. The beginning of the verse is scientifically inaccurate; it is saying that fire exists in wood in some subtle form. This mistake is excusable, given that the Upanishads are prescientific writings.

The meaning of this verse starts with the fourth line. The first three lines make the claim that fire has both a visible form and a subtle hidden form. The remaining lines make the claim that there is something similarly hidden in the body. Normally, this something is hidden, as the writer of the verse supposed that fire is hidden in the stick. But by using Om, one can draw out this hidden something, and make it known to one’s own awareness. Referring to the computing-element reality model, this hidden something is the population of bions inhabiting the cells of the physical body.

Whereas one thus joins breath and the syllable Om
And all the manifold world—
Or perhaps they are joined!—
Therefore it has been declared to be Yoga.[33]

The above verse, from the Maitri Upanishad, defines yoga as involving the use of Om.


footnotes

[30] Hume, Robert. The Thirteen Principal Upanishads, 2nd ed. Oxford University Press, London, 1934. pp. 348–349.

[31] Ibid., p. 372. (The bracketed note on the last line is by the translator, Robert Hume.)

[32] Ibid., p. 396.

[33] Ibid., p. 439.


4.4 Effects of Om Meditation

If one wants to meditate using Om, and risk the injury described in the next section, then the typical procedure seems to be the following: Lie down comfortably on a bed—preferably at night before sleeping. The room should be quiet. Then, close your eyes and mentally repeat the sound Om over and over, at whatever seems like a normal pace; do not say the sound aloud. Avoid stray thoughts, and try not to feel the body. Movement should be avoided, but move if it will correct any physical discomfort. During the meditation, the attention has to settle somewhere, and a good place to focus the attention is the center of the forehead.

There is no guarantee that the use of Om will produce results. The results of Om meditation have a high threshold. A single sounding of Om is useless. Instead, it must be repeated many times. Many hours of using Om, spread over many days, may be necessary before there are any results. The following are some of the effects that may result from Om meditation:

  1. Upon waking from sleep, there is an enhanced clarity and frequency of dream remembrance.

  2. During sleep, there is lucid dreaming. A lucid dream is when one is conscious within what appears to be a surrounding dream world, and in that dream world one can freely move about. Chapter 5 explains lucid dreams as out-of-body experiences.

  3. During sleep, there is an onset of consciousness and a direct perception of a nonphysical body. Often, this bion-body, which is a body composed solely of bions, is either coming out of, or reentering, one’s physical body. This nonphysical body—which is capable of movement independent of the physical body—convinces those who experience it that they are truly exterior to their physical body.

  4. Something is felt in the body during the Om meditation. This may be a vibration, or a loss of sensation in the limbs, or a shrinking feeling.

If one is going to have an out-of-body experience, the best time for it is when one is asleep, because during sleep one’s physical body has the lowest need for the control provided by one’s awareness/mind (defined in chapter 5). Thus, if one’s awareness/mind were to wander off and leave the physical body alone during sleep, then most likely the physical body will remain safe without it.

4.5 The Kundalini Injury

Although Om meditation has the potential to promote unusual experiences, it also has the potential to cause a very painful injury. Om meditation, and meditation in general, can, after long use, cause the devastating injury known as kundalini. This injury, which appears to be nonphysical, happens during the actual meditation. Briefly, the cause of the injury is too much meditation. More specifically, a possible explanation is that excessive meditation can cause a neuron-inhabiting bion in the lower spine to self-program, causing an alteration or corruption in one of its learned programs; and the ultimate consequence of this reprogramming is the burning pain of the kundalini injury.

The details of the kundalini injury are as follows: At some point during meditation, and without any warning, there is a strong sensation at the spine in the lower back, near the end of the spine. There is then a sensation of something pushing up the spine from the point of the original sensation. How far this sensation moves up the spine is variable. Also, it depends on what the person does. He should immediately get up, move around, and forswear future meditation. Doing so can stop the copying of the learned-program corruption, if that is what the felt movement up the spine is: a side effect of the corruption-originating bion copying to neighboring neuron-inhabiting bions, and those neighbors copying to their neighbors, and so on up the spine.

The onset of the pain is variable, but it seems to follow the kundalini injury quickly—within a day or two. Typically, the pain of the kundalini injury is a burning sensation across the back—or at least a burning sensation along the lower spine—and the pain may also cover other parts of the body, such as the head. The pain is sometimes intense. It may come and go during a period of months or years and eventually fade away, or it may burn incessantly for years without relief.

The common reaction by the sufferer to the kundalini injury is bewilderment. Continued meditation seems to aggravate the kundalini injury, so the typical sufferer develops a strong aversion to meditation.

The Indian Gopi Krishna suffered the kundalini injury in December 1937 at the age of 34. He had a habit of meditating for about three hours every morning, and he did this for seventeen years. Apparently, he did not practice Om meditation. Instead, he just concentrated on a spot centered on his forehead. In his case the sensation rose all the way up his spine and into his head. The pain he suffered lasted several decades.

The Indian Krishnamurti, who had been groomed as the World Teacher of the Theosophical Society, suffered the kundalini injury in August 1922 at the age of 27. He had been meditating. His suffering lasted several years, and the pain would come and go. In one of his letters of 1925, Krishnamurti wrote, “I suppose it will stop some day but at present it is rather awful. I can’t do any work etc. It goes on all day & all night now.”[34] Such are the hazards of meditation.


footnotes

[34] Lutyens, Mary. Krishnamurti: The Years of Awakening. Avon Books, New York, 1983. p. 216.


5 Out-of-Body Travels

Regarding section 3.8, bions can move themselves by using the move_this_bion() learned-program statement. However, in the case of a soliton and its owned bions, it’s assumed that the computing-element program keeps a soliton and its owned bions together, limiting to a short distance how far apart any of them can move from each other. For convenience, because they are always kept together regardless of where they are, an awareness (a soliton) and its mind (that soliton’s owned bions) will often be referred to simply as awareness/mind or soliton/mind. The awareness/mind phrase emphasizes more the soliton as the seat of our consciousness, and the soliton/mind phrase emphasizes more that the seat of our consciousness is itself an intelligent particle.

This chapter considers two kinds of out-of-body experiences: lucid-dream out-of-body experiences and bion-body out-of-body experiences. The chapter sections are:

5.1 Internal Dreams and External Dreams
5.1.1 The Soliton Directory
5.1.2 External Dreams aka Lucid Dreams
5.2 Movement when Out-of-Body
5.2.1 Out-of-Body Movement during a Lucid Dream
5.2.2 Vision and Movement during my Bion-Body Projections
5.2.3 How One’s Projected Bion-Body Maintains its Human Shape
Moving my Projected Bion-Body’s Limbs
5.3 Lucid-Dream Projections ~ Oliver Fox
5.4 Bion-Body Projections ~ Sylvan Muldoon

5.1 Internal Dreams and External Dreams

Dreams need no introduction, because dreaming is an experience most people have. However, there has long been the question as to the location of dreams. Some past cultures believed in a separate dream world that exists around the dreamer: when a person dreams, that person’s awareness/mind is moving about in that external dream world. Call this kind of dream an external dream. What is commonly known as a lucid dream is an external dream. The alternative is that dreams are spatially confined to the dreamer’s mind: call this kind of dream an internal dream.

The mathematics-only reality model cannot explain external dreams, and according to that reality model all dreams are internal. The computing-element reality model allows both kinds of dreams, because one’s awareness/mind is not physical matter, and a running learned program is needed to keep one’s awareness/mind in one’s physical head when one is awake in one’s physical body (see the description of the learned program LP_maintain_AM_position_close_to_one_bion in section 5.2).

For an internal dream, the imagery and sounds of that dream are generated by one’s own mind, without using substantial sensory input. It is certain that the human mind can generate high-quality images and sounds without sensory input, because most people can imagine or recall low-quality (and for some people, higher or high-quality) images and sounds while awake, and psychedelics such as LSD and DMT can provoke a torrent of high-quality images while the person is awake. One’s visual imagination generates the imagery for an internal dream (see the discussion of the visual imagination in section 3.1). And, because one is asleep while having an internal dream, the generated imagery is not made faint.

I suppose that for most people, at least in terms of what they remember after they awake from sleep, internal dreaming is the rule, and external dreaming is the exception. However, given the possibility of one’s mind messaging with other minds using the learned-program send() statement, a given internal dream can incorporate communicated information from other minds, without the person having any conscious knowledge of this. Thus, even an internal dream can have an external component.

5.1.1 The Soliton Directory

An obvious feature of our minds is the ability to recognize people known to us, and associated with that recognition of a person is knowledge of one’s relationship with that person. For us humans, one can assume that this recognition ability extends to send() messages received by one’s own mind from another person’s mind (in this context, “another person” is a soliton/mind other than oneself, typically but not necessarily another human). Recall that, as stated in subsection 3.8.4, every message sent by a bion includes a copy of that bion’s complete identifier block, and, if that bion is owned by a soliton, included in that bion’s identifier block is the unique identifier of that owning soliton. Within the learned programs of one’s mind, to identify the person who sent a message received by one’s mind, assume there is a lookup table named soliton directory, and its lookup_key is the unique identifier of a soliton (only one table entry per soliton). For simplicity, assume that among the learned programs of one’s mind, there is one central soliton directory, and each entry in this soliton directory, besides its lookup_key, includes the following:

  1. The unique identifier of that person’s MESSAGING_WITH_OTHER_OWNED_MINDS bion (this bion is described below), so that one’s mind can send messages directly to that person’s mind.

  2. A number, named total_relationship_score, that summarizes in a single number one’s past interactions with that person (described later in this subsection).

Given the importance of being able to identify a person, and also the simplicity of being able to identify a person by the unique identifier of that person’s soliton, and also the importance of the soliton—one can, if one wants, see the purpose of our universe as being a playground for all the awarenesses in existence—it may be that the data structure for the soliton directory is predefined in the computing-element program, along with supporting routines. However, beyond noting this possibility it is not considered further.

To simplify messaging between owned minds (an owned mind is the owned bions of a soliton), let’s assume that each of our minds has a single owned bion designated to both send and receive any messages between one’s own mind and any other owned mind(s) (presumably, other parts of one’s mind would construct the messages to be sent by this bion, and process any messages received by this bion). Also, regarding subsection 3.8.1 and the user-settable identifiers block, let’s assume that in the case of owned bions, the integers of the user-settable identifiers block are used to subdivide one’s mind into different functional parts, and that this single send/receive bion is identified in one’s mind by being the only bion in one’s mind whose first integer in its user-settable identifiers block, USID_1, is set to MESSAGING_WITH_OTHER_OWNED_MINDS.

With the assumptions of the previous paragraph, define the phrase owned-minds broadcast message as being any message sent by the send() statement that identifies the intended recipient(s) of that message by setting the user_settable_identifiers_block parameter as follows: the first integer in user_settable_identifiers_block is set to MESSAGING_WITH_OTHER_OWNED_MINDS and the other integers are set to null. When an owned-minds broadcast message is sent, each owned mind within range of that sent message, assuming its MESSAGING_WITH_OTHER_OWNED_MINDS bion is awake, will receive that sent message (the range of a sent message is determined by that sent message’s message_instance.send_distance value).
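
As a toy illustration of the owned-minds broadcast message, here is a short sketch in Python showing how the user_settable_identifiers_block parameter might be set and matched. The number of integers in the block (four here) and the value of the constant are assumptions for illustration only:

# Hypothetical stand-in value for the MESSAGING_WITH_OTHER_OWNED_MINDS constant.
MESSAGING_WITH_OTHER_OWNED_MINDS = 1

def make_owned_minds_broadcast(message_text, send_distance):
    """Build a broadcast message: the first integer (USID_1) is set to the
    constant, and the other integers of the block are null (None)."""
    return {
        "user_settable_identifiers_block":
            [MESSAGING_WITH_OTHER_OWNED_MINDS, None, None, None],
        "message_text": message_text,
        "send_distance": send_distance,
    }

def is_intended_recipient(receiving_bion_usids, message):
    """A receiving bion qualifies if every non-null integer in the message's
    user_settable_identifiers_block matches that bion's own block."""
    wanted = message["user_settable_identifiers_block"]
    return all(w is None or w == have
               for w, have in zip(wanted, receiving_bion_usids))

msg = make_owned_minds_broadcast("hello, nearby minds", send_distance=100)
print(is_intended_recipient([MESSAGING_WITH_OTHER_OWNED_MINDS, 0, 0, 0], msg))  # True
print(is_intended_recipient([2, 0, 0, 0], msg))  # False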

Thus, by sending an owned-minds broadcast message, an owned mind can send a message to all the other owned minds that are within range of that sent message, regardless of whether or not the sending owned mind currently knows anything about the within-range recipient(s) of that sent message. In effect, among other things, the owned-minds broadcast message is a way to initiate contact with strangers (in this section, a stranger is any soliton/mind for which there is no entry in one’s soliton directory). Whether or not a stranger replies back in some way to one or more received owned-minds broadcast messages, depends on the stranger’s reaction, if any, to those messages.

As a specific example of how owned-minds broadcast messages can be useful during the lucid-dream stage of the afterlife (section 6.3), a given person who wants to speak to, or sing to, or generate music for, a nearby audience which may include strangers, can send a stream of short-range owned-minds broadcast messages with enough range to cover that intended audience. However, note that, in general, there is no guarantee that the sent owned-minds broadcast messages will be consciously listened to by any of the recipients, because that depends on the recipients.

In general, when a message is received by one’s mind from the owned mind of another person, regardless of whether or not that received message is an owned-minds broadcast message, assume that the initial processing of that received message includes a search of one’s soliton directory for the unique identifier of that person’s soliton. If found, then that person is identified and the other data stored in that soliton-directory entry for that person can be referenced and/or used as needed when processing that received message. However, if there is no entry in one’s soliton directory for that person, then assume that there is an initial step that decides whether or not this received message should be discarded, by using the value of distance_between_sender_and_receiver that was computed for that message when it was received (see the code for examine_a_message_instance() in subsection 3.8.5). In this context, distance_between_sender_and_receiver is the distance between that sender’s soliton/mind and one’s own soliton/mind. If distance_between_sender_and_receiver is a far distance, then that is a good reason to discard that received message, especially if that received message is an owned-minds broadcast message. But if that distance_between_sender_and_receiver is a very short distance, then that would be a good reason to, in effect, consider that received message further by examining its message text.
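
A toy sketch in Python of this initial processing follows. The soliton directory is modeled as a dictionary keyed by a soliton’s unique identifier, and the distance threshold is an arbitrary stand-in value:

# Arbitrary stand-in threshold for what counts as "a far distance".
DISCARD_IF_FARTHER_THAN = 1000

def initial_processing(soliton_directory, sender_soliton_id, distance):
    """Decide how to handle a received message: from a known person, use the
    stored entry; from a far-away stranger, discard the message; from a
    nearby stranger, examine the message text further."""
    entry = soliton_directory.get(sender_soliton_id)
    if entry is not None:
        return ("identified", entry)
    if distance > DISCARD_IF_FARTHER_THAN:
        return ("discarded", None)
    return ("examine_further", None)

directory = {901: {"messaging_bion_id": 77, "total_relationship_score": 42}}
print(initial_processing(directory, 901, 5))     # ('identified', {...})
print(initial_processing(directory, 555, 5000))  # ('discarded', None)
print(initial_processing(directory, 555, 3))     # ('examine_further', None)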

Assuming there is no entry in one’s soliton directory for the person who sent that received message, and that received message was not discarded by the initial step described in the previous paragraph, then the learned program that manages one’s soliton directory can add that person’s soliton to one’s soliton directory if there is sufficient accumulated feedback from one’s own soliton to justify doing so (accumulated feedback that would result in a sufficiently large positive or negative number being the starting value of the total_relationship_score for that person’s entry in one’s soliton directory, if that entry were made).

For example, if during the lucid-dream stage of the afterlife one is consciously listening to a stranger who is sending a stream of owned-minds broadcast messages, and one consciously likes what that stranger is sending (for example, a lecture, or singing, or music), and one’s accumulated conscious reaction to that stranger is strong enough (for example, after listening for a while to that lecture, singing, or music), then the learned program that manages one’s soliton directory will make an entry in one’s soliton directory for that stranger, at which point that stranger will no longer be a stranger. Note: to make an entry in one’s soliton directory, besides having an initial value for the total_relationship_score, the unique identifier of that stranger’s soliton and the unique identifier of that stranger’s MESSAGING_WITH_OTHER_OWNED_MINDS bion are also needed. Both of these unique identifiers are in each of the received messages from that stranger, in the message_instance.sender's_identifier_block (the stranger’s MESSAGING_WITH_OTHER_OWNED_MINDS bion is the bion that sent those received messages, and its identity as such can be confirmed by checking that the USID_1 value of the user-settable identifiers block in that message_instance.sender's_identifier_block is MESSAGING_WITH_OTHER_OWNED_MINDS).

Another example regards the people one encounters during one’s physical human life: If one has conscious interactions with a person who is not currently in one’s soliton directory, and one’s accumulated conscious interactions with that person have resulted in a large enough positive or negative total_relationship_score for that person to be added to one’s soliton directory, then perhaps the human mind has a learned program that will send a short-range owned-minds broadcast message when that other person is nearby, asking, in effect, for that other person’s mind to send a reply message. And if one’s mind then receives that wanted reply message from that person, then the learned program that manages one’s soliton directory would make an entry in one’s soliton directory for that person.

Besides having programming for adding a new entry, the learned program that manages the soliton directory would also have programming to delete an older entry when room is needed to add a new entry. I don’t know how many entries that managing learned program allows in the soliton directory before it will delete an older entry, but my guess is a soliton-directory size that can hold many thousands of entries. Regarding the algorithm for selecting which entry to delete, my guess is that this algorithm considers several factors, including but not limited to how old an entry is, what its total_relationship_score is, and how long ago the last significant update to that total_relationship_score was. For a typical person, one’s soliton directory probably includes many people from everyday life, such as one’s parents, one’s children if any, other relatives of significance, friends, workplace acquaintances, perhaps one or more persons currently deceased, and perhaps one or more non-human animals that have a soliton/mind, such as one’s pet dog or pet cat if any.

For each entry in one’s soliton directory, representing a specific soliton/mind: Its total_relationship_score represents in a single number a summation of how that soliton/mind has interacted with one’s own soliton/mind over the duration of that entry. Each time one’s soliton/mind has an interaction with that other soliton/mind, and that interaction gets a conscious reaction from oneself (a reaction from one’s soliton), the learned program that manages one’s soliton directory either adds to that total_relationship_score (if a positive conscious reaction) or subtracts from it (if a negative conscious reaction), and the number added or subtracted is proportional to how good or bad, respectively, one’s conscious reaction to that interaction was. Thus, that total_relationship_score, whether positive or negative and by how much, summarizes in a single number one’s past relations with that soliton/mind, and can be the basis for either wanting to continue with, or wanting to avoid, future relations with that soliton/mind if there is that opportunity in one’s next human life, even though one’s mind after reincarnation may no longer have any memory of the past-life events that contributed to that total_relationship_score.
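
As a toy illustration of this bookkeeping, the Python sketch below adds a signed amount to a total_relationship_score after each consciously felt interaction. For simplicity, this sketch creates a directory entry immediately, whereas in the description above an entry is only made once accumulated feedback is large enough; the names and numbers are hypothetical:

def update_relationship_score(soliton_directory, soliton_id, conscious_reaction):
    """conscious_reaction is a signed number: positive for a good conscious
    reaction, negative for a bad one, with a magnitude proportional to how
    good or bad that conscious reaction was."""
    entry = soliton_directory.setdefault(
        soliton_id, {"total_relationship_score": 0})
    entry["total_relationship_score"] += conscious_reaction
    return entry["total_relationship_score"]

directory = {}
update_relationship_score(directory, 901, +5)  # a pleasant interaction
update_relationship_score(directory, 901, -2)  # a mildly unpleasant one
print(directory[901]["total_relationship_score"])  # 3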

In subsection 5.2.1, the soliton directory is used by one’s soliton/mind during a lucid dream to locate and move to where a soliton/mind known to oneself is, and after the death of one’s physical body this ability will also be useful during the lucid-dream stage of the afterlife (section 6.3), because one can reunite in the afterlife with persons important to oneself who are also currently in the lucid-dream stage of the afterlife and have yet to reincarnate. Also, I see no reason for one’s soliton directory to have its entries deleted just because one has lost one’s physical body (has died) or has reincarnated in a new physical body. Instead, there are just specific additions to, and deletions from, one’s soliton directory over the course of time, without regard to any of the major transition points in one’s life cycle, such as the death of one’s physical body, one’s time in the afterlife, and one’s reincarnation. And, for this reason, one’s association with a particular person (soliton/mind) can span across several lifetimes or even many lifetimes.

Also, in general, memories are deleted from memory storage over time, to make room for new memories, and a lot of this memory loss probably happens for a typical person around the time of reincarnation, to make room for new memories in the new life. In addition, a lot of this memory loss is probably already happening during the lucid-dream stage of the afterlife, because that is a new life unto itself, with its own need for new memories in a very different environment than when one was physically embodied in the physical world. The lucid-dream stage of the afterlife may be when one forgets the majority of one’s memories of one’s physical life. And soon after reincarnating is probably when a typical person loses his memories of his time in the afterlife and also loses whatever remains of his memories of his previous life in a physical body.

As long as a person’s entry remains in one’s soliton directory, the total_relationship_score for that person is a summary of one’s past experiences with that person, and this will be useful after the loss of past-life memories, because there is always the possibility of reencountering that person’s soliton/mind sometime in the future when one is again in physical embodiment or again in the lucid-dream environment. Also, a future reencounter can be planned: For example, two persons who have a large positive total_relationship_score for each other could agree in the afterlife to meet again when they are both human again, and then each of them chooses parents so that they will both live in the same country and speak the same language, and then it is up to their unconscious minds to arrange, sometime in the future, a physical meeting between the two. However, because human life, in general, is filled with so much unpredictability, including the possibility of hazards and circumstances beyond one’s control, an actual physical meeting between the two might not actually happen, regardless of what they agreed to before reincarnating. A more certain way to guarantee a physical meeting in the next human life is to reincarnate into the same family or extended family, as siblings or close relatives, but this option, unless the culture allows it (such as cousin marriages in some cultures), would not be the choice of married couples or lovers who want to be, once again, a married couple or lovers with each other.

5.1.2 External Dreams aka Lucid Dreams

Briefly, what happens during an external dream, also known as a lucid dream (“lucid” because one is fully conscious during a lucid dream), is that one’s soliton/mind leaves behind, temporarily, both one’s physical body and also all the cell-controlling bions of one’s physical body. In this separated state, one’s soliton/mind can move freely in any direction, at various speeds including very fast, and can also locate and move to where other persons who are currently also just a soliton/mind are, so as to interact with them, and can also locate and move back to one’s physical body at the end of the lucid dream.

The primary interaction with others during a lucid dream is talking with them, and even though one typically sees an appearance of the other person or persons as dressed humans, that appearance is mostly static, with little or no movement of that appearance (for example, while one hears someone in front of oneself talking to oneself, there is no seen movement in their apparent face: no moving of the mouth or lips). Thus, one can assume that talking and being talked to during a lucid dream is just direct communication between minds using the learned-program send() statement. What is heard as one’s own talking and the talking of others sounds to one’s awareness very much like talking with another person when in our physical bodies. However, because the lucid-dream environment is so very different from being in one’s physical body, there is no danger during a lucid dream of thinking that one is in one’s physical body just because lucid-dream conversations sound the same as when conversing in one’s physical body. In terms of the messages one’s mind sends to the soliton (awareness) during a lucid-dream conversation, the final construction of those messages probably uses the same learned programs that are used to communicate conversations to the awareness when in one’s physical body.

Regarding what is seen during a lucid dream, both based on my own experience with lucid dreaming and also the written experiences of other lucid dreamers, it really does seem that the world of lucid dreaming has its own class of common particles that are very different from the common particles of our physical world. For convenience, call the common particles of physics p-common particles (this includes both the elementary particles of physics such as electrons, quarks, and photons, and also, for convenience, the atoms of physical matter), and call the common particles observed during a lucid dream d-common particles. These d-common particles do not interact with p-common particles, and these two classes of common particles are, in effect, invisible to each other (invisible to each other because the computing-element program has no programming for interactions between p-common particles and d-common particles; the one exception would be gravity, assuming d-common particles have a nonzero mass). In terms of what can be seen, the lucid-dream world and the physical world are separate from each other, but the lucid-dream world that we humans have access to exists in the same large volume of space that our physical world exists in, since both d-common particles and p-common particles are just data manipulated by the computing elements, and the computing elements themselves are the space in which both common particles and intelligent particles exist.

Regarding d-common particles, my own experience and the experience of other lucid dreamers is that our minds, based on what is seen and experienced during a lucid dream, can directly see and, in effect, both create and destroy objects composed of d-common particles (my vision during lucid dreaming was always in color, and overall the lucid-dream world is colorful, and both the man-made objects and the appearances of others always looked smooth and continuous and were never grainy looking). Thus, there are learned-program statements that can see, create, destroy, and manipulate d-common particles. The rule given in subsection 3.8.7, besides applying to physical atoms, also applies to d-common atoms:

The distance at which a bion can directly manipulate one or more d-common atoms by calling a learned-program statement, cannot exceed the maximum distance at which that bion can directly see any and all of the d-common atoms surrounding that bion by calling the get_relative_locations_of_d_common_atoms() learned-program statement.

Subsection 3.8.7 also gives reasons for why the learned-program statements for directly seeing and manipulating physical matter have a very short range, estimated by me as being less than one-tenth of a millimeter (less than 1/250th of an inch). In all my lucid dreams, which typically included seeing other people, I always just assumed that the size of the persons that I was seeing was the same size as their physical bodies (or, in the case of seeing someone currently dead, assuming that their seen appearance had the same size that their physical body had). However, after all the thinking that went into subsection 3.8.7 regarding the reasons for a limited range for seeing p-common atoms (physical atoms), and realizing there would also be a limited range for seeing d-common atoms, it occurred to me that because the only part of myself present in a lucid dream was my soliton/mind (the intelligent particles of which are, in effect, confined to a sphere about an inch in diameter), perhaps when I was seeing the appearance of another person in a lucid dream, assuming that appearance was constructed from d-common atoms by that person’s mind, that seen appearance was only a few inches in height, or perhaps substantially less than that.

Let get_relative_locations_of_d_common_atoms() be the learned-program statement for seeing d-common atoms. What is the maximum distance at which a bion can see d-common atoms by calling get_relative_locations_of_d_common_atoms()? In a lucid dream one can see at a good distance from oneself, and if a seen person in a lucid dream has the same height as their physical body, then my very rough estimate is that one can see in a lucid dream out to a distance of about 100 feet (about 30 meters). However, the actual size of the persons seen in a lucid dream may be much smaller than the size of their physical bodies. For the sake of being able to compute some numbers and compare d-common atoms with physical atoms, let’s consider two different values for the maximum distance at which a bion can see d-common atoms by calling get_relative_locations_of_d_common_atoms():

Case 1: the maximum distance is 1 foot (about 0.3 meters), which fits if the persons seen in a lucid dream are actually much smaller than their physical bodies.

Case 2: the maximum distance is 100 feet (about 30 meters), which fits if a person seen in a lucid dream has the same height as their physical body.

In either case, because even the shorter distance of 1 foot is much greater than the maximum distance at which a bion can see physical atoms by calling get_relative_locations_of_physical_atoms(), and given subsection 3.8.7, it follows that in a given volume of space on our Earth, many more physical atoms can fit in that volume of space than d-common atoms. Presumably, a d-common atom, like a low-energy physical atom, is, at any instant in time, just data in a single computing element (note that in a given “instant in time”, the computing element currently holding the data of an atom may be in the process of transferring that atom’s data to an adjacent computing element, moving that atom thru space). Also presumably, the computing-element program limits how close together any two d-common atoms can be, with the end result that a lot more physical atoms can fit in a given volume of space than d-common atoms.

Comparing d-common atoms with physical atoms, one can approximate the difference between how closely d-common atoms can be packed together, and how closely physical atoms can be packed together, as follows: Assume 1/250th of an inch is how far distant from the calling bion that physical atoms can be, and still be seen by calling the get_relative_locations_of_physical_atoms() learned-program statement. And assume either 1 foot or 100 feet (the two cases given above) is how far distant from the calling bion that d-common atoms can be, and still be seen by calling the get_relative_locations_of_d_common_atoms() learned-program statement. Also, the volume of 3D space viewed by these atom-seeing learned-program statements is—given the message-transmission algorithm in subsection 3.8.5—the volume of a sphere.

The volume of a sphere is the cube of the radius, times a small constant (approximately 4.19) which we will ignore here. For the three distances given in the previous paragraph, 1/250th of an inch, 1 foot which is 12 inches, and 100 feet which is 1200 inches, the three volumes, measured in cubic inches, are (1 ÷ 250)³, (12)³, and (1200)³, respectively, which is 0.000000064, 1728, and 1,728,000,000, respectively. Regarding the limit—1/250th of an inch—on how far away physical atoms can be from the calling bion and still be seen by calling get_relative_locations_of_physical_atoms(), and also regarding the limit—either 1 foot or 100 feet—on how far away d-common atoms can be from the calling bion and still be seen by calling get_relative_locations_of_d_common_atoms(): If we assume that the reason for these two distance limits is the limited computation speed of a computing element and the need to avoid being overwhelmed by too many replies in too short a time, then, because the total number of replies to the sent message is, at most, the total number of atoms—either physical atoms in the case of calling get_relative_locations_of_physical_atoms(), or d-common atoms in the case of calling get_relative_locations_of_d_common_atoms()—in the spherical volume of space reached by that sent message, one can conclude the following: If the maximum distance at which a bion can see d-common atoms by calling get_relative_locations_of_d_common_atoms() is 1 foot, then, in a given volume of space in our world, about (1728 ÷ 0.000000064) = 27 billion (2.7×10¹⁰) times more physical atoms can fit in that volume of space than d-common atoms. And if instead the maximum distance at which a bion can see d-common atoms by calling get_relative_locations_of_d_common_atoms() is 100 feet, then, in a given volume of space in our world, about (1,728,000,000 ÷ 0.000000064) = 27 million billion (2.7×10¹⁶) times more physical atoms can fit in that volume of space than d-common atoms.
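
These ratios are quick to check in Python (a check of the above arithmetic; distances are in inches, and the ignored 4.19 constant cancels out of each ratio):

r_physical = 1 / 250            # seeing range for physical atoms, in inches
r_d_short, r_d_long = 12, 1200  # 1 foot and 100 feet, in inches

print(f"{r_d_short**3 / r_physical**3:.1e}")  # prints 2.7e+10
print(f"{r_d_long**3 / r_physical**3:.1e}")   # prints 2.7e+16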

To operate fully in the lucid-dream world, one’s mind needs several different learned programs that together do the following:

  1. See the surrounding lucid-dream world, by calling get_relative_locations_of_d_common_atoms() and constructing images from the replies.

  2. Construct, maintain, and eventually remove one’s own appearance composed of d-common atoms, using the learned-program statements for creating, destroying, and manipulating d-common atoms.

  3. Communicate with other minds, such as by telepathic talking, using the learned-program send() statement.

  4. Locate and move to where other soliton/minds are, and, at the end of the lucid dream, locate and move back to one’s physical body (subsection 5.2.1).

Based on my own experience with lucid dreams, the lucid-dream world is devoid of lower forms of life: no trees or plants to see, no birds or insects to see, no fish or small animals to see. Perhaps the computing-element program only allows the mind of a soliton/mind—in other words, only allows owned bions—to call the learned-program statements for seeing, creating, destroying, and manipulating d-common atoms. In my approximately 400 lucid-dream projections, only in one lucid dream did I see a non-human animal, a tiger, mentioned in section 10.1. Although I had encountered two different pet cats during a few of my bion-body projections (at the time of each encounter, both that pet cat and myself were living in the same house; see subsection 5.2.2 and section 10.1 for some detail of those pet-cat encounters), I never encountered any pet cats during my lucid-dream projections. Perhaps the pet-cat mind lacks one or more of the learned programs needed to fully operate in the lucid-dream world, or alternatively, because there is a lot of self-segregation that goes on in the lucid-dream world, even if pet cats have the learned programs needed to fully operate in the lucid-dream world, it may be that they only move to where other such cats are, and present their constructed appearance there.

My guess is that most if not all humans have the learned programs needed to fully operate in the lucid-dream world, but for those other animal species whose members each have a soliton/mind, with the exception of that one tiger encounter, I don’t know. This question about the presence or absence of other animal species in the lucid-dream world is probably answered during the lucid-dream stage of the afterlife for any humans curious about it, because the currently dead human inhabitants who live there full-time, typically for many years before they reincarnate, are free to roam about looking for other animal species in the lucid-dream world, and they are free to report their findings to others who are interested in this question. In fact, it is probably standard lore in at least some of the human lucid-dream afterlife societies as to which kinds of animals, if any, are also in the lucid-dream world, and what kinds of interactions, if any, are possible with them or are reported to have happened with them. Also, regarding the Caretakers (section 7.6), if one assumes that the life cycle of a Caretaker includes a stage without its bion-body where it’s just a soliton/mind for an extended period of time before reincarnating into a new bion-body, then that standard lore may also include accounts of interactions with Caretaker afterlife societies in the lucid-dream world.

5.2 Movement when Out-of-Body

During both lucid dreams (section 5.3) and bion-body projections (section 5.4), one’s awareness/mind is often moving in 3D space, being able to move in any direction, and moving at a variety of speeds that range from very slow to very fast, and these movements, when they happen, can be consciously decided or unconsciously decided. What I am saying here about movement during lucid dreams and bion-body projections is based on my own experience, although the same is said about these movement abilities by others in the out-of-body projection literature.

I say in chapter 10 that I’ve had about five hundred out-of-body projections: about four-fifths were lucid dreams during which my awareness/mind was separate from my physical body, and about one-fifth were bion-body projections during which my awareness/mind was separate from my physical body but was situated in the head of a ghost-like body that was shaped like my human body. I am confident that that ghost-like body was composed of cell-controlling bions temporarily withdrawn from cells in my physical body, because during those bion-body projections I was often fully conscious both when that ghost-like body withdrew from my physical body and later when that ghost-like body reentered my physical body ending that out-of-body experience (I also experienced during many of my bion-body projections the brief returns to the physical body that other bion-body projectionists have explained as a recharging of the projected body so that the projection could continue, which I explain in subsection 5.2.3 as being the return to their cells of those cell-controlling bions in the projected bion-body whose allowed time away from their cells is over or will soon be over, and their replacement with other cell-controlling bions that can currently leave their cells and join the projected bion-body).[35]

Regarding awareness/mind movement when out-of-body, how is this movement coordinated between a soliton and its owned bions? Because the programming of our minds is divided into different functional parts, there is probably a separate group of owned bions in one’s mind whose learned programs are specialized to handle awareness/mind movement, with one or more of those bions using the move_this_bion() learned-program statement to, in effect, move the awareness/mind. When out-of-body, those learned programs take into account different inputs, including what the soliton wants in terms of movement (thus, sometimes one has conscious control over one’s movement out-of-body and sometimes one doesn’t).

Among the learned programs that handle awareness/mind movement, there is a learned program that keeps our awareness/mind within our physical head when we are awake in our physical body, and this same learned program also keeps our awareness/mind within our bion-body head during a bion-body projection. For convenience because it is referenced further below, this learned program is named LP_maintain_AM_position_close_to_one_bion (LP is short for learned program, and AM is short for awareness/mind). When running, this learned program, whose input is the unique identifier of a bion, calls many times per second the get_relative_location_of_bion_uid() statement (detailed in subsection 5.2.1), and depending on the returned distance and direction to that bion, one’s awareness/mind is moved closer to that bion as needed so as to stay very close to that bion (by “very close” my guess is a distance of less than a millimeter, which is less than 1/25th of an inch). Regarding when LP_maintain_AM_position_close_to_one_bion is running:

When one is awake in one’s physical body, LP_maintain_AM_position_close_to_one_bion is running, with its input being the unique identifier of a brain bion that is near the center of one’s physical head.

During a bion-body projection, LP_maintain_AM_position_close_to_one_bion is running, with its input being the unique identifier of a bion in one’s bion-body head (subsection 5.2.2).

During a lucid dream, LP_maintain_AM_position_close_to_one_bion is not running, because one’s soliton/mind has temporarily left behind both one’s physical body and the cell-controlling bions of one’s physical body, and moves freely (subsection 5.2.1).

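Below is a minimal sketch, in Python, of what this maintain-position loop might look like (the poll rate, the send distance, and the stub bodies are assumptions; the two stubbed functions stand in for the learned-program statement get_relative_location_of_bion_uid(), detailed in subsection 5.2.1, and for the awareness/mind’s movement ability):

import time

VERY_CLOSE = 0.04        # inches; "less than a millimeter" is the text's guess
POLLS_PER_SECOND = 50    # "many times per second"; the exact rate is a guess
SHORT_SEND_DISTANCE = 6  # inches; an assumed send distance for a nearby bion

def get_relative_location_of_bion_uid(bion_uid, use_this_send_distance):
    # Hypothetical stub for the learned-program statement of the same name:
    # returns (distance, direction_unit_vector) on success, or None on failure.
    return (0.0, (0.0, 0.0, 0.0))

def move_awareness_mind(direction, distance):
    # Hypothetical stub: move the awareness/mind the given distance
    # along the given direction.
    pass

def lp_maintain_am_position_close_to_one_bion(anchor_bion_uid):
    # Poll the anchor bion's relative location and step the awareness/mind
    # toward it whenever it drifts outside the very-close radius.
    while True:
        reply = get_relative_location_of_bion_uid(anchor_bion_uid, SHORT_SEND_DISTANCE)
        if reply is not None:
            distance, direction = reply
            if distance > VERY_CLOSE:
                move_awareness_mind(direction, distance - VERY_CLOSE)
        time.sleep(1 / POLLS_PER_SECOND)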

footnotes

[35] The various molecules of a cell are more or less stable. Thus, typically, a cell without its bion soon reaches a mostly stable state where chemical reactions cease, and the structure of the cell just before that bion’s departure remains mostly unchanged—succumbing only slowly to environmental stresses from outside the cell. This quasi-stability means that a bion can leave its cell for at least a short time, and, upon return, find its cell in much the same state as when it left that cell (in effect, a bion also “leaves” its cell each time it sleeps—see section 9.3—and this periodic sleeping of a cell’s bion has probably been a contributing factor in the evolution of the cell’s stability).

However, because there is so much interdependency in the human body, subpar performance by cells whose bions are absent—depending on how many bions are absent, for how long, and from which cells—could have a cascading effect that ultimately causes sickness or possibly even death. It seems that to avoid these dangers, the bions are, in effect, collectively careful about staying with their cells in the physical body. For the typical person who has bion-body projections, the bions in their projected bion-body apparently maintain comfortable safety margins limiting how long they are away from their cells.


5.2.1 Out-of-Body Movement during a Lucid Dream

Regarding out-of-body movement during a lucid dream, in my own case a typical lucid dream involved rapid moves to different locations where at each location I would stop and interact, often by telepathic talking, with one or more persons who were at that location, and, like myself, those persons were just their awareness and mind (they, and I presume myself, typically looked like dressed people, but this was just a constructed appearance, constructed by our minds, explained in section 5.3). The question is, how is this navigation done, knowing which direction in 3D space to move so as to get to where another person is, and also, at the end of the lucid dream, knowing which direction to move so as to get back to one’s physical body? The answer is already in subsection 3.8.6 with the learned-program statement get_relative_locations_of_bions() and its two supporting routines reply_to_this_location_request_bions() and process_a_location_reply_from_a_bion(): one can make minor changes to the code for get_relative_locations_of_bions() and its two supporting routines, to get a new learned-program statement get_relative_location_of_bion_uid() with its supporting routines reply_to_this_location_request_bion_uid() and process_location_reply_from_bion_uid(), and also add the following code to the examine_a_message_instance() routine in subsection 3.8.5 to call these two supporting routines:

if message_instance.special_handling_locate is GET_LOCATION_OF_BION_UID
and this_CE is currently holding a bion  /* It’s okay if this bion is asleep: if this bion is the wanted recipient, then want its location regardless of whether it is asleep or not. */
and that bion qualifies as a recipient of the message  /* Examine the message_instance and also that bion’s identifier block to determine this. */
then
reply_to_this_location_request_bion_uid(message_instance)
return  /* exit this routine */
end if

if message_instance.special_handling_locate is LOCATION_REPLY_FROM_BION_UID
and this_CE is currently holding a bion that is not asleep
and that bion qualifies as a recipient of the message  /* Examine the message_instance and also that bion’s identifier block to determine this. */
then
process_location_reply_from_bion_uid(message_instance)
return  /* exit this routine */
end if
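
Expressed as runnable Python, the dispatch logic of these two additions might look like the following sketch (the message_instance and bion objects, and the three stubbed routines, are hypothetical stand-ins for the book’s pseudocode):

def reply_to_this_location_request_bion_uid(message_instance):
    pass  # stub for the supporting routine of the same name

def process_location_reply_from_bion_uid(message_instance):
    pass  # stub for the supporting routine of the same name

def qualifies_as_recipient(bion, message_instance):
    # Stub: examine the message_instance and the bion's identifier block.
    return True

def examine_locate_additions(message_instance, held_bion):
    # held_bion is the bion currently held by this computing element, or
    # None if it holds no bion. Returns True if the message was handled.
    tag = message_instance["special_handling_locate"]
    if tag == "GET_LOCATION_OF_BION_UID":
        # A sleeping bion still replies: if it is the wanted recipient,
        # its location is wanted regardless of whether it is asleep.
        if held_bion is not None and qualifies_as_recipient(held_bion, message_instance):
            reply_to_this_location_request_bion_uid(message_instance)
            return True
    elif tag == "LOCATION_REPLY_FROM_BION_UID":
        # Only an awake bion processes a location reply.
        if (held_bion is not None and not held_bion["asleep"]
                and qualifies_as_recipient(held_bion, message_instance)):
            process_location_reply_from_bion_uid(message_instance)
            return True
    return False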

The get_relative_location_of_bion_uid() statement has two parameters: the first parameter is bion_uid which is an integer and its value should be the unique identifier of a bion, and the second parameter is use_this_send_distance. If a call of get_relative_location_of_bion_uid() is successful—which means that in the time allowed to receive a reply, a location reply was received by the supporting routine process_location_reply_from_bion_uid(), and that reply’s message_instance.sender's_identifier_block shows that this reply is from the bion whose unique identifier is bion_uid—the call returns three things (the same three things that get_relative_locations_of_bions() returns in each entry of its returned nearest-recipients list):

  1. The replying bion’s identifier block.

  2. The replier's_XYZ_relative_to_000 that is computed in the process_location_reply_from_bion_uid() routine, which gives the direction from the calling bion to the replying bion.

  3. The computed distance between point (0, 0, 0) and point replier's_XYZ_relative_to_000, which is the distance from the calling bion to the replying bion.

For the get_relative_location_of_bion_uid() statement, MAX_SEND_DISTANCE_ALLOWED_FOR_LOCATING_BION_UID is the maximum use_this_send_distance value allowed, and this maximum, based on what can be done in a lucid dream, is a distance of at least several thousand miles. In comparison, the maximum use_this_send_distance value allowed for get_relative_locations_of_bions() is MAX_SEND_DISTANCE_ALLOWED_FOR_LOCATING_BIONS which is estimated at 10 feet.

During a lucid dream, to move to where some other awareness/mind is, one’s mind probably has a learned program whose basic procedure looks like the following:

  1. Initialize list_of_solitons_out_of_reach to empty.

  2. Given the soliton directory, and avoiding any solitons currently in the list_of_solitons_out_of_reach, decide on which soliton to visit (this decision process would presumably include messaging with that soliton’s mind to determine if it is currently projected and can be visited in a lucid dream). If no soliton is selected to be visited, for whatever reason, then exit this procedure returning "failure".

    If a soliton was selected to be visited, then for the two parameters of get_relative_location_of_bion_uid(), set bion_uid to the unique identifier of that soliton’s MESSAGING_WITH_OTHER_OWNED_MINDS bion, and set use_this_send_distance to MAX_SEND_DISTANCE_ALLOWED_FOR_LOCATING_BION_UID.

  3. Call get_relative_location_of_bion_uid(). If the call failed, then add the selected soliton to the list_of_solitons_out_of_reach, and then go to step 2; otherwise, this call was successful, and in the following steps, “returned distance” and “returned direction” refer to those two returns from this successful call (the also returned identifier block is not used in this procedure).

  4. The returned distance is the distance to the bion whose unique identifier is bion_uid (this bion is referred to simply as “that bion” in the remainder of this procedure). However, because that bion is owned by a soliton, and a soliton and its owned bions are always kept very close together by the computing-element program, the returned distance is also the distance between one’s own awareness/mind and the awareness/mind to be visited. If the returned distance is close enough to that bion (my guess is that “close enough” is being within 10 feet or so of that bion), then stop moving and exit this procedure returning "success".

    (When this procedure exits with "success", I expect that one or more other learned programs take over to adjust one’s position relative to the surrounding environment, which includes the awareness/mind to be visited but may also include other persons nearby and/or various lucid-dream objects (in this lucid-dream context, “person” means an awareness/mind). Based on my own experience with lucid dreaming, as a rule, after moving to a new location where one or more persons were, I was always positioned level with them and just a few feet from the nearest person, which as a rule was the person with whom I then interacted, such as by conversing telepathically. And thinking about it now, it makes sense that, with regard to this procedure, that nearest person was the person to be visited.)

    Depending on the returned distance, set as needed how fast to move towards that bion, and move towards that bion in the returned direction, using the awareness/mind’s movement ability.

  5. Given the returned distance and how fast one’s awareness/mind is moving towards that bion, compute the estimated time to reach that bion. Then wait a fraction of that estimated time—a fraction close to, but less than 1 (for example, 97/100ths would be good)—before doing step 6 (one reason to wait a fraction of that estimated time is because, in general, the further away one is from that bion, the less accurate the returned direction will be in terms of pointing exactly at that bion; another reason is possible inaccuracies in the estimated time to reach that bion).

  6. Set use_this_send_distance to ((1 − (the fraction used in step 5)) × (the returned distance) × (a small safety factor such as 2)).

    Note that (1 − (the fraction used in step 5)) × (the returned distance) is an estimate of the distance remaining to reach that bion. The reason for the small safety factor is because of the inaccuracies mentioned in step 5 and also the possibility that that bion—more specifically with regard to lucid dreaming, that the awareness/mind to be visited—is moving in some direction at a speed that introduces a substantial error in the estimate of the distance remaining to reach that bion. My guess is that a safety factor of 2 is sufficient to avoid a failed call at step 3. Also, if one assumes that communication with the awareness/mind to be visited preceded trying to visit that awareness/mind, and this communication included getting an okay from that awareness/mind about being visited, then that awareness/mind is not going to be doing any substantial movements while it awaits that visit which should happen shortly.

    Go to step 3.
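
The six steps above amount to a closed-loop homing procedure: locate, move, wait most of the estimated travel time, then re-locate using a much smaller send distance. A compact Python sketch of that loop follows (the callables and the constants are hypothetical stand-ins; MAX_LOCATE_DISTANCE plays the role of MAX_SEND_DISTANCE_ALLOWED_FOR_LOCATING_BION_UID):

import time

MAX_LOCATE_DISTANCE = 5000 * 5280  # feet; "at least several thousand miles"
CLOSE_ENOUGH = 10.0                # feet; step 4's guess
WAIT_FRACTION = 0.97               # step 5's "97/100ths"
SAFETY_FACTOR = 2.0                # step 6's "small safety factor such as 2"

def visit_soliton(choose_target, locate, move_toward, speed_for):
    # choose_target(out_of_reach) consults the soliton directory and returns
    # a target's MESSAGING_WITH_OTHER_OWNED_MINDS bion uid, or None (step 2).
    # locate(bion_uid, send_distance) stands in for
    # get_relative_location_of_bion_uid(), returning (distance, direction)
    # or None. move_toward(direction, speed) and speed_for(distance) stand
    # in for the awareness/mind's movement ability.
    out_of_reach = set()                              # step 1
    while True:
        bion_uid = choose_target(out_of_reach)        # step 2
        if bion_uid is None:
            return "failure"
        send_distance = MAX_LOCATE_DISTANCE
        while True:
            reply = locate(bion_uid, send_distance)   # step 3
            if reply is None:
                out_of_reach.add(bion_uid)
                break                                 # back to step 2
            distance, direction = reply
            if distance <= CLOSE_ENOUGH:              # step 4
                return "success"
            speed = speed_for(distance)
            move_toward(direction, speed)
            time.sleep(WAIT_FRACTION * (distance / speed))                  # step 5
            send_distance = (1 - WAIT_FRACTION) * distance * SAFETY_FACTOR  # step 6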

At the end of a lucid dream, to return to one’s physical body, one’s mind probably has a learned program whose basic procedure looks like the following:

  1. For the two parameters of get_relative_location_of_bion_uid(), set bion_uid to the unique identifier of one of the brain bions in one’s physical body (for simplicity, this can be the same brain bion that was the most recent input for the learned program LP_maintain_AM_position_close_to_one_bion when one was awake in one’s physical body), and set use_this_send_distance to MAX_SEND_DISTANCE_ALLOWED_FOR_LOCATING_BION_UID.

  2. Call get_relative_location_of_bion_uid(). If the call failed: if the value of use_this_send_distance is not MAX_SEND_DISTANCE_ALLOWED_FOR_LOCATING_BION_UID, then set use_this_send_distance to MAX_SEND_DISTANCE_ALLOWED_FOR_LOCATING_BION_UID and do this call over again; otherwise, this call failed even though the value of use_this_send_distance was MAX_SEND_DISTANCE_ALLOWED_FOR_LOCATING_BION_UID, so in this case exit returning "max distance failed".

    (When this procedure exits with "max distance failed", then one or more other brain bions can be tried, and if that also fails, then perhaps MAX_SEND_DISTANCE_ALLOWED_FOR_LOCATING_BION_UID is less than the diameter of the Earth and one’s awareness/mind is simply too far away from one’s physical body. If trying a few other brain bions has also failed, then one’s awareness/mind can try moving far distances in different directions, staying close to the Earth’s surface—after each long-distance move of one’s awareness/mind, go to step 1 and repeat this procedure to locate the given brain bion—until success is attained. If still no success, then at some point I guess one’s awareness/mind gives up and stops trying to return to one’s physical body. Whether or not this kind of failure ever happens, I have no way of knowing, since all those who talk or write about their lucid dreams were able to return to their physical bodies successfully, and in my own case, I was never consciously aware of any problems getting back to my physical body.)

    Otherwise, this call was successful, and in the following steps, “returned distance” and “returned direction” refer to those two returns from this successful call (the also returned identifier block is not used in this procedure).

  3. The returned distance is the distance to the bion whose unique identifier is bion_uid (this bion is referred to simply as “that bion” in the remainder of this procedure). If the returned distance is close enough to that bion (my guess is that “close enough” is being within a few feet of that bion), then stop moving and exit this procedure returning "success".

    (When this procedure exits with "success", the learned program LP_maintain_AM_position_close_to_one_bion, which was described earlier above, is run with its input being the unique identifier of a brain bion that is near the center of one’s physical head.)

    Depending on the returned distance, set as needed how fast to move towards that bion, and move towards that bion in the returned direction, using the awareness/mind’s movement ability.

  4. Given the returned distance and how fast one’s awareness/mind is moving towards that bion, compute the estimated time to reach that bion. Then wait a fraction of that estimated time—a fraction close to, but less than 1 (for example, 97/100ths would be good)—before doing step 5 (one reason to wait a fraction of that estimated time is because, in general, the further away one is from that bion, the less accurate the returned direction will be in terms of pointing exactly at that bion; another reason is possible inaccuracies in the estimated time to reach that bion).

  5. Set use_this_send_distance to ((1 − (the fraction used in step 4)) × (the returned distance) × (a small safety factor such as 2)).

    Note that (1 − (the fraction used in step 4)) × (the returned distance) is an estimate of the distance remaining to reach that bion. The reason for the small safety factor is because of the inaccuracies mentioned in step 4 and also the possibility that that bion—more specifically with regard to the purpose of this procedure, that one’s physical body—is moving in some direction at a speed that introduces a substantial error in the estimate of the distance remaining to reach that bion. My guess is that a safety factor of 2 is sufficient to avoid a failed call at step 2.

    Go to step 2.
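
The retry rule in step 2 (fail at a reduced send distance, retry once at the maximum, and only then give up) can be sketched on its own, in Python (locate again stands in for get_relative_location_of_bion_uid()):

def locate_with_escalation(locate, bion_uid, send_distance, max_distance):
    # Step 2's rule: if a call with a smaller send distance fails, retry
    # once at the maximum allowed distance; only then report failure.
    reply = locate(bion_uid, send_distance)
    if reply is None and send_distance != max_distance:
        reply = locate(bion_uid, max_distance)
    if reply is None:
        return "max distance failed"
    return reply  # (distance, direction) from the successful call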

5.2.2 Vision and Movement during my Bion-Body Projections

The raw data for vision during a lucid dream is provided by one’s mind calling get_relative_locations_of_d_common_atoms(), and the raw data for vision during a bion-body projection is provided by one’s mind calling get_relative_locations_of_bions(). In both vision cases, for the images constructed and ultimately sent to one’s soliton so that one can consciously see: those d-common atoms or bions, respectively, that are too close to whichever bion in one’s mind is calling that get_relative_locations_of_…() statement to get the raw vision data, are not included in the constructed image to be seen by one’s soliton. Thus, during a lucid dream, one cannot see one’s own constructed appearance, assuming it is there, when seeing the constructed appearance(s) of whichever person(s) one is currently interacting with, because the d-common atoms composing one’s own constructed appearance are too close to one’s own mind. Likewise, during a bion-body projection, one cannot see one’s own bion-body head, because the bions composing one’s bion-body head are too close to one’s own mind. Note that it is easy to exclude from the constructed image the d-common atoms or bions, respectively, that are too close to the calling bion: computing the distance from the calling bion to each replying d-common atom or bion is a simple matter of using the distance formula to compute the distance between point (0, 0, 0) and point replier's_XYZ_relative_to_000, and then discarding those replies whose computed distance from the calling bion is less than some cutoff value (the computation of replier's_XYZ_relative_to_000 for a replying bion is given in subsection 3.8.6, and this computation is the same for the other forms of get_relative_locations_of_…(), including get_relative_locations_of_d_common_atoms()).
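
The too-close cutoff just described is a simple distance test, sketched here in Python (the reply format is hypothetical; the six-inch cutoff echoes the guess in footnote [36]):

import math

def exclude_too_close(replies, cutoff):
    # Drop replies whose replier is nearer to the calling bion, at the
    # origin (0, 0, 0), than the cutoff. Each reply is the replier's
    # replier's_XYZ_relative_to_000 as an (x, y, z) tuple.
    kept = []
    for (x, y, z) in replies:
        if math.sqrt(x*x + y*y + z*z) >= cutoff:
            kept.append((x, y, z))
    return kept

# With a six-inch cutoff, a reply from 4 inches away is discarded and a
# reply from 20 inches away is kept:
print(exclude_too_close([(0.0, 0.0, 4.0), (12.0, 16.0, 0.0)], cutoff=6.0))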

Another consideration regarding vision during a lucid dream, and also during a bion-body projection, is that the get_relative_locations_of_…() learned-program statement used to get the raw vision data, has nothing to do with light of any kind, neither physical light nor any other kind of light. There are no light sources, no shadows, no reflective surfaces nor mirrors of any kind, when seeing with either get_relative_locations_of_d_common_atoms() or get_relative_locations_of_bions(). And this complete lack of light sources, shadows, and reflections of any kind is consistent with my own experience, since I never saw any light sources, shadows, or reflections of any kind during any of my approximately 500 out-of-body projections, with the sole exception being when my mind’s third-eye, which sees physical light using the get_photon_vectors() learned-program statement (section 5.4), was activated during my one dense bion-body projection.

During a lucid dream, the fastest speed that the awareness/mind can move at in our world, is estimated by me in chapter 10 as being about 250 miles per second (400 kilometers per second), based on an estimated time to move an estimated distance. However, during a bion-body projection, the fastest speed that I ever experienced when moving in a bion-body was much, much slower (I don’t have an actual estimate for the fastest speed when I was moving in a bion-body, but thousands of times slower than the fastest speed during a lucid dream is probably approximately correct). In my approximately 100 bion-body projections, my awareness/mind was always inside where my bion-body head would be (I couldn’t actually see my bion-body head, presumably for the reason given above about my awareness/mind being too close to my bion-body head). The rest of my bion-body, insofar as I could see it, was complete, including my upper body and two arms and hands and my lower body including my legs, and I always had conscious control over how I could move my bion-body arms (I was also able to consciously move my bion-body legs, and when moving slowly in my bion-body I would often move my bion-body legs as if I were walking, but my projected bion-body was never able to make contact with anything in the physical world—the reason for this lack of physical contact is given in section 5.4—and there was never any floor or ground to walk on that I could see; I assume this attempt at walking when moving slowly thru 3D space in my projected bion-body was simply a result of habit, because like most people I walk a lot in my physical body to move around).

During one bion-body projection that I still remember clearly after more than 35 years (it is April 2016 and I am 60 years old as I write this paragraph), I experimented with how fast I could move my bion-body forearms in up-and-down chopping motions, and it was approximately twice as fast as I could move my physical forearms in those same up-and-down chopping motions (after that bion-body projection was over, for comparison I tried doing the same up-and-down chopping motions in my physical body). Regarding my vision during my bion-body projections, I never saw in color, only a grayscale (in contrast, lucid-dream vision for me was always in color). With the sole exception of the one dense bion-body projection that I had (described in subsection 10.1.1), I never saw any physical objects, and the world that I could see was mostly empty: usually just my own bion-body, only rarely the projected bion-body of another person, and, during a few bion-body projections, the moving bion-body of our family’s pet cat, which I believed was also projected at those times when I could see it (in every case, when seeing my own bion-body or seeing another bion-body, it always looked grainy in its composition). And whenever I did see another bion-body, whether a human’s or our pet cat’s, I only saw it when it was very close to my own projected bion-body, which makes me think that my vision had a very short range during those bion-body projections.[36] In terms of moving my entire bion-body in a given direction, there was no noticeable weight to my bion-body, and I was able to move in any direction, and often did so with conscious intent.

The awareness/mind has its own independent movement ability as described earlier in this section, and the individual bions in the bion-body, presumably by using the move_this_bion() learned-program statement, also have their own independent movement ability, which is clearly demonstrated, for example, when the limbs of the bion-body move relative to the rest of the bion-body, such as when I was “walking” or doing that chopping-motion experiment. Apparently, the cell-controlling bions composing my projected bion-body, when away from their cells, will respond to move messages from my mind by calling move_this_bion() as needed to move as my awareness/mind wants.

During all my bion-body projections, my awareness/mind always remained in my bion-body head, and this was done by the learned program LP_maintain_AM_position_close_to_one_bion which was described earlier in this section.


footnotes

[36] As already stated, the raw data for vision during a bion-body projection is provided by one’s mind calling get_relative_locations_of_bions(), and in subsection 3.8.6 I guess that MAX_SEND_DISTANCE_ALLOWED_FOR_LOCATING_BIONS is less than 10 feet (about 3 meters). I describe in section 10.1 a bion-body projection I had in 2012 that involved my pet cat, during which I saw that cat’s projected bion-body move rapidly around my own bion-body, always keeping about a foot distant from my bion-body as it moved around me. That cat’s projected bion-body was most distant from me (furthest from my awareness/mind, which was in my bion-body head) when it was about a foot away from the bottom of my bion-body feet, a distance I estimate at about 6½ to 7 feet, which means MAX_SEND_DISTANCE_ALLOWED_FOR_LOCATING_BIONS is probably at least 7 feet.

Regarding how the image is constructed from a call of get_relative_locations_of_bions(): Assume that get_relative_locations_of_bions() has three additional parameters, a parameter named return_an_image whose value is either true or false, and, when return_an_image is true, two other added parameters: a vector that points in the direction of view; and an exclusion distance so that replies from bions whose computed distance from the calling bion is less than this exclusion distance, will, in effect, be ignored and not put in the image being constructed (in my case, the calling bion was whichever of my mind bions called get_relative_locations_of_bions() to form the image; and, since I never saw any of my bion-body head—nor any of my bion-body neck—this exclusion distance for my bion vision was probably set at about six inches). Also, when return_an_image is true, get_details_for_this_many_nearest_recipients is set to zero, and the image will be constructed piecemeal in the process_a_location_reply_from_a_bion() routine after each reply’s replier's_XYZ_relative_to_000 is computed. The image plane that each replier's_XYZ_relative_to_000 is projected onto, will depend on the given direction-of-view vector. At the end of the wait time for all the replies to be received, get_relative_locations_of_bions() will return the constructed image.
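
How each replier's_XYZ_relative_to_000 might be projected onto an image plane determined by the direction-of-view vector can be sketched as follows, in Python (the orthographic projection and the choice of basis vectors are assumptions; the text above says only that the image plane depends on the given direction-of-view vector):

import math

def unit(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def image_coordinates(replier_xyz, view_direction, exclusion_distance):
    # Returns (u, v) coordinates on the image plane thru the calling bion
    # (the origin) and perpendicular to view_direction, or None if the
    # replier is within the exclusion distance or behind the viewer (the
    # behind-the-viewer test is an assumed way to get a field of view).
    d = unit(view_direction)
    helper = (0.0, 0.0, 1.0) if abs(d[2]) < 0.9 else (1.0, 0.0, 0.0)
    u = unit(cross(helper, d))   # first basis vector of the image plane
    v = cross(d, u)              # second basis vector of the image plane
    x, y, z = replier_xyz
    if math.sqrt(x*x + y*y + z*z) < exclusion_distance:
        return None
    if x*d[0] + y*d[1] + z*d[2] <= 0.0:
        return None
    return (x*u[0] + y*u[1] + z*u[2], x*v[0] + y*v[1] + z*v[2])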

Also, assuming subsection 3.8.6, to get replies from all nearby cell-controlling bions that are not currently with their cells, which is the case with the bions in a projected bion-body that is away from its physical body, the user_settable_identifiers_block parameter for the get_relative_locations_of_bions() call would have its USID_2 set to NOT_WITH_MY_CELL, and the other values of that parameter would be set to null. And, to get the appearance of continuous vision, my guess is that the learned program that provides this vision makes about 30 calls per second of get_relative_locations_of_bions() to generate a sequence of constructed images, and, after further processing in one’s mind, the processed images are sent to one’s soliton so that, in my own case, I was able to consciously see my own bion-body—excluding my bion-body head and neck for the reason given in the previous paragraph—and its movements, as well as seeing any other nearby projected bion-body and its movements (assuming that other bion-body is in my current field of view, and its bions are within range and are recipients of the GET_LOCATIONS_OF_BIONS message sent by the image-generating call of get_relative_locations_of_bions()). And all this seeing was always relative to myself (myself being my soliton/mind in my bion-body head).


5.2.3 How One’s Projected Bion-Body Maintains its Human Shape

For us humans, one’s projected bion-body always maintains the same shape and size as one’s physical body. That wasn’t just my own experience; it was also the experience of many others who have had bion-body projections and whose projections have been written about. A bion-body projection involves both one’s mind and also cell-controlling bions in one’s physical body. Thus, there is a learned program in one’s mind, and also a different learned program in cell-controlling bions, that work together to bring about a bion-body projection. The actual procedure for a bion-body projection probably looks like the following procedure, which details this interaction between one’s mind and the cell-controlling bions in one’s physical body, and which also results in a projected bion-body that has the same shape and size as one’s physical body (in this procedure: “one’s mind” refers to the learned program in one’s mind that manages a bion-body projection; BB is short for bion-body; when a message is being sent, user_settable_identifiers_block is the parameter for the call of send() that sends the specified message):

  1. In this procedure, wherever user_settable_identifiers_block is set, although not mentioned explicitly, assume that USID_1 is set to MY_CELL_IS_ACTIVE if this is a bion-body projection while one is still alive (this will form a bion-body whose bions will have MY_CELL_IS_ACTIVE as their USID_1 value). If instead, one’s mind is running this procedure to form the afterlife bion-body (my guess is this will typically happen about five minutes after one’s heart has stopped), then assume USID_1 is set to MY_CELL_IS_IN_STASIS (this will form the afterlife bion-body whose bions will have MY_CELL_IS_IN_STASIS as their USID_1 value; the big advantage of forming the afterlife bion-body from bions whose USID_1 is MY_CELL_IS_IN_STASIS instead of MY_CELL_IS_ACTIVE, is that the MY_CELL_IS_IN_STASIS bions will not return to their cells after a short time as happens when MY_CELL_IS_ACTIVE). Also, if one’s mind is running this procedure to form the afterlife bion-body, then assume step 4 will send BB_PROJECTION_REQUEST messages for many non-skin-cell types so as to get a bion-body projection that is dense enough to activate the mind’s third-eye and third-ear, so that the physical world can be seen and heard during that afterlife bion-body projection (likewise, if one has a sufficiently dense bion-body projection while still alive, one’s mind’s third-eye and third-ear will be activated; see section 5.4 for details regarding the third-eye and third-ear).

    Assume that for cell-controlling bions, the possible values for USID_5 include the value SKIN, which represents all skin-cell types including the cells under our fingernails and toenails.

    To begin the bion-body projection, one’s mind sends a BB_PROJECTION_REQUEST message to all the skin-cell bions in its physical body (user_settable_identifiers_block has USID_4 set to the unique identifier of one’s multicellular body, USID_5 set to SKIN, and the other integers in this parameter are set to null).

    Each bion recipient of the BB_PROJECTION_REQUEST message checks to see if it can join a bion-body projection (presumably this would depend on the current state of that bion’s cell and what that bion is currently doing with its cell, if anything), and if this bion, in effect, decides that it can join a bion-body projection, then this bion runs a learned program whose steps follow (this_bion is the bion running this learned program); the remaining steps in this procedure also include additional actions, where specified, by one’s mind:

  2. For this_bion, first stop running all learned programs for manipulating and maintaining its cell, but keep running the learned program LP_keep_this_bion_close_to_this_physical_atom so that this_bion continues to remain with its cell for the time being. After all the learned programs for manipulating and maintaining its cell have been stopped, then start a timer named elapsed_time_away_from_my_cell, which is the elapsed time since this_bion stopped running all those learned programs.

    Then save this_bion’s user-settable identifiers block, so that it can be restored in later steps where indicated, and then change its USID_2 to NOT_WITH_MY_CELL, and its USID_3 to AWAY_FROM_MY_CELL.

  3. After completing step 2, this_bion then waits a short time, probably a fraction of a second, so as to allow time for other skin-cell bions that got the BB_PROJECTION_REQUEST message and are going to join the bion-body projection, to complete step 2.

    Indented below is the description of the learned-program statement get_relative_locations_of_bions_distance_distributed(). The rest of step 3 follows after this description of get_relative_locations_of_bions_distance_distributed().

    Assume there is a learned-program statement get_relative_locations_of_bions_distance_distributed(). This statement works the same way that get_relative_locations_of_bions() does, in that it returns the relative locations of one or more bions, but instead of returning details of the nearest replying bions as get_relative_locations_of_bions() does, get_relative_locations_of_bions_distance_distributed() returns details of the most distant bion in each distance interval, as well as a count of all replying bions in each distance interval. The distance intervals are defined further below.

    For get_relative_locations_of_bions_distance_distributed(), assume its MAX_SEND_DISTANCE_ALLOWED_FOR_LOCATING_BIONS_DD (DD: distance distributed) has the same value as MAX_SEND_DISTANCE_ALLOWED_FOR_LOCATING_BIONS.

    In the same way that get_relative_locations_of_bions() has two supporting routines, get_relative_locations_of_bions_distance_distributed() also has two supporting routines, reply_to_this_location_request_bions_for_DD() and process_a_location_reply_from_a_bion_for_DD(). And add the following code to the examine_a_message_instance() routine in subsection 3.8.5 to call these two supporting routines:

    if (message_instance.special_handling_locate is either GET_LOCATIONS_OF_BIONS_FOR_DD or LOCATION_REPLY_FROM_BION_FOR_DD)
    and this_CE is currently holding a bion that is not asleep
    and that bion qualifies as a recipient of the message  /* Examine the message_instance and also that bion’s identifier block to determine this. */
    then
    if message_instance.special_handling_locate is LOCATION_REPLY_FROM_BION_FOR_DD
    then
    process_a_location_reply_from_a_bion_for_DD(message_instance)
    else
    reply_to_this_location_request_bions_for_DD(message_instance)
    end if
    return  /* exit this routine */
    end if

    Regarding reply_to_this_location_request_bions_for_DD(), other than the name of the routine and the message_instance.special_handling_locate which is set to LOCATION_REPLY_FROM_BION_FOR_DD, the code for reply_to_this_location_request_bions_for_DD() is identical to the code for reply_to_this_location_request_bions().

    Regarding the parameters of get_relative_locations_of_bions_distance_distributed(), it has two of the same parameters as get_relative_locations_of_bions(), namely user_settable_identifiers_block and use_this_send_distance, but instead of the get_details_for_this_many_nearest_recipients parameter that get_relative_locations_of_bions() has, get_relative_locations_of_bions_distance_distributed() has the integer parameter number_of_intervals.

    For get_relative_locations_of_bions_distance_distributed(), its use_this_send_distance parameter must have a value that is not less than 1 and not more than MAX_SEND_DISTANCE_ALLOWED_FOR_LOCATING_BIONS_DD, and its number_of_intervals parameter must have a value that is at least 2 and not more than the rounded-down value of (½ × use_this_send_distance) (in practical use, because memory must be allocated for each interval, and because, in general, the distance unit for all learned-program statements is the side-width of a computing element which is estimated at 10⁻¹⁶ centimeters wide, the value of the number_of_intervals parameter will typically be extremely small compared to the value of the use_this_send_distance parameter).

    Regarding the initialization of the global variables that get_relative_locations_of_bions_distance_distributed() makes available to process_a_location_reply_from_a_bion_for_DD(), some of these global variables are the same as in get_relative_locations_of_bions(). But the differences are these: nearest_recipients_count and requested_nearest_count are neither set nor used. Also, instead of the following lines in get_relative_locations_of_bions():

    allocate enough memory for a list that can hold as many elements as specified by requested_nearest_count. Each element in this list has three components: the replying bion’s identifier block; the replier's_XYZ_relative_to_000 that is computed in the process_a_location_reply_from_a_bion() routine; and the computed distance between (0, 0, 0) and replier's_XYZ_relative_to_000.

    set pointer_to_nearest_recipients_list to point at the location in memory of the just-allocated list

    get_relative_locations_of_bions_distance_distributed() has these lines:

    set the integer max_subscript to the number_of_intervals parameter

    allocate enough memory for a list that can hold as many elements as specified by max_subscript. Each element in this list has four components: 1) The integer replies_count, initialized to zero, that counts all the replies for this distance interval. 2) The identifier block of the most distant replying bion in this distance interval. 3) The replier's_XYZ_relative_to_000 that is computed in the process_a_location_reply_from_a_bion_for_DD() routine for the most distant replying bion in this distance interval. 4) its_distance, computed in process_a_location_reply_from_a_bion_for_DD(), for the most distant replying bion in this distance interval.

    set pointer_to_distance_intervals_list to point at the location in memory of the just-allocated list

    /*
    Compute the interval_size.

    For example, if use_this_send_distance is 1335 and number_of_intervals is 19, then interval_size is (1335 ÷ 19) which is 70.2631579.
    */
    set interval_size to (use_this_send_distance ÷ number_of_intervals)

    Another difference between get_relative_locations_of_bions_distance_distributed() and get_relative_locations_of_bions(), is that the returns of the two routines are different, as follows:
    /*
    Returns for get_relative_locations_of_bions():
    */
    return ret_total_replies, centroid_XYZ_relative_to_000, ret_nearest_recipients_count, ret_pointer_to_nearest_recipients_list
    /*
    Returns for get_relative_locations_of_bions_distance_distributed():
    */
    return ret_total_replies, centroid_XYZ_relative_to_000, ret_interval_size, ret_pointer_to_distance_intervals_list

    The difference between process_a_location_reply_from_a_bion_for_DD() and process_a_location_reply_from_a_bion(), is that instead of the following comment and if-test in process_a_location_reply_from_a_bion():

    /*
    If conditions are met, insert the reply—more specifically, insert together three relevant details regarding the replying bion—into the current nearest-recipients list.
    */
    if requested_nearest_count is greater than 0
    then

    end if

    process_a_location_reply_from_a_bion_for_DD() has these lines:

    set distance to (use the distance formula to compute the distance between (0, 0, 0) and replier's_XYZ_relative_to_000)

    /*
    Compute the integer list_subscript. For example, if distance is 113.7 and interval_size is 5,039.3, then division_result is 0.022562 (accurate to six decimal places), and rounding division_result up to the nearest integer gives 1 as the list_subscript, and pointer_to_distance_intervals_list[1] is the first element in the distance-intervals list.
    */
    set division_result to (distance ÷ interval_size)
    set the integer list_subscript to (division_result if the value of division_result is an integer greater than 0; otherwise, round division_result up to the nearest integer)

    /*
    This if-test is included for completeness, because floating-point math, which has finite precision, is used to compute interval_size, distance, and division_result. Another separate consideration is how replier's_XYZ_relative_to_000 is computed, which is then used to compute distance. So, this if-test covers the worst case where division_result has a value slightly more than the integer value of max_subscript.
    */
    if list_subscript is greater than max_subscript
    then
    set list_subscript to max_subscript
    end if

    /*
    The list_subscript identifies the distance interval for this replying bion. After incrementing the replies_count, save the details of this bion if it is either the first replying bion for this distance interval, or it is more distant than the currently saved most-distant bion for this distance interval.
    */
    add 1 to pointer_to_distance_intervals_list[list_subscript].replies_count

    if pointer_to_distance_intervals_list[list_subscript].replies_count is 1  /* This is the first replying bion for this distance interval. */
    or pointer_to_distance_intervals_list[list_subscript].its_distance is less than distance
    then
    set the other three components of the list element at pointer_to_distance_intervals_list[list_subscript] as follows: set the second component to the replying bion’s identifier block which is in the message_instance, set the third component to replier's_XYZ_relative_to_000, and set the fourth component its_distance to distance.
    end if
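
    To make the interval bookkeeping above concrete, here is a minimal sketch in Python (my own illustration, not code from this book's model); the dictionary-based record and the function names make_intervals_list() and process_reply_for_DD() are assumptions standing in for the pseudocode's list elements and routine:

    import math

    def make_intervals_list(max_subscript):
        # One record per distance interval; subscript 1 maps to list index 0.
        return [{"replies_count": 0, "identifier_block": None,
                 "xyz_relative_to_000": None, "its_distance": 0.0}
                for _ in range(max_subscript)]

    def process_reply_for_DD(intervals, interval_size, max_subscript,
                             identifier_block, replier_xyz):
        # Distance from (0, 0, 0) to the replier's location relative to the caller.
        distance = math.sqrt(sum(c * c for c in replier_xyz))
        # Ceiling gives the 1-based interval subscript; a distance that is an
        # exact multiple of interval_size stays in that interval, as in the
        # pseudocode. The max(1, ...) also guards the distance-zero edge case.
        list_subscript = min(max(1, math.ceil(distance / interval_size)),
                             max_subscript)
        rec = intervals[list_subscript - 1]
        rec["replies_count"] += 1
        # Keep only the most distant replier seen so far in this interval.
        if rec["replies_count"] == 1 or rec["its_distance"] < distance:
            rec["identifier_block"] = identifier_block
            rec["xyz_relative_to_000"] = replier_xyz
            rec["its_distance"] = distance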

    Indented above is the description of the learned-program statement get_relative_locations_of_bions_distance_distributed(). The rest of step 3 follows:

    After the wait at the beginning of step 3, this_bion then calls get_relative_locations_of_bions_distance_distributed() with its user_settable_identifiers_block parameter set as follows: USID_2 is NOT_WITH_MY_CELL, USID_3 is AWAY_FROM_MY_CELL, USID_4 is the unique identifier of one’s multicellular body, USID_5 is SKIN, and the other integers in this parameter are set to null. Also for this call, set the number_of_intervals parameter to 3, and set the use_this_send_distance parameter to a short distance of 1/4th of an inch (0.635 centimeters), which I believe is a good compromise, because there is a reason to make use_this_send_distance smaller and a reason to make it larger: The smaller the use_this_send_distance value is, the less likely skin-cell bions from nearby separate physical structures—two adjacent fingers, for example—will be, in effect, tied together, given step 7 for skin-cell bions. The larger the use_this_send_distance value is, the more quickly the dragging effect of a single skin-cell bion will spread across the length and width of a limb (see “Moving my Projected Bion-Body’s Limbs” at the end of this subsection).

    If the call of get_relative_locations_of_bions_distance_distributed() failed to return a most distant bion in any of the three distance intervals (if ret_pointer_to_distance_intervals_list[1].replies_count is zero, or ret_pointer_to_distance_intervals_list[2].replies_count is zero, or ret_pointer_to_distance_intervals_list[3].replies_count is zero), then this_bion ends its attempt to join the bion-body projection: this_bion restores its user-settable identifiers block to its values before step 2, and then resumes running all learned programs for manipulating and maintaining its cell, and then exits (stops running) this procedure for this_bion.

    If instead the call of get_relative_locations_of_bions_distance_distributed() succeeds, then for each of the three bions in ret_pointer_to_distance_intervals_list[1], ret_pointer_to_distance_intervals_list[2], and ret_pointer_to_distance_intervals_list[3], save both the bion’s unique identifier and its_distance, into bion1, bion2, and bion3, respectively. (Assuming a large number—such as millions—of skin-cell bions in the projected bion-body, and assuming a somewhat even distribution of these skin-cell bions along the surface of the projected bion-body, and assuming that for the above call of get_relative_locations_of_bions_distance_distributed() its use_this_send_distance parameter has the suggested distance of 1/4th of an inch, then for the typical skin-cell this_bion, its bion1, bion2, and bion3 will be at a distance from this_bion of nearly 1/12th of an inch, nearly 1/6th of an inch, and nearly 1/4th of an inch, respectively.)
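
    As a small illustration of how this_bion might consume the returned distance-intervals list, here is a sketch in Python that performs the failure test of the previous paragraph and then extracts bion1, bion2, and bion3; the record fields match the sketch given earlier for process_reply_for_DD(), and the assumption that the unique identifier can be read out of the saved identifier block is mine:

    def pick_three_anchors(ret_pointer_to_distance_intervals_list):
        intervals = ret_pointer_to_distance_intervals_list  # three records here
        # The failure test: every distance interval must have received at
        # least one reply.
        if any(rec["replies_count"] == 0 for rec in intervals):
            return None  # this_bion abandons the projection attempt
        # For each interval's most distant replier, save its unique identifier
        # (assumed here to be readable from the saved identifier block) and
        # its_distance.
        return [(rec["identifier_block"]["unique_identifier"],
                 rec["its_distance"]) for rec in intervals]

    If the returned value is not None, its three elements become bion1, bion2, and bion3.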

    After setting its bion1, bion2, and bion3, this_bion then sends a SKIN_CELL_BION_READY message to the same mind bion that sent the BB_PROJECTION_REQUEST message to this_bion in step 1, and then waits for a reply from one’s mind.

  4. One’s mind, after first sending the BB_PROJECTION_REQUEST message in step 1 requesting participants from among the skin-cell bions, for a bion-body projection, then waits a short time, perhaps about a second, for all the SKIN_CELL_BION_READY messages to be received (one SKIN_CELL_BION_READY message from each skin-cell bion that is ready to be a part of the bion-body projection). If the total number of received SKIN_CELL_BION_READY messages during that wait time is too low (not enough skin-cell bions will participate in the wanted bion-body projection), then one’s mind sends a BB_PROJECTION_CANCEL message to all the cell-controlling bions waiting in this step for a reply from one’s mind (user_settable_identifiers_block has USID_2 set to NOT_WITH_MY_CELL, USID_3 set to AWAY_FROM_MY_CELL, USID_4 set to the unique identifier of one’s multicellular body, and the other integers in this parameter are set to null).

    If the waited-for reply is BB_PROJECTION_CANCEL, then restore this_bion’s user-settable identifiers block to its values before step 2, and then resume running all learned programs for manipulating and maintaining its cell, and then exit (stop running) this procedure for this_bion.

    At this point one’s mind has enough skin-cell bions for the projection to proceed, but one’s mind may now want to add more bions to the projection, so let’s assume that to get a denser bion-body projection, one’s mind has the option to send additional BB_PROJECTION_REQUEST messages with USID_5 values other than SKIN. The user_settable_identifiers_block parameter for each of these sent messages would have the wanted USID_5 value, USID_4 would be set to the unique identifier of one’s multicellular body, and the other integers in this parameter would be set to null. After sending whatever additional BB_PROJECTION_REQUEST messages, if any, it is going to send, one’s mind then waits a short time, perhaps about a second, so as to allow time for each recipient not-a-skin-cell bion to, in effect, decide if it will join the bion-body projection, and if so, then do the following sub-procedure (in this sub-procedure, this_bion is a not-a-skin-cell bion that has decided to join the bion-body projection):

    I. This first step is not an action step; instead, it describes the learned-program statement get_relative_locations_of_bions_near_line_segment(), which is used in step III of this sub-procedure.

      Assume there is a learned-program statement get_relative_locations_of_bions_near_line_segment(). This statement works the same way that get_relative_locations_of_bions() does, in that it returns the relative locations of one or more bions, but instead of returning details of the replying bions nearest to the calling bion, get_relative_locations_of_bions_near_line_segment() returns details of the replying bions nearest to the calling bion that are also sufficiently close to a specific line segment. This line segment is defined below where the parameter line_segment_type is described.

      (NOTE: My motivation for assuming the existence of this get_relative_locations_of_bions_near_line_segment() statement is to get the bion_B value that I want in step III of this sub-procedure, with the ultimate aim of being able to construct a confinement algorithm for the non-skin-cell bions in the projected bion-body (this confinement algorithm is given in step 7), that can account for the swirling interior bions that I both felt and saw within my projected bion-body during the one dense bion-body projection that I had (described in subsection 10.1.1). However, besides this specific use here in this sub-procedure, get_relative_locations_of_bions_near_line_segment() has many potential uses during the development and growth of a multicellular physical body. Cell-controlling bions having this learned-program statement get_relative_locations_of_bions_near_line_segment() in their toolkit is roughly equivalent to a human builder having a ruler and a straight-edge in his toolkit when constructing a house. END NOTE)

      Regarding parameters, get_relative_locations_of_bions_near_line_segment() has the same three parameters as get_relative_locations_of_bions()—namely user_settable_identifiers_block, get_details_for_this_many_nearest_recipients, and use_this_send_distance—and three additional parameters:

      • bion_uid: the unique identifier of a bion.

      • line_segment_type: the value of this parameter is either START_AT_BION_UID or START_OPPOSITE_BION_UID.

        In either case, regardless of whether START_AT_BION_UID or START_OPPOSITE_BION_UID is specified, the line that the line segment is a part of, is the line that passes thru point (0, 0, 0) and the computed point replier's_XYZ_relative_to_000 for the bion identified by the bion_uid parameter.

        If START_AT_BION_UID is specified, then the starting point of the line segment is the computed point replier's_XYZ_relative_to_000 for the bion identified by the bion_uid parameter, and the end point of the line segment extends off into infinity in the direction away from point (0, 0, 0). For example, if the bion identified by bion_uid has its replier's_XYZ_relative_to_000 value computed as (2, −3, 8), then (2, −3, 8) is the start of the line segment.

        If START_OPPOSITE_BION_UID is specified, then the starting point of the line segment is the opposite of the computed point replier's_XYZ_relative_to_000 for the bion identified by the bion_uid parameter, and the end point of the line segment extends off into infinity in the direction away from point (0, 0, 0). For example, if the bion identified by bion_uid has its replier's_XYZ_relative_to_000 value computed as (2, −3, 8), then the opposite—reverse the signs of X, Y, and Z—is (−2, 3, −8), and this (−2, 3, −8) is the start of the line segment.

      • max_allowed_distance_from_line_segment: the maximum allowed distance from the line segment.

      For get_relative_locations_of_bions_near_line_segment(), assume its MAX_SEND_DISTANCE_ALLOWED_FOR_LOCATING_BIONS_NLS (NLS: near line segment) has the same value as MAX_SEND_DISTANCE_ALLOWED_FOR_LOCATING_BIONS.

      In the same way that get_relative_locations_of_bions() has two supporting routines, get_relative_locations_of_bions_near_line_segment() also has two supporting routines, reply_to_this_location_request_bions_for_NLS() and process_a_location_reply_from_a_bion_for_NLS(). And add the following code to the examine_a_message_instance() routine in subsection 3.8.5 to call these two supporting routines:

      if (message_instance.special_handling_locate is either GET_LOCATIONS_OF_BIONS_FOR_NLS or LOCATION_REPLY_FROM_BION_FOR_NLS)
      and this_CE is currently holding a bion that is not asleep
      and that bion qualifies as a recipient of the message  /* Examine the message_instance and also that bion’s identifier block to determine this. */
      then
      if message_instance.special_handling_locate is LOCATION_REPLY_FROM_BION_FOR_NLS
      then
      process_a_location_reply_from_a_bion_for_NLS(message_instance)
      else
      reply_to_this_location_request_bions_for_NLS(message_instance)
      end if
      return  /* exit this routine */
      end if

      Note that the determination of the line segment, and whether or not a replying bion is sufficiently close to that line segment, takes place in the process_a_location_reply_from_a_bion_for_NLS() routine, and not in the reply_to_this_location_request_bions_for_NLS() routine. Thus, other than the name of the routine and the message_instance.special_handling_locate, which is set to LOCATION_REPLY_FROM_BION_FOR_NLS, the code for reply_to_this_location_request_bions_for_NLS() is identical to the code for reply_to_this_location_request_bions().

      Before get_relative_locations_of_bions_near_line_segment() sends the GET_LOCATIONS_OF_BIONS_FOR_NLS message, it initializes additional globals (compared to get_relative_locations_of_bions()) that are needed by process_a_location_reply_from_a_bion_for_NLS(): globals for saving the begin point and end point of the line segment (initialized to null), and globals for showing to process_a_location_reply_from_a_bion_for_NLS() the values of get_relative_locations_of_bions_near_line_segment()’s three added parameters (bion_uid, line_segment_type, and max_allowed_distance_from_line_segment).

      In process_a_location_reply_from_a_bion_for_NLS(), immediately after the replier's_XYZ_relative_to_000 is computed and added to the centroid sums, the algorithm for determining whether or not to include a replying bion in the nearest-recipients list follows:

      1. if the line segment is null
        then
        if the replying bion’s unique identifier is bion_uid
        then
        set the begin and end points of the line segment, as stated above in the description of the line_segment_type parameter
        else
        do not add this reply to the nearest-recipients list  /* nothing is added to the nearest-recipients list until we have the line segment */
        end if
        end if

      2. Beginning with the next reply after we have the line segment, this is done: compute the shortest distance from the replier's_XYZ_relative_to_000 to the line segment, and only if that computed shortest distance is less than max_allowed_distance_from_line_segment will that replying bion, in the code of process_a_location_reply_from_a_bion_for_NLS(), be subjected to the same code given in process_a_location_reply_from_a_bion() that determines whether or not that replying bion will be inserted into the nearest-recipients list:

        /*
        If conditions are met, insert the reply—more specifically, insert together three relevant details regarding the replying bion—into the current nearest-recipients list.
        */
        if requested_nearest_count is greater than 0
        then

        end if

        Note: Computing the shortest distance between a point and a line or line segment in 3D space is a solved problem. For the shortest distance between a point and a line in 3D space, see, for example, Point-Line Distance — 3-Dimensional at http://mathworld.wolfram.com/Point-LineDistance3-Dimensional.html. For the shortest distance between a point and a line segment in 3D space, see, for example, Determine the distance from a line segment to a point in 3-space at http://math.stackexchange.com/questions/322831/determing-the-distance-from-a-line-segment-to-a-point-in-3-space.
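
      Because the line segment here has a start point but extends to infinity, the computation reduces to a point-to-ray distance. The following Python sketch (mine, independent of the two references just cited) builds the start point per the line_segment_type parameter, and clamps the projection so that points behind the start measure their distance to the start itself:

      import math

      def segment_start(anchor_xyz, line_segment_type):
          # START_AT_BION_UID: start at the anchor bion's computed location.
          # START_OPPOSITE_BION_UID: start at the sign-reversed location.
          if line_segment_type == "START_OPPOSITE_BION_UID":
              return tuple(-c for c in anchor_xyz)
          return anchor_xyz

      def distance_point_to_ray(point, start):
          # The segment runs from start out to infinity in the direction away
          # from (0, 0, 0), so the direction vector is start itself (start is
          # never the origin, being another bion's relative location).
          length = math.sqrt(sum(c * c for c in start))
          u = tuple(c / length for c in start)            # unit direction
          v = tuple(p - s for p, s in zip(point, start))
          t = max(0.0, sum(a * b for a, b in zip(v, u)))  # clamp to the ray
          closest = tuple(s + t * c for s, c in zip(start, u))
          return math.sqrt(sum((p - c) ** 2 for p, c in zip(point, closest)))

      A replying bion would then be accepted when distance_point_to_ray(replier_xyz, segment_start(anchor_xyz, line_segment_type)) is less than max_allowed_distance_from_line_segment.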

    II. For this step and the next step in this sub-procedure, define for the human body this constant: MAX_BODY_THICKNESS, and give it a value of 4 feet (1.2 meters), which should be enough for everyone.

      For this_bion, we want two anchors, bion_A and bion_B, selected from among the skin-cell bions of step 3 that currently compose the skin-cell-bion part of what will be the projected bion-body. For each of bion_A and bion_B, what is saved is that bion’s unique identifier.

      For bion_A, selecting from among the skin-cell bions that currently compose the skin-cell-bion part of what will be the projected bion-body, we want the closest skin-cell bion to this_bion, which is gotten by calling get_relative_locations_of_bions() with its user_settable_identifiers_block parameter set as follows: USID_2 is NOT_WITH_MY_CELL, USID_3 is AWAY_FROM_MY_CELL, USID_4 is the unique identifier of one’s multicellular body, USID_5 is SKIN, and the other integers in this parameter are set to null. Also for this call, set the get_details_for_this_many_nearest_recipients parameter to 2, and set the use_this_send_distance parameter to MAX_BODY_THICKNESS. When the call returns with the details of the two nearest replying bions, save the nearest bion’s unique identifier in bion_A, and save the next-nearest bion’s unique identifier in bion_B.

      Anchor bion_A is now complete, and anchor bion_B has its default value, in case the next step, step III, fails to set a different value for anchor bion_B.

    III. Regarding get_relative_locations_of_bions_near_line_segment(): For bion_B, selecting from among the skin-cell bions that currently compose the skin-cell-bion part of what will be the projected bion-body, we want the nearest skin-cell bion to this_bion that is also sufficiently close to the line segment whose begin point is opposite the computed point replier's_XYZ_relative_to_000 for the bion identified by bion_A.

      To get the wanted bion_B, call get_relative_locations_of_bions_near_line_segment() with the following parameters: user_settable_identifiers_block (USID_2 is NOT_WITH_MY_CELL, USID_3 is AWAY_FROM_MY_CELL, USID_4 is the unique identifier of one’s multicellular body, USID_5 is SKIN, and the other integers in this parameter are set to null); get_details_for_this_many_nearest_recipients is set to 1; use_this_send_distance is set to MAX_BODY_THICKNESS; bion_uid is set to the value of bion_A; line_segment_type is set to START_OPPOSITE_BION_UID; max_allowed_distance_from_line_segment is set to a distance of 1 millimeter (about 0.04 inches), assuming this distance is more than enough so that it is very likely that at least one skin-cell bion in what will be the projected bion-body, is within this distance of the line segment assuming that line segment passes thru a skin-cell-bion layer in what will be the projected bion-body.

      If the returned ret_nearest_recipients_count is 1: If that nearest recipient bion is the same bion as either bion_A or bion_B, then discard that nearest recipient bion and leave the values of bion_A and bion_B unchanged; otherwise, save that nearest recipient bion’s unique identifier in bion_B, replacing the default value for bion_B set in the previous step II.

      At this point we have the final values for this_bion’s two skin-cell anchors bion_A and bion_B. In practice, if this_bion’s not-a-skin-cell cell is anywhere in a layer of skin cells (perhaps, for example, a nerve cell, or a blood-vessel cell, or a blood cell currently in the skin), then it is all but certain that the two skin cells of bion_A and bion_B are very close together. Similarly, if this_bion’s not-a-skin-cell cell is somewhere in the interior of the physical body but is closer to the nearest skin than the value of max_allowed_distance_from_line_segment, then it is very likely that the two skin cells of bion_A and bion_B are separated from each other by a distance less than the value of max_allowed_distance_from_line_segment.

      Assuming a short distance for max_allowed_distance_from_line_segment, like the 1 millimeter set above, most of the non-skin-cell cells in one’s physical body are interior cells that are farther away from the nearest skin than the value of max_allowed_distance_from_line_segment. For these cells’ bions that are joining the projected bion-body, most will have their two skin-cell anchors bion_A and bion_B on opposite sides of the skin that wraps around whatever part of one’s physical body a given not-a-skin-cell bion is in, whether it be a finger or toe, palm or foot, arm or leg, or one’s head, neck, or trunk. For example, if this_bion’s not-a-skin-cell cell is in the middle of one’s hand, and is closer to the skin on the back of one’s hand than to the skin on the palm of one’s hand, and is more than max_allowed_distance_from_line_segment distant from the closest skin cell in that back-of-hand skin, then bion_A’s skin cell will be in the skin on the back of one’s hand and bion_B’s skin cell will be in the skin on the palm of one’s hand.

    IV. Do step 2 for this_bion, and then send a NON_SKIN_CELL_BION_READY message to the same mind bion that sent the BB_PROJECTION_REQUEST message to this_bion in step 4, and then wait for one’s mind to send the BB_PROJECTION_SEPARATE message.

  5. At this point one’s mind has finished waiting for bions to be a part of the bion-body projection, and sends the BB_PROJECTION_SEPARATE message to all the cell-controlling bions waiting for this reply (user_settable_identifiers_block parameter has USID_2 set to NOT_WITH_MY_CELL, USID_3 set to AWAY_FROM_MY_CELL, USID_4 set to the unique identifier of one’s multicellular body, and the other integers in that parameter are set to null).

    For this procedure one’s mind has counted all the NON_SKIN_CELL_BION_READY messages, if any, sent to it in step 4, and if this count is large enough to represent a dense bion-body, then activate during this bion-body projection the mind’s third-eye and third-ear so that one’s awareness can see and hear the physical world while in one’s projected bion-body. (Note that the number of cell-controlling non-skin-cell bions needed to activate the mind’s third-eye and third-ear during a bion-body projection has nothing to do with how the mind’s third-eye and third-ear work. Instead, it is simply a high barrier that has evolved to greatly limit when this hidden ability of our minds can consciously appear during a bion-body projection, and there is apparently a similar high barrier against activation of the mind’s third-eye and third-ear during a lucid dream, because my third-eye and third-ear never activated during any of my approximately 400 lucid dreams. Also, the third-eye and third-ear cannot operate when we are in our physical bodies, because of the surrounding physical matter (see section 5.4). Most people will not experience their third-eye and third-ear until they are in their afterlife bion-body.)

    After sending the BB_PROJECTION_SEPARATE message, one’s mind waits a very short time (perhaps at most a few milliseconds) to allow time for all the recipient bions to stop running LP_keep_this_bion_close_to_this_physical_atom as stated in the next step, and after this very short wait, one’s mind can then send move messages as wanted to those bions: If one’s mind sends a move message to all the bions in its projected bion-body, then user_settable_identifiers_block has USID_2 set to NOT_WITH_MY_CELL, USID_3 set to AWAY_FROM_MY_CELL, USID_4 set to the unique identifier of one’s multicellular body, and the other integers in this parameter are set to null, and in this case that entire projected bion-body will move in the specified direction.

    Besides being able to move the entire projected bion-body as a whole by sending move messages whose recipients are all the bions currently composing the projected bion-body, one’s mind can also send messages to specific skin-cell bions in the projected bion-body, and, given step 7 below for skin-cell bions, and depending where those skin-cell bions are in the projected bion-body, one’s mind, among other things, can move the limbs of the projected bion-body. See “Moving my Projected Bion-Body’s Limbs” at the end of this subsection for more detail.

  6. At this point, this_bion has waited for a reply from one’s mind, and that reply is BB_PROJECTION_SEPARATE. In response, this_bion stops running its learned program LP_keep_this_bion_close_to_this_physical_atom and also enables a flag that means, in effect, that any move message received by this_bion from one’s mind will be followed and not ignored as it otherwise would be if received when LP_keep_this_bion_close_to_this_physical_atom is running.

    Among other things, once the bions in the projected bion-body are no longer running LP_keep_this_bion_close_to_this_physical_atom and will now move as one’s mind messages them to move, one’s projected bion-body can move away from one’s physical body and move independent of one’s physical body. And one’s awareness/mind will be along for the ride, using the learned program LP_maintain_AM_position_close_to_one_bion to stay close to one of the skin-cell bions that is a part of the head of one’s projected bion-body (presumably the skin-cell bion selected to stay close to would be at the front of the head, probably selected from the skin between one’s physical eyes).

  7. After step 6, this_bion is no longer, in effect, tied down to its cell. With regard to movement of this_bion during this bion-body projection, other than responding to any move messages received from one’s mind, this_bion will also do the following:

    First assume there is a learned-program statement get_relative_location_of_each_bion_in_list() that is essentially the same as the learned-program statement get_relative_location_of_bion_uid() defined in subsection 5.2.1. The only difference is that get_relative_location_of_bion_uid() has as its first parameter bion_uid which is the unique identifier of a single bion, and get_relative_location_of_each_bion_in_list() has as its first parameter list_of_bions which is a short list of the unique identifiers of two or more bions. Both of these learned-program statements have the same value for their respective MAX_SEND_DISTANCE_ALLOWED_FOR_….

    if this_bion’s USID_5 is SKIN (this_bion’s cell is a skin cell)
    then

    Set list_of_bions to the unique identifiers of the three bions in bion1, bion2, and bion3. Then many times per second call get_relative_location_of_each_bion_in_list() (if necessary, adjust the call’s use_this_send_distance as needed until one gets a reply from all three of those bions), and do the following after each successful call:

    begin ACTION_7A  /* naming this action ACTION_7A because it is referenced elsewhere in this subsection */

    With the returned distance and direction to each of those three bions relative to this_bion (note: the also returned identifier block of each of those three bions is not used in this procedure), use as needed move_this_bion() to move this_bion so that it remains at about the same distances from those three bions as the original distances that were saved in bion1, bion2, and bion3 when both this_bion and those other three skin-cell bions were all still with their cells in the physical body (for example, if the original distances in bion1, bion2, and bion3 are 8, 4, and 16, respectively, then after each successful call of get_relative_location_of_each_bion_in_list(), this_bion—in accordance with whatever algorithm is computing how to move—moves as needed so that it remains at a distance from bion1 of about 8, at a distance from bion2 of about 4, and at a distance from bion3 of about 16).

    However, because bion1, bion2, and bion3 may have changed their separation distances relative to each other, it may not be possible for this_bion to move so as to remain at the original distances that were saved in bion1, bion2, and bion3. For this reason, after each successful call of get_relative_location_of_each_bion_in_list(), in addition to having the returned distances between this_bion and bion1, bion2, and bion3, use the distance formula to compute the three current separation distances between bion1, bion2, and bion3 (the distance between bion1’s replier's_XYZ_relative_to_000 and bion2’s replier's_XYZ_relative_to_000, the distance between bion1’s replier's_XYZ_relative_to_000 and bion3’s replier's_XYZ_relative_to_000, and the distance between bion2’s replier's_XYZ_relative_to_000 and bion3’s replier's_XYZ_relative_to_000). Also copy the original distances saved in bion1, bion2, and bion3 to wanted_distance_to_bion1, wanted_distance_to_bion2, and wanted_distance_to_bion3, respectively. Then, whatever algorithm is computing how to move this_bion will have to take into account the three current separation distances between bion1, bion2, and bion3 and adjust the three wanted_… distances as needed, and then use these three adjusted wanted_… distances and the returned relative locations of bion1, bion2, and bion3 to compute how far, and in what direction, to move this_bion so as to be at those adjusted wanted_… distances from bion1, bion2, and bion3, and then move this_bion that far and in that direction.

    end ACTION_7A
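
    The pseudocode above deliberately leaves open “whatever algorithm is computing how to move.” One candidate, offered only as a sketch under my own assumptions, is a few gradient-descent iterations on the squared error between this_bion’s distances to the three anchors and the (possibly adjusted) wanted_… distances:

    import math

    def step_toward_wanted_distances(anchors, wanted, iterations=10, rate=0.25):
        # anchors: the three returned (x, y, z) locations of bion1, bion2, and
        # bion3 relative to this_bion; wanted: the three (possibly adjusted)
        # wanted_... distances. Returns an (x, y, z) offset for this_bion.
        p = [0.0, 0.0, 0.0]  # trial position; this_bion starts at the origin
        for _ in range(iterations):
            grad = [0.0, 0.0, 0.0]
            for (ax, ay, az), w in zip(anchors, wanted):
                dx, dy, dz = p[0] - ax, p[1] - ay, p[2] - az
                d = math.sqrt(dx * dx + dy * dy + dz * dz) or 1e-12
                scale = 2.0 * (d - w) / d  # gradient of (d - w)**2
                grad[0] += scale * dx
                grad[1] += scale * dy
                grad[2] += scale * dz
            for i in range(3):
                p[i] -= rate * grad[i]
        return tuple(p)

    The returned offset would then be handed to move_this_bion() as a distance and direction; the separate question of how to adjust the wanted_… distances when the anchors’ own separations change is a design choice not sketched here.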

    Given the presumably great number of awareness/mind beings in our galaxy, and the probable rarity of planets like our Earth that have organic life, I think it is very likely that most of the awareness/mind beings in our galaxy that have a body, have a bion-body without any physical component (the Caretakers, section 7.6, are an example of awareness/mind beings whose bion-body has no physical component). These bion-bodies need a way to hold together. Thus, because of this widespread need, there is probably a learned-program statement that takes as input one, two, or three different bions, along with the wanted distances between the calling bion and those input bions, and then adjusts those wanted distances if necessary when there are two or three input bions, and then moves the calling bion so that it is close enough to being at those wanted, possibly adjusted, distances from the input bions (in the case of three input bions, this learned-program statement would do what was described above in this step 7 for skin-cell bions, beginning with successfully calling get_relative_location_of_each_bion_in_list() with the three input bions and wanted distances in bion1, bion2, and bion3, and then doing ACTION_7A). Aside from mentioning that this learned-program statement likely exists, it is not considered further in this book.

    else  /* this_bion’s USID_5 is not SKIN (this_bion’s cell is not a skin cell) */

    Referring to this_bion’s two anchors bion_A and bion_B (see step 4), set list_of_bions to the two unique identifiers bion_A and bion_B. Then many times per second call get_relative_location_of_each_bion_in_list() (if necessary, adjust the call’s use_this_send_distance as needed until one gets a reply from both of these bions), and do the following after each successful call: With the returned bion_A’s replier's_XYZ_relative_to_000 and bion_B’s replier's_XYZ_relative_to_000, which are the locations of bion_A and bion_B relative to this_bion, use the distance formula to compute their distances from this_bion and set current_distance_to_bion_A and current_distance_to_bion_B respectively. Also, use the distance formula to compute the distance between the two points bion_A’s replier's_XYZ_relative_to_000 and bion_B’s replier's_XYZ_relative_to_000, and set current_distance_between_bions_A_and_B. After computing these three distances, do the following small block of code:

    /*
    PART ONE

    This PART ONE isn’t really needed, but I’m adding it both for efficiency reasons and because it will do a better job of keeping non-skin-cell bions that were within or near the skin during step 4 from possibly straying a little bit outside of the, in effect, outer shell of the projected bion-body, which is composed of skin-cell bions.
    */
    /*
    “1 centimeter” in the below if is an arbitrary choice, but should give good results.
    */
    if current_distance_between_bions_A_and_B is less than 1 centimeter (about 0.4 inches)
    then
    /*
    “half a millimeter” in the below if is an arbitrary choice, but should give good results.
    */
    if current_distance_to_bion_A is greater than half a millimeter (about 0.02 inches)
    then
    move this_bion so that it is within half-a-millimeter distance from bion_A.
    end if
    exit this small block of code.
    end if

    /*
    PART TWO
    */
    set sphere_radius to (h × current_distance_between_bions_A_and_B)  /* h is a constant that is greater than ½ and less than 1. */

    /*
    Confine this_bion to the convex-lens-shaped intersection of two equal-sized spheres of radius sphere_radius, with one sphere centered on bion_A and the other sphere centered on bion_B. For convenience, call this convex-lens-shaped intersection of the two spheres the confinement lens.

    Note that the reason constant h must have a value greater than ½ is that if it were less than ½ there would be no intersection of the two spheres (at h = ½, the two spheres intersect at a single point). And the reason h should be less than 1 is that a larger h would make it too likely for this_bion to often be outside of the interior of the projected bion-body.
    */
    if current_distance_to_bion_A is greater than sphere_radius
    or current_distance_to_bion_B is greater than sphere_radius
    then
    /*
    this_bion is currently outside of its confinement lens, so move this_bion so that it is within its confinement lens.
    */
    Move this_bion so that it is not greater than sphere_radius distant from bion_A, and not greater than sphere_radius distant from bion_B. This move places this_bion back within its confinement lens.

    After this_bion has been moved back within its confinement lens, use the move_this_bion() statement to give this_bion a small velocity of a few inches per second (1 inch is 2.54 centimeters), because this is my estimate of the speed of the swirling interior bions that I saw during my one dense bion-body projection. Assume that this small velocity is in a direction relative to the confinement lens that will have the effect over a short time—combined with the possibly many-times-per-second movement of this_bion back within its confinement lens when it is outside of its confinement lens—of having this_bion moving along the perimeter of its confinement lens.

    exit this small block of code.
    end if

    Regarding the mathematics of the intersection of two spheres, see, for example, Sphere-Sphere Intersection at http://mathworld.wolfram.com/Sphere-SphereIntersection.html: A formula is given for the radius a of the circle where the two spheres intersect. This circle is also the perimeter of the confinement lens. Because here the two spheres have the same radius, and we want to know how changing h changes radius a, we can simplify the given formula for radius a, measured in units of current_distance_between_bions_A_and_B, to (½ × square_root_of((4 × h²) − 1)). (This simplification also follows from the Pythagorean theorem: the intersection circle lies in the plane midway between the two sphere centers, so, with d as current_distance_between_bions_A_and_B and with each sphere having radius (h × d), radius a is square_root_of((h × d)² − (d ÷ 2)²), which simplifies to d × ½ × square_root_of((4 × h²) − 1).) For example, a value for h of 0.559 gives radius a = 0.25, which means that the confinement lens for this_bion has a diameter that is half of current_distance_between_bions_A_and_B. As a few more examples, a value for h of 0.55 gives radius a = 0.229 and a confinement-lens diameter of (0.458 × current_distance_between_bions_A_and_B), and a value for h of 0.6 gives radius a = 0.332 and a confinement-lens diameter of (0.664 × current_distance_between_bions_A_and_B).
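
    Pulling PART TWO and the lens mathematics together, here is a minimal sketch in Python; the choice of h, and the simplification of moving an out-of-lens bion to the midpoint of bion_A and bion_B, are my assumptions, and the small swirl velocity is omitted:

    import math

    H = 0.559  # an illustrative value for the constant h

    def dist(p, q):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

    def lens_perimeter_radius(h, d_ab):
        # Radius a of the circle where the two equal spheres intersect,
        # using the simplified formula given above.
        return 0.5 * math.sqrt(4.0 * h * h - 1.0) * d_ab

    def corrective_move(bion_A_xyz, bion_B_xyz, h=H):
        # this_bion sits at the origin of its own relative coordinates.
        origin = (0.0, 0.0, 0.0)
        sphere_radius = h * dist(bion_A_xyz, bion_B_xyz)
        if (dist(origin, bion_A_xyz) <= sphere_radius
                and dist(origin, bion_B_xyz) <= sphere_radius):
            return None  # already inside the confinement lens; no move needed
        # Simplest correction: move to the midpoint of bion_A and bion_B,
        # which lies inside both spheres whenever h is greater than 1/2 (a
        # real implementation would move only far enough to re-enter the
        # lens, and then add the small swirl velocity described above).
        return tuple((a + b) / 2.0 for a, b in zip(bion_A_xyz, bion_B_xyz))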

    Because my one dense bion-body projection was when I was about 24½ years old, and I am 60 years old as I write this paragraph, I no longer have a clear memory of exactly what I saw in the interior of my upper bion-body chest when I looked there during that dense bion-body projection, other than that I saw multiple circular flows of interior particles, including one near my left armpit, and my estimate is that the particles in those circular flows were moving at a speed of a few inches per second. Assuming the confinement algorithm above in PART TWO for those interior particles (bions) that I saw in those circular flows, I can’t say with any confidence what the h value was, although my guess is an h value at or near 0.559, giving a confinement-lens diameter for this_bion at or near half of current_distance_between_bions_A_and_B.

    Regarding any specific pair of skin-cell bions in bion_A and bion_B to which PART TWO applies: all the non-skin-cell bions in the projected bion-body that have those same two skin-cell bions as their bion_A and bion_B (which of these two skin-cell bions is in bion_A and which is in bion_B doesn’t matter here) will all, at any given point in time, have the same confinement lens in 3D space, regardless of where in the physical body those non-skin-cell bions were (whether closer to bion_A or bion_B, and by how much) when step 4 was being done for them. This concentration of what will be swirling bions will make them more visible to one’s vision during a dense bion-body projection, assuming that the more bions in one’s line of sight to what will be a pixel in one’s vision, the more likely that the algorithms in one’s mind that produce the images that are sent to one’s awareness will make that pixel dark, representing more bions. Regarding what is consciously seen, there is also the effect of all the complex post-processing of raw images and their sequence, which, among other things, emphasizes motion in what is finally sent to the awareness, and this post-processing probably played a role in making a few of the many swirls in my upper bion-body chest stand out and be more noticeable to my awareness. Note that there are about 50 trillion cells (5 × 10¹³ cells) in an adult human body, and during my one dense bion-body projection, my dense bion-body probably had at least billions of bions in the interior of my upper bion-body chest, if not several trillions of bions. Thus, to see a dark pixel in my bion-body during that dense bion-body projection may have required at least many thousands, if not many millions, of bions in the line of sight to that pixel.

    end if

    Regarding this step 7 as a whole: The end result of this step 7 is that the projected bion-body has, in effect, an outer shell that is composed of skin-cell bions, and this shell has the same human shape and size as one’s physical body from which those skin-cell bions came. Also, if the projected bion-body has additional bions that are not skin-cell bions (USID_5 is not SKIN), then at least most of these non-skin-cell bions are, in effect, confined within that outer shell of the projected bion-body, and for those non-skin-cell bions that are sometimes outside of that outer shell, it won’t be by much in terms of distance from that outer shell.

  8. This step 8 discusses the return of the cell-controlling bions that compose the projected bion-body to their cells in the physical body. The discussion in this step 8 assumes that all these cell-controlling bions have USID_1 value MY_CELL_IS_ACTIVE (see step 1). For a discussion of the afterlife bion-body, whose cell-controlling bions have USID_1 value MY_CELL_IS_IN_STASIS, see section 6.3.

    Regarding how my bion-body projections ended, they all ended the same way: my projected bion-body moved back into my physical body and quickly reintegrated into my physical body, and without any break in my consciousness I was fully back in my physical body. After I was back in my physical body, as a rule I then thought about that projection experience, and after thinking about it for a while, I then went back to sleep. To end the bion-body projection, one’s mind sends a BB_END_THE_PROJECTION message (user_settable_identifiers_block has USID_2 set to NOT_WITH_MY_CELL, USID_3 set to AWAY_FROM_MY_CELL, USID_4 set to the unique identifier of one’s multicellular body, and the other integers in that parameter are set to null), which causes each of the recipient bions to stop doing step 7 and then change its USID_3 to RETURNING_TO_MY_CELL, and then resume running its learned program LP_keep_this_bion_close_to_this_physical_atom that it had stopped running in step 6. Then, after that cell-controlling bion has moved close enough to the specified atom and is thereby back with its cell, that cell-controlling bion then restores its user-settable identifiers block to its values before step 2, and then resumes running all learned programs for manipulating and maintaining its cell, and then exits (stops running) this procedure for that cell-controlling bion. However, note that before one’s mind sends the BB_END_THE_PROJECTION message, some or all of the cell-controlling bions composing one’s projected bion-body may have already returned or be returning to their cells because of time outs, as explained in the next paragraph. Assuming that one’s mind wants to end the current bion-body projection, then assume that it will send the BB_END_THE_PROJECTION message no later than when the projected bion-body is back inside one’s physical body.

    Regarding how long my bion-body projections lasted, my rough estimate is that the longest were at least several minutes long, perhaps at most five minutes long, but those longer-lasting bion-body projections always included brief returns to my physical body during the projection, to do the exchange of used bions for unused bions described in section 5.4. My bion-body projections that did not have any brief returns to my physical body were my shorter-lasting bion-body projections, although as I recall, most of my bion-body projections had one or more brief returns to my physical body before the last return, which was also the end of that bion-body projection. Based on my experience, my estimate is that the cell-controlling bions in my physical body with USID_1 value MY_CELL_IS_ACTIVE only allowed about one minute of time to be away from their cells in a bion-body projection. Thus, the timer started in step 2, elapsed_time_away_from_my_cell, is periodically checked by this_bion to see if it is time for its return to its cell, and if so, this_bion does what was already described in the previous paragraph for the recipient bions of the BB_END_THE_PROJECTION message: stop doing step 7, change its USID_3 to RETURNING_TO_MY_CELL, and then resume running its learned program LP_keep_this_bion_close_to_this_physical_atom that it had stopped running in step 6. Then, after this_bion has moved close enough to the specified atom and is thereby back with its cell, this_bion restores its user-settable identifiers block to its values before step 2, resumes running all learned programs for manipulating and maintaining its cell, and then exits (stops running) this procedure for this_bion.

    For a cell-controlling bion whose USID_1 value is MY_CELL_IS_ACTIVE: Regarding the time allowed by that bion to be away from its cell in a bion-body projection, perhaps there is some variation in this time allowed depending on the cell type, but even if there is some variation, it is probably the case that a substantial fraction of the bions composing one’s projected bion-body will time out at about the same time and then return to their cells. Given step 7, if the non-skin-cell bions time out first before the skin-cell bions, then their leaving the projected bion-body will not have any effect on the skin-cell bions in that projected bion-body, but when the skin-cell bions time out, the timed-out skin-cell bions will drag whatever remains of the projected bion-body back into the physical body.

    Instead of waiting to be dragged back into one’s physical body because of time outs, one’s mind could send move messages to all the cell-controlling bions in one’s projected bion-body to move one’s projected bion-body back into one’s physical body (user_settable_identifiers_block has USID_2 set to NOT_WITH_MY_CELL, USID_3 set to AWAY_FROM_MY_CELL, USID_4 set to the unique identifier of one’s multicellular body, and the other integers in that parameter are set to null), and then when the bion-body is inside or otherwise close enough to being inside one’s physical body one’s mind sends the BB_END_THE_PROJECTION message to those same bions. I believe this is what happened in the case of my one dense bion-body projection, and also in the case of my less-dense bion-body projection in 2012 that involved my pet cat (both described in chapter 10), because in both of those cases my bion-body was motionless approximately two feet above my physical body and then moved slowly back down into my physical body. But in all my other bion-body projections (approximately 100), and regardless of what happened after my bion-body had returned to my physical body (either continuing with the bion-body projection after first exchanging used bions for unused bions and then leaving my physical body again, or, after returning, ending the bion-body projection), when my bion-body was returning to my physical body, my bion-body moved fast, and straight into my physical body with no prior positioning or slow movement of my bion-body when it was close to my physical body: therefore, for those bion-body projections, and given step 7, I believe my bion-body was dragged back into my physical body by skin-cell bions that had timed out.

    Regarding the exchange of used bions for unused bions because one’s mind wants to continue with the bion-body projection if it can (and regardless of whether one’s bion-body was dragged back by its bions into one’s physical body or one’s mind directed one’s bion-body back into one’s physical body): the simplest way to do the exchange, given the steps in this procedure, is to wait a short time after one’s projected bion-body is back in one’s physical body and one’s mind has sent the BB_END_THE_PROJECTION message (perhaps a few milliseconds, for all those bion-body bions, if any, that haven’t already done so, to return to their cells), and then start this procedure over again with step 1. And with this simple approach, the exchange of used bions for unused bions is more specifically a complete replacement of all the cell-controlling bions in one’s projected bion-body with, after returning to one’s physical body, a completely new group of cell-controlling bions in one’s projected bion-body (among other things, this complete replacement avoids the complication of having to, in effect, stitch together the new skin-cell bions with any old skin-cell bions still remaining in the projected bion-body; it also avoids being dragged back to one’s physical body prematurely because any non-replaced skin-cell bions still in the projected bion-body that have yet to time out will time out sooner than the newly added skin-cell bions).

Based on my own experience with out-of-body projections, they most commonly occurred on a night where, before going to sleep, I first did Om meditation for five or ten minutes. Then, within a few hours after I had fallen asleep, either I became fully conscious in a lucid dream, or I became fully conscious in my physical body but then avoided moving my physical body because I wanted to try to have a bion-body projection. In my mind (not speaking out loud) I then mentally repeated slowly, over and over again, the word Om, with the purpose of triggering a bion-body projection (this frequent Om-meditation usage was all before my 25th birthday and my kundalini injury on that date). With that repeated mental saying of Om, before long I would often feel a vibration in my body, and, continuing to mentally repeat Om, within a few seconds I would then be able to move away from my physical body in my projected bion-body. (Back then, and still today, I always go to sleep while lying on my side, but my physical body was always lying flat on its back whenever I became fully conscious in my physical body before any bion-body projection that I had, so my unconscious mind had always first moved my physical body onto its back before making me conscious, so that I could then proceed with the Om meditation in an attempt to bring about a bion-body projection.) Much less common for me during those years of Om-meditation use was to become fully conscious when I was already projected in a bion-body. (Let me just note in passing that I have no idea nor explanation for why mentally saying Om worked for me, resulting in lucid dreams and bion-body projections. It doesn’t work for everyone who tries Om meditation, but I know that besides myself it has worked for at least some people who have tried it as a meditation method, because of what is said about Om meditation in the Upanishads. Let me also note that, with the sole exception of the one dense bion-body projection that I had, all the bion-body projections that I had as a result of Om meditation consisted of a low-density bion-body with no noticeable swirling of bions in the interior of that bion-body. Given step 7 above, all those low-density bion-body projections that I had had mostly or only skin-cell bions composing my projected bion-body.)

Moving my Projected Bion-Body’s Limbs

Given step 7 above for skin-cell bions, how was my mind able to direct the movement of my projected bion-body’s limbs, specifically my moving my bion-body legs as if I was walking, and my moving my bion-body forearms in that up-and-down chopping-motion experiment that I did (both are described in subsection 5.2.2)? The physical structures for moving my limbs in my physical body—neurons, muscles, tendons, bones, and joints—were not present in my projected bion-body. In the case of that bion-body projection where I experimented with how fast I could move my bion-body forearms in up-and-down chopping motions, my bion-body forearms were moving as if they were bending at the elbow, like how my physical forearms move when I’m in my physical body. I remember that shortly after that bion-body projection had ended, I did the same up-and-down chopping motions in my physical body for comparison, and I concluded that those up-and-down chopping motions in my projected bion-body were about twice as fast as I could do those same chopping motions in my physical body. Remembering this, and testing myself with a stopwatch at the time of this writing in 2016, for each forearm in my physical body I can do about 2½ of those up-and-down chopping motions per second, so that means in that bion-body projection I was able to do those up-and-down chopping motions at about 5 per second. Given step 7 for skin-cell bions, if one skin-cell bion moves, that skin-cell bion, in effect, drags after itself the other skin-cell bions that have that skin-cell bion as their bion1, bion2, or bion3, and those dragged skin-cell bions in turn drag other skin-cell bions, and so on, in a chain reaction of movement. To get those up-and-down chopping motions of my bion-body forearms while my bion-body upper arms remained unmoved, my mind probably only needed to do the following for each of my two bion-body arms:

For each arm, to three different skin-cell bions in that arm (one near that arm’s shoulder, one near that arm’s elbow, and one near that arm’s hand), first send a message to each of these three bions to temporarily stop doing step 7, and then to the bion near the hand send a sequence of move messages that result in that bion-body forearm moving up and down, bending at the unmoving elbow. When finished with these chopping motions, send a message to each of these three bions saying, in effect, to resume doing step 7.

And similarly with how my mind made walking movements with my bion-body legs: For each leg, to three different skin-cell bions in that leg (one near that leg’s hip area, one near that leg’s knee, and one near that leg’s foot), first send a message to each of these three bions to temporarily stop doing step 7, and then to the latter two bions (the one near the knee and the other near the foot), send a sequence of move messages that result in moving each leg as if I were walking. When finished with these walking movements, send a message to each of these three bions saying, in effect, to resume doing step 7.

Given step 3 above for skin-cell bions, each skin-cell bion has an average of three other skin-cell bions that have that skin-cell bion as its bion1, bion2, or bion3. Assuming a single skin-cell bion starts receiving move messages from my mind and begins dragging other skin-cell bions, regarding how much the patch of skin-cell bions being dragged will widen after n executions by each of the skin-cell bions in the projected bion-body of step 7’s (call get_relative_location_of_each_bion_in_list() successfully and then do ACTION_7A), a rough approximation is n × (the average distance from this_bion to its bion1, bion2, and bion3 when step 3 was done), which, for the suggested use_this_send_distance value of 1/4th of an inch given in step 3, gives an average distance of 1/6th of an inch, and the rough approximation then reduces to (n × 1/6th of an inch). The distance from my elbow to my wrist is 11 inches, so n = 66 to spread that far (see the short arithmetic check after this paragraph). Regarding step 7 for each skin-cell bion, my rough estimate is that if step 7’s (call get_relative_location_of_each_bion_in_list() successfully and then do ACTION_7A) is done one-thousand times per second, then this would be more than fast enough to allow the speed at which I was able to move my bion-body forearms in those up-and-down chopping motions that I did, with, as I saw it, my bion-body forearm and hand holding together and moving smoothly as one, assuming a single skin-cell bion in my forearm near the hand, receiving move messages from my mind, by chain reaction dragged after it the other skin-cell bions in my bion-body forearm and hand.
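
As a rough arithmetic check of the previous paragraph (the numbers are the estimates given there, not measurements), a few lines of Python:

# Estimates from the paragraph above, not measurements:
avg_anchor_spacing = 1.0 / 6.0   # inches: average distance to bion1..3
limb_length = 11.0               # inches: elbow to wrist
executions_per_second = 1000.0   # assumed repetition rate of step 7

n = limb_length / avg_anchor_spacing      # 66 executions to span the limb
spread_time = n / executions_per_second   # 0.066 seconds
print(n, spread_time)                     # prints: 66.0 0.066

At about 5 chopping motions per second, each motion takes roughly 0.2 seconds, so a spread time of about 0.066 seconds per wrist-to-elbow sweep is consistent with the forearm and hand appearing to move smoothly as one.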

Although I did that up-and-down chopping-motion experiment with my bion-body forearms, if, during one of my approximately 100 bion-body projections I had done a similar experiment with one of my bion-body hands, I’m sure I would have remembered it, but I never tried to do anything with my bion-body hands. Perhaps, given steps 3 and 7 for skin-cell bions, my bion-body fingers—depending on how my physical hands were lying on my bed at the beginning of my bion-body projection, and depending on the actual value of use_this_send_distance in step 3—were typically, in effect, stitched together because one or more skin-cell bions in one finger along the length of that finger each had for its bion1, bion2, or bion3 a skin-cell bion in an adjacent finger, and as a result each finger was not capable of independent movement because of step 7. Also, ignoring for a moment what was just said about my projected bion-body fingers possibly being stitched together, perhaps my mind simply didn’t have any programming to move my bion-body fingers independently of each other, or to move them at all, and this lack of programming is reasonable because my projected bion-body’s hands, being composed of cell-controlling bions, are essentially useless during a bion-body projection, and cannot touch or handle the physical objects that our physical hands can handle.

The consideration stated in the previous paragraph, about adjacent physical structures (fingers) being, in effect, stitched together, makes me think that to guarantee that the limbs in the afterlife bion-body are not stitched together, nor one’s arms stitched to one’s sides, it would be best, in the case of an elderly person who is known to be dying, to do the following shortly after his breathing has stopped (after the last breath, the heart will typically stop a few minutes later, and then, by my guess, probably about five minutes after the heart has stopped, that person’s mind will run its procedure to form its afterlife bion-body): After his last breath, lay that person flat on his back with his feet about 12 inches (30 centimeters) apart from each other, and the arms laid at that person’s sides but separated from the body’s sides by at least an inch. The person’s head should also be turned if needed to face upward toward the ceiling (or sky if outside). Doing all this will probably give that person an afterlife bion-body that he will be the most psychologically comfortable with inhabiting. However, if a person’s physical body, when that person’s afterlife bion-body formed, was positioned in such a way that the afterlife bion-body has its legs stitched together and/or its arms stitched to the bion-body’s sides, then, although not ideal, my guess is that this won’t have too large an impact on that person’s experience of the bion-body stage of the afterlife. The reason is that, assuming the third-eye and third-ear are activated, a person in his afterlife bion-body, after he gets used to the fact that he can see and hear the physical world and is in a ghost-like body that cannot make contact with anything in the physical world, is going to be more focused on where he can go and what he can see and hear in the physical world than on whether or not he can move his bion-body limbs, which cannot make contact with anything in the physical world anyway. See section 6.3 for more discussion of the bion-body stage of the afterlife.

5.3 Lucid-Dream Projections ~ Oliver Fox

Regarding out-of-body experiences, many good accounts have been written in English. Many people have had isolated out-of-body experiences, and some of these experiences have been collected and published by researchers. However, there are also books written by individuals who have had many out-of-body experiences; such people are called projectionists, because they are self-aware while projected away from their physical bodies—and they remember their experiences long enough to record them.

In 1920, the personal account of Hugh Calloway—who used the pseudonym Oliver Fox—was published in a British journal. About two decades later he wrote the book Astral Projection, which recounted his experiences more fully.[37] Fox was a lucid dreamer.

Fox had his first lucid dream at the age of 16, in 1902. He dreamed he was standing outside his home. In the dream, the nearby ocean was visible, along with trees and nearby buildings; and Fox walked toward his home, and looked down at the stone-covered walkway. Although similar, the walkway in the dream was not identical in appearance to the real-life walkway that it imitated. During the dream, Fox noticed this difference and wondered about it. The explanation that he was dreaming occurred to him, and at that point he became self-aware. His dream ended shortly afterward.

After that first lucid dream, lucid dreaming became a frequent occurrence for Fox. He would be asleep and dreaming, and at some point he would become conscious within the dream. Fox noted two interesting things about his lucid dreams: he could move about within the dream, such as by gliding across an apparent surface; and the substance that formed the objects in the dream could be molded by thought.

Fox’s lucid dreams were typically short, and he did his best to prolong them. But he would feel a pain in his dream-head, and this pain signaled the need to return to his body. As this initially weak pain grew, he experienced a dual perception consisting of his dream sensations and his body’s sensations. A sort of tug-of-war resulted, with the body winning.

Unlike Fox, most lucid dreamers never report having a choice about returning to their body: at some point the lucid dream just ends without warning, and the dreamer awakes. Presumably, in Fox’s case, the perceptions he felt of his physical body were communicated (using the learned-program send() statement) to his mind (his soliton’s owned bions) by the same brain bions that communicate sensory signals to his mind when he is awake in his body. Similarly, communications from the mind to brain bions can ultimately affect the body, as demonstrated by sleep-lab experiments in which the physical body shows various movements and other responses that correlate with events in the lucid dream.[38]

Fox had wondered what would happen if he resisted the warning-pain signal and delayed the return to his body. He decided to experiment. About a year after his first lucid dream, he became self-aware in another of his walk-around-the-town dreams. He felt the warning pain and ignored it. The dual perception occurred, and he successfully willed to retain the dream perception. Next, the growing pain in his dream-head peaked and then disappeared. At that point Fox was free to continue his lucid dream.

As Fox’s lucid dream continued, he soon wanted to awake, but nothing happened; his lucid dream continued. Fox then became fearful, and tried to concentrate on returning to his body. Suddenly, he was back in his body, but he found himself paralyzed. His body senses were working, but he was unable to make any movements. Fortunately, this condition did not last long, and he was soon able to move again. However, immediately afterward he was queasy, and he felt sick for three days. This experience deterred him for a while, but a few weeks later he again ignored the warning-pain during a lucid dream, and the same pattern resulted. He says the sickness was less this time, and the memory of the dream was lost. After this second experience, Fox no longer fought against the signal to return.

Fox remarks that years later he learned that if he had only relaxed and fallen asleep when he was paralyzed in his body, then the subsequent sickness would not have occurred.

During his teens and twenties, Fox continued to have lucid dreams, and he noticed a pattern. Often, his lucid dreams never reached the warning-pain stage, because he would do something that would cut the dream short and cause him to awake. Fox gives some examples of what he means: After ordering a meal in a restaurant and then eating it, trying to taste the food he was eating caused him to awake. While watching a play in a theater, a growing interest in the play would cause him to awake. If Fox encountered an attractive woman, he could converse with her, but when he thought of an embrace or such, he would awake. In general, to prolong a lucid dream, “I may look, but I must not get too interested—let alone touch!”[39]

The lucid dreamer is just his awareness/mind, separated from his physical body and its cell-controlling bions. In the lucid-dream environment, he can move to different locations, interact with others who are in the same condition as himself (being just an awareness/mind), and see appearances, whether of individuals or objects, constructed of d-common atoms. When Fox consciously wanted to do something during a lucid dream that required his physical body with its cell-controlling bions, his mind, responding to that conscious want, returned him to his physical body. Although I can no longer remember any specific examples from my own approximately 400 lucid dreams (too many years have passed; I am writing this sentence in 2016), I had lucid dreams end for the same reason as Fox’s: consciously trying to do something during a lucid dream that was not doable for me without my physical body and its cell-controlling bions.

Seeing and hearing are the two senses of the lucid dreamer that work just as well in the lucid dream as they do in the physical body. The typical lucid dreamer sees clearly in color, and can hear and talk by means of telepathic communication. In contrast, the senses of taste, touch, and smell are noticeably absent. Any conscious attempt, either explicit or implicit, to use any of these three missing senses during a lucid dream causes a return to one’s physical body (for example, wanting sex with a woman one is currently with in a lucid dream is an implicit wanting to touch; although I no longer remember any details, I know this happened to me a number of times).

Instead of being an idle spectator watching the world go by, the lucid dreamer is frequently in motion, whether moving slowly or more quickly. However, the most spectacular motion for the lucid dreamer is a sudden acceleration to a great speed; this sudden acceleration may begin when the lucid dreamer is at a relative standstill, or already moving. As the acceleration quickly builds, the sight goes black, and there may be a loss of consciousness. The next thing the lucid dreamer is aware of is a change in the location of the lucid dream. Apparently, the sudden acceleration happens when a large distance has to be traveled.

The lucid-dream literature has many lucid-dream stories in which transcontinental and transoceanic distances are quickly traveled by the lucid dreamer. Thus, there is reason to believe that the awareness/mind (the soliton and its owned bions)—when it is by itself and not, in effect, tied down with its physical body or a projected bion-body—can quickly accelerate to a speed of roughly several hundred kilometers per second.

Although the motion of the lucid dreamer is an impressive clue that there is an external dream world, additional evidence comes from encounters with persons known to the lucid dreamer. These lucid-dream encounters are sometimes independently confirmed when the awakened dreamer later talks with the encountered persons. For example, Fox tells the following story: He was discussing dreams with two friends. The three of them then agreed to meet together that night in their dreams. Fox remembered meeting only one friend in a dream that night. The next day the three friends compared experiences. The friend whom Fox met in the dream also recalled meeting Fox. Both Fox and this friend agreed they never saw the third friend, who claimed to have no memory of his dreams that night.

The experience that most convinced Fox that there is an external dream world involved a girlfriend of his, when he was 19, in the summer of 1905. Fox had talked about his lucid-dream experiences with her, but her attitude was that such things were wicked. Fox tried to overcome her objections by claiming that she was ignorant and he could teach her. However, her reaction was that she already knew about such things, and could appear in his room at night if she wanted to. He doubted her claim, and she became determined to prove it. That night, Fox had what he calls a False Awakening, in which he became self-aware very close to his physical body, having both his lucid-dream vision and lucid-dream hearing. While he was in this condition, his girlfriend made a sudden, dazzling appearance in his bedroom. She appeared fully formed, wearing a nightdress. She said nothing, but looked about the room. After a while, Fox tried to speak to her, but she disappeared, and Fox awoke.

The following day, Fox met with his girlfriend to compare experiences. She greeted him enthusiastically with the news of her success. Without having been in his room before, she successfully described both its appearance and content. The description was sufficiently detailed to convince Fox of the reality of her visit. Fox remarks that his girlfriend said his eyes were open during the visit.

In describing his projections, Fox often shows an apparent confusion between dream-world objects and physical objects. For example, he seems to think his girlfriend saw his physical bedroom, and that is why he makes the remark about her saying that she saw his eyes open during the visit. He is quite sure that his physical eyes were closed. He finally concludes that she probably saw the open eyes of his dream appearance.

It seems to be a rule that the things seen during a lucid dream are objects composed of d-common atoms. Probably when Fox’s girlfriend visited him that night, she was having a lucid dream and both of them were actually in a d-common replica of his physical room, which may have actually been much smaller in size than his physical room (see the discussion about the size of things seen during a lucid dream, in subsection 5.1.2).

In a lucid dream, d-common objects often duplicate the shape and coloring of physical objects.[40] For example, the appearances of other people who are known to oneself from one’s daily life, seen during a lucid dream, are typically imitations of their physical appearances. Assuming Fox’s girlfriend was having a lucid dream when she made her appearance that night, then the only part of her that was in that room was her awareness/mind (her soliton and its owned bions).

A valid question is this: what causes d-common atoms to assume shapes and colorings that imitate physical objects? Probably what shaped, colored, and clothed Fox’s girlfriend during her visit was her mind, which constructed out of d-common atoms the appearance that Fox saw. The observed replica room was perhaps part of a larger replica house or building. Probably these replicas are constructed by the minds of the people who live there. The replica of Fox’s room was probably constructed by Fox himself, unconsciously.

Fox mentions the existence in the lucid-dream world of an entire city—an imitation London which he visited and explored. Besides imitation buildings that looked familiar, there were also buildings and monuments that Fox knew had no equivalent in the real city of London. Fox says that it was his experience that his repeated lucid-dream trips to the same town or city showed the same buildings and monuments—including those that had no counterpart in the real town or city.

Once made, a d-common object seems to remain in the same location, and retain its form—until a mind moves, changes, or destroys it. Although the actual manipulation of d-common atoms is normally done unconsciously, sometimes a lucid dreamer can consciously will a change in some nearby d-common object and see the change happen.

Despite an often similar appearance, there is no linkage between d-common objects and p-common objects. For example, an experiment that is often reported by lucid dreamers is that they successfully move some d-common object that they think corresponds to a familiar physical object; but once they are awake and check that physical object, they always find it unmoved.

Fox remarks how the memories of his lucid-dream projections were fleeting. To counter this, he would often write down an account of his projection as soon as he was awake. In his book, Fox wonders why such memories are not more permanent. Of course, for most people the memory of ordinary dreams is very fleeting, too. Occasionally a dream or lucid dream makes an impression on long-term memory, but that is the exception, not the rule. It seems that the learned programs that manage the mind’s memory, when deciding long-term retention, assign a comparatively low priority to both dreams and lucid dreams.


footnotes

[37] Fox, Oliver. Astral Projection. Citadel Press, Secaucus, 1980.

[38] LaBerge, Stephen. Lucid Dreaming. Ballantine Books, New York, 1987. pp. 82–95.

[39] Fox, op. cit., p. 44.

[40] Regarding the perceived shape and coloring of d-common objects, these conscious perceptions are perceptions of mental constructions that were made by the relevant learned programs concerned with vision; these learned programs process the sensory data for the d-common objects being seen. This is the same situation as with ordinary waking sight and p-common objects. In both cases, one consciously sees only a mental construction of what is there.

Given that d-common particles do not interact with p-common particles, the sensory data used to construct the perceptions of d-common objects must be different from the sensory data used to construct the perceptions of p-common objects. More specifically, photons (composing visible light) are p-common particles, and are not involved in the perception of d-common objects. Instead, the learned-program statement get_relative_locations_of_d_common_atoms() is, in effect, the sensory source for seeing d-common objects.


5.4 Bion-Body Projections ~ Sylvan Muldoon

Sylvan Muldoon was born in the USA in 1903, and spent his life in the Midwest. In November 1927, he sent a letter to Hereward Carrington, a well-known writer on paranormal subjects. Muldoon had read one of Carrington’s books, and he wanted to let Carrington know that he, Muldoon, knew a lot more about projection than did the sources Carrington used in his book. Carrington was so impressed by Muldoon’s letter that he wrote him back and invited him to write a book which he, Carrington, would edit and write an introduction for. The result was The Projection of the Astral Body, published in London in 1929.[41]

Overall, lucid dreams are more common than bion-body projections. But Muldoon had only bion-body projections. And his projected bion-body was much more substantial (having many more cell-controlling bions) than that of the typical bion-body projectionist, who has lucid dreams more often than bion-body projections. In its main elements, Muldoon’s account is consistent with the many other accounts in the literature of bion-body projections. The main elements of agreement are: a complete and unchanging bion-body that comes out of the physical body and then later reenters it; an inability to contact or otherwise affect physical objects; and the relatively short duration of the projection experience, sometimes punctuated by brief returns to the physical body. Where Muldoon’s account differs from the standard account, each of the differences is attributable either to the greater density of his projected bion-body, or to the presumed details of whatever learned programs regulated his projections.

Muldoon was only 12 years old when he had his first projection experience. His mother, who was interested in spiritualism, had taken him to a camp of spiritualists in Iowa. Muldoon slept in a nearby house that night, with other persons from the camp. He had been asleep for several hours when he awoke slowly. At first he did not know where he was, and everything was dark. Eventually he realized he was lying on his bed, but he could not move. Muldoon soon felt his whole body vibrating, and he felt a pulsing pressure in the back of his head. Also, he had the sensation of floating.

Muldoon soon regained his sight and hearing. He then realized that he was floating about a meter above the bed. This was his bion-body floating, although he did not yet realize it. Muldoon still could not move. He continued to float upward. When his bion-body was about two meters above the bed, his bion-body was moved upright and placed onto the floor standing. Muldoon estimates he was frozen in this standing position for about two minutes, after which the bion-body became relaxed and Muldoon could consciously control it.

The first thing Muldoon did was turn around and look at the bed. He saw his physical body lying on it. He also saw what he calls a cable, extending out from between the eyes of his physical body on the bed. The cable ran to the back of his bion-body head, which is where he continued to feel some pressure. Muldoon was about two meters from his physical body. His bion-body, being very light, was not firmly held down by gravity, and it tended to sway back and forth despite his efforts to stabilize it.

Not surprisingly, Muldoon was both bewildered and upset. He thought he had died—so he resolved to let the other people in the house know what had happened to him. He walked to the door of the room, intending to open it, but he walked right thru it. Muldoon then went from one room to another and tried to wake the people in those rooms, but was unable to do so. His hands passed thru the physical bodies he tried to grab and shake. Muldoon remarks that despite this inability to make contact with physical objects, he could still see and hear them clearly. Muldoon says that at one point during his movements in the house he both saw and heard a car passing by the house. Muldoon also says that he heard a clock strike two. Upon looking at the hands of the clock he verified that it was two o’clock.

Muldoon gave up trying to wake the other people in the house. He then wandered around in the house for about fifteen minutes. At the end of that time he noticed that the cable in the back of his head was resisting his movements. The resistance increased, and Muldoon soon found himself being pulled backward toward his physical body, which was still lying on its bed. He lost conscious control of his bion-body, which was automatically repositioned, as before, above his physical body. The bion-body then lowered down, began vibrating again, and reentered the physical body. Upon reentry, Muldoon felt a sharp pain. The projection was over. Muldoon concludes his story by saying, “I was physically alive again, filled with awe, as amazed as fearful, and I had been conscious throughout the entire occurrence.”[42]

Over the years that followed, Muldoon says that he had several more projections similar to the first one, in which he was conscious from the very beginning of the projection until its very end. In addition, Muldoon says he had several hundred other projections, where he was conscious for only part of the time during the projection. Typically, he would become conscious after the bion-body had moved into a standing position a short distance from his physical body. As far as he could tell, the order of events established by his first projection experience was always maintained. His situation, in terms of his sight, hearing, bion-body, and cable connection, was the same from one projection to the next.

The cable that appears to connect the projected bion-body with the physical body is more commonly called a cord, and has been noticed by some, but not all, bion-body projectionists (in my own bion-body projections, I never saw a cord of any kind). For those projectionists who do see a cord, what is this cord? Like the rest of the projected bion-body, the cord is composed of cell-controlling bions that were temporarily withdrawn from cells in the projectionist’s physical body.

The cord that Muldoon noticed during his first projection, was a common feature of his later projections. He often studied this cord when he was projected. For Muldoon, out to a somewhat variable distance of a few meters from his physical body, his cord remained thick. As long as the cord appeared thick, his bion-body was strongly influenced by his physical body. Within this range, Muldoon felt happenings to his physical body reproduced in his bion-body. For example, once a pet dog jumped on the bed and snuggled against Muldoon’s physical body while he was projected within range. He felt this dog as though it were pressing against his bion-body. Besides feeling his physical body’s sensations, Muldoon could also control its breathing when within range. As Muldoon moved further away from his physical body, the cord became very thin, like a thread. Muldoon claims that the cord kept its threadlike thinness out to whatever distance he moved to—even to a distance of many kilometers.

During a bion-body projection, it often happens that at regular intervals the bion-body briefly returns to the physical body. During each such brief return, a kind of pumping sensation is sometimes felt: first, the bion-body quickly reenters the physical body; then, during the brief period of a few seconds when the bion-body is with the physical body, the projectionist may feel the whole bion-body pumping. Muldoon and other projectionists have interpreted these brief returns as a recharging, or reenergizing, of the projected body. This is the fuel-is-low, batteries-are-run-down kind of explanation. However, a more correct and detailed explanation is given in subsection 5.2.3: in summary, these brief returns to the physical body prolong the total time of the bion-body projection, by replacing those bions in the projected bion-body that are returning to their cells, or will soon be returning to their cells, with other bions from the physical body that are currently available to join the bion-body projection.

The consistent shape of the bion-body suggests its origin. The bion-body is always a match of the physical body in terms of its general outline (the reason for this is also given in subsection 5.2.3). No projectionist ever reports an incomplete bion-body, or—aside from ordinary movement such as the bending of limbs—a bion-body that alters or transforms its shape. This is different from what is possible during a lucid dream. The apparent body of a lucid-dream projectionist is constructed on the spot out of d-common atoms, which have no connection to the projectionist’s physical body. Thus, lucid-dream projectionists sometimes report having no body—or an incomplete body, or a nonhuman body. Also, they sometimes report seeing someone else undergo a transformation of their apparent human form. However, such variability is never reported for the bion-body.

The typical bion-body projectionist finds himself in a flimsy bion-body. These projectionists make no connection between physical health and bion-body projections—unless to claim that good health promotes projections. Muldoon, of course, was not the typical bion-body projectionist. When compared to other projectionists, his bion-body was consistently dense; and his projections were sometimes long lasting, such as the approximately twenty-minute duration of his first projection. It is interesting that Muldoon takes a very decisive position on the relationship between physical health and projection ability. He claims that sickness promotes projection, and health has the opposite effect. His basis for this claim was his own experience: Muldoon was often sick. According to Carrington, Muldoon wrote his book from his sickbed.

Muldoon’s identification of sickness with projection ability may be accurate in Muldoon’s case. Muldoon’s opinion was that sickness comes first, and then the projections follow. However, Muldoon’s projections kept many of his physical body’s cell-controlling bions away from their cells, and sometimes for comparatively long periods of time. Therefore, it seems more reasonable to suppose that the projections came first—followed by the sickness.

The bion-body is known to vibrate at times. The typical literature of the 20th century has an erroneous explanation for this vibration, based on the premise that there are different invisible planes of existence. The phrase planes of existence is a figure of speech used in the literature to suggest separateness. According to this erroneous explanation, these planes operate at different frequencies, and the vibration rate of the bion-body can match these different frequencies. Thus, according to this explanation, the vibration rate of the bion-body determines which of these invisible planes becomes visible and accessible to the projectionist.

There are three reasons why this erroneous explanation came about. First, bion-body projectionists report that when they feel the vibrations increasing in frequency, then separation of the bion-body from the physical body will happen. Conversely, when they feel the vibrations decreasing in frequency, then reassociation of the bion-body with the physical body is likely. Thus, it was argued that there is a correlation between a low vibration frequency and the physical plane of existence. Second, projectionists often report experiences that are very different from each other. It was argued that this suggests different planes of existence. For example, lucid dreams are happening on one plane, and bion-body projections are happening on a different plane. Third, vibrations are easily described with mathematics. Thus, a vibrational model of reality appealed to those who were influenced by the mathematics-only reality model.

The correlation of decreasing frequency with physical reassociation, and increasing frequency with physical disassociation, suggests that when the bion-body is separated from the physical body, and the projectionist does not feel any vibration, the bion-body is nevertheless vibrating, but at a frequency too high to be felt or otherwise noticed. Probably this vibration of the bion-body is a consequence of the process that keeps the bion-body together when it is away from the physical body. However, regardless of the specific cause, the vibrations have nothing to do with tuning in alternate realities, as though the bion-body were a radio tuner or television tuner switching stations and channels, instead of being what it really is: a population of cooperating intelligent particles.

After the onset of the vibrations, Muldoon felt himself floating. As he was floating upward, his senses of hearing and sight became active. It is unusual that Muldoon could see and hear our physical world. Most bion-body projectionists do not have the dense bion-body that Muldoon had, and cannot see or hear our physical world when in their projected bion-body; but they can see their own projected bion-body—typically as a darkness-enveloped, grainy, gray-looking, wispy body—when they look at it. To try to understand what Muldoon’s senses were like, here are a few quotes:

When the sense of hearing first begins to manifest, the sounds seem far away. When the eyes first begin to see, everything seems blurred and whitish. Just as the sounds become more distinct, so does the sense of sight become clearer and clearer.[43]

As is often the case, everything at first seemed blurred about me, as though the room were filled with steam, or white clouds, half transparent; as though one were looking through an imperfect windowpane, seeing blurry objects through it. This condition is but temporary, however—lasting, as a rule, about a minute in practically all conscious projections.[44]

Once you are exteriorized, and your sense of sight working, the room, which was dark to your physical eyes, is no longer dark—for you are using your astral eyes, and there is a ‘foggish’ light everywhere, such as you see in your dreams, a diffused light we might call it, a light which seems none too bright, and yet is not too dim, apparently sifting right through the objects of the material world.[45]

The primary difference between Muldoon and most other bion-body projectionists was the greater density of Muldoon’s projected bion-body, and this greater density was enough to trigger activation of his mind’s third-eye and third-ear (see step 5 in subsection 5.2.3), allowing Muldoon to see and hear our physical world during his bion-body projections. The third-eye and third-ear exist as learned programs in one’s mind. I think it very likely that all humans have this third-eye and third-ear, and that humanity inherited these learned programs from the Caretakers (section 7.6) in the distant past when humanity began. But most of us humans will not consciously experience the third-eye and third-ear until we are in the afterlife (section 6.3).

The third-eye, when active, repeatedly calls the learned-program statement get_photon_vectors(), which detects photons and constructs a visual image from those photons.[46] The third-ear, when active, repeatedly calls the learned-program statement get_relative_locations_of_physical_atoms() to measure sound waves in air.[47]

Assuming we each have the third-eye and third-ear in our human minds, one might wonder why they never activate when we are in our physical bodies. And especially in the case of people who are blind and/or deaf, why does this hidden capability not activate so that they can see and hear? The simple answer is that one’s awareness/mind is located inside one’s skull when one is in one’s physical body. This means that one’s awareness/mind is surrounded by physical matter that blocks all the light (photons) coming from outside one’s physical head, and without that light one’s third-eye cannot see our physical world. Likewise for one’s third-ear, which needs to be surrounded by air to hear the sounds in that air.

In general, during an out-of-body experience, more than one vision system may be operating simultaneously (and this is also true during the afterlife). For example, during my one dense bion-body projection (described in subsection 10.1.1), I saw simultaneously both a part of my dense bion-body and a part of my physical room (the parts that were within my field of view; presumably, whenever two vision systems are, in effect, operating simultaneously, they each use the same direction-of-view vector so that they each have the same field of view). Both were seen simultaneously from the vantage point of my awareness/mind located in my bion-body head. Thus, my vision of bions was operating at the same time as my third-eye. In terms of computation cost, note that combining, aka compositing, two images into a single image (in this case, compositing the current image from my vision of bions with the current image from my third-eye) is a low-cost operation whose cost is directly proportional to the number of pixels in the composite image, assuming that the two images to be composited are the same size. Assuming that both vision systems in one’s mind, when running, are generating many images per second, so as to give continuous, smooth vision of movement, there will be many composite images generated per second that one’s mind sends to one’s awareness, so that one consciously sees in the direction one is currently looking.
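
As a rough illustration of why this compositing is low-cost, here is a minimal Python sketch; it is my own illustration, not the mind’s actual learned programs, and the merge rule (take the brighter pixel where both systems see something) is an arbitrary choice:

    # Merge two same-size grayscale images, pixel by pixel. Each image is a
    # list of rows of brightness values (0.0 to 1.0); None means "nothing
    # seen here by this vision system".

    def composite(image_a, image_b):
        """One pass over the pixels, so the cost is directly proportional
        to the number of pixels in the composite image."""
        assert len(image_a) == len(image_b)   # same size is assumed
        result = []
        for row_a, row_b in zip(image_a, image_b):
            assert len(row_a) == len(row_b)
            result.append([b if a is None else a if b is None else max(a, b)
                           for a, b in zip(row_a, row_b)])
        return result

    bion_vision = [[0.2, None], [None, 0.5]]   # e.g., part of the bion-body
    third_eye   = [[None, 0.9], [0.4, None]]   # e.g., part of the room
    print(composite(bion_vision, third_eye))   # [[0.2, 0.9], [0.4, 0.5]]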

Also note that in my one dense bion-body projection, it was already early morning after sunrise, and sunlight was streaming into my room through the closed window blinds. My third-eye vision was working well from the outset, and there was none of the initially “blurred and whitish” vision that Muldoon describes above. However, this difference between us may simply be because of the difference in light levels: Muldoon describes a dark nighttime room, whereas my own situation was an early-morning sunlit room.

During Muldoon’s first projection, he tried to make contact with the other people in the house. He saw their physical bodies lying in bed, but his bion-body hands passed right thru them. The reason he wasn’t able to make contact with physical matter is the same reason that none of us humans can make contact with physical matter during a bion-body projection: The projected bion-body is composed of cell-controlling bions that were withdrawn from the cells of one’s physical body. The learned programs for cell-controlling bions are specialized to only manipulate physical matter within their own cells and the very close environment around their cells. Thus, cell-controlling bions that are currently separated from their cells, such as during a bion-body projection, cannot affect physical matter because their learned programs have no programming to do so when cell-controlling bions are away from their cells.

Muldoon remarks how frustrated he was that he could never make contact with physical objects. In the many projections he had, his bion-body never made contact with a physical object while he was conscious. However, Muldoon believed that there were a few instances when his bion-body made contact with a physical object while he was unconscious. For example:

On the night of February 26, 1928, Muldoon had a serious stomach sickness, which caused him great pain. At near midnight he was overcome with pain and called out to his mother for help. She was asleep in an upstairs bedroom and did not hear him. Muldoon struggled out of bed, still calling, and he fainted from the pain and effort. He regained consciousness, only to struggle and faint again. The next time he regained consciousness, he was projected in his bion-body. His bion-body was moving without conscious control up the stairs, thru a wall, and into the room where his mother and small brother were sleeping. Muldoon saw both of them sound asleep on the bed. Then Muldoon lost consciousness for a brief period. Upon regaining consciousness, Muldoon saw his mother and small brother excitedly talking about being rolled out of bed by an uplifted mattress. After witnessing this scene, Muldoon’s bion-body was drawn back and reentered his physical body. Back in his physical body, Muldoon called again to his mother. This time she heard him and came downstairs. Ignoring that he was lying on the floor, she excitedly told him how spirits had lifted the mattress several times. And she was, of course, frightened by it.

Muldoon assumed that his bion-body moved that mattress while he was unconscious. However, being conscious or unconscious does not change any of the learned programs in the cell-controlling bions that composed his bion-body, and I do not believe that Muldoon’s bion-body moved that mattress. Instead, assuming that the event actually happened, other explanations are possible, including help from a Caretaker (section 7.6; the Caretaker bion-body has no cell-controlling bions in it, and can move physical objects such as that mattress). With this explanation, Muldoon going unconscious was the price he had to pay to receive that help, because, apparently, the Caretakers want to keep their occasional involvement in human affairs secret.

When projected in a bion-body, we humans cannot make contact with physical matter, but we can make contact with other projected bion-bodies. Most bion-body projectionists eventually have encounters with other bion-bodies. Struggles and fights are often reported. These encounters can be both frightening and painful. Muldoon gives one example of this kind of encounter:

In 1923, Muldoon listened to a conversation between his mother and another woman who lived in town. This other woman described what an awful man her husband, who had just died, had been. Because of the stories the woman told, Muldoon became angered against that man. That night Muldoon had a projection. Upon turning to look at his physical body, Muldoon was shocked to see the bion-body of the dead man talked about earlier in the day. Muldoon describes this man as having a savage look and being determined for revenge—and he quickly attacked the projected Muldoon. There was a fight and Muldoon was getting the worst of it—as well as being cursed at. However, the fight soon ended when Muldoon was drawn back into his physical body. Once he reentered his physical body, Muldoon no longer felt or heard the attack of his enemy. Muldoon remarks how his attacker clung to him and continued his attack while Muldoon was being slowly drawn back toward his physical body. However, the attacker was unable to prevent Muldoon’s reentry.

This chapter has considered in detail both lucid-dream projections and bion-body projections. A third kind of projection is covered in chapter 6.


footnotes

[41] Muldoon, Sylvan, and Hereward Carrington. The Projection of the Astral Body. Samuel Weiser, New York, 1980.

[42] Ibid., p. 53.

[43] Ibid., p. 233.

[44] Ibid., p. 255.

[45] Ibid., p. 204.

[46] Assume there is a learned-program statement get_photon_vectors(), and it has at least the following two parameters: the vector v_look_in_this_direction, which specifies the direction in which to look, and a parameter that specifies the wanted frequency range for the photons to be seen.

The design of the get_photon_vectors() routine, along with its associated code, is somewhat similar to the design of the push_against_physical_matter() routine and its associated code, but instead of the cylinder in 3D space that push_against_physical_matter() defines, get_photon_vectors() defines a circular truncated cone. For this circular truncated cone, the center-point of its smaller endcap is the current location in 3D space of this_bion which is calling get_photon_vectors(); in the code for get_photon_vectors(), this_CE's_XYZ is this center-point location.

Define the vector CTC_vector as the vector that runs from the center-point of the smaller endcap to the center-point of the larger endcap, and this vector has the same orientation—points in the same direction as, and is parallel to—vector v_look_in_this_direction. Regarding what the length of CTC_vector should be, and also the diameter of the smaller endcap and the diameter of the larger endcap, one must first consider how this circular truncated cone will be used:

  1. After the circular truncated cone is fully defined in get_photon_vectors(), then get_photon_vectors() sends out a special_handling_non_locate request message that has a short range: set the message’s send_distance to (the length of CTC_vector + the diameter of the larger endcap) so that the message will reach all computing elements within the volume of space defined by the circular truncated cone. The sent message-text includes the defined circular truncated cone and also the get_photon_vectors() parameter that specifies the wanted frequency range for the photons to be seen.

  2. The special handling at each computing element that receives the sent message is as follows:

    if the computing element currently holds a photon whose frequency is within the wanted frequency range,
    and the vector that gives the current direction of travel for that photon (this vector would be in the current state information for that photon) shows that that photon’s travel, assuming it continues uninterrupted in a straight line, will intersect the small endcap of the circular truncated cone (in effect, this small endcap is the pinhole opening of a pinhole camera),
    and the XYZ coordinate of this computing element that currently holds this photon is inside the circular truncated cone,
    then
        send a reply message back to the bion that sent the request message being replied to (set the send_distance of this reply message to the same send_distance value as in the request message being replied to); the reply message-text will include the photon’s current location in 3D space, its frequency, and the intersection point on the small endcap given that photon’s current direction of travel.
    end if

  3. Associated with the get_photon_vectors() routine, there is another routine that processes the reply messages, if any, that result from calling get_photon_vectors(). Assume that get_photon_vectors(), after defining the circular truncated cone, saves that definition into global variables that are visible to this reply-processing routine, because info about the small endcap, which is, in effect, the pinhole opening of a pinhole camera, is needed to process the reply messages. Then, for each received reply message, compute where that photon will intersect the viewing plane that lies behind, and is parallel to, the small endcap (by “viewing plane” is meant an imaginary plane where, in a physical pinhole camera, the sheet of recording film would lie flat against that plane). The end result is that for each reply message there will be a computed point on that imaginary viewing plane where that photon will hit, assuming its straight-line travel. After all the reply messages have been received in the time allowed for their reception (this allowed time would be about 2 × the estimated time for the request message sent by get_photon_vectors() to travel its send_distance distance), the output of this routine is, in effect, an image: at each pixel of that image, an average frequency value and a count of all the photons that intersected that pixel on the viewing plane. (A code sketch of this reply processing is given after this list.)
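
Below is a minimal Python sketch of the reply processing described in item 3. It is my own illustration under stated assumptions, with made-up names and numbers: the pinhole (the small endcap’s center) is placed at the origin, the direction of view is the +z axis, the viewing plane sits one inch behind the pinhole, and each reply is assumed to have already passed the computing elements’ checks in item 2:

    FOCAL = 25.4          # viewing plane one inch (25.4 mm) behind the pinhole
    PIXEL = 0.5           # assumed pixel size on the viewing plane, in mm
    WIDTH = HEIGHT = 64   # assumed image size, in pixels

    # Each reply message: the photon's location (x, y, z), its unit direction
    # of travel, and its frequency. Units are mm; the cone is at z > 0, and a
    # photon heading toward the pinhole has a negative z direction component.
    replies = [
        {"loc": (1.0, 2.0, 8.0),  "dir": (-0.11, -0.22, -0.97), "freq": 5.5e14},
        {"loc": (-3.0, 0.5, 9.0), "dir": (0.31, -0.05, -0.95),  "freq": 4.3e14},
    ]

    counts = [[0] * WIDTH for _ in range(HEIGHT)]
    freq_sums = [[0.0] * WIDTH for _ in range(HEIGHT)]

    for r in replies:
        (px, py, pz), (dx, dy, dz) = r["loc"], r["dir"]
        if dz >= 0:
            continue                     # not traveling toward the pinhole
        t = (-FOCAL - pz) / dz           # where the line crosses z = -FOCAL
        x, y = px + t * dx, py + t * dy  # straight-line hit on viewing plane
        col = int(x / PIXEL) + WIDTH // 2
        row = int(y / PIXEL) + HEIGHT // 2
        if 0 <= col < WIDTH and 0 <= row < HEIGHT:
            counts[row][col] += 1
            freq_sums[row][col] += r["freq"]

    # The output image: per pixel, a photon count and an average frequency.
    for row in range(HEIGHT):
        for col in range(WIDTH):
            if counts[row][col]:
                avg = freq_sums[row][col] / counts[row][col]
                print(f"pixel ({row},{col}): {counts[row][col]} photon(s), "
                      f"average frequency {avg:.3g} Hz")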

Referring back to the size of the circular truncated cone defined in get_photon_vectors(), that size would depend at least in part on the wanted frequency range for the photons to be seen. Assuming that the wanted frequency range is the ordinary light that we humans can see, an estimate for an optimal pinhole size is 0.17 millimeter in diameter, assuming that the viewing plane is one inch behind the pinhole (see the Selection of pinhole size section in the Wikipedia article Pinhole camera at https://en.wikipedia.org/wiki/Pinhole_camera). Regarding the other two lengths of the circular truncated cone, the length of CTC_vector and the diameter of the larger endcap: I assume that, in general, the larger the volume of the circular truncated cone, the greater the light-gathering ability of the pinhole camera. Offsetting this advantage of more light gathering with a larger volume is the disadvantage of potentially more reply messages and more processing; also, a larger volume means more space in which there may be intervening physical matter that will interfere with the straight-line path of those photons in that larger volume that would otherwise pass thru the pinhole. For the length of CTC_vector and the diameter of the larger endcap, perhaps one or both are optional parameters of the get_photon_vectors() statement. In this case, for the default values when these optional parameters are not given, and assuming that the wanted frequency range of photons is the ordinary light that we humans can see, my rough guess for the length of CTC_vector is about one centimeter, and for the diameter of the larger endcap about twice that (2 centimeters).

In the case of Sylvan Muldoon and his seeing physical objects while projected in his bion-body, presumably repeated calls of get_photon_vectors() by at least one of his mind bions (his mind bions are his soliton’s owned bions; at least two of his mind bions would be involved if his third-eye had stereoscopic vision) provided the sequence of raw images that were then further processed by his mind, such as identifying any objects in those images, with the finished images and associated data then sent to Muldoon’s soliton so that he could consciously see, and know what he was seeing. For continuous vision there would be successive calls of get_photon_vectors(), each producing a single image. Given the size of the circular truncated cone and the resulting value for send_distance, to increase light gathering close to its limit, the target time interval between one call of get_photon_vectors() by a bion and that bion’s next call should be the time for a photon traveling at the speed of light to travel send_distance distance (the idea is to allow time for the photons that were in the circular truncated cone defined by the previous call to completely leave that cone and be replaced with new photons, if any, by the time the next call is made). Given this target time interval, the next call of get_photon_vectors() by that bion should be made as soon as both of the following are true: 1) the processing by that bion of the received replies, if any, from its previous call of get_photon_vectors() has completed; and 2) the target time interval computed for that previous call has elapsed since that previous call was made.

Assuming the previous paragraph, this maximizing of light gathering will result in a bion making very many calls of get_photon_vectors() in a very short time, and for each call one image is generated from the replies to that call’s sent request message. In this case, the compositing of many successive images into a single image is probably happening very early in the process. Assume that get_photon_vectors() has an optional parameter that allows setting, in effect, an exposure time. For example, calling get_photon_vectors() with an exposure time of 1/60th of a second would result in, in effect, as many calls of get_photon_vectors() without that optional parameter as could be made in 1/60th of a second, given the rule in the previous paragraph as to how long to wait before making the next call; all the replies that result during that 1/60th-of-a-second interval are composited into a single image that is returned by the top-level call of get_photon_vectors() that specified that exposure time.
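
Here is a minimal Python sketch of the pacing-and-exposure idea just described. It is an illustration only: one_image() is a made-up stand-in for a single call of get_photon_vectors() plus the processing of its replies, and the send_distance value is an assumption:

    import time

    SPEED_OF_LIGHT = 299_792_458.0   # meters per second
    send_distance = 0.05             # assumed send_distance, in meters
    # Target interval between calls: the time for a photon to travel
    # send_distance, so the cone's photons are replaced between calls.
    TARGET_INTERVAL = send_distance / SPEED_OF_LIGHT

    def one_image():
        """Stand-in for one call of get_photon_vectors() and the processing
        of its replies; returns one (fake, single-pixel) image."""
        return [[1]]

    def capture_with_exposure(exposure_seconds):
        """Make as many single captures as the pacing rule allows within the
        exposure time, then composite them all into one returned image."""
        images = []
        start = time.monotonic()
        while time.monotonic() - start < exposure_seconds:
            call_time = time.monotonic()
            images.append(one_image())
            # Wait out whatever remains of the target interval before the
            # next call (reply processing may already have used it up).
            remaining = TARGET_INTERVAL - (time.monotonic() - call_time)
            if remaining > 0:
                time.sleep(remaining)
        # Composite by summing the single-pixel fake images.
        composite = [[sum(img[0][0] for img in images)]]
        return composite, len(images)

    image, calls_made = capture_with_exposure(1 / 60)   # 1/60th sec exposure
    print(f"composited {calls_made} single captures into one image")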

Note that calling get_photon_vectors() has no effect on the physical world: Unlike a physical pinhole camera which blocks and absorbs light, calling get_photon_vectors() has no effect on any photons, neither blocking/absorbing them nor altering their speed, trajectory, or frequency.

[47] In our physical bodies, the sounds we hear with our ears originated as sound waves (rapid changes in air pressure that vibrate our eardrums). Measuring within a small volume of air, a sound wave passing thru that volume briefly changes the total number of air atoms in that volume (this is the pressure change). The learned-program statement get_relative_locations_of_physical_atoms() (subsection 3.8.8) can be used to detect sound waves in air, by counting, many thousands of times per second, the number of air atoms in a small volume surrounding the bion that is calling get_relative_locations_of_physical_atoms(). Regarding the parameters for calling get_relative_locations_of_physical_atoms(): specifying the maximum allowed value for use_this_send_distance, which is MAX_SEND_DISTANCE_ALLOWED_FOR_LOCATING_ATOMS (estimated at less than one-tenth of a millimeter), and specifying either the nitrogen atom or the oxygen atom as the atom to get replies from and thereby count (nitrogen and oxygen are the two major components of our atmosphere), would probably work well for measuring sound waves in our air. Also, the get_details_for_this_many_nearest_recipients parameter would be set to zero, because we only want a count of the atoms in that spherical volume of space (this count will be the returned ret_total_replies value).

The learned program in one’s mind that is calling get_relative_locations_of_physical_atoms() many thousands of times per second to sense sound waves would send this sensory data, after some initial processing, to other learned programs in one’s mind for further processing, such as identifying the sounds, if any, being heard, and constructing the messages that are sent to one’s soliton so that one can consciously hear, and know what one is hearing.
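
As an illustration of this sampling idea, here is a minimal Python sketch. The function count_air_atoms_nearby() is a made-up stand-in for calling get_relative_locations_of_physical_atoms() with the parameters described above and reading its returned ret_total_replies count; here it simulates a 440 Hz tone:

    import math

    SAMPLE_RATE = 40_000   # samples per second ("many thousands of times")
    DURATION = 0.005       # seconds of sound to capture

    def count_air_atoms_nearby(t):
        """Stand-in for get_relative_locations_of_physical_atoms(): returns
        a simulated count (ret_total_replies) of nitrogen atoms in the small
        spherical volume around the calling bion, at time t. A 440 Hz sound
        wave modulates a baseline atom count (all numbers made up)."""
        baseline = 1_000_000
        return baseline + int(2_000 * math.sin(2 * math.pi * 440 * t))

    # Sample the atom count many thousands of times per second; deviations
    # from the average count are, in effect, the sound-pressure waveform.
    counts = [count_air_atoms_nearby(i / SAMPLE_RATE)
              for i in range(int(DURATION * SAMPLE_RATE))]
    average = sum(counts) / len(counts)
    waveform = [c - average for c in counts]
    print("peak pressure deviation, in atoms:", int(max(waveform)))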


6 Awareness and the Soliton

This chapter considers awareness and the intelligent particle associated with awareness. There is also a discussion of the afterlife. The chapter sections are:

6.1 The Soliton
6.2 Solitonic Projections
6.3 The Afterlife
The Bion-Body Stage of the Afterlife
The Lucid-Dream Stage of the Afterlife
Transitioning from one Animal Type to a different Animal Type
6.3.1 Birds of a Feather, Flock Together
6.3.2 How a Mind Connects with a Brain before Birth

6.1 The Soliton

The soliton is an intelligent particle that has an associated awareness.[48] Each person has a single soliton. This soliton is the location of the separate, solitary awareness that each person experiences.

The soliton can only directly interact—by sending and receiving messages—with its owned bions, and a soliton’s owned bions were created when that soliton was created, and every soliton has the same number of owned bions. A soliton is, in effect, invisible to all particles in existence with the sole exception of its owned bions, and the computing-element program always keeps a soliton and its owned bions together, limiting how far away these intelligent particles can be from each other. A soliton’s owned bions collectively form that soliton’s mind.

The computing-element program, in effect, makes a soliton the ruler over its owned bions. These owned bions that collectively form the soliton’s mind are like a government that reports to, and receives orders from, its soliton ruler. The role of the soliton ruler is to be the final decision maker—to set goals for the government, and provide feedback and guidance to the government.

For us humans, the intellectual work of one’s human mind is done by the owned bions of one’s soliton (awareness). For example, our memories are stored in our minds (more specifically, stored in the memory of those owned bions whose learned programs have specialized them for memory storage and retrieval) and not anywhere in our awareness (more specifically, not anywhere in our soliton’s memory). Also, these owned bions provide all processing of the raw sensory data sent by brain bions to one’s mind, including, for example, all recognition work of what is being seen and/or heard and/or tasted and/or touched and/or smelled. In addition, these owned bions provide all language operations such as parsing sentences and constructing sentences, and they provide all voluntary motor control, sending messages to those brain bions that can trigger muscle movements. Moreover, they provide all problem-solving and creative services—and so on. For us humans, the total amount of intellectual product generated by one’s unconscious mind is much greater than the total amount of intellectual product that is brought to the attention of one’s awareness.[49] (See section 9.6 for a detailed discussion of the soliton’s limited capacity for receiving input from the human mind.)

As humans, we live in a physical world that is filled with a great variety of non-human animals. For those animals that can purposefully move about, one can assume that behind that purposeful movement is some number of bions that collectively form that animal’s mind. But for a given kind of animal, such as an ameba or an insect, does its mind have a soliton (an awareness)?

In the case of an ameba (already considered in section 2.2), what is it about an ameba that needs a complex mind with a soliton ruler to make the “tough” decisions about which direction it should move? Obviously, there are no “tough” decisions to make about which direction an ameba should move: a single cell-controlling bion with a learned program for detecting the chemical markers of food and then moving the ameba toward that food (and likewise for detecting repellent chemical markers and then moving the ameba away from them) is all that’s needed to make a good decision as to which direction to move. Thus, for such a simple animal as an ameba, there is no reason to believe that it has a soliton and its owned bions deciding how that ameba should move. An ameba, then, does not have an awareness.
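
To illustrate how little machinery such a decision needs, here is a minimal Python sketch of the kind of single-bion decision rule just described. The directions, chemical readings, and weights are all made up for the illustration:

    # Hypothetical single-bion rule for an ameba: sample chemical markers in
    # a few directions, then move toward food and away from repellents.
    DIRECTIONS = ["north", "south", "east", "west"]

    def sense(direction):
        """Stand-in for chemical sensing; returns the (food, repellent)
        marker strengths detected in the given direction (made-up data)."""
        readings = {
            "north": (0.9, 0.0),
            "south": (0.2, 0.0),
            "east":  (0.8, 0.7),   # food present, but so is a repellent
            "west":  (0.1, 0.1),
        }
        return readings[direction]

    def choose_direction():
        def score(direction):
            food, repellent = sense(direction)
            return food - 2.0 * repellent   # repellents weighted strongly
        return max(DIRECTIONS, key=score)

    print("move", choose_direction())   # with the data above: move north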

Higher up on the complexity scale are insects. I have been living in Florida USA for many years, and with its warm weather there are a lot of insects in Florida. Observing different kinds of insects, and how they react when I intrude into their space, I see immediate reactions that show no sign of any weighing of options by a soliton ruler. Given their senses, and four or more legs to control, and for some insects also wings, I have no doubt that the insect mind consists of many bions and many learned programs, but I can’t find any reason to believe that an insect is guided by a soliton/mind. Thus, an insect does not have an awareness.

For a given insect species, when an insect of that species develops, my guess is that its mind—the bions that will end up being that insect’s mind—will be a group of bions that initially don’t have the learned programs needed by that species, but, by means of the learning algorithms in the computing-element program, will copy, when those bions are asleep, the learned programs needed for that insect’s mind from some other nearby insect mind that currently has the learned programs for that species. I think it likely that this copying is from the mother insect’s mind to what will be the mind bions in each of her developing insect eggs while those eggs are still in her body.

Moving higher up on the complexity scale, at what point is an animal likely to have a soliton/mind instead of just a mind? My belief, which is partially based on my direct experience with dogs and cats, is that the members of many of the larger and more intelligent animal species—such as dogs and cats (and all their relatives, such as wolves and lions), elephants, dolphins, horses, apes, chimpanzees, and owls—each have a soliton/mind instead of just a mind. But exactly where does the dividing line fall? In other words, of those animal species that clearly have a complex mind with complex senses and complex movements, and can pause in judgment as to how to react to their environment, which, if any, lack a soliton? This is not an easy question. For example, do cattle have solitons, and are they consequently conscious? The mere fact that cattle are routinely butchered for food in many countries does not necessarily mean that these animals lack a soliton. (I’m a meat-eater myself, but if an animal has an awareness and is to be killed for food, its killing should be done as quickly and painlessly as possible.)


footnotes

[48] The word soliton is a word that I coined as follows: solit from the word solitary, and the on suffix to denote a particle.

[49] Because bions have no awareness, it follows that our human minds, being composed of bions, are always unconscious. However, in this book I often use the phrase unconscious mind when I want to emphasize something happening in a soliton’s mind that is not brought to the attention of that soliton (the soliton has no awareness of that happening).


6.2 Solitonic Projections

The existence of a solitary particle of awareness, the soliton, is supported by a rare projection experience. During an otherwise ordinary out-of-body projection, the following happens (I don’t know whether this happening is initiated by the mind or by the soliton): much of the ordinary communication from the mind to the soliton is, in effect, stopped, including all the normal sensory info that is sent to the awareness during an out-of-body projection. But, at the same time, the soliton remains awake (section 9.3). Call this projection experience a solitonic projection.

A solitonic projection can happen to someone without a prior history of out-of-body experiences, but this seems to be very rare. More likely, a solitonic projection can happen to experienced lucid-dream projectionists, and to bion-body projectionists. The Om meditation method, described in chapter 4, has the potential to elicit a solitonic projection.

The comparative rarity of solitonic projections is indicated by a reading of the principal Upanishads. There seems to be confusion when most of the principal Upanishads talk about the awareness—called the soul in the verses that follow. However, the Katha Upanishad appears knowledgeable on the subject:

Know thou the soul as riding in a chariot,
The body as the chariot.
Know thou the intellect as the chariot-driver,
And the mind as the reins.[50]

The above verse from the Katha Upanishad portrays the awareness as separate from the mind, just as the soliton is separate from its owned bions.

Though He is hidden in all things,
That Soul shines not forth.
But he is seen by subtle seers
With superior, subtle intellect.[51]

A certain wise man, while seeking immortality,
Introspectively beheld the Soul face to face.[52]

The above two verses from the Katha Upanishad are probably talking about a solitonic projection. The starting point of a solitonic projection is either a lucid-dream projection or a bion-body projection. With much of one’s mind no longer sending data to one’s awareness, the following is experienced: One finds oneself existing as a completely bodiless and mostly mindless awareness—residing at the center of a sphere. All the normal sensory inputs to the awareness are gone. But it is typically still possible to think to oneself, in which case some communication with one’s mind is still happening. Also, one cannot later recall a solitonic projection unless memories of it were stored at the time. Therefore, any remembered solitonic projection always involved some interaction between the soliton and its mind, which stored memories of that experience.

The perception of a surrounding spherical shell—around the point-like awareness—appears to be a common feature of a solitonic projection. Given the solitonic-projection data, it seems that the apparent shell is only a few centimeters in diameter. A more detailed description of this shell is given in subsection 10.1.2.

Solitonic projections are typically short in duration—lasting less than a minute, or perhaps only a few seconds. A solitonic projection that occurs during a lucid-dream projection typically begins during the sudden acceleration that occurs for long-distance travel. In contrast, a solitonic projection that occurs during a bion-body projection typically begins when the bion-body is stationary.

Based on the scanty reports of solitonic projections scattered in the literature, and also based on the two solitonic projections I myself had (see chapter 10), I conclude that the awareness is separate from the mind. Besides the separateness of the awareness, the point-like quality of the awareness—as experienced during a solitonic projection—is compatible with the awareness being associated with a single particle.

In our lives as humans, we cannot directly perceive another person’s awareness. Instead, we simply infer that other people each have an awareness, because one has an awareness oneself. This inability to directly perceive another person’s awareness also applies when out-of-body during lucid dreams and bion-body projections. In my own lucid dreams and bion-body projections, I never had any way of “seeing” another person’s awareness. We each experience our own awareness, but the awareness of others is always invisible, and we only infer it is there in them, because that is more reasonable than assuming that only oneself has an awareness. This invisibility of the awareness of others is one of the reasons why I concluded in section 3.8 that the computing-element program allows only a soliton’s owned bions to interact with that soliton, and none of the other particles in existence in the universe can interact with that soliton.


footnotes

[50] Hume, op. cit., p. 351.

[51] Hume, op. cit., p. 352.

[52] Hume, op. cit., p. 353.


6.3 The Afterlife

Intelligent particles have an unknown lifetime. In the case of a soliton/mind, one could, for example, imagine that when a soliton and its owned bions are created, a big integer named this_soliton's_accumulated_clock_ticks, in that soliton’s state information, is initialized to zero and serves as a timer counting how many clock ticks have elapsed since that soliton’s creation; when some specific large number of clock ticks has elapsed, that soliton and its owned bions are then, in effect, erased. (As a soliton moves thru 3D space, its this_soliton's_accumulated_clock_ticks value is updated in the same way that this_bion's_accumulated_clock_ticks is updated; see subsection 3.8.6 regarding this_bion's_accumulated_clock_ticks.) However, there is no real way to know how much longer a given soliton/mind will remain in existence. I’m guessing that because our sun has another four or five billion years of life left, each soliton/mind in our solar system also has that much time left. But this is just a guess, with the idea that when a new solar system comes into existence, a large population of solitons and their owned bions is created by the computing-element program—along with a large population of non-owned bions—to inhabit that solar system.
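As a minimal sketch of this guessed timer mechanism, consider the following Python fragment. Nothing in it is known to be true; the threshold value is an invented placeholder, and the identifier is renamed slightly because an apostrophe is not legal in a program identifier:

    # A sketch of the guessed soliton-lifetime timer described above.
    # ERASE_AFTER_TICKS is an invented placeholder value.

    ERASE_AFTER_TICKS = 10**18  # hypothetical erasure threshold

    class Soliton:
        def __init__(self):
            # this_solitons_accumulated_clock_ticks, initialized to zero
            # at the soliton's creation
            self.accumulated_clock_ticks = 0
            self.erased = False

        def on_clock_tick(self):
            # presumably run by the computing-element program each tick
            self.accumulated_clock_ticks += 1
            if self.accumulated_clock_ticks >= ERASE_AFTER_TICKS:
                # the soliton and its owned bions are, in effect, erased
                self.erased = True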

In the case of a physically embodied human, or other kind of physically embodied animal that has a soliton/mind, one’s life-cycle alternates between having a physical body and not having a physical body, and the time without a physical body is referred to as the afterlife. For us humans, assuming a typical death, there are two stages to the afterlife: first the bion-body stage of the afterlife, followed by the lucid-dream stage of the afterlife:

The Bion-Body Stage of the Afterlife

Subsection 5.2.3 gives a detailed procedure for a bion-body projection, and the first paragraph of step 1 states that the cell-controlling bions composing the afterlife bion-body have USID_1 value MY_CELL_IS_IN_STASIS, instead of USID_1 value MY_CELL_IS_ACTIVE. The main difference for the bion-body projection is that the MY_CELL_IS_IN_STASIS bions can be away from their cells for a long time—perhaps years or many years—without trying to return to their cells or trying to find newly formed cells to occupy, whereas the MY_CELL_IS_ACTIVE bions will return to their cells within minutes after leaving them (my estimate, based on my approximately 100 bion-body projections, is that MY_CELL_IS_ACTIVE bions will return to their cells after about one minute).

Assuming a typical death for one’s physical body, in which one’s heart has stopped beating and one’s physical body remains intact or mostly intact for about five minutes after one’s heart has stopped, then a dense afterlife bion-body will form, having the same size and shape as one’s human body, and one’s awareness/mind will have this afterlife bion-body as the first stage of its afterlife existence.[53],[54],[55]

The bion-body stage of the afterlife for one’s awareness/mind will last until either one’s awareness/mind decides to abandon its afterlife bion-body, or the time limit—whatever that time limit is—expires for the MY_CELL_IS_IN_STASIS bions composing that afterlife bion-body, at which point they change their USID_1 value to HAVE_NO_CELL and leave that bion-body to find and occupy newly formed cells. Either way, whether because one’s awareness/mind has abandoned its afterlife bion-body, or because the bions composing one’s afterlife bion-body have reached their time limit and have left that bion-body, the first stage of the afterlife for one’s awareness/mind ends.

When a bion in the afterlife bion-body changes its USID_1 value from MY_CELL_IS_IN_STASIS to HAVE_NO_CELL, assume that that bion stops running whatever other learned programs it might be running, and instead just runs the learned program(s) that will allow it to find and occupy a newly formed cell. Among other things, this means that the bion stops running for itself step 7 in the procedure given in subsection 5.2.3, which was keeping that bion with that afterlife bion-body. Once that bion moves away from that afterlife bion-body, it may, in effect, drag that afterlife bion-body after itself, because of the other bions in that afterlife bion-body that are still running step 7. However, it is probably true that many or all of the bions composing one’s afterlife bion-body have the same time limit for how long a human-body-cell’s bion can have MY_CELL_IS_IN_STASIS set as its USID_1 value before that value is changed to HAVE_NO_CELL. Assuming that all the bions composing one’s afterlife bion-body changed their USID_1 value to MY_CELL_IS_IN_STASIS within a few minutes of each other at the time of physical death, and assuming the same time limit for all of those bions, then, after that time limit has elapsed for the first bions that had changed their USID_1 value to MY_CELL_IS_IN_STASIS, that entire afterlife bion-body will completely disintegrate within a span of a few minutes, as each of its composing bions reaches its time limit, changes its USID_1 value to HAVE_NO_CELL, and leaves that bion-body to find and occupy a newly formed cell. This disintegration will happen regardless of whether or not one’s awareness/mind is still, in effect, inhabiting that bion-body.
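The USID_1 transitions just described amount to a small state machine in each cell-controlling bion. Below is a minimal sketch; only the three USID_1 values and the order of the transitions come from the text above, and the stasis time limit is an invented placeholder:

    # A sketch of the USID_1 state transitions described above.

    MY_CELL_IS_ACTIVE = 1
    MY_CELL_IS_IN_STASIS = 2
    HAVE_NO_CELL = 3

    STASIS_TIME_LIMIT_TICKS = 10**15  # hypothetical ("perhaps years or many years")

    class CellControllingBion:
        def __init__(self):
            self.usid_1 = MY_CELL_IS_ACTIVE
            self.ticks_in_stasis = 0

        def on_physical_death(self):
            # within a few minutes of physical death, the cell is in stasis
            self.usid_1 = MY_CELL_IS_IN_STASIS
            self.ticks_in_stasis = 0

        def on_clock_tick(self):
            if self.usid_1 != MY_CELL_IS_IN_STASIS:
                return
            self.ticks_in_stasis += 1
            if self.ticks_in_stasis >= STASIS_TIME_LIMIT_TICKS:
                # stop running other learned programs (including step 7,
                # which was keeping this bion with the afterlife bion-body)
                # and go find a newly formed cell to occupy
                self.usid_1 = HAVE_NO_CELL

Because all the bions entered stasis within a few minutes of each other, and (by assumption) share the same limit, they all cross the threshold within a few minutes of each other, which is the rapid disintegration described above.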

As far as I know, the bion-body stage of the afterlife typically has a short duration of a few weeks or months, and I assume that the reason for this short duration is voluntary abandonment of the afterlife bion-body by its occupying awareness/mind. Reasons to abandon the afterlife bion-body include the possibility of unwanted body feelings, such as pain, while in it: Assuming one still has one’s allocation for body feelings (section 9.7) during one’s bion-body stage of the afterlife, one’s mind has the potential to send to one’s awareness unwanted body feelings regarding one’s afterlife bion-body, such as the pain feeling, at one or more times during one’s time in that bion-body, even though one’s afterlife bion-body—unlike one’s physical body—has no needs and cannot be damaged. Whether any unwanted body feelings happen to a typical person during his time in his afterlife bion-body, I don’t know; I have no data. A possible scenario in which one’s mind might send unwanted body feelings to one’s awareness is one’s afterlife bion-body being attacked by one or more other bion-bodies, such as when the projected Sylvan Muldoon was attacked by a recently deceased neighbor. However, for the typical person, fights during the bion-body stage of the afterlife can probably be avoided, just as physical fights can typically be avoided during our human lives (this assumes civilized people).

Regarding feeling pain during a bion-body projection, I don’t recall feeling pain during any of my bion-body projections, although I do remember feeling the swirling of particles thruout the interior of my dense bion-body, including in the arms and legs of my dense bion-body, during my one dense bion-body projection (subsection 10.1.1). Thus, I had a conscious feeling that my awareness perceived as being localized in my projected bion-body, and this is similar to how one consciously feels pain in one’s physical body: in a typical instance of feeling body pain, one’s awareness perceives that pain as being localized to a specific spot on or in one’s physical body. Thus, when another bion-body projectionist says that he felt pain in his projected bion-body, I have no good reason to doubt him. For example, if a bion-body projectionist says that he felt pain while being attacked by another projected bion-body, then my guess is that his mind, by analogy with the protective needs of his physical body, was sending the pain feeling to his awareness so as to, in effect, motivate his awareness to defend and protect his projected bion-body, even though his projected bion-body is invulnerable to any damage from an attack by another projected bion-body.

During the bion-body stage of the afterlife, assuming one’s bion-body has sufficient density to have triggered the activation of one’s mind’s third-eye and third-ear (subsection 5.2.3, step 5), one can see and hear the physical world, and one’s vision of bions will also be operating at the same time as one’s third-eye vision, because one is in a bion-body (see subsection 5.2.2 regarding vision of bions). With both vision systems operating simultaneously, their respective generated images are composited together and then sent to the awareness to be consciously seen (section 5.4). The result, which Sylvan Muldoon experienced, and which I experienced during my one dense bion-body projection, is seeing simultaneously both a part of one’s bion-body and a part of the physical world (the parts that are within one’s current field of view).

However, even though one will be able to see and hear the physical world in one’s afterlife bion-body, one won’t be able to make contact with, and move, physical objects, because, as already discussed in section 5.4 regarding the projected bion-body of humans and other animals, each such bion-body, being composed of cell-controlling bions, lacks the programming in the learned programs of those cell-controlling bions that would allow that bion-body to move or otherwise disturb any of the physical objects around it.

Regarding voluntary abandonment by an awareness/mind of its afterlife bion-body, and also regarding what others have written in the past about how long a person (awareness/mind) remains as an Earth-bound ghost after he dies: One can assume that the ghost of a person is that person’s afterlife bion-body, and that being Earth-bound means that that person (awareness/mind) is still inhabiting his afterlife bion-body and is using his mind’s third-eye and third-ear to observe the physical human world that he was recently a part of before dying, which means that he must remain close to the physical environment of humans—thus the term Earth-bound—so that he can see and hear physically alive people and their physical environment (section 5.4 describes in detail how the third-eye and third-ear work, by seeing physical light, aka photons, and hearing physical sounds). Because the third-eye sees by means of physical light, the Earth-bound ghost has the same limitations in seeing physical things as someone who is still in his physical human body. For example, to see what is happening in a closed, lit room without windows, the Earth-bound ghost must be inside that room, because physical light is blocked by the floor, ceiling, and walls of that room. The situation is similar in the case of physical sounds, because physical obstructions, such as walls, lessen the sounds that pass thru them. Thus, to hear a conversation in a closed room between two physically embodied persons, the Earth-bound ghost, just like when he still had his physical body, can best hear that conversation by being in that room with those two persons.

With the assumptions of the previous paragraph as to the meaning of the terms ghost and Earth-bound, it is probably true, as others have written in the past, that, on average: A young person is likely to remain Earth-bound longer than an elderly person, the reason being that an elderly person has already had a full human lifetime and is more likely to voluntarily abandon his afterlife bion-body sooner. Also, a person who had a violent death is likely to remain Earth-bound longer than a person of the same age who had a peaceful death, because a person who lost his life to violence probably has more reasons—reasons related to his death—to remain an observer of the physical human world, and thus to delay any voluntary abandonment of his afterlife bion-body.

The Lucid-Dream Stage of the Afterlife

Regardless of whether the abandonment of one’s afterlife bion-body was voluntary due to conscious or unconscious choice, or involuntary due to the disintegration of one’s afterlife bion-body, the following is true: When one no longer has one’s afterlife bion-body and is just an awareness/mind without a body, one’s mind changes conscious vision and hearing (the vision and hearing that is sent to the awareness) so that it is no longer from the third-eye and third-ear and from the vision of bions, but instead is from lucid-dream vision and telepathic hearing (my guess is that telepathic hearing is also active during the bion-body stage of the afterlife, but with a lower priority than hearing with the third-ear). Without a body, and having lucid-dream vision and telepathic hearing as one’s primary senses, one has started the next stage of the afterlife which is the lucid-dream stage of the afterlife, during which one is just an awareness/mind.

Unlike the bion-body stage of the afterlife, which typically has a short duration of a few weeks or months, the lucid-dream stage of the afterlife can last for many years, even centuries. In general, without a body there are no body-related pains or distress during the lucid-dream stage of the afterlife. Instead, one leads a benign and possibly enjoyable existence, and one will also be more intelligent than when one had a body (regarding being more intelligent during the lucid-dream stage of the afterlife, see the discussion at the end of section 9.7). However, at some point the lucid-dream stage of the afterlife ends in some form of rebirth (aka reincarnation) which will give one a new body (for the typical human, this will be a new human body).[56],[57],[58]

Note that my mind’s third-eye and third-ear were never active during any of my approximately 400 lucid dreams, and I think it likely that one’s third-eye and third-ear will remain, in effect, dormant during the lucid-dream stage of the afterlife, until one is near the time for reincarnation, assuming one is going to reincarnate as physically embodied again. Having one’s third-eye and third-ear active during one’s search for which family to reincarnate into would be very helpful, because one would be able to see the physical bodies of potential parents, and hear them speak (whether or not one understands the language they are speaking is a separate consideration), and also see and hear the physical environment in which they live.

Regarding our personal memories and how long they last: As stated in section 3.8, after a soliton and its owned bions are created by the computing-element program, the computing-element program will keep that soliton and its owned bions together, and the number of those owned bions is fixed and unchanging (no additions or deletions regarding the bions owned by a soliton). Our personal memories, including memories of events in our own lives, are stored in one’s mind (the soliton’s owned bions). However, because there is a fixed limit on the size of a bion’s memory, and also a fixed unchanging number of bions composing one’s mind, there is a limit on how many memories and how much other data a mind can retain. The management of one’s memories depends on the learned programs in one’s mind. However, as the available storage for memories becomes filled, storing new memories requires replacing old memories. Thus, one’s mind either forgets one’s past or one’s present, and, because newer memories are, in general, more valuable and relevant to one’s current situation than older memories, we humans, as time passes, forget details from our past.[59] This finite memory that each of us has is also an underlying reason why, for at least most of us, we have no memories of our existence before our current human life.
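As an analogy for this replacement of old memories, here is a minimal sketch of a fixed-capacity store that forgets its oldest entries when full. The capacity, and the policy of always evicting the oldest entry, are my illustrative simplifications; the text above says only that the learned programs manage this somehow:

    # A sketch of a fixed-capacity memory that forgets its oldest
    # entries when full.

    from collections import deque

    class BoundedMemory:
        def __init__(self, capacity):
            self.capacity = capacity
            self.memories = deque()

        def store(self, memory):
            if len(self.memories) >= self.capacity:
                self.memories.popleft()  # the oldest memory is forgotten
            self.memories.append(memory)

    mind = BoundedMemory(capacity=3)
    for event in ["previous life", "childhood", "youth", "today"]:
        mind.store(event)
    print(list(mind.memories))  # ['childhood', 'youth', 'today']

Note that in this toy run, the "previous life" entry is the first to be forgotten, which parallels the point above about having no memories of one’s existence before one’s current human life.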

Transitioning from one Animal Type to a different Animal Type

For a soliton/mind whose current life cycle includes a physical embodiment in our world, to what extent can that soliton/mind move from one animal type to a different animal type, with humanity obviously being at the top? For example, can a pet cat—after perhaps first transitioning thru one or more other animal types that are closer to the human type than the cat type—eventually reincarnate as a human? Conversely, does it ever happen that someone who is currently human later reincarnates as some non-human animal, such as a chimpanzee? As the author of this book, I have no data on this subject of a soliton/mind transitioning from one animal type to a different animal type, although, given the reality model presented in this book, I am certain that it happens. Also, ignoring other considerations for the moment, a current human is most likely to reincarnate as human again, and, in general, a soliton/mind that is currently a non-human animal is most likely, after its time in the afterlife, to reincarnate as that same animal type again. In other words, once one is a specific animal type, whether human or otherwise, it is easier to remain that same animal type from one reincarnation to the next.

If you are currently tired of the negatives of human life and want to escape the so-called “wheel of rebirth”—I’m not currently in this group of those who want to escape human life, because I look forward to, and am optimistic about, my next human embodiment—you can always aspire to eventually leave behind a life cycle that includes physical embodiment, and, if you meet whatever the requirements are, either join the Caretaker civilization (section 7.6) or whatever other intelligent civilizations there may be in our Earthly environment whose members have no physical embodiment in their life cycle. However, as with transitioning to a different animal type, I have no data on humans transitioning to a life cycle that has no physical embodiment, other than to say that it is possible and I believe that it happens—and likewise for transitioning in the opposite direction.

In general, regarding a soliton/mind transitioning to a different type than its current type, and assuming there is a big enough difference between the two types to require a replacement of its mind’s learned programs: Its mind (the owned bions of its soliton) needs to have its current learned programs replaced with a copy of the learned programs that are in a different mind that currently has the programming for the type being transitioned to, and this copying of learned programs from one mind to another mind can only happen when both minds are asleep (section 9.3). In the specific case of transitioning from one animal type to a different animal type—an actual example that probably sometimes happens is the transitioning of an awareness/mind from being a chimpanzee in Africa to being, in its next incarnation, an African human—I guess that a good time for this copying to happen is after that soliton/mind has selected and moved close to a fetus of the wanted animal type, in which case the learned programs for the mind of that animal type can probably be copied from the mind of the mother when both the mother’s soliton/mind and the transitioning soliton/mind are asleep. Note that the copying of all the learned programs from one solitonic mind (the owned bions of a soliton) to a different solitonic mind is made easy by the assumption that every soliton has the same number of owned bions. One can assume that this copying is done by a routine in the computing-element program (and this copying would not include any copying of personal memories, which means that a soliton with a freshly copied new mind will start with zero personal memories). Regarding a decision by a soliton/mind to transition to a different type, I suppose that the desire of the soliton (awareness) is the most important factor, and the unconscious mind is also involved.
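A minimal sketch of this guessed copying routine follows. The class names and fields are invented for illustration; only the constraints stated above (same number of owned bions, learned programs replaced wholesale, personal memories not copied) are taken from the text:

    # A sketch of the guessed mind-to-mind copying of learned programs,
    # presumed above to be done by a routine in the computing-element
    # program while both minds are asleep (sleep is not modeled here).

    class MindBion:
        def __init__(self, learned_programs=()):
            self.learned_programs = list(learned_programs)
            self.personal_memories = []

    def copy_learned_programs(source_mind, target_mind):
        # made easy by the assumption that every soliton has the same
        # number of owned bions
        assert len(source_mind) == len(target_mind)
        for src, dst in zip(source_mind, target_mind):
            dst.learned_programs = list(src.learned_programs)  # replaced, not merged
            dst.personal_memories = []  # starts with zero personal memories

    # e.g., copying a human mind from the mother's mind to a transitioning
    # soliton's mind (in the model, both would have the same, fixed length):
    mother = [MindBion(["human language", "human vision"]) for _ in range(4)]
    newcomer = [MindBion(["chimpanzee programs"]) for _ in range(4)]
    copy_learned_programs(mother, newcomer)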

For a given soliton/mind, in the case of transitioning from its current type to a different type that results in a complete replacement of all the learned programs in its mind, what is still the same? The soliton is still the same, unchanged, and presumably it is still in the habit of working with the previous set of learned programs in its mind, but now there is a new set of learned programs that that soliton must work with. In this case, I think it is very likely that there will be a learning process for that soliton, to learn the capabilities of this new set of learned programs and how best to interact with them. Also, for something as complex as the human mind, it may take the typical soliton, which most likely had a non-human animal mind previously, at least several human lifetimes, or perhaps many human lifetimes, to fully learn, in effect, the ins and outs of its new human mind, and how best to adapt to it and interact with it. Recall that the soliton is not just a recipient of info from its mind; the soliton must also be able to give guidance and feedback to its mind. The messages go in both directions, both from its mind to the soliton, and from the soliton to its mind. Given that a big difference between humans and the other animals in our world is our language ability, a soliton in its first human life, and perhaps in at least its first several human lives, may typically be, in effect, very stupid, because the extensive language capabilities of the human mind are new to that soliton, assuming that before getting its human mind that soliton had a non-human animal mind with very limited language abilities.


footnotes

[53] As a hypothetical, what would happen if the death of one’s physical body was caused by being very close to a nuclear bomb when it exploded? In this scenario, one’s entire physical body would be blown to tiny bits and burned up very quickly, and the cell-controlling bions in one’s physical body, because they are running the learned program LP_keep_this_bion_close_to_this_physical_atom, would, in effect, chase after their specified atoms as those atoms, on average, rapidly move away from each other, with the result that the cell-controlling bions in one’s physical body would, on average, rapidly separate farther and farther from each other. Regarding the detailed procedure for a bion-body projection that I give in subsection 5.2.3, which includes the formation of the afterlife bion-body, I can think of three reasons why that procedure will fail to form the afterlife bion-body in this hypothetical situation of being very close to a nuclear-bomb explosion:

  1. With such a complete and rapid destruction of at least most of the physical cells in one’s physical body, it’s very likely that their cell-controlling bions will change their USID_1 status directly from MY_CELL_IS_ACTIVE to HAVE_NO_CELL, in which case step 1 of the procedure for forming the afterlife bion-body, which expects MY_CELL_IS_IN_STASIS as the USID_1 value, will fail.

  2. Regarding the use_this_send_distance value when the BB_PROJECTION_REQUEST message is sent in step 1: Although I didn’t suggest a use_this_send_distance value in that procedure for sending the BB_PROJECTION_REQUEST message, a reasonable value is 10 feet (about 3 meters), given that the length of the human body is less than 10 feet; this would be more than enough for the sent BB_PROJECTION_REQUEST message to reach every cell in a human’s physical body, assuming that physical body is intact. But with such an extreme explosion, probably most of the cell-controlling bions in one’s physical body will be out of range and won’t even receive the BB_PROJECTION_REQUEST message when it is sent in step 1 (see the sketch after this list).

  3. The procedure in its step 3 expects short separation distances for skin-cell bions, which won’t be the case after such an extreme explosion.
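Regarding reason 2, here is a minimal sketch of range-limited message delivery. The positions are invented, and the 10-foot figure is just the value suggested in reason 2 above:

    # A sketch of why reason 2 defeats the procedure: a message sent
    # with use_this_send_distance set to 10 feet reaches only the bions
    # within that radius of the sender. All positions are invented.

    import math

    USE_THIS_SEND_DISTANCE = 10.0  # feet, the value suggested above

    def receivers(sender_xyz, bion_positions):
        return [p for p in bion_positions
                if math.dist(sender_xyz, p) <= USE_THIS_SEND_DISTANCE]

    # intact body: bions spread along roughly 6 feet all get the message
    intact = [(0.0, 0.0, i * 0.06) for i in range(100)]
    print(len(receivers((0.0, 0.0, 0.0), intact)))     # 100

    # after an extreme explosion: bions scattered hundreds of feet apart
    scattered = [(i * 50.0, 0.0, 0.0) for i in range(100)]
    print(len(receivers((0.0, 0.0, 0.0), scattered)))  # 1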

Also, note that the procedure for forming the afterlife bion-body presumably evolved to be compatible with the typical post-death situation, which would be an intact or mostly intact physical body both at the time of death and minutes later. Thus, it’s very unlikely that that procedure would have evolved any programming for rare physical-destruction scenarios.

If for whatever reason the afterlife bion-body does not form, what then? Without an afterlife bion-body, that person (awareness/mind) will simply skip the bion-body stage of the afterlife and instead begin its afterlife with the lucid-dream stage of the afterlife. Thus, not a big loss.

Considering a less extreme scenario, but one that is happening in our human world as I write this footnote in 2016: what if death resulted from having one’s head cut off and then placed on one’s back, as I’ve seen done in a few Islamic-State videos on the internet? In this case, none of the three reasons given above would apply to keep the afterlife bion-body from forming, although the person (awareness/mind) who will inhabit that afterlife bion-body may quickly abandon it because his bion-body head is stuck to his bion-body back. If instead one’s head, after being cut off, is quickly moved far enough away from the physical body that the above reason 2 applies, and if there are enough skin-cell bions in just the head to still form the afterlife bion-body (avoiding the BB_PROJECTION_CANCEL message at the beginning of step 4), then most of that person’s afterlife bion-body will be missing, that bion-body being composed of just cell-controlling bions from his head. In this scenario, especially if the number of non-skin-cell bions in the afterlife bion-body—which in this case is just a bion-body head—is not large enough to trigger activation of the third-eye and third-ear (see step 5), that person (awareness/mind), having just that afterlife bion-body head, will probably quickly abandon it and begin the lucid-dream stage of the afterlife.

A somewhat different scenario: What if one is caught in a physical event that kills one’s physical body and breaks it into a number of mostly intact pieces (intact, so that most of the cell-controlling bions of the cells in those pieces will set their USID_1 value to MY_CELL_IS_IN_STASIS after a few minutes), and there are enough of these pieces within range (regarding reason 2 above) that the afterlife bion-body forms? In this case, the afterlife bion-body will be in pieces (and perhaps with some bion-body pieces visibly missing, because either the corresponding physical-body pieces were out of range regarding reason 2 above, or those visibly missing bion-body pieces are actually there but are too far away to be seen by one’s mind’s vision of bions, which has only a short range of perhaps less than ten feet). But after the person (awareness/mind) gets used to seeing that his afterlife bion-body is in pieces, insofar as he can see this, and assuming there were enough non-skin-cell bions in his afterlife bion-body to trigger activation of his third-eye and third-ear, he will probably soon switch his mental focus to what he can see and hear in our human world, and to where he can go in our human world.

As a completely different kind of scenario, consider a patient in a hospital who is brain dead but whose physical body is on life-support (a ventilator that, in effect, keeps his body breathing). In this case, the cell-controlling bions occupying the cells of his physical body are still there with that physical body, keeping it alive. In the typical brain-dead case, the lack of brain activity is probably a consequence of the awareness/mind that had that physical body having abandoned that physical body sometime before. But, because his physical body was still alive when his awareness/mind abandoned it, he was unable to form the afterlife bion-body (the MY_CELL_IS_IN_STASIS cell-controlling bions needed to form the afterlife bion-body were not there, since his physical body was still alive), leaving his awareness/mind no choice but to begin his afterlife with the lucid-dream stage of the afterlife.

[54] This first stage of the afterlife should not be confused with the many published accounts of NDEs (near-death experiences). During an NDE, the person having the NDE has not yet died.

There is a large literature on NDEs, and journalist Pierre Jovanovic summarizes the typical experience: “The subject suddenly finds himself outside his body, floats up to the ceiling and observes what is happening around his physical envelope. … In general the patient does not understand what is happening to him, above all when he discovers that he can pass through walls or when he tries to explain to the doctors that he is not dead. [then] After this observation period, he feels himself sucked at extraordinary speed into a tunnel (drain, pipeline, shaft, tube, canal, etc.) at the end of which he sees a light beckoning him on. … After having traveled through the tunnel, the subject may meet near and dear ones who died earlier. [then] Fusion with the light, which seems like a living being made of light, overflowing with an unconditional love for the subject. His whole life passes before him like a film, in the space of ten seconds, but in three dimensions, with the effects of his actions and words experienced by others. [then] A dialogue (not aloud but in thought) with the Light being, who ends the encounter by saying: ‘Your hour has not come; you must return and finish your job.’ Sometimes the subject is asked, ‘Do you wish to stay here or return?’ [then] Return to the body.” (Jovanovic, Pierre. An Inquiry into the Existence of Guardian Angels. M. Evans and Co., New York, 1995. pp. 29–30).

The typical NDE is a lucid-dream projection. The part about being “sucked at extraordinary speed into a tunnel (drain, pipeline, shaft, tube, canal, etc.) at the end of which,” is clearly a description of the acceleration and high-speed movement of that person’s awareness/mind to a remote location that, in the typical case, is probably many hundreds or thousands of kilometers distant (as mentioned in section 5.3, intelligent particles can accelerate rapidly to a speed of at least several hundred kilometers per second).

Distant travel, as a common feature of NDEs, is not surprising. An NDE can potentially happen to a person anywhere, but the “near and dear ones who died earlier,” and especially the “Light being,” are going to be at some more or less fixed location in the afterlife domain, which presumably envelops the Earth. Also, the “Light being”—who is, perhaps, a Caretaker (section 7.6)—may be a specialist in handling NDE encounters. And just as people typically travel as needed to the various specialists in their daily lives, so with an NDE: typically the person having the NDE travels to the specialist, instead of the specialist coming to him.

Regarding the NDE’s life-review, the life-review is presumably internally generated by that person’s mind (and not generated by the “Light being”). In effect, the life-review is a highly condensed highlights film: only the self-judged significant parts are reviewed (for example, don’t expect to see a review of what you were doing ten minutes ago, whatever that was).

There is not much time for the life-review to take place, so the data is fed to the soliton at a much higher data rate than is normal for waking consciousness. After the NDE, when the person remembers the life-review experience, that remembering takes place at the normal data rate. This causes the person remembering the experience to make typically exaggerated comments about how his whole life was lived in a few moments—when he compares the believed duration of the original experience with the duration of the remembering.

Note that the feeding of data to the soliton at a much higher data rate than normal is also at least sometimes present during the happening of a serious accident for the person undergoing that accident. I have an incident from my own life that illustrates this: In 1986 I was in my car, a 1984 Mercury Capri, stopped at a red traffic light, waiting behind a large garbage truck. Then the traffic light changed to green, and the traffic in the adjoining lane, going in the same direction as my car, was already moving. But for some reason the garbage truck in front of me was not moving. A few seconds passed, and I was just sitting there in my car waiting for that garbage truck to move—wondering why it wasn’t moving. Then, time suddenly slowed: as if in slow motion, my car, with me in it, was thrown forward, smashing into the back tires of that garbage truck, which had still not moved (my car had been hit from behind by a red MG sports car, driven by a young woman who was bloodied and hurt from that crash, but not too badly, although her car was totaled; I had my seat-belt on, and also my seat’s head-rest was up to prevent the possibility of whiplash in the event of an accident, and I was not hurt, but my car was damaged at both the car’s front end and back end). The garbage truck was only a few feet in front of my car, and it seems safe to say that from the moment of the initial impact from behind, until the moment that my car was stopped by its impact with that garbage truck, that less than a second had elapsed. And yet, my experience and memory of that time period seemed to last for many seconds (a rough guess would be somewhere between five and ten seconds).

As a final note regarding the soliton and its perception of the passage of time, it is a common observation that a day seems longer when one is a child, and gets shorter as one grows older. The likely explanation is that the average rate at which data is fed to the soliton decreases with age: time shortens as one grows older.

[55] In the previous footnote I gave an account of the slow-motion effect I had experienced during a car accident in 1986. In this footnote, regarding this subject of the slow-motion effect during a car accident, I add the following text from a written note I made less than two hours after my talk with a nurse on this subject in mid-October 2013 (edited in 2015 for improved clarity):

On Monday morning October 14, 2013, at our house in Gainesville Florida USA, hospice CNA Terry (CNA = Certified Nursing Assistant) was here for dad’s bed-bath etc, and while talking with her she mentioned that she had had a bad car accident about two years before, and that prompted me to ask if she had had the same slow-motion effect that I had had during my 1986 car accident which I briefly told her about. She said yes, and that prompted further questioning from me resulting in the following details: Her accident happened when someone had stepped out onto the road in front of her car which was moving at about 50 miles-per-hour, and she swerved off the road to avoid hitting him, and her car rolled over three times down an embankment. The slow-motion effect for her began once she realized she had lost control of her car right before the start of the rollovers, and lasted until her car came to a stop (for my car accident, like hers, my slow-motion effect ended when my car had stopped moving). She had several physical injuries from her car accident and she spent some time in a hospital afterwards recovering.

She said that she was so interested in that slow-motion effect that had happened to her during her car accident, that after being treated in the hospital for her injuries and then returning to her nursing job—she had a job in a hospital helping to care for emergency-room admissions—she asked every patient who was there because of a car accident if they had had the slow-motion effect (upon my questioning of her as to how many car-accident victims she had asked, she made clear that it was dozens but a lot less than 100). She repeatedly emphasized to me that without exception they all answered either that they didn’t have any memory of their car accident or that they did indeed have the slow-motion effect during their car accident; thus, many said that they had had the slow-motion effect. I’ll just add that although Terry and those post-car-accident hospital patients she questioned all had physical injuries from their accidents, being injured as a result of the accident is not a requirement for having and/or remembering the slow-motion effect: I had no injuries from my 1986 car accident, but I certainly had the same slow-motion effect as my car accident happened, and my memory of that slow motion is still with me after about 27 years, since it was so memorable to me (it’s October 14, 2013 as I write this paragraph).

Also, during our conversation on this slow-motion effect, after asking her a number of questions about her car accident and slow-motion effect, Terry asked me about my own slow-motion car-accident experience: had I seen the accident coming, as she had seen hers coming (that man stepping out onto the road in front of her car), because she thought maybe one had to see the accident coming to get the slow-motion effect. But, as I said in answer to her, I had not seen my car accident coming, because I was stopped at a red light behind a garbage truck waiting for that garbage truck to start moving after that red light had changed to green, and yet my slow-motion experience began with the start of the movement forward of my car after being hit from behind by that red MG sports car, which I only saw for the first time after the car accident was over and I had gotten out of my car.

[56] Rebirth is an old belief with a long history, and there is a large literature. The psychiatrist Ian Stevenson has collected over 2,600 reported cases of past-life memories, and he has written extensively on the subject. In one of his books (Stevenson, Ian. Where Reincarnation and Biology Intersect. Praeger Publishers, Westport CT, 1997), Stevenson presents cases that show a correlation between conditions or happenings in the most recent previous life, and current marks or defects on the body. For example, in some cases a birthmark marks the location of a fatal wound received in the previous life.

Regarding what accompanies the soliton into the new body, there are several considerations: the fact that the soliton finds its way into the new body; the evidence in the literature that some children accurately recall at least some details from their most recent previous life; the evidence presented by Stevenson that the new body can be marked according to conditions or happenings in the previous life. These various considerations are consistent with the soliton and its owned bions (its mind) remaining together, not just during the afterlife, but also into the next physically embodied life (one’s rebirth), accounting for the navigation to the new body, the past-life memories, and the marks made on the new body.

[57] Assuming rebirth, and assuming that most of those currently being born as human were also human in their previous embodied life, then because the embodied human population has grown manyfold in the last few centuries (I am writing this footnote in 2005), the average time spent in the afterlife between one embodied human life and the next has decreased proportionately during that time. As the embodied human population continues to grow, the average time spent in the afterlife between successive human embodiments will continue to decrease.

My own opinion, based on my study of the rebirth literature and other considerations, is that a century or two before the end of the 20th century, the average time between successive human embodiments was measured in centuries, but by the end of the 20th century the average time between successive human embodiments was measured in decades, perhaps only a few decades. At some point the embodied human population will stop growing and start shrinking, and the current trend toward less time in the afterlife will reverse.

[58] Astrology associates solar and/or planetary positions—relative to the Earth—with specific influences on human personality and/or events. For any given culture that has an astrological system, there may be a kernel of truth in that system, but the rest of the system is probably dross that has accumulated over time, due to the need of professional astrologers to add to the complexity of the system and broaden its claims, so as to increase the demand for their services and the amount of money they can charge for those services.

In the case of the astrological system of the European peoples, there seems to be, in at least some cases, a correlation between personality and sun sign (i.e., the person has, to some extent, the personality predicted by his birth zodiac sign: Aries, Taurus, Gemini, Cancer, Leo, Virgo, Libra, Scorpio, Sagittarius, Capricorn, Aquarius, or Pisces).

Such a correlation is possible, given the computing-element reality model, but the details of the mechanism by which the correlation is maintained are not clear. One possibility is that there is some sort of “birds of a feather, flock together” effect going on, in which people are reborn in large groups that self-segregate in terms of when during the year they will be born, based on planned personality characteristics in the next life.

[59] The Caretakers (section 7.6), on average, may be longer lived than humans, but not immortal. In a hypothetical society of immortals, relearning forgotten or soon-to-be-forgotten material would be an ongoing process.


6.3.1 Birds of a Feather, Flock Together

With regard to their physical bodies, children resemble their parents. Also, it is known that at least many physical characteristics are coded in one’s DNA, which is a mix of the biological parents’ DNA. Thus, one’s physical body is inherited from one’s biological parents.

Regarding mental qualities, it’s a common observation that stupid parents tend to have stupid children, and intelligent parents tend to have intelligent children. More specifically, the German philosopher Arthur Schopenhauer in the 19th century said that general intelligence seems to be inherited from the mother, and personality from the father. I initially agreed with Schopenhauer on this inheritance, and in the 11th edition of this book I said “for a typical person, copied from each parent is a partial allocation plan (section 9.6) that determines to a large extent intelligence (the partial allocation plan copied from the mother) and personality (the partial allocation plan copied from the father)”. However, for the 12th edition of this book, I have changed my thinking on this subject, and with the exception of one’s first human life, when one’s human mind is copied from one’s biological mother, I believe there is a “birds of a feather, flock together” effect going on. Specifically, for a typical person in the afterlife who already has a human mind, when that person is ready to end his stay in the afterlife and be reborn as a human again, he searches—either consciously or unconsciously—for a human family to be born into that, among other considerations, will be compatible with the kind of intelligence and personality that he—either consciously or unconsciously—expects to have as an adult in his new human life. Thus, as the saying goes, like seeks like. There will always be exceptions, and also mistakes made, but most people will search for parents that they will be compatible with. Thus, a stupid person typically seeks stupid parents, and an intelligent person typically seeks intelligent parents. And also, my guess is that the unconscious mind of the mother, and perhaps also the unconscious mind of the father, typically communicate with one or more potential candidates to become their child, and these communications have the potential to influence who will become their child.

With, as a rule, the primary goal of the selection process being compatibility between a child and its parents, this search for compatibility also applies to whatever ethnic group and/or nation and/or race those parents belong to. The like-seeks-like effect, both before being born and after being born, causes families to group with other families that they are sufficiently similar to, and this similarity includes similar body characteristics, as well as similar mental characteristics. The end result is different human populations in their own geographical areas, often, but not always, distinguished by at least some physical differences in their bodies compared to other nearby human groups. In political terms, in recent centuries, imperialism often steps in, under different guises, and mixes up these different human populations. However, this mixing by imperialism cannot alter, despite attempts at its suppression, the like-seeks-like effect. In the end, even when it takes centuries, empires and their mixings of human populations are undone. The USA, in which I was born and have spent my life, is one of these mixed imperial creations. Other things being equal, life is simply better when people are able to live with others like themselves.

Regarding the selection by a person in the afterlife of his parents-to-be, a different consideration involves the general undesirability of old parents (a mother in her forties or older, and/or a father in his fifties or older). Statistics show that, in general, there are likely to be more problems for a child when one or both of its parents were old when that child was born. For example, autism for a child is more likely if at least one of its parents was old when that child was born. Some blame it on old eggs due to the mother being old, and/or defective sperm due to the father being old. However, another possible explanation is that, in general, the afterlife persons that as new children will end up having fewer problems are simply less likely, on average, compared to those afterlife persons that as new children will end up having more problems, to choose for their parents a couple with one or both of them being old. And the simple reason for not wanting an old parent when being born into a new human life is that, on average, an old parent has fewer years of support that it can give to its child, compared to a young parent. A related consideration is that because an old parent is, in general, less desirable as a parent to those persons in the afterlife awaiting rebirth, that parent will, out of necessity if it wants a new child, have to lower its selection criteria and be willing to accept a new child that is more likely to have problems.

An important factor regarding human populations is the number of currently qualified candidates waiting in the afterlife to be born into a specific ethnic group and/or nation and/or race. As a specific example, regarding the so-called baby-boom in the USA from 1945 thru 1965, how much of that baby-boom was due to people who had died prematurely, primarily in Europe, from the mayhem of so-called World War 2 and its aftermath? My guess is a lot. Another example, but involving a shortage of qualified candidates instead of an excess of qualified candidates, is Japan. With many Japanese living into old age, and a current population in 2016 of 128 million, one can give various reasons for Japan’s low birth rate, but I think a major contributor to that low birth rate is that there are simply not enough qualified candidates waiting in the afterlife to be reborn as Japanese, because most of the qualified candidates are already living as Japanese. Also in 2016, the currently low birth rates worldwide for so-called whites are probably also mainly due to not enough qualified candidates waiting in the afterlife to be reborn as white. The correct solution for so-called white countries is not to lower the quality of life for whites by importing people who are incompatible with them—which in 2016, unfortunately, is happening in many countries including the USA—but instead to simply accept whatever population shrinkage happens with whites until an equilibrium population of whites is reached.

6.3.2 How a Mind Connects with a Brain before Birth

As said in section 3.7, “there must be interface programming—existing in one or more learned programs on the brain side, and existing in one or more learned programs on the mind side—that interfaces one’s mind with one’s brain.” This subsection considers this interface programming in more detail.

Animals with brains have been on Earth for a very long time. For example, the Editor’s summary at http://www.nature.com/nature/journal/v490/n7419/full/nature11495.html, regarding the 2012 article Complex brain and optic lobes in an early Cambrian arthropod, in the journal Nature, reads as follows:

The Cambrian explosion refers to a time around 530 million years ago, when animals with modern features first appeared in the fossil record. The fossils of Cambrian arthropods reveal sophisticated sense organs such as compound eyes, but other parts of the nervous system are usually lost to decay before fossilization. This paper describes an exquisitely preserved brain in an early arthropod from China, complete with antennal nerves, optic tract and optic neuropils very much like those of modern insects and crustaceans.

In section 3.7 I’ve already given justification, in the case of complex animals with complex senses and movement capabilities, for the brain bions (the cell-controlling bions that occupy and make alive the nerve cells of the brain) to be a separate group of bions from the bions that collectively compose that animal’s mind (its mind analyzes the animal’s sensory data and decides when and how to respond to that sensory data, and that response can include sending activate-muscle messages to the animal’s brain so as to move that animal as that mind wants). And, if the animal has a soliton/mind instead of just a mind, then that awareness, in general, is also involved in the decision-making process regarding that animal’s movements and other voluntary actions.

I’ve already given my reasons in section 6.1 for why I think insects have a mind but not a soliton/mind. Thus, some animals, including insects, have just a mind and are consequently unconscious, and other animals, including the human animal of course, have a soliton/mind and are consequently conscious. However, I think it likely that whatever the details of the interface programming that developed and evolved hundreds of millions of years ago, basically the same interface programming is still used today to connect a mind to a brain, regardless of whether that mind has an associated soliton or not, because the presence or absence of a soliton is not directly relevant to the mind/brain interface programming.

The interface programming involves messaging protocols between the brain and the mind. To identify the intended recipient(s) of a sent message when the intended recipient(s) are one or more bions, the send() statement has two mutually exclusive parameters (only one of these two parameters may be used when calling the send() statement): either use the list_of_bions parameter, or use the user_settable_identifiers_block parameter. The list_of_bions parameter identifies the intended recipient(s) by their unique identifiers. The user_settable_identifiers_block parameter is explained in subsection 3.8.2.

Let’s consider the following scenario: You are in the afterlife and are ready to reincarnate as human again, and you have already found the parents you want, and they (their unconscious minds) have, in effect, approved you as their child, and the baby’s body is already developing in the mother, and the brain in that baby has reached sufficient development to start establishing connections with a human mind. And your awareness/mind is nearby, ready to connect. What happens now? The problem is that the relevant brain bions that have sensory data to send to the mind don’t know the unique identifiers of the mind bions that they should send their sensory data to, and likewise, your mind bions that will be sending activate-muscle messages to brain bions in the motor cortex, to cause muscle movements, don’t know the unique identifiers of the brain bions that they should send their activate-muscle messages to. Thus, to establish initial communications between a mind and what will be its brain, the user_settable_identifiers_block parameter of the send() statement must be used.

Regarding the interface programming, assume that there are two different integer constants used, one is named BRAIN_BION_HAS_SENSORY_DATA, and the other is named BRAIN_BION_CAN_ACTIVATE_A_MUSCLE. And in any mind that has interface programming for connecting with a brain, there is one bion in that mind whose seventh integer in its user-settable identifiers block, aka USID_7, is set to the integer value of BRAIN_BION_HAS_SENSORY_DATA. Also, there is one bion in that mind whose USID_7 value is set to the integer value of BRAIN_BION_CAN_ACTIVATE_A_MUSCLE. The mind bion that has the integer value of BRAIN_BION_HAS_SENSORY_DATA as its USID_7 value, will be the recipient of any BRAIN_BION_HAS_SENSORY_DATA messages (described in the next paragraph). And likewise, the mind bion that has the integer value of BRAIN_BION_CAN_ACTIVATE_A_MUSCLE as its USID_7 value, will be the recipient of any BRAIN_BION_CAN_ACTIVATE_A_MUSCLE messages (described in the next paragraph).

When a brain bion has sensory data to send to the mind, but that brain bion does not yet know which specific mind bion to send its sensory-data messages to, that brain bion periodically sends a short-range message, setting the user_settable_identifiers_block parameter of the send() statement as follows: USID_7 (the seventh integer in the user_settable_identifiers_block parameter) is set to BRAIN_BION_HAS_SENSORY_DATA, and the other integers are set to null. That brain bion then sends this BRAIN_BION_HAS_SENSORY_DATA message, with the sensory data in the message text. A specific mind bion (see the previous paragraph) receives this BRAIN_BION_HAS_SENSORY_DATA message. Note that any message sent by a bion always includes a copy of the entire identifier block of that bion when that message was sent. Here, the relevant parts of that identifier block are that bion’s unique identifier and that bion’s user-settable identifiers block which for a cell-controlling bion in a multicellular animal identifies its cell type and other cell-related info (see subsection 3.8.6). With this detail about the sending brain bion and its cell, and also if there is any relevant detail in the message text (for example, perhaps vision sensory data includes a coordinate that, in effect, locates that pixel on the image seen by that eye), the learned programs in the receiving mind bion can presumably determine which specific mind bion in that mind, call it bion P, should process the sensory data in that received message. After determining bion P, that receiving mind bion then sends a message to bion P (the message text will include, at a minimum, the received sensory data, and the unique identifier of the brain bion that sent that sensory data). Bion P, after receiving that sent message, then processes that sensory data and sends an acknowledgement message to the brain bion that sent that sensory data (the list_of_bions parameter of the send() statement is set to that brain bion’s unique identifier). And, upon receiving that acknowledgement message, the interface programming running in that brain bion simply extracts the unique identifier of the mind bion that sent that acknowledgement message, and henceforth that brain bion will send its sensory data to that specific mind bion, instead of sending a BRAIN_BION_HAS_SENSORY_DATA message. And likewise, the same process happens in the case of a brain bion that can activate a muscle: that brain bion periodically sends a short-range BRAIN_BION_CAN_ACTIVATE_A_MUSCLE message, until it gets an acknowledgement message, the sender of which is the specific mind bion that that brain bion will henceforth accept activate-muscle messages from.
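
As a concrete illustration of this handshake, below is a minimal sketch in Python, reusing the send() sketch given earlier. The names USID_7, BRAIN_BION_HAS_SENSORY_DATA, and BRAIN_BION_CAN_ACTIVATE_A_MUSCLE are from the text; the constant values, the class, and the message format are illustrative assumptions.

    # Brain-bion side of the handshake described above. A brain bion that
    # can activate a muscle would follow the same pattern, using
    # BRAIN_BION_CAN_ACTIVATE_A_MUSCLE instead.
    BRAIN_BION_HAS_SENSORY_DATA = 71001        # illustrative value
    BRAIN_BION_CAN_ACTIVATE_A_MUSCLE = 71002   # illustrative value

    class SensoryBrainBion:
        def __init__(self, unique_identifier):
            self.unique_identifier = unique_identifier
            self.known_mind_bion = None        # learned from the acknowledgement

        def report(self, sensory_data):
            if self.known_mind_bion is None:
                # Not yet acknowledged: broadcast a short-range
                # BRAIN_BION_HAS_SENSORY_DATA message to whichever mind
                # bion has that USID_7 value.
                send(sensory_data,
                     user_settable_identifiers_block={"USID_7": BRAIN_BION_HAS_SENSORY_DATA})
            else:
                # Handshake complete: address that specific mind bion.
                send(sensory_data, list_of_bions=[self.known_mind_bion])

        def on_acknowledgement(self, ack_message):
            # Every sent message carries the sender's identifier block;
            # extract the sender's unique identifier and use it henceforth.
            self.known_mind_bion = ack_message["sender_unique_identifier"]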

For any animal that is guided by a separate mind that interfaces with that animal’s brain, I think it likely that by the time that animal is born (in the case of a live birth), or hatches from an egg (in the case of an animal that hatches from an egg), none of its brain bions, as a rule, are still sending BRAIN_BION_HAS_SENSORY_DATA messages or BRAIN_BION_CAN_ACTIVATE_A_MUSCLE messages, because that brain has already fully connected to the mind that will be the guide for that animal during its life. And all messages between that mind and that brain—after that animal’s birth or hatching, whichever applies—will be sent to specific bion recipients identified by their unique identifiers.

Assuming a separate mind and brain in the case of complex animals with complex senses and movement capabilities, and that this separateness has been present in our world for more than 500 million years, I think 500 million years is much more than enough time for protocols to have evolved in animal minds to avoid a situation where two or more animal minds are competing for connection with the same brain and have each established a partial connection with that brain. In that situation the animal would probably not survive long enough to reproduce, because each of the competing minds would have only some of the sensory data and/or some of the control of that animal’s movements, with the end result that that animal would probably soon succumb to the dangers in its environment, such as starving to death or being caught and eaten by a predator. Also, I assume that protocols have evolved to prevent the wrong kind of mind, one that is incompatible with an animal, from connecting with that animal’s brain (for example, to prevent an insect mind from connecting to a monkey brain).

In the case of humans, regarding the possibility of having one’s physical body, in effect, taken over by a “spirit” or “demon” or “entity” or whatever one wants to call it, this is the idea of “possession”, of being “possessed” by a being other than oneself. However, because one’s mind has already fully connected with one’s brain by the time of one’s birth, it will be impossible for some other mind or awareness/mind, regardless of how close (in terms of distance) it may be to one’s brain, to receive any of the sensory-data messages from one’s brain, because only the specified recipients of those sensory-data messages can receive those messages, and the specified recipients are mind bions in one’s mind, identified by their unique identifiers. Likewise, that other mind or awareness/mind, regardless of how close it may be to one’s brain, cannot control any of the muscles that one’s brain can activate, because the relevant brain bions in the motor cortex will only accept activate-muscle messages from specific mind bions in one’s mind, identified by their unique identifiers. In summary, having one’s brain possessed by some other mind or awareness/mind is impossible and does not happen. Thus, all stories of such possession are fiction (a related subject is multiple personality disorder, which is considered separately in section 9.7).

7 The Lamarckian Evolution of Organic Life

This chapter considers the evolution of organic life. The explanation for evolution offered by the computing-element reality model involves both Lamarckian evolution and a civilization of beings called Caretakers. The chapter sections are:

7.1 Evolution
7.2 Explanation by the Mathematics-Only Reality Model of the Evolution of Organic Life
7.3 Darwinism
7.4 Darwinism Fails the Probability Test
The First Self-Reproducing Bacterium
7.5 Darwinism Fails the Behe Test
7.6 Explanation by the Computing-Element Reality Model of the Evolution of Organic Life, and the Existence of the Caretaker Civilization
Learned Programs and Organic Life

7.1 Evolution

With regard to organic life, evolution says that new organic life-forms are derived from older organic life-forms. Often, this derivation involves an increase in complexity, but this is not a requirement of evolution.

The idea of evolution is very old. A theory of evolution, such as Darwin’s theory, or Lamarck’s theory, offers an explanation of the mechanism of evolution.

In more general terms, evolution is a process by which something new is created by modifying something old. This kind of evolution is so common thruout human activity that one takes it for granted. All the man-made machines in current use are at least partly derived from knowledge that was previously developed and used to make one or more preexisting machines. For example, if a group of engineers is asked to design a new car, they do not throw out everything known about cars and reinvent the wheel.

7.2 Explanation by the Mathematics-Only Reality Model of the Evolution of Organic Life

The mathematics-only reality model would have one believe that the entire history of organic life—including the transformation of the early atmosphere to the current atmosphere, and the active ongoing maintenance of the current atmosphere in a state of disequilibrium—was accomplished in its entirety by common particles jostled about by random events.[60]

Intelligent processes are too complicated to be explained by mathematical equations. Therefore, the mathematics-only reality model denies that there is any intelligence at the deepest level of the universe. By a process of elimination, the mathematics-only reality model has only common particles and random events with which to explain all the many innovations during the history of organic life.


footnotes

[60] The oldest known organic life is bacteria. The fossil record shows that bacteria first appeared at least 3½ billion years ago. Since then, organic life has radically altered the atmosphere. For example, the removal of carbon dioxide from the atmosphere probably started with the first appearance of bacteria; and all the oxygen in the atmosphere originated from photosynthesis, an organic process.

The assertion that organic life actively maintains the atmosphere to suit its own needs is known as the Gaia Hypothesis. The Gaia Hypothesis was developed by atmospheric scientist James Lovelock. While working as a NASA consultant during the 1960s, Lovelock noticed that Venus and Mars—the two nearest planets whose orbits bracket the Earth—both have atmospheres that are mostly carbon dioxide. As a means to explain the comparatively anomalous Earth atmosphere, he formulated the Gaia Hypothesis (Margulis, Lynn, and Gregory Hinkle. “The Biota and Gaia: 150 Years of Support for Environmental Sciences.” In Scientists on Gaia, Stephen Schneider and Penelope Boston, eds. MIT Press, Cambridge, 1993).

The current atmosphere of the Earth is not self-sustaining. It is not an equilibrium atmosphere that would persist if organic life on the Earth disappeared. Instead, the atmosphere is mostly a product of life, and is actively maintained in its present condition by life. The composition of the atmosphere by volume is about 78% nitrogen, 21% oxygen, 1% argon, and 0.03% carbon dioxide. Other gases are present in smaller amounts. As Lovelock states in his book Gaia, if life on Earth were eliminated, the oxygen would slowly leave the atmosphere by such routes as reacting with the nitrogen. After a million years or so, the Earth would have its equilibrium atmosphere: The argon would remain, and there would be more carbon dioxide. But the oxygen would be gone, along with much of the nitrogen (Lovelock, James. Gaia. Oxford University Press, Oxford, 1982. pp. 44–46). However, instead of moving to this equilibrium state, the atmosphere is maintained in disequilibrium by the coordinated activities of the biosphere.

One of the more interesting examples of control over the atmosphere by organic life is the production of ammonia. The presence of ammonia in the atmosphere counteracts the acids produced by the oxidation of nitrogen and sulfur. Lovelock estimated that without ammonia production by the biosphere, rainwater would be as acid as vinegar (Ibid., pp. 68, 77). Instead, there is just enough ammonia produced to counteract the acids and keep the rainwater close to neutral. Besides ammonia production, there are many other Gaian processes (Shearer, Walter. “A Selection of Biogenic Influences Relevant to the Gaia Hypothesis.” In Scientists on Gaia, op. cit.).


7.3 Darwinism

Darwinism—named after the British naturalist Charles Darwin, who first proposed his theory in the mid 19th century—is a theory of how organic evolution has happened. The theory states that during the production of a child organism, random events can cause random changes in that child organism’s characteristics. Then, if these new characteristics are a net benefit to that organism, that organism is more likely to survive and reproduce, thereby passing on these new characteristics to its children.

Darwin’s theory has two parts. The first part identifies the designer of organic life as randomness. The second part, called natural selection, is the means by which good designs are preserved and bad designs are eliminated. Natural selection is accomplished by the environment in which the organism lives.

As discussed in the previous section, random events applied to common particles is the only mechanism allowed by the mathematics-only reality model for the evolution of organic life. Thus, in effect, Darwinism applies the mathematics-only reality model to the question of how organic life has come about. This is the reason Darwinism is embraced by those who embrace the mathematics-only reality model.

The strong point of Darwinism is natural selection (for example, see the use of natural selection in explaining the evolution of learned programs, in section 3.6). The weak point of Darwinism is its exclusive reliance on random events as the cause of the changes winnowed by natural selection.[61]


footnotes

[61] As was described in chapter 2, the production of sex cells has steps in which the genetic inheritance from both parents is randomly mixed to form the genetic inheritance carried by each sex cell. Thus, for sexually reproducing organisms, randomness does play an important role in fine-tuning a species to its environment, insofar as that species is defined by its genetic inheritance.

Although sexual reproduction uses randomness—as part of the total sexual reproduction process—that does not mean, as Darwinism would have it, that the process itself was produced by random physical events. For example, in computer science there are many different optimization problems whose solutions are most efficiently approximated by randomly trying different possibilities and keeping only those tries that improve the quality of the solution. This is a standard technique. However, because a computer program uses randomness to find a solution, that does not mean that the program itself was produced by random physical events. Quite the contrary, the programs of computer science were produced by intelligent designers—namely computer scientists and programmers.
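
Below is a minimal sketch, in Python, of the standard technique just described: random trial-and-error that keeps only improvements. The toy objective function and step size are arbitrary illustrations.

    import random

    def f(x):
        return -(x - 3.0) ** 2                      # toy objective, maximum at x = 3

    x = 0.0
    for _ in range(10000):
        candidate = x + random.uniform(-0.1, 0.1)   # random try
        if f(candidate) > f(x):                     # keep only improving tries
            x = candidate
    print(round(x, 2))                              # ends up close to 3.0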

In the computing-element reality model, randomness is assumed to play an important role in the origin of learned programs, because, in essence, learning by trial and error (section 3.6) is an algorithm that makes random changes within the confines defined for that algorithm.


7.4 Darwinism Fails the Probability Test

In various forms, the probability argument against the randomness of Darwinism—in which odds are computed or estimated—has been made by many different scientists since Darwinism was first proposed. One way to make the probability argument is to use the known structure of major organic molecules such as DNA and protein.[62],[63] For example, the probability p of getting in one trial an exact sequence of N links, when there are C different equally likely choices for each link, is:

p = (1 ÷ (C^N))

Applying this equation to DNA, where C is 4—or to protein, where C is 20—quickly gives infinitesimally small values of p as the number of links N increases.
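
For a quick numeric illustration (a Python sketch; the chain length chosen here is arbitrary), even a short chain makes p vanishingly small:

    # Evaluating p = 1 / C^N for a DNA chain of only 100 links (C is 4).
    C, N = 4, 100
    p = C ** -N
    print(p)    # about 6.2e-61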

The First Self-Reproducing Bacterium

Consider the DNA needs of the first self-reproducing bacterium. And note that until there is self-reproduction, Darwinian natural selection has nothing to work with, because Darwinian natural selection assumes there is already a population of reproducing organisms. Darwinian apologists typically avoid considering how the first self-reproducing cell—presumably a bacterium because it is the simplest self-reproducing cell—came about, and the feeble attempts I’ve seen by them invoke natural selection operating at the level of atoms and molecules, which is absurd and contrary to physical science. Also, they assume there is only the physical, and they know nothing regarding intelligent particles. Thus, to argue at their level, bions and their involvement with organic cells are ignored until the end of this section.

How likely is it that the first self-reproducing bacterium happened by chance? To improve the odds for the Darwinian apologists, let’s ignore the current high complexity of bacteria, and also let’s ignore any consideration of what a physical self-reproducing machine needs in terms of its components.[64] And let’s assume that the DNA needed to code that first self-reproducing bacterium is only 10,000 links (this is enough to code a small number of proteins, totaling about 3,300 protein links; the bacteria in our world today have much more DNA than this). Also, let’s assume that at any DNA link, any two of the four bases will be adequate in coding that link, because, presumably, there are many DNA sequences for our 10,000-link DNA that would adequately code a usable set of proteins for that first self-reproducing bacterium (this assumption lowers C from 4 to 2). Also, let’s greatly exaggerate the total number of trials that could have happened in the past to bring about that 10,000-link DNA (the total number of trials is multiplied by p to get the final probability of the wanted 10,000-link DNA happening). Specifically, let’s assume a million trials per second (10^6) for the estimated age of our Milky Way galaxy (15 billion years is approximately 10^18 seconds), times all the places where these trials could have happened, which we will greatly exaggerate as being the same as the total number of elementary physical particles in the visible universe (estimated by physicists at approximately 10^80 particles). And also, let’s assume that nothing else is needed, other than this 10,000-link DNA strand, to make that first self-reproducing bacterium. With all these extremely generous assumptions, which greatly increase the probability of that first self-reproducing bacterium arising by chance, let’s compute the probability of it, which is ((the probability of one trial succeeding) × (the total number of trials)), which is:

(1 ÷ (2^10,000)) × (10^6 × 10^18 × 10^80) ≈ 10^−2,906

In other words, the odds are about 10^2,906 to one, against. And this means that that first self-reproducing bacterium did not arise by chance.
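
The arithmetic is easy to check with a few lines of Python, using the same generous assumptions just described:

    # Probability of one trial succeeding is 1 / 2^10,000, and the
    # exaggerated number of trials is 10^6 * 10^18 * 10^80 = 10^104.
    from math import log10

    log10_p = -10000 * log10(2) + (6 + 18 + 80)
    print(round(log10_p))    # prints -2906: the probability is about 10^-2906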

For those Darwinian apologists who admit that the odds are against them, they claim that the odds don’t matter because the mere fact that self-reproducing cells exist means that the odds were beaten. But they are wrong, because what the odds show is that the first self-reproducing cell did not arise by chance. An explanation other than randomness is needed, and this book provides that explanation: bions.


footnotes

[62] A molecule of DNA is a long molecule composed of chemical units called bases. These bases are strung together like links on a chain. There are four bases. Thus, there are four choices for each link.

The sequence of bases in an organism’s DNA is very important, because this sequence is the means by which DNA stores information, which is known to include the structure of individual proteins. A bacterium—the simplest organic life that can reproduce itself without the need to parasitize other cells—typically has many strands of DNA, containing altogether hundreds of thousands or millions of bases.

[63] A protein is a long folded molecule. Just as DNA is composed of a sequence of smaller building blocks, so is protein. However, whereas the building blocks of DNA are four different bases, the building blocks of protein are twenty different amino acids. Although a protein has more choices per link, a protein rarely exceeds several thousand links in length.

A bacterium has several thousand different proteins. The average length of these different proteins is somewhere in the hundreds of links.

[64] Any self-reproducing machine in the physical universe must meet certain theoretical requirements. A self-reproducing machine must have a wall to protect and hold together its contents. Behind this wall the self-reproducing machine needs a power plant to run its machinery, including machinery to bring in raw materials from outside the wall. Also, machinery is needed to transform the raw materials into the components needed to build a copy of the self-reproducing machine. And machinery is needed to assemble these components.

All this transport, transformation, and assembly machinery, requires a guidance mechanism. For example, there must be some coordinated assembly of the manufactured components into the new copy of the self-reproducing machine. Thus, the guidance mechanism cannot be too trivial, because its complexity must include a construction plan for the entire self-reproducing machine.

The requirements of a wall, power plant, transport machinery, transformation machinery, assembly machinery, and a guidance mechanism—all working together to cause self-reproduction—are not easily met. Consider the fact that there are no man-made self-reproducing machines.


7.5 Darwinism Fails the Behe Test

The Behe test refers to the main argument made against Darwinism by the biochemist Michael Behe:

By irreducibly complex I mean a single system composed of several well-matched, interacting parts that contribute to the basic function, wherein the removal of any one of the parts causes the system to effectively cease functioning. An irreducibly complex system cannot be produced directly (that is, by continuously improving the initial function, which continues to work by the same mechanism) by slight, successive modifications of a precursor system, because any precursor to an irreducibly complex system that is missing a part is by definition nonfunctional. An irreducibly complex biological system, if there is such a thing, would be a powerful challenge to Darwinian evolution. Since natural selection can only choose systems that are already working, then if a biological system cannot be produced gradually it would have to arise as an integrated unit, in one fell swoop, for natural selection to have anything to act on.[65]

After giving the example of a mousetrap as an irreducibly complex system, Behe then gives several detailed examples of specific biochemical systems that are irreducibly complex: the cilium;[66] the bacterial flagellum;[67] blood clotting;[68] the immune system’s clonal selection, antibody diversity, and complement system.[69]

By focusing on the issue of irreducibly complex systems, and being clear about that focus, Behe avoids the strong part of Darwinism, which is natural selection, and instead concentrates on the weak part of Darwinism, which states that random physical events are the cause of the changes winnowed by natural selection.

Calculating the probability for one of the irreducibly complex biochemical systems given by Behe, assuming that the system arose by chance, is non-trivial. However, mathematician William Dembski, in his book No Free Lunch, tackles the specific problem of calculating a probability for the formation of a bacterial flagellum by chance.[70] For a bacterium that has one or more flagella, its flagella are a means of moving that bacterium about in its watery environment. Each flagellum has a long whip-like filament that extends outward from the bacterium’s cell wall. This filament is attached to a structure called a hook, which acts as a universal joint connecting the filament to a specialized structure embedded in the cell wall; that embedded structure acts as a bi-directional motor that can rotate the filament in either a clockwise or counterclockwise direction. Because of the helically wound structure of the filament, one of these rotation directions causes the spinning filament to act like a propeller that pushes the bacterium in one direction, and the opposite rotation causes the spinning filament to act as a destabilizer that causes the bacterium to tumble (the bacterium tumbles when it wants to change the direction it is moving in).

For comparison purposes, Dembski first calculates what he calls a universal probability bound, the idea of which is that anything dependent on chance whose probability is smaller than this universal probability bound is extremely unlikely to happen no matter how much time and material resources in the universe one invokes on the side of chance.[71] His universal probability bound, which is very generous to those who want to invoke Darwinism and its reliance on chance, is computed as follows (10^80 is the estimate by physicists of the number of elementary physical particles in the visible universe; 10^45 is approximately the number of Planck-time intervals in one second; 10^25 is more than ten million times the age of our Milky Way galaxy in seconds):

(1 ÷ (10^80 × 10^45 × 10^25)) = 10^−150

Thus, given this universal probability bound, anything with a probability less than 10^−150 can be safely dismissed as so unlikely that there is no reason to consider it as possible when offering an explanation for the formation of an irreducibly complex biochemical system.

Dembski then defines an equation for the probability of a structure arising by chance.[72] His equation may be written as:

p_structure  =  ( p_originate-parts × p_localize-parts × p_configure-parts )

In the above equation, p_structure is the probability of getting either the specified structure or a functionally equivalent structure; p_originate-parts is the probability of originating all the parts that are needed to build an instance of the specified structure or a functionally equivalent structure; p_localize-parts is the probability that the needed parts are located together at the construction site; p_configure-parts is the probability that the localized parts are configured (assembled) in such a way that either the specified structure or a functionally equivalent structure results.

In Dembski’s computation of p_structure for a bacterial flagellum, the parts of the structure are individual proteins. For the Escherichia coli bacterium, Dembski refers to the technical literature and says that about 50 different proteins are needed to make the flagellum, with about 30 different proteins being in the final form of the flagellum, including:

The filament that serves as the propeller for the flagellum makes up over 90 percent of the flagellum’s mass and is comprised of more than 20,000 subunits of flagellin protein (FliC). … The three ring proteins (FlgH, I, and F) are present in about 26 subunits each. The proximal rod requires 6 subunits, FliE 9 subunits, and FliP about 5 subunits. The distal rod consists of about 25 subunits. The hook (or U-joint) consists of about 130 subunits of FlgE.[73]

Given these details, Dembski computes p_localize-parts as follows:

Let us therefore assume that 5 copies of each of the 50 proteins required to construct E. coli’s flagellum are required for a functioning flagellum (this is extremely conservative—all the numbers above were at least that and some far exceeded it, for example, the 20,000 subunits of flagellin protein in the filament). We have already assumed that each of these proteins permits 10 interchangeable proteins. That corresponds to 500 proteins in E. coli’s “protein supermarket” that could legitimately go into a flagellum. By randomly selecting proteins from E. coli’s “protein supermarket,” we need to get 5 copies of each of the 50 required proteins or a total of 250 proteins. Moreover, since each of the 50 required proteins has by assumption 10 interchangeable alternates, there are 500 proteins in E. coli from which these 250 can be drawn. But those 500 reside within a “protein supermarket” of 4,289 [different] proteins. Randomly picking 250 proteins and having them all fall among those 500 therefore has probability (500/4,289)^250, which has order of magnitude 10^−234 and falls considerably below the universal probability bound of 10^−150.[74]

For computing probability p_originate-parts, Dembski is willing to concede a value of 1 (certainty) if one wants to assume that the needed proteins are already coded in the bacterium’s DNA for other uses. For computing probability p_configure-parts, the computational approach used by Dembski is more complex than that used for computing p_localize-parts, but gives a similar result of a probability that is much smaller than his universal probability bound of 10^−150.
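
Dembski’s localization probability is likewise easy to check; below is a Python sketch of the quoted calculation:

    # Randomly picking 250 proteins and having them all fall among the
    # 500 usable ones in a pool of 4,289 different proteins.
    from math import floor, log10

    p_localize_parts = (500 / 4289) ** 250
    print(p_localize_parts)                  # about 4.5e-234
    print(floor(log10(p_localize_parts)))    # -234: order of magnitude 10^-234
    print(p_localize_parts < 10 ** -150)     # True: far below the universal bound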


footnotes

[65] Behe, Michael. Darwin’s Black Box. Touchstone, New York, 1998. p. 39.

[66] Ibid., pp. 59–65.

[67] Ibid., pp. 69–72.

[68] Ibid., pp. 79–96.

[69] Ibid., pp. 120–138.

[70] Dembski, William. No Free Lunch: Why Specified Complexity Cannot Be Purchased without Intelligence. Rowman and Littlefield Publishers, Lanham Maryland, 2002. pp. 289–302.

[71] Ibid., pp. 21–22.

[72] Ibid., p. 291. The p_structure equation is equivalent to—but more descriptively labeled than—the p_dco equation given by Dembski.

[73] Ibid., p. 293.

[74] Ibid., p. 293.


7.6 Explanation by the Computing-Element Reality Model of the Evolution of Organic Life, and the Existence of the Caretaker Civilization

Like the mathematics-only reality model, the computing-element reality model can offer the explanation that organic life evolved by common particles jostled about by random events. However, as shown in the previous sections, this is not a viable explanation, and it is not considered further.

Another possible explanation is that the computing-element program explicitly programs the details of organic life. For example, the computing-element program could include the details of the DNA, proteins, and other molecules, in the first bacterium. However, this possible explanation, which is not considered further, is weak for many reasons, not the least of which is that it greatly increases the complexity of the computing-element program.

Another explanation—and a much more promising one—is that the evolution of organic life is the result of the cooperative action of intelligent particles—beginning in the remote past at least 3½ billion years ago, and continuing into the present. Note that with the availability of intelligent particles, there are two basic ways in which intelligent particles can be involved in the evolution of organic life:

  1. An inside-out process: Design innovations in an organism originate from the intelligent particles that occupy a specific instance of that organism. Once made, an innovation can be copied from the originating population of bions to other bion populations that occupy and develop new instances of that organism.[75] In effect, this is Lamarckian evolution.[76]

  2. An outside-in process: There is nothing in the computing-element reality model that implies a need for common particles in the composition of a sentient being. Instead, only intelligent particles are needed. And as shown in earlier chapters, even we humans, who have physical bodies during physical life, exist quite well without them in the afterlife. Thus, given these considerations, it seems very likely that a large fraction of the sentient beings in the universe do not have a physical body, and never have one at any time in their life-cycle (unlike humans, who alternate having a physical body with not having one—physical embodiment alternates with the afterlife).

    It is likely that civilizations of such beings, who never have a physical body, exist widely thruout the universe. And it is likely that at least some of these civilizations are highly advanced in their ability to interact with physical matter, and in their scientific knowledge of the physical.

    For the members of such a civilization, their interaction with physical matter would first include, by means of learned programs, being able to see and manipulate physical matter. Assuming that the beings are already intelligent, once the beings can directly see and manipulate physical matter, they can then proceed—more or less in the same way that humanity has proceeded—to master the science of physical matter; and then, as their interests dictate, they can use that knowledge to construct highly sophisticated physical environments and/or machines, including physical computers.

    Thus, given the computing-element reality model, it is possible that such a civilization, wise in the ways of physical matter, existed in our solar system more than 3½ billion years ago—before the beginning of organic life on Earth. And it is possible that this same civilization, or a more evolved version of it, still occupies our solar system today, and has played, and continues to play, a role in the existence of organic life on this Earth. The range of their possible activity with regard to Earth’s organic life suggests for this civilization the name of Caretakers.

    The members of this Caretaker civilization would each have an awareness/mind, just as we humans have, but their bodies consist only of bions (no physical matter as a part of their bodies; no physical body of any kind). Also, a Caretaker bion-body is not limited in the way that the human bion-body is, because the Caretaker bion-body is not composed of cell-controlling bions. Instead, the Caretaker bion-body is composed of bions whose learned programs have evolved without having to support a physical body and its burdensome microscopic needs, and have evolved under the influence of the Caretakers to be compatible with the Caretaker mind and to do what is possible to do as the Caretakers want. This means, among other things, that the Caretaker bion-body, unlike the human bion-body, can manipulate physical objects on a macroscopic scale. Specifically, to interact with physical matter on a macroscopic scale, the learned programs of the bions composing the Caretaker bion-body would make use of the learned-program statement push_against_physical_matter() to push against physical matter, when directed to do so by that Caretaker’s mind. For example, if the Caretaker’s mind sends messages to a large number of its bion-body bions to push with a specified force in a specified direction, and those bions are very close to physical matter (recall the very short range of the push_against_physical_matter() statement, estimated at less than one-tenth of a millimeter), and those bions are spread out, the end result, assuming that the specified force is substantial, can be the movement of a large physical mass in the specified direction.

    Because our human hands work so well for handling physical matter on a macroscopic scale, the Caretaker bion-body probably has—or can form when directed to by that Caretaker’s mind—appendages that look similar to human arms with hands, but without the elbow joints and rigid straight parts due to bones, because the Caretaker bion-body is boneless. With regard to pushing against physical matter with its bion-body, there are different ways to, in effect, offload much of the control burden from the Caretaker’s mind onto the bions that compose its bion-body. For example, let’s assume that each bion in its bion-body has a learned program named LP_push_against_physical_matter_at_the_surface, that allows that Caretaker’s bion-body to push against physical matter in a way that is similar to what we experience in our physical bodies (the surface of one’s physical body, at the point of contact with physical matter, pushes against that physical matter). Regarding this learned program, the Caretaker’s mind can send several different messages, including a start-running message that causes this learned program to run, a stop-running message that causes this learned program to stop running, and a pushing-force message that specifies the amount of force to be used.

    In the same way as was done for the cell-controlling bions of a multicellular body, assume that all the bions composing a Caretaker’s bion-body have the same USID_4 value, which is, in effect, a unique identifier for that bion-body. Also, assume that the start-running, stop-running, and pushing-force messages, sent by the Caretaker’s mind, use the send() statement’s user_settable_identifiers_block parameter to identify the intended recipients of these sent messages, and the USID_4 value of that user_settable_identifiers_block parameter for those sent messages is always set to the USID_4 value for that Caretaker’s bion-body. The following steps are a brief outline of the learned program LP_push_against_physical_matter_at_the_surface, with a code sketch after the steps (when a bion in the Caretaker’s bion-body receives a start-running message from the Caretaker’s mind, it starts with step 1; when a bion in the Caretaker’s bion-body receives a stop-running message from the Caretaker’s mind, that bion stops running this learned program if it is currently running it):

    1. First determine if this bion is, in effect, at the current surface of the bion-body. Call the get_relative_locations_of_bions() statement, with a short distance specified for its use_this_send_distance parameter, to get the relative locations of nearby bions in the bion-body, from which it can be determined if this bion is a surface bion. If this bion is not at the current surface of the bion-body then stop running this learned program for this bion until the next start-running message is received by this bion, in which case start over again with this step 1.

    2. For this surface bion, use the relative location of the centroid that was among the returned items of the call of get_relative_locations_of_bions() in step 1, and set vector V so that it points opposite to the vector that points from this surface bion to that centroid. The end result is that vector V will point outward from this surface bion, away from the bion-body’s nearby interior.

    3. If this surface bion has recently received a pushing-force message from the Caretaker’s mind that specifies the amount of pushing force F to apply, then call push_against_physical_matter() specifying F as the pushing force and vector V as the direction of that pushing force.

    4. If enough time has passed, let’s say 50 milliseconds since last did step 1, then go to step 1 (the idea is to periodically start over again with step 1 because the bion-body may have altered its shape in some way because of commands sent by the Caretaker’s mind). Otherwise, wait a short time, let’s say a few milliseconds, then go to step 3 (the idea is to keep applying that pushing force).
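
    As mentioned above, here is a sketch of these four steps in Python. The statement names get_relative_locations_of_bions() and push_against_physical_matter() are from the text; their stub implementations, the surface test, the vector arithmetic, and the assumption that the bion argument exposes its most recent mind messages as the attributes stop_requested and latest_pushing_force, are all illustrative.

        import time

        SHORT_DISTANCE = 0.1    # illustrative value for use_this_send_distance

        # Illustrative stubs for the two learned-program statements named
        # in the text; in the model, the computing-element program
        # supplies the real versions.
        def get_relative_locations_of_bions(use_this_send_distance):
            # Returns (relative locations of nearby bions, relative
            # location of their centroid); fixed placeholder values here.
            return [(1.0, 0.0, 0.0)], (0.5, 0.0, 0.0)

        def push_against_physical_matter(force, direction):
            print("pushing with force", force, "along", direction)

        def is_surface_bion(nearby_locations, centroid):
            return True         # placeholder surface test

        def LP_push_against_physical_matter_at_the_surface(bion):
            while not bion.stop_requested:
                # Step 1: is this bion at the bion-body's current surface?
                nearby, centroid = get_relative_locations_of_bions(
                    use_this_send_distance=SHORT_DISTANCE)
                if not is_surface_bion(nearby, centroid):
                    return      # wait for the next start-running message
                # Step 2: vector V points outward, opposite the direction
                # from this bion to the centroid of its nearby neighbors.
                V = tuple(-c for c in centroid)
                # Steps 3 and 4: keep pushing, redoing step 1 every
                # 50 milliseconds in case the bion-body has changed shape.
                deadline = time.monotonic() + 0.050
                while time.monotonic() < deadline and not bion.stop_requested:
                    F = bion.latest_pushing_force    # None until a pushing-force message
                    if F is not None:
                        push_against_physical_matter(force=F, direction=V)
                    time.sleep(0.003)                # a few milliseconds between pushes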

    Presumably, the Caretakers can see our physical world. Less important but also useful would be the ability to hear our physical world when in its atmosphere. The Caretaker mind probably has basically the same learned programs that Sylvan Muldoon’s mind has (section 5.4), that allowed Muldoon to both see and hear the physical world when projected in his bion-body. These learned programs, for seeing and hearing the physical world, use, respectively, the learned-program statements get_photon_vectors() and get_relative_locations_of_physical_atoms(), to, after further processing, let the awareness see and hear our physical world.

    In general, because the Caretaker civilization predates the first appearance of humans, it is very likely that the Caretakers were the original source of those learned programs in the human mind (the third-eye and third-ear), that enabled Muldoon to both see and hear our physical world during his bion-body projections. Also, the first humans may have been Caretakers (although, as soon as they became human they were no longer Caretakers): This may be how certain parts of the human mind, including its intellectual parts, were inherited from the Caretakers. Any current differences between these inherited learned programs in current humanity and the corresponding learned programs in the Caretakers of today, would be due to the continued evolution of these learned programs in both Caretakers and humans after that initial copying from Caretakers to humanity in the distant past.

    In summary, the ways in which the Caretaker civilization could be involved with the evolution of organic life on Earth include the following: preparing and maintaining the physical conditions needed by organic life, such as, perhaps, an ongoing transport of water to the Earth;[77] weeding out species, arisen thru Lamarckian evolution, that are judged too damaging to the rest of the biosphere;[78] and preserving selected species during extinction events, and perhaps arranging such events.[79]

Learned Programs and Organic Life

Organic life depends on the learned programs in cell-controlling bions that, in effect, carry the knowledge and ability to construct and operate the organic structures that compose a given organism. These organic structures have a wide range in terms of size: the smallest are organic molecules such as DNA and protein, then sub-cellular structures and cells, then constructions of cells including complete organs such as the heart and lungs, and finally the largest structure, being the entire physical body of an organism.

Regarding learned programs in general, learned programs cannot be directly programmed into intelligent particles by any mechanism other than the computing-element program and its learning algorithms (section 3.6). The reason for this limitation is that the computing elements are inaccessible: All particles, whether intelligent or common, are data stored in computing elements (chapter 1). Thus, particles—as an effect of the computing elements—cannot be used to directly probe and/or manipulate the computing elements. Thus, no civilization in this universe can ever know the actual instruction set of the computing elements, nor can it ever know the actual programming language of learned programs. Thus, no civilization in this universe can ever write, as one writes on paper, a new set of learned programs, and then program those learned programs into one or more bions. Thus, it is not possible that the Caretaker civilization, in the distant past, designed and then caused to come into existence the first self-reproducing bacterium, because they could neither write nor program the learned programs needed by whichever bion would operate that first self-reproducing bacterium.[80] Thus, only Lamarckian evolution can be the cause of an organic feature that requires a new or modified learned program to go along with that organic feature.

The next chapter considers in more detail what seem to be the current activities of the Caretakers with regard to our Earth, and in particular with regard to human life.


footnotes

[75] If the innovation is a change to one or more learned programs, then the copying that is done is the copying of those learned programs from one population of bions to another.

If the innovation is a change that can be recorded into that organism’s DNA—such as recording, for example, a new design for a specific protein—then, in accordance with the rules for DNA encoding of information, that change can be made by that organism’s bions to that organism’s germ-cell DNA, and allowed to propagate thru the normal reproduction means for that organism. Presumably, the rules for DNA encoding of information exist in one or more learned programs that all cell-controlling bions share, so that they all speak the same DNA language.

[76] Lamarckism—named after the French naturalist Jean Lamarck who proposed his theory in the early 19th century—is a theory of how organic evolution has happened. His theory states that an organism can adapt to its environment by making structural changes to itself, which can then be inherited.

Historically, Lamarckism was replaced by Darwinism due to Darwinism’s better fit with the mathematics-only reality model. Also, Lamarckism had the drawback that there is no apparent physical mechanism by which Lamarckism could happen. However, this objection is removed by the computing-element reality model, because intelligent particles provide the means by which Lamarckian changes can take place.

[77] The transport of water to the Earth may be an ongoing process. Geophysicists Louis Frank and John Sigwarth have published a number of papers during the 1980s and 1990s regarding what they call small comets. Their claim, based on Earth-observing satellite data, is that:

Every few seconds a ‘snowball’ the size of a small house breaks up as it approaches Earth and deposits a large cloud of water vapor in Earth’s upper atmosphere. [quoted from their website at http://smallcomets.physics.uiowa.edu]

If this alleged influx of snowballs is correct, then it may be that this influx is the result of a deliberate transport program operated by the Caretakers.

Frank and Sigwarth have calculated that the infall rate of these small comets can account for the Earth’s oceans. Regarding the origin of the Earth’s oceans, geologist David Deming comments:

No existing theory of ocean origin by outgassing or rapid accretion on a very young Earth survives falsification. The unifying theory that explains both the origin of the ocean and the continents is the slow and gradual accumulation of water on the surface of the Earth by extraterrestrial accretion. [Eos. Trans. AGU, 82(47), Fall Meet. Suppl., Abstract U52A–0006, 2001]

[78] For example, the arising by means of Lamarckian evolution of a parasitic or poisonous species that is judged to be too damaging, could be singled out for eradication if eradication of that species is possible without excessive amounts of unwanted damage elsewhere in the environment.

[79] For example, during extinction events caused by comets and asteroids, such as the Cretaceous extinction event of about 65 million years ago, some species could be singled out for preservation. Representative members of a species could be collected and kept in a protected environment for as long as needed, until they can be safely reintroduced into the Earth’s biosphere.

An extinction event could also be arranged by the Caretakers, so as to allow a general “housecleaning” of the Earth’s biosphere, followed by the selective reintroduction of those species wanted on the newly “cleaned” Earth.

[80] The Caretakers, in theory, could have designed the molecular composition of the first self-reproducing bacterium—its DNA, proteins, etc. But without a bion to animate it, the Caretakers would have had only a lifeless lump of organic matter—a lump that would, among other things, have been unable to reproduce itself.


8 Caretaker Activity

This chapter briefly surveys what is known about UFOs, by describing the UFO and the UFO occupants. After the survey, an evaluation of the evidence concludes that the UFO occupants are the Caretakers. The possibility of interstellar travel by the Caretakers is also considered. The chapter sections are:

8.1 The UFO
8.2 The UFO according to Hill
My Analysis
8.3 The UFO Occupants
8.4 Identity of the UFO Occupants
8.5 Interstellar Travel

8.1 The UFO

Following the flood of UFO reports in the USA that occurred in 1947,[81] the USA Air Force established an official investigation in September 1947, and that investigation existed under different names until December 1969, when it was closed. For most of its life the investigation was lightly staffed and had a policy of debunking and dismissing each one of the thousands of UFO reports that accumulated in its files.

An astronomy professor, J. Allen Hynek, was a consultant to the investigation from 1952 to 1966. However, he quit in disgust after being subjected to public ridicule for his infamous “swamp gas” explanation of the March 21, 1966, UFO sighting on the Hillsdale College campus in Michigan:[82] On the night of March 21, a civil-defense director, a college dean, and eighty-seven students, witnessed the wild maneuvers of a car-sized football-shaped UFO. Keith Thompson, in his book Angels and Aliens, summarizes: “The curtain came down on this four-hour performance when the mysterious object maneuvered over a swamp near the Hillsdale College campus.”[83]

Although initially disbelieving, Hynek underwent a conversion during the 1960s as he was overcome by the weight of evidential UFO reports.[84] He had personally investigated many of these reports by interviewing UFO witnesses as part of his role with the Air Force as a UFO debunker. In a 1975 conference paper, quoted by Leonard Stringfield in his book Situation Red, Hynek summarized his position as follows:

If you object, I ask you to explain—quantitatively, not qualitatively—the reported phenomena of materialization and dematerialization, of shape changes, of the noiseless hovering in the earth’s gravitational field, accelerations that—for an appreciable mass—require energy sources far beyond present capabilities—even theoretical capabilities—the well-known and often reported E-M effects, the psychic effects on percipients, including purported telepathic communications, the preferential occurrence of UFO experiences to the “repeaters”—those who are reported to have so many more UFO sightings that it outrages the noble art of statistics.[85]

The statement about materialization and dematerialization refers to reports where the UFO becomes visible or invisible while being stationary.[86] The statement about shape changes refers to reports where a UFO undergoes a major change in its apparent shape—such as when two smaller UFOs join to form a single larger UFO. The statement about E-M effects refers to electromagnetic effects, such as the bright lights and light beams that often emanate from UFOs. Also, there is the effect that UFOs can have on electrical machinery. For example, a UFO in proximity to a car typically stops that car’s engine.

UFO sightings are not evenly distributed over time. Instead, the sightings tend to clump together in what are called waves. During a UFO wave, the number of reported sightings is much higher than normal. Waves are typically confined geographically. For example, France experienced a large wave in 1954, which included landings and observed occupants. Sweden and Finland experienced a wave beginning in 1946 and lasting until 1948. In that wave, the UFOs were cigar-shaped objects which were termed at the time ghost rockets. More recent was the wave in Belgium that began in November 1989 and lasted thru March 1990. The USA waves include those of 1897, 1947, 1952, 1957, 1966, and 1973. Computer scientist Jacques Vallee, in his book Anatomy of a Phenomenon, summarizes some earlier sightings:

Their attention, for example, should be directed to the ship that was seen speeding across the sky, at night, in Scotland in A.D. 60. In 763, while King Domnall Mac Murchada attended the fair at Teltown, in Meath County, ships were also seen in the air. In 916, in Hungary, spherical objects shining like stars, bright and polished, were reported going to and fro in the sky. Somewhere at sea, on July 29 or 30 of the year 966, a luminous vertical cylinder was seen.... In Japan, on August 23, 1015, two objects were seen giving birth to small luminous spheres. At Cairo in August 1027, numerous noisy objects were reported. A large silvery disk is said to have come close to the ground in Japan on August 12, 1133.[87]

There is no standard size, shape, or coloring of UFOs. Reported sizes, as measured along the widest dimension, have ranged from less than a meter to more than a thousand meters.[88] However, most reported UFOs whose size was observed from the ground at close range were roughly between a small car and a large truck in size. In modern times, most UFOs have resembled spheres, cylinders, saucers, or triangles with rounded corners. Sometimes the observed UFO has a distinct dome, and sometimes the UFO has what appear to be windows or portholes.

When viewed as solid objects, UFOs often have a shiny metallic finish, although dark colors are also sometimes reported. When viewed as lights, or as flashing lights on a UFO body, typical colors seem to be white and red, with other colors, such as yellow, blue, and green, reported less frequently.


footnotes

[81] The Roswell hoax—the alleged crash of a UFO in Roswell, New Mexico, and the subsequent recovery and dissection by the USA military of several dead alien crash victims—dates to an event in July 1947: Debris from a crashed balloon (the balloon was part of a secret project by the USA military named Project Mogul) was misidentified by an Army Air Force intelligence officer—who knew nothing of the secret project—as the remains of a crashed saucer (apparently because of the very recent and widespread USA news coverage about “flying saucers”). This misidentification was reported in the local Roswell newspaper and then reported across the USA. But within a few days the USA military retracted the story as a misidentification of debris that belonged to a weather balloon (Project Mogul was a military secret and not declassified and made public until 1994, so a more accurate and detailed explanation was not forthcoming).

Although the Roswell event dates to 1947, the Roswell myth did not grow large until the 1980s and 1990s, when many books were written on the subject. As researcher Kal Korff says, “The Roswell ‘UFO crash’ of 1947 is not the only case in UFO history to be blown out of proportion, nor is it going to be the last. … Let’s not pull punches here: The Roswell UFO myth has been very good business for UFO groups, publishers, for Hollywood, the town of Roswell, the media, and UFOlogy.” (Korff, Kal. The Roswell UFO Crash: What They Don’t Want You to Know. Prometheus Books, Amherst NY, 1997. pp. 217–218).

Although money is an important factor in explaining the peddling of the Roswell myth as factual, there is perhaps a bigger reason for the demand for this myth: The mathematics-only reality model requires that UFOs and their occupants—if they are real—be something that the mathematics-only reality model can explain. But the commonly reported characteristics of the occupants—for example, their widely reported use of telepathy when communicating with humans—cannot be explained by the mathematics-only reality model. And because the mathematics-only reality model is the dominant reality model of the 20th century, and many people believe this model, this belief creates a potential paying public for false UFO stories—such as Roswell—that counteract and contradict the UFO evidence that undermines the mathematics-only reality model. Hence the creation and consequent peddling of the Roswell myth and similar crash-and-recovery myths, whose ultimate purpose is to place the aliens on the dissection table, so as to expose them as physical, as the mathematics-only reality model requires.

[82] Thompson, Keith. Angels and Aliens. Addison-Wesley, New York, 1991. pp. 80–84.

[83] Ibid., p. 81.

[84] Ibid., pp. 80, 83–84, 117.

[85] Stringfield, Leonard. Situation Red: The UFO Siege. Fawcett Crest Books, New York, 1977. p. 44.

[86] Because UFOs have the ability to accelerate and decelerate so quickly—faster than the eye can follow—this ability is typically given as the explanation for the reports of materializing and dematerializing UFOs. And this is probably the correct explanation, assuming that the UFO involved is physical.

[87] Vallee, Jacques. Anatomy of a Phenomenon. Ace Books, New York, 1965. p. 21.

[88] Although typically classified in the UFO literature simply as UFOs—because they are seen as unidentified objects moving thru the sky—the smallest objects, typically seen as small balls of light less than a meter in size (and which are sometimes seen moving in formation, and are often seen moving to and from a larger UFO), are, apparently, individual beings. For example: “Also common within abduction reports is the ball-of-light visitation. They have been dubbed ‘bedroom lights’ by UFO researchers. Sometimes the glowing ball will dissipate and disgorge an alien entity. At other times, the alien entity will dissipate and become a luminous ball. Again, with the feeling of deja vu, I too had an encounter with a small light hovering before my bed when I was a child.” (Fowler, Raymond. The Allagash Abductions. Wild Flower Press, Tigard OR, 1993. p. 197).

The dissipation that UFO researcher Raymond Fowler is referring to in the above quote is probably the reorganization of that being’s bion-body, either to or from whatever shape that being assumes when it is about to interact with a human. For us humans, our bion bodies are composed of cell-controlling bions, and one’s projected bion-body keeps its human shape (see subsection 5.2.3). However, the bions composing a UFO occupant’s bion-body are not cell-controlling bions, and its bion-body can apparently assume different shapes, presumably under the control of that being’s awareness/mind. Perhaps the undifferentiated shape of a ball is more conducive for high-speed travel, and the beings typically adopt that shape when they want to change locations quickly. In either case, whether the being appears as a ball or in some alien form, and whether the being is flying thru the air or moving about on the ground, the movement ability of its bion-body comes from the bions of that bion-body using the learned-program statement move_this_bion().

The question arises as to why the beings sometimes appear as a ball of light, instead of simply remaining invisible. Presumably, the bions in a being’s bion-body have a learned program that calls a learned-program statement that can generate visible light, perhaps by ionizing molecules in the surrounding air in such a way as to cause the emission of visible light. The reason the beings may want to be lighted when they travel as a ball to and from their UFO ship at night could be the same reason that their UFO ship is often lighted. In general, when the beings are closely interacting with physical matter, they themselves, apparently, can see by means of visible light. Thus, in general, when it is nighttime and dark outside, having its bion-body generate visible light may be a being’s equivalent of a human using a flashlight to see when it is dark. In the specific case of Raymond Fowler, quoted above, saying that he “had an encounter with a small light hovering before my bed when I was a child”: that being most likely generated its light so that it could see better in that dark, nighttime room.

As explained above, the small balls of light are the beings themselves. However, the larger UFOs, from car-size on up, are, apparently, the actual physical ships used by these beings to transport various physical objects—such as physical computers, sensors, and recording devices—used by their civilization.


8.2 The UFO according to Hill

Aeronautical engineer Paul Hill (1909–1990) presents a detailed technical evaluation of UFOs in his book Unconventional Flying Objects.[89] His experience with UFOs included two sightings of his own, both made in Hampton, Virginia. The first sighting was on July 16, 1952:

In the early 1950s, I studied the UFO pattern and noticed their propensity for visiting defense installations, flight over water, evening visits, and return appearances. … Accordingly, expecting conformance to the pattern, at 5 minutes to 8 P.M., just at twilight, a companion and I arrived at the Hampton Roads waterfront, parked, and started to watch the skies for UFOs. … They came in side by side at about 500 mph [about 800 kilometers per hour], at what was learned later by triangulation to be 15,000 to 18,000 feet altitude [about 4500 to 5500 meters]. From all angles they looked like amber traffic lights a couple of blocks away, which would make them spheres about 13 to 20 feet [about 4 to 6 meters] in diameter. … Then, after passing zenith, they made an astounding maneuver. Maintaining their spacing of about 200 feet [about 60 meters], they revolved in a horizontal circle, about a common center, at a rate of at least once per second.[90]

Hill computes the acceleration of the revolving UFOs at about 122 g’s.[91] Hill’s second sighting, made in 1962, was of a single large dirigible-shaped UFO maneuvering over Chesapeake Bay, which he saw while he was riding as a passenger in a car:

… I was surprised to see a fat aluminum- or metallic-colored “fuselage” nearly the size of a small freighter, but shaped more like a dirigible, approaching from the rear. It was at an altitude of about 1000 feet [about 300 meters] .... It was moving slowly, possibly 100 mph [160 kilometers per hour] … It looked like a big, pointed-nose dirigible, but had not even a tail surface as an appendage. … Soon … it began to accelerate very rapidly and at the same time to emit a straw-yellow, or pale flame-colored wake or plume, short at first but growing in length as the speed increased until it was nearly as long as the object. Also, when it started to accelerate it changed from a level path to an upward slanting path, making an angle of about 5 degrees with the horizontal. It passed us going at an astounding speed. It disappeared into the cloud layer … in what I estimated to be four seconds after the time it began to accelerate. The accelerating distance was measured by the car odometer to be 5 miles [8 kilometers].[92],[93],[94]

Hill computes the acceleration of this dirigible-shaped UFO at about 100 g’s. Its speed, when he last saw it, was about 9,000 mph (about 14,500 kilometers per hour, which is about 4 kilometers per second).[95] Although an acceleration of 100 g’s would kill a man, beings like the Caretakers have no physical body to crush, and would be safe.
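
As a quick arithmetic check of Hill’s two figures, using nothing more than standard kinematics and the quoted distances and times, the following short Python calculation reproduces both accelerations (the constant-acceleration assumption for the second sighting is mine):

    import math

    G = 9.81  # standard gravity, m/s^2

    # Sighting 1: two spheres revolving about a common center, spaced about
    # 200 feet apart, at least once per second: a = omega^2 * r.
    r = (200 / 2) * 0.3048        # orbit radius in meters (~30.5 m)
    omega = 2 * math.pi           # one revolution per second, in radians/s
    print(omega**2 * r / G)       # about 122 g's, matching Hill's figure

    # Sighting 2: from about 100 mph to top speed over 5 miles in about
    # 4 seconds, assuming constant acceleration: d = v0*t + (1/2)*a*t^2.
    d = 5 * 1609.34               # meters
    v0 = 100 * 0.44704            # m/s
    t = 4.0
    a = 2 * (d - v0 * t) / t**2
    print(a / G)                  # about 100 g's
    print((v0 + a * t) / 0.44704) # final speed, about 8,900 mph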

Assuming that a UFO is composed of physical matter, an acceleration of 100 g’s is not necessarily destructive to that UFO’s physical content. And Hill points out that the USA military has self-guiding cannon shells that contain electronics, sensors, and maneuverable flight surfaces. These cannon shells are subjected to more than 7,000 g’s at launch, and are designed to survive 9,000 g’s.[96]

Based on the observation that UFOs tilt to move—which implies a single thrust vector—and based on various reported effects of UFOs, including the bending down and breaking of tree branches when a UFO flies low over them, Hill concludes that the UFO moves by means of a directed force field that repels all physical matter, in the same way that gravity attracts all physical matter.[97] This anti-gravity force field is unknown to modern-day physics.

My Analysis

I have not seen a UFO myself, but some of the reported reaction effects outside a UFO may be nothing more than the result of rapid changes in air pressure around the physical UFO, caused by the UFO’s rapid movements thru the air at low altitudes. Thus, for example, tree branches can be bent downward by the wind caused when a UFO quickly moves close overhead and stops, and again when that UFO quickly leaves that location.

Regarding what moves the physical UFO: Instead of invoking an anti-gravity force field, which probably doesn’t exist, a physical UFO, in theory, could have some part of itself infused with bions whose learned programs would use the push_against_physical_matter() statement to move that physical UFO. However, as explained in section 7.6, bions cannot be directly programmed by any civilization, so how would those bions be programmed to move that UFO as desired by the UFO’s occupants? Alternatively, one can suggest that one or more of the UFO occupants themselves are using their bion-bodies to push against the physical UFO and move it. As described in section 8.1, the UFO occupants can change the shape of their bion-bodies. Thus, one can imagine one or more of these beings flattening their bion-bodies to cover the surface of one or more large metal plates that are firmly attached inside the physical UFO, and then pushing against those metal plates as needed, to get the wanted movement of the entire physical UFO and its physical contents.

As stated in subsection 3.8.8, there is no push-back against a bion when it pushes against physical matter by using the push_against_physical_matter() statement. This means that a UFO occupant, regardless of whether it is inside or outside its physical UFO, can continuously push against whatever part of that UFO its bion-body is very close to (recall that the push_against_physical_matter() statement has a very short range, estimated at less than one-tenth of a millimeter, hence the need to be very close). That UFO occupant can thereby apply a force against that UFO part without any equal and opposite force pushing back against its bion-body. And, assuming the UFO part being pushed against is attached to the main structure of the UFO strongly enough to bear the total force applied to it, then, with enough force applied—other UFO occupants helping with the pushing if needed—the entire UFO can be moved. In the case of a UFO moving thru Earth’s atmosphere, a constant upward pushing against a horizontal metal plate would be needed to counteract the downward pull of gravity.
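
To make the force bookkeeping concrete, here is a minimal Python sketch; the ship mass is my own hypothetical number, chosen only for illustration:

    G = 9.81                # m/s^2
    ship_mass = 20000.0     # kg; a hypothetical small UFO, for illustration only

    def required_push(upward_acceleration):
        # Total upward force (newtons) needed against the horizontal plate,
        # to hover (upward_acceleration = 0) or to climb. Because there is
        # no push-back on the occupants, only the ship's force balance matters.
        return ship_mass * (G + upward_acceleration)

    print(required_push(0.0))      # hovering: about 196,000 newtons
    print(required_push(100 * G))  # a 100-g climb: about 101 times the hover force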

It may sound ridiculous to suggest that the beings themselves are pushing against their physical UFO to move it, but I do believe this is the most likely explanation. It reminds me of the story of Roman galley slaves chained to their oars. It does sound somewhat primitive to suggest that the presumably very advanced civilization behind these UFOs can’t do better in terms of propulsion than having to use their own people to, in effect, row the boat. But what is the alternative? The only alternative, in Earth’s atmosphere, is to use aircraft wings and a physical propulsion method such as propellers, jets, or rockets. That means a much greater weight for the physical UFO because, in addition to the weight of the wings, the UFO would have to carry those propulsion engines and all the fuel those engines need. It also means a much longer acceleration/deceleration time and a much slower top speed than what is reported for UFOs. Also, with a physical propulsion method comes the need to manufacture the fuel, and the need to do periodic maintenance on the engines to keep them running smoothly. My guess is that in the distant past, the civilization of the UFO occupants developed such physical propulsion methods and experimented with them, but in the end decided that, all things considered, it was better to just move their physical ships themselves. Note that in the case of their physical UFOs traveling in outer space, away from the Earth’s gravity, they would only have to push for a short time to get their physical UFO moving very fast in the wanted direction, and then they can just coast to their destination without having to keep pushing. This assumes that they sometimes travel elsewhere in our solar system.


footnotes

[89] Hill, Paul. Unconventional Flying Objects: A Scientific Analysis. Hampton Roads Publishing, Charlottesville VA, 1995. (Hill’s book, although completed in 1975, was not published until 1995, five years after his death.)

[90] Ibid., pp. 44–45.

[91] Ibid., p. 48.

[92] Ibid., pp. 175–176.

[93] According to Hill’s analysis (Ibid., pp. 53–82, 179–180), the plume emitted by this dirigible-shaped UFO is the result of the ionization of the air that moves into the wake of this UFO. This ionization is caused by soft x-rays, presumably emitted as a consequence of the UFO’s propulsion system. The plume—although it looks like a flame—is not a flame: there is no burning, and the plume is not hot. The plume lengthens as the UFO moves faster thru the air, because there is a relaxation time for the ionization.

According to Hill, this emission of soft x-rays—primarily in the direction of the UFO’s thrust vector—is a common feature of UFOs, and this accounts for the reported instances of radiation sickness in those persons who get too close to the outside of a UFO for too long. The ionization plume is not normally visible during daylight, but is visible under low-light conditions. For example, a saucer-shaped UFO hovering at night can appear cone-shaped: the cone under the saucer is the ionized air beneath the saucer (Ibid., pp. 144–145). In general, the ionization around a UFO tends to interfere with the ability to clearly see the surface of that UFO.

[94] According to Hill, he heard no noise from this dirigible-shaped UFO, even though it was moving—when he last saw it—at supersonic speed. According to Hill’s analysis (Ibid., pp. 181–218), as the UFO moves at supersonic speeds thru the atmosphere, both the lack of a sonic boom and the apparent lack of any significant heating of the UFO are due to the same cause: the same type of force field used to move the UFO is also used to move the air smoothly around the UFO.

[95] Ibid., pp. 48–49.

[96] Ibid., p. 49.

[97] Ibid., pp. 98–118.


8.3 The UFO Occupants

According to the UFO literature, UFO occupants come in different humanoid shapes and sizes. Regarding shape, the occupants have more or less the basic humanoid shape: two legs, two arms, a head, and bilateral symmetry. Regarding size, the UFO occupants are typically described as being small, ranging from about 3 to 5 feet in height (1 to 1½ meters).

There are reports that UFO occupants abduct people. In premodern times, when UFO occupants wanted to abduct someone, they typically appeared to the abductee as dwarfish people. These occupants would then play a ruse on the abductee, inviting him to come along with them, either to provide help of some kind or to participate in their celebrations. Some such excuse would be made, to help win the abductee’s initial cooperation in his own abduction. The people at the time believed these occupants to be members of an advanced human race that lived on mountains, in caves, or on islands: in places not inhabited by ordinary people. This deception was used in Europe until as late as the 19th century, when the practice died out completely, having become unbelievable in modern times. Jacques Vallee, in Dimensions (quoting Walter Evans-Wentz, who wrote a thesis on Celtic traditions in Brittany, and a book in 1909 titled The Fairy-Faith in Celtic Countries):

The general belief in the interior of Brittany is that the fees once existed, but that they disappeared as their country was changed by modern conditions. In the region of the Mene and of Erce (Ille-et-Vilaine) it is said that for more than a century there have been no fees and on the sea coast where it is firmly believed that the fees used to inhabit certain grottos in the cliffs, the opinion is that they disappeared at the beginning of the last century. The oldest Bretons say that their parents or grandparents often spoke about having seen fees, but very rarely do they say that they themselves have seen fees. M. Paul Sebillot found only two who had. One was an old needlewoman of Saint-Cast, who had such fear of fees that if she was on her way to do some sewing in the country and it was night she always took a long circuitous route to avoid passing near a field known as the Couvent des Fees. The other was Marie Chehu, a woman 88 years old.[98]

Regarding the UFO literature at the end of the 20th century, reports of alien abduction are common, but these reports are mostly based on memories recovered by the use of hypnosis, and for that reason are unreliable.[99] In older literature, there does not seem to be much regarding what happens during an alleged abduction, because “the mind of a person coming out of Fairy-Land is usually blank as to what has been seen and done there.”[100]

Although UFO occupants have apparently been seen collecting rocks, soil, and plants, in recent times almost no one has reported seeing them collecting farm animals. However, the UFO literature includes claims by some researchers that UFO occupants are responsible for so-called cattle mutilations, which are characterized by a recently dead animal that is missing parts of its body, such as “sex organs, tongues, ears, eyes, or anuses”[101]. But the explanation given by others, that the culprits are small animals that preferentially eat the exposed soft parts of recently dead cattle, sounds more convincing.


footnotes

[98] Vallee, Jacques. Dimensions. Ballantine Books, New York, 1988. pp. 70–71.

[99] Psychologist William Cone describes the typical expectation that a subject has regarding hypnosis (Randle, Kevin, Russ Estes, and William Cone. The Abduction Enigma. Forge, New York, 1999):

Most people who undergo hypnotic regression believe that the unconscious has recorded everything and that hypnosis can bring those memories to the surface. This becomes a self-fulfilling prophecy. They know they are supposed to remember something, and so they do. [Ibid., p. 334]

But the idea that memory is a complete and accurate recording of events is simply wrong:

Hundreds of studies have shown that this idea is not true. Memory is not recorded but seems, according to the research, to be stored in a highly complex manner consisting of impressions, ideas, and feelings filtered through our own belief system. Each time someone reaches for a memory, it is not “played back” but reconstructed. … Furthermore, according to research, as time goes by, memories are modified to fit the beliefs of the society and world around us. [Ibid., p. 333]

Another problem with hypnosis is that leading the subject is unavoidable:

The truth is that it is impossible not to lead someone under the influence of hypnosis. A question as innocent as, “What happened next?” presupposes that something else happened, but more important, primes the subject to continue the narrative. [Ibid., p. 337]

Or what about David Jacobs? According to those who have witnessed his sessions, he doesn’t say much as he interrogates the victims of abduction. But for those who have been privileged to hear the tapes of those sessions, it is clear what he is doing. When the abductee strays from what Jacobs believes to be the norm, he makes no audible comment. However, when the subject touches on a point in which he believes, he nods and says, “Uh-huh.” It doesn’t take the abductee long to pick up on the cues and begin to massage the tale for the verbal approval of Jacobs. [Ibid., pp. 347–348]

Because of the various ways, subtle and otherwise, that a subject can be led during hypnosis, Cone draws the very reasonable conclusion that leading the subject causes the similarities between the reported abduction stories: the abduction researchers are, in effect, working from the same script, and they lead the subject to give the expected account. The end result is that the abduction researchers can point, and do point, to these similar accounts, and claim that this similarity validates the abduction stories as accounts of real events.

Another consideration regarding abduction accounts is the question of what motivates a person to play the role of an abductee. Cone notes the interesting detail that “gay men and women are overwhelmingly represented in the abduction population” (Ibid., p. 292). And regarding a woman claiming a lost pregnancy—the typical story is that her alien abductors artificially inseminated her, and then removed the resultant fetus sometime in the next few months:

The psychological literature is full of reports on why women who cannot conceive believe that they have, through some miracle, become pregnant. Such a belief fulfills a real psychological need in these women. [Ibid., p. 326]

[100] Vallee, Jacques. Passport to Magonia. Henry Regnery Company, Chicago, 1969. p. 87. (Jacques Vallee is quoting Walter Evans-Wentz.)

[101] Thompson, op. cit., p. 129.


8.4 Identity of the UFO Occupants

According to the UFO literature, the UFO occupants communicate with people telepathically. For telepathic communication to work, the learned programs that are directly involved in the telepathic communication must be either the same or very close to being the same in the minds of both parties. The learned programs in the minds of both parties must agree, at least in large part, as to the format and meaning of the message texts, sent and received, that carry that telepathic communication. This sameness of learned programs is consistent with the UFO occupants being the Caretakers. Presumably, these learned programs were, in effect, copied from Caretakers to humans in the distant past when humanity began.
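
As a loose programming analogy (the analogy and every name in it are mine, and are not part of this book’s model), two minds whose learned programs agree on message format are like two programs that share a message schema: decoding fails the moment the formats differ. A minimal Python sketch:

    SHARED_SCHEMA = {"version": 1, "fields": ["sender", "concept", "emphasis"]}

    def encode(schema, sender, concept, emphasis):
        # Build a message text in the agreed-upon format.
        return {"version": schema["version"], "sender": sender,
                "concept": concept, "emphasis": emphasis}

    def decode(schema, message):
        # Decoding works only if both parties use the same format.
        if message.get("version") != schema["version"]:
            raise ValueError("incompatible formats: message cannot be understood")
        return {field: message[field] for field in schema["fields"]}

    message = encode(SHARED_SCHEMA, "caretaker", "greeting", "gentle")
    print(decode(SHARED_SCHEMA, message))  # succeeds because the schemas match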

In contrast to humans, the UFO occupants—like the Caretakers—are composed solely of intelligent particles. Thus, without the burden of a physical body, the UFO occupants are free to pass thru physical walls and closed doors, and to shape-shift and assume different appearances. This shape-shifting ability may include the ability to form the appearance of clothing, although on certain occasions actual physical clothing may be worn to better deceive the target human(s) into thinking that the UFO occupants have a physical body like humans do.

That UFOs are described in old historical records is consistent with the UFO occupants being the Caretakers, because the Caretaker civilization is assumed to be extremely old, and is probably older than the beginning of organic life on Earth more than 3½ billion years ago. The collecting of rocks and soil by UFO occupants, although not necessarily a Caretaker function, can be a Caretaker function, because various biosphere-related chemicals, bacteria, and other organisms are typically found on rocks and in soil.

In conclusion, the UFO occupants are the Caretakers.

Regarding physical UFOs, it may seem contradictory that non-physical beings have physical flying ships. But these physical flying ships are used to hold and transport physical objects that these non-physical beings use, such as physical computers, sensors, instruments, and recording devices. These physical flying ships are not needed to hold and transport the non-physical beings themselves, because, in general, these non-physical beings can fly very fast on their own. In addition, their physical flying ships can be used to safely hold and transport plants and animals that have physical bodies, including humans. And another possible use, assuming some of their physical ships are designed for bulk transport, is transporting physical material in bulk to the Earth or elsewhere in our solar system. A possible example of such bulk transport is the transporting of comet ice to Earth, dumping that ice while still in space but close enough to the Earth that the dumped ice will be pulled down to the Earth by the Earth’s gravity (see section 7.6).

8.5 Interstellar Travel

Presumably, the Caretakers can and do travel within our solar system. However, travel to other stars is much less certain. Even if the Caretakers can do it, it would be a time-consuming trip, made at less than lightspeed.[102] Also, in another solar system, the learned programs in the minds of whatever intelligent beings are there may be different enough from those of the Caretakers to make personal interaction with them difficult (for example, telepathic communication with them may be impossible due to programming differences). Thus, solar systems are probably fairly isolated from each other, even for the Caretakers.

However, since the Caretaker civilization is very old, perhaps the Caretakers, in the past, launched automated physical ships (without any Caretakers onboard) that have physical propulsion systems, ships that took thousands or many thousands of years to go to one or more nearby solar systems, record what was found there, and then return to our solar system so that the Caretakers could review the results. The limitation of this automated physical approach is that it would only report on the physical details of a visited solar system, and not report on whatever intelligent-particle beings inhabit that visited solar system, with the possible exception of those intelligent-particle beings, if any, that have physical bodies.


footnotes

[102] If any of the Caretakers travel outside our solar system to another solar system, then they probably use a physical ship whose physical sensors and computers reliably track the star of that other solar system and provide course-correction info when needed, using a physical display that the Caretakers can see, so that whichever Caretakers are currently responsible for pushing the physical ship (section 8.2) can push as shown by that physical display, to make that course correction. And likewise when slowing down the physical ship as it arrives at that destination solar system.

As the saying goes: When in Rome, do as the Romans do. So, to reliably navigate to a remote physical object (in this case a distant star and its solar system), use other physical objects that respond to that destination physical object. More specifically, use a physical ship that has physical sensors that detect the electromagnetic emissions from that solar system’s star.

And this reason to use a physical ship to navigate to a distant solar system also applies to the use of physical flying ships by the Caretakers within our own solar system. For example, if the Caretakers want to go to some exact geographical spot on the Earth for whatever reason, a physical ship guided by a physical computer that is processing data from physical sensors (such as accelerometers) can show the Caretakers, by means of a physical display as mentioned above, what pushing is needed to get that physical ship to that geographical spot, regardless of whether any of the Caretakers in that ship know the terrain and could fly directly to that geographical spot by themselves.

Also, regarding traveling to a distant solar system: the top speed at which the Caretakers, or an awareness/mind in general, can travel may be much lower than lightspeed, because there is much more data to copy from one computing element to an adjacent computing element when moving an intelligent particle thru 3D space than when moving a common particle thru 3D space. Assuming that there is a speed limit for an awareness/mind that is much lower than lightspeed, my guess is that in the distant past the Caretakers got close to this speed limit as they attempted to push a physical ship faster and faster, until their learned programs were no longer running fast enough to push their physical ship any faster. In this case, for each computing element that briefly holds a bion that is part of a pushing Caretaker’s bion-body, that computing element is spending so much of its processing time, because of the current speed at which that bion is moving thru 3D space, just moving that bion to an adjacent computing element, that there is not enough processing time left over to run that bion’s learned programs enough to push that physical ship any faster.
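
To illustrate this processing-budget tradeoff, here is a minimal toy model in Python; the structure and the numbers in it are my own illustration, not something specified by the computing-element reality model. Each cycle, the copying cost grows with the bion’s speed, and whatever budget remains runs the learned programs that produce further pushing, so the speed converges on a hard limit:

    BUDGET = 1.0                 # processing budget per computing element, per cycle
    COPY_COST_PER_SPEED = 0.002  # hypothetical copy cost, per unit of speed

    def push_capacity(speed):
        # Fraction of the budget left for learned programs at a given speed.
        copy_cost = min(BUDGET, COPY_COST_PER_SPEED * speed)
        return BUDGET - copy_cost

    speed = 0.0
    for cycle in range(5000):
        # The push a bion can add each cycle shrinks as copying eats the budget.
        speed += push_capacity(speed)

    # Converges on BUDGET / COPY_COST_PER_SPEED = 500, the model's speed limit.
    print(round(speed, 2))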


9 The Human Condition

This chapter considers humanity as a whole. The chapter sections are:

9.1 The Age of Modern Man according to Cremo and Thompson
9.2 The Gender Basis of the Three Races
Some Additional Evidence for the Gender Basis of the Three Races
9.3 The Need for Sleep
9.4 A Brief Analysis of Christianity
9.5 Karma
9.6 Orgasm
9.7 Allocation Changes during Growth and Aging

9.1 The Age of Modern Man according to Cremo and Thompson

Michael Cremo and Richard Thompson are the authors of The Hidden History of the Human Race.[103] The basic case made by Cremo and Thompson is that ever since the Darwinian theory of man’s evolution became the dominant theory in the 19th century, the validity of archeological finds—including issues of dating—has been judged based on how well those finds fit into the Darwinian theory.[104] For example:

This pattern of data suppression has been going on for a long time. In 1880, J. D. Whitney, the state geologist of California, published a lengthy review of advanced stone tools found in California gold mines. The implements, including spear points and stone mortars and pestles, were found deep in mine shafts, underneath thick, undisturbed layers of lava, in formations ranging from 9 million to over 55 million years old. W. H. Holmes of the Smithsonian Institution, one of the most vocal critics of the California finds, wrote: “Perhaps if professor Whitney had fully appreciated the story of human evolution as it is understood today, he would have hesitated to announce the conclusions formulated [that humans existed in very ancient times in North America], notwithstanding the imposing array of testimony with which he was confronted.” In other words, if the facts do not agree with the favored theory, then such facts, even an imposing array of them, must be discarded.

This supports the primary point we are trying to make in The Hidden History of the Human Race, namely, that there exists in the scientific community a knowledge filter that screens out unwelcome evidence. This process of knowledge filtration has been going on for well over a century and continues to the present day.[105]

Drawing largely from papers published in the scientific literature, Cremo and Thompson present a wide variety of evidence—including stone tools and complete skeletons—for the existence of modern man in remote times. According to Cremo and Thompson, the physical evidence shows that modern man has been on Earth for many millions of years.[106]

In a follow-up book, Forbidden Archeology’s Impact, Michael Cremo comments on why the “knowledge filter” has been so pervasive, concealing the great antiquity of the human race:

The current theory of evolution takes its place within a worldview that was built up in Europe, principally, over the past three or four centuries. We might call it a mechanistic, materialistic worldview. … Historically, I would say that the Judeo-Christian tradition helped prepare the way for the mechanistic worldview by depopulating the universe of its demigods and spirits and discrediting most paranormal occurrences, with the exception of a few miracles mentioned in the Bible. Science took the further step of discrediting the few remaining kinds of acceptable miracles, especially after David Hume’s attack upon them. Essentially, Hume said if it comes down to a choice between believing reports of paranormal occurrences, even by reputable witnesses, or rejecting the laws of physics, it is more reasonable to reject the testimony of the witnesses to paranormal occurrences, no matter how voluminous and well attested. Better to believe the witnesses were mistaken or lying. … the presentation of an alternative to Darwinian evolution depends upon altering the whole view of reality underlying it. If one accepts that reality means only atoms and the void, Darwinian evolution makes perfect sense as the only explanation worth pursuing.[107]


footnotes

[103] Cremo, Michael, and Richard Thompson. The Hidden History of the Human Race. Govardhan Hill Publishing, Badger CA, 1994. (The Hidden History of the Human Race is the abridged version of Forbidden Archeology: The Hidden History of the Human Race published in 1993.)

[104] The basics of Darwin’s theory of evolution (section 7.3) do not require that the appearance of modern man be recent. However, because the fossil record shows different ape-like creatures alive during the last few million years, and because modern man is assumed by the theory to be an evolution from ape-like predecessors, the assumption is made that modern man appeared only recently, so as to allow as much time as possible for the randomness of Darwinism to make changes in the ape-like predecessors. Thus, the first appearance of modern man is typically dated within the last 100,000 years (first-appearance dates of 30,000 years ago in Europe, and 12,000 years ago in North America, are common).

Assigning a recent date for the appearance of modern man, besides conforming to Darwinian thought, also has the advantage of avoiding an unpleasant question: If modern man has been on Earth for millions of years, what has happened to all the previous human civilizations that one might expect to have existed during the course of those millions of years?

Unfortunately, it turns out that there is a very good answer to this question: civilization-destroying comets and asteroids hit the Earth on a more or less regular basis. For example, astronomer Duncan Steel roughly estimates a civilization-destroyer—an impactor that would, in effect, blast mankind back into the stone-age—as a comet or asteroid 1 to 2 kilometers in diameter. The impact energy of a 1-kilometer-wide comet or asteroid is roughly equivalent to the explosive force of 100,000 megatons of TNT. These civilization-destroyer impacts happen roughly once every 100,000 years (Steel, Duncan. Rogue Asteroids and Doomsday Comets. Wiley, New York, 1995. pp. 29–31).
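
As a rough order-of-magnitude check of that 100,000-megaton figure (the density and impact speed below are my own assumed typical values, not Steel’s), in Python:

    import math

    diameter = 1000.0    # meters (a 1-kilometer impactor)
    density = 2500.0     # kg/m^3; typical rocky asteroid (assumed)
    speed = 25000.0      # m/s; a plausible impact speed (assumed)

    mass = density * (4 / 3) * math.pi * (diameter / 2)**3
    energy_joules = 0.5 * mass * speed**2

    MEGATON_TNT = 4.184e15              # joules per megaton of TNT
    print(energy_joules / MEGATON_TNT)  # roughly 100,000 megatons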

Also, over the last 20,000 years, there has been an ongoing breakup in the inner solar system of a giant comet—the fragments of this breakup constitute the Taurid meteor stream (Ibid., pp. 132–136). The presence of this Taurid stream has increased the likelihood of a civilization-destroying impact. And, apparently, from this stream, a civilization-destroyer impacted in the ocean about 11,000 years ago. Among other things, this impact explains the Atlantis myth, the many flood myths, and why mankind was recently in a stone-age (Hancock, Graham. The Mars Mystery. Crown, New York, 1998. pp. 250–258).

Also relevant to this discussion is my essay Debunking the Ice Age. This essay deals specifically with the civilization-destroying impact of roughly 11,000 years ago.

[105] Cremo and Thompson, op. cit., pp. xvii–xviii. (The bracketed note is in the original.)

[106] If man has been on Earth for many millions of years, then a question arises regarding technology: Have any previous human civilizations attained at least the same level of technology as that attained by man at the end of the 20th century? Given the standard belief at the end of the 20th century that oil, natural gas, and black coal are non-renewable resources (supposedly because these resources derive from organic debris buried about 200 million years ago), the likely answer is no, because if the answer were yes, then those previous civilizations would have already depleted these resources. However, professor Thomas Gold has debunked this belief that oil, natural gas, and black coal derive from buried organic debris (Gold, Thomas. The Deep Hot Biosphere. Copernicus, New York, 1999):

Nobody has yet synthesized crude oil or coal in the lab from a beaker of algae or ferns. A simple heuristic will show why such synthesis would be extremely unlikely. To begin with, remember that carbohydrates, proteins, and other biomolecules are hydrated carbon chains. These biomolecules are fundamentally hydrocarbons in which oxygen atoms (and sometimes other elements, such as nitrogen) have been substituted for one or two atoms of hydrogen. Biological molecules are therefore not saturated with hydrogen. Biological debris buried in the earth would be quite unlikely to lose oxygen atoms and to acquire hydrogen atoms in their stead. If anything, slow chemical processing in geological settings should lead to further oxygen gain and thus further hydrogen loss. And yet a hydrogen “gain” is precisely what we see in crude oils and their hydrocarbon volatiles. The hydrogen-to-carbon ratio is vastly higher in these materials than it is in undegraded biological molecules. How, then, could biological molecules somehow acquire hydrogen atoms while, presumably, degrading into petroleum?[Ibid., p. 85]

Instead of deriving from buried organic debris, the underground deposits of oil, natural gas, and black coal derive from a continuously upwelling flow of hydrocarbons—primarily in the form of methane—from much greater depths within the Earth:

At high pressures, hydrocarbons represent the stable configuration of hydrogen and carbon. Hydrocarbons should therefore form spontaneously in the upper mantle and deep crust. But at low pressures at or near the earth’s surface, liquid hydrocarbons are supercooled, unstable fluids. As they upwell into lower-pressure regimes, they begin to dissociate, and this means they begin to shed hydrogen. This is exactly what we see in the vertically stacked patterns of a hydrocarbon region that go from methane at the deepest levels to oils and eventually to black coals at the shallowest levels. Each step in that stack is one of further hydrogen loss. [Ibid., p. 130]

This idea that buried organic debris is not the source of oil, natural gas, and black coal did not begin with Thomas Gold; in large part he is only echoing what has been the consensus opinion in Russia since the 1960s, a consensus that has for decades guided Russia’s successful oil-exploration efforts in rock strata that the organic-debris theory claims should have no oil.

[107] Cremo, Michael. Forbidden Archeology’s Impact. Bhaktivedanta Book Publishing, Los Angeles, 1998. pp. 337–338. (Michael Cremo is quoting himself, from a letter he wrote in 1993.)


9.2 The Gender Basis of the Three Races

There are three commonly recognized races: african (more specifically, sub-Saharan african is meant, aka blacks), caucasian, and oriental. And there appears to be a strong correlation between the comparative traits of these three races, and the comparative traits of the two human genders: men and women. Briefly, the correlation is that on a scale from masculine to feminine, the three races are ordered: african (the most male race), caucasian, and oriental (the most female race). Consideration of specific traits follows—and when speaking of specific traits as they appear in each gender and race, averages and the average case are always assumed, because, of course, there are always exceptions when considering individuals.

Regarding physical size, speed, and strength, obviously men are larger, faster, and stronger than women. For the three races, obviously the oriental race is the smallest and weakest. Less obvious is the difference between the african race and the caucasian race. In general, it appears that the african race can move its limbs (arms and legs) faster than the caucasian race, but the caucasian race does better in pure strength events such as weightlifting. In general, fast movement versus slower movement but greater strength, as manifested in the african race compared to the caucasian race, is a tradeoff with no clear gender difference, in terms of making one of these two races more male than the other. The needs of surviving in their different physical environments probably, in effect, take precedence over there being a clear gender difference between these two races with regard to body size, speed, and strength.

Regarding life expectancy, women have a longer life expectancy than men. For the three races, when nutritional and sanitary conditions are the same, the african race has the shortest life expectancy, the oriental race has the longest life expectancy (for example, at the end of the 20th century, the Japanese have the longest life expectancy in the world), and the caucasian race is in-between.

Regarding coloring, men tend to be darker than women. For the three races, obviously the african race is the darkest (a fraction of this race is completely black, which is not seen in the other two races). Also, note the geographic bias affecting skin coloring: living in sunny lands tends to darken the skin, and living in dark lands (less sunlight) tends to lighten the skin. Thus, for example, the caucasian Finns, who live further north (in dark lands) than most people, are very light-colored (very pale); whereas the caucasians of the Indian subcontinent (in sunny lands) are dark—much darker than the oriental Vietnamese, who live at the same latitudes—but not as dark and black as the africans at those same latitudes. Once the geographic bias of coloring is discounted—by comparing races at the same latitudes—it becomes apparent that the caucasian race is darker than the oriental race.[108]

Regarding a tendency for violence, obviously men are more violent than women. For the three races, an examination of worldwide crime statistics shows the african race as the most violent, the oriental race as the least violent, and the caucasian race is in-between.

Regarding general intelligence, women obviously have more verbal aptitude (use and understanding of language, both spoken and written) than men, and verbal aptitude is a major component of general intelligence. For the three races, the african race scores lowest on IQ tests, the oriental race scores highest (for example, the Japanese score highest in the world on standard IQ tests), and the caucasian race is in-between.

Another factor that probably does much to lower the verbal aptitude and overall intelligence of the african race is that there is probably a much higher percentage of new and recently new humans among the african race compared to the caucasian race and oriental race. I’m specifically referring to what I said in section 6.3 about the solitons of new humans needing at least several human lifetimes, or perhaps many human lifetimes, to fully learn how to work with the human mind, including the language abilities of the human mind, which far exceed the language abilities of whatever animal a typical human was before acquiring a human mind and having his first human life.

Regarding why there would be a much higher percentage of new and recently new humans among the african race compared to the caucasian race and oriental race: On average, men are less verbal and language-capable than women. Because of the “birds of a feather, flock together” effect described in subsection 6.3.1, it follows that most new and recently new humans, having been animals with very little language ability before becoming human, would have their early human lives as africans, because africans are the most male race, being less verbal and language-capable compared to the other two races.

Given the gender basis of the three races, it seems likely that the three races will endure far into the future, because, in effect, the three races represent the large-scale range of gender-difference that the total human population wants to express.

Given the discussion of the Caretakers in previous chapters, it is reasonable to assume that the Caretaker civilization is more intelligent than the most intelligent human nation (i.e., more intelligent than the Japanese).[109] And note that many of the gender differences in humans—such as the degree of talkativeness, and the desire to socialize—are mental differences that do not require having a physical body. And note that the Caretakers apparently have the same two genders as mankind. The reason why both the Caretakers and humanity have two genders is explained in section 9.6.

Some Additional Evidence for the Gender Basis of the Three Races

For most of the items in the following list, I have been exposed to much more data regarding africans compared to caucasians, and much less data regarding orientals compared to caucasians:


footnotes

[108] Regarding the two colors, black and white, the implication as to their psychological meaning is clear: black represents masculine qualities, and white represents feminine qualities. Thus, for example, a man dressed in black, dancing with a woman dressed in white, works (witness the many movies that use this dress scheme); whereas the opposite dress scheme, being a man dressed in white, dancing with a woman dressed in black, does not work, and is rarely seen.

[109] The implication is clear: on the gender scale, the Caretaker civilization is more feminine than the oriental race. And besides the assumed correlation of intelligence, there is also an apparent correlation regarding violence, because the Caretakers appear to be very nonviolent. And even though the Caretakers have no physical body, note the apparent correlation regarding size, as the Caretakers typically adopt a comparatively small size when they assume a form for interacting with humans.


9.3 The Need for Sleep

Sleep—in the sense of an organism becoming periodically inactive—is widespread thruout nature, and is not limited to the higher animals. For example: “many insects do rest during the day or night. These rests are called quiescent periods.”[110],[111] And: “The authors of the book The Invertebrates: A New Synthesis write: After activity there is need for rest, even for ‘busy bees’. Honey bees enter a state of profound rest at night, with remarkable similarities to the phenomenon of sleep.”[112] And: “Fish do have a quiescent period which can be called ‘sleep’. Tropical freshwater fish in home aquaria can be observed resting immediately after turning the lights on in a room which has been darkened for several hours.”[113] And: “Yes, frogs and toads sleep with their eyes closed. … Snakes, like all reptiles, do sleep. They are capable of doing this quite soundly despite the fact that they have no moveable eyelids. Moving your hand in front of the face of a sleeping snake will often not cause it to wake up for several seconds.”[114] And: “Sharks don’t sleep as we know it, but they do rest. Often they will come to a quiet bottom area and stay there motionless.”[115]

Why does sleep happen? The mathematics-only reality model has no explanation for sleep, because it denies the existence of intelligent particles, and there is nothing known about physical particles that implies the need for periodic shutdowns. However, unlike the mathematics-only reality model, the computing-element reality model does have an explanation for why intelligent particles sleep (common particles do not sleep):

When an intelligent particle is asleep, none of its learned programs are running (any of its learned programs that were running immediately before sleep have been stopped by the computing-element program in preparation for this sleep time). Then, during sleep, the learning algorithms of the computing-element program (section 3.6) are running for that intelligent particle, with the possibility of making changes to that intelligent particle’s learned programs. Thus, any changes to an intelligent particle’s learned programs (including any additions or deletions of learned programs) can only happen when that intelligent particle is asleep.[116]
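
As a minimal Python sketch of this phase separation (the class and names are my own illustration; the model itself is not specified as code), learned programs run only while the particle is awake, and modifications happen only while it is asleep:

    class IntelligentParticle:
        def __init__(self):
            self.awake = True
            self.learned_programs = [lambda: "respond to input"]

        def run_learned_programs(self):
            assert self.awake, "learned programs never run during sleep"
            return [program() for program in self.learned_programs]

        def sleep(self):
            # The computing-element program stops all learned programs first...
            self.awake = False
            # ...then the learning algorithms may add, delete, or modify them,
            # with no risk of changing a program while it is running.
            self.learned_programs.append(lambda: "newly adjusted response")
            self.awake = True  # wake with the updated programs in place

    p = IntelligentParticle()
    p.run_learned_programs()
    p.sleep()
    print(p.run_learned_programs())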

Given that sleep is a part of each intelligent particle—irrespective of the presence or absence of a common-particle body—it follows that all beings with an awareness/mind sleep. Thus, for example, the Caretakers sleep. And, for example, people in the afterlife sleep. And each organic life form—assuming it has at least one bion—sleeps. Thus, for example, bacteria sleep.

For a complex organism with many bions, the periods of sleep for those bions can be synchronized and/or unsynchronized as needed, in accordance with the needs of that organism. For example, each bion in a plant—which lacks a nervous system and associated mind—can probably sleep according to its own arbitrary schedule, without causing harm to the plant as a whole (at any one time, about the same percentage and distribution of the plant’s bions would be asleep—although for plants that rely on photosynthesis, probably a higher percentage of the bions are awake during daylight). But for those organisms that have a nervous system and associated mind, which controls the organism’s movements in its environment, a more or less synchronous sleeping of those bions would be the case, during which time the organism is perceived to be asleep, resting, quiescent.

For any bion, a longer sleep period means more time for the computing-element program to apply its learning algorithms to that bion’s learned programs. Regarding a human child and the learned programs in its mind—its mind being the owned bions of its awareness (soliton)—it seems likely that the need for modifications (such as minor adjustments) to a child’s learned programs would be greatest at its earliest age, given its new physical body and physical environment, and would then decline with age as its physical body and environment become less and less new, with less and less need for further adaptation by that child’s mind to its situation. And this appears to be the case: “It is well established that infants and children need much more sleep than adults. For example, infants need about 16 hours of sleep, toddlers about 12, and school age children about 10. … during puberty our need for sleep actually increases again and is similar to that of toddlers.”[117]


footnotes

[110] The web citations in this section are all from a website called The MAD Scientist Network, provided by the Washington University School of Medicine in St. Louis USA. The purpose of this website is to provide a forum where people can ask questions to be answered by scientists. The quoted selections—three of these quotes are slightly edited for improved readability (specifically, three commas and a missing to were added)—are from answers to questions asked by other persons (none of the questions were asked by me).

[111] From the post RE: insects, made by Kurt Pickett (Grad Student Entomology, Ohio State University).

At http://www.madsci.org/posts/archives/dec96/840907460.Gb.r.html

[112] From the post RE: ants and sleep, made by Keith McGuinness (Faculty Biology).

At http://www.madsci.org/posts/archives/dec96/841965056.Zo.r.html

Also regarding sleeping insects, the fruit fly (Drosophila) sleeps (Fly naps inspire dreams of sleep genetics. Science News, volume 157, number 8 (February 19, 2000): p. 117):

The researchers videotaped flies during rest periods to document the insects’ behavior. During the night, the flies crawled off to resting places and settled into what the researchers define as a sleep pose, slumped “face down,” Hendricks [the lead researcher] says. For about 7 hours every night, the flies stayed still except for a few small twitches of the legs and proboscis. As the evening progressed, it took louder and louder taps on the cages to rouse the insects.

In some sessions, the scientists kept the flies from their rest by tapping whenever the insects stayed still for more than a minute. The rest-deprived animals compensated by sleeping more over the next few days, as sleep-deprived people do.

[113] From the post RE: Do fish sleep?, made by Bruce Woodin (Staff Biology, Woods Hole).

At http://www.madsci.org/posts/archives/may96/827165604.Zo.r.html

[114] From the post RE: Do snakes eat their own eggs?, made by Kevin Ostanek (Undergraduate, Lake Erie College).

At http://www.madsci.org/posts/archives/mar97/853172239.Zo.r.html

[115] From the post RE: Sharks, made by Roger Raimist (Prof. Biological Sciences).

At http://www.madsci.org/posts/archives/may97/863470221.Zo.r.html

[116] Having a clear separation between the time when the learned programs can be running, and the time when they are being processed for possible modification by what is, in effect, a separate group of programs (the learning algorithms), reduces the overall complexity of the computing-element program, because synchronization and load-balancing issues between the two program groups are minimized. Also, by not running the learning algorithms when the learned programs are running, the learned programs get a larger share of the underlying processing power of each computing element that is currently holding that intelligent particle as it moves thru 3D space, and can consequently, in effect, do more during the time when they are running.

Alternatively, if there is no clear separation between the time when the learned programs can be running and the time when they are being processed for possible modification, then one faces the difficult problem of modifying a program while that program is still running. Given an arbitrary learned program that is running, and an arbitrary set of changes to be made to that learned program, how does one modify that learned program without corrupting its output when it finishes its current processing? The answer here isn’t as simple as just waiting for that running learned program to stop running: what if it doesn’t stop running? And even if it does stop, perhaps it takes input from, or gives output to, one or more other learned programs that are still running. The correct solution is to have a clear separation between the time when learned programs can be running and the time when they can be modified, and this is what actually happens.

[117] From the post RE: You are right, children are much more active in their sleep, made by Salvatore Cullari (Professor and Chair, Lebanon Valley College).

At http://www.madsci.org/posts/archives/aug97/866216598.Ns.r.html


9.4 A Brief Analysis of Christianity

Christianity is a religion that I was exposed to in my youth (see Chapter 10 for details regarding my experience with this religion). Christianity has a long history in both Europe and the USA, where I live. The central figure in Christianity is Jesus, whom Christianity presents as a historical person, but who is actually a fictional character.[118]

Because the setting for the Jesus story is Judea, it has long been assumed that Christianity had its origins in Judea and its people. However, there is a better argument that Christianity had its origins in the imperialistic goals of the Roman Empire, which had conquered Judea in the first century CE.[119]

Although Christianity was originally contrived and constructed to domesticate the recently conquered population of Judea, later in Roman history we see Christianity promoted by the Roman emperor Constantine as a religion for the entire Roman world and its peoples. And from there, over time, this imperialism-friendly religion of Christianity was imposed on the rest of Europe.

In the remainder of this section, other than a brief consideration of prayer at the end, I consider the underlying reason for some of the specific teachings and beliefs of the Christian religion that are not there to support imperialism, but instead support a very different goal of Christianity: maintaining and growing the number of Christians, so as to better support those people who directly live off Christianity as, in effect, its paid employees.

Regarding the afterlife: According to Christianity we each have only one physically embodied life, which is one’s current life, and upon death what awaits us is an eternal afterlife of pain and suffering, unless one is a Christian when one dies, in which case one will have an eternal afterlife of the opposite of pain and suffering. In effect, being a Christian means one accepts Christianity as one’s reality model. There are different phrases in use to denote acceptance of Christianity as one’s reality model. For example, in the USA during my own lifetime, perhaps the most commonplace is the short phrase “accept Jesus as your lord and savior”: the word Jesus is specific to Christianity, and the words lord and savior imply submission to Christianity as one’s reality model.

In a nutshell, Christianity says to the public: be a Christian and one will be endlessly rewarded with a very desirable afterlife; otherwise an endlessly punishing, very undesirable afterlife awaits. This is a classic “carrot and stick,” and its primary purpose is to motivate non-Christians to become Christians, so as to get the carrot and avoid the stick, and also to motivate current Christians to work at converting any non-Christians whom they care about, such as friends and family, so that those non-Christians will avoid an endlessly punishing afterlife. The end result of this carrot-and-stick approach is more Christians, which in general means more material support for those who directly live off Christians, including priests and pastors and the higher-ups in the church hierarchy.

Instead of having to accept the reality model of Christianity or of any other religion to have a good afterlife, the reality model presented in this book says that what one consciously believes about the afterlife during one’s physically embodied life has no substantial effect on what one’s afterlife experiences will be, during an afterlife measured in years or many years (not Christianity’s eternity) before one reincarnates, most likely as a human again.

Regarding Christianity’s position on sexual matters: Christianity has a long history of being hostile to sex for any purpose other than the production of children. Thus, given this emphasis on having children, Christianity, in general, has a history of being against birth control, abortion, infanticide, and homosexuality. The reason Christianity has these attitudes is that Christianity wants its current believers to have many children, because Christian parents are likely to take their young children to a Christian church. The end result of this early indoctrination of the child into a particular religion, in this case the Christian religion, is that it becomes much more likely—when compared to a child who had no such early indoctrination into that religion—that that child when adult will be a member of that religion and an active supporter of it (putting money in the collection plate, for example). Thus, what has guided Christianity’s position on sexual matters is the self-interest of those who directly live off Christianity, because, in general, more Christians means more material support for those who directly live off Christianity.

While on this subject of Christianity, consider Christianity’s emphasis on praying to its God and/or Jesus. Given the reality model presented in this book, silent prayer to oneself is only “heard” by one’s own unconscious mind. And one’s own unconscious mind, albeit under command by one’s awareness, is also the producer of that silent prayer sent to the awareness. Thinking that one’s silent prayer is being heard by a powerful being is only true in the sense that that powerful being is simply one’s own unconscious mind. By emphasizing prayer, those who directly live off Christianity—such as priests and pastors and the higher-ups in the church hierarchy—are, in effect, telling Christians to solve their personal problems with prayer instead of bothering them with those problems.


footnotes

[118] For those familiar with the Christian religion and its New Testament, which describes materializations and other miracles performed by Jesus, it would seem that Jesus was a god-man. However, subsection 3.8.7 explains why the learned-program statements for seeing and manipulating physical matter have a very short range. And, among other things, this very short range means that the alleged materialization miracles done by Jesus—such as his alleged materialization of bread and fish to feed many people—are not possible and are fiction. Also, many scholars have argued that Jesus is not a historical person. For example, Lars Adelskogh gives a good summary of this position:

Jesus Christ is the central figure of the Western civilisation, just as Muhammad is the central figure of the Arab civilisation, and Confucius, of the Chinese civilisation. These are trite observations. However, whereas we are quite positive that Muhammad and Confucius were historical figures, we are not in a position to say with certainty that Jesus Christ, as portrayed in the Gospels, ever existed.

Indeed, quite a number of scholars have come to the conclusion that Jesus is a mythical figure, no more real in any historical sense than Hercules or Dionysus, Sherlock Holmes or Donald Duck. … This revisionist school of Jesus research, if I may so call it, takes its stand on three basic facts:

  1. The complete absence of historical evidence for Jesus outside the New Testament. Contemporary authors, who ought to have heard and then written about him, if he was such a remarkable figure as the Gospels intimate, are silent.

  2. The complete, or almost complete, lack of originality of the teachings of Jesus as given in the Gospels. Essentially everything taught is found in the Old Testament, contemporary rabbinic literature, or so-called paganism, Hellenistic wisdom literature, pagan cults, etc.

  3. The many features that Jesus of the Gospels shares with several so-called pagan saviour gods, or godlike men, such as Asclepius, Hercules, Dionysus, Mithras, Krishna, and, of course, Gautama the Buddha.

[Referring to item 3:] These common features, or similarities, embrace so many essential aspects of the Jesus figure, his birth, his life, his actions, and his death, and often do so in such a striking manner, that you easily get the impression that Jesus of the Gospels, the new saviour god, is little more than a rehash of the older pagan saviour gods. [An address by Lars Adelskogh at the International Seminar “The Sanskrit and Buddhist Sources of the New Testament”, Klavreström, Sweden, September 11, 2003. At: http://www.jesusisbuddha.com/larsa.html]

Christian Lindtner, a Danish professor whose specialty is Buddhist studies, shows that much of the New Testament (its original written language is Greek) plagiarizes specific Buddhist texts that were written in Sanskrit. He describes his process of discovery:

In many ways this author agrees with the results arrived at by previous researchers in the field of CGS [Comparative Gospel Studies]. In general, however, these scholars have been satisfied if they could point out parallels, similar ideas, or similar motives.

This author asks for more. Parallels are not sufficient. To be on firm ground, we must “require close verbal similarity”—something that Derrett [a CGS scholar] … and virtually all other scholars, feel would be “to ask too much.”

When I insist that we must ask for close verbal similarity, I have a good reason for doing so. The main Buddhist source of the New Testament gospels is the bulky Sanskrit text of the Mulasarvastivadavinaya (MSV), and this text was simply not available to previous scholars, including Derrett—who was, as he writes, “shocked” when he received a copy of that text, first published in 1977, from me not long ago, after he had published his own book.

I had published a review of the MSV way back in 1983 in the journal Acta Orientalia, and, of course, read the Sanskrit text before preparing the brief review. Then I turned to other matters. Six or seven years ago, I turned to New Testament studies. One late evening it struck me that what I now was reading in Greek I had already read some years ago, but in Sanskrit. Could the MSV really be a source of passages in the New Testament? So I started comparing systematically the Greek with the Sanskrit. It was a thrill; I could hardly believe my own eyes.

Comparing, then, the two sources carefully word for word, sentence for sentence, motive for motive, for some years, I came to the firm conclusion that the New Testament gospels could be well described as ‘pirate copies’ of the MSV. Gradually it also became clear to me that other Buddhist texts had also been used by the otherwise unknown authors of the New Testament gospels. The most important source apart from the MSV, it is now clear to me, is the famous Lotus Sutra, known in Sanskrit as the Saddharmapundarikasutram. [Lindtner, Christian. “A New Buddhist-Christian Parable.” The Revisionist: Journal for Critical Historical Inquiry, volume 2, number 1 (February 2004): p. 13. See also Lindtner’s website on this subject, at: http://www.jesusisbuddha.com/]

[119] Consider the main idea of Joseph Atwill (Atwill, Joseph. Caesar’s Messiah: The Roman Conspiracy to Invent Jesus. CreateSpace, Charleston, 2011), that the Jesus story with its setting in Judea, and with its principal characters including Jesus being Jews, was an invention of the Roman empire in the first century CE as a psychological-warfare effort to help pacify the recently conquered country of Judea with a new, contrived religion, Christianity. In his book, Atwill presents a lot of evidence for this idea that Christianity is a Roman invention.

The main reason I believe Joseph Atwill is correct is that the social teachings that come out of the mouth of the Jesus character support how an empire, which holds captive one or more foreign peoples, would want those captive peoples to behave in that empire. Thus, as a tool of Roman imperialism and imperialism in general, this constructed religion of Christianity teaches thru its main character Jesus and his so-called apostles the following:


9.5 Karma

The basic idea of karma is that one’s actions have consequences. Good actions have good consequences, and bad actions have bad consequences. Also, the death of the physical body is not a barrier for karma. Some consequences may, in effect, be deferred until one’s next incarnation in a physical body.

Given the computing-element reality model, one can dismiss any suggestion that karma is some universal law of the universe that operates in some impartial and perfect way. Instead, karma is personal and depends on one’s own mind, and, in the revenge case, also depends on the person or persons seeking revenge. Negative karma (karma for bad actions) operates in two ways:

Although some of the misfortunes that befall one may be the result of karma, one should avoid oversimplifying and assuming that all misfortunes are the result of karma.

We are finite, and our minds are limited. Also, our physical bodies are fragile, very complex, and subject to damage, illness (typically caused by attacking pathogens or parasites that are too small to see with one’s eyes), and aging. Also, constructing and keeping alive the physical body, repairing it when needed, and defending it against pathogens and parasites when needed, is the job of cell-controlling bions, and these cell-controlling bions are separate from one’s mind; their cell-related activities cannot be directly controlled or influenced by one’s mind. Thus, for example, having the right mental attitude or belief system or way of thinking is, in general, not going to help one’s physical body when it is faced with physical problems.

Real accidents do happen (although some apparent accidents may be unconsciously arranged by one’s own mind and/or other minds). Also, one may be caught in some larger social process and/or economic situation which has nothing to do with one’s individual karma, but has a negative effect on one’s life. And, of course, the limitations and misfortunes of old age befall everyone who lives long enough to experience them.

Also, there is a lot of unpredictability in this world we live in. There are many unknowns and limited information, and much of the unpredictability comes from ourselves and other people, both people acting as individuals and people acting as members of one or more groups. One’s own mind, no matter how intelligent one may be, has a limited ability to collect and analyze data so as to anticipate future events and outcomes that may have a negative effect on one’s life and/or the lives of one or more other persons that one cares about. And even if one correctly foresees some specific misfortune for oneself and/or for others, one’s ability, and the ability of those others if any, to avoid that specific misfortune may be limited. This unpredictability of our world is only important because we each have a weak, fragile, and needy physical body, and, as a result, serious misfortunes, possibly very painful, can happen that have nothing to do with one’s individual karma. Just having a physical body, which at its worst can put one in extreme and prolonged pain, puts everyone who has a physical body at risk (the underlying reason for body pain is to alert the awareness that something is abnormal with the physical body and needs attention).

In general, because of our own complexity, life is complex, with many often-conflicting influences. Karma is only a part of what can influence our lives. And, as the saying goes: You have to take the good with the bad.


footnotes

[120] Just as all that one sees is a construction of one’s own mind (section 3.6), so is all that one hears a construction of one’s own mind. Thus, all voices that one may hear are constructed by one’s own mind, regardless of whether the text (what the voice is saying) has an internal origin (from within one’s own mind) or an external origin (from one of the sensory sources: either one’s physical hearing or telepathic hearing).

A typical person is familiar with two kinds of voices: the voice of one’s own thoughts (this is the same as the voice one hears when reading), and voices heard thru one’s physical hearing (whether hearing oneself talk or hearing others talk). Given the unerring ease with which one distinguishes between these two kinds of voices, and given that both types of hearing can take place simultaneously without interference between them, it follows that there are two different non-overlapping allocations of awareness-particle input channels (section 9.6) for carrying the two kinds of hearing to the awareness: one allocation carries the voice of one’s own thoughts, and the other allocation carries all the sounds, including voices, heard thru one’s physical hearing.

As a rule, the text for the voice of one’s own thoughts has an internal origin, and this voice has a soft and unobtrusive sound. A person may also have experience with hearing voices in one’s dreams. In this case, the dream voices have the same sound quality as voices heard thru physical hearing. In other words, they sound like normal voices, instead of sounding like the voice of one’s own thoughts. These dream voices are probably carried to the awareness over the same allocation used for carrying the sounds heard thru one’s physical hearing.

In the case of psychics who claim to hear normal voices, which they believe are telepathic communications from other minds (typically from the dead), such telepathic communication is certainly possible. However, alternatively, in at least some cases, the text may have an internal origin or a mixed origin. In either case, if the voice sounds normal (in other words, the voice sounds like voices heard thru one’s physical hearing) then that heard voice is probably carried to the awareness over the same allocation used for carrying the sounds heard thru one’s physical hearing.

Apparently, some people (typically women) who take too seriously their religious beliefs about submitting their will to God, Jesus, or whatever, may find themselves being ordered about by their own unconscious mind masquerading behind a normal-sounding voice. In effect, they abdicate the natural right of their awareness as ruler, and give that right to their unconscious mind, imagining that the voice they hear is the voice of God, Jesus, or whatever.

Schizophrenics who are tormented by accusatory voices claim to hear those voices as normal voices. But it should be clear that the text of the voices, regardless of how they sound and regardless of which allocation is carrying those voices to the awareness, has an internal origin, given such reported characteristics as the extreme monotony and repetition of the text. For these schizophrenics, the rebellion that they caused in their own mind has a voice (this rebellion, typically, is a consequence of actions in their previous incarnation).

Perhaps I should mention that my brother, who is two years younger than me, is schizophrenic, and that is why I know a lot about this subject of schizophrenia. He lives in a group home for schizophrenics.


9.6 Orgasm

My sister’s son (born 1980) has Tourette Syndrome (TS). In December 2001, knowing of my interest in his condition, he sent me an email in which he gives “a general synopsis that I wrote that covers some of the subtleties about TS that I will use when it is necessary to educate those around me”:

What is Tourette Syndrome?

Tourette Syndrome (TS) is a nervous system disorder characterized by involuntary, rapid, sudden movements or vocalizations called tics that occur repeatedly in the same way. TS is not degenerative in any way; it is not a sign of mental illness; it is not caused by poor parenting or abuse of any kind. TS usually begins in childhood and often continues throughout life; it is found across all ethnic backgrounds and at all socioeconomic levels; although socially awkward its expression is largely cosmetic.

About Tics

The expression of tics is unlimited and is unique to each person. The complexity of some tics sometimes makes it hard for others to believe the strange actions and inappropriate vocal utterances are not deliberate.

Examples are: facial grimaces, eye blinking; head jerking; shoulder shrugging; throat clearing; yelping noises; tongue clicking; snapping; touching other people or objects; self-injurious actions; copraxia (obscene gestures); coprolalia (obscene language); echolalia (repeating a sound or word just heard); mimicking someone’s mannerisms.

Tics occur many times a day usually in bouts waxing and waning in their severity and periodically changing in frequency, type and location.

Complex verbal tics are often triggered by a completely unrelated thought and do not represent what the person is thinking about.

Tics are suggestible. Merely the mention or sight of a specific tic may induce it.

Repressing tics is difficult and only increases the tension making the tics come out worse later.

Most of the time in situations where it would be socially inappropriate for certain tics the person with TS will not have any tics or ones that are not disturbing, but as soon as they move to a less restricted environment they will often experience a major tic bout.

When someone is told to stop ticcing or if they are in a place where they know they can’t tic they will sometimes feel the compulsion even more strongly.

Stress, positive or negative emotional excitement, fatigue, Central Nervous System stimulants, unpleasant memories or lack of understanding from others can all significantly increase tics.

Being in a stimulating or new environment, involved in conversation, meeting new people, concentrating on a task, relaxation, and acceptance by others can all significantly decrease tics.

What I Ask of You

Now that I understand what TS is, I accept it and it’s not a big deal to me. Most of the time I will be fine, but sometimes it can be overwhelming so when I do tic please ignore them as they are harmless. However if I offend you or you can’t handle seeing me this way then talk directly to me about it and I will try to accommodate you by redirecting the tic into something less threatening to your sensibilities, but note that I will not ostracize myself because of this. If you have any other concerns or questions please ask me.

His above synopsis is drawn from his study of the TS literature, and from his own experience with TS. During a two-week visit by my nephew in July 2001, I had the chance to observe his TS characteristics: he has both movement tics and vocal tics, including both copraxia and coprolalia. Near the end of his visit I developed an explanation for Tourette Syndrome that I believe is correct:

Briefly, my explanation for Tourette Syndrome is as follows: for a person with TS, a part of the mind that, in an average person of that nation and gender, outputs to n input channels of the awareness particle (the soliton) has instead substantially fewer input channels of the awareness particle to which it can output, because those input channels, at some stage during that person’s previous development, were, in effect, allocated to one or more other mind-parts (I use the word mind-part to mean some specific functional part of one’s mind).[121] Then, over time, the mind-part that is missing its normal allocation of awareness-particle input channels compensates, proportionate to its loss, by sending its outputs elsewhere, ultimately resulting in the tics of TS.[122],[123]

In terms of tics, those with TS range from mild to severe. For example, a person who just does a lot of eye blinking or throat clearing would be a mild case; my nephew, with his copraxia and coprolalia, is a severe case. The primary determinant of TS severity is probably the mind-part involved and the extent to which that mind-part has lost its normal allocation of awareness-particle input channels: the fewer the lost channels, the milder the TS; the more the lost channels, the more severe the TS.

In theory, the mind-part that suffered the loss may be different in different persons with TS. However, for at least many with TS, and, I believe, in the case of my nephew, the specific mind-part that suffered the loss is that mind-part (here called the sexual mind-part) that is heavily involved in sexual feeling, desire, and attraction. The primary reason to believe this is that TS tics, in a typical severe case, often have a strong sexual content.[124] In addition, another reason is the similarity between the strong insistence of TS tics and the strong insistence of sexual desire.[125]

Assume that each soliton has the same total number of input channels, and that each input channel is identical in terms of its data-carrying capabilities. Given the central governing role that the soliton has vis-a-vis the owned bions that collectively form its mind, it stands to reason that a soliton’s input and output channels are not wasted: they are all utilized. Thus, if some mind-part does not have its normal allocation of awareness-particle input channels, then those awareness-particle input channels have been allocated elsewhere. Regarding Tourette Syndrome, it is interesting to note that, in general, those with TS have a reputation for being intelligent. For example, there are many statements like the following on the internet:

Many of my patients with Tourette syndrome are of above average intelligence, frequently intellectually gifted.[126]

… most people with TS appear to have above average intelligence.[127]

Many people believe there is a link between intelligence, creativity, and Tourette syndrome. Certainly in my experience, children with Tourette’s are often quite intelligent …[128]

Regarding intelligence, what I have noticed about my nephew is that in some intellectual areas we are about the same, but in other intellectual areas he either clearly exceeds me (for example, writing ability) or far exceeds me (for example, mathematical ability). Thus, my nephew fits the pattern of having TS and being intelligent; in his case, very intelligent. Note: In 2016 I inadvertently learned that my nephew has some color-blindness. Perhaps his color-blindness is the result of a below-average allocation of awareness-particle input channels for the human visual field. If so, then some of his far-above-average intelligence is probably due to that lower allocation for color vision, and not just due to the lower allocation he has for the sexual mind-part: with fewer awareness-particle input channels allocated to his visual field, there were more awareness-particle input channels that could be allocated elsewhere, which in my nephew’s case were allocated to further increase his intelligence.

Given the association of Tourette Syndrome with intelligence, it seems safe to assume that for a typical person with TS, the mind-parts that, in effect, account for intelligence, have been allocated more than their normal share of awareness-particle input channels. Thus, in effect, the allocation loss of the sexual mind-part has been the allocation gain of the intellectual mind-parts. In general, the greater the allocation loss for the sexual mind-part, the greater the allocation gain and enhancement of intelligence. The extent of the allocation loss for the sexual mind-part, and its consequent effects, varies from one TS person to the next. In the case of my nephew, his allocation loss was great enough to cause, among other things, a complete absence of orgasm. Here is a dictionary definition of orgasm:

orgasm: The climax of sexual excitement, marked normally by ejaculation of semen by the male and by the release of tumescence in erectile organs of both sexes.[129]

This dictionary definition describes the physical events that coincide with the orgasm experience, which for a male is the ejaculation of semen. Its description of the orgasm feeling is limited to a statement about that feeling’s relative strength and its placement on the pleasure-pain scale (climax of sexual excitement: presumably very pleasurable). In general, describing a feeling is limited to stating such things as that feeling’s strength or intensity, its duration, its placement on a scale that ranges from pleasure to pain, and its comparison to other feelings. As a rule, reading a written description of a feeling does not cause one to experience that feeling, because the data sent to the awareness particle for reading comprehension is different than the data sent to the awareness particle for causing that feeling. Likewise, the act of remembering a feeling does not cause one to experience that feeling, because the data sent to the awareness particle for remembering that feeling is different than the data sent to the awareness particle for causing that feeling.

Drawing on my own experience with male orgasm: it came in waves, with each wave coinciding with each ejaculation of semen; it was a feeling that was strong but not overwhelmingly so, at least for me; it definitely felt good; nothing else in my life has felt like an orgasm.[130] Note that for a typical male in his physical prime (younger than middle-age), from the first ejaculation to the last, typically less than ten seconds elapse, so the accumulated duration of the orgasm feeling is even less than this.

In November 2000, my nephew, during a phone call, surprised me by asking about my orgasm experience. As I then learned, he has never had an orgasm during ejaculation (nor at any other time), and he was asking me about my own experience, because he was trying to find out if he had inherited his no-orgasm condition from his relatives. His no-orgasm condition is a rarity for young males. However, the loss of orgasm by older males is more common, as I was to find out for myself, a mere six months later, in May 2001: At age 45½, over a period of about a month, my orgasm experience, being noticeably weaker each successive time I had an orgasm, faded away to nothing; and yet, everything else, including the ejaculation, was the same—it was just the orgasm feeling itself that had disappeared.

My orgasm loss, I assume, was a consequence of my advancing age. The male-orgasm experience is obviously a reward, whose ultimate purpose is the production of children. As a male ages, his value as a potential new father declines for many reasons. Thus, the withdrawal of the orgasm reward is understandable.

At the time of my orgasm loss, I was not expecting anything positive to result from it; but that is what happened. About three months later, in August 2001, while replaying a computer game, I noticed that the game seemed much easier for me (beyond what I had experienced before when replaying computer games). Then I replayed two other computer games, and, among other things, I noticed that I was playing in a way that I had never played before with any such game: I was actually planning my movements, and, for the first time, I was able to shoot accurately while moving; I also found myself thinking about movement strategies at other times of the day when I was not playing. Overall, I was much more focused on, and interested in, how I moved during combat encounters than I had been in the past. My combat strategy in the past consisted of little more than trying to find the best spot to be in at the beginning of the encounter, and then just standing still, firing the best weapon I had at the targets; complex movement sequences during combat were simply beyond me: I did not think about them, and I did not make them. I have known about my weak game play since the early 1980s, based on my experience with coin-operated video games. In recent years, playing 3D first-person-shooter games on my computer, I would choose the easiest game-difficulty settings out of necessity, and I would also use cheats as needed, such as god-mode (invulnerability), to get thru game sections that I could not otherwise get thru. Now, however, with my newfound movement abilities, I play typical shooter games on normal difficulty, and I get thru them without cheats, so I now appear to be about average compared to other males who play these computer games.

Regarding my loss of orgasm, the following explanation seems likely: My sexual mind-part had a substantial number of awareness-particle input channels that were dedicated to carrying the data that causes the orgasm feeling.[131] With my advancing age, my sexual mind-part gave up these input channels, which were then acquired by a different mind-part that up until that time had a below-normal allocation of awareness-particle input channels (as demonstrated by my weak game play compared to other males).

Regarding how my orgasm faded away over a period of about a month, being progressively weaker each time I had an orgasm, the following explanation seems likely: The strength of the orgasm feeling—and of feelings in general—is proportional to the number of awareness-particle input channels carrying the data that causes that feeling.[132],[133] My progressively weaker orgasm each time I had an orgasm was caused by having progressively fewer awareness-particle input channels carrying to my awareness the data that caused the orgasm feeling.

Overall, the allocation of the awareness-particle input channels among the different mind-parts is a major determinant of how one person differs from another, assuming that the solitons of the persons being compared have roughly the same level of experience, in terms of human lifetimes, interacting with and controlling the human mind. In addition, regarding differences between people, there is also the independent agency, aka free will, of the awareness, which has an influence on the detailed allocation plan a person has. One’s allocation plan affects one’s intelligence, one’s athletic ability, and one’s personality including one’s emotional makeup. Also, how the awareness-particle input channels are allocated among the different mind-parts is a major determinant of how, in terms of their psychology, the two genders, men and women, differ from each other; how the various ethnic-groups and nations of mankind differ from each other; and how the three races of mankind differ from each other (see also the lack of experience with the human mind in the case of a large part of the african race, explained in section 9.2).[134],[135],[136] For example, the average woman has a weaker orgasm experience than the average man. Thus, the allocation plan for the average woman allocates fewer awareness-particle input channels to orgasm than does the allocation plan for the average man.[137]


footnotes

[121] The visual field that we humans consciously experience has about a million pixels, and each pixel in our visual field presumably uses at least one awareness-particle input channel. Also, one’s visual field is just one use among many uses of awareness-particle input channels, with each use having its own separate allocation (albeit one’s visual field is probably the biggest use, in terms of having the largest number of awareness-particle input channels in its allocation). Thus, the total number of input channels that the awareness particle (soliton) has is probably at least a few million, and the value of n could easily be in the thousands or many thousands.
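
To make the arithmetic of this estimate explicit, here is a minimal sketch in Python. Only the pixel count and the at-least-one-channel-per-pixel lower bound come from this footnote; the multiplier of 3 is a hypothetical stand-in chosen only to land in the "few million" range:

    # Back-of-envelope estimate of the soliton's total input channels.
    # Only the first two numbers come from footnote [121]; the multiplier
    # is a hypothetical assumption.

    visual_field_pixels = 1_000_000   # "about a million pixels"
    channels_per_pixel = 1            # lower bound: at least one channel per pixel

    visual_field_channels = visual_field_pixels * channels_per_pixel

    # The visual field is assumed to be the largest single use among many,
    # so the total is some small multiple of the visual-field allocation.
    assumed_multiple = 3              # hypothetical
    total_channels = visual_field_channels * assumed_multiple

    print(total_channels)             # 3000000, i.e. "at least a few million"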

[122] That a mind-part can establish new connections for its outputs and/or inputs when its normal connections are lost is demonstrated by the fact that many people who suffer serious brain damage—as a consequence of such things as head wounds, strokes, and brain tumors—and initially lose one or more of their mental abilities, are able to regain some or all of their lost mental abilities in the following months or years as the affected mind-parts learn to make use of different neural pathways to carry the affected input and/or output data.

That a mind-part can establish new connections when its normal connections are lost is also demonstrated by the phenomenon of phantom limbs. Developing a phantom limb is a typical result for someone who has had a limb amputated. In the case of a limb amputation, there is no brain damage. Instead, because of the amputation, the normal neural pathways that used to carry the signals from that limb have fallen silent. The affected mind-part then compensates (regaining sensory input for that limb) by remapping the lost limb onto an adjoining area of the primary somatosensory cortex, and interpreting the sensory input from that adjoining cortex area as sensory input from the amputated limb. For example:

… touching the stump of an amputated arm often causes two sensations: one is the normal sensation you expect from touching skin; the second is … a feeling that the phantom hand is also being touched. [Hoffman, op. cit., p. 173]

V.Q. was seventeen when his left arm was amputated six centimeters above the elbow. Four weeks later he was tested by Ramachandran and colleagues, who found a systematic map of his phantom hand on his left arm, about seven centimeters above the stump. They also found a map of the phantom hand on his face, on the lower left side … [Ibid., p. 175. Hoffman also describes another amputee with a similar amputation, who likewise had a map for his phantom hand on both his face and on his arm above the stump. And, as Hoffman notes, the cortex area for the hand, adjoins the cortex area for the rest of that arm, and also adjoins on the opposite side the cortex area for the face.]

[123] This explanation for Tourette Syndrome (that a mind-part which is unable to deliver its outputs to where it normally expects to deliver them will eventually compensate, proportionate to its loss, by sending its outputs elsewhere) can also be used to explain the condition known as tardive dyskinesia (my brother, who is schizophrenic, had tardive dyskinesia after years of taking the neuroleptic drug Haldol for his schizophrenia). Here is a brief description of tardive dyskinesia:

Tardive dyskinesia is a neurological syndrome caused by the long-term use of neuroleptic drugs. Neuroleptic drugs are generally prescribed for psychiatric disorders, as well as for some gastrointestinal and neurological disorders. Tardive dyskinesia is characterized by repetitive, involuntary, purposeless movements. Features of the disorder may include grimacing, tongue protrusion, lip smacking, puckering and pursing, and rapid eye blinking. Rapid movements of the arms, legs, and trunk may also occur. Impaired movements of the fingers may appear as though the patient is playing an invisible guitar or piano.

There is no standard treatment for tardive dyskinesia. Treatment is highly individualized. The first step is generally to stop or minimize the use of the neuroleptic drug. …

Symptoms of tardive dyskinesia may remain long after discontinuation of neuroleptic drugs; however, with careful management, some symptoms may improve and/or disappear with time. [Tardive Dyskinesia Information Page, National Institute of Neurological Disorders and Stroke, at: http://www.ninds.nih.gov/health_and_medical/disorders/tardive_doc.htm]

Neuroleptic drugs interfere with normal brain chemistry and can block neuron signal transmission in one or more brain areas. If some mind-part has the neural pathways that it normally uses for its outputs blocked for a long time, then that mind-part is going to try to compensate, proportionate to its loss, by sending its outputs elsewhere, which may ultimately result in the movement tics of tardive dyskinesia.

[124] For example, a man with severe Tourette Syndrome, commenting about his spoken vocal tics, asks: “Why is it always sexual?” (from the one-hour TV program Tourette’s Syndrome: Uncensored, BBC, 2000).

In the case of my nephew, his spoken vocal tics were often sexual in content, but not exclusively so. One way to explain this variety in my nephew’s spoken vocal tics is to suggest that the sexual mind-part, which cannot by itself understand what a given spoken phrase means (language understanding is accomplished by a different mind-part), selects the verbal phrases it will output based on data from other mind-parts; and these selection criteria, whatever they are, sometimes result in non-sexual phrases being selected.

[125] For example, as my nephew says in his Tourette-Syndrome synopsis:

To make the similarity to sexual desire clear, here is my rewritten version of his points (I am assuming a typical young man):

[126] At: http://www.doctorjudith.com/disorder_info.htm

[127] At: http://www.tourettesyndrome.co.uk/information.htm

[128] At: http://www.bestdoctors.com/en/askadoctor/b/brown/lwbrown_061200_q9.htm

[129] Webster’s II New Riverside University Dictionary. Houghton Mifflin Company, Boston, 1984.

[130] Unfortunately for me, I was circumcised as an infant, as were approximately 70% of the other USA males born in 1955 (regarding circumcision, see my essay The Psychological Harm of Male Circumcision). I mention my being circumcised because my study of the circumcision subject has made me aware that circumcision, in addition to its many other negative sexual effects, tends to suppress and lessen the orgasm experience. For example, the results of a poll titled Cut vs Intact vs Restored/Restoring, created in December 2002 by razniq, show the harm that circumcision does to the orgasm experience (the poll is at http://www.misterpoll.com/poll.mpl?id=803956922; the poll results are at http://www.misterpoll.com/results.mpl?id=803956922; the bracketed [notes] are mine, added for clarity):

Describe what you feel when you come [orgasm].

353 total votes [353 participants in this poll]

[For each person taking the poll, choosing from the above eleven choices, the poll allows only a single answer. But note that the total of the above percentages adds up to 95% instead of 100%, presumably because the poll results are rounded down to the nearest integer.]

[Note that I saw a post by razniq in an anti-circumcision forum, telling about his poll; I assume the high percentage for “restoring/restored” (totals 26%) is a direct consequence of the places where razniq advertised his poll, because men who have done foreskin restoration tend to congregate in foreskin-restoration and anti-circumcision forums.]

The above poll results make clear the negative effect that circumcision has on the orgasm experience. As the poll results show, circumcision can steal from its victim the experience of a full-body orgasm. Additional evidence for this conclusion is the fact that some men who have restored their foreskins (only a partial restoration is possible) report making the transition from a localized orgasm to a full-body orgasm. For example, a foreskin-restoration forum post by zac0212, dated April 15, 2003, says:

I would describe my circumcision as loose with a partial frenulum (damaged during circ). I have been restoring for a little over a month. Before restoring my orgasms were very localized. In the short time that I have been restoring, my orgasms have changed significantly. My inner foreskin remnant and frenulum have become much more sensitive. I am amazed at how much more I can experience during sex, and my orgasms take over my whole body. Amazing! [At: http://health.groups.yahoo.com/group/ForeskinRestoration/message/3232]

As for myself, exactly what is meant by a full-body orgasm I do not know, because I never had one; I only had the localized kind. Thus, my orgasm description is that of a circumcised man who has never had a full-body orgasm.

[131] Having dedicated awareness-particle input channels to carry the data for a specific feeling to the awareness, means that there are no channel-sharing conflicts and no need for arbitration, which would otherwise be the case if a given awareness-particle input channel were used to carry other data besides the data that produces that specific feeling.

In the case of orgasm, those awareness-particle input channels allocated to carry the orgasm-producing data will be unused most of the time. If one were to assume that all awareness-particle input channels are more or less dedicated, then the awareness-particle input channels allocated to carry the orgasm-producing data are probably among the least used awareness-particle input channels. As an example of high utilization, consider the awareness-particle input channels dedicated to carrying vision to the awareness.

[132] The reason that the strength of a feeling would be proportional to the number of awareness-particle input channels carrying the data that causes that feeling is that this is a simple and reliable arbitration method for the awareness particle: in effect, the strength of a feeling is proportional to the number of votes for that feeling, with each input channel counting as one vote. The alternative, having the strength of a feeling encoded as part of the input data for that feeling, would be dangerous, because it would mean that a single input channel would have the capability to deliver a very strong feeling.

[133] Note that the orgasm feeling differs from most other feelings in that the orgasm feeling (based on my own experience with orgasm before my loss began, and based on how others describe their orgasm experience) has much less variation in its perceived intensity range. Each orgasm feels the same as the previous orgasm. This sameness means that when the orgasm feeling is sent to the awareness, the same or nearly the same fraction of the allocated (dedicated) awareness-particle input channels is utilized each time for carrying the orgasm-producing data to the awareness. Assuming that the orgasm feeling is not suppressed by some external cause such as circumcision, the typical orgasm feeling probably utilizes all or nearly all of the total allocation of awareness-particle input channels for carrying to the awareness the data that causes the orgasm feeling.

For most other feelings, including emotional feelings and also the feeling of physical pain, the mind-part that sends a specific feeling to the awareness typically utilizes only a fraction of the total allocation of awareness-particle input channels for carrying the data to the awareness that causes that specific feeling, with the total size of that fraction—in terms of the total number of awareness-particle input channels comprising that fraction—determining the intensity of that specific feeling.

For example, assume person A has a total allocation of 5,000 awareness-particle input channels for carrying to his awareness the data that causes the specific feeling of anger, and person B has a total allocation of 10,000 awareness-particle input channels for carrying that same kind of data to his awareness. Because person B has twice the total allocation for the anger feeling that person A has, then, for the same utilization fraction of their respective total allocations, person B will feel the anger feeling twice as intensely as person A. For example, if both persons are currently feeling anger, and each of their minds is currently using half of its total anger allocation, then person A is currently using 2,500 of his total allocation of 5,000, and person B is currently using 5,000 of his total allocation of 10,000. Because the intensity of a specific feeling depends on the total number of awareness-particle input channels currently carrying that feeling to the awareness, and because person B is currently utilizing twice as many channels as person A, person B feels that current anger twice as intensely as person A. The maximum feeling of anger that a person can experience is when his entire total allocation for the anger feeling is currently carrying the anger feeling to his awareness. Thus, person B has a range for experiencing the anger feeling, from zero anger to maximum anger, that is twice as great as person A’s.
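
Here, as a minimal sketch in Python, is the above example, using the vote-counting rule of footnote [132]; the allocation totals and the one-half utilization fraction are the example’s own numbers:

    # Feeling intensity as vote counting: each awareness-particle input
    # channel currently carrying a feeling counts as one vote, and the
    # felt intensity is the number of votes.

    def intensity(total_allocation, utilization_fraction):
        # Channels currently carrying the feeling = votes = felt intensity.
        return int(total_allocation * utilization_fraction)

    person_a_total = 5_000    # person A's total allocation for the anger feeling
    person_b_total = 10_000   # person B's total allocation for the anger feeling

    # Both minds currently use half of their respective anger allocations.
    anger_a = intensity(person_a_total, 0.5)   # 2500 channels voting
    anger_b = intensity(person_b_total, 0.5)   # 5000 channels voting

    print(anger_b / anger_a)   # 2.0: person B feels this anger twice as intensely

    # Maximum anger is the full allocation voting at once, so person B's
    # range (0 to 10000) is twice person A's range (0 to 5000).
    print(intensity(person_a_total, 1.0), intensity(person_b_total, 1.0))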

[134] This explanation for human mental differences, that to a large extent human mental differences result from differences in how the awareness-particle input channels have been allocated, means that humanity as a whole can have the same underlying programming of the mind-parts, subject to possible individual customization as mentioned in section 3.6 (regarding human mental differences, also note that new and recently new humans, assuming they were animals before becoming human, are very stupid, as explained in section 6.3). This presumed sameness of the underlying programming, with differences only in how the awareness-particle input channels are allocated, greatly lessens the burden placed on the learning algorithms that create and evolve learned programs (section 3.6). There is no need to suggest that big human mental differences between two persons (for example, the difference between someone with a 100 IQ and someone with a 150 IQ, or between someone who is athletic and someone who is clumsy, or between someone with a lot of emotions and someone with few emotions) result from differences in how they developed and evolved their mental programming over the short time frame of about 20 years as they developed from being human babies to being human adults. The great complexity of the human mind is not something that develops anew in each single human as they go from being a baby to being an adult.

One implication of this explanation for human mental differences, in the case of intellectual abilities, is that a limiting factor for consciously expressed intelligence is the limited number of awareness-particle input channels available for allocation to the intellectual mind-parts. That number is insufficient to connect every intellectual mind-part to its maximum potential, where an intellectual mind-part’s maximum potential is represented by historical persons who achieved greatness regarding the mental abilities associated with that intellectual mind-part. Thus, the potential of the human mind, in terms of what its programming can do, includes the math abilities of great mathematicians such as Newton, the writing abilities of great writers such as Tolstoy, the inventiveness of great inventors such as Edison, the graphic-art abilities of great artists such as Michelangelo, and so on.

However, this doesn’t mean that a soliton/mind can, for example, be a Newton or a Tolstoy or an Edison or a Michelangelo after just a few human lifetimes, merely by getting the same allocation plan that Newton, Tolstoy, Edison, or Michelangelo had. No, not at all. Not a chance. The reason is that, at the very least, the soliton (awareness) has to learn how to interact with, and control, those parts of the human mind to the extent that those great men had done, and I think it likely that achieving their level of mental competence requires many human lifetimes. And also, there has to be the will to do so: to strive for and gain that level of mental mastery. This involves the agency of the soliton (awareness), which is the free will that each soliton (awareness) has, and which is independent of the computing-element program. In other words, that soliton (awareness) has to want that level of interactive cooperation and harmony with its human mind, and has to actively work toward that goal, if that is what it wants.

[135] The explanation that we humans all share the same underlying mental programming but the limiting factor for its conscious expression is the limited number of awareness-particle input channels, explains the commonly observed truism that excelling (compared to the average) in one or more ways is accompanied by deficits (compared to the average) elsewhere. For example, when I was a teenager in high-school, it was a commonplace truism that the athletes were stupid, and the smart kids were unathletic. Well, it was true about myself and most of my friends (smart and unathletic), but there was an exception in that one of my friends was smart and also very athletic, which only means that his deficits were elsewhere.

In effect, the allocation of awareness-particle input channels is what is known as a zero-sum game, where the total gain distributed among one or more players is a loss by the same total amount distributed among one or more of the other players. The players are the various mental programs (mind-parts) that have outputs intended for connection to the awareness particle. Each of these mental programs is a potential recipient of an allocation of awareness-particle input channels, and its allocation defines the extent to which that mental program can connect its awareness-intended outputs to the awareness particle.

For the human mind, there are many players in this allocation game (I estimate more than 50 different players), and the number of awareness-particle input channels to be allocated is large (at least several million), so there is a very large number of different allocation plans that are sufficiently different from each other, that one would be able to observe differences between people having these different allocation plans.
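
As a minimal sketch of this zero-sum game, consider the following Python fragment. The mind-part names, the individual channel counts, and the 5,000,000 total are all hypothetical illustrations; the text commits only to more than 50 players and a total of at least several million channels:

    # A toy allocation plan for the zero-sum allocation game. Every channel
    # is allocated somewhere (section 9.6 assumes none are wasted), and any
    # gain by one mind-part is an equal loss by one or more others.

    TOTAL_CHANNELS = 5_000_000   # hypothetical; "at least several million"

    plan = {
        "visual field": 1_200_000,          # assumed the largest single use
        "physical hearing": 400_000,
        "sexual mind-part": 300_000,
        "intellectual mind-parts": 900_000,
        "all other mind-parts": 2_200_000,  # stand-in for the remaining players
    }
    assert sum(plan.values()) == TOTAL_CHANNELS

    def transfer(plan, loser, gainer, channels):
        # Zero-sum transfer: one mind-part's loss is another's gain.
        if plan[loser] < channels:
            raise ValueError("cannot transfer more channels than are allocated")
        plan[loser] -= channels
        plan[gainer] += channels

    # For example, the Tourette-Syndrome scenario of section 9.6, where the
    # sexual mind-part's allocation loss is the intellect's allocation gain:
    transfer(plan, "sexual mind-part", "intellectual mind-parts", 250_000)
    assert sum(plan.values()) == TOTAL_CHANNELS   # the total never changes

The same kind of transfer also describes the allocation changes during growth and aging discussed in section 9.7.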

About the truism that excelling in one or more ways is accompanied by deficits elsewhere, I have long been aware of many of my own various deficits (compared to the average of other Euro-American men in the USA). My sensory and motor deficits include: a weak sense of smell; a below-average sense of taste; low athletic ability. My intellectual deficits include: no artistic ability and a below-average memory for many of the things that the average person remembers, such as remembering details of one’s own life; I also have a very poor sense of direction. As far as I know, my emotional deficits are somewhat typical for a man, with at least one exception being my lack of the fear emotion (most men, as far as I know, have the fear emotion).

For a typical person, his allocation plan, regarding how it allocates to the various intellectual abilities, spreads the wealth, so to speak. In contrast, for those persons known as idiot savants, their allocation plan is very one-sided with one intellectual ability far above average while most of the other intellectual abilities are substantially below average. Psychologist David Gershaw gives an overview regarding idiot savants:

Leslie Lemke—born mentally retarded, blind, and suffering from cerebral palsy—sat down at the piano for the first time and played an almost perfect rendition of Tchaikovsky’s First Piano Concerto!

Bob, now in his sixties is a “calendar calculator”—he can name the day of the week for any given date since 1947. He gives most of his answers in less than 8 seconds! Yet Bob is mentally retarded. He lives in a foster home, because he cannot even manage simple daily living skills.

Although these people would perform below normal on any conventional measure of intelligence, they have fantastic abilities in very limited areas. In the past, psychologists have referred to such people as idiot savants—a term that literally means “learned idiots.”

However, this term is not really correct. First, although they are mentally retarded, they are not idiots—those at the lowest level of intelligence. Also they are not savants—people with great knowledge. Their amazing talents—most often in the areas of music, art, mathematics, calendar calculation or memory for obscure facts—are in sharp contrast to their low levels of general functioning. Psychologists estimate that less than one percent of mentally retarded people have some sort of “savant” talents.

In addition, an estimated 10% of autistic people have these “savant” abilities. Autism is a disorder that affects communication, learning and emotions—and sometimes includes mental retardation. Autistic people shun human relationships but may become completely absorbed with mechanical objects. [Gershaw, David. Islands of Genius, 1988. At: http://www.members.cox.net/dagershaw/lol/GeniusIsland.html]

In the case of idiot savants, besides having a severely unbalanced and one-sided allocation plan, it may also be the case that the total number of awareness-particle input channels that are allocated to the various intellectual abilities is substantially below average. In the case of a mentally retarded person who has no savant ability, his allocation plan is more balanced, but for whatever reason his allocation plan simply allocates too few awareness-particle input channels to the various intellectual abilities.

Autism, mentioned in the above quote, is another condition that is understandable as the result of an allocation plan that allocates a substantially below-average number of awareness-particle input channels to those mental programs that provide what autism is deficient in. According to the Autism Society of America:

Autism is a complex developmental disability that typically appears during the first three years of life. … Children and adults with autism typically have difficulties in verbal and non-verbal communication, social interactions, and leisure or play activities.

The overall incidence of autism is consistent around the globe, but is four times more prevalent in boys than girls. Autism knows no racial, ethnic, or social boundaries, and family income, lifestyle, and educational levels do not affect the chance of autism’s occurrence. [What is Autism?. At: http://www.autism-society.org/site/PageServer?pagename=whatisautism]

About the much greater incidence of autism in males, there is a simple explanation: Women are known to be, on average, much more social and communicative than men. Thus, the allocation plan for the average female allocates many more awareness-particle input channels to those mental programs involved in socializing and communicating than does the allocation plan for the average male, and consequently more males than females will have autism. More specifically, imagine the distribution curve (perhaps it’s a bell curve) plotting, for the entire female population, the number of awareness-particle input channels allocated to the mental programs involved in socializing and communicating, and compare it with the same distribution curve for the entire male population. Let point 0 be no awareness-particle input channels allocated to those mental programs, and let point x be the maximum allocation to those mental programs that is still likely (meaning at least 50% probability) to result in a diagnosis of autism. Then, given that autism is “four times more prevalent in boys than girls,” the area under the male distribution curve between points 0 and x is four times the area under the female distribution curve between those same two points.
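
To make the area comparison concrete, here is a minimal sketch in Python. The two means, the shared spread, and the threshold x are all hypothetical numbers, chosen only so that the two areas come out in roughly the stated four-to-one ratio; the text supplies no actual figures, and even the bell-curve shape is only the text’s “perhaps”:

    # Comparing the area under two hypothetical bell curves between 0 and x,
    # where each curve is a population's distribution of channels allocated
    # to the socializing-and-communicating mental programs.
    from statistics import NormalDist

    male = NormalDist(mu=100_000, sigma=30_000)     # hypothetical
    female = NormalDist(mu=116_000, sigma=30_000)   # hypothetical

    def area_zero_to_x(dist, x):
        # Area under the distribution curve between point 0 and point x.
        return dist.cdf(x) - dist.cdf(0)

    x = 40_000   # hypothetical point x (autism-diagnosis threshold)

    male_area = area_zero_to_x(male, x)
    female_area = area_zero_to_x(female, x)
    print(round(male_area / female_area, 1))   # about 4.0, i.e. "four times"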

[136] In section 9.2 it was mentioned that the Caretakers apparently have the same two genders as mankind. Regarding allocation plans, one can outline the evolutionary process that would over time result in the Caretakers having two genders. This same evolutionary process also explains the two human genders (supplementing the organic reason involving sexual reproduction):

Assume that at some point in their evolutionary development the Caretakers reached the same situation that currently applies to mankind, in which the limiting factor for their consciously expressed self (including their senses, feelings and emotions, personality, and intelligence) is the limited number of awareness-particle input channels, which is insufficient to connect every mind-part to its maximum potential. In this situation, each newly formed Caretaker (assuming they undergo a rebirth process, albeit without a physical body) is faced with a winner-take-all choice, because, as a rule, the allocation of awareness-particle input channels, once done, does not change, except for certain age-related changes such as the age-related orgasm loss in my own case, and the other changes that happen at different stages in one’s growth to adulthood and one’s decline into old age (see the discussion in the next section about how the allocation plan changes for humans at different stages in their development as they grow and age).

As sociologists have noted, a winner-take-all election scheme in human society eventually results in only two major political parties that capture most of the votes. Similarly, one may assume that the evolving Caretaker society would eventually have only two major allocation plans for allocating the awareness-particle input channels, resulting in their two genders, which are apparently similar to our own two genders. Each newly formed Caretaker, in effect, typically chooses one of these two major allocation plans (perhaps this choice is made unconsciously), and then makes adjustments to that allocation plan as wanted and/or needed according to whatever influences are involved (perhaps these adjustments are also made unconsciously).

[137] Given the gender basis of the three races (section 9.2), one may infer that the african race has the strongest orgasm, the oriental race has the weakest orgasm, and the caucasian race is in-between.


9.7 Allocation Changes during Growth and Aging

Regarding how the awareness-particle input channels are allocated among the various mind-parts, different changes happen at different times in one’s life, with most or all of the substantial changes (changes that are noticeable) happening during growth and aging. The in-between period, between the allocation changes that happen during growth and aging, begins sometime after puberty and extends until one reaches the first substantial allocation changes that happen during middle age.

Puberty—defined as the period during which one becomes capable of sexual reproduction—has both physical changes and allocation changes. The allocation changes include giving the sexual mind-part a substantial share of the awareness-particle input channels. Among the allocations to the sexual mind-part are allocations for carrying the feelings of sexual desire and attraction, and also allocations for feeling sexual pleasure, including an allocation for the orgasm feeling.[138]

Prior to puberty, children have far fewer awareness-particle input channels allocated to the sexual mind-part. However, given the many statements by mothers remarking how their infants and young children like to play with their genitals, it appears that prior to puberty at least some awareness-particle input channels have already been allocated for carrying feelings of sexual pleasure. Some of the other parts of the sexual mind-part may also have nonzero allocations prior to puberty, although these allocations are much smaller than what is allocated at the time of puberty.

Puberty is when the single largest increase in allocations to the sexual mind-part happens, but there may be additional allocation increases in the years immediately after puberty, since it seems typical for sexual desire and attraction to grow and blossom in those years. Regardless, any additional allocation increases to the sexual mind-part are probably completed well before the sexual peak is reached, which for average caucasian males is said to be about age 19 (puberty for them happens at about age 12).

As was explained in section 9.6, the allocation of awareness-particle input channels is a zero-sum game. What is allocated to one mind-part must be taken from one or more other mind-parts. This means that the allocation increases for the sexual mind-part are offset by allocation decreases in one or more other mind-parts. A likely candidate for the source of a substantial fraction of the awareness-particle input channels that are shifted to the sexual mind-part is the mind-part involved in learning new spoken languages.
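To make the zero-sum rule concrete, here is a minimal sketch in Python. Everything in it is hypothetical and illustrative: the class name, the mind-part names, and the channel counts are made-up stand-ins, not numbers or mechanisms claimed by this book. The sketch only demonstrates the bookkeeping constraint that the total number of awareness-particle input channels stays fixed while channels move between mind-parts:

# A minimal sketch (hypothetical names and made-up numbers) of the
# zero-sum allocation rule described above: the total number of
# awareness-particle input channels is fixed, so an allocation increase
# for one mind-part is always offset by an equal decrease elsewhere.

class AllocationPlan:
    def __init__(self, allocations):
        self.allocations = dict(allocations)  # mind-part name -> channel count
        self.total = sum(self.allocations.values())  # fixed, as a rule, for life

    def reallocate(self, from_part, to_part, channels):
        """Move channels from one mind-part to another; the total never changes."""
        if self.allocations[from_part] < channels:
            raise ValueError("cannot deallocate more channels than are allocated")
        self.allocations[from_part] -= channels
        self.allocations[to_part] += channels
        assert sum(self.allocations.values()) == self.total  # the zero-sum rule

# Example: the puberty-time shift suggested above, with made-up numbers.
plan = AllocationPlan({"sexual": 20_000, "language_learning": 120_000,
                       "all_other_mind_parts": 860_000})
plan.reallocate("language_learning", "sexual", 80_000)
print(plan.allocations)
# {'sexual': 100000, 'language_learning': 40000, 'all_other_mind_parts': 860000}

The assert line expresses the zero-sum rule: a reallocation can only move channels, never create or destroy them.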

Very young children easily learn whatever spoken languages they are exposed to, and this implies a substantial allocation of awareness-particle input channels to the mind-part involved in learning new spoken languages. For the average person, this ability to learn a new spoken language is substantially less after puberty, and continues to decline in the following years, and by adulthood this ability to learn a new spoken language is mostly gone.[139]

There are certainly more allocation changes that happen as one grows from an infant to an adult, at different points along the way, involving various mind-parts, but the two allocation changes described above—allocation increases for the sexual mind-part, and allocation decreases for the mind-part involved in learning new spoken languages—are easy to see and understand, and they happen to most people.

During the growth period from an infant to an adult, there are physical changes, allocation changes, and other changes that are neither physical changes nor allocation changes (for example, the higher average rate at which data is fed to the soliton when one is a child—mentioned in section 6.3—is neither a physical change nor an allocation change). Similarly, during the aging period from the start of middle age until death from old age, there are physical changes, allocation changes, and other changes that are neither physical changes nor allocation changes.

In 2001 (at age 45) I wrote about my own entry into middle age as follows:

With middle age come changes: both the mind and the body decline in various ways. I entered middle age about a month after my 41st birthday, undergoing the various body changes—such as a decrease in how much the bladder can empty—that are described in the medical literature. Also, in my first month of middle age, my former ability to do mental work about 70 hours per week—in my case, programming work—rapidly declined to about 40 hours per week (after I had experienced this mental-work decline, which has remained unchanged since then, I understood where the 40-hour work-week came from).

Although for the most part the big middle-age changes that I experienced happened to me in that first month of middle age, there have been a few lesser changes that have happened in the last few years.

One thing I remember telling people in the first year or two after my entry into middle age is that during my twenties I had an excess of physical energy; during my thirties the excess energy was gone but nothing was missing (there were no deficits, and everything still worked the same); but upon my entry into middle age, I had my first big experience with the negative effects of aging. I had substantially less physical energy, and I had specific physical deficits in the sense that certain specific body functions were no longer working as well or as effortlessly as they used to.[140],[141]

About 1½ years after my entry into middle age, at about age 42½, I suddenly lost my interest in listening to music and watching movies. At the time of that loss, there were no physical changes, illnesses, dietary changes, or other changes happening in my life. Thus, my previous interest in listening to music and watching movies simply disappeared with no apparent cause, other than that I was getting older. This loss of interest has remained with me unchanged for the last seven years (I am writing this at about age 49½).

My current thinking about that loss is that it was probably due to an allocation change. More specifically, before my loss I probably had an allocation of awareness-particle input channels for carrying a pleasure feeling whose intensity was based on whatever criteria the relevant mind-part was using to judge how good a piece of music was. Thus, I listened to music I liked because I was getting pleasure from listening to that music. But once that allocation was gone, so was the pleasure, and my reason for listening to music.[142],[143] The simultaneous loss of my prior interest in watching movies—more specifically, I had a decades-long habit of going at least once a month to a movie theater to watch a movie—was probably also due to my loss of interest in music, because the movies I watched typically had a lot of music in them, and those movie theaters all had good sound systems.

At about age 45½ my orgasm disappeared, with noticeable reallocation effects afterwards, as described in section 9.6. At about age 48½ (around late March/early April 2004), I underwent another big change. Knowing that I would probably want to write about it in the next edition of this book, I wrote the following account on August 16, 2004 (edited for improved readability and clarity):

I finally came to the conclusion that the smells in the kitchen refrigerator and elsewhere in the house, which I started noticing roughly two months ago, are there because my sense of smell has improved compared to my previous sense of smell.

Note that around late March/early April I knew I was changing in some negative ways, because it seemed that my sexual interest had dropped greatly compared to my previous level of sexual interest (this drop has not recovered; it is still there now, five months later). This drop reminds me of the much smaller sexual-interest drop that coincided with or followed my orgasm loss at age 45½. So, since my entry into middle age, this is the second time my sexual-interest level has undergone a significant, noticeable, and permanent decrease.

So, I have now put 2 and 2 together, and I understand that the improvement in my sense of smell, which became noticeable roughly two months ago, is the result of a reallocation of awareness-particle input channels that were previously allocated to my sexual mind-part. As with the time between my orgasm loss and my noticing the game-playing improvement, roughly three months had elapsed; in both cases the reallocation process took roughly three months.

added August 27, 2004:

I also think my ambition drive (trying to be descriptive) is weaker now (I had already noticed this when I wrote the August 16 comment, but I had no description for it). So, given my sexual-interest decline and ambition decline that happened back around early April, I can see how I am heading toward becoming how old men seem: sexual interest at 0 (as was described in one of Plato’s dialogs), and a mild manner (ambition and competitiveness at 0).

It is now mid-April 2005 as I write this paragraph, and I want to comment on a few things in my above statement. My sense of smell did indeed improve greatly compared to what it was previously. Many times last year, both indoors and outdoors, I was actively walking around, investigating, smelling different things, and noting smells and scents that were new to me. The newness of my improved sense of smell has since worn off, and I am used to it now. Note that my statement in section 9.6—my sensory and motor deficits include: a weak sense of smell; a below-average sense of taste; low athletic ability—was written by me in March 2004 for the 9th edition of this book. I no longer have a weak sense of smell, but I did when I wrote that. I guess my sense of smell is now close to being average, or at least a lot closer to being average than it was, since I can now smell the same things that other people smell and talk about, which was not the case before. Note that I had learned early in my life that I had a weak sense of smell, because many times in my life I have been in the company of other people who were talking about smells, such as food smells, that either I could not smell at all or could only smell weakly if I got close enough.

About the decline in my ambition: Either coincident with, or shortly after the late March/early April 2004 changes that happened to me, I knew I had changed in a big way, but it took time for me to understand and verbalize to myself how I had changed. The sexual-interest decline was quickly apparent and easy to state. But I had also changed in a way that was not easy for me to see and state. Thus, it was not until about five months after those changes that I was ready to explain that other big change as being a substantial decline in my ambition.

In terms of allocation changes, middle age includes a reallocation away from the sexual mind-part. There may also be reallocations away from certain other mind-parts, which in my case included the music-pleasure and ambition mind-parts. More recently, around the beginning of 2005 at age 49, I had a reallocation away from whichever of my mind-parts allowed me to intensely concentrate.[144]

It is said that one grows wise with age. When I was young, I just assumed that insofar as this saying is true, the reason for it is simply accumulated life experience. Certainly, life experience is an important factor in being wise. However, given that middle age sees a reallocation away from the sexual mind-part, this means that as men and women pass thru middle age, they are going to see increased allocations for one or more other mind-parts, some of which may be mind-parts involved with wisdom, including whatever mind-parts are involved with understanding, judgment, and being knowledgeable.

Although the large allocation losses for the sexual mind-part during middle age are easy to see, typically much less obvious is where the deallocated awareness-particle input channels are reallocated to. Based on my own experience so far, it seems that reallocations tend to go where one has the greatest allocation deficits compared to what an average person of one’s gender, race, and nation would have. In my own case, it was only because I had some big allocation deficits compared to the average, and because two of these big deficits were each largely erased by a single reallocation, that I experienced such big and easy-to-see reallocation changes. Thus, after my orgasm loss at age 45½ and the consequent reallocation, I quickly went from being a long-time very-weak first-person-shooter computer-game player who had to use the lowest difficulty settings and god-mode cheats to have any chance of getting thru the game, to being a player of about average ability who was able to consistently get thru these games at normal difficulty settings without any cheats.[145] Similarly, after the large decline in my ambition and sexual interest at age 48½ and the consequent reallocation, I quickly went from having a weak sense of smell to having a sense of smell that is much closer to average. But besides these reallocation-caused changes in my game-playing ability and sense of smell, I also had a few smaller and less-obvious reallocation-caused changes elsewhere.[146],[147] Also, note that I never had any conscious choice about any of the reallocations that have happened in my life.[148]

My experience with my own middle-age reallocation changes was that there was typically a delay of about three months after the allocation loss before I noticed an allocation gain elsewhere. However, at least some reallocations can happen with very little delay, if any, between the allocation loss and the allocation gain. One such rapid reallocation happens when a woman becomes pregnant: there is a large increase in her sense of smell and, typically, a coincident loss in one or more mental abilities.[149],[150]

After middle age comes old age, during which there are probably additional allocation changes for those who live long enough to experience them. At some point during old age, if not sooner, comes death and the afterlife. During the afterlife there are probably allocation changes that make the allocation changes of middle age and old age seem small in comparison. Specifically, after first the physical body and then the bion-body are, in effect, abandoned, the allocations that one still had when one abandoned one’s physical body become available to be reallocated elsewhere. These include the allocations for carrying body feelings and localization data to the awareness,[151] the allocations for carrying the sense of smell and the sense of taste to the awareness, and the allocation for carrying to the awareness data from the mind-part involved with voluntary control over muscle movements (an athletic person would have a bigger allocation for this than a non-athletic person like myself).

So, where do these after-death reallocations go? Note that out-of-body projectionists cannot answer this question from their own experience, because they still have a physical body that they return to after their out-of-body projection ends. Thus, no living human can say from his own direct experience how greatly his mind, as experienced by his awareness, is enhanced in the afterlife; he cannot know until sometime after his death. However, given the body-related allocations described in the previous paragraph (body feelings and localization data, sense of smell, sense of taste, and the input data needed for voluntary control of muscles), for a typical after-death human, what is the total number of his awareness-particle input channels that were body-related allocations at the time of his death but are available to be reallocated elsewhere after his death? My very rough estimate is that this total number is about one million awareness-particle input channels. Perhaps some of this after-death reallocation happens during the bion-body stage of the afterlife, but whether or not it does, I think it very likely that all the body-related allocations that one had at the time of one’s physical death will have been reallocated elsewhere no later than very early in the lucid-dream stage of the afterlife.

At the onset of the lucid-dream stage of the afterlife, because one no longer has a body, neither a physical body nor an afterlife bion-body, the after-death reallocations have nowhere to go except to one’s mental abilities. Thus, for a typical person, as experienced by his awareness, there are large increases in the areas of intelligence, memory recall, and perhaps also certain emotions.[152],[153],[154],[155],[156],[157],[158]


footnotes

[138] In this section, as a literary convenience, allocated awareness-particle input channels are said to carry the perceived end-result of the data they carry to the awareness, instead of being said to carry the data that causes that perception in the awareness. Thus, for example, “carrying the feelings of sexual desire and attraction” instead of “carrying the data that causes the feelings of sexual desire and attraction.” Doing this avoids excessive repetition of such phrases as “the data that causes.”

Also in this section, each instance of the word reallocations (and likewise for the singular reallocation) has one of two meanings:

  1. The word reallocations is referring to both sides of the allocation ledger: the functional part or parts of the mind that had the allocation decrease (loss); and also the functional part or parts of the mind that had the consequent allocation increase (gain). Regarding the number of awareness-particle input channels involved in this reallocation, the total allocation increase equals the total allocation decrease.

  2. The word reallocations is referring to only the gain side of the allocation ledger: the functional part or parts of the mind that had an allocation increase (gain). This allocation increase is a consequence of an earlier allocation decrease (loss) by one or more other functional parts of the mind.

The intended meaning for each instance should be clear from its immediate or larger context. If it is not clear, assume whichever meaning is more reasonable for that context.

[139] I’m like most people in that I lost my ability to easily learn a new spoken language as I grew older (the Spanish courses I had in high school, and also the four semesters of German I had in college, were a complete waste of time: I quickly forgot what I had learned, and I never could say that rolling-r sound that Spanish has, nor pronounce German very well; I was already too old). However, given the explanation that this loss was due to a reallocation of awareness-particle input channels elsewhere, away from the mind-part involved in learning new spoken languages, this implies that the unconscious mind still has the capability to support easy learning of a new spoken language, since there is no reason to presume any changes in the underlying programming and algorithms that were involved when one was young and able to easily learn a new spoken language.

Presumably, if my awareness-particle input channels were reallocated so that the mind-part involved in learning new spoken languages had the same allocations it had when I was a child, then my ability to easily learn a new spoken language would return. Well, no such allocation changes have happened, and no such allocation changes are expected during the remainder of my current life. However, in late 2004 I got an unexpected personal demonstration that my unconscious mind can still do what is needed for the easy learning of a new spoken language, at least for that part of learning a new spoken language for which I still had an abundant allocation of awareness-particle input channels, which in my case was simply my hearing. In anticipation that I would probably want to discuss this personal experience in the next edition of this book, I wrote a detailed account of my experience about a week after it happened (the written account is dated November 16, 2004, which means I wrote it on my 49th birthday). Here it is (edited for improved readability and clarity):

In early July 2004 I got a broadband internet connection [this cable-modem was much faster than the dial-up modem I had been using previously for internet access]. Soon afterwards I tried file-sharing [BitTorrent] for the first time, and I soon discovered a huge world of Japanese anime that I could download and watch. I have long been a fan of Japanese anime, but my only experience with it up until that point had been some series and movies that I had seen on TV, and they had all been translated and dubbed into English.

Initially, I just downloaded Japanese anime that had been dubbed into English, because that is what I was already used to, and English is the only language I know, but soon I was downloading and watching fansubbed anime (the original unedited Japanese-language version, with English subtitles added by anime fans, hence the word fansubbed).

Watching fansubbed anime was my first exposure to the spoken form of the Japanese language. Initially, spoken Japanese sounded musical and beautiful to me, but that impression soon wore off after watching a few episodes and hearing about an hour total of spoken Japanese. Also, spoken Japanese all ran together: when a character was speaking, I only heard a continuous stream of sound with no word breaks; the only noticeable breaks happened when the speaker briefly stopped speaking, at what I guess was an occasional phrase or sentence break (people often pause when speaking, if for no other reason than to catch their breath so they can resume speaking).

This condition of hearing spoken Japanese as a continuous stream persisted, and I got used to it. But in early November 2004, after having watched in total what I later estimated to be somewhere between 30 and 40 hours of fansubbed anime, while in the middle of watching an episode, I got quite a surprise: all of a sudden I went from hearing spoken Japanese as continuous, to hearing spoken Japanese as having what I presume were word breaks (at least, that is where my unconscious mind thought the word breaks were), as if a switch had suddenly been turned on.

At the time, I recognized the significance and underlying reason for this event, because just about a week earlier during my daily web-browsing habit, which includes checking Slashdot, I had seen How Infants Crack the Speech Code, which referred to Early Language Acquisition: Cracking The Speech Code, which says:

Infants learn language with remarkable speed … New data show that infants use computational strategies to detect the statistical and prosodic patterns in language input, and that this leads to the discovery of phonemes and words. …

Each language uses a unique set of about 40 phonemes, and infants must learn to partition varied speech sounds into these phonemic categories. …

There is evidence that infants analyse the statistical distributions of sounds that they hear in ambient language, and use this information to form phonemic categories. They also learn phonotactic rules—language-specific rules that govern the sequences of phonemes that can be used to compose words.

To identify word boundaries, infants can use both transitional probabilities between syllables, and prosodic cues, which relate to linguistic stress. Most languages are dominated by either trochaic words (with the stress on the first syllable) or iambic ones (with the stress on later syllables). Infants seem to use a combination of statistical and prosodic cues to segment words in speech.

Ever since that moment when I started hearing what I assume are word breaks in spoken Japanese, I have had no conscious control over this process, and I cannot turn it off, just as I cannot control or stop the word breaks that I hear in spoken English. This is like so much of the mental processing that takes place in our unconscious minds, in that we have no conscious control over it. Instead, we just get the final product of all that mental processing, sent to the awareness in a form that causes the perceptions that we experience.

At the time I am writing this footnote, in late March 2005, about 4½ months after I began hearing spoken Japanese with word breaks, I am still watching Japanese fansubbed anime (I watch about 1 to 1½ hours a night, when I eat my dinner). Nothing has changed in how I hear spoken Japanese, other than that I quickly got used to hearing it with word breaks, although I guess my unconscious mind is now doing a better job of deciding where the word breaks are, since I have heard many more hours of spoken Japanese in these last few months. (It wouldn’t surprise me if my unconscious mind is making at least a few mistakes regarding where the spoken-Japanese word breaks are, but I wouldn’t know, since I only consciously recognize and know the meaning of maybe half-a-dozen spoken Japanese words; my tiny Japanese vocabulary was learned by matching the English subtitle with the heard Japanese word.)
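The research quoted above says that infants can use transitional probabilities between syllables to find word boundaries. As a rough illustration of that statistical idea, and nothing more, here is a short Python sketch; the syllable stream, the two invented "words" in it, the threshold value, and the function names are all hypothetical. Within a word, the probability of the next syllable given the current one tends to be high; across a word boundary it tends to be low, so a boundary is guessed wherever that probability dips:

# A rough sketch of word segmentation via transitional probabilities,
# in the spirit of the infant-language research quoted above.
# TP(a -> b) = count(pair a,b) / count(a).
from collections import Counter

def transitional_probabilities(syllables):
    pair_counts = Counter(zip(syllables, syllables[1:]))
    syllable_counts = Counter(syllables[:-1])
    return {pair: count / syllable_counts[pair[0]]
            for pair, count in pair_counts.items()}

def segment(syllables, tp, threshold=0.8):
    """Insert a word break wherever the transitional probability dips."""
    words, current = [], [syllables[0]]
    for a, b in zip(syllables, syllables[1:]):
        if tp[(a, b)] < threshold:       # a likely word boundary
            words.append("".join(current))
            current = []
        current.append(b)
    words.append("".join(current))
    return words

# A made-up syllable stream built from two invented "words", tupiro and
# golabu, heard in varying order with no pauses between them:
stream = "tu pi ro go la bu tu pi ro tu pi ro go la bu go la bu tu pi ro".split()
tp = transitional_probabilities(stream)
print(segment(stream, tp))
# ['tupiro', 'golabu', 'tupiro', 'tupiro', 'golabu', 'golabu', 'tupiro']

Here the within-word transitional probabilities are all 1.0, while the across-boundary probabilities are 2/3 or 1/3, so the 0.8 threshold recovers the word breaks, much as a listener might suddenly begin to hear breaks in a stream of unfamiliar speech.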

[140] Regarding my entry into middle age, the decline in how many hours of mental work I could do per week certainly had a cause, but I do not see this cause as involving or requiring allocation changes.

[141] Regarding the cause of aging, there are many reasons to believe that aging is programmed, including the following:

Given that aging is programmed, and given bions, one’s aging plan is carried out by the cell-controlling bions of one’s physical body: they make the aging changes to that body. Regarding the form and residence of the aging plan for one’s physical body, perhaps that aging plan is in one’s DNA, encoded somewhere in the so-called “junk” DNA whose language is presently unknown (section 2.6). Or, if not encoded in one’s DNA, then perhaps there is a learned program for the aging of the human body, among all the other learned programs that the cell-controlling bions of one’s human body have.

As a rule, multicellular plants and animals, especially among those that are big enough to be easily seen by us without magnification, have a limited lifespan and undergo an aging process that ends in death. I think the single best explanation for why aging happens is found in the expression out with the old, in with the new. The same creativity, in terms of learned programs and the evolution of learned programs, that bions had in the remote past to create and evolve the first cellular life, has remained active ever since, and that creativity does not want, in effect, to rest on its laurels. As the early biosphere filled up with organic life, the aging of complex multicellular organisms was probably developed and evolved by cell-controlling bions as a way to help keep the biosphere from becoming clogged up with large (compared to the young of each species), old, never-dying plants and animals. In such an environment, innovation in design becomes more difficult, and there would be much less opportunity for young offspring to survive and grow, because the environment is already filled up with the old, which, because they aren’t aging and becoming less capable of surviving, have all the advantages compared to the young. Thus, in general, the aging process, in which animals and plants have a limited lifespan, avoids stagnation of the biosphere.

Given the previous paragraph, the following is a possible scenario as to how aging first came about: Assume that there were no aging plans in the early biosphere and it did become filled up with large, old, never-dying plants and animals. In this environment, there is an opportunity for a new design: a small parasite that is a fast breeder with a short lifespan, and that specifically targets one of the large animals or plants currently widespread in the biosphere. The short lifespan prevents this parasite from becoming stagnant in terms of its design, which allows the bions of this parasite to make quick adjustments to the parasite’s design details so as to maintain an overall advantage against its targeted animal or plant species. Once this fast-breeder-with-a-short-lifespan strategy proved successful, other parasite designs copied it, with the end result that all those large, old, never-dying plants and animals were completely wiped out. To compete with this successful strategy, non-parasitic plants and animals simply copied the same strategy of a limited lifespan, with the end result that aging is everywhere.

[142] Back when I lost my interest in listening to music, I was reminded of the commonplace stereotype of old people who only listen to music that they heard when they were young. I soon found myself in that same situation. After I lost my interest in listening to music, I rarely listened to music, but when I did listen to music, I wanted to listen to music that I heard and liked when I was young, in my teens or twenties.

Since my loss, I don’t feel any pleasure when I listen to the music from my youth, so why do I have that preference? Well, I no longer feel any pleasure from listening to any music, but I used to have that pleasure, so a possible reason for my preference is that I am returning to what used to give me pleasure, even though it no longer gives me pleasure. Another reason, and this reason is often mentioned by old people, is that it brings back memories of their youth. In my own case, when I listen to music that I liked when I was young, I tend to recall and think about my life from those times.

Even though I no longer feel any pleasure from listening to music, I can still listen to new music and judge whether it is good or bad. For example, about ten days before writing this footnote—I am writing this footnote in early April 2005—I got an email from someone who sent me some links to some music he had created. In his email he said he was a musician and he wanted to give me some of his music in exchange for my writings which he liked. Well, since he put it like that, I kinda felt like I should listen to his music even though I didn’t want to. So, I listened once to each of the four pieces of music he had given me links to.

Those four pieces of music were a kind of music I hadn’t heard before. He had described it as “acid techno and industrial.” Three of the pieces sounded good to me (surprisingly good), and one piece sounded bad, and I knew what I didn’t like about that bad piece. But after fulfilling my self-imposed obligation, I had no desire to listen again to any of his music, since listening to music no longer gives me any pleasure.

Given my own experience, and also given the need for dedicated allocations to avoid channel-sharing conflicts (section 9.6), it follows that the allocation of awareness-particle input channels for carrying to the awareness music-listening pleasure (or displeasure) is separate from whatever allocations are involved in carrying to the awareness a rational judgment and critique of that music. However, presumably the same music-judging mind-part is the primary source for both the explicit rational judgment and critique of a piece of music and the implicit judgment of felt pleasure (or displeasure) for that piece of music, which is why both the explicit and implicit judgments of a piece of music always coincide and agree. In the felt-pleasure (or displeasure) case, that music-judging mind-part sends that feeling directly to the awareness, but in the rational judgment and critique case, that music-judging mind-part is just an input to some other mind-part that constructs the rational judgment and critique and sends it to the awareness.
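The dataflow just described can be pictured with a minimal Python sketch. All the names and the numeric score in it are hypothetical stand-ins; the sketch only illustrates the claim that one music-judging mind-part is the single source of both outputs, which is why the felt judgment and the rational judgment always agree, and why losing the pleasure allocation leaves the ability to judge intact:

# A minimal sketch (hypothetical names and score) of the proposed dataflow:
# one music-judging mind-part feeds two separate routes to the awareness,
# each route having its own allocation of awareness-particle input channels.

def music_judging_mind_part(piece):
    """The single source of the judgment, whatever criteria it actually uses."""
    return piece["quality"]   # a stand-in score between 0 and 1

def pleasure_route(score, pleasure_allocation_present):
    # Route 1: the score is sent directly to the awareness as felt pleasure.
    # With no allocation (the post-loss situation described above), nothing is felt.
    return score if pleasure_allocation_present else 0.0

def critique_route(score):
    # Route 2: another mind-part turns the same score into a rational
    # judgment and critique, sent to the awareness over its own allocation.
    return "sounds good" if score > 0.5 else "sounds bad"

piece = {"quality": 0.8}                      # a made-up judgment score
score = music_judging_mind_part(piece)
print(pleasure_route(score, pleasure_allocation_present=False))  # 0.0
print(critique_route(score))    # 'sounds good' -- judging still works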

[143] Given the gender basis of the three races (section 9.2), and given the strong association that the african race has with music, it seems likely that the african race has the biggest allocation of awareness-particle input channels for carrying the music-listening pleasure feeling to the awareness, the oriental race has the smallest allocation, and the caucasian race is in-between. This also agrees with my own observation that men, on average, are more into music than women. Presumably, men are more into music because they are getting a bigger reward from music, feeling more pleasure when listening to whatever their minds judge as good music. Note that the pleasure one feels from listening to music is also a motivator for creating new music. Thus, africans, on average, are more motivated to create new music than the other two races, and men, on average, are more motivated to create new music than women.

[144] Around the beginning of 2005, at age 49, I lost my previous ability to intensely concentrate. At the time, this change went largely unnoticed by me, because its primary effect was that I was simply no longer concentrating like I used to when I did my work. At the time it just seemed to me like I didn’t want to concentrate any more. Thus, for most of 2005 I didn’t see the change as an actual loss, although it was. As I write this footnote in June 2006, about 1½ years have passed, and the state of intense concentration that prior to 2005 I used to enter easily when doing certain intellectual tasks—including my programming work, and in general whenever I wanted to think deeply about something—is now just a memory for me, because I can no longer concentrate like that, and I haven’t done so for the last 1½ years. Just to be clear, I can still concentrate, but not intensely like I used to.

I guess my current ability to concentrate is about average for a man of my nationality, whereas before 2005 it was well above average, because I’ve known for a long time that most people couldn’t concentrate like I could. Prior to 2005 it was routine for me to concentrate so intensely that I had to take precautions so that I wouldn’t be disturbed while in that state, because if I were disturbed by such things as a phone ringing or someone unexpectedly talking to me, I would have what I called the startle reaction where I would kinda jump with shock as my intense concentration was broken.

Apparently, the ability to concentrate requires an allocation of awareness-particle input channels. In my own case, around the beginning of 2005 I lost much of my previous allocation for concentration. This allocation loss was apparently reallocated elsewhere in a way that greatly lessened a memory deficit I had: my memory deficit was a very below-average ability to remember text sequences. In June 2005 I noticed the memory improvements enough to write about them. Here are the notes I wrote on June 5, 2005 (edited for improved readability and clarity):

This morning, prior to getting out of bed, I was recalling some sentences from my book [I mean this book, for which I had just finished work on the 10th edition about a month previous], and I knew I was recalling those sentences verbatim. It soon occurred to me that this was something new for me, because in the past I could never recall anything from my own writings verbatim unless it was a very short phrase of at most a few words.

As I thought about it, while still lying in bed prior to getting up, I tried to remember how long this had been going on, and I thought I was also recalling sentences verbatim prior to today, but I’m not sure. Regardless, this morning is the first time I noticed this verbatim recall of more than a few words. As I sit here writing this now, it occurred to me to do a simple test of my memory, so I picked up a sheet of technical documentation that I last read a few months ago, and I selected and silently read once to myself, at my normal reading speed, a sentence I chose at random from the middle of that page. I put the page down and then tried to recall that sentence I had just read, and I was surprised to see that I was able to recall what I thought was the entire sentence. I immediately checked my recall by rereading that sentence (the sentence is 20 words long). I had made a few mistakes, but even so, this level of recall is definitely new for me, because I could never do anywhere close to this well in the past.

Up until now, prior to this improvement in my recall ability, I used to tell others that I could never recall anything verbatim, which was true. This inability to recall verbatim included my own writings and all other writings, and also anything spoken or said by myself or others. So, up until now I always had to paraphrase when I remembered something I had read or heard, because I could never recall anything verbatim no matter how little time had passed, even if only seconds had passed since reading or hearing it and then trying to recall it verbatim. I think my previous verbatim recall ability was far below average, but now it seems I got a reallocation from somewhere (I don’t know where), and my verbatim recall ability is now closer to being average. Actually, it just occurred to me that I did notice once or twice while I was doing that three-week [programming] job, which I finished two days ago, that my memory seemed better, but I didn’t think any more about it at that time, perhaps because I was very focused on doing that job. So, this improved recall looks very real, but I have no idea where the reallocation came from. What mind-part lost the allocation that my recall mind-part [more specifically, the mind-part responsible for recalling a sequence of symbols] ended up getting? Well, whatever. But I’m glad to have this improved recall, because I always knew I was weak there.

The above notes talk specifically about a substantial improvement in my ability to remember word sequences, but my recall improvement is for any sequence of symbols, including sequences of letters and digits. For example, before this recall improvement, I was unable to read a several-digit number and remember that number long enough to type it into the computer a few seconds later (even a two-digit number was a problem for me). Thus, I was in the habit of always reading the number and typing it in at the same time, digit by digit, and then I would double- or triple-check that the number I had typed in, as seen on the screen, matched the number on the printed page. Now, after my recall improvement, which happened no later than May 2005, the situation is very different: I can now read an arbitrary sequence of characters up to about six or seven characters in length, and still correctly remember that sequence several seconds later, giving me more than enough time to type it into the computer without having to look back at the printed page from which I read that sequence.

In late 2005 I finally realized that the offsetting loss for my memory gain was my concentration. Here are the notes I wrote on November 9, 2005 (edited for improved readability and clarity):

Around the end of September 2005, more than a month ago, it finally occurred to me that the counterbalancing loss for my memory gain was my concentration.

I remember that in late 2004 I was growing reluctant to desk-check my programming. [My habit was that I always concentrated intensely when I desk-checked my program code. As a rule, this allowed me to find any and all errors in that program code.] If I recall correctly, I stopped doing desk-checking in very-early 2005, but I’m not too sure about exactly when.

More tellingly, as far as I can remember, I haven’t had the startle reaction at all in 2005, and it’s certain that I can no longer enter the state of concentration that I used to enter on a routine basis when I did my work. I can’t recall when I last entered that state of concentration, other than that I was still doing it in late 2004.

In early 2005 when I got the new phone—[actually, it was the same old phone, but with a new phone number and an internet connection]—I left the ringer on and was no longer startled by it when it rang unexpectedly. [Prior to 2005 I always had the ringer on that phone turned off, forcing whoever was calling me to leave a message, because, if my phone were to ring when I was concentrating, I would have the startle reaction, which is something I wanted to avoid having, since it was always a big shock for me.]

Also, in early 2005 I noticed that my movements when fixing my dinner had become faster but less careful and deliberate. In the past I moved more slowly and deliberately. I guess my previous higher concentration level meant more was under my conscious control, hence I moved more slowly then.

This faster but less careful and deliberate way of fixing my food is paralleled by how my programming work has become faster but less careful and deliberate. The thought of having errors in my programming code [aka bugs] no longer seems as important to me as it used to, and I certainly no longer carefully desk-check like I used to.

Besides preparing my dinner faster and with less care than I used to, another similar speedup that I noticed in the first half of 2005—I no longer remember exactly when I first noticed it—was that I was typing on my computer keyboard substantially faster and less carefully than I used to. In the past, prior to 2005, I was a slow hunt-and-peck typist, and I almost never made a typo. However, ever since this typing speedup began, I’ve been typing substantially faster than my pre-2005 typing speed, and I often make typos which I quickly correct. Note that this typing speedup happened without my consciously wanting it to happen. I wasn’t trying to type faster. Instead, it just happened.

I think the reason that the loss of my previous ability to intensely concentrate also resulted in faster but less careful body movements, both when fixing my dinner and when typing, is that the decreased allocation to whichever of my mind-parts had previously allowed me to intensely concentrate meant not only a decrease in my maximum concentration level, but also a decrease in my average concentration level when doing such ordinary tasks as fixing my dinner or typing at the keyboard. Thus, after the allocation decrease that happened around the beginning of 2005, my concentration level while doing a given task is, on average, lower than what it was before 2005.

Besides the symbol-sequence-recall mind-part, there are other recall mind-parts that presumably have their own allocations of awareness-particle input channels. This is consistent with how some people are strong in certain kinds of memory and weak in others. For example, in my own case I was weak in symbol-sequence recall, but at the same time my visual recall was good (I believe my visual recall was, and still is, at least average, and maybe a little better than average).

[145] In the course of 2004 my interest in playing first-person-shooter computer games disappeared, even though my ability to successfully play thru them remained intact. I simply lost interest. I attribute this loss of interest to my ambition decline that happened earlier that year.

[146] After the large decline in my ambition and sexual interest at age 48½ and the consequent reallocation, besides the big change in my sense of smell, there were also a few smaller changes for me. In summary, as I reflect upon those smaller changes, it seems that I’ve gotten allocations for a few things that, on average, are more heavily allocated to women than men.

Most noticeable for me was a new feeling: happiness. My first recollection of when I was feeling happy was back in mid-2004 when I was in a supermarket having this feeling, and it suddenly occurred to me that I was feeling happy. And I was kinda shocked by it, because up until that time I only knew that happiness was a feeling that makes girls bounce around and be cheery with their happiness. That was the extent of my understanding of what happiness was, until I felt it for myself in that supermarket. Many times since then, I have found myself feeling happy at different times, with no apparent cause. This happiness feeling is just a mild feeling for me, but it’s nice when it happens.

It doesn’t look like I got a happiness allocation big enough to make me bounce around and be cheery, at least not to the extent I’ve seen girls do it, but the allocation I got was enough so that I can see from my own experience what the happiness feeling is like, and I can easily imagine that if this feeling were substantially intensified I would be bouncing around all cheery too. Happiness is a nice feeling. And for me, most of its appearances have been when I was either acquiring food (in the supermarket) or preparing food (in the kitchen). Also, I have heard statements by women that they often felt happy during their pregnancy, and I have seen women acting happy when they are with their small children. So, it looks like the happiness feeling is given as a reward for actions that are either life-sustaining, such as acquiring and preparing food, or life-perpetuating, such as having and caring for a child. So, it is easy to see why the happiness feeling, on average, is more heavily allocated to women than men: by virtue of their giving birth and being mothers, women are more directly involved with life-perpetuating actions than men are. And, regarding life-sustaining actions, women, on average, are more involved with food acquisition and preparation than men are, and women can also breast-feed after giving birth.

[147] Another reallocation-caused change that I am sure of, after the large decline in my ambition and sexual interest at age 48½ and the consequent reallocation, is that I now find myself easily moved to feeling emotional and shedding tears when exposed to certain recalled memories and certain scenes in romance stories. My first conscious realization regarding this change was during a conversation I had in mid-February 2005, when I was telling a personal story that I had recalled and told before in past years without feeling anything, but during this telling I felt myself becoming very emotional and I felt like I was going to cry. After that conversation, as I thought about what had just happened to me, I suspected that an allocation change was responsible. There was already earlier evidence for this allocation change, but I just didn’t see it until shortly before writing this footnote in late April 2005, after thinking at length in an effort to better understand how I had changed.

I mention in another footnote that I began watching downloaded Japanese anime in mid-2004. Prior to the latter half of 2004 I never had any interest in romance stories or shows, and I never watched them. The few romance scenes that I had seen in movies or on TV before that time had never emotionally moved me. Also, from my early teens until the latter half of 2004, I had only cried or felt like crying a few times in my life, and I had never cried or felt like crying for any recalled personal memory or for any scene in a movie or TV show. Well, anyway, without even realizing it, during the latter half of 2004 I was interested in romance stories and I downloaded and watched several anime romance series, and I got emotional at times and shed a few tears while watching them. At that time, I just thought how great this Japanese anime was, and I didn’t make the connection that my having any interest in romance stories was something new to me.

Less than a week before writing this footnote in late April 2005, I downloaded and watched a fansubbed non-anime Japanese-TV romance series, and I got teary and emotional at a number of different points in that series. This time, though, I paid attention to the actual feeling, because I knew I would be writing this footnote. As far as I know, there is no English word for the feeling that goes along with the tears, which is why I’ve been using the word emotional in this footnote when I mean this feeling. Henceforth, I’ll use the phrase crying feeling when I mean this feeling.

My own experience with the crying feeling is that it seems to be a neutral feeling that is neither painful nor pleasant. The lack of an English word for the crying feeling is probably due to the feeling’s close association with being teary. In effect, given this close association, there is less need for a separate word for the crying feeling, because the crying feeling is implicit depending on the context when one uses words for being teary. For example, saying “that story made me cry,” implies that one felt the crying feeling when crying. Saying “I felt like crying,” implies that one felt the crying feeling even though one didn’t cry.

In terms of allocation changes, apparently I got a substantial allocation increase for whatever mind-part is involved with causing crying and its associated crying feeling. I guess my newfound interest in romance stories also traces to this mind-part, at least partially so.

[148] Based on my own experience so far (I am writing this sentence in June 2006 at age 50½), the reallocations that happened to me in my middle age have mostly been used to fill in allocation deficits I had. Thus, as a result of those reallocations I am now closer to being average for a man of my age and nationality.

The reallocations that happened to me in my middle age were not subject to my conscious control or wishes. If I had had a conscious choice about those reallocations, I would have used all the allocation losses from my sexual mind-part and elsewhere to improve my intelligence. The improvement of my memory was something I would have consciously wanted, but not at the expense of losing my ability to intensely concentrate, which is what happened. Also, I certainly would not have chosen my sense of smell or my game-playing ability for improvement, but that is where a substantial part of the reallocations went.

For the purpose of this footnote, I’m defining a typical human’s current life cycle as having begun when that person’s soliton/mind integrated with the brain of its current human body prior to the birth of that body, and then extending thru that human life and the following afterlife, and then ending shortly before that person’s soliton/mind integrates with the brain of what will be its next human body, prior to that body’s birth.

Given that allocation changes are not subject to conscious control, the question regarding reallocations is: what guidelines does the unconscious mind use to determine how a given reallocation is distributed among the mind-parts? More specifically, which mind-part or parts get the awareness-particle input channels recently lost by some other mind-part? Perhaps the most important guideline is what may be called use it or lose it. In effect, each mind-part that can get an allocation of awareness-particle input channels will typically get an adequate allocation at some point in a typical human’s current life cycle, because otherwise, if too much time passes without that mind-part getting an adequate allocation, the evolutionary forces at work in that person’s mind may eventually change that mind-part so that it loses its capability to accept or use an allocation (more specifically, by evolutionary forces I mean the evolution of learned programs; see section 3.6). Thus, in effect, use it or lose it.

As a rule, to avoid the use it or lose it danger, body-related mind-parts, especially those mind-parts that are inactive in the lucid-dream stage of the afterlife, including the senses of smell, touch, and taste (the senses of smell and taste are probably also inactive during the bion-body stage of the afterlife), will need their allocations when the physical body is still present (this explains why my weak sense of smell had priority for an allocation increase). For those mind-parts that do not need the physical body, such as all the intellectual mind-parts and the emotional mind-parts, adequate allocations for those mind-parts can be postponed until the afterlife.
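Combining this footnote’s use-it-or-lose-it guideline with the earlier observation that reallocations seem to go where one’s allocation deficits are greatest, the selection of a reallocation target can be sketched as a simple priority rule in Python. Everything below is hypothetical: the function name, the mind-part names, and the deficit numbers are illustrative stand-ins, not a claimed mechanism:

# A sketch (hypothetical names and numbers) of the reallocation guidelines
# suggested above: freed channels go to the mind-part with the largest
# deficit relative to the average person, and body-dependent mind-parts
# get priority while the physical body is still present, because their
# allocations cannot be postponed until the afterlife.

def pick_reallocation_target(deficits, body_dependent, body_present=True):
    """deficits: mind-part -> (average allocation minus current allocation)."""
    candidates = {part: d for part, d in deficits.items() if d > 0}
    if body_present:
        # Prefer a body-dependent mind-part if any of them has a deficit.
        urgent = {p: d for p, d in candidates.items() if p in body_dependent}
        if urgent:
            candidates = urgent
    return max(candidates, key=candidates.get)  # the biggest remaining deficit

# Made-up deficits loosely echoing the account above: a weak sense of
# smell (body-dependent) alongside a weak symbol-sequence recall.
deficits = {"smell": 30_000, "symbol_sequence_recall": 50_000}
print(pick_reallocation_target(deficits, body_dependent={"smell", "taste", "touch"}))
# 'smell' -- the body-dependent deficit is filled first, as suggested above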

[149] Being a man, I had never heard of this large increase in a pregnant woman’s sense of smell until July 2009 when a woman I knew told me about it during some idle conversation (she was in her late thirties then and was a mother of several children). I asked her if I could make notes, and she said yes, and here are my notes of what she said, and she agreed they were accurate when I read them back to her immediately after writing them (edited for improved readability and clarity):

During pregnancy (she says for her this change happened at the beginning of pregnancy as far as she could tell), her sense of smell was greatly enhanced (she estimates 80 times more sensitive than her normal sense of smell which she considers average), and her sense of smell went back to normal very quickly after her baby was born.

She says every woman she has talked to about this sense-of-smell enhancement during pregnancy (she estimates more than 25 women) has had the same experience with their own pregnancy. And she also says that when she asked 6 gynecologists about the very large sense-of-smell enhancement that she had during her pregnancy, they just said it was normal for a pregnant woman.

During pregnancy she says she was cleaning a lot just to keep down the smells, and that certain foods smelled very bad to her and she wouldn’t eat them (including some foods she liked outside of pregnancy); she says everything had an odor, even what one would consider neutral objects such as furniture, pens, playing cards; and that plastics in general reeked. For the pens, she says the ink especially smelled very bad to her. Also, she suggested that strange food selections by pregnant women are at least partially driven by how the food smells.

Continuing with these notes, regarding the losses she experienced during her pregnancy that were coincident with the large improvement in her sense of smell (and likewise these losses, as with the large improvement in her sense of smell, ended very quickly after her pregnancy ended):

She says she had lower alertness in general, and her short-term memory seemed to be less.

She mentions the phrase “mommy brain,” which she says is a common phrase used by pregnant women referring to their memory loss and confusion during pregnancy. She says memory loss includes everything such as what you recently ate, what you did the day before, where your keys are, what day of the week it is, etc.

In general, she says this memory loss could not be overcome by a determined conscious effort to remember such things.

Although I didn’t write it down in those notes, I do recall that I asked her why she had done so much research about this subject, asking more than 25 women and 6 gynecologists, and she said the reason she became so interested is that she had never heard of it until it happened to her when she became pregnant. No one had told her about it in advance, and she thought someone should have, after she found out from her many inquiries that it’s a normal part of pregnancy.

I am writing this footnote in December 2012, and since I first learned about this large increase in a pregnant woman’s sense of smell back in July 2009, I have in the intervening years asked about five or six different women about this, including two who were pregnant when I asked them, and they all said they had the same great increase in their sense of smell during their pregnancy. They were less clear about their experience of “mommy brain” during their pregnancy.

Given the above, it should be obvious that the primary purpose of this great increase in a pregnant woman’s sense of smell is to help guide that woman as to what she should eat and not eat, for the sake of her baby’s physical development. I suppose a secondary purpose is to guide the woman away from potentially harmful substances in the air that could enter her body thru her lungs and ultimately reach her developing baby.

[150] The rapid reallocation of awareness-particle input channels that happens to pregnant women when their pregnancy begins, greatly enhancing their sense of smell, and also the rapid undoing of that reallocation at the end of that pregnancy, show that a rapid reallocation of awareness-particle input channels during one’s human life is possible. And if the condition known as multiple personality disorder (MPD) is, in at least some cases, real, and not simply an act of lying and deception, then a rapid reallocation back and forth between two different allocation plans that, in effect, give two different personalities, is the likely explanation for any real cases of multiple personality disorder. A dictionary definition of multiple personality disorder, at https://www.merriam-webster.com/dictionary/multiple personality disorder, says:

a disorder that is characterized by the presence of two or more distinct and complex identities or personality states each of which becomes dominant and controls behavior from time to time to the exclusion of the others and results from disruption in the integrated functions of consciousness, memory, and identity —called also multiple personality, dissociative identity disorder

I have not known anyone with this disorder, and a case that received a lot of publicity when I was young—the book Sybil published in 1973—is probably fiction (see the discussion about it being fiction at https://en.wikipedia.org/wiki/Sybil_(book)). However, a male relative of mine claims that one of his past girlfriends had MPD, having two different personalities, and that girlfriend herself was consciously aware of having two different personalities. If that past girlfriend really did have MPD, then perhaps her unconscious mind scored her two different personalities as being about equal—each personality being the result of a different allocation plan—and, in effect, her unconscious mind couldn’t decide between those two different personalities, and was switching back and forth between those two different allocation plans that gave those two different personalities. However, despite such switching back and forth, it’s still the same soliton (awareness) experiencing the different personalities.

Also, if there are real MPD cases, then the soliton/mind experiencing life with MPD would always have access to any memories made under either allocation plan. In other words, the person, when currently being one of the alternating personalities, would remember what was done and experienced when being the other personality (subject to normal forgetfulness with the passage of time). Any MPD story that claims the person doesn’t know what she did when she was the other personality is probably fiction, because I see no reason for that to be the case, other than to serve as a plot device to allow a more interesting story to be told. Also, any MPD story that claims more than two personalities is probably fiction, because having an unconscious mind that can’t decide between two different personalities for the current human life is unlikely enough, without adding more personalities into the mix, which a fiction writer would probably do just to make the story more interesting.
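
To make the above MPD explanation more concrete, here is a minimal sketch in Python (the class and method names below are my own inventions for illustration only; nothing in the mind is literally a Python object). It models an unconscious mind that switches back and forth between two allocation plans while keeping a single shared store of memories:

# Hypothetical sketch of the MPD explanation above: two allocation
# plans (in effect, two personalities), rapid switching between them,
# and one memory store that both personalities can access.

class UnconsciousMindWithTwoPlans:
    def __init__(self, plan_a, plan_b):
        self.plans = [plan_a, plan_b]  # two allocation plans, scored as about equal
        self.current = 0               # index of the currently active plan
        self.memories = []             # a single memory store shared by both plans

    def switch_plans(self):
        # Rapid reallocation: the other allocation plan, and thus the
        # other personality, becomes active.
        self.current = 1 - self.current

    def record_memory(self, event):
        # A memory notes which plan was active when it was made, but
        # all memories go into the same shared store.
        self.memories.append((self.plans[self.current], event))

    def recall_all(self):
        # Either personality can recall memories made under either plan
        # (subject, in a real mind, to normal forgetfulness).
        return list(self.memories)

In this sketch, recall_all() returns the same memories regardless of which plan is currently active, mirroring the claim that the person remembers what was done and experienced when being the other personality.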

[151] Body feelings include all those feelings that, in effect, report the status of the body to the awareness. Two different kinds of body feelings are listed below. The first list has those body feelings that are not consciously perceived as being localized to somewhere specific on or in the body, and the second list has those body feelings that are consciously perceived as being localized to somewhere specific on or in the body. The first list, which lists non-localized body feelings, includes the following (if I’ve left anything out that belongs in this list, feel free to add it):

  1. feeling thirsty (water needs)

  2. feeling hungry (nourishment needs)

  3. feeling a need for sleep (sleep needs)

While feeling thirsty, one may also have one or more localized body feelings such as a dry mouth, but feeling thirsty itself is not localized. And likewise, while feeling hungry, one may also have one or more localized body feelings that are a consequence of going without food, but feeling hungry itself is not localized. And likewise, while feeling a need for sleep, one may also have one or more localized body feelings that are a consequence of going without sleep for too long, but feeling a need for sleep itself is not localized.

The reason that water needs, nourishment needs, and sleep needs are felt by the awareness without any simultaneous perception of where in the body that need is, is that each of these needs applies to the entire body. Also, in the specific case of sleep needs, sleep is more than just a body need, because one’s awareness/mind itself needs periodic sleep.

Note that I considered adding breathing needs to the first list, because the need applies to the entire body (more specifically, the need for oxygen). However, testing myself by holding my breath for a while, I consciously felt the need to breathe simultaneously with the perception that this need to breathe was in my upper chest and throat. Thus, the need to breathe belongs in the second list. Excretion needs are also needs of the entire body; however, because there are only two excretion outlets, one for urine and one for feces, excretion-need feelings are consciously felt with a localized perception regarding where in the body the waste needing excretion is (the result of a sufficiently full bladder, or of enough fecal matter in the colon).

The second list, which lists localized body feelings, includes the following (if I’ve left anything out that belongs in this list, feel free to add it):

  1. the need to breathe (felt, as noted above, in the upper chest and throat)

  2. excretion needs (felt where in the body the waste needing excretion is)

  3. the touch feeling

  4. localized body pain

Note that localized body feelings have two separate components that are, in effect, simultaneously sent to the awareness:

  1. the body feeling itself, at some intensity level

  2. the localization data for that body feeling, which gives the perceived location on or in the body

One can experience different localized body feelings apparently at the same time, and these different localized body feelings may be felt as being at different locations in and/or on one’s body. Given my statement above that there is a single allocation of awareness-particle input channels that carries the localization data regardless of which localized body feeling is being sent, how is this apparent simultaneity accomplished? My answer is the following design for a learned program named LP_determine_and_send_localized_body_feelings:

/*
This learned program would, in effect, be running continuously in the background in one’s mind when one is awake and in a body, either one’s physical body or a bion-body.

Note: observing what I can consciously feel simultaneously with my own physical body, I believe that the constant MAX_SIMULTANEOUS_TO_SEND has a value of 4.

For the purpose of fooling the awareness into perceiving sequentially sent body feelings as being simultaneous, a value of 1/100th of a second for the constant TIME_INTERVAL_FOR_SENDING_BODY_FEELINGS is probably short enough to fool the awareness.
*/
LP_determine_and_send_localized_body_feelings
{
label:repeat_steps_1_and_2  /* This named label marks a location in the code that the go to statement can jump to. */

/*
step_1

Assuming that the computing-element program is capable of multitasking and can have different threads of execution currently ongoing, then ideally step_1 and step_2 would be in separate threads of execution, with a semaphore protecting access to the global variables that step_1 sets and step_2 uses. However, to keep this presentation simple, in the following code I show step_1 as preceding step_2, in what would be a single thread of execution.
*/

Determine the current localized body feelings, if any, that should be sent to the awareness. Note that some specific minimum intensity level is required for a body feeling before it would be sent to the awareness. Create a table of the localized body feelings to be sent: included for each entry in this table is the body feeling, the intensity level for that body feeling, and the localization data for that body feeling at that intensity level.

If this table has more than MAX_SIMULTANEOUS_TO_SEND entries, then only keep in the table those entries that have the highest intensities, so that this table then has MAX_SIMULTANEOUS_TO_SEND entries; the other entries, having lower intensities, are removed from the table.

/*
Note that the same body feeling can have multiple entries in the table. For example, just sitting in my chair as I type this, if I stop typing and focus instead on what I am consciously feeling of my body, I feel simultaneously the touch feeling at multiple locations on my body, and each of these touch feelings has its own intensity and localization data (for example, feeling my bare feet on the carpet, feeling my back against the chair, feeling my bottom on the chair, and feeling my forearms on the chair armrests). Testing myself further while sitting in this chair, it seems that I can feel the touch feeling with four different intensity levels simultaneously, with each of these four different intensity levels localized differently on my body.
*/

set number_of_table_entries to the number of entries in the table. This number will be between 0 and MAX_SIMULTANEOUS_TO_SEND inclusive.

/*
step_2
*/

if number_of_table_entries is 0  /* no entries */
then
wait TIME_INTERVAL_FOR_SENDING_BODY_FEELINGS  /* Other threads of execution can run while waiting. */

go to label:repeat_steps_1_and_2
end if

if number_of_table_entries is 1
then
Extract from that one table entry the body feeling (actually just a number that identifies which localized body feeling to send to the awareness), intensity level, and localization data, and, in effect, send that body feeling at the intensity level specified, along with the localization data, continuously, to the awareness, for the length of time specified by TIME_INTERVAL_FOR_SENDING_BODY_FEELINGS.

go to label:repeat_steps_1_and_2
end if

/*
At this point the value of number_of_table_entries is somewhere between 2 and MAX_SIMULTANEOUS_TO_SEND inclusive.

When there is more than one table entry to send to the awareness at the same time, if one were to literally send them at the same time, then instead of having a total allocation of N awareness-particle input channels for carrying the localization data to the awareness, one would need instead a total allocation of (N × MAX_SIMULTANEOUS_TO_SEND) awareness-particle input channels for carrying the localization data to the awareness. And, because the value of N is large (my guess is that N has a value of roughly 500,000), that would be very wasteful of the awareness-particle input channels, which are a limited resource.

To avoid being wasteful with the awareness-particle input channels, the method used here is to, in effect, fool the awareness into perceiving several different localized body feelings as happening at the same time, when actually they are happening sequentially within the very short time span TIME_INTERVAL_FOR_SENDING_BODY_FEELINGS.

Regarding how to divide the time interval TIME_INTERVAL_FOR_SENDING_BODY_FEELINGS among the different entries in the table, the simplest way is to give each table entry the same amount of time, which would be (TIME_INTERVAL_FOR_SENDING_BODY_FEELINGS ÷ number_of_table_entries). However, testing on my own body with the touch feeling while sitting in a chair, I think it more likely that the actual learned program in the human mind gives more of the time interval TIME_INTERVAL_FOR_SENDING_BODY_FEELINGS to a table entry if that table entry has a substantially higher intensity level than the other table entries. The idea of this is to further focus the awareness on that more intensely felt location. The simple math that follows this comment, computing total_send_time for each table entry, does what is wanted, sizing each table entry’s total_send_time based on its intensity level compared to the intensity levels of the other table entries.

As an example of how total_send_time is computed, assume number_of_table_entries is 3, and table[1].intensity_level is 380, table[2].intensity_level is 189, table[3].intensity_level is 761, and TIME_INTERVAL_FOR_SENDING_BODY_FEELINGS has the value 4,000. Then, sum_intensities is (380 + 189 + 761) = 1,330. And table[1].total_send_time is 1,143, table[2].total_send_time is 568, and table[3].total_send_time is 2,289 (the three add up to 4,000, which is the value of TIME_INTERVAL_FOR_SENDING_BODY_FEELINGS in this example).
*/
set sum_intensities to the sum of the intensities in the table

for each entry in the table: set its total_send_time to ((that entry’s intensity level ÷ sum_intensities) × TIME_INTERVAL_FOR_SENDING_BODY_FEELINGS)

/*
The following for works as follows: First, sub is set to 1, and then as long as the value of sub is less than or equal to number_of_table_entries, the code between do and end do is executed. Note that table[1] is the first entry in the table, and table[number_of_table_entries] is the last entry in the table.

Also note that the total elapsed time for this for to process all the table entries is TIME_INTERVAL_FOR_SENDING_BODY_FEELINGS (the total_send_time values in the table add up to TIME_INTERVAL_FOR_SENDING_BODY_FEELINGS).
*/
for (sub = 1, sub <= number_of_table_entries)
do
Extract from table[sub] the body feeling, intensity level, and localization data, and, in effect, send that body feeling at the intensity level specified, along with the localization data, continuously, to the awareness, for the length of time specified by table[sub].total_send_time.

increment sub  /* add 1 to it */
end do
end for

go to label:repeat_steps_1_and_2
}
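
For readers who want to check the time-division math in step_2 with ordinary runnable code, the following short Python sketch (my own illustration; the function name divide_send_time is invented here and is not part of the learned-program design above) computes each table entry’s total_send_time from its intensity level, and reproduces the worked example given in the comments above:

# Python sketch of the intensity-proportional time division done in
# step_2 above: each entry's share of the sending interval is
# (its intensity level / the sum of all intensity levels) x the interval.

def divide_send_time(intensities, time_interval):
    sum_intensities = sum(intensities)
    return [round(level / sum_intensities * time_interval)
            for level in intensities]

# The worked example from the comments above: intensity levels of
# 380, 189, and 761, with a time interval of 4,000, give send times
# of 1,143, 568, and 2,289, which add up to 4,000.
print(divide_send_time([380, 189, 761], 4000))  # prints [1143, 568, 2289]

Note that because each share is rounded to a whole number independently, the shares are not guaranteed to add up to exactly the time interval in every case, although they do in this example.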

[152] A specific reallocation that probably happens in the afterlife to many people, perhaps most people, no later than very early in the lucid-dream stage of the afterlife, is that one regains the same ability one had as a young child to easily learn whatever spoken languages one is exposed to, and this implies a substantial allocation of awareness-particle input channels to the mind-part(s) involved in learning a new spoken language. The reason for this reallocation is that the spoken language or languages that are widely used in the lucid-dream stage of the afterlife will be languages that are efficient for use in the lucid-dream environment, whereas human languages, such as English, are efficient for use in the physical environment of our physical world. Thus, regaining the ability one had as a young child to easily learn new spoken languages will allow one to quickly learn whatever new language(s) one is exposed to in the lucid-dream stage of the afterlife. However, because a large allocation is needed to have the language-learning ability that one had as a small child, I think it likely that soon after one has learned the new spoken language(s) that one is exposed to in the lucid-dream stage of the afterlife, at least most of that large language-learning allocation will, for a typical person, be reallocated elsewhere.

In the lucid-dream stage of the afterlife, one’s physical body and bion-body are both gone. One is just an awareness/mind, and instead of living in a world of physical matter, one now lives in a world of d-common matter, and, in general, living with d-common matter is a very different experience than living with physical matter. The most prominent difference is that, unlike with physical matter, one’s human mind has a direct manipulative ability over nearby d-common matter, including an ability to create and destroy d-common matter. Additionally, besides this different common-particle environment, there are other things in the lucid-dream stage of the afterlife that will increase the need for a different spoken language than the language one spoke as a physically embodied human, including, but not limited to, the following:

  1. There are no physical objects and no physical bodies in the lucid-dream stage of the afterlife, so many of the word definitions of a human language lose their usefulness there.

  2. Spoken words there are communicated to others telepathically by one’s unconscious mind, not as vibrations in the air, so spoken words are no longer limited to the sounds that the human voice box can make.

Instead of there being just one universally used spoken language in the lucid-dream stage of the afterlife, it is probably more likely that there are a number of different spoken languages used there, which evolved in the past from different human languages. For example, given a man whose human language was English: early in the lucid-dream stage of his afterlife he will probably associate with other afterlife residents whose human language was also English, and the language he will then learn and use will be an afterlife language evolved from English. Also, at least some afterlife residents may be multilingual, having learned several different spoken languages that are used in the lucid-dream stage of the afterlife. Also, spoken languages are living languages and continue to evolve as needed over time. For example, given how computers and computer programs are a recent feature of human life for many people, English words and phrases such as computers and computer programs might be recent loan-words into one or more of the afterlife languages.

Regarding the efficiency of spoken languages, note that, in general, in a given language, the shorter and more easily said words, consisting of one or two syllables, are the more frequently used words. This means that the definitions assigned to those shorter and more easily said words are definitions of things that, on average, are spoken of more frequently than the things assigned to the longer and less easily said words. In a human language, a lot of the shorter and more easily said words are used for common physical objects, including also many of the words having to do with our physical bodies. In the lucid-dream stage of the afterlife, there are no physical objects and no physical bodies, so probably many of the shorter and more easily said human-language words are reused there with new or modified definitions. Also, the vowels and consonants of human speech are limited to the sounds that can be made by the human voice box (larynx), whereas in the lucid-dream stage of the afterlife spoken words are communicated to others telepathically by one’s unconscious mind (and not communicated as vibrations in the air). It follows that at least some of the spoken languages used in the lucid-dream stage of the afterlife may have at least some of their words using sounds that cannot be closely duplicated by human speech.

[153] Also, regarding reallocations that happen for the lucid-dream stage of the afterlife, perhaps for a typical person the reallocations include a large enough allocation for music-listening pleasure to make listening to what that person’s mind judges as good music very pleasurable. Many lucid-dream projectionists have had incidents of hearing music that sounded very good to them: I had one such incident myself that I still remember decades later (it is August 2015 as I write this sentence), in which I saw a man nearby in that lucid dream who appeared to be “playing” what looked like a stringed instrument on his lap, and the music that I heard was a fast and very beautiful sequence of notes that sounded to me like they could have come from some kind of stringed instrument.

Note that when I was having lucid-dream projections beginning at age 19 and continuing up to age 25 (see chapter 10), I already had an allocation for music-listening pleasure. If, instead, I didn’t already have an allocation for music-listening pleasure, then in that lucid dream I still might have consciously heard that music, but I wouldn’t have felt any pleasure from hearing it. I thus assume that those lucid-dream projectionists who report hearing pleasurable music during a lucid dream already have an allocation for music-listening pleasure, as I did. Also, note that in each incident of hearing music in a lucid-dream projection, either the heard music was a construction of the lucid-dream projectionist’s own unconscious mind, or the heard music was in the mind of a nearby person in that lucid dream who sent that music as a stream of messages to the mind of the lucid-dream projectionist. In my own case, it certainly appeared that the music I heard was produced by that person I saw “playing” that apparent stringed instrument, and I assume that person was indeed the source of that music that I heard and remembered.

In the lucid-dream stage of the afterlife, and assuming one has a substantial allocation for music-listening pleasure, one may well become, at least for a while, a member of a music club in which oneself and others are at times actively composing in their minds instrumental music and/or other kinds of music and/or songs to be sung, and then sharing their creations with others in that club and perhaps with others elsewhere. In such a club, listening to good music and feeling pleasure from it may be a commonplace activity.

Regarding communities and interest groups in the lucid-dream stage of the afterlife, just as one may have various interests that change over time in one’s human life, the same is probably true during one’s life in the lucid-dream stage of the afterlife. For example, besides possibly being a member in some music club as suggested above, one may eventually grow tired of that club and switch to a different club whose members, for example, work on mental puzzles and/or games. Or perhaps one joins a math club, or a story-tellers club, or an acting club such as a theater group that puts on plays. Also, one may concurrently be a member and active participant in several different clubs, only leaving a club after losing interest, or after wanting to spend more time in a different club. The possibilities are many regarding the different kinds of active clubs that are in the lucid-dream stage of the afterlife, so as to both pass the time and also, in effect, to exercise the different parts of one’s unconscious mind.

In the lucid-dream environment, consider how one’s mind sends messages when speaking to, or singing to, or “playing” music for, a nearby audience, an audience which may include both known persons and strangers, and whose exact membership may be uncertain.

To send messages that will be received by everyone in the above-described audience which has an uncertain membership, the correct way is to send owned-minds broadcast messages (section 5.1.1). For example, in the lucid dream described above, during which I heard music that a nearby man was “playing”, his mind was probably sending that music as a stream of owned-minds broadcast messages. Note that I didn’t recognize him, so I’m guessing we were strangers to each other.

Instead of sending owned-minds broadcast messages to a nearby audience of certain and/or uncertain membership, one’s mind can send the same messages to a small list of one or more known persons (each known person has an entry in one’s soliton directory) by using the list_of_bions parameter of the send() statement to identify the intended recipient(s) of the sent messages: for each intended recipient person of the sent messages, the unique identifier of that person’s MESSAGING_WITH_OTHER_OWNED_MINDS bion, which is in one’s soliton directory, is added to the list_of_bions parameter. Thus, for example, one could telepathically talk to three specific friends simultaneously, without anyone else within range of the sent messages being able to hear what one is saying.
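
As a rough illustration of the difference between these two addressing modes, here is a short Python sketch (the dictionary-based soliton directory and both function names below are my own inventions for illustration; the send() statement and its list_of_bions parameter are constructs of this book’s learned-program design, not Python):

# Hypothetical sketch of the two addressing modes described above:
# an owned-minds broadcast to whoever is in range, versus a targeted
# send whose recipient list is built from one's soliton directory.

def broadcast_send(message):
    # An owned-minds broadcast message: no recipient list, so it is
    # received by every owned mind within range.
    return {"message": message, "list_of_bions": None}

def targeted_send(message, soliton_directory, recipient_names):
    # For each intended recipient, add the unique identifier of that
    # person's MESSAGING_WITH_OTHER_OWNED_MINDS bion, taken from one's
    # soliton directory, to the list_of_bions parameter.
    list_of_bions = [soliton_directory[name] for name in recipient_names]
    return {"message": message, "list_of_bions": list_of_bions}

# Example: telepathically talk to three specific friends at once,
# without anyone else within range being able to hear.
directory = {"Ann": 7001, "Ben": 7002, "Cal": 7003}  # invented identifiers
packet = targeted_send("hello", directory, ["Ann", "Ben", "Cal"])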

[154] This and the next several footnotes are going to talk mostly about human emotions. This footnote introduces the fear emotion, which besides being very common among humans, is probably also very common in non-human animals that have a soliton/mind.

Regarding myself, I have never felt fear or terror in my life, nor have I ever been scared or afraid, so apparently I have a zero allocation for the fear feeling. Although I have never felt fear myself, I accept that fear is a real feeling that many people—perhaps most people—have experienced. This acceptance is based on the abundance of written and spoken material in this USA society in which I live, that talks about fear as if it were a real feeling, and also based on conversations I’ve had with people who claim to have felt fear, been scared, been terrified, and such. (Based on what I’ve been told, English has different words that all refer to the same fear feeling, including the words fright, scared, and terror. For example, I was told that being terrified means feeling fear more intensely than usual.) One of these conversations was in the Fall of 2004, and the person I was talking with repeatedly made the point that fear has a protective purpose. In reply I said “obviously,” but he kept making the point that fear has the purpose of protecting the person from possible harm.

So, after that conversation, later that night, I had a sudden insight about the reason for a decades-long habit I had: Since at least my mid-twenties I have had the habit of running thru my mind worst-case scenarios as the possible outcome for whatever possible action I was considering, and I then made a decision on that possible action, based on my estimation of how probable a worst-case outcome was. The insight I had was that I was compensating for my lack of fear. I was doing by conscious rational means what a person who feels fear does by simply feeling fear (the fear mind-part probably uses the same basic algorithm that I was consciously using). Thus, I had developed an alternative protective mechanism for myself, because I still had to be protected, since the physical body has many needs—food, water, clothing, shelter—and is structurally weak and easily damaged. (As an aside, since the purpose of the fear feeling is to protect the physical body, and, given that the Caretakers have no physical body, it’s unlikely any of the Caretakers have the fear feeling, or, if they do have the fear feeling, it must serve a different purpose for them.)

Regarding the idea that fear has a purpose, after thinking about it I reached the conclusion that all the various feelings, including both good feelings and bad feelings, have purpose, and their purpose is that these various feelings are the primary means by which the unconscious mind influences the soliton without forcing the soliton’s decisions. The soliton is still the ruler, but similar to the situation in a human government, the ruler is subjected to various influences coming from lower levels in that government. Also, the soliton may sometimes get conflicting feelings coming from different mind-parts, which is similar to a human government where the ruler may sometimes be subjected to conflicting influences coming from different governmental departments. Note that none of the feelings we experience are generated by the soliton. Instead, all feelings are generated by the unconscious mind and sent to the soliton where they are felt.

The purpose of good feelings and bad feelings, explained in the previous paragraph, also applies to all the non-human animals that have a soliton/mind. In general, the non-human animals that have a soliton/mind probably have the same body feelings that humans have, because their physical bodies have the same needs and vulnerabilities that the human body has. Also, I think it very likely that non-human animals that have a soliton/mind typically get pleasurable feelings as a reward when they eat, drink, and have sex. Regarding the list of ten different human emotions given in a later footnote, at least some of those ten emotions, in addition to the fear emotion in that list, are probably also found in many of the non-human animal species that have a soliton/mind. For example, if a non-human animal that has a soliton/mind has the fear emotion, then that animal may also have the joy emotion as a counterweight to that fear emotion, and would probably experience the joy feeling when, for example, that animal comes across an unexpected food or water source. As another example, the happiness emotion, which, among other things, rewards acquiring food and rewards being a mother, may be a common emotion for the non-human animals that have a soliton/mind.

[155] As mentioned in the previous footnote, I have never felt fear in my life. Or, if I have felt fear, it was when I was so young that I no longer consciously remember it (perhaps I had an allocation for the fear feeling during my childhood that was reallocated elsewhere as I grew older, but I have no conscious memories to support this possibility). Besides having never felt fear, I have also never felt loneliness nor sadness.

Until recently I didn’t even know if loneliness and sadness were real feelings or not. I had never really thought much about it, and hadn’t done any research. However, because of the new happiness feeling and enhanced crying feeling that were two of the consequences of the reallocations that followed the large decline in my ambition and sexual interest at age 48½ in 2004, I had a lot of interest during the following year (2005) in the whole subject of feelings, and among other things I wanted to know about loneliness and sadness.

To get answers to these questions, I turned to my then 21-year-old niece, Melanie (my sister’s daughter), who I knew was very feminine and had strong feelings. My niece lives in a different state, so I had to call her on the phone. The first thing I wanted to know about was loneliness, because it seemed to me that loneliness was mentioned more in USA media than sadness, so I reasoned that it was more likely to be real. I phoned her on July 2nd, 2005, asking specifically about loneliness. Here are the notes I wrote two days later on July 4th (edited for improved readability and clarity):

I talked with Melanie and asked her about the loneliness feeling. In answer to my questioning, what she said can be summarized as follows: She said it’s a real feeling, and it has the same kind of intensity range as other feelings (one can feel a little lonely, more lonely, or very lonely). She also said it’s closest to the depression feeling in how it feels, but it’s still a separate and distinct feeling. She has often felt lonely at the same time as feeling depressed, but she has at other times felt lonely without feeling depressed, and at still other times felt depressed without feeling lonely.

Note: She mentioned that she had often been depressed for the last five years. She recently turned 21, so this means her depression started around age 16. This roughly agrees with my own recollection of when I first started hearing about Melanie being depressed.

During the talk, she said that the loneliness feeling was often felt as being lonely for finding the right guy, but sometimes she felt lonely in a non-specific way. Given how she described it, I got the impression that the loneliness feeling, as she experienced it, was more often than not (and more intensely so) focused on finding and being with the right guy.

A few hours after I talked with her, the cause of her depression occurred to me: her unconscious mind wanted her to be mated by age sixteen, and was using negative feelings—the depression feeling and the loneliness feeling—as motivators. The obvious problem for her is that she is living in a crap society (the USA), which forces an extended childhood on young women, and treats a man having sex with them as a criminal worthy of a long imprisonment.

So, my talk with Melanie was very productive, because not only did I get detailed information about the loneliness feeling, but I also got an explanation for the chronic depression that afflicts many teenaged girls in the USA. For those who don’t know, in Europe up until a few centuries ago, it was commonplace for girls to marry in their early or mid-teens (men typically married at a later age). The transformation of society by such things as industrialization, the imposition of forced schooling in the 19th century, and the anti-national policies of imperialism which include hostility to sex and family, have created an environment that is actively hostile to early-teen and mid-teen marriages for women. Apparently, it is easier to change society than it is to change the unconscious mind, with the end result that a lot of young women suffer like Melanie did.

There must be people who have a zero allocation for the depression feeling, but I’m not one of them. Instead, I apparently have a small allocation for the depression feeling, because I’ve been depressed three times in my life for a total of about five hours of feeling depressed: two different romantic disappointments with women when I was young, each resulting in a period of depression that lasted about two hours, and one time in my early forties (actually, the day before my 41st birthday) when I realized that I had been successfully lied to by the USA media regarding a specific matter of history (I’ve written about this elsewhere as follows: “As this realization hit me, I felt very small and weak, and was depressed for about an hour.”). Insofar as I remember what depression feels like (I am 50 years old as I write this footnote in January 2006), it’s an oppressive, negative feeling, and I was really lethargic while having that depression feeling (I just sat in my chair and didn’t want to move).

On July 29th, 2005, I phoned Melanie again, as a followup to our July 2nd, 2005, conversation about loneliness. Here are the notes I wrote September 6th, 2005, about that July 29th followup call (edited for improved readability and clarity):

Supplement for the July 4, 2005 notes about Melanie and the loneliness feeling, and my explanation for her depression:

I knew I was going to tell her about my explanation for her depression by reading her my July 4, 2005 notes, but before doing that, without giving her any clue as to why I was asking, I wanted to put my depression explanation to an immediate test, because I had already learned from previous conversations with her that she had met a man earlier in the year, and she had an active and ongoing relationship with him, and her feelings for him were very strong. So, knowing all this, and knowing that I had what I believed was the correct explanation for her roughly five years of depression, and knowing that she didn’t sound depressed during our recent phone conversations, I expected to hear that her depression disappeared coincident with that recent entry of that man into her life.

So, I asked her whether she was still depressed. She said “No.” I then asked her when the depression stopped, and her answer was exactly what I was expecting to hear: she lost her depression at the same time as that man entered her life. Thus, her answers agreed with my explanation as to the cause of her previous chronic depression. After this questioning and hearing her answers, I then read her my July 4, 2005 notes, and she said my notes were very accurate regarding what she said about loneliness and her experience with it, but she disagreed with my explanation of the cause of her previous chronic depression. Her disagreement is what I expected, since she had already been brainwashed by the USA media to believe the bogus “chemical imbalance” explanation for depression. During the years she was depressed she had been to different psychiatrists, and she had been taking the various “anti-depressant” pills they prescribed, but her depression—although dulled by the pills along with the rest of her mind—remained. But with the entry of that man into her life her depression disappeared.

This footnote is already big enough, so my talk with Melanie about sadness is in the next footnote.

[156] Continuing from the previous footnote, on October 30th, 2005, I phoned Melanie asking about the sadness feeling. The following are the notes that I wrote during that conversation (edited for improved readability and clarity):

According to Melanie, sadness is a real feeling that is separate from the other feelings, including the loneliness feeling, the fear feeling, and the depression feeling. Sadness has the usual intensity range for a feeling. However, in her experience, sadness is a less intense feeling than loneliness, fear, and depression. [Apparently, Melanie has a smaller allocation for the sadness feeling than she has for the loneliness feeling, the fear feeling, and the depression feeling, which is why she hasn’t felt sadness as intensely as she has felt loneliness, fear, and depression.]

From her own experience, triggering causes for the sadness feeling include the following:

  1. A sudden loss of a possession (for example, a few days ago a pair of earrings she had just bought was stolen from her after she had put the bag down in a different store). Also, just losing things in general that she can’t find, but that’s less sad since it’s not compounded with betrayal by a stranger.

  2. Betrayal by a friend (for example, when Melanie found out that one of her friends had lied to her and was talking about her behind her back). Also, she felt sad when a friend was mean to her. She has also felt sad when remembering these things.

  3. Getting a bad grade in school on a test she tried hard on. She can also feel sad even if she didn’t try hard.

  4. When she got out of the hospital and she was a lot heavier than when she went in, the conscious realization that she was substantially overweight made her feel sad.

  5. A car accident she had. She was sad when it happened, and afterwards when she thought about it.

  6. An embarrassment or embarrassing situation, such as being made fun of, can bring about sadness.

  7. Having a minor physical injury, including when she was bitten by a dog, and when she fell off her bike, and when she got a pair of scissors stuck in her foot requiring stitches.

  8. She kissed a male friend, and she helped him out, but he didn’t call, and she felt sad as a result. Her feelings were hurt.

  9. She felt really sad when her rabbit died, and less sad when her hamster died. She also felt sad when her fish died.

  10. She felt very sad when she had bad acne on her face, and when she had to go out in public like that. She felt sad when looking in the mirror, and when she thought about people looking at her.

  11. She has felt sad thinking about how her family doesn’t have much money, and yet she’s spending some of it, so she feels like a weight is on her, and she feels sad about it.

  12. Someone dies that she knows, or something bad happens to someone she knows. For example, when one of her cousins died in a car accident, and also when she heard about her grandmother suffering from pain caused by shingles. It doesn’t have to be a person close to her. For example, she felt sad about 9/11 (September 11, 2001 [the unexpected destruction of the World Trade Center in Manhattan, New York, USA, on this date]), the whole thing.

  13. If she breaks something accidentally that has value to her or others, she may feel sad about it, especially if that broken thing meant something to either her or someone else.

  14. She’s sad if it’s really cold outside. She doesn’t like the feeling of being cold, and she can’t enjoy being outside when it’s like that. She feels sad (a light sadness) when she transitions from a warm environment into the painful cold. And if she’s stuck in that cold, she can feel sad at different times while being stuck in that cold.

The above triggering causes are listed in the order that Melanie remembered them and told them to me (I wrote each one down as she was talking). Some of the events mentioned were recent events in her life, and others were years or many years in the past. I kept asking Melanie for more and more examples of what caused her to feel sad, because I basically wanted her to tell me everything she could remember about sadness, and our talk only ended when she couldn’t think of anything else that has caused her to feel sad. Thank you Melanie for your help.

[157] The following table lists ten emotions. Each of these emotions has its own very specific and unique feeling which can vary in intensity but is always the same feeling in terms of how it feels to the awareness. Each of these emotions is distinct and separate from the other emotions and all other feelings, and each of these emotions, assuming one has a nonzero allocation for it, has its own dedicated non-shared allocation of awareness-particle input channels.

Although some people have a nonzero allocation for each of these ten emotions, there are also other people who do not, including myself. In my own case (I’m 50 years old as I write this sentence in 2006), I have yet to feel four of the ten emotions listed below: fear, joy, loneliness, and sadness. This implies that for my entire life so far, I’ve had a zero allocation for these four emotions (it’s possible that I had a nonzero allocation for one or more of these four emotions during my infancy and/or early childhood, but I don’t have any conscious memories to support this possibility).

depression:

Assuming one has an adequate allocation to feel it, depression is perhaps the worst feeling to have. Given the conditions under which it appears, and also its potential to be chronic, it seems that the depression feeling is sent to the awareness as a notification or signal that the unconscious mind is frustrated with the current situation. The purpose of the depression feeling is to provoke the awareness to change the current situation, because depression is something that the awareness will want to avoid feeling.

The immobilizing quality of depression makes it harder for the awareness to continue with its life as usual. As long as the situation remains unchanged, the depression can remain, becoming chronic.

Changing the situation in a way that ends the depression depends on the situation. In my own life I’ve been depressed three times, including twice because of romantic disappointments with women, and in each of those two cases, while sitting in my chair immobilized with the depression feeling, I realized things weren’t going to turn out as I wanted and it was time to give up and move on, which is what I did. Thus, I changed the situation by simply giving up, and it worked insofar as my depression ended (in each of those two cases I was depressed for about two hours). Similarly, much later in my life when I was depressed after realizing that the USA media had successfully lied to me and much of the world about a specific 20th-century historical matter, I accepted that I had been deceived and I resolved to study how I was deceived and learn from that experience (in this case I was depressed for about an hour).

happiness:

My own experience with the happiness feeling—described in a previous footnote—is that it is a very nice and pleasant feeling. Based on my own experience and the experience of others, happiness is given to the awareness as a reward for actions that are life-sustaining or life-perpetuating. Thus, the purpose of the happiness feeling is to encourage life-sustaining and life-perpetuating actions.

I have heard of people crying from being so happy. In my own case, I have yet to feel a strong or intense happiness, so my current happiness allocation is probably too small for me to ever cry with happiness (my current crying-feeling allocation is probably adequate, but both allocations are needed). However, I did ask my niece, Melanie, about crying from being so happy, and she said that she herself has cried from being so happy. So, as long as one has a big enough allocation for the happiness feeling, and also a sufficient allocation for the crying feeling, it can happen. Hmm … it sounds rather blissful, being so happy.

fear:

I’ve already described in a previous footnote a conversation I had with a friend who repeatedly made the point that fear has a protective purpose. Thus, the purpose of the fear feeling is to warn the awareness of potential danger. More specifically, the fear feeling is a signal to the awareness that the unconscious mind judges the object of the fear feeling—whatever one is feeling fearful of—as something that is potentially threatening or endangering in some way to either oneself or others or both.

In answer to my question of where the fear feeling lies on the pleasure-pain scale, another friend, my brother-in-law (age 60), described it as follows: “unpleasant to extremely painful, depending on the situation.” He also said in answer to further questioning that depending on the situation, he has felt fear for others, including feeling fear for the well-being of people who were neither close nor well-known to him. However, in general, the closer his relationship to a person, the more intensely he can feel fear if that person’s well-being is endangered. Also, most of his experience with the fear feeling has been in situations where he himself felt threatened or endangered in some way, with his own personal safety and well-being at apparent risk.

Regarding fear, on June 20, 2006, I received an interesting email from a 64-year-old man in Texas who described his own experience with fear as follows (quoted with his permission):

Fear has been my most driving force since I can remember (I was a bed-wetter). Fear is probably responsible for most major decisions in my life—I quit smoking because of fear, I quit drinking because of fear, I avoided many risk-prone pleasures because of fear. The absolute most fearful moment in my life was when I first laid eyes on the lady that would become my wife, and she stared me down—I felt a yellow streak run down my back that I had only read about before—sheer utter debilitating FEAR. I have never experienced that level of fear since, but fear, even heavy-duty levels, have always been ready and waiting. I always despised myself for being so afflicted with fear, but, after reading your page on fear [he is referring to the 10th edition of this book, specifically the above-mentioned previous footnote where I describe the conversation I had with a friend about fear having a protective purpose], I am now rethinking my attitude—maybe I should be thankful for having been born with such a massive dose of fear. My life has been more or less blessed and charmed, somewhat.

joy:

My source for information about the joy feeling is my brother-in-law. During a phone conversation on February 22, 2006, I was asking him where the fear feeling lies on the pleasure-pain scale, because after reviewing what I had written so far about emotions in this footnote, I realized I was missing that detail. However, after getting his answer about fear, I then asked him if I was missing anything from my then list of eight emotions which I read to him, and he said I was missing joy.

Initially I was skeptical about this claim of a joy feeling (as far as I know, I haven’t felt joy myself), but after detailed questioning and note taking, I realized a few things: joy is a real feeling that my brother-in-law has felt at different times, and it’s not the same as the happiness feeling. Although he has an allocation for this joy feeling, apparently he has a zero allocation for the happiness feeling, because his idea of the happiness feeling is the same kind of intellectual idea of happiness that I used to have before I got an allocation for the happiness feeling. His idea of happiness is when everything is going well in his life, then he is happy. Fortunately, in sharp contrast to his ignorance about happiness being a real feeling, he has a lot to say about joy being a real feeling.

About the joy feeling, here are my notes which I took during that phone conversation (edited for improved readability and clarity). These notes record my brother-in-law’s answers to my various questions:

Other English words for the joy feeling: elation, thrilled, cloud 9.

Regarding what triggers the joy feeling, he says there are two essential requirements:

  1. the outcome must be uncertain or unexpected (an element of risk)

  2. the outcome must be a good outcome for himself personally

Regarding where joy lies on the pleasure-pain scale, he said it is highly pleasurable, intensely pleasurable. When it triggers, it’s usually a 9 or 10 on the pleasure scale (10 is max pleasure). The better the outcome, and the more unexpected it is, the more intense the joy.

Joy is personal. He hasn’t felt joy for unexpected good things happening to others.

Events in his own life that he remembers as causing him joy:

  1. Successfully skiing down a difficult hill for the first time.

So, given the above information about the joy feeling, what is its purpose? Initially I was puzzled, but it now seems rather obvious to me: The purpose of the joy feeling is to encourage risk taking by rewarding it with the joy feeling when a good outcome results. Apparently, one of the effects of the joy feeling is that it can act as an antidote for the fear feeling, because my brother-in-law, who also has an allocation for fear, said that he felt some fear while skiing down that hill the first time, but he knew that the joy feeling was waiting for him if he succeeded.

In the English language, for those who experience the joy feeling, an often used phrase for it is adrenaline rush, especially when they talk about a physical activity or sport that has an element of physical danger. For those individuals who actively seek the joy feeling from dangerous sports and/or other dangerous physical activities, a common phrase for such persons is adrenaline junkie. Examples of dangerous sports include mountain climbing, mountain biking, skydiving, paragliding, bungee jumping, and kayaking down rapids. Also, at least some adrenaline junkies are drawn to the non-sport activity of riding roller coasters. For example, the following are the notes that I wrote on January 31, 2012, after talking with a 23-year-old woman who said she had ridden roller coasters hundreds of times, and that she had been on more than a dozen different big roller coasters. Her name is Jennifer, and I refer to her in these notes (edited for improved readability and clarity) as Jenn (she was one of the nurses I had helping me care for my father at home after my mother had died; Jenn was very intelligent and I found her very sexually attractive and that is why I talked with her a lot, until I was worn down by her repeated rejections of my advances because, as she repeatedly told me, I was too old, old enough to be her father, and so on; oh well, such is the situation for old men—I was 56 then—young women don’t want us as potential fathers of their future children, which is understandable):

Jenn says she always feels some fear and anxiety when she first gets on a roller coaster, and her fear and anxiety feelings continue while she waits for the roller coaster to start moving (this wait can be as long as a minute or two). Once the roller coaster starts moving, her fear and anxiety feelings continue until the roller coaster reaches the first peak, before the first drop-off of that roller-coaster ride. Jenn says the emotional intensity of her fear and anxiety is at about a medium-high on her intensity scale, from the beginning when she first gets on the roller coaster and continuing until the roller coaster reaches that first peak.

Once the roller coaster is at that first peak and starts its fall downward and picks up speed, it is at that moment when Jenn first starts feeling the joy feeling (Jenn calls it an adrenaline rush), and the intensity of this joy feeling for her is variable during the remainder of that roller-coaster ride, with her intensity peaks occurring at the different loops, twists, and turns.

At the end of the roller-coaster ride, as soon as Jenn is unbuckled and off the roller coaster, and her feet are on the platform alongside the roller coaster, Jenn feels overjoyed (a strong joy feeling).

I don’t really know about the prevalence of the joy feeling in the general population or in the two genders. However, given its purpose to encourage risk taking, it seems likely that, on average, it is more heavily allocated to men than women. Also, as I think about it, on average, men like to gamble more than women, and perhaps in many cases a man who likes to gamble also has an allocation for the joy feeling, and he knows that if he wins against the odds he will get a reward: the joy feeling.

loneliness:

According to my niece, Melanie, the loneliness feeling can be loneliness for the company of others in general, or loneliness for a specific person or kind of person, and, in particular, loneliness for a mate. She said that loneliness is closest to the depression feeling in how it feels, so this means that loneliness is a painful, unpleasant feeling.

Given the very narrow and specific focus of the loneliness feeling, its purpose is very obvious: The purpose of the loneliness feeling is to promote and encourage socialization and mating.

anger:

I have felt anger many times in my life, and I have felt anger at many different intensity levels: ranging from feeling just a little angry, all the way up to feeling so intensely angry that I am almost completely taken over by it and it’s a real struggle for me to retain control over myself. So, I think I have an anger allocation that is at least average for a man of my nationality, and perhaps substantially above average.

Just yesterday (January 22, 2006) I got moderately angry, and after I got home I thought about it a lot, because I was in the middle of writing this footnote about emotions. Here was the triggering cause: I had to drive my mother to a building on the other side of town, and I thought she knew exactly where it was, but it turned out that she didn’t know, and she had me driving around in circles for roughly ten minutes before I got angry about it. In reaction to my own anger, I decided to stop the car and park nearby, with the idea of getting out of the car and just walking into the nearby buildings and asking as needed until we got to the right building that she wanted, which is what we did. So, my getting angry served a useful purpose, because it provoked me into changing the current situation of my driving around in circles, which was getting us nowhere.

As I thought about it later that day, I realized that anger is similar to depression in that both feelings are expressions of frustration with the current situation. The anger feeling, like the depression feeling, is sent to the awareness with the purpose of provoking the awareness to change the current situation. However, these two different feelings seem to cover different kinds of situations with little overlap, if any, between them.

Regarding what anger feels like, it’s definitely an unpleasant feeling, but not very unpleasant. On the pleasure-pain scale I would have to say that anger, even when I felt extreme anger, was at most only a little painful. Regarding gender difference, anger is more common among men than women. Comparing anger with depression, the low pain of anger allows one to change the current situation quickly, whereas the immobilizing quality of depression has the opposite effect. Thus, given that anger is more common in men, and depression is more common in women, this adds to the perception of men being active and women being passive.

laughing feeling:

English seems to lack a single word for the feeling that goes along with laughing—I’m using the phrase laughing feeling for this feeling. Note that words like funny, humorous, and comical, refer to things that cause this feeling, but not the feeling itself. The reason English lacks a word for this laughing feeling is basically the same reason English lacks a word for the crying feeling: the close association of that feeling with an easily seen outward action (laughing and crying, respectively). This means that the feeling is implicit depending on the context when one uses words for that outward action. For example, saying “that made me laugh,” implies that one felt the laughing feeling when laughing.

Although English is lacking, there is still a need for being able to refer to the laughing feeling directly, and likewise for the crying feeling, because one can have that feeling without its associated outward action, as I know from my own experience. For example, I can feel that something is funny—feeling the laughing feeling—without actually laughing about it, although sometimes I do laugh: The more intense the laughing feeling is, the more impetus there is to laugh. However, at less intense levels, the laughing feeling can result in just a smile or perhaps some chuckling, or no outward show at all.

Like anger, the laughing feeling is a feeling that I have a lot of experience with. I think I have a laughing-feeling allocation that’s about average for a man of my nationality. On the pleasure-pain scale the laughing feeling is mildly pleasurable. Perhaps you’ve heard the expression that goes like this: “I laughed so hard that it hurt.” Well, that has happened to me at least a few times in my life, and the pain referred to is just ordinary body pain caused by the physical strain of prolonged, hard laughing. The laughing feeling itself is never painful.

The laughing feeling has a purpose, of course, so what is its purpose? Arthur Schopenhauer said that finding something funny involves detecting a misapprehension. My Webster’s dictionary defines misapprehension as a failure to interpret correctly; a misunderstanding. I remember analyzing Schopenhauer’s explanation after I first learned about it, back in my mid-twenties: I analyzed examples of things I found funny, and I could see that Schopenhauer’s explanation was correct. Thus, the purpose of the laughing feeling is to signal the awareness that there is a misapprehension (and the existence of a misapprehension is also signaled to others who see or hear oneself laughing).

In preparation for writing about the laughing feeling, for the last few days I’ve been paying attention to things I found funny. For example, in a fansubbed non-anime Japanese-TV romance-comedy series that I downloaded and was watching in 2006, here was a scene I found moderately funny: The main character is in a room with several of his friends, and after a setup which I’ve already forgotten, we see him ranting and raving to his friends, voicing a completely wrong understanding of something that happened in the previous scene. Of course, for the audience to find that misapprehension funny, we have to be shown in advance what the correct interpretation is—this was done in previous scenes—so that we know with certainty that that main character has gotten things completely wrong. This comedic strategy was used several times in that romance-comedy: The main character was set up for a misunderstanding, but the audience is given the correct interpretation in advance, and then we see that main character emphatically voicing his misunderstanding to others.

A misapprehension can happen in many different ways. For example, an expectation that proves to be wrong is one kind of misapprehension. Last night I watched an anime that had the following scene that I found funny: The main character is told by a second character that he has to join an ongoing battle taking place in a nearby park (these two characters are watching the battle on a video screen). The main character agrees with that suggestion, and the next thing we see is a rocket that out-of-nowhere springs up from the floor, closes around that main character, and then flies him away while he yells and acts surprised at what is happening. I was surprised too, and I laughed a bit. Of course, I was simply using my own expectation for how that main character was going to get to that park, and my expectation did not include a rocket. Thus, I was laughing at my own misapprehension, which was also the misapprehension of that main character, since he was yelling and acting surprised.

This comedic strategy of an expectation that proves to be wrong has been overused when it comes to exploiting the expectations that we all have of how people move the parts of their body. For example, in past years on USA TV, I have seen way too many exaggerated physical movements for me to still laugh at such things. The already mentioned Japanese romance-comedy series had several such attempts at humor. For example, one scene had two guys in an office spreading out on the floor many pages from a report they had to prepare. Then we see another character walk into that office, and without seeing those pages he starts to walk on them until he is loudly told to get off, at which point we see him react with wildly exaggerated movements, trying to get off those pages on the floor, with the end result that he makes a complete mess of them. I guess if I hadn’t already seen that kind of joke—wild exaggerated movements—a thousand times before on USA TV, I might have laughed at it.

In addition to the above examples, a few days ago during my web browsing I came across a joke that made me laugh out loud even though I was in a room by myself. The joke was part of the write-up for a fund-raising auction of a single t-shirt by the file-sharing guys who run The Pirate Bay, which is located in Sweden. The winning bidder has to fly to Sweden at his own expense to collect the t-shirt, but he gets to meet, talk, and have drinks with The Pirate Bay crew. The joke was in the form of a question-answer pair, with a quasi-serious question being answered with a pseudo-serious joke as follows:

Q: If I were to win this shirt, and fly out to see you, wouldn’t you then, in return, have to fly back to visit me to keep your ratio at 1:1?

A: Actually, we would have to travel and visit several people (especially your sister) as we prefer to keep our ratio well above 1.

This question-answer pair has several misapprehensions in it, all of which are deliberate: The first misapprehension is that both the question and answer parts treat the file-sharing upload-download ratio as if it also applies to visits between people. The second misapprehension (in the answer part) turns the idea of reciprocal visits between people (introduced by the question part) into the idea of the guys from The Pirate Bay showing up to have sex with the questioner’s sister. So, this question-answer pair has a real one-two punch in terms of misapprehensions, with the first misapprehension in the question part serving as the setup for an even bigger misapprehension in the answer part. Well, anyway, it certainly made me laugh.

crying feeling:

The crying feeling, which I have already described in a previous footnote, is a neutral feeling that is neither painful nor pleasant. In terms of its purpose, the crying feeling is like the laughing feeling: both feelings signal something to the awareness, and also to the awareness of others when one outwardly does the action that is closely associated with that feeling. For the laughing feeling, that action is laughing; for the crying feeling, that action is crying or becoming teary eyed. The laughing feeling signals detection of a misapprehension, but the crying feeling is harder to pin down regarding what it is signaling, because, based on my own experience with the crying feeling, it has many different triggering causes, including good things, and also bad things.

It seems that most humans start out with a substantial allocation for the crying feeling, because most babies will cry as a signal to others when hungry, or in pain, or experiencing discomfort. Small children are also prone to crying, especially when they suffer physical hurt or injury. For a typical person, probably no later than puberty, at least some of that crying-feeling allocation is reallocated elsewhere, and the things that trigger the crying feeling also change, at least to some extent. In my own case, from my teen years onward, considering how rarely I felt the crying feeling, it seems that by my early teens at the latest, most of my previous allocation for the crying feeling had been reallocated elsewhere.

From my teen years onward, prior to the large decline in my ambition and sexual interest at age 48½ and the consequent reallocation which included a large increase in my crying-feeling allocation, I had only cried or felt like crying four times in my life, and each time it was about something very bad.

However, after that large increase in my crying-feeling allocation, I have felt the crying feeling many times, sometimes also becoming teary eyed or crying a little, when watching certain things in Japanese anime and non-anime shows. As a rule, at least in my own case, triggering causes seem to be almost exclusively moments when either family togetherness wins against obstacles, or friendship wins against obstacles, or lovers win against obstacles. These are good things that I have the crying feeling for, in sharp contrast to when I had a much smaller allocation for the crying feeling and only certain very bad things were sufficient to trigger that crying feeling.

Based on my own experience with the crying feeling, and also after thinking about examples of when others cry, it appears that the crying feeling is signaling to the awareness that the unconscious mind considers the triggering cause as something important that affects survival within a community. Thus, the purpose of the crying feeling is ultimately to promote community development and stability. Typically, the community is some local community of two or more people, such as family, friends, lovers, fellow workers (a workplace community), and so on.

The neutrality of the crying feeling, being neither painful nor pleasant, is consistent with that feeling being triggered by both good things (things that promote survival within a community), and bad things (things that work against survival within a community). Note that it would be inconsistent if the crying feeling were painful when triggered by a good thing, and likewise inconsistent if the crying feeling were pleasant when triggered by a bad thing. Thus, it’s appropriate that the crying feeling is neither painful nor pleasant.

sadness:

Given Melanie’s list of triggering causes for the sadness feeling (see the previous footnote), it appears that the common element is a loss of some kind. So, Melanie has felt sad over different kinds of personal loss, including such things as loss of physical possessions, loss of trust (betrayal by strangers and friends), loss of social standing (embarrassment, poor grades), loss of normal physical appearance (being overweight, having acne), loss of normal body integrity (suffering an injury), loss of pets (deaths), loss of freedom (lack of money), loss of other people (deaths), and loss of physical comfort (being cold).

So, the sadness feeling is a signal to the awareness that there has been a loss of some kind, and its ultimate purpose is to encourage the awareness to make decisions that will tend to avoid or lessen future losses. According to my niece, sadness is always painful. This is consistent with the sadness feeling always signaling something bad.

Sadness is an emotion that, on average, women have more than men. Regarding this gender difference, I recall a quote from a fansubbed non-anime Japanese-TV drama series that I recently downloaded and watched in 2006:

Women choose life, and men choose death.

The context for the above quote was the following: A high-school girl is in love with her math teacher, and he is in love with her, but he has a brain tumor that will soon kill him, and he is against having a low-chance-for-success operation that could keep him alive but leave him with serious brain damage and resulting mental losses. In the end, when he is close to death, his girlfriend and an older woman successfully work together to get him to agree to have the operation (this older woman is the one who says the above quote). The story ends with hints of a final happy post-operation outcome in which the two lovers are ultimately together again.

So, what does this quote have to do with the sadness feeling? Well, if one has an allocation for the sadness feeling, sadness is something to be avoided, because sadness is a painful feeling. My niece was saddened by death. So, on that basis alone, she would be inclined to choose life-preserving actions for someone close to her, because she knows from her own experience that death makes her sad. The above quote was memorable to me because upon hearing it I realized I was thinking like a man, since in the same situation I would choose death too. Since I have never felt sadness myself, I haven’t had the kind of pro-life reinforcement that Melanie has had as a result of her being saddened by death.

anxiety:

In the course of writing this footnote, after I had written the text for the nine emotions listed above, I asked the same friend who, more than a year previously, had made the point that fear has a purpose, whether there was any emotion I had missed. He suggested anxiety, and he made it sound like a real feeling: in his case, in certain social situations, he gets this anxiety feeling and wants to flee the scene. Thus, the apparent purpose of this anxiety feeling is to encourage avoidance of certain social situations that pose some kind of difficulty for that person.

I haven’t felt anxiety myself, at least not an intense anxiety like he has sometimes felt, although I do remember having been nervous a few times in my teens when facing certain social situations I didn’t want. For example, I remember that in high school there were a few times when everyone in the class had to prepare and give a talk to the whole class about some subject approved by the teacher, and I always felt nervous right before having to give my talk. I guess feeling nervous in a social situation is an example of the anxiety feeling.

Regarding anxiety’s place on the pleasure-pain scale, it has to lie on the pain side, because feeling nervous is unpleasant. Presumably, the more intense the anxiety feeling, the worse it feels. Regarding gender difference, it seems that, on average, the anxiety feeling is more heavily allocated to women than men, because displays of high anxiety levels, including such things as so-called panic attacks, seem to be more common among women than men.

For the ten emotions listed above, seven emotions—depression, happiness, fear, loneliness, the crying feeling, sadness, and anxiety—are each, on average, more heavily allocated to women than men, and the other three emotions—anger, joy, and the laughing feeling—are each, on average, more heavily allocated to men than women.

Some readers may wonder why I didn’t include love in the above list of emotions (specifically, I mean love between men and women, the underlying purpose of which is to bring forth new children). The reason is that there is no specific unique feeling associated with being in love. In other words, one doesn’t know that one’s in love by virtue of having the love feeling, because there is no specific love feeling. Instead, being in love is typically characterized by such things as thinking a lot about the loved one, and being strongly attracted to that loved one. (As an aside about love: In my late twenties I read a magazine article in which the author remarked that everyone falls in love 2½ times in their life, and she was basing this on her own experience and the experience of her friends. I have long since forgotten what that article was about, but that remark about falling in love 2½ times has been memorable for me, because it was also true in my own case: I had fallen in love a total of three different times—three different females—and my last time was only about half as intense as the first two times. I am now much older than when I read that article, but that ½ love I had in my early mid-twenties is still the last time I was in love. I guess the reason the last love is less intense is that it’s a transition from full love to no love.)

With a baby born to parents comes the parent-child bond, and a different context in which the word love is often used in the English language. However, there is no specific unique feeling associated with a mother or father having love for their child, or with a child having love for his or her parents. Instead, when a need for help is involved, this love is characterized by one being inwardly compelled by one’s unconscious mind to help one’s child, or parent, who needs help; and the strength of this inward compulsion—how intensely a given person feels it depends on that person and also on surrounding circumstances—is roughly proportional to the perceived importance and urgency of the help needed. The most common example is a baby or young child dependent on, and in need of help from, his or her parents, since we all start off in life as completely dependent, helpless babies. A less common example is the case of an independent grown child who finds himself inwardly compelled to help his elderly parents as they become increasingly dependent and helpless due to complications of advancing old age—I experienced this myself beginning in early 2006 and continuing for years afterwards up until their respective deaths. The more my parents needed help from me, the more I found that I wanted to help them; I was inwardly compelled by my unconscious mind to help them. Early on in my care of my parents, I realized that the parent-child bond is completely symmetric: not only is a parent inwardly compelled to help his or her dependent baby or child, but this same inward compulsion drives a child to help his or her parent if and when that parent needs help. The primary purpose of the parent-child bond is to bring help as needed between a parent and child; and which one, parent or child, currently needs help can change over the course of time. Regarding being inwardly compelled by one’s unconscious mind to help one’s child or parent when help is needed, it certainly appears that, on average, women are more inwardly compelled than men. And, given the gender basis of the three races (section 9.2), it follows that the african race is least inwardly compelled, the oriental race is most inwardly compelled, and the caucasian race is in-between (my study of the three races confirms this).

Regarding emotions, the same argument used to reject love as an emotion, being the lack of a specific unique feeling, can be used to reject other things that one might otherwise think of as being an emotion. For example, hate is not an emotion because there is no specific unique feeling associated with hating something or someone. Anger is probably the one emotion that people will most often associate with hate, but anger is not hate. If one has an allocation for anger, one may perhaps feel anger at different times against some hated object. However, one could truthfully say that one hates someone or something even if one never feels anger towards that hated object, because hate is an intellectual judgment in the sense of being a statement of strong opposition or rejection.

In addition to love and hate, by using the same argument none of the following are emotions either: pride, kindness, gratitude, appreciation, veneration, despair, hope, cowardice, bravery, jealousy, envy, affection, and friendship. Others can add to this list of non-emotions, since it’s not complete.

Regarding feelings, in addition to the body-feelings category and the emotions category, another category is the other-feelings category, which is a catchall for any feeling that is neither a body feeling nor an emotion. More specifically, other than body feelings and emotions, any feeling that serves as a signal from the unconscious mind to the awareness—alerting the awareness to whatever it is that one is having that feeling about—can be put in this other-feelings category. This category includes such miscellaneous items as the following (if I’ve left anything out that belongs in this list, feel free to add it):

[158] Regarding gender and the physical body, Ian Stevenson’s book, Where Reincarnation and Biology Intersect (op. cit.), left me with the impression that it’s typical for a person to have many human lives in a row as the same gender (man or woman), before eventually switching to the opposite gender: typically, somewhere between ten and forty human lives as the same gender before switching. This is my crude estimate based on the limited, relevant data given in his book. Note that this typically long run of ten to forty human lives before an awareness/mind switches to having the other gender’s physical body is not a problem for the construction of that new physical body, because the awareness/mind that passes from one human life to the next—with the afterlife in-between—is not responsible for constructing its human body. Instead, cell-controlling bions construct the human body.

Stevenson’s book details about half-a-dozen reincarnation cases where the child was the opposite gender in its remembered previous life (for example, a girl remembering her previous life as a man). In these cases that Stevenson details, typically for that child there is some significant carryover from the previous life in terms of that child’s attitudes and preferences, the most common of which (based on those cases in his book) is a preference for wearing the opposite gender’s clothing (for example, a girl who remembers being a man in her previous life, wanting to dress like a boy). Perhaps this cross-dressing preference is primarily due to that child wanting to identify with its remembered previous life, or perhaps it’s primarily due to that child’s current allocation plan, or perhaps both factors are contributing to that child’s cross-dressing preference. Most children, of course, have no conscious memory of their previous human life. So, for most children, even if they were the opposite gender in their previous life, they are probably less likely, on average, to want to cross-dress than those children who actually remember their previous life as the opposite gender.

Assuming it’s typical to have many human lives in a row as the same gender, and then switch to having many human lives in a row as the opposite gender, it seems reasonable to suppose that there will be some kind of carryover when one has many lives in a row with a man’s body, and then switches to having a woman’s body. And likewise, some kind of carryover when one has many lives in a row with a woman’s body, and then switches to having a man’s body.

Define the switch life as being either the first life in a man’s body after many lives in a row in a woman’s body, or the first life in a woman’s body after many lives in a row in a man’s body. Probably the most significant carryover for one’s switch life, in terms of its effect on that switch life, is the current detail of one’s sexual mind-part and its associated data that identifies what one is sexually attracted to.

The allocation plan that one has for the switch life will determine how strongly one feels sexual attraction when a young adult. In general, the bigger the allocation to the sexual mind-part, the more strongly one will experience sexual desire and attraction. However, the size of the allocation to the sexual mind-part does not determine what one is sexually attracted to. Instead, what one is sexually attracted to is dependent on the detail of one’s sexual mind-part and its associated data that identifies what one is sexually attracted to. After thinking about it, and weighing the evidence, I’ve concluded that what one is sexually attracted to has three components:

In my own case, since puberty, I have only found the opposite sex (females) to be sexually attractive. Many years later when I thought about it, I concluded that my being sexually attracted to females was inborn, because it was there from the beginning and I didn’t have to learn it. Thinking about it now, I believe that that “inborn” quality was simply a carryover from my most recent previous human life, in which I had a man’s body. Also, assuming it is normal to have at least ten lives in a row as the same gender, then it’s likely that my most recent previous human life was not a switch life, and my sexual mind-part has already had more than enough time to fully learn how to identify what I am sexually attracted to, which is women in their late teens to early thirties (in my teens after puberty, I was sexually attracted to girls my own age). However, when I have my next switch life, having a woman’s body, my sexual mind-part will still, in effect, be programmed to find females sexually attractive, and that “inborn” quality will probably make me a lesbian, at least initially until my experiences in that switch life, and, if necessary, additional experiences in the first one or two human lifetimes after that switch life, teach me to find men sexually attractive instead of women (in effect, reprogramming my sexual mind-part to find the then currently opposite sex attractive, instead of what would then be the same sex).

Given the above regarding carryover, consider the following four human groups:

continuing-male: the current life is in a man’s body, and so was the previous life
continuing-female: the current life is in a woman’s body, and so was the previous life
switched-to-male: the current life is a switch life in a man’s body
switched-to-female: the current life is a switch life in a woman’s body

Due to carryover, the rates of homosexuality and bisexuality will be much higher in the switched-to-male and switched-to-female groups, compared to the continuing-male and continuing-female groups. Of the four groups, continuing-female is the most feminine, and continuing-male is the most masculine.

Note that the biggest problem caused for oneself by switching genders, which is being sexually attracted to the same gender instead of the opposite gender, is probably an underlying reason why it is typical for humans to have many lives in a row as the same gender, and why the percentage of homosexuals is always small in the total human population. However, people do eventually switch genders, perhaps because one eventually gets bored with being the same gender all the time.


10 A Brief Autobiography of myself, Kurt Johmann

This chapter is new for the 12th edition of this book. I think it will be helpful to the reader to give some background about myself that explains to a large extent what motivated me to work on this book, and keep working on it over the course of many years. Of course, since this is only a brief account, a lot of detail from my life is left out, including all the programming work that I did—both paid work at a corporation, and unpaid work on my own making PC products that in the end never made me a profit (I am not much of a businessman, admittedly, but I do have a real talent for programming). The chapter sections are:

10.1 My own Relevant Experiences regarding Lucid-Dream Projections, Bion-Body Projections, Solitonic Projections, and the Kundalini Injury
10.1.1 My One Dense Bion-Body Projection
10.1.2 My Two Solitonic Projections
10.1.3 My Kundalini Injury
10.2 Motivation and Means for Writing this Book
10.3 Some Details of my Early Life
10.4 Some Details of my Later Life

10.1 My own Relevant Experiences regarding Lucid-Dream Projections, Bion-Body Projections, Solitonic Projections, and the Kundalini Injury

In the previous editions of this book I didn’t say anything about my own experiences with out-of-body projections, solitonic projections, and the kundalini injury, but I certainly had them. In brief, beginning when I was 19 years old, as a result of using Om meditation (which I learned about from reading an English translation of the principal Upanishads), and continuing up until my 25th birthday when I suffered the kundalini injury (described below), I typically did the Om meditation about twice a week, and typically had a conscious out-of-body projection later that night when I was asleep. I later estimated that during those years of Om meditation, I had in total about five hundred out-of-body projections, of which about four hundred were lucid-dream projections, and the rest—about one hundred—were bion-body projections (only one of those bion-body projections had a dense bion-body, with apparently a much greater number of bions in it than any of my other bion-body projections; it is described below). I also had two solitonic projections (described below).

For all the many out-of-body experiences I had back then, including the most extraordinary of them (the one dense bion-body projection, and the two solitonic projections), I never wrote any notes about any of them (I have never kept a diary, either). My reasoning at that time was that if the experience was important enough, I would remember it. Thus, what I relate now about them is recalled from memory. (Note that my projection experiences guided my selection of the two books Astral Projection and The Projection of the Astral Body as reliable primary sources which I used in Chapter 5.)

Over the course of about four hundred lucid dreams, I encountered and interacted with a large number of humans (some were already dead, and others—a minority of whom I knew from my everyday life—were still alive as humans but were projecting, I presume, while they were asleep, and I would also presume that most wouldn’t consciously remember any of it when awake). Regarding animals, only once in a lucid dream did I encounter an animal; it was a tiger—that is what it looked like—and I remember thinking when I encountered it that it was kept in a zoo and it didn’t like its life and it wanted to be human. The only other times I remember encountering an animal during an out-of-body projection was during a few of my bion-body projections where I saw the projected bion-body of our family’s pet cat, and a much later bion-body projection, in 2012, when I saw the projected bion-body of my current pet cat.[159] But only once did I encounter an awareness/mind during an out-of-body projection that impressed me as being neither animal nor human, and that encounter was during the one dense bion-body projection that I had.


footnotes

[159] It is March 2016 as I write this footnote and I am 60 years old. When I was a child of around age seven or eight, my parents had gotten our family an orange tabby cat when it was a small kitten about two months old. Later, while attending Rutgers University, I was living with my parents in their home in Berkeley Heights, New Jersey, and I drove to my classes as a commuter student. I graduated from Rutgers in May 1978 at age 22, and shortly afterwards I got a job as a programmer. I moved out of the Berkeley Heights home in the summer of 1978 into an apartment in the nearby town of Chatham, New Jersey (I lived in that apartment for the next ten years until moving down to Gainesville, Florida at age 32 in the summer of 1988 to become a graduate student at the University of Florida). As best I can remember, my bion-body encounters with our family cat all happened when I was still living in the Berkeley Heights home with that cat also living in that house (after I moved out, the cat remained with my parents until its eventual death from old age years later).

Although my out-of-body projection experiences were pretty much ended by my kundalini injury on my 25th birthday, there were more than a few lucid dreams that happened in the years that followed, without any attempt by me—such as by doing meditation—to make them happen. Also, I had an extraordinary bion-body projection that happened to me more recently, in 2012 at age 56, and it involved a pet cat. As already mentioned in the About the Author at the beginning of this book, I was living with and helping my parents during their final years of life. In January 2010 at age 54 I decided to get a pet cat for our household, since I thought my parents would like seeing a cat in the house. From the county animal shelter I selected an orange tabby cat that I was told was six or seven years old. Although my parents have since died, I have kept this cat and it is living with me in my retirement. The cat has both its front and rear paws covered with white fur, and white fur on its underside including under its mouth and on the front of its neck and chest, looking very much like the pet cat I grew up with, which, I assume, is the reason I selected it from the other cats at that animal shelter. My dad when he was growing up also had an orange tabby cat, and I assume that is why my parents had gotten an orange tabby cat for the family when I was a child. Thus, another orange tabby cat in my life.

I no longer remember the exact month in 2012 when that bion-body projection happened, but it was about three-quarters of the way into that year. In my opinion, the buildup that led that cat to do what it ended up doing to me when it was projected out-of-body is that the cat was becoming frustrated with me: The cat had a lot of energy back then and often harassed me for affection, wanting me to pet it all the time, which I was typically reluctant to do. Also, shortly before the out-of-body incident, the cat kept jumping up on my recliner chair which I would sit in when using my computer or watching TV (I had a large-screen HDTV that I used for both watching TV and for using my computer). Over and over and over again, the cat jumped up on my recliner chair and each time I quickly pushed it off. Many, many times this happened. The cat was very persistent, and this was going on for days. I tried closing my door a few times to keep the cat out, but soon stopped because I had to remain attentive to the needs of my father who was in a different room (my mom had died at the end of 2011, so only my dad was still alive at that time).

After that buildup with the cat, what happened out-of-body is the following: It was nighttime and I was asleep in my bed. I became conscious and found myself (my awareness/mind) in a bion-body that was facing upright toward the ceiling, a foot or two above my physical body on my bed (my bion-body was much less dense—fewer bions—than my one dense bion-body projection described in subsection 10.1.1, but substantially more dense—more bions—than my approximately 100 other bion-body projections that happened before my 25th birthday). Moving rapidly around me, keeping about a foot distant from my projected bion-body, and moving around me at the same height level as my projected bion-body, was the bion-body of my cat (the rapidly moving object that I saw was about the size of my cat, and I knew in my mind as I saw it that it was my cat; it had what looked like an elongated ovoid shape and was very dark or black). It was moving very fast around me without stopping, and I assume this fast movement is why I wasn’t able to see any detail in the shape of that cat’s projected bion-body. Presumably, my mind woke me up, making me conscious, because of what that projected cat was doing. After becoming conscious and realizing what was happening, and after about ten seconds of watching the cat’s projected bion-body rapidly moving around my projected bion-body, my projected bion-body slowly descended back down into my physical body, and my conscious state continued as I then lay awake in my physical body on my bed and thought about what had just happened. I estimated that my cat in its projected bion-body was making about three or four complete circuits around my projected bion-body each second—passing close by my head, feet, and sides, and this continued until I was back in my physical body and could no longer see the cat’s projected bion-body. Also, before eventually falling back to sleep, I remember wondering if my bion-body projection close above my physical body was a normal event during my nightly sleep, with my soliton (awareness) normally asleep and unconscious during that bion-body projection time. Perhaps so, although that was the only conscious bion-body projection that I’ve had in recent years, and that was the first and last time that I’ve had an out-of-body encounter with that cat.


10.1.1 My One Dense Bion-Body Projection

Regarding the one dense bion-body projection that I had, it was about six months before my 25th birthday, so I was about 24½ years old. During the previous five years I had already had hundreds of out-of-body projections, including the two solitonic projections, so I was very experienced with out-of-body projections at that time, but that night was going to be different. I did the normal Om meditation earlier that evening, shortly before I fell asleep, and sometime after I was asleep I became fully conscious while still in my physical body. I then felt a lot of vibration as a very dense bion-body—composed of a great many more bions than any previous bion-body projection I had had—moved slowly upward out of my physical body to a height of about two feet above my physical body, which was flat on its back (the bion-body was also flat on its back during that upward move). During that slow movement upward, and continuing until the end of this dense bion-body projection, I felt a lot of what seemed like swirling around of the bions in my bion-body limbs, and in my bion-body trunk. It felt like there was a lot of swirling movement happening thruout the interior of my bion-body. And I also saw specific swirls in the interior of my upper bion-body chest when I looked there during that slow movement upward, although after all these years I no longer have a clear memory of exactly what I saw when looking within my dense bion-body. However, for those swirling bions that I saw in my upper bion-body chest, my estimate is that they were moving at a speed of a few inches per second (1 inch is 2.54 centimeters).

Note that I never saw or felt any swirling of bions in all the other, much less dense, bion-body projections that I have had. The swirling of bions that I experienced within my dense bion-body has, I assume, been experienced by others who have also had one or more dense bion-body projections, and I assume that in India centuries ago, what was thought, said, written, and taught about this swirling within a dense bion-body projection, resulted in the subject of chakras.[160]

After the slow upward movement of my dense bion-body had stopped (that upward movement, in my estimate, lasted about four or five seconds), I could see clearly from the vantage point of my bion-body head that it was early morning in my apartment with sunlight already streaming in thru the closed venetian blinds on the windows (I still have a memory of what this looked like). For the first time while in a bion-body I could see what I knew were physical objects (an explanation of how it’s possible to see the physical world when in a bion-body is given in section 5.4, using the learned-program statement get_photon_vectors()).

Also, soon after that upward movement of my dense bion-body had stopped, I became aware that there was a small being of indistinct shape very close above me; it was closer to the apartment ceiling than my bion-body was, and it was right above my bion-body face. Within a few seconds of becoming aware of this being, it spoke to me in clear English with a deep voice that did not sound like a human voice at all (because of its deep voice I’ll refer to it with male pronouns). Here is the very brief conversation we had: He asked me if I was surprised about a certain past success I had had, but instead of answering him, I asked him a question of my own, if I would get something specific I wanted, to which he simply said “No.” I don’t normally remember conversations verbatim, even brief ones, but I’ve remembered and thought about that dense bion-body projection many times over the years, and have often replayed that conversation in my mind, so I still remember that conversation verbatim. However, it was too specific to myself at that time in my life, and would require too much explanation for a verbatim recounting in this book to be useful to the reader.

So, who or what was that being? It didn’t say anything about itself, and my own impression of it during that encounter was that it was not a human, neither an in-the-afterlife human nor an embodied human who was projecting out-of-body. The only non-human intelligent beings considered in this book that are intelligent enough to learn English and communicate with it are the Caretakers. Thus, my guess is that it was a Caretaker, and, given its deep voice, a male Caretaker.

At that time, considering the sequence of events, I believed that that being had caused the dense bion-body projection that I had. However, given the procedure in subsection 5.2.3 for a bion-body projection, my current thinking is that, instead of that being directly causing that dense bion-body projection, my unconscious mind was in communication with that being, and my unconscious mind decided to run its bion-body-projection procedure, specifically for a dense bion-body projection, so as to give my awareness the experience that resulted, including interacting with that being.

After that final word, “No”, which ended that conversation with that being, a few seconds later my bion-body moved slowly back down into my physical body, taking about as much time to descend, an estimated four or five seconds, as it took to ascend out of my physical body at the beginning of that bion-body projection. To the nearest minute, my estimate is that the entire bion-body projection, from its beginning when my bion-body started moving up out of my physical body, until its end when my bion-body had fully reentered my physical body and I was fully back in my physical body, had lasted one minute. As soon as that bion-body projection had ended, I stayed awake for about two hours just thinking about it, since it was such an extraordinary event.

Regarding my approximately 100 bion-body projections, there was an unpleasant side to them that I often felt when I was away from my physical body (far enough away from my physical body that I was not aware of it being nearby)—I often felt disoriented (I remember often thinking to myself how disorienting a bion-body projection was after having a bion-body projection during which I was away from my physical body). I never felt any disorientation at any time during any of my lucid-dream projections, and I didn’t feel any disorientation when in my bion-body close to my physical body, such as during my one dense bion-body projection and during my 2012 bion-body projection (described earlier in this section). After thinking about this, I think the primary reason that my unconscious mind was making me feel disoriented is that the up/down orientation provided by the constant downward pull of Earth’s gravity on one’s physical body was absent, because there was no consciously noticeable downward pull by Earth’s gravity on my projected bion-body. My unconscious mind was sending to my awareness an unpleasant feeling that I consciously interpreted as feeling disoriented, because I was in a body—albeit not my physical body—but the downward pull that my unconscious mind was used to always being there when I was in my physical body was absent when my awareness/mind was in my projected bion-body. And the reason why my unconscious mind never sent that unpleasant feeling of disorientation to my awareness during any of my lucid-dream projections is that my awareness/mind was not in a body during those projections. During the bion-body stage of the afterlife, my guess is that even if one’s unconscious mind is initially sending that unpleasant feeling of disorientation to one’s awareness, probably one’s unconscious mind will soon stop sending that, as one’s unconscious mind gets used to the apparent weightlessness of that afterlife bion-body (as stated elsewhere in this book, intelligent particles have mass and are subject to gravity, but because of the learned-program statement move_this_bion(), bions have the potential to move in any direction without regard to gravity—see chapter 5 regarding how one’s mind controls the movement of one’s projected bion-body).[161]


footnotes

[160] Presumably others have had dense bion-body projections, and the swirling of particles within the dense bion-body became the basis of the Indian teachings about chakras (wheels) in the subtle body (the projected bion-body, as I call it). Regarding these chakras, the first paragraph in the Wikipedia article Chakra at https://en.wikipedia.org/wiki/Chakra says:

In some Indian religions, a chakra (Sanskrit cakra, “wheel”) is thought to be an energy point or node in the subtle body. Chakras are believed to be part of the subtle body, not the physical body, and as such, are the meeting points of the subtle (non-physical) energy channels called nadi. Nadi are believed to be channels in the subtle body through which the life force (prana) (non-physical) or vital energy (non-physical) moves. Various scriptural texts and teachings present a different number of chakras. It’s believed that there are many chakras in the subtle human body, according to the tantric texts, but there are seven chakras that are considered to be the most important ones.

The above quoted paragraph is an example of how poor the explanatory tools were before the approach I use in this book, which explains such things as one’s awareness, one’s mind, and out-of-body projections with, in effect, a computerized universe. Given what I say in this book, there is no need to use old terminology like “energy”, “subtle body”, “life force”, or “vital energy”, to explain a bion-body projection.

Also, claims of there being “seven basic chakras” in the subtle body—complete with colorful pictures on the internet that show these “seven basic chakras” as circles centered on a straight line that runs from the top of one’s head, where the first of these “seven basic chakras” is, down to one’s genitals, where the last of these “seven basic chakras” is—are simply a fanciful and imaginary construction made by others who most likely never had a dense bion-body projection themselves.

[161] Regarding the gravity algorithm’s paragraph p22 in footnote 23, I say that the total-gravitational-force vector is added together with other currently applicable force vectors, including, if applicable, move_this_bion()’s force vector. Although I didn’t say it there, because it was too much of an aside for that footnote, I’ll say it here:

I’ve already said elsewhere in this book that I believe that our reality framework—the computing elements and the computing-element program—exists for the benefit of all the awarenesses currently living in this reality framework. Because of this purpose, I assume that the computing-element program’s code regarding move_this_bion() and its force vector, will, in effect, always allow a bion to move against any so-called gravity well, no matter how strong that gravity well is. Thus, for example, even if, hypothetically, one were to fly into the sun, one’s awareness/mind will not become trapped in the sun’s gravity well and sink to the sun’s center. Instead, one’s awareness/mind will be able to use move_this_bion() to fly out of the sun and move far away from it.
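To illustrate this assumed rule, here is a small sketch in Python. It is an illustration only, not the actual computing-element program: the function names, and the small margin by which the move force is made to exceed gravity, are hypothetical.

def magnitude(v):
    """Length of a 3D force vector."""
    return (v[0]**2 + v[1]**2 + v[2]**2) ** 0.5

def net_force(total_gravity, move_request):
    """Sum the currently applicable force vectors on a bion.

    Hypothetical rule: if move_this_bion() supplied a force vector,
    scale it up as needed so it always exceeds the opposing gravity,
    no matter how strong that gravity well is."""
    if move_request is None:
        return total_gravity
    needed = magnitude(total_gravity) * 1.01   # hypothetical 1% margin
    if magnitude(move_request) < needed:
        scale = needed / magnitude(move_request)
        move_request = tuple(c * scale for c in move_request)
    return tuple(g + m for g, m in zip(total_gravity, move_request))

# Example: even deep in a strong gravity well, the bion still moves "up":
print(net_force((0.0, 0.0, -1000.0), (0.0, 0.0, 1.0)))   # net z-component is positive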


10.1.2 My Two Solitonic Projections

With the passage of about 35 years (I am 57 years old as I write this paragraph), I can no longer recall exactly how old I was when I had those two solitonic projections, although they were about a year apart from each other, and happened in my early twenties. Note that section 6.2 Solitonic Projections—which was also a section in all previous editions of this book including the first edition written back in 1993—is based on those two solitonic projections I had.

The context for my first solitonic projection was a lucid dream in which I was over in Europe (my physical body was still back on my bed in New Jersey USA). Marking the end of that lucid dream was a sudden acceleration to a very high speed, but instead of going unconscious until I was either back in my physical body or at a different lucid-dream location, I instead remained fully conscious and found myself (quoting from section 6.2 Solitonic Projections) reduced to “existing as a completely bodiless and mostly mindless awareness—residing at the center of a sphere.” Two features of that solitonic projection not mentioned in section 6.2 Solitonic Projections are that, the whole time during that solitonic projection, moving at high speed, I heard a kind of ongoing crackling sound and I felt the high-speed movement.

After I was back in my physical body I estimated that the duration of that solitonic projection was about 12 seconds, and I later used that time estimate and the rough distance between New Jersey USA and Europe to come up with a rough estimate of how fast I was moving during that high-speed return to my physical body: “a speed of roughly several hundred kilometers per second” (this quote is from section 5.3 Lucid-Dream Projections). Although the lucid-dream literature, as I say in section 5.3 Lucid-Dream Projections, gives reason to believe moving at “a speed of roughly several hundred kilometers per second” is possible, it was actually my first solitonic projection that was the basis for my giving this speed estimate in my book. Expressed in miles, and assuming I was near ground level when in Europe (during the lucid dream I thought I was in Holland), and assuming my return to my physical body was a straight-line path thru the Earth, a speed estimate of about 250 miles per second is reasonable.
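For readers who want to check this estimate, here is the arithmetic as a small sketch in Python. The 5,900-kilometer surface distance is only a rough assumption for the New Jersey to Holland distance, and the 12-second duration is my own after-the-fact estimate:

import math

EARTH_RADIUS_KM = 6371.0
surface_distance_km = 5900.0   # rough New Jersey to Holland distance (an assumption)
duration_s = 12.0              # estimated duration of the high-speed return

# Convert the surface (great-circle) distance to the straight chord thru the Earth.
angle_rad = surface_distance_km / EARTH_RADIUS_KM
chord_km = 2 * EARTH_RADIUS_KM * math.sin(angle_rad / 2)

speed_km_s = chord_km / duration_s
print(f"chord distance: {chord_km:.0f} km")         # about 5,690 km
print(f"speed: {speed_km_s:.0f} km/s")              # about 474 km/s
print(f"speed: {speed_km_s * 0.6214:.0f} miles/s")  # about 295 miles/s

A somewhat shorter assumed distance gives the roughly 250 miles per second figure mentioned above; either way, the result is consistent with “a speed of roughly several hundred kilometers per second.”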

The context for my second solitonic projection was a bion-body projection. It began with my becoming conscious in a bion-body while I was still coincident with my physical body which was flat on its back on my bed, but instead of quickly moving out of my physical body (as with my typical bion-body projection back then), suddenly I felt the bions of my head move up and away from me, leaving me in the solitonic-projection state (a point-like awareness at the center of a perceived spherical shell). Then, after what seemed like a few seconds, I felt the bions of my bion-body head rush back onto me (me being my soliton/mind in this context) pulling me down a short distance back into my physical head as my bion-body and my soliton/mind reintegrated with my physical body—and with that the entire projection experience was over.

This, my second solitonic projection, was the basis for where I say in section 6.2 Solitonic Projections that “a solitonic projection that occurs during a bion-body projection typically begins when the bion-body is stationary”, and “it seems that the apparent shell is only a few centimeters in diameter.” My estimate for the diameter of the perceived spherical shell was about an inch (a few centimeters). That estimate, which I made right after that projection experience was over, was based on my perception of how big that spherical shell seemed to be as the bions of my bion-body head rushed back onto my soliton/mind and, in effect, re-enclosed me; as that re-enclosing was happening, I gained a wider perception of the bions of my bion-body head as it all came back together and coincided with my physical head—this whole rushed-back-onto-me and reintegration episode lasted only a second or two at most. Knowing how wide my physical head is, I was able to make that one-inch estimate of the diameter of the perceived surrounding spherical shell during a solitonic projection.

I have wondered in the past what that perceived surrounding shell is, and beginning with the 6th edition of this book in 2001, up thru the 11th edition in 2006, I’ve been saying that “This apparent shell is probably the limit of the soliton’s direct perception when it is in the solitonic-projection state.” Thus, I reasoned that that perceived surrounding shell wasn’t a real object. However, with this 12th edition, I have changed my thinking on what that perceived surrounding shell is, and I now believe it is composed of my soliton’s owned bions (my mind). Thus, that surrounding spherical shell that I perceived in my two solitonic projections is my mind, with my soliton (awareness) at the center of it.

In both of my solitonic projections, my perception of that surrounding spherical shell, from the perspective of being inside it at the center of it, was that it had a smooth and continuous surface, with no holes or other openings in it. However, that surrounding spherical shell is not, in effect, a solid shell that prevents intrusion by other particles. Instead, it is a porous shell, as demonstrated by the residence of one’s soliton/mind in one’s physical head, with physical matter and cell-controlling bions in close proximity to one’s soliton.[162]

Something else I got from that second solitonic projection is that my soliton resides, as best I could tell from that reintegration episode, at the center of my physical head. Note that in section 6.2 Solitonic Projections—referring to the earlier editions of this book—I didn’t mention this detail of the soliton residing at the center of one’s physical head (or if not exactly at the center, then close to the center), because I didn’t want to reveal my own solitonic-projection experience as the basis for that detail; and besides, it certainly feels like my awareness is right in my head, between my eyes but further back in my head, and I assume it’s the same for other humans. So, giving my projection experience as confirmation of what we humans already feel is the location of one’s awareness in one’s physical body didn’t seem that important back then, but because I am now open about my two past solitonic projections, I am mentioning it.

Also, regarding my two solitonic projections, there was no color to the perceived surrounding spherical shell: the shell was neither white nor black nor any other color; it was not a vision perception. Also, there was nothing within the perceived surrounding shell, other than my awareness at the center of that spherical shell.


footnotes

[162] Assuming that that perceived surrounding spherical shell was indeed my mind (my soliton’s owned bions), one can estimate the number of bions in a soliton’s mind (each soliton has the same number of owned bions): My estimate of the diameter of that spherical shell is one inch (2.54 centimeters), which means that the radius of that sphere is 1.27 centimeters and its surface area is about 20 square centimeters. The estimate from chapter 1 is that each computing element is a cube with a side-width of 10⁻¹⁶ centimeters. A square centimeter, one computing-element thick, contains an estimated 10³² computing elements, and 20 square centimeters, one computing-element thick, contains an estimated 2×10³³ computing elements. Assume that that shell, regarding the soliton’s owned bions, is only one owned-bion thick (justification for this assumption is given in the last paragraph of this footnote).

Also, because that spherical shell, composed of owned bions, is porous: Let’s guess, as an upper bound, that only one percent (1/100th) of the sphere’s surface area is occupied by the soliton’s owned bions (and these owned bions are spread out evenly on that sphere’s surface). And let’s guess, as a lower bound, that only 1/10,000,000,000th of the sphere’s surface area is occupied by the soliton’s owned bions (and these owned bions are spread out evenly on that sphere’s surface). Given the previous paragraph, and the guessed-at upper bound and lower bound for how porous the spherical shell is, the upper bound on the number of bions in a soliton’s mind is (2×10³³ ÷ 100), which is 2×10³¹ bions, and the lower bound on the number of bions in a soliton’s mind is (2×10³³ ÷ 10,000,000,000), which is 2×10²³ bions. For comparison, the number of cell-controlling bions in an adult human body (assuming one bion per cell) is only about 50 trillion bions (5×10¹³ bions). However, regardless of the actual number of bions composing one’s mind, I think it likely that the great majority of the bions composing one’s mind are used for memory storage.
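For readers who want to check the above arithmetic, here it is as a small sketch in Python, using the one-inch shell diameter, the 10⁻¹⁶-centimeter computing-element width, and the two guessed-at porosity bounds from above:

import math

shell_diameter_cm = 2.54                 # one inch
radius_cm = shell_diameter_cm / 2        # 1.27 centimeters
area_cm2 = 4 * math.pi * radius_cm**2    # about 20 square centimeters

element_width_cm = 1e-16
elements_per_cm2 = (1 / element_width_cm)**2    # 10^32, one element thick
shell_elements = area_cm2 * elements_per_cm2    # about 2 x 10^33

upper_bound = shell_elements / 100               # 1% occupancy guess
lower_bound = shell_elements / 10_000_000_000    # 1/10^10 occupancy guess

print(f"shell area: {area_cm2:.1f} cm^2")        # about 20.3
print(f"upper bound: {upper_bound:.1e} bions")   # about 2.0e+31
print(f"lower bound: {lower_bound:.1e} bions")   # about 2.0e+23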

Given this spherical shell of owned bions that is centered on the owning soliton, an obvious question is why does the computing-element program keep all of a soliton’s owned bions at, or nearly at, the same distance from that soliton? I think the primary reason is that the computing-element program’s algorithm for moving a soliton/mind thru 3D space is simplified by defining a single constant separation distance (denote as sd) which is the target distance at which to keep each owned bion from its soliton as that soliton/mind moves thru 3D space. Also, a minor benefit of keeping all the owned bions at the same or nearly the same distance from their soliton is that whenever the soliton sends a message that has more than a single recipient (a soliton can only send messages to its owned bions), those recipients will receive that message at the same time. The porosity of the spherical shell allows that soliton/mind to move quickly thru a mass of other particles without substantially disturbing those other particles. The combination of keeping all of a soliton’s owned bions at about the same distance sd from that soliton, and spreading the owned bions out so as to make the owned mind very porous, results in those owned bions, in effect, all being very close to the surface of a sphere of radius sd, centered on the owning soliton.
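As an illustration of this design choice, here is a small sketch in Python, with hypothetical names (the actual computing-element program is not something I can exhibit), in which an owned bion that has drifted off the shell is simply moved back onto the sphere of radius sd centered on its soliton, leaving its direction from that soliton unchanged:

import math

SD = 1.27  # the constant separation distance sd, in centimeters (estimated above)

def keep_at_sd(soliton_pos, bion_pos, sd=SD):
    """Project an owned bion back onto the sphere of radius sd
    centered on its owning soliton."""
    dx = bion_pos[0] - soliton_pos[0]
    dy = bion_pos[1] - soliton_pos[1]
    dz = bion_pos[2] - soliton_pos[2]
    dist = math.sqrt(dx*dx + dy*dy + dz*dz)
    scale = sd / dist
    return (soliton_pos[0] + dx*scale,
            soliton_pos[1] + dy*scale,
            soliton_pos[2] + dz*scale)

# Example: a bion that drifted out to 1.40 cm from its soliton is pulled
# back to sd = 1.27 cm along the same direction from that soliton.
print(keep_at_sd((0.0, 0.0, 0.0), (1.40, 0.0, 0.0)))   # (1.27, 0.0, 0.0)

Because each owned bion is re-projected relative to wherever its soliton currently is, applying this rule every program cycle would move the whole soliton/mind thru 3D space as, in effect, one rigid, porous shell.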


10.1.3 My Kundalini Injury

My own kundalini injury, which I suffered on November 16, 1980 (my 25th birthday), was my primary source for writing section 4.5 The Kundalini Injury, which was also a section in all previous editions of this book including the first edition written back in 1993. To describe my own experience with kundalini, I’ll quote from section 4.5 The Kundalini Injury and add bracketed [notes] as needed to give more detail:

At some point during meditation [that night before going to sleep it was my usual Om meditation that I was doing about twice a week back then, with myself lying on my back on my bed, and mentally sounding the word Om over and over again], and without any warning, there is a strong sensation at the spine in the lower back, near the end of the spine. There is then a sensation of something pushing up the spine from the point of the original sensation. How far this sensation moves up the spine is variable [in my case it got about halfway up my back before I got concerned about it and moved, which stopped it]. Also, it depends on what the person does. He should immediately get up, move around, and forswear future meditation.

The onset of the pain is variable [for me it began the next day while I was at my desk in the office building where I worked back then; the pain was a burning sensation across my upper back], but it seems to follow the kundalini injury quickly—within a day or two. Typically, the pain of the kundalini injury is a burning sensation across the back [this was my typical experience]—or at least a burning sensation along the lower spine [this was less common for me]—and the pain may also cover other parts of the body, such as the head [I never had this, but if I recall correctly Gopi Krishna did]. The pain is sometimes intense [for myself, the pain wasn’t that bad; it didn’t prevent me from doing my job and it was never disabling or overwhelming for me]. It may come and go during a period of months or years and eventually fade away [in my case as I recall, the burning-sensation episodes would typically last anywhere from minutes, to an hour or two at most; in the first year or two after my kundalini injury, several burning-sensation episodes per week was typical for me; with the passage of time the intensity of the pain, its duration, and frequency of occurrence all declined, and after four or five years it ended completely], or it may burn incessantly for years without relief [this was the case for Gopi Krishna, if I recall correctly—it’s been more than thirty years since I read Gopi’s book, which I bought and read soon after my own kundalini injury].

The common reaction by the sufferer to the kundalini injury is bewilderment. Continued meditation seems to aggravate the kundalini injury [this was definitely the case for me], so the typical sufferer develops a strong aversion to meditation [whenever I would try doing Om meditation after my kundalini injury, I would quickly get a painful burning sensation along my lower spine, and that was enough to stop me from continuing with that meditation—it’s been decades since I last tried to do Om meditation].

10.2 Motivation and Means for Writing this Book

So, given my above out-of-body projection experiences, and given the fact that I’m an intellectual who wants to understand both the world around me and myself, and knowing that neither current religions nor materialism explain the reality I encountered using Om meditation, I was motivated to write this book. And, thanks to the historically recent development of computers, and my own interest since childhood in computers, and my many years as a computer programmer, and my formal education in computers which includes a PhD in computer science, I had the background needed to write those parts of this book that deal with computing.

Apparently my unconscious mind knew what it wanted to accomplish in my current life before I consciously knew it, because my earliest memory directly relevant to this book is from the summer of 1973, at age 17, having recently graduated from high-school: I was standing outside the Public Library in the town I grew up in (Berkeley Heights, New Jersey, USA), and I suddenly thought to myself, verbatim: “Someday I’ll write a book that explains everything.” I had that thought with a feeling of great conviction even though I knew nothing back then of what is in this book: all the learning, projection experiences, other experiences, and thinking needed to write this book was still ahead of me.

10.3 Some Details of my Early Life

My parents—my father Frank T. Johmann (born March 18, 1927; he died at home on December 14, 2013 at age 86), and my mother Marion Johmann (her maiden name was Reynolds; she was born March 31, 1924, and she died at home on December 19, 2011 at age 87)—were both born and raised in Saint Louis, Missouri, USA, living there up until 1955, when they moved to New Jersey, USA. The story my mother told me many years ago about their first meeting was that she was visiting a science fair at Washington University in Saint Louis (she was working as an executive secretary at that time, and was not a student), and she met my dad (he was a chemical-engineering student at Washington University), who was demonstrating the heating of apple slices using microwaves. They married July 2, 1949 (after marriage my mom became a housewife, and they remained married and living together until death parted them; she died first).

My father, with his chemical-engineering degree, worked first at a chemical company whose name I no longer remember, and then at Ford (the car company), where his job involved the chemistry of paints used on cars. However, my dad aspired to more, and while working his day job, he attended night classes at the law school at Saint Louis University, earning a law degree in 1955. Shortly after graduating from law school he got a job with Esso (the giant oil company that later changed its name to Exxon) in Elizabeth, New Jersey, USA, working as a patent attorney writing chemical patents, and my parents then moved to Elizabeth, New Jersey in the middle of 1955 for that job—I was born a few months later, on November 16, 1955, in the local hospital (Elizabeth General Hospital). My parents had a total of three children: I have an older sister born in 1953 in Saint Louis, Missouri, and a younger brother born in 1957 in Elizabeth, New Jersey; I was the middle child.

For their first few years in New Jersey my parents lived in an apartment in Elizabeth, New Jersey, close to my dad’s job, but in 1958 they bought a new house in a new housing development of about 100 houses in Berkeley Heights, New Jersey, USA (its street address was 49 Hampton Drive, Berkeley Heights, NJ 07922). This move was fortunate for me, because many young families bought those new houses, and as I grew up over the years I had a number of good friends my own age who also lived in that housing development, in houses just a short walk or bicycle ride away. (This housing development and most if not all of Berkeley Heights back then had no minorities as I grew up, which contributed to a good and safe environment for us boys to play in.)

I still remember the first good friend I had back then and how I met him. He was a boy my own age (I was about eight years old at the time), and he was the first of many different good friends, all male, I would have at different times in my life. I was riding my bicycle down his street while he was playing on his front yard, kicking an inflated rubber ball around; he kicked the ball towards me, and that was the start of our friendship (we played together a lot over the next few years, until his family moved away). In later years I had a total of four other good friends who also lived in that housing development, all boys my own age, including my two best friends during my high-school years.

I have many good memories from my childhood and teen years, involving playing with my friends (for certain sports we often included other boys in the area in our play). The outdoor activities that we organized ourselves included, during winter, sledding, building things out of snow (including snowmen and igloos), and of course snowball fights. During the warmer months there were bicycle rides with friends to various destinations, and outdoor games of various kinds, including kickball, touch football, badminton, basketball, and softball; Berkeley Heights also had a community swimming pool that we often went to. Indoors, year-round, in our house and in friends’ houses, we played many different board games, and also card games and chess. During junior-high and high-school, one of my best friends had a separate game room in his house that included a pool table, and we often played pool.

The only substantial blemish on those years was the forced government schooling we all had to go thru. Only after I was finally out of the last of that forced government schooling, high-school, graduating in May 1973 at age 17, did my mental development and learning really take off, because the forced “dumbing us down” that was the true purpose of those government schools was finally over (alternatively or additionally, my allocation plan may have changed at about that time, so that I was more intellectually capable).

At this point I’d like to say thanks to my parents: my dad was a good provider for the family and my mother was a good mother. Together they made my home life easy. Christmas time was also great, since there was always a nicely decorated Christmas tree, plenty of presents under the tree on Christmas day, and a turkey in the oven that my parents cooked. They also left my sister, brother, and myself a substantial inheritance, which in my case allowed my early retirement. Thank you, mom and dad.

10.4 Some Details of my Later Life

For at least the decades I knew them, and before they became too old and debilitated to continue doing so, both my parents were voracious readers—almost exclusively of non-fiction on different subjects that interested them. Like my parents, I too have been a voracious reader of whatever non-fiction subjects I was currently interested in. Although I remember reading some science fiction and fantasy novels during my high-school years, once I was out of high-school my readings over the years and decades since have been almost exclusively of non-fiction material.

Starting in my late teens (once I was out of high-school at age 17), and continuing up until my early forties, I was buying and reading what I estimated back then to be about 100 non-fiction books per year (I always had enough money, whether from my parents or myself, to buy whatever books I wanted). From my early forties (late 1990s) onward, my reading became increasingly dominated by reading on the internet. Since my late forties (early 2000s), and continuing to the present (it’s 2013 as I write this paragraph), I have remained a voracious reader, but most of my reading has been on the internet, including forums.

Regarding my beliefs as to what we are, they have evolved over time. Even though neither of my parents was religious (my dad was a materialist), they both grew up in Christian households, and I suppose that influence caused my mother to take me on Sundays to a local protestant church (beginning at what age for me I no longer remember, but ending no later than sometime during middle-school, which was a two-year period before the four years of high-school). So, for at least a few years I was a Christian in my beliefs. Then, at roughly ages 13 thru 16, influenced by science, I remember being a materialist in my beliefs. At age 17, thinking that materialism was inadequate to explain myself, I started reading on Christian religions, including Catholicism, and became a Catholic at age 18, even going to the local Catholic church for service on Sundays, but I remained a Catholic for less than a year. While a Catholic I had broadened my readings to other world religions, including first Buddhism and then Hinduism. The idea of reincarnation, a feature of both Buddhism and Hinduism but absent from Christianity, just made more sense to me in terms of explaining what I am, and at that point, a few months before my 19th birthday, I discarded my belief in Catholicism and Christianity. And then, a few months after my 19th birthday (I remember it was still winter, early in the year), as a result of using Om meditation (already mentioned at the beginning of this chapter) I had my first out-of-body projection.

Regarding the kind of books I was reading in my late teens, besides buying and reading many books on the religions just mentioned, I also bought and read many classics of Greek and Roman history (all in English translation, since English is the only language I know), and I continued buying and reading classics of this and other kinds far into my twenties. After I had my first out-of-body projection at age 19, during that year and following years, far into my twenties, I bought and read many books on out-of-body projections and also books on other psychic phenomena (on these and other subjects, I didn’t limit my book buying to what I could find in local bookstores, since I also ordered many books from the catalogs of various specialty publishers). Thruout my twenties I also bought and read many books on nature (organic life in general, including books on plants and animals; I also visited many different zoos during my vacations to Europe in my mid and late twenties).

In late 1985, a few days before my 30th birthday, I became diabetic: Over the course of several days I could tell I was becoming sick in some way, but I didn’t know what it was. Unknown to me, the amount of glucose in my blood was rising over the course of those three or four days, until it reached such a high level that I became unconscious on the sofa in my apartment on my 30th birthday (November 16, 1985). Fortunately, I had telephoned my mother the day before that happened and told her about my developing sickness, which was a mystery to me. She called me the next day (the day I became unconscious) to check on me, and she said that I answered the phone but just kind of grunted and couldn’t say anything. Alarmed, she drove over to my apartment, got the building manager to unlock my apartment door, and found me lying unconscious on my sofa. The end result was that I regained consciousness some hours later in the local hospital and was told by a doctor that I had diabetes and would be injecting insulin for the rest of my life (the insulin is injected under the skin, not into a blood vessel). Thus, on my 30th birthday I learned that I had insulin-dependent diabetes, and, just as that doctor said, I’ve been injecting insulin ever since. (I had the kundalini injury on my 25th birthday and learned I was diabetic on my 30th birthday, so I remember being very cautious on my 35th birthday, but nothing happened; the only good thing about those two events happening on my birthday is that it has always been very easy for me to remember exactly when they happened in my life.) Diabetes has definitely had a negative effect on my life, and among other things it substantially reduces my life expectancy, but at least it is painless and has been easy to live with, most of the time. The only positive I see in being type-1 diabetic is that I probably won’t live long enough to become helpless and dependent from old age, which happens to many people at some point when they get into their eighties (I saw it happen to my parents). Knowing with certainty about the afterlife and reincarnation, I am comfortable with the thought of my own eventual death, but, like most people, I’m in no hurry to get there.

During the time I was a graduate student at the University of Florida in Gainesville Florida (August 1988 at age 32, thru May 1992 at age 36; earning first a master’s degree and then a PhD, both in computer science), I developed an interest in the subject of UFOs, and over the course of about two years I bought and read scores of books on that subject (many of the older books I found in used bookstores, of which Gainesville had several at that time). I have not seen a UFO myself, nor have I seen or had any interaction with a UFO occupant, but my time in grad school is when I became interested in that subject and studied it. Also, I want to say that the formal education in computer science that I got at the University of Florida definitely improved my knowledge and understanding of computation, algorithms, networks, and programming in general, and allowed me to write on those subjects—insofar as they are in this book—with greater clarity and correctness than I believe would have been possible had I not had that formal education.[163] Indeed, I began work on the first edition of this book soon after my May 1992 graduation at age 36 with a PhD in computer science, and self-published a printed paperback of the first edition in 1993.[164],[165],[166]


footnotes

[163] There is a story behind why, in 1988 at age 32, I became a grad student at the University of Florida in Gainesville Florida, and it directly involved my parents: In January 1986, my father, at age 58, after 30 years working at Exxon, retired from his job there. In 1987 my parents decided they would retire to Gainesville Florida, because they wanted the warm weather of Florida, and my dad wanted to be able to use the University’s libraries to pursue his own research interests. Back then, in 1987, I was living in an apartment in Chatham, New Jersey, about an eight-mile drive from my parents’ house in Berkeley Heights, New Jersey (the same house I grew up in), and I visited them about once a week and also kept in touch by phone, so I knew of their retirement plans. Near the end of 1987 my parents surprised me with the following suggestion: they offered to pay all my living and tuition expenses if I became a grad student at the University of Florida. This was their idea, not mine, since I wasn’t consciously thinking at all about going to grad school. However, as I thought about their offer, I liked the idea of learning more about computer science, and in the end I accepted. I still had to be accepted by the University, so I applied for admission to the computer-science department at the University of Florida, and was accepted for the Fall semester of 1988.

Then, in the summer of 1988, both my parents and I moved to Gainesville Florida. Once in Gainesville, I rented an apartment within walking distance of where my classes would be on the University campus. My parents rented a house less than ten miles from the University, and in 1989 they had a large two-story house constructed in a new housing development in Gainesville known as Haile Plantation. In early 1993, at age 37, my parents wanted me to move in with them. I accepted because, in addition to their being my parents, it was free, and it was a large spacious house with a pool and a big yard with many trees and palms (I was given two large rooms, a bathroom with a walk-in shower, and a small kitchen, all for my own exclusive use on the second floor). Years later, I was able to pay my parents back by helping them when they became too old and needed a lot of help. At the end of 2010, at age 55, I moved us into a smaller one-story house in Gainesville, because in the two-story house my rooms were upstairs but my parents were downstairs, and getting us all on the same floor made my job of caring for them easier.

[164] For the second edition of this book, in 1994 I both self-published a printed paperback and, for the first time, put my book on the internet in the form of HTML webpages (each section of the book in its own webpage). For the third thru eleventh editions of this book, I published exclusively on the internet, always making the book available both as HTML webpages (each section in its own webpage) and as a single HTML webpage (the entire book in one webpage). The HTML for my book has always been free (no charge by me to read it or download it), and this will also be true for the 12th edition once I finish it (since you are reading this, that means I finished it).

[165] All of the editions of The Computer Inside You were published, unlike my first attempt at such a book, titled A-Space, E-Space, Gaia, and Soul, which I began writing in the Fall of 1987 at age 31 and completed in March 1988 at age 32. I never published that book, but I did register a copyright for it (the FORM TX that I have says that the Effective Date of Registration is April 6, 1988).

Much of what I had written in A-Space, E-Space, Gaia, and Soul—regarding meditation, the syllable Om, the Upanishads, out-of-body projections (including Oliver Fox and Sylvan Muldoon), the kundalini injury, and the soul and soul projections—I copied into the first edition of The Computer Inside You (in that first edition, I renamed the soul and soul projections as the soliton and solitonic projections). Most memorable to me about my A-Space, E-Space, Gaia, and Soul book is that during my first semester in graduate school in the Fall of 1988, about six months after I had finished writing it, I realized that I had made a major error in that book: I had imagined, in effect, two different kinds of computing elements with their own programming, which I named A-Space and E-Space, instead of the simpler and more efficient single kind of computing element with its computing-element program. Quoting from my unpublished book:

The deductive leap is to say that there are two types of space, and they are very finely nested with each other. The pieces of space which support the physical world will be called A-space. The pieces of space which support the hidden world will be called E-space. This is the theory of A-space and E-space. In a large volume of space, such as a cubic meter, we could say that half of the cubic meter is A-space and the other half is E-space. We would also say the two spaces are very finely nested together. In fact, the two spaces are probably nested together in a very regular and orderly pattern.
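
To make the quoted nesting idea concrete, here is a minimal, hypothetical sketch in Python (it is not code from any edition of this book, nor from A-Space, E-Space, Gaia, and Soul): it assigns each computing element in a 3-D grid to A-space or E-space by coordinate parity, which produces exactly the kind of regular, orderly, half-and-half checkerboard nesting that the quote describes:

    # A minimal, illustrative sketch (not from the book): assign each
    # computing element in a 3-D grid to A-space or E-space by the parity
    # of its coordinates, giving a regular checkerboard nesting in which
    # half of any even-sided volume is A-space and half is E-space.

    def space_type(x: int, y: int, z: int) -> str:
        """Return 'A' or 'E' for the computing element at grid position (x, y, z)."""
        return 'A' if (x + y + z) % 2 == 0 else 'E'

    if __name__ == "__main__":
        n = 4  # a tiny 4 x 4 x 4 block of computing elements
        counts = {'A': 0, 'E': 0}
        for x in range(n):
            for y in range(n):
                for z in range(n):
                    counts[space_type(x, y, z)] += 1
        # The two spaces come out exactly half and half.
        print(counts)  # {'A': 32, 'E': 32}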

Looking back at that first book and the many editions of this book, writing a book that puts computation at the root of reality has been a long, drawn-out process for me, with many mistakes made along the way, beginning with A-Space, E-Space, Gaia, and Soul, and continuing with the different editions of The Computer Inside You. One type of mistake I made repeatedly in earlier editions of this book is that I believed other authors regarding claims of macroscopic materializations of physical matter done by a person’s mind, and also claims of a person’s mind directly moving physical matter at a substantial distance from that person. An example of this mistake, in The Computer Inside You, is in the fourth thru eleventh editions, in the section titled Sai Baba According to Haraldsson. The following is copied from the first paragraph of that section in the fourth edition:

Psychologist Erlendur Haraldsson (a professor at the University of Iceland) has written a study of the Indian guru Sathya Sai Baba (born November 23, 1926), in his book Modern Miracles. Haraldsson’s personal experience with Sai Baba included witnessing several materializations, which is the type of miracle for which Sai Baba is most famous.

I had bought and read Haraldsson’s book, and I believed him, and put Sai Baba in my own book as someone who was able to do materializations. It was only while working on the 12th edition of this book in 2016, thinking about and writing subsection 3.8.7 The Learned-Program Statements for Seeing and Manipulating Physical Matter have a very Short Range, that I realized that the described materializations done by Sai Baba are simply impossible and didn’t happen. After that realization, I deleted the Sai Baba According to Haraldsson section from the 12th edition. Also impossible, and also something that didn’t happen, is any claim that a person, using his mind alone, directly moved physical matter at a substantial distance from himself.

Given the previous paragraph, one may wonder if I have also been too trusting of what others have written regarding UFOs being real. As I’ve already said, I have not seen a UFO myself; thus, I am relying on what others have written about UFOs. I am sure that there is at least some fiction in the UFO literature, and that some reported sightings and encounters are invented. However, knowing from my own direct experience that we have an existence separate from our physical bodies, the existence of the Caretaker civilization described in section 7.6 is, in my opinion, very likely. And, assuming that the Caretaker civilization exists, it seems likely that they would have physical flying machines, for the reasons given in chapter 8.

[166] As of mid-September 2017, I am only a day or two from finishing the text of this 12th edition and publishing it at my website at https://solitoncentral.com. This footnote was written more than a month ago, with the expectation that I would fill in my estimate of the total hours worked when I reached this point of being finished: For this 12th edition, I have consciously worked on it for a total of about 31 months, working an average of about 3½ hours per day. This gives a total of about (31 months × 30.5 days per month × 3½ hours per day) ≈ 3,300 conscious work hours on this 12th edition. I estimate that close to four-fifths of this total work time of about 3,300 conscious work hours was spent on the following three new parts of the book which have a lot of algorithms, data structures, and code:

The single most time-consuming footnote that I added to the 12th edition is footnote 23. That footnote includes a detailed, efficient algorithm for gravity. In total, producing footnote 23, including all the conscious time needed to work out every detail of the gravity algorithm, took about 6 weeks, working an average of about 4½ hours a day, for a total conscious work time of about 190 hours. I did the majority of my work on footnote 23 in June 2017, near the end of my work on the 12th edition. Most of this book is focused on ourselves as complex, intelligent-particle beings living in physical bodies in a physical world, alongside other physically embodied humans and a wide variety of physically embodied life forms, and I didn’t initially think that I should take the time and effort needed to explain how gravity happens within the framework of the computing-element reality model. However, since I was nearing completion of the 12th edition, I decided to take the time and make the effort needed to explain gravity with an efficient algorithm, and I’m glad I did, because of the successful and pleasing result.
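
The footnote-23 algorithm itself is not reproduced in this footnote. Purely as a generic illustration of what “an algorithm for gravity” can look like in code, here is a minimal sketch, in Python, of a naive pairwise inverse-square (Newtonian) update for point masses; all names and values below are chosen only for illustration, and this sketch is not the footnote-23 algorithm:

    import math

    # A minimal, illustrative sketch only: a naive O(n^2) pairwise
    # inverse-square (Newtonian) acceleration update for point masses,
    # advanced with semi-implicit Euler integration. This is NOT the
    # footnote-23 gravity algorithm; it just shows the general shape
    # of computing gravity step by step.

    G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

    def step(positions, velocities, masses, dt):
        """Advance all bodies by one time step dt."""
        n = len(positions)
        accelerations = [(0.0, 0.0, 0.0)] * n
        for i in range(n):
            ax = ay = az = 0.0
            for j in range(n):
                if i == j:
                    continue
                dx = positions[j][0] - positions[i][0]
                dy = positions[j][1] - positions[i][1]
                dz = positions[j][2] - positions[i][2]
                r = math.sqrt(dx * dx + dy * dy + dz * dz)
                a = G * masses[j] / (r * r)  # inverse-square magnitude
                ax += a * dx / r
                ay += a * dy / r
                az += a * dz / r
            accelerations[i] = (ax, ay, az)
        for i in range(n):
            vx, vy, vz = velocities[i]
            ax, ay, az = accelerations[i]
            velocities[i] = (vx + ax * dt, vy + ay * dt, vz + az * dt)
            x, y, z = positions[i]
            vx, vy, vz = velocities[i]
            positions[i] = (x + vx * dt, y + vy * dt, z + vz * dt)

    if __name__ == "__main__":
        # Example: the Sun and the Earth (approximate values), one hour.
        positions = [(0.0, 0.0, 0.0), (1.496e11, 0.0, 0.0)]
        velocities = [(0.0, 0.0, 0.0), (0.0, 29780.0, 0.0)]
        masses = [1.989e30, 5.972e24]
        step(positions, velocities, masses, dt=3600.0)
        print(positions[1])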

The reason that I refer to my conscious work time, instead of just my work time, is that when doing mental work in general, trying to solve some specific problem (for example, how best to design a specific algorithm), I often find that my mind needs more time to think about it: I’ve consciously reviewed what my unconscious mind currently has to offer regarding that specific problem, but I am consciously unsatisfied for one or more reasons, and, in effect, I tell my unconscious mind that more work is needed. Then, often after “sleeping on it”, and sometimes waking up with new thinking quickly presented to my awareness regarding that specific problem, I know that my unconscious mind has, in effect, found a new approach or solution to that problem. Sometimes, depending on the specific problem, a number of days are needed, with my unconscious mind working in the background, before I get the feeling from my unconscious mind that it is ready for me to consciously review its work. I don’t consciously know how many total hours my unconscious mind has spent working in the background on this 12th edition, but my guess is that it’s more than half of my conscious work hours, which would mean more than 1,650 hours of background work by my unconscious mind while I was not consciously working on this 12th edition.

For the 12th edition of this book, which will be free on the internet, I have changed the title of this book from The Computer Inside You to its new title, A Soliton and its owned Bions (Awareness and Mind). And I’ve added a subtitle: These Intelligent Particles are how we Survive Death. My reason for changing the book title is that The Computer Inside You is simply too ambiguous regarding what this book is about. Also, I wanted this 12th edition’s title and subtitle to focus on something that is important to many potential readers, which is answering these two questions about oneself: What am I? and What happens to me after my physical body dies?

Perhaps I should mention that I expect this 12th edition to be the last edition of this book, and I have been working on this book these last few years with that goal in mind. Of course, there will always be more that can be written on this subject of a computed reality, including more algorithms, more questions, and more answers to those questions. And, of course, more can be written on the subject of ourselves, within the context of a computed reality. However, I am getting old and somewhat worn out (it is July 2017 and I am 61 years old as I write this paragraph), and I leave all this additional work to others. It is an ambitious hope, but I do hope that this 12th edition will help many intelligent people to improve their knowledge and understanding of the reality we all live in.


Bibliography

Note: The above references are for those books that I specifically mention and/or quote from. The above references do not include the books that I used when writing some of the various descriptive parts in this book, such as the descriptions of cell division, brain structure, neurons, and other descriptions of biological structures and processes. When writing these descriptive parts, which I have written in my own words, I typically drew from multiple sources so as to have confidence that the information I was using was correct and widely accepted as factual.


END OF BOOK