The Holographic Principle

Heroic & Dark Fantasy and Science Fiction Character created by Kevin L. O'Brien

There are many different ideas for how to explain the nature of the universe, but perhaps one of the strangest is the Holographic Principle. This essay explains what the principle is and how it could prove fruitful for world building. To properly understand the principle, however, we have to begin by understanding entropy.

Most people are familiar with the concept of entropy, but what they know tends to be incomplete or even inaccurate. Entropy is loosely defined as the amount of disorder in a system, and while this is true at a fundamental level, it is misleading, especially in a technical context. There are three technical ways of describing entropy, and while they all involve some form of disorder, the definition of disorder is different in each case. Since a proper understanding of entropy is necessary to make sense of the holographic principle, we must digress for a number of paragraphs.

Entropy

The first way to describe entropy is by using thermodynamics. This is the science of heat flow, and it grew out of the study of heat engines during the eighteenth and nineteenth centuries (a heat engine is a device that uses heat energy to perform work). One of the first things engineers discovered was that no engine, no matter how well designed, could be made 100% efficient; that is, no engine could use 100% of its heat energy to do work. They explained this by speculating that of the total heat pumped into an engine, only a portion was useful (that is, could be used to do work), with the rest being waste heat that simply saturated the engine and then escaped into the surrounding environment. They also discovered that heat engines tended to lose efficiency over time. They explained this by speculating that the heat used to do work was converted into waste heat during the performance of work, so that over time the drop in efficiency was due to the consumption of useful heat and its conversion into waste heat. Since these engineers would define a disordered system as an engine that cannot do work, the engines gradually became more disordered as they lost efficiency.

It wasn't long before they tried to quantify this disorder. They coined the term entropy, gave it the symbol S, and defined its change as the amount of heat dissipated as waste (q) divided by the absolute temperature (T) at which that heat is dissipated. This definition was expressed mathematically as

S = q / T.

This was based on their observation that the increase in entropy is proportional to the amount of waste heat produced and inversely proportional to the temperature at which that heat is dumped into the surroundings.

This in turn led to the second law of thermodynamics, which stated that the entropy of a system must either remain the same or increase, and cannot decrease, while the system performs work. This is expressed mathematically as

Sf ≥ Si;

that is, the final entropy of a system (Sf) after performing work is equal to or greater than the initial entropy of that system (Si).

Nowadays, we express the second law somewhat differently, because we understand there are three types of systems.

  • An isolated system is one where neither matter nor energy can enter or leave;
  • a closed system is one where only energy can enter or leave;
  • and an open system is one where both may enter or leave freely.

The second law as originally stated is now considered to apply to isolated systems only, since both open and closed systems can reduce their entropy while doing work by pumping waste energy and sometimes matter into the surrounding environment. Doing so, however, causes the entropy of the environment to increase, and this increase is always at least as great as the decrease within the system.

As such, the modern expression of the second law is that the sum of the change in entropy for both the system (ΔSsys) and its surroundings (ΔSsur) must be equal to or greater than zero. Expressed mathematically, this becomes

(ΔSsys + ΔSsur) ≥ 0.

In any event, the thermodynamic concept of entropy is as a measure of the degree to which a system's energy is no longer available to do work.
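
To make this bookkeeping concrete, here is a minimal sketch in Python (the heat quantity, reservoir temperatures, and the helper name entropy_change are invented for the example): a parcel of heat leaks from a hot reservoir into a cold one, the hot side loses entropy q/T, the cold side gains a larger amount, and the total change is positive, just as the second law requires.

  # Entropy bookkeeping for a simple irreversible process: a quantity of heat q
  # leaks from a hot reservoir into a colder one.  All values are arbitrary examples.

  def entropy_change(q, T):
      """Entropy change of a reservoir that absorbs heat q (joules) at temperature T (kelvin)."""
      return q / T

  q = 1000.0       # joules of heat transferred
  T_hot = 500.0    # kelvin; the hot reservoir gives up the heat, so its q is negative
  T_cold = 300.0   # kelvin; the cold reservoir absorbs it

  dS_hot = entropy_change(-q, T_hot)    # -2.00 J/K: the hot side's entropy drops
  dS_cold = entropy_change(+q, T_cold)  # +3.33 J/K: the cold side's entropy rises

  total = dS_hot + dS_cold
  print(f"dS_hot = {dS_hot:.2f} J/K, dS_cold = {dS_cold:.2f} J/K, total = {total:.2f} J/K")
  assert total >= 0  # the second law: the combined entropy change is never negative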

The most interesting thing about thermodynamics at that time was that it assumed energy in general, and heat in particular, was a fluid, hence terms like "heat flow" and heat "saturating" an engine. At the turn of the twentieth century, atomic theory was still not widely accepted, and some very prestigious scientists, including a few Nobel Prize winners, denied the reality of atoms, based largely on empirical grounds. One scientist who did accept that atoms were real, however, was Ludwig Boltzmann (he in fact committed suicide when he feared atomic theory would be disproven). He realized that energy was not a fluid but a property of atoms, and that the appearance of energy flowing was due either to the atoms themselves moving or to their passing their energy content on to other atoms.

This allowed him to develop a new idea for entropy, one based on arrangements of atoms. He envisioned that a group of atoms, such as a gas confined within a bottle, could arrange themselves into a variety of configurations, which he called states. For him, disorder meant that it would be very difficult to describe the behavior of such a group, because the atoms could achieve a wide variety of states. Conversely, the fewer the number of states the atoms could achieve, the more ordered the group would be.

He thus defined entropy as being proportional to the log of the number of states (W). Expressed mathematically, this becomes

S = k log W,

where "k" is a constant for calculating "S". Because of this, Boltzmann is considered to be the father of modern statistical mechanics.

So, the statistical mechanical concept of entropy is as a measure of the degree of complexity of a system.
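
As a toy illustration of Boltzmann's state counting (the atom counts below are invented, and Python's math.log gives the natural logarithm his formula uses), imagine atoms that can each occupy either half of a bottle; the number of possible states is two raised to the number of atoms, and the entropy grows accordingly:

  import math

  k_B = 1.380649e-23  # Boltzmann's constant, in joules per kelvin

  def boltzmann_entropy(num_states):
      """S = k log W: entropy from the number of states a system can occupy."""
      return k_B * math.log(num_states)

  # Toy system: n atoms, each of which can sit in the left or right half of a
  # bottle, so the number of possible arrangements is W = 2**n.
  for n in (1, 10, 100):
      W = 2 ** n
      print(f"{n:3d} atoms -> W = 2^{n} states -> S = {boltzmann_entropy(W):.3e} J/K")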

The linking of disorder to complexity figures strongly in the third and final conceptualization of entropy. A scientist named Claude Shannon was trying to find a way to measure the amount of information in a message. Note that he was not trying to measure the quality of the information, which is dependent upon context, but simply its quantity. He realized early on that the information content of, for example, a specific string of letters should be exactly the same regardless of whether the placement of the letters was chosen randomly or determined by the instructions of a code.

Eventually he derived a formula that looked identical to Boltzmann's; in fact, Shannon labeled the calculated information content as the entropy of a message. What his formula did was derive a value in bits, as if the message were written in digital computer code; in other words, his formula could calculate the quantity of digital information in any message using any set of symbols.

Shannon discovered that the more complex the message, the higher its entropy. However, he saw this as a good thing, because it also showed him that the more complex a message was, the more information it contained. Hence, a complex, highly disordered message, having a large entropy, contained more information than a simple, highly ordered message with low entropy. In fact, a message composed solely of a string of A's has zero entropy, and therefore zero information content. This led to the development of modern information theory.

So, the information theory concept of entropy is as a measure of the quantity of information a system possesses.
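
A minimal sketch of Shannon's measure (the sample strings and the helper name shannon_entropy_bits are invented for the example) makes the point about the string of A's concrete: a perfectly ordered message scores zero bits per symbol, while more varied messages score higher.

  import math
  from collections import Counter

  def shannon_entropy_bits(message):
      """Shannon entropy of a message in bits per symbol: -sum of p * log2(p)."""
      counts = Counter(message)
      n = len(message)
      return -sum((c / n) * math.log2(c / n) for c in counts.values())

  print(shannon_entropy_bits("AAAAAAAAAA"))           # 0.0  -- perfectly ordered, no information
  print(shannon_entropy_bits("ABABABABAB"))           # 1.0  -- one bit per symbol
  print(shannon_entropy_bits("the quick brown fox"))  # ~3.9 -- richer alphabet, more bits per symbol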

It should be noted that, despite being based on different theories, these three conceptualizations of entropy all measure the same thing, once care has been taken to adjust the formulas for the same degrees of freedom and to render all results in bits. Whether one calculates the maximum entropy of a system thermodynamically, statistical mechanically, or informationally, the value should be exactly the same. In practice, however, the values often differ: the informational (Shannon) entropy of a system is frequently less than its thermodynamic or statistical mechanical (Boltzmann) entropy, because the latter describes the maximum possible information content of the system, whereas the former describes its actual, current information content.

For example, a silicon chip at room temperature carrying a gigabyte of data has a thermodynamic entropy of about 10^23 bits, but a Shannon entropy of only about 10^10 bits. Someday we may be able to develop a technology that will allow us to build chips that can carry enough data to push the Shannon entropy right up to the limit of the thermodynamic entropy, but we can never develop any technology that would allow us to exceed the thermodynamic limit. This has led to the coining of two special terms.

  • Information capacity is defined as the maximum amount of information that a system can possibly hold,
  • whereas information content is defined as the actual amount of information the system currently contains.

And information content can equal information capacity, but it can never exceed it.

Black Holes

What this is leading up to is that, while the second law of thermodynamics is considered to be universal, there was one object that seemed to violate it. A black hole results from a sufficiently massive star collapsing in on itself. Essentially the black hole's gravitational pull becomes so strong that nothing can escape it, not even light. The structure of a black hole is very simple:

  • a point at the center called a singularity
  • and a spherical region around it whose boundary is called the event horizon.

The event horizon is the point of no return; once it is crossed, there is no force imaginable that can allow anything to escape. There is a direct relationship between a black hole's mass and the surface area of its event horizon, such that we can precisely calculate either one by knowing the other. Also, the area of the event horizon will increase proportionally by the amount of energy or mass (which is itself a form of energy) that the black hole consumes. The exception is angular momentum; the angular momentum of an object that falls past the event horizon will increase the black hole's angular momentum, not its surface area. In this way, a black hole violates neither the law of conservation of energy nor the law of conservation of angular momentum.

However, it did seem at first that a black hole would violate the second law of thermodynamics, because the entropy of any object that fell into one was thought to disappear entirely; at least, it didn't seem to affect the dynamics or properties of the black hole in any way. A scientist named Jacob D. Bekenstein, however, recognized an analogy with entropy. It seems that during various processes, such as the merging of two black holes, the total area of the event horizons never decreases. From that he proposed that the entropy of a black hole is proportional to the area of its event horizon. Mathematically this is expressed as

Sb = A / 4,

where Sb is the entropy of the black hole and A is the area of its event horizon, measured in Planck units. In essence, he is stating that the entropy of an object swallowed by a black hole is absorbed by the black hole itself, just as its mass and energy are, and is reflected in the increase of the surface area of the event horizon. He is also stating that the increase in the entropy of the black hole must be equal to or greater than the entropy lost by the in-falling object.
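
In conventional units the same formula reads S = kc³A / (4Għ), which amounts to counting one quarter of the horizon area in units of the Planck length squared. A minimal sketch (a one-solar-mass black hole is chosen purely as an example) shows the scale involved:

  import math

  G    = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
  c    = 2.998e8     # speed of light, m/s
  hbar = 1.055e-34   # reduced Planck constant, J s
  l_p2 = hbar * G / c**3          # Planck length squared, m^2

  M = 1.989e30                    # one solar mass, kg (example value)
  r_s = 2 * G * M / c**2          # Schwarzschild radius of the event horizon, m
  A   = 4 * math.pi * r_s**2      # area of the event horizon, m^2

  S_over_k = A / (4 * l_p2)       # Bekenstein-Hawking entropy, in units of Boltzmann's constant
  S_bits   = S_over_k / math.log(2)  # the same entropy expressed in bits

  print(f"horizon radius ~ {r_s:.0f} m, horizon area ~ {A:.2e} m^2")
  print(f"entropy ~ {S_over_k:.1e} k  (~ {S_bits:.1e} bits)")

The result, on the order of 10^77 bits for a single solar mass, illustrates why a black hole is thought to hold the maximum entropy possible for an object of its size.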

One potential problem with this idea is a process called Hawking radiation. The intense gravitational field of the event horizon can create what are called virtual particles in the area of space above the horizon. These are pairs of matter and antimatter particles. They are called virtual because most of the time they simply combine and annihilate each other, but sometimes one of the particles is captured by the black hole. This transfers enough energy to the other particle to allow it to escape the gravitational pull and travel out into space. It takes energy to create a virtual particle, and this energy comes from the black hole's mass. Normally the black hole can recover this energy when the particles annihilate each other, but when one particle manages to escape, it carries away some of that mass energy.

As such, Hawking radiation causes a black hole to shrink as it loses mass. This shrinkage also causes the surface area of the event horizon to shrink, thereby reducing the black hole's entropy. However, the particles that make up the Hawking radiation carry as much entropy as the black hole loses, or more. Thus Bekenstein proposed a law similar to the modern second law of thermodynamics: the sum of the entropy changes in a black hole and the surrounding space must be equal to or greater than zero. He called this the generalized second law (GSL).
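
The essay does not quote the rate of this mass loss, but the standard Hawking temperature formula, T = ħc³ / (8πGMk), shows how feeble the effect is for astrophysical black holes and how it strengthens as a hole shrinks. A minimal sketch (the two masses are arbitrary examples):

  import math

  G, c, hbar, k_B = 6.674e-11, 2.998e8, 1.055e-34, 1.381e-23  # SI units

  def hawking_temperature(M):
      """Temperature of the Hawking radiation from a black hole of mass M (kg), in kelvin."""
      return hbar * c**3 / (8 * math.pi * G * M * k_B)

  M_sun = 1.989e30  # one solar mass, kg
  for mass in (M_sun, 1e12):  # a stellar-mass hole versus a small (10^12 kg) one
      print(f"M = {mass:.3e} kg -> Hawking temperature = {hawking_temperature(mass):.3e} K")

A solar-mass hole radiates at only a few tens of nanokelvin, so its evaporation is utterly negligible today, while a very small hole is hot enough to evaporate quickly; either way, the outgoing radiation carries at least as much entropy as the shrinking horizon gives up, which is the balance the GSL demands.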

The Generalized Second Law

This might have remained just a curious abstraction, except for its cosmological implications. The GSL allows us to estimate the bounds on the information capacity of any isolated physical system. Imagine a moon with a surface area of B. If it were to collapse in on itself and form a black hole, the surface area of the event horizon would be smaller than B, and the entropy of the black hole would therefore be less than B/4. Since according to the GSL the moon cannot lose any of its entropy as it collapses, its original entropy must also have been less than B/4. This is true for any physical system, even a region of empty space. As such, no isolated physical system can have a greater information content than that of the black hole that would result from its collapse.
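
To get a feel for how generous this bound is, here is a minimal sketch that converts B/4 into bits, using the radius of our own Moon simply as a convenient stand-in for the hypothetical moon:

  import math

  G, c, hbar = 6.674e-11, 2.998e8, 1.055e-34   # SI units
  l_p2 = hbar * G / c**3                        # Planck length squared, m^2

  R = 1.737e6                                   # radius of our own Moon, m (example body)
  B = 4 * math.pi * R**2                        # its boundary surface area, m^2

  bound = B / (4 * l_p2)                        # the GSL bound B/4, in units of Boltzmann's constant
  bound_bits = bound / math.log(2)              # the same bound expressed in bits

  print(f"surface area B ~ {B:.2e} m^2")
  print(f"information capacity bound ~ {bound_bits:.1e} bits")

The bound comes out at several times 10^82 bits, vastly more than the thermodynamic entropy of an ordinary moon-sized lump of rock, which is why everyday objects never come close to saturating it.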

What is even more fantastic, however, is that the maximum possible entropy of any isolated physical system, and thus its information capacity, depends not on its volume, as intuition would suggest, but on its boundary surface area. Imagine that we begin creating a pile of computer memory chips. As we continue, the actual data storage capacity of this collection increases with the volume, as we would expect, while the theoretical information capacity increases with the surface area. Since volume increases faster than surface area, we would expect that at some point the data storage capacity will meet and then exceed the information capacity as the pile grows. If it did, then the GSL would be wrong; if it did not, then our concepts of thermodynamic, Boltzmann, and Shannon entropy would be wrong.

But in fact, neither would happen. What would actually happen is that, long before the volume would grow large enough to pose a threat to the surface area, the mass of the accumulated chips would become great enough to cause the pile to collapse in on itself, forming a black hole whose entropy, and thus information capacity, would be proportional to its surface area. After that, each additional chip added would simply increase the mass and the surface area of the black hole, and thus its entropy and information capacity, in a manner consistent with the GSL.
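
A back-of-envelope sketch supports this (the chip volume, storage, and density below are invented round numbers, purely for illustration): the radius at which a growing pile of ordinary matter must already lie inside its own Schwarzschild radius is tens of orders of magnitude smaller than the radius at which its volume-scaling data could ever catch up with the area-scaling bound.

  import math

  G, c, hbar = 6.674e-11, 2.998e8, 1.055e-34   # SI units
  l_p2 = hbar * G / c**3                        # Planck length squared, m^2

  # Assumed, round-number chip specifications (purely illustrative):
  chip_volume  = 1e-6     # m^3, about one cubic centimetre per chip
  chip_bits    = 8e9      # bits, about a gigabyte of data per chip
  chip_density = 2000.0   # kg/m^3

  bits_per_m3 = chip_bits / chip_volume

  # Radius at which a uniform sphere of chips lies inside its own Schwarzschild
  # radius (R = 2GM/c^2 with M = (4/3) pi R^3 * density), so it must have collapsed:
  R_collapse = c * math.sqrt(3.0 / (8.0 * math.pi * G * chip_density))

  # Radius at which volume-scaling storage, (4/3) pi R^3 * bits_per_m3, would equal
  # the area-scaling capacity, pi R^2 / (l_p2 * ln 2):
  R_crossover = 3.0 / (4.0 * bits_per_m3 * l_p2 * math.log(2))

  print(f"collapse radius  ~ {R_collapse:.1e} m")
  print(f"crossover radius ~ {R_crossover:.1e} m  (never reached)")

With these numbers the pile would have collapsed by the time it was a few hundred billion metres across, while the crossover radius is larger than the observable universe by more than twenty orders of magnitude, so the GSL is never threatened.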

The Holographic Principle

The GSL, while not yet proven, has passed enough stringent, if theoretical, tests that it is generally accepted. Again, though, this might seem like an insignificant abstraction except that it suggests the holographic principle might be right. Like most scientific principles, the holographic principle is not yet proven and may be ultimately unprovable, in which case its veracity would be determined by how well it can be demonstrated and how much it explains. For example, the conservation of mass and energy has been tested a great many times, no anomalies have been detected that cannot be explained by conversion from one form to another or by invoking undetected particles, and it can be easily demonstrated in a grade school classroom, so it is generally accepted as a scientific law. Unlike most other principles, however, the holographic principle is not yet generally accepted because, while it has been demonstrated theoretically, it does not seem to explain any scientific mystery. At least not yet.

The holographic principle is derived from an analogy with holographic technology. A hologram is a two-dimensional plate or piece of film that creates a three-dimensional image when it is illuminated in the correct manner. All the information needed to create this image is contained within the plate. The holographic principle extends this idea to any system occupying a three-dimensional region. It states that another physical theory, defined only on the two-dimensional boundary of the region, can fully describe the physics of the three-dimensional region itself. In other words, what at first glance may appear to be a three-dimensional system operating by one set of physical laws within the volume of a specific region may in fact be an illusory "projection" from a two-dimensional system operating by another set of physical laws on the boundary surface of said region. More importantly, we would be unable to distinguish between a real three-dimensional region and a holographic projection from a two-dimensional surface.

If this principle is applied to our own universe, what we get is the idea that our four-dimensional universe, with three dimensions of space and one of time, could simply be a holographic projection built into the three-dimensional boundary of our universe, which still has time but now only two spatial dimensions. For this to be true, we would have to find a set of physical laws operating on a three-dimensional surface that would produce physical effects identical to those of a four-dimensional region of spacetime. As yet we do not know of any three-dimensional theory that can do this.

Another problem concerns the boundary itself. We have no conception of what a boundary to spacetime means, or how it could create a four-dimensional hologram. Indeed, if our universe is, as we suspect, infinite in size, then the idea that a boundary even exists becomes absurd. Add to that the ideas that the universe is uniformly filled with matter and radiation, and that it will expand forever, and even the GSL may fail, because under such conditions the entropy of the universe would be truly proportional to its volume. Even assuming we live in an eleven-dimensional cosmos would not "solve" the problem, because it could still be a holographic projection built into the ten-dimensional surface of said cosmos, and we would be unable to tell the difference. All it would do is increase the size of the physical law and boundary problems.

There have been numerous attempts to solve these problems, some of which are very promising in that they preserve the GSL and can even locate the surfaces on which a hologram could be set up. So the holographic principle is far from dead. In fact, as a basic concept it is flawless; the question is whether it has anything new to tell us about the nature of the cosmos. It does seem inevitable, however, that some way of explaining the GSL and the dependence of information on surface area must be found, whether it is the holographic principle or something else. What the holographic principle also seems to make inevitable is that information, not field theory, is the ultimate language of physics. If this turns out to be true, then physical processes will be best understood not in terms of interacting fields, but in terms of information exchange.

Conclusion

The implication of all this for our essay is that, while we might really be denizens of the surface boundary of our universe, there might be other beings who truly inhabit the interior volume. The idea is that, even if our universe is just a hologram created by a three-dimensional surface, that surface still bounds a four-dimensional region of spacetime. As models of theoretical universes created to test the holographic principle have shown, that region can have its own independent existence, with its own set of physical laws. (To be consistent with the holographic principle, the two universes simply have to appear equivalent, and therefore be indistinguishable by observation.) It is even possible, under the right conditions, for forces and events in the region universe to be felt in the surface universe. The forces and events seen in the surface universe will appear different from their counterparts in the region universe, but they will have equivalent effects.

What this suggests is that such beings could directly affect our universe by doing things in their own; they might even be able to alter the information of the hologram in the surface, and thereby alter the nature and characteristics of our universe. If the beings were able to visit our universe, they might be able to assume a myriad of guises: different actions, perhaps even different emotional states, may manifest themselves as different physical shapes in the surface universe. This even suggests that the beings might fulfill the role of the demiurge that created our universe (the information that forms our holographic universe being simply a reflection of their minds). That would make mankind, our universe, and everything in it not just insignificant, meaningless, and valueless, but even insubstantial, having no more physical reality than a Brahminic dream.

Sources / Further Reading

"Information in the Holographic Universe" by Jacob D. Bekenstein, Scientific American, Vol. 289, No. 2, August 2003, pp. 58-65
