Some Evidence Against The Theory

One of the presumptions behind the theory that we are living in a simulation is that there are limits to what gets simulated. For example, if “our” universe is infinite but we can’t see it all, is there any point rendering places we could never visit or observe?

Closer to home, is there any point rendering the center of the planet in precise detail if we will never achieve anything better than guesses and simulations as to what is down there?

Recently, living organisms have been discovered deep in the Earth’s crust, as deep as 3,600 m below the surface (in a South African gold mine; see New Scientist, 27 April 2013, page 37).

“The discovery floored me,” says Tullis Onstott, a geologist at Princeton University, who discovered these nematode worms swimming in the water-filled fissures of the Beatrix gold mine in 2011.

The fact is, complex organisms just shouldn’t be able to live so far beneath the Earth’s surface. The nourishment and oxygen that animals need to survive are in short supply just tens of metres below ground, let alone 1.3 kilometres down. Noting that the worms shunned light like a mythical devil, Onstott’s team named them Halicephalobus mephisto, after Mephistopheles, the personal demon of Dr Faustus.

Travelling even deeper into South Africa’s crust, they found more surprises. On a trek down into TauTona, the country’s deepest gold mine, they came across another species of nematode worm at 3.6 kilometres below ground – making it the deepest land animal found to date.

The point is, if we would happily accept that life cannot exist down there, why render it in the simulation for us to find? It would take a lot of processing power for no real gain.

Of course, we don’t know if our makers even have limits on processing power…

The Logic Behind The Non-Reality

The short answer is this:

If humanity ever progresses to the point where it can create simulations of Earth (and everything on it) so good that they are hard to tell apart from the real world… then presumably many such simulations will be made, and the odds of us being in the one real world are slim: with N indistinguishable simulated worlds and a single real one, an observer’s chance of being in the real world is roughly 1 in N + 1.

Here’s a more elegant way of putting it, from Nick Bostrom:

At least one of the following three propositions must be true:

1. Almost all civilisations at our level of development become extinct before becoming technologically mature.

2. The fraction of technologically mature civilisations that are interested in creating ancestor simulations is almost zero.

3. You are almost certainly living in a computer simulation.
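
To see how these propositions trade off quantitatively, here is a minimal sketch of the arithmetic behind the trilemma, assuming Bostrom’s fraction-of-simulated-observers formula; the variable names are mine:

```python
# Minimal sketch of the arithmetic behind Bostrom's trilemma.
# f_p: fraction of civilisations at our level that reach technological
#      maturity AND choose to run ancestor simulations (propositions 1 and 2)
# n_sim: average number of ancestor simulations such a civilisation runs
# Each simulation is assumed to host about as many observers as one real history.

def fraction_simulated(f_p: float, n_sim: float) -> float:
    """Fraction of all observers with experiences like ours who are simulated."""
    return (f_p * n_sim) / (f_p * n_sim + 1)

# If even 1% of civilisations mature and each runs 1,000 simulations,
# almost every observer like us is simulated (proposition 3):
print(fraction_simulated(0.01, 1_000))      # ~0.909
print(fraction_simulated(1.0, 1_000_000))   # ~0.999999
print(fraction_simulated(0.0, 1_000))       # 0.0 -> proposition 1 or 2 holds
```

Unless f_p or n_sim is driven almost to zero (propositions 1 and 2), the fraction lands almost at one (proposition 3).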

Where Are The Glitches?

Pretty much nothing made by man is perfect. Sure, often near enough to perfect that you probably can’t tell the difference – but not perfect.

Engineers marvel at how close to perfect the base of the Great Pyramid is – more accurate than the tolerances we allow in modern construction. But it still isn’t perfect.

I could go on… but what I am trying to say is that in a simulation as massive and complex as the one we live in, there are bound to be mistakes, glitches, errors and even Easter eggs.

By Easter eggs, I mean hidden messages like the ones computer game programmers regularly place in software, usually to credit the authors and thank people. They are meant to be found, but only by a few people. Where would such a message be on Earth? Under miles of ice in Greenland? At the bottom of the Mariana Trench?

Now there might be glitches that only the simulation designers would know about. For example, what if zebras weren’t meant to have stripes? Or what if rainbows were supposed to have eight colors instead of seven?

But seriously, our exploration of our planet, of nanoscopic objects, and of the visible universe, combined with our scientific knowledge – verging on cracking the theory of everything – suggests that if there is a glitch to be found, we have the capability to find it and recognize it.

John D. Barrow suggests that rather than using up processor power to actually generate the laws of nature, a simulation might be easier to achieve if the laws of nature were just faked.

Firstly, the simulators will have been tempted to avoid the complexity of using a consistent set of laws of nature in their worlds when they can simply patch in ‘realistic’ effects. When the Disney company makes a film that features the reflection of light from the surface of a lake, it does not use the laws of quantum electrodynamics and optics to compute the light scattering. That would require a stupendous amount of computing power and detail. Instead, the simulation of the light scattering is replaced by plausible rules of thumb that are much briefer than the real thing but give a realistic looking result – as long as no one looks too closely. There would be an economic and practical imperative for simulated realities to stay that way if they were purely for entertainment. But such limitations to the complexity of the simulation’s programming would presumably cause occasional tell-tale problems; perhaps they would even be visible from within. Even if the simulators were scrupulous about simulating the laws of nature, there would be limits to what they could do. Assuming the simulators, or at least the early generations of them, have a very advanced knowledge of the laws of nature, it is likely they would still have incomplete knowledge of them (some philosophers of science would argue this must always be the case). They may know a lot about the physics and programming needed to simulate a universe, but there will be gaps or, worse still, errors in their knowledge of the laws of nature. These would, of course, be subtle and far from obvious; otherwise our ‘advanced’ civilization would not be advanced. These lacunae would not prevent simulations being created and running smoothly for long periods of time. But gradually the little flaws would begin to build up.
Eventually, their effects would snowball, and these realities would cease to compute. The only escape is if their creators intervene to patch up the problems one by one as they arise. This is a solution that will be very familiar to the owner of any home computer who receives regular updates.
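
To make Barrow’s Disney example concrete, here is a minimal sketch of the kind of shortcut he describes (the choice of formulas is mine; Barrow doesn’t name any): the exact Fresnel equations govern how much light a water surface reflects, while renderers often substitute Schlick’s approximation, a cheap rule of thumb that looks right until someone inspects the grazing angles closely.

```python
import math

def fresnel_reflectance(cos_i: float, n1: float = 1.0, n2: float = 1.33) -> float:
    """Exact Fresnel reflectance for unpolarized light (the 'real physics')."""
    sin_t = (n1 / n2) * math.sqrt(max(0.0, 1.0 - cos_i**2))
    if sin_t >= 1.0:          # total internal reflection
        return 1.0
    cos_t = math.sqrt(1.0 - sin_t**2)
    r_s = ((n1 * cos_i - n2 * cos_t) / (n1 * cos_i + n2 * cos_t)) ** 2
    r_p = ((n1 * cos_t - n2 * cos_i) / (n1 * cos_t + n2 * cos_i)) ** 2
    return 0.5 * (r_s + r_p)

def schlick_reflectance(cos_i: float, n1: float = 1.0, n2: float = 1.33) -> float:
    """Schlick's approximation: the cheap 'rule of thumb' a renderer uses."""
    r0 = ((n1 - n2) / (n1 + n2)) ** 2
    return r0 + (1.0 - r0) * (1.0 - cos_i) ** 5

# At a glance the fake tracks the real thing; look closely and they diverge.
for angle_deg in (0, 30, 60, 85):
    c = math.cos(math.radians(angle_deg))
    print(angle_deg, round(fresnel_reflectance(c), 4), round(schlick_reflectance(c), 4))
```

At normal incidence the two agree to four decimal places; near grazing incidence they drift apart by several percentage points – exactly the kind of flaw that only shows up when somebody looks too closely.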

If the creators can patch up our world whenever something goes wrong, presumably they could also erase our memories so that we never knew the glitch occurred in the first place.

So either the simulation will start falling apart at the seams (making it apparent to all of us), or the simulation crashes and we cease to exist, or it is a perfect simulation and there’s no need to ever fix it. None of those are actionable.

What about mysteries? Crop circles, ghosts, UFOs, Bigfoot and so on. Could they be clues? Could they be glitches? Anything that science says doesn’t exist, but people think it does, could be the glitch we are looking for. Perhaps OOPARTs – Out-of-place Artifacts?

Underlying Lattice Of Our Reality

In an attempt to find a universal theory of everything, scientists look for underlying code and mathematical constants that can define our universe. But if we find such a simple answer to how our environment is constructed, then it suggests that the universe is designed, implying creation. Some might take that to mean divine creation, while others like myself see it as evidence for our world being part of a simulation.

Here’s the abstract for Constraints on the Universe as a Numerical Simulation:

Observable consequences of the hypothesis that the observed universe is a numerical simulation performed on a cubic space-time lattice or grid are explored. The simulation scenario is first motivated by extrapolating current trends in computational resource requirements for lattice QCD into the future. Using the historical development of lattice gauge theory technology as a guide, we assume that our universe is an early numerical simulation with unimproved Wilson fermion discretization and investigate potentially-observable consequences. Among the observables that are considered are the muon g-2 and the current differences between determinations of alpha, but the most stringent bound on the inverse lattice spacing of the universe, b^(-1) >~ 10^(11) GeV, is derived from the high-energy cut off of the cosmic ray spectrum. The numerical simulation scenario could reveal itself in the distributions of the highest energy cosmic rays exhibiting a degree of rotational symmetry breaking that reflects the structure of the underlying lattice.
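
To get a feel for the scale of that bound, we can convert the inverse lattice spacing into a length using the standard natural-units relation ħc ≈ 0.1973 GeV·fm (a back-of-envelope sketch; the variable names are mine):

```python
# Back-of-envelope: convert the paper's bound b^(-1) >~ 10^11 GeV into a length.
HBAR_C_GEV_FM = 0.1973         # hbar*c in GeV*femtometres (standard value)
FM_IN_M = 1e-15

inverse_spacing_gev = 1e11     # the paper's lower bound on b^(-1)
b_fm = HBAR_C_GEV_FM / inverse_spacing_gev   # lattice spacing in fm
b_m = b_fm * FM_IN_M

print(f"lattice spacing b <~ {b_fm:.2e} fm = {b_m:.2e} m")
# -> about 2e-12 fm, i.e. ~2e-27 m
```

In other words, if the universe is pixelated, the pixels are at most about 2 × 10^(-27) m across – roughly twelve orders of magnitude smaller than a proton – which is why nobody has tripped over them yet.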

The authors point out that existing simulations already use such a lattice in a computer environment, so current work is already a building block in the direction of a full simulation:

With the current developments in HPC and in algorithms it is now possible to simulate Quantum Chromodynamics (QCD), the fundamental force in nature that gives rise to the strong nuclear force among protons and neutrons, and to nuclei and their interactions. These simulations are currently performed in femto-sized universes where the space-time continuum is replaced by a lattice, whose spatial and temporal sizes are of the order of several femto-meters or fermis (1 fm = 10^(-15) m), and whose lattice spacings (discretization or pixelation) are fractions of fermis. This endeavor, generically referred to as lattice gauge theory, or more specifically lattice QCD, is currently leading to new insights into the nature of matter.
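
As a toy illustration of what replacing the continuum with a lattice means (my example, not the paper’s): on a grid of spacing a, derivatives become finite differences, and the discretization error depends on a – the same kind of spacing-dependent fingerprint the paper proposes hunting for in cosmic rays.

```python
import math

# Toy illustration (mine, not the paper's): a lattice replaces continuous
# space with grid points of spacing a, and derivatives with finite differences.

def lattice_derivative(f, x: float, a: float) -> float:
    """Central finite difference: the lattice stand-in for df/dx."""
    return (f(x + a) - f(x - a)) / (2 * a)

exact = math.cos(1.0)  # d/dx sin(x) at x = 1
for a in (0.5, 0.1, 0.01):
    approx = lattice_derivative(math.sin, 1.0, a)
    print(f"a = {a:<5} error = {abs(approx - exact):.2e}")
# The error falls as a**2: a finer lattice fakes the continuum better,
# but any finite spacing leaves a detectable fingerprint.
```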

So they have created a good theory: in the future, this specific lattice concept could be a feature of simulations, which means there might be tests that prove we are in one. The math is very complex, but for now all we need to know is that we don’t yet have the technology to perform those tests (although, presumably, one day we will…):

The spectrum of the highest energy cosmic rays provides the most stringent constraint that we have found on the lattice spacing of a universe simulation, but precision measurements, particularly the muon g-2, are within a few orders of magnitude of being sensitive to the chiral symmetry breaking aspects of a simulation employing the unimproved Wilson lattice action.