Not a Simulation?

February 23, 2019

An unusual blog post this month - something weird and philosophical perhaps.

OK, truth is I can sometimes be a total brat and enjoy messing with people, particularly folks who can be a bit pompous. For example, I remember half listening to a college student talking about whether we live in a computer simulation. He was saying that it would be easy to disprove such a notion, because it is unlikely that the simulation would be error free, and some 'bug', 'glitch' or other 'limitation' would give it away as a simulation.

I didn't think much of it at the time, but after a while it did get me thinking, so in hindsight, this is how I would have messed with him...

First off, I would challenge the notion that any sort of "error" would be recognizable as such. We might well observe such an "error", but how would we (as part of said simulation) interpret the effect of such a "bug"? I'll come to this one later, so stick around, cos that one is gonna blow your mind, believe me.

OK, I do have to state for the record that I do not believe all this is some incredibly advanced, powerful version of "The Sims" being played on some alien kid's computer. I believe that what we see and experience and think of as reality is the real universe… But it is always fun to think "what if..?"

So let's talk about limitations first and how they might "manifest" in our interpretation of reality within a vast computer. Let us assume that history as we know it did occur in the simulation, and that the simulation didn't just start pre-populated with humans, with pre-history as a created backstory. But hang on: what if there was a start point when the simulation was switched on, that start was 6,000 years ago, and all of Earth's earlier history was part of a preloaded setting? That would make the creationists correct.

So anyway: a big simulator that has been running a history of Earth from its formation to now, and at the current moment is modelling over seven billion humans, and not just that, modelling everything on the Earth, at the very least. Before you object about the size of the model, I'm deliberately going for gigantic computing power, to help make the point that there can never be enough computing power to run a simulation accurately enough to be problem free.

Problem #1: Limits to Resolution.

Look around you right now: if this is a computer simulation, then you are interpreting a 3D model updated in real time.

We know that, in our interpretation of reality, the screen you are reading this on is made up of atoms. Now, I'm not going to get into the nature of atoms within a simulation; after all, the screen could be described within the simulator by a wireframe model, which is far better than trying to simulate every atom. But a simulation would require every described object within it to have one extra quality to make the model work – coordinates. Among the data that describes your screen within the simulator will be its coordinates.
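Just to make that concrete, here's a minimal sketch (in Python, with names and fields entirely of my own invention) of how a simulator might describe an object: a reference to a model, plus those all-important coordinates.

```python
from dataclasses import dataclass

# A toy description of an object inside the simulator: not atom by atom, but a
# named model plus the one extra quality every object needs -- coordinates.
# All field names here are invented purely for illustration.
@dataclass
class SimObject:
    name: str
    model: str          # e.g. a reference to a wireframe mesh
    position: tuple     # (x, y, z) coordinates within the simulated world

screen = SimObject(name="screen", model="wireframe_monitor_v2", position=(1.25, 0.80, 2.10))
print(screen)
```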

No matter how big the computer running the simulation is, it will be limited in the degree of detail it can distinguish. Currently 64-bit is the largest mainstream architecture, with a few 128-bit and 256-bit designs in private research, so let's go big and assume our simulation runs on a 65,536-bit machine (that's 2^16 bits), sitting on that alien kid's desk no doubt. But even that computer (probably soon to be replaced by a better model), with all of its processing power, will have a limit to how accurately it can record an object's position.

There will be a scale at which there is a smallest distance that can separate two distinct coordinates within the simulation. At this scale a moving object would appear to 'jump' instantaneously to a nearby point without seemingly passing through the space between the two points. Basically: what is the resolution of the simulated universe?
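You can see the same effect on an ordinary computer today. A minimal sketch, assuming the simulator stored coordinates as everyday 64-bit floats (which our hypothetical 65,536-bit machine obviously wouldn't, but the principle scales):

```python
import math

# Near any stored position there is a smallest representable step, one
# "unit in the last place" (ULP); anything smaller simply cannot be stored.
position = 1.0                            # a coordinate, in metres say
smallest_step = math.ulp(position)        # roughly 2.2e-16 at this scale
print(f"smallest representable step near 1.0: {smallest_step:.3e}")

# A movement smaller than that vanishes entirely: the object either stays
# put or "jumps" by a whole step -- the resolution of this toy universe.
tiny_move = smallest_step / 10
print(position + tiny_move == position)   # True: the tiny move never happened
```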

Any form of simulation, whether 'turn' based or 'algorithm' based (i.e. "The Sims" versus a mathematical model of the height of sea tides), requires time as a component. For our 'turn' based simulation we live in slices of time, like old-fashioned stop-motion animation: there is a snapshot of time, then the simulation is updated, then another snapshot, and so on. For 'algorithm' based, I recall reading once that the orbits of the planets are so well modelled for the next 10 million years (assuming that nothing unexpected or unknown perturbs them) that if the measurement of Pluto's position were off by more than about 100 miles, then after just 1 million years that error would grow to the point where Pluto was on the other side of its long orbit. A bit long winded I know, but it is one of those facts that has stuck in my head for years.

Both kinds of simulation require time to be defined in units; within a simulation there has to be a smallest possible unit of time, the time between each 'turn' or between each call to the 'algorithm' that updates Pluto's position.
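A minimal sketch of what I mean by a 'turn' based update, with a made-up timestep DT; in this toy world nothing can happen between two ticks:

```python
# A toy 'turn' based update loop, purely illustrative: time only exists in
# discrete ticks of size DT, so nothing can happen "between" two ticks.
DT = 1.0e-3                      # the smallest unit of time in this toy world (seconds)

def update(state, dt):
    # advance a single object by simple straight-line motion
    state["x"] += state["vx"] * dt
    return state

state = {"x": 0.0, "vx": 2.0}    # position (m) and velocity (m/s)
for tick in range(5):
    state = update(state, DT)
    print(f"tick {tick}: t = {(tick + 1) * DT:.3f} s, x = {state['x']:.4f} m")
```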

But reality would not work like that; no computer or simulation could 'tick' so fast that even the fastest update would be truly seamless and instantaneous, or be able to resolve detail on a scale all the way down to infinity…

…except.

There are these things called "Planck units", among which are the "Planck Length", "Planck Time", "Planck Temperature" and a few other delights: https://en.wikipedia.org/wiki/Planck_units. The "Planck Length" is the shortest distance between two points at which the Standard Model can still be reconciled with General Relativity; smaller than that and all bets are off and quantum weirdness rules. The Planck Length is far smaller than an electron – in fact (quoting a Quora answer) "If you could stretch an atom to a size of our galaxy, the plank length will be the size of an atom. If you could stretch a football field to a size of our observable universe, the plank length would be the size of an atom."

"Planck Time" is the time it takes light to travel one "Planck Length", and it is therefore the shortest length of time that can be defined in the Standard Model. In fact, in cosmology the Planck epoch is described as the very first moment following the Big Bang, when the universe was one "Planck Time" old.
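For anyone who wants the actual numbers, the standard definitions of the Planck length and Planck time can be worked out from the usual physical constants; this little sketch just does the arithmetic:

```python
import math

# Standard SI constants; the formulas below are the textbook definitions of
# the Planck length and Planck time, the code just does the arithmetic.
h_bar = 1.054571817e-34      # reduced Planck constant, J*s
G     = 6.67430e-11          # gravitational constant, m^3 kg^-1 s^-2
c     = 2.99792458e8         # speed of light, m/s

planck_length = math.sqrt(h_bar * G / c**3)   # ~1.616e-35 m
planck_time   = planck_length / c             # ~5.39e-44 s

print(f"Planck length: {planck_length:.3e} m")
print(f"Planck time:   {planck_time:.3e} s")
```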

So, in a way, that means reality does not actually have infinite resolution, and the "Planck Length" could be thought of as the size of the pixels, or the absolute limit on the precision of an object's coordinates.

Now would be a good time to go lie down or get a really good drink…

Problem #2: An Error or Glitch in the simulation.

Back to the original point (I said it would be worth hanging around for): the claim that any sort of error, whether a bug in the simulation code or some other "glitch", would be obvious and give the game away that it is all a simulation. An example is a recent Doctor Who episode where the inhabitants realized they were simulations, one of the proofs being that when asked to say a random number aloud, they all said the same number.

Seriously, that is one sloppy, Friday-afternoon, a-few-too-many-lunchtime-pints-down-the-pub piece of shit coding. It would have been spotted in unit testing. OK, so how would it have been fixed? Easy.

The first solution: nearly all random number generators require a "seed" value, from which an algorithm generates a sequence of numbers that look random. The 'random' result is then fed back in as the seed for the next random number to be requested. So add something to the algorithm that can be mixed in along with the seed; for example, when a request for a random number is made, the algorithm accesses the simulation clock (which is measured in "Planck units"). There is no way a group of simulated 'people' can all 'think' of a random number using the same seed at the same "Planck moment". Some simulated people will take a few seconds to respond (how many Planck times would that be?), a few will anticipate the need and request a number too early, so already we have a huge spread of inputs into the random number generator, and different numbers will be shouted out.
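A minimal sketch of that first fix, with made-up names (person_id, planck_tick) purely to show the idea of mixing the simulation clock into the seed:

```python
import hashlib
import random

# Hypothetical fix in miniature: mix the simulation clock (in Planck ticks)
# into the seed, so two 'people' can only collide if they ask for a number
# at literally the same Planck moment. person_id and planck_tick are
# invented parameters, just for illustration.
def random_for(person_id: int, planck_tick: int) -> int:
    seed_material = f"{person_id}:{planck_tick}".encode()
    seed = int.from_bytes(hashlib.sha256(seed_material).digest()[:8], "big")
    return random.Random(seed).randint(1, 100)

# Three people asking at three slightly different moments get different numbers.
print(random_for(1, 1_000_000), random_for(2, 1_000_003), random_for(3, 1_000_042))
```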

The second solution: suppose that during one of the simulation's 'turn' updates the simulator needs to provide everyone in a group with a random number. Instead of each simulated person having their own random number generating routine, during the turn update the simulator would poll all the simulated objects for those that require a random number and add those requests to a queue. They are all processed in turn (remember, it is a fast 65,536-bit computer), and each result is looped back in as the seed for the next random number, all within a single 'tick'. A single random number generating routine, accessed (via a convenient API) by every element within the simulation that needs anything to be "random", would actually be a much more efficient programming solution and would reinforce the pseudo-randomness of the numbers.
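And a minimal sketch of the second fix: one shared generator behind a queue, processed once per tick. The class and method names are my own, purely illustrative:

```python
import random

# One shared generator behind a queue, polled once per simulation tick.
# The generator's internal state carries over from one request to the next,
# so consecutive requests in the same tick never repeat just because they
# arrived "at the same time".
class SharedRandomService:
    def __init__(self, seed: int):
        self._rng = random.Random(seed)
        self._queue = []

    def request(self, requester_id: str):
        # called by simulated objects during the tick
        self._queue.append(requester_id)

    def process_tick(self) -> dict:
        # answer every queued request in turn, then clear the queue
        results = {rid: self._rng.randint(1, 100) for rid in self._queue}
        self._queue.clear()
        return results

service = SharedRandomService(seed=42)
for person in ("alice", "bob", "carol"):
    service.request(person)
print(service.process_tick())     # three independent draws from one generator
```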

Problems like that would be caught and fixed during unit testing.

What I think would happen if there were such a 'glitch' is that it would not be an obvious smoking gun pointing to a computer simulation; instead it would be interpreted as some sort of phenomenon that defies explanation.

Remember we were discussing algorithms, and how a slight error could compound over time, as in the case of Pluto's orbit. When I did my university internship, one of my projects was to port a modelling system from Fortran-6 to Fortran-77+. Part of the port basically involved cleaning up some of the sloppy F6 coding so that it would compile for F77+ with strict checking enforced for best optimization. Once the port was done (and the code changes also put back into the F6 code stream, so as to maintain one code base that could be compiled both ways), the results had to be compared and any differences in output resolved. Both compiled versions of the code ran on the same mainframe, but what I observed was that the two compilers treated and stored floating point numbers differently, resulting in small differences that were only apparent for very small numbers with many digits after the decimal point. Basically there was a rounding error. Exact same code, exact same computer hardware, just a different compiler.
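You can reproduce the flavour of this in any modern language; here's a tiny sketch of rounding error accumulating, using the classic fact that 0.1 has no exact binary representation:

```python
# The same quantity computed two ways: adding 0.1 a million times versus
# multiplying once. In exact arithmetic they are identical; in binary
# floating point the rounding error of each addition accumulates.
total = 0.0
for _ in range(1_000_000):
    total += 0.1                      # every addition rounds, and it compounds

exact = 0.1 * 1_000_000               # a single operation, so a single rounding
print(total, exact)
print(f"accumulated drift: {abs(total - exact):.3e}")   # tiny, but not zero
```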

Now imagine a slight rounding error in the algorithm that determines the distance between two points defined as coordinates in the simulation. The rounding error could be so small that the calculated distance from your eye to this text is off by the width of a few atoms – seriously, not much for us to be bothered about, right? But as we enlarge the scale, errors like these can easily compound, and suddenly, when we measure the distance from the Earth to the Sun, that rounding error could mean plus-or-minus several miles or kilometres in the measured distance. By the time we look towards some of the galaxies in the Local Group, the error in the simulation's distance-calculating algorithm could be on the order of plus-or-minus several light-years.

And as we look at the distant universe, at objects 12 billion light-years away, using Type Ia supernovae to determine distance, we could potentially see this inflated rounding error, but how would we interpret it? When we look at those far objects we measure the red-shift, which is caused by the expansion of the fabric of space: the further away an object is, the greater its red-shift, the faster it is moving away from us, and that speed is proportional to its distance from us. So what we would observe, if there were a bug in the simulation, is that when we measure the red-shift of a far away object we can deduce just how far away it is, but when we use Type Ia supernovae to determine its distance directly, we get a different value. Would theoretical physicists and astrophysicists at this point throw up their hands and exclaim that they have found conclusive proof that it is all a simulation? Or would they assume that what they are observing is correct, conclude that there is something wrong with their model of the universe, and work out how to fix the model?

Don’t believe me?

For those objects 12 billion light-years away, when we look at the light from their supernovae, the light suggests that the objects are further away from us than the red-shift allows for. So what could be a bug in the simulation has instead led astrophysicists to conclude that the expansion of the universe is accelerating. To help explain this we have the concept of dark energy, the mysterious energy pushing the expansion, and the prediction that in time the galaxies will fly apart from each other because of the accelerating expansion. As its influence grows it will stretch the space between planets and their stars, and eventually atoms themselves will be ripped apart as the fabric of space expands ever faster.

Until finally the scale on which dark energy is able to act reaches down to the Planck Length, at which point the fabric of the universe itself is ripped apart. Certainly a very different end-of-the-universe scenario from expanding forever into blackness or crunching back down to nothing.

So (in this what-if) the whole scenario of the universe being torn apart by accelerating expansion driven by dark energy could have come about because of an observable inconsistency resulting from a normally insignificant rounding error, and our attempts to reconcile its effects, when there may be no such thing as dark energy and no accelerating expansion at all. The simulated universe could be behaving exactly as it should according to Newtonian physics and the Standard Model as they stood before these new observations. What if the elaborate theories about dark energy and the rest are an attempt to rationalize observations of a bug in the simulation?

Problem #3: The inevitability of duplication.

Want more? Oh yeah, you bet…

Let's go back to random numbers for a moment, shall we? Let's assume that the simulator has a routine, "create person", whose inputs are basic parameters such as location, ethnicity etc. plus a random number from our random number generator, so that we can create a population that looks diverse. Now, it is always possible to enter different values and get similar results – that could be why some people have near-identical 'doubles'; that's just the nature of random numbers.

Let's assume that our "create person" routine, when provided with its parameters, creates the full person: name, profession, appearance and so on. It should then be extremely unlikely that, in a real universe such as ours, "create person" would ever produce two identical people (excluding twins here, since they would be just one crank of the "create person" procedure).

Now, the thing about random number generators, especially if we 'fixed' ours as previously described, is that for the output to look truly random there must always be a chance that the generator produces the same value twice. I mean, when playing roulette it is possible for the ball to land on the exact same number as the previous spin. So let's assume our random generator can spit out a repeat. If we call "create person" with the same basic input values to create two individuals, and unbeknownst to us the generator also provides the same random value, then those two people, though totally unrelated, could potentially be identical.
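Here's a toy "create person" in miniature (the field names and the little lists are entirely made up) just to show how the same inputs plus a repeated random draw give you two identical yet unrelated people:

```python
import random

# A toy "create person" routine, entirely invented for illustration: the same
# base parameters plus the same random draw yield the same person, however
# unlikely that repeated draw may be.
FIRST_NAMES = ["Brady", "Alex", "Sam", "Jordan"]
HAIR = ["red", "brown", "black", "blond"]

def create_person(location: str, random_value: int) -> dict:
    rng = random.Random(random_value)
    return {
        "location": location,
        "name": rng.choice(FIRST_NAMES),
        "hair": rng.choice(HAIR),
        "height_cm": rng.randint(165, 200),
    }

a = create_person("Texas", random_value=12345)
b = create_person("Texas", random_value=12345)   # the generator repeated itself
print(a == b)                                    # True: unrelated yet identical
```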

Yeah right, I think that would be stretching it too much…

…except.

Literally as I was writing this blog, I found this article about two baseball players with exactly the same name. They go to the same doctor, both are red-haired (only about 2% of the Caucasian population, BTW), they look the same, and they are even the same height, yet a DNA test shows that they are totally unrelated (though presumably not the same birthday, otherwise I think the article would have mentioned it): https://www.huffingtonpost.com.au/2019/02/20/2-baseball-players-named-brady-feigl-take-dna-tests-to-see-if-theyre-related_a_23674425/ . OK, so when I read that article, it actually made me pause…

 

So while I do not believe we live in a computer simulation, anyone who says that it can be proved that we do not live in one because “we could tell” has not thought it through.

How would we tell exactly?

So, when did the simulation start? Was it 6,000 years ago, perhaps?

Do the Planck units represent the level of detail that the simulation is capable of?

And are some of the unexplained observations of our universe simply how we, inside the simulation, interpret bugs or glitches within the system?

And do we know why the Pioneer probes appeared to be subject to a tiny unexplained acceleration as they left the solar system?

But I hope I've given you pause for thought, and remember – it's Happy Hour somewhere.

[ADDENDUM]

I love reading Quora, and shortly after I posted this there was a very simple question which on the face of it seemed innocent, but turned out to be a whole lot more. Someone asked: if 0 kelvin is the absolute lowest possible temperature, with no energy, is there an absolute highest possible temperature? My first thought was "no, you just add more energy!", a bit like asking "what is the largest possible number?" – whatever answer someone gives, you just add 1. But I would be wrong. Heat is governed by the electromagnetic force, and the hotter an object is, the more energetic (i.e. shorter in wavelength) the light that the object emits. AND the shortest possible wavelength is 1 Planck Length, so light with a wavelength of 1 Planck Length would be emitted by an object at around 10^44 kelvin – anything hotter would mean a shorter wavelength, which as we know is not possible, so that defines a maximum possible temperature that can ever occur in the universe. BTW, for comparison, the Big Bang is thought to have been about 10^25 (ish) kelvin, so even the Big Bang was not as hot (or explosive) as the potential maximum – WOW! This also means that temperature is "granular", just like time and distance, because the minimum difference between two distinct temperatures would be where their respective electromagnetic wavelengths differed by just 1 Planck Length.

I also found a slightly better description of "Planck Time", one that really plays with the mind. Not only is it the shortest possible unit of time, it is the shortest time that can separate two events. Any shorter than that and we are playing in the realm of quantum physics (time itself becomes quantum), where it is no longer possible to determine which event occurred first – effect could come before cause – or not! But whenever we talk of a "split second", we really do mean a "Planck Time", and the "Planck epoch" following the Big Bang was the first such moment after the Big Bang event…

[ADDENDUM to the ADDENDUM]

Anyone who knows me knows that sometimes (especially when cold meds are involved, trying to overcome the inevitable dose of something nasty caught flying back to the States from Europe) my brain can go down a rabbit-hole in a recursive way, and verily it did… BIG TIME!!!

An interesting sci-fi story idea: a scientist manages to create the conditions of 10^44 K and attempts to push the temperature even higher, inadvertently creating a burst of light with a wavelength in the quantum realm, and we explore the impact that would have on reality. That was Thought 1.

Thought 2. Naturally this is impossible, because all the matter and energy in the universe was crunched together in a singularity at the moment of the Big Bang, and all the energy in the universe only achieved a temperature of 10^25 K, so in that respect we are safe.

Thought 3. Or rather, it is the total energy of the observable universe that, when crunched together into a singularity, would give a temperature of 10^25 K. The current theory is that shortly after the Big Bang (several Planck 'ticks' later, I assume) there was the inflationary period, in which the fabric of space expanded faster than the speed of light, thus creating the observable universe within a much larger universe, the majority of which lies beyond the visible horizon.

Thought 4. BTW, it was at this point, while flying over Greenland, that the stuffy head, the cold meds and the complimentary G&Ts provided by the nice BA flight crew really kicked into action.

If just our observable slice of the universe accounted for the energy behind the 10^25 K Big Bang temperature, and this was prior to the inflationary period, then the part of the universe now beyond the observable horizon would also have been part of that momentary fireball; but we can no longer see its contribution to those early conditions or to the cosmic microwave background radiation, only the CMBR created within the observable universe. There is plenty more of it out there, but it is beyond our observable horizon. Which means that there was much more of it (and therefore more energy) than would be needed to create the 10^25 K fireball that the observable universe represents! All because of inflation.

Thought 5. Really on a roll now (yes, I'll have another G&T <sniff>). Now, if we go back to our maximum temperature of 10^44 K – if that really is the absolute hottest possible temperature, then the Big Bang could not have been any hotter, and suddenly we get a possible maximum size for all the universe beyond what we see, which would be up to 10^19 times bigger than the observable universe.
 

At which point my head exploded…

 

 

