Michael Kieth Adams

Could the solar system be an artifact?

Recommended Posts

1 minute ago, michael.h.f.wilkinson said:

But it will be a pseudo-orbit, not a true orbit.

But what is "a true orbit" when it comes to constructing behaviour in a world that has no firm boundaries (no precise position/velocity/energy) for its constituent parts?

In the sense of the above example, a true orbit is one starting at Z0 and ending up at ZN in the vicinity of Zx, where in our case this means that ZN and Zx are indistinguishable by measurement.

1 minute ago, vlaiv said:

But what is "a true orbit" when it comes to constructing behaviour in a world that has no firm boundaries (no precise position/velocity/energy) for its constituent parts?

In the sense of the above example, a true orbit is one starting at Z0 and ending up at ZN in the vicinity of Zx, where in our case this means that ZN and Zx are indistinguishable by measurement.

Wrong. The pseudo-orbit starts at Z0 and ends at ZN, at an epsilon distance from Zx; the true orbit starting at Z0 is unknown. All you know is that there exists some point in an epsilon-neighbourhood of Z0 that ends in an epsilon-neighbourhood of ZN, which contains Zx. When you iterate these kinds of maps at various resolutions to see where the orbits end up, e.g. to compute fractal images like those of the Mandelbrot and Julia sets, you will notice that as you zoom in and increase the number of iterations N, each pixel in the boundary region of the set gets split into smaller and smaller regions ending up in different regimes. Thus, if I pick any point in an epsilon-neighbourhood of Z0, I will not necessarily end up anywhere near the ZN I found in the first run, and therefore not anywhere near Zx.

Incidentally, how do you intend to store the phase-space state of the entire universe in a physical computer, which must itself be part of that same universe (i.e. a strict subset)? How could that computer even hold that information?
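The divergence described above is easy to demonstrate numerically. Here is a minimal sketch, using the real map x -> x^2 - 2 (which is chaotic on [-2, 2]) in place of the complex Mandelbrot iteration; the starting point 0.5 and the 1e-12 perturbation are illustrative choices:

```python
# Two orbits of the chaotic map x -> x^2 - 2, started a tiny epsilon apart,
# separate until the distance between them is of order the attractor size.
def step(x):
    return x * x - 2          # maps [-2, 2] onto itself, chaotically

x, y = 0.5, 0.5 + 1e-12       # identical up to a rounding-sized perturbation
max_sep = 0.0
for n in range(60):
    x, y = step(x), step(y)
    max_sep = max(max_sep, abs(x - y))

print(max_sep)                # of order 1: the 1e-12 has been amplified enormously
```

The separation roughly doubles per iteration (the Lyapunov exponent of this map is ln 2), so after a few dozen steps the two "identical" orbits are in completely different places.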

2 minutes ago, michael.h.f.wilkinson said:

Wrong. The pseudo-orbit starts at Z0 and ends at ZN, at an epsilon distance from Zx; the true orbit starting at Z0 is unknown. All you know is that there exists some point in an epsilon-neighbourhood of Z0 that ends in an epsilon-neighbourhood of ZN, which contains Zx. When you iterate these kinds of maps at various resolutions to see where the orbits end up, e.g. to compute fractal images like those of the Mandelbrot and Julia sets, you will notice that as you zoom in and increase the number of iterations N, each pixel in the boundary region of the set gets split into smaller and smaller regions ending up in different regimes. Thus, if I pick any point in an epsilon-neighbourhood of Z0, I will not necessarily end up anywhere near the ZN I found in the first run, and therefore not anywhere near Zx.

Incidentally, how do you intend to store the phase-space state of the entire universe in a physical computer, which must itself be part of that same universe (i.e. a strict subset)? How could that computer even hold that information?

Ok, yes, you've got a point there - since we use finite precision, all of Z0, Z1, ..., ZN will be in close proximity to Z0', Z1', ..., ZN' (certainly epsilon proximity, though as we iterate backwards from N to 1 the proximity will need to shrink - but don't worry, it won't need to shrink to an infinitesimal value; it will remain finite).

But this point - "Thus, if I pick any point in an epsilon-neighbourhood of Z0, I will not necessarily end up anywhere near the ZN I found in the first run, and therefore not anywhere near Zx" - while valid, is not applicable to the problem, since we are not interested in an epsilon-neighbourhood of Z0; we are only interested in an epsilon-neighbourhood of Zx.

If you think about the algorithm, you might notice that one approach would be to take Zx and find Zn-1 (by adding 1 and then taking the square root) to sufficient precision that Zn-1 (an exact value, since it has finite precision), when squared with 1 subtracted, lands in an epsilon-vicinity of Zx. Now you have to shrink epsilon to twice the distance between Zx' and Zx (a lower bound in the finite-precision calculation of Zx'), and then work out a (lower-bound) epsilon_n-1 such that all numbers within epsilon_n-1 of Zn-1 give, on the next iteration, values within the shrunken epsilon of Zx' (and hence within epsilon of Zx). Since it is a single iteration - just the square of the number minus 1 - it will not start wildly diverging, and we can actually calculate how much epsilon needs to shrink for that step (again, a lower bound in finite precision).
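A rough sketch of this backward-then-forward idea for the map f(z) = z^2 - 1; the target Zx, the number of steps, and the choice of the principal square-root branch are all illustrative assumptions:

```python
import cmath

def backward(z, steps):
    """Preimages under f(z) = z^2 - 1: each step takes one square root of z + 1."""
    orbit = [z]
    for _ in range(steps):
        z = cmath.sqrt(z + 1)          # principal branch -- a deliberate choice
        orbit.append(z)
    return orbit[::-1]                  # reversed: candidate Z0, ..., ZN = Zx

target = complex(0.3, 0.2)              # illustrative Zx
orbit = backward(target, 20)

# Forward check: each backward step is an exact mathematical inverse of f,
# so iterating f from the recovered Z0 lands near Zx, with the rounding
# error amplified by |f'(z)| = |2z| at every forward step -- finitely.
z = orbit[0]
for _ in range(len(orbit) - 1):
    z = z * z - 1
print(abs(z - target))                  # small: rounding error, finitely amplified
```

Because the amplification per forward step is bounded, the final miss distance is the accumulated rounding error times a finite factor, matching the "epsilon shrinks, but only finitely" argument above.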

If you think about it, the limit where precision tends to infinity and epsilon tends to 0 in the above case gives pointwise convergence to the exact orbit ending at Zx.

16 minutes ago, michael.h.f.wilkinson said:

Incidentally, how do you intend to store the phase-space state of the entire universe in a physical computer, which must itself be part of that same universe (i.e. a strict subset)? How could that computer even hold that information?

Ah, yes, my argument here goes like this:

If one were to create / design the solar system, it couldn't be done from within our universe - one would need to create / design the whole universe, and hence our solar system (as well as every other solar system out there, and everything else), so that we could say that our solar system was designed. I mentioned in one of my earlier posts that such a designer entity would need to be "outside" of our existence / universe, and there we can't impose any limits on computing power, on the type of calculating device (it could be as powerful as a non-deterministic Turing machine, or more powerful - if one is able to design and create a universe with quantum mechanics, enabling quantum computers, a "subset" of the NDTM, then surely they might have more advanced machines available), nor on storage space.

5 minutes ago, vlaiv said:

Ok, yes, you've got a point there - since we use finite precision, all of Z0, Z1, ..., ZN will be in close proximity to Z0', Z1', ..., ZN' (certainly epsilon proximity, though as we iterate backwards from N to 1 the proximity will need to shrink - but don't worry, it won't need to shrink to an infinitesimal value; it will remain finite).

But this point - "Thus, if I pick any point in an epsilon-neighbourhood of Z0, I will not necessarily end up anywhere near the ZN I found in the first run, and therefore not anywhere near Zx" - while valid, is not applicable to the problem, since we are not interested in an epsilon-neighbourhood of Z0; we are only interested in an epsilon-neighbourhood of Zx.

If you think about the algorithm, you might notice that one approach would be to take Zx and find Zn-1 (by adding 1 and then taking the square root) to sufficient precision that Zn-1 (an exact value, since it has finite precision), when squared with 1 subtracted, lands in an epsilon-vicinity of Zx. Now you have to shrink epsilon to twice the distance between Zx' and Zx (a lower bound in the finite-precision calculation of Zx'), and then work out a (lower-bound) epsilon_n-1 such that all numbers within epsilon_n-1 of Zn-1 give, on the next iteration, values within the shrunken epsilon of Zx' (and hence within epsilon of Zx). Since it is a single iteration - just the square of the number minus 1 - it will not start wildly diverging, and we can actually calculate how much epsilon needs to shrink for that step (again, a lower bound in finite precision).

If you think about it, the limit where precision tends to infinity and epsilon tends to 0 in the above case gives pointwise convergence to the exact orbit ending at Zx.

You are talking about limits of infinite precision, which in the real world don't work (many students of my course on Modelling and Simulation within our CS Master's programme have found this out the hard way ;) ). In fact, we often suggest that if you are studying a chaotic system, you should forget about high-precision solvers, as all they do is waste computer time trying to impose numerical stability on an inherently (physically) unstable system.

5 minutes ago, vlaiv said:

Ah, yes, my argument here goes like this:

If one were to create / design the solar system, it couldn't be done from within our universe - one would need to create / design the whole universe, and hence our solar system (as well as every other solar system out there, and everything else), so that we could say that our solar system was designed. I mentioned in one of my earlier posts that such a designer entity would need to be "outside" of our existence / universe, and there we can't impose any limits on computing power, on the type of calculating device (it could be as powerful as a non-deterministic Turing machine, or more powerful - if one is able to design and create a universe with quantum mechanics, enabling quantum computers, a "subset" of the NDTM, then surely they might have more advanced machines available), nor on storage space.

That is akin to invoking magic, but unless it has infinite accuracy (or the infinite tape of a Turing machine) it won't be able to predict the long-term behaviour of even a simple chaotic pendulum, let alone the universe. It won't be able to solve the Halting problem either.

9 minutes ago, michael.h.f.wilkinson said:

That is akin to invoking magic, but unless it has infinite accuracy (or the infinite tape of a Turing machine) it won't be able to predict the long-term behaviour of even a simple chaotic pendulum, let alone the universe. It won't be able to solve the Halting problem either.

This is where we disagree - "long-term" implies finite time, and finite time with a finite error in phase space, in my view (and in the above example), requires only finite precision.

Just now, vlaiv said:

This is where we disagree - "long-term" implies finite time, and finite time with a finite error in phase space, in my view (and in the above example), requires only finite precision.

Then I suggest you study chaos theory in more detail.

2 minutes ago, michael.h.f.wilkinson said:

Then I suggest you study chaos theory in more detail.

I certainly will; in the meantime, do you disagree with the above example that I gave?


 

Just now, vlaiv said:

I certainly will; in the meantime, do you disagree with the above example that I gave?

Certainly. In your simple backward-iteration example, run the code to compute your starting point Z0, then run the iteration forwards and see whether you end up at the same end point or somewhere very different (in a chaotic system, in most cases you will not get anywhere near Zx). A positive real part of a Lyapunov exponent means that any epsilon deviation of an orbit grows exponentially in time. Thus, if my initial perturbation is a rounding error of, say, 1.0E-307, and the value of the Lyapunov exponent implies this error doubles every hour of simulated time (or, equivalently, grows tenfold every 3 hours 20 minutes), then in just 1024 hours (about 43 days) my error has reached 1.0, and in another 1024 hours it is 1E+307. Working out the error after 13.8 billion years is left to the reader.
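The arithmetic behind these figures can be checked directly (the 1.0E-307 perturbation and the one-hour doubling time are the post's illustrative assumptions):

```python
import math

eps0 = 1e-307                            # initial rounding-error perturbation
# error(t) = eps0 * 2**t, with t measured in hours (doubling time = 1 hour)
tenfold_minutes = math.log2(10) * 60     # minutes for tenfold growth
hours_to_one = math.log2(1 / eps0)       # hours until the error reaches 1.0
print(tenfold_minutes)                   # about 199 minutes, i.e. 3 h 20 m
print(hours_to_one, hours_to_one / 24)   # about 1020 hours, i.e. ~42.5 days
```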

4 minutes ago, michael.h.f.wilkinson said:

 

Certainly. In your simple backward-iteration example, run the code to compute your starting point Z0, then run the iteration forwards and see whether you end up at the same end point or somewhere very different (in a chaotic system, in most cases you will not get anywhere near Zx). A positive real part of a Lyapunov exponent means that any epsilon deviation of an orbit grows exponentially in time. Thus, if my initial perturbation is a rounding error of, say, 1.0E-307, and the value of the Lyapunov exponent implies this error doubles every hour of simulated time (or, equivalently, grows tenfold every 3 hours 20 minutes), then in just 1024 hours (about 43 days) my error has reached 1.0, and in another 1024 hours it is 1E+307. Working out the error after 13.8 billion years is left to the reader.

Very nice example - except, as I said at the beginning, be careful when you use the term infinity. Let's say that the error doubles every second, and we let the simulation run for 13.8 billion years; the number we get for the magnitude of the error is still almost infinitesimally small compared to infinity - it is, after all, finite. For any finite error, however large, we can choose a small (again finite) initial error so that the total error will be less than any specified finite required error. Time is not infinite, and things don't blow up "beyond repair".

Just because we perceive a number as the largest number imaginable - incomparable to anything in our everyday life, incalculable on any of today's computers or even any computer we can envision in the future - if it is finite, it is still very, very tiny compared to infinity :D
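A back-of-the-envelope version of this "enormous but finite" claim, assuming the error doubles every second, so that each extra second of simulated time costs one extra bit of precision in the initial condition:

```python
# Precision budget for 13.8 billion years of simulated time at one
# doubling of the error per second: one extra bit per second.
seconds = 13.8e9 * 365.25 * 86400        # ~4.4e17 seconds of simulated time
extra_bits = seconds                      # bits of initial precision required
petabytes = extra_bits / 8 / 1e15
print(f"{extra_bits:.2e} bits, about {petabytes:.0f} PB")
```

Absurd by engineering standards, yet finite - which is exactly the distinction being argued.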

1 minute ago, vlaiv said:

Very nice example - except, as I said at the beginning, be careful when you use the term infinity. Let's say that the error doubles every second, and we let the simulation run for 13.8 billion years; the number we get for the magnitude of the error is still almost infinitesimally small compared to infinity - it is, after all, finite. For any finite error, however large, we can choose a small (again finite) initial error so that the total error will be less than any specified finite required error. Time is not infinite, and things don't blow up "beyond repair".

Just because we perceive a number as the largest number imaginable - incomparable to anything in our everyday life, incalculable on any of today's computers or even any computer we can envision in the future - if it is finite, it is still very, very tiny compared to infinity :D

We are basically using ∞ here as the extreme limit of the real line, not as one of the aleph numbers.

Let me put it differently: for every hour (or second) of simulated time added, you need to halve the error in the initial position, requiring at least one more bit of precision in the representation. You are also tacitly assuming that the simulation is done after 13.8 billion years, and that time is finite. Current models suggest accelerating expansion, so the universe could expand forever. So my postulate holds: for every finite-precision calculation, at some rapidly approaching point in time (compared to infinity), the errors will explode from 1E-307 to 1E+307 - in a matter of 3 months in my example, or within the hour in yours.

The above tacitly assumes you are not making any further errors along the way, BTW.

5 minutes ago, michael.h.f.wilkinson said:

We are basically using ∞ here as the extreme limit of the real line, not as one of the aleph numbers.

Let me put it differently: for every hour (or second) of simulated time added, you need to halve the error in the initial position, requiring at least one more bit of precision in the representation. You are also tacitly assuming that the simulation is done after 13.8 billion years, and that time is finite. Current models suggest accelerating expansion, so the universe could expand forever. So my postulate holds: for every finite-precision calculation, at some rapidly approaching point in time (compared to infinity), the errors will explode from 1E-307 to 1E+307 - in a matter of 3 months in my example, or within the hour in yours.

The above tacitly assumes you are not making any further errors along the way, BTW.

I agree; that is why I said that we need to put a limit on time - a finite limit.

Mind you, I'm not trying to assert that something is true; I'm just saying that, based on our current knowledge, it can't be ruled out. It is indeed highly speculative of me to posit a finite limit on time - it could come in the form of a sudden death (the universe ceases to exist at some future time T), or perhaps a state close to thermal equilibrium, in which gradients are so low as to prohibit the formation of information-processing systems, is the trigger for a "shutdown".

All I wanted to do is point out that chaos theory does not prevent our universe from being designed and/or created. In fact, it is an interesting topic to discuss whether, in principle, we can rule out the possibility of our universe being created/designed at all.

1 minute ago, vlaiv said:

I agree; that is why I said that we need to put a limit on time - a finite limit.

Mind you, I'm not trying to assert that something is true; I'm just saying that, based on our current knowledge, it can't be ruled out. It is indeed highly speculative of me to posit a finite limit on time - it could come in the form of a sudden death (the universe ceases to exist at some future time T), or perhaps a state close to thermal equilibrium, in which gradients are so low as to prohibit the formation of information-processing systems, is the trigger for a "shutdown".

All I wanted to do is point out that chaos theory does not prevent our universe from being designed and/or created. In fact, it is an interesting topic to discuss whether, in principle, we can rule out the possibility of our universe being created/designed at all.

My point is that the notion that the universe has been designed requires something unimaginably more complex - by many orders of magnitude - than the entire universe itself. That extra structure needs an explanation in its turn (a designer of the designer of the designer: turtles all the way down, with each level more complex than the layer above it). Occam's razor should be applied to cut that short.

1 minute ago, michael.h.f.wilkinson said:

My point is that the notion that the universe has been designed requires something unimaginably more complex - by many orders of magnitude - than the entire universe itself. That extra structure needs an explanation in its turn (a designer of the designer of the designer: turtles all the way down, with each level more complex than the layer above it). Occam's razor should be applied to cut that short.

This is very logical reasoning if we apply our type of logic to the problem. However, as I pointed out in one of the first posts, in order to dig deeper into all of this we need to add some flexibility to our way of thinking.

First of all, we are applying our notion of the design / creation process to a realm that can be completely different from our own. Our notion depends on time, at least a very basic notion of time - a system with states and transitions; the simplest example would be a two-step system: universe not created -> universe created. However, we can conclude with some degree of confidence that time is in fact a property of our universe, and as such might not exist outside the realm of our universe.

Another point is that a creator/designer implies intent and "free will". We have no evidence of free will in our universe. Well, scratch that - we would need to define free will in the first place. The laws of physics don't allow for what we intuitively perceive as free will, and there is a question of why we have the illusion of free will at all. Then comes intent, which we associate with consciousness, which we in turn don't fully understand. For example, a tree that has fallen in a storm in someone's back yard has "created" a mess :D - but we would not say that the tree actually "created" the mess by choice / intent. Yet a person creating a mess will be "assigned" a notion of choice; after all, one had a "choice" not to make a mess in the first place (this kind of reasoning will quickly get one into ethics territory).

Anyway, my point is that if we go by our experience, then yes, a creator needs a creator or a causality-based explanation; but if we accept that causality / time and the above-mentioned things are "local" to our universe, then we can conceivably imagine that this might not be the case "one level up".

 

1 hour ago, vlaiv said:

This is very logical reasoning if we apply our type of logic to the problem. However, as I pointed out in one of the first posts, in order to dig deeper into all of this we need to add some flexibility to our way of thinking.

First of all, we are applying our notion of the design / creation process to a realm that can be completely different from our own. Our notion depends on time, at least a very basic notion of time - a system with states and transitions; the simplest example would be a two-step system: universe not created -> universe created. However, we can conclude with some degree of confidence that time is in fact a property of our universe, and as such might not exist outside the realm of our universe.

Another point is that a creator/designer implies intent and "free will". We have no evidence of free will in our universe. Well, scratch that - we would need to define free will in the first place. The laws of physics don't allow for what we intuitively perceive as free will, and there is a question of why we have the illusion of free will at all. Then comes intent, which we associate with consciousness, which we in turn don't fully understand. For example, a tree that has fallen in a storm in someone's back yard has "created" a mess :D - but we would not say that the tree actually "created" the mess by choice / intent. Yet a person creating a mess will be "assigned" a notion of choice; after all, one had a "choice" not to make a mess in the first place (this kind of reasoning will quickly get one into ethics territory).

Anyway, my point is that if we go by our experience, then yes, a creator needs a creator or a causality-based explanation; but if we accept that causality / time and the above-mentioned things are "local" to our universe, then we can conceivably imagine that this might not be the case "one level up".

 

I am not going by experience or intuition when I explore things mathematically. Using mathematics we can often prove things that seem counter-intuitive, like Gödel's incompleteness theorems, or Alan Turing's related proof that the Halting problem is undecidable (Gödel, Escher, Bach is a great read on this topic). When studying general relativity and quantum mechanics you learn not to trust everyday intuition or experience (and gain quite a bit of mental flexibility). My point is that if you need to design a system that has the kind of chaotic behaviour actually observed in the universe, that puts constraints on the system used to design it - if by design you mean creating a construct with a particular aim or desired end product (which is what the OP suggested). As the concept of time is built into the universe to be designed, it must in some way be implemented in the design system. The conclusion is that the design system must be far more complex than the system it is designing, because of the constraints set by the unpredictability of the universe. Besides, we would then be left to explain the designer itself, and there we too readily get into territory that is banned from SGL for a very good reason.

 


Wow, what did I start? Again I will state that just because we can't imagine how to do it does not mean it can't be done. We do not know everything, and it may well be that the current solar system could have been produced from a range of origins rather than a small specific set. I don't think we can assume that we are the desired outcome, but a second-generation planet supporting life may have been. The end result may not have been X, but Q through Z. There is so much we don't know - take planet number nine: we know about where it must be, and how big it must be, but that's about it. We thought we had Pluto figured out, but we were wrong about almost everything. What we think we know is pretty amazing until you realize we are missing about 90% of the universe. It's like Planet 9: we know it's there, and about how much of it there should be, but what it is we still don't know.

I think you are overthinking this. Constructing a solar system might be as simple as moving a few planetesimals to where they need to be and then watching the dominoes fall for half a dozen billion years. Moving Jupiter sounds impossible, but moving something much smaller might produce a move. Think cheap.

1 hour ago, Michael Kieth Adams said:

Wow, what did I start? Again I will state that just because we can't imagine how to do it does not mean it can't be done. We do not know everything, and it may well be that the current solar system could have been produced from a range of origins rather than a small specific set. I don't think we can assume that we are the desired outcome, but a second-generation planet supporting life may have been. The end result may not have been X, but Q through Z. There is so much we don't know - take planet number nine: we know about where it must be, and how big it must be, but that's about it. We thought we had Pluto figured out, but we were wrong about almost everything. What we think we know is pretty amazing until you realize we are missing about 90% of the universe. It's like Planet 9: we know it's there, and about how much of it there should be, but what it is we still don't know.

I think you are overthinking this. Constructing a solar system might be as simple as moving a few planetesimals to where they need to be and then watching the dominoes fall for half a dozen billion years. Moving Jupiter sounds impossible, but moving something much smaller might produce a move. Think cheap.

So how would an entity know which planetesimals to move where? It would need the ability to predict the effect of such a perturbation on the very chaotic early solar system. That is difficult in the extreme, as argued before, especially for large systems over huge time spans. It would not be sufficient to calculate the predicted outcome for the solar system in isolation, as the effects of passing stars, exploding supernovae, etc. would have to be taken into account too. Mathematically, that would require computational machinery many times the mass and complexity of the solar system itself, and it would most likely require more energy than the rest energy of the visible universe, even with computation sped up to the limit set by quantum uncertainty. So unless our understanding of physics and mathematics is turned completely on its head, it is impossible.

Also, please note that science cannot prove a theory right, but it can prove things wrong - by showing that the theory is contrary to observations or, in the case of mathematics, that a particular assumption leads to a contradiction. So it is not a matter of limits of imagination, but of limits set by physics, mathematics, and logic.


Wow, what did I start? Again I will state that just because we can't imagine how to do it does not mean it can't be done. We do not know everything, and it may well be that the current solar system could have been produced from a range of origins rather than a small specific set. I don't think we can assume that we are the desired outcome, but a second-generation planet supporting life may have been. The end result may not have been X, but Q through Z. There is so much we don't know - take planet number nine: we know about where it must be, and how big it must be, but that's about it. We thought we had Pluto figured out, but we were wrong about almost everything. What we think we know is pretty amazing until you realize we are missing about 90% of the universe. It's like Planet 9: we know it's there, and about how much of it there should be, but what it is we still don't know.

 

Current cosmographers are able to make pretty good models of how the solar system formed with the computers we have now. If I've got it right, we have new systems about to come online that will make current supercomputers look like toys.

I would be very careful about using words like impossible. Liquid water on Pluto was impossible not long ago. I still think you are overthinking it. Our planetary system is anomalous - we have found none like it. Why? They should be there. I think one possible answer might be that our solar system is an artifact. Big problems for us, yep, but maybe there are others for whom it might not be. I also think you may be looking at this as a problem that must always come out exactly right, but maybe the same thing was tried in a hundred and fifty systems, and we were the lucky ones. Maybe Mars was the desired result and we are just the leftovers from a failed experiment.


Some like to believe the Universe is a projection or a hologram, or that it was constructed in some way. My belief is that it is certainly constructed - but by the mechanics of physics: mass, elemental composition, and the action and reaction of those criteria in complete chaos, with our perspective being organized chaos as we look from afar in ultra-short exposures of time. If one could view the universe from overhead, as a giant, in real time - watching it from its tiny beginning to its massive end, all taking place as a 3-foot sphere over a few seconds - it would become quite clear how unorganized the reaction before us is.

                      Freddie.

On 26/06/2019 at 20:23, Michael Kieth Adams said:

Wow, what did I start? Again I will state that just because we can't imagine how to do it does not mean it can't be done.

Yes, but unless we root our proposition in the physics we do know, what we are left with leaves the realm of science and becomes something else. We should resist making up any old narrative.

Jim 


This has been a lot of fun. I will say this: our universe is a very large place, and I don't think it needs a universal designer. We are close to shifting asteroids around. We haven't done it yet, but when we do, we will have taken the first steps towards designing solar systems ourselves - though considering how long it takes, we might opt for shorter-term projects. Different beings experience time differently. When I was a child, a year was forever; now they zip by so very fast. To beings with lifespans of thirty billion years or so, creating second-generation planets might be a form of gardening or something. We are near very exciting times. I would bet a quarter that we will begin to understand dark matter and energy within the next twenty-five years. The basic laws of physics may look very different then. Everything changes - how wonderful! I can't wait to see what Planet Nine is like, or whether there is life on the dwarf planets or Mars.

