Gravitational stuff simulations



Hey guys, I've been working on some studies of globular clusters and I needed to do some simulations...

I wrote the "simulator" (C++, now trying Java). It worked well for the Earth-Sun system (using an "ok" time step), but when simulating 30+ objects it gets soooooooooo sloooooooooow!

I'm using Verlet integration and trying to tune the time step: make it as big as it can be while the results stay "real".
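In case it helps, the core of a step looks more or less like this (a simplified sketch of velocity Verlet with direct pairwise gravity, not my exact code; 2-D, and all names are illustrative):

```java
// Simplified velocity-Verlet step with direct O(N^2) gravity (2-D; all
// names illustrative). Positions, velocities and masses live in flat arrays.
public class DirectNBody {
    static final double G = 6.674e-11;

    static void step(double[] x, double[] y, double[] vx, double[] vy,
                     double[] ax, double[] ay, double[] m, double dt) {
        int n = m.length;
        // 1) drift: x += v*dt + 0.5*a*dt^2
        for (int i = 0; i < n; i++) {
            x[i] += vx[i] * dt + 0.5 * ax[i] * dt * dt;
            y[i] += vy[i] * dt + 0.5 * ay[i] * dt * dt;
        }
        // 2) new accelerations from every pair -- the O(N^2) hot spot
        double[] nax = new double[n], nay = new double[n];
        for (int i = 0; i < n; i++) {
            for (int j = i + 1; j < n; j++) {
                double dx = x[j] - x[i], dy = y[j] - y[i];
                double r2 = dx * dx + dy * dy;
                double inv3 = 1.0 / (r2 * Math.sqrt(r2));   // 1/r^3
                nax[i] += G * m[j] * dx * inv3;  nay[i] += G * m[j] * dy * inv3;
                nax[j] -= G * m[i] * dx * inv3;  nay[j] -= G * m[i] * dy * inv3;
            }
        }
        // 3) kick: v += 0.5*(a_old + a_new)*dt, then keep the new accelerations
        for (int i = 0; i < n; i++) {
            vx[i] += 0.5 * (ax[i] + nax[i]) * dt;
            vy[i] += 0.5 * (ay[i] + nay[i]) * dt;
            ax[i] = nax[i];  ay[i] = nay[i];
        }
    }
}
```

The middle loop is the part that blows up: it has to touch every pair of bodies.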

Maybe the problem is my computer. Does anyone know how I can get it pumping? :D

Or maybe I've got optimization problems... :/

HELP MEEEEEEE :D


To be honest, if you program in a simple way in Java (e.g. primitives, plain C-style loops) so as to limit object creation, and therefore the work the garbage collector has to do, Java is not far behind C++.
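For example, here is the kind of thing I mean (an illustrative sketch, all names made up): the same pairwise accumulation written allocation-heavy versus with primitives only:

```java
// Illustrative contrast (all names made up): the same pairwise sum written
// in an allocation-heavy style and in a GC-friendly primitive style.
final class Vec {
    final double x, y;
    Vec(double x, double y) { this.x = x; this.y = y; }
}

public class GcStyles {
    // Allocation-heavy: a short-lived Vec per pair means ~N^2 garbage
    // objects every simulation step, which keeps the collector busy.
    static double sumAllocating(double[] xs, double[] ys) {
        double total = 0;
        for (int i = 0; i < xs.length; i++)
            for (int j = i + 1; j < xs.length; j++) {
                Vec d = new Vec(xs[j] - xs[i], ys[j] - ys[i]); // new object per pair
                total += d.x + d.y;
            }
        return total;
    }

    // GC-friendly: identical arithmetic on primitives, zero allocation
    // in the inner loop.
    static double sumPrimitive(double[] xs, double[] ys) {
        double total = 0;
        for (int i = 0; i < xs.length; i++)
            for (int j = i + 1; j < xs.length; j++)
                total += (xs[j] - xs[i]) + (ys[j] - ys[i]);
        return total;
    }
}
```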

I have recently been doing analysis on files of ~100 GB using a program I wrote in Java, and it runs smoothly in terms of memory and CPU. I would not worry too much about the programming language, but rather about the algorithm's complexity.

If you cannot change the algorithm, you can play with its parameters. If you sacrifice a bit of accuracy (e.g. a larger time step), the overall run will be faster.


One further problem is that the standard algorithm has quadratic complexity (CPU time rises with the square of the number of objects).

In other words, 30 objects need a few hundred times as much computation per step as two objects (435 force pairs versus 1), and the cost keeps climbing steeply from there...
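You can see it just by counting force pairs per step (a throwaway check, illustrative):

```java
// Quick check of how the number of force pairs, N*(N-1)/2, grows with N.
public class PairCount {
    public static void main(String[] args) {
        for (int n : new int[] {2, 30, 300, 3000}) {
            long pairs = (long) n * (n - 1) / 2;
            System.out.printf("%5d bodies -> %,10d force pairs per step%n", n, pairs);
        }
        // prints: 2 -> 1, 30 -> 435, 300 -> 44,850, 3000 -> 4,498,500
    }
}
```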

Google 'rice on the chessboard' for a feel of how quickly runaway growth gets out of hand (strictly, that story is exponential growth, which is even worse than quadratic).


There are ways to account for only the forces of nearby objects while averaging the potential of the rest (the classic example is the Barnes-Hut tree method). That saves a lot of time and can be O(N log N) rather than O(N²). Some general info here:

https://en.wikipedia.org/wiki/N-body_simulation
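The heart of that trick (the Barnes-Hut "opening criterion") fits in a few lines. A sketch, assuming the quadtree has already been built, with illustrative names throughout:

```java
// Sketch of a Barnes-Hut style force walk (2-D, illustrative names).
// Assumes a quadtree already exists: each node stores the total mass,
// centre of mass, and side length of its cell.
final class Node {
    double mass, comX, comY;   // total mass and centre of mass of this cell
    double size;               // side length of the cell
    Node[] children;           // null for a leaf holding a single body
}

final class TreeForce {
    static final double G = 6.674e-11;
    static final double THETA = 0.5;   // opening angle: smaller = more accurate

    // Accumulates the acceleration on the point (px, py) into acc[0], acc[1].
    static void walk(Node node, double px, double py, double[] acc) {
        if (node == null || node.mass == 0) return;
        double dx = node.comX - px, dy = node.comY - py;
        double r2 = dx * dx + dy * dy;
        if (r2 == 0) return;                       // the body itself
        double r = Math.sqrt(r2);
        // If the cell is far enough away (size/distance < theta), treat the
        // whole cell as one point mass; otherwise open it and recurse.
        if (node.children == null || node.size / r < THETA) {
            double s = G * node.mass / (r2 * r);   // G*m/r^3
            acc[0] += s * dx;
            acc[1] += s * dy;
        } else {
            for (Node child : node.children) walk(child, px, py, acc);
        }
    }
}
```

Each body only "opens" a handful of cells per level of the tree, roughly O(log N) of them, so the whole force pass costs O(N log N) instead of O(N²).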

Having written a BASIC interpreter, I'd guess that's because computers typically work out powers by converting the figures to logs, so you are bypassing a step. Shouldn't it be O(2 log N), though?


I'm getting an almost imperceptible whooshing noise of stuff going way over my head :grin:

:) If it is due to the above conversation, they (michael & stub) are talking about algorithmic complexity. Briefly, it is a way to estimate how the running time of an algorithm (here, the simulation of the motion of gravitating objects) grows with the size of the input (here, the number of objects considered).

The O "Big O" stands for "upper limited by".

For instance, O(n log n) means that the time the algorithm takes to execute is bounded above by (= grows no faster than) a function proportional to n log n, where n is the size of the input.

For 2 objects the time is on the order of 2 log 2,

for 10 objects on the order of 10 log 10,

and for N objects on the order of N log N.

A function like n log n is still "good", meaning the algorithm runs fast: the n part is linear (no exponents), and it is only multiplied by log n, which grows very slowly.

If you look on the internet for the plot of this function, you will see that it grows slowly. Translated: if you increase the input, an algorithm with this complexity still finishes in reasonable time.

More problematic are functions containing quadratic, cubic or higher terms, e.g. O(n^2), O(n^3), ...

In those cases the time grows quadratically, cubically (and so on) with the input size. If you plot a function like n^2, you will see that it grows much, much faster than log(n) or n log(n).

Even more problematic are exponential functions like O(k^n), where k is some constant greater than 1. These grow faster still.

In such cases you might, for instance, turn to approximation algorithms (just to mention one option; this is actually a huge field of research).
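To make these growth rates concrete, here is a throwaway comparison (illustrative only):

```java
// Compare how n log n, n^2 and 2^n grow for the same inputs.
public class Growth {
    public static void main(String[] args) {
        System.out.println("     n |  n log2(n) |        n^2 |             2^n");
        for (int n : new int[] {10, 20, 30, 40}) {
            double nlogn = n * (Math.log(n) / Math.log(2));
            long n2 = (long) n * n;
            long exp = 1L << n;   // 2^n
            System.out.printf("%6d | %10.0f | %10d | %,15d%n", n, nlogn, n2, exp);
        }
    }
}
```

Already at n = 40 the exponential column dwarfs the others; that is the wall you hit.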

Hence, an important question (actually one of the questions of the century) is: can we make algorithms that seem to be slow run fast? For instance, can we always transform an algorithm with exponential complexity (e.g. O(2^n)) into one with polynomial or better complexity (e.g. O(n^2), or O(n log n))?

If you can answer this question (with proof), you will be in a position to buy a very nice telescope! :)

(For the curious: https://en.wikipedia.org/wiki/P_versus_NP_problem)

Here are plots of some of the mentioned functions, to give an idea:

[Plot: number of operations versus number of elements for the common complexity classes discussed above]

Elements = the size of the input; in this thread, the number of objects to simulate.

Operations = the number of computer operations. This is equivalent to time, because each operation generally executes in a roughly fixed amount of time.

IMPORTANT: 

1. A faster computer makes things run faster, but only by a constant factor; it does not change the algorithmic complexity. You can have the fastest traditional (= deterministic) computer ever built, and a problem with exponential complexity will still run slowly, unless the equivalence in the problem mentioned above (technically, P = NP) is proved to be true.

2. Using parallel computing will help, but again only to a certain extent (see the sketch after this list).

3. A quantum computer might be a big change for some of these problems. Not because it is faster, but because it works differently from the traditional computers we are used to.
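For point 2, a minimal sketch of spreading the pairwise force loop across cores with Java's standard parallel streams (names illustrative; the total work is still quadratic, it is just divided among the cores):

```java
import java.util.stream.IntStream;

// Parallelising the O(N^2) force evaluation across cores. The total work is
// still quadratic; it just finishes in roughly work/cores time.
public class ParallelForces {
    static final double G = 6.674e-11;

    static void accelerations(double[] x, double[] y, double[] m,
                              double[] ax, double[] ay) {
        int n = m.length;
        // One task per body i; no two tasks ever write to the same slot.
        IntStream.range(0, n).parallel().forEach(i -> {
            double axi = 0, ayi = 0;
            for (int j = 0; j < n; j++) {
                if (j == i) continue;
                double dx = x[j] - x[i], dy = y[j] - y[i];
                double r2 = dx * dx + dy * dy;
                double s = G * m[j] / (r2 * Math.sqrt(r2));  // G*m/r^3
                axi += s * dx;
                ayi += s * dy;
            }
            ax[i] = axi;
            ay[i] = ayi;
        });
    }
}
```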

