https://alvinng4.github.io/grav_sim/5_steps_to_n_body_simula...
Naturally, this difference becomes large when two objects are close together, since the acceleration induces a large change in velocity, and a smaller time step is needed to keep the error under control.
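To make that concrete, here's a toy sketch (mine, not the article's code): integrate a circular orbit around a fixed unit mass with explicit Euler and watch the energy drift shrink as the time step does.

```python
import math

def energy_error(dt, steps):
    # Start on a circular orbit of radius 1 around a fixed unit mass (G = 1).
    x, y, vx, vy = 1.0, 0.0, 0.0, 1.0
    e0 = 0.5 * (vx**2 + vy**2) - 1.0 / math.hypot(x, y)
    for _ in range(steps):
        r3 = math.hypot(x, y) ** 3
        ax, ay = -x / r3, -y / r3          # acceleration at the old position
        x, y = x + vx * dt, y + vy * dt    # explicit Euler: old velocity first,
        vx, vy = vx + ax * dt, vy + ay * dt  # then old acceleration
    e = 0.5 * (vx**2 + vy**2) - 1.0 / math.hypot(x, y)
    return abs(e - e0)
```

Over the same span of simulated time, `energy_error(0.005, 2000)` comes out smaller than `energy_error(0.01, 1000)`: for a first-order method the drift scales roughly linearly with the step size, and a close encounter (small `r3`) blows the per-step error up unless `dt` shrinks with it.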
But in our simulation (reality, or whatever), uncertainty increases the closer we look.
Does that suggest we don’t live in a simulation? Or is “looking closer increases uncertainty” something that would emerge from a nested simulation?
laughs in 32kb of RAM
Sounds like a great project, though. There's a lot of fundamental concepts involved in something like this, so it's a great learning exercise (and fun as well!) :)
For a few-body system (e.g., the Solar System) this probably won't provide any speedup. But once you get to ~100 bodies you should start to see substantial speedups by running the simulator on a GPU.
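The reason it maps so well is that the all-pairs kernel is n² independent interactions. A sketch of that shape of computation in NumPy (my own illustration; on a GPU the same array pattern works nearly verbatim via a drop-in library like CuPy):

```python
import numpy as np

def accelerations(pos, mass, G=1.0, softening=1e-3):
    """pos: (n, 3) positions, mass: (n,) masses -> (n, 3) accelerations."""
    d = pos[None, :, :] - pos[:, None, :]          # d[i, j] = r_j - r_i
    r2 = (d ** 2).sum(axis=-1) + softening ** 2    # softening avoids 1/0 at i == j
    inv_r3 = r2 ** -1.5
    np.fill_diagonal(inv_r3, 0.0)                  # no self-interaction
    # a_i = G * sum_j m_j * (r_j - r_i) / |r|^3
    return G * (d * (mass[None, :] * inv_r3)[:, :, None]).sum(axis=1)
```

Every pair is computed with no data dependence on any other pair, which is exactly the kind of work a GPU parallelizes well; the memory for the (n, n) intermediates is the price you pay for writing it this way.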
For the programmer, yes it is easy enough.
But there is a lot of complexity hidden behind that change. If you care about how your tools work, that might be a problem (I'm not judging).
Obviously if you want maximum performance you'll probably still have to roll up your sleeves and write CUDA yourself, but there are a lot of situations where you can still get most of the benefit of using a GPU and never have to worry yourself over how to write CUDA.
I'm trying to evaluate whether I can get away with f32 on the GPU for my molecular docking software. It might be OK, but I've hit cases where f64 is fine but f32 is not.
I suppose this is because the dominant uses, games and AI/ML, use f32, or for AI even less‽
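The kind of failure mode being described is easy to reproduce (my example, not the commenter's code): float32 carries about 7 significant decimal digits, so a small term added to a large accumulator can vanish entirely, while float64 (~16 digits) keeps it.

```python
import numpy as np

big32 = np.float32(1e8)
print(big32 + np.float32(1.0) == big32)   # True: the 1.0 is rounded away in f32

big64 = np.float64(1e8)
print(big64 + 1.0 == big64)               # False: f64 still resolves it
```

In simulation terms this bites when coordinates are large relative to the separations you subtract, or when accumulating many small force contributions; common mitigations are recentering coordinates near the origin or doing just the sensitive accumulations in f64.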
> Large-scale simulation: So far, we have only focused on systems with a few objects. What about large-scale systems with thousands or millions of objects? Turns out it is not so easy because the computation of gravity scales as O(n²). Have a look at the Barnes-Hut algorithm to see how to speed up the simulation. In fact, we have documentation about it on this website as well. You may try to implement it in some low-level language like C or C++.
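For a feel of how Barnes-Hut beats the O(n²) wall, here is a minimal 2D sketch (my own, not the article's implementation; all names are mine, and real codes use 3D octrees, iterative tree builds, and handle coincident bodies, which this toy does not): distant groups of bodies are approximated by their total mass at their center of mass.

```python
import math

class Node:
    """One square region of a 2D Barnes-Hut quadtree."""
    def __init__(self, cx, cy, half):
        self.cx, self.cy, self.half = cx, cy, half  # region center, half-width
        self.mass = 0.0
        self.mx = self.my = 0.0   # mass-weighted position sums (for center of mass)
        self.body = None          # (x, y, m) when this is a leaf holding one body
        self.children = None      # four sub-quadrants once subdivided

    def insert(self, x, y, m):
        if self.children is None and self.mass == 0.0:
            self.body = (x, y, m)              # empty leaf: just store the body
        else:
            if self.children is None:          # occupied leaf: split, push old body down
                h = self.half / 2.0
                self.children = [Node(self.cx + dx * h, self.cy + dy * h, h)
                                 for dx in (-1, 1) for dy in (-1, 1)]
                bx, by, bm = self.body
                self.body = None
                self._quadrant(bx, by).insert(bx, by, bm)
            self._quadrant(x, y).insert(x, y, m)
        self.mass += m
        self.mx += m * x
        self.my += m * y

    def _quadrant(self, x, y):
        # children order from the comprehension above: (-,-), (-,+), (+,-), (+,+)
        return self.children[(2 if x >= self.cx else 0) + (1 if y >= self.cy else 0)]

    def accel(self, x, y, G=1.0, theta=0.5):
        """Gravitational acceleration at (x, y) from everything in this node."""
        if self.mass == 0.0:
            return 0.0, 0.0
        dx = self.mx / self.mass - x
        dy = self.my / self.mass - y
        r = math.hypot(dx, dy)
        if self.children is None:
            if r < 1e-12:
                return 0.0, 0.0                # query point is this leaf's own body
            a = G * self.mass / r**3
            return a * dx, a * dy
        if 2 * self.half < theta * r:          # node far enough: one point mass
            a = G * self.mass / r**3
            return a * dx, a * dy
        ax = ay = 0.0                          # otherwise recurse into the quadrants
        for c in self.children:
            cax, cay = c.accel(x, y, G, theta)
            ax, ay = ax + cax, ay + cay
        return ax, ay
```

With `theta = 0` the approximation is disabled and the traversal reproduces direct summation; larger `theta` trades accuracy for the O(n log n) behaviour that makes million-body runs feasible.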
Once particles accelerate they'll just "jump past" each other rather than collide, of course, if you have no repulsive forces. I realized you had to take smaller and smaller time slices to slow things down and keep the simulation running when the singularities "got near". I had a theory even back then that this is what nature is doing with General Relativity making time slow down near masses: slowing down to avoid singularities. It's a legitimate theory. Maybe this difficulty is why "God" (if there's a creator) decided to make everything actually "waves" as much as particles, because waves don't have the problem of singularities.
Edit: To answer myself, I think this is because one of the factors is to normalize the vector between the two bodies to length 1, and the other two factors are the standard inverse square relationship.
F = G * m1 * m2 / |r|^2 * r_unit

r_unit = r / |r|

F = G * m1 * m2 / |r|^3 * r

Or as acceleration, by dividing out an `m` using `F = ma`:

a = G * m / |r|^2 * r_unit

a = G * m / |r|^3 * r

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...
2022 "N-Body Performance With a kD-Tree: Comparing Rust to Other Languages"
https://ieeexplore.ieee.org/document/10216574
2025 "Parallel N-Body Performance Comparison: Julia, Rust, and More"
https://link.springer.com/chapter/10.1007/978-3-031-85638-9_...
I vividly remember the Pythagorean three-body problem example, and how it required special treatment for the close interactions.
Which made me very pleased to see that configuration used as the example here.
Now you're making me worry I'm not "physicsing" hard enough