Another one is that a planet-sphere renderer will often tessellate the sphere into quads in lat-lon space. Of course the quads are split into two triangles for rendering. At the poles, however, one of the triangles has zero area because two of its vertices coincide at the pole. When you then texture-map that "quad" with a square texture, half of the texture is not shown, and you get visible seams (Google Earth suffers from this artifact, or at least it did in the past). What's less obvious is that the same problem is present to a lesser extent in every quad on the sphere: the triangle whose horizontal edge is nearer the pole is smaller than the other, so half of the texture is stretched and half is shrunk. The fix is to use homogeneous texture coordinates.
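A minimal 1D sketch of the homogeneous-coordinate trick (my own toy illustration, not any particular engine's code): instead of a plain (u, v), each vertex carries (u*q, v*q, q). The hardware interpolates all components linearly, and the fragment shader divides by q, turning the piecewise-linear mapping into a projective one with no kink along the diagonal.

```python
def lerp(a, b, t):
    return a + (b - a) * t

# 1D version: a scanline from a vertex with q=1 (long edge) to a vertex
# with q=0.5 (the lat-lon edge nearer the pole, half as long).
u0, q0 = 0.0, 1.0
u1, q1 = 1.0, 0.5

def naive(t):
    # Plain linear interpolation of u, which is what you get if you just
    # assign (u, v) per vertex: uniform in screen space, wrong for a trapezoid.
    return lerp(u0, u1, t)

def homogeneous(t):
    # Interpolate (u*q) and (q) separately, divide at the end.
    num = lerp(u0 * q0, u1 * q1, t)
    den = lerp(q0, q1, t)
    return num / den

print(naive(0.5))        # 0.5
print(homogeneous(0.5))  # 0.3333... -- the projective mapping samples
                         # u = 1/3 at the screen-space midpoint
```

The endpoints still map to u = 0 and u = 1; only the distribution in between changes, which is exactly what removes the stretched-half/shrunk-half asymmetry described above.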
(This fix was known as Krakensbane: it solved a bug known as the Deep-Space Kraken. Before it was implemented, floating-point physics inaccuracies would essentially tear your ship apart once you got much farther out than the moon or so.)
They've also recently fixed issues with single precision calculation in KSP1 and used a double-precision QuaternionD.LookRotation in the maneuver nodes to keep interplanetary trajectories from hopping around a lot.
[ oh, it also uses left-handed coordinates, which is terrible: it means (0,1,0) is the north pole and the left-handed cross product gets used, so dual vectors like angular momentum point south for east-going orbits -- except the Orbit class uses normal right-handed vectors, and when you forget to .xzy swizzle a vector for one reason or another you can wind up debugging an issue all day long ]
In my custom engine I did the world to camera transform for each object on the CPU in double precision, essentially making the camera the origin for all subsequent single precision computations on the GPU. That works for small objects and even large objects that are split into small parts, like a planet split into LOD terrain tiles. But it didn't work for orbit lines because they are a single object at planet scale that you can zoom in on to see at human scale (I didn't have an adaptive tessellation system like the one in the article).
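The camera-relative ("floating origin") idea can be sketched in a few lines — this is my own illustration with made-up names, not code from that engine. The key is doing the large-magnitude subtraction in float64 before anything is cast to float32:

```python
import numpy as np

def world_to_camera_f32(object_pos_world, camera_pos_world):
    # Do the large-magnitude subtraction on the CPU in float64...
    rel = np.asarray(object_pos_world, dtype=np.float64) - \
          np.asarray(camera_pos_world, dtype=np.float64)
    # ...then hand the small camera-relative result to the GPU as float32.
    return rel.astype(np.float32)

# Neptune-ish distances: ~4.5e12 m from the origin, object 2 m from camera.
cam = [4.5e12, 0.0, 0.0]
obj = [4.5e12 + 2.0, 0.0, 0.0]

good = world_to_camera_f32(obj, cam)            # subtract in double, then cast
bad = np.float32(obj[0]) - np.float32(cam[0])   # cast first, then subtract

print(good[0])  # 2.0 exactly
print(bad)      # 0.0 -- a float32 ulp at 4.5e12 is ~524288 m, so the
                # 2 m offset vanished before the subtraction even happened
```

Doing it in the wrong order collapses both positions onto the same float32 value, which is exactly the jitter you see when world coordinates are fed to the GPU directly.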
It also wouldn't have worked for galaxy scale, where even double precision wouldn't be enough. I don't know exactly what Celestia and similar "entire universe" apps do. Emulated quad precision floating point?
Edit: I just realized that you are talking about precision issues with the physics engine, while I'm talking about precision issues in the rendering engine. Related but slightly different. Physics engines aren't constrained by GPUs and can use double precision throughout. But they often have stability issues even in normal circumstances so I can certainly imagine that solar system scales would be a problem even in double precision.
Tessellated sphere looks like this: https://imgur.com/l3GmWq3
We use fluids as a way to create 3D nebulae that can be flown through. https://thefulldomeblog.com/2013/08/20/the-nebula-challenge/
Or if you constrain the fluid into a sphere, then you have a dynamic volumetric sun. https://thefulldomeblog.com/2013/07/30/customizing-a-close-u...
When we need to fly through a star field, particle sprites are an easy way to quickly render thousands of stars. https://thefulldomeblog.com/2013/07/03/creating-a-star-field...
Background stars are achieved by point-constraining a poly sphere to the camera. Having a poly sphere allows for easy manipulation to create realistic diurnal motion. https://thefulldomeblog.com/2013/11/13/background-stars-v2/
Flying through a galaxy field can be achieved with loads of galaxy images mapped to poly planes. For galaxies that are seen edge on, we sometimes add more detail by emitting fluid from the image colors. https://thefulldomeblog.com/2013/07/16/flying-through-a-gala...
Simulating the bands of Jupiter is tricky but I've done some experiments with 2D fluids. https://thefulldomeblog.com/2014/01/30/jupiter-bands-simulat...
And of course since the visuals are rendered for a planetarium dome, we gotta render using a fisheye camera. These days all render engines support fisheye, but 10 years ago it was a different story. https://thefulldomeblog.com/2019/09/07/exploring-render-engi... https://thefulldomeblog.com/2013/06/28/fisheye-lens-shader-o... https://thefulldomeblog.com/2013/07/23/stitching-hemicube-re...
Nice! I've always wanted to do some fluid dynamics on the surface of a sphere, but the math is too hard for me. I found a video on YouTube where someone did some interesting things a few years ago: https://www.youtube.com/watch?v=Lzagndcx8go&t=1s but there's very little information about it. Then there was what was done for the film "2010: The Year We Make Contact": http://2010odysseyarchive.blogspot.com/2014/12/
I've had to resort to simpler means myself, which means faking it. I use OpenSimplex noise on the surface of a sphere. From the noise I can find the gradient of the field tangent to the surface, then rotate that vector 90 degrees about an axis passing through the center of the sphere -- which amounts to a kind of spherical curl of the noise field -- and that gives me a non-divergent velocity field. Because incompressible fluid flows are also non-divergent, there's a strong but superficial resemblance: it looks like fluid flow, even though it is just an arbitrary process. Into this field I dump a bunch of colored particles and let them flow around, painting alpha-blended, slowly fading trails behind them onto the surface of a cube, to be used later as textures of a cubemapped sphere.
For the bands, I superimpose a simple velocity field of counter-rotating bands on top of this curl-noise velocity field. Something like: horizontal_velocity += K * sin(5 * latitude)
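The construction above can be sketched numerically. This is my own illustration, using a smooth analytic scalar field as a stand-in for OpenSimplex noise; the essential step is n × ∇f (the tangent gradient rotated 90 degrees about the radial axis), plus the band term:

```python
import numpy as np

def noise(p):
    # Stand-in for 3D OpenSimplex noise: any smooth scalar field works
    # for demonstrating the construction.
    x, y, z = p
    return np.sin(3 * x) * np.cos(2 * y) + np.sin(5 * z)

def gradient(p, h=1e-5):
    # Central-difference gradient of the scalar field.
    g = np.zeros(3)
    for i in range(3):
        d = np.zeros(3)
        d[i] = h
        g[i] = (noise(p + d) - noise(p - d)) / (2 * h)
    return g

def curl_velocity(p):
    n = p / np.linalg.norm(p)        # unit radial (surface normal) direction
    g = gradient(p)
    g_tan = g - n * np.dot(g, n)     # project gradient onto the tangent plane
    return np.cross(n, g_tan)        # rotate 90 degrees about the radial axis

def banded_velocity(p, K=0.5):
    # Superimpose counter-rotating bands: K * sin(5 * latitude), eastward.
    n = p / np.linalg.norm(p)
    lat = np.arcsin(n[2])                  # latitude, with z as the pole axis
    east = np.cross([0.0, 0.0, 1.0], n)    # eastward direction (undefined at
    east /= np.linalg.norm(east)           # the poles -- avoid them here)
    return curl_velocity(p) + K * np.sin(5 * lat) * east

# The field is tangent to the sphere everywhere, so advected particles
# stay on the surface:
p = np.array([0.6, 0.8, 0.0])
print(np.dot(banded_velocity(p), p))   # ~0
```

Because the velocity comes from rotating a gradient, its surface divergence vanishes identically, which is the property that makes it look like incompressible flow.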
Results looks like this: https://duckduckgo.com/?q=gaseous-giganticus&t=h_&iax=images...
The idea for using the curl of a noise field to mimic fluid dynamics is from a paper by Robert Bridson, et al.: https://www.cs.ubc.ca/~rbridson/docs/bridson-siggraph2007-cu...
This program is open source, it's here: https://github.com/smcameron/gaseous-giganticus
Are there GPUs without FP64 functionality at all? Or are you just referring to most consumer GPUs being built for FP32 performance over FP64?
e.g. GTX 1080:
FP16 (half) performance: 138.6 GFLOPS (1:64)
FP32 (float) performance: 8.873 TFLOPS
FP64 (double) performance: 277.3 GFLOPS (1:32)
e.g. RTX 3090:
FP16 (half) performance: 35.58 TFLOPS (1:1)
FP32 (float) performance: 35.58 TFLOPS
FP64 (double) performance: 556.0 GFLOPS (1:64)
Generally only 'Tesla'-class cards targeted at supercomputers have a 1:2 ratio (e.g. V100, A100, Titan V). Note, I believe the Titan V is the only Titan-series GPU with good double performance, as the Volta architecture never made it to GeForce GPUs.
https://www.techpowerup.com/gpu-specs/geforce-gtx-1080.c2839
https://www.techpowerup.com/gpu-specs/geforce-rtx-3090.c3622
https://www.techpowerup.com/gpu-specs/tesla-v100-sxm3-32-gb....
Some napkin/WolframAlpha math:
if you wanted to use simple x,y,z coordinates,
with the sun at the center
and be able to represent locations at 30 AU (Neptune)
with an accuracy of 1mm, e.g. 30AU vs 30.000...0001AU
you'd need ~16 significant decimal digits of precision
which is right at the limit of a double (FP64): its 53-bit mantissa carries ~15.95 decimal digits
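Redoing that napkin math as a quick script (a sketch; the AU constant is the standard IAU value):

```python
import math

AU_m = 1.495978707e11      # meters per astronomical unit (IAU 2012 value)
r = 30 * AU_m              # Neptune-ish distance from the Sun, in meters
ratio = r / 1e-3           # that distance counted in 1 mm steps
digits = math.log10(ratio)
print(digits)              # ~15.65 significant decimal digits needed
# A double's 53-bit mantissa carries about 15.95 decimal digits,
# so 1 mm at 30 AU just barely fits:
print(53 * math.log10(2))  # ~15.95
```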
of course there are better ways to do this,
as in the surrounding smarter comments

Could you do double-single computations on a GPU? (By that I mean something like double-double arithmetic, only with two singles instead of two doubles.)
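For reference, double-single addition can be sketched on the CPU with NumPy's float32 — this is my own illustration of the classic error-free-transformation approach (Knuth's TwoSum), not any particular GPU library:

```python
import numpy as np

f32 = np.float32

def two_sum(a, b):
    # Error-free transformation: returns (s, e) with s + e == a + b exactly,
    # where s is the rounded float32 sum and e is the rounding error.
    s = f32(a + b)
    v = f32(s - a)
    e = f32(f32(a - f32(s - v)) + f32(b - v))
    return s, e

def ds_add(x, y):
    # x and y are (hi, lo) pairs of float32; the pair carries roughly
    # twice the mantissa bits of a single float32.
    s, e = two_sum(x[0], y[0])
    e = f32(e + f32(x[1] + y[1]))
    return two_sum(s, e)            # renormalize into a (hi, lo) pair

big = (f32(4.5e12), f32(0.0))       # ~30 AU in meters, as a float32 pair
small = (f32(2.0), f32(0.0))
hi, lo = ds_add(big, small)
# A lone float32 would lose the +2 m entirely (its ulp at 4.5e12 is
# ~524288 m); the pair preserves it in the low word:
print(lo)   # 2.0
```

On a GPU the same sequence works in a shader as long as the compiler doesn't "helpfully" re-associate the arithmetic, which is the usual practical hazard with this technique.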
It's not required for space rendering, as once everything is in camera coordinates you no longer have any visible precision issues (as long as you are using a float reverse Z buffer). You just have to be careful to make sure your world to camera transforms are done on the CPU, and you break large objects like planets into small LOD chunks with their own local coordinate systems, which you need to do anyway.
Icosahedron doesn't work well if you have textures. The triangle topology near the poles leads to bad texturing.
Icosahedron may work better if you have a fully procedural pipeline and don't need to worry about square textures.
Space combat physics is still done very well.
"Orbit Type Diagrams"[3][4] show the fractal-like complexity of three or n-body problems[5][6]
[0] https://github.com/mockingbirdnest/Principia
[1] https://www.youtube.com/watch?v=l3PCCJZzVvg
[3] https://www.semanticscholar.org/paper/Crash-test-for-the-res...
[4] https://www.semanticscholar.org/paper/Crash-test-for-the-Cop...
I mean, even Interstellar, with a Nobel laureate on board, sometimes forgoes scientific accuracy for nicer pictures.
There is also a game to play with accuracy, viewer expectations, and attention. For example, most people will think that the best way to land from orbit is to point the ship towards the ground and fire the thrusters; obvious, right? If the ship points 90 degrees away, people will ask themselves why. If orbital mechanics is central to your movie, that's good, but you may have some explaining to do. If you are in the middle of an epic space battle, it is not the time for a physics lesson, so go for the obvious (and wrong) and let the viewer focus on the action.
In the real world, a spacecraft or other object in orbit around Earth is also being constantly influenced by other celestial bodies, especially the sun and moon. Over short timescales this causes the spacecraft's orbital parameters to slowly drift; over longer timescales, it means the long-term position and fate of an object is chaotic and unpredictable. The behavior near the "boundary" between two spheres of influence is just a situation where these perturbations are more noticeable.
KSP only implements two-body physics, so a spacecraft is only affected by the gravity of one celestial body at any given time. This allows you to put something in orbit and know that it will stay there without you needing to constantly check on it and perform stationkeeping.
It's also the key simplification that makes "time warp" possible, since two-body orbits have closed-form solutions. To implement time warp with many-body physics, you would need to either keep the integration step size the same and drastically increase the amount of computation, or increase the step size and suffer from extreme inaccuracy, causing objects to crash or fly off into space.
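The closed-form property is what makes time warp cheap: advancing an orbit by a year costs the same as advancing it by a second, because it's one Kepler-equation solve rather than thousands of integration steps. A toy sketch for an elliptic orbit (my own illustration, not KSP's actual code):

```python
import math

def propagate_true_anomaly(M0, e, n, dt, iters=20):
    """Advance an elliptic orbit: mean anomaly M0 [rad], eccentricity e,
    mean motion n [rad/s], elapsed time dt [s]. Returns the true anomaly."""
    M = (M0 + n * dt) % (2 * math.pi)   # mean anomaly grows linearly in time
    E = M                               # solve Kepler's equation M = E - e sin E
    for _ in range(iters):              # Newton iteration
        E -= (E - e * math.sin(E) - M) / (1 - e * math.cos(E))
    # Convert eccentric anomaly E to true anomaly.
    return 2 * math.atan2(math.sqrt(1 + e) * math.sin(E / 2),
                          math.sqrt(1 - e) * math.cos(E / 2))

# Jump half a year ahead in a single call, at the same cost as one second:
n = 2 * math.pi / (365.25 * 86400)      # Earth-like mean motion
nu = propagate_true_anomaly(0.0, 0.0167, n, 365.25 * 86400 / 2)
print(nu)  # ~pi: half a period later, on the opposite side of the orbit
```

With n-body gravity there is no such shortcut; you have to integrate, and the step size directly trades accuracy against computation, which is exactly the dilemma described above.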
It makes things nicer at extremely high time warps, but it's not necessary. It's not like you need to update orbits nearly as often as part physics. The max time warp is 100000x, and at that speed if you updated orbits every 10 game seconds that would only be 400 calculations per tick, per craft. So without that simplification you might need a smaller cap on satellite swarms, or a max speed of 10000x, but time warp would still be well inside the realm of "possible".
Edit: You could probably get processor use really low by using an exact curve for the most influential object and a very slowly updated offset for other influences.
The non-continuous flip lets them keep orbits exclusively centered around the you're-most-likely-orbiting-this thing, which makes them all look "normal" and the same. Though I would like to be able to see either option in the stock game - they both have their uses.
For a more immersive experience, install the RasterPropMonitor mod to get interactable IVA displays. Add a compatible camera mod and Docking Port Alignment Indicator and you can even do a multi-craft Apollo style mission entirely from the cockpit.
[1] https://apollo11space.com/apollo-11-windows/
[2] https://www.hq.nasa.gov/alsj/coas.htm
[3] (yes, the ISS has an optical viewfinder called VShTV - it's installed in the Zarya module. It was meant for emergencies and has never been used to reorient the module manually, AFAIK)
I was yelling at the screen that it wouldn't work, but somehow he made it there, though with the same delta-v he could have gone to Eve.
Tessellation shaders are useful for processing polygonal geometry with many thousands of polygons, but they have a fairly constrained programming model. And as you can see from the example images, rendering a high-quality orbit path only requires a few dozen vertices. The performance benefit from moving that computation into a tessellation shader is likely to be insignificant compared to the additional complexity and overhead.