Features:
- raycaster engine running at 25 - 30 FPS
- animated textures
- lighting system
- destroyable walls
- automap
- 3 enemy types
- final boss
game is fully playable in Altirra emulator
video: https://www.youtube.com/watch?v=lRd3MucaRoU
homepage: https://atari8.dev/final_assault
discussion: https://atariage.com/forums/topic/326709-final-assault-new-g...
simply amazing what they did. could you imagine if this came out in the 80s?!?!
P.S. There was a game with a similar FPS view in those days. It was a maze game with polygon graphics but no textures; I forget the name. No shooting though!
http://archive.6502.org/publications/dr_dobbs_journal_select...
6502 cannot multiply or divide in hardware (division is a costly operation even on modern CPUs), so I would add or subtract logarithms instead. A 12-bit-precision logarithm table requires just 8 KB of RAM (and you can get 13-bit precision with interpolation).
The disadvantage of this approach is that every conversion from linear to logarithmic form or back introduces up to roughly 0.3% error (for a 12-bit logarithm). Multiplying or dividing several numbers in a row is fine, but if you have to alternate multiplications with additions you will accumulate error, so I would look for formulas that avoid this. For a game, though, a little error in the calculations is not noticeable.
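A rough sketch of that table-based multiply, in Python for clarity (a real 6502 version would use byte tables and indexed addressing; the table sizes and scale factor here are illustrative, not the exact layout described above):

```python
import math

# Build a 12-bit fixed-point log2 table for 8-bit inputs (1..255).
# Entries are log2(x) scaled by 256, so the largest value is ~2046.
LOG_SCALE = 256
log_tab = [0] * 256
for x in range(1, 256):
    log_tab[x] = round(math.log2(x) * LOG_SCALE)

# Inverse table: 2**(v/256) for every possible sum of two log entries.
exp_tab = [round(2 ** (v / LOG_SCALE)) for v in range(16 * LOG_SCALE)]

def mul_approx(a, b):
    """Multiply two bytes by adding their logs, then exponentiating."""
    if a == 0 or b == 0:
        return 0  # log of zero is undefined, so special-case it
    return exp_tab[log_tab[a] + log_tab[b]]

# Relative error stays in the fraction-of-a-percent range mentioned above.
print(mul_approx(100, 200), "vs exact", 100 * 200)
```

Division works the same way with a subtraction instead of an addition.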
Also, one might think that the most time-consuming part of a pseudo-3D game is the math. I doubt that. Most CPU cycles are usually spent in rasterisation and applying textures. It is easy to calculate the positions of a triangle's three vertices, but it takes a lot of time to draw it line by line, pixel by pixel, and if you want textures that time can be multiplied 5x-10x.
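A back-of-envelope version of that claim (every number below is a rough guess, just to show why the fill loop dominates the vertex math):

```python
# Guessed per-operation costs; the point is the ratio, not the values.
ops_per_vertex = 20        # transform + project one vertex
avg_pixels_per_tri = 500   # screen pixels a typical triangle covers
ops_flat_pixel = 2         # plot one untextured pixel
ops_tex_pixel = 12         # + texcoord stepping, texel fetch, palette

setup = 3 * ops_per_vertex                    # per-triangle vertex math
flat_fill = avg_pixels_per_tri * ops_flat_pixel
tex_fill = avg_pixels_per_tri * ops_tex_pixel

# The fill loops dwarf the setup, and texturing multiplies the fill cost.
print(setup, flat_fill, tex_fill)
```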
- mostly 8 colors
It's impressive, but unplayable.
You could completely revolutionise computer graphics on that era of hardware, with a view to increasing vectorisation, and probably steer it strongly towards ray tracing instead of rasterisation (even skipping over the local minimum of k-d tree methods and the introduction of the Surface Area Heuristic, and settling directly on modern BVH building and traversal).
What's the advantage over BSP/kD-trees/octrees?
And what do you mean by rasterization - we still have to deal with pixels in the end, so it has to happen somewhere? (..I'd love to play with a color vector monitor though!).
With BVH, the partitioning is fundamentally over lists of objects rather than space; if you split by space, you can/will have objects on both sides of the splitting plane, leading to duplicate references.
Doing it by lists means there are no duplicate references; however, the combined bounding volumes can overlap, which is to be minimised according to the Surface Area Heuristic cost. It winds up being something like a quicksort, although for the highest-quality acceleration structures you also want to do spatial clipping... this is an extremely deep field, and several people have devoted a considerable part of their professional careers to it, for example the amazing Intel Embree guys :)
It also happens to work out best for GPU traversal algorithms, which was investigated by software simulation quite a few years ago by the now-legendary Finnish Nvidia team, and together with improvements on parallel BVH building methods and further refinements is basically what today's RTX technology is. (As far as I'm reading from the literature over the years.)
Here's a fundamental paper to get started: https://research.nvidia.com/publication/understanding-effici... (Note that these are the same Finnish geniuses behind so many things... Umbra PVS, modern alias-free GAN methods, stochastic sampling techniques, ...)
At a high level, you could think about it as where in your nested loops you put the loop over geometry (say, triangles). A basic rasterizer loops over triangles first, and the inner loop is over pixels. A basic ray tracer loops over pixels, and the inner loop is over triangles (with the BVH acting as a loop accelerator). Just swapping the order of the two loops has significant implications.
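That loop-order point can be made concrete with a toy sketch (mine, not from any real renderer; a "triangle" here is just the set of pixels it covers plus a depth):

```python
# Toy scene: two "triangles" described by their covered pixels and depth.
W, H = 4, 4
triangles = [
    {"pixels": {(0, 0), (1, 0), (1, 1)}, "depth": 2.0, "color": "A"},
    {"pixels": {(1, 1), (2, 2)},         "depth": 1.0, "color": "B"},
]

def rasterize():
    # Outer loop over geometry, inner loop over the pixels it covers,
    # with a z-buffer resolving visibility.
    zbuf, frame = {}, {}
    for tri in triangles:
        for px in tri["pixels"]:
            if tri["depth"] < zbuf.get(px, float("inf")):
                zbuf[px] = tri["depth"]
                frame[px] = tri["color"]
    return frame

def raytrace():
    # Outer loop over pixels, inner loop over geometry; a BVH would
    # prune this inner loop, here we brute-force every triangle.
    frame = {}
    for y in range(H):
        for x in range(W):
            hit = min((t for t in triangles if (x, y) in t["pixels"]),
                      key=lambda t: t["depth"], default=None)
            if hit:
                frame[(x, y)] = hit["color"]
    return frame

print(rasterize() == raytrace())  # same image, opposite loop nesting
```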
Compared to octrees, BVHs deal well with data that's unevenly distributed. At each level you split along the axis where you have the most extent. Finding the pivot is the interesting part. When I recently implemented a BVH from scratch, I ended up using Hoare partitioning and median-of-three and it worked really well. The resulting structure is well balanced, splitting the population of bodies roughly in half at each level, and that's not even the state of the art, that's just something my dumb ass coded in an afternoon.
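A minimal sketch of that kind of median-split build (Python, over a point cloud; it sorts rather than doing the Hoare partition the comment describes, and real code would store AABBs/triangles and likely use SAH):

```python
import random

def build_bvh(points, leaf_size=2):
    """Recursively split points at the median of the longest axis."""
    if len(points) <= leaf_size:
        return {"leaf": points}
    # Pick the axis with the greatest extent.
    lo = [min(p[a] for p in points) for a in range(3)]
    hi = [max(p[a] for p in points) for a in range(3)]
    axis = max(range(3), key=lambda a: hi[a] - lo[a])
    # Median split: a full sort here, where Hoare partitioning would do.
    pts = sorted(points, key=lambda p: p[axis])
    mid = len(pts) // 2
    return {"axis": axis,
            "left": build_bvh(pts[:mid], leaf_size),
            "right": build_bvh(pts[mid:], leaf_size)}

def depth(node):
    if "leaf" in node:
        return 1
    return 1 + max(depth(node["left"]), depth(node["right"]))

random.seed(1)
cloud = [(random.random(), random.random(), random.random())
         for _ in range(64)]
tree = build_bvh(cloud)
# Halving at every level keeps the tree balanced: 64 points -> depth 6.
print(depth(tree))
```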
Also I wonder how you can achieve clock frequency like 300 MHz without a cache. Shouldn't CPU stumble on fetching every command?
Both ‘ray casting’ and ‘ray tracing’ are overloaded terms, the distinction isn’t as clear as you suggest. You’re talking about 2d ray casting, but 3d ray casting is common, and means to many people the same thing as ‘ray tracing’. Ray casting “is essentially the same as ray tracing for computer graphics”. https://en.wikipedia.org/wiki/Ray_casting
There’s also Whitted-style recursive ray tracing, and path-tracing-style ray tracing, but ray tracing in its most basic form means testing visibility between two points, which is what ray casting also means from time to time.
I think the biggest difference between ray-type algorithms is everything vs ray-marching, because regardless of recursion, and strategies towards lighting, texture sampling and physical realism, with ray-marching a single ray is not really a ray at all but lots of little line segments, and you don't usually bother finding explicit intersections, which gets really complex and expensive... that's the whole point; instead you find proximity or depth, which means you can render implicit surfaces like fractals.
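The "lots of little line segments" idea in its simplest form is sphere tracing over a signed distance function; a minimal sketch (my own toy example, with a hard-coded sphere):

```python
import math

def sphere_sdf(p, center=(0.0, 0.0, 5.0), radius=1.0):
    # Signed distance to a sphere: negative inside, positive outside.
    return math.dist(p, center) - radius

def march(origin, direction, sdf, max_steps=100, eps=1e-4, max_t=100.0):
    """Step along the ray by the SDF value each time ("sphere tracing").
    No explicit ray/surface intersection formula: we just walk until
    the distance to the surface is tiny, or give up."""
    t = 0.0
    for _ in range(max_steps):
        p = tuple(o + t * d for o, d in zip(origin, direction))
        d = sdf(p)
        if d < eps:
            return t        # close enough to the surface: a hit
        t += d              # safe step: can't overshoot the surface
        if t > max_t:
            break
    return None             # miss

hit = march((0, 0, 0), (0, 0, 1), sphere_sdf)   # should hit near t = 4
miss = march((0, 0, 0), (0, 1, 0), sphere_sdf)  # ray misses the sphere
```

Swap `sphere_sdf` for a fractal distance estimator and the same loop renders it, which is exactly why ray-marching handles implicit surfaces so naturally.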
At their base clock speeds the Celerons were middling chips. But since they readily overclocked you could get them up to 450-466 MHz. They wouldn't be equivalent to a Pentium II at the same speed (because of the missing L2 cache) but they punched above their weight for the price.
Pretty interesting but it's a little hard to make out what's going on sometimes.
This atari game might also look a lot better on a crt or something.
https://en.wikipedia.org/wiki/Atari_8-bit_family
Jay Miner was one of the main developers who later went on to be the "father of the Amiga"
Good intro to raycasting for those interested: https://lodev.org/cgtutor/raycasting.html
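The core of the technique that tutorial teaches is a grid DDA walk; a compressed Python sketch of the idea (this is my paraphrase, not Lode's code, with a tiny hard-coded map):

```python
import math

# 1 = wall, 0 = empty; the "player" stands inside the open area.
MAP = [
    [1, 1, 1, 1, 1],
    [1, 0, 0, 0, 1],
    [1, 0, 1, 0, 1],
    [1, 0, 0, 0, 1],
    [1, 1, 1, 1, 1],
]

def cast(px, py, angle):
    """DDA: step cell boundary to cell boundary along the ray until a
    wall cell is hit, returning the distance to that wall."""
    dx, dy = math.cos(angle), math.sin(angle)
    mx, my = int(px), int(py)
    # Ray distance needed to cross one full cell in x or in y.
    ddx = abs(1 / dx) if dx else float("inf")
    ddy = abs(1 / dy) if dy else float("inf")
    sx, side_x = (1, (mx + 1 - px) * ddx) if dx > 0 else (-1, (px - mx) * ddx)
    sy, side_y = (1, (my + 1 - py) * ddy) if dy > 0 else (-1, (py - my) * ddy)
    while True:
        # Advance whichever boundary crossing comes first.
        if side_x < side_y:
            side_x += ddx; mx += sx; dist = side_x - ddx
        else:
            side_y += ddy; my += sy; dist = side_y - ddy
        if MAP[my][mx]:
            return dist  # the tutorial also tracks which side was hit

# Looking straight +x from (1.5, 1.5): wall column at x = 4, so 2.5 away.
print(cast(1.5, 1.5, 0.0))
```

One such cast per screen column, with wall height inversely proportional to distance, is essentially the whole Wolfenstein-style renderer.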
https://moegamer.net/2019/03/05/atari-a-to-z-capture-the-fla...
Like him otherwise, but nope, he didn't invent it...
All those LucasArts games were stunning for the time and Rescue At Fractalus has a genuinely terrifying experience within it not replicated IMO until the first time you hear "Anytime..." in Alien vs Predator...
I also had my 400 upgraded to 48K with a mechanical keyboard.
Also, I don't recall any of the 400/800 third-party upgrades being compatible with the later XL/XE models - they used a different bank-switching mechanism.
It might be "useless" in a utilitarian sense, but I think of it as an incredible art form.
I'll download the game after work and see if I can run it under an emulator, but the graphics and music alone seems wonderful.
I used this tutorial in the 2000s to write a raycaster myself in C using Allegro. It's wonderful that these tutorials are still useful so many years after they were written.