Related: A coworker of mine wrote a 3D accelerator on an FPGA that outperforms some of the early Voodoo-era 3D accelerators. It could do higher triangle counts at a higher resolution, with multisample antialiasing. The demos he had looked quite spectacular.
He is a software engineer and he gave us a very nice demo and a very enlightening presentation on how writing software differs from writing FPGA code.
Unfortunately, since we work for a GPU company, it's unlikely that he'll ever release the work to the public. There's no way the company lawyers would allow that, and releasing it without consulting them first could cost him his job :( That's despite the fact that he did it on his own time and didn't use any company IP.
He wouldn't gain any "reputation points" from an anonymous release, since his name wouldn't be linked to the project, but at least the code wouldn't end up forgotten in a cellar where nobody cares.
I've tried to encourage him to convince the company lawyers to give him the OK to release it, but he's not too keen. Like most engineers, he is allergic to lawyers, and he isn't too interested in sharing the code anyway. It was "just" a learning experience for him.
Output was a bit-banged VGA signal into a cheap-ass monitor.
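For anyone curious what "bit-banged VGA" implies, the usual target is the standard 640x480@60 mode. Here's a quick Python sketch of the timing budget the FPGA logic has to hit; these are the generic industry timing numbers, not details of his design:

```python
# Standard 640x480@60 VGA timing: what a bit-banged FPGA output has to generate.
# Horizontal counts are in pixel clocks, vertical counts in scanlines.
H = {"visible": 640, "front_porch": 16, "sync": 96, "back_porch": 48}
V = {"visible": 480, "front_porch": 10, "sync": 2, "back_porch": 33}
PIXEL_CLOCK_HZ = 25_175_000  # the standard 25.175 MHz dot clock

h_total = sum(H.values())                          # 800 pixel clocks per scanline
v_total = sum(V.values())                          # 525 scanlines per frame
refresh_hz = PIXEL_CLOCK_HZ / (h_total * v_total)  # works out to ~59.94 Hz

print(h_total, v_total, round(refresh_hz, 2))
```

The sync pulses and porches are why the logic can't simply shift pixels out at the dot clock; it has to count through the blanking intervals too.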
Like the first Voodoo-era 3D accelerators, it was a triangle rasterizer with Z-buffering and perspective-correct texture mapping. I.e., no 3D transformations were done on the chip; they were done on the CPU. The limiting factor in the demos was actually the ARM CPU (synthesized on the FPGA), which couldn't push enough triangles to keep the GPU busy.
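For anyone unfamiliar with the "perspective correct" part: you can't linearly interpolate texture coordinates in screen space, because perspective projection is non-linear in depth. The standard trick is to interpolate attr/w and 1/w linearly, then divide per pixel. A minimal Python sketch of that (my own illustration, not his code):

```python
# Perspective-correct interpolation of a vertex attribute (e.g. a texture
# coordinate u) between two projected vertices, each given as (attr, w).
# Interpolating attr directly in screen space gives the classic "affine
# texture warp" artifact; interpolating attr/w and 1/w is the correct way.

def perspective_interp(a, b, t):
    """Return the attribute value at screen-space parameter t in [0, 1]."""
    (ua, wa), (ub, wb) = a, b
    inv_w    = (1 - t) / wa      + t / wb       # 1/w interpolates linearly
    u_over_w = (1 - t) * ua / wa + t * ub / wb  # u/w interpolates linearly
    return u_over_w / inv_w                     # per-pixel divide recovers u

# Midpoint of an edge from (u=0, w=1) to (u=1, w=3): the true answer is 0.25,
# not the 0.5 that naive screen-space interpolation would give.
print(perspective_interp((0.0, 1.0), (1.0, 3.0), 0.5))
```

That per-pixel divide is exactly the kind of operation that made perspective-correct mapping expensive in the Voodoo era.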
It was a tile-based rasterizer (in two stages: coarse and fine) rather than a scanline rasterizer (like the software rasterizers of the Quake era).
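The coarse/fine split works roughly like this: the coarse stage bins each triangle into the screen tiles it might touch, and the fine stage only tests pixels inside those tiles. A rough Python sketch of the idea; the 16-pixel tile size, the bounding-box binning, and the edge-function test are my assumptions for illustration, not details of his design:

```python
# Two-stage tile rasterization sketch: coarse binning by bounding box,
# then a fine per-pixel edge-function test within each binned tile.

def coarse_bin(tri, tile=16):
    """Coarse stage: tiles overlapped by the triangle's bounding box."""
    xs = [v[0] for v in tri]
    ys = [v[1] for v in tri]
    x0, x1 = int(min(xs)) // tile, int(max(xs)) // tile
    y0, y1 = int(min(ys)) // tile, int(max(ys)) // tile
    return [(tx, ty) for ty in range(y0, y1 + 1) for tx in range(x0, x1 + 1)]

def edge(a, b, p):
    """Signed area test: which side of edge a->b the point p lies on."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def fine_raster(tri, tx, ty, tile=16):
    """Fine stage: test every pixel center in one tile against all 3 edges."""
    a, b, c = tri
    covered = []
    for y in range(ty * tile, (ty + 1) * tile):
        for x in range(tx * tile, (tx + 1) * tile):
            p = (x + 0.5, y + 0.5)  # sample at the pixel center
            if edge(a, b, p) >= 0 and edge(b, c, p) >= 0 and edge(c, a, p) >= 0:
                covered.append((x, y))
    return covered
```

In hardware the appeal is that the fine stage is trivially parallel per tile, and tiles the coarse stage rejects never touch the pixel pipeline at all.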
This is pretty much all I can remember about it.
The DE0-Nano[1] is the board I used for a project[2] very similar to this one. It includes all the parts you'd need besides a video output (you can rig up a 15-bit RGB VGA output pretty easily[3]).
In any case, I think this sort of knowledge is very, very helpful and broadly applicable for software developers. Great stuff.
[1] http://www.terasic.com.tw/cgi-bin/page/archive.pl?No=593
[2] https://github.com/SkylerLipthay/project-costanza
[3] http://www.lucidscience.com/pro-vga%20video%20generator-2.as...
This design seems to be framebuffer-based, a straightforward hardware acceleration of the method you would use for software 2D rendering. If you aren't aware already, you should look into how the classic 2D graphics chips on consoles like the NES worked. They evaluated the background and sprite rendering logic as the video signal was being scanned out, causing some hard limitations on e.g. the number of sprites per line, but generating the signal with less latency and without requiring an expensive (for the time) framebuffer.
http://wiki.nesdev.com/w/index.php/PPU_rendering
Obviously the approach falls apart if you want to introduce scaled/rotated sprites or go 3D.
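The scanline-time sprite evaluation that produces that hard per-line limit is easy to model. A toy Python version, assuming 8-pixel-tall sprites and the NES's 8-sprites-per-scanline limit, with the OAM entry layout simplified for illustration:

```python
# Toy model of NES-style scanline sprite evaluation: the PPU walks OAM in
# index order as each line is rendered and keeps only the first 8 sprites
# that cover the line, which is where the per-line sprite limit comes from.

SPRITE_HEIGHT = 8   # assuming 8x8 sprites (the NES also has an 8x16 mode)
MAX_PER_LINE = 8    # the hardware limit: 8 sprites per scanline

def sprites_on_line(oam, scanline):
    """Return (sprites selected for this line, overflow flag).

    oam is a simplified list of (y, tile, attr, x) entries.
    """
    hits = []
    overflow = False
    for entry in oam:
        y = entry[0]
        if y <= scanline < y + SPRITE_HEIGHT:
            if len(hits) < MAX_PER_LINE:
                hits.append(entry)
            else:
                overflow = True  # real hardware sets a sprite-overflow flag
                break
    return hits, overflow
```

With 10 sprites stacked on the same lines, only the first 8 in OAM get drawn on each of those lines and the overflow flag is set; that's the "hard limitation" mentioned above, and it's also why games flicker sprites by rotating their OAM order.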
I have a couple of FPGA boards sitting around. I know what my next project is going to be.
"The top two manufacturers in the FPGA market are Xilinx and Altera. Xilinx was first off the block in the FPGA industry and has the lion’s share of the market."
"Lion's share" is a bit of an exaggeration. Market share numbers for Xilinx are in the 45%-50% range and for Altera in the 40%-45% range.
http://www.eejournal.com/archives/articles/20140225-rivalry/
https://www.kickstarter.com/projects/1812459948/minispartan6...
http://webcache.googleusercontent.com/search?q=cache:andybro...