To answer what seems to be the most common question I get asked about this: I do intend to open-source the entire stack (PCB schematic/layout, all the HDL, the Windows WDDM drivers, the API runtime drivers, and Quake ported to use the API) at some point. However, there are a number of legal issues that need to be cleared up first (with respect to my job), and I need to decide on the rest of the particulars (license, etc.). This stuff is not what I do for a living, but it's tangentially related enough that I need to cover my ass.
The first commit for this project was on August 22, 2021, so I've been working on this for a bit over two and a half years. I didn't write anything up during that process, but there are a fair number of videos in my YouTube FuryGpu playlist (https://www.youtube.com/playlist?list=PL4FPA1MeZF440A9CFfMJ7...) that can give you an idea of how things progressed.
The next set of blog posts in the works concerns the PCIe interface. It'll probably be a multi-part series, starting with the PCB schematic/layout, moving through the FPGA design, and ending with the Windows drivers. No timeline on when that'll be done, though. Having written just that one post on how the Texture Units work, I've got even more respect for those who can write up technical material like that on any kind of consistent schedule.
I'll answer the remaining questions in the threads where they were asked.
Thanks for the interest!
Of course plenty of hobbies let people spend thousands (or more) so there's nothing wrong with that if you've got the money. But is it the end target for your project? Or do you have ambitions to go beyond that?
One thing to note is that while the US+ line is generally quite expensive (the higher-end parts sit in the five-figure range for a one-off purchase! No one actually buying these is paying that price, but still!), the Kria SOMs are quite cheap in comparison. They've got a reasonably powerful Zynq US+ for about $400, or $350ish for the dev boards (which do not expose some of the high-speed interfaces like PCIe). I'm starting to sound like a Xilinx shill given how many times I've restated this, but for anyone serious about getting into this kind of thing, those dev boards are an amazing deal.
[1] https://www.aliexpress.us/item/3256806069467487.html
[2] https://www.digikey.com/en/products/detail/amd/XC7K325T-1FFG...
Thank you very much!
I desperately want something as easy to plug into things as the 6502, but with jussst a little more capability - a few more registers, hardware division, that sort of thing. It's a really daunting task.
I always end up coming back to just use an MCU and be done with it, and then I hit the How To Generate Graphics problem.
Regarding graphics: initially, just output over serial. Abstract the problem away until you're ready to deal with it. If you sneak up on an Arduino and make it scream, you can turn it into a very basic VGA graphics card [1]. Even easier is an ESP32 driving VGA (which also gives you keyboard and mouse) [2].
[1] https://www.instructables.com/Arduino-Basic-PC-With-VGA-Outp...
And yeah, video output is a significant issue because of the bandwidth required for digital outputs (unless you're okay with composite or VGA output - I guess those can still be done with readily available chips?). The recent Commander X16 settled on an FPGA for this.
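The bandwidth point can be made concrete with a quick back-of-envelope in Python. The numbers below are the standard nominal video timings (totals include blanking), not figures from the comments above:

```python
# Rough pixel-clock / bandwidth estimates for common video modes.
# Timings are the standard nominal figures (totals include blanking).

def pixel_clock_hz(h_total, v_total, refresh_hz):
    """Pixel clock = total pixels per frame * refresh rate."""
    return h_total * v_total * refresh_hz

# 640x480@60 VGA: 800x525 total -> ~25.2 MHz pixel clock
vga = pixel_clock_hz(800, 525, 60)

# 1920x1080@60: 2200x1125 total -> 148.5 MHz pixel clock
hd = pixel_clock_hz(2200, 1125, 60)

# TMDS (DVI/HDMI) serializes 10 bits per 8-bit color channel, so each
# of the three data lanes runs at 10x the pixel clock.
vga_lane = vga * 10   # ~252 Mbit/s per lane
hd_lane = hd * 10     # 1.485 Gbit/s per lane

print(f"VGA pixel clock: {vga/1e6:.1f} MHz, TMDS lane: {vga_lane/1e6:.0f} Mbit/s")
print(f"1080p pixel clock: {hd/1e6:.1f} MHz, TMDS lane: {hd_lane/1e9:.3f} Gbit/s")
```

A ~25 MHz analog VGA signal is within reach of a fast microcontroller and a resistor DAC; a 1.485 Gbit/s TMDS lane is why digital output usually means dedicated serializers or an FPGA.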
I always got the impression that David sort of got railroaded by the other members of the team that wanted to keep adding features and MOAR POWAH, and didn't have a huge amount of choice because those features quickly scoped out of his own areas of knowledge.
He started posting videos again recently with some regularity after a lull. Audience is in the low hundreds of thousands. I assume fewer than 100k actually finish videos and fewer still do anything with it.
Hobby electronics seems surprisingly small in this era.
I've built stuff with microcontrollers (partially aided by techniques learned here), but that was very purpose-driven and I'm not super interested in just messing around for fun.
I’m having trouble wrapping my head around how/why you’d use YouTube to present analog electrical engineering formulas and pinout diagrams instead of using LaTeX or a diagram.
I wrote a couple of articles on how to do bit banged VGA on the RP2040 from scratch: https://gregchadwick.co.uk/blog/playing-with-the-pico-pt5/ and https://gregchadwick.co.uk/blog/playing-with-the-pico-pt6/ plus an intro to PIO https://gregchadwick.co.uk/blog/playing-with-the-pico-pt4/
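To get a feel for why bit-banging VGA pushes an MCU, here's a rough cycle-budget calculation. The 125 MHz system clock and the nominal 640x480@60 timing are my assumptions, not figures taken from the linked write-ups:

```python
# Cycle budget per pixel when bit-banging 640x480@60 VGA on an RP2040.
# Assumes the stock 125 MHz system clock and the nominal ~25.2 MHz
# pixel clock (800x525 total, including blanking).

SYS_CLK_HZ = 125_000_000
H_TOTAL, V_TOTAL, REFRESH = 800, 525, 60

pixel_clock = H_TOTAL * V_TOTAL * REFRESH      # 25_200_000 Hz
cycles_per_pixel = SYS_CLK_HZ / pixel_clock    # ~4.96

print(f"pixel clock: {pixel_clock/1e6:.1f} MHz")
print(f"CPU cycles per pixel: {cycles_per_pixel:.2f}")
# ~5 CPU cycles per pixel leaves no room for an application to run
# alongside a software pixel loop, which is why offloading the pixel
# shifting to the PIO state machines (as the articles above do) works
# so well.
```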
As I read it, it's just a fun hobby project for them first and foremost and looks like they're intending to write a whole bunch more about how they built it.
It's certainly an impressive piece of work, in particular as they've got the full stack working, a windows driver implementing a custom graphics API and then quake running on top of that. A shame they've not got some DX/GL support but I can certainly understand why they went the custom API route.
I wonder if they'll open source the design?
For the last year I've been working on a 2D-focused GPU for I/O-constrained microcontrollers (https://github.com/KallDrexx/microgpu). I've been able to use it to get user interfaces rendering on large displays from slow SPI-connected hosts, and it's been fascinating to work on.
But seeing the limitations of processor pipelines, I've thought for a while that FPGAs could make this faster. I've recently picked up some low-end FPGAs to start learning, with the aim of turning my microgpu from an ESP32-based design into an FPGA one.
I don't know if I'll ever get to this level due to kids and free-time constraints, but man, I would love to get even a hundredth of the way there.
There's no open hardware GPU to speak of. Depending on license (can't find information?), this could be the first, and a starting point for more.
There's this which is about the same kind of GPU
Lattice ECP5 (which goes up to 85k LUT or so?) and Nexus have more than decent support.
Gowin FPGAs are supported via project apicula up to 20k LUT models. Some new models go above 200k LUT so there's hope there.
chip: https://colognechip.com/programmable-logic/gatemate/ board: https://www.olimex.com/Products/FPGA/GateMate/GateMateA1-EVB...
https://github.com/schlae/graphics-gremlin is an MDA/CGA compatible adapter
https://github.com/OmarMongy/VGA is a VGA core
https://github.com/archlabo/Frix is a whole IBM PC compatible SoC, including a VGA.
I have an idea for a small embedded product which needs a lot of compute and networking, but only very modest graphical capabilities. The NXP Layerscape LX2160A [1] would be perfect, but I have to pass on it because it doesn't come with an embedded GPU. I just want a small GPU!
[1]: https://www.nxp.com/products/processors-and-microcontrollers...
Performance is nowhere near a modern iGPU, because an iGPU has access to all of the system memory, caches, and power budget, and a simple M.2 device has none of that. Even low-end PCIe GPUs (single-slot, half-length/half-height) struggle to outperform better iGPUs and really only make sense when you need them for basic display functionality.
Something else to look at is the Vortex project from Georgia Tech [1]. Rather than recapitulating the fixed-function past of GPU design, I think it looks toward the future: at heart it's a highly parallel computer, based on RISC-V with some extensions to handle GPU workloads better. The boards it runs on are a few thousand dollars, so it's not exactly hobbyist-friendly, but it's certainly more accessible than closed, proprietary development. A 2.0 release landed just a few months ago.
https://www.amd.com/en/products/system-on-modules/kria/k26/k...
As mentioned in the rest of this thread, the Kria SoMs are FPGA fabric with hardened ARM cores running the show. Beyond just being what was available (for oh so cheap, the Kria devboards are like $350!), these devices also include things like hardened DisplayPort IP attached to the ARM cores allowing me to offload things like video output and audio to the firmware. A previous version of this project was running on a Zynq 7020, for which I needed to write my own HDMI stuff that, while not super complicated, takes up a fair amount of logic and also gets way more complex if it needs to be configurable.
It's a mixed chip: FPGA fabric and a traditional SoC glued together. This means you don't have a softcore MCU taking up precious FPGA resources just to do some basic management tasks.
Designing and bringing-up the FPGA board as described in the blog post is already a high bar to clear. I hope the author will at some point publish schematics and sources.
[1] https://docs.amd.com/v/u/en-US/zynq-ultrascale-plus-product-...
I see no one else has asked this question yet, so I will: How VGA-compatible is it? Would I be able to e.g. plug it into any PC with a PCIe slot, boot to DOS and play DOOM with it?
That's how it's done on AMD GPUs; that said, I have no idea what the Nvidia hardware programming model looks like.
Would be neat if someone made an FPGA GPU which had a shader pipeline honestly.
So my guess is that it would be quite challenging to implement a modern GPU in an affordable FPGA if you want more than a proof of concept.
I do not doubt that a shader core could be built, but I have reservations about the ability to run it fast enough or have as many of them as would be needed to get similar performance out of them. FuryGpu does its front-end (everything up through primitive assembly) in full fp32. Because that's just a simple fixed modelview-projection matrix transform it can be done relatively quickly, but having every single vertex/pixel able to run full fp32 shader instructions requires the ability to cover instruction latency with additional data sets - it gets complicated, fast!
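The latency-covering problem described above can be sketched with a toy model (entirely my own illustration, not FuryGpu's design): if an fp32 op takes several cycles to produce its result but a new op can issue every cycle, a single thread stalls constantly, while round-robining enough independent pixels/vertices keeps the pipeline full.

```python
# Toy model of hiding instruction latency by interleaving threads.
# Issue width is 1 instruction/cycle; each instruction's result is
# ready `latency` cycles after issue, and every instruction in a
# thread depends on the previous one (worst case).

def cycles_to_run(n_threads, instrs_per_thread, latency):
    """Total cycles for a round-robin scheduler, issue width 1."""
    # Consecutive instructions of one thread issue max(n_threads,
    # latency) cycles apart: either the round-robin gap or the
    # dependency stall, whichever is longer.
    gap = max(n_threads, latency)
    # Last instruction issues at (instrs-1)*gap + (n_threads-1) and
    # completes `latency` cycles later.
    return (instrs_per_thread - 1) * gap + (n_threads - 1) + latency

LATENCY = 4   # assumed fp32 pipeline depth, cycles
INSTRS = 100

for n in (1, 2, 4, 8):
    total = cycles_to_run(n, INSTRS, LATENCY)
    ipc = n * INSTRS / total
    print(f"{n} threads: {total} cycles, {ipc:.2f} instrs/cycle")
```

With a 4-cycle latency, one thread achieves 0.25 instructions/cycle while four threads nearly saturate the pipeline - which is exactly why a shader core needs many data sets in flight, plus the register storage to hold all of them.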
Cheaper boards are definitely possible since there are smaller parts in that family, but they need to offer support for some of them in the free version of Vivado...
It's terrible use of the hardware and the performance is far from stellar, but you can!
Not every GPU should be used to train or infer so-called AI.
Please, stop, we need some hardware to put images on the screens.
FPGAs only make long-term sense in applications that are so low-volume that it's not worth spinning an ASIC for them.
llama.cpp already supports 4-bit quantization; it unpacks the quantized weights back to bfloat16 at runtime for better accuracy. The best use case for an FPGA I have seen so far was pairing it with SK Hynix's AI GDDR, and even that could be replaced by an even cheaper inference chip specializing in multi-board communication and as many memory channels as possible.
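For intuition, here's a minimal sketch of symmetric 4-bit block quantization in plain Python. This is a heavy simplification in the spirit of llama.cpp's Q4 formats, not their actual layout (the real formats pack fixed-size blocks with compact scales and other details I'm glossing over):

```python
# Minimal symmetric 4-bit block quantization (illustrative only).
# Each block of float weights is stored as 4-bit ints in [-8, 7]
# plus a single float scale; dequantization multiplies back out.

def quantize_block(weights):
    """Map floats to 4-bit signed ints plus one scale factor."""
    scale = max(abs(w) for w in weights) / 7.0 or 1.0
    q = [max(-8, min(7, round(w / scale))) for w in weights]
    return q, scale

def dequantize_block(q, scale):
    """Unpack the 4-bit ints back to floats at runtime."""
    return [v * scale for v in q]

block = [0.10, -0.35, 0.70, -0.05]
q, scale = quantize_block(block)
print(q, scale)
print([round(v, 3) for v in dequantize_block(q, scale)])
```

The point for the hardware discussion: the stored weights are tiny (4 bits each plus a shared scale), but every multiply still happens at higher precision after unpacking, so memory bandwidth, not compute format, tends to dominate.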
I am not sure your product will be a success.
I am sure your web design skills need a good overhaul.