I'm sure Vulkan is fun and wonderful for people who really want low-level control of the graphics stack, but I found it completely miserable to use. I still haven't found a graphics API that works at the level I want and that I enjoyed using. I'd like to get more into graphics programming, since I do think it would be fun to build a game engine, but I'll admit that even getting started with the low-level Vulkan stuff is still scary to me.
I think what I want is something like how SDL does 2D graphics, but for 3D. My understanding is that for 3D in SDL you just drop into OpenGL or something, which isn't quite what I want.
Maybe WebGPU would be something I could have fun working on.
Although after writing an entire engine with it, I ended up wanting more control, more perf, and to not be limited by the lowest common denominator limits of the various backends, and just ended up switching back to a Vulkan-based engine.
However, I took a lot of learnings from the SDL GPU code, such as their approach to synchronization, which was a pattern that solved a lot of problems for me in my Vulkan engine, and made things a lot easier/nicer to work with.
I just want OpenGL, it was the perfect level of abstraction. I still use it today, both at work and for personal projects.
The reason you don’t is that OpenGL does a fair amount of bookkeeping for you at runtime, only supports a single, general-purpose queue per device, and has several other limitations that only matter when you want to max out the capabilities of the hardware.
Vulkan is miserable, but several things are improved by using a few extensions supported by almost all relevant vendors. The misery mostly pays off, but there are a couple of cases where the API asks you for a lot of detail which all major drivers then happily go ahead and ignore completely.
If I was a beginner looking to get a basic understanding of graphics and wanted to play around, I shouldn’t have to know or care what a “shader” is or what a vertex buffer and index buffer are and why you’d use them. These low level concepts are just unnecessary “learning cliffs” that are only useful to existing experts in the field.
Maybe unpopular opinion: only a relative handful of developers working on actually making game engines need the detailed control Vulkan gives you. They are willing to put up with the minutiae and boilerplate needed to work at that low level because they need it. Everyone else would be better off with OpenGL.
OpenGL still works. You can set up an old-school glBegin()-glEnd() pipeline in as few as 10 lines of code, set up a camera and vertex transform, link in GLUT for some windowing, and you have the basic triangle/strip of triangles.
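As a rough sketch of what that looks like (assuming a system GLUT such as freeglut is installed, and linking with -lGL -lglut; untested here, and left with the default identity camera so the vertices are straight NDC coordinates):

```c
/* Old-school fixed-function triangle via immediate mode. */
#include <GL/glut.h>

static void display(void) {
    glClear(GL_COLOR_BUFFER_BIT);
    glBegin(GL_TRIANGLES);                       /* glBegin()-glEnd() pipeline */
    glColor3f(1, 0, 0); glVertex2f(-0.6f, -0.5f);
    glColor3f(0, 1, 0); glVertex2f( 0.6f, -0.5f);
    glColor3f(0, 0, 1); glVertex2f( 0.0f,  0.6f);
    glEnd();
    glutSwapBuffers();
}

int main(int argc, char **argv) {
    glutInit(&argc, argv);                        /* GLUT does the windowing */
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
    glutCreateWindow("triangle");
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}
```

No shaders, no vertex buffers, no pipeline objects; that's the whole appeal for teaching.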
OpenGL is a fantastic way to introduce people to basic graphics programming. The really annoying part is textures, which can be gently abstracted over. However, at some point the abstractions will start to be either insufficient in terms of descriptive power, or inefficient, or leaky, and that's when advanced courses can go into Vulkan, CPU and then GPU-accelerated ray tracing, and more.
With that said, we decided to focus on DX12 eventually because it just made sense. I've written our platform layers targeting OpenGL, DX12, Vulkan and Metal, and once you've internalized all of these I really don't think the horribleness of the lower-level APIs is as bad as people make it out to be. They're very debuggable, very clear and well supported.
BTW: If anyone says OpenGL is "deprecated", laugh in their face.
I know it is on Apple, but let's just assume I don't care about Apple specifically.
If you make a game instead of a game engine, you can use one of the existing engines.
OpenGL was designed as a way to more or less do that and it turned complicated fast.
Sadly 1) Apple only, 2) soft deprecated.
I imagine it will still be around for a long time because Apple and a lot of large third party apps use it for simple 3D experiences. (E.g. the badges in the Apple Fitness app).
Apple wants devs to move to RealityKit, which does support non-AR 3D, but it is still pretty far from feature parity with SceneKit. Also RealityKit still has too many APIs that are either visionOS only or are available on every platform but visionOS.
Microrant: I absolutely loathe when I am told "move to new thing. Old thing is deprecated/unsupported" and the new thing is incredibly far from feature parity and usually never reaches parity, let alone exceeds it. This is not just an Apple problem.
In general, it suffered from the problem of even Apple not knowing what it was made for, and what it even is. For a 3D API, it has fewer features than OpenGL 2. For a game engine, it… also has way fewer features than the competition, which shouldn’t surprise anyone - game engines are hard, and the market leaders have been developed for _decades_. But that’s what it looks like the most - a game engine. (It even has physics.)
Customizing the rendering pipeline in SceneKit is absolutely horrible. The user is given a choice between two equally bad options: either adding SCNTechniques, which are configurable through .plists and provide no feedback on what goes wrong with their configuration (as if 3D rendering weren’t hard enough already), or using “shader modifiers” - placing chunks of Metal code into one of 4 places in SceneKit’s default shader, which end users _don’t even have the source code of_ without hacking into the debug build! Or pulling it from GitHub from people who already did that [_].
If you just need something that can display 3d data, SceneKit is still fine, but once there’s a requirement to make that look good, it’s better to throw everything away and hook up Unity instead.
[_] https://gist.github.com/warrenm/794e459e429daa8c75b5f17c0006...
I find SDL3 more fun and interesting, but it’s a ton of work to get going.
I personally have just been building off of tutorials. But notwithstanding all of the boilerplate code, the enjoyability of a code base can be vastly different.
The most fun I’ve ever had coding, and still have at times, is with WebGL. I just based it off of the Mozilla tutorial and went from there. WebGLFundamentals has good articles… but to be honest I do not love their code.
I wonder if I can get it working with F# in Linux…
Frank Luna’s D3D11 bible is probably the closest thing we’ll get to a spaced-repetition learning curriculum for 3D graphics at a level where you can do an assload with the knowledge.
No, it won’t teach you to derive things. Take Calculus I and II.
No, it won’t teach you about how light works. Take an advanced electrical engineering course on electromagnetism.
But it will teach you the nuts and bolts in an approachable way using what is an excellent graphics API, Direct3D 11. Even John Carmack approves.
From there on, all the Vulkan and D3D12 shit is just memory fences, buffers, and queue management. Absolute trash that you shouldn’t use unless you have to.
Plenty of people make minecraft-like games as their first engine. As far as voxel engines go, a minecraft clone is "hello, world."
I remember reading NeHe OpenGL tutorials about 23 years ago. I still believe it was one of the best tutorial series about anything in the way they were structured and how each tutorial built over knowledge acquired in previous ones.
Tbh, OpenGL sucks just as much as Vulkan, just in different ways. It's time to admit that Khronos is simply terrible at designing 3D APIs ;) (probably because there are too many cooks involved)
I don't like Vulkan. I keep thinking: did nobody look at this and think "there must be a better way"? But it's what we've got, and mostly it's just: learn it and write the code once.
The concepts common to both (WebGPU and Vulkan) are:
- Instances and devices
- Shaders and programs
- Pipelines
- Bind groups (in WebGPU) and descriptor sets (in Vulkan)
- GPU memory (textures, texture views, and buffers)
- Command buffers
Once I was comfortable with WebGPU, I eventually felt restrained by its limited feature set. The restrictions of WebGPU gave me the motivation to go back to Vulkan. Now, I'm learning Vulkan again, and this time, the high-level concepts are familiar to me from WebGPU.
Some limitations of WebGPU are its lack of push constants and the "pipeline explosion" problem (which Vulkan tries to solve with the pipeline library, dynamic state, and shader object extensions). Meanwhile, Vulkan requires you to manage synchronization explicitly with fences and semaphores, which was an additional learning curve for me coming from WebGPU. Vulkan also does not provide an allocator (most people use the VMA library).
SDL_GPU is another API at a similar abstraction level to WebGPU, and could be an easier starting point than Vulkan. So if you're still interested in learning graphics programming, WebGPU or SDL_GPU could be good to check out.
The question you need to ask is: "Do I need my graphics to be multithreaded?"
If the answer is "No"--don't use Vulkan/DX12! You wind up with all the complexity and absolutely zero of the benefits.
If performance isn't a problem, use anything else--OpenGL, DirectX 11, game engines, etc.
Once performance becomes the problem, then you can think about Vulkan/DX12.
I really hope SDL3 or wgpu could be the abstraction layer that settles all these down. I personally bet on SDL3 just because they have support from Valve, a company that has reasons to care about cross platform gaming. But I would look into wgpu too (...if I were better at rust, sigh)
With Vulkan this is borderline impossible and it becomes messy quite quickly. It's very low level. Unlike OpenGL, one really needs an abstraction layer on top, so you either gotta use a library or write your own in the end.