But there's also a thesis in there: given that scene descriptions grow in size much faster than screen resolution increases, there should be a tipping point where ray-tracing is more efficient than rasterization. I don't think they expected it to take quite this long though.
The argument that "Ray tracing is logarithmic in scene complexity while rasterization is linear. Therefore, ray tracing will win eventually!" ignores the fact that rasterizers also use hierarchical acceleration structures to maintain logarithmic complexity, just like ray tracers do. You could make the same argument if you compared a well-designed rasterization-based system against a naive ray tracer that brute-forces every triangle against every pixel.
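To make that concrete, here's a toy 1-D sketch (illustrative names only, not from any real engine) showing that the same balanced hierarchy gives logarithmic queries whether it's a rasterizer culling subtrees or a ray tracer descending per ray:

```python
# Toy 1-D "BVH": both a rasterizer's culling pass and a ray tracer's
# traversal descend the same kind of tree, touching O(log n) nodes
# when it's balanced. Everything here is a simplified illustration.

class Node:
    def __init__(self, lo, hi, left=None, right=None):
        self.lo, self.hi = lo, hi          # 1-D "bounding box"
        self.left, self.right = left, right

def build(items):
    """Build a balanced hierarchy over sorted (lo, hi) intervals."""
    if len(items) == 1:
        lo, hi = items[0]
        return Node(lo, hi)
    mid = len(items) // 2
    l, r = build(items[:mid]), build(items[mid:])
    return Node(min(l.lo, r.lo), max(l.hi, r.hi), l, r)

def query(node, x, visited=None):
    """Find leaves containing point x, counting nodes visited."""
    if visited is None:
        visited = [0]
    visited[0] += 1
    if not (node.lo <= x <= node.hi):
        return []
    if node.left is None:                  # leaf
        return [(node.lo, node.hi)]
    return query(node.left, x, visited) + query(node.right, x, visited)

# 1024 disjoint segments: a point query touches ~2*log2(1024) nodes,
# not 1024 -- the "logarithmic in scene complexity" everyone cites.
tree = build([(i, i + 0.5) for i in range(1024)])
hits = query(tree, 100.25)
```

Whether the query is "which leaves does this ray hit" or "which subtrees survive frustum culling," the traversal shape (and the logarithm) is the same.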
The difference is really a focus on local vs. non-local data. Rasterizers focus on preparing data ahead of time so that it can be directly indexed without searching. Ray tracers focus on making global searching as fast as possible. Rasterizers do more work at the start of a frame (rendering shadow, reflection, ambient occlusion maps). Ray tracers do more work in the middle of the frame (searching for shadowing/reflecting/occluding polygons).
It's wonderful that we finally have both accelerated in hardware. HW ray tracing still has a very long way to go. Currently, budgets are usually less than 1 ray per pixel of the final frame in real-time apps! Figure out how to use that effectively! :D But it still opens up many new possibilities.
Rasterization asks, for each triangle, which pixels does it intersect; ray-tracing asks, for each pixel, which triangles does it intersect.
Clearly, this allows us to build lookup data structures over the inner loop. So, back-of-the-envelope, ray-tracing becomes more efficient when you have more triangles than pixels (give or take an order of magnitude or so due to algorithmic differences).
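A toy cost model for that back-of-the-envelope claim. The constants are invented purely for illustration (real crossovers depend heavily on hardware and constants-per-operation), and the rasterizer here is modeled as the naive per-triangle loop described above:

```python
import math

# Invented constants for illustration only.
C_RASTER = 1.0   # cost per triangle (setup + rasterize)
C_RAY = 25.0     # cost per ray (hierarchy traversal is pricier per step)

def raster_cost(triangles, pixels):
    # Naive model: every triangle visits the pipeline.
    return C_RASTER * triangles

def raytrace_cost(triangles, pixels):
    # One logarithmic hierarchy traversal per pixel.
    return C_RAY * pixels * math.log2(triangles)

pixels = 1920 * 1080
for tris in (10**5, 10**7, 10**9, 10**11):
    winner = ("raster" if raster_cost(tris, pixels) < raytrace_cost(tris, pixels)
              else "ray")
    print(f"{tris:>14} triangles -> {winner} wins")
```

With these made-up constants, rasterization wins at modest triangle counts and ray tracing takes over a few orders of magnitude past the pixel count, which is the "give or take" in the claim above.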
So, the growth in complexity of real-time content has slowed for multiple reasons, not the least of which is that rasterization (especially modern GPU rasterization with quad-based shading) is poorly suited to scenes where polygons approach pixels in coverage. And more importantly, we've hit the polygon density where we appear to be getting better visual-quality gains by spending cycles on improved shading rather than more polygons.
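A rough sketch of that quad-shading penalty. The 2x2 quad granularity is real GPU behavior (needed for derivative computation); the best/worst-case numbers below are a deliberate simplification:

```python
# Fragments are shaded in 2x2 quads; every lane in a touched quad runs,
# even lanes outside the triangle. Simplified model for illustration.

def quad_overhead(covered_pixels, quads_touched):
    """Ratio of shading invocations to useful pixels."""
    shaded = quads_touched * 4      # all four lanes of each quad execute
    return shaded / covered_pixels

# A large triangle: 10,000 pixels packed tightly into ~2,500 quads.
big = quad_overhead(10_000, 2_500)    # ~1x: almost no wasted lanes

# A pixel-sized triangle: 1 covered pixel still wakes a whole quad.
tiny = quad_overhead(1, 1)            # 4x: three of four lanes are waste
```

As triangles shrink toward a pixel, you pay up to 4x (more, counting quads straddling triangle edges), which is exactly why pixel-scale geometry hurts rasterizers.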
Then add in the transition to 4K and we get a much larger pile of pixels, once again changing the math.
I don't know what this means for the future. I suspect we won't see much increase in scene complexity until we get to the cliff where ray-tracing is then viable, then I imagine scene complexity for real-time scenes will make a big jump.
Keep in mind, what you read is with offline CGI in mind. And we've already hit that tipping point for offline rendering. Even the last major rasterization hold-out (PRMAN) has switched to path-tracing even primary rays. It's just that real-time rendering is a bit of a different beast.
For this reason, and for the reason of real-time ray budgets barely approaching 1 ray per pixel at 1080p on the highest cards, ray-tracing tends to mostly be used for specular effects at this time.
Even in non-real-time workloads, this was a massive time sink until ray sorting and batching entered common practice: collecting rays that hit a single coherent area so they can be processed at once. And the current RTX model has no strong support for ray batching.
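A minimal sketch of what that batching looks like, assuming a simple direction-octant key (hypothetical scheme for illustration; production renderers use fancier keys, e.g. Morton codes over origin and direction):

```python
from collections import defaultdict

def octant(direction):
    """3-bit key from the sign of each direction component."""
    dx, dy, dz = direction
    return (dx < 0) | ((dy < 0) << 1) | ((dz < 0) << 2)

def batch_rays(rays):
    """Group (origin, direction) rays into roughly coherent batches.

    Rays pointing into the same octant tend to traverse similar BVH
    regions, so tracing a batch together improves cache coherence.
    """
    batches = defaultdict(list)
    for origin, direction in rays:
        batches[octant(direction)].append((origin, direction))
    return batches

rays = [
    ((0, 0, 0), ( 1, 1, 1)),
    ((1, 0, 0), ( 1, 2, 1)),   # same octant as the first ray
    ((0, 1, 0), (-1, 1, 1)),   # different octant
]
batches = batch_rays(rays)
```

The win comes from the consumer side: each batch streams through a similar part of the scene instead of bouncing around memory ray by ray.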
In a previous life I worked for a company that was developing real-time ray tracing products. The founders had this magical algorithm but the catch was that dedicated ray tracing hardware was almost never successful because by the time the dedicated hardware made it to market general purpose processors had caught up.
However, what it seemed like to me was that the founders had developed a blazing fast algorithm that cut a ton of corners. Each time they'd fix an edge case the product got slower. Regardless they were moderately successful and might still be around.
And then there was the time I accidentally nuked all of our internal infrastructure in the middle of a product release demo.
No idea if it's your company, but this bit sort of reminds me of Euclideon demos: https://www.youtube.com/watch?v=DrBR_4FohSE
How so? Wouldn't both scale logarithmically with respective hierarchical acceleration structures? In what way does ray tracing scale better?
Wow, very impressive! I believe this is only available for macOS Chrome Canary, with the enable-unsafe-webgpu flag toggled on. But we are starting to see more example code.
https://github.com/tsherif/webgpu-examples
This is the first RTX-specific engine I've seen so far, though. Starting to feel like the future, with real-time hardware-accelerated rendering capabilities in the browser ;)
Do you mind my asking what you plan to build with it?
The Ray-Tracing Extension is currently only available for Windows and Linux.
My next plan is to implement the extension into Dawn's D3D12 backend, so I can build chromium with my Dawn fork and have Ray-Tracing available directly in the browser (at least for myself) :)
Anecdotally, I was trying to connect PyTorch's CUDA tensors to the GL textures that Electron/Chrome uses to render in a Canvas, without going through CPU memory, but couldn't figure out where to inject my code. Chromium's GPU code is quite a maze. Perhaps a smarter person will be able to accomplish that.
It's not likely to be standardized as is, but the code demonstrates how to integrate something like this into Chromium. There's a Web ML community group that's working to figure out what could be standardized in this area. https://webmachinelearning.github.io/
And I'm cautiously optimistic that the recent conversation the GPU Web working group had with the Khronos liaison will spur some SPIR-V progress.
https://docs.google.com/document/d/1F6ns6I3zs-2JL_dT9hOkX_25...
The meeting notes also reveal a clue as to why Apple might be pushing WSL so hard:
> MS: Apple is not comfortable working under Khronos IP framework, because of dispute between Apple Legal & Khronos which is private. Can’t talk about the substance of this dispute. Can’t make any statement for Apple to agree to Khronos IP framework. So we’re discussing, what if we don’t fork? We can’t say whether we’re (Apple) happy with that.
> NT: nobody is forced to come into Khronos’ IP framework.
Khronos basically said in that meeting that it would be fine to fork SPIR-V, which would solve Apple’s and Microsoft’s issues with their IPR framework. We’ve also discussed using a textual form of the SPIR-V format. We’ve offered all sorts of compromises. It’s Google that isn’t willing to budge, even stating in a WebGPU meeting that they never even considered what compromises would be acceptable to them. Encourage Google to be open to meeting in the middle and maybe we will get somewhere.
I've been wanting to play around with graphics programming for a while and the web is such a perfect platform due to cross platform compatibility and lower barrier for entry.
https://github.com/gpuweb/gpuweb/wiki/Implementation-Status
I doubt you'll see a lot of good tutorials for it until you start seeing it land in stable versions of browsers (or even nightly builds, tbh).
While it's starting to show its age, WebGL has a lot of tutorials, and you can start working with it today.
But if you want to learn some graphics programming on the web, I really recommend trying out regl. It's a great way to learn about graphics primitives without getting bogged down in the WebGL API directly.