Vilem Otte said:
To clarify this one needs to read functional specifications of both - DXR and Vulkan Ray Tracing, and both are… disappointing at least. Apart from definition of TLAS (scene level) and BLAS (primitive/instance level) - there aren't requirements of acceleration structure to be of specific type. Now keep in mind that there are many acceleration structures and pretty much all of them can be multi level ones, that includes: Grids, BSP-trees (and KD-trees), various BVHs that are incompatible to each other (standard binary BVH, ternary BVH (or QBVH), 8 BVH (i.e. each interior node has 8 children), etc.), BIH (Bounding Interval Hierarchies), etc. etc.
The reasoning is obvious: By not specifying the data structures, vendors can do what suits their approach best.
Nothing wrong with that. But all three vendors communicate that they use a BVH, which is the most widely used structure for raytracing in general. Thus we can rule out all the alternatives, making a request for a BVH API reasonable.
But then, how should we deal with the differences? We know AMD uses a branching factor of 4 from their RT instructions. NV and Intel may use anything from binary trees up to branching factors of maybe 64. NV may even use compression based on treelets.
So what we need is an API in the form of special compute functions to generate the BVH data at the necessary level of abstraction, plus a specification we can query from the driver, so we know the exact requirements of the custom BVH.
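To make the idea concrete, here is a rough C++ sketch of what such a queryable spec could look like on the app side. Everything here (struct names, fields, the helper function) is invented for illustration; no such API exists in DXR or Vulkan RT today:

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical: what a driver query might report about the BVH format the
// HW actually consumes. Field names are made up for this sketch.
struct BvhSpec {
    uint32_t branchingFactor;    // e.g. 4 on AMD, possibly wider elsewhere
    uint32_t nodeSizeBytes;      // stride of one interior node
    uint32_t nodeAlignment;      // required alignment of the node buffer
    bool     compressedTreelets; // whether the HW expects treelet compression
};

// With such a spec, our own compute builder could size its buffers for any
// vendor. Rough upper bound on interior nodes for a given leaf count,
// assuming a complete tree at the reported branching factor.
uint32_t estimateInteriorNodes(const BvhSpec& spec, uint32_t leafCount) {
    uint32_t nodes = 0;
    uint32_t level = leafCount;
    while (level > 1) {
        // Each level shrinks by the branching factor, rounded up.
        level = (level + spec.branchingFactor - 1) / spec.branchingFactor;
        nodes += level;
    }
    return nodes;
}
```

The point is not the exact struct, but that one code path could target AMD's 4-wide nodes and a hypothetical 64-wide layout just by reading different spec values.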
I see that this is pretty involved, and not many would want to use such low level stuff. We would end up writing code for multiple vendors and chip generations. But if we want any form of continuous LOD, we simply have no other choice.
It will take many years until we get this, if at all.
The workaround is to do what Epic does with Nanite: Use low poly proxies with discrete LOD for raytracing. No details, no exact results, but at least that's doable.
But this also means: Keep the legacy crap around just to support raytracing. Even if we have just solved the LOD problem for rasterization, we are still stuck and left with all the problems from before for raytracing.
This is not acceptable imo. So we have to choose: either LOD or raytracing. Pick one of two wrong options.
Vilem Otte said:
All in all, one can always use compute to build ray tracer, acceleration structures, etc. etc. (which I've been doing for years so far … fun fact is that if I'm not mistaken Unreal Engine also went this way)
UE does software tracing of SDF volumes, but now you can completely replace this system with DXR and proxies, afaik.
Pros and cons are scene dependent. DXR sucks in scenes with many overlaps due to kitbashing (early cave demo), but wins in modular scenes (city demo).
Neither option will give accurate high frequency details in general. SDF is an approximation, DXR traces low poly proxies. Thus you can not use RT for shadows. Instead they keep using 'crappy legacy shadow maps', because 'awesome and innovative raytracing' is not even compatible with their geometry, which is still just triangles.
Besides that, although they already have a BVH on disk (Nanite), they still need to build the RT BVH at runtime as well. (One may argue the Nanite BVH is not good for RT, but in my case it would be, and Epic's looks suitable for that purpose too.)
I'm not impressed by this patchwork. It's garbage, waiting to be replaced with something better. It's just that the smart guys at MS and NV forgot about giving us options to do so. They thought the classic approaches from the 70's and offline rendering are fine for games, and the next big thing for sure. <:]
Now the damage has been done and it's very hard to fix that mess.
Vilem Otte said:
I absolutely not get hype among developers
I was hanging around a lot on the b3d forum to discuss those issues. Many industry experts are there. Those were the most heated discussions of my internet life. Total fun as well, but opinions, expectations, arguments and agendas just vary wildly.
Idk, but ofc. my request for dynamic geometry does not represent the state of the art in games tech. I see that, and besides that, raytracing is pretty nice.
But to me, 'state of the art in games' means one thing first: constant progress. I want the next game to look better than the former game. RT may give us that, but people can not see the doors it closes, the double work it requires, and that flexibility is worth more than a temporary performance advantage.
Vilem Otte said:
For skinned geometry it might be possible to precompute & refit between precomputed keyframes … during animation blending or ragdoll (generally physics taking over) rebuild may be the only option
We can not precompute based on key frames. We have dynamic ragdolls, blending animations, procedural animation, etc.
For characters or foliage it might be best to precompute the BVH at the reference pose, extend the bounds so they bound all possible animations, and then transform the nodes per frame. Notice that, due to the extension, we do not even need to refit the BVH (which causes expensive barriers between tree levels). Simply transforming the bounds is good enough.
The resulting overlap of bounds would decrease tracing performance, but the cost to update the BVH is basically zero. Sounds like a win to me.
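A minimal C++ sketch of the per-frame bounds transform described above (types and layout are my own for illustration, not from any real engine). The stored box already bounds every possible pose of its cluster, so transforming its corners by the cluster's bone matrix keeps it conservative with no refit pass:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };
struct Aabb { Vec3 min, max; };

// Row-major 3x4 rigid transform (rotation in the 3x3 part, translation in
// the last column), e.g. a skinning bone matrix.
struct Mat3x4 { float m[3][4]; };

Vec3 transformPoint(const Mat3x4& t, const Vec3& p) {
    return {
        t.m[0][0]*p.x + t.m[0][1]*p.y + t.m[0][2]*p.z + t.m[0][3],
        t.m[1][0]*p.x + t.m[1][1]*p.y + t.m[1][2]*p.z + t.m[1][3],
        t.m[2][0]*p.x + t.m[2][1]*p.y + t.m[2][2]*p.z + t.m[2][3],
    };
}

// Transform all 8 corners of the extended reference-pose box and take the
// component-wise min/max. No bottom-up refit, no barriers between levels.
Aabb transformAabb(const Mat3x4& t, const Aabb& b) {
    Aabb out{{ 1e30f, 1e30f, 1e30f }, { -1e30f, -1e30f, -1e30f }};
    for (int i = 0; i < 8; ++i) {
        const Vec3 corner = { (i & 1) ? b.max.x : b.min.x,
                              (i & 2) ? b.max.y : b.min.y,
                              (i & 4) ? b.max.z : b.min.z };
        const Vec3 p = transformPoint(t, corner);
        out.min = { std::min(out.min.x, p.x), std::min(out.min.y, p.y),
                    std::min(out.min.z, p.z) };
        out.max = { std::max(out.max.x, p.x), std::max(out.max.y, p.y),
                    std::max(out.max.z, p.z) };
    }
    return out;
}
```

In a real engine this would run per BVH node in a trivially parallel compute pass, one bone matrix per cluster.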
Vilem Otte said:
It is likely there is going to be either a major re-spec, or DXR/VulkanRT won't be nearly used as much as public would expect.
If there is enough pressure from the devs, they will solve this sooner rather than later. But at the moment high HW prices are the bigger problem preventing RT adoption. And i don't think this will change either.
I see two camps: Devs which want to deliver cutting edge tech, so they request high minimum specs, targeting an enthusiast niche of gamers.
Devs which try to keep gaming affordable, investing more in art than tech.
The former makes little sense currently, so i assume NV pushes RT a lot with both money and assistance. But NV does not listen to what people want. Instead they tell them how and with what technology they will get what they want. They play the role of the innovator, while what they actually want is to sell bigger and bigger GPUs. For them, gaming is ideally a status symbol of cutting edge hardware.
And their strategy just works. Nobody doubts it, despite the obvious conflict of interest.
IMO, what we want is a small box with a 5tf APU. Low power, but powerful enough to game and do all other PC things. Something like Series S, or even a M1 chip for games.
Cheap hardware, and if we lower some expectations, just fine. This should be what the gaming industry wants, no?
It does not seem so. Actually we almost have such a chip: AMD's Rembrandt has a 3.5 tf iGPU, if i remember correctly, at 60 Watt.
I like that. I think that's what i need to deliver UE5 level gfx. But at 60 fps, not 20 like in that city demo.
So i tried to get one such laptop.
But all such laptops are 'enthusiast gaming laptops'. Expensive, and they have a discrete GPU like an RTX 3050 at least. That's a 5.3 tf GPU. No big win over the iGPU. A waste of money and power. Ridiculously stupid.
I did not find a single model without a dGPU. So again, i'll keep my money and stick with retro games.
From my perspective as a PC player, gaming currently really looks bad, no matter from what angle i look at it. There are no interesting offers, regarding either games or hardware.
But well, we've had some lows before.