
solve geometry aliasing with transparent foliage in DXR RTX raytracing

Started by evelyn4you
6 comments, last by JoeJ 2 years, 5 months ago

hi,

I have a custom DX12 raytracing-only game engine (no hybrid version; the G-buffer is also produced by RT).

Handling masked transparent foliage in the any-hit shader works fine (although it's computationally expensive), and the still-image quality is fine.

But when I move the camera back continuously from the foliage object, severe aliasing artefacts become visible.
I know this is caused because the world geometry resolution becomes subpixel resolution.

To say it in other words: my traced ray finds the correct color, but this color only covers a small part of "the pixel".

If i apply TAA it does not really help.

How can this be solved efficiently?

Of course I could render the picture at double resolution and average the pixel colors, but the FPS hit is too much.

I can't understand why the same scene in rasterizer mode with the same mask texture does NOT show this kind of aliasing.

Please, please could anyone help me ?


evelyn4you said:
I know this is caused because the world geometry resolution becomes subpixel resolution.

What you want would be prefiltered geometry, which is especially hard for foliage models. Due to their topological complexity, it's hard to reduce their detail while still preserving shape.
There is some work on using volume representations of prefiltered geometry, or rendering distant foliage with point splatting. The former requires a form of volume marching to render it; the latter is a form of rasterization, and raytracing it won't be practical.
Then there is the option to pre-compute geometry reduction, and to set the fitting detail at runtime from that, e.g. UE5 Nanite. But that's not compatible with current raytracing APIs, as any change to the geometry requires a full rebuild of the BVH.

However, the problem you describe would be equally present in a rasterized framebuffer, so the usual workarounds (e.g. using billboards for distant foliage) should work for your RT engine just as well.
And you want this anyway, because RT performance in dense foliage scenes becomes pretty bad.

I think you are focusing on a problem that nobody else has properly solved yet either, and you could just ignore it?

I'm curious why you use RT for primary visibility, although rasterization could give the same result faster. I guess that's just to keep it simple by not mixing different approaches?
True advantages of RT here would be non-planar projections, DOF, and unique subpixel position jitter per pixel for TAA.
If you could render the same scene with rasterization, you could use this to verify results match. Maybe your mip level / texture LOD selection code isn't right, and the problem becomes most visible with foliage just due to coincidence.
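A hedged sketch of the kind of LOD selection worth verifying here: picking a mip level from a ray cone's footprint at the hit point. All names and the simplified cone model below are illustrative assumptions, not anything from the engine in this thread, and it's written in Python for readability rather than as shader code:

```python
import math

def mip_from_ray_cone(hit_distance, cone_angle, texel_world_size, mip_count):
    """Pick a mip level from a ray cone's footprint at the hit point.

    hit_distance     -- distance from ray origin to the hit (world units)
    cone_angle       -- full spread angle of the pixel's ray cone (radians)
    texel_world_size -- world-space size one mip-0 texel covers on the surface
    mip_count        -- number of mip levels in the texture
    """
    # World-space width of the cone at the hit point.
    footprint = 2.0 * hit_distance * math.tan(cone_angle * 0.5)
    # Each +1 mip doubles the texel size, so take log2 of the ratio.
    mip = math.log2(max(footprint / texel_world_size, 1.0))
    return min(mip, mip_count - 1)
```

If a rasterized reference picks visibly sharper or blurrier mips than this kind of math for the same view, the LOD selection is the first suspect.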

hi JoeJ,

=> I'm curious why you use RT for primary visibility, although rasterization could give the same result faster.

I have implemented a variation of Nvidia DDGI which uses raytracing, so this engine is only suitable for RTX cards.
A very good side effect of RTX raytracing is the outstanding rendering performance without any need for frustum culling or object-space culling (due to the acceleration structures).
For example i test my renderer with the famous "bistro" scene.

My renderer currently can handle 32 !! bistro instances placed in the level, handling transparency, with two cascaded light probe grids of thousands of DYNAMIC probes and raytraced shadows, at 150 frames per second. (A forward renderer like Godot gives about 80 fps with only one scene and no transparency handling. Of course I assume a Godot expert could optimize things that I don't know about.)

Back to topic.

=> If you could render the same scene with rasterization, you could use this to verify results match. Maybe your mip level / texture LOD selection code isn't right, and the problem becomes most visible with foliage just due to coincidence.

This is exactly what makes me think over again and again. ( mip level / texture LOD selection code )

To be clear, my problem is not at very far distance, where the object would be rendered better as an impostor or a billboard.

For example the Thuja of the bistro scene https://commons.wikimedia.org/wiki/File:Thuja_occidentalis_yellow_ribbon.jpg.
E.g.moving

The code for sampling the texture (with dx, dy derivatives) is the same as for sampling opaque meshes.

in the last post something went wrong with my text.

E.g. moving the camera back so the object goes from 1/4 screen height to 1/8 screen height shows the aliasing problem (only on transparent meshes).

The code for sampling the texture (with dx, dy derivatives) is the same as for sampling opaque meshes.

The alpha channel of the mask texture is sampled without LOD because the mask texture has no LOD.
Transparency is detected in the usual way: comparing the alpha value against a given constant threshold.

Should the mask texture also have mip levels ??

evelyn4you said:
in the last post something went wrong with my text.

Once you post a link, the forum software goes nuts and erases your stuff below it when clicking Post. :(

evelyn4you said:
My renderer currently can handle 32 !! bistro instances placed in the level, handling transparency, with two cascaded light probe grids of thousands of DYNAMIC probes and raytraced shadows, at 150 frames per second. (A forward renderer like Godot gives about 80 fps with only one scene and no transparency handling. Of course I assume a Godot expert could optimize things that I don't know about.)

Very interesting numbers, thanks! It makes me again think that RT shadows should beat SM at some practical scene complexity already.
But because i personally work on continuous LOD, i can't use HW RT at all, which sucks even harder than forum software bugs ; )

evelyn4you said:
The alpha channel of the mask texture is sampled without LOD because the mask texture has no LOD.

But it still needs mips to help with aliasing. If you don't have them, that's probably the problem.
I guess you do have mips, so maybe the problem is the hard switch from one mip to the next. You could solve this with stochastic switching (+ TAA to smooth it out), e.g. adding a random per-pixel offset to your switching math.
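The stochastic-switch idea could look roughly like this. It's a minimal sketch assuming a continuous LOD value computed elsewhere; the function name and the use of Python's `random` module are illustrative only (in a shader this would be a per-pixel hash or blue-noise value):

```python
import random

def stochastic_mip(continuous_lod, rng=random):
    """Turn a continuous LOD value into an integer mip level stochastically.

    Instead of floor(lod) (a hard switch), use the fractional part as the
    probability of rounding up, so neighbouring pixels pick mixed levels
    and TAA averages them into a smooth transition.
    """
    base = int(continuous_lod)
    frac = continuous_lod - base
    return base + (1 if rng.random() < frac else 0)
```

Averaged over many pixels (or frames, via TAA), the result converges to the continuous LOD value instead of snapping at mip boundaries.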

evelyn4you said:
Should the mask texture also have mip levels ??

Absolutely! Otherwise the ray will hit one texel out of many in the current pixel, and the result is noise. Pretty sure that's your issue.
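For illustration, a sketch of what such an alpha mip chain looks like when built by 2x2 box averaging. This matches the typical behaviour of dds conversion tools, but the code is an assumption for clarity, not any tool's actual algorithm, and it assumes a square power-of-two mask:

```python
import numpy as np

def alpha_mip_chain(alpha):
    """Build a mip chain for a single-channel alpha mask by 2x2 box averaging.

    `alpha` is a square power-of-two float array in [0, 1].
    Returns the list [mip0, mip1, ...] down to 1x1.
    """
    mips = [alpha.astype(np.float64)]
    while mips[-1].shape[0] > 1:
        a = mips[-1]
        # Average each 2x2 block into one texel of the next level.
        a = 0.25 * (a[0::2, 0::2] + a[1::2, 0::2] + a[0::2, 1::2] + a[1::2, 1::2])
        mips.append(a)
    return mips
```

Note how higher levels hold fractional coverage values rather than hard 0/1 texels; that fractional coverage is what the distant lookup needs.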

hi JoeJ,

Many thanks for your comment :-) but I have my doubts... why?

Suppose we have a mask texture with a big filled circle (opaque => totally white) and the surrounding space fully transparent (=> totally black).

Let's create mips. => We will get textures with a kind of growing grayscale region from the black outside to the white inside.

But masked transparency is a <yes, no> case.
- No matter how we define our threshold for transparency,
- no matter which sampler (point, linear),
=> we will get a fixed border where we receive <yes, no> transparency.

What will happen? With each LOD jump we move the border somewhat away. (This can be seen in the pictures.)


=> I guess you have mips,

You are right! I checked my texture and realized that I had converted the texture to dds format with mip creation.

LOD sampling depending on distance gives the following screenshots (levels 0, 1, 2).

My first idea was to define the texture values in the masked region not as black but as green, so a kind of washed-out blur would appear with growing LOD, which is actually what we would prefer.
But if I do this, the silhouette of the mesh changes (becomes bigger), which is obviously visible.

I need some more help, please

evelyn4you said:
Suppose we have a mask texture with a big filled circle (opaque => totally white) and the surrounding space fully transparent (=> totally black).

In this example you get away without mips, because a circle is a perfect solid shape. A sample point moving in a straight line over the image will give us a sequence of 0,0,0,1,1,1,1,1,0,0,0 values, which is a low-frequency signal, and a higher mip map will give 0,1,0, which is a good representation of this signal.

But if you think of the alpha mask of a tree branch, that's thicker branches growing out into thinner and more branches, with some small leaves at the ends. Our sample point will give a sequence like 0,0,1,0,0,1,1,0,1,0,1, which is high frequency. If we move our sampling line only slightly, we might get a very different sequence from the same image. And a smooth camera animation would cause temporal noise due to the high variance of our sampled sequences.
A mip of this input can not represent the HF signal well, but its signal is spatially consistent and stable under small sampling offsets, so we won't have temporal noise issues for our animation.
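This argument can be checked with a toy experiment: one mip step is an average of adjacent pairs, so the solid run survives almost unchanged while the branch-like mask collapses into in-between coverage values. The sequences are the ones from the circle and branch examples above; the helper function is illustrative:

```python
def downsample(seq):
    """One mip step: average adjacent pairs (2:1 box filter)."""
    return [(seq[i] + seq[i + 1]) / 2 for i in range(0, len(seq) - 1, 2)]

solid = [0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0]   # circle-like: one solid run
branch = [0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0]  # branch-like: high frequency

# The solid shape survives downsampling almost unchanged...
print(downsample(solid))   # [0.0, 0.5, 1.0, 1.0, 0.0, 0.0]
# ...while the branch mask turns into fractional coverage values,
# which is exactly the information a mip-less thresholded lookup loses.
print(downsample(branch))  # [0.0, 0.5, 0.5, 0.5, 0.5, 0.5]
```

Shifting `branch` by one texel before downsampling changes the fine sequence a lot but the averaged result very little, which is the stability under sampling offsets described above.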

evelyn4you said:
My first idea was to define the texture values in the masked region not as black but as green, so a kind of washed-out blur would appear with growing LOD, which is actually what we would prefer. But if I do this, the silhouette of the mesh changes (becomes bigger), which is obviously visible.

The proper way to solve this is 'texture dilation'. Various DCC tools should have automated processes to help with this.
I know two basic methods:
1. Calculate a signed distance field from the alpha mask. For each unmasked texel, search for the closest masked color by following the gradient of decreasing distance.
2. Blur the masked image and combine it with the unblurred masked image. This adds one more ring of color information. Repeat the process until no unmasked texels remain or you have enough border.

The second seems easier to implement, and it also gives better quality because new texels are an average of former texels.
I always did this manually in Photoshop, which can be automated using recorded actions. But there should be an easier way using some specific tools, I guess.
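A rough sketch of the second method, assuming a single-channel image and simple 4-neighbour averaging; the function name and details are illustrative, not taken from any DCC tool:

```python
import numpy as np

def dilate_colors(color, mask, passes):
    """Push valid colors outward into unmasked texels (blur-and-combine dilation).

    color  -- HxW float array of one channel (run once per channel for RGB)
    mask   -- HxW bool array, True where the texel is valid (opaque)
    passes -- how many one-texel rings of border to grow
    """
    color = color.copy()
    mask = mask.copy()
    for _ in range(passes):
        # Sum valid 4-neighbours for every texel (zero-padded at the edges).
        padded_c = np.pad(color * mask, 1)
        padded_m = np.pad(mask.astype(float), 1)
        nsum = (padded_c[:-2, 1:-1] + padded_c[2:, 1:-1] +
                padded_c[1:-1, :-2] + padded_c[1:-1, 2:])
        ncount = (padded_m[:-2, 1:-1] + padded_m[2:, 1:-1] +
                  padded_m[1:-1, :-2] + padded_m[1:-1, 2:])
        # Fill only texels that were invalid but touch at least one valid one.
        fill = (~mask) & (ncount > 0)
        color[fill] = nsum[fill] / ncount[fill]
        mask |= fill
    return color, mask
```

Because only previously invalid texels are written, the silhouette defined by the alpha channel never changes; the dilation just gives the mip generator sensible colors to average beyond the mask border.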

