
Using 64-bit floats with DirectX

Started by
1 comment, last by Gnollrunner 2 years, 8 months ago

So, with DirectX's D3D11_FEATURE_DOUBLES and D3D11_FEATURE_DATA_D3D11_OPTIONS.ExtendedDoublesShaderInstructions you can query what level of double-precision floating point the GPU supports; in HLSL you can even use double4 types and it will compile.

However, DXGI_FORMAT doesn't have any double-precision option, so… if I want to use some double-precision values in my vertices, how can I specify that with D3D11_INPUT_ELEMENT_DESC?…


As near as I can tell there is no way to use them as vertex attributes. You can put them in a constant buffer by breaking them up into ints. I have done this and it works OK. I would be hesitant to use them for a lot of calculation on the GPU anyway, since many GPUs seem to have nerfed double performance. I have used them for post-skew tiling in Simplex noise and they work OK for that, but that's minimal calculation compared to the whole noise algorithm.
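A minimal host-side sketch of the break-into-ints approach (the struct and function names here are illustrative, not from any SDK): each double is split into two 32-bit halves for the constant buffer, and the join function mirrors what the shader would later do to reassemble them.

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// Two 32-bit halves of a double, as they would sit in a constant
// buffer declared with uint fields (illustrative names).
struct DoubleAsUints { uint32_t lo; uint32_t hi; };

DoubleAsUints SplitDouble(double d) {
    uint64_t bits;
    std::memcpy(&bits, &d, sizeof(bits));  // well-defined type pun
    return { static_cast<uint32_t>(bits & 0xFFFFFFFFu),
             static_cast<uint32_t>(bits >> 32) };
}

// The reassembly the shader side would perform, reproduced in C++
// so the round trip can be checked host-side.
double JoinDouble(DoubleAsUints u) {
    uint64_t bits = (static_cast<uint64_t>(u.hi) << 32) | u.lo;
    double d;
    std::memcpy(&d, &bits, sizeof(d));
    return d;
}
```

On the HLSL side the two halves could be declared as a uint2 in the cbuffer and recombined with the asdouble(lowbits, highbits) intrinsic.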

Edit: It just struck me that I'm probably wrong. You should be able to pass your vertex data down as pairs of 32-bit values and combine them with asdouble in the vertex shader. On the C++ side you would use some sort of type punning to put the data into the vertex structure. Mind you, I haven't actually done it, at least not with vertices.
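If the vertex route does work, the C++ side might look something like the sketch below — the VertexPosD3 layout, PackComponent helper, and the choice of one uint pair per component (e.g. DXGI_FORMAT_R32G32_UINT in the input layout) are assumptions for illustration, not tested code.

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// Hypothetical vertex layout: a 3-component double position passed
// as three pairs of 32-bit values (one R32G32_UINT attribute each).
struct VertexPosD3 {
    uint32_t x[2], y[2], z[2];
};

// Type-pun one double component into its two 32-bit halves.
void PackComponent(double d, uint32_t out[2]) {
    uint64_t bits;
    std::memcpy(&bits, &d, sizeof(bits));
    out[0] = static_cast<uint32_t>(bits);        // low half  -> asdouble lowbits
    out[1] = static_cast<uint32_t>(bits >> 32);  // high half -> asdouble highbits
}

VertexPosD3 MakeVertex(double x, double y, double z) {
    VertexPosD3 v{};
    PackComponent(x, v.x);
    PackComponent(y, v.y);
    PackComponent(z, v.z);
    return v;
}

// HLSL side (sketch): double px = asdouble(input.x.x, input.x.y);
```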

Also, what do you need it for? I do everything in 64 bits on the CPU side, but in general you don't need 64 bits on the GPU side. If you set up your matrix transformation to simply move the data around the camera, it works pretty well.
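The camera-relative idea can be sketched in plain C++ — CameraRelative is a hypothetical helper, but it shows why the subtraction has to happen in double before narrowing to float: values near the camera stay precise even when the absolute coordinates are huge.

```cpp
#include <cassert>

// Keep positions in double on the CPU, subtract the camera position
// in double, and only then narrow to float for the GPU. The result
// has small magnitude, so float precision is ample.
float CameraRelative(double world, double camera) {
    return static_cast<float>(world - camera);
}
```

For example, at a world coordinate around 10,000,000 a float's spacing is 1.0, so subtracting two float-converted positions can lose sub-unit detail that the double subtraction preserves.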

This topic is closed to new replies.
