The FP64 extension for GLSL does not provide new definitions for the math functions.

In other words, functions such as sin, cos, tan, etc. are still only specified to 32-bit float accuracy (FP32).

For games this is unlikely to cause any problems; however, if you are using the GPU to perform complex calculations it may prove to be an issue.

Each application should be analysed to see whether the approach adopted provides sufficient accuracy.

In the case of the applications I have written, data is presented via transform feedback as scaled latitude and longitude.

The data has previously been split into tiles, so the positions presented are actually relative to the center of the tile. This avoids a common problem called 'floating point jitter'. The tile center is passed to the shader via a uniform.
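The jitter effect is easy to reproduce on the CPU. A minimal Python sketch (the coordinate values are invented for illustration, not taken from the application): two nearby vertices collapse to the same FP32 value in absolute form, while their tile-relative offsets stay distinct.

```python
import struct

def to_fp32(x):
    # Round-trip a Python double through IEEE 754 single precision,
    # mimicking what the GPU sees when a value arrives as a 32-bit float.
    return struct.unpack('f', struct.pack('f', x))[0]

# A longitude-like coordinate far from the origin (made-up scaled units).
vertex = 123.4567891
neighbor = 123.4567892  # 1e-7 apart in world space

# Sent to the GPU as absolute FP32 values, the two vertices collapse into
# the same representable float (FP32 ulp at ~123 is about 7.6e-6): jitter.
collapsed = to_fp32(vertex) == to_fp32(neighbor)

# Tile-relative: subtract the tile center on the CPU in double precision,
# then send the small offsets; FP32 resolves them easily near zero.
tile_center = 123.4567
off_a = to_fp32(vertex - tile_center)
off_b = to_fp32(neighbor - tile_center)
distinct = off_a != off_b
```

The same subtraction done in FP32 on the GPU would not help, because the precision is already lost by the time the absolute coordinate reaches the shader; the subtraction has to happen in doubles on the CPU.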

As the resolution of the data improves, the number of tiles covering the world is increased. This maintains accuracy because vertex positions remain relative to the center of a tile.

It turns out that FP32 is sufficient down to a 10 NM range (and probably closer), at which point a switch to FP64 is needed.
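One way to judge where FP32 runs out is to measure its step size at the magnitudes involved. A rough sketch — the 10 NM offset and the degrees-to-metres conversion are back-of-envelope assumptions for illustration, not the figures used in the application:

```python
import struct

def fp32_ulp(x):
    # Spacing between adjacent FP32 values at magnitude x, found by
    # nudging the raw 32-bit representation by one bit.
    bits = struct.unpack('I', struct.pack('f', x))[0]
    nxt = struct.unpack('f', struct.pack('I', bits + 1))[0]
    cur = struct.unpack('f', struct.pack('f', x))[0]
    return nxt - cur

# Assumed figures: ~10 NM expressed as a tile-relative offset in degrees
# of latitude (1 NM is roughly 1/60 degree).
offset_deg = 10.0 / 60.0
resolution_deg = fp32_ulp(offset_deg)

# One degree of latitude is roughly 111 km, so the FP32 step in metres:
resolution_m = resolution_deg * 111_000
```

This kind of check only bounds the representation error of the coordinates themselves; the accumulated error of the projection arithmetic is usually larger and is what actually forces the switch.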

This requires a completely new set of shaders where the center of the tile and the projection parameters are passed into the shader as doubles. The calculations are then performed using the FP64 GLSL functionality.

The other trick is to perform as much of the projection setup as possible on the CPU as doubles; don't do any calculation on the GPU that can be hoisted out. These setup calculations are performed once per frame.
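As a sketch of that split, assuming a simple equirectangular projection (chosen for brevity, not necessarily the projection used here): everything vertex-independent is computed once per frame on the CPU in doubles, and the per-vertex work the shader would do is reduced to multiplies and adds.

```python
import math

def projection_setup(center_lat_deg, center_lon_deg):
    # Once per frame, on the CPU, in double precision: everything that
    # does not depend on the individual vertex is folded into 'uniforms'.
    lat0 = math.radians(center_lat_deg)
    return {
        'lon0_rad': math.radians(center_lon_deg),
        'cos_lat0': math.cos(lat0),   # longitude scale at the tile center
        'deg2rad': math.pi / 180.0,
    }

def project_vertex(u, lat_deg, lon_deg):
    # Per-vertex work, as the shader would perform it: no sin/cos needed
    # because the expensive terms were precomputed above.
    x = (lon_deg * u['deg2rad'] - u['lon0_rad']) * u['cos_lat0']
    y = lat_deg * u['deg2rad']
    return x, y
```

Hoisting the trigonometry out this way also sidesteps part of the FP64 math-function problem: the values that needed full double-precision sin/cos were computed on the CPU, where the C library provides them.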

Because my target GPU is an Intel HD4000, sin/cos are implemented in FP64 as a side-effect of working around the accuracy bug mentioned in a previous post. On NVIDIA and AMD GPUs the supplied sin/cos are used.

Testing something like this is not simple: you have to compare the CPU and GPU implementation outputs. What you see on the screen is the most important deciding factor, not necessarily the 6th or 8th decimal place.
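A comparison harness can be as simple as reading the GPU results back (e.g. via transform feedback) and tracking the worst-case difference against the CPU doubles. A minimal sketch, with rounding standing in for the FP32 readback:

```python
import math

def compare_outputs(cpu_vals, gpu_vals, tol):
    # Compare reference results (CPU, double precision) against values
    # read back from the GPU, and report the worst absolute error.
    worst = 0.0
    for c, g in zip(cpu_vals, gpu_vals):
        worst = max(worst, abs(c - g))
    return worst, worst <= tol

cpu = [math.sin(x * 0.1) for x in range(100)]   # double-precision reference
gpu = [round(v, 6) for v in cpu]                # stand-in for GPU readback
worst, ok = compare_outputs(cpu, gpu, tol=1e-5)
```

In practice the tolerance should be chosen from what matters on screen (a fraction of a pixel at the working range) rather than from a fixed decimal place.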
