Intel GPU sin/cos accuracy

While developing a system that re-projects GIS vector and raster data in real time (from WGS84 to Stereographic, Mercator, etc.), I found that sin/cos on the Intel HD 4000 GPU was not particularly accurate.

Because the accuracy was so poor, I had to look for a different way of calculating sin/cos on the GPU.

Patches were subsequently made to Mesa, and an environment variable was added that enables a slightly more precise variant of sin/cos. Even with that patch, however, the accuracy is still insufficient for displaying GIS data.

The approach I took was to:

  • Use a minimax polynomial approximation for sin.
  • Reduce the input argument to the range [-PI/2, PI/2].
  • Take advantage of the fma instruction in GLSL.
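As a rough illustration of those three steps, here is a sketch of such a function (my own, not the exact shader from this work). The coefficients shown are plain Taylor-series values; a real minimax fit from a Remez-style solver would replace them for a lower worst-case error. fma requires GLSL 4.00 or later.

  const float PI = 3.14159265358979;

  float calcSin(float x)
  {
      // Range reduction: sin(x) = (-1)^k * sin(x - k*PI) with k = round(x/PI),
      // which folds the argument into [-PI/2, PI/2].
      float k = floor(x / PI + 0.5);
      float r = fma(-k, PI, x);          // r = x - k*PI with a single rounding
      float s = 1.0 - 2.0 * mod(k, 2.0); // +1 for even k, -1 for odd

      // Odd polynomial in r, evaluated in Horner form with fma:
      // sin(r) ~= r * (c1 + r2*(c3 + r2*(c5 + r2*c7)))
      float r2 = r * r;
      float p = fma(r2, -0.000198412698, 0.00833333333); // c7, c5
      p = fma(r2, p, -0.166666667);                      // c3
      p = fma(r2, p, 1.0);                               // c1
      return s * r * p;
  }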

You can take advantage of the usual sin/cos identities as follows (pseudo code):

  • To calculate cos via sin, using cos(x) = sin(PI/2 - x):

    cos_value = calcSin(M_PI_2 - value);

  • To calculate both sin and cos of a value, using sin² + cos² = 1:

    sin_value = calcSin(value);
    cos_value = sqrt(1.0 - sin_value * sin_value);

Note that the sqrt form only recovers the magnitude of cos, so it is safe as written only when cos(value) is known to be non-negative, i.e. when value lies in [-PI/2, PI/2]; otherwise the sign has to be restored from the quadrant.
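A small helper along those lines (my naming, assuming the calcSin sketch above; fma(-s, s, 1.0) computes 1 - s*s with a single rounding):

  vec2 calcSinCos(float value)
  {
      // Valid as written only while cos(value) >= 0, i.e. while value
      // stays in [-PI/2, PI/2]; outside that range the sign of cos
      // must be restored from the quadrant.
      float s = calcSin(value);
      float c = sqrt(fma(-s, s, 1.0)); // 1 - s*s with one rounding
      return vec2(s, c);
  }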

Invariably you need both sin and cos when doing projection calculations, and sqrt on a GPU is not as slow as you might think!

For additional background on what triggered this, see the thread on the opengl.org forums.
