GeoCue’s ongoing software development in support of optimized geospatial processing workflows includes research into Graphics Processing Unit (GPU) technology. GPUs, on both local and remote nodes, provide additional processing power for churning through mountains of imagery and LIDAR data. Stating the obvious, GPU technology has been around for a while; what is less obvious is how it is best applied to geospatial workflows. It is tempting to think that simply throwing additional GPUs at a long-running task will significantly speed up image and LIDAR processing workflows. And the bigger and more expensive the GPU card, the better!
Our research has shown that GPUs can indeed have a beneficial impact on geospatial processing times. In some cases we’ve seen 10x reductions in processing time when a task is coded specifically to leverage the GPU architecture. Stated simply, GPU-friendly algorithms let the GPU iterate a single operation across a large array of pixels or points. However, we have seen in our application development that there is a fine balance between GPU memory use, GPU processing, Central Processing Unit (CPU) memory, CPU processing, and storage I/O operations. These elements must be tuned together to achieve a meaningful overall speedup in a geospatial workflow.
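To make the “single operation across a large array” idea concrete, here is a minimal CUDA sketch. The operation, adding a constant vertical offset to millions of LIDAR point elevations, is a hypothetical stand-in chosen for illustration; the function names and sizes are our own, not from any particular product. Notice how much of the program is spent moving data between CPU and GPU memory, which is exactly where the balance described above comes into play.

```cuda
#include <cuda_runtime.h>
#include <stdio.h>
#include <stdlib.h>

// Hypothetical operation: apply a constant vertical offset to every
// LIDAR point elevation. The GPU runs this same simple operation on
// thousands of points in parallel, one thread per point.
__global__ void offsetElevations(float *z, float offset, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)          // guard threads launched past the end of the array
        z[i] += offset; // one operation, iterated across the whole array
}

int main(void)
{
    const int n = 1 << 24;              // ~16.7 million point elevations
    size_t bytes = n * sizeof(float);

    float *hz = (float *)malloc(bytes); // CPU (host) copy of the elevations
    for (int i = 0; i < n; ++i)
        hz[i] = 100.0f;                 // placeholder data

    float *dz;                          // GPU (device) copy
    cudaMalloc((void **)&dz, bytes);
    cudaMemcpy(dz, hz, bytes, cudaMemcpyHostToDevice); // CPU -> GPU transfer

    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    offsetElevations<<<blocks, threads>>>(dz, 1.5f, n); // launch the kernel

    cudaMemcpy(hz, dz, bytes, cudaMemcpyDeviceToHost); // GPU -> CPU transfer
    printf("z[0] = %.2f\n", hz[0]);     // expect 101.50

    cudaFree(dz);
    free(hz);
    return 0;
}
```

The kernel itself is trivial; the host-to-device and device-to-host copies are not. For a workload this simple, transfer time can dominate the GPU computation, which is why GPU memory, CPU memory, and storage I/O must be optimized together rather than in isolation.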