Whatever Happened to Data Fusion (Part 2 of 2)

In last month’s article we discussed data fusion from the perspective of merging LIDAR and imagery data to facilitate the extraction of information from the LIDAR point cloud. That article focused on some of the tools within the Terrasolid suite of software that support this process for airborne LIDAR data. In this article, we will examine some of the creative ways that data fusion is streamlining the processing of mobile LIDAR and providing dramatic means of conveying information to stakeholders.
Most mobile mapping systems (MMS) these days consist of multiple laser scanners and multiple cameras, and one camera position in particular has proven invaluable to the overall system calibration. One problem with point data is that precisely resolving the edges of objects can be difficult and is highly dependent upon the density of the cloud. In many collections the cross-track spacing is much denser than the down-track spacing, so the point spacing may be 1 cm in one direction while reaching 10 cm or more in the other. For example, a scanner sweeping 150 lines per second on a vehicle traveling 15 m/s lays down scan lines roughly 10 cm apart down-track, no matter how dense each line is cross-track. When the goal is to place the horizontal accuracy of the dataset within 1 cm, this becomes very difficult.

Imagery that has been aligned to the laser data not only makes edges and small features visible, but can also give the analyst the ability to identify features in areas where suitable features would be difficult to find in the laser points alone. For instance, the signal markers in the laser data (Figure 1) are picked up very well by the software, even though they are harder for the human eye to discern in the laser data alone than in the photo (Figure 2).
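As a rough illustration of what "imagery aligned to the laser data" means in practice, the sketch below projects laser points into a calibrated camera frame and samples a color for each visible point. This is a minimal sketch assuming a simple pinhole camera model with known intrinsics K and a world-to-camera pose (R, t); the function and variable names are hypothetical, and a production MMS workflow would also model lens distortion, camera timing, and occlusion.

```python
# Minimal sketch: colorize LIDAR points from a camera that has been
# aligned (boresighted) to the laser data. Pinhole model only; names
# and parameters here are illustrative, not from any specific MMS.
import numpy as np

def colorize_points(points_world, image, K, R, t):
    """Project world-frame LIDAR points into a camera image and
    sample an RGB color for each point that lands inside the frame.

    points_world : (N, 3) XYZ coordinates in the mapping frame
    image        : (H, W, 3) RGB image from the calibrated camera
    K            : (3, 3) camera intrinsic matrix
    R, t         : rotation (3, 3) and translation (3,) taking world -> camera
    """
    # Transform points into the camera frame: x_cam = R @ x_world + t.
    pts_cam = points_world @ R.T + t

    # Keep only points in front of the camera (positive depth).
    in_front = pts_cam[:, 2] > 0

    # Perspective projection to pixel coordinates.
    uvw = pts_cam @ K.T
    uv = uvw[:, :2] / uvw[:, 2:3]

    h, w = image.shape[:2]
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    visible = in_front & (u >= 0) & (u < w) & (v >= 0) & (v < h)

    # Default gray for points the camera cannot see.
    colors = np.full_like(points_world, 128, dtype=np.uint8)
    colors[visible] = image[v[visible], u[visible]]
    return colors
```

The projection is only as trustworthy as the camera-to-scanner alignment behind it, which is why a camera position that supports the overall system calibration matters so much: once the boresight is solved, every pixel in the photo can be tied back to laser points, and edges that are ambiguous at 10 cm down-track spacing become legible in the imagery.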

Darrick Wagg
