Terrasolid software offers a variety of ways to gauge the accuracy of LIDAR and imagery data: control points to determine the vertical accuracy of a ground surface, tie lines to measure and correct the mismatch between overlapping flight lines, and tie points to correct aerial imagery positions. Each of these tools provides us with values such as ‘average mismatch’ and ‘RMS’. How are these values arrived at, and what do they mean for the users of our software? This article is the first in a series that will address these questions and give meaning to this type of information. We will conduct a brief review of the relevant statistics and discuss how these principles are applied to the use of control points.
When all is said and done, we wish to have a dataset that represents the real world as accurately as possible, given the tools we have to work with and the error and uncertainty that can inherently be introduced. We could choose to use subjective, qualitative terms like ‘good’, ‘well’, or ‘bad’ to describe the accuracy of our data, or ‘better/worse’ to compare one dataset to another, but this approach lacks precision and meaning, especially when communicating data accuracy within our production teams and from vendors to customers. TerraScan gives us the ability to quantify the accuracy of the data with meaningful numbers that help remove ambiguous interpretation of data quality and prove that it meets the ASPRS Positional Accuracy Standards.
This quantification is rooted in evaluating how close a prediction is to its real-world observation. In the context of LIDAR and this article, laser returns are our prediction of the position and elevation of the ground surface, and control points (known points) are our observation of the real world (or ground truth). We can use descriptive statistics to describe how close the predictions are to reality.
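To make this concrete, the sketch below computes the kinds of summary statistics the article refers to, signed average mismatch, RMS, and standard deviation, from a small set of elevation residuals (dz = laser surface elevation minus control point elevation). The dz values here are invented for illustration; they are not from TerraScan output, and the exact formulas TerraScan uses are discussed later in the series.

```python
import math

# Hypothetical elevation residuals, in meters, at five control points:
# dz = (laser ground surface elevation) - (control point elevation)
dz = [0.03, -0.05, 0.02, 0.04, -0.01]
n = len(dz)

# Signed average mismatch: indicates a systematic bias (e.g. a vertical shift)
average_dz = sum(dz) / n

# Root mean square: overall magnitude of the residuals, sign ignored
rms = math.sqrt(sum(d * d for d in dz) / n)

# Sample standard deviation: spread of residuals around the average
std_dev = math.sqrt(sum((d - average_dz) ** 2 for d in dz) / (n - 1))

print(f"Average dz: {average_dz:+.3f} m")
print(f"RMS:        {rms:.3f} m")
print(f"Std dev:    {std_dev:.3f} m")
```

Note that the average can be small even when individual residuals are large, because positive and negative errors cancel; this is why RMS, which squares each residual before averaging, is commonly reported alongside it.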