Recently we partnered with folks from the University of Akron to help determine how accurate UAS are compared to traditional mapping methods. Given the current difficulty of flying commercially in the National Airspace, this partnership gave us a unique opportunity to fly inside their Field House. This controlled space had a lot of advantages and some disadvantages. We were able to fly freely over a clean, flat surface, and the football field gridlines offered easy markers for the survey crew to use as Ground Control Points. Unfortunately, flying indoors meant no GPS to help us navigate or run the autopilot, and the roof was only about 50 feet overhead. With sufficient care and copious overlap, we were able to fly the whole field with our 3DR IRIS+ quadcopter and a Canon S100 camera. We took about 900 photos, roughly 10x more than we would have needed under autopilot. With the reconstructed orthomosaic (built in Agisoft PhotoScan) and the measured GCP data, I was able to calculate the RMSE between the two datasets for each coordinate: X, Y, and Z.
For me, this was a learning experience in determining map accuracy, so I spent some time reading up on how it works and on industry standards. I came across the following material, which helped me understand positional accuracy assessments:
The first step in this process was to filter the photos to remove blurry, excess, or irrelevant shots. I did this manually and ended up with about 750 photos to load into Agisoft. I did not perform any camera calibration for this test.
Then I went through the standard reconstruction workflow, using the “High” quality setting for every step. First I ran “Align Photos” to generate the sparse cloud of tie points. Then I used the georeferencing import tool to load the GCPs. I selected 5 spatially distributed points manually, then 15 other randomly selected points from the 48 GCPs supplied by the surveyors.
The great thing about Agisoft is that once you’ve placed a few GCPs by hand, you can hit the refresh button to automatically populate the rest of them. This is quite the time saver. I still went through each georeferenced photo to ensure the points were placed correctly, because the football field has a lot of repetitive patterns and I wasn’t sure the program would handle them. To my surprise, the automatic placement did not have that problem. I imagine there is some threshold of accuracy or confidence the algorithm uses to decide whether to include a photo in the georeferencing.
After the quality check, I ran “Optimize Photos” to realign the photos and the tie points based on the new georeferencing information. Now the axes are shown correctly in the rendered point cloud.
I followed this by running dense point cloud construction (the longest step, at about 6 hours), meshing (using the Height Field option), texturing, and orthomosaic generation.
The orthomosaic was generated with a resolution of 2.93mm/pix.
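That ground resolution can be sanity-checked against the camera geometry. Here is a rough sketch; the Canon S100 specs (7.44 mm sensor width, 5.2 mm focal length at the wide end, 4000 px image width) are my assumed nominal values, not numbers from the survey:

```python
# Sanity check: what flying height does a 2.93 mm/pix GSD imply?
# Assumed nominal Canon S100 specs (wide end) -- not from the survey data.
SENSOR_WIDTH_M = 7.44e-3   # sensor width, meters
FOCAL_LENGTH_M = 5.2e-3    # focal length at widest zoom, meters
IMAGE_WIDTH_PX = 4000      # image width, pixels

gsd_m = 2.93e-3            # reported orthomosaic resolution, m/pix

# GSD = sensor_width * height / (focal_length * image_width)
# => height = GSD * focal_length * image_width / sensor_width
height_m = gsd_m * FOCAL_LENGTH_M * IMAGE_WIDTH_PX / SENSOR_WIDTH_M
print(f"Implied flying height: {height_m:.1f} m")
```

Under these assumptions the implied flying height comes out around 8 m, comfortably under the roughly 50-foot (15 m) roof.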
The benefit to using Agisoft is that it has a built-in error calculator. It produces a table similar to the one in the Minnesota Positional Accuracy Handbook (p 5-6). The following table shows the error on each axis for each GCP used and the total error for each axis.
| Label | Error (m) | X error (m) | Y error (m) | Z error (m) |
|-------|-----------|-------------|-------------|-------------|
It is important to understand what the error values represent. For each GCP measured with survey-grade equipment, Agisoft calculates the difference between the surveyed GCP position and the same point on the model. Of course, multiple sources of inaccuracy affect the error measurement: the camera positions themselves (estimated at 10 m), the survey equipment (negligible: 6 mm), the marker placement in the software (0.5 pix), and the tie point accuracy (1 pix).
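In other words, the per-GCP error is the 3D distance between the surveyed coordinate and the corresponding model coordinate, and the per-axis totals are root-mean-square errors over all GCPs. A minimal sketch of those calculations, using made-up residuals (the dx/dy/dz values below are illustrative, not the actual survey data):

```python
import math

# Illustrative residuals (model minus surveyed), in meters -- NOT the real data.
residuals = [
    (0.004, -0.003, 0.006),
    (-0.002, 0.005, -0.004),
    (0.003, 0.001, 0.002),
]

# Per-GCP error: Euclidean distance between model point and surveyed point.
per_gcp = [math.sqrt(dx**2 + dy**2 + dz**2) for dx, dy, dz in residuals]

# Per-axis total error: root-mean-square over all GCPs.
n = len(residuals)
rmse_x = math.sqrt(sum(dx**2 for dx, _, _ in residuals) / n)
rmse_y = math.sqrt(sum(dy**2 for _, dy, _ in residuals) / n)
rmse_z = math.sqrt(sum(dz**2 for _, _, dz in residuals) / n)

print(per_gcp, rmse_x, rmse_y, rmse_z)
```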
The total error calculated by Agisoft is useful, but per the reading material above we want to use the National Standard for Spatial Data Accuracy (NSSDA). I quickly loaded the table into R (honestly, it would be easy enough to do in Excel, but I want to keep my R skills sharp) and produced the following NSSDA-worthy statistics:
> Tested 0.01591555 meters horizontal accuracy at 95% confidence level.
>
> Tested 0.011297 meters vertical accuracy at 95% confidence level.
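These statements follow directly from the per-axis RMSE values: under NSSDA, horizontal accuracy at 95% confidence is 1.7308 × RMSE_r when RMSE_x and RMSE_y are approximately equal (with RMSE_r the radial RMSE), and vertical accuracy is 1.9600 × RMSE_z. The original numbers came from R; this is an equivalent Python sketch, and the RMSE inputs below are back-solved from the reported results rather than taken from the survey, so treat them as placeholders:

```python
# NSSDA accuracy statistics (FGDC-STD-007.3-1998).
# The RMSE values here are back-solved from the reported accuracies so the
# printout matches the post; substitute your own per-axis RMSEs in practice.
rmse_r = 0.01591555 / 1.7308   # horizontal radial RMSE, meters (placeholder)
rmse_z = 0.011297 / 1.9600     # vertical RMSE, meters (placeholder)

# Horizontal: when RMSE_x ~= RMSE_y, Accuracy_r = 1.7308 * RMSE_r,
# where RMSE_r = sqrt(RMSE_x^2 + RMSE_y^2).
accuracy_horizontal = 1.7308 * rmse_r

# Vertical: Accuracy_z = 1.9600 * RMSE_z.
accuracy_vertical = 1.9600 * rmse_z

print(f"Tested {accuracy_horizontal:.8f} meters horizontal accuracy "
      f"at 95% confidence level.")
print(f"Tested {accuracy_vertical:.6f} meters vertical accuracy "
      f"at 95% confidence level.")
```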
In doing this project a few things became clear:
- We need to run the same experiment outdoors, where we can fly with the autopilot. The missing sections detract from the usefulness of the model as a whole. Plus, we could fly the same area with far fewer photos, even at 90% overlap/sidelap.
- We need to use physical, high-contrast GCP targets. We know that onboard UAS GPS accuracy is relatively low, especially compared to RTK GPS. In general, it would be good practice to have this method available for our work.
- Overall, UAS are not only more efficient than traditional methods but also very accurate, or at least the Agisoft suite is very good. We do not (yet) have the same error-calculation tools in OpenDroneMap, but I know it would be very beneficial to have these calculations in that software.