UAS Mapping: Positional Accuracy Assessment


Recently we partnered with folks from the University of Akron to help determine how accurate UAS are compared to traditional mapping methods. Given the current difficulty of flying commercially in the National Airspace, this partnership gave us a unique opportunity to fly inside their Field House. This controlled space had a lot of advantages, and some disadvantages. We were able to fly freely over a clean, flat surface, and the football field gridlines offered easy markers for the survey crew to use as Ground Control Points. Unfortunately, flying indoors meant no GPS to help us navigate or run the autopilot, and the roof was only about 50 feet overhead. With sufficient care and copious overlap, we were able to fly the whole field with our 3DR IRIS+ quadcopter and a Canon S100 camera. We took about 900 photos, roughly 10x more than we would have needed when running on autopilot. With the reconstructed orthomosaic (built in Agisoft PhotoScan) and the measured GCP data, I was able to calculate the RMSE between the two datasets for each coordinate: X, Y, and Z.


For me, this was a learning experience in determining map accuracy, so I spent some time reading up on how it works and on industry standards. Two resources in particular helped me understand positional accuracy assessments: the Minnesota Positional Accuracy Handbook and the National Standard for Spatial Data Accuracy (NSSDA).


The first step in this process was to filter out the photos to remove blurry, excess, or irrelevant shots. I did this manually and ended up with about 750 photos to load into Agisoft. I did not perform any camera calibration for this test.

Then I went through the standard reconstruction workflow. For all of the steps I used the “High” quality setting. First I ran “Align Photos” to generate the sparse point cloud of tie points. Then I used the georeferencing import tool to load the GCPs. I selected 5 spatially distributed points manually, then 15 other randomly selected points from the 48 GCPs supplied by the surveyors.


The great thing about Agisoft is that once you’ve placed a few GCPs by hand, you can hit the refresh button to automatically populate the rest of them. This is quite the time saver. I still went through each georeferenced photo to make sure the points were placed correctly, because the football field has a lot of repetitive patterns and I wasn’t sure the program would handle that. To my surprise, it turned out not to be a problem. I imagine the algorithm applies some accuracy or confidence threshold before including a photo in the georeferencing.

After the quality check, I “Optimized Photos” to realign the photos and the tie points based on the new georeferencing information. Now the axes are shown correctly in the rendered point cloud.


I followed this by running dense point cloud construction (the longest step, at about 6 hours), meshing (using the Height Field option), texturing, and orthomosaic generation.
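For anyone who would rather script this than click through the GUI, PhotoScan also exposes a Python API. I did everything through the interface, so treat the sketch below as a rough outline of the same workflow rather than the exact steps I ran; the method and enum names are from memory of the 1.2-era API and may differ between versions.

# Rough PhotoScan (Agisoft) Python sketch of the workflow above.
# Names are from the 1.2-era API and may differ in other versions.
import PhotoScan

doc = PhotoScan.app.document
chunk = doc.chunk  # assumes the photos and GCP markers are already loaded

# "Align Photos" at High quality -> sparse cloud of tie points
chunk.matchPhotos(accuracy=PhotoScan.HighAccuracy,
                  preselection=PhotoScan.GenericPreselection)
chunk.alignCameras()

# Re-optimize the alignment after the GCP georeferencing has been applied
chunk.optimizeCameras()

# Dense cloud, mesh (Height Field), texture, orthomosaic
chunk.buildDenseCloud(quality=PhotoScan.HighQuality)
chunk.buildModel(surface=PhotoScan.HeightField, source=PhotoScan.DenseCloudData)
chunk.buildUV(mapping=PhotoScan.GenericMapping)
chunk.buildTexture(blending=PhotoScan.MosaicBlending)
chunk.buildOrthomosaic()  # older versions produce the orthophoto at export time instead
doc.save()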


The orthomosaic was generated with a resolution of 2.93mm/pix.
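That figure is roughly what you would expect from the camera geometry at our low flying height. As a sanity check, here is an illustrative ground sample distance calculation; the Canon S100 specs below are nominal values and the flight height is a guess, not a measured number.

# Illustrative ground sample distance (GSD) estimate, not measured values.
sensor_width_mm = 7.6     # nominal 1/1.7" sensor width for the Canon S100
focal_length_mm = 5.2     # S100 focal length at its widest
image_width_px = 4000     # S100 image width in pixels
flight_height_m = 8.0     # hypothetical height above the field (the roof was ~15 m)

gsd_mm = (sensor_width_mm * flight_height_m * 1000) / (focal_length_mm * image_width_px)
print("Estimated GSD: {:.2f} mm/pix".format(gsd_mm))  # ~2.9 mm/pix with these assumptions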

Accuracy Assessment

The benefit of using Agisoft is that it has a built-in error calculator. It produces a table similar to the one in the Minnesota Positional Accuracy Handbook (pp. 5-6). The following table shows the error on each axis for each GCP used, as well as the total error for each axis.

Label Error (m) X error (m) Y error (m) Z error (m)
GCP41 0.027775 -0.019520 -0.006123 0.018786
GCP32 0.009071 -0.008301 0.001062 -0.003500
GCP24 0.005397 0.005396 -0.000094 0.000024
GCP1 0.006616 0.004261 -0.005048 -0.000359
GCP14 0.008081 -0.004850 0.006462 -0.000148
GCP23 0.017049 0.017024 0.000461 0.000799
GCP34 0.002853 0.000495 0.001855 0.002111
GCP40 0.019562 0.007012 -0.016356 -0.008123
GCP43 0.014933 -0.008129 0.004997 -0.011487
GCP11 0.005267 0.004112 -0.001799 -0.002756
GCP10 0.005410 -0.004869 -0.000531 0.002297
GCP15 0.007765 0.001099 0.007506 0.001661
GCP2 0.006446 -0.000541 0.006397 -0.000574
GCP20 0.003922 0.002755 -0.000209 -0.002784
GCP25 0.004407 0.002822 0.002376 -0.002411
GCP26 0.009678 0.005925 0.001164 0.007564
GCP30 0.005293 -0.004997 0.001212 -0.001256
GCP33 0.003721 -0.001652 -0.002982 -0.001493
GCP47 0.003736 -0.002187 -0.002255 0.002022
GCP6 0.004730 0.004201 0.001811 -0.001202
Total error 0.010635 0.007311 0.005141 0.005764


It is important to understand what these error values represent. For each GCP that has been measured by survey-grade equipment, Agisoft calculates the difference between the surveyed GCP position and the corresponding point on the model. Of course, there are multiple sources of inaccuracy that feed into the error measurement: the camera position estimates (estimated at 10 m), the survey equipment (negligible: 6 mm), the marker placement in the software (0.5 pix), and the tie point accuracy (1 pix).
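To make that concrete, the Error column for each GCP is just the 3D (Euclidean) distance implied by its X, Y, and Z differences, and the Total error row is the RMSE of each column over all 20 GCPs. A quick Python check against the table above:

import math

# Per-GCP error is the 3D distance implied by the X/Y/Z differences,
# e.g. for GCP41 from the table above:
x, y, z = -0.019520, -0.006123, 0.018786
print(math.sqrt(x**2 + y**2 + z**2))  # ~0.027775 m, matching the Error column

# The total error column is the norm of the three per-axis RMSE values:
rmse_x, rmse_y, rmse_z = 0.007311, 0.005141, 0.005764
print(math.sqrt(rmse_x**2 + rmse_y**2 + rmse_z**2))  # ~0.010635 m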

The total error calculated by Agisoft is useful, but the reading material above says we should report accuracy using the National Standard for Spatial Data Accuracy (NSSDA). I quickly loaded the table into R (honestly, it would be easy enough to do in Excel, but I want to keep my R skills tuned) and produced the following NSSDA-worthy statistics:

Tested 0.01591555 meters horizontal accuracy at 95% confidence level.

Tested 0.011297 meters vertical accuracy at 95% confidence level.

The code can be found here using this data
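The actual calculation was done in R, but for illustration here is a rough Python equivalent of the NSSDA worksheet from the Minnesota handbook. The error lists are the X/Y/Z columns from the table above; note that NSSDA has a couple of variants for the horizontal statistic depending on how close RMSE_x and RMSE_y are, so the result may differ slightly from the figure reported above.

import math

# X, Y, Z error columns from the GCP table above (metres)
x_err = [-0.019520, -0.008301, 0.005396, 0.004261, -0.004850, 0.017024,
         0.000495, 0.007012, -0.008129, 0.004112, -0.004869, 0.001099,
         -0.000541, 0.002755, 0.002822, 0.005925, -0.004997, -0.001652,
         -0.002187, 0.004201]
y_err = [-0.006123, 0.001062, -0.000094, -0.005048, 0.006462, 0.000461,
         0.001855, -0.016356, 0.004997, -0.001799, -0.000531, 0.007506,
         0.006397, -0.000209, 0.002376, 0.001164, 0.001212, -0.002982,
         -0.002255, 0.001811]
z_err = [0.018786, -0.003500, 0.000024, -0.000359, -0.000148, 0.000799,
         0.002111, -0.008123, -0.011487, -0.002756, 0.002297, 0.001661,
         -0.000574, -0.002784, -0.002411, 0.007564, -0.001256, -0.001493,
         0.002022, -0.001202]

def rmse(errs):
    return math.sqrt(sum(e * e for e in errs) / len(errs))

rmse_x, rmse_y, rmse_z = rmse(x_err), rmse(y_err), rmse(z_err)

# NSSDA horizontal accuracy at the 95% confidence level. When RMSE_x and
# RMSE_y differ, the standard's approximation is 2.4477 * 0.5 * (RMSE_x + RMSE_y);
# if they were equal you would use 1.7308 * sqrt(RMSE_x**2 + RMSE_y**2).
horizontal_95 = 2.4477 * 0.5 * (rmse_x + rmse_y)

# NSSDA vertical accuracy at the 95% confidence level: 1.9600 * RMSE_z
vertical_95 = 1.9600 * rmse_z

print("Horizontal accuracy (95%): {:.4f} m".format(horizontal_95))
print("Vertical accuracy (95%): {:.4f} m".format(vertical_95))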

In doing this project a few things became clear:

  1. We need to run the same experiment outside, where we can fly with the autopilot. The missing sections in the model detract from its usefulness as a whole. Plus, we could fly the same area with far fewer photos, even at 90% overlap/sidelap.
  2. We need to use physical, high-contrast GCP targets. We know that UAS GPS accuracy is relatively low, especially compared to RTK GPS. In general, it would be good practice to have this method available for our work.
  3. Overall, UAS are not only more efficient than traditional methods but also very accurate. Or at least the Agisoft suite is very good. We do not (yet) have the same error-reporting tools in OpenDroneMap, but it would be very beneficial to add these calculations to that software.

Viewing Sparse Point Clouds from OpenDroneMap

This is a post about OpenDroneMap, an open-source project I help maintain. ODM is a toolchain for post-processing drone imagery to create 3D and mapping products. It’s currently in beta and under pretty heavy development. If you’re interested in contributing to the project, head over here.

The Problem

So for most of the 3D objects produced by ODM, we can use MeshLab to view them. However, the sparse point cloud produced by OpenSfM is in JSON format. OpenSfM itself ships with a web viewer for the sparse cloud, so I wanted to see if that would also work from ODM. It seems simple enough, and that’s because it is.
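(As a quick aside, reconstruction.json is plain JSON, so you can poke at it directly before firing up any viewer. The snippet below assumes the usual OpenSfM layout, a list of reconstructions each with "shots" and "points" dictionaries, and uses the sample project path from later in this post.)

import json

# Quick look inside an OpenSfM reconstruction.json
with open("/home/geokota/caliterra/opensfm/reconstruction.json") as f:
    reconstructions = json.load(f)  # a list; small datasets usually have just one

for i, rec in enumerate(reconstructions):
    print("Reconstruction {}: {} registered cameras, {} sparse points".format(
        i, len(rec.get("shots", {})), len(rec.get("points", {}))))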

The Solution

There are three ways that OpenDroneMap can be run on a computer: from a native Linux machine (Ubuntu 14.04+ or Debian), in a Vagrant box, or from Docker. This tutorial will cover the first two, but not the Docker build.

Alright, so in the ODM source tree you can find the OpenSfM files (and the Python script to run their local server) in SuperBuild/src/opensfm. Sample data lives in the data subfolder, and that’s where we will be putting the files we need to view the sparse cloud.

First thing we need to do is run ODM on sample data. You may have your own images, but we’ve also got a decent sample here. I’m going to download caliterra and put it in our home folder, /home/geokota/caliterra. Run the following:

./run.py --project-path /home/geokota/caliterra --end-with opensfm

We don’t need to run past that. This shouldn’t take too long. Now if we look in the project folder, we should find the following:

|-- images
|-- images_resize
|-- opensfm
    |-- reconstruction.json

We are going to copy images_resize and the contents of opensfm into a new folder in SuperBuild/src/opensfm/data/:

cd /home/geokota/OpenDroneMap/SuperBuild/src/opensfm/data/
mkdir caliterra && cd caliterra
cp -R /home/geokota/caliterra/images_resize .
cp -R /home/geokota/caliterra/opensfm/* .
cd ..

Then we are going to run the webserver:

  1. Start an HTTP server from the OpenSfM root (SuperBuild/src/opensfm) with python -m SimpleHTTPServer
  2. Browse to http://localhost:8000/viewer/reconstruction.html#file=/data/caliterra/reconstruction.json.

Now, this will not work yet on a Vagrant machine. You’ll need to edit your Vagrantfile first to forward the port: config.vm.network "forwarded_port", guest: 8000, host: 8000

Once you do this, you should see:

[Image: json_ptcloud.png]

The perspective may be skewed; I don’t really know how to fix that at this point. But you can rotate around in three dimensions and see the flight path over the point cloud:


Next Steps

So what does this mean for OpenDroneMap? For one, this is a simple hack to get another visualization out of our product. More importantly, we have a web-based way to view data. This will be very useful as we transition towards a user interface and a toolchain with more user control (e.g. run OpenSfM only, then analyze the point cloud before continuing on). At the very least, it’s a cool thing I learned about ODM today that I hope someone else will also be able to benefit from. 😀

Filtering Roads from Extracted Streams Data

The Problem

We have to filter out the roads and ditches without removing streams that cross roads or follow them closely. I’m going to use PostGIS to find the intersection of the stream lines data with a buffered roads polygon. If the intersected portion is more than a threshold fraction of the stream line’s length (the query below uses 25%), the line is considered a road and removed from the streams dataset.

Buffering the roads

I found a subset of osm_roads that covered the whole area. Using QGIS, I performed a simple 25-foot buffer on the whole dataset. Instead of dissolving the boundaries in QGIS, however, I did it in PostGIS below using ST_Union().

Finding the intersection

Once the shapefiles were imported to postgres, I made a table that contained the ‘streams’ that were actually roads. This being a large dataset, I wanted to make sure a few things were in order: (1) that the tables I was using were indexed with a spatial index (using gist()), and (2) that I wasn’t working with a multi-polygon layer. I used ST_Dump() to ensure I was working with single polygons in the roads layer.

drop table if exists big_roads;

--create table of single polygon
create table big_roads as 
select (ST_Dump(ST_Union(geom))).geom as geom from odm_roads_all;

-- primary key
alter table big_roads add Column gid serial primary key;

-- create index
create index idx_big_roads_geom
on big_roads 
using gist(geom);

-- new table
drop table if exists intersected_streams;

create table intersected_streams (
 stream_id integer,
 stream_length numeric,
 geom geometry
);

-- create intersected streams
insert into intersected_streams (stream_id, geom) (
with clipped_streams as (
 select * from (
  select DISTINCT ON (streams.id) streams.id as stream_id,
  ST_Length(streams.geom) as stream_length,
  ST_Intersection(streams.geom, roads.geom) as geom
  from big_roads as roads
  INNER JOIN streams
  ON ST_Intersects(roads.geom, streams.geom)) as clipped
),
int_streams as (
 select * from clipped_streams
 where ST_Length(geom) > 0.25 * stream_length
 -- AND clipped_streams.stream_id IN (select id from streams)
)
select id, geom from streams where id in (select stream_id from int_streams)
);

After about 45 minutes, here is the result of my efforts:

[Image: intersected streams.png]

(The image above actually uses a roads polygon that is more accurate than the buffered roads.)

Does it capture all the streams being misclassified as roads? For the most part, it seems so. Does it remove all the other errors, such as ditches, wide highways, and parking lots? No. The biggest takeaway for me from this project has been that urban areas add so much complexity to all of my problems. Send me back to Vermont!!!


You can see above where the stream enters a culvert and passes under the road, which really confused the algorithm.

Next Steps

My next step, instead of creating a new table of intersected streams, is to add a new “isroad” column to the existing streams dataset and reclassify the “roads” in place. I want to make sure I preserve the connectivity of my network for the next step in the process of brook trout habitat analysis.
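As a rough sketch of what that could look like (hypothetical: the connection details are placeholders, it relies on the streams and intersected_streams tables from above, and I have not run it yet):

import psycopg2

# Hypothetical sketch of the planned reclassification: flag the 'streams'
# identified as roads instead of copying them into a separate table.
conn = psycopg2.connect(dbname="gis", user="geokota")  # placeholder connection details
cur = conn.cursor()

# Add the flag column to the existing streams table
cur.execute("alter table streams add column isroad boolean default false;")

# Mark every stream that ended up in intersected_streams as a road,
# leaving the geometry (and therefore the network connectivity) untouched
cur.execute("""
    update streams
    set isroad = true
    where id in (select stream_id from intersected_streams);
""")

conn.commit()
cur.close()
conn.close()

Keeping everything in one table should preserve the network’s connectivity; later steps can simply filter on isroad = false.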