Processing LiDAR data on an iPad
In this first tutorial, we will walk through how simple it is to process LiDAR data captured with an AP-LiDAR-One Gen II or AP-LiDAR-M Gen II sensor.
On your iPad, go to the App Store and search for Aerial Precision to get the latest version of our app. The minimum version required for this tutorial is v2.0.
You can also click on the link below:
https://apps.apple.com/nl/app/aerial-precision/id1522046404?l=en-GB
When you launch the Aerial Precision app, you will see a project browser view with access to your recent projects. You can also open an existing project with the import button in the top right corner.
Click on the + button to create a new project.
The first thing you will be asked is where you want to store the project and all its data. Please be aware that LiDAR data demands a significant amount of space: an average scan requires about 15 GB of free space. You can store your project on a removable drive; however, for the best performance and user experience, we recommend storing it on the local SSD. Once you are done working on a project, you can move it to another location.
For this tutorial we will use a folder named Tutorial in the On My iPad location.
Give your project a name and select your desired coordinate reference system.
By default the app will propose EPSG:4979, which is the standard on most GNSS receivers. It is a WGS 84 reference system with coordinates expressed as latitude, longitude, and height in meters above the ellipsoid.
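To make the axis convention concrete, EPSG:4979 coordinates (latitude, longitude, ellipsoidal height) can be converted to Earth-centered Cartesian coordinates with the standard WGS 84 formulas. This is a minimal, stdlib-only sketch for illustration; it is not part of the app:

```python
import math

# WGS 84 ellipsoid parameters
A = 6378137.0            # semi-major axis (m)
F = 1 / 298.257223563    # flattening
E2 = F * (2 - F)         # first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, h):
    """Convert (latitude, longitude, ellipsoidal height) to ECEF meters."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    # Prime vertical radius of curvature at this latitude
    n = A / math.sqrt(1 - E2 * math.sin(lat) ** 2)
    x = (n + h) * math.cos(lat) * math.cos(lon)
    y = (n + h) * math.cos(lat) * math.sin(lon)
    z = (n * (1 - E2) + h) * math.sin(lat)
    return x, y, z
```

At latitude 0, longitude 0 and height 0, the result is simply the semi-major axis on the X axis.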
To choose a different one, click on the x next to the name to clear it, then search for the desired system among the more than 6,500 available.
For this tutorial, we will keep the proposed EPSG:4979.
Click on CREATE to create and open your project.
When your project opens, it will show an empty working space. You can navigate this space with gestures on the view.
Click on the big + button in the bottom right corner and choose AP Scan to import your first scan into the project.
Find your .apScan file and click on it to import it.
On import, an AP Scan Processing window will open and the import process will start. Once the import is completed, you will see an overview of the scan: date, scan time, area covered, and speed, among other details.
There are two settings required to start the LiDAR data processing:
Sensor Mount Position: This tells the app where the left and right GNSS antennas were positioned relative to the LiDAR sensor while scanning.
GNSS Base Station: This is the RINEX correction data used to post-process the GNSS data of each antenna (PPK, post-processed kinematic).
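As background, RINEX files are plain text with a fixed-column header, and the format version can be read from the first header line. The following sketch is purely illustrative and independent of the app:

```python
def rinex_version(path):
    """Read the RINEX format version from a file's first header line.

    Per the RINEX specification, the first line carries the version
    number in columns 1-9 and the label "RINEX VERSION / TYPE".
    """
    with open(path) as f:
        line = f.readline()
    if "RINEX VERSION / TYPE" not in line:
        raise ValueError("not a RINEX file")
    return float(line[:9])
```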
For this tutorial, only the DJI M350 RTK, Downward Single SKYPORT is available. Click on it to select it.
This section shows all GNSS base station data available in the project. At this point there is none. Click on the + button to import one. The app supports .gz, .crx, .*o and .apBase files. For this tutorial we will use data from our own base station.
Once all the required information is provided, the PROCESS SCAN button becomes available. Click on it to start the process. It will take a few minutes to complete.
In the meantime, this is how the LiDAR data gets processed:
First, the data of each GNSS antenna is post-processed using the base station data provided. For the solution to be valid, both antenna solutions must meet the following:
Samples at 18 Hz or more. The nominal value is 20 Hz.
At least 18 satellites on average during the scan.
A fixed solution for at least 90% of the scan.
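The three acceptance criteria above can be expressed as a simple check. The function name and signature are illustrative, not the app's actual API:

```python
def antenna_solution_valid(rate_hz, avg_satellites, fixed_ratio):
    """Mirror the three acceptance criteria for a GNSS antenna solution:
    sample rate >= 18 Hz, >= 18 satellites on average, >= 90% fixed."""
    return rate_hz >= 18 and avg_satellites >= 18 and fixed_ratio >= 0.90

# The scan in this tutorial: 20 Hz, 27 satellites, 100% fixed
tutorial_ok = antenna_solution_valid(20, 27, 1.0)
```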
In this tutorial we have 20 Hz, 27 satellites and a 100% fixed solution on both antennas.
Next, the full INS trajectory is calculated: the data from our survey-grade IMU (accelerometers and gyroscopes) is fused with the GNSS solutions from the previous step using the most advanced fusion techniques available. The nominal rate of this solution is 2,000 Hz.
Using all CPU and GPU cores available on the iPad (8 CPU and 10 GPU cores on the latest iPad Pro M2 version), the LiDAR points are calculated. Our app can calculate more than 20 million points per second.
For this tutorial, more than 175 million points are obtained; given the extent of the scanned area, this is more than 250 points per square meter.
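As a rough back-of-the-envelope check, the figures quoted above imply a scanned area on the order of 70 hectares (the real numbers are "more than", so this is only an approximation):

```python
# Approximate area implied by the quoted point count and density
points = 175_000_000           # points obtained in this tutorial
density = 250                  # points per square meter
area_m2 = points / density     # implied area in square meters
area_ha = area_m2 / 10_000     # converted to hectares
```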
During the indexing phase, the points are sorted by their location in space, similar to octree indexing, but with an in-house developed technique that allows billion-point point clouds to be managed on mobile devices such as an iPad.
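To give an idea of what octree-style indexing means, each point's path through the tree is determined by which half of a node's bounds it falls in along each axis. This is a generic illustration only; the app's in-house technique is not public:

```python
def octree_child(point, center):
    """Return the child octant index (0-7) of `point` relative to a
    node's `center`: one bit per axis (x, y, z)."""
    ix = 1 if point[0] >= center[0] else 0
    iy = 1 if point[1] >= center[1] else 0
    iz = 1 if point[2] >= center[2] else 0
    return ix | (iy << 1) | (iz << 2)
```

Recursively applying this per level groups nearby points into the same branches, which is what makes spatial queries and level-of-detail rendering fast.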
Once the processing is completed, close the window by clicking the Done button in the top right corner.
The project now presents a panel on the right side containing all project assets. In this tutorial we have imported an AP Scan and GNSS base station data. You can show/hide this panel by clicking the button next to it at the top.
Hide the panel to have the full screen available for the point cloud.
Use gestures to move around and explore your point cloud:
With one finger, the camera pans around its center position, which is the point at the center of the screen when the pan begins.
With two fingers you can zoom and move at the same time.
With three fingers you can change the camera height if needed.