---
layout: default
title: Passive scanning
nav_order: 3
mathjax: true
---
# Passive scanning
{: .no_toc}
## Table of contents
{: .no_toc .text-delta }
1. TOC
{:toc}
_Photogrammetry_ is the collection and organization of reliable information about physical objects and the environment through the process of recording, measuring and interpreting photographic images and patterns of electromagnetic radiant imagery and other phenomena.
Photogrammetry was first documented by the Prussian architect [Albrecht Meydenbauer](https://opus4.kobv.de/opus4-btu/files/749/db186714.pdf) in 1867. Since then it has been used for everything from simple measurement or color sampling to recording complex 3D [Motion Fields](https://en.wikipedia.org/wiki/Motion_field).
<img src="images/data_model_photogrammetry.png" alt="Data Model of Photogrammetry">
### Accessibility
Unlike other scanning methods that require precise orbital plans or specialized equipment, photogrammetry can be achieved simply by flying a drone in a circular pattern and capturing multiple photos. Utilizing the location data from the drone, one can construct detailed models like the example shown here: a typical medium-resolution aerial photogrammetry scan of a barn.
With 50-100 images, a reasonably accurate model can be produced. Such models are often used in surveying and restoration projects at scales ranging from handheld objects to entire cities. This accessibility makes photogrammetry an attractive option for many applications, with results that can be sufficiently accurate depending on the specific requirements.
However, it's essential to note that photogrammetry lacks inherent scale. Without a reference point or prior knowledge of the camera locations, the resulting model has no definitive scale, because the images themselves carry no absolute scale information. Incorporating at least one reference is therefore crucial: for example, marking a facade with visual markers at known distances, such as pieces of tape, allows the model to be scaled in a 3D modeling program based on those references.
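As a minimal sketch of that scaling step (the point coordinates and reference distance below are hypothetical), the whole correction reduces to one ratio applied to every vertex:

```python
import numpy as np

# Endpoints of a known reference (e.g. a piece of tape on the facade),
# located in the unitless reconstructed model. Coordinates are made up.
p_a = np.array([0.12, 1.94, 0.33])
p_b = np.array([0.12, 2.71, 0.35])

# The same distance measured on the real object (hypothetical: 0.50 m of tape).
real_distance_m = 0.50

# One global scale factor brings the entire model into metric units.
scale = real_distance_m / np.linalg.norm(p_b - p_a)

vertices = np.random.rand(1000, 3)       # stand-in for the reconstructed mesh vertices
vertices_metric = vertices * scale       # every vertex is scaled by the same factor
```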
<p>Stereo matching is also known as "disparity estimation", referring to the process of identifying which pixels in multiscopic views correspond to the same 3D point in a scene.</p>
<p>Stereo matching was used early on in stereophotogrammetry: the estimation of 3D coordinates from measurements taken in two or more images by identifying common points. This technique was used throughout the early 20th century to generate topographic maps.</p>
<p><img src="images/stereo_plotter.jpg" alt="StereoPlotter"></p>
<p>While the analog versions of these techniques have waned in popularity, stereophotogrammetry still has applications in capturing the dynamic characteristics of previously difficult-to-measure systems such as running <a href="https://www.spiedigitallibrary.org/conference-proceedings-of-spie/8348/1/Dynamic-characteristics-of-a-wind-turbine-blade-using-3D-digital/10.1117/12.915377.short?SSO=1">wind turbines</a>.</p>
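As a rough, modern illustration of disparity estimation (not the analog stereoplotter workflow), the sketch below uses OpenCV's block matcher on a rectified stereo pair; the file names, focal length, and baseline are placeholders.

```python
import cv2
import numpy as np

# Rectified left/right views of the same scene (file names are placeholders).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block matching: for each pixel in the left image, search along the same row
# of the right image for the best-matching patch; the horizontal offset is the disparity.
matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> pixels

# For a rectified pair, triangulation reduces to depth = focal_length * baseline / disparity.
focal_px = 1200.0      # focal length in pixels (placeholder)
baseline_m = 0.12      # distance between the two cameras in meters (placeholder)
valid = disparity > 0
depth_m = np.zeros_like(disparity)
depth_m[valid] = focal_px * baseline_m / disparity[valid]
```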
### When is it useful?
Photogrammetry is useful in outdoor settings, where all you need is a handheld camera and some patience. In this example, note the loss of quality towards the top, where pixel resolution becomes problematic:
<div class="sketchfab-embed-wrapper"> <iframe title="Arc de Triomphe - photogrammetry" frameborder="0" allowfullscreen="" mozallowfullscreen="true" webkitallowfullscreen="true" allow="fullscreen; autoplay; vr" xr-spatial-tracking="" execution-while-out-of-viewport="" execution-while-not-rendered="" web-share="" src="https://sketchfab.com/models/65937fd27de647c0a8ac99ce8275c03e/embed"> </iframe> <p style="font-size: 13px; font-weight: normal; margin: 5px; color: #4A4A4A;"> <a href="https://sketchfab.com/3d-models/arc-de-triomphe-photogrammetry-65937fd27de647c0a8ac99ce8275c03e?utm_medium=embed&utm_campaign=share-popup&utm_content=65937fd27de647c0a8ac99ce8275c03e" target="_blank" style="font-weight: bold; color: #1CAAD9;"> Arc de Triomphe - photogrammetry </a> by <a href="https://sketchfab.com/nicolasdiolez?utm_medium=embed&utm_campaign=share-popup&utm_content=65937fd27de647c0a8ac99ce8275c03e" target="_blank" style="font-weight: bold; color: #1CAAD9;"> Nicolas Diolez </a> on <a href="https://sketchfab.com?utm_medium=embed&utm_campaign=share-popup&utm_content=65937fd27de647c0a8ac99ce8275c03e" target="_blank" style="font-weight: bold; color: #1CAAD9;">Sketchfab</a></p></div>
### Unified Workflow
Photogrammetry captures geometry and texture/color in a single workflow.
### Affordability and Flexibility
Depending on the end-use application, almost any camera will work, provided there is enough light and the post-processing software is robust.
### Real-Time Feedback & Processing*
*As models improve. The [Ingenuity drone](https://mars.nasa.gov/technology/helicopter/#) relies on photogrammetry-based onboard processing for ground distance estimation, showcasing the efficacy of passive sensing approaches in complex environments.
<p><img src="images/lidar_vs_photogrammetry_drone.jpg" alt="Drone imaging"></p>
### Lighting
Lighting conditions in the scene are crucial to the quality of the scan, so a controlled environment is highly preferred. Precision is improving but can still be completely thrown off by certain lighting conditions, in much the same way that LiDAR struggles with smooth surfaces.
### Precision Limitations
Increasingly, industry pairs vision-based photogrammetry systems with laser systems to balance the benefits of both.
### Industry Standard
[Autodesk ReCap](https://www.autodesk.com/products/recap/overview?term=1-YEAR)
[Agisoft Metashape](https://www.agisoft.com)
### ML-Powered
<a href="https://www.pix4d.com/blog/machine-learning-meets-photogrammetry">Pix4D</a>
<p>Machine learning improves reconstruction accuracy while also extracting information about the contents of photogrammetry datasets.</p>
### Apps
iPhone and Android apps for photogrammetry and now LiDAR scanning have multiplied over the last several years:
[Scan the World](https://www.myminifactory.com/scantheworld/)
[Rekrei](https://projectmosul.org/)
[Adobe Aero](https://www.adobe.com/products/aero.html)
[ScandyPro](https://www.scandy.co/apps/scandy-pro)
[Bellus](https://www.bellus3d.com/faceapp/)
<img src="images/bellus.jpg" alt="Bellus">
<p>Intrinsic and extrinsic camera parameters describe, respectively, the camera's internal imaging geometry (focal length, principal point, lens distortion) and its position and orientation in the scene; both must be estimated to recover 3D structure.</p>
<p>The <a href="https://en.wikipedia.org/wiki/Levenberg%E2%80%93Marquardt_algorithm">Levenberg–Marquardt algorithm</a>, also known as damped least squares (DLS), is used to minimize the reprojection error over the 3D coordinates and camera parameters. This procedure is typically called <a href="https://en.wikipedia.org/wiki/Bundle_adjustment">bundle adjustment</a>.</p>
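As a toy illustration of bundle adjustment (the pinhole model, synthetic data, and parameter packing below are simplifications, not a production pipeline), SciPy's Levenberg–Marquardt solver can jointly refine camera poses and 3D points by minimizing reprojection error:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(points3d, rvec, tvec, f):
    """Simple pinhole projection: world -> camera frame, then perspective divide."""
    cam = Rotation.from_rotvec(rvec).apply(points3d) + tvec
    return f * cam[:, :2] / cam[:, 2:3]

def residuals(params, n_cams, n_pts, f, observations):
    # params = [rvec, tvec for each camera] followed by [x, y, z for each point]
    cams = params[:n_cams * 6].reshape(n_cams, 6)
    pts = params[n_cams * 6:].reshape(n_pts, 3)
    res = []
    for cam_idx, pt_idx, uv in observations:          # (camera, point, observed pixel)
        pred = project(pts[pt_idx:pt_idx + 1], cams[cam_idx, :3], cams[cam_idx, 3:], f)
        res.append(pred[0] - uv)
    return np.concatenate(res)

# Tiny synthetic scene (all values made up): cameras spread along x, points ~5 units away.
rng = np.random.default_rng(0)
n_cams, n_pts, f = 3, 8, 800.0
true_cams = np.zeros((n_cams, 6))
true_cams[:, 3] = np.linspace(-0.5, 0.5, n_cams)
true_pts = rng.uniform(-1, 1, (n_pts, 3)) + [0.0, 0.0, 5.0]
observations = [
    (c, p, project(true_pts[p:p + 1], true_cams[c, :3], true_cams[c, 3:], f)[0]
     + rng.normal(0, 1, 2))
    for c in range(n_cams) for p in range(n_pts)
]

# Start from a noisy guess and let Levenberg–Marquardt minimize the reprojection error.
x0 = np.concatenate([
    (true_cams + rng.normal(0, 0.01, true_cams.shape)).ravel(),
    (true_pts + rng.normal(0, 0.1, true_pts.shape)).ravel(),
])
result = least_squares(residuals, x0, method="lm", args=(n_cams, n_pts, f, observations))
```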
<p>Groundwork on camera properties and standards for USGS photogrammetry surveys:</p>
<a href="https://www.sciencedirect.com/science/article/abs/pii/0031866373900069">Welch 1973</a>
# ML for Photogrammetry
# Open Source Photogrammetry
# Light Field
<p>Light Fields are a relatively new area of study in photography. The objective is to capture the full plenoptic content of a scene, defined as the collection of light rays emitted from it in every direction. If the ideal plenoptic function were known, any novel viewpoint could be synthesized by placing a virtual camera in this space and selecting the relevant light rays.</p>
<p>In practice, we can only sample light rays at discrete locations. There are two popular optical architectures for this:</p>
<ul> <li>Multi-camera systems: simply shoot the scene from several locations using an array of cameras (or a single moving one).</li> <li>Lenslets: a single CMOS sensor with an array of lenses in front.</li> </ul>
<p>In the lenslet approach, each pixel behind a lenslet captures a unique light ray direction. The collection of pixels at the same position under every lenslet is called a <strong>sub-aperture image</strong>, and roughly corresponds to what a shifted camera would capture. The resolution of these images is simply the total number of lenslets, and the number of sub-aperture images available is given by the number of pixels behind each lenslet. For reference, the <a href="https://en.wikipedia.org/wiki/Lytro">Lytro Illum</a> provides 15x15 sub-aperture images of 541x434 pixels each, for a total of ~53 megapixels.</p>
<p><img src="images/viewpoint7_png+img+margin.gif" alt="LF sub aperture images"></p>
<p>The most efficient layout for lenslets is hexagonal packing, as it wastes the least pixel area. Note that some pixels are not fully covered by a lenslet and receive erroneous or darker data, which means some sub-aperture images cannot be recovered.</p>
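For reference, the coverage achieved by ideal circular lenslets (a standard circle-packing result, assuming perfectly circular apertures) is

$$
\eta_{\text{hex}} = \frac{\pi}{2\sqrt{3}} \approx 0.907,
\qquad
\eta_{\text{square}} = \frac{\pi}{4} \approx 0.785,
$$

so a hexagonal grid leaves roughly 9% of the sensor area uncovered versus about 21% for a square grid.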
<p><img src="images/LF.png" alt="LF preview"></p>
<p>Light Fields have gained a lot of traction recently thanks to their high potential in VR applications. One impressive work was shown by Google in a SIGGRAPH 2018 paper:</p>
<iframe width="560" height="315" src="https://www.youtube.com/embed/4uHo5tIiim8" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen=""></iframe>
<p>Depth estimation on Light Field data is an active research domain. For now, algorithms are commonly tested on ideal, synthetic light fields such as this <a href="https://lightfield-analysis.uni-konstanz.de/">dataset</a>. Here is one example of a point cloud obtained with a stereo <a href="https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=8478503">matching method</a>:</p>
<iframe title="4D light field - depth estimation" frameborder="0" allowfullscreen="" mozallowfullscreen="true" webkitallowfullscreen="true" allow="fullscreen; autoplay; vr" xr-spatial-tracking="" execution-while-out-of-viewport="" execution-while-not-rendered="" web-share="" src="https://sketchfab.com/models/b9edfdd28c154ecf995da7b8c6590da8/embed"> </iframe>
# Light Stage
<p>This <a href="http://www.pauldebevec.com/">impressive device</a> was built to capture the Bidirectional Reflectance Distribution Function (BRDF), which describes a material's optical properties for any viewing direction and any illumination condition. Thanks to the linearity of lighting, the total illumination can be decomposed by direction. The viewing angle also plays a role for reflective or special materials (e.g. iridescence).</p>
<p><img src="images/brdf.png" alt=""></p>
<p>In the most complex case, objects need to be captured from several locations and illuminated from as many directions as possible.</p>
<p><img src="images/light_stage.png" alt=""></p>