diff --git a/topics/02_passive/images/LF.png b/topics/02_passive/images/LF.png
new file mode 100644
index 0000000000000000000000000000000000000000..6ada4791d1b9fa48a71ce605a869e0af97beb893
Binary files /dev/null and b/topics/02_passive/images/LF.png differ
diff --git a/topics/02_passive/images/viewpoint7_png+img+margin.gif b/topics/02_passive/images/viewpoint7_png+img+margin.gif
new file mode 100644
index 0000000000000000000000000000000000000000..55dd79979b67189e7ad42c2ee280b4517c14353b
Binary files /dev/null and b/topics/02_passive/images/viewpoint7_png+img+margin.gif differ
diff --git a/topics/02_passive/index.md b/topics/02_passive/index.md
index c592fdff99585758c4b1f14f64b4460acfeae0cc..e49fd2a2d74dc466bb9db9e2bae3fb4fa276b9ae 100644
--- a/topics/02_passive/index.md
+++ b/topics/02_passive/index.md
@@ -62,11 +62,11 @@ Increasingly industry pairs vision systems for photogrammetry with laser systems
 <p>iPhone and Android apps for photogrammetry and now LiDAR scanning have multiplied over the last 2 years. The primary driver is product placement, however some open source options are making way for both art and art preservation</p>
 <a href="https://www.myminifactory.com/scantheworld/">Scan the World</a>
 <p><a href="https://projectmosul.org/">Rekrei</a></p>
-<p><a href="https://www.adobe.com/products/aero.html">Adobe Aero</a></p><p><a href="https://www.adobe.com/products/aero.html">Adobe Aero</a></p>
+<p><a href="https://www.adobe.com/products/aero.html">Adobe Aero</a></p>
 <p><a href="https://www.scandy.co/apps/scandy-pro">ScandyPro</a></p>
 <p><a href="https://www.bellus3d.com/faceapp/">Bellus</a></p>
 
-<p><img src="img/bellus.jpg" alt="Bellus"></p>
+<p><img src="images/bellus.jpg" alt="Bellus"></p>
 
 
 
@@ -86,3 +86,81 @@
 
 
 # Light Field
+
+<p>Light Fields are a new field of study in photography. The objective is to capture the full plenoptic content of a scene, defined as the collection of light rays it emits in every direction. If the ideal plenoptic function were known, any novel viewpoint could be synthesized by placing a virtual camera in this space and selecting the relevant light rays.</p>
+<p>In practice, we can only sample light rays at discrete locations. There are two popular optical architectures for this:</p>
+<ul> <li>Multi-camera systems: simply shoot the scene from several locations using an array of cameras (or a single moving one).</li> <li>Lenslets: a single CMOS sensor with an array of lenses in front.</li> </ul>
+<p>In the lenslet approach, each pixel behind a lenslet samples a unique light ray direction. Collecting the pixel at the same offset under every lenslet forms a <strong>sub-aperture image</strong>, which roughly corresponds to what a shifted camera would capture. The resolution of these images is simply the total number of lenslets, and the number of available sub-aperture images is given by the number of pixels behind each lenslet. For reference, the <a href="https://en.wikipedia.org/wiki/Lytro">Lytro Illum</a> provides 15x15 sub-aperture images of 541x434 pixels each, for a total of ~53 Megapixels.</p>
+
+<p><img src="images/viewpoint7_png+img+margin.gif" alt="LF sub-aperture images"></p>
+
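+<p>To make the decomposition concrete, here is a minimal NumPy sketch of sub-aperture extraction. It assumes an idealized monochrome sensor with a square (not hexagonal) lenslet grid perfectly aligned to the pixel grid; the function and sizes are illustrative rather than a real decoder, and actual lenslet raws (e.g. from a Lytro) also need demosaicing, rotation correction and hexagonal-to-square resampling.</p>
+
+```python
+import numpy as np
+
+def extract_subaperture_images(lenslet_img, u, v):
+    """Split an idealized lenslet image into its sub-aperture images.
+
+    lenslet_img: (H*u, W*v) array where each lenslet covers a u x v
+    block of pixels; the lenslet grid itself is H x W.
+    """
+    H, W = lenslet_img.shape[0] // u, lenslet_img.shape[1] // v
+    views = np.empty((u, v, H, W), dtype=lenslet_img.dtype)
+    for i in range(u):
+        for j in range(v):
+            # Pixel (i, j) under every lenslet samples the same ray
+            # direction; gathering it across lenslets yields one view.
+            views[i, j] = lenslet_img[i::u, j::v]
+    return views
+
+# Toy sensor: a 120x160 lenslet grid with 5x5 pixels per lenslet.
+raw = np.random.rand(120 * 5, 160 * 5).astype(np.float32)
+lf = extract_subaperture_images(raw, 5, 5)
+print(lf.shape)  # (5, 5, 120, 160): 5x5 views of 120x160 pixels
+```
+
+<p>The resulting 4D array is the standard two-plane sampling L(u, v, s, t) of the plenoptic function described above: fixing (u, v) selects one sub-aperture view, while fixing (s, t) selects all ray directions captured by one lenslet.</p>
+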
+<p>The most efficient layout for lenslets is hexagonal packing, as it wastes the least pixel area. Note that some pixels are not fully covered by a lenslet and receive erroneous or darker data. This means some sub-aperture images cannot be recovered.</p>
+
+<p><img src="images/LF.png" alt="LF preview"></p>
+
+<p>Light Fields have gained a lot of traction recently thanks to their high potential in VR applications. One impressive work was shown by Google in a SIGGRAPH 2018 paper:</p>
+
+<iframe width="560" height="315" src="https://www.youtube.com/embed/4uHo5tIiim8" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen=""></iframe>
+
+<p>Depth estimation on Light Field data is an active research domain. For now, algorithms are commonly tested on ideal, synthetic light fields such as this <a href="https://lightfield-analysis.uni-konstanz.de/">dataset</a>. Here is one example of a point cloud obtained from a stereo <a href="https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=8478503">matching method</a>:</p>
+
+<iframe title="4D light field - depth estimation" frameborder="0" allowfullscreen="" mozallowfullscreen="true" webkitallowfullscreen="true" allow="fullscreen; autoplay; vr" xr-spatial-tracking="" execution-while-out-of-viewport="" execution-while-not-rendered="" web-share="" src="https://sketchfab.com/models/b9edfdd28c154ecf995da7b8c6590da8/embed"> </iframe>
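+<p>As a rough illustration of the underlying idea (and not the method of the linked paper), here is a minimal, hypothetical "shift-and-variance" baseline in NumPy/SciPy: at the true disparity of a pixel, all sub-aperture views warped toward the central view agree, so the variance across views is minimal.</p>
+
+```python
+import numpy as np
+from scipy.ndimage import shift as subpixel_shift
+
+def depth_from_lightfield(lf, disparities):
+    """Brute-force disparity search on a 4D light field L(u, v, s, t).
+
+    lf:          (U, V, H, W) grayscale sub-aperture views
+    disparities: 1D array of candidate disparities, in pixels per view step
+    """
+    U, V, H, W = lf.shape
+    uc, vc = U // 2, V // 2              # index of the central view
+    cost = np.empty((len(disparities), H, W))
+    for k, d in enumerate(disparities):
+        warped = np.empty((U * V, H, W))
+        for u in range(U):
+            for v in range(V):
+                # Warp each view toward the center by its baseline
+                # times the candidate disparity (the sign convention
+                # depends on the camera layout).
+                warped[u * V + v] = subpixel_shift(
+                    lf[u, v], (d * (u - uc), d * (v - vc)), order=1)
+        # Photo-consistency cost: warped views agree when d is correct.
+        cost[k] = warped.var(axis=0)
+    return disparities[cost.argmin(axis=0)]  # winner-take-all disparity
+
+# Usage on the toy light field from the previous sketch:
+# disp = depth_from_lightfield(lf, np.linspace(-2.0, 2.0, 17))
+```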