_plenoptic_: Of or relating to all the light, travelling in every direction, in a given space.
"We begin by asking what can potentially be seen" - Edward H. Adelson & James R. Bergen, Media Lab, Vision & Modeling Group:
[The Plenoptic Function and the Elements of Early Vision](https://persci.mit.edu/pub_pdfs/elements91.pdf), 1991.
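In Adelson & Bergen's formulation, this is a seven-dimensional function:

$$
P(\theta, \phi, \lambda, t, V_x, V_y, V_z)
$$

where $(V_x, V_y, V_z)$ is the viewing position, $(\theta, \phi)$ the direction of the incoming ray, $\lambda$ its wavelength, and $t$ time.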
...
Why do we want all of the light?
Image-Based Rendering (IBR) for view synthesis is a long-standing problem in the field of computer vision and graphics.
It has applications in robot navigation, film, and AR/VR.
This technique was popularized by the famous bullet-dodge shot in _The Matrix_, which used [120 still cameras and two film cameras](https://filmschoolrejects.com/the-matrix-bullet-time/).
<iframe width="560" height="315" src="https://www.youtube.com/embed/9XM5-CJzrU0?si=eWG2ipj0fqA1Ng13&start=73" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
This setup is so resource-intensive that it prompts researchers to seek simulation shortcuts, such as this paper [using thousands of virtual cameras](https://openaccess.thecvf.com/content/ACCV2022/papers/Li_Neural_Plenoptic_Sampling_Learning_Light-field_from_Thousands_of_Imaginary_Eyes_ACCV_2022_paper.pdf) and neural networks to capture a complete, dense plenoptic function.
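As a rough sketch of that idea (not the paper's actual architecture; the layer sizes and the 5-D ray input are assumptions here), a neural plenoptic function can be a small MLP that regresses a ray to a colour:

```python
import torch
import torch.nn as nn

class PlenopticMLP(nn.Module):
    """Toy plenoptic function: ray (x, y, z, theta, phi) -> RGB colour."""
    def __init__(self, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(5, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),  # RGB in [0, 1]
        )

    def forward(self, rays: torch.Tensor) -> torch.Tensor:
        return self.net(rays)

# Training pairs (ray -> observed colour) could come from real or virtual cameras;
# here we just push a random batch through the untrained network.
model = PlenopticMLP()
rays = torch.rand(4, 5)   # 4 rays: (x, y, z, theta, phi)
colours = model(rays)     # -> (4, 3)
```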
...
In practice, we can only sample light rays at discrete locations. There are two main approaches:
### Multi-Camera Systems
Simply shoot the scene from several locations using an array of cameras (or a single moving one).
Some sophisticated acquisition rigs were shown by Google in a SIGGRAPH 2018 paper:
<iframe width="560" height="315" src="https://www.youtube.com/embed/4uHo5tIiim8?si=7tmQx2MrG5WYh0SK&start=23" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
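A common way to reason about such a rig is the two-plane light-field parameterization $L(u, v, s, t)$, where $(u, v)$ indexes the camera and $(s, t)$ the pixel. A minimal sketch, assuming all images are stacked into one NumPy array (the grid and image sizes are made up):

```python
import numpy as np

# Hypothetical 5x5 camera grid, each camera producing a 480x640 RGB image.
# Axes: (u, v, s, t, colour) -- the two-plane parameterization L(u, v, s, t).
light_field = np.zeros((5, 5, 480, 640, 3), dtype=np.float32)

def sample_ray(lf, u, v, s, t):
    """Nearest-neighbour lookup of a single light ray."""
    return lf[u, v, s, t]

def blend_views(lf, u_frac, v):
    """Synthesize a view between two cameras by linear interpolation along u."""
    u0 = int(np.floor(u_frac))
    u1 = min(u0 + 1, lf.shape[0] - 1)
    w = u_frac - u0
    return (1 - w) * lf[u0, v] + w * lf[u1, v]

novel_view = blend_views(light_field, u_frac=1.5, v=2)  # halfway between cameras 1 and 2
```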
### Lenslets
Lenslets: a single CMOS sensor with an array of micro-lenses mounted in front of it.
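For a square lenslet grid (hexagonal grids need an extra resampling step), the virtual views can be extracted with strided slicing; the sensor dimensions below are made-up assumptions:

```python
import numpy as np

N = 8                                                   # pixels under each microlens
raw = np.zeros((300 * N, 400 * N, 3), dtype=np.uint8)   # hypothetical raw sensor image

def sub_aperture(raw, i, j, n=N):
    """Take pixel (i, j) from under every microlens -> one 300x400 virtual view."""
    return raw[i::n, j::n]

# N*N slightly shifted perspectives of the same scene:
views = [sub_aperture(raw, i, j) for i in range(N) for j in range(N)]
```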
...
The most efficient layout for lenslets is hexagonal packing, as it wastes the fewest sensor pixels.
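The intuition in numbers: discs packed hexagonally cover a fraction

$$
\eta_{\text{hex}} = \frac{\pi}{2\sqrt{3}} \approx 0.907
\qquad\text{vs.}\qquad
\eta_{\text{square}} = \frac{\pi}{4} \approx 0.785
$$

of the sensor, so roughly 12 percentage points more of the pixels sit under a lens.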
<img src="images/LF.png" alt="LF preview">
Light fields have gotten a lot of traction recently thanks to their high potential in VR applications.
### Depth Estimation
Forming an image from these cameras requires sampling one pixel from under each micro-lens to generate virtual viewpoints. The resulting "sub-aperture images" offer different perspectives with subtle shifts, presenting a challenge for depth estimation due to their minute disparities.
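A minimal sketch of one classic approach, assuming the sub-aperture views from above (here as 2-D grayscale float arrays): shift a neighbouring view by candidate sub-pixel disparities and keep the one that best matches the central view.

```python
import numpy as np
from scipy.ndimage import shift as subpixel_shift

def estimate_disparity(center, neighbor, baseline=(0.0, 1.0),
                       candidates=np.linspace(-1.0, 1.0, 21)):
    """Brute-force disparity search between two grayscale sub-aperture views.

    Disparities between neighbouring views are tiny (often sub-pixel),
    hence the fractional candidate steps and interpolated shifting.
    """
    errors = []
    for d in candidates:
        # Warp the neighbour back by disparity d along the camera baseline.
        warped = subpixel_shift(neighbor,
                                (d * baseline[0], d * baseline[1]),
                                order=1, mode="nearest")
        errors.append(np.mean((warped - center) ** 2))
    return candidates[int(np.argmin(errors))]
```

Real methods build per-pixel cost volumes over many views rather than a single global disparity, but the matching principle is the same.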