From 31efb1a94bafb56e5a9edc255dfcba080bee126e Mon Sep 17 00:00:00 2001
From: Danny Griffin <dgr@mit.edu>
Date: Tue, 23 Apr 2024 23:57:38 -0400
Subject: [PATCH] video fixes

---
 topics/02_passive/index.md | 19 ++++++++-----------
 1 file changed, 8 insertions(+), 11 deletions(-)

diff --git a/topics/02_passive/index.md b/topics/02_passive/index.md
index 2e3cbb0..c70abe8 100644
--- a/topics/02_passive/index.md
+++ b/topics/02_passive/index.md
@@ -111,7 +111,7 @@ _plenoptic_: Of or relating to all the light, travelling in every direction, in
 
 
 "We begin by asking what can potentially be seen" - Edward H. Adelson & James R. Bergen, Media Lab, Vision & Modeling Group:
-[The Plenoptic Function and the Elements of Early Vision](https://persci.mit.edu/pub_pdfs/elements91.pdf)
+[The Plenoptic Function and the Elements of Early Vision](https://persci.mit.edu/pub_pdfs/elements91.pdf), 1991.
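+
+Following the paper, the plenoptic function records the intensity of every light ray as a seven-dimensional function (a compact restatement of the paper's formulation):
+
+$$P(\theta, \phi, \lambda, t, V_x, V_y, V_z)$$
+
+where $(\theta, \phi)$ is the ray's direction, $\lambda$ its wavelength, $t$ the time, and $(V_x, V_y, V_z)$ the viewing position.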
 
 
 
@@ -120,18 +120,17 @@ Why do we want all of the light?
 Image-Based Rendering (IBR) for view synthesis is a long-standing problem in the field of computer vision and graphics.
 Applications in robot navigation, film, and AR/VR.
 
-Basically, the bullet time effect from the matrix.
-<video width="320" height="240" controls>
+The effect of this technique was popularized by _The Matrix_'s famous bullet-dodge shot, which used [120 still cameras and two film cameras](https://filmschoolrejects.com/the-matrix-bullet-time/).
+<video width="500" height="290" controls>
   <source src="images/The_matrix_rootfop_bullet_dodge.mp4" type="video/mp4">
 </video>
 
-<iframe width="560" height="315" src="https://www.youtube.com/embed/9XM5-CJzrU0?si=eWG2ipj0fqA1Ng13&amp;start=73" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
-
 
+<iframe width="560" height="315" src="https://www.youtube.com/embed/9XM5-CJzrU0?si=eWG2ipj0fqA1Ng13&amp;start=73" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
 
 
 
-This is such an intensive calculation, that it prompts researchers to seek simulation shortcuts to reach this result, such as this paper [using thousands of virtual cameras](https://openaccess.thecvf.com/content/ACCV2022/papers/Li_Neural_Plenoptic_Sampling_Learning_Light-field_fro
-m_Thousands_of_Imaginary_Eyes_ACCV_2022_paper.pdf) and neural networks to capture a complete dense plenoptic function.
+This setup is so resource-intensive that it prompts researchers to seek simulation shortcuts, such as this paper [using thousands of virtual cameras](https://openaccess.thecvf.com/content/ACCV2022/papers/Li_Neural_Plenoptic_Sampling_Learning_Light-field_from_Thousands_of_Imaginary_Eyes_ACCV_2022_paper.pdf) and neural networks to capture a complete, dense plenoptic function.
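+
+As a toy illustration of the idea (our sketch, assuming a 5D ray parameterization and a small MLP -- not the paper's architecture):
+
+```python
+import torch
+import torch.nn as nn
+
+# A ray is (x, y, z, theta, phi): a viewpoint plus a direction, i.e. the
+# time- and wavelength-free slice of the plenoptic function.
+model = nn.Sequential(
+    nn.Linear(5, 256), nn.ReLU(),
+    nn.Linear(256, 256), nn.ReLU(),
+    nn.Linear(256, 3), nn.Sigmoid(),  # RGB in [0, 1]
+)
+opt = torch.optim.Adam(model.parameters(), lr=1e-4)
+
+def train_step(rays, colors):
+    # rays: (N, 5) rays rendered from the virtual cameras,
+    # colors: (N, 3) ground-truth RGB observed along each ray.
+    opt.zero_grad()
+    loss = nn.functional.mse_loss(model(rays), colors)
+    loss.backward()
+    opt.step()
+    return loss.item()
+```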
 
 
@@ -140,6 +139,9 @@ In practice, we can only sample light rays in discrete locations. There are two
 ### Multi-Camera Systems
 Simply shoot the scene from several locations using an array of cameras (or a single moving one).
 
+Some sophisticated acquisition rigs were shown by Google in a SIGGRAPH 2018 paper:
+
+<iframe width="560" height="315" src="https://www.youtube.com/embed/4uHo5tIiim8?si=7tmQx2MrG5WYh0SK&amp;start=23" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
 
 ### Lenslets
 Lenslets: a single CMOS sensor with an array of lenses in front.
@@ -153,11 +155,6 @@ The most efficient layout for lenslets is hexagonal packing, as it wastes the fe
 <img src="images/LF.png" alt="LF preview">
 
 
-
-Light Fields have gotten a lot of traction recently thanks to their hight potential in VR applications. One impressive work was shown by Google in in a SIGGRAPH 2018 paper:
-
-https://www.youtube.com/embed/4uHo5tIiim8
-
 ### Depth Estimation
 
 Forming an image from these cameras requires sampling one pixel from each micro lens to generate virtual viewpoints. The resulting "sub-aperture images" offer different perspectives with subtle shifts, presenting a challenge for depth estimation due to their minute disparities.
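+
+A minimal sketch of that sampling (ours; it assumes the raw lenslet image has already been decoded into a 5D NumPy array):
+
+```python
+import numpy as np
+
+def sub_aperture(lenslet, s, t):
+    # lenslet: (H, W, S, T, 3) decoded light field -- an S x T block of
+    # pixels recorded under each of the H x W microlenses.
+    # Fixing (s, t) takes the same pixel under every microlens: the
+    # "one pixel from each micro lens" sampling described above.
+    return lenslet[:, :, s, t]
+
+# Demo on a random 10x10 microlens array with 5x5 pixels per lens:
+lf = np.random.rand(10, 10, 5, 5, 3)
+views = [sub_aperture(lf, s, t) for s in range(5) for t in range(5)]
+# 25 slightly shifted sub-aperture views; their minute disparities are
+# what makes light-field depth estimation challenging.
+```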
-- 
GitLab