Erik Strand / pit

Commit e6b3a0d4, authored 6 years ago by Erik Strand

    Clean up discussion of Fourier reconstruction

Parent: b61d5940
Showing 2 changed files with 42 additions and 24 deletions:

- _notes/fourier_series.md: 1 addition, 1 deletion
- project.md: 41 additions, 23 deletions
_notes/fourier_series.md (+1 −1)
...
...
@@ -81,7 +81,7 @@ $$

## Derivation of the Discrete Time Fourier Transform

We can apply the above results to $$\hat{f}$$ as well. Recall that the Fourier transform of
-$$\hat{f}$$ is $$f(-x)$$. Thus the periodic summation of the Fourier transform of $$f$$ is
+$$\hat{f}$$ is $$f(-x)$$. Thus the periodic summation of $$\hat{f}$$ is

$$
\begin{align*}
...
...
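The reworded line leans on two facts from earlier in the note: the transform of $$\hat{f}$$ is $$f(-x)$$, and a periodic summation has a Fourier series whose coefficients are samples of the transform. As a hedged aside, assuming unit sample spacing and the ordinary-frequency convention $$\hat{f}(u) = \int_\mathbb{R} f(x) e^{-2 \pi i u x} dx$$ (the note itself may use a different normalization or a general spacing), the periodic summation of $$\hat{f}$$ can be written as

$$
\begin{align*}
\sum_{n \in \mathbb{Z}} \hat{f}(u - n)
  &= \sum_{k \in \mathbb{Z}} \hat{\hat{f}}(k) \, e^{2 \pi i k u}
   = \sum_{k \in \mathbb{Z}} f(-k) \, e^{2 \pi i k u}
   = \sum_{k \in \mathbb{Z}} f(k) \, e^{-2 \pi i k u},
\end{align*}
$$

which is the DTFT of the samples $$f(k)$$, consistent with the section heading.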
project.md (+41 −23)
...
...
@@ -12,37 +12,40 @@ title: Final Project

## Introduction

-Computed Tomography (CT) can turn 2d projections of a 3d shape like these (TODO insert image)
+Computed Tomography (CT) is used to turn 2d projections of a 3d shape like these (TODO insert image)
into a 3d model like this (TODO insert image).

This is pretty cool. In particular, voxels in the computed model can tell you the density at
specific locations *inside* the scanned object, whereas pixels in the projections can only tell you
the average density along lines that pass *through* the object. So we're getting a lot more out of
-those images than meets the eye.
+those 2d images than meets the eye.

How does it work? That's what I'd like to understand. In particular, I hope to clearly explain the
basic principles of CT, use them to implement a basic reconstruction from scratch, and experiment
with existing solutions to get a sense of the state of the art.

+I'm going to describe the algorithms in 2d, since it will make everything simpler. In this case we
+have an image like this (TODO insert image), and we'd like to reconstruct it only knowing its 1d
+projections at different angles.

-## Theory
-
-Let's start in 2d, since it will make everything simpler. In this case we have an image like this
-(TODO insert image), and we'd like to reconstruct it only knowing its 1d projections at different
-angles.
+## Fourier Reconstruction

-We can describe the image as a density function $$f : \mathbb{R}^2 \rightarrow \mathbb{R}$$. Think
-of $$f(x, y)$$ as the brightness of the image at $$(x, y)$$. We'll assume that this function is
-defined everywhere, but is always zero outside our image.
+### Fourier Slice Theorem

-The projection of this density function to the x axis is obtained by integrating along y:
+Let $$f : \mathbb{R}^2 \rightarrow \mathbb{R}$$ be a density function. If we map density to
+brightness we can view $$f$$ as describing an image. We'll assume that this function is defined
+everywhere, but is always zero outside some finite neighborhood of the origin (say, the bounds of
+the image).
+
+The projection of $$f$$ to the x axis is obtained by integrating along y:

$$
p(x) = \int_\mathbb{R} f(x, y) dy
$$

-Meanwhile, the Fourier transform of $$f$$ is
+Meanwhile, the [Fourier transform](../notes/fourier_transform.html) of $$f$$ is

$$
\hat{f}(u, v)
...
...
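The changed lines above set up the projection and its transform; the slice relationship they build toward is easy to sanity-check numerically. A minimal sketch, not code from this repository, assuming Python with numpy (the rectangle test image is hypothetical):

```python
import numpy as np

# Hypothetical test "density" image: a filled rectangle, zero elsewhere.
f = np.zeros((64, 64))        # indexed f[y, x]
f[20:40, 12:50] = 1.0

# Projection onto the x axis: integrate (here, sum) along y.
p = f.sum(axis=0)

# Discrete analogue of the slice relationship: the 1D DFT of the projection
# equals the v = 0 row of the 2D DFT of the image.
slice_v0 = np.fft.fft2(f)[0, :]
print(np.allclose(np.fft.fft(p), slice_v0))   # True
```

The identity is exact for the DFT, so the check passes to floating-point precision.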
@@ -63,12 +66,14 @@ $$

So the Fourier transform of the 1d projection is a 1d slice through the 2d Fourier transform of the
image. Since the x axis is arbitrary (we can rotate the image however we want), this works for other
-angles as well. That is, if you take the projection onto the line that makes an angle $$\theta$$
-with the x axis, its Fourier transform is equal to the slice along the same line in frequency space.
+angles as well. That is, if you project $$f$$ onto the line that makes an angle $$\theta$$ with the
+x axis, its Fourier transform is equal to the slice along the corresponding line in frequency space.
+This result is known as the Fourier Slice Theorem, and is the foundation of most reconstruction
+techniques.

-Conceptually this tells us everything we need to know about the reconstruction. First we take the
-Fourier transform of each projection individually. Then we combine them by arranging them radially.
-Finally we take the inverse Fourier transform of the resulting 2d function. We'll end up with the
-reconstructed image.
+Conceptually this tells us everything we need to know about the reconstruction. First we take the 1D
+Fourier transform of each projection. Then we combine them by arranging them radially. Finally we
+take the inverse Fourier transform of the resulting 2d function. We'll end up with the reconstructed
+image.

It also tells us how to generate the projections, given that we don't have a 1d x-ray machine. First
...
...
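The truncated context line above ("It also tells us how to generate the projections…") is about producing test data. Here is a minimal sketch of the spatial-domain route (rotate and sum, i.e. direct line integrals) rather than the frequency-space route the text describes; it assumes Python with numpy and scipy, and `make_projections` and the phantom are hypothetical, not from this repository:

```python
import numpy as np
from scipy.ndimage import rotate

def make_projections(image, angles_deg):
    """One projection per angle: rotate the image, then integrate (sum)
    along the y axis. This samples the Radon transform directly."""
    return np.array([
        rotate(image, angle, reshape=False, order=1).sum(axis=0)
        for angle in angles_deg
    ])

# Hypothetical phantom: an off-center rectangle, projected at 60 angles.
phantom = np.zeros((128, 128))
phantom[40:70, 55:100] = 1.0
angles = np.linspace(0.0, 180.0, 60, endpoint=False)
sinogram = make_projections(phantom, angles)   # shape (60, 128)
```

Either route works for test data; the rotate-and-sum version sidesteps the frequency-space interpolation discussed in the next hunk.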
@@ -77,10 +82,23 @@ the inverse Fourier transform of each slice. These are the projections. This wil
generating testing data.

-## Discretization
+### Discretization

-Of course the clean math of the theory has to be modified a bit to make room for reality. First of
-all, our projections will be represented as discrete pixels, not perfect functions. So we'll want
-our Fourier transforms to be DFTs. And when we combine the transforms of the projections, we won't
-naturally end up with a nice 2d grid of values on which to perform an inverse DFT. So we'll have to
-interpolate.
+Naturally the clean math of the theory has to be modified a bit to make room for reality. In
+particular, we only have discrete samples of $$f$$ (i.e. pixel values) rather than the full
+continuous function (which in our current formalism may contain infinite information). This has two
+important implications.
+
+First, we'll want our Fourier transforms to be discrete Fourier transforms (DFTs). Luckily the
+continuous and discrete Fourier transforms are effectively interchangeable, as long as the functions
+we work with are mostly spatially and bandwidth limited, and we take an appropriate number of
+appropriately spaced samples. You can read more about these requirements
+[here](../notes/fourier_series.html).
+
+Second, since we combine the DFTs of the projections radially, we'll end up with samples (of the 2D
+Fourier transform of our image) on a polar grid rather than a cartesian one. So we'll have to
+interpolate. This step is tricky and tends to introduce a lot of error. I'll rather arbitrarily use
+[Catmull-Rom interpolation](http://entropymine.com/imageworsener/bicubic/), but there are better
+algorithms out there that come closer to the theoretically ideal
+[sinc interpolation](https://en.wikipedia.org/wiki/Whittaker%E2%80%93Shannon_interpolation_formula).
+The popular [gridrec](https://www.ncbi.nlm.nih.gov/pubmed/23093766) method is one.
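Since the added text settles on Catmull-Rom interpolation for the polar-to-cartesian resampling step, a minimal 1D sketch of that kernel may help (an illustration with hypothetical names, not the repository's implementation, assuming Python with numpy):

```python
import numpy as np

def catmull_rom_1d(samples, t):
    """Evaluate a Catmull-Rom spline through equally spaced samples at
    fractional positions t (in sample-index units). Works for real or
    complex samples, since the formula is linear in them. Edge handling
    is kept minimal: stencil indices are clamped to stay in range."""
    samples = np.asarray(samples)
    t = np.asarray(t, dtype=float)
    i = np.clip(np.floor(t).astype(int), 1, len(samples) - 3)
    u = t - i                                   # fractional offset
    p0, p1, p2, p3 = samples[i - 1], samples[i], samples[i + 1], samples[i + 2]
    return 0.5 * (
        2.0 * p1
        + (p2 - p0) * u
        + (2.0 * p0 - 5.0 * p1 + 4.0 * p2 - p3) * u ** 2
        + (3.0 * p1 - p0 - 3.0 * p2 + p3) * u ** 3
    )

# Resample a signal at non-integer positions. Applied separably in u and v,
# the same kernel can move the radially arranged DFT samples onto the
# cartesian grid needed for the inverse 2D DFT.
signal = np.sin(np.linspace(0.0, 2.0 * np.pi, 32))
print(catmull_rom_1d(signal, np.array([3.25, 10.5, 20.75])))
```

As the diff notes, this is a pragmatic choice rather than an optimal one; windowed-sinc or gridrec-style kernels come closer to ideal sinc interpolation.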