Commit 40d203f4: "Answer 15.5", authored by Erik Strand (parent bc48ba10)
1 changed file: _psets/12.md, 77 additions, 2 deletions
@@ -211,6 +211,80 @@ one bit?
What is the SNR due to quantization noise in an 8-bit A/D? 16-bit? How much must the former be
averaged to match the latter?
This depends on the characteristics of the signal. If we assume that it uses the full range of the
A/D converter (but no more), and over time is equally likely to take any voltage in that range, then
the SNR in decibels is
$$
\text{SNR} = 20 \log_{10}(2^n)
$$
where $$n$$ is the number of bits used.
So we have
$$
\begin{align*}
\text{8-bit SNR} &= 20 \log_{10}(2^8) \\
&= 48.2 \si{dB} \\
\text{16-bit SNR} &= 20 \log_{10}(2^{16}) \\
&= 96.3 \si{dB}
\end{align*}
$$
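A quick numeric check of these values, and a sketch of the averaging question, under the assumption
that the quantization noise in successive samples is uncorrelated, so that averaging $$N$$ samples
divides the noise power by $$N$$ and thus raises the SNR by $$10 \log_{10}(N)$$ dB:

```python
import math

def quantization_snr_db(n_bits):
    """SNR of an ideal n-bit quantizer for a full-range uniform signal."""
    return 20 * math.log10(2 ** n_bits)

snr8 = quantization_snr_db(8)    # ~48.2 dB
snr16 = quantization_snr_db(16)  # ~96.3 dB

# Averaging N uncorrelated samples improves the SNR by 10*log10(N) dB,
# so closing the gap between the two converters requires
gap_db = snr16 - snr8             # ~48.2 dB
n_averages = 10 ** (gap_db / 10)  # = 2^16 = 65536 samples

print(f"8-bit SNR:  {snr8:.1f} dB")
print(f"16-bit SNR: {snr16:.1f} dB")
print(f"averages needed: {n_averages:.0f}")
```

Under that (idealized) assumption, the 8-bit converter would need to average $$2^{16} = 65{,}536$$
samples per output value to match the 16-bit one.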
But where does this come from? Recall that the definition of the SNR is the ratio of the power in
the signal to the power in the noise. Let's say we have an analog signal $$f(t)$$ within the range
$$[-A, A]$$. Let its time-averaged distribution be $$p(x)$$. Then the power in the signal is
$$
P_\text{signal} = \int_{-A}^A x^2 p(x) \mathrm{d} x
$$
Let's assume that the signal is equally likely to take on any value in its range, so $$p(x) =
1/(2A)$$. This holds exactly for triangle and sawtooth waves, approximately for sine waves, and not
at all for square waves, so this approximation may or may not be very accurate. Then its power is
$$
\begin{align*}
P_\text{signal} &= \frac{1}{2A} \int_{-A}^A x^2 \mathrm{d} x \\
&= \frac{1}{2A} \frac{1}{3} \left[ x^3 \right]_{-A}^A \\
&= \frac{1}{2A} \frac{1}{3} 2A^3 \\
&= \frac{A^2}{3}
\end{align*}
$$
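As a sanity check on $$P_\text{signal} = A^2/3$$, here is a midpoint-rule Riemann sum of
$$\frac{1}{2A} \int_{-A}^A x^2 \mathrm{d} x$$ (the value of `A` is arbitrary):

```python
A = 1.5          # arbitrary full-scale amplitude
steps = 100_000  # Riemann-sum resolution

dx = 2 * A / steps
# Midpoint rule for (1/(2A)) * integral of x^2 over [-A, A]
power = sum((-A + (i + 0.5) * dx) ** 2 for i in range(steps)) * dx / (2 * A)

print(power, A ** 2 / 3)  # both ~0.75
```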
When the signal is quantized, each measurement $$f(t)$$ is replaced with a quantized version. The
most significant bit tells us which half of the signal range we're in: here, whether we're in
$$[-A, 0]$$ or $$[0, A]$$. Note that each interval is $$A$$ long. The next bit tells us which half
of that half we're in, where each interval is $$A/2$$ long. Continuing this way, the least
significant bit tells us which half of the smallest remaining interval we're in, and each such
interval is $$A/2^{n - 1}$$ long (or equivalently $$2A/2^n$$), where $$n$$ is the number of bits.
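This bit-by-bit halving is just a binary search on the signal range. A minimal sketch (the function
name is my own, not from the original code):

```python
def quantize_bits(x, A, n_bits):
    """Return the n bits of x in [-A, A], halving the interval each step."""
    lo, hi = -A, A
    bits = []
    for _ in range(n_bits):
        mid = (lo + hi) / 2
        if x >= mid:
            bits.append(1)   # upper half
            lo = mid
        else:
            bits.append(0)   # lower half
            hi = mid
    return bits, (lo, hi)    # final interval has width 2A / 2^n_bits

bits, (lo, hi) = quantize_bits(0.3, 1.0, 3)
print(bits, hi - lo)  # [1, 0, 1], interval width 2*1.0/2^3 = 0.25
```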
The uncertainty we have about the original value is thus plus or minus half the least significant
bit. So the quantization error will be in the range $$[-A/2^n, A/2^n]$$. Since we're assuming our
signal is equally likely to take any value, the quantization error is equally likely to fall
anywhere in this range. Thus the power in the quantization noise is
$$
\begin{align*}
P_\text{noise} &= \frac{2^n}{2A} \int_{-A/2^n}^{A/2^n} x^2 \mathrm{d} x \\
&= \frac{2^n}{2A} \frac{1}{3} \left[ x^3 \right]_{-A/2^n}^{A/2^n} \\
&= \frac{2^n}{2A} \frac{1}{3} \frac{2 A^3}{2^{3n}} \\
&= \frac{A^2}{3} \frac{1}{2^{2n}}
\end{align*}
$$
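We can check $$P_\text{noise} = \frac{A^2}{3} \frac{1}{2^{2n}}$$ empirically by quantizing a dense
sweep of inputs to interval midpoints and measuring the mean squared error (a sketch; the
midpoint-of-interval quantizer here is one common convention):

```python
A = 1.0
n_bits = 8
levels = 2 ** n_bits
delta = 2 * A / levels   # width of one quantization interval

samples = 200_000
dx = 2 * A / samples
errors_sq = 0.0
for i in range(samples):
    x = -A + (i + 0.5) * dx                    # dense sweep of [-A, A]
    k = min(int((x + A) / delta), levels - 1)  # interval index
    q = -A + (k + 0.5) * delta                 # midpoint of that interval
    errors_sq += (x - q) ** 2

p_noise = errors_sq / samples
print(p_noise, A ** 2 / 3 / 2 ** (2 * n_bits))  # both ~5.09e-6
```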
Putting it together,
$$
\begin{align*}
\text{SNR} &= 10 \log_{10} \left( \frac{P_\text{signal}}{P_\text{noise}} \right) \\
&= 10 \log_{10} \left( \frac{A^2}{3} \cdot \frac{3}{A^2} \frac{2^{2n}}{1} \right) \\
&= 10 \log_{10} \left( 2^{2n} \right) \\
&= 20 \log_{10} \left( 2^n \right)
\end{align*}
$$
## (15.6)
{:.question}
@@ -223,8 +297,9 @@ the convolutional encoder in Figure 15.20, what data were transmitted?
This problem is harder than the others.

My code for all sections of this problem is
[here](https://gitlab.cba.mit.edu/erik/compressed_sensing). All the sampling and gradient descent is
done in C++ using [Eigen](http://eigen.tuxfamily.org) for vector and matrix operations. I use Python
and [matplotlib](https://matplotlib.org/) to generate the plots.
### (a)
...