Machine Learning, by Sergios Theodoridis

Solutions To Problems of Chapter 10
10.1. Show that the step, in a greedy algorithm, that selects the column of the sensing matrix so as to maximize the correlation between the column and the currently available error vector, $e^{(i-1)}$, is equivalent to selecting the column that most reduces the $\ell_2$ norm of the error vector.
Hint: All the parameters obtained in previous steps are fixed, together with their associated coefficients in $\theta^{(k)}$; hence the squared $\ell_2$ norm of the error becomes
\[
\|y - X\theta^{(k-1)} - x^c_{i_k}\theta_{i_k}\|_2^2 = \|e^{(k-1)} - x^c_{i_k}\theta_{i_k}\|_2^2.
\]
Minimizing the previous norm with respect to $\theta_{i_k}$, we easily obtain
\[
\theta_{i_k} = \frac{(x^c_{i_k})^T e^{(k-1)}}{\|x^c_{i_k}\|_2^2}.
\]
Substituting this value back into the error norm yields
\[
\|e^{(k)}\|_2^2 = \|e^{(k-1)}\|_2^2 - \frac{\big((x^c_{i_k})^T e^{(k-1)}\big)^2}{\|x^c_{i_k}\|_2^2}.
\]
Thus, the column that maximizes the normalized correlation $|(x^c_{i_k})^T e^{(k-1)}|/\|x^c_{i_k}\|_2$ is exactly the column that minimizes the $\ell_2$ norm of the updated error vector.
10.2. Prove the proposition stating that if there is a sparse solution to the linear system $y = X\theta$ such that
\[
k_0 = \|\theta\|_0 < \frac{1}{2}\left(1 + \frac{1}{\mu(X)}\right),
\]
where $\mu(X)$ is the mutual coherence of $X$, then the column selection performed by the greedy algorithm picks, at each step, a column that participates in the sparse representation.
Solution: Without loss of generality, assume that
\[
|\theta_m|\|x^c_m\| \geq |\theta_i|\|x^c_i\|, \quad i \neq m,\ i \leq k_0,
\]
where the vector norms are assumed to be the $\ell_2$ norms. Then it suffices to show that
\[
\left|\sum_{i=1}^{k_0} \theta_i \frac{(x^c_m)^T x^c_i}{\|x^c_m\|}\right| > \left|\sum_{i=1}^{k_0} \theta_i \frac{(x^c_j)^T x^c_i}{\|x^c_j\|}\right|, \quad j > k_0. \quad (2)
\]
For the left-hand side, the triangle inequality and the definition of the mutual coherence give
\[
\left|\sum_{i=1}^{k_0} \theta_i \frac{(x^c_m)^T x^c_i}{\|x^c_m\|}\right|
\geq |\theta_m|\|x^c_m\| - \sum_{i=1, i\neq m}^{k_0} |\theta_i|\|x^c_i\|\mu(X)
\geq |\theta_m|\|x^c_m\|\big(1 - (k_0 - 1)\mu(X)\big),
\]
where the last step used the ordering assumption. For the right-hand side,
\[
\left|\sum_{i=1}^{k_0} \theta_i \frac{(x^c_j)^T x^c_i}{\|x^c_j\|}\right|
\leq \sum_{i=1}^{k_0} |\theta_i|\|x^c_i\|\mu(X)
\leq |\theta_m|\|x^c_m\|\mu(X) k_0.
\]
Hence, (2) holds whenever $1 - (k_0 - 1)\mu(X) > k_0\mu(X)$, that is, whenever $k_0 < \frac{1}{2}\left(1 + \frac{1}{\mu(X)}\right)$.
The same argument carries over to the subsequent steps; for example, after the first step the error becomes
\[
e^{(1)} = y - \theta_{i_1} x^c_{i_1} = \sum_{i=1}^{k_0} \tilde{\theta}_i x^c_i,
\]
i.e., it still lies in the span of the same $k_0$ active columns, so the previous bound guarantees that a column of the true support is selected at every step.
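The mutual coherence and the resulting sparsity bound are easy to compute; a small sketch, with a hypothetical random sensing matrix (sizes and seed are illustrative):

```python
import numpy as np

def mutual_coherence(X):
    # mu(X): largest absolute correlation between distinct normalized columns.
    Xn = X / np.linalg.norm(X, axis=0)
    G = np.abs(Xn.T @ Xn)
    np.fill_diagonal(G, 0.0)
    return G.max()

rng = np.random.default_rng(1)
X = rng.standard_normal((50, 10))    # illustrative sensing matrix
mu = mutual_coherence(X)
k0_bound = 0.5 * (1 + 1 / mu)        # sparsity levels below this are covered
assert 0.0 < mu < 1.0                # holds for generic (non-orthogonal) columns
assert k0_bound > 1.0
```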
10.3. Give an explanation to justify why, in step 4 of the CoSaMP algorithm, the value of $t$ is taken to be equal to $2k$.

Solution: The estimates, $\theta^{(i)}$, obtained at each iteration are always $k$-sparse. Hence, the difference $\theta - \theta^{(i-1)}$ between the true vector and the current estimate is at most $2k$-sparse, and its energy is captured by at most $2k$ columns of the sensing matrix. Selecting the $t = 2k$ columns that are most correlated with the current residual therefore suffices to include all directions in which the estimate still has to be corrected.
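A minimal CoSaMP sketch illustrating the $t = 2k$ candidate selection and the merge-then-prune structure; this is a simplified illustration, not the book's pseudocode, and the data sizes, support, and seed are hypothetical:

```python
import numpy as np

def cosamp(X, y, k, iters=10):
    # Minimal CoSaMP sketch; t = 2k candidate columns are merged per iteration.
    n = X.shape[1]
    theta = np.zeros(n)
    for _ in range(iters):
        e = y - X @ theta                                 # current residual
        idx = np.argsort(np.abs(X.T @ e))[::-1][:2 * k]   # t = 2k candidates
        support = np.union1d(idx, np.flatnonzero(theta))  # merge supports
        z = np.zeros(n)
        z[support] = np.linalg.lstsq(X[:, support], y, rcond=None)[0]
        keep = np.argsort(np.abs(z))[::-1][:k]            # prune back to k terms
        theta = np.zeros(n)
        theta[keep] = z[keep]
    return theta

rng = np.random.default_rng(2)
X = rng.standard_normal((100, 200)) / 10.0   # roughly unit-norm columns
true = np.zeros(200)
true[[17, 63, 141]] = [2.0, -2.0, 2.5]       # a 3-sparse target
y = X @ true                                 # noiseless measurements
est = cosamp(X, y, k=3)
assert np.allclose(est, true, atol=1e-6)
```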
10.4. Show that if the cost $\frac{1}{2}\|y - X\theta\|_2^2 + \lambda\|\theta\|_1$ is majorized by adding to it the term $\frac{1}{2}d(\theta, \tilde{\theta})$, where
\[
d(\theta, \tilde{\theta}) := c\|\theta - \tilde{\theta}\|_2^2 - \|X\theta - X\tilde{\theta}\|_2^2,
\]
then its minimization results in
\[
\hat{\theta} = \operatorname{soft}_{\lambda/c}\!\left(\tilde{\theta} + \frac{1}{c}X^T\big(y - X\tilde{\theta}\big)\right),
\]
where $\operatorname{soft}_{\delta}(\cdot)$ denotes the component-wise soft-thresholding operator.
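The majorized cost $\frac{1}{2}\|y - X\theta\|_2^2 + \frac{1}{2}d(\theta,\tilde{\theta}) + \lambda\|\theta\|_1$ is separable in the components of $\theta$ and is minimized by soft thresholding. A numerical check under hypothetical data (the sizes, seed, and $\lambda$ are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal((30, 50))
y = rng.standard_normal(30)
lam = 0.1
c = 1.01 * np.linalg.norm(X, 2) ** 2          # c > ||X||_2^2, so d(.,.) >= 0

def surrogate(th, th0):
    # (1/2)||y - X th||^2 + (1/2) d(th, th0) + lam * ||th||_1
    d = c * np.sum((th - th0) ** 2) - np.sum((X @ (th - th0)) ** 2)
    return 0.5 * np.sum((y - X @ th) ** 2) + 0.5 * d + lam * np.sum(np.abs(th))

th0 = rng.standard_normal(50)                 # previous estimate, theta-tilde
z = th0 + X.T @ (y - X @ th0) / c
th_hat = np.sign(z) * np.maximum(np.abs(z) - lam / c, 0.0)  # soft thresholding

# th_hat should minimize the surrogate: random perturbations never improve it.
ok = all(surrogate(th_hat, th0)
         <= surrogate(th_hat + 0.1 * rng.standard_normal(50), th0)
         for _ in range(100))
assert ok
```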
10.5. Prove the basic recursion of the parallel coordinate descent algorithm.
Hint: Assume that, at the $i$th iteration, it is the turn of the $j$th component to be updated, so that the following is minimized:
\[
J(\theta_j) = \frac{1}{2}\Big\|y - \sum_{i \neq j}\theta_i x_i - \theta_j x_j\Big\|_2^2 + \lambda|\theta_j|.
\]
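The one-dimensional minimization in the hint admits a closed form; a sketch, writing $e_j := y - \sum_{i \neq j}\theta_i x_i$ for the error excluding the $j$th term (this auxiliary notation is introduced here for illustration):

```latex
% Completing the square in \theta_j:
J(\theta_j) = \frac{1}{2}\|e_j - \theta_j x_j\|_2^2 + \lambda|\theta_j|
            = \frac{\|x_j\|_2^2}{2}\left(\theta_j - \frac{x_j^T e_j}{\|x_j\|_2^2}\right)^{\!2}
              + \lambda|\theta_j| + \text{const},
% whose minimizer is the soft-thresholded normalized correlation:
\hat{\theta}_j = \operatorname{soft}_{\lambda/\|x_j\|_2^2}
                 \left(\frac{x_j^T e_j}{\|x_j\|_2^2}\right).
```

Applying this update to all coordinates simultaneously gives the parallel coordinate descent recursion.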
10.6. Derive the iterative scheme to minimize the weighted $\ell_1$ ball, using a majorization-minimization procedure to minimize $\sum_{i=1}^{l}\ln(|\theta_i| + \epsilon)$, subject to the measurement set $y = X\theta$.
Hint: Use the linearization of the logarithmic function to bound it from above; since it is a concave function, its graph is located below its tangent.
Solution: Our task is
\[
\underset{\theta}{\text{minimize}} \;\sum_{j=1}^{l}\ln(|\theta_j| + \epsilon), \quad \text{s.t. } X\theta = y.
\]
Instead, at each iteration step, we will try to minimize the linearization of the function around its previous estimate. We have seen that this task is equivalent to
\[
\underset{\theta, u}{\text{minimize}} \;\sum_{j=1}^{l}\frac{u_j}{u_j^{(i-1)} + \epsilon}, \quad \text{s.t. } X\theta = y, \quad |\theta_j| \leq u_j, \; j = 1, \ldots, l,
\]
which is a weighted $\ell_1$ minimization with weights $w_j^{(i)} = 1/(u_j^{(i-1)} + \epsilon)$.
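The tangent-bound argument in the hint can be verified numerically; a small sketch with arbitrary illustrative values of $\epsilon$ and the linearization point:

```python
import numpy as np

eps, u0 = 0.1, 0.7                              # illustrative values
u = np.linspace(0.0, 5.0, 200)
lhs = np.log(u + eps)                           # concave function of u >= 0
rhs = np.log(u0 + eps) + (u - u0) / (u0 + eps)  # its tangent line at u0
assert np.all(lhs <= rhs + 1e-12)               # the graph lies below the tangent
```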
10.7. Show that the weighted $\ell_1$ ball, used in SpAPSM, is upper bounded by the $\ell_0$ norm of the target vector.
Solution: Assume that $\theta(n)$ converges to the true vector. Moreover, assume that $\hat{\epsilon}_n > 0$. Then
\[
\sum_{j=1}^{l} w_j(n)|\theta_j(n)| \leq \sum_{j \in \mathrm{Support}(\theta)} \frac{|\theta_j(n)|}{|\theta_j(n)| + \hat{\epsilon}_n} \leq \|\theta\|_0,
\]
since, asymptotically, only the components in the support of $\theta$ contribute, and each such term is strictly smaller than one.
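A quick numerical illustration of the bound, using a hypothetical sparse vector and a small positive $\hat{\epsilon}$ (both values are illustrative):

```python
import numpy as np

theta = np.array([0.0, 2.0, 0.0, -0.5, 1.0])  # hypothetical sparse estimate
eps_hat = 1e-3                                # a small positive epsilon
w = 1.0 / (np.abs(theta) + eps_hat)           # SpAPSM-style weights
weighted_l1 = np.sum(w * np.abs(theta))       # each nonzero contributes < 1
l0 = np.count_nonzero(theta)
assert weighted_l1 <= l0                      # weighted l1 ball <= l0 norm
```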
10.8. Show that the canonical dual frame minimizes the total $\ell_2$ norm of the dual frame, i.e.,
\[
\sum_{i \in I} \|\tilde{\psi}_i\|_2^2.
\]
Hint: Use the result of Problem 9.10.
Solution: We know from the theory that the matrices corresponding to dual frames are generalized inverses of the frame matrix, and that the canonical dual corresponds to the Moore-Penrose pseudoinverse. By the result of Problem 9.10, the pseudoinverse is the generalized inverse of minimal Frobenius norm; since the total $\ell_2$ norm of a dual frame equals the squared Frobenius norm of the corresponding matrix, the claim follows.
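A small numerical illustration with a random real frame (the dimensions and seed are arbitrary): any alternative dual differs from the canonical one by rows in the null space of $\Psi$, and thus has a larger total $\ell_2$ norm.

```python
import numpy as np

rng = np.random.default_rng(4)
N, p = 3, 6
Psi = rng.standard_normal((N, p))           # frame: p vectors in R^N (columns)
S_inv = np.linalg.inv(Psi @ Psi.T)          # inverse frame operator
Dual = S_inv @ Psi                          # canonical dual frame

# Any other dual differs by rows lying in the null space of Psi.
P_null = np.eye(p) - Psi.T @ S_inv @ Psi    # projector onto null(Psi)
Alt = Dual + rng.standard_normal((N, p)) @ P_null
assert np.allclose(Alt @ Psi.T, np.eye(N))  # Alt is still a valid dual

# The canonical dual attains the smallest total l2 norm sum_i ||psi~_i||^2.
assert np.linalg.norm(Dual, "fro") <= np.linalg.norm(Alt, "fro") + 1e-12
```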
10.9. Show that Parseval's tight frames are self-dual.
Solution: Let the dimensionality of the frame's matrix $\Psi$ be $N \times p$. From the definition of the Parseval tight frame (PTF) we have that
\[
\sum_{i=1}^{p} |\langle \psi_i, s\rangle|^2 = s^H \Psi\Psi^H s = s^H s,
\]
for every $s$, which implies that $\Psi\Psi^H = I$. The canonical dual frame is therefore $\tilde{\Psi} = (\Psi\Psi^H)^{-1}\Psi = \Psi$, i.e., the frame is its own dual.
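This is easy to verify numerically by constructing a Parseval tight frame from orthonormal rows; the construction below (QR of a random matrix, illustrative sizes and seed) is one convenient way to obtain such a frame:

```python
import numpy as np

rng = np.random.default_rng(5)
p, N = 6, 3
Q, _ = np.linalg.qr(rng.standard_normal((p, p)))
Psi = Q[:, :N].T                            # N x p matrix with orthonormal rows

# Parseval property: sum_i |<psi_i, s>|^2 = ||s||^2, i.e. Psi Psi^T = I.
assert np.allclose(Psi @ Psi.T, np.eye(N))

# Hence the canonical dual (Psi Psi^T)^{-1} Psi reduces to Psi itself.
Dual = np.linalg.inv(Psi @ Psi.T) @ Psi
assert np.allclose(Dual, Psi)
```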
10.10. Prove that the bounds $A$, $B$ of a frame coincide with the minimum and maximum eigenvalues, respectively, of the matrix product $\Psi\Psi^H$.
Solution: From the definition of a frame we have
\[
A\|s\|_2^2 \leq \sum_{i=1}^{p} |\langle \psi_i, s\rangle|^2 = s^H \Psi\Psi^H s \leq B\|s\|_2^2.
\]
Since $\Psi\Psi^H$ is Hermitian, the Rayleigh quotient $s^H\Psi\Psi^H s / s^H s$ attains its minimum and maximum values at the smallest and largest eigenvalues, respectively; hence the tightest bounds are $A = \lambda_{\min}(\Psi\Psi^H)$ and $B = \lambda_{\max}(\Psi\Psi^H)$.
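The eigenvalue characterization can be checked on a random real frame (illustrative sizes and seed), sampling many vectors $s$ and confirming the frame inequality:

```python
import numpy as np

rng = np.random.default_rng(6)
Psi = rng.standard_normal((3, 7))           # frame matrix, columns psi_i
eig = np.linalg.eigvalsh(Psi @ Psi.T)       # eigenvalues of Psi Psi^H
A, B = eig.min(), eig.max()                 # candidate frame bounds

for _ in range(200):
    s = rng.standard_normal(3)
    val = np.sum((Psi.T @ s) ** 2)          # sum_i |<psi_i, s>|^2 = s^T Psi Psi^T s
    ns = np.sum(s ** 2)
    assert A * ns - 1e-9 <= val <= B * ns + 1e-9
```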