slides - Ceremade
Total Variation Projection with First Order Schemes
Gabriel Peyré, Jalal Fadili
Overview
•Total Variation Prior
•Primal and Dual Projection
•First Order Algorithms
•Inverse Problem Resolution by Projection
Smooth and Cartoon Priors
Smooth prior: ∫ |∇f|²
Cartoon prior: ∫ |∇f|
Natural Image Priors
Discrete Priors
Overview
•Total Variation Prior
•Primal and Dual Projection
•First Order Algorithms
•Inverse Problem Resolution by Projection
Denoising by Regularization
Noisy observations: f = f0 + w.

Regularization:   f_λ⋆ = argmin_{g ∈ R^N} (1/2)||f − g||² + λ||g||_TV
L2-constraint:    f_ε⋆ = argmin_{||f − g|| ≤ ε} ||g||_TV
TV-constraint:    f_τ⋆ = argmin_{||g||_TV ≤ τ} ||f − g||        [Combettes, Pesquet]

Equivalence λ ↔ ε ↔ τ.
Best formulation: depends on the prior knowledge!
Today’s focus: the TV-constraint formulation f_τ⋆.
Dual Formulation
f⋆ = argmin_{||g||_TV ≤ τ} ||f − g|| = argmin_g (1/2)||f − g||² + ι_{||·||_TV ≤ τ}(g)

Dual vector field: u = {u_i}_{i=0}^{N−1} ∈ R^{N×2} where u_i ∈ R².
TV norm: ||f||_TV = ||u||_1 where u = ∇f, with
    ||u||_1 = Σ_{i=0}^{N−1} √(u_{i,1}² + u_{i,2}²)   and   ||u||_∞ = max_{0 ≤ i < N} √(u_{i,1}² + u_{i,2}²).

ℓ1–ℓ∞ duality:   ι_{||·||_1 ≤ τ}(v) = max_u ⟨u, v⟩ − τ||u||_∞
    ⟹   ι_{||·||_TV ≤ τ}(f) = −min_u ( ⟨div(u), f⟩ + τ||u||_∞ )

Proposition: one has f⋆ = f − div(u⋆), where
    u⋆ = argmin_{u ∈ R^{N×2}} (1/2)||f − div(u)||² + τ||u||_∞.
Overview
•Total Variation Prior
•Primal and Dual Projection
•First Order Algorithms
•Inverse Problem Resolution by Projection
Proximal Iterations
u⋆ = argmin_{u ∈ R^{N×2}} (1/2)||f − div(u)||² + τ||u||_∞   (smooth + “simple”)
f⋆ = f − div(u⋆)

Forward-backward splitting:
  Initialization: u^(0) = 0, k ← 0.
  Gradient descent:        ũ^(k) = u^(k) − µ ∇(f − div(u^(k)))
  ℓ∞ proximal correction:  u^(k+1) = prox_{µτ||·||_∞}(ũ^(k)),
      where prox_{κ||·||_∞}(u) = argmin_v (1/2)||u − v||² + κ||v||_∞.
  Continue? If ||u^(k+1) − u^(k)|| > η, k ← k + 1.

Proximal operator computation:
  prox_{κ||·||_∞}(u) = u − S_λ(u)   where   S_λ(u)_i = max(1 − λ/||u_i||, 0) u_i.
  Computing λ: partial sorting of the values ||u_i||: O(N) operations.
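The prox computation above can be sketched in NumPy. This is a minimal sketch for a field u of shape (N, 2); it uses a full sort of the magnitudes (O(N log N)) rather than the O(N) partial sort mentioned on the slide, and the helper names are ours.

```python
import numpy as np

def threshold_level(m, kappa):
    """Level lam such that sum_i max(m_i - lam, 0) = kappa, found by
    sorting the magnitudes (lam = 0 when m is already inside the ball)."""
    if m.sum() <= kappa:
        return 0.0
    s = np.sort(m)[::-1]
    cs = np.cumsum(s)
    rho = np.nonzero(s >= (cs - kappa) / np.arange(1, m.size + 1))[0][-1]
    return (cs[rho] - kappa) / (rho + 1)

def prox_linf(u, kappa):
    """prox of kappa * max_i ||u_i|| for u of shape (N, 2):
    prox(u) = u - S_lam(u), soft-thresholding of the magnitudes ||u_i||
    (Moreau decomposition against the dual group-l1 ball)."""
    m = np.sqrt((u ** 2).sum(axis=1))
    lam = threshold_level(m, kappa)
    S = np.maximum(1.0 - lam / np.maximum(m, 1e-12), 0.0)[:, None] * u
    return u - S
```

Note that the result clips each ||u_i|| at the level λ: when κ is large enough that u lies inside the dual ℓ1-ball, the prox is exactly 0.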
λ
Nesterov Multi-step
Accelerating the gradient descent: use u^(ℓ) for all ℓ ≤ k to compute u^(k+1).
Cf. Pierre Weiss’ PhD thesis.

Nesterov multi-step scheme:
  Initialization: u^(0) = 0, A_0 = 0, ξ^(0) = 0, k ← 0.
  First proximal computation:   υ^(k) = prox_{A_k τ||·||_∞}(u^(0) − ξ^(k))
  Set  a_k = (µ + √(µ² + 4µA_k)) / 2   and   ω^(k) = (A_k u^(k) + a_k υ^(k)) / (A_k + a_k)
  Second proximal computation:  u^(k+1) = prox_{µτ/2 ||·||_∞}(ω̃^(k))
      where ω̃^(k) = ω^(k) − (µ/2) ∇(f − div(ω^(k)))
  Update A_{k+1} = A_k + a_k and ξ^(k+1) = ξ^(k) + a_k ∇(f − div(u^(k+1)))
  Continue? If ||u^(k+1) − u^(k)|| > η, k ← k + 1.
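For reference, here is a generic FISTA-style accelerated proximal gradient sketch, a close cousin of the multi-step scheme above but not the exact algorithm on this slide; the setup and names are purely illustrative, demonstrated on a toy ℓ1-regularized least-squares problem.

```python
import numpy as np

def fista(grad_E, prox, L, x0, iters=100):
    """Accelerated proximal gradient (FISTA): gradient step on the smooth
    part at the extrapolated point z, then prox of the 'simple' part."""
    x, z, t = x0.copy(), x0.copy(), 1.0
    for _ in range(iters):
        x_new = prox(z - grad_E(z) / L, 1.0 / L)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x

# Toy problem: min_x (1/2)||x - b||^2 + kappa*||x||_1, whose solution
# is the soft-thresholding of b at level kappa.
b = np.array([3.0, -0.5, 2.0])
kappa = 1.0
x = fista(grad_E=lambda v: v - b,
          prox=lambda v, step: np.sign(v) * np.maximum(np.abs(v) - step * kappa, 0.0),
          L=1.0, x0=np.zeros(3))
```

The same skeleton applies to the dual TV problem by plugging in the gradient u ↦ ∇(f − div(u)) and the ℓ∞ prox.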
Convergence Analysis
u⋆ = argmin_{u ∈ R^{N×2}} E(u) = (1/2)||f − div(u)||² + τ||u||_∞

Known results: if 0 < µ < 2/||∇*∇|| = 1/4, then E(u^(k)) − E(u⋆) = O(1/k^α).
→ What about the convergence of f^(k) = f − div(u^(k))?

Key lemma [Fadili, Peyré]:   ||f^(k) − f⋆||² ≤ 2 ( E(u^(k)) − E(u⋆) )
Theorem [Fadili, Peyré]: if 0 < µ < 1/4, then ||f^(k) − f⋆||² = O(1/k^α) with
    α = 1 for Forward-Backward,  α = 2 for Nesterov.
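The forward-backward dual iteration and the recovery f^(k) = f − div(u^(k)) can be sketched end to end in NumPy. This is a minimal sketch under our own assumptions (periodic forward-difference discretization, function names ours); the step µ = 0.24 respects the bound µ < 1/4.

```python
import numpy as np

def grad(f):
    """Forward differences with periodic boundary; output shape (n, n, 2)."""
    return np.stack([np.roll(f, -1, 0) - f, np.roll(f, -1, 1) - f], -1)

def div(u):
    """Discrete divergence chosen so that div = -grad^* (periodic boundary)."""
    return (u[..., 0] - np.roll(u[..., 0], 1, 0)
            + u[..., 1] - np.roll(u[..., 1], 1, 1))

def prox_linf(u, kappa):
    """prox of kappa * max_i ||u_i||: clip the magnitudes ||u_i|| at the
    threshold level of the dual group-l1 ball projection."""
    m = np.sqrt((u ** 2).sum(-1)).ravel()
    if m.sum() <= kappa:
        return np.zeros_like(u)
    s = np.sort(m)[::-1]
    cs = np.cumsum(s)
    rho = np.nonzero(s >= (cs - kappa) / np.arange(1, m.size + 1))[0][-1]
    lam = (cs[rho] - kappa) / (rho + 1)
    mm = np.maximum(m.reshape(u.shape[:-1]), 1e-12)
    return u * np.minimum(1.0, lam / mm)[..., None]

def tv_project(f, tau, mu=0.24, iters=300):
    """Forward-backward on the dual:
    u <- prox_{mu*tau*||.||_inf}(u - mu*grad(f - div(u))),
    returning f - div(u), the approximate projection on {||g||_TV <= tau}."""
    u = np.zeros(f.shape + (2,))
    for _ in range(iters):
        u = prox_linf(u - mu * grad(f - div(u)), mu * tau)
    return f - div(u)
```

If ||f||_TV ≤ τ already, u = 0 is a fixed point and the image is returned unchanged, as expected of a projection.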
TV-projection Denoising
[Figure: convergence curves log10(||f^(k) − f⋆||/||f⋆||) and log10(||f^(k)||_TV/τ − 1) versus iteration k, compared with [Combettes, Pesquet].]
Total Variation Texture Synthesis
TV texture synthesis: draw f ∈ C_TV ∩ C_contrast at random.
    C_TV = {f : ||f||_TV ≤ τ},   C_contrast = {f : histogram(f) = h0}
Iterated projections: f^(0) ← noise,
    f^(k+1) = Equalize_{h0}( Proj_{||·||_TV ≤ τ}(f^(k)) )
Local convergence [Lewis, Malick, 2008].
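The Equalize_{h0} step admits a simple exact implementation by sorting (a sketch; the function name is ours): keep the pixel ranks of f and substitute the sorted target values.

```python
import numpy as np

def equalize(f, h0):
    """Exact histogram specification: the i-th smallest pixel of f is
    replaced by the i-th smallest value of the target histogram h0."""
    flat = np.empty(f.size)
    flat[np.argsort(f.ravel())] = np.sort(h0.ravel())
    return flat.reshape(f.shape)
```

The output has exactly the values of h0, while the relative ordering of the pixels of f (hence its geometry) is preserved, which is what the iterated-projection loop needs.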
Overview
•Total Variation Prior
•Primal and Dual Projection
•First Order Algorithms
•Inverse Problem Resolution by Projection
Inverse Problems
TV Projection for Inverse Problems
Inverse problem resolution:   f⋆ = argmin_{||f||_TV ≤ τ} (1/2)||Φf − y||²
Projected gradient descent:   f^(ℓ+1) = Proj_{||·||_TV ≤ τ}(f̃^(ℓ))
    where f̃^(ℓ) = f^(ℓ) + ν Φ*(y − Φ f^(ℓ))
Theorem: if ν ∈ (0, 2/||Φ*Φ||), then ||f^(ℓ) − f⋆|| = O(1/ℓ).

Imperfect projection:   f^(ℓ+1) = Proj_{||·||_TV ≤ τ}( f^(ℓ) + ν Φ*(y − Φ f^(ℓ)) + a_ℓ )
Convergence of f^(ℓ) if Σ_ℓ ||a_ℓ|| < +∞. (Does not work with Nesterov.)
Theorem: if Proj_{||·||_TV ≤ τ}(f̃^(ℓ)) is computed with η_ℓ-tolerance
    and if Σ_ℓ η_ℓ < +∞, then f^(ℓ) converges to f⋆.
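The projected gradient iteration above can be sketched generically. In this toy demo Φ is an inpainting mask and, to keep the code self-contained, the TV-ball projection is replaced by a plain ℓ2-ball projection as a stand-in; all names and parameters here are illustrative.

```python
import numpy as np

def projected_gradient(y, Phi, PhiT, proj, nu, f_init, iters=200):
    """f^(l+1) = proj(f^(l) + nu * Phi^*(y - Phi f^(l))), nu in (0, 2/||Phi^*Phi||)."""
    f = f_init
    for _ in range(iters):
        f = proj(f + nu * PhiT(y - Phi(f)))
    return f

# Toy inpainting: Phi keeps 70% of the pixels; for a mask, ||Phi^*Phi|| = 1
# and Phi is self-adjoint.
np.random.seed(1)
img = np.random.rand(16, 16)
mask = np.random.rand(16, 16) < 0.7
Phi = lambda f: f * mask
y = Phi(img)
radius = 0.9 * np.linalg.norm(img)     # stand-in constraint set: l2 ball
proj_l2 = lambda f: f * min(1.0, radius / np.linalg.norm(f))
f_rec = projected_gradient(y, Phi, Phi, proj_l2, nu=1.0, f_init=np.zeros_like(img))
```

Swapping `proj_l2` for an (approximate) TV-ball projection recovers the slide's scheme; the theorem above is what tolerates the projection being inexact.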
Inpainting
[Figure: f0, y = Φf0 + w, f⋆, and the convergence curve log10(||f^(k) − f⋆||/||f⋆||).]
Operator Φ: 70% missing pixels.
Gaussian noise: ||w|| = 0.02||f0||_∞.
TV constraint: τ = 0.6||f0||_TV.
→ Roughly 1/ℓ convergence speed.
Deconvolution
[Figure: f0, y = Φf0 + w, f⋆, and the convergence curve log10(||f^(k) − f⋆||/||f⋆||).]
Operator Φ: Gaussian filter, width 4 pixels.
Gaussian noise: ||w|| = 0.02||f0||_∞.
TV constraint: τ = 0.6||f0||_TV.
→ Roughly 1/ℓ convergence speed.
Conclusion
• Dual TV projection: unconstrained optimization.
      min_{||g||_TV ≤ τ} ||f − g||   ⟷   min_u (1/2)||f − div(u)||² + τ||u||_∞
• Gradient methods: Forward-backward or Nesterov (faster).
• Imaging applications: denoising, synthesis, inverse problems.
  → Projected gradient descent: stability to imperfect projections.
• Other applications: computing Cheeger sets, plastic mechanics.
