Linear Image Reconstruction by Sobolev Norms on the Bounded Domain
Bart Janssen and Remco Duits
Eindhoven University of Technology, Dept. of Biomedical Engineering
{B.J.Janssen,R.Duits}@tue.nl
Abstract. The reconstruction problem is usually formulated as a variational problem in which one searches for the image that minimizes a so-called prior (image model) while insisting that certain image features be preserved. When the prior can be described by a norm induced by some inner product on a Hilbert space, the exact solution to the variational problem can be found by orthogonal projection. In previous work we considered the image as compactly supported in L2(R²) and we used Sobolev norms on the unbounded domain, including a smoothing parameter γ > 0 to tune the smoothness of the reconstruction image. Due to the assumption of compact support of the original image, components of the reconstruction image near the image boundary are penalized too much. Therefore we minimize Sobolev norms only on the actual image domain, yielding much better reconstructions (especially for γ ≫ 0). As an example we apply our method to the reconstruction of singular points that are present in the scale space representation of an image.
1 Introduction
One of the fundamental problems in signal processing is the reconstruction of a signal from its
samples. In 1949 Shannon published his work on signal reconstruction from its equispaced ideal
samples [27]. Many generalizations [25,28] and applications [21,4] followed thereafter.
Reconstruction from differential structure of scale space interest points, first introduced by
Nielsen and Lillholm [24], is an interesting instance of the reconstruction problem since the samples
are non-uniformly distributed over the image they were obtained from and the filter responses of the
filters do not necessarily coincide. Several linear and non-linear methods [24,15,20,22] appeared
in literature which all search for an image that (1) is indistinguishable from its original when
observed through the filters the features were extracted with and (2) simultaneously minimizes
a certain prior. We showed in earlier work [15] that if such a prior is a norm of Sobolev type on
the unbounded domain one can obtain visually attractive reconstructions while retaining linearity.
However, boundaries often cause problems for signal processing algorithms (see e.g. [23] Chapter 7.5 or [6] Chapter 10.7), which are usually formulated on the unbounded domain, and should be handled with care.
The problem that appears in the unbounded domain reconstruction method is best illustrated by analyzing Figure 1. The left image is a reconstruction from differential structure obtained from
an image that is a concatenation of (mirrored) versions of Lena’s eye. One can clearly observe that
structure present in the center of the image does not appear on the border. This can be attributed
to the fact that, when features are measured close to the image boundary, the corresponding kernels
partly lie outside the image domain and are “penalized” by the energy minimization methods
that are formulated on the unbounded domain. This is illustrated by the right image in Figure 1. We solve this problem by considering bounded domain Sobolev norms instead. An additional
advantage of our method is that we can enforce a much higher degree of regularity than the
unbounded domain counterpart (in fact we can minimize semi-norms on the bounded domain).
Furthermore we give an interpretation of the two parameters that appear in the reconstruction
framework in terms of filtering by a low-pass Butterworth filter. This allows for a good intuition
on how to choose these parameters.
Fig. 1. An illustration of the bounded domain problem: features that are present in the center of the image are lost near its border, since they, in contrast to the original image f, are not compactly supported on Ω.
2 Theory
In order to prevent the problem illustrated above we restrict the reconstruction problem to the domain Ω ⊂ R² that is defined as the support of the image f ∈ L2(R²) from which the features {c_p(f)}_{p=1}^P, c_p(f) ∈ R, are extracted. Recall that the L2(Ω)-inner product on the domain Ω ⊂ R² for f, g ∈ L2(Ω) is given by

(f, g)_{L2(Ω)} = ∫_Ω f(x) g(x) dx .   (1)
A feature c_p(f) is obtained by taking the inner product of the p-th filter ψ_p ∈ L2(Ω) with the image f ∈ L2(Ω),

c_p(f) = (ψ_p, f)_{L2(Ω)} .   (2)

An image g ∈ L2(Ω) is equivalent to the image f if they share the same features, {c_p(f)}_{p=1}^P = {c_p(g)}_{p=1}^P, which is expressed in the following equivalence relation for f, g ∈ L2(Ω):

f ∼ g ⇔ (ψ_p, f)_{L2(Ω)} = (ψ_p, g)_{L2(Ω)} for all p = 1, . . . , P.   (3)
Next we introduce the Sobolev space of order 2k on the domain Ω,

H^{2k}(Ω) = {f ∈ L2(Ω) | |∆|^k f ∈ L2(Ω)} , k > 0 .   (4)

The completion of the space of 2k-differentiable functions on the domain Ω that vanish on the boundary ∂Ω of the domain is given by

H_0^{2k}(Ω) = {f ∈ H^{2k}(Ω) | f|_{∂Ω} = 0} , k > 1/2 .   (5)
Now H_0^{2k,γ}(Ω) denotes the normed space obtained by endowing H_0^{2k}(Ω) with the following inner product,

(f, g)_{H_0^{2k,γ}(Ω)} = (f, g)_{L2(Ω)} + γ^{2k} (|∆|^{k/2} f, |∆|^{k/2} g)_{L2(Ω)}
                        = (f, g)_{L2(Ω)} + γ^{2k} (|∆|^k f, g)_{L2(Ω)} ,   (6)

for all f, g ∈ H_0^{2k}(Ω) and γ ∈ R.
The solution to the reconstruction problem is the image g of minimal H_0^{2k,γ}-norm that shares the same features with the image f ∈ H_0^{2k,γ}(Ω) from which the features {c_p(f)}_{p=1}^P were extracted. The reconstruction image g is found by an orthogonal projection, within the space H_0^{2k,γ}(Ω), of f onto the subspace V spanned by the filters κ_p that correspond to the ψ_p filters,

arg min_{f∼g} ||g||²_{H_0^{2k,γ}(Ω)} = P_V f ,   (7)
as shown in previous work [15]. The filters κ_p ∈ H_0^{2k,γ}(Ω) are given by

κ_p = (I + γ^{2k} |∆|^k)^{−1} ψ_p .   (8)
As a consequence (κ_p, f)_{H_0^{2k,γ}(Ω)} = (ψ_p, f)_{L2(Ω)} for p = 1, . . . , P and for all f. Here we assumed that f ∈ H^{2k}(Ω); however, one can get away with f ∈ L2(Ω) if ψ satisfies certain regularity conditions. The interested reader can find the precise conditions and further details in [8].
The two parameters, γ and k, that appear in the reconstruction problem allow for an interesting interpretation. If Ω = R the fractional operator (I + γ^{2k} |∆|^k)^{−1} is equivalent to filtering by the classical low-pass Butterworth filter [3] of order 2k and cut-off frequency ω₀ = 1/γ. This filter is defined as

B_{2k}(ω/ω₀) = 1 / (1 + |ω/ω₀|^{2k}) .   (9)
A similar phenomenon was recently observed by Unser and Blu [30] when studying the connection between splines and fractals. Using this observation we can interpret equation (7) as finding an image, constructed from the ψ_p basis functions, that, after filtering with the Butterworth filter of order 2k and with a cut-off frequency determined by γ, is equivalent (cf. equation (3)) to the image f. The filter response of the Butterworth filter is shown in Figure 2. One can observe that the order of the filter controls how well the ideal low-pass filter is approximated, as well as the effect of γ on the cut-off frequency.

Fig. 2. The filter response of a Butterworth filter. On the left γ is kept constant and the filter responses for different k > 0 are shown. On the right the order of the filter, 2k, is kept constant and the filter responses for different γ > 0 are shown.
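To make the roles of the two parameters concrete, the response in equation (9) can be evaluated numerically. The sketch below (plain NumPy; the parameter values are illustrative choices of our own) shows that increasing k sharpens the transition while increasing γ lowers the cut-off frequency ω₀ = 1/γ.

```python
import numpy as np

def butterworth_response(omega, gamma, k):
    # Response 1 / (1 + |omega/omega0|^(2k)) of equation (9),
    # with cut-off frequency omega0 = 1/gamma.
    omega0 = 1.0 / gamma
    return 1.0 / (1.0 + np.abs(omega / omega0) ** (2 * k))

omega = np.linspace(0.0, 10.0, 201)
soft = butterworth_response(omega, gamma=1.0, k=1)    # gentle roll-off
sharp = butterworth_response(omega, gamma=1.0, k=4)   # closer to ideal low-pass
smooth = butterworth_response(omega, gamma=4.0, k=1)  # lower cut-off frequency
```

At the cut-off frequency the response equals 1/2 regardless of k; raising k only steepens the transition around ω₀, while raising γ shifts the cut-off towards lower frequencies.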
2.1 Spectral decomposition
For now we set k = 1 and investigate the Laplace operator on the bounded domain, ∆ : H_0^2(Ω) → L2(Ω), which is bounded and whose right inverse is given by minus the Dirichlet operator, which is defined as follows.

Definition 1 (Dirichlet Operator). The Dirichlet operator D is given by

g = Df ⇔ { ∆g = −f , g|_{∂Ω} = 0 }   (10)

with f ∈ L2(Ω) and g ∈ H_0^2(Ω).
The Green's function G : Ω × Ω → R of the Dirichlet operator is given by

∆G(x, ·) = −δ_x , G(x, ·)|_{∂Ω} = 0   (11)

for fixed x ∈ Ω. Its closed form solution reads

G(x, y) = −(1/2π) log | (sn(x₁ + ix₂, k̃) − sn(y₁ + iy₂, k̃)) / (sn(x₁ + ix₂, k̃) − \overline{sn(y₁ + iy₂, k̃)}) | .   (12)

Here x = (x₁, x₂), y = (y₁, y₂) ∈ Ω, \overline{·} denotes complex conjugation, and k̃ ∈ R is determined by the aspect ratio of the rectangular domain Ω; sn denotes the well-known Jacobi elliptic function [13], [31] Chapter XXII. In Appendix A we derive equality (12) and show how to obtain k̃. Figure 3 shows a graphical representation of this non-isotropic Green's function for a square domain (k̃ ≈ 0.1716). Notice this function vanishes at its boundaries and is, in the center of the domain, very similar to the isotropic fundamental solution on the unbounded domain [7]. In terms of regularisation this means the Dirichlet operator smoothes the image inwards but never "spills" over the border of the domain Ω.
Fig. 3. From left to right: plots of the graph of x ↦ G(x, y), isocontours G(x, y) = c, and isocontours of its harmonic conjugate H(x, y) = −(1/2π) arg( (sn(x₁+ix₂, k̃) − sn(y₁+iy₂, k̃)) / (sn(x₁+ix₂, k̃) − \overline{sn(y₁+iy₂, k̃)}) ).
When the Dirichlet operator as defined in Definition 1 is expressed by means of its Green's function, which is presented in equation (12),

(Df)(x) = ∫_Ω G(x, y) f(y) dy , f ∈ L2(Ω) , Df ∈ H_0^2(Ω) ,   (13)
by the spectral decomposition theorem of compact self-adjoint operators [33], we can express the
Dirichlet operator in an orthonormal basis of eigen functions. The normalized eigen functions fmn
with corresponding eigen values λmn of the Laplace operator ∆ : H20 (Ω) 7→ L2 (Ω) are given by
1
nπx
mπy
sin(
) sin(
)
ab
a
b
nπ 2 mπ 2
+
=−
a
b
fmn (x, y) =
λmn
r
(14)
(15)
with Ω = [0, a] × [0, b]. Since ∆D = −I, the eigen functions of the Dirichlet operator coincide with
those of the Laplace operator (14) and its corresponding eigen values are the inverse of the eigen
values of the Laplace operator.
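These relations are easy to check numerically. The sketch below (NumPy; the rectangle dimensions and mode indices are arbitrary choices, and the normalization constant follows (14) as printed) verifies ∆f_mn = λ_mn f_mn with a finite-difference Laplacian at an interior point.

```python
import numpy as np

a, b = 2.0, 1.0   # rectangle [0, a] x [0, b]
m, n = 3, 2       # mode indices

def f_mn(x, y):
    # Dirichlet eigenfunction of the Laplacian, cf. equation (14);
    # the normalization constant is taken as printed in the paper.
    return np.sqrt(1.0 / (a * b)) * np.sin(n * np.pi * x / a) * np.sin(m * np.pi * y / b)

lam = -((n * np.pi / a) ** 2 + (m * np.pi / b) ** 2)   # equation (15)

# 5-point finite-difference Laplacian at an interior point.
h, x0, y0 = 1e-4, 0.7, 0.4
lap = (f_mn(x0 + h, y0) + f_mn(x0 - h, y0)
       + f_mn(x0, y0 + h) + f_mn(x0, y0 - h)
       - 4.0 * f_mn(x0, y0)) / h ** 2   # approximately lam * f_mn(x0, y0)
```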
2.2 Scale space on the bounded domain
The spectral decomposition presented in the previous subsection, by (14), will now be applied to the construction of a scale space on the bounded domain [9]. Before we show how to obtain a Gaussian scale space representation of an image on the bounded domain we find, as suggested by Koenderink [19], the image h ∈ H^2(Ω) that is the solution to ∆h = 0 with as boundary condition that h = f restricted to ∂Ω. Now f̃ = f − h is zero at the boundary ∂Ω and can serve as an initial condition for the heat equation (on the bounded domain). A practical method for obtaining h on an arbitrarily shaped domain is suggested by Georgiev [11] and a fast method on a rectangular domain is proposed by Averbuch et al. [1]. The expansion of f̃ in the eigenfunctions f_mn is now obtained as

f̃ = Σ_{m,n∈N} (f_mn, f̃)_{L2(Ω)} f_mn ,   (16)

which effectively exploits the sine transform.
The (fractional) operators that will appear in the construction of a Gaussian scale space on the bounded domain can be expressed as

|∆|^k f_mn = (−λ_mn)^k f_mn ,   (17)
e^{−s|∆|} f_mn = e^{s λ_mn} f_mn .   (18)

We also note that the κ_p filters, defined in equality (8), are readily obtained by application of the following identity,

(I + γ^{2k} |∆|^k)^{−1} f_mn = 1/(1 + γ^{2k} (−λ_mn)^k) f_mn .   (19)
Consider the Gaussian scale space representation¹ on the bounded domain Ω,

u^Ω_{f̃}(x, y, s) = Σ_{m,n∈N} e^{λ_mn s} (f_mn, f̃)_{L2(Ω)} f_mn(x, y) ,   (20)

where the scale parameter s ∈ R⁺. It is the unique solution to

∂u/∂s = ∆u , u(·, s)|_{∂Ω} = 0 for all s > 0 , u(·, 0) = f̃ .   (21)

¹ The framework in this paper is readily generalized to α-scale spaces in general (see e.g. [9]) by replacing λ_mn by −(−λ_mn)^α.
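A minimal numerical sketch of the evolution (20): starting from an initial condition with two known eigenmode coefficients (toy values of our own choosing), each mode is damped by e^{λ_mn s}, so high-frequency modes vanish faster and the Dirichlet boundary condition holds at every scale.

```python
import numpy as np

a, b = 1.0, 1.0   # unit square [0, a] x [0, b]

def f_mn(x, y, m, n):
    # Dirichlet eigenfunctions, cf. equation (14).
    return np.sqrt(1.0 / (a * b)) * np.sin(n * np.pi * x / a) * np.sin(m * np.pi * y / b)

def lam(m, n):
    # Eigenvalues of the Laplacian, equation (15); always negative.
    return -((n * np.pi / a) ** 2 + (m * np.pi / b) ** 2)

def scale_space(coeffs, x, y, s):
    # Equation (20): each mode (m, n) with coefficient c_mn is
    # damped by exp(lambda_mn * s) as the scale s increases.
    u = 0.0
    for (m, n), c in coeffs.items():
        u += c * np.exp(lam(m, n) * s) * f_mn(x, y, m, n)
    return u

coeffs = {(1, 2): 1.0, (4, 3): 0.5}      # toy initial condition f~
x0, y0 = 0.3, 0.6
u0 = scale_space(coeffs, x0, y0, 0.0)    # s = 0 reproduces f~
u1 = scale_space(coeffs, x0, y0, 0.05)   # blurred version of f~
```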
The filter φ_p that measures differential structure present in the scale space representation u^Ω_{f̃} of f̃ at a point p with coordinates (x_p, y_p, s_p), such that

(D^{n_p} u^Ω_{f̃})(x_p, y_p, s_p) = (φ_p, f̃)_{L2(Ω)} ,   (22)

is given by (writing multi-index n_p = (n¹_p, n²_p))

φ_p(x, y) = Σ_{m,n∈N} e^{λ_mn s_p} (D^{n_p} f_mn)(x_p, y_p) f_mn(x, y) ,   (23)

where we note that

(D^{n_p} f_mn)(x_p, y_p) = √(1/(ab)) (nπ/a)^{n¹_p} (mπ/b)^{n²_p} sin(nπx_p/a + n¹_p π/2) sin(mπy_p/b + n²_p π/2) ,

with x = (x, y) ∈ Ω, x_p = (x_p, y_p) ∈ Ω and n_p = (n¹_p, n²_p) ∈ N × N.
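The closed form for D^{n_p} f_mn follows because each spatial derivative of a sine multiplies by its frequency and shifts the phase by π/2. A finite-difference check of the first x-derivative (NumPy; the rectangle and indices are arbitrary choices):

```python
import numpy as np

a, b = 1.5, 1.0   # rectangle [0, a] x [0, b]
m, n = 2, 3       # mode indices

def f_mn(x, y):
    # Dirichlet eigenfunction, cf. equation (14).
    return np.sqrt(1.0 / (a * b)) * np.sin(n * np.pi * x / a) * np.sin(m * np.pi * y / b)

def d_f_mn(x, y, n1, n2):
    # Closed form of the (n1, n2)-th spatial derivative: each derivative
    # brings down a factor n*pi/a (resp. m*pi/b) and shifts the sine by pi/2.
    return (np.sqrt(1.0 / (a * b))
            * (n * np.pi / a) ** n1 * (m * np.pi / b) ** n2
            * np.sin(n * np.pi * x / a + n1 * np.pi / 2)
            * np.sin(m * np.pi * y / b + n2 * np.pi / 2))

# Central finite-difference check of the first x-derivative.
h, x0, y0 = 1e-6, 0.37, 0.61
fd = (f_mn(x0 + h, y0) - f_mn(x0 - h, y0)) / (2 * h)
```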
2.3 Singular points
A singular point, often referred to as a toppoint, is a non-Morse critical point of a Gaussian scale space representation of an image. Damon [5] studied the applicability of established catastrophe theory (see e.g. [12]) in the context of scale space. Florack and Kuijper [10] have given an overview of this theory in their paper about the topological structure of scale space images.

Definition 2 (Singular Point). A singular point (x; s) ∈ R^{n+1} of a scale space representation u^Ω_f(x, s) of the image f is defined by the following equations, in which ∇ denotes the spatial gradient operator:

∇u^Ω_f(x; s) = 0 , det ∇∇^T u^Ω_f(x; s) = 0 .   (24)
A visualization of the singular points of a Gaussian scale space representation of an image can be found in Figure 4. A red ball represents the location of a singular point on a critical path (grey lines), which are paths of vanishing gradient. The approximate location (x_a, y_a, s_a) of a singular point in the scale space of an image can be found by a zero-crossings method [18]. In such a method one searches for the intersections of the planes defined by ∂u^Ω_f(x; s)/∂x = 0, ∂u^Ω_f(x; s)/∂y = 0 and det ∇∇^T u^Ω_f(x; s) = 0, of which a visualization can also be found in Figure 4. The true location (x_c, y_c, s_c) of a singular point can be found by calculating (x_c, y_c, s_c) = (x_a + ξ₁, y_a + ξ₂, s_a + τ) where

(ξ, τ)^T = −M^{−1} ( g, det ∇∇^T u^Ω_f(x; s) )^T ,   (25)
and where

M = ( ∇∇^T u^Ω_f(x; s)  w ; z^T  c ) ,   (26)

g = ∇u^Ω_f(x; s) , w = ∂_s g , z = ∇ det ∇∇^T u^Ω_f(x; s) , c = ∂_s det ∇∇^T u^Ω_f(x; s) .   (27)
All derivatives are taken at the point (x_a, y_a, s_a). This procedure is repeated in order to obtain a more accurate location of a singular point. For further details and, for the general case of interest, the application of singular points in image matching, we refer to [26].
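The refinement step (25)-(27) amounts to assembling the bordered matrix M and solving one small linear system. Below is a sketch of that single step (NumPy; the numeric blocks in the test are synthetic values, not derivatives of an actual scale space).

```python
import numpy as np

def refine_singular_point(hessian, g, w, z, c, det_h):
    # One refinement step (25)-(27): assemble the bordered matrix M and
    # solve M (xi, tau)^T = -(g, det H)^T for the spatial offset xi and
    # the scale offset tau.
    d = hessian.shape[0]
    M = np.zeros((d + 1, d + 1))
    M[:d, :d] = hessian   # spatial Hessian of u
    M[:d, d] = w          # w = d/ds of the gradient
    M[d, :d] = z          # z = gradient of det(Hessian)
    M[d, d] = c           # c = d/ds of det(Hessian)
    rhs = -np.concatenate([g, [det_h]])
    offset = np.linalg.solve(M, rhs)
    return offset[:d], offset[d]
```

The corrected location is then (x_a + ξ₁, y_a + ξ₂, s_a + τ), and the step is iterated until convergence.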
Although one-dimensional signals can be reconstructed from their singular points up to a multiplicative constant [17,16], this is probably not the case for higher dimensional signals such as images. If one endows these points with suitable attributes one can obtain a reconstruction that is visually close to the initial image [14].
Fig. 4. A visualization of the "deep structure" of a Gaussian scale space representation of an image. The grey paths represent the critical paths (paths of vanishing gradient); a red ball shows the location of a singular point on a critical path. The planes show the iso-surfaces ∂u^Ω_f(x; s)/∂x = 0, ∂u^Ω_f(x; s)/∂y = 0 and det ∇∇^T u^Ω_f(x; s) = 0, which are used in the calculation of the position of a singular point. Image courtesy of Bram Platel.
2.4 The solution to the reconstruction problem
Now that we have established how one can construct a scale space on the bounded domain and shown how to measure its differential structure, we can proceed to express the solution to the reconstruction problem (recall equations (7) and (8)) in terms of eigenfunctions and eigenvalues of the Laplace operator:

P_V f̃ = Σ_{p,q=1}^P G^{pq} (φ_p, f̃)_{L2(Ω)} κ_q = Σ_{p,q=1}^P G^{pq} c_p(f̃) κ_q ,   (28)

where G^{pq} is the inverse of the Gram matrix G_pq = (κ_p, κ_q)_{H_0^{2k,γ}(Ω)} and the filters κ_p satisfy

κ_p(x, y) = Σ_{m,n∈N} [ e^{λ_mn s_p} / (1 + γ^{2k} (−λ_mn)^k) ] (D^{n_p} f_mn)(x_p, y_p) f_mn(x, y) .   (29)

This is the unique solution to the optimization problem

arg min_{f̃∼g} ||g||²_{H_0^{2k,γ}(Ω)} ,   (30)

which was introduced in equation (7).
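The projection (28) can be sketched in a truncated eigenbasis. In the toy example below (NumPy; the eigenvalues and filter coefficients are synthetic, not derived from actual images), κ_p arises from ψ_p by division by the Sobolev weight 1 + γ^{2k}(−λ_mn)^k as in (8)/(19), and one can verify that the reconstruction reproduces the features c_p exactly while having a Sobolev norm no larger than that of f.

```python
import numpy as np

rng = np.random.default_rng(0)
n_modes, P = 50, 4       # truncated eigenbasis, number of features
gamma, k = 2.0, 1.0

neg_lam = rng.uniform(1.0, 30.0, n_modes)          # synthetic values of -lambda_mn
weight = 1.0 + gamma ** (2 * k) * neg_lam ** k     # Sobolev weight per mode, cf. (6)

psi = rng.standard_normal((P, n_modes))            # filters psi_p in the eigenbasis
f = rng.standard_normal(n_modes)                   # image coefficients of f~
c = psi @ f                                        # features c_p = (psi_p, f)_L2

kappa = psi / weight                               # equations (8)/(19), per mode
gram = kappa @ psi.T                               # G_pq = (kappa_p, kappa_q)_H
g = np.linalg.solve(gram, c) @ kappa               # projection (28): coefficients of P_V f
```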
2.5 Neumann boundary conditions
Scale space axiomatics lead to α-scale spaces, of which the Gaussian scale space used in this paper is the special case α = 1. The theory in this paper (as well as in [14]) is also applicable for other α, 0 < α ≤ 1; one can simply replace λ_mn by −(−λ_mn)^α. In case of the bounded domain the axiom of translation invariance has to be dropped. To maintain the other axioms, such as grey value invariance and increase of entropy, Neumann boundary conditions should be chosen [9]. Imposing zero Neumann boundary conditions coincides with the symmetric extension of the image at the boundaries and periodic boundary conditions on the extended domain. In this case the eigenvalues λ_mn (15) are maintained; the eigenfunctions are given by

f_mn(x, y) = √( 1 / (ab(1 + δ_{m0})(1 + δ_{n0})) ) cos(nπx/a) cos(mπy/b) .   (31)
When the eigenfunctions in (20) are replaced by (31), equality (20) is the unique solution to

∂u/∂s = ∆u , ∂u(·, s)/∂n |_{∂Ω} = 0 for all s > 0 , u(·, 0) = f̃ ,   (32)

where n is the outward normal of the image boundary ∂Ω.
When Neumann boundary conditions other than zero are required, for instance in case of block processing, we proceed in a similar fashion as proposed in Section 2.2. In this case, however, we have to take care that Green's second identity,

∫_Ω (h ∆f − f ∆h) dx = ∫_{∂Ω} ( h ∂f/∂n − f ∂h/∂n ) dσ(x) ,   (33)

with dσ(x) a boundary measure, is not violated. As a consequence we cannot find an h such that

∆h = 0 , ∂h/∂n |_{∂Ω} = ∂f/∂n |_{∂Ω}   (34)

holds, since

∫_Ω ∆h dx = ∫_{∂Ω} ∂h/∂n dσ(x) = ∫_{∂Ω} ∂f/∂n dσ(x) .   (35)
In order to solve this problem we introduce a constant source term, as suggested by [32], in (34) and find instead an h such that

∆h = K , ∂h/∂n |_{∂Ω} = ∂f/∂n |_{∂Ω} ,   (36)

with K = (1/|Ω|) ∫_{∂Ω} ∂f/∂n dσ(x) and |Ω| the area of the domain. An efficient method to solve h from equation (36), based on the method by Averbuch [1], can be found in [32]. Now f̃ = f − h has zero normal derivatives at the boundary ∂Ω and can serve as an initial condition for the heat equation in (32).
3 Implementation
The implementation of the reconstruction method that was presented in a continuous Hilbert
space framework is completely performed in a discrete framework in order to avoid approximation
errors due to sampling, following the advice: “Think analog, act discrete!” [29].
First we introduce the discrete sine transform F_S : l2(I_N^D) → l2(I_N^D) on the rectangular domain I_N^D = {1, . . . , N − 1} × {1, . . . , M − 1},

(F_S f)(u, v) = −(2/√(MN)) Σ_{i=1}^{M−1} Σ_{j=1}^{N−1} sin(iuπ/M) sin(jvπ/N) f(i, j) ,   (37)

with (u, v) ∈ I_N^D, where the product sin(iuπ/M) sin(jvπ/N) is abbreviated as (φ_i ⊗ φ_j)(u, v). Notice that this unitary transform is its own inverse and that

(φ_i, φ_j)_{l2(I_N^D)} = δ_ij ,   (38)

so {φ_i ⊗ φ_j | i = 1, . . . , M − 1; j = 1, . . . , N − 1} forms an orthonormal basis in l2(I_N^D).
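The claim that F_S is its own inverse is easy to verify: applying (37) twice returns the original samples, and the transform preserves the l2 norm. A small NumPy sketch (grid sizes chosen arbitrarily):

```python
import numpy as np

M, N = 8, 6   # grid with interior points i = 1..M-1, j = 1..N-1

def sine_transform(f):
    # Discrete sine transform (37); unitary and its own inverse.
    i = np.arange(1, M)
    j = np.arange(1, N)
    Sx = np.sin(np.outer(i, i) * np.pi / M)   # sin(i*u*pi/M)
    Sy = np.sin(np.outer(j, j) * np.pi / N)   # sin(j*v*pi/N)
    return -2.0 / np.sqrt(M * N) * Sx @ f @ Sy

rng = np.random.default_rng(1)
f = rng.standard_normal((M - 1, N - 1))
f_hat = sine_transform(f)
f_back = sine_transform(f_hat)   # applying the transform twice returns f
```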
The Gaussian scale space representation u_f^{I_N^D}(i, j, s) of an image f ∈ l2(I_N^D), introduced in the continuous domain in equality (20), now reads

u_f^{I_N^D}(i, j, s) = e^{s∆} f(i, j) = −(2/√(MN)) Σ_{u=1}^{M−1} Σ_{v=1}^{N−1} f̂(u, v) e^{−s(u²/M² + v²/N²)π²} (φ_u ⊗ φ_v)(i, j) ,
where f̂(u, v) = (F_S f)(u, v). Differential structure of order n_p = (n¹_p, n²_p) ∈ N × N at a certain position (i_p, j_p) ∈ I_N^D and at scale s_p ∈ R⁺ is measured by

D^{n_p} u_f^{I_N^D}(i_p, j_p, s_p) = −(2/√(MN)) Σ_{u=1}^{M−1} Σ_{v=1}^{N−1} f̂(u, v) e^{−s_p(u²/M² + v²/N²)π²} (uπ/M)^{n¹_p} (vπ/N)^{n²_p} sin(i_p uπ/M + n¹_p π/2) sin(j_p vπ/N + n²_p π/2) .
The filters φ_p, with p = (i_p, j_p, s_p, n_p) a multi-index, are given by

φ_p(i, j) = −(2/√(MN)) Σ_{u=1}^{M−1} Σ_{v=1}^{N−1} e^{−s_p(u²/M² + v²/N²)π²} (uπ/M)^{n¹_p} (vπ/N)^{n²_p} sin(i_p uπ/M + n¹_p π/2) sin(j_p vπ/N + n²_p π/2) (φ_u ⊗ φ_v)(i, j)   (39)
and the filters κ_p corresponding to φ_p read

κ_p(i, j) = −(2/√(MN)) Σ_{u=1}^{M−1} Σ_{v=1}^{N−1} [ e^{−s_p(u²/M² + v²/N²)π²} / (1 + γ^{2k} ((u²/M² + v²/N²)π²)^k) ] (uπ/M)^{n¹_p} (vπ/N)^{n²_p} sin(i_p uπ/M + n¹_p π/2) sin(j_p vπ/N + n²_p π/2) (φ_u ⊗ φ_v)(i, j) .   (40)
An element G_pq = (φ_p, κ_q)_{l2(I_N^D)} of the Gram matrix can, because of the orthonormality of the transform, be expressed in just a double sum,

G_pq = −(2/√(MN)) Σ_{u=1}^{M−1} Σ_{v=1}^{N−1} [ e^{−(s_p + s_q)(u²/M² + v²/N²)π²} / (1 + γ^{2k} ((u²/M² + v²/N²)π²)^k) ] (uπ/M)^{n¹_p} (vπ/N)^{n²_p} (uπ/M)^{n¹_q} (vπ/N)^{n²_q} sin(i_p uπ/M + n¹_p π/2) sin(j_p vπ/N + n²_p π/2) sin(i_q uπ/M + n¹_q π/2) sin(j_q vπ/N + n²_q π/2) .
In order to gain accuracy we implement equality (3) by summing in the reverse direction and multiplying by γ^{2k}. Then we compute

g̃ = Σ_{p,q=1}^P G^{pq} γ^{2k} c_p(f) φ_q   (41)

and find the reconstruction image g by filtering g̃ with a discrete version of the 2D Butterworth filter of order 2k and with cut-off frequency ω₀ = 1/γ.
The implementation was written using the sine transform as defined in equality (37), where we already explicitly mentioned that the transform can be written as

(F_S f)(u, v) = −(2/√(MN)) Σ_{i=1}^{M−1} Σ_{j=1}^{N−1} (φ_i ⊗ φ_j)(u, v) f(i, j) .   (42)
Now we define the cosine transform F_C : l2(I_N^N) → l2(I_N^N) on the rectangular domain I_N^N = {0, . . . , N + 1} × {0, . . . , M + 1} in a similar manner,

(F_C f)(u, v) = Σ_{i=0}^{M+1} Σ_{j=0}^{N+1} (φ̃_i ⊗ φ̃_j)(u, v) f(i, j) ,   (43)

where (φ̃_i ⊗ φ̃_j)(u, v) = √((2 − δ_{u0})/M) cos(π(i + ½)u/M) √((2 − δ_{v0})/N) cos(π(j + ½)v/N). These cosine basis functions {φ̃_i ⊗ φ̃_j | i = 0, . . . , M + 1; j = 0, . . . , N + 1} form an orthogonal basis in l2(I_N^N) and can thus be used to transform the reconstruction method that was explicitly presented for the Dirichlet case into a reconstruction method based on Neumann boundary conditions.
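The orthogonality of the cosine basis in (43) can be checked numerically. The sketch below uses the standard DCT-II index range i = 0, . . . , M − 1 (an assumption on our part, since the printed index range appears garbled in the transcription) and verifies that the normalized one-dimensional basis is in fact orthonormal.

```python
import numpy as np

M = 8
i = np.arange(M)   # standard DCT-II index range 0..M-1 (assumed)
u = np.arange(M)

# One-dimensional cosine basis with the normalization of (43):
# sqrt((2 - delta_u0)/M) * cos(pi * (i + 1/2) * u / M).
scale = np.sqrt((2.0 - (u == 0)) / M)
C = scale[:, None] * np.cos(np.pi * (i[None, :] + 0.5) * u[:, None] / M)

gram = C @ C.T   # pairwise inner products of the basis vectors
```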
4 Experiments
We evaluate the reconstruction method by applying it to the problem that was presented in the
introduction. The upper row of Figure 5 shows, from left to right, the image from which the
features were extracted, a reconstruction by the unbounded domain method [15] (parameters:
γ = 50, k = 1) and a reconstruction by the newly introduced bounded domain method using
Dirichlet boundary conditions (parameters: γ = 1000, k = 1). Features that were used are up to
second order derivatives measured at the singular points [5] of the scale space representation of
the original image f . One can clearly see that the structure that is missing in the middle image
does appear when the bounded domain method is used (top-right image in Figure 5). However,
the visual quality of the reconstruction is still not appealing. Unlike in our previous work on image
reconstruction [14], where we investigated, among other things, the effect of endowing the feature
points with higher order differential structure, we select a different set of features.
Singular points of a scale space image tend to catch blob-like structures, whereas singular points of the scale space of the Laplacian of an image (henceforth called Laplacian singular points) are more likely to catch information about edges and ridges in the image. Furthermore, the number of singular points that manifest themselves in the scale space of an image is much smaller than the number of Laplacian singular points of an image. These advantages motivate us to reconstruct from the properties of these points instead.
The bottom row of Figure 5 shows reconstructions from up to second order differential structure obtained from the singular points of the scale space of the Laplacian of f. On the left the unbounded domain method was used with γ = 100 and k = 1; this leads to a reconstructed signal that has "spilled" too much over the border of the image and therefore is not as crisp as the reconstruction obtained by our newly proposed method using Dirichlet boundary conditions (parameters: γ = 1000 and k = 1). Due to this spilling the Gram matrix of the unbounded domain reconstruction method is harder to invert, since basis functions become more and more dependent; this problem gets worse when γ increases. Our bounded domain method is immune to this problem as long as the parameter k is not chosen too high, which we will investigate next.
Figure 6 shows several reconstructions from up to second order differential structure taken at the locations of the singular points of Lena's eye using Dirichlet boundary conditions. From left to right the parameter k = {0.5 + ε, 1.0, 1.5, 2.0, 3.0, 4.0} (introduced in (4)), which controls the order of the Sobolev space we are projecting in, is varied, and from top to bottom the parameter γ = {1, 4, 16, 64, 256, 1024} (introduced in (6)), which controls the importance of smoothness, is varied. One can clearly see the effect of the parameters. We will interpret the results in terms of the low-pass Butterworth filter introduced in (9). When k is increased the order of the filter increases and consequently approximates the ideal low-pass filter more closely. In the rightmost column of Figure 6 (k = 4.0) the filter is too sharp, which results in a reconstruction that does not satisfy all features. This effect is still visible when γ = 1. Increasing γ corresponds to decreasing the cut-off frequency of the filter, which smoothes the "bumpy" features that are most apparent in the top left sub-images of Figure 6. Increasing γ does not seem to cause problems for the inversion of the Gram matrix (and consequently loss of features) but does have a smoothing effect on the reconstruction image. It is therefore preferred to use γ ≫ 0.
Fig. 5. Top left: the image f from which the features were extracted. Top center and right: reconstruction from second order structure of the singular points of f using the unbounded domain method [15] (parameters: γ = 50, k = 1) and the bounded domain method (parameters: γ = 1000, k = 1). Bottom row: unbounded domain (left) and bounded domain (right) reconstruction from singular points of the Laplacian of f with k = 1 and γ set to 100 and 1000 respectively.
5 Conclusion
In previous work we considered the image as compactly supported in L2(R²) and we used Sobolev norms on the unbounded domain, including a smoothing parameter γ > 0 to tune the smoothness of the reconstruction image. Due to the assumption of compact support of the original image, components of the reconstruction image near the image boundary are penalized too much. Therefore we proposed to minimize Sobolev norms only on the actual image domain, yielding much better reconstructions (especially for γ ≫ 0). We give a closed form expression for the Green's function of the Dirichlet operator in the spatial domain and formulate both feature extraction and the reconstruction method on the bounded domain in terms of the eigenfunctions and corresponding eigenvalues of the Laplace operator on the bounded domain, ∆ : H_0^2(Ω) → L2(Ω). By changing the eigenfunctions the Dirichlet boundary conditions can be interchanged with Neumann boundary conditions. The implementation is done completely in the discrete domain and makes use of fast sine or cosine transforms.
Fig. 6. Reconstructions from second order differential structure of Lena's eye using Dirichlet boundary conditions. From left to right k = {0.5 + ε, 1.0, 1.5, 2.0, 3.0, 4.0} and from top to bottom γ = {1, 4, 16, 64, 256, 1024}.
We also showed an interpretation of the parameter γ and the order of the Sobolev space k in terms of filtering by the classical Butterworth filter. In future work we plan to exploit this interpretation by automatically selecting the order of the Sobolev space.
As a final remark we note that the methods presented in this paper apply to α-scale spaces in general [9]. For general α one should replace λ_mn by −(−λ_mn)^α.
Acknowledgment
The Netherlands Organisation for Scientific Research (NWO) is gratefully acknowledged for financial support.
A Closed form expression of the Green's function of the Dirichlet operator
The Green's function G : Ω × Ω → R of the Dirichlet operator D (recall Definition 1) can be obtained by means of conformal mapping². A visualization of the mappings used to arrive at the solution can be found in Figure 7. To this end we first map the rectangle to the upper half space
Fig. 7. Mapping from a square (top left) to the upper half space in the complex plane (bottom) followed by a mapping to the disc (upper right). The concatenation of the two mappings (from top left to top right) is used to obtain the Green's function of the Dirichlet operator.
in the complex plane. By the Schwarz-Christoffel formula the derivative of the inverse of such a mapping is given by

dz/dw = −(1/k̃) (w − 1)^{−1/2} (w + 1)^{−1/2} (w − 1/k̃)^{−1/2} (w + 1/k̃)^{−1/2} = 1 / ( √(1 − w²) √(1 − k̃²w²) ) ,
where w(±1/k̃) = ±a + ib. As a result
z(w, k̃) = ∫₀^w dt / ( √(1 − t²) √(1 − k̃²t²) ) ⇔ w(z) = sn(z, k̃) ,

where sn denotes the well-known Jacobi elliptic function. We have sn(0, k̃) = 0, sn(±a, k̃) = ±1, sn(±a + ib, k̃) = ±1/k̃ and sn(i(b/2), k̃) = i/√k̃, where the elliptic modulus k̃ is given by

(b/a) z(1, k̃) = z(1, √(1 − k̃²)) .
For example, in case of a square (b/a = 2) we have k̃ ≈ 0.1715728752.
² Our solution is a generalization of the solution derived by Boersma in [2].
The next step is to map the half plane onto the unit disk B_{0,1}. This is easily done by means of a linear fractional transform

χ(z) = (z − sn(y₁ + iy₂, k̃)) / (z − \overline{sn(y₁ + iy₂, k̃)}) .

To this end we notice that |χ(0)| = 1 and that the mirrored points sn(y₁ + iy₂, k̃) and \overline{sn(y₁ + iy₂, k̃)} are mapped to the mirrored points χ(sn(y₁ + iy₂, k̃)) = 0 and χ(\overline{sn(y₁ + iy₂, k̃)}) = ∞.
Now define F : C → C and its vector-valued counterpart F : Ω → B_{0,1} by

F = χ ∘ sn(·, k̃), i.e. F(x₁ + ix₂) = (sn(x₁ + ix₂, k̃) − sn(y₁ + iy₂, k̃)) / (sn(x₁ + ix₂, k̃) − \overline{sn(y₁ + iy₂, k̃)}) ,   (44)

F(x₁, x₂) = ( Re(F(x₁ + ix₂)), Im(F(x₁ + ix₂)) )^T ,
then F is a conformal mapping of Ω onto B_{0,1} with F(y) = 0. As a result we have, by the Cauchy-Riemann equations,

∆_{F(x)} = |F′(x)|^{−1} ∆_x ,   (45)

where the scalar factor in front of the right Laplacian is the inverse Jacobian:

|F′(x)|^{−1} = (det F′(x))^{−1} = ( (∂F₁/∂x₁)² + (∂F₂/∂x₁)² )^{−1} = |F′(x₁ + ix₂)|^{−2} ,

for all x = (x₁, x₂) ∈ Ω.
Now G̃(u, 0) = −(1/2π) log ‖u‖ is the unique Green's function with Dirichlet boundary conditions on the disk B_{0,1} = {x ∈ R² | ‖x‖ ≤ 1} with singularity at 0, and our Green's function is given by G = G̃ ∘ F, i.e.

G(x, y) = −(1/2π) log |(χ ∘ sn(·, k̃))(x₁ + ix₂)| = −(1/2π) log | (sn(x₁ + ix₂, k̃) − sn(y₁ + iy₂, k̃)) / (sn(x₁ + ix₂, k̃) − \overline{sn(y₁ + iy₂, k̃)}) | .   (46)
References
1. A. Averbuch, M. Israeli, and L. Vozovoi. A fast Poisson solver of arbitrary order accuracy in rectangular regions. SIAM Journal of Scientific Computing, 19(3):933–952, 1998.
2. J. Boersma, J.K.M. Jansen, F.H. Simons, and F.W. Sleutel. The SIAM 100-dollar, 100-digit challenge – problem 10. SIAM News, January 2002.
3. S. Butterworth. On the theory of filter amplifiers. Wireless Engineer, 7:536–541, 1930.
4. E.J. Candès, J. Romberg, and T. Tao. Robust uncertainty principles: Exact signal reconstruction from
highly incomplete frequency information. IEEE Transactions on Information Theory, 52(2):489–509,
February 2006.
5. J. Damon. Local Morse theory for solutions to the heat equation and Gaussian blurring. Journal of
Differential Equations, 115(2):368–401, January 1995.
6. I. Daubechies. Ten Lectures on Wavelets, volume 61. CBMS-NSF Regional Conference Series in
Applied Mathematics, 8 edition, 1992.
7. J. Duchon. Splines minimizing rotation-invariant semi-norms in Sobolev spaces. Springer Verlag,
1977.
8. R. Duits. Perceptual Organization in Image Analysis. PhD thesis, Technische Universiteit Eindhoven,
2005.
9. R. Duits, M. Felsberg, L.M.J. Florack, and B. Platel. α scale spaces on a bounded domain. In Lewis
Griffin and Martin Lillholm, editors, Scale Space Methods in Computer Vision, 4th International
Conference, Scale Space 2003, pages 494–510, Isle of Skye, UK, June 2003. Springer.
10. L. Florack and A. Kuijper. The topological structure of scale-space images. Journal of Mathematical
Imaging and Vision, 12(1):65–79, February 2000.
11. T. Georgiev. Relighting, retinex theory, and perceived gradients. In Proceedings of Mirage 2005,
March 2005.
12. R. Gilmore. Catastrophe Theory for Scientists and Engineers. Dover Publications, New York, 1993.
Originally published by John Wiley & Sons, New York, 1981.
13. I. S. Gradshteyn and I. M. Ryzhik. Table of Integrals, Series, and Products. Boston, fifth edition,
1994. (Edited by A. Jeffrey.).
14. B.J. Janssen and R. Duits. Linear image reconstruction by Sobolev norms on the bounded domain. Submitted to Scale Space 2007.
15. B.J. Janssen, F.M.W. Kanters, R. Duits, L.M.J. Florack, and B.M. ter Haar Romeny. A linear image reconstruction framework based on Sobolev type inner products. International Journal of Computer
Vision, 70(3):231–240, 2006.
16. P. Johansen, M. Nielsen, and O. F. Olsen. Branch points in one-dimensional Gaussian scale space.
Journal of Mathematical Imaging and Vision, 13(3):193–203, 2000.
17. P. Johansen, S. Skelboe, K. Grue, and J. D. Andersen. Representing signals by their top points in
scale-space. In Proceedings of the 8th International Conference on Pattern Recognition, pages 215–217,
1986.
18. F.M.W. Kanters, L.M.J. Florack, R. Duits, and B. Platel. Scalespaceviz: Visualizing α-scale spaces.
Demonstration software, 2004. ECCV 2004, Prague, Czech Republic, May 11–14, 2004.
19. J. J. Koenderink. The structure of images. Biological Cybernetics, 50:363–370, 1984.
20. J. Kybic, T. Blu, and M. Unser. Generalized sampling: a variational approach – part I: Theory. IEEE
Transactions on Signal Processing, 50:1965–1976, 2002.
21. J. Kybic, T. Blu, and M. Unser. Generalized sampling: a variational approach – part II: Applications.
IEEE Transactions on Signal Processing, 50:1965–1976, August 2002.
22. M. Lillholm, M. Nielsen, and L. D. Griffin. Feature-based image analysis. International Journal of
Computer Vision, 52(2/3):73–95, 2003.
23. S. Mallat. A wavelet tour of signal processing. Academic Press, 1998.
24. M. Nielsen and M. Lillholm. What do features tell about images? In Scale-Space and Morphology in
Computer Vision: Proceedings of the Third International Conference, pages 39–50. Springer Verlag,
2001.
25. A. Papoulis. Generalized sampling expansion. IEEE Transactions on Circuits and Systems, 24:652–
654, 1977.
26. B. Platel. Exploring the Deep Structure of Images. PhD thesis, Eindhoven University of Technology,
2007.
27. C.E. Shannon. Communication in the presence of noise. In Proc. IRE, volume 37, pages 10–21,
January 1949.
28. M. Unser. Sampling—50 Years after Shannon. Proceedings of the IEEE, 88(4):569–587, April 2000.
29. M. Unser. Splines: On scale, differential operators, and fast algorithms. In Fifth International Conference on Scale Space and PDE Methods in Computer Vision (ICSSPDEMCV’05), Hofgeismar, Germany, April 6-10, 2005. Plenary talk.
30. M. Unser and T. Blu. Self-similarity: Part I-splines and operators. IEEE Transactions on Signal
Processing, 55(4):1352–1363, April 2007.
31. E.T. Whittaker and G.N. Watson. A Course of Modern Analysis. Cambridge University Press, 4th edition, 1946.
32. K. Yamatani and N. Saito. Improvement of DCT-based compression algorithms using Poisson's equation. IEEE Transactions on Image Processing, 15(12):3672–3689, December 2006.
33. K. Yosida. Functional Analysis. Springer Verlag, Berlin, sixth edition, 1980.