Pattern Recognition Letters 53 (2015) 9–15
Ear-parotic face angle: A unique feature for 3D ear recognition✩
Yahui Liu b, Bob Zhang c, David Zhang a,∗

a Biometrics Research Centre, Department of Computing, Hong Kong Polytechnic University, Kowloon, Hong Kong
b Department of Computer Science and Technology, Harbin Institute of Technology Shenzhen Graduate School, Shenzhen, China
c Department of Computer and Information Science, University of Macau, Macau

✩ This paper has been recommended for acceptance by G. Borgefors.
∗ Corresponding author. Tel.: +852 27667271; fax: +852 27740842. E-mail address: [email protected] (D. Zhang).
http://dx.doi.org/10.1016/j.patrec.2014.10.014
Article history: Received 16 June 2014; available online 30 October 2014.

Keywords: 3D ear recognition; Biometrics; Ear-parotic face angle; 3D ear indexing

Abstract
This paper proposes a unique characteristic in 3D ear images: the ear-parotic face angle of the person. The ear-parotic face angle feature is defined as the angle between the normal vector of the ear-plane and the normal vector of the parotic face-plane. It is a unique and stable feature in 3D ears, and is able to provide indexing identification as well as hierarchical verification solutions which enhance both the speed and accuracy of 3D ear recognition. The experimental results show that by using the angle indexing in identification, the search range is reduced to 9.69% of the original, which is a considerable reduction in time. The verification experiment also achieved an equal error rate (EER) improvement (from 2.8% to 2.3%) by combining the angle feature with the iterative closest point (ICP) method.
1. Introduction
Biometric authentication plays an important role in public security applications such as access control [1], forensics, and e-banking [2,3]. To meet different security requirements, new biometric traits including palmprint [4], vein [5], and ear [6–8] have been developed. Among them, the ear has proven to be a stable candidate for biometric authentication due to its desirable properties of universality, uniqueness, and permanence. In addition, the ear possesses several advantages: its structure does not change with age and its shape is not affected by facial expressions [9,10].
Researchers have developed several approaches for ear recognition from 2D images. Burge and Burger proposed a method based on Voronoi diagrams [8]: they built an adjacency graph from Voronoi diagrams and used a graph-matching algorithm for authentication. Hurley et al. proposed a system based on force field feature extraction [11], treating the ear image as an array of mutually attracting particles that act as the source of a Gaussian force field. Choras presented a geometrical method of feature extraction from human ear images [12]. Although these approaches show good results, the performance of 2D ear authentication is always marred by illumination and pose variations. Moreover, the ear carries more spatial geometric information than texture information, but spatial information such as posture, depth, and angle is limited in 2D ear images.
In recent years, 3D techniques have been used in biometric authentication, such as 3D face [13,14], 3D palmprint [15–17] and 3D ear recognition [18–23]. A 3D ear image is robust to imaging conditions: it contains surface shape information related to the anatomical structure and is insensitive to environmental illumination. Therefore, 3D ear recognition has drawn more and more researchers' attention. Yan and Bowyer [18] presented an automated segmentation method by finding the ear pit and using an active
contour algorithm on both color and depth images, in addition to describing an improved iterative closest point (ICP) approach for 3D ear
shape matching. Chen and Bhanu [19] proposed a 3D ear recognition
algorithm based on local surface patch and the ICP method. They also
presented a 3D ear indexing method [20] which combined feature
embedding and a support vector machine based learning technique
for ranking the hypotheses. Islam et al. provided an effective method
for ear detection using the key points to extract the local feature of
the 3D ears [21]. Zhou et al. presented a complete 3D ear recognition
system combining local and holistic features in a computationally efficient manner [22]. Even though good results were achieved by the works mentioned above, there is still room for improvement [23]. To date, no work on 3D ears has extracted the angle feature between the ear and the parotic face. The angle feature is stable for each person and is unique in 3D images. This characteristic gives the 3D ear an additional feature unavailable in 2D, making it a helpful candidate for ear indexing and coarse classification in 3D ear recognition.
In this paper, we propose a unique global feature in 3D ear images:
ear-parotic face angle of the person. The motivation of defining the
ear-parotic face angle feature is to provide an original feature as an
aid to improve the efficiency of 3D ear recognition. Time consumption is a persistent challenge in 3D ear recognition, and
the speed of a biometric system is an important factor for real-time
applications. By introducing the ear-parotic face angle feature, the hierarchical structure can be applied for 3D ear indexing which leads to
a reduction of the search space in 3D ear recognition. We can use this
feature to sort the samples in a registered database. When recognition is performed we only need to compare the test ear with candidate
ears, and determine which has a closer angle value to the test ear. By
using this method we do not need to compare all the registered ears with the test ear; efficiency is thus improved, taking less time while achieving higher accuracy in the recognition stage. Since
data collection is the very first step in a biometric recognition system,
a short description of our special 3D ear acquisition device and 3D ear
database is introduced at the beginning.
The rest of the paper is organized as follows. Section 2 describes
the 3D ear acquisition system we developed. Section 3 describes the
definition of the ear-parotic face angle feature and how it is applied
in 3D ear recognition. Section 4 shows our experimental results that
subsequently prove the stability and effectiveness of the ear-parotic
face angle feature. Section 5 is the discussion and future work. A
conclusion is given in Section 6.
2. 3D ear acquisition
To date, most previous researchers have used commercial laser scanners (for example, the widely used Minolta VIVID series [18–23]) to acquire 3D ear images. Although these scanners have numerous functions and high accuracy, they are expensive and bulky, which is inconvenient for practical biometric applications. With this in mind, we developed a low-cost laser scanner for 3D ear acquisition using the laser-triangulation principle (as shown in Fig. 1(a) and (b)). The main components of our 3D ear scanner are a CCD camera, a laser projector, a step motor, and a motion control circuit. The laser projector projects a red laser line on the object surface. The step motor rotates the laser projector to form a series of scanning lines. The CCD camera captures the images in the scanning sequence. The system is designed for access control and adapts to both indoor and outdoor acquisition environments. The 2D and 3D ear images obtained by our scanner are shown in Fig. 1(c). Since accuracy plays an important role in the computation of the ear-parotic face angle, and thus affects the overall accuracy, the scanning system is calibrated using a comprehensive calibration method (interested readers may refer to [24] for details). Table 1 gives a brief comparison between the Vivid 910 and our low-cost scanner. The accuracy of our scanner is ±0.5 mm, its dimensions are 140 × 200 × 200 mm, and its price is less than 1000 USD.

Fig. 1. 3D ear acquisition system: (a) the principle of imaging and calibration, (b) the developed 3D ear acquisition device, and (c) the captured 2D ear sample and its corresponding 3D model.

Table 1
Performance comparison with commercial scanner.

                    Vivid 910          Our scanner
Accuracy (mm)       ±0.1               ±0.5
Dimensions (mm)     213 × 413 × 271    140 × 200 × 200
Price (USD)         >20,000            <1000
We collected the 3D ear samples from 250 individuals using our 3D ear laser scanner. The subjects mainly consisted of volunteer students and staff at the Shenzhen Graduate School of HIT, including 178 males and 72 females with ages ranging from 20 to 60. These samples were collected on two separate occasions, at an interval of around one month. On each occasion the subject was asked to provide two left side face images and two right side face images. Therefore, each person provided eight images, and our database contains a total of 2000 images from 500 different ears.
During data collection the subjects sat in a natural position on a chair with a backrest and kept still. The laser scanner was vertically and horizontally movable to accommodate different seating and head positions. The scanner captures the front view of the ear at a distance of about 30 cm, at an approximately straight-on angle to the side of the face; the tolerance for pose rotation is ±20°. The subjects were asked to take off all ornaments from their ears and tie their hair back to avoid any occlusions. The scanning process took approximately 2 s.
3. Unique ear-parotic face angle feature
3.1. Definition
From Fig. 2 we can observe that there is an angle ($\theta$) between the ear and parotic face of a person. We assume there is a plane $A_f x + B_f y + C_f z + D_f = 0$ which represents the 3D points on the parotic face (the green circle shown in Fig. 2). We use another plane, $A_e x + B_e y + C_e z + D_e = 0$, to represent the 3D points on the ear edge (i.e., the light blue square shown in Fig. 2). Thus, the normal vector of the parotic face plane can be obtained as $\mathbf{n}_f = (A_f, B_f, C_f)^T$, and the normal vector of the ear plane is $\mathbf{n}_e = (A_e, B_e, C_e)^T$. The angle between parotic face and ear can be defined as follows:

$$\theta = \begin{cases} \theta_1 & \text{if } \theta_1 < 90^\circ \\ 180^\circ - \theta_1 & \text{otherwise} \end{cases} \qquad (1)$$

where $\theta_1 = \arccos\left(\dfrac{\langle \mathbf{n}_f, \mathbf{n}_e \rangle}{\|\mathbf{n}_f\|_2 \, \|\mathbf{n}_e\|_2}\right)$.
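To make the definition concrete, here is a minimal Python/numpy sketch of Eq. (1), assuming the two plane normals are already available. The function name and implementation are ours, not the paper's:

```python
import numpy as np

def ear_parotic_angle(n_f, n_e):
    """Eq. (1): the ear-parotic face angle from the parotic face-plane
    normal n_f and the ear-plane normal n_e, folded into [0, 90] degrees."""
    n_f, n_e = np.asarray(n_f, float), np.asarray(n_e, float)
    cos_t = np.dot(n_f, n_e) / (np.linalg.norm(n_f) * np.linalg.norm(n_e))
    theta1 = np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))
    # Fold obtuse angles so the feature always lies in [0, 90] degrees.
    return theta1 if theta1 < 90.0 else 180.0 - theta1

# Two planes tilted by 30 degrees give an angle of ~30:
print(ear_parotic_angle([0.0, 0.0, 1.0], [0.0, 0.5, np.sqrt(3) / 2]))
```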
3.2. Angle feature extraction
In order to calculate the ear-parotic face angle, the ear is first
located using a mask. Next, the laser lines on the ear are extracted
Fig. 2. Ear-parotic face angle definition. (For interpretation of the references to color
in the text, the reader is referred to the web version of this article.)
Fig. 4. Process of using ROI edge points (both ear and parotic face) to calculate the
ear-parotic face angle.
Fig. 3. Ear image processing: (a) ear mask generation, (b) extraction of laser lines and
key-points on the mask. (For interpretation of the references to color in the text, the
reader is referred to the web version of this article.)
and traced to represent the ear edge-points. Afterwards, a region of interest (ROI) consisting of the ear and parotic face can be defined. Finally, the normal vectors nf and ne are obtained from the located ear and parotic face. Below we explain each step in detail.
First, a mask is generated to separate the ear and parotic face areas from the original captured images. Among the 2D scanning sequence, there is no laser line in the last frame, so the last frame is selected for mask generation. Next, a binary image is acquired from it, and then morphological dilation, opening, and closing operations are applied to fill holes and remove noisy pixels. Lastly, the connected component labeling algorithm is applied, and the largest connected component is retained as the final mask (as shown in Fig. 3(a)).
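A minimal sketch of this mask-generation pipeline using OpenCV, assuming an 8-bit grayscale last frame as input; the Otsu threshold and the kernel size are illustrative assumptions, since the paper does not specify them:

```python
import cv2
import numpy as np

def generate_mask(last_frame_gray):
    """Sketch of the mask-generation step on the laser-free last frame."""
    # Binarize (Otsu picks the threshold automatically; an assumption here).
    _, binary = cv2.threshold(last_frame_gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    # Dilation, opening and closing fill holes and remove noisy pixels.
    binary = cv2.dilate(binary, kernel)
    binary = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
    binary = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
    # Keep only the largest connected component as the final mask.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
    if n <= 1:                                      # nothing but background
        return np.zeros_like(binary)
    largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])  # skip label 0
    return (labels == largest).astype(np.uint8) * 255
```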
After mask generation, the laser lines can be extracted within the mask region. Since the pixels on the laser lines are much brighter than the other pixels in the image, we extract the laser lines by selecting the brightest pixel in each column (the white lines shown in Fig. 3(b)).
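The brightest-pixel-per-column rule could look like the following numpy sketch (our illustration; the paper gives no code):

```python
import numpy as np

def extract_laser_line(frame_gray, mask):
    """Pick the brightest pixel in each column inside the mask."""
    masked = np.where(mask > 0, frame_gray.astype(float), -np.inf)
    rows = np.argmax(masked, axis=0)             # brightest row per column
    valid = np.isfinite(masked.max(axis=0))      # columns covered by the mask
    cols = np.flatnonzero(valid)
    return np.stack([rows[cols], cols], axis=1)  # (row, col) line points
```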
Before calculating the angle between the ear plane and the parotic face plane, some representative key-points should be selected to describe the ear and the parotic face. This processing step consists of two major tasks: tracing edge points on the ear, and selecting stable points on the parotic face. Fig. 3(b) shows three cases of the edge points on the ear (marked in red). Case 1: when laser lines are projected on the bottom of the ear (earlobe), select the furthest line's right endpoint as an edge-point. Case 2: when laser lines are projected on the ear and parotic face, select the rightmost line's right endpoint as the edge-point of interest. Case 3: when there is only one line on the ear, the rightmost endpoint is the edge-point. For the second task, the key-points in Fig. 3(b) between the ear and parotic face are always the leftmost line's endpoints (marked in green). In our case we only have to locate the lowest and highest key-point sets, and select the leftmost points of each set as the edge-points between ear and parotic face (e.g. P3 and P4 in Fig. 4).
Using the key-points extracted above, we can segment the ear region and locate a round region on the face. As shown in Fig. 4, P3 and P4 are the leftmost edge points in the lower 1/4 and upper 1/4 of the ear. We connect a straight line from P3 to P4 as the boundary between ear and face. P1 is the lowest edge point and P2 is the highest edge point, which makes A and B the projection points of P1 and P2 on the boundary line. C is the point at the lower third of AB, and OC is perpendicular to AB. In our experiment the distance between O and C is 10 mm, and the radius of the circle is 5 mm. The ROI is constituted by both the ear region and the round region on the parotic face.
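The geometry of this construction can be sketched as follows, assuming 2D points in millimetres. The sign test used to step toward the face side is our assumption, since the paper only states that OC is perpendicular to AB with |OC| = 10 mm:

```python
import numpy as np

def parotic_circle_center(P1, P2, P3, P4, dist_oc=10.0):
    """Sketch of the ROI geometry in Fig. 4: project P1, P2 onto the
    boundary line P3-P4 to get A and B, take C at the lower third of AB,
    then step dist_oc perpendicular to AB to obtain the circle center O."""
    P1, P2, P3, P4 = (np.asarray(p, float) for p in (P1, P2, P3, P4))
    u = P4 - P3
    u /= np.linalg.norm(u)              # unit vector along the boundary
    A = P3 + np.dot(P1 - P3, u) * u     # projection of P1 on the boundary
    B = P3 + np.dot(P2 - P3, u) * u     # projection of P2 on the boundary
    C = A + (B - A) / 3.0               # point at the lower third of AB
    n = np.array([-u[1], u[0]])         # unit normal of the boundary line
    if np.dot(P1 - C, n) > 0:           # step away from the ear points,
        n = -n                          # i.e. toward the face (assumption)
    return C + dist_oc * n              # center O; the radius is 5 mm
```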
After preprocessing, we can locate the points in the parotic face-plane and detect the boundary points in the ear-plane. To compute
the angle, the main task is to extract the normal vector that represents
the ear/parotic face plane. In the above ROI selection stage, a set of
ear boundary points and a region of parotic face points are located,
and the 3D coordinates of these points are computed using:
$$\mathbf{p} = \begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} \dfrac{x'\,(b - d\tan\theta)}{x' + f\tan\theta} \\[1.5ex] \dfrac{y'\,(b - d\tan\theta)}{x' + f\tan\theta} \\[1.5ex] \dfrac{f\,(b - d\tan\theta)}{x' + f\tan\theta} \end{bmatrix} \qquad (2)$$
Fig. 5. Principal component directions of the ear point cloud (PC1: length, PC2: width, PC3: depth).
Fig. 8. Registration and model indexing diagram.
Fig. 6. Simulation of various possible ear-parotic face angles.
where $(x', y')$ are the 2D coordinates of the point on the laser line recorded in the 2D image, $b$ and $d$ are the horizontal and vertical distances between the camera optic center and the projector axes respectively, $\theta$ is the laser projection angle, and $f$ is the focal length of the camera. The parameters $b$, $d$, and $f$ are pre-calibrated, while $\theta$ is controlled by the motor control circuit.

Using the 3D coordinates of these points, the parotic face-plane and ear-plane can be hypothesized, and the normal vectors of these planes can be computed. Principal component analysis (PCA) has proven to be an effective method for seeking a projection that best represents the data in a least-squares sense by finding the principal axes of the data matrix [25,26]. It is mathematically defined as an orthogonal linear transformation that maps the data to a new coordinate system such that the greatest variance by projection of the data comes to lie on the first coordinate (the first principal component), the second greatest variance on the second coordinate, and so on. Using this property of PCA, we treat the 3D point cloud as a 3-by-n matrix; the first principal component of the point cloud then captures the greatest variance by projection of the matrix, which is the length direction in our case (PC1 in Fig. 5). Similarly, the second principal component (PC2) is the width direction and the third principal component (PC3) is the depth direction. By the orthogonality of the principal components, the third principal component can be used as the normal vector of the point-cloud plane.

Fig. 4 shows the procedure for calculating an ear-parotic face angle. The 3D coordinates of the ear edge are stored in a 3-by-m matrix Se, and the 3D coordinates of the parotic face are stored in a 3-by-n matrix Sf. The PCA method is applied to both matrices to obtain their normal vectors ne and nf respectively, and the angle between ear and parotic face is computed according to the definition in Section 3.1. Fig. 6 is a series of simulations illustrating the possible angles between ear and parotic face; by our definition the angle lies between 0° and 90°. Some typical ear samples and their respective angles are shown in Fig. 7.
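A compact sketch of the two computations just described, Eq. (2) triangulation and the PCA-based plane normal, in Python/numpy. This is our illustration: the function names are ours, and the SVD of the centered point matrix is used as the standard way of obtaining the principal axes:

```python
import numpy as np

def triangulate(x2d, y2d, b, d, f, theta):
    """Eq. (2): recover a 3D point (x, y, z) from the 2D laser-line
    coordinates (x', y'); b, d (camera-projector offsets) and f (focal
    length) are pre-calibrated, theta is the projection angle in radians."""
    s = (b - d * np.tan(theta)) / (x2d + f * np.tan(theta))
    return np.array([x2d * s, y2d * s, f * s])

def plane_normal_pca(points_3xn):
    """Return the third principal component of a 3-by-n point cloud,
    used here as the normal vector of the fitted plane."""
    centered = points_3xn - points_3xn.mean(axis=1, keepdims=True)
    # Left singular vectors = principal axes, ordered by decreasing
    # variance: column 0 is PC1 (length), 1 is PC2 (width), 2 is PC3.
    U, _, _ = np.linalg.svd(centered, full_matrices=False)
    return U[:, 2]  # PC3: the depth direction, i.e. the plane normal
```

Applying plane_normal_pca to Se and Sf and feeding the two normals to the Eq. (1) sketch from Section 3.1 yields the ear-parotic face angle.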
3.3. Angle indexing for 3D ear recognition
The primary function of the angle feature is to reduce the time in
recognition. We can use this feature to sort the samples in a registered
database. When recognition is performed we only need to compare
the test ear with candidate ears, and determine which has a closer
angle value to the test ear. By using this method we do not need to
compare all the registered ears with the test ear, and thus a lot of
time can be saved. The registration and model indexing diagram is
illustrated in Fig. 8.
In the registration stage:
(1) Given a model ear, we first locate the ROI which contains both
the 3D ear and a circular patch of the parotic face.
(2) Compute the angle θ between ear plane and parotic face plane.
(3) According to the angle θ, insert the ROI of the model ear into the registered ear database, which is sorted by ear-parotic face angle in ascending order.
Thus, the registered ears in the database are indexed by their angles, and the database stores both the 3D ears and their corresponding angle values.
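A minimal sketch of such an angle-sorted registration database (our illustration; the paper does not specify a data structure):

```python
import bisect

class AngleIndexedDB:
    """Registered models kept sorted by ear-parotic face angle,
    so that later range queries stay logarithmic in N."""
    def __init__(self):
        self.angles = []   # sorted angle values
        self.models = []   # model ROIs, kept parallel to self.angles

    def register(self, angle, roi):
        i = bisect.bisect_left(self.angles, angle)  # sorted insert position
        self.angles.insert(i, angle)
        self.models.insert(i, roi)
```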
Fig. 7. Typical ear samples with different angles.
In the identification stage:
(1) Given a test ear, locate the ROI, which contains both the 3D ear (Ct) and a circular patch of the parotic face.
(2) Compute the angle α between ear plane and parotic face plane.
(3) In the registered database, locate the angle interval [α − β, α + β] (β is an empirically chosen search range), and select all ear models (Ci, i = 1, . . . , k, where k is the number of registered models in the interval [α − β, α + β]) as candidate models for identification.
(4) Perform identification between the test ear (Ct) and the candidate models (Ci, i = 1, . . . , k).
If the total number of registered models in the database is N and the number of candidate models is k, the identification time is reduced to k/N of the original time cost by this angle-indexing method.
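Reusing the AngleIndexedDB sketch above, candidate selection in step (3) reduces to a binary-search range query (our illustration):

```python
import bisect

def candidates(db, alpha, beta=1.72):
    """Select registered models whose angle lies in [alpha - beta,
    alpha + beta]; beta = 1.72 degrees is the maximum intra-subject
    difference reported in Section 4.1."""
    lo = bisect.bisect_left(db.angles, alpha - beta)
    hi = bisect.bisect_right(db.angles, alpha + beta)
    return db.models[lo:hi]   # k candidates out of N registered models
```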
In the verification stage, we used the ICP algorithm to measure the difference between model and test samples. The ICP algorithm developed by Besl and McKay [27,28] registers two given point sets in a common coordinate system. Each iteration of the algorithm calculates the registration by selecting the nearest points. Variants of ICP have proven effective for aligning 3D ears in previous works [18–21]. Below we explain the algorithm used in our case.
(1) Initialization of the rotation matrix R0 and translation
vector T0 .
(2) Given each point in a test ear, find the corresponding point in
the model ear.
(3) Discard pairs of points which are too far apart using a tolerance
distance tol.
(4) Find the rigid transformation (R, T) and apply it to the test ear.
(5) Go to (2) until no more corresponding points can be found or
the maximum iteration number is reached.
In this paper we used a tolerance distance of 10 for establishing closest-point correspondences, and a maximum of 50 iterations. After rotation and translation, the average distance of all corresponding points is taken as the distance of the model-test pair.
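A minimal point-to-point ICP following steps (1)-(5), using scipy's k-d tree for the nearest-point search and the standard SVD solution for the rigid transform. This is an illustrative sketch, not the authors' implementation; ICP variants differ in details such as point selection and weighting:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(test_pts, model_pts, tol=10.0, max_iter=50):
    """Point-to-point ICP on n-by-3 arrays; returns the mean distance of
    the final correspondences (the model-test pair distance used above)."""
    tree = cKDTree(model_pts)
    src = test_pts.copy()
    for _ in range(max_iter):
        dist, idx = tree.query(src)        # (2) nearest model point
        keep = dist < tol                  # (3) drop pairs beyond tol
        if not keep.any():
            break
        p, q = src[keep], model_pts[idx[keep]]
        # (4) rigid transform via SVD of the cross-covariance (Besl-McKay).
        mp, mq = p.mean(0), q.mean(0)
        U, _, Vt = np.linalg.svd((p - mp).T @ (q - mq))
        R = (U @ Vt).T
        if np.linalg.det(R) < 0:           # guard against reflections
            Vt[-1] *= -1
            R = (U @ Vt).T
        src = src @ R.T + (mq - R @ mp)    # apply (R, T) to the test ear
    dist, _ = tree.query(src)
    return dist[dist < tol].mean() if (dist < tol).any() else np.inf
```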
Fig. 9. The flowchart of combining the angle feature with ICP matching.

Then, we combined the ear-parotic face angle feature and ICP for matching. The angle feature is applied before ICP registration; its function is similar to a rejection classifier. The flowchart is shown in Fig. 9. When matching a test ear Et against a model ear Em, we first calculate the ear-parotic face angles of Et and Em (Anglet and Anglem respectively). Meanwhile, the 3D point clouds of the ears' ROIs are obtained (Ct and Cm respectively). If the difference between Anglet and Anglem is greater than a threshold (1.72° in our case; refer to Section 4.1), the distance (Dist) between Et and Em is set to positive infinity; otherwise, the ICP method is performed on Ct and Cm to calculate the distance.
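The combined matcher then reduces to a small wrapper (a sketch with hypothetical attribute names; icp is the function sketched above):

```python
import numpy as np

def match(test, model, angle_thresh=1.72):
    """Angle feature as a rejection classifier before the costlier ICP.
    `test` and `model` are assumed to carry a precomputed .angle and an
    n-by-3 point cloud .cloud (hypothetical attribute names)."""
    if abs(test.angle - model.angle) > angle_thresh:
        return np.inf                      # reject without running ICP
    return icp(test.cloud, model.cloud)    # full ICP only for survivors
```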
4. Experimental results

In this section we provide the experimental results, which were obtained on a single PC with an Intel Core 2 CPU at 2.33 GHz and 2 GB of memory. Section 4.1 first illustrates the unique angle feature. Section 4.2 then shows the application of the ear-parotic face angle to 3D ear recognition.
4.1. Unique angle feature
To prove the stability of the ear-parotic face angle, we followed six adult volunteers for almost two years. Table 2 shows that the ear-parotic face angles were stable during that period of time: the angles extracted from the same subject at different capture sessions change little, which corroborates that the proposed ear-parotic face angle is a stable feature. We computed all the samples' ear-parotic face angles and found the maximum angle difference for the same ear at different capture sessions to be 1.72°.

Table 2
Ear-parotic face angle difference over a two-year interval.

Volunteer                      1        2        3        4        5        6
Ear-parotic face angle (°)     24.78    30.25    33.36    37.38    38.12    42.54
Angle after two years (°)      24.52    30.96    34.19    36.65    38.96    43.50
Absolute difference (°)        0.26     0.71     0.83     0.73     0.84     0.96
We calculated all 500 left and right ear-parotic face angles, and Table 3 shows the resulting distribution. The first row shows the range of degrees while the second row gives the number of individuals falling into each range. From Table 3 it can be seen that the majority of individuals have an ear-parotic face angle between 30° and 39°, and the distribution is similar to a Gaussian distribution.

Table 3
Ear-parotic face angle distribution.

Angle (°)   0–9   10–19   20–29   30–39   40–49   50–59   60–69   70–79   80–89   90
Number      0     44      112     152     108     51      19      6       8      0
4.2. Applications for ear recognition
Using the ear-parotic face angle, an indexing procedure can be introduced into 3D ear recognition. Table 4 shows the indexing performance for the ear-parotic face angle. β is the angle search range given in Fig. 8, empirically chosen as the maximum difference 1.72° (refer to Section 4.1) divided into 10 equal intervals (1/10, 2/10, . . . , 10/10). The row k/N expresses the fraction of the database searched, and the hit rate expresses the correctly indexed rate. The results show that identification achieves its best performance when the fraction of the database reaches 9.69%. This means our ear-parotic face angle indexing approach takes less time in the identification stage to locate a match. Therefore, we can safely conclude that the angle feature has good indexing performance for 3D ear recognition.

Table 4
Angle-indexing performance with different β values.

β          0.172°   0.344°   0.516°   0.688°   0.860°   1.032°   1.204°   1.376°   1.548°   1.720°
k/N        1.22%    2.41%    3.45%    4.52%    5.44%    6.22%    7.06%    7.88%    8.77%    9.69%
Hit rate   23.6%    41.2%    61.2%    73.6%    81.2%    86.4%    90.0%    92.4%    93.6%    100%
To assess the verification improvement gained by the proposed ear-parotic face angle feature, we conducted comparison experiments using ICP matching and Angle + ICP matching, and compared the results in terms of equal error rate (EER) and receiver operating characteristic (ROC) curve. The result of using only the angle feature is also included for reference. The ROC curves generated by using Angle, ICP, and Angle + ICP are plotted in Fig. 10. The black dashed curve is the verification result using only the angle feature, with an EER of 7.8%; the red curve represents the verification result using ICP matching, with an EER of 2.8%; and the blue dashed curve is the result of combining the angle feature and ICP matching, achieving an EER of 2.3%. As can be seen in Fig. 10, the blue dashed curve lies above all the other curves: for the same false acceptance rate, the Angle + ICP matching achieves a higher genuine acceptance rate than the others.

Fig. 10. ROC curves by using ICP matching and Angle + ICP matching. (For interpretation of the references to color in the text, the reader is referred to the web version of this article.)
5. Discussion and future work

To test the robustness of the angle calculation, we captured the same person's ear at different viewpoints, as shown in Fig. 11, and calculated the corresponding ear-parotic face angles. The results listed in Table 5 show that the angles change little under viewpoint changes (±20°), with absolute differences below 0.9°. We also captured the same ear under different facial expressions: anger, disgust, fear, happiness, sadness, and surprise. Table 6 shows that the ear-parotic face angle barely changes with facial expression.

Fig. 11. Same ear at different viewpoints.

Table 5
Ear-parotic face angles of the same ear at different viewpoints.

Viewpoint                   −20°     −15°     −10°     −5°      0°       5°       10°      15°      20°
Angle (°)                   25.61    25.58    24.87    25.60    24.78    24.91    24.46    24.47    24.19
Absolute difference (°)     0.83     0.80     0.09     0.82     0.00     0.13     0.32     0.31     0.59

Table 6
Ear-parotic face angles under different facial expressions.

Facial expression             Normal   Anger   Disgust   Fear    Happiness   Sadness   Surprise   Variance
Ear-parotic face angle (°)    36.65    36.80   36.34     36.51   35.97       36.48     35.60      0.17

Admittedly, these robustness statistics are not sufficient to conclude that most ear angles are insensitive to viewpoint changes and facial expressions. A future extension of this work will therefore collect more samples covering viewpoint changes, facial expressions, and partial occlusions. Furthermore, a robust and universal angle feature extraction method applicable to existing published databases will be investigated. Finally, we aim to build an efficient recognition system that uses more detailed 3D ear features and fuses global and local features together.
6. Conclusions
In this paper, we presented a unique global feature of the 3D ear: the ear-parotic face angle. This feature is calculated as the angle between the ear plane and the parotic face plane in 3D ear images. It is a novel and independent feature compared to the existing features in 3D ear recognition. Experimental results illustrated its characteristics: universality, stability, (coarse) distinguishability, and permanence over a period of time. Although it is a single feature that cannot distinguish individuals directly, the ear-parotic face angle can provide indexing identification and hierarchical verification solutions which enhance both the speed and accuracy of 3D ear recognition. The experimental results show that the search range was reduced to 9.69% of the original, and the EER was reduced from 2.8% to 2.3%. Another advantage is that the ear-parotic face angle is not an internal feature of the 3D ear; it is an independent external feature containing the spatial orientation of the ear relative to the face. Therefore, the ear-parotic face angle feature can be used in conjunction with other existing features in 3D ear recognition without any limitations.
Acknowledgments
The authors would like to thank the editor and the anonymous
reviewers for their help in improving the paper. The work is partially
supported by the GRF fund from the HKSAR Government, the central fund from Hong Kong Polytechnic University, the NSFC fund
(61332011, 61020106004, 61272292, 61271344), Shenzhen Fundamental Research fund (JCYJ20130401152508661), and Key Laboratory of Network Oriented Intelligent Computation, Shenzhen, China.
References
[1] Y. Pang, Y. Yuan, X. Li, J. Pan, Efficient HOG human detection, Signal Process. 91
(4) (2011) 773–781.
[2] D. Zhang, Automated Biometrics: Technologies and Systems, Kluwer Academic Publishers, 2000.
[3] A. Jain, Biometrics: Personal Identification in Networked Society, Kluwer Academic Publishers, USA, 1999.
[4] D. Zhang, W. Kong, J. You, M. Wong, Online palmprint identification, IEEE Trans.
Pattern Anal. Mach. Intell. 25 (9) (2003) 1041–1050.
[5] Y. Zhang, Q. Li, J. You, P. Bhattacharya, Palm vein extraction and matching for personal authentication, Proceedings of the VISUAL '07, 2007, pp. 154–164.
[6] A. Abaza, A. Ross, C. Hebert, M.A.F. Harrison, M.S. Nixon, A survey on ear
biometrics, ACM Comput. Surv. 45 (2) (2013) 1–35.
[7] A. Abaza, A. Ross, Towards understanding the symmetry of human ears: a biometric perspective, Proceedings of the BTAS '10, 2010, pp. 1–7.
[8] M. Burge, W. Burger, Ear biometrics in computer vision, Proceedings of the ICPR '00, 2000, pp. 822–826.
[9] R. Purkait, P. Singh, A test of individuality of human external ear pattern: its
application in the field of personal identification, Forensic Sci. Int. 178 (2) (2008)
112–118.
[10] A. Iannarelli, Ear Identification, Paramont Publishing Company, 1989.
[11] D. Hurley, M. Nixon, J. Carter, Force field feature extraction for ear biometrics,
Comput. Vis. Image Understanding 98 (3) (2005) 491–512.
[12] M. Choras, Ear biometrics based on geometric feature extraction, Electron. Lett.
Comput. Vis. Image Anal. 5 (3) (2005) 84–95.
[13] I.A. Kakadiaris, G. Passalis, G. Toderici, M.N. Murtuza, Y.L. Lu, N. Karampatziakis, T. Theoharis, Three-dimensional face recognition in the presence of facial
expressions: An annotated deformable model approach, IEEE Trans. Pattern Anal.
Mach. Intell. 29 (4) (2007) 640–649.
[14] C. Samir, A. Srivastava, M. Daoudi, Three-dimensional face recognition using
shapes of facial curves, IEEE Trans. Pattern Anal. Mach. Intell. 28 (11) (2006)
1858–1863.
[15] D. Zhang, G. Lu, W. Li, L. Zhang, N. Luo, Palmprint recognition using 3-D
information, IEEE Trans. Syst. Man Cybern. C Appl. Rev. 39 (5) (2009) 505–
519.
[16] D. Zhang, V. Kanhangad, N. Luo, A. Kumar, Robust palmprint verification using 2D
and 3D features, Pattern Recognit. 43 (1) (2010) 358–368.
[17] W. Li, D. Zhang, G. Lu, N. Luo, A novel 3-D palmprint acquisition system, IEEE
Trans. Syst. Man Cybern. A Syst. Humans 42 (2) (2012) 443–452.
[18] P. Yan, K.W. Bowyer, Biometric recognition using 3D ear shape, IEEE Trans. Pattern
Anal. Mach. Intell. 29 (8) (2007) 1297–1308.
[19] H. Chen, B. Bhanu, Human ear recognition in 3D, IEEE Trans. Pattern Anal. Mach.
Intell. 29 (4) (2007) 718–737.
[20] H. Chen, B. Bhanu, Efficient recognition of highly similar 3D objects in range
images, IEEE Trans. Pattern Anal. Mach. Intell. 31 (1) (2009) 172–179.
[21] S.M.S. Islam, R. Davies, M. Bennamoun, A.S. Mian, Efficient detection and recognition of 3D ears, Int. J. Comput. Vis. 95 (1) (2011) 52–73.
[22] J. Zhou, S. Cadavid, M. Abdel-Mottaleb, An efficient 3-D ear recognition system
employing local and holistic features, IEEE Trans. Inf. Forensics Security 7 (3)
(2012) 978–991.
[23] S.M.S. Islam, M. Bennamoun, R. Owens, R. Davies, A review of recent advances
in 3D ear-and expression-invariant face biometrics, ACM Comput. Surv. 44 (3)
(2012) 1–34.
[24] Y. Liu, G. Lu, A 3D ear acquisition system design by using triangulation imaging principle, Proceedings of the International Conference on Image Analysis and Recognition, 2013, pp. 19–25.
[25] Y. Pang, X. Li, Y. Yuan, Robust tensor analysis with L1-norm, IEEE Trans. Circuits
Syst. Video Technol. 20 (2) (2010) 172–178.
[26] Y. Pang, L. Wang, Y. Yuan, Generalized KPCA by adaptive rules in feature space,
Int. J. Comput. Math. 87 (5) (2010) 956–968.
[27] P. Besl, N. McKay, A method for registration of 3-D shapes, IEEE Trans. Pattern Anal. Mach. Intell. 14 (2) (1992) 239–256.
[28] G. Turk, M. Levoy, Zippered polygon meshes from range images, Proceedings of SIGGRAPH '94, 1994, pp. 311–318.