Project No : FP7-610713
Project Acronym: PCAS
Project Title: Personalised Centralized Authentication System
Scheme:
Collaborative project
Deliverable D3.1
State of the art of mobile biometrics, liveness and
non-coercion detection
Due date of deliverable: (T0+4)
Actual submission date: 31st January 2014
Start date of the project: 1st October 2013
Duration: 36 months
Organisation name of lead contractor for this deliverable: UPM
Final version
Collaborative Project supported by the 7th Framework Programme of the EC
Dissemination level

PU  Public                                                                          X
PP  Restricted to other programme participants (including Commission Services)
RE  Restricted to a group specified by the consortium (including Commission Services)
CO  Confidential, only for members of the consortium (including Commission Services)
Executive Summary:
State of the art of mobile biometrics, liveness and non-coercion detection
This document summarises deliverable D3.1 of project FP7-610713 (PCAS), a Collaborative Project
supported by the 7th Framework Programme of the EC.
This document provides an overview of the state of the art of biometrics in mobile phones, describing
current work, results, limitations, advantages and disadvantages of using biometrics as authentication
systems in mobile devices. In addition, this deliverable also covers the study of voluntary or involuntary
approaches to detect non-coercion and liveness. The report provides essential support for the
decision on the technologies to be deployed in WP3 of the project.
Full information on this project, including the contents of this deliverable, is available online at
http://www.pcas-project.eu.
List of Authors
Carmen Sánchez Ávila (UPM)
Javier Guerra Casanova (UPM)
Francisco Ballesteros (UPM)
Lorenzo Javier Martín García (UPM)
Miguel Francisco Arriaga Gómez (UPM)
Daniel de Santos Sierra (UPM)
Gonzalo Bailador del Pozo (UPM)
Document History

Version  Date       Comments
v0.1     1-11-2013  First draft
v0.9     13-1-2014  Version for internal review
v1.0     26-1-2014  Version for incorporating feedback
v1.1     31-1-2014  Final version
Contents

List of Figures
List of Tables

1 Introduction
  1.1 General concepts on biometrics
    1.1.1 General biometric systems
    1.1.2 Functions of general biometric systems
    1.1.3 Fundamental performance metrics
  1.2 Related European Projects

2 Mobile biometrics
  2.1 Fingerprint recognition
    2.1.1 Introduction
    2.1.2 Relevant works on mobile fingerprint recognition
    2.1.3 Public databases for fingerprint recognition
    2.1.4 Liveness detection on fingerprints
    2.1.5 Conclusion
  2.2 Keystroke dynamics
    2.2.1 Introduction
    2.2.2 Public databases for mobile keystroke dynamics
    2.2.3 Relevant works on mobile keystroke dynamics
    2.2.4 Liveness detection on mobile keystroke dynamics
    2.2.5 Conclusion
  2.3 Face recognition
    2.3.1 Introduction
    2.3.2 Public databases for mobile face recognition
    2.3.3 Relevant works on mobile face recognition
    2.3.4 Multimodal identification using face recognition
    2.3.5 Liveness detection on mobile face recognition
    2.3.6 Conclusion
  2.4 Signature recognition
    2.4.1 Introduction
    2.4.2 Relevant works on signature recognition on mobile phones
    2.4.3 Public databases for mobile signature recognition
    2.4.4 Liveness detection on mobile signature recognition
    2.4.5 Conclusion
  2.5 Hand recognition
    2.5.1 Introduction
    2.5.2 Relevant works on mobile hand recognition
    2.5.3 Public databases for mobile hand recognition
    2.5.4 Liveness detection on mobile hand recognition
    2.5.5 Conclusion
  2.6 Voice recognition
    2.6.1 Introduction
    2.6.2 Relevant works on mobile speaker verification
    2.6.3 Public databases for mobile speaker recognition
    2.6.4 Liveness detection on mobile speaker verification
    2.6.5 Commercial applications
    2.6.6 Conclusion
  2.7 Iris recognition
    2.7.1 Introduction
    2.7.2 Template aging
    2.7.3 Relevant works on mobile iris technique
    2.7.4 Liveness detection on mobile iris recognition
    2.7.5 Conclusion
  2.8 Gait recognition
    2.8.1 Introduction
    2.8.2 Public databases for mobile gait recognition
    2.8.3 Relevant works on mobile gait recognition
    2.8.4 Conclusion
  2.9 Fusion of biometrics
    2.9.1 Introduction
    2.9.2 Multimodal information fusion techniques
    2.9.3 Multimodal databases
    2.9.4 Recent related works
    2.9.5 Conclusion

3 Non-coercion techniques
  3.1 Introduction
  3.2 Involuntary approach
    3.2.1 Physiological signals
    3.2.2 Voice
    3.2.3 Face
    3.2.4 Movement
  3.3 Voluntary approach
  3.4 Conclusion

4 Conclusion

Glossary

Bibliography
List of Figures

1.1  Components of general biometric systems
2.1  Hand shape/Hand geometry approaches. (Left) Contour-based approach. (Right) Distance-based approach.
2.2  Palmprint. Principal lines of the hand.
2.3  Hand Veins. Dorsal palm veins.
2.4  Iris. (Left) Iris picture from CASIA database under IR wavelength. (Right) Iris picture from NICE1 database under visible wavelength.
2.5  Number of Iris Biometrics publications till 2013, searching “iris biometrics” and “iris biometrics mobile phone” into Google Scholar.
2.6  Data fusion paradigms at sensor level.
List of Tables

2.1   Summary of relevant works in mobile fingerprint
2.2   Summary of relevant works in mobile fingerprint liveness detection
2.3   Summary of relevant works in keystroke dynamics
2.4   Commercial face detection available systems
2.5   Most popular face datasets
2.6   Face detection algorithms
2.7   Summary of relevant works about face recognition
2.8   Live detection algorithms based on face analysis
2.9   Summary of relevant works in mobile signature recognition
2.10  Hand biometrics into mobile devices
2.11  A comparative overview of several aspects from different hand databases
2.12  Summary of relevant works in voice recognition
2.13  Public databases for mobile voice recognition
2.14  Iris template aging
2.15  Summary of relevant works in mobile iris biometrics
2.16  Relevant works in gait authentication for mobile phones
2.17  Summary of relevant works in multimodal recognition
1 Introduction
This document is deliverable D3.1 of the PCAS project, regarding the state of the art on mobile
biometrics, liveness and non-coercion detection techniques.
This deliverable is part of WP3, named “Biometric recognition”. The main objectives of this
WP are the development and evaluation of several biometric techniques applied to portable devices
and their fusion, in order to deploy a biometric authentication process in the PCAS device.
Additionally, this device should also include methods to detect the liveness of the captured samples
and a non-coercion method to detect when the user is under severe stress. Both subsystems
are also developed in this WP.
The first task of WP3, “Identification and analysis of technologies and sensors” is focused on
reviewing the state-of-the-art in mobile biometrics and other relevant technologies such as liveness
detection and non-coercion systems.
Deliverable 3.1 has been created as a result of this task, according to the following objectives:
• Reviewing the state of the art on biometrics that can be used in portable devices, concentrating on the advantages, disadvantages, problems and solutions related to applying biometric
methods in standalone mobile devices.
• Reviewing the state of the art on liveness detection in biometrics focused on solutions that can
be used in mobile devices.
• Reviewing the state of the art on non-coercion and stress detection systems based on voluntary
or involuntary user actions.
• Understanding the advantages, disadvantages, problems and solutions of each biometric technique.
Accordingly, the document consists of the following chapters:
• Chapter 1: Introduction: In this chapter, the objectives and scope of the document are
presented. A general description of biometric terminology is included in order to facilitate the
understanding of the rest of the document. In this chapter, an overview of the related FP7
projects is also presented, as many of the research works analyzed subsequently result from
these projects.
• Chapter 2: Mobile biometrics state of the art: This chapter includes a review on the
state of the art of several biometric techniques that can be used in mobile devices. For each
technique, a section on liveness detection is also included to examine how spoofing detection
can be performed. In particular, the following techniques have been inspected:
fingerprint, keystroke dynamics, face, signature, hand, voice, iris and gait. Furthermore, a
section regarding the fusion of several biometrics in mobile devices has also been incorporated.
• Chapter 3: Non-coercion techniques: This chapter reports the state of the art of techniques
to detect coercion. These techniques have been organized in two different approaches: voluntary
and involuntary. The former approach covers procedures where users can voluntarily send alarms
indicating that they are under coercion. The latter, involuntary approaches, refer to
techniques where systems detect the stress of a person through signals that appear involuntarily
when people are under stressful situations.
• Chapter 4: Conclusions: This chapter contains a summary of the main advantages, disadvantages and limitations of each biometric technique applied to standalone devices.
1.1 General concepts on biometrics
This section presents general concepts on biometrics, obtained from the ISO/IEC 19795 standard [37]. This
standard provides definitions and explanations of general biometric systems that are used in the rest of
this document. In addition, the purpose of this standard is to present the requirements and best
scientific practices for conducting technical performance testing. This is necessary because a wide
variety of conflicting and contradictory testing protocols have been used in biometrics over the last
two decades or more. Test protocols have varied not only because test goals and available data are
different from one test to the next, but also because no standard has existed for protocol creation.
Therefore, even though in this document there are references that use different test methodologies,
this section provides an overview of how the experiments and performance measures are commonly
obtained or how they should have been done.
1.1.1 General biometric systems
Given the variety of applications and technologies, it might seem difficult to draw any generalization
about biometric systems. All such systems, however, have many elements in common. Biometric
samples are acquired from a subject by a sensor. The sensor’s output is sent to a processor which
extracts the distinctive but repeatable measures of the sample (the features), discarding all other
components. The resulting features can be stored in the database as a template, or compared to a
specific template, many templates or all templates already stored in a database to determine if there
is a match. A decision regarding the identity claim is made based upon the similarity between the
sample features and those of the template or templates compared.
Figure 1.1 illustrates the information flow within a general biometric system consisting of data
capture, signal processing, storage, matching, and decision subsystems. This diagram illustrates both
enrolment and the operation of verification or identification systems. The following subclauses describe
each of these subsystems in more detail. It should be noted that, in any real biometric system, these
conceptual components may not exist or may not directly correspond to the physical components.
• Data capture subsystem: The data capture subsystem collects an image or signal of a subject’s biometric characteristic that has been presented to the biometric sensor, and outputs this
image/signal as a biometric sample.
• Signal processing subsystem: The signal processing subsystem extracts the distinguishing
features from a biometric sample. This may involve locating the signal of the subject’s biometric
characteristics within the received sample (a process known as segmentation), feature extraction, and quality control to ensure that the extracted features are likely to be distinguishing and
repeatable. Should quality control reject the received sample/s, control may return to the data
capture subsystem to collect a further sample/s. In the case of enrolment, the signal processing subsystem creates a template from the extracted biometric features. Often the enrolment
process requires features from several presentations of the individual biometric characteristics.
Sometimes the template comprises just the features.

Figure 1.1: Components of general biometric systems
• Data storage subsystem: Templates are stored within an enrolment database held in the data
storage subsystem. Each template is associated with details of the enrolled subject. It should
be noted that prior to being stored in the enrolment database, templates may be re-formatted
into a biometric data interchange format. Templates may be stored within a biometric capture
device, on a portable medium such as a smart card, locally on a personal computer or local
server, or in a central database.
• Matching subsystem: In the matching subsystem, the features are compared against one or
more templates, and the similarity scores are passed to the decision subsystem. The similarity scores
indicate the degree of fit between the features and the compared template/s. In some cases,
the features may take the same form as the stored template. For verification, a single specific
claim of subject enrolment would lead to a single similarity score. For identification, many or all
templates may be compared with the features, and output a similarity score for each comparison.
• Decision subsystem: The decision subsystem uses the similarity scores generated from one or
more attempts to provide the decision outcome for a verification or identification transaction.
– In the case of verification, the features are considered to match a compared template when
the similarity score exceeds a specified threshold. A claim about the subject’s enrolment
can then be verified on the basis of the decision policy, which may allow or require multiple
attempts.
– In the case of identification, the enrolled identifier or template is a potential candidate for the
subject when the similarity score exceeds a specified threshold, and/or when the similarity
score is among the highest k values generated for a specified value k. The decision policy
may allow or require multiple attempts before making an identification decision.
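The conceptual subsystems above can be sketched as a minimal verification flow. This is an illustrative Python sketch, not part of the ISO/IEC standard or the PCAS design: the feature extractor (a trivial normalisation), the similarity measure and the threshold value are all assumptions standing in for real implementations.

```python
from dataclasses import dataclass

@dataclass
class Template:
    """Data storage subsystem: features stored at enrolment, tied to a subject."""
    subject_id: str
    features: list[float]

def extract_features(sample: list[float]) -> list[float]:
    """Signal processing subsystem: reduce a raw sample to repeatable features.
    A trivial peak normalisation stands in for a real feature extractor."""
    peak = max(abs(x) for x in sample) or 1.0
    return [x / peak for x in sample]

def similarity(features: list[float], template: Template) -> float:
    """Matching subsystem: higher score means a better fit.
    Score = 1 / (1 + mean absolute distance between feature vectors)."""
    dist = sum(abs(a - b) for a, b in zip(features, template.features)) / len(features)
    return 1.0 / (1.0 + dist)

def verify(sample: list[float], claimed: Template, threshold: float = 0.8) -> bool:
    """Decision subsystem: accept the identity claim iff the score exceeds the threshold."""
    return similarity(extract_features(sample), claimed) > threshold
```

Enrolment corresponds to building a `Template` from one or more samples; verification compares one fresh sample against the single template of the claimed identity.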
1.1.2 Functions of general biometric systems
There are three main functions in biometric systems:
1. Enrolment: In enrolment, a transaction by a subject is processed by the system in order to
generate and store an enrolment template for that individual. Enrolment typically involves:
sample acquisition, segmentation and feature extraction, quality checks (which may reject the
sample/features as being unsuitable for creating a template, and require acquisition of further
samples), template creation (which may require features from multiple samples), possible conversion into a biometric data interchange format and storage, test verification or identification
attempts to ensure that the resulting enrolment is usable, and should the initial enrolment be
deemed unsatisfactory, enrolment attempt repetitions may be allowed (dependent on the enrolment policy).
2. Verification: In verification, a transaction by a subject is processed by the system in order to
verify a positive specific claim about the subject’s enrolment (for example, “I am enrolled as subject
X”). Verification will either accept or reject the claim. The verification decision outcome is
considered to be erroneous if either a false claim is accepted (false accept) or a true claim is
rejected (false reject). Note that some biometric systems will allow a single end-user to enrol more
than one instance of a biometric characteristic (for example, an iris system may allow end-users
to enrol both iris images, while a fingerprint system may have end-users enrol two or more fingers
as backup, in case one finger gets damaged). Verification typically involves: sample acquisition,
segmentation and feature extraction, quality checks (which may reject the sample/features as
being unsuitable for comparison, and require acquisition of further samples), comparison of
the sample features against the template for the claimed identity producing a similarity score,
judgement on whether the sample features match the template based on whether the similarity
score exceeds a threshold, and a verification decision based on the match result of one or more
attempts as dictated by the decision policy.
3. Identification: In identification, a transaction by a subject is processed by the system in order
to find an identifier of the subject’s enrolment. Identification provides a candidate list of identifiers that may be empty or contain only one identifier. Identification is considered correct when
the subject is enrolled, and an identifier for their enrolment is in the candidate list. The identification is considered to be erroneous if either an enrolled subject’s identifier is not in the resulting
candidate list (false-negative identification error), or if a transaction by a non-enrolled subject
produces a non-empty candidate list (false-positive identification error). Identification typically
involves: sample acquisition, segmentation and feature extraction, quality checks (which may
reject the sample/features as being unsuitable for comparison, and require acquisition of further samples), comparison against some or all templates in the enrolment database, producing a
similarity score for each comparison, judgement on whether each matched template is a potential candidate identifier for the user, (based on whether the similarity score exceeds a threshold
and/or is among the highest k scores returned) producing a candidate list, an identification
decision based on the candidate lists from one or more attempts, as dictated by the decision
policy.
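The identification decision policy described above (retain identifiers whose score exceeds a threshold, and/or is among the highest k) can be sketched in a few lines. The following Python sketch is illustrative; the identifiers and score values are invented for the example and carry no meaning from the deliverable.

```python
def identify(scores: dict[str, float], threshold: float, k: int) -> list[str]:
    """Return the candidate list: enrolled identifiers whose similarity score
    exceeds the threshold, keeping only the k highest-scoring ones.
    The list may be empty (no score cleared the threshold)."""
    above = [(ident, s) for ident, s in scores.items() if s > threshold]
    above.sort(key=lambda pair: pair[1], reverse=True)  # best match first
    return [ident for ident, _ in above[:k]]
```

With made-up scores `{"alice": 0.91, "bob": 0.55, "carol": 0.87, "dave": 0.93}`, a threshold of 0.8 and k = 2, the candidate list is `["dave", "alice"]`; raising the threshold above every score yields the empty list, i.e. a (correct or false-negative) non-identification.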
This report is focused on verification applications, where the identity of the user is known a priori
(because the user provides it via a name, a card or an identification number, or because there is only one
user enrolled in the system). For this purpose, only enrolment and verification functions are used.
1.1.3 Fundamental performance metrics
The ISO/IEC 19795 standard proposes to evaluate biometric algorithms through the following rates:
• Failure-to-enrol Rate (FTE): The failure-to-enrol rate is the proportion of the population for
whom the system fails to complete the enrolment process. The failure-to-enrol rate shall include:
those attempts in which the user is unable to present the required biometric characteristic; those
unable to produce a sample of sufficient quality at enrolment; and those who cannot reliably
produce a match decision with their newly created template during attempts to confirm the
enrolment is usable. Attempts by users unable to enrol in the system shall not contribute to the
failure-to-acquire rate, or matching error rates.
• Failure-to-acquire Rate (FTA): The failure-to-acquire rate is the proportion of verification
or identification attempts for which the system fails to capture or locate a sample of sufficient
quality. The failure-to-acquire rate shall include: attempts where the biometric characteristic
cannot be presented (e.g. due to temporary illness or injury) or captured; attempts for which
the segmentation or feature extraction fails and attempts in which the extracted features do not
meet the quality control thresholds.
• False Non-Match Rate (FNMR): The false non-match rate is the proportion of samples,
acquired from genuine attempts, that are falsely declared not to match the template of the same
characteristic from the same user supplying the sample.
• False Match Rate (FMR): The false match rate is the proportion of samples, acquired from
zero-effort impostor attempts, that are falsely declared to match the compared non-self template.
• False Rejection Rate (FRR): The false rejection rate is the proportion of genuine verification
transactions that will be incorrectly denied. It is calculated as: FRR = FTA + FNMR × (1 − FTA).
• False Acceptance Rate (FAR): The false acceptance rate is the expected proportion of zero-effort
non-genuine transactions that will be incorrectly accepted. It is calculated as: FAR = FMR × (1 − FTA).
• Receiver operating characteristic (ROC) curve: It is a plot of the false positive rate on the
x-axis against the corresponding true positive rate (genuine attempts accepted) on the y-axis,
drawn parametrically as a function of the decision threshold.
• Detection error trade-off (DET) curve: It is a modified ROC curve which plots error rates
on both axes (FAR on the x-axis and FRR on the y-axis). In this curve, the value where FRR
is equal to FAR is denoted as Equal Error Rate (EER).
In addition to this, researchers sometimes use other rates to evaluate biometric systems. Some
of the most common ones, referred to in this report, are:
• Half Total Error Rate (HTER): It is the average of FAR and FRR.
• Genuine Match Rate (GMR): It is the proportion of accepting a genuine sample (1-FRR).
• Correct Classification Rate (CCR): It is the proportion of samples correctly classified independently of the class.
• Correct Identification Rate (CIR): It is the proportion of samples correctly identified independently of the identity.
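The relations between the rates above can be checked with a small worked example. The Python sketch below simply encodes the two formulas and the HTER definition; the numeric rate values in the comments are illustrative assumptions, not results from the deliverable.

```python
def frr(fta: float, fnmr: float) -> float:
    """FRR = FTA + FNMR * (1 - FTA): a genuine transaction is rejected if
    acquisition fails, or if it succeeds but the comparison falsely non-matches."""
    return fta + fnmr * (1.0 - fta)

def far(fta: float, fmr: float) -> float:
    """FAR = FMR * (1 - FTA): a zero-effort impostor is accepted only if the
    sample is acquired and then falsely matches."""
    return fmr * (1.0 - fta)

def hter(far_value: float, frr_value: float) -> float:
    """Half Total Error Rate: the average of FAR and FRR."""
    return (far_value + frr_value) / 2.0

# Illustrative values: FTA = 2%, FNMR = 5%, FMR = 1%
# FRR = 0.02 + 0.05 * 0.98 = 0.069  (6.9% of genuine transactions rejected)
# FAR = 0.01 * 0.98 = 0.0098        (about 1% of impostor transactions accepted)
```

Note how a non-zero FTA lowers FAR (a sample that is never acquired cannot falsely match) while it raises FRR, which is why the two transaction-level rates are reported alongside, not instead of, FNMR and FMR.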
1.2 Related European Projects
In this section, the most related European projects regarding biometrics are presented. Most of them
conduct research on biometrics with no specific application. Some of them try to use biometrics in
mobile devices, but in most of these initiatives the authentication process is performed outside the
device. Much of the research work analyzed in chapters 2 and 3 has been produced in these projects.
A brief description of each project and its objectives is introduced as follows:
• BIOSECURE 2004-2007 [10]: This is an FP6 project, running from 2004 to 2007, with 30 core partners.
The main focus of the project was to provide reliable evaluation platforms for different biometric
modalities as well as for systems that combine multiple biometric modalities. The mainly academic organizations involved in BioSecure covered a wide range of research activities in the area
of multimodal biometrics with extensive experience in database acquisition and performance
evaluation campaigns. The project addressed scientific, technical and interoperability challenges
as well as standardization and regulatory questions which are critical issues for the future of
biometrics and its use in everyday life.
• BEAT Biometric Evaluation and Testing 2012-2015 [8]: BEAT is dedicated to the development of a framework of standard operational evaluations for biometric technologies. It
includes the development and the proposal for standardization of a methodology for Common
Criteria evaluations of biometric systems.
• MOBIO Mobile Biometry 2008-2010 [20]: MOBIO addresses several innovative aspects
relative to bi-modal authentication systems in the framework of mobile devices (focusing on face
recognition and voice), in embedded and remote biometrics.
• TABULARASA Trusted Biometrics under Spoofing Attack 2010-2014 [29]: The focus
of this project was to address solutions to the recently shown vulnerabilities of conventional
biometric techniques, such as fingerprints and face, to direct (spoof) attacks, performed by
falsifying the biometric trait and then presenting this falsified information to the biometric
sensor. In this project, there are two main issues: analyzing the effectiveness of direct attacks
to a range of biometrics and exploring appropriate countermeasures.
• SENSATION Advanced Sensor Development for Attention, Stress, Vigilance and
Sleep/wakefulness 2004-2007 [6]: SENSATION explores a wide range of micro- and
nano sensor technologies, with the aim of achieving unobtrusive, cost-effective, real-time monitoring, detection and prediction of human physiological state in relation to wakefulness, fatigue
and stress anytime, everywhere and for everybody.
• SECUREPHONE 2004-2006 [26]: SECUREPHONE developed a mobile device enhanced with a “biometric
recogniser” that permits users to mutually recognise each other and authenticate securely.
The project proposed voice-based speaker verification, face recognition and on-line handwritten
signature verification, discarding other biometrics such as fingerprint and iris recognition
due to their physical and social intrusiveness.
• BITE Biometric Identification Technology Ethics 2005-2007 [7]: BITE aims to prompt
research and to launch a public debate on bioethics of biometric technology.
• BioSec 2009-2012 [9]: The project is not just looking at each of the traditional technological
components (sensors, algorithms, data fusion, network transactions, data storage), but is also
considering operational (security framework, interoperability, standardization) and user-centered (usability, acceptance, legal framework compliance) issues. Some of the technological and
research challenges the project is addressing include aliveness detection of biometric samples,
match-on-card solutions, personal biometric storage, interoperability, multiple biometrics, etc.
However, the project also recognizes the importance of dealing with the non-technological issues
of biometric deployment, such as usability, acceptance, data protection and business cases.
• ACTIBIO Unobtrusive authentication using activity related and soft biometrics
2008-2011 [30]: ACTIBIO targeted a multimodal approach fusing information from various
sensors capturing either the dynamic behavioural profile of the user (face, gesture, gait, body
dynamics) or the physiological response of the user to events (analysis of electroencephalography and electrocardiography). ACTIBIO also researched the use of unobtrusive sensors, either
wearable (in garments or uniforms to capture body dynamics) or integrated in the infrastructure (sensing seat sensors capturing the anthropometric profile of the user, sound-based activity
recognition sensors, etc.). In this way ACTIBIO developed novel activity related and soft biometrics technologies for substantially improving security, trust and dependability of “always on”
networks and service infrastructures.
In many of these initiatives, biometric databases for research have been released. Additionally, biometric evaluation campaigns have been carried out in order to let the research community improve the performance of biometric systems. Many of their results will be discussed in the next chapter, organised by biometric technique.
2 Mobile biometrics
With the increasing functionality and services accessible via mobile telephones, there is a strong
argument that the user authentication level on mobile devices should be extended beyond the Personal
Identification Number (PIN) that has traditionally been used.
One of the principal alternatives where the industry has focused is the usage of biometric techniques
on mobile phones as a method to verify the identity of a person accessing a service.
The author of a recent report on biometrics forecasts [211] believes that there will be a rush by smart mobile device manufacturers to emulate Apple by embedding and integrating biometric technology into their next-generation devices, not only fingerprint sensors but other biometric technologies as well.
In addition to this, the report in [131] also suggests that the iPhone 5S deployment, with an embedded touch fingerprint sensor, was a pivotal moment for the biometrics industry and will accelerate the adoption of biometric products. The report estimates that biometrics on mobile devices will generate about $8.3 billion of revenue for the biometrics industry by 2018, not just for unlocking the device but for approving payments and as part of multi-factor authentication services.
However, adapting most biometric technologies to mobile devices remains challenging and full of difficulties.
In this chapter, the eight most significant biometric technologies are described, pointing out their characteristics and the most relevant works on adapting each technology for use in a mobile phone. Each technique is presented in its own section, including an introduction to the technique and the works relevant to applying it on a mobile phone.
Additionally, for each biometric technique it is also noted whether there is any public database with samples captured from a mobile device. These databases can be used to evaluate the algorithms deployed, making the results comparable to those of other algorithms. As introduced before, the evaluation of many biometric research works is often not presented following a standard protocol, so having a public database of biometric samples, with a specific testing protocol and performance measures to compare against, is quite useful.
The performance of biometric systems is usually measured by error rates. However, most biometric technologies are vulnerable to fake biometric samples, such as photographs, gummy fingers, contact lenses, etc. This is a very relevant vulnerability, since it is quite simple to produce a fake characteristic. In general, these problems are addressed by including a liveness detection module in the verification process, in order to ensure that the biometric characteristic presented belongs to a live person. Liveness detection techniques differ for each biometric modality. Accordingly, for each modality, the relevant works related to liveness detection are included.
In particular, the biometric techniques applied to mobile devices that are covered in the subsequent sections of this chapter are:
• Fingerprint.
• Keystroke.
• Face.
• Signature.
• Hand.
• Voice.
• Iris.
• Gait.
For each of these biometric techniques, a list of advantages and disadvantages of its use in mobile phones is included. These conclusions represent the main ideas to consider when selecting the most appropriate techniques for the project, in accordance with the requirements, scenarios, hardware limitations, experience and marketing research.
In addition, there are also many initiatives aiming to combine several biometric techniques into a multi-factor authentication system. Accordingly, at the end of the descriptions of the biometric techniques, a multibiometrics review is added.
2.1 Fingerprint recognition
This section describes the most important works regarding fingerprint recognition in mobile phones. First, Section 2.1.1 presents an overview of fingerprint biometrics in classic systems. Next, Section 2.1.2 gathers the most recent and relevant works on using fingerprint recognition in mobile phones. This section pays special attention to the recent iPhone 5S device, since it is the first successful initiative using fingerprints in mobile phones.
Section 2.1.3 then presents the most significant public databases used to evaluate fingerprint systems. Following this, a description of current work on fingerprint liveness detection is presented in Section 2.1.4. Liveness is one of the main difficulties of this biometric technique, since it is quite easy to forge a fingerprint from a latent sample left anywhere the user touches.
Finally, the conclusions of this section are presented in Section 2.1.5.
2.1.1 Introduction
Fingerprint recognition refers to the automated method of identifying or confirming the identity of
an individual based on the comparison of two fingerprints. Fingerprint recognition is one of the
most well known biometrics, and it is by far the most used biometric solution for authentication
on computerized systems. The reasons for fingerprint recognition being so popular are the ease of
acquisition, established use and acceptance when compared to other biometrics, and the fact that
there are numerous (ten fingers) sources of this biometric on each individual.
There are many research articles, books and state-of-the-art surveys regarding fingerprint recognition, where the main characteristics of these systems are explained in depth [325], [385], [307]. A brief review of the conventional fingerprint technique follows.
A fingerprint is the pattern of ridges and valleys on the surface of a fingertip. There are different
levels of information when representing a fingerprint:
• Level 1 (Global): There are three basic patterns of fingerprint ridges: arch, loop, and whorl.
• Level 2 (Local): The major minutia features of a fingerprint are the ridge ending, the bifurcation and the short ridge. The representation of a fingerprint by its minutiae includes not only the type and position of each feature, but also the direction and angle of the ridge and the distance between two consecutive ridges.
• Level 3 (Fine): Ridge details such as width, shape, pores, etc.
There exist four main types of fingerprint reader sensor. All of them require the finger to be placed on the surface of the sensor (in contact):
• Optical readers: They are the most common type of fingerprint reader. The sensor in an optical reader is a digital camera that acquires a visual image of the fingerprint. These sensors are strongly affected by dirty or marked fingers, and this type of fingerprint reader is easier to fool than others.
• Capacitive readers: A Complementary Metal-Oxide-Semiconductor (CMOS) reader uses capacitors, and thus electrical current, to form an image of the fingerprint. An important advantage of capacitive readers over optical readers is that a capacitive reader requires a real fingerprint shape rather than only a visual image. This makes CMOS readers harder to trick, although they are more expensive than optical ones.
• Ultrasound readers: The most recent type of fingerprint reader, they use high-frequency sound waves to penetrate the epidermal layer of the skin. They read the fingerprint on the dermal skin layer, which eliminates the need for a clean surface. This type of fingerprint reader is far more expensive than the first two; however, due to their accuracy and the fact that they are difficult to fool, ultrasound readers are already very popular.
• Thermal readers: These sensors measure, on a contact surface, the temperature difference between fingerprint ridges and valleys. Thermal fingerprint readers have a number of disadvantages, such as higher power consumption and a performance that depends on the ambient temperature.
There are two main techniques for matching fingerprint features:
• Minutiae matching: relies on recognition of the minutiae points. This is the most widely used technique for verification and identification purposes, and the one with the best performance rates.
• Pattern matching: compares two images to see how similar they are; it is often used in fingerprint systems to detect duplicates or replay attacks.
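As an illustration of the minutiae-matching idea, the following toy sketch pairs minutiae from two prints when they agree in position and ridge angle. Everything here is invented for the example (the tolerances, the greedy pairing, the scoring rule), and real systems first align the two prints, a step omitted here:

```python
import math

def match_score(template, probe, dist_tol=15.0, angle_tol=math.radians(20)):
    """Toy minutiae matcher: each minutia is an (x, y, angle) tuple.

    Greedily pairs minutiae that are close in position and ridge angle,
    and returns the fraction of minutiae matched. Alignment is assumed
    to have been done already; tolerances are illustrative only.
    """
    matched, used = 0, set()
    for (x1, y1, a1) in template:
        for j, (x2, y2, a2) in enumerate(probe):
            if j in used:
                continue
            # smallest angular difference, wrapped into [0, pi]
            da = abs((a1 - a2 + math.pi) % (2 * math.pi) - math.pi)
            if math.hypot(x1 - x2, y1 - y2) <= dist_tol and da <= angle_tol:
                matched += 1
                used.add(j)
                break
    return matched / max(len(template), len(probe))

template = [(10, 10, 0.5), (40, 52, 1.2), (80, 30, 2.8)]
probe    = [(12, 11, 0.6), (41, 50, 1.1), (200, 200, 0.0)]
print(match_score(template, probe))  # 2 of the 3 minutiae pair up
```

A production matcher would additionally estimate the rotation and translation between the prints and score with a learned or statistically grounded rule rather than a plain fraction.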
One of the most accepted methods to evaluate the performance of fingerprint recognition systems is the FVC-onGoing initiative [14]. It is a web-based automated evaluation system for
fingerprint recognition algorithms where the tests are carried out on a set of sequestered datasets.
Results are reported on-line by using well known performance indicators and metrics. The aim is to
track the advances in fingerprint recognition technologies, through continuously updated independent
testing and reporting of performances on given benchmarks. FVC-onGoing is the evolution of FVC:
the international Fingerprint Verification Competitions organized in 2000, 2002, 2004, and 2006. At
present, 2684 algorithms from 645 registered participants have been evaluated. The best algorithm, based on minutiae matching, obtained an EER of 0.108% on good-quality samples and of 0.7% on a benchmark with a relevant number of difficult cases. The characteristics of both benchmarks are described in Section 2.1.3.
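Error rates such as the EER come from genuine and impostor comparison scores. As a hedged sketch with made-up scores, the EER can be estimated by sweeping the decision threshold until the false acceptance rate (FAR) and false rejection rate (FRR) meet:

```python
# Sketch: estimating the Equal Error Rate (EER) from genuine and impostor
# similarity scores by sweeping the decision threshold. Scores are made up.

def far_frr(genuine, impostor, threshold):
    far = sum(s >= threshold for s in impostor) / len(impostor)  # impostors accepted
    frr = sum(s < threshold for s in genuine) / len(genuine)     # genuines rejected
    return far, frr

def eer(genuine, impostor):
    best = None
    for t in sorted(set(genuine + impostor)):
        far, frr = far_frr(genuine, impostor, t)
        gap = abs(far - frr)
        if best is None or gap < best[0]:
            best = (gap, (far + frr) / 2)
    return best[1]

genuine  = [0.9, 0.8, 0.85, 0.7, 0.4]   # same-finger comparison scores
impostor = [0.3, 0.2, 0.5, 0.1, 0.35]   # different-finger scores
print(eer(genuine, impostor))  # -> 0.2
```

At the crossing threshold both error rates are 20% here, so the EER is 0.2; real evaluations such as FVC-onGoing do this over far larger score sets.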
However, using fingerprints in mobile devices is currently a hot topic, since Apple has included a fingerprint recognition system in its iPhone 5S. This fact has changed the world of mobile biometrics,
since for the first time fingerprint sensors have been included simultaneously in a huge number of mobile devices. In the next section, the iPhone's fingerprint system and its consequences are introduced. Additionally, other initiatives to include fingerprint sensors in mobile phones are also discussed.
2.1.2 Relevant works on mobile fingerprint recognition
In recent years, there has been some research on using fingerprints in mobile phones. Depending on the sensor used, we can distinguish between two different approaches:
• Using the camera of the phone to capture an image of the fingertip. No contact is required
between the finger and the camera.
• Integrating a contact sensor, usually capacitive or thermal, in the mobile phone to use the
technology of traditional fingerprint techniques.
There are some examples of research work following the first approach:
For instance, in [45] a mobile, contact-less, single-shot fingerprint capture system is proposed. The described approach captures high-resolution fingerprints and 3D information simultaneously using a single camera. Liquid-crystal polarization rotators combined with birefringent elements provide the focus shift, and a depth-from-focus algorithm extracts the 3D data. This imaging technique does not involve any moving parts, thus reducing the cost and complexity of the system as well as increasing its robustness. Data collection is expected to take less than 100 milliseconds.
A more recent work on mobile phone camera based fingerprint recognition was carried out in
[315]. In this article, the authors evaluate the feasibility of fingerprint recognition via a mobile phone camera under real-life scenarios including (1) indoors with office illumination, (2) natural darkness, and (3) outdoors with natural illumination and a complicated background. For this experiment, they selected
three popular smartphones (Nokia N8, iPhone 4, Samsung Galaxy I) to capture fingerprint images.
NeuroTechnology and NIST functions were adopted to generate ISO standard minutiae templates and
compute the comparison scores among different subsets of the generated templates. The evaluation
results (EER over 25%) indicate that, unlike the in-lab scenario, it is a very challenging task to use a mobile phone camera for fingerprint recognition in real-life scenarios, and thus it is essential to control image quality during the sample acquisition process. The authors note that, unlike the laboratory environment, where camera and hands are both fixed, in real-life scenarios it was impossible to avoid hand and camera shake while taking photos. Moreover, outdoors the camera usually focused on the background, so it was quite hard to obtain stable, good-quality images.
The same authors continued their work on quality assessment for fingerprints collected by smartphone cameras in [314] and [313]. They extracted a set of quality features for image blocks. Without needing segmentation, the approach determines a sample's quality by checking all the image blocks into which the sample is divided, classifying each block with a Support Vector Machine (SVM). A quality score is then generated for the whole sample. Experiments showed that this approach performed well in identifying high-quality blocks (0.53 Spearman correlation coefficient) with 4.63 percent false detection (background blocks judged as high-quality ones).
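The per-block idea can be illustrated with a small sketch. This is only loosely inspired by the cited approach: the three block features, the synthetic "ridge" and "background" images, and the labels are all invented for the example, and scikit-learn's `SVC` merely stands in for whatever SVM implementation the authors used:

```python
import numpy as np
from sklearn.svm import SVC

def block_features(img, size=8):
    """Simple per-block statistics: mean, contrast, gradient energy."""
    feats = []
    for i in range(0, img.shape[0] - size + 1, size):
        for j in range(0, img.shape[1] - size + 1, size):
            b = img[i:i + size, j:j + size].astype(float)
            gy, gx = np.gradient(b)
            feats.append([b.mean(), b.std(), np.hypot(gx, gy).mean()])
    return np.array(feats)

rng = np.random.default_rng(0)
ridge = (rng.random((32, 32)) * 255).astype(np.uint8)  # high-contrast stand-in
flat  = np.full((32, 32), 128, np.uint8)               # featureless background
X = np.vstack([block_features(ridge), block_features(flat)])
y = np.array([1] * 16 + [0] * 16)                      # 1 = high-quality block
clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
print(clf.predict(block_features(flat)).sum())  # expect 0: all blocks judged background
```

A whole-sample quality score would then be some aggregate (e.g. the fraction of blocks predicted high-quality), which is the part the cited works tune carefully.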
The idea of using a mobile device camera for fingerprint recognition was also pursued by the authors of [306]. In this work, the authors propose a method to find valid regions in focused images where valleys and ridges are clearly distinguished. They propose a new focus-measurement algorithm using secondary partial derivatives, and a quality estimation utilizing the coherence and symmetry of the gradient distribution. The authors created a database with a Samsung mobile phone, capturing the fingerprints of 15 volunteers through its camera. With this database, the best EER they obtained is around 3%.
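Second-derivative focus measures of this general kind can be sketched with the common variance-of-Laplacian statistic. This is a generic stand-in, not the exact algorithm of the cited work:

```python
import numpy as np

# Variance-of-Laplacian focus measure: a sharply focused ridge pattern has
# strong second derivatives across the ridges; a defocused or low-contrast
# capture does not. (Generic stand-in, not the cited algorithm.)
LAPLACIAN = np.array([[0, 1, 0],
                      [1, -4, 1],
                      [0, 1, 0]], float)

def focus_measure(img):
    img = img.astype(float)
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):          # naive 3x3 convolution, no padding
        for j in range(w - 2):
            out[i, j] = (LAPLACIAN * img[i:i + 3, j:j + 3]).sum()
    return out.var()

x = np.linspace(0, 8 * np.pi, 64)
sharp = np.tile(128 + 100 * np.sin(x), (64, 1))  # crisp ridge-like stripes
soft  = np.tile(128 + 10 * np.sin(x), (64, 1))   # low-contrast stand-in for defocus
print(focus_measure(sharp) > focus_measure(soft))  # True
```

Thresholding this measure per region is one simple way to keep only blocks where ridges and valleys are clearly distinguished.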
Additionally, in [405] the authors also pursued the idea of using fingerprints with mobile cameras or webcams. In this work, the authors claimed that the images produced by these kinds of sensors during the acquisition of human fingertips are very different from the images obtained by dedicated fingerprint sensors, especially as far as quality is concerned. The literature presents a vast number of methods that are extremely effective in processing fingerprints obtained with classical sensors and procedures, so in their work the authors investigated new techniques to suitably process camera images of fingertips in order to produce images as similar as possible to those coming from dedicated sensors. The results presented in this work considered a scenario evaluation with a limited number of volunteers, where the system obtained an error rate of around 4%.
Furthermore, in [234] the authors propose a touch-less fingerprint recognition system as a viable alternative to contact-based fingerprint recognition technology. It provides a near-ideal solution to problems of hygiene, maintenance and latent fingerprints. In this paper, the authors present a touch-less fingerprint recognition system using a digital camera. They address the constraints of fingerprint images acquired with digital cameras, such as the low contrast between ridges and valleys, focus, and motion blur. The system comprises preprocessing, feature extraction and matching stages. The proposed preprocessing stage shows promising results in terms of segmentation, enhancement and core-point detection. Feature extraction is done with Gabor filters, and the verification results are attained with an SVM. They obtained a best EER value of 2% with a database of 100 fingers and 10 images per finger.
Finally, a review of fingerprint pre-processing using a mobile phone camera is presented in [279]. In this work, the authors survey the state of the art of pre-processing in mobile fingerprint recognition, as well as many other research articles related to mobile camera fingerprint recognition. The authors consider this area to be maturing, with promising results, and anticipate much future work in this field, especially in mobile camera focusing, fingerprint processing, and fingerprint template security.
All these works are summarized in Table 2.1.

Publication        Sensor                                                  Subjects          Result
[45]               Birefringent lens                                       25                FRR = 4.2%
[315]              Mobile camera (Nokia N8, iPhone 4, Samsung Galaxy I)    100 (real life)   EER = 25%
[314] and [313]    Camera phone                                            —                 FD = 4.63%
[306]              Samsung camera phone                                    15                EER = 3%
[405]              Webcam Microsoft LifeCam VX-1000                        15                EER = 4.7%
[234]              Digital camera (Canon PowerShot Pro1)                   10                EER = 2%

Table 2.1: Summary of relevant works in mobile fingerprint
There is a second approach to fingerprint biometrics in mobile phones, consisting of incorporating a contact sensor into the phone instead of using the already-embedded camera. This approach lets systems use most of the techniques already known in the literature to process fingerprints. However, even though the same sensor could be used in traditional fingerprint recognition systems and in mobile phones, performance in mobile phones can be expected to decrease, since these kinds of sensors are strongly influenced by environmental conditions (such as temperature and humidity), oily fingers, dirt, dust, etc., which occur more often in the mobile context. Additionally, the condition of the sensor in a mobile context is usually worse than in a static access-control scenario.
Indeed, Xia and O'Gorman [487] reviewed the touch-based devices commonly used in the market in 2003. They categorized them into two types: optical sensors and solid-state sensors. According to their review, fingerprint biometrics has a high chance of being used as the solution for personal authentication. The important key to implementing this solution is that the device should be
economical and built into personal devices such as mobile phones. In this work, they claimed that embedding a touch-based device in a mobile phone would add cost and complexity to the phone, which is not desired, and this was the main reason for research on camera-based solutions. Since then, however, costs have been reduced and phone capabilities have increased greatly, making the use of contact sensors possible.
For example, in [455] the authors propose a fingerprint authentication system for mobile phone security applications. The authors developed a prototype with an external fingerprint capture module composed of two parts: a front-end fingerprint capture sub-system and a back-end fingerprint recognition sub-system. A thermal sweep fingerprint sensor is used in the capture sub-system to fit the limitations of size, cost and power consumption. In the recognition sub-system, an optimized algorithm was developed from the one that participated in FVC2004. The performance of the proposed system was evaluated on a database built with the thermal sweep fingerprint sensor, obtaining an EER of 4.13%.
A similar approach was taken in [425], where the authors describe a BioAPI-compatible architecture for mobile biometric fingerprint identification and verification based on an XML Web Service and a Field-Programmable Gate Array (FPGA). They present a client-server system that uses a Personal Digital Assistant (PDA) with a built-in CMOS thermal fingerprint sensor, partially implementing some of the processing functions in hardware. No performance results are reported.
Although there are not many research articles on the use of contact fingerprint sensors in mobile phones, the most important telephone manufacturers have been trying to do this since 1998. A summary of these initiatives can be found in [12].
In the following, the first initiative of each of the most important companies is presented, including also the events of 2013. These entries do not refer to experimental approaches or performance rates, but only to initiatives to incorporate a contact fingerprint sensor in a mobile device.
• Siemens (1998): Siemens PSE and Triodata developed a phone prototype with a Siemens/Infineon
fingerprint sensor on the back.
• Sagem (2000 Jan): Sagem MC 959 ID with a ST/Upek fingerprint sensor on the back.
• HP (2002 Nov): The HP iPAQ h5450 is the first PDA with a built-in fingerprint sensor, the
FingerChip AT77C101 from Atmel.
• Casio (2003 Feb): Casio & Alps Electric unveil a new fingerprint optical sweep sensor designed
for cellphone such as the Casio cellphone prototype.
• Fujitsu (2003 Feb): The Fujitsu F505i cell phone contains an Authentec sensor.
• Fujitsu (2013 Oct): The Fujitsu Arrows Z FJL22 is announced with a round swipe fingerprint sensor.
• Hitachi (2004 July): An Hitachi G1000 PDAphone prototype containing the Atrua swipe
sensor is shown in Japan.
• LG TeleCom (2004 August): LG-LP3800 camera phone containing the Authentec AES2500.
• Yulong (2005 Feb): Yulong announces the Coolpad 858F GSM with the Atrua swipe fingerprint sensor.
• Samsung (2005 Oct): Samsung unveils the SCH S370 Anycall cellphone using the Authentec
swipe fingerprint sensor.
• Samsung (2013 Jan): Validity demonstrates a modified Galaxy S2 with a swipe fingerprint sensor.
20
PCAS Deliverable D3.1
SoA of mobile biometrics, liveness and non-coercion detection
• Lenovo (2005 Oct): Lenovo unveils the ET980 using the Authentec swipe fingerprint sensor; the sensor seems to be an optional feature of the prototype.
• Toshiba (2007 Feb): Toshiba unveils at 3GSM the G900 with an Atrua ATW310 sweep
fingerprint sensor.
• HTC (2006 Mar): Releases the Sirius P6500 with an Authentec sensor.
• HTC (2013 Nov): HTC One Max with a swipe sensor is announced.
• Asus (2008 Mar): Asus unveils the M536 PDA phone.
• Motorola (2008 Jul): Motorola unveils the Q9 Napoleon with an Atrua swipe fingerprint
sensor.
• Sharp (2008 Aug): Sharp unveils the SH907i (aka SH-01A) with an optical swipe fingerprint
sensor.
• Lexun / Leson (2008 Sep): Lexun / Leson unveils the G2 phone.
• Acer (2009 Feb): Acer unveils the Tempo M900 phone.
• Philips (2010 July): Philips unveils the Xenium X712 phone with an Authentec AES2260
sensor.
• Apple (2013 Sep): Apple unveils the iPhone 5S with a fingerprint sensor of its own (having bought Authentec one year earlier).
• Bull (2013 Oct): Bull unveils the Hoox m2 with a Upek TCS5 fingerprint sensor (2008
According to this, many important companies have tried to deploy a fingerprint sensor in their mobile phones. The most surprising part is that all of them except Apple failed, selling very few units.
In [24] there is a discussion of why HTC and many other manufacturers failed in this endeavour. The author concludes that HTC's fingerprint sensor was difficult to use and not well integrated with the device's software, so it was perceived as uncomfortable by users. In addition, HTC made people use their fingerprints for many actions that did not require such security, so the feature became very unpopular and poorly accepted.
In this article, the author also anticipates a new failure of the HTC fingerprint sensor, included again in the new One Max phablet, since it is very uncomfortable to swipe any fingertip other than the index finger of the hand holding the phablet across the sensor. In contrast, the author explains the success of Apple in terms of the usability of its sensor, which seems almost transparent in daily use.
The story of the fingerprints in the iPhone 5S is well summarized in [3]. It began with the acquisition of Authentec, responsible for the recognition software, and Upek, responsible for the hardware. In addition, Apple obtained several patents to protect its fingerprint system. Some of them can be consulted in [23].
The Apple technology uses a capacitive contact sensor built into the home button of the phone to take a high-resolution image of small sections of the fingerprint from the sub-epidermal layers of the skin. However, the iPhone 5S was cracked by Germany's Chaos Computer Club just two days after the device went on sale [11]. The group took a fingerprint of the user, photographed from a glass surface, and then created a “fake fingerprint” which could be put onto a thin film and used with a real finger to unlock the phone. They released a tutorial on how to fake fingerprints in [16]. In effect, they
showed that there was no liveness detection included in the fingerprint system, and they claimed that the security of fingerprints is only a matter of resolution, which can be achieved today with many different technologies. This is a big problem in terms of security, since users leave latent fingerprints in many places (on a glass, a bottle, on the screen of the mobile, etc.). They also raised the problem that someone can easily be forced to unlock their phone against their will, even more easily than with a passcode. Additionally, people cannot change their fingerprints, so they claimed that if someone's fingerprint is compromised once, it is compromised forever.
Furthermore, there are no official false acceptance and false rejection rates for the iPhone 5S fingerprint system. The author of [3] suggested that the system achieves a 0.05% FAR with an FRR of 2-10%.
However, despite these vulnerabilities, fingerprint technology on the iPhone has been a big success. In [13], the authors suggest that this is a matter of convenience, not security. This means that users like fingerprints just because they are easier and faster than typing a PIN code, even though they know the mechanism is not secure enough and can be faked. Indeed, the author of this report indicates that when users need to perform a very secure action they may prefer other options, but fingerprints are very useful for everyday operations, for example as part of multi-factor authentication.
This was indeed the main reason the rest of the companies failed: they provided a fingerprint sensor that was not very natural to use (it was on the back of the phone), and it had to be used to secure too many actions for which users did not need that level of security, making the technology quite uncomfortable and poorly accepted.
In addition to the location of the sensor and the rejection of this technology when it is not used naturally, there are some environmental limitations that must be considered.
It is known that the quality of fingerprints decreases at lower ambient temperatures [213] and also with wet fingers [166].
Also, in [285] experiments were performed to analyze the impact of temperature and humidity on fingerprint recognition systems, showing that quality decreased when the temperature went below zero due to dryness of the skin. This work also showed that the pressure of the finger on the sensor is a factor in the performance of these systems.
In addition to this, the age of the population is also important for the quality of the fingerprint images. For example, in [365] the authors studied the impact on fingerprint image quality of two different age groups: 18-25, and 62. The results showed that the performance of the system degraded significantly when used by the elderly population.
Furthermore, in recent years it has become widely accepted that one of the reasons for the lack of acceptance of fingerprints among typical users is that fingerprints have traditionally been associated with criminal investigations and police work [259]. It is also known that a small part of the population cannot use them for genetic, aging, environmental or occupational reasons.
Finally, dirt on the sensor or a dirty finger is also an important limitation on the use of fingerprints [260]. Indeed, dirt from fingers can remain on the sensor for a long time if it is not maintained properly. This is one of the main problems when deploying traditional fingerprint systems in many places (for example, cash machines), where sensors are installed but not maintained and cleaned regularly. In the case of fingerprints on mobile phones, it is expected that only the authorized person uses the device, and he or she should be responsible for cleaning the sensor properly. Nevertheless, this is a limitation of contact fingerprint sensors, which can be highly influenced by how people use and maintain them.
2.1.3 Public databases for fingerprint recognition
To the authors' knowledge, there are no public databases for evaluating fingerprints on mobile devices.
However, one of the most accepted processes for evaluating traditional fingerprint recognition systems is the FVC-onGoing initiative, led by the Biometric System Lab of the University of Bologna. They provide three benchmarks with the following characteristics:
• FV-TEST: A simple dataset useful to test an algorithm's compliance with the testing protocol. It is made up of 280 genuine attempts and 45 impostor attempts.
• FV-STD-1.0: Contains fingerprint images acquired in operational conditions using high-quality optical scanners. Results should reflect the expected accuracy in large-scale fingerprint-based applications. This database contains 27720 genuine attempts and 87990 impostor attempts.
• FV-HARD-1.0: Contains a relevant number of difficult cases (noisy images, distorted impressions, etc.) that make fingerprint verification more challenging. Results do not necessarily reflect the expected accuracy in real applications, but allow better discrimination of the performance of various fingerprint recognition algorithms. It is composed of 19320 genuine attempts and 20850 impostor attempts.
Only the data of the first benchmark are released to developers. The organizers of the initiative keep the rest of the data and evaluate the algorithms the developers submit. Accordingly, all results follow the same evaluation protocol and performance measures, and the evaluation is performed by an independent entity rather than the developer. As a consequence, the results are reliable and comparable.
2.1.4 Liveness detection on fingerprints
As seen before, liveness detection is one of the main problems of fingerprint recognition. Fingerprints are left in many places as residues of oil or sweat when fingers tap a touch screen or other surface. From these latent fingerprints it is quite easy to build a fake fingerprint that would fool most biometric systems.
This vulnerability implies a big security problem for these kinds of systems. There have been many works trying to deal with fake fingerprints in order to detect them and deny them access. Indeed, several recent research articles review the state of the art of fingerprint liveness detection [49], [122], gathering together the most important related works.
Many works have demonstrated that fingerprint systems are vulnerable to attacks at the sensor level. Some of these spoof attacks are presented as follows:
In [343] the authors made gummy fingers from gelatine and studied spoof attacks on 11 commercial fingerprint systems that used optical or capacitive sensors. Their experiments proved that all 11 commercial fingerprint systems enrolled the gummy fingers and accepted them in verification with high probability (68-100% for a cooperative user, 67% for a non-cooperative user).
Also, in [275] the authors conducted an experiment against four commercial fingerprint systems that used different sensing mechanisms (optical, capacitive, thermal and tactile), using two kinds of artificial fingerprints: gelatine and silicone-rubber fingers. The results showed that gelatine fingers spoofed all of the mechanisms, although these fingers last only about 24 hours before the gelatine dries out. The silicone-rubber fingers spoofed only the system that used a thermal sensor.
Furthermore, [199] described an easy method to create gummy fingers from fingerprints with silicone. The authors used these fake fingers to test two fingerprint verification systems (minutiae-based and ridge-pattern-based) over images captured by optical and thermal sweeping sensors. The results of their experiment showed that both verification systems were vulnerable to spoof attacks. The same authors studied in [198] the robustness of an ISO minutiae-based system against attacks with the fake fingerprints they had created. The results showed that the tested system accepted fake fingers in 75% of the attempts.
Moreover, at present there are even tutorials on the Internet explaining how to fake fingerprints easily and with very simple materials [16]. This particular tutorial was published by the group that forged the iPhone 5S fingerprint sensor in two days.
23
PCAS Deliverable D3.1
SoA of mobile biometrics, liveness and non-coercion detection
According to the works presented in this section, it is quite easy to generate fake fingers that fool current fingerprint systems. This is why a liveness detection module must be included to prevent fake fingers from accessing the protected systems.
Several works deal with fingerprint liveness detection. In [122] the authors present a taxonomy that classifies fingerprint liveness detection systems according to the technology used. There are mainly two groups of approaches:
• Hardware-based: These methods require special hardware integrated with the fingerprint system to acquire life signs such as fingerprint temperature, pulse, pulse oximetry, blood pressure, electric resistance and odor.
• Software-based: These methods require extra software added to a fingerprint recognition system. These solutions are cheaper than hardware approaches and more flexible for future adaptation. In [49] the authors propose five categories. An explanation of each type of technology and the most relevant works follows.
– Perspiration-based:
These works try to detect the change in the perspiration pattern between two or more fingerprints captured a few seconds apart. This is a time-consuming method, since the user is required to present his finger twice, and it is not efficient for real-time authentication.
For example, in [392] the authors tried to detect the change in the perspiration pattern between two fingerprint images captured 2 seconds apart as a sign of fingerprint vitality. They used a ridge-signal algorithm that maps the two-dimensional fingerprint images into one-dimensional signals representing the grey-level values along the ridges. They used a dataset of fingerprints from 33 live subjects, 30 spoof fingerprints created with dental material and 14 cadaver fingers. Two kinds of measurements were derived from the images and used in classification: static patterns and dynamic changes in the moisture structure of the skin around sweat pores caused by perspiration. The CCR obtained was 90%.
– Skin deformation-based:
These systems use the flexibility properties of the skin to detect whether a fingerprint is fake or not. In general, the elasticity observed when pressing the sensor at different pressures can distinguish spoof from real fingerprints.
For example, in [266] the authors developed a method based on skin elasticity and achieved
an EER of 4.78% with a dataset of 30 real fingerprints and 47 fake fingers of gelatin. A
sequence of fingerprint images was captured to extract two features that represent skin
elasticity without any special finger movement.
Also, in [499] the authors asked users to rotate their finger under some pressure through four angles (0°, 90°, 180°, 270°) to capture a sequence of frames from which relevant skin-distortion features were extracted, showing that real fingers moving on a scanner surface mostly produce a larger distortion than fake fingers. They obtained an EER of 4.5% with their approach on a database of 200 real fingers and 120 fake fingers made of silicone.
– Image quality-based:
These techniques analyze the quality of the fingerprint image, looking for features representative of live fingerprints.
For example, in [379] the authors focused on the uniformity of the grey levels along ridges, since fake fingerprints are quite uniform whereas live ones are not, due to factors such as sweat pores, perspiration and skin condition (dry, wet or dirty). Their method achieved a CCR between 92% and 97%, with a database of 185 real and 150 gummy fingers.
The coarseness of the fingerprint surface can also be exploited, as in [368], where the authors observed that the surface of a spoofed fingerprint is coarser than that of a live fingerprint, because the artificial material consists of large organic molecules that usually agglomerate during processing. The authors claim that this characteristic can be used for liveness detection when a high-resolution sensor is available, but they do not provide performance rates.
– Pore-based:
The pores of the fingerprint can also be used to detect liveness, even though pores can be reproduced in fake fingerprints [161]. In this work, the authors prepared 78 fake fingers with thermoplastic and silicone and another 78 with an acetate sheet covered with latex and glue. The fingermarks were left on glass and photographed. They also used 26 real fingers in their experiments, obtaining a FAR of 21.2% and a FRR of 8.3%.
In addition, in [332] the authors deployed a liveness detection method based on comparing the pore distribution between two images captured 5 seconds apart, based on the idea that the frequency of pores in live fingerprints differs from that in fake ones, due to the fabrication steps necessary to produce a replica. In this case, the database comprised 224 live fingerprints and 193 fake replicas made of silicone. They provide ROC curves of the results, with a FAR (fake accepted as live) of 20% at a FRR (live rejected as fake) of around 8%.
– Combined approaches:
The preceding techniques can also be used together in order to combine their benefits and strengthen the liveness evidence.
For example, in [265] the authors extracted five features from a sequence of fingerprint
images to detect skin elasticity and perspiration pattern. Two static features were used to
detect perspiration pattern by measuring the differences in gray levels along the ridges due
to the presence of perspiration around pores. Three dynamic features were used to measure
skin elasticity and temporal change of the ridge signal due to perspiration. The EER of
their method was 4.49% with a database of 30 real and 47 fake fingers made of gelatin.
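Most software-based categories above ultimately compute one or more scalar liveness features and threshold them. As a toy illustration of the image-quality idea from [379] (grey-level uniformity along ridges), the coefficient of variation of grey levels sampled on ridge pixels can serve as such a feature; the sample grey levels and the comparison below are purely illustrative, not the published algorithm:

```python
def ridge_grey_variation(ridge_grey_levels):
    """Coefficient of variation of grey levels sampled along ridge pixels.
    Live fingers tend to be less uniform (sweat pores, perspiration, skin
    condition); a very uniform ridge profile suggests an artificial finger."""
    n = len(ridge_grey_levels)
    mean = sum(ridge_grey_levels) / n
    var = sum((g - mean) ** 2 for g in ridge_grey_levels) / n
    return (var ** 0.5) / (mean + 1e-9)

live = [112, 180, 95, 150, 88, 170, 101, 143]    # toy: pores/sweat break uniformity
fake = [120, 121, 119, 120, 120, 121, 119, 120]  # toy: near-uniform gummy finger
print(ridge_grey_variation(live) > ridge_grey_variation(fake))  # → True
```

A deployed system would compute such features over many ridge samples and tune the decision threshold on a labelled live/fake training set.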
A summary of the works deploying liveness detection countermeasures with fingerprints is presented
in Table 2.2.
Publication  Sensor                          Subjects                          Technique      Result
[392]        Optical, electro-optical, CMOS  33 live, 14 cadaver, 30 fake      Perspiration   CCR = 90%
[266]        CMOS                            30 real, 47 fake fingers          Skin           EER = 4.78%
[499]        Optical                         200 real, 120 fake fingers        Skin           EER = 4.5%
[379]        Optical                         185 real, 150 gummy               Image quality  CCR = 92-97%
[368]        Optical                         23 real, 10 gelatin, 24 plastic   Image quality  –
[161]        Optical                         26 real, 156 fake                 Pore           FAR = 21.2%, FRR = 8.3%
[332]        Optical                         224 live, 193 fake                Pore           FAR = 20%, FRR = 8%
[265]        CMOS                            30 real, 47 fake                  Combined       EER = 4.49%

Table 2.2: Summary of relevant works in mobile fingerprint liveness detection
In general, the conclusion of all these works is that promising techniques exist, but much more research is needed before a high-performance fingerprint liveness detection system can be deployed.
2.1.5 Conclusion
Traditional fingerprint recognition systems stand out for the performance obtained in independent evaluations. There have been innumerable research works on fingerprint recognition technologies, and they have been used to identify people for many years.
Accordingly, there have been many initiatives trying to bring fingerprint recognition to mobile phones since 2000. Some of them were based on capturing the fingerprint through the camera of the mobile phone, without touching any contact sensor. The results obtained by this approach are promising, although not as good as those of traditional contact sensors.
In order to get better performance, many mobile phone manufacturers have tried to incorporate a contact fingerprint sensor in the hardware. There have been many attempts to deploy mobile phones with fingerprint sensors, but most of them were rejected by users. In general, using the fingerprint sensor was quite uncomfortable (sensors were located at the back of the phone) and it was required for many tasks where users did not need that level of security.
However, in 2013 Apple launched the iPhone 5S, which includes a fingerprint sensor that achieved high success and acceptability. Its main characteristic is that it is positioned as a usability feature rather than a security system, being quite comfortable to use (the sensor is located in a front button). In spite of this success in acceptability, it was hacked only two days after going on sale, by means of a fake fingerprint on a thin film.
Fingerprints present a big vulnerability based on the fact that they are left in many places when touching or holding things, and from those residues a fingerprint can easily be rebuilt and used to forge the system. There are some works trying to detect the liveness of the fingerprint, but they are still not mature enough.
Consequently, using fingerprints in mobile phones presents the following advantages:
• If located appropriately, it is a fast authentication method.
• It provides competitive performance rates when used in controlled situations.
• It is widely accepted that fingerprints can be used to authenticate people.
However, the following disadvantages or limitations of fingerprints in mobile phones have been
found:
• They do not work for a small part of the population, due to age or occupational reasons.
• They present limitations under certain environmental conditions, especially dry and cold ones.
• They suffer from a lack of acceptability, because fingerprints are usually associated with criminal investigations.
• They require a contact sensor to be integrated in the mobile phone. Approaches that use the camera of the phone to capture a fingerprint image do not work properly yet.
• The location of the fingerprint sensor in the mobile phone limits acceptability. It succeeded only when the fingerprint sensor was included in the home button on the front of the device. All other approaches from the most important mobile phone manufacturers since 1998 failed.
• They require maintenance of the sensor.
• They are vulnerable to gummy fingers, which can easily be built from latent fingerprints left on glass or even on the screen of the phone.
2.2 Keystroke dynamics
This section presents the most important works related to keystroke dynamics on mobile devices. The outline of this section is consistent with the rest of the document.
Section 2.2.1 presents the main keystroke dynamics features, together with the different situations in which they can be acquired, and lists and explains the most commonly used classification methods.
Next, in section 2.2.2 a list of public keystroke dynamics databases is presented. Since much less work has been done with mobile devices than with computers, most public databases are computer-related. Data collected on computers can nevertheless serve as a first step towards developing an authentication system for mobile devices.
Relevant work on keystroke dynamics is presented in section 2.2.3. These works concern authenticating a user on mobile devices through a wide variety of techniques. The starting point is the classical studies done on computers, where users are tracked while typing on a computer keyboard. The only features that can be extracted from a computer keyboard are the times between pressing and releasing keys. Different classification techniques have been applied to identify and authenticate users from their keystroke dynamics, and these techniques can be extended to keystroke dynamics on mobile devices.
Finally, the conclusions of this section are summarized in section 2.2.5.
2.2.1 Introduction
The classical definition of keystroke dynamics is the study of whether people can be distinguished by their typing rhythms, usually on a computer keyboard. This technique is also known as typing dynamics, and it is fundamentally based on measuring the press, hold and release times when typing, for example, a Personal Identification Number (PIN) code.
Keystroke dynamics is a behavioural biometric technique and can be implemented in a few different manners, depending on when users are monitored: the typing manner can be measured only at login or continuously, during the whole time the user is using the device.
The classical keystroke recognition method is based on typing a PIN, a password of four to eight digits that users must type at start-up. This has been the most widespread method over the last twenty years. Although it is the most used, it is very insecure, because the mobile device is not protected the whole time the user is using it: the PIN is typed once and then everyone has free access to the contents until the device is switched off. [443] classifies the methods in the following manner:
• Static at login: The user's data is acquired when he/she types his/her PIN just after the mobile phone is switched on.
• Periodic dynamic: Data is acquired more than once after the user switches on the mobile phone, for example when he/she types a PIN code to unlock the screen or places or answers a phone call [457].
• Continuous dynamic: Measurements are taken continuously while the user is typing on a physical keyboard or a touchpad. This can be done by running a background application that detects the movements of the user [84, 125].
• Keyword specific: A numeric PIN is not always necessary to identify users. It is also possible to make users type a word or draw a graphical pattern.
• Application specific: Mobile applications have been developed to identify and authenticate users using images or graphical patterns [56, 112, 375].
In all of these situations, the most commonly measured features are the times between keystroke events:
• Di-Graph: Timing information of two consecutive key presses. It is the main feature in the keystroke dynamics domain and is widely categorized into two types, namely dwell time and flight time.
– Dwell time: The amount of time between pressing and releasing a single key.
– Flight time: The time between pressing two successive keys. It is also called latency time.
• N-Graph: The time between three or more consecutive keystroke events is measured.
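The di-graph timings above can be computed directly from a log of key press/release events. A minimal sketch; the event-log format (key, action, timestamp in milliseconds) and the sample values are illustrative:

```python
def digraph_features(events):
    """Dwell = release - press of one keystroke; flight = next press - previous release."""
    presses = []   # (down_time, up_time) per keystroke, in typing order
    pending = {}   # key currently held down -> its press time
    for key, action, t in events:
        if action == "down":
            pending[key] = t
        else:
            presses.append((pending.pop(key), t))
    dwell = [up - down for down, up in presses]
    flight = [presses[i + 1][0] - presses[i][1] for i in range(len(presses) - 1)]
    return dwell, flight

# A user typing the PIN "173": (key, action, timestamp_ms)
events = [("1", "down", 0), ("1", "up", 95),
          ("7", "down", 180), ("7", "up", 260),
          ("3", "down", 410), ("3", "up", 500)]
print(digraph_features(events))  # → ([95, 80, 90], [85, 150])
```

These dwell and flight vectors are the raw material the classification methods below operate on.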
Many different ways of classifying users from the extracted data have been used over the last thirty years; the most important methods are the following:
• Statistical approach: The first to be used, the easiest, and the one with the lowest overhead. In this technique the common generic statistical measures include mean, median and standard deviation, which are classified using a statistical t-test or k-nearest neighbour.
• Probabilistic modeling: Another variant of the statistical approach, which assumes that each keystroke feature vector follows a Gaussian distribution. Some models used are Bayesian models, Hidden Markov Models (HMM), Gaussian density functions and weighted probability.
• Cluster analysis: This technique gathers pattern vectors with similar characteristics together. Feature data within a homogeneous cluster are very similar to each other but highly dissimilar to those in other clusters.
• Distance measure: The most popular technique. It consists in comparing the pattern of the claimant's login attempt against a reference in the database to determine their similarity or dissimilarity. The most used distances are the Euclidean, Manhattan, Bhattacharyya and Mahalanobis distances, the degree of disorder and the direction similarity measure.
• Machine learning: Very common in the pattern recognition domain, not only in keystroke dynamics. The objective is to classify and make correct decisions based on the data provided. This category includes Neural Networks (NN), which can produce better results than statistical methods. In [466], the authors study the viability of using a backpropagation NN for keystroke dynamics. They used the database created by [282] and concluded that this kind of network can achieve reasonable results. The main disadvantage of this technique is that not only genuine but also intruder keystroke patterns are needed to train the network, which is sometimes not possible.
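Combining the statistical and distance-measure approaches above, a simple verifier builds a template from per-feature means and standard deviations of the enrolment samples and scores a login attempt with a scaled Manhattan distance. A minimal sketch; the feature vectors (e.g. two timing features in milliseconds) and the threshold are illustrative:

```python
import statistics

def enroll(samples):
    """Template: per-feature mean and population std over enrolment vectors."""
    n = len(samples[0])
    means = [statistics.mean(s[i] for s in samples) for i in range(n)]
    stds = [statistics.pstdev(s[i] for s in samples) or 1.0 for i in range(n)]
    return means, stds

def verify(template, attempt, threshold=3.0):
    """Scaled Manhattan distance to the template; accept when below threshold."""
    means, stds = template
    d = sum(abs(a - m) / s for a, m, s in zip(attempt, means, stds))
    return d < threshold

template = enroll([[100, 150], [110, 140], [90, 160]])  # two timing features, ms
print(verify(template, [105, 145]))  # genuine-like rhythm → True
print(verify(template, [200, 40]))   # very different rhythm → False
```

Scaling each feature by its standard deviation keeps naturally noisy features (e.g. long flight times) from dominating the distance.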
Keystroke dynamics has big advantages, such as uniqueness, low implementation and deployment cost, and non-invasiveness; because of this, it has been the subject of a large number of research articles from 1980 to the present.
Of course, it also has some disadvantages: its accuracy is lower than that of other biometric techniques, because keystroke patterns may change due to injury, distraction or fatigue, among other factors. As a consequence, the classification system needs to be retrained periodically to follow the changes in the keystroke patterns.
Keystroke dynamics on computers has been studied in depth, with good results, as seen in the works of [422, 67, 248, 123, 208, 283, 462].
At present people use many more devices than the computer every day, and thanks to the fast evolution of technology most people own a smartphone and use it daily for a variety of tasks. Smartphones are used to access important information such as email, bank accounts, calendars or agendas; consequently, it is very important to protect this information. Most smartphones incorporate a touchscreen, which most people find more comfortable than the classical keyboard, but which is more vulnerable to attack techniques such as shoulder surfing or reflection surfing [416].
As on computers, the keystroke patterns typed on a mobile phone are unique: the way in which users key in their PIN can be used to recognize them. This means that most of the techniques developed for computer keyboards can be extended to mobile phones [276, 182]. The predecessors of smartphones were PDAs, which incorporated a touchscreen that can provide useful information about user behaviour beyond the times between typed keys [431, 249, 293].
There are many differences between computer keyboards and smartphones. Computer keyboards are limited to their keys, so only the times of pressing and releasing keys can be measured. This means that the only information that can be extracted to distinguish one user from another is their typing speed. On the other hand, smartphones incorporate many more sensors that can be taken advantage of to distinguish users. Not only do users type their PIN at different speeds; they also hold their smartphone with different strength and orientation, and have different finger sizes. All these features and many more can be measured through a smartphone and can be very helpful in designing a biometric system.
2.2.2 Public databases for mobile keystroke dynamics
To the best of the authors' knowledge, there are no public databases for mobile keystroke recognition. However, there are several public databases obtained from computers that can be useful for developing a mobile keystroke recognition system.
• BioChaves project: A multimodal database which combines voice recognition with keystroke dynamics. It comprises 10 users who, in two sessions separated by one month, had to utter and type the same four words five times. Regarding keystroke data, the down-down time intervals between two keystrokes were recorded, i.e. the time between two consecutive key presses. This database was presented in [367] and can be found at [179].
• Anomaly-Detection Algorithms: [282] compared the performance of classifiers used in keystroke dynamics, collecting data from 51 users typing 400 passwords over 8 sessions (50 repetitions per session) separated by one day. The database URL is [281].
It is also worth mentioning the work of [504], who developed a verification system based on tapping behaviour on a smartphone. The collected data consists of 80 users typing five different 4-digit and 8-digit PINs at least 25 times each.
2.2.3 Relevant works on mobile keystroke dynamics
The first mobile phones had a physical keyboard to allow users to key in contact numbers and names. The most common feature protecting mobile phones from intruders is the PIN, a usually short number that the phone owner must remember. As mobile phones hold increasingly critical information, more secure techniques are necessary.
The use of a keyboard in mobile phones makes it possible to extend the techniques developed for computers. The authors of [262] studied the feasibility of developing an authentication system based on keystroke dynamics over touchscreens. They recruited ten people to build a database and adopted a Bayesian network classifier; the best result had a FAR of 2% and a FRR of 17.8%. Other kind
of study related to keystroke dynamics on mobile phones is the work of [42], where two-factor authentication is proposed as an enhanced authentication technique. They compared three classifiers and concluded that statistical classifiers reach better results on mobile phones. Also, in [98] the authors authenticate users using keystroke dynamics acquired while typing fixed alphabetic strings on a mobile phone keypad. Additionally, in [496] the authors developed keystroke-based user identification on smartphones with a fuzzy classifier using Particle Swarm Optimization and Genetic Algorithms. They took into account not only the times of pressing and releasing keys, but also the relative positions between keys. Following this technique, a FAR of 2% was reached, falling close to zero after PIN verification.
With the advance of technology, smartphones incorporate better capabilities such as touch screens, which enabled a new way to unlock the screen: lock patterns. Users must draw a pattern with their fingers in order to unlock the screen. Just as people have a unique way of typing on a keyboard, they have a unique way of drawing the lock pattern. [55] studied the lock pattern provided by Android to extract biometric features in order to counter shoulder-surfing and smudge attacks [59]. They used a Random Forest (RF) machine learning classifier and achieved an average EER of approximately 10.39%.
Other graphical lock patterns were studied in [112, 321], where the usability of two different graphical techniques was compared, Touched Multi-Layered Drawing (TMD) and Draw A Secret (DAS), concluding that TMD gives much better results for enhancing user authentication. A gesture-based approach was proposed by [83]; in this work the authors studied whether it is possible to perform user identification on multitouch displays without additional hardware. Touchpoints described by their coordinates were used to extract features such as distances, angles and areas between touchpoints. A hybrid method based on taps and gestures was presented in [56], where a classical PIN is combined with gestures on a numerical keyboard. This technique enhances security and eliminates the need to switch between different techniques. An Anderson-Darling test revealed that the data they collected was not normally distributed, so they used nonparametric tests for the analyses, reaching Average Error Rates (AER) of 1.31%, 5.28% and 3.93% respectively for each technique tested.
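The geometric features used in [83] (distances, angles and areas between touchpoints) can be derived from raw touch coordinates. A minimal sketch for three touchpoints; the helper name and the sample coordinates are illustrative:

```python
import math

def touch_features(p1, p2, p3):
    """Pairwise distances, the angle at p1, and the triangle area."""
    d12 = math.dist(p1, p2)
    d13 = math.dist(p1, p3)
    d23 = math.dist(p2, p3)
    # Angle at p1, via the law of cosines.
    angle = math.acos((d12 ** 2 + d13 ** 2 - d23 ** 2) / (2 * d12 * d13))
    # Area, via the shoelace formula.
    area = abs((p2[0] - p1[0]) * (p3[1] - p1[1])
               - (p3[0] - p1[0]) * (p2[1] - p1[1])) / 2
    return d12, d13, d23, angle, area

# A 3-4-5 right triangle of touchpoints: the angle at p1 is π/2, the area 6.
print(touch_features((0, 0), (3, 0), (0, 4)))
```

Such rotation- and translation-invariant features capture hand geometry rather than where on the screen the gesture was performed.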
At present smartphones incorporate a variety of sensors measuring different magnitudes: acceleration, angles, pressure, proximity, location and orientation, among others. The sensors most used in keystroke dynamics are those capable of detecting the movement and position of users. The main reason is that users hold their smartphones in different manners while typing, so this can be very helpful for authenticating them. These kinds of sensors are called motion sensors; accelerometers, gyroscopes and orientation sensors belong to this group. They can be exploited to improve existing biometric systems and to invent new ones. [317, 457] propose two systems to authenticate users using accelerometers. They obtained 53 different features based on accelerations and angles, which they classified using K-Nearest Neighbour (KNN) as the classification algorithm. This allowed them to reach an EER of 6.85%. The main advantage of these systems is their transparency: users can use their phone normally while the classifier collects training data.
Also related to PIN-based authentication, the authors of [360] developed TapPrints, a framework for inferring the location of taps on mobile device touchscreens using motion sensors combined with machine learning analysis. This system was developed to demonstrate that an attacker can launch a background process on a mobile device and silently monitor the user's input, such as keyboard presses and icon taps. To classify the data they trained a variety of classification algorithms: KNN, multinomial logistic regression, SVM and RF. They showed that identifying tap locations on the screen and inferring English letters could be done with accuracies of 90% and 80%, respectively. They also compared the effectiveness of accelerometers and gyroscopes and concluded that gyroscopes are the sensors that produce the most useful results for authenticating a user while reducing the resources needed.
In [504] the authors combine keystroke dynamics and motion sensors extracting four features:
acceleration, pressure, size and typing time. They collected data from 80 users who had to type PINs of 4 and 8 digits. They trained a classifier based on the distance to the nearest neighbour; the best EER reached was 3.65%. This experiment leaves two issues to be solved, the way in which the phone is held and the body position of the users, because users had to type the PIN holding the phone with two hands and in a fixed body position.
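The nearest-neighbour distance classifier of [504] can be sketched as follows, with hypothetical tap vectors of the four features (acceleration, pressure, size, typing time) normalised to [0, 1]; the threshold is illustrative and would be tuned on validation data in practice:

```python
import math

def nearest_distance(enrolled, attempt):
    """Euclidean distance from the attempt to its nearest enrolled tap vector."""
    return min(math.dist(attempt, v) for v in enrolled)

# Hypothetical normalised (acceleration, pressure, size, typing-time) vectors.
enrolled = [(0.30, 0.55, 0.40, 0.20),
            (0.32, 0.50, 0.42, 0.22),
            (0.28, 0.57, 0.39, 0.18)]
THRESHOLD = 0.15  # illustrative; tuned to balance FAR and FRR in practice

print(nearest_distance(enrolled, (0.31, 0.53, 0.41, 0.21)) < THRESHOLD)  # → True
print(nearest_distance(enrolled, (0.80, 0.10, 0.70, 0.60)) < THRESHOLD)  # → False
```

Because only a minimum distance is computed, such a verifier needs no impostor data at enrolment time.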
A summary of all the relevant works is presented in Table 2.3.
Publication  Sensor       Subjects                Technique                       Result
[262]        Tc           10 users, 10 sessions   BN                              FAR = 2%, FRR = 17.8%
[42]         Keypad       16 users                Euclidean, Mah, MLP             FRR = 2.5%, FAR = 0%
[98]         Keypad       25 users                SN                              EER = 13%
[496]        Keypad       32 users, 50 sessions   PSO and GA                      AER = 2%
[55]         Tc           31-90 users             RFM                             EER = 10.39%
[112]        Tc           48 users                Mann-Whitney U test             TMD better than DAS
[321]        Tc           34 users                DTW                             Accuracy = 96%
[83]         Tc           12 users, 3 sessions    SVM, linear kernel              Accuracy = 94.33%
[56]         Tc           55 users                Friedman test                   AER = 1.31%, 5.28%, 3.93%
[317]        Tc           50 users                WkNN                            EER = 3.5%
[457]        Tc, Ac       10 users                DTW                             –
[360]        Tc, Ac, Gy   –                       kNN, MLR, SVM, RF               –
[504]        Tc, Ac, Gy   80 users                Euclidean normalized distance   EER = 3.65%

Table 2.3: Summary of relevant works in keystroke dynamics
Sensors: Tc = Touchscreen, Ac = Accelerometer, Gy = Gyroscope. Techniques: BN = Bayesian Network, Euc = Euclidean, Mah = Mahalanobis, MLP = Multi-Layer Perceptron, SN = Score Normalization, PSO = Particle Swarm Optimizer, GA = Genetic Algorithms, RFM = Random Forest Machines, DTW = Dynamic Time Warping, SVM = Support Vector Machine.
To design a biometric system usable in real life, a variety of situations must be taken into account. Users can type their PIN while walking, running, driving and so on. In some cases these situations can make it difficult to obtain keystroke data free of noise. To address this, classifiers should be trained on most of the different scenarios in order to gather enough data to recognize the behaviour of users across all their daily activities.
It should also be noted that this biometric technique is not affected by environmental conditions such as light or temperature.
2.2.4 Liveness detection on mobile keystroke dynamics
Since keystroke dynamics is obtained as a result of a user's key typing, it is hard to imagine using it to determine whether keys are being pressed by a human or by a machine. To the best of the authors' knowledge, there are no machines or artificial systems designed to break the lock of a mobile device in this way; therefore, at this moment, liveness detection is not a feature to consider in keystroke dynamics.
2.2.5 Conclusion
Because of the fast evolution of smartphones and mobile devices over recent years, the techniques based on timing features used with computers can be extended to them, and many more benefits become available. The main advantages of using this technique are:
• No extra hardware is needed. The required information can be obtained from the sensors that all mobile phones embed. Acceleration at each moment can be measured by accelerometers. Acceleration
can be very helpful to identify users, and measuring it does not require much effort. The same can be done with gyroscopes and angles, and with many more features that can be crucial to identify a user, such as finger size or pressure while touching the screen.
• A small amount of data is needed to train a recognition system. If a keystroke dynamics system is compared with other techniques, e.g. face recognition, the extracted data matrix is much smaller. This size means a short processing time and, therefore, low Central Processing Unit (CPU) and battery consumption.
• It can be implemented transparently to the user, providing security complementary to PIN-based access.
• Environmental conditions do not affect the verification process.
Although this technique has important advantages compared with other biometric techniques, some points must be taken into account. The following disadvantages can be found:
• Not all keystroke dynamics techniques used on computers can be extended to mobile devices: processing speed and memory are much greater on computers. This is the main limitation and has to be taken into account if a fast and reliable system is required.
• It depends on the user's state (sitting down, walking, standing, etc.), which can affect the performance of the verification process.
• It requires users to remember and frequently use their PIN code.
Considering all the advantages and disadvantages, it is reasonable to think that keystroke recognition techniques can be successfully applied to mobile devices, obtaining performance as good as that of computer keystroke dynamics. As the objective is to identify users under all environmental conditions, an adaptive system, trained across all the different moments at which users type their PIN, is adequate for developing this technique.
2.3
Face recognition
During the last decades, face biometrics has become a very popular recognition and verification technique, due to the fact that face recognition is one of the most remarkable abilities of human and
primate vision. Indeed, over the last 20 years, several different techniques have been proposed for
computer recognition of human faces.
Face recognition systems must be able to identify a person’s face, even when some variations have
been introduced. The most common variations include appearance variations (such as the use of glasses
or make-up, the presence of beard, or differences on the skin tan), morphological variations (mainly
due to user’s age or other changes through the time) and image capturing variations (illumination,
pose, rotation, distance or scale). Image capturing from video stream involves also the face detection
problem.
Nowadays, face recognition at coarse resolution is possible. However, current automatic systems
are still far away from the capability of human perception. Although machine recognition systems
have reached a certain degree of development, their success is still limited by the conditions imposed
by many real applications. In fact, the system global performance is very sensitive to the FAR target.
In this document we offer a general view of face biometrics, and we focus on face identification or
identity verification of individuals with mobile telephones. The main peculiarities of these devices are
the limitations of their memories and processors. Mainly, the number of operations per second they
can afford is smaller than that of the state-of-the-art processors, used to develop the best algorithms
currently available.
2.3.1
Introduction
The face recognition problem has been formulated as recognizing three-dimensional objects from
two-dimensional images. In recently developed face recognition and identification techniques, automatic
systems use bidimensional facial images of the user with arbitrary surroundings. Although 3D images
can also be used, the performance improvement is not worth the higher computational effort.
The captured image offers huge variability, which is why its information must somehow be reduced
before storage. The 2D image space is transformed into a face space, in order to manage lower-dimensional
data in the system. Different techniques provide this size optimization; they are generally
classified into 3 categories ([257]):
1. Holistic approaches: which use global representations of the complete image for face identification.
2. Feature based approaches: which process the input image to measure certain facial features
such as the eyes, mouth, nose, and other characteristic traits, as well as geometric relationships
among them.
3. Hybrid approaches.
The face pattern obtained from every preprocessed face image is then stored in the system user
database.
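The holistic route to this dimensionality reduction can be illustrated with a PCA ("eigenface"-style) projection: training images are flattened, a low-dimensional basis is learned, and only the projection coefficients are stored as the user pattern. The matrix sizes and component count below are arbitrary, chosen only for illustration.

```python
import numpy as np

def learn_face_space(train_images, n_components=8):
    """Learn a low-dimensional face space from flattened training images.

    train_images: (n_samples, height*width) array.
    Returns the mean face and the top principal directions.
    """
    X = np.asarray(train_images, dtype=float)
    mean_face = X.mean(axis=0)
    # SVD of the centred data gives the principal components directly
    # (rows of vt are the orthonormal face-space axes).
    _, _, vt = np.linalg.svd(X - mean_face, full_matrices=False)
    return mean_face, vt[:n_components]

def to_pattern(image, mean_face, components):
    """Project one flattened image into the face space; this small coefficient
    vector is what gets stored in the system user database."""
    return components @ (np.asarray(image, float) - mean_face)
```

Storing only the coefficient vector (here 8 numbers per user) is the "size optimization" the text refers to.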
The general face recognition technique consists of several stages where many design decisions must
be taken:
• Face detection consists of determining whether there are any faces in the image and, if so,
returning their locations and extents.
– Image capturing: The image can be obtained from a static photograph or from a video
stream.
– Image preprocessing: A variety of methods allow the isolation of faces within an image.
• Face recognition consists in linking a face image to an enrolled user of the system.
– Feature extraction can be performed in many ways. The set of relevant features must be
previously defined.
– Learning algorithm decisions condition the way the features are analyzed in order to obtain
user patterns.
– Similarity measures: the suitability of the measures depends on the pattern structure.
– Similarity thresholds can be experimentally defined, according to the real environment of
system usage.
A deeper description of the general technique can be found in [250].
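The stages listed above can be tied together in a minimal verification skeleton; the detection and feature-extraction steps are left as placeholder callables, since any of the methods surveyed in this chapter could fill them. The normalised-correlation similarity and the threshold handling are illustrative choices, not a prescribed design.

```python
import numpy as np

def verify_user(image, enrolled_pattern, detect, extract, threshold):
    """Skeleton of the stages above; detect/extract are placeholders for
    whichever detection and feature-extraction methods the system adopts."""
    face = detect(image)                 # face detection stage
    if face is None:
        return False                     # no face found in the image
    probe = extract(face)                # feature extraction stage
    # Similarity measure: normalised correlation between pattern vectors.
    a = np.asarray(probe, float)
    b = np.asarray(enrolled_pattern, float)
    similarity = a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    # The threshold would be defined experimentally for the target FAR/FRR
    # trade-off in the real environment of system usage.
    return bool(similarity >= threshold)
```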
Face recognition and identification systems require medium cooperation, as the user’s face must
be placed directly in front of a camera while the photograph or video is being taken. In some situations
of false rejection, higher user collaboration could be required. Nevertheless, the technique is highly
accepted, as it is not invasive. Furthermore, compared to other biometric techniques, face recognition
is a cheap technique.
In addition, face recognition presents some interesting constraints (like bilateral symmetry) that
we can take advantage of in the restoration of facial features. Another set of constraints derives from
the fact that almost every face has a very similar layout of its features.
In the last years, many works about general face recognition techniques have been published:
Ahmad et al. [47] propose an assessment of classical techniques over the five "Face Recognition
Data" datasets provided by the computer vision research group of the University of Essex, namely:
Face 94, Face 95, Face 96, Grimace and PICS. These are very rich databases, in terms of subjects,
poses, emotions, races and lighting conditions.
Zhao et al. offer in [502] a review of many systematic empirical evaluations of face recognition
techniques, including the FERET [397], FRVT 2000 [81], FRVT 2002 [398], and XM2VTS [356]
protocols, as well as a list of many commercial available systems, shown in Table 2.4. The AppLock
application recently developed by Visidon [474] offers face recognition on Android mobile phones.
Commercial system          Description
Viisage Technology         [369]
FaceKey Corp.              [165]
Cognitec Systems           [121]
ImageWare Software         [251]
BioID sensor fusion        [78]
Biometric Systems, Inc.    [79]
SpotIt for face composite  [253]

Table 2.4: Commercial face detection available systems.
Jafri and Arabnia [257] divide face recognition techniques into three categories according to the
face data acquisition method: methods that operate on image intensity; methods dealing with video
sequences; and methods that require other sensory data, such as 3D information or infra-red imagery.
More recently, Chauhan et al. [108] provided an exhaustive summary of all the general face
recognition techniques developed.
Maurya and Sharma analyze current image classification techniques in [345].
The related task of face detection has direct relevance to face recognition because images must
be analyzed and faces identified, before they can be recognized. Given an image, the goal of face
detection is to determine if there are any faces in it. The main difficulty of face detection is due to
variations in scale, location, orientation, pose, expression, lighting conditions and/or occlusions.
Zhang and Zhang [497] establish a survey of recent advances on face detection. The different
approaches are grouped into four categories:
• Knowledge-based methods, which use predefined rules, based on human knowledge.
• Feature-invariant approaches, which search for face structure features robust to variations.
• Template matching methods that use pre-stored face templates to judge if there is a face in
an image.
• Appearance-based methods that learn face models from a set of training face images.
According to Jafri and Arabnia [257], there are numerous application areas in which face recognition can be exploited for verification and identification:
• Closed Circuit Television (CCTV) monitoring and surveillance (to look for known criminals and
notify authorities), as the system does not require human cooperation [47].
• Image database investigations (searching image databases of licensed drivers, missing children,
immigrants and police bookings).
• Video indexing (labeling faces in video).
• Witness face reconstruction.
• Gender classification.
• Expression recognition (intensive care monitoring in the field of medicine).
• Facial feature recognition and tracking (tracking a vehicle driver’s eyes and monitoring his fatigue
or detecting stress).
Face recognition is also being used in conjunction with other biometrics, such as speech, iris,
fingerprint, ear and gait recognition, in order to enhance the recognition performance of these methods
[120], as seen in section 2.3.4.
2.3.2
Public databases for mobile face recognition
During the assessment stage of new techniques, in order to compare the performance of several
methods, it is advisable to use a standard testing data set. There are many databases currently
in use, and each one has been developed under a different set of requirements. Therefore, according
to [220], it is important to decide which capability we want to test in the system before choosing the
appropriate database to assess the technique.
In Table 2.5 we offer a brief compilation of the most referenced face datasets (footnotes 3 and 4).
A complete list can also be found in [214].
Name                   Year  Images  Subjects  Environment  Dataset features  Reference  Website
MOBIO                  2010  193620  152       U            Mobile            [331]      [329]
SecurePhone PDA        2006  12960   60        C            Video, Mobile     [370]      [26]
FERET                  1996  14051   1199      S            L, P, T           [399]      [400]
AR                     1988  3288    116       C            L, O              [334]      [335]
CAS-PEAL               2003  30900   2747      C            P, A, L           [201]      [164]
Face Recognition Data  1996  7900    395       U            Ethnic            [451]      [451]
SCFace                 1996  41260   130       U            Video             [215]      [216]
M2VTS                  1998  N/A     295       C            T, P, Multimodal  [356]      [103]
Yale B                 2001  5850    10        C            P, L              [205]      [124]
CMU PIE                2000  41368   68        C            P, L              [221]      [219]
FIA                    2004  12960   200       C/U          P                 [209]      [350]
MIT-CBCL               1999  31022   10        S            NF                [246]      [102]

Table 2.5: Most popular face datasets
More detailed information about these databases is also available in the related description paper
and in the download web page.
The most appropriate database for our purpose (training and testing the Personalised Centralized
Authentication System (PCAS) device) would be the MOBIO dataset because of its features:
• It is composed of video frames (with audio included).
• The database was captured with a NOKIA N93i mobile and a standard 2008 MacBook laptop.
3 Environments: U=Uncontrolled, S=Semi-controlled, C=Controlled
4 Dataset features: L=Light, P=Pose, T=Time, A=Accessories, O=Occlusion, NF=Non-faces
• The samples have been registered at six different sites, from five different countries, and include
native and non-native English speakers.
• There are 12 samples registered from each individual.
The SecurePhone PDA dataset is also very interesting, as it offers phone-registered multimodal
patterns.
Ivanov presents in [255] an application which enables the development of image databases that
could be used for training and testing mobile face recognition systems.
2.3.3
Relevant works on mobile face recognition
With the improvement of mobile device capabilities, the security of the data stored on them has become
very important. In this context, face recognition schemes spare users from remembering PIN codes or
passwords, providing higher and more flexible security than former systems (biometric security is based
on something the user is, instead of something the user has or knows). As stated in [491], most current
face recognition systems work well under constrained conditions. However, their performance degrades
rapidly under non-regulated conditions.
Face identification on mobile phones is an emerging research topic. In recent years, some
commercial systems have been developed [474]. However, adapting desktop applications (Section 2.3.1)
to mobile devices is not a trivial task.
Robust face recognition involves a considerable amount of computation due to the image processing.
If additional image preprocessing is necessary, it can also slow down the system. These requirements
make it difficult to implement a robust and real-time mobile phone based face recognition system.
Most mobile devices’ CPUs run at less than 1.5 GHz and do not have a Floating Point Unit (FPU).
Floating point operations are emulated by the CPU, which can reduce the overall running speed.
Finally, mobile phone memory resources are also limited, so developing algorithms that consume too
much memory (for data storage) is not recommended.
Therefore, very few of the reported general face verification algorithms are suitable for real-time
deployment on mobile devices. Nevertheless, many relevant works on mobile face identification
techniques have been published recently.
In most of the works, the image training and testing sets come from a standard database. The
related description paper (Section 2.3.2) offers details about these datasets’ construction. A High
Resolution (HR) camera is often used to capture the photographs, but there are some works, such as [226],
in which infrared cameras are used. Socolinsky and Selinger [449] analyse face recognition performance
using visible and thermal infrared photographs. Other proposals use also the mobile phone camera
to build the database. Some works exist, such as [107, 88, 43], in which 3D models are constructed
by using structured light sensors, passive stereo sensors and range scanners. Finally, [312] proposes
player identification using the Kinect device. These techniques, however, are not very suitable for
mobile devices. Other approaches that make use of additional sensing devices, such as thermal imaging
sensors or cameras with high-spectral sensitivity, or other biometric features, like vein patterns, are
considered out of the scope of this work, as these technologies are not available on mobile devices.
Attending to the face detection approach, we can find two different types of techniques:
• Skin color segmentation techniques.
• Simple feature extraction techniques.
Most of the recent works are based on the Viola-Jones algorithm ([473]), which has become the
main reference in face detection techniques since 2001. This method for visual object detection uses
simple feature extraction and is capable of achieving high Detection Rate (DR) levels
with extremely fast image processing. The detector runs at 15 video frames per second, which
makes it very suitable for real-time applications.
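The speed of the Viola-Jones approach rests on the integral image: once it is computed, the sum over any rectangular region (and hence any Haar-like feature, which is a difference of such sums) costs at most four array lookups, regardless of the rectangle's size. A minimal NumPy sketch of this idea:

```python
import numpy as np

def integral_image(img):
    """Cumulative sum over both axes: ii[y, x] = sum of img[:y+1, :x+1]."""
    return img.cumsum(axis=0).cumsum(axis=1)

def box_sum(ii, top, left, bottom, right):
    """Sum of img[top:bottom+1, left:right+1] from at most four lookups."""
    total = ii[bottom, right]
    if top > 0:
        total -= ii[top - 1, right]
    if left > 0:
        total -= ii[bottom, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return total

# A Haar-like feature is then just the difference of two box sums,
# e.g. box_sum(ii, 0, 0, 3, 1) - box_sum(ii, 0, 2, 3, 3).
```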
Hadid et al. [225] present a mobile environment approach, based on the Viola-Jones detector,
which uses Local Binary Patterns (LBP) with Histogram Intersection (HI) dissimilarity measure in
the authentication phase. With a speed of 2 frames per second, this configuration detects 129 faces
and 2 false positives in 150 test images containing 163 faces. They also propose an alternative using a
Skin Color Based (SCB) face detector. After obtaining 117 correct detections with 12 false positives
(processing 8 frames per second), the authors conclude that a static skin color model based approach
to face detection in mobile phones is interesting in terms of speed but may not be very satisfactory in
terms of detection rates.
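The LBP descriptor used in [225] labels each pixel by thresholding its neighbours against the centre value and then describes a face region by the histogram of those labels; two faces are compared with the histogram intersection measure. The sketch below uses the basic 3x3 neighbourhood without the circular interpolation of the full operator:

```python
import numpy as np

def lbp_histogram(img):
    """Basic 3x3 LBP codes for interior pixels, as a normalised 256-bin histogram."""
    c = img[1:-1, 1:-1]
    code = np.zeros_like(c, dtype=np.uint8)
    # 8 neighbours, each contributing one bit when >= the centre pixel.
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(shifts):
        n = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (n >= c).astype(np.uint8) << bit
    hist = np.bincount(code.ravel(), minlength=256).astype(float)
    return hist / hist.sum()          # normalise so histograms are comparable

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; identical histograms score 1."""
    return np.minimum(h1, h2).sum()
```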
In [419], Ren et al. present some software optimizations to implement real-time Viola-Jones face
detection on mobile platforms using only the device processor (due to the computational complexity
of these algorithms, a hardware coprocessor is often used for their real-time operation). These steps
include data reduction (image spatial subsampling, subimage shifting, size escalation and minimum
face size definition), search reduction (use of key frames and narrowed detection areas) and numerical
reduction (fixed point processing). The resulting accuracy is around 99%, the data reduction reduces
also the processing time by around 90%, and the use of fixed-point arithmetic generates about 3 to 5
times speedup.
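The fixed-point processing mentioned above replaces floating-point values with scaled integers, so multiplications and additions run entirely on the integer unit of an FPU-less CPU. The Q8.8 format below (8 fractional bits) is an illustrative choice; [419] does not prescribe a particular format here.

```python
FRAC_BITS = 8                 # Q8.8 format: value = integer / 2**FRAC_BITS
ONE = 1 << FRAC_BITS

def to_fixed(x):
    """Encode a real number as a scaled integer."""
    return int(round(x * ONE))

def to_float(q):
    """Decode a scaled integer back to a real number."""
    return q / ONE

def fixed_mul(a, b):
    # The product of two Q8.8 numbers carries 16 fractional bits;
    # shift right to get back to 8.
    return (a * b) >> FRAC_BITS

# 1.5 * 2.25 computed entirely with integer operations:
product = fixed_mul(to_fixed(1.5), to_fixed(2.25))
```

The trade-off is precision: values are quantised to steps of 1/256, which is why such optimizations must be validated against the floating-point reference for accuracy loss.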
Another proposal is Pachalakis and Bober’s face detection and tracking system in the context
of a mobile videoconferencing application [393]. This algorithm is based on subsampling, skin color
filtering and the detection/tracking of user faces. It reaches high speed performance (over 400 frames per
second at 33 MHz) with limited computational complexity (and less than 700 bytes of memory),
while offering robustness to illumination variations and geometric changes. These advantages facilitate
a real-time implementation on small microprocessors or custom hardware.
A summary of face detection algorithms is shown in Table 2.6. The Rowley-Baluja-Kanade
algorithm (the most referenced method prior to Viola-Jones) is also shown, in order to establish a more
complete comparison.
Publication  Technique                                          Reported results                                         Processor
[429]        Artificial Neural Network (ANN)                    DR=90%, False Positive Rate (FPR)=27%, Speed=0.003 fr/s  200 MHz R4400 SGI Indigo 2
[473]        Haar features + Integral image + Cascade boosting  DR=91%, FPR=10%, Speed=15 fr/s                           700 MHz Pentium III
[225]        LBP + HI                                           DR=79%, FPR=1.5%, Speed=2 fr/s                           Nokia N90
[225]        Skin Color Based (SCB) detector                    DR=72%, FPR=9.3%, Speed=8 fr/s                           Nokia N90
[419]        Data/Search/Numerical reduction                    Speed=15 fr/s                                            TI OMAP Mobile Plat.
[393]        Subsampling + Skin filtering                       DR=99%, Speed=400 fr/s                                   ALTERA EP20K1000EBC652-1

Table 2.6: Face detection algorithms
Ng et al. [376] introduce a new verification system for mobile phones, which includes noise- and
distortion-tolerant Unconstrained Minimum Average Correlation Energy (UMACE) filters, as well
as a fixed-point 2D Fast Fourier Transform (FFT) in the recognition phase. The UMACE filters
improve the performance of learning algorithms under illumination variations, while the fixed-point
arithmetic reduces the computation time by 50% relative to floating-point arithmetic in mobile
face recognition scenarios. These evaluation results are obtained from a private dataset composed of
24 users, from which 15 training images and 15 test images have been captured with a cell phone.
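Correlation-filter approaches such as UMACE score a probe image by cross-correlating it with a stored filter in the frequency domain and looking for a sharp correlation peak. The filter design itself is beyond this sketch; the fragment below only shows the FFT-based correlation step (the part that [376] implements in fixed point), together with a crude peak-sharpness score of our own choosing:

```python
import numpy as np

def correlation_plane(image, filt):
    """Cross-correlate an image with a frequency-domain filter via the 2D FFT.

    Correlation in the spatial domain is the conjugate product in the
    frequency domain, which is what makes the FFT route cheap.
    """
    F = np.fft.fft2(image)
    return np.real(np.fft.ifft2(F * np.conj(filt)))

def peak_sharpness(plane):
    """Peak-to-energy ratio: a sharp, dominant peak suggests a match."""
    return plane.max() / (np.sqrt(np.mean(plane ** 2)) + 1e-12)
```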
A similar idea is presented in [226], where Han et al. propose a new multimodal method which
improves face detection by Near-Infra-Red (NIR) lighting (to detect corneal specular reflections) and
an integer-based Principal Component Analysis (PCA) method for face recognition that excludes
floating-point operations. The use of NIR lighting reduces the PCA lighting sensitivity. Although the
recognition accuracy does not increase, the performance is more than three times better, as the processing
time falls from 255.66 ms using floating-point to 79.55 ms using integer arithmetic.
Tao and Veldhuis [461] propose an authentication method that uses subspace metrics and a Parzen
classifier combined with a Viola-Jones-based face detector. The Viola-Jones detector is trained only
once and in offline mode and the user’s sample set is obtained by extensive exposition of the user to
the sensor. The resulting EER is 1.2% over the BioID database.
In [169], Faundez-Zanuy et al. address the processor limitation problem with a new approach,
based on the use of a transformed domain. The Walsh-Hadamard Transform (WHT) can be easily
implemented on a fixed-point processor and achieves a good trade-off between storage demands,
execution time and performance. On the one hand, the WHT face detector uses fewer coefficients than
the statistical methods based on the Karhunen-Loève Transform (KLT). This fact allows a decrease
of the Detection Cost Function (DCF). In addition, the transformation is not data dependent. On
the other hand, the nearest neighbour classifier (using the mean absolute difference) turns out to be a
well-performing, low-complexity face recognition system, as revealed by evaluation tests over the
FERET and ORL databases.
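The appeal of the WHT in this setting is that its basis entries are all ±1, so the transform needs only additions and subtractions and maps naturally onto a fixed-point processor. A minimal sketch, building the Hadamard matrix by the standard recursive construction:

```python
import numpy as np

def hadamard(n):
    """Hadamard matrix of order n (n must be a power of two); entries are +-1."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def wht2d(block):
    """2D Walsh-Hadamard transform of a square block: only adds and subtracts."""
    H = hadamard(block.shape[0])
    return H @ block @ H

# Keeping just a few low-order coefficients of wht2d(face_block) yields a
# compact feature vector in place of KLT coefficients, as in this approach.
```

Since H·H = n·I, applying the transform twice recovers the block up to a known scale, so the inverse is equally cheap.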
Jung et al. [274] propose another real-time face detection system consisting of a boosting algorithm
for detecting faces and a Symmetry Object Filter and Gradient Descent algorithm to locate eyes in a
face image. An image reduction method is adopted by using a pre-calculated look-up table. Finally,
the associated verification process consists of geometric and illumination normalization, face testing
by using the relative brightness between the face parts, Energy Probability and Linear Discriminant
Analysis (LDA) methods to extract the features from a Discrete Cosine Transform (DCT) transformed
image and a nearest neighbour classifier. The reported recognition rates obtained with the ORL and
ETRI datasets are over 96%, and the processing time, running 2 or 3 frames per second, is between
243-412 ms.
The system proposed by Rahman et al. [417] is based on the idea that, in any color space, the
human skin color (of different ethnicities) can easily be represented by a Gaussian Mixture Model
(GMM) with the help of look-up tables. After this step, a shape processing scheme applying probability
scoring can be used to determine any face location in the image. The improvement method introduced
to achieve real time implementation is very similar to that of [274]. The new algorithm’s performance
reaches an overall detection rate of 88.5% (whereas the Viola-Jones algorithm reaches only 59.3%)
and requires an average time of 52.9 ms to process a frame (lower than the 90.1 ms needed by
Viola-Jones).
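The core of such a skin-colour approach is a per-pixel skin likelihood. The sketch below simplifies the GMM to a single Gaussian in a Cb/Cr chrominance space, with made-up mean and covariance values chosen purely for illustration; a real system would fit the mixture to labelled skin pixels and bake the resulting scores into a look-up table.

```python
import numpy as np

# Illustrative (not fitted) skin model in Cb/Cr chrominance coordinates.
SKIN_MEAN = np.array([115.0, 150.0])
SKIN_COV = np.array([[60.0, 10.0], [10.0, 80.0]])
_COV_INV = np.linalg.inv(SKIN_COV)

def skin_likelihood(cbcr):
    """Unnormalised Gaussian likelihood that one Cb/Cr pixel is skin."""
    d = np.asarray(cbcr, float) - SKIN_MEAN
    return float(np.exp(-0.5 * d @ _COV_INV @ d))

def skin_mask(cbcr_image, threshold=0.5):
    """Boolean skin mask for an (H, W, 2) chrominance image; in practice
    these scores come from a precomputed 256x256 look-up table."""
    flat = cbcr_image.reshape(-1, 2)
    scores = np.array([skin_likelihood(p) for p in flat])
    return (scores > threshold).reshape(cbcr_image.shape[:2])
```

Connected regions of the mask are then the candidate face locations that the shape-processing stage scores.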
A way to boost the face authentication system’s performance based on the use of multiple samples
obtained from a video stream is proposed by Poh et al. in [409].
In [134], Dave et al. present an analysis of the most popular face detection and recognition
techniques, implemented on the Droid phone. Face detection is carried out by a combination of color
segmentation, morphological processing and template matching. This whole process is performed under
three basic assumptions (correct illumination conditions, user facing the camera and closely photographed user) which simplify the algorithms. Regional labeling algorithms are applied in cases of
bad illumination conditions or dark skin colors. Eigenfaces and Fisherfaces schemes are employed in
face identification. For the implementation of the Fisherfaces scheme in the Motorola Droid phone,
Android Application Programming Interface (API)’s face detector was used instead of the face detection algorithm. Both the KLT and the Fisher LDA matrices for the training dataset were computed
with MatLab and then stored in the Droid device. Finally, to reduce the overall computation time,
high resolution camera pictures were downsampled by a factor of 8 and the text files containing the
KLT and LDA matrices were transformed into DataInputStreams. The training set was made up of
45 images, containing 9 classes and 5 images per class. With a simple user interface, the algorithm
can detect and recognize a user in no more than 1.6 s. With the eigenface scheme, the system is able to
achieve a total correct rate of 84.3% (with an EER of 35%), whereas with the fisherface scheme, the correct rate
goes up to 94.0% (and an EER of 25%). The fisherface scheme worked better for recognizing faces
under varying lighting conditions, as expected by the authors.
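The factor-of-8 downsampling used above to cut computation time can be done by simple block averaging; the sketch assumes image dimensions divisible by the factor.

```python
import numpy as np

def downsample(img, factor=8):
    """Block-average downsampling; height and width must be divisible by factor."""
    h, w = img.shape
    # Reshape into (rows, factor, cols, factor) blocks, then average each block.
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
```

Averaging (rather than plain striding) also suppresses sensor noise before the KLT/LDA projection.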
Finally, Reng et al. [418] focus on face alignment accuracy to present an improved face and eye
detector. This system proposes an attentional cascade structure, inspired by the Viola-Jones face detector,
to speed-up the detection phase. The resulting method is known as Cascade Asymmetric Principal
Component Discriminant Analysis (C-APCDA). The eye location makes the face alignment easier.
To build the subspace-based face verification system, a new approach to calculate a Class-Specific
Threshold (CST) is also proposed. Subspace approaches have the disadvantage of the speed limitation due
to large matrix multiplications. On the other hand, these approaches need less memory and perform
better on low-resolution images. The CST turns out to provide better performance than a global
threshold. The system performance is assessed over the O2FN database (created for this purpose),
and then it is compared with other systems over the AR and CAS-PEAL databases. Using the
C-APCDA detection algorithm with class-specific threshold and the Eigenfeature Regularization and
Extraction (ERE) face recognition approach, the system achieves a 96.98% face DR, a 98.73% eye
detection rate and an EER of 1.90% over the AR dataset. Using the same database, the Adaboost
detector gets a 91.27% face DR, a 95.71% eye detection rate and an EER of 3.37%.
A brief summary of all relevant works is shown in Table 2.7.
Publication  Sensor             Database              Technique                 Reported results
[376]        Cell phone camera  Private (720 images)  UMACE filters + FFT       EER=8.49%
[226]        NIR camera         Private               Integer-based PCA         EER=14.79%, Processing time=79.55 ms
[461]        Camera             BioID                 Parzen classifier         EER=1.2%
[169]        Camera             FERET, ORL            WHT + ANN                 DCF=5.45
[274]        Camera             ORL, ETRI             LDA + DCT + ANN           DR=96%, Time=243-412 ms
[417]        Camera             Bayer                 GMM                       DR=88.5%, Time=52.9 ms
[134]        HR camera          Private (45 images)   Regional labelling + PCA  DR=84.3%, EER=35%, Time=1.6 s
[134]        HR camera          Private (45 images)   Regional labelling + LDA  DR=94%, EER=25%, Time=1.6 s
[418]        Camera             O2FN, AR              C-APCDA + CST + ERE       DR=96.98%, EER=1.9%

Table 2.7: Summary of relevant works about face recognition
As we have seen in this section, face recognition systems’ performance is highly conditioned by
environmental conditions. Jafri and Arabnia point out in [257] many general difficulties that can arise
with the use of these techniques in mobile devices.
For instance, frontal face images form a very dense cluster in image space, which makes it hard for
traditional pattern recognition techniques to accurately discriminate among them with a high degree
of success. In this sense, slight variations in the so-called extrinsic factors (like illumination, pose,
rotation, distance, scale, expressions and occlusions) can alter the appearance of the face and reduce
the location procedure efficiency.
In addition, the face appearance may also vary due to intrinsic factors, caused by the physical
nature of the face, which are independent of the observer and can be intrapersonal factors (age, facial
hair, glasses, cosmetics, etc.) or interpersonal factors (ethnicity and gender).
2.3.4
Multimodal identification using face recognition
The efficiency of face biometrics techniques is very dependent on environmental conditions, such
as illumination. Although face recognition has shown an acceptable identification and verification
performance, the combined use of two or more biometric techniques (biometrics fusion or multimodal
biometrics) can enhance their individual efficiency. In this sense, we can find some relevant works
in which face biometrics is used together with other techniques. These works are presented in
section 2.9.4.
2.3.5
Liveness detection on mobile face recognition
Face recognition on mobile phones has turned out to be a reliable user authentication method. However, it is susceptible to security attacks that could compromise the system robustness.
Kollreider et al. [288] consider three kinds of face spoofing attacks:
• The presentation of a user’s face photograph to the identification system.
• The use of a photographic mask by an intruder.
• The presentation of a user’s face video.
and propose certain security countermeasures as well:
• Analyzing eyes-blinking.
• Tracking mouth movements.
• Studying head 3D features.
It is also possible to fool a face recognition system using 3D face models, as explained in [287].
Besides the use of multimodal techniques, many proposed spoofing detection systems are based on
detecting face motion on video streams:
The analysis of the images’ Fourier spectrum to detect the difference between live and fake images
was firstly proposed by Li et al. [316].
Fronthaler et al. propose in [186] a system that uses real-time face tracking and the localization
of facial landmarks as liveness assurance technique within a fingerprint authentication process. The
face-tracking system calculates face features with a retinotopic grid. In order to achieve real-time
performance, only 69 features are computed and modelled with support vectors. The face alignment
is controlled by machine-experts. A Gabor feature vector is computed at each point of the grid, and
the tracking system models several facial regions. Different frequency channels are used for locating
facial landmarks. In order to assess the system, 10 different people’s faces were tracked. From each
person, 30 frames were automatically acquired. The eyes area was properly tracked on 96.6% of the
frames, and the facial landmarks were correctly localized on 97%.
Kollreider’s system [288] consists of evaluating face 3D features while checking for at least one
eye-blink or mouth movement. The tracking algorithm exploits motion to refine the effective neighbourhood by a differential approach, which also provides motion estimates to the liveness detection
system. A motion vector is calculated by using consecutive video frames and it is used for computing
a rasterflow vector, which encodes the spatiality of the face image. A live face presents peaks on its
center, while a photograph doesn’t. An eyeflow vector is also computed and used for calculating a
liveness score, which is expected to be positive in case of a live face, and negative or zero otherwise.
The system evaluation was carried out over the Zhejiang University (ZJU) Eyeblink database ([390])
which consists of 80 face videos registered from 20 different individuals in 4 sessions. The average
false positive rate was 0.04±0.12%.
The Kim et al. system [287] segments each input video frame into foreground and background
regions. The foreground image includes the user’s face. The motion amount between foreground and
background regions is then compared. In a live face video, foreground motion is supposed to be higher
than background motion, whereas in a fake one the so-called Background Motion Index is supposed
to be high. During the evaluation stage of the system, 373 fake videos and 37 live videos were recorded.
Only 4.1% of fake video streams on a 7 inch LCD and 0.73% of high resolution photographs were
labelled as live user’s samples.
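The foreground/background comparison can be sketched with plain frame differencing: motion energy is accumulated separately inside and outside the face region, and a large background share of the total motion flags a likely replayed video or waved photograph. The index below is a plausible simplification of this idea, not the exact formulation of [287].

```python
import numpy as np

def background_motion_index(frames, fg_mask):
    """Share of inter-frame motion energy occurring outside the face region.

    frames: sequence of grayscale frames (2D arrays); fg_mask: boolean face mask.
    Near 0 for a live face (motion concentrated in the face region), near 1 when
    the whole scene moves together, as when a screen or photograph is waved at
    the camera.
    """
    fg_motion = bg_motion = 0.0
    for prev, cur in zip(frames, frames[1:]):
        diff = np.abs(cur.astype(float) - prev.astype(float))
        fg_motion += diff[fg_mask].sum()
        bg_motion += diff[~fg_mask].sum()
    total = fg_motion + bg_motion
    return bg_motion / total if total else 0.0
```

A decision threshold on this index would be tuned on recorded live and attack sessions, as in the evaluation described above.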
There are also some works in which liveness detection uses only a single image:
Bai et al. [62] establish that the difference between recaptured images and the original ones can
be determined from the spatial distribution of their Bidirectional Reflectance Distribution Function. The
Specular Component provides information about microtextures on the surface that generates an image.
The texture differences between a live face and a fake one can be detected on the specular component
gradient histogram. This liveness detection method was evaluated over a dataset of 65 natural and
65 recaptured images. The best reported result is a 2.2% FAR and a 13% FRR, with a 6.7% EER.
The interest in defeating identity spoofing on face recognition systems motivated the international
Competitions on Counter Measures to 2-D Facial Spoofing Attacks in 2011 [104] and 2013 [113].
The aim of these competitions is to evaluate different techniques over the same datasets (Print-Attack
database [328] and CASIA-FASD database [500]) and with the same protocols.
Recently, Komulainen et al. [289] have introduced dynamic texture into spoofing detection
systems through the use of Spatiotemporal Local Binary Patterns. This idea allows the structure and
the dynamics of microtextures on facial regions to be analysed simultaneously, and provides better
results than those reported in the first Competition on Counter Measures to 2-D Facial Spoofing
Attacks.
A summary of the considered techniques is shown in Table 2.8.
Publication  Technique                    Reported results
[186]        Face Tracking + Landmarks    DR=97%
[288]        3D features + Blinking       FPR=0.04±0.12%
[287]        Video frame segmentation     FPR=4.1% (Video), FPR=0.73% (HR Photo)
[62]         1 image texture differences  FAR=2.2%, FRR=13%, EER=6.7%
[289]        1 image dynamic textures     FAR=0%, FRR=0%

Table 2.8: Live detection algorithms based on face analysis.
As we pointed out previously, more precise anti-spoofing techniques involving advanced sensors are beyond the scope of this work.
2.3.6 Conclusion
Face biometrics is turning out to be a reliable personal identification and verification method. The
most outstanding properties of this technique are:
• High performance rates.
• Non-intrusive technique.
• Existence of many successful implementations of face recognition basic techniques over mobile
devices, as summarized in section 2.3.3.
• These techniques do not require a high investment in hardware, as they use cheap sensors. In fact, most current mobile phones include cameras with enough resolution.
• Incorporating face biometrics to day-to-day cell-phone features would provide them with new
capabilities, such as verification for access control, identification for on-line transactions or facial
feature tracking.
• System evaluation and performance measurement is fairly standardised, since programs such as FERET [399, 397] provide system and algorithm testing protocols.
• Many face identification systems could easily be enriched by combining them with other biometric techniques, in order to improve the individual techniques’ performance. These multimodal systems could also be integrated into a mobile phone, as seen in section 2.3.4.
• Face techniques offer great ways (discussed in section 2.3.5) to verify user’s liveness.
On the other hand, some disadvantages of using this technique on a mobile phone are:
• The performance of face recognition is very sensitive to non-regulated environmental conditions.
• Image processing is usually a time- and memory-consuming operation. Algorithm optimization is often required to adapt the methods to devices with limited resources.
• Face appearance is a highly variable feature across time, due to intrinsic and extrinsic factors, as discussed in section 2.3.3.
In conclusion, face biometrics is a trustworthy and non-intrusive identification technique that can
be adapted for use in mobile phones, either on its own or as part of a multimodal system. The system’s
false acceptance rate can be improved by integrating a liveness detection algorithm, achieving high
performance levels.
The performance of these systems depends on their application, as well as on the device’s characteristics. Although many general face recognition algorithms have been created, it is important to pay attention to hardware limitations when designing an image-processing-based recognition system for a mobile phone.
2.4 Signature recognition
This section presents the challenges and the most important works related to signature recognition on mobile handheld devices. Firstly, section 2.4.1 introduces the signature recognition technique, together with the problems and challenges of adapting this technology to a mobile environment.
Next, in section 2.4.2, the most recent and relevant works on mobile signature recognition are presented. This includes three different approaches: making signatures on the mobile screen, using special pens with specific hardware to make signatures on a surface or in the air, and making signatures while holding the mobile phone in the hand.
The public databases that are used in these works are summarized in section 2.4.3, including their
characteristics and references to download them.
Following this, in section 2.4.4 some comments regarding liveness detection for this technique are presented.
Finally, the conclusions on the state of the art of mobile signature recognition are presented in section 2.4.5.
2.4.1 Introduction
Biometrics based on the signature has been used for centuries as a method to authenticate the veracity of documents. At present, most legal or banking documents must be signed in order to be accepted. People are very used to signing, as there are many common situations in which they must make a signature so that a document or transaction is accepted. Indeed, people must provide a signature in order to receive a valid national identification document. In general, signatures are used to verify the identity of a user, not to identify a user within a database.
Most of the signatures made at present are handwritten signatures, where users take a pen and write their signature on paper. However, the penetration of new technologies has driven improvements in the type of sensors used, and it has become usual to make a signature on digitalized screens with special pens.
For many years there has been a great effort to automate signature verification. In fact, in [406], [147] and [252] the authors thoroughly gathered the most important articles related to classical handwritten signature until 2008. More recent survey articles [481], [501], [435] and [158] summarize the most important works until 2013.
In these works, a separation is made between two kinds of signatures:
• Off-line signatures: the signatures are written with ink on paper, making this an image-processing problem.
• On-line signatures: many temporal signals are captured while making a handwritten signature (usually speeds, accelerations, pressures, angles, etc.), making this a signal-processing problem.
Of course, on-line signature systems require specific sensors to capture these signals, but their performance improves considerably with respect to off-line systems. In general, the main focus of these works is to improve the performance of handwritten signature verification algorithms in systems where the signature is performed on a tablet with a pen able to capture many different temporal signature features.
However, this approach is different from the one presented in this document, where on-line signatures are captured on handheld devices, such as PDAs or mobile phones. Including signatures in this kind of device is quite interesting because of the large number of legal, monetary and other operations that might make use of mobile signature recognition. This is the reason why there are many works trying to incorporate signature biometrics specifically into mobile devices. However, to the authors’ knowledge, no complete document gathering the present state of the art of mobile signature biometrics exists, which is the goal of this section.
2.4.2 Relevant works on signature recognition on mobile phones
Signature verification systems must face many challenges to adapt their techniques to the mobile
environment. Some of the most important challenges are the following:
• Handheld devices are affected by size and weight constraints because of their nature. Usually, mobile phones or PDAs present small input areas and poor ergonomics that increase the
variability of the signatures.
• The quality of the touch screen on mobile phones must also be considered. In these devices, in
general, only position signals in time are available, but not pressure, azimuth and other signals
that may improve the verification performance.
• The sensors that capture the signature are not the same across devices. Some works, presented below, use touch screens with special styluses, fingers or accelerometers. Finding the most appropriate sensor to capture the signature is a requirement to be fulfilled.
• The processing capacity and the battery of the mobile phone are also constraints that limit the
complexity of the verification algorithms that can be used.
The research works related to mobile signature recognition address all these issues in different manners, obtaining different performance results. Next, the most important approaches to adapting
handwritten signatures to mobile devices are presented. These approaches can be divided into three groups:
1. Signatures are made on a mobile device, which is responsible for capturing the position signals over time.
2. Signatures are made anywhere, but using a special pen that captures the accelerations of the signature.
3. Signatures are made in the air while holding the mobile phone, which captures the accelerations of the gesture movement.
The first approach is the most similar to the classical, non-handheld technique. It is also the most common approach, since most of the work done on classical handwritten signature can be adapted easily.
One of the most important initiatives is the BioSecure Multimodal Evaluation Campaign, where independent research institutions studied the verification results for handheld devices in comparison with other databases captured using a pen tablet [494]. In this comparison, it was concluded that the verification algorithms performed worse with handheld devices than with a pen tablet.
Following this initiative, the BioSecure multimodal database was created [387], including a specific subdatabase of signatures obtained through a handheld device. A part of this database, consisting of 20 genuine signatures of 120 users, with 20 skilled forgeries per user, was used in [336]. In this work the authors extracted 100 features of time, speed, acceleration, direction and geometry per signature sample. They then used the Fisher Discriminant ratio to select the most appropriate features for each user, and classified them using an HMM. They obtained an EER of 4% for random forgeries and 11.9% for skilled forgeries. They suggested that the ergonomics, an unfamiliar surface and the signing device may affect the signature performance.
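The Fisher Discriminant ratio used for per-user feature selection can be sketched as follows (an illustrative implementation; the exact formulation used in [336] may differ in detail):

```python
import numpy as np

def fisher_ratio(genuine, impostor, eps=1e-12):
    """Per-feature Fisher Discriminant ratio: squared separation of the
    class means over the summed class variances. Higher values indicate
    more discriminative features, which are kept for that user."""
    g = np.asarray(genuine, dtype=float)   # shape (n_genuine, n_features)
    f = np.asarray(impostor, dtype=float)  # shape (n_impostor, n_features)
    num = (g.mean(axis=0) - f.mean(axis=0)) ** 2
    den = g.var(axis=0) + f.var(axis=0) + eps
    return num / den
```

Features would be ranked by this ratio and the top-scoring subset retained for each user before HMM classification.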
The BioSecure database is also used in the “ESRA11: Biosecure signature evaluation campaign” [244], which is, to the authors’ knowledge, the most recent evaluation campaign performed with the BioSecure mobile signature subdatabase. In this competition, 11 teams presented their verification algorithms, to be evaluated on the subdatabase obtained through handheld devices. This database was made up of 2 sessions over 4 weeks, with 15 genuine repetitions and 10 skilled falsifications, together with the information of the static signature. 50 subjects were provided for training the algorithms and 382 users for testing.
The best-performing algorithm in this competition obtained an EER of around 6% against skilled forgeries. This approach consisted of feeding the pen coordinates and a number of extra points into Dynamic Time Warping (DTW) algorithms. The score was then obtained as the average DTW distance between the test sample and 5 reference signatures, with user-based normalization [492].
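A minimal sketch of this scoring scheme is shown below, assuming 1-D sequences for simplicity (for multi-channel pen coordinates, a per-point Euclidean cost would be used instead; function names are illustrative):

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic Time Warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # best alignment path ending at (i, j)
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

def verification_score(test, references):
    """Average DTW distance of a test signature to the user's reference
    set (lower means more likely genuine). User-based normalization would
    additionally divide by a per-user statistic of enrolment distances."""
    return float(np.mean([dtw_distance(test, r) for r in references]))
```

The warping step makes the comparison robust to local speed variations between repetitions of the same signature.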
There are other important works related to mobile phone signature, although they make use of
private databases that are created specifically for their research works.
For example, in [296] the authors use a Samsung Galaxy Note to capture the signatures of people signing with a special pen. They obtained a database of 25 users with two sessions. The temporal signals captured were the position of the pen in X and Y. From those signals they extract features related to time, speed, acceleration, direction and geometry. They select the best features through a sequential forward algorithm and normalize them using the tanh method. Finally, they obtain the Mahalanobis distance from the feature vector and fuse this distance with the DTW score. Using this approach and this database, they obtain an EER of 0.525% (with random falsification samples).
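The tanh normalization and score fusion steps can be sketched as follows (illustrative only: the constants follow the common Hampel-based tanh normalization, and the equal-weight fusion is our own assumption, not necessarily the weighting used in [296]):

```python
import numpy as np

def tanh_norm(scores, mu, sigma):
    """Tanh score normalization: maps raw scores into (0, 1), centred at
    0.5 for scores equal to the reference mean `mu`."""
    scores = np.asarray(scores, dtype=float)
    return 0.5 * (np.tanh(0.01 * (scores - mu) / sigma) + 1.0)

def mahalanobis(x, mu, cov):
    """Mahalanobis distance of a feature vector x to a user model (mu, cov)."""
    d = np.asarray(x, dtype=float) - np.asarray(mu, dtype=float)
    return float(np.sqrt(d @ np.linalg.solve(cov, d)))

def fuse(s_mahalanobis, s_dtw, w=0.5):
    """Weighted-sum fusion of two normalized scores (equal weights assumed)."""
    return w * s_mahalanobis + (1.0 - w) * s_dtw
```

Both matcher scores are mapped to a common (0, 1) range before fusion so that neither dominates the combined decision.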
In [354], the authors use 4 different handheld devices of different technologies. They use two
capacitive devices (Samsung Galaxy S and Samsung Galaxy Tab) and two resistive devices (HTC
Tattoo and Geeksphone ONE) to capture the signatures performed with a pen. Each database is
composed of 25 users with two sessions of 14 genuine signatures and 14 skilled forgeries. Depending
on the database, they obtain an EER from 1.5 to 4% with an algorithm based on DTW.
This work is complemented in [82], where the authors add 4 new devices, including an iPad. Additionally, on four of the devices studied, the signature is performed with the finger rather than with a stylus. The authors created a database of 11 users with 3 sessions and 20 repetitions, with a separation of two weeks and 10 forged signatures. They obtained an EER of 0.5-2% on random samples and 8-18% on skilled samples. They reached interesting conclusions, such as that the smallest devices (except the iPad) achieve better performance, and that stylus signatures perform slightly better than finger signatures.
The second approach to mobile signatures is based on using a specific pen which embeds several sensors to make the signatures.
Following this approach, the authors of [71] created a Biometric Smart Pen Device (BiSP) able to record different temporal signals while a person is making the signature on any solid pad or even in free air. In particular, the device captures the acceleration, tilt angle, grip forces of the fingers holding the pen, and forces and vibrations during writing.
These devices are used for different purposes. One of them is the work of [72], which studies signature recognition with this device. In this work, the authors create a private database of 40 people who wrote a private id word composed of 7 characters. They wrote the word in the air, with the elbow resting on a table, or directly on a surface. They obtained a 99.99% correct classification rate, with no forgery attempts, using a fast adaptation of DTW.
Similarly, in [444] the authors build a different device by attaching a tri-axis accelerometer and two gyros to a pen. With this device, they make a private database of 4 people with skilled falsifications. They propose an algorithm based on HMM, obtaining an EER of around 1.5%.
Related to this approach, there are some works where the authors attach a tri-axial accelerometer close to the tip of a pen and a gyro in the middle, sampling at 1000 Hz. With this hardware, the database AccSigDb2011 is presented in [95], composed of 600 signatures from 40 authors, including 10 genuine samples and 5 forgeries each. In all these samples, only acceleration values are captured. This database is extended by the Gyrosigdb2012 database, presented in [127] by an overlapping group of authors. In this extension they add the signatures of 20 more people, and this time they also capture the signals from the gyros. The authors claim these databases to be public but, to the best of our knowledge, the link to download them is not available.
The intersection of both databases is used in [217], where the authors propose a Legendre approximation with an SVM for classification, obtaining 90% accuracy with a database of 10 people.
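Extracting Legendre-coefficient features from one acceleration channel can be sketched with NumPy's Legendre fitting (the polynomial degree and function names are illustrative; [217] does not specify these details here):

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_features(signal, degree=8):
    """Least-squares Legendre coefficients of one (resampled) acceleration
    channel; the coefficient vector is used as the feature vector."""
    t = np.linspace(-1.0, 1.0, len(signal))  # Legendre domain [-1, 1]
    return legendre.legfit(t, np.asarray(signal, dtype=float), degree)
```

The coefficients of the three acceleration axes would then be concatenated and passed to an SVM classifier.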
The third and last approach to making signatures on mobile phones presented in this document is based on making an identifying gesture in the air. Several research teams are working on this or similar issues, obtaining competitive results.
In [394] the authors present an authentication protocol based on making gestures on a mobile phone with an embedded two-axis accelerometer. The identifying signature is a combination of different simple gestures separated by pauses. However, the authors indicate neither how these gestures are analyzed nor the results obtained.
A similar approach is presented in [118], where the authors present a vocabulary of 10 simple gestures to be combined to create complex signatures. The authors use a private database of 18 people, concluding that this kind of signature can easily be falsified.
On the other hand, in [386] the authors propose that users create a personal gesture to be identified by the mobile phone. In this case, the gesture is a movement based on picking the phone up from the table and then shaking it in a particular manner. The gestures are captured by an embedded 3-axis accelerometer. The authors create a private database of 22 users, obtaining an EER of 5% with random falsification samples. The processing of the signatures is based on a dynamic programming technique similar to DTW.
This work is complemented in [344]. Using the same algorithm, it studies the performance of the technique over time. For this purpose the authors create a private database of 12 people making their gesture during 6 weeks. With an updating method, they obtain an EER of 4%.
In [319] the authors propose an algorithm based on DTW with template adaptation to analyze the performance of signatures composed from a vocabulary of eight simple gestures. They use a database of 5 people in different sessions, obtaining a correct rate of 93.5%, which rises to 98.4% when analyzing only the samples of one session. In this work the authors also propose that each individual create his/her own signature, obtaining in this case a correct rate of 99.5% with a database of five people.
The same authors complete their work in [320], adding real falsification attempts. For this purpose they create a private database of 10 people who create a personal signature in the air. Then, four people tried to repeat each signature knowing only the “drawing” the authentic user made, and another four people tried to forge the signature from a video recording of the user making his/her signature. EERs of 3% and 10% were obtained in the two scenarios, respectively.
Finally, in [222] the authors presented the “in-air signature” biometric technique, based on authenticating people as they make an identifying gesture (a signature) in the air while holding the mobile phone in the hand. In this work, the authors use the tri-axis accelerometer embedded in most current mobile phones to capture the acceleration signals of the signature. The authors obtained an EER of 2.5% analyzing a database of 34 users who repeated their signature 7 times, with skilled forgeries obtained through the study of video recordings of the genuine users making their in-air signatures.
Following this work, in [63] the authors evaluated a private database composed of the samples of 96 genuine individuals and the skilled forgeries of six different people who tried to repeat all of the authentic gestures. The best algorithm was the one based on DTW, which obtained an EER of 4.5% against the skilled forgeries. The same database was used in [100], where the authors proposed different algorithms based on sequence alignment, obtaining an EER under 2%.
Additionally, the same team presented in [101] a work analyzing the performance of the in-air signature technique over time. For this purpose they obtained a database of 20 sessions and 22 people, who repeated their signature in the air 5 times per session. They proposed a template updating strategy that resulted in a 1.67% FAR and a 5.32% FRR.
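The exact update rule of [101] is not reproduced here, but a generic template adaptation of this kind can be sketched as an exponential moving average over newly accepted samples (the blending weight and function name are illustrative):

```python
import numpy as np

def update_template(template, accepted_sample, alpha=0.1):
    """Exponential moving-average adaptation: blend a newly accepted
    signature into the stored template so it tracks the slow drift of the
    gesture over time. Sequences are assumed resampled to equal length."""
    template = np.asarray(template, dtype=float)
    accepted_sample = np.asarray(accepted_sample, dtype=float)
    return (1.0 - alpha) * template + alpha * accepted_sample
```

Only samples already accepted as genuine would be fed back, to avoid drifting the template towards forgeries.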
Finally, a recent article [115] demonstrated that using gyroscopes in the in-air signature method could improve the performance of the system to a FAR of 0.63% and a FRR of 0.97%.
The relevant works presented in this section are summarized in Table 2.9:
Publication  Approach              Users             Technique          Result                                 Comments
[336]        Pen on mobile screen  120               HMM                EER=4% (random), 11.9% (skilled)       BioSecure database
[492]        Pen on mobile screen  382               DTW                EER=6% (skilled)                       ESRA11 database
[296]        Pen on mobile screen  25                Mahalanobis + DTW  EER=0.525% (random)                    Samsung Galaxy Note
[354]        Pen on mobile screen  25                DTW                EER=1.5-4%                             4 devices
[82]         Pen on mobile screen  11                DTW                EER=0.5-2% (random), 8-18% (skilled)   4 devices
[72]         Pen in-air            40                DTW                CCR=99.99% (random)                    Biometric Smart Pen Device
[444]        Pen in-air            4                 HMM                EER=1.5% (skilled)                     Accelerometer + gyros
[217]        Pen in-air            10                Legendre+SVM       CCR=90%                                Accelerometer + gyros
[386]        Mobile in-air         22                DTW                EER=5% (random)                        Get phone from table and shake
[344]        Mobile in-air         12                DTW                EER=4%                                 6 weeks
[319]        Mobile in-air         5                 DTW                CCR=99.5%                              Own signature
[320]        Mobile in-air         10                DTW                EER=3-10% (skilled)                    Falsifications from video
[222]        Mobile in-air         34                DTW                EER=2.5% (skilled)                     Accelerometer
[63]         Mobile in-air         96                DTW                EER=4.5% (skilled)                     Accelerometer
[100]        Mobile in-air         96                DTW                EER=2% (skilled)                       Accelerometer
[101]        Mobile in-air         22 (20 sessions)  DTW                FAR=1.67%, FRR=5.32%                   Accelerometer
[115]        Mobile in-air         NA                DTW                FAR=0.63%, FRR=0.97%                   Accelerometer + gyros

Table 2.9: Summary of relevant works in mobile signature recognition.
2.4.3 Public databases for mobile signature recognition
A summary of the public databases found in the state of the art follows:
• BioSecure multimodal database [387]: This database can be downloaded from [10]. It contains a subdataset (Mobile Dataset (DS3)), at a price of $1500, focused on capturing signatures through mobile devices (PDA HP iPAQ hx2790) under degraded conditions. There were two sessions and 240 participants, with skilled forgeries. Only temporal position signals are captured with this device.
• GB2SGestureDB2 [222]: This is a database of 40 gestures performed by their genuine users and 3 impostors trying to forge them [15]. Each original user was recorded on video while carrying out his/her gesture 8 times. From the study of these recordings, 3 different people attempted to imitate each gesture in 7 trials. Accelerations of the gestures on the x-y-z axes were obtained at a sampling rate of 100 Hz.
• GB2SGestureDB3 [101]: This is a database of 20 people performing their identifying gestures while holding an iPhone in their hand [15]. 10 sessions of 5 repetitions of the gesture, spread over a month, were obtained for each user. Accelerations of the gestures on the x-y-z axes were obtained at a sampling rate of 100 Hz.
To the authors’ knowledge, there are no other databases available for research on biometric signature recognition. In the rest of the works related to this technology, the databases employed are not available.
2.4.4 Liveness detection on mobile signature recognition
To the authors’ knowledge, no work exists on this issue. At present, the research is focused on improving the performance of the algorithms and sensors in order to better reject skilled forgeries.
At present, there are no works trying to make machines replicate the genuine signatures of people. This does not mean that it would not be technically possible to build machines able to hold a pen and replicate a signature, if enough information is provided.
Accordingly, it is accepted that this biometric technique inherently involves liveness detection, since as a behavioural characteristic it requires performing an action that indicates the person is alive.
2.4.5 Conclusion
At present, there are some works trying to incorporate the signature into mobile devices. However, when the classical handwritten signature systems are adapted to mobile devices, the performance decreases. The main reason is that, in general, the signals that can be captured from a mobile device are reduced to position over time, losing the pressure or azimuth information that usually yields better performance.
In addition to this, the difficulty of producing skilled forgeries on this kind of device remains the same, since the signature process is not modified. This means that the information received by a forger is the same in the mobile context as in the classical context (the skilled forgeries are obtained by trying to repeat copies of authentic signatures).
These two factors, less information but the same forgeries, are the reasons for the decrease in performance in the mobile adaptation of classical signatures made with a pen on the surface of the mobile device.
In order to increase the performance, two different approaches have appeared.
The first approach is based on using a special pen that allows capturing many more signals from each signature, such as pen pressure, accelerations, azimuth, etc. However, this technology requires buying this specific pen in order to be used.
A second approach is based on employing the accelerometers and gyros already embedded in the mobile phone. In this case, no additional hardware needs to be bought, and the performance beats the results of the classical handwritten approach. The reason is that it is much more difficult to forge a 3D signature from a video camera than to repeat a 2D handwritten signature with a copy of it in front of the forger. The performance results of this technology make it an interesting option for carrying out signatures on mobile phones.
According to this report, the following advantages of signatures on mobile phones have been perceived:
• Signature is a widely accepted technique. People often make signatures in their daily life.
• It is widely accepted that signatures are used to authenticate people or to assure the veracity of documents or transactions.
• Making a signature is an easy and comfortable action.
• Signature performance is not limited by environmental constraints.
• In-air signatures only use sensors embedded in mobile phones, obtaining good performance against skilled falsification attempts.
However, some disadvantages have also been found in these types of techniques:
• Signatures with pens on mobile phones do not provide good performance.
• Special pens must be bought in addition to the device.
• Signatures depending on accelerometers should be made while standing still, avoiding any movement that could introduce other accelerations.
• In-air signatures require moving the arm and the wrist, so people with injuries in these parts could not use them appropriately.
2.5 Hand recognition
Biometric recognition based on hand features is becoming more interesting due to its acceptability among users and high accuracy.
There are multiple techniques based on hand features. In this section we will focus on three of them: hand shape or hand geometry, palmprint and hand vein. Other techniques, like knuckle recognition or verification based on hand thermal images, will be briefly introduced.
Multimodal techniques mixing hand features, like hand geometry and palmprint, or hand veins on the dorsal side and on the palm, are a natural way to create multibiometric systems with a higher accuracy based on the same source.
2.5.1 Introduction
Recognition systems based on hand features have been widely used over the last decade as some of the systems with the highest accuracy and the highest acceptability by the user [297, 148, 143, 290, 181, 500, 154].
This section is intended to explain three of the most important biometric techniques based on
hand features: hand shape or hand geometry, palmprint and hand veins.
Hand shape / Hand geometry
Hand biometrics can be divided into two different approaches:
• Contour-based approaches, where the aim consists of extracting information from the contour of the hand, carrying out the identification of an individual based on his/her hand shape [258, 351, 384, 150, 323, 489, 495, 138, 458, 485].
• Distance-based approaches, where the aim consists of extracting measures from fingers and hand
(widths, angles, lengths and so forth), in order to collect the geometrical information contained
within the hand [433, 210, 61, 85, 298, 299, 503, 151].
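As a toy example of a distance-based measure, finger lengths can be computed from detected landmarks. Landmark detection itself is assumed already solved, and the midpoint-of-valleys definition of the finger base is one common illustrative choice, not a measure prescribed by the cited works:

```python
import numpy as np

def finger_lengths(tips, valleys):
    """Length of each finger, measured from its tip to the midpoint of
    the two adjacent inter-finger valleys. `valleys` must contain one
    more point than `tips`, so finger i lies between valleys i and i+1."""
    tips = np.asarray(tips, dtype=float)
    valleys = np.asarray(valleys, dtype=float)
    lengths = []
    for i, tip in enumerate(tips):
        base = 0.5 * (valleys[i] + valleys[i + 1])  # finger-base estimate
        lengths.append(float(np.linalg.norm(tip - base)))
    return lengths
```

Widths and angles would be derived similarly from pairs of contour landmarks, and the resulting measures concatenated into the geometric feature vector.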
Figure 2.1: Hand shape/Hand geometry approaches. (Left) Contour-based approach. (Right)
Distance-based approach.
Hand geometry biometrics has usually made use of a flat platform on which to place the hand, facilitating not only the acquisition procedure but also the preprocessing (image segmentation) and posterior feature extraction. This technique is evolving towards contact-less, platform-free scenarios where hand images are acquired in free air, increasing user acceptability and usability.
However, this demands additional effort in preprocessing, feature extraction, template creation and template matching, since these scenarios imply more variation in terms of distance to the camera, hand rotation, hand pose and unconstrained environmental conditions.
This evolution can be classified into three categories according to the image acquisition criteria:
• Constrained and contact-based: Systems requiring a flat platform and pegs or pins to restrict the hand’s degrees of freedom [258, 432].
• Unconstrained and contact-based: Peg-free scenarios, although still requiring a platform on which to place the hand, such as a scanner [46, 174].
• Unconstrained and contact-free: Platform-free and contact-less scenarios where neither pegs nor
platform are required for hand image acquisition [503, 138].
In fact, contact-less hand biometrics approaches are at present increasingly being considered because of their advantages in user acceptability, hand distortion avoidance and hygienic concerns, as well as their promising capability to be extended and applied to today’s devices, with lower requirements in terms of image acquisition quality or processor speed.
Palmprint
The palm print of the hand has distinguishable features such as ridges and valleys, minutiae, and so forth. Although the three main creases are genetically dependent, most of the wrinkles (secondary creases) are not. In [291] it is shown that even identical twins present different palm prints. Therefore, palm print biometrics is a promising biometric recognition technique.
Figure 2.2: Palmprint. Principal lines of the hand.
Palm print biometrics can be divided into three different sets of features according to the palm print image resolution:
• Less than 150 dots per inch (dpi): the extracted features are principal lines, wrinkles and texture.
• Less than 500 dpi: the extracted features are ridges, singular points and minutia points.
• More than 1000 dpi: the extracted features are pores and ridge contours.
The latter two are related to forensic applications, while the first is related to commercial applications such as access control [500, 290].
The surveys [290] from 2009 and [500] from 2012 can be used as a recent state of the art of this technique.
As shown in the hand geometry section, palm print biometrics can also be classified into the same categories according to the image acquisition criteria:
• Constrained and contact-based: CCD-based scanners and pegs.
• Unconstrained and contact-based: Digital scanners.
• Unconstrained and contact-free: Digital and video cameras.
Iula proposed an alternative method [254] for 3D ultrasound imaging data acquisition, where “the hand is properly aligned by marks and is completely immersed in water with the palm facing upwards”. In our opinion, user acceptability could be affected by the immersion in water.
Hand veins
Hand vein patterns present some advantages over other biometrics. Veins are considered an inner body feature that cannot be falsified, they do not require contact for image acquisition, and they remain stable over time [148, 297, 374].
Figure 2.3: Hand Veins. Dorsal palm veins.
Briefly, in the circulatory system the hemoglobin in the blood is first oxygenated in the lungs; the oxygenated hemoglobin is then sent to the body tissues, where the oxygen is released, and the deoxygenated blood returns to the heart through the veins.
Deoxygenated hemoglobin absorbs NIR light (around 760 nm), so when veins are illuminated with NIR light and captured with an IR sensor, they appear as a dark pattern [434, 126]. In [434], the authors combined the palmar and dorsal vein patterns to obtain an EER of 0%.
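A minimal sketch of extracting this dark vein pattern from a NIR image by local adaptive thresholding is shown below (the window size, offset and function name are illustrative parameters of ours, not values taken from the cited works):

```python
import numpy as np

def vein_mask(nir_image, window=15, offset=5.0):
    """Label as vein every pixel darker than its local neighbourhood mean
    by more than `offset` (veins absorb NIR light, so they appear dark).
    The local mean is computed with an integral image (box filter)."""
    img = np.asarray(nir_image, dtype=float)
    pad = window // 2
    padded = np.pad(img, pad, mode="edge")
    # integral image of the padded input
    c = np.cumsum(np.cumsum(padded, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))
    h, w = img.shape
    local_sum = (c[window:window + h, window:window + w]
                 - c[:h, window:window + w]
                 - c[window:window + h, :w]
                 + c[:h, :w])
    local_mean = local_sum / (window * window)
    return img < local_mean - offset
```

The resulting binary pattern would then be thinned and matched against the enrolled vein template.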
Some works in the literature use thermographic cameras, which capture infrared radiation from the skin [318, 476, 126, 297].
The need for specific cameras for hand vein detection makes this technology difficult to counterfeit. Moreover, veins can be used to detect liveness [126, 297, 77].
Others
Hand thermal images [130] are acquired by "1012 thermal sensors arranged in 23 columns and 44 rows". The user places his/her hand on the sensor plate, with pegs to guide the hand position. "Each sensor measures temperature within a range of 0-60 °C with an accuracy of 0.1/0.3 °C". The authors study several feature selection methods (minimum Redundancy Maximum Relevance (mRMR), PCA and PCA+LDA) and different classification methods (KNN and SVM). The best results, an EER of 6.67%, are obtained with PCA+LDA (using 25 principal components) and KNN.
2.5.2
Relevant works on mobile hand recognition
This section presents the relevant work on these techniques. It must be noted that these techniques are only coming into use in mobile devices and, to the best of the authors' knowledge, there is not much work in this field; indeed, no works were found on vein recognition with mobile devices.
Hand shape / Hand geometry
De Santos Sierra et al. [138] proposed silhouette-based hand recognition with images taken with mobile devices. The best results (3.7% EER) were obtained by aligning two sequences: one defined as the variation along the hand contour and the other defined as the distances from each point of the hand contour to the hand centroid.
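As an illustration, the second of these sequences can be sketched as follows. This is not the authors' implementation, just a minimal numpy sketch assuming the contour is given as an array of (x, y) points; the number of resampled points is an illustrative choice:

```python
import numpy as np

def centroid_distance_sequence(contour, n_points=256):
    """Resample a hand contour (k, 2) to n_points and return the
    sequence of distances from each contour point to the hand centroid.
    The fixed-length, normalised sequence lets two hands be compared."""
    centroid = contour.mean(axis=0)
    d = np.linalg.norm(contour - centroid, axis=1)
    # Resample to a fixed length so sequences from different hands align
    idx = np.linspace(0, len(d) - 1, n_points)
    seq = np.interp(idx, np.arange(len(d)), d)
    return seq / seq.max()   # normalise for scale invariance
```

Two such sequences can then be compared with an elastic alignment method (e.g. DTW) or a simple distance after resampling.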
De Santos Sierra et al. [137] used a Gaussian multiscale aggregation method for hand segmentation. This method improves previous work in the segmentation field and can be used against different backgrounds without worsening the results. They tested it on a synthetic database of 408000 images. In terms of F-measure, the worst results were obtained with a parquet background (88.3%) and the best results with a sky background (96.1%).
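The F-measure used to report these segmentation results can be computed as follows; this is the standard definition for binary masks, not a detail specific to [137]:

```python
import numpy as np

def f_measure(pred_mask, gt_mask):
    """F-measure (harmonic mean of precision and recall) between a
    predicted binary segmentation mask and the ground-truth mask."""
    pred, gt = pred_mask.astype(bool), gt_mask.astype(bool)
    tp = np.logical_and(pred, gt).sum()         # correctly labelled hand pixels
    precision = tp / max(pred.sum(), 1)
    recall = tp / max(gt.sum(), 1)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```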
De Santos Sierra et al. [437] proposed a new set of features based on finger widths and curvatures, obtaining an EER of 6.0%.
De Santos Sierra’s thesis [139] shows a complete study of the unconstrained and contact-less biometric systems based on hand geometry. The author gives a complete evaluation of his proposed
method based on multiscale aggregation applied to image segmentation and fingers feature extraction/classification which has been assessed with different public/private databases.
Hsu et al. [245] proposed an architecture to unlock a vehicle with a mobile phone based on hand geometry features. The authors use the areas of four triangles formed by different hand points, such as fingertips and valleys, to check the user's identity. Their method obtains an accuracy of 80%.
Palmprint
The early works on palmprint with mobile device cameras were performed by Han et al. [226]. The authors used a PDA with a built-in camera with 100 dpi resolution. They proposed a sum-difference ordinal filter to extract principal lines and wrinkles, obtaining an EER of 0.92% with a short feature extraction time (180 ms).
Methani et al. [357] proposed a method for palmprint recognition with poor quality images by combining multiple frames of a short video (up to 0.5 seconds). Frames were combined at line level, and they obtained EERs from 12.75% to 4.7% depending on the number of frames used. They also proposed a method to discard low quality images that improves the EER to 1.8%.
Choras et al. [119] used texture mask-based features for palmprint recognition. They proposed three methods to create these masks: random masks, user masks (where the user must label some areas of his/her hand) and eigen-palms computed by PCA. The best results were obtained with the eigen-palm approach (1.7% FAR and FRR).
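The eigen-palm idea can be illustrated with a short PCA sketch over flattened palm regions of interest; the ROI size and number of components below are illustrative assumptions, not the values used in [119]:

```python
import numpy as np

def eigen_palms(images, n_components=8):
    """Compute 'eigen-palm' basis vectors by PCA over flattened palm
    ROIs: centre the data, then take the top right-singular vectors."""
    X = images.reshape(len(images), -1).astype(float)
    mean = X.mean(axis=0)
    # SVD of the centred data matrix yields the principal components
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_components]

def project(image, mean, components):
    """Describe one palm ROI by its coordinates in eigen-palm space;
    these low-dimensional vectors are then compared for recognition."""
    return components @ (image.ravel().astype(float) - mean)
```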
Publication | Sensor | Technique | Subjects/Database | Results
De Santos Sierra et al. [137] | HTC camera | Hand geometry | Syn [139, 137] | F-measure: 96.1%
De Santos Sierra et al. [437] | HTC camera | Hand geometry | Syn [139, 137] | EER: 6.0%
De Santos Sierra's thesis [139] | HTC camera | Hand geometry | Syn [139, 137] | Complete evaluation of different methods
Han et al. [226] | PDA camera (100 dpi) | Palmprint | 40 subjects | EER: 0.92%
Methani et al. [357] | Webcam | Palmprint | 100 subjects | EER: 1.8%
Choras et al. [119] | Mobile device | Palmprint | 84 subjects | FAR and FRR: 1.7%
Table 2.10: Hand biometrics in mobile devices
However, the performance of these techniques is strongly related to the acquisition of an image with the required features.
Hand biometrics in mobile devices can be classified as unconstrained and contact-free, and it has to deal with non-controlled conditions, which we briefly summarise next:
1. It must work under any light condition: indoor, outdoor, daylight, flash, etc.
2. Poor quality of cameras
3. Any kind of background, so the user can employ it anywhere
4. Blur, motion, noise, skin’s specular reflections, and so forth
5. Hand pose: the user can vary his/her hand pose from one picture to another
6. Hand items, like rings and watches
7. Battery and time consumption
2.5.3
Public databases for mobile hand recognition
This section presents a table (Table 2.11) that gives an overview of databases with contact-less images of hands. Its columns represent, respectively: the name of the database; references; whether the database contains samples from Females (F), Males (M) or Both (F/M); the population size; whether rings are allowed: Yes (Y) or No (N); which hand is involved: Right (R), Left (L) or Both (B); the number of samples per user; the illumination of the image: Colour (C) or gray-scale (BW); the image size; and whether there is variation in hand rotation during acquisition: Yes (Y) or No (N).
Name | Ref. | F/M | Size | Rings | Hand | nSamp. | Ill. | Im. Size | Rot.
ID | [139] | F/M | 110 | Y | B | 20 | C | 640x340 | Y
Syn | [139, 137] | F/M | 120 | Y | B | 20 | C | 640x340 | Y
UST | [300] | F/M | 287 | Y | B | 10 | BW | 1280x960 | Y
IITD | [297] | F/M | 235 | Y | B | 7 | BW | 800x600 | Y
Table 2.11: A comparative overview of several aspects of different hand databases
These databases are oriented to the hand geometry technique but, in our understanding, they could also be used for palmprint techniques oriented to mobile devices.
The PolyU [232] palmprint database [25] "contains 8000 samples collected from 400 different palms. Each sample contains a 3D Region of Interest (ROI) and its corresponding 2D ROI".
2.5.4
Liveness detection on mobile hand recognition
As far as the author’s knowledge, there exist no much work on this issue.
Some works in hand vein biometrics start from the assumption that veins are inner features of the
body that couldn’t be falsified due to they are not visible to the naked eye [126, 297, 77]. Then, this
technique could be used to live detection because veins couldn’t be falsified easily.
Future works in this issue could be oriented to video recordings, e.g., hand motion, heart rate
(as shown in Section 2.3.5) and human-machine interaction (the machine asks to the subject to do a
specific task, e.g., close and open the hand, rotate the hand or show a number of fingers.)
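Such a human-machine interaction check could take the form of the following hypothetical challenge-response loop. Everything here is a sketch: `capture_frame` and `classify_gesture` are placeholder callables standing in for a camera prompt and a real video-based gesture detector, and the gesture list is invented for illustration:

```python
import random

# Hypothetical gesture challenges the device may ask for
CHALLENGES = ["open_hand", "closed_fist", "two_fingers", "rotate_hand"]

def liveness_check(capture_frame, classify_gesture, rounds=3):
    """Pass only if the subject performs every randomly chosen gesture.
    A replayed photo or pre-recorded video is unlikely to match a
    sequence of random challenges issued at verification time."""
    for _ in range(rounds):
        challenge = random.choice(CHALLENGES)
        frame = capture_frame(challenge)       # prompt the user, grab an image
        if classify_gesture(frame) != challenge:
            return False                       # wrong or missing gesture
    return True
```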
2.5.5
Conclusion
Hand biometrics in mobile devices is a work in progress that is emerging as a good solution, offering a compromise between user acceptability and system performance.
Nowadays, with the improvement of mobile phones, adding these biometric techniques to increase the security of these devices is within reach.
Much work remains to be done to obtain results similar to those of other biometric techniques such as iris. For this reason, the fusion of different hand-related techniques must be studied: for instance, hand geometry and palmprint can start from the same hand picture, so merging these two techniques is a natural process that can increase system performance without disturbing user acceptability.
Consequently, using hand in mobile phones presents the following advantages:
• It provides competitive performance rates when used in controlled situations.
• One picture could be analyzed from different points of view (e.g. hand geometry and palmprint)
and the results could be merged in order to enhance the performance rate.
• It is highly accepted because the hand is not associated with criminal investigations. In addition, it is not easy to steal an image of someone else's open hand (in contrast with fingerprints, which remain latent on many surfaces).
• Hand geometry and palmprint do not require high quality cameras or additional hardware to be
integrated in the mobile phones.
• It is very comfortable to use, since the user can take a photograph of his/her hand directly with the phone's back camera, without making contact with any sensor.
However, the following disadvantages or limitations of hand in mobile phones have been found:
• It presents limitations related to environmental conditions, e.g. lighting, background, etc.
• Vein recognition requires thermographic cameras to acquire an image of the veins, which is a very expensive technology.
• It is vulnerable to fake hands, that can be easily built by printing a picture of the hand.
2.6
Voice recognition
This section presents the most important works related to voice biometrics in mobile handheld devices.
The outline of this section follows the structure of the rest of the document.
In section 2.6.1 an introduction to the current speaker recognition techniques is provided in order
to align the scope of this document to the objective of the PCAS project.
Next, in section 2.6.2, the most recent and relevant works on speaker recognition systems are
presented, mainly those working on mobile environment or at least, focused on solving some of the
limitations of mobile phone systems (low storage and consumption). In addition, the main concern of
this document lies in works in which a text-dependent approach is implemented.
The public databases that are relevant in mobile speaker recognition techniques are summarized
in section 2.6.3, including their characteristics and how to download them.
Following this, in section 2.6.4 some comments regarding liveness detection in voice biometrics are
presented.
Then, the most important companies providing voice biometric systems are presented in Section
2.6.5.
Finally, the conclusions of the state of the art of mobile speaker recognition are presented in Section 2.6.6.
2.6.1
Introduction
Voice biometrics is focused on detecting the speaker identity in verification or identification systems.
In general, the required sensor is a microphone that captures the voice at a certain sampling rate and sends these data to a computer, which is responsible for analyzing them.
Automatic speaker recognition systems have been studied for many years. There are thousands
of research articles regarding this biometric technique. Some of the most complete review articles of
voice biometrics are [421], [97], [428]. In addition, more recent surveys can be found in [171], [324]
and [106].
Usually, speech features are divided into high-level and low-level characteristics. The former are related to dialect, emotional state, speaking style and others, and are not usually adopted due to the difficulty of their extraction. The latter are related to the spectrum, are easy to extract and are almost always applied in automatic speaker recognition [187]. By far, Mel Frequency Cepstral Coefficients (MFCC) and GMM are the most prevalent techniques used for feature extraction and feature representation, respectively, in speaker recognition systems [421].
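Since MFCCs appear throughout this section, a compact sketch of the standard extraction chain (pre-emphasis, framing, power spectrum, mel filter bank, log and DCT) may be useful; the frame sizes and filter counts below are common defaults, not values mandated by any cited work:

```python
import numpy as np
from scipy.fftpack import dct

def mfcc(signal, sr=8000, frame_len=0.025, frame_step=0.010,
         n_filters=24, n_coeffs=13, n_fft=512):
    """Compute MFCC features from a 1-D audio signal."""
    # Pre-emphasis boosts high frequencies
    sig = np.append(signal[0], signal[1:] - 0.97 * signal[:-1])
    flen, fstep = int(sr * frame_len), int(sr * frame_step)
    n_frames = 1 + max(0, (len(sig) - flen) // fstep)
    frames = np.stack([sig[i * fstep: i * fstep + flen] for i in range(n_frames)])
    frames *= np.hamming(flen)
    # Power spectrum of each windowed frame
    pspec = (np.abs(np.fft.rfft(frames, n_fft)) ** 2) / n_fft
    # Triangular mel-spaced filter bank
    mel = lambda f: 2595 * np.log10(1 + f / 700.0)
    imel = lambda m: 700 * (10 ** (m / 2595.0) - 1)
    pts = imel(np.linspace(mel(0), mel(sr / 2), n_filters + 2))
    bins = np.floor((n_fft + 1) * pts / sr).astype(int)
    fbank = np.zeros((n_filters, n_fft // 2 + 1))
    for j in range(1, n_filters + 1):
        l, c, r = bins[j - 1], bins[j], bins[j + 1]
        fbank[j - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[j - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    feat = np.log(pspec @ fbank.T + 1e-10)
    # Keep the first n_coeffs DCT coefficients as the MFCCs
    return dct(feat, type=2, axis=1, norm='ortho')[:, :n_coeffs]
```

The output is a (frames × coefficients) matrix, which is what the GMM, VQ, DTW and ANN back-ends described below consume.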
There are two main scenarios of speaker recognition systems:
• Text-dependent: The user is recognized when saying the specific word or phrase he/she enrolled with. This means that the Speaker Recognition System (SRS) knows a priori the sentence the person is going to say, which gives the system a lot of information. In general, these systems perform better and are simpler than text-independent systems.
• Text-independent: The user is recognized while having a conversation, no matter which words are pronounced. This is a much more complicated scenario, but more flexible. It is often used for transparent and forensic identification.
Nowadays, research on speaker recognition techniques is not so focused on adapting the technology to mobile phones as on improving the performance of text-dependent and text-independent scenarios in real and noisy conditions. At present, text-dependent speaker verification systems are more commercially viable for applications where a cooperative action is required, but text-independent SRS are more useful for background recognition and liveness detection.
As the scope of this document is to give an overview of the relevant technologies for the PCAS project, the text-dependent scenario is the most useful for verification purposes, since the cooperation of the user is expected to accomplish the authentication. Consequently, the relevant works presented in the next section focus on this scenario. In 2008, the authors of [231] presented a complete overview specifically devoted to text-dependent speaker recognition.
Additionally, as far as the authors’ knowledge, works related to mobile speaker verification often
make the mobile phone adaptation only in the sensor and the communication modules. This implies
that these works use the mobile phone to get the signal, then they send it to an external computer
where the analysis is carried out. So, in these works, the mobile adaptation of the techniques are
translated as a different kind of noise in the communications and acquisition module. In spite of these
works, which are quite interesting and presented as follows, there are several works where the focus of
the research is mainly in the low consumption time and storage required, allowing the development of
real speaker verification system on a mobile phone.
2.6.2
Relevant works on mobile speaker verification
One of the main challenges of speaker recognition systems is their high computational cost, which must be reduced in order to incorporate them into a mobile phone. Many researchers focus on reducing the computational load of recognition while keeping the accuracy reasonably high. For this purpose, optimised Vector Quantization (VQ) has been proposed in many works [450]. This method reduces the number of test vectors by pre-quantizing the test sequence before matching; consequently, unlikely speakers can be easily rejected. Another widely used option is a generalization of GMM [420].
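A minimal sketch of VQ-based speaker scoring with test-sequence pre-quantization, using SciPy's k-means as the codebook trainer; the codebook sizes are illustrative assumptions, not values from [450]:

```python
import numpy as np
from scipy.cluster.vq import kmeans, vq

def train_codebook(features, size=32):
    """Speaker model = codebook of `size` centroids over training
    feature vectors (e.g. MFCCs)."""
    codebook, _ = kmeans(features.astype(float), size, seed=0)
    return codebook

def vq_distortion(test, codebook, pre_q=16):
    """Average distortion of a test sequence against a speaker codebook.
    Pre-quantizing the test sequence to `pre_q` vectors cuts the
    matching cost roughly by a factor of len(test) / pre_q."""
    reduced, _ = kmeans(test.astype(float), pre_q, seed=0)
    _, dists = vq(reduced, codebook)   # nearest-centroid distances
    return dists.mean()                # lower = more likely this speaker
```

A claimed identity is accepted when the distortion falls below a per-speaker threshold; unlikely speakers yield large distortions and can be rejected early.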
As introduced before, this section describes the most relevant and recent works regarding speaker verification by means of a mobile phone, mainly in the text-dependent scenario, where the active cooperation of the user is expected. As there are many related works, articles whose experiments use a database of 10 users or fewer are discarded in this document, unless they have had a very important impact on the literature.
One of the most relevant initiatives is the First Mobile Biometry (MOBIO) Face and Speaker Verification Evaluation, carried out in the projects MOBIO [20] and TABULA RASA [29] as a competition to recognize people from their face and voice through their mobile phone. The results of this competition were presented in [330], making use of the first version of the MOBIO database, described in the same work. This database was composed of text-dependent and text-independent voice samples; however, the evaluation did not assess the two scenarios separately. The best results, regarding only voice biometrics, were obtained by the Brno University of Technology, achieving an EER of 10.47% (male) and 10.85% (female).
The winning algorithm was a fusion of two systems. The first is a Joint Factor Analysis (JFA) system, described in [278]. The second, published in [142], is based on an i-vector system that describes the subspace with the highest overall variability. Both systems use 2048 Gaussians.
The MOBIO database was completed some years later and presented in [346]. The authors of [371] used this database to evaluate their voice algorithms in mobile environments, presenting a session variability model based on GMM. For speaker authentication, the speech segments are first isolated using energy-based voice detection. Then, MFCC features are extracted from 25 ms frames with 10 ms overlap and a 24-band filter bank. The resulting 60-dimensional feature vectors contain 19 MFCCs together with energy, delta and double-delta coefficients. These feature vectors are modelled with Inter-session Variability (ISV) and JFA methods, obtaining an HTER of 8.9% for males and 15.3% for females.
Additionally, in [233] the authors propose a verification system based on a password phrase with voice authentication, using the "BioID: A Multimodal Biometric Identification System" database [185]. The authors argue that robustness and accuracy are the most important requirements for the system to be adopted by society. For this reason, they train the verification system with limited samples and make it consume very little time and memory. They use DTW for the classification of MFCC features; robustness is increased with speech enhancement and cepstral mean subtraction. To decrease the storage requirements, they use VQ with speaker-specific codebooks, obtaining a 2.7% EER with limited training.
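DTW, used in [233] and in several works below, aligns two variable-length feature sequences before comparing them; a straightforward (unoptimised) sketch of the distance computation:

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic Time Warping distance between two feature sequences
    a (n, d) and b (m, d), with Euclidean local cost and the classic
    three-direction recurrence."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m] / (n + m)   # length-normalised alignment cost
```

In a verification setting, a test utterance's MFCC sequence is aligned to the enrolled template(s) and accepted when the distance is below a threshold.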
Another relevant work in terms of mobile text-dependent speaker recognition is [305], where the authors release the RSR2015 database for text-dependent speaker verification using multiple pass-phrases. This database is composed of 300 users who said 30 phrases in different sessions with 4 mobile phones and 2 tablets. The authors apply the ideas of the system proposed in [308] to evaluate its performance on this database, obtaining an EER of around 1% for both males and females. However, they required 3 sessions with 30 phrases per user for enrolment. They
propose a hierarchical multilayer acoustic model based on three layers: the first detects the gender, the second chooses candidate speakers and the third detects the identity. The authors extract 50 features per sample (19 Linear Frequency Cepstral Coefficients (LFCC), their derivatives, 11 second derivatives and the delta energy) and use HMMs for classification.
The same main author, together with another team, presents a recent work proposing a text-dependent speaker verification system with constrained temporal structures [304]. The authors describe these constraints as the limited enrolment data and computing power typically found in mobile devices. They propose client-customised pass-phrases and new Markov model structures. They also work on a hierarchical multilayer model, where the first layer detects the gender, the second selects speaker candidates based on text-independent techniques and the third uses HMMs to make the decision in the text-dependent system. In this work the authors use the MyIdea database [152], obtaining an EER of 0.84% when impostors do not know the pass-phrase and 4.11% when impostors use the authentic pass-phrase.
Another text-dependent speaker verification system for mobile devices was suggested in [50]. In this work, the authors verify all users when saying the Arabic word "Naam" ("Yes" in English). They extract MFCCs for each sample and train a model for each user by means of an ANN trained with a batch gradient descent algorithm. For training each model, they use samples from other users and other pass-phrases. They work with a private database of 15 speakers recorded with an Android HTC Nexus One, obtaining an EER of around 7-8%.
The authors of [110] also present a text-dependent speaker recognition system for Android platforms. They suggest a preprocessing step consisting of normalization, silence removal and end-pointing. Then, they extract LFCC features and perform classification based on DTW. To decide whether a sample belongs to a user, they propose personal thresholds for each user, derived from the training samples. At enrolment, they suggest two types of training: sequential training, where the first sample is taken as reference and the rest are aligned to it through DTW, and collective training, where the training sample with the median length is chosen as the DTW reference. They use a private database of 16 people with 15 recordings of the same passphrase (the same sentence for all users), using 10 samples for training and 5 for testing, and obtain a FAR of 13% and a FRR of 12%.
Additionally, the authors of [54] propose an implementation of a real-time text-dependent speaker identification system with recording equipment similar to that integrated in mobile devices, and with algorithms designed to consume a low amount of memory and processing power. Identification is based on MFCCs and the derived Dynamic Coefficients, with features classified using a DTW approach. They constructed a private database of 23 speakers saying the Romanian equivalent of the word "airplane". The recordings were captured using a low-cost microphone attached to a low-cost MP4 player at a sampling frequency of 8 kHz and 8 bits per sample. The authors claim a CIR of 80% on their database, answering each identification request in less than one second. The CIR increased to 96% when using Dynamic Coefficients, but the time required also rose by a factor of 3.
A similar work is introduced in [295]. In this case, the authors present a door-phone embedded system and a platform with speech technology support for recognition, identification and verification, with an emphasis on noisy environments. They use the BioTechDat database, captured from telephone speech in noisy environments in the KEYSPOT project [18]. To the best of the authors' knowledge, this database is not publicly available. The proposed algorithm models each speaker with a GMM. To minimise the influence of background sounds, they trained a single speaker-independent GMM background model, also called Universal Background Model (UBM). The voice signals are converted to MFCC features, and a cepstral mean normalization is carried out to enhance the speaker verification accuracy. This algorithm obtains an EER of 4.80%.
Another initiative to study speaker verification on mobile phones was carried out by MIT in 2006 [483]. In this work the authors presented a corpus and some preliminary experiments. The database was captured with a handheld device provided by Intel, in three environments with different noise conditions (office, lobby, street) and with two different types of microphones. Since it was recorded in noisy environments, this corpus contains the Lombard effect (speakers alter their style of speech in noisier conditions in an attempt to improve intelligibility), an effect that is missing in databases where noise is added electronically. They captured a list of phrases from 48 speakers, extracted MFCC features in speech segments and built speaker models based on GMMs. The EER obtained is 7.77% in the office scenario, 10.01% in the lobby and 11.11% in the street.
The former database was also used in [361] in a comparative study of methods for handheld speaker verification in realistic noisy conditions. In this case, the authors use Decorrelated Log Filter-Bank Energies (DLFBE) features as an alternative to MFCC features. The best algorithm evaluated implements a Wiener filter that estimates the power spectrum of the noise at the beginning of each voice sample, together with a universal compensation process using simulated noise. With this approach, they obtain an EER of 10.19%, down from 19.96% for the baseline model.
The same team of authors presented, one year later, the work in [362], where they continued working on robust speaker recognition in noisy conditions. They used office data to train the system and both office and street data to test it. They again used DLFBE features, but modelled the voices with GMMs, obtaining an EER of 6.50% in the office-office scenario and 12% in the office-street scenario.
The problems of noisy environments in text-dependent speaker identification were also studied in [301]. In this case, the authors prepared a synthetic database, adding speech and F16 noises at -5 dB, 0 dB and 10 dB Signal-to-Noise Ratio (SNR) levels to a clean database of 50 speakers and 10 Hindi digits. They compared MFCC and LFCC features with GMM classification, obtaining the best results with the former. They obtained a 96.65% speaker identification rate on the clean database; this rate drops to 88.02% (10 dB SNR), 79.42% (0 dB SNR) and 76.71% (-5 dB SNR).
A different approach for smart environments is proposed in [383], where a Multi-Layer Perceptron (MLP) is used to classify the voice samples. They use low-level features, such as intensity, pitch, formant frequencies and bandwidths, and spectral coefficients, to train an MLP for each user. In this case, they use the CHAINS corpus [128], made up of 36 speakers recorded under a variety of speaking conditions. The best MLP approach obtains an accuracy of 80%.
This corpus was also used in [218] to develop a speaker identification system using instantaneous frequencies. In this case, the authors propose an AM-FM framework to compute the instantaneous frequency of the speech signal, instead of MFCC features. Using these features on this database with a GMM classifier improves the accuracy of the system to around 90%.
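The core quantity behind an AM-FM representation is the instantaneous frequency, obtainable from the phase of the analytic signal; a minimal sketch (not the exact filter-bank framework of [218]):

```python
import numpy as np
from scipy.signal import hilbert

def instantaneous_frequency(signal, sr):
    """Instantaneous frequency (Hz) as the derivative of the unwrapped
    phase of the analytic signal obtained via the Hilbert transform."""
    analytic = hilbert(signal)
    phase = np.unwrap(np.angle(analytic))
    return np.diff(phase) * sr / (2 * np.pi)
```

In AM-FM speaker systems this is typically computed per band after a filter bank, and statistics of the resulting frequency tracks replace cepstral features.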
One relevant initiative regarding mobile speaker recognition is "The 2013 speaker recognition evaluation in mobile environment" [280], conducted in the BEAT project [8]. This is a text-independent competition, but with conversations obtained from real mobile phones. For this competition, the MOBIO database was completed with a mobile scenario, composed of face and voice samples captured from mobile phones at a sampling rate of 16 kHz. The speech segments are acquired with real noise and some of them are shorter than 2 seconds. Twelve teams took part in the competition, with the requirement of not using information from other clients at the enrolment phase; accordingly, the template of each user is composed only of his/her own voice. The ALPINEON team obtained the best results in terms of EER: around 10% for females and 7% for males [141]. The authors of [280] proposed a fusion of all 12 evaluated systems, obtaining an EER of 7% for females and 4.8% for males.
The Alpineon KC OpComm system [141], which won the competition, is made up of 9 different total variability models combined by score fusion. This kind of approach is also known in the literature as i-vector based subsystems. All the subsystems are identical but use different acoustic features: 3 different cepstral-based features (MFCC, LFCC, Perceptual Linear Prediction (PLP)) are extracted over 3 different frequency regions (0-8 kHz, 0-4 kHz and 300-3400 Hz). A gender-independent training is performed with a low-dimensional representation of the voices.
Furthermore, in [268] and [269], the authors propose a verification protocol called Vaulted Voice Verification (V3), combining text-dependent and text-independent speaker recognition. Their experiments also used the MIT database [483], obtaining an EER of approximately 6%. The model of each person is created by means of MFCC and GMM.
Finally, a prototype Android implementation running on a mobile phone has recently been presented in [94], obtaining an EER of 4.52% with a text-independent speaker recognition system based on MFCC and VQ. Additionally, the authors confirmed that different mobile devices have different parameters and therefore different performance, so they suggest a preliminary calibration step on the mobile device.
A summary of all these works is presented in Table 2.12.
According to these works, a lot of research has been carried out on voice verification systems. However, some environmental limitations have been identified, mainly related to noisy conditions.
2.6.3
Public databases for mobile speaker recognition
There are several publicly available databases for developing and improving state-of-the-art algorithms for mobile voice biometrics. As previously said, the research on voice biometrics is enormous; accordingly, the number of public databases is also large. In [353], the authors presented an overview of 36 public speech databases available before the year 2000, and this century has also produced an important number of public databases. In this review, the authors present the most relevant ones in terms of mobile phones and text-dependent scenarios, which are the most relevant for the PCAS project. No databases from before the year 2000 are presented.
The summary of all the relevant databases, according to the authors’ opinion, is presented in Table
2.13, where there is the following information for each database:
• Bibliographic reference.
• Name of the database.
• Number of people in the database, separated by males and females if this information is available.
• Number of sessions and repetitions on each session.
• Sensors used to capture the database.
• Type of speech in the database.
• Additional comments.
2.6.4
Liveness detection on mobile speaker verification
Speaker recognition has an intrinsic vulnerability: anyone can record the voice of the authentic person in order to attack the biometric system. Nowadays, there are many available devices able to record voices or even phone conversations. Furthermore, voice is a biometric characteristic that is often exposed, even more than fingerprints.
The different types of spoofing attacks on automatic speaker verification systems are well summarised in the recent work of [162]:
61
PCAS Deliverable D3.1
SoA of mobile biometrics, liveness and non-coercion detection
Publication | Database | Result | Technique | Comments
[330] | MOBIO | EER = 10.47% (male), 10.85% (female) | JFA + i-vector | 1st MOBIO Evaluation
[371] | MOBIO | HTER = 8.9% (male), 15.3% (female) | - | -
[233] | BioID | EER = 2.7% | MFCC + DTW + VQ | -
[305] | RSR2015 database | - | Hierarchical Multilayer Model | -
[304] | MyIdea database | EER = 1% | LFCC + HMM | Constrained temporal structures
[50] | 15 (HTC ONE) | EER = 0.84% (impostor does not know the passphrase), 4.11% (impostor knows) | MFCC + ANN | Same passphrase
[110] | 16 | EER = 7-8%; FAR = 13%, FRR = 12% | LFCC + DTW | Same word
[54] | 23 | CIR = 96% | MFCC + DTW | Same word (“aeroplane”)
[295] | BioTechDat | EER = 4.80% | MFCC + GMM + UBM | -
[483] | 48 (three environments) | EER = 7.77% (office), 10.01% (lobby), 11.11% (street) | MFCC + GMM | Noisy environment
[483] | 48 (three environments) | EER = 10.19% | DLFBE + Wiener filter | Lombard effect
[301] | 50 | EER = 6.50% (office-office), 12% (office-street) | MFCC + GMM; DLFBE + GMM | Lombard effect; synthetic noise added
[383] | CHAINS | CIR = 80%; CIR = 96.65% (no noise), 88.02% (10 dB SNR), 79.42% (0 dB SNR) | MLP | -
[218] | CHAINS | CIR = 90% | AM-FM | Different environment conditions
[280] | MOBIO database | EER = 7% (males), 4.8% (females) | Fusion of 12 systems | Real noise, text-independent
[141] | MOBIO database | EER = 10% (males), 7% (females) | i-vector MFCC, LFCC, PLP | Real noise, text-independent
[268] | MIT database | EER = 6% | MFCC + GMM | Fusion of text-independent and text-dependent
[94] | 18 | EER = 4.52% | MFCC + VQ | Android implementation
[361] [362] | - | - | MFCC + ISV + JFA | Session variability model; long enrolment
Table 2.12: Summary of relevant works in voice recognition
Ref | Name | #people | #sessions | Sensors | Speech | Comments
[185] | BioID | 22 | 1x10 | Camera | short phrases | -
[330] [346] | MOBIO | 152 (52F + 100M) | 12 | Nokia N93i | Response questions and free speech | Face, lips and voice; real noise and short speeches
[305] | RSR2015 | 300 (143F + 157M) | 9 | Samsung Galaxy, Samsung Nexus, HTC Desire | 30 short sentences | -
[483] | MIT | 40 (17F + 23M) | 2 | Intel | Name and free speech | Lombard effect
[152] | MyIdea | 30 (M) | 3 | No mobile | 25 sentences | -
[128] | CHAINS | 36 (12F + 16M) | 2 | Professional studio | Short fables and individual sentences | Controlled acoustic conditions; different speaking styles
[66] | BANCA | 208 (104F + 104M) | 12 | High and low quality microphones | personal information | -
[177] | BIOSEC | 200 | 2 | Headset and webcam microphone | fixed digit sequence | Multimodal (face, fingerprint, iris and voice)
[183] | Valid | 106 (30F + 76M) | 5 | Camera | fixed digit sequence | Noisy office
[202] | BIOMET | 91 (45F + 46M) | 3 | Camera | fixed digit sequence | Multimodal (voice, fingerprint, hand, signature)
Table 2.13: Public databases for mobile voice recognition
• Impersonation: Implies spoofing attacks with human-altered voices, involving mostly the mimicking of prosodic or stylistic cues rather than aspects related to the vocal tract. Impersonation is
therefore considered more effective in fooling human listeners than as a genuine threat to today’s
state-of-the-art Automatic Speaker Verification (ASV) systems [395]. The same conclusion was
reached in [333] in an experiment with a professional imitator, which provided access to the recordings
used in the experiment: they can fool human listeners but not ASV systems.
• Replay: Involves the presentation of speech samples captured from a genuine client, either as
continuous speech recordings or as samples built by concatenating shorter segments. The work
in [472] investigated vulnerabilities to replaying far-field recorded speech against
an ASV system. The authors proposed a baseline ASV system based on JFA and concluded
that, using these recordings, the equal error rate (EER) increased from 1% to almost 70%.
The same authors showed that it is possible to detect such spoofing attacks by measuring the
channel differences caused by far-field recording [471], reducing the error rates to around 10%.
However, today’s state-of-the-art approaches to channel compensation leave some systems even
more vulnerable to replay attacks.
• Speech synthesis: ASV vulnerabilities to synthetic speech were first demonstrated over a
decade ago, using an HMM-based, text-prompted ASV system and an HMM-based synthesizer
whose acoustic models were adapted to specific human speakers [340]. Experimental results
showed that the FAR for synthetic speech reached over 70% when the synthesis system was trained
with only one sentence from each genuine user; however, this work involved only 20 speakers.
Larger-scale experiments using the Wall Street Journal corpus, containing 300 speakers and two
different ASV systems (GMM-UBM and SVM using Gaussian supervectors), were reported in
[135]. Using a state-of-the-art HMM-based speech synthesizer, the FAR was shown to rise to
81%. The authors proposed a new feature based on relative phase shift to detect synthetic
speech, able to reduce the FAR to 2.5%.
The same authors complemented this work in [136] by analysing which words provide
strong discrimination between human and synthetic speech, achieving 98% accuracy in
correctly classifying human and synthetic speech.
Spoofing experiments using a single HMM-based synthetic trial against a forensic speaker
verification tool were also reported in [200], showing a serious vulnerability and the need
to include synthetic voice detection so that speech synthesizers do not become a genuine threat to
ASV.
Successful detection of synthetic speech was presented in [109], with a system based on
prior knowledge of the acoustic differences of specific speech synthesizers, such as the dynamic
ranges of spectral parameters at the utterance level and the variance of the higher-order MFCC
coefficients. Since synthetic speech is generated from HMM parameters, and the HMM training
stage can be seen as a smoothing process, the variance of the higher-order MFCCs of synthetic
speech is smaller than that of real voices, which can be exploited to detect it.
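As a purely illustrative sketch of this idea (not the implementation of [109]), the fragment below flags an utterance as synthetic when the variance of its higher-order MFCC coefficients falls below a threshold; the starting coefficient index and the threshold value are hypothetical parameters that would have to be calibrated on real data.

```python
import numpy as np

def higher_order_mfcc_variance(mfcc, start=8):
    """Mean variance of the higher-order MFCC coefficients
    (columns `start` onward) across an utterance; `mfcc` is a
    (frames x coefficients) array of precomputed MFCCs."""
    return float(np.mean(np.var(mfcc[:, start:], axis=0)))

def looks_synthetic(mfcc, threshold=0.5, start=8):
    """Flag an utterance as synthetic when the higher-order
    variance is below `threshold` (a placeholder value: real
    systems would tune it on a development set)."""
    return higher_order_mfcc_variance(mfcc, start) < threshold
```

The detector reduces to a single scalar comparison, which is what makes this countermeasure attractive for a resource-limited mobile device.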
Other approaches to synthetic speech detection use fundamental frequency (F0) statistics [311],
exploiting the difficulty of reliable prosody modelling in both unit selection and statistical
parametric speech synthesis: F0 patterns generated by the statistical parametric approach
tend to be over-smoothed, while the unit selection approach frequently exhibits ’F0 jumps’ at the
concatenation points of speech units. Results showed 98% accuracy in correctly classifying human
speech and 96% accuracy in correctly classifying synthetic speech.
• Voice conversion: Voice conversion is a sub-domain of voice transformation which aims to
convert one speaker’s voice towards that of another [454]. When applied to spoofing, the aim
of voice conversion is to synthesize a new speech signal such that the extracted ASV features are
close, in some sense, to those of the target speaker.
This type of spoofing attack has been studied in depth by the authors of [53]. In this work, they
noted that certain short intervals of converted speech yield extremely high scores or likelihoods
in ASV, even though these intervals are not representative of intelligible speech. They showed
that artificial signals optimised with a genetic algorithm provoked increases in the EER from
10% to almost 80% for a GMM-UBM system and from 5% to almost 65% for a factor analysis
(FA) system.
Two approaches to artificial signal detection were reported in [52] by the same authors.
Experimental work shows that supervector-based SVM classifiers are naturally robust to such
attacks, whereas all spoofing attacks can be detected using an utterance-level variability feature
which detects the absence of the natural, dynamic variability characteristic of genuine speech. An
alternative approach based on voice quality analysis is less dependent on explicit knowledge of
the attack but also less effective in detecting it.
A related approach to detecting converted voice was recently proposed in [51], also by the same
authors. Probabilistic mappings between source and target speaker models are shown to yield
converted speech with less short-term variability than genuine speech. A threshold on the average
pair-wise distance between consecutive feature vectors is used to detect converted voice, with an
EER under 3%.
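The consecutive-frame distance measure can be sketched as follows (a simplified illustration of the idea, not the actual method of [51]; the threshold is an arbitrary placeholder that would be tuned on development data):

```python
import numpy as np

def mean_consecutive_distance(features):
    """Average Euclidean distance between consecutive feature
    vectors of an utterance (a frames x dims array); converted
    speech tends to yield smaller values than genuine speech."""
    diffs = np.diff(features, axis=0)
    return float(np.mean(np.linalg.norm(diffs, axis=1)))

def flag_converted(features, threshold=1.0):
    # `threshold` is illustrative only; in practice it would be
    # chosen on a development set for the desired operating point.
    return mean_consecutive_distance(features) < threshold
```

Because the statistic is computed at the utterance level, it requires no knowledge of the specific conversion technique used by the attacker.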
Additionally, the authors of [486] studied how to distinguish natural from converted speech,
showing that features derived from the phase spectrum dramatically outperform MFCCs,
reducing the EER from 20.20% (MFCC) to 2.35%.
Several of these attacks can appear together, so the countermeasures of an ASV system should
address all of them at the same time.
For instance, in [447] the authors propose a CAPTCHA system that asks the user to repeat a
random sentence. The reply is analysed to verify that it is the requested sentence, that it is not a
recording, and that it was said by a human rather than a speech synthesis system. Using an
acoustic model trained on the voices of over 1000 users, their system can verify the user’s answer
with 98% accuracy and distinguish humans from computers with 80% success. The same authors
recently presented two implementations of the CAPTCHA system on mobile devices [448],
concluding that a CAPTCHA where the sentence is shown on screen and read aloud is much more
comfortable for users than one where the sentence is heard and repeated.
Another approach to detecting the liveness of users involves the fusion of voice and face recognition
systems. In this case, the movements of the lips are often used as features to detect whether the
face is talking or not [111]. In fact, not only the movement of the lips is important to detect the
liveness of the person, but also the synchrony between the speech and the lip movement, as presented
in [163]. This synchrony can be measured by the correlation of the speech energy with the mouth
openness, as proposed in [91]. Regarding this approach, there was an evaluation campaign in the
BioSecure Network of Excellence project [10], where several face-voice systems were evaluated against
different forgeries [170]. The most relevant forgery for this document is the audio replay attack, where
the impostor access uses speech from the outdoor session of the targeted speaker and video from
someone else. Under these conditions (and in the rest of the forgeries studied), even the best systems
obtained a high EER (around 30%).
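The energy/openness correlation measure can be sketched as below (our own illustration of the idea proposed in [91], not its actual implementation; frame length and thresholds are assumptions):

```python
import numpy as np

def frame_energy(signal, frame_len=400):
    """Short-time energy of an audio signal: one value per
    non-overlapping frame of `frame_len` samples."""
    n = len(signal) // frame_len
    frames = np.asarray(signal)[:n * frame_len].reshape(n, frame_len)
    return np.sum(frames ** 2, axis=1)

def av_synchrony(energy, mouth_openness):
    """Pearson correlation between per-frame speech energy and a
    mouth-openness measurement sampled at the same frame rate;
    a high value suggests the visible face produced the audio."""
    return float(np.corrcoef(energy, mouth_openness)[0, 1])
```

A replay attack combining recorded audio with someone else's video would yield a low correlation, which is exactly the cue such fusion systems exploit.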
2.6.5 Commercial applications
At present, there are some commercial applications that authenticate a user by means of their voice.
Barclays Bank ran commercial tests, claiming that 95% of users were correctly verified [75].
However, this work suggests that, in addition to the solution’s scalability and the vendor’s
reputation, organisations should also look for:
• Algorithms sophisticated enough to work around problems such as crosstalk and background
noise.
• Anti-spoofing safeguards and liveness detection, to become aware of changes of speaker or
playback of recordings.
• Automated fraudster detection to build fraudster databases and detect malicious individuals as
they interact with a smartphone.
At present, several companies offer voice biometric solutions. The most important, to the
best of the authors’ knowledge, are cited in [359]:
• Auraya Systems [4]: Their voice solutions have been implemented in New Zealand Banking
and Government services.
• Nuance Communications[22]: Their voice biometric solutions are based on repeating a sentence (text-dependent) and also on free speech. This company deploys the Barclays Bank solution.
• Agnitio S.L.[2]: Provides a voice biometric method that can be deployed on any device. It includes
spoofing detection methods, and provides commercial and governmental solutions.
• SESTEK[27]: This company implemented a voice authentication method based on passphrases
in DenizBank’s call centre, the first voice verification project in the Turkish banking industry.
• Speech Technological Centre[28]: The Criminal Investigations Unit of the Nepalese Police
has selected their technology to handle its audio forensic and voice identification needs. They
also provide an authentication method based on face and voice recognition.
• ValidSoft[32]: They provide patented voice-based out-of-band authentication and transaction verification, as well as a secure mobile authentication method for real-time transactions.
These technologies have been deployed in several UK banks.
• Voice Biometrics Group[33]: They provide a core voice biometric technology that can be
applied to many use cases, including banking, mobile, time and attendance, etc.
• Voice Trust[34]: They implement a verification system to authenticate the identity of customers and employees, and claim to have more than 500 customers around the world.
• VoiceVault[35]: They provide scalable voice solutions in the healthcare and banking fields in
40 countries.
2.6.6 Conclusion
There are many works in the state of the art regarding voice biometrics. In general, the adaptation of
speaker verification systems to mobile devices concerns the capture sensor, since a mobile
phone microphone is often used. However, few works attempt to carry out the whole
verification process on the device; usually, voices are sent to a central authentication server where
the speech signals are analysed.
There are two main types of speaker verification systems: text-dependent systems, which are
simpler and achieve better performance, and text-independent systems, which are much better in
terms of liveness detection.
At present, most speaker verification systems can be spoofed by replay attacks, voice conversion
and speech synthesis, unless a dedicated liveness detection module is implemented in the system.
This module is usually implemented through text-independent mechanisms, which require a long
time of conversation to work properly.
Several companies provide commercial solutions related to speaker verification systems. Most of
them use the mobile phone to capture the voice, but the verification process is not carried out on
the device. In addition, it has been shown in this document that the error rates of this technique
are not so low, the time consumed is high, and it is quite easy to forge with spoofed samples.
However, this technology is very useful in phone banking to verify an action carried out
during a phone conversation, improving the current methods of verifying people on a phone call
(which usually ask for passwords or personal information) and giving people a greater feeling of
security. At present, this is the main application of this technology, where the voice is captured
from the mobile device and processed in an external authentication server.
According to the review presented in this section, the following advantages of using voice biometrics
in standalone mobile phones have been found:
• All the mobile phones already have a microphone to capture voices.
• A natural and comfortable way of communication, especially on mobile phones.
• Promising performance results.
• If used while asking a call centre for information, it can be transparent to the user.
However, the following limitations or disadvantages have been also noticed:
• It is heavily affected by noise.
• A person’s voice can change depending on sickness, the time of day or aphonia.
• The most comfortable systems, and those with the best performance, are based on passphrases, but
the FAR increases considerably when impostors know the sentence.
• Text-independent systems require long conversations for training and access, and processing these signals is demanding.
• Voice is quite easy to capture and replicate. There are many possible attacks on these systems,
and countermeasures do not yet work well enough.
2.7 Iris recognition
The iris is the coloured region of the eye that controls the size of the pupil and therefore the amount of
light that reaches the retina.
The pigmented layer of the iris is known as the stroma. The stroma is a fibrovascular layer of tissue
connected to the sphincter muscle, which contracts the pupil.
Figure 2.4: Iris. (Left) Iris picture from CASIA database under IR wavelength. (Right) Iris picture
from NICE1 database under visible wavelength.
The pattern generated by the fibres of the stroma layer is considered different for each person;
even twins, and the two eyes of the same person, have different patterns. This pattern is used in
biometric systems to recognize people.
2.7.1 Introduction
Figure 2.5: Number of Iris Biometrics publications till 2013, searching “iris biometrics” and “iris
biometrics mobile phone” into Google Scholar.
Iris biometrics is an expanding field, as depicted in Figure 2.5, which shows that
the number of publications keeps growing, reaching 337 in 2012. Research in iris
biometrics has to solve multiple fundamental issues in order to improve the applicability of this
technique in real environments.
The survey [89] covers the literature produced from the origin of this technique until 2007; the
period from 2008 to 2010 is covered by [90]. In these two surveys, Bowyer divides the
advances into the processes carried out in a typical iris biometric system: Image Acquisition,
Iris Segmentation, Feature Encoding and Matching. Sheela’s survey focuses on methods to extract
the iris pattern for iris recognition [445]. In his survey [228], Hansen explores different methods for
eye detection and gaze estimation.
Let us summarise the state of the art up to 2010 and extend it to 2013.
The use of the iris for person identification was first introduced in 1885 by an ophthalmologist named
Alphonse Bertillon [76]. The first patent was filed in 1987 by Leonard Flom and Aran Safir,
who offered a conceptual design but not a system implementation [180].
In 1992, Johnston studied the feasibility of using the iris pattern to identify people (in
verification and identification scenarios). He studied 650 people over 15 months and concluded that
the iris pattern remains unchanged over this period [270].
In 1994, John Daugman’s patent [133] and early work [132] described a system for iris recognition.
The integro-differential operator proposed by Daugman in his patent has become a mainstay in the
field; indeed, most commercial iris systems are based on this patent.
From 1996 to 1998, Wildes used a binary edge map and a Hough transform to detect circles and
accomplish iris recognition [479, 478, 480]. In his patents [479, 480], Wildes proposed an acquisition
system based on “a diffuse source and polarization in conjunction with a low light level camera”.
In 2001, the United Arab Emirates started to use an iris recognition procedure for foreigners entering
the country. Other places, such as Amsterdam and the UK, also started to introduce this system in their airports.
In order to build more flexible systems, Sarnoff Labs created cameras in 2005 to capture “iris-on-the-move” and “iris-at-a-distance” [341]. There are studies using different distances: beyond 1
meter [342], beyond 1.5 meters [477] and up to 3 meters [341, 149].
On these issues, the work by Proenca et al. is noteworthy, as it covers most of the advances in
iris recognition based on visible wavelength and non-cooperative environments [412, 414, 415, 413, 411].
In 2010, the Indian Government started the Aadhaar [1] identification project, whose main goal
is to assign a national identification number to every resident of India. The enrolment of all citizens
(about 1.2 billion) was expected to be completed by February 28, 2014. The biometric samples obtained consist
of two irises, ten fingerprints and a facial photograph.
The Noisy Iris Challenge Evaluation (NICE) is a competition focused on performing iris biometrics on
visible wavelength pictures. In 2007, NICE - Part I focused
on iris segmentation; the best results are published in the special issue [40]. In 2010, NICE - Part II
“focused on performance in feature extraction and matching” [87]; the best results are published in
the special issue [41].
Cardoso et al. [99] developed software named NOISYRIS, able to simulate iris acquisition
under different light sources, iris occlusions (eyelids, eyelashes and glasses), motion, and so on.
2.7.2 Template aging
Biometric template aging was defined by Mansfield and Wayman [326] as follows: “Template ageing
refers to the increase in error rates caused by time related changes in the biometric pattern, its
presentation, and the sensor.”
Since the beginning of iris biometrics research, the immutability of the iris pattern was taken
as a fact by many authors [132, 133, 478, 230].
The eye and the iris are subject to different changes with age [482, 80, 57], but changes in
iris patterns were not measured until 2008, by Tome-Gonzalez [464].
Tome-Gonzalez et al. [464] used two 4-month time-lapse databases, BioSec and BioSecurID,
to study how time affects the iris template. The authors concluded that template aging causes a
degradation of the FRR, while the FAR does not change.
Baker et al. [69] used a 4-year database to study the mean Hamming distance (HD) of match
scores for long time lapses (LT) and short time lapses (ST), showing that the HD for LT is larger
than for ST. Baker et al. conclude that the FRR increases by 75% with time.
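The fractional Hamming distance underlying these studies can be sketched as follows (a generic illustration of Daugman-style iris-code matching with occlusion masks, not the code used in [69]):

```python
import numpy as np

def fractional_hamming_distance(code_a, code_b, mask_a=None, mask_b=None):
    """Fraction of disagreeing bits between two binary iris codes,
    counted only where both occlusion masks mark the bits as valid.
    Smaller values mean more similar irises; template aging shows
    up as a slow drift of genuine-match distances upwards."""
    code_a = np.asarray(code_a, dtype=bool)
    code_b = np.asarray(code_b, dtype=bool)
    mask_a = np.ones_like(code_a) if mask_a is None else np.asarray(mask_a, dtype=bool)
    mask_b = np.ones_like(code_b) if mask_b is None else np.asarray(mask_b, dtype=bool)
    valid = mask_a & mask_b
    return float(np.count_nonzero((code_a ^ code_b) & valid)) / int(valid.sum())
```

With a fixed decision threshold on this distance, any upward drift of the genuine distribution directly translates into the FRR increases reported above.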
Fenker et al. [172, 173] studied in depth how the FRR changes with time while the FAR is
not affected. In [172], they studied the increase of the FRR over a long time lapse when a threshold
is fixed, and in [173], they carried out a longitudinal study of the increase in FNMR.
Baker et al. [438] compared three different algorithms to see how template aging affects
their performance. They noted that the FRR increases in all of them, with Cam-2 being the
best-behaved algorithm.
Ellavarason et al. [155] compared six different feature extraction algorithms implemented in USIT
(University of Salzburg Iris Toolkit). They conclude that the best behaviour
corresponds to the algorithm proposed by Ma et al. [322].
Article | Database | Info | Iris recognition method | Conclusion
Tome-Gonzalez et al. [464] | BioSec [178], BioSecurID [256] | 200 subjects, 2 sessions (BioSec); 254 subjects, 4 sessions (BioSecurID); 1-4 weeks between sessions | Libor Masek [337], [338] | No effect on FAR; FRR increases more than twice
Baker et al. [69] | - | 43 subjects, 4 years, weekly during academic semester | IrisBEE + 1D-Log-Gabor | Changes detected in the Hamming distance threshold for matching irises
Fenker et al. [172] | - | 23 subjects, time lapse 2008-2010 | IrisBEE modified version [403], VeriEye [39] | In-depth study of the increase in FRR with different thresholds
Fenker et al. [173] | - | 322 subjects, time lapse 2008-2011 | 2 commercial systems | Study of the FNMR increase at 1-, 2- and 3-year time lapse
Baker et al. [438] | ND-IrisTemplate-Aging2008-2010 | 2 sessions: 2008 and 2010 | IrisBEE modified version [403], VeriEye [39], Cam-2 [403] | FRR increase
Ellavarason et al. [155] | NIST ICE 2006 [38] [402] [403] | 13 subjects, 120 days between sessions | Six different methods in USIT [31] | FRR increase
Table 2.14: Iris template aging
2.7.3 Relevant works on mobile iris technique
Nowadays, the computational power of mobile phones has grown from 200 MHz processors in 2005
to 1.5 GHz quad-core processors with 2 GB of RAM in 2013. This improvement makes it possible
to adapt classical algorithms, or to develop new ones, for biometric identification on mobile devices.
Iris biometrics can be divided into two branches. The first gathers those authors using the
infrared (IR) wavelength in order to extract the rich iris structure more easily [132, 479]; the second
is for those authors using the visible wavelength (VW) [412, 40, 41].
Most of the work on iris biometrics with mobile devices has been done at IR wavelengths.
Cho et al. [238] use a halogen lamp and an IR pass filter to acquire iris images. They use a
binarisation process to locate corneal specular reflections and the pupil. To find the iris boundary,
they use a Modified Edge Detector that measures the difference between ten points around a given
radius and ten points around a larger radius. They claim that this method obtains results similar
to Daugman’s integro-differential operator in less time. They improved their Modified Edge
Detector in [239] to perform better in indoor and outdoor environments; to do so, they focus the
boundary search at specific angles where occlusion of the iris by the eyelids and eyelashes is less probable.
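A toy version of such a ring-difference detector is sketched below (our own illustration based on the description above, not Cho et al.'s implementation; the pupil centre is assumed to be known, and nearest-pixel sampling is used for simplicity):

```python
import numpy as np

def ring_mean(img, cx, cy, r, n_points=10):
    """Mean gray level at `n_points` samples on a circle of
    radius r centred at (cx, cy), using nearest-pixel lookup."""
    angles = np.linspace(0.0, 2.0 * np.pi, n_points, endpoint=False)
    xs = np.clip(np.round(cx + r * np.cos(angles)).astype(int), 0, img.shape[1] - 1)
    ys = np.clip(np.round(cy + r * np.sin(angles)).astype(int), 0, img.shape[0] - 1)
    return float(img[ys, xs].mean())

def find_boundary_radius(img, cx, cy, radii, dr=2):
    """Return the candidate radius with the largest brightness jump
    between an inner ring and an outer ring of ten sample points,
    in the spirit of the ring-difference detector described above."""
    diffs = [abs(ring_mean(img, cx, cy, r + dr) - ring_mean(img, cx, cy, r))
             for r in radii]
    return radii[int(np.argmax(diffs))]
```

Because only twenty pixel lookups are needed per candidate radius, such a detector is far cheaper than integrating full contours, which matches the speed claim made for the original method.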
Jeong et al. [264] use an IR illuminator and an IR pass filter to take the iris image. They use the
algorithm proposed by Cho et al. [239] to detect the iris boundaries, and complete the segmentation
by detecting the eyelid and eyelash regions. They propose an “Adaptive Gabor Filter based on the
measured image brightness and focus value”, obtaining an EER of 0.14%.
Park et al. [391] search for the eye region in a face picture using a modified AdaBoost method based
on corneal specular reflections. They use the edge detector proposed by Cho et al. together with
eyelid and eyelash detection. Iris features are extracted by dividing the polar iris representation into
eight tracks and 256 sectors, first applying a 1D Gabor filter and then a 1D Gaussian filter to extract
the gray level of each sector.
Kurkovsky et al. [302] briefly introduce an adaptation of a classical iris recognition algorithm,
based on threshold pupil localisation and on edge detection with a Hough transform for iris boundary
detection.
Lu et al. [247] use an “EyeCup” to achieve the same iris-to-camera distance and the same illumination
conditions for all collected irises. They analyse the histogram to find the pupil, iris and
sclera areas, and then apply a pixel-oriented method for iris boundary detection.
Mobbeel [19] offers a commercial product implementing a client-server iris recognition solution
where “the server receives the sample taken by the client”. They state that “Mobbeel never stores
the biometric templates obtained from users so there is no risk of a user’s credentials being
compromised”.
OKI Electric Industry [17] develops a technology based on “OKI’s original iris recognition algorithm
using standard optical cameras that are equipped in mobile terminals”.
Article | Mobile & HW | Wavelength | Segmentation | Feature extraction | Matching | Results
Cho et al. [238] | SPH-S2300 + halogen lamp + IR pass filter | IR | Modified Edge Detector, threshold | - | - | Error performance similar to Daugman’s method; quicker than Daugman’s method
Cho et al. [239] | SPH-S2300 + halogen lamp + IR pass filter | IR | Improvement of iris and pupil search | - | - | Method improved for indoor and outdoor environments; pupil 99.5% (non-glasses), 99% (glasses); iris 99.5% (non-glasses), 98.9% (glasses)
Jeong et al. [264] | SPH-S2300 + IR-illuminator + IR pass filter | IR | Cho et al. [238] | Adaptive Gabor filter | Hamming distance | EER 0.14%
Park et al. [391] | SPH-S2300 + dual IR-LED + IR pass filter | IR | Eye region search by AdaBoost based on specular reflection; Cho et al. [238] | Division of polar image into 8 tracks and 256 sectors; 1D Gaussian | Hamming distance | EER 0.05%
Kurkovsky et al. [302] | - | - | Pupil detection threshold; edge detector + Hough transform | Division of polar image into 8 tracks and 32 sectors; 1D Gabor filter | Template matching | FAR 0.13%
Lu et al. [247] | Samsung Ericsson P800 + EyeCup | VW | Histogram analysis; pixel-oriented method | - | Hamming distance | EER 3.5%
Table 2.15: Summary of relevant works in mobile iris biometrics
Iris biometrics on mobile devices has to deal with the problem of “non-controlled conditions”,
meaning that the end user can use this technology at any time and anywhere. These non-controlled
conditions add several problems to the technique: image quality, indoor and outdoor conditions, specular
reflections, blurring and so on. Others must be added, such as battery consumption, template safety
(mobiles can be lost or stolen), liveness detection and non-coercion.
2.7.4 Liveness detection on mobile iris recognition
To the best of the authors’ knowledge, there is no work on this issue. At the moment, researchers are
focused on improving iris segmentation and feature extraction methods using IR and visible
wavelengths.
2.7.5 Conclusion
At the moment, there is not much work on this issue. That is because the best results in iris
recognition systems are obtained at IR wavelengths, which forces the use of additional hardware in
mobile devices and makes this technology less attractive for the end user.
In addition, most commercial mobile phones have a good rear camera but a poor-quality front
camera, when available, forcing the use of the rear camera and making image capture
uncomfortable for the user.
In order to increase user acceptability, improvement of iris recognition methods at visible
wavelengths is needed, as well as better front cameras. Furthermore, combining iris recognition with
other biometric techniques, such as face recognition, could increase the system’s performance.
Consequently, using the iris in mobile phones presents the following advantages:
• It provides very competitive performance rates when used in controlled situations.
• It is widely accepted that the iris can be used to authenticate people.
• Template aging does not affect the performance rate, since the template can be constantly
updated.
However, the following disadvantages or limitations of the iris in mobile phones have been found:
• It requires a high-quality front camera.
• It requires an IR camera to obtain the best accuracy; approaches that use visible light cameras
do not yet work properly.
• It presents limitations related to environmental conditions: lighting and specular reflections.
• It must deal with “non-controlled conditions” such as blur and image quality.
2.8 Gait recognition
In biometrics, the term “gait” is used to describe a particular manner or style of walking which is
distinctive for each individual. Although gait shows a common pattern for everybody, it also presents
interpersonal differences which make individual identification possible. This fact may be observed
in our ability to recognize a person only by observing his/her gait.
The state of the art of gait recognition technologies is introduced in section 2.8.1, presenting an
overview of the biometric technique and the challenges of applying it in mobile devices.
Next, section 2.8.2 presents a summary of the public databases of gait biometric samples used in
the literature to evaluate works in this field. The most relevant works are gathered in section
2.8.3. Finally, some conclusions on the technique are presented in section 2.8.4.
2.8.1
Introduction
In previous literature, different sensors have been used to capture the human gait. Many works
[73, 129, 380] proposed extracting the movement of the lower limbs from images recorded by a camera. In
other works [358, 263], the authors extracted the movement of the legs by using pressure sensors
incorporated into the floor of the room. However, both approaches limit the acquisition to specific
indoor environments. In order to alleviate this restriction, some authors have proposed using
wearable sensors which allow the subject to move freely. In particular, due to the miniaturization of
inertial sensors, accelerometers have been used extensively to capture these body movements [48, 465].
Nowadays, most smartphones incorporate an accelerometer to rotate the screen when the phone
orientation changes, so gait identification using these sensors seems appropriate as a biometric
technique for mobile phones. Furthermore, the newest smartphones include gyroscopes, so the
measurements from these sensors could also be used to complement the signals captured by the
accelerometers.
The main advantage of this biometric technique is its unobtrusiveness, since it allows performing continuous authentication of the user without bothering him/her. Most authors agree that the main application of this technique is to detect whether a mobile device has been stolen, by detecting changes in the gait signals in order to lock the device. However, other authors have proposed to use gait identification to activate different profiles in a shared device depending on who the current user is [498].
The main problem of using a mobile phone to capture the gait is where to place it. People may wear their mobile phones in their pockets (chest or leg), attached to their belts or even inside a carrying bag. Depending on where the mobile phone is worn, the gait signals are completely different. In former works, the authors placed an accelerometer on the back [440, 309] or on the chest [460] in order to classify the different activities performed by a subject. Although these positions make it possible to differentiate among several activities, they are not adequate to identify subjects, as the movements captured in these positions are similar among most individuals. Furthermore, these locations are not comfortable for the user, since they are not the typical places to wear a mobile phone. The first work that analysed the acceleration of the gait as a biometric technique was performed by Ailisto and Mäntyjärvi et al. in [48]. They performed an experiment over 36 subjects by placing the accelerometer at their waists. This is an interesting position since it is close to the Centre of Gravity (COG) of the user, so the accelerations measured at this place summarize the accelerations of the whole body. Other authors have proposed to place the sensor at the hip [191, 469, 241, 364]; however, signals captured at this place are not well–balanced, since the sensor is closer to one leg than to the other. Although other authors [188, 60] have tried to identify subjects from the accelerations of their ankles, capturing these data requires attaching a sensor to the ankle or wearing a shoe with specific sensors. The work presented in [389] shows that measuring the accelerations at several body parts considerably increases the identification performance; however, this implies that the user must wear many sensors, which could be uncomfortable.
Although these initial works used dedicated hardware to capture the gait signals, recent works use real mobile phones, so they propose to place the sensors where users usually wear their phones. In [273, 240, 303, 271], the authors conducted experiments in which the users wore the mobile phone in their trouser pocket. Other works have proposed to attach the mobile phone to the belt of users in order to measure their gait signals at the hip [372, 68, 378] or the waist [452, 145]. Lastly, since another possible location of the mobile phone is inside a bag, other authors have analysed the feasibility of recognizing the gait when the mobile phone is carried in a bag [469]; however, this study was not performed using a real mobile phone.
Some previous works using an accelerometer placed at the COG [277, 294] have described that the vertical and anteroposterior accelerations repeat a discernible pattern that consists of two quasi–sinusoidal signals. These signals are produced by the typical swinging of the pelvis in both directions during the gait cycle. Therefore, these sinusoidal signals present the same frequency, though there is a phase shift between them. The mediolateral acceleration, in contrast, presents a monophasic pattern, since it depends on which limb is lifted. Furthermore, in contrast to the other accelerations, the authors of [58] have remarked on the difficulty of finding a common pattern in this acceleration for all subjects. This means that the mediolateral acceleration is user-dependent, so it may be crucial when identifying people.
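The quasi–sinusoidal pattern described above is what makes cycle-based analysis possible: each stride produces a prominent acceleration peak, and the signal can be split into individual gait cycles at those peaks. The following sketch illustrates one simple way to do this; it is not taken from any of the cited works, and the function name, prominence threshold and minimum cycle duration are illustrative assumptions:

```python
import numpy as np

def segment_gait_cycles(vertical_acc, fs=100.0, min_cycle_s=0.7):
    """Split a vertical-acceleration signal into gait cycles.

    Strides show up as prominent peaks; a cycle is the span between
    consecutive peaks that are at least `min_cycle_s` apart (a crude
    refractory period that rejects secondary local maxima).
    """
    x = np.asarray(vertical_acc, dtype=float)
    x = x - x.mean()                  # remove the gravity/DC component
    thr = 0.5 * x.std()               # crude prominence threshold
    min_gap = int(min_cycle_s * fs)   # minimum samples between peaks

    peaks = []
    for i in range(1, len(x) - 1):
        if x[i] > thr and x[i] >= x[i - 1] and x[i] > x[i + 1]:
            if not peaks or i - peaks[-1] >= min_gap:
                peaks.append(i)
    # each pair of consecutive peaks delimits one gait cycle
    return [x[a:b] for a, b in zip(peaks, peaks[1:])]
```

A real system would use a more robust peak detector and an adaptive threshold, but the structure of the segmentation step is the same.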
Several techniques have been proposed to discriminate among the gait signals of different individuals, but in general two types of approaches may be identified. On the one hand, some authors propose to apply time windows to the whole gait signal in order to extract statistical or frequency features [327, 378, 68, 240]. The other approach, followed by most researchers, consists in dividing the signal into gait cycles and then comparing the gait cycles separately. Based on this segmentation, several techniques have been applied to discriminate among individuals. In [191], the authors extracted the walking cycle length and some histogram statistics from the gait accelerations of each subject. The work presented in [48] compared the identification results obtained by these histogram statistics with those obtained from the correlation between walking cycles.
Other works have proposed to generate templates of the gait cycles from the data captured during the enrolment phase. In [192, 196], the authors averaged all the enrolment steps to create a gait template. Usually, the amplitude and length of each step are normalized using linear interpolation, in order to produce a template independent of variations in the speed and amplitude of the signals. After creating these templates, they are compared with gait signals using different metrics. Some authors proposed to use the Euclidean distance [241, 190] or the absolute distance [195], but other authors propose to use Dynamic Time Warping (DTW), since it is able to deal with non-linear time variability [426, 427, 465, 372, 65]. Other techniques not based on metrics have also been proposed; for instance, in [271] the authors used time-delay embeddings and in [452] they applied PCA and SVM to find the best features to discriminate among individuals.
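The template pipeline described above (length and amplitude normalization by linear interpolation, followed by an elastic comparison) can be sketched as follows. This is an illustrative implementation of plain DTW, not the exact variant used in any of the cited works:

```python
import numpy as np

def normalize_cycle(cycle, length=100):
    """Resample a gait cycle to a fixed length with linear interpolation
    and rescale its amplitude to zero mean and unit variance."""
    c = np.asarray(cycle, dtype=float)
    c = np.interp(np.linspace(0, len(c) - 1, length), np.arange(len(c)), c)
    return (c - c.mean()) / (c.std() + 1e-12)

def dtw_distance(a, b):
    """Plain O(n*m) dynamic-time-warping distance between 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible warping moves
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

At verification time, a probe cycle is normalized the same way and its DTW distance to the enrolment template is compared against a decision threshold.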
Finally, many works have been devoted to identifying and analysing those factors that may affect gait identification performance. Some of them analyse the differences between gaits of the same user captured on different days [452, 378] or even in different environments [241]. In [60], the authors affirm that the gait of the same person at different speeds can be as different as the gait of another person. Nevertheless, other works have proposed to generate several templates at different speeds in order to alleviate this problem [372], or even to create an average template for all speeds [65]. Several authors have also remarked on the great differences in the gait of the same individual when wearing different kinds of shoes [60, 194] or carrying a backpack [195]. Furthermore, some authors have conducted several experiments in order to evaluate the robustness of gait authentication against spoofing attacks based on mimicking the gait of other people [363, 190].
2.8.2 Public databases for mobile gait recognition
Although there are many public databases of gait biometrics based on vision, the number of public datasets based on wearable sensors is quite small. The reason for this could be that this biometric technique is still in its early stages. However, as can be observed in Table 2.16, which gathers the relevant works in gait biometrics, there exist many private databases which have been used by the different research groups to test their own algorithms, as in [193], where the authors conducted an experiment over a large number of subjects (100 subjects: 70 males and 30 females).
Recently, the Institute of Scientific and Industrial Research of Osaka University (OU-ISIR) has released the largest inertial sensor-based gait database [465], composed of 744 subjects (389 males and 355 females) aged from 2 to 78 years. This large database was captured using three dedicated sensors placed at the waist and both hips while the subjects were walking along a flat path and two up and down slopes. The Multimedia University of Malaysia has also recently made publicly available a new multimodal biometric database (MMU GASPFA) [235], which includes information about gait (GA), speech (SP) and face (FA) of 82 participants (67 male and 15 female). This database was captured using commercial off-the-shelf (COTS) equipment; concretely, the gait was captured with a mobile phone carried inside a hip pouch. Lastly, there is another public database, collected at McGill University by Jordan Frank for the work [271]. However, this dataset is composed of far fewer subjects than the previous ones, with 20 individuals performing two separate 15-minute walks on two different days. These data were captured with a mobile phone using the HumanSense open-source Android data collection platform.
2.8.3 Relevant works on mobile gait recognition
Although there are several works [364, 240, 189, 145] that summarize the state of the art of gait authentication based on wearable sensors, the most complete review for this technique may be found in Claudia Nickel’s thesis [377]. Since that review only covered the years from 2005 to 2010, we have updated and complemented it in order to include new works that appeared in scientific publications up to 2013 and to provide additional information about the conducted experiments.
Table 2.16 summarizes the main works related to gait authentication based on wearable sensors. For each work, the following information is presented:
• Publication. Reference to the publication in which the presented work appeared.
• Sensor. Type of sensor used to capture the gait data: Accelerometer (A) or Gyroscope (G). Since most of the experiments used dedicated sensors, we also report when the sensors were embedded in a mobile phone (P).
• Position. Body part where the sensor was placed to capture the gait signals.
• Subjects. Number of subjects participating in the experiment.
• Scenarios. This column is divided into three subcolumns. The first subcolumn indicates whether the training and test data were collected on the same day (s), on different days (c), or mixing several days (m). The second subcolumn shows whether the subjects of the experiment were asked to walk at their normal speed (n) or at several speeds (v). Lastly, the third subcolumn distinguishes whether the experiments were performed in a controlled or a realistic environment:
– (c). The experiment was conducted in a controlled environment, for example walking a fixed distance along a corridor.
– (u). The experiment was conducted in an uncontrolled environment, for example walking on the street.
– (b). The experiment was conducted in a controlled environment, but the subjects were carrying a backpack.
– (r). The itinerary of the experiment consisted of walking on different surfaces or ramps.
– (s). In the different repetitions of the experiment, the subjects wore different types of shoes.
– (i). In the experiment, some people tried to imitate the gait of others.
• Technique. Classification technique used to distinguish among the gaits of the users.
• Result. Best result obtained in the paper, without taking into account the different scenarios. This result is usually expressed by means of the EER, CCR or GMR.
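Since most results in Table 2.16 are reported as EER values, it may help to recall how the EER is obtained from a set of genuine and impostor comparison scores. The sketch below is an illustrative brute-force search over thresholds, assuming similarity scores (higher means more likely a genuine match):

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """EER: sweep every observed score as a threshold and return
    (FAR + FRR) / 2 at the point where the two rates are closest."""
    genuine = np.asarray(genuine, dtype=float)
    impostor = np.asarray(impostor, dtype=float)
    best_gap, eer = float("inf"), 0.0
    for t in np.sort(np.concatenate([genuine, impostor])):
        far = np.mean(impostor >= t)   # impostors wrongly accepted
        frr = np.mean(genuine < t)     # genuine users wrongly rejected
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2.0
    return eer
```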
Table 2.16: Relevant works in gait authentication for mobile phones

Publication | Sensor | Position | Subjects | Scenarios | Technique | Result
Ailisto [48], 2005 | A | Waist at back | 36 | c/v/c | Correlation | 6.4%
Mäntyjärvi [327], 2005 | A | Waist at back | 36 | c/v/c | Frequency Analysis | 7%
Vildjiounaite [469], 2006 | A | Breast, hip and suitcase | 31 | c/v/c | Correlation and Frequency | 13.7%
Gafurov [188], 2006 | A | Ankle | 21 | s/n/c | Histogram similarity | 5%
Gafurov [191], 2006 | A | Hip | 22 | s/n/c | Cycle length | 16%
Rong [427], 2007 | A | Waist at back | 21 | c/n/c | DTW | 5.6%
Gafurov [195], 2007 | A | Trouser pocket | 50 | s/n/b | Absolute Distance | 7.3%
Rong [426], 2007 | A | Waist at back | 35 | c/n/c | DTW | 6.7%
Vildjiounaite [467], 2007 | A | Breast pocket and hip | 32 | c/n/c | Correlation and Frequency | 13.7%
Gafurov [190], 2007 | A | Hip | 100 | s/n/i | Euclidean Distance | 13%
Holien [241], 2007 | A | Hip | 25 | s/n/r | Euclidean Distance | 18%
Gafurov [194], 2008 | A | Ankle | 30 | s/n/s | Euclidean Distance | 5.6%
Gafurov [197], 2008 | A | Arm | 30 | s/n/c | Frequency Analysis | 10%
Gafurov [189], 2009 | A | Foot, hip, pocket and arm | 30 | s/n/s | Euclidean Distance | 5%
Sprager [452], 2009 | AP | Waist | 6 | c/v/c | PCA and SVM | 90.3% (CCR)
Mjaaland [364], 2009 | A | Hip | 50 | s/n/i | DTW | 6.2%
Bächlin [60], 2009 | A | Ankle | 5 | m/v/sb | Features, DTW and Frequency | 21.3%
Gafurov [196], 2010 | A | Ankle | 30 | s/n/s | Cycle Matching | 1.6%
Wang [488], 2010 | A | Waist at back | 24 | s/n/c | Wavelet and DTW | 5%
Frank [271], 2010 | AP | Trouser pocket | 25 | s/n/c | Time-Delay embeddings | 100% (CCR)
Kwapisz [303], 2010 | AP | Trouser pocket | 5 | s/n/c | Statistical Features | 100% (CCR)
Derawi [144], 2010 | A | Waist | 60 | m/n/c | DTW | 5.7%
Derawi [145], 2010 | AP | Waist | 48 | m/n/c | DTW | 20.1%
Yan [490], 2010 | A | Waist at front | 10 | s/n/c | Wavelet | 6.29%
Gafurov [193], 2010 | A | Hip | 100 | s/n/c | Cycle Matching | 7.5% ??
Mjaaland [363], 2010 | A | Hip | 50 | s/n/i | DTW | 6.2%
Nickel [378], 2011 | AP | Hip | 48 | scm/n/c | Frequency and Statistical Features | 5.9% (FMR)
Bajrami [68], 2011 | AP | Hip | 45 | c/v/s | Statistical Features | 15.48%
Trung [465], 2012 | AG | Waist at back | 736 | s/n/r | DTW | 8.8%
Muaaz [372], 2012 | AP | Hip | 48 | c/v/r | DTW | 24.81%
Juefei-Xu [273], 2012 | AGP | Trouser pocket | 28 | s/v/c | Wavelet | 3.6%
Bailador [65], 2013 | A | Waist at back | 34 | s/v/c | DTW | 7.8%
Hoang [240], 2013 | A2P | Trouser pocket | 14 | s/n/c | Frequency and Statistical Features | 91.33% (CCR)
Zhang [498], 2013 | AG | Thorax, ankle and belt | 20 | s/v/c | Correlation | 99.3% (CCR)
As can be seen in the previous table, there are many different authors; however, many of them belong to the same research institutes. Concretely, these are the main research teams currently working on gait authentication for mobile devices:
• VTT Technical Research Centre of Finland [36]: H. Ailisto, J. Mäntyjärvi and E. Vildjiounaite.
• Norwegian Information Security Lab, Gjøvik University College [21]: G. Bajrami, P. Bours, M. Derawi, K. Holien, D. Gafurov, B. Mjaaland, E. Snekkenes and C. Nickel (Center for Advanced Security Research Darmstadt).
2.8.4 Conclusion
Although gait authentication based on inertial sensors is a relatively new biometric technique, since the first papers appeared in 2005, there have been many relevant works showing its viability for identifying people. Experiments conducted in controlled environments have achieved high accuracies; however, these experiments were performed in ideal conditions, usually on the same day, on a flat path, and using sensors that were always placed in the same position.
Some works have identified several factors that may affect the performance of this technique when used in more realistic conditions, such as different gait speeds, walking over slopes or other surfaces, wearing different types of shoes, placing the sensor in different orientations or even carrying a backpack. These factors produce a high intra-variability in the gait of each subject that drastically decreases the performance of this biometric technique. Therefore, this lower performance in realistic conditions makes this technique appropriate for non-critical security problems. Since the main application of mobile gait authentication is to detect whether a mobile device has been stolen by detecting changes in the gait, this lower performance will mean more false rejections, i.e., the mobile phone will get locked even if the user is merely walking in a slightly different way. Lastly, another possibility for incorporating this technique in a final mobile authentication system could be to use it to complement more accurate biometric techniques.
Consequently, using gait authentication in mobile phones presents the following advantages:
• Gait recognition is unobtrusive for the user, since he/she does not have to perform any specific action to authenticate, apart from walking.
• It performs continuous authentication while the person is walking.
• Most smartphones include the sensors needed for gait authentication (accelerometers or gyroscopes).
However, the following disadvantages or limitations of gait authentication in mobile phones have been found:
• The user should always wear the mobile phone in the same position, since the signals captured at different parts of the body are quite different.
• Wearing different types of shoes may affect the performance of this technique.
• Walking at different speeds, on slopes or over different surfaces may also affect the performance.
• In real conditions, the system may produce many false rejections, so the user may be burdened with these authentication errors.
2.9 Fusion of biometrics
2.9.1 Introduction
As seen in previous sections, there is no biometrics-based system that can guarantee 100% identification rates, nor 0% FAR or FRR. This is due to the fact that the biometric traits of some individuals do not fulfil two main desirable properties: distinctiveness of a biometric trait (which concerns the FAR) and its permanence (which affects the FRR).
On top of the situations in which the subject is not collaborative, Faundez-Zanuy [168] summarizes the main drawbacks of each technique, which have been shown in the corresponding sections:
• Fingerprint: Some fingerprint scanners are not able to acquire a clear fingerprint image under certain conditions (elderly people, manual workers in contact with acids, . . . ). There also exist users who do not have fingerprints5 .
• Face: The user’s face can undergo many changes due to hairstyling or make-up, the use of accessories, weight variations or skin color changes. Pose and lighting changes can also reduce recognition accuracy.
• Iris: Cases of eye trauma exist, in which iris recognition is not possible.
• Voice: Acquisition devices and illness can modify voice features and degrade recognition rates.
• Hand geometry: Weight variations and mobility diseases, such as paralysis or arthrosis, can make recognition impossible.
A possible way of dealing with these limitations is to combine different biometric modalities. The fusion process integrates different signals from multiple sensors into a single pattern. In the design of the PCAS device, the whole process must be based on mathematically rigorous methods that avoid naive error propagation in the system. Although these systems are more difficult to fool (as defeating more than one system is harder than defeating a single one), they are also more expensive (as they require more sensors) and entail a higher computational load.
In addition, the fusion process provides an enriched user pattern, which helps when dealing with the small-sample recognition problem.
Furthermore, multibiometrics can provide multi-factor authentication methods based on something the user knows, something the user is and something the user has, which are much more secure than methods based on only one of the three factors.
In general, the term biometrics fusion is considered a synonym of multimodal biometrics but, according to [408], it includes two general techniques:
1. Multimodal fusion: Fusion of biometric information obtained from different physiological or behavioural traits.
2. Intramodal fusion: Fusion of biometric information obtained from the same trait, but using different features, classifiers or sensors.
As said before, the literature generally refers to these techniques as multimodal information fusion techniques.
5 This rare medical condition is known as adermatoglyphia and is due to a genetic mutation, as reported in [382, 96]
2.9.2 Multimodal information fusion techniques
A typical biometric system is composed of four basic modules: the sensor, the feature extractor, the matching module and the decision maker. Depending on the module in which the biometric information is combined, we distinguish between four data fusion levels [168]:
1. Sensor/Data level. If the sensor signals are comparable, the raw data can be merged directly. The input signal is the result of sensing the same biometric characteristic with two or more sensors. The combination of the input signals can be carried out using different data fusion paradigms [93, 44]:
• The complementary data fusion paradigm, in which the information provided by the different sensors is independent and can be combined to obtain more detailed information about an object (Figure 2.6(a)).
• The competitive data fusion paradigm, in which the sensors provide independent information about the same object, and the fusion system establishes which sensor data has the least discrepancies (Figure 2.6(b)).
• The cooperative data fusion paradigm, in which the sensors provide independent information about the same object, but the fusion system combines all the sources to obtain new information that cannot be derived from any individual sensor (Figure 2.6(c)).
Figure 2.6: Data fusion paradigms at sensor level: (a) complementary, (b) competitive, (c) cooperative.
2. Feature level. The feature level provides fusion of data obtained from different features of a single biometric signal, or from different biometric signals. In this approach there is little control over each component’s contribution to the system input signal, and the increase in the signal’s size complicates the system design.
3. Opinion/Confidence/Score level. There is a matching module for each biometric signal. Each matcher provides a score which represents a similarity measure. The fusion system normalizes every score and combines them into a global one in one of the following ways:
• Fixed rule. The scores of all the classifiers have the same relevance in the final score.
• Trained rule. The scores have different relevances in the final result. This relevance is set by weighting factors computed using a training sequence.
• Adaptive rule. In variable environments, the relevance of a single classifier’s score depends on the conditions at each moment.
The most popular fusion techniques at score level are the Weighted Sum, the Weighted Product and Decision Trees. Garcia-Salicetti et al. present a comparative study [204] of the Arithmetic Mean Rule (AMR) and a linear SVM in the framework of voice and on-line signature score fusion. Their conclusion is that in the non-noisy case, the AMR with a normalization (rescaling) of scores gives the best results. In the noisy case, the SVM gives results equivalent to those obtained with the AMR after score normalization, and only the methods that take into account the score distributions are in fact efficient.
Different normalization schemes are described in [229]: the min-max normalization, the Z-score normalization, the Tanh-estimators normalization and the so-called Reduction-of-High-Scores normalization.
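As an illustration of the score-level stage, the sketch below implements the min-max and Z-score rules literally, a simplified non-robust variant of the tanh-estimators rule (plain mean and standard deviation instead of the Hampel estimators of the original formulation; the 0.01 scaling constant is the commonly quoted one), and a weighted-sum combination covering both the fixed rule (equal weights) and the trained rule (learned weights):

```python
import numpy as np

def min_max(scores):
    """Map scores linearly to [0, 1]; assumes non-constant scores."""
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min())

def z_score(scores):
    """Zero-mean, unit-variance rescaling."""
    s = np.asarray(scores, dtype=float)
    return (s - s.mean()) / s.std()

def tanh_norm(scores):
    """Simplified tanh-estimators rule (non-robust statistics)."""
    s = np.asarray(scores, dtype=float)
    return 0.5 * (np.tanh(0.01 * (s - s.mean()) / s.std()) + 1.0)

def weighted_sum(norm_scores, weights):
    """Fuse one normalized score per matcher into a global score."""
    w = np.asarray(weights, dtype=float)
    return float(np.dot(norm_scores, w / w.sum()))
```

With equal weights this reduces to the arithmetic mean rule discussed above.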
4. Decision level. In this approach, there is also one matching module for each biometric signal, but each one provides a decision about the identification or verification process. The classifiers’ outputs are then combined to obtain a final classification, avoiding score normalization. [237] proposes the highest rank method, the Borda Count Method and logistic regression as different ways to combine the classifiers’ outputs. The Borda Count Method (BCM) is a generalization of the majority vote that assumes additive independence between individual classifiers and detects redundant classifiers. Although it is simple to implement and requires no training, the BCM treats all the classifiers equally. This limitation can be corrected using logistic regression.
Other important combination schemes at decision level are serial (which improves the FAR) and parallel (which improves the FRR) combinations. The decisions of each classifier can be represented as a ranking of classes. All these rankings can be compared across different types of classifiers and different instances of a problem.
The decision level is not commonly applied to identification problems, as a high number of classifiers is needed in order to avoid decision ties. For verification applications, at least three classifiers are needed.
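The Borda Count Method mentioned above can be sketched in a few lines: each classifier contributes points according to the rank it assigns to every candidate identity, and the fused decision is the identity with the highest total. The identity names below are illustrative:

```python
from collections import defaultdict

def borda_count(rankings):
    """Borda-count fusion of classifier rankings.

    `rankings` is a list of per-classifier rankings; each ranking lists
    candidate identities from best to worst. A candidate ranked k-th
    (0-based) by a classifier over n candidates earns (n - 1 - k)
    points; the identity with the highest total wins.
    """
    points = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for k, identity in enumerate(ranking):
            points[identity] += n - 1 - k
    return max(points, key=points.get)
```

Note that, exactly as the text remarks, every classifier is treated equally here; a trained variant would weight each classifier’s points.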
A fusion system must combine the input signals in order to suppress the influence of inconsistent or irrelevant data and yield the best interpretation of the information. The combination of the input signals can provide noise cancellation, blind source separation and so on. State-of-the-art data fusion focuses mainly on the opinion and decision levels, even though, in general, the best results are obtained when the data fusion is performed in the first stages of the process.
It is also interesting to distinguish between client-independent and client-dependent fusion approaches. According to [408], the former approach has only a global fusion function that is common to all users in the database, whereas the latter has a different fusion function for each individual. Examples of the client-dependent fusion approach are client-dependent thresholds, client-dependent score normalisation and different weighting of expert opinions using linear or non-linear combinations. It has been reported that client-dependent fusion is better than client-independent fusion in situations where there are enough client-dependent score data.
In mobile environments, the quality of the captured features strongly depends on the surrounding conditions. The performance of image-based biometric traits decreases in outdoor locations. In addition, background noise significantly affects voice recognition systems. In these conditions, merging data from multiple sensors improves the system’s accuracy.
2.9.3 Multimodal databases
There exist many multimodal datasets described in the literature. However, most of these data are extracted using desktop systems or fixed equipment. The most interesting databases for this project are those in which the biometric features are captured with a mobile phone, like the SecurePhone PDA Database:
• SecurePhone PDA Database. This dataset was created in the context of the SecurePhone Project (described in section 2.9.4). It contains biometric features from voice, face, speaking face and handwritten signature. According to [370], data were recorded during three sessions separated by at least one week. The video database contains 30 male and 30 female speakers, of which 80% are native speakers. Each group is divided into 3 age subgroups, and each recording session comprises 2 indoor (dark lighting with clean voice, and light illumination with noisy voice) and 2 outdoor (light-noisy and dark-noisy) recordings. Audio recordings were made for 3 types of prompt (5-digit, 10-digit and short phrase), with 6 examples of each prompt type, which produced a total amount of 12960 recordings. Handwritten signature conditions were always good. 100 points per second were registered, with time data but no pressure or angle data. In addition, every subject in the database has 20 true signatures and 20 forged signatures from the same impostor. For voice and face forgery tests, impostor samples are taken as utterances of the same prompt by other speakers.
Other multimodal databases offer information about other biometric traits, although this information is not captured with a mobile phone:
• DAVID-BT. This database contains full-motion video and the associated synchronous sound
records from 30 users, registered in 5 sessions spaced over several months. All videos show
the talking user’s full face in different scene backgrounds and with different illumination. The
utterances include the English digit set, English alphabet E-set, some syllables and phrases [339].
• XM2VTS.[103] Contains synchronised video and speech data from 295 subjects. It was recorded in four sessions separated by one month. Each session consists of two recordings: a speech recording of each subject reciting a sentence, and a frontal face shot [356].
• BANCA.[105] In the context of the BANCA project, a face and speech database was created [74]. High- and low-quality sensors were used in three different scenarios (controlled, degraded and adverse) to register data during three months. Video and speech data were collected for 52 subjects (26 males and 26 females) speaking in 4 different languages (English, French, Italian and Spanish), in 12 different sessions. Each session consists of 2 recordings, a true client access and an informed impostor attack.
• BIOMET.[463] This database was recorded to study how different biometric modalities can be combined in order to develop better-performing systems. It includes face (2D, infrared and 3D images), speech, fingerprint, hand and signature data that was captured in three sessions, with three and five months of spacing between them [203].
• MCYT. Fingerprint and signature dataset described in [388].
• MyIDEA. This database includes talking face, audio, fingerprints, signature, handwriting and
hand geometry records [153]. Data are captured in three sessions randomly spaced in time, from
104 users, using sensors of different qualities. Audio content is recorded in French and in English
and impostor attempts for voice, signature and handwriting are included.
• BioChaves.[179] As a part of the BioChaves project, a multimodal database including voice and keystroke dynamics data was created. It contains data from 10 users, registered in 2 sessions separated by one month. Each user had to utter and type the same four words five times [367].
• BioSec.[9] This database was acquired in the context of the BioSec integrated project [176] and includes real multimodal data from 200 users, registered in 2 sessions. The multimodal patterns consist of fingerprint images acquired with three different sensors, frontal face images captured with a webcam, iris images from an iris sensor, and voice records acquired both with a close-talk headset and a distant webcam microphone. An extended version comprising data from 250 users acquired in 4 sessions is also available.
• BiosecurID. This dataset includes speech, iris, face (still images, videos of talking faces), handwritten signature and handwritten text (on-line dynamic signals, off-line scanned images), fingerprints (acquired with two different sensors), hand (palmprint, contour-geometry) and keystroking traits from 400 individuals (gender-balanced) divided into 4 age groups, together with some subject context information. Replay attacks for speech and keystroking, and skilled forgeries for signatures, are also included. The acquisition phase took place in four sessions, separated by one month from each other. The acquisition set-up and protocol are described in [175].
• NIST-Multimodal.[401] This is a score database which contains two face scores and two fingerprint scores for the same individuals. The face scores were generated by two commercial systems (“matcher C” and “matcher G”); one fingerprint score was obtained by comparing a pair of images of the left index finger, and the other by comparing a pair of images of the right index finger, according to [229].
• MMU GASPFA.[236] This database offers multimodal data acquired using commercial “off the shelf” equipment, which includes digital video cameras, a digital voice recorder, a digital camera, a Kinect camera and smartphones equipped with accelerometers. The dataset consists of patterns from 82 people, made up of frontal face images from the digital camera, speech utterances recorded using the digital voice recorder, and gait videos with their associated data recorded using the digital video cameras and the Kinect camera, together with accelerometer readings from a smartphone.
• University of Notre Dame Biometrics multimodal databases [86]. This dataset contains face images (face photographs, face thermograms, 3D face images and iris images), ear and hand shape images.
• FRGC. The dataset contains intramodal face data captured using a camera at different angles,
with different range sensors in different controlled or uncontrolled settings.
• Chimeric users datasets. The utilization of chimeric users or virtual identities is somewhat accepted in the literature and reduces the database creation time. As described in [408], this technique consists of associating biometric features from different users to create a multimodal biometric pattern. Although this process was questioned during the 2003 Workshop on Multimodal User Authentication, it rests on the assumption that two or more biometric traits of a single person are independent from each other, and there is no work in the literature that strongly confirms or refutes this assumption. An example of pattern merging is used in [229].
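The chimeric-user construction can be illustrated with a short sketch; the trait names and sample values below are hypothetical and not taken from any cited database:

```python
def build_chimeric_users(face_samples, voice_samples):
    """Create virtual (chimeric) identities by pairing the face of one real
    user with the voice of a *different* real user, under the independence
    assumption discussed above. A simple rotation of the donor list
    guarantees that the two donors always differ."""
    ids = sorted(set(face_samples) & set(voice_samples))
    n = len(ids)
    return [
        {"face_donor": ids[i], "voice_donor": ids[(i + 1) % n],
         "face": face_samples[ids[i]], "voice": voice_samples[ids[(i + 1) % n]]}
        for i in range(n)
    ]

# Three real users yield three virtual identities with mixed traits.
users = build_chimeric_users(
    {"u1": "face1", "u2": "face2", "u3": "face3"},
    {"u1": "voice1", "u2": "voice2", "u3": "voice3"},
)
```

Any derangement of the donor list would serve equally well; the point is only that no virtual identity combines two traits from the same real person.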
2.9.4 Recent related works
Classical identification schemes use a single feature descriptor and a particular classification method to associate a concrete pattern with an individual’s identity. Sometimes, the use of a second feature speeds up the process of capturing the main feature’s pattern. For instance, the Near-Infra-Red lighting face recognition method presented by Han et al. [227] as the basis of an iris and face identification system proposes the use of infrared corneal specular reflections to locate eye and face positions, image enhancement and lighting normalization by simple logarithmic methods, and face recognition by means of integer-based PCA. The reported results showed an eye-detection rate over 98.8%. The face recognition accuracy is almost the same as that obtained with a visible face database, with an EER of 14.8%, and the processing speed using the integer-based method (79.55 ms) was more than three times faster than that using floating point (255.66 ms).
On the other hand, as established in [237], in systems with a large number of users and noisy input signals, features and classifiers of different types complement one another in classification performance. The chosen fusion function should take advantage of the strengths of the individual classifiers, avoid their weaknesses, and improve classification accuracy.
Several works, prior to the PCAS project, have developed multimodal systems for hand-held devices. We present the most representative ones, classified by fusion data level:
Feature-level-fusion-based systems
In [493] a face and palmprint multimodal scheme is proposed to manage the single biometric sample problem. The discriminant features are extracted using Gabor-based image preprocessing techniques and PCA. After feature vector normalization, the fusion is done at feature level by a distance-based separability weighting strategy. Evaluation of the multimodal system employs the AR face database and a palmprint database provided by the Hong Kong Polytechnic University (20 sets of 64 × 64 images from 189 individuals). Using a Nearest Neighbour classifier and assigning the same weights to face and palmprint features, an average recognition rate of 90.73% is achieved, whereas the best result obtained using unimodal techniques is 62.72%.
Jing et al. also deal with the small sample biometric recognition problem in [267] by introducing a new classifier to be used with the fused biometric images. The discriminative features of the images are extracted with Kernel Discriminative Common Vectors (KDCV), and then classified using a Radial Basis Function (RBF) network. This technique is assessed over the AR and FERET databases and the aforementioned palmprint dataset, yielding an increased performance in small sample recognition cases. The total face recognition rate (67.32%) and palmprint recognition rate (60.88%) rise to 92.81% when using the multimodal technique. The introduction of the Gabor transform entails a 12.26% performance increase, and the KDCV+RBF classifier improves the total recognition rate of DNC (Discriminative Common Vectors (DCV) + ANN) by 10.55% and of KPNC (Kernel PCA + ANN) by 9.99%.
Score-level-fusion-based systems
Fusing the scores of several biometric systems before the decision module can improve the system’s accuracy.
Poh et al. [408] present a database of scores taken from experiments carried out on the XM2VTS
face and speaker verification database. They also describe some protocols and tools for evaluating
score-level fusion algorithms, as well as 8 baseline systems (feature type + classifier) which are finally
assessed in terms of the HTER significance test.
Choudhury et al. [120] propose a recognition and verification system using face and speech from unconstrained audio and video. To detect real face presence, a 3D depth information system is also used. Varying face pose and auditory background changes are managed by face tracking and audio clip selection. Faces are detected using skin colour information and classified with the eigenfaces model. The text-independent speaker identification system is based on a simple set of linear spectral features which are characterized with HMMs that adjust the speaker models to different kinds of background noise. The classifier fusion is carried out at score level, using a simple Bayes net that assigns weights to each individual classifier in order to soften the influence of the least reliable one. When an optimal rejection threshold is fixed empirically, the recognition and verification rates improve with respect to the audio and video unimodal techniques. A recognition rate of 99.2%, with a rejection rate of 55.3% of images/clips, and a verification rate of 99.5%, with a rejection rate of 0.3% of images/clips, are reached over a private database with 26 users.
Montalvao et al. [367] present biometric fusion of keystroke dynamics and speech at score level for identification in Internet applications. Although keystroke dynamics offers a weaker discrimination between users, it is almost immune to background noise. The extracted features are the median pitch from structured 3-second utterances, sequences of 13-MFCC vectors from the utterances, and sequences of Down-Down time intervals from the typing of 31-keystroke structured texts. Three fusion approaches are considered: a linear data fusion with Fisher’s Linear Discriminant, a linear data fusion based on Optimal Estimation (Simplified Fusion of Estimates) and a non-linear data fusion (Bayesian Classification for Normal Distribution). The Simplified Fusion of Estimates with the three features outperforms the other techniques and provides an EER of 5% (when the best pair-wise EER is 6.7%).
Vildjiounaite et al. [470] propose an unobtrusive method of user authentication for mobile devices based on recognition of the walking style (gait) and voice. Two scores are obtained from 3D preprocessed gait signals: a correlation score and an FFT score. Text-independent speaker recognition was performed using the Munich Automatic Speaker Verification environment. The normalized gait-based and voice-based similarity scores were fused by the Weighted Sum method. The performance of gait and voice fusion is assessed in terms of EER, and the multimodal system improves on the results of the unimodal speaker recognition technique (EER over 40%). The lowest EER (1.97%) is reached when carrying the accelerometer in a breast pocket with surrounding city noise at 20 dB.
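The Weighted Sum rule used in this and several of the other works surveyed here can be sketched as follows; the score ranges and the equal weights are illustrative placeholders, not the values of any cited system:

```python
def min_max_normalize(score, lo, hi):
    """Map a raw matcher score onto [0, 1] given that matcher's score range."""
    return (score - lo) / (hi - lo)

def weighted_sum_fusion(scores, weights):
    """Fuse normalized similarity scores with fixed per-matcher weights
    (assumed to sum to 1); the fused score is then thresholded as usual."""
    return sum(w * s for s, w in zip(scores, weights))

gait_score = min_max_normalize(0.62, 0.0, 1.0)     # e.g. a correlation score
voice_score = min_max_normalize(41.0, 0.0, 100.0)  # e.g. a 0-100 matcher scale
fused = weighted_sum_fusion([gait_score, voice_score], [0.5, 0.5])
```

Normalization before fusion is essential here: without it, the matcher with the larger numeric range would dominate the sum regardless of the weights.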
Rodrigues et al. [424] analyse the security of a multimodal system when one of the biometric modalities is successfully spoofed and propose two new fusion schemes for verification tasks, which take into account the intrinsic security of each biometric system being fused. The extended Likelihood Ratio (LLR) scheme considers the LLR between the genuine and impostor distributions as the optimal fusion method (in the sense that it minimizes the probability of error) and estimates the true impostor distribution without the need of training spoofed samples, by assuming that the similarity score in a successfully spoofed biometric system will follow a genuine probability distribution. The fuzzy-logic fusion scheme allows a linguistic description of the heuristics that appeared in the previous approach. The experiments carried out show the existence of a trade-off between recognition accuracy and robustness against spoof attacks. On the other hand, the fuzzy fusion scheme outperforms the probabilistic fusion scheme.
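The likelihood-ratio idea underlying the LLR scheme can be sketched under a simplifying Gaussian assumption; the distribution parameters below are invented for illustration, and Rodrigues et al. estimate their distributions differently:

```python
import math

def gaussian_pdf(x, mean, std):
    """Univariate normal density."""
    return math.exp(-0.5 * ((x - mean) / std) ** 2) / (std * math.sqrt(2 * math.pi))

def llr_fusion(scores, genuine_params, impostor_params):
    """Sum per-matcher log-likelihood ratios log p(s|genuine) / p(s|impostor),
    assuming independent matchers and Gaussian score distributions."""
    total = 0.0
    for s, (g_mean, g_std), (i_mean, i_std) in zip(scores, genuine_params,
                                                   impostor_params):
        total += math.log(gaussian_pdf(s, g_mean, g_std) /
                          gaussian_pdf(s, i_mean, i_std))
    return total

# Two matchers, both favouring the genuine hypothesis: accept if LLR > 0
# (the zero threshold corresponds to equal priors).
llr = llr_fusion([0.8, 0.7],
                 genuine_params=[(0.9, 0.1), (0.8, 0.1)],
                 impostor_params=[(0.3, 0.1), (0.2, 0.1)])
accept = llr > 0.0
```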
The system presented by Kim et al. [284] integrates face, teeth and voice biometrics in mobile devices. Image-based (face and teeth) authentication is performed with the EHMM algorithm, while voice pitch and MFCCs are modelled with GMM. The scores of the three techniques are normalized using a sigmoid function, and weighted-summation, KNN, Fisher and Gaussian classifiers are evaluated as fusion approaches over a database of 1000 biometric traits (20 compound traits from each of 50 individuals) collected with a smartphone. The weighted-summation rule turns out to be the best-performing approach: the reported authentication performance of the fusion scheme is superior to each of the unimodal approaches and to fusion methods that integrate only two modalities as a pair, with an error rate around 1.64%, while the minimum error rate of a single technique over the same database is 5.09%.
He et al. [229] present a performance evaluation of sum rule-based fusion and SVM-based fusion schemes in fingerprint, face and finger vein systems, together with the Reduction of High-scores Effect (RHE) normalization approach. This normalization technique is based on the fact that, in multimodal biometric systems, low genuine scores occur more frequently than high impostor scores: there are many reasons which can degrade the score obtained by a genuine user, whereas it is quite difficult for an impostor to obtain a high score. The techniques are evaluated over the NIST databases, in terms of Genuine Acceptance Rate (GAR), and confirm the outperformance of multimodal over unimodal biometrics. In addition, the RHE normalization scheme outperforms min-max, Z-score and Tanh-estimators normalization. The final conclusion is that the choice between the SVM scheme and the sum rule-based scheme involves a trade-off between implementation complexity and precision.
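Two of the normalization baselines mentioned above can be sketched directly. For simplicity this version uses the plain sample mean and standard deviation, whereas tanh-estimators normalization classically uses robust (Hampel) estimators:

```python
import math
import statistics

def z_score_normalize(scores):
    """Z-score normalization: centre on the mean, scale by the standard deviation."""
    mu, sigma = statistics.mean(scores), statistics.pstdev(scores)
    return [(s - mu) / sigma for s in scores]

def tanh_normalize(scores):
    """Tanh-estimators style normalization, mapping scores into (0, 1)."""
    mu, sigma = statistics.mean(scores), statistics.pstdev(scores)
    return [0.5 * (math.tanh(0.01 * (s - mu) / sigma) + 1.0) for s in scores]

raw = [10.0, 20.0, 30.0, 40.0]
z = z_score_normalize(raw)
t = tanh_normalize(raw)
```

Both maps are monotonic, so they preserve the ranking of the raw scores; they differ in their sensitivity to outliers, which is the usual reason for preferring tanh-estimators over plain z-scores.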
McCool et al. [347] present the implementation of a face and speaker recognition system on a Nokia N900 mobile phone. The face localisation module consists of a face detector trained as a cascade of classifiers using the modified census transform, and the face authentication module divides the detected faces into non-overlapping regions which are represented using a histogram of LBP features. The Voice Activity Detection phase is performed using a Hungarian downscaled phoneme recogniser, a cascade of three neural networks. The Speaker Authentication phase applies Probabilistic LDA to model features extracted from the utterance. The techniques are fused at score level: the similarity scores for face authentication and the log-likelihood scores for speaker authentication are turned into probabilities by logistic regression. The implemented system can process about 15 frames per second. The performance is evaluated over the MOBIO database. The bimodal system outperforms both modalities on their own: performance is improved by 25% for female trials and by 35% for male trials, and the system’s global EER is around 10.9%.
Decision-level-fusion-based systems
In [468] a cascade fusion based system is proposed to reduce the user’s frequent verification effort. The goal of this technique is to require explicit verification effort (fingerprint) only if a cascade of unobtrusive biometric (voice and gait) verifications fails. In the unobtrusive verification stage, three scores are computed (correlation and FFT scores for gait, and a voice score). When the last stage is performed, the fingerprint score is added. The scores at the first stage are combined with a Weighted Sum fusion rule, and the scores from both stages are joined at decision level. This fusion of voice and gait improves recognition rates, and unobtrusive verification is possible in about 40-70% of cases, depending on the surrounding noise and target FAR. For low noise levels (clean speech, city and car noise with SNR 20 dB and city noise with SNR 10 dB) the unobtrusive verification rate was not less than 80%, overall FAR was less than 1%, and FRR was in a range of 1-2%. For noise levels such as city and car noise with SNR 0 dB, and white noise with SNR 0-20 dB, the technique shows an FRR of about 3-7% (better than with the unimodal fingerprint technique) for the same FAR. The recognition rates of unobtrusive verification decreased to 40% for car noise and 60% for city noise.
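The two-stage cascade idea can be sketched as follows; the weights and thresholds are illustrative placeholders rather than values tuned in any cited work:

```python
def cascade_verify(gait_score, voice_score, capture_fingerprint_score,
                   weights=(0.5, 0.5), unobtrusive_threshold=0.8,
                   fingerprint_threshold=0.6):
    """Stage 1 fuses the unobtrusive scores with a weighted sum; only when
    that fused score is not confident enough is the user asked for an
    explicit fingerprint (stage 2), whose score then decides on its own."""
    fused = weights[0] * gait_score + weights[1] * voice_score
    if fused >= unobtrusive_threshold:
        return True, "unobtrusive"
    # Fall back to the explicit stage: capture a fingerprint on demand.
    return capture_fingerprint_score() >= fingerprint_threshold, "explicit"

# Confident unobtrusive scores: no fingerprint is ever requested.
ok1, stage1 = cascade_verify(0.9, 0.85, lambda: 0.0)
# Weak unobtrusive scores: the fingerprint stage decides.
ok2, stage2 = cascade_verify(0.4, 0.3, lambda: 0.7)
```

Passing the fingerprint capture as a callable keeps the explicit, user-visible step lazy: it only runs when the unobtrusive stage is inconclusive, which is the whole point of the cascade.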
The SecurePhone project (2004-2006)
The SecurePhone project [423, 26] was a European project that “aims at realising a new mobile
communication system enabling biometrically authenticated users to deal m-contracts during a mobile
phone call in an easy yet highly dependable and secure way. The SecurePhone [. . . ] will provide users
with a number of innovative functionalities, such as the possibility to securely authenticate themselves
by means of a ‘biometric recogniser’, mutually recognise each other in initiating a phone call, exchange
and modify in real time audio and/or text files and eventually e-sign and securely transmit significant
parts of their phone conversation”. This project’s biometric recogniser proposal was a multimodal
system using voice, face and handwritten signature with a Qtek 2020 PDA.
As part of this project, the SecurePhone database was created, as summarized in section 2.9.3 and detailed in [370]. That document also explains an experimental protocol and a test procedure.
Jassim et al. [261] present a wavelet-based face verification system in the context of this project. The facial features are obtained from the image wavelet decomposition, so the approach does not require any training. The reported results point out an acceptable level of performance, similar to those of PCA and LDA schemes, over the BANCA and ORL databases.
Koreman et al. also worked on this project. They present an overview of the biometric authentication system, with a description of the PDA multimodal dataset recorded for the project (section 2.9.3). In [292], voice features are obtained from MFCC using the HMM Toolkit; face feature vectors are calculated from Haar-wavelet filters and histogram equalization, and the user’s signing process is modelled with HMMs. Signal preprocessing is performed within the PDA, whereas storage and processing of the user’s biometric profile are carried out in the device’s Subscriber Identity Module (SIM) card. Voice verification is based on GMM, face authentication uses the Discrete Wavelet Transform decomposition and the Manhattan distance, and signature authentication employs HMM and a fusion of normalised log-likelihood and state occupancy vector scores. The three features are fused at score level and the joint distribution is modelled with GMM. This score fusion provides better performance than each of the individual techniques. Over the BANCA and BIOMET databases, the best reported EER is 0.57%. Using the SecurePhone PDA database [370], the best reported EER is 0.83%.
A brief summary of all relevant works is shown in Table 2.17 (see footnote 6 for the feature initials).
Publication  Fusion level  Features     Fusion technique                            Reported results
[227]        N/A           Ir+Fa        N/A                                         CCR=98.8%, EER=14.8%
[493]        Feature       Pa+Fa        Distance-based separability weighting       CCR=90.73%
[267]        Feature       Pa+Fa        KDCV + RBF network                          CCR=92.81%
[408]        Score         Fa+Sp        Baseline experts                            HTER=0.511
[120]        Score         Fa+Sp        Bayes net                                   CCR=99.2%
[367]        Score         Ke+Sp        Simplified fusion of estimates              EER=5%
[470]        Score         Sp+Ga        Weighted sum                                EER=1.97%
[424]        Score         Fa+Fi        Extended LLR / fuzzy logic                  FAR=0.01%, FRR=18.48%
[284]        Score         Fa+Te+Sp    Weighted summation rule                      EER=1.64%
[229]        Score         Fi+Fa+Fv    SVM + high-scores normalization              FAR=0.01%, GAR=99.6%
[347]        Score         Fa+Sp        Similarity + log-likelihood scores           EER=10.9%
[292]        Score         Sp+Fa+Hs    GMM                                          EER=0.57%
[468]        Decision      (Sp+Ga)+Fi  Cascade fusion                               FAR=1-2%, FRR=3-7%
Table 2.17: Summary of relevant works in multimodal recognition.
2.9.5 Conclusion
Biometric data fusion can be done at four levels, as established in section 2.9.2, although the most common in the literature is the score level. Due to rising research on multimodal biometrics, many multimodal datasets have been developed. The most emblematic, in the context of the PCAS project, is the SecurePhone PDA database, as all its data is captured with a mobile device (section 2.9.3).
Therefore, as a result of the previous discussion, some advantages of multimodal biometrics are pointed out:
• Multimodal biometric approaches allow overcoming some practical drawbacks of unimodal techniques, associated with the distinctiveness and permanence of certain biometric features (as discussed in section 2.9.1).
• In addition, each technique’s performance is affected, at sensor level, by environmental conditions (for example, illumination settings alter face detection, just as background noise is detrimental to voice recognition). In this sense, the fusion of techniques achieves better results, as it helps to mitigate each individual technique’s shortcomings.
• Fusion techniques provide an enriched pattern, which reduces the effects of the small sample problem, and allow cross-validation between techniques.
6 Feature initials: Ir=Iris, Fa=Face, Pa=Palmprint, Sp=Speaker, Ke=Keystroke, Fi=Fingerprint, Te=Teeth, Ga=Gait, Fv=Finger veins, Hs=Handwritten signature.
• The use of a “second” feature speeds up the process of identifying an individual by a main feature, as shown in section 2.9.4.
• Cascade fusion based systems can be used to reduce the user’s frequent verification effort, as detailed in section 2.9.4.
• Multimodal systems can be quickly tested, as multimodal biometric datasets are publicly available.
• Related works (section 2.9.4) show that, in general, multimodal systems outperform unimodal
approaches.
On the other hand, the main disadvantages of this approach are:
• The use of several biometric traits and the fusion process increase the overall computing time.
• Multimodal fusion requires the use of different sensors, many of them not included in a standard mobile phone.
3 Non-coercion techniques
3.1 Introduction
Biometrics solves the problem of user authentication in a system; however, these techniques do not ensure that the person who is attempting to enter the system is not being forced by another person to perform this authentication, for example, when someone is withdrawing money from a cash machine while a robber is threatening him/her. Therefore, non-coercion techniques must also be integrated into the biometric solution in order to guarantee that the user is not being coerced by anyone.
Biometric systems are usually deployed in controlled environments such as border controls or banks, which are normally under surveillance by means of a camera that records the whole authentication process. These controlled environments allow detecting possible threats to the user, and they also help to prevent such attacks since they produce a dissuasive effect. Nevertheless, this review is devoted to the study of non-coercion techniques applied to mobile scenarios, which are usually non-controlled environments. This means that these techniques cannot rely on external surveillance systems; they must be incorporated into the mobile device. Therefore, the coercion attack must be detected using internal sensors of the mobile device, such as the camera, accelerometer or touchscreen, or other wearable sensors that can be easily connected to the mobile device, such as health monitors.
Depending on the behaviour of the user while being coerced, two different approaches may be distinguished to detect coercion. On the one hand, the user may attempt to warn the system that he/she is under threat without alerting the attacker. This approach is called “voluntary” since the user has to voluntarily perform a specific action to inform the system. However, this action should be similar to the authentication process in order not to reveal to the attacker the real objective of the action. On the other hand, the user may cooperate with the attacker because he/she is afraid of being injured. In this case the user will not try to perform a specific action to warn the system, in order not to alert the attacker. However, since the user is in a stressful situation, the threat may be detected by analysing involuntary changes in his/her state or behaviour. This approach is called “involuntary” since the user does not perform any conscious action to reveal the attack. In the following sections, both approaches will be studied in detail.
3.2 Involuntary approach
In some cases, when a person is under attack, he/she feels so stressed or scared that he/she is not able to react. Therefore, he/she will not be able to perform any voluntary action in order to alert the system. However, many studies suggest that it is possible to detect the emotional state of the user from different physiological signals [404]. Changes in these signals are automatically controlled by the Autonomic Nervous System (ANS), so the user is not able to modify them voluntarily. For this reason, these physiological signals have been extensively used for lie detection, since the person under interrogation can try to deceive but cannot control the physiological reactions of the body [459]. In the following paragraphs, different physiological and physical signals that may be affected by the stress or fear of the user will be studied.
3.2.1 Physiological signals
McCraty et al. [348] showed that some emotions can be distinguished by analyzing the power spectrum of the Electrocardiography (ECG) signal. According to McCraty, in cases of stress there is an increase in Heart Rate Variability (HRV) in the lower frequency ranges of the spectrum. HRV reflects the time variation of the beat-to-beat intervals. This study also suggests that other emotions may be distinguished from stressful situations since they produce high power in the medium frequency ranges. A recent study has tested the feasibility of using HRV to distinguish among five different emotions: calm, fear, stress, relaxation and happiness [475]. Other works have shown correlations between HRV and stressful situations; however, they were more related to mental stress in cognitive tasks such as preparing for an exam [352] or taking mental tests [116].
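Two standard time-domain HRV measures can be computed directly from the beat-to-beat (RR) intervals; the interval values below are invented for illustration:

```python
import math

def hrv_metrics(rr_intervals_ms):
    """Return (SDNN, RMSSD) from RR intervals in milliseconds: SDNN is the
    standard deviation of the intervals (overall variability), RMSSD the
    root mean square of successive differences (short-term variability).
    Comparing these against a personal baseline is one simple way to flag
    unusual arousal."""
    n = len(rr_intervals_ms)
    mean_rr = sum(rr_intervals_ms) / n
    sdnn = math.sqrt(sum((r - mean_rr) ** 2 for r in rr_intervals_ms) / n)
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return sdnn, rmssd

sdnn, rmssd = hrv_metrics([800, 810, 790, 805, 795])
```

The spectral analysis cited above works on the same RR series, examining how its power distributes across low and medium frequency bands rather than summarising it with a single time-domain statistic.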
In order to capture the electrical signals from the heart it is necessary to wear a specific sensor such as a chest belt, which is the most typical sensor for this purpose. Although some works have proposed wearing this sensor during the whole day, even while sleeping [373], this may have low acceptability among users. For this reason, new sensors that can be integrated into clothing textiles have recently appeared [207]. Since our proposal is based on a mobile environment, these signals should be sent to the mobile device and then analyzed. Although this architecture has been successfully tested in different works [456, 92], its main drawback is that it requires an external sensor. Another solution, proposed by [446], is to attach contact pads to the mobile phone in order to capture the ECG while holding the device, even though this solution forces the user to hold the mobile device with both hands, so as to measure differences between two parts of the body by creating a loop with the heart. Although HRV is usually measured using ECG, it can also be captured using a Photoplethysmography (PPG) sensor, which could easily be incorporated into the mobile device. This sensor is composed of an infrared LED and a photodiode [453], and it measures changes in the light absorption of the skin depending on the blood volume present in the blood vessels. Furthermore, some studies have shown that HRV measurements obtained using ECG and PPG sensors are highly correlated [441].
Another physiological signal strongly related to stress and other emotions is electrodermal activity, also known as Galvanic Skin Response (GSR). This physiological signal is also controlled by the ANS, in particular by the sympathetic nervous system, which, in case of stress, increases the secretion of the sweat glands, reducing the resistance of the skin. However, a rise in GSR levels might also be related to a rise in ambient temperature or to physical activity [70]. For this reason, the measurements of this signal are usually calibrated with information about the ambient temperature [396] and the physical activity of the user, using an accelerometer [436]. Although this signal may reflect both stress [242] and cognitive load [381], the authors of [442] showed that there are patterns in the GSR signal that allow discrimination between them. Nevertheless, the main problem of using this physiological signal in a mobile environment is that the GSR sensor must be continuously in contact with the skin of the user, avoiding abrupt movements that could displace the contact pads [439].
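The calibration idea, discounting GSR rises that coincide with movement or heat, can be sketched as a simple gating rule; all thresholds and units here are hypothetical placeholders:

```python
def gsr_stress_flag(gsr_window, accel_magnitude, ambient_temp_c,
                    rise_threshold=0.5, activity_limit=1.2, temp_limit=30.0):
    """Flag a rise in skin conductance as stress only when it cannot be
    explained away by physical activity (accelerometer magnitude) or high
    ambient temperature, both of which also raise GSR."""
    rise = gsr_window[-1] - gsr_window[0]
    if accel_magnitude > activity_limit or ambient_temp_c > temp_limit:
        return False  # confounded reading: ignore it
    return rise > rise_threshold

calm_context = gsr_stress_flag([2.0, 2.3, 2.8], accel_magnitude=1.0,
                               ambient_temp_c=24.0)   # genuine rise: flagged
while_running = gsr_stress_flag([2.0, 2.3, 2.8], accel_magnitude=2.5,
                                ambient_temp_c=24.0)  # discounted
```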
The fusion of different physiological signals to provide a better estimation of the emotional state of the user has been analyzed in many works. In [140] the authors combined HRV and GSR to provide a measurement of the stress level of the user. A wearable platform composed of several sensors (GSR, ECG, electromyogram and respiration rate) has been proposed in [117] to monitor the user’s stress. AutoSense is a small wearable sensor suite, measuring acceleration, ECG, GSR, respiration rate and temperature, that can be attached to the chest of the user to continuously monitor his/her stress level [160]. Although most of these wearable sensors are unobtrusive and users may not feel uncomfortable while wearing them, they cannot easily be incorporated into a mobile device or its sleeve, since they must be placed on different parts of the user’s body to measure the physiological signals.
3.2.2 Voice
Voice analysis has shown promising results in the classification of the emotional state of individuals [349]. Some works have claimed that stress may affect the speech production process [184, 146, 310]. For instance, in [355] the authors found a correlation between vocal tremor and psychological stress. Solutions for mobile environments have even appeared, such as StressSense, which is able to detect stress in the human voice in indoor and outdoor environments with an accuracy of 81% and 76% respectively [114]. However, there are also some studies that are not so optimistic about this technology [243]. One of the main advantages of this technique is that it does not need any external sensor, since it can use the built-in microphone of the mobile device, even though a possible drawback is that the person must speak aloud for his/her voice to be analyzed.
3.2.3 Face
There is an ancient proverb that says that the face is the mirror of the soul, which shows to what extent our face usually reflects our emotional state. Although facial expressions for different emotions may vary among cultures, some general patterns can be extracted from facial cues in order to recognize the user’s current emotion [157]. Many different techniques have been applied to automatically detect emotions from facial expressions, achieving recognition rates over 75% [167]. Some studies suggest the feasibility of detecting emotions even when the person is lying, based on facial expressions which last milliseconds (micro-expressions) [156]. Eye analysis also provides information about the emotional state of the user: when a person is under stress, the pupil may change size [366] and the blinking frequency may be altered [224]. All these techniques can be easily implemented in a mobile device, since they only require a camera, and they could complement face biometric access control by detecting whether the user is stressed or scared. Furthermore, some studies have shown the feasibility of obtaining the heart rate from video [484], so the variability of this physiological signal (HRV) could be combined with face emotion detection systems in order to improve their estimations.
3.2.4 Movement
Emotions may also affect involuntary movements of the body, such as foot or hand trembling. Former studies have shown a strong correlation between anxiety states and tremor [212], and recent works propose measuring foot trembling to detect the stress level of users [206]. Although hand tremor while holding a mobile device has been used to predict strokes in Parkinson’s disease [272], to our knowledge it has not yet been applied to detecting when a person is under stress. However, this involuntary movement is easy to capture with a mobile phone, since most new smartphones include an accelerometer to detect the tilt of the device. In addition, another work has shown that hand tremor can also be measured using the touchscreen [410]. Since the biometric technique based on keystroke dynamics relies on the data captured by these sensors (accelerometers and touchscreen), several authors have studied whether changes in these dynamics may be caused by stressful situations [223, 159].
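A minimal sketch of detecting tremor from accelerometer samples is to measure spectral power in the 4-12 Hz band, where hand tremor typically concentrates; the sampling rate, band limits and signals below are illustrative assumptions:

```python
import cmath
import math

def band_power(samples, fs, lo_hz, hi_hz):
    """Naive DFT: sum the power of the bins whose frequency falls inside
    [lo_hz, hi_hz], after removing the DC component (gravity offset)."""
    n = len(samples)
    mean = sum(samples) / n
    centred = [s - mean for s in samples]
    power = 0.0
    for k in range(1, n // 2):
        freq = k * fs / n
        if lo_hz <= freq <= hi_hz:
            coeff = sum(c * cmath.exp(-2j * math.pi * k * i / n)
                        for i, c in enumerate(centred))
            power += abs(coeff) ** 2 / n
    return power

fs = 50.0  # Hz, a plausible smartphone accelerometer rate
trembling = [math.sin(2 * math.pi * 8.0 * i / fs) for i in range(100)]  # 8 Hz
steady = [0.05] * 100  # constant gravity-like reading, no tremor
```

A real implementation would use an FFT over a sliding window and compare the band power against a per-user baseline; the naive DFT above just keeps the example dependency-free.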
3.3 Voluntary approach
This section is devoted to those non-coercion techniques based on the cooperation of the user to detect when he/she is under threat. In contrast to involuntary non-coercion techniques, here the user must perform a specific action in order to alert the system about the threatening situation.
Since most security access controls are based on a PIN, it seems appropriate also to use this mechanism to warn the system in case of attack. The idea behind this technique is to provide the user with two different PINs: one for normal access to the system and another one to alert about an attack. This second PIN, besides generating an alarm, also allows the user to access the system, because otherwise the attacker would detect the subterfuge. Recently, some news claimed that some banks proposed using the PIN in reverse order to generate an alert; however, this news has been reported as a hoax [5], and to our knowledge such a second PIN has not yet been implemented to protect any service. Finally, if the PIN authentication system incorporates a biometric module based on keystroke dynamics, i.e. the timing between pressing and releasing each button [430], the user could deliberately type his/her PIN with a different cadence in order to alert the system. If the system detects a different timing pattern, it will label all the operations performed by the user as fraudulent.
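The two-PIN scheme, optionally combined with the keystroke-cadence alarm, can be sketched as follows; the PIN values, baseline and tolerance are hypothetical placeholders:

```python
NORMAL_PIN = "4821"  # hypothetical example values
DURESS_PIN = "4812"  # also grants access, but silently raises an alarm

def check_pin(entered, key_intervals_ms, baseline_ms=180.0, tolerance_ms=120.0):
    """Return (access_granted, silent_alarm). The duress PIN, or the normal
    PIN typed with a deliberately unusual cadence, both grant access while
    flagging the session, so the attacker notices nothing."""
    if entered == DURESS_PIN:
        return True, True
    if entered != NORMAL_PIN:
        return False, False
    mean_interval = sum(key_intervals_ms) / len(key_intervals_ms)
    unusual_cadence = abs(mean_interval - baseline_ms) > tolerance_ms
    return True, unusual_cadence

normal = check_pin("4821", [170, 185, 175])   # usual cadence: no alarm
slow = check_pin("4821", [500, 520, 510])     # deliberately slow typing: alarm
duress = check_pin("4812", [170, 185, 175])   # duress PIN: alarm
```

Crucially, both alarm paths return `access_granted = True`: denying access on the duress PIN would reveal the subterfuge to the attacker.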
In-air signature biometrics [64] can also be complemented with a voluntary non-coercion technique
in two different ways. On the one hand, as in the case of the PIN, the user can define two different gestures:
one for normal access control and another one to alert about an attack. On the other hand, the user can
perform the original gesture in a slightly different way than usual. In this case the system detects
the poor execution of the gesture and still grants access, but tracks all the actions
of the user.
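The second variant can be illustrated with a three-band decision on a gesture-matching distance. The sketch below uses a classic dynamic time warping (DTW) distance, which is one of the techniques analysed in [64], but the thresholds and sequences are invented for the example and are not those of the referenced system.

```python
def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two 1-D sequences."""
    inf = float("inf")
    n, m = len(a), len(b)
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

# Illustrative thresholds: a small distance means a normally performed gesture,
# a medium one a recognisable but deliberately degraded gesture.
ACCESS_T = 1.0
ALARM_T = 4.0

def verify_gesture(template, sample):
    d = dtw_distance(template, sample)
    if d <= ACCESS_T:
        return "access"
    if d <= ALARM_T:
        return "access+alarm"   # gesture recognisable but poorly performed
    return "reject"
```

With this design the coerced user keeps the attacker unaware: a sloppily performed genuine gesture still opens the device while silently marking the session.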
Another solution could be to incorporate an emergency button into the sleeve of the mobile device,
so that pressing it indicates a possible attack. This solution is similar to the emergency button
provided to elderly people to detect falls or strokes [407]. However, the pressing of this button could
easily be observed by the attacker. This solution could therefore be improved by including hand-pressure
sensors in the sleeve in order to detect changes in the hand grip of the user [286]. In this
way, the user could signal an attack by squeezing the sleeve harder than usual, and the attacker
would not be able to detect any strange movement.
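Such a grip-pressure alarm could be implemented as a sustained-threshold test against the user's enrolled baseline grip, as in the following sketch. The sensor model, baseline statistics and thresholds are assumptions made for illustration, not specifications of any actual sleeve.

```python
# Hypothetical enrolment statistics for the user's normal grip (invented).
BASELINE_MEAN = 2.0   # newtons
BASELINE_STD = 0.3
ALARM_SIGMA = 3.0     # a squeeze above mean + 3*std counts as abnormal
MIN_SAMPLES_OVER = 5  # the squeeze must be sustained, not a single spike

def grip_alarm(pressure_samples,
               mean=BASELINE_MEAN, std=BASELINE_STD,
               sigma=ALARM_SIGMA, min_over=MIN_SAMPLES_OVER):
    """Raise a covert alarm on a sustained, abnormally strong grip."""
    threshold = mean + sigma * std
    over = sum(1 for p in pressure_samples if p > threshold)
    return over >= min_over

normal_grip = [2.1, 1.9, 2.2, 2.0, 2.1, 1.8, 2.0]
hard_squeeze = [2.0, 3.5, 3.6, 3.4, 3.5, 3.7, 3.6]
print(grip_alarm(normal_grip))   # False
print(grip_alarm(hard_squeeze))  # True
```

Requiring several consecutive samples over the threshold makes the signal robust against incidental bumps while remaining invisible to an observer.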
3.4 Conclusion
Non-coercion detection techniques based on the cooperation of the user can be incorporated easily into
any biometric system, since they only need to provide two different keys: one for normal access and
another to alert the system. However, the main disadvantage of these techniques is that they cannot
detect whether the user is merely pretending to be under attack. In some situations, the user could
generate a threat alert in order to avoid responsibility for his/her own actions.
This problem could be solved using non-coercion detection techniques based on involuntary signals,
since the user cannot control them. Nevertheless, detection techniques based on physiological
signals need external sensors, which may not be affordable for some developments. Therefore,
the best solution could be to use techniques based on the built-in sensors of the mobile phone, or on
sensors easily integrable in the device or its sleeve. Furthermore, the detection of stress in the face,
voice or movement could be performed while authenticating the user by means of the corresponding
biometric technique. However, these involuntary detection techniques are not very accurate and may
produce many false positives, so the user may be frequently bothered by the system blocking his/her
access to the device because of these errors.
Although both voluntary and involuntary approaches present disadvantages, they could be used
in combination to mitigate them. For example, the involuntary approach could be applied only
when the person has raised a threat alert. In this way, the detection based on involuntary signals
would be used only to confirm that the person is really under attack, which would reduce the
number of false positives.
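This combination can be sketched as a simple gating rule: the involuntary stress detector (abstracted here as a score in [0, 1]) is consulted only after a voluntary alert, so its false positives can never block routine use. The score scale and threshold are illustrative assumptions.

```python
STRESS_THRESHOLD = 0.7  # illustrative cut-off for the involuntary detector

def confirmed_coercion(voluntary_alert, stress_score,
                       threshold=STRESS_THRESHOLD):
    """Escalate only when a voluntary alert is corroborated by stress evidence."""
    if not voluntary_alert:
        return False  # the involuntary channel never fires on its own
    return stress_score >= threshold
```

Because the involuntary detector is evaluated only inside an already-open alert, its false-alarm rate affects only the confirmation step, not everyday authentication.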
4 Conclusion
This report has provided an overview of the state of the art of biometrics applied to mobile phones,
presenting the most relevant research works on the use of different biometric techniques in
such devices.
Each biometric technique has been analysed separately, presenting the topics on which most
research has been conducted in recent years in order to bring these biometrics to mobile phones. This analysis
has resulted in a list of advantages, disadvantages and limitations of each biometric technique, reported
in its corresponding section.
One common vulnerability of these biometric systems is the possibility of being forged by the
presentation of a fake characteristic that does not belong to the authorised user, such as an image, a
gummy finger or a recorded speech sample. This vulnerability is especially important in physical biometric
techniques (fingerprint, iris, hand, voice, face) rather than in behavioural ones (keystroke, signature,
gait). Each biometric technique should therefore have specific countermeasures to perceive liveness, in order
to detect whether the presented biometric sample belongs to a living person or is a fake.
The analysis presented in this report provides essential support for deciding which technologies have
the best potential to be used in the project. This decision will be based on the conclusions drawn for
each biometric technique, as well as on the requirements, scenarios, experience, potential and hardware
limitations.
Therefore, there are many potential techniques that can be used, although, taken separately, each
presents its own vulnerabilities. Most of these can be addressed by the use of multibiometric approaches. In
addition, the use of several biometrics increases both performance and security. An appropriate
multi-factor strategy should include information from different sources: something the user is (physical
biometrics), something the user has (the phone), something the user knows (a password, a sentence,
a signature, etc.) and how the user does something (behavioural biometrics).
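A minimal score-level fusion sketch in the spirit of this multibiometric strategy is given below, using the arithmetic mean rule (AMR, see the Glossary): normalised match scores from independent modalities are averaged and compared against a single acceptance threshold. The scores, weights and threshold are invented for the example.

```python
def fuse_scores(scores, weights=None):
    """Weighted arithmetic-mean fusion of per-modality match scores in [0, 1]."""
    if weights is None:
        weights = [1.0] * len(scores)
    total = sum(weights)
    return sum(s * w for s, w in zip(scores, weights)) / total

def accept(scores, weights=None, threshold=0.6):
    """Accept the claimant when the fused score reaches the threshold."""
    return fuse_scores(scores, weights) >= threshold

# e.g. face, voice and keystroke scores for a genuine attempt vs. an impostor
genuine = [0.8, 0.7, 0.6]
impostor = [0.3, 0.4, 0.2]
print(accept(genuine))   # True
print(accept(impostor))  # False
```

Even this simple rule shows the benefit of fusion: a modality that performs poorly in one attempt can be compensated by the others, which tends to lower both false rejections and false acceptances compared with any single matcher.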
In addition to biometric verification techniques in mobile devices, this report has also addressed
non-coercion techniques that could be included in such systems to detect when users are under a coercion
attack. Two different types of non-coercion method have been identified: voluntary and
involuntary. The former are quite valuable practices for letting users send alarms, but they cannot
distinguish whether the user is only pretending to be under attack. Involuntary strategies, on the other
hand, also provide information about coercion attacks, although with a high false-alarm rate; however,
they are not controllable by users and consequently cannot be faked, especially those based on
physiological signals, but also those based on voice, face and movement.
Hence, a smart fusion of both approaches can provide mobile phones with sufficient tools to
detect coercion attacks.
Glossary
ANN: Artificial Neural Network
ANS: Autonomic Nervous System
AMR: Arithmetic Mean Rule
AER: Average Error Rate
API: Application Programming Interface
ASV: Automatic Speaker Verification
BCM: Borda Count Method
C-APCDA: Cascade Asymmetric Principal Component Discriminant Analysis
CCR: Correct Classification Rate
CCTV: Closed Circuit Television
CIR: Correct Identification Rate
CMOS: Complementary metal-oxide-semiconductor
COG: Center of Gravity
CPU: Central Processing Unit
CST: Class-Specific Threshold
DCF: Detection Cost Function
DCT: Discrete Cosine Transform
DCV: Discriminative Common Vectors
DET: Detection Error Trade-off
DLFBE: Decorrelated Log Filter-Bank Energies
dpi: Dots per inch
DTW: Dynamic Time Warping
DR: Detection Rate
ECG: Electrocardiography
EER: Equal Error Rate
ERE: Eigenfeature Regularization and Extraction
FAR: False Acceptance Rate
FFT: Fast Fourier Transform
FNMR: False Non-Match Rate
FMR: False Match Rate
FPGA: Field Programmable Gate Array
FPR: False Positive Rate
FPU: Floating Point Unit
FRR: False Rejection Rate
FTA: Failure-to-acquire Rate
FTE: Failure-to-enrol Rate
GAR: Genuine Acceptance Rate
GMM: Gaussian Mixture Models
GMR: Genuine Match Rate
GSR: Galvanic Skin Response
HMM: Hidden Markov Model
HI: Histogram Intersection
HR: High Resolution
HRV: Heart Rate Variability
HTER: Half Total Error Rate
IR: Infra-Red
ISV: Inter-session Variability
JFA: Joint Factor Analysis
KDCV: Kernel Discriminative Common Vectors
KLT: Karhunen-Loève Transform
KNN: K-Nearest Neighbour
LBP: Local Binary Patterns
LDA: Linear Discriminant Analysis
LFCC: Linear Frequency Cepstral Coefficients
LLR: Likelihood Ratio
MFCC: Mel Frequency Cepstral Coefficients
MLP: Multi-layer perceptron
mRMR: minimum Redundancy Maximum Relevance
NICE: Noisy Iris Challenge Evaluation
NIR: Near-Infra-Red
NN: Neural Network
PCA: Principal Component Analysis
PCAS: Personalised Centralized Authentication System
PDA: Personal Digital Assistant
PIN: Personal Identification Number
PLP: Perceptual Linear Prediction
PPG: Photoplethysmography
RBF: Radial Basis Function
RF: Random Forests
RHE: Reduction of High-scores Effect
ROC: Receiver Operating Characteristic
ROI: Region Of Interest
SCB: Skin Color Based
SIM: Subscriber Identity Module
SNR: Signal-to-Noise Ratio
SRS: Speaker Recognition Systems
SVM: Support Vector Machine
TMD: Touched Multi-Layered Drawing
UBM: Universal Background Model
UMACE: Unconstrained Minimum Average Correlation Energy
VQ: Vector Quantization
VW: Visible Wavelength
WHT: Walsh-Hadamard Transform
ZJU: Zhejiang University
Bibliography
[1] Aadhaar project (accessed on 17th November 2013). http://timesofindia.indiatimes.com/city/kolkata/State-govt-to-complete-Aadhaar-card-process-by-Feb-28-next-year/articleshow/22502293.cms.
[2] Agnitio. http://www.agnitio-corp.com/.
[3] Apple and fingerprints. http://fingerchip.pagesperso-orange.fr/biometrics/types/fingerprint_apple.htm.
[4] Armorvox. http://www.armorvox.com/.
[5] ATM SafetyPIN software. http://en.wikipedia.org/wiki/ATM_SafetyPIN_software.
[6] Advanced sensor development for attention, stress, vigilance and sleep/wakefulness monitoring. http://www.sensation-eu.org/.
[7] Biometric identification technology ethics. http://www.biteproject.org/.
[8] Biometrics evaluation and testing. http://www.beat-eu.org/.
[9] Biosec: Security and privacy in life sciences data. http://web.imis.athena-innovation.gr/projects/BIOSEC/.
[10] Biosecure. http://biosecure.it-sudparis.eu.
[11] Chaos Computer Club breaks Apple TouchID. http://www.ccc.de/en/updates/2013/ccc-breaks-apple-touchid.
[12] Fingerchip. http://fingerchip.pagesperso-orange.fr/biometrics/types/fingerprint_products_pdaphones.htm.
[13] The focus on biometrics in the mass market. http://tomorrowstransactions.com/2013/09/the-focus-on-biometrics-in-the-mass-market/.
[14] FVC-onGoing: on-line evaluation of fingerprint recognition algorithms. http://biolab.csr.unibo.it/fvcongoing.
[15] Group of biometrics, biosignals and security. http://www.gb2s.es/.
[16] How to fake fingerprints? http://dasalte.ccc.de/biometrie/fingerabdruck_kopieren?language=en.
[17] Iris recognition for mobile phones. http://www.mobilepaymentsworld.com/iris-recognition-for-mobile-phones/.
[18] Keyspot: Keyword spotting in continuous speech. https://www.idiap.ch/scientific-research/projects/keyword-spotting-in-continuous-speech.
[19] Mobbeel website: iris (accessed on 17th November 2013). http://www.mobbeel.com/technology/iris/.
[20] Mobile biometry. http://www.mobioproject.org/.
[21] Norwegian information security lab. http://www.nislab.no.
[22] Nuance. http://www.nuance.com/.
[23] Patently Apple. http://www.patentlyapple.com/patently-apple/biometrics/.
[24] Phonearena. http://www.phonearena.com/news/HTCs-implementation-of-the-fingerprint-sensor-shows-why-others-have-failed-in-this-before_id48720.
[25] PolyU biometrics image databases. http://mla.sdu.edu.cn/PolyU_Biometrics_Image_Databases.html.
[26] Securephone. http://www.secure-phone.info/.
[27] Sestek. http://www.sestek.com/.
[28] Speech technology center. http://speechpro.com/.
[29] Trusted biometrics under spoofing attacks (tabula rasa). http://www.tabularasa-euproject.org/.
[30] Unobtrusive authentication using activity related and soft biometrics. http://www.actibio.eu/.
[31] Usit code. www.wavelab.at/sources.
[32] Valid soft. http://www.validsoft.com/.
[33] Voice biometrics group. http://www.voicebiogroup.com/.
[34] Voice trust. http://www.voicetrust.com/.
[35] Voice vault. http://www.voicevault.com/.
[36] Vtt technical research centre of finland. http://www.vtt.fi.
[37] Information technology – biometric performance testing and reporting – part 1: Principles and
framework, 2006.
[38] National Institute of Standards and Technology. Iris Challenge Evaluation, 2006.
[39] VeryEye Iris Recognition Technology, 2008.
[40] Segmentation of Visible Wavelength Iris Images Captured At-a-distance and On-the-move, volume 28, 2010.
[41] Noisy Iris Challenge Evaluation II - Recognition of Visible Wavelength Iris Images Captured
At-a-distance and On-the-move, volume 33, 2012.
[42] A. Buchoux and N. L. Clarke. Deployment of keystroke analysis on a smartphone. 2008.
[43] Andrea F. Abate, Michele Nappi, Daniel Riccio, and Gabriele Sabatino. 2D and 3D face recognition: A survey. Pattern Recognition Letters, 28(14):1885–1906, October 2007.
[44] Mark Abernethy. User Authentication Incorporating Feature Level Data Fusion of Multiple
Biometric Characteristics. PhD thesis, Murdoch University, 2011.
[45] Gil Abramovich, Kevin Harding, Swaminathan Manickam, Joseph Czechowski, Vijay Paruchuru,
Robert Tait, Christopher Nafis, and Arun Vemury. Mobile, contactless, single-shot, fingerprint
capture system, 2010.
[46] Miguel Adán, Antonio Adán, Andrés S. Vázquez, and Roberto Torres. Biometric verification/identification based on hands natural layout. Image and Vision Computing, 26(4):451
– 465, 2008.
[47] Faizan Ahmad, Aaima Najam, and Zeeshan Ahmed. Image-based Face Detection and Recognition: “State of the Art”. International Journal of Computer Science Issues, 9(6):3–6, 2013.
[48] H J Ailisto, M Lindholm, J Mantyjarvi, E Vildjiounaite, and S M Makela. Identifying people
from gait pattern with accelerometers. In Proceedings of SPIE, volume 5779, page 7, 2005.
[49] Amani Al-Ajlan. Survey on fingerprint liveness detection. In Biometrics and Forensics (IWBF),
2013 International Workshop on, pages 1–5. IEEE, 2013.
[50] A. Alarifi, I. Alkurtass, and A.-M.S. Al-Salman. Arabic text-dependent speaker verification
for mobile devices using artificial neural networks. In Machine Learning and Applications and
Workshops (ICMLA), 2011 10th International Conference on, volume 2, pages 350–353, 2011.
[51] Federico Alegre, Asmaa Amehraye, Nicholas Evans, et al. Spoofing countermeasures to protect
automatic speaker verification from voice conversion. In ICASSP 2013 (@ icassp13). 2013 IEEE
International Conference on Acoustics, Speech, and Signal Processing. Vancouver, Canada.,
2013.
[52] Federico Alegre, Ravichander Vipperla, Nicholas Evans, et al. Spoofing countermeasures for
the protection of automatic speaker recognition systems against attacks with artificial signals.
In INTERSPEECH 2012, 13th Annual Conference of the International Speech Communication
Association, 2012.
[53] Federico Alegre, Ravichander Vipperla, Nicholas Evans, and Benoı̂t Fauve. On the vulnerability
of automatic speaker recognition to spoofing attacks with artificial signals. In Signal Processing
Conference (EUSIPCO), 2012 Proceedings of the 20th European, pages 36–40. IEEE, 2012.
[54] V. Andrei, C. Paleologu, and C. Burileanu. Implementation of a real-time text dependent speaker
identification system. In Speech Technology and Human-Computer Dialogue (SpeD), 2011 6th
Conference on, pages 1–6, 2011.
[55] Julio Angulo and W Erik. Exploring Touch-screen Biometrics for User Identification on Smart
Phones. 2011.
[56] Ahmed Arif, Michel Pahud, Ken Hinckley, and WIlliam Buxton. A tap and gesture hybrid
method for authenticating smartphone users. Proceedings of the 15th international conference
on Human-computer interaction with mobile devices and services - MobileHCI ’13, page 486,
2013.
[57] David A. Atchison, Emma L. Markwell, Sanjeev Kasthurirangan, James M. Pope, George Smith,
and Peter G. Swann. Age-related changes in optical and biometric characteristics of emmetropic
eyes. J Vis, 8(4):29.1–2920, 2008.
[58] B Auvinet, G Berrut, C Touzard, L Moutel, N Collet, D Chaleil, and E Barrey. Reference data
for normal subjects obtained with an accelerometric device. Gait & Posture, 16(2):124–134,
October 2002.
[59] Adam J Aviv, Katherine Gibson, Evan Mossop, Matt Blaze, and Jonathan M Smith. Smudge
Attacks on Smartphone Touch Screens. 2010.
[60] M Bächlin, J Schumm, D Roggen, and G Tröster. Quantifying Gait Similarity: User Authentication and Real-World Challenge. In Massimo Tistarelli and Mark S Nixon, editors, Advances
in Biometrics, volume 5558, chapter 105, pages 1040–1049. Springer Berlin Heidelberg, Berlin,
Heidelberg, 2009.
[61] A.M. Badawi and M.S. Kamel. Free style hand geometry verification system. In Circuits and
Systems, 2003 IEEE 46th Midwest Symposium on, volume 1, pages 324–327 Vol. 1, 2003.
[62] Jiamin Bai, Tian-Tsong Ng, Xinting Gao, and Yun-Qing Shi. Is physics-based liveness detection
truly possible with a single image? Proceedings of 2010 IEEE International Symposium on
Circuits and Systems, pages 3425–3428, May 2010.
[63] Gonzalo Bailador, Carmen Sanchez-Avila, Javier Guerra Casanova, and Alberto de Santos Sierra. Analysis of pattern recognition techniques for in-air signature biometrics. Pattern
Recognition, 44(10-11):2468–2478, 2011.
[64] Gonzalo Bailador, Carmen Sanchez-Avila, Javier Guerra Casanova, and Alberto de Santos Sierra. Analysis of pattern recognition techniques for in-air signature biometrics. Pattern
Recognition, 44(10-11):2468–2478, 2011.
[65] Gonzalo Bailador, Carmen Sánchez-Ávila, Alberto De-Santos-Sierra, and Javier GuerraCasanova. Speed-Independent gait identification for mobile devices. International Journal of
Pattern Recognition and Artificial Intelligence, 26(08):1260013, 2012.
[66] Enrique Bailly-Bailliére, Samy Bengio, Frédéric Bimbot, Miroslav Hamouz, Josef Kittler, Johnny
Mariéthoz, Jiri Matas, Kieron Messer, Vlad Popovici, Fabienne Porée, Belen Ruiz, and JeanPhilippe Thiran. The banca database and evaluation protocol. In Proceedings of the 4th International Conference on Audio- and Video-based Biometric Person Authentication, AVBPA’03,
pages 625–638, Berlin, Heidelberg, 2003. Springer-Verlag.
[67] Swarna Bajaj and Sumeet Kaur. Typing Speed Analysis of Human for Password Protection (Based On Keystrokes Dynamics). (2):88–91, 2013.
[68] Gazmend Bajrami. Activity Identification for Gait Recognition Using Mobile Devices. PhD
thesis, Gjøvik University College, 2011.
[69] Sarah E. Baker, Kevin W. Bowyer, and Patrick J. Flynn. Empirical evidence for correct iris
match score degradation with increased time-lapse between gallery and probe matches. In Proc.
Int. Conf. on Biometrics (ICB2009), 2009.
[70] Jorn Bakker, Mykola Pechenizkiy, and Natalia Sidorova. What’s Your Current Stress Level?
Detection of Stress Patterns from GSR Sensor Data. 2011 IEEE 11th International Conference
on Data Mining Workshops, (1):573–580, December 2011.
[71] Muzaffar Bashir and Florian Kempf. Advanced biometric pen system for recording and analyzing
handwriting. Signal Processing Systems, 68(1):75–81, 2012.
[72] Muzaffar Bashir, Georg Scharfenberg, and Jürgen Kempf. Person authentication by handwriting
in air using a biometric smart pen device. In BIOSIG, pages 219–226, 2011.
[73] C BenAbdelkader, R Cutler, H Nanda, and L Davis. EigenGait: Motion-Based Recognition of
People Using Image Self-Similarity. In Audio- and Video-Based Biometric Person Authentication, pages 284–294. 2001.
[74] Samy Bengio, Miroslav Hamouz, Josef Kittler, Jiri Matas, Kieron Messer, Belen Ruiz, JeanPhilippe Thiran, and Vlad Popovici. The BANCA Database and Evaluation Protocol. In Audioand Video-Based Biometric Person Authentication, pages 625–638. Springer Berlin Heidelberg,
2003.
[75] Brett Beranek. Voice biometrics: success stories, success factors and what’s next. Biometric
Technology Today, 2013(7):9 – 11, 2013.
[76] Alphonse Bertillon. La couleur de l’iris. Rev Sci, 36(3):65–73, 1885.
[77] Qin Bin, Pan Jian-fei, Cao Guang-zhong, and Du Ge-guo. The anti-spoofing study of vein
identification system. In Computational Intelligence and Security, 2009. CIS ’09. International
Conference on, volume 2, pages 357–360, 2009.
[78] BioID AG. MyBioID personal recognition: easy, secure online login and identity management.
https://mybioid.com/. Web. Last access December, 2013.
[79] Biometrica Systems. Focal Point. http://biometrica.com/focal_point/. Web. Last access
December, 2013.
[80] L. Z. Bito, A. Matheny, K. J. Cruickshanks, D. M. Nondahl, and O. B. Carino. Eye color changes
past early childhood. The louisville twin study. Arch Ophthalmol, 115(5):659–663, May 1997.
[81] Duane M. Blackburn, Mike Bone, and P. Jonathon Phillips. Face Recognition Vendor Test 2001.
Technical report, DoD Counterdrug Technology Development Program Office, 2001.
[82] R. Blanco-Gonzalo, O. Miguel-Hurtado, A. Mendaza-Ormaza, and R. Sanchez-Reillo. Handwritten signature recognition in mobile scenarios: Performance evaluation. In Security Technology
(ICCST), 2012 IEEE International Carnahan Conference on, pages 174–179, 2012.
[83] Bojan Blažica, Daniel Vladušič, and Dunja Mladenić. MTi: A method for user identification for
multitouch displays. International Journal of Human-Computer Studies, 71(6):691–702, June
2013.
[84] Cheng Bo, Lan Zhang, and Xy Li. SilentSense: Silent User Identification via Dynamics of Touch
and Movement Behavioral Biometrics. arXiv preprint arXiv:1309.0073, August 2013.
[85] G. Boreki and A. Zimmer. Hand geometry: a new approach for feature extraction. In Automatic
Identification Advanced Technologies, 2005. Fourth IEEE Workshop on, pages 149–154, 2005.
[86] Kevin W. Bowyer. Kevin W. Bowyer - Research. http://www3.nd.edu/~kwb/research.htm.
Web. Last access December, 2013.
[87] Kevin W. Bowyer. The results of the NICE.II iris biometrics competition. Pattern Recognition
Letters, 33(8):965 – 969, 2012.
[88] Kevin W. Bowyer, Kyong Chang, and Patrick Flynn. A survey of approaches and challenges
in 3D and multi-modal 3D+2D face recognition. Computer Vision and Image Understanding,
101(1):1–15, January 2006.
[89] Kevin W. Bowyer, Karen Hollingsworth, and Patrick J. Flynn. Image understanding for iris
biometrics: A survey. Computer Vision and Image Understanding, 110:281–307, 2008.
[90] Kevin W. Bowyer, Karen P. Hollingsworth, and Patrick J. Flynn. A survey of iris biometrics
research: 2008-2010. In Mark J. Burge and Kevin W. Bowyer, editors, Handbook of Iris Recognition, Advances in Computer Vision and Pattern Recognition, pages 15–54. Springer London,
2013.
[91] H. Bredin and G. Chollet. Audio-visual speech synchrony measure for talking-face identity
verification. In Acoustics, Speech and Signal Processing, 2007. ICASSP 2007. IEEE International
Conference on, volume 2, pages II–233–II–236, 2007.
[92] Christian Breitwieser, Oliver Terbu, Andreas Holzinger, Clemens Brunner, and Stefanie Lindstaedt. iscope, viewing biosignals on mobile devices. In Pervasive Computing and the Networked
World Conference. ICPA-SWS 2012, pages 50–56, Istanbul, Turkey, 2012.
[93] R. R. Brooks and S. S. Iyengar. Multi-sensor fusion: fundamentals and applications with software. Prentice-Hall, Inc., 1998.
[94] Kevin Brunet, Karim Taam, Estelle Cherrier, Ndiaga Faye, Christophe Rosenberger, et al.
Speaker recognition for mobile user authentication: An android solution. In 8ème Conférence
sur la Sécurité des Architectures Réseaux et Systèmes d’Information (SAR SSI), 2013.
[95] Horst Bunke, János Csirik, Zoltán Gingl, and Erika Griechisch. Online signature verification
method based on the acceleration signals of handwriting samples. In Proceedings of the 16th
Iberoamerican Congress conference on Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications, CIARP’11, pages 499–506, Berlin, Heidelberg, 2011. SpringerVerlag.
[96] Bettina Burger, Dana Fuchs, Eli Sprecher, and Peter Itin. The immigration delay disease:
Adermatoglyphia-inherited absence of epidermal ridges. Journal of the American Academy of
Dermatology, 64(5):974 – 980, 2011.
[97] Joseph P Campbell Jr. Speaker recognition: A tutorial. Proceedings of the IEEE, 85(9):1437–
1462, 1997.
[98] P. Campisi, E. Maiorana, M. Lo Bosco, and a. Neri. User authentication using keystroke dynamics for cellular phones. IET Signal Processing, 3(4):333, 2009.
[99] L. Cardoso, A. Barbosa, F. Silva, A.M.G. Pinheiro, and H. Proença. Iris biometrics: Synthesis of
degraded ocular images. Information Forensics and Security, IEEE Transactions on, 8(7):1115–
1125, 2013.
[100] Javier Guerra Casanova, Carmen Sánchez Ávila, Gonzalo Bailador, and Alberto de Santos Sierra.
Authentication in mobile devices through hand gesture recognition. Int. J. Inf. Sec., 11(2):65–83,
2012.
[101] Javier Guerra Casanova, Carmen Sánchez Ávila, Alberto de Santos Sierra, and Gonzalo Bailador
del Pozo. Score optimization and template updating in a biometric technique for authentication
in mobiles based on gestures. Journal of Systems and Software, 84(11):2013–2021, 2011.
[102] Center for Biological and Computational Learning, MIT. CBCL Face Recognition Database. http://cbcl.mit.edu/software-datasets/heisele/facerecognition-database.html. Web. Last access December, 2013.
[103] Centre for Vision, Speech and Signal Processing. University of Surrey. The extended M2VTS
Database. http://www.ee.surrey.ac.uk/CVSSP/xm2vtsdb/. Web. Last access December,
2013.
[104] Murali Mohan Chakka, André Anjos, Sébastien Marcel, Roberto Tronci, Daniele Muntoni, Gianluca Fadda, Maurizio Pili, Nicola Sirena, Gabriele Murgia, Marco Ristori, Fabio Roli, Junjie Yan, Dong Yi, Zhen Lei, Zhiwei Zhang, Stan Z. Li, William Robson Schwartz, Anderson Rocha, Helio Pedrini, Javier Lorenzo-Navarro, Modesto Castrillón-Santana, Jukka Määttä, Abdenour Hadid, and Matti Pietikäinen. Competition on Counter Measures to 2-D Facial Spoofing Attacks. International Joint Conference on Biometrics, pages 2–7, 2011.
[105] Chi Ho Chan. The BANCA Database. http://www.ee.surrey.ac.uk/CVSSP/banca/. Web.
Last access December, 2013.
[106] E. Chandra and C. Sunitha. A review on speech and speaker authentication system using voice
signal feature selection and extraction. In Advance Computing Conference, 2009. IACC 2009.
IEEE International, pages 1341–1346, 2009.
[107] Kyong I Chang, Kevin W Bowyer, and Patrick J Flynn. Face Recognition Using 2D and 3D
Facial Data. Workshop on Multimodal User Authentication (MMUA), (December), 2003.
[108] Tanvi Chauhan and Sunil Sharma. Literature Report on Face Detection with Skin & Reorganization using Genetic Algorithm. International Journal of Advanced and Innovative Research,
2(2):256–262, 2013.
[109] Lian-Wu Chen, Wu Guo, and Li-Rong Dai. Speaker verification against synthetic speech. In
Chinese Spoken Language Processing (ISCSLP), 2010 7th International Symposium on, pages
309–312, 2010.
[110] Yanling Chen, E. Heimark, and D. Gligoroski. Personal threshold in a small scale text-dependent
speaker recognition. In Biometrics and Security Technologies (ISBAST), 2013 International
Symposium on, pages 162–170, 2013.
[111] Girija Chetty and Michael Wagner. Liveness verification in audio-video speaker authentication.
In Proceeding of International Conference on Spoken Language Processing ICSLP, volume 4,
pages 2509–2512, 2004.
[112] Hsin-yi Chiang and Sonia Chiasson. Improving user authentication on mobile devices : A
Touchscreen Graphical Password. pages 251–260, 2013.
[113] I. Chingovska, J. Yang, Z. Lei, D. Yi, S. Z. Li, C. Glaser, N. Damer, A. Kuijper, A. Nouak,
J. Komulainen, T. Pereira, S. Gupta, S. Khandelwal, S. Bansal, A. Rai, T. Krishna, D. Goyal,
H. Zhang, I. Ahmad, S. Kiranyaz, M. Gabbouj, R. Tronci, M. Pili, N. Sirena, F. Roli, J. Galbally,
J. Fierrez, A. Pinto, H. Pedrini, W. S. Schwartz, A. Rocha, A. Anjos, and S. Marcel. The 2nd
Competition on Counter Measures to 2D Face Spoofing Attacks. pages 1–6, 2013.
[114] Gokul T. Chittaranjan, Mashfiqui Rabbi, Hong Lu, Andrew T. Campbell, Marianne Schmid
Mast, Denise Frauendorfer, Tanzeem Choudhury, and Daniel Gatica-Perez. StressSense: Detecting stress in unconstrained acoustic environments using smartphones. In ACM Conference
on Ubiquitous Computing. UbiComp’12, pages 351–360, 2012.
[115] Yu-Tzu Chiu. In-air signature gives mobile security to the password-challenged. IEEE Spectrum,
2013.
[116] Jongyoon Choi and Ricardo Gutierrez-Osuna. Using Heart Rate Monitors to Detect Mental
Stress. In Body Sensor Networks, pages 221–225, 2009.
[117] Jongyoon Choi and Ricardo Gutierrez-Osuna. Estimating mental stress using a wearable cardiorespiratory sensor. In IEEE Sensors, pages 150–154. IEEE, November 2010.
[118] Ming Ki Chong. Usable authentication for mobile banking. Master’s thesis, University of Cape
Town, Republic of South Africa, 2009.
[119] Michal Choraś and Rafal Kozik. Contactless palmprint and knuckle biometrics for mobile devices.
Pattern Analysis and Applications, 15(1):73–85, 2012.
[120] Tanzeem Choudhury, Brian Clarkson, Tony Jebara, and Alex Pentland. Multimodal Person
Recognition using Unconstrained Audio and Video. International Conference on Audio and
Video-Based Person Authentication Proceedings, pages 176–181, 1999.
[121] Cognitec. FaceVACS VideoScan - Cognitec. http://www.cognitec-systems.de/facevacs-videoscan.html. Web. Last access December, 2013.
[122] Pietro Coli, Gian Luca Marcialis, and Fabio Roli. Vitality detection from fingerprint images: a
critical survey. In Advances in Biometrics, pages 722–731. Springer, 2007.
[123] D G Vaishnav College. Keystroke Dynamics for Biometric Authentication, A Survey. 2013.
[124] Computer Science and Engineering Department. U.C. San Diego. Extended Yale Face Database.
http://vision.ucsd.edu/content/extended-yale-face-database-b-b. Web. Last access
December, 2013.
[125] Heather Crawford, Karen Renaud, and Tim Storer. A framework for continuous, transparent
mobile device authentication. Computers & Security, pages 1–10, May 2013.
[126] J.M. Cross and C.L. Smith. Thermographic imaging of the subcutaneous vascular network of the
back of the hand for biometric identification. In Security Technology, 1995. Proceedings. Institute
of Electrical and Electronics Engineers 29th Annual 1995 International Carnahan Conference
on, pages 20–35, 1995.
[127] János Csirik, Zoltán Gingl, and Erika Griechisch. The effect of training data selection and
sampling time intervals on signature verification. In First International Workshop on Automated
Forensic Handwriting Analysis (AFHA), pages 6–10, 2011.
[128] Fred Cummins, Marco Grimaldi, Thomas Leonard, and Juraj Simko. The chains speech corpus:
Characterizing individual speakers. In Proc of SPECOM, pages 1–6, 2006.
[129] D Cunado, M S Nixon, and J N Carter. Automatic extraction and description of human gait
models for recognition purposes. Computer Vision and Image Understanding, 90(1):1–41, April
2003.
[130] A. Czajka and P. Bulwan. Biometric verification based on hand thermal images. In Biometrics
(ICB), 2013 International Conference on, pages 1–6, 2013.
[131] Mousumi Dasgupta. Innovations in biometrics for consumer electronics (technical insights).
Technical Report D503-01, Frost & Sullivan, August 2013.
[132] John Daugman. High confidence visual recognition of persons by a test of statistical independence. IEEE Trans. Pattern Anal. Mach. Intell., 15(11):1148–1161, 1993.
[133] John Daugman. Biometric personal identification system based on iris analysis. U.S. Patent,
5291560, 1994.
[134] Guillaume Dave, Xing Chao, and Kishore Sriadibhatla. Face Recognition in Mobile Phones.
Department of Electrical Engineering Stanford University, USA, 2010.
[135] P.L. De Leon, M. Pucher, J. Yamagishi, I. Hernaez, and I. Saratxaga. Evaluation of speaker
verification security and detection of hmm-based synthetic speech. Audio, Speech, and Language
Processing, IEEE Transactions on, 20(8):2280–2290, 2012.
[136] P.L. De Leon and B. Stewart. Synthetic speech detection based on selected word discriminators.
In Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on,
pages 3004–3008, 2013.
[137] A. de Santos Sierra, C.S. Avila, G. Bailador del Pozo, and J. Guerra Casanova. Gaussian
multiscale aggregation oriented to hand biometric segmentation in mobile devices. In Nature
and Biologically Inspired Computing (NaBIC), 2011 Third World Congress on, pages 237–242,
2011.
[138] A. de Santos Sierra, J. Guerra Casanova, C.S. Avila, and V.J. Vera. Silhouette-based hand
recognition on mobile devices. In Security Technology, 2009. 43rd Annual 2009 International
Carnahan Conference on, pages 160–166, 2009.
[139] Alberto de Santos Sierra. Design, implementation and evaluation of an unconstrained and contactless biometric system based on hand geometry and stress detection. PhD thesis, Universidad
Politécnica de Madrid, 2012.
[140] Alberto de Santos Sierra, C Sanchez Avila, G Bailador del Pozo, and J Guerra Casanova. A
stress detection system based on physiological signals and fuzzy logic. Industrial Electronics,
IEEE Transactions on, (99):1, 2011.
[141] N. Dehak, P. Kenny, R. Dehak, P. Dumouchel, and P. Ouellet. Front-end factor analysis for
speaker verification. Audio, Speech, and Language Processing, IEEE Transactions on, 19(4):788–
798, 2011.
[142] Najim Dehak, Réda Dehak, Patrick Kenny, Niko Brummer, Pierre Ouellet, and Pierre Dumouchel. Support vector machines versus fast scoring in the low-dimensional total variability
space for speaker verification. In Interspeech, September 2009.
[143] K. Delac and M. Grgic. A survey of biometric recognition methods. In Electronics in Marine,
2004. Proceedings Elmar 2004. 46th International Symposium, pages 184–193, 2004.
[144] Mohammad O. Derawi, Patrick Bours, and Kjetil Holien. Improved Cycle Detection for Accelerometer Based Gait Authentication. In 2010 Sixth International Conference on Intelligent
Information Hiding and Multimedia Signal Processing, pages 312–317. IEEE, October 2010.
[145] Mohammad Omar Derawi, Claudia Nickel, Patrick Bours, and Christoph Busch. Unobtrusive
User-Authentication on Mobile Phones Using Biometric Gait Recognition. In 2010 Sixth International Conference on Intelligent Information Hiding and Multimedia Signal Processing, pages
306–311. IEEE, October 2010.
[146] N.P. Dhole and A.A. Gurjar. Detection of Speech under Stress: A Review. International Journal
of Engineering and Innovative Technology (IJEIT), 2(10):36–38, 2013.
[147] G. Dimauro, S. Impedovo, M. G. Lucchese, R. Modugno, and G. Pirlo. Recent advancements
in automatic signature verification. In Frontiers in Handwriting Recognition, 2004. IWFHR-9
2004. Ninth International Workshop on, pages 179–184, 2004.
[148] Yuhang Ding, Dayan Zhuang, and Kejun Wang. A study of hand vein recognition method. In
Mechatronics and Automation, 2005 IEEE International Conference, volume 4, pages 2106–2110
Vol. 4, 2005.
[149] Wenbo Dong, Zhenan Sun, and Tieniu Tan. A design of iris recognition system at a distance.
In Pattern Recognition, 2009. CCPR 2009. Chinese Conference on, pages 1–5, 2009.
[150] Sun Dong-mei and Qiu Zheng-ding. Automated hand shape verification using hmm. In Signal
Processing, 2004. Proceedings. ICSP ’04. 2004 7th International Conference on, volume 3, pages
2274–2277 vol.3, 2004.
[151] J. Doublet, O. Lepetit, and M. Revenu. Contactless hand recognition based on distribution
estimation. In Biometrics Symposium, 2007, pages 1–6, 2007.
[152] B. Dumas, C. Pugin, J. Hennebert, D. Petrovska Delacretaz, A. Humm, F. Evequoz, R. Ingold,
and D. Von Rotz. MyIDea - multimodal biometrics database, description of acquisition protocols.
In Proc. of 3rd COST 275 Workshop (COST 275), pages 59–62, Hatfield, UK, 2005.
[153] B. Dumas, C. Pugin, J. Hennebert, D. Petrovska-Delacrétaz, A. Humm, F. Evéquoz, D. Von
Rotz, and R. Ingold. MyIDEA - Multimodal biometrics database description of acquisition
protocols. In Third COST 275 Workshop (COST 275), pages 59–62, 2005.
[154] Nicolae Duta. A survey of biometric technology based on hand shape. Pattern Recognition,
42(11):2797 – 2806, 2009.
[155] E. Ellavarason and C. Rathgeb. Template ageing in iris biometrics: A cross-algorithm investigation of the nd-iris-template-ageing-2008-2010. Technical report, Biometrics and Internet-Security
Research Group, Center for Advanced Security Research, Darmstadt, Germany, 2013.
[156] Paul Ekman. Lie catching and microexpressions. The philosophy of deception, pages 118–133,
2009.
[157] Paul Ekman and Wallace V. Friesen. Unmasking the face: A guide to recognizing emotions from
facial clues. Ishk, 2003.
[158] Ibrahim El-Henawy, Magdy Rashad, Omima Nomir, and Kareem Ahmed. Online signature
verification: State of the art. International Journal of Computers & Technology, 4(2c), 2013.
[159] Clayton Epp, Michael Lippold, and Regan L. Mandryk. Identifying emotional states using
keystroke dynamics. In Proceedings of the 2011 annual conference on Human factors in computing systems - CHI ’11, pages 715–724, Vancouver, Canada, 2011. ACM Press.
[160] Emre Ertin, Nathan Stohs, Santosh Kumar, and Andrew Raij. AutoSense: unobtrusively wearable sensor suite for inferring the onset, causality, and consequences of stress in the field. SenSys
2011, pages 274–287, 2011.
[161] M. Espinoza and C. Champod. Using the number of pores on fingerprint images to detect
spoofing attacks. In Hand-Based Biometrics (ICHB), 2011 International Conference on, pages
1–5, 2011.
[162] Nicholas Evans, Tomi Kinnunen, and Junichi Yamagishi. Spoofing and countermeasures for
automatic speaker verification. In INTERSPEECH, 2013.
[163] Nicolas Eveno and Laurent Besacier. A speaker independent "liveness" test for audio-visual
biometrics. In INTERSPEECH, pages 3081–3084, 2005.
[164] Face Recognition Group, Chinese Academy of Science. The PEAL Face Database. http:
//www.jdl.ac.cn/peal/index.html. Web. Last access December, 2013.
[165] FaceKey. Biometric Access Control. http://www.facekey.com/. Web. Last access December,
2013.
[166] Hourieh Fakourfar and Serge Belongie. Fingerprint recognition system performance in the maritime environment. In Applications of Computer Vision (WACV), 2009 Workshop on, pages
1–5. IEEE, 2009.
[167] B. Fasel and Juergen Luettin. Automatic facial expression analysis: a survey. Pattern Recognition, 36(1):259–275, 2003.
[168] Marcos Faundez-Zanuy. Data Fusion in Biometrics. Aerospace and Electronic Systems Magazine,
IEEE, 20(January):34–38, 2005.
[169] Marcos Faundez-Zanuy, Josep Roure, Virginia Espinosa-Duró, and Juan Antonio Ortega. An efficient face verification method in a transformed domain. Pattern Recognition Letters, 28(7):854–
858, May 2007.
[170] B. Fauve, H. Bredin, W. Karam, F. Verdet, A. Mayoue, G. Chollet, J. Hennebert, R. Lewis,
J. Mason, C. Mokbel, and D. Petrovska. Some results from the biosecure talking face evaluation
campaign. In Acoustics, Speech and Signal Processing, 2008. ICASSP 2008. IEEE International
Conference on, pages 4137–4140, 2008.
[171] A. Fazel and S. Chakrabartty. An overview of statistical pattern recognition techniques for
speaker verification. Circuits and Systems Magazine, IEEE, 11(2):62–81, 2011.
[172] S.P. Fenker and K.W. Bowyer. Experimental evidence of a template aging effect in iris biometrics.
In Applications of Computer Vision (WACV), 2011 IEEE Workshop on, pages 232–239, 2011.
[173] S.P. Fenker and K.W. Bowyer. Analysis of template aging in iris biometrics. In Computer Vision
and Pattern Recognition Workshops (CVPRW), 2012 IEEE Computer Society Conference on,
pages 45–51, 2012.
[174] M.A. Ferrer, J. Fabregas, M. Faundez, J.B. Alonso, and C. Travieso. Hand geometry identification system performance. In Security Technology, 2009. 43rd Annual 2009 International
Carnahan Conference on, pages 167–171, 2009.
[175] J. Fierrez, J. Galbally, J. Ortega-Garcia, M. R. Freire, F. Alonso-Fernandez, D. Ramos, D. T.
Toledano, J. Gonzalez-Rodriguez, J. A. Siguenza, J. Garrido-Salas, E. Anguiano, G. Gonzalezde Rivera, R. Ribalda, M. Faundez-Zanuy, J. A. Ortega, V. Cardeñoso Payo, A. Viloria, C. E.
Vivaracho, Q. I. Moro, J. J. Igarza, J. Sanchez, I. Hernaez, C. Orrite-Uruñuela, F. MartinezContreras, and J. J. Gracia-Roche. BiosecurID: a multimodal biometric database. Pattern
Analysis and Applications, 13(2):235–246, February 2009.
[176] Julian Fierrez, Javier Ortega-Garcia, Doroteo Torre Toledano, and Joaquin Gonzalez-Rodriguez.
Biosec baseline corpus: A multimodal biometric database. Pattern Recognition, 40(4):1389–1392,
April 2007.
[177] Julian Fierrez-aguilar, Javier Ortega-garcia, Doroteo Torre-toledano, and Joaquin Gonzalezrodriguez. Biosec baseline corpus: A multimodal biometric database. Pattern Recognition,
pages 1389–1392, 2007.
[178] Julian Fierrez-Aguilar, Javier Ortega-Garcia, Doroteo Torre-Toledano, and Joaquin GonzalezRodriguez. Biosec baseline corpus: A multimodal biometric database. Pattern Recognition,
pages 1389–1392, 2007.
[179] Montalvao Filho and Freire. Biochaves Project database. http://www.biochaves.com/en/
download.htm, 2006.
[180] Leonard Flom and Aran Safir. Iris recognition system. U.S. Patent, 4641349, 1987.
[181] Leong Lai Fong and Woo Chaw Seng. A comparison study on hand recognition approaches.
In Soft Computing and Pattern Recognition, 2009. SOCPAR ’09. International Conference of,
pages 364–368, 2009.
[182] Denis Foo Kune and Yongdae Kim. Timing attacks on PIN input devices. Proceedings of the
17th ACM conference on Computer and communications security - CCS ’10, page 678, 2010.
[183] Niall A. Fox, Brian A. O'Mullane, and Richard B. Reilly. Valid: A new practical audio-visual
database, and comparative results. In Audio-and Video-Based Biometric Person Authentication,
pages 777–786. Springer, 2005.
[184] Matthew Frampton and Sandeep Sripada. Detection of time-pressure induced stress in speech
via acoustic indicators. In 11th Annual Meeting of the Special Interest Group on Discourse and
Dialogue, pages 253–256, 2010.
[185] R.W. Frischholz and U. Dieckmann. BioID: a multimodal biometric identification system. Computer, 33(2):64–68, 2000.
[186] H Fronthaler and K Kollreider. Assuring Liveness in Biometric Identity Authentication by
Real-time Face Tracking. Proceedings of the IEEE International Conference on Computational
Intelligence for Homeland Security and Personal Safety, (July):21–22, 2004.
[187] S. Furui. Cepstral analysis technique for automatic speaker verification. Acoustics, Speech and
Signal Processing, IEEE Transactions on, 29(2):254–272, 1981.
[188] D. Gafurov, K. Helkala, and T. Soendrol. Gait recognition using acceleration from MEMS.
In First International Conference on Availability, Reliability and Security (ARES’06), pages 6
pp.–439. IEEE, 2006.
[189] D Gafurov and E Snekkenes. Gait recognition using wearable motion recording sensors.
EURASIP Journal on Advances in Signal Processing, 2009:1–16, 2009.
[190] D Gafurov, E Snekkenes, and P Bours. Spoof Attacks on Gait Authentication System. IEEE
Transactions on Information Forensics and Security., 2(3):491–502, 2007.
[191] D Gafurov, E Snekkenes, and T Buvarp. Robustness of Biometric Gait Authentication Against
Impersonation Attack. In On the Move to Meaningful Internet Systems OTM, pages 479–488,
2006.
[192] Davrondzhon Gafurov. Security Analysis of Impostor Attempts with Respect to Gender in Gait
Biometrics. In 2007 First IEEE International Conference on Biometrics: Theory, Applications,
and Systems, pages 1–6. IEEE, September 2007.
[193] Davrondzhon Gafurov and Patrick Bours. Improved hip-based individual recognition using
wearable motion recording sensor. In Security Technology, Disaster Recovery and Business
Continuity, pages 179–186. Springer, 2010.
[194] Davrondzhon Gafurov and Einar Snekkenes. Towards understanding the uniqueness of gait biometric. In 2008 8th IEEE International Conference on Automatic Face & Gesture Recognition,
pages 1–8. IEEE, September 2008.
[195] Davrondzhon Gafurov, Einar Snekkenes, and Patrick Bours. Gait Authentication and Identification Using Wearable Accelerometer Sensor. In 2007 IEEE Workshop on Automatic Identification
Advanced Technologies, pages 220–225. IEEE, June 2007.
[196] Davrondzhon Gafurov, Einar Snekkenes, and Patrick Bours. Improved Gait Recognition Performance Using Cycle Matching. In International Conference on Advanced Information Networking
and Applications Workshops, pages 836–841, 2010.
[197] Davrondzhon Gafurov and Einar Snekkenes. Arm Swing as a Weak Biometric for Unobtrusive
User Authentication. In 2008 International Conference on Intelligent Information Hiding and
Multimedia Signal Processing, pages 1080–1087. IEEE, August 2008.
[198] Javier Galbally, Raffaele Cappelli, Alessandra Lumini, Guillermo Gonzalez-de Rivera, Davide
Maltoni, Julian Fierrez, Javier Ortega-Garcia, and Dario Maio. An evaluation of direct attacks
using fake fingers generated from iso templates. Pattern Recognition Letters, 31(8):725–732,
2010.
[199] J Galbally-Herrero, J Fierrez-Aguilar, JD Rodriguez-Gonzalez, Fernando Alonso-Fernandez,
Javier Ortega-Garcia, and M Tapiador. On the vulnerability of fingerprint verification systems to fake fingerprints attacks. In Carnahan Conferences Security Technology, Proceedings
2006 40th Annual IEEE International, pages 130–136. IEEE, 2006.
[200] Guillaume Galou, Gérard Chollet, et al. Synthetic voice forgery in the forensic context: a short
tutorial. In Forensic Speech and Audio Analysis Working Group (ENFSI-FSAAWG), 2011.
[201] Wen Gao, Bo Cao, Shiguang Shan, Xilin Chen, Delong Zhou, Xiaohua Zhang, and Debin Zhao.
The CAS-PEAL Large-Scale Chinese Face Database and Baseline Evaluations. IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans, 38(1):149–161, January
2008.
[202] Sonia Garcia-Salicetti, Charles Beumier, Gerard Chollet, Bernadette Dorizzi, JeanLerouxles
Jardins, Jan Lunter, Yang Ni, and Dijana Petrovska-Delacretaz. Biomet: A multimodal person
authentication database including face, voice, fingerprint, hand and signature modalities. In Josef
Kittler and MarkS. Nixon, editors, Audio- and Video-Based Biometric Person Authentication,
volume 2688 of Lecture Notes in Computer Science, pages 845–853. Springer Berlin Heidelberg,
2003.
[203] Sonia Garcia-Salicetti, Charles Beumier, Bernadette Dorizzi, Jan Lunter, and Yang Ni.
BIOMET: A Multimodal Person Authentication Database Including Face, Voice, Fingerprint,
Hand and Signature Modalities. In Audio-and Video-Based Biometric Person Authentication,
pages 845–853. Springer Berlin Heidelberg, 2003.
[204] Sonia Garcia-Salicetti, Mohamed Anouar Mellakh, and Bernadette Dorizzi. Multimodal biometric score fusion: the mean rule vs. Support Vector classifiers. In European Signal Processing
Conference, 2005.
[205] Athinodoros S. Georghiades, Peter N. Belhumeur, and David J. Kriegman. From Few to Many:
Illumination Cone Models for Face Recognition under Variable Lighting and Pose. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(6):643–660, 2001.
[206] Dimitris Giakoumis, Anastasios Drosou, Pietro Cipresso, Dimitrios Tzovaras, George Hassapis,
Andrea Gaggioli, and Giuseppe Riva. Using activity-related behavioural features towards more
effective automatic stress detection. PloS one, 7(9), January 2012.
[207] Marc Tena Gil. ECG Recording and Heart Rate Detection with Textile-Electronic Integrated
Instrumentation Part I. PhD thesis, University of Borås School, 2012.
[208] Romain Giot, Alexandre Ninassi, Mohamad El-abed, and Christophe Rosenberger. Analysis of
the Acquisition Process for Keystroke Dynamics. 2012.
[209] Rodney Goh, Lihao Liu, Xiaoming Liu, and Tsuhan Chen. The CMU Face In Action (FIA)
Database. Analysis and Modelling of Faces and Gestures, pages 255–263, 2005.
[210] S. Gonzalez, C.M. Travieso, J.B. Alonso, and M.A. Ferrer. Automatic biometric identification
system by hand geometry. In Security Technology, 2003. Proceedings. IEEE 37th Annual 2003
International Carnahan Conference on, pages 281–284, 2003.
[211] Alan Goode. Mobile biometric security market forecasts provides market forecasts for the mobile biometric security market, between the years 2013 and 2018. Technical report, Goode
Intelligence, October 2013.
[212] J. D. P. Graham. Static tremor in anxiety states. Journal of neurology, neurosurgery, and
psychiatry, 8:57–60, 1945.
[213] Tosha Grey. Impact of time, weathering and surface type on fingerprinting. 2012 NCUR, 2013.
[214] Mislav Grgic and Kresimir Delac. Face Recognition Homepage. http://www.face-rec.org/
databases/. Web. Last access December, 2013.
[215] Mislav Grgic, Kresimir Delac, and Sonja Grgic. SCface, A surveillance cameras face database.
Multimedia Tools and Applications, 51(3):863–879, October 2009.
[216] Mislav Grgic, Kresimir Delac, Sonja Grgic, and Bozidar Klimpak. SCface - Surveillance Cameras
Face Database. http://www.scface.org/. Web. Last access December, 2013.
[217] E. Griechisch, M.I. Malk, and M. Liwicki. Online signature analysis based on accelerometric
and gyroscopic pens and legendre series. In Document Analysis and Recognition (ICDAR), 2013
12th International Conference on, pages 374–378, 2013.
[218] M. Grimaldi and Fred Cummins. Speaker identification using instantaneous frequencies. Audio,
Speech, and Language Processing, IEEE Transactions on, 16(6):1097–1111, 2008.
[219] Ralph Gross. Robotics Institute: PIE Database. http://www.ri.cmu.edu/research_project_
detail.html?project_id=418&menu_id=261. Web. Last access December, 2013.
[220] Ralph Gross. Face Databases. In S. Li and A. Jain, editors, Handbook of Face Recognition,
chapter 13, pages 301–327. Springer New York, 2005.
[221] Ralph Gross, Iain Matthews, Jeff Cohn, Takeo Kanade, and Simon Baker. Multi-PIE. Image
and Vision Computing, 28(5):807–813, May 2010.
[222] J. Guerra-Casanova, C. Sánchez-Ávila, A. Santos-Sierra, G. Bailador, and V. Jara-Vera. A realtime in-air signature biometric technique using a mobile device embedding an accelerometer. In
Filip Zavoral, Jakub Yaghob, Pit Pichappan, and Eyas El-Qawasmeh, editors, Networked Digital
Technologies, volume 87 of Communications in Computer and Information Science, pages 497–
503. Springer Berlin Heidelberg, 2010.
[223] Suranga D.W. Gunawardhane, Pasan M. De Silva, Dayan S.B. Kulathunga, and Shiromi M.K.D.
Arunatileka. Non Invasive Human Stress Detection Using Key Stroke Dynamics And Pattern
Variations. In International Conference on Advances in ICT for Emerging Regions, Colombo,
Sri Lanka, 2013.
[224] M. Haak, S. Bos, S. Panic, and L. J. M. Rothkrantz. Detecting stress using eye blinks and brain
activity from EEG signals. Proceeding of the 1st Driver Car Interaction and Interface (DCII
2008), 2009.
[225] A. Hadid, J. Y. Heikkilä, O. Silven, and M. Pietikäinen. Face and eye detection for person
authentication in mobile phones. First ACM/IEEE International Conference on Distributed
Smart Cameras, pages 101–108, 2007.
[226] Song-Yi Han, Hyun-Ae Park, Dal-Ho Cho, Kang Ryoung Park, and Sangyoun Lee. Face Recognition Based on Near-Infrared Light Using Mobile Phone. In Springer Berlin Heidelberg, editor,
Adaptive and Natural Computing Algorithms, pages 440–448, 2007.
[227] Song-Yi Han, Hyun-Ae Park, Dal-Ho Cho, Kang Ryoung Park, and Sangyoun Lee. Face Recognition Based on Near-Infrared Light Using Mobile Phone. In Springer Berlin Heidelberg, editor,
Adaptive and Natural Computing Algorithms, pages 440–448, 2007.
[228] Dan Witzner W. Hansen and Qiang Ji. In the eye of the beholder: a survey of models for
eyes and gaze. IEEE transactions on pattern analysis and machine intelligence, 32(3):478–500,
March 2010.
[229] Mingxing He, Shi-Jinn Horng, Pingzhi Fan, Ray-Shine Run, Rong-Jian Chen, Jui-Lin Lai,
Muhammad Khurram Khan, and Kevin Octavius Sentosa. Performance evaluation of score
level fusion in multimodal biometric systems. Pattern Recognition, 43(5):1789–1800, May 2010.
[230] Zhaofeng He, Tieniu Tan, Zhenan Sun, and Xianchao Qiu. Toward Accurate and Fast Iris Segmentation for Iris Biometrics. IEEE Transactions on Pattern Analysis and Machine Intelligence,
31(9):1670–1684, 2009.
[231] Matthieu Hebert. Text-dependent speaker recognition. In Jacob Benesty, M.Mohan Sondhi, and
Yiteng(Arden) Huang, editors, Springer Handbook of Speech Processing, pages 743–762. Springer
Berlin Heidelberg, 2008.
[232] P.H. Hennings-Yeomans, B.V.K.V. Kumar, and M. Savvides. Palmprint classification using
multiple advanced correlation filters and palm-specific segmentation. Information Forensics and
Security, IEEE Transactions on, 2(3):613–622, 2007.
[233] Heinz Hertlein, Robert Frischholz, and Elmar Nöth. Pass phrase based speaker recognition for
authentication. In Arslan Brömme and Christoph Busch, editors, BIOSIG, volume 31 of LNI,
pages 71–80. GI, 2003.
[234] B.Y. Hiew, A.B.J. Teoh, and Y.H. Pang. Touch-less fingerprint recognition system. In Automatic
Identification Advanced Technologies, 2007 IEEE Workshop on, pages 24–29, 2007.
[235] Chiung Ching Ho, Hu Ng, Wooi-Haw Tan, Kok-Why Ng, Hau-Lee Tong, Timothy Tzen-Vun
Yap, Pei-Fen Chong, C. Eswaran, and Junaidi Abdullah. MMU GASPFA: A COTS multimodal
biometric database. Pattern Recognition Letters, 34(15):2043–2050, 2013.
[236] Chiung Ching Ho, Hu Ng, Wooi-Haw Tan, Kok-Why Ng, Hau-Lee Tong, Timothy Tzen-Vun
Yap, Pei-Fen Chong, C. Eswaran, and Junaidi Abdullah. MMU GASPFA: A COTS multimodal
biometric database. Pattern Recognition Letters, 34(15):2043–2050, 2013.
[237] Tin Kam Ho, J.J. Hull, and S.N. Srihari. Decision combination in multiple classifier systems.
IEEE Transactions on Pattern Analysis and Machine Intelligence, 16(1):66–75, 1994.
[238] Dal ho Cho, Kang Ryoung Park, and Dae Woong Rhee. Real-time iris localization for iris recognition in cellular phone. In Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing, 2005 and First ACIS International Workshop on Self-Assembling
Wireless Networks. SNPD/SAWN 2005. Sixth International Conference on, pages 254–259, 2005.
[239] Dal ho Cho, Kang Ryoung Park, Dae Woong Rhee, Yanggon Kim, and Jonghoon Yang. Pupil
and iris localization for iris recognition in mobile phones. In Software Engineering, Artificial
Intelligence, Networking, and Parallel/Distributed Computing, 2006. SNPD 2006. Seventh ACIS
International Conference on, pages 197–201, 2006.
[240] Thang Hoang, Thuc Nguyen, Chuyen Luong, Son Do, and Deokjai Choi. Adaptive Cross-Device
Gait Recognition Using a Mobile Accelerometer. Journal of Information Processing Systems,
9(2):333–348, June 2013.
[241] Kjetil Holien, Rune Hammersland, Terje Risa, and Norwegian Information. How Different Surfaces Affect Gait Based Authentication. Technical report, Norwegian Information Security Lab,
Gjøvik University College, 2007.
[242] C. Holmgard, G.N. Yannakakis, K.I. Karstoft, and H.S. Andersen. Stress Detection for PTSD
via the StartleMart Game. In Proceedings of the Fifth International Conference of the Humaine
Association on Affective Computing and Intelligent Interaction (ACII 2013), 2013.
[243] Frank Horvath. Detecting Deception: The Promise and the Reality of Voice Stress Analysis.
Polygraph, 31(2):96–107, 2002.
[244] N. Houmani, S. Garcia-Salicetti, B. Dorizzi, J. Montalvao, J. C. Canuto, M. V. Andrade,
Y. Qiao, X. Wang, T. Scheidat, A. Makrushin, D. Muramatsu, J. Putz-Leszczynska, M. Kudelski, M. Faundez-Zanuy, J. M. Pascual-Gaspar, V. Cardenoso-Payo, C. Vivaracho-Pascual, E. Argones Rúa, J. L. Alba-Castro, A. Kholmatov, and B. Yanikoglu. Biosecure signature evaluation
campaign (esra’2011): evaluating systems on quality-based categories of skilled forgeries. In
Biometrics (IJCB), 2011 International Joint Conference on, pages 1–10, 2011.
[245] Chih-Yu Hsu, Pei-Shan Lee, Kuo-Kun Tseng, and Yifan Li. A palm personal identification for
vehicular security with a mobile device. International Journal of Vehicular Technology, 2013,
2013.
[246] Jennifer Huang, Bernd Heisele, and Volker Blanz. Component-Based Face Recognition with
3D Morphable Models. First IEEE Workshop on Face Processing in Video, Washington, D.C.,
pages 27–34, 2004.
[247] Huiqi Lu, Chris R. Chatwin, and Rupert C.D. Young. Iris recognition on low computational
power mobile devices. In Unique and Diverse Applications in Nature, Science, and Technology,
2011.
[248] Abdulameer Khalaf Hussain and Mustafa Nouman Al-hassan. Multifactor Strong Authentication
Method Using Keystroke Dynamics. 2(2):31–34, 2013.
[249] Seong-seob Hwang, Sungzoon Cho, and Sunghoon Park. Keystroke dynamics-based authentication for mobile devices. Computers & Security, 28(1-2):85–93, February 2009.
[250] Y. Ijiri, M. Sakuragi, and Shihong Lao. Security Management for Mobile Devices by Face
Recognition. In 7th International Conference on Mobile Data Management (MDM’06). IEEE,
2006.
[251] ImageWare Systems Inc. GoMobile Interactive. http://www.iwsinc.com/. Web. Last access
December, 2013.
[252] D. Impedovo and G. Pirlo. Automatic signature verification: The state of the art. Systems,
Man, and Cybernetics, Part C: Applications and Reviews, IEEE Transactions on, 38(5):609–635,
2008.
[253] ITC-irst. SpotIt! http://spotit.fbk.eu/SpotIt.html. Web. Last access December, 2013.
[254] A. Iula, A. Savoia, C. Longo, G. Caliano, A. Caronti, and M. Pappalardo. 3d ultrasonic imaging
of the human hand for biometric purposes. In Ultrasonics Symposium (IUS), 2010 IEEE, pages
37–40, 2010.
[255] R. S. Ivanov. An Application for Image Database Development for Mobile Face Detection and
Recognition Systems.
[256] J. Galbally, J. Fierrez, J. Ortega-Garcia, M. R. Freire, F. Alonso-Fernandez, J. A. Siguenza,
J. Garrido-Salas, E. Anguiano-Rey, G. Gonzalez-de-Rivera, R. Ribalda, M. Faundez-Zanuy,
J. A. Ortega, V. Cardeñoso-Payo, A. Viloria, C. E. Vivaracho, Q. I. Moro, J. J. Igarza,
J. Sanchez, I. Hernaez, and C. Orrite-Uruñuela. BiosecurID: a multimodal biometric database.
Pattern Analysis and Applications, 13(2):235–246, 2009.
[257] Rabia Jafri and Hamid R. Arabnia. A Survey of Face Recognition Techniques. Journal of
Information Processing Systems, 5(2):41–68, June 2009.
[258] A. Jain, A. Ross, and S. Pankanti. A prototype hand geometry-based verification system.
Proceedings of Second International Conference on Audio- and Video-based Biometric Person
Authentication, pages 166–171, 1999.
[259] Anil Jain, Lin Hong, and Sharath Pankanti. Biometric identification. Commun. ACM, 43(2):90–
98, February 2000.
[260] Anil K Jain and Sharathchandra Pankanti. A touch of money [biometric authentication systems].
Spectrum, IEEE, 43(7):22–27, 2006.
[261] Sabah Jassim, Harin Sellahewa, and Johan-Hendrik Ehlers. Wavelet Based Face Verification for
Mobile Personal Devices. Biometrics on the Internet, pages 81–85, 2005.
[262] Nattapong Jeanjaitrong and Pattarasinee Bhattarakosol. Feasibility study on authentication
based keystroke dynamic over touch-screen devices. 2013 13th International Symposium on
Communications and Information Technologies (ISCIT), pages 238–242, September 2013.
[263] J Jenkins and C Ellis. Using ground reaction forces from gait analysis: body mass as a weak
biometric. Pervasive Computing, pages 251–267, 2007.
[264] Dae Sik Jeong, Hyun-Ae Park, Kang Ryoung Park, and Jaihie Kim. Iris Recognition in Mobile
Phone Based on Adaptive Gabor Filter. In David Zhang and Anil K. Jain, editors, ICB, volume
3832 of Lecture Notes in Computer Science, pages 457–463. Springer, 2006.
[265] Jia Jia and Lianhong Cai. Fake finger detection based on time-series fingerprint image analysis. In Advanced intelligent computing theories and applications. with aspects of theoretical and
methodological issues, pages 1140–1150. Springer, 2007.
[266] Jia Jia, Lianhong Cai, Kaifu Zhang, and Dawei Chen. A new approach to fake finger detection
based on skin elasticity analysis. In Advances in Biometrics, pages 309–318. Springer, 2007.
[267] Xiao-Yuan Jing, Yong-Fang Yao, David Zhang, Jing-Yu Yang, and Miao Li. Face and palmprint
pixel level fusion and Kernel DCV-RBF classifier for small sample biometric recognition. Pattern
Recognition, 40(11):3209–3224, November 2007.
[268] R.C. Johnson, Terrance E. Boult, and Walter J. Scheirer. Voice authentication using short
phrases: Examining accuracy, security and privacy issues. In The IEEE International Conference
on Biometrics: Theory, Applications and Systems (BTAS), September 2013.
[269] R.C. Johnson, Walter J. Scheirer, and Terrance E. Boult. Secure voice-based authentication for
mobile devices: Vaulted voice verification. In The SPIE Defense, Security + Sensing Symposium,
May 2013.
[270] R. Johnston. Can iris patterns be used to identify people? Los Alamos National Laboratory,
Chemical and Laser Sciences Division Annual Report, LA-12331-PR:81–86, 1992.
[271] Jordan Frank, Shie Mannor, and Doina Precup. Activity and Gait Recognition with Time-Delay
Embeddings. In AAAI Conference on Artificial Intelligence, pages 407–408. AAAI Press, 2010.
[272] Raed A. Joundi, John-Stuart Brittain, Ned Jenkinson, Alexander L. Green, and Tipu Aziz.
Rapid tremor frequency assessment with the iPhone accelerometer. Parkinsonism & Related
Disorders, 17(4):288–290, 2011.
[273] Felix Juefei-Xu, Chandrasekhar Bhagavatula, Aaron Jaech, Unni Prasad, and Marios Savvides.
Gait-ID on the move: Pace independent human identification using cell phone accelerometer
dynamics. In 2012 IEEE Fifth International Conference on Biometrics: Theory, Applications
and Systems (BTAS), pages 8–15. IEEE, September 2012.
[274] Sung-Uk Jung, Yun-Su Chung, Jang-Hee Yoo, and Ki-young Moon. Real-Time Face Verification for Mobile Platforms. In Advances in visual computing, pages 823–832. Springer Berlin
Heidelberg, 2008.
[275] Hyosup Kang, Bongku Lee, Hakil Kim, Daecheol Shin, and Jaesung Kim. A study on performance evaluation of the liveness detection for various fingerprint sensor modules. In KnowledgeBased Intelligent Information and Engineering Systems, pages 1245–1253. Springer, 2003.
[276] M Karnan and N Krishnaraj. A Model to Secure Mobile Devices Using Keystroke Dynamics
through Soft Computing Techniques. (3):71–75, 2012.
[277] J J Kavanagh, R S Barrett, and S Morrison. Upper body accelerations during walking in healthy
young and elderly men. Gait Posture, 20(3):291–298, December 2004.
[278] P. Kenny, P. Ouellet, N. Dehak, V. Gupta, and P. Dumouchel. A study of interspeaker variability in speaker verification. Audio, Speech, and Language Processing, IEEE Transactions on,
16(5):980–988, 2008.
[279] M.S. Khalil and F.K. Wan. A review of fingerprint pre-processing using a mobile phone. In
Wavelet Analysis and Pattern Recognition (ICWAPR), 2012 International Conference on, pages
152–157, 2012.
[280] E. Khoury, B. Vesnicer, J. Franco-Pedroso, R. Violato, Z. Boulkcnafet, L.M. Mazaira Fernandez,
M. Diez, J. Kosmala, H. Khemiri, T. Cipr, R. Saeidi, M. Gunther, J. Zganec-Gros, R.Z. Candil,
F. Simoes, M. Bengherabi, A. Alvarez Marquina, M. Penagarikano, A. Abad, M. Boulayemen,
P. Schwarz, D. Van Leeuwen, J. Gonzalez-Dominguez, M.U. Neto, E. Boutellaa, P. Gomez Vilda,
A. Varona, D. Petrovska-Delacretaz, P. Matejka, J. Gonzalez-Rodriguez, T. Pereira, F. Harizi,
L.J. Rodriguez-Fuentes, L. El Shafey, M. Angeloni, G. Bordel, G. Chollet, and S. Marcel. The
2013 speaker recognition evaluation in mobile environment. In Biometrics (ICB), 2013 International Conference on, pages 1–8, 2013.
[281] Killourhy and Maxion. Anomaly-detection algorithms database. http://www.cs.cmu.edu/
~keystroke/, 2009.
[282] Kevin S. Killourhy and Roy A. Maxion. Comparing anomaly-detection algorithms for keystroke
dynamics. 2009 IEEE/IFIP International Conference on Dependable Systems & Networks, pages
125–134, June 2009.
[283] Kevin S Killourhy, Roy A Maxion, and David L Banks. A Scientific Understanding of Keystroke
Dynamics. (January), 2012.
[284] Dong-Ju Kim, Kwang-Woo Chung, and Kwang-Seok Hong. Person authentication using face,
teeth and voice modalities for mobile device security. IEEE Transactions on Consumer Electronics, 56(4):2678–2685, November 2010.
[285] Hale Kim. Evaluation of fingerprint readers: Environmental factors, human factors, and liveness detecting capability, 2003. Retrieved April 2, 2005.
[286] Kee-Eung Kim, Wook Chang, Sung-Jung Cho, Junghyun Shim, Hyunjeong Lee, Joonah Park,
Youngbeom Lee, and Sangryoung Kim. Hand grip pattern recognition for mobile user interfaces.
In Proceedings of the National Conference on Artificial Intelligence, volume 21, pages 1789–1794. AAAI Press, 2006.
[287] Younghwan Kim, Jang-hee Yoo, and Kyoungho Choi. A Motion and Similarity-Based Fake Detection Method for Biometric Face Recognition Systems. IEEE Transactions on Consumer Electronics, 57(2):756–762, 2011.
[288] K. Kollreider, H. Fronthaler, and J. Bigun. Verifying liveness by multiple experts in face biometrics. IEEE Computer Society Conference on Computer Vision and Pattern Recognition
Workshops, pages 1–6, June 2008.
[289] Jukka Komulainen and Abdenour Hadid. Face Spoofing Detection Using Dynamic Texture.
Computer Vision-ACCV 2012 Workshops, pages 146–157, 2013.
[290] Adams Kong, David Zhang, and Mohamed Kamel. A survey of palmprint recognition. Pattern
Recognition, 42(7):1408 – 1418, 2009.
[291] Adams Wai-Kin Kong, David Zhang, and Guangming Lu. A study of identical twins palmprints
for personal verification. Pattern Recognition, 39(11):2149 – 2156, 2006.
[292] J. Koreman, A. C. Morris, D. Wu, S. Jassim, H. Sellahewa, J. Ehlers, G. Chollet, G. Aversano,
L. Allano, B. Ly Van, and B. Dorizzi. Multi-modal biometric authentication on the SecurePhone
PDA. Proc. 2nd Workshop on Multimodal User Auth., Toulouse, France., 2006.
[293] J. Koreman, A. C. Morris, D. Wu, S. Jassim, H. Sellahewa, J. Ehlers, G. Chollet, G. Aversano, H. Bredin, S. Garcia-Salicetti, L. Allano, B. Ly Van, and B. Dorizzi. Multi-modal biometric authentication on the SecurePhone PDA. 2006.
[294] M Kourogi and T Kurata. Personal Positioning based on Walking Locomotion Analysis with Self-Contained Sensors and a Wearable Camera. In 2nd IEEE and ACM International Symposium
on Mixed and Augmented Reality ISMAR ’03, Washington, DC, USA, 2003. IEEE Computer
Society.
[295] I. Kramberger, M. Grasic, and T. Rotovnik. Door phone embedded system for voice based
user identification and verification platform. Consumer Electronics, IEEE Transactions on,
57(3):1212–1217, 2011.
[296] Ram P. Krish, Julian Fierrez, Javier Galbally, and Marcos Martinez-Diaz. Dynamic signature verification on smart phones. In Juan M. Corchado, Javier Bajo, Jaroslaw Kozlak, Pawel Pawlewski,
Jose M. Molina, Vicente Julian, Ricardo Azambuja Silveira, Rainer Unland, and Sylvain Giroux,
editors, Highlights on Practical Applications of Agents and Multi-Agent Systems, volume 365 of
Communications in Computer and Information Science, pages 213–222. Springer Berlin Heidelberg, 2013.
[297] A. Kumar, M. Hanmandlu, V.K. Madasu, and B.C. Lovell. Biometric authentication based on
infrared thermal hand vein patterns. In Digital Image Computing: Techniques and Applications,
2009. DICTA ’09., pages 331–338, 2009.
[298] A. Kumar and D. Zhang. Personal recognition using hand shape and texture. Image Processing,
IEEE Transactions on, 15(8):2454–2461, 2006.
[299] A. Kumar and D. Zhang. Hand-geometry recognition using entropy-based discretization. Information Forensics and Security, IEEE Transactions on, 2(2):181–187, 2007.
[300] Ajay Kumar, David C.M. Wong, Helen C. Shen, and Anil K. Jain. Personal verification using
palmprint and hand geometry biometric. In Josef Kittler and Mark S. Nixon, editors, Audio- and Video-Based Biometric Person Authentication, volume 2688 of Lecture Notes in Computer
Science, pages 668–678. Springer Berlin Heidelberg, 2003.
[301] P. Kumar, N. Jakhanwal, and M. Chandra. Text dependent speaker identification in noisy
environment. In Devices and Communications (ICDeCom), 2011 International Conference on,
pages 1–4, 2011.
[302] S. Kurkovsky, T. Carpenter, and C. MacDonald. Experiments with simple iris recognition for
mobile phones. In Information Technology: New Generations (ITNG), 2010 Seventh International Conference on, pages 1293–1294, 2010.
[303] Jennifer R. Kwapisz, Gary M. Weiss, and Samuel A. Moore. Cell phone-based biometric identification. In 2010 Fourth IEEE International Conference on Biometrics: Theory, Applications
and Systems (BTAS), pages 1–7. IEEE, September 2010.
[304] Anthony Larcher, Jean-Francois Bonastre, and John S.D. Mason. Constrained temporal structure for text-dependent speaker verification. Digital Signal Processing, 23(6):1910 – 1917, 2013.
[305] Anthony Larcher, Kong-Aik Lee, Bin Ma, and Haizhou Li. RSR2015: Database for text-dependent
speaker verification using multiple pass-phrases. In INTERSPEECH. ISCA, 2012.
[306] Dongjae Lee, Kyoungtaek Choi, Heeseung Choi, and Jaihie Kim. Recognizable-image selection
for fingerprint recognition with a mobile-device camera. Systems, Man, and Cybernetics, Part
B: Cybernetics, IEEE Transactions on, 38(1):233–243, 2008.
[307] Henry C. Lee and Robert E. Gaensslen. Advances in fingerprint technology. CRC Press, 2001.
[308] Kong-Aik Lee, Anthony Larcher, Helen Thai, Bin Ma, and Haizhou Li. Joint application of
speech and speaker recognition for automation and security in smart home. In INTERSPEECH,
pages 3317–3318. ISCA, 2011.
[309] S H Lee, H D Park, S Y Hong, K J Lee, and Y H Kim. A study on the activity classification using
a triaxial accelerometer. In 25th Annual International Conference of the IEEE on Engineering
in Medicine and Biology Society, volume 3, pages 2941–2943, 2003.
[310] Iulia Lefter, Leon J. M. Rothkrantz, David A. Van Leeuwen, and Pascal Wiggers. Automatic Stress Detection In Emergency. International Journal of Intelligent Defence Support Systems,
4(2):148–168, 2011.
[311] Phillip L. De Leon, Bryan Stewart, and Junichi Yamagishi. Synthetic speech discrimination
using pitch pattern statistics derived from image analysis. In INTERSPEECH. ISCA, 2012.
[312] T Leyvand and C Meekhof. Kinect Identity: Technology and Experience. Computer, 44(4):94–
96, April 2011.
[313] Guoqiang Li, Bian Yang, and C. Busch. Autocorrelation and dct based quality metrics for
fingerprint samples generated by smartphones. In Digital Signal Processing (DSP), 2013 18th
International Conference on, pages 1–5, 2013.
[314] Guoqiang Li, Bian Yang, M.A. Olsen, and C. Busch. Quality assessment for fingerprints collected
by smartphone cameras. In Computer Vision and Pattern Recognition Workshops (CVPRW),
2013 IEEE Conference on, pages 146–153, 2013.
[315] Guoqiang Li, Bian Yang, R Raghavendra, and C. Busch. Testing mobile phone camera based
fingerprint recognition under real-life scenarios. In Proceedings of the 5th Norsk Informasjons
Sikkerhets Konferanse (NISK), 2012.
[316] J. Li, Y. Wang, T. Tan, and A. K. Jain. Live face detection based on the analysis of Fourier
spectra. Defense and Security. International Society for Optics and Photonics, pages 296–303,
2004.
[317] Chien-Cheng Lin, Chin-Chun Chang, Deron Liang, and Ching-Han Yang. A New Non-Intrusive
Authentication Method Based on the Orientation Sensor for Smartphone Users. 2012 IEEE
Sixth International Conference on Software Security and Reliability, pages 245–252, June 2012.
[318] Chih-Lung Lin and Kuo-Chin Fan. Biometric verification using thermal images of palm-dorsa
vein patterns. Circuits and Systems for Video Technology, IEEE Transactions on, 14(2):199–213,
2004.
[319] Jiayang Liu, Lin Zhong, Jehan Wickramasuriya, and Venu Vasudevan. uWave: Accelerometer-based personalized gesture recognition. Technical Report TR0630-08, Rice University and Motorola Labs, East Lansing, Michigan, June 2008.
[320] Jiayang Liu, Lin Zhong, Jehan Wickramasuriya, and Venu Vasudevan. uWave: Accelerometer-based personalized gesture recognition and its applications. Pervasive and Mobile Computing,
5(6):657–675, 2009.
[321] Alexander De Luca, Alina Hang, Frederik Brudy, Christian Lindner, and Heinrich Hussmann.
Touch me once and I know it is you! Implicit Authentication based on Touch Screen Patterns.
pages 987–996, 2012.
[322] Li Ma, Tieniu Tan, Yunhong Wang, and Dexin Zhang. Efficient iris recognition by characterizing
key local variations. Image Processing, IEEE Transactions on, 13(6):739–750, 2004.
[323] YingLiang Ma, F. Pollick, and W.T. Hewitt. Using b-spline curves for hand recognition. In
Pattern Recognition, 2004. ICPR 2004. Proceedings of the 17th International Conference on,
volume 3, pages 274–277 Vol.3, 2004.
[324] L. Machlica, Z. Zajic, and L. Muller. On complementarity of state-of-the-art speaker recognition
systems. In Signal Processing and Information Technology (ISSPIT), 2012 IEEE International
Symposium on, pages 000164–000169, 2012.
[325] Davide Maltoni, Dario Maio, Anil K. Jain, and Salil Prabhakar. Handbook of Fingerprint Recognition. Springer Publishing Company, Incorporated, 2nd edition, 2009.
[326] A. J. Mansfield and J. L. Wayman. Best practices in testing and reporting performance of biometric devices, 2002.
[327] J. Mantyjarvi, M. Lindholm, E. Vildjiounaite, S. Makela, and H. Ailisto. Identifying Users of
Portable Devices from Gait Pattern with Accelerometers. In Proceedings. (ICASSP ’05). IEEE
International Conference on Acoustics, Speech, and Signal Processing, 2005., volume 2, pages
973–976. IEEE, 2005.
[328] Sébastien Marcel and André Anjos. The Idiap Research Institute PRINT-ATTACK Database.
http://www.idiap.ch/dataset/printattack. Web. Last access December, 2013.
[329] Sébastien Marcel, Chris McCool, Timo Ahonen, and Honza Cernocky. MOBIO Face and Speaker
Verification Evaluation. http://www.mobioproject.org/icpr-2010. Web. Last access December, 2013.
[330] Sébastien Marcel, Chris McCool, Pavel Matějka, Timo Ahonen, Jan Černocký, Shayok
Chakraborty, Vineeth Balasubramanian, Sethuraman Panchanathan, Chi Ho Chan, Josef Kittler, et al. On the results of the first mobile biometry (mobio) face and speaker verification
evaluation. In Recognizing Patterns in Signals, Speech, Images and Videos, pages 210–225.
Springer, 2010.
[331] Sébastien Marcel, Christopher McCool, Pavel Matejka, Timo Ahonen, Jan Cernocký, Shayok
Chakraborty, Vineeth Balasubramanian, Sethuraman Panchanathan, Chi Ho Chan, Josef Kittler, Norman Poh, Benoît Fauve, Ondrej Glembek, Oldrich Plchot, Zdenek Jancik, Anthony
Larcher, Christophe Lévy, Driss Matrouf, Jean-François Bonastre, Ping-Han Lee, Jui-Yu Hung,
Si-Wei Wu, Yi-Ping Hung, Lukás Machlica, John Mason, Sandra Mau, Conrad Sanderson, David
Monzo, Antonio Albiol, Hieu v. Nguyen, Li Bai, Yan Wang, Matti Niskanen, Markus Turtinen,
Juan Arturo Nolazco-Flores, Leibny Paola García-Perera, Roberto Aceves-López, Mauricio Villegas, and Roberto Paredes. On the Results of the First Mobile Biometry (MOBIO) Face and
Speaker Verification Evaluation. Recognizing Patterns in Signals, Speech, Images and Videos,
pages 210–225, 2010.
[332] G.L. Marcialis, F. Roli, and A. Tidu. Analysis of fingerprint pores for vitality detection. In
Pattern Recognition (ICPR), 2010 20th International Conference on, pages 1289–1292, 2010.
[333] Johnny Mariéthoz and Samy Bengio. Can a professional imitator fool a GMM-based speaker verification system? Idiap Research Report Idiap-RR-61-2005, IDIAP, 2005.
[334] Aleix Martı́nez and Robert Benavente. The AR Face Database. Technical report, Computer
Vision Center, 1999.
[335] Aleix M. Martinez. AR Face Database Webpage. http://www2.ece.ohio-state.edu/~aleix/
ARdatabase.html.
[336] M. Martinez-Diaz, J. Fierrez, J. Galbally, and J. Ortega-Garcia. Towards mobile authentication
using dynamic signature verification: Useful features and performance evaluation. In Pattern
Recognition, 2008. ICPR 2008. 19th International Conference on, pages 1–5, 2008.
[337] Libor Masek. Recognition of Human Iris Patterns for Biometric Identification. PhD thesis,
University of Western Australia, 2003.
[338] Libor Masek and Peter Kovesi. Matlab source code for a biometric identification system based
on iris patterns. Technical report, The School of Computer Science and Software Engineering,
The University of Western Australia, 2003.
[339] J. S. D. Mason, F. Deravi, C.C. Chibelushi, and S. Gandon. Digital Audio Visual Integrated
Database Final Report. Technical report, 1996.
[340] Takashi Masuko, Takafumi Hitotsumatsu, Keiichi Tokuda, and Takao Kobayashi. On the security
of HMM-based speaker verification systems against imposture using synthetic speech. In
Proceedings of the European Conference on Speech Communication and Technology, pages 1223–
1226, 1999.
[341] J. R. Matey, O. Naroditsky, K. Hanna, R. Kolczynski, D. J. LoIacono, S. Mangru, M. Tinker,
T. M. Zappia, and W. Y. Zhao. Iris on the Move: Acquisition of Images for Iris Recognition in
Less Constrained Environments. Proceedings of the IEEE, 94(11):1936–1947, 2006.
[342] James R. Matey and Lauren R. Kenell. Iris Recognition - Beyond One Meter. Handbook of
Remote Biometrics. Springer, 2009.
[343] Tsutomu Matsumoto, Hiroyuki Matsumoto, Koji Yamada, and Satoshi Hoshino. Impact of
artificial gummy fingers on fingerprint systems. In Electronic Imaging 2002, pages 275–289.
International Society for Optics and Photonics, 2002.
[344] Kenji Matsuo, Fuminori Okumura, Masayuki Hashimoto, Shigeyuki Sakazawa, and Yoshinori
Hatori. Arm swing identification method with template update for long term stability. In
Advances in Biometrics, pages 211–221. Springer, 2007.
[345] Jay Prakash Maurya and Sanjay Sharma. A Survey on Face Recognition Techniques. Computer
Engineering and Intelligent Systems, 4(6):11–17, 2013.
[346] C. McCool, S. Marcel, A. Hadid, M. Pietikainen, P. Matejka, J. Cernocky, N. Poh, J. Kittler,
A. Larcher, C. Levy, D. Matrouf, J.-F. Bonastre, P. Tresadern, and T. Cootes. Bi-modal person
recognition on a mobile phone: Using mobile phone data. In Multimedia and Expo Workshops
(ICMEW), 2012 IEEE International Conference on, pages 635–640, 2012.
[347] Chris McCool, Sébastien Marcel, Abdenour Hadid, Matti Pietikäinen, Pavel Matejka, Jan Cernocký, Norman Poh, Josef Kittler, Anthony Larcher, Christophe Lévy, Driss Matrouf, Jean-François Bonastre, Phil Tresadern, and Timothy Cootes. Bi-Modal Person Recognition on a
Mobile Phone: using mobile phone data. IEEE International Conference on Multimedia and
Expo Workshops (ICMEW), pages 635–640, 2012.
[348] R. McCraty, M. Atkinson, W. A. Tiller, G. Rein, and A. D. Watkins. The effects of emotions
on short-term power spectrum analysis of heart rate variability. The American journal of
cardiology, 76(14):1089–1093, November 1995.
[349] Sinéad McGilloway, Roddy Cowie, Ellen Douglas-Cowie, Stan Gielen, Machiel Westerdijk, and
Sybert Stroeve. Approaching automatic recognition of emotion from voice: a rough benchmark.
In ISCA Tutorial and Research Workshop (ITRW) on Speech and Emotion, 2000.
[350] Reed McManigle. FiA “Face-in-Action” Dataset. http://www.flintbox.com/public/project/5486/. Web. Last access December, 2013.
[351] Dong mei Sun, Zheng ding Qiu, and Bing He. Automated identity verification based on hand
shapes. In Signal Processing, 2002 6th International Conference on, volume 2, pages 1596–1599
vol.2, 2002.
[352] Paolo Melillo, Marcello Bracale, and Leandro Pecchia. Nonlinear Heart Rate Variability features
for real-life stress detection. Case study: students under stress due to university examination.
Biomedical engineering online, 10(1):96, January 2011.
[353] Hakan Melin. Databases for speaker recognition: Activities in COST250 working group 2. In Proceedings COST250 Workshop on Speaker Recognition in Telephony, 1999.
[354] A. Mendaza-Ormaza, O. Miguel-Hurtado, R. Blanco-Gonzalo, and F. Diez-Jimeno. Analysis
of handwritten signature performances using mobile devices. In Security Technology (ICCST),
2011 IEEE International Carnahan Conference on, pages 1–6, October 2011.
[355] E. Mendoza and G. Carballo. Vocal tremor and psychological stress. Journal of voice. Official
journal of the Voice Foundation, 13(1):105–12, March 1999.
[356] K. Messer, J. Matas, J. Kittler, J. Luettin, and G. Maitre. XM2VTSDB: The Extended M2VTS
Database. Second International Conference on Audio and Video-based Biometric Person Authentication, pages 72–77, 1999.
[357] C. Methani and A.M. Namboodiri. Video based palmprint recognition. In Pattern Recognition
(ICPR), 2010 20th International Conference on, pages 1352–1355, 2010.
[358] L Middleton, A A Buss, A I Bazin, and M S Nixon. A floor sensor system for gait recognition.
In Fourth IEEE Workshop on Automatic Identification Advanced Technologies, 2005.
[359] Dan Miller. Voice biometrics vendor survey and “Intelliview”. Technical Report Mid Year Edition 2013, Opus Research, June 2013.
[360] Emiliano Miluzzo, Alexander Varshavsky, and Suhrid Balakrishnan. TapPrints: Your Finger Taps Have Fingerprints. pages 323–336, 2012.
[361] J. Ming, T.J. Hazen, and J.R. Glass. A comparative study of methods for handheld speaker
verification in realistic noisy conditions. In Speaker and Language Recognition Workshop, 2006.
IEEE Odyssey 2006: The, pages 1–8, 2006.
[362] Ji Ming, T.J. Hazen, J.R. Glass, and D.A. Reynolds. Robust speaker recognition in noisy
conditions. Audio, Speech, and Language Processing, IEEE Transactions on, 15(5):1711–1723,
2007.
[363] Bendik B. Mjaaland, Patrick Bours, and Danilo Gligoroski. Walk the walk: attacking gait
biometrics by imitation. pages 361–380, October 2010.
[364] Bendik Bjørklid Mjaaland. Gait Mimicking: Attack Resistance Testing of Gait Authentication
Systems. PhD thesis, Norwegian University of Science and Technology, 2009.
[365] Shimon K Modi and Stephen J Elliott. Impact of image quality on performance: comparison
of young and elderly fingerprints. In Proceedings of the 6th International Conference on Recent
Advances in Soft Computing (RASC 2006), K. Sirlantzis (Ed.), pages 449–45, 2006.
[366] F. Mokhayeri and M-R. Akbarzadeh-T. Mental Stress Detection Based on Soft Computing
Techniques. In 2011 IEEE International Conference on Bioinformatics and Biomedicine, pages
430–433. IEEE, November 2011.
[367] Jugurta R. Montalvao Filho and Eduardo O. Freire. Multimodal biometric fusion, a joint typist
(keystroke) and speaker verification. 2006 International Telecommunications Symposium, pages
609–614, September 2006.
[368] Yiu Sang Moon, JS Chen, KC Chan, K So, and KC Woo. Wavelet based fingerprint liveness
detection. Electronics Letters, 41(20):1112–1113, 2005.
[369] MorphoTrust USA (Safran). MorphoTrust USA. http://www.morphotrust.com/. Web. Last
access December, 2013.
[370] A. C. Morris, J. Koreman, H. Sellahewa, J. Ehlers, S. Jassim, and L. Allano. The SecurePhone
PDA Database, Experimental Protocol and Automatic Test Procedure for Multimodal User
Authentication. Technical report, Saarland University, Institute of Phonetics., 2006.
[371] P. Motlicek, L.E. Shafey, R. Wallace, C. Mccool, and S. Marcel. Bi-modal authentication in
mobile environments using session variability modelling. In Pattern Recognition (ICPR), 2012
21st International Conference on, pages 1100–1103, 2012.
[372] Muhammad Muaaz and Claudia Nickel. Influence of different walking speeds and surfaces
on accelerometer-based biometric gait recognition. In 2012 35th International Conference on
Telecommunications and Signal Processing (TSP), pages 508–512. IEEE, July 2012.
[373] Amir Muaremi, Bert Arnrich, and Gerhard Tröster. Towards Measuring Stress with Smartphones
and Wearable Devices During Workday and Sleep. BioNanoScience, 3(2):172–183, May 2013.
[374] C. Nandini, C. Ashwini, Medha Aparna, Nivedita Ramani, Pragnya Kini, and Sheebe k. Biometric authentication by dorsal hand vein pattern. International Journal of Engineering and
Technology, 2(5), 2012.
[375] I. Nazir, I. Zubair, and M.H. Islam. User authentication for mobile device through image
selection. In Networked Digital Technologies, 2009. NDT ’09. First International Conference
on, pages 518–520, July 2009.
[376] Chee Kiat Ng, Marios Savvides, and Pradeep K. Khosla. Real-Time Face Verification System
on a Cell-Phone using Advanced Correlation Filters. Fourth IEEE Workshop on Automatic
Identification Advanced Technologies, 2005.
[377] Claudia Nickel. Accelerometer-based Biometric Gait Recognition for Authentication on Smartphones. PhD thesis, Universitat Darmstadt, 2012.
[378] Claudia Nickel, Mohammad O Derawi, Patrick Bours, and Christoph Busch. Scenario Test of
Accelerometer-Based Biometric Gait Recognition. In 3rd International Workshop on Security
and Communication Networks (IWSCN), 2011.
[379] Shankar Bhausaheb Nikam and Suneeta Agarwal. Curvelet-based fingerprint anti-spoofing. Signal, Image and Video Processing, 4(1):75–87, 2010.
[380] M S Nixon, T Tan, and R Chellappa. Human identification based on gait. Springer-Verlag New
York Inc, 2006.
[381] Nargess Nourbakhsh, Yang Wang, Fang Chen, and Rafael A. Calvo. Using galvanic skin response
for cognitive load measurement in arithmetic and reading tasks. In Proceedings of the 24th
Conference on Australian Computer-Human Interaction OzCHI ’12, pages 420–423, New York,
New York, USA, 2012. ACM Press.
[382] Janna Nousbeck, Bettina Burger, Dana Fuchs-Telem, Mor Pavlovsky, Shlomit Fenig, Ofer Sarig,
Peter Itin, and Eli Sprecher. A mutation in a skin-specific isoform of SMARCAD1 causes
autosomal-dominant adermatoglyphia. The American Journal of Human Genetics, 89(2):302 –
307, 2011.
[383] J. Novakovic. Speaker identification in smart environments with multilayer perceptron. In
Telecommunications Forum (TELFOR), 2011 19th, pages 1418–1421, 2011.
[384] Cenker Oden, Aytul Ercil, and Burak Buke. Combining implicit polynomials and geometric features for hand recognition. Pattern Recognition Letters, 24(13):2145–2152, 2003. Audio- and Video-based Biometric Person Authentication (AVBPA 2001).
[385] Lawrence O’Gorman. An overview of fingerprint verification technologies. Information Security
Technical Report, 3(1):21–32, 1998.
[386] Fuminori Okumura, Akira Kubota, Yoshinori Hatori, Kenji Matsuo, Masayuki Hashimoto, and
Atsushi Koike. A study on biometric authentication based on arm sweep action with acceleration
sensor. In Intelligent Signal Processing and Communications, 2006. ISPACS’06. International
Symposium on, pages 219–222. IEEE, 2006.
[387] J. Ortega-Garcia, J. Fierrez, F. Alonso-Fernandez, J. Galbally, M.R. Freire, J. Gonzalez-Rodriguez, C. Garcia-Mateo, J.-L. Alba-Castro, E. Gonzalez-Agulla, E. Otero-Muras, S. Garcia-Salicetti, L. Allano, B. Ly-Van, B. Dorizzi, J. Kittler, T. Bourlai, N. Poh, F. Deravi, M. Ng,
M. Fairhurst, J. Hennebert, A. Humm, M. Tistarelli, L. Brodo, J. Richiardi, A. Drygajlo,
H. Ganster, F.M. Sukno, S.-K. Pavani, A. Frangi, L. Akarun, and A. Savran. The multiscenario multienvironment BioSecure multimodal database (BMDB). Pattern Analysis and Machine
Intelligence, IEEE Transactions on, 32(6):1097–1111, 2010.
[388] J. Ortega-Garcia, J. Fierrez-Aguilar, D. Simon, J. Gonzalez, M. Faundez-Zanuy, V. Espinosa,
A. Satue, I. Hernaez, J.-J. Igarza, C. Vivaracho, D. Escudero, and Q. I. Moro. MCYT baseline
corpus: a bimodal biometric database. IEE Proceedings-Vision, Image and Signal Processing,
150(6):395–401, 2003.
[389] G Pan, Y Zhang, and Z Wu. Accelerometer-based gait recognition via voting by signature points.
Electronics Letters, 45(22):1116–1118, October 2009.
[390] Gang Pan, Lin Sun, Zhaohui Wu, and Shihong Lao. Eyeblink-based Anti-Spoofing in Face
Recognition from a Generic Webcamera. IEEE 11th International Conference on Computer
Vision, pages 1–8, 2007.
[391] Kang Ryoung Park, Hyun-Ae Park, Byung Jun Kang, Eui Chul Lee, and Dae Sik Jeong. A
Study on Iris Localization and Recognition on Mobile Phones. In Advanced Signal Processing
and Pattern Recognition Methods for Biometrics, 2007.
[392] Sujan TV Parthasaradhi, Reza Derakhshani, Larry A Hornak, and Stephanie AC Schuckers.
Time-series detection of perspiration as a liveness test in fingerprint devices. Systems, Man, and
Cybernetics, Part C: Applications and Reviews, IEEE Transactions on, 35(3):335–343, 2005.
[393] Stavros Paschalakis and Miroslaw Bober. Real-time face detection and tracking for mobile
videoconferencing. Real-Time Imaging, 10(2):81–94, April 2004.
[394] Shwetak N Patel, Jeffrey S Pierce, and Gregory D Abowd. A gesture-based authentication
scheme for untrusted public terminals. In Proceedings of the 17th annual ACM symposium on
User interface software and technology, pages 157–160. ACM, 2004.
[395] P.Z. Patrick, G. Aversano, Raphael Blouet, M. Charbit, and G. Chollet. Voice forgery using alisp:
Indexation in a client memory. In Acoustics, Speech, and Signal Processing, 2005. Proceedings.
(ICASSP ’05). IEEE International Conference on, volume 1, pages 17–20, 2005.
[396] Christian Peter, Eric Ebert, and Helmut Beikirch. A wearable multi-sensor system for mobile acquisition of emotion-related physiological data. Affective Computing and Intelligent Interaction.,
3784:691–698, 2005.
[397] P. J. Phillips, S. A. Rizvi, and P. J. Rauss. The FERET evaluation methodology for facerecognition algorithms. IEEE Transactions on Pattern Analysis and Machine Intelligence,
22(10):1090–1104, 2000.
[398] P. Jonathon Phillips, Patrick Grother, Ross J. Micheals, Duane M. Blackburn, Elham Tabassi,
and Mike Bone. Face Recognition Vendor Test 2002. Technical Report, DARPA, March 2003.
[399] P. Jonathon Phillips, Harry Wechsler, Jeffery Huang, and Patrick J. Rauss. The FERET
database and evaluation procedure for face-recognition algorithms. Image and Vision Computing, 16(5):295–306, 1998.
[400] P.J. Phillips. Color Feret Database. http://www.nist.gov/itl/iad/ig/colorferet.cfm.
Web. Last access December, 2013.
[401] P.J. Phillips. MBE Multiple Biometric Evaluation. http://face.nist.gov/. Web. Last access
December, 2013.
[402] P.J. Phillips, K.W. Bowyer, P.J. Flynn, X. Liu, and W.T. Scruggs. The iris challenge evaluation 2005. In Biometrics: Theory, Applications and Systems, 2008. BTAS 2008. 2nd IEEE
International Conference on, pages 1–8, 2008.
[403] P.J. Phillips, W.T. Scruggs, A.J. O’Toole, P.J. Flynn, K.W. Bowyer, C.L. Schott, and M. Sharpe.
FRVT 2006 and ICE 2006 Large-Scale Experimental Results. IEEE Transactions on Pattern
Analysis and Machine Intelligence, 32(5):831–846, 2010.
[404] Rosalind W. Picard, Elias Vyzas, and Jennifer Healey. Toward machine emotional intelligence:
Analysis of affective physiological state. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 23(10):1175–1191, 2001.
[405] V. Piuri and F. Scotti. Fingerprint biometrics via low-cost sensors and webcams. In Biometrics:
Theory, Applications and Systems, 2008. BTAS 2008. 2nd IEEE International Conference on,
pages 1–6, 2008.
[406] Réjean Plamondon and Sargur N Srihari. Online and off-line handwriting recognition: a comprehensive survey. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 22(1):63–84,
2000.
[407] A. Podziewski, K. Litwiniuk, and J. Legierski. Emergency button, A Telco 2.0 application in the
e-health environment. In Conference on Computer Science and Information Systems (FedCSIS),
pages 663–677, 2012.
[408] Norman Poh and Samy Bengio. Database, protocols and tools for evaluating score-level fusion
algorithms in biometric authentication. Pattern Recognition, 39(2):223–233, February 2006.
[409] Norman Poh, Chi Ho Chan, Josef Kittler, Sébastien Marcel, Christopher McCool, Enrique
Aragonés Rúa, José Luis Alba Castro, Mauricio Villegas, Roberto Paredes, Vitomir Struc, Albert Ali Salah, Hui Fang, and Nicholas Costen. An Evaluation of Video-to-Video Face Verification. IEEE Transactions on Information Forensics and Security, 5(4):781–801, 2010.
[410] Alexandru Popa. Tracking hand tremor on touchscreen. PhD thesis, RWTH Aachen University,
2012.
[411] H. Proença. Iris recognition: On the segmentation of degraded images acquired in the visible
wavelength. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 32(8):1502–
1516, 2010.
[412] H. Proença and L.A. Alexandre. Iris segmentation methodology for non-cooperative recognition.
Vision, Image and Signal Processing, IEE Proceedings -, 153(2):199–205, 2006.
[413] H. Proença, S. Filipe, R. Santos, J. Oliveira, and L.A. Alexandre. The UBIRIS.v2: A Database
of Visible Wavelength Iris Images Captured On-the-Move and At-a-Distance. IEEE Transactions
on Pattern Analysis and Machine Intelligence, 32(8):1529–1535, 2010.
[414] Hugo Proença and Luís A. Alexandre. Toward noncooperative iris recognition: a classification
approach using multiple signatures. IEEE Trans Pattern Anal Mach Intell, 29(4):607–612, Apr
2007.
[415] Hugo Proença and Luís A. Alexandre. Introduction to the special issue on the segmentation of
visible wavelength iris images captured at-a-distance and on-the-move. Image Vision Comput.,
28(2):213–214, 2010.
[416] R. Raguram, A. M. White, J. Frahm, P. Georgel, and F. Monrose. On the Privacy Risks of
Virtual Keyboards: Automatic Reconstruction of Typed Input from Compromising Reflections.
IEEE Transactions on Dependable and Secure Computing, 10(3):154–167, May 2013.
[417] M. Rahman, Jianfeng Ren, and N. Kehtarnavaz. Real-time implementation of robust face detection on mobile platforms. IEEE International Conference on Acoustics, Speech and Signal
Processing, pages 1353–1356, April 2009.
[418] Jianfeng Ren, Xudong Jiang, and Junsong Yuan. A complete and fully automated face verification system on mobile devices. Pattern Recognition, 46(1):45–56, January 2013.
[419] Jianfeng Ren, Nasser Kehtarnavaz, and Lorenzo Estevez. Real-Time Optimization of Viola-Jones
Face Detection for Mobile Platforms. Circuits and Systems Workshop: System-on-Chip-Design,
Applications, Integration, and Software, IEEE Dallas, pages 1–4, 2008.
[420] Douglas A Reynolds. Speaker identification and verification using gaussian mixture speaker
models. Speech communication, 17(1):91–108, 1995.
[421] Douglas A Reynolds. An overview of automatic speaker recognition technology. In Acoustics,
Speech, and Signal Processing (ICASSP), 2002 IEEE International Conference on, volume 4,
pages IV–4072. IEEE, 2002.
[422] A. Rezaei and S. Mirzakuchaki. A recognition approach using multilayer perceptron and keyboard
dynamics patterns. 2013 First Iranian Conference on Pattern Recognition and Image Analysis
(PRIA), pages 1–5, March 2013.
[423] R. Ricci, G. Chollet, M. V. Crispino, S. Jassim, J. Koreman, and A. Morris. The “SecurePhone”: A mobile phone with biometric authentication and e-signature support for dealing
secure transactions on the fly. Defense and Security Symposium. International Society for Optics and Photonics, 2006.
[424] Ricardo N. Rodrigues, Lee Luan Ling, and Venu Govindaraju. Robustness of multimodal biometric fusion methods against spoof attacks. Journal of Visual Languages & Computing, 20(3):169–
179, June 2009.
[425] David Rodriguez, Juan M. Sanchez, and Arturo Duran. Mobile fingerprint identification using
a hardware accelerated biometric service provider. In Koen Bertels, Joao M.P. Cardoso, and
Stamatis Vassiliadis, editors, Reconfigurable Computing: Architectures and Applications, volume
3985 of Lecture Notes in Computer Science, pages 383–388. Springer Berlin Heidelberg, 2006.
[426] L Rong, D Zhiguo, Z Jianzhong, and L Ming. Identification of individual walking patterns
using gait acceleration. In The 1st International Conference on Bioinformatics and Biomedical
Engineering, (ICBBE), pages 543–546, 2007.
[427] Liu Rong, Zhou Jianzhong, Liu Ming, and Hou Xiangfeng. A Wearable Acceleration Sensor
System for Gait Recognition. In 2007 2nd IEEE Conference on Industrial Electronics and
Applications, pages 2654–2659. IEEE, May 2007.
[428] A.E. Rosenberg. Automatic speaker verification: A review. Proceedings of the IEEE, 64(4):475–
487, 1976.
[429] Henry A Rowley, Shumeet Baluja, and Takeo Kanade. Neural Network-Based Face Detection.
IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(1):23–28, 1998.
[430] Mariusz Rybnik, Marek Tabedzki, and Khalid Saeed. A Keystroke Dynamics Based System for
User Identification. International Conference on Computer Information Systems and Industrial
Management Applications, pages 225–230, 2008.
[431] H. Saevanee and P. Bhattarakosol. Authenticating User Using Keystroke Dynamics and Finger
Pressure. 2009 6th IEEE Consumer Communications and Networking Conference, pages 1–2,
January 2009.
[432] R. Sanchez-Reillo and A. Gonzalez-Marcos. Access control system with hand geometry verification and smart cards. Aerospace and Electronic Systems Magazine, IEEE, 15(2):45–48, 2000.
[433] R. Sanchez-Reillo, C. Sanchez-Avila, and A. Gonzalez-Marcos. Biometric identification through
hand geometry measurements. Pattern Analysis and Machine Intelligence, IEEE Transactions
on, 22(10):1168–1171, 2000.
[434] Sanchit, M. Ramalho, P.L. Correia, and L.D. Soares. Biometric identification through palm and
dorsal hand vein patterns. In EUROCON - International Conference on Computer as a Tool
(EUROCON), 2011 IEEE, pages 1–4, 2011.
[435] A. Sanmorino and S. Yazid. A survey for handwritten signature verification. In Uncertainty
Reasoning and Knowledge Engineering (URKE), 2012 2nd International Conference on, pages
54–57, 2012.
[436] Akane Sano and Rosalind W. Picard. Stress Recognition using Wearable Sensors and Mobile
Phones. In Humaine Association Conference on Affective Computing and Intelligent Interaction,
2013.
[437] Alberto Santos Sierra, Carmen Sánchez-Ávila, Aitor Mendaza Ormaza, and Javier
Guerra Casanova. An approach to hand biometrics in mobile devices. Signal, Image and Video
Processing, 5(4):469–475, 2011.
[438] Sarah E. Baker, Kevin W. Bowyer, Patrick J. Flynn, and P. Jonathon Phillips. Handbook of Iris
Recognition, chapter Template Aging in Iris Biometrics: Evidence of Increased False Reject Rate
in ICE 2006. Springer, 2013.
[439] Johannes Schumm, M. Bächlin, Cornelia Setz, Bert Arnrich, Daniel Roggen, and Gerhard
Tröster. Effect of Movements on the Electrodermal Response after a Startle Event. Methods of
Information in Medicine, pages 186–191, 2008.
[440] M Sekine, T Tamura, T Fujimoto, and Y Fukui. Classification of walking pattern using acceleration waveform in elderly people. In 22nd Annual International Conference of the IEEE
Engineering in Medicine and Biology Society, volume 2, pages 1356–1359, 2000.
[441] Nandakumar Selvaraj, Ashok Jaryal, Jayashree Santhosh, Kishore K Deepak, and Sneh Anand.
Assessment of heart rate variability derived from finger-tip photoplethysmography as compared
to electrocardiography. Journal of medical engineering & technology, 32(6):479–484, 2008.
[442] Cornelia Setz, Bert Arnrich, Johannes Schumm, Roberto La Marca, Gerhard Tröster, and Ulrike
Ehlert. Discriminating stress from cognitive load using a wearable EDA device. IEEE Transactions
on Information Technology in Biomedicine, 14(2):410–417, March 2010.
[443] D. Shanmugapriya. A Survey of Biometric Keystroke Dynamics: Approaches, Security and
Challenges. 5(1):115–119, 2009.
[444] A. Shastry, R. Burchfield, and S. Venkatesan. Dynamic signature verification using embedded
sensors. In Body Sensor Networks (BSN), 2011 International Conference on, pages 168–173,
2011.
[445] S.V. Sheela and P.A. Vijaya. Iris recognition methods - survey. International Journal of Computer Applications, 3:19–25, 2010.
[446] Tsu-Wang Shen, Willis J Tompkins, and Yu Hen Hu. Implementation of a one-lead ECG human
identification system on a normal population. Journal of Engineering and Computer Innovations,
2(1):12–21, 2011.
[447] Sajad Shirali-Shahreza, Yashar Ganjali, and Ravin Balakrishnan. Verifying human users in
speech-based interactions. In INTERSPEECH, pages 1585–1588. ISCA, 2011.
[448] Sajad Shirali-Shahreza, Gerald Penn, Ravin Balakrishnan, and Yashar Ganjali. SeeSay and
HearSay CAPTCHA for mobile interaction. In Proceedings of the SIGCHI Conference on Human
Factors in Computing Systems, CHI ’13, pages 2147–2156, New York, NY, USA, 2013. ACM.
[449] Diego A Socolinsky and Andrea Selinger. A Comparative Analysis of Face Recognition Performance with Visible and Thermal Infrared Imagery. In Proceedings of the 16th International
Conference on Pattern Recognition, volume 4, pages 217–222, 2002.
[450] F Soong, A Rosenberg, L Rabiner, and B Juang. A vector quantization approach to speaker
recognition. In Acoustics, Speech, and Signal Processing, IEEE International Conference on
ICASSP’85., volume 10, pages 387–390. IEEE, 1985.
[451] Libor Spacek. Face Recognition Data. http://cswww.essex.ac.uk/mv/allfaces/index.html.
Web. Last access December, 2013.
[452] Sebastijan Sprager and Damjan Zazula. A cumulant-based method for gait identification using
accelerometer data with principal component analysis and support vector machine. WSEAS
Transactions on Signal Processing, 5(11):369–378, November 2009.
[453] R. Stojanovic and D. Karadaglic. An LED–LED-based photoplethysmography sensor. Physiological Measurement, 28(6):N19, 2007.
[454] Y. Stylianou. Voice transformation: A survey. In Acoustics, Speech and Signal Processing, 2009.
ICASSP 2009. IEEE International Conference on, pages 3585–3588, 2009.
[455] Qi Su, Jie Tian, Xinjian Chen, and Xin Yang. A fingerprint authentication system based on
mobile phone. In Takeo Kanade, Anil Jain, and Nalini K. Ratha, editors, Audio- and Video-Based Biometric Person Authentication, volume 3546 of Lecture Notes in Computer Science,
pages 151–159. Springer Berlin Heidelberg, 2005.
[456] F. Sufi, Q. Fang, and I. Cosic. ECG R-R peak detection on mobile phones. Annual International
Conference of the IEEE Engineering in Medicine and Biology Society, 2007:3697–3700, January
2007.
[457] Dongbin Sun, Wu Liu, Ping Ren, and Donghong Sun. A Data Security Protection Mechanism
Based on Transparent Biometric Authentication for Mobile Intelligent Terminals. 2012 Third
Cybercrime and Trustworthy Computing Workshop, pages 1–6, October 2012.
[458] Lingming Sun, Wei Wei, and Fu Liu. A hand shape recognition method research based on
Gaussian mixture model. In Optoelectronics and Image Processing (ICOIP), 2010 International
Conference on, volume 1, pages 15–19, 2010.
[459] Michael Sung and A. S. Pentland. Stress and lie detection through non-invasive physiological
sensing. Biomedical Soft Computing and Human Sciences (IJBSCHS), Special Issue: Bio-sensors:
Data Acquisition, Processing and Control, 14(2):109–116, 2005.
[460] S Tanaka, K Motoi, M Nogawa, and K Yamakoshi. A new portable device for ambulatory
monitoring of human posture and walking velocity using miniature accelerometers and gyroscope.
In 26th Annual International Conference of the IEEE on Engineering in Medicine and Biology
Society., volume 3, pages 2283–2286, 2004.
[461] Qian Tao and Raymond N. J. Veldhuis. Biometric Authentication for Mobile Personal Device.
In Mobile and Ubiquitous Systems - Workshops, 2006. 3rd Annual International Conference on,
pages 1–3, 2006.
[462] Pin Shen Teh, Andrew B. J. Teoh, and Shigang Yue. A Survey of Keystroke Dynamics Biometrics. In 2012 International Conference on Cyber Security, Cyber Warfare and Digital Forensic
(CyberSec), pages 277–282, June 2012.
[463] TELECOM SudParis. BIOMET: Multimodal Database. http://biometrics.it-sudparis.
eu/english/index.php?item=10&menu=projects. Web. Last access December, 2013.
[464] P. Tome-Gonzalez, F. Alonso-Fernandez, and J. Ortega-Garcia. On the effects of time variability
in iris recognition. In Biometrics: Theory, Applications and Systems, 2008. BTAS 2008. 2nd
IEEE International Conference on, pages 1–6, 2008.
[465] Thanh Trung Ngo, Yasushi Makihara, Hajime Nagahara, Yasuhiro Mukaigawa, and Yasushi
Yagi. Performance evaluation of gait recognition using the largest inertial sensor-based gait
database. Pattern Recognition, 47(1):228–237, March 2014.
[466] Yasin Uzun and Kemal Bicakci. A second look at the performance of neural networks for
keystroke dynamics using a publicly available dataset. Computers & Security, 31(5):717–726,
July 2012.
[467] E. Vildjiounaite, S.-M. Makela, M. Lindholm, V. Kyllonen, and H. Ailisto. Increasing Security of
Mobile Devices by Decreasing User Effort in Verification. 2007 Second International Conference
on Systems and Networks Communications (ICSNC 2007), 2007.
[468] Elena Vildjiounaite, Satu-Marja Makela, Mikko Lindholm, Vesa Kyllonen, and Heikki Ailisto.
Increasing Security of Mobile Devices by Decreasing User Effort in Verification. Second International Conference on Systems and Networks Communications (ICSNC 2007), August 2007.
[469] Elena Vildjiounaite, Satu-Marja Mäkelä, Mikko Lindholm, Reima Riihimäki, Vesa Kyllönen,
Jani Mäntyjärvi, and Heikki Ailisto. Pervasive Computing, volume 3968 of Lecture Notes in
Computer Science. Springer Berlin Heidelberg, Berlin, Heidelberg, May 2006.
[470] Elena Vildjiounaite, Satu-Marja Mäkelä, Mikko Lindholm, Reima Riihimäki, Vesa Kyllönen,
Jani Mäntyjärvi, and Heikki Ailisto. Unobtrusive Multimodal Biometrics for Ensuring Privacy
and Information Security with Personal Devices. Pervasive Computing, pages 187–201, 2006.
[471] J. Villalba and E. Lleida. Preventing replay attacks on speaker verification systems. In Security
Technology (ICCST), 2011 IEEE International Carnahan Conference on, pages 1–8, 2011.
[472] Jesus Villalba and Eduardo Lleida. Detecting replay attacks from far-field recordings on speaker
verification systems. In Claus Vielhauer, Jana Dittmann, Andrzej Drygajlo, Niels Christian Juul,
and Michael C. Fairhurst, editors, Biometrics and ID Management, volume 6583 of Lecture Notes
in Computer Science, pages 274–285. Springer Berlin Heidelberg, 2011.
[473] Paul Viola and Michael Jones. Rapid Object Detection using a Boosted Cascade of Simple
Features. Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and
Pattern Recognition, 1:511–518, 2001.
[474] Visidon. Visidon. http://www.visidon.fi/en/Home. Web. Last access December, 2013.
[475] Chendi Wang and Feng Wang. An Emotional Analysis Method Based on Heart Rate Variability. In Proceedings of the IEEE-EMBS International Conference on Biomedical and Health
Informatics (BHI 2012), volume 25, pages 104–107, Hong Kong and Shenzhen, China, 2012.
[476] Lingyu Wang and Graham Leedham. A thermal hand vein pattern verification system. In
Sameer Singh, Maneesha Singh, Chid Apte, and Petra Perner, editors, Pattern Recognition and
Image Analysis, volume 3687 of Lecture Notes in Computer Science, pages 58–65. Springer Berlin
Heidelberg, 2005.
[477] Frederick W. Wheeler, A.G.A. Perera, G. Abramovich, Bing Yu, and Peter H. Tu. Stand-off iris
recognition system. In Biometrics: Theory, Applications and Systems, 2008. BTAS 2008. 2nd
IEEE International Conference on, pages 1–7, 2008.
[478] Richard Patrick Wildes. Iris recognition: An emerging biometric technology. Proceedings of the
IEEE, 85(9):1348–1363, 1997.
[479] Richard Patrick Wildes, Jane Circle Asmuth, Keith James Hanna, Stephen Charles Hsu, Raymond Joseph Kolczynski, James Regis Matey, and Sterling Eduard McBride. Automated, noninvasive iris recognition system and method. U.S. Patent, 5572596, 1996.
[480] Richard Patrick Wildes, Jane Circle Asmuth, Keith James Hanna, Stephen Charles Hsu, Raymond Joseph Kolczynski, James Regis Matey, and Sterling Eduard McBride. Automated, noninvasive iris recognition system and method. U.S. Patent, 5751836, 1998.
[481] T. Wilkin and Ooi Shih Yin. State of the art: Signature verification system. In Information
Assurance and Security (IAS), 2011 7th International Conference on, pages 110–115, 2011.
[482] B. Winn, D. Whitaker, D. B. Elliott, and N. J. Phillips. Factors affecting light-adapted pupil
size in normal human subjects. Investigative Ophthalmology & Visual Science, 35(3):1132–1137,
March 1994.
[483] R.H. Woo, A. Park, and T.J. Hazen. The MIT mobile device speaker verification corpus: Data
collection and preliminary experiments. In Speaker and Language Recognition Workshop, 2006.
IEEE Odyssey 2006: The, pages 1–6, 2006.
[484] Hao-Yu Wu, Michael Rubinstein, Eugene Shih, John Guttag, Frédo Durand, and William Freeman. Eulerian video magnification for revealing subtle changes in the world. ACM Transactions
on Graphics, 31(4):1–8, July 2012.
[485] Wen-Yen Wu. A string matching method for hand recognition. In Natural Computation (ICNC),
2011 Seventh International Conference on, volume 3, pages 1598–1601, 2011.
[486] Zhi-Zheng Wu, Chng Eng Siong, and Haizhou Li. Detecting converted speech and natural speech
for anti-spoofing attack in speaker recognition. In INTERSPEECH, 2012.
[487] Xiongwu Xia and Lawrence O’Gorman. Innovations in fingerprint capture devices. Pattern
Recognition, 36(2):361–369, 2003.
[488] Xiaobo Wang, Yuexiang Li, and Feng Qiao. Gait authentication based on multi-criterion model
of acceleration features. In International Conference on Modelling, Identification and Control
(ICMIC), pages 664–669, Okayama, 2010.
[489] Wei Xiong, Changsheng Xu, and Sim Heng Ong. Peg-free human hand shape analysis and
recognition. In Acoustics, Speech, and Signal Processing, 2005. Proceedings. (ICASSP ’05).
IEEE International Conference on, volume 2, pages 77–80, 2005.
[490] Liu Yan, Li Yue-e, and Hou Jian. Gait recognition based on MEMS accelerometer. In IEEE
10th International Conference on Signal Processing, pages 1679–1681. IEEE, October 2010.
[491] Jie Yang, Xilin Chen, and W. Kunz. A PDA-based face recognition system. Sixth IEEE Workshop on Applications of Computer Vision, Proceedings., pages 19–23, 2002.
[492] Berrin Yanikoglu and Alisher Kholmatov. Online signature verification using Fourier descriptors.
EURASIP Journal on Advances in Signal Processing, 2009:12, 2009.
[493] Yong-Fang Yao, Xiao-Yuan Jing, and Hau-San Wong. Face and palmprint feature level fusion
for single sample biometrics recognition. Neurocomputing, 70(7-9):1582–1586, March 2007.
[494] Dit-Yan Yeung, Hong Chang, Yimin Xiong, Susan George, Ramanujan Kashi, Takashi Matsumoto, and Gerhard Rigoll. Svc2004: First international signature verification competition. In
David Zhang and Anil K. Jain, editors, Biometric Authentication, volume 3072 of Lecture Notes
in Computer Science, pages 16–22. Springer Berlin Heidelberg, 2004.
[495] E. Yoruk, E. Konukoglu, B. Sankur, and J. Darbon. Shape-based hand recognition. Image
Processing, IEEE Transactions on, 15(7):1803–1815, 2006.
[496] Saira Zahid, Muhammad Shahzad, Syed Ali Khayam, and Muddassar Farooq. Keystroke-based
User Identification on Smart Phones. pages 1–18, 2009.
[497] Cha Zhang and Zhengyou Zhang. A Survey of Recent Advances in Face Detection. Technical
Report MSR-TR-2010-66, Microsoft Research, 2010.
[498] Tianxiang Zhang, Michelle Karg, Jonathan Feng-Shun Lin, Dana Kulic, and Gentiane Venture.
IMU based single stride identification of humans. pages 220–225, 2013.
[499] Yangyang Zhang, Jie Tian, Xinjian Chen, Xin Yang, and Peng Shi. Fake finger detection based
on thin-plate spline distortion model. In Advances in Biometrics, pages 742–749. Springer, 2007.
[500] Z. Zhang, J. Yan, S. Liu, Z. Lei, D. Yi, and S. Z. Li. A face antispoofing database with diverse
attacks. International Conference on Biometrics, pages 26–31, 2012.
[501] Zhaoxiang Zhang, Kaiyue Wang, and Yunhong Wang. A survey of on-line signature verification.
In Zhenan Sun, Jianhuang Lai, Xilin Chen, and Tieniu Tan, editors, Biometric Recognition,
volume 7098 of Lecture Notes in Computer Science, pages 141–149. Springer Berlin Heidelberg,
2011.
[502] W. Zhao, R. Chellappa, P. J. Phillips, and A. Rosenfeld. Face Recognition: A Literature Survey.
ACM Computing Surveys (CSUR)., 35(4):399–458, 2003.
[503] Gang Zheng, Chia-Jiu Wang, and T.E. Boult. Application of projective invariants in hand
geometry biometrics. Information Forensics and Security, IEEE Transactions on, 2(4):758–768,
2007.
[504] Nan Zheng, Kun Bai, Hai Huang, and Haining Wang. You Are How You Touch: User Verification
on Smartphones via Tapping Behaviors. Technical report, College of William & Mary, Department
of Computer Science, 2012.