APPENDIX 2
Supplementary memorandum from the Government
STRUCTURE OF IDENTITY CARDS PROGRAMME
Assuming the Identity Cards Bill has received
Royal Assent, the Identity Cards Programme and UKPS (UK Passport
Service) will combine to form a new agency on 1 April 2006. This
will be headed by a new chief executive who will be recruited
by open competition following Royal Assent of the Bill. There
will be four executive directors: three responsible for service
delivery, business development and corporate services, and a Chief
Information Officer (CIO). The procurement of the components of the ID Cards
scheme mainly falls within the CIO's brief.
As of February 2006 there were 186 people working
in the Identity Cards Programme team: 54 civil
servants, 98 consultants from our development partners, and
34 interims.
The diagram on the following page shows the
combined high-level structure of the Identity Cards Programme
and UKPS as they form a new agency.
2. ACADEMIC AND COMMERCIAL SURVEY
In this section and in others some terms are
used to describe the performance of biometric systems in matching
a biometric with previously recorded biometrics. Below is an explanation
of the most commonly used terms:
False Match Rate (FMR): the probability
that a biometric sample, when compared with a biometric of the
same type from a different (and randomly-selected) individual,
will be falsely declared to match that biometric. Eg a false match
would be where your fingerprints match another individual's.
False Accept Rate (FAR): the probability
of a biometric matching transaction resulting in a wrongful confirmation
of claim of identity (in a positive ID system, ie one which
tests a claim that a person is enrolled in a system) or non-identity
(in a negative ID system, ie one which tests a claim that a
person is not enrolled in a system). A transaction may consist
of one or more wrongful attempts dependent upon system policy.
Eg a false accept would be where your fingerprints match someone
else's in a database of fingerprints.
False Non-Match Rate (FNMR) (or False
Reject Rate, FRR): the probability that a biometric sample,
when compared with a biometric of the same type from the same
user, will be falsely declared not to match that biometric. Eg
a false non-match would be where your fingerprints fail to match
your previously enrolled fingerprints.
False Reject Rate (FRR): the probability
of a biometric matching transaction resulting in a wrongful denial
of claim of identity (in a positive ID system) or non-identity
(in a negative ID system). A transaction may consist of one or
more truthful attempts dependent upon system policy. Eg a false
reject would be where your fingerprints fail to match your own
in a database of fingerprints.
Failure To Enrol Rate (FTE): the
expected proportion of the population for whom the system is unable
to generate repeatable biometrics. This will include those unable
to present the required biometric feature, those unable to produce
an image of sufficient quality at enrolment, and those who cannot
reliably match their Reference in attempts to confirm the enrolment
is usable.
Failure To Acquire Rate (FTA): the
expected proportion of transactions for which the system is unable
to capture or locate an image or signal of sufficient quality.
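As an illustration of these rates, the sketch below (with invented similarity scores and an invented threshold, purely for illustration) estimates FMR and FNMR from lists of impostor and genuine comparison scores:

```python
# Hypothetical illustration: estimating FMR and FNMR from match scores.
# Scores and the threshold below are invented toy data.

def fmr(impostor_scores, threshold):
    """Fraction of impostor comparisons falsely declared a match."""
    return sum(s >= threshold for s in impostor_scores) / len(impostor_scores)

def fnmr(genuine_scores, threshold):
    """Fraction of genuine comparisons falsely declared a non-match."""
    return sum(s < threshold for s in genuine_scores) / len(genuine_scores)

# Toy similarity scores in [0, 1]
impostor = [0.10, 0.20, 0.15, 0.90, 0.05]   # different-person comparisons
genuine  = [0.95, 0.85, 0.40, 0.99, 0.88]   # same-person comparisons

print(fmr(impostor, 0.5))   # 1 of 5 impostor scores >= 0.5 -> 0.2
print(fnmr(genuine, 0.5))   # 1 of 5 genuine scores < 0.5 -> 0.2
```

Raising the threshold trades FMR against FNMR, which is why a single rate is meaningless without its counterpart.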
Different biometrics (eg fingerprint, iris,
face) have different performance characteristics. These vary
between different implementations of a single type of biometric,
and also depend on how the system is designed and operated (eg
on the competence and experience of operators).
On the next page are extracts taken from the
document "Biometric and Card Technology Options", produced
during September 2004. The section summarises the work done to
survey the academic and commercial literature on biometrics. This
document was not shown to ministers; rather it was a resource
for officials so that they could have in one place a summary of
relevant research.
An important point to note when reading this
work is that where the performance of biometric systems is discussed,
this is the "raw" performance of individual biometric
technologies measured by standards institutions and academic bodies.
It does not necessarily equate to the performance of a biometric
system which combines several biometrics or which allows multiple
attempts at enrolling a biometric. Nor does it equate to the performance
of an end-to-end enrolment system which uses biometrics as a single
component of identity validation together with, for example, a
biographical check and an interview.
To further inform the committee about the subject
we have included as a footnote on p 17 a short note on biometric
spoofing. This was part of a summary on biometrics that Baroness
Scotland provided for Peers following the first 3 days of Committee
in the Lords on 11 December 2005.
2. BIOMETRIC TECHNOLOGY OPTIONS
2.1 Scope
This document considers the performance of the
biometric sub-system and the capabilities of card technology.
It covers only the technical performance of the biometric capture
device, template generation and matching algorithm. It does not
address any IT aspects of the biometric sub-system, for example
the National Identity Register database.
A biometric system has traditionally consisted
of three subsystems: image acquisition, feature extraction, and
matching.
In image acquisition, a digital image of a biometric
is recorded either from a live scan of a person's biometric or
from an impression of a person's biometric on paper (eg fingerprint
cards). Feature extraction is the process of representing the
captured image in some space (the "template") to facilitate
matching. Matching involves computing the likelihood of the biometric
coming from subjects (persons) in the database. The performance
of the whole system depends on how well each subsystem behaves.
The biometric capture device is the hardware
that captures an electronic representation of the biometric (eg
an iris camera or a fingerprint scanner). The template generation
algorithm processes the captured image into a template and the
matching algorithm computes the probability that a template matches
another.
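A minimal sketch of this three-stage pipeline is given below. All functions are placeholders invented for illustration: a real system would drive a capture device and use a proper template and matching algorithm.

```python
# Minimal sketch of the three traditional subsystems: image acquisition,
# feature extraction (template generation), and matching.
# All functions and data are illustrative placeholders.

def acquire_image(sensor_reading):
    # In a real system this would drive an iris camera or fingerprint scanner.
    return sensor_reading

def extract_template(image):
    # Stand-in "template": a simple normalised feature vector.
    return [pixel / 255 for pixel in image]

def match(template_a, template_b):
    # Toy similarity: 1 minus the mean absolute difference of the templates.
    diffs = [abs(a - b) for a, b in zip(template_a, template_b)]
    return 1 - sum(diffs) / len(diffs)

enrolled = extract_template(acquire_image([200, 180, 40, 90]))
probe    = extract_template(acquire_image([198, 182, 42, 88]))
print(match(enrolled, probe) > 0.9)  # high similarity -> likely same subject
```

The point of the decomposition is that each stage can degrade overall performance independently: a poor capture device limits what the best matching algorithm can do.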
2.2 Biometric performance definition and terms
There is no standardised method for presenting
biometric performance or even for the terms used to describe performance.
Common terms used are FMR, FAR, FNMR, FRR, and FTE. FMR, FNMR
and FTE are the properties that are least ambiguous. This document
uses definitions in [ref NPL testing report]. In particular we
make reference to the following:
FMR: this is defined as the
probability that the biometric sub-system will decide that two
biometric templates are from the same individual when in fact
they are not.
FNMR: this is defined as the
probability that the biometric sub-system will decide that two
biometric templates are not from the same individual when in fact
they are.
FTE: this is the percentage
of failures to enrol in the biometric system, ie it is the percentage
of people who cannot give a biometric of sufficient quality to
be enrolled.
In the context of databases (of size N), frequently
confused terms are FAR, FMR and effective FMR. In this document
we define these, for positive identification scenarios, as:
FAR, or effective FMR, approx. N x FMR
(assuming N x FMR << 1)
FAR is larger than FMR because the more times a match
is attempted, the more false matches will result. FRR, by contrast,
does not grow with database size: a person's only true match is
against their own template.
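The approximation can be checked numerically. The sketch below, with purely illustrative figures, compares N x FMR with the exact expression FAR = 1 - (1 - FMR)^N:

```python
# Sketch of the FAR ~ N x FMR approximation against the exact expression
# FAR = 1 - (1 - FMR)^N, valid when N x FMR << 1. Figures are illustrative.

fmr = 1e-6          # per-comparison false match rate
N = 10_000          # database size

far_exact = 1 - (1 - fmr) ** N
far_approx = N * fmr

print(far_approx)                          # ~0.01
print(abs(far_exact - far_approx) < 1e-4)  # True: approximation holds here
```

As N x FMR approaches 1 the linear approximation overstates FAR, since the exact expression saturates below 100%.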
Other terminology and definitions used to describe
biometrics and biometric performance testing are set out in Appendix
E.
In terms of interpreting statistics:
An FAR of 1% or 0.01 in an enrolment
(identification) scenario implies that every hundredth enrolee
will falsely match against the enrolment database. In a verification
scenario (e.g. against a template stored on a card) a person would
have to acquire 100 cards before they could falsely match against
one.
An FRR of 1% or 0.01 in an enrolment
(identification) scenario implies that an imposter would have
to try 100 times to re-enrol under a second identity. In a verification
scenario (e.g. against a template stored on a card) a person would
be refused entry to a building at every hundredth attempt.
FMR can be derived from FAR statistics generated
during trials using the equations above. However, this FMR should
not be used to extrapolate an FAR beyond the database size
that was used to calculate it in the first place. For example,
if an FAR of 0.5% is measured using a database of 10 million, then
the FMR is 5e-10. The "estimated" FAR for a database
of 100 million is therefore calculated at 5%, but this is a result
that has not been tested.
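The worked example above can be reproduced directly; the sketch below uses only the figures quoted in the text:

```python
# Reproducing the extrapolation caution above, using the figures quoted
# in the text: an FMR back-derived from a measured FAR on a 10 million
# record database, then (unsafely) extrapolated to 100 million.

far_measured = 0.005            # 0.5% FAR measured on 10 million records
n_measured = 10_000_000

fmr = far_measured / n_measured
print(fmr)                      # ~5e-10

n_extrapolated = 100_000_000
far_estimated = n_extrapolated * fmr
print(far_estimated)            # ~0.05, ie 5% -- a figure never actually tested
```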
2.3 Biometric performance in large scale tests
The most widely independently tested biometrics
(in terms of database sizes) are fingerprint and face.
Iris performance statistics from independent
tests are limited to the hundreds, although there is limited
vendor-supplied iris data, based on database sizes in the hundreds
of thousands, gathered from a real-life deployment. There is no
large scale database for multimodal biometrics (two or more
distinct biometrics captured under the same controlled conditions),
although there is large scale multibiometric (1-10 fingers) data.
Principal tests have been conducted by NIST
and, to a lesser extent, NPL and the FVC competitions in Italy, using
databases gathered from real-life deployments tested against vendors'
products under test conditions.
There is very little data from real-life deployments
in the public domain.
As mentioned previously there is no standard
test protocol. As heavy use is made of NIST data the NIST protocol
is detailed below.
There are three distinct test scenarios that
NIST defines, called Verification, Closed-Set Identification,
and Open-Set Identification. For each task, appropriate performance
statistics are defined.
In verification (1:1 matching), a
subject presents his biometric image to the system and claims
to be a person in the system's gallery. For evaluation, each probe
image is compared to each gallery image independently. Two performance
measures are computed: True Accept Rate (TAR), the fraction of
true identity claims scoring above a threshold; and False Accept
Rate (FAR), the fraction of false identity claims scoring above
threshold. The resulting relationship between TAR and FAR, where
each point is defined as a function of score threshold, may be
graphed on a Receiver Operator Characteristic (ROC) curve.
In closed-set identification (1:N
matching), only subjects known to be in the gallery are searched.
The system's ability to identify the subject is evaluated based
on the fraction of searches in which the probe image scored at
rank k or higher. A probe has rank k if the correct match is the
kth largest similarity score. No score threshold is used. The
relationship between Identification rate and rank may be graphed
on a Cumulative Match Characteristic (CMC) curve.
In open-set identification (1:N
matching), each subject is searched against the gallery, and an
alarm is raised if the subject occurs in the gallery. A subject
is considered to be "in the gallery" if the probe image
scored above the threshold at rank k or higher. In evaluation,
the system's ability to detect and identify is measured as two
rates: the true accept rate and the false accept rate. An open-set
identification ROC plots TAR vs. FAR. This may be generalized
using rank, where the subject must be detected and identified
at rank k or better.
Note that in a verification (1:1) task, the
performance metrics are based on each comparison of a probe image
to a gallery image, whereas in the identification (1:N) tasks,
the performance metrics are based on each search of a probe image
against the entire gallery.
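As a toy illustration of the verification metrics defined above, the sketch below (with invented scores and an invented threshold) computes one (TAR, FAR) operating point of the kind plotted on a ROC curve:

```python
# Illustrative computation of one ROC operating point (TAR, FAR) for a
# verification (1:1) task, following the NIST definitions above.
# Scores and the threshold are invented toy data.

def roc_point(genuine_scores, impostor_scores, threshold):
    tar = sum(s >= threshold for s in genuine_scores) / len(genuine_scores)
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    return tar, far

genuine  = [0.92, 0.88, 0.75, 0.97]  # true identity claims
impostor = [0.30, 0.55, 0.20, 0.10]  # false identity claims

tar, far = roc_point(genuine, impostor, threshold=0.6)
print(tar, far)  # 1.0 0.0 at this threshold
```

Sweeping the threshold over all score values traces out the full ROC curve.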
The table below summarises biometric performance
data from independent large scale tests:
Source | Trial | Data source | Gallery | Probe | Type | Verification/Identification | Statistics
NISTIR 7110 | Matching performance for the US-VISIT IDENT system | operational, US-VISIT | 6,000,000 | 60,000 | finger, live index pairs | Identification | FAR 0.31%, TAR 96%
NISTIR 7110 | Matching performance for the US-VISIT IDENT system | operational, US-VISIT | 6,000,000 | 60,000 | finger, live index pairs | Identification | FAR 0.08%, TAR 95%
NISTIR 7110 | Matching performance for the US-VISIT IDENT system | operational, US-VISIT | 6,000 | 6,000 | finger, live index pairs | Verification | FAR 0.1%, TAR 99.5%
NISTIR 7123 | FPVTE2003 | operational, multisource | 9,000 | 4,000 | finger, 10 live slap | Identification | FAR 1e-4, TAR >0.999
NISTIR 7123 | FPVTE2003 | operational, multisource | 21,119 | 4,184 | finger, 2 live flat | Identification | FAR 1e-4, TAR 0.9959
NISTIR 7123 | FPVTE2003 | operational, multisource | 3,190 | 1,204 | finger, 1 live flat | Identification | FAR 1e-4, TAR 0.9825
NISTIR 6965 | FRVT2002 | operational, US Visa Services | 37,437 | 74,874 | face 2D | Verification | FAR 1%, TAR 90% (indoors)
NISTIR 6965 | FRVT2002 | operational, US Visa Services | 37,437 | 74,874 | face 2D | Verification | FAR 1%, TAR 54% (outdoors)
NISTIR 6965 | FRVT2002 | operational, US Visa Services | 37,437 | 74,874 | face 2D | Identification | Identification rate 73% at rank 1
X92A/4009309 | Biometric Product Testing Final Report | scenario, NPL staff | 200 | - | iris | Verification | FTE 0.5%, FMR 0%, FNMR 1.9%
The table below summarises biometric performance data from
vendors and other sources:
Source | Trial | Data source | Gallery (no. of people) | Probe | Type | Verification/Identification | Statistics
TR-02-004 | Iridian Crossmatch study | operational, composite data source | 120,000 | 9,000 x 17,000 sets | iris | Verification | FAR 3.92e-6
International Airport Review, Issue 2, 2004 | UAE border deployment | operational | 430,000 | 2.2 million | iris | Identification | 9,506 matches, none disputed, 0.2% FRR at third attempt
Manufacturer | "FRVT2002 test set up" | unknown | unknown | unknown | face 3D | Verification | FAR 0.1%, FRR 1.5% to 3%
Manufacturer | "FRVT2002 test set up" | unknown | unknown | unknown | face 3D | Verification | FAR 1%, FRR 0.5% to 1.5%
Philippines ID | SSS-ID | Philippines Government | 7,900,000 | unknown | finger, 4 (2 on card, 4 on NIR) | NA | FTE ~2% ("finger wound")
Cogent | Cogent study | Cogent database | 10 million | 25,000 | finger, 2 | Identification | FAR 0.5%, FNMR 5%
Cogent | Cogent study | Cogent database | 10 million | 25,000 | finger, 4 | Identification | FAR 0.1%, FNMR 1%
Cogent | Cogent study | Cogent database | 10 million | 25,000 | finger, 10 | Identification | FAR 0.004%, FNMR 1%
ATOS Origin | UKPS | UKPS | up to 10,000 | NA | iris | NA | FTE 9%
ATOS Origin | UKPS | UKPS | up to 10,000 | NA | finger | NA | FTE 2%
Summarising finger and face performance, NIST
highlights the following points regarding the data above:
One-to-Many Matching (Identification): NIST
recommends ten slap fingerprint images stored in type 14 ANSI/NIST-ITL
1-2000 formatted records for enrolment and checking of large databases.
Face images are not recommended for identification applications.
With available fingerprint scanning technology, the acquisition
of 10 slap fingerprints should take only slightly more time than
the acquisition of two flat fingerprints.
One-to-One Matching (Verification): NIST
recommends one face and two index fingerprints for verification.
All three biometrics should be in image form. The face image should
conform to the ANSI/INCITS 385-2004 standard. The fingerprint
images should conform to the ANSI/INCITS 381-2004 standard with
500 dots per inch (dpi) scan resolution.
The two-fingerprint accuracy (or true accept rate
(TAR)) at 0.1% false accept rate (FAR) for the US-VISIT two fingerprint
matching system [4] is 99.6% while the best 2002 face recognition
TAR at 1% FAR was 90% using controlled illumination. When outdoor
illumination was used in 2002, the best TAR at 1% FAR was 54%.
Even under controlled illumination, which is not currently used
in US-VISIT, the error rate of face recognition is 25 times higher
than the two-fingerprint results using US-VISIT data that has
10 times lower FAR. If the case of uncontrolled illumination is
considered, this factor would be 115. This means that face recognition
is useful only for those cases where fingerprints of adequate
quality cannot be obtained.
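The quoted factors of 25 and 115 follow directly from the stated TAR figures, since the false reject rate is 1 - TAR:

```python
# Checking the error-rate comparison quoted above: the false reject rate
# (FRR = 1 - TAR) of face recognition relative to the US-VISIT
# two-fingerprint system, using the figures given in the text.

finger_frr = 1 - 0.996          # TAR 99.6% at FAR 0.1%
face_indoor_frr = 1 - 0.90      # TAR 90% at FAR 1%, controlled illumination
face_outdoor_frr = 1 - 0.54     # TAR 54% at FAR 1%, outdoor illumination

print(round(face_indoor_frr / finger_frr))   # 25
print(round(face_outdoor_frr / finger_frr))  # 115
```

Note that the comparison is conservative: the fingerprint figure was achieved at a FAR ten times lower than the face figures.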
FTE for fingerprints from real-life deployments
is around 2%; for iris it is between 0.5% and 9%.
In terms of iris data, although the performance is impressive,
it should be noted that no independent testing on databases of
millions has been undertaken to date. Iris recognition generates
a template with a large sequence of bits for comparison (the iris
code). This means that if the bit sequences from two irises are
uncorrelated, the probability of the number of bits that match
being significantly different from half of the total number of
bits is very small indeed. This statistical argument is used by
Iridian (the holder of iris IP) to argue for the superior performance
of this biometric. The difficulty is that there is little evidence
for the iris code being truly random (in the sense that there
is no significant correlation between iris codes from different
eyes).
2.4 Multimodal biometrics
As mentioned previously there is no large scale database
for multimodal biometrics (two or more distinct biometrics captured
under the same controlled conditions) although there is large
scale multibiometric (1-10 fingers) data that has been discussed
above.
There are no published data on the performance of multimodal
systems that combine two iris patterns, although there seems to
be little correlation between the two irises of an individual.
It is unclear whether combining these biometrics would significantly
increase performance. The logic of the argument is as follows:
Correlation effects exist: some people are significantly
worse (through physical characteristics or temperament) than the
mean in their ability to supply a biometric.
For this group of people the FNMR is (comparatively)
very high.
Use of a second biometric does not greatly reduce
the absolute value of the FNMR for these people.
For example:
Scenario 1 (random distribution of FNMR). In this
idealised scenario everyone is equally good (or bad) at producing
a biometric. Everyone therefore has an equal chance of producing
a false non-match. If the probability of a false non-match for this
biometric is p, and a second biometric with FNMR = q is used in
combination, then the combined FNMR is pq. For a 2% FNMR this
would imply that pq = 4 x 10^-4.
Scenario 2 (some people bad at presenting biometrics).
Suppose that 90% of the population present excellent biometrics
(FNMR ~0) and the remaining 10% present poor biometrics (FNMR = 20%).
The averaged FNMR is still 2% (0.1 x 0.2 + 0.9 x 0 = 0.02). If, however,
a second biometric is introduced, then the FNMR for 90% of
the population remains at ~0, whilst the FNMR for the remaining
10% is now 0.2 x 0.2 = 0.04, and the mean FNMR is only reduced to
4 x 10^-3 (0.1 x 0.04 + 0.9 x 0).
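The arithmetic of the two scenarios can be verified directly, using the probabilities stated above:

```python
# Verifying the two multimodal FNMR scenarios above.
# Probabilities are those stated in the text.

# Scenario 1: FNMR uniformly distributed; independent second biometric.
p = q = 0.02
print(round(p * q, 6))        # 0.0004, ie 4 x 10^-4

# Scenario 2: 90% of people with FNMR ~0, 10% with FNMR 20%.
mean_single = 0.9 * 0.0 + 0.1 * 0.2
mean_combined = 0.9 * 0.0 + 0.1 * (0.2 * 0.2)
print(round(mean_single, 6))    # 0.02: still a 2% average FNMR
print(round(mean_combined, 6))  # 0.004, ie 4 x 10^-3: only a fivefold gain
```

The same 2% average FNMR thus yields a tenfold worse combined rate when the errors are concentrated in a subgroup rather than spread uniformly.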
2.5 Effect of image quality
There is no standard measure of image quality for fingerprint,
iris or face. Standards exist that specify:
Recording equipment (eg FBI Appendix F&G guidelines
for fingerprint capture devices)
Guidelines for facial images (illumination, size
of image etc.)
Compression techniques (JPEG, WSQ etc)
Recent work by NIST on image quality of fingerprints has
shown that image quality has a large effect on performance statistics.
[see NIST 7110, figure 12, p23, ftp://sequoyah.nist.gov/pub/nist_internal_reports/ir_7110.pdf]
This study used US-VISIT data and assigned a measure of 1
for best quality and 8 for worst quality. As can be seen, performance
for quality 1 to 3 (top three plots) is very similar.
Another study by NIST [NIST 7151] used NIST's own image quality
assessment tool. This yielded similar conclusions. In this study
image quality 1 was excellent and 5 was poor. In this report NIST
developed a method to assess quality of a fingerprint that can
forecast matcher performance. This included an objective method
of evaluating quality of fingerprints. These image quality values
were then tested on 300 different combinations of fingerprint
image data and fingerprint matcher systems and found to predict
matcher performance for all systems and datasets. The test results
presented in the body of the report for US-VISIT POE data show
that the method is highly accurate.
Automatically and consistently determining quality of a given
biometric sample for identification and/or verification is a problem
with far-reaching applications. If one can identify low-quality
biometric samples, this information can be used to improve the
acquisition of new data and also reduce FTE. This same quality
measure can also be used to selectively improve an archival biometric
gallery by replacing poor quality biometric samples with better
quality samples. Weights for multimodal biometric fusion can be
selected to allow better quality biometric samples to dominate
the fusion.
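As a sketch of such quality-weighted score fusion, the toy example below (function name, scores and quality values all invented for illustration) combines two match scores so that the better-quality sample dominates:

```python
# Sketch of quality-weighted score fusion, assuming a per-sample quality
# value in (0, 1] is available. All names and figures are illustrative.

def fuse(scores_and_qualities):
    """Weighted mean of match scores, weighted by sample quality."""
    total_weight = sum(q for _, q in scores_and_qualities)
    return sum(s * q for s, q in scores_and_qualities) / total_weight

# (score, quality) pairs for, say, a fingerprint and a face sample:
samples = [(0.95, 1.0),   # high-quality fingerprint, strong match
           (0.40, 0.2)]   # poor-quality face image, weak match

print(round(fuse(samples), 3))  # 0.858: the better-quality sample dominates
```

An unweighted mean of the same two scores would be 0.675; weighting by quality keeps the fused score close to the trustworthy sample.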
The definition of quality can also be applied to other biometric
modalities and, with appropriate feature extraction, can be used to
assess the quality of any mode of biometric sample.
Image quality of a biometric is a function of a number of
factors, for example:
Changes in the physical biometric
Damage due to cuts, abrasion or other injury
XY position of fingers on reader
Angles of each finger tip relative to surface
of reader
Joint position causing changes in skin tension
and stretching of skin
Forces applied by finger in plane of reader surface
stretching skin
Torques applied by finger in plane of reader surface
stretching skin
Force applied by finger in Z direction compressing
ridges to reduce contrast (pressure too high) or providing insufficient
contact area (pressure too low)
Sensitivity of reader to skin condition (moisture
and skin oils)
Sensitivity of individual biometrics is discussed in the
Biometric Types section.
Image quality is also a function of the capture equipment.
The FBI has defined several criteria (FBI EFTS Appendix F and
G) to evaluate fingerprint capture devices that can be used within
its IAFIS system. These include signal to noise ratio, greyscale
linearity, grey level uniformity etc. In general there does not
appear to be much information as to how capture devices are kept
in calibration in the field. This is particularly the case for
other biometrics such as face and iris. Iridian-approved iris
cameras perform an on-camera quality check before the Iriscode
is generated on and sent from the camera.
2.6 Matching speed
Matching speeds are an issue for enrolment into large databases
with a high rate of enrolment where identification is required.
The likely timescale for processing the biometric confirmation
has important implications for:
The process flow during the enrolment appointment,
specifically the nature of the questions that could be asked at
the post-biometric-capture interview
The overall security of the system. On the
one hand, a view could be taken that an enrolment decision reached
with the applicant present in the enrolment centre is likely to
discourage attempts at repeat enrolment; on the other hand, it
has been suggested that non real time matching will give an opportunity
for extensive cross checking of applications and the option to
inform the relevant agencies in cases of suspected fraud (the
latter is the security point of view).
The nature (in terms of cost, complexity, size
etc) of the matching hardware
Assumptions for calculating times based on fingerprint only:
The time for the enrolment decision is based solely
on matching speeds, implying that access to the NIR database of
templates will not be limiting
Fingerprint matchers work at a rate of 1 x 10^9
matches/minute (ref supplier meeting) in a unit with a
fridge-freezer footprint
Each person will enrol 10 fingerprints
The NIR database of templates possesses 1 x 10^9
records (ie 1 x 10^8 enrolments with 10 fingerprints each)
Enrolments take place at an average rate of 50,000
per day (the UKPS peak rate in March 2004 was ~30,000 per
day (UKPS Annual Report, 2003-04))
To allow for peak loading a scale up factor of
5 is used
Enrolments take place over an 8h day (480 minutes)
Matches take place over an 8h day
Each match attempt involves 100% penetration of
the database to eliminate potential binning errors
The calculations and rationale are set out below:
For real time matching, it is essential that the peak rate
of enrolment applications does not exceed the rate of enrolment
match results, otherwise a queue would build up.
This "no queuing stipulation" means that the matching
capability of the system must equal peak demand over a reasonable
time period. So given that up to 50,000 enrolment applications
will take place per day, this could be averaged out to 100 enrolments
per minute.
Each application will involve 10 x 10^9 match attempts
(10 fingerprints, each checked against a database of 1 x 10^9 records)
Therefore the number of matches per minute will
be 100 x 10 x 10^9, ie 10^12
Given that matchers work at a rate of 1 x 10^9
matches/minute, this implies that the system will need 1,000 matchers
to keep up with peak demand.
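The sizing arithmetic above can be reproduced directly; all figures below are the assumptions stated in the text:

```python
# Reproducing the real-time matcher sizing arithmetic, using the
# assumptions stated in the text.

enrolments_per_min = 100            # 50,000 per day over an 8h day, as in the text
fingerprints_per_person = 10
database_records = 10**9            # NIR template records
matcher_speed = 10**9               # matches per minute per matcher

matches_per_min = enrolments_per_min * fingerprints_per_person * database_records
matchers_needed = matches_per_min // matcher_speed
print(matchers_needed)              # 1000

# Decision time per applicant: whole database on 1 matcher, or split 10 ways.
single_matcher_minutes = fingerprints_per_person * database_records / matcher_speed
print(single_matcher_minutes)       # 10.0 minutes (maximum)
print(single_matcher_minutes / 10)  # 1.0 minute (minimum, 10 matchers each)
```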
Note that during any given time interval, it is possible
that the maximum rate of enrolment applications could exceed the
sustainable peak of 100 per minute. For example, 2 enrolment applications
could arrive in a second. However, assuming that the enrolment
appointments will be scheduled at 20 minute intervals and that
there will be 2,000 enrolment booths, it will not be possible
to sustain a peak enrolment rate above 100 per minute. Any transient
increase over a short time period will inevitably be smoothed
over a 20 minute time interval.
The maximum decision time is determined from the time required
for a single matcher to check one person's fingerprints against
the entire database.
Since each enrolment application will involve 10 x 10^9 match
attempts, and matchers work at a rate of 1 x 10^9 matches/minute,
the match decision time for each applicant is 10 minutes. This
is deemed to be the maximum matching time, since each applicant is
allocated a single matcher to process the matching checks.
The minimum decision time is determined from the time required
for a single matcher to check one person's fingerprints against
the entire database, divided by the number of matchers available
per person.
The maximum decision time of 10 minutes is based on using
a single matcher per applicant. However, since it has been shown
that 1,000 matchers will be required to avoid queue build up and
that 100 enrolment applications will arrive per minute, it is
feasible that each individual application could be divided amongst
10 matchers. This would give a minimum matching decision time
of 1 minute. In this scenario, each matcher sees a portion of
the total database (one tenth), whereas in the maximum decision
time scenario, each matcher checks against the whole database.
Note that there might be design issues that make
one of these scenarios preferable; for now, however, the
decision time for real time matching can be estimated to be between
1 and 10 minutes.
For non real time matching, a queue can be allowed to build
up.
Non real time matching allows matching checks to take place
in intervals when enrolment is not taking place. Hence in the
absence of fresh demand, a queue that had been built up can be
eliminated. The length of the queue that can be allowed to build
up depends on the ratio of time that enrolment centres are open
to the enrolment downtime. Hence there are two options for non
real time matching: 24 hour turnaround (where enrolments are processed
at a rate of one third the application rate); and a 1 week turnaround
(where enrolments are processed at a rate of one quarter the application
rate).
However, the total number of matchers required is the product
of the number of enrolments per unit time and the amount of time
(in the same units) that a matcher spends on each enrolment. This
stipulation prevents the build up of a queue of applicants, which
could only be dealt with when enrolments were no longer occurring,
ie in a scenario where enrolment decisions are not real time.
No. of enrolments per min = (enrolments per day) x (scale
up factor) / (minutes per day) = 520 per minute
Time (minutes) required for one person to enrol 10 fingerprints
using a single matcher = (database size) x (no. of fingerprints) / (matching
speed) = 9.2 min
Number of matchers required = (matching time) x (no. of
enrolments per minute) = 4,784
Number of matchers available per person per minute (note:
all 4,784 matchers could be put on the job, giving a decision
time of 0.12 secs, or just 1 matcher, giving 9.2 min, so the decision
time varies between these extremes) = total number of matchers
required / no. of enrolments per min = 9.2
Decision time per enrolment = time required to enrol 10
fingerprints using a single matcher / number of matchers available
per person = 1 min
For real time matching, the number of matchers required is
a function of the peak enrolment rate and the time required for
1 matcher to search 1 set of records against the entire database.
Once the number of matchers required is calculated, the average
decision time can be calculated by dividing the time required
for 1 matcher to search 1 set of records by the number of matchers
available per person per unit time. Using the assumptions stated
above, we have deduced that:
4,784 matchers will be required for fingerprint
scanning
Average decision time for each enrolment applicant
is 1 min
Note that the number of matchers required is sensitive to
the "peak scale up factor" and the number of enrolments
per day. The former has been estimated at 5 for illustrative purposes,
but modelling of likely demand forecasts would provide a more
realistic number. Note also that the costs of real time biometric
decision (~5000 match engines), should be set against the benefits
(improved security, simpler processes). We could assess the options
of doing this against overnight batch type matching (the cost
being 1,594 matchers, keeping other assumptions the same).
Note also that iris matching is not included in this analysis.
As stated previously, no iris database of millions is known to
exist. Iridian matchers run at a rate of 0.5 million to 1 million
matches per second per server.
The speed at which matching can be achieved is also dependent
on the algorithm type. For the US IDENT system a throughput of
1,035,000 matches per second was achieved, although special purpose
hardware is required and some filtering is used to reduce the
number of fingerprints that need to be examined in detail. Without
filtering a figure of 734,000 matches per second was achieved.
In general minutiae-based templates have a higher match rate than
pattern-based ones.
Matching speeds for verification scenarios are generally
not an issue. Matching can occur on the card, in the card reader
or on the database. In each case only one pair of records is being
matched.
2.7 Spoofing
Spoofing is the practice of substituting a false biometric
in place of the real one[2]2.
It is normally attempted by the following approaches:
Studies have shown that biometrics can be "spoofed"
to fool a biometric reader. However, once more, this must be placed
in context. These studies, often conducted in laboratory conditions,
only sought to see if it were possible and did not attempt to
see it would be possible to conceal such attempts if you were
attempting to nrol or verify biometrics in the prescence of a
trained operator. That is a very different undertaking.
In practice, it would be very difficult to spoof biometrics in
front of a trained operator who uses technology that incorporates
"liveness detection" measures, which identify if the
biometric presented is an actual biometric or, in fact, an attempt
to copy a biometric. Such studies also do not consider the fact
that such attempts would encounter the other, non-biometric security
measures which have been previously mentioned.
The Identity Cards Programme is also working to improve current
methods to prevent spoofing with established experts from the
Communications Electronics Security Group (CESG), the National
Physical Laboratory and independent specialists. Resistance against
spoofing will also form part of biometric testing of any technologies
procured.
1. Re-activating a latent image from a previous
enrolment
2. Use of a false biometric: impressions in
a compliant material, e.g. polymer coatings on a finger [ref], pictures
of irises on false eyes [ref]
3. Use of a biometric from another individual (alive
or dead)
2.8 Types of Biometrics
This section outlines the most widely used, or most widely
discussed, biometrics.
For each of these biometrics, a brief overview is given of
the characteristics that are measured, the devices used to capture
the biometric and the features that are extracted, together with
some of the key advantages and disadvantages of these systems.
Later sections describe some other biometrics technology methods
that are also available but are less proven by large scale testing.
2.8.1 Fingerprint
Fingerprint recognition is one of the best known biometric
techniques, because of its widespread application in forensic
sciences and law enforcement. Fingerprints are one of the few
biometrics that can be "left behind" and therefore gathered
in a person's absence.
The basic characteristics of fingerprints do not change and
are usable beyond a given age (12 years) [ref Cogent]. Fingerprints
are, however, susceptible to wear and damage due to occupation,
hobbies, injury, burns, disease etc.
Fingerprint technology is widely established. Fingerprints
have long been associated with law enforcement, and commercial
automated fingerprint identification systems (AFIS) have been
available since the early 1970s. Current applications of fingerprint
biometrics include:
Criminal and civil AFIS
Physical and logical access
Fraud prevention in entitlement programmes
A variation on fingerprint verification is "palm print"
verification which relies on physical features of the palm including
line features, wrinkle features, delta point and minutia features.
Palm print is not as widely tested as fingerprint.
FINGERPRINT ACQUISITION
CHARACTERISTICS
A fingerprint is a complex combination of patterns formed
by ridges. The Henry classification system derives from the pattern
of ridges that concentrically pattern the hands, toes, feet and,
in this case, fingers (the patterns are called arches, loops and
whorls). The most distinctive characteristics are the minutiae,
the smallest details found in the ridge endings and bifurcations
(where a ridge splits into two). Most fingerprint identification
systems rely on the hypothesis that the uniqueness of fingerprints
is captured by these local ridge structures and their spatial
distributions.
Fingerprint technology uses the impressions made by the unique
ridge formations or patterns found on the fingertips. Livescan
technologies use "frustrated total internal reflection"
to capture details of distinct ridges on fingertips in a digital
image. A clean finger is placed on a coated platen that is typically
glass or hard plastic and light is scanned across the platen from
below. Where a ridge is present and close contact with the platen
is obtained, the light rays do not exit the top of the platen
and are scattered back into the platen and onto a detector. Where
a valley is present, the light is reflected in a focussed ray
and a strong signal is detected (refs 2, 3, 4). In most optical
devices, a charge-coupled device converts this image of dark
ridges and bright valleys into a digital signal. Thus a high contrast
binary image is produced by taking the average grey level pixel
value and processing every pixel above this level as a binary
"one". Every pixel that is below this average level
is processed as a "zero". Several steps are required
to convert a fingerprint's unique features into a template; this
process, known as feature extraction, is the basis for the various
vendors' proprietary algorithms
(refs 5, 6, 7). For example, the fingerprint may be classified
into whorls, loops or arches. Individual minutiae features such
as ridges, forks and intersections can also be identified and
their relative position captured and plotted by the application
software. This data is then saved in a template for use in future
comparisons or matches. Most software algorithms used to extract
minutiae also compensate for minor deviations in the position
of the finger on the optical scanning device. The process is usually
one way, in that the template cannot be used to reconstruct the
fingerprint.
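The thresholding step described above (take the average grey level; pixels above it become a binary "one", pixels below a "zero") can be sketched in a few lines. This is a simplified illustration only; vendors' actual feature-extraction algorithms are proprietary and far more sophisticated.

```python
# Minimal sketch of the binarisation step: threshold at the
# image's average grey level.

def binarise(image):
    """image: 2-D list of 8-bit grey values (0-255).
    Returns a binary image of the same shape."""
    pixels = [p for row in image for p in row]
    avg = sum(pixels) / len(pixels)
    return [[1 if p > avg else 0 for p in row] for row in image]

# Toy 3x3 patch: bright valleys (high values) map to 1,
# dark ridges (low values) map to 0.
patch = [[200, 40, 210],
         [190, 35, 205],
         [195, 30, 200]]
print(binarise(patch))  # [[1, 0, 1], [1, 0, 1], [1, 0, 1]]
```

In a real system this binary image would then be thinned and searched for minutiae (ridge endings and bifurcations) before a template is built.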
Fingerprints can either be flat or rolled. A flat print captures
an impression of the central area directly below the nail; a rolled
print captures details of ridges on both sides of the fingertip.
A slap captures multiple fingers (usually 4) simultaneously which
are then segmented with segmentation software.
FINGERPRINT ACQUISITION
DEVICES
Most common technologies include:
Optical scanners are the most commonly used for AFIS applications
(and enrolments for multiple fingers) because of their large area,
high definition capture capability. Scanning fingerprints optically
can be prone to error if the platen has a build-up of dirt, grime,
or oil, producing leftover prints from previous users (latent
prints). Severe cases of latent prints can cause the superimposition
of two sets of fingerprints and image degradation. Enrolments
for multiple fingers are carried out on optical systems.
Capacitance sensors are frequently used for single finger
applications (eg verification) due to their smaller area. The
ridges and valleys of a fingertip create different charge distributions
when in contact with a CMOS chip grid. This charge variation can
be converted into an intensity value of a pixel via a number of
DC or AC signal processing techniques.
Ultrasound scanning (ref 9) is designed to penetrate dirt
and residue on the platens and has not been demonstrated in widespread
use to date. An ultrasonic beam is scanned across the finger surface
to measure the depth of the valleys directly from the reflected
signal.
Thermal imaging (ref 10) uses a sensor manufactured from
a pyroelectric material. Thermal imaging measures the temperature
change due to the ridge-valley structure as the finger is swiped
over the sensor. Since skin is a better thermal conductor than
air, contact with the ridges causes a noticeable temperature drop
on a heated surface. The technology is claimed to overcome the wet
and dry skin issues of optical scanners; however, the resultant
images tend to have a poorer dynamic range (not rich in grey values).
FINGERPRINT COMPRESSION
AND TEMPLATE
ALGORITHMS
A typical finger has an image area of approximately 1 inch
x 1 inch and is recorded at 500 dpi with 8-bit greyscale. Compression
techniques such as WSQ (wavelet scalar quantisation) are recommended
over JPEG (ref NIST) and can offer compression ratios of up to 12.9:1.
Templates are generated from the WSQ or JPEG image using proprietary
software. Templates will be minutiae or pattern based and range
in size from 250 bytes to 1,000 bytes depending on which
vendor's algorithm the system uses. Minutiae algorithms are used
primarily for AFIS applications due to their higher processing
speed and ability to cope with rotated fingers (a consequence
of latent print capability). Pattern-based algorithms are used
primarily for physical and logical access applications where smaller,
cheaper sensors are used and therefore higher information density
is required.
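The storage figures above can be checked with simple arithmetic. The sketch below uses only the values quoted in the text (500 dpi, 8-bit greyscale, a 12.9:1 WSQ ratio); actual compressed sizes vary from image to image.

```python
# Worked example of the image storage figures quoted above.

width_px = height_px = 500       # 1 inch at 500 dpi
bytes_per_pixel = 1              # 8-bit greyscale
raw_bytes = width_px * height_px * bytes_per_pixel
wsq_bytes = raw_bytes / 12.9     # 12.9:1 compression ratio from the text

print(raw_bytes)          # 250000 bytes uncompressed
print(round(wsq_bytes))   # 19380 bytes, roughly, after WSQ
```

Note the contrast with templates: even the compressed image is around 19 kilobytes, whereas the extracted template is only 250 to 1,000 bytes, which is why templates rather than images are used for high-volume matching.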
FINGERPRINT ADVANTAGES
Fingerprints are persistent: Fingerprints almost always remain
the same throughout a person's lifetime except for accidental
damage.
Fingerprints are unique: Every human has a unique set of
fingerprints [reference]
Fingerprints are one of the most mature biometrics: Fingerprints
have been widely studied and researched over the years and have
been successfully used in most manual and automated methods.
The standards for interoperability of fingerprint systems
are also the most mature biometric interchange standards. Also,
despite the fact that 1 to 3% of people may find it difficult
to use a fingerprint system reliably, fingerprints are the biometric
with the largest population base in use worldwide.
FINGERPRINT DISADVANTAGES
Dirt on the finger or injury can distort the image: An image
of the fingerprint is captured by a scanner, enhanced, and converted
into a template. During image enhancement the definition of the
ridges is enhanced by reducing image noise. Sources of noise in
reflective technologies arise because the reflected light is a
function of skin characteristics. If the skin is too wet or too
dry, the fingerprint impression can be saturated or faint and
difficult to process. In addition noise may be caused by dirt,
cuts, creases, scars or worn fingertips.
Contact system: In most current systems, the process of capturing
the fingerprint biometric involves contact of the fingerprint
pattern with an input device. Because of this, the actual pattern
that is sensed may be elastically distorted during the acquisition
of the pattern causing the possibility that impressions of the
same finger may be quite different. There are some non-contact
fingerprint sensors available that avoid the problems related
to touch sensing, but these have yet to be proven on a large scale
[ref digital descriptor].
Suppliers have proprietary algorithms and matching hardware.
FINGERPRINT ROBUSTNESS
AND VULNERABILITIES
As discussed, if a user leaves an oily latent image on the
scanner, a false rejection may occur or someone with a fine brush
and dry toner could "lift" fingerprints with adhesive
tape. Latent prints can also be recovered by breathing onto the
sensor window. Gelatin or carbon-doped silicon rubber can be used
to mould finger templates from a wax imprint (ref 19). Some vendors
include "liveness tests" to help guard against spoofing, but this
is likely to remain a developmental area.
2.8.2 Face
Facial recognition is one of the oldest biometrics. Manual
verification of a person against their photograph has been around
for many years. It is also a non-intrusive method for capturing
a biometric.
Most systems to date have focussed on 2D images. Emerging
techniques include 2.5D (multiple 2D images) and 3D (true 3D images).
FACIAL IMAGE
ACQUISITION CHARACTERISTICS
Facial recognition technology identifies people by the sections
of the face that are less susceptible to alteration: the
upper outlines of the eye sockets, the areas around the cheekbones,
the sides of the mouth and other prominent skull features.
FACIAL IMAGE
ACQUISITION DEVICES
Images can be recorded from static cameras or video cameras
in the visible spectrum. Emerging technologies also make use of
the NIR spectrum to mitigate for uncontrolled background illumination
[ref A4Vision].
FACIAL COMPRESSION
AND TEMPLATE
ALGORITHMS
Two primary methods are used by facial recognition systems
to create templates: Local Feature Analysis (LFA) and the Eigenface
method. (Other facial recognition technologies, based on thermal
patterns below the skin, are not yet commercially available (ref 11).)
LFA measures the relative distances between different landmarks
on the face to create a facial biometric template, or faceprint.
LFA uses many features from different regions of the face, and
also incorporates the relative location of these features. The
extracted (very small) features are building blocks, and both
the type of blocks and their arrangement are used to identify/verify.
Small shifts in a feature may cause a related shift in an adjacent
feature and the technology can accommodate these changes in appearance
or expression (such as smiling). LFA was patented by Visionics
corpnow Identix Incorporated (ref 3). Since LFA does not
provide a global representation of the face, it is prone to difficulties
when the head is tilted away from the frontal pose by more than
about 25 degrees horizontally or more than about 15 degrees vertically11.
The Eigenface method looks at the face as a whole and is
patented at Massachusetts Institute of Technology (MIT). This
method uses 2D global grayscale images that represent distinctive
characteristics of a facial image.
The vast majority of faces can be reconstructed by combining
features of approximately 100-125 eigenfaces. Upon enrollment,
the subject's eigenface is mapped to a series of numbers (coefficients)
that form the basis of the template.
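The Eigenface idea above can be sketched as a projection: the face image, flattened to a vector and centred on a mean face, is expressed as coefficients against a fixed basis of eigenfaces, and those coefficients form the template. The basis vectors and image below are toy values invented for illustration; real systems derive roughly 100-125 eigenfaces from a training set using principal component analysis.

```python
# Illustrative sketch of Eigenface template generation.

def project(face, mean_face, eigenfaces):
    """Return the coefficient vector (the template) for a face,
    given a mean face and a list of eigenface basis vectors."""
    centred = [f - m for f, m in zip(face, mean_face)]
    # One coefficient per eigenface: the dot product with that basis vector
    return [sum(c * e for c, e in zip(centred, ef)) for ef in eigenfaces]

mean_face = [100, 100, 100, 100]          # toy 4-pixel "images"
eigenfaces = [[0.5, 0.5, -0.5, -0.5],     # toy orthonormal basis
              [0.5, -0.5, 0.5, -0.5]]
face = [110, 120, 90, 80]
print(project(face, mean_face, eigenfaces))  # [30.0, 0.0]
```

Matching then reduces to comparing two short coefficient vectors rather than two full images, which is what keeps facial templates down to the order of 83 to 1,000 bytes.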
Two other methods used in facial recognition systems are
neural network and automatic face processing. Neural networks
employ an algorithm to determine the similarity of the unique
global features of live versus enrolled or reference faces, using
as much of the facial image as possible. Automatic Face Processing
(AFP) uses distances and distance ratios between easily acquired
features such as eyes, end of nose, and corners of mouth. Though
overall not as robust as eigenfaces, feature analysis, or neural
network, AFP may be more effective in dimly lit, frontal image
capture situations.
Facial recognition template sizes are typically 83 to 1,000
bytes (ref 13).
FACIAL RECOGNITION
ADVANTAGES
Convenience and acceptance: Face identification is one of
the most widely publicly accepted biometrics since it is not intrusive.
It is relatively easy to perform face recognition and moderately
convenient. Users tend to find it highly acceptable to be identified
by their face since this is the most traditional way of identification.
Has potential for long distance recognition and covert identification
from surveillance cameras.
Has the potential to be applicable to existing image databases.
FACIAL RECOGNITION
DISADVANTAGES
Imaging conditions: The lighting of the face can have large
effects on the appearance of the face in an image and therefore
on the performance statistics.
Appearances naturally alter with age.
Although the passive nature of image capture renders facial
recognition easy to use, this is also the reason it is disliked:
the face biometric is able to operate without the user's cooperation,
since a CCTV camera need only capture a picture for the technology
to generate a template. However, the technology is much better able
to identify people who are motivated to cooperate with the system.
FACIAL RECOGNITION
ROBUSTNESS AND
VULNERABILITY
Facial recognition systems tend to be less accurate than
fingerprint systems [ref]. Impacts on performance and difficulties
with acquiring facial images arise due to effects such as quick
changes in facial expression, unknown geometric location of the
face upon presentation, imaging conditions such as lighting, and
compression artefacts.
2.8.3 Iris
Iris recognition measures the iris pattern in the coloured
part of the eye, although the iris colour has no role to play
in the biometric. Iris patterns are formed randomly at birth and
are the result of muscle tears as the eye forms [ref Iridian
and Daugman]. As a result, iris patterns from the left and right
eyes of the same individual are different, as are the patterns
from identical twins (ref 18). Iris recognition has been developed
primarily by Iridian (formerly IriScan) which holds over 200 patents.
IRIS ACQUISITION
CHARACTERISTICS
Unique complex patterns of striations, freckles and fibrous
structures in the human iris stabilise within one year of birth
and remain constant throughout a lifetime. The iris can have more
than 250 distinct features compared with 40 or 50 points of comparison
in fingerprints (ref 14). John Daugman developed a set of mathematical
formulae for iris recognition at Cambridge University in 1993
(ref 17). The patents for the algorithms are owned by Iridian
Technologies and are the basis for current iris recognition systems
and products. The concept patent expires within the next two years.
IRIS ACQUISITION
DEVICES
Systems require a camera to record the iris. Cameras can
capture both eyes (binocular) or a single eye (monocular). The
eye (or eyes) is initially located; the camera then zooms in and
focuses on the eye itself; and the iris is then located along with
the pupil boundary. Obstructed areas (eyelashes, eyelids) are
identified, and the system then essentially breaks the image into
circular grids, with each area analysed for unique patterns (using
polar co-ordinate transforms). Feature vectors may be compared by
Hamming distance, allowing for rotations.
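The Hamming-distance comparison mentioned above can be sketched as follows: the fractional Hamming distance between two bit strings is minimised over small circular shifts to allow for rotation of the eye. The bit strings and shift range here are toy values; real IrisCode templates run to hundreds of bytes.

```python
# Sketch of iris-code comparison by fractional Hamming distance.

def hamming(a, b):
    """Fraction of positions at which two equal-length bit lists differ."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

def iris_distance(code_a, code_b, max_shift=2):
    """Minimum Hamming distance over small circular rotations of code_b."""
    rotations = (code_b[s:] + code_b[:s]
                 for s in range(-max_shift, max_shift + 1))
    return min(hamming(code_a, r) for r in rotations)

a = [1, 0, 1, 1, 0, 0, 1, 0]
b = [0, 1, 1, 0, 0, 1, 0, 1]   # 'a' circularly shifted by one place
print(iris_distance(a, b))      # 0.0 - a perfect match after rotation
```

A decision threshold on this distance (low values accepted as a match) is what turns the comparison into a verification or identification result.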
IRIS COMPRESSION
AND TEMPLATE
ALGORITHM
The majority of cameras apply the IrisCode algorithm within
the camera itself. Raw images of the iris are difficult to obtain
from Iridian-approved "proof positive" iris cameras. IrisCode
template sizes are 256 to 688 bytes.
IRIS ADVANTAGES
Uniqueness: Iris development during gestation results
in a uniqueness of the iris even between multi-birth children.
These patterns are stable throughout life.
Non-invasive: No direct contact between the user and
the camera.
Use of infrared band avoids uncomfortable visible illumination
and improves contrast of iris, as well as seeing through some
types of contact lens.
Facial images could also be captured at the same time as
iris images are captured.
IRIS DISADVANTAGES
Image capture: Contact lens wearers or people with diseases
such as glaucoma may find it difficult to pass an iris scan.
Image capture: Correct illumination is required
for good quality image capture.
IP issues: fundamental patents owned by one
company, Iridian.
No large database of irises to assist in benchmarking
systems.
Extent and nature of exception cases needs study.
IRIS ROBUSTNESS
AND VULNERABILITY
An out-of-focus camera, mirrored sunglasses, contact lenses
(patterned etc), glass eyes, medical conditions such as aniridia
and other such barriers to recognition may introduce system failures.
2.8.4 Signature
Dynamic signature verification is a behavioural biometric
and is the automated method of examining an individual's signature.
This technology examines characteristics such as speed, direction,
pressure of writing, the time that the stylus is in contact with
a digitised platen, the total time to make the signature, and
where the stylus is raised from and lowered onto the platen.
Signature recognition tends to be used more for document
security than network log-ins.
2.8.5 Voice
Voice recognition is a reasonably common biometric technology
[ref companies VeriVoice Motorola Ciphervox, Veritel corp voicecrypt2.01]
for access control systems. Voice verification considers the quality,
duration, pitch and loudness of the signal compared to previously
enrolled characteristics. Speaker verification technologies can
be divided into two major categories:
1. Text dependent: where the system associates
a sentence or password, possibly different, with each user.
2. Text independent: where the user is not
requested to say the same sentence during each access.
Voice recognition can be affected by environmental factors
such as background noise. Additionally, there is a concern that
a voice could be recorded and played back for identification.
2.8.6 Hand Geometry
Hand or finger geometry utilises an automated measurement
of many dimensions of the hand and fingers. Only spatial geometry
is examined as the user places his or her hand on the sensor surface.
A digital camera captures two hand silhouettes. The hand needs
to be aligned to posts, which may require some practice and good
hand mobility. With a typical equal error rate (EER) of 10^-3
(one in a thousand), it is usually combined with a PIN or card.
2.8.7 Vascular patterns
Vascular pattern technology uses infrared light to produce
an image of the vein pattern in the face, back of hand, or on
the wrist. Hand vein pattern readers measure the position of veins
on the back of an individual's hand. Technical issues include
the distance of veins from the surface of the person's skin, and
the dilation or contraction of the vessels over time (due to aging
or simply temperature changes).
2.8.8 Retina
Retinal scans measure the blood vessel patterns in the back
of the eye. Users tend to perceive retinal scanning as intrusive
and it has not gained popularity with end users. The device involves
a light source that shines into the eye of the user who must be
standing very still close to the device (within a few inches),
as the compact sensor can see a significant part of the retina
only from a very short distance. This makes the technique slow
and unergonomic. This biometric may have the potential to reveal
more than just the identity of the user, since patterns may change
with certain medical conditions e.g. pregnancy, high blood
pressure, AIDS.
2.8.9 DNA
This technique takes advantage of the different biological
pattern of the DNA molecule between individuals. The molecular
structure of DNA can be imagined as a zipper with each tooth represented
by one of the letters A (Adenine), C (Cytosine), G (Guanine),
T (Thymine) with opposite teeth forming one of two pairs, either
A-T or G-C. The information in DNA is determined by the sequence
of letters along the zipper and is the same for every cell in
the body. The main concerns are the costs, ethical issues, practical
issues and acceptability of the technology since DNA testing is
neither real time nor unobtrusive.
2.8.10 Gait
The use of an individual's walking style or gait to determine
identity. It is attractive because it requires no contact. Psychological
studies support the view that gait can be modelled and is unique.
It can be used to monitor people without their cooperation.
2.8.11 Ear Recognition
Ear recognition uses mainly two features:
The shape of the ear: ear geometry. This
technology utilises the fact that the shape and size of the ears
are unique characteristics of an individual.
The canal of the ear which returns a specific
echo: otoacoustic emissions.
While ear geometry has been used by police to identify criminals,
otoacoustic emissions are currently the subject of academic research.
Tests carried out by university researchers indicate that if clicks
are broadcast into the human ear, a healthy ear will send a response
back (ref 15). These are called otoacoustic emissions.
2.8.12 Keystroke
Keystroke dynamics is an automated technique of examining
a user's fluctuating typing dynamics. People move their fingers
around the keyboard in precise, yet irregular, timing patterns
during log ins without even realising it. Characteristics such
as speed, pressure, total time to type a password and the time
between hitting different keys are measured. The algorithms are
still being developed to improve robustness and distinctiveness.
NetNanny Software Inc bases a keystroke biometric on patented
algorithms originally developed at Stanford University and measures
the timings between keystrokes. There are issues around personal
privacy in the commercial use of keystroke dynamics, such
as applications to monitor the hourly progress of employees.
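The characteristics above (speed, total time, intervals between keys) can be sketched as a simple timing-profile comparison. The key-press times and the acceptance threshold below are invented for illustration; commercial products such as those based on the Stanford-derived algorithms use far richer statistical models.

```python
# Hypothetical sketch of keystroke-dynamics verification.

def timing_profile(press_times):
    """Intervals (in seconds) between successive key presses."""
    return [b - a for a, b in zip(press_times, press_times[1:])]

def mean_abs_difference(enrolled, attempt):
    """Average absolute difference between two timing profiles."""
    return sum(abs(e - a) for e, a in zip(enrolled, attempt)) / len(enrolled)

# Key-press timestamps for the same password typed twice (invented data)
enrolled = timing_profile([0.00, 0.18, 0.35, 0.61, 0.74])
attempt  = timing_profile([0.00, 0.20, 0.34, 0.63, 0.75])

# Accept if the typing rhythm is close to the enrolled profile
print(mean_abs_difference(enrolled, attempt) < 0.05)  # True
```

Because the measurement happens transparently during a normal log-in, this is also the property that raises the privacy concerns mentioned above.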
2.8.13 Other
Body odour (This technique is under development
and relies on an individual's distinctive smell from chemicals
known as volatiles)
2.9 Market Structure
The supply chain for biometrics comprises:
Biometric hardware providers
Biometric software providers
Biometric hardware and software providers
In terms of individual biometrics, fingerprint and face dominate
the market in terms of supplier numbers. There are hundreds of
fingerprint companies although only 4-5 AFIS suppliers. The rest
of the fingerprint companies are primarily logical and physical
access companies, of which about 10 are well known names. These
10 also offer other biometrics such as face. In terms of large
area optical capture devices there are up to 10 companies that
offer solutions. Capacitance "flat" capture chips are offered
by approximately 5 suppliers, some of which are fabless. There
are hundreds of companies that then package these chips in a
variety of formats: USB readers, PCMCIA cards, standalone, combined
with a card etc. Some of the larger AFIS suppliers are able to
act as the prime contractor for medium scale biometric projects.
The face market has fewer companies, with tens offering 2D
solutions and approximately 10 offering 3D solutions.
The iris market is effectively controlled by Iridian, with
approximately 4-5 companies offering Iridian-approved ("proof
positive") cameras. Another company, Iritech, is developing
its own iris solutions. Other iris companies offer access control
and border control solutions.
The other biometrics are generally represented by a small
number (<10) of companies, with the possible exception of finger
and voice.
3. BIOMETRIC
TRIALS
The UKPS Biometric Enrolment trial was governed
by a Project Board with representatives from the contractors,
Atos Origin, UKPS and the Identity Cards Programme and also Dr
Tony Mansfield from National Physical Laboratory (NPL) who advised
on the experimental design. The trial final report was reviewed
by those close to the trial within the Identity Cards Programme
and also by Professor Paul Wiles, the Home Office's chief scientist
and by Dr Tony Mansfield of NPL.
Dr Mansfield was co-author of the earlier feasibility
study on the use of biometrics in an entitlement card scheme,
referred to as the "NPL feasibility study". There was no formal
project structure to oversee its production.
The following note shows Dr Tony Mansfield's comments on the
trial report. These were in general very specific comments on
the text of the report, which were incorporated into the final
version of the report.
[2] Letter to Peers after Lords Committee: Can
biometrics be forged or "spoofed"?