How To Create Database For Face Recognition In Matlab

In this tutorial, I will show you how to create a database for face recognition in Matlab.

A database is a collection of images with associated metadata. In our case, the metadata is simply the name of each image.

The first step is to load our image files into Matlab. We can do this by reading the image files in one at a time and collecting them, together with their names, into a single structure.
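As a minimal sketch of that step, the snippet below reads every JPEG in a folder and stores each image together with its file name in a struct array; the folder name faceDB and the .jpg extension are assumptions for illustration.

% Build a simple face database: each entry holds an image and its name (the metadata)
dbFolder = 'faceDB';                          % assumed folder containing face images
files    = dir(fullfile(dbFolder, '*.jpg'));  % assumed JPEG files

faceDB = struct('name', {}, 'image', {});     % empty database
for k = 1:numel(files)
    img = imread(fullfile(dbFolder, files(k).name));
    faceDB(k).name  = files(k).name;          % metadata: the image name
    faceDB(k).image = img;                    % the image itself
end

save('faceDB.mat', 'faceDB');                 % persist the database for later use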


In today’s information technology world, security for systems is becoming more and more important. Authentication
plays a major role wherever data or information is exchanged: the secrecy and privacy of the data being transferred
must be preserved, and the data should be accessible only to authorized persons. Authentication is the process of
establishing someone's identity so that he or she can access a particular application or data set. It is the act of
confirming that something is what it claims to be, much like showing an ID card to gain access to an area restricted
to particular persons. The objective of authentication using face detection is to ensure the security of the
information or data being shared by processing and comparing the unique structure of a person's face in order to
authenticate them.
Keywords: Authentication, Matrix, Biometric.


I INTRODUCTION
Authentication can be divided into three types:
1) Authentication using something we can remember, like a password, PIN or any other code.
2) Authentication using a physical thing we can carry, like a swipe card, token or key.
3) Authentication using something we possess within us, that is, our biological characteristics, which is called
biometrics.
- Passwords, PINs & codes can be forgotten or hacked.
- Swipe cards, tokens & keys can be lost or stolen.
- Our biological characteristics provide a secure, convenient & unique method of authenticating information.


1.1 Biometric Authentication
Biometric identification utilizes physiological and behavioral characteristics to authenticate a person’s identity. The
term Biometrics is usually associated with the use of unique physiological characteristics to identify any individual.
Biometric authentication refers to the identification of humans by their characteristics. The most common application
of biometrics is security. Biometric authentication can be further categorized on the basis of physiological versus
behavioral characteristics.
Biometric authentication requires comparing a registered biometric sample against a newly captured biometric
sample (captured during a login). Registering a sample is a three-step process, which is followed by the comparison itself:
- CAPTURE
- PROCESS
- ENROLL
During the capture step, a raw biometric sample is captured by a sensing device such as a fingerprint scanner or video
camera. The next step is to extract the distinguishing characteristics from the raw biometric sample and convert them into a processed mathematical representation.

During the enroll step, the processed sample (the mathematical representation of the biometric) is stored / registered in a storage medium for future comparison during authentication.
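As a rough sketch of capture, process and enroll in MATLAB, the snippet below treats a resized grayscale crop as the "processed" representation and saves it with an identity label; the file names, the 64x64 size and the simplistic feature extraction are assumptions for illustration, not a real biometric algorithm.

% Hypothetical enrollment sketch: CAPTURE -> PROCESS -> ENROLL
raw = imread('new_user.jpg');                  % CAPTURE: assumed raw face image

gray     = rgb2gray(raw);                      % PROCESS: simplify the raw sample
template = double(imresize(gray, [64 64]));    % fixed-size numeric representation
template = template(:) / norm(template(:));    % toy feature vector, not a real algorithm

db(1).name     = 'new_user';                   % ENROLL: assumed identity label
db(1).template = template;
save('enrolled_templates.mat', 'db');          % registered sample for future logins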
Some of the common physical characteristics that may be used for identification include:
- Fingerprints
- Palm prints/hand geometry
- Retinal scan
- Face recognition
- Iris recognition, etc.
Some of the common behavioral characteristics include:
- Signature
- Voice recognition
- Keystroke pattern, etc.


A biometric system works by capturing biometric information at the time of authentication and comparing it with the
recorded/stored information held in the memory of the device.


Out of all the various physical characteristics available, the face is one of the more accurate physiological
characteristics that can be used. Face detection technology provides a good method of authentication to replace
the current methods of passwords, token cards or PINs, and if it is used in conjunction with something the user knows in
a two-factor authentication system, then the authentication becomes even stronger.


II LITERATURE REVIEW


During 1964 and 1965, Bledsoe, along with Helen Chan and Charles Bisson, worked on using the computer to
recognize human faces. He was proud of this work, but because the funding was provided by an unnamed
intelligence agency that did not allow much publicity, little of the work was published.

Given a large database of images and a photograph, the problem was to select from the database a small set of records such that one of the image records matched the photograph. The success of the method could be measured in terms of the ratio of the answer list to the number of records in the database.


By about 1997, the system developed by Christoph von der Malsburg and graduate students of the University of
Bochum in Germany and the University of Southern California in the United States outperformed most systems with
those of Massachusetts Institute of Technology and the University of Maryland rated next. The Bochum system was developed through funding by the United States Army Research Laboratory.

The software was sold as ZN-Face and used by customers such as Deutsche Bank and operators of airports and other busy locations. The software was “robust enough to make identifications from less-than-perfect face views. It can also often see through such impediments to identification as mustaches, beards, changed hair styles and glasses—even sunglasses”.


In about January 2007, image searches were “based on the text surrounding a photo,” for example, whether text nearby mentions the image content. Polar Rose technology can estimate from a photograph, in about 1.5 seconds, what an individual may look like in three dimensions, and the company said it “will ask users to input the names of people they recognize in photos online” to help build a database. Identix, a company based in Minnesota, has developed the software Face It. Face It can pick out someone’s face in a crowd and compare it to databases worldwide to recognize and put a name to a face. The software is written to detect multiple features on the human face.

It can detect the distance between the eyes, the width of the nose, the shape of the cheekbones, the length of the jaw line and many more facial features. The software does this by mapping the image of the face onto a faceprint, a numerical code that represents the human face.
Facial recognition software used to have to rely on a 2D image with the person almost directly facing the camera.
Now, with Face It, a 3D image can be compared to a 2D image by choosing three specific points off the 3D image and converting it into a 2D image, using a special algorithm, that can be scanned through almost all databases.
In 2006, the performance of the latest face recognition algorithms was evaluated in the Face Recognition Grand Challenge (FRGC). High-resolution face images, 3-D face scans, and iris images were used in the tests.

The results indicated that the new algorithms are 10 times more accurate than the face recognition
algorithms of 2002 and 100 times more accurate than those of 1995. Some of the algorithms were able to outperform human participants in recognizing faces and could uniquely identify identical twins.
U.S. Government-sponsored evaluations and challenge problems have helped spur over two orders of magnitude of improvement in face-recognition system performance.


Since 1993, the error rate of automatic face-recognition systems has decreased by a factor of 272. The reduction applies to systems that match people with face images captured in studio or mugshot environments. In Moore’s law terms, the error rate decreased by one-half every two years.


Low-resolution images of faces can be enhanced using face hallucination. Further improvements in high resolution,
megapixel cameras in the last few years have helped to resolve the issue of insufficient resolution.
III FACE RECOGNITION TECHNOLOGY
Face recognition is an automated method of biometric identification that applies mathematical pattern-recognition techniques to video images of faces. A facial recognition system is a computer application for automatically identifying or verifying a person from a digital image or a video frame from a video source.

One of the ways to do this is by comparing selected facial features from the image with a facial database. It is typically used in security systems and can be compared to other biometrics such as fingerprint or iris recognition systems. Face recognition uses camera technology to acquire images of the detailed structure of the face. Digital templates are encoded from these patterns by mathematical algorithms, and these templates allow the identification of an individual.


Databases of existing templates are searched & matched by the matcher engines at speeds measured in millions of templates per second per CPU.
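The matching step can be pictured as a nearest-neighbour search over the stored templates. The sketch below is a toy illustration under the same assumptions as the earlier enrollment sketch (the enrolled_templates.mat file and the resized-grayscale template format); it is not the matcher engine the article describes.

% Hypothetical matcher sketch: score a probe against every enrolled template
S  = load('enrolled_templates.mat');          % assumed file from the enrollment sketch
db = S.db;

probe = double(imresize(rgb2gray(imread('login_attempt.jpg')), [64 64]));
probe = probe(:) / norm(probe(:));            % same toy representation as enrollment

scores = zeros(1, numel(db));
for k = 1:numel(db)
    scores(k) = norm(probe - db(k).template); % smaller distance = closer match
end

[bestScore, bestIdx] = min(scores);
fprintf('Best match: %s (distance %.3f)\n', db(bestIdx).name, bestScore);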
3.1 Face Detection Process
The process of capturing a face into a biometric template consists of the steps below:

  1. Capturing the image
  2. Defining and optimising the image
  3. Storing and comparing the image

  Capturing the Image:
    The image of the face can be captured using a standard camera, under both visible and infrared light, and the procedure may be either manual or automated. The camera can be positioned between three and a half inches and one meter away to capture the image. In the manual procedure, the user needs to adjust the camera to get the face in focus and needs to be within six to twelve inches of the camera. This process is much more manually intensive and requires proper user training to be successful. The automated procedure uses a set of cameras that locate the face automatically, making the process much more user friendly.
  Defining and Optimising the Image:
    The face detection system identifies the image that has the best focus and clarity of the face. The image is then
    analyzed to identify the outer boundary of the face.
    The face detection system then identifies the areas of the face image that are suitable for feature extraction and
    analysis. This involves removing areas that are covered, deep shadows and reflective areas (see the sketch after this list).
  Storing and Comparing the Image:
    Once the image has been captured, an algorithm is used to map segments of the face into hundreds of vectors. These
    algorithms also take into account the changes that can occur in a face, for example due to changes in expression or lighting. This information is used to produce a code
    called the Face Code, which is a 512-byte record. This record is then stored in a database for future comparison.
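A minimal sketch of the "defining and optimising" step referenced in the list above, assuming the same Viola-Jones detector used in the code later in this article: locate the face, crop it to its outer boundary, and normalise it to a fixed-size grayscale image ready for feature extraction. The file name and the 128x128 size are assumptions.

% Hypothetical preprocessing sketch: locate, crop and normalise a face
I = imread('captured_frame.jpg');               % assumed captured image

detector = vision.CascadeObjectDetector;        % Viola-Jones face detector
bbox     = step(detector, I);                   % bounding boxes, one row per face

if ~isempty(bbox)
    face = imcrop(I, bbox(1,:));                % outer boundary of the first face
    face = imresize(rgb2gray(face), [128 128]); % normalised, ready for analysis
    figure, imshow(face), title('Normalised face region');
end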

Multiple Face Detection Matlab Code

The Computer Vision System Toolbox provides the vision.CascadeObjectDetector System object, which detects objects such as faces using the Viola-Jones face detection algorithm.
  Prerequisite: Computer Vision System Toolbox

FACE DETECTION:

clear all
clc

% Detect objects using the Viola-Jones algorithm

% To detect faces
FDetect = vision.CascadeObjectDetector;

% Read the input image
I = imread('HarryPotter.jpg');

% Returns bounding box values, one row per detected object
BB = step(FDetect,I);

figure,
imshow(I); hold on

for i = 1:size(BB,1)
    rectangle('Position',BB(i,:),'LineWidth',5,'LineStyle','-','EdgeColor','r');
end

title('Face Detection');
hold off;

The call step(FDetect,I) returns a matrix of bounding boxes in which each row contains [x, y, width, height] for one detected object of interest.

BB =

    52    38    73    73
   379    84    71    71
   198    57    72    72

NOSE DETECTION:

% To detect the nose
NoseDetect = vision.CascadeObjectDetector('Nose','MergeThreshold',16);

BB = step(NoseDetect,I);

figure,
imshow(I); hold on

for i = 1:size(BB,1)
    rectangle('Position',BB(i,:),'LineWidth',4,'LineStyle','-','EdgeColor','b');
end

title('Nose Detection');
hold off;

EXPLANATION:

To denote the object of interest as the nose, the argument 'Nose' is passed:

vision.CascadeObjectDetector('Nose','MergeThreshold',16);

The default syntax for nose detection is:

vision.CascadeObjectDetector('Nose');

Based on the input image, we can modify the default values of the parameters passed to vision.CascadeObjectDetector. Here the default value of 'MergeThreshold' is 4.

When the default value of 'MergeThreshold' is used, the result is not correct: there is more than one detection on Hermione.

To avoid multiple detections around an object, the 'MergeThreshold' value can be overridden, as in the sketch below.
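As a quick illustration of that parameter, the snippet below runs the nose detector twice on the same image I used above, once with the default merge threshold and once with a threshold of 16, and prints how many boxes each returns; the exact counts depend on the input image.

% Compare the default MergeThreshold (4) with a stricter value (16)
NoseDefault = vision.CascadeObjectDetector('Nose');                      % default threshold
NoseStrict  = vision.CascadeObjectDetector('Nose','MergeThreshold',16);  % stricter merging

BBdefault = step(NoseDefault,I);   % may contain overlapping false detections
BBstrict  = step(NoseStrict,I);    % fewer, more reliable detections

fprintf('Default threshold: %d boxes, MergeThreshold 16: %d boxes\n', ...
        size(BBdefault,1), size(BBstrict,1));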

MOUTH DETECTION:

% To detect the mouth
MouthDetect = vision.CascadeObjectDetector('Mouth','MergeThreshold',16);

BB = step(MouthDetect,I);

figure,
imshow(I); hold on

for i = 1:size(BB,1)
    rectangle('Position',BB(i,:),'LineWidth',4,'LineStyle','-','EdgeColor','r');
end

title('Mouth Detection');
hold off;

EYE DETECTION:

% To detect a pair of eyes
EyeDetect = vision.CascadeObjectDetector('EyePairBig');

% Read the input image
I = imread('harry_potter.jpg');

BB = step(EyeDetect,I);

figure, imshow(I);
rectangle('Position',BB,'LineWidth',4,'LineStyle','-','EdgeColor','b');
title('Eyes Detection');

% Crop and display the detected eye region
Eyes = imcrop(I,BB);
figure, imshow(Eyes);

Cropped Image
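Tying this back to the database topic of this tutorial, a cropped detection can be written straight into the image folder that the database-building snippet at the top of the article reads from. The folder name faceDB and the label subject01 are assumptions for illustration, not part of the original code.

% Hypothetical example: detect a face, crop it, and add it to the database folder
FDetect = vision.CascadeObjectDetector;       % Viola-Jones face detector, as above
BBface  = step(FDetect,I);                    % bounding boxes, one row per face

if ~isempty(BBface)
    faceCrop = imcrop(I, BBface(1,:));        % crop the first detected face
    dbFolder = 'faceDB';                      % assumed database folder
    if ~exist(dbFolder,'dir'), mkdir(dbFolder); end
    subjectName = 'subject01';                % assumed label for this person
    imwrite(faceCrop, fullfile(dbFolder, [subjectName '_01.jpg']));
end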

I will discuss object detection in more detail, including how to train detectors to identify objects of our interest, in my upcoming posts. Keep reading for updates.
