BEACON Researchers at Work: Facial recognition software improved by evolutionary computing

Here is the second installment of the new BEACON Researchers at Work series, from North Carolina A&T graduate student Joseph Shelton.

A biometric security system (BSS) determines whether or not an individual is allowed access to a resource or location by scanning a biometric modality. A biometric modality is a trait that is unique to most individuals, universal across all individuals, and practical to use, such as a fingerprint, an iris, the voice, or the human face. A BSS has a significant advantage over a conventional security system, which uses a string of characters as a password, because a biometric modality is much more difficult to copy.

My interest in this research was spurred by action movies that involved hacking a system that scanned some part of the human body. Whenever a biometric system needed to be spoofed, the burglar/ex-CIA agent/teen genius always went to great lengths to replicate somebody’s biometric signature, and it seemed like biometric systems were much cooler and more futuristic than a regular “type-in-a-password” system. Admittedly, a guilty part of me wanted to know if it was really possible to break a system using the same methods as in the movies. So when I was told about a biometrics research team, I wanted to join and learn how biometric systems truly functioned. Before I could even begin to understand how to break a BSS, I had to understand how one works, which meant studying each module of a system, the components within each module, and at times the sub-components within those. Through that studying, I answered a lot of my own questions and realized that in some areas biometrics isn’t as flashy as the movies make it out to be, but in most areas the theatrics don’t do the field justice.

A biometric security system is composed of four modules: the sensor, feature extraction, comparison/matching, and database storage. The sensor is where a biometric modality is read into the system. Feature extraction converts that modality into a form that can be compared in the matching module, where it is compared against previously enrolled subjects stored in a database. My research focuses on the feature extraction module, using the face, or a facial image, as the modality, through a technique called Genetic and Evolutionary Feature Extraction (GEFE). The goal of GEFE is to increase the probability that an individual will be correctly identified while reducing the number of features that need to be examined.
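
To make the flow between these four modules concrete, here is a minimal sketch of one identification attempt in Python. The sensor, extractor, and database objects and their method names are hypothetical placeholders for illustration, not code from an actual BSS or from GEFE.

```python
# A minimal sketch of one identification attempt flowing through the four
# modules above. All object and method names are hypothetical placeholders.

def identify(sensor, extractor, database, threshold):
    image = sensor.capture()                           # 1. sensor reads the modality
    probe = extractor.extract(image)                   # 2. feature extraction
    best_id, best_distance = None, float("inf")
    for subject_id, template in database.items():      # 4. previously enrolled subjects
        distance = extractor.compare(probe, template)  # 3. comparison/matching
        if distance < best_distance:
            best_id, best_distance = subject_id, distance
    # Accept the closest enrolled subject only if it is close enough.
    return best_id if best_distance <= threshold else None
```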

An important concept behind GEFE is Genetic and Evolutionary Computation (GEC). A GEC is a technique used to seek out an optimal or near-optimal solution to a problem by simulating the evolutionary process. For any problem (e.g., which two numbers added together give the greatest value?), a population of possible solutions is created and each solution is given a fitness value. For example, the solution (12, 45) would be given the fitness 12 + 45 = 57. Just like in evolution, parents are selected from the population of solutions, offspring are created, and the offspring replace unfit individuals in the population. This process is repeated until either an optimal solution is obtained or a set number of generations/evaluations has run. Different forms of GEC select parents and create offspring in unique ways, but the overall function is to find the best solution to a problem.
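
As a rough sketch of that loop, here is a toy GEC in Python for the "greatest sum of two numbers" example. Restricting each number to the range 0–100, and the specific selection, crossover, and mutation choices, are assumptions added here so the toy has a definite optimum; they are not the operators used in GEFE.

```python
import random

# Toy GEC: evolve a pair of numbers whose sum is as large as possible.
# Limiting each number to 0..100 is an assumption for illustration, so the
# optimum is (100, 100) with fitness 200.

def fitness(solution):
    a, b = solution
    return a + b                                   # e.g. (12, 45) -> 57

def evolve(pop_size=20, generations=50):
    population = [(random.randint(0, 100), random.randint(0, 100))
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Select two parents, favoring fitter solutions (tournament selection).
        parents = [max(random.sample(population, 3), key=fitness)
                   for _ in range(2)]
        # Create an offspring by crossover (one number from each parent)...
        child = (parents[0][0], parents[1][1])
        # ...and mutation (a small random change, clamped to the allowed range).
        child = tuple(min(100, max(0, v + random.randint(-5, 5))) for v in child)
        # The offspring replaces the least fit individual in the population.
        worst_index = min(range(pop_size), key=lambda i: fitness(population[i]))
        population[worst_index] = child
    return max(population, key=fitness)

print(evolve())   # typically converges toward (100, 100)
```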

The Local Binary Pattern (LBP) algorithm, a texture analysis algorithm proposed by Timo Ojala, can be used for feature extraction. It is used to “describe” the textures that appear within an image, the idea being that different textures have distinct patterns that represent them. In order to identify images, these patterns, or features, must be extracted and converted into a form that can be used in comparisons. The traditional way of doing this is to segment an entire image into non-overlapping, uniformly sized regions. Within each region, center pixels (pixels surrounded by a number of neighboring pixels) are sought out. If the image is grayscale, the pixels have intensity values between 0 and 255, 0 being all black and 255 being all white. Each center pixel has a pattern that can be interpreted by comparing the value of the center pixel to those of its neighboring pixels. If the difference between a neighbor and the center is negative, a 0 is recorded as the bit; if the difference is positive, a 1 is recorded. In our research, we focus on 8 neighbors, so when the 8 bits are obtained from the pixel comparisons, they are concatenated to form a pattern string.
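
Here is a small sketch of that per-pixel step in Python: compare a center pixel to its 8 neighbors and concatenate the resulting bits into a pattern string. The neighbor ordering and the handling of a zero difference are assumptions made for illustration.

```python
import numpy as np

def lbp_pattern(image, row, col):
    """Return the 8-bit LBP string for the center pixel at (row, col)."""
    center = int(image[row, col])
    # The 8 neighbors, walked clockwise starting from the top-left corner.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    bits = []
    for dr, dc in offsets:
        neighbor = int(image[row + dr, col + dc])
        # Negative difference -> bit 0; positive (or zero, here) -> bit 1.
        bits.append('0' if neighbor - center < 0 else '1')
    return ''.join(bits)

# Example on a tiny grayscale patch (intensities 0-255):
patch = np.array([[ 10,  20,  30],
                  [ 40,  50,  60],
                  [200, 210, 220]], dtype=np.uint8)
print(lbp_pattern(patch, 1, 1))   # pattern for the center pixel (value 50)
```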

We then build a histogram for each region that keeps count of the patterns extracted from it. There are initially 256 possible patterns, but to reduce this number, only uniform patterns are considered. A uniform pattern is a bit string that has two or fewer changes between bit values when read in sequential, circular order. The uniform pattern 00111110 has a change between the second and third positions and between the seventh and eighth positions, while the pattern 11100110 is considered non-uniform because it has a total of four changes: between the third and fourth positions, the fifth and sixth positions, the seventh and eighth positions, and the eighth and first positions. There are 58 possible uniform 8-bit patterns, so the histogram has 59 bins, the 59th bin keeping count of all non-uniform patterns. The histograms for all the regions are concatenated to form a feature vector. We then compare this feature vector to a database of other feature vectors using a Manhattan distance measure, and the distance determines how similar two feature vectors are.
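
A sketch of the uniform-pattern test, the 59-bin histogram, and the Manhattan comparison might look like the following; `lbp_pattern` is the per-pixel helper from the previous sketch, and the other names are illustrative assumptions rather than the exact GEFE code.

```python
import numpy as np

def is_uniform(pattern):
    """A pattern is uniform if it has two or fewer bit changes, read circularly."""
    changes = sum(pattern[i] != pattern[(i + 1) % len(pattern)]
                  for i in range(len(pattern)))
    return changes <= 2

# Each of the 58 uniform 8-bit patterns gets its own bin; bin 58 (the 59th bin)
# collects every non-uniform pattern.
UNIFORM_PATTERNS = [format(p, '08b') for p in range(256)
                    if is_uniform(format(p, '08b'))]
BIN_INDEX = {pattern: i for i, pattern in enumerate(UNIFORM_PATTERNS)}

def region_histogram(patterns):
    """Count the pattern strings extracted from one region into a 59-bin histogram."""
    hist = np.zeros(59)
    for pattern in patterns:
        hist[BIN_INDEX.get(pattern, 58)] += 1
    return hist

def manhattan_distance(feature_vector_a, feature_vector_b):
    """Smaller distance means the two concatenated feature vectors are more similar."""
    return np.sum(np.abs(feature_vector_a - feature_vector_b))

print(len(UNIFORM_PATTERNS))                            # 58 uniform patterns
print(is_uniform('00111110'), is_uniform('11100110'))   # True, False
```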

The GEFE technique combines LBP and GEC to optimize feature extraction. In essence, GEFE evolves the regions that features are extracted from. It differs from the traditional LBP method in that regions are allowed to overlap, the entire image doesn’t have to be segmented, and the regions don’t have to be uniformly sized. A genetic feature extractor (FE) holds a set of region parameters along with a fitness. The parameters include the (X, Y) coordinates of each region’s center, the width and height of each region, and a masking value for each region. The masking value determines whether or not features are extracted from that region by turning the region on or off; the purpose is to reduce the number of features objectively, without any human bias. The fitness is simply the number of incorrect matches the FE obtains on a biometric system plus the percentage of regions that are turned on.
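
As a rough illustration, one candidate FE could be represented as below: a set of possibly overlapping regions, each with a center, size, and masking bit, plus a fitness that adds recognition errors to the share of active regions. The field names, value ranges, and the error-counting callback are assumptions for this sketch, not the exact GEFE implementation.

```python
import random
from dataclasses import dataclass

@dataclass
class Region:
    x: int           # column of the region's center
    y: int           # row of the region's center
    width: int
    height: int
    masked: bool     # True -> region is turned off, so no features are extracted

@dataclass
class FeatureExtractor:
    regions: list
    fitness: float = float("inf")

def random_extractor(num_regions, image_width, image_height):
    """Create one random FE for the initial population (sizes are arbitrary here)."""
    return FeatureExtractor([
        Region(x=random.randrange(image_width),
               y=random.randrange(image_height),
               width=random.randint(8, 32),
               height=random.randint(8, 32),
               masked=random.random() < 0.5)
        for _ in range(num_regions)])

def evaluate(fe, count_errors):
    """Fitness = incorrect matches + percentage of regions turned on (lower is better)."""
    active = [r for r in fe.regions if not r.masked]
    percent_on = 100.0 * len(active) / len(fe.regions)
    fe.fitness = count_errors(active) + percent_on
    return fe.fitness
```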

GEFE has been implemented with different types of GECs and tested on facial images from the Face Recognition Grand Challenge (FRGC) dataset. In every test, the genetic FEs beat out the traditional method in terms of both recognition accuracy and the percentage of features used.

My biggest interest in GEFE is the evolutionary aspect, which can be thought of as a form of artificial intelligence when applied to computing. Before joining the research team, the concept of computational evolution was foreign to me. The basic idea was simple enough, but understanding the particulars was more difficult. Because GECs are stochastic rather than deterministic, one of the more interesting studies is observing the results of running different types of GECs multiple times. The areas of research within biometrics seem boundless, with subjects branching off into further subjects, and that’s one reason why the field appeals to me.

For more information, please contact Joseph Shelton at jashelt1 at ncat dot edu.
