Facial Recognition Technology

Face recognition has many possible uses of great importance to society. Three-dimensional sensing is an important avenue being explored to increase the precision of biometric recognition, and fresh work appears regularly in many conferences and journals. Still, 3D recognition faces several challenges in its attempt to reach a level where it can be used successfully in major applications. Better 3D sensors are still needed, even though sensor quality has improved, and the field needs to embrace thorough experimental methods for verifying claimed improvements. Suitable experiments of statistical significance would support the interpretation of the size and importance of reported improvements. With these challenges tackled, some of the optimistic expectations regarding three-dimensional recognition's potential will come true. This paper shows that researchers have taken great interest in facial recognition technology and that, with time, the field is expected to become increasingly sophisticated.

Introduction
The face is one of the most easily recognized objects on earth. However, its 3D shape and 2D images are intricate and difficult to characterize. The face recognition problem dates back as far as computer vision itself, because of its practical importance as well as academic interest from scientists. Although other identification methods, such as iris scans and fingerprints, can be more precise, face recognition has always attracted a lot of attention due to its non-invasiveness, not to mention its being humans' primary method of identification (Nixon, 2010).

The instinctive way to recognize faces is to spot their key features and compare them with the same features of other faces. The initial attempts at this started in the 1960s with a semi-automated system. Symbols were placed on photographs to mark key features such as the eyes, ears, nose, and mouth. Distances and ratios were then calculated from these marks to a common reference point and compared against reference data (Nixon, 2010).
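To make the idea concrete, the sketch below is a minimal illustration of this distance-and-ratio comparison, not a reconstruction of the 1960s system; all landmark coordinates, names, and the normalization choice are hypothetical.

```python
import numpy as np

def feature_vector(landmarks, reference):
    """Distances from each landmark to a common reference point,
    divided by the inter-eye distance so the vector is scale-invariant."""
    landmarks = np.asarray(landmarks, dtype=float)
    reference = np.asarray(reference, dtype=float)
    distances = np.linalg.norm(landmarks - reference, axis=1)
    inter_eye = np.linalg.norm(landmarks[0] - landmarks[1])  # rows 0 and 1 are the eyes
    return distances / inter_eye

def match(probe, gallery):
    """Return the gallery identity whose stored vector is closest to the probe."""
    return min(gallery, key=lambda name: np.linalg.norm(gallery[name] - probe))

# Example with made-up coordinates: eyes, nose tip, mouth corners
reference = (50, 60)  # e.g. the nose bridge
probe = feature_vector([(35, 40), (65, 40), (50, 70), (40, 85), (60, 85)], reference)
gallery = {"person_a": probe * 1.02, "person_b": probe * 1.5}
print(match(probe, gallery))  # -> "person_a"
```

Dividing by the inter-eye distance is one simple way to make the comparison insensitive to image scale, which is in the spirit of the ratio measurements described above.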

A Brief History
At the beginning of the 1970s, Goldstein, Harmon, and Lesk created a system that used 21 subjective markers such as hair color and lip thickness. This was more difficult to automate because several measurements were still made entirely by hand. A few years after Goldstein, Harmon, and Lesk's system was created, Fischler and Elschlager came up with a more automated method that measured the same characteristics using feature templates of sections of the face and then mapped them onto a universal template. However, it was later established that the features were not unique enough to represent a face (Nixon, 2010).

The connectionist approach was yet another method, classifying the face using gestures and identifying markers. It is applied using two-dimensional pattern recognition and neural net principles. To achieve reasonable accuracy, the connectionist method requires a huge number of training faces, and so it is not widely used (Nixon, 2010). The first of the completely automated systems to be developed used very general pattern recognition: it compared human faces to basic models of expected facial appearance and created, for each image, many patterns relative to those models. Kirby and Sirovich then came up with the eigenfaces method in 1988, which was used by many developers who expanded on their basic principles. New technologies for building three-dimensional models of faces from digital pictures are being developed to provide more options for comparison. However, given that the computer must derive a three-dimensional model from a two-dimensional photograph, the three-dimensional technology is still quite susceptible to errors (Nixon, 2010).

A few automated techniques have been expected to achieve the task of studying human perception. Among the first methods was a network of photo-multipliers, a device which could learn the pattern of every face. It could distinguish between photographs of ten people's faces, with two hundred and fifty presentations of each photograph; the device was not limited to facial recognition but was able to learn the pattern of each individual face. Later, automatic recognition based on the facial profile was used. One method involved seventeen fiducial measures, with components such as nose protrusion and chin length averaged over three sittings. Recognition accuracy on a fourth presentation was 96%, with the correct result ranked second for the remaining 4% of failures. Another method used twelve components of the profile silhouette, derived using a circular auto-correlation function. It quoted 90% recognition accuracy on a ten-class problem, similar to the performance of the human observers who were measured (Nixon, 2010).

Another approach involved neural nets and could distinguish between varied subjects. Geometric measurements of facial features were worked out. A panel of observers categorized faces according to these criteria using three portraits of each face: frontal, side, and three-quarter profiles (Nixon, 2010). After classification, some criteria were considered inappropriate for facial recognition. Each face's measures for the remaining criteria were then sorted by computer, which classified six faces in order to single out a particular face. The feature set included ear protrusion and length, eye shade and eye separation, lip thickness, mouth width, and nose profile.

Measurement of eye spacing is crucial to automated recognition, as the eyes offer a distinctive feature of a particular face and, in general, present a regular shape (especially because their shape is hardly altered by facial expressions), although they are sometimes partially obscured by glasses. Other methods base recognition on the eyes' natural geometric features (Nixon, 2010).

Introduction of Eigenfaces
Kohonen's face recognition system was probably the best known of the early examples. Kohonen established that a simple neural net could perform face recognition for aligned and normalized images of human faces. The eigenvectors of the image autocorrelation matrix, known as eigenfaces, were used to compute the description of a face (Escarra et al, 2000). Kohonen's system, however, was not a practical success, due to the need for accurate alignment and normalization. In the years that followed, several researchers experimented with face recognition schemes based on edges, inter-feature distances, and other neural net methods. While many of them succeeded on small aligned image databases, none properly dealt with the more practical large-database problem involving unknown face location or scale (Escarra et al, 2000).

Improvements on Eigenfaces
In 1989, Kirby and Sirovich came up with algebraic manipulations which made the direct calculation of eigenfaces easier. They demonstrated that fewer than a hundred eigenfaces were needed to accurately code aligned, normalized face images. In 1991, Turk and Pentland demonstrated that the residual error left over from eigenface coding could be used to detect faces in cluttered natural imagery and to determine the precise location and scale of each face. They went on to demonstrate that by coupling this detection method with eigenface recognition, reliable, real-time face recognition could be achieved in a minimally constrained environment. This demonstration ignited great interest in the face recognition topic (Escarra et al, 2000).
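The following is a minimal sketch of the eigenface idea, assuming a matrix of aligned, normalized face images: the top eigenvectors of the training data (obtained here via SVD) serve as eigenfaces, recognition is done by nearest neighbour in the projected space, and the reconstruction residual gives a rough face/non-face test in the spirit of the error measure mentioned above. Function names and the choice of k are illustrative, not taken from the original papers.

```python
import numpy as np

def train_eigenfaces(images, k=50):
    """images: (n_samples, n_pixels) array of aligned, normalized face images."""
    mean = images.mean(axis=0)
    centered = images - mean
    # Right singular vectors of the centered data are the eigenvectors
    # of its covariance matrix, i.e. the eigenfaces.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    eigenfaces = vt[:k]                  # top-k eigenfaces, shape (k, n_pixels)
    weights = centered @ eigenfaces.T    # eigenface coordinates of each training face
    return mean, eigenfaces, weights

def project(image, mean, eigenfaces):
    """Coordinates of one image in eigenface space."""
    return (image - mean) @ eigenfaces.T

def recognize(image, mean, eigenfaces, weights, labels):
    """Nearest-neighbour match in eigenface space."""
    w = project(image, mean, eigenfaces)
    distances = np.linalg.norm(weights - w, axis=1)
    return labels[int(np.argmin(distances))]

def reconstruction_error(image, mean, eigenfaces):
    """Large residual suggests the image region is probably not a face."""
    w = project(image, mean, eigenfaces)
    reconstruction = mean + w @ eigenfaces
    return np.linalg.norm(image - reconstruction)
```

A small number of eigenfaces captures most of the variation in aligned face images, which is why fewer than a hundred components can code a face accurately.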

Recently, even though face recognition has received considerable attention from research communities as well as the market, it has remained quite challenging in actual applications. Numerous face recognition algorithms, and their modifications, have been created through the decades. Most typical algorithms can be categorized into model-based and appearance-based schemes (Lu, 2010).

Universal Interest in Face Recognition Technology
The fact that identifying people is a major concern both in daily life and in cyberspace, together with humans' exceptional ability to distinguish between people through their faces, has caused universal interest among researchers in biometrics, computer vision, and pattern recognition, as well as in the computer graphics and machine learning communities (Lu, 2010). Moreover, a large number of security, commercial, and forensic applications, such as automated crowd surveillance, mug-shot identification, and face reconstruction, require face recognition technologies. Several commercial face recognition systems, e.g. Cognitec, Eyematic, Identix, and Viisage, have been deployed. The facial scan has proved effective as a biometric attribute or indicator. Distinct biometric indicators suit distinct types of identification applications because of variations in accuracy, intrusiveness, cost, and ease of sensing (Lu, 2010).

Face Recognition as a Dedicated Process
Generally, face recognition uses an extensive range of stimuli obtained from the senses (auditory, visual, tactile, olfactory, etc.). The stimuli can be used collectively or individually for storing or retrieving face images, and contextual knowledge is also sometimes used. While a human brain is limited in the total number of individuals it can actually remember, a computerized system can handle large datasets of face images. Several neuroscientific and psychophysical findings show that face recognition is a dedicated process, because faces are easier for humans to remember than most other objects when presented upright. It is also argued that infants are born with an inclination to be attracted by face-like objects, because they prefer looking at moving objects whose features resemble faces over objects with no pattern or with disorderly facial features (Nixon, 2010). The findings also show several empirical differences between object and facial recognition: configural effects, expertise, differences that can be verbalized, sensitivity to contrast polarity and direction of illumination, metric variation, rotation in depth, and rotation of inverted or in-plane faces. Both holistic and feature information can be crucial for face perception and recognition; if the dominant facial features are present, holistic descriptions may not be employed. The results further indicate that the face outline, hair, mouth, and eyes are crucial for face perception and recognition. Recognition rates are also highest for the most attractive faces, followed by the least striking faces, with faces in the midrange of attractiveness recognized last (Nixon, 2010).

Also, caricatures show the most important features in a face, although they contain less information than photographs. Studies have shown that distinctive faces are retained in memory longer than typical faces, and earlier studies concluded that information in a low spatial frequency band plays a key role in face recognition (Nixon, 2010). Another study shows that famous faces are easier to identify when seen in motion than in still pictures, and neurophysiologic studies suggest that facial expression analysis is carried out in parallel with face recognition. Considerable progress has been made in segmentation, extraction of facial features, and face recognition. Researchers are also studying another face recognition topic: recognition from range image data. Range images capture the depth structure of the object in question (Nixon, 2010).

Face Recognition from Video
There is a big problem in trying to identify faces from video sequences. A description of basic human behavior, especially behavior that is not particular to a certain individual, is very useful; a smart room is capable of recognizing such behavior and initiating appropriate action (Nixon, 2010). However, recognizing individuals from surveillance video still proves difficult because of the low quality of the video, and high-resolution techniques must be applied to address the problem. Another problem is that face images in video are mostly smaller than the sizes assumed by still-image face recognition systems. Also, the intra-class variation among faces is much smaller than the variation between faces and objects outside the class; for instance, it is harder to recognize a specific face than to detect and localize faces in general. However, in a fairly controlled environment where the face region is fairly large, e.g. at ATMs, video-based face recognition techniques are feasible (Nixon, 2010).

More Recent Developments
Recently, Mohamed Abdel-Mottaleb developed a high-tech system that can capture images of a person's face and ear and compare them against his or her other stored images, with a claimed accuracy of 95 to 100 percent. The system can use three-dimensional facial images, or combine 2-D facial images with 3-D ear models constructed from sequences of video frames, to recognize people by distinctive facial characteristics and the shapes of their ears (University of Miami, 2009).

The first of the methods uses 3-D facial images, with a recognition rate of more than 95 percent in a laboratory setting. The usual shape-matching methods common in 3-D face recognition consume a lot of time. Abdel-Mottaleb's method increases computational efficiency while still maintaining a satisfactory recognition rate: by automatically selecting the most distinguishable regions of the face, he reduces the number of distinguishable landmarks (vertices) on each face that are taken into consideration when matching three-dimensional facial data (University of Miami, 2009). The landmarks were found to lie mainly within the areas of the eyebrows, mouth, nose, and chin. The second method obtains a set of facial landmarks from frontal images of human faces and merges the data with a three-dimensional ear recognition component, an identification process that is a great deal more difficult because of the technique's sensitivity to various lighting conditions. By combining the two modalities, a 100 percent identification rate was achieved in the lab. These advanced identification tools can help curb crime as well as enhance security at borders. The researchers hope to expand the techniques to distinguish faces showing facial expressions and to recognize faces using profile images only (University of Miami, 2009).
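The report does not describe how the face and ear modalities were combined, so the sketch below shows one generic way to fuse two matchers at the score level (min-max normalization followed by a weighted sum), purely as an illustration of combining modalities rather than Abdel-Mottaleb's actual method; the scores and weight are made up.

```python
import numpy as np

def fuse_scores(face_scores, ear_scores, face_weight=0.5):
    """Weighted-sum fusion of per-identity similarity scores from two matchers.
    Scores are min-max normalized first so the two modalities are comparable."""
    def normalize(scores):
        s = np.asarray(scores, dtype=float)
        return (s - s.min()) / (s.max() - s.min() + 1e-12)
    fused = face_weight * normalize(face_scores) + (1 - face_weight) * normalize(ear_scores)
    return int(np.argmax(fused))  # index of the best-matching gallery identity

# Example: three gallery identities, hypothetical similarity scores
face_scores = [0.62, 0.91, 0.55]  # from a 2-D facial-landmark matcher
ear_scores = [0.40, 0.83, 0.47]   # from a 3-D ear-shape matcher
print(fuse_scores(face_scores, ear_scores))  # -> 1
```

Even a simple fusion rule like this tends to be more robust than either modality alone, since errors in one matcher can be compensated by the other.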

The United States government has conducted several evaluations to test face recognition's capabilities and limitations. The Face Recognition Technology (FERET) Evaluation was an effort aimed at encouraging the development of face recognition algorithms and technology by assessing prototype face recognition systems. A Face Recognition Vendor Test was conducted in 2002 (NSTC Subcommittee on Biometrics, 2006).

Discussion
Machine-based face recognition now involves image processing, computer vision, pattern recognition, and neural networks, and has various applications such as access control and the video surveillance used by law enforcers; ATMs also use face recognition techniques. Because of its user-friendliness, face recognition remains very attractive in spite of the existence of highly reliable biometric identification methods like iris scans and fingerprint analysis (Escarra et al, 2000). More than three decades of research in neuroscience and psychophysics on face recognition is acknowledged in the literature. Segmentation of faces from still images or videos is the first big step in completely automatic face recognition systems, and stupendous accomplishments have been achieved; two of the advancements are systems based on neural networks and example-based learning systems. Face segmentation can be applied in surveillance systems and human-computer interfaces. Face recognition methods based on other sensor modalities, such as range images and sketches, are difficult to apply in reality. Face recognition in video sequences, particularly surveillance video, is the most challenging face recognition problem. Evaluation and standardization of the numerous algorithms is a crucial step toward more accurate face recognition. There are also big challenges involving pose and illumination in photographs, while the ageing problem is among the hardest issues to address (Escarra et al, 2000).

There are many challenges in pursuing face recognition technology, as in many other technological or scientific pursuits. For instance, in 2001 the first large-scale test of the technology was conducted, analyzing the faces of everyone who attended the Super Bowl by photographing them as they entered the stadium. All the photos were compared with the police database to see if anyone was wanted for any type of crime, but no criminals could be identified using the available technology. Other states are testing facial recognition technology for use against criminal activity and to aid in missing-children cases. In 2001, police in Virginia Beach, Virginia installed cameras at various places on the oceanfront, and the police department of Tampa, Florida installed cameras in a popular district known for its lively nightlife crowd. Two years later, with less than favorable results and plenty of public opposition, they shut the system down in 2003. This shows how serious the issue of face recognition in public is (Agre, 2001).

Conclusion
Face recognition has come a long way and is still growing and showing positive results. From the introduction of eigenfaces to the inception of 3D techniques, researchers' optimism is bearing fruit. However, there are still challenges, such as the accuracy with which computers can derive 3D models from 2D face images. Nevertheless, Abdel-Mottaleb's combined modalities are a huge step forward for researchers in this field. The latest developments can greatly help curb crime and lower crime rates.

Another major challenge is matching images from video sequences with still images, as surveillance videos are not of high enough quality to produce accurate images. If high-quality surveillance cameras are introduced, however, that will be another great milestone for researchers in face recognition technology.

Because face recognition is non-intrusive, it is proving popular among law enforcement authorities. However, many people are challenging its use because they view it as an invasion of their privacy, although law enforcement authorities might argue that individuals' privacy should come after security. All in all, the benefits of face recognition technology seem to outweigh the disadvantages.
