Face Recognition using LBPH Face Recognizer

Yeshwant Kumar
6 min read · Apr 4, 2021


Face recognition using LBPH

In my previous blog I talked about face detection in detail, so now I would like to do face recognition using the LBPH face recognizer.

Note: In order to recognize a face, we first have to detect it, so recognition also includes detection. If you want to know more about face detection, have a look at my face detection blog mentioned above.

Here LBPH stands for Local Binary Patterns Histograms.
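The idea behind LBP is that each pixel is compared with its 8 neighbours; each comparison contributes one bit, giving an 8-bit code per pixel, and histograms of these codes over a grid of face regions form the descriptor that the recognizer matches. Here is a tiny sketch of computing one such code, just to illustrate (this is not code from the project):

import numpy as np

def lbp_code(patch):
    # Basic 8-neighbour LBP for a single 3x3 grayscale patch:
    # each neighbour >= centre sets one bit of the code.
    center = patch[1, 1]
    neighbours = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                  patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    return sum(int(n >= center) << i for i, n in enumerate(neighbours))

patch = np.array([[90, 80, 120],
                  [70, 100, 140],
                  [60, 50, 160]], dtype=np.uint8)
print(lbp_code(patch))  # 28: one of the 256 codes the histograms count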

Here I tried to recognize the faces in an image. First of all we have the train_model.py file, which helps collect the training data: it captures about 200 images for every user and stores them in the dataset folder. For example, if we have 5 users, it would collect 200 images for each user, which leads to a total of 1000 images.

Then we have the recognizer.py file, which takes this data as input and trains the model on these images so that it can recognize faces in the test image.

Now we have two folders, dataset and model. In the dataset folder we store the training images for each user, and in the model folder we store the model trained on that dataset.
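Putting the pieces together, the project layout looks roughly like this (the file names come from the snippets in this post; the exact layout is my assumption):

project/
├── train_model.py                       # GUI: collects images and trains the model
├── recognizer.py                        # loads the model and recognizes faces
├── haarcascade_frontalface_default.xml  # Haar cascade used for detection
├── dataset/                             # Name.ID.sampleNum.jpg, ~200 crops per user
└── model/                               # trained_model2.yml saved after training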

Train_model.py:

Here we collect the data and train the model. When we run the train_model.py file, we get an interface like this:

Here we need to give an image ID and a name to identify the person in the image with a labelled name. We also have two buttons, Take Images and Train Images. Take Images captures 200 images with the webcam and stores them in the dataset folder with the ID and name for each image.
Train Images then builds the model by training on these images and saves the model in the model folder.

The code for these two buttons would look like:

takeImg = tk.Button(window, text="Take Images", command=take_img, fg="black", bg="cyan",
                    width=20, height=3, activebackground="yellow",
                    font=('times', 20, 'italic bold'))
takeImg.place(x=200, y=500)
trainImg = tk.Button(window, text="Train Images", command=trainimg, fg="black", bg="cyan",
                     width=20, height=3, activebackground="yellow",
                     font=('times', 20, 'italic bold'))
trainImg.place(x=590, y=500)
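These button snippets assume a Tk window, two entry widgets, and a notification label already exist. Here is a minimal sketch of that scaffolding; the names window, txt, txt2, and Notification are taken from the code in this post, everything else is assumed:

import tkinter as tk

window = tk.Tk()
window.title("Face Recognition")
window.geometry("1000x700")

# txt holds the image ID, txt2 the person's name (read by take_img below)
txt = tk.Entry(window, width=20, font=('times', 20))
txt.place(x=200, y=200)
txt2 = tk.Entry(window, width=20, font=('times', 20))
txt2.place(x=200, y=300)

# label the snippets write status messages into
Notification = tk.Label(window, text="", width=50, font=('times', 18, 'bold'))

# ...the two buttons above go here, then:
window.mainloop()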

In this code the most important parameter is command, which calls the function that actually takes or trains the images; the other parameters just style the buttons. Now let us examine the individual buttons. The Take Images button is wired to the take_img function; let us understand what this function does:

def take_img():
    cam = cv2.VideoCapture(0)
    detector = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
    ID = txt.get()
    Name = txt2.get()
    sampleNum = 0
    while True:
        ret, img = cam.read()
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        faces = detector.detectMultiScale(gray, 1.3, 5)
        for (x, y, w, h) in faces:
            cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)
            # incrementing sample number
            sampleNum = sampleNum + 1
            # saving the captured face in the dataset folder
            cv2.imwrite("dataset/" + Name + "." + ID + '.' + str(sampleNum) + ".jpg",
                        gray[y:y + h, x:x + w])
        cv2.imshow('Frame', img)
        # check for a key press; stop early on 'q'
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
        # break if the sample number is more than 200
        elif sampleNum > 200:
            break
    cam.release()
    cv2.destroyAllWindows()

    res = "Images Saved : " + ID + " Name : " + Name
    Notification.configure(text=res, bg="SpringGreen3", width=50, font=('times', 18, 'bold'))
    Notification.place(x=250, y=400)

Since we take the images with the webcam, they are captured as video, so we break the video into frames, detect the faces in each frame, and store the face crops in the dataset folder. This is the same thing we did in face detection, getting the frames from the video and drawing boxes around the faces, except that here we also save those faces to the dataset folder. We also set a criterion to capture only 200 images per face. After saving the images, we prompt a message that they were saved.

Now we will look into the second button, Train Images. When we press it, all the images collected with Take Images are used for training the model, and the model is saved in the model folder.

def trainimg():
    recognizer = cv2.face.LBPHFaceRecognizer_create()
    global detector
    detector = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")
    try:
        global faces, Id
        faces, Id = getImagesAndLabels("dataset")
    except Exception as e:
        l = 'please make "dataset" folder & put Images'
        Notification.configure(text=l, bg="SpringGreen3", width=50, font=('times', 18, 'bold'))
        Notification.place(x=350, y=400)
        return  # without data there is nothing to train on
    recognizer.train(faces, np.array(Id))
    try:
        recognizer.save("model/trained_model2.yml")
    except Exception as e:
        q = 'Please make "model" folder'
        Notification.configure(text=q, bg="SpringGreen3", width=50, font=('times', 18, 'bold'))
        Notification.place(x=350, y=400)
        return
    res = "Model Trained"  # +",".join(str(f) for f in Id)
    Notification.configure(text=res, bg="SpringGreen3", width=50, font=('times', 18, 'bold'))
    Notification.place(x=250, y=400)

Now we train our algorithm using the Local Binary Patterns Histograms recognizer. Even here you can see that we use the detector along with the recognizer, since we must be able to detect faces before recognizing them.
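As an aside, cv2.face.LBPHFaceRecognizer_create() also accepts tunable parameters; the call below just spells out OpenCV's documented defaults:

recognizer = cv2.face.LBPHFaceRecognizer_create(
    radius=1,     # radius of the LBP sampling circle
    neighbors=8,  # sample points around each pixel
    grid_x=8,     # histogram grid cells horizontally
    grid_y=8      # histogram grid cells vertically
)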

Then we get the images and their corresponding IDs from our dataset, train the model on this data, and finally save the trained recognizer in the model folder.
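One thing to note: trainimg calls getImagesAndLabels, which is not shown in this post. Here is a minimal sketch of what it has to do, assuming the Name.ID.sampleNum.jpg naming convention used by take_img:

import os
import cv2
import numpy as np

def getImagesAndLabels(path):
    # Hypothetical helper (not from the original post): load every saved
    # face crop and parse the integer ID out of its Name.ID.sampleNum.jpg name.
    faces, ids = [], []
    for file in os.listdir(path):
        if not file.endswith(".jpg"):
            continue
        img = cv2.imread(os.path.join(path, file), cv2.IMREAD_GRAYSCALE)
        faces.append(img)
        ids.append(int(file.split(".")[1]))  # second dot-separated field is the ID
    return faces, ids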

res = "Model Trained"  # +",".join(str(f) for f in Id)
Notification.configure(text=res, bg="SpringGreen3", width=50, font=('times', 18, 'bold'))
Notification.place(x=250, y=400)

Now that we have learnt how to collect the data and train our recognizer, we need to recognize the faces.

import cv2
import numpy as np

# pip install opencv-contrib-python
recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.read('model/trained_model2.yml')
cascadePath = "haarcascade_frontalface_default.xml"
faceCascade = cv2.CascadeClassifier(cascadePath)
font = cv2.FONT_HERSHEY_SIMPLEX
cam = cv2.VideoCapture(0)
while True:
    ret, im = cam.read()
    gray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
    faces = faceCascade.detectMultiScale(gray, 1.2, 5)
    for (x, y, w, h) in faces:
        Id, conf = recognizer.predict(gray[y:y + h, x:x + w])
        cv2.rectangle(im, (x, y), (x + w, y + h), (0, 255, 0), 7)  # BGR values must be 0-255
        cv2.putText(im, str(Id), (x, y - 40), font, 2, (255, 255, 255), 3)
    cv2.imshow('im', im)
    if cv2.waitKey(10) & 0xFF == ord('q'):
        break
cam.release()
cv2.destroyAllWindows()

So we create the recognizer, grab frames from the webcam, convert them to grayscale, recognize the faces with the recognizer, draw a rectangle around each face, add the predicted ID as a label over the image with putText, and show the result with imshow.
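One detail worth flagging: the loop ignores the conf value returned by predict. For LBPH, conf is a distance, so lower means a closer match. Here is a hedged tweak for the two lines inside the for loop; the cutoff of 70 is an assumed starting point, not a value from this post:

Id, conf = recognizer.predict(gray[y:y + h, x:x + w])
# conf is a distance: lower = more similar. 70 is an assumed threshold
# to tune per setup; faces above it are labelled as unknown.
label = str(Id) if conf < 70 else "Unknown"
cv2.putText(im, label, (x, y - 40), font, 2, (255, 255, 255), 3)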

Conclusion:

Thus we recognized the faces in the image: the face was classified correctly with the label 2, which means our recognizer is working fine.

References:

My GitHub: https://github.com/AarohiSingla

My LinkedIn profile:
