Day 16 - 30 Days 30 ML Projects: Real-Time Face Detection in a Webcam Feed Using OpenCV

Today, I tackled a real-time face detection problem using OpenCV. The goal was to implement a system that could detect faces in real-time from a webcam feed and highlight them with bounding boxes.

If you want to see the code, you can find it here: GIT REPO.

The Solution

We used OpenCV, a powerful computer vision library, to process video streams and detect faces. The Haar Cascade Classifier was the key tool for recognizing face patterns. It works by sliding a detection window over the image at multiple scales and flagging regions whose light-and-dark patterns match a trained frontal-face model; we then draw a rectangle around each match.
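
To make this concrete before the real-time walkthrough, here is a minimal sketch of the same pipeline on a single still image. It is only an illustration; the file name photo.jpg is a placeholder for any image of your own.

import cv2

face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

img = cv2.imread('photo.jpg')  # placeholder path, substitute your own image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Detect faces and draw a blue box around each one
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)

cv2.imwrite('photo_with_faces.jpg', img)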

Code Workflow

Let's import the required library first:

import cv2

This imports the OpenCV library. OpenCV (Open Source Computer Vision Library) is a library of programming functions primarily aimed at real-time computer vision.

Step 1: Load the Pre-trained Haar Cascade for Face Detection

face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
  • cv2.CascadeClassifier: This constructor loads the pre-trained Haar Cascade XML file which contains the model for detecting faces. OpenCV comes with several pre-trained models for face detection, and here we are using the ‘haarcascade_frontalface_default.xml’ model. (A quick check that the file actually loaded is sketched after this list.)
  • cv2.data.haarcascades: This points to the directory where OpenCV stores its pre-trained models, so the cascade file can be located easily.
  • haarcascade_frontalface_default.xml: This XML file contains the Haar Cascade data for detecting human frontal faces.
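
A small defensive addition of my own, not in the original snippet: if the path to the XML file is wrong, CascadeClassifier fails silently and every later detection call simply returns nothing, so it is worth checking empty() right after loading.

# empty() is True if the cascade file could not be loaded
if face_cascade.empty():
    raise IOError('Failed to load haarcascade_frontalface_default.xml')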

Step 2: Capture Video from the Webcam

cap = cv2.VideoCapture(0)
  • cv2.VideoCapture(0): This creates a cap object that starts accessing the webcam. The argument 0 selects the default webcam; if you have more than one camera, 1, 2, etc. refer to the others. (A quick check that the camera actually opened is sketched after this list.)
  • The webcam is now ready to capture frames in real-time.
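
As another defensive sketch, not part of the original walkthrough, you can confirm the camera actually opened before entering the capture loop:

# isOpened() returns False if the webcam could not be accessed
if not cap.isOpened():
    raise IOError('Cannot open webcam')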

Step 3: Process Each Frame from the Webcam Feed

while True:
    ret, frame = cap.read()  # Read each frame
    if not ret:
        break
  • while True: This starts an infinite loop that continuously processes the video stream frame by frame.
  • ret, frame = cap.read(): This reads the current frame from the webcam. The read() method returns two values:
    • ret: A boolean that indicates whether the frame was successfully captured (True) or not (False).
    • frame: The captured frame (an image represented as a NumPy array).
  • if not ret: If ret is False, it means there was an issue capturing the frame (e.g., camera disconnected), so we break the loop.

Convert Frame to Grayscale

gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
  • cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY): This converts the captured frame from a colored (BGR) image to grayscale. The Haar Cascade works on pixel intensities, so detection is done on a single-channel grayscale image, and processing one channel instead of three is also computationally cheaper.

Step 4: Detect Faces in the Frame

faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5, minSize=(30, 30))
  • detectMultiScale: This method detects objects (in our case, faces) in the grayscale image. It returns a list of rectangles where faces are detected.

    • gray: The grayscale image where the detection will take place.

    • scaleFactor=1.1: Specifies how much the image size is reduced at each scale. 1.1 means the image is reduced by 10% at each scale, helping to detect faces at different sizes.

    • minNeighbors=5: Specifies how many neighbors each rectangle candidate should have to retain it. Higher values result in fewer detections but with higher quality.

    • minSize=(30, 30): The minimum possible size of the detected face. Any face smaller than 30x30 pixels will not be considered.
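
If you want a feel for how these parameters behave on your own camera, a quick tuning aid (my own addition, placed inside the loop) is to print the number of detections per frame while nudging scaleFactor and minNeighbors up or down:

faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5, minSize=(30, 30))
# Too many false positives? Raise minNeighbors. Missing small faces? Lower minSize.
print(f'Detected {len(faces)} face(s) in this frame')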

Draw Rectangles Around Detected Faces

for (x, y, w, h) in faces:
    cv2.rectangle(frame, (x, y), (x+w, y+h), (255, 0, 0), 2)
  • for (x, y, w, h) in faces: Loops through all the detected faces. Each face is represented by a rectangle with:

    • (x, y): Coordinates of the top-left corner of the rectangle.
    • (w, h): The width and height of the rectangle.
  • cv2.rectangle(frame, (x, y), (x+w, y+h), (255, 0, 0), 2):

    • Draws a rectangle on the original colored frame.
    • (255, 0, 0): Specifies the rectangle color in BGR (Blue in this case).
    • 2: The thickness of the rectangle.
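
An optional extension, not in the original script, is to label each box, for example with cv2.putText just above the rectangle:

for (x, y, w, h) in faces:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
    # Optional label drawn just above each bounding box
    cv2.putText(frame, 'Face', (x, y - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.6, (255, 0, 0), 2)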

Display the Frame with Detected Faces

cv2.imshow('Face Detection', frame)
  • cv2.imshow: This function displays the current frame in a window titled ‘Face Detection’. The frame now contains rectangles drawn around detected faces.

Exit the Loop and Close the Webcam

if cv2.waitKey(1) & 0xFF == ord('q'):
    break
  • cv2.waitKey(1): This function waits for a key press for 1 millisecond. The 1 millisecond delay is needed to give OpenCV time to refresh the window showing the video.
  • & 0xFF: Masks the value returned by waitKey down to its lowest 8 bits so the comparison with ord('q') behaves consistently across platforms where waitKey returns a larger integer.
  • ord('q'): This checks if the ‘q’ key was pressed. If it was, the loop breaks, and the program ends.

Step 5: Release the Resources and Close All Windows

cap.release()
cv2.destroyAllWindows()
  • cap.release(): This releases the webcam so other applications can access it.
  • cv2.destroyAllWindows(): Closes all the OpenCV windows that were opened during the program’s execution.
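
For reference, here is the whole loop assembled from the snippets above into a single script:

import cv2

# Load the pre-trained Haar Cascade for frontal faces
face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

# Open the default webcam
cap = cv2.VideoCapture(0)

while True:
    ret, frame = cap.read()  # Read each frame
    if not ret:
        break

    # Detect faces on the grayscale version of the frame
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5, minSize=(30, 30))

    # Draw a blue box around each detected face
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)

    cv2.imshow('Face Detection', frame)

    # Quit when 'q' is pressed
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()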

Output

When the script runs, the webcam feed opens, and the faces detected are highlighted with blue rectangles.

Model Performance

Since this project used a pre-trained classifier rather than a model trained from scratch, there were no accuracy metrics to track. However, the detection worked smoothly for faces in good lighting and at straightforward angles.
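
To put a rough number on "smoothly", here is a small sketch of my own (not part of the project) that times a fixed number of frames and reports an approximate frames-per-second figure:

import time
import cv2

face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
cap = cv2.VideoCapture(0)

start = time.time()
frames = 0
while frames < 200:  # measure over roughly 200 frames
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5, minSize=(30, 30))
    frames += 1

elapsed = time.time() - start
print(f'Average: {frames / elapsed:.1f} FPS over {frames} frames')
cap.release()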

Gratitude

This project was my first venture into OpenCV, and I’m really excited about the possibilities of computer vision. I’m eager to explore more in this space and apply it to other projects!

Stay Tuned!
