
FaceTracking Security: OpenCV, React, ClickSend, SendGrid

Cameron Akhavan · Aug 7, 2019 · 13 min read

In this tutorial, we are going to explore one of the most cutting-edge technologies in machine learning and AI: computer vision! To showcase its capabilities, this step-by-step article will walk you through building your very own desktop security system powered by a facial recognition machine learning algorithm.

With a simple webcam, your program will be able to recognize the faces of people you choose to allow into the system, and it will trigger a text, email, and snapshot-image alert whenever an unrecognized face appears in front of your webcam. We’ll also use Cloudinary and PubNub to build a React Native application that receives a snapshot of the “intruder’s” face and lets you add that person to the system if you wish.

[Demo of the face tracking app in action]

What is Computer Vision?

Computer Vision is a specific field in artificial intelligence that deals with training machine learning models to understand and interpret the visual world. By learning from images and frames from cameras and videos, a computer vision AI can accurately classify the objects it sees and subsequently perform reactionary tasks just like we humans do.

[Image: an overview of computer vision. Source accessed 7/31/19]

But enough talk…It’s time to code!

Python Code for Your FaceTracking Alert System

Before jumping into the code, be sure you sign up for a free PubNub account so we don’t run into any issues later.

To start building the project from scratch, create your project’s directory using your computer’s command line app:

mkdir faceTrackingApp
cd faceTrackingApp

Then create a new Python file called facetracker.py.

Libraries and Dependencies for Computer Vision

OpenCV and face_recognition

First, let’s import some machine learning libraries for our app’s face tracking capabilities. The main libraries we are going to use are OpenCV and face_recognition.

import face_recognition # Machine Learning Library for Face Recognition
import cv2 # OpenCV
import numpy as np # Handling data
import time
import os,sys

OpenCV is one of the most popular libraries for real-time computer vision. It provides useful tools such as webcam control, along with pre-trained face detection models you could use to build a face tracking app from scratch.
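For a sense of what OpenCV gives you out of the box, here is a minimal sketch of face detection using one of its bundled Haar cascade models. This assumes you installed OpenCV via the opencv-python package (which ships the cascades under cv2.data.haarcascades); the image filename is just an example.

import cv2

# Load the frontal-face Haar cascade that ships with opencv-python
cascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

img = cv2.imread('test_face.jpg')             # any image containing a face
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # cascades operate on grayscale
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
print('Faces found:', len(faces))             # each entry is an (x, y, w, h) box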

However, our project will primarily use ageitgey’s face_recognition Python library, as it comes with a face recognition model out of the box, making it extremely quick and easy to use.
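To get a feel for the library before wiring it into the app, here is a minimal sketch that checks whether two image files (the filenames are just examples) show the same person:

import face_recognition

# Compute a 128-dimension encoding for the first face found in each image
known_image = face_recognition.load_image_file('person_a.jpg')    # example file
unknown_image = face_recognition.load_image_file('person_b.jpg')  # example file
known_encoding = face_recognition.face_encodings(known_image)[0]
unknown_encoding = face_recognition.face_encodings(unknown_image)[0]

# compare_faces returns one boolean per known encoding
match = face_recognition.compare_faces([known_encoding], unknown_encoding)[0]
print('Same person?', match)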

PubNub

Next, we’re going to setup PubNub as our Data Stream Network to handle all of the data between our Python script and mobile application.

After you’ve retrieved your free PubNub API keys, install the PubNub Python SDK.

pip install 'pubnub>=4.1.4'

Then, import the library in your Python file,

from pubnub.callbacks import SubscribeCallback
from pubnub.pnconfiguration import PNConfiguration
from pubnub.pubnub import PubNub
from pubnub.enums import PNOperationType, PNStatusCategory

and configure a PubNub instance with your API keys.

# PubNub Config
pnconfig = PNConfiguration()
pnconfig.subscribe_key = "YOUR_SUBSCRIBE_KEY"
pnconfig.publish_key = "YOUR_PUBLISH_KEY"
pnconfig.ssl = False
pubnub = PubNub(pnconfig)
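If you want to sanity-check your keys at this point, a quick synchronous publish should succeed without errors. Here is a minimal sketch; the channel name and payload are arbitrary:

# Optional smoke test: publish a throwaway message synchronously
envelope = pubnub.publish().channel('global').message({'hello': 'world'}).sync()
print('Publish error?', envelope.status.is_error())  # False means your keys work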

Cloudinary

Lastly, we will set up Cloudinary as our content delivery network to store images of intruders’ faces. This works beautifully with PubNub: our Python script uploads the image to Cloudinary, grabs the URL from the response, and PubNub then sends that URL to our client app to render.

[Diagram: the Python script uploads the snapshot to Cloudinary, and PubNub delivers the resulting URL to the client app]

First, sign up for a free Cloudinary account and then install the Cloudinary Python SDK with:

pip install cloudinary

Set the CLOUDINARY_URL environment variable by copying it from your Cloudinary Management Console.

Using zsh/bash/sh:

export CLOUDINARY_URL=cloudinary://API-Key:API-Secret@Cloud-name
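Alternatively, if you prefer not to use an environment variable, the Cloudinary SDK can be configured explicitly in code. Here is a minimal sketch with placeholder credentials:

import cloudinary

# Explicit configuration (placeholder values) instead of the CLOUDINARY_URL env var
cloudinary.config(
    cloud_name='YOUR_CLOUD_NAME',
    api_key='YOUR_API_KEY',
    api_secret='YOUR_API_SECRET'
)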

Import the library in your Python script,

from cloudinary.api import delete_resources_by_tag, resources_by_tag
from cloudinary.uploader import upload
from cloudinary.utils import cloudinary_url

and configure a Cloudinary instance. The SDK picks up CLOUDINARY_URL from your environment automatically; the optional settings.py check below simply lets you override that configuration locally.

# Cloudinary Config
os.chdir(os.path.join(os.path.dirname(sys.argv[0]), '.')) # Run from this script's directory
if os.path.exists('settings.py'):
    exec(open('settings.py').read()) # Optional local override of the Cloudinary config
DEFAULT_TAG = "python_sample_basic" # Tag applied to every snapshot we upload

Machine Learning Face Tracking Algorithm

Before we begin building our face recognition machine learning model, we’ll need to declare some global variables:

# Setup some Global Variables
video_capture = cv2.VideoCapture(0) # Webcam instance
known_face_names = [] # Names of faces
known_face_encodings = [] # Encodings of Faces
count = 0 # Counter for Number of Unknown Users
flag = 0 # Flag for Setting/Unsetting "Intruder Mode"

NOTE: We create a count variable for unknown users because we are going to save each unknown user’s snapshot image to a file path dynamically, appending the count to the file name like an ID tag. To find that snapshot again later, we only need the user’s count so we can rebuild the file path.
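As a quick illustration (the values here are just examples), the ID that travels with an alert is enough to rebuild the exact path of the saved snapshot:

count = 2  # example: the third unknown user seen so far

# Alert() saves the snapshot under a name derived from the count...
saved_path = './Unknown_User' + str(count) + '.jpg'

# ...so when the client later sends back ID = 2, we can rebuild the same path
lookup_path = './Unknown_User' + str(2) + '.jpg'
print(saved_path == lookup_path)  # True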

To seed our face recognizer, we’ll begin with two sample images of faces. Place two images of two different people’s faces in your project directory (the filenames below are just examples).

# Load a sample picture and learn how to recognize it.
sample_face_1 = face_recognition.load_image_file("sample_1.jpeg")
sample_face_1_encoding = face_recognition.face_encodings(sample_face_1)[0]

# Load a second sample picture and learn how to recognize it.
sample_face_2 = face_recognition.load_image_file("sample_2.png")
sample_face_2_encoding = face_recognition.face_encodings(sample_face_2)[0]

# Create arrays of known face encodings and their names
known_face_encodings = [
    sample_face_1_encoding,
    sample_face_2_encoding
]

# Create Names for Sample Face encodings
known_face_names = [
    "sample_1",
    "sample_2"
]

# Initialize some variables
face_locations = []
face_encodings = []
face_names = []
process_this_frame = True

Next, we will declare a while loop that will run continuously for the duration of the app. The loop will be responsible for the main functions of our app:

  • Turning on and displaying the webcam feed
  • Tracking faces that appear in front of the webcam and drawing a red box around the face in real time
  • Displaying a name below a known user’s face and “Unknown” for a face that has not been added to the database
  • Calling a series of Alerts and Functions to handle when an “Unknown” face appears on screen
while True:

    # Grab a single frame of video from the webcam instance we created above
    ret, frame = video_capture.read()

    # Resize frame of video to 1/4 size for faster face recognition processing
    small_frame = cv2.resize(frame, (0, 0), fx=0.25, fy=0.25)

    # Convert the image from BGR color (which OpenCV uses) to RGB color (which face_recognition uses)
    rgb_small_frame = small_frame[:, :, ::-1]

    # Only process every other frame of video to save time
    if process_this_frame:
        # Find all the faces and face encodings in the current frame of video
        face_locations = face_recognition.face_locations(rgb_small_frame)
        face_encodings = face_recognition.face_encodings(rgb_small_frame, face_locations)

        face_names = []
        for face_encoding in face_encodings:
            # See if the face is a match for the known face(s)
            matches = face_recognition.compare_faces(known_face_encodings, face_encoding)
            name = "Unknown"

            # # If a match was found in known_face_encodings, just use the first one.
            # if True in matches:
            #     first_match_index = matches.index(True)
            #     name = known_face_names[first_match_index]

            # Or instead, use the known face with the smallest distance to the new face
            face_distances = face_recognition.face_distance(known_face_encodings, face_encoding)
            best_match_index = np.argmin(face_distances)
            if matches[best_match_index]:
                name = known_face_names[best_match_index]

            face_names.append(name)

            #---------------------See next section for this code block's explanation---------------------#

            ## Set Unknown User Flag and Send Alerts
            #global flag
            #if(name=='Unknown' and flag==0):
            #    flag = 1
            #    Alert()
            #
            #--------------------------------------------------------------------------------------------#

    process_this_frame = not process_this_frame

    # Display the results
    for (top, right, bottom, left), name in zip(face_locations, face_names):
        # Scale back up face locations since the frame we detected in was scaled to 1/4 size
        top *= 4
        right *= 4
        bottom *= 4
        left *= 4

        # Draw a box around the face
        cv2.rectangle(frame, (left, top), (right, bottom), (0, 0, 255), 2)

        # Draw a label with a name below the face
        cv2.rectangle(frame, (left, bottom - 35), (right, bottom), (0, 0, 255), cv2.FILLED)
        font = cv2.FONT_HERSHEY_DUPLEX
        cv2.putText(frame, name, (left + 6, bottom - 6), font, 1.0, (255, 255, 255), 1)

    # Display the resulting image
    cv2.imshow('Video', frame)

    # Hit 'q' on the keyboard to quit!
    if cv2.waitKey(10) & 0xFF == ord('q'):
        break

# Release handle to the webcam
video_capture.release()
cv2.destroyAllWindows()

Sending Alerts

Now let’s handle the case where an unregistered face appears in front of our webcam. We want our program to trigger the alert system the moment it sees an “Unknown” face. Luckily, all we need to do is add a few lines of code to our main while loop, at the end of the for face_encoding in face_encodings: loop.

# Set Unknown User Flag and Send Alerts
global flag
if(name=='Unknown' and flag==0):
    flag = 1 # Stop repeated calls of Alerts until after the Unknown User is dealt with
    Alert() # Trigger Alert System

When the alert is triggered, we can then define a function to take a snapshot of the unknown user’s face, call a function to upload the snapshot to Cloudinary, and finally call our Text/Email alert function.

def Alert():
    global count
    path = '.' # Directory where the snapshot will be stored
    name = 'Unknown_User' + str(count) # Append User ID to File Name

    # Wait for 3 seconds
    print('Taking picture in 3')
    time.sleep(1)
    print('Taking picture in 2')
    time.sleep(1)
    print('Taking picture in 1')
    time.sleep(1)

    # Take Picture (reuse the webcam instance opened at the top of the script)
    ret, frame = video_capture.read()

    # Grayscale Image to save memory space
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Save Image in File Path
    status = cv2.imwrite('%s/%s.jpg' % (path, name), gray)
    print('Unknown User Saved to Database', status)

    # Upload Snapshot to Cloudinary
    upload_files('%s/%s.jpg' % (path, name))

    # Send Out Email and Text Alerts
    sendAlerts()

NOTE: We grayscale the image because the face recognizer doesn’t need color to determine facial features. We also store the snapshot image locally so we can add the face to the recognizer if the client wants to add the user later.

When defining our upload_files() function, we pass in the snapshot’s file path so Cloudinary knows which file to upload. We then get back a response URL telling us where the image lives in the cloud, and we send this URL, along with a user ID (the count of the unknown user), over PubNub to our client application. The client application can then render the snapshot from the Cloudinary URL.

def upload_files(msg):
    global count # Make global changes to count
    response = upload(msg, tags=DEFAULT_TAG) # Upload Image to Cloudinary
    url, options = cloudinary_url( 
        response['public_id'],
        format=response['format'],
        width=200,
        height=150,
        crop="fill"
    )
    dictionary = {"url": url, "ID": count}
    pubnub.publish().channel('global').message(dictionary).pn_async(publish_callback)
    count+=1 # Increment Unknown User Count

In order to publish with PubNub, we need to define a publish callback.

def publish_callback(result, status):
    pass
    # Handle PNPublishResult and PNStatus
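That no-op callback is all this tutorial needs, but if you want visibility into failed publishes, a slightly more defensive version (still just a sketch) could log the error status instead:

def publish_callback(result, status):
    # Log failures instead of silently ignoring them
    if status.is_error():
        print('Publish failed, status category:', status.category)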

 

To setup your Text and Email alerts, you’ll need to sign up for a free ClickSend account as well as a free SendGrid account to get your API keys.

Now you get to see the power and beauty of PubNub Functions with our partnered Blocks. Go ahead and visit both our ClickSend Block and our SendGrid Block; through those links, PubNub will automatically generate a customizable Function for you.

The serverless, open-source code will completely handle the APIs for you. All you need to do is put in your API keys and you’re good to go!

Once you’ve set up your Functions, you can define a sendAlerts() function to publish a message, implementing your Text and Email alerts:

def sendAlerts():
    # Text alert via the ClickSend Block
    dictionary = {
        "to": "RECEIVING PHONE NUMBER",
        "body": "There is an unregistered user at your desk!"
    }
    pubnub.publish().channel('clicksend-text').message(dictionary).pn_async(publish_callback)

    # Email alert via the SendGrid Block
    dictionary = {
        "to": "EMAIL RECEIVER",
        "toname": "EMAIL SENDER",
        "subject": "INTRUDER ALERT",
        "text": "THERE IS AN UNREGISTERED USER AT YOUR DESK"
    }
    pubnub.publish().channel('email-sendgrid-channel').message(dictionary).pn_async(publish_callback)

NOTE: To use a PubNub Block properly, you need to publish over the same channel specified in the Block (you can check this in the Block’s Functions dashboard) and format the message payload according to the Block’s documentation.

Adding Users to Our Facetracker

When an unregistered face is detected by our webcam, our Python script sends an email/text alert as well as a snapshot image to our client application.

We now want the ability to add a user’s face to our app’s “known_faces” database so that the user no longer triggers our alert system. To do this, the client application must publish a message over PubNub.

To receive this message in our Python application, we must subscribe to the channel the client publishes on and create a SubscribeCallback to handle the incoming message.

class MySubscribeCallback(SubscribeCallback):
    def status(self, pubnub, status):
        # The status object returned is always related to subscribe but could contain
        # information about subscribe, heartbeat, or errors
        # use the operationType to switch on different options
        if status.operation == PNOperationType.PNSubscribeOperation \
                or status.operation == PNOperationType.PNUnsubscribeOperation:
            if status.category == PNStatusCategory.PNConnectedCategory:
                pass
                # This is expected for a subscribe, this means there is no error or issue whatsoever
            elif status.category == PNStatusCategory.PNReconnectedCategory:
                pass
                # This usually occurs if subscribe temporarily fails but reconnects. This means
                # there was an error but there is no longer any issue
            elif status.category == PNStatusCategory.PNDisconnectedCategory:
                pass
                # This is the expected category for an unsubscribe. This means here
                # was no error in unsubscribing from everything
            elif status.category == PNStatusCategory.PNUnexpectedDisconnectCategory:
                pass
                # This is usually an issue with the internet connection, this is an error, handle
                # appropriately retry will be called automatically
            elif status.category == PNStatusCategory.PNAccessDeniedCategory:
                pass
                # This means that Access Manager does not allow this client to subscribe to this
                # channel and channel group configuration. This is another explicit error
            else:
                pass
                # This is usually an issue with the internet connection, this is an error, handle appropriately
                # retry will be called automatically
        elif status.operation == PNOperationType.PNHeartbeatOperation:
            # Heartbeat operations can in fact have errors, so it is important to check first for an error.
            # For more information on how to configure heartbeat notifications through the status
            if status.is_error():
                pass
                # There was an error with the heartbeat operation, handle here
            else:
                pass
                # Heartbeat operation was successful
        else:
            pass
            # Encountered unknown status type
 
    def presence(self, pubnub, presence):
        pass  # handle incoming presence data
    def message(self, pubnub, message):
        addUser(message.message["ID"], message.message["name"])
 
 
pubnub.add_listener(MySubscribeCallback())
pubnub.subscribe().channels('ch1').execute()

NOTE: Above we assume the client is publishing the ID of the unknown user (for image file path) as well as the name of the user (to display below the user’s face).
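For reference, the payload the Python side expects on the 'ch1' channel looks like this (the values are just examples):

# Example message published by the client app on channel 'ch1'
example_message = {"ID": 2, "name": "Alice"}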

With the parameters in hand, we can add the new user to our database.

def addUser(ID, name):
    global known_face_encodings, known_face_names, flag
    path = './Unknown_User' + str(ID) # Append User ID to File Path
    # Load User's picture and learn how to recognize it.
    user_image = face_recognition.load_image_file('%s.jpg' % path) # Load Image
    user_face_encoding = face_recognition.face_encodings(user_image)[0] # Encode Image
    known_face_encodings.append(user_face_encoding) # Add Encoded Image to 'Known Faces' Array
    known_face_names.append(name) # Append New User's Name to Database
    flag = 0 # Reset Unknown User Flag

React Native Code for Our Client Application

Setting Up Our Real-time React Native Environment

Install Xcode so we can build and simulate our app for iOS, and Android Studio for Android.

Then install Node.js and watchman using Homebrew:

brew install node
brew install watchman

Install the React Native CLI with NPM:

npm install -g react-native-cli

To create a React Native App template, enter the React Native CLI command in your project’s directory:

react-native init client
cd client

Since we’re going to be using PubNub in our client app to send and receive messages, we’ll need to install the PubNub React SDK,

npm install --save pubnub pubnub-react

and then link the library like so:

react-native link pubnub-react

Setting Up Real-time Pub/Sub Messaging

To start sending and receiving messages in real time in our app, first import the PubNub React SDK.

import PubNubReact from 'pubnub-react';

Then import the TouchableOpacity and Image components from React Native,

import {
  StyleSheet,
  View,
  Text,
  TextInput,
  TouchableOpacity,
  Image,
} from 'react-native';

Now we add a constructor at the top of our App Component. The constructor will be responsible for setting up a PubNub instance with our Publish/Subscribe keys as well as initialize the following state variables:

  • image – Snapshot image from an unknown-user alert (initialized with a placeholder image until an alert arrives).
  • message – Incoming alert message from the face tracking app.
  • text – Client user’s input for the name of a user being added.
  • count – Keeps track of which unknown user the current alert refers to.
export default class App extends React.Component {

  constructor(props) {
    super(props)

    this.pubnub = new PubNubReact({
      publishKey: "YOUR PUBLISH KEY",
      subscribeKey: "YOUR SUBSCRIBE KEY"
    })

    //Base State
    this.state = {
      image: require('./assets/PLACEHOLDER_IMAGE.jpg'),
      message: '',
      text: '',
      count: 0,
    }

    this.pubnub.init(this);
  }

/// .......VVV REST OF THE CODE VVV.......///

When our client app first fires up, we declare an asynchronous function that will subscribe to our face tracking alert channel and handle message events. In this case, we receive the ID (count of unknown user) as well as the snapshot image URL (from Cloudinary) of the unknown user.

async componentDidMount() {
  this.setUpApp()    
}

async setUpApp(){
  this.pubnub.getMessage("global", msg => {
    this.setState({count: msg.message.ID})
    this.setState({image: msg.message.url})
  })

  this.pubnub.subscribe({
    channels: ["global"],
    withPresence: false
  });
}

Once that image is received by the mobile app, the client user should then be able to add the unknown user to the face tracker’s “known_faces” database. We can define a function to set the state of the client user’s input for the unknown user’s name.

handleText = (name) => {
  this.setState({ text: name })
}

We can also write a function to publish the added user’s name along with the added user’s ID.

publishName = (text) => {
  this.pubnub.publish({
    message: {
      ID: this.state.count,
      name: text,
    },
    channel: "ch1"
  });
}

Creating and Rendering App Components

At the top of our screen we’ll render the snapshot image from an incoming “Unknown User” alert. The source of this image is the URI we grabbed from the alert message and saved to state, falling back to the local placeholder before any alert arrives.

<Image
  source={
    typeof this.state.image === 'string'
      ? { uri: this.state.image }  // Cloudinary URL from an alert
      : this.state.image           // Local placeholder from require()
  }
  style={{width: 250, height: 250}}
/>

Below that, we can display a suitable caption.

<Text>{'Do You Know This Person?'}</Text>

We then create a TextInput component to capture the name of the user to be added to the face tracker, should the client decide to add them.

<TextInput style = {styles.input}
         underlineColorAndroid = "transparent"
         placeholder = "Name"
         placeholderTextColor = "#9a73ef"
         autoCapitalize = "none"
         onChangeText = {this.handleText}/>

Lastly, we create a submit button with TouchableOpacity to publish the added user’s name for our Face Tracker to add to the system:

<TouchableOpacity
    style = {styles.submitButton}
    onPress = {
      () => this.publishName(this.state.text)
    }>
      <Text>SUBMIT</Text>
</TouchableOpacity>

Wrap all those components in a <View> </View> and you’re good to go!

Running the Program

First, start up the React Native client application by opening a terminal in the client app’s directory and running the command for your platform.

react-native run-ios

or

react-native run-android

Then, in another terminal window, run the Python face tracker.

python facetracker.py

If You’re Still Hungry For More…

Feel free to send us any of your questions, concerns, or comments at devrel@pubnub.com.

If you’re still hungry for more PubNub machine learning content, check out the other machine learning articles on the PubNub blog.
