
Artificial Intelligence in EHS – PPE Detection using TensorFlow Object Detection – Part 1

 

Bring Artificial Intelligence into the Environment, Health and Safety (EHS) field to make workplaces truly safer.

In this blog we are going to use the TensorFlow Object Detection API and train it on our own custom dataset.

 

Problem Statement:

Post COVID, companies want to ensure that all employees wear PPE on company premises (in this scenario, a face mask) and that the PPE is worn correctly before an employee performs any job (such as work at height, welding, etc.).

 

Solution Approach:

Using AI – computer vision and deep learning – we can create a model with our own custom dataset that recognizes missing PPE and sends an alert notification to the HSE manager via the SAP Inbox.

 

For this, we are going to train our model using TensorFlow.

 

Software Requirements:

Check here for the correct versions of Python, Anaconda, cuDNN, and CUDA.

You can check how to install CUDA and cuDNN here.

Process to build the object detection model:

  1. Installing Anaconda, CUDA and cuDNN
  2. Setting up the environment
  3. Image labelling and gathering
  4. Creating training data
  5. Creating a label map and configuring model files
  6. Training
  7. Exporting the inference graph
  8. Testing our own data – Face Mask (N95 in this scenario)

 

  1. Installing Anaconda, CUDA and cuDNN

 

Follow this YouTube video, which explains the process for installing Anaconda, CUDA, and cuDNN. In this scenario we are going to use TensorFlow-GPU v1.15; you can check your version in this table and install the corresponding components. We are going with CUDA 9.0 and cuDNN 7.0 as they are more stable.

 

  2. Setting up the environment

 

  1. Create a folder in the C drive named "tensorflow1" and open the Anaconda Prompt to create a virtual environment named tensorflow1: type conda create --name tensorflow1, then activate it with conda activate tensorflow1. Navigate to the C drive and go into the tensorflow1 folder.
  2. Clone the TensorFlow models repository by typing git clone https://github.com/tensorflow/models; this downloads the models directory inside your tensorflow1 folder.
  3. Download a pre-trained model from here; we are using ssd_mobilenet_v1_coco for this scenario, but you can download any of them. Extract the file inside the models/research/object_detection folder.
  4. Download the last file here, extract it, and paste its contents inside the models/research/object_detection folder. (This gives ready-made .py files – xml_to_csv, generate_tfrecord, Object_detection_webcam – which we need in the later part of this section.)
  5. Install TensorFlow-GPU and the other libraries with the commands below, then verify the GPU is visible with the short check that follows them.
         (tensorflow1) C:\> conda install tensorflow-gpu==1.15

         (tensorflow1) C:\> conda install -c anaconda protobuf

         (tensorflow1) C:\> pip install pillow

         (tensorflow1) C:\> pip install lxml

         (tensorflow1) C:\> pip install Cython

         (tensorflow1) C:\> pip install contextlib2

         (tensorflow1) C:\> pip install jupyter

         (tensorflow1) C:\> pip install matplotlib

         (tensorflow1) C:\> pip install pandas

         (tensorflow1) C:\> pip install opencv-python
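Once these installs finish, a quick sanity check confirms that TensorFlow can actually see the GPU. A minimal sketch, assuming TensorFlow-GPU 1.x is now installed in the active tensorflow1 environment:

import tensorflow as tf  # TensorFlow 1.x

# Print the installed version and whether a CUDA-enabled GPU is visible
print(tf.__version__)
print(tf.test.is_gpu_available())

If the last line prints False, revisit the CUDA/cuDNN installation before continuing.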

Set the PYTHONPATH with the command below:

(tensorflow1) C:\> set PYTHONPATH=C:\tensorflow1\models;C:\tensorflow1\models\research;C:\tensorflow1\models\research\slim

 

(Note: Every time the "tensorflow1" virtual environment is exited, the PYTHONPATH variable is reset and needs to be set again.)

Now compile the protobufs and run setup. Run the command below in the research folder:

C:\> cd C:\tensorflow1\models\research
protoc --python_out=. .\object_detection\protos\anchor_generator.proto .\object_detection\protos\argmax_matcher.proto .\object_detection\protos\bipartite_matcher.proto .\object_detection\protos\box_coder.proto .\object_detection\protos\box_predictor.proto .\object_detection\protos\eval.proto .\object_detection\protos\faster_rcnn.proto .\object_detection\protos\faster_rcnn_box_coder.proto .\object_detection\protos\grid_anchor_generator.proto .\object_detection\protos\hyperparams.proto .\object_detection\protos\image_resizer.proto .\object_detection\protos\input_reader.proto .\object_detection\protos\losses.proto .\object_detection\protos\matcher.proto .\object_detection\protos\mean_stddev_box_coder.proto .\object_detection\protos\model.proto .\object_detection\protos\optimizer.proto .\object_detection\protos\pipeline.proto .\object_detection\protos\post_processing.proto .\object_detection\protos\preprocessor.proto .\object_detection\protos\region_similarity_calculator.proto .\object_detection\protos\square_box_coder.proto .\object_detection\protos\ssd.proto .\object_detection\protos\ssd_anchor_generator.proto .\object_detection\protos\string_int_label_map.proto .\object_detection\protos\train.proto .\object_detection\protos\keypoint_box_coder.proto .\object_detection\protos\multiscale_anchor_generator.proto .\object_detection\protos\graph_rewriter.proto .\object_detection\protos\calibration.proto .\object_detection\protos\flexible_grid_anchor_generator.proto

 

Then run the command below:

 

protoc --python_out=. .\object_detection\protos\input_reader.proto

And

protoc object_detection/protos/*.proto --python_out=.
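Finally, run the Object Detection API setup scripts from the research folder. These are the standard build/install commands for the TensorFlow models repository (assuming setup.py is present in models/research, as in the tutorial layout):

(tensorflow1) C:\tensorflow1\models\research> python setup.py build
(tensorflow1) C:\tensorflow1\models\research> python setup.py install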

 

 

  3. Image labelling and gathering

Download the LabelImg tool from here or with the command pip install labelImg, open it from the Start window, select your directory, and label the images as "PPE-Mask Detected".

For this scenario we have downloaded nearly 350 images and labelled each image.


 

Sample Images

 


LabelImg Tool

Once you have labelled and saved all your images, LabelImg will have created an XML file for each image.

Split your images into two sets: one for training and one for testing.

Create two folders inside the images folder and place 20% of your labelled images inside the test folder and the remaining 80% inside the train folder; a small script can do the split, as sketched below.
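A minimal sketch of such a split, assuming the labelled .jpg/.xml pairs sit directly in C:\tensorflow1\models\research\object_detection\images (paths and file extensions are only illustrative):

import os
import random
import shutil

images_dir = r"C:\tensorflow1\models\research\object_detection\images"
train_dir = os.path.join(images_dir, "train")
test_dir = os.path.join(images_dir, "test")
os.makedirs(train_dir, exist_ok=True)
os.makedirs(test_dir, exist_ok=True)

# Shuffle the images for a random 80/20 split
jpgs = [f for f in os.listdir(images_dir) if f.lower().endswith(".jpg")]
random.shuffle(jpgs)
split = int(len(jpgs) * 0.8)

for i, jpg in enumerate(jpgs):
    target = train_dir if i < split else test_dir
    xml = os.path.splitext(jpg)[0] + ".xml"
    # Move each image together with its LabelImg annotation
    shutil.move(os.path.join(images_dir, jpg), os.path.join(target, jpg))
    shutil.move(os.path.join(images_dir, xml), os.path.join(target, xml))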

 

  4. Creating training data

Convert the labelled XML files to CSV format with the command below:

C:\tensorflow1\models\research\object_detection> python xml_to_csv.py

Open the generate_tfrecord.py file and replace the label map at line 31 with your own label – in our scenario it will be 'PPE-Mask Detected'. The edit is sketched below.
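Assuming your generate_tfrecord.py follows the commonly used tutorial version, the class-to-ID mapping around that line ends up looking roughly like this (function name and line number may differ in your copy):

def class_text_to_int(row_label):
    # Single custom class: everything labelled in LabelImg maps to ID 1
    if row_label == 'PPE-Mask Detected':
        return 1
    else:
        return None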

 

After that, run the two commands below in the object_detection folder:

 

python generate_tfrecord.py --csv_input=images\train_labels.csv --image_dir=images\train --output_path=train.record
python generate_tfrecord.py --csv_input=images\test_labels.csv --image_dir=images\test --output_path=test.record

 

  5. Creating a label map and configuring model files

Create a new labelmap.pbtxt file and save it inside the training folder, e.g.:

 

item {
  id: 1
  name: 'PPE-Mask Detected'
}

 

 

Now we need to make the changes below in the config file of our SSD model – ssd_mobilenet_v1_coco.

Open the samples/configs folder inside object_detection, open the ssd_mobilenet_v1_coco.config file, and make the following changes:

num_classes: 1

fine_tune_checkpoint (at line 106): "C:/tensorflow1/models/research/object_detection/ssd_mobilenet_v1_coco_2018_01_28/model.ckpt"

In train_input_reader:
input_path: "C:/tensorflow1/models/research/object_detection/train.record"
label_map_path: "C:/tensorflow1/models/research/object_detection/training/labelmap.pbtxt"

In eval_config, set num_examples to the number of test images – for us it is 40.

In eval_input_reader:
input_path: "C:/tensorflow1/models/research/object_detection/test.record"
label_map_path: "C:/tensorflow1/models/research/object_detection/training/labelmap.pbtxt"
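For orientation, the edited parts of ssd_mobilenet_v1_coco.config end up looking roughly like this (an abbreviated sketch of the standard Object Detection API config format; all other settings in the file stay unchanged):

model {
  ssd {
    num_classes: 1
    ...
  }
}
train_config {
  fine_tune_checkpoint: "C:/tensorflow1/models/research/object_detection/ssd_mobilenet_v1_coco_2018_01_28/model.ckpt"
  ...
}
train_input_reader {
  tf_record_input_reader {
    input_path: "C:/tensorflow1/models/research/object_detection/train.record"
  }
  label_map_path: "C:/tensorflow1/models/research/object_detection/training/labelmap.pbtxt"
}
eval_config {
  num_examples: 40
  ...
}
eval_input_reader {
  tf_record_input_reader {
    input_path: "C:/tensorflow1/models/research/object_detection/test.record"
  }
  label_map_path: "C:/tensorflow1/models/research/object_detection/training/labelmap.pbtxt"
}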
  6. Training

In the object_detection folder, run the command below:

python train.py --logtostderr --train_dir=training/ --pipeline_config_path=training/ssd_mobilenet_v1_coco.config

 

This will start using your system GPU to train the model – let it run for around 5-6 hours, until the reported loss is consistently below 0.05.
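You can also monitor the training progress with TensorBoard, which ships with TensorFlow. Open a second Anaconda Prompt, activate the tensorflow1 environment, and run:

(tensorflow1) C:\tensorflow1\models\research\object_detection> tensorboard --logdir=training

Then open the URL it prints (typically http://localhost:6006) in a browser to watch the loss curves.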

 

  7. Exporting the inference graph

After checking that the loss is below 0.05, terminate the training with Ctrl+C and run the command below. Replace model.ckpt-96740 with the highest-numbered checkpoint in your training folder (96740 happens to be the last step of our run):

 

python export_inference_graph.py --input_type image_tensor --pipeline_config_path training/ssd_mobilenet_v1_coco.config --trained_checkpoint_prefix training/model.ckpt-96740 --output_directory inference_graph

 

  8. Testing your trained model – Face Mask (N95 in this scenario)

Replace the number of classes in Object_detection_webcam.py with 1 (see the sketch below) and run the file. To run any of the scripts, type "idle" in the Anaconda Prompt (with the "tensorflow1" virtual environment activated) and press ENTER. This will open IDLE, and from there you can open any of the scripts and run them.
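Assuming your Object_detection_webcam.py follows the tutorial layout, the only edit needed is the class count near the top of the script (the variable name may differ in your copy):

# Near the top of Object_detection_webcam.py:
# only one custom class ('PPE-Mask Detected') in this scenario
NUM_CLASSES = 1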

 

 

 


Sample Image from Webcam

 


Sample Image from Webcam

 

 

In the next blog we will see how to trigger this detection alert notification inside SAP.

 
