Real-Time Object Detection using TensorFlow API



Computer vision is one of the most important and demanding fields in artificial intelligence. It is an interdisciplinary scientific field that deals with how computers can gain a high-level understanding of digital images or videos. Computer vision encompasses several tasks, such as image recognition, object detection, image generation, and image super-resolution. Object detection is arguably the most impactful of these, owing to the sheer number of practical use cases.


Object detection refers to the capability of computer and software systems to locate objects in an image or scene and identify each one. It has been widely used for face detection, vehicle detection, pedestrian counting, web image analysis, security systems, and driverless cars. It can be applied in many other fields of practice as well; like every other computer technology, a wide range of creative uses of object detection will come from the efforts of programmers and software developers.


To perform object detection using the TensorFlow Object Detection API, all you need to do is:

  1. Install Python and the other required packages on your system

  2. Install the TensorFlow Object Detection API

  3. Import all the libraries in the program

  4. Load a detection model

  5. Test the model with an image

  6. Experience real-time object detection




Step 1: Install Required Libraries

Install the following dependencies via pip:

  a. TensorFlow

pip install tensorflow

b. OpenCV

pip install opencv-python

c. pycocotools

pip install pycocotools
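
A quick, optional sanity check that the packages installed correctly (the version attributes shown are standard for these libraries):

import tensorflow as tf
import cv2
import pycocotools  # raises ImportError if the build failed

print(tf.__version__)   # the TF2 Object Detection API expects TensorFlow 2.x
print(cv2.__version__)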

Step 2: Install TensorFlow Object Detection API


Clone the TensorFlow models repository from github.com/tensorflow/models, or download the zip from that page and extract it in the project directory.

To set up the environment for object detection, compile the protobuf files and install the TensorFlow Object Detection API package:

from IPython import get_ipython

get_ipython().run_cell_magic('bash', '', 'cd models/research/\nprotoc object_detection/protos/*.proto --python_out=.')

get_ipython().run_cell_magic('bash', '', 'cd models/research\ncp object_detection/packages/tf2/setup.py .\npip install .')
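
Optionally, verify the installation by running the model builder test script that ships with the repository (the script path follows the official TF2 installation guide):

get_ipython().run_cell_magic('bash', '', 'cd models/research\npython object_detection/builders/model_builder_tf2_test.py')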

Step 3: Import Required Python Libraries


Import the required Python libraries and packages:

import numpy as np
import os
import six.moves.urllib as urllib
import sys
import tarfile
import tensorflow as tf
import zipfile
import pathlib

from collections import defaultdict
from io import StringIO
from matplotlib import pyplot as plt
from PIL import Image
from IPython.display import display

# tf_slim is a dependency of the object detection library
get_ipython().system('pip install --user tf_slim')

Step 4: Import Object Detection Libraries


Import the required modules from the object_detection library:

from object_detection.utils import ops as utils_ops
from object_detection.utils import label_map_util
from object_detection.utils import visualization_utils as vis_util

Patches:

# patch tf1 into `utils.ops`
utils_ops.tf = tf.compat.v1

# Patch the location of gfile
tf.gfile = tf.io.gfile

Step 5: Download Pre-Trained Model


To begin with, we need to download the latest pre-trained network for the model we wish to use. This can be done by simply clicking on the name of the desired model in the table found in the TensorFlow 2 Detection Model Zoo. Clicking on the name of your model should initiate a download for a *.tar.gz file.


Once the *.tar.gz file has been downloaded, open it using a decompression program of your choice (e.g. 7zip, WinZIP, etc.) and extract its contents inside the folder training_demo/pre-trained-models. Since we downloaded the SSD ResNet50 V1 FPN 640x640 model, our training_demo directory should now look as follows:

training_demo/
├─ ...
├─ pre-trained-models/
│  └─ ssd_resnet50_v1_fpn_640x640_coco17_tpu-8/
│     ├─ checkpoint/
│     ├─ saved_model/
│     └─ pipeline.config
└─ ...
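
If you prefer to script this step, here is a minimal sketch using Python's standard library. The URL follows the TF2 model zoo naming pattern, but the exact link (including the date stamp) is an assumption, so verify it against the Model Zoo page:

import tarfile
import urllib.request

MODEL_NAME = 'ssd_resnet50_v1_fpn_640x640_coco17_tpu-8'
# Assumed model zoo URL pattern; confirm on the TF2 Detection Model Zoo page.
MODEL_URL = ('http://download.tensorflow.org/models/object_detection/tf2/'
             '20200711/' + MODEL_NAME + '.tar.gz')

archive_path, _ = urllib.request.urlretrieve(MODEL_URL, MODEL_NAME + '.tar.gz')
with tarfile.open(archive_path) as tar:
    tar.extractall('./pre-trained-models/')  # yields pre-trained-models/<MODEL_NAME>/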


Define a function that loads the model into the environment, given the name of the saved model:

def load_model(model_name):
    # Build the path to the saved_model directory for the requested model.
    model_dir = pathlib.Path('./pre-trained-models') / model_name / "saved_model"
    model = tf.saved_model.load(str(model_dir))
    model = model.signatures['serving_default']

    return model
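
Once a model has been downloaded, you can load it and sanity-check what it expects and returns; inputs and output_dtypes are attributes of the concrete function returned by signatures:

detection_model = load_model('ssd_resnet50_v1_fpn_640x640_coco17_tpu-8')
print(detection_model.inputs)          # the image tensor the model expects
print(detection_model.output_dtypes)   # dtypes of detection_boxes, detection_classes, ...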

Step 6: Load the Label Map


Label maps map indices to category names, so that when our convolutional network predicts 5, we know this corresponds to airplane. Here we use an internal utility function, but anything that returns a dictionary mapping integers to appropriate string labels would be fine.


# List of the strings that is used to add correct label for each box.
PATH_TO_LABELS = './models/research/object_detection/data/mscoco_label_map.pbtxt'
category_index = label_map_util.create_category_index_from_labelmap(PATH_TO_LABELS, use_display_name=True)
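
For example, with the COCO label map loaded above, index 5 resolves to airplane:

print(category_index[5])
# {'id': 5, 'name': 'airplane'}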

Step 7: Define the Inference Function


Add a wrapper function that calls the model and cleans up the outputs:

def run_inference_for_single_image(model, image):
    image = np.asarray(image)
    # The input needs to be a tensor, convert it using `tf.convert_to_tensor`.
    input_tensor = tf.convert_to_tensor(image)
    # The model expects a batch of images, so add an axis with `tf.newaxis`.
    input_tensor = input_tensor[tf.newaxis, ...]

    # Run inference
    output_dict = model(input_tensor)

    # All outputs are batch tensors.
    # Convert to numpy arrays, and take index [0] to remove the batch dimension.
    # We're only interested in the first num_detections.
    num_detections = int(output_dict.pop('num_detections'))
    output_dict = {key: value[0, :num_detections].numpy()
                   for key, value in output_dict.items()}
    output_dict['num_detections'] = num_detections

    # detection_classes should be ints.
    output_dict['detection_classes'] = output_dict['detection_classes'].astype(np.int64)

    # Handle models with masks:
    if 'detection_masks' in output_dict:
        # Reframe the bbox masks to the image size.
        detection_masks_reframed = utils_ops.reframe_box_masks_to_image_masks(
            output_dict['detection_masks'], output_dict['detection_boxes'],
            image.shape[0], image.shape[1])
        detection_masks_reframed = tf.cast(detection_masks_reframed > 0.5,
                                           tf.uint8)
        output_dict['detection_masks_reframed'] = detection_masks_reframed.numpy()

    return output_dict

Step 8: Test the Model with Sample Images


For the sake of simplicity, we will test on two images:

# If you want to test the code with your images, just add path to the images to the TEST_IMAGE_PATHS.
PATH_TO_TEST_IMAGES_DIR = pathlib.Path('./models/research/object_detection/test_images')
TEST_IMAGE_PATHS = sorted(list(PATH_TO_TEST_IMAGES_DIR.glob("*.jpg")))
TEST_IMAGE_PATHS

Load the model using the load_model function:

# Load the model into a variable.
detection_model = load_model('ssd_resnet50_v1_fpn_640x640_coco17_tpu-8')

Run it on each test image and show the results:

def show_inference(model, image_path):
  # the array based representation of the image will be used later in order to prepare the
  # result image with boxes and labels on it.
  image_np = np.array(Image.open(image_path))
  # Actual detection.
  output_dict = run_inference_for_single_image(model, image_np)
  # Visualization of the results of a detection.
  vis_util.visualize_boxes_and_labels_on_image_array(
      image_np,
      output_dict['detection_boxes'],
      output_dict['detection_classes'],
      output_dict['detection_scores'],
      category_index,
      instance_masks=output_dict.get('detection_masks_reframed', None),
      use_normalized_coordinates=True,
      line_thickness=8)

  display(Image.fromarray(image_np))

Test the program on the sample images:

for image_path in TEST_IMAGE_PATHS:
  show_inference(detection_model, image_path)

Result: each test image is displayed with bounding boxes, class labels, and confidence scores drawn on the detected objects.
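
To inspect the raw predictions rather than the rendered image, you can also print the classes and scores above a confidence threshold (the 0.5 cutoff here is just for illustration):

# Print confident detections for the first test image.
output_dict = run_inference_for_single_image(detection_model, np.array(Image.open(TEST_IMAGE_PATHS[0])))
for cls, score in zip(output_dict['detection_classes'], output_dict['detection_scores']):
    if score >= 0.5:
        print(category_index[cls]['name'], round(float(score), 2))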



Step 9: Implement Real-Time Object Detection Using OpenCV


The show_inference function above displays images in a notebook, so for the webcam loop we run inference and draw the results on each frame directly:

import cv2

cap = cv2.VideoCapture(0)

while True:
    ret, image_np = cap.read()
    if not ret:
        break
    # Run detection and draw boxes and labels directly on the frame.
    output_dict = run_inference_for_single_image(detection_model, image_np)
    vis_util.visualize_boxes_and_labels_on_image_array(
        image_np,
        output_dict['detection_boxes'],
        output_dict['detection_classes'],
        output_dict['detection_scores'],
        category_index,
        use_normalized_coordinates=True,
        line_thickness=8)
    cv2.imshow('Object Detection', image_np)
    if cv2.waitKey(25) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
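
Real-time performance depends heavily on the model and hardware; on a CPU, a larger model such as SSD ResNet50 may run well below camera frame rate. As a rough gauge, you can time a single inference call (a sketch reusing the last captured frame):

import time

start = time.time()
run_inference_for_single_image(detection_model, image_np)
print('Inference took %.2f seconds for one frame' % (time.time() - start))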


