People Counting Embedded System

Date: June 19, 2022

Author: Sungjoo Lee, Joseph Shin

Github: https://github.com/SungJooo/DLIP_FINAL

Demo Video: https://youtu.be/8opxq6PDLP0

Introduction

This tutorial is about how to create an independent embedded system that can:

  1. Film people entering and exiting the room from the top of a doorway

  2. Track the people using YOLOv5 detection and user-written code

  3. Record how many people entered or exited the room, all on a Raspberry Pi

Requirement

Hardware

  • Raspberry Pi 4 Model B 4GB & microSD card 32GB

  • Logitech WebCam C920

  • Coral TPU USB Accelerator

  • Bread-board & LED

Software Installation

Software to test the system on a computer

1. Install libs

Open Anaconda Prompt and enter the following commands:

  • update conda

  • create py39 virtual environment

  • activate py39 virtual environment

  • install opencv

  • install pytorch

  • install numpy, matplotlib
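The steps above correspond roughly to the following Anaconda Prompt commands (a sketch; the environment name py39 matches the text, but the exact package channels and versions are assumptions):

```shell
conda update -n base conda
conda create -n py39 python=3.9
conda activate py39
conda install -c conda-forge opencv
conda install pytorch torchvision -c pytorch
conda install numpy matplotlib
```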

2. Install YOLOv5

Access the YOLOv5 GitHub page and download the repository.

(figure: downloading the YOLOv5 repository)

Unzip the folder, renaming it from yolov5-master to yolov5, then move it to your desired location.

Go into the folder and copy its directory path.

(figure: copying the yolov5 directory path)

Open Anaconda Prompt in user mode, then enter the following commands.

(figure: Anaconda Prompt install commands)
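The commands shown there are presumably along these lines (a sketch, assuming the yolov5 directory path you copied in the previous step):

```shell
cd <path-to-your-yolov5-folder>
pip install -r requirements.txt
```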

3. Test the code on VS Code

To download the source video, go to the following URL:

https://github.com/SungJooo/DLIP_FINAL

There you will find a file called source.MOV. Download it to your desired folder.

Now that the software installations are complete, open the folder containing source.MOV in VS Code, create a new .py file, and paste in the "code to test on computer" code, which you can find in the appendix of this paper.

(figure: video capture source selection code, lines 8~13)

Near the top of the code (lines 8~13) you will see this part. If you wish to use the webcam for your test, keep the cap=cv2.VideoCapture(0) line and comment out the other; if you wish to use the source video, keep the cap=cv2.VideoCapture('source.MOV') line and comment out the other.
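That part of the code presumably looks like the following sketch (only one of the two capture lines should be active at a time):

```python
import cv2

# use ONE of the two capture sources below and comment out the other
cap = cv2.VideoCapture('source.MOV')   # source video file
# cap = cv2.VideoCapture(0)            # webcam (device index 0)
```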

Then, run the code on VScode.

Software Explanation

The algorithm works in three steps:

  1. Object detection via YOLOv5 pretrained model

  2. Object tracking using the detection data

  3. People counting using the tracking data

The following are some important functions in the tracking and people-counting algorithm.

matchboxes()

Uses the coordinates of all bounding boxes in the previous and current frames to match the boxes with each other and track the objects.

Parameters

  • coordlist: list of bounding box coordinates in current frame

  • prevcoordlist: list of bounding box coordinates in previous frame

  • width: width of frame

Example Code
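The original example code is not reproduced here; below is a minimal sketch of the idea, assuming boxes are (x1, y1, x2, y2) tuples and that a box cannot travel more than 10% of the frame width between frames (both assumptions, not the authors' exact implementation):

```python
def matchboxes(coordlist, prevcoordlist, width):
    """Greedily match each current-frame box to the nearest
    previous-frame box by center distance."""
    def center(box):
        x1, y1, x2, y2 = box
        return ((x1 + x2) / 2, (y1 + y2) / 2)

    matches = []      # list of (current_index, previous_index) pairs
    used = set()      # previous boxes already claimed by a match
    for i, cur in enumerate(coordlist):
        cx, cy = center(cur)
        best_j, best_d = -1, width * 0.1   # assumed max travel per frame
        for j, prev in enumerate(prevcoordlist):
            if j in used:
                continue
            px, py = center(prev)
            d = ((cx - px) ** 2 + (cy - py) ** 2) ** 0.5
            if d < best_d:
                best_j, best_d = j, d
        if best_j >= 0:
            used.add(best_j)
            matches.append((i, best_j))
    return matches
```

A box with no previous-frame partner within the distance threshold is treated as a newly appeared person.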

checkbot_box()

Checks whether the input box coordinates are near the bottom of the frame.

Parameters

  • coords: coordinates of box

  • height: height of frame

Example Code
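A minimal sketch of the check, assuming a "bottom band" covering the last 10% of the frame height (the margin value is an assumption):

```python
def checkbot_box(coords, height, margin=0.1):
    """True if the box's bottom edge lies within `margin` (fraction
    of the frame height) of the bottom of the frame."""
    x1, y1, x2, y2 = coords
    return y2 >= height * (1 - margin)
```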

update_frame()

Updates the frame information using the object detection data and previous number of people

Parameters

  • results: YOLOv5 detection current frame results

  • prev_results: YOLOv5 detection previous frame results

  • frame: captured frame of video

  • rectframe: frame with colored-in bounding boxes

  • num_people: number of people from previous frame

Example Code
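A self-contained sketch of the counting step, using simplified stand-ins for the matching and bottom-check helpers described above. Here detections are plain lists of (x1, y1, x2, y2) person boxes rather than YOLOv5 result objects, and the direction convention (bottom band = doorway) is an assumption:

```python
def update_frame(results, prev_results, frame, rectframe, num_people):
    """Update the head count by checking which tracked boxes crossed
    into or out of the bottom band of the frame."""
    h, w = len(frame), len(frame[0])

    def center(box):
        x1, y1, x2, y2 = box
        return ((x1 + x2) / 2, (y1 + y2) / 2)

    def near_bottom(box):              # simplified bottom-band check
        return box[3] >= h * 0.9

    used = set()
    for cur in results:                # greedy nearest-center matching
        cx, cy = center(cur)
        best, best_d = None, w * 0.1   # assumed max travel per frame
        for j, prev in enumerate(prev_results):
            if j in used:
                continue
            px, py = center(prev)
            d = ((cx - px) ** 2 + (cy - py) ** 2) ** 0.5
            if d < best_d:
                best, best_d = j, d
        if best is None:
            continue
        used.add(best)
        # crossing the bottom band changes the head count
        if near_bottom(cur) and not near_bottom(prev_results[best]):
            num_people -= 1            # moved toward the door: exited
        elif near_bottom(prev_results[best]) and not near_bottom(cur):
            num_people += 1            # came in from the door: entered
    return rectframe, num_people
```

Whether the bottom band means "entering" or "exiting" depends on which way the camera faces over the doorway.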

Tutorial Procedure

Raspberry Pi Setup

1. Install Raspberry Pi OS

If you want to start with a Raspberry Pi, you need to install the Raspberry Pi OS. First, insert a micro SD card (with a reader) into your laptop. Then, download the OS installer from the link. You will get an .exe file named "imager_1.7.2.exe". When you run it, you must pick the 64-bit OS to be able to run YOLOv5.

(figure: Raspberry Pi Imager install guide)

1.1. Remote controlling Raspberry Pi

To control the Raspberry Pi remotely from a laptop, add a file named "ssh" (with no extension) and a "wpa_supplicant.conf" file to the boot partition of the SD card.

The file named "wpa_supplicant.conf" needs to include the following:

The name "JooDesk" and the password "wwjd0000" are the laptop hotspot's SSID and password, used so that the IP address stays unchanged. The bandwidth must be set to 2.4 GHz.
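A minimal wpa_supplicant.conf along these lines (the country code is an assumption; set your own):

```
country=KR
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1

network={
    ssid="JooDesk"
    psk="wwjd0000"
}
```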

When you've done the process above, insert the micro SD card into the Raspberry Pi and boot it. After booting, set your hostname and password.

If the Raspberry Pi boots successfully, you have to give it a static IP address. To do this, follow these instructions:

And then, add the following lines at the bottom of the file.
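These are presumably the standard dhcpcd static-IP entries, edited in /etc/dhcpcd.conf (the router and DNS addresses below are assumptions for a Windows hotspot; the static address matches the one used throughout this tutorial):

```
interface wlan0
static ip_address=192.168.137.110/24
static routers=192.168.137.1
static domain_name_servers=8.8.8.8
```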

There should be no "#" marks. After entering the lines, press "esc" to leave insert mode, then type ":wq!" to save and quit vim. After this step, your Raspberry Pi will have the static IP address 192.168.137.110. Then reboot your Raspberry Pi with the command "$ reboot".

After rebooting the Raspberry Pi, follow the commands for the next step.

Tightvncserver is a program that mirrors the Raspberry Pi screen on a laptop. Specific guidelines follow.
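The install commands are presumably the standard apt ones (a sketch):

```shell
sudo apt-get update
sudo apt-get install tightvncserver
```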

1.2. PuTTY

PuTTY is a program for connecting to the Raspberry Pi over SSH. You can download it via the link.

The static IP address is "192.168.137.110"; use port 22.

1.3. TightVNC

When connecting to the Raspberry Pi using PuTTY, you get only a terminal. To use the Raspberry Pi in a GUI environment, the TightVNC program helps. The installation link is here. After installing TightVNC, you have to set a password.

With the step above, you have installed TightVNC on the Raspberry Pi. To activate TightVNC on the Raspberry Pi, run the following command.
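Starting the server is presumably done along these lines (display :1 matches the 5901 port mentioned below; the geometry and depth values are assumptions):

```shell
tightvncserver :1 -geometry 1280x720 -depth 24
```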

TightVNC uses port 5901 of the Raspberry Pi. With the command "sudo netstat -tulpn", you can check that 0.0.0.0:5901 is in the "LISTEN" state. If it is, the Raspberry Pi is ready to be synced to your laptop. "$ vncpasswd" is the command to edit the TightVNC password.

2. YOLOv5 in Raspberry Pi

First, you have to clone the YOLOv5 repository onto the Raspberry Pi. To do this, enter the following command line.

After this, a "yolov5" folder will be created in the home folder of the Raspberry Pi. Follow the instructions to set up the environment for yolov5.
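The clone and setup commands are presumably the standard YOLOv5 ones (a sketch; the test run on the bundled sample images is what produces the output in runs/detect/exp mentioned below):

```shell
cd ~
git clone https://github.com/ultralytics/yolov5
cd yolov5
pip3 install -r requirements.txt
python3 detect.py --weights yolov5n.pt --source data/images
```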

After the commands, you can find the following image at the "runs/detect/exp" in Raspberry Pi.

To connect an external camera input device (such as a Logitech webcam or a Picam), you can test that the module detects objects from that input source. The test code is as follows. "source 0" means the first external device connected to the Raspberry Pi; an extra device would be "source 1", and so on.
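The camera test command is presumably along these lines (a sketch using the yolov5n weights named below):

```shell
cd ~/yolov5
python3 detect.py --weights yolov5n.pt --source 0
```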

Since the Raspberry Pi environment is not as powerful as a laptop, the practical upper limit is the yolov5n model; anything heavier drops the FPS too much.

2.1. Package RPi.GPIO

RPi.GPIO is the Python package used to control the GPIO pins on a Raspberry Pi to turn on the light. To install it, run the following command on the Raspberry Pi.
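The install command is presumably one of the standard ones (a sketch):

```shell
sudo apt-get install python3-rpi.gpio
# or: pip3 install RPi.GPIO
```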

This is the pin map of the Raspberry Pi 4 GPIO. We used GPIO21 (pin 40) as the voltage source for the light. To control it from Python, the code is as follows.
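A minimal sketch of driving that pin with RPi.GPIO (hardware-only; the one-second blink is an illustration, not the project's actual on/off logic):

```python
import time
import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BCM)        # BCM numbering: 21 = GPIO21 (physical pin 40)
GPIO.setup(21, GPIO.OUT)

GPIO.output(21, GPIO.HIGH)    # light on
time.sleep(1)
GPIO.output(21, GPIO.LOW)     # light off
GPIO.cleanup()                # release the pin on exit
```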

3. Algorithm implementation in Raspberry Pi

To implement the algorithm on the Raspberry Pi, follow the commands below.

First, upload your code to your GitHub repository. Then execute the following command in the Raspberry Pi home folder.

Then move into your GitHub folder with cd and run the following code.
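For this project the commands would be along these lines (a sketch using the repository URL given in the introduction):

```shell
cd ~
git clone https://github.com/SungJooo/DLIP_FINAL
cd DLIP_FINAL
ls
```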

From here, we will walk through how we made it.

There you will find the three Python files we wrote.

"DLIP_Final_00_test.py" is the file that model yolov5n is working well on Raspberry Pi. This file finds only the "person" class.

"DLIP_Final_01_fps.py" is the file that measures your FPS with the model yolov5n. As model is still heavy to covered in Raspberry Pi, the FPS would be about 2~2.5, and in remote condition, the FPS gets even lower when the WiFi network is bad.

"DLIP_Final_01_fps.py" is the file that turns the lights on and off depending on whether a person enters or leaves.

To launch the code, run the following command in the DLIP_FINAL folder.
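For example, to run the FPS test script (substitute the final script's filename to run the full system):

```shell
cd ~/DLIP_FINAL
python3 DLIP_Final_01_fps.py
```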

Results and Analysis

The system was successfully able to:

  1. film the doorway entrance

  2. use the video footage to detect, track, and count the people in the frame

  3. all within a Raspberry Pi module, without connection to an external computer

Some issues were that the frame rate (around 2.5 FPS) and accuracy (around 60 percent) of the detection model (YOLOv5 nano) were not superb on a Raspberry Pi. YOLOv5 nano was deemed the adequate model: a lighter model would give a faster frame rate but lower accuracy, while a heavier model would have higher accuracy but a lower frame rate.

Below is a table of the frame rate depending on the device.

(figure: frame rate by device)

A possible solution to this problem would be to use a tensor-based object detection model instead of YOLOv5, which could increase FPS without sacrificing accuracy. This is because the Coral TPU used in this project is optimized to accelerate the computing speed of tensor-based models.

Appendix

Code to test on computer

Code to test on Raspberry Pi

Reference

Code Reference

  • https://ykkim.gitbook.io/dlip/

  • https://github.com/ultralytics/yolov5

  • some class materials from ECE30003
