Vehicle, Pedestrian Detection with IR Image


Date: 2021-6-21

Author: 김도연, 이예

Github:

Demo Video:

Introduction

This tutorial explains FLIR image object detection using yolov5. If you want to know how to install yolov5 on your desktop, refer to this site: https://ropiens.tistory.com/44

This report consists of five parts.

  • Why do I use FLIR camera

  • Train FLIR dataset and obtain the training 'weight'

  • The simple usage of the FLIR camera (FLIR A65)

  • Pre-processing code

  • Discussion

If you only want to know how to train a dataset, you can refer only to step 2 (Train FLIR dataset and obtain the training 'weight').


1. Why do I use the FLIR camera

  • At night, object detection is hard because it is difficult to distinguish objects with a common camera.

  • A FLIR camera, however, is not affected by light.

  • These cameras can see through smoke, fog, haze, and other atmospheric obscurants better than a visible-light camera can.

1. It is useful at night

2. It is not affected by visible light.

(Images from https://www.flirkorea.com/products/a65/)

2. Train FLIR dataset and obtain the training 'weight'

Download yolov5 from Github!!!

%cd /content
!git clone https://github.com/ultralytics/yolov5.git

%cd /content/yolov5/
!pip install -r requirements.txt

Download dataset from Google drive

Before you get your dataset from Google drive, you should upload your data in your own Google drive.

For example, in this tutorial, I made a data folder named 'juho' in my Google Drive. In 'juho', there is an 'images' folder, which contains the training image set and the validation image set, and a 'labels' folder, which contains the labels of the training images and the labels of the validation images.

The figure below shows the subfolders of the 'juho' folder!

from google.colab import drive 
drive.mount('/content/yolov5/drive')
Mounted at /content/yolov5/drive

If you run this and log in with your account, you will get an authorization code.

Then the 'drive' folder appears in your yolov5 directory and you can access your data in Google Drive.

Check yaml file

The 'yaml' file includes the information about your dataset.

  • 'train' in the 'yaml' indicates the directory of your training images!

  • 'val' in the 'yaml' indicates the directory of your validation images!

  • 'nc' is the number of classes that your dataset detects

  • 'names' lists the classes that your dataset detects

 %cat /content/yolov5/drive/MyDrive/juho/data.yaml
train: ../train/images
val: ../valid/images

nc: 4
names: ['1', '18', '2', '3']

Making image lists (Train and Validation)

In this stage you make image lists for your image data!

  • 'train_img' is the training image list

  • 'val_img' is the validation image list

In this tutorial, the image files are in '.jpg' format; you can change this to your own file format!

And make sure your data is uploaded successfully!

%cd
from glob import glob

# Collect the training and validation image paths from Google Drive
train_img = glob('/content/yolov5/drive/MyDrive/juho/images/train_images/*.jpg')
val_img   = glob('/content/yolov5/drive/MyDrive/juho/images/val_images/*.jpg')
train_img = train_img[0:1500]
val_img   = val_img[0:300]

print("Train images:",len(train_img))
print("Validating images:",len(val_img))

import matplotlib.pyplot as plt
import random
import matplotlib.image as Image

# Show one random training image to confirm the data is uploaded correctly
testnum    = random.randrange(0, len(train_img))   # stay inside the list bounds
test_img   = train_img[testnum]
img = Image.imread(test_img)
imgplot = plt.imshow(img)
plt.show()
/root
Train images: 1500
Validating images: 300

Making path text files (Train and Validation)

You should modify the 'train' and 'val' directories in the 'yaml' file (see the Check yaml file section), because the directories of the data files changed when they were uploaded to Google Drive. This code writes the image paths (the paths of your files) into .txt files.

with open('/content/yolov5/data/train.txt','w') as f:
  f.write('\n'.join(train_img)+'\n')
with open('/content/yolov5/data/val.txt','w') as f:
  f.write('\n'.join(val_img)+'\n')

Check that there are new txt files ('train.txt' and 'val.txt') in '/content/yolov5/data/'.
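For a quick check in the Colab notebook, you can simply list the folder (a small verification step that is not part of the original code):

!ls /content/yolov5/data/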

Modifying the data.yaml file

Using 'train.txt' and 'val.txt' (from the Making path text files section), you can change the 'train' and 'val' directories in the 'yaml' file.

import yaml

# Load the original data.yaml from Google Drive
with open('/content/yolov5/drive/MyDrive/juho/data.yaml','r') as f:
  data = yaml.safe_load(f)

print(data)

# Point 'train' and 'val' to the path text files created above
data['train'] = '/content/yolov5/data/train.txt'
data['val']   = '/content/yolov5/data/val.txt'

# Save the modified yaml into the yolov5 data folder
with open('/content/yolov5/data/data.yaml','w') as f:
  yaml.dump(data,f)

print(data)
{'train': '../train/images', 'val': '../valid/images', 'nc': 4, 'names': ['1', '18', '2', '3']}
{'train': '/content/yolov5/data/train.txt', 'val': '/content/yolov5/data/val.txt', 'nc': 4, 'names': ['1', '18', '2', '3']}

Train your data and obtain the weight file!!!

This is the training stage. If you run this code, you will get a 'weight' file trained on your own dataset. There are some parameters you should know.

  • '--img' : the image size of your data

  • '--batch' : the number of images processed per training step (batch size)

  • '--epochs' : the total number of passes over the training data

  • '--data' : the directory of your 'yaml' data file

  • '--cfg' : the structure of the model ('yolov5x.yaml' is used in this tutorial)

  • '--weights' : the pretrained weights to start from ('yolov5x.pt' is used in this tutorial)

  • '--name' : the name of your own weight file, the result of this training

%cd /content/yolov5/

!python train.py --img 416 --batch 8 --epochs 50 --data /content/yolov5/data/data.yaml --cfg ./models/yolov5x.yaml --weights yolov5x.pt --name FIRL_yolov5x

Check your training result!

from PIL import Image

# Plot the training results saved by yolov5 (results.png in the run folder)
path = "/content/yolov5/runs/train/FIRL_yolov5x/results.png"
fig = plt.figure()

result_img = Image.open(path)
result_img = result_img.resize((2048*2, 2048*2))
imgplot = plt.imshow(result_img)
plt.show()

Move your weight file to your yolov5 on the desktop

After training finishes, the resulting weight file is located at '/content/yolov5/runs/train/FIRL_yolov5x/weights/' ('FIRL_yolov5x' is the run name used in this tutorial). Move the 'best.pt' file from there to your YOLOv5 folder on your desktop.

Finally, you can run your code with your own weight file!!!
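For reference, a minimal example of running detection with the custom weight is shown below (the flags follow the standard yolov5 detect.py interface, and the source path is a placeholder).

# Example: run yolov5 detection with the custom-trained weight (source path is a placeholder)
python detect.py --weights best.pt --source test_video.mp4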

3. The simple usage of the FLIR camera (FLIR A65)

Download FLIR software!!

You can download the FLIR software from https://go.pleora.com/Download-eBUS-Player (the FLIR GEV Demo 1.10.0 version was used in this tutorial). After downloading, install ebus_runtime_32-bit.4.1.6.3809.exe and run PvSimpleUISample.exe.

Then click the Select/Connect button and connect your FLIR camera.

4. Pre-processing code

Add pre-processing code in yolov5

Anaconda Prompt allows direct access to the yolov5 files so that the code can be modified. This makes it possible to change the overall behavior of yolov5 before object detection, such as which objects to find. We made the following changes: 1. Set an ROI so that only objects in the lane where the vehicle is located are detected. 2. Approximate distance measurement: display a warning message at about 2 m when a person or car is in front of the vehicle.

1. Set ROI

Set the ROI region like this, so that only the information needed from the area in front of the vehicle is used while driving.

If an object is not inside this region, it is not detected. Clearer detection is possible by blocking unnecessary information (such as vehicles in other lanes), as sketched below.
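The exact ROI filtering depends on how detect.py is modified; below is a minimal sketch (not the authors' actual patch) that keeps a detection only when its bounding-box center lies inside a rectangular ego-lane ROI. The ROI coordinates and the keep_detection helper are hypothetical placeholders.

# Hypothetical ROI filter for the per-detection loop in detect.py
ROI_X1, ROI_Y1, ROI_X2, ROI_Y2 = 160, 120, 480, 360   # example ROI in pixels (placeholder values)

def keep_detection(xyxy):
    """Return True if the box center is inside the ROI rectangle."""
    x1, y1, x2, y2 = [float(v) for v in xyxy]
    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
    return ROI_X1 <= cx <= ROI_X2 and ROI_Y1 <= cy <= ROI_Y2

# Inside the per-detection loop of detect.py:
#   if not keep_detection(xyxy):
#       continue   # skip objects outside the ego-lane ROI (e.g., vehicles in other lanes)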

2. Display warning message

If a person or another vehicle is too close to the front of the vehicle, a warning message is displayed on the screen. The distance threshold was set at approximately 2 m.

In the detect.py code, the xyxy variable holds the coordinates of the upper-left and lower-right corners of each detected bounding box. This allows the warning message to be displayed when the y-value of the lower-right corner falls within the bottom 30 percent of the screen, which is estimated to correspond to about 2 meters from the camera.
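A minimal sketch of this check is shown below (hypothetical code, assuming an OpenCV frame im0 and a per-detection xyxy box as used in detect.py; the draw_warning_if_close helper and the text placement are placeholders).

import cv2

def draw_warning_if_close(im0, xyxy, ratio=0.30):
    """Draw a warning when the box bottom lies in the lower `ratio` of the frame (about 2 m here)."""
    h = im0.shape[0]                      # frame height in pixels
    y2 = float(xyxy[3])                   # y-value of the lower-right corner of the bounding box
    if y2 > (1.0 - ratio) * h:            # box bottom is within the bottom 30% of the screen
        cv2.putText(im0, "WARNING: TOO CLOSE", (30, 60),
                    cv2.FONT_HERSHEY_SIMPLEX, 1.2, (0, 0, 255), 3)
    return im0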

5. Discussion

In this step, an overall assessment is conducted.

1. Lack of detection capability of our custom trained weight file

If you look at the figure above, three objects are detected (1 person, 2 cars), but there are actually only two objects (1 person, 1 car). This means the model detects a wrong object. There are mainly two reasons for this error.

  • Environmental effect

The FLIR camera measures the infrared radiation emitted from objects, so it is affected by the environment of the place where the images are taken. The training images were taken in California, but the test images were taken in Pohang. This difference is one cause of the error.

  • Filter effect

In the FLIR camera settings, there is a filter effect. The resulting picture changes depending on which filter the user uses. The difference in filter effects between the training data and the testing data causes this error.

The picture above and the picture below were taken with different filters. Because FLIR images are rendered only from thermal intensity, these filter differences make a very big difference.

2. Unable to measure distance accurately

The distance used in this project (2 meters from the camera) corresponded to about 30 percent of the screen height. But this is a very rough value, and it needs to be more precise.

  • When object detection is performed, the bounding box is not exactly fitted to the object.

The measured value is only valid for a fixed camera position, because the relationship can change at any time depending on the camera height, angle, etc.

Therefore, we suggest combining the FLIR camera with other sensors (e.g., lidar sensors that measure reflected laser pulses) to measure the distance precisely.
