Evaluation Metric


ROC Curve

Understanding AUC, ROC curve:

The AUC-ROC curve is a performance measurement for classification problems at various threshold settings. ROC is a probability curve, and AUC represents the degree or measure of separability.

ROC AUC tells how well the model is capable of distinguishing between the two classes.

Receiver Operating Characteristic (ROC) Curve

  • true positive rate (recall) vs false positive rate (FPR)

  • FPR is the ratio of negative instances that are incorrectly classified as positive

Area under the curve (AUC): a perfect classifier has ROC AUC = 1, while a purely random classifier has ROC AUC = 0.5.

  • E.g. Find a Person. Red: Person, Green: non-person
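
A minimal sketch of computing the ROC curve and AUC with scikit-learn; the labels and scores below are hypothetical values for a binary "person" classifier, made up for illustration:

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Hypothetical ground-truth labels (1 = person) and predicted scores
y_true  = np.array([1, 1, 0, 1, 0, 0, 1, 0])
y_score = np.array([0.9, 0.8, 0.7, 0.6, 0.4, 0.35, 0.3, 0.1])

# TPR (recall) and FPR at every score threshold
fpr, tpr, thresholds = roc_curve(y_true, y_score)

# Area under the ROC curve: 1.0 = perfect, 0.5 = random guessing
auc = roc_auc_score(y_true, y_score)
print(f"AUC = {auc:.3f}")
```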

Sensitivity and Specificity (COVID-19 Diagnosis Example)

As noted above, both sensitivity and specificity are measured on subjects whose positive or negative status has already been confirmed, in order to establish the accuracy of a new diagnostic test.

Sensitivity means "how accurately the test identifies positives among actual positive patients", and specificity means "how accurately the test identifies negatives among actual healthy people".

Results obtained when the diagnostic test is applied to the actual positive and negative groups

Once every sample has been tested, the subjects fall into the four categories above (① true positive, ② false negative, ③ false positive, ④ true negative), and sensitivity and specificity are calculated as follows.

  1. Sensitivity = the proportion of actual patients that the new test correctly identifies as patients

    ① / (① + ②)

  2. Specificity = the proportion of actual healthy people that the new test correctly identifies as healthy

    ④ / (③ + ④)
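
A minimal sketch of these two formulas in Python, using hypothetical counts for the four categories (① TP, ② FN, ③ FP, ④ TN):

```python
# Hypothetical counts from testing a confirmed positive/negative population
tp, fn = 90, 10    # actual patients: detected (①) / missed (②)
fp, tn = 5, 95     # actual healthy:  falsely flagged (③) / correctly cleared (④)

sensitivity = tp / (tp + fn)   # ① / (① + ②)
specificity = tn / (fp + tn)   # ④ / (③ + ④)
print(f"Sensitivity = {sensitivity:.2f}, Specificity = {specificity:.2f}")
```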

Then, between sensitivity and specificity, which has the greater influence on the reliability of a diagnostic technique? According to experts, the two criteria must both be satisfied, and neither is inherently more valuable than the other.

Commissioner Jung Eun-kyeong explained, "If sensitivity and specificity differ greatly, it cannot be considered a proper diagnostic method," adding, "Depending on the disease, more weight may fall on one side, but both must be satisfied."

Top-1, Top-5 (ImageNet, ILSVRC)

The Top-5 error rate is the percentage of test examples for which the correct class was not in the top 5 predicted classes.

If a test image is a picture of a Persian cat, and the top 5 predicted classes in order are [Pomeranian (0.4), mongoose (0.25), dingo (0.15), Persian cat (0.1), tabby cat (0.02)], then it is still treated as being 'correct' because the actual class is in the top 5 predicted classes for this test image.
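
A minimal sketch of Top-1 / Top-5 accuracy in NumPy; the probability row and label below are hypothetical, with class index 3 playing the role of "Persian cat":

```python
import numpy as np

probs  = np.array([[0.40, 0.25, 0.15, 0.10, 0.02, 0.08]])  # one test image, 6 classes
labels = np.array([3])                                      # true class index

top1 = probs.argmax(axis=1)               # best prediction per image -> class 0 (wrong)
top5 = np.argsort(probs, axis=1)[:, -5:]  # indices of the 5 highest-scoring classes

top1_acc = np.mean(top1 == labels)
top5_acc = np.mean([labels[i] in top5[i] for i in range(len(labels))])
print(f"Top-1 acc: {top1_acc:.2f}, Top-5 acc: {top5_acc:.2f}")   # 0.00 vs 1.00
```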

For Object Detection

IoU

We need to evaluate the performance of both (1) classification and (2) localization using bounding boxes in the image.

Object Detection uses the concept of Intersection over Union (IoU). IoU computes the intersection over the union of two bounding boxes: the ground-truth bounding box and the predicted bounding box. An IoU of 1 implies that the predicted and ground-truth bounding boxes overlap perfectly.
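
A minimal sketch of IoU for two axis-aligned boxes in (x1, y1, x2, y2) format; the two boxes at the bottom are hypothetical ground-truth/predicted boxes:

```python
def iou(box_a, box_b):
    # Intersection rectangle
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)

    # Union = sum of the two areas minus the intersection
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(iou((20, 20, 80, 80), (30, 30, 90, 90)))   # ~0.53 for these example boxes
```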

Set a threshold value for the IoU to determine if the object detection is valid or not.

If the IoU threshold is 0.5:

  • if IoU ≥ 0.5, classify the detection as a True Positive (TP)

  • if IoU < 0.5, it is a wrong detection; classify it as a False Positive (FP)

  • When a ground truth is present in the image and the model fails to detect the object, classify it as a False Negative (FN).

  • True Negative (TN): every part of the image where we did not predict an object. This metric is not useful for object detection, hence we ignore TN.

We also need to consider the confidence score (classification) of each detected object. Predicted bounding boxes with a confidence score above the threshold are considered positive, and all predicted bounding boxes below the threshold are considered negative.

Use Precision and Recall as the metrics to evaluate the performance. Precision and Recall are calculated using true positives (TP), false positives (FP), and false negatives (FN).
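
A minimal sketch tying the pieces together: hypothetical detections are marked TP/FP by an IoU threshold (assuming each detection is already matched to its best ground-truth box), and precision/recall are computed from the counts:

```python
iou_thresh = 0.5
det_ious = [0.9, 0.7, 0.3, 0.6]   # hypothetical IoU of each detection with its matched ground truth
n_gt = 5                          # total ground-truth objects

tp = sum(iou >= iou_thresh for iou in det_ious)
fp = len(det_ious) - tp           # detections below the IoU threshold
fn = n_gt - tp                    # ground truths left undetected

precision = tp / (tp + fp)        # TP / all detections
recall    = tp / (tp + fn)        # TP / all ground truths
print(f"Precision = {precision:.2f}, Recall = {recall:.2f}")
```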

mAP

The original PASCAL VOC evaluation uses 11-point interpolated average precision to calculate the mean Average Precision (mAP).

Step 1: Plot Precision and Recall from IoU

Precision in the PR graph is not always monotonically decreasing, due to certain exceptions and/or a lack of data.

Example: the whole dataset contains only 5 apples. We collect all the predictions made for apples in all the images and rank them in descending order of predicted confidence (a prediction is counted as correct if IoU > 0.5).

For example, at rank #3, assume only 3 apples are predicted (2 of them correctly).

Precision is the proportion of TP = 2/3 = 0.67

Recall is the proportion of TP out of the possible positives = 2/5 = 0.4
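
A minimal sketch of Step 1: the array below is a hypothetical ranking of predictions (1 = correct at IoU > 0.5, 0 = incorrect), with 5 ground-truth apples in total; the cumulative counts give the precision/recall point at every rank:

```python
import numpy as np

correct = np.array([1, 1, 0, 0, 1, 1, 0, 1, 0, 0])  # predictions ranked by confidence
n_gt = 5                                             # total apples in the dataset

tp_cum    = np.cumsum(correct)             # TPs among the top-k predictions
ranks     = np.arange(1, len(correct) + 1)
precision = tp_cum / ranks                 # at rank 3: 2/3 = 0.67
recall    = tp_cum / n_gt                  # at rank 3: 2/5 = 0.4
print(np.round(precision, 2), np.round(recall, 2))
```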

Step 2: Use the 11-point interpolation technique.

Sample the precision at 11 equally spaced recall levels: 0.0, 0.1, 0.2, ..., 0.9, 1.0.

Interpolation: at each recall level, take the maximum precision over all points with equal or higher recall.

Step 3: Calculate the mean Average Precision (mAP).

Average Precision (AP) is the area under the Precision-Recall curve.

mAP is calculated as the mean of the per-class AP values; for the 11-point definition, AP = (1/11) Σ_{r ∈ {0, 0.1, …, 1.0}} p_interp(r).

In our example, AP = (5 × 1.0 + 4 × 0.57 + 2 × 0.5)/11 ≈ 0.75.
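
A minimal sketch of 11-point interpolated AP; the precision/recall arrays are hypothetical values chosen so that the interpolated precisions reproduce the example above (1.0 at five recall levels, 0.57 at four, 0.5 at two):

```python
import numpy as np

def ap_11_point(recall, precision):
    ap = 0.0
    for r in np.linspace(0.0, 1.0, 11):           # recall levels 0.0, 0.1, ..., 1.0
        mask = recall >= r
        p_interp = precision[mask].max() if mask.any() else 0.0  # max precision to the right
        ap += p_interp / 11.0
    return ap

recall    = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 1.0])
precision = np.array([1.0, 1.0, 1.0, 1.0, 0.57, 0.5, 0.5, 0.57, 0.5])
print(ap_11_point(recall, precision))             # (5*1.0 + 4*0.57 + 2*0.5)/11 ≈ 0.75
```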

For the 20 different classes in PASCAL VOC, we compute an AP for every class and also report the average of those 20 AP results.

However, this 11-point interpolation is less precise, and it loses the ability to measure differences between methods with low AP. Therefore, a different AP calculation was adopted after 2008 for PASCAL VOC.

AP (Area Under the Curve, AUC)

For the later PASCAL VOC competitions (VOC2010–2012), no 11-point approximation is used.

Instead of sampling 11 fixed points, we sample p(rᵢ) at every recall value where the maximum precision drops and compute AP as the sum of the rectangular blocks under the curve.
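
A minimal sketch of this area-under-curve AP (all-point sampling, in the style of VOC2010–2012 evaluation), reusing the hypothetical precision/recall arrays from the 11-point example:

```python
import numpy as np

def ap_all_points(recall, precision):
    # Add sentinel points at recall 0 and 1
    r = np.concatenate(([0.0], recall, [1.0]))
    p = np.concatenate(([0.0], precision, [0.0]))
    # Make precision monotonically non-increasing (interpolate from the right)
    for i in range(len(p) - 2, -1, -1):
        p[i] = max(p[i], p[i + 1])
    # Sum the rectangular blocks wherever recall increases
    idx = np.where(r[1:] != r[:-1])[0]
    return np.sum((r[idx + 1] - r[idx]) * p[idx + 1])

recall    = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 1.0])
precision = np.array([1.0, 1.0, 1.0, 1.0, 0.57, 0.5, 0.5, 0.57, 0.5])
print(ap_all_points(recall, precision))
```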

COCO mAP

Recent research papers tend to report results on the COCO dataset only. In COCO mAP, a 101-point interpolated AP definition is used in the calculation. For COCO, AP is averaged over multiple IoU thresholds (the minimum IoU required to count a detection as a positive match). AP@[.5:.95] corresponds to the average AP for IoU from 0.5 to 0.95 with a step size of 0.05. For the COCO competition, AP is the average over 10 IoU levels on 80 categories (AP@[.50:.05:.95]: from 0.5 to 0.95 with a step size of 0.05). Several other metrics are also collected for the COCO dataset.
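
A minimal sketch of the COCO-style averaging over IoU thresholds; `ap_at_iou` is a hypothetical helper that would run the matching and PR computation for a single threshold (here replaced by a dummy function for illustration):

```python
import numpy as np

def coco_map(ap_at_iou, iou_thresholds=np.linspace(0.50, 0.95, 10)):
    # Average AP over the 10 IoU thresholds 0.50, 0.55, ..., 0.95
    return float(np.mean([ap_at_iou(t) for t in iou_thresholds]))

# Dummy per-threshold AP values: stricter IoU thresholds give lower AP
dummy_ap = lambda t: max(0.0, 0.8 - (t - 0.5))
print(f"AP@[.50:.05:.95] = {coco_map(dummy_ap):.3f}")
```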

Source: HitNews (http://www.hitnews.co.kr)
