Feature Model and Feature Selection

2015, Thomas W. Rauber, Member, IEEE, Francisco de Assis Boldt, and Flávio Miguel Varejão

Introduction

  • Uses the CWRU bearing dataset

  • Statistical features

    • Envelope features

  • Feature selection

    • Reduces dimensionality and improves classifier performance

    • With a univariate criterion, no subset search is needed; features can simply be ranked individually

      e.g., using the Fisher score

    • Multivariate criteria require a search algorithm

      Features from separate models such as the statistical, wavelet, and envelope models can also be combined

    • SVM, etc.

  • Training, validation, and test data splitting

  • Validation

    • LOO (leave-one-out) and K-fold cross-validation

    • Estimated accuracy is the most common performance criterion, but alternatives such as the AUC-ROC (area under the receiver operating characteristic curve) also exist

Feature models

  • ๊ฐ๊ฐ ๋‹ค๋ฅธ signal feature extraction methods์—์„œ ๋น„๋กฏ๋œ feature๋ฅผ ์œตํ•ฉํ•ด์„œ ์‚ฌ์šฉํ–ˆ๋˜ ์—ฐ๊ตฌ๋Š” ์—†์—ˆ์Œ

    โ†“ bearing fault diagnosis๋ฅผ ์œ„ํ•œ ์ผ๋ฐ˜์ ์ธ framework์™€ ์ด ๋…ผ๋ฌธ์—์„œ ์ œ์•ˆํ•˜๋Š” framework

Sequence of Information Processing

  1. Signal feature extraction: acquire data from the sensors

  2. Feature pooling: assemble a global feature vector that carries as much information as possible

  3. Feature extraction on the feature level: extract new features from the feature vector; these have lower dimensionality and encode the machine condition information more abstractly

  4. Feature selection: reduce dimensionality and increase discriminative power

  5. Classification

    This approach discards the irrelevant features in the extraction stage and keeps only the relevant ones

Statistical features

Statistical features in the time and frequency domains

(10 time-domain features + 3 frequency-domain features) * (DE and FE vibration signals) = 26 features
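
A few classic members of such a statistical feature set can be sketched as follows (RMS, kurtosis, and crest factor are common choices; the paper's exact list is not reproduced in these notes):

```python
import math

def rms(x):
    # Root-mean-square value of a signal segment.
    return math.sqrt(sum(v * v for v in x) / len(x))

def kurtosis(x):
    # Normalized fourth central moment; sensitive to the impulsive
    # behaviour typical of bearing faults.
    n = len(x)
    mu = sum(x) / n
    var = sum((v - mu) ** 2 for v in x) / n
    return sum((v - mu) ** 4 for v in x) / (n * var ** 2)

def crest_factor(x):
    # Peak amplitude relative to the RMS level.
    return max(abs(v) for v in x) / rms(x)
```

Each statistic is computed once per vibration signal (DE and FE), which is how the per-domain counts multiply up to 26.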

Complex envelope analysis

  • ๋ฒ ์–ด๋ง ๊ณ ์žฅ์—๋Š” 4๊ฐ€์ง€ fault freq.๊ฐ€ ์žˆ์Œ

    • f_s = shaft rotational freq.

    • f_c = fundamental cage freq. (CWRU ๋ฐ์ดํ„ฐ์—๋Š” ์—†์Œ)

    • f_bpi = ball-pass inner-raceway freq.

    • f_bpo = ball-pass outer-raceway freq.

  • hilbert transform์œผ๋กœ ๊ณ„์‚ฐ๋จ

    • definition: ์‹ ํ˜ธ h(t)์™€ 1/ฯ€t์˜ convolution

    \tilde h(t) := H\{h(t)\} := h(t) * \frac{1}{\pi t} = \frac{1}{\pi}\int_{-\infty}^{\infty} \frac{h(\tau)}{t - \tau}\,d\tau
    • analytic signal

      h_a(t) := h(t) + i\,\tilde h(t)
  • ์ˆœ์„œ๋ฅผ ๋‹ค์‹œ ์ •๋ฆฌํ•˜๋ฉด

    1. high pass filtering of the raw signal โ†’ signal h(t) ์–ป์Œ

    2. h(t)์—์„œ h_a(t) ์–ป์Œ

    3. Fourier transform: F{h_a(t)}

    4. analysis its spectrum: |F{h_a(t)}|
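
Steps 2-4 above can be sketched numerically (the high-pass filter of step 1 is omitted); the analytic signal is built via the FFT by zeroing the negative frequencies, which is the standard discrete realization of the Hilbert transform:

```python
import numpy as np

def analytic_signal(x):
    # h_a(t) = h(t) + i*h~(t): zero the negative-frequency half of the
    # spectrum and double the positive half (DC and Nyquist unchanged).
    n = len(x)
    X = np.fft.fft(x)
    w = np.zeros(n)
    w[0] = 1.0
    if n % 2 == 0:
        w[n // 2] = 1.0
        w[1:n // 2] = 2.0
    else:
        w[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * w)

def envelope_spectrum(x):
    # Steps 3-4: Fourier transform of h_a(t), then its magnitude.
    return np.abs(np.fft.fft(analytic_signal(x)))
```

The narrowband envelope features are then RMS energies read from this magnitude spectrum around the fault-frequency harmonics.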

  • Feature: the 1% narrowband RMS energy is computed

    • (DE and FE signals: 2) * (three frequencies f_bpi, f_bpo, f_b: 3) * (cross detection: 2) * (harmonics up to the sixth: 6)

      = 2 * 3 * 2 * 6 = 72 envelope features

    • Cross detection: for the CWRU database, the sensor at the DE can also detect the faults at the FE, although with less confidence; hence, the number of features doubles

Wavelet packet analysis

  • More flexible than plain wavelet decomposition

  • Mother wavelet: Daubechies 4 is used

  • Decomposition down to level 4

    (the detailed mathematical definition is omitted here)

Feature pooling and dimensionality reduction

feature model์„ ์–ป๊ณ  ๋‚˜์„œ ๊ทธ๊ฒƒ๋“ค์„ common pool๋กœ merge ์‹œํ‚ด

  • feature pool์˜ index(?)

    26๊ฐœ์˜ statistical features + 72๊ฐœ์˜ complex envelope features + 32๊ฐœ์˜ wavelet packet feature = 130๊ฐœ

  • ์ด feature๋“ค์€ ์ค‘๋ณต๊ณผ noise๊ฐ€ ํฌํ•จ๋˜์–ด ์žˆ์Œ โ†’ feature vector์˜ dimension์„ ๋‚ฎ์ถ”๋Š” ๊ฒŒ ๋ชฉ์ 

  • reference 28์—์„œ๋Š” PCA, partial least squares, independent component analysis, Fisher discriminant analysis, and subspace-aided approach ๋“ฑ์˜ ๋ฐฉ๋ฒ• ์‚ฌ์šฉ

Feature selection

  • ๊ธฐ๋ณธ์ ์œผ๋กœ selection criterion๊ณผ search strategy๋กœ ๊ตฌ์„ฑ๋จ

  • wrapper approach

    • classifier์˜ ์„ฑ๋Šฅ์„ estimate..?

    • ๊ฐ€์žฅ ์ข‹์€ ์„ฑ๋Šฅ์„ ๋ณด์ด๋Š” feature์˜ subset์„ ๋ฝ‘์•„๋‚ด๋Š” ๋ฐฉ๋ฒ•

  • filter approach

    • feature set์„ ํ‰๊ฐ€ํ•˜๊ธฐ ์œ„ํ•ด ๋‹ค๋ฅธ ๊ธฐ์ค€์„ ์‚ฌ์šฉํ•จ

    • selection filter์˜ ์žฅ์ : ์†๋„๊ฐ€ ๋น ๋ฆ„

    • ๋‹จ์ : wrapper ๋ฐฉ์‹์— ๋น„ํ•ด ์„ฑ๋Šฅ์ด ์•ˆ ์ข‹์Œ

    • ์‚ฌ์šฉ์ž์—๊ฒŒ feature-rank๋ฅผ ์ค˜์„œ ๊ฐ feature๊ฐ€ ์–ผ๋งŒํผ์˜ ์˜ํ–ฅ๋ ฅ์„ ๊ฐ€์ง€๋Š”์ง€ ์•Œ๋ ค์ฃผ๋Š” ๋ฐฉ๋ฒ•

  • Best feature (BF) search

    • How the selection criterion is evaluated:

      • Evaluate the criterion J({x_j}) for each individual feature x_j (j = 1, 2, ..., D), sort the features in descending order of J, and set the selected set X_d to the first d features of the sorted list

    • Advantage of BF: speed

    • Disadvantage of BF: ignores the multidimensionality of the problem (interactions among features are not considered)

  • Sequential forward selection (SFS)

    • Starts from the empty set; each candidate feature is tested together with the already selected set, and the best one is added

  • Sequential backward selection (SBS)

    • Starts with all D features selected and deletes one feature at a time until D-d features have been deleted

      = until d features remain

  • Floating techniques

    • SFFS (sequential floating forward selection)

    • SFBS (sequential floating backward selection)

์ตœ๊ทผ ์—ฐ๊ตฌ์—์„œ wrapper์™€ filter method๋ฅผ combineํ•˜๋ ค๋Š” ์‹œ๋„๋Š” ๊ณ ๋ ค๋˜์ง€ ์•Š๋Š”๋‹ค(...?)

Classification and performance estimation

CV(Cross validation) techniques

  • x% training data, 100-x% test data

  • K-fold CV

    • ๊ณ„์‚ฐ์˜ ๋ณต์žก์„ฑ์„ ์ค„์ด๋ฉด์„œ๋„ ํ†ต๊ณ„์  ์œ ์˜์„ฑ์„ ์–ป์„ ์ˆ˜ ์žˆ์Œ

    • K๊ฐœ์˜ subset์œผ๋กœ ๋‚˜๋ˆ ์ง

    • ๊ฐ subset์€ training์— K-1๋ฒˆ, test์— 1๋ฒˆ ์‚ฌ์šฉ๋จ

      +) ๋ฐ์ดํ„ฐ์…‹์„ k๊ฐœ์˜ ๊ฐ™์€ ํฌ๊ธฐ๋กœ ๋‚˜๋ˆ ์„œ ํ•œ ๋ถ€๋ถ„์”ฉ test set์œผ๋กœ ์‚ฌ์šฉํ•˜์—ฌ k๊ฐœ์˜ test performance๋ฅผ ํ‰๊ท ์„ ๋‚ด๋Š” ๋ฐฉ๋ฒ•

      ์žฅ์ : LOO CV์— ๋น„ํ•ด ๋น ๋ฅธ ์†๋„, validation set approach๋ณด๋‹ค ๋†’์€ ์ •ํ™•๋„

  • LOO(leave-one-out) CV

    • training ์‹œ๊ฐ„์ด ๋„ˆ๋ฌด ๊ธธ์ง€ ์•Š๋‹ค๋ฉด ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Œ

    • K-fold CV์˜ ํŠน์ˆ˜ํ•œ ๊ฒฝ์šฐ์ž„: total dataset์ด N pattern์ž„(K = N)

    • ๋‹จ์ˆœํ•˜๊ฒŒ train-test data๋กœ ๋‚˜๋ˆ„๋Š” ๊ฒƒ๋ณด๋‹ค ๋” ์‹ ๋ขฐ๋„๊ฐ€ ๋†’์Œ - ์•”๋ฌต์ ์œผ๋กœ ๊ฐ available pattern์ด ํ…Œ์ŠคํŠธ๋˜๊ธฐ ๋•Œ๋ฌธ์ž„(...?)

      +) N๋ฒˆ์˜ ๋ชจ๋ธ์„ ๋งŒ๋“ค๊ณ  ๊ฐ ๋ชจ๋ธ์„ ๋งŒ๋“ค ๋•Œ ํ•˜๋‚˜์˜ ์ƒ˜ํ”Œ์„ ์ œ์™ธํ•˜๋ฉด์„œ ๊ทธ ์ œ์™ธํ•œ ์ƒ˜ํ”Œ๋กœ test set performance๋ฅผ ๊ณ„์‚ฐํ•˜๋Š” ๋ฐฉ๋ฒ•์ด๋ผ๊ณ  ํ•จ

      ์žฅ์ : ๋ชจ๋“  ์ƒ˜ํ”Œ์— ๋Œ€ํ•ด ํ•œ๋ฒˆ์”ฉ์€ testํ•˜๊ธฐ ๋•Œ๋ฌธ์— randomness๊ฐ€ ์—†์Œ, validation set approach์™€ ๋‹ค๋ฅด๊ฒŒ ๋งค์šฐ stableํ•œ ๊ฒฐ๊ณผ๋ฅผ ์–ป์„ ์ˆ˜ ์žˆ์Œ

      ๋‹จ์ : ์—ฐ์‚ฐ๋Ÿ‰์ด ๋งŽ์Œ, k-fold CV์— ๋น„ํ•ด model์˜ ๋‹ค์–‘์„ฑ์ด ์ ์Œ

์ด ๋…ผ๋ฌธ์—์„œ๋Š” classifier architecture์˜ training cost์— ๋”ฐ๋ผ LOO์™€ K-fold CV๋ฅผ ์‚ฌ์šฉํ–ˆ์Œ

Performance criteria

  • accuracy์™€ AUC-ROC ๋‘ ๊ฐ€์ง€๊ฐ€ ์‚ฌ์šฉ๋จ

  • AUC-ROC: ๋ถ„๋ฅ˜ํ•ด์•ผํ•  ํด๋ž˜์Šค๊ฐ€ ๋‘ ๊ฐ€์ง€์ด๊ณ , ๊ฐ ํด๋ž˜์Šค๋กœ ๋ถ„๋ฅ˜๋  ํ™•๋ฅ ์„ returnํ•˜๋Š” ๊ฒฝ์šฐ์— ์‚ฌ์šฉ ๊ฐ€๋Šฅ

    • negative class๊ฐ€ positive class๋ณด๋‹ค ํ›จ์”ฌ ๋งŽ์€ ๊ฒฝ์šฐ์— ์ข‹์Œ ( = normal condition์ด fault๋ณด๋‹ค ํ›จ์‹  ๋งŽ์Œ)

Classifier architectures

  • k-nearest neighbor classifier

    • A type of nonparametric method

    • Classifies a new sample by a majority vote among its k nearest existing samples

  • MLP

    • A representative type of artificial neural network

  • SVM

    • A classifier that has long been used in many domains

Experimental results

The CWRU dataset is used

Condition classes

โ€‹ table 3. machine condition classes defined for the experiments

  • ์ผ๋ฐ˜์ ์ธ CWRU data๋ฅผ ์‚ฌ์šฉํ•œ ๋…ผ๋ฌธ๋“ค๋ณด๋‹ค ๋” ๋งŽ์€ class๋ฅผ ์‚ฌ์šฉํ•จ

    • fault์˜ ์œ„์น˜, fault์˜ ์‹ฌ๊ฐ์„ฑ, ๋ชจํ„ฐ์— ๋Œ€ํ•œ ์œ„์น˜ ๋“ฑ์„ ๊ตฌ๋ถ„ํ•˜๊ธฐ ์œ„ํ•จ

    • load ๋ถ„๋ฅ˜๊ฐ€ ๊ฐ€์žฅ ์–ด๋ ค์šด ๋ฌธ์ œ

Signals to patterns

  • ์ „์ฒด ์‹ ํ˜ธ๋Š” ํšŒ์ „ ์‹ ํ˜ธ์˜ ๋ฐ˜๋ณต์ด๊ธฐ ๋•Œ๋ฌธ์— ์ผ์ •ํ•œ time interval์˜ ์—ฐ์†์œผ๋กœ ์ž๋ฅด๋Š” ๊ฒŒ ์ข‹์Œ

    • ํ…Œ์ŠคํŠธ๋ฅผ ํ†ตํ•ด 15๋ฒˆ์˜ nonoverlapping interval์ด ์„ฑ๋Šฅ ์ €ํ•˜์˜ threshold๋ผ๊ณ  ์ฐพ์Œ

Experiment 1: Feature extraction

๊ฐ machine condition์— ๋”ฐ๋ผ feature vector๋กœ splitํ•จ

  1. โ†“ statistical features in the time and freq. domain

  2. narrow band energy of the complex envelope magnitude์—์„œ 72๊ฐœ์˜ feature

    • six harmonic freq. ์ฃผ๋ณ€์˜ 1%๋งŒํผ narrow band๋ผ๊ณ  ์ •์˜ํ•จ

    • ex) expected freq.๊ฐ€ 30Hz (running speed of the machine) * 5.4152์ธ ๊ฒฝ์šฐ๋Š” 2*30* 5.4152 ( = 324.91) ๊ทผ์ฒ˜๋ฅผ ๋ด์•ผํ•จ : narrow band๋Š” interval [(324.91*0.99), (324.81*1.01)] = [321.66, 328.16 Hz]

  3. wavelet packet analysis

  • fault์˜ ์ข…๋ฅ˜๋Š” ๊ฐ™์ง€๋งŒ ์‹ฌ๊ฐ๋„(๊ฒฐํ•จ์˜ ํฌ๊ธฐ)๊ฐ€ ๋‹ค๋ฆ„

  • ๋Œ€๋น„๋ฅผ ๋†’์ด๊ธฐ ์œ„ํ•ด 0.007 ๊ฒฐํ•จ์€ 0hp, 0.021 ๊ฒฐํ•จ์€ 3hp์ธ ๋ฐ์ดํ„ฐ ์‚ฌ์šฉ

  • freq. band 4,4์™€ 4,12์—์„œ ๊ฐ’์ด ํฌ๊ฒŒ ์ฐจ์ด๋‚จ

  • normal ๋ฐ์ดํ„ฐ๋Š” 4,0์—์„œ ๊ฐ’์ด ํ™• ๋†’์•„์ง
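
The ±1% narrow band used for the envelope features in item 2 above can be computed as follows (function and argument names are illustrative):

```python
def narrow_band(shaft_hz, order, harmonic, width=0.01):
    # Center frequency: the k-th harmonic of a characteristic fault
    # frequency, given as a multiple ("order", e.g. 5.4152) of the
    # shaft rotational speed.
    center = harmonic * shaft_hz * order
    return center * (1 - width), center * (1 + width)
```

For the example above, narrow_band(30.0, 5.4152, 2) gives roughly (321.66, 328.16) Hz; the envelope feature is the RMS energy of the spectrum inside this interval.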

Experiment 2: Performance Without Feature Selection

  • The 1-nearest neighbor classifier is always validated with LOO CV

  • SVM and MLP are computationally expensive, so tenfold CV (CV with 10 folds) is used instead

The statistical feature model gives the lowest accuracy and the wavelet packet model the highest

The complete pool is not always the most accurate (some features carry noise, which can hurt the results)

Experiment 3: Feature selection

  • x ์ถ•์€ ์„ ํƒ๋œ feature์˜ ์ˆ˜

  • global pool๊ณผ ์„ธ๊ฐ€์ง€ feature pool ๋‹ค ํ…Œ์ŠคํŠธํ–ˆ์ง€๋งŒ wavelet ๊ฒฐ๊ณผ๋งŒ ํ‘œ์‹œ

  • global pool์˜ ๊ฒฝ์šฐ์—๋Š” SFFS, SFS๊ฐ€ ๊ฐ™์€ ๊ธฐ๋Šฅ์„ํ•˜๊ณ  SBS, SFBS๊ฐ€ ๊ฐ™์€ ๊ธฐ๋Šฅ์„ ํ•จ

  • wavelet์˜ ๊ฒฐ๊ณผ๋ฅผ ๋ณด๋ฉด floating๊ธฐ๋ฒ•์ด sequential ๊ฒฐ๊ณผ๋ณด๋‹ค ๋” ๋‚˜์€ ์„ฑ๋Šฅ์„ ๋ณด์ž„

  • ๊ฒฐ๋ก : global pool์—์„œ ์„ ํƒ๋œ feature์˜ ์„ฑ๋Šฅ์ด ๋” ์ข‹๋‹ค

Experiment 4: AUC-ROC for different feature models and number of selected features

  • ROC ๊ณก์„ ์„ ๋ณด๊ธฐ ํŽธํ•˜๊ฒŒ ๋งŒ๋“ค๊ธฐ ์œ„ํ•ด(?) separating the classes ๋‚œ์ด๋„๋ฅผ ์ธ์œ„์ ์œผ๋กœ ๋†’์ž„

    1. signal sampling resolution์„ ๋‚ฎ์ถค

    2. ํ•œ signal์—์„œ pattern์˜ ์ˆ˜๋ฅผ 15๊ฐœ์—์„œ 50๊ฐœ๋กœ ๋Š˜๋ฆผ

    3. 50๊ฐœ์˜ ํŒจํ„ด์„ sampleํ•˜๊ธฐ ์œ„ํ•ด ์ฒซ 2์ดˆ๋งŒ ์‚ฌ์šฉํ•จ

fig. 4, 5๋ฅผ ํ†ตํ•ด one feature model๋ณด๋‹ค ์—ฌ๋Ÿฌ feature๋ฅผ ๋™์‹œ์— ์‚ฌ์šฉํ•˜๋Š” ๊ฒŒ ๋” ์„ฑ๋Šฅ์ด ์ข‹๋‹ค๋Š” ๊ฒƒ์„ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Œ

Experiment 5: Comparison of feature selection and feature extraction by PCA

  • PCA๋Š” feature selection๊ณผ ๋‹ค๋ฅด๊ฒŒ ๊ธฐ์กด์˜ feature๋“ค์„ ์กฐํ•ฉํ•ด์„œ ์ƒˆ๋กœ์šด feature๋ฅผ ๋งŒ๋“ค์–ด๋‚ด๋Š” ๋ฐฉ๋ฒ•, feature selection์€ original feature๊ฐ€ ๋ณ€ํ•˜์ง€ ์•Š์ง€๋งŒ PCA๋Š” variance์— ๋”ฐ๋ผ rank๋จ(variance๊ฐ€ ํฌ๋‹ค๊ณ  ํ•ด์„œ ๋ฌด์กฐ๊ฑด ์„ฑ๋Šฅ์ด ์ข‹์•„์ง€๋Š” ๊ฑด ์•„๋‹˜)

table 9 : feature selection(SFS) ๋ฐฉ๋ฒ•๊ณผ feature extraction(PCA) ๋ฐฉ๋ฒ•์˜ ์„ฑ๋Šฅ ๋น„๊ต

fig. global pool๊ณผ wavelet packet์˜ PCA ๊ฒฐ๊ณผ

Conclusion

  • 3๊ฐœ์˜ feature model๊ณผ classifier architecture, performance criteria๋Š” ์ด๋ฏธ ๋งŽ์ด ์‚ฌ์šฉํ•˜๊ณ  ์žˆ๋Š” ๊ฒƒ๋“ค์ž„

  • ๊ทธ๋Ÿฌ๋‚˜ ๋‹ค๋ฅธ ๋…ผ๋ฌธ๋“ค์€ feature model์˜ ๋‚ฎ์€ ์„ฑ๋Šฅ์„ ์ง€๋‚˜์น˜๊ฒŒ ์ •๊ตํ•œ classifier model๋กœ ๋ณด์™„ํ•˜๋ ค๊ณ  ํ•จ

  • ์ด ๋…ผ๋ฌธ์—์„œ๋Š” good process description์„ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝ์šฐ์—๋Š” classifier๋Š” ๊ฐ„๋‹จํ•ด๋„ ๊ดœ์ฐฎ๋‹ค๋Š” ์ ์„ ์‹œ์‚ฌํ•จ
