DL Techniques
Pruning
Weight Sharing
Quantization
Low-Rank Approximation
Binary / Ternary Net
Winograd Transformation
Weight pruning
Set individual weights in the weight matrix to zero. This corresponds to deleting connections between neurons, as in the figure above.
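A minimal sketch (not from these notes) of magnitude-based weight pruning, assuming a small NumPy weight matrix and an arbitrary threshold chosen for illustration:

```python
import numpy as np

# Magnitude-based weight pruning: weights with small absolute value are set to zero,
# which corresponds to deleting the associated connections.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 6))          # hypothetical weight matrix (out x in)

threshold = 0.5                          # assumed pruning threshold
W_pruned = np.where(np.abs(W) < threshold, 0.0, W)

sparsity = np.mean(W_pruned == 0)
print(f"fraction of weights pruned: {sparsity:.2f}")
```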
Unit/Neuron pruning
Set entire columns of the weight matrix to zero, in effect deleting the corresponding output neuron. A sketch follows below.
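A minimal sketch (not from these notes) of unit/neuron pruning, assuming a weight matrix laid out as input × output so that each column corresponds to one output neuron; the number of neurons to remove is an arbitrary illustration value:

```python
import numpy as np

# Unit/neuron pruning: rank output neurons by the L2 norm of their weight columns
# and zero out the weakest ones, removing those neurons' contribution entirely.
rng = np.random.default_rng(0)
W = rng.standard_normal((6, 4))          # hypothetical weight matrix (in x out)

n_prune = 2                              # assumed number of neurons to remove
col_norms = np.linalg.norm(W, axis=0)    # one norm per output neuron (column)
weakest = np.argsort(col_norms)[:n_prune]

W_pruned = W.copy()
W_pruned[:, weakest] = 0.0               # delete the corresponding output neurons
print("zeroed columns:", weakest)
```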
Read: Inception paper ["Going deeper with convolutions"](https://arxiv.org/pdf/1409.4842.pdf)
1×1 convolutions are an essential part of the Inception module.
A 1×1 convolution returns an output with the same spatial dimensions (height and width) as the input image.
Color images have three dimensions, or channels, and feature maps deeper in a network can have many more. 1×1 convolutions compress these channels at little cost, leaving a thinner stack of feature maps on which to perform the expensive 3×3 and 5×5 convolutions.
Convolutional layers learn many filters to identify attributes of images. 1×1 convolutions can be placed as "bottlenecks" to compress a large number of filters down to just the information needed for classification.
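A minimal sketch (not the Inception implementation) of a 1×1 convolution used as a bottleneck before an expensive 3×3 convolution; the channel counts and input size are assumptions for illustration:

```python
import torch
import torch.nn as nn

# 1x1 bottleneck: shrink the channel dimension cheaply before a costly 3x3 convolution,
# as done inside an Inception module.
x = torch.randn(1, 256, 28, 28)                      # batch x channels x height x width

bottleneck = nn.Conv2d(256, 64, kernel_size=1)       # 1x1 conv: 256 -> 64 channels
conv3x3 = nn.Conv2d(64, 128, kernel_size=3, padding=1)

y = conv3x3(bottleneck(x))
print(bottleneck(x).shape)   # torch.Size([1, 64, 28, 28])  -- same height/width, fewer channels
print(y.shape)               # torch.Size([1, 128, 28, 28])
```

Note how the 1×1 convolution leaves the 28×28 spatial size untouched and only reduces the channel count, which is what makes the following 3×3 convolution much cheaper.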