
Pruning Makes Faster and Smaller Neural Networks


23 February 2018



Abstract

Many state-of-the-art computer vision algorithms use large-scale convolutional neural networks (CNNs) as basic building blocks. These CNNs are known for their huge number of parameters, high redundancy in weights, and tremendous computing resource consumption. This paper presents a learning algorithm to simplify and speed up these CNNs. Specifically, we introduce a “try-and-learn” algorithm to train pruning agents that remove unnecessary CNN filters in a data-driven way. With the help of a novel reward function, our agents remove a significant number of filters in CNNs while maintaining performance at a desired level. Moreover, this method provides easy control over the tradeoff between network performance and its scale. The performance of our algorithm is validated with comprehensive pruning experiments on several popular CNNs for visual recognition and semantic segmentation tasks.
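
To make the core idea concrete, below is a minimal sketch of reward-guided filter pruning. It is not the paper's method: the learned pruning agent is replaced by random search over binary keep/drop masks, and the names evaluate, accuracy_full, and lam are hypothetical placeholders standing in for the accuracy measurement and the tradeoff weight that the paper's reward function encodes.

```python
# Illustrative sketch only: random search over keep/drop masks scored by a
# reward that trades off accuracy retention against the fraction of filters
# removed. The paper instead trains an agent to propose these masks.
import numpy as np

def reward(masked_accuracy, accuracy_full, keep_mask, lam=1.0):
    """Higher when pruning hurts accuracy less and removes more filters."""
    accuracy_term = masked_accuracy - accuracy_full   # <= 0 when pruning hurts
    sparsity_term = 1.0 - keep_mask.mean()            # fraction of filters removed
    return accuracy_term + lam * sparsity_term

def search_masks(evaluate, num_filters, accuracy_full, trials=200, lam=1.0, seed=0):
    """Try random masks and keep the best-scoring one (the "try" without the "learn")."""
    rng = np.random.default_rng(seed)
    best_mask, best_r = np.ones(num_filters, dtype=bool), -np.inf
    for _ in range(trials):
        mask = rng.random(num_filters) > 0.5          # candidate keep/drop decision
        r = reward(evaluate(mask), accuracy_full, mask, lam)
        if r > best_r:
            best_mask, best_r = mask, r
    return best_mask, best_r

if __name__ == "__main__":
    # Toy stand-in for "accuracy of the CNN with only the masked filters kept":
    # pretend filters 0-7 matter and the remaining 56 are redundant.
    important = np.arange(64) < 8
    evaluate = lambda m: 0.9 * (m & important).sum() / important.sum()
    mask, r = search_masks(evaluate, num_filters=64, accuracy_full=0.9)
    print(f"kept {mask.sum()}/64 filters, reward={r:.3f}")
```

Adjusting lam in this sketch plays the same role as the paper's control over the performance/scale tradeoff: a larger weight on the sparsity term favors smaller networks at the cost of a larger accuracy drop.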

The paper “Learning to Prune Filters in Convolutional Neural Networks” is available here:

https://arxiv.org/pdf/1801.07365.pdf

