Abstract
Neural network designers have achieved progressively higher accuracy by increasing model depth, introducing new layer types, and discovering new combinations of layers. A common element in many architectures is the distribution of the number of filters across layers. Neural network models follow a design pattern of increasing the number of filters in deeper layers, as in LeNet, VGG, ResNet, MobileNet, and even in automatically discovered architectures such as NASNet. It remains unknown whether this pyramidal distribution of filters is the best choice for different tasks and constraints. In this work we present a series of modifications to the distribution of filters in three popular neural network models and their effects on accuracy and resource consumption. Results show that some models improve accuracy by up to 8.9% while reducing parameters by up to 54%.
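To make the idea concrete, the sketch below compares the parameter counts of small 3x3 convolution stacks under a conventional pyramidal filter distribution and one alternative. The filter counts and layer depths here are illustrative assumptions, not the configurations evaluated in the paper; they only show how redistributing filters changes the parameter budget.

```python
# Hypothetical sketch: parameter counts of 3x3 conv stacks under different
# filter distributions. Filter counts are illustrative, not from the paper.

def conv_params(in_ch, out_ch, k):
    """Parameters (weights + biases) of a single 2D convolution layer."""
    return in_ch * out_ch * k * k + out_ch

def stack_params(filters, in_ch=3, k=3):
    """Total parameters of a sequential stack of conv layers."""
    total = 0
    for out_ch in filters:
        total += conv_params(in_ch, out_ch, k)
        in_ch = out_ch
    return total

pyramidal = [16, 32, 64]  # the common "increasing filters" pattern
inverted = [64, 32, 16]   # one possible alternative distribution

print(stack_params(pyramidal))  # 23584
print(stack_params(inverted))   # 24880
```

Even with the same set of filter counts, their ordering changes the total parameter count, because each layer's cost depends on the product of its input and output channels; this is the kind of trade-off between accuracy and resource consumption the paper explores.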
Original language | English |
---|---|
Title of host publication | Proceedings - 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2021 |
Publisher | IEEE Computer Society |
Pages | 1241-1246 |
Number of pages | 6 |
ISBN (Electronic) | 9781665448994 |
Publication status | Published - 1 Sept 2021 |
Event | 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2021 - Virtual, Online; 19 Jun 2021 → 25 Jun 2021 |
Publication series
Name | IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops |
---|---|
ISSN (Print) | 2160-7508 |
ISSN (Electronic) | 2160-7516 |
Conference
Conference | 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2021 |
---|---|
City | Virtual, Online |
Period | 19/06/21 → 25/06/21 |
Bibliographical note
Publisher Copyright: © 2021 IEEE.