WPNets and PWNets: from the perspective of channel fusion


Liang, Daojun, et al. "WPNets and PWNets: From the Perspective of Channel Fusion." IEEE Access 6 (2018): 34226-34236. http://liangdaojun.github.io/files/p2-WPNet.pdf

Published in IEEE Access, 2018


Abstract

The performance of a neural network correlates positively with its number of parameters, and existing architectures carry substantial parameter redundancy. By exploring the relationship between the whole and the parts of a network's channels, we obtain convolutional architectures that trade off parameter count against performance. Two network architectures are implemented by dividing the convolution kernels of one layer into multiple groups, ensuring that the network has more connections but fewer parameters. In one architecture, information flows from the whole to the parts; we call these whole-to-part connected networks (WPNets). In the other, information flows from the parts to the whole; we call these part-to-whole connected networks (PWNets). WPNets use whole-channel information to enhance partial-channel information, and PWNets use partial-channel information to generate or enhance whole-channel information. We evaluate the proposed architectures on four competitive object-recognition benchmarks (CIFAR-10, CIFAR-100, SVHN, and ImageNet), and our models obtain comparable results with far fewer parameters than many state-of-the-art models.
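To make the channel-fusion idea concrete, below is a minimal PyTorch sketch of a whole-to-part style block: the layer's kernels are split into groups via a grouped convolution, and a 1x1 projection of the whole channel set is added back into each partial group. This is my own illustration of the idea in the abstract, not the authors' released code; the class name `WholeToPartBlock` and all hyperparameters are assumptions.

```python
# Illustrative sketch only: a whole-to-part style fusion block, assuming
# (a) kernels split into groups via grouped convolution and (b) a 1x1
# projection of the whole channel set fused into each partial group.
# Names and hyperparameters are hypothetical, not from the paper's code.
import torch
import torch.nn as nn


class WholeToPartBlock(nn.Module):
    """Whole-channel information enhances each partial channel group."""

    def __init__(self, channels: int, n_groups: int = 4):
        super().__init__()
        assert channels % n_groups == 0
        self.n_groups = n_groups
        group_ch = channels // n_groups
        # Grouped 3x3 convolution: each group convolves only its own
        # channel slice, which is what keeps the parameter count low.
        self.group_conv = nn.Conv2d(
            channels, channels, kernel_size=3, padding=1, groups=n_groups
        )
        # 1x1 projection of the *whole* channel set down to one group's
        # width; this summary is the "whole" information to be fused.
        self.whole_proj = nn.Conv2d(channels, group_ch, kernel_size=1)
        self.bn = nn.BatchNorm2d(channels)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        whole = self.whole_proj(x)  # (N, group_ch, H, W)
        parts = torch.chunk(self.group_conv(x), self.n_groups, dim=1)
        # Enhance every partial group with the whole-channel summary.
        fused = torch.cat([p + whole for p in parts], dim=1)
        return self.act(self.bn(fused))


if __name__ == "__main__":
    block = WholeToPartBlock(channels=64, n_groups=4)
    y = block(torch.randn(2, 64, 32, 32))
    print(y.shape)  # torch.Size([2, 64, 32, 32])
```

A part-to-whole (PWNets-style) block would invert the direction of fusion: the partial groups would be combined (e.g., concatenated and projected up) to generate or enhance the whole channel set, rather than the whole being projected down into each part.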