2018 24th International Conference on Pattern Recognition (ICPR) (2018)
Aug. 20, 2018 to Aug. 24, 2018
Gangming Zhao , CASIA, Research Center for Brain-inspired Intelligence
Zhaoxiang Zhang , CASIA, Research Center for Brain-inspired Intelligence
He Guan , CASIA, Research Center for Brain-inspired Intelligence
Peng Tang , School of EIC, Huazhong University of Science and Technology
Jingdong Wang , Microsoft Research
Most convolutional neural networks share the same characteristic: each convolutional layer is followed by a nonlinear activation layer, of which the Rectified Linear Unit (ReLU) is the most widely used. In this paper, we argue that this design, with a one-to-one ratio between the two layer types, may not be the best choice, since it can result in poor generalization ability. We therefore investigate a more suitable way of using ReLU to explore better network architectures. Specifically, we propose a proportional module that keeps the ratio between the numbers of convolution and ReLU layers at n:m (n > m). The proportional module can be applied to almost all networks, with no extra computational cost, to improve performance. Comprehensive experimental results show that the proposed method achieves better performance on different benchmarks with different network architectures, verifying the superiority of our work.
Convolution, Tensile stress, Network architecture, Computational efficiency, Computational modeling, Pattern recognition
G. Zhao, Z. Zhang, H. Guan, P. Tang and J. Wang, "Rethinking ReLU to Train Better CNNs," 2018 24th International Conference on Pattern Recognition (ICPR), Beijing, China, 2018, pp. 603-608.