2016 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) (2016)
Las Vegas, NV, United States
June 26, 2016 to July 1, 2016
ISBN: 978-1-5090-1437-8
pp: 426-433
ABSTRACT
We propose a structured prediction architecture, which exploits the local generic features extracted by Convolutional Neural Networks and the capacity of Recurrent Neural Networks (RNNs) to retrieve distant dependencies. The proposed architecture, called ReSeg, is based on the recently introduced ReNet model for image classification. We modify and extend it to perform the more challenging task of semantic segmentation. Each ReNet layer is composed of four RNNs that sweep the image horizontally and vertically in both directions, encoding patches or activations, and providing relevant global information. Moreover, ReNet layers are stacked on top of pre-trained convolutional layers, benefiting from generic local features. Upsampling layers follow ReNet layers to recover the original image resolution in the final predictions. The proposed ReSeg architecture is efficient, flexible and suitable for a variety of semantic segmentation tasks. We evaluate ReSeg on several widely-used semantic segmentation datasets: Weizmann Horse, Oxford Flower, and CamVid, achieving state-of-the-art performance. Results show that ReSeg can act as a suitable architecture for semantic segmentation tasks, and may have further applications in other structured prediction problems. The source code and model hyperparameters are available at https://github.com/fvisin/reseg.
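As a rough illustration of the four-directional sweeping scheme described in the abstract, the sketch below implements a heavily simplified ReNet-style layer in NumPy: every column is swept by a pair of RNNs (top-down and bottom-up), and every row of the result is then swept by another pair (left-to-right and right-to-left), so each output position aggregates context from the whole feature map. This is only a conceptual toy with plain tanh RNNs and random weights; the function and parameter names are invented here, and the paper's actual implementation (gated units, pre-trained convolutional features, upsampling) is in the linked repository.

```python
import numpy as np

def rnn_sweep(seq, W_in, W_h, b):
    """Simple tanh RNN over a (T, C) sequence; returns (T, H) hidden states."""
    h = np.zeros(W_h.shape[0])
    out = []
    for x_t in seq:
        h = np.tanh(x_t @ W_in + h @ W_h + b)
        out.append(h)
    return np.stack(out)

def bidir_sweep(seq, params_f, params_b):
    """Concatenate a forward and a backward sweep -> (T, 2H)."""
    fwd = rnn_sweep(seq, *params_f)
    bwd = rnn_sweep(seq[::-1], *params_b)[::-1]
    return np.concatenate([fwd, bwd], axis=-1)

def make_params(c_in, hidden, rng):
    """Random (hypothetical) RNN parameters: input, recurrent, bias."""
    return (rng.standard_normal((c_in, hidden)) * 0.1,
            rng.standard_normal((hidden, hidden)) * 0.1,
            np.zeros(hidden))

def renet_layer(feat, hidden=8, seed=0):
    """Toy ReNet-style layer on a (rows, cols, channels) feature map.

    Four RNNs in total: two sweep each column vertically in both
    directions, then two sweep each row of that result horizontally
    in both directions. Output shape: (rows, cols, 2*hidden).
    """
    rng = np.random.default_rng(seed)
    rows, cols, c = feat.shape
    # Vertical pass: one bidirectional sweep per column.
    pv = (make_params(c, hidden, rng), make_params(c, hidden, rng))
    vert = np.stack([bidir_sweep(feat[:, j], *pv) for j in range(cols)], axis=1)
    # Horizontal pass over the vertical-pass activations, per row.
    ph = (make_params(2 * hidden, hidden, rng),
          make_params(2 * hidden, hidden, rng))
    horiz = np.stack([bidir_sweep(vert[i], *ph) for i in range(rows)], axis=0)
    return horiz

out = renet_layer(np.random.default_rng(1).standard_normal((6, 5, 3)), hidden=8)
print(out.shape)  # (6, 5, 16)
```

In the full model, several such layers are stacked on convolutional features, and upsampling layers then restore the input resolution before the per-pixel classification.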
INDEX TERMS
Semantics, Image segmentation, Recurrent neural networks, Computer architecture, Image resolution, Context modeling
CITATION

F. Visin et al., "ReSeg: A Recurrent Neural Network-Based Model for Semantic Segmentation," 2016 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Las Vegas, NV, United States, 2016, pp. 426-433.
doi:10.1109/CVPRW.2016.60