
IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 36, no. 1, Jan. 2014, pp. 157-170

What Is Optimized in Convex Relaxations for Multilabel Problems: Connecting Discrete and Continuously Inspired MAP Inference

Christopher Zach, Microsoft Research Cambridge, Cambridge, UK

Christian Häne, Computer Vision and Geometry Group, ETH Zurich, Zurich, Switzerland

Marc Pollefeys, Computer Vision and Geometry Group, ETH Zurich, Zurich, Switzerland

ABSTRACT

In this work, we present a unified view of Markov random fields (MRFs) and recently proposed continuous tight convex relaxations for multilabel assignment in the image plane. These relaxations are far less biased toward the grid geometry than MRFs on grids. It turns out that the continuous methods are nonlinear extensions of the well-established local polytope MRF relaxation. In view of this result, a better understanding of these tight convex relaxations in the discrete setting is obtained. Further, a wider range of optimization methods is now applicable to find a minimizer of the tight formulation. We propose two methods to improve the efficiency of minimization. One uses a weaker but more efficient continuously inspired approach as initialization and gradually refines the energy where necessary. The other reformulates the dual energy, enabling smooth approximations to be used for efficient optimization. We demonstrate the utility of our proposed minimization schemes in numerical experiments. Finally, we generalize the underlying energy formulation from isotropic metric smoothness costs to arbitrary nonmetric and orientation-dependent smoothness terms.

INDEX TERMS

Joining processes, labeling, standards, Markov random fields, optimization methods, image edge detection, minimization, approximate inference, continuous labeling problems, convex relaxation

CITATION

Christopher Zach, Christian Häne, Marc Pollefeys, "What Is Optimized in Convex Relaxations for Multilabel Problems: Connecting Discrete and Continuously Inspired MAP Inference", *IEEE Transactions on Pattern Analysis & Machine Intelligence*, vol. 36, no. 1, pp. 157-170, Jan. 2014, doi:10.1109/TPAMI.2013.105