Unrolling loopy top-down semantic feedback in convolutional deep networks

Carlo Gatta, Adriana Romero, Joost van de Weijer

In this paper, we propose a novel way to perform top-down semantic feedback in convolutional deep networks for efficient and accurate image parsing. We also show how to add global appearance/semantic features, which have been shown to improve image parsing performance in state-of-the-art methods but were absent from previous convolutional approaches. The proposed method is characterized by efficient training and sufficiently fast testing. We use the well-known SIFT Flow dataset to quantify the advantages provided by our contributions and to compare against state-of-the-art convolutional image parsing approaches.