Tag: ICCV 2019

  • Multi-Modal Fusion for End-to-End RGB-T Tracking

    Lichao Zhang, Martin Danelljan, Abel Gonzalez-Garcia, Joost van de Weijer, Fahad Shahbaz Khan

    We propose an end-to-end tracking framework for fusing the RGB and TIR modalities in RGB-T tracking. Our baseline tracker is DiMP (Discriminative Model Prediction), which employs a carefully designed target prediction network trained end-to-end using a discriminative loss. We analyze the effectiveness […]

  • SID4VAM: A Benchmark Dataset With Synthetic Images for Visual Attention Modeling

    David Berga, Xose R. Fdez-Vidal, Xavier Otazu, Xose M. Pardo

    A benchmark of saliency model performance on a synthetic image dataset is provided. Model performance is evaluated through saliency metrics, as well as the influence of model inspiration and consistency with human psychophysics. SID4VAM is composed of 230 synthetic images, with […]

  • Active Learning for Deep Detection Neural Networks

    Hamed H. Aghdam, Abel Gonzalez-Garcia, Joost van de Weijer, Antonio M. López

    The cost of drawing object bounding boxes (i.e., labeling) for millions of images is prohibitively high. For instance, labeling pedestrians in a regular urban image could take 35 seconds on average. Active learning aims to reduce the cost of labeling by selecting […]

  • Learning the Model Update for Siamese Trackers

    Lichao Zhang, Abel Gonzalez-Garcia, Joost van de Weijer, Martin Danelljan, Fahad Shahbaz Khan

    Siamese approaches address the visual tracking problem by extracting an appearance template from the current frame, which is used to localize the target in the next frame. In general, this template is linearly combined with the accumulated template from the previous frame, […]
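    The linear template combination mentioned in the abstract can be sketched as a simple running average — a minimal illustration of the conventional baseline update the paper contrasts against, not the learned update it proposes; the mixing weight `gamma` here is an assumed hyperparameter value:

    ```python
    import numpy as np

    def update_template(accumulated, current, gamma=0.1):
        """Conventional linear template update used as a baseline in many
        Siamese trackers: blend the accumulated template with the template
        extracted from the current frame. `gamma` (assumed value) controls
        how quickly the template adapts to appearance changes."""
        return (1.0 - gamma) * accumulated + gamma * current

    # Toy usage with random feature maps standing in for templates:
    acc = np.zeros((8, 8))          # accumulated template so far
    cur = np.ones((8, 8))           # template from the current frame
    acc = update_template(acc, cur) # each element becomes 0.1
    ```

    A small `gamma` makes the tracker robust to occlusions but slow to adapt; the paper's motivation is to replace this fixed linear rule with a learned update.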