Category: CVPR

  • Transferring Unconditional to Conditional GANs with Hyper-Modulation

    Héctor Laria, Yaxing Wang, Joost van de Weijer, Bogdan Raducanu. GANs have matured in recent years and are able to generate high-resolution, realistic images. However, the computational resources and the data required for the training of high-quality GANs are enormous, and the study of transfer learning of these models is therefore an urgent topic. […]
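
    The truncated abstract stops before the method, but the title points to hypernetwork-style modulation of a pretrained generator. The sketch below is a hypothetical simplification of that general idea, not the paper's architecture (the layer type, embedding size, and scaling scheme are all assumptions): a frozen unconditional layer is modulated per class by a small hypernetwork.

    ```python
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class HyperModulatedLinear(nn.Module):
        """Illustrative layer: a frozen pretrained weight is scaled
        per class by a tiny hypernetwork acting on a class embedding.
        Hypothetical simplification, not the paper's exact design."""
        def __init__(self, base: nn.Linear, num_classes: int, emb_dim: int = 64):
            super().__init__()
            # keep the pretrained (unconditional) parameters frozen
            self.register_buffer("weight", base.weight.detach().clone())
            self.register_buffer("bias", base.bias.detach().clone())
            self.class_emb = nn.Embedding(num_classes, emb_dim)
            # hypernetwork: class embedding -> per-output-channel scale
            self.hyper = nn.Linear(emb_dim, base.out_features)

        def forward(self, x, y):
            scale = 1.0 + self.hyper(self.class_emb(y))   # (B, out_features)
            return scale * F.linear(x, self.weight, self.bias)
    ```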

  • Avalanche: an End-to-End Library for Continual Learning

    Vincenzo Lomonaco, Lorenzo Pellegrini, Andrea Cossu, Antonio Carta, Gabriele Graffieti, Tyler L. Hayes, Matthias De Lange, Marc Masana, Jary Pomponi, Gido van de Ven, Martin Mundt, Qi She, Keiland Cooper, Jeremy Forest, Eden Belouadah, Simone Calderara, German I. Parisi, Fabio Cuzzolin, Andreas Tolias, Simone Scardapane, Luca Antiga, Subutai Ahmad, Adrian Popescu, Christopher Kanan, Joost van de Weijer, Tinne Tuytelaars, Davide Bacciu, Davide Maltoni. Learning continually from non-stationary data streams is a long-standing goal and a challenging problem in […]
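
    Avalanche is an open-source PyTorch library, so a minimal usage sketch is possible. The following assumes the benchmark/model/strategy API around SplitMNIST, SimpleMLP, and the Naive baseline strategy; exact module paths have shifted across Avalanche versions.

    ```python
    import torch
    from avalanche.benchmarks.classic import SplitMNIST
    from avalanche.models import SimpleMLP
    # newer versions expose this as avalanche.training.supervised.Naive
    from avalanche.training.strategies import Naive

    benchmark = SplitMNIST(n_experiences=5)     # 10 digit classes over 5 tasks
    model = SimpleMLP(num_classes=10)
    strategy = Naive(
        model,
        torch.optim.SGD(model.parameters(), lr=1e-3),
        torch.nn.CrossEntropyLoss(),
        train_mb_size=32,
        train_epochs=1,
    )

    # train on the stream of experiences, evaluating on the full test stream
    for experience in benchmark.train_stream:
        strategy.train(experience)
        strategy.eval(benchmark.test_stream)
    ```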

  • Ternary Feature Masks: zero-forgetting for task-incremental learning

    Marc Masana, Tinne Tuytelaars, Joost van de Weijer. We propose an approach to continual learning without any forgetting, for the task-aware regime in which the task label is known at inference. By using ternary masks we can upgrade a model to new tasks, reusing knowledge from previous tasks while not forgetting anything about them. Using […]
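
    To make the idea of task-conditioned ternary masking concrete, here is a schematic sketch: each task stores a fixed three-valued mask over feature channels, selected at inference by the known task label. The three-state semantics shown (off, reuse as-is, reuse with a task-specific scale) is an assumed simplification of the paper's formulation.

    ```python
    import torch
    import torch.nn as nn

    class TernaryMaskedFeatures(nn.Module):
        """Schematic task-aware feature gating: each task owns a ternary
        mask over channels (0 = off, 1 = reuse as-is, 2 = reuse with a
        task-specific learned scale). Hypothetical simplification."""
        def __init__(self, num_tasks: int, num_features: int):
            super().__init__()
            # ternary masks, frozen once their task has been learned
            self.register_buffer(
                "masks", torch.ones(num_tasks, num_features, dtype=torch.long))
            # per-task scales, only active where the mask value is 2
            self.scales = nn.Parameter(torch.ones(num_tasks, num_features))

        def forward(self, feats: torch.Tensor, task_id: int) -> torch.Tensor:
            m = self.masks[task_id]                          # (F,)
            s = torch.where(m == 2, self.scales[task_id],
                            (m == 1).float())                # 0, 1, or scale
            return feats * s                                 # (B, F) * (F,)
    ```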

  • Continual learning in cross-modal retrieval

    Kai Wang, Luis Herranz, Joost van de Weijer. Multimodal representations and continual learning are two areas closely related to human intelligence. The former considers the learning of shared representation spaces where information from different modalities can be compared and integrated (we focus on cross-modal retrieval between language and visual representations). The latter studies […]
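
    The shared-space comparison the abstract refers to is usually realized as nearest-neighbor search under cosine similarity. This is a generic cross-modal retrieval sketch, not the paper's specific model:

    ```python
    import torch
    import torch.nn.functional as F

    def retrieve(text_emb: torch.Tensor, image_embs: torch.Tensor, k: int = 5):
        """Rank gallery images by cosine similarity to a text query,
        assuming both were mapped into one shared embedding space."""
        q = F.normalize(text_emb, dim=-1)        # (D,) unit-norm query
        g = F.normalize(image_embs, dim=-1)      # (N, D) unit-norm gallery
        sims = g @ q                             # (N,) cosine similarities
        return sims.topk(k)                      # top-k scores and indices
    ```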

  • DANICE: Domain adaptation without forgetting in neural image compression

    Sudeep Katakol, Luis Herranz, Fei Yang, Marta Mrak. Neural image compression (NIC) is a new coding paradigm where coding capabilities are captured by deep models learned from data. This data-driven nature enables new potential functionalities. In this paper, we study the adaptability of codecs to custom domains of interest. We show that NIC codecs […]
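
    For context, codecs of this kind are trained end-to-end on the standard rate-distortion objective (this is the common NIC formulation, not a detail specific to DANICE), where $\hat{y}$ is the quantized latent and $\hat{x}$ the reconstruction:

    ```latex
    % Rate-distortion training objective for neural image compression:
    % expected bitrate R plus lambda-weighted expected distortion D.
    \mathcal{L}
      = \underbrace{\mathbb{E}_{x}\!\left[-\log_2 p_{\hat{y}}(\hat{y})\right]}_{R\ (\text{rate})}
      + \lambda\,\underbrace{\mathbb{E}_{x}\!\left[\lVert x - \hat{x}\rVert^2\right]}_{D\ (\text{distortion})}
    ```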

  • Slimmable Compressive Autoencoders for Practical Neural Image Compression

    Fei Yang, Luis Herranz, Yongmei Cheng, Mikhail G. Mozerov. Neural image compression leverages deep neural networks to outperform traditional image codecs in rate-distortion performance. However, the resulting models are also heavy, computationally demanding and generally optimized for a single rate, limiting their practical use. Focusing on practical image compression, we propose slimmable compressive autoencoders […]
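
    The core building block of a slimmable network is a layer that can run at a fraction of its full width by slicing its leading channels. Below is a generic slimmable convolution in that spirit; it is an assumed simplification, not the paper's exact layer.

    ```python
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SlimmableConv2d(nn.Conv2d):
        """Convolution that runs at a chosen fraction of its full width
        by slicing the leading channels of its weight (generic sketch)."""
        def forward(self, x, width_mult: float = 1.0):
            out_ch = max(1, int(self.out_channels * width_mult))
            in_ch = x.shape[1]                 # accept already-slimmed inputs
            w = self.weight[:out_ch, :in_ch]
            b = self.bias[:out_ch] if self.bias is not None else None
            return F.conv2d(x, w, b, self.stride, self.padding)

    conv = SlimmableConv2d(64, 128, kernel_size=3, padding=1)
    x = torch.randn(1, 64, 32, 32)
    full = conv(x, 1.0)    # (1, 128, 32, 32): full-capacity codec
    slim = conv(x, 0.25)   # (1, 32, 32, 32): lighter, lower rate/quality
    ```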

  • Generative Feature Replay For Class-Incremental Learning

    Xialei Liu, Chenshen Wu, Mikel Menta, Luis Herranz, Bogdan Raducanu, Andrew D. Bagdanov, Shangling Jui, Joost van de Weijer. Humans are capable of learning new tasks without forgetting previous ones, while neural networks fail due to catastrophic forgetting between new and previously-learned tasks. We consider a class-incremental setting which means that the task-ID is unknown at inference time. […]
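
    The title's mechanism, replaying generated features rather than stored images, can be sketched as follows: the classifier head trains on a mix of real current-task features and generator-sampled features of old classes. All names and the conditional-generator interface are hypothetical.

    ```python
    import torch

    def incremental_step(head, feat_extractor, generator, new_loader,
                         old_classes, optimizer, criterion):
        """One schematic pass: replay compact features of old classes
        from a generator while learning the new task from real data."""
        for images, labels in new_loader:
            with torch.no_grad():
                real_feats = feat_extractor(images)        # current-task features
                old_labels = old_classes[
                    torch.randint(len(old_classes), (images.size(0),))]
                fake_feats = generator(old_labels)         # replayed old features
            logits = head(torch.cat([real_feats, fake_feats]))
            targets = torch.cat([labels, old_labels])
            loss = criterion(logits, targets)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    ```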

  • Semi-supervised Learning for Few-shot Image-to-Image Translation

    Yaxing Wang, Salman Khan, Abel Gonzalez-Garcia, Joost van de Weijer, Fahad Shahbaz Khan. In the last few years, unpaired image-to-image translation has witnessed remarkable progress. Although the latest methods are able to generate realistic images, they crucially rely on a large number of labeled images. Recently, some methods have tackled the challenging setting of few-shot […]

  • Semantic Drift Compensation for Class-Incremental Learning

    Lu Yu, Bartłomiej Twardowski, Xialei Liu, Luis Herranz, Kai Wang, Yongmei Cheng, Shangling Jui, Joost van de Weijer. Class-incremental learning of deep networks sequentially increases the number of classes to be classified. During training, the network has access only to the data of one task at a time, where each task contains several classes. In this setting, networks suffer […]
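
    A sketch of the drift-compensation idea behind the title: the embedding drift measured on current-task data (before vs. after training) is propagated to old class prototypes with distance-based weights. This is an assumed simplification; the function name and constants are hypothetical.

    ```python
    import torch

    def compensate_prototypes(protos, feats_before, feats_after, sigma=0.3):
        """Propagate the measured per-sample embedding drift of current
        data to old prototypes using Gaussian distance weights."""
        drift = feats_after - feats_before               # (N, D) per-sample drift
        # weight each sample by its pre-training proximity to the prototype
        d2 = torch.cdist(protos, feats_before) ** 2      # (P, N) squared distances
        w = torch.exp(-d2 / (2 * sigma ** 2))            # (P, N) Gaussian weights
        w = w / w.sum(dim=1, keepdim=True).clamp_min(1e-8)
        return protos + w @ drift                        # (P, D) compensated
    ```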

  • Orderless Recurrent Models for Multi-label Classification

    Vacit Oguz Yazici, Abel Gonzalez-Garcia, Arnau Ramisa, Bartłomiej Twardowski, Joost van de Weijer. Recurrent neural networks (RNN) are popular for many computer vision tasks, including multi-label classification. Since RNNs produce sequential outputs, labels need to be ordered for the multi-label classification task. Current approaches sort labels according to their frequency, typically ordering them in either […]
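
    The frequency-based ordering that the abstract describes as the current baseline is simple to illustrate: each image's label set is sorted by how often each label occurs in the whole dataset, yielding a fixed target sequence for the RNN. This sketches that baseline, not the paper's orderless method.

    ```python
    from collections import Counter

    def frequency_order(label_sets):
        """Sort each image's label set by global label frequency
        (most frequent first, alphabetical tie-break) so an RNN can
        be trained on a fixed target sequence. Illustrative only."""
        counts = Counter(l for labels in label_sets for l in labels)
        return [sorted(labels, key=lambda l: (-counts[l], l))
                for labels in label_sets]

    # 'person' occurs most often, so it leads every target sequence
    data = [{"dog", "person"}, {"person", "car"}]
    print(frequency_order(data))  # [['person', 'dog'], ['person', 'car']]
    ```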