Category: Publications
-
Avalanche: an End-to-End Library for Continual Learning
Vincenzo Lomonaco, Lorenzo Pellegrini, Andrea Cossu, Antonio Carta, Gabriele Graffieti, Tyler L. Hayes, Matthias De Lange, Marc Masana, Jary Pomponi, Gido van de Ven, Martin Mundt, Qi She, Keiland Cooper, Jeremy Forest, Eden Belouadah, Simone Calderara, German I. Parisi, Fabio Cuzzolin, Andreas Tolias, Simone Scardapane, Luca Antiga, Subutai Ahmad, Adrian Popescu, Christopher Kanan, Joost van de Weijer, Tinne Tuytelaars, Davide Bacciu, Davide Maltoni Read Full Paper → Learning continually from non-stationary data streams is a long-standing goal and a challenging problem in […]
-
Ternary Feature Masks: zero-forgetting for task-incremental learning
Marc Masana, Tinne Tuytelaars, Joost van de Weijer Read Full Paper → We propose a zero-forgetting approach to continual learning for the task-aware regime, where the task label is known at inference. By using ternary masks we can upgrade a model to new tasks, reusing knowledge from previous tasks while not forgetting anything about them. Using […]
-
Continual learning in cross-modal retrieval
Kai Wang, Luis Herranz, Joost van de Weijer Read Full Paper → Multimodal representations and continual learning are two areas closely related to human intelligence. The former considers the learning of shared representation spaces where information from different modalities can be compared and integrated (we focus on cross-modal retrieval between language and visual representations). The latter studies […]
-
DANICE: Domain adaptation without forgetting in neural image compression
Sudeep Katakol, Luis Herranz, Fei Yang, Marta Mrak Read Full Paper → Neural image compression (NIC) is a new coding paradigm where coding capabilities are captured by deep models learned from data. This data-driven nature enables new potential functionalities. In this paper, we study the adaptability of codecs to custom domains of interest. We show that NIC codecs […]
-
Slimmable Compressive Autoencoders for Practical Neural Image Compression
Fei Yang, Luis Herranz, Yongmei Cheng, Mikhail G. Mozerov Read Full Paper → Neural image compression leverages deep neural networks to outperform traditional image codecs in rate-distortion performance. However, the resulting models are also heavy, computationally demanding and generally optimized for a single rate, limiting their practical use. Focusing on practical image compression, we propose slimmable compressive autoencoders […]
-
Bookworm continual learning: beyond zero-shot learning and continual learning
Kai Wang, Luis Herranz, Anjan Dutta, Joost van de Weijer Read Full Paper → We propose bookworm continual learning (BCL), a flexible setting where unseen classes can be inferred via a semantic model, and the visual model can be updated continually. Thus BCL generalizes both continual learning (CL) and zero-shot learning (ZSL). We also propose the bidirectional imagination (BImag) […]
-
Disentanglement of Color and Shape Representations for Continual Learning
David Berga, Marc Masana, Joost van de Weijer Read Full Paper → We hypothesize that disentangled feature representations suffer less from catastrophic forgetting. As a case study, we perform explicit disentanglement of color and shape by adjusting the network architecture. We tested classification accuracy and forgetting in a task-incremental setting with the Oxford-102 Flowers dataset. We combine our […]
-
On Class Orderings for Incremental Learning
Marc Masana, Bartłomiej Twardowski, Joost van de Weijer Read Full Paper → The influence of class orderings in the evaluation of incremental learning has received very little attention. In this paper, we investigate the impact of class orderings for incrementally learned classifiers. We propose a method to compute various orderings for a dataset. The orderings are derived […]
-
RATT: Recurrent Attention to Transient Tasks for Continual Image Captioning
Riccardo Del Chiaro, Bartłomiej Twardowski, Andrew D. Bagdanov, Joost van de Weijer Read Full Paper → Research on continual learning has led to a variety of approaches to mitigating catastrophic forgetting in feed-forward classification networks. Until now, surprisingly little attention has been paid to continual learning of recurrent models applied to problems like image captioning. In this paper […]
-
DeepI2I: Enabling Deep Hierarchical Image-to-Image Translation by Transferring from GANs
Yaxing Wang, Lu Yu, Joost van de Weijer Read Full Paper → Image-to-image translation has recently achieved remarkable results. However, despite this success, it suffers from inferior performance when translations between classes require large shape changes. We attribute this to the high-resolution bottlenecks which are used by current state-of-the-art image-to-image methods. Therefore, in this work, we propose […]