Category: Publications

  • HCV: Hierarchy-Consistency Verification for Incremental Implicitly-Refined Classification

    Kai Wang, Xialei Liu, Luis Herranz, Joost van de Weijer. Human beings learn and accumulate hierarchical knowledge over their lifetime. This knowledge is associated with previous concepts for consolidation and hierarchical construction. However, current incremental learning methods lack the ability to build a concept hierarchy by associating new concepts with old ones. A more […] Read Full Paper →

  • Exploiting the Intrinsic Neighborhood Structure for Source-free Domain Adaptation

    Shiqi Yang, Yaxing Wang, Joost van de Weijer, Luis Herranz, Shangling Jui. Domain adaptation (DA) aims to alleviate the domain shift between a source domain and a target domain. Most DA methods require access to the source data, but this is often not possible (e.g. due to data privacy or intellectual property constraints). In this paper, we address […] Read Full Paper →

  • Generalized Source-free Domain Adaptation

    Shiqi Yang, Yaxing Wang, Joost van de Weijer, Luis Herranz, Shangling Jui. Domain adaptation (DA) aims to transfer the knowledge learned from a source domain to an unlabeled target domain. Some recent works tackle source-free domain adaptation (SFDA), where only a source pre-trained model is available for adaptation to the target domain. However, those methods […] Read Full Paper →

  • TransferI2I: Transfer Learning for Image-to-Image Translation from Small Datasets

    Yaxing Wang, Hector Laria Mantecon, Joost van de Weijer, Laura Lopez-Fuentes, Bogdan Raducanu. Image-to-image (I2I) translation has matured in recent years and can generate high-quality, realistic images. However, despite this success, it still faces important challenges when applied to small domains. Existing methods use transfer learning for I2I translation, but they still require […] Read Full Paper →

  • Avalanche: an End-to-End Library for Continual Learning

    Vincenzo Lomonaco, Lorenzo Pellegrini, Andrea Cossu, Antonio Carta, Gabriele Graffieti, Tyler L. Hayes, Matthias De Lange, Marc Masana, Jary Pomponi, Gido van de Ven, Martin Mundt, Qi She, Keiland Cooper, Jeremy Forest, Eden Belouadah, Simone Calderara, German I. Parisi, Fabio Cuzzolin, Andreas Tolias, Simone Scardapane, Luca Antiga, Subutai Ahmad, Adrian Popescu, Christopher Kanan, Joost van de Weijer, Tinne Tuytelaars, Davide Bacciu, Davide Maltoni. Learning continually from non-stationary data streams is a long-standing goal and a challenging problem in […] Read Full Paper →

  • Ternary Feature Masks: zero-forgetting for task-incremental learning

    Marc Masana, Tinne Tuytelaars, Joost van de Weijer. We propose a zero-forgetting approach to continual learning for the task-aware regime, where the task label is known at inference. Using ternary masks, we can upgrade a model to new tasks, reusing knowledge from previous tasks while forgetting nothing about them. Using […] Read Full Paper →

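As a rough illustration of the general idea behind per-task feature masking (not the paper's exact formulation; `forward`, the mask values, and all names below are illustrative assumptions), each task reads a frozen shared backbone through its own mask, so learning a new task never alters what earlier tasks see:

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen shared backbone output: 8 features trained on earlier tasks.
features = rng.standard_normal(8)

# One mask per task gates which features that task uses. The paper's
# masks are ternary; the values here are arbitrary and only
# illustrate the gating mechanism.
masks = {
    "task1": np.array([1, 1, 0, 0, 1, 0, 1, 0], dtype=float),
    "task2": np.array([1, 0, 1, 1, 1, 0, 0, 1], dtype=float),
}

def forward(task: str) -> np.ndarray:
    # Element-wise masking: each task selects its own view of the
    # shared features without overwriting any shared parameters.
    return features * masks[task]

out_before = forward("task1")
masks["task2"] = masks["task2"] * 0.5  # "training" task2 touches only its own mask
out_after = forward("task1")
assert np.array_equal(out_before, out_after)  # task1 unaffected: zero forgetting
```

Because per-task state lives entirely in the masks, updating one task's mask cannot interfere with another task's inference path.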
  • Continual learning in cross-modal retrieval

    Kai Wang, Luis Herranz, Joost van de Weijer. Multimodal representations and continual learning are two areas closely related to human intelligence. The former considers the learning of shared representation spaces where information from different modalities can be compared and integrated (we focus on cross-modal retrieval between language and visual representations). The latter studies […] Read Full Paper →

  • DANICE: Domain adaptation without forgetting in neural image compression

    Sudeep Katakol, Luis Herranz, Fei Yang, Marta Mrak. Neural image compression (NIC) is a new coding paradigm where coding capabilities are captured by deep models learned from data. This data-driven nature enables new potential functionalities. In this paper, we study the adaptability of codecs to custom domains of interest. We show that NIC codecs […] Read Full Paper →

  • Slimmable Compressive Autoencoders for Practical Neural Image Compression

    Fei Yang, Luis Herranz, Yongmei Cheng, Mikhail G. Mozerov. Neural image compression leverages deep neural networks to outperform traditional image codecs in rate-distortion performance. However, the resulting models are also heavy, computationally demanding, and generally optimized for a single rate, limiting their practical use. Focusing on practical image compression, we propose slimmable compressive autoencoders […] Read Full Paper →

  • Bookworm continual learning: beyond zero-shot learning and continual learning

    Kai Wang, Luis Herranz, Anjan Dutta, Joost van de Weijer. We propose bookworm continual learning (BCL), a flexible setting where unseen classes can be inferred via a semantic model and the visual model can be updated continually. BCL thus generalizes both continual learning (CL) and zero-shot learning (ZSL). We also propose the bidirectional imagination (BImag) […] Read Full Paper →