Category: Publications

  • Attention Distillation: self-supervised vision transformer students need more guidance

    Kai Wang, Fei Yang, Joost van de Weijer

    Self-supervised learning has been widely applied to train high-quality vision transformers. Unleashing their excellent performance on memory- and compute-constrained devices is therefore an important research topic. However, how to distill knowledge from one self-supervised ViT to another has not yet been explored. Moreover, the […]
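
    The excerpt ends before the method, but the title points at the core mechanism: guiding a student ViT with a teacher's self-attention. Below is a minimal, hypothetical PyTorch sketch of generic attention-map distillation; the single-head module, token shapes, and MSE loss are illustrative assumptions, not the paper's formulation.

    ```python
    # Hypothetical sketch: distill a frozen teacher's attention maps into a student.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SelfAttention(nn.Module):
        """Single-head self-attention that also returns its attention map."""
        def __init__(self, dim):
            super().__init__()
            self.q, self.k, self.v = (nn.Linear(dim, dim) for _ in range(3))

        def forward(self, x):  # x: (batch, tokens, dim)
            attn = torch.softmax(
                self.q(x) @ self.k(x).transpose(-2, -1) / x.shape[-1] ** 0.5, dim=-1)
            return attn @ self.v(x), attn

    teacher, student = SelfAttention(64), SelfAttention(64)
    teacher.eval()                        # frozen self-supervised teacher

    x = torch.randn(8, 197, 64)           # 196 patch tokens + [CLS]
    with torch.no_grad():
        _, attn_t = teacher(x)
    _, attn_s = student(x)

    loss = F.mse_loss(attn_s, attn_t)     # match student attention to teacher's
    loss.backward()
    ```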

  • Class-incremental learning: survey and performance evaluation on image classification

    Marc Masana, Xialei Liu, Bartlomiej Twardowski, Mikel Menta, Andrew D. Bagdanov, Joost van de Weijer

    For future learning systems, incremental learning is desirable because it allows for: efficient resource usage by eliminating the need to retrain from scratch at the arrival of new data; reduced memory usage by preventing or limiting the amount of data required […]

  • Distilling GANs with Style-Mixed Triplets for X2I Translation with Limited Data

    Yaxing Wang, Joost van de Weijer, Lu Yu, Shangling Jui

    Conditional image synthesis is an integral part of many X2I translation systems, including image-to-image, text-to-image and audio-to-image translation systems. Training these large systems generally requires huge amounts of training data. Therefore, we investigate knowledge distillation to transfer knowledge from a high-quality unconditioned generative model […]
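
    The style-mixed triplet construction is not visible in the excerpt, but the basic distillation setup it builds on can be sketched: a compact student generator is trained to reproduce a frozen teacher's outputs on shared latent codes. The architectures, L1 loss, and sizes below are assumptions for illustration only.

    ```python
    # Hypothetical sketch: output-matching distillation from a frozen generator.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def make_generator(width):
        return nn.Sequential(nn.Linear(128, width), nn.ReLU(),
                             nn.Linear(width, 3 * 32 * 32), nn.Tanh())

    teacher = make_generator(1024).eval()  # large, pretrained (weights assumed)
    student = make_generator(256)          # compact student to be trained

    z = torch.randn(16, 128)               # shared latent codes
    with torch.no_grad():
        target = teacher(z)
    loss = F.l1_loss(student(z), target)   # pixel-level distillation term
    loss.backward()
    ```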

  • Towards Exemplar-Free Continual Learning in Vision Transformers: an Account of Attention, Functional and Weight Regularization

    Francesco Pelosin, Saurav Jha, Andrea Torsello, Bogdan Raducanu, Joost van de Weijer

    In this paper, we investigate the continual learning of Vision Transformers (ViT) for the challenging exemplar-free scenario, with special focus on how to efficiently distill the knowledge of its crucial self-attention mechanism (SAM). Our work takes an initial step towards a surgical investigation […]
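
    Two of the regularizers named in the title can be sketched generically: functional regularization (keep current outputs close to the previous-task model's) and weight regularization (penalize drift from the previous weights). The tiny network and the 0.1 trade-off weight below are assumptions, not the paper's settings.

    ```python
    # Hypothetical sketch: functional + weight regularization against a snapshot.
    import copy
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    model = nn.Sequential(nn.Linear(384, 384), nn.GELU(), nn.Linear(384, 384))
    old_model = copy.deepcopy(model).eval()           # frozen previous-task model
    old_params = [p.detach().clone() for p in model.parameters()]

    x = torch.randn(32, 384)                          # current-task batch
    with torch.no_grad():
        target = old_model(x)
    func_reg = F.mse_loss(model(x), target)           # functional term
    weight_reg = sum(((p - q) ** 2).sum()
                     for p, q in zip(model.parameters(), old_params))

    loss = func_reg + 0.1 * weight_reg                # 0.1: illustrative weight
    loss.backward()
    ```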

  • Continually Learning Self-Supervised Representations with Projected Functional Regularization

    Alex Gomez-Villa, Bartlomiej Twardowski, Lu Yu, Andrew D. Bagdanov, Joost van de Weijer

    Recent self-supervised learning methods are able to learn high-quality image representations and are closing the gap with supervised approaches. However, these methods are unable to acquire new knowledge incrementally; they are, in fact, mostly used only as a pre-training phase over […]
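
    The title suggests the mechanism: a learned projector maps the current encoder's features into the frozen previous encoder's space, so old knowledge is preserved functionally without freezing the new representation. The sketch below assumes a BYOL/SimSiam-style negative cosine loss; the dimensions and projector are illustrative.

    ```python
    # Hypothetical sketch: projected functional regularization between encoders.
    import copy
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    encoder = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 128))
    old_encoder = copy.deepcopy(encoder).eval()   # frozen after the last task
    projector = nn.Sequential(nn.Linear(128, 128), nn.ReLU(), nn.Linear(128, 128))

    x = torch.randn(64, 512)
    with torch.no_grad():
        z_old = old_encoder(x)
    z_new = encoder(x)

    # Regularizer: projected new features should predict the old features.
    reg = -F.cosine_similarity(projector(z_new), z_old, dim=-1).mean()
    reg.backward()
    ```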

  • Area Under the ROC Curve Maximization for Metric Learning

    Bojana Gajić, Ariel Amato, Ramon Baldrich, Joost van de Weijer, Carlo Gatta

    Most popular metric learning losses have no direct relation to the evaluation metrics that are subsequently applied to evaluate their performance. We hypothesize that training a metric learning model by maximizing the area under the ROC curve (which is […]
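
    The stated idea, maximizing AUC directly, admits a compact illustration: AUC counts how often a positive pair scores above a negative pair, so replacing that step function with a sigmoid yields a differentiable surrogate. The batch construction below is a generic assumption, not the paper's exact loss.

    ```python
    # Hypothetical sketch: a soft-AUC loss over positive and negative pairs.
    import torch

    emb = torch.randn(16, 64, requires_grad=True)   # batch of embeddings
    labels = torch.randint(0, 4, (16,))             # 4 identities

    sim = emb @ emb.t()                             # pairwise similarity matrix
    same = labels[:, None] == labels[None, :]
    eye = torch.eye(len(labels), dtype=torch.bool)
    pos = sim[same & ~eye]                          # positive-pair scores
    neg = sim[~same]                                # negative-pair scores

    # AUC = P(pos > neg); minimize the smoothed count of violations.
    loss = torch.sigmoid(neg[None, :] - pos[:, None]).mean()
    loss.backward()
    ```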

  • Incremental Meta-Learning via Episodic Replay Distillation for Few-Shot Image Recognition

    Kai Wang, Xialei Liu, Andy Bagdanov, Luis Herranz, Shangling Jui, Joost van de Weijer

    Most meta-learning approaches assume the existence of a very large set of labeled data available for episodic meta-learning of base knowledge. This contrasts with the more realistic continual learning paradigm in which data arrives incrementally in the form of tasks containing disjoint […]

  • Transferring Unconditional to Conditional GANs with Hyper-Modulation

    Héctor Laria, Yaxing Wang, Joost van de Weijer, Bogdan Raducanu

    GANs have matured in recent years and are able to generate high-resolution, realistic images. However, the computational resources and the data required for the training of high-quality GANs are enormous, and the study of transfer learning of these models is therefore an urgent topic. […]
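
    "Hyper-modulation" suggests a hypernetwork that turns a class embedding into modulation parameters for a pretrained unconditional generator. The sketch below assumes a simple scale-and-shift form on one layer; the actual conditioning scheme is not given in the excerpt.

    ```python
    # Hypothetical sketch: class-conditional modulation from a hypernetwork.
    import torch
    import torch.nn as nn

    hidden = 256
    layer = nn.Linear(128, hidden)         # one pretrained generator layer
    layer.requires_grad_(False)            # keep the base generator frozen
    class_emb = nn.Embedding(10, 64)       # 10 target classes
    hyper = nn.Linear(64, 2 * hidden)      # emits (scale, shift) per class

    z = torch.randn(8, 128)                # latent codes
    y = torch.randint(0, 10, (8,))         # class labels

    scale, shift = hyper(class_emb(y)).chunk(2, dim=-1)
    h = layer(z) * (1 + scale) + shift     # modulated activations
    h.sum().backward()                     # gradients reach only the hypernet
    ```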

  • Class-Balanced Active Learning for Image Classification

    Javad Zolfaghari Bengar, Joost van de Weijer, Laura Lopez Fuentes, Bogdan Raducanu

    Active learning aims to reduce the labeling effort required to train models by learning an acquisition function that selects, from a large unlabeled pool, the most relevant data for which labels should be requested. Active learning is generally studied […]
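
    A minimal acquisition step consistent with the excerpt can be sketched: rank unlabeled samples by predictive entropy, then favor samples whose predicted class is under-represented among the labels collected so far. The rarity weighting is a simple assumed proxy for the paper's balancing objective.

    ```python
    # Hypothetical sketch: entropy-based acquisition with a class-balance weight.
    import torch

    logits = torch.randn(1000, 10)                        # pool predictions
    labeled_counts = torch.randint(5, 50, (10,)).float()  # labels per class so far

    probs = logits.softmax(dim=-1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=-1)

    pred = probs.argmax(dim=-1)
    rarity = 1.0 / labeled_counts[pred]    # boost under-represented classes
    scores = entropy * rarity

    query = scores.topk(16).indices        # next samples to annotate
    print(query.tolist())
    ```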