Tag: CVPR 2022 Workshop

  • Best Paper Award CL-Vision 2022

    Alex won the Best Paper Award at the Continual Learning Workshop, and Saurav received the runner-up award. Joost van de Weijer gave an invited talk at the Continual Learning Workshop.

  • Towards Exemplar-Free Continual Learning in Vision Transformers: an Account of Attention, Functional and Weight Regularization

    Francesco Pelosin, Saurav Jha, Andrea Torsello, Bogdan Raducanu, Joost van de Weijer

    In this paper, we investigate the continual learning of Vision Transformers (ViT) for the challenging exemplar-free scenario, with special focus on how to efficiently distill the knowledge of its crucial self-attention mechanism (SAM). Our work takes an initial step towards a surgical investigation […] Read Full Paper →
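
    The excerpt names attention, functional and weight regularization as ways to preserve the self-attention mechanism, but does not show the loss itself. As a minimal sketch only, attention distillation between the frozen previous-task ViT and the current one could look like the PyTorch snippet below; the symmetric-KL form, the tensor layout and every name here are assumptions, not the paper's formulation.

```python
import torch
import torch.nn.functional as F

def attention_distillation_loss(attn_new, attn_old, eps=1e-8):
    """Symmetric-KL penalty keeping the current model's self-attention maps close
    to those of the frozen previous-task model (generic sketch, not the paper's
    exact loss).

    attn_new, attn_old: (batch, heads, queries, keys) attention probabilities.
    """
    attn_old = attn_old.detach()  # previous-task model acts as a frozen teacher
    # KL(old || new) + KL(new || old), averaged over the batch.
    kl_1 = F.kl_div((attn_new + eps).log(), attn_old, reduction="batchmean")
    kl_2 = F.kl_div((attn_old + eps).log(), attn_new, reduction="batchmean")
    return 0.5 * (kl_1 + kl_2)

# Hypothetical usage inside a continual-learning step:
# loss = task_loss + lambda_attn * attention_distillation_loss(a_new, a_old)
```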

  • Continually Learning Self-Supervised Representations with Projected Functional Regularization

    Alex Gomez-Villa, Bartlomiej Twardowski, Lu Yu, Andrew D. Bagdanov, Joost van de Weijer

    Recent self-supervised learning methods are able to learn high-quality image representations and are closing the gap with supervised approaches. However, these methods are unable to acquire new knowledge incrementally; they are, in fact, mostly used only as a pre-training phase over […] Read Full Paper →
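
    The title points to a learned projector between the current and the past self-supervised representation. Purely as an illustration of that flavour of functional regularization, a sketch in PyTorch follows; the MLP projector, the negative-cosine objective and the dimensions are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TemporalProjector(nn.Module):
    """Small MLP mapping current features onto the frozen past encoder's feature
    space (illustrative sketch; dimensions are placeholders)."""
    def __init__(self, dim=2048, hidden=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden),
            nn.BatchNorm1d(hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, dim),
        )

    def forward(self, z):
        return self.net(z)

def projected_functional_reg(z_new, z_old, projector):
    """Negative cosine similarity between projected current features and frozen
    past features; gradients flow only through the current encoder and projector."""
    p = projector(z_new)
    return -F.cosine_similarity(p, z_old.detach(), dim=-1).mean()
```

    In general, regularizing through a learned projector gives the current encoder room to drift while the old representation remains recoverable, which is the usual motivation for constraining function space rather than weights.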

  • Five papers accepted at Computer Vision and Pattern Recognition (CVPR) 2022 Workshops

    Papers at CL-Vision:
    Paper at Efficient Deep Learning for Computer Vision:

  • Area Under the ROC Curve Maximization for Metric Learning

    Bojana Gajić, Ariel Amato, Ramon Baldrich, Joost van de Weijer, Carlo Gatta

    Most popular metric learning losses have no direct relation with the evaluation metrics that are subsequently applied to evaluate their performance. We hypothesize that training a metric learning model by maximizing the area under the ROC curve (which is […] Read Full Paper →
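
    Since the AUC measures how often a positive pair scores above a negative pair, a differentiable surrogate can be obtained by replacing that step function with a sigmoid. The snippet below only illustrates this generic idea; the all-pairs formulation and the temperature are assumptions, not the paper's exact objective.

```python
import torch

def soft_auc_loss(pos_scores, neg_scores, temperature=0.1):
    """Smooth surrogate for (1 - AUC): for every combination of a positive-pair
    score and a negative-pair score, penalize negatives that approach or exceed
    positives. pos_scores: (P,) similarities of positive pairs; neg_scores: (N,)
    similarities of negative pairs. Generic sketch, not the paper's loss."""
    # Score differences s_pos - s_neg for all P x N combinations.
    diff = pos_scores.unsqueeze(1) - neg_scores.unsqueeze(0)
    # The exact AUC counts diff > 0; a sigmoid makes the count differentiable.
    return torch.sigmoid(-diff / temperature).mean()
```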

  • Incremental Meta-Learning via Episodic Replay Distillation for Few-Shot Image Recognition

    Kai Wang, Xialei Liu, Andy Bagdanov, Luis Herranz, Shangling Jui, Joost van de Weijer

    Most meta-learning approaches assume the existence of a very large set of labeled data available for episodic meta-learning of base knowledge. This contrasts with the more realistic continual learning paradigm in which data arrives incrementally in the form of tasks containing disjoint […] Read Full Paper →
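
    A generic way to combine episodic replay with distillation is to add, on episodes replayed from earlier tasks, a knowledge-distillation term against the frozen previous meta-learner while meta-training on the new task. The sketch below shows only that combination; the episode format, the `meta_loss_fn` helper and the hyper-parameters are hypothetical and not taken from the paper.

```python
import torch
import torch.nn.functional as F

def replay_distillation_step(model, old_model, new_episode, replay_episodes,
                             meta_loss_fn, lam=1.0, T=2.0):
    """One incremental meta-learning update (sketch): meta-loss on the new task's
    episode plus a distillation term keeping the current learner's outputs on
    replayed past episodes close to the frozen previous learner's outputs.
    `meta_loss_fn` is a hypothetical callable wrapping the inner-loop adaptation;
    each episode is assumed to be a dict holding a "query" batch of inputs."""
    loss = meta_loss_fn(model, new_episode)
    for episode in replay_episodes:
        with torch.no_grad():
            old_logits = old_model(episode["query"])  # frozen teacher
        new_logits = model(episode["query"])
        kd = F.kl_div(F.log_softmax(new_logits / T, dim=-1),
                      F.softmax(old_logits / T, dim=-1),
                      reduction="batchmean") * (T * T)
        loss = loss + lam * kd
    return loss
```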

  • Transferring Unconditional to Conditional GANs with Hyper-Modulation

    Héctor Laria, Yaxing Wang, Joost van de Weijer, Bogdan Raducanu

    GANs have matured in recent years and are able to generate high-resolution, realistic images. However, the computational resources and the data required for the training of high-quality GANs are enormous, and the study of transfer learning of these models is therefore an urgent topic. […] Read Full Paper →
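
    A common way to make a pretrained unconditional generator class-conditional is to let a small hyper-network emit per-class modulation of intermediate features while most of the generator stays frozen. The module below sketches only that general pattern; the class-embedding size, the scale-and-shift form and all names are illustrative assumptions rather than the paper's hyper-modulation design.

```python
import torch
import torch.nn as nn

class ClassHyperModulation(nn.Module):
    """Maps a class label to per-channel scale and shift that modulate the
    features of one layer of a pretrained unconditional generator (generic
    sketch; shapes and names are placeholders)."""
    def __init__(self, num_classes, channels, embed_dim=128):
        super().__init__()
        self.embed = nn.Embedding(num_classes, embed_dim)
        self.to_scale = nn.Linear(embed_dim, channels)
        self.to_shift = nn.Linear(embed_dim, channels)

    def forward(self, feat, labels):
        # feat: (B, C, H, W) intermediate generator features; labels: (B,) class ids
        e = self.embed(labels)
        scale = 1.0 + self.to_scale(e).unsqueeze(-1).unsqueeze(-1)
        shift = self.to_shift(e).unsqueeze(-1).unsqueeze(-1)
        return feat * scale + shift
```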