Category: CVPR

  • One-Way Ticket: Time-Independent Unified Encoder for Distilling Text-to-Image Diffusion Models

    Senmao Li, Lei Wang, Kai Wang, Tao Liu, Jiehang Xie, Joost van de Weijer, Fahad Shahbaz Khan, Shiqi Yang, Yaxing Wang, Jian Yang
    Text-to-Image (T2I) diffusion models have made remarkable advancements in generative modeling; however, they face a trade-off between inference speed and image quality, posing challenges for efficient deployment. Existing distilled T2I models can generate high-fidelity images with fewer […]

  • The Art of Deception: Color Visual Illusions and Diffusion Models

    Alex Gomez-Villa, Kai Wang, Alejandro C. Parraga, Bartlomiej Twardowski, Jesus Malo, Javier Vazquez-Corral, Joost van de Weijer
    Visual illusions in humans arise when interpreting out-of-distribution stimuli: if the observer is adapted to certain statistics, perception of outliers deviates from reality. Recent studies have shown that artificial neural networks (ANNs) can also be deceived by visual illusions. This […]

  • Resurrecting Old Classes with New Data for Exemplar-Free Continual Learning

    Dipam Goswami, Albin Soutif-Cormerais, Yuyang Liu, Sandesh Kamath, Bartlomiej Twardowski, Joost van de Weijer
    Continual learning methods are known to suffer from catastrophic forgetting, a phenomenon that is particularly hard to counter for methods that do not store exemplars of previous tasks. Therefore, to reduce potential drift in the feature extractor […]

  • Density Map Distillation for Incremental Object Counting

    Chenshen Wu, Joost van de Weijer
    We investigate the problem of incremental learning for object counting, where a method must learn to count a variety of object classes from a sequence of datasets. A naïve approach to incremental object counting would suffer from catastrophic forgetting, resulting in a dramatic […]

  • 3D-aware multi-class image-to-image translation with NeRFs

    Senmao Li, Joost van de Weijer, Yaxing Wang, Fahad Shahbaz Khan, Meiqin Liu, Jian Yang
    Recent advances in 3D-aware generative models (3D-aware GANs) combined with Neural Radiance Fields (NeRF) have achieved impressive results. However, no prior work investigates 3D-aware GANs for 3D-consistent multi-class image-to-image (3D-aware I2I) translation. Naively using 2D-I2I translation methods suffers from unrealistic […]

  • Endpoints Weight Fusion for Class Incremental Semantic Segmentation

    Jia-Wen Xiao, Chang-Bin Zhang, Jiekang Feng, Xialei Liu, Joost van de Weijer, Ming-Ming Cheng
    Class incremental semantic segmentation (CISS) focuses on alleviating catastrophic forgetting to improve discrimination. Previous work mainly exploits regularization (e.g., knowledge distillation) to maintain previous knowledge in the current model. However, distillation alone often yields limited gain to […]
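The title's core idea, fusing the model weights at the two endpoints of an incremental training step (the model before and after learning the new classes), can be illustrated as a per-parameter linear interpolation. This is a hedged sketch of that general idea under assumed names (`endpoint_weight_fusion`, `alpha`), not the paper's exact formulation:

```python
import numpy as np

def endpoint_weight_fusion(old_params, new_params, alpha=0.5):
    """Fuse the weights at the two endpoints of an incremental step.

    old_params: parameters before training on the current task
    new_params: parameters after training on the current task
    alpha:      weight given to the old (stability) endpoint
    """
    # Per-parameter linear interpolation between the two endpoints.
    return {name: alpha * old_params[name] + (1.0 - alpha) * new_params[name]
            for name in old_params}

# Toy example with a single parameter tensor.
old = {"w": np.array([0.0, 2.0])}
new = {"w": np.array([2.0, 0.0])}
fused = endpoint_weight_fusion(old, new, alpha=0.5)
```

With `alpha=0.5` the fused weights are the midpoint of the two endpoints, trading plasticity on the new classes against stability on the old ones.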

  • Towards Exemplar-Free Continual Learning in Vision Transformers: an Account of Attention, Functional and Weight Regularization

    Francesco Pelosin, Saurav Jha, Andrea Torsello, Bogdan Raducanu, Joost van de Weijer
    In this paper, we investigate the continual learning of Vision Transformers (ViT) for the challenging exemplar-free scenario, with special focus on how to efficiently distill the knowledge of its crucial self-attention mechanism (SAM). Our work takes an initial step towards a surgical investigation […]

  • Continually Learning Self-Supervised Representations with Projected Functional Regularization

    Alex Gomez-Villa, Bartlomiej Twardowski, Lu Yu, Andrew D. Bagdanov, Joost van de Weijer
    Recent self-supervised learning methods are able to learn high-quality image representations and are closing the gap with supervised approaches. However, these methods are unable to acquire new knowledge incrementally: they are, in fact, mostly used only as a pre-training phase over […]

  • Area Under the ROC Curve Maximization for Metric Learning

    Bojana Gajić, Ariel Amato, Ramon Baldrich, Joost van de Weijer, Carlo Gatta
    Most popular metric learning losses have no direct relation with the evaluation metrics that are subsequently applied to evaluate their performance. We hypothesize that training a metric learning model by maximizing the area under the ROC curve (which is […]
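The hypothesis above, training a metric learning model by maximizing the area under the ROC curve, can be illustrated with a standard pairwise relaxation: AUC is the fraction of (positive, negative) score pairs ranked correctly, so replacing the 0/1 ranking indicator with a sigmoid yields a differentiable surrogate loss. A minimal NumPy sketch (the function name and `tau` temperature are illustrative assumptions, not the paper's method):

```python
import numpy as np

def pairwise_auc_surrogate_loss(pos_scores, neg_scores, tau=1.0):
    """Smooth surrogate for 1 - AUC over similarity scores.

    AUC is the fraction of (positive, negative) pairs where the
    positive scores higher; the hard indicator [margin <= 0] is
    relaxed with a sigmoid of temperature tau to make it smooth.
    """
    # All pairwise margins: positive score minus negative score.
    margins = pos_scores[:, None] - neg_scores[None, :]
    # Sigmoid relaxation of the mis-ranking indicator.
    return float(np.mean(1.0 / (1.0 + np.exp(margins / tau))))

# Well-separated scores give a loss near 0; inverted scores near 1.
pos = np.array([2.0, 3.0])
neg = np.array([-1.0, 0.0])
loss = pairwise_auc_surrogate_loss(pos, neg)
```

Minimizing such a surrogate with gradient descent directly pushes positive pairs to outrank negative ones, which is exactly what AUC measures at evaluation time.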

  • Incremental Meta-Learning via Episodic Replay Distillation for Few-Shot Image Recognition

    Kai Wang, Xialei Liu, Andy Bagdanov, Luis Herranz, Shangling Jui, Joost van de Weijer
    Most meta-learning approaches assume the existence of a very large set of labeled data available for episodic meta-learning of base knowledge. This contrasts with the more realistic continual learning paradigm in which data arrives incrementally in the form of tasks containing disjoint […]