Category: Publications

  • Get What You Want, Not What You Don’t: Image Content Suppression for Text-to-Image Diffusion Models

Senmao Li, Joost van de Weijer, Taihang Hu, Fahad Shahbaz Khan, Qibin Hou, Yaxing Wang, Jian Yang Read Full Paper →

  • Elastic Feature Consolidation for Cold Start Exemplar-free Incremental Learning

    Simone Magistri, Tomaso Trinci, Albin Soutif-Cormerais, Joost van de Weijer, Andrew D. Bagdanov Read Full Paper → Exemplar-Free Class Incremental Learning (EFCIL) aims to learn from a sequence of tasks without having access to previous task data. In this paper, we consider the challenging Cold Start scenario in which insufficient data is available in the first task to learn […]

  • FeCAM: Exploiting the Heterogeneity of Class Distributions in Exemplar-Free Continual Learning

    Dipam Goswami, Yuyang Liu, Bartłomiej Twardowski, Joost van de Weijer Read Full Paper → Exemplar-free class-incremental learning (CIL) poses several challenges since it prohibits the rehearsal of data from previous tasks and thus suffers from catastrophic forgetting. Recent approaches to incrementally learning the classifier by freezing the feature extractor after the first task have gained much attention. In […]
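The trend the snippet describes — freezing the feature extractor after the first task and extending only the classifier — can be illustrated with a prototype-based classifier over fixed features. The sketch below is a toy numpy illustration of that general idea (the function names and the simple Mahalanobis variant are illustrative assumptions, not the paper's exact formulation):

```python
import numpy as np

def class_stats(features, labels):
    """Per-class mean and (regularized) covariance over frozen backbone features."""
    stats = {}
    for c in np.unique(labels):
        f = features[labels == c]
        cov = np.cov(f, rowvar=False) + 1e-3 * np.eye(f.shape[1])
        stats[c] = (f.mean(axis=0), cov)
    return stats

def mahalanobis_predict(x, stats):
    """Assign x to the class whose feature distribution is closest
    under the Mahalanobis distance."""
    best, best_d = None, np.inf
    for c, (mu, cov) in stats.items():
        diff = x - mu
        d = diff @ np.linalg.inv(cov) @ diff
        if d < best_d:
            best, best_d = c, d
    return best
```

New classes are added by computing their statistics only; no gradient update ever touches the backbone, so representations of earlier classes cannot drift.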

  • IterInv: Iterative Inversion for Pixel-Level T2I Models

    Chuanming Tang, Kai Wang, Joost van de Weijer Read Full Paper → Large-scale text-to-image diffusion models have been a ground-breaking development in generating convincing images following an input text prompt. The goal of image editing research is to give users control over the generated images by modifying the text prompt. Current image editing techniques predominantly hinge on […]

  • Dynamic Prompt Learning: Addressing Cross-Attention Leakage for Text-Based Image Editing

    Kai Wang, Fei Yang, Shiqi Yang, Muhammad Atif Butt, Joost van de Weijer Read Full Paper → Large-scale text-to-image generative models have been a ground-breaking development in generative AI, with diffusion models showing their astounding ability to synthesize convincing images following an input text prompt. The goal of image editing research is to give users control over the generated […]

  • ICICLE: Interpretable Class Incremental Continual Learning

    Dawid Rymarczyk, Joost van de Weijer, Bartosz Zieliński, Bartłomiej Twardowski Read Full Paper → Class-incremental learning is becoming more popular as it helps models widen their applicability while not forgetting what they already know. A trend in this area is to use a mixture-of-expert technique, where different models work together to solve the task. However, the experts are […]

  • Augmented Box Replay: Overcoming Foreground Shift for Incremental Object Detection

Yuyang Liu, Yang Cong, Dipam Goswami, Xialei Liu, Joost van de Weijer Read Full Paper → In incremental learning, replaying stored samples from previous tasks together with current task samples is one of the most efficient approaches to address catastrophic forgetting. However, unlike incremental classification, image replay has not been successfully applied to incremental object detection (IOD). In this […]
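The box-replay idea — mixing stored object boxes from previous tasks into current-task images instead of replaying whole images — can be sketched roughly as follows. This is a toy illustration under assumed conventions (integer pixel boxes in `(x1, y1, x2, y2)` order), not the authors' implementation:

```python
import numpy as np

def paste_box(current_img, stored_img, box):
    """Copy the pixels inside `box` = (x1, y1, x2, y2) from a stored
    previous-task image into a copy of the current-task image at the
    same location, so the old object appears as extra foreground."""
    x1, y1, x2, y2 = box
    out = current_img.copy()
    out[y1:y2, x1:x2] = stored_img[y1:y2, x1:x2]
    return out
```

Storing only the box region (plus its label) is far cheaper than storing full images, and the pasted foreground can be supervised alongside the current task's annotations.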

  • Density Map Distillation for Incremental Object Counting

Chenshen Wu, Joost van de Weijer Read Full Paper → We investigate the problem of incremental learning for object counting, where a method must learn to count a variety of object classes from a sequence of datasets. A naïve approach to incremental object counting would suffer from catastrophic forgetting, leading to a dramatic […]
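Distilling density maps to counter forgetting can be expressed, in its generic form, as a loss that combines current-task supervision with a term keeping the new model's density maps close to a frozen copy of the old model. The weighting and formulation below are illustrative assumptions, not the paper's exact loss:

```python
import numpy as np

def distillation_loss(new_density, old_density, gt_density, alpha=0.5):
    """Generic density-map distillation objective (illustrative):
    - task term: MSE between the new model's density map and ground truth
    - distillation term: MSE between new and frozen-old density maps,
      weighted by `alpha` (a hypothetical trade-off parameter)."""
    task = np.mean((new_density - gt_density) ** 2)
    distill = np.mean((new_density - old_density) ** 2)
    return task + alpha * distill
```

Because a density map integrates to the object count, matching the old model's maps preserves its counting behavior on previous classes while the task term fits the new data.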

  • Planckian Jitter: countering the color-crippling effects of color jitter on self-supervised training

    Simone Zini, Alex Gomez-Villa, Marco Buzzelli, Bartłomiej Twardowski, Andrew D. Bagdanov, Joost van de Weijer Read Full Paper → Several recent works on self-supervised learning are trained by mapping different augmentations of the same image to the same feature representation. The data augmentations used are of crucial importance to the quality of learned feature representations. In this paper, we analyze […]