Category: NeurIPS

  • FeCAM: Exploiting the Heterogeneity of Class Distributions in Exemplar-Free Continual Learning

    Dipam Goswami, Yuyang Liu, Bartłomiej Twardowski, Joost van de Weijer

    Exemplar-free class-incremental learning (CIL) poses several challenges since it prohibits the rehearsal of data from previous tasks and thus suffers from catastrophic forgetting. Recent approaches to incrementally learning the classifier by freezing the feature extractor after the first task have gained much attention. In […]
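
    A minimal illustrative sketch of this frozen-backbone setup is given at the end of this list.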

  • IterInv: Iterative Inversion for Pixel-Level T2I Models

    Chuanming Tang, Kai Wang, Joost van de Weijer

    Large-scale text-to-image diffusion models have been a ground-breaking development in generating convincing images following an input text prompt. The goal of image editing research is to give users control over the generated images by modifying the text prompt. Current image editing techniques predominantly hinge on […]

  • Dynamic Prompt Learning: Addressing Cross-Attention Leakage for Text-Based Image Editing

    Kai Wang, Fei Yang, Shiqi Yang, Muhammad Atif Butt, Joost van de Weijer

    Large-scale text-to-image generative models have been a ground-breaking development in generative AI, with diffusion models showing their astounding ability to synthesize convincing images following an input text prompt. The goal of image editing research is to give users control over the generated […]

  • Attracting and Dispersing: A Simple Approach for Source-free Domain Adaptation

    Shiqi Yang, Yaxing Wang, Kai Wang, Shangling Jui, Joost van de Weijer

    We propose a simple but effective source-free domain adaptation (SFDA) method. Treating SFDA as an unsupervised clustering problem and following the intuition that local neighbors in feature space should have more similar predictions than other features, we propose to optimize an objective of […]
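
    A minimal illustrative sketch of a neighborhood-consistency objective of this kind is given at the end of this list.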

  • Exploiting the Intrinsic Neighborhood Structure for Source-free Domain Adaptation

    Shiqi Yang, Yaxing Wang, Joost van de Weijer, Luis Herranz, Shangling Jui

    Domain adaptation (DA) aims to alleviate the domain shift between source domain and target domain. Most DA methods require access to the source data, but often that is not possible (e.g. due to data privacy or intellectual property). In this paper, we address […]

  • RATT: Recurrent Attention to Transient Tasks for Continual Image Captioning

    Riccardo Del Chiaro, Bartłomiej Twardowski, Andrew D. Bagdanov, Joost van de Weijer

    Research on continual learning has led to a variety of approaches to mitigating catastrophic forgetting in feed-forward classification networks. Until now, surprisingly little attention has been paid to continual learning of recurrent models applied to problems like image captioning. In this paper […]

  • DeepI2I: Enabling Deep Hierarchical Image-to-Image Translation by Transferring from GANs

    Yaxing Wang, Lu Yu, Joost van de Weijer

    Image-to-image translation has recently achieved remarkable results. Despite this success, however, it suffers from inferior performance when translations between classes require large shape changes. We attribute this to the high-resolution bottlenecks which are used by current state-of-the-art image-to-image methods. Therefore, in this work, we propose […]

  • Memory Replay GANs: learning to generate images from new categories without forgetting

    Chenshen Wu, Luis Herranz, Xialei Liu, Yaxing Wang, Joost van de Weijer, Bogdan Raducanu

    Previous works on sequential learning address the problem of forgetting in discriminative models. In this paper we consider the case of generative models. In particular, we investigate generative adversarial networks (GANs) in the task of learning new categories in a sequential fashion. […]

  • Image-to-image translation for cross-domain disentanglement

    Abel Gonzalez-Garcia, Joost van de Weijer, Yoshua Bengio

    Deep image translation methods have recently shown excellent results, outputting high-quality images covering multiple modes of the data distribution. There has also been increased interest in disentangling the internal representations learned by deep methods to further improve their performance and achieve a finer control. In […]
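
The FeCAM entry above refers to the common exemplar-free setup in which the feature extractor is frozen after the first task and only the classifier is extended for new classes. The snippet below is a minimal, hypothetical sketch of that setup using a plain nearest-class-mean classifier on frozen features (PyTorch); it illustrates the setting only and is not the FeCAM classifier, which, per its title, additionally exploits the heterogeneity of class distributions.

```python
# Minimal sketch (PyTorch, hypothetical): exemplar-free class-incremental
# classification with a frozen feature extractor. Only per-class mean
# prototypes are stored; no data from previous tasks is revisited.
import torch
import torch.nn as nn


class PrototypeClassifier:
    def __init__(self, backbone: nn.Module):
        self.backbone = backbone.eval()        # frozen after the first task
        for p in self.backbone.parameters():
            p.requires_grad_(False)
        self.prototypes = {}                   # class id -> mean feature

    @torch.no_grad()
    def add_task(self, loader):
        """Accumulate class-mean features for the classes of a new task."""
        sums, counts = {}, {}
        for x, y in loader:
            feats = self.backbone(x)           # (B, D) frozen features
            for f, c in zip(feats, y.tolist()):
                sums[c] = sums.get(c, 0) + f
                counts[c] = counts.get(c, 0) + 1
        for c in sums:
            self.prototypes[c] = sums[c] / counts[c]

    @torch.no_grad()
    def predict(self, x):
        """Nearest-class-mean prediction over all classes seen so far."""
        feats = self.backbone(x)                                     # (B, D)
        classes = sorted(self.prototypes)
        protos = torch.stack([self.prototypes[c] for c in classes])  # (C, D)
        nearest = torch.cdist(feats, protos).argmin(dim=1)           # (B,)
        return torch.tensor([classes[i] for i in nearest.tolist()])
```

Using a Euclidean nearest-mean rule here is an illustrative simplification; the point of the sketch is only the pipeline in which old-task data is never replayed and the backbone stays fixed.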
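
The "Attracting and Dispersing" entry states the core intuition that predictions of local neighbors in feature space should agree. Below is a minimal, hypothetical sketch of a neighborhood-consistency loss built on that intuition (PyTorch); the neighbor count k, the cosine-similarity neighbor search, and the simplified dispersion term are illustrative assumptions, not the paper's exact objective.

```python
# Minimal sketch (PyTorch, hypothetical): attract the predictions of each
# sample's nearest neighbors in feature space, and disperse predictions
# across the batch to discourage the degenerate all-one-class solution.
import torch
import torch.nn.functional as F


def neighborhood_consistency_loss(feats: torch.Tensor,
                                  logits: torch.Tensor,
                                  k: int = 3) -> torch.Tensor:
    """feats: (N, D) target-domain features; logits: (N, C) classifier outputs."""
    probs = F.softmax(logits, dim=1)                        # (N, C) predictions

    # Find each sample's k nearest neighbors by cosine similarity of features.
    with torch.no_grad():
        normed = F.normalize(feats, dim=1)
        sims = normed @ normed.T                            # (N, N) similarities
        sims.fill_diagonal_(float("-inf"))                  # exclude self-matches
        nn_idx = sims.topk(k, dim=1).indices                # (N, k) local neighbors

    # Attract: each prediction should be similar to its neighbors' predictions.
    attract = -(probs.unsqueeze(1) * probs[nn_idx]).sum(dim=-1).mean()

    # Disperse: penalize similarity to all other predictions in the batch
    # (a simplified stand-in for a dispersion term).
    disperse = (probs @ probs.T).mean()

    return attract + disperse
```

In a source-free setting, a loss of this form would be applied to unlabeled target-domain batches starting from the source-pretrained model; the relative weighting of the two terms is left unspecified here.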