The Learning and Machine Perception (LAMP) team at the Computer Vision Center conducts fundamental research and technology transfer in the field of machine learning for semantic understanding of visual data. The group works with a wide variety of visual data sources: from multispectral and medical imagery and consumer camera images, to live webcam streams and video data. The recurring objective is the design of efficient and accurate algorithms for the automatic extraction of semantic information from visual media.
3 papers at ICLR 2024
Three papers were accepted:
1. Elastic Feature Consolidation for Cold Start Exemplar-free Incremental Learning (pdf).
2. Get What You Want, Not What You Don’t: Image Content Suppression for Text-to-Image Diffusion Models (pdf).
3. Divide and not forget: Ensemble of selectively trained experts in Continual Learning (pdf).
2 papers at NeurIPS 2023
Two papers were accepted:
1. FeCAM: Exploiting the Heterogeneity of Class Distributions in Exemplar-Free Continual Learning (pdf).
2. Dynamic Prompt Learning: Addressing Cross-Attention Leakage for Text-Based Image Editing (pdf).
And one workshop paper:
1. IterInv: Iterative Inversion for Pixel-Level T2I Models (pdf).
2 papers at CVPR 2023
Two papers were accepted:
1. Endpoints Weight Fusion for Class Incremental Semantic Segmentation (pdf).
2. 3D-aware multi-class image-to-image translation with NeRFs (pdf).
And one paper in the Workshop on Continual Learning in Computer Vision (CVPRW):
1. Density Map Distillation for Incremental Object Counting (pdf).
ICLR 2023
Our paper on Planckian Jitter for better color image representation has been accepted at ICLR. Great work, Simone and Alex!
1. Planckian Jitter: countering the color-crippling effects of color jitter on self-supervised training (pdf).
NeurIPS 2022
Shiqi's paper Attracting and Dispersing: A Simple Approach for Source-free Domain Adaptation has been accepted at NeurIPS 2022.
2 TPAMIs + 1 IJCV accepted
The survey paper Class-incremental learning: survey and performance evaluation on image classification has been accepted at TPAMI.
Also check out the code framework FACIL, which reproduces the results from the survey.
The paper on zero-shot learning has also been accepted at TPAMI: Generative Multi-Label Zero-Shot Learning.
And in IJCV: MineGAN++: Mining Generative Models for Efficient Knowledge Transfer to Limited Data Domains
Best Paper Award CLvision 2022
Alex won the Best Paper Award at the Continual Learning Workshop at CVPR 2022 for his paper Continually Learning Self-Supervised Representations with Projected Functional Regularization. Francesco and Saurav received the runner-up award for:
Towards Exemplar-Free Continual Learning in Vision Transformers: an Account of Attention, Functional and Weight Regularization.
Joost van de Weijer gave an invited talk at the Continual Learning Workshop.
Paper at ICLR 2022
Yaxing's paper Distilling GANs with Style-Mixed Triplets for X2I Translation with Limited Data has been accepted at ICLR 2022.
Paper at NeurIPS 2021
Shiqi has one paper at NeurIPS 2021: Exploiting the Intrinsic Neighborhood Structure for Source-free Domain Adaptation.
Two papers at ICCV2021
Two papers have been accepted for the main track: Yaxing’s paper on TransferI2I: Transfer Learning for Image-to-Image Translation from Small Datasets. And Shiqi’s paper on Generalized Source-free Domain Adaptation (see project page).
CVPR 2021
Fei’s paper Slimmable compressive autoencoders for practical neural image compression has been accepted for CVPR.
Four CVPR workshop papers were also accepted.
Two papers at NeurIPS 2020
Riccardo's paper RATT: Recurrent Attention to Transient Tasks for Continual Image Captioning, on continual learning of captioning systems, and Yaxing's paper on transfer learning for image-to-image systems, DeepI2I: Enabling Deep Hierarchical Image-to-Image Translation by Transferring from GANs, have been accepted!
4 papers at CVPR 2020
Four CVPR papers have been accepted:
- MineGAN: effective knowledge transfer from GANs to target domains with few images,
- Orderless Recurrent Models for Multi-label Classification,
- Semantic Drift Compensation for Class-Incremental Learning,
- Semi-supervised Learning for Few-shot Image-to-Image Translation.
One workshop paper was also accepted.
Two papers at WACV 2024
Two papers were accepted:
1. Adapt Your Teacher: Improving Knowledge Distillation for Exemplar-free Continual Learning, Filip Szatkowski, Mateusz Pyla, Marcin Przewięźlikowski, Sebastian Cygert, Bartłomiej Twardowski, Tomasz Trzciński
2. Plasticity-Optimized Complementary Networks for Unsupervised Continual Learning, Alex Gomez-Villa, Bartlomiej Twardowski, Kai Wang, Joost van de Weijer
BMVC 2022 and WACV 2023
Kai has two papers at BMVC 2022!
1. Attention Distillation: self-supervised vision transformer students need more guidance
2. Positive Pair Distillation Considered Harmful: Continual Meta Metric Learning for Lifelong Object Re-Identification
Dipam has published his work on incremental semantic segmentation at WACV 2023:
Attribution-aware Weight Transfer: A Warm-Start Initialization for Class-Incremental Semantic Segmentation