Alex Gomez-Villa Successfully Defended Ph.D. Thesis. Congratulations!

Title: The Analysis and Continual Learning of Deep Feature Representations

Abstract: Artificial Intelligence (AI) is having an enormous impact across diverse application fields, from autonomous driving to drug discovery. This progress is predominantly driven by deep learning, a technology whose performance scales with data availability. As the demand for large datasets grows, researchers have turned to transfer learning, a strategy that leverages pre-trained models. However, transfer learning faces several challenges, including the need for extensive annotated data, difficulty in adapting to highly disjoint domains, and the lack of knowledge accumulation when adapting to new domains. Continual learning emerges as a promising approach to address these issues, enabling models to learn from shifting data distributions without forgetting previously acquired knowledge.

Self-supervised learning has shown remarkable success in alleviating data scarcity by leveraging unlabeled data to learn meaningful representations. This paradigm offers interesting challenges and opportunities for continual learning, potentially leading to more generalizable and adaptable models. This thesis therefore explores the combination of self-supervised and continual learning. While existing continual learning theory has predominantly focused on supervised learning, we extend it to unsupervised scenarios, enabling the learning of high-quality representations from a stream of unlabeled data.

In the first part of the thesis, we introduce a method that leverages feature distillation to combine continual and self-supervised learning, enabling continual unsupervised representation learning. We then delve into the stability-plasticity dilemma, proposing strategies to optimize this trade-off in exemplar-free unsupervised continual learning, particularly for scenarios involving numerous tasks and heterogeneous architectures.
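To make the feature-distillation idea concrete, here is a minimal PyTorch sketch of one plausible formulation, in which a frozen copy of the previous-task encoder acts as a teacher and a small learnable projector maps current features into the teacher's space. The class name, projector shape, and loss form are illustrative assumptions, not the thesis's actual implementation.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureDistillationWrapper(nn.Module):
    """Sketch of feature distillation for continual self-supervised learning.

    A frozen snapshot of the encoder from the previous task serves as the
    teacher; a small learnable projector maps current features into the
    teacher's space, so the encoder stays plastic while past knowledge is
    retained. All names here are hypothetical.
    """

    def __init__(self, encoder: nn.Module, feat_dim: int):
        super().__init__()
        self.encoder = encoder                  # trainable student
        self.teacher = copy.deepcopy(encoder)   # frozen previous-task model
        for p in self.teacher.parameters():
            p.requires_grad = False
        # Projector bridging the current and previous feature spaces.
        self.projector = nn.Sequential(
            nn.Linear(feat_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, feat_dim),
        )

    def distillation_loss(self, x: torch.Tensor) -> torch.Tensor:
        z_new = self.projector(self.encoder(x))
        with torch.no_grad():
            z_old = self.teacher(x)
        # Negative cosine similarity keeps the current features predictive
        # of the old ones without freezing the encoder outright.
        return -F.cosine_similarity(z_new, z_old, dim=-1).mean()
```

In training, the distillation term would simply be added to the usual self-supervised objective, e.g. loss = ssl_loss + lam * model.distillation_loss(x), with lam trading off stability against plasticity.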

In the realm of prototype-based continual learning approaches, we find that feature drift is a primary cause of performance decline. Thus, we develop a prototype correction technique based on a projector that maps between the feature spaces of consecutive tasks. We show that this technique can be applied to any self-supervised continual learning method, yielding the first exemplar-free, semi-supervised continual learning approach.
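As a rough illustration of such prototype correction, the following sketch fits a projector that maps features of the previous encoder onto those of the current one, then applies it to the stored class prototypes. The function signature, MSE objective, and training loop are assumptions made for exposition, not the method described in the thesis.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def correct_prototypes(projector: nn.Module,
                       old_encoder: nn.Module,
                       new_encoder: nn.Module,
                       loader,
                       prototypes: torch.Tensor,
                       epochs: int = 5,
                       lr: float = 1e-3) -> torch.Tensor:
    """Hypothetical sketch: learn a map from the previous feature space
    to the current one, then move stored class prototypes across it."""
    opt = torch.optim.Adam(projector.parameters(), lr=lr)
    for _ in range(epochs):
        for x, _ in loader:                 # unlabeled current-task data
            with torch.no_grad():
                z_old = old_encoder(x)      # feature space of task t-1
                z_new = new_encoder(x)      # feature space of task t
            loss = F.mse_loss(projector(z_old), z_new)
            opt.zero_grad()
            loss.backward()
            opt.step()
    with torch.no_grad():
        return projector(prototypes)        # drift-corrected prototypes
```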

Finally, we investigate how contrastive self-supervised methods, which learn representations invariant to data augmentations, can be improved to yield more transferable representations. We explore the impact of color augmentations on self-supervised learning and introduce a physics-based color augmentation method, which proves effective for tasks where intrinsic object color is crucial. We further improve overall representation learning by combining color and shape information through different self-supervision modalities.
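For intuition on what a physics-grounded color augmentation can look like, here is a generic sketch based on a von Kries diagonal illuminant transform, which rescales the RGB channels to mimic a change of scene lighting. This is a stand-in example, not the specific augmentation proposed in the thesis.

```python
import torch

def random_illuminant(img: torch.Tensor,
                      strength: float = 0.2) -> torch.Tensor:
    """Illustrative physics-inspired color augmentation: a von Kries
    diagonal transform rescales each RGB channel by a random illuminant
    gain, approximating a change of scene lighting while leaving the
    intrinsic reflectance structure largely intact.

    img: float tensor of shape (3, H, W) with values in [0, 1].
    """
    gains = 1.0 + strength * (2 * torch.rand(3) - 1)  # per-channel gain
    gains = gains / gains.mean()                      # keep overall brightness
    return (img * gains.view(3, 1, 1)).clamp(0.0, 1.0)
```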