Background Adaptation with Residual Modeling for Exemplar-Free Class-Incremental Semantic Segmentation

Anqi Zhang¹, Guangyu Gao¹
¹Beijing Institute of Technology, Beijing, China
Accepted to ECCV 2024
Visualization of BARM effect.

3D visualization of the Background Adaptation results. The background logits at step t combine the background logits from step t-1 with the background adaptation logits learned at step t, which both prevents direct, disorderly adjustment of previous classifiers and lets the model focus on learning residuals. The aggregated background logits of the current step are further passed through a Sigmoid function. Note that colors correspond to numerical values from small to large, ranging from red to blue.
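The caption describes a recursive residual update; the following minimal PyTorch sketch makes it concrete. The tensor names and the (B, 1, H, W) shapes are our own illustrative assumptions, not the authors' exact implementation.

import torch

def background_logit_step_t(bg_logit_prev: torch.Tensor,
                            adaptation_logit: torch.Tensor) -> torch.Tensor:
    """Background logit for step t: the background logit carried over from
    step t-1 plus the residual learned by the step-t adaptation channel.
    Unrolled over steps, this equals the step-0 background logit plus the
    sum of all per-step residuals. Shapes: (B, 1, H, W), by assumption.
    """
    bg_logit_t = bg_logit_prev + adaptation_logit
    # As in the figure, the current-step background logits are then
    # squashed with a sigmoid.
    return torch.sigmoid(bg_logit_t)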

Abstract

Class Incremental Semantic Segmentation (CISS), a branch of incremental learning for semantic segmentation, aims to segment new categories while reducing catastrophic forgetting of old ones. In addition, background shift, where the semantics of the background category change at every step, poses a particular challenge for CISS.

Current methods with a shared background classifier struggle to keep up with these changes, leading to unstable background predictions and reduced segmentation accuracy. To address this challenge, we design a novel background adaptation mechanism that explicitly models the background residual, rather than the background itself, at each step, and aggregates these residuals to represent the evolving background. The mechanism thus keeps previous background classifiers stable while letting the model concentrate on the easily learned residuals from the additional channel, which sharpens background discernment and improves the prediction of novel categories. To optimize the background adaptation mechanism precisely, we propose a Pseudo Background Binary Cross-Entropy loss and Background Adaptation losses, which amplify the adaptation effect. Group Knowledge Distillation and Background Feature Distillation strategies are further designed to prevent forgetting of old categories.

Our approach, evaluated across various incremental scenarios on the Pascal VOC 2012 and ADE20K datasets, outperforms prior exemplar-free state-of-the-art methods by mIoU margins of 3.0% on VOC 10-1 and 2.0% on ADE 100-5, notably enhancing the accuracy of new classes while mitigating catastrophic forgetting.

Method

Overview of Background Adaptation

Overview of our framework, which is built around the Background Adaptation mechanism. Previous studies often rely on a single shared background classifier, which becomes unstable as the background category evolves. We instead introduce an additional background adaptation channel for the classifier at each incremental step (see the sketch below).

Overview of Background Adaptation.
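To show how such per-step adaptation channels could sit in an incremental classifier, here is a minimal PyTorch sketch. The 1x1-convolution heads, the freezing policy, and all names (IncrementalHead, add_step, etc.) are illustrative assumptions rather than the authors' released code.

import torch
import torch.nn as nn

class IncrementalHead(nn.Module):
    """Per-step 1x1 classifier heads over a shared feature map.

    Step 0 predicts one background channel plus the base classes; every
    later step predicts its new classes plus one background-adaptation
    channel. Old heads are frozen, so previous background logits are
    never rewritten in place. (Illustrative sketch.)
    """

    def __init__(self, feat_dim: int, num_base_classes: int):
        super().__init__()
        self.heads = nn.ModuleList(
            [nn.Conv2d(feat_dim, 1 + num_base_classes, kernel_size=1)]
        )

    def add_step(self, num_new_classes: int) -> None:
        feat_dim = self.heads[0].in_channels
        # New step: class channels plus one background-adaptation channel.
        self.heads.append(nn.Conv2d(feat_dim, num_new_classes + 1, kernel_size=1))
        # Freeze every previous head to keep old predictions stable.
        for head in self.heads[:-1]:
            for p in head.parameters():
                p.requires_grad_(False)

    def forward(self, feats: torch.Tensor):
        out0 = self.heads[0](feats)
        bg_logit, class_logits = out0[:, :1], [out0[:, 1:]]
        for head in self.heads[1:]:
            out = head(feats)
            class_logits.append(out[:, :-1])
            bg_logit = bg_logit + out[:, -1:]  # residual background adaptation
        return bg_logit, torch.cat(class_logits, dim=1)

A step-t model would then score the background via torch.sigmoid(bg_logit) alongside the concatenated class logits.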

Incremental Training Strategy

Background Adaptation Losses

The Background Adaptation (BgA) losses provide further refined adjustment of the background adaptation channel; they include a cross-entropy loss for positive adaptation and a triplet loss for negative adaptation (see the sketch below).

Background Adaptation Losses.
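The page does not spell the two terms out, so the sketch below is only one plausible shape for them: binary cross-entropy on the single adaptation channel for positive adaptation, and a standard triplet margin loss on sampled embeddings for negative adaptation. The masks, sampling scheme, and margin are our assumptions.

import torch
import torch.nn.functional as F

def bga_losses(adapt_logit: torch.Tensor,
               pos_mask: torch.Tensor,
               anchor: torch.Tensor,
               positive: torch.Tensor,
               negative: torch.Tensor,
               margin: float = 0.2):
    """Illustrative Background Adaptation losses (not the exact formulation).

    adapt_logit: (B, 1, H, W) background-adaptation channel.
    pos_mask:    (B, 1, H, W) binary target for pixels that should be pulled
                 toward background at the current step (positive adaptation).
    anchor/positive/negative: (N, D) embeddings sampled from background and
                 new-class regions for the triplet term (negative adaptation).
    """
    # Positive adaptation: cross-entropy on the adaptation channel; binary
    # here because the channel is a single logit.
    loss_pos = F.binary_cross_entropy_with_logits(adapt_logit, pos_mask.float())
    # Negative adaptation: push background embeddings away from new-class
    # embeddings by a margin.
    loss_neg = F.triplet_margin_loss(anchor, positive, negative, margin=margin)
    return loss_pos, loss_neg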

Knowledge Distillation Strategies

Group Knowledge Distillation (GKD) and Background Feature Distillation (BFD) strategies are designed to prevent forgetting of old categories. GKD distills the probability predictions of the previous classifier into the current classifier, while BFD distills the features within the background region. A hedged sketch of both terms follows.
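As a rough illustration of the two distillation terms: GKD as groupwise softened probability matching over the old channels, and BFD as an L2 feature distance restricted to background pixels. The grouping scheme, distance choices, and all names are our assumptions, not the paper's exact definitions.

import torch
import torch.nn.functional as F

def gkd_loss(curr_logits_old, prev_logits, groups):
    """Group Knowledge Distillation (illustrative sketch).

    curr_logits_old / prev_logits: (B, C_old, H, W) logits over old
    channels from the current and previous models.
    groups: list of channel-index lists; probabilities are distilled
    group by group rather than over all channels at once (the grouping
    scheme here is an assumption).
    """
    loss = 0.0
    for g in groups:
        p_prev = F.softmax(prev_logits[:, g], dim=1)
        logp_curr = F.log_softmax(curr_logits_old[:, g], dim=1)
        loss = loss + F.kl_div(logp_curr, p_prev, reduction="batchmean")
    return loss / len(groups)

def bfd_loss(curr_feats, prev_feats, bg_mask):
    """Background Feature Distillation: L2 distance between current and
    previous features, restricted to background pixels (distance assumed).
    curr_feats / prev_feats: (B, D, H, W); bg_mask: (B, 1, H, W)."""
    mask = bg_mask.float()
    diff = (curr_feats - prev_feats) ** 2 * mask
    return diff.sum() / mask.sum().clamp_min(1.0)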

Experiment Results

Quantitative Results

Pascal VOC 2012

Pascal VOC 2012 Results.

ADE20K

ADE20K Results.

Qualitative Results

Qualitative Results.

BibTeX

@inproceedings{zhang2024background,
      title={Background Adaptation with Residual Modeling for Exemplar-Free Class-Incremental Semantic Segmentation},
      author={Zhang, Anqi and Gao, Guangyu},
      booktitle={European Conference on Computer Vision (ECCV)},
      year={2024}
}