Stacking Ensemble Learning in Deep Domain Adaptation for Ophthalmic Image Classification

Yeganeh Madadi, Vahid Seydi, Jian Sun, Edward Chaum, Siamak Yousefi

Research output: Contribution to conference › Paper › peer-review

Abstract

Domain adaptation is an attractive approach when a large amount of labeled data with similar properties but from a different domain is available. It is effective in image classification tasks where obtaining sufficient labeled data is challenging. We propose a novel method, named SELDA, that performs stacking ensemble learning by extending three domain adaptation methods to effectively solve real-world problems. The key assumption is that combining base domain adaptation models yields a more accurate and robust model by exploiting the strengths of each base model. We extend Maximum Mean Discrepancy (MMD), low-rank coding, and Correlation Alignment (CORAL) to compute the adaptation loss in the three base models. We also use a network with two fully connected layers as a meta-model to stack the output predictions of these three well-performing domain adaptation models and obtain high accuracy in ophthalmic image classification tasks. Experimental results on the Age-Related Eye Disease Study (AREDS) benchmark ophthalmic dataset demonstrate the effectiveness of the proposed model.
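The sketch below illustrates the two adaptation losses named in the abstract (a linear-kernel MMD and CORAL) and a two-fully-connected-layer meta-model that stacks three base models' class probabilities. It is a minimal PyTorch illustration of these ideas, not the authors' SELDA implementation: the layer sizes, kernel choice, class names, and the omission of the low-rank coding term are all assumptions made for brevity.

```python
# Minimal sketch, assuming PyTorch; sizes and names below are illustrative, not from the paper.
import torch
import torch.nn as nn


def mmd_loss(source, target):
    """Linear-kernel Maximum Mean Discrepancy: squared distance between batch feature means."""
    return (source.mean(dim=0) - target.mean(dim=0)).pow(2).sum()


def coral_loss(source, target):
    """Correlation Alignment: squared Frobenius distance between feature covariance matrices."""
    d = source.size(1)
    cov_s = torch.cov(source.T)  # (d, d) covariance of source features
    cov_t = torch.cov(target.T)  # (d, d) covariance of target features
    return (cov_s - cov_t).pow(2).sum() / (4.0 * d * d)


class MetaModel(nn.Module):
    """Two fully connected layers that stack the base models' output predictions."""

    def __init__(self, num_classes, num_base_models=3, hidden=32):  # hidden size is an assumption
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_classes * num_base_models, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, base_predictions):
        # base_predictions: list of (batch, num_classes) tensors, one per base model
        return self.net(torch.cat(base_predictions, dim=1))


if __name__ == "__main__":
    batch, feat_dim, num_classes = 16, 128, 4  # illustrative dimensions
    src_feats = torch.randn(batch, feat_dim)
    tgt_feats = torch.randn(batch, feat_dim)
    print("MMD loss:", mmd_loss(src_feats, tgt_feats).item())
    print("CORAL loss:", coral_loss(src_feats, tgt_feats).item())

    base_preds = [torch.softmax(torch.randn(batch, num_classes), dim=1) for _ in range(3)]
    meta = MetaModel(num_classes)
    print("Stacked prediction shape:", meta(base_preds).shape)
```

In a full pipeline, each base model would be trained with its own adaptation loss (MMD, low-rank coding, or CORAL) added to the classification loss, and the meta-model would then be fit on the held-out predictions of the three base models.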
Original language: English
Publication status: Published - 27 Sept 2021

Keywords

  • Stacking ensemble learning
  • Domain adaptation
  • Ophthalmic image classification

