S. Khodadadeh, L. Bölöni, and M. Shah

Unsupervised Meta-Learning For Few-Shot Image Classification


Cite as:

S. Khodadadeh, L. Bölöni, and M. Shah. Unsupervised Meta-Learning For Few-Shot Image Classification. In Proc. of Thirty-third Conference on Neural Information Processing Systems (NeurIPS-2019), pp. 10132–10142, December 2019.


Abstract:

Few-shot or one-shot learning of classifiers requires a significant inductive bias towards the type of task to be learned. One way to acquire this is by meta-learning on tasks similar to the target task. In this paper, we propose UMTRA, an algorithm that performs unsupervised, model-agnostic meta-learning for classification tasks. The meta-learning step of UMTRA is performed on a flat collection of unlabeled images. While we assume that these images can be grouped into a diverse set of classes and are relevant to the target task, neither explicit information about the classes nor any labels are needed. UMTRA uses random sampling and augmentation to create synthetic training tasks for the meta-learning phase. Labels are needed only at the final target-task learning step, and as few as one labeled sample per class suffices. On the Omniglot and Mini-Imagenet few-shot learning benchmarks, UMTRA outperforms every tested approach based on unsupervised learning of representations, alternating with the recent CACTUs algorithm for the best performance. Compared to supervised model-agnostic meta-learning approaches, UMTRA trades off some classification accuracy for a vast reduction in the number of required labels. For instance, for 5-way 5-shot classification on the Omniglot dataset, UMTRA obtains 95.43% accuracy with only 25 labels, while supervised MAML obtains 98.83% with 24,025 labels.
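
The following Python sketch illustrates the synthetic task construction described above: N images are drawn at random from the unlabeled collection, each is treated as its own pseudo-class, and augmented copies serve as the validation (query) samples. It assumes the collection is a NumPy array of images; the name make_umtra_task and the augment hook are illustrative, not taken from the paper's released code.

import numpy as np

def make_umtra_task(unlabeled_images, n_way, augment, rng=np.random):
    """Build one synthetic N-way 1-shot episode from unlabeled images.

    Each randomly drawn image becomes its own pseudo-class; an augmented
    copy of the same image serves as its validation (query) sample.
    """
    idx = rng.choice(len(unlabeled_images), size=n_way, replace=False)
    support_x = unlabeled_images[idx]            # one image per pseudo-class
    support_y = np.arange(n_way)                 # pseudo-labels 0..N-1
    query_x = np.stack([augment(img) for img in support_x])
    query_y = support_y.copy()                   # augmentation keeps the label
    return (support_x, support_y), (query_x, query_y)

Episodes built this way can replace supervised episodes in a standard MAML outer loop; real labels are consumed only when adapting to the final target task.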

BibTeX:

@inproceedings{Khodadadeh-2019-NeurIPS,
    title={Unsupervised Meta-Learning For Few-Shot Image Classification},
    author={S. Khodadadeh and L. B{\"o}l{\"o}ni and M. Shah},
    booktitle={Proc. of Thirty-third Conference on Neural Information Processing Systems (NeurIPS-2019)},
    year={2019},
    location="Vancouver, Canada",
    pages="10132-10142",
    month = "December",
    xxxacceptance="21.1%",
    abstract = {
      Few-shot or one-shot learning of classifiers requires a significant inductive bias towards the type of task to be learned. One way to acquire this is by meta-learning on tasks similar to the target task. In this paper, we propose UMTRA, an algorithm that performs unsupervised, model-agnostic meta-learning for classification tasks.
      The meta-learning step of UMTRA is performed on a flat collection of unlabeled images. While we assume that these images can be grouped into a diverse set of classes and are relevant to the target task, neither explicit information about the classes nor any labels are needed. UMTRA uses random sampling and augmentation to create synthetic training tasks for the meta-learning phase. Labels are needed only at the final target-task learning step, and as few as one labeled sample per class suffices.
      On the Omniglot and Mini-Imagenet few-shot learning benchmarks, UMTRA outperforms every tested approach based on unsupervised learning of representations, alternating with the recent CACTUs algorithm for the best performance. Compared to supervised model-agnostic meta-learning approaches, UMTRA trades off some classification accuracy for a vast reduction in the number of required labels. For instance, for 5-way 5-shot classification on the Omniglot dataset, UMTRA obtains 95.43\% accuracy with only 25 labels, while supervised MAML obtains 98.83\% with 24,025 labels.
     }
}
