Adaptation based on generalized discrepancy

Corinna Cortes, Mehryar Mohri, Andrés Muñoz Medina

Research output: Contribution to journal › Article

Abstract

We present a new algorithm for domain adaptation improving upon the discrepancy minimization (DM) algorithm, previously shown to outperform a number of algorithms for this problem. Unlike many previously proposed solutions for domain adaptation, our algorithm does not consist of a fixed reweighting of the losses over the training sample. Instead, the reweighting depends on the hypothesis sought. The algorithm is derived from a less conservative notion of discrepancy than that of the DM algorithm, called generalized discrepancy. We present a detailed description of our algorithm and show that it can be formulated as a convex optimization problem. We also give a detailed theoretical analysis of its learning guarantees, which helps us select its parameters. Finally, we report the results of experiments demonstrating that it improves upon discrepancy minimization.
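For background, the standard discrepancy underlying the DM algorithm (introduced in the authors' earlier work) is the worst-case difference in expected loss between pairs of hypotheses under the source distribution P and the target distribution Q, for a hypothesis set H and loss L:

disc_L(P, Q) = \max_{h, h' \in H} \left| \mathbb{E}_{x \sim P}[L(h'(x), h(x))] - \mathbb{E}_{x \sim Q}[L(h'(x), h(x))] \right|

Generalized discrepancy relaxes this worst case by letting the reweighting vary with the hypothesis under consideration, which is what makes it less conservative; see the paper for the precise definition.

As a concrete illustration of the DM baseline the paper improves upon, the following is a minimal sketch, not the authors' implementation: for the squared loss with linear hypotheses, Cortes and Mohri showed that the empirical discrepancy reduces to the spectral norm of a difference of second-moment matrices, so the discrepancy-minimizing reweighting of the source sample is a convex problem. The function name dm_weights and the use of cvxpy here are our own assumptions for the sketch.

# Minimal sketch of discrepancy minimization (DM) for squared loss with
# linear hypotheses, where the empirical discrepancy equals the spectral
# norm of a difference of second-moment matrices. Not the authors' code;
# dm_weights and the cvxpy formulation are illustrative assumptions.
import numpy as np
import cvxpy as cp

def dm_weights(X_src, X_tgt):
    """Weights q on the source sample minimizing the empirical discrepancy
    to a uniformly weighted target sample (squared loss, linear case)."""
    m = X_src.shape[0]
    n = X_tgt.shape[0]
    # Second-moment matrix of the target sample with uniform weights 1/n.
    M_tgt = X_tgt.T @ X_tgt / n
    q = cp.Variable(m, nonneg=True)       # reweighting on the simplex
    M_src = X_src.T @ cp.diag(q) @ X_src  # sum_i q_i x_i x_i^T
    # Spectral norm of the difference = empirical discrepancy here.
    objective = cp.Minimize(cp.sigma_max(M_src - M_tgt))
    cp.Problem(objective, [cp.sum(q) == 1]).solve()
    return q.value

The returned weights would then multiply the per-example losses in the downstream learner (e.g., a weighted ridge regression). The paper's algorithm goes further: rather than fixing one such reweighting, it lets the reweighting depend on the hypothesis itself.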

Original language: English (US)
Pages (from-to): 1-30
Number of pages: 30
Journal: Journal of Machine Learning Research
Volume: 20
State: Published - Jan 1, 2019

Keywords

  • Domain adaptation
  • Learning theory

ASJC Scopus subject areas

  • Software
  • Control and Systems Engineering
  • Statistics and Probability
  • Artificial Intelligence

Cite this

Adaptation based on generalized discrepancy. / Cortes, Corinna; Mohri, Mehryar; Muñoz Medina, Andrés.

In: Journal of Machine Learning Research, Vol. 20, 01.01.2019, p. 1-30.

@article{4797cd387ea24a769fef8f57b29e5016,
title = "Adaptation based on generalized discrepancy",
abstract = "We present a new algorithm for domain adaptation improving upon the discrepancy minimization (DM) algorithm, previously shown to outperform a number of algorithms for this problem. Unlike many previously proposed solutions for domain adaptation, our algorithm does not consist of a fixed reweighting of the losses over the training sample. Instead, the reweighting depends on the hypothesis sought. The algorithm is derived from a less conservative notion of discrepancy than that of the DM algorithm, called generalized discrepancy. We present a detailed description of our algorithm and show that it can be formulated as a convex optimization problem. We also give a detailed theoretical analysis of its learning guarantees, which helps us select its parameters. Finally, we report the results of experiments demonstrating that it improves upon discrepancy minimization.",
keywords = "Domain adaptation, Learning theory",
author = "Cortes, Corinna and Mohri, Mehryar and {Mu{\~n}oz Medina}, Andr{\'e}s",
year = "2019",
month = jan,
day = "1",
language = "English (US)",
volume = "20",
pages = "1--30",
journal = "Journal of Machine Learning Research",
issn = "1532-4435",
publisher = "Microtome Publishing",
}

TY - JOUR

T1 - Adaptation based on generalized discrepancy

AU - Cortes, Corinna

AU - Mohri, Mehryar

AU - Muñoz Medina, Andrés

PY - 2019/1/1

Y1 - 2019/1/1

AB - We present a new algorithm for domain adaptation improving upon the discrepancy minimization (DM) algorithm, previously shown to outperform a number of algorithms for this problem. Unlike many previously proposed solutions for domain adaptation, our algorithm does not consist of a fixed reweighting of the losses over the training sample. Instead, the reweighting depends on the hypothesis sought. The algorithm is derived from a less conservative notion of discrepancy than that of the DM algorithm, called generalized discrepancy. We present a detailed description of our algorithm and show that it can be formulated as a convex optimization problem. We also give a detailed theoretical analysis of its learning guarantees, which helps us select its parameters. Finally, we report the results of experiments demonstrating that it improves upon discrepancy minimization.

KW - Domain adaptation

KW - Learning theory

UR - http://www.scopus.com/inward/record.url?scp=85068332419&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85068332419&partnerID=8YFLogxK

M3 - Article

AN - SCOPUS:85068332419

VL - 20

SP - 1

EP - 30

JO - Journal of Machine Learning Research

JF - Journal of Machine Learning Research

SN - 1532-4435

ER -