Consensus-based transfer linear support vector machines for decentralized multi-task multi-agent learning

Rui Zhang, Quanyan Zhu

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Transfer learning has been developed to improve the performance of different but related tasks in machine learning. However, such processes become less efficient as the size of the training data and the number of tasks grow. Moreover, privacy can be violated when tasks contain sensitive and private data that are communicated between nodes and tasks. We propose a consensus-based distributed transfer learning framework, where several tasks aim to find the best linear support vector machine (SVM) classifiers in a distributed network. With the alternating direction method of multipliers (ADMM), tasks can achieve better classification accuracies more efficiently and privately, as each node and each task trains on its own data, and only decision variables are transferred between different tasks and nodes. Numerical experiments on the MNIST dataset show that the knowledge transferred from the source tasks can be used to decrease the risks of target tasks that lack training data or have unbalanced training labels. We show that the risks of the target tasks in nodes without the data of the source tasks can also be reduced using the information transferred from the nodes that contain the data of the source tasks. We also show that the target tasks can enter and leave in real time without rerunning the whole algorithm.
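The paper itself gives the precise ADMM formulation; as a rough illustration of the privacy-preserving mechanism the abstract describes, the following is a minimal sketch (not the authors' algorithm) of consensus ADMM for a linear SVM, where each node trains on its own data and only the decision variables `w_i` leave the node. The function names, learning rate, and subgradient inner solver are illustrative assumptions.

```python
import numpy as np

def local_update(w, X, y, z, u, rho, lr=0.1, steps=50, C=1.0):
    """Approximately minimize C*hinge(w; X, y) + (rho/2)||w - z + u||^2
    by subgradient descent; uses only this node's local data (X, y)."""
    for _ in range(steps):
        margins = y * (X @ w)
        mask = margins < 1.0                              # margin-violating examples
        g = -C * (X[mask] * y[mask, None]).sum(axis=0)    # hinge-loss subgradient
        g = g + rho * (w - z + u)                         # pull toward consensus
        w = w - lr * g / len(y)
    return w

def consensus_svm(datasets, rho=1.0, iters=30):
    """Consensus ADMM over nodes: only decision variables are exchanged,
    never the raw training data."""
    d = datasets[0][0].shape[1]
    ws = [np.zeros(d) for _ in datasets]   # per-node classifiers
    us = [np.zeros(d) for _ in datasets]   # per-node scaled dual variables
    z = np.zeros(d)                        # shared consensus classifier
    for _ in range(iters):
        ws = [local_update(w, X, y, z, u, rho)
              for w, (X, y), u in zip(ws, datasets, us)]
        z = np.mean([w + u for w, u in zip(ws, us)], axis=0)  # consensus step
        us = [u + w - z for w, u in zip(ws, us)]              # dual update
    return z
```

The consensus and dual-update steps touch only the vectors `w_i` and `u_i`, which is the privacy property the abstract highlights: raw samples never leave their node.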

Original language: English (US)
Title of host publication: 2018 52nd Annual Conference on Information Sciences and Systems, CISS 2018
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 1-6
Number of pages: 6
ISBN (Electronic): 9781538605790
DOI: 10.1109/CISS.2018.8362195
State: Published - May 21 2018
Event: 52nd Annual Conference on Information Sciences and Systems, CISS 2018 - Princeton, United States
Duration: Mar 21 2018 - Mar 23 2018

Other

Other: 52nd Annual Conference on Information Sciences and Systems, CISS 2018
Country: United States
City: Princeton
Period: 3/21/18 - 3/23/18


Keywords

  • Distributed Learning
  • Multi-Task Learning
  • Support Vector Machines
  • Transfer Learning

ASJC Scopus subject areas

  • Artificial Intelligence
  • Computer Networks and Communications
  • Information Systems

Cite this

Zhang, R., & Zhu, Q. (2018). Consensus-based transfer linear support vector machines for decentralized multi-task multi-agent learning. In 2018 52nd Annual Conference on Information Sciences and Systems, CISS 2018 (pp. 1-6). Institute of Electrical and Electronics Engineers Inc.. https://doi.org/10.1109/CISS.2018.8362195

