Game-theoretic defense of adversarial distributed support vector machines

Rui Zhang, Quanyan Zhu

Research output: Contribution to journal › Article

Abstract

With a large number of sensors and control units in networked systems, distributed support vector machines (DSVMs) play a fundamental role in scalable and efficient multi-sensor classification and prediction tasks. However, DSVMs are vulnerable to adversaries who can modify and generate data to deceive the system into misclassification and misprediction. This work aims to design defense strategies for a DSVM learner against a potential adversary. We establish a game-theoretic framework to capture the conflicting interests of the DSVM learner and the attacker. The Nash equilibrium of the game allows us to predict the outcome of learning algorithms in adversarial environments and to enhance the resilience of machine learning through dynamic distributed learning algorithms. We show that the DSVM learner is less vulnerable when it uses a balanced network with fewer nodes and higher degree. We also show that adding more training samples is an efficient defense strategy against an attacker. We present secure and resilient DSVM algorithms with verification and rejection methods, and demonstrate their resilience against adversaries through numerical experiments.
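The learner-attacker interaction described above can be illustrated with a toy zero-sum matrix game solved by fictitious play. The 2×2 payoff matrix, the action labels, and all numbers below are hypothetical stand-ins for illustration only; they are not the paper's actual game or algorithms.

```python
# Hypothetical payoff matrix for a zero-sum learner-vs-attacker game:
# rows = learner defense actions (e.g. "verification", "rejection"),
# columns = attacker actions (e.g. "modify data", "generate data");
# entries are the learner's payoff (e.g. classification accuracy).
# All values are illustrative, not taken from the paper.
A = [[0.9, 0.4],
     [0.5, 0.8]]

def fictitious_play(payoff, rounds=20000):
    """Approximate a mixed Nash equilibrium of a two-player zero-sum
    matrix game by fictitious play: in each round, each player
    best-responds to the opponent's empirical action frequencies."""
    m, n = len(payoff), len(payoff[0])
    row_counts = [0] * m   # learner's action tallies
    col_counts = [0] * n   # attacker's action tallies
    i, j = 0, 0            # arbitrary initial actions
    for _ in range(rounds):
        row_counts[i] += 1
        col_counts[j] += 1
        # Learner best-responds (maximizes payoff) to attacker frequencies.
        i = max(range(m),
                key=lambda r: sum(payoff[r][c] * col_counts[c] for c in range(n)))
        # Attacker best-responds (minimizes payoff) to learner frequencies.
        j = min(range(n),
                key=lambda c: sum(payoff[r][c] * row_counts[r] for r in range(m)))
    return ([c / rounds for c in row_counts],
            [c / rounds for c in col_counts])

learner_mix, attacker_mix = fictitious_play(A)
# For this matrix, the exact equilibrium is learner (1/3, 2/3),
# attacker (1/2, 1/2); fictitious play converges to these frequencies.
```

At the equilibrium, neither player can improve by deviating unilaterally, which is what lets the equilibrium predict the outcome of learning under attack.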

Original language: English (US)
Pages (from-to): 3-21
Number of pages: 19
Journal: Journal of Advances in Information Fusion
Volume: 14
Issue number: 1
State: Published - Jan 1 2019

ASJC Scopus subject areas

  • Signal Processing
  • Information Systems
  • Computer Vision and Pattern Recognition
