Adversarial optimization for dictionary attacks on speaker verification

Mirko Marras, Paweł Korus, Nasir Memon, Gianni Fenu

Research output: Contribution to journal › Conference article

Abstract

In this paper, we assess the vulnerability of speaker verification systems to dictionary attacks. We seek master voices, i.e., adversarial utterances optimized to match a large number of users by pure chance. First, we perform a menagerie analysis to identify utterances which intrinsically exhibit this property. Then, we propose an adversarial optimization approach for generating master voices synthetically. Our experiments show that, even in the most secure configuration, a master voice can on average match approximately 20% of female and 10% of male users without any knowledge of the population. We demonstrate that dictionary attacks should be considered a feasible threat model for sensitive and high-stakes deployments of speaker verification.
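To make the threat model concrete, the sketch below shows how a candidate utterance's reach can be measured as a match rate over an enrolled population, and how an optimization loop could push that rate upward. It is a minimal illustration under stated assumptions: the random-projection encoder, the threshold calibration, and the gradient-free hill-climbing loop are placeholders introduced here, not the speaker verification systems or the adversarial optimization algorithm evaluated in the paper.

```python
# Minimal sketch of the dictionary-attack threat model (illustrative assumptions only).
import numpy as np

rng = np.random.default_rng(0)
EMBED_DIM, WAVE_LEN, N_USERS = 256, 4000, 1000

def embed(waveform, projection):
    """Stand-in speaker encoder: fixed linear projection + L2 normalization.
    A real attack would use embeddings from the target verification system."""
    e = projection @ waveform
    return e / np.linalg.norm(e)

def match_rate(candidate_emb, enrolled_embs, threshold):
    """Fraction of enrolled users falsely accepted for one candidate utterance."""
    return float(np.mean(enrolled_embs @ candidate_emb >= threshold))

# Toy enrolled population: one unit-norm embedding per user.
enrolled = rng.normal(size=(N_USERS, EMBED_DIM))
enrolled /= np.linalg.norm(enrolled, axis=1, keepdims=True)

# Calibrate a toy verification threshold at roughly a 1% impostor false-accept rate.
impostor_sims = (enrolled[:200] @ enrolled[200:400].T).ravel()
threshold = float(np.quantile(impostor_sims, 0.99))

projection = rng.normal(size=(EMBED_DIM, WAVE_LEN))
seed_wave = rng.normal(size=WAVE_LEN)  # placeholder "seed" utterance

# Gradient-free hill climbing on the mean similarity to the whole population,
# a smooth surrogate for match rate; this stands in for the paper's adversarial
# optimization and does not reproduce it.
best, best_mean = seed_wave, float(np.mean(enrolled @ embed(seed_wave, projection)))
for _ in range(500):
    trial = best + 0.02 * rng.normal(size=WAVE_LEN)
    mean_sim = float(np.mean(enrolled @ embed(trial, projection)))
    if mean_sim > best_mean:
        best, best_mean = trial, mean_sim

before = match_rate(embed(seed_wave, projection), enrolled, threshold)
after = match_rate(embed(best, projection), enrolled, threshold)
print(f"match rate before optimization: {before:.1%}, after: {after:.1%}")
```

The mean similarity is used as the optimization objective because, in this toy setup, the thresholded match rate changes too rarely to guide a random search; the before/after match rates printed at the end only illustrate the metric, not the magnitudes reported in the paper.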

Original language: English (US)
Pages (from-to): 2913-2917
Number of pages: 5
Journal: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
Volume: 2019-September
DOI: 10.21437/Interspeech.2019-2430
State: Published - Jan 1 2019
Event: 20th Annual Conference of the International Speech Communication Association: Crossroads of Speech and Language, INTERSPEECH 2019 - Graz, Austria
Duration: Sep 15 2019 – Sep 19 2019

Keywords

  • Adversarial Examples
  • Authentication
  • Biometrics
  • Dictionary Attacks
  • Speaker Verification

ASJC Scopus subject areas

  • Language and Linguistics
  • Human-Computer Interaction
  • Signal Processing
  • Software
  • Modeling and Simulation

Cite this

Adversarial optimization for dictionary attacks on speaker verification. / Marras, Mirko; Korus, Paweł; Memon, Nasir; Fenu, Gianni.

In: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH, Vol. 2019-September, 01.01.2019, p. 2913-2917.

Research output: Contribution to journal › Conference article

@article{c76bab2f941c4e78999bfc673b8a5c49,
title = "Adversarial optimization for dictionary attacks on speaker verification",
abstract = "In this paper, we assess vulnerability of speaker verification systems to dictionary attacks. We seek master voices, i.e., adversarial utterances optimized to match against a large number of users by pure chance. First, we perform menagerie analysis to identify utterances which intrinsically hold this property. Then, we propose an adversarial optimization approach for generating master voices synthetically. Our experiments show that, even in the most secure configuration, on average, a master voice can match approx. 20{\%} of females and 10{\%} of males without any knowledge about the population. We demonstrate that dictionary attacks should be considered as a feasible threat model for sensitive and high-stakes deployments of speaker verification.",
keywords = "Adversarial Examples, Authentication, Biometrics, Dictionary Attacks, Speaker Verification",
author = "Mirko Marras and Paweł Korus and Nasir Memon and Gianni Fenu",
year = "2019",
month = "1",
day = "1",
doi = "10.21437/Interspeech.2019-2430",
language = "English (US)",
volume = "2019-September",
pages = "2913--2917",
journal = "Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH",
issn = "2308-457X",
}

TY - JOUR
T1 - Adversarial optimization for dictionary attacks on speaker verification
AU - Marras, Mirko
AU - Korus, Paweł
AU - Memon, Nasir
AU - Fenu, Gianni
PY - 2019/1/1
Y1 - 2019/1/1
N2 - In this paper, we assess vulnerability of speaker verification systems to dictionary attacks. We seek master voices, i.e., adversarial utterances optimized to match against a large number of users by pure chance. First, we perform menagerie analysis to identify utterances which intrinsically hold this property. Then, we propose an adversarial optimization approach for generating master voices synthetically. Our experiments show that, even in the most secure configuration, on average, a master voice can match approx. 20% of females and 10% of males without any knowledge about the population. We demonstrate that dictionary attacks should be considered as a feasible threat model for sensitive and high-stakes deployments of speaker verification.
AB - In this paper, we assess vulnerability of speaker verification systems to dictionary attacks. We seek master voices, i.e., adversarial utterances optimized to match against a large number of users by pure chance. First, we perform menagerie analysis to identify utterances which intrinsically hold this property. Then, we propose an adversarial optimization approach for generating master voices synthetically. Our experiments show that, even in the most secure configuration, on average, a master voice can match approx. 20% of females and 10% of males without any knowledge about the population. We demonstrate that dictionary attacks should be considered as a feasible threat model for sensitive and high-stakes deployments of speaker verification.
KW - Adversarial Examples
KW - Authentication
KW - Biometrics
KW - Dictionary Attacks
KW - Speaker Verification
UR - http://www.scopus.com/inward/record.url?scp=85074694088&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85074694088&partnerID=8YFLogxK
U2 - 10.21437/Interspeech.2019-2430
DO - 10.21437/Interspeech.2019-2430
M3 - Conference article
AN - SCOPUS:85074694088
VL - 2019-September
SP - 2913
EP - 2917
JO - Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
JF - Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
SN - 2308-457X
ER -