New results for learning noisy parities and halfspaces

Vitaly Feldman, Parikshit Gopalan, Subhash Khot, Ashok Kumar Ponnuswami

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

We address well-studied problems concerning the learnability of parities and halfspaces in the presence of classification noise. Learning of parities under the uniform distribution with random classification noise, also called the noisy parity problem, is a famous open problem in computational learning. We reduce a number of basic problems regarding learning under the uniform distribution to learning of noisy parities. We show that under the uniform distribution, learning parities with adversarial classification noise reduces to learning parities with random classification noise. Together with the parity learning algorithm of Blum et al. [5], this gives the first nontrivial algorithm for learning parities with adversarial noise. We show that learning of DNF expressions reduces to learning noisy parities of just a logarithmic number of variables. We show that learning of k-juntas reduces to learning noisy parities of k variables. These reductions work even in the presence of random classification noise in the original DNF or junta. We then consider the problem of learning halfspaces over ℚⁿ with adversarial noise, or finding a halfspace that maximizes the agreement rate with a given set of examples. We prove an essentially optimal hardness factor of 2 - ε, improving the factor of 85/84 - ε due to Bshouty and Burroughs [8]. Finally, we show that majorities of halfspaces are hard to PAC-learn using any representation, based on the cryptographic assumption underlying the Ajtai-Dwork cryptosystem.
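To make the abstract's central object concrete, the following is a minimal sketch of the noisy parity setting in Python. The function names are hypothetical, and the brute-force learner is an illustration only, not the algorithm of the paper or of Blum et al.: uniform examples are labeled by a parity over a hidden subset of variables, each label flipped independently at a fixed noise rate, and the learner enumerates all parities on at most k variables and keeps the one with the highest empirical agreement.

    import itertools
    import random

    def noisy_parity_examples(n, secret, noise_rate, m, rng):
        """Draw m uniform examples in {0,1}^n labeled by the parity of the
        coordinates in `secret`; flip each label with probability noise_rate."""
        examples = []
        for _ in range(m):
            x = [rng.randint(0, 1) for _ in range(n)]
            label = sum(x[i] for i in secret) % 2
            if rng.random() < noise_rate:
                label ^= 1
            examples.append((x, label))
        return examples

    def learn_sparse_parity(examples, n, k):
        """Exhaustively test every parity on at most k of the n variables and
        return the support with the highest empirical agreement.
        Time is roughly n^k * m, so this is feasible only for small k."""
        best, best_agree = None, -1
        for size in range(k + 1):
            for subset in itertools.combinations(range(n), size):
                agree = sum((sum(x[i] for i in subset) % 2) == y
                            for x, y in examples)
                if agree > best_agree:
                    best, best_agree = subset, agree
        return best

    rng = random.Random(0)
    examples = noisy_parity_examples(n=10, secret=(1, 4, 7), noise_rate=0.1,
                                     m=2000, rng=rng)
    print(learn_sparse_parity(examples, n=10, k=3))  # recovers (1, 4, 7) w.h.p.

This kind of enumeration is why the paper's reductions are meaningful: they compress learning of DNF expressions and k-juntas under the uniform distribution down to noisy parities on only a logarithmic number (respectively, k) of variables, where even simple search is within reach, and any faster noisy parity algorithm transfers immediately.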

Original language: English (US)
Title of host publication: 47th Annual IEEE Symposium on Foundations of Computer Science, FOCS 2006
Pages: 563-572
Number of pages: 10
DOI: 10.1109/FOCS.2006.51
ISBNs: 0769527205, 9780769527208
State: Published - 2006
Event: 47th Annual IEEE Symposium on Foundations of Computer Science, FOCS 2006 - Berkeley, CA, United States
Duration: Oct 21, 2006 - Oct 24, 2006


ASJC Scopus subject areas

  • Engineering (all)

Cite this

Feldman, V., Gopalan, P., Khot, S., & Ponnuswami, A. K. (2006). New results for learning noisy parities and halfspaces. In 47th Annual IEEE Symposium on Foundations of Computer Science, FOCS 2006 (pp. 563-572). [4031391] https://doi.org/10.1109/FOCS.2006.51

