Attribute-efficient learning in query and mistake-bound models

Nader H. Bshouty, Lisa Hellerstein

    Research output: Chapter in Book/Report/Conference proceeding › Chapter

    Abstract

    We consider the problem of attribute-efficient learning in query and mistake-bound models. Attribute-efficient algorithms make a number of queries or mistakes that is polynomial in the number of relevant variables in the target function, but only sublinear in the number of irrelevant variables. We consider a variant of the membership query model in which the learning algorithm is given as input the number of relevant variables of the target function. Using a number-theoretic coloring technique, we show that in this model, any class of functions (including parity) that can be learned in polynomial time can be learned attribute-efficiently in polynomial time. We show that this does not hold in the randomized membership query model. In the mistake-bound model, we consider the problem of learning attribute-efficiently using hypotheses that are formulas of small depth. Our results extend the work of Blum et al. and Bshouty et al.
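
    To make attribute-efficiency concrete, the sketch below is a minimal illustration (not the paper's number-theoretic coloring construction, which applies to any polynomial-time learnable class, including parity) of attribute-efficient learning with membership queries for a simpler class: monotone conjunctions. Each relevant variable is located by binary search over the unknown variables, so the query count is O(k log n) for k relevant variables out of n, polynomial in k but only logarithmic in n. The oracle and target used in the demo are hypothetical stand-ins.

```python
# Illustrative sketch only: attribute-efficient learning of a monotone
# conjunction AND_{i in S} x_i over n Boolean variables from membership
# queries. Each relevant variable is found by binary search, giving
# O(k log n) queries in total, where k = |S|.

def learn_monotone_conjunction(n, membership_query):
    """Return the set of relevant variable indices of a monotone conjunction.

    membership_query(x) takes an n-bit tuple and returns the target's value.
    Assumes the target is a conjunction of some (possibly empty) subset S of
    the variables, so membership_query((1,) * n) == 1.
    """
    relevant = set()

    def query_ones_on(indices):
        # Query the point that is 1 exactly on `indices` and 0 elsewhere.
        x = [0] * n
        for i in indices:
            x[i] = 1
        return membership_query(tuple(x))

    while True:
        # If setting only the variables found so far to 1 already satisfies
        # the target, every relevant variable has been found.
        if query_ones_on(relevant) == 1:
            return relevant
        # Otherwise some relevant variable is missing. Binary search over
        # prefixes of the remaining variables: g(j) = target value when
        # `relevant` plus the first j remaining variables are set to 1 is
        # monotone in j, and the threshold index pinpoints a relevant variable.
        remaining = [i for i in range(n) if i not in relevant]
        lo, hi = 0, len(remaining)          # invariant: g(lo) == 0, g(hi) == 1
        while hi - lo > 1:
            mid = (lo + hi) // 2
            if query_ones_on(relevant | set(remaining[:mid])) == 1:
                hi = mid
            else:
                lo = mid
        relevant.add(remaining[hi - 1])     # the threshold variable is relevant


if __name__ == "__main__":
    # Hypothetical target: conjunction of variables 3, 17, and 42 out of n = 100.
    n, S = 100, {3, 17, 42}
    oracle = lambda x: int(all(x[i] for i in S))
    print(sorted(learn_monotone_conjunction(n, oracle)))   # [3, 17, 42]
```

    The number of queries above depends on n only through the log n factor of each binary search, which is the sublinear dependence on irrelevant variables that the abstract refers to; the paper's contribution is the much stronger statement that, when the number of relevant variables is given as input, this kind of attribute-efficiency is achievable for every class learnable in polynomial time in the deterministic membership query model.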

    Original language: English (US)
    Title of host publication: Proceedings of the Annual ACM Conference on Computational Learning Theory
    Editors: Anon
    Pages: 235-243
    Number of pages: 9
    State: Published - 1996
    Event: Proceedings of the 1996 9th Annual Conference on Computational Learning Theory - Desenzano del Garda, Italy
    Duration: Jun 28, 1996 - Jul 1, 1996



    ASJC Scopus subject areas

    • Computational Mathematics

    Cite this

    Bshouty, N. H., & Hellerstein, L. (1996). Attribute-efficient learning in query and mistake-bound models. In Anon (Ed.), Proceedings of the Annual ACM Conference on Computational Learning Theory (pp. 235-243).
