Naïve Bayes with higher order attributes

Bernard Rosell, Lisa Hellerstein

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

    Abstract

    The popular Naïve Bayes (NB) algorithm is simple and fast. We present a new learning algorithm, Extended Bayes (EB), which is based on Naïve Bayes. EB is still relatively simple, and achieves equivalent or higher accuracy than NB on a wide variety of the UC-Irvine datasets. EB is based on two ideas, which interact. The first is to find sets of seemingly dependent attributes and to add them as new attributes. The second idea is to exploit "zeroes", that is, the negative evidence provided by attribute values that do not occur at all in particular classes in the training data. Zeroes are handled in Naïve Bayes by smoothing. In contrast, EB uses them as evidence that a potential class labeling may be wrong.
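
    The abstract describes EB's two ingredients only at a high level. The following is a minimal Python sketch of how those ideas could look in code; it is not the authors' Extended Bayes implementation. The pairwise combined attributes, the Laplace smoothing, the fixed per-zero penalty, and names such as SketchEB and add_pair_attributes are illustrative assumptions.

```python
# Hypothetical sketch only: NOT the published EB algorithm, just an illustration
# (under assumed details) of its two ingredients on categorical data:
#   1. add "higher order" attributes formed from pairs of original attributes, and
#   2. track "zeroes" -- attribute values never seen with a class -- as negative
#      evidence instead of relying on smoothing alone.
from collections import Counter, defaultdict
from itertools import combinations
import math


def add_pair_attributes(row):
    """Append one combined value for every pair of original attribute values."""
    row = list(row)
    pairs = [f"{i}&{j}={row[i]}|{row[j]}" for i, j in combinations(range(len(row)), 2)]
    return row + pairs


class SketchEB:
    def __init__(self, alpha=1.0, zero_penalty=1.0):
        self.alpha = alpha                    # Laplace smoothing constant
        self.zero_penalty = zero_penalty      # assumed per-"zero" log-score penalty
        self.class_counts = Counter()
        self.value_counts = defaultdict(Counter)   # (class, attr index) -> value counts
        self.attr_values = defaultdict(set)        # attr index -> distinct values seen

    def fit(self, X, y):
        for row, label in zip(X, y):
            row = add_pair_attributes(row)
            self.class_counts[label] += 1
            for i, v in enumerate(row):
                self.value_counts[(label, i)][v] += 1
                self.attr_values[i].add(v)

    def predict(self, row):
        row = add_pair_attributes(row)
        total = sum(self.class_counts.values())
        best_class, best_score = None, -math.inf
        for c, n_c in self.class_counts.items():
            score = math.log(n_c / total)     # log prior
            zeros = 0
            for i, v in enumerate(row):
                count = self.value_counts[(c, i)][v]
                if count == 0:
                    zeros += 1                # value never observed with class c
                # smoothed conditional estimate, as in ordinary Naive Bayes
                score += math.log((count + self.alpha) /
                                  (n_c + self.alpha * (len(self.attr_values[i]) + 1)))
            # crude stand-in for EB's use of zeroes: treat each zero as extra
            # evidence against class c, on top of the smoothed estimate
            score -= self.zero_penalty * zeros
            if score > best_score:
                best_class, best_score = c, score
        return best_class


if __name__ == "__main__":
    X = [("sunny", "hot"), ("rainy", "cool"), ("overcast", "cool"), ("sunny", "cool")]
    y = ["no", "no", "yes", "yes"]
    clf = SketchEB()
    clf.fit(X, y)
    print(clf.predict(("sunny", "cool")))     # -> "yes"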
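```

    In this toy run, the combined value ("sunny", "cool") never occurs with class "no" in the training data, so the sketch counts that zero as extra evidence against "no"; that is the role the abstract assigns to negative evidence, which plain Naïve Bayes would simply smooth away.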

    Original language: English (US)
    Title of host publication: Advances in Artificial Intelligence
    Publisher: Springer Verlag
    Pages: 105-119
    Number of pages: 15
    Volume: 3060
    ISBN (Print): 9783540220046
    State: Published - 2004
    Event: 17th Canadian Conference on Artificial Intelligence, Canadian AI 2004 - London, Canada
    Duration: May 17, 2004 - May 19, 2004

    Other

    Other: 17th Canadian Conference on Artificial Intelligence, Canadian AI 2004
    Country: Canada
    City: London
    Period: 5/17/04 - 5/19/04

    ASJC Scopus subject areas

    • Computer Science (all)
    • Theoretical Computer Science
    • Hardware and Architecture

    Cite this

    Rosell, B., & Hellerstein, L. (2004). Naïve Bayes with higher order attributes. In Advances in Artificial Intelligence (Vol. 3060, pp. 105-119). Springer Verlag.

    Link to publication in Scopus: http://www.scopus.com/inward/record.url?scp=7444227090&partnerID=8YFLogxK

    Link to the citations in Scopus: http://www.scopus.com/inward/citedby.url?scp=7444227090&partnerID=8YFLogxK
