Approaches to adversarial drift

Alex Kantchelian, Sadia Afroz, Ling Huang, Aylin Caliskan Islam, Brad Miller, Michael Carl Tschantz, Rachel Greenstadt, Anthony D. Joseph, J. D. Tygar

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

    Abstract

    In this position paper, we argue that to be of practical interest, a machine-learning-based security system must engage with its human operators beyond feature engineering and instance labeling to address the challenge of drift in adversarial environments. We propose that designers of such systems broaden the classification goal into an explanatory goal, which would deepen the interaction with the system's operators. To provide guidance, we advocate for an approach based on maintaining one classifier for each class of unwanted activity to be filtered. We also emphasize the need for the system to be responsive to the operators' constant curation of the training set. We show how this paradigm provides a property we call isolation and how it relates to classical causative attacks. To demonstrate the effects of drift on a binary classification task, we also report on two experiments using a previously unpublished malware data set in which each instance is timestamped according to when it was seen.
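
    The abstract's two concrete proposals, one classifier per class of unwanted activity and evaluation on time-stamped data, can be made precise with a short sketch. The code below is not from the paper; it assumes scikit-learn-style classifiers, and every name in it (fit_per_class_detectors, flag, chronological_split) is hypothetical.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def fit_per_class_detectors(X, y, unwanted_classes):
        """Train one binary detector per class of unwanted activity."""
        detectors = {}
        for cls in unwanted_classes:
            clf = LogisticRegression(max_iter=1000)
            # Positive examples are this class only; everything else is negative.
            clf.fit(X, (y == cls).astype(int))
            detectors[cls] = clf
        return detectors

    def flag(detectors, X, threshold=0.5):
        """Flag an instance if any class-specific detector fires.
        Retraining or dropping one detector leaves the others untouched,
        one reading of the 'isolation' property described in the abstract."""
        scores = np.column_stack(
            [clf.predict_proba(X)[:, 1] for clf in detectors.values()]
        )
        return scores.max(axis=1) >= threshold

    def chronological_split(X, y, timestamps, train_fraction=0.7):
        """Train on older instances and test on newer ones, so temporal
        drift shows up as degraded accuracy instead of being hidden by a
        random shuffle."""
        order = np.argsort(timestamps)
        cut = int(train_fraction * len(order))
        train, test = order[:cut], order[cut:]
        return X[train], y[train], X[test], y[test]

    Under such a per-class design, an operator who relabels or removes training instances for one class of unwanted activity triggers retraining of that class's detector alone, which also bounds how far a causative (training-set poisoning) attack on one class can propagate.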

    Original language: English (US)
    Title of host publication: AISec 2013 - Proceedings of the 2013 ACM Workshop on Artificial Intelligence and Security, Co-located with CCS 2013
    Pages: 99-109
    Number of pages: 11
    DOI: 10.1145/2517312.2517320
    ISBN (Print): 9781450324885
    State: Published - Dec 9, 2013
    Event: 2013 6th Annual ACM Workshop on Artificial Intelligence and Security, AISec 2013, Co-located with the 20th ACM Conference on Computer and Communications Security, CCS 2013 - Berlin, Germany
    Duration: Nov 4, 2013

    Publication series

    Name: Proceedings of the ACM Conference on Computer and Communications Security
    ISSN (Print): 1543-7221

    Conference

    Conference: 2013 6th Annual ACM Workshop on Artificial Intelligence and Security, AISec 2013, Co-located with the 20th ACM Conference on Computer and Communications Security, CCS 2013
    Country: Germany
    City: Berlin
    Period: 11/4/13

    Keywords

    • adversarial machine learning
    • concept drift
    • malware classification

    ASJC Scopus subject areas

    • Software
    • Computer Networks and Communications

    Cite this

    Kantchelian, A., Afroz, S., Huang, L., Islam, A. C., Miller, B., Tschantz, M. C., ... Tygar, J. D. (2013). Approaches to adversarial drift. In AISec 2013 - Proceedings of the 2013 ACM Workshop on Artificial Intelligence and Security, Co-located with CCS 2013 (pp. 99-109). (Proceedings of the ACM Conference on Computer and Communications Security). https://doi.org/10.1145/2517312.2517320
