Monitoring algorithms for negative feedback systems

Mark Sandler, Shanmugavelayutham Muthukrishnan

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

    Abstract

There are many online systems where millions of users post original content such as videos and reviews of products, services, and businesses. While there are general rules for good behavior, or even formal Terms of Service, some users still post content that is not suitable. Increasingly, online systems rely on other users who view the posted content to provide feedback. We study online systems where users report negative feedback, i.e., report abuse; these systems are quite distinct from the much-studied traditional reputation systems, which focus on eliciting the popularity of content through various voting methods. The central problem we study here is how to monitor the quality of negative feedback, that is, how to detect negative feedback that is incorrect, or perhaps even malicious. Systems address this problem by testing flags manually, which is an expensive operation. As a result, there is a tradeoff between the number of manual tests and the number of errors, defined as the number of incorrect flags the monitoring system misses. Our contributions are as follows:

• We initiate a systematic study of negative feedback systems. Our framework is general enough to be applicable to a variety of systems. In this framework, the number of errors the system admits is bounded even in the worst case of adversarial users, while simultaneously the system performs only a small amount of manual testing for the multitude of standard users, who might still err while reporting.

• Our main contribution is a randomized monitoring algorithm, which we call Adaptive Probabilistic Testing (APT), that is simple to implement and comes with guarantees on the expected number of errors. Even for adversarial users, the total expected error is bounded by εN over N flags, for a given ε > 0. Simultaneously, the number of tests the algorithm performs is within a constant factor of the best possible for standard users.

• Finally, we present an empirical study of our algorithm that shows its performance on both synthetic data and real data accumulated from a variety of negative feedback systems at Google. Our study indicates that the algorithm performs better in practice than the analysis above suggests.
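The abstract states APT's guarantees but not its mechanics. Purely as an illustrative sketch (not the paper's actual algorithm: the class name and the `eps`/`decay` parameters are invented here, and this toy version does not reproduce the εN error bound), one way to adapt a per-user test probability is to decay it while a user's tested flags check out and restore full scrutiny when a test uncovers an incorrect flag:

```python
import random


class AdaptiveProbabilisticTester:
    """Toy sketch of adaptive probabilistic testing of user flags.

    Each user's flags are manually tested with a per-user probability.
    A correctly reported flag lowers that probability multiplicatively
    (reliable users cost few tests); an incorrect flag resets it to 1,
    so unreliable or adversarial users stay under full scrutiny.
    """

    def __init__(self, eps: float, decay: float = 0.5):
        self.eps = eps      # floor on the per-user test probability
        self.decay = decay  # multiplicative decrease after a correct flag
        self.p = {}         # user -> current test probability

    def should_test(self, user: str) -> bool:
        # Unknown users start at probability 1, i.e., are always tested.
        return random.random() < self.p.get(user, 1.0)

    def record_result(self, user: str, flag_was_correct: bool) -> None:
        if flag_was_correct:
            # Back off geometrically, but never below the eps floor.
            self.p[user] = max(self.eps, self.p.get(user, 1.0) * self.decay)
        else:
            # An incorrect flag restores full scrutiny for this user.
            self.p[user] = 1.0
```

The design choice this illustrates is the one the abstract hints at: testing effort concentrates on users whose flags have recently been wrong, while trusted users are sampled only occasionally, trading manual tests against missed incorrect flags.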

    Original language: English (US)
    Title of host publication: Proceedings of the 19th International Conference on World Wide Web, WWW '10
    Pages: 871-880
    Number of pages: 10
    DOI: 10.1145/1772690.1772779
    State: Published - Jul 20 2010
    Event: 19th International World Wide Web Conference, WWW2010 - Raleigh, NC, United States
    Duration: Apr 26 2010 - Apr 30 2010

    Publication series

    Name: Proceedings of the 19th International Conference on World Wide Web, WWW '10



    Keywords

    • negative feedback systems
    • probabilistic analysis
    • user reputation

    ASJC Scopus subject areas

    • Computer Networks and Communications
    • Computer Science Applications

    Cite this

    Sandler, M., & Muthukrishnan, S. (2010). Monitoring algorithms for negative feedback systems. In Proceedings of the 19th International Conference on World Wide Web, WWW '10 (pp. 871-880). (Proceedings of the 19th International Conference on World Wide Web, WWW '10). https://doi.org/10.1145/1772690.1772779

    @inproceedings{9676f1b36a2c4009ae2c60fa07e87188,
    title = "Monitoring algorithms for negative feedback systems",
    keywords = "negative feedback systems, probabilistic analysis, user reputation",
    author = "Mark Sandler and Shanmugavelayutham Muthukrishnan",
    year = "2010",
    month = "7",
    day = "20",
    doi = "10.1145/1772690.1772779",
    language = "English (US)",
    isbn = "9781605587998",
    series = "Proceedings of the 19th International Conference on World Wide Web, WWW '10",
    pages = "871--880",
    booktitle = "Proceedings of the 19th International Conference on World Wide Web, WWW '10",

    }
