Parts-based multi-task sparse learning for visual tracking

Zhengjian Kang, Edward Wong

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

    Abstract

    We present a novel parts-based multi-task sparse learning method for particle-filter-based tracking. In our method, candidate regions are divided into structured local parts which are then sparsely represented by a linear combination of atoms from dictionary templates. We consider parts in each particle as individual tasks and jointly incorporate intrinsic relationship between tasks across different parts and across different particles under a unified multi-task framework. Unlike most sparse-coding-based trackers that use holistic representation, we generate sparse coefficients from local parts, thereby allowing more flexibility. Furthermore, by introducing group sparse ℓ1,2 norm into the linear representation problem, our tracker is able to capture outlier tasks and identify partially occluded regions. The performance of the proposed tracker is empirically compared with state-of-the-art trackers on several challenging video sequences. Both quantitative and qualitative comparisons show that our tracker is superior and more robust.
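    The core optimization described in the abstract is a joint sparse-coding problem: each local part (one per particle) is a task whose feature vector is represented over shared dictionary templates, with a mixed-norm penalty tying the tasks together. As a rough illustration only (not the authors' exact formulation, which couples parts and particles under an ℓ1,2 penalty), the following sketch solves a generic multi-task group-sparse coding problem with proximal gradient descent; the function names and the ℓ2,1 row-grouping convention are assumptions for the example:

```python
import numpy as np

def prox_group_rows(C, t):
    # Proximal operator of t * sum_i ||C[i, :]||_2:
    # shrinks each dictionary atom's coefficient row jointly across tasks,
    # so an atom is either used by the tasks together or dropped entirely.
    norms = np.linalg.norm(C, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - t / np.maximum(norms, 1e-12))
    return C * scale

def multitask_sparse_code(X, D, lam=0.1, n_iter=200):
    """Jointly sparse-code the columns of X (one column per task, e.g. a
    local part) over dictionary D, minimizing
        0.5 * ||X - D C||_F^2 + lam * sum_i ||C[i, :]||_2
    by ISTA (proximal gradient descent)."""
    k, n = D.shape[1], X.shape[1]
    C = np.zeros((k, n))
    L = np.linalg.norm(D, 2) ** 2  # Lipschitz constant of the smooth part
    for _ in range(n_iter):
        grad = D.T @ (D @ C - X)          # gradient of the quadratic term
        C = prox_group_rows(C - grad / L, lam / L)
    return C
```

With the step size 1/L the objective decreases monotonically, so the reconstruction residual never exceeds that of the all-zero code; raising `lam` zeroes out more coefficient rows, which is the mechanism a group-sparse tracker can use to flag outlier tasks such as occluded parts.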

    Original language: English (US)
    Title of host publication: 2015 IEEE International Conference on Image Processing, ICIP 2015 - Proceedings
    Publisher: IEEE Computer Society
    Pages: 4022-4026
    Number of pages: 5
    Volume: 2015-December
    ISBN (Print): 9781479983391
    DOI: 10.1109/ICIP.2015.7351561
    State: Published - Dec 9 2015
    Event: IEEE International Conference on Image Processing, ICIP 2015 - Quebec City, Canada
    Duration: Sep 27 2015 - Sep 30 2015

    Other

    Other: IEEE International Conference on Image Processing, ICIP 2015
    Country: Canada
    City: Quebec City
    Period: 9/27/15 - 9/30/15

    Keywords

    • Multi-task learning
    • particle filter
    • parts-based model
    • sparse representation
    • visual tracking

    ASJC Scopus subject areas

    • Software
    • Computer Vision and Pattern Recognition
    • Signal Processing

    Cite this

    Kang, Z., & Wong, E. (2015). Parts-based multi-task sparse learning for visual tracking. In 2015 IEEE International Conference on Image Processing, ICIP 2015 - Proceedings (Vol. 2015-December, pp. 4022-4026). [7351561] IEEE Computer Society. https://doi.org/10.1109/ICIP.2015.7351561

    @inproceedings{b7762e48df2e462681cec0cc3d763825,
    title = "Parts-based multi-task sparse learning for visual tracking",
    abstract = "We present a novel parts-based multi-task sparse learning method for particle-filter-based tracking. In our method, candidate regions are divided into structured local parts which are then sparsely represented by a linear combination of atoms from dictionary templates. We consider parts in each particle as individual tasks and jointly incorporate intrinsic relationship between tasks across different parts and across different particles under a unified multi-task framework. Unlike most sparse-coding-based trackers that use holistic representation, we generate sparse coefficients from local parts, thereby allowing more flexibility. Furthermore, by introducing group sparse ℓ1,2 norm into the linear representation problem, our tracker is able to capture outlier tasks and identify partially occluded regions. The performance of the proposed tracker is empirically compared with state-of-the-art trackers on several challenging video sequences. Both quantitative and qualitative comparisons show that our tracker is superior and more robust.",
    keywords = "Multi-task learning, particle filter, parts-based model, sparse representation, visual tracking",
    author = "Zhengjian Kang and Edward Wong",
    year = "2015",
    month = "12",
    day = "9",
    doi = "10.1109/ICIP.2015.7351561",
    language = "English (US)",
    isbn = "9781479983391",
    volume = "2015-December",
    pages = "4022--4026",
    booktitle = "2015 IEEE International Conference on Image Processing, ICIP 2015 - Proceedings",
    publisher = "IEEE Computer Society",

    }

    TY - GEN

    T1 - Parts-based multi-task sparse learning for visual tracking

    AU - Kang, Zhengjian

    AU - Wong, Edward

    PY - 2015/12/9

    Y1 - 2015/12/9

    N2 - We present a novel parts-based multi-task sparse learning method for particle-filter-based tracking. In our method, candidate regions are divided into structured local parts which are then sparsely represented by a linear combination of atoms from dictionary templates. We consider parts in each particle as individual tasks and jointly incorporate intrinsic relationship between tasks across different parts and across different particles under a unified multi-task framework. Unlike most sparse-coding-based trackers that use holistic representation, we generate sparse coefficients from local parts, thereby allowing more flexibility. Furthermore, by introducing group sparse ℓ1,2 norm into the linear representation problem, our tracker is able to capture outlier tasks and identify partially occluded regions. The performance of the proposed tracker is empirically compared with state-of-the-art trackers on several challenging video sequences. Both quantitative and qualitative comparisons show that our tracker is superior and more robust.

    AB - We present a novel parts-based multi-task sparse learning method for particle-filter-based tracking. In our method, candidate regions are divided into structured local parts which are then sparsely represented by a linear combination of atoms from dictionary templates. We consider parts in each particle as individual tasks and jointly incorporate intrinsic relationship between tasks across different parts and across different particles under a unified multi-task framework. Unlike most sparse-coding-based trackers that use holistic representation, we generate sparse coefficients from local parts, thereby allowing more flexibility. Furthermore, by introducing group sparse ℓ1,2 norm into the linear representation problem, our tracker is able to capture outlier tasks and identify partially occluded regions. The performance of the proposed tracker is empirically compared with state-of-the-art trackers on several challenging video sequences. Both quantitative and qualitative comparisons show that our tracker is superior and more robust.

    KW - Multi-task learning

    KW - particle filter

    KW - parts-based model

    KW - sparse representation

    KW - visual tracking

    UR - http://www.scopus.com/inward/record.url?scp=84956633353&partnerID=8YFLogxK

    UR - http://www.scopus.com/inward/citedby.url?scp=84956633353&partnerID=8YFLogxK

    U2 - 10.1109/ICIP.2015.7351561

    DO - 10.1109/ICIP.2015.7351561

    M3 - Conference contribution

    SN - 9781479983391

    VL - 2015-December

    SP - 4022

    EP - 4026

    BT - 2015 IEEE International Conference on Image Processing, ICIP 2015 - Proceedings

    PB - IEEE Computer Society

    ER -