Cross-Safe

A Computer Vision-Based Approach to Make All Intersection-Related Pedestrian Signals Accessible for the Visually Impaired

Xiang Li, Hanzhang Cui, John Ross Rizzo, Edward Wong, Yi Fang

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

    Abstract

    Intersections pose great challenges to blind or visually impaired travelers who aim to cross roads safely and efficiently given unpredictable traffic control. Because of reduced vision and the difficulty of planning and negotiating dynamic environments, visually impaired travelers require devices and/or assistance (e.g., a cane or talking signals) to successfully navigate intersections. This project develops a novel computer vision-based approach, named Cross-Safe, that provides accurate and accessible guidance to the visually impaired while crossing intersections, as part of a larger unified smart wearable device. As a first step, we focused on the red-light-green-light, go-no-go problem, since accessible pedestrian signals are largely absent from urban infrastructure in New York City. Cross-Safe leverages state-of-the-art deep learning techniques for real-time pedestrian signal detection and recognition. A portable GPU unit, the Nvidia Jetson TX2, provides mobile visual computing, and a cognitive assistant provides accurate voice-based guidance. More specifically, a lightweight recognition algorithm was developed for Cross-Safe, enabling robust walking-signal detection and recognition. Recognized signals are conveyed to the visually impaired end user by vocal guidance, providing critical information for real-time intersection navigation. Cross-Safe also balances portability, recognition accuracy, computing efficiency, and power consumption. A custom image library was built to train, validate, and test our methodology on real traffic intersections, demonstrating the feasibility of Cross-Safe in providing safe guidance to the visually impaired at urban intersections. Experimental results show robust preliminary performance of our detection and recognition algorithm.
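    The abstract describes a pipeline in which a classifier recognizes the pedestrian signal state and a cognitive assistant converts it into vocal guidance. A minimal sketch of that final decision step is shown below; the class names, confidence threshold, and message wording are illustrative assumptions, not details taken from the paper.

    ```python
    # Hypothetical go/no-go decision step: map classifier confidences over
    # pedestrian-signal states to a spoken guidance string. Everything here
    # (labels, threshold, phrasing) is an assumption for illustration.

    CONFIDENCE_THRESHOLD = 0.8  # below this, err on the side of caution

    MESSAGES = {
        "walk": "Walk signal detected. You may cross.",
        "dont_walk": "Don't walk. Please wait.",
    }

    def signal_to_guidance(class_confidences: dict) -> str:
        """Convert per-class confidences into a vocal guidance message."""
        label, score = max(class_confidences.items(), key=lambda kv: kv[1])
        if score < CONFIDENCE_THRESHOLD or label not in MESSAGES:
            # Low confidence or unknown class: never tell the user to cross.
            return "Signal unclear. Please wait."
        return MESSAGES[label]

    print(signal_to_guidance({"walk": 0.93, "dont_walk": 0.05}))
    print(signal_to_guidance({"walk": 0.55, "dont_walk": 0.40}))
    ```

    The key safety property of such a design is that uncertainty is always resolved toward "wait": only a high-confidence "walk" classification produces a cue to cross.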

    Original language: English (US)
    Title of host publication: Advances in Computer Vision - Proceedings of the 2019 Computer Vision Conference CVC
    Editors: Kohei Arai, Supriya Kapoor
    Publisher: Springer-Verlag
    Pages: 132-146
    Number of pages: 15
    ISBN (Print): 9783030177973
    DOI: 10.1007/978-3-030-17798-0_13
    State: Published - Jan 1 2020
    Event: Computer Vision Conference, CVC 2019 - Las Vegas, United States
    Duration: Apr 25, 2019 – Apr 26, 2019

    Publication series

    Name: Advances in Intelligent Systems and Computing
    Volume: 944
    ISSN (Print): 2194-5357

    Conference

    Conference: Computer Vision Conference, CVC 2019
    Country: United States
    City: Las Vegas
    Period: 4/25/19 – 4/26/19

    Keywords

    • Assistive technology
    • Pedestrian safety
    • Portable device
    • Visual impairment

    ASJC Scopus subject areas

    • Control and Systems Engineering
    • Computer Science (all)

    Cite this

    Li, X., Cui, H., Rizzo, J. R., Wong, E., & Fang, Y. (2020). Cross-Safe: A Computer Vision-Based Approach to Make All Intersection-Related Pedestrian Signals Accessible for the Visually Impaired. In K. Arai, & S. Kapoor (Eds.), Advances in Computer Vision - Proceedings of the 2019 Computer Vision Conference CVC (pp. 132-146). (Advances in Intelligent Systems and Computing; Vol. 944). Springer-Verlag. https://doi.org/10.1007/978-3-030-17798-0_13
