Please use this identifier to cite or link to this item: http://repository.aaup.edu/jspui/handle/123456789/1524
Full metadata record
DC Field | Value | Language
dc.contributor.author | Alia, Ahmad (Other, Palestinian) | -
dc.contributor.author | Maree, Mohammed (AAUP, Palestinian) | -
dc.contributor.author | Chraibi, Mohcine (Other, Palestinian) | -
dc.date.accessioned | 2022-05-30T07:59:32Z | -
dc.date.available | 2022-05-30T07:59:32Z | -
dc.date.issued | 2022-05-26 | -
dc.identifier.citation | Alia A, Maree M, Chraibi M. A Hybrid Deep Learning and Visualization Framework for Pushing Behavior Detection in Pedestrian Dynamics. Sensors. 2022; 22(11):4040. https://doi.org/10.3390/s22114040 | en_US
dc.identifier.issn | https://doi.org/10.3390/s22114040 | -
dc.identifier.uri | http://repository.aaup.edu/jspui/handle/123456789/1524 | -
dc.description.abstract | Crowded event entrances can threaten the comfort and safety of pedestrians, especially when some pedestrians push others or exploit gaps in the crowd to gain faster access to an event. Studying and understanding pushing dynamics helps in designing and building more comfortable and safer entrances. To understand pushing dynamics, researchers observe and analyze recorded videos to manually identify when and where pushing behavior occurs. Although the manual method is accurate, it is time-consuming and tedious, and pushing behavior can be hard to identify in some scenarios. In this article, we propose a hybrid deep learning and visualization framework that aims to assist researchers in automatically identifying pushing behavior in videos. The proposed framework comprises two main components: (i) deep optical flow and wheel visualization, used to generate motion information maps; and (ii) a combination of an EfficientNet-B0-based classifier and a false reduction algorithm for detecting pushing behavior at the video patch level. In addition to the framework, we present a new patch-based approach to enlarge the data and alleviate the class imbalance problem in small-scale pushing behavior datasets. Experimental results (using real-world ground truth of pushing behavior videos) demonstrate that the proposed framework achieves an 86% accuracy rate. Moreover, the EfficientNet-B0-based classifier outperforms baseline CNN-based classifiers in terms of accuracy. | en_US
dc.language.iso | en | en_US
dc.publisher | Multidisciplinary Digital Publishing Institute (MDPI) | en_US
dc.subject | deep learning | en_US
dc.subject | convolutional neural network | en_US
dc.subject | EfficientNet-B0-based classifier | en_US
dc.subject | image classification | en_US
dc.subject | crowd behavior analysis | en_US
dc.subject | pushing behavior detection | en_US
dc.subject | motion information maps | en_US
dc.subject | deep optical flow | en_US
dc.title | A Hybrid Deep Learning and Visualization Framework for Pushing Behavior Detection in Pedestrian Dynamics | en_US
Appears in Collections:Faculty & Staff Scientific Research publications
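
The abstract above outlines a two-stage pipeline: motion information maps rendered from optical flow with a colour-wheel visualization, followed by patch-level classification with an EfficientNet-B0-based model and a false reduction step. The sketch below only illustrates that idea and is not the authors' implementation: Farneback optical flow stands in for the deep optical flow model used in the paper, and the patch size (224), classifier head, confidence threshold, and the helper names motion_map, split_into_patches, and classify_patches are illustrative assumptions.

```python
# Illustrative sketch (not the paper's code) of: (i) motion information maps via
# optical flow + colour-wheel (HSV) rendering, and (ii) patch-level pushing
# classification with EfficientNet-B0. Flow method, patch size, and threshold
# are assumptions made for this example.
import cv2
import numpy as np
import torch
from torchvision.models import efficientnet_b0

def motion_map(prev_frame, frame):
    """Render the optical-flow field between two frames as an HSV colour-wheel
    image: flow direction -> hue, flow magnitude -> brightness."""
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Farneback dense flow as a stand-in for the deep optical flow model.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    hsv = np.zeros((*gray.shape, 3), dtype=np.uint8)
    hsv[..., 0] = ang * 180 / np.pi / 2                                # direction -> hue
    hsv[..., 1] = 255                                                  # full saturation
    hsv[..., 2] = cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX)    # speed -> brightness
    return cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)

def split_into_patches(image, patch=224):
    """Tile the motion map into fixed-size patches (patch size is an assumption)."""
    h, w = image.shape[:2]
    return [(y, x, image[y:y + patch, x:x + patch])
            for y in range(0, h - patch + 1, patch)
            for x in range(0, w - patch + 1, patch)]

# EfficientNet-B0 backbone with a binary head (pushing vs. non-pushing);
# in practice the weights would come from training on the patch dataset.
model = efficientnet_b0(weights=None)
model.classifier[1] = torch.nn.Linear(model.classifier[1].in_features, 2)
model.eval()

def classify_patches(patches, threshold=0.5):
    """Label each patch; the confidence threshold mimics a simple false
    reduction step and is purely illustrative."""
    results = []
    with torch.no_grad():
        for y, x, p in patches:
            tensor = torch.from_numpy(p).permute(2, 0, 1).float().unsqueeze(0) / 255.0
            prob = torch.softmax(model(tensor), dim=1)[0, 1].item()
            results.append((y, x, prob > threshold))
    return results
```

In this encoding, the colour wheel makes both the direction and the speed of local motion visible in a single image, so a patch classifier can look for the characteristic flow patterns around pushing rather than processing raw video frames.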
