Comparison of video-based Re-ID on the ConstructSite, WILDTRACK, and DukeMTMC datasets. Numbers in bold are the best results. * indicates that the code is not released or available.

Context in source publication

Context 1
... We evaluate our model and these methods on the four datasets, and the results are presented in Table 2. However, for the evaluation on ConstructSite and WILDTRACK, we only run the experiments for those methods whose code is available. ...

Similar publications

Article
Full-text available
The Jalan Bensol road section previously used flexible pavement as its main roadway, and the road currently suffers extensive damage. Road damage causes considerable losses felt directly by its users, since it inevitably slows traffic and reduces the comfort of ...

Citations

... Many authors have attempted to detect abnormal behaviour in overcrowded environments using texture-based information, such as time gradients [4], dynamic texture characteristics [5], and spatiotemporal frequency properties [6], [7]. Other groups concentrate on optical flow, which captures motion features directly from video frames, such as multi-scale pedestrian features [8], fuzzy-clustering-based features [9], a behavioural model for pedestrian detection [10], convolutional neural network (CNN) features [11], weighted-autoencoder-based features [12], trajectory-based features [13], student object behavioural features [14], and multi-target-association-based features [15], [16]. Previous research has shown that motion-based techniques are beneficial, and we believe the present methods can still be improved. ...
... Li et al. [15] presented a social force map-based strategy for detecting global anomalous activity. They placed a particle grid over the optical flow field and computed the interaction force for each particle. ...
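The social force idea quoted above can be sketched in a few lines of Python with OpenCV: a particle grid is laid over the dense optical flow field, each particle is given a "desired" velocity that blends its own flow with the local average, and the interaction force is the gap between the observed velocity change and the relaxation toward that desired velocity. This is a minimal illustration under those assumptions, not the formulation of [15]; the function name and the grid_step, tau, and panic parameters (and their default values) are illustrative choices.

import cv2
import numpy as np

def particle_interaction_force(frame0, frame1, frame2, grid_step=10, tau=0.5, panic=0.4):
    # Illustrative sketch only; parameters and defaults are assumptions, not from [15].
    # Dense Farneback optical flow for two consecutive grayscale frame pairs.
    flow_prev = cv2.calcOpticalFlowFarneback(frame0, frame1, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    flow_curr = cv2.calcOpticalFlowFarneback(frame1, frame2, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    # Sample both flow fields on a coarse particle grid.
    v_prev = flow_prev[::grid_step, ::grid_step]
    v_curr = flow_curr[::grid_step, ::grid_step]
    # Desired velocity blends each particle's own flow with the locally averaged flow.
    v_avg = cv2.blur(flow_curr, (15, 15))[::grid_step, ::grid_step]
    v_des = (1.0 - panic) * v_curr + panic * v_avg
    # Interaction force = observed acceleration minus relaxation toward the
    # desired velocity (unit particle mass assumed).
    accel = v_curr - v_prev
    force = accel - (v_des - v_curr) / tau
    return np.linalg.norm(force, axis=-1)  # force magnitude per grid particle

Regions where the force magnitude stays high would then be treated as candidate abnormal activity, typically after thresholding or after feeding the force maps to a classifier.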
Article
Full-text available
In this paper, we propose an efficient method for detecting unusual student activity in the academic environment. The proposed method extracts motion features that accurately describe the characteristics of pedestrians' movement, velocity, and direction, as well as their intercommunication within a frame. We also use these motion features to detect both global and local anomalous behaviors within the frame. The proposed approach is validated on a newly built student behavior database and three additional publicly available benchmark datasets. When compared to state-of-the-art techniques, the experimental results reveal a considerable performance improvement in anomalous activity recognition. Finally, we summarize and discuss future research directions.
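As a rough illustration of the kind of frame-level motion features the abstract describes (speed, direction, and their distribution within a frame), the snippet below computes a mean speed and a magnitude-weighted direction histogram from dense optical flow. The descriptor layout and the n_bins parameter are assumptions made for this sketch, not the paper's exact features.

import cv2
import numpy as np

def motion_feature(prev_gray, curr_gray, n_bins=8):
    # Sketch of a simple motion descriptor; the paper's actual features may differ.
    # Dense optical flow between two consecutive grayscale frames.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    # Per-pixel speed and direction of motion.
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    # Direction histogram weighted by speed, normalized to sum to one.
    hist, _ = np.histogram(ang, bins=n_bins, range=(0, 2 * np.pi), weights=mag)
    hist = hist / (hist.sum() + 1e-6)
    # Descriptor: mean speed followed by the direction profile.
    return np.concatenate(([mag.mean()], hist))

A sequence of such per-frame descriptors can then be scored against a model of normal motion, with large deviations flagged as global anomalies and per-region variants used for local ones.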
... Chen et al. [9] proposed a pedestrian tracking framework based on Faster R-CNN and a fully convolutional network; however, compared with the usual scenarios, such as streets, parks, and courts, the operating room is quite narrow and full of large instruments (as shown in Figure 1), which means there are many blind spots when shooting with only one camera. As a result, we need to record with multiple synchronized cameras; this creates a new problem of how to combine the content captured by different cameras, which requires inter-camera re-identification (ReID) [10][11][12][13]. In existing methods, some researchers exploit mobile phones [14] and wearable devices [15] to track the movement of pedestrians. ...
Article
Full-text available
Multi-camera multi-person (MCMP) tracking and re-identification (ReID) are essential tasks in safety, pedestrian analysis, and related applications; however, most research focuses on outdoor scenarios, while it is much more complicated to deal with occlusions and misidentification in a crowded room with obstacles. Moreover, it is challenging to complete the two tasks in one framework. We present a trajectory-based method that integrates the tracking and ReID tasks. First, the poses of all surgical members captured by each camera are detected frame by frame; then, the detected poses are used to track the trajectories of all members for each camera; finally, the trajectories from different cameras are clustered to re-identify the members in the operating room across all cameras. Compared to other MCMP tracking and ReID methods, the proposed one mainly exploits trajectories, using texture features, which are less distinguishable in the operating-room scenario, only as auxiliary cues. We also integrate temporal information during ReID, which is more reliable than the state-of-the-art framework in which ReID is conducted frame by frame. In addition, our framework requires no training before deployment in new scenarios. We also created an annotated MCMP dataset from actual operating room videos. Our experiments demonstrate the effectiveness of the proposed trajectory-based ReID algorithm. The proposed framework achieves 85.44% accuracy on the ReID task, outperforming the state-of-the-art framework on our operating room dataset.
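The cross-camera clustering step of such a trajectory-based pipeline can be sketched as follows, assuming each per-camera trajectory has already been projected onto a shared ground plane (e.g., via camera calibration). The distance measure, the max_dist threshold, and the use of SciPy's average-linkage clustering are illustrative choices for this sketch, not the paper's exact method.

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def mean_trajectory_distance(traj_a, traj_b):
    # Average ground-plane distance over the frames the two trajectories share;
    # each trajectory maps a frame index to an (x, y) position.
    shared = set(traj_a) & set(traj_b)
    if not shared:
        return 1e6  # no temporal overlap: treat as very far apart
    return float(np.mean([np.linalg.norm(np.subtract(traj_a[t], traj_b[t]))
                          for t in shared]))

def cluster_trajectories(trajectories, max_dist=0.8):
    # Agglomerative clustering of per-camera trajectories into identities:
    # trajectories whose mean distance stays below max_dist share a label.
    n = len(trajectories)
    if n < 2:
        return np.ones(n, dtype=int)
    condensed = [mean_trajectory_distance(trajectories[i], trajectories[j])
                 for i in range(n) for j in range(i + 1, n)]
    labels = fcluster(linkage(condensed, method='average'),
                      t=max_dist, criterion='distance')
    return labels  # trajectories sharing a label are treated as the same person

Trajectories from different cameras that fall into the same cluster are then assigned one identity, which is what lets ReID exploit temporal information rather than frame-by-frame appearance alone.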