Project
Indoor Human Action Recognition
Project log
Robots are on their way to becoming an integral part of human life. As societies develop and people's expectations rise, it is increasingly desirable to make everyday life more convenient. We are pleased to announce that we have gathered a collection of high-quality indoor activity videos drawn from recent, widely used datasets on the web. Although assembling this collection took a long time, it was worth the effort. The project's main goal is to provide appropriate, high-quality data so that researchers can build more reliable and accurate neural network models that let a robot understand the activities a person performs in his/her daily life. Thanks to our dataset selection and such neural network models, more intelligent robots become possible.

After spending a great deal of time selecting the best videos from more than 220,000 video files, our collection is substantial for anyone who needs an indoor human activities dataset. We applied visual criteria such as image quality, relevance, and camera shake during recording, as well as semantic criteria. The activities must take place inside the house, so outdoor activities were eliminated from the sources. Unfortunately, in some cases we also had to remove indoor classes that did not reach the minimum number of videos per class (see the class-pruning sketch below).

Even with a large pool of indoor videos, the selection was not over. We listed the most common activities a person performs in daily life: for example, a person eats every day, may nap on the sofa in front of the television, or walks around the house. By contrast, activities such as jumping or looking into a mirror are not among the most common ones, and they are of little relevance to a home service robot, so they were excluded. Another semantic criterion was to select by activity, not by object. For example, the CHARADES EGO dataset contains many "putting something into something" classes; from these we kept only the logical and appropriate ones, the ones involving objects people interact with most often in the house.

In the video selection phase, we combined Python3 scripts that automatically removed low-quality and irrelevant videos (see the quality-filter sketch below) with manual visual inspection to make sure that every selected video met our criteria. Selecting top-quality videos from more than 220,000 files was monotonous work, because some datasets lacked standard labels and we had to classify those videos as well. Since manually classifying this huge number of files is infeasible, we classified them with Python scripts (see the classification sketch below). After classifying and merging all of the datasets into 44 classes of indoor human activities, we ran our Python scripts to eliminate irrelevant data and then watched each remaining video one by one to confirm it met our final standards.

The indoor human activities dataset covers 44 indoor activities, 70 important objects, 5 global locations, and 9 local locations. We are also pleased to present a new annotation method that considers more than one parameter when labeling a video: we annotate the person, the action(s) he/she is performing, the global and local locations where the action takes place, and the objects the person is interacting with (see the annotation sketch below).
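To illustrate the automated quality filtering, here is a minimal Python sketch of one standard approach: flagging blurry videos by the variance of the Laplacian over sampled frames. The threshold, sampling rate, and folder layout are illustrative assumptions, not our exact pipeline settings.

```python
import cv2
from pathlib import Path

BLUR_THRESHOLD = 100.0  # assumed cutoff; tune per dataset

def is_too_blurry(video_path: str, sample_every: int = 30) -> bool:
    """Flag a video whose mean Laplacian variance (a standard
    sharpness proxy) over sampled frames falls below the cutoff."""
    cap = cv2.VideoCapture(video_path)
    scores, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % sample_every == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            scores.append(cv2.Laplacian(gray, cv2.CV_64F).var())
        idx += 1
    cap.release()
    return bool(scores) and sum(scores) / len(scores) < BLUR_THRESHOLD

for path in Path("raw_videos").glob("*.mp4"):  # hypothetical folder
    if is_too_blurry(str(path)):
        print(f"dropping low-quality video: {path.name}")
```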
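For the datasets without standard labels, the classification step can be sketched as keyword matching on file names. The keyword-to-class map below is a made-up excerpt for illustration; it is not our full 44-class mapping.

```python
import shutil
from pathlib import Path

# Made-up excerpt of a keyword-to-class map (the real table had 44 classes).
KEYWORD_TO_CLASS = {
    "eat": "eating",
    "sleep": "sleeping",
    "walk": "walking",
}

def classify(file_stem: str):
    """Return the first matching class label, or None for manual review."""
    lowered = file_stem.lower()
    for keyword, label in KEYWORD_TO_CLASS.items():
        if keyword in lowered:
            return label
    return None

src, dst = Path("unlabeled"), Path("classified")  # hypothetical folders
for video in src.glob("*.mp4"):
    label = classify(video.stem)
    if label is not None:
        (dst / label).mkdir(parents=True, exist_ok=True)
        shutil.copy2(video, dst / label / video.name)
```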
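The rule that removed under-populated indoor classes can be sketched as follows; the minimum of 50 videos per class is an assumed value for illustration only.

```python
import shutil
from pathlib import Path

MIN_VIDEOS_PER_CLASS = 50  # assumed threshold for illustration

root = Path("classified")  # same hypothetical folder as above
for class_dir in (d for d in root.iterdir() if d.is_dir()):
    count = sum(1 for _ in class_dir.glob("*.mp4"))
    if count < MIN_VIDEOS_PER_CLASS:
        print(f"removing under-populated class {class_dir.name} ({count} videos)")
        shutil.rmtree(class_dir)
```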
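Finally, a minimal sketch of one possible record layout for the multi-parameter annotation method described above. The field names and example values are our illustration, not a published file format.

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class VideoAnnotation:
    video_id: str
    person_id: str
    actions: list        # one or more activities being performed
    global_location: str # e.g. "kitchen" (one of the 5 global locations)
    local_location: str  # e.g. "at the table" (one of the 9 local locations)
    objects: list = field(default_factory=list)  # objects interacted with

# All values below are invented examples for illustration.
ann = VideoAnnotation(
    video_id="vid_000123",
    person_id="p01",
    actions=["eating"],
    global_location="kitchen",
    local_location="at the table",
    objects=["plate", "fork"],
)
print(json.dumps(asdict(ann), indent=2))
```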
We hope that robotics, deep-learning, and machine-learning researchers will give this top-notch video dataset of indoor human activities the attention it deserves, since home service robots may transform everyday human life in the near future.