Figure 4 - uploaded by Neeraj Dhanraj Bokde
Source publication
In recent years, remarkable progress has been made in object detection, one of the key application areas of computer vision. One of the most challenging and fundamental problems in object detection is locating a specific object among the multiple objects present in a scene. Earlier, traditional detection methods were used for detecting objects; since 2012 wi...
Context in source publication
Context 1
... The method extracts only 2000 regions from the image, referred to as region proposals. Figure 4 shows the RCNN architecture: a selective search method [30] extracts a set of object region proposals. Each region proposal is rescaled to a fixed image size and then passed to a convolutional neural network pre-trained on ImageNet, i.e., AlexNet [1], for feature extraction. ...
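To make the pipeline concrete, the following is a minimal sketch of an RCNN-style front end, assuming OpenCV's contrib selective-search module and a torchvision AlexNet; the image path, proposal count, and preprocessing values are illustrative rather than the paper's exact settings.

```python
# Hedged sketch of the RCNN-style pipeline described above: selective search
# proposals, warping each proposal to a fixed size, and CNN feature extraction.
# Requires opencv-contrib-python and torchvision; names here are illustrative.
import cv2
import torch
import torchvision.models as models
import torchvision.transforms as T

image = cv2.imread("scene.jpg")  # hypothetical input image

# 1) Selective search produces ~2000 candidate regions as (x, y, w, h) boxes.
ss = cv2.ximgproc.segmentation.createSelectiveSearchSegmentation()
ss.setBaseImage(image)
ss.switchToSelectiveSearchFast()
proposals = ss.process()[:2000]

# 2) Each proposal is rescaled to a fixed input size and fed to a CNN
#    pre-trained on ImageNet (AlexNet here) to extract a feature vector.
alexnet = models.alexnet(weights="IMAGENET1K_V1").eval()  # older torchvision: pretrained=True
preprocess = T.Compose([
    T.ToPILImage(), T.Resize((224, 224)), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

features = []
with torch.no_grad():
    for (x, y, w, h) in proposals:
        crop = cv2.cvtColor(image[y:y + h, x:x + w], cv2.COLOR_BGR2RGB)
        feat = alexnet.features(preprocess(crop).unsqueeze(0))
        features.append(torch.flatten(feat, 1))  # region feature for a downstream classifier (e.g., SVM)
```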
Similar publications
Pedestrian tracking is an important research topic in the field of computer vision. Tracking is achieved by predicting the position of a specific pedestrian in each frame of a video. Pedestrian tracking methods include neural network-based methods and traditional template matching-based methods, such as the SiamRPN (Siamese region proposal networ...
Citations
... Object detection algorithms are important in artificial intelligence and computer vision, allowing computers to perceive their environments by detecting objects in images and videos [18], [19]. Deep-learning-based object detection requires training and testing on input data, and these models demand powerful computational resources and large datasets. ...
Artificial intelligence has introduced revolutionary and innovative solutions to many complex problems by automating processes and tasks that used to require human effort. The limited capability of human effort in real-time monitoring has made artificial intelligence increasingly popular. Artificial intelligence improves the monitoring process by analyzing data and extracting accurate results, and it can give surveillance cameras a digital brain that analyzes images and live video without human intervention. Deep learning models can be applied to digital images to identify and classify objects accurately, and object detection algorithms are based on deep learning; using deep learning, object detection is achieved with high accuracy. In this paper, a model combining YOLO v5 with Siamese network technology is proposed: the YOLO v5 algorithm detects cheating tools in classrooms, such as a cell phone or a book, in such a way that the algorithm detects the student as an object but cannot recognize his face. Using the Siamese network, the student's face is compared against the database of students in order to identify the student with cheating tools.
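As a rough illustration of such a two-stage pipeline, the sketch below assumes the public YOLOv5 hub model detecting COCO classes such as "cell phone" and "book", and a generic cosine-similarity comparison of face embeddings standing in for the Siamese matcher; the embedding model, database, and threshold are placeholders, not the paper's components.

```python
# Hedged sketch: YOLOv5 flags suspicious objects, then a Siamese-style comparison
# matches a face embedding against an enrolled database. Illustrative only.
import torch
import torch.nn.functional as F

detector = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

def detect_cheating_tools(image_path, suspicious=("cell phone", "book")):
    """Run YOLOv5 and keep detections whose class name is in the suspicious set."""
    results = detector(image_path)
    df = results.pandas().xyxy[0]  # columns: xmin, ymin, xmax, ymax, confidence, class, name
    return df[df["name"].isin(suspicious)]

def siamese_match(query_embedding, database_embeddings, threshold=0.7):
    """Compare a face embedding against enrolled embeddings by cosine similarity."""
    best_id, best_score = None, -1.0
    for student_id, emb in database_embeddings.items():
        score = F.cosine_similarity(query_embedding, emb, dim=0).item()
        if score > best_score:
            best_id, best_score = student_id, score
    return (best_id, best_score) if best_score >= threshold else (None, best_score)
```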
... The fundamental idea behind DL is to use numerous layers of interconnected neurons to replicate the human brain's structure and behaviour (Nagaraju & Chawla, 2020; Mijwil et al., 2023; Bouguettaya et al., 2022; Khan et al., 2021). The hierarchical layer structure enables DL models to gradually uncover more advanced features from raw input data, resulting in predictions that are both accurate and resilient (Lakshmanna et al., 2022; Murthy et al., 2020; Sarker, 2021; Shoeibi et al., 2021). Convolutional neural networks (CNNs), recurrent neural networks (RNNs), and generative adversarial networks (GANs) have each made distinct contributions to DL by tackling specific issues and improving system functionality (Amiri et al., 2024). ...
... Few-shot learning is a growing technique focused on training models with just a small number of examples from each class (Lakshmanna et al, 2022;Abdar et al, 2021;Mehrish et al, 2023;Boulemtafes et al., 2020;Dildar et al., 2021). This method is based on how humans can rapidly grasp new concepts with minimal data (Domingues et al, 2020;Nagaraju & Chawla, 2020;Khan et al., 2021;Murthy et al., 2020). Few-shot learning methods like prototypical networks and matching networks have displayed potential in tasks such as image categorization and linguistic comprehension. ...
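A minimal sketch of the prototypical-network idea mentioned above, assuming a toy embedding network and randomly generated episode data: class prototypes are the mean support embeddings, and queries are assigned to the nearest prototype.

```python
# Illustrative prototypical-network classification for a few-shot episode.
import torch
import torch.nn as nn

embed = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64))  # toy embedding network

def prototypes(support_x, support_y, n_classes):
    """Average the support embeddings per class to form one prototype per class."""
    z = embed(support_x)
    return torch.stack([z[support_y == c].mean(dim=0) for c in range(n_classes)])

def classify(query_x, protos):
    """Assign each query to the class of its nearest prototype (Euclidean distance)."""
    dists = torch.cdist(embed(query_x), protos)  # shape (n_query, n_classes)
    return dists.argmin(dim=1)

# Example: a 3-way, 5-shot episode with random tensors standing in for images.
support_x = torch.randn(15, 1, 28, 28)
support_y = torch.arange(3).repeat_interleave(5)
query_x = torch.randn(6, 1, 28, 28)
pred = classify(query_x, prototypes(support_x, support_y, n_classes=3))
```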
... It may be necessary to employ optimisation techniques that do not rely on gradients since gradient information may not be readily available or reliable (Bouguettaya et al., 2022;Khan et al., 2021;Murthy et al., 2020;Boulemtafes et al., 2020). These methods provide robustness and adaptation by exploring the parameter space without relying on gradients (Lakshmanna et al, 2022;Nagaraju & Chawla, 2020;Saleem et al., 2021). ...
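As a simple illustration of gradient-free search, the sketch below implements a basic (1+1) evolution-strategy / random-search loop over a placeholder objective; it is only an example of exploring the parameter space without gradients, not a method from the cited works.

```python
# Illustrative gradient-free optimisation: keep a candidate, perturb it,
# and accept the perturbation only if the black-box objective improves.
import numpy as np

def objective(params):
    """Placeholder black-box objective to minimise (no gradients available)."""
    return np.sum((params - 3.0) ** 2)

def one_plus_one_es(dim=5, sigma=0.5, iters=500, seed=0):
    rng = np.random.default_rng(seed)
    best = rng.normal(size=dim)
    best_val = objective(best)
    for _ in range(iters):
        candidate = best + sigma * rng.normal(size=dim)  # random perturbation
        val = objective(candidate)
        if val < best_val:                               # keep only improvements
            best, best_val = candidate, val
    return best, best_val

params, value = one_plus_one_es()
```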
Deep Learning (DL), a branch of Artificial Intelligence (AI), has transformed various industries by allowing machines to carry out activities that were once thought to be only possible through human intelligence. The rapid progress in deep learning methods and algorithms has played a key role in reaching unparalleled levels of accuracy and efficiency in diverse applications. This research explores the most recent and popular techniques and algorithms in deep learning, offering a detailed look at how they are created and used. Important focuses include convolutional neural networks (CNNs) for recognizing and processing images, recurrent neural networks (RNNs) and long short-term memory networks (LSTMs) for analysing sequential data and language processing, and generative adversarial networks (GANs) for generating authentic synthetic data. Moreover, the research delves into new advancements like transformers and self-attention mechanisms, which have greatly enhanced results in activities such as language translation and text generation. The research also explores new developments in transfer learning, federated learning, and explainable AI, showcasing their ability to improve model generalization, privacy, and interpretability. The study highlights the increased significance of incorporating these advanced methods across different fields such as healthcare, finance, autonomous driving, and robotics, in order to propel the upcoming wave of technological advancements. This research seeks to educate and provide direction to future research and development in the dynamic field of DL by examining advanced algorithms.
... Detecting abnormal events in video surveillance employs various methods and algorithms, which can be categorized into traditional computer-vision-based and deep learning-based approaches. Traditional computer-vision-based methods use image processing techniques and apply fundamental concepts such as motion detection, background subtraction, object tracking, and appearance-based detection [18]. Motion-based detection, akin to optical flow tracking, examines video motion patterns to identify sudden or unusual movements. ...
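A minimal sketch of such a traditional pipeline, assuming OpenCV's MOG2 background subtractor and a crude foreground-area rule as the abnormal-motion flag; the video path and threshold are illustrative.

```python
# Hedged sketch: background subtraction plus a simple foreground-area rule
# as a rough motion/abnormality indicator for surveillance video.
import cv2

cap = cv2.VideoCapture("surveillance.mp4")  # hypothetical input video
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)           # foreground mask (moving pixels)
    mask = cv2.medianBlur(mask, 5)           # suppress isolated noise
    moving_ratio = cv2.countNonZero(mask) / mask.size
    if moving_ratio > 0.2:                   # sudden large motion -> flag the frame
        print("possible abnormal motion at frame",
              int(cap.get(cv2.CAP_PROP_POS_FRAMES)))

cap.release()
```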
This study explores and evaluates the effectiveness of various abnormal event detection techniques in video surveillance, addressing challenges such as intrusions, accidents, and suspicious activities. Through a systematic review of related papers, the study reveals the prevalence of traditional methods like background subtraction and motion detection despite their limitations in complex scenarios. It highlights the increasing use of deep learning techniques, particularly CNNs and RNNs, which show promise but require substantial labeled data. The findings underscore the importance of selecting proper detection techniques based on specific surveillance scenarios and emphasize the need for extensive labeled datasets for deep learning methods. The originality of this study lies in its comprehensive review and comparison of various abnormal event detection techniques, providing valuable insights and practical implications for advancing video surveillance systems.
... Multiple platforms with communication links enabled coordinated search and exploration tasks. Fixed radio communication between aerial drones and ground platforms demonstrated potential mobility for effective exploration missions [24]. The introduction of mobile robot platforms with the implementation of various visually based techniques for terrain classification has broadened the robotic system applicability and reduced the need for external infrastructure. ...
Robotics has dramatically improved disaster response, allowing for more efficient and effective search and rescue (SAR) operations. This review investigates the role of robotics in disaster situations, with an emphasis on unmanned aerial vehicles (UAVs), unmanned ground vehicles (UGVs), and developing technologies that support SAR operations. The Global Robotics for Advanced Search (GRAS) system is presented as a case study, demonstrating its effective deployment in a simulated gas explosion scenario. A historical review chronicles the use of robots in disaster response, followed by an examination of contemporary uses, problems, and new technology. The report concludes with predictions for the future of SAR robots, emphasising the need for ongoing innovation and integration to handle the issues that disasters present.

INTRODUCTION
Robotics has revolutionized automation in real-world applications, and disaster response now utilizes automated systems. Robotic systems include mechanical bodies, computer systems, sensors, and AI for autonomous or semi-autonomous actions; they are categorized as UAV, UGV, or UWV based on design and operating medium [1]. Trained first responders and response teams are dispatched to handle incidents and save victims; their main purpose is to secure the site, gather information, and make a plan. Chaotic and complex scenes include smoke, debris, and inaccessible areas, and robotic systems can be used as an alternative for search and rescue in dangerous environments [2]. The first responders and the rescue team need vital on-site information to assess and conduct effective search and rescue operations. GRAS is a set of automated robotic systems that gather information using an aerial UAV, a ground UGV, and a fixed camera. The UAV takes real-time aerial photographs and the UGV captures normal and thermal video footage; both operate simultaneously and can adapt to different situations. The command center receives information from all devices, builds a ground model of the area, and controls the UGVs based on the model and the acquired information [3]. The efficacy of the GRAS system was validated in a real-case scenario of a simulated gas explosion in a two-storey building conducted in the EMRS 2018 (Emergency Mission Robot System 2018) joint exercise. A robotic competition was also held under that exercise, with a set of tasks to be completed by the teams as a means of evaluating the capacity and viability of robotic systems in disaster response [4].

THE ROLE OF ROBOTICS IN DISASTER RESPONSE
With increasing disasters, effective ways to save more people in less time are needed. Robotics can enhance search and rescue operations in inaccessible, dangerous environments. Unmanned aerial vehicles (UAVs) collect aerial imagery, and smaller UAVs are better suited to navigating collapsed structures. Unmanned ground vehicles (UGVs) can enter buildings, legged robots provide additional mobility, and multi-robot systems combine different strengths. Robotics in disaster response has evolved from small incidents to large disasters; its history and current state are discussed [5].
... In recent times, as explained by the author (Murthy et al., 2020), the discipline of deep learning (DL) is an important topic that is receiving a lot of attention in the scientific community, as well as in the business and academic worlds. Unlike traditional machine learning methods, deep learning approaches have the capacity to efficiently manage enormous volumes of unstructured data and discover complicated patterns within extensive data sets. ...
Object detection and image classification are among the most important areas toward which scientific research is directed, and they are commonly used in various computer-vision-based applications. The development of low-cost embedded devices with powerful processing is leading to a trend toward their use in computer vision, providing reduced access time, reliability, and data security. This is why Tiny Machine Learning (TinyML) technology appeared; it is a field that specifically explores the application of machine learning on highly constrained edge devices. Deep learning techniques are increasingly employed in data-intensive and time-sensitive IoT applications. Deploying Deep Neural Network (DNN) models on microcontrollers (MCUs) is challenging due to limited resources such as RAM. Recent advancements in the field of TinyML hold the potential to introduce a novel category of peripheral applications. TinyML enables the development of new applications and services by eliminating the reliance on cloud computing, which consumes power and poses risks to data security and privacy. TinyML is currently regarded as a promising artificial intelligence (AI) alternative that specifically targets technologies and applications for devices with very low profiles. In this paper, the most important algorithms used in object detection are presented, in addition to the challenges of using low-resource embedded systems that support TinyML technology.
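As one hedged example of the TinyML workflow implied above, the sketch below converts a toy Keras model to a fully int8-quantized TensorFlow Lite model of the kind typically deployed on MCUs (e.g., via TFLite Micro); the architecture and representative data are placeholders.

```python
# Illustrative post-training int8 quantization of a small model for MCU deployment.
import numpy as np
import tensorflow as tf

# Toy classifier small enough for typical MCU RAM budgets.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32, 32, 1)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(4, activation="softmax"),
])

def representative_data():
    # Placeholder calibration samples; real deployments use actual input data.
    for _ in range(100):
        yield [np.random.rand(1, 32, 32, 1).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8   # full-integer model suitable for MCUs
converter.inference_output_type = tf.int8

with open("model_int8.tflite", "wb") as f:
    f.write(converter.convert())
```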
... This combination allows insight into how traffic flows can be monitored during challenging weather conditions. LiDAR is an effective tool that provides three-dimensional data [37][38][39], while CCTV provides detailed images that help with spotting and following objects [40,41]. By combining these types of data, traffic monitoring and analysis become more precise and reliable. ...
The study evaluates the accuracy of LiDAR and CCTV technologies for vehicle and pedestrian count collection at a signalized intersection under varied weather conditions. Data collection occurred over a two-hour period during peak morning and evening hours using both technologies. Trajectory identification, entry and exit point determination, and anomaly filtering were used to analyze the vehicle counts. The pedestrian counts were carefully analyzed using LiDAR point cloud data and CCTV footage to monitor movements in the areas of interest. Analysis of the data showed differences in vehicle and pedestrian counts depending on the weather: rainy weather produced the largest variations, sunny conditions also showed differences, and snowy weather had the least discrepancies. Interestingly, the southbound through and eastbound right movements exhibited variations in both vehicle and pedestrian counts. Despite challenges such as blind spots and weather impacts, both LiDAR and CCTV technologies hold promise for collecting traffic data. It is crucial to focus on addressing limitations such as reducing blind spots, refining data collection methods, and enhancing technology resilience to adverse weather conditions. These improvements are essential for reliable traffic monitoring and for enhancing safety measures at signalized intersections.
... The final output provides crucial information, including the coordinates of bounding boxes and associated class labels, which finds applications across numerous domains, including autonomous driving, surveillance, and medical imaging. This process is illustrated in Figure 2 [14]. ...
... The proposed method's performance is assessed using the KITTI dataset, a widely recognized resource in the field of self-driving car research, as referenced by Geiger et al. (2012). This dataset comprises 22 sequences categorized into two sets: a training dataset (sequences 00-10) and a testing dataset (sequences 11-21). Within the training dataset, ground-truth trajectories are provided for every frame across all sequences. ...
Visual odometry includes two important stages: 1) feature extraction and 2) pose estimation. The performance of visual odometry depends on the quality of the features, including the number of features, the percentage of correct matches, and the locations of the detected features. Usually, the RANSAC method is used in pose estimation to remove outliers and select a good set of features that provides higher accuracy. However, when the proportion of wrong matches is high, RANSAC tends to fail. This article proposes a method for removing unstable features using deep learning-based object detection. Evaluated on the KITTI dataset, the proposed method shows 6-8% higher accuracy than the conventional method.
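For context, the sketch below shows the classical visual-odometry front end the abstract builds on: ORB features, brute-force matching, and RANSAC-based essential-matrix estimation that discards outlier matches before pose recovery. File names and camera intrinsics are illustrative, and this is not the proposed deep-learning-based filtering itself.

```python
# Illustrative two-frame VO front end with RANSAC outlier rejection.
import cv2
import numpy as np

K = np.array([[718.856, 0.0, 607.1928],   # example KITTI-like camera intrinsics
              [0.0, 718.856, 185.2157],
              [0.0, 0.0, 1.0]])

img1 = cv2.imread("frame_000000.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_000001.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# RANSAC rejects outlier correspondences while fitting the essential matrix.
E, inlier_mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inlier_mask)
print("inliers:", int(inlier_mask.sum()), "of", len(matches))
```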
... the single shot detector (SSD) achieves better accuracy through dense sampling of object locations, with accuracy comparable to the RCNN series of detectors. Overall, this model achieves comparable accuracy [1]. G. Chandan et. ...
The ever-growing presence of surveillance cameras has undoubtedly enhanced public safety. However, manually reviewing vast quantities of video footage for violence detection remains a tedious and error-prone task. This paper proposes a deep learning-based approach for anomaly detection in surveillance videos, with a specific focus on identifying violent activities. Traditional surveillance methods often lack the sophistication to distinguish between normal behaviour and potentially threatening actions. Deep learning offers a significant advantage by automating the identification of deviations from established behavioural patterns in video data. This automation enables real-time analysis of footage, potentially signalling the occurrence of violent incidents and allowing a swifter response from security personnel. This research delves into the application of deep learning models for violence detection in surveillance footage. We begin by discussing the limitations inherent in conventional methods, such as motion detection and manual review, which struggle to capture the nuances of human behaviour. Subsequently, we explore the advantages of deep learning-based approaches, including their ability to learn complex patterns from large datasets and identify subtle anomalies that might escape human observation.
... In this section, we take a look at some of the most common architectures for object detection as well as some of the latest sports analysis techniques. Murthy et al. (2020) discussed deep learning-based supervised object detection mechanisms. A review article presented by Naik et al. (2022a) explored most of the aspects of academic research in sports activities. ...
The surge in demand for advanced operations in sports video analysis has underscored the crucial role of multiple object tracking. This study addresses the escalating need for efficient and accurate player and referee identification in sports video analysis. The challenge of identity switching among players, especially those with similar appearances, complicates multi-player tracking. Existing algorithms relying on manually labeled data face limitations, particularly with changes in jersey colors. This paper introduces an automated algorithm employing Intersection over Union (IoU) loss and Euclidean Distance (EUD), termed EIoU-Distance Loss, to track players and referees. The method prioritizes identity coherence, aiming to mitigate challenges associated with player and referee recognition. Comprising BackgroundSubtractionMOG2 for player and referee detection and IoU with EUD for connecting nodes across frames, the proposed approach enhances tracking performance, ensuring a clear distinction between different identities. This innovative method addresses critical issues in sports video analysis, offering a robust solution for tracking players and referees in dynamic game scenarios.
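The following is an illustrative sketch (not the paper's exact EIoU-Distance Loss) of combining IoU overlap with centroid Euclidean distance to link detections across frames; the box format, weighting, and thresholds are assumptions.

```python
# Illustrative frame-to-frame association using IoU and centroid distance.
import numpy as np

def iou(a, b):
    """Intersection over Union of two boxes in (x1, y1, x2, y2) format."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def centroid_distance(a, b):
    ca = np.array([(a[0] + a[2]) / 2, (a[1] + a[3]) / 2])
    cb = np.array([(b[0] + b[2]) / 2, (b[1] + b[3]) / 2])
    return np.linalg.norm(ca - cb)

def link_cost(prev_box, new_box, dist_scale=100.0):
    """Lower cost = better match: penalise low overlap and large centroid motion."""
    return (1.0 - iou(prev_box, new_box)) + centroid_distance(prev_box, new_box) / dist_scale

def associate(prev_boxes, new_boxes, max_cost=1.0):
    """Greedy association that carries previous-frame identities to new detections."""
    links, used = {}, set()
    for i, pb in enumerate(prev_boxes):
        costs = [(link_cost(pb, nb), j) for j, nb in enumerate(new_boxes) if j not in used]
        if costs:
            cost, j = min(costs)
            if cost <= max_cost:
                links[i] = j
                used.add(j)
    return links
```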
... Furthermore, machine learning makes it possible to identify dietary trends and the way they affect health outcomes by assisting in the analysis of large-scale dietary surveys. Machine learning algorithms can also improve food analysis and categorization, making it easier to determine portion sizes and nutrient contents [8]. Additionally, this technology is essential in automating the tracking of dietary adherence while providing insightful feedback to those who are trying to fulfill particular dietary targets. ...