Conference Paper

Information Measure Computation and its Impact in MI COCO Dataset

Authors:
  • Jaypee University of Engineering and Technology Guna




... Labeling: The original dataset was annotated in JSON format. However, to make it compatible with the YOLO model, we converted the annotations to the COCO dataset format [22]. During this process, we preserved all annotation information for each image and ensured the accuracy and consistency of the bounding boxes. ...
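As a concrete illustration of moving between the two annotation conventions mentioned in this snippet, here is a minimal sketch (not the authors' code) that converts standard COCO-style JSON boxes into YOLO's normalized one-file-per-image text labels; the paths and schema fields are assumptions based on the public COCO format.

```python
import json
from pathlib import Path
from collections import defaultdict

def coco_to_yolo(json_path: str, out_dir: str) -> None:
    # Assumes the standard COCO schema: bbox = [x_min, y_min, width, height]
    # in absolute pixels; YOLO expects "class x_center y_center w h", normalized.
    coco = json.loads(Path(json_path).read_text())
    images = {img["id"]: img for img in coco["images"]}
    # Map (possibly sparse) COCO category ids to contiguous YOLO class indices.
    cat_ids = sorted(c["id"] for c in coco["categories"])
    cls_index = {cid: i for i, cid in enumerate(cat_ids)}

    lines = defaultdict(list)
    for ann in coco["annotations"]:
        img = images[ann["image_id"]]
        x, y, w, h = ann["bbox"]
        xc = (x + w / 2) / img["width"]
        yc = (y + h / 2) / img["height"]
        lines[img["file_name"]].append(
            f"{cls_index[ann['category_id']]} {xc:.6f} {yc:.6f} "
            f"{w / img['width']:.6f} {h / img['height']:.6f}"
        )

    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for file_name, rows in lines.items():
        (out / (Path(file_name).stem + ".txt")).write_text("\n".join(rows))
```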
Article
Full-text available
The issue of obstacle avoidance and safety for visually impaired individuals has been a major topic of research. However, complex street environments still pose significant challenges for blind obstacle detection systems. Existing solutions often fail to provide real-time, accurate obstacle avoidance decisions. In this study, we propose a blind obstacle detection system based on the PC-CS-YOLO model. The system improves the backbone network by adopting the partial convolutional feed-forward network (PCFN) to reduce computational redundancy. Additionally, to enhance the network’s robustness in multi-scale feature fusion, we introduce the Cross-Scale Attention Fusion (CSAF) mechanism, which integrates features from different sensory domains to achieve superior performance. Compared to state-of-the-art networks, our system shows improvements of 2.0%, 3.9%, and 1.5% in precision, recall, and mAP50, respectively. When evaluated on a GPU, the inference speed is 20.6 ms, which is 15.3 ms faster than YOLO11, meeting the real-time requirements for blind obstacle avoidance systems.
... It is commonly used for image classification, object detection, and image generation, serving as a crucial benchmark for training and evaluating deep learning models. Similarly, the COCO dataset (Sharma, 2021) is extensively used in computer vision and natural language processing. It consists of over 200,000 images with millions of annotations, supporting tasks like object recognition, image segmentation, and object detection. ...
Article
Full-text available
Introduction: Speech recognition and multimodal learning are two critical areas in machine learning. Current multimodal speech recognition systems often encounter challenges such as high computational demands and model complexity. Methods: To overcome these issues, we propose a novel framework, EnglishAL-Net, a Multimodal Fusion-powered English Speaking Robot. This framework leverages the ALBEF model, optimizing it for real-time speech and multimodal interaction, and incorporates a newly designed text and image editor to fuse visual and textual information. The robot processes dynamic spoken input through the integration of Neural Machine Translation (NMT), enhancing its ability to understand and respond to spoken language. Results and discussion: In the experimental section, we constructed a dataset containing various scenarios and oral instructions for testing. The results show that compared to traditional unimodal processing methods, our model significantly improves both language understanding accuracy and response time. This research not only enhances the performance of multimodal interaction in robots but also opens up new possibilities for applications of robotic technology in education, rescue, customer service, and other fields, holding significant theoretical and practical value.
... In the grid search hyperparameter training process, the momentum parameter for stochastic gradient descent (SGD) was set to 0.9, and the default IoU thresholds between the detection box and the GT were 0.5 and 0.7, respectively. The pretrained weights on the COCO dataset were utilized in the training process for faster convergence and better generalization [63]. The model training and testing to search for the optimum hyperparameters were conducted on Google Colab Pro+ using an NVIDIA Tesla V100 graphics processing unit (GPU) with 16 GB of video random access memory (VRAM). ...
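For readers who want to reproduce this kind of configuration, here is a hedged PyTorch sketch, not the authors' exact pipeline, that loads COCO-pretrained detector weights and sets up SGD with momentum 0.9 as described; the class count and learning rate are illustrative assumptions.

```python
import torch
import torchvision

# Load a detector with COCO-pretrained weights for faster convergence.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")

num_classes = 3  # hypothetical: background + 2 target classes
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = (
    torchvision.models.detection.faster_rcnn.FastRCNNPredictor(in_features, num_classes)
)

# SGD with momentum = 0.9, as in the snippet; lr and weight decay are assumed.
params = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(params, lr=0.005, momentum=0.9, weight_decay=0.0005)
```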
Article
Full-text available
Substantial effort has been made in manually tracking plant maturity and measuring early-stage plant density and crop height in experimental fields. In this study, RGB drone imagery and deep learning (DL) approaches are explored to measure relative maturity (RM), stand count (SC), and plant height (PH), potentially offering higher throughput, accuracy, and cost-effectiveness than traditional methods. A time series of drone images was utilized to estimate dry bean RM employing a hybrid convolutional neural network (CNN) and long short-term memory (LSTM) model. For early-stage SC assessment, the Faster R-CNN object detection algorithm was evaluated. Flight frequencies, image resolution, and data augmentation techniques were investigated to enhance DL model performance. PH was obtained using a quantile method from digital surface model (DSM) and point cloud (PC) data sources. The CNN-LSTM model showed high accuracy in RM prediction across various conditions, outperforming traditional image preprocessing approaches. The inclusion of growing degree days (GDD) data improved the model's performance under specific environmental stresses. The Faster R-CNN model effectively identified early-stage bean plants, demonstrating superior accuracy over traditional methods and consistency across different flight altitudes. For PH estimation, moderate correlations with ground-truth data were observed across both datasets analyzed. The choice between PC and DSM source data may depend on specific environmental and flight conditions. Overall, the CNN-LSTM and Faster R-CNN models proved more effective than conventional techniques in quantifying RM and SC. The subtraction method proposed for estimating PH without accurate ground elevation data yielded results comparable to the difference-based method. Additionally, the pipeline and open-source software developed hold potential to significantly benefit the phenotyping community.
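A CNN-LSTM of the kind described, a per-frame CNN feature extractor feeding an LSTM over the flight time series, can be sketched in a few lines of PyTorch; all layer sizes below are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.cnn = nn.Sequential(          # per-frame feature extractor
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)   # relative-maturity score

    def forward(self, x):                  # x: (batch, time, 3, H, W)
        b, t = x.shape[:2]
        feats = self.cnn(x.flatten(0, 1)).view(b, t, -1)
        out, _ = self.lstm(feats)          # sequential behavior over flights
        return self.head(out[:, -1])       # predict from the last time step

pred = CNNLSTM()(torch.randn(2, 5, 3, 64, 64))  # toy batch: 2 plots, 5 flights
```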
... Faster R-CNN uses a Region Proposal Network (RPN) to first identify Regions of Interest (ROI) and then perform object detection on each region. There are two main types of object detection models: One-Stage Detection Models and Two-Stage Detection Models [34]. Two-Stage Detection Models like Faster R-CNN first identify areas in the image where objects are likely to be found. ...
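The two-stage flow described in this snippet can be seen end to end with torchvision's COCO-pretrained Faster R-CNN, whose forward pass runs the Region Proposal Network first and then classifies each proposed ROI; this is a generic usage sketch, not the cited work's setup.

```python
import torch
import torchvision

# COCO-pretrained two-stage detector: RPN proposals, then per-ROI detection.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

image = torch.rand(3, 480, 640)           # stand-in for a real RGB frame
with torch.no_grad():
    out = model([image])[0]               # both stages run inside this call

keep = out["scores"] > 0.5                # confidence filter on stage-two scores
print(out["boxes"][keep], out["labels"][keep])
```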
Article
Full-text available
Expanding traditional video metadata and recommendation systems encompasses challenges that are difficult to address with conventional methodologies. Limitations in utilizing diverse information when extracting video metadata, along with persistent issues like bias, cold start problems, and the filter bubble effect in recommendation systems, are primary causes of performance degradation. Therefore, a new recommendation system that integrates high-quality video metadata extraction with existing recommendation systems is necessary. This research proposes the "Extraction of Meta-Data for Recommendation using keyword mapping," which involves constructing contextualized data through object detection models and STT (Speech-to-Text) models, extracting keywords, mapping with the public dataset MovieLens, and applying a Hybrid recommendation system. The process of building contextualized data utilizes YOLO and Google’s Speech-to-Text API. Following this, keywords are extracted using the TextRank algorithm and mapped to the MovieLens dataset. Finally, it is applied to a Hybrid Recommendation System. This paper validates the superiority of this approach by comparing it with the performance of the MovieLens recommendation system that does not expand metadata. Additionally, the effectiveness of metadata expansion is demonstrated through performance comparisons with existing deep learning-based keyword extraction models. Ultimately, this research resolves the cold start and long-tail problems of existing recommendation systems through the construction of video metadata and keyword extraction.
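A minimal TextRank-style keyword extractor in the spirit of the pipeline above can be built from a word co-occurrence graph and PageRank; the sketch below is a simplified illustration (no stop-word filtering or part-of-speech tagging), not the paper's implementation.

```python
import re
import networkx as nx

def textrank_keywords(text: str, window: int = 4, top_k: int = 5):
    # Build a co-occurrence graph over a sliding window of words.
    words = re.findall(r"[a-z]{3,}", text.lower())
    graph = nx.Graph()
    for i, w in enumerate(words):
        for u in words[i + 1 : i + window]:
            if u != w:
                graph.add_edge(w, u)
    # Rank words by PageRank centrality and return the top candidates.
    scores = nx.pagerank(graph)
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

print(textrank_keywords("the detective follows the suspect through the city "
                        "while the suspect hides evidence in the city park"))
```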
... The MS COCO dataset is used to assess the model [25,26]. It includes over 200,000 images and 250,000 human instances annotated with 17 keypoints. ...
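Assuming the standard COCO 2017 keypoint annotation file is available locally (the path below is hypothetical), the person instances and their 17 keypoints can be loaded with pycocotools as a quick sanity check:

```python
from pycocotools.coco import COCO

coco = COCO("annotations/person_keypoints_val2017.json")  # hypothetical path
person_id = coco.getCatIds(catNms=["person"])[0]
img_ids = coco.getImgIds(catIds=[person_id])

anns = coco.loadAnns(coco.getAnnIds(imgIds=img_ids[:1], catIds=[person_id]))
for ann in anns:
    kp = ann["keypoints"]                 # 17 keypoints, flattened (x, y, v)
    print(ann["image_id"], ann["num_keypoints"], kp[:6])
```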
Article
Full-text available
Object detection algorithms play an important role in detecting people in surveillance videos. In recent years, with the rapid development of deep learning, the performance of object detection has improved by leaps and bounds, and object detection schemes based on the YOLOv7 algorithm have emerged. Traditional object detection methods often fail to achieve a balance between speed and accuracy. To address these issues, this research proposes an improved YOLOv7 algorithm that achieves the best speed-to-accuracy balance compared to state-of-the-art object detection on recorded videos, using an effective compression method. This method calculates the difference between video frames and, using a zero-difference approach, removes duplicate frames from the recorded video, choosing only the meaningful frames based on several variables, including frame size, frame detail, and the distance between frames; the video size is then reduced by eliminating frames comparable to those chosen. Additionally, no other datasets or pre-trained weights were used; YOLOv7 was trained exclusively on the MS COCO dataset from scratch. To ensure the effectiveness of this approach, numerous detection systems are used in this work, and positive performance results in reducing the processing time required for object detection have been attained.
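The frame-difference idea described in this abstract can be sketched with OpenCV: keep a frame only when its mean absolute difference from the last kept frame exceeds a threshold, so duplicate ("zero difference") frames are dropped. The threshold and the single-criterion test below are simplifying assumptions; the paper weighs several variables.

```python
import cv2

def keep_meaningful_frames(path: str, diff_thresh: float = 8.0):
    cap = cv2.VideoCapture(path)
    kept, last = [], None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Mean absolute difference is ~0 for duplicated frames.
        if last is None or cv2.absdiff(gray, last).mean() > diff_thresh:
            kept.append(frame)
            last = gray
    cap.release()
    return kept
```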
... These models can automatically learn the relationships between images and text from large-scale image and text datasets. Additionally, to train and evaluate image captioning models, researchers have created various large-scale image captioning datasets such as COCO (Sharma 2021) and Flickr30k (Plummer et al. 2015). Evaluation metrics include BLEU, METEOR, and CIDEr, among others, to measure the similarity between generated text and human-generated reference text. ...
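As a toy illustration of the BLEU metric mentioned here, NLTK can score a generated caption against human references (smoothing guards against zero n-gram counts on short sentences):

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

# Two human reference captions and one generated candidate caption.
references = [["a", "dog", "runs", "on", "the", "beach"],
              ["a", "dog", "is", "running", "along", "the", "beach"]]
candidate = ["a", "dog", "runs", "along", "the", "beach"]

score = sentence_bleu(references, candidate,
                      smoothing_function=SmoothingFunction().method1)
print(f"BLEU: {score:.3f}")
```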
Article
Full-text available
The robot video question-answering system is an artificial intelligence application that integrates computer vision and natural language processing technologies. Recently, it has received widespread attention, especially with the rapid development of large language models (LLMs). The core technical challenge lies in the application of visual question answering (VQA). However, visual question answering currently faces several challenges. Firstly, the acquisition of human annotations is costly, and secondly, existing models require expensive retraining when replacing a particular module. We propose the VLM2LLM model, which significantly improves the performance of multimodal question-answering tasks by integrating visual-language matching and large-scale language models. Specifically, it overcomes the limitations of requiring massive computational resources for training and inference in previous models. Furthermore, it allows for the upgrading of our LLM version according to the latest research advancements and needs. The results demonstrate that the VLM2LLM model achieves the highest accuracy compared to other state-of-the-art models on three datasets: QAv2, A-OKVQA, and OK-VQA. We hope that the VLM2LLM model can drive advancements in the field of robot video question-answering and provide innovative solutions for a wider range of application domains.
... COCO data set was adopted by many researchers within the machine learning field [1] [2]. A study by Sharma [3] investigates Shannon's information measure to classify images using deep learning and machine learning methods; the COCO dataset was deployed for training the system. COCO is also used within object detection [4][5][6]; Wang [7] uses a large-batch optimization framework for object detection called LargeDet, which successfully scales the batch size to 1056 with a ResNet50 backbone. ...
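Since the cited study concerns Shannon's information measure on images, a minimal sketch of that computation is the entropy of a grayscale intensity histogram, H = -Σ p·log2(p):

```python
import numpy as np

def image_entropy(gray: np.ndarray) -> float:
    # Histogram over the 256 intensity levels, normalized to probabilities.
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                          # drop empty bins; 0*log(0) := 0
    return float(-(p * np.log2(p)).sum())

print(image_entropy(np.random.randint(0, 256, (64, 64))))  # ~8 bits for noise
```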
Article
Full-text available
Microsoft Common Objects in Context (COCO) is a huge image dataset with over 300k images belonging to more than ninety-one classes. COCO contains valuable information for detection, segmentation, classification, and tagging, but the dataset suffers from being unorganized, and classes in COCO interfere with each other. Dealing with it directly gives very low, unsatisfying results when calculating accuracy or intersection over union in classification and segmentation algorithms. A simple method is proposed to create a customized subset of the COCO dataset by specifying the class or classes of interest. The suggested method is very useful as a preprocessing step for any detection or segmentation algorithm such as YOLO, SSPNET, RCNN, etc. The proposed method was validated using the LinkNet architecture for semantic segmentation. The results after applying the preprocessing were presented and compared to state-of-the-art methods. The comparison demonstrates the exceptional effectiveness of transfer learning with our preprocessing model.
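A hedged sketch of the subset idea, gathering every image and annotation belonging to chosen classes with pycocotools, might look as follows; the annotation path and class names are assumptions, and the cited paper's exact procedure may differ.

```python
from pycocotools.coco import COCO

coco = COCO("annotations/instances_train2017.json")       # hypothetical path
cat_ids = coco.getCatIds(catNms=["dog", "cat"])           # chosen classes

# Collect every image that contains at least one chosen class.
img_ids = set()
for cid in cat_ids:
    img_ids.update(coco.getImgIds(catIds=[cid]))

subset = {
    "images": coco.loadImgs(list(img_ids)),
    "annotations": coco.loadAnns(coco.getAnnIds(imgIds=list(img_ids),
                                                catIds=cat_ids)),
    "categories": coco.loadCats(cat_ids),
}
print(len(subset["images"]), "images in the customized subset")
```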
... Metrics such as accuracy, precision, and F1 score were used to evaluate the three algorithms on the Microsoft COCO dataset and determine their relative strengths and limitations. Sharma [17] utilized deep learning and machine learning techniques to study information measures for image classification. Python was chosen as the programming language because it is included in the Creator package. ...
Preprint
Full-text available
Currently, machine learning is dominant in feature extraction and classification tasks, and it has even experienced fierce competition with deep learning. Deep learning, with its high accuracy, has attracted the attention of researchers and developers, surpassing previously established machine learning techniques, especially in the field of computer vision. This has been proven through modern scientific research. Not only has deep learning outperformed machine learning in addressing feature extraction and classification challenges, it has also demonstrated its advantages in guiding neural networks to recognize visual images. To achieve these capabilities, deep learning models have gradually grown in size and complexity, enabling them to take on more responsibility in the field of object detection. The current research introduces a convolutional neural network (CNN) model called S4ANET. This network aims to push the boundaries of neural-network models by implementing several advancements. One of its main focuses is to refine the loss function within the CNN. The newly designed loss function exhibits enhanced adaptability and rationality compared with its predecessor, effectively reducing network errors. Moreover, the research emphasizes the incorporation of transfer learning, which is a critical aspect. By leveraging predefined weights, the knowledge gained can be preserved and utilized for subsequent training, resulting in reduced time and resources dedicated to the calculations. The results unequivocally supported these goals and motivations. Experiments conducted on various COCO datasets demonstrated that the proposed methodology achieves significant improvements in accuracy and precision of up to 99%. Building upon the foundation of 1D CNN, the proposed deep learning model has made remarkable progress in object detection and classification, becoming a major method in the field.
... Similarly, in Asia, water harvesting practices have been followed for over 2000 years in Thailand, so the technology has a lengthy history there as well. Over the years, people in Africa and Asia have perfected the art of collecting water in small quantities from the overhangs of rooftops or by employing simple drains into traditional jugs and pots [13]. Many outlying rural areas still use this method today. ...
Article
Full-text available
In order to conserve water for the future and replenish groundwater supplies, rainwater harvesting (RWH) is an excellent practice. Both surface and groundwater supplies in India are rapidly declining because of the country's burgeoning population, the effects of global warming, the inequitable distribution of rainfall, and the often-severe fluctuations in other meteorological indices. As a result, it is crucial that people everywhere start taking steps to conserve water on their own, in their schools, and in their neighbourhoods. The purpose of this research project was to create plans for a rainwater harvesting system on the roof of the Dhaanish Ahmed College of Engineering in Chennai, which is in the Indian state of Tamil Nadu. After analysing the water needs and available supplies on campus, the administration decided that the main building would provide the best catchment area for collecting rainwater for reuse. In addition, the RWH system's many components were developed using industry standards. Based on the results of the study, it was determined that if the RWH system were installed on the campus of the Dhaanish Ahmed College of Engineering in Chennai, it would be possible to store enough water over the course of a year to alleviate the College's water scarcity issues during the dry season. With this plan in place, there will be more water available for building and cultivation. It will also help artificially recharge groundwater, which will improve water quality at both the surface and subsurface. The garden grounds and rooftops of Dhaanish Ahmed College of Engineering, Padappai, are included in the construction plan and implementation. The total roof surface area is 81,706.38 square feet. IS 15797: 2008, "Indian Standards for Rooftop Rainwater Harvesting Guidelines," was used as inspiration for the design.
... In order to carry out their duties efficiently, boards of directors typically establish audit, nomination, and compensation committees [154][155][156]. The establishment, characteristics, and actions of audit committees have only recently become the subject of study, despite the fact that they have long been mandated by rules in the industrialised world. ...
Article
Full-text available
The impact of audit committees and boards of directors on company performance is investigated: primarily the number of members on the board and their ability to make decisions without outside influence, as well as the audit committee's composition, authority, expertise, and frequency of meetings. Although agency theory predicts that a more impartial board leads to greater results, this paper discusses resource dependency theory, which holds that non-independent directors can improve a company's performance. Accounting scandals and other worldwide corporate governance failures have had a significant impact on stakeholders and economies at all levels during the past few decades. However, we could not find any correlation between audit committee qualities and financial outcomes in our analysis. The foregoing results shed light on the inner workings of corporate governance.
... The original YOLOv4-tiny model's anchor boxes were obtained by clustering the COCO dataset [38] and the Pascal VOC dataset [39]. By analyzing these datasets, we found that their targets differed considerably in size and shape from those in the traffic sign dataset, and that the background of the traffic sign dataset was more complex. ...
Article
Full-text available
Recognizing traffic signs is an essential component of intelligent driving systems’ environment perception technology. In real-world applications, traffic sign recognition is easily influenced by variables such as light intensity, extreme weather, and distance, which increase the safety risks associated with intelligent vehicles. A Chinese traffic sign detection algorithm based on YOLOv4-tiny is proposed to overcome these challenges. An improved lightweight BECA attention mechanism module was added to the backbone feature extraction network, and an improved dense SPP network was added to the enhanced feature extraction network. A yolo detection layer was added to the detection layer, and k-means++ clustering was used to obtain prior boxes that were better suited for traffic sign detection. The improved algorithm, TSR-YOLO, was tested and assessed with the CCTSDB2021 dataset and showed a detection accuracy of 96.62%, a recall rate of 79.73%, an F-1 Score of 87.37%, and a mAP value of 92.77%, which outperformed the original YOLOv4-tiny network, and its FPS value remained around 81 f/s. Therefore, the proposed method can improve the accuracy of recognizing traffic signs in complex scenarios and can meet the real-time requirements of intelligent vehicles for traffic sign recognition tasks.
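The k-means++ prior-box step mentioned in this abstract can be sketched with scikit-learn by clustering (width, height) pairs from the training labels; note that the YOLO papers cluster with an IoU-based distance, whereas plain Euclidean k-means++ is shown here for brevity, on stand-in data.

```python
import numpy as np
from sklearn.cluster import KMeans

# Stand-in (width, height) pairs; a real run would read them from the labels.
wh = np.random.rand(500, 2) * np.array([80, 80]) + 10

kmeans = KMeans(n_clusters=6, init="k-means++", n_init=10, random_state=0)
kmeans.fit(wh)
# Sort the learned prior boxes by area, as detection heads usually expect.
anchors = kmeans.cluster_centers_[np.argsort(kmeans.cluster_centers_.prod(1))]
print(np.round(anchors, 1))
```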
... The findings have contributed to the rise in operational profitability in family businesses that incorporate more executives and managers [124]. ...
Article
The purpose of this study is to investigate the factors that drive performance and profitability in family-oriented firms and to identify the elements that play an important part in generational succession. The fact that family-owned businesses are not only the foundation of the economy in many countries but also a significant contributor to the world economy, together with the fact that many family-owned businesses struggle to prosper and thrive across generations, are the primary motivations for studying this problem. This analysis was carried out utilising an exploratory research design and qualitative methods derived from previously published studies, reviews, and observational research pertaining to this industry. Some of the studies discussed the aspects that have an impact on the performance of family firms, while others discussed the important considerations that should be taken into account while planning the succession of firms. It was determined that money is not the only indicator that should be used to evaluate an organization's performance; other metrics should also be considered. The study placed a strong emphasis on the variables that enable family businesses to expand and, as a result, break free from stagnation. The family's impact on the performance and profitability of the company has also been taken into consideration.
... In the training process, the momentum parameter for stochastic gradient descent was set to 0.9, the weight decay parameter was 0.001, the default IoU threshold between the detection box and the ground truth was 0.5, and the batch size was 8. The pretrained weights on the Microsoft Common Objects in Context (COCO) dataset were utilized in the training process for faster convergence and better generalization (Sharma, 2021). ...
Article
Accurate detection of plant leaves is a meaningful and challenging task for developing smart agricultural systems. To improve the performance of detecting plant leaves in natural scenes containing severe occlusion, overlapping, or shape variation, we developed an in situ sweet potato leaf detection method based on a modified Faster R-CNN framework and visual attention mechanism. First, a convolutional block attention module was added to the backbone network to enhance and extract critical features of leaf images by fusing cross-channel information and spatial information. Subsequently, the DIoU-NMS algorithm was adopted to modify the regional proposal network by replacing the original NMS. DIoU-NMS was utilized to reduce missed and incorrect detection in scenes of densely distributed leaves by considering the targets' overlap ratio, distance, and scale. The proposed leaf detection method was tested and evaluated on sweet potato plant images collected in agricultural fields. In the datasets, sweet potato leaves were presented in various sizes and poses, and a large proportion of leaves were occluded or overlapped with each other. The experimental results showed that the proposed leaf detection method outperforms state-of-the-art object detection methods. The mean average precision of the proposed method reached 95.7%, which was 2.9% higher than that of the original Faster R-CNN and 7.0% higher than that of YOLOv5. The proposed method achieved promising performance in detecting dense leaves or occluded leaves and could provide key techniques for applications in smart agriculture and ecological monitoring, such as growth monitoring or plant phenotyping.
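DIoU-NMS, as adopted above, suppresses a candidate only when its IoU with a kept box minus a normalized center-distance term exceeds the threshold, so overlapping but well-separated leaves survive. A NumPy sketch of the standard algorithm (not the authors' code) follows; boxes are [x1, y1, x2, y2].

```python
import numpy as np

def diou_nms(boxes: np.ndarray, scores: np.ndarray, thresh: float = 0.5):
    order, keep = scores.argsort()[::-1], []
    area = lambda b: (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    ctr = lambda b: (b[..., :2] + b[..., 2:]) / 2
    while order.size:
        i, order = order[0], order[1:]
        keep.append(int(i))
        if not order.size:
            break
        # Plain IoU between the kept box and the remaining candidates.
        x1 = np.maximum(boxes[i, 0], boxes[order, 0])
        y1 = np.maximum(boxes[i, 1], boxes[order, 1])
        x2 = np.minimum(boxes[i, 2], boxes[order, 2])
        y2 = np.minimum(boxes[i, 3], boxes[order, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        iou = inter / (area(boxes[i]) + area(boxes[order]) - inter)
        # Center distance over the diagonal of the smallest enclosing box.
        rho2 = ((ctr(boxes[i]) - ctr(boxes[order])) ** 2).sum(-1)
        cx1 = np.minimum(boxes[i, 0], boxes[order, 0])
        cy1 = np.minimum(boxes[i, 1], boxes[order, 1])
        cx2 = np.maximum(boxes[i, 2], boxes[order, 2])
        cy2 = np.maximum(boxes[i, 3], boxes[order, 3])
        c2 = (cx2 - cx1) ** 2 + (cy2 - cy1) ** 2
        order = order[iou - rho2 / c2 <= thresh]   # DIoU-NMS criterion
    return keep
```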
... The various ICs in this project necessitate lower supply voltages, so a step-down transformer steps down the 230 V AC mains supply voltage to lower values. A transformer contains both primary and secondary coils [159][160][161][162][163][164][165][166][167][168][169][170][171]. The secondary coil of the transformer is designed to have fewer turns to reduce the voltage. ...
Article
Full-text available
This project presents an automatic oxygen pumping system for acute asthma patients. The system continuously monitors the patient's breathing rate and starts pumping oxygen to support the patient's breathing as soon as it identifies an abnormality. Meanwhile, it keeps monitoring the breath rate; if it finds the average rate is not achieved even after pumping, the controller sounds an alarm indicating the patient's condition to the bystander. To monitor the patient's breathing, a breath rate sensor is used, and the controller adjusts a solenoid valve accordingly to pump oxygen when necessary. We also added a temperature sensor for measuring body temperature, a heartbeat sensor for measuring the patient's heart rate, and IoT connectivity to send information to the consultant doctor's PC or the hospital.
... In the case that the chatbot cannot answer the queries, it will redirect the query and the user to a real customer support person. This is really helpful in fields where repetitive questions are asked, like customer support [82][83][84][85][86][87][88][89][90][91][92][93][94][95][96][97][98][99]. The chatbot can troubleshoot many simple problems, saving time for employees and letting them focus on larger problems. ...
Article
The purpose of this project is to build a ChatBot that utilises NLP (Natural Language Processing) and assists customers. A ChatBot is an automated conversation system that replies to users' queries by analysing them using NLP and assists them in every way it can. In this project, we are trying to implement a customer service chatbot that tries to converse and assist the user in some simple scenarios. This chat bot can take simple user queries as input, process them, classify them into one of the existing tags, and respond with an appropriate response. If the user's queries are too complex for the bot, it will redirect the conversation to an actual person. The ChatBot is based on a machine learning model built using PyTorch (a Python deep learning library) and NLTK (Natural Language Tool Kit). The model used here is a feed-forward neural network with three layers: the input layer, the hidden layer, and the output layer. The number of nodes in the input and hidden layers depends on the total number of distinct words present in the data set, whereas the output layer contains the same number of nodes as the number of distinct tags the data set is divided into. This kind of neural network is perfect for building simple chatbots, as it does not require high computational power either for training or for deployment. The chatbot we built is for a coffee shop, and it performs actions like ordering coffee, telling a joke, suggesting a drink, etc. Although this chatbot is relatively simple, it is highly customizable, making it easy to adapt to any scenario.
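The three-layer network described here is easy to reproduce; the following PyTorch sketch uses illustrative sizes (vocabulary, hidden width, and tag count are assumptions) with a bag-of-words input and one logit per intent tag.

```python
import torch
import torch.nn as nn

class ChatNet(nn.Module):
    def __init__(self, vocab_size: int, hidden: int, num_tags: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(vocab_size, hidden), nn.ReLU(),  # input -> hidden layer
            nn.Linear(hidden, num_tags),               # hidden -> output tags
        )

    def forward(self, bow):               # bow: (batch, vocab_size) 0/1 vector
        return self.net(bow)

model = ChatNet(vocab_size=120, hidden=8, num_tags=7)
logits = model(torch.zeros(1, 120))
tag = logits.argmax(dim=1)                # predicted intent, answered from templates
```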
... This cycle is applied after dilation, followed by erosion [142][143][144][145][146][147][148][149][150][151][152][153][154][155][156]. The resulting image is displayed in Figure 4. Thresholding methods produce segments having pixels with comparable intensities [157][158][159][160][161][162][163][164][165][166][167][168][169][170][171][172][173][174][175]. Thresholding is a helpful method for establishing boundaries in images containing solid objects on a contrasting background [176][177][178][179][180][181]. Figure 5 shows the resulting image. ...
Article
Full-text available
The brain is the central nervous system organ immersed in cerebrospinal fluid (CSF), which shields it from mechanical stress and helps support its weight through buoyancy. Brain hydrocephalus is the condition in which there is an abnormal accumulation of this cerebrospinal fluid in the ventricles or cavities of the brain. Various algorithms have been proposed to address this issue. In this paper, a novel methodology for the segmentation of CSF from hydrocephalus-affected T2-weighted MRI images is proposed. The skull stripping strategy adopts a two-step approach: in the first step, deliberate use of morphological reconstruction operations is applied to the brain image; in the second step, a thresholding-based method is used to isolate the brain within the skull. By fixing the threshold value and applying a histogram-based thresholding procedure to the skull-stripped image, the CSF portion is extracted. This strategy is tested and compared with the K-Means algorithm, and the proposed strategy proved to be more efficient than K-Means segmentation. The volume of segmented CSF is calculated, which enables the detection of hydrocephalus.
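A hedged sketch of that two-step idea, morphological cleanup followed by histogram-based thresholding, can be written with OpenCV; Otsu's method stands in for the paper's fixed threshold, and a random array stands in for a real T2-weighted slice.

```python
import cv2
import numpy as np

slice_t2 = np.random.randint(0, 256, (128, 128), dtype=np.uint8)  # stand-in

# Step 1: rough morphological cleanup (a simplification of skull stripping).
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
opened = cv2.morphologyEx(slice_t2, cv2.MORPH_OPEN, kernel)

# Step 2: histogram-based (Otsu) threshold to isolate bright CSF in T2.
_, csf_mask = cv2.threshold(opened, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
csf_area_px = int((csf_mask > 0).sum())   # per-slice area; sum slices for volume
```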
... Various research efforts to detect fake news with several methodologies were presented in this study, and a way of classifying fake news on social media was presented [167][168][169][170][171][172][173][174][175][176][177][178]. The work of several researchers on these subjects is shown below in Table 3. ...
Article
Full-text available
Bogus Internet news is widely regarded as fake articles purposefully made to mislead the reader. The always-on nature of social media platforms has led to an excessively large quantity of multimedia information on social networks. Openness and unlimited information sharing on social media platforms propagate information across the network regardless of its creditworthiness. It is difficult to find trustworthy news sources amid the multiplication of misleading information in daily news outlets such as social media feeds, news blogs, and online news media. Machine learning tools could provide insights into the reliability of online content. The intensive spread of false news can have highly detrimental effects on individuals and society. Consequently, fake news detection in social media has recently gained enormous attention. The topic attracted the analytics community because of the losses caused by the rapid dissemination of false news in multiple industries such as politics and finance. A social networking service is a platform where social relationships are established between persons who share interests, hobbies, backgrounds, or links in real life. Participants who register on such a website are offered a unique representation (typically a profile) and social links as a large part of the social networking service.
... We applied our RBM adaptive learning method to CIFAR-10 [16]. Our proposed model outperforms the previous RBM model [7][8] in experiments [116][117][118][119][120][121][122][123][124][125][126][127][128][129][130][131][132]. The remaining sections are organized as follows. ...
Article
Full-text available
The RBM (Restricted Boltzmann Machine) is a stochastic, energy-based, unsupervised neural network model. RBM is a key pre-training component for deep learning. The structure of an RBM includes weights and coefficients for neurons, and a better network structure allows us to examine data more thoroughly. We looked at the variance of parameters during learning to address the problem: to determine why the RBM's energy function fluctuates, we examine its parameter variance. A neuron generation and annihilation algorithm is combined with an adaptive RBM learning method to determine the optimal number of hidden neurons for attribute imputation during training. When the energy function has not converged and parameter variance is high, a hidden neuron is generated; if the neuron does not contribute to learning, it is annihilated. In this study, some benchmark PIMA data sets were tested.
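The base model in this work trains with contrastive divergence; a minimal CD-1 update for a Bernoulli RBM is sketched below in NumPy (the paper's neuron generation/annihilation logic, driven by parameter variance, is not reproduced here).

```python
import numpy as np

rng = np.random.default_rng(0)
n_vis, n_hid, lr = 8, 4, 0.1
W = rng.normal(0, 0.01, (n_vis, n_hid))
b_v, b_h = np.zeros(n_vis), np.zeros(n_hid)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0):
    h_prob = sigmoid(v0 @ W + b_h)                  # positive phase
    h_samp = (rng.random(n_hid) < h_prob).astype(float)
    v1 = sigmoid(h_samp @ W.T + b_v)                # reconstruction
    h1 = sigmoid(v1 @ W + b_h)                      # negative phase
    grad = np.outer(v0, h_prob) - np.outer(v1, h1)  # CD-1 weight gradient
    return grad, v0 - v1                            # gradient + recon error

v = rng.integers(0, 2, n_vis).astype(float)         # toy binary visible vector
grad, err = cd1_step(v)
W += lr * grad                                      # one parameter update
```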
... The purpose of preprocessing is to improve input image data by reducing unneeded distortion and enhancing the image features used for further processing [90][91][92][93][94][95][96][97][98][99][100][101]. Image preprocessing is the general name for operations on images at a low level of abstraction. ...
Article
Full-text available
Many disorders are identified in the early stages of diagnosis by analyzing the nails of the human hand. The colour of a person's nails can aid in diagnosing certain medical conditions. The suggested approach, in this situation, supports disease-diagnosis decision-making. An image of the human nail is used as input to the system. The technology analyses nail photos and extracts disease-specific nail characteristics. The human nail has numerous characteristics, and the suggested system detects illness from changes in nail colour. The initial training-set data is extracted from an image of a patient's nails with a certain condition and processed with the Weka tool. To obtain the desired results, the image's extracted features are compared with the training dataset. Deformation of the nail unit is referred to as nail disease. Nail diseases form their own class because of their distinct signs, symptoms, causes, and consequences, which may or may not be related to other medical illnesses. Nail problems are still poorly understood and difficult to diagnose. This study proposes a fresh deep learning system for identifying and categorizing nail disorders from photos. Convolutional neural network (CNN) models are combined in this framework to extract features. This research was also contrasted with several other state-of-the-art algorithms (support vector machines, ANN, k-nearest neighbors, and RF) evaluated on the datasets and showed positive results. Key words: Human Nail, Deep Learning, CNN, Neural Networks, Preprocessing
... A bookshop at the Corridor on the outer circle belongs to Jehangir Rangoonwalla, who considers it the center of the universe [113][114][115][116][117][118][119][120][121]. For him, "The Universe Revolves Around Him, Occasionally Stopping To Pick Up An Odd Perry Mason Or James Hadley Chase." ...
Article
Full-text available
One of the country's most renowned graphic novel artists, Mr. Banerjee, has published Corridor, a fragmented micro-story, injecting one or more small stories within the body of a larger story at various levels within the text, including thematics, language, and imagery. India's first graphic novel, Corridor, was astronomically different from anything published earlier in India. It targeted the astute, well-read reader with a meandering plot and kitschy observations of life in India's immense cities. The process of filling the gaps in the multiple and fragmented conversations reflects the role readers are invited to play throughout the novel on the microscopic level. Francis McKee, the director of the Glasgow-based Centre for Contemporary Arts, remarks about Sarnath's work at an exhibition that digs into India's changes. He added that the artist explores the country's transformation, looking at the losses suffered in terms of intimacy and tradition, the rise in aspects like conspicuous wealth and consumption, and the evolution of newer ways of life. The term narration describes how stories are told and how their material is selected and arranged to achieve particular effects on their audiences. The narration is either authentic or mendacious, in which truth and falsity are hard to tell apart.
... Brick-Making Machines (BMM), whether manually operated or vehicle-operated, are often used in large-scale utility companies with great merit [4]. The convenience of bricks as a manufactured product has kept some of the largest and most expensive machines in use for an unlimited period [5][6][7][8]; BMM have grown in size as the demand for bricks expands [9]. ...
Conference Paper
Full-text available
Reliability theory offers theoretical and practical tools to assess the capacity of parts, components, and organizations to perform their functions for a desired period without failure in a given environment, enabling system administrators to know their production systems' reliability and to produce optimum reliability levels. The modelling presented here addresses the MTSF and the Profit Analysis (PA) of a Single Unit System (SUS) and explores the possibilities for repairs beyond the warranty within a repair facility. Any failure during the warranty is repaired free of charge by the manufacturer, provided the failure is not due to the carelessness of users. A component undergoes examination after its failure to decide between repair and replacement once the warranty has expired. The failure time of the system follows the negative exponential distribution, while the repair and examination time distributions are arbitrary. Reliability, availability, and PA expressions for the MTSF of the system are determined using the supplementary variable technique. To study the performance of industry organization models and the MTSF, availability, PA, and other reliability characteristics of process industry organization models, one must know the state transition diagram and some mathematical and statistical tools for solving the equations. Graphical results for reliability and PA are obtained by taking into account different sizes and specific values of the repair cost.
... Recognizing the characteristics that directly affect financial reporting consistency is one of the most difficult challenges [77][78][79][80][81][82][83]. Using this information, the Management Committee may evaluate management performance and, if necessary, take the relevant actions to minimize failure rates and improve financial outcomes [68][69][70][71][72][73][74][75][76]. ...
Article
Full-text available
Investing in banks has been harmed lately by a number of accounting scandals, one of which involved Bank Al Madina. Additionally, it was determined that the low quality of financial reporting systems was a major contributing factor in the accounting scandals, along with insufficient governance processes. Strong links were found between the board structure and the consistency of financial statements and their accompanying disclosures. Board size and the capacity of management to supervise managers have a significant influence in generating better-quality financial reporting for the organization. For management, it provides a better perspective on the company's financial accounts, since the board of directors may be seen as an observatory device that presents more accurate information and pricing. Quantitative methods will be used to gather data by distributing a set of questionnaires to a predetermined number of participants in the study. In addition, to ensure reliable findings, the study will make use of both primary and secondary data sources. SPSS statistical software is used to analyze the data, and the findings are presented as descriptive, inferential, and correlational data.
Article
Full-text available
Introduction: Sea jellyfish stings pose a threat to human health, and traditional detection methods face challenges in terms of accuracy and real-time capabilities. Methods: To address this, we propose a novel algorithm that integrates YOLOv4 object detection, an attention mechanism, and PID control. We enhance YOLOv4 to improve the accuracy and real-time performance of detection. Additionally, we introduce an attention mechanism to automatically focus on critical areas of sea jellyfish stings, enhancing detection precision. Ultimately, utilizing the PID control algorithm, we achieve adaptive adjustments in the robot's movements and posture based on the detection results. Extensive experimental evaluations using a real sea jellyfish sting image dataset demonstrate significant improvements in accuracy and real-time performance using our proposed algorithm. Compared to traditional methods, our algorithm more accurately detects sea jellyfish stings and dynamically adjusts the robot's actions in real-time, maximizing protection for human health. Results and discussion: The significance of this research lies in providing an efficient and accurate sea jellyfish sting detection algorithm for intelligent robot systems. The algorithm exhibits notable improvements in real-time capabilities and precision, aiding robot systems in better identifying and addressing sea jellyfish stings, thereby safeguarding human health. Moreover, the algorithm possesses a certain level of generality and can be applied to other applications in target detection and adaptive control, offering broad prospects for diverse applications.
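The PID component of the pipeline above is the textbook controller u = Kp·e + Ki·∫e dt + Kd·de/dt; a discrete sketch with illustrative gains (not the paper's values) follows.

```python
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral, self.prev_err = 0.0, None

    def update(self, error: float, dt: float) -> float:
        # Accumulate the integral and approximate the derivative of the error.
        self.integral += error * dt
        deriv = 0.0 if self.prev_err is None else (error - self.prev_err) / dt
        self.prev_err = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

pid = PID(kp=1.2, ki=0.1, kd=0.05)
pose, target = 0.0, 1.0
for _ in range(50):                       # toy loop: drive the pose to target
    pose += 0.1 * pid.update(target - pose, dt=0.1)
print(round(pose, 3))
```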
Article
Full-text available
Counterfeiting is a global phenomenon that is steadily growing because of globalization, which furthers exchanges between various countries and cultures. In other words, the fact that products with lower quality or value are made to resemble genuine ones also entails brand piracy and imitation of the logos and even the packaging of brands. These goods are generally placed unlawfully on the market without government taxes being charged. China and Turkey are accused of producing and distributing these products and eroding brands' market shares across nations. In the business world, luxury brands, given their leading role in the consumer's life, are thus the first target of counterfeit products. Financial value and price play an indispensable role in driving consumers toward counterfeits, because luxury brands were targeted only at upper classes while a large mass of middle classes could not buy them. The creation of an equivalent replica of the original version was also responsible for the proliferation of such goods on the market. More customers are now able to purchase imitations, since buyers demand a comparable commodity at a more reasonable price. The research implemented quantitative methodology, and the results found a direct relationship between branding, counterfeiting, and brand image.
Conference Paper
Full-text available
A flock or swarm of Unmanned Aerial Vehicles (UAVs) can be thought of as a flock of flying computers. Individuals in these communities all adhere to the same norms, practices, and tasks as those in their immediate vicinity. Since UAVs are a trendy topic, many studies have focused on how to improve them. The current work introduces a novel model, called hybrid deep machine learning (HDML), to address the issue of leader failure in leader-follower algorithms. The findings show that splitting the process into two phases allows for the successful fusion of the DL and ML models. Extensive experiments demonstrated that the new hybrid method offered improved performance in terms of precision, recall, and F1-measure of 100%. The results also show that the HDML model can eliminate the leader-failure problem by incorporating a sub-leader to ensure the work is completed successfully for everyone involved.
Preprint
Full-text available
Background: Significant effort has been made in manually tracking plant maturity and measuring early-stage plant density and crop height in experimental breeding plots. Agronomic traits such as relative maturity (RM), stand count (SC) and plant height (PH) are essential to cultivar development, production recommendations and management practices. The use of RGB images collected via drones may replace traditional measurements in field trials with improved throughput, accuracy, and reduced cost. Recent advances in deep learning (DL) approaches have enabled the development of automated high-throughput phenotyping (HTP) systems that can quickly and accurately measure target traits using low-cost RGB drones. In this study, a time series of drone images was employed to estimate dry bean relative maturity (RM) using a hybrid model combining Convolutional Neural Networks (CNN) and Long Short-Term Memory (LSTM) for feature extraction and capturing the sequential behavior of time series data. The performance of the Faster R-CNN object detection algorithm was also examined for stand count (SC) assessment during the early growth stages of dry beans. Various factors, such as flight frequencies, image resolution, and data augmentation, along with pseudo-labeling techniques, were investigated to enhance the performance and accuracy of DL models. Traditional methods involving pre-processing of images were also compared to the DL models employed in this study. Moreover, plant architecture was analyzed to extract plant height (PH) using digital surface model (DSM) and point cloud (PC) data sources. Results: The CNN-LSTM model demonstrated high performance in predicting the RM of plots across diverse environments and flight datasets, regardless of image size or flight frequency. The DL model consistently outperformed the pre-processing images approach using traditional analysis (LOESS and SEG models), particularly when comparing errors using mean absolute error (MAE), providing less than two days of error in prediction across all environments. When growing degree days (GDD) data was incorporated into the CNN-LSTM model, the performance improved in certain environments, especially under unfavorable environmental conditions or weather stress. However, in other environments, the CNN-LSTM model performed similarly to or slightly better than the CNN-LSTM + GDD model. Consequently, incorporating GDD may not be necessary unless weather conditions are extreme. The Faster R-CNN model employed in this study was successful in accurately identifying bean plants at early growth stages, with correlations between the predicted SC and ground truth (GT) measurements of 0.8. The model performed consistently across various flight altitudes, and its accuracy was better compared to traditional segmentation methods using pre-processing images in OpenCV and the watershed algorithm. An appropriate growth stage should be carefully targeted for optimal results, as well as precise boundary box annotations. On average, the PC data source marginally outperformed the CSM/DSM data in estimating PH, with average correlation results of 0.55 for PC and 0.52 for CSM/DSM. The choice between them may depend on the specific environment and flight conditions, as the PH estimation performance is similar in the analyzed scenarios. However, the ground and vegetation elevation estimates can be optimized by deploying different thresholds and metrics to classify the data and perform the height extraction, respectively.
Conclusions: The results demonstrate that the CNN-LSTM and Faster R-CNN deep learning models outperform other state-of-the-art techniques to quantify, respectively, RM and SC. The subtraction method proposed for estimating PH in the absence of accurate ground elevation data yielded results comparable to the difference-based method. In addition, the open-source software developed to conduct the PH and RM analyses can contribute greatly to the phenotyping community.
Article
Full-text available
Annotation: Hands-on management is the norm in most henhouses. Air quality, temperature, and humidity are only a few of the environmental characteristics that must be properly maintained, and each of these factors affects poultry production. There is a much higher mortality rate among broiler chickens than the normal rate. Using IoT and Wireless Sensor Networks (WSN), this research hopes to improve the health of Brunei's chickens by reducing the mortality rate and increasing the number of healthy chicks that can be produced. Prototypes developed using IoT and WSN technologies were used to verify the above parameters against thresholds. Autocorrection operations are triggered whenever one or more of these parameters cross a predetermined threshold. Additionally, the user is alerted through SMS when something goes wrong with the system. These metrics can also be tracked and displayed via a Web interface.
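The threshold-check logic described here reduces to comparing each reading against a configured range and raising an alert on violation; the bounds and the SMS hook in this sketch are placeholders, not the prototype's values.

```python
# Acceptable (low, high) ranges per monitored parameter; values are assumed.
THRESHOLDS = {"temp_c": (18.0, 32.0), "humidity": (50.0, 70.0)}

def check(readings: dict) -> list[str]:
    alerts = []
    for name, value in readings.items():
        low, high = THRESHOLDS[name]
        if not (low <= value <= high):
            alerts.append(f"{name}={value} outside [{low}, {high}]")
    return alerts

for msg in check({"temp_c": 35.2, "humidity": 61.0}):
    print("ALERT (would be sent via SMS):", msg)  # hypothetical notifier hook
```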
Article
In this study, we look at a simplified model of a larger industrial system that uses relays to carry out sequential actions. The system automatically shuts off when it senses water or temperature levels that are too high for safe industrial use. To put this idea into action, an Arduino microcontroller is used because it is the best option available: being a free, open-source platform, it is great for prototyping the system with little hassle, and because of its interoperability with sensors, the system can receive the necessary feedback and be fine-tuned by the Arduino. The status of the system is represented via various indicators such as the liquid crystal display (LCD), the buzzer, and so on. Data from the sensors is collected and stored in a Data Acquisition System (DAS), from which it may be accessed and used for management and oversight.
Article
Full-text available
Missing value research has been around for at least two to three decades, but imputation of missing values remains a major challenge in keeping databases intact. Statistics-oriented imputation and non-statistics-oriented imputation are the two types of missing value imputation. Numerous flaws in the statistics-based imputation method make it difficult to fine-tune or expect perfect imputation, and it also has a number of execution-related limitations. This points to the non-statistical practice known as machine learning, which we examined in this work. The Deep Belief Network (DBN) is a type of unsupervised probabilistic generative model used in machine learning applications. It is built from Restricted Boltzmann Machines, which perform contrastive divergence and backpropagation to fine-tune the weights for the imputation process. DBN's stable imputation value is based on the contrastive divergences. Data from the UCI Repository's PIMA medical dataset was used in the experimentation. Up to 90% of the time, the DBN with backpropagation is accurate, and a mean square error rate of at most 10% is supported by this method (DBN) compared to earlier imputation techniques. In order to evaluate the accuracy of DBN, nearly five additional imputation methods are compared with it. In comparison to other methods, DBN imputation provides 90% accuracy.
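In the spirit of the RBM-based imputation described above, scikit-learn's BernoulliRBM can sketch the clamp-and-sample idea on toy binary data: observed features are held fixed while Gibbs steps resample the missing ones. This is an illustrative simplification, not the paper's DBN with backpropagation fine-tuning.

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM

rng = np.random.default_rng(0)
X = rng.integers(0, 2, (200, 6)).astype(float)    # stand-in binary records

rbm = BernoulliRBM(n_components=4, learning_rate=0.05, n_iter=50, random_state=0)
rbm.fit(X)

row = X[0].copy()
missing = np.array([1, 4])                        # hypothetical missing features
observed = np.setdiff1d(np.arange(X.shape[1]), missing)
row[missing] = 0.5                                # neutral initialization

v = row.copy()
for _ in range(100):
    v = rbm.gibbs(v.reshape(1, -1)).astype(float).ravel()  # one Gibbs step
    v[observed] = row[observed]                   # clamp the observed bits
print("imputed values:", v[missing])              # last sample; averaging several is better
```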
Article
Full-text available
The IoT-based wheelchair fall detection and health care system is used to detect when a patient has fallen from the wheelchair. When the patient needs assistance, the accelerometer sensor is engaged by hand movement; a message is then communicated to the appropriate person via text or phone, and the need is met. It is also possible to automate the light and fan by voice using voice recognition. This technology provides continuous monitoring of the patient's temperature and pulse.
Article
Full-text available
Human action recognition has been widely studied in computer vision applications. Several action recognition schemes have shown that motion information can be extracted from task videos or still images. Action recognition techniques suffer from a lack of sufficient labeled training videos; in such cases, over-fitting is a likely issue, and recognition performance is limited. This paper proposes an adaptation technique to improve video action recognition by adapting knowledge from images. Meanwhile, the adaptation method is extended to a semi-supervised framework that can use both labeled and unlabeled videos. The action video can be arranged into an image-frame design by utilizing the IVA algorithm to increase precision and classify frames of blurred images. Over-fitting is thereby alleviated, and action recognition performance is enhanced. With this semi-supervised image-to-video adaptation for video action recognition, trials on public benchmark datasets and real-world datasets show that our technique beats several state-of-the-art action recognition approaches.
Article
Full-text available
The arena of postmodern literary criticism, an arena very much devoted to the concept of "decentring", is at once ideally prepared to challenge the authority of any rival ideology and, thanks to this same dedication, continuously on the verge of collapsing under the burden of hysteria stemming from its lack of authority. To devotees of the various theoretical practices that coexist under the umbrella of postmodernism, the above citation from Linda Hutcheon should offer some comfort. Here Hutcheon indicates that the influential and still-vital theories of Foucault, Derrida, and Marx persist regardless of the struggle in which they are implicated with the very notion of centre they try to subvert, and they are so implicated deeply and knowingly. We begin with Hutcheon's idea of endurance in the face of conflict because it indicates at once the sense of humility and the feeling of boldness that we posit must underlie all discourse on the postmodern. Participants in this discourse, like writers, critics, and readers, must humbly accept the instability and uncertainty of meaning that accompanies the venture of decentring epistemological authority; but then again, they must be bold enough to produce meaning from such unstable ground.
Article
Full-text available
Product quality is ensured by the use of well-known brand names. Because customers may be unable to distinguish between a counterfeit and a genuine product, counterfeits cause branded goods to lose value. Businesses in the same industry and sector are not the only competitors for brand companies; there is also an unauthorized rival in the counterfeit market. Anti-counterfeiting measures may increase the costs of businesses. Demand for counterfeit goods is the primary driver of their proliferation, since supply and demand are closely linked. Businesses should therefore identify the causes of this consumption and of the strong demand for counterfeit goods in order to create appropriate countermeasures. These variables directly shape attitudes toward counterfeit consumption and may be discovered directly from customers. The research implemented a quantitative approach, using surveys for data collection, and the data were analyzed with the SPSS statistical tool for hypothesis validation. According to perception theory, consumers generally perceive counterfeits as commonplace and of poor quality, and the study results show that this is indeed how customers see them. Authentic luxury brands have a good reputation, and owning one is a mark of status; authentic luxury goods also carry a high price tag. These findings align with the hypotheses put forward in the literature review about the associations people have with premium brands. The thesis results also support the hypotheses concerning customers' perceptions of luxury brand quality, durability, and dependability.
Article
Full-text available
Cost efficiency is an important and critical aspect that influences the decision-making process, and in times of financial uncertainty it becomes even more fundamental. Cost efficiency and resources are critical for an organization to control in order to guarantee the longevity of its development. Cost efficiency and other dimensions of organizational success depend on both internal and external influences. A corporation mainly affects its capital costs through management, transparency of its financial condition, and corporate accountability, which sit at the centre of the market climate within the company. External aspects, such as inflation rates, taxes, interest on credit, and financial stability, lie outside the corporation's control yet still shape its capital cost requirements. The study used historical data from banks covering 2008 to 2018. Data evaluated with econometric models revealed that the dependent and independent variables were strongly correlated, as they appear to influence costs.
Article
Full-text available
Ownership funds amount to capital that supports the bank and lets it act as a buffer when conditions are adverse; moreover, higher bank balances reduce the likelihood of trouble. Capital adequacy is the degree to which banks can deal with uncertainties such as credit, market, and operational risk in order to withstand potential losses and safeguard debtors. Bank rates, credit rates, cash balance ratios, and statutory ratios were the monetary factors, each of which was individually regressed against the performance of deposit money banks. Loan rates were found to have a significant, positive effect on banks' profitability, indicating that a drop in lending rates reduces profitability. Bank rates, cash balance ratios, and regulatory ratios were found to impact bank profitability negatively. The connection between bank profitability and monetary policy tools in the private sector was examined jointly through credit, bank rates, cash balance ratios, and statutory ratios. The research reported a direct relationship between monetary policy and financial performance.
Article
Full-text available
Commercial banks, which manage a substantial share of the financial industry's total assets, depend mostly on credit. Banks may increase their revenues via this function, one of the main tasks of commercial banks. It should be recalled that banks differ in various ways in terms of their aims, services, and strategies, and in their day-to-day operations they confront several risks. Bank performance is highly affected by credit risk, the possibility that the total value of assets may change because a counterparty has failed to meet its commitments under a contracted liability. A bank's primary purpose is to accept deposits and provide credit facilities, which makes it necessarily subject to credit risk. Credit risk therefore constitutes the most significant risk banks face, and their success depends, to a greater degree than for other risks, on its accurate measurement and successful management. The study used a quantitative technique, distributing a survey to a set number of participants, and the findings were examined with regression and Pearson correlation analyses, which indicated that market risk, liquidity risk, loan risk, and solvency risk are directly linked. For 2017 and 2018, the balance sheet was used to concentrate on the effect of the ratios on net income. The findings showed that the higher the risk management ratios, the higher the net income.
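As a tiny, assumed illustration of the Pearson-correlation step described above (toy numbers, not the study's data):

```python
from scipy.stats import pearsonr

# Hypothetical per-bank risk-management ratios and net income figures.
risk_ratio = [0.12, 0.18, 0.22, 0.27, 0.31, 0.35]
net_income = [1.1, 1.6, 1.9, 2.4, 2.6, 3.0]

r, p_value = pearsonr(risk_ratio, net_income)
print(f"Pearson r = {r:.2f}, p = {p_value:.3f}")  # r near 1: directly linked
```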
Article
Full-text available
For years, the workers of an organization have striven to strengthen it and expand it with fresh concepts and strategies to accomplish new objectives. A layoff is, by definition, an involuntary release from an institution: a compulsory separation of certain categories of permanent or temporary personnel for particular purposes (economic reasons, downsizing, personnel management). Outsourcing is a way for companies to reduce costs and convert fixed costs into variable expenses; it transfers work or research to outside parties, which leads to job losses. Termination is a major business challenge that forcibly disrupts both the dismissed employees and the survivors. The recent wave of dismissals was triggered by the economic depression, aggravated by government corruption, and the worldwide spread of COVID-19 compounds it day by day. When a layoff is a mass layoff, companies may notify the workers of the reasons for the reduction. Some hospitals gave departing personnel advance warning to clarify matters and prepare the workers, even though layoffs can do serious harm; during the COVID-19 pandemic, such warning frames the coming unemployment as a consequence of the financial downturn. Other hospitals and organizations dismiss workers on the same day, without warning or consideration; this occurred mostly in small clinics, where layoffs were not very significant. Substantial institutions, including A.U.B.M.C., B.M.G., and other hospitals, prefer cuts as a remedy. This research aims to determine the effect of forced termination in health care institutions on survivors' effectiveness, performance, quality of service, and relational outcomes.
Article
Full-text available
Measurements are always accompanied by a certain degree of uncertainty, so efficient computation is needed to achieve high-precision measurement in the presence of uncertainty. The statistical definition of the precision of any measurement is one standard deviation divided by the square root of the sample size taken for the measurements; tolerance limits are accordingly statistical in nature, and measurements must be repeated a large number of times to obtain better precision. The target is therefore to establish tolerance limits in the presence of uncertainty in computer and communication systems. A nonparametric method is applied to establish tolerance limits when uncertainty is present in the measurements. The basic aim of the present paper is to explore an order-statistics-based nonparametric method for estimating the number of samples required to generate realizations of the uncertain random parameters, which in turn lets the user establish the tolerance limits. A case study of a solute transport model is presented, in which tolerance limits are established for the solute concentration at any spatial location and any temporal moment. Results obtained with the nonparametric simulation are compared with results obtained by the traditional method of setting tolerance limits using Monte Carlo simulations on computer and communication systems.
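As a hedged illustration of the order-statistics idea, the sketch below computes the smallest sample size n for which the sample minimum and maximum form a distribution-free tolerance interval. It assumes the classical Wilks result that the confidence of covering at least a proportion p of the population is 1 - n·p^(n-1) + (n-1)·p^n; this is standard order-statistics theory, not code from the paper.

```python
def two_sided_confidence(n: int, p: float) -> float:
    # Wilks' distribution-free result: probability that at least a
    # proportion p of the population lies between the minimum and
    # maximum of n i.i.d. samples, for any continuous distribution.
    return 1.0 - n * p ** (n - 1) + (n - 1) * p ** n

def min_sample_size(p: float = 0.95, gamma: float = 0.95) -> int:
    # Smallest n whose sample min/max give a (p, gamma) tolerance interval.
    n = 2
    while two_sided_confidence(n, p) < gamma:
        n += 1
    return n

print(min_sample_size(0.95, 0.95))  # 93 samples for 95%/95% limits
```

Repeating the simulation this many times is what lets the nonparametric approach sidestep any distributional assumption about the uncertain parameters.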
Article
Full-text available
The available butterfly data sets comprise only a few species, and the images in them are always standard specimen patterns rather than images of butterflies in their living environment. To overcome these limitations, we build a butterfly data set covering all butterfly species in China, with 4270 standard-pattern images of 1176 species and 1425 living-environment images of 111 species. We propose using the deep learning technique Faster-Rcnn to train an automatic butterfly identification system covering both butterfly position detection and species recognition. Species with only one living-environment image are removed from the data set, and the remaining living-environment images are partitioned into two subsets: one used as the test subset, the other as the training subset combined either with all standard-pattern butterfly images or with the standard-pattern images of the same species as the living-environment images. To construct the training subset for Faster-Rcnn, nine methods were adopted to augment the training images, including vertical and horizontal flipping, rotation through different angles, adding noise, blurring, and contrast adjustment. Three prediction models were trained, and the mAP (mean average precision) criterion was used to evaluate their performance. The experimental results demonstrate that our Faster-Rcnn-based automatic butterfly identification system performs well: its worst mAP is 60%, and it can simultaneously detect the positions of more than one butterfly in a single living-environment image and recognize the species of those butterflies as well.
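The augmentation operations listed above (flips, rotations, noise, blur, contrast adjustment) can be approximated with standard tooling. The sketch below uses torchvision transforms as an assumed stand-in, not the authors' exact pipeline; the noise function and parameter values are illustrative.

```python
import torch
from torchvision import transforms

def add_gaussian_noise(img: torch.Tensor, std: float = 0.02) -> torch.Tensor:
    # Additive Gaussian pixel noise, clamped back to the valid range.
    return (img + std * torch.randn_like(img)).clamp(0.0, 1.0)

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),   # left-right flip
    transforms.RandomVerticalFlip(p=0.5),     # up-down flip
    transforms.RandomRotation(degrees=30),    # rotation through different angles
    transforms.ColorJitter(contrast=0.4),     # contrast adjustment
    transforms.GaussianBlur(kernel_size=5),   # blurring
    transforms.ToTensor(),
    transforms.Lambda(add_gaussian_noise),    # adding noise
])
```

Each training image would be passed through `augment` several times to enlarge the Faster-Rcnn training subset.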
Article
Full-text available
Cognitive communication models perform the investigation and surveillance of spectrum in cognitive radio networks, assisting in detecting primary users (PUs) and, in turn, helping to allocate transmission space for secondary users (SUs). For effective regulation of the wireless channel handover strategy in cognitive computing systems, new computing models are needed that carry out sets of tasks, process the business model, and interact naturally with humans or machines rather than being explicitly programmed. Cognitive wireless networks are trained via artificial intelligence (AI) and machine learning (ML) algorithms for dynamic processing of spectrum handovers; they assist human experts in making better decisions by penetrating the complexity of the handovers. This paper focuses on the learning and reasoning features of cognitive radio (CR) by analyzing primary user (PU) and secondary user (SU) data communication using the home location register (HLR) and visitor location register (VLR) databases, respectively. SpecPSO is proposed for optimizing handovers using a supervised machine learning technique that performs dynamic handover by adapting to the environment and making smart decisions, compared with traditional cooperative spectrum sensing (CSS) techniques.
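SpecPSO's exact formulation is not given in this abstract. As a rough, assumed sketch, the snippet below shows the generic particle swarm optimization loop that such a handover optimizer could build on, minimizing a placeholder handover-cost function; the cost function and all parameter values are hypothetical.

```python
import numpy as np

def pso_minimize(cost, dim, n_particles=30, iters=100,
                 w=0.7, c1=1.5, c2=1.5, bounds=(-1.0, 1.0)):
    # Generic PSO: velocities are pulled toward personal and global bests.
    lo, hi = bounds
    rng = np.random.default_rng(0)
    x = rng.uniform(lo, hi, (n_particles, dim))    # particle positions
    v = np.zeros_like(x)                           # particle velocities
    pbest = x.copy()
    pbest_cost = np.apply_along_axis(cost, 1, x)
    gbest = pbest[pbest_cost.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        c = np.apply_along_axis(cost, 1, x)
        improved = c < pbest_cost
        pbest[improved], pbest_cost[improved] = x[improved], c[improved]
        gbest = pbest[pbest_cost.argmin()].copy()
    return gbest, pbest_cost.min()

# Hypothetical handover cost over, e.g., channel and timing parameters.
handover_cost = lambda z: np.sum(z ** 2)
best, best_cost = pso_minimize(handover_cost, dim=4)
```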
Article
Full-text available
Feature representation and classification are two key steps in face recognition. We compared three automated face recognition methods that use different feature extraction techniques: PCA (Principal Component Analysis), LDA (Linear Discriminant Analysis), and ICA (Independent Component Analysis), with an SVM (Support Vector Machine) used for classification. The experiments were run on two face databases, the ATT Face Database [1] and the Indian Face Database (IFD) [2]. Comparing the combinations (PCA+SVM), (ICA+SVM), and (LDA+SVM) showed that (LDA+SVM) achieved a higher recognition rate than the other two methods.
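A minimal sketch of this kind of comparison, assuming scikit-learn and flattened face images as feature vectors; the Olivetti faces here are a stand-in dataset, not the databases used in the article.

```python
from sklearn.datasets import fetch_olivetti_faces  # stand-in face dataset
from sklearn.decomposition import PCA, FastICA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

faces = fetch_olivetti_faces()
X, y = faces.data, faces.target  # flattened grayscale images, person labels

pipelines = {
    "PCA+SVM": make_pipeline(PCA(n_components=100), SVC(kernel="rbf")),
    "ICA+SVM": make_pipeline(FastICA(n_components=100, max_iter=1000),
                             SVC(kernel="rbf")),
    "LDA+SVM": make_pipeline(LinearDiscriminantAnalysis(), SVC(kernel="rbf")),
}
for name, pipe in pipelines.items():
    scores = cross_val_score(pipe, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```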
Article
A novel dynamic spectrum sharing method, inspired by natural communities based on social language, is proposed to overcome prevailing spectrum underutilization and scarcity. The Social Cognitive Radio Network (SCRN) combines social data with the mobile communication network by providing a range of data delivery services based on the social relationships among mobile users. The research focuses on diverse SCRN applications and their handover issues; a bio-intelligent supervised learning approach called SpecPSO is devised for performing social cognitive handover (SCH) to: a) evaluate efficient spectrum utilization and b) increase the data rate for applications like Facebook and LinkedIn. Experimental results show that the proposed SCH-SpecPSO outperforms state-of-the-art mobile social networks by 75% by optimizing various handover issues.
Conference Paper
Many studies are based on automatic or semi-automatic measurement of various a priori brain Regions of Interest (ROIs) to compare and discriminate between healthy controls (HC) and Alzheimer's Disease (AD) patients. The proposed diagnosis method yields up to about 84% stratification accuracy with a multi-kernel SVM, along with high sensitivity and specificity, both above 85%.
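Multi-kernel SVMs are often implemented by combining base kernels into a single Gram matrix. The sketch below shows that generic construction with scikit-learn, mixing an RBF and a linear kernel with a weight `alpha`; it is an assumed illustration on toy data, not the paper's pipeline.

```python
import numpy as np
from sklearn.metrics.pairwise import linear_kernel, rbf_kernel
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_train = rng.normal(size=(80, 10))        # toy stand-in ROI features
y_train = rng.integers(0, 2, 80)           # toy HC/AD labels
X_test = rng.normal(size=(20, 10))

alpha = 0.6  # hypothetical mixing weight between the two base kernels
K_train = (alpha * rbf_kernel(X_train, X_train, gamma=0.1)
           + (1 - alpha) * linear_kernel(X_train, X_train))
K_test = (alpha * rbf_kernel(X_test, X_train, gamma=0.1)
          + (1 - alpha) * linear_kernel(X_test, X_train))

clf = SVC(kernel="precomputed").fit(K_train, y_train)
pred = clf.predict(K_test)  # K_test rows: test samples vs. training samples
```

In practice `alpha` (or a full set of kernel weights) would itself be learned or cross-validated rather than fixed.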
Article
Identification of butterfly species is essential because butterflies are directly associated with crop plants used for human and animal consumption. However, the widely used reliable methods for butterfly identification are not efficient because of complicated butterfly shapes. We previously developed a novel shape recognition method that uses branch length similarity (BLS) entropy, defined on a simple branching network consisting of a single node and its branches. The method has been successfully applied to recognizing battle tanks and characterizing human faces with different emotions. In the present study, we used the BLS entropy profile (an ensemble of BLS entropies) as an input feature to a feed-forward back-propagation artificial neural network to identify butterfly species according to their shapes when viewed from different angles (vertically adjustable angle θ = ±10°, ±20°, …, ±60° and horizontally adjustable angle φ = ±10°, ±20°, …, ±60°). In the field, butterfly images are generally captured obliquely, owing to butterfly alignment and viewer positioning, which generates various shapes for a given specimen. To generate the different shapes of a butterfly viewed from different angles, we projected the shapes captured from the top view onto a plane rotated through angles θ and φ. Projected shapes with differing θ and φ values were used as training data for the neural network, and other shapes were used as test data. Experimental results showed that our method successfully identified various butterfly shapes. In addition, we briefly discuss extending the method to identify more complicated images of different butterfly species.
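As a hedged sketch, the BLS entropy of a node can be computed from the lengths of its branches, here taken to be the distances from a reference point to the sampled boundary pixels of a shape, with the entropy normalized by the log of the branch count. This is our reading of the definition, not code from the paper.

```python
import numpy as np

def bls_entropy(center: np.ndarray, boundary: np.ndarray) -> float:
    # Branch lengths: distances from the reference node to boundary points.
    lengths = np.linalg.norm(boundary - center, axis=1)
    p = np.clip(lengths / lengths.sum(), 1e-12, None)  # branch probabilities
    return float(-(p * np.log(p)).sum() / np.log(len(p)))  # normalized to [0, 1]

def bls_profile(boundary: np.ndarray) -> np.ndarray:
    # Profile: one entropy per reference node sliding along the boundary.
    return np.array([bls_entropy(pt, np.delete(boundary, i, axis=0))
                     for i, pt in enumerate(boundary)])
```

The resulting profile would then serve as the input feature vector for the feed-forward network described above.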
Article
There is increasing interest in the automatic identification of insect species from images. Here, content-based image retrieval (CBIR) is applied because of its capacity for mass processing and its operability. A series of shape, colour, and texture features drawing on CBIR was developed that allows butterfly images to be identified to the taxonomic scale of family. In our test, the accuracy for Papilionidae reached 84%, indicating that CBIR is suitable for identifying butterflies at the family level. Furthermore, experiments with different features, feature weights, and similarity matching algorithms were compared. Testing revealed that data attributes such as species diversity, image quality, and resolution affected system success the most, followed by the features and matching algorithms; shape features are more important than colour or texture features in the identification of butterfly families. These findings are important to future improvements in this technology and its applicability.
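A minimal, assumed illustration of the colour-feature-plus-matching idea (not the paper's feature set): extract a normalized colour histogram per image and rank database images by histogram intersection with the query.

```python
import numpy as np

def colour_histogram(img: np.ndarray, bins: int = 8) -> np.ndarray:
    # img: H x W x 3 uint8 array; joint RGB histogram, L1-normalized.
    h, _ = np.histogramdd(img.reshape(-1, 3), bins=(bins,) * 3,
                          range=((0, 256),) * 3)
    return (h / h.sum()).ravel()

def histogram_intersection(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.minimum(a, b).sum())  # 1.0 means identical histograms

def rank_matches(query: np.ndarray, database: list) -> list:
    # Return database indices sorted from most to least similar.
    q = colour_histogram(query)
    sims = [histogram_intersection(q, colour_histogram(d)) for d in database]
    return sorted(range(len(database)), key=lambda i: -sims[i])
```

A full CBIR system would concatenate shape and texture descriptors with weights, which is exactly the feature-weighting comparison the abstract describes.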
Article
C. E. Shannon, "A Mathematical Theory of Communication," Bell System Technical Journal, vol. 27, pp. 379–423 (July) and pp. 623–656 (October), 1948.
Article
The support-vector network is a new learning machine for two-group classification problems. The machine conceptually implements the following idea: input vectors are non-linearly mapped to a very high-dimension feature space, in which a linear decision surface is constructed. Special properties of the decision surface ensure the high generalization ability of the learning machine. The idea behind the support-vector network was previously implemented for the restricted case where the training data can be separated without errors; we here extend this result to non-separable training data. High generalization ability of support-vector networks utilizing polynomial input transformations is demonstrated. We also compare the performance of the support-vector network to various classical learning algorithms that all took part in a benchmark study of Optical Character Recognition.
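The extension to non-separable data described above is commonly written as the soft-margin objective (the standard formulation, reproduced here for reference rather than quoted from the article):

$$\min_{\mathbf{w},\,b,\,\boldsymbol{\xi}} \ \tfrac{1}{2}\lVert \mathbf{w} \rVert^{2} + C \sum_{i=1}^{n} \xi_i \quad \text{s.t.} \quad y_i(\mathbf{w}\cdot\mathbf{x}_i + b) \ge 1 - \xi_i,\ \ \xi_i \ge 0,$$

where the slack variables ξ_i absorb margin violations and C controls the trade-off between margin width and training error.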
Article
An abstract is not available.
Conference Paper
This paper presents the use of multilayer perceptrons (MLPs) trained with various training algorithms for image analysis and pattern recognition. Given a data set of images with known classifications, such a system can predict the classification of new images. However, the accuracy of networks having the same size and the same learning parameters changes according to the training algorithm used. The effects of the different algorithms are investigated, and the best learning methods are proposed for image segmentation.
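A small, assumed sketch of this kind of comparison with scikit-learn's MLPClassifier, holding the network size and learning parameters fixed while swapping the training algorithm (solver); the digits dataset is a stand-in for the paper's images.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)  # stand-in image data (8x8 digits)

for solver in ("lbfgs", "sgd", "adam"):  # different training algorithms
    mlp = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500,
                        solver=solver, random_state=0)
    scores = cross_val_score(mlp, X, y, cv=3)
    print(f"{solver}: mean accuracy {scores.mean():.3f}")
```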
Age Estimation via Fusion of Depthwise Separable Convolutional Neural Networks
  • K Liu
  • P Liu
  • T Chan
  • S Liu
  • Pei
Improving Butterfly Family Classification Using Past Separating Features Extraction in Extreme Learning Machine
  • S Iamsa-At
  • P Horata
  • K Sunat
  • N Thipayang
Mobile Object Detection using TensorFlow Lite and Transfer Learning
  • O Alsing