Article

Abstract

Introduction Social media networks now play a significant role in sharing information between individuals, including news and events occurring in the real world. Predicting election results from social media has become a fascinating research topic. In this article, we propose a strategy to predict election results by combining sub-event discovery and sentiment analysis in microblogs, in order to analyse and visualise the political inclinations revealed by social media users. Methodology The approach discovers and investigates sentiment data from microblogs to estimate the popularity of contestants. Many organizations and media houses conduct pre-poll surveys and gather expert opinions to forecast election outcomes; our model instead gathers Twitter data about the contestants and analyses its sentiment to predict the result of the election. Results The numbers of seats won by the first, second, and third parties in the AP Assembly Election 2019 were determined from the PSS values of these parties by means of equations (2), (3), and (4), respectively. Table 2 compares the actual election results with our model's predictions, and the two are very close. We used an SVM with 15-fold cross-validation for sentiment polarity classification on our training set, which gives a precision of 94.2%. The training set contains 7500 tuples: 3750 positive tweets and 3750 negative tweets. Conclusions Our results show that the proposed model can accurately forecast election results (94.2% accuracy) over the given baselines. The experimental outcomes are very close to the actual election results, and comparison with the conventional strategies used by survey agencies for exit polls shows that social media data can predict with better accuracy. Discussion In the future, we would like to extend this work to other regions and countries where Twitter is gaining popularity as a political campaigning tool and where politicians and citizens are turning to microblogs for political communication and information. We would also extend this research beyond general elections, and from politicians to state organizations.
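As a rough, hedged sketch of the sentiment-polarity step described above (an SVM with 15-fold cross-validation over labeled tweets), the following Python fragment uses scikit-learn; the inline tweets and labels are invented placeholders, and the TF-IDF features are an assumption rather than the authors' actual feature set.

```python
# Minimal sketch: SVM sentiment-polarity classification with 15-fold
# cross-validation, loosely mirroring the abstract's setup. The tiny
# inline dataset and TF-IDF features are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

tweets = [
    "great manifesto, this party has my vote",   # positive
    "inspiring speech by the candidate today",   # positive
    "broken promises and empty slogans again",   # negative
    "worst governance we have ever seen",        # negative
] * 30  # repeated so every fold contains both classes
labels = [1, 1, 0, 0] * 30

# TF-IDF features feeding a linear SVM, evaluated by cross-validation.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
scores = cross_val_score(model, tweets, labels, cv=15)
print(f"mean cross-validated accuracy: {scores.mean():.3f}")
```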


Chapter
Tourists’ satisfaction and motivation have been recurrent themes in tourism literature. In recent years these themes have also been addressed based on online evaluations carried out by tourists, TripAdvisor is one of the most used sites. In this context, this study aims to analyse the image of Bragança's tourism destination based on TripAdvisor reviews during the pandemic period (2020–2022). To this end, 1444 quantitative and qualitative reviews of attractions, hotels, and restaurants in Bragança, Northern Portugal, were analysed. Based on the Latent Dirichlet Allocation Algorithm three dimensions were determined for the attractions, two dimensions for the hotels and two dimensions for the restaurants. The descriptive statistics made it possible to establish that the municipality has a positive tourist image. Given results, theoretical and practical implications of this important Marketing theme are presented.
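As a loose illustration of the Latent Dirichlet Allocation step the chapter describes, here is a minimal scikit-learn sketch; the review snippets and the three-topic setting are invented stand-ins for the TripAdvisor corpus.

```python
# Illustrative LDA topic extraction over short review texts, in the
# spirit of the chapter's TripAdvisor analysis; data is made up.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

reviews = [
    "lovely castle and medieval walls, great views",
    "hotel room was clean and the staff friendly",
    "restaurant served excellent regional dishes",
    "museum exhibits were fascinating and cheap",
    "comfortable bed, quiet hotel, good breakfast",
    "tasty local food, generous portions",
]

counts = CountVectorizer(stop_words="english").fit(reviews)
X = counts.transform(reviews)
lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(X)

# Print the top words characterising each latent topic.
terms = counts.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-4:][::-1]]
    print(f"topic {k}: {top}")
```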
... The practice of waste segregation has been going on for many years but still, a lot of people have difficulty in proper segregation [1], [2]. While separating wet and dry waste there are a few factors like moisture content of waste and type of waste that are to be considered for precise separation [3]. ...
... This is an algorithm that aids in avoiding learning the same resource allocation and efficiently improves learning. [13][14][15][16][17] These articles provide a method for making the forecast at a particular time, ahead of the anticipated time point, to allow enough time for task scheduling based on the expected workload. That article introduced a clustering-based workload prediction technique, in which the tasks are first grouped and then a prediction model for the appropriate cluster is developed. ...
... This is a standard approach in studies that work with NLP and that only analyze the text. After applying the indicated filters, the total sample obtained was 41,059 tweets (Sucharitha et al., 2021;Vyas et al., 2021). ...
Article
Full-text available
Content marketing involves producing and distributing content effectively and initially through digital channels. However, digital marketing strategies and business models can succeed only if content marketing is developed correctly. This study aims to develop a relevant theoretical framework linked to content marketing and identify the leading techniques and uses linked to its development. In this context, we developed an innovative data-driven methodology consisting of three steps. In the first phase, sentiment analysis that works with machine learning was conducted with Textblob, and four experiments were performed using support vector classifier, multinomial naïve bayes, logistic regression, and random forest classifier. First, we aimed to increase the accuracy of sentiment analysis (negative, neutral and positive) of a sample of user-generated content collected from the social network Twitter. Second, a mathematical topic-modelling algorithm known as latent dirichlet allocation was used to divide the database into topics. Finally, a textual analysis was developed using the Python programming language. Based on the results, we identified 11 topics, of which four were positive (Smart Content, Video Marketing, Podcast, and Influencer Marketing). Six of them were neutral (Content Personalization, Social Media Posts, Blogging, search engine optimization, Advergames, and NFTs), and one was negative (Email Marketing). Our results suggest that companies should use content personalisation ethically, mainly when AI-based techniques are used to predict user behaviors. While content marketing strategies are a fundamental part of digital marketing tactics, they can elicit changes in user online behavior when Big Data or AI algorithms are used. This fact raises concerns about the non-ethical design of online strategies in digital environments and the imperative that content marketing strategies should not be developed with purely economic and profitability interests.
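A hedged sketch of the study's first phase, TextBlob polarity scoring feeding a supervised classifier, might look as follows; the tweets, polarity thresholds, and the logistic regression choice are illustrative assumptions, not the paper's exact experiments.

```python
# Sketch: TextBlob polarity scores turned into sentiment labels, then a
# classifier fit on TF-IDF features. Tweets and thresholds are invented.
from textblob import TextBlob
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

tweets = [
    "this influencer campaign was brilliant",
    "the email newsletter was terrible spam",
    "new podcast episode drops today",
    "loved the interactive video content",
]

def label(text: str) -> str:
    # Common default thresholds, not necessarily the paper's.
    p = TextBlob(text).sentiment.polarity
    return "positive" if p > 0.05 else "negative" if p < -0.05 else "neutral"

y = [label(t) for t in tweets]
X = TfidfVectorizer().fit_transform(tweets)
clf = LogisticRegression(max_iter=1000).fit(X, y)
print(list(zip(tweets, clf.predict(X))))
```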
... Sucharitha et al. [42] accomplished specified criteria for UWSNs, such as minimal EC, duplicated signal creation, and empty gap prevention. In LMPC [43], the binary trees were formed by each origin node, but with [44], a binary tree was created by each crossing device. ...
Article
Full-text available
Aims & Background Energy saving and accurate information transmission within resource limits are major challenges for IoT Underwater Sensing Networks (IoT-UWSNs). Conventional transfer methods increase the cost of communication, leading to bottlenecks or compromising the reliability of information delivery. Several routing techniques have been suggested for UWSNs to ensure uniform transmission of information or to reduce communication latency while preserving battery life (to avoid an empty hole in the network). Objectives & Methodology In this article, adaptable power networking methods based on the Fastest Route First (FRF) method and a smaller business unit method are presented to solve the problems mentioned above. The Laminated Inter Energy Management One (FLMPC-One) networking method, which employs 2-hop neighborhood knowledge, and the Laminated Inter Energy Management Two (FLMPC-Two) networking procedure, which employs 3-hop neighborhood data, were combined to create these techniques (for shortest path selection). Variable Session Portion (SP) and Information Speed (IS) were also considered to ensure that the suggested method is flexible. Results & Conclusions The findings show that the suggested methods, Shortest Path First without 3-hop Relatives Data (SPF-Three) and Breadth First Search with 3-hop Relatives Data (BFS-Three), were successfully developed. These methods are successful in terms of minimal Electric Cost (EC) and Reduced Transmission Drop Rate (RTDR) with a small number of operational sites at a reasonable latency, according to the simulated findings.
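The routing variants named above rest on hop-limited graph search; a toy breadth-first shortest-hop search over an invented topology (a stand-in for an underwater sensor network) is sketched below.

```python
# Toy breadth-first search for a shortest hop path, the graph-search
# primitive behind BFS/shortest-path-first routing. The node graph is
# an invented stand-in for an underwater sensor topology.
from collections import deque

graph = {
    "source": ["a", "b"],
    "a": ["c"],
    "b": ["c", "d"],
    "c": ["sink"],
    "d": ["sink"],
}

def bfs_shortest_path(start, goal):
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(bfs_shortest_path("source", "sink"))  # e.g. ['source', 'a', 'c', 'sink']
```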
... Deep networks naturally integrate low/mid/highlevel features [1] and classifiers in an end-to-end multilayer fashion, and the "levels" of features can be enriched by the number of stacked layers (depth). Recent evidence [2] reveals that network depth is of crucial importance, and the leading results [3] on the challenging ImageNet dataset all exploit "very deep" models, with a depth of sixteen to thirty. Many other nontrivial visual recognition tasks have also greatly benefited from very deep models. ...
Article
Full-text available
Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers, 8× deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to the ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.
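A minimal PyTorch residual block illustrates the reformulation: the stacked layers learn a residual function F(x) and the identity shortcut adds the input back, so the block outputs F(x) + x. Channel sizes and layer choices here are simplified relative to the paper's architectures.

```python
# Minimal residual block sketch: conv-bn-relu-conv-bn plus an identity
# shortcut, so the block learns F(x) and outputs relu(F(x) + x).
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # identity shortcut

x = torch.randn(1, 64, 32, 32)
print(ResidualBlock(64)(x).shape)  # torch.Size([1, 64, 32, 32])
```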
... Meta-learning is challenging because to evaluate an update rule, it must first be applied. This often leads to high computational costs [3]. As a result most works optimise performance after K applications of the update rule and assume that this yields improved performance for the remainder of the learner's lifetime. ...
Article
Full-text available
Meta-learning empowers artificial intelligence to increase its efficiency by learning how to learn. Unlocking this potential involves overcoming a challenging meta-optimisation problem. We propose an algorithm that tackles this problem by letting the meta-learner teach itself. The algorithm first bootstraps a target from the meta-learner, then optimises the meta-learner by minimising the distance to that target under a chosen (pseudo-)metric. Focusing on meta-learning with gradients, we establish conditions that guarantee performance improvements and show that the metric can control meta-optimisation. Meanwhile, the bootstrapping mechanism can extend the effective meta-learning horizon without requiring backpropagation through all updates. We achieve a new state-of-the-art for model-free agents on the Atari ALE benchmark and demonstrate that it yields both performance and efficiency gains in multi-task meta-learning. Finally, we explore how bootstrapping opens up new possibilities and find that it can meta-learn efficient exploration in an ɛ-greedy Q-learning agent, without backpropagating through the update rule.
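The following toy sketch conveys the bootstrapping idea on a trivial problem: a learnable step size (the meta-parameter) is updated by pulling a K-step update toward a target built from L further, detached steps, instead of optimising the final loss directly. It is a hand-rolled illustration under strong simplifications, not the paper's agent or benchmark setup.

```python
# Toy bootstrapped meta-gradient sketch: match K differentiable inner
# steps to a target bootstrapped from L extra, detached steps.
import torch

def loss_fn(w):          # a stand-in inner objective
    return ((w - 3.0) ** 2).sum()

w = torch.zeros(2, requires_grad=True)
log_lr = torch.tensor(-2.0, requires_grad=True)   # meta-parameter
meta_opt = torch.optim.SGD([log_lr], lr=1e-2)
K, L = 3, 2

for _ in range(100):
    lr = log_lr.exp()
    wk = w
    for _ in range(K):   # K inner updates, differentiable w.r.t. log_lr
        g = torch.autograd.grad(loss_fn(wk), wk, create_graph=True)[0]
        wk = wk - lr * g
    target = wk.detach()
    for _ in range(L):   # L extra steps define the bootstrapped target
        target = target.detach().requires_grad_()
        g = torch.autograd.grad(loss_fn(target), target)[0]
        target = (target - lr.detach() * g).detach()
    meta_loss = ((wk - target) ** 2).sum()  # matching loss, L2 metric
    meta_opt.zero_grad()
    meta_loss.backward()
    meta_opt.step()
    w = wk.detach().requires_grad_()

print(float(log_lr.exp()))  # the meta-learned step size
```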
... To find out their personality, people are required to take a personality test. Social media has become a platform for people to express their feelings to the world, and the posts made by users on social media can be used to analyze their behavior [8]. This experiment uses text written by Twitter users to predict the personality of a particular user. ...
... Applying machine learning to predict customer purchase intention has been presented in [19]. A study of applying machine learning to predict election results was presented in [20]. The machine learning model was developed using a dataset collected from Twitter. ...
Article
Aims and Background Wireless Body Sensor Network (WBSN) technology is one of the major research areas in the medical and entertainment industries. A wireless sensor network (WSN) is a dense network of sensors that senses environmental conditions and processes outgoing data at the sink node. A WBSN supports patient monitoring systems that provide the flexibility and mobility needed to monitor patient health. In data communications, it is difficult to find flexible optical routing paths, switching capabilities, and packet processing in the composition of optical networks. Information-centric networks (ICNs) are a new network model that differs from host-centric models; the priority of the information-centric model is the content rather than the communication endpoints. Objectives In the existing literature, such methods are typically developed using computationally expensive procedures, such as bilinear pairing, elliptic curve operations, etc., which are unsuitable for biomedical devices with limited resources. Using the concept of hyperelliptic curve cryptography (HECC), we propose a new solution: a smart card-based two-factor mutual authentication scheme. In this new scheme, HECC's finest properties, such as compact parameters and key sizes, are utilized to enhance the real-time performance of an IoT-based TMIS system. Methodology A fuzzy-based Priority Aware Data Sharing (FPADS) method is introduced to schedule priority data and monitor the transmission length. The child node adjusts the transmission speed of the cluster head with the help of a fuzzy logic controller (FLC). Results The proposed model estimated the traffic load of the child node and the priority of the different amounts of data to be transmitted. Data packets are scheduled based on the precedence of the data with the lowest transmit length in the network. Conclusion The proposed FPADS improves scheduling time utilisation, traffic distribution, and mean delay. Simulations have been done using NS2, and the outcomes show that the proposed methodology is efficient and improves the overall QoS of the system.
Conference Paper
Election results are a topic that never stops being talked about, all the more so now that social platforms are the perfect medium in which polarization toward a political party takes hold. Many academics have therefore seen the potential of this data source for predicting election outcomes, and it is worth reviewing which kinds of machine learning models perform best at this task. A literature review was carried out following the guidelines of the PRISMA methodology, using databases such as Scopus, IEEE Xplore, Science Direct, Google Scholar, Springer, EBSCOhost, IOP, Wiley, and Sage. The review retrieved a total of 1638 manuscripts related to the research topic; after applying the inclusion and exclusion criteria, 69 manuscripts were systematized. The results showed that one of the approaches most used by the scientific community is sentiment analysis. The best performing model was random forest (RF), with an accuracy rate of 97%; in second place is the recurrent neural network (RNN) model, with an accuracy rate of 91.6%. However, unlike RF, RNNs require considerable computational knowledge and effort. It is concluded that the RF model is the most suitable for predicting electoral results since it performs best in this type of case. Keywords: Automatic learning, Elections, Electoral, Machine learning, Vote
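As a hedged illustration of the review's best performer, the fragment below fits a random forest to TF-IDF features of invented tweets; real studies would of course use far larger corpora and tuned hyperparameters.

```python
# Sketch: random forest over tweet features for election sentiment;
# the data is invented and far smaller than any real corpus.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

tweets = ["vote for change", "terrible candidate", "great rally tonight",
          "corruption everywhere", "proud to support her", "what a disaster"] * 5
labels = [1, 0, 1, 0, 1, 0] * 5

X = TfidfVectorizer().fit_transform(tweets)
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3,
                                          random_state=0, stratify=labels)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"accuracy: {accuracy_score(y_te, rf.predict(X_te)):.2f}")
```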
Article
Emotions play a critical role in understanding human behaviors and are direct indicators of residents' well-being and quality of life. Assessing spatial-emotional interactions is crucial for human-centered urban planning and public mental health. However, prior research has focused on the spatial analysis of every single emotion, ignoring the intricate interactions between multi-emotions and space. To address this gap, we propose a novel framework to reveal the spatial co-occurrence patterns of multi-emotions using massive social media data in Wuhan, China. Specifically, the BERT (bidirectional encoder representations from transformers) pre-trained model is utilized to classify each post into one of five basic emotions. Given the implementation of the K-means algorithm on these emotional results, the emotion-based similarities among different grids are investigated. The qualitative and quantitative results reveal six spatial co-occurrence patterns of conflicting or consistent emotions in urban space, namely, happiness-fear, happiness-anger, balanced emotion, happiness dominated, happiness-surprise, and happiness-sadness. In particular, the balanced emotion pattern is the most prevalent and tends to be spatially concentrated in the city center, while patterns of happiness-anger and happiness-sadness are mainly observed in the suburbs. In addition, results of the Multinomial Logit Model (MNLM) indicate that the spatial multi-emotions co-occurrence patterns are significantly correlated with land use characteristics based on points-of-interest (POIs) data. These findings provide an innovative perspective for understanding the complex interactions between emotions and space, with theoretical and practical implications for designing and maintaining an emotionally healthy city.
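The clustering step can be sketched as follows: each grid cell is represented by its distribution over the five basic emotions (as classified upstream, e.g. by the BERT model), and K-means groups cells with similar emotional mixes. The vectors and cluster count below are illustrative assumptions.

```python
# Sketch: K-means over per-cell emotion distributions to find spatial
# co-occurrence patterns. The grid vectors are invented examples.
import numpy as np
from sklearn.cluster import KMeans

# rows: grid cells; columns: happiness, sadness, anger, fear, surprise
grid_emotions = np.array([
    [0.70, 0.05, 0.05, 0.10, 0.10],   # happiness dominated
    [0.40, 0.05, 0.05, 0.40, 0.10],   # happiness-fear
    [0.22, 0.20, 0.20, 0.18, 0.20],   # balanced emotion
    [0.45, 0.05, 0.35, 0.05, 0.10],   # happiness-anger
    [0.24, 0.19, 0.19, 0.19, 0.19],   # balanced emotion
])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(grid_emotions)
print(kmeans.labels_)  # co-occurrence pattern assigned to each cell
```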
Article
Aims/Background Twitter has rapidly become a go-to source for coverage of current events. The more people rely on it, the more important it is to provide accurate data. Twitter makes it easy to spread misinformation, which can have a significant impact on how people feel, especially when false information spreads around COVID-19. Methodology Unfortunately, Twitter was also used to spread myths and misinformation about the illness and its preventative immunization, so it is crucial to identify false information before its spread gets out of hand. In this research, we look into the efficacy of several different types of deep neural networks in automatically classifying and identifying fake news content posted on social media platforms in relation to the COVID-19 pandemic. These networks include long short-term memory (LSTM), bi-directional LSTM, convolutional neural networks (CNN), and a hybrid of CNN-LSTM networks. Results The "COVID-19 Fake News" dataset, which includes 42,280 real and fake news cases for the COVID-19 pandemic and associated vaccines, has been used to train and test these deep neural networks. Conclusion The proposed models were executed and compared; the CNN model was found to have the highest accuracy, at 95.6%.
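A minimal version of the hybrid CNN-LSTM classifier compared in the paper could be assembled in Keras as below; the vocabulary size, sequence length, layer sizes, and random stand-in data are placeholder assumptions.

```python
# Sketch: hybrid CNN-LSTM text classifier (real vs. fake). Dimensions
# and the random token data are illustrative placeholders.
import numpy as np
from tensorflow.keras import layers, models

vocab, seq_len = 5000, 100
model = models.Sequential([
    layers.Input(shape=(seq_len,)),
    layers.Embedding(vocab, 64),
    layers.Conv1D(64, 5, activation="relu"),   # local n-gram features
    layers.MaxPooling1D(2),
    layers.LSTM(32),                           # longer-range dependencies
    layers.Dense(1, activation="sigmoid"),     # real vs. fake
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# Random token ids stand in for tokenized posts/headlines.
X = np.random.randint(0, vocab, size=(32, seq_len))
y = np.random.randint(0, 2, size=(32,))
model.fit(X, y, epochs=1, verbose=0)
```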
Chapter
Artificial Intelligence and Data Science in Recommendation System: Current Trends, Technologies and Applications captures the state of the art in the usage of artificial intelligence in different types of recommendation systems and predictive analysis. The book provides guidelines and case studies for the application of artificial intelligence in recommendation from expert researchers and practitioners, along with a detailed analysis of the relevant theoretical and practical aspects, current trends, and future directions. The book highlights many use cases for recommendation systems:
- Basic application of machine learning and deep learning in the recommendation process and the evaluation metrics
- Machine learning techniques for text mining and spam email filtering considering the perspective of Industry 4.0
- Tensor factorization in different types of recommendation system
- Ranking framework and topic modeling to recommend author specialization based on content
- Movie recommendation systems
- Point of interest recommendations
- Mobile tourism recommendation systems for visually disabled persons
- Automation of fashion retail outlets
- Human resource management (employee assessment and interview screening)
This reference is essential reading for students, faculty members, researchers and industry professionals seeking insight into the working and design of recommendation systems.
Article
Aims and Background Integrated computing technologies such as the Internet of Things (IoT), Multi-Agent Systems (MAS), and automatic networking can be combined to deliver Internet of Vehicles (IoV) applications. Objectives and Methodology The main objective of this paper is to combine MAS with IoT and IoV as a new paradigm within a Cyber-Physical System (CPS) for intelligent car applications. We propose a MAS algorithm and apply it to control traffic lights at multiple intersections. When MAS is used together with distributed computing architectures, IoV can achieve higher efficiency. The proposed combination builds on independent knowledge, adaptability, assertiveness, and responsiveness, which can be used in wireless sensor paradigms to provide new remedies. Smart products will explore further advancements and diverse mobility capabilities. Results IoT provides an appropriate atmosphere for connecting MAS concepts and programs, in addition to providing reliable, adaptable, efficient, and intelligent solutions in the automotive network. In addition, the combination of MAS with IoT and cognitive conditions could result in scalable, automated, and smart wireless sensor solutions. Conclusion We conduct experiments on three different datasets, and the results demonstrate that MAS outperforms several state-of-the-art algorithms in alleviating traffic congestion, with shorter training time.
Article
Aims & Background Businesses in the E-Commerce sector, especially those in the business-to-consumer segment, are engaged in fierce competition for survival, trying to gain access to their rivals' client bases while keeping current customers from defecting. The cost of acquiring new customers is rising as more competitors join the market with significant upfront expenditures and cutting-edge penetration strategies, making client retention essential for these organizations. Objectives & Methodology The main objective of this research is to detect probable churning customers and prevent churn with temporary retention measures. It is also essential to understand why a customer decided to leave, in order to apply customized win-back strategies. Predictive analysis uses a hybrid classification approach to address the regression and classification issues. This paper presents a process for forecasting E-Commerce customer attrition based on support vector machines, along with a hybrid recommendation strategy for targeted retention initiatives; future churn can be prevented by suggesting reasonable offers or services. Results The empirical findings demonstrate a considerable increase in the coverage ratio, hit ratio, lift degree, precision rate, and other metrics using the integrated forecasting model. Conclusion To effectively identify distinct groups of lost customers and create a customer churn retention strategy, the various lost customer types are categorized using the RFM principle.
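Combining the paper's two named ingredients, RFM-style features and an SVM classifier, a churn-prediction sketch might look like this; the customer records and labels are fabricated for illustration.

```python
# Sketch: RFM features (recency, frequency, monetary) feeding an SVM
# churn classifier. All records and labels are invented examples.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# columns: recency (days since last order), frequency, monetary value
rfm = np.array([
    [5,   40, 1200.0],
    [200,  2,   35.0],
    [12,  25,  800.0],
    [150,  3,   60.0],
    [8,   31,  950.0],
    [180,  1,   20.0],
])
churned = np.array([0, 1, 0, 1, 0, 1])

clf = make_pipeline(StandardScaler(), SVC(probability=True))
clf.fit(rfm, churned)

new_customer = np.array([[90, 5, 150.0]])
print(f"churn probability: {clf.predict_proba(new_customer)[0, 1]:.2f}")
```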
Article
Introduction Over the past few years, researchers have focused heavily on increasing the electrical efficiency of large computer systems. Virtual machine (VM) migration helps data centers keep their pages' content updated on a regular basis, which reduces the time it takes to access data. Offline VM migration is best accomplished by sharing memory without requiring any downtime. Objective and Methodology The objective of the paper is to reduce energy consumption and deploy a unique green computing architecture. The proposed virtual machine is transferred from one host to another through dynamic mobility. The proposed technique migrates the maximally loaded virtual machine to the least loaded active node while maintaining the performance and energy efficiency of the data centers. In a cloud environment, electricity use remains critical: the large consumption of the internet information facilities that maintain computing capacity is becoming another major concern, and relocating VMs is another way to reduce resource use. Results Using a non-linear forecasting approach, the research presents improved decentralized virtual machine migration (IDVMM), which can mitigate electricity consumption in cloud information warehouses, minimizes violations of service agreements in a relatively small number of all displaced cases, and improves the efficiency of resources. Conclusion The proposed approach further develops two thresholds to divide overloaded hosts into massively overloaded, moderately overloaded, and lightly overloaded hosts. The migration decision for VMs at all stages pursues the goal of reducing the energy consumption of the network during the migration process. Given ten months of data, actual demand tracing is done through PlanetLab and then assessed using a cloud service.
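The two-threshold host classification described in the conclusion can be sketched directly; the utilisation cut-offs below are invented, since the abstract gives no numeric values.

```python
# Sketch: two thresholds split overloaded hosts into massively,
# moderately, and lightly overloaded groups. Cut-offs are assumed.
OVERLOAD, MODERATE, MASSIVE = 0.80, 0.88, 0.95  # assumed utilisation cut-offs

def classify_host(utilisation: float) -> str:
    if utilisation < OVERLOAD:
        return "not overloaded"
    if utilisation >= MASSIVE:
        return "massively overloaded"
    if utilisation >= MODERATE:
        return "moderately overloaded"
    return "lightly overloaded"

for u in (0.75, 0.83, 0.90, 0.97):
    print(f"{u:.2f} -> {classify_host(u)}")
```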
Article
Aims and Background Agriculture plays a major role in the global economy, providing food, raw materials, and jobs to billions of people and driving economic growth and poverty reduction. Rice is the most widely consumed crop domestically, making it a particularly important crop for rural populations. The exact number of rice varieties worldwide is difficult to determine as new varieties are constantly being developed and marketed. Objectives and Methodology The most common method of rice variety identification is a comparison of its physical and chemical properties to a reference collection of known types. This is a relatively quick and cost-effective approach that can be used to accurately differentiate between distinct varieties. In some cases, genetic testing may be used to confirm the identity of a variety, although this technique is more expensive and time-consuming. However, we can also utilize efficient, precise, and cost-effective digital image processing and machine vision techniques. Results This study describes different types of ensemble methods, such as bagging (Decision Tree, Random Forest, Extra Tree), boosting (AdaBoost, Gradient Boost, and XGBoost), and voting classifiers to classify five different varieties of rice. Extreme Gradient Boosting (XGBoost) has achieved the highest average classification accuracy of 99.60% among all the algorithms. Conclusion The findings of the performance measurement indicated that the proposed model was successful in classifying the various varieties of rice.
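A hedged sketch of the compared ensemble families, using scikit-learn's bagging-style forests and boosting inside a soft-voting classifier over synthetic tabular features (standing in for the rice-grain measurements):

```python
# Sketch: bagging (random forest, extra trees) and boosting (AdaBoost)
# combined in a soft-voting ensemble for 5-class classification.
# The synthetic features stand in for rice-grain measurements.
from sklearn.datasets import make_classification
from sklearn.ensemble import (AdaBoostClassifier, ExtraTreesClassifier,
                              RandomForestClassifier, VotingClassifier)
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, n_features=8, n_classes=5,
                           n_informative=6, random_state=0)

vote = VotingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("et", ExtraTreesClassifier(random_state=0)),
                ("ada", AdaBoostClassifier(random_state=0))],
    voting="soft",  # average the predicted class probabilities
)
print(f"CV accuracy: {cross_val_score(vote, X, y, cv=5).mean():.3f}")
```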
Article
Aims and Background Most people consider traffic congestion a major issue since it increases noise, pollution, and time wastage. Traffic congestion is caused by dynamic traffic flow, which is a serious concern. The current standard traffic light system is not enough to handle congestion problems since it operates with a fixed-time-length strategy. Methodology Despite the massive amount of traffic surveillance videos and images collected in daily monitoring, deep learning techniques for intelligent traffic management and control have been underutilized. Hence, in this paper, we propose a novel traffic congestion prediction system using a deep learning approach. Initially, the traffic data from the sensors is obtained and pre-processed using normalization. The features are extracted using Multi-Linear Discriminant Analysis (M-LDA). We propose a Tri-stage Attention-based Convolutional Neural Network-Recurrent Neural Network (TA-CNN-RNN) for predicting traffic congestion. Results To evaluate the effectiveness of the proposed model, the Mean Absolute Error (MAE), Mean Squared Error (MSE), and Root Mean Squared Error (RMSE) were used as the evaluation metrics. Conclusion The experimental approach could extend to traffic surveillance systems and has the potential to enhance intelligent transport systems in the future.
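The three evaluation metrics named in the Results can be computed as below for a toy set of predicted versus observed congestion values.

```python
# MAE, MSE, and RMSE for a toy set of predicted vs. observed values.
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error

y_true = np.array([0.30, 0.55, 0.80, 0.40])
y_pred = np.array([0.35, 0.50, 0.70, 0.45])

mae = mean_absolute_error(y_true, y_pred)
mse = mean_squared_error(y_true, y_pred)
rmse = np.sqrt(mse)
print(f"MAE={mae:.3f}  MSE={mse:.4f}  RMSE={rmse:.3f}")
```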
Article
Aims and Background Artificial intelligence (AI) is expanding in the market daily to assist humans in a variety of ways. However, as these models are expensive, there is still a gap in the availability of AI products to the general public, with high component dependency. Objective and Methodology To address the issue of additional component dependency in AI products, we propose a model that can use available Smartphone resources to perceive real-world hurdles and assist ordinary people with their daily needs. The proposed AI model predicts the user's indoor position (Node) at the computer science and engineering block of CMR Institute of Technology (CMRIT) by using Smartphone sensors and wireless signals. We used SVR to predict the regular walk steps needed between two Nodes and Pedestrian Dead Reckoning (PDR) to predict the walk steps needed while the signal is lost in the indoor environment. Results The Support Vector Regression (SVR) models make the locations available within the specified building boundaries for proper guidance, and the PDR approach supports the user during signal loss between two Received Signal Strength Indicator (RSSI) readings. The Pedestrian Dead Reckoning-Support Vector Regression (PD-SVR) results show 98% accuracy in Node predictions with routing tables, and indoor positioning is 100% accurate with dynamic crowd-sourced Node preparation. Conclusion Compared with other indoor navigation models, K-nearest neighbor (KNN) and DF-SVM give 95% accurate Node estimation with minimal need for network components.
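The SVR step-count regression can be sketched as a mapping from signal measurements (e.g. RSSI values) to walking steps between two Nodes; the training pairs below are invented for illustration.

```python
# Sketch: SVR maps RSSI measurements to the number of walking steps
# between two indoor Nodes. Training pairs are invented examples.
import numpy as np
from sklearn.svm import SVR

rssi = np.array([[-40.0], [-50.0], [-60.0], [-70.0], [-80.0]])
steps = np.array([2, 5, 9, 14, 20])  # observed steps between nodes

model = SVR(kernel="rbf", C=100.0).fit(rssi, steps)
print(f"predicted steps at -65 dBm: {model.predict([[-65.0]])[0]:.1f}")
```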
Article
Aims and Background The Internet of Things has evolved over the years to a greater extent, where objects communicate with each other over a network. Heterogenous communication between the nodes leads to a large amount of information sharing, and sensitive information could be shared over the network. It is important to maintain privacy and security during information sharing to protect devices from communicating with malicious nodes. Objectives and Methodology The concept of trust was introduced to prevent nodes from communicating with malicious nodes. A trust computation model for the IoT based on machine learning concepts was designed, which evaluates trust based on the Trust Marks. There are three trust marks, out of which two are evaluated. The three trust marks are knowledge, experience, and reputation. Knowledge trust marks are evaluated separately based on their trust property mathematical formulations, and then based on these properties, machine learning-based algorithms are applied to train the model to classify the objects as trustworthy and untrustworthy. Results The effectiveness of the Knowledge Trust Mark is measured by a simulation and confusion matrix. The accuracy of the trained model is shown by the accuracy score. The trust computational model for IoT using machine learning shows higher accuracy in classifying the objects as trustworthy and untrustworthy. Conclusion The experience trust mark is evaluated based on its properties, and the behaviour of the experience is shown over time graphically.
Article
Full-text available
Aim and Background In recent periods, micro-array data analysis using soft computing and machine learning techniques gained more interest among researchers to detect prostate cancer. Due to the small sample size of micro-array data with a larger number of attributes, traditional machine learning techniques face difficulty detecting prostate cancer. Methodology The selection of relevant genes exploits useful information about micro-array data, which enhances the accuracy of detection. In this research, the samples are acquired from the gene expression omnibus database, particularly related to the prostate cancer GEO IDs such as GSE 21034, GSE 15484 and GSE 3325/GSE 3998. In addition, an ensemble feature optimization technique and a Bidirectional Long Short Term Memory (Bi-LSTM) network are employed for detecting prostate cancer from the microarray data of gene expression. Results The ensemble feature optimization technique includes 4 metaheuristic optimizers that select the top 2000 genes from each GEO ID, which are relevant to prostate cancer. Next, the selected genes are given to the Bi-LSTM network for classifying the normal and prostate cancer subjects. Conclusion The simulation analysis revealed that the ensemble-based Bi-LSTM network obtained 99.13%, 98.97%, and 94.12% accuracy on the GEO IDs GSE 3325/GSE 3998, GSE 21034, and GSE 15484, respectively.
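A minimal bidirectional LSTM classifier over selected gene-expression values might be set up in Keras as follows; treating the 2000 selected genes as a one-feature sequence, along with all layer sizes and the random stand-in data, are assumptions for illustration only.

```python
# Sketch: bidirectional LSTM over gene-expression values (normal vs.
# cancer). Dimensions and random data are illustrative placeholders.
import numpy as np
from tensorflow.keras import layers, models

n_samples, n_genes = 64, 2000
X = np.random.rand(n_samples, n_genes, 1)       # selected gene values
y = np.random.randint(0, 2, size=(n_samples,))  # normal vs. cancer

model = models.Sequential([
    layers.Input(shape=(n_genes, 1)),
    layers.Bidirectional(layers.LSTM(32)),  # reads the sequence both ways
    layers.Dense(16, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=1, batch_size=16, verbose=0)
```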
Article
Background The progress of the Cognitive Radio-Wireless Sensor Network (CR-WSN) is being driven by advancements in wireless sensor networks (WSNs) combined with the unique features of cognitive radio technology. Enhancing the lifespan of any network requires better utilization of the available spectrum as well as the selection of a good routing mechanism for transmitting data from the sensor nodes to the base station without data conflict. Aims Cognitive radio methods play a significant part in achieving this, and when paired with WSNs, the above-mentioned objectives can be met to a large extent. Methodology A unique energy-saving Distance-Based Multi-hop Clustering and Routing (DBMCR) methodology, in association with spectrum allocation, is proposed as a heterogeneous CR-WSN model. The heterogeneous CR-wireless sensor networks are separated into areas and assigned different spectrum depending on the distance. Information is sent over a multi-hop connection after dynamic clustering using distance computation. Results The findings show that the suggested method achieves higher stability and ensures an energy-optimizing CR-WSN. The enhanced scalability can be seen in the First Node Death (FND), and the improved throughput helps to preserve the residual energy of the network, which addresses the issue of load balancing across nodes. Conclusions The results thus show that the proposed heterogeneous model achieves an enhanced network lifetime and ensures an energy-optimizing CR-WSN.
Article
Full-text available
India has exemplary climatic circumstances, comprising different seasons and topographical conditions: high temperatures, cold weather, drought, and heavy seasonal rainfall. These extreme varieties in climate make exact weather prediction a challenging task. The majority of the country's population depends on agriculture. Farmers require climate information to decide on planting, and weather prediction becomes a reference in the farming sector for deciding the start of the planting season as well as the quality and quantity of the harvest. One of the variables influencing agriculture is rainfall. The main goal of this project is early and proper rainfall forecasting, which helps people who live in regions prone to natural calamities such as floods, and helps agriculturists make decisions about crop and water management using big data analytics, yielding high profit and production for farmers. In this project, we propose an advanced automated framework called the Enhanced Multiple Linear Regression Model (EMLRM) with the MapReduce algorithm and the Hadoop file system. In the proposed model (EMLRM), we first store the unstructured weather data in the Hadoop Distributed File System (HDFS), process the stored data using the MapReduce algorithm, and build the rainfall prediction model using multiple linear regression. We used climate data from IMD (Indian Meteorological Department, Hyderabad) for the period 1901 to 2002. The experimental outcomes show that EMLRM provided the lowest Root Mean Square Error (RMSE = 0.274) and Mean Absolute Error (MAE = 0.0745) compared with existing methods. The results of the analysis will help farmers adopt an effective modeling approach for predicting long-term seasonal rainfall.
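The regression core of EMLRM can be sketched with ordinary multiple linear regression plus the RMSE and MAE metrics the abstract reports; the synthetic predictors below stand in for the IMD climate variables, and the MapReduce/HDFS distribution step is omitted.

```python
# Sketch: multiple linear regression from climate variables to rainfall,
# evaluated with RMSE and MAE. Data is synthetic; the assumed predictor
# columns (temperature, humidity, wind speed) are illustrative only.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error, mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))  # temperature, humidity, wind speed
rainfall = 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.5 * X[:, 2] \
    + rng.normal(0, 0.1, 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, rainfall, random_state=0)
model = LinearRegression().fit(X_tr, y_tr)
pred = model.predict(X_te)
print(f"RMSE={np.sqrt(mean_squared_error(y_te, pred)):.3f} "
      f"MAE={mean_absolute_error(y_te, pred):.3f}")
```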