Source publication
With the increasing popularity of social media, people have changed the way they access news. Online news has become the major source of information for many people. However, much of the information appearing on the Internet is dubious and even intended to mislead. Some fake news is so similar to real news that it is difficult for humans to identify it. T...
Similar publications
Click counts are related to the amount of money that online advertisers pay to news sites. Such business models have forced some news sites to employ the dirty trick of click-baiting, i.e., using hyperbolic and enticing words, sometimes an unfinished sentence, in a headline to purposefully tease readers. Some Indonesian online news sites also joined...
Citations
... For the task of fake news detection, a feature set can never be considered complete and sound. Jiang et al. [15] evaluated the performance of five machine learning models and three deep learning models on two fake and real news datasets of different sizes with hold-out cross-validation. Moreover, fake news detection with sentiment analysis still needs to be investigated across different machine learning and deep learning models. ...
... Further, the paper proposed a new multiple imputation strategy for handling multivariate missing variables in ISOT social media data. The researchers used this imputation approach, summarized in Table 1, a comparison of state-of-the-art techniques (columns: References, Objective, Pros, Cons), whose first entry reads: Sahoo et al. [15], an automatic fake news detection approach in the Chrome environment using machine learning and deep learning classifiers, which can detect fake news on Facebook. ...
Fake news on social media has spread for personal or societal gain. Detecting fake news is a multi-step procedure that entails analysing the content of the news to assess its trustworthiness. The article proposes a new solution for fake news detection that incorporates sentiment as an important feature to improve accuracy, evaluated on two different datasets, ISOT and LIAR. Key feature words, together with propensity scores for the opinions in the content, are derived from sentiment analysis using a lexicon-based scoring algorithm. Further, the study proposes a multiple imputation strategy integrating Multiple Imputation by Chained Equations (MICE) to handle multivariate missing variables in the collected social media and news data. To extract effective features from the text, Term Frequency-Inverse Document Frequency (TF-IDF) is introduced to determine salient features in a weighted matrix. The correlation of missing data variables and useful data features is classified using Naïve Bayes, passive-aggressive, and Deep Neural Network (DNN) classifiers. The findings show that the proposed method achieves an accuracy of 99.8% for fake news detection when evaluating statements labelled barely true, half true, true, mostly true, and false from the dataset. Finally, the performance of the proposed method is compared with existing methods, over which it shows better efficiency.
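To make the pipeline concrete, below is a minimal sketch of a TF-IDF plus passive-aggressive classifier of the kind this abstract describes. It is an illustrative assumption, not the authors' exact implementation: the file name isot_news.csv and its text/label columns are hypothetical placeholders, and the MICE imputation and DNN stages are omitted.

```python
# Minimal sketch: TF-IDF features + passive-aggressive classifier (scikit-learn).
# "isot_news.csv" and its "text"/"label" columns are hypothetical placeholders.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import PassiveAggressiveClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("isot_news.csv")
X_train, X_test, y_train, y_test = train_test_split(
    df["text"], df["label"], test_size=0.2, random_state=42)

vectorizer = TfidfVectorizer(stop_words="english", max_df=0.7)
X_train_tfidf = vectorizer.fit_transform(X_train)  # learn vocabulary and IDF weights
X_test_tfidf = vectorizer.transform(X_test)        # reuse the fitted vocabulary

clf = PassiveAggressiveClassifier(max_iter=50)
clf.fit(X_train_tfidf, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test_tfidf)))
```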
... To improve this performance, the work by Han et al. (2021) proposes the use of a two-stream network for fake news detection. Similarly, the work by Li et al. (2021) uses an unsupervised fake news detection method based on an autoencoder, and the work by Jiang et al. (2021) proposes an ensemble method that stacks logistic regression, decision tree, k-nearest neighbors, random forest, and support vector machine (SVM) classifiers. All of these approaches achieved an accuracy of over 85% for verification of real-time generated news. ...
Fake news, which distorts and modifies facts for virality objectives, causes a lot of havoc on social media. It spreads faster than real news and produces a slew of issues, including disinformation, misunderstanding, and misdirection in the minds of readers. To combat the spread of fake news, detection algorithms are used that examine news articles through language processing. The lack of human engagement during fake news detection is the main problem with these systems. To address this problem, this paper presents a cooperative deep learning-based fake news detection model. The suggested technique uses user feedback to estimate news trust levels, and news rankings are determined from these values. Lower-ranked news is passed to language processing to verify its validity, while higher-ranked content is recognized as genuine news. A convolutional neural network (CNN) is utilized to turn user feedback into rankings in the deep learning layer. Negatively rated news is fed back into the system to train the CNN model. The suggested model is found to have a 98% accuracy rate for detecting fake news, which is greater than most existing language-processing-based models. The suggested cooperative deep learning model is also compared to state-of-the-art methods in terms of precision, recall, F-measure, and area under the curve (AUC). Based on this analysis, the suggested model is found to be highly efficient.
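As a rough illustration of the CNN component described in this abstract, the following is a minimal one-dimensional convolutional text classifier. The vocabulary size, sequence length, and layer widths are assumptions for illustration; the paper's actual architecture and feedback-to-ranking mapping are not specified in this excerpt.

```python
# Minimal sketch: 1-D CNN that maps tokenized feedback text to a trust score.
# All sizes below are illustrative assumptions, not the paper's architecture.
from tensorflow.keras import layers, models

VOCAB_SIZE, SEQ_LEN = 20_000, 200  # assumed vocabulary size and padded length

model = models.Sequential([
    layers.Input(shape=(SEQ_LEN,)),            # integer token IDs per article
    layers.Embedding(VOCAB_SIZE, 128),         # token embeddings
    layers.Conv1D(64, 5, activation="relu"),   # n-gram-style convolutional filters
    layers.GlobalMaxPooling1D(),               # keep the strongest filter response
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),     # trust score in [0, 1]
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```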
... It is an ensemble machine learning algorithm. It uses a meta-learning algorithm to learn how to best combine the predictions from two or more base machine learning algorithms [52]. The benefit of stacking is that it can harness the capabilities of a range of well-performing models on a classification or regression task and make predictions that perform better than any single model in the ensemble. ...
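A minimal sketch of the stacking concept just described is given below, using the five base learners named in the Jiang et al. excerpt above with a logistic-regression meta-learner; the specific hyperparameters are assumptions.

```python
# Minimal sketch: stacking ensemble (scikit-learn). The meta-learner is trained
# on out-of-fold predictions of the base learners; hyperparameters are assumed.
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

base_learners = [
    ("lr", LogisticRegression(max_iter=1000)),
    ("dt", DecisionTreeClassifier()),
    ("knn", KNeighborsClassifier()),
    ("rf", RandomForestClassifier(n_estimators=100)),
    ("svm", SVC(probability=True)),
]
stack = StackingClassifier(estimators=base_learners,
                           final_estimator=LogisticRegression(), cv=5)
# Usage: stack.fit(X_train, y_train); stack.predict(X_test)
```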
Accurate prediction of heart disease is necessary to treat cardiovascular patients effectively before a heart failure occurs. With artificial intelligence (AI) techniques, this can be accomplished using an optimal machine learning model and rich healthcare data on heart disease. First, a feature selection technique, gradient boosting-based sequential feature selection (GBSFS), is applied to select the most significant features from the heart disease dataset and produce informative healthcare data. Using machine learning algorithms such as decision tree (DT), random forest (RF), multilayer perceptron (MLP), support vector machine (SVM), extra trees (ET), gradient boosting (GBC), linear regression (LR), k-nearest neighbors (KNN), and stacking, a comparison is made between the dataset with all features and the dataset with only the selected significant features. With stacking, the proposed framework achieves a test accuracy of 98.78 percent, which is higher than existing frameworks and is most notable for the stacking model with 11 features. This outcome shows that the framework is more effective for the prediction of heart disease than other state-of-the-art methods.
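Under the assumption that GBSFS behaves like forward sequential feature selection scored by a gradient-boosting model, a minimal sketch might look as follows; the heart.csv file and its target column are hypothetical placeholders, not the paper's data.

```python
# Minimal sketch: forward sequential feature selection scored by gradient
# boosting, in the spirit of GBSFS. "heart.csv"/"target" are hypothetical.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_selection import SequentialFeatureSelector

df = pd.read_csv("heart.csv")
X, y = df.drop(columns="target"), df["target"]

selector = SequentialFeatureSelector(
    GradientBoostingClassifier(),      # scores each candidate subset via CV
    n_features_to_select=11,           # the abstract reports 11 selected features
    direction="forward", cv=5)
selector.fit(X, y)
print("selected features:", list(X.columns[selector.get_support()]))
```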
... Their technique processes text input with bidirectional encoder transformers (BERT). Jiang et al. [37] used ML classifiers, such as LR, SVM, k-NN, DT, and RF, and also used deep networks, i.e., CNN, GRU, and LSTM, for misinformation detection. Shu et al. [38] propose a sentence-comment co-attention sub-network to exploit user comments and news article content. ...
... To validate the obtained results, we used multiple performance measures, such as accuracy, precision, recall and f1-score [24]. In reality, it was hard to annotate 28k tweets, so we resorted to statistical sampling theory. ...
This paper proposes an artificial intelligence model to manage risks in healthcare institutions. This model uses a trendy data source, social media, and employs users’ interactions to identify and assess potential risks. It employs natural language processing techniques to analyze users’ tweets and produce vivid insights into the types of risk and their magnitude. In addition, some big data analysis techniques, such as classification, are utilized to reduce the dimensionality of the data and manage the data effectively. The produced insights will help healthcare managers to make the best decisions for their institutions and patients, which can lead to a more sustainable environment. In addition, we build a mathematical model for the proposed approach, and some closed-form relations for risk analysis, identification and assessment are derived. Moreover, a case study on the CVS healthcare institution in the USA indicates that a quarter of patients’ tweets refer to risks in CVS services, such as operational, financial and technological risks, and that the magnitude of these risks varies between high risk (19%), medium risk (80.4%) and low risk (0.6%). Further, several performance measures and a complexity analysis are given to show the validity of the proposed model.
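For reference, the four performance measures named in the excerpt above can be computed as in the following sketch; the labels shown are illustrative stand-ins, not data from the study.

```python
# Minimal sketch: the four evaluation measures named above (scikit-learn).
# y_true / y_pred are illustrative stand-ins, not data from the study.
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0]   # hypothetical annotated labels
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]   # hypothetical model predictions

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
print("f1-score :", f1_score(y_true, y_pred))
```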
... Shishah (2021) implemented a new fake news detection model based on BERT with a joint learning scheme integrating Named Entity Recognition (NER) and Relational Features Classification (RFC). Jiang et al. (2021) investigated the efficiency of three deep learning models and five machine learning models. The superior performance of the designed model was verified by comparison with other existing approaches. ...
The emergence of social media creates inconsistencies in online news, which cause confusion and uncertainty for consumers making purchase decisions. Moreover, existing studies lack an empirical and systematic examination of such inconsistency in reviews. The spread of fake news and disinformation on social media platforms has adverse effects on stability and social harmony. Fake news emerges and spreads on social media day by day, influencing, provoking, and misleading nations and societies. Several studies aim to distinguish fake news from real news on online social media platforms, since accurate and timely detection prevents its propagation. This paper reviews fake news detection models built with a variety of machine learning and deep learning algorithms. The fundamental and well-performing approaches of past years are reviewed, categorized, and described across different datasets. Further, the datasets used, simulation platforms, and reported performance metrics are evaluated as an extended review. Finally, the survey summarizes research findings and challenges that could have significant implications for upcoming researchers and professionals seeking to improve the trustworthiness of automated fake news detection models.
... The approach achieves good accuracy in comparison with the literature. Jiang et al. (2021) proposed a novel stacking model and compared it with several machine learning models and three deep learning models, making predictions that perform better than any single model in the ensemble. ...
Fake news is information that does not represent reality but is commonly shared on the internet as if it were true, mainly because of its dramatic, appealing, and controversial content. Therefore, a relevant issue is to find characteristics that can assist in identifying fake news, especially nowadays, when an increasing amount of fake news is spread all over the internet every day. This work aims to extract knowledge from Brazilian fake news data based on statistical learning. Initially, an exploratory data analysis is performed on the available variables to extract insights from the differences between fake and true news. Then, prediction and modelling are carried out. The learning phase aims to build a model and measure the features that best explain the behaviour of misleading texts, which leads to a parsimonious model. Finally, the test phase estimates the fitted model's accuracy based on 10-fold cross-validation in a Monte Carlo framework. The results show that four variables are significant in explaining fake news. Moreover, our model achieved results comparable with the state of the art (0.941 F-measure) for a single classifier, while having the advantage of being a parsimonious model. This work's details and code can be found at https://github.com/limagbz/fake-news-ptBR.
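A minimal sketch of the evaluation protocol described here, 10-fold cross-validation repeated in a Monte Carlo fashion and scored by F-measure, could look like the following; the classifier and the synthetic stand-in data are assumptions, not the paper's model.

```python
# Minimal sketch: repeated (Monte Carlo) 10-fold cross-validation scored by F1.
# The logistic-regression classifier and synthetic data are assumptions only.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

X, y = make_classification(n_samples=1000, n_features=4, random_state=0)

cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=30, random_state=0)
scores = cross_val_score(LogisticRegression(), X, y, scoring="f1", cv=cv)
print(f"F-measure: {scores.mean():.3f} +/- {scores.std():.3f}")
```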
... There have been research advances in methods using technology for the detection of fake news. One such study evaluated the performance of five machine learning models and three deep learning models on two real and fake news datasets (Jiang et al., 2021). Another similar study evaluated the performance of two machine learning models and three deep learning models using two different English news datasets (Alameri & Mohd, 2021). ...
This study aims to (1) determine research developments, (2) produce research distribution maps based on co-authorship analysis, (3) produce research distribution maps based on citation analysis, (4) produce research distribution maps based on co-citation analysis, (5) produce a research distribution map based on co-occurrence analysis, and (6) identify the state of the art in research on the use of technology for fake news detection. This study uses a quantitative method with a descriptive approach in a bibliometric analysis from 2011 to 2021. The results show (1) that the number of research publications on the use of technology for fake news detection is increasing, (2) the interrelationships between the authors' countries, the organizations affiliated with the authors, and the authors themselves, (3) a distribution map of citation analysis by country, source, and author, (4) a distribution map of co-citation analysis by cited source and cited author, (5) that the keywords connected with publications on the use of technology for fake news detection relate to literacy skills such as media literacy and information literacy, and (6) several research trends widely explored in the last 2 years, including COVID-19, media literacy, and cyber deception. These results can be used to trace the development of research on the use of technology for fake news detection and as a reference for developing further research in this area.
... This dataset contains a total of 44,898 data points, 21,417 of which are true news and 23,481 of which are fake news. Using the ISOT and KDnugget datasets, a novel stacking technique is provided with testing accuracies of 99.94% and 96.05%, respectively [9]. ...
... Numerous approaches for automatically detecting the authenticity of news have been developed. Initially, Natural Language Processing (NLP) issues were handled using traditional Machine Learning (ML) methods such as Logistic Regression and Support Vector Machine (SVM) with hand-crafted features [10,11]. These approaches inevitably produced high-dimensional representations of text, giving rise to the curse of dimensionality. ...
Fake news detection techniques are a topic of interest due to the vast abundance of fake news data accessible via social media. Present fake news detection systems perform satisfactorily on well-balanced data. However, when the dataset is biased, these models perform poorly. Additionally, manual labeling of fake news data is time-consuming, even though plenty of fake news traverses the internet. Thus, we introduce a text augmentation technique with a Bidirectional Encoder Representations from Transformers (BERT) language model to generate an augmented dataset composed of synthetic fake data. The proposed approach overcomes the issue of the minority class and performs classification with the AugFake-BERT model (trained on the augmented dataset). The proposed strategy is evaluated against twelve different state-of-the-art models and outperforms them with an accuracy of 92.45%. Moreover, accuracy, precision, recall, and f1-score metrics are used to evaluate the proposed strategy and demonstrate that a balanced dataset significantly affects classification performance.
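One generic way to realize BERT-based text augmentation of the kind this abstract sketches is masked-word substitution with a pretrained masked language model. The sketch below uses the Hugging Face fill-mask pipeline and is an assumption about the general technique, not the paper's exact AugFake-BERT procedure.

```python
# Minimal sketch: BERT masked-LM text augmentation (generic technique, not the
# paper's exact AugFake-BERT procedure). Requires the "transformers" package.
import random
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

def augment(sentence: str) -> str:
    """Mask one random word and let BERT propose a replacement."""
    words = sentence.split()
    i = random.randrange(len(words))
    words[i] = fill_mask.tokenizer.mask_token   # "[MASK]" for bert-base-uncased
    return fill_mask(" ".join(words), top_k=1)[0]["sequence"]

print(augment("breaking news claims the election results were secretly changed"))
```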