Article · PDF available

Abstract

The widespread dissemination of fake news across social media platforms and news sources is a major cause of social damage and a serious concern. Through platforms such as Instagram, Twitter, Facebook, and YouTube, fake news reaches billions of people daily. In this work, we aim to contribute to fake news detection, focusing on techniques for classifying news as fake or genuine, with the goal of providing people with real news and avoiding conflict.
... The future of ERP systems will depend heavily on AI integration to optimize workflows and promote data-driven decision-making across various sectors. Developing effective tools that employ advanced Machine Learning (ML) methods poses its own significant challenges, which we leave for discussion in another paper (Mittal and Saini, 2024; Mittal, 2024) ...
Article
Full-text available
Enterprise Resource Planning (ERP) systems play a crucial role in today’s businesses by offering centralized management of operations, financial transactions, human resources, and resource allocation. In the past, companies typically relied on single-suite ERP solutions like SAP or Oracle ERP, which aimed to provide a cohesive approach to managing enterprises. However, as business functions have become more complex, there has been a shift towards best-of-breed strategies. This approach involves integrating multiple specialized tools to effectively meet specific enterprise needs. This paper examines the strategic benefits of adopting a best-of-breed ERP model, highlighting the integration of Workday for Human Capital Management (HCM), ServiceNow for ticket automation and workflow optimization, Oracle Fusion for financial management, and Microsoft Azure for data warehousing and analytics. While these tools do not dominate the entire ERP market, they are recognized as leaders in their respective areas due to their advanced features, flexibility, and scalability. The paper discusses the implementation strategies necessary for achieving seamless interoperability among these platforms, ensuring efficient data flow, compliance with security standards, and automation of processes. It also addresses common challenges such as data silos, integration difficulties, system downtime, and user resistance, offering practical solutions to overcome these obstacles. Furthermore, the study points out future trends like AI-driven automation, the expansion of cloud-based infrastructure, and predictive analytics, which will further enhance multi-tool ERP ecosystems. By embracing a best-of-breed strategy, businesses can boost operational agility, improve decision-making, and optimize resource use, gaining a competitive advantage in an increasingly digital landscape.
Article
Full-text available
Hyper-automation, the strategic convergence of advanced technologies such as artificial intelligence (AI), robotic process automation (RPA), and the Internet of Things (IoT), is reshaping supply chain and quality management across industries. This paper examines the deployment and implications of Hyper-automation within the Consumer Packaged Goods (CPG) and Healthcare sectors, focusing on its influence on operational efficiency, quality control, and scalability. Through a comparative, cross-sectoral lens, we analyze current literature, identify implementation gaps, and propose actionable opportunities for leveraging Hyper-automation to navigate evolving industry challenges. Our findings suggest that while Hyper-automation enables real-time decision-making and reduces operational errors, barriers such as high initial investment and the need for workforce reskilling persist. Future outlooks emphasize the importance of enhanced interoperability, advanced analytics, and sustainable integration.
Article
Full-text available
This paper explores the growing importance of explainability in artificial intelligence (AI) models deployed in enterprise decision support systems (DSS). As organizations increasingly rely on AI for critical business decisions, the "black box" nature of many advanced models poses significant challenges for stakeholders who need to understand, trust, and justify AI-driven recommendations. We examine current approaches to explainable AI (XAI), evaluate their effectiveness in enterprise contexts, and propose a framework for implementing explainable models in decision support systems. Through case studies and empirical analysis, we demonstrate that explainable AI can enhance decision quality, regulatory compliance, and stakeholder trust while maintaining competitive performance levels.
Article
Full-text available
The transition to a circular economy, where resources are reused, recycled, and repurposed to minimize waste, demands innovative approaches that transcend traditional linear models. Digital transformation, powered by artificial intelligence (AI), offers a transformative pathway to achieve sustainable innovation by optimizing processes, enhancing decision-making, and fostering systemic change. This research explores how AI-driven digital transformation can enable circular economy models, with a focus on practical applications in the Consumer Packaged Goods (CPG), healthcare, and medical technology (med-tech) industries. Drawing from real-world insights in consulting for CPG and healthcare firms, as well as current experience in med-tech, this study examines how AI technologies—such as predictive analytics, supply chain optimization, and product lifecycle management—can reduce waste, improve resource efficiency, and drive sustainability. Through a mixed-method approach combining case studies, industry data analysis, and theoretical frameworks, the paper identifies key opportunities and challenges in leveraging AI for circularity. The findings aim to provide actionable strategies for organizations seeking to integrate digital transformation into sustainable practices, contributing to both environmental goals and economic resilience.
Article
Full-text available
Recent political events have led to an increase in the popularity and spread of fake news. As the far-reaching impact of the spread of fake news has shown, people are inconsistent, if not poor, at detecting fake news. Therefore, efforts have been made to automate the process of detecting fake news. Among the most widespread attempts of this kind are "blacklists" of sources and authors who cannot be trusted. While these tools are useful for creating a more comprehensive end-to-end solution, we must consider the more difficult cases where credible sources and authors publish fake news. The aim of this project was therefore to create a tool that uses machine learning to detect linguistic patterns that characterize false and real information. The results of this project show that machine learning can be useful in this task. We have developed a model that captures many intuitive clues about true and false news, as well as an application that visualizes the classification decision.
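The idea of learning linguistic patterns that separate false from real news can be illustrated with a minimal text classifier. The sketch below is not the model from the paper; it is a toy multinomial Naive Bayes over bag-of-words features, with a made-up four-document corpus, showing how word-level cues (e.g. sensationalist vocabulary) can drive a classification decision.

```python
import math
from collections import Counter

def train_nb(docs):
    """Train a multinomial Naive Bayes model on (text, label) pairs."""
    counts = {"fake": Counter(), "real": Counter()}
    n_docs = Counter()
    for text, label in docs:
        counts[label].update(text.lower().split())
        n_docs[label] += 1
    vocab = set(counts["fake"]) | set(counts["real"])
    return counts, n_docs, vocab

def predict_nb(model, text):
    """Return the label with the highest log-posterior (Laplace smoothing)."""
    counts, n_docs, vocab = model
    total = sum(n_docs.values())
    best, best_lp = None, float("-inf")
    for label in counts:
        lp = math.log(n_docs[label] / total)            # class prior
        denom = sum(counts[label].values()) + len(vocab)
        for w in text.lower().split():
            lp += math.log((counts[label][w] + 1) / denom)  # smoothed likelihood
        if lp > best_lp:
            best, best_lp = label, lp
    return best

# Toy corpus: exaggerated "clickbait" cues vs. neutral reporting.
train = [
    ("shocking secret they dont want you to know", "fake"),
    ("you wont believe this miracle cure", "fake"),
    ("officials report quarterly economic figures", "real"),
    ("study published in peer reviewed journal", "real"),
]
model = train_nb(train)
print(predict_nb(model, "shocking miracle secret"))  # → fake
```

Real systems replace the hand-built corpus with thousands of labeled articles and richer features (n-grams, syntax, readability), but the decision mechanics are the same.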
Preprint
Full-text available
The problem associated with the propagation of fake news continues to grow at an alarming scale. This trend has generated much interest from politics to academia and industry alike. We propose a framework that detects and classifies fake news messages from Twitter posts using a hybrid of convolutional neural networks and long short-term memory (LSTM) recurrent neural networks. This deep learning approach achieves 82% accuracy. Our approach intuitively identifies relevant features associated with fake news stories without previous knowledge of the domain.
Conference Paper
Full-text available
The proliferation and rapid diffusion of fake news on the Internet highlight the need for automatic hoax detection systems. In the context of social networks, machine learning (ML) methods can be used for this purpose. Fake news detection strategies are traditionally either based on content analysis (i.e. analyzing the content of the news) or, more recently, on social context models, such as mapping the news' diffusion pattern. In this paper, we first propose a novel ML fake news detection method which, by combining news content and social context features, outperforms existing methods in the literature, increasing their already high accuracy by up to 4.8%. Second, we implement our method within a Facebook Messenger chatbot and validate it with a real-world application, obtaining a fake news detection accuracy of 81.7%.
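The content-plus-context combination can be sketched in a few lines: extract one feature set from the text itself and another from how the item spreads, then score the concatenated vector. The feature names, weights, and threshold below are illustrative assumptions, not the paper's actual model.

```python
def combined_features(news):
    """Concatenate content features (from the text) with social-context
    features (from the diffusion pattern). All names here are illustrative."""
    content = {
        "exclamations": news["text"].count("!") / max(len(news["text"]), 1),
        "all_caps_words": sum(w.isupper() for w in news["text"].split()),
    }
    context = {
        "share_velocity": news["shares_per_hour"],
        "bot_like_ratio": news["bot_like_ratio"],
    }
    return {**content, **context}

def score(features, weights):
    """Linear score: values above the threshold are flagged as fake."""
    return sum(weights[k] * v for k, v in features.items())

# Hand-set weights for the toy example (a trained model would learn these).
weights = {"exclamations": 5.0, "all_caps_words": 0.5,
           "share_velocity": 0.01, "bot_like_ratio": 2.0}
item = {"text": "BREAKING!!! MIRACLE cure found",
        "shares_per_hour": 400, "bot_like_ratio": 0.7}
f = combined_features(item)
print("fake" if score(f, weights) > 1.0 else "real")  # → fake
```

The point of the combination is that either feature family alone can be fooled (well-written hoaxes pass content checks; organic virality resembles bot amplification), while the concatenated vector lets a classifier weigh both signals.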
Conference Paper
Full-text available
Fake news has been around for decades, and with the advent of social media and modern-day journalism at its peak, detection of media-rich fake news has become a popular topic in the research community. Given the challenges associated with the fake news detection research problem, researchers around the globe are trying to understand its basic characteristics. This paper aims to present an insight into the characterization of news stories in the modern diaspora, combined with the differential content types of news stories and their impact on readers. Subsequently, we dive into existing fake news detection approaches that are heavily based on text-based analysis, and also describe popular fake news data sets. We conclude the paper by identifying four key open research challenges that can guide future research.
Article
Fake news spreading in social media severely jeopardizes the veracity of online content. Fortunately, with the interactive and open features of microblogs, skeptical and opposing voices against fake news always arise along with it. The conflicting information, ignored by existing studies, is crucial for news verification. In this paper, we take advantage of this "wisdom of crowds" information to improve news verification by mining conflicting viewpoints in microblogs. First, we discover conflicting viewpoints in news tweets with a topic model method. Based on identified tweets' viewpoints, we then build a credibility propagation network of tweets linked with supporting or opposing relations. Finally, with iterative deduction, the credibility propagation on the network generates the final evaluation result for news. Experiments conducted on a real-world data set show that the news verification performance of our approach significantly outperforms those of the baseline approaches.
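The credibility-propagation step described above can be illustrated with a small signed-network example: each tweet starts with an initial credibility score, and each iteration mixes a tweet's score with the signed average of its neighbours (supporting edges pull scores together, opposing edges push them apart). This update rule is a simplified assumption for illustration, not the paper's exact deduction formula.

```python
def propagate(init, edges, iters=30, alpha=0.5):
    """Signed credibility propagation. Scores live in [-1, 1]; +1 means
    credible. Supporting edges carry sign +1, opposing edges sign -1."""
    scores = dict(init)
    neigh = {n: [] for n in scores}
    for a, b, sign in edges:
        neigh[a].append((b, sign))
        neigh[b].append((a, sign))
    for _ in range(iters):
        new = {}
        for node, nbrs in neigh.items():
            if nbrs:
                # Signed average of neighbour scores.
                pulled = sum(s * scores[other] for other, s in nbrs) / len(nbrs)
                new[node] = (1 - alpha) * scores[node] + alpha * pulled
            else:
                new[node] = scores[node]
        scores = new
    return scores

# t2 starts credible (e.g. from a verified source); t2 supports t1, t3 opposes t1.
init = {"t1": 0.0, "t2": 1.0, "t3": 0.0}
edges = [("t1", "t2", +1), ("t1", "t3", -1)]
final = propagate(init, edges)
# t1 inherits credibility from its supporter; t3, which opposes a now-credible
# tweet, is pushed toward the non-credible end.
```

The final sign of each node's score then serves as its verification verdict, which is the role the credibility network plays in the paper's pipeline.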
Article
Finding facts about fake news: There was a proliferation of fake news during the 2016 election cycle. Grinberg et al. analyzed Twitter data by matching Twitter accounts to specific voters to determine who was exposed to fake news, who spread fake news, and how fake news interacted with factual news (see the Perspective by Ruths). Fake news accounted for nearly 6% of all news consumption, but it was heavily concentrated — only 1% of users were exposed to 80% of fake news, and 0.1% of users were responsible for sharing 80% of fake news. Interestingly, fake news was most concentrated among conservative voters. Science, this issue p. 374; see also p. 348
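The concentration statistic reported above ("1% of users were exposed to 80% of fake news") is a top-share measure over a heavy-tailed exposure distribution. The snippet below computes it for a hypothetical user population; the numbers are made up to show the mechanics, not taken from the study's data.

```python
def top_share(exposures, frac):
    """Fraction of total exposure accounted for by the top `frac` of users
    (e.g. frac=0.01 → top 1%), after ranking users by exposure count."""
    ranked = sorted(exposures, reverse=True)
    k = max(1, int(len(ranked) * frac))
    return sum(ranked[:k]) / sum(ranked)

# Hypothetical heavy-tailed exposure counts for 1000 users:
# 10 users each see 1000 fake-news items, the rest see 1 each.
exposures = [1000] * 10 + [1] * 990
print(round(top_share(exposures, 0.01), 2))  # → 0.91
```

With a distribution this skewed, the top 1% of users account for over 90% of exposure, which is the shape of concentration the study reports.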