Article

An intelligent decision support system for warranty claims forecasting: Merits of social media and quality function deployment

Social media is a highly interactive platform that allows individuals to share, co-create, discuss, and modify user-generated content. For example, customers frequently update their emotions, preferences, interests, ideas, and issues about goods and services on social media platforms, such as blogs, content communities, and social networking sites (Nikseresht et al., 2024). Such customer opinions serve as valuable knowledge, including information, data, and insights, that complements existing knowledge bases and adapts them to environmental changes (Ji et al., 2024); however, neglecting customer feedback, particularly customer complaints and concerns, can hurt customers' attitudes and purchasing intentions and negatively affect brand reputation (Vermeer et al., 2019).
Article
Purpose This study aims to (1) investigate the influence of firms’ social media utilization on performance through supply chain agility, (2) examine the mediating role of supply chain agility and (3) explore the indirect effect of social media utilization on operational performance via supply chain agility as knowledge transfer increases. Design/methodology/approach A survey of 298 Chinese manufacturing firms was conducted to assess the proposed relationships, employing moderated mediation analysis with Hayes's (2017) PROCESS macro. Findings Social media utilization indirectly enhances operational performance through supply chain agility, supporting our mediation hypothesis (H1). Additionally, knowledge transfer moderates the positive impact of social media utilization on supply chain agility (H2). The moderated mediation analysis reveals that the mediating effect of supply chain agility on operational performance is stronger at higher levels of knowledge transfer (H3), shedding light on the intricate relationships between these variables and providing insights for businesses seeking to leverage social media and knowledge transfer to enhance supply chain resilience and operational performance. Originality/value This study empirically investigates the role of social media utilization in supply chains within the digital age. We explore how social media enhances supply chain agility and knowledge transfer, highlighting its transformative potential for real-time communication, responsiveness and collaboration across networks. By integrating dynamic capability theory with contemporary digital practices, we demonstrate how leveraging digital platforms alongside traditional supply chain processes can significantly improve manufacturing efficiency. This research bridges existing gaps in the literature and provides valuable insights for businesses navigating complex, rapidly changing environments in the era of digital transformation.
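For readers who want to reproduce this kind of analysis without the PROCESS macro, the sketch below bootstraps an index of moderated mediation for a first-stage moderated mediation model (the Hayes Model 7 pattern the hypotheses describe). All variable names (smu, kt, agility, perf) and the synthetic data are hypothetical stand-ins, not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def index_of_moderated_mediation(df, n_boot=2000, seed=0):
    """Bootstrap a3 * b: moderated a-path (smu:kt) times b-path (agility)."""
    rng = np.random.default_rng(seed)
    idx = []
    for _ in range(n_boot):
        s = df.sample(len(df), replace=True,
                      random_state=int(rng.integers(10**9)))
        a = smf.ols("agility ~ smu * kt", data=s).fit()    # moderated a-path
        b = smf.ols("perf ~ agility + smu", data=s).fit()  # b-path
        idx.append(a.params["smu:kt"] * b.params["agility"])
    lo, hi = np.percentile(idx, [2.5, 97.5])
    return float(np.mean(idx)), (lo, hi)  # CI excluding zero -> moderated mediation

# Hypothetical data with a genuinely moderated indirect effect
rng = np.random.default_rng(1)
n = 300
smu, kt = rng.normal(size=n), rng.normal(size=n)
agility = 0.4 * smu + 0.3 * smu * kt + rng.normal(size=n)
perf = 0.5 * agility + rng.normal(size=n)
df = pd.DataFrame({"smu": smu, "kt": kt, "agility": agility, "perf": perf})
print(index_of_moderated_mediation(df))
```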
Article
This paper concentrates on Machine Learning (ML) and data-driven models with applications in Sustainable Supply Chain Management (SSCM) utilizing network, bibliometric, and content analyses that can render several innovative insights and perspectives into contemporary research trends in this field. In this work, a comprehensive systematic literature review and bibliometric analysis are undertaken using 324 out of more than 9000 research papers, and accordingly, the decision factors, assumptions, and research objectives for each model are highlighted. The results contribute to both theoretical and practical management elements and give a solid road map for future study in this sector. This paper's final goal is to provide a thorough overview of applications of ML and data-driven models in SSCM, serving as a source of prospective studies for SSCM scholars and practical insights for SSCM professionals who try to implement their solutions based on ML and AI algorithms.
Article
As financial technology (fintech) rapidly transforms the financial services landscape, the integration of artificial intelligence (AI) in providing financial advice becomes a focal point. This study delves into the intricate dynamics of AI-based financial advice adoption, focusing on technology integration, decision support systems, and the mediating role of perceived utility. The primary purpose of this research is to unravel the complex relationships between technology integration, Decision Support System (DSS), perceived utility, and their collective influence on consumer attitudes toward AI-based financial advice and technology adoption. Adopting a cross-sectional design, we target the Chinese population and utilize an online questionnaire for data collection. A sample size of 259 participants is determined using the rule of thumb technique, with random sampling ensuring representativeness. Data analysis was conducted using the AMOS software, allowing for an in-depth examination of the relationships between variables. The study findings demonstrate how integrating technology enhances customers' perceptions and acceptance of it, especially when it comes to data security and compatibility. The study also demonstrates how customer attitudes and the uptake of AI-driven financial aid are significantly impacted by DSS aspects like decision transparency and predictive analytics precision. The study is unique because of its cross-cultural approach and perceived value as a mediating component. While this study focuses on the Chinese population, the researchers acknowledge the importance of cultural differences in shaping user perceptions and behaviors. While the findings may offer valuable insights into the Chinese context, caution is advised in generalizing the results to other cultural contexts.
Article
Full-text available
In this study, a new hybrid metaheuristic algorithm named Chaotic Sand Cat Swarm Optimization (CSCSO) is proposed for constrained and complex optimization problems. This algorithm combines the features of the recently introduced SCSO with the concept of chaos. The basic aim of the proposed algorithm is to integrate the chaos feature of non-recurring locations into SCSO’s core search process to improve global search performance and convergence behavior. Thus, randomness in SCSO can be replaced by a chaotic map due to similar randomness features with better statistical and dynamic properties. Beyond these advantages, issues of low search consistency, trapping in local optima, inefficient search, and low population diversity are also addressed. In the proposed CSCSO, several chaotic maps are implemented for more efficient behavior in the exploration and exploitation phases. Experiments are conducted on a wide variety of well-known test functions to increase the reliability of the results, as well as real-world problems. In this study, the proposed algorithm was applied to a total of 39 functions and multidisciplinary problems. It found 76.3% better responses compared with the best-developed SCSO variant and other chaotic-based metaheuristics tested. This extensive experiment indicates that the CSCSO algorithm excels in providing acceptable results.
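The core trick, substituting a chaotic map for the uniform random numbers that drive a metaheuristic, is easy to isolate. The sketch below is a minimal, generic search loop (not the SCSO update equations, which the paper defines) whose step sizes come from a logistic map.

```python
import numpy as np

def logistic_map(x):
    """One step of the logistic map; chaotic (non-repeating) for r = 4."""
    return 4.0 * x * (1.0 - x)

def chaotic_search(f, lb, ub, dim=2, agents=20, iters=200, c0=0.7):
    rng = np.random.default_rng(42)
    pos = rng.uniform(lb, ub, (agents, dim))
    best = min(pos, key=f).copy()
    c = c0  # chaotic state in (0, 1), replacing uniform random draws
    for _ in range(iters):
        for i in range(agents):
            c = logistic_map(c)
            # chaos-driven pull toward the incumbent best, plus a small jitter
            pos[i] += c * (best - pos[i]) + (c - 0.5) * 0.1 * (ub - lb)
            pos[i] = np.clip(pos[i], lb, ub)
            if f(pos[i]) < f(best):
                best = pos[i].copy()
    return best, f(best)

# Example: minimize the sphere function on [-5, 5]^2
print(chaotic_search(lambda x: float(np.sum(x**2)), -5.0, 5.0))
```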
Article
Full-text available
Currently, artificial intelligence is facing several problems with its practical implementation in various application domains. The explainability of advanced artificial intelligence algorithms is a topic of paramount importance, and many discussions have been held recently. Pioneering and classical machine learning and deep learning models behave as black boxes, constraining the logical interpretations that the end users desire. Artificial intelligence applications in industry, medicine, agriculture, and social sciences require the users’ trust in the systems. Users are always entitled to know why and how each method has made a decision and which factors play a critical role. Otherwise, they will always be wary of using new techniques. This paper discusses the nature of fuzzy cognitive maps (FCMs), a soft computational method to model human knowledge and provide decisions handling uncertainty. Though FCMs are not new to the field, they are evolving and incorporate recent advancements in artificial intelligence, such as learning algorithms and convolutional neural networks. The nature of FCMs reveals their supremacy in transparency, interpretability, transferability, and other aspects of explainable artificial intelligence (XAI) methods. The present study aims to reveal and defend the explainability properties of FCMs and to highlight their successful implementation in many domains. Subsequently, the present study discusses how FCMs cope with XAI directions and presents critical examples from the literature that demonstrate their superiority. The study results demonstrate that FCMs are both in accordance with the XAI directives and have many successful applications in domains such as medical decision-support systems, precision agriculture, energy savings, environmental monitoring, and policy-making for the public sector.
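To make the FCM mechanics concrete, here is a minimal inference sketch: concept activations are repeatedly propagated through a signed weight matrix and squashed, so every prediction can be traced back to explicit causal edges. The three-concept map is a made-up example.

```python
import numpy as np

def fcm_infer(W, a0, steps=20):
    """Iterate a(t+1) = sigmoid(W^T a(t)); one common FCM update rule."""
    a = np.asarray(a0, dtype=float)
    for _ in range(steps):
        a = 1.0 / (1.0 + np.exp(-(W.T @ a)))
    return a

# Hypothetical edges: C0 -> C1 (+0.8), C0 -> C2 (+0.3), C1 -> C2 (-0.5)
W = np.array([[0.0, 0.8,  0.3],
              [0.0, 0.0, -0.5],
              [0.0, 0.0,  0.0]])
print(fcm_infer(W, [0.9, 0.1, 0.1]))
```

Because the weights are human-readable causal strengths, the map itself is the explanation, which is the XAI property the article defends.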
Article
Full-text available
The strategic use of social media tools facilitates firms’ entrepreneurial capabilities, enabling them to become more innovative, increasing their proactivity, and helping them to renew themselves internally. In today’s turbulent landscape, organizational resilience has emerged as a key variable for responding to external challenges and facing uncertainty. In this context, our study aims to analyze the role of social media use as an antecedent of corporate entrepreneurship and firm performance in Spanish SMEs, while also examining the mediating role of organizational resilience in this process. Analyzing data from a sample of 259 firms, we tested our proposed hypotheses using structural equation modeling. The results confirm that use of social media tools positively impacts the entrepreneurial capabilities of the SMEs examined. The findings also stress the strategic relevance of organizational resilience, which exerts a perfect mediating impact on firm performance. These findings have significant implications for managers, as they show the path managers must take to benefit from social media use, become more entrepreneurial and resilient, and achieve business success in these turbulent times.
Article
Full-text available
This study presents big data applications with quantitative theoretical models in financial management and investigates possible incorporation of social media factors into the models. Specifically, we examine three models, a revenue management model, an interest rate model with market sentiments, and a high-frequency trading equity market model, and consider possible extensions of those models to include social media. Since social media plays a substantial role in promoting products and services, engaging with customers, and sharing sentiments among market participants, it is important to include social media factors in the stochastic optimization models for financial management. Moreover, we compare the three models from a qualitative and quantitative point of view and provide managerial implications on how these models are synthetically used along with social media in financial management with a concrete case of a hotel REIT. The contribution of this research is that we investigate the possible incorporation of social media factors into the three models whose objectives are revenue management and debt and equity financing, essential areas in financial management, which helps to estimate the effect and the impact of social media quantitatively if internal data necessary for parameter estimation are available, and provide managerial implications for the synthetic use of the three models from a higher viewpoint. The numerical experiment along with the proposition indicates that the model can be used in the revenue management of hotels, and by improving the social media factor, the hotel can work on maximizing its sales.
Article
Full-text available
Mobile payment systems are becoming more popular due to the increase in the number of smartphones, which, in turn, attracts the interest of fraudsters. Extant research has therefore developed various fraud detection methods using supervised machine learning. However, sufficient labeled data are rarely available and their detection performance is negatively affected by the extreme class imbalance in financial fraud data. The purpose of this study is to propose an XGBoost-based fraud detection framework while considering the financial consequences of fraud detection systems. The framework was empirically validated on a large dataset of more than 6 million mobile transactions. To demonstrate the effectiveness of the proposed framework, we conducted a comparative evaluation of existing machine learning methods designed for modeling imbalanced data and outlier detection. The results suggest that in terms of standard classification measures, the proposed semi-supervised ensemble model integrating multiple unsupervised outlier detection algorithms and an XGBoost classifier achieves the best results, while the highest cost savings can be achieved by combining random under-sampling and XGBoost methods. This study has therefore financial implications for organizations to make appropriate decisions regarding the implementation of effective fraud detection systems.
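A minimal sketch of the cost-saving combination the authors single out, random under-sampling plus XGBoost, is shown below on a synthetic imbalanced dataset standing in for mobile transactions; the paper's semi-supervised ensemble with unsupervised outlier detectors is not reproduced here.

```python
from imblearn.under_sampling import RandomUnderSampler
from sklearn.datasets import make_classification
from sklearn.metrics import average_precision_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# Synthetic stand-in for imbalanced transactions (~0.5% fraud)
X, y = make_classification(n_samples=50_000, n_features=20,
                           weights=[0.995], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Rebalance the training split only, then fit the booster
X_rus, y_rus = RandomUnderSampler(random_state=0).fit_resample(X_tr, y_tr)
clf = XGBClassifier(n_estimators=300, max_depth=4, eval_metric="logloss")
clf.fit(X_rus, y_rus)

proba = clf.predict_proba(X_te)[:, 1]
print("PR-AUC:", average_precision_score(y_te, proba))  # imbalance-aware metric
```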
Article
Full-text available
Existing sales forecasting models are not comprehensive and flexible enough to consider dynamic changes and nonlinearities in sales time-series at the store and product levels. To capture different big data characteristics in sales forecasting data, such as seasonal and trend variations, this study develops a hybrid model combining adaptive trend estimated series (ATES) with a deep neural network model. ATES is first used to model seasonal effects and incorporate holiday, weekend, and marketing effects on sales. The deep neural network model is then proposed to model residuals by capturing complex high-level spatiotemporal features from the data. The proposed hybrid model is equipped with a feature-extraction component that automatically detects the patterns and trends in time-series, which makes the forecasting model robust against noise and time-series length. To validate the proposed hybrid model, a large volume of sales data is processed with a three-dimensional data model to effectively support business decisions at the product-specific store level. To demonstrate the effectiveness of the proposed model, a comparative analysis is performed with several state-of-the-art sales forecasting methods. Here, we show that the proposed hybrid model outperforms existing models for forecasting horizons ranging from one to 12 months.
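The decomposition-plus-residual-model pattern can be sketched compactly. Below, a classical seasonal decomposition stands in for the paper's ATES component, and a small feedforward net models the residuals from their own lags; the sales series is synthetic.

```python
import numpy as np
import pandas as pd
from sklearn.neural_network import MLPRegressor
from statsmodels.tsa.seasonal import seasonal_decompose

# Hypothetical monthly sales series (10 years)
sales = pd.Series(np.random.default_rng(0).gamma(5, 100, 120),
                  index=pd.date_range("2014-01-01", periods=120, freq="MS"))

dec = seasonal_decompose(sales, model="additive", period=12)
resid = dec.resid.dropna()                 # what the NN is asked to explain

# Lagged-residual design matrix, mimicking the residual-modeling stage
L = 12
Xr = np.column_stack([resid.shift(i).values for i in range(1, L + 1)])[L:]
yr = resid.values[L:]
nn = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000,
                  random_state=0).fit(Xr, yr)

# Hybrid fitted value = trend + seasonal + NN-modeled residual
print("residual-model R^2:", nn.score(Xr, yr))
```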
Article
Full-text available
We propose an out-of-sample prediction approach that combines unrestricted mixed-data sampling with machine learning (mixed-frequency machine learning, MFML). We use the MFML approach to generate a sequence of nowcasts and backcasts of weekly unemployment insurance initial claims based on a rich trove of daily Google Trends search volume data for terms related to unemployment. The predictions are based on linear models estimated via the LASSO and elastic net, nonlinear models based on artificial neural networks, and ensembles of linear and nonlinear models. Nowcasts and backcasts of weekly initial claims based on models that incorporate the information in the daily Google Trends search volume data substantially outperform those based on models that ignore the information. Predictive accuracy increases as the nowcasts and backcasts include more recent daily Google Trends data. The relevance of daily Google Trends data for predicting weekly initial claims is strongly linked to the COVID-19 crisis.
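Unrestricted mixed-data sampling has a simple regression reading: give every daily lag within the week its own coefficient and let a shrinkage estimator sort them out. A sketch with synthetic data (standing in for Google Trends volumes and weekly claims) follows.

```python
import numpy as np
from sklearn.linear_model import ElasticNetCV

# y = weekly initial claims; G holds each week's 7 daily search volumes,
# one regressor per daily lag (the "unrestricted MIDAS" layout).
rng = np.random.default_rng(0)
n_weeks = 300
G = rng.normal(size=(n_weeks, 7))
y = G @ rng.normal(size=7) + rng.normal(scale=0.5, size=n_weeks)

# Mid-week nowcast: only the first k days exist; zero-pad the rest
# (one simple alignment choice, not necessarily the paper's).
k = 4
G_k = np.hstack([G[:, :k], np.zeros((n_weeks, 7 - k))])

model = ElasticNetCV(l1_ratio=[0.1, 0.5, 0.9, 1.0], cv=5).fit(G_k[:-1], y[:-1])
print("nowcast for the latest week:", model.predict(G_k[-1:]))
```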
Article
Full-text available
Sustainable development emergent subfields have been rapidly evolving, and their popularity has increased in recent years. Sustainable development is a broad concept having numerous sub-concepts including, but not limited to, circular economy, sustainability, renewable energy, green supply chain, reverse logistics, and waste management. This polymorphism makes decision-making in this field an abstruse task. In this perplexing circumstance, the presence of VUCA conditions makes decision-making even more challenging. By taking advantage of artificial intelligence tools and approaches, this paper aims to study sustainable development-related decision-making under the elements of the VUCA phenomenon using bibliometric and network analyses, which can propose numerous novel insights into the most recent research trends in this area by analyzing the most influential and cited research articles, keywords, author collaboration networks, institutions, and countries, and finally provides results not previously fully comprehended or assessed by other studies on this topic. In this study, an extensive systematic literature review and bibliometric analysis are conducted using 534 research articles out of more than 3600. From the content analysis part, four clusters have been found. The decision parameters, presumptions, and research goal(s) for each model are pointed out as well. The findings contribute to both conceptual and practical managerial aspects and provide a powerful roadmap for future research directions in this field, such as how real-life multidimensionality can be considered in sustainable development-related decision-making, or what the effects of the VUCA are in sustainable development considering the circular economy and waste management intersection.
Article
Full-text available
Warranty data analysis is a form of life data analysis and is part of the engineering and business process of assessing product reliability and predicting the number of parts failing in the field and the associated warranty cost. However, oftentimes the prediction is started early and is based only on a few months of field data, which raises a question of data maturity. As data matures and the product operates longer in the field, the prediction changes, since the failure distribution becomes a function of the observation time. Data maturation has been known as a complicating factor in warranty data analysis affecting the accuracy of prediction; however, there have been very few attempts to look at how data maturity affects the failure patterns and the probability of failure as the time in the field, and consequently the observation time, increases. This paper discusses the causes of data maturity, presents an analytical model to assess the maturation trends, and presents several case studies based on automotive electronics warranty data. The paper analyzes the patterns of how warranty data matures as more field data becomes available and how this affects the accuracy of prediction. It also suggests criteria for determining the levels of data maturity.
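A toy simulation makes the maturation effect visible: with a reporting delay, an analysis run shortly after launch sees only a fraction of the failures that will eventually be attributed to early months in service (MIS). The Weibull lifetimes and exponential delays below are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000
ttf = rng.weibull(1.3, n) * 48        # months in service until failure
delay = rng.exponential(1.5, n)       # claim reporting delay (months)
reported_at = ttf + delay

def observed_failure_prob(obs_months, horizon=12):
    """12-MIS failure probability as seen obs_months after launch."""
    seen = reported_at <= obs_months
    return np.mean(seen & (ttf <= horizon))

for obs in (3, 6, 12, 24, 60):
    print(f"analysis at {obs:>2} months -> apparent 12-MIS failure "
          f"probability {observed_failure_prob(obs):.3%}")
```

The apparent probability climbs toward its mature value as the observation window grows, which is exactly why early predictions shift as data matures.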
Article
Full-text available
The performance of a model in machine learning problems highly depends on the dataset and training algorithms. Choosing the right training algorithm can change the tale of a model. While some algorithms have great performance on some datasets, they may fall into trouble on other datasets. Moreover, by adjusting the hyperparameters of an algorithm, which control the training process, the performance can be improved. This study contributes a method to tune hyperparameters of machine learning algorithms using Grey Wolf Optimization (GWO) and Genetic Algorithm (GA) metaheuristics. Also, 11 different algorithms, including Averaged Perceptron, FastTree, FastForest, Light Gradient Boost Machine (LGBM), Limited-memory Broyden-Fletcher-Goldfarb-Shanno Maximum Entropy (LbfgsMxEnt), Linear Support Vector Machine (LinearSVM), and a Deep Neural Network (DNN) with four architectures, are employed on 11 datasets in different biological, biomedical, and nature categories, such as molecular interactions, cancer, clinical diagnosis, behaviour-related predictions, RGB images of human skin, and X-ray images of COVID-19 and cardiomegaly patients. Our results show that in all trials, the performance of the training phases is improved. Also, GWO demonstrates a better performance with a p-value of 2.6E-5. The proposed method just receives a dataset as an input and suggests the best-explored algorithm with related arguments. So, it is appropriate for users who are not experts in analytical statistics and data science algorithms.
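A compressed version of a GWO-based tuner is sketched below: each "wolf" is a candidate hyperparameter pair, fitness is cross-validated error, and positions are pulled toward the three best wolves. The population size, iteration count, and the two tuned Random Forest hyperparameters are illustrative choices, not the study's settings.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)
lb, ub = np.array([10.0, 2.0]), np.array([300.0, 20.0])  # n_estimators, max_depth

def fitness(p):
    clf = RandomForestClassifier(n_estimators=int(p[0]), max_depth=int(p[1]),
                                 random_state=0)
    return -cross_val_score(clf, X, y, cv=3).mean()  # minimize negative accuracy

rng = np.random.default_rng(0)
wolves, iters = rng.uniform(lb, ub, (8, 2)), 15
for t in range(iters):
    scores = np.array([fitness(w) for w in wolves])
    alpha, beta, delta = wolves[np.argsort(scores)[:3]]
    a = 2.0 - 2.0 * t / iters          # exploration coefficient decays
    for i in range(len(wolves)):
        new = np.zeros(2)
        for leader in (alpha, beta, delta):
            r1, r2 = rng.random(2), rng.random(2)
            A, C = 2 * a * r1 - a, 2 * r2
            new += leader - A * np.abs(C * leader - wolves[i])
        wolves[i] = np.clip(new / 3.0, lb, ub)

best = wolves[np.argmin([fitness(w) for w in wolves])]
print("suggested (n_estimators, max_depth):", best.round(0))
```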
Article
Full-text available
This study proposes an ensemble deep learning approach that integrates Bagging Ridge (BR) regression with Bi-directional Long Short-Term Memory (Bi-LSTM) neural networks used as base regressors to become a Bi-LSTM BR approach. Bi-LSTM BR was used to predict the exchange rates of 21 currencies against the USD during the pre-COVID-19 and COVID-19 periods. To demonstrate the effectiveness of our proposed model, we compared the prediction performance with several more traditional machine learning algorithms, such as the regression tree, support vector regression, and random forest regression, and deep learning-based algorithms such as LSTM and Bi-LSTM. Our proposed ensemble deep learning approach outperformed the compared models in forecasting exchange rates in terms of prediction error. However, the performance of the model significantly varied during non-COVID-19 and COVID-19 periods across currencies, indicating the essential role of prediction models in periods of highly volatile foreign currency markets. By providing an improved prediction performance and identifying the most seriously affected currencies, this study is beneficial for foreign exchange traders and other stakeholders in that it offers opportunities for potential trading profitability and for reducing the impact of increased currency risk during the pandemic.
Article
Full-text available
In reality, time series subject to internal/external influences are usually characterized by nonlinearity, uncertainty, and incompleteness. Therefore, how to model the features of time series in nondeterministic environments is still an open problem. In this article, a novel high-order intuitionistic fuzzy cognitive map (HIFCM) is proposed, where the intuitionistic fuzzy set (IFS) is introduced into fuzzy cognitive maps with a temporal high-order structure. By means of IFS, the model's ability to represent uncertainty can be effectively improved. In order to capture the fluctuation features of series data, variational mode decomposition is utilized to decompose time series into sequences of various frequencies, based on which fine feature structures on different scales can be obtained. Each concept of the HIFCM corresponds to one decomposed sequence such that causal reasoning can be achieved among the obtained features in various frequencies of the time series. All parameters are learned by the particle swarm optimization algorithm. Finally, the performance of the method is verified on public datasets, and experimental results show the feasibility and effectiveness of the proposed method.
Article
Full-text available
Product Returns (PR) are an inevitable yet costly process in business, especially in the online marketplace. How to deal with the conundrums has attracted a great deal of attention from both practitioners and researchers. This paper aims to synthesise research developments in the PR domain in order to provide an insightful picture of current research and explore future directions for the research community. To ensure research rigour, we adapt a six-step framework - defining the topic, searching databases, cleaning and clustering data, paper selection, content analysis, and discussion. A hybrid approach is adopted for clustering and identifying the distribution and themes in a large number of publications collected from academic databases. The hybrid approach combines machine learning topic modelling and bibliometric analysis. The machine learning results indicate that the overall research can be clustered into three groups: (1) operations management of PR, covering (re)manufacturing network design, product recovery, reverse distribution, and quality of cores; (2) retailer and (re)manufacturer issues including return policy, channel, inventory, pricing, and information strategies; and (3) customer's psychology, experience, and perception on marketing-operation interface. Furthermore, from the content analysis, five potential future directions are discussed, namely digitalisation in the context of PR; globalisation versus localisation in the context of PR; multi-layer (i.e., retailer, manufacturer, logistics provider, online platform) and multi-channel (i.e., online, offline, dual and omni channel) oriented bespoke return policy; understanding and predicting customer return behaviour via online footprints; and customer return perception across the marketing–operations interface.
Article
Full-text available
Local Interpretable Model-Agnostic Explanations (LIME) is a popular technique used to increase the interpretability and explainability of black box Machine Learning (ML) algorithms. LIME typically creates an explanation for a single prediction by any ML model by learning a simpler interpretable model (e.g., linear classifier) around the prediction through generating simulated data around the instance by random perturbation, and obtaining feature importance through applying some form of feature selection. While LIME and similar local algorithms have gained popularity due to their simplicity, the random perturbation methods result in shifts in data and instability in the generated explanations, where for the same prediction, different explanations can be generated. These are critical issues that can prevent deployment of LIME in sensitive domains. We propose a deterministic version of LIME. Instead of random perturbation, we utilize Agglomerative Hierarchical Clustering (AHC) to group the training data together and K-Nearest Neighbour (KNN) to select the relevant cluster of the new instance that is being explained. After finding the relevant cluster, a simple model (i.e., linear model or decision tree) is trained over the selected cluster to generate the explanations. Experimental results on six public (three binary and three multi-class) and six synthetic datasets show the superiority for Deterministic Local Interpretable Model-Agnostic Explanations (DLIME), where we quantitatively determine the stability and faithfulness of DLIME compared to LIME.
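The DLIME recipe itself is short enough to sketch end to end: cluster the training data once, route the instance to its cluster with 1-NN, and fit a linear surrogate to the black box's outputs inside that cluster. The dataset, cluster count, and black box below are stand-ins.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsClassifier

X, y = load_breast_cancer(return_X_y=True)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# 1) Agglomerative hierarchical clustering of the training data
labels = AgglomerativeClustering(n_clusters=10).fit_predict(X)
# 2) KNN routes a new instance to the relevant cluster
router = KNeighborsClassifier(n_neighbors=1).fit(X, labels)

def dlime_explain(x):
    cluster = router.predict(x.reshape(1, -1))[0]
    Xc = X[labels == cluster]
    # 3) Simple surrogate fit on the cluster against black-box probabilities
    surrogate = LinearRegression().fit(Xc, black_box.predict_proba(Xc)[:, 1])
    return surrogate.coef_   # same instance -> same explanation, every time

print(dlime_explain(X[0])[:5])
```

Because no random perturbation is involved, re-running the explainer on the same instance returns identical feature weights, which is the stability property the paper quantifies.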
Article
Purpose This study reviews scholarly work in sustainable green logistics and remanufacturing (SGLR) and their subdisciplines, in combination with bibliometric, thematic and content analyses that provide a viewpoint on categorization and a future research agenda. This paper provides insight into current research trends in the subjects of interest by examining the most essential and most referenced articles promoting sustainability and climate-neutral logistics. Design/methodology/approach For the literature review, the authors extracted and sifted 2180 research and review papers for the period 2008–2023 from the Scopus database. The authors performed bibliometric and content analyses using multiple software programs such as Gephi, VOSviewer and R programming. Findings The SGLR papers can be grouped into seven clusters: (1) The circular economy facets; (2) Decarbonization of operations to nurture a climate-neutral business; (3) Green sustainable supply chain management; (4) Drivers and barriers of reverse logistics and the circular economy; (5) Business models for sustainable logistics and the circular economy; (6) Transportation problems in sustainable green logistics and (7) Digitalization of logistics and supply chain management. Practical implications In this review, fundamental ideas are established, research gaps are identified and multiple future research subjects are proposed. These propositions are categorized into three main research streams, i.e. (1) Digitalization of SGLR, (2) Enhancing scopes, sectors and industries in the context of SGLR and (3) Developing more efficient and effective climate-neutral and climate change-related solutions and promoting more environmental-related and sustainability research concerning SGLR. In addition, two conceptual models concerning SGLR and climate-neutral strategies are developed and presented for managers and practitioners to consider when adopting green and sustainability principles in supply chains. This review also highlights the need for academics to go beyond frameworks and build new techniques and instruments for monitoring SGLR performance in the real world. Originality/value This study provides an overview of the evolution of SGLR; it also clarifies concepts, environmental concerns and climate change practices, particularly those directed to supply chain management.
Article
Purpose Warranty service plays a critical role in sustainability and service continuity and influences customer satisfaction. Considering the role of social networks in customer feedback channels, one of the essential sources for examining the reflection of a product/service is social media mining. This paper aims to identify frequent product failures through social network mining. Focusing on social media data as a comprehensive and online source to detect warranty issues reveals opportunities for improvement, such as user problems and necessities. The model detects the causes of defects and prioritizes components for improvement in a product-service system based on FMEA results. Design/methodology/approach Ontology-based methods, text mining and sentiment analysis with machine learning methods are performed on social media data to investigate product defects, symptoms and the relationship between warranty plans and customer behaviour. Also, the authors have incorporated multi-source data collection to cover all the possibilities. The authors then propose a decision support system that gives decision-makers using the FMEA process more comprehensive insight through customer feedback. Finally, to validate the accuracy and reliability of the results, the authors used the operational data of a LENOVO laptop from a warranty service centre and classifier performance metrics to compare their results. Findings This study confirms the validity of social media data in detecting customer sentiments and discovering the most defective components and failures of products/services. In other words, the informative threads are derived through a data preparation process and are then based on analyzing the different features of a failure (issues, symptoms, causes, components, solutions). Using social media data helps gain more accurate online information, given the limitation of warranty periods. In other words, using social media data broadens the scope of data gathering and lets in all feedback from different sources to recognize improvement opportunities. Originality/value This work contributes a DSS model using multi-channel social media mining through supervised machine learning for warranty-service improvement based on defect-related discovery, unravelling the potential of social network analysis to predict the most vulnerable components of a product and the main causes of failures, which lead to the inputs for the FMEA process and then a cost optimization. The authors have used social media channels like Twitter, Facebook, Reddit, LENOVO Forums, GitHub, Quora and XDA-Developers to gather data about LENOVO laptop failures as a case study.
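The text-mining core of such a pipeline can be sketched with a supervised classifier that flags defect-related posts before they are mapped to components and failure causes. The five labelled posts are invented examples; a real system would train on thousands of posts from the channels listed above.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled posts: 1 = defect-related complaint, 0 = other
posts = ["battery drains in two hours after the update",
         "hinge cracked near the left corner",
         "great keyboard, very happy with this laptop",
         "screen flickers whenever the charger is plugged in",
         "fast shipping and nice packaging"]
labels = [1, 1, 0, 1, 0]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression())
clf.fit(posts, labels)

# Flagged posts would be routed onward to component/symptom extraction
# and FMEA scoring in the full pipeline.
print(clf.predict(["the fan makes a loud grinding noise"]))
```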
Article
T-spherical fuzzy set (T-SFS) has emerged as one of the effective tools for dealing with uncertainty in the decision-making process. Power aggregation operators help normalize the impact of extreme values and capture the interconnectedness of the arguments. On the other hand, in multi-attribute decision-making (MADM) problems, one of the most important contributing factors is the lack of awareness of bias. Neutral operators emphasize the fair and unbiased character of the decision makers. Thus, for the first time, taking advantage of these operators, hybrid forms of operators, the weighted power partitioned neutral average operator and the weighted power partitioned neutral geometric operator, are developed under the T-SFS environment. Besides these, power weighted neutral average, power ordered weighted neutral average, and power hybrid neutral average operators, and their dual forms, are initiated under the T-SFS environment. A new modified score function for T-SFS is formulated. Based on the developed operators and score function, we constitute an MADM algorithm and utilize it in solving a hypothetical case study problem on hydrogen (H2) refuelling station site selection. Finally, a comparative study of the developed operators with other operators is carried out to explore the applicability and supremacy of our designed MADM algorithm.
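For orientation, the classical power average that such power operators build on (Yager's power average) weights each argument by the support it receives from the others; this is standard background, not the paper's specific partitioned neutral operators:

$$\mathrm{PA}(a_1,\dots,a_n)=\frac{\sum_{i=1}^{n}\bigl(1+T(a_i)\bigr)\,a_i}{\sum_{j=1}^{n}\bigl(1+T(a_j)\bigr)},\qquad T(a_i)=\sum_{j\neq i}\operatorname{Sup}(a_i,a_j),$$

where $\operatorname{Sup}(a_i,a_j)$ is a similarity-based support function (commonly $1-d(a_i,a_j)$), so outlying arguments receive less support and hence less weight in the aggregate.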
Article
This paper aims to develop an artificial neural network-based forecasting model employing a nonlinear focused time-delayed neural network (FTDNN) for energy commodity market forecasts. To validate the proposed model, crude oil and natural gas prices are used for the period 2007 to 2020, including the COVID-19 period. Empirical findings show that the FTDNN model outperforms existing baselines and artificial neural network-based models in forecasting West Texas Intermediate and Brent crude oil prices and National Balancing Point and Henry Hub natural gas prices. As a result, we demonstrate the predictability of energy commodity prices during the volatile crisis period, which is attributed to the flexibility of the model parameters, implying that our study can facilitate a better understanding of the dynamics of commodity prices in the energy market.
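A focused time-delayed network is, in essence, a feedforward net fed by a tapped delay line of recent observations. The stand-in below uses a small scikit-learn MLP on a synthetic random-walk price series rather than the paper's FTDNN and data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic daily price series standing in for crude oil / natural gas
rng = np.random.default_rng(0)
price = np.cumsum(rng.normal(0, 1, 1500)) + 60.0

# Tapped delay line: the last d prices predict the next one
d = 10
X = np.column_stack([price[i:len(price) - d + i] for i in range(d)])
y = price[d:]

split = len(X) - 250                    # hold out the final stretch for testing
net = MLPRegressor(hidden_layer_sizes=(20,), max_iter=3000,
                   random_state=0).fit(X[:split], y[:split])
print("out-of-sample R^2:", net.score(X[split:], y[split:]))
```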
Article
Business organizations are surging to integrate social media with their business and operations management, as it is broadly recognized that social media usage (SMU) could bring them superior performance advantages, especially in the digital era. However, prior studies investigating the effects of SMU on organizational performance provide scattered, mixed, and even conflicting results from diverse disciplines. This study aims to map the comprehensive relationship between SMU and organizational performance by adopting meta-analysis and examining potential factors that moderate such relationships. Based on the sample size of 24,576 organizations accumulated from 65 empirical studies, this study attempts to dissect SMU into manifold practices containing social marketing, social listening & monitoring, social communication, and social networking & collaboration. Meanwhile, these SMUs have varying relationships not only with financial performance but also with innovation, social and operational performance. Further analysis results also confirm that theoretical lens, social media platforms, and industry-level factors (i.e., firm size, industry type, economic market) significantly moderate the strength of SMU-Performance relationships. These findings provide theoretical contributions, managerial implications, and future research directions on the integrated SMU-Performance relationship.
Article
Research indicates that social media platform (SMP) use may adversely influence university students' academic performance (AP)—a phenomenon broadly known as the dark side of social media (DoSM). Our study applies the Situation-Organism-Behaviour-Consequence (S-O-B-C) framework to explicate pathways through which situational triggers (loneliness and self-presentation) lead to students' experience of cognitive (information and communication) overload, addiction (SMA), and consequentially, reduced academic performance (RAP). Methodologically, we deploy a mixed-methods approach comprising three studies—a qualitative study (Study A, n = 48) and two quantitative, cross-sectional studies in India (Study B: n = 479, Study C: n = 618)—through convenience sampling to develop and test a conceptual model through PLS-SEM. Our results provide evidence that loneliness and students' self-presentation significantly influence overload and SMA, which strongly influence RAP. Additionally, a partial moderating effect of demotivation due to social comparison was found in Study C, lending nuanced insight into the effects of personal tendencies on students' SMP use. Our study is limited to an emerging country context, but the results raise practical implications for students across the globe. In addition, our study suggests that future scholars should further investigate the personal and situational factors that can affect students' DoSM experiences like cognitive overload.
Article
Supercritical water gasification (SCWG) is an advanced technology for sewage sludge treatment, which can effectively remove hazardous substances in it and realize the resource utilization of sludge. Given the harsh operating environment and complex operating process, it is particularly critical for an SCWG system to adopt failure mode and effects analysis (FMEA) to ensure its reliability and security. In particular, the risk management process of an SCWG system needs to include two considerations: the evaluation process requires the participation of a large number of team members (TMs) with different professional knowledge and operational skills, and the cross-correspondence between the factors within the system cannot be ignored in the analysis process. Thus, an evidence theory-based FMEA framework is established in this paper to meet practical needs, which can model the bounded confidence and stubbornness of TMs, and reflect the interdependencies among the factors of the system by combining DEMATEL and regret theory (RT). Firstly, assessment values in the form of probabilistic linguistic term sets (PLTSs) are converted into mass functions due to the excellence of evidence theory in information aggregation. Secondly, a bounded confidence-based clustering method is developed to consider TMs’ willingness to interact during the clustering procedure. Additionally, evidence conflicts are managed using a new discounting method that captures TMs’ stubbornness. Besides, the cross-correspondence between failure modes (FMs) and causes of failure (CFs) is analyzed by a DEMATEL method incorporating evidence theory. Further considering the bounded rationality of TMs on this basis, regret theory is used to distinguish and prioritize FMs. Finally, a case study of an SCWG system is successfully solved by applying the proposed framework, and its effectiveness and superiority are confirmed by a series of analyses and discussions.
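The evidence-theory backbone of such a framework is Dempster's rule of combination, which fuses team members' mass functions and renormalizes away conflicting mass. A minimal implementation (without the paper's bounded-confidence clustering or discounting) follows; the two assessments are invented.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions whose focal elements are frozensets."""
    combined, conflict = {}, 0.0
    for (A, a), (B, b) in product(m1.items(), m2.items()):
        inter = A & B
        if inter:
            combined[inter] = combined.get(inter, 0.0) + a * b
        else:
            conflict += a * b            # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    return {A: v / (1.0 - conflict) for A, v in combined.items()}

# Two team members rate a failure mode's risk over {low, med, high}
m1 = {frozenset({"high"}): 0.6, frozenset({"med", "high"}): 0.4}
m2 = {frozenset({"med"}): 0.3, frozenset({"med", "high"}): 0.7}
print(dempster_combine(m1, m2))
```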
Article
Accurate wind power forecasting can effectively reduce the adverse effects of wind power forecasting errors on wind power grid integration and power dispatch. However, current wind power forecasting technology, such as methods based on machine learning, belongs to the class of black-box models and is not interpretable. Variational mode decomposition (VMD) is a decomposition technique based on the time-frequency characteristics of the original time series, which has a mathematical theoretical foundation. Besides, the fuzzy cognitive map (FCM) is a kind of soft computing method with strong knowledge representation and reasoning ability. Therefore, to enhance the forecasting accuracy of wind power, in this paper, a novel time series forecasting method based on improved VMD (IVMD) and high-order FCM (HFCM), namely IVMDHFCM, is proposed. IVMD can effectively extract the features in the raw time series depending on the time-frequency characteristics of the time series. Then, the subseries obtained by IVMD are modeled and forecasted by HFCM, and the Bayesian ridge regression method is adopted to learn the weights of the HFCM. Finally, the differential evolution (DE) algorithm is used to obtain the optimal hyperparameters of IVMDHFCM. The performance of IVMDHFCM is verified by comparing it with that of state-of-the-art methods on ten publicly available datasets. Moreover, the proposed IVMDHFCM is compared with the existing HFCM-based method on ten actual wind power datasets. The results show that IVMDHFCM can effectively improve the accuracy of wind power forecasting and reduce the forecasting error. Besides, IVMDHFCM can also effectively explore the fluctuations of wind power.
Article
In the field of decision making, three-way decision making has proven more fruitful by providing scope to make a delayed decision alongside acceptance and rejection. As a result, the decision risk and loss that could occur due to taking rapid decisions in traditional two-way decision making are effectively reduced. Accordingly, this paper offers a novel three-way multi-attribute decision making model by combining three-way decision making and multi-attribute decision making under an incomplete information system. The incertitude in the information system is illustrated by introducing the interval-valued Fermatean connection number based on the interval-valued Fermatean fuzzy number and set pair analysis theory. Thereafter, the achievement of this study is five-fold. First, a possibility dominance relation is developed under an incomplete information system on the basis of the possibility degree measure of the interval-valued Fermatean connection number. Second, we put forward a novel procedure to set up the fuzzy state set. Third, the conditional probability is estimated in light of the fuzzy state set and the possibility dominance relation. Fourth, the relative utility functions are determined by virtue of regret theory. Lastly, a three-way multi-attribute decision making model is implemented in an incomplete information system and exploited to deal with incomplete multi-attribute decision making problems. Eventually, the propriety, stability and superiority of the proposed model are established via comparative and experimental analysis.
Article
Insurance fraud is ranked second in the list of expensive crimes in the United States, with healthcare fraud being the second highest amongst all insurance fraud. Contrary to the popular belief, insurance fraud is not a victimless crime. The cost of crime is passed onto law-abiding citizens in the form of increased premiums or serious harm or danger to beneficiaries. To combat this kind of societal threat, there is an intense need for healthcare fraud detection systems to evolve. Some common roadblocks in implementing digital advancements (as seen in other domains) to healthcare are the complexity, heterogeneity of the data systems, and varied health program models across the United States. In other words, data are not stored in a centralized manner due to the sensitive domain nature, thus making it difficult to implement a robust real-world fraud-detection system. At the same time, in addition to the complexity of the varied systems involved, there is also the need to meet certain standards before a fraud actor can be prosecuted in a litigation setting. Thus, there is a human aspect to the fraud detection process flow in the real-world. In this article, a novel framework was outlined that converts diverse prescription claims (both fee-for-service and managed care) into a set of input variables/features suitable for implementation of an advanced statistical modeling fraud framework. This article thus aims to contribute to the existing literature by describing a process to transform prescription claims data to secondary features specific to provider fraud detection. The core idea was to focus on three main aspects of fraud (business heuristics on claims, provider-to-prescriber relations, and provider’s client populations) to design the input features. A systematic method was proposed to extract features that have the potential to detect billing or behavioral outliers among pharmacy providers using information extracted from a secondary database (outpatient prescriptions). The application of a commonly used dimensionality reduction method, the Principal Component Analysis (PCA), was evaluated. PCA evaluates and reduces the extensive feature subspace to only those that captures the most variance in the data. To evaluate the features extracted from this framework, the application of the engineered features and the principal components to out-of-the-box logistic regression and Random Forest algorithms were considered to identify potential fraud. The engineered features when tested in different experimental settings with a logistic regression model had the highest area under the Receiver Operating Characteristic (ROC) curve of 0.76 and a weighted F score of 0.85 while a random forest model had the highest area under curve of 0.74 and a weighted F score of 0.88.
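The variance-retention step is straightforward to reproduce with scikit-learn: standardize the engineered features, keep the principal components explaining a chosen share of variance, and feed them to the classifier. The data below are synthetic stand-ins for the provider-level features the article engineers.

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for engineered pharmacy-provider features
X, y = make_classification(n_samples=2000, n_features=60, n_informative=10,
                           weights=[0.95], random_state=0)

# Keep the components explaining 90% of the variance, then classify
pipe = make_pipeline(StandardScaler(), PCA(n_components=0.90),
                     LogisticRegression(max_iter=1000))
print("ROC-AUC:", cross_val_score(pipe, X, y, scoring="roc_auc", cv=5).mean())
```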
Article
Accurate forecasting is indispensable for improving solar renewables integration and minimizing the effects of solar energy's intermittency. Existing research on time series solar forecasting confronts challenges such as determining accurate hyperparameters and flexibility in considering meteorological parameters. This study proposes a novel deep learning model, namely an optimized stacked Bi-directional Long Short-Term Memory (BiLSTM)/Long Short-Term Memory (LSTM) model, to forecast univariate and multivariate hourly time series data by integrating stacked LSTM layers, a dropout architecture, and an LSTM-based model. The performance of the model is enhanced by Bayesian optimization with the tuning of six relevant hyperparameters. To evaluate the model, standard Global Horizontal Irradiance (GHI) and observed Plane of Array (POA) irradiance with meteorological real-world solar data from the Sweihan Photovoltaic Independent Power Project in Abu Dhabi, UAE, and NREL solar data for year-round data are forecasted. Furthermore, the performance of the proposed algorithm is also evaluated under weather uncertainty for different climate types. The forecasting accuracy is evaluated based on various performance metrics, and it is observed that the proposed model offered the best R² values: 0.99 for univariate as well as multivariate models using GHI data and 0.97 using POA data. The findings suggest that the proposed model is a reliable technique for solar prediction due to its comparable performance with both GHI and POA in terms of accuracy.
Article
As an important part of the Design for X tools, Design for Quality (DFQ) is used to reduce cost and improve quality of products while maintaining reliability in the preliminary design phase. As a powerful tool to reduce and eliminate possible failures, failure mode and effects analysis (FMEA) is broadly applied in the detail design phase. However, scholars have criticized the traditional FMEA model for several shortcomings. In the past decades, nearly all FMEA methods have been presented to heighten the rationality of ranking results by considering the risk factors (severity (S), occurrence (O) and detection (D)) simultaneously. The simultaneous analysis of risk factors (RFs) may result in ignorance of the impact on failure modes from extreme RFs. In fact, different combinations of RFs may yield more comprehensive risk information about failure modes. Thus, a novel FMEA classification method is proposed that combines risk factors in pairs (i.e., S&O, S&D and O&D) to conduct risk assessment, which avoids the interaction effect caused by simultaneous analysis of risk factors. Specifically, fuzzy adaptive resonance theory is used to classify failure modes based on the assessment results of S&O, S&D and O&D obtained by grey relational analysis. Finally, a real case study, i.e., the final assembly process of spark plugs, from an automotive manufacturer in China is adopted to clarify the advantages of the proposed method.
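The grey relational analysis step that scores each risk-factor pair can be condensed to a few lines. The assessment matrix below is invented, and larger-is-better normalization is assumed for both criteria.

```python
import numpy as np

def grey_relational_coeffs(X, rho=0.5):
    """Grey relational coefficients of each row against the ideal row."""
    X = np.asarray(X, dtype=float)
    Xn = (X - X.min(0)) / (X.max(0) - X.min(0))   # normalize to [0, 1]
    delta = np.abs(Xn - 1.0)                       # distance to the ideal
    return (delta.min() + rho * delta.max()) / (delta + rho * delta.max())

# Hypothetical S&O assessments for four failure modes on two criteria
X = [[7, 4], [3, 8], [6, 6], [2, 2]]
gamma = grey_relational_coeffs(X)
print("grey relational grades:", gamma.mean(axis=1))  # input to classification
```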
Article
Remanufacturing is a key element of circular economy solutions as it aims at increasing the service lifetime of entire products or specific components, which may reduce the demand for new, resource-consuming devices. To assess the potential of disassembling and subsequent remanufacturing of EV batteries, we present a discrete event simulation approach. This approach depicts the life cycle of batteries and EVs separately, which allows capturing the demand for spare batteries and the potential contribution of remanufacturing batteries to cover this demand. By running various scenarios taking the German EV market as an example, the importance of providing cost-effective spare batteries through remanufacturing is underlined. As a baseline, a linear case is examined, where remanufacturing is not an option. Additionally, we built scenarios where remanufactured batteries are used as spare parts for older vehicles. Another major variation is introduced by different average battery lifetimes (10, 15, and 20 years), while the average vehicle lifetime is 15 years in all cases. The results show that remanufactured spare batteries could decrease the demand for new batteries compared to the linear base case. When battery lifetimes are lower than those of vehicles, new battery demand could be reduced by 6–7%, given our assumptions. In future scenarios where expected battery lifetimes might exceed vehicle lifetimes, up to 2% savings in new batteries could still be possible. Therefore, remanufacturing could be a viable option for improving the sustainability of electric mobility, and (re)manufacturers should consider intensifying their engagement in designing remanufacturable batteries, in research on remanufacturing technologies, and in investing in remanufacturing infrastructure.
Article
The problem of robust Bayesian estimation of chain ladder (development) factors and Bayesian prediction of claim reserves is considered. Two different classes of priors (the class where the parameters of a prior are not specified exactly, and the class where the prior cumulative distribution function is distorted) are presented. The oscillation (as a measure of robustness) of Bayes estimators and predictors of reserves, when priors are in the considered class, is calculated, and the posterior regret Γ-minimax (PRGM) rules are obtained as optimal procedures. The numerical example compares different methods of estimating development factors and calculating claim reserves. The chain ladder estimators and predictors, exact Bayes estimators and predictors, PRGM estimators and predictors for the aforementioned classes of priors, and empirical credibility estimators and predictors are considered. It is shown that the variability of the expected value parameter of a prior has a greater impact on the oscillation of the Bayes estimators of the development factors and Bayes predictors of reserves than the variability of the shape parameter. The Bayes predictors are more robust with respect to the distortion of a prior c.d.f. than to the fluctuation of the expected value parameter of a Gamma prior. A distortion of the shape of the prior c.d.f. also has a smaller impact on the value of PRGM estimators and predictors than the variability of the parameters of the Gamma distribution. The difference between the Bayesian and chain ladder estimators (and predictors) depends mainly on the difference between the expected value parameter of the prior and the chain ladder estimator of the development factor.
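For readers unfamiliar with the underlying chain ladder method that the Bayesian treatment robustifies, the classical computation fits in a few lines of pandas: volume-weighted development factors estimated column by column, then chained to project each accident year to ultimate. The triangle is invented.

```python
import numpy as np
import pandas as pd

# Hypothetical cumulative run-off triangle (accident year x development year)
tri = pd.DataFrame([[100, 160, 190, 200],
                    [110, 175, 210, np.nan],
                    [120, 190, np.nan, np.nan],
                    [130, np.nan, np.nan, np.nan]], dtype=float)

# Volume-weighted development factors f_j = sum C[:, j+1] / sum C[:, j]
factors = []
for j in range(tri.shape[1] - 1):
    rows = tri[j + 1].notna()
    factors.append(tri.loc[rows, j + 1].sum() / tri.loc[rows, j].sum())

# Chain the factors to fill the lower triangle and read off the reserves
proj = tri.copy()
for j in range(proj.shape[1] - 1):
    blank = proj[j + 1].isna()
    proj.loc[blank, j + 1] = proj.loc[blank, j] * factors[j]

latest = tri.ffill(axis=1).iloc[:, -1]             # current diagonal
print("development factors:", np.round(factors, 4))
print("reserves by accident year:\n", (proj.iloc[:, -1] - latest).round(1))
```

The Bayesian and PRGM estimators discussed in the paper replace these point estimates with posterior quantities that are then stress-tested against perturbations of the prior.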
Article
A two-dimensional warranty coverage is available for those items for which age and usage jointly fall below the respective warranty limits, thus making the warranty field data incomplete. In this paper, the joint reliability function of age and usage from the observed incomplete two-dimensional warranty data has been estimated in terms of reliability function and density function of age and usage rate respectively by using non-parametric method under the assumption of independence between age and usage rate. Bivariate mean residual life for age and usage has also been determined using the bivariate reliability function. From the reliability engineering perspective, mean residual life provides valuable information for various decision making problems, such as optimizing burn-in test, extending warranty policy, and making maintenance decision. In this paper, the mean residual life has been predicted for formulating extended warranty policy. Pro-rata warranty policy has also been considered here using linear and non-linear pro-rata functions. The utilities of this research work have been demonstrated with the help of a real-life warranty data set.
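The marginal building block of such an estimate, a nonparametric reliability function for age with censoring at the warranty limit, can be sketched with the lifelines package; the lifetimes below are simulated, and the usage-rate margin (estimated analogously under the independence assumption) is omitted.

```python
import numpy as np
from lifelines import KaplanMeierFitter

# Simulated ages at claim; anything past the 1-year age limit is censored
rng = np.random.default_rng(0)
age = rng.weibull(1.5, 500) * 400            # days to failure
observed = age < 365                         # claim seen inside warranty
age = np.minimum(age, 365)

kmf = KaplanMeierFitter().fit(age, event_observed=observed)
print("reliability at 300 days:", float(kmf.predict(300.0)))
# Under independence, R(t, u) is the product of the age and usage margins.
```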
Article
Forecasting warranty claims for complex products is a reliability challenge for most manufacturers. Several factors increase the complexity of warranty claims forecasting, including the limited number of claims reported at the early stage of launch, reporting delays, dynamic changes in the fleet size, and design/manufacturing adjustments to the production line. The aggregated effect of those complexities is often referred to as the “warranty data maturation” effect. Unfortunately, most of the existing models for warranty claims forecasting fail to explicitly consider warranty data maturation. This work addresses warranty data maturation by proposing the Conditional Gaussian Mixture Model (CGMM). CGMM uses historical warranty data from similar products to develop a robust prior joint Gaussian mixture distribution of warranty trends at both the current and future maturation levels. CGMM then utilizes Bayesian theory to estimate the conditional posterior distribution of the warranty claims at the future maturation level conditional on the warranty data available at the current maturation level. The CGMM identifies non-parametric temporal warranty trends and automatically clusters products into latent groups to establish (learn) an effective prior joint distribution. The CGMM is validated on an extensive automotive warranty claims dataset comprising four model years and >15,000 different components from >10 million vehicles.
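The conditioning step at the heart of a conditional Gaussian mixture is standard Gaussian algebra applied per component. The sketch below fits a two-dimensional mixture to synthetic (current-maturity, future-maturity) claim-rate pairs and predicts the mature rate given an early observation; the data and component count are illustrative, not the paper's.

```python
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

# Synthetic joint samples: x = claim rate at an early maturity level,
# y = claim rate at the mature level, learned from past programs.
rng = np.random.default_rng(0)
x = rng.gamma(2.0, 1.0, 1000)
y = 2.5 * x + rng.normal(0, 0.5, 1000)
gmm = GaussianMixture(n_components=3, random_state=0).fit(np.column_stack([x, y]))

def conditional_mean(gmm, x_obs):
    """E[y | x = x_obs] for a 2-D GMM via per-component conditioning."""
    w, mu_c = [], []
    for k in range(gmm.n_components):
        mu, S = gmm.means_[k], gmm.covariances_[k]
        # component responsibility under the x-marginal
        w.append(gmm.weights_[k] * norm(mu[0], np.sqrt(S[0, 0])).pdf(x_obs))
        # Gaussian conditioning: mu_y + S_yx / S_xx * (x_obs - mu_x)
        mu_c.append(mu[1] + S[1, 0] / S[0, 0] * (x_obs - mu[0]))
    w = np.array(w) / np.sum(w)
    return float(w @ np.array(mu_c))

print("predicted mature claim rate:", conditional_mean(gmm, x_obs=3.0))
```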
Article
With the rapid development of technologies and the increasing availability of degradation data, the design of appropriate warranty policies for products subject to performance deterioration has received more attention for boosting profits and improving the competitiveness of enterprises. In most existing works, a product is considered to have failed and to trigger a warranty service when the degradation level exceeds a constant failure threshold. However, the stochastic nature of the failure threshold exists in many applications. In this study, firstly, two truncated distributions are employed to model the random failure threshold. We then propose three types of warranty policies: free replacement, full refund and partial refund, which take into account the random failure threshold based on the degradation model. Under the first policy, the manufacturer’s total expected profit is maximized to determine the optimal price and warranty period; besides these, the prescribed maintenance times can also be obtained under the other two policies. We further compare these warranty policies under a random failure threshold characterized by different distributions. Finally, a numerical example is presented along with sensitivity analysis to illustrate and compare the proposed warranty policies, showing that the parameters of the degradation model and the random failure threshold can lead to different expected profits.
Article
In the era of the internet of things and Industry 4.0, smart products and manufacturing systems emit signals tracking their operating condition in real time. Survival analysis shows its strength in modeling such signals to determine the condition of in-service equipment and products and to yield critical operational decisions, i.e., maintenance and repair. One appealing aspect of survival analysis is the possibility of including subjects in the model that have not yet failed or whose exact failure time is unknown. NN-based survival models, i.e., deep survival models, show superior performance in modeling the non-linear relationship between the reliability function and covariates. We propose a novel deep survival model, seq2surv, incorporating the seq2seq structure and attention mechanism to enhance the ability to analyze a sequence of signals in survival analysis. Similar to the seq2seq model, which shows superior performance in machine translation, we designed the seq2surv model to translate from a sequence of signals to a sequence of survival probabilities and to update the reliability predictions along with real-time monitoring. Our results show that the seq2surv model outperforms existing deep survival approaches in terms of higher prediction accuracy and lower errors in the survival function estimation on both simulated and real-world datasets.
Article
Warranty plays an important role in retaining consumers' loyalty and increasing the competitive advantage and profit of companies. Moreover, warranty claim prediction based on social media is a novel area, enabling managers to foresee problems in production and take proper measures to mitigate them. The higher the precision of warranty claim predictions, the lower the risk the company faces. This paper examines the impact of utilizing social media data on daily warranty claim prediction and shows that social media data can enhance the accuracy of daily warranty claim predictions. We cooperated with Sam Service Warranty Company, which provides warranty and after-sales services for Samsung products in Iran. Warranty operational data along with Twitter data analyses were used to improve the precision of warranty claim prediction. Operational data from Sam Service Company include the total number of warranties, the number of warranties for new customers, and the number of warranties for returning customers. A novel framework is presented that uses the Random Forest algorithm to predict the number of daily warranty claims. The results show that our framework improves the accuracy of out-of-sample warranty claim predictions, with improvements ranging from 14.98% to 21.90% across various timeframes. Improving prediction accuracy enables managers to effectively minimize warranty-related costs, inventory levels, waste, and customer dissatisfaction while maximizing return on investment, profit, efficiency, and customer satisfaction.
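A bare-bones version of such a pipeline is easy to sketch with scikit-learn: join daily operational counts with aggregated Twitter features, hold out the most recent days, and fit a Random Forest regressor. The file name, column names, and feature set below are illustrative placeholders, not the paper's actual data schema.

    import pandas as pd
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.metrics import mean_absolute_percentage_error

    # one row per day; lag features and Twitter aggregates assumed precomputed
    df = pd.read_csv("daily_claims.csv", parse_dates=["date"]).sort_values("date")
    features = ["claims_lag1", "claims_lag7", "new_customer_claims",
                "returning_customer_claims", "tweet_volume", "tweet_sentiment"]
    train, test = df.iloc[:-60], df.iloc[-60:]   # hold out the last 60 days

    model = RandomForestRegressor(n_estimators=500, random_state=42)
    model.fit(train[features], train["claims"])
    pred = model.predict(test[features])
    print("out-of-sample MAPE:", mean_absolute_percentage_error(test["claims"], pred))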
Article
Research on the dark side of social media usage has explored the fear of missing out (FoMO), social media fatigue (fatigue), social media stalking (stalking), and online social comparison (social comparison) independently. Consequently, the complex interrelationships among these phenomena have remained understudied, creating a chasm that hinders a clearer understanding of their drivers and the potential counterstrategies to mitigate the collateral damage they may cause. We attempt to bridge this gap by drawing upon the theory of social comparison and the theory of compensatory internet use to formulate a framework that hypothesizes the mechanism of interaction among these negative fallouts. The model, tested through analysis of data collected from 321 social media users from the United Kingdom (UK), takes into consideration the moderating effect of the frequency of posting social media status updates and social media envy, along with the mediating effect of social comparison and stalking. The results indicate that FoMO and social comparison are directly associated with fatigue. Furthermore, social comparison partially mediates the association of FoMO and fatigue, while social media envy negatively moderates the association of FoMO with social comparison. The results provide new insights into the dynamic interplay of these dark-side manifestations of social media.
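For readers unfamiliar with this style of analysis, the indirect (mediated) effect is typically assessed with a percentile bootstrap over two regressions. The sketch below is a bare-bones stand-in using statsmodels, with FoMO as X, social comparison as M, and fatigue as Y; it omits the moderators and covariates the study includes and is not the PROCESS macro itself.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)

    def indirect_effect_ci(X, M, Y, n_boot=2000):
        # percentile-bootstrap confidence interval for the indirect effect a*b
        idx, effects = np.arange(len(X)), []
        for _ in range(n_boot):
            s = rng.choice(idx, size=len(idx), replace=True)
            a = sm.OLS(M[s], sm.add_constant(X[s])).fit().params[1]   # X -> M
            xm = sm.add_constant(np.column_stack([X[s], M[s]]))
            b = sm.OLS(Y[s], xm).fit().params[2]                      # M -> Y given X
            effects.append(a * b)
        return np.percentile(effects, [2.5, 97.5])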
Article
This paper presents an improved Ant Colony Optimization (ACOII) algorithm to solve the dynamic facility layout problem for construction sites. The algorithm uses a constructive approach to build layout solutions over time and a discrete dynamic search with heuristic information based on both relocation and flow costs to guide facilities' placement in different time periods. The performance of ACOII is investigated using randomly generated data sets in which the number of facilities and the number of time periods in the planning horizon vary to mimic what happens on a construction site over time. The experimental results show that ACOII is effective in solving the problem. A benchmarking study using instances from the literature showed promising results, with improved solutions for all instances with a very large number of facilities and periods.
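To convey the flavor of the approach, here is a heavily simplified ACO sketch for a dynamic layout: pheromone is kept per period over facility-location pairs, each ant builds a full multi-period layout, and the cost sums flow costs within periods plus relocation costs between them. It omits ACOII's heuristic information and other refinements, and all parameters are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)

    def dynamic_layout_aco(flow, dist, reloc_cost, n_periods,
                           n_ants=20, n_iter=100, rho=0.1):
        n = flow.shape[0]                    # assume #facilities == #locations
        tau = np.ones((n_periods, n, n))     # pheromone[t, facility, location]
        best_cost, best_sol = np.inf, None

        def layout_cost(sol):
            c = 0.0
            for t in range(n_periods):
                loc = sol[t]
                c += np.sum(flow * dist[np.ix_(loc, loc)])           # flow cost
                if t > 0:
                    c += reloc_cost * np.sum(sol[t] != sol[t - 1])   # relocations
            return c

        for _ in range(n_iter):
            for _ant in range(n_ants):
                sol = np.empty((n_periods, n), dtype=int)
                for t in range(n_periods):
                    free = list(range(n))    # locations still unassigned
                    for f in range(n):
                        p = tau[t, f, free]
                        p = p / p.sum()
                        sol[t, f] = free.pop(rng.choice(len(free), p=p))
                c = layout_cost(sol)
                if c < best_cost:
                    best_cost, best_sol = c, sol.copy()
            tau *= 1.0 - rho                 # pheromone evaporation
            for t in range(n_periods):       # reinforce the best-so-far layout
                tau[t, np.arange(n), best_sol[t]] += 1.0 / best_cost
        return best_sol, best_cost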
Article
Leveraging the increasing availability of “big data” to inform forecasts of labor market activity is an active, yet challenging, area of research. Often, the primary difficulty is finding credible ways to consistently identify the key elasticities necessary for prediction. To illustrate, we utilize a state-level event study focused on the costliest hurricanes to hit the U.S. mainland since 2004 to estimate the elasticity of initial unemployment insurance (UI) claims with respect to search intensity, as measured by Google Trends. We show that our hurricane-driven Google Trends elasticity leads to superior real-time forecasts of initial UI claims relative to other commonly used models. Our approach is also amenable to forecasting at both the state and national levels, and is shown to be well-calibrated in its assessment of the level of uncertainty of its out-of-sample predictions during the COVID-19 pandemic.
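The mechanics of such an elasticity-based forecast are simple to sketch. Below, the elasticity is estimated by a log-log regression restricted to hurricane-affected observations and then applied to scale the latest claims reading by the growth in the search index. The variable names are illustrative, and the paper's actual event-study design is considerably richer.

    import numpy as np
    import statsmodels.api as sm

    def estimate_elasticity(claims, trends, event_mask):
        # log-log OLS on hurricane-affected observations only:
        # the slope is d log(claims) / d log(trends)
        y = np.log(claims[event_mask])
        X = sm.add_constant(np.log(trends[event_mask]))
        return sm.OLS(y, X).fit().params[1]

    def forecast_next(last_claims, trends_prev, trends_now, elasticity):
        # scale the latest claims level by search-intensity growth to the elasticity
        return last_claims * (trends_now / trends_prev) ** elasticity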
Article
Much textual engineering knowledge is captured in tables, particularly in spreadsheets and in documents such as equipment manuals. To leverage the benefits of artificial intelligence, industry must find ways to extract the data and relationships captured in these tables. This paper demonstrates the application of an ontological approach to make the classes and relations held in spreadsheet tables explicit. Ontologies offer a pathway because they provide formal, machine-interpretable definitions of shared concepts and the relations between them. We illustrate this with two case studies on a failure modes and effects analysis (FMEA) table. Our examples demonstrate how the relationship between rows and columns in a table can be represented in logic for FMEA entries, thereby allowing the same ontology to ingest instance data from the IEC 60812:2006 FMEA Standard and a real industrial FMEA. We give the relationships in the FMEA and asset-hierarchy spreadsheets an explicit representation, so that OWL-DL reasoning can infer final failure effects at the system level from component failures. The prototype ontologies described in this paper are modular and aligned to a top-level ontology, and hence can be applied to other use cases. Our contribution is to show that engineers can make data captured in commonly used spreadsheet tables machine readable using an FMEA ontology.
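A minimal sketch of the ingestion step, using the owlready2 library: define FMEA classes and object properties, create individuals from spreadsheet rows, and hand the populated ontology to an OWL-DL reasoner. The column names and the tiny class model are assumptions; the paper's modular ontologies aligned to a top-level ontology are far richer.

    import pandas as pd
    from owlready2 import get_ontology, Thing, ObjectProperty, sync_reasoner

    onto = get_ontology("http://example.org/fmea.owl")

    with onto:
        class Component(Thing): pass
        class FailureMode(Thing): pass
        class Effect(Thing): pass
        class hasFailureMode(ObjectProperty):
            domain = [Component]
            range = [FailureMode]
        class hasEffect(ObjectProperty):
            domain = [FailureMode]
            range = [Effect]

    df = pd.read_excel("fmea.xlsx")   # assumed columns: component, failure_mode, effect
    for row in df.itertuples():
        # replace spaces so the names are valid IRI fragments
        comp = Component(str(row.component).replace(" ", "_"))
        mode = FailureMode(str(row.failure_mode).replace(" ", "_"))
        mode.hasEffect.append(Effect(str(row.effect).replace(" ", "_")))
        comp.hasFailureMode.append(mode)

    sync_reasoner()                   # run the default OWL-DL reasoner (HermiT)
    onto.save("fmea_populated.owl")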
Article
Product returns are a critical issue due to the uncertainty associated with the price, demand, and quality of the product. Thus, businesses must improve their information transparency to manage the product return behaviour of end-users. Various studies have contributed to developing solutions to manage product return issues. This paper provides a comprehensive review of the literature on the product returns management (PRM) domain, mapping the scientific landscape of existing studies to explore the state of the current body of knowledge. A systematic literature review, quantitative bibliometric analysis, and in-depth content analysis are conducted to accomplish this purpose. A total of 518 published articles from January 1986 to November 2020 are selected, reviewed, classified, and analysed in this study. We classified the papers into six identified PRM categories, namely product recovery, forecasting product returns, consumer behaviour, return policy, uncertainty, and technology. Finally, we synthesized the state-of-the-art research and, based on a research gap analysis, outlined a future research agenda concerning various themes, methodologies, and aspects such as lean, agility, and disruption in PRM.
Article
Consumers who decide to adopt complex, radically innovative products can simultaneously hold very different belief structures that capture, for example, concern about future losses and beliefs about future gains, as well as the desire to coalesce with referents. This research develops a model of how consumers decide on their next electrified vehicle. Based on the Theory of Reasoned Action (TRA) and risk-benefit models, the electric vehicle (EV) purchase decision is modeled as primarily based on beliefs about the perceived benefits and perceived risks of technology adoption, together with social influences. Further, beliefs about a manufacturer's expertise and trustworthiness were found to reduce consumer risk concerns and strengthen consumer conviction that the benefits of the technology are attainable. Structural equation modeling of survey data confirms the proposed consumer decision model and our contention that technology adoption can be better understood by specifically exploring discordant consumer beliefs about post-purchase consequences. The results of our research provide a new understanding of salient consumer risk and benefit beliefs when consumers face new technologies that represent a paradigm shift. The results also provide insight for technology firms that need to continually develop new strategic marketing actions designed to increase demand for their complex technological products.
Article
In this paper, we propose deep learning frameworks based on the randomized neural network. Inspired by the principles of the Random Vector Functional Link (RVFL) network, we present a deep RVFL network (dRVFL) with stacked layers. The parameters of the hidden layers of the dRVFL are randomly generated within a suitable range and kept fixed, while the output weights are computed using the closed-form solution as in a standard RVFL network. We also propose an ensemble deep network (edRVFL) that can be regarded as a marriage of ensemble learning and deep learning. Unlike traditional ensembling approaches that require training several models independently from scratch, edRVFL is obtained by training a single dRVFL network once. Both the dRVFL and edRVFL frameworks are generic and can be used with any RVFL variant. To illustrate this, we integrate the deep RVFL networks with a recently proposed sparse pre-trained RVFL (SP-RVFL). Experiments on 46 tabular UCI classification datasets and 12 sparse datasets demonstrate that the proposed deep RVFL networks outperform state-of-the-art deep feed-forward neural networks (FNNs).
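The essentials are compact enough to sketch in NumPy: hidden weights are drawn once and frozen, each layer keeps a direct link to the input, each depth gets its own closed-form ridge readout, and edRVFL averages the per-depth predictions. The layer sizes and regularization constant below are arbitrary illustrative choices, not the paper's settings.

    import numpy as np

    rng = np.random.default_rng(0)

    def make_layer(d_in, n_hidden=128, scale=1.0):
        # random, fixed hidden weights (never trained), as in RVFL
        W = rng.uniform(-scale, scale, size=(d_in, n_hidden))
        b = rng.uniform(-scale, scale, size=n_hidden)
        return lambda X: np.maximum(X @ W + b, 0.0)   # ReLU random features

    def ridge_readout(F, Y, lam=1e-2):
        # closed-form output weights, the only "trained" part of an RVFL
        return np.linalg.solve(F.T @ F + lam * np.eye(F.shape[1]), F.T @ Y)

    def edrvfl(Xtr, Ytr, Xte, n_layers=4):
        # stack random layers; fit one readout per depth on [input, hidden]
        # features (direct links); average the per-depth predictions
        Htr, Hte, preds = Xtr, Xte, []
        for _ in range(n_layers):
            layer = make_layer(Htr.shape[1])
            Htr, Hte = layer(Htr), layer(Hte)
            Ftr = np.hstack([Xtr, Htr])
            Fte = np.hstack([Xte, Hte])
            beta = ridge_readout(Ftr, Ytr)
            preds.append(Fte @ beta)
        return np.mean(preds, axis=0)                 # ensemble over depths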
Article
Recently, the explosive increase in social media data has enabled manufacturers to collect product defect information promptly. The extant literature gathers defect information, such as defective components or defect symptoms, without distinguishing defect-related (DR) texts from defect-unrelated (DUR) texts, and thus leaves defects discussed in only a few texts buried among enormous amounts of DUR text. Moreover, existing studies do not consider defect severity, which is valuable and important for manufacturers making remedial decisions. To bridge these research gaps, we propose a novel approach that integrates a probabilistic graphical model, the Product Defect Identification and Analysis Model (PDIAM), with Failure Mode and Effect Analysis (FMEA) to derive product defect information from social media data. Compared with extant studies, PDIAM first identifies DR texts and then extracts defect information from them, providing more defect information than previous research. We further analyze defect severity by combining FMEA with PDIAM, which alleviates the inherent subjectivity introduced by expert evaluation in traditional FMEA. A case study in the automobile industry demonstrates the superior performance of our approach and its great potential for defect management.
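The two-stage idea translates naturally into a short pipeline sketch: first filter defect-related posts, then score how often each component/symptom pair recurs as a data-driven input to FMEA. The classifier below is a plain TF-IDF plus logistic regression stand-in (the paper uses the PDIAM probabilistic graphical model), and the file and column names are assumptions.

    import pandas as pd
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Stage 1: filter defect-related (DR) posts with a simple text classifier
    posts = pd.read_csv("labeled_posts.csv")    # columns: text, is_defect_related
    clf = make_pipeline(TfidfVectorizer(min_df=5), LogisticRegression(max_iter=1000))
    clf.fit(posts["text"], posts["is_defect_related"])

    new = pd.read_csv("new_posts.csv")          # columns: text, component, symptom
    dr = new[clf.predict(new["text"]) == 1]     # keep DR texts only

    # Stage 2: data-driven occurrence score per (component, symptom) pair,
    # usable as an objective input to FMEA severity assessment
    occurrence = (dr.groupby(["component", "symptom"]).size()
                    .rename("occurrence").sort_values(ascending=False))
    print(occurrence.head())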