Annals of Operations Research

Published by Springer Nature
Online ISSN: 1572-9338
Recent publications
  • Flavia Bonomo
  • Alejandro Cataldo
  • Antonio Mauttone
  • Erik Papa Quiroz
Statistics in sports plays a key role in predicting winning strategies and providing objective performance indicators. Despite the growing interest in recent years in using statistical methodologies in this field, less emphasis has been given to the multivariate approach. This work uses Bayesian networks to model the joint distribution of a set of indicators of basketball players’ performances, in order to discover their probabilistic relationships as well as the main determinants of a player’s winning percentage. From a methodological point of view, the interest lies in defining a suitable model for non-Gaussian data, relaxing the strong normality assumption in favour of a Gaussian copula. Through the estimated Bayesian network, we discover many interesting dependence relationships, providing scientific validation of some known results previously based mainly on experience. Finally, some scenarios of interest are simulated to understand the main determinants that contribute to increasing the number of games won by a player.
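The copula step described here can be sketched as mapping each indicator to standard-normal scores through its empirical ranks; a minimal illustration under that reading (not the authors' code, and with a synthetic skewed indicator):

```python
import numpy as np
from scipy.stats import norm

def normal_scores(x):
    """Map a sample to standard-normal scores via its empirical ranks --
    the transform underlying a Gaussian copula model for non-Gaussian data."""
    ranks = np.argsort(np.argsort(x)) + 1    # ranks 1..n (distinct values assumed)
    u = ranks / (len(x) + 1)                 # pseudo-observations in (0, 1)
    return norm.ppf(u)                       # Gaussian scores

rng = np.random.default_rng(0)
x = rng.exponential(size=500)                # a skewed performance indicator
z = normal_scores(x)                         # approximately N(0, 1), rank-preserving
```

Dependence among indicators can then be modelled on the transformed scores with standard Gaussian machinery while each margin keeps its original, non-Gaussian shape.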
COVID-19-led restrictions make it imperative to study how the pandemic affects the systemic risk profile of the global commodities network. We therefore investigate the systemic risk profile of the global commodities network, represented by energy and non-energy commodity markets (precious metals, industrial metals, and agriculture), in the pre- and post-crisis periods. We use the neural network quantile regression approach of Keilbar and Wang (Empir Econ 62:1–26, 2021) with daily data for the period 01 January 2018–27 October 2021. The findings suggest that at the onset of COVID-19, the two firm-specific risk measures, namely value at risk and conditional value at risk, exploded, pointing to increasing systemic risk in the COVID-19 period. The risk spillover network analysis reveals moderate to high lower-tail connectedness of commodities within each sector and low tail connectedness of energy commodities with the other sectors for both the pre- and post-COVID-19 periods. The Systemic Network Risk Index reveals an abrupt increase in systemic risk at the start of the pandemic, followed by gradual stabilization. We rank commodities by a systemic fragility index and observe that in the post-COVID-19 period, gold, silver, copper, and zinc are the most fragile commodities while wheat and sugar are the least fragile. We use a Systemic Hazard Index to rank commodities by their risk contribution to the global commodities network. During the post-COVID-19 period, the energy commodities (except natural gas) contribute most to the systemic risk. Our study has important implications for policymakers and the investment industry.
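The two firm-specific measures named here have simple empirical counterparts (a sketch only; the paper itself estimates them by neural network quantile regression, and the data below are synthetic):

```python
import numpy as np

def empirical_var(x, alpha=0.05):
    """Value at Risk: the alpha lower quantile of the return series."""
    return np.quantile(x, alpha)

def empirical_covar(market, x, alpha=0.05):
    """CoVaR: the market's alpha quantile conditional on x being at
    or below its own VaR (i.e. in distress)."""
    distress = x <= empirical_var(x, alpha)
    return np.quantile(market[distress], alpha)

rng = np.random.default_rng(1)
x = rng.normal(size=10_000)                  # one commodity's returns
market = 0.5 * x + rng.normal(size=10_000)   # network co-moves with it
```

Because the network co-moves with the commodity, its conditional (distress) quantile sits well below its unconditional VaR, which is exactly the spillover the risk network analysis quantifies.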
The goal of this paper is to examine the effect of high liquidity creation on systemic risk. We use a hand-collected dataset on 94 banks from 16 Western European countries over the 2004–2020 period, including the crisis (2008–2009) period and sound periods (2004–2007 and 2010–2020). We assess banks’ systemic risk using two different proxies: banks’ systemic risk exposure, measured by the marginal expected shortfall (MES), and banks’ systemic risk contribution, measured by the delta conditional value at risk (ΔCoVaR). Based on panel regressions, our results mainly show that, during calm periods, high liquidity creation is associated with high systemic risk exposure. Moreover, we show that the effect of liquidity creation on banks’ systemic risk exposure is stronger during turmoil periods. Interestingly, our results show that banks’ liquidity creation increases the systemic contribution only during the financial crisis of 2008–2009. Our findings contribute to the literature and the regulatory debate by suggesting that regulators should pay more attention to high liquidity-creating banks as they may cause aggregate financial fragility.
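Of the two proxies, the MES has a particularly direct empirical form: the bank's average return on the market's worst days. A toy sketch under that definition (synthetic returns, not the hand-collected bank data):

```python
import numpy as np

def marginal_expected_shortfall(bank, market, alpha=0.05):
    """MES: mean bank return on days when the market return is in its
    alpha lower tail -- a proxy for systemic risk exposure."""
    tail_days = market <= np.quantile(market, alpha)
    return bank[tail_days].mean()

rng = np.random.default_rng(2)
market = rng.normal(size=5_000)
exposed = 0.8 * market + rng.normal(scale=0.5, size=5_000)  # high-beta bank
immune = rng.normal(scale=0.5, size=5_000)                  # no market exposure
```

A more negative MES means larger losses precisely when the system is stressed, which is why high liquidity creators showing up with high MES is the paper's warning sign.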
We contribute to the literature on statistical robustness of risk measures by computing the index of qualitative robustness for risk measures based on utility functions. This problem is intimately related to finding the natural domain of finiteness and continuity of such risk measures.
We consider the problem of scheduling a set of direct deliveries between a depot and multiple customers using a given heterogeneous truck fleet. The trips have time windows and weights, and they should be completed as soon after release as possible (minimization of maximum weighted flow time). Moreover, some trips can optionally be combined in predefined milk runs (i.e., round trip tours), which need not be linear combinations of the constituent direct trips, accounting, e.g., for consolidation effects because the loading dock needs to be approached only once. This problem has applications, e.g., in just-in-time, humanitarian, and military logistics. We adapt a mixed-integer programming model from the literature to this problem and show that deciding feasibility is NP-complete in the strong sense on three levels: assigning trips to trucks, selecting milk runs, and scheduling trips on each individual truck. We also show that, despite this complexity, a state-of-the-art constraint programming solver and a problem-specific approach based on logic-based Benders decomposition can solve even large instances with up to 175 trips in many cases, while the mixed-integer programming model is essentially unsolvable using commercial optimization software. We also investigate the robustness of the maximum flow time objective in the face of unforeseen delays as well as the influence of milk runs.
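The objective can be stated compactly: for each trip with release time r, completion time c, and weight w, the weighted flow time is w(c - r), and a schedule is scored by the maximum over trips. A minimal sketch of evaluating a candidate schedule (hypothetical trip data, purely illustrative):

```python
def max_weighted_flow_time(trips):
    """Score a schedule by its maximum weighted flow time.
    trips: iterable of (release, completion, weight) tuples."""
    return max(weight * (completion - release)
               for release, completion, weight in trips)

# two trips: the lightly weighted second trip still dominates
# because it lingers far past its release
schedule = [(0, 4, 2.0), (1, 10, 1.0)]
```

Minimizing this max term is what couples the three NP-complete levels (truck assignment, milk-run selection, per-truck sequencing): improving one trip's flow time may worsen the current maximum elsewhere.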
Conceptual model
Path coefficients of determinants on MSMEs’ intention to adopt BCT-SCM
Supply chain (SC) digitalization has become a new trend in the development of micro, small, and medium enterprises (MSMEs). Blockchain technology (BCT) is a cutting-edge innovation that many supply chain management (SCM) professionals have already adopted. Although studies on BCT have yielded some findings, they do not provide sufficient discussion of BCT adoption in SCs among MSMEs. Moreover, the determinants and effects of BCT adoption on SCM among MSMEs remain unclear. This study aims to bridge these gaps by examining individual BCT adoption in the SC domain in Chinese MSMEs. Using the technology-organization-environment (TOE) framework, this study examines the effects of technological, organizational, and environmental contexts on BCT adoption in MSMEs’ SCs. The findings reveal that cost saving, complexity, relative advantage, top management support, SC cooperation, and government support positively affect BCT adoption in SCM. In contrast, compatibility, technological readiness, financial readiness, and competitive pressure have no significant impact on BCT adoption in SCM among MSMEs in China.
An example of rescheduling (E1: emergency surgery)
An example of calculating balance time
Comparison between the proposed heuristic algorithm and intelligent algorithms (H0: objective function values / running time of the proposed heuristic algorithm; GA: objective function values / running time of the GA algorithm; GA-H: objective function values / running time of the hybrid algorithm based on GA and H0)
Comparison between the Poisson (PS) process and the non-homogeneous (NH) process
How to improve the efficiency of operating rooms (ORs) has always been a challenging problem in healthcare operations management. This paper focuses on operating room scheduling under the non-operating room anesthesia (NORA) mechanism, in the presence of uncertain emergency arrivals. In particular, we examine the advantages of the NORA mechanism compared with traditional surgical anesthesia practice under different operating room settings. Operationally, the process comprises two stages: (1) initial scheduling and (2) rescheduling. In the first stage, the initial schedule for elective surgery under NORA is formed through our developed model. Experiments show that, for different operating room settings, the NORA mechanism can significantly improve operating room utilization in comparison with the traditional OR anesthesia process. In the second stage, our experimental results show that the rescheduling model can effectively address the disruptions caused by the random arrival of emergency patients.
Because of global competition among greenhouses and the importance of global food safety, designing greenhouse systems with higher yield and efficiency has become a pressing problem. A greenhouse designed for industrial head lettuce production can not only increase yield by using A-Frame systems, but also improve efficiency through an operation process that shifts human labor to machines. However, the facility layout problem (FLP) of a greenhouse with a complex crop production system and multiple machines has been largely ignored in past decades. To maximize production capacity and efficiency, the FLP of a greenhouse should be treated as an essential part of the conceptual design phase. To this end, a framework integrating systematic layout planning (SLP) and simulation is proposed to design and evaluate facility layouts in greenhouses. Applying the framework to the facility layout for industrial head lettuce production yields an optimal layout plan that achieves a daily production of 7752 head lettuces within about 98 min in a greenhouse of 14,784 m² and an efficiency improvement of 67.31% compared with an initial layout plan. These results can help designers find a more effective layout and help managers support decisions before greenhouse construction.
The proposed decision framework utilising SMCDM, BWM, and TOPSIS
The graph of events and the resulting possible scenarios
The graph for five events and the resulting possible scenarios
Organisations need to develop long-term strategies to ensure they incorporate innovation for environmental sustainability (IES) to remain competitive in the market. This can be challenging given the high level of uncertainty regarding the future (e.g., following the COVID pandemic). Supplier selection is an important decision that organisations make and can be designed to support IES. While the literature provides various criteria in the field of IES strategies, it does not identify the criteria which can be utilised to assist organisations in their supplier selection decisions. Moreover, the literature in this field does not consider uncertainty related to the occurrence of possible future events which may influence the importance of these criteria. To address this gap, this paper develops a novel criteria decision framework to assist supplier evaluation in organisations, taking into consideration different events that may occur in the future. The framework combines three decision-making methods: the stratified multi-criteria decision-making method, the best worst method, and the technique for order of preference by similarity to ideal solution. The framework proposed in this paper can also be adopted to enable effective and sustainable decision making under uncertainty in various fields.
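Of the three combined methods, TOPSIS is the most mechanical: normalize the decision matrix, weight it, and score each alternative by its relative closeness to the ideal solution. A minimal sketch with hypothetical criteria weights and a toy decision matrix (not the paper's framework or data):

```python
import numpy as np

def topsis(decision, weights, benefit):
    """Closeness coefficients for TOPSIS.
    decision: (alternatives x criteria) matrix; weights sum to 1;
    benefit[j] is True if criterion j is to be maximized."""
    normed = decision / np.sqrt((decision ** 2).sum(axis=0))  # vector normalization
    v = normed * weights                                      # weighted matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))   # best per criterion
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))    # worst per criterion
    d_pos = np.sqrt(((v - ideal) ** 2).sum(axis=1))
    d_neg = np.sqrt(((v - anti) ** 2).sum(axis=1))
    return d_neg / (d_pos + d_neg)                            # higher = better

# three suppliers, two criteria: quality (benefit) and cost
decision = np.array([[5.0, 1.0], [3.0, 2.0], [1.0, 3.0]])
scores = topsis(decision, np.array([0.5, 0.5]), np.array([True, False]))
```

In the stratified framework described above, the weights fed into this step would shift with each future-event scenario, re-ranking the suppliers accordingly.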
The United Nations Humanitarian Response Depot (UNHRD) coordinated 515 different shipments (2,420,258 km in total) from six UNHRD depots to 88 different countries to provide a 27,343 m³ volume of products in 2018. The main purpose of the proposed study is to investigate the current distribution plan and identify potential improvements using operations research techniques. Maximization of the number of covered people, minimization of the traveled distance, and analysis of the necessity of the response depots are the problems that need to be addressed for UNHRD. Several methods are applied to the UNHRD network for the first time in the literature, with the aim of providing a practical solution considering reductions in total distance, time, and cost. To achieve this, three different location-allocation models are devised and deployed on the real UNHRD distribution network: maximum coverage, P-median, and set covering. As a result of utilizing the P-median model, the total traveled distance is reduced by 58%, and the most significant depots are identified as Accra and Dubai under various distance limitations using the maximum coverage and set covering models. Regarding the application and managerial implications of the proposed study, the biggest part of demand was supplied by the Dubai depot before the P-median application, whereas the Accra depot now has the biggest part; therefore, the volume of the Accra depot needs to be increased. In addition, the Accra depot is preferred 28 times in the maximum coverage model application with 6 different coverage limits. From this point of view, efficient and effective planning is necessary for the Accra depot in particular. With the set covering method, the minimum and maximum ranges to serve demand points with a given number of opened depot(s) are demonstrated.
The most important contribution of the paper to the literature is to address and improve a real humanitarian aid logistics problem using well-known location-allocation models. The study provides a better understanding of the system, yielding advanced managerial insight into the UNHRD logistics network.
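The P-median model used above selects p depots to minimize the total distance from demand points to their nearest open depot; for small instances it can even be solved by enumeration. A toy sketch with hypothetical distances (not the UNHRD network data):

```python
import itertools
import numpy as np

def p_median(dist, p):
    """Exhaustive P-median: choose p depot columns minimizing total
    demand-point distance. dist[i, j] = distance from demand point i
    to candidate depot j."""
    best_cost, best_set = float("inf"), None
    for depots in itertools.combinations(range(dist.shape[1]), p):
        # each demand point is served by its nearest open depot
        cost = dist[:, depots].min(axis=1).sum()
        if cost < best_cost:
            best_cost, best_set = cost, depots
    return best_set, best_cost

# three demand points, two candidate depots
dist = np.array([[0.0, 10.0],
                 [2.0,  9.0],
                 [10.0, 0.0]])
```

Real instances use an integer-programming formulation instead of enumeration, but the objective evaluated per candidate set is exactly this nearest-open-depot sum.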
This paper aims to investigate the impact of blockchain application on trust levels in supply chains. Through the systematic review of the relevant literature, three dimensions of trust, i.e., the trustor–trustee perspective, forms of trust, and time orientation, are investigated. Our findings show that, first, there are three pairs of trustors and trustees involved in blockchain implementation: (a) the user and the blockchain, (b) two supply chain partners, and (c) the consumer/public and a supply chain unit. Second, the two forms of trust, namely cognition-based and institution-based trust, are likely to be enhanced by blockchain execution, while affect-based trust may not be directly impacted by the technology. Third, the presence of blockchain technology would facilitate swift trust-building between unknown supply chain partners under specific circumstances. Moreover, we also find contradicting assertions among scholars on the implications of blockchain for trust in supply chains. While some studies pointed out that blockchain will enable a trustless trusted scheme, others expected the reinforcement of interorganizational trust. To test these assertions, we develop the blockchain-entrusted supply chain models to present the three-step process of how trust is developed through the blockchain and diffused to supply chain partners and external stakeholders.
It is widely recognized that the limited attention capacity of individual investors affects stock performance. We construct five aggregate investor attention indices for each stock by extracting common information components related to stock returns from various attention proxies using equal-weighted (EW), principal component analysis (PCA), partial least squares (PLS), gradient boosting decision tree (GBDT), and random forest (RF) methods. In a sample of all Shanghai Stock Exchange 50 constituent stocks, we identify two attention indices constructed by machine learning algorithms, RF and GBDT, that provide economically meaningful enhanced prediction of stock returns in both in-sample and out-of-sample periods. Moreover, these indices are negatively related to return volatility. Results suggest the utility of using machine learning to form proxies of investor attention and reveal the excellent forecasting power of these proxies in asset pricing.
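As a sketch of the simplest of the five constructions, a PCA index is the first principal component of the standardized proxies; the version below (synthetic proxies, not the paper's data or code) recovers a latent attention signal from several noisy observables:

```python
import numpy as np

def pca_attention_index(proxies):
    """First principal component of standardized attention proxies,
    used as a single aggregate attention index."""
    z = (proxies - proxies.mean(axis=0)) / proxies.std(axis=0)
    _, _, vt = np.linalg.svd(z, full_matrices=False)
    return z @ vt[0]                      # scores on the first component

rng = np.random.default_rng(3)
signal = rng.normal(size=300)             # latent investor attention
proxies = np.column_stack([signal + rng.normal(scale=0.3, size=300)
                           for _ in range(4)])  # four noisy proxies
index = pca_attention_index(proxies)
```

Note the sign of a principal component is arbitrary, so the index is interpreted up to sign; the supervised methods in the paper (PLS, GBDT, RF) instead extract the component most relevant to returns.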
Pelvic fracture is a severe trauma often seen in traffic accidents and is associated with complications or multiple injuries. Surgery is the main treatment for patients in serious condition, while conservative treatment is adopted for older patients or those with minor illness. Surgical resources, such as doctors, nurses, and operating rooms, are shared by all pelvic fracture patients. From the perspective of patient state, this paper divides patients who require surgery into two types: convalescent patients and scheduled patients. Convalescent patients’ life states are unstable, and they require recovery time to meet the conditions for surgery; the recovery time is usually stochastic owing to differing patient situations. Scheduled patients have stable life states, and their pelvic fracture surgery is scheduled days or weeks in advance. Considering the characteristics of the two types of patients, a finite-horizon Markov decision process (MDP) model is established. With data collected from the hospital, parameters are set and experiments are designed to reveal the dynamic priority rules for admitting patients into surgery. Performances of different scenarios are compared, and the optimal policies obtained from the MDP are analyzed.
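A finite-horizon MDP of this kind is solved by backward induction over the horizon; a generic sketch of that recursion (toy transition and reward data, not the hospital's model):

```python
import numpy as np

def backward_induction(P, R, T):
    """Solve a finite-horizon MDP by backward induction.
    P[a][s, s'] : transition probabilities under action a
    R[a][s]     : immediate reward of action a in state s
    T           : horizon length. Returns stage-0 values and the
    nonstationary policy (policy[t] maps state -> action at stage t)."""
    n_states = P[0].shape[0]
    V = np.zeros(n_states)                # terminal value function
    policy = []
    for _ in range(T):
        # action values: reward now plus expected value-to-go
        Q = np.array([R[a] + P[a] @ V for a in range(len(P))])
        policy.append(Q.argmax(axis=0))
        V = Q.max(axis=0)
    policy.reverse()
    return V, policy

# toy instance: 2 states, action 1 always pays 1, action 0 pays 0
P = [np.eye(2), np.eye(2)]
R = [np.zeros(2), np.ones(2)]
V, policy = backward_induction(P, R, 3)
```

In the paper's setting the states would encode queue and recovery status of the two patient types, and the resulting nonstationary policy is what yields the dynamic priority rules.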
Microfoundations of data-driven cybersecurity awareness capability
Data breaches have become a formidable challenge for business operations in the twenty-first century. The emergence of big data in the ever-growing digital economy has created the necessity to secure critical organizational information. The lack of cybersecurity awareness exposes organizations to potential cyber threats. Thus, this research aims to identify the various dimensions of cybersecurity awareness capabilities. Drawing on the dynamic capabilities framework, the findings of the study show personnel (knowledge, attitude and learning), management (training, culture and strategic orientation) and infrastructure capabilities (technology and data governance) as thematic dimensions to tackle cybersecurity awareness challenges.
Stock returns in excess of the risk-free rate. In-sample part (black), out-of-sample part (red). Left: Time-series plot, Right: Density estimates. Period: 1872–2022. Data: annual S&P 500. (Color figure online)
Correlations of predictions for stock returns in excess of the risk-free rate (for nonlinear models of one or two predictive variables). Left: In-sample, Right: Out-of-sample. Period: 1872–2022. Data: annual S&P 500
Robustness over time (increasing in-sample period) for selected models for stock returns in excess of the risk-free rate. Left: $R_V^2$, Right: $R_{oos}^2$. Period: 1872–2022. Data: annual S&P 500
Correlations of predictions for stock returns in excess of the inflation rate (for nonlinear models of one or two predictive variables). Left: In-sample, Right: Out-of-sample. Period: 1872–2022. Data: annual S&P 500
Robustness over time (increasing in-sample period) for selected models for stock returns in excess of the inflation rate. Left: $R_V^2$, Right: $R_{oos}^2$. Period: 1872–2022. Data: annual S&P 500
Forecast combinations are a popular way of reducing the mean squared forecast error when multiple candidate models for a target variable are available. We apply different approaches to finding (optimal) weights for forecasts of stock returns in excess of different benchmarks. Our focus lies thereby on nonlinear predictive functions estimated by a fully nonparametric smoother with the covariates and the smoothing parameters chosen by cross-validation. Based on an out-of-sample study, we find that individual nonparametric models outperform their forecast combinations. The latter are prone to in-sample over-fitting and in consequence, perform poorly out-of-sample especially when the set of possible candidates for combinations is large. A reduction to one-dimensional models balances in-sample and out-of-sample performance.
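Combination weights are typically fit in-sample by least squares under a sum-to-one constraint, which is exactly where the over-fitting noted above creeps in: with many candidates, the weights chase in-sample noise. A minimal sketch of the weight estimation (generic method, synthetic data, not the paper's nonparametric setup):

```python
import numpy as np

def combination_weights(forecasts, target):
    """Least-squares combination weights constrained to sum to one.
    forecasts: (T x k) matrix, one candidate forecast per column.
    Solves the equality-constrained least-squares problem via its
    KKT system."""
    k = forecasts.shape[1]
    # [F'F  1] [w ]   [F'y]
    # [1'   0] [mu] = [ 1 ]
    A = np.block([[forecasts.T @ forecasts, np.ones((k, 1))],
                  [np.ones((1, k)), np.zeros((1, 1))]])
    b = np.concatenate([forecasts.T @ target, [1.0]])
    return np.linalg.solve(A, b)[:k]

rng = np.random.default_rng(4)
y = rng.normal(size=200)                       # target variable
F = np.column_stack([y, rng.normal(size=200)]) # perfect forecast + pure noise
w = combination_weights(F, y)                  # should load fully on column 0
```

With k large relative to T, these in-sample weights become unstable, which is consistent with the finding that reduced, one-dimensional models balance in- and out-of-sample performance.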
A two-commodity queueing-inventory system with phase-type service times and exponential lead times is considered. There are two types of customers, Type 1 and Type 2. Demands from each customer type occur independently according to a Poisson process with different rates, and the service times follow a phase-type distribution. Type 1 customers have non-preemptive priority over Type 2 customers. We assume a finite waiting space for Type 1 customers, whereas there is no limit on the waiting room for Type 2 customers. Type i customers demand only commodity i, i = 1, 2. For the ith commodity, $S_i$ and $s_i$ represent, respectively, the maximum inventory level and the reorder level. Whenever the inventory level of the ith commodity drops to $s_i$, an order is placed from retailer-i to bring the inventory level back to $S_i$. The lead times of the commodities are exponentially distributed with different parameters. When a Type i customer is waiting in the queue and the inventory level of the ith commodity is zero (or reaches zero), an immediate purchase is made so as not to lose the waiting customer. The queueing-inventory model is analyzed in the steady state using the matrix-geometric method. The system performance is examined for different values of the parameters. In addition, an optimization study is performed for some system parameters.
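At the core of the matrix-geometric method is the minimal nonnegative solution R of the quadratic matrix equation A0 + R·A1 + R²·A2 = 0, where A0, A1, A2 are the up, local, and down blocks of the QBD generator. A sketch of the classical fixed-point iteration, demonstrated on a scalar (M/M/1) block structure rather than the paper's two-commodity model:

```python
import numpy as np

def rate_matrix(A0, A1, A2, tol=1e-12):
    """Minimal nonnegative solution R of A0 + R A1 + R^2 A2 = 0
    (QBD process), via the classical fixed-point iteration
    R <- -(A0 + R^2 A2) A1^{-1}, starting from R = 0."""
    A1_inv = np.linalg.inv(A1)
    R = np.zeros_like(A0)
    while True:
        R_next = -(A0 + R @ R @ A2) @ A1_inv
        if np.abs(R_next - R).max() < tol:
            return R_next
        R = R_next

# M/M/1 queue as a 1x1 QBD: arrival rate lam, service rate mu.
# The minimal solution is the traffic intensity lam/mu.
lam, mu = 1.0, 2.0
R = rate_matrix(np.array([[lam]]),
                np.array([[-(lam + mu)]]),
                np.array([[mu]]))
```

The stationary vector then follows geometrically, pi_{n+1} = pi_n R, which is what makes steady-state analysis of level-structured chains like this queueing-inventory model tractable.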
Operations research flowchart for pandemic/epidemic planning to identify emerging location and transportation problems
Number of studies by country
The recent COVID-19 pandemic once again showed the value of harnessing reliable and timely data in fighting disease. Obtained from multiple sources via different collection streams, an immense amount of data is processed to understand and predict the future state of the disease. Apart from predicting the spatio-temporal dynamics, the data are used to foresee changes in human mobility patterns and travel behaviors and to understand the relationship between mobility and spread speed. During this period, data-driven analytic approaches and Operations Research tools have been widely used by scholars to address emerging transportation and location planning problems and to guide policy-makers in making effective decisions. In this study, we provide a review of studies that tackle transportation and location problems during the COVID-19 pandemic with a focus on data analytics. We discuss the major data collection streams utilized during the pandemic era, highlight the importance of rapid and reliable data sharing, and give an overview of the challenges and limitations of the use of data.
Inputs and outputs changes over different periods
The purpose of this contribution is to compute the popular Malmquist productivity index while adding a component representing plant capacity utilisation. In particular, this is, to the best of our knowledge, the first empirical application estimating both input- and output-oriented Malmquist productivity indices in conjunction with the corresponding input- and output-oriented plant capacity utilisation measures. Our empirical application focuses on a provincial data set of tourism activities in China over the period 2008–2016. The results contain the output- and input-oriented Malmquist productivity indices, some Spearman rank correlations between both, a t-test of whether these indices differ from unity, and some bootstrapping analysis. Supplementary information: The online version contains supplementary material available at 10.1007/s10479-022-04771-8.
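Each period's efficiency score underlying a Malmquist index comes from a DEA linear program; a sketch of the standard input-oriented CCR model via scipy (toy units, not the Chinese tourism panel, and without the capacity-utilisation component the paper adds):

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, j):
    """Input-oriented CCR DEA efficiency of unit j.
    X: (units x inputs), Y: (units x outputs). Solves
    min theta  s.t.  X' lam <= theta * x_j,  Y' lam >= y_j,  lam >= 0."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.r_[1.0, np.zeros(n)]                 # variables: [theta, lam_1..lam_n]
    A_ub = np.block([[-X[j].reshape(m, 1), X.T],    # sum lam_i x_i - theta x_j <= 0
                     [np.zeros((s, 1)), -Y.T]])     # -sum lam_i y_i <= -y_j
    b_ub = np.r_[np.zeros(m), -Y[j]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.fun

# two units with equal output: unit 1 uses twice the input of unit 0
X = np.array([[1.0], [2.0]])
Y = np.array([[1.0], [1.0]])
```

The Malmquist index then compares each unit's scores against the frontiers of adjacent periods, so the LP above is solved once per unit per frontier/period pairing.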
We establish a supply chain finance scheme containing a cash-strapped supplier, a creditworthy retailer, and a financial institution to explore whether positive or negative salvage value has a crucial impact on order decisions and financing strategies. Buyer-backed purchase order financing and advanced payment discount (APD) financing are considered to address the supplier’s fund shortage. We find that positive and negative salvage values affect (1) the retailer’s optimal order quantity: the buyer orders more products with positive salvage value than those with no salvage value and reduces orders for items with negative salvage value; (2) the profits in the supply chain: ordering items with a positive salvage value reduces the risk of loss compared with orders with no salvage value, leading to more gains for the buyer and the whole supply chain, while orders for items with negative salvage value increase the losses, resulting in lower profits; and (3) the threshold of the retailer’s internal asset level under single financing: a higher salvage value brings more inventory risk to the retailer, so the retailer should have a higher asset level to ensure there is sufficient capital to finance the supplier via APD. Finally, we verify the results by numerical experiments and present some managerial implications for different industries.
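The first finding has a textbook backbone: in a newsvendor setting the optimal order quantity is determined by the critical fractile (p - c)/(p - s), so the salvage value s moves the fractile, and hence the order, monotonically. A sketch under normally distributed demand with hypothetical parameters (not the paper's model, which adds the financing layer):

```python
from scipy.stats import norm

def newsvendor_quantity(price, cost, salvage, mu, sigma):
    """Newsvendor optimal order quantity for normal demand N(mu, sigma):
    the demand quantile at the critical fractile (price - cost) / (price - salvage)."""
    fractile = (price - cost) / (price - salvage)
    return norm.ppf(fractile, loc=mu, scale=sigma)

# same price/cost, three salvage regimes (hypothetical numbers)
q_pos = newsvendor_quantity(10, 6, 2, mu=100, sigma=20)    # positive salvage
q_zero = newsvendor_quantity(10, 6, 0, mu=100, sigma=20)   # no salvage
q_neg = newsvendor_quantity(10, 6, -2, mu=100, sigma=20)   # disposal cost
```

Positive salvage raises the fractile and the order, while negative salvage (a disposal cost) lowers both, matching the paper's result (1).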
Supply chain management faces serious challenges in the form of imperfections, which raise quality and environmental concerns. The supplier’s manufacturing system may not produce only perfect items, so a received lot may include a proportion of imperfect items. Instantly exchanging these defective items with the supplier is time-consuming, costly, and harmful to the environment. These defective products are still economically valuable and can be reworked: it is more feasible to repair or rework them at a local repair/service store, saving cost, time, and the environment. The repaired products are expected to come back to the buyer while the inventory level is positive. In addition, global purchasing brings substantial and continuing paybacks in the many scenarios where sellers and buyers operate at a long distance and deal in various businesses by importing and exporting products. The supplier therefore also offers the buyer a sustainable method of payment known as a multi-trade credit period. An inventory model is developed to reduce on-hand stock, save the environment, and benefit from interim financing. The objective is to optimize the profit of the supply chain by incorporating a product reparation policy together with a multi-trade-credit policy and shortages. A non-derivative approach is utilized to optimize the supply chain model by deciding the ordering lot size, cycle time, and proportion of backordered demand. The model uses numerical data from the firm to support decision-makers in transforming the proposed supply chain model into real practice, and it has been checked for sensitivity to various significant supply chain management parameters.
Sample output of an iteration
Demand points allocation (local search)
$ADFList$ representation
This study deals with the facility location-allocation problem with Euclidean distances and an unknown number of facilities. The problem is a harder variant of the NP-hard multisource Weber problem, in which the number of facilities is known a priori. A worm optimization (WO) algorithm is developed for the problem, its parameters are optimized using a custom design of experiments, and its performance is assessed by comparison with ant colony optimization (ACO) and genetic algorithms (GA). Extensive computational results show that WO performed better than the other two algorithms in terms of both solution quality and convergence time, with ACO second and GA last.
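The continuous subproblem inside any solver for this problem class, locating a single facility for a fixed allocation of demand points, is classically solved by the Weiszfeld iteration; a minimal sketch of that building block (not the WO algorithm itself):

```python
import numpy as np

def weiszfeld(points, weights, iters=200):
    """Weiszfeld iteration for the weighted single-facility Weber point:
    repeatedly re-average demand points with weights w_i / d_i."""
    x = np.average(points, axis=0, weights=weights)   # start at the centroid
    for _ in range(iters):
        d = np.linalg.norm(points - x, axis=1)
        d = np.maximum(d, 1e-12)       # guard: facility coinciding with a point
        w = weights / d
        x = (w[:, None] * points).sum(axis=0) / w.sum()
    return x

# four symmetric demand points: the Weber point is the origin
points = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
x = weiszfeld(points, np.ones(4))
```

A metaheuristic such as WO then searches over the allocation of points to facilities (and here, over the number of facilities as well), with this fixed-point step handling each facility's placement.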
We consider a general linear program in standard form whose right-hand side constraint vector is subject to random perturbations. For the corresponding random linear program, we characterize under general assumptions the random fluctuations of the empirical optimal solutions around their population quantities after standardization by a distributional limit theorem. Our approach is geometric in nature and further relies on duality and the collection of dual feasible basic solutions. The limiting random variables are driven by the amount of degeneracy inherent in linear programming. In particular, if the corresponding dual linear program is degenerate, the asymptotic limit law might not be unique and is determined by the way the empirical optimal solution is chosen. Furthermore, we include consistency and convergence rates of the Hausdorff distance between the empirical and the true optimality sets as well as a limit law for the empirical optimal value involving the set of all dual optimal basic solutions. Our analysis is motivated by statistical optimal transport, which is of particular interest here, and distributional limit laws for empirical optimal transport plans follow by a simple application of our general theory. The corresponding limit distribution is usually non-Gaussian, which stands in strong contrast to recent findings for empirical entropy-regularized optimal transport solutions.
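The object of study can be simulated directly: fix a standard-form LP, perturb its right-hand side b, and observe how the empirical optimal value fluctuates. A toy sketch with scipy (an illustrative one-constraint LP, not the paper's optimal transport setting):

```python
import numpy as np
from scipy.optimize import linprog

# standard-form LP: min c'x  s.t.  Ax = b, x >= 0
c = np.array([1.0, 2.0, 3.0])
A = np.array([[1.0, 1.0, 1.0]])

def optimal_value(b):
    """Optimal value of the LP for a given scalar right-hand side b.
    Here the cheapest variable absorbs all of b, so the value equals b."""
    res = linprog(c, A_eq=A, b_eq=[b], bounds=[(0, None)] * 3, method="highs")
    return res.fun

rng = np.random.default_rng(5)
# empirical optimal values under random right-hand-side perturbations
values = [optimal_value(1.0 + 0.1 * rng.standard_normal()) for _ in range(20)]
```

In this nondegenerate example the optimal value is linear in b, so the fluctuations are Gaussian; the paper's point is that degeneracy of the dual breaks this linearity and produces non-Gaussian limits.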
Tail dependence network of the overall industry chain
Tail dependence network of upstream enterprises
Tail dependence network of midstream enterprises
Tail dependence network of downstream enterprises
The emerging new energy vehicle (NEV) industry is strategically important for China, and capturing its operating characteristics is a challenging but meaningful task. Considering that a physical network (e.g. buyer–supplier) or a correlation network (e.g. financial contagion) can provide effective market information for enterprises in operations management, we first construct the stock returns-based tail dependence network of the NEV industry by combining the delta conditional value-at-risk (CoVaR) measure and the triangulated maximally filtered graph (TMFG) algorithm. We then explore the topological structure of the constructed network and obtain the operating characteristics of each enterprise in the whole industrial supply chain and at different levels. The empirical results show that the dependence and influence of different enterprises in the whole industrial supply chain are heterogeneous. In particular, upstream enterprises have closer dependence and faster influence power at all levels. These findings from the NEV industry, covering 71 listed enterprises, would not only help regulators identify enterprises that affect industry stability, but also help investors reduce risk across different enterprises and help managers adjust operation strategies to reduce operating risks. On the theoretical side, we extend network theory to the NEV industry. On the practical side, this is the first study to capture the operating characteristics of the NEV industry in mainland China. On the methodological side, it constructs a new TMFG-CoVaR network.
The mp-subgraphs of G are induced by the following sets: $M_1 = \{v_1,v_2,v_3,v_4,v_5\}$, $M_2 = \{v_4,v_5,v_6,v_7\}$, $M_3 = \{v_6,v_7,v_8,v_9\}$, and $M_4 = \{v_8,v_9,v_{10},v_{11},v_{12}\}$. The extremal mp-subgraphs of G are $M_1$ and $M_4$
A tolled walk W between vertices u and v in a graph G is a walk in which u is adjacent only to the second vertex of W and v is adjacent only to the second-to-last vertex of W. A set S⊆V(G) is toll convex if the vertices contained in any tolled walk between two vertices of S are contained in S. The toll convex hull of S is the minimum toll convex set containing S. The toll hull number of G is the minimum cardinality of a set whose toll convex hull is V(G). The main contribution of this work is a polynomial-time algorithm for computing the toll hull number of a general graph.
Meta-frontier cost efficiency and meta-technology cost efficiency ratio. Efficiencies for a firm operating at point A and belonging to country b: Country-specific cost efficiency = OB/OA. Meta-frontier cost efficiency = OC/OA. Meta-technology cost efficiency ratio = OC/OB = (OC/OA)/(OB/OA)
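As a sanity check, the caption's decomposition can be verified with a few lines of arithmetic; the distances OA, OB, and OC below are illustrative numbers, not values from the study.

```python
# Illustrative check of the caption's decomposition (numbers are made up).
# Distances along the input ray satisfy O--C--B--A, so OC <= OB <= OA.
OA, OB, OC = 10.0, 8.0, 6.0

country_ce = OB / OA   # country-specific cost efficiency
meta_ce = OC / OA      # meta-frontier cost efficiency
mtcr = OC / OB         # meta-technology cost efficiency ratio

# The decomposition in the caption: MTCR = (OC/OA) / (OB/OA).
assert abs(mtcr - meta_ce / country_ce) < 1e-12
```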
Cost efficiency scores of European Life Insurers, 1998–2014. Note: This figure plots the average values for the whole period, the 1998–2007 (pre-crisis) period, and the 2008–2014 (post-crisis) period for each of the 10 countries of the sample as well as across the 10 EU countries. Differences in mean values between the 2008–2014 and 1998–2007 periods were calculated. *** and * denote countries where these differences are statistically significant at the 1% and 10% levels, respectively
Revenue efficiency scores of European Life Insurers, 1998–2014. Note: This figure plots the average values for the whole period, the 1998–2007 (pre-crisis) period, and the 2008–2014 (post-crisis) period for each of the 10 countries of the sample as well as across the 10 EU countries. Differences in mean values between the 2008–2014 and 1998–2007 periods were calculated. ***, ** and * denote countries where these differences are statistically significant at the 1%, 5% and 10% levels, respectively
This paper applies meta-frontier Data Envelopment Analysis and the main concepts of convergence from the economic growth literature (β-convergence and σ-convergence) to analyze integration and convergence both in efficiency and in the technology gap of European Union (EU) insurance markets. We evaluate 10 EU life insurance markets over the 17-year period 1998–2014. Results show convergence in cost/revenue efficiency among major EU life insurance markets during the sample period. These findings indicate that the countries that were least efficient in 1998 showed a greater improvement in cost/revenue efficiency than the most efficient countries in that year, and that the dispersion of mean efficiency scores among EU life insurance markets decreased over the sample period. We also find convergence in the cost/revenue technology gap among these markets, suggesting that they became more technologically homogeneous during the sample period. However, results show that the global financial crisis slowed the progress of integration and convergence in efficiency and technology gap among EU life insurance markets in terms of cost efficiency, but not in terms of revenue efficiency.
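β-convergence is commonly tested by regressing efficiency growth on the initial efficiency level, with a negative slope indicating that laggards catch up. The following is a minimal sketch of that idea on toy data, not the paper's sample or exact econometric specification.

```python
# Minimal beta-convergence sketch (toy data, not the paper's sample).
# Regress growth in efficiency on the (log) initial efficiency level:
# a negative slope (beta < 0) means laggards catch up, i.e. convergence.
import math

initial = [0.50, 0.60, 0.70, 0.80, 0.90]   # efficiency in the first year
final   = [0.70, 0.75, 0.80, 0.85, 0.92]   # efficiency in the last year
growth  = [math.log(f / i) for f, i in zip(final, initial)]
x       = [math.log(i) for i in initial]

# Ordinary least-squares slope: beta = cov(x, growth) / var(x).
n = len(x)
mx, mg = sum(x) / n, sum(growth) / n
beta = sum((xi - mx) * (gi - mg) for xi, gi in zip(x, growth)) \
       / sum((xi - mx) ** 2 for xi in x)

converging = beta < 0   # True here: low-efficiency markets grew faster
```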
Two financing schemes, purchase order financing (POF) and buyer direct financing (BDF), have been proposed for small and medium-sized manufacturers. This study considers a supply chain consisting of a capital-constrained manufacturer who faces random yield and has a probability of credit default, a well-capitalized retailer, and a bank. We find that the manufacturer prefers the POF scheme if the unit production cost is high and the default risk is low, and the BDF scheme otherwise. The retailer, in contrast, benefits from POF when the unit production cost is low. Thus, the retailer, as the leader, has an incentive to distort the purchase price to steer the manufacturer's financing strategy towards the retailer's preference. Furthermore, only BDF can achieve a Pareto improvement, since the retailer plays a dual role (i.e., buyer and lender) under BDF.
Empirical cumulative distribution of the benchmark HDI and HDI scores with governance indicators when lower bound weight is set to 0.15 (ECDF_HDI and ECDF_HDIGOV, respectively)
The well-known Human Development Index (HDI) goes beyond a single measure of well-being as it is constructed as a composite index of achievements in education, income, and health dimensions. However, it is argued that the above dimensions do not reflect the overall well-being, and new indicators should be included in its construction. This paper uses stochastic dominance spanning to test the inclusion of additional institutional quality (governance) dimensions to the HDI, and we examine whether the augmentation of the original set of welfare dimensions by an additional component leads to distributional welfare gains or losses or neither. We find that differently constructed indicators of the same institutional quality measure produce different distributions of well-being. Supplementary information: The online version contains supplementary material available at 10.1007/s10479-022-04656-w.
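The distributional comparison behind such tests can be illustrated with a simple first-order stochastic dominance check on empirical CDFs. The spanning tests used in the paper are more involved; the scores below are made up for illustration.

```python
# First-order stochastic dominance check via empirical CDFs (toy scores).
# A dominates B at first order if ECDF_A(t) <= ECDF_B(t) for every t,
# i.e. A puts no more mass on low well-being scores than B does.

def ecdf(sample, t):
    """Empirical CDF of `sample` evaluated at t."""
    return sum(1 for s in sample if s <= t) / len(sample)

def dominates_fsd(a, b):
    """True if a first-order stochastically dominates b."""
    grid = sorted(set(a) | set(b))
    return all(ecdf(a, t) <= ecdf(b, t) for t in grid)

hdi     = [0.55, 0.60, 0.70, 0.80, 0.90]   # toy benchmark scores
hdi_gov = [0.60, 0.65, 0.75, 0.82, 0.91]   # toy governance-augmented scores

result = dominates_fsd(hdi_gov, hdi)
```

In this toy case the augmented index dominates the benchmark everywhere, which would correspond to a distributional welfare gain from adding the extra dimension.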
Conceptual framework of STRSPTW and OTRSPTW
The working areas designed by AC Company in Saskatchewan
Actual AC Company technician routing (September 18)
Technician routing using the Single-Day TRSPTW model (September 18)
This paper proposes two models for the Technician Routing and Scheduling Problem (TRSP), motivated by a telecom provider based in Saskatchewan, Canada. The proposed TRSP models are distinguished from existing models by their ability to address two key issues: overnight and lunch-break scheduling. The models aim to schedule a set of technicians with homogeneous skill levels and different working hours in order to provide services with different service times and time windows to a diverse set of widely spread communities. Since large instances of this problem are NP-hard, a metaheuristic technique, Invasive Weed Optimization, is developed to solve them. A comparative analysis is performed to choose the best TRSP model based on two factors: the distance of communities from the main depot and the balance of service times over the planning horizon. The performance of the models is evaluated using real-world data obtained from the telecom provider. The results show that the overnight TRSP model is capable of substantially decreasing travel costs and the number of technicians required to perform the same set of services.
Schematic representation of the solution strategy
Schematic representation of the operations of the clustering algorithm
Schematic representation of the operations of the algorithm for allocating the non-prime products to bundles
Comparison of bundles generated (top left), unassigned items (top right), and sum (bottom) for 250 products
Comparison of bundles generated (top left), unassigned items (top right), and sum (bottom) for 1000 products
Not all products meet customers’ quality expectations after the steelmaking process. Some of them, labelled as ‘non-prime’ products, are sold in a periodic online auction. These products need to be grouped into the smallest feasible number of bundles as homogeneous as possible, as this increases the attractiveness of the bundles and hence their selling prices. This results in a highly complex optimisation problem, also conditioned by other requirements, with large economic implications. It may be interpreted as a variant of the well-known bin packing problem. In this article, we formalise it mathematically by studying the real problem faced by a multinational in the steel industry. We also propose a structured, three-stage solution procedure: (i) initial division of the products according to their characteristics; (ii) cluster analysis; and (iii) allocation of products to bundles via optimisation methods. In the last stage, we implement three heuristic algorithms: FIFO, greedy, and distance-based. Building on previous works, we develop 80 test instances, which we use to compare the heuristics. We observe that the greedy algorithm generally outperforms its competitors; however, the distance-based one proves to be more appropriate for large sets of products. Last, we apply the proposed solution procedure to real-world datasets and discuss the benefits obtained by the organisation.
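For intuition on the bin-packing connection, the classic first-fit decreasing heuristic is sketched below. It is not the authors' greedy or distance-based algorithm, which must additionally keep bundles homogeneous; it only illustrates the greedy packing idea.

```python
# First-fit decreasing: a classic bin-packing heuristic, shown here only
# to illustrate the greedy idea; the paper's algorithms also enforce
# bundle-homogeneity requirements that this sketch ignores.

def first_fit_decreasing(weights, capacity):
    """Pack weights into as few bins of the given capacity as possible."""
    bins = []  # each bin is a list of item weights
    for w in sorted(weights, reverse=True):
        for b in bins:
            if sum(b) + w <= capacity:   # first bin with room
                b.append(w)
                break
        else:
            bins.append([w])             # no bin fits: open a new one
    return bins

# Toy instance: product weights (tonnes) and a 10-tonne bundle limit.
bins = first_fit_decreasing([4, 8, 1, 4, 2, 1], capacity=10)
```

On this toy instance the heuristic reaches the optimum of two bundles, since the total weight is 20 against a capacity of 10 per bundle.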
This paper proposes a generalization of Shleifer’s (RAND J Econ 16:319–327, 1985) model of yardstick competition to a dynamic framework. In a differential game setting, we show that the yardstick mechanism effectively replicates the first-best solution if players adopt open-loop behaviour rules and are symmetric at the initial time; in the absence of initial symmetry, the social efficiency is reached only in the asymptotic steady state. On the contrary, if players adopt Markovian behaviour rules, then the yardstick pricing rule cannot achieve the first-best solution along the equilibrium path of any Markov Perfect Nash Equilibrium.
Humanitarian Logistics (HL) comprises the processes and systems involved in mobilizing people, resources, and knowledge to help communities affected by natural disasters. In this study, the Reference Task Model (RTM) provides an overview of these processes and supports Business Process Management (BPM). This article proposes a framework for evaluating supplier selection so as to guarantee indispensable material resources as quickly as possible. We apply a BPM procedure to support the supplier selection process following a flood disaster and describe each stage of the proposed BPM life cycle applied to HL. We employ tools such as the Balanced Scorecard (BSC) to achieve consensus on the objectives, indicators, targets, and actions to be defined for a disaster situation. To balance the allocation of supplies, the network flow problem is adapted for the quantitative model; that model contains the variables of time, demand, and capacity, and includes the set of adequate suppliers. For the application, we describe a flood disaster case study from a state in southern Brazil. The main results of applying the proposed framework are obtained from an optimized holistic view; they represent the selection of suppliers of humanitarian items and consider delivery times, resources, and deprivation costs. One contribution of the proposed framework is the ease of its implementation using process-based technologies and its emphasis on being strategy focused. Further, it consolidates experiences and good practices from humanitarian organizations.
The online peer-to-peer (P2P) lending platform is an emerging FinTech business model that establishes a link between investors and recipients of capital in supply chains (SCs). Businesses face capital constraints that directly impact their final product price and demand. This article studies optimal decisions and operational strategies in a logistics network with two capital-constrained manufacturers who produce products of different qualities and sell them to a retailer facing deterministic demand over a specific period. The high-quality product manufacturer borrows capital through an online P2P lending platform with a service fee, while the low-quality product manufacturer pre-sells products to compete with the high-quality product manufacturer. In this study, we find the optimal prices of the SC participants, the service rate of the online P2P platform, and the percentage of the retailer's pre-ordering quantity. We analyse the optimal Stackelberg and Nash equilibria of the SC participants. We find that an increase in the opportunity cost decreases the retailer's pre-ordering quantity, affecting the SC profit in numerous ways. The online P2P lending platform should consider the retailer's target profit when determining the platform's service rate. Based on our numerical study and observations, we offer practical insights that enable SC managers to take appropriate measures regarding their optimal strategies according to the network's existing economic conditions.
Schematic representation of the conditional DEA model used to assess WWTPs performance in this study
Sensitivity analysis for selection of the m-value
Univariate scatter plot for the influence of the percentage of utilization of the installed capacity a with confidence interval; b without confidence interval
List of peers and intensity values for the units considered inefficient under the robust approach
This paper explores robust unconditional and conditional nonparametric approaches to support performance evaluation in problematic samples. Real-world assessments often face critical problems regarding available data, as samples may be relatively small, with high variability in the magnitude of the observed indicators and contextual conditions. This paper explores the possibility of mitigating the impact of potential outlier observations and variability in small samples using a robust nonparametric approach. This approach has the advantage of avoiding unnecessary loss of relevant information, retaining all the decision-making units of the original sample. We devote particular attention to identifying peers and targets in the robust nonparametric approach to guide improvements for underperforming units. The results are compared with a traditional deterministic approach to highlight the proposed method's benefits for problematic samples. This framework's applicability in internal benchmarking studies is illustrated with a case study within the wastewater treatment industry in Portugal.
Dynamic multi-criteria decision making steps
Structure of dynamic decision making problem
Ranking similarity of different methods
Z values between the proposed method and other methods
This paper presents a new Dynamic Multi-Attribute Decision-Making method based on the Markovian property, which can predict the performance of each alternative in the future and at the same time allows modeling interrelationships among different periods. To this aim, the criteria and decision alternatives in different periods are determined first, and the information of the decision matrices over the decision-making horizon is gathered. To increase the robustness of the results, criteria weights are extracted using the Entropy method in each period, and the alternatives' performance is evaluated using different Multi-Attribute Decision-Making methods. To attain the final rank of alternatives in each period, the results of the different methods are aggregated by the correlation coefficient and standard deviation method. Following this, the rank transformation matrices of alternatives over the evaluation horizon are extracted and the stable rank probability of each alternative is calculated based on limiting probabilities. Eventually, the overall rank of the alternatives is determined using a linear assignment-based method. The proposed model has been applied to the promotion of sales staff in a private company to show its effectiveness in a real-world problem. Results are compared with some well-known methods (five methods, to be exact). Finally, the trustworthiness and acceptability of the method are assessed based on features discussed in the literature.
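The entropy weighting step mentioned in the abstract can be sketched in its textbook form; the decision matrix below is a toy example, not data from the case study.

```python
# Entropy weighting (textbook formulation; toy decision matrix).
# Rows are alternatives, columns are criteria. Criteria whose values are
# more dispersed across alternatives carry more information -> more weight.
import math

matrix = [[7.0, 0.2, 30.0],
          [5.0, 0.2, 10.0],
          [9.0, 0.2, 20.0]]
m, n = len(matrix), len(matrix[0])

weights = []
for j in range(n):
    col = [row[j] for row in matrix]
    total = sum(col)
    p = [v / total for v in col]                      # normalise the column
    e = -sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(m)
    weights.append(1.0 - e)                           # degree of divergence

s = sum(weights)
weights = [w / s for w in weights]                    # normalised weights
```

Note how the second criterion, constant across all alternatives, receives (essentially) zero weight, while the most dispersed third criterion receives the largest.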
Optimal investment strategies for Terminal-VaR (blue line) and Minimum-VaR (orange line), versus ε
This paper introduces a methodology to produce analytical solutions to an expected utility optimization problem with path-dependent constraints on wealth. This is achieved via a combination of dynamic programming and financial derivatives. The paper focuses on solving the case of a Value at Risk constraint on the running minimum of the wealth process. The optimal wealth is shown to be a barrier-type contingent claim on the unconstrained optimal wealth; the optimal investment strategy and value function follow similarly. A comparison of Value at Risk constraints between terminal wealth and the running minimum of wealth demonstrates a difference of up to 30% on risky asset allocation. Other meaningful examples of interest for investment managers are briefly described.
For decades, the prediction of bank failure has been a popular topic in credit risk and banking studies. Statistical and machine learning methods have been working well in predicting the probability of bankruptcy for different time horizons prior to the failure. In recent years, bank efficiency has attracted much interest from academic circles, where low productivity or efficiency in banks has been regarded as a potential reason for failure. It is generally believed that low efficiency implies low-quality management of the organisation, which may lead to bad performance in the competitive financial markets. Previous papers linking efficiency measures calculated by Data Envelopment Analysis (DEA) to bank failure prediction have been limited to cross-sectional analyses. A dynamic analysis with updated samples is therefore recommended for bankruptcy prediction. This paper proposes a nonparametric method, Malmquist DEA with Worst Practice Frontier, to dynamically assess the bankruptcy risk of banks over multiple periods. A total sample of 4426 US banks over a period of 15 years (2002–2016), covering the subprime financial crisis, is used to empirically test the model. A static model is used as the benchmark, and we introduce more extensions for comparisons of predictive performance. Results of the comparisons and robustness tests show that Malmquist DEA is a useful tool not only for estimating productivity growth but also for giving early warnings of the potential collapse of banks. The extended DEA models with various reference sets and orientations also show strong predictive power.
The graphical representation for LDA.
Adapted from Blei (2012)
Word cloud demonstrating changing pattern of terms over time (2000–2022)
Topic matrix generated in R-package using LDA principle
Topic trend/dynamic over time
The literature on healthcare operations and supply chain management has seen unprecedented growth over the past two decades. This paper seeks to advance the body of knowledge on this topic by utilising a topic modelling-based literature review to identify the core topics, examine their dynamic changes, and identify opportunities for further research in the area. Based on an analysis of 571 articles published up to 25 January 2022, we identify numerous popular topics of research in the area, including patient waiting time, COVID-19 pandemic, Industry 4.0 technologies, sustainability, risk and resilience, climate change, circular economy, humanitarian logistics, behavioural operations, service-ecosystem, and knowledge management. We review the current literature around each topic and offer insights into which aspects of each topic have been studied and what the recent developments and opportunities for more impactful future research are. In doing so, this review helps advance contemporary scholarship on healthcare operations and supply chain management and offers resonant insights for researchers, research students, journal editors, and policymakers in the field. Citation: Ali, I. & Kannan, D. (2022). Mapping Research on Healthcare Operations and Supply Chain Management: A Topic Modelling-based Literature Review. Annals of Operations Research, pp. 1-38
Smoothing probability and smoothing correlation coefficients of the positive correlation regime between the oil returns and sectoral equity returns
Downside sectoral VaR, CoVaR and delta CoVaR
Symmetric dependences under a positive and negative correlation regime
In this paper, a dependence-switching copula model is used for the first time to analyse the dependence structure between sectoral equity markets and crude oil prices for India, one of the largest oil importing countries. Specifically, we investigate the dependence and tail dependence for four distinctive states of the market, i.e. rising oil prices—rising equity markets, declining oil prices—declining equity markets, rising oil prices—declining equity markets, and declining oil prices—rising equity markets. Our results reveal that the tail dependence is symmetric (asymmetric) in positive (negative) correlation regimes. Based on the copula results, we estimate the systemic crude oil price risk to different sectors using CoVaR and delta CoVaR. A fleeting positive sectoral CoVaR and delta CoVaR across all sectors implies a time-varying oil price systemic risk. Yet, little difference between CoVaR and VaR across the sectors reveals that a bearish oil market does not add additional systemic risk to a bearish sectoral equity market. The carbon sector is found to be the safe haven investment when both the equity and the oil markets are in a downward phase.
Operating room (OR) management has remained a mainstream topic of hospital management research, as ORs are considered among the most expensive hospital resources. However, the complicated interconnections between the upstream and downstream units in hospital systems have recently drawn research attention that focuses more on allocating medical resources for the sake of balanced coordination. As a critical step, surgical scheduling in the presence of uncertain surgery durations is pivotal but challenging, since a patient cannot be hospitalized if a recovery bed is not available to accommodate the patient. To tackle this challenge, we propose an overflow strategy that allows patients to be assigned to an undesignated department if the designated one is full, which has been shown to successfully alleviate the imbalance of capacity utilization. However, some studies indicate that implementation of the overflow strategy increases the readmission rate as well as the length of stay (LOS). To rigorously examine the overflow strategy and explore its optimal solution, we propose a fuzzy model for surgical scheduling that explicitly considers downstream shortage as well as the uncertainty of surgery duration and patient LOS. To solve the fuzzy model, a hybrid algorithm (called GAP) stemming from the Genetic Algorithm (GA) is proposed. Extensive numerical results demonstrate the efficiency of the GAP algorithm, especially for large-scale scheduling problems (e.g., comprehensive hospitals). Additionally, it is shown that the overflow cost plays a critical role in determining the efficiency of the overflow strategy: benefits from the overflow strategy diminish as the overflow cost increases and almost vanish when the cost becomes sufficiently large. Finally, the fuzzy model is shown to be effective in terms of simplicity and reliability, without cannibalizing the patient admission rate.
Forecasting energy demand has been a critical process in various decision support systems regarding consumption planning, distribution strategies, and energy policies. Traditionally, methods for forecasting energy consumption or demand included trend analyses, regression, and auto-regression. With advancements in machine learning, algorithms such as support vector machines, artificial neural networks, and random forests became prevalent. In recent times, with an unprecedented improvement in computing capabilities, deep learning algorithms are increasingly used to forecast energy consumption/demand. In this contribution, a relatively novel approach based on long short-term memory networks is employed. Weather data were used to forecast the energy consumption from three datasets, with an additional piece of information embedded in the deep learning architecture. This additional information carries the causal relationships between the weather indicators and energy consumption. The architecture incorporating this causal information is termed entangled long short-term memory. The results show that the entangled long short-term memory outperforms a state-of-the-art deep learning architecture (bidirectional long short-term memory). The theoretical and practical implications of these results are discussed in terms of decision-making and energy management systems.
Organ transplantation is a crucial task in the healthcare supply chain, which organizes the supply of and demand for various vital organs. Dealing with uncertainty is one of the main challenges in designing an organ transplant supply chain. To address this gap, the present research proposes a mathematical formulation and solution method to optimize the organ transplant supply chain under shipment time uncertainty. A possibilistic programming model and a simulation-based solution method are developed for organ transplant center location, allocation, and distribution. The proposed mathematical model optimizes the overall cost by considering the fuzzy uncertainty of organ demands and transportation times. Moreover, a novel simulation-based optimization using credibility theory is applied to deal with the uncertainty in this mathematical model. The proposed model and solution method are evaluated on different test problems. The numerical results demonstrate that the optimal credibility level is between 0.2 and 0.6 in all tested cases. Moreover, the patient satisfaction rate is higher than the viability rate in the designed organ supply chain.
Given a set of items, the portfolio selection problem involves selecting a subset of the items subject to resource constraints. In this paper we propose a multiobjective interactive approach based on a constrained Non-Compensatory Sorting model which integrates preferences on both items and portfolios in the same framework. More precisely, we combine two evaluation models. The first one assigns items to two categories (Good/Bad) and models resource limitations using weighted cardinality constraints in such a way that the portfolio is composed of the items assigned to the Good category. The second evaluation level compares portfolios on a set of portfolio-related criteria. We learn the constrained sorting model from a learning set using a SAT/MaxSAT formulation, which proves to be efficient for the preference learning task.
During the first wave of the COVID-19 pandemic, in France, people cleared the shelves of butter; in Italy, it was pasta; in Great Britain, it was chicken. While there may be cultural disagreement on what is essential, clearly, in times of crisis, consumers stockpile the ‘essentials’. We address the problem of “panic buying”, which is characterized by increasing demand in the face of diminishing inventory. In such cases, prices may hike and firms (retailers) selling the high-demand product are quantity takers, in terms of supply, and price setters. We consider a manufacturer who sells a scarce product to a single retailer. The retailer seeks to maximize her profit, while in contrast, the manufacturer pursues a social objective of regulating and lowering the amount that the end customer (consumer) pays (including the cost of traveling to obtain the scarce product). By analyzing the competition between the two parties, retailer and manufacturer, we find that even when the regulator (manufacturer) makes a significant social commitment, neither subsidizing the retailer nor subsidizing the consumers necessarily curbs price hikes. Furthermore, there is a threshold ratio (i.e., proportion of the end price subsidized by the regulator) that determines the minimal budget that the regulator would need to allocate in order for subsidization to make a difference to consumers.
Global corporate giants are keen to adopt Industry 4.0 (I4.0) owing to its continuous, impactful, and evident benefits. However, implementing I4.0 remains a significant challenge for many organizations, mainly due to the absence of a systematic and comprehensive framework. It is a proven fact that risk assessment is key to the flawless execution of any project. This paper aims to develop a KPI-based sustainable integrated model to assess and evaluate the risks associated with I4.0 implementation. The I4.0 risk evaluation model was developed through fifteen expert interventions and an extensive systematic literature review. Based on sixteen KPIs, this research evaluates six risks impacting an organization's decision to adopt I4.0. Initially, the Fuzzy Decision-Making Trial and Evaluation Laboratory method is used to map the causal relationships among the KPIs. Then, the additive ratio assessment with interval triangular fuzzy numbers is used to rank the risks. The study revealed that information technology infrastructure and prediction capabilities are the most crucial prominence and receiver KPIs. Simultaneously, technological and social risks are found to be highly significant in the I4.0 implementation decision-making process. The developed model supports the viewpoints of manufacturers, policymakers, and researchers toward I4.0 implementation in manufacturing companies in the present and post-COVID-19 pandemic phases. The comprehensive yet simple model developed in this study contributes new knowledge to the extant literature. The integrated model is based on the most prominent risks and a wide range of KPIs, analyzed by two aptly fitted fuzzy MCDM techniques that handle the uncertainty and vagueness in the decision-making process. Hence, this study is pioneering and unique in the context of I4.0 risk prioritization, aiming to accelerate I4.0 adoption.
Proposed research TOE framework for blockchain technology adoption
Blockchain Technology Adoption Model using TOE framework
Organizations adopt blockchain technologies to provide solutions that deliver transparency, traceability, trust, and security to their stakeholders. In a novel contribution to the literature, this study adopts the technology-organization-environment (TOE) framework to examine the technological, organizational, and environmental dimensions of blockchain technology adoption in supply chains. This represents a departure from prior studies, which have adopted the technology acceptance model (TAM), technology readiness index (TRI), theory of planned behavior (TPB), or unified theory of acceptance and use of technology (UTAUT) models. Data were collected through a survey of 525 supply chain management professionals in India. The research model was tested using structural equation modeling. The results show that all eleven TOE constructs, including relative advantage, trust, compatibility, security, the firm's IT resources, higher authority support, firm size, monetary resources, rivalry pressure, business partner pressure, and regulatory pressure, had a significant influence on the decision to adopt blockchain technology in Indian supply chains. The findings reveal that blockchain technology adoption in supply chains may significantly improve firm performance by improving transparency, trust, and security for stakeholders within the supply chain. Further, this research framework contributes to the theoretical advancement of the existing body of knowledge on blockchain technology adoption.
k-means clustering is a classic method of unsupervised learning with the aim of partitioning a given number of measurements into k clusters. In many modern applications, however, this approach suffers from unstructured measurement errors because the k-means clustering result then represents a clustering of the erroneous measurements instead of retrieving the true underlying clustering structure. We resolve this issue by applying techniques from robust optimization to hedge the clustering result against unstructured errors in the observed data. To this end, we derive the strictly and Γ-robust counterparts of the k-means clustering problem. Since the nominal problem is already NP-hard, global approaches are often not feasible in practice. As a remedy, we develop tailored alternating direction methods by decomposing the search space of the nominal as well as of the robustified problems to quickly obtain feasible points of good quality. Our numerical results reveal an interesting feature: the less conservative Γ-approach is clearly outperformed by the strictly robust clustering method. In particular, the strictly robustified clustering method is able to recover clusterings of the original data even if only erroneous measurements are observed.
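For reference, the nominal problem that the paper robustifies is standard k-means. A minimal Lloyd's-algorithm sketch on 1-D toy data is given below; the strictly and Γ-robust counterparts derived in the paper are not reproduced here.

```python
# Lloyd's algorithm for the nominal k-means problem (1-D toy data).
# This is the baseline that the paper robustifies; the robust variants
# hedge the assignment/center steps against measurement errors.

def kmeans_1d(points, centers, iters=20):
    """Alternate nearest-center assignment and centroid updates."""
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:
            # Assign each point to its nearest current center.
            j = min(range(len(centers)), key=lambda j: (p - centers[j]) ** 2)
            clusters[j].append(p)
        # Update each center to its cluster mean (keep old center if empty).
        centers = [sum(c) / len(c) if c else centers[j]
                   for j, c in enumerate(clusters)]
    return centers, clusters

points = [1.0, 1.2, 0.8, 5.0, 5.2, 4.8]
centers, clusters = kmeans_1d(points, centers=[0.0, 6.0])
```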
With the development of Industry 4.0, the credit data of SMEs are characterized by large volume, high speed, diversity, and low value density. Selecting the key features that affect credit risk from such high-dimensional data has become critical for accurately measuring the credit risk of SMEs and alleviating their financing constraints. To this end, this paper proposes a credit risk feature selection approach that integrates the binary opposite whale optimization algorithm (BOWOA) and the Kolmogorov–Smirnov (KS) statistic. Furthermore, we use seven machine learning classifiers and three discriminant methods to verify the robustness of the proposed model on three actual bank datasets of SMEs. The empirical results show that although no single artificial intelligence credit evaluation method is universal across different SMEs' credit data, the BOWOA-KS model proposed in this paper outperforms the other methods when the number of indicators in the optimal indicator subset and the predictive performance of the classifier are considered simultaneously. By providing a high-dimensional feature selection method and improving the predictive performance of credit risk models, this work can help SMEs focus on the factors that will improve their creditworthiness and ease their access to loans from financial institutions. It will also help government agencies and policymakers develop policies to help SMEs reduce their credit risks.
Top-cited authors
Dmitry Ivanov
  • Hochschule für Wirtschaft und Recht Berlin
Samuel Fosso Wamba
  • Toulouse Business School
Angappa Gunasekaran
  • Pennsylvania State University Harrisburg
David Roubaud
  • Montpellier Business School
Yogesh Kumar Dwivedi
  • Swansea University