Article

A Predictive Analytics Tool to Provide Visibility Into Completion of Work Orders in Supply Chain Systems


Abstract

In current supply chain operations, original equipment manufacturers (OEMs) procure parts from hundreds of globally distributed suppliers, which are often small and medium-scale enterprises (SMEs). The SMEs, in turn, obtain parts from many other dispersed suppliers, some of whom act as sole sources of critical parts, leading to complex supply chain networks. These characteristics necessitate a high degree of visibility into the flow of parts through the networks to facilitate decision making for OEMs and SMEs alike. However, such visibility is typically restricted in real-world operations due to limited information exchange among buyers and suppliers. Therefore, an alternate mechanism is needed to acquire this kind of visibility, particularly for critical prediction problems such as purchase order deliveries and sales order fulfillments, together referred to as work order completion times. In this paper, we present one such surrogate mechanism in the form of supervised learning, where ensembles of decision trees are trained on historical transactional data. Furthermore, since many of the predictors are categorical variables, we apply a dimension reduction method to identify the most influential category levels. Results on real-world supply chain data show effective performance with substantially lower prediction errors than the original completion time estimates. In addition, we develop a web-based visibility tool to facilitate real-time use of the prediction models. We also conduct a structured usability test to customize the tool interface. The test results provide multiple helpful suggestions for enhancing the ease of use of the tool.
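The authors' code is not available here, but a minimal R sketch of the core idea (an ensemble of decision trees regressing completion time on historical transactional predictors, then comparing against the supplier-promised estimates) might look as follows. All data, column names (supplier, part_family, order_qty, promised_lead_days, completion_days), and the data-generating rule are hypothetical stand-ins, not the paper's dataset.

```r
# Hedged sketch, not the authors' implementation: a random forest trained
# on (simulated) closed work orders to predict completion time in days.
library(randomForest)

set.seed(42)
n <- 1000
orders <- data.frame(
  supplier           = factor(sample(paste0("S", 1:20), n, replace = TRUE)),
  part_family        = factor(sample(c("casting", "machining", "forging"),
                                     n, replace = TRUE)),
  order_qty          = rpois(n, 50),
  promised_lead_days = round(runif(n, 10, 90)))
# toy ground truth: actual completion is the promise plus supplier-dependent noise
orders$completion_days <- orders$promised_lead_days +
  rnorm(n, mean = as.integer(orders$supplier) %% 7, sd = 5)

idx <- sample(n, 0.8 * n)
rf  <- randomForest(completion_days ~ ., data = orders[idx, ], ntree = 500)

# Compare the model's error against the original promised estimates.
pred <- predict(rf, orders[-idx, ])
mean(abs(pred - orders$completion_days[-idx]))            # model MAE
mean(abs(orders$promised_lead_days[-idx] -
         orders$completion_days[-idx]))                   # baseline MAE
```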


... Finally, Liu, Hwang, Yund, Neidig, Hartford, Boyle et al. (2020) use the Random Forest algorithm to predict the completion times of work orders. These predictions provide users with a more reliable understanding of when open work orders will be delivered and what the estimated delivery times will be for future work orders currently being planned. ...
... • Neural networks predict batch viability in hierarchical production planning (Gahm et al., 2022). • Random Forest (RF) predicts work order completion times and Principal Component Analysis (PCA) identifies the most influential levels of categorical variables (Liu et al., 2020). ...
Article
Full-text available
Purpose: This study presents a systematic literature review that provides a broad and holistic view of how machine learning can be used and integrated to enhance decision-making in various areas of the supply chain, highlighting its combination with other techniques and models. Design/methodology/approach: An exhaustive literature review used three sets of keywords in the Scopus and Web of Science (WoS) databases. Through a rigorous filtering process, 70 articles were selected from an initial total of 410, focusing on those that specifically addressed the intersection of machine learning and decision-making in supply chain management. Findings: Machine learning has proven to be an essential tool in the supply chain, with applications in inventory management, logistics, and transportation, among others. Its integration with other techniques has led to significant advances in decision-making, improving efficiency in complex environments. Combining machine learning methods with traditional techniques has been particularly effective, and integration with emerging technologies has opened up new application possibilities. Originality/value: Unlike previous studies that focused on specific areas, this study offers a broad perspective on the application of machine learning in the supply chain. Additionally, combining machine learning techniques with other models is highlighted, representing added value for the scientific community and suggesting new avenues for future research.
... The integrated solutions for green development in industry, construction, transportation, and finance built based on cloud computing, artificial intelligence (AI), and the digital twin [9] of the Internet of Things (IoT) are typical examples of digital intelligence technologies driving carbon reduction, pollution reduction, and green expansion through efficiency enhancement and paradigm change. For instance, solar and wind power facilities may use cloud computing technologies [10] and blockchain technology (BCT) [11], which may be used to authenticate green electricity and trace its source. Artificial intelligence and IoT technologies can respond to, learn, and optimize complex decision-making challenges involving various objectives such as economic, safety, and environmental compliance in integrated energy management in regions, parks, financial institutions, influential organizations, etc., [12]. ...
Article
Full-text available
The introduction of the idea of “carbon neutrality” gives the development of low carbon and decarbonization a defined path. Climate change is a significant worldwide concern. To offer a theoretical foundation for the implementation of carbon reduction, this research first analyzes the idea of carbon footprinting, accounting techniques, and supporting technologies. The next section examines carbon emission reduction technologies in terms of lowering emissions and raising carbon sequestration. Digital intelligence technologies like the Internet of Things, big data, and artificial intelligence will be crucial throughout the process of reducing carbon emissions. The implementation pathways for increasing carbon sequestration primarily include ecological and technological carbon sequestration. Nevertheless, proving carbon neutrality requires measuring and monitoring greenhouse gas emissions from several industries, which makes it a challenging undertaking. Intending to increase the effectiveness of carbon footprint measurement, this study created a web-based program for computing and analyzing the whole life-cycle carbon footprint of items. The practical applications and difficulties of digital technologies, such as blockchain, the Internet of Things, and artificial intelligence in achieving a transition to carbon neutrality are also reviewed, and additional encouraging research ideas and recommendations are made to support the development of carbon neutrality.
... These nine pillars are: The Industrial Internet of Things (IIoT), Big Data and Analytics, Horizontal and vertical system integration, Simulation, Cloud computing, Augmented Reality (AR), Autonomous Robots, Additive manufacturing, and Cyber Security. Numerous studies report on each of these technologies and concepts regarding how they can be implemented into factories to reduce costs and increase performance [7,2,8,9,10,11]. ...
Conference Paper
div class="section abstract"> With the developments of Industry 4.0, data analytics solutions and their applications have become more prevalent in the manufacturing industry. Currently, the typical software architecture supporting these solutions is modular, using separate software for data collection, storage, analytics, and visualization. The integration and maintenance of such a solution requires the expertise of an information technology team, making implementation more challenging for small manufacturing enterprises. To allow small manufacturing enterprises to feasibly obtain the benefits of Industry 4.0 data analytics, a full-stack data analytics framework is presented, and its performance evaluated as applied in the common industrial analytics scenario of predictive maintenance. The predictive maintenance approach was achieved by using a full-stack data analytics framework comprised of the PTC Inc. Thingworx software suite. When deployed on a lab-scale factory, there was a significant increase in factory uptime in comparison with both preventive and reactive maintenance approaches. The predictive maintenance approach simultaneously eliminated unexpected breakdowns and extended the uptime periods of the lab-scale factory. This research concluded that similar or better results may be obtained in actual factory settings, since the only source of error on predictions in the testing scenario would not be present in real world scenarios. An analysis of the effect of downtime period durations and discussion on the cost of reactive maintenance and associated breakdowns is also presented. </div
Article
Full-text available
In recent years, there has been a growing surge of interest in the application of data analytics (DA) within the realm of supply chain management (SCM), attracting attention from both practitioners and researchers. This paper presents a comprehensive examination of recent implementations of DA in SCM. Employing a systematic literature review (SLR), we conducted a meticulous analysis of over 354 papers. Building upon a prior SLR conducted in 2018, we identify contemporary areas where DA has been applied across various functions within the supply chain and scrutinize the DA models and techniques that have been employed. A comparison between past findings and the current literature reveals a notable upsurge in the utilization of DA across most SCM functions, with a particular emphasis on the prevalence of predictive analytics models in contemporary SCM applications. The findings of this paper offer a detailed insight into the specific DA models and techniques currently in use across various SCM functions. Additionally, a discernible increase in the adoption of mixed or hybrid DA models is observed. However, several research gaps persist, including the need for more attention to real-time DA in SCM, the integration of publicly available data, and the application of DA to mitigate uncertainty in SCM. To address these areas and guide future research endeavors, the paper concludes by delineating six concrete research directions. These directions offer valuable avenues for further exploration in the field.
Article
Full-text available
Purpose In today's fast-developing era, the volume of data is increasing day by day, and traditional methods lag in efficiently managing these huge amounts of data. The adoption of machine learning techniques helps in the efficient management of data and draws relevant patterns from it. The main aim of this research paper is to provide brief information about the adoption of machine learning techniques in different sectors of the manufacturing supply chain. Design/methodology/approach This research paper presents a rigorous systematic literature review of the adoption of machine learning techniques in the manufacturing supply chain from 2015 to 2023. Out of 511 papers, 74 were shortlisted for detailed analysis. Findings The papers are subcategorized into eight sections, which helps in scrutinizing the work done in the manufacturing supply chain. This paper helps in identifying the contribution of machine learning techniques in the manufacturing field, mostly in the automotive sector. Practical implications The research is limited to papers published from 2015 to 2023. A limitation of the current research is that book chapters, unpublished work, white papers, and conference papers are not considered, and only English-language articles and review papers are studied in brief. This study supports the adoption of machine learning techniques in the manufacturing supply chain. Originality/value This study is one of the few that investigates machine learning techniques in the manufacturing sector and supply chain through a systematic literature survey. Highlights A comprehensive understanding of machine learning techniques is presented; the state of the art of their adoption is investigated; an SLR methodology is proposed; an innovative study of machine learning techniques in the manufacturing supply chain is provided.
Chapter
Usability is a quality attribute that concerns all professionals involved in the software development process. For this reason, several methods have been established to determine whether a software product is easy to use, intuitive, understandable, and attractive to users. However, despite the relevance of this software quality attribute, there are still applications with a low level of usability. Enterprise Resource Planning (ERP) software products still provide graphical interfaces that do not consider the context, the conditions of use, and the final objectives of the user. There is evidence that many of these applications, which are widely used in the market, especially the purchase order generation modules, have been designed without following a user-centered design process and without going through an evaluation process. This fact leads many companies to redesign the graphical interfaces through which users interact with ERPs. This study reports the results of a systematic literature review (SLR) that aims to identify case studies in which redesigns of purchase order modules are reported. The purpose was to identify and analyze the methodologies, tools, and methods most used in the redesign of this type of software application, as well as the reasons that lead companies to modify the interfaces. A total of 159 studies were identified, of which 22 were selected as relevant to this review. According to the analysis, frustration and low comfort lead companies to use a User-Centered Design (UCD) framework to redesign the graphical interfaces.
Article
Purpose- This work aims to review past and present articles about data-driven quality management (DDQM) in supply chains (SCs). The motive behind the review is to identify associated literature gaps and to provide a future research direction in the field of DDQM in SCs. Design/Methodology/Approach- A systematic literature review was done in the field of DDQM in SCs. The SCOPUS database was chosen to collect articles in the selected field, and an SLR methodology was then followed to review the selected articles. Bibliometric and network analyses were also conducted to analyze the contributions of various authors, countries, and institutions in the field of DDQM in SCs. Network analysis was done using the VOSviewer package to analyze collaboration among researchers. Findings- The findings of the study reveal that the adoption of data-driven technologies and quality management tools can help in strategic decision making. The usage of data-driven technologies such as artificial intelligence and machine learning can significantly enhance the performance of supply chain operations and networks. Originality/Value- The paper discusses the importance of data-driven techniques enabling quality in SC management systems. The linkage between data-driven techniques and quality management for improving SC performance is also elaborated in the presented study. Keywords- Quality management; Data-driven; Supply chain; Systematic literature review; Bibliometric.
Article
Full-text available
The natural language descriptions of the capabilities of manufacturing companies can be found in multiple locations, including company websites, legacy system databases, and ad hoc documents and spreadsheets. To unlock the value of unstructured capability data and learn from it, there is a need for developing advanced quantitative methods supported by machine learning and natural language processing techniques. This research proposes a hybrid unsupervised learning methodology using K-means clustering and topic modeling techniques in order to build clusters of suppliers based on their capabilities, automatically infer topics from the created clusters, and discover nontrivial patterns in manufacturing capability corpora. The capability data is extracted either directly from the websites of manufacturing firms or from their profiles in e-sourcing portals and directories. The feature extraction and dimensionality reduction process in this work is supported by N-gram extraction and Latent Semantic Analysis (LSA) methods. The proposed clustering method is validated experimentally on a dataset composed of 150 capability descriptions collected from web-based sourcing directories such as the ThomasNet directory for manufacturing companies. The results of the experiment show that the proposed method creates supplier clusters with high accuracy. Two example applications of the proposed framework, related to supplier similarity measurement and automated thesaurus creation, are introduced in this paper.
Book
Full-text available
Human-centred design is based on the satisfaction of user needs related to performance, aesthetics, reliability, usability, accessibility and visibility issues, costs, and many other aspects. The combination of all these aspects has been called "perceived quality", which is definitely a transdisciplinary topic. However, the "real" perceived quality is usually faithfully assessed only at the end of the design process, while it is very difficult to predict on a 3D CAD model. In this context, digital manufacturing tools and virtual simulation technologies can be validly used according to a transdisciplinary approach to create interactive digital mock-ups where the human-system interaction can be simulated and the perceived quality assessed in advance. The paper proposes a mixed reality (MR) setup where systems and the humans interacting with them are digitalized and monitored to easily evaluate the human-machine interaction. It is useful for predicting design criticalities and improving the global system design. An industrial case study has been developed in collaboration with CNH Industrial to demonstrate how the proposed setup can be validly used to support human-centred design.
Article
Full-text available
Manufacturing capability analysis is a necessary step in the early stages of supply chain formation. In the contract manufacturing industry, companies often advertise their capabilities and services in an unstructured format on the company website. The unstructured capability data usually portrays a realistic view of the services a supplier can offer. If parsed and analyzed properly, unstructured capability data can be used effectively for initial screening and characterization of manufacturing suppliers, especially when dealing with a large pool of suppliers. This work proposes a novel framework for capability-based supplier classification that relies on the unstructured capability narratives available on suppliers' websites. Four document classification algorithms, namely Support Vector Machine (SVM), Naïve Bayes (NB), Random Forest (RF), and K-Nearest Neighbour (KNN), are used as the text classification techniques. One of the innovative aspects of this work is incorporating a thesaurus-guided method for feature selection and tokenization of capability data. The thesaurus contains the formal and informal vocabulary used in the contract machining industry for advertising manufacturing capabilities. A web-based tool is developed for the generation of the concept vector model associated with each capability narrative and the extraction of features from the input documents. The proposed supplier classification framework is validated experimentally by forming two capability classes, namely heavy component machining and difficult and complex machining, based on real capability data. It was concluded that the thesaurus-guided method improves the precision of the classification process.
Conference Paper
Full-text available
Aircraft engine assembly operations require thousands of parts provided by several geographically distributed suppliers. A majority of the operation steps are sequential, necessitating the availability of all the parts at appropriate times for these steps to be completed successfully. Thus, being able to accurately predict the availabilities of parts based on supplier deliveries is critical to minimizing the delays in meeting customer demands. However, such accurate prediction is challenging due to the large lead times of these parts, limited knowledge of supplier capacities and capabilities, macroeconomic trends affecting material procurement and transportation times, and unreliable delivery date estimates provided by the suppliers themselves. We address these challenges by developing a statistical method that learns a hybrid stepwise regression - generalized multivariate gamma distribution model from historical transactional data on closed part purchase orders and is able to infer part delivery dates sufficiently before the supplier-promised delivery dates for open purchase orders. The hybrid form of the model makes it robust to data quality and short-term temporal effects as well as biased toward overestimating rather than underestimating the part delivery dates. Test results on real-world purchase orders demonstrate effective performance with low prediction errors and consistently high ratios of true positive to false positive predictions. Copyright © 2015 by ASME
Article
Full-text available
To discover how a newly developed library mobile website performed across a variety of devices, the authors used a hybrid field and laboratory methodology to conduct a usability test of the website. Twelve student participants were recruited and selected according to phone type. Results revealed a wide array of errors attributed to site design, wireless network connections, as well as phone hardware and software. This study provides an example methodology for testing library mobile websites, identifies issues associated with mobile websites, and provides recommendations for improving the user experience.
Article
Full-text available
In modern manufacturing era, supply chains are increasingly becoming global and agile. To build agile global supply chains, companies first need to have access to a large supply base and secondly need an efficient mechanism for cost-effective and rapid location, evaluation, and selection of suppliers. This work introduces a matchmaking algorithm for connecting buyers and sellers of manufacturing services based on their semantic similarities in terms of manufacturing capabilities. The proposed matchmaking algorithm operates over Manufacturing Service Description Language (MSDL), an ontology for formal representation of manufacturing services. Since MSDL descriptions can be represented as directed labeled trees, a tree matching approach is implemented in this work.
Article
Full-text available
Facing uncertain environments, firms have strived to achieve greater supply chain collaboration to leverage the resources and knowledge of their suppliers and customers. The objective of the study is to uncover the nature of supply chain collaboration and explore its impact on firm performance based on a paradigm of collaborative advantage. Reliable and valid instruments of these constructs were developed through rigorous empirical analysis. Data were collected through a Web survey of U.S. manufacturing firms in various industries. The statistical methods used include confirmatory factor analysis and structural equation modeling (i.e., LISREL). The results indicate that supply chain collaboration improves collaborative advantage and indeed has a bottom-line influence on firm performance, and collaborative advantage is an intermediate variable that enables supply chain partners to achieve synergies and create superior performance. A further analysis of the moderation effect of firm size reveals that collaborative advantage completely mediates the relationship between supply chain collaboration and firm performance for small firms while it partially mediates the relationship for medium and large firms.
Article
Full-text available
The last several years have seen a growth in the number of publications in economics that use principal component analysis (PCA), especially in the area of welfare studies. This paper gives an introduction to principal component analysis and describes how discrete data can be incorporated into it. The effects of discreteness of the observed variables on the PCA are overviewed. The concepts of polychoric and polyserial correlations are introduced with appropriate references to the existing literature demonstrating their statistical properties. A large simulation study is carried out to shed light on some of the issues raised in the theoretical part of the paper. The simulation results show that the currently used method of running PCA on a set of dummy variables, as proposed by Filmer & Pritchett (2001), is inferior to other methods for analyzing discrete data, both simple ones such as using ordinal variables and more sophisticated ones such as using the polychoric correlations.
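As a small illustration of the approach discussed above, one can run PCA on a polychoric correlation matrix rather than on dummy variables. The sketch below uses the polycor package on toy ordinal data; the item names and data are invented for illustration.

```r
# Illustrative sketch: PCA of ordinal items via polychoric correlations.
library(polycor)

set.seed(1)
d <- data.frame(item1 = ordered(sample(1:5, 200, replace = TRUE)),
                item2 = ordered(sample(1:5, 200, replace = TRUE)),
                item3 = ordered(sample(1:5, 200, replace = TRUE)))

R  <- hetcor(d)$correlations    # polychoric correlation matrix
pc <- eigen(R)                  # principal components of that matrix
pc$values / sum(pc$values)      # proportion of variance per component
```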
Article
Full-text available
This paper evaluates the impact of forecasting models and early order commitment in a supply chain with one capacitated manufacturer and four retailers under demand uncertainty. Computer simulation models were used to simulate different demand forecasting and inventory replenishment decisions by the retailers as well as production decisions by the manufacturer under a variety of demand patterns and capacity tightness scenarios. This study found that early order commitments significantly affected the total costs and service levels, to various degrees, for the manufacturer and the retailers, suggesting that the benefits of early order commitment could be influenced by a combination of forecasting models, demand patterns, and capacity tightness.
Article
Full-text available
Demand forecasts play a crucial role for supply chain management. The future demand for a certain product is the basis for the respective replenishment systems. Several forecasting techniques have been developed, each one with its particular advantages and disadvantages compared to other approaches. This motivates the development of hybrid systems combining different techniques and their respective strengths. In this paper, we present a hybrid intelligent system combining Autoregressive Integrated Moving Average (ARIMA) models and neural networks for demand forecasting. We show improvements in forecasting accuracy and propose a replenishment system for a Chilean supermarket, which leads simultaneously to fewer sales failures and lower inventory levels than the previous solution.
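A hybrid of this kind can be prototyped in a few lines with the forecast package. The sketch below is an assumption-laden illustration, not the paper's system: it fits an ARIMA model, fits a neural network (nnetar) to the ARIMA residuals, and sums the two forecasts; AirPassengers is a stand-in for a demand series.

```r
# Minimal hybrid ARIMA + neural-network sketch using the forecast package.
library(forecast)

demand    <- AirPassengers                 # stand-in demand series
fit_arima <- auto.arima(demand)            # linear temporal structure
fit_nn    <- nnetar(residuals(fit_arima))  # nonlinear residual structure

h <- 12
hybrid <- forecast(fit_arima, h = h)$mean + forecast(fit_nn, h = h)$mean
```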
Article
Full-text available
The coming century is surely the century of data. A combination of blind faith and serious purpose makes our society invest massively in the collection and processing of data of all kinds, on scales unimaginable until recently. Hyperspectral imagery, Internet portals, financial tick-by-tick data, and DNA microarrays are just a few of the better-known sources, feeding data in torrential streams into scientific and business databases worldwide. In traditional statistical data analysis, we think of observations of instances of particular phenomena, these observations being a vector of values we measured on several variables (e.g. blood pressure, weight, height, ...). In traditional statistical methodology, we assumed many observations and a few, well-chosen variables. The trend today is towards more observations but even more so towards radically larger numbers of variables: voracious, automatic, systematic collection of hyper-informative detail about each observed instance. We are seeing examples where the observations gathered on individual instances are curves, or spectra, or images, or even movies, so that a single observation has dimensions in the thousands or billions, while there are only tens or hundreds of instances available for study. Classical methods are simply not designed to cope with this kind of explosive growth of dimensionality of the observation vector. We can say with complete confidence that in the coming century, high-dimensional data analysis will be a very significant activity, and completely new methods of high-dimensional data analysis will be developed; we just don't know what they are yet. Mathematicians are ideally prepared for appreciating the abstract issues involved in finding patterns in such high-dimensional data. Two of the most influential principles in the coming century will be principles originally discovered and cultivated by mathematicians: the blessings of dimensionality and the curse of dimensionality. The curse of dimensionality is a phrase used by several subfields in the mathematical sciences; I use it here to refer to the apparent intractability of systematically searching through a high-dimensional space, the apparent intractability of accurately approximating a general high-dimensional function, and the apparent intractability of integrating a high-dimensional function. The blessings of dimensionality are less widely noted, but they include the concentration of measure phenomenon (so-called in the geometry of Banach spaces), which means that certain random fluctuations are very well controlled in high dimensions, and the success of asymptotic methods, used widely in mathematical statistics and statistical physics, which suggest that statements about very high-dimensional settings may be made where moderate dimensions would be too complicated. There is a large body of interesting work going on in the mathematical sciences, both to attack the curse of dimensionality in specific ways and to extend the benefits of dimensionality. I will mention work in high-dimensional approximation theory, in probability theory, and in mathematical statistics. I expect to see in the coming decades many further mathematical elaborations to our inventory of blessings and curses, and I expect such contributions to have a broad impact on society's ability to extract meaning from the massive datasets it has decided to compile.
In my talk, I will also draw on my personal research experiences which suggest to me (1) there are substantial chances that by interpreting ongoing development in high-dimensional data analysis, mathematicians can become aware of new problems in harmonic analysis; and (2) that many of the problems of data analysis even in fairly low dimensions are unsolved and are similar to problems in mathematics which have only recently been attacked, and for which only the merest beginnings have been made. Both fields can progress together.
Article
Full-text available
This paper presents the reshape package for R, which provides a common framework for many types of data reshaping and aggregation. It uses a paradigm of "melting" and "casting", where the data are "melted" into a form which distinguishes measured and identifying variables, and then "cast" into a new shape, whether it be a data frame, list, or high-dimensional array. The paper includes an introduction to the conceptual framework, practical advice for melting and casting, and a case study.
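For readers unfamiliar with the melt/cast paradigm, a tiny example follows. It uses reshape2, the package's successor, where cast() became dcast(); the toy table is invented.

```r
# Melt a wide table into long form, then cast it back to wide.
library(reshape2)

wide <- data.frame(subject = c("a", "b"),
                   pre     = c(3.1, 4.0),
                   post    = c(3.8, 4.5))

long <- melt(wide, id.vars = "subject",
             variable.name = "time", value.name = "score")
dcast(long, subject ~ time, value.var = "score")   # back to wide form
```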
Article
Full-text available
This paper describes recent research in subjective usability measurement at IBM. The focus of the research was the application of psychometric methods to the development and evaluation of questionnaires that measure user satisfaction with system usability. The primary goals of this paper are to (1) discuss the psychometric characteristics of four IBM questionnaires that measure user satisfaction with computer system usability, and (2) provide the questionnaires, with administration and scoring instructions. Usability practitioners can use these questionnaires with confidence to help them measure users' satisfaction with the usability of computer systems.
Article
Full-text available
Although nonparametric regression has traditionally focused on the estimation of conditional mean functions, nonparametric estimation of conditional quantile functions is often of substantial practical interest. We explore a class of quantile smoothing splines, defined as solutions to $\min_{g \in \mathcal{G}} \sum_i \rho_\tau\{y_i - g(x_i)\} + \lambda \left(\int_0^1 |g''(x)|^p \, dx\right)^{1/p}$ with $\rho_\tau(u) = u\{\tau - I(u < 0)\}$, $p \ge 1$, and appropriately chosen $\mathcal{G}$. For the particular choices $p = 1$ and $p = \infty$ we characterise solutions $\hat{g}$ as splines, and discuss computation by standard $l_1$-type linear programming techniques. At $\lambda = 0$, $\hat{g}$ interpolates the $\tau$th quantiles at the distinct design points, and for $\lambda$ sufficiently large $\hat{g}$ is the linear regression quantile fit (Koenker & Bassett, 1978) to the observations. Because the methods estimate conditional quantile functions, they possess an inherent robustness to extreme observations in the $y_i$'s. The entire path of solutions, in the quantile parameter $\tau$ or the penalty parameter $\lambda$, may be efficiently computed by parametric linear programming methods. We note that the approach may be easily adapted to impose monotonicity and/or convexity constraints on the fitted function. An example is provided to illustrate the use of the proposed methods.
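The quantreg package implements this estimator as rqss(); the following sketch fits a 0.9-quantile smoothing spline to simulated heavy-tailed data. The data and the lambda value are arbitrary choices for illustration.

```r
# Quantile smoothing spline via quantreg::rqss on simulated data.
library(quantreg)

set.seed(7)
x <- sort(runif(200, 0, 10))
y <- sin(x) + 0.3 * rt(200, df = 3)     # heavy-tailed noise

fit <- rqss(y ~ qss(x, lambda = 0.5), tau = 0.9)  # lambda = roughness penalty
plot(x, y)
lines(x, predict(fit, newdata = data.frame(x = x)),
      col = "blue")                     # fitted 0.9-quantile curve
```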
Article
A rapidly changing environment has affected organizations' ability to maintain viability. As a result, traditional performance evaluation with precise and deterministic data encounters problems under the various criteria and uncertain situations of a complex environment. The purpose of this paper is to propose an applicable model for evaluating the performance of the overall supply chain (SC) network and its members. Performance evaluation methods that do not include uncertainty obtain inferior results. To overcome this, rough set theory (RST) was used to deal with such uncertain data, and a rough noncooperative Stackelberg data envelopment analysis (DEA) game was extended to construct a model for evaluating the performance of the supply chain under uncertainty. The concept of the Stackelberg leader-follower game is applied in order to develop models for measuring performance, and the ranking method of the noncooperative two-stage rough DEA model is discussed. The developed model is suitable for evaluating the performance of the supply chain network and its members when it operates in uncertain situations involving a high degree of vagueness. The application of this paper provides a valuable procedure for performance evaluation in other industries. The proposed model provides useful insights for managers on the measurement of supply chain efficiency in uncertain environments. This paper creates a new perspective on the use of performance evaluation models to support managerial decision-making in dynamic environments and uncertain situations.
Article
Small-to-medium sized enterprises (SMEs) in the manufacturing sector are increasingly strengthening their web presence in order to improve their visibility and remain competitive in the global market. With the explosive growth of unstructured content on the Web, more advanced methods for information organization and retrieval are needed to improve the intelligence and efficiency of the supplier discovery and evaluation process. In this paper, a technique for automated characterization and classification of manufacturing suppliers based on their textual portfolios is presented. A probabilistic technique that adopts the Naïve Bayes method is used as the underlying mathematical model of the proposed text classifier. To improve the semantic relevance of the results, classification is conducted at the conceptual level rather than at the term level typically used by conventional text classifiers. The necessary steps for training data preparation and representation related to the manufacturing supplier classification problem are delineated. The proposed classifier is capable of forming both simple and complex classes of manufacturing SMEs based on their advertised capabilities. The performance of the proposed classifier was evaluated experimentally based on the standard metrics used in information retrieval, such as precision, recall, and F-measure. It was concluded that the proposed concept-based classification technique outperforms the traditional term-based methods with respect to accuracy, robustness, and cost.
Article
Supply chain optimization for biomass-based power plants is an important research area due to greater emphasis on renewable power energy sources. Biomass supply chain design and operational planning models are often formulated and studied using deterministic mathematical models. While these models are beneficial for making decisions, their applicability to real world problems may be limited because they do not capture all the complexities in the supply chain, including uncertainties in the parameters. This paper develops a statistically robust quantile-based approach for stochastic optimization under uncertainty, which builds upon scenario analysis. We apply and evaluate the performance of our approach to address the problem of analyzing competing biomass supply chains subject to stochastic demand and supply. The proposed approach was found to outperform alternative methods in terms of computational efficiency and ability to meet the stochastic problem requirements.
Conference Paper
The web presence of manufacturing suppliers is constantly increasing, and so is the volume of textual data available online that pertains to the capabilities of manufacturing suppliers. To process this large volume of data and infer new knowledge about the capabilities of manufacturing suppliers, different text mining techniques such as association rule generation, classification, and clustering can be applied. This paper focuses on classification of manufacturing suppliers based on the textual descriptions of their capabilities available in their online profiles. A probabilistic technique based on the Naïve Bayes method is adopted and implemented using the R programming language. Casting and CNC machining are used as the example classes of suppliers in this work. The performance of the proposed classifier is evaluated experimentally based on standard metrics such as precision, recall, and F-measure. It was observed that, in order to improve the precision of the classification process, a larger training dataset with more relevant terms must be used.
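A stripped-down version of such a classifier can be put together with e1071's naiveBayes(); the paper's own implementation is not shown here, and the terms, classes, and data below are invented for illustration.

```r
# Naive Bayes classification of suppliers from term-presence features.
library(e1071)

docs <- data.frame(
  casting = factor(c("y", "y", "n", "n", "y", "n")),
  cnc     = factor(c("n", "n", "y", "y", "n", "y")),
  milling = factor(c("n", "y", "y", "y", "n", "y")),
  class   = factor(c("casting", "casting", "machining",
                     "machining", "casting", "machining")))

nb <- naiveBayes(class ~ ., data = docs)
predict(nb, docs[1:2, ])                 # predicted class labels
predict(nb, docs[1:2, ], type = "raw")   # posterior probabilities
```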
Article
Design teams belonging to powertrain divisions can speed up the process of managing information within gearbox design activities by adopting digital pattern tools. These tools, belonging to a knowledge-based engineering (KBE) system, can assist engineers in re-using company knowledge in order to improve time-consuming tasks such as the retrieval and selection of previous architectures, and to modify and virtually test a new gearbox design. A critical point in the development of a KBE system is the usability of the user interface, needed to demonstrate an effective reduction of development time and satisfaction in its use. In this paper, the authors address the problem of improving the usability of the graphical user interface (GUI) of the tool belonging to the previously proposed KBE system. An approach based on the analytic hierarchy process and multiple-criteria decision analysis is used. A participatory test is performed to evaluate the usability index of the GUI. Taking into account the data analysis, some changes are carried out and a new GUI release is validated through new experiments.
Article
Global manufacturing has extended the supply chain not only in the span of networks and the dispersion of geographical locations but also, more importantly, across vast organizational boundaries among webs of long supply chains. Naturally, the information and computation technology (ICT) systems supporting these entities serve well on their own merits independently; however, the sharing of information among them in a responsive, agile, and secure fashion is limited. The advent of cloud computing provides a distinct possibility to enhance the sharing of information, to better apply modern analytic tools, and to improve the management of controlled access as well as security. This paper addresses the architectural issues of the data, analytics, and organizational layers.
Article
Since the time of Gauss, it has been generally accepted that $l_2$-methods of combining observations by minimizing sums of squared errors have significant computational advantages over earlier $l_1$-methods based on minimization of absolute errors advocated by Boscovich, Laplace and others. However, $l_1$-methods are known to have significant robustness advantages over $l_2$-methods in many applications, and related quantile regression methods provide a useful, complementary approach to classical least-squares estimation of statistical models. Combining recent advances in interior point methods for solving linear programs with a new statistical preprocessing approach for $l_1$-type problems, we obtain a 10- to 100-fold improvement in computational speeds over current (simplex-based) $l_1$-algorithms in large problems, demonstrating that $l_1$-methods can be made competitive with $l_2$-methods in terms of computational speed throughout the entire range of problem sizes. Formal complexity results suggest that $l_1$-regression can be made faster than least-squares regression for $n$ sufficiently large and $p$ modest.
Article
The effects of collaborative planning, forecasting and replenishment on the performance of supply chains have been discussed in the literature. In this research paper, we posit that these effects, along with other collaborative factors, influence the success of collaboration in supply chains. The objective of this paper is to uncover the impact of collaborative planning, collaborative decision making of supply chain partners, and collaborative execution of all supply chain processes on the success of collaboration. We used empirical analysis to validate our research paradigm. Data were obtained through a questionnaire survey of customers of a textile company. We used confirmatory factor analysis and structural equation modelling (using AMOS). The results of the analysis confirm that the factors of collaboration impact the success of supply chains, which will lead to future collaborations. Collaborative execution of supply chain plans will also have an impact on future collaborations. Companies that are interested in supply chain collaborations can consider engaging in long-term collaboration depending on the success of current collaborations. This will help SC partners to make investment decisions particular to collaboration.
Article
This study describes a method of designing a graphic user interface (GUI)-based human–computer interface for a process control room, where the users monitor and control the manufacturing processes. The process control room of a steel manufacturing company was selected as a case study to apply the method developed. The method consists of six phases: (1) surveying human–computer interface design guidelines appropriate for control room tasks; (2) defining the requirements for designing new user interfaces; (3) evaluating the current user interfaces; (4) developing design rules and guidelines for new interfaces; (5) designing user interfaces and developing prototypes for implementation; and (6) evaluating the prototypes and redesigning. The new user interfaces that were developed on the basis of the method are expected to enhance task efficiency and safety by reducing human errors.
Article
Purpose The purpose of this paper is to empirically examine the impact of internal and external collaborative forecasting and planning on logistics and production performance. Design/methodology/approach To measure the degree of collaborative forecasting and planning, the concept of collaboration is categorized into three dimensions: sharing resources, collaborative process operation, and collaborative process improvement. Based on these dimensions, a survey of Japanese manufacturers was conducted and the analytical model is proposed to examine using structural equation modeling. Findings There are positive relationships between internal and external collaborative forecasting and planning. Upstream and downstream collaborative forecasting and planning are also positively related. Internal collaborative forecasting and planning has a positive effect on relative logistics and production performance. External collaborative forecasting and planning does not have a significant effect on relative logistics and production performance. Research limitations/implications This study does not clarify how firms can achieve the improvement of forecasting and planning process. Future research should investigate the mechanism of process improvement in supply chain. Practical implications Not only sharing resources and collaborative process operation but also collaborative process improvement play a crucial role in gaining sustainable competitive advantage in logistics and production. Originality/value This study focuses on the forecasting and planning process in supply chain and proposes new dimensions measuring the degree of collaborative forecasting and planning. By focusing on the process and using the dimensions, the relationship between supply chain collaboration and performance are discussed concretely.
Article
It has long been recognized that the mean provides an inadequate summary whereas the set of quantiles can supply a more complete description of a sample. We introduce bivariate quantile smoothing splines, which belong to the space of bilinear tensor product splines, as nonparametric estimators for the conditional quantile functions in a two-dimensional design space. The estimators can be computed by using standard linear programming techniques and can further be used as building-blocks for conditional quantile estimations in higher dimensions. For moderately large data sets, we recommend penalized bivariate B-splines as approximate solutions. We use real and simulated data to illustrate the methodology proposed.
Article
Recent research suggests that the bullwhip effect, or increasing variability of inventory replenishment orders as one moves up a supply chain, illustrates inventory management inefficiencies. This paper quantifies the bullwhip effect in the case of serially correlated external demand, if autoregressive models are applied to obtain multiple steps demand forecasts. A materials requirements planning (MRP) based inventory management approach is proposed to reduce the order variance. Simulation modeling is used to investigate the impact of the forecasting method selection on the bullwhip effect and inventory performance for the most downstream supply chain unit. The MRP based approach is shown to reduce magnitude of the bullwhip effect while providing the inventory performance comparable to that of a traditional order-up approach. The application of autoregressive models compares favorably to other forecasting methods considered according to both the bullwhip effect and inventory performance criteria.
Article
This paper describes a model predictive control strategy to find the optimal decision variables to maximize profit in supply chains with multiproduct, multiechelon distribution networks with multiproduct batch plants. The key features of this paper are: (1) a discrete-time MILP dynamic model that considers the flow of material and information within the system; (2) a general dynamic optimization framework that simultaneously considers all the elements of the supply chain and their interactions; and (3) a rolling horizon approach to update the decision variables whenever changes affecting the supply chain arise. The paper compares the behavior of a supply chain under centralized and decentralized management approaches, and shows that the former yields better results, with profit increases of up to 15% as shown in an example problem.
Article
Full collaboration in supply chains is an ideal that the participant firms should try to achieve. However, a number of factors hamper real progress in this direction. Therefore, there is a need for forecasting demand by the participants in the absence of full information about other participants’ demand. In this paper we investigate the applicability of advanced machine learning techniques, including neural networks, recurrent neural networks, and support vector machines, to forecasting distorted demand at the end of a supply chain (bullwhip effect). We compare these methods with other, more traditional ones, including naïve forecasting, trend, moving average, and linear regression. We use two data sets for our experiments: one obtained from the simulated supply chain, and another one from actual Canadian Foundries orders. Our findings suggest that while recurrent neural networks and support vector machines show the best performance, their forecasting accuracy was not statistically significantly better than that of the regression model.
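To give a flavor of such a comparison, the sketch below pits an SVM against linear regression for one-step-ahead forecasting from lagged demand. The simulated AR(1) series and feature construction are assumptions for illustration, not the paper's data.

```r
# SVM vs. linear regression for one-step demand forecasting (toy setup).
library(e1071)

set.seed(5)
demand <- as.numeric(100 + 10 * arima.sim(list(ar = 0.7), n = 300))
d <- data.frame(y = demand[3:300], l1 = demand[2:299], l2 = demand[1:298])
train <- d[1:250, ]; test <- d[251:298, ]

fit_lm  <- lm(y ~ l1 + l2, data = train)
fit_svm <- svm(y ~ l1 + l2, data = train)    # eps-regression by default

rmse <- function(p, a) sqrt(mean((p - a)^2))
rmse(predict(fit_lm,  test), test$y)         # linear-regression error
rmse(predict(fit_svm, test), test$y)         # SVM error
```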
Article
Supply chain management (SCM) has been a major component of competitive strategy to enhance organizational productivity and profitability. The literature on SCM that deals with strategies and technologies for effectively managing a supply chain is quite vast. In recent years, organizational performance measurement and metrics have received much attention from researchers and practitioners. The role of these measures and metrics in the success of an organization cannot be overstated because they affect strategic, tactical and operational planning and control. Performance measurement and metrics have an important role to play in setting objectives, evaluating performance, and determining future courses of actions. Performance measurement and metrics pertaining to SCM have not received adequate attention from researchers or practitioners. We developed a framework to promote a better understanding of the importance of SCM performance measurement and metrics. Using the current literature and the results of an empirical study of selected British companies, we developed the framework presented herein, in hopes that it would stimulate more interest in this important area.
Article
This paper presents a study on the impact of forecasting model selection on the value of information sharing in a supply chain with one capacitated supplier and multiple retailers. Using a computer simulation model, this study examines demand forecasting and inventory replenishment decisions by the retailers, and production decisions by the supplier under different demand patterns and capacity tightness. Analyses of the simulation output indicate that the selection of the forecasting model significantly influences the performance of the supply chain and the value of information sharing. Furthermore, demand patterns faced by retailers and capacity tightness faced by the supplier also significantly influence the value of information sharing. The result also shows that substantial cost savings can be realized through information sharing and thus help to motivate trading partners to share information in the supply chain. The findings can also help supply chain managers select suitable forecasting models to improve supply chain performance.
Article
Random Forests were introduced as a Machine Learning tool in Breiman (2001) and have since proven to be very popular and powerful for high-dimensional regression and classification. For regression, Random Forests give an accurate approximation of the conditional mean of a response variable. It is shown here that Random Forests provide information about the full conditional distribution of the response variable, not only about the conditional mean. Conditional quantiles can be inferred with Quantile Regression Forests, a generalisation of Random Forests. Quantile Regression Forests give a non-parametric and accurate way of estimating conditional quantiles for high-dimensional predictor variables. The algorithm is shown to be consistent. Numerical examples suggest that the algorithm is competitive in terms of predictive power.
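This is the method behind the quantregForest package cited in the reference list below; a minimal usage sketch on a built-in dataset:

```r
# Quantile Regression Forest: conditional quantiles, not just the mean.
library(quantregForest)

aq <- na.omit(airquality)
X  <- aq[, c("Solar.R", "Wind", "Temp")]
y  <- aq$Ozone

qrf <- quantregForest(x = X, y = y, ntree = 500)

# 10th/50th/90th conditional percentiles give a prediction interval.
predict(qrf, newdata = X[1:5, ], what = c(0.1, 0.5, 0.9))
```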
Article
We discuss the following problem: given a random sample $X = (X_1, X_2, \ldots, X_n)$ from an unknown probability distribution $F$, estimate the sampling distribution of some prespecified random variable $R(X, F)$, on the basis of the observed data $x$. (Standard jackknife theory gives an approximate mean and variance in the case $R(X, F) = \theta(\hat{F}) - \theta(F)$, $\theta$ some parameter of interest.) A general method, called the "bootstrap", is introduced, and shown to work satisfactorily on a variety of estimation problems. The jackknife is shown to be a linear approximation method for the bootstrap. The exposition proceeds by a series of examples: variance of the sample median, error rates in a linear discriminant analysis, ratio estimation, estimating regression parameters, etc.
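The abstract's first example (variance of the sample median) needs only a few lines of base R; a toy version with invented data:

```r
# Bootstrap estimate of the sampling variability of the median.
set.seed(123)
x <- rexp(50)                                   # observed sample (toy)

B <- 2000
boot_medians <- replicate(B, median(sample(x, replace = TRUE)))

sd(boot_medians)                                # bootstrap standard error
quantile(boot_medians, c(0.025, 0.975))         # percentile interval
```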
Article
An analytic criterion for rotation is defined. The scientific advantage of analytic criteria over subjective (graphical) rotational procedures is discussed. Carroll's criterion and the quartimax criterion are briefly reviewed; the varimax criterion is outlined in detail and contrasted both logically and numerically with the quartimax criterion. It is shown that the normal varimax solution probably coincides closely with the application of the principle of simple structure. However, it is proposed that the ultimate criterion of a rotational procedure is factorial invariance, not simple structure, although the two notions appear to be highly related. The normal varimax criterion is shown to be a two-dimensional generalization of the classic Spearman case, i.e., it shows perfect factorial invariance for two pure clusters. An example is given of the invariance of a normal varimax solution for more than two factors. The oblique normal varimax criterion is stated. A computational outline for the orthogonal normal varimax is appended.
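Base R ships a varimax() implementation, so the criterion can be tried directly on principal-component loadings; a short illustration on a built-in dataset:

```r
# Varimax rotation of PCA loadings toward "simple structure".
pca <- prcomp(USArrests, scale. = TRUE)
raw <- pca$rotation[, 1:2]        # loadings of the first two components

rot <- varimax(raw)
rot$loadings                      # rotated loadings
rot$rotmat                        # the orthogonal rotation matrix
```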
Conference Paper
Decision trees are attractive classifiers due to their high execution speed. But trees derived with traditional methods often cannot be grown to arbitrary complexity for possible loss of generalization accuracy on unseen data. The limitation on complexity usually means suboptimal accuracy on training data. Following the principles of stochastic modeling, we propose a method to construct tree-based classifiers whose capacity can be arbitrarily expanded for increases in accuracy for both training and unseen data. The essence of the method is to build multiple trees in randomly selected subspaces of the feature space. Trees in different subspaces generalize their classification in complementary ways, and their combined classification can be monotonically improved. The validity of the method is demonstrated through experiments on the recognition of handwritten digits.
Article
An important observation in supply chain management, known as the bullwhip effect, suggests that demand variability increases as one moves up a supply chain. In this paper we quantify this effect for simple, two-stage, supply chains consisting of a single retailer and a single manufacturer. Our model includes two of the factors commonly assumed to cause the bullwhip effect: demand forecasting and order lead times. We extend these results to multiple stage supply chains with and without centralized customer demand information and demonstrate that the bullwhip effect can be reduced, but not completely eliminated, by centralizing demand information.
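A simulation sketch of this effect (an order-up-to retailer with a p-period moving-average forecast and lead time L; all parameter values are arbitrary assumptions, not the paper's model) shows the amplification directly:

```r
# Bullwhip-effect toy simulation: variance of orders vs. variance of demand.
set.seed(11)
n <- 5000; L <- 2; p <- 5
demand <- as.numeric(50 + 5 * arima.sim(list(ar = 0.6), n = n))

orders <- numeric(n)
for (t in (p + 2):n) {
  mu_now  <- mean(demand[(t - p):(t - 1)])      # forecast after period t-1
  mu_prev <- mean(demand[(t - p - 1):(t - 2)])  # forecast after period t-2
  # order-up-to: replace observed demand plus the shift in base-stock level
  orders[t] <- demand[t] + (L + 1) * (mu_now - mu_prev)
}

var(orders[(p + 2):n]) / var(demand[(p + 2):n])  # bullwhip ratio > 1
```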
A text mining tributed manufacturing environments
  • P Yazdizadeh
Yazdizadeh, P., and Ameri, F., 2015. "A text mining tributed manufacturing environments". Journal of Computing and Information Science in Engineering, 8(1), p. 011002.
Website Usability Testing Software-Improving User Experience and Satisfaction With Community College Websites
  • Dishman
Dishman, M., 2015. Website Usability Testing Software - Improving User Experience and Satisfaction With Community College Websites.
Fraud Detection for Online Retail Using Random Forests
  • J Altendrof
  • P Brende
  • L Lessard
Altendrof, J., Brende, P., and Lessard, L., 2005. "Fraud detection for online retail using random forests". Technical Report.
quantregForest: Quantile Regression Forests. R package version
  • N Meinshausen
Meinshausen, N., 2017. quantregForest: Quantile Regression Forests. R package version 1.3-7.
shiny: Web Application Framework for R
  • W Chang
  • J Cheng
  • J Allaire
  • Y Xie
  • J McPherson
Chang, W., Cheng, J., Allaire, J., Xie, Y., and McPherson, J., 2018. shiny: Web Application Framework for R. R package version 1.2.0.
  • Efron