RAIRO-Oper. Res. 58 (2024) 4197–4220 RAIRO Operations Research
https://doi.org/10.1051/ro/2024142 www.rairo-ro.org
SMOOTHLY PASS THE PARCEL: IMPLEMENTING THE THEORY OF SWIFT,
EVEN FLOW
Wolfgang Garn1,*, James Aitken1and Roger Schmenner2
Abstract. This research examines the application of the Theory of Swift, Even Flow (TSEF) by
a distribution company to improve the performance of its processes for parcels. TSEF was deployed
by the company after experiencing lean improvement fatigue and diminishing returns from the time
and effort invested. This case study combined quantitative and qualitative approaches to develop a
good understanding of the operation. This approach enabled the business to utilise Discrete Event
Simulation (DES), which facilitated the implementation of TSEF. From this study, the development of
a novel DES application revealed the primacy of process variation and throughput time, key factors in
TSEF, in driving improvements. The derived DES approach is reproducible and demonstrates its utility
with production improvement frameworks. TSEF, through the visualisations and analysis provided by
DES, broadened the scope of improvements to an enterprise level, therefore assisting the business
managers in driving forward when lean improvement techniques stagnated. The impact of the research
is not limited to the theoretical contribution, as the combination of DES and TSEF led to significant
managerial insights on how to overcome obstacles and substantiate change.
Mathematics Subject Classification. 90-10, 90B06.
Received March 5, 2024. Accepted July 8, 2024.
1. Introduction
This paper focuses on the flow of parcels through a distribution company’s processes and the aspects of
its operations that impede throughput. Improving the flow of parcels is more important than ever, given the
continuing development of internet retailing and the concomitant increase in the volume of parcel shipments.
Because of the frequency of delivery and the growth of final destinations, network entropy is increasing. Providing
a cost-efficient and time-bound service under such circumstances is a significant test for any organisation engaged
in distribution. This research explores the approach developed by a specific firm and its use of the theory of swift,
even flow (TSEF) to develop enterprise-wide improvement facilitated by Discrete Event Simulation (DES).
The case study firm decided to adopt TSEF after 4 years of implementing lean principles with declining success
and concerns about the sustainability of improvements achieved [1]. The lean campaign involved site-specific
improvements that had failed to involve the wider process. The disconnect between lean implementation projects
and business-wide strategic improvements has been evidenced by many researchers [2,3]. Swift, even flow, on
Keywords. Theory of swift even flow, discrete event simulation, operations management, process improvement.
1Surrey Business School, University of Surrey, Guildford, Surrey, UK.
2Kelley School of Business, Indiana University, Indianapolis, Indiana, USA.
*Corresponding author: w.garn@surrey.ac.uk
© The authors. Published by EDP Sciences, ROADEF, SMAI 2024
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0),
which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
the other hand, enlarged the company’s field of vision to include its end-to-end processes and unlocked savings
across its organisational boundaries. This shift in perspective, for the case study firm, required some fundamental
changes in management’s view of productivity within and across organisational boundaries. Visualising the
benefits of applying a TSEF approach was critical to the management team and the operators within the site.
This contributed to the deployment of TSEF and enabled the researchers to empirically test the concept.
Two factors underpin TSEF: process variation and throughput time [4]. Improving both factors is expected to
deliver competitive improvements across an enterprise [5,6]. DES was selected as the simulation tool to model
and underpin the implementation of TSEF. DES is a prominent operational research tool, especially where
scenarios, such as mail sorting centres, are too complex to gain meaningful insights using deterministic methods.
For instance, Vrgoč and Čerić [7] used simulation to map and understand the parcel sorting operations to design
optimal structures aiding the analysis and decision-making. Klomjit et al. [8], through simulation, demonstrated
how to improve the efficiency of a parcel service company. Though DES has been deployed to model parcel flows,
the authors are unaware of any previous work that has empirically applied DES to underpin the implementation
of a theoretical concept such as TSEF. This gap in the literature is addressed through the contribution of this
study.
Deploying DES to support the implementation of TSEF provided the researchers with an opportunity to
empirically test the swift, even flow concept as an instrument for change, moving the idea from the realms of
academia to the practitioners’ arena. This establishes DES as our method and motivates the use of TSEF. Three
key questions were asked in conducting the investigation: (a) Can TSEF break through where lean principles
become stymied?, (b) What is the role of DES in discovering process inefficiencies, validating the feasibility of
change implementations, and saving costs? and (c) Does DES support TSEF as a business-level improvement
tool?
The case study offers four pivotal insights: (1) the validation of the TSEF’s process variability and through-
put time factors as critical dimensions in improving business performance, (2) the presentation of a novel DES
application to support TSEF implementation, (3) the development of the DES approach, which is reproducible
and demonstrates its utility with production improvement frameworks, and (4) how the TSEF, through broad-
ening the scope of improvements to an enterprise level, can assist businesses in driving forward when lean stagnates.
The impact of the research is not limited to theoretical contribution, as the combination of DES and TSEF led
to a nationwide change of operations and significant cost savings for the case study firm.
The next section reviews the literature about TSEF, DES, and relevant applications. This is followed by a
discussion of the case study firm. Subsequently, the methodology used is explained, followed by a discussion
of the discrete event simulation that was an important part of the implementation. After that discussion, the
results of the implementation of TSEF are presented, and those results are then discussed in more detail.
2. Literature review
Researchers have used TSEF to investigate the cost and flow of patients through healthcare processes [9–11],
to understand the performance of service firms over several years [12], to develop circular economy manu-
facturing initiatives combining TSEF principles with lean practices [13], and to explain why some manufacturing
firms’ operational performance provides advantages over their competitors.
Schmenner defines the Theory of Swift, Even Flow in this way:
“The theory of swift, even flow states that two factors and only two factors are essential to productivity
gain, no matter how one measures them. The first essential factor is to reduce variation. That variation
can be of three types: quality, quantities, and timing. That is, one wants (1) to reduce defects and to
perfect quality, (2) to even out the varieties of goods produced and the quantities of each so that each
day of production resembles every other day of production, and (3) to produce with a regular timing or
sequence to production. The second essential factor is to measure the time it takes to produce something
from start to finish – its throughput time – and to reduce that throughput time as much as possible.
Swift, even flow concentrates its attention on the flow of materials through a process; it asks people to
take the viewpoint of the materials moving through a process. By reducing the variation and throughput
time of those materials, one eliminates the non-value-added aspects of production, which is where the
cost and inefficiencies lie”. ([4], p. 345)
Swift, even flow developed from Schmenner’s empirical work on factory productivity. It is a theory that helps
to explain how a variety of modern techniques and philosophies work as they do, among them lean operations,
the theory of constraints, Six Sigma, and factory focus ([5], Chaps. 4 and 8). TSEF has been used to explain
the huge leaps in productivity that accompanied the creation of the factory, the development of the continuous
flow process, the moving assembly line, and other significant milestones in industrial history [4,6].
TSEF does not seek to diminish the power of the landmark lean paradigm [14]. Instead, it provides a ratio-
nale for lean operations, and for other concepts, such as factory focus, that can affect a company’s entire
supply chain. A focused factory has one (or two) overarching objectives (key manufacturing tasks) that allow
an optimised process, usually with a narrower range of products. Focused factories can expect to outperform
general-purpose production operations [15,16]. By so doing, TSEF can overcome a common weakness of lean
implementation, namely bogging down within individual functions, which can limit lean progression and poten-
tial [6,17–20]. Several researchers highlight the potential for lean principles to be a boundary-spanning improve-
ment approach; however, such boundary-spanning implementations are rarely observed in practice [17,21,22]. Driving improvement
based on an enterprise-level process perspective overcomes the limitations of functionally driven, task-orientated,
lean approaches that many organisations adopt [17,20]. TSEF provides management with a platform from which
to envision and reconfigure the entire process, supporting the organisation in its drive for continuous improve-
ment. Through better-integrated processes, TSEF can enable higher operational performance as bottlenecks
and variability diminish [23].
Discrete Event Simulation is a valuable tool for understanding the flow of items through various process
stages. More precisely, DES models the flow of entities through a system using discrete time steps created by
state changes, where the state changes are triggered by events which often follow a random distribution [24].
This definition agrees well with the concepts we mentioned for the TSEF, such as “people taking the viewpoint
of the materials moving through a process”. Queueing systems such as the M/M/s [25,26] are a good example
of modelling a process stage, where operational characteristics can be obtained analytically or through DES.
Typical operational characteristics are the number of items and time spent in the queue, service or system [27].
These are used to identify bottlenecks, plan required capacity and allocate resources. Generally, DES is used in
capacity planning to test theories about required resources [28]. This can relate to operational characteristics
such as parcel volume, processing speed and number of employees. Bottleneck and process analysis are theoretical
concepts substantiated by simulation analysis. DES identifies bottleneck effects, which are alleviated through
process improvements. Additionally, DES can test scheduling theories to find improved shifts and workforce
allocation. Most of these aspects will be explored in detail in our case study.
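As an illustration of how such operational characteristics can be obtained analytically, the following sketch computes the standard M/M/s queue measures via the Erlang C formula. The arrival rate, service rate and number of servers are purely illustrative and are not taken from the case firm.

```python
from math import factorial

def mms_metrics(lam, mu, s):
    """Analytical M/M/s characteristics: returns (Lq, Wq, L, W).
    lam: arrival rate, mu: service rate per server, s: number of servers."""
    a = lam / mu                      # offered load
    rho = a / s                       # utilisation, must be < 1 for stability
    erlang_c = (a**s / factorial(s)) / (1 - rho) / (
        sum(a**k / factorial(k) for k in range(s)) + a**s / factorial(s) / (1 - rho))
    Lq = erlang_c * rho / (1 - rho)   # expected number waiting in the queue
    Wq = Lq / lam                     # expected waiting time in the queue
    return Lq, Wq, Lq + a, Wq + 1 / mu

# Hypothetical example: 5 parallel machines, 600 trolleys/h arriving,
# 150 trolleys/h processed per machine.
print(mms_metrics(600, 150, 5))
```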
3. Case study firm
The case study firm is a European national distribution business focused on the sorting, distribution and
delivery of high-volume parcels, among other items. The organisation is split into regions that operate as hubs
for the processing of parcels from local, national, and international customers. Each region has transportation,
sorting, and distribution operations. Even though these operations differ in size and complexity, they are linked
by a common performance goal of delivering parcels anywhere within the country within 24–48 h, depending
upon the service purchased by the customer. Delivery timeliness is critical in terms of customer service.
3.1. The process
The activities within the parcel process are triggered by a continuous stream of arriving trucks at the Oper-
ations Hub. Vehicles are unloaded, and the parcels are moved into the preparation area, where a rough filtering
process puts them into trolleys for further processing. The preparation and sorting areas follow a schedule. The
volume and timing of incoming parcels exhibit strong variations from day to day. The flow of incoming parcels
could not be controlled in this study.
The sorting area at the Operations Hub consists of several identical machines that run in parallel. Parcels are
transferred from the preparation area to the machines in such a way that the first machine is filled until it runs
at full capacity. Only then are subsequent machines utilised. The machines are continuously filled with items
to be sorted according to their destination location. The sequencing stage processes the sorted items in more
detail on separate machines to deliver them efficiently to their final destinations. A sequenced batch contains
the parcels in the order in which they will be delivered to end customers.
3.2. Characterising the operations hub and the distribution centres
The regional Operations Hub provided each of its Distribution Centres with parcels in two waves (batches)
each day. At the Distribution Centres (DCs), the two sequenced batches were merged by hand before being
processed further. The regions operated as independent entities that were measured on performance at a local,
not a company-wide, level.
The Operations Hub could be characterised as follows:
(a) Mission: to turn the chaos of the arriving parcels into an orderly sequence of parcels that subsequent oper-
ations could use to deliver them to their destinations. The Operations Hub under study fed 20 Distribution
Centres.
(b) Metrics: the major metrics used were “items per hour per machine” and “workers per machine”.
(c) Issues: because of these metrics, the incentive was to keep the sorting and sequencing machines busy and
always to process all of the parcels that had been received that day. This is why the Operations Hub
provided each of the 20 Distribution Centres with parcels in two waves (batches).
Each Distribution Centre could be characterised similarly:
(a) Mission: to take the output of the Operations Hub and to sort the parcels into smaller batches for delivery
by hundreds of delivery vehicles.
(b) Metrics: how quickly can the delivery people get their batches ready for delivery?
(c) Issues: because the Operations Hub fed each Distribution Centre twice during the day, the delivery people
had to merge the two batches by hand. This involved much work and considerable space so that the final
delivery sequence could be accomplished accurately. Delivery could not proceed until both waves of parcels
were merged at the Distribution Centre. In essence, the Distribution Centre was forced to engage in sorting
and sequencing itself.
3.3. History of improvement initiatives
For 4 years, the company had used a Japanese lean operations consultant and had deployed lean tools to
make improvements to its operations at the major sorting hub under study (Fig. 1) [29]. The approach initially
provided increases in labour productivity and equipment utilisation. However, early gains over the 4 years were
not maintained, with overall equipment effectiveness (OEE) increasing initially by 3% and then falling back to
0.5% as the lean campaign continued.
The company’s approach to improvement focused on its Operations Hub and not on its entire company-wide
operations. Such an approach is commonly deployed by organisations engaged in a lean campaign [30]. Although
implementing lean into parts of a process is a pragmatic and common occurrence [17], reducing the supply chain
and its processes into its constituent parts, instead of taking an end-to-end process perspective, can obscure the
causes of problems [31,32]. The partial implementation of lean thinking within the company’s functional silos
had not engendered a lean philosophy across the entire business. Instead, it created islands of excellence [33].
Such localisation has been found to diminish the ability of organisations to sustain improvements [34]. The
financial benefits delivered by the case study company’s lean improvement approach had begun to dwindle over
the 4 years, leading to questions about the sustainability and purpose of continuing.
Figure 1. Process flow schematic (as-is scenario).
Recently, the company has been undergoing a series of modernisation activities due to a change in its own-
ership. This change in ownership prompted the firm to step up its improvement efforts. The first area selected
for company-wide improvement was the distribution of small parcels. This project provided the opportunity
for improvement in sorting, transportation, and distribution. End-to-end process changes across functional silos
were recognised as offering potentially significant increases in cost and service performance. The operating hub
and distribution centre management teams, which remained unchanged following the ownership change, were
eager to address the limitations of localised area improvements and to move forward.
The researchers had initially been invited by the case study firm to investigate the organisation’s approach
to improvement after the lean campaign had begun to deliver diminishing returns. After discussions with
senior executives, it became apparent that something more was needed to help the firm move forward with its
continuous improvement initiatives. The management team was introduced to the concept of swift, even flow,
and they read Schmenner’s 2012 book [5].
Upon learning about swift, even flow and asking themselves the questions of where variation exists in the
process and where throughput time bogs down, the company’s managers hypothesised that there could be
savings in transportation and handling costs by condensing the two process waves into one. They envisioned
different strategic “missions” for the Operations Hub and each Distribution Centre. The Operations Hub’s
product would no longer be “waves” of sorted packages but a single sequenced daily batch of them. This batch
would become the single input for each Distribution Centre. The Distribution Centres would no longer have to
merge the batches. This simplified the missions for both operations. Management also realised that the metrics
they had used for each location and the incentives that those metrics fostered had to be changed to unleash the
potential of the organisation [15]. In academic parlance, two “focused factories” would be created in place of
the more chaotic, overlapping situation that had prevailed.
Once this strategic insight was agreed upon, the managers’ concern was whether the Operation Hub’s capacity
would be sufficient to process all parcels in a single batch. Changes in the initial sorting operation were expected
to show up as financial gains in the subsequent transportation and distribution operations. This represented
a marked change in approach as it would cross functional boundaries and require cross-party cooperation,
an essential, strategic issue that the company’s lean campaign had not addressed. Management would have to
consider the flow of information and products across their sites to deliver the benefit. Doing so can be challenging
because applying new approaches across organisational boundaries can result in resistance by employees [35].
We readily acknowledge that a different consultant could have advocated for the same action plan that is
reported in this article. Nevertheless, an experienced Japanese lean operations consultant, in work spaced over
4 years, missed the opportunity that we recognised almost immediately using the theory of swift, even flow. It
has been said that there is nothing as useful as a good theory, and for us, this case study provides another
supporting example. This paper does not doubt the powerful track record of lean, but the firm had failed to
progress with its lean approach [17]. However, the research emphasis here is on the usefulness of TSEF in
providing a platform for change, including the strategic change embodied in the focused factory concept.
4. Methodology
Case study research supports “empirical research that primarily uses contextually rich data from bounded
real-world settings to investigate a focused phenomenon” ([36], p. 329). Utilising a case study approach for
deductive, theory-testing purposes within operations management is a fruitful methodological approach [37–39].
However, this case study research is not exclusively deductive. While TSEF provided the basic logic for
the research questions posed, the data analysis and empirical findings exhibited inductive features. As Ketokivi
and Choi ([40], p. 235) explain in their review of case study research, “Theory-testing is driven by theoretical
deduction, but not exclusively limited to it”.
The case study research design combined a quantitative and qualitative approach to gathering and analysing
data [41]. Gathering a mix of quantitative and qualitative data enabled the research team to obtain a good
understanding of the operation [42,43] and a “synergistic view of the evidence” gathered ([44], p. 533). On
the quantitative side, varied data collection methods provided strong substantiation of the theoretical model.
Furthermore, three investigators were deployed, strengthening the confidence and credibility of the findings
[36,44]. Case selection is a critical step in case study research as it focuses the efforts of the investigators.
Cases should be chosen that aid researchers to “replicate or extend the emergent theory” ([44], p. 537). By
examining TSEF within a case study, the researchers had the opportunity to examine the concept using a
business improvement approach. The details and criteria used to select the chosen case are as follows:
– It had actively pursued variability reduction in its processes. The organisation worked with lean tools and
techniques, such as TQM, SPC, TPM, and 5S, for 4 years to minimise process variation against a background
of high volatility in customer demand.
– It demonstrated an interest in improving its throughput time and, therefore, flow in its processes.
– Through the mapping of the process and the development of simulations and animations to visualise flow,
the business itself identified opportunities for improvement.
– The case study company, as a result of changes in ownership, had begun to look at altering the flow of parcels
across functional boundaries to gain end-to-end supply chain benefits instead of pursuing a traditional silo
approach. With this change in its point of view, the company could potentially overcome the limitations of
its “islands of excellence” experience by applying lean principles [17].
– It was willing to execute changes as a result of the research so that the researchers could observe changes to
the processes and organisation as they unfolded.
4.1. Qualitative aspects
Data were collected through a multiple-method approach, including semi-structured interviews, observations
and internal document reviews. Interviews were conducted with 16 people, ranging from senior group executives
to front-line operators, across the Operations Hub and the Distribution Centres (Tab. 1 for details). Information
on the views of the participants, as well as data on changes in performance due to the application of TSEF and
factory focus, were collected from observations made at meetings and as the process was altered.
Quarterly review meetings were conducted with the steering committee in charge of implementing the changes.
These meetings provided project updates as well as insights into technical and organisational issues. Senior
management progress presentations permitted the project team to update management on progress and obstacles
to implementation. These sessions helped to develop a standardised approach for the future implementation of
TSEF and factory focus across other regions and sites.
These feedback sessions also provided an opportunity to triangulate our findings with the people managing
and operating the processes, providing internal validity [9]. Following interviews, meetings and observations, the
research team met to discuss and consider the challenges and successes that the organisation was experiencing.
These post-meeting sessions allowed the researchers to work together to reach a consensus view of the progress
and issues faced by the company.
Table 1. Interview details.
Role(s) | Duration and frequency
Hub management (including operations director, quality manager, improvement manager and logistics manager) | Interviewed for 60–90 min before and post-TSEF implementation
DC manager | Interviewed for 45 min before and 30 min post-TSEF implementation
Hub shift supervisors (two), logistics supervisor and operators (one despatch operator and two parcel operators) | Interviewed for 20–45 min before and post-TSEF implementation
DC operators (two) | Interviewed for 15–20 min before and post-TSEF implementation
Group Management (Head of Design, Technical and Logistics) | Each interviewed for 40–50 min post-TSEF implementation
4.2. Quantitative aspects
Although the case study company’s managers were open to the application of TSEF to their operations, some
of them still needed convincing. Therefore, it was decided to embark on several quantitative exercises that could
help the managers envision what the adoption of swift, even flow and focused factories could mean for them. To
that end, data were collected directly from the case study firm and researcher measurements and observations.
Historical data covering 2 years were gathered and analysed. Of particular interest were data on:
1. Demand – the delivery profile from day to day,
2. Quality – waste reduction, quality levels,
3. Bottlenecks – machine capacities, throughput rates, capacity constraints, and utilisation,
4. Scheduling and resource planning,
5. Variability – volumes, transport times, operations times.
Staats et al. ([42], p. 380) suggest that before investigating future changes, it is important to identify the
previous “initiative’s empirical performance” in a quantitative manner. Data were collected and assessed for
reliability and accuracy. For example, researchers tested efficiencies and utilisation through observation and
measurement. Although the recorded output data were found to be accurate, the standards used to gauge
performance were found to be at variance with the machine manufacturers’ published data. Machines were
found to be “slow running”, and agreed performance standards were below the potential of the process, leading
to inflated efficiency figures. These data provided the research team with an understanding of “true” performance
changes due to improving flow and reducing variances. The overall case study has the following sequence: Case
selected; Protocol and data collection; Data Analysis (simulation); TSEF Pilot and data collection; Discrete
Event Simulation; Answering research questions; Literature comparison; and Research closure.
5. Discrete event simulation
In this section, we provide a detailed analysis of the problem and a methodological approach to applying the
TSEF to the DES.
The goal of the simulation was to compare the current (as-is) model with the proposed (to-be) scenario so
that the company’s managers could see the advantages of the perspective taken by the theory of swift, even flow.
The as-is structure is shown in Figure 1, and the to-be scenario is depicted in Figure 6. Specifically, the aim
was to quantify the reduction of labour and the value-added process time. Furthermore, the effects on variation
by removing the second process cycle can be observed.
The simulation design follows the classical phases. To begin with, the input data for the simulation was
collected. This data was used to determine arrival rates, throughput rates and capacities for each process
stage. Probability distributions [27] were fitted accordingly. The simulation was realised using a discrete event
simulator, specifically the Rockwell Arena simulator. The appendix details the discrete event simulation imple-
mentation details, such as the processes, logical controls and specific stages. This offers complete transparency
and reproducibility of the case study. Each process stage was verified independently. The simulation structure
and results were validated by subject matter experts. This was done for each process stage and the entire pro-
cess chain. The design of the experiments took into account sufficient variations of input, output and resources.
Multiple replications were used to increase the confidence of the results.
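For completeness, the sketch below shows how results from independent replications translate into a confidence statement; the replication values and the 95% t-quantile for nine degrees of freedom are illustrative and are not the study's figures.

```python
from statistics import mean, stdev
from math import sqrt

def replication_ci(values, t_crit=2.262):
    """95% confidence interval for a simulation output based on independent
    replications (default t_crit is for n - 1 = 9 degrees of freedom)."""
    n = len(values)
    half_width = t_crit * stdev(values) / sqrt(n)
    return mean(values) - half_width, mean(values) + half_width

# e.g. simulated sorting durations (h) from 10 hypothetical replications
print(replication_ci([5.4, 5.7, 5.5, 5.6, 5.8, 5.3, 5.6, 5.5, 5.7, 5.6]))
```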
To configure the simulation models appropriately, all essential process stages (Fig. 1) have to be analysed.
The overall demand for parcels, which is the input and output, is the driver of the whole process. Thus,
understanding and quantification are the first steps in the analysis (Sect. 5.1). The arrival of the “parcels” via
trucks is explained in more detail (Sect. 5.2). The flow of parcels through the various process stages in the
“as-is” scenario is specified and shown in Sections 5.3 and 5.4. These sections explain the technical details and
measurements. In Section 5.5, particular emphasis is given to timing. The timings suggest the feasibility of
combining duplicated process stages (cycles). This is confirmed with the “to-be” simulation scenario (Sect. 5.6).
Further, this improved process flow leads to cost savings.
5.1. Incoming demand and daily profiles
The number of items received by the sorting centre daily was recorded over almost 2 years (98 weeks). The
weekly volume was 2.04 million parcels, on average. A linear trend analysis indicated a year-to-year decline
in parcel volume of about 2.1% (Fig. 2a). The weekday profile is shown in Figure 2b. The figure highlights
that Wednesday is the “heaviest” day. Therefore, special attention was given to that day, and all weekdays were
normalised based on its 97% quantile expected volume. A 3% service-level violation on the heaviest day was seen
as more than acceptable by the practitioners. That means we expected that 97% of all Wednesdays would have
a volume that is less than 525 979 parcels. On average, a Wednesday has 411 689 parcels (normally distributed
with a standard deviation of 75 123 parcels). To get an idea of service-level volumes, we determined the 90% and
97% quantile parcel volumes per weekday in addition to the average volume. The 90% quantile parcel volume
was directly derived from the sample of 98 weeks, whilst the 97% quantile was based on a normal distribution
assumption. Given the above Wednesday data, other absolute quantities can be derived. For instance, Tuesday’s
average volume is 35.2% ×525 979 parcels = 194 649 parcels. The profile analysis highlighted the variability of
demand in terms of weekdays and arrival times. It showed that if there was sufficient capacity in the sorting centre
and distribution centres to deal with Wednesday demands, then the other weekdays could be accommodated
as well. It can be seen that a potential solution to improve the flow of parcels through the process chain would
have to be able to operate under significant variances in demand across the week. The nature of the demand
suggests that the operation can be designed as a pull system ([45], p. 656). A strategy of levelling out the daily
demand variations cannot be implemented due to the company’s service agreements.
The normal distribution has been found by fitting the data to several classic distributions (see Appendix for
details). The choice of the normal distribution is theoretically motivated by the high number of independent
observations, the large volume and the low coefficient of variation [46]. An additional advantage of choosing the
normal distribution is the ease of deriving and explaining service levels.
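A minimal sketch of this fitting and service-level calculation is shown below; it uses synthetic placeholder volumes generated with the reported Wednesday mean and standard deviation rather than the firm's records.

```python
import numpy as np
from scipy import stats

# Placeholder Wednesday volumes (parcels) over 98 weeks -- synthetic data,
# not the case firm's records.
volumes = np.random.default_rng(1).normal(411_689, 75_123, size=98)

mu, sigma = stats.norm.fit(volumes)      # maximum-likelihood normal fit
q90 = np.quantile(volumes, 0.90)         # empirical 90% quantile from the sample
q97 = stats.norm.ppf(0.97, mu, sigma)    # 97% quantile under the normal assumption
print(f"mean={mu:,.0f}, sd={sigma:,.0f}, 90%={q90:,.0f}, 97%={q97:,.0f}")
```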
5.2. Incoming parcel arrival stream
The above volume is delivered to the sorting centre via trucks with varying loads. Inter-arrival patterns of
trucks are shown in Figure 3a. The 17 observations took place between 6:40 pm and 4:05 am and were confirmed
via 9 weekly repetitions. It was assumed that the obtained pattern was representative of each Wednesday and
could be extended to a 24-h time frame.
Figure 2. (a) Demand/weekly volume time series; (b) weekday profile.
Figure 3. (a) Inter-arrival time distribution of trucks; (b) sequencing labour and machine
throughput times.
An exponential distribution was fitted to the data, giving a maximum likelihood estimate of 33.2 min for
the mean. Thus, we expected 43.4 trucks (using Little’s Law N = λT) over 24 h to carry an average load of
412 000 parcels. A truck carries, on average, 9492 parcels with a standard deviation of 1732 parcels (normally
distributed, derived from the overall demand and the nine observational repetitions).
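For clarity, the reported truck count and average load per truck follow directly from Little’s Law and the overall daily demand:
$$N = \lambda T = \frac{1\ \text{truck}}{33.2\ \text{min}} \times 1440\ \text{min} \approx 43.4\ \text{trucks}, \qquad \frac{412\,000\ \text{parcels}}{43.4\ \text{trucks}} \approx 9\,492\ \text{parcels per truck}.$$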
Arrival processes are commonly modelled using a Poisson process, implying an exponential distribution [25],
which is known as a memoryless process. The mean arrival time of approximately 30 min was also observed by
Bartholdi and Hackman ([47], p. 24) for trucks arriving at warehouses.
5.3. Throughput rates and capacity
The throughput rate is defined as the number of items that are processed to completion during a specified
period. The nominal (design) capacity is the maximum achievable throughput rate under ideal workload condi-
tions. The usable (effective) capacity is the average achievable throughput rate under “typical” (high) workload
conditions. Here, the service rate will be defined as the usable capacity. Utilisation is defined as the actual
throughput as a percentage of nominal capacity. Efficiency is the actual throughput as a percentage of usable
capacity.
The firm investigated the application of TSEF to its Operations Hub and Distribution Centres specifically
to reduce variation and improve throughput time. To this end, the throughput rate for each process stage was
measured. The challenge here was converting different batch units, i.e., finding the “smallest” common entity.
In the beginning, the units of arrivals are truckloads. These units are transformed into cage trolleys, followed by
items (parcels) for analytical consideration. The analytical considerations were primarily based on throughput
rates (λ), volume (N) and time (T). The relation of these measures can be expressed using Little’s Law:

$$N = \lambda T. \quad (1)$$

Figure 4. (a) Observed throughput for sorting; (b) fitted sorting time.
The throughput rates for all process steps were determined. The analysis of available and necessary times for
each process step showed that sequencing was the critical process step because the machines can only start once
the items have been sorted. Interestingly, this is due to the nature of the process rather than its performance.
Throughput rates for all process steps were determined based on actual observations rather than the machines’
specified maximum throughput rates. As indicated in the above definitions, the provided workload at each
process stage (i.e., the fill factor of buffers/queues) is essential for the actual throughput. That means random
arrivals without sufficiently filled item buffers lead to significant drops in the throughput rate at a process stage.
5.4. Sorting, sequencing and merging process stage characteristics
Several sorting machines (4–6, average: 4.85), including the operating personnel, were observed. Thirteen
observations were made over 87 days. Each observation analysed a planned run of 5 h. Figure 4a displays the
operational throughput rate observations.
The variability is mainly due to human interaction in the feeding process or when removing full cage trolleys.
A gamma distribution with parameters α = 36.98 and β = 0.5778, with a log-likelihood of 34.7, was fitted
to describe the service times (Fig. 4b). This leads to an average sorting machine throughput of λ = 15 157
items/h and an average total processing time of 5.51 h for all five machines. The overall throughput rate for
sequencing varied between 5475 items/h and 8536 items/h per machine. Figure 3b shows the corresponding
labour (preparation and destack time) and machine (three passes) throughput times for the sequencing stage,
as well as the workload (volume of parcels).
This indicates that a higher volume of parcels can be prepared by human resources than is required in the
subsequent machine stage. Labour’s service time was approximated by fitting an exponential distribution with
a mean of 11.1 s plus a 9-s offset. The machine performance depends on the workload, as Figure 3b shows.
The machines’ best performance (26.4 s for 100 items) was used as the nominal capacity (assuming the ideal
workload). Further, this capacity will be used to describe the machine’s maximum service rate. The workforce
required to feed the machines has a higher throughput rate than the machines (Fig. 3b), indicating possible
resource savings and a further reduction in process speed variations.
The merging process stage takes place in the distribution centres. The directive is that a person should process
(merge) 32 items per minute. However, the actual observations showed that a worker has an average throughput
of 11.3 items per minute. Non-standard and variable approaches to executing the sequencing tasks were found
to diminish the throughput rate. For example, operators would operate differently in terms of preparation for
merging. Some would organise their parcels to be closer to the workstation before work commences, whilst others
would prefer to walk between the loading bays to collect their parcels during the merging period. In total, 515
workers are available in all the distribution centres. Table 2 summarises the service time probability distributions
identified for the entire process chain.

Table 2. Service time/rate probability distributions of essential process stages.

Process stage | Distribution | p1 | p2 | p3 | Unit | Arena expression
Daily volume | Normal | μ = 4116.9 | σ = 751.2 | | 100 parcels/day |
Truck arrivals | Exponential | λ = 33.2 | | | min/truck | EXPO(33.2)
Load per truck | Normal | μ = 94.92 | σ = 17.32 | | 100 parcels/truck | NORM(94.92, 17.32)
Preparation | Uniform | a = 3.375 | b = 4.125 | | s/100 parcels | UNIF(3.375, 4.125)
Sorting rate | Triangular | a = 113 | c = 180 | b = 213 | 100 parcels/h |
Sorting time | Gamma | α = 36.98 | β = 0.578 | | s/100 parcels | GAMM(36.98, 0.5778)
Seq. time - labour | Exponential | λ = 11.1 | c = 9 | | s/100 parcels | 9 + EXPO(11.1)
Seq. time - machines | Constant | c = 26.4 | | | s/100 parcels | 26.4
Transport | Uniform | a = 20 | b = 40 | | min/100 parcels | UNIF(20, 40)
Merging | Normal | μ = 9.1 | σ = 1.82 | | min/100 parcels | NORM(9.1, 1.82)

Figure 5. Time-activity diagram.
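The study's model was built in Rockwell Arena. As a rough, reproducible illustration of the front end of the same process chain, the sketch below wires the truck-arrival, preparation and sorting stages together in the open-source SimPy library, using the distributions of Table 2 and the resource counts of Table 3 (three preparation workers, five sorting machines; time unit: minutes). Treating a whole truckload as a single work unit at each stage is a simplification of the actual cage-trolley handling.

```python
import random
import simpy

random.seed(42)
SIM_MINUTES = 24 * 60  # one daily cycle

def truck_source(env, prep_workers, sorters, done):
    """Trucks arrive with EXPO(33.2) min inter-arrival times and normal loads."""
    while True:
        yield env.timeout(random.expovariate(1 / 33.2))
        load = max(0.0, random.normalvariate(9492, 1732))   # parcels per truck
        env.process(handle_load(env, load, prep_workers, sorters, done))

def handle_load(env, load, prep_workers, sorters, done):
    hundreds = load / 100                                    # Table 2 rates are per 100 parcels
    with prep_workers.request() as req:                      # preparation stage
        yield req
        yield env.timeout(hundreds * random.uniform(3.375, 4.125) / 60)
    with sorters.request() as req:                           # sorting stage
        yield req
        yield env.timeout(hundreds * random.gammavariate(36.98, 0.5778) / 60)
    done.append((env.now, load))

env = simpy.Environment()
prep_workers = simpy.Resource(env, capacity=3)
sorters = simpy.Resource(env, capacity=5)
done = []
env.process(truck_source(env, prep_workers, sorters, done))
env.run(until=SIM_MINUTES)
print(f"{len(done)} truckloads sorted, {sum(l for _, l in done):,.0f} parcels")
```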
5.5. Timings and process flow
Figure 5 shows the essential activities and their respective timings for the two batch process cycles. Transitions
between activity timelines involve storage and movement. As explained above, a truck arrives on average every
33.2 min (varying arrivals and workloads). The first truck arrives at 4 am, and arrivals continue until the cut-off
time of 8 pm. The time window [4 am, 8 pm] of 16 h defines the first batch. Once it is 8 pm, the volume for
batch 1 is known. The second batch run covers the remaining 8 h and completes a full daily cycle irrespective of
the day of the week. At 4 am, the actual volume for the day is known (Fig. 5). The received items are unloaded
and prepared in a dedicated area. The systematic preparation is discontinued at 1 pm and substituted with an
ad-hoc preparation at the sorting machines.
The sorting process starts at 2 pm and stops at 10 pm. Here, a complication can occur when items cannot be
fed into the sorting machines. Usually, these are small amounts which are dealt with manually before sequencing
starts. The criteria used to start the sequencing process varied occasionally and were based on the utilisation of
Table 3. Durations, resources and costs per activity.
Activity | Batch | Start | Duration: Plan, Sim., Diff. | Resources: PR, HR | Planned cost: Total, PR, HR | Simulated busy cost: PR, HR, Diff.
Arrival & prepare Batch 1 04:00 16.00 16.00 3 720 720 66 1014
Batch 2 20:00 8.00 8.00 3 360 360
Sort Batch 1 14:00 7.00 5.61 1.39 5 5 1295 770 525 555 378 1379
Batch 2 22:00 5.50 5.51 0.01 5 5 1018 605 413
Sequence Batch 1 22:00 5.00 4.01 0.99 6 6 1110 660 450 686 356 401
Batch 2 04:30 1.50 1.70 0.20 6 6 333 198 135
Transport Batch 1 04:00 0.50 0.48 0.02 20 20 370 220 150 424 289 27
Batch 2 06:30 0.50 0.48 0.02 20 20 370 220 150
Merging Batch 1 05:00 0.75 0.98 0.23 515 5794 5794 9669 13
Batch 2 08:00 0.50 0.55 0.05 515 3863 3863
Delivery All 09:00 515
Total 29 45.3 43.3 1.9 15 232 2673 12 559 11 334 1090 2808
workers and the capacity of the equipment. Success for the area was assessed on the overall equipment efficiency
per machine based on running time and labour efficiency, not the achievement of the schedule, which was a
plant-level measure. This view mistakenly treats labour efficiency as indicative of productivity [4,15].
The sequencing stage for small parcels operates as a batch operation. The sequencing machine group was
identified in the study as a bottleneck in the supply chain and, therefore, a limitation to increasing the throughput
of the machines. The researchers observed that certain machines were operating at full capacity intermittently
whilst others ran at a lower level consistently. Some operators would fully load the equipment for short periods
and then leave the area to collect further parcels or have unplanned rest breaks. Others would ensure that
a sufficient workload was available to support a constant volume over the allocated period. Both approaches,
reminiscent of the tortoise and hare fable, eventually produced the planned output. The observations highlighted
the non-standardised work procedures across the area. Issues of employees failing to adhere to standard operating
procedures, therefore diminishing the power of lean, were a common occurrence.
The sequencing stage is followed by a transportation activity, where trucks distribute the items to the corre-
sponding distribution centres. Here, a fleet of 20 trucks and drivers was used, and travel times varied with an
average duration of approximately 30 min. These transportation journeys start at 5 am for batch 1 and at 8 am
for batch 2. In the distribution centres, the merging occurs with an aggregated workforce of 515 people. The
planned durations are 45 min and 30 min for batches 1 and 2, respectively. Table 3summarises all the activities
and their duration characteristics. It also shows the associated resources and costs.
The resources are divided into physical resources (PR) and human resources (HR). The number of available
(or assigned) physical and human resources are abbreviated with n_p and n_h, respectively. For instance, the
transport activity from the operations hub to the distribution centres requires n_p = 20 trucks and n_h = 20
drivers. The planned cost for using 20 trucks for half an hour is determined by $22/h × 0.5 h × 20 trucks =
$220. Roughly speaking, the busy cost is the product of the resource cost, the busy time and the number of busy resources. A
more precise formulation is:

$$\sum_{\Delta t \in T} c\,\Delta t, \quad (2)$$

where Δt is a time interval during which a resource is used for servicing, c is the cost of using the resource, and T is the
set of all such time intervals (which can overlap). The simulated busy cost is the busy cost with the intervals taken from
the simulation (abbreviated with t_s). Note that

$$\sum t_s < (d_{s1} + d_{s2})(n_p + n_h), \quad (3)$$

where d_si is the duration obtained by the simulation for batch i.
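A minimal sketch of the busy-cost calculation in equation (2) is given below; the per-hour rates ($22 per truck, an assumed $15 per driver) are inferred from the planned-cost figures in Tables 3 and 4 and should be read as assumptions.

```python
def busy_cost(intervals):
    """Equation (2): sum of cost-rate x busy-time over all (possibly
    overlapping) intervals in which a resource is servicing."""
    return sum(c * t for c, t in intervals)

# Hypothetical log for the transport activity: 20 trucks at $22/h and
# 20 drivers at an assumed $15/h, each busy for 0.48 h (cf. the simulated durations).
transport_log = [(22.0, 0.48)] * 20 + [(15.0, 0.48)] * 20
print(round(busy_cost(transport_log)))  # about 355, against a planned $370
```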
The previous subsections have given a detailed explanation of the current scenario and raise the question: Is
it possible to combine the operations of batches 1 and 2?
Figure 6. Optimised process flow (to-be scenario).
Table 4. Results of simulated to-be scenario.
Activity | Start | Duration: Plan, Sim., Diff. | Resources: PR, HR | Planned cost: Total, PR, HR | Simulated busy cost: PR, HR, Diff.
Arrival & prepare 04:00 24.00 24.00 3 1080 1080 66 1014
Sort 15:30 11.12 12.08 0.96 5 5 2057 1223 834 544 371 1143
Sequence 02:40 5.71 5.62 0.09 6 6 1268 754 514 670 349 248
Transport 08:05 0.48 0.48 0.00 20 20 355 211 144 210 143 2
Pick-up 08:35 0.19 0.19 0.00 515 1468 1468 1211 257
Delivery 09:00 515
Total 29 41.5 42.4 0.9 6228 2188 4040 2634 929 2664
5.6. To-be scenario
This sub-section will show that sufficient resources are available to allow a single batch run. The To-Be
scenario (Fig. 6) simplifies the As-Is scenario (Fig. 1) by combining the two batches.
The perceived bottleneck in the area was not machine capacity but scheduling. Labour would be scheduled to
move between sequencing equipment and another area of the plant to balance the workloads across the different
areas. The shift manager explained the logic behind this approach as a “balancing act”. While the small parcel
area waited for the next batch to build up, the operators could be gainfully employed and work in another
part of the business to ensure high labour efficiencies. “We work in two cycles as this is a more efficient use of
labour. While we wait for the next batch to build up, we move labour to prep work in the large parcels area,”
stated a supervisor. However, the perceived “efficient” use of labour did not improve the throughput time for
sequencing small parcels. Focusing on and improving labour and equipment efficiencies had no impact on the
overall throughput time of the process and its potential competitive advantage [15].
The to-be scenario details are shown in Table 4. The activities are a subset of the as-is scenario. They range
from the arrival and preparation of parcels to delivering them.
It can be seen that activity durations overlap, which supports the importance of using simulation rather than
average value calculations. In this scenario, the arrival and preparation at the operations hub is continuous
throughout a complete day cycle (24 h) rather than being split up into a 16-h and 8-h batch (as done in the
as-is scenario). To find appropriate activity start times of operations and transportation, the latest allowed
delivery time (9 am) at the distribution centre is the starting point for calculations. The expected durations (in
the above table “plan” columns) are obtained by using the throughput rates found in the previous subsections.
Backtracking these durations leads to the specified start times. Simulations allow further refinements of the
anticipated durations because of their ability to consider the whole process chain’s random behaviour (varia-
tions). The averages from multiple simulation runs were used in the “sim” column. Another advantage of DES
is the availability of resulting probability distributions for service-level considerations. It is recommended to
use those values rather than the “plan” values. For instance, it can be seen that the simulated sorting duration
is about an hour longer than the planned duration, which is a more reliable measure. However, the overall
duration of the to-be scenario is similar to the as-is scenario (2.2% difference). The cost savings are substan-
tial. The planned cost savings are 59.1% using the to-be scenario ($6.2 k/day) rather than the as-is scenario
($15.2 k/day). The planned costs assume that personnel have to be paid even when resources are not adding
value. The busy cost focuses on the value-added services only. The busy cost (value added) savings are 71.3%.
A closer investigation of the tables reveals that these savings were mainly due to removing the excessive
labour cost that was caused by the manual merging process.
The unevenness of flow in the small parcel area was a result of resource planning, labour and machine
utilisation, and non-standardised work practices, not machine capacity. By running in two batches, management
optimised machine running efficiency and delivered against their KPIs for utilisation. This also meant that the
sequencing operation, due to sufficient buffer capacity (time), did not lead to any blockage in the preceding
upstream process steps. The downstream supply chain, however, experienced “starvation”. The manual merge
area at the Distribution Centre received parcels in two batches. This meant that unloading vehicles and handling
products would occur twice. The first batch would be unloaded and reside in the merge area until the second
delivery of parcels arrived. This led to space problems, particularly around peak periods such as Black Friday
and Christmas, as operators would have to manoeuvre around their work-in-progress parcels until such time
that they could execute the merging activity.
Smoothing the flow of work through the sequencing area was expected to provide a continuous volume of
products across the supply chain. This was expected to reduce transport costs between the operations and result
in fewer process delays, less duplicate handling, and unnecessary motion. However, achieving these benefits would
require a change in not only the planning of resources across the supply chain but also the key performance
indicators (KPIs) used to drive performance. To achieve the support required, the project team mapped and
analysed the processes leading to the development of simulations and animations to explain and show the
potential benefits of the changes.
6. Implementation results
The data analysis for each process stage identified essential statistics on volume and time distributions. The
implementation results are summarised in Table 2. The findings highlighted that the normally distributed daily
volume was experiencing a 2.1% yearly decline. This insight allowed the operations management to forecast the
demand and develop confidence in the intended service changes. The identification of the weekly profile and the
usage of the demand for the heaviest day assures management that process changes are feasible and achievable
at the required service level. The design/dimensioning of the process was based on a 97% service level (Fig. 2).
And, because of the forecasted reduction in future volumes, the service level will be even higher in the future.
The visualisation of the as-is simulation scenario in the simplified schematic shown in Figure 1caused the
questioning of the need for the second process cycle. All previous lean approaches were only applied to the
operations hub but missed out on the detrimental manual merging step within the distribution centre, believing
it to be a necessity due to capacity constraints within the operations hub. The TSEF promotes an even flow
rather than a “stop-and-go” approach caused by repeating process steps twice. The collected data, its analysis
and simulation demonstrated to the case study firm that condensing the two cycles of parcel sorting, as shown in
Figure 6, was both feasible and desirable. This reduces the waiting time for parcels in the process and smooths
the flow. The important aspect to consider was the runtime of the sequencing step, which can be derived from
Little’s Law using the throughputs from Table 2. The average throughput rate was 7128 items/h per sequencing
machine, a rate sufficient to handle most periods. This visualisation of the process led to the decision to proceed
with the project and implement the principles of TSEF.
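As a rough consistency check, assuming the six sequencing machines shown in Table 4 and the 97% design volume from Section 5.1, Little's Law gives
$$T = \frac{N}{\lambda} \approx \frac{525\,979\ \text{parcels}}{6 \times 7\,128\ \text{parcels/h}} \approx 12.3\ \text{h},$$
which is in line with the roughly 12.5 h single-flow window referred to below.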
Given this analysis, the two sequence cycles were combined, leading to cost savings in transport and labour
between the Operations Hub and the Distribution Centres. The significant cost reduction was in the Distribution
Centre (over 90%), whilst most of the changes in process and working practices occurred in the Operations Hub.
Smoothing the flow across 12.5 h by removing the batching approach to sequencing resulted in the eradication
of the merging activity in the Distribution Centre and reduced transport movements.
Through piloting the new way of working, the savings demonstrated by the simulation (Tab. 4) were beginning
to be realised. However, they were not fully matured before our study finished. Savings, as expected, were
mainly due to removing the excessive labour cost that was caused by the manual merging process. Further, based
on the pilot, condensing the two batch cycles into a single even flow was projected to deliver annualised savings of
106 000 travelled kilometres and 2117 h of travel time for the Distribution Centres. Labour savings
due to the change in flow were significant, resulting in a redistribution and refocus of labour to improve the
service offering and frequency of deliveries to major population centres. Thompson [48] showed that controllable
work improves labour utilisation, which was confirmed during this project. Furthermore, rejected parcels from
the Operations Hub that were manually handled by the Distribution Centres were reduced by 1.5% in terms of
volume, leading to additional savings. Minimisation of rework improved the flow of parcels through the supply
chain and reduced the effort required to handle them as operational failures diminished. Operators recorded a
reduction of over 60% in time wasted travelling between goods-in and final despatch.
7. Discussion
These empirically grounded findings show that the application of TSEF can indeed improve the performance of
a services-based organisation. To make it work, however, several inhibitors to reducing variation and throughput
time improvement had to be overcome. In this section, we address those inhibitors: (i) silos, (ii) inappropriate
performance measures, (iii) lack of vision, and (iv) sources of variation.
Toppling silos. One of the major impediments to developing a TSEF approach was the organisational
structure that existed within the case study firm. Historically, managers devoted attention to their immediate
area of responsibility. Such a silo perspective limited understanding of the enterprise-wide improvements that
could be implemented [17,49]. Functional orientations reduced both the flow of information and the end-to-end
process data that could be used to optimise the flow of value across the organisation. Silos also minimise inter-
nal coordination, and that hinders the ability of a firm to manage demand fluctuations [11]. This silo problem
surfaced in this case with the cancellation of several meetings between the TSEF project team and the DC.
The director had to intervene. “Resistance from managers there [DC] delayed the implementation. Once we
could explain and show the benefits, this improved. We are just not used to talking about working together
to make improvements”, explained one project leader from the Operations Hub. Reducing organisational bar-
riers and developing an end-to-end perspective that can drive flow across functional boundaries was critical to
implementing TSEF.
The change in ownership created the impetus for improving flow and developing an inter-organisational
improvement perspective. Harmonising activities end-to-end improved the decision-making within the entire
organisation. Skinner ([15], p. 56) highlighted the importance of altering the “approaches in materials and
workforce management” as critical to unlocking the competitive advantage of a factory. Cross-site teams were
established to support enhanced communications and information sharing across supply chain boundaries. “Cre-
ating a single batch run will deliver substantial savings across the pipeline of our entire business”, stated the
head of design for the group. The management of the company recognised that current work practices and gov-
ernance structures could be limiting the organisation’s opportunities. This aligns with the argument of Bamford
et al. [17] on the development of lean that full adoption of the concept requires the removal of “restrictions and
blockages to progress”. By adopting TSEF and building upon the benefits of previous lean projects, management
enabled company-wide improvements to be made.
Overcoming inappropriate performance measures. Altering the flow across the company required
the case study firm to create new metrics because the historical approach, which had been the foundation for
improvement, was no longer appropriate. Operationally, the case study firm concentrated on increasing efficiency
when the machines ran by maximising loading for discrete and unconnected periods. This surging approach
was driven by KPIs such as Overall Equipment Effectiveness (OEE) and labour efficiencies, which measured
output when the machine ran. The weaknesses of a productivity approach that focuses tightly on the efficiency
of workers through the application of more stringent controls “detracts attention from the structure of the
production system itself" ([15], p. 56). Achieving improvements in the evenness of flow requires management to
focus on measures of variability and throughput time reduction, not labour and machine efficiencies [10,11]. Our
findings align with the view of Onofrei et al. [6], Schmenner [5] and Skinner [15] that measures of performance
are important. However, they can be misleading if not used to drive appropriate supply chain and factory
improvements.
Moving beyond the modus operandi of incremental lean improvements required a “deal breaker,” stated the
Operations Hub director. By utilising a TSEF perspective, the company recognised that an end-to-end process
change would not only deliver significant benefits but would also widen the influence of its lean ethos [17]. Using
TSEF to envision what the process could become permitted the case study firm to concentrate on increasing value
and eliminating waste. The resulting company-wide improvement plan (i.e., focusing the factories) built upon
previous successes.
Using simulation to aid vision in managers. The case study's use of DES and animations demonstrated
to the organisation the potential of looking at supply chain-level improvements. Realising the potential of TSEF
required visualising the flow of parcel distribution. For services, developing a map that engages, is dynamic, and
represents the flow of value through an organisation is a significant challenge [30]. Simulations and animations
provided such a mechanism for the case study firm. There are various ways to enhance simulations to make
them more accessible. Turner and Garn [24] discuss aspects such as immersive simulations in virtual reality and
optimisations, which may increase acceptance among managers. Data analytics provided the platform for TSEF to
demonstrate its power to shift the focus of change from individual activities to the wider enterprise. Through
developing simulations to demonstrate the benefits of an even flow of parcels between the process stages, the
project team gained buy-in to implement the changes to the process within the operations hub and its linked
distribution centres.
“Seeing what would happen to my job once the changes occurred made it easier to support it, though they
still have to sort out the number of failures at the Operations Hub for it to work”, stated one operator. The
visualisations developed through modelling aided the project team in explaining the potential benefits to the
organisation. Developing a mechanism that provides employees with the confidence to try new ideas in a safe
environment is critical for long-term sustainability and lean improvements [21]. Experiments with the physical
system would have affected daily operations. Hence, simulation was chosen, a decision supported by Kelton
et al. ([50], p. 3), who explain that simulation is a particularly useful approach for modelling complex systems.
Borshchev and Grigoryev ([51], pp. 26–36) support this view and identify simulation as a requirement for
companies in their decision-making processes. Discrete event simulation lends itself naturally to
be a TSEF tool since it is based on entities flowing through the system, characterising and defining variations
caused in various process stages.
Understanding where variation comes from. The research identified that the variability that affects
flow can be generated either externally or internally [11]. Customer-derived variability is an important factor
in service-based organisations, and it can be addressed by smoothing the demand entering the process [49]. This
option, however, was not available to the case study firm. On the other hand, reducing internally generated
variance was possible. Our findings illustrate that the major gain for the business was achieved through even-
ness of flow. Removing the in-built stoppages to smooth flow inherent in the design of the process delivered
the improvements sought. Smooth flow, not efficiency of machinery or labour, was the key to unlocking the
improvements and subsequent cost savings for the organisation. “We always focus on improving the process
as it is. Changing the design of the process is not something that we had considered”, remarked the quality
manager, reinforcing Skinner’s point that changes in process design are “powerful engines” for improvement.
Theoretical contributions and managerial insights. TSEF has been deployed by many researchers
to explain the underpinning rationale of productivity and performance improvements [10,11,52]. However,
empirical evidence supporting process variability and throughput time as the key factors in deploying
TSEF has been limited. This research contributes to the literature by validating the two factors as pivotal in
delivering process-based improvements. Through the development of a novel DES application, the case study
identified and overcame several obstacles to TSEF implementation.
The derived DES approach in this study is reproducible and demonstrates its utility with production improve-
ment frameworks. Combining DES and the TSEF concept highlights the value of simulations in assisting
researchers in examining process improvement issues. The lack of sustainability in continuous improvement
techniques such as lean is frequently reported [6,17,18]. The approach revealed in this paper highlights the
opportunity for future studies to utilise simulations as a lens to examine why improvements become stymied.
Managerially, our study reveals several inhibitors to reducing variation and throughput time improvements.
Silo structures and a lack of vision at an enterprise level were shown to limit progress and ambition. Inappropri-
ate performance measures were found to focus on labour and machine efficiencies rather than on reducing process
variation and thereby improving performance at the business level. TSEF, through the facilitation of DES, encouraged operations
managers to venture out beyond their realm of responsibility. Through applying DES, managers quantified and
validated the feasibility of change implementations beyond individual silos. DES revealed the importance of
managing variability effectively, providing assurance that envisaged changes were worth pursuing. The improve-
ments not only helped the operations hub save costs but, more importantly, assisted the distribution centres and
the entire business in achieving substantial cost savings. This enabled the case study company to become more
competitive.
Limitations. Our findings are derived from a single in-depth case study on the application of TSEF in a mass
service environment with synchronised activities. This limits the generalisability of the findings but has allowed
the researchers to develop insights that can be examined in the wider contexts of services. It is worth noting,
however, that the approach has allowed the organisation to develop a roll-out plan for other sites, highlighting
its transferability.
Schmenner et al. ([52], p. 339) state that the purpose of theories is to “make predictions” of how phenomena
work and that the theory can be “disproved by findings that run counter to their predictions or explanations”.
Our findings have supported the “prediction” of TSEF. However, our research was based on a single case study
of a high-volume business that had started to address some of the issues that affect the flow between the two
sites. Further research is required to test TSEF in service environments that have different process variety and
volume characteristics. Research is needed to examine the deployment of TSEF in environments where the
customer is co-creating the service, which challenges the standardisation of processes, increases variability, and
drives serial activities. As TSEF argues, “productivity rises with the speed of flow of materials through a process
and reduces with increases in the variability associated with the flow” ([16], p. 102). Examining the application
of the theory in an agile environment would be a further test of its explanatory power.
8. Conclusion
Three key questions were posed in conducting the investigation: (a) Can TSEF break through where lean
principles become stymied?, (b) What is the role of DES in discovering process inefficiencies, validating the
feasibility of change implementations, and saving costs? and (c) Does DES support TSEF as a business-level
improvement tool? The historical improvement approach utilised by the case study company had stagnated
at a low level of lean maturity [33]. Lean principles delivered isolated efficiency-based improvements but led to
sub-optimisation across the company-wide processes. The study demonstrated that DES lends itself naturally as
a tool for TSEF. This allowed the case study firm to enhance its vision for the process, develop focused
factories, and substantially reduce costs. Our research has found that TSEF, in combination with DES, offers
service organisations a practical option to improve performance.
Our findings from the case study have allowed us to elaborate on TSEF and how it can stimulate more
strategic solutions for productivity (e.g., focused factories). Our research has highlighted several mechanisms
that are important for the implementation of TSEF, moving the concept from the academic design board to the
practitioner’s toolbox. Both strategic and operational elements were found to be important if the potential of
swift, even flow is to be realised. The design of the company-wide processes that deliver value and the missions
given to different operations may lead to variation that should be managed. Removing or reducing self-induced
variation requires a strategic review of the structure of the system (e.g., the character of the focused factories
established), which is in addition to the acknowledged variations of the process itself.
Figure A.1. Process stages.
Appendix A. Discrete event simulation implementation
The as-is and to-be scenarios were implemented using Rockwell's Arena Discrete Event Simulation (DES)
software and its Input Analyzer. The simulation models can be found on GitHub [53].
Most of the probability distributions mentioned (Tab. 2) were derived using the Input Analyzer. This tool
fits several standard distributions to the given data; the one with the least mean squared error is ranked top.
Visual verification, together with the Kolmogorov-Smirnov and Chi-square tests (especially their p-values),
eventually determined which probability distribution to use to describe the service times.
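The same fitting-and-ranking workflow can be illustrated outside Arena. The sketch below is a minimal analogue in Python with SciPy (not the tooling used in the study), using a placeholder sample of service times; it ranks candidate distributions by the Kolmogorov-Smirnov statistic rather than the squared-error criterion of the Input Analyzer:

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
service_times = rng.gamma(shape=2.0, scale=1.5, size=500)        # placeholder sample (hours)

candidates = {"gamma": stats.gamma, "lognorm": stats.lognorm,
              "weibull": stats.weibull_min, "normal": stats.norm}

results = []
for name, dist in candidates.items():
    params = dist.fit(service_times)                              # maximum-likelihood fit
    ks_stat, p_value = stats.kstest(service_times, dist.cdf, args=params)
    results.append((name, ks_stat, p_value))

for name, ks_stat, p_value in sorted(results, key=lambda r: r[1]):
    print(f"{name:8s} KS={ks_stat:.4f}  p={p_value:.3f}")         # smaller KS = better fit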
The structure of the as-is process was mapped in the DES. The high-level mapped process is shown in
Figure A.1.
The modelling of the truck arrivals is shown in Figure A.2. Trucks arrive at the operations centre at 4 am, as
described in the main text. The truckloads are split into entities (1 entity = 100 parcels). After 24 h of incoming
parcels, the cut-off time for the day arrives, i.e. all further parcels are discarded. At the cut-off time (28 h),
the volume of the day is known. Figure A.2 also shows several sub-systems running concurrently. One of them
checks that the timings are logically correct (at the bottom of Fig. A.2). Another one ensures that the volume will be
set, even if trucks stop arriving. The third one ensures that the volume for the first batch is set appropriately.
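To make the arrival logic concrete, the following is a minimal sketch in Python with SimPy, a stand-in for the Arena model; all numeric values (inter-arrival times, truckload sizes) are illustrative, and simulation time zero is taken to correspond to the 4 am start:

import random
import simpy

PARCELS_PER_ENTITY = 100
CUT_OFF = 28.0                                      # hours after which further parcels are discarded

def truck_arrivals(env, storage, stats):
    while True:
        yield env.timeout(random.expovariate(2.0))  # illustrative inter-arrival time (h)
        truckload = random.randint(2000, 6000)      # illustrative parcels per truck
        if env.now >= CUT_OFF:
            stats["discarded"] += truckload         # past the cut-off: discard
            continue
        entities = truckload // PARCELS_PER_ENTITY  # split the load into 100-parcel entities
        storage.extend([env.now] * entities)        # one arrival timestamp per entity
        stats["volume"] += truckload

env = simpy.Environment()                           # time 0 = 4 am, first trucks arriving
storage, stats = [], {"volume": 0, "discarded": 0}
env.process(truck_arrivals(env, storage, stats))
env.run(until=CUT_OFF + 4.0)                        # run past the cut-off; the day's volume is fixed at 28 h
print(stats["volume"], "parcels accepted;", len(storage), "entities created")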
After the trucks' arrival, the parcels are unloaded and prepared (Fig. A.3). At this stage, we need to decide
whether parcels are classified for batch (wave) 1 or 2. This is achieved by knowing the starting time of the batch
2 cycle. The logic systems monitor the start and end time of the preparation process.
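In essence, an entity is assigned to wave 1 if it is prepared before the wave-2 cycle starts; a minimal sketch of this rule (the threshold value is purely illustrative):

def classify_wave(prep_finish_time, wave2_start=14.0):   # wave2_start is an illustrative time (h)
    """Assign an entity to batch (wave) 1 or 2 based on when it finishes preparation."""
    return 1 if prep_finish_time < wave2_start else 2

assert classify_wave(10.5) == 1
assert classify_wave(16.0) == 2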
Following the preparation, the sorting stage starts. Figure A.4 shows the modelled implementation. Initially,
the items are held in a storage area until sorting can start (specified by the sorting start time). Note that
once sorting has started, parcels can pass through the storage directly. However, if the parcels are meant to be
processed as the second batch (wave 2), they will be kept in the storage area. The required sorting time for
each parcel is recorded. The actual sorting process uses several (in this case, five) machines. The service time of
each machine varies according to a gamma distribution. We also record the number of items and other sorting
measures, such as the current throughput performance. In parallel, we run the KPI sub-system (Fig. A.5), which
records essential key performance indicators. These include the number of items processed for each of the two
batches, the throughput rate for the system, the throughput rate per machine, and the throughput time.
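A condensed SimPy sketch of this stage, with five parallel machines, gamma-distributed service times and throughput-time recording, is shown below; all parameter values and arrival times are illustrative rather than those of the case study:

import random
import simpy

SORTING_START = 6.0                                     # sorting may not begin before this time (h)
N_MACHINES = 5
SHAPE, SCALE = 2.0, 0.05                                # illustrative gamma service-time parameters (h)

def entity(env, machines, arrival_time, kpis):
    yield env.timeout(arrival_time)                     # entity reaches the storage area
    if env.now < SORTING_START:
        yield env.timeout(SORTING_START - env.now)      # held until sorting is allowed to start
    with machines.request() as req:
        yield req                                       # queue for one of the five machines
        yield env.timeout(random.gammavariate(SHAPE, SCALE))
    kpis["throughput_times"].append(env.now - arrival_time)
    kpis["sorted"] += 1

env = simpy.Environment()
machines = simpy.Resource(env, capacity=N_MACHINES)
kpis = {"throughput_times": [], "sorted": 0}
for t in [4.2, 4.5, 5.0, 7.3]:                          # illustrative entity arrival times (h)
    env.process(entity(env, machines, t, kpis))
env.run()
print(kpis["sorted"], "entities sorted; mean throughput time:",
      round(sum(kpis["throughput_times"]) / kpis["sorted"], 2), "h")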
Figure A.7 shows the transportation model. The two batches are dealt with separately. The transporta-
tion process utilises the whole fleet to transport the parcels to the distribution centres. Note that the reverse
transportation times are neglected.
After the sorting stage, the sequencing stage begins. The implementation is displayed in Figure A.6. The
storage for the sequencing is emptied for batch 1 at 10 pm. Batch 2 needs special attention because the
Figure A.2. Arrival process.
Figure A.3. Unloading and preparation of parcels.
Figure A.4. Sorting stage.
sequencing of this batch may not start before the other one has finished. The sequencing itself is split into
the labour and machine parts to reflect reality more accurately. The KPI logic system works similarly to the
one explained in the sorting stage. At the structural end of this process stage, the transportation process stage
is signalled (informed) about the status of each batch process.
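The precedence rule for the second batch can be expressed with a simple signalling event; a minimal SimPy sketch (durations are illustrative):

import simpy

def wave1_sequencing(env, wave1_done):
    yield env.timeout(5.0)               # illustrative wave-1 sequencing duration (h)
    wave1_done.succeed()                 # signal that wave 1 has finished

def wave2_sequencing(env, wave1_done):
    yield wave1_done                     # wave 2 may not start before wave 1 has finished
    yield env.timeout(3.0)               # illustrative wave-2 sequencing duration (h)
    print("wave 2 sequencing finished at", env.now, "h")

env = simpy.Environment()
wave1_done = env.event()
env.process(wave1_sequencing(env, wave1_done))
env.process(wave2_sequencing(env, wave1_done))
env.run()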
After the parcels have been delivered to the distribution centre, they are merged, as shown in Figure A.8.
After the merging process, parcels are ready for delivery. A simulation run stops as soon as the last parcel
has been merged. Simulation runs are replicated a hundred times to ensure statistical stability.
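The replication logic amounts to running the model with independent random streams and summarising each KPI across runs; a sketch (run_model is a hypothetical stand-in for one complete simulation run):

import random
import statistics

def run_model(seed):
    random.seed(seed)                     # placeholder for one full DES run; here we
    return 18.0 + random.gauss(0.0, 0.5)  # simply return a synthetic makespan in hours

N_REPS = 100
makespans = [run_model(seed) for seed in range(N_REPS)]
mean = statistics.mean(makespans)
half_width = 1.96 * statistics.stdev(makespans) / N_REPS ** 0.5   # normal approximation
print(f"makespan: {mean:.2f} h +/- {half_width:.2f} h (95% confidence interval)")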
Figure A.5. KPI logic for the sorting system.
Figure A.6. Sequencing stage.
Figure A.7. Transportation model.
Figure A.8. Merge process implementation.
Figure A.9. Implementation of to-be scenario.
Figure A.10. Sequence stage of to-be scenario.
The to-be scenario is displayed in Figure A.9. It reflects all the essential stages, implemented as sub-systems.
In general, the implementation is similar to the as-is scenario with the simplification that it is not necessary to
distinguish the two batch cycle processes.
Thus, we will only display the sorting stage (Fig. A.9) and sequence stage (Fig. A.10). Theoretically, the
earliest time the sorting could start is when the first truck arrives at 4 am. The sorting control logic requires
that sorting cannot start before a specified time. Further, a minimum number of parcels must exist in the storage
area. This allows more efficient processing. From the as-is scenario, an “average” lower bound for the earliest
sorting time can be derived through reverse engineering. We know that pick-up (former merging) will take less
than 1.53 h, transportation will take less than 0.96 h, sequencing will take less than 5.71 h, and sorting will take
less than 11.12 h. This sums to 19.32 h, and the planned delivery start time is 9 am. That means the earliest
start time for sorting can be set to 13.68 h (approximately 1:41 pm). Taking into consideration the shortened merging (pick-up) stage
(average: 0.19h, max: 0.25 h) and the shortened transportation time (average: 0.48 h, max: 0.66 h), a reduction
of 1.82 h is possible. That leads to a refined start time of 15.50 h (approximately 3:30 pm). A more precise start time can be obtained
by either using an experimental design or running an optimisation.
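For completeness, the arithmetic behind these start times can be checked directly; the short calculation below treats clock times as hours after midnight of the arrival day and assumes, consistent with the figures quoted, that delivery starts at 9 am the following day:

# all values in hours, taken from the bounds and averages quoted above
sorting, sequencing, transport, merge = 11.12, 5.71, 0.96, 1.53
total = sorting + sequencing + transport + merge        # 19.32 h of processing
delivery_start = 9.0 + 24.0                             # 9 am on the following day
earliest_sort_start = delivery_start - total            # 13.68 h, i.e. about 1:41 pm

# to-be scenario: merging becomes a short pick-up and transport is faster (averages)
saving = (merge - 0.19) + (transport - 0.48)            # 1.82 h
refined_start = earliest_sort_start + saving            # 15.50 h, i.e. about 3:30 pm
print(round(earliest_sort_start, 2), round(refined_start, 2))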
Further, the merging process is not necessary and is substituted with a pickup process. The delivery staff
can do this in a far shorter time. This time is normally distributed with a mean of 1.25 min and a standard
deviation of 0.626 min for 100 parcels per person.
Data Availability Statement
The simulation models created for this paper are available online in a Github repository: https://github.com/
Wolfgang-Garn/pass-the-parcel [53].
References
[1] H. Leite, N. Bateman and Z. Radnor, Beyond the ostensible: an exploration of barriers to lean implementation and
sustainability in healthcare. Prod. Plan. Control 31 (2020) 1–18.
[2] C.F. Lindsay and J. Aitken, Using Programme Theory to evaluate Lean interventions in healthcare. Prod. Plan.
Control 35 (2024) 824–841.
[3] W.K. Balzer and T. Sinha, Hoshin Kanri in Higher Education: A Guide to Strategy Development, Deployment, and
Management. Taylor & Francis, Routledge & CRC Press, Boca Raton (2023).
[4] R.W. Schmenner, The pursuit of productivity. Prod. Oper. Manag. 24 (2015) 341–350.
[5] R.W. Schmenner, Getting and Staying Productive: Applying Swift Even Flow to Practice. Cambridge University
Press (2012).
[6] G. Onofrei, B. Fynes, H. Nguyen and A.H. Azadnia, Quality and lean practices synergies: a swift even flow perspec-
tive. Int. J. Quality Reliab. Manag. 38 (2021) 98–115.
[7] M. Vrgoč and V. Čerić, Investigation and design of parcel sorting systems in postal centres by simulation. Comput.
Ind. 10 (1988) 137–145.
[8] P. Klomjit, C. Anurattananon, A. Chatmuangpak and A. Amaluk, Efficiency improvement by simulation technique
in the parcel service company. Sci. Technol. Asia 25 (2020) 20–29.
[9] L.D. Fredendall, J.B. Craig, P.J. Fowler and U. Damali, Barriers to swift, even flow in the internal supply chain of
perioperative surgical services department: a case study. Decis. Sci. 40 (2009) 327–349.
[10] S. Devaraj, T.T. Ow and R. Kohli, Examining the impact of information technology and patient flow on healthcare
performance: a Theory of Swift and Even Flow (TSEF) perspective. J. Oper. Manag. 31 (2013) 181–192.
[11] S. Thirumalai and S. Devaraj, Mitigating the curse of complexity: the role of focus and the implications for costs of
care. J. Oper. Manag. 70 (2024) 157–179.
[12] R.W. Schmenner, Service businesses and productivity. Decis. Sci. 35 (2004) 333–347.
[13] A. Sartal, N. Ozcelik and M. Rodriguez, Bringing the circular economy closer to small and medium enterprises:
improving water circularity without damaging plant productivity. J. Clean. Prod. 256 (2020) 120363.
[14] T.C. Papadopoulou and M. Özbayrak, Leanness: experiences from the journey to date. J. Manuf. Technol. Manag.
16 (2005) 784–807.
[15] W. Skinner, The productivity paradox. Harv. Bus. Rev. 64 (1986) 55–59.
[16] R.W. Schmenner and M.L. Swink, On theory in operations management. J. Oper. Manag. 17 (1998) 97–113.
[17] D. Bamford, P. Forrester, B. Dehe and R.G. Leese, Partial and iterative lean implementation: two case studies. Int.
J. Oper. Prod. Manag. 35 (2015) 702–727.
[18] E.R.G. Pedersen and M. Huniche, Determinants of lean success and failure in the Danish public sector: a negotiated
order perspective. Int. J. Public Sect. Manag. 24 (2011) 403–420.
[19] Z. Radnor, P. Walley, A. Stephens and G. Bucci, Evaluation of the lean approach to business management and its
use in the public sector. Technical report, Scottish Executive Social Research, Edinburgh, Scotland (2006).
[20] N. Rich and N. Bateman, Companies’ perceptions of inhibitors and enablers for process improvement activities. Int.
J. Oper. Prod. Manag. 23 (2003) 185–199.
[21] M. Scherrer-Rathje, T.A. Boyle and P. Deflorin, Lean, take two! reflections from the second attempt at lean imple-
mentation. Bus. Horizons 52 (2009) 79–88.
[22] M. Ballé, Lean attitude [considering attitude in lean production]. Manuf. Eng. 84 (2005) 14–19.
[23] O. Ganbold, Y. Matsui and K. Rotaru, Effect of information technology-enabled supply chain integration on firm’s
operational performance. J. Enterp. Inf. Manag. 34 (2020) 948–989.
[24] C.J. Turner and W. Garn, Next generation DES simulation: a research agenda for human centric manufacturing
systems. J. Ind. Inf. Integr. 28 (2022) 100354.
[25] L. Kleinrock, Queueing Systems: Theory. John Wiley, New York, NY (1975).
[26] R. Jain, The Art of Computer Systems Performance Analysis: Techniques for Experimental Design, Measurement,
Simulation, and Modeling. Wiley, Hoboken, NJ (1991).
[27] W. Garn, Introduction to Management Science: Modelling, Optimisation and Probability. Smartana Ltd, London,
UK (2018).
[28] M. Jurczyk-Bunkowska, Tactical manufacturing capacity planning based on discrete event simulation and throughput
accounting: a case study of medium sized production enterprise. Adv. Prod. Eng. Manag. 16 (2021) 335–347.
[29] MIT, Transitioning to a Lean Enterprise: A Guide for Leaders. Vols. I, II, III. MIT Libraries, Cambridge, MA
(2000).
[30] J. Bicheno and M. Holweg, The Lean Toolbox: The Essential Guide to Lean Transformation. Picsie Books (2008).
[31] P. Checkland, Systems Thinking, Systems Practice. Wiley, New York, NY (1999).
[32] D. Simons and D. Taylor, Lean thinking in the UK red meat industry: a systems and contingency approach. Int. J.
Prod. Econ. 106 (2007) 70–81.
[33] P. Hines, M. Holweg and N. Rich, Learning to evolve: a review of contemporary lean thinking. Int. J. Oper. Prod.
Manag. 24 (2004) 994–1011.
[34] A. Gurumurthy and R. Kodali, Design of lean manufacturing systems using value stream mapping with simulation:
a case study. J. Manuf. Technol. Manag. 22 (2011) 444–473.
[35] J. Schilling and A. Kluge, Barriers to organizational learning: an integration of theory and research. Int. J. Manag.
Rev. 11 (2009) 337–360.
[36] M. Barratt, T.Y. Choi and M. Li, Qualitative case studies in operations management: trends, research outcomes,
and future research implications. J. Oper. Manag. 29 (2011) 329–342.
[37] J. Meredith, Building operations management theory through case and field research. J. Oper. Manag. 16 (1998)
441–454.
[38] C. Voss, N. Tsikriktsis and M. Frohlich, Case research in operations management. Int. J. Oper. Prod. Manag. 22
(2002) 195–219.
[39] A. Bitektine, Prospective case study design: qualitative method for deductive theory testing. Organ. Res. Methods
11 (2008) 160–180.
[40] M. Ketokivi and T. Choi, Renaissance of case research as a scientific method. J. Oper. Manag. 32 (2014) 232–240.
[41] R.K. Yin, Case Study Research: Design and Methods. Vol. 5. SAGE Publications (2009).
[42] B.R. Staats, D.J. Brunner and D.M. Upton, Lean principles, learning, and knowledge work: evidence from a software
services provider. J. Oper. Manag. 29 (2011) 376–390.
[43] R. Narasimhan, Theory development in operations management: extending the frontiers of a mature discipline via
qualitative research. Decis. Sci. 45 (2014) 209–227.
[44] K.M. Eisenhardt, Building theories from case study research. Acad. Manag. Rev. 14 (1989) 532–550.
[45] J. Heizer, B. Render and C. Munson, Operations Management: Sustainability and Supply Chain Management, 14th
edition. Pearson, New York, NY (2023).
[46] J. Hu and C.L. Munson, Improved profit functions for newsvendor models with normally distributed demand. Int.
J. Procure. Manag. 4 (2011) 20–36.
[47] J.J. Bartholdi and S.T. Hackman, Warehouse & Distribution Science: Release 0.96. The Supply Chain and Logistics
Institute, Atlanta, GA (2014).
[48] G.M. Thompson, Improving the utilization of front-line service delivery system personnel. Decis. Sci. 23 (1992)
1072–1098.
[49] H. Akkermans and C. Voss, The service bullwhip effect. Int. J. Oper. Prod. Manag. 33 (2013) 765–788.
[50] W.D. Kelton, R.P. Sadowski and N.B. Zupick, Simulation with Arena. McGraw-Hill, New York, NY (2015).
[51] A. Borshchev and I. Grigoryev, Big Book of Simulation Modeling: Multimethod Modeling with AnyLogic 8. Any-
Logic, North America (2024).
[52] R.W. Schmenner, L. Van Wassenhove, M. Ketokivi, J. Heyl and R.F. Lusch, Too much theory, not enough under-
standing. J. Oper. Manag. 27 (2009) 339–343.
[53] W. Garn, J. Aitken and R.W. Schmenner, Simulation models for “Smoothly pass the parcel: implementing the
theory of swift, even flow”. https://github.com/Wolfgang-Garn/pass-the-parcel (2024).