Analytics For Enabling Strategy in Sport
Alan Freeman
Management Information Systems
UCD School of Business
University College Dublin
Dublin 4
Ireland
Declan Treanor
Management Information Systems
UCD School of Business
University College Dublin
Dublin 4
Ireland
Cathal M. Brugha
Centre for Business Analytics
UCD School of Business
University College Dublin, Dublin 4, Ireland
Telephone: (+353-1)-716-4708
Email: cathal.brugha@ucd.ie
Abstract—The area of team performance analysis in sport is ever growing. Compared to other sports, Rugby Union has so far seen little research in this regard. Currently, methods used to objectively depict team performance rely on expert users' analysis after the fact. We have devised a metric for capturing performance that brings expert users into the process earlier, thus creating a more meaningful performance metric. This scoring process uses Multi-Criteria Decision Making, and is verified by analysing 552 rugby matches over three seasons of the Celtic League and European Rugby Cup. We examine whether the attributes of performance follow an underlying structure. We also ask whether our method provides meaningful insight, and we test whether our model stands up to artificial intelligence when it comes to forecasting match results. We find that this is indeed the case and conclude that our methodology provides a reasonable basis for both comparative performance analysis and strategy formulation.
Keywords—Business Analytics, Sport Performance, Multi-
Criteria Strategy, Practice of OR
I. INTRODUCTION
This article presents a novel application of Multi-Criteria
Decision Making tools to the field of Sports Performance
Analytics. Currently, methods used to objectively depict team
performance involve the collection of individual and team
match metrics and leave it up to the expert user to make sense
of them using technical and qualitative analysis. In our Practice
Based Approach, we introduce the concept of Hot Performance
Indicators as a team performance metric that incorporates
expert users’ knowledge before any ex-post analysis is done.
The factors that contribute to sports performance are shown
to follow an adjusting process in the context of [1] and [2].
By understanding the 8 adjusting activities within the process,
expert users measure the relative importance of possible match
actions that contribute to performance. Based on these, a Hot
Performance Indicator is constructed by refining the perfor-
mance measurements in light of the technical quality of the
action; the positional role of the player involved; the impact
on the opposition; the quality of the opposition; the area of
the pitch on which the action took place; and the location of
the match itself.
We aim to show that the resulting HPI is a performance metric that can be used to analyse comparative team performance by highlighting imbalances within the underlying adjusting structure and, thus, to form a basis for devising performance strategies.
A. Research Questions
This paper looks to answer the following questions:
1) Can the factors that contribute to team performance in Rugby Union be considered to follow an underlying adjusting structure?
With this in mind, we will build on the work of [3] and [4] and derive an initial set of criteria that contribute to team performance in practice. Using these initial criteria, we will work through Brugha's 8-stage adjusting structure [5] and try to evince from expert users a full set of match events to be considered when evaluating performance.
2) Using this underlying structure, can a new team performance metric be created that will relate to match outcomes and lend itself to comparative analysis of teams?
By showing imbalance between the performance factors
contributing to the adjusting process, comparative areas
of strength and weakness of teams are highlighted using
our HPIs.
Our review of existing literature in section II looks at some of the standard metrics used in the analysis of sports performance. No method currently exists for calculating team performance metrics that integrates expert users' knowledge. At the moment, expert users such as coaches, bookmakers and analysts depend on statistics, and other such quantitative analysis, in order to gain extra insight. We consider this to be a missed opportunity; by bringing the expert user into the process early, better information can be derived.
It became apparent that sport was objectively about win-
ning and, as such, that there could be a link between sport and
the objective model outlined by [1] , [2] and later by [6].
Brugha [6] has shown that many business systems and methodologies have what he calls an objective requirement to adjust so as to keep several dichotomies in balance. The first of these is about what to do: should it be more planning, or putting plans into action? The second is about where to focus: should it be more on the people involved, or on the place where it happens, which might be a management system or structure? And the third is about which way to proceed: should it be more about personal engagement, or more about the decision-makers using their position of influence or control?
We wanted to explore if the contributing factors to a
winning rugby union team performance followed a similar
adjusting process, using the data available and by employing
the MCDM process given by [5]. Such a link, as far as we
could see, had not been made before.
II. LITERATURE REVIEW
[7] pointed to the idea of notational analysis as “an
objective way of recording performance so that key elements
of that performance can be quantified in a valid and consistent
manner”.
Examples of applications of notational analysis to tactical evaluation of performance appear in papers by [8], [9] and later by [10], all of which look at the game-related performance statistics that discriminate between successful and unsuccessful teams. For example, [10] found that the variables
that discriminate between winning, drawing and losing teams
were the total shots, shots on goal, crosses, crosses against, ball
possession and venue. [7] refer to similar articles, such as [11],
[12], as publications that are good examples of how analysts
use performance metrics to inform the coaching process of
tactical options.
[3] provided an overview of how performance indicators are used in performance analysis. They define a performance
indicator as a selection, or combination, of action variables
that aim to define some or all aspects of a performance. The
authors considered the different variables that contribute to an
improved performance.
[13] established key positional performance indicators that were defined and coded in a valid and reliable manner. Furthermore, an explicit process for identifying key performance behaviours was presented and verified by individuals with considerable coaching and playing experience in the sport.
With this in mind, [14] and then later [4] examined method-
ologies in objectively depicting team performance indicators.
The former considered the winning and losing performances of
a single team and found significant differences. For example,
“lineout success on the opposition throw” differed significantly
between winning and losing performances.
While winning and losing can often indicate a level of performance, [4] argue that it may be more practical for coaches to adopt a team performance measure that is independent of match outcome. [4] highlighted that, up to this point, no
study had assessed team performance via the evaluation of
team performance indicators.
One factor which has been shown to contribute to how
well teams perform relates to home advantage. [15] aimed to
examine when and why home advantage exists. They examined
the 8 major leagues in British football, in one season.
[16] looked at the effects of different situation vari-
ables, on specific technical aspects of individual and team
performance. As with [15], location was examined, and was
once again shown to be important. In addition, [16] examined the quality of opposition and the effect that this has on performance, having noted that this factor is often ignored in
similar studies. Unsurprisingly, they found that the quality of
opposition had a significant influence on the odds of success.
III. METHODOLOGY
The basis for our methodology is the Structured Multi-Criteria Decision Making approach described by [5]. Here, Decision Makers treat different teams as alternatives, and the match actions contributing to each team's performance serve as the scorable criteria.
In the aforementioned paper, [5] describes Multi Criteria
Decision Making, an 8 stage process for extracting information
from Decision Makers (DMs) pertaining to their criteria, in
relation to a specific multi-criteria decision.
The aim of the process is a detailed analysis of the technical, contextual and situational aspects of performance, the refinement of these, and the evincing of new requirements by the
Decision Adviser (DA), with a view to efficiently helping and
advising the Decision Maker.
MCDM is an adjusting process in which the Decision Advisers must find a balance in the Brugha Meta Model, in the context of the three dichotomies described by [2], namely: Planning a solution vs Putting a plan into effect; concerns of People (systems) vs concerns of Place (structures); and using Personal interaction vs using one's Position.
A. Initial Criteria
Three “Expert Users” (Decision Makers) participated in
discussions with the authors (Decision Advisers) with the aim
of forming criteria trees that would illuminate all aspects of
what contributed to good performance.
To begin with, the DMs were asked to think about the fac-
tors that contribute to good (individual and team) performance
in a Rugby Union game.
What was derived as an initial set of criteria is shown in
figure III.1. As with Soccer, factors such as passing and set-
pieces are important. Comparatively, though, Rugby is a vastly
more structured sport than Soccer, where the field position is
contested more vigorously. In fact, the DMs agreed that all
the factors that lead to good performance tend to translate to a
good field position, leading to scores and successful outcomes.
Fig. III.1. Initial Criteria - Factors leading to improved performance in Rugby
Union
In summary, the high-level criteria derived related to (i) ball possession, (ii) set pieces, (iii) ball distribution and (iv) execution of technique.
B. Link to Generic Adjusting Structure
In the context of [1] and [2], we can consider games
in professional team sports as an objective process towards
winning. The process of winning is not so much owned by
players, but more so by the coaches and management who
ultimately decide who plays and how they play. From game
to game, strategies are devised; team and player performance
is analysed and then adjusted, by selecting different players,
and/or different strategies to improve the chances of winning
in the next game.
As with any Multi Criteria decision, DMs frequently need
to be convinced and will look to the DAs to bring them through
the convincing levels of decision, by looking at the technical,
contextual and situational aspects of the multiple criteria [5],
[17].
Our problem is no different. Our DMs need to be convinced that a team is performing well. If they become convinced that some aspect of the team's play could be improved, they then make a corresponding adjustment. With this in mind, our
approach is to modify base scores to take into account the
technical, contextual and situational aspects of performance.
The 8-Stage Adjusting Model given by [2] (figure III.2) allows us to examine the activities in each facet of the team performance life cycle, by considering the dichotomous answers to the simple questions: what should be done? where? and which way? [2] contends that a balanced adjusting life-cycle will have a balance within the dichotomies.
What is important is that rugby practice fits the nomological structure. Thus, when the issue is about ball possession and set pieces, there is uncertainty about what to do: will one even be able to do anything with the ball? When the issue is about ball distribution and execution, there is more certainty about what to do; it is a question of whether we can do it. Likewise, when the issue is ball possession OR set pieces, set pieces bring the focus onto the people in the team, in a scrum or a lineout, with the backs all ready to perform. On the other hand, ball possession is more about the structures and systems that the team has in place to protect and build its advantage.
At the next level in a criteria structure, the difference
between the scrum and a lineout is that a scrum is highly
structured with little personal engagement; both sides get
down and push. On the other hand, with a lineout, personal engagement is key: the thrower, the lifters, the catcher, the dummies, the people protecting the catcher, the pass to the scrum half and onwards, all personally challenging tasks.
The factors that lead to performance are shown to follow an adjusting process as shown in figure III.2. The eight activities put forward by [2] can be described here as (i) ball carries, (ii) ball collections, (iii) scrums, (iv) lineouts, (v) hand passes, (vi) kicks out of hand, (vii) tackles and (viii) carry executions.
C. Application of MCDM Methodology
The derived high-level activities or criteria were considered
individually. Based on the convincing levels described by [1]
and [18], technical, contextual and situational match event
outcomes, relating to these criteria, were formulated.
Fig. III.2. HPI Adjusting Model
It made sense that the activities and corresponding match event outcomes should be grouped in this way. Decision Makers need to be convinced that performance was good. For example, they may need to be convinced that an action was technically well carried out, that it was contextually appropriate in gaining some advantage over the opposition (others), and that it helped situationally, where it was used in the game to achieve an advantage, either on the scoreboard or in terms of field position.
Structured MCDM methodology as presented by [5] fol-
lows an 8 stage process, each with 3 convincing stages: consid-
eration of technical aspects, then relating them to the context
of the problem, and finally taking into account the particular
situation of the decision. Given the underlying structure of the
processes that contribute to performance, it makes sense that
the construction of our HPI metric should follow a similar
format.
Each of the eight stages of the adjusting process outlined by [2] corresponds to one of our 8 contributing factors to performance, namely: (i) ball carries, (ii) ball collections, (iii) scrums, (iv) lineouts, (v) hand passes, (vi) kicks out of hand, (vii) tackles and (viii) carry executions.
We can consider the technical, contextual and situational
stages of convincing, as described by [5] in terms of the
constructs of our Hot Performance Indicators as:
Convincing Stage | Rugby Performance | HPI Construct
Technical | Was it technically well carried out? Did it demonstrate skill or accuracy? | Technique & Skill
Contextual | Did it gain an advantage over the other (opposition) team? | Gain Advantage over Opposition
Situational | Did it improve the team's situation, i.e. gain some advantage in the match in terms of field position or score? | Improve Field Position or Score
Given these derived constructs, the DMs were then asked to score, out of 10, the corresponding match event outcomes relating to each criterion. Where match outcomes described negative or poor performance, negative scores were given.
D. Additional Convincing Level Modifiers
To further enhance the convincing nature of our approach,
the match event outcomes or criteria were further modified to
take into account additional technical, contextual and situa-
tional aspects of performance.
Technical - Positional Clusters: Positional Clusters (as
described by [13]) were devised so that match event outcomes
from players from different positions could be modified. For
example, a prop scoring a try should be measured higher than
a wing scoring a try.
Context - Quality of Opposition Team: Quality of opposition has been found to be a contributing factor to a team's performance [16]. In order to reflect the fact that lower-quality teams need to perform better to reach par with, or out-perform, their superior opposition, the base scores of lower-ranked teams were reduced.
Situation - Location on Pitch & Match Location: To consider further situational aspects of performance, modifiers were derived to take into account the location on the pitch where the match event outcome took place, and whether or not the team being analysed was playing away from home.
The DMs felt that where a match event happens closer to a team's own goal or the opposition's goal, it should be scored higher. For example, a missed tackle close to the team's goal could have more serious consequences than if it took place in the middle of the pitch.
Playing at home gives an advantage in any team sport [15]. Therefore, using the same rationale as for the quality-of-opposition modifier above, away teams should be penalised to reflect the fact that they need to perform better to reach par with the performance of the home team.
E. Scoring Methodology
The Hot Performance Indicator η_z(t), for team z in respect of fixture t, is calculated as follows:

    η_z(t) = (1 + c_z(t)) (1 + σ_z(t)) Σ_{i ∈ ε_z(t)} β_i (1 + τ_i) (1 + s_i)    (III.1)

where:
    η_z(t) is the hot performance indicator for team z for fixture t, t ≥ 1;
    c_z(t) is the contextual modifier applied to team z for fixture t for away disadvantage;
    σ_z(t) is the situational modifier applied to team z for fixture t for quality of opposition;
    ε_z(t) is the set of match event outcomes for team z in fixture t;
    β_i is the base score used for the outcome of match event i;
    τ_i and s_i are the technical (positional clusters) and situational (pitch location) modifiers respectively, applied in respect of the outcome of match event i.
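As an illustration, equation III.1 can be computed directly from a list of scored match event outcomes. The sketch below is a minimal Python version; the event data, base scores and modifier values are hypothetical, not the study's elicited figures.

```python
# Minimal sketch of the HPI in equation III.1. All base scores and modifier
# values below are hypothetical illustrations, not the study's actual figures.

def hpi(events, away_modifier=0.0, opposition_modifier=0.0):
    """eta_z(t) = (1 + c_z(t)) (1 + sigma_z(t)) * sum_i beta_i (1 + tau_i) (1 + s_i).

    events: iterable of (beta_i, tau_i, s_i) tuples, i.e. the DM-elicited base
    score plus the positional-cluster and pitch-location modifiers per event.
    """
    total = sum(beta * (1 + tau) * (1 + s) for beta, tau, s in events)
    return (1 + away_modifier) * (1 + opposition_modifier) * total

# Three hypothetical event outcomes for an away team (5% away penalty):
events = [(8, 0.1, 0.0),   # line break carry by a forward
          (-5, 0.0, 0.2),  # missed tackle near own goal line
          (6, 0.0, 0.0)]   # clean lineout take
print(round(hpi(events, away_modifier=-0.05), 2))  # 8.36
```

Note how a negative base score (the missed tackle) is amplified by the pitch-location modifier, matching the DMs' view that errors near the goal line matter more.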
IV. RESULTS & ANALYSIS
In this section we present our results and analysis of the
constructed HPIs in the context of the following questions:
1) Do higher Hot Performance Indicators translate to win-
ning matches?
Derived HPI scores for competing teams are compared
against the actual match outcomes in section IV-B.
2) Can Hot Performance Indicators be used to detect imbal-
ance or deficiencies in a team’s comparative performance?
A comparison is made, in section IV-C, between teams that finished at the top and bottom of the Celtic League table after the 2011 regular season.
A. Analysis Methodology
1) HPI vs Match Outcomes: To determine whether or not higher HPIs translate to actual match outcomes, we define a correct outcome as a fixture where the team that had the highest calculated HPI also wins the game. Correct outcomes are shown as a percentage of all fixtures.
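The correct-outcome rate can be sketched as below; the fixture data is illustrative, not taken from the study's 552 matches.

```python
# Sketch of the "correct outcome" rate: the share of fixtures in which the team
# with the higher calculated HPI also won the game. Fixture data is illustrative.

def correct_outcome_rate(fixtures):
    """fixtures: list of (home_hpi, away_hpi, home_score, away_score) tuples."""
    correct = sum(
        1
        for h_hpi, a_hpi, h_sc, a_sc in fixtures
        if (h_hpi > a_hpi and h_sc > a_sc) or (a_hpi > h_hpi and a_sc > h_sc)
    )
    return 100.0 * correct / len(fixtures)

fixtures = [(20.0, 15.0, 24, 10),  # higher HPI won  -> correct
            (12.0, 18.0, 30, 9),   # lower HPI won   -> incorrect
            (16.0, 14.0, 19, 13)]  # higher HPI won  -> correct
print(round(correct_outcome_rate(fixtures), 1))  # 66.7
```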
To investigate further the value of our derived HPIs in
this context, we calculate the correlation coefficient in respect
of the difference in match score between the two competing
teams, and the difference in their calculated HPI value for each
fixture.
The Pearson correlation coefficient [19] for a sample is used here; the formula is given in equation IV.1:

    r_XY = Σ(x_i − x̄)(y_i − ȳ) / √( Σ(x_i − x̄)² · Σ(y_i − ȳ)² )    (IV.1)

Here the summations are taken over the number of fixtures considered; x and y refer to the match score difference and HPI difference between the competing teams respectively.
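Equation IV.1 can be applied per fixture as sketched below; the score and HPI differences shown are illustrative, not the study's data.

```python
from math import sqrt

# Sample Pearson correlation (equation IV.1) between per-fixture score
# differences and HPI differences; the small data set below is illustrative.

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sqrt(sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys))
    return num / den

score_diffs = [14, -21, 3, 7, -6]        # home score minus away score
hpi_diffs = [5.0, -8.0, 1.0, 2.5, -2.0]  # home HPI minus away HPI
r = pearson(score_diffs, hpi_diffs)      # close to +1: HPI tracks the scoreline
```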
2) Comparison of Team Performance for Successful and Unsuccessful Teams: To detect imbalance or deficiencies in a team's comparative performances, the calculated HPIs, over a season, are broken down into the constituent scores of the 8 derived criteria that contribute to a team's performance, as described in section III-B. The criteria that give constituent scores are (i) ball carries, (ii) ball collections, (iii) scrums, (iv) lineouts, (v) hand passes, (vi) kicks out of hand, (vii) tackles and (viii) carry executions.
When looking at the constituent HPI scores, we did not apply the modifiers for away disadvantage and opposition quality (i.e. c_z(t) and σ_z(t) in equation III.1), as we felt that these would not really impact the overall balance between the teams' contributing scores.
Constituent HPI scores for the teams under comparison were standardised using min-max normalisation, as given by [20]. Min-max normalisation maps a value v of a data series A to v* in the range [newMin_A, newMax_A] as follows:

    v* = ((v − min_A) / (max_A − min_A)) · (newMax_A − newMin_A) + newMin_A

Here, newMin_A and newMax_A were chosen as 0 and 100 respectively. Min-max normalisation was used as it preserves the relationships among the original data values [20].
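The normalisation step can be sketched directly from the formula above; the constituent scores used here are illustrative.

```python
# Min-max normalisation as described above: map each constituent HPI score
# into [0, 100] while preserving the relationships among the original values.

def min_max(values, new_min=0.0, new_max=100.0):
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) * (new_max - new_min) + new_min for v in values]

scores = [12.0, 30.0, 21.0]  # illustrative constituent HPI scores
print(min_max(scores))  # [0.0, 100.0, 50.0]
```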
The normalised scores are then presented on a radar chart
to allow a visual comparison of the balance between the
constituent HPI scores for each team.
The following pages give an overview of the results found.
B. HPIs vs Match Outcomes
Overall, out of 552 matches, our calculated HPIs accurately reflected the match outcome 74.5% of the time. The correlation coefficient between the match score and HPI differences (per fixture) was calculated to be 72.3% overall, which was pleasing.
In terms of the first question posed by our success criteria, these results seem to indicate that a higher HPI does indeed translate to winning matches. This is consistent with [21], who found that there is no distinguishable difference between winning and losing teams when the match score is closer than 15 points (in Super Rugby).
The HPI difference given in the following tables is calculated as the absolute percentage difference between the competing teams' HPIs.
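The bucketing of fixtures by HPI difference can be sketched as below. The text does not state which denominator is used for the percentage difference; dividing by the smaller of the two HPIs is an assumption made here purely for illustration.

```python
# Sketch of bucketing fixtures by absolute percentage HPI difference, as in
# Table IV.1. The denominator is not stated in the text; using the smaller of
# the two HPIs is an assumption made for this illustration only.

def hpi_diff_pct(hpi_a, hpi_b):
    return abs(hpi_a - hpi_b) / min(hpi_a, hpi_b) * 100.0

def bucket(diff_pct):
    if diff_pct < 10.0:
        return "[0%,10%)"
    if diff_pct < 25.0:
        return "[10%,25%)"
    if diff_pct < 50.0:
        return "[25%,50%)"
    return "[50%,inf)"

print(bucket(hpi_diff_pct(22.0, 20.0)))  # a 10% gap falls in "[10%,25%)"
```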
Table IV.1 shows that our correct percentage is relatively
low when the difference in HPI between teams is small.
However, when there is a bigger difference, our HPI metric
reflects the actual match outcome much better.
HPI difference | % of total | % correct
[0%, 10%)      | 26.8%      | 58.1%
[10%, 25%)     | 34.8%      | 70.8%
[25%, 50%)     | 24.5%      | 84.4%
[50%, ∞)       | 13.9%      | 97.4%
Total          | 100.0%     | 74.5%
TABLE IV.1. Actual HPIs vs match outcomes per percentage differential in observed HPI between participating teams.
This leads us to believe that refinement of the HPI scoring methodology could lead to better results. The refinement of scores by Decision Makers is a stage suggested by the MCDM methodology given by [5] and used in particular by [22].
C. Comparison of Team Performance for Successful and Un-
successful Teams
Here we investigate the usefulness of Hot Performance Indicators for detecting imbalance or deficiencies when comparing performance attributes between teams.
The constituent HPI scores were calculated for the best and worst performing teams over the 2010-2011 Celtic League regular season. At the end of the regular season, Munster had finished top, Leinster finished second, while Aironi finished bottom. Figure IV.1 shows the teams' constituent HPI scores (the calculation of which is described in section IV-A2).
Figure IV.1 shows the normalised scores. It is clear from visual inspection that the better performing teams have higher scores than the worst performing team. The inherent balance (and imbalance) in teams' performance in respect of the top team can be clearly seen if we adjust the scores of the other teams proportionally, as shown on the right of figure IV.1.

Fig. IV.1. Comparison of normalised constituent HPI scores for Aironi, Leinster and Munster for the 2010-2011 Celtic League regular season.
From the 2011 Celtic Season at least, better performing
teams have better balance across the 8 activities pertaining to
performance. HPI metrics lend themselves well to comparative
analysis between teams.
Fig. IV.2. Comparison of HPI scores for Aironi, Leinster and Munster
for the 2010 - 2011 Celtic League regular season
Figure IV.2 underlines how form can be compared at a high level over a set of fixtures. The graphic uses smoothed HPI scores, η*_z(t), to allow easier comparison. Smoothing is done here using:

    η*_z(t) = α η_z(t) + (1 − α) η*_z(t−1)    (IV.2)

with α = 0.5 and η_z(t) given by equation III.1.
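Equation IV.2 is a standard exponential smoothing recurrence and can be sketched as follows; the HPI series is illustrative, and seeding the recurrence with the first raw HPI is an assumption, as the text does not state how the series is initialised.

```python
# Exponential smoothing of a team's fixture-by-fixture HPI per equation IV.2,
# with alpha = 0.5. Seeding with the first raw HPI is an assumption; the text
# does not state how the smoothed series is initialised.

def smooth(hpis, alpha=0.5):
    out = []
    prev = hpis[0]  # assumed seed value for eta*_z before the first fixture
    for h in hpis:
        prev = alpha * h + (1 - alpha) * prev
        out.append(prev)
    return out

print(smooth([10.0, 20.0, 12.0]))  # [10.0, 15.0, 13.5]
```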
V. CONCLUSIONS AND FUTURE RESEARCH
In this section we will look back at our original goals
to evaluate how successful we have been, in terms of our
academic and business contribution. We will also evaluate
what, if anything, we have added to the fields of sports
analytics, and, in particular, Rugby Union.
Finally, we will critically examine our methods, offering
suggestions for refinements and further research.
A. Academic and Practical Contribution
Our first claim to academic contribution is that we are able
to employ expert users’ knowledge in Rugby Union analytics
using Business Analytics tools and that, in so doing, we are
offering a new dimension in this field. Performance in Rugby
Union is linked to an underlying generic adjusting structure.
By following the MCDM process for evincing scoring
mechanisms for each action in a rugby match, we ended up
with a useful HPI score. This score, when high, seemed to
correspond well with actual match outcomes.
We also wanted to investigate a possible link to Brugha's 8-Stage Adjusting Process. We found a strong link in this regard in section III-B. We were able to map the scorable actions (in the HPI context) to each of the 8 phases of the Brugha Adjusting Process, and found, generally at least, that the more balanced teams were more successful.
The Structured MCDM methodology uses 8 adjusting stages, each with convincing levels (i.e. looking at the technical, contextual and situational aspects of a decision). In our methodology, we similarly consider 8 stages, but here we apply an additional level of convincing stages.
Level   | Technical                 | Contextual            | Situational
Level 1 | Technique                 | Gain advantage        | Improve field position or score
Level 2 | Player positional cluster | Quality of opposition | Area on playing pitch and match location
In keeping with the notion that balance should exist across the adjusting structure [2], we found in section IV-C that successful teams indeed appeared more balanced than unsuccessful teams.
A more general question is why there is a match between the structures of decision-making in rugby and the generic structures, and indeed why the names of activities evolve to fit these structures. This is a philosophical question. The answer is the same as for why many management systems have the same structure [6]. It relates to the way that people structure their decisions, and therefore to how they shape the game: how it is played, organised, and its rules. The playing, training, rules and refereeing have all evolved to make a good game, and have been adjusted over the years to form a natural, coherent structure. This was done intuitively, without any awareness of the underlying generic structures. Rugby may have learned from other sports, including soccer, from which it derived in the first place. And it would have picked up the best bits, the most interesting, the ones that contributed most to the game.
In terms of practical contribution, the results given in section IV show that our methodology deserves further study.
A method for comparative analysis of teams was successfully developed and verified. We feel that, given the scale of this research project as a proof of concept, we have shown that this research is relevant wherever there is business advantage in having insight into sport. Certainly, one can say, based on our results, that this methodology could enhance more quantitative analysis.
Finally, this framework is abstractable, insofar as it was created with all sport in mind, and, as such, remains a fairly robust framework for processing sports data.
REFERENCES
[1] C. M. Brugha, “The structure of qualitative decision making,” European Journal of Operational Research, vol. 104, no. 1, pp. 46–62, 1998a.
[2] ——, “The structure of adjustment decision making,” European Journal of Operational Research, vol. 104, no. 1, pp. 63–76, 1998b.
[3] M. D. Hughes and R. M. Bartlett, “The use of performance indicators
in performance analysis,” Journal of Sports Sciences, vol. 20, no. 10,
pp. 739–754, 2002.
[4] N. M. Jones, N. James, and S. D. Mellalieu, “An objective method for
depicting team performance in elite professional rugby union,” Journal
of Sports Sciences, vol. 26, no. 7, pp. 691–700, 2008.
[5] C. M. Brugha, “Structure of multi-criteria decision-making,” Journal of
the Operational Research Society, vol. 55, no. 1, pp. 1156–1168, 2004.
[6] ——, “Introduction to nomology,” European Journal of Operational Research, 2012, in review.
[7] A. Nevill, G. Atkinson, and M. Hughes, “Twenty-five years of sport
performance research in the journal of sports sciences,” Journal of
Sports Sciences, vol. 26, no. 4, pp. 413–426, 2008.
[8] M. Lewis and M. D. Hughes, “Attacking play in the 1986 world cup of
association football,” Journal of Sports Science, vol. 6, p. 169, 1998.
[9] M. Hughes and S. Churchill, “Attacking profiles of successful and
unsuccessful teams in copa america 2001,” in Science and Football 5:
The Proceedings of the Fifth World Conference on Science and Football,
2005, pp. 288–293.
[10] C. Lago-Penas, J. Lago-Ballesteros, A. Dellal, and M. Gomez, “Game-
related statistics that discriminated winning, drawing and losing teams
from the spanish soccer league,” Journal of Sports Science and
Medicine, vol. 9, no. 4, pp. 288–293, 2010.
[11] C. Palmer, M. Hughes, and A. Borrie, “A comparative study of centre
pass patterns of play of successful and non successful international
netball teams,” Journal of Sports Sciences, vol. 12, p. 181, 1994.
[12] P. O’Donoghue and B. Ingram, “A notational analysis of elite tennis
strategy,” Journal of Sports Sciences, vol. 19, no. 2, pp. 107–115, 2001.
[13] N. James, S. Mellalieu, and N. Jones, “The development of position-
specific performance indicators in professional rugby union,” Journal
of Sports Sciences, vol. 23, no. 1, pp. 63–72, 2005.
[14] N. M. Jones, S. D. Mellalieu, and N. James, “Team performance indica-
tors in rugby union as a function of winning and losing,” International
Journal of Performance Analysis in Sport, vol. 4, pp. 61–71, 2004.
[15] A. Nevill, S. M. Newell, and S. Gale, “Factors associated with home advantage in english and scottish soccer matches,” Journal of Sports Sciences, vol. 14, no. 2, pp. 181–186, 2007.
[16] J. B. Taylor, S. D. Mellalieu, N. James, and D. A. Shearer, “The
influence of match location, quality of opposition and match status on
technical performance in professional association football,” Journal of
Sports Sciences, vol. 26, no. 9, pp. 885–895, 2008.
[17] C. M. Brugha, “Phased multicriteria preference finding,” European
Journal of Operational Research, vol. 158, no. 2, pp. 308–316, 2004.
[18] ——, “The structure of development decision making,” European Journal of Operational Research, vol. 104, no. 1, pp. 77–92, 1998c.
[19] R. Lomax, An Introduction to Statistical Concepts, Second Edition.
Taylor & Francis, 2007.
[20] J. Han and M. Kamber, Data Mining: Concepts and Techniques, ser.
The Morgan Kaufmann Series in Data Management Systems. Elsevier,
2006.
[21] L. Vaz, M. V. Rooyen, and J. Sampaio, “Rugby game-related statistics
that discriminate between winning and losing teams in irb and super
twelve close games,” Journal of Sports Science and Medicine, vol. 9,
pp. 51–55, 2010.
[22] B. O’Brien and C. M. Brugha, “Adapting and refining in multi-criteria decision-making,” Journal of the Operational Research Society, vol. 61, no. 1, pp. 756–767, 2010.