Note. This article will be published in a forthcoming issue of the
International Journal of Sports Physiology and Performance. The
article appears here in its accepted, peer-reviewed form, as it was
provided by the submitting author. It has not been copyedited,
proofread, or formatted by the publisher.
Section: Invited Commentary
Article Title: Putting the 'i' Back in Team
Authors: Patrick Ward1, Aaron J. Coutts2, Ricard Pruna3, and Alan McCall4,5
Affiliations: 1Seattle Seahawks, Seattle, WA, USA. 2Faculty of Health, University of
Technology Sydney (UTS), Australia. 3FC Barcelona, Barcelona, Spain. 4Arsenal Football
Club, London, UK. 5Edinburgh Napier University, Edinburgh, UK.
Journal: International Journal of Sports Physiology and Performance
Acceptance Date: June 12, 2018
©2018 Human Kinetics, Inc.
DOI: https://doi.org/10.1123/ijspp.2018-0154
Title: Putting the ‘i’ back in team
Authors: Patrick Ward1, Aaron J Coutts2, Ricard Pruna3, Alan McCall4,5
Institution:
1 Seattle Seahawks, Seattle, USA
2 University of Technology Sydney (UTS), Faculty of Health, Australia
3 FC Barcelona, Barcelona, Spain
4 Arsenal Football Club, London, UK
5 Edinburgh Napier University, Edinburgh, UK
Corresponding author:
Alan McCall
Arsenal Football Club
Bell Ln, London Colney
Hertfordshire, AL2 1DR
Tel: +33 651748266 - Fax: +33 320887363
Email: amccall@arsenal.co.uk
Submission type: Invited commentary
Running title: There is an 'i' in team
Abstract word count: 118
Manuscript word count: 1914
Number of figures: 3
Number of tables: 0
Abstract
There is a common expression in sports, that there is no 'i' in team. However, there is also a very important 'i' in sports teams - the individual athlete/player. Each player has his/her own unique characteristics including physical, physiological and psychological traits. Due to these unique characteristics, each player requires individual provision - whether it be an injury risk profile and targeted prevention strategy or treatment/rehabilitation for injury, dietary regimen, recovery or psychological intervention. The aim of this commentary is to highlight how four high-performance teams from various professional football codes are analysing individual player data.
Key words: data analysis, individual, team-sports, athletes, monitoring
Introduction
Within high-performance sports organisations, a critical question for science/medicine practitioners is how best to analyse, interpret and report data from monitoring and testing protocols in order to collaborate with the coaching staff on the design and implementation of the training program for both individual players and the team as a collective. The gold standard is likely to follow an evidence-led approach1,2 "using the integration of coaching expertise, athlete values, and the best relevant research evidence into the decision-making process for the day-to-day service delivery to players"2. The aim of this commentary is to highlight how an evidence-led approach, targeted at the individual player, is currently being integrated by four high-performance teams from various football codes: Association Football, American Football and Australian Rules Football.
There is a well-known saying in sport that there is no 'i' in team; however, the current authors argue that, collectively, there is a very important 'i' in team – the 'individual'. In the current authors' affiliated teams there are collectively >150 players from >20 countries spanning five continents, with a variety of ethnicities and each player possessing unique physiological and psychological characteristics. Each of these players requires individual provision - whether an injury risk profile, prevention strategy, treatment/rehabilitation, dietary regimen, recovery or psychological intervention. It is hoped that this commentary will provide some insights into innovative practices and begin the discussion and sharing of ideas on how other teams are integrating an evidence-led approach to optimise the servicing of the individual. The methods of analysis discussed can supplement the daily interaction with players, allowing recommendations for modifications to training to be discussed with the coaching staff.
Analysing individual player data
In the sport science/medicine practitioner's daily practice, many decisions are made 'on the fly' (i.e. within a short timeframe or even immediately). For these practitioners, it is important that they have confidence in their decision-making processes before they provide advice to coaches and/or players. As practitioners, we typically collect a variety of measurements around a player's physical capacity, wellness, injury risk, rehabilitation progress and training/match performance, with the intent to identify and monitor individual responses and to guide specific treatments and/or interventions. A challenge for practitioners is
identifying important trends in players' data and/or deviations from normal patterns. Whilst monitoring strategies were first reported for individual-sport athletes (e.g., endurance sports) as a way to identify risks of overtraining3-6, this approach has since been adopted by team sports to guide the training process2,7. The information obtained from these measures is commonly used to identify players who are not responding well to team training, to individualise rehabilitation and/or to identify reduced readiness or injury risk7. In team sports, individualised monitoring can be particularly challenging given the number of players in a team and the wide variety of training interventions prescribed each day (i.e. many players complete full training, whilst some are prescribed modified programs within the team session and others undertake bespoke rehabilitation away from other team members). In these sports the coaching staff usually develop a training plan based on the team's technical/tactical strategies required for the upcoming match. In this way, the risk is that players on the team can be administered similar field-based training programs, often without sufficient consideration of how each individual may be tolerating the prescribed training. Even for teams where players are carefully monitored at the individual level, it can still be difficult to individualise the training program, as available approaches for making inferences at the individual level are not well described in the literature.
A number of statistical approaches have been reported to identify individual differences across
periods of time in physical therapy, exercise, and elite sport8-11. The aim of implementing these analyses is to
enhance the decision-making around issues relating to individual athletes. Analytic approaches such as those
drawing from single-subject research design and time series analysis12, or those using a magnitude-based
inference (MBI) approach to analysing the individual9,13 may provide fast-thinking sports science and medicine practitioners with simple methods of analysing individual player data.
Magnitude-Based Inference for Assessing Individuals
Magnitude-based inference is a statistical approach that allows the practitioner to interpret the
magnitude of the observed effect, relative to some standardized threshold, as being either substantial or
trivial14. While initially proposed for making group comparisons, this method has recently been extended to
assessing individuals9,13,15. Through this approach, practitioners can statistically evaluate the direction of a
player’s trend and the magnitude of that trend over time9, allowing for more meaningful and evidence-led
conversations with coaches regarding the health, well-being and performance capacity of players.
Through the MBI approach, the practitioner can qualitatively assess the observed effect relative to a smallest worthwhile change (SWC)14. Several methods exist for determining the SWC, such as drawing on practical experience in the sport, using recent publications that identify typical variation in the variable of interest for a similar level of player, or, more commonly, multiplying the between-subject (player) standard deviation (SD) by
0.216. While this approach may be useful when analysing performance testing data for groups of athletes (e.g., vertical jump for athletes in each position group of an American Football team), the magnitude of the between-subject SD is inflated by the heterogeneity of the group15, limiting its utility for detecting changes when evaluating serial measures within the individual (e.g., Daily Wellness Questionnaire data). This has led some authors to
determine the SWC from the within-individual coefficient-of-variation (CV), as this approach is not only
individualized to that person’s typical variation but also takes into account the repeated measures structure of
the data15. For example, Plews and colleagues17 took an individualized MBI approach to evaluate the heart
rate variability (HRV) of elite triathletes during a specific training block. The authors established a SWC in
Ln rMSSD (the natural logarithm of the root mean square of successive differences between adjacent normal R-R intervals) as 0.5 × the individual athlete's CV and found that a large linear decrease in the Ln rMSSD CV of the athlete's 7-day rolling average revealed a trend towards non-functional over-reaching17.
This approach is similar to examining individualized z-scores, a commonly used method in sports science18.
However, it may be easier for practitioners to explain data to coaches or players/athletes in reference to
percentage changes (e.g. she ran 30% more than usual) rather than explaining data on a standardized scale as
is the case with z-scores. For team sport athletes, using an MBI approach such as the one outlined here would be
appropriate for analysing meaningful changes in training and monitoring data. For example, the training load
and/or wellness response of players on a team can be analysed using a within-individual MBI approach by
applying similar logic to that of Plews and colleagues17. In doing so, the outcomes can be visualized to show
when meaningful changes in a player's response to the training plan occur. MBI can also be applied to various other team
monitoring aspects. Figure 1 follows the trend of an individual player's perceived soreness during 20 weeks
of preseason training. This type of visualization has been suggested as a positive feature of taking an MBI
approach to data analysis13.
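To make the within-individual MBI logic concrete, the sketch below shows one way such a check could be scripted for daily monitoring data. It is a minimal illustration only: the table layout, the column names ('player', 'date', 'soreness') and the use of 0.5 × the within-player CV as the smallest worthwhile change are assumptions that mirror the logic described above, not any of the affiliated teams' actual implementations.

```python
"""Minimal sketch of a within-individual, MBI-style check on monitoring data.

Assumes a long-format table with hypothetical columns 'player', 'date' and
'soreness'; the SWC is taken as 0.5 x the within-player CV, mirroring the
logic described in the text rather than a prescribed method.
"""
import pandas as pd


def flag_meaningful_changes(df: pd.DataFrame, value_col: str = "soreness",
                            window: int = 7, cv_multiplier: float = 0.5) -> pd.DataFrame:
    flagged = []
    for player, grp in df.sort_values("date").groupby("player"):
        vals = grp[value_col].astype(float)
        baseline_mean = vals.mean()                        # in practice, a prior baseline block
        cv_pct = 100 * vals.std(ddof=1) / baseline_mean    # within-player CV (%)
        swc_pct = cv_multiplier * cv_pct                   # smallest worthwhile change (%)
        rolling = vals.rolling(window, min_periods=window).mean()
        pct_change = 100 * (rolling - baseline_mean) / baseline_mean
        flagged.append(grp.assign(rolling_mean=rolling,
                                  pct_change=pct_change,
                                  swc_pct=swc_pct,
                                  meaningful=pct_change.abs() > swc_pct))
    return pd.concat(flagged)
```

In practice, the baseline period, rolling window and CV multiplier would be tuned to the measure and the playing group, and the flagged output would feed a visualization of the kind shown in Figure 1.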
Statistical Process Control
Statistical Process Control (SPC), sometimes referred to as the “two standard deviation band
method”, is a single-subject approach that can be applied by the practitioner to quickly understand a player’s
trend in any type of monitoring data. Taking its roots in industrial quality control, SPC has since been applied
in both social work and physical therapy settings11,19. The SPC analysis utilizes a control chart, which
visualizes the individual player’s time series data with respect to that individual’s control line (average) and
control limits, often representing 2 SD above and below the mean. Observations that lie above the upper control limit (UCL) or below the lower control limit (LCL) are deemed to be "out of statistical control" and would warrant further investigation. To evaluate whether more subtle shifts in the individual's trend are taking place, run rules can be established to identify, for example, periods where several consecutive observations (e.g., 8 or 9) reside above or below the control line, indicating a potential shift in the overall process19.
versatility of SPC provides the practitioner with options of setting their own UCL and LCL. For example,
instead of 2SD control limit, a practitioner may feel it important to be alerted when a value exceeds 1SD8.
The control limits may be initially set with general heuristics in mind (a fast-working sports science approach); however, as more data are collected and more detailed analysis is carried out, these limits should be adjusted to represent the change that is most meaningful for the given population (a slow-working sports science approach). It is common practice for sports practitioners to use the entire historic data of a player/athlete as their "mean" and "SD"; however, this may be limited given the changes in training demands that take place throughout a season, for example between the pre- and in-season periods20,21. Therefore, it might be more useful to evaluate recent data, for example a 28-day mean and SD, or to use some form of rolling average22.
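To illustrate the control-chart logic described above, the sketch below flags a single player's observations against rolling control limits and a simple run rule. The 28-day window, the amber (1 SD) and red (1.5 SD) thresholds and the eight-point run rule are assumptions chosen to mirror the examples in this commentary rather than a prescribed implementation.

```python
"""Sketch of an SPC-style check for one player's monitoring time series.

The window and thresholds are illustrative; they echo the 28-day rolling
statistics and the amber (1 SD) / red (1.5 SD) flags described in the text.
"""
import numpy as np
import pandas as pd


def spc_flags(series: pd.Series, window: int = 28, amber_sd: float = 1.0,
              red_sd: float = 1.5, run_length: int = 8) -> pd.DataFrame:
    # Centre line and variability from recent data only; shift(1) keeps the
    # current observation out of its own control limits.
    centre = series.rolling(window, min_periods=window).mean().shift(1)
    sd = series.rolling(window, min_periods=window).std(ddof=1).shift(1)
    z = (series - centre) / sd
    flag = np.select([z.abs() >= red_sd, z.abs() >= amber_sd],
                     ["red", "amber"],
                     default="in control")  # also covers the start-up period
    # Run rule: `run_length` consecutive points on the same side of the centre line.
    side = np.sign(series - centre)
    run = side.groupby((side != side.shift()).cumsum()).cumcount() + 1
    run_rule = (run >= run_length) & (side != 0)
    return pd.DataFrame({"value": series, "centre": centre, "z": z,
                         "flag": flag, "run_rule": run_rule})
```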
Figure 2 provides an example of an individual player’s 48 h post-match values from a test of isometric
hamstring force. Amber dots represent occasions when the isometric force (in newtons) drops 1 SD below the player's mean values and red dots when the force falls more than 1.5 SD below. These flags (among other subjective,
objective physical, psychological and contextual factors) are used to inform the decision-making process for
that player’s training program, recovery intervention and physiotherapy treatment prior to the next match.
Mixed Models
One potential limitation of SPC analysis is that several measurements on the individual are
required in order to establish the control line, UCL and LCL. This may be problematic for individuals
who enter the team at different stages of the season (e.g., new players via draft or transfer, trade or free
agent signings) or indeed those with missing data due to poor compliance, technological failure or as
commonly seen when international level players transition from club to national team for major
tournaments. To account for these limitations, while still evaluating individual differences in training,
mixed model analyses have been recommended10,23,24. These types of models allow the practitioner to
leverage the “wisdom of the crowd” via fixed effects while also analysing individual differences
through the specification of random intercepts and/or slopes25. In brief, a mixed-model approach
represents a compromise between pooled data (e.g., the average across all observations) and non-pooled
data (e.g., the averages for the individuals themselves, such as SPC)25. This approach is useful in sports science, where players who have a small number of observations may initially be better represented by the fixed effects until sufficient observations can be obtained. Similarly, those with a substantial amount of data will have more individualized slope and intercept estimates around the fixed effects of the model. The random component of these models allows the practitioner to evaluate how much the individual athlete deviates from the group. Therefore, such a model may be useful for exploring inter-individual
responses that athletes may have to the prescribed training dose.
Mixed-model approaches have begun to find their way into the sports science literature20,26,27.
However, it is generally uncommon for sports scientists and medicine professionals to discuss random
effects outputs, as most simply interpret the fixed effects portion of the model. Without a more thorough
discussion of the model's random effects, it is challenging to understand the individual differences displayed by players. One method of utilizing the random effects of the model is to determine the "effect" of interest based on the between-subject SD14. The typical within-individual
variation can be used to represent the confidence we place on a change in a player’s value from one
measurement to the next. This analytic approach can be visualized with a control chart and error bars
around the observed value. Figure 3 shows the difference between observed and predicted session-RPE
(rating of perceived exertion) training loads for a player group, where the prediction came from a mixed
effects model. When the error bars surrounding a value fall outside the grey area, we can be more confident that the change is meaningful and further investigation is warranted (Figure 3).
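As a minimal sketch of the observed-versus-predicted logic behind Figure 3, the code below fits a random-intercept model per player with statsmodels and flags sessions whose residual exceeds one within-player (residual) SD. The column names and the single fixed effect ('duration') are hypothetical placeholders, and the threshold is illustrative; this is not the model actually used to generate Figure 3.

```python
"""Sketch of a mixed-model check on session-RPE training load.

Assumes a long-format table with hypothetical columns 'player', 'srpe_load'
and 'duration'; the specification and threshold are illustrative only.
"""
import pandas as pd
import statsmodels.formula.api as smf


def fit_and_flag(df: pd.DataFrame, threshold_sd: float = 1.0) -> pd.DataFrame:
    # Random intercept per player; 'duration' stands in for whatever fixed
    # effects (planned load, drill type, etc.) a team actually records.
    model = smf.mixedlm("srpe_load ~ duration", data=df, groups=df["player"])
    result = model.fit()
    out = df.copy()
    out["predicted"] = result.fittedvalues       # fixed + random (player) effects
    out["residual"] = out["srpe_load"] - out["predicted"]
    within_sd = result.resid.std(ddof=1)         # typical within-player variation
    out["flag"] = out["residual"].abs() > threshold_sd * within_sd
    return out
```

Plotting the residuals with error bars of one within-player SD against a shaded band scaled to the between-subject SD would reproduce the style of chart described for Figure 3.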
Take home message
The intention of this commentary is to increase the awareness of practitioners who may not be applying statistical approaches to better understand individual athletes within a team sport setting. We hope that, by implementing these approaches, fast-thinking practitioners can make confident decisions at the individual player level. While we concede that such approaches will require detailed planning and new skill sets for many sports practitioners, this is the beginning of the push to better understand individual players and we feel that this is an ideal opportunity to start on the right foot. It is the philosophy of the current authors' affiliated teams to adopt such strategies as and when appropriate to enhance the sports science and medicine service provided to our own individual players. We welcome and encourage other sports practitioners
from teams to share other methods of analysing the individual so that we can learn and improve together.
Acknowledgments
The authors would like to acknowledge the sports science, medicine and performance staff of each of the
affiliated teams. In particular, Colin Lewin, Ben Ashworth and Sarah Rudd from Arsenal Football Club.
References
1. Buchheit M. Houston, we still have a problem. Int J Sports Physiol Perform. 2017;12:1111-1114.
2. Coutts AJ. Challenges in developing evidence-based practice in high-performance sport. Int J Sports Physiol Perform. 2017;12:717-718.
3. Foster C, Snyder A, Welsh R. Monitoring of training, warm up, and performance in athletes. In: Lehmann M, et al., eds. Overload, Performance Incompetence and Regeneration in Sport. New York: Kluwer Academic/Plenum Publishers; 1999:43-52.
4. Foster C, et al. Athletic performance in relation to training load. Wisconsin Medical Journal. 1996;95:370-374.
5. Hooper SL, Mackinnon LT. Monitoring overtraining in athletes: recommendations. Sports Med. 1995;20:321-327.
6. Rushall BS. A tool for measuring stress tolerance in elite athletes. J Appl Sport Psychol. 1990;9:51-66.
7. Coutts AJ, Crowcroft S, Kempton T. Developing athlete monitoring systems: theoretical basis and practical applications. In: Kellmann M, Beckmann B, eds. Sport, Recovery and Performance: Interdisciplinary Insights. Abingdon: Routledge; 2017: in press.
8. Sands WA, Kavanaugh AA, Murray SR, McNeal JR, Jemni M. Modern techniques and technologies applied to training and performance monitoring. Int J Sports Physiol Perform. 2017;12(Suppl 2):S263-S272.
9. Hopkins WG. A spreadsheet for monitoring an individual's changes and trends. Sportscience. 2017;21:5-9.
10. Kwok OM, Underhill AT, Berry JW, Luo W, Elliot TR, Yoon M. Analyzing longitudinal data with multilevel models: an example with individuals living with lower extremity intra-articular fractures. Rehabil Psychol. 2008;53:370-386.
11. Nourbakhsh MR, Ottenbacher KJ. The statistical analysis of single-subject data: a comparative examination. Phys Ther. 1994;74:768-776.
12. Sands WA, McNeal JR, Stone M. Plaudits and pitfalls in studying elite athletes. Percept Mot Skills. 2005;1:22-24.
13. Buchheit M. The numbers will love you back in return - I promise. Int J Sports Physiol Perform. 2016;11:551-554.
14. Hopkins WG, Batterham AM. Error rates, decisive outcomes and publication bias with several inferential methods. Sports Med. 2016;46:1563-1573.
15. Buchheit M. Monitoring training status with HR measures: do all roads lead to Rome? Front Physiol. 2014;5:73. doi:10.3389/fphys.2014.00073
16. van Schaik P, Weston M. Magnitude-based inference and its application in user research. Int J Hum Comput Stud. 2016;88:38-50.
17. Plews DJ, Laursen PB, Kilding AE, Buchheit M. Heart rate variability in elite triathletes, is variation in variability the key to effective training? A case comparison. Eur J Appl Physiol. 2012;112:3729-3741.
18. Robertson S, Bartlett JD, Gastin PB. Red, amber, or green? Athlete monitoring in team sport: the need for decision-support systems. Int J Sports Physiol Perform. 2017;12(Suppl 2):S273-S279.
19. Orme JG, Cox ME. Analyzing single-subject design data using statistical process control charts. Social Work Research. 2001;25:115-127.
20. Ritchie D, Hopkins WG, Buchheit M, Courdy J, Bartlett JD. Quantification of training and competition load across a season in an elite Australian football club. Int J Sports Physiol Perform. 2016;11(4):474-479.
21. Moreira A, Bilsborough JC, Sullivan CJ, Cianciosi M, Coutts AJ. Training periodization of professional Australian football players during an entire Australian Football League season. Int J Sports Physiol Perform. 2015;10(5):566-571.
22. Williams S, West S, Cross MJ, Stokes KA. Better way to determine the acute:chronic workload ratio? Br J Sports Med. 2017;51(3):209-210.
23. Gastin PB, Meyer D, Robertson D. Perceptions of wellness to monitor adaptive responses to training and competition in elite Australian football. J Strength Cond Res. 2013;27:2518-2526.
24. Cnaan A, Laird NM, Slasor P. Using the general linear mixed model to analyse unbalanced repeated measures and longitudinal data. Stat Med. 1997;16:2349-2380.
25. Gelman A, Hill J. Data Analysis Using Regression and Multilevel/Hierarchical Models. New York, NY: Cambridge University Press; 2007.
26. Malone JJ, Di Michele R, Morgans R, Burgess D, Morton JP, Drust B. Seasonal training-load quantification in elite English Premier League soccer players. Int J Sports Physiol Perform. 2015;10:489-497.
27. Kempton T, Coutts AJ. Factors affecting exercise intensity in professional rugby league. J Sci Med Sport. 2016;19:504-508.
Figure 1. An individual player's perceived soreness during a 20-week pre-season period. Values that lie beyond the dashed lines have a 75% chance of representing a moderate effect (effect size >0.6).
Figure 2. Statistical Process Control chart displaying match day + 2 (48 h) isometric hamstring force (left hamstring at 30 degrees of knee flexion). An amber dot represents a value 1 standard deviation from the player's mean and a red dot a value 1.5 standard deviations from the mean. The dotted lines represent the area within which scores are considered to be in statistical control. The practitioner can quickly scan each match day + 2 testing day and see where a player's hamstring isometric force is outside their normal range, i.e. out of statistical control.
Figure 3. Difference between the observed and predicted RPE training load for players of an American football team. Predictions were made using a linear mixed-effects model. The shaded area represents a moderate effect relative to the between-subject standard deviation. Error bars represent the within-subject standard deviation from the mixed model. The practitioner can quickly scan training for the day and identify any athletes with an abnormal difference between their observed and predicted responses for this training session.