©Journal of Sports Science and Medicine (2020) 19, 179-180
http://www.jssm.org
Received: 27 November 2020 / Accepted: 29 November 2020 / Published (online): 01 December 2020
Test-Retest Reliability of a Visual-Cognitive Technology (BlazePod™) to Measure Response Time
Dear Editor-in-Chief,
A new technology (BlazePod™) that measures response time (RT) is currently on the market and has been used by strength and conditioning professionals. Nevertheless, before a new device is used to measure any outcome in research or clinical settings, the reliability of its measurement must be established so that the measurement can be trusted (Koo and Li, 2016). Hence, we assessed the test-retest reliability (repeatability) of the BlazePod™ (Play Coyotta Ltd., Tel Aviv, Israel) technology during a pre-defined activity to provide information about the level of agreement and the magnitude of the errors incurred when using the technology. This information can assist practitioners and researchers in the use of BlazePod™ technology.
We recruited 24 physically active young adults (age = 23.9 ± 4.0 years; height = 1.67 ± 0.09 m; body mass = 68.2 ± 13.1 kg) who were free of injuries and of any orthopedic or cardiorespiratory diseases. Participants reported to the laboratory on two occasions separated by one week, and completed a familiarization session with the instrument one week before the first session. During the first session, the one-leg balance activity (OLBA) was performed; this activity was chosen at random from the BlazePod™ pre-defined activities. We conducted all sessions in a physiology laboratory at the same time of day for each participant and under similar environmental conditions (~23 °C; ~60% humidity). The OLBA consisted of a unipedal balance activity performed with four pods arranged in a square on the floor, with the distance between pods set to each participant's lower limb length. Participants stood in the center of the square, and the aim of the OLBA was to tap out as many lights as possible with the dominant foot during 30 seconds. The pods lit up in a random order unknown to both the participants and the researchers. Three trials were performed, with a one-minute rest interval between trials, and the best value obtained was recorded. The total number of taps and the average RT of all taps in the OLBA were recorded for further analysis.
Data are presented as mean ± SD or 95% confidence interval (CI). We confirmed the normal distribution of the data using the Shapiro-Wilk test. A paired t-test, Cohen's d effect size (ES), and its 95% CI were calculated to assess the magnitude of the mean difference between sessions. The ES was interpreted as: trivial (<0.20), small (0.20-0.59), moderate (0.60-1.19), large (1.2-2.0), and very large (>2.0) (Hopkins et al., 2009). The intraclass correlation coefficient (ICC) and its 95% CI were used to assess reliability, based on a single-measurement, absolute-agreement, two-way mixed-effects model. The ICC value was interpreted as follows: poor (<0.5), moderate (0.5-0.75), good (0.75-0.9), and excellent (>0.9) reliability (Koo and Li, 2016). We also calculated the standard error of measurement (SEM), the coefficient of variation (CV), the smallest detectable change (SDC), the level of agreement between sessions via a Bland-Altman plot, the systematic bias, and its 95% limits of agreement (LoA = bias ± 1.96 SD) (Bland and Altman, 1986).
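For readers who wish to reproduce indexes of this kind, the sketch below computes the ICC, SEM, CV, SDC, and Bland-Altman statistics from two sessions of paired scores. It is a minimal illustration, not the authors' confirmed analysis: the letter does not name the software used, the ICC implemented is the McGraw-Wong single-measurement absolute-agreement form named above, and the SEM and CV formulas (ANOVA error term; SEM as a percentage of the grand mean) are common conventions we assume here.

```python
import numpy as np

def test_retest_indexes(day1, day2):
    """Test-retest reliability for two sessions per subject.

    Returns ICC(A,1) (single-measurement, absolute-agreement, two-way
    model), SEM, CV (%), SDC, and Bland-Altman bias with 95% LoA.
    """
    x = np.column_stack([day1, day2]).astype(float)  # subjects x sessions
    n, k = x.shape
    grand = x.mean()

    # Two-way ANOVA mean squares: rows = subjects, columns = sessions.
    ms_rows = k * np.sum((x.mean(axis=1) - grand) ** 2) / (n - 1)
    ms_cols = n * np.sum((x.mean(axis=0) - grand) ** 2) / (k - 1)
    resid = x - x.mean(axis=1, keepdims=True) - x.mean(axis=0) + grand
    ms_err = np.sum(resid ** 2) / ((n - 1) * (k - 1))

    # McGraw & Wong ICC(A,1): absolute agreement, single measurement.
    icc = (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err
                                + k * (ms_cols - ms_err) / n)

    sem = np.sqrt(ms_err)          # assumed definition: ANOVA error term
    cv = 100 * sem / grand         # assumed: SEM as % of the grand mean
    sdc = 1.96 * np.sqrt(2) * sem  # smallest detectable change

    diff = x[:, 0] - x[:, 1]       # Bland-Altman: Day 1 minus Day 2
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return icc, sem, cv, sdc, bias, (bias - half_width, bias + half_width)
```

Called with the 24 pairs of RT values, this returns point estimates of the same form as Table 1; the exact values and the CIs depend on the specific formulas the authors applied, which the letter does not fully specify.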
We observed a small to moderate increase between sessions for the number of taps (Day 1 = 20 ± 3 taps, Day 2 = 22 ± 4 taps; t(23) = -4.121; p < 0.001; ES = 0.55, 95% CI = 0.43 to 0.67) and a trivial to small decrease for the RT (Day 1 = 1418 ± 193 ms, Day 2 = 1358 ± 248 ms; t(23) = 1.721; p = 0.099; ES = -0.27, 95% CI = -0.38 to -0.15). All reliability indexes for both outcome measures are shown in Table 1. Moderate to excellent levels of reliability were found by the ICC (95% CI) values, and acceptable reliability by the CV, for both measures. Bland-Altman plots are depicted in Figure 1. The systematic bias showed that, on average, participants achieved two taps more on the second day than on the first and were 59 ms faster than on the first day. The LoA showed that the number of taps measured on the first day might be 7 units below or 3 units above Day 2; similarly, the RT measured on Day 1 might be 272 ms below or 391 ms above Day 2.
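As a worked check of the taps figures, the limits follow directly from the bias and the SD of the between-day differences (Bland and Altman, 1986); the SD of the differences (≈2.55 taps) is inferred here from the reported limits rather than reported directly:

\[
\mathrm{LoA} = \mathrm{bias} \pm 1.96\,SD_{\mathrm{diff}} = -2 \pm 1.96 \times 2.55 \approx (-7,\ +3)\ \text{taps}
\]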
In conclusion, the BlazePod™ technology provides reliable information during its OLBA in physically active young adults. We considered the measurement error acceptable for practical use, since low systematic biases and low errors of measurement were found in this study, together with moderate to excellent ICCs and acceptable CVs. These results suggest that practitioners can use the information provided by the BlazePod™ technology to monitor performance changes during cognitive training and to evaluate the effects of a training intervention.
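As a practical illustration of the monitoring use-case (our example, not the authors'), a change between two tests can be attributed to something beyond measurement error only when it exceeds the SDC reported in Table 1:

```python
def is_real_change(before_ms, after_ms, sdc_ms=116):
    """Flag an RT change as exceeding measurement error (SDC from Table 1)."""
    return abs(after_ms - before_ms) > sdc_ms

is_real_change(1400, 1250)  # True: a 150 ms improvement exceeds the 116 ms SDC
```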
Levy A. de-Oliveira 1, Matheus V. Matos 1, Iohanna G. S. Fernandes 1, Diêgo A. Nascimento 2 and Marzo E. da Silva-Grigoletto 1
1 Department of Physical Education, Federal University of Sergipe, São Cristóvão, Brazil
2 Department of Physical Education, State University of Rio de Janeiro, Rio de Janeiro, Brazil
References
Bland, J.M. and Altman, D.G. (1986) Statistical methods for assessing agreement between two methods of clinical measurement. Lancet 1, 307-310.
Hopkins, W.G., Marshall, S.W., Batterham, A.M. and Hanin, J. (2009) Progressive statistics for studies in sports medicine and exercise science. Medicine & Science in Sports & Exercise 41, 3-13.
Koo, T.K. and Li, M.Y. (2016) A guideline of selecting and reporting intraclass correlation coefficients for reliability research. Journal of Chiropractic Medicine 15, 155-163.
Levy A. de-Oliveira
E-mail: levyanthony@academico.ufs.br
Table 1. Reliability indexes for response time on BlazePod™ technology. Data are presented as mean (95% confidence interval).

Variables            ICC                CV         SEM             SDC
Number of taps       0.81 (0.30-0.93)   3% (2-4)   0.7 (0.5-1.0)   2 (1-3)
Reaction time (ms)   0.82 (0.59-0.92)   3% (2-4)   42 (28-56)      116 (77-154)

ICC: intraclass correlation coefficient; CV: coefficient of variation; SEM: standard error of measurement; SDC: smallest detectable change.
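The tabled SDC values are consistent with the conventional relation between SDC and SEM (we assume this is the formula used, as the letter does not state it explicitly); for the reaction time:

\[
SDC = 1.96 \times \sqrt{2} \times SEM \approx 2.77 \times 42\ \text{ms} \approx 116\ \text{ms}
\]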
Figure 1. Plots of the differences between Day 1 and Day 2 vs. the mean of the paired measurements for the number of taps (A) and the reaction time (B). The dashed line represents the systematic bias and the dotted lines represent the upper and lower limits of agreement.