Assessing the probability of human injury during
UV-C treatment of crops by robots
Leonardo Guevara, Muhammad Khalid, Marc Hanheide and Simon Parsons
Lincoln Centre for Autonomous Systems,
University of Lincoln, UK
{lguevara, mkhalid, mhanheide, sparsons}@lincoln.ac.uk
Abstract—This paper describes a hazard analysis for an
agricultural scenario where a crop is treated by a robot using
UV-C light. Although human-robot interactions are not expected,
it may be the case that unauthorized people approach the robot
while it is operating. These potential human-robot interactions
have been identified and modeled as Markov Decision Processes
(MDP) and tested in the model checking tool PRISM.
Index Terms—agricultural robotics, UV-C treatment, hazard
analysis, human-aware navigation, model checking
I. INTRODUCTION
In commercial growing operations, crops are sprayed with
various pesticides in order to keep diseases at bay. To help
reduce the use of chemicals, our collaborators at SAGA
robotics have developed a robot that can dose strawberry
plants with UV-C light to treat powdery mildew. The robot
configuration used during the UV-C treatment is presented in Fig. 1: the robot straddles the tables on which the strawberries grow so that the UV-C emissions are directed inwards. The UV-C dose is carefully calibrated not to damage the strawberry plants, but it can harm any other living thing that comes closer than 7 m to the robot. Thus, even though human-robot interaction during the UV-C treatment is unlikely, it is always possible that an untrained person decides to approach the robot to have a look. For these situations it is crucial that the robot incorporates an on-board safety system that detects an approaching human, alerts them to the danger, and stops operations if required.
In this context, this paper summarizes the potential risks and
failure modes identified during a hazard analysis of the UV-C
treatment scenario. These failures are then used to construct a
model of the human-robot interaction which can be translated
into a Markov Decision Process (MDP) to be tested by the
PRISM model checking tool [2]. Some preliminary results assessing the probability of human injury are given, pointing out important safety requirements that must be considered during the design and validation of a safety system architecture for the robot.
Fig. 1: The robot configuration for the UV-C treatment.

This project is supported by the Assuring Autonomy International Programme, a partnership between Lloyd’s Register Foundation and the University of York.

II. METHODOLOGY

A. Hazard identification

For the hazard analysis, we followed the systematic technique called Failure Mode and Effects Analysis (FMEA) [3], which involves identifying potential hazards in a system, evaluating their occurrence frequency, and determining the severity of their consequences [4]. In this context, Table I
lists the three main risky situations that may occur during UV-C treatment, according to a cognitive walkthrough. The consequences of the identified failures correspond to potential injuries from the UV-C light (F2 and F3) and to the risk that people do not become aware of the danger and keep approaching (F1 and F4), which in turn contributes to the occurrence of F2 and F3.
B. Safety requirements
The hazard identification is used as input for a Functional
Hazard Analysis (FHA) in order to define safety requirements
which reduce the severity and/or occurrence of the failures
F1-4 described in Table I. In our case, the following two
requirements were proposed:
SR1: The robots must incorporate an Audiovisual Alert System (AAS) to signal their current behavior and the potential danger. The alerts are triggered any time a human is detected (ideally at a distance greater than 7 m), but they are also programmed to activate periodically in case a human is not detected in time.
SR2: The robots must implement a robust Human Detection System (HDS), based on LiDARs and/or cameras, that can detect human presence beyond 7 m. In this way, the robot can stop operations before the human gets closer than 7 m.
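Requirements such as SR1 and SR2 only become testable once they are phrased as checkable properties over the model described in Section III. The fragment below is a minimal sketch in PRISM's property language of how SR2 might be expressed; the label "injured" and the 0.01 threshold are assumptions introduced here for illustration and are not values taken from this paper or from [1].

// Illustrative sketch only: "injured" is a hypothetical state label and
// 0.01 a hypothetical tolerance. SR2 read as a probabilistic requirement:
// under every resolution of the nondeterminism (every possible sequence of
// human and robot decisions), the probability that the human is eventually
// injured by the UV-C light must not exceed the tolerance.
P<=0.01 [ F "injured" ]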
III. PRELIMINARY RESULTS
A. Modelling
TABLE I: List of possible risky situations and failure modes during UV-C treatment.

| Possible situation | Code | Possible failure | Potential effect | Consequence | Severity | Occurrence |
| Robot moving along the row while a human approaches frontally | F1 | Robot fails to detect the human farther than 7 m | Robot audiovisual alerts are not activated | Human keeps approaching the robot | critical | occasional |
| Robot moving along the row while a human approaches frontally | F2 | Robot fails to detect the human closer than 7 m | Robot safety stop is not activated | Human is injured by the UV-C light | catastrophic | occasional |
| Robot at the end of the row while a human approaches laterally | F3 | Robot is aware of the human presence only when they are too close | Robot safety stop is not activated | Human is injured by the UV-C light | catastrophic | probable |
| Robot detects a human and activates audiovisual alerts | F4 | Human was not trained to interpret the alerts | Human does not become aware of the danger | Human keeps approaching the robot | marginal | remote |

The human-robot interactions during UV-C treatment and the behavior of the safety systems (i.e., the HDS and the AAS) were modeled as a Markov Decision Process (MDP) in which the
transition between states is non-deterministic and modeled by
probability distributions. To implement the MDP model in
PRISM, a single module was created with 5 local variables
which define the states of the robot, human, HDS, and AAS.
Ten constants were used to define the transition probabilities of the human decisions and to characterize the effectiveness of the HDS beyond 7 m and the effectiveness of the AAS in making the human aware of the danger. Additional auxiliary variables
were used to synchronize the transition of states in a specific
order. Full details may be found in [1].
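To make this structure concrete, the fragment below is a heavily reduced sketch in the PRISM modelling language. It is not the model from [1]: the variable names, the state ranges, and the two constants p_interact and p_F2 are simplifications introduced here only to illustrate how the human's decision, the HDS, and the injury outcome can be encoded as an MDP.

mdp

// Hypothetical constants; the full model uses ten [1].
const double p_interact; // probability that the human decides to approach
const double p_F2;       // probability that the HDS misses a human within 7 m

module uvc_scenario_sketch
  // 0 = human far away, 1 = human within 7 m, 2 = human injured
  human : [0..2] init 0;
  // 0 = robot treating (UV-C on), 1 = robot stopped by the safety system
  robot : [0..1] init 0;

  // the human may or may not decide to approach the operating robot
  [approach] human=0 & robot=0 -> p_interact : (human'=1) + (1-p_interact) : (human'=0);
  // if the HDS reacts in time the robot stops; otherwise the human is injured
  [detect]   human=1 & robot=0 -> (1-p_F2) : (robot'=1) + p_F2 : (human'=2);
  // absorbing outcomes: injury or a completed safety stop
  [done]     human=2 | robot=1 -> true;
endmodule

label "injured" = human=2;

The actual model additionally tracks the AAS state and whether the human understands the alerts (failures F1 and F4), which is why it needs five local variables and ten constants rather than the two of each shown here.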
B. Model checking
The MDP was analyzed through model checking. Figure
2 gives preliminary results showing how the probability of
human injury varies according to the occurrence of failures F1-
4. During the experiments, the probability of each failure was
varied from 0 to 1 while keeping the probability of remaining
failures constant at 0.1 (i.e., failures are always present, but the aim is to analyze which failure has the greatest influence on human injury). In all the plots, the potential human injuries
were also evaluated according to the probability of a human
deciding to approach the robot. This probability was varied
from 0 to 1 and is shown on the x-axis as the probability
of human-robot interaction. The riskiest situation is shown
in Fig. 2(c) where, under the assumption that the robot is
completely unaware of the human when they approach from
the side, the probability of injury is 0.52. The remaining plots
showed a much lower chance of injury, with the probability
of human injury being less than 0.1. These preliminary results suggest that more effort should be put into making the HDS robust when the robot is at the end of the rows than when it is moving along a row. Moreover, pre-programmed explicit voice messages could be activated each time the robot is about to leave a row, in order to make the human (trained or not) aware of the robot's presence in time.
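The sweeps behind Fig. 2 amount to checking a reachability query while varying one occurrence probability at a time. As an illustration only, reusing the hypothetical sketch and "injured" label from the modelling sketch above, the query and a possible command-line experiment could look as follows (the file names and the 0.1 step are assumptions, not values from the paper):

// Maximum probability, over all resolutions of the nondeterminism,
// that the human is eventually injured by the UV-C light.
Pmax=? [ F "injured" ]

// Possible invocation (hypothetical file names): sweep the probability of
// human-robot interaction from 0 to 1 in steps of 0.1, keep the HDS failure
// probability fixed at 0.1, and export one result per parameter value:
//   prism uvc_sketch.prism uvc_props.props -const p_interact=0:0.1:1,p_F2=0.1 -exportresults results.txt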
IV. CONCLUSIONS
This paper presented a preliminary assessment of potential
human injuries during UV-C treatment operations. Based on
the failures identified during a traditional hazard analysis,
we have constructed a probabilistic model to evaluate the
effectiveness of any proposed safety systems. The results
of the model checking give the user guidelines on how to improve the effectiveness of the current safety systems, either by improving detection algorithms, adding new sensors to overcome possible hardware limitations, or including new safety policies related to the workspace.
[Figure 2: four panels plotting the probability of a human getting injured against the probability of human-robot interaction and the occurrence probability of the varied failure.]

Fig. 2: Probability of a human getting injured by the UV-C light when varying the occurrence probability of (a) F1, (b) F2, (c) F3 and (d) F4. The three remaining failures, which are not varied in each case, are assumed to have a fixed occurrence of 0.1.
REFERENCES
[1] L. Guevara. Probabilistic modelling and formal verification using PRISM: A case study in agricultural robotics. Technical report, School of Computer Science, University of Lincoln, 2021.
[2] M. Kwiatkowska, G. Norman, and D. Parker. PRISM 4.0: Verification of
probabilistic real-time systems. In G. Gopalakrishnan and S. Qadeer, edi-
tors, Proc. 23rd International Conference on Computer Aided Verification
(CAV’11), volume 6806 of LNCS, pages 585–591. Springer, 2011.
[3] D. H. Stamatis. Failure mode and effect analysis: FMEA from theory to execution. Quality Press, 2003.
[4] R. Woodman, A. F. T. Winfield, C. Harper, and M. Fraser. Building safer robots: Safety driven control. The International Journal of Robotics Research, 31(13):1603–1626, 2012.