The Near Future
of Unmanned Vessels
A Complexity-Informed Perspective
ESSAY
Kees Pieters
Colophon
ISBN: 9789051799538
first edition, 2015
© Dr. Kees Pieters
This work is licensed under the Creative Commons Attribution 4.0 International
License (CC-BY-4.0). To view a copy of this license, visit http://creativecommons.org/
licenses/by/4.0/.
Publications can be ordered via
www.hr.nl/onderzoek/publicaties
Hogeschool Rotterdam Uitgeverij
Kees Pieters
applied research professor Big Data for Deltatechnology
April 21, 2017
Introduction
Unmanned systems are all the hype today. Following the extremely dynamic
developments in the automotive industry, the maritime industry is preparing for a
future where human presence is no longer required in the logistical chains or on
board ships. The advocates of these developments promise great opportunities
and talk of ‘Smart Ports’, ‘Smart Cities’, ‘Smart Vessels’ and a lot of other smart
things where “Big Data”, “Deep Learning” and the “Internet of Things” are driving
a strong technology push in this direction. There are also more critical voices,
which stress the threats to job security and safety, and even warn against the
consequences of robot supremacy.
In the past four years, Research Centre “Sustainable PortCity” has been at
the forefront of these debates, both with regard to the maritime industry and
Rotterdam Mainport, as a city where people live, work and spend their leisure time.
As a research professor at this centre with ‘Big Data for Delta Technology’ in my
portfolio, I have followed these debates with great interest. My portfolio focussed
on smart, innovative solutions for water management, such as the inspection
and monitoring of rivers, (other) waterways and coastal areas. As such, and in a
typically Dutch fashion, I could not refrain from forming an opinion on the issue
of maritime robotics. At the end of my four-year assignment at the Rotterdam
University of Applied Sciences, I have welcomed the opportunity granted me
by my employer to set my thoughts on these developments down in writing, in
the hope that they may help decision makers and policymakers, technologists
and other stakeholders involved in the issue of autonomy in their attempts to
guide developments towards increasing robotisation and automation in a major
International Mainport region.
CONTENTS
Introduction
Contents
Robots in Mainport Rotterdam
Complexity Thinking
Complexity-informed Methodology
Complexity and Applied Research
The Deconstruction of Autonomy
Agents and their Environments
The Rationality of Robots
Man and Machine
The Exo-Skeleton
Swarming and Platooning
A Deconstruction of Vessels
The Near Future of Demersal Fishing
Predicting the Applicability of Maritime Robots
Conclusion
Literature
Overview publications
Robots in Mainport
Rotterdam
When I started my term as a research professor at the Mainport Innovation
Research Centre, as it was called in 2013, the buzz concerning robotics
was mainly concentrated around Google Car, Uber and Tesla’s car. These
developments in automotive technology concerned themselves with the concept
of (machine) autonomy, such as self-driving cars and platooning. In the Port
of Rotterdam, the second Maas Area (Maasvlakte 2) was close to completion,
a 2000 hectare land reclamation project off the coast by Hoek van Holland
which added 20 per cent to the Rotterdam port area.1 APM Terminal, one of the
major terminals in Rotterdam, had claimed a significant portion of this area for
their automated guided vehicles (AGVs), unmanned carriers which transport
containers to and from the ships.2 The cranes on this site that load and unload
the containers from the vessels are remotely operated, which is another step in
the direction of autonomy. In Norway, Rolls-Royce was already anticipating the
first unmanned vessels to transport the containers across the oceans,3 and so it
was not difficult for me to select my first research theme. When Roel Bakker, one
of the lecturers at the computer science department, told me that he wanted to
do something with robots, I suggested that he concentrate on ships. Everybody
else was already busy with quadcopters and vehicles, and the robotisation of
ships was most definitely very topical for Rotterdam.
As it turned out, there was a very successful project called ‘Project Zeeslag’
(project ‘Naval Battle’) at one of the other schools of Rotterdam University of
Applied Sciences (RUAS), which concentrated on building vessels to clean up the
‘plastic soup’ in the ocean. Every semester, two teams of students were to engage
in combat to pick up as many ping-pong balls as they could in a water basin with
a vessel that they had built during the term. Emile Jackson and Maarten Dubbeld,
the lecturers who had been running this programme from its inception in 2010,
and Mart Hurkmans from maritime systems integrator RH Marine, who financed
the programme, were reconsidering the format of the programme. When we were
first introduced, it became clear that we would concentrate more on autonomy in
order to adapt the programme to the requirements of applied research. We would
stick to the hull size of one-and-a-half metres, mainly because this implied that
the vessels could be transported by car and the components would be affordable.
In addition we would concentrate on what was needed to ensure that these small
vessels carried out tasks in real-world environments. The first goal we set for
ourselves was to make a successful crossing of the river Maas between the St.
Job Harbour, where our maritime school was situated, and the Dock Harbour in
the Heijplaat area, some four kilometres further down the river. This was where
the research centre and our colleagues from the RDM Centre of Expertise (RDM
CoE) were located. The preparations for this event ultimately confronted us with
the fact that we faced not only a technical challenge, but also rules and
regulations, and especially the absence of these where autonomous vessels were
concerned.2
Ever since, we have seen time and time again that this pattern applies not only to
the small vessels that we were making, but that it could also be scaled up to the
sizes and dimensions of the vessels that are customary in the Port of Rotterdam.
Figure 1: Aquabot crossing the Maas River
The crossing in the summer of 2014 heralded the start of the Aquabots (research)
programme and attracted some attention from the local media. Soon we were
getting the first requests to use our vessels for monitoring and inspection purposes.
The port authorities had a long-standing desire to automate their inspection of
quays with maritime robots and Rijkswaterstaat—the Directorate-General for
Public Works and Water Management—was looking into the use of autonomous
vessels for the inspection and monitoring of the Dutch waterways.5 Some smaller
companies, such as Indymo of my colleague, Rutger de Graaf, were looking for
small submersible remotely operated vehicles (ROVs) for inspection of floating
structures. Soon we had a small team assembled to coordinate further efforts in
this direction.
This essay is in part a reflection on the two-and-a-half years that we spent on
this issue. For even though the vessels we have been working with are relatively
small, many of the issues we have faced and the technology we have used
can be scaled up to the larger vessels that visit the Port of Rotterdam. Furthermore,
the projects we carried out with our students have been a great introduction to the
challenges and opportunities of autonomy in an internationally oriented main port
region.
Complexity Thinking
Before delving into the technicalities of unmanned vessels or vehicles, it may be
good to introduce myself briefly, in order better to understand the perspective
that will guide the rest of my argument. This is good practice in the ‘complexity-
informed’ approach that I will use to build my plea, as it helps the reader to
understand the logic of my reasoning.
I have a background in electrical engineering and artificial intelligence.
I worked for more than a decade in industrial automation before switching over
to small start-ups during the Dotcom hype at the turn of the millennium. During
this period, I also started a PhD at the Utrecht-based University for Humanistic
Studies, where I prepared my thesis on complex systems and complexity, assisted
in the development of my thinking by my supervisor, Prof. Harry Kunneman, and
my co-supervisor, Prof. Paul Cilliers. I like to believe that the three of us managed
to make headway in a new methodological approach to science, based on insights
into complex systems, that was (methodologically) inclusive, democratic and
did justice to the complex nature of many real-world phenomena. Especially
the latter became crucial to my appointment as a research professor, since the
nature of applied sciences tends to be complex rather than complicated, to use a
distinction that was introduced by Cilliers.6 However, before I get ahead of myself,
it may be good to introduce (critical) complexity thinking briefly, and to give an
indication of where it differs from regular (linear) science.
Complexity-informed Methodology
In order to understand what ‘complexity-informed’ research entails, it may be
good to recall the archetypes of scientific research. According to Henri Atlan and
others, modern science sprouted from the search for ‘God’s Plan’, and this quest
came with a number of assumptions that, if embraced, would lead to ‘pure’ or
‘exact’ knowledge.7, 8 The first assumption was that this quest was a search for
a certain ‘truth’ that was universal, timeless and independent of location.
This followed from the fact that God’s presence was everywhere and His plan,
which had been driving humankind since Genesis, would inevitably culminate
at the end of time, as the prophets through the ages had already pointed out.
God’s presence could be found in the extremely small, and the enormous vastness
of the Universe, so His plan would be visible regardless of time and space.
The second assumption, derived from the first, was that an observer would
regard her object ‘from a distance’, and would not interact with or influence the
results. Everything was fixed, and no matter what we humans attempted,
events would inexorably take their course. By the early nineteenth century,
the humanist movement in Western Europe had replaced God with a mechanical
clockwork universe that, once wound up, would tick-tock its way through time
in a predetermined fashion. Every cause had a singular, linear effect on our
life-world, and by careful reasoning and logical deduction, the threads of causal
relationships could be traced back in time and could be used to predict the future!
The controversies with the dominant religions that opposed these ideas often
obfuscated the fact that the underlying assumptions were still very much the same.
With all the historical nuances and critique that also travelled with these
assumptions through the ages, one could say that the most dogmatic advocates
of this view were ambassadors of one extreme methodological stance with regard
to science. This view proved to be extremely important for the development
of the natural sciences, which made the search for ‘pure’ or ‘exact’ knowledge
through logical reasoning and mathematical proof the dominant approach in
science until the latter half of the twentieth century.
The other extreme developed most strongly in the social sciences and the
humanities. Its proponents saw a world that was not objective, allowed for
multiple perspectives and where truths are social constructs.9, 10 This ‘relativist’
view developed most strongly in the sixties of the previous century and is
probably most vividly embodied by Paul Feyerabend’s call ‘Against Method’,
which questioned the dominant approaches of scientific rigour as a means
of discovering our life world.11 With hindsight, one might argue that the most
extreme of these ‘postmodern’ views tended to mistake relativism for
randomness and, in general, failed to explain why ‘classical approaches’ were
so tremendously successful in the natural sciences. Notwithstanding this, the
social sciences and the humanities tend to accept multi-interpretability of subject
matter. Research in these areas can also be based on a plethora of theoretical
frameworks and methodologies, both qualitative and quantitative, and sometimes
inclusive of those of the natural sciences.
Complexity-informed methodological perspectives (we shall not speak of
‘complexity theory’, as we, like the social sciences, consider complexity to require
an amalgam of methodological approaches) tend, with all their strengths and
weaknesses, to explore the middle ground between the two archetypes
described above. Solid research from the natural sciences brought (amongst
many others) the concepts of non-linearity, fractals, (quantum) uncertainty and
self-referentiality, while postmodern approaches made us more aware of the
limitations of observation (most notably bias), the role of history, culture and
normativity in a system or network. Problems between micro and macro effects
in such systems, and new methodological constructs such as patterns and the
concept of ‘scale’, help us to understand that knowledge is designed rather than
‘found’,10, 11 and that this design is influenced by historical and cultural effects.12
However, these designs are not arbitrary; they are still constrained by the ‘real’
thing. Even if there are multiple models, there must be, at least to some extent, a
means of transformation from one to the other. If not, there must be an account
of why two models do not seem to match well, as this demarcates a boundary
of methodological or epistemological uncertainty. At these boundaries the
researcher has an obligation to state that ‘she does not know’, as the framing
of her models is no longer adequate to address the issue at hand.
One of the most important contributions of complexity so far, is
that at least some researchers are at ease with the ambiguity and
uncertainty of this middle ground, and manage to find their way in
the resulting ‘swampy lowlands’ of knowledge.13
One of the implications of this stance is that academic research is in many ways a
professional practice like many others. The differences lie mainly in the way that
this particular practice is organised and in the fact that academia is a breeding
ground for (future) elites. As a result, the specific biases of this practice tend to
be quite dominant in society, and the normative rationale behind this hardly tends
to be questioned from the ranks outside of this culture. A complexity-informed
perspective is more open to the incorporation of relevant voices from other
practices, as this should contribute to the full picture of a complex phenomenon.
For the purposes here, one of the most distinctive differences from the classical
(linear) scientific approaches—which, by the way, has also long been accepted in the
social sciences—is the fact that the observer (the scientist) is part of the system she
researches. The observer is not an independent, disconnected entity in the research
process. Rather she shapes her findings by the choice of her theoretical framework,
the type of experiments she designs, her interests and the cultural background she
brings with her to the research. Also, the models we make are distorted by the way
our brain is organised and the fact that it is embedded in our body.7, 8
Our cognitive system has not evolved to discover ‘truths’ and scientific research
therefore tends to be quite demanding on our brains.17 For instance, humans are
a visual species, so we tend to project causal implications onto observable change
and are often oblivious to the processes that have been driving this change
‘under the radar’ prior to the visible events.
A complexity-informed methodology therefore must account for the resulting
‘distortions’ of the research, as these influence the outcome. One way to counter
the inevitability of bias in the research that is performed is to cross-reference
the scientist’s observations with those in other independent scientific research
areas that investigate the same phenomena. One could say that one theoretical
framework provides an independent (meta-)perspective from which to observe
another, as a replacement for the ‘God‘ position in traditional science and, in
true self-referential fashion, this works both ways! This also implies that with
complex systems, different
methodologies, frameworks and approaches must interact with each other (we
call this methodological interference) and that these theories and frameworks
are units with which we build or design our models of our lifeworld. The particular
complexity-informed dialect that I will be using is based on the notion of patterns.
This is derived from well-established methodological research in (building)
architecture18 and software engineering.19
A final feature that is typical of a complexity-informed methodology is that
the research and the process of researching are intertwined. When we research
a certain phenomenon, such as robotics in a major main port, we often have to
switch to a position where we can see the researcher who is performing that
research, in order to account for the bias and to augment her findings with other
insights. These insights do not necessarily have to be academic—the insights of
professionals are equally important—but if we are to play the game of science
then we are not allowed to use these insights to further our argument without
solid scientific backing or a critique of such academic sources. A major issue for a
university of applied sciences is whether the voice of professional practice should
take a more equal position in these debates, since its knowledge is often more
relevant than fundamental knowledge. Professional knowledge may be a strong
source of methodological interference.
From the above it may have become clear that the complexity-informed dialect
we propose here is an inclusive one, which allows insights and perspectives from
many different sources, but where we must take care of moulding the models
we make (which by definition are incomplete) to the specific games20 that are
required for certain areas. On the one hand, it prevents the knowledge that is
designed using these models from becoming encapsulated in little bubbles of
like-minded people who all preach to the converted. On the other hand, we must
respect the rules of certain ‘language games’, such as the academic world, and
tweak our models to fit the rules that apply there.
Complexity thinking is modest in the sense that its ambition goes little further
than making models of certain aspects of ‘reality’, or the ‘life world’ as I prefer
to call it. The life world is that part of ‘reality’ that is relevant for the observer.
This implies that the life world extends beyond the horizon of that observer
and often includes processes and dynamics that the observer cannot see or
know. The models the observer makes are relational, by which I mean that the
model and the life world are intimately connected. Besides, the relationships are
not necessarily unidirectional, but the model and life world can interact with
and mutually affect each other. This, as we shall see, has consequences when
performing applied research.
Complexity and Applied Research
Figure 2: Applied research in action
One of the most distinctive differences between fundamental and applied
research is that the latter often results in actions in the life world.
This creates feedback loops which affect the life world and therefore the models
that drive these changes. This is not to say that fundamental research does not
have these forms of feedback, but often the consequences are negligible (e.g.
in astronomy), or it is assumed that this is the case. Either way, fundamental
research tends to limit the scope of these feedback loops, based on normative
choices which are usually implicit and not based on objective criteria. In the case
of applied research, such implicit choices may affect the system dynamics as a
whole, with the result that the entire model is dangerously incomplete and affects
the life world in unexpected and undesired ways. Especially when the outcomes
of these models are materialised in concrete products, services or policies, we
have to ensure that they perform safely in a certain environment.
Just recently, I was made acutely aware of the pitfalls when I attended a
meeting on unmanned vessels and heard one of the speakers—an engineer—say
that, for him, a human on board a vessel was a ‘system that could be replaced
by technology’. All it took was understanding the professional role of the human,
and comparing this to technological feasibility. This stance may be perfectly valid
in order to understand the primary function of a human operator, but it runs
a serious risk of being blind to all the other tasks a professional often carries
out, which are not primarily related to carrying out a certain function, but are
necessary to keep things running smoothly. When technology replaces these
professionals, you also lose these often mundane ‘extra’ functions, which are
usually not in vogue among the technologists.
Applied research therefore typically requires complexity-informed perspectives,
since the research inherently concerns open, dynamic systems which interact
intimately with their environment. Complexity-informed perspectives are vital for
the research that is conducted at the Rotterdam University of Applied Sciences,
especially when it has a focus on developing new forms of education. For
instance, most researchers are also lecturers at the University and are actively
involved in designing new forms of education, which makes the research
inherently self-referential. Self-referential methodology teaches us that:
•	the initial settings, conditions and mindset are going to influence the way that
these projects develop;
•	‘truths’ and ‘facts’ which the research unearths may be the result of
self-fulfilling prophecies, as a complex system tends to adapt itself to that
which is measured;
•	open systems are complex systems which interact continuously with their
environment; expressed more strongly, the environment is entwined in such
systems.6, 21, 22 This means that the traditional way of excluding the influence
of the environment (by carrying out research in the controlled environment of
the laboratory, through the constraints of the theoretical framework or the
experiment, or through a scientific culture) is the result of a normative choice
and is not justified by a deep ‘objective’ reality which implies that such choices
do not affect the results. In other words, we have to accept the condition that
our research has ‘too many variables’, for many of which the researcher has
no prior schooling, training or background;
•	scale (in)variance: patterns we discover can be observed at different levels.
For instance, any patterns relating to autonomous agents will inherently apply
to machine robots, social agents such as humans, and social organisations.
They will (therefore) also apply to a researcher who is looking into the issue of
autonomy in Mainport Rotterdam.
Figure 3: RoboLAB at RDM
This section of this essay contains many concepts and ideas that may be tough
to digest, especially for those who are trained to think in linear, hierarchical
ways and in terms of a distinct relation between cause and effect. This is probably one
of the most profound consequences of learning to think in terms of complexity:
that we are subject to a ‘fundamental limited knowing’.21, 23 If we are dealing
with a complex system, then we cannot capture this system in an abstract or a
‘management summary’. These forms of simplification are often necessary, but
are the result of the limitations of the observer and will not automatically make
our lifeworld simpler. Put more strongly, actions based on these simpler, coarser
models will sooner increase the risk of creating undesired effects than actions
based on a more fine-grained, detailed model. If one really wants to think in a complex way, one must
realise that the observer has to change and has to subject herself to a regime of
continuous learning. Unlike Occam’s razor, a simple answer to a complex problem
is usually (but not always) the wrong one!
One of the most practical consequences of this methodological stance is that
many of the projects we started at RDM were multidisciplinary. We sometimes
had student teams from six or more programmes and from different schools
within RUAS working together on a certain project, closely supported by
lecturers and experienced professionals from maritime companies and other
organisations. Another consequence was that in time our robotic research
became more focused on the issue of situational awareness to cater for the
environment in which the robots have to perform their tasks.
The Deconstruction
of Autonomy
In order to arrive at a balanced view on the near future of unmanned vessels,
it is necessary to break this complex issue down into a number of separate,
but related issues. For this reason, I will ‘deconstruct’ this topic along three
dimensions, namely along the three dominant rationales or narratives for
automation or robotisation of work processes, the difference between man and
machine, and lastly, a functional deconstruction of a vessel.
The above qualifications are obviously made to be set against the main contender
in the work process, namely the robot or the machine. However, before delving
deeper into the man-machine distinction, it is worth deconstructing the robot—or
intelligent machine—along two well-known discussions in the field of artificial
intelligence (AI), or computer intelligence as it is recently more often called.24
Agents and their Environments
When a robot is said to have ‘intelligence’, this usually means that the robot
can perform certain tasks in a predictable way and usually it is more or less
responsive to changes in the environment while carrying out these tasks. The
robot has a certain repertoire of actions which it can perform at a given time.
These actions are, for an external observer, adequate for the task at hand. With
the current state of technology, however, the robot is not capable of asking itself
the ‘why’ question prior to these actions. Unlike human beings, robots (currently)
do not have the ability to reflect critically on (the consequences of) their choices.
Someone has instructed the robot to carry out a certain task and the robot starts
doing so until the stop button is pressed. For most robots, this also applies to the
‘intelligence’ it contains. Many everyday robots, such as the ones that are found
in production halls, are ‘intelligent by design’, which means that smart engineers
and designers provide the actual intelligence and the robots dumbly carry out
the required tasks over and over again. This usually means that the intelligence
of these robots is constrained by:
•	what the designers can foresee when the robot is designed;
•	the choice of technology that is needed to make the robot adaptive to its
surroundings.
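The constraints above can be made concrete in a minimal, purely hypothetical sketch (the function and condition names are invented for illustration and do not come from any real robot software): a robot that is ‘intelligent by design’ simply maps each situation its designers foresaw to a fixed action, and can do nothing sensible with a situation that falls outside that repertoire.

```python
# Hypothetical sketch of 'intelligence by design': every behaviour is a
# condition-action rule written in advance by the engineers.

def preprogrammed_controller(sensor_reading: str) -> str:
    """Map each foreseen situation to a fixed, designer-chosen action."""
    rules = {
        "obstacle_ahead": "steer_left",
        "path_clear": "full_speed_ahead",
        "low_battery": "return_to_dock",
    }
    # Any situation the designers did not foresee falls through to a
    # safe default: the robot simply stops.
    return rules.get(sensor_reading, "stop")

# Adequate in the stable environment it was designed for...
print(preprogrammed_controller("obstacle_ahead"))       # steer_left
# ...but an unforeseen condition only ever triggers the fallback.
print(preprogrammed_controller("unexpected_situation"))  # stop
```

The point of the sketch is that the ‘intelligence’ lives entirely in the rule table: however cleverly the table is filled, the robot remains bounded by what its designers could foresee.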
As a result, this technology usually works quite well in predictable, stable
environments. Usually the surroundings of these robots are isolated from the
dynamics of everyday life by walls, fences or otherwise. People who enter these
confined areas are usually trained professionals who handle the technology with
the necessary care. The current developments in robotics are primarily focused
on allowing these robots to perform in these confined spaces with increasing
autonomy, which results in gated compounds where human presence is no longer
desired. The APM terminal at the second Maas Area I mentioned earlier is a
good example of such an environment for these types of robot.
There are some schools of thought in the AI community which do not consider
these robots to be truly intelligent and certainly not ‘autonomous’. According
to those who hold these views, an autonomous machine should be able to make
its own decisions, given certain circumstances and goals, and to decide which
course of action to take. The robot should be able to make decisions that the
designer could not foresee, based on the conditions of its environment
and previous experience. One of the most vital consequences of this stance
is that these machines must be able to make mistakes, in order to learn from
them. However, the prevention of mistakes is one of the main reasons why
most organisations are interested in robots in the first place! As a result, most
practical applications of robots tend towards the first, pre-programmed type,
although some elements of truly autonomous machines are being integrated
into these robots. Some robots get trained prior to their introduction into a
production plant, which limits the chance of errors once they take their place
in the assembly line. However, this training is subject to the same constraints
that apply to the pre-programmed robots, so this usually works well in stable
environments. Another major focus lies on automating processes where human
actions are prone to error and are potentially hazardous.
Practical robotics tends to take up all kinds of intermediate positions on the scale
from ‘pre-programmed intelligence’ to ‘true autonomy’. The majority of today’s
robots are positioned on the former end of the scale and are usually referred
to as ‘machines’. Truly autonomous robots are still quite rare and usually have
limited practical application. There is a strong development towards augmenting
machines by giving them the capability to learn and adapt, which in a sense
mimics the way that organisms operate. Organisms are usually also equipped
with an ‘instinct’, which is enriched by means of learning. In this sense, there is
therefore little difference between a machine and an organism.
Another related issue is the design of the robot. The sensors and actuators that
a robot has at its disposal are also predominantly pre-determined. A robot can be
extremely intelligent, but if it has no ability to interact with its surroundings, its
intelligence remains hidden. This issue is usually called the situational awareness
of the robot. One can say that the robot’s environment can be split up into two
parts, namely the environment ‘as is’, and a subsection thereof of which the robot
is aware. We will call this the ‘surroundings’ of the robot,1 although one must
realise that these surroundings do not correspond with the immediate vicinity.
After all, if a robot cannot sense water, it will be totally oblivious to the
fact that it is submerged in this liquid after it has just fallen off a bridge! The
surroundings of that robot will provide no clues about the predicament which
that robot faces!
The surroundings of a robot are (currently) also predominantly provided by its
builders. Therefore the ability of the robot to operate efficiently in a certain
environment is largely determined by the foresight and expertise of its creators.
In general, this means that such robots will usually perform best in predictable,
stable environments.
One of the consequences of truly autonomous machines is that they increasingly
resemble biological organisms. This is not surprising, as any agent who has
to establish itself in a complex environment is subject to the same rules and
conditions. A silicon-based computer has some distinct features that can help
it to maintain itself in these environments, which are different to a biological
carbon-based organism, but the rules of the game remain fairly similar.
Interestingly enough, much of the sensory apparatus of organisms is as yet
unknown. New receptors for certain stimuli are regularly found and often some
that were considered redundant prove to be more important than expected. As a
consequence, we tend to underestimate the complexity of the sensory apparatus
of organisms and robotic replacements are often of too simplistic a design to
really take over the tasks of these organisms.
The Google Car, the most striking current development in autonomy by the
well-known American software company, reveals one of the most telling
consequences of this fact. Google and Tesla have spurred on a technology race in
driverless cars, which is currently attracting investment running into hundreds
of millions of dollars. In 2016, a consortium led by General Motors invested half
a billion dollars in Lyft, a promising ride-hailing start-up working on autonomous
vehicles,25 so that these vehicles can achieve what an average human being can
achieve in thirty or so driving lessons. Of course, this is a somewhat cynical way
of putting it, and the argument certainly does not nullify the efforts made by these
companies, but it does put the current state of robotics in perspective.
1 This distinction is very similar to the previous discussion on the difference between ‘reality’ and the ‘life
world’ of an observer or researcher.
From the previous discussion it may have become clear that ‘autonomy’ is not
necessarily a distinctive feature of a certain agent, but it bears a relationship to
the environment in which that agent has to perform. This observation relates to
two dominant narratives in the scientific community and the popular press. The
techno-positivist narrative is dominant in the Anglo-American world and tends to
consider (robot) intelligence as an atomic property of the corresponding material
object. Coarsely stated, its proponents think that if one puts enough computing
power into a material object, the object will become increasingly ‘intelligent’.26
The second narrative is stronger in Continental Europe. This narrative considers
intelligence to be a relationship between an object and its environment. If the
environment is complex, it requires more intelligence to cope with this complexity.
Conversely, a complex environment also allows more opportunities to train this
intelligence, which makes the object smarter. A machine with great capabilities for
intelligent behaviour will not reach its full capacity if it receives no proper training,
and this, of course, applies not only to machines. Proponents of this stance will usually
prefer to use the term ‘adaptiveness’ instead of ‘intelligence’, as the ability to adapt
is considered to be the key property of an intelligent agent. As the latter narrative
does not necessarily exclude the former, I will follow this line of reasoning.
As the relational stance of intelligence draws attention to the environment as well
as the machine, it may be good to look into the environment in more detail. At
one extreme there are stable, predictable environments in which relatively simple
robots can adequately perform their tasks, while at the other there are
environments so volatile that only the most rugged agents can survive.
In the middle ground, the practical lifeworld that we inhabit, one can speak of
complex environments, which share at least a number of distinct characteristics.
They are:
predominantly unknown: an agent cannot make an exact,
complete representation of its surroundings
NP-complete: the number of possible relations between entities in the
environment tends to grow exponentially, resulting in a combinatorial
explosion of the search space. As a result, an agent is not able to make
the best decision by testing all the available options one by one.
dynamic: the conditions of the environment may change before
the agent has made an appropriate model of its surroundings
reactive: the surroundings change due to the actions of an
agent
contingent: the environment is subject to unexpected events of
which the agent has no adequate models
To summarise the above, the differences between man and machine have to be
set against the environments in which these agents have to perform their duties.
With this, we can start our deconstruction of the man-machine discussion.
The Rationality of Robots
When considering the issue of machine autonomy, we often hear two stories or
narratives that aim to rationalise the need to deploy this technology:
To replace people in the work flow, as they are considered to be
the weak link in the process. Humans are fallible, expensive and
a source of risk to the operations
To reach areas which are hazardous for humans. In the maritime
sector, one of the most captivating examples is the prospect of
deep-sea mining, which evokes the famous slogan from Star
Trek: “To boldly go where no man has gone before”
I will return to these rationales later on, but first I would like to include a third
rationale, which for me was a real eye-opener. This was a realisation by Dr
Stephania Giodini from TNO, with whom we have worked quite extensively in
recent years. She introduced what I call the ‘Stoffel de schildpad’ (Stoffel the
Turtle) argument, after one of the characters from De Fabeltjeskrant, a famous
Dutch puppet series of the seventies. The father of one of my lifelong friends
had appropriately named his robot lawnmower after this character, as it tended
to work its way over the grass in a slow but determined manner. When we
organised a conference on 4 November 2016 with Maurits Huisman and Danny
Blind from TNO, on a platform for underwater communication at RDM that
Maurits had conceived, I realised that, for maintenance purposes, the continuous,
gradual interventions of ‘Stoffel’ robots would provide a less invasive means of
maintenance than the incidental, tougher interventions that humans often have
to carry out.
Figure 4: Platform for Underwater Communications (image from Maurits Huisman, TNO)
This argument is particularly potent when robots are used for inspection and
maintenance in vulnerable ecosystems or other environments where small,
continuous and gradual interventions are preferable to incidental invasive
ones. This type of robot can also be very successful when changing a certain
environment to a preferred condition, for instance when a lake that has
decreased biodiversity due to human intervention needs to be restored to its
original state. The ‘Stoffel robots’ can take their place in the ecosystem until the
desired conditions are reached and the interventions provided by the artificial
beings are no longer required. Another interesting area of application can be
found in maintenance tasks in enclosed spaces of vessels, such as tanks.
Interestingly enough, the ‘Stoffel robots’ take a position in-between the two other
rationales. These robots can exceed human capabilities, not because
the environment is potentially off-limits, but because continuous human presence
is not an option or is not desirable. ‘Stoffel robots’ will therefore most likely
replace human interventions in environments that are predictable, in the sense that
the consequences of an intended intervention are known in advance. The ‘Stoffel
robots’ will have a limited form of autonomy and/or adaptive behaviour, such as
navigation, collision avoidance, charting and planning.
The Platform for Underwater Communications as conceived by TNO has
interesting implications for so-called swarming and platooning solutions. I will
return to this at a later stage.
Man and Machine
Figure 5: Unmanned Port (published with permission from Universal Studios)
The first rationale in the discussion on autonomy is one that receives the most
attention. When automation and robotisation become the focus of the public
media, this usually revolves around the loss of jobs, the ethics of intelligent
man-made machines and the possible threat that robots will become the
dominant species on Earth. I will concentrate on the use of robots (and humans)
in the workplace and point out some of the differences between a human and a
machine.
Human agents are a product of evolutionary adaptation. In the struggle for
survival, humans have found a particular niche in which cognitive and artificial
adaptations occur at a faster rate than biological adaptation. This is not to say
that biological processes are no longer relevant; many of our (hierarchical) social
structures can also be seen in other primates and other social
species.27 The famous Dutch primatologist Frans de Waal has argued that we,
as primates, can be positioned in-between our close relatives, the chimpanzees,
and the bonobo apes, and therefore mix extreme aggression with gentler social
interactions.28 At the deepest core of our cognitive centre, the amygdala controls
our most elementary impulses, namely fight, freeze, flight or flock.15
Even though there are conscious feedback mechanisms that temper the effects
of the amygdala, this nervous centre is responsible for the most primitive actions
of any higher organism. As a result, human agents are finely tuned to respond
strongly to anomalies, experiences that are uncommon, unexpected and rare.
Consequently, we humans have a paradoxical tendency to strive for regularity
and calmness, while our brain performs best when (moderately)
challenged by unusual and unexpected events, under the influence of the hormones
adrenaline and endorphins.29
This touches on the issue of robotics. When humans are said to be the ‘weak link’
in a work process, it usually is the result of repetitive, monotonous labour which
is not challenging. Humans in general become distracted, sloppy and distraught
under these circumstances, especially when they persist for extended periods
of time. At the other extreme, humans tend to become overly stimulated when
contingencies and unexpected events become the norm. Only a few people, such as
fighter pilots, are capable of handling the resulting stress and can do so only for a
limited amount of time.
A third distinctive feature of our brain is that it is a relatively slow information
processor, especially when compared to a computer. However, the brain can
process patterns in a massively parallel fashion, which allows for a very rich
contextual ‘picture’ of certain events. For instance, in their mutual interactions
humans can quickly combine instrumental, social and ethical considerations to
assess the situation, predict possible outcomes and include past experience.
The recent science fiction film Arrival captures this information-rich mode of
processing quite aptly, although interestingly enough it also shows
that humans have to revert to a more linear, sequential way of communicating
when switching to voice or, to a lesser extent, to writing. As a result,
humans tend to perform poorly when tasks are (perceived as being) relatively
unimportant, or are of little meaning for the entire process.
This issue touches on many of the popular debates on artificial intelligence
(AI) that are currently being waged, for instance in a recent report from the
European Parliament.30 We are told that AI is currently developing at such
a fast pace that we need to consider the consequences of robot ethics and
other issues that treat the robot as an equal to human beings, or even as
a superior being. Often the successes of computers, such as IBM Watson, in
playing chess, Go or poker, are introduced into these debates to underline the
advances that are currently being made. However, in my view, these examples
also demonstrate the biggest weakness of the current state of technology, which
is their inability to process entirely different tasks concurrently. Even though
humans are also poor multitaskers, we are still superior to these robots in this
respect. Humans are, to a considerable extent, capable of playing multiple games
with multiple rules simultaneously, and this rich playing field is usually called
‘everyday life’. We drive, communicate, check calendars, work, do the groceries,
and all this (usually) with apparent ease. If we set this against a large,
energy-consuming mainframe that is dedicated to one task at a time, or at best a few tasks,
then human beings currently still have a significant edge over these computers.
Furthermore, we are extremely mobile.
The current state of computers such as Watson is undoubtedly impressive, but
they are comparable to a toddler’s mastery of a game of Memory; they will beat
their parents time and time again because they have the luxury that they can
dedicate their time to mastering the game, but we intuitively know that this does
not make them superior to a grown-up.
Considering robot ethics—the question of which rights and responsibilities a
robot with superior intelligence will have in a future society—I take, as a rule of
thumb, the rights of animals as a benchmark. In the Netherlands, we currently
have a political party that considers the rights of animals to be an essential
part of a civilised society. The ethical position of robots is currently still very
much the same as any other machine. If, in the near future, we are considering
a political party for robots, then it is time to start worrying about robots and
their place in our society.
A final, very important feature of (human) organisms is their ability to repair
themselves. When something goes wrong, a carbon-based organism is able,
to a certain extent, to correct the mistakes that were made and to heal itself when it
is wounded. Repair in this sense is a materialised form of learning; a mistake
or unwanted event results in a material response to correct the damage. With
the current state of technology we cannot expect this from robots, or at best
to a severely limited extent. Besides this, the ability to repair oneself requires
energy and other resources to be dedicated to this task. If an organism is
considered ‘weaker’ than the robot, it often means that the organism is
less efficient in performing a certain task, but the reason for this is usually
that the organism was not built for this purpose in the first place and that a
significant portion of the energy and resources of that organism are required
for self-maintenance and for repairing itself. As a result, a definite strength
is portrayed as a weakness, due to a limited, normative perspective on the
existence of an organism.
Another pattern amplifies this effect, especially in discussions concerning the human
workforce. This pattern centres on the notion of risk. In many discussions
on safety and security, the resulting weight is often defined in terms of risk:
“risk = probability × impact”
A high-impact loss with a low probability can therefore represent the same risk
as a low-impact loss with a high probability. As we have seen that the probability
of loss is strongly tied to the complexity of an environment, this means that an
experienced human being operating in such an environment may still embody a
higher risk than a robot that performs significantly worse. The argument that
humans are the “weak link” in a work process usually refers to this risk, especially
if it has to be covered by life and health insurance or if human loss may be
detrimental to the public image of an organisation. The technology push in
robotics is currently predominantly targeted at replacing high-risk human
presence with lower-risk robotic replacements.
This push is also amplified by a social pressure that does not accept human loss.
In many ways, we currently live in one of the safest periods in human history, but
as a result the loss of a life, especially if it is work related, is not considered to be
acceptable, even if the work is known to be hazardous. The ‘weak link’ argument
therefore often does not mean that humans perform poorly, but rather that robots
can perform the same tasks adequately at a relatively low risk of loss.
This also applies where the risks involved in the consequences of human error are high.
The Exxon Valdez accident in 1989 and, more recently, the BP oil spill in the Gulf
of Mexico provide interesting scenarios for the application of fail-safe robotic
alternatives that obviate human error. However, in the latter case, robot error
would be subject to the same judgement as human error.
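The risk formula quoted above can be illustrated with a minimal sketch; the probabilities and impact figures below are invented purely for the example:

```python
# Minimal sketch of the risk formula quoted above:
#   risk = probability x impact
# The scenario numbers are invented for illustration.

def risk(probability: float, impact: float) -> float:
    return probability * impact

# A rare, severe loss can carry the same risk as a frequent, minor one.
severe_rare  = risk(probability=0.001, impact=10_000_000)  # e.g. a major spill
minor_common = risk(probability=0.1,   impact=100_000)     # e.g. small damage
assert severe_rare == minor_common == 10_000.0
```

This equality is exactly why an experienced but fallible human can, on paper, embody a higher risk than a markedly less capable robot.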
The Exo-Skeleton
Ever since the Industrial Revolution we have seen a development in which human
beings have been fitted with certain kinds of artificial, mechanical suits that
make them stronger or faster, or allow them to enter areas that were previously
off-limits to human presence. Currently these exoskeletons have many names,
such as cranes, trucks, tractors, aeroplanes and rockets and they have become
so commonplace that we hardly regard them as such. The current movement
in robotics aims to replace the humans in these exoskeletons with artificial or
computer intelligence (bio-computing is currently still too limited for practical
use) and so the main distinction between man and machine lies in the specific
characteristics of computers.
For one, computers at present predominantly require electricity for their energy
needs. Especially in the West and most developed countries, we have finely
meshed electricity grids, so it is not difficult to access this source of energy.
With recent developments in solar power, the possibilities increase even further.
Ongoing improvements in the efficient transformation of electricity into work
also provide a perfect ecosystem for robots. The majority of robots are designed
to perform one task very well. As with all tools, this does not mean that they are
constrained by this, but the degrees of freedom for alternative applications are
usually limited. Furthermore, the design of the exoskeletons is usually optimised
for a certain environment or task, so similar tasks will have different designs
depending on their intended use. A knife used in the kitchen is very different to
the scalpel that a surgeon uses. Again this touches on the previous discussion,
where we saw that a robot, if it is to be effective, must adapt to a certain
environment and intended purpose, just as organisms do.
This immediately raises the question of whether there is an artificial design that
is extremely adaptive and can mould itself to a wide variety of circumstances.
Humans are amongst the most versatile of carbon-based organisms, which prompts
the further question of whether there is a ‘universal robot’ that can outperform
humans in this sense. Currently, most robots designed with this aim in mind mimic
the human form, with two arms, two legs and an information-processing unit at
the top. There are a number of reasons for this.
Human beings are the result of millions of years of evolutionary
trial and error. This means that there is a good chance that the
human form is close to optimal when it comes
to multi-purpose behaviour.
Most robots are meant to perform in human environments,
which are designed specifically to facilitate the human form.
Mimicking the human form is a way of increasing the acceptance
of robots when they need to interact closely with humans.
This is not to say that alternative forms are not being tested. In fact, a specific
class of robots—swarm robots— are, in my view, extremely good candidates for
the universal robot.
Swarming and Platooning
Professor Chris Verhoeven, supervisor of the Swarm Theme of the Robotics
Institute of Delft University of Technology, is someone who believes that swarm
robots are going to be the first robots to enter the public domain.i Swarm robots
are autonomous robots of relatively limited individual intelligence, but which as
a collective display surprisingly complex behaviour. This also stands at the core
of Chris’s argument that people are likely to accept that the individual robots
i 2014, personal communication
present little danger to them, which makes their presence in public domains less
problematic. There is another reason why swarm robots are good candidates
for practical application: robot swarms tend to be very robust, which
makes them well suited for complex environments. At present, swarming robots
are still very much an academic pastime, but we can see them as one extreme
possibility on the following scale:
1. A single human agent or robot
2. A few dedicated robots working together with human agents
3. A massive collective of robots and human agents
4. Pure robot swarms
The majority of the discussions on robotics have so far concentrated
predominantly on the first two scenarios on this scale, but the technology is
pushing into scenario three, most notably in the automotive industry. Remember
the exoskeleton argument? Well, if there is one area where smart
exoskeletons are currently being wrapped around human agents, it is on our
streets and highways!
Despite our own conceptions of our driving skills, most humans are bad drivers.
When we are forced to engage with others in a relatively limited, often scarce
space, and the interactions with others largely run contrary to our own
goal of travelling from A to B as fast as possible, we fall prone to a tendency to
claim our own space without regard for the consequences for others. In game theory
this is often observed as a problem where individual gain impairs collective gain,
and vice versa.31,32 On the road we often see this problem occur when there is
quite a lot of traffic. Theoretically speaking, this need not lead to any
congestion, provided that everybody travels at exactly the same speed. Since
this is not the case in practical situations, some stretches of road
will get cluttered with cars, while other parts of the road remain empty. If one
driver suddenly applies the brakes, this will create a cascade effect of other drivers
applying their brakes, often a bit harder than the cars before them, which results
in all cars stopping a few hundred metres behind the original event. A traffic jam
is born!
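The braking cascade can be sketched as a toy model. The amplification factor, the initial speed drop and the number of cars below are invented for illustration and are not taken from any traffic study:

```python
# Toy model of the braking cascade described above (illustrative numbers):
# each driver reacts to the car ahead by braking slightly harder,
# so a small speed drop amplifies down the line until traffic stands still.

def cascade(initial_drop: float, amplification: float, cars: int) -> list:
    """Speed reduction (km/h) experienced by each successive car."""
    drops = [initial_drop]
    for _ in range(cars - 1):
        # Each driver overreacts a little; a full stop caps the drop at 100.
        drops.append(min(drops[-1] * amplification, 100.0))
    return drops

drops = cascade(initial_drop=5.0, amplification=1.3, cars=15)
print(drops[0], drops[-1])  # the first car slows by 5 km/h; the last cars stop
```

A 5 km/h dab on the brakes, overreacted to by a factor of 1.3 per driver, brings the cars a dozen places back to a complete standstill, which is precisely how a jam materialises well behind the original event.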
This problem is exacerbated by the phenomenon that on a busy road people tend
to stick to the left lanes (or the right lanes in the UK and some other countries),
resulting in large stretches of unused road in the right lanes, which in turn
increases the chance of congestion. Most drivers also do not care too much about
merging, which gives the drivers who do select the right lanes ever less incentive
to do so. As we humans therefore tend to use available, scarce space inefficiently,
there is a strong tendency to let technology make the correct decisions for us. It
is a matter of time before our cars will be forced to slow down to a predetermined
fixed speed when there is congestion. This is technologically feasible, and will
result in a much more efficient use of the roads and highways. It will also reduce
the probability of traffic jams, which will provide a considerable economic
advantage. This technology will therefore create the first massive swarm robots
in the public domain. People will gradually lose their responsibility as drivers and
become passengers in their own cars. As the technology advances, the cars will
evolve into luxury cabins where people can enjoy leisure activities or work, while
the coaches take them to their destination. This development is currently already
underway and is a realistic scenario for the near future.31
The development of swarm robots in the public domain, in my view, will follow the
scale that I introduced above. The first swarms will consist of a mix of human and
technological agents, of which the former will usually command the swarm, if only
because current rules and regulations require a human agent for reasons of legal
responsibility. This is particularly the case in maritime environments, where human
agents, who can be held liable if anything goes wrong, are required to be on board
a vessel. On the basis of current developments in the automotive industry, it is
only a matter of time before a serious accident happens involving an autonomous
vehicle, which will spur on the development of case law on legal liability.
Considering the current legal situation, it is not unthinkable that
software engineers will be held liable for accidents, as the robot
was ‘merely carrying out what the designers had programmed it
to do’.30
It is likely that in the future vessels will increasingly take over technology that
has already been implemented in aeroplanes, such as the automatic pilot and
fly-by-wire technology.
One consequence of this gradual transfer of human control to machine
control is that humans will continuously need to perform different tasks as the
technology advances. Instead of applying the brakes and changing gear, at
present we often already rely on the buttons of the cruise control.
Interventions will decrease even further as technology takes over and cars are
switched to ‘auto pilot’. However, this also means that humans will increasingly
be invited to engage in other kinds of activity. The result is that, when action is
required, it will take some time to get the driver back in her seat and fully
prepared for the situation that triggered the alert. The technology that is
gradually replacing human control will therefore need to be augmented with
sufficient predictive capabilities to signal the human in time, ahead of an impending
event that requires human intervention. For this reason, paradoxically enough,
the transition to fully autonomous vehicles, vessels and (other) drones is going to
require quite some effort in relation to the human factor.
One of the first applications of these combined man-machine swarms is currently
being tested in the automotive industry, namely platooning applications. In
2015, a number of successful tests were carried out on Dutch public roads with a
line-up of trucks, in which the first truck was driven by a human chauffeur, while
the other, unmanned vehicles drove behind it like an Australian road train, but
without a physical connection.33 In a platooning line-up, a swarm is controlled by a (human)
master agent, while the others—the robots—support the main task. This scenario
is also often envisioned for inland shipping, but I do not see this happening very
soon as most barges are operated by small family businesses with very little
incentive to adapt to a line-up. Besides this, there already is a very successful
equivalent, namely the push barge.
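A platooning line-up of the kind described above is commonly realised with a gap-keeping controller on each following vehicle or vessel. The sketch below is a hypothetical proportional controller; the gain, target gap and speeds are invented and do not describe any of the trials mentioned in the text:

```python
# Hedged sketch of a leader-follower platoon controller: each unmanned
# follower adjusts its speed to hold a fixed gap to the vehicle ahead.
# Gains and distances are invented for illustration.

def follower_speed(lead_speed: float, gap: float,
                   target_gap: float = 10.0, gain: float = 0.5) -> float:
    """Proportional control: correct the gap error by adjusting speed."""
    return max(0.0, lead_speed + gain * (gap - target_gap))

# Too far behind -> speed up; too close -> slow down.
print(follower_speed(lead_speed=80.0, gap=14.0))  # 82.0, closing the gap
print(follower_speed(lead_speed=80.0, gap=6.0))   # 78.0, opening the gap
```

The same gap-keeping logic applies whether the master agent is a chauffeur in the lead truck or a skipper on a master vessel; only the dynamics and safe distances differ.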
At the Rotterdam University of Applied Sciences we used this idea for a project with
Rijkswaterstaat (part of the Dutch Ministry of Infrastructure and the Environment)
to measure the depth of the Dutch waterways. Currently, the main waterways
have to be checked every year to ensure that they are still deep enough for barges.
A small vessel with a very accurate multi-beam ultrasonic sensor is used for this
type of inspection. These inspection vessels are usually small motorboats, and due
to the fact that the sensor can only measure a small portion of the width of the
waterway, the boat needs to zig-zag along the width of the river for every mile or
so that is inspected. An inspection of a certain stretch of river typically requires
five sweeps, and as a result, a waterway such as the 90 kilometre long IJssel river,
requires a 450 km inspection run. At RUAS we developed the idea of a platooning
line-up, where the master vessel was flanked by two or four unmanned boats—
Aquabots—which were also equipped with relatively cheap and less accurate depth
sensors. The rationale for this choice was that cheap sensors can often make a very
accurate relative measurement; with one accurate reference, one can easily
correct for the inaccuracy. Furthermore, the accuracy of the existing sensor
was not really necessary for the task at hand. A tolerance of one millimetre is not
very important when centimetres of soil are swept over the bottom at any given
moment due to the current and the boat itself. A solution involving platooning
would reduce five sweeps to two or three and would result in an immediate cost
reduction of this type of inspection by 40 to 60 percent. The project was carried
out with students over the course of two years, with two demonstrations in the
IJssel river. Sadly the project had to be discontinued due to the difficulty of
obtaining adequate funding.
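The arithmetic behind the quoted 40 to 60 percent reduction can be checked directly with the figures given above (90 km of river, five sweeps today, two or three sweeps with a platooning line-up):

```python
# The sweep arithmetic from the IJssel example, using the figures
# quoted in the text.

def inspection_distance(river_km: float, sweeps: int) -> float:
    """Total distance sailed to inspect a stretch of river."""
    return river_km * sweeps

baseline = inspection_distance(90, 5)  # 450 km, the current practice
platoon3 = inspection_distance(90, 3)  # platoon with two flanking Aquabots
platoon2 = inspection_distance(90, 2)  # platoon with four flanking Aquabots

# Reduction of 40 to 60 percent, as stated above.
print(1 - platoon3 / baseline, 1 - platoon2 / baseline)
```

Since crewing and fuel costs scale roughly with distance sailed, the distance reduction translates directly into the stated cost reduction.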
Figure 6: Demonstration of Aquabots in a platooning line-up
The platooning line-up envisioned above was only one step in the direction of a
full swarm solution, in which ‘artificial jellyfish’, as we called them, would carry out
the inspection as a swarm. They would be thrown into the river upstream
and would follow the current over a number of weeks, measuring the
full width of the waterway as they moved along. They would communicate
their distance from each other and would be able to detect
and evade obstacles, such as passing vessels. If one or two were to break
down or become entangled, this would be a minor problem, as long
as the swarm had enough jellyfish to complete the survey. Once they reached
the end of the inspection, they would collect at a single spot, so that they could
be reused.
Figure 7: Artificial jellyfish (photograph by Christian Charisius/Reuters with permission)
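The robustness argument above (the survey succeeds as long as enough jellyfish survive) can be sketched as a simple coverage check; the waterway width, sensor span and swarm size below are hypothetical:

```python
# Sketch of the swarm-robustness point: the survey still succeeds as long
# as enough jellyfish survive to cover the waterway's width.
# All numbers are hypothetical.

def survey_completes(width_m: float, sensor_span_m: float,
                     deployed: int, failed: int) -> bool:
    """True if the surviving swarm can still span the full width."""
    surviving = deployed - failed
    return surviving * sensor_span_m >= width_m

# Losing one or two units is a minor problem...
assert survey_completes(width_m=150, sensor_span_m=10, deployed=20, failed=2)
# ...and the swarm degrades gracefully rather than failing abruptly.
assert not survey_completes(width_m=150, sensor_span_m=10, deployed=20, failed=6)
```

This graceful degradation is exactly what distinguishes a swarm from a single, expensive inspection vessel, whose failure halts the entire survey.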
Currently, robot swarms usually consist of identical robots, which are
programmed to perform the same tasks and to achieve a common, shared goal.
This type of swarming solution has been proposed for a project in automated
evasive manoeuvring, where vessels in a shipping lane try to stay clear of each
other in an efficient manner.34 In the proposal this technology is presented as an
additional aid to the helmsman, but the aim is eventually to apply this technology
to unmanned systems. This demonstrates that in practice there are gradations
between fully manned and unmanned systems.
There is an interesting class of swarming solutions in which the robots carry
out different tasks and the individual designs vary. In this configuration, only
the common goal is mutually shared. The mixed configuration, with humans and
machines, is obviously an example of such a swarm. In the next section, I will
return to a more elaborate example which involves demersal fishing.
A Deconstruction
of Vessels
We can now home in on a specific class of robots, namely unmanned vessels. This is
not to say that the line of reasoning I will pursue here does not apply to vehicles or
airborne drones, but I will concentrate on vessels, as this is the main type of robot
that we have been focusing on in Rotterdam over the past four years.
In line with my reasoning so far, we will deconstruct a vessel into a number of
functional units that are concentrated in a confined space. These functional
units have become so tightly knit, mainly for historical reasons, and they are
often so self-evident, that most people have learned to see a vessel (or another
exoskeleton) as an indivisible unit. However, when one looks at the possible
functions that a vessel has, one can quickly discern the following:
a transport function
a carrier function
a hospitality function, i.e. hosting people on board
dedicated tasks (most notably in the case of highly specialised
vessels, such as barges, cutters, hoppers and fishing vessels)
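As a rough illustration of this functional deconstruction, one can model a vessel as a set of functions rather than as an indivisible unit. The mapping below is a hypothetical sketch, not taken from any actual ship design:

```python
# The four functions discussed above, as a simple vocabulary.
VESSEL_FUNCTIONS = {"transport", "carrier", "hospitality", "dedicated_task"}

def functions_of(vessel_type):
    """Hypothetical mapping from a vessel type to the functions it combines.
    The question of autonomy is then asked per function, not per vessel."""
    catalogue = {
        "cargo_vessel":   {"transport", "carrier", "hospitality"},
        "fishing_vessel": {"transport", "carrier", "hospitality",
                           "dedicated_task"},
        "survey_drone":   {"transport", "dedicated_task"},
    }
    return catalogue[vessel_type]

# Deconstruction in action: a small survey drone simply has no
# hospitality function to automate away.
assert "hospitality" not in functions_of("survey_drone")
assert functions_of("fishing_vessel") == VESSEL_FUNCTIONS
```

The design choice here is the essay's own argument in miniature: once the functions are separated, each one can be assigned to a human, a machine, or dropped altogether.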
When considering the issue of autonomous vessels, this discussion basically
needs to be broken down into the various functions that a vessel (or another
exoskeleton) has. For instance, the current developments in unmanned shipping
mainly concern the transport function of a cargo vessel, which is clearly the
main function of this type of vessel. Autonomy would largely eliminate the
hospitality function of a vessel, since the main aim would be to remove human
presence from the vessel, at least on the long stretches across the oceans. The
carrier function typically requires most effort during the loading and unloading
of the cargo (containers), which also requires the majority of dedicated tasks.
An analysis of the development of autonomous cargo vessels may therefore
be quite simple. If one considers the typical timeline of this particular form
of transport, the complexity of the tasks and complexity in the environment
is mainly concentrated around the time that these ships are in coastal waters
and loading and unloading containers. This is (ideally) a relatively short period
of time, in comparison to the time these vessels spend in open waters. Aside
from the occasional tropical cyclone or hurricane, the oceans are a fairly stable
environment for this type of vessel; the task at hand is fairly simple and
technologically feasible. If one takes into account the fact that the absence of
human presence reduces the risks relating to this kind of transport, it is therefore
fairly safe to assume that the development of unmanned cargo vessels is going
to happen in the near future, once the rules and regulations of the international
agencies, such as the IMO, have embraced these developments.33
This does not mean that these vessels are going to be fully unmanned. The relatively
short periods near the coastal regions and in the national waterways are of such
complexity that human presence is going to be required for at least a number of
decades. This is due partly to physical constraints, but also to social and regulatory
factors. As a result, one scenario that one could envision is the introduction
of hop-on captains: employees of the ship owners who live near the ports and are
brought on board once the vessels approach the coast or are outbound to sea.
With respect to ocean-going cargo vessels, we can therefore see that
autonomy:
is technologically feasible;
significantly reduces the risks of human loss or failure;
has benefits that extend across a significant part of the time travelled;
allows for significant construction savings due to the reduction in
the hospitality function of a ship, leading to a more efficient design
of the main transport function.
We can therefore conclude that this development is going to accelerate in
the coming years, due to technology push and commercial interest.
The Near Future of Demersal Fishing
A more complex issue involves the near future of demersal fishing. In 2015 I
was asked to attend a number of brainstorm sessions on the future of demersal
fishing on the North Sea. These sessions were organised by Jules Dock, one of
our partners at RDM Rotterdam, at the request of the Dutch Ministry of Economic
Affairs.36 The Netherlands still has a significant stake in North Sea fishing,
but it is becoming increasingly difficult to maintain a fleet, due to overfishing,
regulations and other pressures. The Dutch fishing fleet is in decline, which not
only affects the fishermen, but also the industry that processes and sells fish,
and the local harbours along the coast that have a significant tourist appeal.
In the two brainstorm sessions that Jules Dock organised at RDM, I was
reintroduced to an industry that I had not been part of since my internship in the
fishing port of Harlingen in 1986. The group consisted of fishermen, fleet owners,
equipment suppliers and various research institutes, all with direct or indirect
stakes in the future of demersal fishing. For me, one of the big eye-openers of
these sessions was to learn how much innovation there had been in netting. Fishing
vessels are able to control the flow of fish in the net by the size of the holes in the
grid at various locations in the net. This way, small or young fish, and some forms
of bycatch, can leave the net before they are squashed by the bulk of the catch.
Figure 8: Mind-map of the brainstorm session demersal fishing
A fishing boat differs significantly from a cargo transport vessel, in the sense that
the ‘dedicated task’, namely fishing, is the primary function and that the other
functions are subordinate to this. For various reasons, most notably cultural ones,
unmanned demersal fishing is highly unlikely in the coming years, which means
that the hospitality function will remain an important part of the overall design
of the ships that are used. However, the current designs also predominantly stem
from historical traditions and combine various functions in one concrete vessel. As
a result, a ship's design is often optimised for one function, usually the carrier
function, while the requirements of the other functions interfere with one another. For example,
the typical situation with regard to a fishing vessel is that it starts with an empty
hold and carries out a survey of a stretch of water for hours to days until a shoal of
fish is found. Ships that focus on bottom-dwelling fish such as halibut will travel to
a predefined location, throw out their nets and start fishing by trial and error. This
means that this phase of the fishing process is time-consuming and fuel-intensive, as
the heavy nets are continuously in friction with the water. Some ship owners
currently implement a functional decomposition in which a number of fishing vessels
operate alongside one or two large vessels that mainly serve as factory ships to
process and store the fish. This construction is often used with ocean-going vessels. The fishing
ships can save fuel by concentrating on their main function, while the processing
ships can concentrate on the other functions.
As the nets fill up with fish, there is a continuous friction between the desired catch
and the bycatch. As was mentioned earlier, the design of the nets may influence this
process, but as the actual fishing is a process that cannot be stopped once it has
started, there is only so much that can be done. As a result, the nets will still contain
a significant amount of bycatch, which will have to be removed or taken back to land
for further processing. Some bycatch can still be sold, but if this includes endangered
species or young fish, the ramifications can be quite considerable. Bycatch that is
brought on board is usually severely wounded or dead, so the chance of survival
after it has been thrown back into the sea is fairly low. Young fish are quickly picked up
by seagulls, which forage in the wake of the vessel, while larger fish and mammals
sink to the bottom and will probably die soon afterwards.
One of the ideas that we came up with, and which I personally really liked, was based
on a platooning solution, where the fishing vessel was flanked and fronted by a
number of small underwater drones (AUVs), whose main aim was to prevent bycatch
from getting into the nets. The noise of a fishing ship often elicits an instinctive
response from the fish. As different fish have different responses, we could use this
fact to steer bycatch away from the trawlers and funnel the desired catch towards
them. Often the sound of the motor and the pressure waves of the oncoming ship
will trigger a response which, as in all other creatures, boils down to fight, flight, freeze
or flock. As a result, some species of fish move away from the ship, others move
towards it, while yet others remain where they are. If the hunter AUVs could use
these responses to guide fish to or bycatch away from the oncoming fishing boat,
a first selection could be made before the fish enter the nets. This would reduce
the amount of bycatch that gets injured. It would also save energy because the
vessel would not have to drag the bycatch along with it. A similar approach could be
applied around the net(s), where hunter AUVs could support the netting in guiding
the desired catch into the net. However, if we are going to break down a traditional
fishing vessel into its various functional parts, we might as well tackle the transport
function, the hospitality function and the carrier function as well. As was mentioned
earlier, some of these ideas are already being implemented, but the idea of a
platoon that consists of hunter AUVs, a netting vessel that would automatically plug
itself into a carrier vessel, while the crew would assemble on a floating hotel in the
evening to discuss the catch with colleagues from other vessels in the vicinity, was a
radical new way of conceiving the future of demersal fishing. The success or failure
of these visions is predominantly a matter of economics and cultural acceptance.
The current state of technology can support this vision, the circumstances of
demersal fishing are such that the environment would permit such an approach,
and the potential benefits align with many of the rules and regulations that at present
weigh heavily on the industry.
Predicting the
Applicability of
Maritime Robots
In the previous sections I have introduced a way to deconstruct the discussion of the
application of robots. The various perspectives that I used for this decomposition are
summarised below:
1. Agents and their environments:
a. Intelligence ‘by design’ versus autonomy/adaptivity
b. Atomic intelligence versus relational intelligence
c. The complexity of the environment; predictable versus complex
2. Rationality of robots:
a. Reduction in the risks involved
b. Extending human capabilities
c. Small and gradual interventions
3. Man and Machine:
a. Organism versus machine
b. Risk
4. Functional deconstruction
I have argued that in order to assess the applicability of a robot or unmanned
system, a functional deconstruction is needed, as many (human) activities consist
of a set of functions that are all needed to perform a certain task. For each of
these functions one can ask the following questions:
Are the required tasks currently carried out by humans, or is the function
currently not feasible at all?
Is the environment in which the activities take place stable / predictable or
complex, or can the environment be made stable?
If the environment is predictable, can technology be designed to operate safely
in that environment?
If the environment is complex, can a robot learn to deal with that complexity?
If the environment is complex and currently beyond human capabilities, is an
exoskeleton a viable option?
Following this line of reasoning, the environment in which the agent will have
to perform its tasks is a good predictor of the viability of an unmanned system. A stable
environment does not suffer overly from the five characteristics of a complex
environment, in the sense that the technology can be designed to cope with it.
The following table may assist in predicting what kind of agent is best equipped
for a certain function.
Function: …………

Environment   Stable                   Complex                      Complex
Rationale     Replace Humans           Gradual Interventions        Extend Human Capabilities
Complexity    Limited Complexity       Within Human Capabilities    Beyond Human Capabilities
Approach      Intelligence by Design   Autonomous / Adaptive        Exoskeleton
                                       Systems, Risk Reduction
Agent         Machines                 Robots                       Humans
Table 1: Type of agent in relation to the complexity of the environment
For complex tasks or environments, human presence is likely to remain necessary
in the coming years, but technology will advance in order to reduce the risks
involved. Of course, the table is only indicative as a measure of the applicability
of man or machine for a given function, but hopefully it will give some insight into
this topic, especially for policymakers and potential end users of this technology.
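The decision logic of Table 1 can be condensed into a small classifier. The labels below are illustrative assumptions laid over the table, not a formal model:

```python
def best_suited_agent(environment, complexity):
    """A rough classifier following Table 1: stable environments favour
    designed machines, complex-but-manageable environments favour adaptive
    robots, and complexity beyond current capabilities keeps humans in
    place, possibly extended by an exoskeleton."""
    if environment == "stable":
        return "machine"  # intelligence by design, replace humans
    if complexity == "within_human_capabilities":
        return "robot"    # autonomous / adaptive, gradual interventions
    return "human"        # extend human capabilities, exoskeleton

# The three columns of the table, as hypothetical example calls:
assert best_suited_agent("stable", "limited") == "machine"
assert best_suited_agent("complex", "within_human_capabilities") == "robot"
assert best_suited_agent("complex", "beyond_human_capabilities") == "human"
```

As with the table itself, this is indicative only; a real assessment would weigh risk, regulation and economics per function.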
Conclusion
In this essay I have covered a large number of topics relating to unmanned systems
and robot technology. Most of these topics set markers on a scale of current
developments relating to unmanned systems. The scales are summarised below:
Environment: predictable/controllable versus unpredictable / complex
Robot intelligence: pre-programmed versus ‘truly’ autonomous / adaptive
Rationale: replacement of humans, extending human capabilities or allowing
small, gradual interventions
Risk: the consequences of human versus machine failure
Man or machine: the exoskeleton
Swarming and platooning
Functional decomposition: the various functions that are combined in one vessel
Humans and machines take up positions along these axes, based on technological
feasibility and human possibilities. I have argued that this can often be expressed
as a (relative) measure of risk. The near future of unmanned vessels will very
much be determined by the assessment of this risk, with respect to their intended
functions. Despite the hype, I strongly believe that human flexibility will still allow
humans to outperform robots in many real-life circumstances in the coming
two decades. However, autonomous technology will change the way that tasks
are currently carried out and sometimes this may even make human presence
obsolete. The biggest threat of automation and robotisation is not in places
where manual tasks are required. Rather, it will be found in office functions.
As a result, the near future of vessels will still be very much as we know it today.
However, between the barges, the trawlers and the coasters, small vessels and
AUVs will increasingly swarm to monitor and inspect the infrastructure and guide
the larger vessels in performing their intended tasks.
This essay has provided a view on the current developments in unmanned
systems and robotics that tries to steer clear of hypes and hopes. In fact, this
rather down to earth position couldn’t have been voiced more clearly in a recent
Article in the Dutch newspaper de Volkskrant of March 25th 2017, in an interview
with Greg Corrado, co-founder of Google Brain, the division of Google dedicated
to machine learning.37 When he was asked about the current developments in
machine learning, and what the public could expect in the near future, his answer
was that we could expect machines to become ‘less stupid’. I think that this view
quite aptly captures my own on the near future of unmanned systems.
Literature
1. Rotterdam opens first phase of €3 billion Maasvlakte 2 project.
Port Technology (2015).
2. APMT Maasvlakte 2 officially opened. Port of Rotterdam (2015).
3. Reynolds, M. Rolls-Royce unveils concept fleet of self-driving drone ships -
and it could launch by 2020. Wired Magazine (2016).
4. Verschuur Jackson, K. Tugging for the Future. in Proceedings of the second
International Plugboat Conference (2015).
5. Schultz van Haegen, M. & Sonneveld, J. Robots in de Openbare Ruimte.
Lichtkogel (2016).
6. Cilliers, P. Complexity and Postmodernism: Understanding Complex Systems.
(Routledge, 1998).
7. Atlan, H. Enlightenment to Enlightenment: Intercritique of Science and Myth.
(State University of New York Press, 1993).
8. Schon, D. A. The Reflective Practitioner: How Professionals Think In Action.
(Basic Books, 1984).
9. Fay, B. Contemporary Philosophy of Social Science: A Multicultural Approach.
(Wiley-Blackwell, 1996).
10. Hacking, I. The Social Construction of What? (Harvard University Press,
2000).
11. Feyerabend, P. Against Method. (Verso, 1993).
12. Simon, H. A. The Sciences of the Artificial - 3rd Edition. (The MIT Press, 1996).
13. Taylor, C. Sources of the Self: The Making of the Modern Identity. (Harvard
University Press, 1992).
14. Pieters, C. P. Donald Schon en John Holland: over toegepast onderzoek in
drassige gronden. J. Humanist. Stud. 2016, 77–90 (2016).
15. Edelman, G. M. & Tononi, G. Consciousness: How Matter Becomes Imagination.
(Allen Lane, 2000).
16. Hawkins, J. & Blakeslee, S. On Intelligence. (Holt Paperbacks, 2005).
17. Donald, M. A Mind So Rare: The Evolution of Human Consciousness. (W.W.
Norton & Co., 2002).
18. Alexander, C. A Pattern Language: Towns, Buildings, Construction. (Oxford
University Press, USA, 1977).
19. Gamma, E., Helm, R., Johnson, R. & Vlissides, J. M. Design Patterns: Elements
of Reusable Object-Oriented Software. (Addison-Wesley Professional, 1994).
20. Wittgenstein, L. Philosophical Investigations. (Prentice Hall, 1973).
21. Cilliers, P. Knowledge, limits and boundaries. Futures 37, 605–613 (2005).
22. Cilliers, P. Boundaries, Hierarchies and Networks in Complex Systems.
Int. J. Innov. Manag. 5, 135–147 (2001).
23. Pieters, C. P. Into Complexity: A Pattern-oriented Approach to Stakeholder
Communications. (Dissertation.Com, 2010).
24. Pieters, C. P. in Computational Intelligence in Optimization-Applications and
Implementations 7, (Springer, 2010).
25. Solomon, B. GM Invests $500 Million In Lyft For Self-Driving Car Race With
Uber, Tesla And Google. Forbes (2016).
26. Weinberg, G. M. An Introduction to General Systems Thinking. (Dorset House
Publishing Company, Incorporated, 2001).
27. de Waal, F. Primates and Philosophers: How Morality Evolved. (Princeton
University Press, 2006).
28. de Waal, F. Our Inner Ape: The Best and Worst of Human Nature. (Granta
Books, 2005).
29. Damasio, A. Looking for Spinoza: Joy, Sorrow, and the Feeling Brain. (Mariner
Books, 2003).
30. Delvaux, M. DRAFT REPORT with recommendations to the Commission on
Civil Law Rules on Robotics. 22 (Committee on Legal Affairs, 2016).
31. Axelrod, R. The Evolution of Cooperation: Revised Edition. (Basic Books,
2006).
32. Luce, R. D. & Raiffa, H. Games and Decisions: Introduction and Critical Survey.
(Dover Publications, 1989).
33. Zwijnenberg, H., Janssen, R., Blankers, I. & de Kruijff, J. Truck Platooning:
Driving the Future of Transportation. (TNO, 2015).
34. Pieters, C. P. Application of Swarming Algorithms for Evasive Manoeuvring
(Toepassing van zwermalgoritmen in het ontwijken van tegenliggers voor
slim en veilig varen). (Nederland Maritiem Land, 2016).
35. Visser, R., de Vleesschouwer, S. & Zeijlstra, M. Blauwdruk 2050, de maritieme
wereld voorbij de horizon. Stichting ondersteuningsfonds NISS 100 Jaar
(2016).
36. de Lange, J. & Verwayen, E. Innovatiebijeenkomsten Selectiviteit in de
Demersale Visserij; Ontwikkeling van Slimme Vistechnieken.
(Jules Dock Consultancy B.V., 2015).
37. Keulemans, M. Kunstmatige intelligentie gaat uw leven veranderen, en deze
man speelt daarin een sleutelrol. De Volkskrant (2017).
ESSAY
applied research

Kees Pieters
The Near Future of Unmanned Vessels
A Complexity-Informed Perspective

In this essay, Kees Pieters presents his view on the current developments
of unmanned systems and maritime robots. Based on his experiences in
Rotterdam, and on collaboration in various initiatives regarding unmanned
vessels, he offers a pragmatic view that tries to see beyond the hypes
and hopes of autonomous systems, and argues why humans will still play an
important role in the near future of robot technology.

In the past four years, Kees Pieters has been at the forefront of new innovations
in the maritime industry, as research professor at Research Centre Sustainable
PortCity. With a background in electrical engineering, computer science
and artificial intelligence, Pieters understands the technological capabilities
and limitations of this new technology, but his PhD research in Humanistics on
complex systems and complexity thinking has been vital in looking beyond the
hypes and seeing the societal impact of robotics and machine learning.

His central thesis is that the complexity of the environment is paramount in
understanding whether technology can replace or augment human presence,
as technology will be subject to the same criteria as any organism in practical
situations. By assessing the strengths and weaknesses of both technology
and (human) organisms, he develops a balanced view on this very topical
development in the maritime industry.

Hogeschool Rotterdam Uitgeverij