# BONSEYES: Platform for Open Development of Systems of Artificial Intelligence: Invited paper

Tim Llewellynn, nVISO SA, Switzerland
M. Milagro Fernández-Carrobles, University of Castilla-La Mancha, Spain
Oscar Deniz, University of Castilla-La Mancha, Spain
Samuel Fricker, i4Ds Centre for Requirements Engineering, FHNW, Switzerland
Amos Storkey, University of Edinburgh, United Kingdom
Nuria Pazos, Haute Ecole Specialisee de Suisse Occidentale, Switzerland
Gordana Velikic, RT-RK, Serbia
Kirsten Leufgen, SCIPROM SARL, Switzerland
Rozenn Dahyot, Trinity College Dublin, Ireland
Sebastian Koller, Technical University Munich, Germany
Georgios Goumas, Institute of Communications and Computer Systems, National Technical University of Athens, Greece
Peter Leitner, SYNYO GmbH, Austria
Ganesh Dasika, ARM Ltd., United Kingdom
Lei Wang, ZF Friedrichshafen AG, Germany
Kurt Tutschku, Blekinge Institute of Technology (BTH), Sweden
ABSTRACT
The Bonseyes EU H2020 collaborative project aims to develop a platform consisting of a Data Marketplace, a Deep Learning Toolbox, and Developer Reference Platforms for organizations wanting to adopt Artificial Intelligence. The project focuses on applying artificial intelligence in low-power Internet of Things (IoT) devices ("edge computing"), embedded computing systems, and data center servers ("cloud computing"). It aims to bring about orders-of-magnitude improvements in efficiency, performance, reliability, security, and productivity in the design and programming of systems of artificial intelligence that incorporate Smart Cyber-Physical Systems (CPS). In addition, it will solve a causality ("chicken-and-egg") problem for organizations that lack access to data and models. Its open software architecture will facilitate adoption of the whole concept on a wider scale. To evaluate the effectiveness and technical feasibility, and to quantify the real-world improvements in efficiency, security, performance, effort, and cost of adding AI to products and services using the Bonseyes platform, four complementary demonstrators will be built. Bonseyes platform capabilities are aimed at being aligned with the European FI-PPP activities and take advantage of its flagship project FIWARE. This paper provides a description of the project motivation, goals, and preliminary work.

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s).
CF'17, May 15-17, 2017, Siena, Italy
ACM ISBN 978-1-4503-4487-6/17/05.
DOI: http://dx.doi.org/10.1145/3075564.3076259
KEYWORDS
Data marketplace, Deep Learning, Internet of Things, Smart Cyber-Physical Systems

ACM Reference format:
Tim Llewellynn, M. Milagro Fernández-Carrobles, Oscar Deniz, Samuel Fricker, Amos Storkey, Nuria Pazos, Gordana Velikic, Kirsten Leufgen, Rozenn Dahyot, Sebastian Koller, Georgios Goumas, Peter Leitner, Ganesh Dasika, Lei Wang, and Kurt Tutschku. 2017. BONSEYES: Platform for Open Development of Systems of Artificial Intelligence. In Proceedings of CF'17, May 15-17, 2017, Siena, Italy, 6 pages.
DOI: http://dx.doi.org/10.1145/3075564.3076259
1 INTRODUCTION
Artificial intelligence (AI) is developing much faster than anticipated, especially its emerging branch, deep learning, which is transforming AI. Deep learning relies on simulating large, multi-layered webs of virtual neurons, which enables a computer to learn to recognize abstract patterns and tackle general pattern recognition problems. The results of this methodology have been impressive, and both academia and industry are currently moving rapidly towards the deep learning paradigm.
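To make the "multi-layered webs of virtual neurons" concrete, the sketch below stacks three fully connected layers in plain Python. It is purely illustrative: the layer sizes, random initialisation, and function names are arbitrary choices for this example, not project code.

```python
import random

def relu(x):
    # rectified linear activation: the standard non-linearity between layers
    return [max(0.0, v) for v in x]

def dense(x, W, b):
    # one fully connected layer: y_i = sum_j W[i][j] * x[j] + b[i]
    return [sum(w_ij * x_j for w_ij, x_j in zip(row, x)) + b_i
            for row, b_i in zip(W, b)]

random.seed(0)
def init(n_out, n_in):
    return ([[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)],
            [0.0] * n_out)

# three stacked layers of "virtual neurons": 4 inputs -> 8 -> 8 -> 2 outputs
W1, b1 = init(8, 4)
W2, b2 = init(8, 8)
W3, b3 = init(2, 8)

def forward(x):
    h1 = relu(dense(x, W1, b1))   # first layer detects simple features
    h2 = relu(dense(h1, W2, b2))  # second layer combines them into abstractions
    return dense(h2, W3, b3)      # output scores for two classes

print(forward([0.5, -0.2, 0.1, 0.9]))
```

Training (adjusting W and b from examples) is what the rest of the paper is concerned with; this sketch only shows the layered structure being referred to.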
State-of-the-art machine classification algorithms currently require thousands or millions of training examples, while in contrast humans are able to learn from few examples. The current trend for improving AI to tackle increasingly complex problems is therefore a brute-force scaling of the infrastructure: more data, more computing power, more neurons in deep learning algorithms; see for instance [2]. Tuning the very large number of latent parameters controlling the resulting deep architectures is a difficult optimisation problem in which over-fitting (learning the noise) is one of the major issues. More recent work aims at reducing the dimensionality of these architectures by imposing sparsity constraints on the latent parameter space. The ability for the user to introduce prior knowledge to help learning from small datasets is also essential for designing efficient architectures for the wide range of applications where data is scarce. Moreover, learning efficiently from noisy, unstructured (or poorly structured) datasets and data streams is still a challenge for deep architectures.
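One common way such a sparsity constraint acts on parameters is via an L1 penalty, whose proximal step shrinks weights and sets small ones exactly to zero. The snippet below is a minimal sketch of that mechanism, with made-up weights and an arbitrary penalty strength; it is not the project's actual method.

```python
def soft_threshold(w, lam):
    """Proximal step for an L1 penalty: shrink a weight toward zero,
    setting it exactly to zero when its magnitude is below lam."""
    if w > lam:
        return w - lam
    if w < -lam:
        return w + lam
    return 0.0

weights = [0.8, -0.05, 0.3, 0.02, -0.6, 0.001]
sparse = [soft_threshold(w, lam=0.1) for w in weights]
print(sparse)  # small weights collapse to exactly 0.0
nonzero = sum(1 for w in sparse if w != 0.0)
print(f"{nonzero}/{len(weights)} weights remain")
```

Zeroed weights can then be skipped entirely at inference time, which is why sparsity both regularises (reducing over-fitting) and shrinks the effective architecture.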
Another problem arising from the rise of deep learning techniques relates to GPUs. GPUs are regarded as one of the main drivers of deep learning, so DNN (Deep Neural Network) models have mostly been designed with the GPU computational model in mind and may not be suitable for other platforms. Moreover, the existing models have mostly been hand-crafted with no proof of optimality. Therefore, embedding these networks into other platforms may require either 1) developing new networks that take into account the specifications of the target platforms, or 2) adapting existing models to the target platforms. This has a direct impact on power efficiency, which is a key differentiator in the case of embedded (and wearable) platforms.
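A typical example of option 2), adapting an existing model to a constrained target, is post-training quantization of float weights to 8-bit integers. The sketch below shows the simplest symmetric scheme; the scale choice and example weights are illustrative only.

```python
def quantize_int8(weights):
    """Uniform symmetric quantization of float weights to int8 range,
    as one might do when porting a GPU-trained model to an MCU or DSP."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    # approximate reconstruction of the original float weights
    return [v * scale for v in q]

w = [0.52, -1.27, 0.003, 0.9]
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
err = max(abs(a - b) for a, b in zip(w, w_hat))
print(q, err)
```

The reconstruction error is bounded by half the quantization step, which is the trade-off made for a 4x reduction in weight storage and cheaper integer arithmetic on low-power hardware.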
Despite advances in data management and cloud infrastructure, AI systems are still predominantly developed as monolithic systems. A key reason is that offerings like Microsoft Azure Marketplace, Google Cloud Machine Learning, and IBM Watson require deployment of data into the clouds of the respective vendor and the use of that vendor's learning tools. This approach makes a marketplace unattractive for data providers that are not willing to disclose their data, prevents scenarios in which data is produced by a network of devices (e.g. as in IoT), and impedes the emergence of a thriving ecosystem of data providers, algorithm developers and model trainers, and AI system developers. As a result, only companies that own the full AI systems development value chain can improve time-to-market and quality through reuse of data and models.
Finally, the massive data collection required for deep learning presents obvious privacy issues [11]. Users' personal, highly sensitive data, such as photos and voice recordings, is kept indefinitely by the companies that collect it. Users can neither delete it nor restrict the purposes for which it is used. Furthermore, centrally kept data is subject to legal subpoenas and extra-judicial surveillance. Many data owners (for example, medical institutions that may want to apply deep learning methods to clinical records) are prevented by privacy and confidentiality concerns from sharing the data and thus from benefiting from large-scale deep learning. Generally, the best way to protect privacy is to confine the data to the device, so that sensor data never leaves it. This approach requires training or adapting networks on the device.
In this context, the Bonseyes collaborative project (www.bonseyes.com) has been proposed and recently funded by the European Commission through its Horizon 2020 Research and Innovation Programme. In terms of scientific and technological background, the Bonseyes Project Consortium consists of leading researchers with a strong mix of knowledge, including statistical and machine learning, embedded software and compiler optimization, power-efficient hardware architectures, image processing and computer vision, cloud and distributed systems, and software ecosystems and requirements engineering.
1.1 Objectives
The Bonseyes project aims to create a platform for open development of systems of AI, which is clearly emerging as a key growth driver for Smart CPS in the next decade. This is in contrast to the monolithic system design currently used in closed end-to-end solutions. The main objectives of the project are summarized in Table 1.
2 METHODOLOGY
Bonseyes is a platform for open development in building, improving, and maintaining systems of AI in the age of the IoT. Figure 1 shows the problem that Bonseyes is solving: monolithic development of systems of AI gives rise to a "data wall" effect that only large companies with end-to-end solutions can pass. In addition, Figure 2 shows a summary of the target platforms that will be used in the project.
2.1 Data Marketplace
The objective of the Data Marketplace is to enable a modularized AI systems development value chain by offering the publishing and exchange of data, metadata, and AI models. Data are measurements from the real world, typically made available as streams of data or as batches of archival data. Metadata enhances that data by capturing pre-processing results, classifying the primary data according to expert knowledge (e.g. for supervised learning), specifying the context the data relates to, and documenting feedback from AI system developers and users (e.g. for online learning). AI models are created by applying learning algorithms to data and metadata, and enable automated classification. All three components are used by AI system developers as building blocks for smart systems.
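The relationship between primary data, metadata, and feedback described above can be pictured as a record schema. The dataclass below is a hypothetical sketch: the class name, field names, and example URI are invented for illustration and do not correspond to the actual marketplace API.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DataArtifact:
    """Hypothetical marketplace entry pairing primary data with metadata."""
    uri: str                 # where the stream or batch of data lives
    kind: str                # "stream" or "batch"
    labels: List[str] = field(default_factory=list)    # expert annotations
    context: str = ""        # e.g. sensor placement, acquisition conditions
    feedback: List[str] = field(default_factory=list)  # developer/user notes

entry = DataArtifact(uri="s3://example-bucket/drive-cam-001",
                     kind="stream",
                     labels=["pedestrian", "vehicle"],
                     context="dashboard camera, daylight")
# feedback from deployed systems feeds online learning
entry.feedback.append("false positives at night")
print(entry.labels, len(entry.feedback))
```

The point of such a schema is that annotations and feedback travel with the data, so a model trainer who never sees the raw provider infrastructure can still reuse the artifact.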
2.2 Deep Learning Toolbox
The objective of the Deep Learning Toolbox is to provide a set of deep learning components that are tailored for embedded, constrained, distributed systems operating in real environments with noisy, sometimes missing data. The toolbox will enable the selection
[Figure 1 contrasts companies with end-to-end solutions, which enjoy a virtuous circle (more customers, more data, better insight, better models) and can pass the "data wall", with other companies that lack critical mass and face a chicken-and-egg problem.]
Figure 1: The problem that Bonseyes is solving.
[Figure 2 depicts a development platform for systems of artificial intelligence spanning cloud to edge computing (data center, embedded computing, low-power IoT devices). The Data Marketplace and Deep Learning Toolbox exchange data, feedback, models, and code; the roles involved are data provider, annotator, data scientist, app developer, and infrastructure provider. The universal reference platform builds on open source frameworks (TensorFlow, Caffe, Theano) and targets CPU, DSP, VPU, and GPU hardware.]
Figure 2: Target platforms.
and optimisation of tools for a particular task. The key components
of the toolbox are:
2.2.1 Deep learning methods. Methods that are flexible in terms of representation: number of bits of precision, hashing methods, parameter-sharing structure, memory usage, sparsity, and robustness.
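As one concrete instance of the hashing and parameter-sharing flexibility mentioned above, a large "virtual" weight matrix can be backed by a small pool of real parameters, with a hash function deciding which pool entry each position uses (in the spirit of hashed networks). The sketch below is illustrative; the pool size and hash choice are arbitrary.

```python
import hashlib

def shared_weight(pool, layer, i, j):
    """Hash each (layer, i, j) position into a small shared parameter pool,
    so a large virtual weight matrix is stored in only len(pool) floats."""
    h = hashlib.md5(f"{layer}:{i}:{j}".encode()).digest()
    idx = int.from_bytes(h[:4], "big") % len(pool)
    return pool[idx]

pool = [0.1, -0.2, 0.05, 0.3]  # only 4 real trainable parameters

# materialise a "virtual" 3x3 weight matrix from the pool on demand
W = [[shared_weight(pool, layer=0, i=i, j=j) for j in range(3)]
     for i in range(3)]
print(W)
```

Memory usage is then governed by the pool size rather than the layer dimensions, which is exactly the kind of representational knob a constrained target needs.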
Table 1: Bonseyes objectives

Accelerate the design and programming of systems of artificial intelligence. Design and implement a Data Marketplace. Reuse data, metadata, and models among separate legal entities to reduce design and development time in building systems of AI, compared to existing monolithic system-of-systems design and development approaches.

Reduce the complexity of training deep learning models for distributed embedded systems. Provide the fundamental tools for deep learning on constrained architectures. Design and implement noise-resistant machine learning capabilities able to learn efficiently from unstructured, partially labelled datasets. Design and implement sparse models that scale and adapt to various computational architectures. Enable users to design deep learning models that take advantage of prior domain knowledge.

Foster "embedded intelligence" on low-power and resource-constrained Smart CPS. Bonseyes will enable the development of deep learning models tailored for low-power embedded systems by incorporating architecture-awareness into existing standardised deep learning packages via an open source toolbox. The Deep Learning Toolbox will allow deep models to be designed and efficiently deployed across multiple embedded systems, and will support multiple low-power developer reference platforms.

Demonstrate the whole concept in four key challenging scenarios. Demonstrate the technical feasibility of Bonseyes with at least three Developer Platforms in four scenarios: automotive safety, automotive personalisation, consumer devices, and healthcare. Each scenario will involve the creation of a specific application using the Data Marketplace together with one or more Developer Platforms to build systems of AI.
2.2.2 Cost-sensitive optimisation methods. Methods (e.g. Bayesian optimisation) that can optimise a particular deep learning architecture for a particular embedded environment by incorporating that environment's individual costs (e.g. cost of memory resources, cost of processing power, energy cost, training-time cost, etc.) into the optimisation. This includes obtaining reduced-dimensional parametrisations for particular architecture classes, rather than just individual architectures. We will explicitly encode the costs of on-device learning, and the costs (e.g. communication costs) of cloud-based learning.
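The core idea, an objective that trades predictive quality against platform costs, can be sketched without any Bayesian machinery. Below, plain random search stands in for Bayesian optimisation, and both the accuracy proxy and the cost model are invented placeholders, not the project's formulas.

```python
import random

def accuracy_proxy(width, depth):
    # stand-in for the validation accuracy of a trained candidate network
    return 1.0 - 1.0 / (width * depth)

def cost(width, depth, mem_weight=0.001, energy_weight=0.0005):
    # toy model of the target platform's memory and energy costs
    params = width * width * depth
    return mem_weight * params ** 0.5 + energy_weight * width * depth

def objective(width, depth):
    # accuracy traded off against the embedded environment's resource costs
    return accuracy_proxy(width, depth) - cost(width, depth)

random.seed(1)
candidates = [(random.randint(4, 256), random.randint(1, 8)) for _ in range(200)]
best = max(candidates, key=lambda wd: objective(*wd))
print("best (width, depth):", best)
```

A real cost-sensitive optimiser would replace the random sampler with a surrogate model (e.g. a Gaussian process) that proposes candidates, but the cost-weighted objective is the part that makes the search platform-aware.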
2.2.3 Component valuation. The value of a given deep learning component in a given environment will be considered an integral part of the component itself. The Deep Learning Toolbox will also include a measure of when a component would be useful. The information value of both data and deep learning components can then provide a structured marketplace for deciding which components are best used with which data.
2.2.4 Transferability metrics. A component can be either a free-form deep learning model that needs optimizing for a particular data source, or a learnt component already optimised on a different data source or cost function. Transferability metrics quantify how well such a component carries over to a new data source or cost function.
2.2.5 Structure-sensitive implementation. Generic low-level linear algebra libraries are often used to implement deep neural networks, but such libraries rarely provide the best computation for the restricted architectures needed for embedded systems. For example, both structural sparsity and dynamic sparsity (associated with the use of sparse activation functions) provide substantial opportunities for computational savings.
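The computational saving from structural sparsity comes from storing and multiplying only the non-zero weights. A minimal sketch of this, using the standard compressed sparse row (CSR) layout with a hand-made example matrix:

```python
def csr_matvec(values, col_idx, row_ptr, x):
    """Multiply a sparse matrix (CSR layout) by a dense vector,
    touching only the stored non-zero entries."""
    y = []
    for r in range(len(row_ptr) - 1):
        acc = 0.0
        for k in range(row_ptr[r], row_ptr[r + 1]):
            acc += values[k] * x[col_idx[k]]
        y.append(acc)
    return y

# dense equivalent: [[2, 0, 0], [0, 0, 3], [1, 0, 4]]
values  = [2.0, 3.0, 1.0, 4.0]   # the 4 non-zeros, row by row
col_idx = [0,   2,   0,   2]     # column of each non-zero
row_ptr = [0, 1, 2, 4]           # where each row starts in values
print(csr_matvec(values, col_idx, row_ptr, [1.0, 5.0, 2.0]))  # [2.0, 6.0, 9.0]
```

Here 4 multiply-adds replace the 9 of a dense multiply; for the 90%+ sparsity levels pruned networks reach, the saving dominates the indexing overhead, which is why generic dense libraries leave performance on the table.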
2.2.6 Implementation and tailoring of these components for particular reference architectures. The targets across all components are the real systems on which the deep learning components need to run. All development will use specific embedded environments as exemplars. The specific development of deep learning methods for the individual environments will form a substantial part of the development effort.
2.3 Universal Developer Reference Platforms
The Deep Learning Toolbox will provide a unified framework for accelerating Deep Convolutional Neural Networks on resource-constrained, architecture-dependent embedded systems. A number of platforms will be made available with pre-integrated middleware via open source packages. This offers developers a number of benefits:

Heterogeneous: supports a wide array of CPU-based platforms, VPUs, and DSPs.
Performance: optimised code for reduced memory and power requirements.
Scalability: ability to run models on the cloud or on embedded systems.
Configurability: support for multiple types of deep neural network architectures.
Concurrent Classification & Learning: support for incremental learning at runtime through feedback APIs, which allows for more flexible and general networks.
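The feedback-driven incremental learning mentioned in the last point can be pictured with a toy sketch. The class and method names below are invented for illustration and are not the actual Bonseyes interfaces; the "learning" here is just a threshold nudge standing in for a real incremental update.

```python
class FeedbackClassifier:
    """Toy classifier exposing a feedback hook, so a deployed device
    can keep adapting at runtime while it continues classifying."""

    def __init__(self):
        self.threshold = 0.5

    def predict(self, score):
        # classification path: runs continuously on the device
        return score >= self.threshold

    def feedback(self, score, correct):
        # learning path: nudge the decision threshold on reported mistakes
        if not correct:
            step = 0.05 if self.predict(score) else -0.05
            self.threshold += step

clf = FeedbackClassifier()
clf.feedback(0.6, correct=False)  # a reported false positive
print(clf.threshold)              # threshold raised to 0.55
```

The design point is that prediction and adaptation are concurrent: the feedback call adjusts the model in place without stopping the classification service.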
The following platforms will be supported by consortium partners ARM, RT-RK, and HES-SO:

ARM-based platforms are used extensively when deploying artificial intelligence in embedded and automotive environments. Optimizing the deep learning toolbox for an ARM-based platform is a clear choice.

Low-power CPU/VPU/GPU-based platforms are emerging for always-on visual intelligence applications based on low-power vision processors. Standing at the intersection of low power and high performance, these platforms enable the development of embedded intelligent solutions. Equipped with multiple sensors, cameras, and communication means, they will be used during the development and validation stages of the networks prior to deployment and integration.

DSP (Digital Signal Processor)-based platforms provided by RT-RK have been successfully deployed across a spectrum of consumer electronics applications, complex industrial settings, highly demanding automotive environments, and military applications. The board will support up to ten cameras, basic and advanced warning systems, active control systems, and semi-autonomous applications.
Apart from the platforms above, Android smartphones will also be considered in the Bonseyes project.
2.4 Systems of artificial intelligence
Most AI systems involve some sort of integrated technologies, for example the integration of speech synthesis technologies with speech recognition. In recent years, however, there has been increasing discussion of the importance of systems integration as a field in its own right. AI integration has been attracting attention because a number of (relatively) simple AI systems for specific problem domains (such as computer vision or speech synthesis) already exist, and integrating them is a more logical approach to broader AI than building monolithic systems from scratch. Within Bonseyes, four demonstrators will be used to build such systems of AI (cf. Section 2.7).
2.5 Computing Power
The objective of the Computing Power component is to provide resources in terms of CPU, memory, and storage for the Data Marketplace. Bonseyes will use a well-balanced approach to provisioning the cloud backend with sufficient computing power for training. The balance aims at economic and energy efficiency, robustness, functional capabilities, and data privacy and isolation. It will mainly be based on the use of the EU flagship platform FIWARE (www.fiware.org). The Bonseyes project will use a balanced and agile mix of non-commercial, commercial and, if necessary, self-operated compute infrastructure sourced from FIWARE Lab (lab.fiware.org). These options include possible special support by FIWARE Lab. The balance will reflect the economics, availability, and stability of the compute resources and the needs of the use cases. The consideration of available resources will increase the agility of the Bonseyes concept of a system of AI systems. The detailed balance will be determined during the architecting phase of the Bonseyes project.
2.6 Data Tools
Data Tools aims to provide tools for collecting, curating, evaluating, crowdsourcing, and editing the data necessary for training models with the Deep Learning Toolbox. One key area will be IoT data collection, by providing a programming model and a micro-kernel-style runtime that can be embedded in gateways and small-footprint edge devices, enabling local, real-time analytics on the continuous streams of data coming from equipment, vehicles, systems, appliances, devices, and sensors of all kinds. By performing real-time analytics on the edge device, only anomalies or unseen data need to be transmitted for storage and archival use in learning.
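The "transmit only anomalies" idea can be sketched with a simple streaming filter that keeps a running mean and variance (Welford's algorithm) and forwards a reading only when it deviates strongly from what the device has already seen. The z-score threshold, warm-up length, and example readings are arbitrary choices for this illustration.

```python
class EdgeAnomalyFilter:
    """Toy edge filter: forward a sensor reading only when it is an
    outlier relative to the running statistics of the stream."""

    def __init__(self, z_threshold=3.0, warmup=10):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0
        self.z = z_threshold
        self.warmup = warmup

    def _update(self, x):
        # Welford's online update of mean and sum of squared deviations
        self.n += 1
        d = x - self.mean
        self.mean += d / self.n
        self.m2 += d * (x - self.mean)

    def should_transmit(self, x):
        if self.n < self.warmup:      # not enough history yet: send everything
            self._update(x)
            return True
        std = (self.m2 / (self.n - 1)) ** 0.5
        anomalous = std > 0 and abs(x - self.mean) > self.z * std
        self._update(x)
        return anomalous

f = EdgeAnomalyFilter()
readings = [10.0, 10.1, 9.9, 10.0, 10.2, 9.8, 10.1, 9.9, 10.0, 10.1,
            10.0, 25.0]
sent = [x for x in readings if f.should_transmit(x)]
print(sent)  # the 10 warm-up readings plus the 25.0 spike
```

Only the spike (and the initial warm-up) crosses the uplink, which is the bandwidth saving the text describes.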
2.7 Demonstrators
For demonstration, four scenarios in three sectors (automotive, consumer, and healthcare) will be considered: automotive intelligent safety, automotive cognitive computing, consumer emotional virtual agents, and healthcare patient monitoring. These use cases have been chosen because they are far-reaching across a number of high-value industries with high social impact.
2.7.1 Automotive Intelligent Safety. Autonomous systems able to control steering, braking, and acceleration are already starting to appear in cars. These systems still require drivers to keep an eye on the road and hands on the wheel. But the next generation of self-driving systems [4], [8] could be available in less than a decade and free drivers so they can work, text, or just relax. Ford, General Motors, Toyota, Nissan, Volvo, and Audi have all shown off cars that can drive themselves, and all have declared that within a decade they plan to sell cars with some form of advanced automation. These cars will be able to take over driving on highways or to park themselves in a garage.
In this demonstrator, Bonseyes will be used to build a system of AI that uses scene and people detection (contextual awareness) and driver distraction (driver monitoring) to trigger active or passive safety systems. It will involve the following AI systems: per-pixel scene labelling, scene detection, people detection, and driver distraction.
2.7.2 Automotive Cognitive Computing. The vehicles of the near future will be "intelligent". Electronics will bring new capabilities to every part of the vehicle. New technologies will provide for greater awareness of the vehicle and its environment [10], and for vehicle connectivity. Consumers, with a plethora of electronic devices that inform them [5], entertain them, and keep them safe [9], [6], will find themselves enjoying the overall experience of their vehicles. Connectivity and lifestyle trends will change the way cars are used. This "experience" will be a key differentiator in attracting consumers.
In the Automotive Cognitive Computing demonstrator, Bonseyes will be used to build an in-vehicle digital assistant. The assistant will recognise the driver and then personalise his/her car experience while learning the driver's preferences, allowing natural language interaction within the vehicle and the driving context. It will provide the driver with personalised advice on how to interact with the vehicle for route planning, environment information, entertainment, etc. It will involve the following AI systems: face recognition, demographic detection, emotion recognition, speech-to-text, and natural language processing.
2.7.3 Consumer Emotional Virtual Assistants. Many institutions are creating innovation labs aimed at understanding how they can rewrite their existing applications to improve the technology, improve consumer interaction, and provide a more compelling customer service experience. Human sensor data, such as facial expressions [12], voice input, hand gestures, even brain waves, emotion sensors, and heart beat, are being tested as new forms of input [1], while voice response systems, haptics (tactile feedback), and holographs are being tested as new forms of output. These inputs and outputs are being augmented with AI to make them more useful and human-like, enabling, for example, discourse in natural language and sensing of emotions. Based upon what is learned, a new paradigm will likely emerge that will fundamentally change the way machines and people communicate.
Bonseyes will be used to leverage the increasing computational capacity of mobile devices to develop real-time multimodal applications. An emotional virtual agent will be implemented to improve communication between services and users through an agent-based application that allows multimodal and emotional interaction through different channels: visual, oral, and written. A use case will be developed showing how multiple sensing technologies can be combined with object detection to enhance consumer interaction in a more natural way. It will involve the following AI systems: face recognition, multi-modal emotion recognition, object recognition, and speech-to-text.
2.7.4 Healthcare Patient Monitoring. Patient tracking will be used to optimise capacity utilisation in diagnostic departments and reduce waiting times for patients. The vital sensor delivers data such as acceleration, pose, heart rate, and others, which will be used to estimate the mobility of the patient and the time needed to reach the diagnostic department [3], [7]. Personal recording of vital data is a growing trend, not only in Europe. In recent years, hospitals have started to monitor patient beds, devices, and patients by use of RFID, however with the significant drawbacks of limited range and the large installations required for the RFID antennas in the corresponding areas. In the future, the focus will be on patient tracking and monitoring of different vital signals using smart low-power devices, in order to plan and schedule further diagnostic procedures and to calculate expected stress and therapeutic options using deep learning techniques. Patients scheduled for elective surgery will be equipped during the diagnosis day with smart devices that track their position and record and transmit vital signs (heart rate, breathing rate, pose, skin conduction, etc.). Based on the necessary diagnostics, patients will be sent to the respective diagnostic department. The Deep Learning Toolbox will analyse vital signs data and predict further diagnostics and stress levels, which will be used to adjust sedation prior to the surgical intervention. Postoperatively, vital data from the sensors will be used to predict the earliest day of discharge. The demonstrator will involve vital sensors and location tracking technologies.
3 CONCLUSIONS
The main challenge and contribution of the Bonseyes collaborative project is to design and implement highly distributed and connected digital technologies that are embedded in a multitude of increasingly autonomous physical systems. These systems must satisfy multiple critical constraints including safety, security, power efficiency, high performance, size, and cost. The project will develop new model-centric and predictive engineering methods and tools for CPS with a high degree of autonomy, ensuring adaptability, scalability, complexity management, security, and safety, and providing trust to humans in the loop. The work is driven by industrial needs and will be validated in at least four complementary use cases in different application domains and sectors. The results are intended to enable integration with broader development environments and middleware.
4 ACKNOWLEDGMENTS
This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 732204 (Bonseyes). This work is supported by the Swiss State Secretariat for Education, Research and Innovation (SERI) under contract number 16.0159. The opinions expressed and arguments employed herein do not necessarily reflect the official views of these funding bodies.
REFERENCES
[1] Sourav Bhattacharya and Nicholas D. Lane. 2016. From smart to deep: Robust activity recognition on smartwatches using deep learning. In 2016 IEEE International Conference on Pervasive Computing and Communication Workshops (PerCom Workshops). 1–6. DOI: http://dx.doi.org/10.1109/PERCOMW.2016.7457169
[2] Xue-Wen Chen and Xiaotong Lin. 2014. Big Data Deep Learning: Challenges and Perspectives. IEEE Access 2 (2014), 514–525. DOI: http://dx.doi.org/10.1109/ACCESS.2014.2325029
[3] Hoo-Chang Shin, Matthew R. Orton, David J. Collins, Simon J. Doran, and Martin O. Leach. 2013. Stacked Autoencoders for Unsupervised Feature Learning and Multiple Organ Detection in a Pilot Study Using 4D Patient Data. IEEE Transactions on Pattern Analysis & Machine Intelligence 35, 8 (2013), 1930–1943.
[4] Brody Huval, Tao Wang, Sameep Tandon, Jeff Kiske, Will Song, Joel Pazhayampallil, Mykhaylo Andriluka, Pranav Rajpurkar, Toki Migimatsu, Royce Cheng-Yue, Fernando Mujica, Adam Coates, and Andrew Y. Ng. 2015. An Empirical Evaluation of Deep Learning on Highway Driving. CoRR abs/1504.01716 (2015).
[5] Ashesh Jain, Hema S. Koppula, Shane Soh, Bharad Raghavan, Avi Singh, and Ashutosh Saxena. 2016. Brain4Cars: Car That Knows Before You Do via Sensory-Fusion Deep Learning Architecture. CoRR abs/1601.00740 (2016). http://arxiv.org/abs/1601.00740
[6] Ashesh Jain, Avi Singh, Hema S. Koppula, Shane Soh, and Ashutosh Saxena. 2015. Recurrent Neural Networks for Driver Activity Anticipation via Sensory-Fusion Architecture. CoRR abs/1509.05016 (2015). http://arxiv.org/abs/1509.05016
[7] Guo-Ping Liu, Jianjun Yan, Yiqin Wang, Wu Zheng, Tao Zhong, Xiong Lu, and Peng Qian. 2014. Deep Learning Based Syndrome Diagnosis of Chronic Gastritis. Comp. Math. Methods in Medicine 2014 (2014), 938350:1–938350:8. DOI: http://dx.doi.org/10.1155/2014/938350
[8] Qudsia Memon, Muzamil Ahmed, Shahzeb Ali, Azam R. Memon, and Wajiha Shah. 2016. Self-driving and driver relaxing vehicle. (Nov 2016), 170–174. DOI: http://dx.doi.org/10.1109/ICRAI.2016.7791248
[9] Pavlo Molchanov, Shalini Gupta, Kihwan Kim, and Kari Pulli. 2015. Multi-sensor system for driver's hand-gesture recognition. In 2015 11th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG), Vol. 1. 1–8. DOI: http://dx.doi.org/10.1109/FG.2015.7163132
[10] David Ribeiro, André Mateus, Jacinto C. Nascimento, and Pedro Miraldo. 2016. A Real-Time Pedestrian Detector using Deep Learning for Human-Aware Navigation. CoRR abs/1607.04441 (2016). http://arxiv.org/abs/1607.04441
[11] Reza Shokri and Vitaly Shmatikov. 2015. Privacy-Preserving Deep Learning. In Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security (CCS '15). ACM, New York, NY, USA, 1310–1321. DOI: http://dx.doi.org/10.1145/2810103.2813687
[12] Inchul Song, Hyun-Jun Kim, and Paul B. Jeon. 2014. Deep learning for real-time robust facial expression recognition on a smartphone. In 2014 IEEE International Conference on Consumer Electronics (ICCE). 564–567. DOI: http://dx.doi.org/10.1109/ICCE.2014.6776135