
Fuyuki Ishikawa
- PhD
- Associate Professor at National Institute of Informatics
About
Publications: 205
Reads: 24,060
Citations: 2,650
Current institution: National Institute of Informatics

Publications (205)
Systems based on Deep Neural Networks (DNNs) are increasingly being used in industry. In the process of system operation, DNNs need to be updated in order to improve their performance. When updating DNNs, systems used in companies that require high reliability must have as few regressions as possible. Since the update of DNNs has a data-driven natu...
Large Language Model (LLM) image recognition is a powerful tool for extracting data from images, but accuracy depends on providing sufficient cues in the prompt - requiring a domain expert for specialized tasks. We introduce Cue Learning using Evolution for Accurate Recognition (CLEAR), which uses a combination of LLMs and evolutionary computation...
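As an illustration of the cue-evolution idea that CLEAR describes (the function names, prompt format, and operators below are assumptions for a sketch, not the authors' implementation), a candidate set of textual cues can be evolved against recognition accuracy on a small labelled image set:

import random

def fitness(cues, labelled_images, llm_extract):
    # llm_extract(image, prompt) -> predicted label; hypothetical LLM wrapper
    prompt = "Identify the object. Hints: " + "; ".join(cues)
    correct = sum(llm_extract(img, prompt) == label for img, label in labelled_images)
    return correct / len(labelled_images)

def evolve_cues(candidate_pool, labelled_images, llm_extract,
                generations=10, population=8):
    # each individual is a small set of cues drawn from a pool of candidate hints
    pop = [random.sample(candidate_pool, 3) for _ in range(population)]
    for _ in range(generations):
        scored = sorted(pop, key=lambda c: fitness(c, labelled_images, llm_extract),
                        reverse=True)
        parents = scored[: population // 2]              # selection
        children = []
        for p in parents:                                # mutation: swap one cue
            child = p.copy()
            child[random.randrange(len(child))] = random.choice(candidate_pool)
            children.append(child)
        pop = parents + children
    return max(pop, key=lambda c: fitness(c, labelled_images, llm_extract))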
Optimizing the quality of machine learning (ML) services for individual consumers with specific objectives is crucial for improving consumer satisfaction. In this context, end-to-end ensemble ML serving (EEMLS) faces many challenges in selecting and deploying ensembles of ML models on diverse resources across the edge-cloud continuum. This paper pr...
Cyber-Physical Systems (CPSs) are increasingly adopting deep neural networks (DNNs) as controllers, giving birth to AI-enabled CPSs. Despite their advantages, many concerns arise about the safety of DNN controllers. Numerous efforts have been made to detect system executions that violate safety specifications; however, once a violation is detected...
In this chapter, we review our research for dependable service composition for smart cities in both cyber and physical spaces. For the cyber space, given the active investigation on web services or web APIs, we intensively worked on the problem of service composition that explores the “best” combination of available services from different provider...
The use of autonomous robots for delivery of goods to customers is an exciting new way to provide a reliable and sustainable service. However, in the real world, autonomous robots still require human supervision for safety reasons. We tackle the real-world problem of optimizing autonomous robot timings to maximize deliveries, while ensuring that the...
Ensuring the safety of autonomous vehicles (AVs) is the key requisite for their acceptance in society. The core challenge lies in formally proving their safety conditions with AI-based black-box controllers and surrounding objects under various traffic scenarios. This paper describes our strategy and experience in modelling, deriving,...
Digital twins (DTs) are promising to revolutionize the way future Cyber-Physical Systems (CPSs), which are becoming increasingly complex every day, will be developed and operated. To deal with such increasing complexity and to enable CPSs to handle uncertain and unknown situations, DTs provide a viable solution, although they are themselves compli...
Autonomous Driving Systems (ADSs) are promising, but must show they are secure and trustworthy before adoption. Simulation-based testing is a widely adopted approach, where the ADS is run in a simulated environment over specific scenarios. Coverage criteria specify what needs to be covered to consider the ADS sufficiently tested. However, existing...
We introduce a goal-aware extension of responsibility-sensitive safety (RSS), a recent methodology for rule-based safety guarantee for automated driving systems (ADS). Making RSS rules guarantee goal achievement -- in addition to collision avoidance as in the original RSS -- requires complex planning over long sequences of manoeuvres. To deal with...
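For reference, below is a minimal sketch of the original RSS longitudinal safe-distance rule that this goal-aware extension builds on; the parameter values and naming are illustrative assumptions, not taken from the paper.

def rss_safe_longitudinal_distance(v_rear, v_front, rho, a_accel_max,
                                   b_min_rear, b_max_front):
    # minimum gap so the rear car can always stop behind the front car,
    # even if the front car brakes at its maximum deceleration
    v_rear_after = v_rear + rho * a_accel_max            # rear speed after response time
    d = (v_rear * rho
         + 0.5 * a_accel_max * rho ** 2
         + v_rear_after ** 2 / (2 * b_min_rear)          # rear car's stopping distance
         - v_front ** 2 / (2 * b_max_front))             # front car's stopping distance
    return max(0.0, d)

# example: rear car at 20 m/s, front car at 15 m/s, 1 s response time
print(rss_safe_longitudinal_distance(20.0, 15.0, 1.0, 2.0, 4.0, 8.0))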
Additional training of a deep learning model can cause negative effects on the results, turning an initially positive sample into a negative one (degradation). Such degradation is possible in real-world use cases due to the diversity of sample characteristics. That is, a set of samples is a mixture of critical ones which should not be missed and le...
Systematic techniques to improve quality of deep neural networks (DNNs) are critical given the increasing demand for practical applications including safety-critical ones. The key challenge comes from the limited controllability in updating DNNs. Retraining to fix some behavior often has a destructive impact on other behavior, causing regressions, i...
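A small sketch of how such a regression can be quantified (a generic measurement, not the method proposed in the paper): count the samples the old model classified correctly that the updated model now gets wrong.

def count_regressions(old_predict, new_predict, dataset):
    # old_predict / new_predict: callables returning a label for an input
    # dataset: iterable of (input, ground_truth) pairs
    return [x for x, y in dataset
            if old_predict(x) == y and new_predict(x) != y]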
Autonomous Driving Systems (ADSs) are complex systems that must consider different aspects such as safety, compliance with traffic regulations, comfort, etc. The relative importance of these aspects is usually balanced in a weighted cost function. However, there is generally no optimal set of weights, and different driving situations may require diff...
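The weighted cost function mentioned above can be pictured as follows (the aspect names and weights are assumptions for illustration, not taken from the paper): candidate plans are ranked by a weighted sum of per-aspect costs, so changing the weights changes which plan wins.

def plan_cost(aspect_costs, weights):
    # aspect_costs / weights: dicts keyed by aspect, e.g. "safety", "comfort", "progress"
    return sum(weights[k] * aspect_costs[k] for k in weights)

candidates = [
    {"safety": 0.1, "comfort": 0.4, "progress": 0.2},
    {"safety": 0.3, "comfort": 0.1, "progress": 0.1},
]
weights = {"safety": 10.0, "comfort": 1.0, "progress": 2.0}
best = min(candidates, key=lambda c: plan_cost(c, weights))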
We introduce a goal-aware extension of responsibility-sensitive safety (RSS), a recent methodology for rule-based safety guarantee for automated driving systems (ADS). Making RSS rules guarantee goal achievement—in addition to collision avoidance as in the original RSS—requires complex planning over long sequences of manoeuvres. To deal with the com...
Decentralised railway signalling systems have the potential to increase the capacity and availability of railway networks and to reduce maintenance costs. However, given the safety-critical nature of railway signalling and the complexity of novel distributed signalling solutions, their safety should be guaranteed by using thorough system validation methods...
The safety of Self-Driving Vehicles (SDVs) is crucial for social acceptance of self-driving technology/vehicles, and how to assure such safety is of great concern for automakers and regulatory and standardization bodies. ANSI/UL 4600 (4600) [3], a standard for the safety of autonomous products, has an impact on the regulatory regime of self-driving...
Formal reasoning on the safety of controller systems interacting with plants is complex because developers need to specify behavior while taking into account perceptual uncertainty. To address this, we propose an automated workflow that takes an Event-B model of an uncertainty-unaware controller and a specification of uncertainty as input. First, o...
We introduce a new logic named Quantitative Confidence Logic (QCL) that quantifies the level of confidence one has in the conclusion of a proof. By translating a fault tree representing a system's architecture to a proof, we show how to use QCL to give a solution to the test resource allocation problem that takes the given architecture into account...
We introduce a new logic named Quantitative Confidence Logic (QCL) that quantifies the level of confidence one has in the conclusion of a proof. By translating a fault tree representing a system’s architecture to a proof, we show how to use QCL to give a solution to the test resource allocation problem that takes the given architecture into account...
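As context only (QCL itself is a logic over proofs, not this code), the fault-tree structure such an analysis starts from can be evaluated numerically; the gate layout and probabilities below are made-up examples.

def or_gate(child_probs):
    # the subsystem fails if any child fails (independent events assumed)
    p_all_ok = 1.0
    for p in child_probs:
        p_all_ok *= (1.0 - p)
    return 1.0 - p_all_ok

def and_gate(child_probs):
    # the subsystem fails only if all children fail
    p = 1.0
    for q in child_probs:
        p *= q
    return p

# top event: (sensor A AND sensor B fail) OR controller fails
p_top = or_gate([and_gate([0.01, 0.02]), 0.001])
print(p_top)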
Formal reasoning on the safety of controller systems interacting with plants is complex because developers need to specify behavior while taking into account perceptual uncertainty. To address this, we propose an automated workflow that takes an Event-B model of an uncertainty-unaware controller and a specification of uncertainty as input. First, o...
Control of abstraction levels is key to tackling the increasing complexity of emerging systems such as cyber-physical systems. Formal methods for dependability assurance have been used to explore this point by using refinement mechanisms, with which complex models are gradually constructed and verified. However, refinement mechanisms to derive the...
ACM SIGKDD Conference on Knowledge Discovery and Data Mining, [Online] Singapore, SGP, 14/08/2021 - 18/08/2021
Hybrid systems consist of a discrete part (controller) that interacts with a continuous physical part (plant). Formal verification of such systems is complex and challenging because it must handle both discrete objects and continuous objects, such as functions and differential equations for modelling the physical part, when synthesising hybrid contr...
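A minimal sketch of the kind of hybrid system meant here, assuming a thermostat example that is not from the paper: a discrete controller switches modes while the plant evolves under a differential equation, integrated below with forward Euler rather than verified formally.

def simulate(t_end=60.0, dt=0.1):
    temp, heater_on = 15.0, False                 # continuous plant state, discrete mode
    for _ in range(int(round(t_end / dt))):
        # discrete controller: hysteresis switching at 18 and 22 degrees
        if temp < 18.0:
            heater_on = True
        elif temp > 22.0:
            heater_on = False
        # continuous plant: dT/dt = -k * (T - T_outside) + u
        u = 2.0 if heater_on else 0.0
        temp += (-0.1 * (temp - 10.0) + u) * dt
    return temp

print(simulate())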
Automated and autonomous driving systems (ADS) are a transformational technology in the mobility sector. Current practice for testing ADS uses virtual tests in computer simulations; search-based approaches are used to find particularly dangerous situations, possibly collisions. However, when a collision is found, it is not always easy to automatica...
Significant effort is being put into developing industrial applications for artificial intelligence (AI), especially those using machine learning (ML) techniques. Despite the intensive support for building ML applications, there are still challenges when it comes to evaluating, assuring, and improving the quality or dependability. The difficulty st...
Recent approaches in testing autonomous driving systems (ADS) are able to generate a scenario in which the autonomous car collides, and a different ADS configuration that avoids the collision. However, such test information is too low level to be used by engineers to improve the ADS. In this paper, we consider a path planner component provided by...
Great efforts are currently underway to develop industrial applications for artificial intelligence (AI), especially those using machine learning (ML) techniques. Despite the intensive support for building ML applications, there are still challenges when it comes to evaluating, assuring, and improving the quality or dependability. The difficulty st...
The decentralisation of railway signalling systems has the potential to increase railway network capacity, availability and reduce maintenance costs. Given the safety-critical nature of railway signalling and the complexity of novel distributed signalling solutions, their safety should be guaranteed by using thorough system validation methods. In t...
Autonomous cars are subjected to several different kinds of inputs (other cars, road structure, etc.) and, therefore, testing the car under all possible conditions is impossible. To tackle this problem, scenario-based testing for automated driving defines categories of different scenarios that should be covered. Although this kind of coverage is a n...
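One simple way to picture such coverage (the parameters and values below are illustrative assumptions, not the criteria studied in the paper): discretise a few scenario parameters and measure which combinations the executed tests hit.

from itertools import product

weather = ["clear", "rain", "fog"]
time_of_day = ["day", "night"]
oncoming = ["none", "car", "truck"]

all_cells = set(product(weather, time_of_day, oncoming))
executed = {("clear", "day", "car"), ("rain", "night", "none"),
            ("fog", "day", "truck")}

coverage = len(executed & all_cells) / len(all_cells)
print(f"scenario coverage: {coverage:.1%}")        # 3 of 18 combinations covered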
There are unique kinds of uncertainty in implementations constructed by machine learning from training data. This uncertainty affects the strategy and activities for safety assurance. In this paper, we investigate this point in terms of continuous argument engineering with a granular performance evaluation over the expected operational domain. We e...
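A hedged sketch of what granular performance evaluation can look like in practice (the slice tags are hypothetical, not from the paper): accuracy is reported per slice of the operational domain instead of as a single aggregate number.

from collections import defaultdict

def per_slice_accuracy(samples, predict):
    # samples: iterable of (input, label, slice_tag), e.g. tag = "night/rain"
    hits, totals = defaultdict(int), defaultdict(int)
    for x, y, tag in samples:
        totals[tag] += 1
        if predict(x) == y:
            hits[tag] += 1
    return {tag: hits[tag] / totals[tag] for tag in totals}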
Refinement-based formal specification is a promising approach to the increasing complexity of software systems, as demonstrated in the formal method Event-B. It allows stepwise modeling and verifying of complex systems with multiple steps at different abstraction levels. However, making changes is more difficult, as caution is necessary to avoid br...
More and more software practitioners are working on industrial applications of artificial intelligence (AI) systems, especially those based on machine learning (ML). However, many of the existing principles and approaches for traditional systems do not work effectively for system behavior obtained by training rather than by logical design. In addition...
Safety assurance for automotive products is crucial and challenging. It becomes even more difficult when the variability in automotive products is considered. Recently, the notion of automotive multi-product lines (multi-PL) has been proposed as a unified framework to accommodate different sources of variability in automotive products. In the context of...
Requirements Engineering (RE) is regarded as key to software project success and has been researched and practiced for decades. With the growing maturity and complexity of software development, however, the contemporary RE environment has been changing so that the intertwining of requirements with implementation and organizational contexts has been...
Event-B has been attracting much interest because it supports a flexible refinement mechanism that reduces the complexity of constructing and verifying models of complicated target systems by taking into account multiple abstraction layers of the models. Although most previous studies on Event-B focused on model construction, the constructed models...
Safety analyses in the automotive domain (in particular automated driving) present unprecedented challenges due to the complexity of such systems and their tight integration with the physical environment. Given the diversity in the types of cars and the potentially unlimited number of possible environmental and driving conditions, it is crucial to devise a systematic way of ma...
There have been active efforts to use machine learning (ML) techniques for the development of smart systems, e.g., driving support systems with image recognition. However, the behavior of ML components, e.g., neural networks, is inductively derived from training data and thus uncertain and imperfect. Quality assessment heavily depends on and is res...