Konstantinos Koufos’s research while affiliated with University of Warwick and other places


Publications (89)


Counterfactual Explainer for Deep Reinforcement Learning Models Using Policy Distillation
• Article · December 2024 · 10 Reads · 1 Citation
ACM Transactions on Intelligent Systems and Technology
Amir Samadi · Konstantinos Koufos · [...]

Deep Reinforcement Learning (DRL) has demonstrated promising capability in solving complex control problems. However, DRL applications in safety-critical systems are hindered by the inherent lack of robust validation techniques to assure their performance in such applications. One of the key requirements of the verification process is the development of effective techniques to explain the system functionality, i.e., why the system produces specific results in given circumstances. Recently, interpretation methods based on the Counterfactual (CF) explanation approach have been proposed to address the problem of explanation in DRL. This paper proposes a novel CF explainer to interpret the decisions made by a black-box DRL. To evaluate the efficacy of the proposed explanation framework, we carried out several experiments in the domains of automated driving systems (ADSs) and the Atari Pong game. Our analysis demonstrates that the proposed framework generates plausible and meaningful explanations for various decisions made by the underlying DRL agents. Additionally, we discuss the practical implications of our approach for various automotive stakeholders, illustrating its potential real-world impact. Source code is available at https://github.com/Amir-Samadi/Counterfactual-Explanation.
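
The sketch below illustrates the general distillation-plus-counterfactual idea named in the title, not the paper's implementation: a black-box policy is distilled into an interpretable surrogate from queried state-action pairs, and a counterfactual state is then searched on the surrogate. All names (black_box_policy, surrogate, counterfactual, the toy state space) are illustrative placeholders.

```python
# Minimal sketch: distil a black-box DRL policy into an interpretable surrogate,
# then search for a counterfactual state that flips the surrogate's action.
# All names (black_box_policy, n_actions, etc.) are illustrative placeholders.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def distil_policy(black_box_policy, states):
    """Fit a surrogate model on state/action pairs queried from the black box."""
    actions = np.array([black_box_policy(s) for s in states])
    surrogate = DecisionTreeClassifier(max_depth=5).fit(states, actions)
    return surrogate

def counterfactual(surrogate, state, target_action, step=0.05, max_iters=200):
    """Greedy search for a minimally perturbed state mapped to target_action."""
    cf = state.copy()
    for _ in range(max_iters):
        if surrogate.predict(cf[None])[0] == target_action:
            return cf                      # minimal change found
        # perturb the feature the surrogate considers most important
        idx = np.argmax(surrogate.feature_importances_)
        cf[idx] += step * np.sign(np.random.randn())
    return None                            # no counterfactual found

# toy usage with a random "policy" over 4-dimensional states and 3 actions
rng = np.random.default_rng(0)
black_box_policy = lambda s: int(np.clip(s.sum(), 0, 2))
states = rng.normal(size=(500, 4))
surrogate = distil_policy(black_box_policy, states)
cf = counterfactual(surrogate, states[0], target_action=2)
```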


SAFE-RL: Saliency-Aware Counterfactual Explainer for Deep Reinforcement Learning Policies

November 2024 · 5 Reads · 2 Citations
IEEE Robotics and Automation Letters

While Deep Reinforcement Learning (DRL) has emerged as a promising solution for intricate control tasks, the lack of explainability of the learned policies impedes its uptake in safety-critical applications, such as automated driving systems (ADS). Counterfactual (CF) explanations have recently gained prominence for their ability to interpret black-box Deep Learning (DL) models. CF examples are associated with minimal changes in the input, resulting in a complementary output by the DL model. Finding such alterations, particularly for high-dimensional visual inputs, poses significant challenges. Besides, the temporal dependency introduced by the reliance of the DRL agent action on a history of past state observations further complicates the generation of CF examples. To address these challenges, we propose using a saliency map to identify the most influential input pixels across the sequence of states previously observed by the agent. Then, we feed this map to a deep generative model, enabling the generation of plausible CFs with constrained modifications centred on the salient regions. We evaluate the effectiveness of our framework in diverse domains, including ADS and the Atari Pong, Pac-Man and Space Invaders games, using traditional performance metrics such as validity, proximity and sparsity. Experimental results demonstrate that this framework generates more informative and plausible CFs than the state-of-the-art for a wide range of environments and DRL agents. In order to foster research in this area, we have made our datasets and codes publicly available at https://github.com/Amir-Samadi/SAFE-RL.
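
A minimal sketch of the saliency-constrained counterfactual idea described above, assuming a policy network over a stack of past frames. The generator step is replaced by a simple masked gradient update; the paper feeds the saliency map to a deep generative model instead. The tiny policy, frame size and action count are placeholders.

```python
# Minimal sketch of saliency-constrained counterfactual generation, assuming a
# policy network over a stack of past grayscale frames. The generator step is
# replaced here by a masked gradient update; the paper uses a deep generative model.
import torch

def saliency_mask(policy, obs, action, keep_ratio=0.05):
    """Gradient magnitude w.r.t. the input, thresholded to a binary mask."""
    obs = obs.clone().requires_grad_(True)
    logit = policy(obs.unsqueeze(0))[0, action]
    logit.backward()
    sal = obs.grad.abs()
    thresh = torch.quantile(sal.flatten(), 1.0 - keep_ratio)
    return (sal >= thresh).float()

def masked_counterfactual(policy, obs, target_action, mask, lr=0.05, steps=100):
    """Modify only the salient pixels until the policy prefers target_action."""
    cf = obs.clone().requires_grad_(True)
    opt = torch.optim.Adam([cf], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = -policy(cf.unsqueeze(0))[0, target_action]
        loss.backward()
        cf.grad *= mask                      # restrict edits to the salient region
        opt.step()
    return cf.detach().clamp(0.0, 1.0)

# toy usage: a random linear "policy" over a stack of 4 frames of 84x84 pixels
policy = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(4 * 84 * 84, 6))
obs = torch.rand(4, 84, 84)
action = policy(obs.unsqueeze(0)).argmax().item()
mask = saliency_mask(policy, obs, action)
cf = masked_counterfactual(policy, obs, (action + 1) % 6, mask)
```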


Experimental Study of Multi-Camera Infrastructure Perception for V2X-Assisted Automated Driving in Highway Merging

November 2024 · 3 Reads
IEEE Transactions on Intelligent Transportation Systems
Kang Shan · Matteo Penlington · Sebastian Gunner · [...] · Ian Kirwan

Accurate and reliable perception of the surrounding environment, e.g., detection and classification of nearby objects, is the primary and most important function of automated/autonomous vehicles. However, onboard perception systems face challenges in complex road segments due to various environmental effects, such as occlusions or high sensor noise. A potential enhancement is to equip such environments with cost-effective infrastructure that perceives the environment and provides additional perception support to autonomous vehicles through vehicle-to-everything (V2X) communication technologies. This paper presents an experimental study of vehicle detection and tracking on a bird’s eye view (BEV) map using raw video collected from several low-cost roadside monocular cameras with overlapping views installed near a motorway junction to support the merging of autonomous vehicles. The paper explains how to produce vehicle tracks from the camera infrastructure and reports the real-world evaluation of the proposed solution on a physical test bed in the UK’s West Midlands region.
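
A minimal sketch of one plausible pipeline behind such infrastructure perception: per-camera detections are projected onto a common BEV ground plane with per-camera homographies, and detections that fall close together in overlapping views are merged before tracking. The homographies, pixel detections and merge radius below are illustrative placeholders, not the paper's calibration.

```python
# Minimal sketch: fuse detections from several roadside cameras onto a common
# bird's eye view (BEV) ground plane via per-camera homographies, then merge
# detections that fall close together in overlapping views.
import numpy as np

def image_to_bev(points_px, homography):
    """Project pixel coordinates (N, 2) onto the ground plane with a 3x3 homography."""
    pts = np.hstack([points_px, np.ones((len(points_px), 1))])   # homogeneous coords
    ground = (homography @ pts.T).T
    return ground[:, :2] / ground[:, 2:3]                        # normalise

def merge_detections(bev_points, radius=2.0):
    """Greedy merge of BEV detections closer than `radius` metres (overlapping views)."""
    merged, used = [], np.zeros(len(bev_points), dtype=bool)
    for i, p in enumerate(bev_points):
        if used[i]:
            continue
        close = np.linalg.norm(bev_points - p, axis=1) < radius
        merged.append(bev_points[close].mean(axis=0))            # fuse duplicates
        used |= close
    return np.array(merged)

# toy usage with two cameras observing the same vehicle near a merge point
H_cam1 = np.array([[0.02, 0.0, -5.0], [0.0, 0.05, -20.0], [0.0, 0.001, 1.0]])
H_cam2 = np.array([[0.03, 0.0, -8.0], [0.0, 0.04, -15.0], [0.0, 0.001, 1.0]])
det_cam1 = np.array([[640.0, 360.0]])     # bottom-centre of a bounding box (pixels)
det_cam2 = np.array([[512.0, 380.0]])
bev = np.vstack([image_to_bev(det_cam1, H_cam1), image_to_bev(det_cam2, H_cam2)])
tracks_input = merge_detections(bev)      # fused BEV positions handed to the tracker
```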


Good Data Is All Imitation Learning Needs
• Preprint · File available · September 2024 · 22 Reads

In this paper, we address the limitations of traditional teacher-student models, imitation learning, and behaviour cloning in the context of Autonomous/Automated Driving Systems (ADS), where these methods often struggle with incomplete coverage of real-world scenarios. To enhance the robustness of such models, we introduce the use of Counterfactual Explanations (CFEs) as a novel data augmentation technique for end-to-end ADS. CFEs, by generating training samples near decision boundaries through minimal input modifications, lead to a more comprehensive representation of expert driver strategies, particularly in safety-critical scenarios. This approach can therefore help improve the model's ability to handle rare and challenging driving events, such as anticipating darting out pedestrians, ultimately leading to safer and more trustworthy decision-making for ADS. Our experiments in the CARLA simulator demonstrate that CF-Driver outperforms the current state-of-the-art method, achieving a higher driving score and lower infraction rates. Specifically, CF-Driver attains a driving score of 84.2, surpassing the previous best model by 15.02 percentage points. These results highlight the effectiveness of incorporating CFEs in training end-to-end ADS. To foster further research, the CF-Driver code is made publicly available.
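
The sketch below shows the augmentation idea at its simplest, assuming a counterfactual generator is already available (e.g. a masked gradient search as sketched earlier): counterfactual states near the decision boundary are relabelled by the expert and appended to the behaviour-cloning set. The names generate_cf, expert_label and the 2D toy state space are placeholders, not CF-Driver's pipeline.

```python
# Minimal sketch of counterfactual data augmentation for behaviour cloning.
import numpy as np
from sklearn.linear_model import LogisticRegression

def augment_with_cfs(states, actions, generate_cf, expert_label):
    """Append counterfactual states relabelled by the expert to the training set."""
    cf_states, cf_actions = [], []
    for s, a in zip(states, actions):
        cf = generate_cf(s, a)                   # state nudged towards the boundary
        if cf is not None:
            cf_states.append(cf)
            cf_actions.append(expert_label(cf))  # what the expert would do there
    return (np.vstack([states, cf_states]),
            np.concatenate([actions, cf_actions]))

# toy usage: 2D "states", binary "brake / keep lane" actions
rng = np.random.default_rng(1)
states = rng.normal(size=(200, 2))
actions = (states[:, 0] + states[:, 1] > 0).astype(int)
expert_label = lambda s: int(s[0] + s[1] > 0)
generate_cf = lambda s, a: s - 0.9 * (s[0] + s[1]) * np.ones(2) / 2  # push to boundary
aug_states, aug_actions = augment_with_cfs(states, actions, generate_cf, expert_label)
cloned_policy = LogisticRegression().fit(aug_states, aug_actions)
```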


[Figures: Fig. 1, LiDAR-based object detection pipeline; Fig. 2, point cloud and image from the KITTI dataset [10] showing two pedestrians of different degrees of criticality missed by the 3D object detector, with coloured boxes emphasising their correspondence between point cloud and image; Fig. 4, max activation maps and Eigen-CAM visualisations for example frames from the KITTI and NuScenes datasets.]
Integrity Monitoring of 3D Object Detection in Automated Driving Systems using Raw Activation Patterns and Spatial Filtering

Deep neural network (DNN) models are widely used for object detection in automated driving systems (ADS). Yet, such models are prone to errors which can have serious safety implications. Introspection and self-assessment models that aim to detect such errors are therefore of paramount importance for the safe deployment of ADS. Current research on this topic has focused on techniques to monitor the integrity of the perception mechanism in ADS. However, existing introspection models in the literature largely concentrate on detecting perception errors by assigning equal importance to all parts of the input data frame to the perception module. This generic approach overlooks the varying safety significance of different objects within a scene, which obscures the recognition of safety-critical errors and poses challenges in assessing the reliability of perception in specific, crucial instances. Motivated by this shortcoming of the state of the art, this paper proposes a novel method that integrates the analysis of the raw activation patterns of the DNNs employed by the perception module with spatial filtering techniques. This approach enhances the accuracy of run-time introspection of DNN-based 3D object detection by selectively focusing on an area of interest in the data, thereby contributing to the safety and efficacy of ADS perception self-assessment processes.
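
A minimal sketch of the general idea of combining activation patterns with a spatial focus: backbone activations are pooled only inside a region of interest (e.g. the corridor ahead of the ego vehicle) and the pooled vector is scored by a lightweight error classifier. The feature maps, labels, region and classifier are random placeholders, not the paper's detector or data.

```python
# Minimal sketch: introspect a 3D detector by pooling its backbone activations
# only inside a spatial region of interest, then scoring the pooled vector with
# a lightweight binary error classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

def spatially_filtered_pattern(feature_map, roi):
    """Mean/max-pool a (C, H, W) activation map over a rectangular BEV region."""
    c0, r0, c1, r1 = roi                               # columns/rows of the BEV grid
    patch = feature_map[:, r0:r1, c0:c1]
    return np.concatenate([patch.mean(axis=(1, 2)), patch.max(axis=(1, 2))])

# toy usage: 32-channel BEV feature maps with binary "missed detection" labels
rng = np.random.default_rng(2)
features = rng.normal(size=(50, 32, 50, 50))           # N frames of backbone output
labels = rng.integers(0, 2, size=50)                   # 1 = perception error in the ROI
roi = (20, 25, 30, 50)                                 # region ahead of the ego vehicle
patterns = np.stack([spatially_filtered_pattern(f, roi) for f in features])
introspector = LogisticRegression(max_iter=1000).fit(patterns, labels)
error_probability = introspector.predict_proba(patterns[:1])[0, 1]
```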


Low‐complexity channel estimation for V2X systems using feed‐forward neural networks

June 2024 · 36 Reads

In vehicular communications, channel estimation is a complex problem due to the joint time–frequency selectivity of wireless propagation channels. To this end, several signal processing techniques as well as approaches based on neural networks have been proposed to address this issue. Due to the highly dynamic and random nature of vehicular communication environments, precise characterization of temporal correlation across a received data sequence can enable more accurate channel estimation. This paper proposes a new pilot constellation scheme in combination with a small feed‐forward neural network to improve the accuracy of channel estimation in V2X systems while keeping the implementation complexity low. The performance is evaluated in typical vehicular channels using simulated BER curves, and it is found superior to traditional channel estimation methods and state‐of‐the‐art neural‐network‐based implementations such as feed‐forward and super‐resolution. It is illustrated that the improvement becomes pronounced for small subcarrier spacings (or low 5G numerologies); hence, this paper contributes to the development of more reliable mobile services across rapidly varying vehicular communication channels with rich multi‐path interference.
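
The sketch below shows the generic pattern of refining a least-squares pilot-based estimate with a small feed-forward network, in the spirit of the low-complexity estimator described above. The channel model, comb-type pilot layout, noise level and network size are illustrative assumptions and not the paper's pilot constellation or configuration.

```python
# Minimal sketch: refine a least-squares (LS) pilot-based channel estimate with
# a small feed-forward network trained on synthetic multipath channels.
import torch

n_sc, n_pilots = 64, 16
pilot_idx = torch.arange(0, n_sc, n_sc // n_pilots)     # comb-type pilot layout

def synth_batch(batch=256, taps=4):
    """Random multipath channels and the LS estimates observed at the pilots."""
    h_t = (torch.randn(batch, taps) + 1j * torch.randn(batch, taps)) / (2 * taps) ** 0.5
    h_f = torch.fft.fft(h_t, n=n_sc, dim=1)              # frequency response
    noise = 0.05 * (torch.randn(batch, n_pilots) + 1j * torch.randn(batch, n_pilots))
    ls = h_f[:, pilot_idx] + noise                       # unit-power pilots assumed
    x = torch.cat([ls.real, ls.imag], dim=1)             # network input (2*n_pilots)
    y = torch.cat([h_f.real, h_f.imag], dim=1)           # target (2*n_sc)
    return x, y

net = torch.nn.Sequential(                               # small feed-forward estimator
    torch.nn.Linear(2 * n_pilots, 128), torch.nn.ReLU(),
    torch.nn.Linear(128, 2 * n_sc))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(500):                                     # short training loop
    x, y = synth_batch()
    loss = torch.nn.functional.mse_loss(net(x), y)
    opt.zero_grad(); loss.backward(); opt.step()

x_test, y_test = synth_batch(batch=1)
nmse = (net(x_test) - y_test).pow(2).mean() / y_test.pow(2).mean()
```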


Run-time Monitoring of 3D Object Detection in Automated Driving Systems Using Early Layer Neural Activation Patterns

June 2024 · 5 Reads · 2 Citations

Monitoring the integrity of object detection for errors within the perception module of automated driving systems (ADS) is paramount for ensuring safety. Despite recent advancements in deep neural network (DNN)-based object detectors, their susceptibility to detection errors, particularly in the less-explored realm of 3D object detection, remains a significant concern. State-of-the-art integrity monitoring (also known as introspection) mechanisms in 2D object detection mainly utilise the activation patterns in the final layer of the DNN-based detector’s backbone. However, that may not sufficiently address the complexities and sparsity of data in 3D object detection. To this end, we conduct, in this article, an extensive investigation into the effects of activation patterns extracted from various layers of the backbone network for introspecting the operation of 3D object detectors. Through a comparative analysis using the KITTI and NuScenes datasets with PointPillars and CenterPoint detectors, we demonstrate that using earlier layers’ activation patterns enhances the error detection performance of the integrity monitoring system, yet increases computational complexity. To address the real-time operation requirements in ADS, we also introduce a novel introspection method that combines activation patterns from multiple layers of the detector’s backbone and report its performance.
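
A minimal sketch of the multi-layer idea described above: forward hooks capture activations from an early and a late layer of a backbone, each map is pooled to a fixed-length vector, and a binary introspector is trained on the concatenated pattern. The tiny convolutional backbone, random frames and labels stand in for a real 3D detector and its error annotations.

```python
# Minimal sketch: collect activation patterns from several layers of a detector
# backbone via forward hooks, pool each to a fixed-length vector, and train a
# binary introspector on the concatenated pattern.
import torch

backbone = torch.nn.Sequential(              # stand-in for a detector backbone
    torch.nn.Conv2d(3, 16, 3, stride=2, padding=1), torch.nn.ReLU(),   # early layer
    torch.nn.Conv2d(16, 32, 3, stride=2, padding=1), torch.nn.ReLU(),
    torch.nn.Conv2d(32, 64, 3, stride=2, padding=1), torch.nn.ReLU())  # final layer

captured = {}
def hook(name):
    return lambda module, inp, out: captured.__setitem__(name, out.detach())

backbone[0].register_forward_hook(hook("early"))
backbone[4].register_forward_hook(hook("late"))

def activation_pattern(frame):
    """Channel-wise mean pooling of early and late layers, concatenated."""
    captured.clear()
    backbone(frame.unsqueeze(0))
    return torch.cat([captured["early"].mean(dim=(2, 3)).squeeze(0),
                      captured["late"].mean(dim=(2, 3)).squeeze(0)])

# toy introspector trained on random frames with random error labels
frames = torch.rand(64, 3, 96, 96)
labels = torch.randint(0, 2, (64,)).float()
patterns = torch.stack([activation_pattern(f) for f in frames])
introspector = torch.nn.Linear(patterns.shape[1], 1)
opt = torch.optim.Adam(introspector.parameters(), lr=1e-2)
for _ in range(100):
    loss = torch.nn.functional.binary_cross_entropy_with_logits(
        introspector(patterns).squeeze(1), labels)
    opt.zero_grad(); loss.backward(); opt.step()
```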


[Fig. 5: The motion planning system diagram developed in this paper, illustrating the relations between its components. The high-level route plan, together with the current and predicted future states of surrounding vehicles, is fed into the motion planner by other modules.]
Towards A General-Purpose Motion Planning for Autonomous Vehicles Using Fluid Dynamics

June 2024 · 48 Reads

General-purpose motion planners for automated/autonomous vehicles promise to handle the task of motion planning (including tactical decision-making and trajectory generation) for various automated driving functions (ADF) in a diverse range of operational design domains (ODDs). The challenges of designing a general-purpose motion planner arise from several factors: a) a plethora of scenarios with different semantic information in each driving scene should be addressed, b) a strong coupling between long-term decision-making and short-term trajectory generation shall be taken into account, c) the nonholonomic constraints of the vehicle dynamics must be considered, and d) the motion planner must be computationally efficient to run in real-time. The existing methods in the literature are either limited to specific scenarios (logic-based) or are data-driven (learning-based) and therefore lack explainability, which is important for safety-critical automated driving systems (ADS). This paper proposes a novel general-purpose motion planning solution for ADS inspired by the theory of fluid mechanics. A computationally efficient technique, i.e., the lattice Boltzmann method, is then adopted to generate a spatiotemporal vector field, which, in accordance with the nonholonomic dynamic model of the ego vehicle, is employed to generate feasible candidate trajectories. The trajectory that optimises ride quality, efficiency, and safety is finally selected to calculate the imminent control signals, i.e., throttle/brake and steering angle. The performance of the proposed approach is evaluated by simulations in highway driving, on-ramp merging, and intersection crossing scenarios, and it is found to outperform traditional motion planning solutions based on model predictive control (MPC).
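
The sketch below illustrates the pipeline shape of such a planner: candidate trajectories are rolled out with a kinematic bicycle model (a nonholonomic constraint), steered towards a flow field, and ranked by a cost trading off efficiency, comfort and safety. The analytic flow field, bicycle parameters and cost weights are placeholders for the lattice-Boltzmann field and tuning used in the paper.

```python
# Minimal sketch of vector-field-guided trajectory generation and selection.
import numpy as np

def flow_field(x, y, t):
    """Placeholder spatiotemporal field: forward flow bending around a moving obstacle."""
    obstacle = np.array([30.0 + 2.0 * t, 0.0])              # a slow lead vehicle
    away = np.array([x, y]) - obstacle
    repulsion = 8.0 * away / (np.linalg.norm(away) ** 2 + 1.0)
    return np.array([10.0, 0.0]) + repulsion                 # desired velocity (m/s)

def rollout(steer_gain, speed=10.0, dt=0.2, horizon=30, wheelbase=2.8):
    """Bicycle-model rollout whose steering tracks the local flow direction."""
    x, y, yaw, traj = 0.0, 0.0, 0.0, []
    for k in range(horizon):
        v = flow_field(x, y, k * dt)
        heading_error = np.arctan2(v[1], v[0]) - yaw
        delta = np.clip(steer_gain * heading_error, -0.5, 0.5)   # steering angle
        x += speed * np.cos(yaw) * dt
        y += speed * np.sin(yaw) * dt
        yaw += speed / wheelbase * np.tan(delta) * dt
        traj.append((x, y, yaw, delta))
    return np.array(traj)

def cost(traj):
    """Efficiency (progress), comfort (steering effort) and safety (obstacle gap)."""
    progress = traj[-1, 0]
    effort = np.sum(np.diff(traj[:, 3]) ** 2)
    gaps = [np.linalg.norm(traj[k, :2] - np.array([30.0 + 2.0 * k * 0.2, 0.0]))
            for k in range(len(traj))]
    return -progress + 50.0 * effort + 20.0 / (min(gaps) + 1e-3)

candidates = [rollout(g) for g in np.linspace(0.2, 2.0, 10)]   # candidate set
best = min(candidates, key=cost)                               # selected trajectory
```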


A Survey on Hybrid Motion Planning Methods for Automated Driving Systems

June 2024 · 73 Reads · 1 Citation

Motion planning is an essential element of the modular architecture of autonomous vehicles, serving as a bridge between upstream perception modules and downstream low-level control signals. Traditional motion planners were initially designed for specific Automated Driving Functions (ADFs), yet the evolving landscape of highly automated driving systems (ADS) requires motion planning for a wide range of ADFs, including unforeseen ones. This need has motivated the development of the "hybrid" approach in the literature, seeking to enhance motion planning performance by combining diverse techniques, such as data-driven (learning-based) and logic-driven (analytic) methodologies. Recent research endeavours have significantly contributed to the development of more efficient, accurate, and safe hybrid methods for Tactical Decision Making (TDM) and Trajectory Generation (TG), as well as integrating these algorithms into the motion planning module. Owing to the extensive variety and potential of hybrid methods, a timely and comprehensive review of the current literature is undertaken in this survey article. We classify the hybrid motion planners based on the types of components they incorporate, such as combinations of sampling-based with optimization-based/learning-based motion planners. The comparison of different classes is conducted by evaluating the addressed challenges and limitations, as well as assessing whether they focus on TG and/or TDM. We hope this approach will enable researchers in this field to gain in-depth insights into current trends in hybrid motion planning and shed light on promising areas for future research.


Run-Time Introspection of 2D Object Detection in Automated Driving Systems Using Learning Representations

June 2024 · 170 Reads · 2 Citations
IEEE Transactions on Intelligent Vehicles

Reliable detection of various objects and road users in the surrounding environment is crucial for the safe operation of automated driving systems (ADS). Despite recent progress in developing highly accurate object detectors based on Deep Neural Networks (DNNs), they still remain prone to detection errors, which can lead to fatal consequences in safety-critical applications such as ADS. An effective remedy to this problem is to equip the system with run-time monitoring, known as introspection in the context of autonomous systems. Motivated by this, we introduce a novel introspection solution, which operates at the frame level for DNN-based 2D object detection and leverages neural network activation patterns. The proposed approach pre-processes the neural activation patterns of the object detector's backbone using several different modes. To provide extensive comparative analysis and a fair comparison, we also adapt and implement several state-of-the-art (SOTA) introspection mechanisms for error detection in 2D object detection, using one-stage and two-stage object detectors evaluated on the KITTI and BDD datasets. We compare the performance of the proposed solution in terms of error detection, adaptability to dataset shift, and computational and memory resource requirements. Our performance evaluation shows that the proposed introspection solution outperforms SOTA methods, achieving an absolute reduction in the missed error ratio of 9% to 17% in the BDD dataset.
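
A minimal sketch of frame-level introspection with several pre-processing modes applied to a final-layer activation map, as described above. The activation maps and error labels are random placeholders; in practice they come from the detector's backbone and from comparing its output against ground truth, and the listed modes are generic examples rather than the paper's exact set.

```python
# Minimal sketch: compare pre-processing modes for frame-level introspection
# based on the final-layer activation map of a 2D detector's backbone.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

def preprocess(act, mode):
    """Reduce a (C, H, W) activation map to a fixed-length pattern."""
    if mode == "mean":                      # channel-wise average pooling
        return act.mean(axis=(1, 2))
    if mode == "max":                       # channel-wise max pooling
        return act.max(axis=(1, 2))
    if mode == "hist":                      # coarse histogram of all activations
        return np.histogram(act, bins=32, range=(-3, 3))[0] / act.size
    raise ValueError(mode)

rng = np.random.default_rng(3)
activations = rng.normal(size=(120, 32, 20, 20))     # final-layer maps per frame
errors = rng.integers(0, 2, size=120)                # 1 = frame contains a missed object

for mode in ("mean", "max", "hist"):
    X = np.stack([preprocess(a, mode) for a in activations])
    score = cross_val_score(GradientBoostingClassifier(), X, errors, cv=3).mean()
    print(f"mode={mode:4s}  cross-validated error-detection accuracy={score:.2f}")
```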


Citations (56)


... Unfortunately, despite their promising performance with low-dimensional tabular data, these methods suffer from various limitations. On the one hand, perturbation-based methods can be computationally expensive and often generate non-realistic or irrelevant CF instances [7], [8]. On the other hand, gradient-based techniques may not be suitable for DNNs due to their non-linearity, particularly when processing high-dimensional image input data, resulting in adversarial examples [6], [9], [10]. ...

Reference:

SAFE: Saliency-Aware Counterfactual Explanations for DNN-based Automated Driving Systems
Counterfactual Explainer for Deep Reinforcement Learning Models Using Policy Distillation
  • Citing Article
  • December 2024

ACM Transactions on Intelligent Systems and Technology

... However, these approaches may truncate information useful for model interpretation, as feature maps are high-dimensional tensors with channel, width, and height dimensions. Furthermore, the synthesized NAPs are usually clusters or metrics related to the reliability of a model prediction 32,34 . None of these representations are well-suited to interpreting the continuous predicted output from a regression oculomics model looking at continuous biomarkers. ...

Run-time Monitoring of 3D Object Detection in Automated Driving Systems Using Early Layer Neural Activation Patterns

... While the above-mentioned methods can only generate CFEs for low-dimensional inputs such as tabular data, Mahajan et al. [24] introduced a generative modeling approach to produce CFEs to provide CFEs for high-dimensional inputs such as images leveraging the power of deep learning. However, they can introduce artefacts along the generated CFEs, which has been discussed in [31]. ...

SAFE-RL: Saliency-Aware Counterfactual Explainer for Deep Reinforcement Learning Policies
  • Citing Article
  • November 2024

IEEE Robotics and Automation Letters

... Fourth, in order to keep the arguments focused we assume a modular system architecture for automated driving, such as the sense-plan-act control methodology, which is the standard system architecture for automated driving systems to date (International Organization for Standardization (ISO), 2022;Shalev-Shwartz et al., 2017;Gruyer et al., 2017;Sormoli et al., 2024). Here, the planning stage is functionally separate from the perception and execution stages. ...

A Survey on Hybrid Motion Planning Methods for Automated Driving Systems

... This method relies on the storage and retrieval of past experiences for error identification, as mentioned in [15]. Additionally, predicting when system performance, indicated by metrics like mean average precision (mAP), falls below a specific standard in runtime helps in pinpointing errors, as detailed in [16]- [18]. ...

Run-Time Introspection of 2D Object Detection in Automated Driving Systems Using Learning Representations

IEEE Transactions on Intelligent Vehicles

... The approach based on neural network (NN) has become a prevailing non-parametric technique for trajectory prediction, because of its convenient model structure and precise performance. Regarding merging scenarios, Mozaffari et al. (2023) used highD and exiD datasets and developed a transformer-based highway merging trajectory prediction model, which was proved to improve safety, comfort, and efficiency in dense flows. Dong et al. (2024) established a transformer-based merging trajectory-prediction model, upon various driving styles collected from drones on a ramp in Xi'an, China. ...

Trajectory Prediction with Observations of Variable-Length for Motion Planning in Highway Merging Scenarios
  • Citing Conference Paper
  • September 2023

... CFEs have gained significant attention in the field of explainable AI (XAI) due to their ability to provide insight into the decision-making process of machine learning models [17,33]. A CFE describes a minimal change in the input features that would alter the model's output [38]. ...

SAFE: Saliency-Aware Counterfactual Explanations for DNN-based Automated Driving Systems
  • Citing Conference Paper
  • September 2023

... However, the three-dimensional (3D) nature of the world requires a comprehensive understanding of the environment in 3D to ensure the resilience and robustness of ADS applications. Additionally, in recent years, investigating neural activation patterns for either uncertainty estimation or for identifying performance drops has gained popularity due to its flexibility and ease of integration into other systems [16], [17]. Despite that, only limited attempts for the monitoring of commonly utilised LIDAR-based 3D object detectors have been made so far, which are mainly clustered around uncertainty estimation [12], [19]. ...

Introspection of 2D Object Detection using Processed Neural Activation Patterns in Automated Driving Systems

... trajectories are likely to be near-optimal. The fluid flow model has also been used in [27] for trajectory prediction in highway scenarios, but the generated vector field over there doesn't have the temporal component. ...

A Novel Deep Neural Network for Trajectory Prediction in Automated Vehicles Using Velocity Vector Field

... By shifting the focus to simulated solutions for smart mobility on roundabouts, insights derived from computer simulations can illuminate noteworthy accomplishments and offer valuable lessons learned and lessons to be learned [20]. Simulations contribute to a nuanced understanding of mobility dynamics, enabling informed decisions to optimize traffic flow in cities, enhance safety, and advance the integration of intelligent systems in roundabout design and operations [21]. ...

Impact analysis of cooperative perception on the performance of automated driving in unsignalized roundabouts

Frontiers in Robotics and AI