Delft University of Technology
  • Delft, Zuid-Holland, Netherlands
Recent publications
The Attention-Related Driving Errors Scale (ARDES) is a self-report measure of individual differences in driving inattention. ARDES was originally developed in Spanish (Argentina) and later adapted to other countries and languages. Evidence supporting the reliability and validity of ARDES scores has been obtained in various countries. However, no study has specifically examined the measurement invariance of ARDES measures across countries, limiting their comparability. Can different language versions of ARDES provide comparable measures across countries with different traffic regulations and cultural norms? To what extent might cultural differences prevent researchers from making valid inferences based on ARDES measures? Using Alignment Analysis, the present study assessed the approximate invariance of ARDES measures in seven countries: Argentina (n = 603), Australia (n = 378), Brazil (n = 220), China (n = 308), Spain (n = 310), UK (n = 298), and USA (n = 278). The three-factor structure of ARDES scores (differentiating driving errors occurring at the Navigation, Manoeuvring and Control levels) was used as the target theoretical model. A fixed alignment analysis was conducted to examine approximate measurement invariance. 12.3% of the intercepts and 0.8% of the item-factor loadings were identified as non-invariant, for an average non-invariance of 8.6%. Despite substantial differences among the countries in sample recruitment and representativeness, the results support using ARDES measures to make comparisons across the country samples. The range of cultures, laws and collision risk across these seven countries thus provides a demanding test of a culture-free measure of inattention while driving. The alignment analysis results suggest that ARDES measures reach near equivalence among the countries in the study. We hope this study will serve as a basis for future cross-cultural research on driving inattention using ARDES.
As today's engineering systems have become increasingly sophisticated, assessing the efficacy of their safety-critical systems has become much more challenging. The classical methods of "failure" analysis by decomposition into components related by logic trees, such as fault and event trees, root cause analysis, and failure mode and effects analysis, lead to models that do not necessarily behave like the real systems they are meant to represent. These models need to display emergent and unpredictable behaviors similar to those of sociotechnical systems in the real world. The question then arises as to whether a return to a simpler whole-system model is necessary to better understand the behavior of real systems and to build confidence in the results. This question is all the more pressing when one considers that the causal chain in many serious accidents is not as deep-rooted as is sometimes claimed. If these more obvious causes are not removed, why would the more intricate scenarios that emanate from more sophisticated models be acted upon? In this paper, we explore this question and highlight the advantages of modeling and analyzing "normal" deviations from ideality, the so-called weak signals, rather than only system failures, near misses and catastrophes.
Densely populated areas are major sources of air, soil and water pollution. Agriculture, manufacturing, consumer households and road traffic all have their share. This is particularly true for the country featured in this paper: the Netherlands. Continuous pollution of the air and soil manifests itself as acidification, decalcification and eutrophication. Biodiversity in nature areas is steadily declining. Organic farms are also under threat. In the case of mobility, local air pollution may have a huge health impact. Effective policy is called for, after high courts blocked construction projects because of foreseen building- and transport-related NOx emissions. EU lawmakers are pursuing Dutch governments because the latter favoured economics and politics over environmental and liveability concerns. But people in the Netherlands are strongly divided. The latest provincial elections were dominated by environmental concerns, next to many socio-economic issues. NOx and CO2 emissions by passenger cars are in focus. Technical measures and increasingly strict fuel-economy norms have strongly reduced NOx emissions, though levels remain too high, while growth in the number of cars neutralized the technological reduction of CO2 emissions. The question is: what would be the impact of a drastic mandatory reduction in CO2, NOx and PM10 emissions on car ownership and use in the Netherlands? The authors used literature, scenario analysis and simulation modelling to answer this question. Electric mobility could remove these emissions, but its full impact will only be achieved if the grid mix, which is still dominated by fossil fuels, becomes greener, which is a gradual, long-term process. EVs compete with other consumers of electricity, as many other activities, such as heating, are also electrifying. With the current grid mix, it is inevitable that the number of km per vehicle per year be reduced to reach the scenario targets (−25% and −50% CO2 emissions by cars, respectively). This calls for an individual mobility budget per car user. Keywords: climate change, CO2 emissions, fuel efficiency, electric vehicles, behaviour, policy-making
Test case prioritization techniques have emerged as effective strategies to optimize this process and mitigate regression testing costs. Commonly, black-box heuristics guide optimal test ordering, leveraging information retrieval techniques (e.g., cosine distance) to measure the distance between test cases and sort them accordingly. However, a challenge arises when dealing with tests of varying granularity levels, as they may employ distinct vocabularies (e.g., name identifiers). In this paper, we propose to measure the distance between test cases based on the shortest path between their identifiers within the WordNet lexical database. This additional heuristic is combined with the traditional cosine distance to prioritize test cases in a multi-objective fashion. Our preliminary study, conducted on two different Java projects, shows that test cases prioritized with WordNet achieve higher fault detection capability (APFD\(_{C}\)) than the traditional cosine distance used in the literature.
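A minimal sketch of the distance-based ordering described above: the snippet prioritizes tests by greedy farthest-first selection on bag-of-words cosine distance. The test names, token strings, and the greedy scheme are illustrative assumptions, and the paper's WordNet shortest-path heuristic (which requires the WordNet database) is not reproduced here.

```python
from collections import Counter
from math import sqrt

def cosine_distance(a: str, b: str) -> float:
    """1 - cosine similarity between bag-of-words vectors of two test texts."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    norm = sqrt(sum(c * c for c in va.values())) * sqrt(sum(c * c for c in vb.values()))
    return 1.0 - (dot / norm if norm else 0.0)

def prioritize(tests: dict) -> list:
    """Greedy farthest-first ordering: repeatedly pick the test that is
    most dissimilar to all tests already selected."""
    names = list(tests)
    order = [names.pop(0)]  # seed with the first test
    while names:
        best = max(names, key=lambda n: min(
            cosine_distance(tests[n], tests[s]) for s in order))
        names.remove(best)
        order.append(best)
    return order

# Hypothetical test cases represented by their identifier token strings.
tests = {
    "testLogin": "user login password auth session",
    "testLoginTimeout": "user login password timeout session",
    "testReport": "report export pdf table render",
}
print(prioritize(tests))  # the dissimilar report test is pulled forward
```

In the paper's multi-objective setting, a second distance based on WordNet shortest paths between identifier words would be optimized alongside this one.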
For decades, mechanical resonators with high sensitivity have been realized using thin-film materials under high tensile loads. Although there have been remarkable strides in achieving low-dissipation mechanical sensors by utilizing high tensile stress, even the best strategies are limited by the tensile fracture strength of the resonator materials. In this study, a wafer-scale amorphous thin film is uncovered which has the highest ultimate tensile strength ever measured for a nanostructured amorphous material. This silicon carbide (SiC) material exhibits an ultimate tensile strength of over 10 GPa, reaching the regime reserved for strong crystalline materials and approaching levels experimentally shown in graphene nanoribbons. Amorphous SiC strings with high aspect ratios are fabricated, with mechanical modes exhibiting quality factors exceeding 10⁸ at room temperature, the highest value achieved among SiC resonators. These performances are demonstrated after characterizing the mechanical properties of the thin film using the resonance behaviors of free-standing resonators. This robust thin-film material has significant potential for applications in nanomechanical sensors, solar cells, biological applications, space exploration and other areas requiring strength and stability in dynamic environments. The findings of this study open up new possibilities for the use of amorphous thin-film materials in high-performance applications.
Things simply push back. We had to grapple with this fact while working on our project Lichtsuchende, realising how things (which may sometimes feel inanimate) and their digital aspects continuously participate in the environments that they are part of and co-create, as well as in their own making. As researchers, we inevitably started to think about how we could better understand what was happening in this creative process, and about how the combination of physical and digital agents and their entangled complexities could be unpacked. How are we making these environments together? Through this process of inquiry, a particular framework started to emerge which allowed us to explore and discuss the challenges and highlights of creating digitally (in our case, an interactive installation) using a Latourian and an Ingoldian approach. Using Latour’s actor network and Ingold’s meshwork as theories helped us recognise the interrelations between the things in, within and outside the installation. We seek to understand the ways in which these two viewpoints can be applied as methodologies to unpack the factors involved in the creation of an artificial society and the emergence of a shared environment made of non-human and human things.
This research presents a comprehensive analysis of the dynamic stability of aircraft using modules of SCSim (Stability and Control Simulation Tool), a software package dedicated to the analysis of aircraft stability and control. The stability of an aircraft can be examined in two directions: longitudinal and lateral. The ability to determine an aircraft's limits and handling qualities depends on its stability. This study demonstrates a complete dynamic stability analysis using SCSim with aerodynamic input data from the commercial software Advanced Aircraft Analysis (AAA). SCSim is implemented within a common framework in MATLAB, adapted from a MATHCAD script, providing an easier way to enter user-defined input data and introducing a set of new features. The study is divided into three main parts, each based on a type of stability analysis. First, static stability is examined to understand how the system behaves in equilibrium. Then, dynamic stability is assessed to understand how the system reacts to perturbations and disturbances. Finally, the handling-qualities analysis systematically assesses the dynamic modes of the aircraft's behavior and establishes the levels of handling qualities. In this paper, a generic model of a propeller-driven aircraft is used for the study due to the availability of flight parameters and geometric and aerodynamic data. The obtained results demonstrate the capabilities of SCSim in designing and analysing an aircraft's stability under various flight conditions and configurations, with validation against AAA.
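The dynamic-stability step above amounts to extracting modal frequency and damping from a linearized model. A hedged sketch: the toy two-state short-period matrix below is illustrative, not SCSim or AAA data, and the Level 1 damping band is only the commonly cited guideline.

```python
import numpy as np

# Hypothetical 2-state short-period approximation, state = [alpha, q].
# Matrix entries are illustrative and not taken from the paper.
A = np.array([[-0.7,  1.0],
              [-5.0, -1.2]])

eigvals = np.linalg.eigvals(A)
lam = eigvals[np.imag(eigvals) > 0][0]   # one of the complex-conjugate pair
omega_n = abs(lam)                        # natural frequency [rad/s]
zeta = -lam.real / omega_n                # damping ratio

print(f"omega_n = {omega_n:.2f} rad/s, zeta = {zeta:.2f}")
# Commonly cited Level 1 band for short-period damping (roughly
# 0.35 <= zeta <= 1.30, cf. MIL-F-8785C, Category A flight phases):
print("Level 1 damping" if 0.35 <= zeta <= 1.30 else "degraded handling")
```

A negative real part of the eigenvalues confirms dynamic stability; the handling-qualities assessment then compares (omega_n, zeta) per mode against the specification bands.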
Compensation capacitors are naturally susceptible to manufacturing defects and aging effects, leading to degraded performance of a wireless power transfer (WPT) system. This paper focuses on the optimization of compensation parameters during the design stage and the control strategy during the operation phase to improve the inherent capacitor error tolerance of the WPT system. The Sobol sensitivity method is applied to rank the importance of the deviations of three capacitors for the transfer characteristics, and a method of tracking the secondary resonance frequency is then proposed. A numerical method is applied to find the optimal compensation parameters, with the constraint that the output voltage change caused by the shift from the designed compensation condition is limited to less than ±5%. Experimental results show that with the proposed frequency tracking method and compensation parameter optimization, the deviation tolerance index decreases from 0.485 to 0.363, an improvement of 25.2%, and the minimum power factor increases from 0.78 to 0.89. Moreover, the characteristics of constant primary coil current and voltage gain are almost unaffected.
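A quick numeric illustration of why capacitor deviations shift the operating point and why resonance-frequency tracking is feasible. The LC values below are illustrative, not the paper's; the only physics used is the textbook relation f_res = 1/(2π√(LC)), whose first-order sensitivity is Δf/f ≈ −ΔC/(2C).

```python
import math

# Illustrative series-resonant tank values (assumed, not from the paper).
L = 100e-6           # 100 uH
C_nom = 63.3e-9      # nominal compensation capacitance [F]

def f_res(C):
    """Resonance frequency of an LC tank."""
    return 1.0 / (2 * math.pi * math.sqrt(L * C))

f0 = f_res(C_nom)
for dev in (-0.05, 0.0, 0.05):            # +/-5 % capacitance deviation
    f = f_res(C_nom * (1 + dev))
    print(f"dC = {dev:+.0%}: f_res = {f / 1e3:.1f} kHz "
          f"(shift {100 * (f / f0 - 1):+.2f} %)")
# First order: df/f ~ -dC/(2C), so a 5 % capacitor error moves the
# resonance by only ~2.5 %, a shift a frequency-tracking loop can follow.
```

This halving of the relative error is one reason tracking the secondary resonance frequency can recover much of the performance lost to capacitor tolerance.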
Data lakes are becoming increasingly prevalent for big data management and data analytics. In contrast to traditional ‘schema-on-write’ approaches such as data warehouses, data lakes are repositories that store raw data in its original format and provide a common access interface. Despite strong interest from both academia and industry, there is considerable ambiguity regarding the definition, functions and available technologies of data lakes, and a complete, coherent picture of data lake challenges and solutions is still missing. This survey reviews the development, architectures, and systems of data lakes. We provide a comprehensive overview of the research questions involved in designing and building data lakes, and we classify the existing approaches and systems based on the functions they provide, making this survey a useful technical reference for designing, implementing and deploying data lakes. We hope that the thorough comparison of existing solutions and the discussion of open research challenges will motivate the future development of data lake research and practice.
Lane detection is crucial for vehicle localization, which makes it the foundation of automated driving and many intelligent and advanced driving assistance systems. Available vision-based lane detection methods do not make full use of valuable features and aggregated contextual information, especially the interrelationships between lane lines and other regions of the images in continuous frames. To fill this research gap and improve lane detection performance, this paper proposes a pipeline consisting of self pre-training with masked sequential autoencoders and fine-tuning with customized PolyLoss for end-to-end neural network models using multiple continuous image frames. The masked sequential autoencoders are adopted to pre-train the neural network models, with the objective of reconstructing the missing pixels from a randomly masked image. Then, in the fine-tuning phase, where lane detection segmentation is performed, the continuous image frames serve as inputs, and the pre-trained model weights are transferred and further updated via backpropagation, with the customized PolyLoss computing the weighted errors between the output lane detection results and the labeled ground truth. Extensive experimental results demonstrate that, with the proposed pipeline, lane detection performance on both normal and challenging scenes can be advanced beyond the state of the art, delivering the best testing accuracy (98.38%), precision (0.937), and F1-measure (0.924) on the normal-scene test set, together with the best overall accuracy (98.36%) and precision (0.844) on the challenging-scene test set, while substantially shortening the training time.
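The masked pre-training objective described above can be sketched in a few lines: hide random pixels and score reconstruction only on the hidden ones. The mask ratio, image size, and the constant-mean stand-in for the network's prediction are assumptions for illustration, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

def mask_image(img, mask_ratio=0.75):
    """Randomly hide a fraction of pixels; return the masked image and mask."""
    mask = rng.random(img.shape) < mask_ratio   # True = hidden pixel
    masked = np.where(mask, 0.0, img)
    return masked, mask

def masked_mse(pred, target, mask):
    """Reconstruction loss computed only on the masked (hidden) pixels."""
    return float(((pred - target) ** 2)[mask].mean())

img = rng.random((32, 32))                       # toy grayscale frame
masked, mask = mask_image(img)
# A real autoencoder would predict the hidden pixels from `masked`;
# a constant-mean prediction stands in for the network output here.
pred = np.full_like(img, img[~mask].mean())
print(f"masked ratio: {mask.mean():.2f}, loss: {masked_mse(pred, img, mask):.4f}")
```

Minimizing this loss forces the encoder to learn contextual structure from the visible pixels, which is the representation the fine-tuning segmentation stage then inherits.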
This article presents a hybrid magnetic current sensor for contactless current measurement. Pick-up coils and Hall plates are employed to sense the high- and low-frequency fields, respectively, generated by a current-carrying conductor. Due to the differentiating characteristic of the pick-up coils, a flat frequency response can be obtained by summing the outputs of the coil and Hall paths and passing the result through a 1st-order low-pass filter (LPF). For maximum resolution, the LPF corner frequency (2 kHz) is set such that the noise contribution of each path is equal. To suppress the coil-path offset without the use of large ac-coupling capacitors, an area-efficient dual dc servo loop (D3SL) is used. This effectively suppresses the coil-path offset, resulting in a total offset of 73 µT, which is mainly dominated by the Hall path. Fabricated in a standard 0.18-µm CMOS process, the current sensor occupies 3.9 mm² and draws 7.1 mA from a 1.8 V supply. It achieves 43 mA resolution in a 5 MHz bandwidth, which is 1.5× better than state-of-the-art hybrid sensors. It also achieves the lowest energy-efficiency figure of merit (FoM) among CMOS magnetic current sensors (3.5× lower than the state of the art).
This article presents a sub-1 V bipolar junction transistor (BJT)-based temperature sensor that achieves both high accuracy and high energy efficiency. To avoid the extra headroom required by conventional current sources, the sensor’s diode-connected BJTs are biased by precharging sampling capacitors to the supply voltage and then discharging them through the BJTs. This capacitive biasing technique requires little headroom (~150 mV) and simultaneously samples the BJTs’ base–emitter voltages. The latter are then applied to a switched-capacitor (SC) ΔΣ ADC to generate a digital representation of temperature. For robust sub-1 V operation and high energy efficiency, the ADC employs auto-zeroed inverter-based integrators. Fabricated in a standard 0.18-µm CMOS process, the sensor occupies 0.25 mm² and consumes 810 nW from a 0.95-V supply at room temperature. It achieves an inaccuracy of ±0.15 °C (3σ) from −55 °C to 125 °C after a 1-point trim, which is on par with the state of the art. It also achieves a resolution figure of merit (FoM) of 0.34 pJ·K², more than 6× lower than that of state-of-the-art BJT-based sensors with similar accuracy.
Accurately locating a fault helps in rapidly restoring the isolated line back into the system. This paper proposes a novel communication-based multi-terminal method to locate faults in a radial medium-voltage DC (MVDC) microgrid. A time-domain algorithm is proposed that applies to an MVDC system with different possible combinations of lines and cables. Terminal measurements of voltages and currents, voltages across the current-limiting reactor (CLR), and node currents are used to develop a flexible online fault location method. Based on the availability of communication and sensors, different terminals can be used to increase the reliability of the proposed fault location method. The method is robust to variations of key implementation parameters such as fault type, fault resistance, fault location, sampling frequency, white Gaussian noise (WGN) in the measurements, and different line/cable combinations. Further, the sensitivity of the fault location calculation to parameter variation is analyzed. PSCAD/EMTDC-based electromagnetic transient simulations are used to validate the performance of the algorithm.
We experimentally investigate the problem of monitoring the Kerr-nonlinearity coefficient γ from transmitted and received data for a single-mode fiber link of 1600 km length. We compare the accuracy and speed of three different approaches. First, a standard split-step Fourier method (SSFM) is used to predict the output at various γ values, which is then compared to the measured output. Second, a recently proposed nonlinear Fourier transform (NFT)-based method, which matches solitonic eigenvalues in the transmitted and received signals for various γ values. Third, a novel fast version of the NFT-based method, which only matches the few highest eigenvalues. Although the complexity of the NFT-based methods does not scale with link length, we demonstrate that the SSFM-based method is significantly faster than the basic NFT-based method for the considered 1600 km link, and outperforms even the faster version. However, for a simulated link of 8000 km, the fast NFT-based method is shown to be faster than the SSFM-based method, although at the cost of a small loss in accuracy.
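A minimal split-step Fourier propagation step of the kind underlying the first approach can be sketched as follows. The fiber parameters, step size, and sech input pulse are illustrative assumptions; the actual method would sweep γ and compare the predicted output against the measured one.

```python
import numpy as np

# One symmetric split-step Fourier solver for the lossless NLSE
#   i dA/dz = (beta2/2) d^2A/dt^2 - gamma |A|^2 A
# with illustrative (typical single-mode fiber) parameters.
n, T = 1024, 100e-12                        # samples, time window [s]
t = np.linspace(-T / 2, T / 2, n, endpoint=False)
w = 2 * np.pi * np.fft.fftfreq(n, T / n)    # angular frequency grid
beta2, gamma, dz = -21e-27, 1.3e-3, 100.0   # s^2/m, 1/(W*m), step [m]

A = np.sqrt(1e-3) / np.cosh(t / 5e-12)      # sech input pulse [sqrt(W)]

def ssfm_step(A, dz):
    """Symmetric split step: half dispersion, full nonlinearity, half dispersion."""
    half = np.exp(1j * beta2 / 2 * w**2 * dz / 2)
    A = np.fft.ifft(half * np.fft.fft(A))           # linear half step
    A = A * np.exp(1j * gamma * np.abs(A)**2 * dz)  # Kerr nonlinear step
    return np.fft.ifft(half * np.fft.fft(A))        # linear half step

P_in = np.sum(np.abs(A)**2)
for _ in range(100):                        # propagate 100 x 100 m = 10 km
    A = ssfm_step(A, dz)
# The lossless NLSE conserves power, a quick sanity check for the solver:
print(np.allclose(np.sum(np.abs(A)**2), P_in))
```

Because both sub-steps are pure phase rotations, power is conserved to machine precision, which makes the check above a useful regression test for any SSFM implementation.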
This article presents a digital-input class-D amplifier (CDA) achieving high dynamic range (DR) by employing a chopped capacitive feedback network and a capacitive digital-to-analog converter (DAC). Compared with conventional resistive-feedback CDAs driven by resistive or current-steering DACs, the proposed architecture eliminates the noise from the DAC and feedback resistors. Intermodulation between the chopping, pulsewidth modulation (PWM), and DAC sampling frequencies is analyzed to avoid negative impacts on the DR and linearity. Real-time dynamic element matching (RTDEM) is employed to address distortion due to mismatch in the DAC, while its intersymbol interference (ISI) is eliminated by deadbanding. The prototype, implemented in a 180-nm bipolar, CMOS, and DMOS (BCD) process, achieves 120.9 dB of DR and a peak total harmonic distortion plus noise (THD+N) of −111.2 dB. It can drive a maximum of 15/26 W into an 8-/4-Ω load with a peak efficiency of 90%/86%.
Ensuring safety in real-world robotic systems is often challenging due to unmodeled disturbances and noisy sensors. To account for such stochastic uncertainties, many robotic systems leverage probabilistic state estimators such as Kalman filters to obtain a robot's belief, i.e., a probability distribution over possible states. We propose belief control barrier functions (BCBFs) to enable risk-aware control, leveraging all the information provided by state estimators. This allows robots to stay in predefined safety regions with a desired confidence under these stochastic uncertainties. BCBFs are general and can be applied to a variety of robots that use extended Kalman filters as their state estimator. We demonstrate BCBFs on a quadrotor that is exposed to external disturbances and varying sensing conditions. Our results show improved safety compared to traditional state-based approaches while allowing control frequencies of up to 1 kHz.
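One generic way to turn a Gaussian belief into a risk-aware constraint, sketched under assumptions: this is a standard chance-constrained half-space barrier, not necessarily the paper's exact BCBF formulation, and the quadrotor numbers are invented for illustration.

```python
import numpy as np

def belief_barrier(mu, Sigma, a, b, c=1.645):
    """Chance-constrained barrier for a half-space h(x) = a^T x + b >= 0.
    A positive value means P(h(x) >= 0) >= ~95 % under belief N(mu, Sigma)
    (c = 1.645 is the one-sided 95 % Gaussian quantile)."""
    margin = a @ mu + b                      # nominal distance to the boundary
    backoff = c * np.sqrt(a @ Sigma @ a)     # uncertainty-dependent back-off
    return margin - backoff

# Toy quadrotor altitude example: stay above z = 0.5 m, state = [z, vz].
mu = np.array([1.0, 0.0])                    # EKF mean estimate
a, b = np.array([1.0, 0.0]), -0.5            # h(x) = z - 0.5
for var_z in (0.01, 0.1):                    # low vs. degraded sensing
    Sigma = np.diag([var_z, 0.05])
    print(f"var(z) = {var_z}: B = {belief_barrier(mu, Sigma, a, b):.3f}")
# Larger estimator covariance shrinks the barrier value, so the controller
# becomes more conservative exactly when sensing conditions degrade.
```

The controller would then enforce a decrease condition on this barrier value (instead of on the state-based h), which is how belief-space safety differs from traditional state-based approaches.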
Institution pages aggregate content on ResearchGate related to an institution. The members listed on this page have self-identified as being affiliated with this institution. Publications listed on this page were identified by our algorithms as relating to this institution. This page was not created or approved by the institution. If you represent an institution and have questions about these pages or wish to report inaccurate content, you can contact us here.
27,301 members
Alessandro Bozzon
  • Faculty of Electrical Engineering, Mathematics and Computer Sciences (EEMCS)
Venkatesha Prasad
  • Department of Software and Computer Technology
Warren Walker
  • Faculty of Technology, Policy and Management
Stefan van der Spek
  • Department of Urbanism
Mohammad Khosravi
  • Delft Center for Systems and Control (DCSC)
Information
Address
Mekelweg 2, 2628 CD, Delft, Zuid-Holland, Netherlands
Head of institution
Prof.dr.ir. Tim van der Hagen
Website
https://www.tudelft.nl/
Phone
+31 (0)15 27 89111