Eindhoven University of Technology
Recent publications
Periodontitis is a chronic inflammatory condition that often causes serious damage to tooth-supporting tissues. The limited success of clinically available approaches underscores the need for therapeutics that can not only provide structural guidance to cells but also modulate the local immune response. Here, three-dimensional melt electrowritten poly(ε-caprolactone) scaffolds with tissue-specific attributes were engineered to guide differentiation of human-derived periodontal ligament stem cells (hPDLSCs) and mediate macrophage polarization. The investigated tissue-specific scaffold attributes comprised fiber morphology (aligned vs. random) and highly ordered architectures with distinct strand spacings (small, 250 μm, and large, 500 μm). Macrophages exhibited an elongated morphology on aligned and highly ordered scaffolds, while maintaining their round shape on randomly oriented fibrous scaffolds. Expression of periostin and IL-10 was more pronounced on the aligned and highly ordered scaffolds. While hPDLSCs on the scaffolds with 500 μm strand spacing showed higher expression of the osteogenic marker Runx2 over 21 days, cells on randomly oriented fibrous scaffolds showed upregulation of M1 markers. In an orthotopic mandibular fenestration defect model, findings revealed that the tissue-specific scaffolds (i.e., aligned fibers for the periodontal ligament and highly ordered, 500 μm strand spacing, fluorinated calcium phosphate [F/CaP]-coated fibers for bone) could enhance regeneration that more closely mimics the natural periodontal tissues.
Background and Objectives In a significant portion of surgeries, blood pressure (BP) is measured non-invasively and only intermittently. This practice risks missing clinically relevant BP changes between two adjacent intermittent measurements. This study proposes a method to non-invasively estimate systolic blood pressure (SBP) with high accuracy in patients undergoing surgery. Methods Continuous arterial BP, electrocardiography (ECG), and photoplethysmography (PPG) signals were acquired from 29 patients undergoing surgery. After extracting 9 features from the PPG and ECG signals, we dynamically selected features upon each intermittent SBP measurement (every 10 min) based on feature robustness and the principle of correlation-based feature selection. Finally, multiple linear regression models were built to combine these features and estimate SBP every 30 s. Results Compared to the reference SBP, the proposed method achieved a mean difference of 0.08 mmHg, a standard deviation of difference of 7.97 mmHg, and a correlation coefficient of 0.89 (p < 0.001). Conclusions This study demonstrates the feasibility of non-invasively estimating SBP every 30 s with high accuracy during surgery by using ECG, PPG, and intermittent SBP measurements every 10 min; the achieved accuracy meets the standard of the Association for the Advancement of Medical Instrumentation. The proposed method has the potential to enhance BP monitoring in the operating room, improving patient outcomes and experiences.
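As a rough illustration of the calibration loop described in this abstract, the sketch below selects the features most correlated with the reference SBP at each intermittent measurement and refits a multiple linear regression before estimating SBP for the following 30-s windows. Feature names, array shapes, the number of selected features, and the synthetic data are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of the calibrate-then-estimate loop; all specifics are assumed.
import numpy as np
from sklearn.linear_model import LinearRegression

def select_features(X_cal, y_cal, top_k=4):
    """Rank candidate PPG/ECG features by absolute correlation with the
    reference SBP available at the intermittent calibration points."""
    corrs = np.array([abs(np.corrcoef(X_cal[:, j], y_cal)[0, 1])
                      for j in range(X_cal.shape[1])])
    return np.argsort(corrs)[::-1][:top_k]

def calibrate_and_estimate(X_cal, y_cal, X_new):
    """Refit a multiple linear regression on the selected features each time a
    new intermittent SBP reading arrives, then estimate SBP for new windows."""
    idx = select_features(X_cal, y_cal)
    model = LinearRegression().fit(X_cal[:, idx], y_cal)
    return model.predict(X_new[:, idx])

# Synthetic example: 20 calibration points, 9 candidate features, 60 new windows.
rng = np.random.default_rng(0)
X_cal, y_cal = rng.normal(size=(20, 9)), rng.normal(120, 10, size=20)
X_new = rng.normal(size=(60, 9))
sbp_estimates = calibrate_and_estimate(X_cal, y_cal, X_new)
```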
Purpose: Hyperthermia treatments are successful adjuvants to conventional cancer therapies in which the tumor is sensitized by heating. To monitor and guide the hyperthermia treatment, measuring the tumor and healthy-tissue temperature is important. Typical clinical practice heavily relies on intraluminal probe measurements that are uncomfortable for the patient and only provide spatially sparse temperature information. A solution may be offered by recent advances in magnetic resonance thermometry, which allows for three-dimensional internal temperature measurements. However, these measurements are not widely used in the pelvic region due to a low signal-to-noise ratio and the presence of image artifacts. Methods: To advance the clinical integration of magnetic resonance-guided cancer treatments, we consider the problem of removing air-motion-induced image artifacts. To this end, we propose a new combined thermal and magnetic susceptibility model-based temperature estimation scheme that uses temperature estimates to improve the removal of air-motion-induced image artifacts. The method is experimentally validated using a dedicated phantom that enables the controlled injection of air-motion artifacts and with in vivo thermometry from a clinical hyperthermia treatment. Results: Using probe measurements in a heated phantom, we showed that our method reduced the mean absolute error (MAE) by 58% compared to the state of the art near a moving air volume. Moreover, with in vivo thermometry our method obtained a MAE reduction between 17% and 95% compared to the state of the art. Conclusion: We expect that the combined thermal and magnetic susceptibility modeling used in model-based temperature estimation can significantly improve monitoring in hyperthermia treatments and enable feedback strategies that further improve MR-guided hyperthermia cancer treatments.
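For background only, the relations below are the standard proton-resonance-frequency-shift (PRFS) thermometry expressions, not the specific model proposed in the paper: the measured phase change mixes the temperature-induced frequency shift with a susceptibility-induced field change ΔB_χ, which is the kind of confounder that moving air produces and that the proposed scheme aims to estimate and remove. Here α ≈ −0.01 ppm/°C is the PRF thermal coefficient, γ the gyromagnetic ratio, B₀ the main field, and T_E the echo time.

```latex
% Standard PRFS thermometry relations (background, not the paper's model):
\Delta\varphi \;=\; \gamma \, T_E \left( \alpha\, B_0 \,\Delta T + \Delta B_{\chi} \right),
\qquad
\Delta T \;=\; \frac{\Delta\varphi - \gamma\, T_E\, \Delta B_{\chi}}{\gamma\, \alpha\, B_0\, T_E}.
```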
Real-time bidding is one of the most popular ways of selling impressions in online advertising, where online ad publishers allocate blocks on their websites to sell in online auctions. In real-time bidding, ad networks connect publishers and advertisers, and publishers have many ad networks to choose from. A possible approach for selecting ad networks and sending ad requests is the Waterfall Strategy, in which ad networks are queried sequentially. The ordering of the ad networks is very important for publishers, and finding the ordering that provides maximum revenue is a hard problem due to the highly dynamic environment. In this paper, we propose a dynamic ad network ordering method that finds the best ordering of ad networks for publishers that adopt the Waterfall Strategy. The method consists of two steps. The first step is a prediction model that is trained on real-time bidding historical data and provides an estimate of the revenue for each impression. These estimates are used as initial values for the Q-table in the second step, which is based on Reinforcement Learning and improves the output of the prediction model. By calculating the revenue of our method and comparing it with the revenue of a fixed, predefined ordering, we show that the proposed dynamic ad network ordering method increases publishers' revenue.
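A minimal sketch of the two-step idea described in this abstract is given below: a revenue-prediction model seeds a per-network Q-table, and a simple Q-learning-style update then refines the values used to order the waterfall. The network count, predicted revenues, update rule details, and observed revenues are illustrative assumptions only.

```python
# Hedged sketch: prediction-seeded Q-table refined by a simple value update.
import numpy as np

predicted_revenue = np.array([0.40, 0.55, 0.30, 0.65, 0.20])  # step-1 output (assumed)
q = predicted_revenue.copy()                                   # initialise Q-table
alpha = 0.1                                                    # learning rate

def waterfall_order(q_values):
    """Order ad networks from highest to lowest estimated value."""
    return np.argsort(q_values)[::-1]

def update(q_values, network, observed_revenue, alpha=0.1):
    """Move the value of the chosen network towards the observed revenue."""
    q_values[network] += alpha * (observed_revenue - q_values[network])

# Example: the impression is filled by the first network in the order that bids.
order = waterfall_order(q)
observed = {3: 0.7}            # hypothetical realised revenue for network 3
for net in order:
    if net in observed:
        update(q, net, observed[net])
        break
```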
The androgen receptor (AR) is a prostate master transcription factor. It binds to genetic enhancers, where it regulates gene activity and plays a fundamental role in prostate pathophysiology. Previous work has demonstrated that AR-DNA binding is systematically and consistently reprogrammed during prostate tumorigenesis and disease progression. We charted these reprogrammed AR sites and identified genes proximal to them. We were able to devise gene lists based on AR status within specific histological contexts: normal prostate epithelium, primary prostate tumor, and metastatic prostate cancer. We evaluated expression of the genes in these gene sets in subjects from two distinct clinical cohorts—men treated with surgery for localized prostate cancer and men with metastatic prostate cancer. Among men with localized prostate cancer, expression of genes proximal to AR sites lost in the transition from normal prostate to prostate tumor was associated with clinical outcome. Among men with metastatic disease, expression of genes proximal to AR sites gained in metastatic tumors was associated with clinical outcome. These results are consistent with the notion that AR is fundamental to both maintaining differentiation in normal prostate tissue and driving de-differentiation in advanced prostate cancer. More broadly, the study demonstrates the power of incorporating context-dependent epigenetic data into genetic analyses.
One of the main rationales for the existence of the patent system is to encourage knowledge diffusion through the full disclosure of the technical knowledge embodied in a patented invention. Yet economists and legal scholars cast doubt on the validity of the disclosure theory, and the empirical evidence on the actual benefits of the disclosure function remains limited. The present paper aims to expand our understanding of how information spreads via patent disclosure and exploits recent improvements in machine translation (MT) to identify the effect of broader access to patented knowledge. More specifically, the paper uses a unique natural experiment: in September 2013, Google launched a major upgrade to its Google Patents service and added patent applications from the China National Intellectual Property Administration (CNIPA) to its searchable patent database. Using a difference-in-differences approach, we show that the translation of the Chinese patents into English resulted in an increase in citations received from patents filed by US inventors, compared to a suitable control group comprising patents that Google translated only in 2016. Our results suggest that improved access to patented knowledge fosters knowledge diffusion.
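The difference-in-differences design mentioned here can be summarised by a regression with a treatment-by-period interaction; the sketch below uses statsmodels with an illustrative toy data frame whose column names and values are assumptions, not the paper's data.

```python
# Hedged sketch of a difference-in-differences regression; data are placeholders.
import pandas as pd
import statsmodels.formula.api as smf

# One row per patent-period: citations from US inventors, an indicator for the
# 2013-translated (treated) group, and an indicator for the post-translation period.
df = pd.DataFrame({
    "us_citations": [0, 1, 0, 2, 1, 3, 0, 1],
    "treated":      [0, 0, 1, 1, 0, 0, 1, 1],
    "post":         [0, 1, 0, 1, 0, 1, 0, 1],
})

# The coefficient on treated:post is the difference-in-differences estimate of
# the effect of machine translation on citations received from US inventors.
model = smf.ols("us_citations ~ treated * post", data=df).fit()
print(model.params["treated:post"])
```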
The market for patent licenses, despite its paramount importance for technological innovation, has various inefficiencies. A particular problem with widely used technical standards such as LTE and Wi-Fi is the lack of information regarding which patents are “essential” to implement the standard. This information is crucial because it simplifies determining infringement and implies specific “FRAND” licensing rules. While many standards-developing organizations stipulate that such patents are explicitly declared, little is known about which are actually essential. The absence of publicly available information on essentiality incurs significant social costs due to the resulting friction in the licensing market. With the growing use and importance of standards to mobility and energy markets, and to the Internet of Things, these costs are likely to rise. Responding to calls from industry, courts and policymakers, commercial and academic studies have attempted to assess essentiality, but they all have limitations. This paper reports on the technical feasibility of a system of expert assessments for patent essentiality. Based on a factorial design, we conducted a field experiment with 20 patent examiners performing over 100 assessments. Comparing the outcomes to a high-quality reference point shows that sufficiently accurate expert assessments, at a price level that allows large scale testing, are technically feasible, and we identify routes to further improvement.
The impact of rotor setting and relative arrangement on the individual and overall power performance and aerodynamics of double-rotor vertical axis wind turbine (VAWT) arrays is investigated. Eight rotor settings are considered: two relative rotational directions (co-rotating, CO, and counter-rotating, CN), two relative positionings (downstream turbine positioned in the leeward, LW, or windward, WW, side of the upstream rotor), and two phase lags (Δθ = 0° and 180°). For each of the eight rotor settings, 63 different relative arrangements are considered, resulting in 504 unique cases. The arrangements lie within 1.25d ≤ R ≤ 10d (d = rotor diameter) and 0° ≤ Φ ≤ 90°, where R and Φ are the relative distance and angle of the rotors, respectively. Unsteady Reynolds-Averaged Navier–Stokes (URANS) CFD simulations, validated with experimental data, are employed. The results show that the power performance of the array is significantly influenced by the relative rotational direction and positioning, with differences of ∼8% in power coefficient (CP), while it depends only marginally on the relative phase lag. The performance differences among the studied arrays arise because different parts of the downstream turbine revolution are affected by the wake of the upstream turbine, and because of the dissimilar strength/width of the shear layer created where the two rotors' wakes overlap. The preferred rotational direction for WW arrays is co-rotating, while for LW arrays counter-rotating is favored. For the same arrangement, counter-rotating turbines with LW relative positioning have the highest CP because their downstream turbine blade moves along the flow direction in the wake overlap region, resulting in little energy dissipation and a weak shear layer. In contrast, counter-rotating arrays with WW relative positioning have the lowest CP, because the downstream turbine blade moves against the flow in the wake overlap region, resulting in an extensive velocity deficit and a thick, strong shear layer.
Multivariate functions emerge naturally in a wide variety of data-driven models. Popular choices are expressions in the form of basis expansions or neural networks. While highly effective, the resulting functions tend to be hard to interpret, in part because of the large number of required parameters. Decoupling techniques aim at providing an alternative representation of the nonlinearity. The so-called decoupled form is often a more efficient parameterisation of the relationship while being highly structured, favouring interpretability. In this work, two new algorithms based on filtered tensor decompositions of first-order derivative information are introduced. The method returns nonparametric estimates of smooth decoupled functions. Direct applications are found in, among others, the fields of nonlinear system identification and machine learning.
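To illustrate the first-order-information idea behind decoupling (without the filtering step introduced in the paper), the sketch below stacks Jacobians of a toy decoupled function f(u) = W g(Vᵀu) at several sampling points into a third-order tensor and applies a CP decomposition, whose factors correspond (up to scaling and permutation) to W, V, and the sampled derivatives of the internal univariate functions. The toy function, sampling scheme, and the tensorly dependency are assumptions for illustration.

```python
# Hedged sketch of decoupling from first-order derivative information.
import numpy as np
from tensorly.decomposition import parafac

rng = np.random.default_rng(1)

def f(u):
    """Toy decoupled function: 2 outputs, 3 inputs, 2 internal branches."""
    W = np.array([[1.0, 0.5], [0.2, 1.0]])
    V = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, -0.5]])
    z = V.T @ u
    return W @ np.array([np.tanh(z[0]), z[1] ** 3])

def jacobian(u, eps=1e-6):
    """Central finite-difference Jacobian of f at u."""
    J = np.zeros((2, 3))
    for j in range(3):
        du = np.zeros(3); du[j] = eps
        J[:, j] = (f(u + du) - f(u - du)) / (2 * eps)
    return J

# Stack Jacobians over N operating points into an (outputs x inputs x N) tensor.
U = rng.normal(size=(50, 3))
T = np.stack([jacobian(u) for u in U], axis=2)

# Rank-2 CP decomposition reveals the decoupling structure.
cp = parafac(T, rank=2)
W_est, V_est, G_est = cp.factors
```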
Context Tangled commits are changes to software that address multiple concerns at once. For researchers interested in bugs, tangled commits mean that they study not only bugs but also other concerns irrelevant to the study of bugs. Objective We want to improve our understanding of the prevalence of tangling and the types of changes that are tangled within bug-fixing commits. Methods We use a crowd-sourcing approach for manual labeling to validate which changes contribute to bug fixes for each line in bug-fixing commits. Each line is labeled by four participants; if at least three participants agree on the same label, we have consensus. Results We estimate that between 17% and 32% of all changes in bug-fixing commits modify the source code to fix the underlying problem. However, when we only consider changes to the production code files, this ratio increases to 66% to 87%. We find that about 11% of lines are hard to label, leading to active disagreements between participants. Due to confirmed tangling and the uncertainty in our data, we estimate that 3% to 47% of data is noisy without manual untangling, depending on the use case. Conclusion Tangled commits have a high prevalence in bug fixes and can lead to a large amount of noise in the data. Prior research indicates that this noise may alter results. As researchers, we should be skeptical and assume that unvalidated data is likely very noisy until proven otherwise.
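A small sketch of the consensus rule described above (four labels per line, consensus when at least three agree); the label names are illustrative.

```python
# Hedged sketch of the majority-based consensus rule for crowd-sourced labels.
from collections import Counter

def consensus(labels, quorum=3):
    """Return the agreed label if at least `quorum` labellers gave it,
    otherwise None (the line counts as a disagreement)."""
    label, count = Counter(labels).most_common(1)[0]
    return label if count >= quorum else None

print(consensus(["bugfix", "bugfix", "bugfix", "refactoring"]))  # -> "bugfix"
print(consensus(["bugfix", "test", "refactoring", "bugfix"]))    # -> None
```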
The operation of a community energy storage system (CESS) is challenging due to the volatility of photovoltaic distributed generation, electricity consumption, and energy prices. Selecting the optimal CESS setpoints during the day is a sequential decision problem under uncertainty, which can be solved using dynamic learning methods. This paper proposes a reinforcement learning (RL) technique based on temporal-difference learning with eligibility traces (ET). It aims to minimize the day-ahead energy costs while respecting the technical limits at the grid coupling point. The performance of the RL agent is compared against an oracle based on a deterministic mixed-integer second-order cone program (MISOCP). The use of ET boosts the RL agent's learning rate for the CESS operation problem: the ET effectively assign credit to the action sequences that bring the CESS to a high state of charge before the peak prices, reducing the training time. The case study shows that the proposed method learns to operate the CESS effectively and ten times faster than common RL algorithms applied to energy systems, such as tabular Q-learning and Fitted-Q. Moreover, the RL agent operates the CESS at 94% of the optimum, reducing the energy costs for the end-user by up to 12%.
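The sketch below illustrates temporal-difference learning with eligibility traces in a tabular SARSA(λ)-style form, showing how credit for a later high-price reward propagates back to the earlier charging actions; the state/action discretisation, parameters, and rewards are placeholder assumptions, not the paper's design.

```python
# Hedged sketch of TD learning with eligibility traces (tabular SARSA(lambda)).
import numpy as np

n_states, n_actions = 24, 5          # e.g. hour of day x charge/discharge level (assumed)
Q = np.zeros((n_states, n_actions))
E = np.zeros_like(Q)                 # eligibility traces
alpha, gamma, lam = 0.1, 0.99, 0.9

def td_lambda_update(s, a, r, s_next, a_next):
    """Propagate the TD error to all recently visited state-action pairs."""
    delta = r + gamma * Q[s_next, a_next] - Q[s, a]
    E[s, a] += 1.0                   # accumulating trace
    Q[:] += alpha * delta * E
    E *= gamma * lam                 # decay all traces

# Illustrative transitions: storing energy before a price peak gets credit later,
# when the high-price reward arrives, through the decayed traces.
td_lambda_update(s=10, a=3, r=0.0, s_next=11, a_next=3)
td_lambda_update(s=11, a=3, r=5.0, s_next=12, a_next=0)
```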
This work investigates the structure of rank-metric codes in connection with concepts from finite geometry, most notably the q-analogues of projective systems and blocking sets. We also illustrate how to associate a classical Hamming-metric code to a rank-metric one, in such a way that various rank-metric properties naturally translate into their Hamming-metric counterparts under this correspondence. The most interesting applications of our results lie in the theory of minimal rank-metric codes, which we introduce and study from several angles. Our main contributions are bounds for the parameters of minimal rank-metric codes, a general existence result based on a combinatorial argument, and an explicit code construction for some parameter sets that uses the notion of a scattered linear set. Throughout the paper we also show and comment on curious analogies and divergences between the theories of error-correcting codes in the rank and in the Hamming metric.
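For reference, the standard definitions underlying this abstract are recalled below (these are textbook notions, not results of the paper): the rank distance on matrices over a finite field, and the classical Hamming-metric notion of a minimal codeword whose rank-metric analogue the paper introduces.

```latex
% Standard background definitions (not results of the paper).
d_{\mathrm{rk}}(X,Y) \;=\; \operatorname{rk}(X-Y), \qquad X, Y \in \mathbb{F}_q^{\,n\times m}.
% A nonzero codeword c of a linear code C is minimal if every codeword whose
% support is contained in the support of c is a scalar multiple of c.
```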
Self-Admitted Technical Debt (SATD) consists of annotations, typically but not only source code comments, pointing out incomplete features, maintainability problems, or, in general, portions of a program that are not ready yet. The way a SATD comment is written, and specifically its polarity, may be a proxy indicator of the severity of the problem and, to some extent, of the priority with which it should be addressed. In this paper, we study the relationship between different types of SATD comments in source code and their polarity, to understand in which circumstances (and why) developers use negative rather than neutral comments to highlight an SATD. To address this goal, we combine a manual analysis of 1038 SATD comments from a curated dataset with a survey involving 46 professional developers. First of all, we categorize SATD content into types. Then, we study the extent to which developers express negative sentiment in different types of SATD as a proxy for priority, and whether they believe this can be considered an acceptable practice. Finally, we look at whether such annotations contain additional details such as bug references and developers' names/initials. The results indicate that SATD comments are mainly used for annotating poor implementation choices (≃41%) and partially implemented functionality (≃22%). The latter may depend on "waiting" for other features to be implemented, and this makes SATD comments more negative than in other cases. Around 30% of the survey respondents agree on using/interpreting negative sentiment as a proxy for priority, while 50% of them indicate that it would be better to discuss SATD on issue trackers and not in the source code. However, while our study indicates that open-source developers use links to external systems, such as bug identifiers, to annotate high-priority SATD, better tool support is required for SATD management.
Study objective To investigate whether an interpectoral-pectoserratus plane (PECS II) block decreases postoperative pain and postoperative nausea and vomiting and improves quality of recovery in patients with neurogenic thoracic outlet syndrome (NTOS) undergoing trans-axillary thoracic outlet decompression surgery. Design A prospective single-center double-blinded randomized placebo-controlled trial. Setting Perioperative period: operating room, post-anesthesia care unit (PACU) and hospital ward. Patients Seventy patients with NTOS undergoing trans-axillary thoracic outlet decompression surgery. Interventions Patients were randomized to an interventional arm, receiving the block with 40 ml ropivacaine 0.5% (the concentration was adjusted if the patient's weight was <66 kg), and a placebo group, receiving a sham block with 40 ml NaCl 0.9%. The interpectoral-pectoserratus plane block was performed ultrasound-guided: the first injection below the pectoralis minor muscle and the second below the pectoralis major muscle. The hospital's pharmacist prepared the study medication and was the only person able to see the randomization result. The study was blinded to patients, researchers and medical personnel. Measurements Primary outcome parameters were postoperative pain, measured by numeric rating scale (NRS) on the PACU (start and end) and on the ward on postoperative day (POD) 0 and 1, and postoperative morphine consumption, measured on the PACU and on the ward during the first 24 h. Secondary outcome parameters were postoperative nausea and vomiting, and quality of recovery. Main results There was no statistically significant difference in NRS on the PACU at the start (ropivacaine 4.9 ± 3.2 vs placebo 6.2 ± 3.0, p = .07), at the end (ropivacaine 4.0 ± 1.7 vs placebo 3.9 ± 1.7, p = .77), on the ward on POD 0 (ropivacaine 4.6 ± 2.0 vs placebo 4.6 ± 2.0, p = 1.00) or POD 1 (ropivacaine 3.9 ± 1.8 vs placebo 3.6 ± 2.0, p = .53). There was no difference in postoperative morphine consumption on the PACU (ropivacaine 11.0 mg ± 6.5 vs placebo 10.8 mg ± 4.8, p = .91) or on the ward (ropivacaine 11.6 mg ± 8.5 vs placebo 9.6 mg ± 9.4, p = .39). Conclusions The interpectoral-pectoserratus plane block is not effective for postoperative analgesia in patients with NTOS undergoing trans-axillary thoracic outlet decompression surgery.
Dealing with heterogeneous plastic waste – i.e., high polymer heterogeneity, additives, and contaminants – and lowering greenhouse gas (GHG) emissions from plastic production requires integrated solutions. Here, we quantified current and future GHG footprints of direct chemical conversion of heterogeneous post-consumer plastic waste feedstock to olefins, a base material for plastics. The net GHG footprint of this recycling system is −0.04 kg CO2-eq./kg waste feedstock treated, including credits from avoided production of virgin olefins, electricity, heat, and credits for the partial biogenic content of the waste feedstock. Comparing chemical recycling of this feedstock to incineration with energy recovery presents GHG benefits of 0.82 kg CO2-eq./kg waste feedstock treated. These benefits were found to increase to 1.37 kg CO2-eq./kg waste feedstock treated for year 2030 when including (i) decarbonization of steam and electricity production and (ii) process optimizations to increase olefin yield through carbon capture and utilization and conversion of side-products.
Trapezoidal steel sheeting is a popular product in the built environment, especially for roofing and cladding. However, at a support the webs of the sheeting may fail due to the concentrated force, a failure mode referred to as web crippling. In this paper it is investigated whether the Direct Strength Method (DSM) can be used to predict this failure. First, an experimental programme carried out in the past is selected, in which the sheet section dimensions, materials, span lengths, and load bearing plate widths were varied systematically. Two distinct failure mechanisms occurred: the yield arc mechanism and the rolling mechanism. Based on the experiments, a finite element model is developed to simulate Interior One Flange (IOF) and Interior Two Flange (ITF) web crippling, and a very good correlation with the experiments is obtained. Subsequently, the simulations are used to predict the buckling load (via the first positive eigenvalue), the yield load (via first plastic strains or stress extrapolation), and the ultimate load for each experiment and for additional cases. In addition, for each failure mechanism, an existing theoretical model is used to predict the first-order elasto-plastic mechanism initiation load. Six different DSM equations are calibrated for both IOF and ITF web crippling, using the different buckling and yield loads above. For IOF cases, using the IOF first positive eigenvalue for the buckling load and the first-order elasto-plastic mechanism initiation load for the yield load results in the best correlation and coefficient of variation. For ITF cases, the ITF eigenvalue combined with the same yield load gives the best results. Excluding the rolling mechanism, the DSM correlates with the numerical results equally well (IOF) as, or better (ITF) than, the AISI S100-16 design rule. Thus, the DSM, as a generalized slenderness approach, is suited for IOF and ITF web crippling of first-generation trapezoidal steel sheeting. Furthermore, the codes could explicitly allow the use of the well-performing FE models, e.g. enabling manufacturers to create strength tables.
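A generic DSM-style strength curve of the kind calibrated in this work takes the yield load and the elastic buckling load and returns a predicted ultimate load via a slenderness-based reduction. The sketch below uses the coefficients of the classical DSM local-buckling curve as defaults; the web-crippling coefficients calibrated in the paper are not reproduced here.

```python
# Hedged sketch of a Direct Strength Method expression with calibratable coefficients.
def dsm_strength(P_y, P_cr, a=0.15, b=0.4, slenderness_limit=0.776):
    """Predict the ultimate load from the yield load P_y and the elastic
    buckling load P_cr; defaults follow the classical DSM local-buckling curve."""
    lam = (P_y / P_cr) ** 0.5            # slenderness
    if lam <= slenderness_limit:
        return P_y
    rho = (P_cr / P_y) ** b
    return (1.0 - a * rho) * rho * P_y

# Example with assumed loads: yield load 12 kN, elastic buckling load 8 kN.
print(dsm_strength(P_y=12.0, P_cr=8.0))
```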
Complete vehicle energy management (CVEM) aims at minimizing the energy consumption of all subsystems inside a vehicle. When the subsystems consist of energy buffers with linear dynamics and/or power converters with quadratic power losses, the CVEM problem becomes a large-scale and ill-conditioned optimal control problem for which well-known solution approaches (e.g. Pontryagin's maximum principle, dynamic programming, convex and derivative-free optimization methods) are not effective. In this paper, we design an efficient solution algorithm for this problem by leveraging the framework of operator splitting methods. To facilitate its implementation, we propose a spectral method that automatically tunes the step sizes at every iteration and, in practice, accelerates convergence. Finally, we consider a case study for a parallel hybrid vehicle and illustrate the numerical performance of our method by showing that we can efficiently solve a CVEM problem with a very long horizon.
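To give a feel for the "spectral" step-size idea mentioned here, the sketch below applies a Barzilai-Borwein-type automatic step size to gradient descent on a simple ill-conditioned quadratic; the paper embeds the same kind of automatic tuning inside an operator-splitting scheme, which is not reproduced in this illustration.

```python
# Hedged sketch of a spectral (Barzilai-Borwein) step-size rule on a toy quadratic.
import numpy as np

A = np.diag([1.0, 10.0, 100.0])          # ill-conditioned Hessian (assumed example)
b = np.array([1.0, 1.0, 1.0])
grad = lambda x: A @ x - b

x = np.zeros(3)
g = grad(x)
step = 1e-2                               # initial step size
for _ in range(100):
    x_new = x - step * g
    g_new = grad(x_new)
    s, y = x_new - x, g_new - g
    # Spectral step size: mimics the inverse curvature along the last step.
    if abs(s @ y) > 1e-12:
        step = (s @ s) / (s @ y)
    x, g = x_new, g_new

print(x, np.linalg.solve(A, b))           # the iterate approaches the minimiser
```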
Paraffin waxes are promising phase change materials, abundantly available at very low cost. Having a large latent heat, these materials can be used for thermal energy storage. However, when used in heat batteries, paraffin's low thermal conductivity prevents fast charging and discharging. This calls for the design of tailored hybrid materials with improved properties; the present study concentrates on the properties of pure paraffin wax. Using fully atomistic molecular-dynamics (MD) simulations, we study the effects of polydispersity and branching on the thermal conductivity of paraffin waxes in the molten (450 K) and solid (250 K) state. Both branching and polydispersity affect the density and especially the crystallinity of the solid. Branching has a pronounced effect on crystallisation, caused by inhibited alignment of the polymer backbones, while the effect of polydispersity is less pronounced. The thermal conductivity (TC) has been simulated using the reverse non-equilibrium molecular-dynamics method as well as the equilibrium Green-Kubo approach. Increased branching, added to backbones of twenty monomers, results in a decrease in TC of up to 30%, whereas polydispersity only has an effect in the semi-crystalline state. Comparison with available experiments shows good agreement, which validates the model details, the applied force field, and the calculation methods. We show that, at comparable computational cost, the reverse non-equilibrium MD approach produces more reliable results for the TC than the equilibrium Green-Kubo method. By analysing extreme cases, we show that the major contribution to the TC comes from acoustic phonon transport along the backbone: the phonon density of states (PDOS) of samples with high branching or short chains displays diminished peaks in the acoustic range compared to the PDOS of samples with low branching or longer chains, respectively. The suggested MD approach can thus be used to investigate specific material modifications aimed at increasing the overall TC.
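The Green-Kubo route mentioned here obtains the thermal conductivity by integrating the autocorrelation of the heat-flux vector from an equilibrium MD run, κ = V/(3 k_B T²) ∫ ⟨J(0)·J(t)⟩ dt. The sketch below evaluates this integral from a sampled flux trace; the synthetic flux data, box volume, and sampling interval are placeholders, not values from the paper.

```python
# Hedged sketch of a Green-Kubo thermal conductivity estimate from a heat-flux trace.
import numpy as np

kB = 1.380649e-23          # Boltzmann constant, J/K

def green_kubo_tc(J, V, T, dt):
    """Thermal conductivity from a heat-flux time series J (steps x 3, W/m^2),
    box volume V (m^3), temperature T (K), and sampling interval dt (s)."""
    n = J.shape[0]
    max_lag = n // 2
    # Autocorrelation of the flux vector, averaged over time origins.
    acf = np.array([np.mean(np.sum(J[:n - lag] * J[lag:], axis=1))
                    for lag in range(max_lag)])
    # Rectangle-rule integration of the autocorrelation.
    return V / (3.0 * kB * T ** 2) * np.sum(acf) * dt

# Placeholder usage with a random flux trace; a real run would supply the
# sampled heat flux from the MD engine.
rng = np.random.default_rng(2)
J = rng.normal(scale=1e8, size=(5000, 3))
print(green_kubo_tc(J, V=8e-26, T=250.0, dt=1e-15))
```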
12,105 members
Nicholas Agung Kurniawan
  • Department of Biomedical Engineering
Pieter Pauwels
  • Department of Built Environment
Jagadeesh Chandra Bose R.P.
  • Department of Mathematics and Computer Science
Erjen Lefeber
  • Department of Mechanical Engineering
Information
Address
De Zaale, Eindhoven, Netherlands
Website
https://www.tue.nl/en/
Phone
+31 40 247 9111