Introduction
This paper introduces OS-WALK-EU, a new open-source walkability assessment tool developed specifically for urban neighbourhoods and built on open spatial data. As a free and open-source tool, OS-WALK-EU is accessible to the general public. It uses open data available worldwide and free online services to compute accessibility, while allowing users to integrate local datasets where available. Based on a review of existing measurement concepts, the paper adopts dimensions of walkability that were tested in European city environments and explains their conceptualization for software development. We invite the research community to collaboratively test, adopt and use the tool in response to the increasing need to monitor walkability as part of health-promoting urban development.

Methods
Tool development is based on spatial analysis methods that compute indicators for five dimensions of walkability: residential density, weighted proximities to amenities, pedestrian radius of activity, share of green and blue infrastructure, and slope. Sample applications in the cities of Dublin, Düsseldorf and Lisbon test the validity of input data and results, including scenarios for target groups such as older people.

Results
Overall, application of the tool in Dublin, Düsseldorf and Lisbon yields conclusive results that conform to local knowledge. Shortcomings can be attributed to deficiencies in open-source input data. Local administrative data, where available, are suitable for improving results.

Conclusions
OS-WALK-EU is the first software tool that allows free and open walkability assessments with pedestrian routing capabilities for 'proximity to facilities' calculations. A large-scale implementation for 33 German city regions in an online application shows the value of comparative assessments of walkability between urban and suburban neighbourhoods. Such assessments are important for monitoring progress in a mobility transition towards improved walkability and public health.
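The 'weighted proximities to amenities' dimension can be illustrated with a minimal sketch. The amenity categories, weights, and decay constant below are illustrative assumptions, not values taken from OS-WALK-EU itself; the tool additionally uses pedestrian routing rather than raw distances.

```python
# Hypothetical sketch of a weighted-proximity walkability indicator,
# in the spirit of OS-WALK-EU's 'proximity to amenities' dimension.
import math

# Illustrative weights per amenity category (they sum to 1.0).
WEIGHTS = {"grocery": 0.4, "school": 0.2, "park": 0.2, "transit": 0.2}

def decay(distance_m: float, beta: float = 0.002) -> float:
    """Negative-exponential distance decay: 1.0 at the door, near 0 far away."""
    return math.exp(-beta * distance_m)

def proximity_score(nearest_m: dict) -> float:
    """Weighted sum of decayed distances to the nearest amenity of each type."""
    return sum(w * decay(nearest_m.get(cat, float("inf")))
               for cat, w in WEIGHTS.items())

# A neighbourhood with a grocery at 300 m, school at 800 m, etc.
score = proximity_score({"grocery": 300, "school": 800, "park": 150, "transit": 500})
```

Higher scores indicate better amenity access; a routing-based implementation would substitute network walking distances for the straight-line distances assumed here.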
Patients living with neurogenic bladder dysfunction can lose the sensation of their bladder filling. To avoid over-distension of the urinary bladder and prevent long-term damage to the urinary tract, the gold standard treatment is clean intermittent catheterization at predefined time intervals. However, the emptying schedule does not consider actual bladder volume, meaning that catheterization is performed more often than necessary, which can lead to complications such as urinary tract infections. Time-consuming catheterization also interferes with patients' daily routines and, in the case of an empty bladder, uses human and material resources unnecessarily. To enable individually tailored and volume-responsive bladder management, we design a model for the continuous monitoring of bladder volume. During our design science research process, we evaluate the model's applicability and usefulness through interviews with affected patients, prototyping, and application to a real-world in vivo dataset. The developed prototype predicts bladder volume based on relevant sensor data (i.e., near-infrared spectroscopy and acceleration) and the time elapsed since the previous micturition. Our comparison of several supervised state-of-the-art machine and deep learning models reveals that a long short-term memory network architecture achieves a mean absolute error of 116.7 ml, an accuracy that can improve bladder management for patients.
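The core of the described prototype is a sequence model mapping sensor windows to a volume estimate. The following numpy sketch shows the shape of that computation with a single LSTM cell; the feature layout, layer sizes, and output scaling are illustrative assumptions (the paper's actual architecture and training are not reproduced here).

```python
# Minimal numpy sketch of an LSTM pass over a window of sensor features
# (e.g. NIRS channels, acceleration, time since last micturition) producing
# a bladder-volume estimate. Parameters are random: training is out of scope.
import numpy as np

rng = np.random.default_rng(0)
N_FEATURES = 5   # assumed: 3 NIRS channels + acceleration magnitude + elapsed time
HIDDEN = 16

# Randomly initialised gate parameters (input, forget, cell, output stacked).
W = rng.normal(0, 0.1, (4 * HIDDEN, N_FEATURES + HIDDEN))
b = np.zeros(4 * HIDDEN)
w_out = rng.normal(0, 0.1, HIDDEN)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_volume_estimate(sequence):
    """Run one LSTM pass over a (T, N_FEATURES) window; read out a volume in ml."""
    h = np.zeros(HIDDEN)
    c = np.zeros(HIDDEN)
    for x_t in sequence:
        z = W @ np.concatenate([x_t, h]) + b
        i, f, g, o = np.split(z, 4)
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)   # gated cell-state update
        h = sigmoid(o) * np.tanh(c)                    # gated hidden state
    return float(w_out @ h) * 100.0                    # scale to a ml-range value

volume_ml = lstm_volume_estimate(rng.normal(size=(20, N_FEATURES)))
```

In practice such a model would be trained end-to-end against catheterized ground-truth volumes, which is how an error metric like the reported 116.7 ml MAE is obtained.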
Process mining is a fast-growing technology concerned with managing and improving business processes. While the technology itself has been thoroughly scrutinized by prior research, we are only beginning to understand the managerial and organizational implications of process mining. Creating such knowledge is essential for a successful adoption and use of process mining in organizations. We conduct a qualitative-inductive interview study to explore how process mining can be leveraged in organizations. To this end, we systematically examine the needs and experiences of practitioners with process mining at different levels, including heads of process mining, process analysts, and data engineers. Complementing our tutorial, this article provides a theoretical background, outlines our research approach, and presents preliminary findings.
This is the second part of a two-part paper focusing on the assessment of accuracy of turbulence-related data from CFD simulations using effective numerical dissipation rate and effective numerical viscosity. The experimental setup was discussed in the first part of this series. Here, the relevant solution data obtained via CFD are compared to values from laser Doppler anemometry measurements, and it is studied whether the accuracy of such data can be assessed using the two aforementioned quantities. The overall outcome is that, although judging mesh quality is generally possible, the two quantities alone are insufficient to draw conclusions regarding the actual solution data.
This is the first part of a two-part paper focusing on the assessment of accuracy of turbulence-related data from computational fluid dynamics (CFD) simulations using effective numerical dissipation rate and effective numerical viscosity. The setup of the CFD cases, replicating a swirling pipe flow experiment from the literature for which turbulence-related data measured via laser Doppler anemometry (LDA) had been reported, is presented. The way the effective numerical dissipation rate and effective numerical viscosity were obtained for each mesh cell is also discussed. The results of the study are presented in the second part of this series.
Abstract
Food orders placed automatically by a refrigerator, or recipes a refrigerator suggests based on its contents: these are examples of smart services. Such services are customer-tailored, digitalized solutions that enable innovative value propositions and digital business models. They deepen the relationship between customers and companies, thereby open up new market opportunities, and form the basis for service ecosystems. Despite these manifold opportunities, generating ideas for smart service innovations poses a complex challenge for companies. This complexity stems from the various actors involved as well as the physical and digital components of a smart service innovation. To support companies in initiating smart service innovations, this article introduces the relevant actors and components of smart services. It also examines how companies can systematically generate smart service innovation ideas that fit their customers and existing resources. Based on an innovation project with a company from the manufacturing sector, practice-relevant insights into a procedure for initiating smart service innovations are derived. In addition, recommendations are given for companies that face similar (innovation) challenges and seek to develop smart service innovations in a targeted way, complementing their existing product and service portfolio. This offers companies starting points for connecting to digital markets and service ecosystems, and for economically sustainable strengthening of their competitiveness through smart service innovations.
Background
While effectiveness outcomes of eHealth-facilitated integrated care models (eICMs) in transplant and oncological populations are promising, implementing and sustaining them in real-world settings remain challenging. Allogeneic stem cell transplant (alloSCT) patients could benefit from an eICM to enhance health outcomes. To combat health deterioration, integrating chronic illness management, including continuous symptom and health behaviour monitoring, can shorten reaction times. We will test the first-year post-alloSCT effectiveness and evaluate bundled implementation strategies to support the implementation of a newly developed and adapted eICM in allogeneic stem cell transplantation facilitated by eHealth (SMILe-ICM). The SMILe-ICM has been designed by combining implementation, behavioural, and computer science methods. Adaptations were guided by FRAME and FRAME-IS. It consists of four modules: 1) monitoring & follow-up; 2) infection prevention; 3) physical activity; and 4) medication adherence, delivered via eHealth and a care coordinator (an Advanced Practice Nurse). The implementation was supported by contextually adapted implementation strategies (e.g., creating new clinical teams, informing local opinion leaders).

Methods
Using a hybrid effectiveness-implementation randomised controlled trial, we will include a consecutive sample of 80 adult alloSCT patients who were transplanted and followed by University Hospital Basel (Switzerland). Inclusion criteria are basic German proficiency; elementary computer literacy; internet access; and written informed consent. Patients will be excluded if their condition prevents the use of technology or if they are followed up only at external centres. Patient-level (1:1) stratified randomisation into a usual care group and a SMILe-ICM group will take place 10 days pre-transplantation.
To gauge the SMILe-ICM's primary effectiveness outcome (re-hospitalisation rate), secondary outcomes (healthcare utilisation costs; length of inpatient re-hospitalisations; medication adherence; treatment and self-management burden; HRQoL; graft-versus-host disease rate; survival; overall survival rate) and implementation outcomes (acceptability, appropriateness, feasibility, fidelity), we will use a multi-method, multi-informant assessment (via questionnaires, interviews, electronic health record data, and cost capture methods).

Discussion
The SMILe-ICM has major innovative potential for reengineering alloSCT follow-up care, particularly regarding short- and medium-term outcomes. Our dual focus on implementation and effectiveness will both inform the optimisation of the SMILe-ICM and provide insights regarding implementation strategies and pathways, which are understudied in eHealth-facilitated ICMs for chronically ill populations.

Trial registration
ClinicalTrials.gov identifier: NCT04789863. Registered April 01, 2021.
Energy efficiency investments are typically assessed from one of two opposing perspectives on financial risk. This study conducted a choice experiment based on a simulated online shop for energetic retrofitting, in which the financial risk resulting from retrofitting was presented to different treatment groups from these two perspectives. Participants in the first treatment group were confronted with the risk of deviating energy bill savings (investment risk perspective), which increases with the investment. Participants in the second treatment group were confronted with the risk of deviating energy bills after the investment (energy bill risk perspective), which decreases with the investment. In the third treatment group, we displayed risk from both perspectives. We found that participants deciding on retrofitting measures in the online shop displaying energy bill risk invested about 20% more than participants in the online shop displaying investment risk, a statistically significant difference. These findings establish a new way of nudging individuals towards energy efficiency investments, which is especially important for energy policymakers. We therefore recommend actively leveraging the risk-reducing potential of the energy bill perspective when promoting energy efficiency investments.
In this paper, we present an algorithm for indoor quadcopter navigation. We implemented a strapdown navigation algorithm combined with an error-state unscented Kalman filter capable of fusing IMU, barometer and UWB measurements. Optical flow and distance-to-ground measurements are additionally fused to further improve the state estimation quality. Compared to alternative approaches, the suggested algorithm has better trajectory-following abilities and does not rely on the actual quadcopter's dynamics, so it can be applied to a variety of flying platforms. We implemented and evaluated the algorithm on the Crazyflie 2.1 nano-quadcopter.
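The predict/update pattern behind such a fusion can be illustrated in a deliberately simplified 1-D form: IMU acceleration drives the prediction, and a UWB-derived position fix corrects it. The full system uses a strapdown model with an error-state unscented Kalman filter in 3-D; the linear model and all noise values below are illustrative assumptions.

```python
# Simplified 1-D Kalman predict/update sketch of IMU + UWB fusion.
import numpy as np

dt = 0.01                                # 100 Hz IMU rate (assumed)
F = np.array([[1, dt], [0, 1]])          # constant-velocity state transition
B = np.array([0.5 * dt**2, dt])          # how acceleration enters [pos, vel]
H = np.array([[1.0, 0.0]])               # UWB fix observes position only
Q = np.eye(2) * 1e-4                     # process noise (assumed)
R = np.array([[0.05]])                   # UWB measurement noise (assumed)

def predict(x, P, accel):
    """Propagate state and covariance with one IMU acceleration sample."""
    x = F @ x + B * accel
    return x, F @ P @ F.T + Q

def update(x, P, z):
    """Correct the state with one position measurement z."""
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + (K @ (z - H @ x)).ravel()
    return x, (np.eye(2) - K @ H) @ P

x, P = np.zeros(2), np.eye(2)
for _ in range(100):                     # 1 s of IMU-only prediction
    x, P = predict(x, P, accel=0.2)
x, P = update(x, P, z=np.array([0.1]))   # one UWB position fix
```

An error-state unscented filter replaces this linear propagation with sigma-point transforms of the attitude/velocity error state, but the alternation of inertial prediction and ranging updates is the same.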
The use of eHealth components in healthcare is often viewed in the context of disruptive change and ethical pitfalls. We focus on interpersonal relationships between physicians, patients, and care providers and show that positive changes (also) occur within this context.
Increasing trust in energy performance certificates (EPCs) and drawing meaningful conclusions from them requires a robust and accurate determination of building energy performance (BEP). However, the existing, legally prescribed engineering methods, which rely on physical principles, are under debate for being error-prone in practice and ultimately inaccurate. Research has heralded data-driven methods, mostly machine learning algorithms, as promising alternatives: various studies compare engineering and data-driven methods, with a clear advantage for data-driven methods in terms of prediction accuracy for BEP. Yet while previous studies only investigated prediction accuracy, it remains unclear which reasons and cause-effect relationships lead to the surplus prediction accuracy of data-driven methods. In this study, we develop and discuss a theory on how data collection, the type of auditor, the energy quantification method, and its accuracy relate to one another. First, we introduce cause-effect relationships for quantifying BEP method-agnostically and investigate the influence of several design parameters, such as the expertise of the auditor issuing the EPC, to develop our theory. Second, we evaluate and discuss our theory against the literature. We find that data-driven methods positively influence cause-effect relationships, compensating for deficits due to auditors' lack of expertise and leading to high prediction accuracy. We provide recommendations for future research and practice to enable the informed use of data-driven methods.
This study proposes a new approach for the thermal management of batteries in fuel cell hybrid electric vehicles by introducing a new concept of on-board energy storage system that integrates the battery pack with a metal hydride tank. The rationale behind this solution is to use the exothermic absorption and endothermic desorption of hydrogen in metal hydrides for heating and cooling the battery pack, respectively, thus ensuring optimal thermal management control. An experimental investigation is carried out on a prototype in order to prove the concept and preliminarily assess its feasibility for actual implementations. The results show that the system is able to effectively control temperature variations within the battery pack: for 1C and 1.5C constant-current discharge tests, desorption of 30%–40% of the hydrogen stored in the system's metal hydride tank allows the final average temperature of the battery pack to be around 15 °C lower than that reached without thermal management. Moreover, the system is found to be potentially capable of providing suitable thermal management for more than four hours under realistic driving conditions. The proposed energy storage system also achieves inherently high gravimetric and volumetric energy densities, with theoretical values of 182 Wh/kg and 530 Wh/L, respectively. These estimates represent reference values for further design improvements.
Accurate predictions for buildings’ energy performance (BEP) are crucial for retrofitting investment decisions and building benchmarking. With the increasing data availability and popularity of machine learning across disciplines, research started to investigate machine learning for BEP predictions. While stand-alone machine learning models showed first promising results, a comprehensive analysis of advanced ensemble models to increase prediction accuracy is missing for annual BEP predictions. We implement and thoroughly tune twelve machine learning models to bridge this research gap, ranging from stand-alone to homogeneous and heterogeneous ensemble learning models. Based on an extensive real-world dataset of over 25,000 German residential buildings, we benchmark their prediction accuracy. The results provide strong evidence that ensemble models substantially outperform stand-alone machine learning models both on average and in case of the best-performing model. All models are tested for robustness and systematic bias by evaluating their prediction performance along different building age classes, living space bins, and several error measures. Extreme gradient boosting as ensemble model exhibits the highest prediction accuracy, followed by a multilayer perceptron ahead of further ensemble models. We conclude that ensemble models for annual BEP prediction are advantageous compared to stand-alone models and outperform their results in most cases.
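The kind of benchmark described, a stand-alone model against a boosting ensemble on tabular building features, can be sketched as follows. The synthetic data, feature names, and hyperparameters are illustrative assumptions; the study itself used roughly 25,000 real German residential buildings and twelve thoroughly tuned models.

```python
# Hedged sketch: stand-alone linear model vs. gradient-boosting ensemble
# on synthetic tabular "building" data with a feature interaction.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 2000
X = rng.normal(size=(n, 4))   # stand-ins for e.g. living space, age, insulation, windows
# Target with an interaction term that a purely linear model cannot capture.
y = 100 + 30 * X[:, 0] - 20 * X[:, 1] + 15 * X[:, 0] * X[:, 1] + rng.normal(0, 5, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
mae = {}
for name, model in [("linear", LinearRegression()),
                    ("boosting", GradientBoostingRegressor(random_state=0))]:
    model.fit(X_tr, y_tr)
    mae[name] = mean_absolute_error(y_te, model.predict(X_te))
# On data with interactions, the ensemble achieves the lower MAE.
```

The study's finding that extreme gradient boosting led the field is consistent with this pattern: tree ensembles capture non-linearities and interactions in tabular building data that simpler stand-alone models miss.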
This paper evaluates well-known classification techniques for emotion classification. The aim of this work is the comparison of different channel selection techniques to achieve fast computation for electroencephalogram (EEG) data. Swarm intelligence algorithms belong to the class of nature-inspired algorithms, which are very useful for achieving high accuracies while reducing computing cost. In this paper, different channel optimization techniques are compared to each other. They are applied to the DEAP dataset to find the most suitable channels in the context of emotion recognition. For channel selection, principal component analysis (PCA), maximum relevance-minimum redundancy (mRMR), particle swarm optimization (PSO), cuckoo search (CS) and grey wolf optimization (GWO) were investigated. By applying these optimization algorithms, the number of EEG channels could be reduced from 32 to 20 while the accuracy remained nearly the same. The proposed optimization techniques saved between two and seven hours of computing time when training the bidirectional long short-term memory model used to classify emotions, whereas training without channel selection took 18 h. Among these algorithms, mRMR and CS obtained the most promising results. Using mRMR, a total computing time of 11 h was achieved, with an accuracy of 92.74% for arousal and 92.36% for valence; for CS, the total computing time was 15 h, with an accuracy of 93.33% for arousal and 93.67% for valence.
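The mRMR idea behind the reported 32-to-20 channel reduction can be sketched greedily: pick channels with high relevance to the label and low redundancy with already-selected channels. The one-feature-per-channel simplification and the correlation-based scores below are assumptions for illustration, not the paper's exact formulation.

```python
# Illustrative greedy mRMR-style channel selection on synthetic EEG features.
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_channels = 200, 32
X = rng.normal(size=(n_trials, n_channels))   # one feature per channel (simplified)
# Binary label driven by channels 0 and 3, so they should be selected.
y = (X[:, 0] + X[:, 3] + rng.normal(0, 0.5, n_trials) > 0).astype(float)

def mrmr_select(X, y, k):
    """Greedy max-relevance min-redundancy selection of k channel indices."""
    relevance = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        best, best_score = None, -np.inf
        for j in range(X.shape[1]):
            if j in selected:
                continue
            # Redundancy: mean absolute correlation with channels already chosen.
            redundancy = np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1])
                                  for s in selected])
            score = relevance[j] - redundancy
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
    return selected

channels = mrmr_select(X, y, k=20)            # 32 -> 20 channels, as in the study
```

Real EEG pipelines score band-power or other per-channel features with mutual information rather than correlation, but the greedy relevance-minus-redundancy trade-off is the same.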
SitAdapt is an architecture and runtime system for building adaptive interactive applications. The system is integrated into the PaMGIS framework for pattern- and model-based user interface construction and generation. This paper focuses on the situation-rule-based adaptation process and discusses the different categories of adaptations. Because the system observes the user during sessions and collects visual, bio-physical, and emotional data that may vary over time, SitAdapt, in contrast to other adaptive system environments, is able to create situation-aware adaptations in real time that reflect the user's changing physical, cognitive, and emotional state. With this advanced adaptation process, both task accomplishment and user experience can be significantly improved. The operation of the system is demonstrated with an example from an adaptive travel-booking application.
Emotion recognition based on facial expressions is an increasingly important area in human-computer interaction research. Despite the many challenges of computer-based facial emotion recognition, such as the huge variability of human facial features, cultural differences, and the differentiation between primary and secondary emotions, more and more systems and approaches focus on facial emotion recognition. These technologies already offer many possibilities for automatically recognizing human emotions. As part of the research project described in this paper, these technologies are used to investigate whether and how they can support virtual human interactions. More and more meetings are taking place virtually due to the Covid-19 pandemic and advancing digitalization; an attendee's face is therefore often the only visible part that indicates emotional states. This paper outlines why emotions and their recognition are important and in which areas the use of automated emotion detection tools seems promising. We do so by showing potential use cases for visual emotion recognition in the professional environment. In a nutshell, the research project aims to investigate whether facial emotion recognition software can help to improve self-reflection on emotions and the quality and experience of virtual human interactions.
This short research paper describes a study observing drivers' awareness with respect to color perception. The study was carried out at the Bosch Boxberg Proving Ground in Germany. By measuring drivers' awareness with eye tracking, combined with speed data and a questionnaire, we investigate whether color-designed energy absorber mats in a curve can positively influence drivers and their behavior. The results show that conspicuous energy absorber mats lead to a faster driving style and a better assessment of the curve.
A fast adoption and diffusion of green technologies will be essential for a successful transition of the world’s economies towards more sustainable modes of production and consumption. This article investigates the speed of green technology diffusion in China using a unique data set, which lists geocoded patent licence agreements for green technologies from 2000–2019. We focus on the relation between spatial determinants, including geographic proximity and regional technological specialisation, and the time-to-adoption, thus analysing the factors explaining the time between technology development (patent application) and technology adoption (licencing). The main finding is that geographic proximity to the innovator is associated with an accelerated time-to-adoption. Moreover, we find that the more a region specialises in green technologies, the faster a patent is licenced within that region.
Autosomal dominant polycystic kidney disease is the most common monogenic disease causing end-stage renal failure. It primarily results from mutations in the PKD1 gene, which encodes Polycystin-1. How loss of Polycystin-1 translates into bilateral renal cyst development is largely unknown. cAMP is significantly involved in cyst enlargement, but its role in cyst initiation has remained elusive. Deletion of Polycystin-1 in collecting duct cells resulted in a switch from tubule to cyst formation and was accompanied by an increase in cAMP. Pharmacological elevation of cAMP in Polycystin-1-competent cells caused cyst formation, impaired plasticity, nondirectional migration, and mis-orientation, and thus strongly resembled the phenotype of Polycystin-1-deficient cells. The mis-orientation of developing tubule cells in metanephric kidneys upon loss of Polycystin-1 was phenocopied by a pharmacological increase of cAMP in wildtype kidneys. In vitro, cAMP impaired tubule formation after capillary-induced injury, an effect further aggravated by loss of Polycystin-1.