SINTEF
  • Trondheim, Norway
Recent publications
Introduction Patients with serious conditions face complex, long‐lasting patient journeys involving multiple healthcare providers. Research shows that these journeys are frequently perceived as fragmented, with significant challenges in communication and information flow. However, there is limited knowledge about the organisational and informational aspects linked to good and poor experiences. This study investigated critical factors in cancer journeys, focusing on communication and the informational and organisational elements shaping patient experiences. Methods The critical incident technique was used to identify positive and negative factors in cancer patient journeys. People with cancer and their next‐of‐kin were recruited through Norway's national cancer organisation. Patient episodes were collected from 41 participants via digital workshops combined with questionnaires and supplemented by in‐depth interviews. Critical incidents were extracted using specific analytical criteria. Results A total of 187 critical incidents were identified, including 81 positive and 106 negative. Content analysis revealed 12 categories of incidents. Positive incidents were linked to effective communication, timely information, and well‐coordinated care, particularly through cancer pathways. Negative incidents often involved communication delays, insensitive information delivery, and poor coordination among healthcare providers. Notably, around 40% of the negative incidents stemmed from fragmented health services or a lack of progress, often forcing patients to act as messengers. Conclusion By examining critical experiences, this study highlights key areas for improving cancer care. Timely information and clinical empathy when delivering sensitive diagnoses are essential. Healthcare providers must coordinate services more effectively to prevent patients from intervening to ensure care progress. Patient or Public Contribution Patients' stories formed the core data. 
The public contributed to recruitment, while patient feedback informed the workshop design.
Hydraulic testing is one of the most common methods for determining the in situ stress conditions in a rock mass, especially at great depth, where the test locations are accessible only by drillholes. This paper discusses hydraulic fracturing of intact rock (HF) and hydraulic testing on pre-existing fractures (HTPF). The paper critically reviews the current understanding of the HF and HTPF test methods and suggests corrections and improvements. The associated calculation methods using HF and HTPF data are also revisited with detailed discussion. To demonstrate the calculation process, SINTEF uses the ISDM for the calculation of in situ stress, with Monte Carlo simulation to obtain a priori estimates, as is also done by other researchers. SINTEF runs the stress calculations for different a priori estimates based on repeated Monte Carlo simulations to reduce the sensitivity of the solution to the a priori estimates. This calculation process is applied to data obtained from hydraulic tests carried out in a group of short drillholes (30 m) at the Røldal hydropower project in Norway. Two calculation alternatives were performed: (a) calculation using the ISDM for HF tests only, and (b) calculation using the ISDM for both HF and HTPF tests. The obtained results are also compared with those from classical HF tests. The paper discusses the applicability of the ISDM, factors affecting its accuracy, practical challenges and uncertainties during the measurement and calculation processes, and possible improvements.
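The repeated Monte Carlo procedure described above can be sketched generically: draw candidate stress tensors from a prior distribution, score each against measured normal stresses on tested fracture planes, and rerun with different priors to check how sensitive the solution is. The measurement values, prior parameters, and the diagonal-tensor simplification below are hypothetical illustrations, not data or methods from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def normal_stress(S, n):
    """Normal stress on a plane with unit normal n under stress tensor S."""
    return n @ S @ n

# Hypothetical HTPF-style measurements: (fracture unit normal, measured normal stress, MPa).
# With these example planes only the horizontal components are constrained.
measurements = [
    (np.array([1.0, 0.0, 0.0]), 8.2),
    (np.array([0.0, 1.0, 0.0]), 6.1),
    (np.array([0.6, 0.8, 0.0]), 7.0),
]

def misfit(S):
    """Sum of squared differences between predicted and measured normal stresses."""
    return sum((normal_stress(S, n) - p) ** 2 for n, p in measurements)

def monte_carlo_estimate(prior_mean, prior_sd, n_samples=20000):
    """Sample diagonal stress tensors around a prior and keep the best-fitting one."""
    best_S, best_err = None, np.inf
    for _ in range(n_samples):
        S = np.diag(rng.normal(prior_mean, prior_sd))
        err = misfit(S)
        if err < best_err:
            best_S, best_err = S, err
    return best_S, best_err

# Rerun with different a priori estimates to check the solution's sensitivity to the prior.
for prior in ([7.0, 7.0, 7.0], [5.0, 9.0, 6.0]):
    S, err = monte_carlo_estimate(np.array(prior), np.array([2.0, 2.0, 2.0]))
    print(np.round(np.diag(S), 2), round(err, 4))
```

If the best-fit tensors from clearly different priors agree, the inversion is well constrained by the data; if they diverge, more test fractures or tighter priors are needed.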
Context Organizations opt for continuous delivery of incremental updates to deal with uncertainty and minimize waste. However, applying continuous software engineering (CSE) practices requires a continuous feedback loop with input from customers and end-users. Challenges It becomes increasingly challenging to apply traditional requirements elicitation and validation techniques with ever-shrinking software delivery cycles. At the same time, frequent deliveries generate an abundance of usage data and telemetry informing engineering teams of end-user behavior. The literature describing how practitioners work with user feedback in CSE is limited. Objectives We aim to explore the state of practice related to the utilization of user feedback in CSE: specifically, what practices are used, how they are applied, and what their shortcomings are. Method We conduct a qualitative survey and report the analysis of 21 interviews in 13 product development companies. We apply thematic and cross-case analysis to interpret the data. Results Based on our earlier work, we suggest a conceptual model of how user feedback is utilized in CSE. We further report the identified challenges with the continuous collection and analysis of user feedback and identify implications for practice. Conclusions Companies use a combination of qualitative and quantitative methods to infer end-user preferences. At the same time, continuous collection, analysis, interpretation, and use of data in decisions are problematic. The challenges pertain to selecting the right metrics and analysis techniques, resource allocation, and difficulties in accessing vaguely defined user groups. Our advice to practitioners in CSE is to ensure sufficient resources and effort for interpretation of the feedback, which can be facilitated by telemetry dashboards.
To limit energy consumption and peak loads with the increased electrification of our society, more information is needed about the energy use in buildings. This article presents a data set containing 4 years (Jan. 2018 to Dec. 2021/Mar. 2022) of hourly measurements of energy and weather data from 45 public buildings located in Drammen, Norway. The buildings are schools (16), kindergartens (20), nursing homes (7) and offices (2). For each building, the data set contains contextual data, including floor area, construction year, energy label, and information about the heating and ventilation systems, in addition to time series of energy use and weather data. For some of the buildings, the energy measurements contain only hourly imported electricity, while the time series for other buildings include submeters for different energy services and technologies. Researchers, energy analysts, building owners and policy makers can benefit from the data set for purposes such as hourly load disaggregation, forecasting of energy loads and flexibility, grid planning and modelling activities.
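A typical first analysis of such hourly building data is an energy-signature fit: daily energy use regressed on heating degree days. The sketch below runs on synthetic data standing in for one building's series; the column names (el_kwh, temp_c) and the 15 °C base temperature are assumptions for illustration, not the published file format.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# Synthetic stand-in for one building's hourly series: a seasonal outdoor
# temperature and an electric load with a heating-dependent component.
idx = pd.date_range("2018-01-01", periods=24 * 365, freq="h")
temp = 5 + 12 * np.sin(2 * np.pi * (idx.dayofyear - 30) / 365) + rng.normal(0, 2, len(idx))
load = np.clip(50 + 3 * (15 - temp), 50, None) + rng.normal(0, 5, len(idx))
df = pd.DataFrame({"el_kwh": load, "temp_c": temp}, index=idx)

# Energy signature: daily energy vs. heating degree days (base 15 °C).
daily = df.resample("D").agg({"el_kwh": "sum", "temp_c": "mean"})
daily["hdd"] = np.clip(15 - daily["temp_c"], 0, None)
slope, intercept = np.polyfit(daily["hdd"], daily["el_kwh"], 1)
print(f"heating slope ~ {slope:.1f} kWh/day per degree-day")
```

The fitted slope characterizes a building's heating sensitivity, which is a common input to the load forecasting and flexibility studies the data set is intended for.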
A multiscale study was carried out to evaluate the MIL-91 (Ti) sorbent for post-combustion CO2 capture under industrially relevant conditions. Initially, the process performance of the MOF was assessed using molecularly simulated adsorption isotherms, which predicted an energy consumption of 1.65 MJ/kg and a productivity of 0.42 mol/m³ ads s. Subsequently, MIL-91 (Ti) was characterized using several complementary experimental techniques, and the characterization data were supplied to a process simulator to assess energy consumption and productivity for 95% purity and 90% recovery targets. The experimental adsorption isotherms resulted in even better process performance, with a minimum energy consumption of 1.03 MJ/kg and a maximum productivity of 0.61 mol/m³ ads s, compared to the GCMC-simulated adsorption isotherms. This discrepancy can be attributed to the use of a generic force field in the molecular simulation, which cannot accurately capture host-guest intermolecular interactions with the MOF pore surface in the highly confined environment of MOFs like MIL-91. However, the lower energy consumption and higher productivity of the actual MIL-91 (Ti), both desirable outcomes for CO2 capture processes, suggest the viability of MIL-91 (Ti) for real CCS applications.
The Internet of Things (IoT) is becoming increasingly ubiquitous, acting as an important source of real-time data for various applications. By allowing data exchange between various parties along the IoT devices-Edge-Cloud computing continuum, the larger societal benefits of the IoT can be achieved. Assuring security and fostering confidence in IoT data sharing, however, is one of the biggest obstacles. Sharing real-time data originating from connected devices is crucial to real-world intelligent IoT applications, e.g., those based on artificial intelligence/machine learning. Such IoT data sharing involves multiple parties for different purposes and is usually based on data contracts that might depend on the dynamic change of IoT data variety and velocity. We aim to support multiple parties (aka tenants) with dynamic contracts based on the data value for their specific contextual purposes. This work addresses these challenges by introducing a novel dynamic context-based policy enforcement framework to support IoT data sharing (on-Edge) based on dynamic contracts. Our enforcement framework allows IoT Data Hub owners to define extensible rules and metrics to govern the tenants accessing the shared data on the Edge based on policies defined with static and dynamic contexts. We have created an edge-centered architecture that enables multi-tenant use cases with tenant-specific application deployment and IoT-context-based data sharing on edge servers. Our proof-of-concept prototype for sharing sensitive data such as surveillance camera videos has illustrated our proposed framework. The experimental results demonstrated that our framework could enforce context-based policies soundly and in a timely manner at runtime with moderate overhead. Moreover, context and policy changes are correctly reflected in the system in near real time.
We have addressed the need to enable multi-party IoT (data) resources to be shared based on contracts, especially under dynamic IoT contexts, for tenant applications on the edge, allowing them closer access to data.
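The core enforcement idea, combining static contract attributes with dynamic runtime context, can be illustrated with a minimal sketch. The Policy class, the field names, and the velocity-cap rule below are hypothetical illustrations, not the framework's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class Policy:
    """A sharing policy combining static contract terms with dynamic context checks."""
    tenant: str
    resource: str
    # Static context: fixed attributes agreed in the data contract.
    allowed_purposes: set
    # Dynamic context: predicates evaluated against runtime metrics.
    dynamic_checks: list = field(default_factory=list)

    def permits(self, purpose: str, runtime_ctx: dict) -> bool:
        if purpose not in self.allowed_purposes:
            return False
        return all(check(runtime_ctx) for check in self.dynamic_checks)

# Example: a tenant may read a camera stream for ML training,
# but only while the ingest rate stays under a contracted velocity cap.
policy = Policy(
    tenant="analytics-a",
    resource="camera/stream-7",
    allowed_purposes={"ml-training"},
    dynamic_checks=[lambda ctx: ctx.get("events_per_sec", 0) <= 100],
)

print(policy.permits("ml-training", {"events_per_sec": 40}))   # within the contract
print(policy.permits("ml-training", {"events_per_sec": 250}))  # velocity cap exceeded
print(policy.permits("billing", {"events_per_sec": 40}))       # purpose not allowed
```

Because the dynamic checks close over runtime metrics rather than fixed values, a contract change only swaps predicates; the enforcement loop itself stays unchanged, which is what lets policy updates take effect in near real time.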
This study focused on the performance evaluation of clay and thermomechanical pulp (TMP) fiber to reinforce low‐ and high‐density polyethylene (LDPE and HDPE) biocomposites. A 2³ factorial experiment was designed using two levels of clay, TMP fibers, and PE as variables. Mechanical properties, thermal behavior, melt flow index, and water absorption were evaluated. In HDPE, the partial replacement of TMP fiber with 10 wt% clay increased the melting point. Clay also reduced the main polymer degradation temperature in both matrices (LDPE and HDPE). The mechanical properties of the samples with 20 wt% fiber and 10 wt% clay were similar to or better than those containing 30 wt% TMP, that is, tensile strength and modulus of 34 and 2700 MPa, compared to 30 and 2400 MPa, respectively. Although the water absorption increased with the addition of TMP fiber and clay, the water absorption of the composite with 20 wt% TMP and 10 wt% clay was relatively low and similar to the biocomposite containing 30 wt% TMP, that is, 1.15 and 1.07% after 30 days, respectively. The comparable properties of biocomposites with 30 wt% TMP and biocomposites with 20 wt% TMP and 10 wt% clay demonstrate the potential of clay to reduce the cost of the final product. Highlights Clay enhances the tensile modulus and strength, and reduces the color darkening, compared to TMP. TMP fibers and clay reduce the melt flow index, elongation, and impact toughness. TMP fibers and clay increase the melting point and reduce the degradation temperature. Reduction in production costs of biocomposites by adding inorganic clay filler.
As the lightest structural metal materials, Mg alloys are promising for wider applications but are limited by low strength and poor corrosion resistance. Precipitation is an effective way to improve the strength and other properties of Mg alloys. Given the extremely complex precipitation process, the crystal structures of precipitates, the precipitation sequence, and the thermodynamic and kinetic behaviors of precipitation have stimulated extensive research interest. Precipitation kinetics, which connects composition, aging processes, and precipitate microstructure, is pivotal in determining the performance of age‐hardening Mg alloys. Despite numerous studies on this topic, a comprehensive review remains absent. This work aims to bridge that gap by analyzing precipitation from thermodynamic and kinetic perspectives. Thermodynamically, the stability of precipitates, nucleation driving forces, and resistances to precipitation are discussed. Kinetically, the various kinetic theories, including semi‐empirical models, mean‐field models, phase‐field models, and atomistic approaches, and their applications in Mg alloys are systematically summarized. Among these, mean‐field models emerge as particularly promising for accurately predicting precipitation processes. Finally, a framework for property prediction based on precipitation kinetics is introduced to illustrate the role of integrated computational materials engineering (ICME) in designing advanced Mg alloys.
Solid electrolytes in Li-ion batteries offer enhanced safety and stability and contribute to improved energy density. In this study, a novel approach to synthesize a solid molecular ionic composite as an electrolyte for Li-ion batteries was explored, using 1-benzyl-3-methylimidazolium bis(trifluoromethanesulfonyl)imide [(Bn)mim][TFSI] ionic liquid as the principal component, a rigid polymer poly(2,2′-disulfonyl-4,4′-benzidine terephthalamide) (PBDT), and LiTFSI salt. The composition of the membrane was systematically varied, with the percentage of polymer fixed at 10%, while the percentages of ionic liquid and LiTFSI salt were modified. The electrochemical performance of the resulting membranes was evaluated. Remarkably, the membrane containing 10% polymer, 10% LiTFSI salt, and 80% ionic liquid demonstrated exceptional electrochemical properties, with a capacity of 150 mAh/g in an LFP-Li half-cell, approaching the theoretical capacity of LiFePO4. This membrane exhibited high conductivity and excellent stability, making it a promising candidate for use as an electrolyte in Li-ion batteries. The findings of this study provide valuable insights into the design and optimization of polymer-based electrolyte membranes for advanced energy storage applications.
Publishing preprints is quickly becoming commonplace in ecology and evolutionary biology. Preprints can facilitate the rapid sharing of scientific knowledge, establishing precedence and enabling feedback from the research community before peer review. Yet, significant barriers to preprint use exist, including language barriers, a lack of understanding about the benefits of preprints and a lack of diversity in the types of research outputs accepted (e.g. reports). Community-driven preprint initiatives can allow a research community to come together to break down these barriers to improve equity and coverage of global knowledge. Here, we explore the first preprints uploaded to EcoEvoRxiv (n = 1216), a community-driven preprint server for ecologists and evolutionary biologists, to characterize preprint use in ecology, evolution and conservation. Our perspective piece highlights some of the unique initiatives that EcoEvoRxiv has taken to break down barriers to scientific publishing by exploring the composition of articles, how gender and career stage influence preprint use, whether preprints are associated with greater open science practices (e.g. code and data sharing) and tracking preprint publication outcomes. Our analysis identifies areas that we still need to improve upon but highlights how community-driven initiatives, such as EcoEvoRxiv, can play a crucial role in shaping publishing practices in biology.
The transition to sustainable energy systems is crucial for reducing greenhouse gas emissions and increasing energy efficiency. This paper synthesizes insights from industrial experts and academic researchers on the challenges, opportunities, and solutions for integrating thermal energy storage (TES) into industrial energy systems. These insights were gathered during an international expert workshop on TES, organized by the European Energy Research Alliance as part of the Joint Program on Energy Efficiency in Industry (EERA-JP EEIP) on November 7th, 2023, discussing a white paper on industrial thermal energy storage. This paper provides a comprehensive overview of the current state and future potential of TES technologies. Demonstrating technology benefits, continuing material development, improving economic feasibility, enhancing system flexibility, developing innovative business models, fostering policy support, and facilitating knowledge transfer are believed to be essential for the successful adoption of TES technologies in industry.
AI systems are becoming vital in many industries, including safety-critical domains, and are expected to become more complex. This book introduces the AI Act (REGULATION (EU) 2024/1689) and its impact on high-risk systems, providing a foundation for safety plans. It targets experts, stakeholders, manufacturers, operators, start-ups, and SMEs. The book focuses on functional safety, including automotive, railway, and seaborne systems, covering key aspects of safety plan development, including technical, human, and organizational factors. The book's introduction presents the AI Act's risk levels, stakeholder obligations, standards, agile methods, and the role of the Agile Safety Plan in creating safety cases.
This chapter first outlines why software and data are relevant to consider when developing safety-critical applications and which requirements need to be considered in this context. We consider both the software that companies buy and the software they develop themselves, but also pay attention to the software already present within the hardware that is in use. The chapter also provides a section on data, pointing out some key aspects relevant to consider in the context of developing AI and safety-critical systems.
This chapter focuses on planning tests, analysis, scenarios, verification, validation, and regression testing to ensure the reliability and safety of systems. It outlines the importance of systematic test methodologies, including scenario-based evaluations and simulations, to verify compliance with safety requirements and validate system performance. The chapter emphasizes the role of regression testing in maintaining system integrity after updates, highlighting how agile practices and automated tools can facilitate frequent testing and quicker resolution of potential issues. These methods ensure systems meet safety standards and operate reliably in real-world conditions.
This chapter outlines the System Design Plan, which aims to develop a system that meets functional safety and regulatory requirements while ensuring reliability, transparency, and stakeholder satisfaction. It emphasizes the importance of risk management, effective integration of subsystems, and continuous stakeholder engagement to deliver a compliant and trustworthy system design. The chapter also explores agile adaptations and the role of stakeholder feedback in refining the system throughout its development.
Documentation serves as the backbone of a comprehensive safety plan, ensuring that all processes, requirements, and functions are clearly recorded and accessible. This chapter highlights the essential types of documentation—system, installation, user, and maintenance—and discusses how each contributes to a system’s lifecycle from development through operation. Effective documentation must be accurate, user-focused, and kept up-to-date, supporting seamless communication and system usability across various stakeholder roles.
This chapter consists of five sections outlining aspects related to tools, libraries, formats, and programming languages, as well as considerations that need to be made regarding pre-existing software.
This chapter consists of three sections outlining how a description of a safety-critical system should be provided. First, we elaborate on how the Definition of the System (DoS) should be provided, ensuring a detailed description of the system for which the safety case is being presented. Next, we elaborate on how the environment surrounding the safety-critical system should be accounted for through a detailed description of the Operational Design Domain (ODD). Lastly, we outline how the Concept of Operations (ConOps) bridges the DoS and the ODD by addressing the different types of users and modes of operation of the system. Taken together, these three aspects provide a comprehensive description of the safety-critical system in its operational use.
The chapters in the book have, until now, mainly focused on the different topics that should be included in the agile safety plan. However, as noted in the introduction, the work with the safety plan has a direct impact on safety case development. In this chapter, we first outline different types of safety cases and how an agile approach might facilitate an incremental development process. Next, we address how one should deal with related safety cases for subsystems, products, or modules. Lastly, we underscore the importance of preparing for improvements or modifications.
1,293 members
Oistein Johansen
  • Research Division of Materials and Chemistry
Arne H. Eide
  • Department of Health Research
Georg Muntingh
  • Department of Applied Mathematics
Sverre Gullikstad Johnsen
  • Department of Metal Production and Processing