Article

From Data to Wisdom

... This process consists of transforming all data, content, objects, and tools into machine-readable computer code with the aim of improving qualitative performance. (1) data (D) represents a value without being explicitly expressed; it is a symbol or a raw entity, present in various forms, devoid of meaning beyond its mere existence, which may or may not be exploited depending on the context (Sowa 1984; Ackoff 1989); ...
Chapter
Full-text available
In addition to traditional know-how, the use of digital tools and transdisciplinary approaches enables the development of reverse-engineering methods for heritage objects. Three main phases are identified: (1) data collection through the digitisation of an object, (2) processing and analysis of the collected data, and (3) three-dimensional modelling of a copy of that object. Two techniques are explored in particular here: the data cube for data analysis and the digital twin for 3D modelling, with a view to obtaining a durable digital archive. This approach rests on exploring the possible relationships between five fundamental cognitive levels (data, information, knowledge, intelligence, consciousness) and on integrating the qualitative data of the object to be reverse-engineered.
Article
This paper explores the potential of digitalisation in agriculture to improve the sustainability of agricultural production and industrial sectors, contributing to the twin digital and green transition. Digital systems can facilitate and enhance competitiveness by leveraging mutually reinforcing transformations. The European Commission has proposed the creation of Common European Data Spaces in specific sectors to support such a transition. We focus on the agri-food domain, considering farmers and other actors in the food chain. The aim is to identify needs, priorities, opportunities, and barriers to a Common European Data Space for agriculture and food systems, thus going beyond the sectoral European Data Space for agriculture already under development. In addition, this work looks at strategies for introducing the aforementioned novel data space and at evidence of benefits for farmers, who are a key component of agricultural and food systems. To accomplish this, the concept of data spaces is presented, analysing the main components, functions, and potential challenges and opportunities for data sharing and reuse, with the agri-food context as the main focus. It also presents current and future scenarios for data use at different decision-making levels, focusing on the specific role of farmers in the digital ecosystem. Additionally, it outlines the basic principles for an inclusive agri-food data strategy.
Article
Traditional open-field plant monitoring approaches are costly, labor-intensive, and risky. Under arid conditions, the risks are elevated; they are associated with fatigue, dehydration, animal attacks, and more. Plant detection is a pillar of precision agriculture, required for various crop management processes, including autonomous planting, irrigation, and harvesting. Researchers have utilized off-the-shelf supervised learning modules for plant detection by re-training these modules on powerful computers. This research, in contrast, aims to develop an unsupervised, trainingless method that runs on resource-limited edge computers. The outcome of this research is an Artificial Intelligence (AI) tool for the detection of two economically important indigenous forage crops, namely Cenchrus ciliaris and Pennisetum divisum, for the pastures project in the Emirate of Sharjah, United Arab Emirates (UAE). The developed tool detects plant inflorescences in depth-color close-range aerial images of the plants. A novel Decision Hierarchy (DIKD), covering data, information, knowledge, and decision, is proposed and showcased in this research. The proposed DIKD approach detected the target species with an average accuracy of 0.98, using novel blob features: blob regularity and blob strawness. Experimental results demonstrate the robustness of these features for target plant detection under open-environment conditions. This work helps create georeferenced maps of plant distribution within the pastures. These maps enable sustainable management of the Sharjah pastures, for example by aiding remote assessment of the pastures' carrying capacity and by performing autonomous aerial irrigation based on individual tussock needs. Moreover, these species-aware maps aid the planning of rotational grazing and enable habitat restoration using seeds harvested from the resting paddocks.
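As a hedged illustration of the blob-feature step this abstract describes: the paper names blob regularity and blob strawness but the abstract does not define them, so the definitions below (a circularity proxy and an axis-ratio proxy) and the threshold rule are assumptions, not the authors' method. A minimal scikit-image sketch of a trainingless blob filter:

```python
# Hedged sketch: trainingless blob filtering for inflorescence candidates.
# "regularity" (circularity) and "strawness" (elongation) are assumed proxies;
# the paper's actual feature definitions are not given in the abstract.
import numpy as np
from skimage import measure

def blob_features(binary_mask):
    """Yield (regularity, strawness) for each connected blob in a binary mask."""
    for region in measure.regionprops(measure.label(binary_mask)):
        perimeter = region.perimeter or 1.0
        regularity = 4.0 * np.pi * region.area / perimeter ** 2  # 1.0 = perfect disk
        strawness = region.major_axis_length / max(region.minor_axis_length, 1e-6)
        yield regularity, strawness

def detect_inflorescences(binary_mask, reg_min=0.4, straw_max=3.0):
    """Keep blobs that are compact (regular) and not straw-like (elongated)."""
    return [(r, s) for r, s in blob_features(binary_mask)
            if r >= reg_min and s <= straw_max]

mask = np.zeros((64, 64), dtype=bool)
mask[10:20, 10:20] = True                  # one compact square blob
print(detect_inflorescences(mask))         # kept: high regularity, low strawness
```

Such a rule needs no training data, which is consistent with the abstract's emphasis on running without supervision on resource-limited edge computers.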
Article
Full-text available
Construction work and regular inspection work at nuclear power plants involve many special tasks, unlike general on-site work. In addition, opportunities to transfer knowledge from skilled to unskilled workers are limited, because the plant cannot be entered easily and because of various security and radiation exposure issues. Therefore, in this study, we considered the application of virtual reality (VR) as a method to increase opportunities to learn anytime and anywhere and to transfer knowledge more effectively. In addition, as an interactive learning method to improve comprehension, we devised a system that uses hand tracking and eye tracking to allow participants to experience movements and postures that are closer to the real work in a virtual space. For hand-based work, three actions, "pinch", "grab", and "hold", were reproduced depending on the sizes of the parts and tools, and visual confirmation work was reproduced through the movement of the gaze point, faithfully reproducing the special actions of the inspection work. We confirmed that a hybrid learning process that appropriately combines the developed active learning method, using experiential VR, with conventional passive learning methods, using paper and video, can improve the comprehension and retention of special work at nuclear power plants.
Article
Full-text available
The rapid digitalization of the mobility and transport ecosystem generates an escalating volume of data as a by-product, presenting an invaluable resource for various stakeholders. This mobility and transport data can fuel data-driven services, ushering in a new era of possibilities. To facilitate the development of these digitalized mobility services, we propose a novel conceptual framework for Mobility Data Science. Our approach seamlessly merges two distinct research domains: (a) mobility and transport science, and (b) data science. Mobility Data Science serves as a connective tissue, bridging the digital layers of physical mobility and transport artifacts (such as people, goods, transport means, and infrastructure) with the digital layer of data-driven services. In this paper, we introduce our conceptual framework, shaped by insights from domain experts deeply immersed in the mobility and transport ecosystem. We present a practical application of our framework in guiding the development of a driving style detection service, demonstrating its effectiveness in translating theoretical concepts into real-world solutions. Furthermore, we validate our framework’s versatility by applying it to various real-world cases from the scientific literature. This demonstration showcases the framework’s adaptability and its potential to unlock value by harnessing mobility and transport data, enabling the creation of impactful data-driven services. We believe our framework offers valuable insights for researchers and practitioners: It provides a structured approach to comprehend and leverage the potential of mobility and transport data for developing impactful data-driven services, which we refer to as digitalized mobility services.
Article
The effective utilization of massive safety performance data has become an urgent necessity in response to the growing complexity and scale of work safety systems. With the rapid development of big data, data-driven technologies and theories have emerged as a promising avenue for enhancing safety performance management, and a comprehensive data-driven framework is urgently needed to continuously improve the performance of work safety systems. Firstly, the disciplinary support and rationale behind safety performance management in the information age are analyzed, and the research stages and strengths of the data-driven approach are clarified. Subsequently, the three-dimensional structure of data-driven safety performance management is depicted, the three-level data flow in the work safety process is summarized, and the data feedback process for safety performance management is described. Lastly, the overall framework of data-driven safety performance management is constructed, and its sub-frameworks are analyzed from four aspects: infrastructure, data collection and storage, data analysis, and information services. Framework advantages, development strategies, and research limitations for data-driven safety performance management are also provided. Overall, this study aims to advance the informatization and digitization of work safety performance management at the enterprise level through a synthesis of existing data-driven theoretical views and data analytic techniques.
Article
The growing global population and the need for sustainable energy due to the depletion of nonrenewable resources necessitate innovative solutions that meet energy demands while minimizing environmental impacts. The cardinal motivation behind this research is to move towards efficient biomass resource management in order to overcome environmental uncertainty and ambiguity and to mitigate adverse environmental impacts and climate change. The research literature and the opinions of international experts were reviewed to present the biomass EEEGOT model. The study introduces three new methods: a Circular Intuitionistic Fuzzy Step-Wise Weight Assessment Ratio Analysis (CIF-SWARA) method for ranking challenges; a Circular Intuitionistic Fuzzy Decision Making Trial and Evaluation Laboratory (CIF-DEMATEL) method for selecting the most important challenges; and a Circular Intuitionistic Fuzzy Quality Function Deployment (CIF-QFD) method for prioritizing environmental challenges and addressing them. According to the weight evaluation results, the organizational dimension (0.188) and the environmental dimension (0.177) had the highest weights; they are therefore the major challenges requiring immediate attention from managers, decision-makers, and stakeholders. Additionally, the findings of the conceptual modeling (TISM) indicated that culture, leadership, governance policy, human resources, and regulatory barriers are the most fundamental components affecting the other components. The novelty of this study lies in its theoretical expansion of the research literature by proposing a model built on a comprehensive approach to the circular bioeconomy that can serve as a base for relevant decision-makers and researchers; in addition, three new fuzzy-logic methods were developed. Since biomass management requires a comprehensive understanding of the challenges it faces, their precise prioritization, and the use of smart strategies and solutions, this research provides insights, useful tools, and solutions for organizational and industrial managers, decision-makers, and scholars, who can use these resources to investigate, prioritize, and overcome these challenges. Finally, the research focuses on moving away from the linear economy and applying the circular concept in order to achieve several goals, including sustainable development, energy security, effective climate change mitigation, and reduced levels of poverty, given its global implications.
Conference Paper
Advanced metering infrastructure (AMI) has been widely installed in the building sector, especially at universities. AMI data can provide real-time information not only on electricity consumption but also act as an indicator of communal behavior, population culture, and economic dynamics, from which many insights can be derived. The purpose of the evaluation is to analyze the measurements performed by the smart meters and to ensure that measurement services are available, showing that the operational technology infrastructure is reliable. Raw data from the database are transformed into information and knowledge, which are in turn represented so as to be transformed into wisdom. This paper presents a reliability evaluation analysis based on big data mining and the cross-industry standard process for data mining (CRISP-DM) method. The data modelling uses machine learning, with isolation forest outlier detection representing anomaly detection over many contextual features and their variations. The reliability index parameters obtained show that the AMI of SBM-ITB is trustworthy, with a failure rate of 0.14 per day and a smart-metering service availability of 99.4 percent over a test period of 5199 hours.
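A minimal sketch of the isolation-forest outlier step this abstract describes, assuming illustrative feature names (kwh, hour) rather than the paper's actual AMI schema; the closing lines simply restate the abstract's reported availability figure:

```python
# Hedged sketch of isolation-forest anomaly detection on AMI readings; the
# feature names and synthetic data are assumptions, not the paper's schema.
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
readings = pd.DataFrame({
    "kwh": rng.normal(50.0, 5.0, 1000),         # synthetic hourly consumption
    "hour": np.tile(np.arange(24), 42)[:1000],  # contextual feature
})

model = IsolationForest(contamination=0.01, random_state=0)
readings["flag"] = model.fit_predict(readings[["kwh", "hour"]])  # -1 = anomaly
print((readings["flag"] == -1).sum(), "flagged readings")

# Restating the abstract's reliability figure: 99.4 % availability over 5199 h
# implies roughly 31 h of accumulated downtime in the test period.
print(f"implied downtime ≈ {5199 * (1 - 0.994):.0f} h")
```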
Article
Full-text available
This paper will discuss Data, Information, Knowledge, and Wisdom, which is commonly referred to as DIKW. The DIKW Pyramid Model is a hierarchical model that is often referenced in both academic and practitioner circles. This model will be discussed and shown to be faulty on several levels, including a lack of definitional agreement. A new DIKW framework with systems orientation will be proposed that focuses on what the DIKW elements do in the way humans think, not what they are by definition. Information as a replacement for wasted physical resources in goal-oriented tasks will be a central organizing point. The paper will move the DIKW discussion to the computer-based concept of Digital Twins (DTs) and its augmentation of how we can use DIKW to be more effective and efficient. This will especially be the case as we move toward Intelligent Digital Twins (IDTs) with Artificial Intelligence (AI).
Article
Digital Twins (DTs) and Cyber-Physical Systems (CPSs) have the potential to play a crucial role in creating intelligent, connected, and efficient commercial vehicles (buses and trucks). A systematic literature review was conducted to analyze this area's current state of knowledge. The results of the review point to successful cases of using these technological solutions in this area. However, it also points to the need for a clear consensus regarding the definition of DT and CPS, generating conceptual challenges. Furthermore, the analysis reveals that most studies consider only one perspective concerning physical assets in DTs and CPSs, indicating the need to explore multiple dimensions of these assets. This study also emphasizes the potential of Industry 4.0 (I4.0) and its standards as possible solutions to address the identified gaps. The pursuit of integration and interoperability is highlighted as a promising direction to advance the representation and effective use of physical assets. This work provides a comprehensive overview of the opportunities and challenges related to DTs and CPSs in commercial vehicles, highlighting the continued need for research and development in this evolving field.
Article
Full-text available
To better understand the processes of digitalisation, dematerialisation and decarbonisation, this paper examines the relationship between energy and information for the global economy since 1850. It presents the long run trends in energy intensity and communication intensity, as a proxy for total information intensity. The evidence suggests that, relative to GDP, global economic production has been reducing energy and increasing information use since 1913. The analysis indicates that it initially required little information to replace energy in production and that the ability to substitute away from energy and towards information has been declining. The result implies that the global economy is now reducing energy and increasing information at a substitution rate of 0.2 kB per kWh of conserved energy or 0.8 GB per tonne of carbon dioxide mitigated. As the price ratio of energy to information is currently higher than this marginal rate of substitution, there are incentives to further substitute information for energy. However, one conclusion is that (without the long run escalation of carbon prices) substitution away from energy and towards information is likely to cease within the next few decades and, beyond that, digitalisation will play a declining role in the decarbonisation process.
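As a hedged formalisation of the substitution logic in this abstract (the symbols below are ours, not the paper's): the marginal rate of substitution of information for energy, and the price condition under which further substitution pays.

```latex
% Substitution of information (I) for energy (E), per the abstract's figures.
% MRS: extra information required per kWh of conserved energy.
\[
  \mathrm{MRS}_{I,E} = \frac{\Delta I}{\Delta E} \approx 0.2\ \mathrm{kB\,kWh^{-1}},
  \qquad
  p_I \,\Delta I < p_E \,\Delta E
  \;\Longleftrightarrow\;
  \frac{p_E}{p_I} > \mathrm{MRS}_{I,E},
\]
% so substitution remains profitable while the energy-to-information price
% ratio exceeds the marginal rate of substitution, as the abstract reports.
```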
Article
Full-text available
Recent technological improvements have made it possible for pervasive computing intelligent environments, augmented by sensors and actuators, to offer services that support society's aims for a wide variety of applications. This requires the fusion of data gathered from multiple sensors to convert them into information and obtain valuable knowledge. Poor implementation of data fusion hinders the appropriate actions from being taken and the appropriate support from being offered to users and environment needs, which is particularly relevant in the healthcare domain. Data fusion poses challenges that are mainly related to the quality of the data or data sources, the definition of a data fusion process and the evaluation of the data fusion carried out. There is also a lack of holistic engineering frameworks to address these challenges. Such frameworks should be able to support automated methods for extracting knowledge from information, selecting algorithms and techniques, assessing information and evaluating information fusion systems in an automatic and standardized manner. This work proposes a holistic framework to improve data fusion in pervasive systems, addressing the issues identified by means of two processes. The first guides the design of the system architecture and focuses on data management. It is based on a previous proposal that integrated aspects of Data Fabric and Digital Twins to solve data management and data contextualization and representation issues, respectively. The extension of the previous proposal presented here was mainly defined by integrating aspects and techniques from different well-known multi-sensor data fusion models. The previous proposal identified high-level data processing activities and was intended to facilitate their traceability to components in the system architecture. However, the previously defined stages are not completely adequate for a data fusion process, and the data processing tasks to be performed in each stage are not described in detail, especially in the data fusion stages. The second process of the framework deals with evaluating data fusion systems and is based on international standards to ensure the quality of the data fusion tasks performed by such systems. This process also offers guidelines for designing the architecture of an evaluation subsystem to automatically perform data fusion evaluation at runtime as part of the system. To illustrate the proposal, a system for preventing the spread of COVID-19 in nursing homes, developed using the proposed guidelines, is described, along with how the data fusion tasks it supports are evaluated by the proposed evaluation process. The overall evaluation of the data fusion performed by this system was considered satisfactory, which indicates that the proposal facilitates the design and development of data fusion systems and helps to achieve the necessary quality requirements.
Article
Full-text available
Historically, mastery of writing was deemed essential to human progress. However, recent advances in generative AI have marked an inflection point in this narrative, including for scientific writing. This article provides a comprehensive analysis of the capabilities and limitations of six AI chatbots in scholarly writing in the humanities and archaeology. The methodology was based on tagging AI-generated content for quantitative accuracy and qualitative precision by human experts. Quantitative accuracy assessed the factual correctness in a manner similar to grading students, while qualitative precision gauged the scientific contribution similar to reviewing a scientific article. In the quantitative test, ChatGPT-4 scored near the passing grade (−5) whereas ChatGPT-3.5 (−18), Bing (−21) and Bard (−31) were not far behind. Claude 2 (−75) and Aria (−80) scored much lower. In the qualitative test, all AI chatbots, but especially ChatGPT-4, demonstrated proficiency in recombining existing knowledge, but all failed to generate original scientific content. As a side note, our results suggest that with ChatGPT-4, the size of large language models has reached a plateau. Furthermore, this paper underscores the intricate and recursive nature of human research. This process of transforming raw data into refined knowledge is computationally irreducible, highlighting the challenges AI chatbots face in emulating human originality in scientific writing. Our results apply to the state of affairs in the third quarter of 2023. In conclusion, while large language models have revolutionised content generation, their ability to produce original scientific contributions in the humanities remains limited. We expect this to change in the near future as current large language model-based AI chatbots evolve into large language model-powered software.
Article
Full-text available
Featured Application: We provide a case study for wildfire response, risk mitigation and recovery, and a methodology for research-to-commercialization (R2C) for analytics of value using solutions-oriented science.
Abstract: Global climate change and associated environmental extremes present a pressing need to understand and predict social–environmental impacts while identifying opportunities for mitigation and adaptation. In support of informing a more resilient future, emerging data analytics technologies can leverage the growing availability of Earth observations from diverse data sources ranging from satellites to sensors to social media. Yet, there remains a need to transition from research for knowledge gain to sustained operational deployment. In this paper, we present a research-to-commercialization (R2C) model and conduct a case study using it to address the wicked wildfire problem through an industry–university partnership. We systematically evaluated 39 different user stories across eight user personas and identified information gaps in public perception and dynamic risk. We discuss utility and challenges in deploying such a model as well as the relevance of the findings from this use case. We find that research-to-commercialization is non-trivial and that academic–industry partnerships can facilitate this process provided there is a clear delineation of (i) intellectual property rights; (ii) technical deliverables that help overcome cultural differences in working styles and reward systems; and (iii) a method to both satisfy open science and protect proprietary information and strategy. The R2C model presented provides a basis for directing solutions-oriented science in support of value-added analytics that can inform a more resilient future.
Article
Manufacturing companies are often unaware of their current state of big data and analytics implementation, losing the opportunity to achieve high-performance targets. An evaluation through maturity models can support companies in better focusing their initiatives. However, the existing maturity models only address big data, and they lack a scientific development approach and applications. They are also general in scope and not specific to manufacturing scenarios. This study provides an innovative maturity model, including an assessment tool and an evaluation method. The model has been designed according to a well-founded methodology and assesses maturity across five dimensions: organisation, data management, data analytics, infrastructure, and governance & security. To enhance its validity, the model has been applied in three industrial companies working in the aerospace, automotive and machinery sectors. The results demonstrate the model's ability to leverage big data management and analytics through a deep, easy, and context-independent assessment focused on industrial scenarios. The suggested model enriches the state of the art on big data and analytics maturity models, providing a methodology and a tool to apply to specific industrial cases. Academics can benefit from the results to design and guide future research in this field.
Article
Social-media platforms have become a global phenomenon of communication, where users publish content in text, images, video, audio or a combination of them to convey opinions, report facts that are happening or show current situations of interest. Smart-city applications can benefit from social media and digital participatory platforms when citizens become active social sensors of the problems that occur in their communities. Indeed, systems that analyse and interpret user-generated content can extract actionable information from the digital world to improve citizens’ quality of life. This article aims to model the knowledge required for automatic problem detection to reproduce citizens’ awareness of problems from the analysis of text-based user-generated content items. Therefore, this research focuses on two primary goals. On the one hand, we present the underpinnings of the ontological model that categorises the types of problems affecting citizens’ quality of life in society. In this regard, this study contributes significantly to developing an ontology based on the social-sensing paradigm to support the advance of smart societies. On the other hand, we describe the architecture of the text-processing module that relies on such an ontology to perform problem detection, which involves the tasks of topic categorisation and keyword recognition.
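A hedged sketch of the keyword-recognition task this abstract describes, assuming a flat category-to-keyword mapping in place of the paper's full ontology; the categories and keywords below are invented for illustration:

```python
# Hedged sketch of ontology-backed problem detection in citizen text; this flat
# category-to-keyword mapping is an invented stand-in for the paper's ontology.
PROBLEM_ONTOLOGY = {
    "waste": {"garbage", "litter", "dump"},
    "roads": {"pothole", "asphalt", "crosswalk"},
    "lighting": {"streetlight", "dark", "lamp"},
}

def detect_problems(text):
    """Return matched problem categories with the keywords that triggered them."""
    tokens = set(text.lower().split())
    hits = {cat: kws & tokens for cat, kws in PROBLEM_ONTOLOGY.items()}
    return {cat: kws for cat, kws in hits.items() if kws}

print(detect_problems("Huge pothole and a broken streetlight on Main St"))
# -> {'roads': {'pothole'}, 'lighting': {'streetlight'}}
```

In the paper's architecture, the ontology also encodes relations between problem types; this sketch only shows the keyword-matching surface of that idea.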
Article
Full-text available
The COVID-19 pandemic required people to navigate a complex information landscape situated in changing and uncertain environments. In places like Australia, where rigid restrictions were in place for over a year, most did so from their homes. Although expert advice cautions that inflexible government measures undermine compliance, research indicates that many Australians demonstrated both willingness and intention to follow these preventative measures. We examine the conditions underpinning this seeming anomaly by studying how Australian residents made sense of the COVID-19 health crisis and its governance. Semi-structured interviews with 40 participants show how they sought out various forms of information and knowledge to produce meaning in actionable ways—in short, their sensemaking. Building on insights from information science and regulatory governance research, we trace individuals' sensemaking practices, capturing how government-backed messaging became interconnected with the diverse forms of information and knowledge drawn on by participants. Findings illustrate how sensemaking practices shifted over the first two years of the pandemic and how active information-seeking often reinforced trust in government responses during that time. This analysis demonstrates how sensemaking can inform the development of bottom-up pathways for encouraging compliance with public health interventions and promoting health literacy, especially in times of crisis.
Article
Health organizations and systems rely on increasingly sophisticated informatics infrastructure. Without anti-racist expertise, the field risks reifying and entrenching racism in information systems. We consider ways the informatics field can recognize institutional, systemic, and structural racism and propose the use of the Public Health Critical Race Praxis (PHCRP) to mitigate and dismantle racism in digital forms. We enumerate guiding questions for stakeholders along with a PHCRP-Informatics framework. By focusing on (1) critical self-reflection, (2) following the expertise of well-established scholars of racism, (3) centering the voices of affected individuals and communities, and (4) critically evaluating practice resulting from informatics systems, stakeholders can work to minimize the impacts of racism. Informatics, informed and guided by this proposed framework, will help realize the vision of health systems that are more fair, just, and equitable.
Article
Full-text available
The Internet of Things (IoT) represents a powerful new paradigm for connecting and communicating with the world around us. It has the potential to transform the way we live, work, and interact with our surroundings. IoT devices transmit information over the Internet, most of them in different data formats even though they may be communicating similar concepts. This often leads to data incompatibilities and makes it difficult to extract the knowledge underlying that data. Because of the heterogeneity of IoT devices and data, interoperability is a challenge, and efforts are underway to overcome this through research and standardization. While data collection and monitoring in IoT systems are becoming more prevalent, contextualizing the data and taking appropriate actions to address issues in the monitored environment is still an ongoing concern. Context awareness is a highly relevant topic in IoT, as it aims to provide a deeper understanding of the data collected and enable more informed decision-making. In this paper, we propose a semantic ontology designed to monitor global entities in the IoT. By leveraging semantic definitions, it enables end-users to model the entire process from detection to action, including context-aware rules for taking appropriate actions. The advantages of using semantic definitions include more accurate and consistent data interpretation, which improves the overall monitoring process and enables more effective decision-making based on the collected insights. Our proposal includes semantic models for defining the entities responsible for monitoring and executing actions, as well as the elements that need to be considered for an effective monitoring process. Additionally, we provide a new definition for the components known as gateways, which enable connection and communication between devices and the Internet. Finally, we show the benefits of our ontology by applying it to a critical infrastructure domain where a rapid response is vital to prevent accidents and malfunction of the entities.
Article
Within poultry production systems, models have provided vital decision support, opportunity analysis, and performance optimization capabilities to nutritionists and producers for decades. In recent years, due to the advancement of digital and sensor technologies, 'Big Data' streams have emerged, optimally positioned to be analyzed by machine-learning (ML) modeling approaches, with strengths in forecasting and prediction. This review explores the evolution of empirical and mechanistic models in poultry production systems, and how these models may interact with new digital tools and technologies. This review will also examine the emergence of ML and Big Data in the poultry production sector, and the emergence of precision feeding and automation of poultry production systems. There are several promising directions for the field, including: (1) application of Big Data analytics (e.g., sensor-based technologies, precision feeding systems) and ML methodologies (e.g., unsupervised and supervised learning algorithms) to feed more precisely to production targets given a 'known' individual animal, and (2) combination and hybridization of data-driven and mechanistic modeling approaches to bridge decision support with improved forecasting capabilities.
Article
Medical authority is often thought to be threatened by lay access to information, but how does professional authority work when citizens have more knowledge and choices? We seek to understand how professional authority works in doctor-patient relationships and what each side does to navigate medical encounters. Our abductive study is relational as it builds on qualitative interviews with both doctors and patients. While doctors and patients each try to steer the encounter towards their desired outcomes, they also employ a series of 'connective tactics' to maintain a good, professional relationship. These connective tactics are often draped in a 'tactful' and informal manner so as not to threaten the continuous authority relationship between professionals and citizens. Both sides have a repertoire of how to act on authority relations, often supported by courteous attempts to not insist on formal superiority or patient rights. Each side shifts between what may seem like traditional and connective ways to perform medical authority. Doctors can continue to act as knowledge authorities if they also at least appear to be equals with patients; and patients can use internet findings to get involved in medical decisions as long as they pretend to still respect medical authority.
Article
Full-text available
How to ensure the normal production of industries in an uncertain emergency environment has aroused much concern in society. Selecting the best emergency material suppliers using a multicriteria group decision making (MCGDM) method helps ensure the normal production of industries in this environment. However, few studies in emergency environments consider the impact of the decision order of decision makers (DMs) on the decision results. Therefore, to fill this research gap, we propose an extended MCGDM method with the following main steps. Firstly, the DMs give their assessments of all alternatives. Secondly, we use the AHP method and the entropy weight method to weight the criteria and the DMs. Thirdly, we use the proposed intuitionistic fuzzy hybrid priority weight average (IFHPWA) operator to aggregate the evaluation information and the TOPSIS method to rank all the alternatives. Finally, the proposed method is applied to a case to demonstrate its practicability and effectiveness. The proposed method considers the influence of the decision order of the DMs on the decision results, which improves the accuracy and efficiency of the decision-making results.
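A hedged sketch of the closing TOPSIS ranking step this abstract names, applied to a crisp decision matrix with illustrative numbers; the intuitionistic fuzzy aggregation (IFHPWA) that precedes it in the paper is omitted here:

```python
# Hedged sketch of TOPSIS ranking on a crisp decision matrix; the matrix and
# weights are illustrative, not from the paper's case study.
import numpy as np

X = np.array([[7.0, 9.0, 6.0],     # rows: alternatives, cols: benefit criteria
              [8.0, 6.0, 8.0],
              [6.0, 8.0, 7.0]])
w = np.array([0.5, 0.3, 0.2])      # criterion weights (e.g. AHP/entropy output)

V = (X / np.linalg.norm(X, axis=0)) * w         # weighted normalised matrix
ideal, anti = V.max(axis=0), V.min(axis=0)      # positive/negative ideal points
d_pos = np.linalg.norm(V - ideal, axis=1)
d_neg = np.linalg.norm(V - anti, axis=1)
closeness = d_neg / (d_pos + d_neg)             # relative closeness to the ideal
print(np.argsort(-closeness))                   # alternatives, best first
```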
Article
The urgency of climate change mitigation, rising energy prices and geopolitical crises make a quick and efficient energy transition in the building sector imperative. Building owners, housing associations, and local governments need support in the complex task of building sustainable energy systems. Motivated by the calls for more solution-oriented, practice-focused research regarding climate change and guided by design science research principles, we address this need and design, develop, and evaluate the web-based decision support system NESSI. NESSI is an open-access energy system simulator with an intuitive user flow to facilitate multi-energy planning for buildings and neighborhoods. It calculates the technical, environmental, and economic effects of 14 energy-producing, consuming, and storing components of the electric and thermal infrastructure, considers time-dependent effects, and accounts for geographic as well as sectoral circumstances. Its applicability is demonstrated with the case of a single-family home in Hannover, Germany, and evaluated through twelve expert interviews.
Conference Paper
Full-text available
Driving requires continuous decision making from a driver taking into account all available relevant information. Automating driving tasks also automates the related decisions. However, humans are very good at dealing with bad quality, fuzzy, informal and incomplete information, whereas machines generally require solid quality information in a formalized format. Therefore, the development of automated driving functions relies on the availability of machine-usable information. A digital twin contains quality controlled information collected and augmented from different sources, ready to be supplied to such an automated driving function. An information model that describes all conceivably relevant information is necessary. To this end, a list of requirements that such an information model should meet is proposed and each requirement is argued for. Based on the anticipated services and applications that such a system should support, a collection of requirements for system architecture is derived. Information modeling is performed for selected relevant information groups. A system architecture has been proposed and validated with three different implementations, addressing several different applications to support decisions at a highway tunnel construction site in Austria and throughout the Test Bed Lower Saxony in Germany.
Article
Full-text available
This study measures the relationship between tacit knowledge sharing and innovation in the Polish (n = 350) and US (n = 379) IT industries. Conceptually, the study identifies the potential sources of tacit knowledge development by individuals. That is, the study examines how "learning by doing" and "learning by interaction" lead to a willingness to share knowledge and, as a consequence, to support process and product/service innovation. This study empirically demonstrates that tacit knowledge internalization and externalization (awareness and sharing) significantly mediate between tacit knowledge experimentation and socialization (acquisition) and its final combination (knowledge in action). While such theoretical assumptions already exist, they have not yet been empirically explained and revealed in a single structural model. Further, this empirical approach enabled a demonstration that internalization and externalization of tacit knowledge may occur consciously or unconsciously with equal success. Even so, the study also showed conscious tacit knowledge's greater impact on innovation. Therefore, an organizational effort to manage autonomous, informal, and strongly contextual tacit knowledge is worthwhile and creates the capacity for superior competitive advantage. Finally, this study also demonstrates that national context influences tacit knowledge acquisition. In the US, "learning by doing" is dominant, whereas in Poland, "learning by interaction" and critical thinking are more common. This might be related to factors such as risk acceptance that could be studied in more detail and provide opportunities for future research.
Article
Objective: Diagnostic radiography education research is often aimed at developing new academic theory or pedagogy to instil evidence-based practice and bridge the theory-practice gap. However, there has been little empirical research on how knowledge is created and shared in the clinical learning environment. This paper offers a new perspective on professional knowledge sharing in radiography education through the theory of clinical mindlines.
Key findings: Scrutinising clinical mindlines theory against current radiography education literature highlights issues with our conceptualisation of knowledge, and gaps in our understanding of how professional knowledge is created, shared, and accessed in radiography education. Empirical research exploring these factors, particularly in the clinical learning environment, is largely absent from the current literature.
Conclusion: Discourse on knowledge sharing in radiography education has historically been dictated by pedagogical theory and established within an academic setting. The theory of clinical mindlines offers terminology and a framework rooted in clinical and organisational contexts, allowing us to study clinical learning and education more effectively.
Implications for practice: Clinical mindlines have been used effectively across the healthcare landscape to understand and improve the movement of knowledge across boundaries. Radiography educators and researchers can use this new perspective to recognise the processes which aid knowledge sharing between diverse stakeholder groups. Radiographers and students can use this concept to reflect on their teaching and learning in practice to identify moments for more effective knowledge sharing.
Article
Full-text available
The concept of planetary mapping constitutes different activities within different contexts. Much like the field of cartography, it is an amalgamation of science, techniques, and artistic disciplines. It has undergone considerable changes over the last decades to cope with increasing demands related to data management, analysis, and visualization. Planetary mapping employs abstraction, which involves simplifications and generalizations. It aims to produce accessible visualization of planetary surfaces to gain insights and knowledge. Here, we show that different manifestations of this concept are interdependent and we discuss how different mapping concepts relate to each other semantically. We reason that knowledge gain can only be achieved through thematic mapping. The reasoning for systematic mapping and exploration is an intellectual product of thematic mapping. In order to highlight these relationships, we (a) develop in-depth definitions for different types of planetary mapping, (b) discuss data and knowledge flow across different mapping concepts, and (c) highlight systemic limitations related to data that we acquire and attempt to abstract through models. We finally develop a semantic proto-model that focuses on the transformation of information and knowledge between mapping domains. We furthermore argue that due to compositionality, map products suffer not only from abstraction but also from limitations related to uncertainties during data processing. We conclude that a complete database is needed for mapping in order to establish contextualization and extract knowledge. That knowledge is needed for reasoning for planning and operational decision making. This work furthermore aims to motivate future community-based discussions on functional semantic models and ontologies for the future development of knowledge extraction from thematic maps.
Article
Building on previous work to define the scientific discipline of biomedical informatics, we present a framework that categorizes fundamental challenges into groups based on data, information, and knowledge, along with the transitions between these levels. We define each level and argue that the framework provides a basis for separating informatics problems from non-informatics problems, identifying fundamental challenges in biomedical informatics, and provides guidance regarding the search for general, reusable solutions to informatics problems. We distinguish between processing data (symbols) and processing meaning. Computational systems, that are the basis for modern information technology (IT), process data. In contrast, many important challenges in biomedicine, such as providing clinical decision support, require processing meaning, not data. Biomedical informatics is hard because of the fundamental mismatch between many biomedical problems and the capabilities of current technology.