Article

The knowledge pyramid: A critique of the DIKW hierarchy

Author:
Martin Frické

Abstract

The paper evaluates the data-information-knowledge-wisdom (DIKW) hierarchy. This hierarchy, also known as the 'knowledge hierarchy', is part of the canon of information science and management. Arguments are offered that the hierarchy is unsound and methodologically undesirable. The paper identifies a central logical error that DIKW makes. The paper also identifies the dated and unsatisfactory philosophical positions of operationalism and inductivism as the philosophical backdrop to the hierarchy. The paper concludes with a sketch of some positive theories, of value to information science, on the nature of the components of the hierarchy: that data is anything recordable in a semantically and pragmatically sound way, that information is what is known in other literature as 'weak knowledge', that knowledge also is 'weak knowledge', and that wisdom is the possession and use, if required, of wide practical knowledge, by an agent who appreciates the fallible nature of that knowledge.


... Step 1: I review the notion of the SMV (symbols-meaning-value) space for describing, understanding, and representing human and machine perception, cognition, and action [97]. The concept of the SMV space weaves together ideas from trilevel thinking [87,95], the data-knowledge-wisdom (DKW) hierarchy [2,23,72,85,93], and the perception-cognition-action (PCA) trilogy [39,62]. Metaphorically speaking, the SMV space is a conception that covers the three fundamental aspects of human and machine intelligence in terms of seeing, knowing, and doing. ...
... The concept of the SMV space weaves together three powerful ideas [97]: the trilevel categorization of communications problems (i.e., technical problems of transmitting symbols, semantic problems of conveying meaning, and effectiveness problems of affecting conduct) of Weaver [87], the data-knowledge-wisdom (DKW) hierarchy in information science and management science [2,23,72,85,93], and the perception-cognition-action (PCA) trilogy in psychology and cognitive science [39,62]. The three labels of the SMV space, namely, 'Symbols,' 'Meaning,' and 'Value,' capture the basic and essential concepts underlying human/machine functions, intelligence, and actions. ...
... Weaver's elegant categorization simply divides a wide range of communications problems into three broad classes, clearly reflects a control-support relationship among the three levels, and offers a sequential approach to solving communications problems. The data-information-knowledge-wisdom (DIKW) hierarchy or the DIKW pyramid is a central concept in information science and management science [2,23,72,85]. There are two possible interpretations of the DIKW hierarchy [95]. ...
Article
Full-text available
Recent years have witnessed a rapidly-growing research agenda that explores the combined, integrated, and collective intelligence of humans and machines working together as a team. This paper contributes to the same line of research with three main objectives: a) to introduce the concept of the SMV (Symbols-Meaning-Value) space for describing, understanding, and representing human/machine perception, cognition, and action, b) to revisit the notion of human-machine symbiosis, and c) to outline a conceptual framework of human-machine co-intelligence (i.e., the third intelligence) through human-machine symbiosis in the SMV space. By following the principle of three-way decision as thinking in threes, triads of three things are used for building an easy-to-understand, simple-to-remember, and practical-to-use framework. The three elements of the SMV space, namely, Symbols, Meaning, and Value, are closely related to the three basic human/machine functions of perception, cognition, and action, which can be metaphorically described as the seeing-knowing-doing triad or concretely interpreted as the data-knowledge-wisdom (DKW) hierarchy. Human-machine co-intelligence emerges from human-machine symbiosis in the SMV space. As the third intelligence, human-machine co-intelligence relies on and combines human intelligence and machine intelligence, is a higher level of intelligence above either human intelligence or machine intelligence alone, and is greater than the sum of human intelligence and machine intelligence. There are three basic principles of human-machine symbiosis, i.e., unified oneness, division of labor, and coevolution, for nurturing human-machine co-intelligence.
... Secondly, a review of the literature on the model made clear that the criticisms the model has received would not affect its adequacy as a basis for categorizing personal data. The criticism that there are many different versions of the steps in the hierarchy and much debate about the definitions of each level (Frické, 2009) is not problematic here, because the model would be adapted and applied to personal data, meaning the specific definitions provided in this chapter and adopted in relation to the new approach to categorizing personal data are the only ones which matter. Thus, the various different versions are actually a positive, because from these the most appropriate steps and definitions for the purpose of a model for categorising 'personal data' can be used. ...
... Thirdly, Frické's (2009) criticism of the DIKW Hierarchy, that it ignores the huge domain of the unobservable for which no instruments of measurement exist is also not relevant here. ...
Thesis
This thesis contributes to the field of privacy and data protection law, within both Law and Computer Science, by helping to better understand how to increase the transparency of personal data processing and to categorise personal data. To counter the threat to the privacy of individuals which increasing advancements in Information Technology have created, Data Protection laws have been introduced, which include the key principle of transparency. However, as the de facto method of compliance with the obligation to inform (which mandates the provision of certain information about personal data processing to individuals), Privacy Policies have continuously been criticised in their ability to make processing transparent. This problem makes the study of how to increase the transparency of personal data in the context of providing information to individuals about the processing of their personal data a key research area in both Law and Computer Science. In researching this problem, this thesis begins by highlighting a gap in the current literature due to the assumption that the problem lies in how information about processing is presented, summarised or communicated, rather than questioning what information is required for processing to be transparent. The finding that Social Networking Sites provided information about the specific personal data they processed in their Privacy Policies, despite the UK data protection Regulator not making this a recommendation led to the next contribution, a critical analysis of the previous and current data protection law of the EU and the UK on when it is a requirement to inform individuals about the specific personal data being processed. 
This analysis highlighted that despite its benefits in increasing transparency, organisations are not always required to provide information about the specific personal data they process under the obligation to inform and where they are, the term ‘category’ is used to differentiate between personal data, without a complete categorisation or sufficient guidance on how to do this beyond the categorisation of ‘Special Categories’ of personal data. This gap has led to various parties inferring categorisations from the law, or creating their own, without following a categorisation methodology or taking a consistent approach. The result is inconsistent approaches to categorisation of personal data, which fail to achieve the aims of the principle of transparency. The final contribution of this thesis is a proposed categorisation of personal data, based on categorisation methodology and the Data Information Knowledge Wisdom model in Computer Science, which aims to support organisations in increasing the transparency of their personal data processing and can be built upon in the future to support compliance with the Framework’s wider compliance requirements.
... Information science developed its core theory from librarianship, archival studies, and similar documentation fields (Fricke, 2009). However, the fact that the informatics discipline is built on empirical, evidence-based phenomena distinguishes its definition of data from that of the humanities. ...
... For example, the temperature of a room is data; the answer to the question of what that temperature is constitutes information; and the decision to change it because the temperature is or is not found suitable (a method of changing the information of the temperature data) constitutes knowledge (Fricke, 2009). ...
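The room-temperature example can be sketched as a minimal data → information → knowledge progression in code. This is only an illustration of the excerpt's point; the variable names and the comfort thresholds are invented for the sketch, not taken from the cited work:

```python
# Data: a raw reading, carrying no interpretive context on its own.
reading = 28.5  # degrees Celsius

# Information: the reading in context, answering the question
# "what is the temperature of the room?"
information = f"The room temperature is {reading} degrees Celsius."

# Knowledge: a rule for acting on the information, i.e. deciding whether
# the temperature is suitable and what change, if any, to make.
def decide_adjustment(temp_c, comfortable=(20.0, 24.0)):
    low, high = comfortable
    if temp_c < low:
        return "heat"
    if temp_c > high:
        return "cool"
    return "no change"

print(information)
print(decide_adjustment(reading))
```

Each step consumes the previous one: the bare number becomes an answer to a question, and the answer becomes the input to a decision rule.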
Thesis
Cultural heritage artifacts yield versatile data and information. Today, recording heritage data is conducted via digital acquisition tools and methods. Despite the digital practice in heritage digitization, final representations are still limited to two-dimensional drawings, especially in conservation actions. As a result of conventional implementation habits, conservation actions remain only partially integrated into the digital workflow, and integrity issues thus remain an open research question. To remedy the gap, this study offers a methodology for sustainable management of heritage information, bridging technological advances and the practical needs of typical conservation actions. To tackle the research problem, the remains in the Erythrae archaeological site in Turkey, the in-situ remains of the Heroon, and the scattered stones around it served as the case study. By revisiting the conservation process, this study established a new data-driven conservation action process to offer a fully functional heritage information representation and management process. These actions are as follows: (i) data acquisition, (ii) data processing, (iii) information management, and (iv) curation. The study conducted digital context capturing methods, image-based (photogrammetry) and range-based (terrestrial laser scanning) techniques, for the data acquisition step. Next, the researchers analyzed the material culture of the remains in the data processing phase. Rendering the synthesis and intervention decisions is the third step of the conservation actions process. In the last phase, the workflow utilized the heritage building information modeling (HBIM) platform for the curation process. Consequently, this study offers a state-of-the-art management workflow of multi-dimensional heritage information modeling and a novel integration method into the conservation process paradigm.
The offered method is open to adjustments and calibration for other cultural heritage artifacts and intended to be as comprehensive as possible for benefiting different heritage applications at large.
... Access to 'agricultural knowledge' is key to transforming the livelihoods of the rural poor into ones with increased income stability and food security (Lwoga et al., 2010). Knowledge is filtered from information, or in other words, information is connected to knowledge through the data-information-knowledge hierarchy (Frické, 2009). ...
... The KG (i.e. the smashHit legal KG and its earlier version, the CampaNeo KG) is the main data source for this work and is based on the smashHitCore ontology, which was presented earlier in section 4.1. The KG helps transform consent data into information and that information into knowledge in a machine-readable format with the help of semantics (climbing up the data, information, knowledge, wisdom (DIKW) [129] pyramid of knowledge management). This section presents details about the semi-automatic creation of the KG from structured contents (e.g. ...
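The climb from consent data to machine-readable knowledge described in the excerpt can be illustrated with a minimal sketch. Plain Python tuples stand in for RDF triples, and every predicate and class name below is invented for illustration; the actual KG is built on the smashHitCore ontology, not these names:

```python
# A raw consent record (data): values with no machine-readable semantics.
raw_consent = {
    "id": "c42",
    "subject": "alice",
    "purpose": "traffic-analysis",
    "granted": True,
}

def to_triples(record):
    """Lift a raw record into subject-predicate-object triples so that
    its semantics (types, relations) become explicit and machine-readable."""
    s = f"consent:{record['id']}"
    return [
        (s, "rdf:type", "ex:Consent"),
        (s, "ex:givenBy", f"person:{record['subject']}"),
        (s, "ex:forPurpose", f"purpose:{record['purpose']}"),
        (s, "ex:status", "granted" if record["granted"] else "revoked"),
    ]

for triple in to_triples(raw_consent):
    print(triple)
```

Once the record is expressed as triples, generic graph tooling can query and reason over it, which is what makes the DIKW climb possible for machines.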
Thesis
Full-text available
The General Data Protection Regulation (GDPR), which came into effect in May 2018, triggered a major technological shift towards greater transparency in data sharing. An emphasis has been put on the rights of individuals, especially European citizens, regarding their personal data sharing. Despite the fact that data sharing has been a widely researched topic for years, there is a lack of solutions that enable the transparent implementation of consent in an easily understandable manner for both humans and machines in compliance with GDPR. This thesis presents a knowledge graph-based approach for consent representation and implementation that supports machines and humans in making sense of consent through its entire life-cycle. This is achieved by combining approaches from the computer science and behavioural change fields and by considering the comprehension needs of both humans and machines. This thesis demonstrates the feasibility of the approach through its successful adoption and implementation in two industrial data sharing use cases in the smart cities and insurance domains from the smashHit and CampaNeo projects. The main objectives of this work are to use knowledge graphs to semantically represent the life-cycle of informed consent and to visually represent it to individuals in an effort to increase legal awareness of the significance of consent. The visualisations place emphasis on the pre- and post-consent stages, including how to request consent in an informed manner and what happens to one’s data after consent is given. Incentives, in the form of gamification, are further used to overcome issues such as blindly given consent and to raise consent rates.
... To generate knowledge, this information must subsequently be linked by considering the relevant context [17]. ...
Chapter
The digital twin (DT) is a lifecycle-spanning concept applied for the systematic management and efficient use of digital artefacts (data and models) associated with individual entities or entire system networks in the course of digitalisation. A multitude of data and models related to an aircraft with its components, processes and resources are collected during design, manufacturing, operation and maintenance. The integration of such digital artefacts can, in turn, contribute to making workflows more effective and efficient in different lifecycle phases. However, such approaches usually fail due to the large number of heterogeneous information silos and the difficulties in linking them with each other. In this context, semantic technologies (ST) have the potential to counteract such problems and to increase interoperability as well as reusability. The aim of this paper is to present the application potentials of ST for DTs of aircraft with their components and systems in the lifecycle phases of design, manufacturing and maintenance. For this purpose, typical digital artefacts and the use of ontologies for their efficient management in DTs are described. In the first step, each lifecycle phase is considered separately with its data and models for products, processes and resources, together with a description of the application potentials. In the second step, cross-lifecycle application potentials are described.
... Data is the core of any DMS and is a symbolic representation of observable or non-observable properties. In other words, data are the givens of any kind that lead to information, knowledge, and wisdom [10]. DMS store, process, retrieve, and deliver structured, unstructured, semi-structured, and streaming data to support data organization [1]. ...
Conference Paper
Full-text available
In the era of digitalization, healthcare has become highly dependent on data management. As a result, health data management systems have become increasingly important in cost reduction, treatment improvement, and healthcare procedures enhancement. This study explores blockchain-based health data management systems and their development factors in the context of smart city assets. The features and challenges of blockchain-based development solutions are explored based on the General Data Protection Regulation act and Regulations for the Directorate for e-Health of Norway. Latent Semantic Analysis correlation examination and word cloud analysis were conducted on scholarly documents and Tweets and a conceptual smart asset development framework for health data management systems has been proposed from a Scandinavian point of view. Moreover, based on the findings, this paper proposes a conceptual patient-centered blockchain-based architecture for the development of current health data management systems in Scandinavia.
... According to the "Data, Information, Knowledge, Wisdom" (DIKW) pyramid, data is mixed, unstructured, and largely without meaning on its own, as it could be random numbers, letters, or symbols. On the other hand, information is organized data which has value and importance to certain people or organizations (Sharma, N., 2008; Frické, M., 2009). In other words, information is commonly understood as "processed data" or "data that has meaning." ...
Article
This paper aims to summarize the developments of previous studies done in the information overload field in the past five years and gives a prospect for future research in this field using the systematic literature review method. The results show that very limited publication activity has been done in the area of information overload among online distance learners. It is anticipated that this paper will trigger further studies that could focus on the impact of information overload on education fields. Keywords: Information Overload; Distance Learners; Online Learning; Systematic Literature Review. eISSN: 2398-4287 © 2022. The Authors. Published for AMER ABRA cE-Bs by E-International Publishing House, Ltd., UK. This is an open-access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/). Peer-review under the responsibility of AMER (Association of Malaysian Environment-Behavior Researchers), ABRA (Association of Behavioral Researchers on Asians), and cE-Bs (Centre for Environment-Behavior Studies), Faculty of Architecture, Planning & Surveying, Universiti Teknologi MARA, Malaysia.
... Information is known as "processed data" or "data that has a meaning", such as name, id numbers, and address (Frické, M., 2009). The term "Information Security" is usually associated with confidentiality, integrity, and availability (CIA) of both electronic and physical information from unauthorized access (Qadir, S., & Quadri, S. M. K., 2016). ...
Article
Information such as bank access, password, and location data stored in the smartphone has become a primary target for cybercriminals. As users are frequently cited as the weakest link in the information security chain, there is a need to investigate users' security behaviour in the smartphone context. Using the systematic literature review approach, a total of 48 research articles were analyzed to summarize the developments of the information security literature on smartphone users. The findings suggest that the qualitative approach is the most adopted approach and Protection Motivation Theory is the most adopted theory in this field. Keywords: Smartphone user; Information Security; Security Behaviour; Literature review.
... The opening of public data comes, moreover, at a time when data are conceived of as a raw material (Ribes & Jackson, 2013) indispensable to the creation of added value. The knowledge-pyramid schema (Ackoff, 1989), commonly accepted despite its reductionism, thus places data as the foundation of every form of information (Rowley, 2007; Frické, 2009; Floridi, 2011). While data have become indispensable for analysing a phenomenon, their diffusion is seen as a vector for the democratisation of information and expertise. ...
... The DIKW architecture has been studied by many scholars and applied to various fields [9][10][11][12]. However, some studies [13,14] propose that different models of DIKW should be interactive, but the relationship between the models is not clearly defined and discussed. Li et al. [15] proposed a new relationship between DIKW models: "purpose". ...
Preprint
Full-text available
Many factors can induce depression. Scholars have carried out research on the correlation between meteorological elements and the occurrence of depression. However, research conclusions are often inconsistent and even contradictory to each other. Very little research has investigated the automatic identification and resolution of the inconsistency of the conclusions in different papers. We propose a purpose-driven DIKWP modeling and synthesis of meteorology and depression. Firstly, based on a purpose-driven strategy, we map meteorological documents and depression documents in the forms of data, information, knowledge, and wisdom types as DIKWP Content Graphs. Secondly, through interactive ontological semantic calculation and reasoning in DIKWP Content Graphs among stakeholders, we retrieve the cognitive DIKWP Cognition Graphs from the stakeholders. Finally, through purpose-driven processing, we combine objective DIKWP Content Graphs and subjective DIKWP Cognition Graphs to form integrated DIKWP Semantic Graphs. In the DIKWP semantic space, which combines the originally discrete meteorological DIKW elements and the depression-occurrence DIKW with DIKWP models representing expert participants, we maximize the search space for identifying the semantic-level differences of the inconsistent conclusions and finding the resolution of the inconsistencies.
... Numerous academics have underlined the relevance of knowledge, information, and data in information and knowledge studies as a primary, well-accepted, and overlooked method (Rowley, 2007). Some researchers have referred to it as the discipline's core principles and fundamental essential aspects, doctrine and an element of the common field of IS (Frické, 2009; Kebede, 2010; Zins, 2007). According to Kebede (2010), developing knowledge is the "primary objective of Information Systems". ...
Article
This research examines critical elements of knowledge management and Industry 4.0 that assist human resources in the air transport industry. In addition, we look at the essential components that bridge the divide between Industry 4.0 and knowledge management. This paper uses the Grey-DEMATEL technique to comprehend the cause-and-effect factors and their interrelationships. A seven-scale ranking and sensitivity analysis were further applied for better results. It was observed from the findings that information sharing (F15) was the most significant factor for the aviation sector, as the causal factor ranked as the top criterion. Joint knowledge creation (F15) was ranked as the second topmost criterion as an effect factor. E-learning (F9) is the third-ranking effect factor. This paper showed that information sharing assists management performance and efficiency performance in the aviation sector.
... 4). (Frické, 2009;Rowley, 2007). ...
Article
The paper investigates knowledge sharing practices within academic libraries, with specific reference to the Nigerian Defence Academy (NDA) library. It has been observed that knowledge sharing in academic libraries is paramount to the survival of the library, yet there are limitations in KS practices at the Nigerian Defence Academy library. The objectives of the study are: to investigate what the current knowledge sharing tools and practices in the library are; to identify to what extent the staff at the NDA library utilize knowledge sharing tools; and to identify the strengths and limitations in knowledge sharing practices, etc. The research questions are: Does the library have a system in place that retains knowledge from experienced staff who have either left or retired? Does the library have satisfactory ICT that can allow for capturing and storing explicit knowledge and subsequently allow it to be accessed by librarians (e.g. databases, repositories)? Do you feel that amongst colleagues there is hoarding of knowledge? The population of the study comprises five (5) professional librarians in the NDA Library. The instrument used for collecting data was a questionnaire.
... In data science, after the acquisition step there are three main sub-steps for preparing the data so that it can be exploited in algorithms, or even used for knowledge discovery in databases (KDD). This model is generally represented by a pyramid (figure 7) of four strongly linked levels (the last level cannot be reached without passing through the transformation process of the other levels). In the literature, several works have relied on this model to explain the process of moving from data to knowledge [Ackoff, 1989; Zins, 2007; Rowley, 2007; Fricke, 2009; Ermine et al., 2012; Baskarda and Koronios, 2013; Allen, 2017]. Several definitions of the concepts of data and knowledge have been introduced in the literature; they are more or less relevant depending on the field of study. ...
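The pyramid's strictly sequential character (each level reachable only by transforming the level below it) can be sketched as a pipeline of functions. The sensor-reading scenario, names, and thresholds here are invented purely for illustration:

```python
raw = ["12.1", "11.8", "12.4", "35.0", "12.0"]  # data: raw sensor strings

def to_information(data):
    # Parse and contextualise the raw strings as readings in degrees Celsius.
    return [float(x) for x in data]

def to_knowledge(information):
    # Aggregate and interpret: a typical value plus readings that deviate from it.
    typical = sorted(information)[len(information) // 2]
    outliers = [v for v in information if abs(v - typical) > 5.0]
    return {"typical": typical, "outliers": outliers}

def to_wisdom(knowledge):
    # Decide how to act, allowing for the possibility that the data are wrong.
    if knowledge["outliers"]:
        return "inspect the sensor before trusting further readings"
    return "readings look consistent"

# Each level is computed only from the level below it; skipping a stage
# (e.g. deciding directly from the raw strings) is not possible here.
print(to_wisdom(to_knowledge(to_information(raw))))
```

The composition `to_wisdom(to_knowledge(to_information(raw)))` mirrors the pyramid: the top level is unreachable without the transformations beneath it.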
Thesis
As part of its fourth revolution, the industrial world is undergoing strong digitalisation across all sectors of activity. The research work of this thesis is set in the context of the transition towards the industry of the future, and more specifically in mechanical machining industries. This research addresses the problem of integrating industrial data and knowledge as support for decision-support systems (DSS). The proposed approach is applied to the failure diagnosis of connected machining machines. This thesis first proposes a conceptual framework for structuring the heterogeneous databases and knowledge bases needed to set up the DSS. Through an initial traceability function, the system capitalises on the description of the characteristics of all the particular events and harmful phenomena that may appear during machining. The diagnosis function makes it possible to understand the causes of these failures and to propose improvement solutions through the reuse of knowledge stored in the domain ontology and reasoning based on business rules. The proposed knowledge-based system is implemented in a global decision-support framework developed within the collaborative ANR project called Smart Emma. A practical application was carried out on two real databases from two different industrial companies.
... Ackoff notes that knowledge (as well as information and understanding) is focused on efficiency: it is based on logic that can be programmed and automated, in contrast to wisdom, which operates on the basis of judgement and adds value. Ackoff's model has been criticized for being narrow in its conceptions of data, information, and knowledge, and for not acknowledging the fluidity between the concepts (e.g., that data and information, or information and knowledge, are not always separate and distinct) (Frické, 2009; Weinberger, 2010). ...
Thesis
Government funding agencies and commissions have proposed that sharing, preserving, and providing access to more scientific research data will lead to increased reuse of data in academic research and result in greater knowledge and new discoveries. However, researchers encounter significant logistical, theoretical, methodological and ethical challenges to reusing data that hinder the achievement of these goals. One of the challenges researchers face is obtaining sufficient knowledge about data and the context of data creation to make a decision to reuse the data in their research. In this dissertation, I report on a mixed methods study to investigate how researchers set limits on the types and amounts of knowledge they obtain about data, and what influences them to do so. A more nuanced understanding of how and why researchers determine such thresholds can inform strategic measures to enhance support for data reuse. My study included a survey and semi-structured interviews and was conducted on a sample of researchers who reused data from the ICPSR data archive. I used Donna Haraway’s theory of situated knowledges and Herbert Simon’s theory of satisficing to develop conceptualizations of data and means of evaluating thresholds of knowledge that researchers obtained about data. I defined a concept called “reuse equilibrium”—when researchers determine data are sufficient to reuse to meet their research goals—and examined whether satisficing was a means by which researchers obtained knowledge to reach reuse equilibrium. I found that researchers lacked knowledge they desired about data and that this lack of knowledge frequently had a negative impact on their research. The type of knowledge researchers most often desired but were unable to obtain was “supplemental” knowledge that was not archived with the data and may never have been collected. 
While researchers lacked knowledge about the data they desired, I found that satisficing did not accurately represent their behavior in knowledge attainment. Instead, researchers sought to maximize their knowledge of data to meet personal aims (i.e., to reach “personal reuse equilibrium”) in environments characterized by pressures and incentives that favored the achievement of social norms and requirements (i.e., "social reuse equilibrium"). I concluded that an important way to improve the environment for reuse was to assist researchers in obtaining supplemental knowledge about data they desired, thus supporting their achievement of personal equilibrium. This could be done by facilitating more structured and intentional “conversations” between data creators and data reusers with the purpose to influence the data that are created in the first place. My findings about the knowledge researchers lack about data and the ways they seek to obtain it will be of interest to data reusers to gain a broader perspective on their colleagues' experiences. They will also be of interest to data creators, as well as data stewards, publishers, and other data intermediaries, to understand the knowledge researchers desire about data and the role they can play in helping researchers obtain it. Such findings, in addition to those about pressures and considerations in the reuse environment, will be of interest to funders and policy makers to gain insight into the ways current policies, practices, and incentives could be enhanced or changed to maximize the return on investment in primary research.
... Concerning the scientificness of the data, the web analytics technique relies on the big data concept of the contents shared on the whole web. From the epistemological and scientific perspective, different researchers have examined big data in terms of its ability and suitability to discover, appraise or validate theories and inductive inferences (Frické 2009, 2015; Rodgers 2010). Ongoing debates on the role of big data in scientific investigation (Hey, Tansley, and Tolle 2009; Schmidt and Lipson 2009; Järvinen and Karjaluoto 2015; Park and Kipp 2019) suggest that active experimentation can be used mainly for theory testing, yet pragmatic studies, i.e. future trend analyses and predictions, require passive observation (Worrall 2007; Floridi 2012). ...
Article
Full-text available
Blockchain has gained momentum as a disruptive technology in supply chain management, beyond its original introduction as a finance-related instrument. Nevertheless, the developing academic understanding and the limited practical implications lead to insufficient insights into the use of blockchain technology, particularly in the supply chain finance (SCF) domain. Thus, the expected potential of blockchain technology remains underexplored. Accordingly, this study explicates this situation by examining extant literature findings and web-based big data that can provide evidence about real needs in supply chains, and by investigating how blockchain emerges as a disruptive SCF-oriented technology. The study employs a web analytics method, Search Engine Results Page (SERP) analysis, which considers the trends in blockchain technology use and the interactions between blockchain, supply chain and finance appearing in Google searches. The SERP method examined real-time clicks, web traffic and the most commonly asked questions about blockchain. The SERP findings revealed that the interest in blockchain technology focused neither on finance nor on data privacy, as emphasised in the literature, but mainly on the benefits of increasing digitalisation and efficiency in supply chains. The results offer practical implications for capturing recent blockchain- and supply chain-related trends and designing more digital and efficient supply chains.
... Although Michael Polanyi's (1952, 1967) concepts of personal and tacit knowledge are used by many in the organizational knowledge management discipline to establish a framework for judging the epistemic quality of information claimed to be knowledge (discussed in more detail in Episode 4), I personally still follow Popper's concept that personal beliefs must be connected to external reality via some form of testing against that reality before they can be called knowledge. Contra Frické (2009), following Popper's fallibilism, there is no implication that data, information or knowledge in the strict senses of these words is necessarily true. Popper (1986) also argues that knowledge and information exist prior to any test of their truth. ...
Preprint
Full-text available
An extract from my unfinished book, Application Holy Wars or a New Reformation - A Fugue on the Theory of Knowledge (Rev. 26/4/2014) William P Hall (Kororoit Institute)
... Some believe that machines will never possess (or be possessed of) this uniquely human state. The simplicity and linearity of the DIKW model has been a focus of criticism, and it is worthwhile to consider these criticisms in light of issues faced when confronting performance data. According to Martin Frické, a strict reading of the DIKW model's pyramid structure presupposes a hierarchy built on "low level true factual statements" (data), and forbids users from going beyond the data, which is necessary to answer why questions. ...
... For many scholars, data are conceptualized as the basis of the knowledge management pyramid that progresses hierarchically to information, knowledge, and wisdom (Ackoff 1989). Commonly referred to as the DIKW model, the paradigm has faced criticism (e.g., Frické 2009); however, the scientific method requires scholars to engage with data to conduct analyses and develop theories. Several volumes have been published to guide aspects of the research process for translation and interpreting scholars (e.g., Saldanha and O'Brien 2014; Angelelli and Baer 2016; Malamatidou 2017; Mellinger and Hanson 2017). ...
Chapter
Data represent the foundation of the research endeavor and scholarly inquiry. In translation and interpreting studies (TIS), data are derived from a variety of sources, including text as data, survey responses, ethnographies, experiments, and observational research. Transdisciplinary approaches to TIS research continue to expand the types and sources of available data as well as the increased number of data collection techniques now available to scholars in the field. This chapter takes a broad view of data in describing collection methods, quantitative and qualitative data, emerging sources of data for TIS, and some of the challenges of data handling in order to investigate the philosophy of data as the raw material that is collected, generated, curated, and analyzed during the research process. To do so, we adopt a tripartite structure by keeping in mind a definition of data that recognizes that data aim to capture scientific truth, that agents instigate the collection, processing, storage, and dissemination of data, and that every dataset is incomplete.
... A frequently cited definition that provides a clear starting point for opening up the concept of knowledge is the data-information-knowledge-wisdom (DIKW) hierarchy, which defines "knowledge" as the application of data and derived information to answer "how" questions [14]. However, information systems research has often criticized the DIKW hierarchy as unsound and undesirable [15]. A more human-oriented definition by Davenport describes knowledge as a mixture of experience, intuition, values, contextual information, and expert insight [16]. ...
Article
Full-text available
The Internet-of-Things and ubiquitous cyber-physical systems increase the attack surface for cyber-physical attacks. They exploit technical vulnerabilities and human weaknesses to wreak havoc on organizations’ information systems, physical machines, or even humans. Taking a stand against these multi-dimensional attacks requires automated measures to be combined with people as their knowledge has proven critical for security analytics. However, there is no uniform understanding of information security knowledge and its integration into security analytics activities. With this work, we structure and formalize the crucial notions of knowledge that we deem essential for holistic security analytics. A corresponding knowledge model is established based on the Incident Detection Lifecycle, which summarizes the security analytics activities. This idea of knowledge-based security analytics highlights a dichotomy in security analytics. Security experts can operate security mechanisms and thus contribute their knowledge. However, security novices often cannot operate security mechanisms and, therefore, cannot make their highly-specialized domain knowledge available for security analytics. This results in several severe knowledge gaps. We present a research prototype that shows how several of these knowledge gaps can be overcome by simplifying the interaction with automated security analytics techniques.
... The hierarchical nature of the DIKW model has been called into question by researchers (e.g., Frické, 2009; Rowley, 2007) due to the conservative definitions of the four concepts. For example, the distinction between data and information is not so clear, as some view information as a type of data. ...
Chapter
Wisdom, both personal and collective, is largely missing from both the information science and knowledge management literature. Workplace culture and shared vision affect every level of an organization, in a positive or negative direction. A healthy culture and an optimistic shared vision can provide a climate for knowledge sharing and an opportunity for rich transfer of collective wisdom in our workplace communities. Wisdom evolves from knowledge and can be cultivated by knowledge and learning specialists. This chapter places wisdom as the desired result of successful knowledge management, provides an opportunity for scholars, students, and practitioners to leverage this rich resource in organizations, and extends the models, processes, and theories.
... In this definition, knowledge includes not only the perception and cognition of signs, but also aspects of their relevance and a recipient's ability to connect them mentally [195, 196]. ...
Article
Full-text available
Digital 3D modelling and visualization technologies have been widely applied to support research in the humanities since the 1980s. Since technological backgrounds, project opportunities, and methodological considerations for application are widely discussed in the literature, one of the next tasks is to validate these techniques within a wider scientific community and establish them in the culture of academic disciplines. This article resulted from a postdoctoral thesis and is intended to provide a comprehensive overview on the use of digital 3D technologies in the humanities with regards to (1) scenarios, user communities, and epistemic challenges; (2) technologies, UX design, and workflows; and (3) framework conditions as legislation, infrastructures, and teaching programs. Although the results are of relevance for 3D modelling in all humanities disciplines, the focus of our studies is on modelling of past architectural and cultural landscape objects via interpretative 3D reconstruction methods.
... Such a structure, depicted in Figure 1.2, refers loosely to a class of models for representing structural and/or functional relationships between data, information, knowledge, and wisdom. In subsequent years, the original structure has been criticized, extended and revisited [26,27,28]. Generally speaking, data is about raw facts, observations or perceptions, information is a subset of data that has been processed as to have context, relevance and purpose, while knowledge is the interpretation of information and represents justified beliefs about relationships among concepts. ...
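The layered definitions in the excerpt above (data as raw facts, information as data given context and purpose, knowledge as the interpretation of information) can be made concrete with a minimal sketch; all values, field names, and the threshold rule below are hypothetical, chosen only to make the distinction between levels visible:

```python
# Minimal, illustrative DIKW-style pipeline. All values and rules are hypothetical.

# Data: raw facts or observations, without context.
raw_data = [("2024-01-01", 21.5), ("2024-01-02", 22.1), ("2024-01-03", 30.4)]

# Information: the same data, now with context, relevance and purpose.
information = [
    {"date": d, "temp_c": t, "sensor": "greenhouse-1"}  # hypothetical context
    for d, t in raw_data
]

# Knowledge: an interpretation relating the information to a concept
# (here, a justified-belief style rule about which days count as "hot").
def interpret(records, threshold=25.0):
    """Return the dates whose temperature exceeds the threshold."""
    return [r["date"] for r in records if r["temp_c"] > threshold]

hot_days = interpret(information)
print(hot_days)  # → ['2024-01-03']
```

The point of the sketch is only that each layer adds something the one below lacks: context at the information level, an interpretive rule at the knowledge level.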
Thesis
The employment of personal robots or service robots has aroused much interest in recent years, with remarkable growth of robotics across different domains. The design of companion robots able to assist, share with, and accompany individuals with limited autonomy in their daily life is the challenge of the coming decade. However, the performance of today's robotic bodies and prototypes remains very far from meeting this challenge. Although sophisticated humanoid robots have been developed, much more effort is needed to improve their cognitive capabilities. Indeed, the above-mentioned commercially available robots and prototypes are still not able to adapt naturally to the complex environment in which they are supposed to evolve with humans. In the same way, the existing prototypes are not able to interact in a versatile way with their users. In fact, they are still very far from interpreting the diversity and complexity of perceived information, or from constructing knowledge about the surrounding environment. The development of bio-inspired approaches based on Artificial Cognition for perception and autonomous acquisition of knowledge in robotics is a feasible strategy to overcome these limitations. A number of advances have already led to the realization of an artificial-cognition-based system allowing a robot to learn and create knowledge from observation (association of sensory information and natural semantics). Within this context, the present work takes advantage of an evolutionary process for the semantic interpretation of sensory information, to make the machine's awareness of its surrounding environment emerge. The main purpose of the Doctoral Thesis is to extend the already accomplished research in order to allow a robot to extract, construct, and conceptualize knowledge about its surrounding environment.
Indeed, the goal of the doctoral research is to generalize the aforementioned concepts for an autonomous, or semi-autonomous, construction of knowledge from perceived information (e.g. by a robot). In other words, the expected goal of the proposed doctoral research is to allow a robot to progressively conceptualize the environment in which it evolves and to share the constructed knowledge with its user. To this end, a semantic-multimedia knowledge base has been created based on an ontological model and implemented through a NoSQL graph database. This knowledge base is the founding element of the thesis work, on which multiple approaches have been investigated based on semantic, multimedia, and visual information. The developed approaches combine this information through classic machine learning techniques, both supervised and unsupervised, together with transfer learning techniques for the reuse of semantic features from deep neural network models. Other techniques based on ontologies and the Semantic Web have been explored for the acquisition and integration of further knowledge into the knowledge base. The different areas investigated have been united in a comprehensive logical framework. The experiments conducted have shown an effective correspondence between interpretations based on semantic and visual features, from which emerged the possibility for a robotic agent to expand its knowledge-generalization skills even in unknown or partially known environments, which allowed the objectives set to be achieved.
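The core idea of a queryable semantic knowledge base can be sketched minimally as a store of subject-relation-object triples; a real system would use an ontological model and a NoSQL graph database as the thesis describes, and all entities and relations below are invented for illustration:

```python
# Toy triple store standing in for a semantic-multimedia knowledge base.
# Entities, relations, and file names are hypothetical.

triples = [
    ("cup", "is_a", "container"),
    ("cup", "located_on", "table"),
    ("cup", "has_image", "cup_042.png"),   # link to multimedia content
    ("table", "is_a", "furniture"),
]

def query(subject, relation):
    """Return all objects linked to `subject` by `relation`."""
    return [o for s, r, o in triples if s == subject and r == relation]

print(query("cup", "is_a"))  # → ['container']
```

A robot adding a newly observed fact would simply append a triple, which is (in highly simplified form) the "progressive conceptualization" the abstract describes.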
... (Actual) knowledge can be considered as what makes the translation of data/information into actions possible (Tiwana, 2002). Nowadays, the role of Industry 4.0 is linking knowledge more and more to the domain of information systems, together with data and information, hence creating a closer link between information systems and KM (Frické, 2009). Wisdom has so far been poorly defined and only scantily studied. ...
Article
Purpose: The effect of the transition toward digital technologies on today's businesses (i.e. the Industry 4.0 transition) is becoming increasingly relevant, and the number of studies that have examined this phenomenon has grown rapidly. However, systematizing the existing findings is still a challenge, from both a theoretical and a managerial point of view. In such a setting, the knowledge management (KM) discipline can provide guidance to address such a gap. Indeed, the implementation of fundamental digital technologies is reshaping how firms manage knowledge. Thus, this study aims to critically review the existing literature on Industry 4.0 from a KM perspective.
Design/methodology/approach: First, the authors defined a structuring framework to highlight the role of the Industry 4.0 transition along with absorptive capacity (ACAP) processes (acquisition, assimilation, transformation and exploitation), while specifying what is being managed, that is, data, information and/or (actual) knowledge, according to the data-information-knowledge (DIK) hierarchy. The authors then followed the systematic literature review methodology, which involves the use of explicit criteria to select publications to review and outlines the stages a process has to follow to provide a transparent and replicable review, and analyzed the existing literature according to the theoretical framework. This procedure yielded a final list of 150 papers.
Findings: By providing a clear picture of what scholars have studied so far on the Industry 4.0 transition in terms of KM, this literature review highlights that, among all the studied digital technologies, big data analytics is the one that has been explored the most in each phase of the ACAP process. A constructive body of research has also emerged in recent years around the role played by the internet of things, especially to explain the acquisition of data. On the other hand, some digital technologies, such as cyber security and smart manufacturing, have largely remained unaddressed. An explanation of the role of these technologies has been provided, from a KM perspective, together with the business implications.
Originality/value: This study is one of the first attempts to review the literature on the Industry 4.0 transition from a KM perspective, and it proposes a novel framework to read existing studies and on which to base new ones. Furthermore, the synthesis makes two main contributions. First, it provides a clear picture of the different digital technologies that support the four ACAP phases in relation to the DIK hierarchy. Accordingly, these results can emphasize what the literature has looked at so far, as well as which digital technologies have gained the most attention and their impacts in terms of KM. Second, the synthesis provides prescriptive considerations on the development of future research avenues, according to the proposed research framework.
... Kitchin's pyramid builds on similar visual representations of knowledge by Russell Ackoff (1989), Mortimer Adler (1986) and David Weinberger (2014). The analytical consistency of the pyramid has been critically evaluated and partially contested by various authors (Frické 2009; Rowley 2007). "Based on detailed topographic and population data, local high tide lines, and regional long-term sea-level commitment for different carbon emissions and ice sheet stability scenarios, we compute the current population living on endangered land at municipal, state, and national levels within the United States." (Strauss et al. 2015: 1) The scientific article includes a political message urging for severe climate protection measures: ...
... Much of this engagement occurs through the lens of the document or file (Buckland, 1997; Day, 2008, 2014; Rayward, 1994; Vismann and Winthrop-Young, 2008), critical engagement with the standard data-information-knowledge-wisdom (DIKW) hierarchy (e.g. Bawden, 2007; Frické, 2009) or the archive and its theories (Bowker and Star, 2000; Ernst and Parikka, 2013; Farge et al., 2013; Manoff, 2004; Røssaak, 2010; Seberger and Bowker, 2021). ...
Article
Full-text available
Purpose: This paper theorizes ubiquitous computing as a novel configuration of the archive. Such a configuration is characterized by shifts in agency underlying archival mechanics and a pronounced rhythmic diminution of such mechanics, in which the user's experiential present tense is rendered fundamentally historical. In doing so, this paper troubles the relationship between archival mechanics such as appraisal, accession and access; the archive as a site of historical knowledge production; and the pervasiveness of data-driven daily life.
Design/methodology/approach: By employing conceptual analysis, I analyze a classic vision of ubiquitous computing to describe the historicization of the present tense in an increasingly computerized world. The conceptual analysis employed here draws on an interdisciplinary set of literature from library and information science, philosophy and computing fields such as human-computer interaction (HCI) and ubiquitous computing.
Findings: I present the concept of the data perfect tense, which is derived from the future perfect tense: the "will have had" construction. It refers to a historicized, data-driven and fundamentally archival present tense characterizing the user's lived world, in which the goal of action is to have had created data for future unspecified use. The data perfect reifies ubiquitous computing as an archive, or a site of historical knowledge production predicated on sets of potential statements derived from data generated, appraised, acquisitioned and made accessible through and by means of pervasive "smart" objects.
Originality/value: This paper provides foundational consideration of ubiquitous computing as a configuration of the archive through the analysis of its temporalities: a rhythmic diminution that renders users' experiential present tenses as fundamentally historical, constructed through the agency of smart devices. In doing so, it contributes to ongoing work within HCI seeking to understand the relationship between HCI and history; introduces concepts relevant to the analysis of novel technological ecologies in terms of archival theory; and constitutes preliminary interdisciplinary steps towards highlighting the relevance of theories of the archive and archival mechanics for critiquing sociotechnical concerns such as surveillance capitalism.
... Many experts focusing on the knowledge hierarchy (Davenport and Prusak, 1998; Bellinger, Castro and Mills, 2004; Hey, 2004; Ahsan and Shah, 2006; Rowley, 2007; Sharma, 2008; Frické, 2009; Bernstein, 2009) agree on this structure consisting of four concepts. Experts regard this structure as the model that, for now, best explains and formulates the nature of knowledge. ...
... Although Michael Polanyi's (1952, 1967) concepts of personal and tacit knowledge are used by many in the organizational knowledge management discipline to establish a framework for judging the epistemic quality of information claimed to be knowledge (discussed in more detail in Episode 4), I personally still follow Popper's concept that personal beliefs must be connected to external reality via some form of testing against that reality before they can be called knowledge. Contra Frické (2009), following Popper's fallibilism, there is no implication that data, information or knowledge in the strict senses of these words is necessarily true. Popper (1986) also argues that knowledge and information exist prior to any test of their truth. ...
Presentation
This extract is from a literary fugue in five Episodes with an Interlude and is but one product of a hypertext tracking the evolutionary change in human cognition fueled by revolutions in cognitive technologies. This work extracts early sections from my hypertext, "Application Holy Wars or a New Reformation? A fugue on the theory of knowledge". In it I set out themes to be further developed along the book's "knowledge thread". The knowledge thread is grounded in Karl Popper's evolutionary epistemology, which is significantly modified as, through the course of the book, it is unified with Maturana and Varela's (1980) autopoietic definition of life. The discussion of autopoiesis is beyond the scope of this extract, but if you want to see how the unification works, please read Hall (2011). This extract begins with an operational definition of the slippery concept of "knowledge" and a summary of the evolutionary epistemology Sir Karl Popper developed in his later life. I then focus on the importance of knowing and the value of knowledge for living things, from my disparate points of view as a biologist interested in evolution and from my practical experience as a documentation and knowledge management systems analyst in a defense industry prime contracting organization. In this book I am particularly interested in how knowledge embodied in living systems (individual cells, multicellular organisms, or various kinds of social systems) is transduced and transformed via cognitive systems and dynamically organized structures into autonomy over the circumstances of their lives. This extract presents some of the groundwork for the knowledge thread. Other threads, outside the scope of this extract, include "evolution and revolution", "tools extending cognition", "autopoiesis", "thermodynamics and complexity", "evolution of human cognition", and "emergence of socio-technical organization".
... A popular reference model for conceptual clarification is presented by the so-called knowledge pyramid or DIKW hierarchy, which presents levels of increasing complexity from data to information (processed data) to knowledge (meaningful information) to wisdom (applied knowledge). But as noted by Jennifer Rowley (2007) and Martin Frické (2009), there has been limited theoretical discussion of this model, which seems to be based on problematic positivistic assumptions. In particular, it suggests that data (from Latin datum, "the given") can, at the most fundamental level, be considered neutral and objective, whereas the context factor does not come into play before the levels of processing, meaning attribution, and application. ...
Thesis
Full-text available
This dissertation is concerned with a systematic organization of the epistemological dimension of human knowledge in terms of viewpoints and methods. In particular, it will be explored to what extent the well-known organizing principle of integrative levels that presents a developmental hierarchy of complexity and integration can be applied for a basic classification of viewpoints or epistemic outlooks. The central thesis pursued in this investigation is that an adequate analysis of such epistemic contexts requires tools that allow to compare and evaluate divergent or even conflicting frames of reference according to context-transcending standards and criteria. This task demands a theoretical and methodological foundation that avoids the limitation of radical contextualism and its inherent threat of a fragmentation of knowledge due to the alleged incommensurability of the underlying frames of reference. Based on Jürgen Habermas’s Theory of Communicative Action and his methodology of hermeneutic reconstructionism, it will be argued that epistemic pluralism does not necessarily imply epistemic relativism and that a systematic organization of the multiplicity of perspectives can benefit from already existing models of cognitive development as reconstructed in research fields like psychology, social sciences, and humanities. The proposed cognitive-developmental approach to knowledge organization aims to contribute to a multi-perspective knowledge organization by offering both analytical tools for cross-cultural comparisons of knowledge organization systems (e.g., Seven Epitomes and Dewey Decimal Classification) and organizing principles for context representation that help to improve the expressiveness of existing documentary languages (e.g., Integrative Levels Classification). Additionally, the appendix includes an extensive compilation of conceptions and models of Integrative Levels of Knowing from a broad multidisciplinary field.
... This information is used to populate the ontology's [11] ABox using Owlready2 [63] and is available on GitHub as a fork of the original information model project. The approach was evaluated using the example of the data and models available for the extended pick and place unit (xPPU). The xPPU is a suitable demonstrator, since a wide variety of engineering documents is available, ranging from mechanical CAD models to IEC 61131-3 code to simulations to documentation. This collection of models amounts to 1.20 GB, including text and binary files in various file formats. ...
Article
Modern production systems can benefit greatly from integrated and up-to-date digital representations. Their applications range from consistency checks during the design phase to smart manufacturing to maintenance support. Such digital twins not only require data, information and knowledge as inputs but can also be considered integrated models themselves. This paper provides an overview of data, information and knowledge typically available throughout the lifecycle of production systems and the variety of applications driven by data analysis, expert knowledge and knowledge-based systems. On this basis, we describe the potential for combining data analysis and knowledge-based systems in the context of production systems and describe two feasibility studies that demonstrate how knowledge-based systems can be created using data analysis. This article is part of the theme issue ‘Towards symbiotic autonomous systems’.
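One way to picture the combination of data analysis and knowledge-based systems described above is a rule whose parameters are derived from data: statistics extracted from measurements become an explicit, reusable rule. Everything below (the readings, the three-sigma band) is a hypothetical sketch of that pattern, not the paper's actual method:

```python
import statistics

# Hypothetical sensor readings from a production system under normal operation.
readings = [4.9, 5.1, 5.0, 5.2, 4.8, 5.0, 5.1]

# Data analysis: derive a tolerance band from the data (three-sigma band).
mean = statistics.mean(readings)
sd = statistics.pstdev(readings)
lower, upper = mean - 3 * sd, mean + 3 * sd

# Knowledge-based system: the derived band becomes an explicit rule that can
# be stored, inspected, and applied to new measurements.
def check(value):
    """Flag measurements outside the data-derived tolerance band."""
    return "ok" if lower <= value <= upper else "anomaly"

print(check(5.05), check(7.3))
```

The separation matters: the rule itself is human-readable knowledge, while its parameters can be refreshed automatically as new data arrive.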
Book
Full-text available
The monograph is a synthetic, interdisciplinary authorial work presenting a view of education policy from the perspective of all relevant scientific disciplines. It does not aim to serve as a conceptual document for education reforms, although we do not rule out the possibility that it may also inspire institutions responsible for education policy. Despite this often polemical orientation (and, in fact, precisely because of it), the monograph is intended for the broad professional public concerned with education in the various contexts of its theoretical and practical interests.
Conference Paper
User interface design of today's safety-critical human-machine systems has a significant impact on human operators' situation awareness (SA). SA is composed of three levels: the perception (level 1), comprehension (level 2) and projection (level 3) of information. A significant share of accidents can be attributed to level 1 errors. This means that human operators have problems satisfying their information demand with the information supplied during task performance. While thoroughly checking user interface designs for information gaps is a standard in professional system design, it is a time-consuming and error-prone process. In this paper we introduce an information gap model, which allows investigation of inconsistencies between information supply and demand. We present a method to detect information gaps and assess the fit between information supply and demand. The method can be executed semi-automatically. We show the method's implementation in an integrated system modelling environment and demonstrate its application with an autopilot component in a course-change task on a ship's bridge. We performed an expert evaluation with maritime system engineers and a human factors ergonomist to estimate the applicability, benefits and shortcomings of the method. Overall, the evaluation results are promising and warrant further research on the method.
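The core comparison behind such an information gap analysis can be sketched as a set comparison between demanded and supplied information items. The item names and the simple coverage ratio below are illustrative assumptions, not the authors' actual model or metric:

```python
# Hypothetical information items for a ship-bridge course-change task.
demand = {"heading", "speed", "rudder_angle", "traffic"}   # needed for the task
supply = {"heading", "speed", "depth"}                     # shown on the interface

gaps = demand - supply      # demanded but not supplied: level-1 SA risk
unused = supply - demand    # supplied but not demanded: potential clutter

# A naive fitness score: fraction of the information demand that is covered.
fitness = len(demand & supply) / len(demand)

print(sorted(gaps), round(fitness, 2))
```

Even this crude version shows why the check can be semi-automated: once supply and demand are modelled explicitly, gap detection reduces to set operations.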
Article
Full-text available
Knowledge management and its effectiveness may be a potential catalyst for achieving project goals with optimum success and speed, or it may be a major cause of project failure when knowledge is inadequately managed. Recent scientific research has touched on the failure to facilitate the application of knowledge management mechanisms in all project phases, due to the temporary status of projects as well as deficiencies in applying the concept of accountability among specialists. This paper reports on one element of a wider research project undertaken with Abu Dhabi Police to identify enablers and barriers to knowledge management in police projects. The research was qualitative in nature and focused directly on addressing mechanisms and methods of integrating management knowledge to improve project success and develop the capability of human capital in policing. The results of the research identified nine enablers that support successful knowledge management in policing projects: a policy for the management of knowledge, institutional leadership in support of open communication and motivation, highlighting the value of knowledge management, linking knowledge management content with job descriptions, establishment of specific channels for communicating with project stakeholders, adequate technology resources, communication systems to manage knowledge between teams in separate locations, enabling social interaction to facilitate the process of knowledge management in daily operations, and applying processes for assessment and measurement in the use of social networks for knowledge management.
Article
Hepatocellular carcinoma (HCC) is the most common type of liver cancer with a high morbidity and fatality rate. Traditional diagnostic methods for HCC are primarily based on clinical presentation, methotrexate, imaging features, and histopathology. With the rapid development of artificial intelligence (AI), which is increasingly used in the diagnosis, treatment, and prognosis prediction of HCC, an automated approach to HCC status classification is promising. AI integrates labeled clinical data, trains on new data of the same type, and performs interpretation tasks. Several studies have shown that AI techniques can help clinicians and radiologists be more efficient and reduce the misdiagnosis rate. However, the breadth of AI technologies makes it difficult to choose which type of AI technology is preferable for a given problem and situation. Resolving this concern can significantly reduce the time required to determine the required healthcare approach and provide more precise and personalized solutions for different problems. In our review, we summarize existing research works and compare and classify their main results according to the data, information, knowledge, wisdom (DIKW) framework.
Article
Full-text available
Digital information infrastructures such as Google or Wikipedia are often compared to libraries. As traditional libraries, they support the circulation of knowledge resources. However, they are neither operated by nor designed for library institutions. In order to describe the contribution of libraries to the digital infrastructures of the 21st century more precisely, the term Library Technology is applied in this text. Library Technology will be demarcated from terms such as Digital Libraries, a frequently used concept in Computer Science and colloquial language. The focus lies on present and future developments of infrastructures in science, such as the European Science Cloud (EOSC). It is suggested that the original contribution of libraries to current and future data infrastructure is present but not explicitly visible or referenced in communications. This rather hidden, implicit role is interpreted to be detrimental to the library identity in the 21st century. It is recommended to reference the role of the library more explicitly.
Conference Paper
Full-text available
Production of high-quality, ecologically clean agricultural products is a significant problem. The use of biohumus, obtained by recycling organic waste with earthworms playing the major role, can be important in solving it. At the same time, the protein mass of earthworms is effectively used to balance the feed of agricultural poultry and animals. On this basis, we conducted experiments on the effect of a plant-origin oligophos concentrate (a complex of biologically active additives) on earthworms, measuring: a) change in the worms' protein mass, b) quality of reproduction, c) quality of the substrate, and d) efficiency of the added concentrate. Based on the data obtained, it can be concluded that balancing the worm substrate with oligophos concentrate has a positive effect on the increase in worm protein mass, reproduction quality, substrate quality and efficiency. The optimal dose of concentrate is 0.5 ml per 200 g.
Article
Purpose: The purpose of this research is to understand everyday information behavior (IB) during the Covid-19 pandemic at the “new normal” stage, focusing on the notions of experiential knowledge (EK), i.e. knowledge acquired by first-hand experience or in personal interactions, and local knowledge (LK) as perception of local environment. Design/methodology/approach: Seventeen interviews were carried out in February–May 2021, in a district of the city of Madrid (Spain). Interview transcripts were analyzed according to grounded theory, to identify major and complementary themes of EK and LK. Findings: Participants’ stories show that EK cooperated with information originating from government, scientific authorities and mainstream media, in patterns of convergence and divergence. While convergence produces “thick knowledge” (knowledge perceived as solid, real and multidimensional), divergence leads to uncertainty and collaboration, but it also supports a critical stance on authorities’ information. In addition, participants’ perceptions of LK emphasize its human component. LK and EK are exchanged both explicitly and tacitly. Originality/value: The paper presents the first approach to understanding EK and LK and their function during the health crisis, characterizing them as alternative information systems and as topics deserving major attention in research on IB and crisis management.
Book
Full-text available
[In English: INFORMATION LITERACY AND INFORMATION EDUCATION] A comprehensive analytical view of the concept of information literacy and of methods of information education. 1. The information environment and the person within it. 2. Information literacy (definitional grounding, attributes, overlaps). 3. An analytical view of information literacy. 4. Models of information literacy. 5. Standards of information literacy. 6. Design of information education. 7. Assessment of information literacy. APPENDIX A: Overview of operational models of information literacy. APPENDIX B: Overview of educational standards and competency frameworks for information literacy. APPENDIX C: Extended visualization of the Concept-Based Inquiry model schema. APPENDIX D: Active verbs by psychological domain. APPENDIX E: Information literacy assessment, with an example scoring table. APPENDIX F: An annotated literature search on information education.
Article
Full-text available
Effective and efficient management of post-disaster damage and loss data is a key component of disaster risk reduction and climate change adaptation policies to fulfil the requirements of the Sendai Framework, Sustainable Development Goals, and more recently, the European Climate Law. However, the reality of organized and structured damage and loss data collection is still in its infancy. In the era of rapid technological improvements, with overwhelming volumes and channels of data, we still record a lack of basic figures of disaster losses at the scale, granularity and level of detail needed for most applications. First, a theoretical overview of data science applied to disaster risk management and the description of collection procedures and use of damage data for buildings in the case of earthquakes for Italy and Greece are provided. Second, the Information System (IS) which is intended to enhance damage and loss data collection and management proposed by the LODE (Loss Data Enhancement for Disaster Risk Reduction and Climate Change Adaptation) project is illustrated. The IS is described in detail, starting from the stakeholder consultation to elicit the requirements, to the system’s architecture, design and implementation. The IS provides a comprehensive tool to input and query multisectoral post-disaster damage and loss data at relevant spatial and temporal scales. The part of the IS devoted to building damage is described in depth showing how obstacles and difficulties highlighted in the collection and use of data in the Greek and the Italian case have been approached and solved. Finally, further developments of the IS and its background philosophy are discussed, including the need for institutionalized damage data collection, engineering of the developed software and re-engineering of current damage and loss data practices.
Article
By applying the principles of three-way decision as thinking in threes, in this paper I introduce a conceptual model of data science in three steps. First, I examine examples of triadic thinking in general and trilevel thinking in particular in data science. Then, based on Weaver's trilevel categorization of communications problems, I propose the concept of the symbols-meaning-value (SMV) space and discuss three perspectives on the SMV space from the viewpoints of information science and management science, cognitive science, and computer science. I label the operations at the three levels of the SMV space metaphorically as seeing, knowing, and doing. Finally, I put forward an SMV-space-based conceptual model of data science, in which data are a resource, the power of data is the knowledge embedded in data, and the value of data is the wise decision and the best course of action supported by data. The goals and functions of data science at the three SMV levels are, respectively, making data available, making data meaningful, and making data valuable. To demonstrate the potential contributions of the conceptual model, I comment on some of its practical values and implications.
Chapter
This chapter looks at what facilitates relevant decision-making and the modalities that can make action (effect) more efficient. The aim is to provide a better understanding of the couple (knowledge, action). There is a strong dependency between knowing about a given world, the decisions that can be made, and, consecutively, the potential actions that can be undertaken. We introduce the notion of "mastering knowledge" for efficient action. Mastering knowledge amounts to having a coherent set of means to represent the knowledge most useful in the context of the action, and to knowing how to resort, as necessary, to appropriate formalizations to model situations.
Book
Full-text available
The Routledge Handbook of Translation and Methodology provides a comprehensive overview of methodologies in translation studies, including both well-established and more recent approaches. The Handbook is organised into three sections, the first of which covers methodological issues in the two main paradigms to have emerged from within translation studies, namely skopos theory and descriptive translation studies. The second section covers multidisciplinary perspectives in research methodology and considers their application in translation research. The third section deals with practical and pragmatic methodological issues. Each chapter provides a summary of relevant research, a literature overview, critical issues and topics, recommendations for best practice, and some suggestions for further reading. Bringing together over 30 eminent international scholars from a wide range of disciplinary and geographical backgrounds, this Handbook is essential reading for all students and scholars involved in translation methodology and research.
Chapter
Full-text available
Well-being data are often our data, in that they are personal data about us—and their collection requires our time and consideration. We are increasingly aware of data’s role in our everyday lives, yet we lack a shared understanding of data and well-being and how they are linked. This chapter illustrates that data don’t just represent society, but they actually change society, culture and our values in ways we cannot see. This chapter discusses who the book is for, what it is trying to do, how the book should be used, its structure and key arguments. Data collection and uses are value-laden exercises and this chapter guides the reader on how this book can help them judge what well-being data mean for them.
Article
Full-text available
The paper presents an improved DIKW+DM model which allows organizing not only the workflow of information processing and knowledge acquisition (with their subsequent application to determining the socio-economic impact of the education quality system), but also a decision-making algorithm in order to optimize the functioning of the education quality system. A detailed description of the DIKW+DM model sublayers is given with an algorithm for logical transition between sublayers in order to provide a rational solution based on the results of data collection, their systematization and analysis. On the basis of the model, recommendations are proposed for ensuring the effective functioning of education quality systems at various levels. In addition to internal assurance of the quality of education, attention is also paid to external control of the effectiveness of this system’s functioning. The sublayers of the DIKW+DM model are coupled with the criteria for educational programs quality assurance from the National Agency for Higher Education Quality Assurance of Ukraine.
Article
Preface. 1. BASICS OF LOGIC. Introduction. The Structure of Simple Statements. The Structure of Complex Statements. Simple and Complex Properties. Validity. 2. PROBABILITY AND INDUCTIVE LOGIC. Introduction. Arguments. Logic. Inductive versus Deductive Logic. Epistemic Probability. Probability and the Problems of Inductive Logic. 3. THE TRADITIONAL PROBLEM OF INDUCTION. Introduction. Hume's Argument. The Inductive Justification of Induction. The Pragmatic Justification of Induction. Summary. 4. THE GOODMAN PARADOX AND THE NEW RIDDLE OF INDUCTION. Introduction. Regularities and Projection. The Goodman Paradox. The Goodman Paradox, Regularity, and the Principle of the Uniformity of Nature. Summary. 5. MILL'S METHODS OF EXPERIMENTAL INQUIRY AND THE NATURE OF CAUSALITY. Introduction. Causality and Necessary and Sufficient Conditions. Mill's Methods. The Direct Method of Agreement. The Inverse Method of Agreement. The Method of Difference. The Combined Methods. The Application of Mill's Methods. Sufficient Conditions and Functional Relationships. Lawlike and Accidental Conditions. 6. THE PROBABILITY CALCULUS. Introduction. Probability, Arguments, Statements, and Properties. Disjunction and Negation Rules. Conjunction Rules and Conditional Probability. Expected Value of a Gamble. Bayes' Theorem. Probability and Causality. 7. KINDS OF PROBABILITY. Introduction. Rational Degree of Belief. Utility. Ramsey. Relative Frequency. Chance. 8. PROBABILITY AND SCIENTIFIC INDUCTIVE LOGIC. Introduction. Hypothesis and Deduction. Quantity and Variety of Evidence. Total Evidence. Convergence to the Truth. ANSWERS TO SELECTED EXERCISES. INDEX.
Scitation is the online home of leading journals and conference proceedings from AIP Publishing and AIP Member Societies
Article
Foreword Preface Part I. Principles and Elementary Applications: 1. Plausible reasoning 2. The quantitative rules 3. Elementary sampling theory 4. Elementary hypothesis testing 5. Queer uses for probability theory 6. Elementary parameter estimation 7. The central, Gaussian or normal distribution 8. Sufficiency, ancillarity, and all that 9. Repetitive experiments, probability and frequency 10. Physics of 'random experiments' Part II. Advanced Applications: 11. Discrete prior probabilities, the entropy principle 12. Ignorance priors and transformation groups 13. Decision theory: historical background 14. Simple applications of decision theory 15. Paradoxes of probability theory 16. Orthodox methods: historical background 17. Principles and pathology of orthodox statistics 18. The Ap distribution and rule of succession 19. Physical measurements 20. Model comparison 21. Outliers and robustness 22. Introduction to communication theory References Appendix A. Other approaches to probability theory Appendix B. Mathematical formalities and style Appendix C. Convolutions and cumulants.
Article
Fundamental forms of information, as well as the term information itself, are defined and developed for the purposes of information science/studies. Concepts of natural and represented information (taking an unconventional sense of representation), encoded and embodied information, as well as experienced, enacted, expressed, embedded, recorded, and trace information are elaborated. The utility of these terms for the discipline is illustrated with examples from the study of information-seeking behavior and of information genres. Distinctions between the information and curatorial sciences with respect to their social (and informational) objects of study are briefly outlined. © 2006 Wiley Periodicals, Inc.
Article
The papers gathered in this book were published over a period of more than twenty years in widely scattered journals. They led to the discovery of randomness in arithmetic, which was presented in the recently published monograph on "Algorithmic Information Theory" by the author. There the strongest possible version of Gödel's incompleteness theorem, using an information-theoretic approach based on the size of computer programs, was discussed. The present book is intended as a companion volume to the monograph and will serve as a stimulus for work on complexity, randomness and unpredictability, in physics and biology as well as in metamathematics.
Article
Information science, or informatics, has almost from its beginnings been characterized by a seemingly inordinate self-consciousness, exemplified by concern with its status vis-à-vis other disciplines, with its status as a science, and with the significance of its objects of investigation and the goals of that investigation. The bibliography by Port, and the survey by Wellisch, of definitions of information science, and the historical survey by Harmon, all give substantial evidence of this self-consciousness. Some aspects of this attitude are of course due to the social and political problems facing any new discipline (or field of investigation aspiring to such status), such as indifference or hostility from the established academic community, the fight for a share of limited research and development funds, the inferiority complex associated with having no well-defined methods of investigation in a social situation which requires them for acceptance, and so on. Other aspects of this self-consciousness may, however, be more related to strictly internal, ‘scientific’ concerns; that is, to problems within the theoretical structure of information science which must be solved in order for substantial progress in solving its practical problems to be made. This review surveys contributions to one such problem: the question of a suitable concept of information for information science.
Chapter
This Guide provides an ambitious state-of-the-art survey of the fundamental themes, problems, arguments and theories constituting the philosophy of computing.
Article
ABSTRACT The purpose of this paper is to raise some questions about the idea, which was first made prominent by Gilbert Ryle, and has remained associated with him ever since, that there are at least two types of knowledge (or to put it in a slightly different way, two types of states ascribed by knowledge ascriptions) identified, on the one hand, as the knowledge (or state) which is expressed in the ‘knowing that’ construction (sometimes called, for fairly obvious reasons, ‘propositional’ or ‘factual’ knowledge) and, on the other, as the knowledge (or state) which is ascribed in the ‘knowing how’ construction (sometimes called ‘practical’ knowledge).† This idea, which might be said to be Ryle's most lasting philosophical legacy, has, in some vague form, remained part of conventional wisdom in philosophy since he put it forward.‡ My purpose here is fairly accurately described as ‘raising questions’, since both the criticisms of the received view (as I interpret it), and the positive alternative suggestions to be advanced, are, to some extent, tentative and exploratory. The aim is to assemble a broad range of evidence for the conclusion that we need to replace the standard account, to query especially what Ryle suggested as evidence for it, and to explore what seems to me to be the indicated replacement for it.
Article
A suggestion is made regarding the nature of information: that the information in a theory be evaluated by measuring either its distance from the perfect theory or its distance from the right answer to the information-seeking question that led to it. The measures here are provided by the Tichy-Hilpinen-Oddie-Niiniluoto likeness measures, which were introduced in the context of the philosophical problem of verisimilitude. One feature of this suggestion that differentiates it from most theories of information is that it does not use or depend on probabilities or uncertainty. Another unusual feature is that it permits false views or theories to possess information. © 1997 John Wiley & Sons, Inc.
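The likeness idea can be illustrated with a toy propositional sketch. The three-proposition "weather world" is a standard textbook device in the verisimilitude literature; the scoring function below is an invented simplification for illustration, not the Tichy-Hilpinen-Oddie-Niiniluoto measures themselves.

```python
# Toy illustration of likeness-based information content: a "world" is
# a truth-value assignment to three atomic propositions (say: hot,
# rainy, windy); a theory picks one candidate world and is scored by
# how many truth values it shares with the actual world. On this
# account a false theory can still carry information by being close
# to the truth.

def likeness(theory, truth):
    """Fraction of atomic propositions the theory gets right."""
    return sum(t == w for t, w in zip(theory, truth)) / len(truth)

truth = (True, True, False)        # actually: hot, rainy, not windy
near_miss = (True, True, True)     # false, but close to the truth
wild_guess = (False, False, True)  # false and maximally far off

score_near = likeness(near_miss, truth)    # 2/3
score_wild = likeness(wild_guess, truth)   # 0.0
```

Note how `near_miss` is false yet scores well, which is the unusual feature the abstract highlights: falsity does not erase informational value.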
Book
Part I. Logic: 1. Logic 2. What is inductive logic? Part II. How to Calculate Probabilities: 3. The gambler's fallacy 4. Elementary probability 5. Conditional probability 6. Basic laws of probability 7. Bayes' rule Part III. How to Combine Probabilities and Utilities: 8. Expected value 9. Maximizing expected value 10. Decision under uncertainty Part IV. Kinds of Probability: 11. What do you mean? 12. Theories about probability Part V. Probability as a Measure of Belief: 13. Personal probabilities 14. Coherence 15. Learning from experience Part VI. Probability as Frequency: 16. Stability 17. Normal approximations 18. Significance 19. Confidence and inductive behaviour Part VII. Probability Applied to Philosophy: 20. The philosophical problem of induction 21. Learning from experience as an evasion of the problem 22. Inductive behaviour as an evasion of the problem.
Article
Humans do not apply formalistic scaffolds of fixed rules of ‘knowledge’ to integrate the a priori given objective world of data ‘out there’: they do not compute the world. Regardless of some ‘knowledge’-modeling assumptions, just the opposite is true: humans use their subjectively perceived world of turbulent circumstances to bring forth (create, recreate and adapt), again and again, knowledge as an autopoietic network of relations through which they coordinate their actions. Such knowledge brings (through language) coherence and coordination to the otherwise turbulent and chaotic world of human action. Knowledge is not ‘processing of information’ but a coordination of action. As a consequence, any management support system (DSS, AI, ES, etc.) claiming knowledge as its purpose or its base, cannot be of the symbolic computation type à la Simon.
Article
Future users of large data banks must be protected from having to know how the data is organized in the machine (the internal representation). A prompting service which supplies such information is not a satisfactory solution. Activities of users at terminals and most application programs should remain unaffected when the internal representation of data is changed and even when some aspects of the external representation are changed. Changes in data representation will often be needed as a result of changes in query, update, and report traffic and natural growth in the types of stored information. Existing noninferential, formatted data systems provide users with tree-structured files or slightly more general network models of the data. In Section 1, inadequacies of these models are discussed. A model based on n -ary relations, a normal form for data base relations, and the concept of a universal data sublanguage are introduced. In Section 2, certain operations on relations (other than logical inference) are discussed and applied to the problems of redundancy and consistency in the user's model.
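The data-independence idea can be sketched with relations as sets of tuples over named attributes, queried by content rather than by navigating a stored structure. The relation, attribute names, and sample rows below are invented for illustration.

```python
# Minimal sketch of the relational idea: a relation is a collection of
# tuples over named attributes, and users query by attribute name, not
# by internal layout. Reordering or re-storing the underlying records
# would not affect these queries (data independence).

def select(relation, predicate):
    """Relational selection: keep rows satisfying the predicate."""
    return [row for row in relation if predicate(row)]

def project(relation, attrs):
    """Relational projection: keep only the named attributes."""
    return {tuple(row[a] for a in attrs) for row in relation}

# Hypothetical sample relation.
employees = [
    {"name": "Ada",   "dept": "Research",  "city": "London"},
    {"name": "Boole", "dept": "Research",  "city": "Cork"},
    {"name": "Codd",  "dept": "Databases", "city": "San Jose"},
]

research = select(employees, lambda r: r["dept"] == "Research")
cities = project(research, ["city"])
```

The query mentions only relation contents and attribute names, which is exactly the insulation from internal representation that the abstract argues for.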
Chapter
A certain conception of social epistemology is articulated and applied to numerous social arenas. This conception retains epistemology's traditional interest in truth and reliable inquiry, but replaces its customary emphasis on solitary knowers with a focus on social institutions and interpersonal practices. Postmodernism, science studies, and pragmatism pose worries about the meaning and attainability of objective truth and knowledge. After laying these concerns to rest, “veritistic” social epistemology is advanced as a normative discipline seeking practices and institutions that would best foster knowledge. The book explores forms and methods of communication, including norms of argumentation, information technology, and institutional structures governing speech and the media. Social dimensions of knowledge quests are explored in science, law, democracy, and education. The book examines popular topics in contemporary epistemology such as testimony and Bayesianism, while breaking new ground by connecting epistemology with historically unrelated branches of philosophy such as political and legal theory. Democracy's success, it is argued, requires the attainment of certain epistemic desiderata, and substantive justice depends on well‐chosen procedures of legal evidence.
Article
This article was originally published [following peer-review] in Journal of Information Science, published by and copyright Sage Publications Ltd. This paper revisits the data–information–knowledge–wisdom (DIKW) hierarchy by examining the articulation of the hierarchy in a number of widely read textbooks, and analysing their statements about the nature of data, information, knowledge, and wisdom. The hierarchy referred to variously as the ‘Knowledge Hierarchy’, the ‘Information Hierarchy’ and the ‘Knowledge Pyramid’ is one of the fundamental, widely recognized and ‘taken-for-granted’ models in the information and knowledge literatures. It is often quoted, or used implicitly, in definitions of data, information and knowledge in the information management, information systems and knowledge management literatures, but there has been limited direct discussion of the hierarchy. After revisiting Ackoff’s original articulation of the hierarchy, definitions of data, information, knowledge and wisdom as articulated in recent textbooks in information systems and knowledge management are reviewed and assessed, in pursuit of a consensus on definitions and transformation processes. This process brings to the surface the extent of agreement and dissent in relation to these definitions, and provides a basis for a discussion as to whether these articulations present an adequate distinction between data, information, and knowledge. Typically information is defined in terms of data, knowledge in terms of information, and wisdom in terms of knowledge, but there is less consensus in the description of the processes that transform elements lower in the hierarchy into those above them, leading to a lack of definitional clarity. In addition, there is limited reference to wisdom in these texts.
Article
This is an updated, revised and enlarged edition of Howson and Urbach's account of scientific method from the Bayesian standpoint. The book offers both an introduction to probability theory and a philosophical commentary on scientific inference. This new edition includes chapter exercises and extended material on topics such as regression analysis, distributions densities, randomisation and conditionalisation.
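The core rule of the Bayesian account of scientific inference can be sketched in a few lines. The hypothesis, evidence, and probability values below are invented for illustration.

```python
# A minimal Bayes'-theorem update: the posterior probability of a
# hypothesis H given evidence E, with a binary partition {H, not-H}.
# All numbers are hypothetical.

def posterior(prior, likelihood, likelihood_alt):
    """P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|not-H)P(not-H)]."""
    evidence = prior * likelihood + (1 - prior) * likelihood_alt
    return prior * likelihood / evidence

# H predicts E strongly (P(E|H) = 0.9); the alternative makes E rare
# (P(E|not-H) = 0.1). Observing E then raises P(H) from 0.2 to ~0.69.
p_h_given_e = posterior(prior=0.2, likelihood=0.9, likelihood_alt=0.1)
```

The qualitative point carried by the arithmetic: evidence that a hypothesis predicts well, and its rivals predict poorly, confirms the hypothesis in proportion to the likelihood ratio.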
Article
Background. Many definitions of information, knowledge, and data have been suggested throughout the history of information science. In this article, the objective is to provide definitions that are usable for the physical, biological, and social meanings of the terms, covering the various senses important to our field. Argument. Information 1 is defined as the pattern of organization of matter and energy. Information 2 is defined as some pattern of organization of matter and energy that has been given meaning by a living being. Knowledge is defined as information given meaning and integrated with other contents of understanding. Elaboration. The approach is rooted in an evolutionary framework; that is, modes of information perception, processing, transmission, and storage are seen to have developed as a part of the general evolution of members of the animal kingdom. Brains are expensive for animals to support; consequently, efficient storage, including, particularly, storage at emergent levels-for example, storing the concept of chair, rather than specific memories of all chairs ever seen, is powerful and effective for animals. Conclusion. Thus, rather than being reductionist, the approach taken demonstrates the fundamentally emergent nature of most of what higher animals and human beings, in particular, experience as information.
Article
To illustrate how multiple hypothesis testing can produce associations with no clinical plausibility. We conducted a study of all 10,674,945 residents of Ontario aged between 18 and 100 years in 2000. Residents were randomly assigned to equally sized derivation and validation cohorts and classified according to their astrological sign. Using the derivation cohort, we searched through 223 of the most common diagnoses for hospitalization until we identified two for which subjects born under one astrological sign had a significantly higher probability of hospitalization compared to subjects born under the remaining signs combined (P<0.05). We tested these 24 associations in the independent validation cohort. Residents born under Leo had a higher probability of gastrointestinal hemorrhage (P=0.0447), while Sagittarians had a higher probability of humerus fracture (P=0.0123) compared to all other signs combined. After adjusting the significance level to account for multiple comparisons, none of the identified associations remained significant in either the derivation or validation cohort. Our analyses illustrate how the testing of multiple, non-prespecified hypotheses increases the likelihood of detecting implausible associations. Our findings have important implications for the analysis and interpretation of clinical studies.
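The multiple-comparisons point can be made with two lines of arithmetic. The count of 223 screened diagnoses comes from the abstract; the code is an illustrative sketch of the standard family-wise error calculation and Bonferroni adjustment, not the paper's own analysis.

```python
# With m independent tests each run at level alpha, the chance of at
# least one false positive (the family-wise error rate) is
# 1 - (1 - alpha)^m. Screening 223 diagnoses at alpha = 0.05 makes a
# spurious "significant" association all but certain; the Bonferroni
# adjustment (testing each at alpha / m) restores control.

def family_wise_error(alpha, m):
    return 1 - (1 - alpha) ** m

alpha, m = 0.05, 223
fwer = family_wise_error(alpha, m)              # very close to 1
bonferroni_alpha = alpha / m                    # per-test threshold
fwer_adjusted = family_wise_error(bonferroni_alpha, m)  # back below 0.05
```

This is why the astrological associations vanish after adjustment: at the unadjusted level, some of the 223 comparisons were bound to cross P<0.05 by chance alone.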
Article
We examined the impact on statistical inference when a chi-square test is used to compare the proportion of successes in the level of a categorical variable that has the highest observed proportion of successes with the proportion of successes in all other levels of the categorical variable combined. Monte Carlo simulations and a case study examining the association between astrological sign and hospitalization for heart failure. A standard chi-square test results in an inflation of the type I error rate, with the type I error rate increasing as the number of levels of the categorical variable increases. Using a standard chi-square test, the hospitalization rate for Pisces was statistically significantly different from that of the other 11 astrological signs combined (P=0.026). After accounting for the fact that the selection of Pisces was based on it having the highest observed proportion of heart failure hospitalizations, subjects born under the sign of Pisces no longer had a significantly higher rate of heart failure hospitalization compared to the other residents of Ontario (P=0.152). Post hoc comparisons of the proportions of successes across different levels of a categorical variable can result in incorrect inferences.
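A small Monte Carlo sketch of this inflation, in the spirit of the abstract's simulations but not a reproduction of them: group sizes, success rates, and trial counts are invented, and the chi-square machinery is the standard 2x2 Pearson formula with the 1-degree-of-freedom survival function.

```python
import math
import random

# Under the null (all 12 groups share one success rate), pick the group
# with the highest observed rate and chi-square-test it against the
# rest. Because the tested group was selected post hoc, the rejection
# rate far exceeds the nominal 5%.

def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    denom = (a + b) * (c + d) * (a + c) * (b + d)
    if denom == 0:
        return 0.0
    return n * (a * d - b * c) ** 2 / denom

def p_value(chi2):
    """Survival function of the chi-square distribution with 1 df."""
    return math.erfc(math.sqrt(chi2 / 2))

rng = random.Random(42)
groups, size, rate_true, trials = 12, 200, 0.1, 1000
rejections = 0
for _ in range(trials):
    successes = [sum(rng.random() < rate_true for _ in range(size))
                 for _ in range(groups)]
    top = max(range(groups), key=lambda g: successes[g])
    a, b = successes[top], size - successes[top]
    c = sum(successes) - a
    d = size * (groups - 1) - c
    if p_value(chi2_2x2(a, b, c, d)) < 0.05:
        rejections += 1
observed_rate = rejections / trials  # well above the nominal 0.05
```

Even though every group was generated from the same distribution, selecting the extreme group before testing it turns a 5% procedure into one that "finds" a difference a large fraction of the time.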
Article
Although randomness can be precisely defined and can even be measured, a given number cannot be proved to be random. This enigma establishes a limit to what is possible in mathematics. Almost everyone has an intuitive notion of what a random number is. For example, consider these two series of binary digits: 01010101010101010101 and 01101100110111100010. The first is obviously constructed according to a simple rule; it consists of the number 01 repeated ten times. If one were asked to speculate on how the series might continue, one could predict with considerable confidence that the next two digits would be 0 and 1. Inspection of the second series of digits yields no such comprehensive pattern. There is no obvious rule governing the formation of the number, and there is no rational way to guess the succeeding digits. The arrangement seems haphazard; in other words, the sequence appears to be a random assortment of 0's and 1's. The sec...
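The intuition in the passage can be approximated with a compressibility proxy. This is only an informal stand-in chosen for illustration: true algorithmic (Kolmogorov) complexity is uncomputable, as the abstract's enigma implies.

```python
import random
import zlib

# A sequence built by a simple rule compresses well, because the rule
# is a short description of it; a typical "haphazard" sequence of the
# same length resists compression. Longer strings than the abstract's
# 20-bit examples are used so the compressor's fixed overhead does not
# dominate.

regular = "01" * 500  # the abstract's first pattern, extended
rng = random.Random(0)
haphazard = "".join(rng.choice("01") for _ in range(1000))

size_regular = len(zlib.compress(regular.encode()))
size_haphazard = len(zlib.compress(haphazard.encode()))
# The rule-governed string needs far fewer bytes to describe.
```

Of course a pseudo-random generator is itself a short rule, which is precisely why compressed size can only bound, never prove, randomness.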
H. Feigl, Positivism in the Twentieth Century (Logical Empiricism). In: Dictionary of the History of Ideas (published 1974) (2003). Available at: http://etext.lib.virginia.edu/cgi-local/DHI/dhiana.cgi?id=dv3-69 (accessed 17 February 2008).
P. Tichy, The analysis of natural language. In: V. Svoboda et al. (eds), Pavel Tichy's Collected Papers in Logic and Philosophy (University of Otago Press, Dunedin, New Zealand, 2004).
M.J. Adler, A Guidebook to Learning: for a Lifelong Pursuit of Wisdom (Macmillan, New York, 1986).