Article

IS practitioners' views on core concepts of information integrity

Authors:
J. Efrim Boritz

Abstract

Based on a review of the literature on data quality and information integrity, a framework was created that is broader than that provided in the widely recognized international control guideline COBIT [ISACA (Information Systems Audit and Control Association) COBIT (Control Objectives for Information Technology) 3rd edition. Rolling Meadows, IL: ISACA, 2000], but narrower than the concept of information quality discussed in the literature. Experienced IS practitioners' views on the following issues were gathered through a questionnaire administered during two workshops on information integrity held in Toronto and Chicago: definition of information integrity, core attributes and enablers of information integrity and their relative importance, relationship between information integrity attributes and enablers, practitioners' experience with impairments of information integrity for selected industries and data streams and their association with stages of information processing, major phases of the system acquisition/development life cycle, and key system components. One of the policy recommendations arising from the findings of this study is that the COBIT definition of information integrity should be reconsidered. Also, a two-layer framework of core attributes and enablers (identified in this study) should be considered.


... The main factor in safeguarding the system is to allow authorised access to the firm's system according to necessity and to deny unauthorised access in all other cases [14]. Boritz (2005) conducted a study to determine the significant attributes of information integrity and related issues [39]. Boritz (2005) considered information security one of the main attributes of information integrity. ...
... Boritz (2005) conducted a study to determine the significant attributes of information integrity and related issues [39]. Boritz (2005) considered information security one of the main attributes of information integrity. Integrity is associated with the accuracy and completeness of accounting information and its validity with respect to customers' values or expectations [32]. ...
Article
Full-text available
The aim of this research is to propose a conceptual framework that links the Accounting Information System components with Firm Performance. The framework contains Availability, Security and Integrity, Confidentiality and Privacy, and System Quality as independent variables, with firm financial and non-financial performance among Jordanian firms as the dependent variable. The researcher followed a quantitative research methodology, testing the measurement model of the conceptual framework by checking the convergent and discriminant validity of the framework. The researcher used a survey questionnaire as the research instrument: a 31-item questionnaire was developed, 350 questionnaires were distributed, and 263 fully answered questionnaires were received. The findings of this study revealed that the scores of the factor loadings and AVE did not achieve the recommended levels of 0.4 and 0.5 respectively, which required a modification of the research model in a second run, in which the researcher achieved satisfactory levels of factor loadings, Composite Reliability, Cronbach's Alpha, and AVE. The scores of the Fornell-Larcker criterion and HTMT confirmed the discriminant validity. This study was limited to the measurement model analysis only; an empirical study with both the measurement and structural models would be a great addition to future studies.
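For readers unfamiliar with these reliability statistics, the sketch below shows how AVE and composite reliability are commonly computed from standardized factor loadings and compared against the cut-offs mentioned above (0.4 for loadings, 0.5 for AVE); the loading values are hypothetical and not taken from the cited study.

```python
# Illustrative sketch (not from the cited study): how AVE and composite
# reliability are commonly computed from standardized factor loadings.
# The loading values below are hypothetical.

def ave(loadings):
    """Average Variance Extracted: mean of squared standardized loadings."""
    return sum(l ** 2 for l in loadings) / len(loadings)

def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    s = sum(loadings)
    error = sum(1 - l ** 2 for l in loadings)
    return s ** 2 / (s ** 2 + error)

integrity_items = [0.72, 0.65, 0.58, 0.81]   # hypothetical loadings for one construct

print(f"AVE = {ave(integrity_items):.3f}")                     # compare against the 0.5 cut-off
print(f"CR  = {composite_reliability(integrity_items):.3f}")   # compare against the usual 0.7 cut-off
print("items retained" if min(integrity_items) >= 0.4 else "drop low-loading items")
```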
... System reliability in administration primarily guarantees the solidity of data and the accounting framework. However, an unreliable system can exhibit a number of side effects, such as failure to prevent unauthorized access to the system, making it vulnerable to viruses, hackers and loss of data confidentiality; loss of data integrity, including corrupted, incomplete and fabricated information; and serious maintenance issues bringing about unintended negative reactions to system changes, such as loss of access to system services, loss of information privacy or loss of information trustworthiness (Boritz, 2005; McPhie, 2000; Topash, 2014). In fact, the complexity of computerized information systems has increased the necessity of assessing the reliability of a firm's internal control systems (Joseph et al., 2009). ...
... In addition, it also indicates that the risk assessment process and procedures are not appropriately and effectively executed. Boritz (2005) conducts an extensive review of the literature to identify the key attributes of information integrity and related issues. He brought together two focus groups of experienced practitioners to discuss the documented findings extracted from the literature review through a questionnaire examining the core concepts of information integrity and its elements. ...
... Assurance of system security implies that access is restricted to the physical components of the system, the logic functions the system performs, and the information stored in the system. These results are consistent with prior studies such as Hayale and Abu Khadra (2006), Abu-Musa (2010), and Boritz (2005). It could be concluded that the IT infrastructure of the Jordanian business organisations (i.e. ...
Article
Full-text available
Purpose This study aims at investigating the extent to which SysTrust's framework (principles and criteria), as an internal control approach for assuring the reliability of accounting information systems (AIS), was being implemented in Jordanian business organizations. Design/methodology/approach The study is based on primary data collected through a structured questionnaire from 239 out of 328 shareholding companies. The survey units were the shareholding companies in Jordan, and the single key respondent approach was adopted. The extent of SysTrust principles was also measured. Previously validated instruments were used where required. The data were analysed using t-test and ANOVA. Findings The results indicated that the extent of SysTrust being implemented could be considered moderate at this stage. This implies that there are some variations among business organizations in terms of their level of implementation of SysTrust principles and criteria. The results also showed that the extent of SysTrust principles being implemented varied among business organizations based on their business sector. However, no variation was found due to business size or length of time in business (experience). Research limitations/implications This study was only conducted in Jordan as a developing country. Although Jordan is a valid indicator of prevalent factors in the wider MENA region and developing countries, the lack of external validity of this research means that any generalization of the research findings should be made with caution. Future research can be oriented to other national and cultural settings and compared with the results of this study. Practical implications The study provides evidence of the need for management to recognize the importance of the implementation of SysTrust principles and criteria as an internal control for assuring the reliability of AIS within their organizations and to be aware of which of these principles are appropriate to their size and industry sector. Originality/value The findings would be valuable for academic researchers, managers and professional accountants in acquiring a better understanding of the current status of the implementation of the SysTrust principles (i.e., availability, security, processing integrity, confidentiality, and privacy) as an internal control method for assuring the reliability of AIS by testing the phenomenon in Jordan as a developing country.
... Integrity is the maintenance of accuracy of data through its life cycle (Boritz, 2005). ...
... This process requires a set of rules to be thoroughly and consistently applied to all the data entering the database (Boritz, 2005). In the case of phylogenetic matrices, data validation needs to be a constant procedure to ensure the new incoming information is compatible with the previous one. ...
... In data science, every change to a database is defined as a transaction (Beynon-Davies, 2000;Boritz, 2005;Connolly and Begg, 2015). In the case of phylogenetic matrices, each addition of new taxa or characters constitutes a transaction (Figure 3.6). ...
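A minimal sketch of the idea in these excerpts, assuming a simple dictionary-backed matrix: the same validation rules are applied to every incoming row, and each addition of a taxon is treated as a transaction that commits only if validation succeeds. The character-state alphabet and function names are illustrative, not taken from the cited thesis.

```python
# Minimal sketch (illustrative, not the cited thesis's implementation): each addition
# of a new taxon to a character matrix is a "transaction", and the same validation
# rules are applied to every incoming row before it is committed.

ALLOWED_STATES = {"0", "1", "2", "?", "-"}   # hypothetical character-state alphabet

def validate(row, n_characters):
    """Apply the same rules to all incoming data: correct length, known states only."""
    if len(row) != n_characters:
        raise ValueError(f"expected {n_characters} characters, got {len(row)}")
    bad = set(row) - ALLOWED_STATES
    if bad:
        raise ValueError(f"unknown character states: {bad}")

def add_taxon(matrix, taxon, row):
    """A transaction: the row is committed only if validation succeeds."""
    expected = len(next(iter(matrix.values()))) if matrix else len(row)
    validate(row, expected)
    matrix[taxon] = row          # commit only after validation passes

matrix = {}
add_taxon(matrix, "Plateosaurus", "0 1 0 ? 2".split())
add_taxon(matrix, "Massospondylus", "0 1 1 - 2".split())
# add_taxon(matrix, "Riojasaurus", "0 1".split())   # would raise: incompatible row length
```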
Thesis
Full-text available
Non-sauropod sauropodomorphs, also known as 'basal sauropodomorphs' or 'prosauropods', have been thoroughly studied in recent years. Several hypotheses on the interrelationships within this group have been proposed, ranging from complete paraphyly, where the group represents a grade from basal saurischians to Sauropoda, to a group on its own. The grade-like hypothesis is the most accepted; however, the relationships between the different taxa are not consistent amongst the proposed scenarios. These inconsistencies have been attributed to missing data and unstable (i.e., poorly preserved) taxa; nevertheless, an extensive comparative cladistic analysis has found that these inconsistencies instead come from character coding and character selection, plus the strategies for merging data sets. Furthermore, a detailed character analysis using information theory and mathematical topology as an approach for character delineation is explored here to operationalise characters and reduce the potential impact of missing data. This analysis also produced the largest and most comprehensive matrix so far, after the reassessment and operationalisation of every character applied to this group. Additionally, partition analyses performed on this data set have found consistencies in the interrelationships within non-sauropod Sauropodomorpha and strong support for smaller clades such as Plateosauridae, Riojasauridae, Anchisauridae, Massospondylinae and Lufengosaurinae. The results of these analyses also highlight a different scenario for how quadrupedality evolved, independently originating twice within the group, and provide a better framework to understand the palaeo-biogeography and diversification rate of the first herbivore radiation of dinosaurs.
... Likewise, Nelson et al. (2005) argue that accessibility represents a system attribute that is distinct from but similar in importance to the system's ability to produce reliable data, although they argue that the impact of accessibility is second in order of influence to the system's processing reliability. Boritz (2005) conducts an extensive review of the literature to identify the key attributes of information integrity and related issues. He brought together two focus groups of experienced practitioners to discuss the documented findings extracted from the literature review through a questionnaire examining the core concepts of information integrity and its elements. ...
... AICPA (2013; 2017); Greenberg et al. (2012); Boritz (2005). ...
... This demonstrates that to have information integrity, both the data and the system (including the IT infrastructure and operating system) need to have integrity. Boritz (2005) defines information integrity as representational faithfulness. Information integrity involves accuracy and completeness, and therefore timeliness too, as well as validity with respect to applicable rules and regulations. ...
Thesis
This study aims to examine and validate the impact of the implementation of SysTrust's framework (principles and criteria), as an internal control method for assuring the reliability of the Accounting Information System, on business performance via the mediating role of the quality of financial reporting among Jordanian public listed companies. Based upon the literature review and contingency theory, an integrated conceptual framework was developed to guide this study. The study's conceptual framework consists of three major constructs: the SysTrust service framework (availability, security, processing integrity, confidentiality, and privacy), business performance (financial and non-financial indicators) and the quality of financial reporting. A descriptive correlational survey design approach is used in this study, as it sought to describe and establish the relationships among the study variables, and it employs a quantitative method to test the hypotheses first and then to answer the research questions. Data were collected through a self-administered questionnaire from 239 respondents. Several statistical techniques were used to analyse the collected data. The model fitness and the constructs' validity and reliability were tested, followed by the validation of the conceptual model and research hypotheses. The findings of the study support the proposition that the availability of SysTrust requirements as an internal method for assuring the reliability of AIS is positively linked to business performance via the mediating role of the quality of financial reporting. Therefore, a better understanding of the influence of SysTrust principles upon business performance and the quality of financial reporting should be viewed as a whole rather than as isolated fragments. The magnitude and significance of the loading estimates indicate that all five principles of SysTrust are relevant in predicting business performance and the quality of financial reporting. Thus, this study and its findings have a number of contributions and managerial implications. In terms of theoretical contributions, this study has extended the reliability-of-AIS literature by providing the following: First, it explained the unexplored relationship among the reliability of AIS, the quality of financial reporting using the IASB framework's fundamental qualitative characteristics, and business performance indicators (financial and non-financial). Second, testing the impact of the quality of financial reporting as a mediating factor between the reliability of AIS and business performance measures (financial and non-financial) is considered another contribution of the current study. Furthermore, the implementation of the SysTrust framework as an internal control system for assuring the reliability of AIS could be considered a critical intangible resource for any business organization that seeks a reliable and effective accounting system in the long run. In this study, financial reporting quality is justified as the mediator from a contingency theory perspective, where a good-quality and effective information system is an integral component of a strong internal control system.
... A good starting point for the attempt to define integrity of data sources is the vocabulary used in the field of computer science, particularly information systems. In the discussion on defining information integrity concepts presented in Boritz (2005), integrity is defined as an unimpaired or unmarred condition, implying complete correspondence of a representation with the original condition. ...
... When it comes to information integrity, it is a measure of the representational faithfulness of the information to the condition or subject being represented by the information (Boritz 2005). The core attributes of information integrity are identified as accuracy, completeness, timeliness and validity in Boritz (2005), Boritz (2004), and CobiT (2002); Boritz (2005) also lists the enablers of information integrity as security, availability, understandability, consistency, predictability, verifiability and credibility. While the core attributes of information integrity refer to the minimum criteria that must be satisfied when judging the representational faithfulness of information, enablers are the properties or factors of information that help realize those core attributes. ...
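The two-layer view described in this excerpt can be pictured as a small data structure: core attributes as the minimum criteria and enablers as supporting properties. The scoring and threshold logic below is a hypothetical illustration, not Boritz's (2005) assessment method.

```python
# Sketch of the two-layer framework described above: core attributes are the minimum
# criteria for judging representational faithfulness, enablers help realize them.
# The assessment logic is a hypothetical illustration, not the cited study's method.

CORE_ATTRIBUTES = ("accuracy", "completeness", "timeliness", "validity")
ENABLERS = ("security", "availability", "understandability", "consistency",
            "predictability", "verifiability", "credibility")

def assess_integrity(scores, threshold=0.8):
    """All core attributes must meet the threshold; enabler scores are reported separately."""
    core = {a: scores.get(a, 0.0) for a in CORE_ATTRIBUTES}
    enabling = {e: scores.get(e, 0.0) for e in ENABLERS}
    meets_core = all(v >= threshold for v in core.values())
    return {"information_integrity": meets_core, "core": core, "enablers": enabling}

example = {"accuracy": 0.95, "completeness": 0.9, "timeliness": 0.85, "validity": 0.9,
           "security": 0.7, "availability": 0.99}
print(assess_integrity(example))
```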
Thesis
Full-text available
Intelligent vehicles are a key component in humanity's vision for safer, more efficient, and accessible transportation systems across the world. Due to the multitude of data sources and processes associated with intelligent vehicles, the reliability of the total system is greatly dependent on the possibility of errors or poor performance in its components. In our work, we focus on the critical task of localization of intelligent vehicles and address the challenges in monitoring the integrity of data sources used in localization. The primary contribution of our research is the proposition of a novel protocol for integrity, combining integrity concepts from information systems with the existing integrity concepts in the field of Intelligent Transport Systems (ITS). An integrity monitoring framework based on the theorized integrity protocol that can handle multimodal localization problems is formalized. As the first step, a proof of concept for this framework is developed based on cross-consistency estimation of data sources using polynomial models. Based on the observations from the first step, a 'Feature Grid' data representation is proposed in the second step and a generalized prototype for the framework is implemented. The framework is tested in highway as well as complex urban scenarios to demonstrate that the proposed framework is capable of providing continuous integrity estimates of multimodal data sources used in intelligent vehicle localization.
... ISACA (Information Systems Audit and Control Association) has developed COBIT (Control Objectives for Information Technology) as a framework to "help companies implement healthy governance factors," providing a way for organizations to align their business strategy with IT objectives [1,2]. Over time, the COBIT framework has evolved (several versions are known), and IT specialists have combined this framework with other methods to address specific IT problems. ...
... Based on a literature review on data quality and information integrity, a framework has been created that is considered broader than that provided by COBIT on information integrity [10]. ...
... Data integrity ensures the correctness and consistency of data throughout its life cycle. It, therefore, plays a vital role in the design, implementation, and utilization of any data management system [3]. The current solution for data integrity verification is through third-party auditing service providers such as Spectra [4]. ...
Preprint
Data tampering is often considered a severe problem in industrial applications as it can lead to inaccurate financial reports or even a corporate security crisis. A correct representation of data is essential for companies' core business processes and is demanded by investors and customers. Traditional data audits are performed through third-party auditing services; however, these services are expensive and can be untrustworthy in some cases. Blockchain and smart contracts provide a decentralized mechanism to achieve secure and trustworthy data integrity verification; however, existing solutions present challenges in terms of scalability, privacy protection, and compliance with data regulations. In this paper, we propose the AUtomated and Decentralized InTegrity vErification Model (AUDITEM) to assist business stakeholders in verifying data integrity in a trustworthy and automated manner. To address the challenges in existing integrity verification processes, our model uses carefully designed smart contracts and a distributed file system to store integrity verification attributes and uses blockchain to enhance the authenticity of data certificates. A sub-module called Data Integrity Verification Tool (DIVT) is also developed to support easy-to-use interfaces and customizable verification operations. This paper presents a detailed implementation and designs experiments to verify the proposed model. The experimental and analytical results demonstrate that our model is feasible and efficient to meet various business requirements for data integrity verification.
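One common building block behind such verification schemes is comparing a freshly computed cryptographic digest of the data against a previously recorded certificate value. The sketch below illustrates only that generic step, not AUDITEM's actual smart-contract design; the file name is a placeholder.

```python
# Illustrative building block only (not AUDITEM's actual design): integrity
# verification by comparing a freshly computed SHA-256 digest of a file against
# a previously stored "certificate" digest.

import hashlib
from pathlib import Path

def digest(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path: Path, certified_digest: str) -> bool:
    """True if the file still matches the digest recorded when it was certified."""
    return digest(path) == certified_digest

# Usage sketch (hypothetical file name):
# cert = digest(Path("report_2023.csv"))      # stored on-chain or in a registry
# ...
# assert verify(Path("report_2023.csv"), cert), "data has been tampered with"
```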
... Integrity implies maintaining and ensuring the accuracy and completeness of data over its entire lifecycle [15]. It is designed to protect data from unauthorised alterations. ...
Conference Paper
Full-text available
The digitisation of personal health information (PHI) through electronic health records (EHRs) has become widespread due to their efficiency in terms of cost, storage, processing, and the subsequent quality of delivering patient care. However, security concerns remain one of their major setbacks. In order to handle EHRs, institutions need to comply with their local government security regulations. These regulations control the extent to which health data can be processed, transmitted, and stored, as well as define how misuses are addressed. This paper proposes φcomp, a solution for monitoring, assessing, and evaluating the compliance of health applications with respect to defined security regulations. φcomp is able to assess the level of security risk of an application at runtime and automatically perform the required mitigation actions to recover a compliant environment. φcomp was implemented in an industrial context and evaluated on a medical appointment application. The results of the experiments showed that it manages the security compliance of third-party applications with low performance overhead and attenuates unacceptable levels of risk to restore compliance.
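The abstract does not describe φcomp's internals, but the general pattern of runtime compliance checking can be sketched as evaluating declarative security rules against an application's configuration and triggering mitigation when the risk is unacceptable. All rule names and configuration fields below are hypothetical.

```python
# Generic illustration of runtime compliance checking (not φcomp's actual design;
# all rule names and configuration fields below are hypothetical).

RULES = [
    ("encryption_at_rest", lambda cfg: cfg.get("storage_encrypted", False), "high"),
    ("tls_in_transit",     lambda cfg: cfg.get("tls_version", 0) >= 1.2,    "high"),
    ("audit_logging",      lambda cfg: cfg.get("audit_log_enabled", False), "medium"),
]

def assess(cfg):
    """Evaluate every rule and derive an overall risk level from the violations."""
    violations = [(name, sev) for name, check, sev in RULES if not check(cfg)]
    if any(sev == "high" for _, sev in violations):
        risk = "unacceptable"
    elif violations:
        risk = "degraded"
    else:
        risk = "acceptable"
    return risk, violations

def enforce(cfg):
    """Trigger a (placeholder) mitigation action when the risk is unacceptable."""
    risk, violations = assess(cfg)
    if risk == "unacceptable":
        print("mitigating:", [name for name, _ in violations])   # e.g., quarantine, alert
    return risk

print(enforce({"storage_encrypted": True, "tls_version": 1.0, "audit_log_enabled": True}))
```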
... Particularly, the integrity of IoT data and devices, e.g., sensor readings and actuator commands, is the basic guarantee for securing IoT operations. Effective mechanisms need to be designed to protect IoT communications for confidentiality, integrity, authentication and non-repudiation of information flows [54]. The IoT devices need to be identified to ensure the data integrity from the origin, which conventionally relies on trusted third parties, e.g., Identity Provider [55]. ...
... Data integrity is often referred to but poorly defined. For example, there are ambiguities even between information and data integrity (Boritz 2005). Notwithstanding this, it is widely accepted that it is synonymous with representational faithfulness. ...
Chapter
Full-text available
Cloud computing is the dominant paradigm in modern computing, used by billions of Internet users worldwide. It is a market dominated by a small number of hyperscale cloud service providers. The overwhelming majority of cloud customers agree to standard form click-wrap contracts, with no opportunity to negotiate specific terms and conditions. Few cloud customers read the contracts that they agree to. It is clear that contracts in cloud computing are primarily an instrument of control benefiting one side, the cloud service provider. This chapter provides an introduction to the relationship between psychological trust, contracts and contract law. It also offers an overview of the key contract law issues that arise in cloud computing and introduces some emerging paradigms in cloud computing and contracts.
... Data integrity is often referred to but poorly defined. For example, there are ambiguities even between information and data integrity (Boritz 2005). Notwithstanding this, it is widely accepted that it is synonymous with representational faithfulness. ...
Chapter
Full-text available
While the benefits of cloud computing are widely acknowledged, it raises a range of ethical concerns. The extant cloud computing literature reports specific ethical perspectives on focussed topics in this domain, but does not explicitly refer to a specific ethical conception or reference point. This chapter provides an overview of ethics and ethical theories, which can be used to analyse the use of cloud technology and the complex multi-stakeholder structure of the industry. It is critical that cloud providers and users recognise that they effectively shape the morality of the cloud computing context through their interactions with other providers and users, and with the platform itself. Both stakeholder sets must be accountable for the possibilities offered by the technology. While pertinent regulation is continuously evolving, it is unlikely to advance at a similar rapid pace to that of innovation in the cloud computing industry. It is therefore essential that ethics is carefully considered to orient cloud computing towards the good of society.
... However, considering that the studio deals with health information, ensuring data integrity and authenticity was of utmost priority. Data integrity is the trustworthiness of information over its lifecycle (Boritz, 2011) and should be uniform, complete, unchanged and secure. All users of the system were required to register login details, which were later used whenever they accessed the studio. ...
Thesis
Full-text available
Antenatal care attendance is still a very big challenge among expectant mothers in Uganda, especially among those in poor and resource-constrained communities. This is largely caused by the lack of access to ANC services and the lack of autonomy of expectant mothers to make decisions to seek care, among other factors. Hence, the key research question in this study was "How can antenatal care decisions among expectant mothers in Uganda be enhanced?" The need to improve decision-making practices among expectant mothers in Uganda paved the way for exploring different cases with the purpose of understanding the underlying decision-making challenges expectant mothers face during antenatal care. This was achieved by employing design science as a stance of engaged scholarship. It was revealed in the literature and exploration that lack of information and cultural, social and infrastructural factors, among others, inhibit decision making among expectant mothers seeking antenatal care services. It was against this background that the requirements leading to the design and instantiation of the Antenatal Care Studio (ACS) emerged. The ACS design was anchored in the notion of decision enhancement (Keen and Sol, 2008). The ACS was tested and evaluated by expectant mothers and care providers in the antenatal care domain. Results demonstrated that the ACS was useful in enhancing antenatal care decisions, facilitating routine antenatal care practices and promoting a collaborative environment among expectant mothers and care providers. The ACS design and instantiation was a major contribution to knowledge and practice.
... Data integrity is often referred to but poorly defined. For example, there are ambiguities even between information and data integrity (Boritz 2005). Notwithstanding this, it is widely accepted that it is synonymous with representational faithfulness. ...
... Data integrity refers to maintaining and assuring the accuracy and consistency of data over its entire life-cycle [2], and it is a significant aspect of any system that stores, manages, processes, or retrieves data. The term data integrity is broad in scope and has widely different definitions depending on the specific context, even within a single field of computing. ...
Article
Full-text available
This paper discusses enhancing the data security of secret text and verifying data integrity. The proposed methodology describes an approach based on randomized selection of memory addresses. Additionally, data integrity is verified for possible image tampering by intruders, using a checksum in the self-embedding technique.
... The treatment of different sensors as multimodal data sources with a common frame of representation and the same dimensionality allows us to use the definitions of integrity presented in the domain of data sciences. Integrity measures overall accuracy and consistency of data sources [24]. While accuracy is defined as the correctness of validated data, consistency refers to the measure of coherence between them. ...
Article
Full-text available
Autonomous driving systems tightly rely on the quality of the data from sensors for tasks such as localization and navigation. In this work, we present an integrity monitoring framework that can assess the quality of multimodal data from exteroceptive sensors. The proposed multisource coherence-based integrity assessment framework is capable of handling highway as well as complex semi-urban and urban scenarios. To achieve such generalization and scalability, we employ a semantic-grid data representation, which can efficiently represent the surroundings of the vehicle. The proposed method is used to evaluate the integrity of sources in several scenarios, and the integrity markers generated are used for identifying and quantifying unreliable data. A particular focus is given to real-world complex scenarios obtained from publicly available datasets where integrity localization requirements are of high importance. Those scenarios are examined to evaluate the performance of the framework and to provide proof-of-concept. We also establish the importance of the proposed integrity assessment framework in context-based localization applications for autonomous vehicles. The proposed method applies the integrity assessment concepts in the field of aviation to ground vehicles and provides the Protection Level markers (Horizontal, Lateral, Longitudinal) for perception systems used for vehicle localization.
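A toy illustration of the cross-consistency idea described above: compare position estimates from several sources and flag any source that deviates from the consensus beyond a tolerance. The source names, coordinates, and the 2 m tolerance are invented; the published framework operates on a semantic-grid representation rather than raw point estimates.

```python
# Toy sketch of cross-consistency checking between localization sources (the values
# and the 2 m tolerance are made up; the published framework uses a semantic-grid
# representation, not raw point estimates).

from statistics import median

def flag_inconsistent(estimates, tolerance_m=2.0):
    """Flag sources whose (x, y) estimate deviates from the component-wise median."""
    mx = median(e[0] for e in estimates.values())
    my = median(e[1] for e in estimates.values())
    flags = {}
    for name, (x, y) in estimates.items():
        deviation = ((x - mx) ** 2 + (y - my) ** 2) ** 0.5
        flags[name] = deviation <= tolerance_m      # True = consistent with the consensus
    return flags

sources = {"gnss": (102.1, 48.9), "lidar_map": (101.8, 49.2), "visual_odom": (110.5, 43.0)}
print(flag_inconsistent(sources))   # visual_odom would be flagged as inconsistent
```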
... Data integrity refers to maintaining and assuring the accuracy and consistency of data over its entire life-cycle [2], and it is a significant aspect of any system that stores, manages, processes, or retrieves data. The term data integrity is broad in scope and has widely different definitions depending on the specific context, even in a single field of computing. ...
Article
Full-text available
In this paper, we discuss validating the data integrity of an image that carries secret information across the network. Validating data integrity has always been a difficult task for steganographed image files. We discuss a way in which data integrity is verified for possible image tampering by intruders, using an MD5 checksum in a self-embedding technique.
... It aims at protecting the private information of a patient. Integrity aims at maintaining the accuracy and completeness of data over its entire lifecycle [18]. It protects data from unauthorised alterations. ...
Conference Paper
Full-text available
Even if electronic health records have revolutionised the efficiency and quality of delivering healthcare, security concerns remain one of their major setbacks. For this reason, compliance with security regulations when processing, storing, and transmitting health data is mandatory in many countries. This paper reports the design and implementation of φcomp, an approach that automates the monitoring and enforcement of security compliance at runtime in sensitive health environments. It is composed of a distributed architecture which analyses the behaviour of e-health environments and subsequently enforces security rules to ensure compliance with health security regulations such as HDS and HIPAA. The results of our experiments in a production environment showed that it is highly customisable with respect to different use-case applications, can easily and rapidly be deployed, and induces very low overhead on the third-party infrastructure for compliance management.
... Data integrity is one of the critical aspects of text watermarking, which is related to reliability, usability, relevance, value, and quality. Data consistency and accuracy assurance can be part of integrity [105]. The explosion of the internet allows users to access a vast amount of information, where the integrity of the information is also required. ...
Article
Full-text available
In our daily life, the Internet of Things (IoT) is everywhere and used in many beneficial functionalities. It is used in our homes and hospitals, for fire prevention, and for reporting and controlling environmental changes. Data security is a crucial requirement for IoT since the number of recent technologies in different domains is increasing day by day. Various attempts have been made to cater to users' demands for more security and privacy. However, a huge risk of security and privacy issues can arise among all those benefits. Digital document security and copyright protection are also important issues in IoT because documents are distributed, reproduced, and disclosed with extensive use of communication technologies. The content of books, research papers, newspapers, legal documents, and web pages is based on plain text, and the ownership verification and authentication of such documents are essential. In the current domain of the Internet of Things, limited techniques are available for ownership verification and copyright protection. From the said perspective, this study includes a discussion of the approaches to text watermarking, IoT security challenges, IoT device limitations, and future research directions in the area of text watermarking.
... Notably, the probity of IoT devices and data, e.g., actuator commands and sensor readings, is a basic guarantee for securing IoT operations. Proper mechanisms must be established and used in order to protect IoT communications for non-repudiation, authentication and confidentiality of information flows [36]. Proper identification of IoT devices must be done to ensure the integrity of data from the origin, and it conventionally relies on trusted third parties, e.g., an identity provider [37]. ...
... Moreover, the data collection strategies involved several interviewers and both in-person and telephone approaches. These may have increased the variation among individual responses and led to issues associated with the rigour and integrity of the data (Boritz, 2005). However, training the research team, both theoretically and practically with a piloting phase, aimed to ensure appropriate tool development and data collection (Gilson et al., 2011), and ultimately to improve the faithfulness of the information we collected. ...
Article
Full-text available
Performance-based financing (PBF) is being implemented across low- and middle-income countries to improve the availability and quality of health services, including medicines. Although a few studies have examined the effects of PBF on the availability of essential medicines (EMs) in low- and middle-income countries, there is limited knowledge of the mechanisms underlying these effects. Our research aimed to explore how PBF in Cameroon influenced the availability of EMs, and to understand the pathways leading to the experiential dimension related with the observed changes. The design was an exploratory qualitative study. Data were collected through in-depth interviews, using semi-structured questionnaires. Key informants were selected using purposive sampling. The respondents (n = 55) included health services managers, healthcare providers, health authorities, regional drugs store managers and community members. All interviews were recorded, transcribed and analysed using qualitative data analysis software. Thematic analysis was performed. Our findings suggest that the PBF programme improved the perceived availability of EMs in three regions in Cameroon. The change in availability of EMs experienced by stakeholders resulted from several pathways, including the greater autonomy of facilities, the enforced regulation from the district medical team, the greater accountability of the pharmacy attendant and supply system liberalization. However, a sequence of challenges, including delays in PBF payments, limited autonomy, lack of leadership and contextual factors such as remoteness or difficulty in access, was perceived to hinder the capacity to yield optimal changes, resulting in heterogeneity in performance between health facilities. The participants raised concerns regarding the quality control of drugs, the inequalities between facilities and the fragmentation of the drug management system. The study highlights that some specific dimensions of PBF, such as pharmacy autonomy and the liberalization of drugs supply systems, need to be supported by equity interventions, reinforced regulation and measures to ensure the quality of drugs at all levels.
... Spectre and Meltdown are clear examples of violations to two classes of key security requirements: confidentiality and integrity. Confidentiality safeguards that an unauthorized party cannot obtain sensitive information [3], whereas integrity is the assurance that the information is trustworthy and accurate (i.e., not tampered) [4]. ...
Article
Full-text available
Spectre and Meltdown attacks in modern microprocessors represent a new class of attacks that have been difficult to deal with. They underline vulnerabilities in hardware design that have been going unnoticed for years. This shows the weakness of the state-of-the-art verification process and design practices. These attacks are OS-independent, and they do not exploit any software vulnerabilities. Moreover, they violate all security assumptions ensured by standard security procedures, (e.g., address space isolation), and, as a result, every security mechanism built upon these guarantees. These vulnerabilities allow the attacker to retrieve leaked data without accessing the secret directly. Indeed, they make use of covert channels, which are mechanisms of hidden communication that convey sensitive information without any visible information flow between the malicious party and the victim. The root cause of this type of side-channel attacks lies within the speculative and out-of-order execution of modern high-performance microarchitectures. Since modern processors are hard to verify with standard formal verification techniques, we present a methodology that shows how to transform a realistic model of a speculative and out-of-order processor into an abstract one. Following related formal verification approaches, we simplify the model under consideration by abstraction and refinement steps. We also present an approach to formally verify the abstract model using a standard model checker. The theoretical flow, reliant on established formal verification results, is introduced and a sketch of proof is provided for soundness and correctness. Finally, we demonstrate the feasibility of our approach, by applying it on a pipelined DLX RISC-inspired processor architecture. We show preliminary experimental results to support our claim, performing Bounded Model-Checking with a state-of-the-art model checker.
... Data integrity is often referred to but poorly defined. For example, there are ambiguities even between information and data integrity (Boritz 2005). Notwithstanding this, it is widely accepted that it is synonymous with representational faithfulness. ...
Book
Full-text available
This open access book brings together perspectives from multiple disciplines including psychology, law, IS, and computer science on data privacy and trust in the cloud. Cloud technology has fueled rapid, dramatic technological change, enabling a level of connectivity that has never been seen before in human history. However, this brave new world comes with problems. Several high-profile cases over the last few years have demonstrated cloud computing's uneasy relationship with data security and trust. The volume explores the numerous technological, process and regulatory solutions presented in academic literature as mechanisms for building trust in the cloud, including GDPR in Europe, as well as examining the – limited – evidence for their success. The massive acceleration of digital adoption resulting from the COVID-19 pandemic is introducing new and significant security and privacy threats and concerns. Against this backdrop, this book provides a timely reference and organising framework for considering how we will assure privacy and build trust in such a hyper-connected digitally dependent world. This book presents a framework for assurance and accountability in the cloud and reviews the literature on trust, data privacy and protection, and ethics in cloud computing. Theo Lynn is Full Professor of Digital Business at DCU Business School, Ireland. John G. Mooney is Associate Professor of Information Systems and Technology Management at the Pepperdine Graziadio Business School, United States. Lisa van der Werff is Associate Professor of Organisational Psychology at DCU Business School, Ireland. Grace Fox is Assistant Professor of Digital Business at DCU Business School, Ireland.
... A data sanity check is a mechanism that examines whether the data fed to a system are those expected during system design [13]. Even though a system's logical functionalities are properly developed, severe system failures may still occur when undesirable input data are fed into the system [14], [15]. ...
Preprint
Deep learning (DL) techniques have demonstrated satisfactory performance in many tasks, even in safety-critical applications. Reliability is hence a critical consideration for DL-based systems. However, the statistical nature of DL makes it quite vulnerable to invalid inputs, i.e., those cases that are not considered in the training phase of a DL model. This paper proposes performing data sanity checks to identify invalid inputs, so as to enhance the reliability of DL-based systems. To this end, we design and implement a tool that detects behavior deviation of a DL model when processing an input case and considers it a symptom of an invalid input case. Via a lightweight, automatic instrumentation of the target DL model, this tool extracts the data flow footprints and conducts an assertion-based validation mechanism.
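As a rough illustration of an assertion-based data sanity check (not the cited tool), the sketch below rejects inputs whose shape or value range falls outside what a hypothetical model is assumed to have seen at design time.

```python
# Illustrative assertion-based data sanity check (hypothetical shape and ranges;
# this is not the cited tool): reject inputs outside what the model saw in training.

import numpy as np

EXPECTED_SHAPE = (28, 28)          # e.g., a grayscale image classifier (assumption)
PIXEL_RANGE = (0.0, 1.0)           # normalized pixel values seen during training

def sanity_check(x: np.ndarray) -> np.ndarray:
    assert x.shape == EXPECTED_SHAPE, f"unexpected shape {x.shape}"
    assert np.isfinite(x).all(), "input contains NaN or Inf"
    assert PIXEL_RANGE[0] <= x.min() and x.max() <= PIXEL_RANGE[1], \
        "input outside the value range observed at design time"
    return x

valid = sanity_check(np.random.rand(28, 28))          # passes
# sanity_check(np.full((28, 28), 7.0))                # would fail: out-of-range values
```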
... The study by Boritz (2005) created a framework that is broader than that stipulated in the widely recognized international control guideline COBIT (Control Objectives for Information Technology) issued by ISACA (Information Systems Audit and Control Association). Experienced IS practitioners' views on the following issues were collected through a questionnaire administered during two workshops on information integrity held in Toronto and Chicago: the definition of information integrity, the core attributes and enablers of information integrity and their relative importance, the relationship between information integrity attributes and enablers, and practitioners' experience with impairments of information integrity for selected industries and data streams and their association with the stages of information processing, the major phases of the system acquisition/development life cycle, and the key components of the system. ...
Article
Full-text available
The study presents the results of an analysis of information security goals to ensure information security, through a case study of the Income and Sales Tax Department in Jordan. The goal of the analysis is to identify the information security situation in the department, the development of information system security software, and the impact of these standards on the security of computerized information systems in this department. The study sample consists of 360 questionnaires, of which 270 were subjected to statistical analysis, accounting for 88%. The researcher conducted a descriptive study to obtain the analytical results and achieve the objectives of the study. The results showed a positive effect of the information security standards' goals on ensuring information security in this organization, where information security is characterized by extreme sensitivity to external threats.
... Integrity: Data integrity deals with assuring and maintaining the completeness and accuracy of data, not only at a certain point in time but over the whole lifetime of the data [Bor05]. Thus, data cannot be modified by unauthorized or undetected means. ...
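A plain checksum only detects accidental change; detecting modification "by unauthorized or undetected means" additionally requires a secret, as with the keyed MAC sketched below. Key management is simplified to a single shared secret for illustration, and the record content is made up.

```python
# Sketch: a keyed MAC makes modification detectable even by a party who could
# recompute a plain checksum; key handling is simplified for illustration.

import hmac, hashlib, secrets

key = secrets.token_bytes(32)                       # shared secret (simplified handling)

def tag(data: bytes) -> bytes:
    return hmac.new(key, data, hashlib.sha256).digest()

def verify(data: bytes, received_tag: bytes) -> bool:
    return hmac.compare_digest(tag(data), received_tag)

record = b"2024-03-01,account=42,balance=1000.00"   # hypothetical data record
t = tag(record)
assert verify(record, t)                                              # intact record
assert not verify(b"2024-03-01,account=42,balance=9000.00", t)        # tampering detected
```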
Thesis
Full-text available
Over the last decades, cybersecurity has become an increasingly important issue. Between 2011 and 2019 alone, the losses from cyberattacks in the United States grew by 6217%. At the same time, attacks became not only more intensive but also more and more versatile and diverse. Cybersecurity has become everyone's concern. Today, service providers require sophisticated and extensive security infrastructures comprising many security functions dedicated to various cyberattacks. Still, attacks become more violent to a level where infrastructures can no longer keep up. Simply scaling up is no longer sufficient. To address this challenge, in a whitepaper, the Cloud Security Alliance (CSA) proposed multiple work packages for security infrastructure, leveraging the possibilities of Software-defined Networking (SDN) and Network Function Virtualization (NFV). Security functions require a more sophisticated modeling approach than regular network functions. Notably, the property to drop packets deemed malicious has a significant impact on Security Service Function Chains (SSFCs), service chains consisting of multiple security functions to protect against multiple attack vectors. Under attack, the order of these chains influences the end-to-end system performance depending on the attack type. Unfortunately, it is hard to predict the attack composition at system design time. Thus, we make a case for dynamic attack-aware SSFC reordering. Also, we tackle the issues of the lack of integration between security functions and the surrounding network infrastructure, the insufficient use of short-term CPU frequency boosting, and the lack of Intrusion Detection and Prevention Systems (IDPS) against database ransomware attacks. Current works focus on characterizing the performance of security functions and their behavior under overload without considering the surrounding infrastructure. Other works aim at replacing security functions using network infrastructure features but do not consider integrating security functions within the network. Further publications deal with using SDN for security or how to deal with new vulnerabilities introduced through SDN. However, they do not take security function performance into account. NFV is a popular field for research dealing with frameworks, benchmarking methods, the combination with SDN, and implementing security functions as Virtualized Network Functions (VNFs). Research in this area brought forth the concept of Service Function Chains (SFCs) that chain multiple network functions after one another. Nevertheless, they still do not consider the specifics of security functions. The mentioned CSA whitepaper proposes many valuable ideas but leaves their realization open to others. This thesis presents solutions to increase the performance of single security functions using SDN, performance modeling, a framework for attack-aware SSFC reordering, a solution to make better use of CPU frequency boosting, and an IDPS against database ransomware. Specifically, the primary contributions of this work are:
• We present approaches to dynamically bypass Intrusion Detection Systems (IDS) in order to increase their performance without reducing the security level. To this end, we develop and implement three SDN-based approaches (two dynamic and one static). We evaluate the proposed approaches regarding security and performance and show that they significantly increase the performance compared to an inline IDS without significant security deficits. We show that using software switches can further increase the performance of the dynamic approaches up to a point where they can eliminate any throughput drawbacks when using the IDS.
• We design a DDoS Protection System (DPS) against TCP SYN flood attacks in the form of a VNF that works inside an SDN-enabled network. This solution eliminates known scalability and performance drawbacks of existing solutions for this attack type. Then, we evaluate this solution, showing that it correctly handles the connection establishment, and present solutions for an observed issue. Next, we evaluate the performance, showing that our solution increases performance up to three times. Parallelization and parameter tuning yield another 76% performance boost. Based on these findings, we discuss optimal deployment strategies.
• We introduce the idea of attack-aware SSFC reordering and explain its impact in a theoretical scenario. Then, we discuss the required information to perform this process. We validate our claim of the importance of the SSFC order by analyzing the behavior of single security functions and SSFCs. Based on the results, we conclude that there is a massive impact on the performance, up to three orders of magnitude, and we find contradicting optimal orders for different workloads. Thus, we demonstrate the need for dynamic reordering. Last, we develop a model for SSFCs regarding traffic composition and resource demands. We classify the traffic into multiple classes and model the effect of single security functions on the traffic and their generated resource demands as functions of the incoming network traffic. Based on our model, we propose three approaches to determine optimal orders for reordering.
• We implement a framework for attack-aware SSFC reordering based on this knowledge. The framework places all security functions inside an SDN-enabled network and reorders them using SDN flows. Our evaluation shows that the framework can enforce all routes as desired. It correctly adapts to all attacks and returns to the original state after the attacks cease. We find possible security issues at the moment of reordering and present solutions to eliminate them.
• Next, we design and implement an approach to load balance servers while taking into account their ability to go into a state of Central Processing Unit (CPU) frequency boost. To this end, the approach collects temperature information from available hosts and places services on the host that can attain the boosted mode the longest. We evaluate this approach and show its effectiveness. For high-load scenarios, the approach increases the overall performance and the performance per watt. Even better results show up for low-load workloads, where not only all performance metrics improve but also the temperatures and total power consumption decrease.
• Last, we design an IDPS protecting against database ransomware attacks that comprise multiple queries to attain their goal. Our solution models these attacks using a Colored Petri Net (CPN). A proof-of-concept implementation shows that our approach is capable of detecting attacks without creating false positives for benign scenarios. Furthermore, our solution creates only a small performance impact.
Our contributions can help to improve the performance of security infrastructures. We see multiple application areas, from data center operators over software and hardware developers to security and performance researchers. Most of the above-listed contributions found use in several research publications.
Regarding future work, we see the need to better integrate SDN-enabled security functions and SSFC reordering in data center networks. Future SSFC should discriminate between different traffic types, and security frameworks should support automatically learning models for security functions. We see the need to consider energy efficiency when regarding SSFCs and take CPU boosting technologies into account when designing performance models as well as placement, scaling, and deployment strategies. Last, for a faster adaptation against recent ransomware attacks, we propose machine-assisted learning for database IDPS signatures.
... Some scholars perceive data quality as equivalent to information quality [41][42][43][44]. Data quality generally refers to the degree to which the data are fit for use [45]. ...
Article
Full-text available
The popularity of big data analytics (BDA) has boosted the interest of organisations in exploiting their large-scale data. This technology can become a strategic stimulus for organisations to achieve competitive advantage and sustainable growth. Previous BDA research, however, has focused more on introducing more traits, known as the Vs of big data, while ignoring the quality of data when examining the application of BDA. Therefore, this study aims to explore the effect of big data traits and data quality dimensions on BDA application. This study formulated 10 hypotheses comprising the relationships among the big data traits, accuracy, believability, completeness, timeliness, ease of operation, and BDA application constructs. This study conducted a survey using a questionnaire as the data collection instrument. Then, the partial least squares structural equation modelling technique was used to analyse the hypothesised relationships between the constructs. The findings revealed that big data traits can significantly affect all constructs for data quality dimensions and that the ease of operation construct has a significant effect on BDA application. This study contributes to the literature by bringing new insights to the field of BDA and may serve as a guideline for future researchers and practitioners when studying BDA application.
... Data integrity has been defined to mean 'the accuracy and consistency of stored data, indicated by an absence of any alteration in data between two updates of a data record.' 80 It has also been defined as 'the maintenance of, and the assurance of, the accuracy and consistency of data over its entire life-cycle.' 81 The purport of data integrity is to ensure that stored data are recovered exactly as they were initially stored. The concept of data integrity is not the same as data security. ...
Article
Full-text available
Public health emergencies have been characterised by several restrictions on the fundamental rights of citizens. Rights to freedom of association and movement are limited as a measure to curb the spread of infectious diseases during public health crises. Such restrictions present a challenge to citizens' governance, political participation and access to healthcare. This paper adopts the doctrinal approach in addressing the legal and policy framework for such restrictions in times of health crisis. The paper finds that the use of information and communications technology during health emergencies bridges the gap in governance, citizen participation and adequate healthcare service delivery. The paper concludes by making far-reaching recommendations on the need for the enactment of specific laws which expressly provide for virtual or remote participation, e-governance and e-healthcare during public health emergencies.
... Diverse preliminary analyses are available on confidentiality-safeguarded data integrity. Conversely, there are several problems that require further analysis in the conception of data integrity from both confidentiality and safety initiatives [7,8]. ...
... • System quality and information quality theories (DeLone and McLean, 1992; Boritz, 2005). ...
Article
We present an integrative review of existing marketing research on mobile apps, clarifying and expanding what is known about how apps shape customer experiences and value across iterative customer journeys, leading to the attainment of competitive advantage, via apps (in instances of apps attached to an existing brand) and for apps (when the app is the brand). To synthesize relevant knowledge, we integrate different conceptual bases into a unified framework, which simplifies the results of an in-depth bibliographic analysis of 471 studies. The synthesis advances marketing research by combining customer experience, customer journey, value creation and co-creation, digital customer orientation, market orientation, and competitive advantage. This integration of knowledge also furthers scientific marketing research on apps, facilitating future developments on the topic and promoting expertise exchange between academia and industry.
... Information quality can be described as consisting of accuracy, timeliness, completeness and validity characteristics. As indicated by Boritz (2005), accuracy refers to data that corresponds to reality and is free of bias. Completeness relates to information that matches the dimensionality of the user's requirements. ...
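For illustration only, the completeness, timeliness and validity characteristics can be approximated by simple per-record checks; the field names and thresholds below are hypothetical, and accuracy is omitted because it requires an external reference to reality:

```python
# Hedged sketch: simple per-record checks loosely mirroring the
# completeness, timeliness and validity characteristics named above.
# Field names, required fields and thresholds are hypothetical; accuracy
# (correspondence to reality) is not shown because it needs an external
# reference source.
from datetime import datetime, timedelta

REQUIRED_FIELDS = {"customer_id", "amount", "recorded_at"}  # hypothetical

def completeness(record: dict) -> bool:
    return REQUIRED_FIELDS.issubset(k for k, v in record.items() if v is not None)

def timeliness(record: dict, max_age=timedelta(days=1)) -> bool:
    return datetime.now() - record["recorded_at"] <= max_age

def validity(record: dict) -> bool:
    return isinstance(record["amount"], (int, float)) and record["amount"] >= 0

record = {"customer_id": 42, "amount": 100.0, "recorded_at": datetime.now()}
print(completeness(record), timeliness(record), validity(record))
```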
Article
Full-text available
Quality of information is a priceless asset for an organization to possess, as it assists in carrying out business plans and changes. These business changes usually support management executives in decision making. In view of that, this study examines information quality in AIS and its effects on organizational performance among conventional and Islamic banks in Jordan. To achieve that, proportionate stratified random sampling is applied to the information system users of sixteen conventional and Islamic banks in Jordan. A total of 600 questionnaires were distributed, of which 250 returned copies were valid, a valid response rate of 41.7%. The study adopts the partial least squares (Smart PLS 3) method to conduct the data analysis and perform hypothesis testing. Findings clearly show that quality of information is key for business growth, as it has a positive effect on organizational performance. A further result shows that organizational culture improves and increases business performance when combined with information quality. For this reason, conventional and Islamic banks in Jordan should have well-developed AIS, as it assists organizations in attaining higher performance. There is a need for more development in management skills to fully exploit the AIS in order to realize greater organizational performance. In other words, full implementation of AIS should be given more priority by the management of these conventional and Islamic banks.
Article
The understanding and promotion of integrity in information security has traditionally been underemphasized or even ignored. From implantable medical devices and electronic voting to vehicle control, the critical importance of information integrity to our well-being has compelled a review of its treatment in the literature. Information integrity is surveyed through formal information flow models, the data modification view, and its relationship to data quality. Illustrations are given for databases and information trustworthiness. Integrity protection is advancing but lacks standardization in terminology and application. Integrity must be better understood, and pursued, to achieve devices and systems that are beneficial and safe for the future.
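One classic formal integrity model of the kind alluded to above is Biba's "no write up" rule; the survey itself is broader, but a toy sketch of that rule, with arbitrary integer integrity levels, looks like this:

```python
# Illustrative sketch of the Biba integrity model's write rule
# ("no write up"): a subject may not modify an object whose integrity
# level is higher than its own. The subjects, objects and levels below
# are arbitrary examples, not drawn from the surveyed article.
SUBJECT_LEVEL = {"batch_job": 1, "dba": 3}
OBJECT_LEVEL = {"audit_log": 3, "scratch_table": 1}

def may_write(subject: str, obj: str) -> bool:
    return SUBJECT_LEVEL[subject] >= OBJECT_LEVEL[obj]

print(may_write("batch_job", "audit_log"))   # False: would corrupt higher-integrity data
print(may_write("dba", "scratch_table"))     # True
```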
Chapter
Each information system (IS) has an underlying architecture, although its complexity and scope can vary quite substantially for different kinds of systems. Since design decisions about the architecture define the very foundation of an IS, they cannot be easily undone or altered after they have been made. If not taken seriously enough, improper IS architecture designs can result in the development of systems that are incapable of adequately meeting user requirements. Understanding the concept of good IS architecture design and taking design decisions diligently is therefore highly important for an IS development project’s success. In order to answer the question of what constitutes a good IS architecture, this chapter examines the importance of design decisions across a system’s lifecycle. In particular, two different perspectives on the concept of good IS architecture design are explicated: (1) design as the process and (2) design as the outcome of a design process. The two perspectives are closely related to each other and together help explain the more abstract concept of IS architecture design and, in particular, the characteristics of a good IS architecture.
Book
Full-text available
This book, TECHNO-BUSINESS DATA MANAGEMENT, prepares students specializing in business management, database management and knowledge management, students preparing to become data managers, and other management students specializing in computer applications, business data, DBMS, commerce, entrepreneurship management, BBA, MBA or business strategy related subjects, as well as entrepreneurial practitioners. It includes the dynamic concepts of newer entrepreneurial strategies emerging across the world and caters to the BBA and MBA syllabi of the leading Indian universities, specifically Bangalore University, Anna University, Bharathiar University, Kerala University, Calicut University, and other Indian universities. The concepts in this book will prepare entrepreneurial professionals who are evolving into higher-level roles for a challenging and rewarding career. Readers can apply these concepts in their day-to-day management strategy functions to achieve effective practical advancement in their careers.
Chapter
Today’s economies are facing massive changes, induced by developments in emerging markets, fast technological progress, sustainability policies, and changing customer preferences. The automotive industry is undergoing a variety of challenges that could transform it in a way it has never changed before. One main driver of this transformation is connectivity, enabling cars to communicate with devices to offer new features to customers, many of which depend on passengers' personal data. This creates threats and opportunities at the same time. Within the ecosystem of the connected and autonomous vehicle, stakeholders’ responsibilities need to be discussed. As data privacy is a very sensitive topic in today’s world, it is pivotal to analyze customers’ willingness to share data and to discuss frameworks that ensure that customers are protected and that ethical standards are implemented. Although many people show interest in these features, many remain concerned about privacy. To gain society's trust and to ensure that these features benefit people, security, privacy, and ethics must be considered before the features are brought to market.
Chapter
Financial technology (FinTech), as part of financial inclusion, changes conventional business models to be information-technology minded. The presence of FinTech in the wider community makes access to financial service products, transactions and payment systems more practical, efficient, and economical. Unfortunately, as the security risk in transacting increases, the financial services industry and FinTech service providers are considered major targets by cybercriminals. This study proposes a security management approach based on a hybrid blockchain method, implemented through the Flask framework with encryption to protect transaction data. The results are promising: with respect to accuracy, the approach reduces data leakage and the misuse of personal and financial data in FinTechs.
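The study's hybrid blockchain and Flask implementation is not reproduced here; the following minimal sketch only illustrates the underlying idea of hash-chaining transaction records so that tampering with any earlier record invalidates the chain:

```python
# Minimal, hedged sketch of hash-chaining transaction records. Not the
# authors' hybrid blockchain/Flask implementation; the transactions and
# structure below are purely illustrative.
import hashlib
import json

def block_hash(prev_hash: str, payload: dict) -> str:
    data = json.dumps({"prev": prev_hash, "payload": payload}, sort_keys=True)
    return hashlib.sha256(data.encode()).hexdigest()

chain = []
prev = "0" * 64  # genesis placeholder
for tx in [{"from": "A", "to": "B", "amount": 10},
           {"from": "B", "to": "C", "amount": 4}]:
    h = block_hash(prev, tx)
    chain.append({"prev": prev, "payload": tx, "hash": h})
    prev = h

# Verification: recompute each hash; any tampering breaks the chain
ok = all(b["hash"] == block_hash(b["prev"], b["payload"]) for b in chain)
print("chain intact:", ok)
```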
Chapter
Using an interpretive case study approach, this paper describes the data quality problems in a regional health insurance (RHI) company. Within this company, two interpretive cases examine different processes of the healthcare supply chain and their integration with a business intelligence system. The first case examines RHI's provider enrollment and credentialing process; the second examines the processes within the special investigations unit (SIU) for investigating and detecting fraud, including data and information quality (DIQ) issues and how social media can be used to acquire evidence to support a fraud case. The second case also utilized lean six sigma to streamline internal processes. A DIQ assessment of these processes demonstrates how a framework, referred to as PGOT, can identify improvement opportunities within any information-intensive environment. The paper provides recommendations for DIQ and social media best practices and illustrates these best practices within the real-world context of healthcare.
Article
In a recent survey of academic research, Fintech topics, broadly classified as cryptocurrency studies, were by far the most researched topics in the social sciences. Even though cryptocurrencies rely on a distributed accounting ledger technology, relatively few of those recent studies have been conducted by accounting academics. Some features of a cryptocurrency system such as Bitcoin, including constructs such as Proof-of-Work (PoW) consensus, do not rely on a traditional accounting knowledge base, depending instead on cryptography and computer science. However, that does not necessarily imply that other potentially useful distributed ledger designs also rely on the accounting-free features that arise in the Bitcoin environment. This research outlines four scenarios where the choice between competing distributed ledger features critically depends upon the resolution of established accounting issues, albeit in a new distributed setting. Specifically, we identify four new research settings in which accounting knowledge contributes to the design of distributed ledgers and propose that to date these settings have been overlooked in the accounting literature. We contribute to the ongoing debate on the applicability of distributed systems by proposing that accounting knowledge has a wider impact than in the established areas of auditing and operations management.
Article
In a recent survey of academic research, Fintech-related topics, broadly classified as crypto-currency studies, were by far the most researched topics in the social sciences. However, we have observed that, perhaps surprisingly, even though crypto-currencies rely on a distributed accounting ledger technology, relatively few of those studies were conducted by accounting academics. While some of the features of a system like Bitcoin do not necessarily rely on traditional accounting knowledge, this knowledge is key in designing effective real-world distributed systems. Building on a foundational framework developed by Risius and Spohrer (2017), we provide support for their hypothesis that to date, research in this area has had a predominantly narrow focus (i.e., based upon exploiting existing programming solutions without adequately considering the fundamental needs of users). This is particularly reflected in the abundance of Bitcoin-like crypto-currency code-bases with little or no place for business applications. We suggest that this may severely limit an appreciation of the relevance and applicability of decentralized systems, and of how they may support value creation and improved governance. We provide supporting arguments for this statement by considering four applied classes of problems where a blockchain/distributed ledger can add value without requiring a crypto-currency to be an integral part of the functioning system. We note that each class of problem has previously been viewed as part of accounting issues within the legacy centralized ledger systems paradigm. We show how accounting knowledge remains relevant in the shift from centralized to decentralized ledger systems. We advance the debate on the development of (crypto-currency free) value-creating distributed ledger systems by showing that applying accounting knowledge in this area has a potentially much wider impact than its current application in areas limited to auditing and operations management. We develop a typology for general distributed ledger design which assists potential users in understanding the wide range of choices available when developing such systems.
Thesis
Full-text available
Pharmaceutical analysis and industry could hardly be imagined today without chromatographic methods such as High Performance Liquid Chromatography (HPLC) and Gas Chromatography (GC). The field of chromatography is therefore firmly anchored in the three important regional pharmacopoeias: EP, USP and JP. Deficits in these pharmacopoeias' specifications for the detector parameters sampling rate and signal filtration are the major motivation for this thesis. Furthermore, data acquisition and processing within the detectors, and the software products dealing with the collected data, were investigated. Several concepts, such as the double-entry method, smoothing optimization and persistence-based signal filtration, were developed and are used as tools to examine the data integrity of commercial chromatography data systems (CDS), to improve the signal-to-noise ratio of small peaks and thereby lower the LOD and LOQ through efficient denoising, and to determine the suitable filter parameters and sampling rates a user can apply on a given system to accelerate method development and validation. All developments were tested on simulated and real chromatograms and shown to be suitable for their specific purposes. Although the concepts can still be optimized, some new aspects have been discovered in the long-investigated field of chromatography.
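The thesis's persistence-based filter is not shown here; as a generic illustration of how signal filtration trades peak shape for noise suppression, a simple moving-average filter on synthetic data already improves the signal-to-noise ratio:

```python
# Hedged sketch of the general idea of signal filtration: a simple
# moving-average filter applied to a noisy chromatogram-like signal.
# The thesis develops more sophisticated (persistence-based) filters;
# this uses synthetic data purely for illustration.
import numpy as np

t = np.linspace(0, 10, 2000)                       # retention time axis (a.u.)
peak = np.exp(-((t - 5.0) ** 2) / (2 * 0.05 ** 2)) # small Gaussian peak
noisy = peak + np.random.default_rng(1).normal(0, 0.2, t.size)

window = 25                                        # filter width in points
kernel = np.ones(window) / window
smoothed = np.convolve(noisy, kernel, mode="same")

def snr(signal):
    baseline = signal[t < 3]                       # region without the peak
    return signal.max() / baseline.std()

print(f"SNR raw: {snr(noisy):.1f}, SNR smoothed: {snr(smoothed):.1f}")
```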
Chapter
Full-text available
Predictions are made every day, by everyone. How many packages of rice do we need to buy at the supermarket? What is the best route to get to work? The answers we give to these questions, usually unconsciously, are grounded in the available information and in past experience: for example, if we consumed two kilos of rice last week and there are no extraordinary circumstances in sight, it is quite possible that this is the quantity needed for the coming week. Naturally, we are also able to incorporate additional information to adjust our decisions. When there is news of an accident on the route we usually take, we try to establish an alternative route. This cognitive procedure is natural.
Chapter
This chapter provides an overview of information security, including key concepts, types of controls, access methods, and auditing concepts. The chapter also provides an overview of transportation-related cybersecurity systems and technologies used in maritime transportation.
Book
Full-text available
A comprehensive entity security program deploys information asset protection through stratified technological and non-technological controls. Controls are necessary to counteract threat, opportunity, and vulnerability risks in a manner that reduces potential adverse effects to defined, acceptable levels. This book presents a methodological approach in the context of normative decision theory constructs and concepts, with appropriate reference to standards and the respective guidelines. Normative decision theory attempts to establish a rational framework for choosing between alternative courses of action when the outcomes resulting from the selection are uncertain. Through methodological application, decision theory techniques can provide objectives determination, interaction assessments, performance estimates, and organizational analysis. A normative model prescribes what should exist according to an assumption or rule.
Article
Data warehouses are the most valuable assets of an organization and are used primarily for critical business and decision-making purposes. Data from different sources is integrated into the data warehouse, so security issues arise as data is moved from one place to another. Data warehouse security addresses the methodologies that can be used to secure the data warehouse by protecting information from access by unauthorized users and maintaining the reliability of the data warehouse. A data warehouse invariably contains information that must be considered extremely sensitive and confidential, and protecting this information is very important because it is accessed by users at various levels of the organization. The authors propose a method to protect information based on an encryption scheme that secures the data in the data warehouse. This article presents the most feasible security algorithm that can be used to secure the data stored in the operational database so as to prevent unauthorized access.
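The article's specific algorithm is not reproduced here; a minimal sketch of the general protect-at-rest idea, using symmetric field-level encryption via the third-party cryptography package, might look as follows:

```python
# Hedged sketch of symmetric, field-level encryption before loading data
# into a warehouse staging area. Not the article's specific algorithm;
# it only illustrates the general idea using the third-party
# `cryptography` package. The sample value is hypothetical.
from cryptography.fernet import Fernet

key = Fernet.generate_key()    # in practice, managed by a key vault
cipher = Fernet(key)

sensitive_value = b"4111-1111-1111-1111"
token = cipher.encrypt(sensitive_value)   # what gets stored
restored = cipher.decrypt(token)          # only for authorized readers

assert restored == sensitive_value
print(token[:16], b"...")
```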
Chapter
Using an interpretive case study approach, this chapter describes the data quality problems in two companies: (1) a Multi-Facility Healthcare Medical Group (MHMG), and (2) a Regional Health Insurance Company (RHIS). These two interpretive cases examine two different processes of the healthcare supply chain and their integration with a business intelligence system. Specifically, the issues examined are MHMG's revenue cycle management and RHIS's provider enrollment and credentialing process. A Data and Information Quality (DIQ) assessment of the revenue cycle management process demonstrates how a framework, referred to as PGOT, can identify improvement opportunities within any information-intensive environment. Based on the assessment of the revenue cycle management process, data quality problems associated with the key processes and their implications for the healthcare organization are described. This chapter provides recommendations for DIQ best practices and illustrates these best practices within the real-world context of healthcare.
Preprint
Full-text available
Importance of Techno-Business Data for effective management of organisations. Exclusive for Technology Development Leaders.
Article
Full-text available
Data mining refers to the process of analyzing massive volumes of data, including big data, from different angles in order to identify relationships in the data and transform them into actionable information. The revolution driven by the digitization of the economy and the circulation of varied data, in volumes never reached before and in record time, is fundamentally shaking up how our organizations function. Organizations are thus faced with a gigantic wave of data both endogenous and exogenous to their own environment. Artificial intelligence, for its part, represents the employment-impacting side of this new industrial revolution, promising to convert a new category of tasks previously performed by humans into a form of automation even more elaborate than that of previous industrial revolutions, on a scale that no one can yet fully appreciate. In the interest of having a much broader and clearer vision of data mining and its impact on education, it is necessary to draw up a judicious plan for the project by answering the following question: how can big data influence the education sector?
Book
This book explores and analyses the historical development of archival science in the world and in Croatia, focusing on a comparative analysis of traditional and digital archival science. The primary contribution of the research is the introduction of new terminology for archiving in the digital environment, which positions archival science as a modern and independent scientific discipline within the field of Information and Communication Sciences. The central journal of Croatian archival science, Bulletin d’archives (published by the Croatian State Archives), serves as the main example for the content analysis of the new terminological concepts, because it promotes archival science as an autonomous scientific discipline (as described on the Croatian scientific portal Hrčak). The analysis was conducted longitudinally on two levels. The first level covers the periods from 1899 to 1945 and from 1958 to the crucial year 1984, which is important because the Department of Archival Science was established that year within the Department of Information and Communication Sciences, Faculty of Humanities and Social Sciences, University of Zagreb, a key point for interpreting the development of archival science as a scientific discipline. On the second level, from 1985 to 2020, the new digital archival science terminology provides insight into the representation of certain standardized basic concepts of contemporary archiving, which, in addition to traditional ones, include new concepts such as digitalization, computerization, management of electronic records, preservation of the authenticity of digital records, the digital preservation process, digital signatures (blockchain), digital archives, the virtual environment, artificial intelligence, etc. Special attention is devoted to the transition from the traditional to the digitally networked virtual environment, supported by the programming of a structured database that connects the Hungarian portal Hungaricana and the Croatian portal Hrčak. Articles examined in the journal Bulletin d’archives are classified according to the reviewed types: original scientific paper, preliminary communication, review paper, professional paper and conference paper. This methodology also profiles the main participants in the field of archival science in the digital environment, who change the historical direction of archiving from its beginnings in diplomatics (the critical analysis of documents), through an auxiliary historical science, to the modern and autonomous discipline of digital archival science, which has already adopted independent standardized concepts oriented towards artificial intelligence. The modern digital environment and the constant need to validate archiving as a digital science pose an extraordinary challenge for the survival and transition of traditional archives into digital archives in the virtual environment of artificial intelligence within the interdisciplinary sciences.
Article
Full-text available
An Object-Oriented Implementation of Quality Data Products. Work reported herein has been supported, in part, by MIT's Total Data Quality Management (TDQM) Research Program, MIT's International Financial Services Research Center (IFSRC), Fujitsu Personal Systems, Inc. and Bull-HN.
Article
Full-text available
Information is valuable if it derives from reliable data. However, measurements of data reliability have not been widely established in the area of information systems (IS). This paper attempts to draw some concepts of reliability from the field of quality control and to apply them to IS. The paper develops three measurements for data reliability: internal reliability, which reflects the "commonly accepted" characteristics of various data items; relative reliability, which indicates compliance of data with user requirements; and absolute reliability, which determines the level of resemblance of data items to reality. The relationships between the three measurements are discussed, and the results of a field study are displayed and analyzed. The results provide some insightful information on the "shape" of the database that was inspected, as well as on the degree of rationality of some user requirements. General conclusions and avenues for future research are suggested.
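As a hedged illustration of the paper's relative reliability measurement (the share of data items complying with user requirements), with hypothetical requirement predicates:

```python
# Hedged sketch of "relative reliability": the proportion of data items
# that comply with user-stated requirements. The records and requirement
# predicates below are hypothetical stand-ins.
records = [
    {"age": 34, "email": "a@example.com"},
    {"age": -3, "email": "b@example.com"},   # violates the age requirement
    {"age": 51, "email": None},              # violates the email requirement
]

user_requirements = [
    lambda r: r["age"] is not None and 0 <= r["age"] <= 120,
    lambda r: bool(r["email"]),
]

compliant = sum(all(req(r) for req in user_requirements) for r in records)
relative_reliability = compliant / len(records)
print(f"relative reliability: {relative_reliability:.2f}")  # 0.33
```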
Article
Full-text available
Information quality (IQ) is an inexact science in terms of assessment and benchmarks. Although various aspects of quality and information have been investigated [1, 4, 6, 7, 9, 12], there is still a critical need for a methodology that assesses how well organizations develop information products and deliver information services to consumers. Benchmarks developed from such a methodology can help compare information quality across organizations, and provide a baseline for assessing IQ improvements.
Article
Full-text available
Poor data quality has far-reaching effects and consequences. This article aims to increase awareness by summarizing the impacts of poor data quality on a typical enterprise. These impacts include customer dissatisfaction, increased operational cost, less effective decision-making and a reduced ability to make and execute strategy. More subtly perhaps, poor data quality hurts employee morale, breeds organizational mistrust, and makes it more difficult to align the enterprise. Creating awareness of a problem and its impact is a critical first step towards its resolution. The needed awareness of poor data quality, while growing, has not yet been achieved in many enterprises. After all, the typical executive is already besieged by too many problems: low customer satisfaction, high costs, a data warehouse project that is late, and so forth. Creating awareness of accuracy levels and their impacts within the enterprise is the first obstacle that practitioners must overcome when implementing data quality programs.
Article
Full-text available
Organizational databases are pervaded with data of poor quality. However, there has not been an analysis of the data quality literature that provides an overall understanding of the state-of-the-art research in this area. Using an analogy between product manufacturing and data manufacturing, this paper develops a framework for analyzing data quality research and uses it as the basis for organizing the data quality literature. This framework consists of seven elements: management responsibilities, operation and assurance costs, research and development, production, distribution, personnel management, and legal function. The analysis reveals that most research efforts focus on operation and assurance costs, research and development, and production of data products. Unexplored research topics and unresolved issues are identified, and directions for future research are provided.
Article
Protecting the integrity of data will challenge analytical labs as they become compliant with the requirements of 21 CFR Part 11. Other responsibilities include ensuring the reliability and trustworthiness of electronic records used to support particular decisions, such as release of a production batch.
Article
This article presents some general ideas on how various aspects of the systems development life cycle might be measured as one means of assessing and controlling the quality of a system. We feel that this is a fruitful area for research with potentially important industrial and research implications. We strongly urge further research and investigation into systems development quality control based on the framework established in this article.
Article
IS and user departments expect correct information. When embarrassing information mistakes occur, the result is the loss of credibility, business, and customer confidence. This article reviews the causes of incorrect and inaccurate information, examines existing solutions to data quality problems, and discusses the importance of information integrity as the cornerstone to achieving total quality management, business process reengineering, and automated operations objectives.
Article
Poor data quality (DQ) can have substantial social and economic impacts. Although firms are improving data quality with practical approaches and tools, their improvement efforts tend to focus narrowly on accuracy. We believe that data consumers have a much broader data quality conceptualization than IS professionals realize. The purpose of this paper is to develop a framework that captures the aspects of data quality that are important to data consumers. A two-stage survey and a two-phase sorting study were conducted to develop a hierarchical framework for organizing data quality dimensions. This framework captures dimensions of data quality that are important to data consumers. Intrinsic DQ denotes that data have quality in their own right. Contextual DQ highlights the requirement that data quality must be considered within the context of the task at hand. Representational DQ and accessibility DQ emphasize the importance of the role of systems. These findings are consistent with our understanding that high-quality data should be intrinsically good, contextually appropriate for the task, clearly represented, and accessible to the data consumer. Our framework has been used effectively in industry and government. Using this framework, IS managers were able to better understand and meet their data consumers' data quality needs. The salient feature of this research study is that quality attributes of data are collected from data consumers instead of being defined theoretically or based on researchers' experience. Although exploratory, this research provides a basis for future studies that measure data quality along the dimensions of this framework.
Article
Numerous researchers in a handful of disciplines have been concerned, in recent years, with the special role (or roles) that time seems to play in information processing. Designers of computerized information systems have had to deal with the fact that when an information item becomes outdated, it need not be forgotten. Researchers in artificial intelligence have pointed to the need for a realistic world model to include representations not only for snapshot descriptions of the real world, but also for histories, or the evolution of such descriptions over time. Many logicians have regarded classical logic as an awkward tool for capturing the relationships and the meaning of statements involving temporal reference, and have proposed special "temporal logics" for this purpose. Finally, the analysis of tensed statements in natural language is a principal concern of researchers in linguistics.
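As an illustrative aside, one common way system designers keep outdated items without forgetting them is valid-time intervals queried "as of" a date; the field names below are hypothetical:

```python
# Hedged sketch of valid-time records: superseded values are retained
# and can be queried "as of" a point in time. The schema and values are
# hypothetical; the surveyed paper discusses the concepts, not this code.
from datetime import date

price_history = [
    {"sku": "A1", "price": 9.99,  "valid_from": date(2023, 1, 1), "valid_to": date(2023, 6, 30)},
    {"sku": "A1", "price": 11.49, "valid_from": date(2023, 7, 1), "valid_to": date(9999, 12, 31)},
]

def price_as_of(sku: str, when: date):
    for row in price_history:
        if row["sku"] == sku and row["valid_from"] <= when <= row["valid_to"]:
            return row["price"]
    return None

print(price_as_of("A1", date(2023, 3, 15)))  # 9.99 (superseded, not forgotten)
print(price_as_of("A1", date(2024, 1, 1)))   # 11.49
```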
Article
A framework is described that was developed to help assess communications service quality in a manner that most closely reflects the broad array of performance parameters experienced by customers. Examples of its application are presented. An additional benefit experienced with the use of this framework is a more consistent, disciplined analysis of service parameters, adding to the thoroughness with which service providers and customers alike assess the quality of communications services.
Article
This article presents a methodology and tests its efficacy through a rigorous case study. The players in our study include information producers, who generate and provide information; information custodians, who provide and manage computing resources for storing, maintaining, and securing information; and information consumers, who access and use information [9]. We extend previous research on managing information as product to incorporate the service characteristics of information delivery. We draw upon the general quality literature, which discusses quality as conformance to specification and as exceeding consumer expectations. Our key contribution is the development of a two-by-two conceptual model for describing IQ: the columns capture quality as conformance to specifications and as exceeding consumer expectations, and the rows capture quality from its product and service aspects. We refer to this model as the product and service performance model for information quality (PSP/IQ).
  • Understandability/granularity/aggregation
Transmission was added because, in network-based systems, this phase represents a key information processing phase with special risk and control considerations that should not be overlooked.