Similar publications
Introduction
The aim of the study was to systematically review relevant studies to evaluate the diagnostic value of urinary kidney injury molecule 1 (uKIM-1) for acute kidney injury (AKI) in adults.
Method
We searched PubMed and Embase for literature published up to November 1, 2019, and used the Quality Assessment Tool for Diagnostic Accuracy Stu...
Citations
... Sampling techniques reduce the time required to compute data quality by approximating the results. Commonly used methods include simple random sampling, systematic sampling, stratified sampling, cluster sampling, and reservoir sampling [14,57,81,113,124]. These techniques guide the choice of sample sizes and the selection of samples so that quality metadata can be evaluated effectively, taking into account the data source's features and the relevant quality dimensions. ...
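Of the techniques listed in this excerpt, reservoir sampling is the least obvious to implement, so a minimal Python sketch may help. The record layout, field names, and the completeness check are illustrative assumptions, not taken from any of the cited papers.

```python
import random

def reservoir_sample(stream, k, seed=None):
    """Algorithm R: keep a uniform random sample of k items from a
    stream whose total length is unknown in advance."""
    rng = random.Random(seed)
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)
        else:
            j = rng.randint(0, i)  # uniform index in [0, i]
            if j < k:
                reservoir[j] = item
    return reservoir

# Illustrative use: estimate completeness of an 'email' field from a sample.
records = ({"id": n, "email": None if n % 7 == 0 else f"u{n}@example.org"}
           for n in range(100_000))
sample = reservoir_sample(records, k=1_000, seed=42)
complete = sum(r["email"] is not None for r in sample) / len(sample)
print(f"estimated completeness: {complete:.3f}")  # close to 6/7 = 0.857
```

Because the stream is consumed once and only k items are kept in memory, this suits exactly the large-dataset setting the excerpt describes.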
... For instance, in [19], sampling is used either to improve calculation accuracy within a given time constraint or to reduce calculation time once accuracy meets user requirements. [81] discusses the use of simple random and systematic sampling to assess data quality dimensions such as completeness, accuracy, and timeliness, demonstrating that systematic sampling is preferable for accuracy and completeness, while simple random sampling suits timeliness assessment best. Further, [118] explores bootstrap sampling as part of a Big Data Quality (BDQ) evaluation scheme, which uses sampled data to profile the dataset and choose suitable quality metrics, particularly for accuracy, completeness, and consistency. ...
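To make the contrast between the two samplers compared in [81] concrete, here is a sketch of their mechanics applied to a synthetic accuracy check. The dataset, error rate, and sample size are invented for illustration and do not reproduce the cited experiments.

```python
import random

def simple_random_sample(data, n, rng):
    """Uniform sample of n items, drawn without replacement."""
    return rng.sample(data, n)

def systematic_sample(data, n, rng):
    """Every k-th item from a random start, with k = len(data) // n."""
    k = len(data) // n
    start = rng.randrange(k)
    return data[start::k][:n]

rng = random.Random(7)
# Synthetic dataset: roughly 5% of the values are flagged inaccurate (False).
data = [rng.random() > 0.05 for _ in range(50_000)]

for name, sampler in [("simple random", simple_random_sample),
                      ("systematic", systematic_sample)]:
    s = sampler(data, 2_000, rng)
    print(f"{name:>13}: estimated accuracy = {sum(s) / len(s):.3f}")
```

Both estimates land near the true 95% accuracy here; the differences [81] reports arise on real data, where ordering effects make systematic sampling behave differently from simple random sampling.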
Artificial intelligence (AI) has transformed various fields, significantly impacting our daily lives. A major factor in AI's success is high-quality data. In this paper, we present a comprehensive review of the evolution of data quality (DQ) awareness from traditional data management systems to modern data-driven AI systems, which are integral to data science. We synthesize the existing literature, highlighting the quality challenges and techniques that have evolved from traditional data management to data science, including the big data and ML fields. As data science systems support a wide range of activities, our focus in this paper lies specifically in the analytics aspect driven by machine learning. We use the cause-effect connection between the quality challenges of ML and those of big data to enable a more thorough understanding of emerging DQ challenges and the related quality-awareness techniques in data science systems. To the best of our knowledge, this paper is the first to provide a review of DQ awareness spanning traditional and emergent data science systems. We hope that readers will find this journey through the evolution of data quality awareness insightful and valuable.
... − Big data: since the goal of [28] is to evaluate data value in terms of data quality, the authors employed three data quality dimensions (accuracy, completeness, and redundancy), as shown in Table 7; based on these dimensions, a linear model is then established to calculate the quality scores. In addition, Liu et al. [36] proposed an approximate quality assessment model based on dataset sampling to evaluate the quality of big data. Using various sample sizes and sampling techniques, the authors chose three dimensions (completeness, accuracy, and timeliness) to evaluate each sample. ...
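A linear model over quality dimensions is straightforward to sketch. The weights below are hypothetical: [28] establishes such a model, but its actual coefficients are not reproduced in this excerpt.

```python
# Hypothetical weights; the cited linear model's real coefficients are unknown.
WEIGHTS = {"accuracy": 0.5, "completeness": 0.3, "redundancy": 0.2}

def quality_score(dims):
    """Linear combination of per-dimension scores in [0, 1]. Redundancy is
    assumed to be expressed as non-redundancy, so higher means better."""
    return sum(WEIGHTS[d] * dims[d] for d in WEIGHTS)

print(quality_score({"accuracy": 0.95, "completeness": 0.88,
                     "redundancy": 0.97}))  # 0.5*0.95 + 0.3*0.88 + 0.2*0.97 = 0.933
```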
... Table 8 presents the three metrics, where S is a collection of data units, S_acc is the subset of accurate data units in S, S_cp is the subset of complete data units in S, N is the cardinality of S, and M is the combined cardinality of S_acc and S_cp.
[Table 8. Quality metrics used in [36]: one metric each for Accuracy, Completeness, and Timeliness; the formulas themselves did not survive extraction.] ...
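Since the formulas from Table 8 are lost, the sketch below assumes a plausible ratio reading of the definitions given in the text: accuracy as |S_acc|/N, completeness as |S_cp|/N, and, purely for illustration, timeliness as the share of units updated within a freshness threshold. None of these is confirmed by [36].

```python
from datetime import datetime, timedelta

def quality_metrics(S, now, max_age_days=30):
    """Assumed ratio form of the Table 8 metrics (source formulas lost)."""
    N = len(S)
    accuracy = sum(u["accurate"] for u in S) / N       # |S_acc| / N
    completeness = sum(u["complete"] for u in S) / N   # |S_cp| / N
    fresh = timedelta(days=max_age_days)
    timeliness = sum(now - u["updated"] <= fresh for u in S) / N
    return accuracy, completeness, timeliness

units = [
    {"accurate": True,  "complete": True,  "updated": datetime(2019, 10, 20)},
    {"accurate": False, "complete": True,  "updated": datetime(2019, 6, 1)},
]
print(quality_metrics(units, now=datetime(2019, 11, 1)))  # (0.5, 1.0, 0.5)
```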
Defining and evaluating data quality can be a complex task, as it varies depending on the specific purpose for which the data is intended. To assess data quality effectively, it is essential to take into account the intended use of the data and the specific requirements of the data users. It is important to recognize that a standardized approach to data quality assessment (DQA) may not be suitable in all cases, as different uses of data may have distinct quality criteria and considerations. To advance research in the field of data quality, it is useful to determine the current state of the art by identifying, evaluating, and analyzing relevant research conducted in recent years. In light of this objective, the study proposes a systematic literature review (SLR) as a suitable approach to examine the landscape of data quality and investigate available research specifically pertaining to DQA. The findings of our SLR demonstrate the criticality of data quality, point to new directions for future study, and carry implications for researchers and practitioners interested in defining and assessing data quality.
... Having such a format will make it possible to use regular expressions to extract the name of each author and patent holder (rights holder) individually and to link them with the profiles of employees and organizations registered in the CRIS. The quality of the open data was assessed on the basis of the results obtained while measuring data quality [24][25][26][27][28][29][30][31][32], against the characteristics relevant for a CRIS [4,5]. These characteristics include completeness, accuracy (correctness), consistency (non-contradiction, integrity, uniqueness), and timeliness (currency). ...
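The per-name extraction described here is easy to picture with a small regex sketch. The semicolon-separated author field is a hypothetical example of the regular format the excerpt assumes; the actual FIPS layout is not reproduced here.

```python
import re

# Hypothetical record: a semicolon-separated author field.
raw = "Ivanov I.I.; Petrova A.S.; Sidorov P.N."
AUTHOR_RE = re.compile(r"\s*([^;]+?)\s*(?:;|$)")

authors = AUTHOR_RE.findall(raw)
print(authors)  # ['Ivanov I.I.', 'Petrova A.S.', 'Sidorov P.N.']
# Each extracted name can then be matched against CRIS person profiles.
```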
... The documents published in the open registers on the FIPS website are not intended for bulk downloading. Therefore, the value-conformance checks and the computation of metric M3 were carried out using simple random samples [27,32]. For this purpose, 800 records were extracted from each open dataset. ...
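The sampling step above translates to a few lines of Python. The excerpt does not define M3, so the sketch treats it as the share of sampled records whose author field conforms to an expected format; the field name, the pattern, and the loader are assumptions for illustration only.

```python
import random
import re

# Assumed conformance pattern: "Name; Name; ..." (illustrative only).
EXPECTED = re.compile(r"[^;]+(; [^;]+)*")

def metric_m3(dataset, n=800, seed=1):
    """Estimate format conformance from a simple random sample of n records."""
    sample = random.Random(seed).sample(dataset, n)
    hits = sum(bool(EXPECTED.fullmatch(r["authors"])) for r in sample)
    return hits / n

# records = load_open_dataset(...)  # hypothetical loader for one OD dataset
# print(metric_m3(records))
```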
... To link the information about authors and patent holders (rights holders) with the employees and organizations registered in the CRIS, the format-mismatch problems identified in this study must be taken into account. ...