Managing and Sharing Research Data: A Guide to Good Practice
... This requirement does not mean that the data must be open; rather, it promotes the possibility of facilitating the discovery, dissemination and reuse of data, contributing to accelerating scientific advances and fulfilling the principle that "data should be as open as possible and as closed as necessary" (European Commission 2017). The FAIR principles thus align with the research data life cycle and also promote validity, ease of understanding and long-term usability (Corti et al. 2019). In this way, meeting these principles benefits the openness of the data, making them easily findable, accessible, interoperable and reusable, with particular emphasis on this last quality (Wilkinson et al. 2016). ...
... In this context, these principles can be seen as guidelines for researchers, publishers and organisations, helping to increase the reuse of scientific data. As Corti et al. (2019) state, these data should be FAIR not only for humans but also for machines, enabling retrieval, automated access and subsequent reuse (Corti et al. 2019). ...
The aim of this work is to demonstrate that the creation of a Data Management Plan contributes to the organisation and systematisation of data produced and managed within projects, thereby facilitating and supporting teams in decision-making concerning the direct management and operation of a given project, such as the selection of a repository for data deposition. Using the IPAlliance project as a case study, a collaborative model was methodologically employed in which the Data Steward plays a central role, assuming a guiding function in the implementation of best practices and supporting data management, including data deposition. It is concluded that Data Management Plans can be highly beneficial for structuring projects, promoting research and development processes, as well as the reuse and replication of research data in other contexts. [Full text in Portuguese]
... More than half of the courses from ALA schools (53%) did not require a textbook, while more schools from the CC list required the use of textbooks (58%). Most textbooks used by ALA schools were about research data management, data sharing and reuse, and scholarship involving data, all in the context of libraries (e.g., Borgman, 2015;Corti et al., 2019;Krier & Strasser, 2014;Ray, 2014). The textbooks used by CC institutions were more technical (e.g., topics on data mining and database management), often statistical (e.g., Baum, 2016;Delwiche & Slaughter, 2003), or oriented toward data analytics in general (e.g., Kabacoff, 2015;Wickham & Grolemund, 2017), but sometimes also included research data sharing and management (e.g., Corti et al., 2019). ...
With growing emphasis on data curation practice in both science and industry, there has been a call for information professionals to take on a substantial role in data curation. Library and information science (LIS) education has been responding to this call by offering various training opportunities from Master's education to professional development. The most recent effort to systematically review a data curation curriculum offered by ALA-accredited LIS schools was in 2012, so it is time to revisit the progress and evolution of data curation education. The main goal of this study is to analyze the course content from the syllabi of various programs to understand what is being taught in LIS schools throughout graduate-level education. Further, because the need for data curation is apparent across different disciplines, and thus not only LIS but also other disciplines have been offering data curation courses, this study also analyzed syllabi from other disciplines. A total of 80 syllabi were analyzed in this study: 15 syllabi from 9 ALA-accredited institutions and 65 syllabi from 53 institutions of Carnegie Classification (CC). Our findings suggest a notable growth in LIS education in data curation since 2012, but LIS education still provides less training in technical skills. There was also a distinctive difference in educational approach to teach data curation between LIS (user- and service-oriented) and other disciplines (technical skills−focused), which brought different strengths and weaknesses in curriculum.
... How Will QR Data Sharing Affect Project Planning? Sharing data in accord with ERR and FAIR requires a substantial investment of time and planning (27). We focus here on three elements of planning. ...
... Data sharing is made significantly easier when best practices for data organization, storage, and labeling are followed. This involves storing all data in a secure, central location accessible to all team members; and storing all meta-data such as study protocols, consent forms, and data collection instruments such as interview guides, and documenting how these relate to the data (27,31). ...
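To make the storage and documentation practices in the excerpt above concrete, here is a minimal sketch under assumed conventions: every path, field name and file name is hypothetical and not taken from the cited guidance. It scaffolds a shared project directory and writes a small manifest recording how meta-data documents (protocol, consent form, interview guide) relate to the data they describe.

```python
# Illustrative sketch only: one possible folder layout and metadata manifest
# reflecting the practices described above. All paths and field names are
# hypothetical, not taken from the cited guidance.
import json
from pathlib import Path

PROJECT_ROOT = Path("qr_study")          # hypothetical central project location
SUBDIRS = ["data/raw", "data/deidentified", "docs/protocols",
           "docs/consent_forms", "docs/instruments"]

def scaffold_project() -> None:
    """Create the shared directory structure for data and meta-data."""
    for sub in SUBDIRS:
        (PROJECT_ROOT / sub).mkdir(parents=True, exist_ok=True)

def write_manifest() -> None:
    """Record how meta-data documents relate to the data they describe."""
    manifest = {
        "interview_transcripts": {
            "location": "data/raw/",
            "collection_instrument": "docs/instruments/interview_guide_v2.pdf",
            "consent_form": "docs/consent_forms/adult_consent_v1.pdf",
            "protocol": "docs/protocols/study_protocol.pdf",
        }
    }
    (PROJECT_ROOT / "MANIFEST.json").write_text(json.dumps(manifest, indent=2))

if __name__ == "__main__":
    scaffold_project()
    write_manifest()
```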
In January 2023, a new NIH policy on data sharing went into effect. The policy applies to both quantitative and qualitative research (QR) data such as data from interviews or focus groups. QR data are often sensitive and difficult to deidentify, and thus have rarely been shared in the United States. Over the past 5 y, our research team has engaged stakeholders on QR data sharing, developed software to support data deidentification, produced guidance, and collaborated with the ICPSR data repository to pilot the deposit of 30 QR datasets. In this perspective article, we share important lessons learned by addressing eight clusters of questions on issues such as where, when, and what to share; how to deidentify data and support high-quality secondary use; budgeting for data sharing; and the permissions needed to share data. We also offer a brief assessment of the state of preparedness of data repositories, QR journals, and QR textbooks to support data sharing. While QR data sharing could yield important benefits to the research community, we quickly need to develop enforceable standards, expertise, and resources to support responsible QR data sharing. Absent these resources, we risk violating participant confidentiality and wasting a significant amount of time and funding on data that are not useful for either secondary use or data transparency and verification.
Keywords: data sharing | qualitative research | research compliance | FAIR principles | data de-identification
... In Thieberger and Berez's (2012) workflow, they explain that they decide on conventions for their file naming and metadata before collecting data in the field and, with this, point to a key practice in responsible data management: planning. Funding agencies, as described in Kung (chapter 8, this volume), are increasingly requiring data management plans (DMPs) as part of a grant application, but even where there is no requirement, investing time in crafting a plan can help to refine existing data practices (Mannheimer 2018:15) and encourage efficiency (Corti et al. 2014; Kung, chapter 8, this volume). DMPs generally include a description of the data that will be collected, the metadata and documentation that will be produced, the ways the data will be stored and backed up, security and privacy protections for relevant data, data access policies during a project, and a long-term plan for data sharing and preservation ... understand what their roles are in managing data (Corti et al. 2014:29). ...
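Since the excerpt above lists the components typically found in a DMP, the following is a minimal sketch of how those components might be captured in a structured form; the section names and example values are assumptions for illustration, not a funder or repository template.

```python
# Hypothetical DMP skeleton mirroring the elements listed in the excerpt above.
# Field names and example values are illustrative only.
from dataclasses import dataclass, asdict
import json

@dataclass
class DataManagementPlan:
    data_description: str = "Audio recordings and transcripts of elicitation sessions"
    metadata_and_documentation: str = "File-naming conventions, session metadata, field notes"
    storage_and_backup: str = "Institutional storage with nightly off-site backup"
    security_and_privacy: str = "Encrypted storage; access restricted to project members"
    access_during_project: str = "Read access for the project team, write access for the data steward"
    long_term_sharing: str = "Deposit in a domain repository under an open licence"

if __name__ == "__main__":
    # Print the plan as structured text that could seed a funder-specific template.
    print(json.dumps(asdict(DataManagementPlan()), indent=2))
```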
... org/groups/linguistics-data-ig. 4. As a starting point, see Corti et al. (2014). Guides by academic libraries are valuable sources of information about data management practices: University of Minnesota Libraries, "Research Data Services," https://www. ...
A guide to principles and methods for the management, archiving, sharing, and citing of linguistic research data, especially digital data.
“Doing language science” depends on collecting, transcribing, annotating, analyzing, storing, and sharing linguistic research data. This volume offers a guide to linguistic data management, engaging with current trends toward the transformation of linguistics into a more data-driven and reproducible scientific endeavor. It offers both principles and methods, presenting the conceptual foundations of linguistic data management and a series of case studies, each of which demonstrates a concrete application of abstract principles in a current practice.
In part 1, contributors bring together knowledge from information science, archiving, and data stewardship relevant to linguistic data management. Topics covered include implementation principles, archiving data, finding and using datasets, and the valuation of time and effort involved in data management. Part 2 presents snapshots of practices across various subfields, with each chapter presenting a unique data management project with generalizable guidance for researchers. The Open Handbook of Linguistic Data Management is an essential addition to the toolkit of every linguist, guiding researchers toward making their data FAIR: Findable, Accessible, Interoperable, and Reusable.
... An ELN is neither an end in itself nor a silver bullet. Nevertheless, it can clearly be a crucial component of your research data management [91][92][93], particularly given that data are becoming both increasingly digital and voluminous. The workflows described here and implemented in the LabInform ELN work well for the authors in a spectroscopic setting deeply rooted in physical chemistry. ...
... Extending the functionality of the ASpecD framework in this direction is currently actively being considered. In terms of research data management [91][92][93] and the research data life cycle [94], further aspects that need to accompany an ELN are a (local) data repository as well as PIDs. Those concepts have been implemented in the wider LabInform infrastructure [55], particularly with the Datasafe [54] as repository and the Lab Object Identifier (LOI) concept for PIDs. ...
Scientific recordkeeping is a key prerequisite of reproducibility and hence an essential aspect of conducting science. Researchers need to be able to retrospectively figure out what they or others did, how they collected data and how they drew conclusions. This is typically the realm of laboratory notebooks, and with the advent of the digital era, there is an increasing move towards digitising those notebooks as well. Here, we present LabInform ELN, a lightweight and flexible electronic laboratory notebook for academic research based on the open-source software DokuWiki. Key features are its minimal system requirements, flexibility, modularity, and compliance with auditing requirements. The LabInform ELN is compared with other leading open-source solutions, its key concepts are discussed and full working examples for a spectroscopic laboratory as well as for quantum-chemical calculations presented. The minimalistic system requirements allow for using it in small groups and even by individual scientists and help with improving both, reproducibility and access to the notes taken. At the same time, thanks to its fine-grained access management, it scales well to larger groups. Furthermore, it can be easily adjusted to specific needs from within its web interface. Therefore, we anticipate LabInform ELN and the ideas behind its implementation to have a high impact in the field, particularly for groups with limited IT resources.
... Hence, resources, infrastructure, governance, policies and institutional systems are still in transition in these institutions (Cox et al., 2019). The volume (amount of data), velocity (the speed of data generation), variety (diversity and complexities of data formats), and veracity (reliability and integrity) of data itself present a significant challenge to offering RDM services (Clement et al., 2017;Corti et al., 2019;Federer, 2013;Koltay, 2016;Perrier et al., 2018). Moreover, the variety of research types and practices and the marked disciplinary differences in data practices complicate data management (Cox et al., 2019;Whitmire et al., 2015). ...
... These include institutional commitment, collaboration, academic engagement, technology infrastructure, and a lack of policies and financing. The volume, velocity, diversity, and veracity (reliability and integrity) of the data itself also present a significant challenge to offering RDM services (Clement et al., 2017;Corti et al., 2019;Federer, 2013;Koltay, 2019;Perrier et al., 2018;Yu, 2017). Moreover, disciplinary differences in data practices complicate data management. ...
This study provides insights into the evolution and conceptual framework of research data management (RDM). It also investigates the role of libraries and librarians in offering data management services and the challenges they face in this regard. The study is qualitative in nature and based on an extensive literature review survey. The analysis of the reviewed literature reveals that the idea of RDM has emerged as a new addition to library research support services. The more recent literature clearly established the pivotal role of libraries and librarians in developing and managing RDM services. However, data sharing practices and the development of RDM services in libraries are more prevalent in developed countries. While these trends are still lacking among researchers and libraries in developing countries. Creating awareness among researchers about the benefits of data sharing is a challenging task for libraries. Furthermore, institutional commitment, collaboration, academic engagement, technological infrastructure development, lack of policies, funding, and storage, skills, and competencies required for librarians to offer RDM-based services are some of the other significant challenges highlighted in the literature. Certainly, RDM services are difficult and complicated; therefore, librarians need to master the skills of research data to offer library-based RDM services.
... In 2013, they started to organize a seminar, at a time when there was no structured material (no textbook or teaching experiences) available. In 2019, there were some handbooks on RDM (Briney, 2015;Corti et al., 2014;Ludwig & Enke, 2013;Pryor, 2012;Ray, 2014), an Open Science book (Nielsen, 2014), online resources, online tutorials, Webinars and even a MOOC. ...
This article focuses on how data literacy education such as research data management skills can be integrated into teacher training programmes in order to adequately train the teachers of tomorrow. To this end, interviews were conducted with three lecturers from the Faculty of Education and analysed both qualitatively and quantitatively. The lecturers describe the topic of research data management as extremely relevant for students, especially in the Master's program. Even as future teachers, for example in computer science and the natural sciences, students will have a lot to do with data and need to be able to handle it competently. The article also discusses how research data management skills can be integrated into the teacher training program.
... The term "research data" according to Corti et al., (2014) encompasses a broad range of information types, from quantitative datasets and qualitative research outputs to multimedia files and experimental logs. Managing these diverse data forms requires sophisticated strategies and robust infrastructure. ...
... Social media content, particularly posts from activists and civil society organizations (CSOs), was analyzed to understand digital mobilization strategies. Additionally, academic studies and government documents provided scholarly insights and official perspectives on political developments (Corti et al. 2019). ...
This paper explores the critical role of civil society in political change in Bangladesh, with a particular focus on the 2024 July Revolution. The study investigates how civil society organisations (CSOs), grassroots movements, and advocacy groups effectively mobilised opposition against the country’s autocratic regime and fronted democratic change. The research aims to weigh the contributions of these organisations, particularly their role in organising the anti-discrimination student protests, which were pivotal to the revolution’s success. Using a mixed-method qualitative approach, the study draws on interviews with civil society leaders, focus group discussions with activists and protest participants, and an analysis of relevant documents, media reports, and public statements. Key findings expose that CSOs were essential in coordinating the protests, fostering coalitions between various social groups, and utilising digital platforms to swell their advocacy efforts. These engagements contributed to broader civic engagement, weakening the regime’s authoritarian grip. Despite facing awful state suppression, civil society’s nonstop efforts battered the moral authority of the dictatorship, allowing the eventual overthrow of the government. However, the study also admits CSOs' challenges, including state co-optation and control attempts, which limited their effectiveness at specific points. The paper stresses the need for continual support of civil society in authoritarian regimes. It highlights the importance of creating inclusive platforms for dialogue between civil society, political parties, and governmental institutions to prevent authoritarian backsliding. The insights from Bangladesh’s experience provide a framework for understanding civil society’s role in democratic transitions worldwide.
... In 2012, the European Commission called member states into action to improve access to and preservation of scientific information. An open data pilot was launched in their Horizon 2020 framework programme in 2013, and the use of DMPs in this pilot was implemented in 2016 (Corti et al., 2020). Soon thereafter, public funders in EU member states also started adopting data policies with DMP requirements. ...
At KU Leuven, a university in the Flemish region of Belgium, data management plans have become an important resource to drive and shape the development of data management support, services, and training. With 8,000 researchers and 7,000 PhD students in fundamental and applied research across a comprehensive range of disciplines, KU Leuven is the largest university in Belgium. Public research funding is provided by the federal and regional governments, mainly via the Research Foundation Flanders (FWO) and via research funding allocated to universities based on excellence criteria through the Special Research Fund (BOF) and the Industrial Research Fund (IOF). Since 2018, FWO and BOF-IOF incorporated data management into their policies, requiring researchers to submit Data Management Plans (DMPs) to their institutional research office. Since then, the number of DMPs that are developed each year has increased exponentially, from 150 in 2018 to nearly 700 per year now. The Research Coordination Office at KU Leuven decided to review all DMPs to provide feedback to ensure high-quality plans. To manage the submission, monitoring, review, and preservation of this volume of DMPs efficiently, an online platform was developed that is integrated with the university’s research information systems. Initially, the focus of the DMP review was on supporting the development of DMPs, as this was a new concept for researchers. The review process has significantly improved the quality of DMPs. Later, support shifted to provide advice on best practices in data management. Reviews of over 2600 DMPs provide a rich source of information to develop services and training. Based on findings from DMP reviews, the IT department developed an interactive storage guide; ethical and legal compliance in research projects can be monitored; new data management training modules are developed; and a collection of example DMPs has been developed. In addition, the growing DMP collection is a rich source of information on researchers’ data practices, providing the baseline information to develop further services. Future plans include implementing artificial intelligence in DMP reviews to automate problem detection and exploring machine-actionable DMPs for an institutional data register.
... Research data is generated during and as an end product of research and is usually retained by the scientific community as it is required to validate research findings (Corti et al., 2019). With advances in information and communication technologies and technical infrastructure, the generation of research data has increased significantly in both quantity and variety (Sheikh et al., 2023). ...
Purpose
This paper aims to address the pressing challenges in research data management within institutional repositories, focusing on the escalating volume, heterogeneity and multi-source nature of research data. The aim is to enhance the data services provided by institutional repositories and modernise their role in the research ecosystem.
Design/methodology/approach
The authors analyse the evolution of data management architectures through literature review, emphasising the advantages of data lakehouses. Using the design science research methodology, the authors develop an end-to-end data lakehouse architecture tailored to the needs of institutional repositories. This design is refined through interviews with data management professionals, institutional repository administrators and researchers.
Findings
The authors present a comprehensive framework for data lakehouse architecture, comprising five fundamental layers: data collection, data storage, data processing, data management and data services. Each layer articulates the implementation steps, delineates the dependencies between them and identifies potential obstacles with corresponding mitigation strategies.
Practical implications
The proposed data lakehouse architecture provides a practical and scalable solution for institutional repositories to manage research data. It offers a range of benefits, including enhanced data management capabilities, expanded data services, improved researcher experience and a modernised institutional repository ecosystem. The paper also identifies and addresses potential implementation obstacles and provides valuable guidance for institutions embarking on the adoption of this architecture. The implementation in a university library showcases how the architecture enhances data sharing among researchers and empowers institutional repository administrators with comprehensive oversight and control of the university’s research data landscape.
Originality/value
This paper enriches the theoretical knowledge and provides a comprehensive research framework and paradigm for scholars in research data management. It details a pioneering application of the data lakehouse architecture in an academic setting, highlighting its practical benefits and adaptability to meet the specific needs of institutional repositories.
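As a rough illustration of the five-layer organisation named under Findings above (collection, storage, processing, management, services), the sketch below chains toy implementations of each layer; all class and method names are assumptions and do not reflect the authors' actual architecture or code.

```python
# Hypothetical sketch of a five-layer pipeline (collection -> storage ->
# processing -> management -> services). Names are illustrative assumptions.
from typing import Any, Dict, List

class CollectionLayer:
    def ingest(self, sources: List[str]) -> List[Dict[str, Any]]:
        # Pull heterogeneous research outputs (files, records) from each source.
        return [{"source": s, "payload": f"raw content from {s}"} for s in sources]

class StorageLayer:
    def __init__(self) -> None:
        self._objects: List[Dict[str, Any]] = []   # stand-in for object storage
    def persist(self, items: List[Dict[str, Any]]) -> None:
        self._objects.extend(items)

class ProcessingLayer:
    def normalise(self, items: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
        # Apply a trivial normalisation step to every ingested item.
        return [{**item, "normalised": True} for item in items]

class ManagementLayer:
    def catalogue(self, items: List[Dict[str, Any]]) -> Dict[str, Dict[str, Any]]:
        # Register each processed item under a dataset identifier.
        return {f"dataset-{i}": item for i, item in enumerate(items)}

class ServiceLayer:
    def search(self, catalogue: Dict[str, Dict[str, Any]], term: str) -> List[str]:
        # Minimal discovery service over the catalogue.
        return [key for key, value in catalogue.items() if term in str(value)]

if __name__ == "__main__":
    raw = CollectionLayer().ingest(["lab_instruments", "survey_platform"])
    store = StorageLayer()
    store.persist(raw)
    processed = ProcessingLayer().normalise(raw)
    catalogue = ManagementLayer().catalogue(processed)
    print(ServiceLayer().search(catalogue, "survey"))
```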
... The excellent UK Data Archive guide to Managing and Sharing Data (Van den Eynden et al. 2011; see also Corti et al. 2014) provides an effective summary of best-practice guidelines based on wide-ranging input from many disciplines. However, it has a national focus, and is perhaps more exhortative than analytical. ...
Sharing scientific data, with the objective of making it fully discoverable, accessible, assessable, intelligible, usable, and interoperable, requires work at the disciplinary level to define in particular how the data should be formatted and described. Each discipline has its own organization and history as a starting point, and this paper explores the way a range of disciplines, namely materials science, crystallography, astronomy, earth sciences, humanities and linguistics get organized at the international level to tackle this question. In each case, the disciplinary culture with respect to data sharing, science drivers, organization and lessons learnt are briefly described, as well as the elements of the specific data infrastructure which are or could be shared with others. Commonalities and differences are assessed. Common key elements for success are identified: data sharing should be science driven; defining the disciplinary part of the interdisciplinary standards is mandatory but challenging; sharing of applications should accompany data sharing. Incentives such as journal and funding agency requirements are also similar. For all, it also appears that social aspects are more challenging than technological ones. Governance is more diverse, and linked to the discipline organization. CODATA, the RDA and the WDS can facilitate the establishment of disciplinary interoperability frameworks. Being problem-driven is also a key factor of success for building bridges to enable interdisciplinary research.
... With the increasing amount of data generated in research and the associated requirement to make it reusable and comprehensible for further research, researchers are confronted with various new challenges in research data management (RDM). RDM includes all data practices, manipulations, enhancements, and processes that ensure that research data are of high quality, well organized, documented, preserved, sustainable, accessible, and reusable (Corti et al., 2020). The requirements of RDM are often associated with the FAIR Data Principles of Wilkinson et al., which aim to make research data findable, accessible, interoperable, and reusable (Wilkinson et al., 2016). ...
... The data could be in many forms and formats, depending on the discipline, research type, and method used. 2 Because of the complexity and vast volumes of data generated in research, proper research data management practices should be observed at every stage of the research process. Data must be well documented during the collection, processing, analysis, and sharing phases. ...
Effective research data management (RDM) is essential to modern scientific investigations. As the volume and complexity of research data increase, researchers, research institutions, and countries are pressured to improve data management practices to ensure transparency, reproducibility, sharing, and reuse of their findings. Researchers and institutions in Kenya, like those in many other developing countries, have begun to adopt the practice. This review examines the early indicators of improved research data management practices in Kenya to identify leaders who would drive the culture of RDM and thus improve research output.
This study adopted a qualitative design methodology. Relevant information in studies, reports, and policies was retrieved from websites, electronic databases, institutional repositories, and grey literature for analysis to identify the early indicators of RDM in Kenya. The data were then analyzed thematically using key variables identified by the RDM Capability Maturity Model: legal and policy provisions, information technology (IT) infrastructure, support services, and data literacy. The content analysis found notable RDM activities among researchers and some research institutions, such as the Kenya Medical Research Institute (KEMRI), the International Livestock Research Institute (ILRI), and the Kenya National Bureau of Statistics. Notable early indicators of RDM activities included data repositories, data management policies, and multidisciplinary datasets archived in data repositories. Activities at the institutional level are limited, especially in universities. This implies that RDM in Kenya is still in its infancy.
... It can be observed that these models present different levels of granularity and detail, but all are organised into processes/stages that share similarities, the main ones being: planning; creation, collection or capture; storage; long-term management/preservation; processing or analysis; publication or sharing; use and reuse (CORTI et al., 2014;KOWALCZYK, 2018). However, from the researchers' point of view, these models can be abstract and somewhat complex as a guide to the data publication process. ...
The use and reuse of research data are important and necessary actions in the current context of scientific practice. For this reason, the datasets produced during research initiatives need to be published so that they can be more easily accessed by the entire scientific community. However, publishing data does not mean merely making it available in a repository; it comprises a series of steps that must be considered so that the data can be effectively used. Thus, the main objective of this work is to discuss the process of publishing research datasets in the context of science. This is a descriptive and qualitative investigation that makes use of bibliographic research. As results, the elements involved in the process of publishing research data are presented and discussed, covering the stages of dataset publication relating to deposit, description, identifier assignment and review, steps that constitute the core of the publication process.
... This study employed the secondary data collection method rather than the primary data collection method, since it helps in gathering data that captures consumer behaviour in both developed and developing countries. As observed by [59], secondary sources help save time and resources and provide a wide range of historical information. Further, the secondary data collection method enables comparative studies drawing on data from different sources, regions, or periods, allowing an effective analysis and understanding of various factors. ...
Sustainable consumption is crucial for mitigating environmental harm and combating climate change. This study examines sustainable consumer behaviour in developed and developing countries, aiming to identify drivers and barriers to responsible consumption patterns. Using qualitative data analysis, we explore factors influencing consumer attitudes and behaviours. Adopting an exploratory approach, we employ interpretivist philosophy and thematic analysis. Through case study methodology and secondary data sources, we analyse drivers, attitudes, and barriers to sustainable consumption. The study findings indicate that consumers in both developed and developing countries exhibit a keen interest in embracing sustainable consumption to contribute to environmental preservation. Corporate social responsibility towards sustainability influences consumer purchasing decisions, highlighting the importance of green initiatives within companies. Recommendations include implementing Green Fund Schemes at the governmental level and sustainability audits within corporations to promote and sustain sustainability efforts. Collaborative endeavours among consumers, corporations, and authorities are essential for promoting sustainability and safeguarding the environment.
... This is not a new issue (it is what population censuses have always done), but the online environment and, in particular, social networks have effectively brought a new stage for data capture, not always properly regulated (alongside the growing capacity to store information). Beyond the ethical questions involved, and although there are works dedicated to data management and sharing (Corti, Van den Eynden, Bishop, & Woollard, 2014), the legislation in force regarding data protection must be taken into account in this matter. Although these questions arise for any technique, they take on specific contours in the case of Social Network Analysis. ...
Abstract: this chapter presents and discusses the main techniques for case selection, data collection and data analysis (most) used in quantitative research in the field of Organisational/Institutional/Strategic Communication. Priority is given to presenting an overview, setting out the criteria for selecting or excluding each technique, rather than an in-depth treatment, leaving the work of examining them in more detail (with a view to their implementation) to the researcher who is taking the first steps in research work (the audience for whom this text is intended). The aim is to build a path, from the positivist paradigm to ethical concerns, passing through methods and techniques, grounded in a principle of epistemological and methodological coherence.
... This procedure is fundamental because it safeguards data integrity, particularly concerning private materials. It guarantees the security and protection of research materials, particularly those used to gather sensitive data, and ensures that materials used during the research process are preserved and stored securely [16]. Safety in storage is a direct result of material planning and arrangement. ...
This article delves into the paramount importance of upholding ethical integrity in doctoral research, particularly focusing on classroom remediation within secondary education contexts. It provides a comprehensive framework delineating ethical guidelines crucial for maintaining integrity throughout the research process. Key areas of focus include risk assessment, informed consent, privacy and confidentiality, data handling and reporting, and strategies for mitigating mistakes and negligence. Drawing from existing literature and ethical standards, this article underscores the need for meticulous planning and adherence to ethical principles to safeguard the well-being of research participants and ensure the validity and reliability of research outcomes. By prioritising ethical considerations, researchers can navigate complex ethical dilemmas and contribute meaningfully to scientific knowledge while upholding the integrity of their work.
... Effective RDG is a cost-effective solution and enhances collaboration and decision-making, saving valuable time and resources (Omar & Almaghthawi, 2020). Furthermore, good RDG can improve data quality, accuracy, and usability, leading to increased data trust and accessibility (Hendey et al., 2018;Jamiu et al., 2020) while also ensuring data security and regulatory compliance (Lefebvre et al., 2018;Corti et al., 2019). In addition, a good RDG strategy is crucial to maximizing data value (Abraham et al., 2019;Brous & Janssen, 2020;Lis & Otto, 2020) while also minimizing data-related risks (Austin et al., 2021;Downs, 2021;Matthewson, 2019;Redkina, 2019;Troccoli, 2018). ...
Research data governance (RDG) plays a vital role in the effective management of research data within organizations. With the increasing volume and intricacy of research data, robust data sharing is necessary to facilitate advancements. However, despite its significance, RDG has not received the level of attention it deserves in comparison to other domains of research data management (RDM). Therefore, this desk study aims to fill the gap by identifying the key RDG activities implemented by top research performing organizations (RPOs) to provide insights into the key activities of governance that research organizations prioritize in their RDM policies. A content analysis of 36 policy documents was conducted, identifying 55 unique RDG activities. The findings showed that RPOs are more focused on implementing activities rather than defining or monitoring them, indicating an increasing awareness of the importance of RDG. The study identified two key activities that RPOs prioritize, highlighting a solid commitment to maintaining ethical and legal standards for RDM. As RDG practices evolve, it is crucial to identify emerging trends and technologies that impact RDG practices and explore innovative solutions to address challenges. Therefore, future research should explore innovative solutions for addressing challenges and developing more effective RDG practices.
... These features include essential functions like data retrieval, preview and download, as well as other services such as data authorization, analysis and the ability to view experimental logs. By utilizing a data web portal, users can efficiently access, explore and manipulate data while ensuring data security, facilitating analysis and promoting transparency in research activities (Corti et al., 2019). ...
In recent years, China’s advanced light sources have entered a period of rapid construction and development. As modern X-ray detectors and data acquisition technologies advance, these facilities are expected to generate massive volumes of data annually, presenting significant challenges in data management and utilization. These challenges encompass data storage, metadata handling, data transfer and user data access. In response, the Data Organization Management Access Software (DOMAS) has been designed as a framework to address these issues. DOMAS encapsulates four fundamental modules of data management software, including metadata catalogue, metadata acquisition, data transfer and data service. For light source facilities, building a data management system only requires parameter configuration and minimal code development within DOMAS. This paper firstly discusses the development of advanced light sources in China and the associated demands and challenges in data management, prompting a reconsideration of data management software framework design. It then outlines the architecture of the framework, detailing its components and functions. Lastly, it highlights the application progress and effectiveness of DOMAS when deployed for the High Energy Photon Source (HEPS) and Beijing Synchrotron Radiation Facility (BSRF).
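To illustrate what a configuration-driven setup along the lines described above might look like, here is a hedged sketch; the keys, values and the validation helper are hypothetical and are not part of the actual DOMAS interface.

```python
# Purely illustrative: a toy configuration organised around the four module
# types the abstract names (metadata catalogue, metadata acquisition, data
# transfer, data service). None of these keys or values come from DOMAS itself.
FACILITY_CONFIG = {
    "facility": "example-light-source",          # hypothetical facility name
    "metadata_catalogue": {
        "backend": "relational-db",
        "schema": ["proposal_id", "beamline", "sample", "acquisition_time"],
    },
    "metadata_acquisition": {
        "sources": ["detector-stream", "beamline-logbook"],
        "poll_interval_s": 30,
    },
    "data_transfer": {
        "protocol": "bulk-transfer",
        "destination": "/archive/raw",
    },
    "data_service": {
        "web_portal": True,
        "features": ["retrieval", "preview", "download", "authorization"],
    },
}

def covers_all_modules(config: dict) -> bool:
    """Check that all four module sections are present before deployment."""
    required = {"metadata_catalogue", "metadata_acquisition",
                "data_transfer", "data_service"}
    return required.issubset(config)

if __name__ == "__main__":
    assert covers_all_modules(FACILITY_CONFIG)
    print("configuration covers all four module types")
```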
... There are also resources offered by institutions' research data service groups (most commonly through the library) as well as data repositories on data management and security. (Corti et al., 2014 is a prominent and early example; similar resources have since been created by other repositories.) The problems are that researchers may not be aware of these resources, that they are very rarely referenced in graduate training, and that they are not part of any of the formal channels that scholars utilize to prepare for fieldwork. ...
How do researchers in fieldwork-intensive disciplines protect sensitive data in the field, how do they assess their own practices, and how do they arrive at them? This article reports the results of a qualitative study with 36 semi-structured interviews with qualitative and multi-method researchers in political science and humanitarian aid/migration studies. We find that researchers frequently feel ill-prepared to handle the management of sensitive data in the field and find that formal institutions provide little support. Instead, they use a patchwork of sources to devise strategies for protecting their informants and their data. We argue that this carries substantial risks for the security of the data as well as their potential for later sharing and re-use. We conclude with some suggestions for effectively supporting data management in fieldwork-intensive research without unduly adding to the burden on researchers conducting it.
... EDI projects in many cases expect and plan for countervailing efforts. For all the data they collect and use, EDI canvassers should develop and follow sound data management plans that incorporate standard practices for secure data storage and access, confidentiality, robust anonymization for any published datasets or results, etc. (Briney, 2015;Corti et al., 2014). Sound data management plans come from building strong relationships across the institution. ...
... The sharing of research data, which is an essential aspect of Open Research, is not a straightforward issue. Descriptions, metadata, and thorough documentation of data collection, analysis, and the technologies used are needed in order for research data to be of value for future research or other purposes (Borgman, 2015;Corti et al., 2014). ...
The widespread use of digital tools to access scientific literature and other relevant information combined with a shift towards Open Research is changing how research is conducted and academic libraries are adapting their research support services to respond to this change. In this paper, we present the case of MusicLab, and especially MusicLab Copenhagen – Absorption with The Danish String Quartet, which served as a mechanism for the University of Oslo Library to test how it can support researchers in developing Open Research practices. MusicLab is a collaboration between the University of Oslo Library and the University of Oslo's Centre for Interdisciplinary Studies in Rhythm, Time and Motion. Through this case, we outline the role of the research library in supporting Open Research including the different legal and data management issues occurring when conducting open research on intellectual works and people. The insights from this paper will be useful for other academic libraries looking to expand their research support services and for researchers looking to further develop their Open Research practices.
... Undertaking such an overhaul, then, must be preceded by a careful reflection on the purpose and design of the data set at hand. Also important is compliance with best practices and data management principles shared by the research community at large; Corti et al. (2019) and Mattern (chapter 5, this volume) provide in-depth recommendations on this very topic. (see table 6.1 for detailed specifications). ...
A guide to principles and methods for the management, archiving, sharing, and citing of linguistic research data, especially digital data.
“Doing language science” depends on collecting, transcribing, annotating, analyzing, storing, and sharing linguistic research data. This volume offers a guide to linguistic data management, engaging with current trends toward the transformation of linguistics into a more data-driven and reproducible scientific endeavor. It offers both principles and methods, presenting the conceptual foundations of linguistic data management and a series of case studies, each of which demonstrates a concrete application of abstract principles in a current practice.
In part 1, contributors bring together knowledge from information science, archiving, and data stewardship relevant to linguistic data management. Topics covered include implementation principles, archiving data, finding and using datasets, and the valuation of time and effort involved in data management. Part 2 presents snapshots of practices across various subfields, with each chapter presenting a unique data management project with generalizable guidance for researchers. The Open Handbook of Linguistic Data Management is an essential addition to the toolkit of every linguist, guiding researchers toward making their data FAIR: Findable, Accessible, Interoperable, and Reusable.
... Good record keeping in a laboratory notebook is critical for science, technology, engineering, and math (STEM) careers in industry, government, and academia (1)(2)(3)(4)(5). Teaching students how to construct, maintain, and record observations in a laboratory notebook is an important part of STEM education (6). ...
Proper laboratory notebook maintenance is a critical skill for science, technology, engineering, and math (STEM) workers. Laboratory notebook grading can be time-consuming and lead to instructor fatigue. After many hours of grading laboratory notebooks, instructors can become biased or not provide detailed feedback to students. I developed a simple protocol to alleviate these problems. Students maintained a laboratory notebook typical for most STEM courses. Then, they were given a short quiz with laboratory-specific questions and could only use their notebooks to answer. The presence or absence of major notebook sections (date, introduction, etc.) were checked, and the laboratory notebook score was a combination of these two components. The learning gains were not assessed, but the instructor grading time decreased by 80%. This technique was applied to both in-person and concurrent online laboratories. With the ever-increasing demands on instructors, anything that decreases the instructor workload and the time for students to receive feedback will likely lead to a better classroom environment.
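As a worked illustration of the two-component grade described above (a checklist of major notebook sections combined with a short, notebook-only quiz), the sketch below computes one possible combined score; the section list, weighting and function names are assumptions, since the article does not specify a formula.

```python
# Hypothetical scoring sketch for a two-component laboratory notebook grade:
# a section checklist plus an open-notebook quiz. Weights are assumptions.
REQUIRED_SECTIONS = ["date", "title", "introduction", "procedure",
                     "observations", "conclusion"]

def checklist_score(sections_present: set[str]) -> float:
    """Fraction of required notebook sections that are present."""
    return sum(s in sections_present for s in REQUIRED_SECTIONS) / len(REQUIRED_SECTIONS)

def notebook_grade(sections_present: set[str], quiz_correct: int, quiz_total: int,
                   checklist_weight: float = 0.4) -> float:
    """Combine checklist and quiz components into a single 0-100 grade."""
    quiz_score = quiz_correct / quiz_total
    combined = (checklist_weight * checklist_score(sections_present)
                + (1 - checklist_weight) * quiz_score)
    return round(100 * combined, 1)

if __name__ == "__main__":
    # Example: three of six sections present, four of five quiz questions correct.
    print(notebook_grade({"date", "introduction", "observations"},
                         quiz_correct=4, quiz_total=5))
```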
... Our objective in this paper is to inform ways of collectively constructing open research practices and systems that are appropriate to, and get the best out of, the full range of qualitative and mixed-method approaches used in psychology. We build on the existing debates within psychology and other disciplines, which include arguments for aspects of open science such as open data as well as sceptical arguments about constraints and conditions for open data (Bishop, 2005, 2007;Branney et al., 2017;Corti, 2006;Corti et al., 2014;Parry & Mauthner, 2004;Pownall et al., 2022;Reeves et al., 2021). The aim of this introductory paper is to increase the capability of qualitative researchers in psychology to make informed decisions about applying principles of open science to qualitative research 'one open research behaviour at a time' (Norris & O'Connor, 2019, p. 1403). ...
Principles and applications of open science (also referred to as open research or open scholarship) in psychology have emerged in response to growing concerns about the replicability, transparency, reproducibility, and robustness of psychological research alongside global moves to open science in many fields. Our objective in this paper is to inform ways of collectively constructing open science practices and systems that are appropriate to, and get the best out of, the full range of qualitative and mixed‐method approaches used in psychology. We achieve this by describing three areas of open research practice (contributorship, pre‐registration, and open data) and explore how and why qualitative researchers might consider engaging with these in ways that are compatible with a qualitative research paradigm. We argue it is crucial that open research practices do not (even inadvertently) exclude qualitative research, and that qualitative researchers reflect on how we can meaningfully engage with open science in psychology.
... Equally, data reuse allows the possibility of building incrementally on analyses that have already been done. While secondary analysis is more common among quantitative researchers, data reuse is now a feature of some qualitative research (Heaton, 2008;Corti et al., 2019). ...
This article makes a small contribution to Families, Relationships and Societies’ knowledge production. It addresses racialised and ethnicised inequalities experienced in the everyday lives of a family constituted through serial migration, where the adult interviewed (‘Lizzie’) reflected on her childhood experience of leaving the Caribbean to join parents she did not remember and siblings she had never met. It reuses material from a larger study of the retrospective narratives of adults who had been childhood serial migrants. A major finding is that Lizzie’s experience of serial migration was intersectional, linked to her social positioning and her experiences of racism at school and felt outsiderness at home in contrast to feelings of belonging and being valued at the Black-led church she attended. The article argues that, while such family experiences are frequently unrecognised, they pattern children’s experiences, their adult relationships and identities and contribute to, and arise from, historical and sociostructural constructions of society.
Artificial intelligence (AI) is increasingly important in scholarly communication. Despite concerns about academic integrity compliance, AI tools offer potential benefits for researchers navigating the complex landscape of research data repositories. This study explores whether Chat Generative Pre-training Transformer (ChatGPT) can effectively identify and recommend quantitative and qualitative datasets in social sciences. We compare how ChatGPT (version 3.5) identifies data repositories versus the specialized Re3Data.org registry. The results revealed that ChatGPT can respond with relevant repository recommendations that complement rather than duplicate those found through Re3Data.org, providing researchers with a broader range of options. Standard searches using Re3Data.org offered more structured results with disciplinary categorization, while ChatGPT provided repositories with richer contextual information about their contents. In specialized searches for datasets on generative AI in academic contexts, ChatGPT demonstrated the ability to identify specific datasets across multiple repositories with detailed metadata. However, when asked about broader empirical trends, such as the proportion of quantitative versus qualitative research, ChatGPT could only provide generalized responses without precise statistics, highlighting its limitations in accessing current empirical data. The conclusion reached is that while ChatGPT cannot yet generate repository data of suitable quality for advanced-level analyses in all contexts, it is a valuable complementary tool to traditional repository registries. As AI tools continue to develop, educators and scholars must shift their focus from negative expectations to the practical benefits these tools can provide in research data discovery.
Digitalisation as a mega-trend is affecting the library sector, triggering challenges and opportunities that demand new competencies for leaders of academic librarians. The objective of this qualitative study was to explore library leadership competencies in the digital age according to the perspectives of library leaders at the University of KwaZulu-Natal. The study adopted an interpretive paradigm and non-probability sampling method to purposively select nine library leaders from the University of KwaZulu-Natal for in-depth interviews. Interview data were analysed using thematic analysis. The results from the study have shown that library leaders at the University of KwaZulu-Natal demonstrate a variety of five leadership styles and three key leadership competencies for the academic library to adapt to the changes brought by digitalisation. The leadership struggles with clarity on the direction at UKZN, collective and shared leadership, an inclusive, partnership and alignment style, and the autocratic and adaptive leadership styles. Secondly, this study revealed three leadership competencies used by leaders of the academic library at UKZN to deal with adaptive change and provide services in a digital age. These leadership competences include developing people and programmes to achieve change within rules; strategic thinking competences; and leading teams. Lastly, the study provided a Leadership Competencies framework for library leaders in the digital age at UKZN, comprising seven different leadership competences and four tasks and roles by leaders at UKZN, which are necessary for effective academic library leadership in the digital era. The study recommended a variety of leadership competences that are key to leading the academic library through adaptive challenges in the digital era. Areas for future research were also highlighted.
Objective. This study aimed to understand the reasons for the non-sharing of datasets by Indian social science doctoral researchers and to determine whether researchers fully understand the significance of dataset sharing. Design/Methodology/Approach. A quantitative methodology was employed, entailing the administration of a questionnaire to 361 recent Indian doctoral recipients in the social sciences. The questionnaire comprised two sections: one ascertaining barriers to non-sharing of datasets and the other concerning the need for dataset sharing. Each section contained 10 statements, and responses were collected using a five-point Likert scale. Finally, a t-test was employed to ascertain if the sample means differed significantly from the population mean. Results/Discussion. Several barriers were identified that hinder the sharing of research datasets. These included the absence of specific provisions in the regulations of the Indian apex body (University Grants Commission [UGC]) for PhD research, the lack of encouragement from research supervisors and centers to share data, the exclusion of datasets during the final defense viva-voce, challenges in sharing datasets for reasons such as their non-existence or ad-hoc compilation, and the limited peer practice of dataset sharing. The average agreement for these barriers was 82%, which was statistically significant. Concurrently, researchers concurred with the need for dataset sharing, including enhanced transparency in the research process, improved reliability of research findings, facilitation of peer researchers' comprehension of its structure and other salient details, harmonization of Indian research with international practices of data sharing, adherence to COPE norms on research ethics, and enhancement of thesis presentation during various forums, including the final defense viva-voce. The average agreement among researchers for these needs was 87%, which was statistically significant. Conclusions. Despite the global endorsement of data sharing as a highly desirable research practice, Indian researchers from the social sciences domain often face various challenges that prevent them from sharing their datasets. The non-sharing of datasets could raise concerns about the authenticity and reliability of research. Concurrently, the study indicated that researchers were aware of the advantages associated with data sharing. Collective action from the apex regulatory authority (UGC), research centers, research supervisors, and research scholars could lead to a desired improvement, particularly in the sharing of underlying datasets, which would enhance the reliability and transparency of research. Originality/value. This study represented a novel effort to comprehend the significant barriers to data sharing practices in research within the context of Indian doctoral education in the social sciences and demonstrated that researchers exhibit a favorable disposition toward data sharing. This study contributed to the advancement of policy and practice regarding data sharing, with the objective of enhancing transparency, openness, and accountability in scientific research.
Background: Sexual and reproductive health (SRH) decision-making is key to understanding gender issues, especially for women in rural Bangladesh. In these communities, women’s health is shaped by family and societal power dynamics.
Research Objective: This study aims to understand how factors such as family authority, cultural norms, and economic conditions affect women’s choices and autonomy regarding SRH decision-making.
Data material and methods: Using qualitative methods, interviews with ten rural Bangladeshi women reveal the challenges they face in making SRH decisions. Thematic analysis identifies patterns in their experiences. The analysis is guided by three theories: Kabeer’s "power to choose," Kandiyoti’s "bargaining with patriarchy," and Crenshaw’s intersectionality theory. Together, these frameworks provide a comprehensive understanding of the factors limiting women’s SRH autonomy.
Findings: The findings show that educated and financially independent women can challenge gender roles to some extent, but their freedom is still limited by patriarchal norms and family structures. Older, financially stable women have more decision-making power but face cultural restrictions, while younger, less educated women experience stricter control. Support from male family members improves women’s emotional well-being, but male-dominated decisions and elder female relatives often reinforce traditional norms. To maintain marital harmony, some women discreetly resist these limitations by secretly using contraception.
Conclusion: Women’s SRH autonomy is shaped by the intersection of social, cultural, and economic factors. Policies that promote education, financial independence, and shared decision-making within families are essential for improving women’s health and well-being.
This research investigates data journalism practices in Turkey during the COVID-19 pandemic, exploring their current usage, development, and prospects. The research focuses on data journalism methods, narrative structures, and journalists’ approaches to news production processes, including the use of big data and access to open data. The semi-structured interviews with 15 journalists were coded and analyzed using an interactive and cyclical method within a phenomenological research design. The findings indicate that journalists maintain pre-pandemic habits and that the COVID-19 big data did not significantly alter news production practices. Journalists are interested in data’s potential, processing skills, and analysis. They receive encouragement, technological opportunities, and training. However, journalists face challenges such as the need for an open data culture, innovative perspectives, revenue models, pressure for breaking news, loss of reader loyalty, reader habits, and news literacy levels. The findings reveal the experiences of journalists by comparing a critical period such as the pandemic with its predecessors and explain the critical factors influencing the development of data journalism. It also highlights implications for the future of data journalism in Turkey and suggests ways to encourage innovation in news production processes.
Regular backups strongly influence the development of key innovations and advances that enhance the functionality of ransomware protection mechanisms. Regular backups are also vital to recovery strategies, as they give an organization the capacity to address pertinent challenges and take sustainable steps toward the required outcomes at whatever level is needed. The benefits of regular backups include the following:
This paper reports on a data sprint conducted as part of a PhD course on digital methods and data critique at the University of Klagenfurt. We reflect on how our data sprint contributed to this higher educational setting, and point to ways in which the data sprint method can be developed further based on our experience. The paper discusses how the sprint fabricated a moment of “critical proximity” for students who were mainly working with qualitative social science methods. The data sprint allowed them to put their critique of “big data” into practice by working with selected sets of data from Twitter and Scopus. We reflect on our collective experience and draw conclusions on the use of data sprints in teaching. Data sprints encourage us to engage with feelings of being underwhelmed and overwhelmed by data that provoke our social science way of critique. Our data sprint tangibly demonstrates that data work is in fact “messy”: transgressing ideals of good data management, biased, ambiguous and open-ended. But instead of turning away from this “wildness”, we urge readers to make use of it in teaching settings. This wildness allows us to step out of conventional modes of critique, and into modes of action. We conclude with a protocol as a practical guide for everyone who wants to introduce data sprints in their teaching.
This paper outlines some of the benefits and challenges that lead academics either to share or to withhold their datasets. Much existing research concentrates on researchers' information sharing or knowledge sharing rather than data sharing. A qualitative method was used for data collection: a total of 12 academics were interviewed about the benefits and obstacles that shape data sharing activities in an academic setting. The results showed that the majority of interviewees acknowledged the usefulness of data sharing, yet most were not always ready to follow the practices needed to support their colleagues' data sharing efforts. This work revealed benefits such as encouraging collaboration, building reputation, and maximising transparency, while unsuitable infrastructure and community, cultural, economic, and legal challenges act as barriers to data sharing. The interviewed researchers described data sharing as a crucial element in advancing scholars' careers and improving research.
In this final chapter, we pause to reflect on the emergence of big qual and the associated epistemological and methodological values. We summarise the breadth-and-depth method as an iterative way of working with large volumes of qualitative data, and of bringing together multiple qualitative datasets. We consider the opportunities, challenges, and limitations this way of working can bring. We conclude by outlining the implications of the method, both for qualitative and for quantitative enquiry and big data debates.
This chapter focuses on the availability of data and initial preliminary strategies for identifying possible datasets of interest. The first part of the chapter discusses formalised repositories for data which form a key infrastructure for big qual analysis. We provide information on key research data archives and how to access them. The second part considers the diverse forms of data that exist outside and alongside formal large-scale repositories. Such sources might include data stored in community or personal archives, or data typically associated with quantitative endeavours, such as social media or open questions in large-scale surveys. The chapter examines the ways in which you can use the breadth-and-depth method to access different forms and permutations of data, such as combining archived data with data from your own project(s); amalgamating two or more datasets from an existing multi-site project; or pooling data collected separately but connected by a substantive topic or theme. Overall, we show how archives and alternative sources of data approaches can open new possibilities for qualitative research.
Purpose
The study aims to investigate the utilisation of open research data repositories (RDRs) for storing and sharing research data in higher learning institutions (HLIs) in Tanzania.
Design/methodology/approach
A survey research design was employed to collect data from postgraduate students at the Nelson Mandela African Institution of Science and Technology (NM-AIST) in Arusha, Tanzania. The data were collected and analysed quantitatively and qualitatively. A census sampling technique was employed to determine the sample for this study. The quantitative data were analysed using the Statistical Package for the Social Sciences (SPSS), whilst the qualitative data were analysed thematically.
Findings
Less than half of the respondents were aware of and used open RDRs, including the Zenodo, DataVerse, Dryad, OMERO, GitHub and Mendeley data repositories. More than half of the respondents were unwilling to share research data, citing the loss of ownership once research data are stored in most open RDRs, as well as data security concerns. HLIs need to conduct training on using trusted repositories and to motivate postgraduate students to utilise open repositories (ORs). The main reasons for the underutilisation of open RDRs were a lack of policies governing the storage and sharing of research data and grant constraints.
Originality/value
Research data storage and sharing are of great interest to researchers in HLIs, and the findings can inform these institutions as they implement open RDRs to support their researchers. Open RDRs increase visibility within HLIs, reduce research data loss, and allow research works to be cited and used publicly. This paper identifies the potential for additional studies focussed on this area.
New data-intensive research paradigms and surging volumes of research data have amplified the need for more effective systems of data curation, management, and sharing. This methodological paper focuses on the development and implementation of a workshop protocol aimed at enabling researchers in various sub-disciplines of data-intensive fields to achieve consensus about data-sharing and reproducibility practices. As such, it addresses a key gap in our understanding of how researchers from those sub-disciplines negotiate and reach consensus on disciplinary practices, which is crucially important to the design of research-data infrastructure and data-management support services. Specifically, we first convened a design-thinking workshop, attended by 17 principal investigators from the field of data-intensive health science, which simulated the identification and solution of real-life research problems in a multidisciplinary context. A key plank of our methodology is its use of boundary-negotiating artifacts as a conceptual model to delineate and analyze the artifacts (e.g., Post-its and other notes) that were created and used during the workshop. This approach provided insights into the development and evolution of common practices in the participants’ workplaces. Then, to validate and extend our methodology, we applied the workshop protocol to two additional case studies, respectively involving 30 academic librarians and 13 criminologists. The results confirm the potential applicability and adaptability of our approach to even more data-intensive disciplines, and provide insights that should be valuable to a range of stakeholders including but not limited to national funders, memory institutions, and research establishments.
Prediction markets (PMs) are virtual stock markets on which shares are traded taking advantage of the wisdom-of-crowds principle to access collective intelligence. It is claimed that the accumulation of information by groups leads to joint group decisions often better than individual participants’ approaches to solutions. A PM share represents a future event or a market condition (e.g. expected sales figures of a product for a specific month) and provides forecasts via its price which is interpreted as the probability of the event occurring. PMs can be used in competition with other forecasting tools; when applied for forecasting purposes within a company they are called corporate prediction markets (CPMs). Despite great praise in the (academic) literature for the use of PMs as an efficient instrument for bringing together scattered information and opinions, corporate usage and applications are limited.
This research examined this discrepancy by focusing on the barriers to adoption within enterprises. The literature and corporate reality diverge, and the literature has neglected the important aspect of corporate culture. Screening the existing research and interviewing business executives and corporate planners revealed company hierarchy as an inhibitor to the acceptance of CPM outcomes.
Findings from 55 interviews and a thematic analysis of the literature showed that CPMs are useful but rarely used. Their lack of use arises from senior executives' perception that the organisational hierarchy is being strained, and from a fear of losing power, because CPMs can include lower rungs of the corporate ladder in decision-making processes. If these challenges can be overcome, the potential of CPMs can be released. It emerged, buttressed by ten additional interviews, that CPMs would be worthwhile for company forecasting, particularly in supporting innovation management, which would allow idea markets (as an embodiment of CPMs) to excel.
A contribution of this research lies in its additions to the PM literature, explaining the lack of adoption of CPMs despite their apparent benefits and making a case for the incorporation of CPMs as a forecasting instrument to facilitate innovation management. Furthermore, a framework to understand decision-making in the adoption of strategic tools is provided. This framework permits tools to be accepted on a more rational basis and curbs the emotional and political influences which can act against the adoption of good and effective tools.
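The abstract above describes a prediction-market share's price as the probability of the underlying event. As a rough illustration of that price-as-probability reading, the sketch below implements Hanson's logarithmic market scoring rule (LMSR), a commonly used automated market-maker rule; the abstract does not name a specific mechanism, so the choice of LMSR, the liquidity parameter b, and the traded quantities are assumptions for illustration only.

```python
# Minimal LMSR sketch (an assumed mechanism; the abstract does not specify one).
# Prices behave like probabilities: each lies in (0, 1) and they sum to 1.
import math

def lmsr_prices(quantities, b=100.0):
    """Instantaneous LMSR prices for outstanding share quantities q_i with liquidity b."""
    exps = [math.exp(q / b) for q in quantities]
    total = sum(exps)
    return [e / total for e in exps]

def lmsr_cost(quantities, b=100.0):
    """LMSR cost function C(q) = b * ln(sum_i exp(q_i / b))."""
    return b * math.log(sum(math.exp(q / b) for q in quantities))

# Two-outcome market, e.g. "monthly sales exceed target: yes / no".
q = [40.0, 10.0]                               # hypothetical shares sold per outcome
print(lmsr_prices(q))                          # roughly [0.57, 0.43], read as forecast probabilities
print(lmsr_cost([41.0, 10.0]) - lmsr_cost(q))  # cost of buying one more "yes" share
```

Buying more shares of an outcome raises its price, which is how dispersed individual trades aggregate into a single probabilistic forecast.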
This chapter conveys the following learning objectives: know the central ethical guidelines for dealing with research participants in human and social science research; be able to explain the most important rules of good scientific practice; be able to align one's own research activities with the principles of research and scientific ethics; be able to assess existing studies with regard to possible ethical problems; and know how questions of research and scientific ethics can themselves be made the subject of empirical research.
Evaluating the effectiveness of teaching methods for synchronous online instruction is integral to fostering student engagement and maximizing student learning, particularly in one-time workshops or seminars. Using the lens of social constructivism theory, this study investigated the effect of different approaches of synchronous online instruction on the development of graduate students’ research data management (RDM) skills during the post-pandemic era. One experimental group received teacher-centered instruction primarily via lecture and the second experimental group received student-centered instruction with active learning activities. A one-way ANCOVA was used to compare the post-test RDM scores between one control group and the two experimental groups, while controlling for the impact of their pre-test RDM scores. Both experimental groups who received online RDM instruction scored higher than participants from the control group who received no instruction. Additionally, our results indicated that learners who were exposed to more engaged and collaborative instruction demonstrated higher learning outcomes than students who received teacher-centered instruction. These findings suggest that interactive teaching that actively engages the audience is essential for successful synchronous online learning. Simply transferring a lecture-based approach to online teaching will not result in optimal student engagement and learning. The interactive online instructional strategies used in this study (e.g., collective note-taking, Google Jamboard activities) can be applied to any instructional content to engage learners and enhance student learning.
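The abstract above reports a one-way ANCOVA comparing post-test RDM scores across a control group and two experimental groups while controlling for pre-test scores. The sketch below shows one way such an analysis can be run as a linear model in Python with statsmodels; the group labels, score columns, simulated effects, and software choice are hypothetical assumptions and are not drawn from the study.

```python
# Illustrative one-way ANCOVA with fabricated scores; group labels and column
# names are hypothetical, not taken from the study.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(seed=7)
groups = np.repeat(["control", "teacher_centered", "student_centered"], 30)
pre = rng.normal(50, 10, size=len(groups))
# Hypothetical effects: instruction raises post-test scores, active learning most of all.
bump = {"control": 0, "teacher_centered": 8, "student_centered": 14}
post = pre + np.array([bump[g] for g in groups]) + rng.normal(0, 5, size=len(groups))

df = pd.DataFrame({"group": groups, "pre": pre, "post": post})

# ANCOVA expressed as a linear model: post-test ~ group + pre-test covariate.
model = smf.ols("post ~ C(group) + pre", data=df).fit()
print(anova_lm(model, typ=2))  # F-test for the group effect adjusted for pre-test scores
print(model.params)            # adjusted group differences relative to the control group
```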