Article

Abstract

Educational Data Virtual Lab (EDVL) is an open-source platform for data exploration and analysis that combines the power of a coding environment, the convenience of an interactive visualization engine, and the infrastructure needed to handle the complete data lifecycle. Based on the building blocks of the FIWARE European platform and Apache Zeppelin, this tool allows domain experts to become acquainted with data science methods using the data available within their own organization, ensuring that the skills they acquire are relevant to their field and driven by their own professional goals. We used EDVL in a pilot study in which we carried out a focus group within a multinational company to gain insight into potential users' perceptions of EDVL, both from the educational and the operational point of view. The results of our evaluation suggest that EDVL holds great potential to train the workforce in data science skills and to enable collaboration among professionals with different levels of expertise.
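The kind of quick exploratory analysis a domain expert might run inside a notebook cell of a platform like EDVL can be sketched in plain Python. This is a minimal illustration only: the records, column names, and aggregation below are hypothetical, not taken from the paper (EDVL itself builds on Apache Zeppelin, whose notebooks run such code through interpreter paragraphs).

```python
from collections import defaultdict
from statistics import mean

# Hypothetical records a domain expert might pull from their
# organization's data store into a notebook paragraph.
records = [
    {"department": "sales",   "response_time_h": 4.0},
    {"department": "sales",   "response_time_h": 6.0},
    {"department": "support", "response_time_h": 2.0},
    {"department": "support", "response_time_h": 3.0},
]

# Group values by department, then summarize -- the kind of
# small aggregation a notebook cell makes interactive.
by_dept = defaultdict(list)
for rec in records:
    by_dept[rec["department"]].append(rec["response_time_h"])

summary = {dept: mean(vals) for dept, vals in by_dept.items()}
print(summary)  # {'sales': 5.0, 'support': 2.5}
```

In a notebook environment the result would typically be handed to the built-in visualization engine rather than printed.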


... For example, in chemistry, VL are used to simulate experiments involving chemical reactions, titrations, and other practical applications that are too dangerous to perform in traditional laboratories. Thanks to these advantages, the usage of such labs has increased dramatically over the past several years [22]. ...
Conference Paper
Full-text available
Virtual laboratories (VL) have become an essential tool for the educational sector, allowing students to develop practical skills in a remote environment. However, the accessibility of VL remains a significant challenge for learners. This research paper aimed to investigate the accessibility barriers in VL and to explore potential solutions to overcome them. To achieve this, we conducted a comprehensive literature review spanning from 1997 to 2023, focusing on the accessibility of VL. Our search was conducted solely on the Scopus database, resulting in 164 papers, from which we carefully selected 21 primary studies for detailed analysis. The results indicate that there are still high barriers to accessing VL. Based on the analysis, we identified four major barriers: technological, infrastructural, pedagogical, and cultural. To address these issues, a range of solutions has been proposed. These findings highlight the critical need to tackle accessibility barriers in VL, thereby enabling all students to have equal opportunities to develop their practical skills.
Chapter
Laboratories are a fundamental aspect of academic learning that enables students to better understand the theoretical concepts of any academic course. Science, engineering, and medicine are the main areas of study that heavily depend on laboratories for demonstrating the practicality of any theory. However, circumstances may arise where students face challenges in performing practicals due to insufficient materials or equipment, or the inaccessibility of the laboratory itself. To counter this challenge, various organizations and institutions have developed 'Virtual Laboratories' or 'Virtual Labs'. Virtual laboratories are platforms where the user can engage in practical sessions without the need to be present in person. The user can log in from anywhere and gain access to the same setting as in a traditional hands-on laboratory, albeit virtually. This chapter details what virtual labs are and examines their effectiveness in comparison to traditional labs.
Preprint
Full-text available
Big data encompasses vast volumes of structured, semi-structured, and unstructured data. The exponential growth of unstructured data, driven by the proliferation of the Internet and social networks like Twitter, Facebook, and Yahoo, necessitates efficient processing solutions. Hadoop has emerged as a leading framework for big data analysis and processing. This paper discusses the configuration and integration of Apache Flume with Spark Streaming to stream data from Twitter. The streamed data is then stored in Apache Cassandra. Subsequently, the data is analyzed using Apache Zeppelin, with results displayed on a dashboard. The dashboard outcomes are further analyzed and validated using JSON.
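The per-batch aggregation step of such a pipeline can be sketched without a cluster. The snippet below is only an illustration of the idea: the tweet payloads are invented, and plain Python stands in for the Flume-to-Spark-Streaming stages described above (hashtag counting is a common example of the aggregation a streaming job might compute before writing results to Cassandra).

```python
import json
from collections import Counter

# Hypothetical tweet payloads, standing in for records that Flume
# would forward to a Spark Streaming job in the described pipeline.
raw_events = [
    '{"user": "a", "text": "big data with #hadoop and #spark"}',
    '{"user": "b", "text": "streaming tweets via #spark"}',
    '{"user": "c", "text": "dashboards in #zeppelin"}',
]

# Parse each JSON record and count hashtags -- a per-batch
# aggregation whose output a dashboard could display.
tags = Counter()
for line in raw_events:
    tweet = json.loads(line)
    for word in tweet["text"].split():
        if word.startswith("#"):
            tags[word] += 1

print(tags.most_common(2))  # [('#spark', 2), ('#hadoop', 1)]
```

Parsing with `json.loads` also mirrors the JSON-based validation step the abstract mentions: malformed records raise an error instead of silently corrupting the counts.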
Article
Full-text available
In recent years, a new business paradigm has emerged which revolves around effectively extracting value from data. In this scope, providing a secure ecosystem for data sharing that ensures data governance and traceability is of paramount importance, as it holds the potential to create new applications and services. Protecting data goes beyond restricting who can access what resource (covered by Identity and Access Control): it becomes necessary to control how data are treated once accessed, which is known as data Usage Control. Data Usage Control provides a common and trustful security framework to guarantee compliance with data governance rules and responsible use of organizations' data by third-party entities, easing and ensuring secure data sharing in ecosystems such as Smart Cities and Industry 4.0. In this article, we present an implementation of a previously published architecture for enabling access and Usage Control in data-sharing ecosystems among multiple organizations using the FIWARE European open source platform. Additionally, we validate this implementation through a real use case in the food industry. We conclude that the proposed model, implemented using FIWARE components, provides a flexible and powerful architecture to manage Usage Control in data-sharing ecosystems.
Conference Paper
Full-text available
We are experiencing a new digital revolution in which data are becoming a key pillar for business and industry. Promoting data sharing, without compromising data sovereignty and traceability, is fundamental since it provides a heterogeneous ecosystem with the potential to enrich the variety of applications and services that take part in this digital revolution. In this scope, the use of secure and trusted platforms for sharing and processing personal and industrial data is crucial for the creation of a data market and a data economy. Protecting data goes beyond restricting who can access what resource (covered by identity and access control respectively): it becomes necessary to control how data are treated, which is known as data usage control. Data usage control provides a common and trustful security framework to guarantee the sovereignty and the responsible use of organizations’ data by third-party entities, easing and ensuring data sharing in ecosystems such as industry or smart cities. In this article, we present an architecture proposal for achieving access and usage control in shared data ecosystems among multiple organizations. The proposed architecture is based on the UCON (Usage Control) model and an extended XACML (eXtensible Access Control Markup Language) Reference Architecture, relying on key aspects of the IDS (International Data Spaces) Reference Architecture Model. Its modular design and technology-agnostic nature provide an integral solution while maintaining flexibility of implementation.
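The defining feature of the UCON model named above, attributes that are re-evaluated while usage is in progress rather than only at access time, can be sketched in a few lines. This is a toy illustration under assumed names (the policy, attributes, and quota are invented for the example, not taken from the paper's architecture):

```python
# Minimal sketch of a UCON-style usage decision: unlike classic
# access control, attributes are re-evaluated *during* usage, so a
# granted session can be revoked mid-use. All names are hypothetical.

def evaluate(attrs):
    """Policy: usage is permitted only while the consumer holds a
    valid contract and stays within its transfer quota."""
    return attrs["contract_valid"] and attrs["bytes_used"] <= attrs["quota"]

# Pre-decision: usage is granted.
session = {"contract_valid": True, "bytes_used": 0, "quota": 100}
assert evaluate(session)

# Ongoing usage mutates attributes; a usage monitor re-evaluates
# the same policy and revokes the session when it no longer holds.
session["bytes_used"] = 150
decision = evaluate(session)
print("permit" if decision else "revoke")  # revoke
```

In a real deployment the policy would be expressed in a language such as XACML and evaluated by dedicated decision and enforcement points, as the reference architecture in the paper describes.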
Article
Full-text available
Changing work practice is critical when addressing global challenges. The expansion of work is mediated by a range of tensions inherent in the complex systems within which global challenges exist. This study examines tensions that inhibit the expansion of work practices contextualized within the global health challenge of Antimicrobial Resistance (AMR). The study traces how an AMR surveillance system is being set up in a low-to-middle-income country in Asia (Country A). The research identifies a range of tensions that need to be considered when designing technology-enhanced learning interventions for professionals. This study is significant in moving technology-enhanced learning toward a holistic approach that takes into account the work environment. This research takes an original standpoint by placing attention on specific work practices, then examining how technology-supported activities can build capacity. This places professionals at the center of a critical approach examining the ways technologies can add value to their professional lives. This work highlights the importance of professionals' "voice" as a lens through which researchers document their reality. The study calls for a fundamental shift in the orientation of technology-enhanced learning interventions, moving attention toward work practice and mapping supporting technologies around this, rather than focusing primarily on the technology and planning learning activity with technology tools.
Article
Full-text available
Background: A lack of transparency and reporting standards in the scientific community has led to increasing and widespread concerns relating to reproduction and integrity of results. As an omics science, which generates vast amounts of data and relies heavily on data science for deriving biological meaning, metabolomics is highly vulnerable to irreproducibility. The metabolomics community has made substantial efforts to align with FAIR data standards by promoting open data formats, data repositories, online spectral libraries, and metabolite databases. Open data analysis platforms also exist; however, they tend to be inflexible and rely on the user to adequately report their methods and results. To enable FAIR data science in metabolomics, methods and results need to be transparently disseminated in a manner that is rapid, reusable, and fully integrated with the published work. To ensure broad use within the community, such a framework also needs to be inclusive and intuitive for both computational novices and experts alike.

Aim of Review: To encourage metabolomics researchers from all backgrounds to take control of their own data science, mould it to their personal requirements, and enthusiastically share resources through open science.

Key Scientific Concepts of Review: This tutorial introduces the concept of interactive web-based computational laboratory notebooks. The reader is guided through a set of experiential tutorials specifically targeted at metabolomics researchers, based around the Jupyter Notebook web application, GitHub data repository, and Binder cloud computing platform.
Conference Paper
Full-text available
Data science is a revolution that is already changing the way we do business, healthcare, politics, education, and innovation. There is a great variety of online courses, masters, degrees, and modules that address the teaching of this interdisciplinary field, where there is a growing demand for professionals. However, data science pedagogy has repeated a number of patterns that can be detrimental to the student. This position paper describes an ongoing educational innovation project for the study of methods, experiences, and tools for experiential learning in data science. In this approach, the student learns through reflection on doing instead of being a recipient of ready-made content.
Article
Full-text available
Purpose: This is an opinion paper whose purpose is to analyse the inadequacies of current business education in tackling the educational challenges inherent to the advent of a data-driven business world. It presents an analysis of the implications of digitization, and more specifically big data analytics and data science, on organizations, with a special emphasis on decision-making processes and the function of managers. It argues that business schools and other educational institutions have responded well to the need to train future data scientists but have rather disregarded the question of effectively preparing future managers for the new data-driven business era.

Design/methodology/approach: The approach involves analysis and review of the literature.

Findings: The development of analytics skills shall not pertain to data scientists only; it must rather become an organizational cultural component shared among all employees and, more specifically, among decision-makers: managers. In the data-driven business era, managers turn into manager-scientists who possess skills at the crossroads of data management, analytical/modelling techniques and tools, and business. However, the multidisciplinary nature of big data analytics and data science seems to collide with the dominant "functional silo design" that characterizes business schools. The scope and breadth of the radical digitally-enabled change we are facing may necessitate a global questioning of the nature and structure of business education.

Research limitations/implications: For the sake of transparency and clarity, academia and industry must join forces to standardize the meaning of the terms surrounding big data. Big data analytics/data science training programs, courses, and curricula should be organized in such a way that students interact with an array of specialists providing them a broad enough picture of the big data landscape. The multidisciplinary nature of analytics and data science necessitates revisiting pedagogical models by developing experiential learning and implementing a spiral-shaped pedagogical approach. The attention of scholars is needed, as there exists an array of unexplored research territories whose investigation will help bridge the gap between education and industry.

Practical implications: The findings will help practitioners understand the educational challenges triggered by the advent of the data-driven business era. The implications will also help develop effective training and pedagogical strategies that are better suited to prepare future professionals for the new data-driven business world.

Originality/value: By demonstrating how the advent of a data-driven business era is impacting the function and role of managers, the paper initiates a debate around how business schools and higher education should evolve to better tackle the educational challenges associated with big data analytics and data science training. Elements of response and recommendations are then provided.
Conference Paper
Full-text available
Data has become ubiquitous and pervasive, influencing our perceptions and actions in ever more areas of individual and social life. Data production, collection, and editing are complex actions motivated by data use. In this paper we present and characterize the field of study of Human-Data Interaction by discussing the challenges of how to enable understanding of data and information in this complex context, and how to facilitate acting on this understanding while considering the social impact. By understanding interaction with data as a sign process, and by identifying the goal of designing human-data interaction as enabling stakeholders to promote desired and avoid undesired consequences of data use, we employ a semiotic perspective and define research challenges for the field.
Chapter
Full-text available
With the emergence of data environments with growing data variety and volume, organizations need to be supported by processes and technologies that allow them to produce and maintain high-quality data facilitating data reuse, accessibility, and analysis. In contemporary data management environments, data curation infrastructures have a key role in addressing the common challenges found across many different data production and consumption environments. Recent changes in the scale of the data landscape bring major changes and new demands to data curation processes and technologies. This chapter investigates how the emerging big data landscape is defining new requirements for data curation infrastructures and how curation infrastructures are evolving to meet these challenges. Different dimensions of scaling up data curation for big data are described, including emerging technologies, economic models, incentive models, social aspects, and supporting standards. This analysis is grounded in literature research, interviews with domain experts, surveys, and case studies, and provides an overview of the state of the art, future requirements, and emerging trends in the field.
Article
Full-text available
Promoting lifelong learning has received increased attention recently from the educational and business communities. Scholars and trend forecasters, looking toward the needs of the 21st century, have reached nearly unanimous agreement about the importance of a constantly improving and technologically competent workforce that can compete in global markets. There is also general agreement about the importance of various attitudes or motivations as underlying lifelong learning in general and in particular technical fields. How can educational psychologists best apply what they know about motivation and learning to the issue of promoting lifelong learning attitudes and skills, from elementary through postsecondary educational levels, and in training settings that include business and industry? Implications for the role and preparation of educational psychologists for the 21st century include a greater emphasis on "grand theories" that integrate principles of learning and motivation, of cognition and affect, and thus address the whole person in context.
Chapter
Professional learning is an important component of productivity in contemporary work environments characterised by continual change. Learning for work takes various forms, from formal training to informal learning through work activities. In many work settings professionals collaborate via networked environments, leaving various forms of digital traces and 'clickstream' data. These data can be exploited through learning analytics to make both formal and informal learning processes traceable and visible, supporting professionals with their learning. This chapter examines the state of the art in professional learning analytics by considering the different ways professionals learn. As learning analytics techniques advance, the modelling techniques that underpin these methods become increasingly complex and the assumptions that underpin the analytics become ever more embedded within the system. This chapter questions these assumptions and calls for a new, refreshed vision of professional learning analytics which is based on how professionals learn. There is a need to broaden our thinking about the purpose of learning analytics: to build systems that effectively address affective and motivational learning issues as well as technical and practical expertise; to intelligently align individual learning activities with organisational learning goals; and to be wary of attempts to embed professional expertise in code written by software developers rather than by the professionals themselves. There are also ethical concerns about the degree of surveillance on learners as they work and learn, with anxieties about whether people understand the (potentially serious) consequences [19]. Finally, learning analytics are generally developed for formal learning contexts, in schools, colleges, and universities, missing opportunities to provide the support professionals need as they learn through everyday work.
Article
Companies are looking to harness the power of data, both big and small, to take their business to new levels. One major hurdle for companies seeking to become data-centric is facing a lack of data literate talent for hire in the current market of recent college graduates. This article establishes a conversation about data literacy in business education, discusses the role of the librarian in this work, and proposes a set of data literacy competencies that librarians could help incorporate into business school education, as has been similarly seen in other disciplines.
Article
The increasing generation and collection of personal data has created a complex ecosystem, often collaborative but sometimes combative, around companies and individuals engaging in the use of these data. We propose that the interactions between these agents warrants a new topic of study: Human-Data Interaction (HDI). In this paper we discuss how HDI sits at the intersection of various disciplines, including computer science, statistics, sociology, psychology and behavioural economics. We expose the challenges that HDI raises, organised into three core themes of legibility, agency and negotiability, and we present the HDI agenda to open up a dialogue amongst interested parties in the personal and big data ecosystems.
Book
This book is devoted to the graphics of patient data: good graphs enabling straightforward, intuitive interpretation and efficient creation. We focus on easy access to graphics of patient data: the intention is to show a large variety of graphs for different phases of drug development, together with a description of what each graph shows, what type of data it uses, and what options there are. The main aim is to provide inspiration in the form of a "graphics cookbook." Many graphs provide creative ideas about what can be done. The book is not intended to be technical. It introduces general principles of good visualization to make readers understand the concepts, but the main focus is on creativity and usefulness: readers are enabled to browse through the book to get ideas of how their own data can be analyzed graphically. © 2012 Springer Science+Business Media, New York. All rights reserved.
Article
This article guides readers through the decisions and considerations involved in conducting focus-group research investigations into students' learning experiences. One previously published focus-group study is used as an illustrative example, along with other examples from the field of pedagogic research in geography higher education. An approach to deciding whether to use focus groups is suggested, which includes a consideration of when focus groups are preferred over one-to-one interviews. Guidelines for setting up and designing focus-group studies are outlined, ethical issues are highlighted, the purpose of a pilot study is reviewed, and common focus-group analysis and reporting styles are outlined.
Chapter
The collection and storage of huge amounts of data is no longer a challenge by itself. However, rapidly growing data repositories are creating considerable challenges in many application areas. Visualizations that worked well with a few data items now produce confusing or illegible displays. Decision-makers struggle to act based on a severely restricted understanding of the situation. The goal of Visual Analytics is to overcome this information overload and create new opportunities with these large amounts of data and information. The key challenge is to intelligently combine visualization techniques and analytic algorithms, and to enable the human expert to guide the decision making process. This chapter covers interesting and relevant previous work on situation awareness, naturalistic decision making, and decision-centred visualization. These concepts are put into the context of Visual Analytics research and are further illustrated by application examples.
Learning and work: Professional learning analytics
  • A Littlejohn
A. Littlejohn, "Learning and work: Professional learning analytics," in The Handbook of Learning Analytics, 1st ed. Alberta, Canada: Soc. Learn. Anal. Res. (SoLAR), 2017, pp. 269-277.
Embodied human-data interaction
  • N Elmqvist
N. Elmqvist, "Embodied human-data interaction," in Proc. Embodied Interaction: Theory and Practice in HCI, 2011, pp. 104-107.