Big Data: A Revolution that Will Transform how We Live, Work, and Think
... Big data's ascendance rests on two promises. First, that more data offers a more objective and accurate picture of reality, which can be leveraged for competitive advantage (e.g., Mayer-Schönberger & Cukier, 2013; Rust & Huang, 2014). Second, that "using big data leads to better predictions and better predictions yield better decisions" (McAfee et al., 2012, p. 64). ...
... Over the past decade or so, big data has been generating excitement amongst both practitioners and academics. Advocates of big data (e.g., Chen et al., 2012; Constantiou & Kallinikos, 2015; LaValle et al., 2011; Mayer-Schönberger & Cukier, 2013; McAfee et al., 2012; Rust & Huang, 2014; Varian, 2010; Woerner & Wixom, 2015) contend that big data is a revolution that will transform how we live, giving companies a competitive advantage by enabling them to assess, analyze and respond to environmental trends, and offer new types of service. Advocates argue that big data can usher in a new era of empiricism where data can 'speak for itself' without theoretical framing or interpretation (Kitchin, 2021). ...
... As a result, managers know without understanding: the "what" trumps the "why". From this perspective, big data is touted as "a source of new economic value and innovation" (Mayer-Schönberger & Cukier, 2013, p. 12; Chen et al., 2012). ...
Despite the ethical concerns over the datafication and surveillance of individuals and groups, companies are making ever greater investments in big data. The assumptions underpinning this movement are: (1) organizations are passive implementers of big data—more data is the inevitable consequence of technology and a competitive necessity for business, (2) more data offers a more objective and accurate picture of reality and (3) more data enables better prediction. We argue that this perspective is strategically unsustainable and abdicates ethical responsibility.
In this chapter, we adopt a sensemaking perspective (Weick, 1995, Sensemaking in Organizations, Sage) to challenge each of the assumptions of inevitability, objectivity, and predictability. Building on this critique, we discuss the role that organizations can play in creating alternative sustainable futures with big data and explore the legal and ethical consequences of their actions. In addition, we advocate that, from a sensemaking perspective, organizations can use big data to cultivate sustainable learning and innovating communities of both employees and customers.
... The exponential growth of big data has revolutionized industries, transforming how organizations operate, make decisions, and interact with consumers. In sectors ranging from healthcare to finance, big data enables predictive analytics, personalized services, and operational efficiencies (Mayer-Schönberger & Cukier, 2013). However, this rapid expansion of data collection and usage has also raised serious concerns regarding data privacy. ...
... Laws such as the GDPR require extensive data protection measures, including encryption, pseudonymization, and regular audits, which increase operational costs and require significant technical expertise. Organizations often struggle to achieve compliance while ensuring robust data security, especially when attempting to balance these requirements with user convenience and usability (Mayer-Schönberger & Cukier, 2013). Effective data privacy legislation must consider these conflicts and promote frameworks that allow security and privacy to coexist in big data contexts. ...
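To make one of these technical measures concrete, here is a minimal sketch of pseudonymization in Python: direct identifiers are replaced with salted hashes so records remain linkable without exposing identity. The field names and salt handling are illustrative assumptions, not a prescription from the GDPR or from any cited framework.

```python
import hashlib
import secrets

# Minimal pseudonymization sketch: replace a direct identifier with a
# salted hash so records can still be linked without exposing identity.
# The salt must be stored separately under access control; otherwise the
# mapping can be reversed by hashing candidate identifiers.
SALT = secrets.token_bytes(16)  # in practice, managed by a key vault

def pseudonymize(identifier: str, salt: bytes = SALT) -> str:
    """Return a stable pseudonym for an identifier (hypothetical scheme)."""
    return hashlib.sha256(salt + identifier.encode("utf-8")).hexdigest()

record = {"email": "alice@example.com", "purchase_total": 129.90}
record["email"] = pseudonymize(record["email"])
print(record)  # the email is now an opaque but linkable token
```

Note that under the GDPR pseudonymized data still counts as personal data; the sketch illustrates the mechanism, not compliance.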
This study explores the complexities and challenges of implementing data privacy laws in the era of big data, where security, privacy, and innovation frequently intersect. The exponential growth of data collection, driven by advancements in technology and the widespread adoption of digital services, has intensified the need for effective data privacy regulations. However, balancing the protection of individual privacy with the demands of innovation and security presents considerable challenges for policymakers. Utilizing a qualitative approach, this study employs a literature review and library research methodology to analyze existing data privacy laws, regulatory frameworks, and scholarly discussions on the topic. Findings indicate that current data privacy laws often struggle to keep pace with rapid technological change, creating gaps that can be exploited by both private and public entities. Additionally, the study highlights conflicting priorities, as stringent privacy protections can inhibit technological innovation, while lenient policies may lead to significant privacy vulnerabilities. This analysis suggests that adaptive and scalable legal frameworks, alongside international cooperation, are essential to address these challenges effectively. Recommendations are provided for balancing privacy and innovation in a way that upholds data security without stifling technological growth. This study contributes to ongoing debates surrounding data privacy by offering insights that may guide future policy developments in the field.
... Alongside these inquiries and studies in the field, the development of internet technology in particular has been an innovation that has expanded the boundaries of machine learning. With the development and increasing use of internet technology, the term artificial intelligence has come to be associated with concepts such as data science and the Internet of Things (IoT), and issues such as how data are collected over the Internet and how they are processed using artificial intelligence technologies have become important (Mayer-Schönberger & Cukier, 2013; Le & Mikolov, 2014). Artificial intelligence technologies have become closely related to the concept of 'big data' as they can learn certain processes by analysing data sets and make decisions about these processes. ...
... Big data, one of the most important concepts underpinning the current dominance of artificial intelligence, is also related to deep learning, a sub-branch of machine learning. These are among the basic concepts of artificial intelligence, and they play a role in analysing large data sets and giving machines the ability to make decisions (Mayer-Schönberger & Cukier, 2013; Goodfellow et al., 2016). ...
Since the mid-20th century, artificial intelligence technologies, which have their roots in neuroscience and the discovery of neural networks, have created a rapidly growing competitive field on a global scale. Systems based on artificial intelligence technology are used today in areas that deeply affect individuals and society, such as health, finance, trade, education, media, industrial production, energy, and cyber security. Artificial intelligence companies are funded by the world's leading companies and governments, and machine learning based on big data is becoming increasingly important. Although the ethical problems arising from negative uses, alongside the benefits arising from positive uses, remain a matter of debate, the fact that artificial intelligence technologies will shape the future calls for engagement and consensus rather than retreat from the field. This research discusses the potential of artificial intelligence in moving image production in light of this reality. In the journey of artificial intelligence that started with the question "Can machines think?", this research focuses on the question "Can artificial intelligence produce professional, hyper-realistic scenes?" and examines three important platforms operating in this field. Within the scope of the research, the Runway, Luma Dream Machine and Imagine Art platforms were asked to produce moving images based on two realistic and futuristic scenarios. The moving images produced were subjected to content analysis and examined under predetermined categories and subcategories. Although the moving images produced contain various errors, these examinations have shown that artificial intelligence technologies will, within a short time, reshape the production of series, films and content requiring expertise, and that artificial intelligence will replace various forms of expertise in the sector. Keywords: Communication Studies, Artificial Intelligence, Moving Image
... The definition from Laney (2001), which would over time become the most cited and reworked, as well as the first definition ever offered, states: "big data are high-volume, high-velocity and high-variety information assets that require innovative, cost-effective forms of information processing enabling insight, decision-making and process automation". Note how a new characteristic, variability, has been added, which we will examine further on; from here derive the well-known 3Vs, which, however, remain procedures for creating and organizing big data rather than characteristics of big data themselves (Aragona, 2016). Still others (Marz & Warren, 2012; Mayer-Schönberger & Cukier, 2013; Kitchin, 2014, as cited in Glue-labs, 2019) have moved beyond the 3V approach, focusing not on computational characteristics as in the definitions presented so far, but turning instead to the social sciences and introducing exhaustivity, high resolution, relationality and flexibility. ...
... Alazab, & Luo, 2019). But we will discuss these problems in detail further on. Let us now look more closely at the characteristics identified by Marz & Warren (2012), Mayer-Schönberger & Cukier (2013) and Kitchin (2014), with a more sociological eye, tied to research methodology. ...
... However, this reliance on data brings an inherent risk of compromising student privacy if robust security measures are not in place. According to Mayer-Schönberger and Cukier (2017), the collection of large volumes of data increases the potential for misuse or unintentional exposure. In educational settings, data breaches can have severe consequences, especially for marginalized students who may face heightened risks if their data is compromised. ...
... Institutions must implement strong encryption practices, comply with regulations like the General Data Protection Regulation (GDPR), and maintain transparent data policies. Additionally, students and parents should be informed about data usage, and they should be empowered to make decisions regarding their information (Mayer-Schönberger & Cukier, 2017). Thus, privacy and security concerns are not merely technical issues; they are fundamental to fostering trust in AI-driven educational technologies. ...
Q-Star AI in Education: Transforming Learning and Teaching for the Future is an exploration of how the transformative capabilities of Q-Star AI can enhance the education sector. This book is organized to guide readers through both the foundational and advanced applications of Q-Star AI, demonstrating its potential to create customized learning experiences, streamline administrative processes, and empower teachers with data-driven insights. By examining real-world case studies and exploring both opportunities and challenges, this text offers a nuanced view of how Q-Star AI is reshaping the educational landscape.
... This digitization, especially in K-12 education, has led to growing concerns about learning environments that convert learners' social actions into quantifiable data, a process known as "datafication." While datafication has been seen to support decision-making by adapting and personalizing the learning experience [9], it has also raised significant concerns for children's well-being. ...
In this chapter, we discuss the impact of digitizing education, the risks of datafication, data privacy loss, and surveillance, and their implications for children’s fundamental rights and freedoms. While debates around digital media’s impact on children’s health and development have grown, our goal is to present the most pertinent concerns emerging from the use of education technologies (edtech), focusing on data collection and surveillance in Anglo-American (Western) contexts. We shed light on the gaps in literature and provide recommendations addressing education stakeholders.
... In fact, more data has been recorded in the last two years than in all previous years since the beginning of human history. Big data is being used to improve different fields such as medicine, engineering, and economics, among other areas [2]. ...
The financial market encompasses a set of institutions, products, and services aimed at meeting the financial needs of individuals, companies, and governments. Its primary objective is to direct financial resources from investors to projects requiring funding. This is achieved through the issuance and trading of securities such as stocks, debt securities, among others. In this paper, the goal was to develop a machine learning application specifically for the Brazilian financial market, focusing on predicting the market value of eight companies that are representative of the financial sector on the stock exchange. The prediction is based on the closing price history and uses data from the last three years, with the inputs corresponding to the last 60 days immediately preceding the forecast date. For this task, three machine learning models were selected: Long Short-Term Memory (LSTM), Multilayer Perceptron (MLP), and Convolutional Neural Network (CNN). Each of these was fine-tuned using five different optimizers, resulting in a total of 15 models. Subsequently, all 15 models were combined into an Ensemble. After applying data transformations, the models achieved a satisfactory level of error for the analysis. Among the transformations used, the logarithmic transformation stood out as the one that resulted in the most well-adjusted models compared to the others. In second place, the Yeo-Johnson transformation showed slightly higher error but performed better on series with high variation. Additionally, the convolutional models and Ensemble were the most effective.
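As an illustrative sketch of the windowed setup described above (not the authors' actual code or configuration), the snippet below log-transforms a synthetic closing-price series, builds 60-day input windows, and fits one of the three model families, an MLP, using scikit-learn; the data and hyperparameters are assumptions made for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Sketch of the windowed forecasting setup: log-transform closing prices,
# use the previous 60 days as features, predict the next day's (log) price.
# A synthetic series stands in for the real closing-price history.
rng = np.random.default_rng(0)
prices = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, 800)))  # fake ~3y series
log_prices = np.log(prices)  # the log transform highlighted in the abstract

WINDOW = 60  # inputs: the 60 days immediately preceding the forecast date
X = np.array([log_prices[i : i + WINDOW] for i in range(len(log_prices) - WINDOW)])
y = log_prices[WINDOW:]

split = int(0.8 * len(X))  # chronological train/test split
model = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0)
model.fit(X[:split], y[:split])

pred = np.exp(model.predict(X[split:]))  # invert the log transform
actual = np.exp(y[split:])
print("MAPE: %.2f%%" % (100 * np.mean(np.abs(pred - actual) / actual)))
```

The Yeo-Johnson alternative mentioned in the abstract is available in the same library as sklearn.preprocessing.PowerTransformer(method="yeo-johnson"); an LSTM or CNN variant would swap the estimator while keeping the same windowing.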
... This shift emphasizes empiricism driven by the abundance of data and advanced analytics, allowing for the generation of scientific conclusions without relying on pre-existing theories or hypotheses (Kitchin, 2014). This new approach changes how we make knowledge, do research, and understand reality (Boyd and Crawford, 2012; Mayer-Schönberger and Cukier, 2013). ...
The widespread adoption of digitalization and artificial intelligence, alongside the abundance of big data, has significantly transformed societies. Recently, there has been increasing interest in leveraging big data and artificial intelligence to capture and analyze social transformative change in evaluation. However, there is no consensus on the ethical and appropriate use of these tools in evaluation. This article used a systematic literature review to provide an overview of the use of big data and artificial intelligence for evaluation purposes, identifying the challenges faced. Unresolved issues encompass ethical, methodological, and ownership concerns. The study suggests ways to address these challenges and advocates for united efforts to integrate big data and artificial intelligence with traditional approaches. To achieve this, it emphasizes the necessity of leveraging interconnected data platforms, mitigating ethical risks, and enhancing evaluators’ competencies in computer and data science, which is essential for the integration of big data and artificial intelligence in the evaluation field.
... Government officials also need to conduct real-time tracking of data regarding the quality, progress, and effectiveness of urban planning projects. Some scholars critically assert that issues such as insufficient data reliability, information islands, and the absence of virtual collaboration platforms are posing challenges to the validity of urban planning decisions [6,7]. ...
In the digital era, data collaboration constitutes a critical trend in urban planning and design. It is of paramount importance in addressing contemporary issues related to the misinterpretation, misapplication, and misunderstanding of spatial genes, as well as in facilitating data sharing and value creation associated with spatial genes. Targeting the complex problems posed by the multiple entities and threads in spatial-gene research and practice, this paper first examines, through a literature review, the correlation between urban planning and data collaboration, expounds the spatial gene concept and the background of its proposal, and analyzes the challenges confronted in spatial-gene data collaboration. Then, with an elaboration of the data value chain concept, a data collaboration framework for spatial-gene research and practice is constructed, specifically encompassing three main links: convergence, mining, and application. Finally, from the three aspects of data collection and storage, data analysis and processing, and data circulation and sharing, technical implementation paths and suggestions are put forward. We contend that the establishment of this framework can be expected to promote data collaboration among multiple entities, enhance the efficiency and scientific rigor of urban design, and thereby facilitate the preservation of cultural diversity and the sustainable development of cities.
... Moreover, the implications of data-driven decision-making and big data analytics are thoroughly examined by Mayer-Schönberger and Cukier (2013). They propose that big data analytics can lead to more informed economic decisions and innovations, though it also raises concerns regarding data privacy and ethical use of information, necessitating a balanced approach to harnessing big data's capabilities while safeguarding individual rights. ...
The digital world has penetrated various corners of space and dimensions in human life, filling in previously unimaginable gaps. Humans are now more familiar with the presence of the digital world. Suddenly, and sometimes without realizing it, a new culture and behaviour has formed: digital culture. Along with the emergence of new habits, new cultures, and new worlds, at least two sides have formed, a good side and a bad side. The digital world offers convenience and new opportunities. Meanwhile, on a small scale, we can see humans hiding behind anonymous identities, feeling free to do anything in cyberspace. This has led to new social problems. On a national scale, the digital world requires the state apparatus to adapt and to prepare policy directions and legal instruments to protect national interests and citizens. The scope of economic activity cannot be separated from the influence of the digital world, and that influence even touches on ethics. Many studies show that companies that apply ethics gain a competitive advantage over companies that ignore it. When the wave of disruption comes, companies feel these dynamics and adapt their business activities and operations, including their approach to ethics. This paper identifies the problems companies face due to disruption in the era of digital transformation, viewed mainly from an ethical perspective, and offers a study of and a view on its impact. The method used is a literature study of various sources related to digital disruption, corporate ethics, and the socio-cultural behaviour caused by digitization. The purpose of this paper is to find new study perspectives on companies, social behaviour, and culture resulting from disruption in the era of digital transformation, and the ethical side that goes with it. The digital revolution presents businesses with the challenge of leveraging technology for economic gain while navigating complex ethical issues related to human well-being, fairness, and social justice. To address these challenges, companies must adopt responsible innovation, invest in workforce development, establish ethical frameworks, and foster inclusive dialogue. Ensuring digital equity through an inclusive economy is vital, requiring a shift from profit maximization to prioritizing transparency and accountability. The findings of this study are intended to inspire and to provide practical and strategic benefits for companies, showing what they must do ethically to ride the wave of disruption in this era of digital transformation, increase competitive advantage, and maintain sustainability. Future research should focus on adaptive strategies and policy interventions to manage technological disruptions, promoting economic growth without ethical compromises. By embracing ethical leadership and responsibility, businesses can unlock innovation, competitive advantage, and sustainability while contributing to societal good.
... Facilitated access to quality data is a frequent component of discussions about advances in scientific research, especially in domains such as data science, economics, and health (JOHNSON, 2019). However, mediating between the needs of researchers and the conditions imposed by companies for data sharing remains an area that demands constant attention and innovative solutions in order to maximize the use of the available informational potential (MAYER-SCHÖNBERGER, 2013; HARDT, 2019). ...
Teams generally use management tools to track pending User Stories, control their source code, and record their effort estimates and the people responsible for opening and closing tickets. These tools contain data that can be used in a variety of software engineering research. Finding data for research is challenging, as private companies are reluctant to share their data. The goal of this article is to present a dataset containing raw data from 33 open-source Agile Software Projects mined from GitLab, totaling 122,627 Story Points and 20,474 User Stories. We have made this data publicly available in CSV format to facilitate its use by the interested scientific community. We believe this dataset can be used in several lines of software engineering research, including text classification and vectorization, and machine learning in engineering domains such as effort estimation and task prioritization.
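A hedged sketch of how such a CSV dataset might be loaded as a starting point for effort-estimation research follows; the file name and column names are hypothetical, not the published schema.

```python
import pandas as pd

# Hypothetical usage sketch for a mined story-point dataset like the one
# described above. "agile_user_stories.csv", "project" and "story_points"
# are illustrative names, not the dataset's actual schema.
stories = pd.read_csv("agile_user_stories.csv")

# Distribution of effort estimates per project: a typical first step in
# effort-estimation and task-prioritization studies.
summary = stories.groupby("project")["story_points"].agg(["count", "sum", "mean"])
print(summary.sort_values("sum", ascending=False).head())
```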
... As such, discussions on the generativity of AI are at least in part an extension of the many ongoing controversies surrounding the concept of datafication. Datafication is understood as the practice and tendency of collecting, databasing, and quantifying information, and, further, the usage of these data for knowledge production, optimization, and the generation of economic value (Mayer-Schönberger and Cukier, 2013). The emergence of GenAI is in many ways reliant on the pre-existing proliferation of datafication (Kalpokas, 2023;Steinhoff, 2024). ...
Following the widely noticed launch of ChatGPT 3.5 in November 2022, an unprecedented number of users have started to experiment with generative AI for communication purposes. Prior studies have shown how users are commodified by platforms, and the unprecedented development of generative AI hence raises, once again, questions of platform dynamics versus user agency. In this study, we argue that platformized generative AI (GenAI) actively 'talks back' to its users, prompting them to act accordingly. Theoretically, we develop the concept of data reflectivity as a critical lens, showing that users exhibit reflective practices. These allow them to reflect upon their own role in relation to platformized GenAI and thus alter their patterns of action. The empirical case study, carried out in Spring 2023, draws on a survey with 60 early adopters and 14 subsequent semi-structured interviews, collected in an NGO operating in Southeast Asia. The thematic analysis shows that users relate to platformized GenAI in three distinct but related ways: (1) as a happy helper to organize and systematize knowledge and information; (2) as a creative tool to generate ideas; and (3) as a conversation partner for personal and life-related matters. In conclusion, we discuss the findings critically in relation to overall platform dynamics and the notion of systems 'speaking back' and further suggest that future research should aim to bring the two research fields of datafication and user studies even closer together.
... The more seamlessly datasets can be integrated and cross-queried, the greater value they can potentially provide. This requires not only technical interoperability (e.g., through common schemas and identifiers), but also semantic harmonization (e.g., through mappings and ontologies) (Berman, 2015;Mayer-Schönberger & Cukier, 2013;Ogunseye, 2020). ...
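A toy sketch of the mapping-based harmonization this excerpt mentions: records from two sources with different schemas are aligned to one shared target schema via explicit field mappings. All field names here are invented for illustration; real semantic harmonization would additionally reconcile vocabularies via ontologies.

```python
# Toy sketch of mapping-based semantic harmonization: records from two
# sources with different schemas are renamed to one shared target schema.
# All field names and sources here are invented for illustration.
FIELD_MAPS = {
    "source_a": {"patient_id": "subject_id", "dob": "birth_date"},
    "source_b": {"id": "subject_id", "date_of_birth": "birth_date"},
}

def harmonize(record: dict, source: str) -> dict:
    """Rename source-specific fields to the shared target schema."""
    mapping = FIELD_MAPS[source]
    return {mapping.get(key, key): value for key, value in record.items()}

a = harmonize({"patient_id": "A-17", "dob": "1990-04-02"}, "source_a")
b = harmonize({"id": "B-03", "date_of_birth": "1985-11-20"}, "source_b")
print(a, b)  # both now share subject_id / birth_date and can be cross-queried
```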
As organizations seek to maximize the value of their data assets, repurposing data is becoming increasingly important. Drawing on a synthesis of research across information systems, computer science and related fields, this paper characterizes data repurposing in terms of three key drivers: diversity, representativeness, and context-richness. We articulate six principles for how these drivers manifest and interact to enable responsible repurposing. We then discuss the implications of these principles for data collection, curation, and governance practices. Finally, we outline an agenda for future research and cross-sector collaboration to advance the theory and practice of data repurposing. Through this work, we aim to provide conceptual foundations and practical guidance for organizations seeking to steward their data as renewable assets for long-term value creation.
... It has been noted that data quality, integration, and sharing face many challenges in the energy and environment sectors. Although the volume of data is large and contains much valuable information, relatively little of it can be effectively utilized, and the quality of the data is poor, with timeliness, completeness, accuracy, and consistency needing to be improved [90]. Second, the lack of a unified data-sharing platform limits the accessibility of data, and different companies and institutional organizations use different standards for data definition, storage, and management, leading to data redundancy and inconsistency and making full data integration more difficult. ...
The construction industry, being responsible for a large share of global carbon emissions, needs to reduce its high carbon output to meet carbon reduction goals. Artificial intelligence can provide efficient and accurate technical support for carbon emission calculation and prediction. Here, we review the use of artificial intelligence techniques in forecasting, management and real-time monitoring of carbon emissions, focusing on how they are applied, their impacts, and challenges. Compared to traditional methods, the prediction accuracy of artificial intelligence models has increased by 20%. Artificial intelligence-driven systems could reduce carbon emissions by up to 15% through real-time monitoring and adaptive management strategies. Artificial intelligence applications improve energy efficiency in buildings by up to 25%, while reducing operational costs by up to 10%. Artificial intelligence supports the establishment of a digital carbon management system and contributes to the development of the carbon trading market.
... With the widespread adoption of the Internet and the rapid development of information technology, the world has entered an unprecedented era of information explosion [1]. Every day, massive amounts of data are generated, yet people often feel at a loss when faced with this data deluge [2]. ...
A great deal of geoscience knowledge exists in the form of unstructured text or maps which are difficult to use by structured models or to process by computers. Thus it is urgent to transform them to structured knowledge graph (KG). However, the development of geoscience knowledge graph (GKG) lags behind the general KG because it involves in the complexity of spatiotemporal relationships and knowledge from multi-sources. This study constructed a mountain vegetation knowledge graph (MVKG) incorporating with vegetation geographical principles, maps and remote sensing (RS) images with the support of ArcGIS and deep learning method, to facilitate the use of vegetation knowledge in various disciplines. The results showed that: 1) For the construction of a GKG such as the MVKG, it is first necessary to define a strict and compatible ontology to classify and organize all the knowledge in order to facilitate structured representation and storage of them. 2) The MVKG entities were labeled from vegetation maps with the support of ArcGIS, which indicated that the spatio-temporal representation, organization, and analysis techniques of GIS can effectively support the construction of the GKG. 3) The RS image features extracted by the deep learning method were embedded into the properties of the MVKG entities, which will be significant for the MVKG application because RS monitoring is indispensable for the study of vegetation distribution and changes. The MVKG can also enhance the application of vegetation knowledge and information in RS monitoring for vegetation cover and change, mountain ecology, and climate change.
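As a minimal illustration of the structure this abstract describes (subject-predicate-object triples plus per-entity properties, including an image-feature embedding of the kind a deep model might produce), here is a sketch in plain Python; every entity name and value is invented, and the MVKG itself is not represented this way in the paper.

```python
# Minimal illustration of a geoscience knowledge-graph structure:
# triples for relations, plus per-entity properties including a stand-in
# remote-sensing feature vector. All names and values are invented.
triples = [
    ("alpine_meadow", "is_a", "vegetation_type"),
    ("alpine_meadow", "occurs_on", "mount_example"),
    ("mount_example", "located_in", "example_range"),
]

entity_properties = {
    "alpine_meadow": {
        "elevation_band_m": (3000, 4500),
        "rs_feature_vector": [0.12, 0.87, 0.33, 0.05],  # stand-in embedding
    }
}

def neighbors(entity: str) -> list[tuple[str, str]]:
    """Return (predicate, object) pairs for a given subject entity."""
    return [(p, o) for s, p, o in triples if s == entity]

print(neighbors("alpine_meadow"))
print(entity_properties["alpine_meadow"]["rs_feature_vector"])
```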
... In contrast to human heuristics that may represent biased and poor substitutes for computations, BDA is expected to generate fact-based insights to overcome the deficiencies associated with human heuristics (Agarwal & Dhar, 2014;Dhar, 2013;Lyytinen & Grover, 2017;McAfee, 2013). For example, with BDA's greatly improved data, algorithms, and computational power, organizations can analyze entire populations rather than samples, uncover hidden correlations among variables, and develop more comprehensive and objective models to represent decision problems (Mayer-Schönberger & Cukier, 2013). Moreover, with the recent trend of integrating machine learning techniques in BDA, it has become more AI-driven, enabling efficient detection of patterns that experts may have overlooked (Rana et al., 2022;van den Broek et al., 2022). ...
... ML is a very large and mature discipline at the interface of computer science and statistics. As the volume, velocity, veracity, and variety of data that society generates and collects are all increasing rapidly, the analysis of so-called Big Data transcends the cognitive capability of humans [24,25]. Consequently, there is a considerable and growing reliance on algorithms to structure, analyze, and model data. ...
Recent calls to take up data science either revolve around the superior predictive performance associated with machine learning or the potential of data science techniques for exploratory data analysis. Many believe that these strengths come at the cost of explanatory insights, which form the basis for theorization. In this paper, we show that this trade-off is false. When used as a part of a full research process, including inductive, deductive and abductive steps, machine learning can offer explanatory insights and provide a solid basis for theorization. We present a systematic five-step theory-building and theory-testing cycle that consists of: 1. Element identification (reduction); 2. Exploratory analysis (induction); 3. Hypothesis development (retroduction); 4. Hypothesis testing (deduction); and 5. Theorization (abduction). We demonstrate the usefulness of this approach, which we refer to as co-duction, in a vignette where we study firm growth with real-world observational data.
The lack of embedded ethical frameworks for data governance limits attention to the socio-ethical dimensions of data use. This constraint is significant in increasingly complex digital innovation initiatives and contributes to the slow uptake of digital technologies in many contexts, highlighting the ongoing need for inclusive and responsible innovation in agri-food systems. We used an abductive reasoning approach to develop a citizen-centric data lifecycle which acknowledges that effective relationships are essential to make space for all data citizens to be equitably and ethically involved in governance and decision-making. This is supported by a summary of the literature depicting the roles, responsibilities and challenges of data citizens within data ecosystems. Use of the citizen-centric data lifecycle could accelerate digitalization efforts in the agri-food sector and position research organisations to support the primary sector through the ongoing digital transformation. It will also have value for agencies and organisations grappling with how to operationalise ethical data management practices to meet policy requirements and stakeholder expectations.
Epistemology plays a fundamental role in shaping the foundations of research methodology, guiding researchers in understanding the nature of knowledge and how it is acquired, justified, and disseminated. This chapter examines the significance of epistemology in research and its implications for the formulation of research questions, the selection of appropriate methodologies, and the interpretation of findings. The chapter then explores the interplay between epistemology and research design, emphasizing the importance of aligning methodological approaches with epistemological assumptions to ensure the validity and rigor of research outcomes.
In this chapter, we explore the platformization of family life by concentrating both on the specific context of parenting in early childhood and on a core function of many platforms—datafication. Drawing on two case studies of infant feeding apps, including qualitative research interviews with users, the chapter explores how understandings of what it is to be a “good” parent are now defined through datafication and explicit metrics which demonstrably transform maternal and paternal roles as well as impacting on intergenerational discussions, traditional knowledge and understandings of what it means to bring up a baby. We examine these cases by considering the role of datafication in developing self-understandings of the family narrative and changes about the relationship between the family and other “social envelope” institutions like school, care, and welfare. The chapter concludes by summarising how the various activities of family life might be re-mediating how families understand themselves as individuals, social units, and institutions.
In an era where environmental consciousness is paramount, building sustainable brands has become a critical goal for businesses worldwide. This study explores the synergistic effects of influencer marketing, brand evangelism, and data-driven strategies on sustainable brand development. By leveraging the authenticity of influencers and the passion of brand evangelists, companies can drive deeper consumer engagement and loyalty. Advanced data analytics provide actionable insights, optimizing marketing efforts to align with sustainability goals. This comprehensive approach uniquely examines the interconnectedness of these elements, offering new insights and practical guidance for businesses. The findings reveal significant practical implications, demonstrating how integrated strategies can enhance brand reputation and drive long-term success. Preliminary data suggests that genuine influencers and dedicated brand evangelists significantly boost consumer trust and engagement, while data analytics fine-tune sustainability messages.
Cross-border data flows, which encompass a broad and diverse range of economic and non-economic dimensions, raise a number of new trade policy issues. However, as the status of data and data flows in International Economic Law remains ill-defined, no effective multilateral governance is currently exercised with respect to the digital transformation of trade. Notably, the proliferation of national data governance frameworks is a critical element for regulating trade in the digital economy, but one that receives only limited consideration under WTO law. As digital globalisation accelerates, a patchwork of country-specific data governance frameworks threatens to fragment the global data sphere and thus increase barriers to digital trade. The debate on transnational data governance is particularly pronounced with regard to data privacy laws, as these are a common element of domestic data governance and the global landscape of data privacy regulations is characterised by considerable heterogeneity. As a result, the impact of national data privacy laws on the cross-border flow of personal data is one of the most contentious issues associated with digital trade. This chapter provides an in-depth examination of the regulation of data flows through data privacy rules and explores the rationale behind a contemporary data privacy collision in digital trade.
The digital paradigm has far-reaching implications for the global economy and for cross-border trade. Digital technologies are driving a new phase of global economic integration. With respect to international trade, it is generally acknowledged that the digital transformation changes what is traded, who trades and how trade is conducted. Data has emerged as a vital economic resource in the digital economy, leading to an exponential increase in global data exchange and a surge in cross-border data flows. The economic shift from tangible to intangible assets in a digital economy is resulting in the emergence of a digital paradigm for cross-border trade. This chapter is dedicated to illustrating the novel dimensions of cross-border economic activity in the era of digital globalisation. This chapter lays the groundwork for analysing the development of regulations for these new elements within a trade context and provides a background for analysing the nexus with data privacy regulations.
The chapter discusses the transition from traditional competitive business strategies to innovative, collaborative, and sustainable models. A key focus is the shift to sustainable business practices, emphasizing the importance of integrating social and environmental responsibility into corporate strategies. The environmental, social, and governance (ESG) framework is presented as an evolution of corporate social responsibility (CSR) that integrates non-financial factors into business operations and attracts investors interested in long-term value and ethical practices. The chapter advocates stakeholder capitalism, where business success is measured by its impact on people and the planet, not just financial performance. This model emphasizes the alignment of business objectives with broader social and environmental goals. It also examines the rise of digital business platforms such as Google, illustrating how technology is reshaping capitalism. These platforms use strategies such as holographic wrapping to enter and dominate new markets, demonstrating the transformative power of digital innovation.
This paper provides a review on workplace monitoring, focusing on its relationship with the ongoing process of datafication. By examining various techniques and technologies, the contribution specifically highlights the coexistence of coercion and consent as a characteristic feature of digital Taylorism. To inform the analysis, it suggests a cross-reading of two theoretical frameworks, namely Labour Process Theory (LPT) and surveillance studies. Thus, it argues that more consensual practices are now used to hide the coercive dimension of management by transferring it from human bosses to automated ones, rather than truly replacing it. Furthermore, the paper highlights how this shift is closely linked to the increased use of gamification, rankings, self-tracking attitudes and real-time monitoring, as well as to the growing precariousness that permeates both productive and reproductive spheres.
FULL TEXT > https://rdcu.be/d1b4G
Fairness in AI and ML systems is increasingly linked to the proper treatment and recognition of data workers involved in training dataset development. Yet, those who collect and annotate the data, and thus have the most intimate knowledge of its development, are often excluded from critical discussions. This exclusion prevents data annotators, who are domain experts, from contributing effectively to dataset contextualization. Our investigation into the hiring and engagement practices of 52 data work requesters on platforms like Amazon Mechanical Turk reveals a gap: requesters frequently hold naive or unchallenged notions of worker identities and capabilities and rely on ad-hoc qualification tasks that fail to respect the workers’ expertise. These practices not only undermine the quality of data but also the ethical standards of AI development. To rectify these issues, we advocate for policy changes to enhance how data annotation tasks are designed and managed and to ensure data workers are treated with the respect they deserve.
Digital arrest scams, in which fraudsters impersonate law enforcement to intimidate victims into financial compliance, are an escalating threat. This research aims to propose an integrated framework that merges four critical areas: predictive crime script modeling, cognitive resilience strategies, advanced forensic tools, and digital literacy initiatives. By leveraging these approaches, this paper aims to enhance law enforcement capabilities, improve victim decision-making processes, advance forensic investigations, and foster public awareness to prevent digital arrest scams. We further explore the application of artificial intelligence (AI), big data, and blockchain in these domains to support future developments and enhance cybersecurity measures.
The book Pengantar Bisnis: Konvensional dan Era Digital (Introduction to Business: Conventional and the Digital Era) offers a comprehensive guide for readers who want to understand the dynamics of the business world from two perspectives: conventional business and the digital era. Written by Menhard, S.E., M.Pd. and Rahmadani Hidayat, S.E., M.M., the book presents an in-depth review of how technology has changed the way businesses operate, from marketing to human resource management. Readers are invited to explore fundamental business concepts and to see how sound management strategies can be the key to success in facing challenges and seizing opportunities in the ever-evolving digital era. Divided into several chapters, the book touches on important aspects such as entrepreneurship, operations management, digital marketing, and corporate social responsibility. With a clear and practical approach, it is suitable for students, business practitioners, and professionals who want to improve their understanding of how business is evolving amid rapid technological development. Do not miss the opportunity to prepare for a business future full of innovation and change, equipped with the solid insights this book provides.
Visible surveillance technologies are vanishing in the wake of dataveillance. The practice of surveillance is certainly not disappearing, but it has become inconspicuous. In a regime of invisible surveillance, the watchers and the watched alike are hard to identify, for the latter have also become socially invisible as human beings. I argue that the idea of becoming invisible in both digital and pre-digital surveillance societies is multifaceted. On the one hand, it suggests total deprivation of personal autonomy as a result of overexposure resulting in the disappearance of the subject and, on the other hand, it implies a possibility of resistance and self-assertion. Self-exposure is taken to extremes in Dave Eggers’s dystopian novel The Circle (2013) where the protagonist becomes the centre of attention in a “viewer society” (Mathiesen 1997) and “goes transparent” (Eggers 2013, 351). In contrast, Wolfgang Hilbig’s Stasi novel “Ich” (1993) works with different notions of invisibility. Recruited to spy for the Stasi, Hilbig’s protagonist – an unsuccessful poet – is at the same time targeted by East Germany’s secret police and wants to become invisible, hiding from the omnipresent Stasi surveillance in Berlin’s maze-like cellar corridors in an attempt of self-assertion. The comparison of these novels will elucidate how different discourses on (in)visibility and transparency contribute to an account of subjectivity that attempts to resist surveillance.
Introduction: This paper introduces the concept of Sustainable Digital Rent (SDR), highlighting the shift from traditional economic rent based on tangible assets to rent derived from digital platforms. At the heart of this shift is the "value state," a dynamic balance between constructive expectations and destructive information. As digital platforms generate increasing amounts of information, expectations are increasingly met and shared more efficiently with all users, leading to a reduction in individual and general motivational, emotional, and cognitive engagement. These platforms, now essential to modern life, facilitate online activities that reduce as well physical engagement and natural interactions, thereby impacting cognitive function and physical health. By extracting rent directly, digital platform operators limit the benefits users could gain to support their mental and physical well-being. Methods: This paper empirically defines and estimates SDR using the collective estimates of price, cost, and income (PCI) as practiced in North American real estate appraisal, demonstrated through abstract art rent. Our approach provides a new perspective on valuing intangible assets, such as knowledge, by showing the shift from expectation to information, governed by the value state in cognitive evaluations. Emphasizing interdisciplinary relevance, the method underscores the need for an efficient mechanism to redistribute SDR benefits to digital platform users, supporting fair and equitable digital development. Results and discussion: The results show that digital rent is driven primarily by cognitive and informational content, demonstrating the need for redistribution mechanisms to address the growing inequality on digital platforms. The use of abstract art as a case study provides a convenient and illustrative way to explore how intangible assets, like digital rents, can be evaluated and redistributed. SDR offers insights into how digital rents can be captured and redistributed equitably, ensuring that platform users and creators benefit from the knowledge economy's growth. The findings underscore the relevance of measuring SDR to guide policy recommendations aimed at reducing digital monopolization and promoting sustainable digital development.
Purpose
A more accurate comprehension of data elements and the exploration of new laws governing contemporary data in both theoretical and practical domains constitute a significant research topic.
Design/methodology/approach
Based on the perspective of evolutionary economics, this paper re-examines economic history and the existing literature to study: the changes in the “connotation of production factors” in economics caused by the evolution of production factors; the economic paradoxes formed by data in the context of social production processes and business models, which traditional theoretical frameworks fail to resolve; the disruptive innovation of the classical theory of value by multiple theories of value determination; and the conflicts between the data market monopoly, the resulting distribution of value, and the real economy.
Findings
The research indicates that contemporary advancements in data have catalyzed disruptive innovation in the field of economics.
Originality/value
This paper, grounded in academic research, identifies four novel issues arising from contemporary data that cannot be adequately addressed within the confines of the classical economic theoretical framework.
The data revolution has reshaped our understanding of design, merging tradition and innovation in a world increasingly driven by complex data systems. This book explores the intersection of data science, design, and sustainability, with a particular focus on the fashion industry. It delves into how designers, historically skilled in tackling challenges posed by new technologies, are now navigating the data-rich environment. Through a systemic approach, the research proposes a framework for integrating data into design processes, promoting sustainable fashion practices.
Designers are tasked with crafting systems that respond to both environmental needs and human values, bridging the gaps between diverse fields – from fashion to cutting-edge digital technologies. The book highlights the importance of data literacy for designers, emphasizing the potential of data to transform not just products, but entire systems. With insights from various case studies, it offers a comprehensive overview of strategies that integrate data with design, showcasing how this can lead to a more sustainable and adaptive future for the fashion industry.
Social media platforms have become ubiquitous sources of news and information for many users around the world. As the percentage of people who regularly get news from social media increases, new controversies emerge concerning the role of algorithms and their influence on the type of news users are exposed to on these platforms. Intrinsic to social media platforms are algorithms that facilitate and determine the flow of information across a labyrinth of networks. Algorithms have become a powerful force behind social media platforms, owing to their ability to sort, filter, prioritize and recommend the media people encounter on the platforms. The power of algorithms to influence information gateways raises several questions concerning their implications for society. This chapter examines the social power of algorithms embedded in social media platforms and their implications for crisis news coverage. The first part of the chapter provides a contextual analysis of modern crises and illuminates how new digital media technologies are integral and integrated aspects of crises. The second part examines the power dimensions of the algorithms, how they work and the role they play in the production, distribution and consumption of crisis news. The third part discusses the implications of this power for the coverage of crises. Using examples from selected crises, the chapter argues that platform algorithms are invaluable in crisis news coverage, but at the same time challenge and distort crisis communication through the amplification (and marginalization) of certain content. Algorithmized information gateways provide fertile grounds for misinformation. The last part explores potential solutions that might mitigate the challenges of algorithm-driven crisis information.
Current commentaries on digital change have emphasised the reality of our increasing exposure to the power of algorithms. My intervention examines this assertion with a media-historical approach that traces arguments raised in a legal case back to the point of software inception. I show that the power of algorithms is based on the eminent cultural techniques of reading and writing. As an antidote to this power, I propose the concept of source code critique, which draws upon historiography and so-called ‘literate programming’, and which could help to introduce transparency into an algorithm's opaque agency.
As the field of migration studies evolves in the digital age, big data analytics emerge as a potential game-changer, promising unprecedented granularity, timeliness, and dynamism in understanding migration patterns. However, the epistemic value added by this data explosion remains an open question. This paper critically appraises the claim, investigating the extent to which big data augments, rather than merely replicates, traditional data insights in migration studies. Through a rigorous literature review of empirical research, complemented by a conceptual analysis, we aim to map out the methodological shifts and intellectual advancements brought forth by big data. The potential scientific impact of this study extends into the heart of the discipline, providing critical illumination on the actual knowledge contribution of big data to migration studies. This, in turn, delivers a clarified roadmap for navigating the intersections of data science, migration research, and policymaking.
Each major advance in ICT (Information and Communications Technology) has brought about great changes in the productivity and production relations of human society and changes in thinking and research methods in the study of human economic geography.
The article begins with an introduction to artificial intelligence (AI), explaining its rapid development and its significance for science. The text then examines how AI influences cultural development and addresses Georg Simmel's thesis of the 'tragedy of culture'. Simmel describes the growing disproportion between objective culture (e.g., art, science) and subjective culture (personal appropriation), which leads to cultural alienation. The author argues that AI systems such as ChatGPT can potentially overcome this imbalance by providing personalized and accessible knowledge. This enables a more effective appropriation and use of cultural goods and could thus contribute to resolving the tragedy of culture.
The volume explores the contemporary intersection between data and education, a terrain in which big data not only influence but actively transform the ways we teach and learn. The work opens with an analysis of the role of big data in the information society and of the influence they exert on our individual and social lives, raising cognitive, ethical, and social questions: how is our way of conceiving data changing? What are the ethical implications of collecting and using data in education? What impact can the use of big data in education have on equitable access to educational opportunities? These questions call for new forms of awareness that must be promoted among the citizens of the new millennium to foster the development of data literacy. The book offers an overview of the meanings of the concept of data literacy, dwelling on its theoretical foundations and providing an overall picture of what it means to be data literate in the 21st century...