Article

Big Data: The Next Frontier for Innovation, Competition, and Productivity

... The factories of the future will be run by smarter software that takes inputs from product development and historical production data (e.g., order data, machine performance) and applies advanced computational methods to create a digital model of the entire manufacturing process, from production layout to step sequencing for a specific product (Manyika et al. 2011; Economist 2013b). A clear trend in manufacturing digitization is additive manufacturing (AM), or industrial 3D printing technology. ...
... Continued expansion of digital economic activities and the proliferation of networking through the mobile Internet, social Web sites, and cloud computing are leading to a digital, data-rich environment with a stronger emphasis on services (as opposed to products). Specifically, Big Data and increased data access, facilitated by the cloud (Manyika et al. 2011), represent a potential new source of supply chain competitive advantage. ...
... Another part of a new foundation for competitiveness, Big Data (large pools of data that can be captured, communicated, aggregated, stored, and analyzed) are becoming an essential basis for competition, alongside physical assets and human capital (Chui et al. 2013; Manyika et al. 2011; McKinsey 2011). A range of investments in analytic capabilities and talent will be required to leverage Big Data for competitive advantage (Manyika et al. 2011). ...
Technical Report
Full-text available
Globalization and technology have been major impetuses for change in the development of business practices and the economic landscape. Intended as a resource for supply chain professionals, this paper provides an overview of technology trends that we believe are driving fundamental changes in global supply chains. In identifying these trends, we articulate the “new normal” characteristics of supply chain MC3 (Mobility, Complexity, Competition, and Collaboration) that will dictate areas of strategic focus in the foreseeable future.
... For data security, case studies emphasize the crucial need for safe access restrictions, strong user authentication, and frequent security assessments [16]. Successful open data initiatives with robust cybersecurity measures safeguard proprietary data and consumers in the commercial sector, emphasizing continuous threat assessment and proactive incident response [21]. Collaboration and data sharing encourage innovation, emphasizing the catalytic role of cybersecurity in data governance [4,16]. ...
... The integration of AI and machine learning into open data platforms complicates data privacy issues, requiring adaptive methods [19][20]. For future endeavors, user-centered design, data governance, education [22], and ongoing monitoring are advocated, highlighting the critical importance of data privacy and ethics in open data platforms [21][22][23][24]. ...
Article
Full-text available
Cybersecurity is critical for protecting open data. Transparency and innovation are facilitated by open data platforms; however, concerns about cybersecurity and privacy persist. This study examines the role of cybersecurity in public institutions in the Republic of Kosovo to determine methods of safeguarding data integrity. The main aim of this study was to examine the role of cybersecurity in securing open data in public organizations in the Republic of Kosovo. The study aimed to identify optimal cybersecurity practices in the context of open data and provide a comprehensive overview of the implementation of cybersecurity measures. This study employed a structured and methodical approach to assess cybersecurity and the effectiveness of open data platforms in public organizations in the Republic of Kosovo. Results: The study provides an overview of the status of open data platforms in the Republic of Kosovo and highlights the importance of cybersecurity, data privacy, and data integrity. Despite the stated concerns, such as the need to enhance security measures and increase user knowledge, it is evident that public institutions have made significant progress in securing and enhancing their open data platforms. It is suggested that institutions in the Republic of Kosovo continue to invest in cybersecurity, promote privacy protection measures, and focus on enhancing the quality of open data in order to develop this sector. Furthermore, collaboration and coordination across institutions and government agencies are required to enhance the efficiency and effectiveness of these platforms.
... HBase is a column-oriented NoSQL database in Hadoop (Manyika, 2011) that allows users to store massive amounts of data in both rows and columns. HBase allows (Santhanam, 2003). ...
... Due to its schema-based design, the Avro system decouples the reading and writing processes from the underlying programming language. If your data already has a schema, Avro can serialize it for you (Manyika, 2011). It's a system for transferring data between ...
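To make the schema idea above concrete, here is a minimal sketch of Avro-style serialization using the Python fastavro library; the transaction schema and sample records are illustrative assumptions, not taken from the cited paper.

```python
# Minimal sketch of Avro's schema-based serialization via fastavro.
# The schema and record fields below are illustrative only.
from io import BytesIO
from fastavro import parse_schema, writer, reader

# The schema is declared as plain data, independent of any host language.
schema = parse_schema({
    "type": "record",
    "name": "Transaction",
    "fields": [
        {"name": "account_id", "type": "string"},
        {"name": "amount", "type": "double"},
        {"name": "timestamp", "type": "long"},
    ],
})

records = [
    {"account_id": "A-1001", "amount": 250.75, "timestamp": 1700000000},
    {"account_id": "A-1002", "amount": 99.90, "timestamp": 1700000060},
]

buf = BytesIO()
writer(buf, schema, records)   # serialize the records against the schema

buf.seek(0)
for rec in reader(buf):        # any consumer can read them back,
    print(rec)                 # regardless of the producing language
```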
Article
Full-text available
The objective of this research paper is to analyze how Indian commercial banks handle big data, which refers to an extremely large data set that requires analysis, management, and validation through traditional data management tools. Purpose: Banks are one of the financial services industries that deal with a vast amount of transaction data, which must be managed, scrutinized, and utilized for the benefit of both the bank and its customers. This study will examine the factors that have a greater impact on banks when handling big data and how analytics can create value for the business. Research methods: Secondary data was collected from various sources such as articles, journals, and websites. The study focuses on big data management, risk management, fraud detection, customer segmentation, and the business value of banking industries. A conceptual framework has been developed to highlight the factors that have a higher impact on big data management in the banking industry. Findings: The findings indicate that big data analytics has a significant impact on the business value of banks, and the factors influencing business value have been identified. Conclusion: By utilizing big data and embracing emerging technologies, companies can enhance the worth of their organization.
... The use of big data can help companies access and analyze large amounts of data from various sources to create more accurate and comprehensive reports. These reports can provide valuable insights into customer behaviour, market trends, and other key metrics that can help companies make more informed decisions (Manyika et al. 2011). By using analytics tools for big data, companies can also streamline their reporting processes by reducing the time and resources needed to produce reports. ...
... These data can be used to identify new market opportunities, optimize pricing strategies, and improve customer engagement. Overall, the role of big data in reporting and decision-making is becoming increasingly important as companies seek to gain a competitive edge in today's rapidly changing business environment (Manyika et al. 2011). ...
Article
Full-text available
This project developed a business intelligence process for a Lebanese company to enhance their financial management. The process involved creating a data model based on the company's needs and data availability, integrating their data into a physical database using Python and PostgreSQL, and generating dashboards with Power BI to visualize and analyse their revenue numbers. The project outcomes will help the company gain insights into their business performance, optimize their revenue and profit, and identify data entry issues that may affect their results. The project also provides a foundation for further data integration and analysis.
... This paper focuses on machine learning (ML) as a fundamental component of data analytics. The McKinsey Global Institute has stated that ML will be one of the main drivers of the Big Data revolution [8]. The reason for this is its ability to learn from data and provide data-driven insights, decisions, and predictions [9]. ...
... Batch-oriented processing, for example MapReduce-based frameworks such as Hadoop, for recurring tasks such as large-scale data mining or aggregation [8]. ...
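As a rough illustration of the batch-oriented MapReduce pattern mentioned above, the following pure-Python sketch mimics the map, shuffle, and reduce phases on a toy word-count job; it is a conceptual stand-in, not Hadoop itself.

```python
# Minimal sketch of the MapReduce pattern behind batch frameworks such as
# Hadoop: a map phase emits (key, value) pairs, a shuffle groups them by key,
# and a reduce phase aggregates each group.
from collections import defaultdict

def map_phase(document):
    # Emit (word, 1) for every word in one input record.
    for word in document.lower().split():
        yield word, 1

def shuffle(pairs):
    # Group all emitted values by key, as the framework's shuffle step would.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    return key, sum(values)

documents = ["big data brings big opportunities", "data drives decisions"]
pairs = (pair for doc in documents for pair in map_phase(doc))
counts = dict(reduce_phase(k, v) for k, v in shuffle(pairs).items())
print(counts)   # e.g. {'big': 2, 'data': 2, ...}
```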
Article
Full-text available
Big Data promise new levels of scientific discovery and economic value. Big Data bring new opportunities to modern society and challenges to data scientists. The Big Data revolution promises to transform how we live, work, and think by enabling process optimization, empowering insight discovery, and improving decision-making. The realization of this grand potential relies on the ability to extract value from such massive data through data analytics; machine learning is at its core because of its ability to learn from data and provide data-driven insights, decisions, and predictions. However, traditional machine learning approaches were developed in a different era and are therefore based upon multiple assumptions, such as the dataset fitting entirely into memory, which unfortunately no longer holds true in this new context. Big data refers to the large volume of complex, (semi-)structured, and unstructured data that are generated at large scale and arrive (in a system) at high speed, and that must be analyzed for better decision-making and strategic organization and business moves.
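One common workaround for the in-memory assumption noted in this abstract is incremental (out-of-core) learning, where the model is updated from one mini-batch at a time. The sketch below uses scikit-learn's SGDClassifier with partial_fit on synthetic batches; the data and batch sizes are purely illustrative.

```python
# Sketch of streaming mini-batches into an incrementally trained model,
# so the full dataset never has to fit in memory.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
classes = np.array([0, 1])
model = SGDClassifier()

def batches(n_batches=20, batch_size=1_000, n_features=10):
    # Stand-in for reading successive chunks from disk or a data stream.
    for _ in range(n_batches):
        X = rng.normal(size=(batch_size, n_features))
        y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
        yield X, y

for X_batch, y_batch in batches():
    # Each call updates the model with one chunk only.
    model.partial_fit(X_batch, y_batch, classes=classes)

X_test = rng.normal(size=(5, 10))
print(model.predict(X_test))
```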
... From satellites in space to wearable computing devices and from credit card transactions to electronic health-care records, the deluge of data [1], [2], [3] has pervaded every walk of life. Our ability to collect, store, and access large volumes of information is accelerating at unprecedented rates with better sensor technologies, more powerful computing platforms, and greater on-line connectivity. ...
Preprint
Data science models, although successful in a number of commercial domains, have had limited applicability in scientific problems involving complex physical phenomena. Theory-guided data science (TGDS) is an emerging paradigm that aims to leverage the wealth of scientific knowledge for improving the effectiveness of data science models in enabling scientific discovery. The overarching vision of TGDS is to introduce scientific consistency as an essential component for learning generalizable models. Further, by producing scientifically interpretable models, TGDS aims to advance our scientific understanding by discovering novel domain insights. Indeed, the paradigm of TGDS has started to gain prominence in a number of scientific disciplines such as turbulence modeling, material discovery, quantum chemistry, bio-medical science, bio-marker discovery, climate science, and hydrology. In this paper, we formally conceptualize the paradigm of TGDS and present a taxonomy of research themes in TGDS. We describe several approaches for integrating domain knowledge in different research themes using illustrative examples from different disciplines. We also highlight some of the promising avenues of novel research for realizing the full potential of theory-guided data science.
... Within all these messages lie pieces of information that scientists, doctors, or marketeers would like to extract and work with [1]. The amount of data has reached an enormous volume and continues to grow [2], and generating value from it is a key competitive advantage. Information extraction is the task of extracting desired information from textual data and transforming it into a tabular data structure. ...
Preprint
The amount of textual data has reached a new scale and continues to grow at an unprecedented rate. IBM's SystemT software is a powerful text analytics system, which offers a query-based interface to reveal the valuable information that lies within these mounds of data. However, traditional server architectures are not capable of analyzing the so-called "Big Data" in an efficient way, despite the high memory bandwidth that is available. We show that by using a streaming hardware accelerator implemented in reconfigurable logic, the throughput rates of the SystemT's information extraction queries can be improved by an order of magnitude. We present how such a system can be deployed by extending SystemT's existing compilation flow and by using a multi-threaded communication interface that can efficiently use the bandwidth of the accelerator.
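As a simplified illustration of information extraction in the sense described above (turning free text into a tabular structure), the snippet below uses a plain regular expression rather than SystemT/AQL; the pattern and sample sentences are invented for the example.

```python
# Tiny sketch of rule-based information extraction: pull structured fields
# out of free text and place them in a tabular (CSV) structure.
import csv
import io
import re

# Hypothetical pattern: "<First Last> paid $<amount>"
PATTERN = re.compile(
    r"(?P<name>[A-Z][a-z]+ [A-Z][a-z]+) paid \$(?P<amount>\d+(?:\.\d{2})?)"
)

texts = [
    "Invoice: Alice Smith paid $120.50 on March 3.",
    "Reminder sent. Bob Jones paid $75 after the call.",
]

rows = [m.groupdict() for text in texts for m in PATTERN.finditer(text)]

out = io.StringIO()
table = csv.DictWriter(out, fieldnames=["name", "amount"])
table.writeheader()
table.writerows(rows)
print(out.getvalue())
```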
... In the modern world we are inundated with massive amounts of data [3]. The data deluge is increasing exponentially due to the increased digitization of modern life and the commoditization of data collection with the advancement in digital technology. ...
Preprint
Disasters have long been a scourge for humanity. With the advances in technology (in terms of computing, communications, and the ability to process and analyze big data), our ability to respond to disasters is at an inflection point. There is great optimism that big data tools can be leveraged to process the large amounts of crisis-related data (in the form of user generated data in addition to the traditional humanitarian data) to provide an insight into the fast-changing situation and help drive an effective disaster response. This article introduces the history and the future of big crisis data analytics, along with a discussion on its promise, challenges, and pitfalls.
... CPS, described by Monostori et al. (2016), bridges physical and computational realms, enabling real-time monitoring and control of physical systems. Manyika et al. (2011) showed that big data extracts actionable insights from vast data pools generated by IoT and CPS systems, driving innovation and decision-making. According to Teizer et al. (2017), AI emulates human intelligence processes, offering predictive analytics and computer vision solutions to optimize construction processes. ...
... In terms of computational efficiency, while adaptive time-warping can be more resource-intensive than simpler statistical methods, optimized implementations have been shown to be feasible even on edge computing devices. This allows for distributed processing and reduced latency, critical for real-time industrial applications [11]. ...
Article
Full-text available
Adaptive time-warping emerges as a powerful technique for analyzing high-frequency industrial time-series data, addressing the challenges of real-time pattern recognition in smart manufacturing environments. This paper explores the development and implementation of algorithms that dynamically adjust to varying data patterns and speeds, enabling efficient and accurate pattern matching in real-time. The integration of machine learning enhances the adaptability and accuracy of these algorithms, leading to significant improvements in product quality, operational efficiency, and downtime reduction in smart factories. We examine the principles of adaptive time-warping, its advantages over traditional time-series analysis methods, and its applications in real-time quality control, predictive maintenance, process optimization, anomaly detection, and energy consumption analysis. A comparative analysis demonstrates the superior performance of adaptive time-warping in terms of speed, accuracy, and scalability. The paper also addresses implementation challenges, particularly in integrating these algorithms with existing Manufacturing Execution Systems (MES) and ensuring scalability in large-scale manufacturing environments. By providing a comprehensive exploration of adaptive time-warping techniques, this research contributes to the advancement of data-driven decision-making in smart manufacturing, paving the way for more responsive and intelligent industrial processes.
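For readers unfamiliar with the underlying technique, the following minimal sketch implements classical dynamic time warping (DTW), the alignment core that adaptive time-warping variants build on; the two sensor traces are made-up illustrative data, not from the paper's experiments.

```python
# Classical DTW: align two sequences that vary in speed and return a
# distance that tolerates local stretching and compression.

def dtw_distance(a, b):
    n, m = len(a), len(b)
    inf = float("inf")
    # cost[i][j] = best alignment cost of a[:i] with b[:j]
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

reference = [0.0, 0.2, 0.9, 1.0, 0.4, 0.1]            # known-good cycle
observed = [0.0, 0.1, 0.3, 0.9, 1.0, 0.9, 0.3, 0.1]   # slower machine cycle
print(dtw_distance(reference, observed))
```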
... [21] Big data: The next frontier for innovation, competition, and productivity. The figure displays a density visualization created using VOSviewer, highlighting the keyword "effectiveness" as the central theme in research related to government budget efficiency and effectiveness. The bright yellow area surrounding "effectiveness" indicates a high concentration of research activity and a dense cluster of related studies, reflecting its critical importance in the literature. ...
Article
Full-text available
This study conducts a comprehensive bibliometric analysis of the literature on government budget efficiency and effectiveness, spanning publications from 1956 to 2024. Utilizing data exclusively from Google Scholar and employing VOSviewer for visualization, the analysis identifies key trends, influential themes, and research gaps within the field. The centrality of "effectiveness" in the literature underscores its critical role in public administration and fiscal policy, while emerging topics such as "policy," "cost effectiveness," and "e-government" highlight the growing importance of integrating economic efficiency and digital innovation into government budgeting practices. Persistent challenges, including issues of "transparency" and "corruption," indicate areas that require ongoing attention to enhance the overall effectiveness of government budgetary processes. The study's findings offer valuable insights for policymakers and researchers, suggesting a balanced approach that incorporates strategic policy-making, technological advancements, and robust accountability measures to optimize the use of public resources.
... The internet is engendering a new round of change in the paradigm of economics. In other words, new technological innovation around the internet is giving rise to new factors and modes of production (Manyika et al., 2011). The internet has accelerated the efficiency of innovation in China and realized the optimal allocation of strategic resources (H. ...
Article
Full-text available
The long-standing sloppy economic growth model that has led to resource misallocation between regions and industries has caused China’s economic development to enter a period of decelerated growth even though it has soared to the top class of the global economy. The issue of resource misallocation in China has been examined in previous studies, but less focus has been placed on whether resource misallocation has spatial spillover effect and the internet’s involvement in it. To fill in the gaps in the literature, this study uses data from China’s inter-provincial panel from 2010 to 2019 to measure four aspects of internet development: software and information technology services, key internet indicators, service capacity of the telecommunication industry, and the communication capacity of the telecommunication industry. This is done by using a multi-factor estimation method to approximate the actual real-world situation. With an appropriate selection of spatial matrix and threshold variables, a non-dynamic panel threshold model is built to empirically investigate the impact of internet development on resource mismatch. There are several significant findings in this study. First, internet development in China differs and correlates spatially. Second, as internet development advances, there is a significant negative impact on resource misallocation within the region itself and spillover effect to other regions. Third, there is regional variation in how resource misallocation is affected by internet development. Fourth, the connection between internet development and resource misallocation is threshold-dependent on the level of governmental support. To overcome the resource bottleneck and achieve coordinated development, we suggest that local governments should step up their support for the development of internet infrastructure, and underdeveloped regions should fully utilize the spillover effect of internet resources from developed regions. Jel Classification: R11; R15; C31
... The BigBench raw data volumes can be dynamically changed based on a scale factor. The simulated workload is based on a set of 30 queries covering the different aspects of Big Data analytics proposed by McKinsey (Manyika et al., 2011). The benchmark consists of four key steps: (i) System setup; (ii) Data generation; (iii) Data load; and (iv) Execute application workload. ...
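A rough sketch of how such a four-phase benchmark run could be orchestrated and timed is shown below; the phase functions are empty placeholders standing in for the actual BigBench tooling, which is not reproduced here.

```python
# Hedged sketch of a driver for the four benchmark phases listed above
# (system setup, data generation, data load, workload execution).
# The phase bodies are placeholders, not BigBench's real tooling.
import time

def system_setup():   pass   # e.g., start cluster services
def generate_data():  pass   # e.g., run the data generator at a scale factor
def load_data():      pass   # e.g., load generated files into tables
def run_workload():   pass   # e.g., execute the analytic query set

def run_benchmark():
    timings = {}
    for name, phase in [("setup", system_setup),
                        ("data generation", generate_data),
                        ("data load", load_data),
                        ("workload", run_workload)]:
        start = time.perf_counter()
        phase()
        timings[name] = time.perf_counter() - start
    return timings

print(run_benchmark())
```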
... By making more types of information available and useful at a higher frequency, BD may unlock significant value by increasing product and service development, boosting performance, improving decision-making (DM), and leading to better and more informed management decisions (Manyika et al., 2011). To collect, analyze, link, and compare such datasets, however, appropriate technology, processing capacity, and algorithmic precision are required. ...
Preprint
Full-text available
This research examines the potential use of modern technologies such as big data, data science, artificial intelligence, and machine learning, which have penetrated several aspects of our lives, to address food concerns and problems, forming the nowadays called food analytics. We discuss the potential use of such technologies in relation to food problems and shortages. We analyze the opportunities and challenges associated with the use of such technological advancements and the potential benefits for the global food system. We also provide a research agenda with future directions for the application of big data, data science, artificial intelligence, and machine learning to the food ecosystem.
... Emails, images, videos, monitoring device data, PDF documents, and audio recordings are just a few examples of the heterogeneous data sources involved. Managing, storing, and analyzing this variety of data poses significant challenges (Manyika et al., 2011). ...
Conference Paper
Full-text available
The study examines the relevance of Big Data in the context of library management within the Fourth Industrial Revolution (4IR) landscape. The study concentrates on exploring the awareness, perspectives, and expected challenges faced by librarians as they strive to incorporate Big Data into their library operations. To achieve these objectives, a qualitative methodology was employed, involving the administration of open-ended questionnaires to librarians from six selected federal universities located in Southwest Nigeria. The findings of this research highlight that a significant proportion of librarians are well-acquainted with the relevance of Big Data and its potential to positively revolutionize library services. Librarians generally express favourable opinions concerning the relevance of Big Data, acknowledging its capacity to enhance decision-making, optimize services, and deliver personalized user experiences. Concurrently, they are cognizant of the challenges surrounding data privacy, data quality assurance, and ethical considerations. Among the challenges foreseen by librarians are concerns regarding data privacy, the assurance of data accuracy, the demand for adequately trained personnel, the scalability of systems, data security, and the effective integration of data from diverse sources. The study recommends that Policymakers should prioritize ethical data handling and the safeguarding of user privacy while allocating resources for comprehensive librarian training in data management and analytics tools. These measures will empower libraries to harness the potential of Big Data effectively, ultimately enhancing library services and addressing the unique challenges posed by the 4IR era.
... However, as big data technologies continually improve, there remains a shortage of skilled professionals who can take full advantage of these technologies. In an article titled "Improving Decision Making in the World of Big Data", Forbes magazine reports that for every one manager with big data skills, there will be ten positions left vacant in 2013 [8]. ...
Conference Paper
Full-text available
In this article we will introduce what Big Data is and the applications it can provide to your business. We will provide some technical details on how to adopt it into your business [1]. This study aims at mapping the Big Data landscape in India and scrutinizing the challenges and opportunities in it. A blueprint of the size and growth rate of this sector and of its infrastructural challenges will enhance knowledge of the Indian Big Data landscape. As India's Big Data landscape is still growing and facing infrastructural and policy-level challenges, issue analysis methodology is ideal for analyzing the state of the art, challenges, and opportunities [2].
... In order to invent new and robust technologies, the demand for the capacity to analyze and comprehend data is increasing. A McKinsey industry report reveals that there is a 50% increase in data generation every year, a 40-fold increase since 2001 [43]. Pictures are captured and analyzed using DL and ML to detect various amounts and types of challenges (Table 2), such as contents of aflatoxin in maize [44], salinity stress on chickpeas [3], cucumber's powdery mildews [45], and rot on wheat leaves [46]. ...
Article
Full-text available
Biotic and abiotic stresses significantly affect plant fitness, resulting in a serious loss in food production. Biotic and abiotic stresses predominantly affect metabolite biosynthesis, gene and protein expression, and genome variations. However, light doses of stress result in the production of positive attributes in crops, like tolerance to stress and biosynthesis of metabolites, called hormesis. Advancement in artificial intelligence (AI) has enabled the development of high-throughput gadgets such as high-resolution imagery sensors and robotic aerial vehicles, i.e., satellites and unmanned aerial vehicles (UAV), to overcome biotic and abiotic stresses. These high-throughput (HTP) devices produce accurate but large amounts of data. Significant datasets such as the transportable array for remotely sensed agriculture and phenotyping reference platform (TERRA-REF) have been developed to forecast abiotic stresses and enable early detection of biotic stresses. For accurately measuring plant stress, tools like Deep Learning (DL) and Machine Learning (ML) have enabled early detection of desirable traits in large populations of breeding material and help mitigate plant stresses. In this review, advanced applications of ML and DL in plant biotic and abiotic stress management have been summarized.
... Likewise, it is important to consider various notions and concepts related to the environment of large volumes of data-Big Data-generated in EHR and in the health system (Kulynych and Greely 2017). Big Data is an expression used generically to indicate the grouping of data, information, databases, open internet networks and other accessible data that initially aimed to improve strategic planning, marketing and commercial business (Manyika et al. 2011). This context, marked by fluidity, uncertainty and fugacity of data and information, required multiple sources to seek to understand complex and broad phenomena that AI systems can help to interpret. ...
Chapter
Full-text available
The use of systems that include Artificial Intelligence (AI) imposes an assessment of the risks and opportunities associated with their incorporation in the health area. Different types of AI present multiple ethical, legal and social challenges. AI systems have been incorporated with new imaging and signal-processing technologies. AI systems in the area of communication have made it possible to carry out previously non-existent interactions and facilitate access to data and information. The greatest concern involves the areas of planning, knowledge and reasoning, as AI systems are directly associated with the decision-making process. So, the central objective of this chapter is to reflect and suggest recommendations, with the foundation of the Complex Bioethics Model, about the decision-making process in health with AI support, considering risks and opportunities. The chapter is organized in two parts: (1) The decision-making processes in health and AI; (1.1) The health area, the use of AI and decision-making processes: opportunities and risks in treating electronic health records (EHR); and (2) Complex Bioethics Model (CBM) and AI.
... This is especially true when the data are received continuously and promptly. Thus, the need for a system capable of processing large amounts of real-time geospatial data has become very important. The McKinsey Global Institute reports that better use of geolocation data could yield a $100 billion gain for companies and a $700 billion gain for customers [23]; this gain comes mainly from time and fuel savings due to location-based services [24]. New technologies for storing and querying spatial Big Data have recently emerged: NoSQL data models such as MongoDB and Neo4j integrate advanced spatial processing that seems well suited to the IoT Geo-Big Data context. ...
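To illustrate the kind of NoSQL spatial support mentioned above, the sketch below uses MongoDB's 2dsphere index via pymongo; it assumes a locally running MongoDB instance, and the database name, collection, and sensor documents are illustrative only.

```python
# Sketch of spatial indexing and a proximity query with MongoDB/pymongo.
from pymongo import MongoClient, GEOSPHERE

client = MongoClient("mongodb://localhost:27017")  # assumes a local server
sensors = client["iot_demo"]["sensors"]            # hypothetical collection

sensors.create_index([("location", GEOSPHERE)])    # 2dsphere spatial index
sensors.insert_many([
    {"sensor_id": "s1",
     "location": {"type": "Point", "coordinates": [-6.84, 34.02]}},
    {"sensor_id": "s2",
     "location": {"type": "Point", "coordinates": [-6.80, 34.05]}},
])

# Find sensors within 5 km of a query point (longitude, latitude).
nearby = sensors.find({
    "location": {
        "$near": {
            "$geometry": {"type": "Point", "coordinates": [-6.83, 34.03]},
            "$maxDistance": 5_000,   # metres
        }
    }
})
for doc in nearby:
    print(doc["sensor_id"])
```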
Conference Paper
Abstract—Internet of Things (IoT) is one of the most important technologies of the century. This ubiquitous paradigm generates valuable data in which the geospatial aspect is crucial to unlocking its full potential. Every Geographic Information System (GIS) provides three main functionalities: data acquisition, data analysis, and data mapping, which lack the ability to deal with real-time heterogeneous data and must be adjusted to fit new needs and challenges. Indeed, many research works make use of GIS in IoT environments. Nevertheless, it is not obvious how to deploy these systems efficiently given the current challenges, which offers good opportunities to explore the promising integration of GIS and IoT. In this paper, we present a state of the art of this topic and discuss some issues that must be considered. We focus more on geographical data acquisition since it is the first layer in the GIS-IoT architecture and any error in this phase may be irreversible. Therefore, we propose to decompose it into three sub-functionalities: data collection, data transmission, and data storage. Index Terms—GIS, IoT, Big spatial data, GIS functionalities, Data acquisition
... The vast field of BDA applications, as well as the tremendous potential they hold to help organizations gain and sustain a competitive edge in the market, are well acknowledged in the industry in the last decade [12,13]. The successful integration of BDA and business processes provides organizations the opportunity to extract valuable insights that would otherwise remain hidden and helps the top-performing ones redefine their business and dominate in their field [14]. ...
Preprint
Full-text available
Big data analytics (BDA) has been introduced in the past few years in most industries as a factor capable of revolutionizing their operations by offering significant efficiency opportunities and benefits. To compete in this digital age, businesses must adopt a client centric service model, founded on data delivering continuous value, achieving optimal performance whilst also upgrading their own decision making and reporting processes. This study focuses on value outcomes (i.e. the end results of the implementation process) associated with the BDA adoption in the Facilities Management (FM) sector in United Kingdom (UK). Drawing upon qualitative case-study findings and an industry-wide questionnaire survey, a novel fifteen-variable model for BDA outcomes was developed and validated. This paper further uses the Confirmatory Factor Analysis (CFA) to establish the relationships between the variables and reveal the model’s principal dimensions. The identified themes focus on improved client experiences and efficient resource management and planning. In the current dynamic market environment, the findings of this study will help FM organisations to formulate effective data-driven strategies and client facing business models.
... One of the first and most prominent articles published on the topic of big data was presented by Chen et al. (2012), which addresses the impact of these technologies on organizations and describes how information analysis has evolved at an accelerated pace, raising the possibility of a short-term shortage of professionals trained in this area of knowledge. The data generated by big data have spread to diverse industries and organizational functions, becoming a source of value, accelerating growth and promoting competitiveness (Elgendy et al., 2022), while also supporting decision-making (Manyika et al., 2015). Despite this, the full potential of these technologies has not yet been exploited, and the adoption curve of big data for supply chain management in organizations remains relatively low (Akter et al., 2016). ...
Article
Full-text available
Contemporary markets require the management of large amounts of data, so big data has become a technology for responding to this need. Consequently, competitive companies employ it in various processes, such as supply chain management. In this context, the objective of this article was to analyze the existing research on the implementation of big data in the supply chain. To this end, a systematic literature review was carried out using the PRISMA methodology, selecting documents from the Scopus and Web of Science databases. Bibliometric tools were applied and the documents were classified into three groups (roots, trunk and leaves) following the tree-of-knowledge metaphor, and the research clusters were identified. The results revealed that big data in the supply chain improves decision-making, competitiveness and logistics efficiency. It is concluded that this is a topic of growing research interest, led by China, that requires strategic organizational changes. It brings benefits in efficiency and decision-making, but faces challenges in transition and resistance to change. The clusters address performance, adaptability, management capability and connectivity. Future lines of study are proposed relating to global problems, automation and IoT.
... The impact of Industry 4.0 may not be entirely positive, and it is therefore necessary to comprehensively assess the factors that facilitate or hinder its success [67]. In general terms, the technological basis of Industry 4.0 can be summarized in the following elements: cloud computing [68], big data [9,69-72], cybersecurity [73], IoT, simulation [74], 3D printing [75], and robotics [76]. And alongside the enablers of 4.0. ...
Preprint
Full-text available
Technological development has profoundly marked the evolution of the economy. The constant changes brought about by scientific and technological advances have been decisive in the transition from an analogue to a digital world. In this context, the impact of the fourth industrial revolution (or Industry 4.0) manifests itself in many ways. Environmental impact is one of these. The energy sector has been evolving and changing just like the economy and society. Therefore, a study of this sector, and of the other related elements, is of interest to better understand the 4.0 concept. The promotion of sustainability at both political and social levels has led to changes in different areas, such as the productive vision, the use of green energies and the implementation of green taxes. Energy, as a key factor in Industry 4.0, must be studied both quantitatively and qualitatively in order to understand the lights and shadows that the concept currently presents. Therefore, this work aims to bring the reality of the energy sector closer, in both its positive and negative aspects, considering the main factors of incidence, in order to show the strengths and weaknesses that can be deduced.
... In the very near future, extensive, thorough, multidirectional, and multifield geotechnical monitoring will be a reality. Therefore, specialists in geoscience and geoengineering must give big data research more attention, foster an atmosphere where data can be used to advance our areas, and encourage cooperation with data analysts from other domains (Manyika et al., 2011). ...
Chapter
Full-text available
With rapid development in new technologies, our intelligence and expertise in artificial intelligence (AI) have increased significantly. Intelligent machines are preferred, which motivates us to incorporate highly sophisticated technologies. Geographic analysis for environmental applications has advanced recently, owing to the vast explosion of geospatial data, the accessibility of powerful computing resources, and advances in AI. Geospatial analytics at a high-resolution scale is now possible because AI is reshaping our research environment. High-resolution satellite imagery used in geospatial analysis always involves big data; thus, alternative methods other than traditional data-processing applications are needed to deal with these large datasets. AI has become such an alternative method for handling big data in recent decades. Geospatial information from high-resolution remote sensing and other environmental sensors generates enormous amounts of data. AI makes the process more effective and makes it possible to derive deep understanding and information from the data.
... From a public organization perspective, open government can be seen as an evolution of eGovernment, in which the governance paradigm is achieved, and the ICT role and its degree of adoption is a key driver that has important implications (Jimenez and Gasco 2008, cited in Jiménez et al. 2015). Big data are formed through the recording and storage of traces of various acts performed by several individuals over time, such as financial transactions, social media traffic, health records, and GPS coordinates, often by means of mobile tools (Manyika et al. 2011, cited in Asquer 2015). Mellouli observes that increased citizen engagement introduces this new form of government to engage citizens meaningfully and intelligently. ...
... Statistics show that the data volume will increase by 40% per year, and will grow by 44 times over the period between 2009 and 2020 [3]. The massive information growth produced along with the ongoing academic activities poses challenges in the form of system reliability. ...
Article
Full-text available
The rapid development of internet technology has increased the need for data storage and processing technology. One application is managing academic data records at educational institutions. Along with the massive growth of information, a decline in traditional database performance is inevitable. Hence, many companies choose to migrate to NoSQL, a technology that is able to overcome the traditional database's shortcomings. However, the existing SQL-to-NoSQL migration tools have not been able to represent SQL data relations in NoSQL without limiting query performance. In this paper, a relational database transformation system transforming MySQL into the non-relational database MongoDB was developed, using the Multiple Nested Schema method for academic databases. The development began with a transformation scheme design. The transformation scheme was then implemented in the migration process using PDI/Kettle. The testing was carried out on three aspects, namely query response time, data integrity, and storage requirements. The test results showed that the developed system successfully represented the relationship of SQL data in NoSQL, provided complex query performance 13.32 times faster in the migrated database, basic query performance involving SQL transaction tables 28.6 times faster in the migrated database, and basic query performance not involving SQL transaction tables 3.91 times faster in the migration source. This shows that the theory of the Multiple Nested Schema method, which aims to overcome the poor performance of queries involving many JOIN operations, is proved. In addition, the system is also proven to be able to maintain data integrity in all tested queries. The space performance test results indicated that the migrated database transformed using the Multiple Nested Schema method required 10.53 times more storage than the migration source database. This is due to the large amount of data redundancy resulting from the transformation process. However, at present, storage performance is not a top priority in data processing technology, so large storage requirements are a consequence of obtaining efficient query performance, which is still considered the first priority in data processing technology.
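The core idea behind such a nested-schema migration, embedding child rows inside their parent document so that queries avoid JOINs, can be sketched as follows; the student/enrollment tables are a hypothetical stand-in for the paper's actual academic schema.

```python
# Sketch: transform related relational rows into nested documents.
from collections import defaultdict

students = [
    {"student_id": 1, "name": "Ana"},
    {"student_id": 2, "name": "Budi"},
]
enrollments = [  # child table referencing students via student_id
    {"student_id": 1, "course": "Databases", "grade": "A"},
    {"student_id": 1, "course": "Networks", "grade": "B"},
    {"student_id": 2, "course": "Databases", "grade": "B"},
]

# Group child rows by their foreign key.
by_student = defaultdict(list)
for row in enrollments:
    by_student[row["student_id"]].append(
        {"course": row["course"], "grade": row["grade"]})

# Embed each student's enrollments inside the parent document.
documents = [
    {**s, "enrollments": by_student[s["student_id"]]} for s in students
]
print(documents)
# The resulting documents could then be written to MongoDB, for example
# with pymongo's insert_many, as one step of an SQL-to-NoSQL migration.
```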
... To store this huge, ever-increasing volume of data, ranging from tens of terabytes to many petabytes in a single dataset, and to make good use of it (Russom, 2011). Big data refers to large datasets that cannot be captured, stored, managed and analysed by typical software tools (Manyika et al., 2011). These sets of data are not only huge in size but also heterogeneous and complex, including structured, semi-structured and unstructured data, as well as operating, transaction, sales, marketing and other data. In addition, big data includes data that come in many forms, such as text, images and voice; these unstructured data grow faster than structured data and account for 90% of the data volume (Gantz and Reinsel, 2011). ...
... As per the records of IBM [3], McKinsey [6,7,9], and Gartner [4,5], very large amounts of various kinds of data are generated every day; these data amount to 2.5 quintillion bytes per day (1 exabyte (EB) = 10^18 bytes). ...
Article
Full-text available
This is an informative article on data analytics. It covers various aspects of data analytics, such as the definition of data analytics, the terminology used in data analytics, types of data, and various methods of data collection. The paper includes the concept of the correlation coefficient to evaluate student performance in internal as well as external examinations. I apply the linear correlation coefficient technique to find the highly correlated feature, namely student attendance, from the student data set and its impact on student performance in internal as well as external assessment. The student performance variable is considered dependent and attendance is considered the independent variable. Using the correlation coefficient methodology, I estimate the value of r (correlation coefficient) in its range from -1 to 1 to indicate strong and weak relationships.
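A minimal sketch of the Pearson correlation coefficient r described above, applied to made-up attendance and score data, is shown below; the values are illustrative only.

```python
# Pearson correlation coefficient: r close to +1 indicates a strong
# positive relationship, r close to -1 a strong negative one.
import math

def pearson_r(x, y):
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    sx = math.sqrt(sum((xi - mean_x) ** 2 for xi in x))
    sy = math.sqrt(sum((yi - mean_y) ** 2 for yi in y))
    return cov / (sx * sy)

attendance = [60, 70, 75, 80, 90, 95]   # percent of classes attended
scores = [52, 61, 64, 70, 81, 88]       # exam marks

print(f"r = {pearson_r(attendance, scores):.3f}")
```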
Article
Full-text available
This Article explores the implications of Artificial Intelligence (AI)-driven innovation across sectors, highlighting the resulting legal uncertainties. Despite the transformative influence of AI in healthcare, retail, finance and more, regulatory responses to these developments are often contradictory, contributing to the opacity of the legal underpinnings of AI business. Our Article notes the common trend of commercializing AI tools amidst legal uncertainty, using innovative contractual solutions to secure claims. Over time, these innovations trigger overlooked legal conflicts, sometimes leading to outright bans on AI products due to negative impacts on some fundamental rights and democratic Governance. The core argument of our Article is that an over-reliance on co-regulatory strategies, such as those proposed by the European AI Act, exacerbates legal instability in emerging technological markets. This panorama creates an ’extended legal present’ when alternative legal expectations coexist, thus causing economic and political uncertainty that may elicit legal instability in the future. The concept of ’competing legal futures’ is introduced to illustrate how economic actors must bet on a legal future in the absence of guarantees that this future will materialize. To help analyze this complex narrative, we propose a theoretical framework for understanding legal, technological, and economic dynamics, highlighting anomalies in market exchanges within the co-regulatory model. Despite the focus on European developments, the practical and theoretical implications extend beyond the EU, making the Article relevant to a broader understanding of the legal-economic challenges posed by AI and digital innovation. We conclude by arguing for a course correction, proposing institutional diversification for resilient governance of legal innovation under uncertainty.
Article
Full-text available
This paper delves into the realm of big data customer analytics, employing a case study approach to investigate its utilization for acquiring profound insights into customer behavior, preferences, and trends. It explores diverse industries within the framework of Industry 4.0, elucidating strategies to amplify customer engagement, loyalty, and satisfaction. Through an examination of real-world scenarios, the paper highlights the transformative potential of leveraging big data analytics in understanding and enhancing customer relationships.
Article
In large cities, parking availability is a major problem. Car owners are confused by crowded reserved parking spaces during rush hour. To overcome this obstacle, modernizing the parking system is crucial. This research conducts an in-depth and thorough literature review of past studies and project implementations to identify best practices and gaps in the existing research. The goal is to analyze previous work comprehensively, highlighting successful approaches and areas where further investigation is needed. A prototype of the digital vehicle parking system is implemented via an Android application leveraging Firebase and IoT, based on AI. This makes it easy for drivers to find parking spaces with assigned spots.
Book
Full-text available
Computer science has become an irreplaceable foundation in today's rapidly developing digital era. From software to hardware, from algorithms to network security, computer science plays an important role in almost every aspect of our lives. This textbook is designed as a comprehensive guide that explores the complexity and depth of computer science. It can be used by educators in carrying out learning activities in the field of computer science and in various other related fields. In addition, this book can also be used as a guide and teaching reference for introductory computer science courses, adapted to the semester learning plan of each higher-education institution.
Article
The dizzying pace of constant change in educational institutions has generated large volumes of stored data. Presented in the different formats requested by regulatory agencies, these data pose new challenges supported by information and communication technologies, such as information clouds that guide their manipulation and analysis; the results have large socio-technological impacts and benefits for a country, especially its population. The present work describes how the analysis of massive data ("Big Data") contributes to decision-making in education, driven by causes such as the restructuring and/or reordering of educational institutions, growth of the school population, and increases in the number of teachers, among other associated indicators. The methodology applied in this study is based on descriptive, documentary, historical, longitudinal and transversal research. Statistical results are analyzed that allow the Ministry of Education (MINEDUC) in Ecuador to be understood in depth, with the purpose of strengthening education through pertinent data, which at the same time provides an opportunity to achieve high quality standards.
Article
Full-text available
Purpose This study aims to examine Big Data and the management of libraries in the era of the Fourth Industrial Revolution and its implications for policymakers in Nigeria. Design/methodology/approach A qualitative methodology was used, involving the administration of open-ended questionnaires to librarians from six selected federal universities located in Southwest Nigeria. Findings The findings of this research highlight that a significant proportion of librarians are well-acquainted with the relevance of big data and its potential to positively revolutionize library services. Librarians generally express favorable opinions concerning the relevance of big data, acknowledging its capacity to enhance decision-making, optimize services and deliver personalized user experiences. Research limitations/implications This study exclusively focuses on the Nigerian context, overlooking insights from other African countries. As a result, it may not be possible to generalize the study’s findings to the broader African library community. Originality/value To the best of the authors’ knowledge, this study is unique because the paper reported that librarians generally express favorable opinions concerning the relevance of big data, acknowledging its capacity to enhance decision-making, optimize services and deliver personalized user experiences.
Article
Full-text available
The significant rise in data volume and frequency, accompanied by the emergence of sophisticated technologies like artificial intelligence and machine learning for data analysis, has sparked a notable transformation across multiple sectors, particularly within the domain of central banking. Globally, many central banks are increasingly inclined towards harnessing the potential of big data to streamline their functions and strengthen decision-making processes. This study aims to spotlight the potential uses of big data within the operations of central banks, emphasizing both the opportunities and obstacles linked to its implementation. Additionally, it aims to assess the actual situation and hurdles in applying big data within Arab central banks by utilizing a questionnaire directed specifically to these institutions. The questionnaire sought to elicit the perspectives of Arab central banks on big data, shedding light on the most significant opportunities and challenges in this domain. Responses were gathered from eleven central banks situated in Jordan, the Emirates, Bahrain, Saudi Arabia, Sudan, Iraq, Qatar, Lebanon, Libya, Morocco, and Yemen. The study's results indicate that the majority of Arab central banks lack a clear comprehension and strategy for employing big data, showcasing diverse levels of consideration and application. These banks, however, foresee substantial benefits from embracing big data, especially in fraud detection, early warning system structuring, extensive regulatory supervision, utilizing Regulatory Technology (RegTech), and combating money laundering and terrorism financing. Yet, these banks face notable hurdles, including a scarcity of skilled professionals, managing vast data volumes, ensuring legal compliance, and prioritizing privacy protection, which remains paramount. Addressing these challenges requires a focused approach, including skill development, refining data management protocols, and bolstering cybersecurity measures within the organizational setup. Furthermore, fostering cooperation at local, regional, and global levels emerges as crucial for knowledge exchange. The study emphasizes that while Arab central banks acknowledge big data's potential, significant impediments hinder its practical application. Therefore, overcoming these impediments is essential to effectively leverage the potential of big data. To address these challenges, the study recommends increased investment in big data infrastructure and talent development, the establishment of robust governance frameworks for big data, and promoting cooperation to exchange knowledge and good practices. Implementing these measures is poised to empower Arab central banks, enhancing their operational efficiency and enabling them to execute their tasks in a more effective manner. Keywords: big data, central banks, monetary policy, financial stability, regulatory oversight, data privacy, cybersecurity, artificial intelligence, machine learning.
Article
Full-text available
This paper is based on a fair analysis of the economic benefits of software-defined networking. The paper explains the concept of software-defined networking and its applications, together with its economic advantages from both the client and server perspectives. In general, SDN provides lower hardware and operational costs under the excessive traffic loads of today's users. It offers tighter security and faster response times with improved management and planning of the server controller. SDN, when deployed, provides better services with respect to lowering cost and, of course, better efficiency. Software-Defined Networking (SDN) is a revolutionary idea to change the IT industry, which includes cloud computing and many other areas. The beauty of its architecture is the decoupling of the essential control and data planes from the application layer; thus, the underlying infrastructure is abstracted while the network intelligence is logically centralized with state. This architecture gives the enterprise access to control, program, automate, scale and secure the network according to business needs, together with securing a benefit in competitiveness that has arisen due to fast advancement in technology. This architecture is even more important for fulfilling the demands of various clients in the current model of networking. The concept of SDN has the potential to change the current networking model. With SDN, the underlying network infrastructure can be abstracted by administrators for many network services and applications. In theory, it can be shown that SDN has many benefits over current IT systems, but in practice, IT companies are reluctant to work with SDN, tend to oppose it, and are not ready to deploy it. In this paper, the economic benefits are explored and discussed without bias, and it is shown that the new SDN protocol provides many economic benefits and guidance to IT companies to establish more secure and efficient networks.
Article
Today, the Internet of Things (IoT) is changing the world and creating various connectivity mechanisms. Communication between people and machines, and between machines themselves, is only possible today thanks to the Internet of Things. For the last 15 years, the Internet of Things (IoT) has been the leading application area, extending from connected smart homes to wearable devices and healthcare. Industry 4.0 is the recent revolution in industry. It contains cyber-physical systems (CPS) that monitor the factory's production and manufacturing process and make independent decisions. In a wireless sensor network, a sensor detects data from a device and the collected data are sent to the router. The sensors may differ depending on the application. As so many devices are connected to the Internet of Things (IoT), massive data will be generated. To extract hidden information from the generated data, we have to apply different types of algorithms. A large amount of data can be monitored and controlled with the use of Wi-Fi, the Internet of Things (IoT), Cloud Computing (CC), and Cyber-Physical Systems (CPS).
Chapter
Full-text available
Digital transformation today encompasses inevitable developments that affect both individuals and businesses. It is now accepted as commonplace that organizations that cannot keep pace with this development and transformation cannot survive. What the companies that have transformed their organizations through digital transformation practices have done, and how they have done it, is a subject of great curiosity. This study attempts to show how digital transformation practices are carried out in institutions and businesses in Turkey and around the world, by presenting examples from companies operating effectively in various sectors. In the first part of the study, the concept of digital transformation and its components are explained; in the second part, digital transformation practices and notable examples from Turkey and the world found in the literature are presented on a sector-by-sector basis.
Article
Full-text available
Artificial intelligence and intelligent learning are regarded as among the most important technological developments of recent years. This technology focuses on giving computers and robots human-like intelligence and learning capabilities. Artificial intelligence is used in many fields and has a major impact especially in sectors such as industry, healthcare, Internet applications, information technology, finance, and education. By enabling faster, more accurate, and more efficient decisions, artificial intelligence and intelligent learning make people's lives easier and more productive. Alongside their positive effects, however, these technologies are also seen to bring many negative effects. Researchers are divided on this issue: some welcome the developments optimistically, while others criticize them sharply. The positive or negative effects that artificial intelligence and intelligent learning will have on human life in the future are a matter of great curiosity and concern. This study was conducted to understand the potential of ChatGPT, a currently popular example of artificial intelligence and intelligent learning technology. Because ChatGPT was used directly in preparing the study, it was added as a co-author.
Article
Full-text available
A digital twin is a digital representation of a physical entity that reproduces its data model, behavior, and communication with other physical entities. Digital twins act as a digital copy of the physical object or process they represent, providing near-real-time monitoring and evaluation without requiring physical proximity. While most tangible applications are found in industrial contexts, healthcare is another area where digital twins can have a major impact. The aim of this article is to provide theoretical background on the definition, principles, roles, stakeholders, and history of the digital twin concept; to that end, articles in the field were examined and a review paper was produced. A further aim is to build a comprehensive framework for digital twin applications in healthcare. After an overview of the application of digital twins in health services, the vision for this concept, which has recently begun to appear in research in Turkey, is discussed.
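As a rough illustration of the "digital copy with near-real-time monitoring" idea in a healthcare setting, the sketch below keeps a mirrored state for a bedside monitor and evaluates it remotely. All names and thresholds are hypothetical and chosen only for illustration.

```python
# Illustrative sketch: a minimal "digital twin" that mirrors the reported state
# of a physical asset and flags values outside an expected range, approximating
# near-real-time monitoring without proximity to the device.
import time

class PatientMonitorTwin:
    """Digital copy of a bedside monitor; holds its last known state."""
    def __init__(self, asset_id, safe_hr=(50, 120)):
        self.asset_id = asset_id
        self.safe_hr = safe_hr
        self.state = {}           # latest telemetry mirrored from the device

    def sync(self, telemetry):
        """Update the twin from a telemetry message sent by the device."""
        self.state.update(telemetry)
        self.state["last_seen"] = time.time()

    def evaluate(self):
        """Evaluate the mirrored state without touching the physical device."""
        hr = self.state.get("heart_rate")
        if hr is None:
            return "no data yet"
        low, high = self.safe_hr
        return "ok" if low <= hr <= high else f"alert: heart_rate={hr}"

if __name__ == "__main__":
    twin = PatientMonitorTwin("monitor-12")
    twin.sync({"heart_rate": 72, "spo2": 98})
    print(twin.evaluate())                    # ok
    twin.sync({"heart_rate": 138})
    print(twin.evaluate())                    # alert: heart_rate=138
```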
Article
Full-text available
This study investigated the effect of data analytics on the marketing communication performance of Nigerian organizations, drawing on insights from JCDecaux Nigeria in Lagos State. The study followed a quantitative research approach with a sample of one hundred (100) respondents selected through multistage sampling: purposive sampling was used to select the organization of interest (JCDecaux Nigeria and its employees in Lagos State), and convenience sampling was used to select individual respondents. The data were analyzed using frequency distributions, simple percentages, and means as descriptive statistics, while Pearson correlation analysis was used to test the hypotheses in the Statistical Package for the Social Sciences (SPSS). The results indicated a significant effect of data analytics on the conversion rate of marketing communication at JCDecaux Nigeria in Lagos State (hypothesis one, R-square value of 84.6%), a significant effect of data analytics on advertising effectiveness (hypothesis two, R-square value of 71.4%), and a significant effect of data analytics on revenue generation (hypothesis three, R-square value of 92.2%). The study concluded that data analytics plays a significant role in the marketing communication performance of JCDecaux Nigeria in Lagos State: analytics of customer information helps improve customer conversion rates, advertising effectiveness, and overall revenue generation, because organizations can tailor marketing communication to customers. In view of the findings and conclusion, the study recommended, among other things, that organizations and management develop strategic marketing communication that is well aligned with the distinctive characteristics of customer information derived from data analytics.
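For readers unfamiliar with the statistics reported above, the following sketch shows how a Pearson correlation coefficient and the corresponding R-square value are computed. The data here are invented toy numbers, not the study's dataset.

```python
# Illustrative sketch: Pearson correlation and R-square on hypothetical data.
import math

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

if __name__ == "__main__":
    # Hypothetical scores: intensity of data analytics use vs. conversion rate.
    analytics_use  = [2, 3, 4, 5, 6, 7, 8, 9]
    conversion_pct = [11, 14, 15, 19, 22, 24, 27, 30]
    r = pearson_r(analytics_use, conversion_pct)
    print(f"Pearson r = {r:.3f}, R-square = {r * r:.1%}")
```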
Preprint
Full-text available
Technology and innovation must be harnessed to help and advance society, not to replace the role of human beings; these changes are therefore expected to help people in their daily lives. The characteristics of the two eras are broadly similar, covering digitalization, optimization and customized production, automation, human-machine interaction, value-added services and business, the use of information technology, and the wealth of data held. Through the combination of, and continuity between, Industry 4.0 and Society 5.0, a better pattern of social life can be formed, improving the quality of people's social lives. One demand of the Industry 4.0 and Society 5.0 phenomena is the availability of high-quality, continuously updated data. This paper discusses in detail the direction of change in corporate life from Industry 4.0 toward Society 5.0, the continued growth of the digital economy, and the strategies companies use to survive in the face of these changes. INTRODUCTION: The extraordinary development of technology and information has had an equally extraordinary impact on industry and on the structure of social life. Few people yet realize that the Industry 4.0 and Society 5.0 phenomena bring a trend of change at the company level and even at the individual level. To face this new era, specific capabilities and strategies are required in preparation for competition. Every business organization strives to be unique so that it can gain a competitive advantage and survive in an era full of uncertainty.
Research Proposal
Full-text available
The objective of this research is to develop efficient and scalable optimization algorithms that can handle large-scale data analysis in dynamic environments, where data can be continuously added, removed or updated. The proposed algorithms will be designed to optimize multiple objectives, such as accuracy, scalability, and computational efficiency. The research will also focus on investigating the trade-offs between different optimization techniques, such as gradient-based methods and metaheuristic algorithms, in the context of large-scale data analysis. The proposed algorithms will be tested on real-world datasets from various domains, such as social media, finance, and healthcare, to evaluate their effectiveness and scalability. The research will contribute to the development of new techniques and strategies for handling large-scale data analysis problems in dynamic environments, and provide insights into the future of optimization algorithms in the context of big data.
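Since the proposal above weighs gradient-based methods against metaheuristics, a tiny contrast between the two may help. The sketch below fits a one-parameter model to toy data with an exact gradient step and with a simple random-restart hill climb; all names and data are invented for illustration and do not represent the proposed algorithms.

```python
# Illustrative sketch: one gradient-based update vs. one metaheuristic update
# on the same least-squares objective, the kind of trade-off the proposal studies.
import random

def loss(w, data):
    """Mean squared error of a one-parameter model y ~ w * x."""
    return sum((y - w * x) ** 2 for x, y in data) / len(data)

def gradient_step(w, data, lr=0.001):
    """Exact gradient of the MSE above, followed by one descent step."""
    grad = sum(-2 * x * (y - w * x) for x, y in data) / len(data)
    return w - lr * grad

def metaheuristic_step(w, data, scale=0.5):
    """Propose a random perturbation; keep it only if the loss improves."""
    candidate = w + random.uniform(-scale, scale)
    return candidate if loss(candidate, data) < loss(w, data) else w

if __name__ == "__main__":
    random.seed(0)
    data = [(x, 3.0 * x + random.gauss(0, 0.1)) for x in range(1, 20)]
    w_grad, w_meta = 0.0, 0.0
    for _ in range(200):                      # in a dynamic setting, data could also change here
        w_grad = gradient_step(w_grad, data)
        w_meta = metaheuristic_step(w_meta, data)
    print(f"gradient-based estimate: {w_grad:.3f}")
    print(f"metaheuristic estimate:  {w_meta:.3f}")
```

Both estimates approach the true slope of 3.0; the gradient method exploits the objective's structure, while the metaheuristic needs none, which is the essence of the trade-off the research intends to quantify at scale.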
Article
Purpose: The purpose of this study is to systematically review the existing literature on the blood supply chain (BSC) from a network design perspective and highlight the research gaps in this area. It also aims to pinpoint new research opportunities based on recent innovative technologies for BSC network design. Design/methodology/approach: The study gives a comprehensive systematic review of BSC network design studies up to October 2021. The review was carried out in accordance with the preferred reporting items for systematic reviews and meta-analyses (PRISMA). In total, 87 studies were analyzed under six main categories: model structure, application model, solution approach, problem type, supply chain parties, and innovative technologies. Findings: The results present researchers' tendencies and preferences when designing their BSC network models. Research limitations/implications: The study offers a guide for researchers and practitioners on the BSC from a network design point of view and encourages the adoption of innovative technologies in BSC network designs. Originality/value: The study provides a comprehensive systematic review of related studies from the BSC network design perspective and explores research gaps in the collection and distribution processes. Furthermore, it identifies innovative research opportunities arising from the use of innovative technologies in BSC network design.
Conference Paper
Full-text available
This research-in-progress paper introduces the construct of a data dominant logic. The findings of a two-step exploratory study indicate that SMEs' established mindset (dominant logic) hinders them from turning data into innovative products, services, or business models. Although the availability of large amounts of data and their use through data science opens new and promising possibilities for firms to innovate, the actual use of data and data science proves difficult. The firms under consideration recognize that the availability of data fundamentally changes their businesses, yet they lack the culture and mindset needed to turn data into innovation. It can be concluded that firms first need to establish a new mindset in which data play a central role. I term this mindset a data dominant logic (DDL). Future research is required to further concretize the construct.
Conference Paper
Full-text available
This research-in-progress paper introduces the construct of a data-driven (digital) dominant logic. It is found to be lacking in SMEs that aim to make use of data and data science within their companies. The availability of large amounts of data and their use through data science enables new and promising possibilities for firms, including new products, new services, and data-driven business models. However, the actual use of data and data science still proves difficult. Based on an inductive empirical study (interviews, survey), established organizational and managerial structures were found to be the most critical hindering factor; more concretely, the established mindset, or dominant logic, appears to be the most important element. It can be concluded that if a firm wishes to make effective use of data and data science, it must first transform its established dominant logic into a new mindset that I name a data-driven (digital) dominant logic (DDL).
Article
Using the big-data bibliometric analysis function of CNKI, 925 documents were retrieved with "research travel" as the keyword, and visual analysis and index analysis were performed on them. The characteristics of the progress of domestic research-travel research are summarized in terms of development trend, authors, keywords, funding support, and research level. The quantitative results of the big-data analysis show that research on research travel is still at an exploratory stage and that the research level remains narrow. In the future, researchers need to continuously deepen their work, expand studies of industry guidance, vocational guidance, and basic and applied basic research, and thereby enrich the research levels; they should also grasp hot topics such as geographical practice, core literacy, geographic core literacy, curriculum development, quality education, and small research topics, and deepen the content of research-travel studies.
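The basic operations behind the kind of keyword and visual analysis mentioned above are frequency and co-occurrence counts. The sketch below shows these on invented toy records, not the CNKI corpus.

```python
# Illustrative sketch: keyword frequency and pairwise co-occurrence counting,
# the building blocks of simple bibliometric analysis.
from collections import Counter
from itertools import combinations

records = [  # each entry stands for one retrieved document's keyword list
    ["research travel", "core literacy", "curriculum development"],
    ["research travel", "geographical practice", "core literacy"],
    ["research travel", "quality education"],
]

keyword_freq = Counter(kw for rec in records for kw in rec)
cooccurrence = Counter(
    pair for rec in records for pair in combinations(sorted(set(rec)), 2)
)

print(keyword_freq.most_common(3))
print(cooccurrence.most_common(3))
```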