Article

The Computer for the 21st Century

Authors:
Mark Weiser

Abstract

This chapter discusses the computer for the 21st century and tabs. Tabs are the smallest components of embodied virtuality. Because they are interconnected, tabs will expand on the usefulness of existing inch-scale computers, such as the pocket calculator and the pocket organizer. Tabs will also take on functions that no computer performs today. For example, computer scientists at PARC and other research laboratories around the world have begun working with active badges—clip-on computers roughly the size of an employee ID card, first developed by the Olivetti Cambridge research laboratory. These badges can identify themselves to receivers placed throughout a building, thus making it possible to keep track of the people or objects to which they are attached. The chapter also discusses page-size machines known as pads.


... Combined with the range of in-home devices in use by many participants, it is clear that they have created for themselves an ecosystem of devices that help them to manage their day-to-day lives and their health conditions. On the surface, this seems to suggest a form of ubiquitous computing similar to Mark Weiser's vision [69], with a multitude of (networked) computing devices within reach at all times. However, unlike in Weiser's vision, there was a clear distinction for participants between lifestyle management technologies and specific healthcare devices. ...
... However, the design needs to consider specific use cases of those who can benefit from it the most, while also being "non-fatiguing". This could, for example, be achieved by technology, including smart mirrors, being unobtrusive and operating in the "background", which would "not require active attention", as described by Weiser [69]. Moreover, any health technology should take into consideration economical, physical, and social external factors [70]. ...
Preprint
Full-text available
The home is becoming a key location for healthcare delivery, including the use of technology driven by autonomous systems (AS) to monitor and support healthcare plans. Using the example of a smart mirror, this paper describes the outcomes of focus groups with people with multiple sclerosis (MS; n=6) and people who have had a stroke (n=15) to understand their attitudes towards the use of AS for healthcare in the home. We used thematic analysis to analyse the data. The results indicate that the use of such technology depends on the level of adaptability and responsiveness to the users’ specific circumstances, including their relationships with the healthcare system. A smart mirror would need to support manual entry, responsive goal setting, effective aggregation of data sources and integration with other technology, have a range of input methods, be supportive rather than prescriptive in messaging, and give the user full control of their data. Barriers to adoption include a perceived lack of portability and practicality, lack of accessibility and inclusivity, a sense of redundancy, being overwhelmed by multiple technological devices, and a lack of trust in data sharing. These results inform the development and deployment of future health technologies based on the lived experiences of people with health conditions who require ongoing care.
... Weiser's initial vision of ubiquitous computing foresaw computers seamlessly integrating into daily life, operating inconspicuously to enhance human experience without intrusion [1]. Today, this vision has materialized, with computers seamlessly incorporated into personal smart devices. ...
Article
Full-text available
With the increasing availability of wearable devices for data collection, studies in human activity recognition (HAR) have gained significant popularity. These studies report high accuracies under k-fold cross-validation, which are not reflective of generalization performance but result from an inappropriate split of testing and training datasets: the models are evaluated on the same subjects they were trained on, making them subject-dependent. This study compares that validation approach with Leave-One-Subject-Out (LOSO) cross-validation, which is not subject-dependent and ensures that an entirely new subject is used for evaluation in each fold, using four different machine learning models trained on windowed data and selected hand-crafted features. The random forest model, with the highest accuracy of 76% when evaluated with LOSO, achieved an accuracy of 89% under k-fold cross-validation, demonstrating data leakage. Additionally, the experiment underscores the significance of hand-crafted features by contrasting their accuracy with that of raw sensor models. The feature models demonstrate a remarkable 30% higher accuracy, underscoring the importance of feature engineering in enhancing the robustness and precision of HAR systems.
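As an illustrative aside, a minimal sketch of the subject-independent evaluation described above, using scikit-learn's LeaveOneGroupOut so that each fold holds out one whole subject; the data, model settings and variable names below are synthetic assumptions, not the study's:

```python
# Minimal sketch of Leave-One-Subject-Out (LOSO) cross-validation, assuming a
# feature matrix X, activity labels y, and a per-sample array of subject IDs.
# All data here is synthetic and purely illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 12))            # hand-crafted feature vectors
y = rng.integers(0, 4, size=300)          # activity labels
subjects = rng.integers(0, 10, size=300)  # which subject produced each sample

logo = LeaveOneGroupOut()                 # each fold holds out one whole subject
scores = cross_val_score(RandomForestClassifier(n_estimators=100),
                         X, y, cv=logo, groups=subjects)
print(f"LOSO accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```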
... In this decentralized framework, the training process is distributed across multiple data sources, with only model parameters being communicated instead of raw, privacy-sensitive user data. Despite its promising potential, integrating FL into pervasive computing environments [51], where performance must be user-centric, presents several challenges. A significant limitation in FL is the heterogeneity inherent in real-world scenarios, where each user's data distribution may vary significantly due to differences in user behavior, local environments, and other contextual factors [19,27,28,26]. ...
Preprint
Federated Learning (FL) enables collaborative, personalized model training across multiple devices without sharing raw data, making it ideal for pervasive computing applications that optimize user-centric performances in diverse environments. However, data heterogeneity among clients poses a significant challenge, leading to inconsistencies among trained client models and reduced performance. To address this, we introduce the Alignment with Prototypes (ALP) layers, which align incoming embeddings closer to learnable prototypes through an optimal transport plan. During local training, the ALP layer updates local prototypes and aligns embeddings toward global prototypes aggregated from all clients using our novel FL framework, Federated Alignment (FedAli). For model inferences, embeddings are guided toward local prototypes to better reflect the client's local data distribution. We evaluate FedAli on heterogeneous sensor-based human activity recognition and vision benchmark datasets, demonstrating that it outperforms existing FL strategies. We publicly release our source code to facilitate reproducibility and furthered research.
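Purely as an illustration of the general idea of prototype-based alignment (pulling embeddings toward their nearest prototype), a minimal sketch follows; it is not the paper's ALP layer, which relies on an optimal transport plan and the FedAli aggregation scheme, and all names and values below are hypothetical:

```python
# Illustrative sketch only: nudging embeddings toward their nearest learnable
# prototype, the general idea behind prototype-based alignment. The paper's ALP
# layer uses an optimal transport plan and federated aggregation; neither is
# reproduced here, and all names below are hypothetical.
import numpy as np

def align_to_prototypes(embeddings, prototypes, strength=0.5):
    """Move each embedding part of the way toward its closest prototype."""
    # pairwise squared distances between embeddings (N, D) and prototypes (K, D)
    d = ((embeddings[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)  # (N, K)
    nearest = prototypes[d.argmin(axis=1)]                                # (N, D)
    return (1 - strength) * embeddings + strength * nearest

emb = np.random.randn(8, 4)       # batch of embeddings
protos = np.random.randn(3, 4)    # prototypes (kept fixed in this sketch)
print(align_to_prototypes(emb, protos).shape)    # (8, 4)
```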
... Digital technology was no longer hidden in beige boxes in the back offices of large corporations. Rather, it was in the hands of people in their everyday contexts (Weiser 1991, Yoo 2010). The ubiquity of digital infrastructures (Tilson et al. 2010, Henfridsson and Bygstad 2013) and the availability of smartphones as a delivery mechanism (Lyytinen and Yoo 2002) radically lowered the entry barriers for entrepreneurs to pursue unprecedented market opportunities through digital innovation. ...
... The concept of antifragility has been applied in many domains: physics, risk analysis [63,66], molecular biology [67,68], transportation planning [69,70], engineering [71], aerospace (NASA) [72], megaproject management [73], and computer science [74,75]. Computer science has structured a proposal, an "Antifragile Software Manifesto", to react to traditional system designs. ...
Article
This study presents a novel approach for developing a sustainable property tax system, aimed at enhancing economic stability and promoting sustainable regional development. This research employs a phenomenological methodology, which includes a comprehensive review of the scientific and practical literature, and their critique and synthesis. The authors also draw on their experiences with the tax system transformation within their own country. This study explores the integration of a consensual governance approach and the concept of antifragility into the complex issue of property taxation. The primary objective is to design a property tax management model that not only fulfills its economic functions, but also fosters an antifragile taxpayer society, contributing to the creation of a resilient and socially cohesive community. The findings demonstrate that a consensual and transparent property tax system, actively involving local stakeholders in decision-making processes, not only reduces resistance to tax reforms but also strengthens a community’s ability to adapt to economic fluctuations. By integrating the principles of good governance and sustainable development, the proposed model promotes socio-economic stability and provides a flexible framework that can accommodate diverse stakeholders’ needs, ultimately benefiting the broader community through enhanced social cohesion and long-term sustainability.
... The IoT is a group of embedded technologies that includes wired and wireless communications, hardware for sensors and actuators, and actual objects connected to the Internet [1]. One of computing's main aims has long been to improve and simplify human activities and experiences (for instance, have a look at the visions associated with "The Computer for the 21st Century" [2] or "Computing for Human Experience" [3]). In order to provide consumers with improved services or to enhance the functionality of the IoT framework, IoT needs data. ...
Article
Full-text available
The integration of machine learning and the Internet of Things (IoT) has been scientifically investigated in many studies. However, not many bibliometric studies categorize the output in this area. By examining the publications posted on the Web of Science (WoS) platform, this study aims to give a bibliometric analysis of research on machine learning and IoT, identifying the state of the art, trends, and other indicators. A total of 6,170 different articles made up the sample. The VOSviewer software was used to process the data and graphically display the results. The study examined the concurrent occurrence of publications by year, keyword trends, co-citations, bibliographic coupling, and analysis of co-authorship, countries, and institutions. Several prolific authors were identified. Although the body of literature on machine learning and IoT issues is expanding quickly, only five papers accounted for more than 2,193 citations. In addition, 40.34 percent of the articles from the 694 sources reviewed were published as the most important papers. At the same time, the USA is the top nation for research on this subject area. In addition to identifying gaps and promising areas for future research, this study offers insight into the current state of the art in the field of machine learning and IoT.
... Therefore, if algorithmic governance is understood as an invisible knowledge regime that produces interpretations of normality and deviation on the basis of digital data, which seep deeper into social processes and interactions and take on a life of their own, this speaks for the establishment of a subtle form of power whose legitimacy must remain largely unquestioned, because "The most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it" [49]. In this perspective, algorithms appear not only as neutral sorting machines but also as performative instruments of domination that legitimize power. ...
Article
Background Responses to public health crises are increasingly technological in nature, as the prominence of COVID-19–related statistics and simulations amply demonstrates. However, the use of technologies is preconditional and has various implications. These implications can not only affect acceptance but also challenge the acceptability of these technologies with regard to the ethical and normative dimension. Objective This study focuses on pandemic simulation models as algorithmic governance tools that played a central role in political decision-making during the COVID-19 pandemic. To assess the social implications of pandemic simulation models, the premises of data collection, sorting, and evaluation must be disclosed and reflected upon. Consequently, the social construction principles of digital health technologies must be revealed and examined for their effects with regard to social, ethical, and ultimately political issues. Methods This case study starts with a systematization of different simulation approaches to create a typology of pandemic simulation models. On the basis of this, various properties, functions, and challenges of these simulation models are revealed and discussed in detail from a socioscientific point of view. Results The typology of pandemic simulation methods reveals the diversity of model-driven handling of pandemic threats. However, it is reasonable to assume that the use of simulation models could increasingly shift toward agent-based or artificial intelligence models in the future, thus promoting the logic of algorithmic decision-making in response to public health crises. As algorithmic decision-making focuses more on predicting future dynamics than statistical practices of assessing pandemic events, this study discusses this development in detail, resulting in an operationalized overview of the key social and ethical issues related to pandemic crisis technologies. Conclusions This study identifies 3 major recommendations for the future of pandemic crisis technologies.
... It should be noted that along with the currently widely used methods for increasing the speed and reliability of CSs operating in the conventional binary positional number system (PNS), great prospects open up through the development and implementation of new, non-traditional methods for representing and processing data in a non-positional number system. In particular, at present, data coding options are being considered based on mathematical models and methods arising from a special branch of mathematics: number theory [1][2][3]. As a result, the search for alternative ways to increase the speed of information processing and increase the reliability of the result of solving computational tasks leads to an increase in interest in the use of a non-positional system of residual classes in related fields of science and technology. ...
Article
It is known that the use of a non-positional number system in residual classes (SRC) in computer systems (CS) can significantly increase the speed of the implementation of integer arithmetic operations. The use of such properties of a non-positional number system in the SRC as independence, equality and low-bitness (low-digit capacity) of the residues that define the non-positional code data structure of the SRC provides high performance for the implementation in the CS of computational algorithms consisting of a set of arithmetic (modular) operations. The greatest efficiency from the use of the SRC is achieved when the implemented algorithms consist of a set of arithmetic operations such as addition, multiplication and subtraction. There is a large class of algorithms and tasks (tasks of implementing cryptoalgorithms, optimization tasks, computational tasks of large dimension, etc.) where, in addition to performing the integer arithmetic operations of addition, subtraction, multiplication, raising integers to a power modulo and others in a positive numerical range, there is a need to implement the arithmetic operations listed above, and other operations, in the negative numerical range. The need to perform these operations in a negative numerical range significantly reduces the overall efficiency of using the SRC as a number system of the CS. In this aspect, the lack of a mathematical model for the process of raising integers in the SRC in the negative numerical region makes it difficult to develop methods and procedures for raising integers to an arbitrary power of a natural number in the SRC, in both positive and negative numerical ranges. The purpose of the article is the synthesis of a mathematical model of the process of raising integers to an arbitrary power of a natural number in the SRC, in both positive and negative numerical ranges.
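For readers unfamiliar with residue arithmetic, a minimal sketch of the idea follows: an integer is held as its residues modulo a set of pairwise-coprime moduli, arithmetic is carried out independently on the small residues, and the result is reconstructed with the Chinese Remainder Theorem. The moduli and the decoding step below are illustrative choices, not the article's construction:

```python
# A minimal residue-number-system sketch: encode an integer as residues modulo
# pairwise-coprime moduli, multiply digit-wise, then decode via the Chinese
# Remainder Theorem. Moduli are illustrative; dynamic range here is 3*5*7 = 105.
from math import prod

MODULI = (3, 5, 7)

def encode(x):
    return tuple(x % m for m in MODULI)

def mul(a, b):
    # multiplication acts independently on each low-bit residue
    return tuple((ra * rb) % m for ra, rb, m in zip(a, b, MODULI))

def decode(residues):
    """Chinese Remainder Theorem reconstruction."""
    M = prod(MODULI)
    x = 0
    for r, m in zip(residues, MODULI):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # pow(.., -1, m) is the modular inverse
    return x % M

a, b = encode(9), encode(11)
print(decode(mul(a, b)))               # 99, computed digit-wise on residues
```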
... These initiatives include generating new knowledge and developing innovative solutions to promote eco-friendly production and consumption or to prevent, diagnose, monitor, treat and cure diseases [1]. Some computing paradigms such as ubiquitous [2] or pervasive computing [3] have discussed how information technology could be woven into everyday objects and settings under the Disappearing Computing vision [4] to offer new ways of supporting and enhancing people's lives beyond the traditional support offered by conventional desktop computers [1]. Pervasive computing involves intelligent environments augmented by mobile, wearable, and environmental sensors in a diversity of computing devices that collect data from multiple sources. ...
Article
Full-text available
(Open Access) Recent technological improvements have made it possible for pervasive computing intelligent environments, augmented by sensors and actuators, to offer services that support society's aims for a wide variety of applications. This requires the fusion of data gathered from multiple sensors to convert them into information to obtain valuable knowledge. Poor implementation of data fusion hinders the appropriate actions from being taken and offering the appropriate support to users and environment needs, particularly relevant in the healthcare domain. Data fusion poses challenges that are mainly related to the quality of the data or data sources, the definition of a data fusion process and evaluating the data fusion carried out. There is also a lack of holistic engineering frameworks to address these challenges. These frameworks should be able to support automated methods of extracting knowledge from information, selecting algorithms and techniques, assessing information and evaluating information fusion systems in an automatic and standardized manner. This work proposes a holistic framework to improve data fusion in pervasive systems, addressing the issues identified by means of two processes, the first of which guides the design of the system architecture and focuses on data management. It is based on a previous proposal that integrated aspects of Data Fabric and Digital Twins to solve data management and data contextualization and representation issues, respectively. The extension of the previous proposal presented here was mainly defined by integrating aspects and techniques from different well-known multi-sensor data fusion models. The previous proposal identified high-level data processing activities and was intended to facilitate their traceability to components in the system architecture. However, the previously defined stages are not completely adequate in a data fusion process and the data processing tasks to be performed in each stage are not described in detail, especially in the data fusion stages. The second process of the framework deals with evaluating data fusion systems and is based on international standards to ensure the quality of the data fusion tasks performed by such systems. This process also offers guidelines for designing the architecture of an evaluation subsystem to automatically perform data fusion evaluation in runtime as part of the system. To illustrate the proposal, a system for preventing the spread of COVID-19 in nursing homes is described that was developed using the proposed guidelines. It is also illustrated by a description of how the data fusion tasks it supports are evaluated by the proposed evaluation process. The overall evaluation of the data fusion performed by this system was considered satisfactory, which indicates that the proposal facilitates the design and development of data fusion systems and helps to achieve the necessary quality requirements.
... With the growth of the Internet and the popularity of mobile communication devices, location information has steadily evolved into the most significant information in people's daily lives, producing a slew of location-based services. However, people spend 80% of their time indoors [1]. Therefore, there is an urgent need for a convenient and fast method to achieve high-precision indoor positioning. ...
Article
Full-text available
In visual indoor positioning systems, the method of constructing a visual map by point-by-point sampling is widely used due to its characteristics of clear static images and simple coordinate calculation. However, too small a sampling interval will cause image redundancy, while too large a sampling interval will lead to the absence of any scene images, which will result in worse positioning efficiency and inferior positioning accuracy. As a result, this paper proposed a visual map construction method based on pre-sampled image features matching, according to the epipolar geometry of adjacent position images, to determine the optimal sampling spacing within the constraints and effectively control the database size while ensuring the integrity of the image information. In addition, in order to realize the rapid retrieval of the visual map and reduce the positioning error caused by the time overhead, an image retrieval method based on deep hashing was also designed in this paper. This method used a convolutional neural network to extract image features to construct the semantic similarity structure to guide the generation of hash code. Based on the log-cosh function, this paper proposed a loss function whose function curve was smooth and not affected by outliers, and then integrated it into the deep network to optimize parameters, for fast and accurate image retrieval. Experiments on the FLICKR25K dataset and the visual map proved that the method proposed in this paper could achieve sub-second image retrieval with guaranteed accuracy, thereby demonstrating its promising performance.
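As a small aside, the log-cosh loss mentioned above can be written in a numerically stable form; the sketch below is a generic stand-alone version and does not reproduce the paper's deep hashing network or its semantic similarity structure:

```python
# A minimal sketch of the log-cosh loss: smooth like the squared error near zero
# but growing roughly linearly for large errors, so outliers have bounded
# influence. Written in a numerically stable form.
import numpy as np

def log_cosh(residual):
    # log(cosh(r)) = |r| + log(1 + exp(-2|r|)) - log(2), stable for large |r|
    a = np.abs(residual)
    return a + np.log1p(np.exp(-2.0 * a)) - np.log(2.0)

err = np.array([-3.0, -0.5, 0.0, 0.5, 3.0])
print(log_cosh(err))   # small, smooth values near 0; roughly |err| - log 2 far from 0
```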
... Consumer behavior and privacy concerns have been explored to a certain extent in the literature, from home users [17,[27][28][29][30][31] to travelers [32,33] and in IoT design frameworks [34][35][36][37]. But other aspects of IoT devices, such as financial risks and performance risks on home users' adoption of IoT devices, have not been reviewed. ...
Article
Full-text available
Home appliance manufacturers have been adding Wi-Fi modules and sensors to devices to make them ‘smart’ since the early 2010s. However, consumers are still largely unaware of what kind of sensors are used in these devices. In fact, they usually do not even realize that smart devices require an interaction of hardware and software since the smart device software is not immediately apparent. In this paper, we explore how providing additional information on these misunderstood smart device features (such as lists of sensors, software updates, and warranties) can influence consumers’ purchase decisions. We analyze how additional information on software update warranty (SUW) and the type of sensors in smart devices (which draw attention to potential financial and privacy risks) mediates consumer purchase behavior. We also examine how other moderators, such as brand trust and product price, affect consumers’ purchase decisions when considering which smart product option to buy. In the first qualitative user study, we conducted interviews with 20 study participants, and the results show that providing additional information about software updates and lists of sensors had a significant impact on consumer purchase preference. In our second quantitative study, we surveyed 323 participants to determine consumers’ willingness to pay for a SUW. From this, we saw that users were more willing to pay for Lifetime SUW on a smart TV than to pay for a 5-year SUW. These results provide important information to smart device manufacturers and designers on elements that improve trust in their brand, thus increasing the likelihood that consumers will purchase their smart devices. Furthermore, addressing the general consumer smart device knowledge gap by providing this relevant information could lead to a significant increase in consumer adoption of smart products overall, which would benefit the industry as a whole.
... The concept of the Internet of Things (IoT) emerged in 1982 with the design of a vending machine, mainly for products and drinks, which was connected to the Internet to obtain information about how many drinks the machine contained and when the drinks were already cold (Daimler, 2020). Later, in 1991, Mark Weiser (Weiser, 2002) proposed the concept of ubiquitous computing, which describes the paradigm that allows computing services to be offered over a network, commonly the Internet. Ubiquitous computing can occur when using any device, in any location and in any format (Poslad, 2009). ...
Article
Full-text available
Rapid advances in information technologies, combined with industrialization methods, have stimulated progress in the development of new generations of manufacturing technologies, to the point that the fourth industrial revolution is now in full swing. Industry 4.0 demands the application of emerging technologies originating from different disciplines, including cyber-physical systems, the Internet of Things, cloud computing, industrial integration, service-oriented architectures, business process management, and industrial information integration, among others. The lack of tools remains an obstacle to exploiting the great potential of Industry 4.0, which presents major challenges for companies that need to implement formal methods and systems to enter Industry 4.0. In this research, we focus on the challenges of integrating the Internet of Things as an opportunity for sustainable manufacturing in Industry 4.0, and thereby generating new qualitative services that represent an evolution of the Internet.
... Mark Weiser first proposed the idea of ubiquitous computing in 1988 [5]. His vision was to integrate computing into people's everyday lives transparently and seamlessly. ...
Chapter
Full-text available
Cloud Computing has emerged as an alternative to the traditional client‐server paradigm. It offers computation, storage, and applications as on‐demand services. Service‐oriented architecture and virtualization are the core technologies that form the main pillars of cloud computing. Researchers, practitioners, and cloud service providers have been attracted to cloud computing's benefits since the early 2000s. The cloud has matured since its inception and deployed in several sectors including industry and academia. This chapter presents a comprehensive overview of cloud computing. It describes common terminologies and technology that are associated with cloud computing. This chapter also discusses cloud computing research issues and challenges, and emerging cloud computing applications.
... A founding narrative: in a Scientific American article published in 1991 and evocatively titled "The Computer for the 21st Century" (Weiser, 1991), Mark Weiser, then director of technology research at PARC, laid the foundations of a new paradigm for computer engineering: ubiquitous computing. In it, everyday life is depicted in a literary mode very close to that of a short story. ...
Article
The convergence of public space and new technologies, as a foundational concern of sound art, calls for a critical analysis of recent topics on how artistic practices interact with sound. After recognizing the socializing attributes of sound as a creative force, the text addresses repertoires of sound art that have reflected on the urban experience. The concept of smart cities helps us think about the emergence of a mobile listening culture, one that shapes a sense of informational territoriality and constructs new modes of digital citizenship. From the perspective of sound art, environmental issues are also analyzed, recognizing in the problem of air pollution a strategy of adaptation to the climate crisis. The interdisciplinary practice of sonification is discussed in detail, analyzing its different strands and potential developments in both the artistic and scientific fields. In addition, scenarios in which sonification is relevant under particular logics of a knowledge economy are identified. As a complement to the theoretical discussions and the examples of problems addressed, the text reports on artistic works produced during research-creation processes. These pieces were created together with students as a product of academic activities and synthesize the concerns and determinations forged at the theoretical level.
Chapter
Each major advance in ICT (Information and Communications Technology) has brought about great changes in the productivity and production relations of human society and changes in thinking and research methods in the study of human economic geography.
Article
Full-text available
Accessibility research has matured over the last three decades and developed a better understanding of accessibility technologies, design and evaluation methods, systems and tools as well as empirical studies in accessibility. We envision how progress in new contexts over the next decade can be made to develop stronger links to other areas in Human-Centered Computing and address the research communities. A human-centered perspective on disability needs to develop from a medical model to a social model. New methods will utilize generative AI in design and development processes that address accessibility from the start of system design. We build on AI embedded into future design processes to address participation of small numbers of users better, and new technologies to allow for personalization of multi-modal interaction to improve verbal and non-verbal communication, making body-centric computing and natural interaction truly accessible.
Chapter
In this chapter, I review how the fields of Human Computer Interaction (HCI) and Computer Science have come together over the last 60 years to create and support novel forms of interaction. We see how interaction models have progressively incorporated human skills and abilities, as well as the physical and social properties taken from the real world. I organize this evolution into three periods—pre-HCI, seminal HCI, ubiquitous HCI. Each of these periods is illustrated with key reference works that have influenced my own research as well as contributions of French researchers to the field.
Article
Ubiquitous computing encapsulates the idea for technology to be interwoven into the fabric of everyday life. As computing blends into everyday physical artifacts, powerful opportunities open up for social connection. Prior connected media objects span a broad spectrum of design combinations. Such diversity suggests that people have varying needs and preferences for staying connected to one another. However, since these designs have largely been studied in isolation, we do not have a holistic understanding around how people would configure and behave within a ubiquitous social ecosystem of physically-grounded artifacts. In this paper, we create a technology probe called Social Wormholes, that lets people configure their own home ecosystem of connected artifacts. Through a field study with 24 participants, we report on patterns of behaviors that emerged naturally in the context of their daily lives and shine a light on how ubiquitous computing could be leveraged for social computing.
Article
With rapid advances in computing, we are beginning to see the expansion of technology into domains far afield from traditional office settings historically at the center of CSCW research. Manufacturing is one industry undergoing a new phase of digital transformation. Shop-floor workers are being equipped with tools to deliver efficiency and support data-driven decision making. To understand how these kinds of technologies are affecting the nature of work, we conducted a 15-month qualitative study of the digitalization of the shipping and receiving department at a small manufacturer located in the Southeastern United States. Our findings provide an in-depth understanding of how the norms and values of factory floor workers shape their perception and adoption of computing services designed to augment their work. We highlight how emerging technologies are creating a new class of hybrid workers and point to the social and human elements that need to be considered to preserve meaningful work for blue-collar professionals.
Conference Paper
Context-awareness is an important component of modern software systems. For example, in Ambient Assisted Living (AAL), the concept of context-awareness empowers users by reducing their dependence on others. Due to this role in healthcare, such systems need to be reliable and usable by their intended users. Our research addresses the development, testing and validation of context-aware systems in an emerging field which currently lacks sufficient systems engineering processes and disciplines. One specific issue being that developers often focus on delivering a system that works at some level, rather than engineering a system that meets a specified set of system requirements and their corresponding qualities. Our research aims to contribute towards improving the delivery of system quality by tracing, developing and linking systems development data for requirements, contexts including sensors, test cases and their results, and user validation tests and their results. We refer to this approach as the “quality traceability of context-aware systems”. In order to support the developer, the quality traceability of context-aware systems introduces a systems development approach tailored to context-aware systems in intelligent environments, an automated system testing tool and system validation process. We have implemented a case study to inform the research. The case study is in healthcare and based on an AAL system used to remotely monitor and manage, in real time, an individual prone to depressive symptoms.
Conference Paper
The vision of a `metaverse' may soon bring a ubiquitous(ly) Augmented Reality (UAR) delivering context-aware, geo-located, and continuous blends of real and virtual elements into reach. This paper draws on speculative design to explore, question, and problematize consequences of AR becoming pervasive. Elaborating on Desjardin et al.'s bespoke booklets, we co-speculate together with 12 globally dispersed participants. Each participant received a custom-made design workbook containing pictures of their immediate surroundings, which they elaborated on in situated brainstorming activities. We present an integration of their speculative ideas and lived experiences in 3 overarching themes from which 7 `dark' scenarios caused by UAR were formed. The scenarios are indicative of deceptive design patterns that can (and likely will) be devised to misuse UAR, and anti-patterns that could cause unintended consequences. These contributions enable the timely discussion of potential antidotes and the extent to which they can mitigate imminent harms of UAR.
Article
Unobtrusiveness has been highlighted as an important design principle in Human-Computer Interaction (HCI). However, the understanding of unobtrusiveness in the literature varies. Researchers often claim unobtrusiveness for their interaction method based on their understanding of what unobtrusiveness means. This lack of a shared definition hinders effective communication in research and impedes comparability between approaches. In this article, we approach the question “What is unobtrusive interaction?” with a systematic and extensive literature review of 335 papers and an online survey with experts. We found that not a single definition of unobtrusiveness is universally agreed upon. Instead, we identify five working definitions from the literature and experts’ responses. We summarize the properties of unobtrusive interaction into a design framework with five dimensions and classify the reviewed papers with regard to these dimensions. The article aims to provide researchers with a more unified context to compare their work and identify opportunities for future research.
Chapter
Over the last few years, applications relevant to the Green Internet of Things (GIoT) have emerged, and attention has converged on two trending technologies: Green Cloud Computing Applications (GCCA) and GIoT, which are currently widely discussed in crop growing (agriculture) and in healthcare industry–based applications. Motivated by the goal of a sustainable globe, this chapter discusses a variety of technologies and issues concerning GCCA and GIoT, and extends the discussion to reducing the energy consumption of the combination of these two techniques (CCA and GIoT) in the farming and healthcare industries, i.e., one agriculture-based and one healthcare industry–based system. The history and perception of the green information and communication technologies (GICTs) that enabled GIoT are discussed rigorously. The chapter opens with green mathematical computation and then focuses on significant recent work on these two emerging technologies in both agriculture and healthcare cases. In addition, the chapter presents a GIoT farming and healthcare applications linear time-invariant system (GIoT-AHAS) using digital wireless sensor cloud discrete integration, or digital summation, modelling. Finally, it summarizes the limitations, advantages, challenges, and prospects of research directions associated with emerging green application-oriented development in the relevant fields. The aim of the chapter is to explore this broad green area and to contribute to sustainable applications around the globe.
Chapter
Full-text available
Now we know what AI is and have seen how the technology has made the transition from the lab to society in recent years, we turn our attention to the process of embedding AI into society. What is required to incorporate AI into our society? To answer that question, this chapter presents a framework within which AI can be viewed as a particular type of technology, namely a system technology, with a number of historical precedents. By viewing AI in this way, we can draw various conclusions from the history of other system technologies. That in turn provides a basis for reflecting on what we need to do with AI and how we can address the many issues associated with it. It is not our intention to imply that history always repeats itself or that technological development has deterministic characteristics. We do not set out a rigid framework but identify general patterns that shed light on the present, while recognizing that the past and the present differ. By adopting this approach, we seek to look beyond the current situation and thus beyond the whims of the day.
Article
Recently, with the downsizing of computers and the development of wireless communication advances, sensor networks are being widely studied. However, it is necessary to know the location of each node, in order to apply sensor data. Many researchers have tried to find a good approach to position estimation in indoor environment. In our study, we focus on position estimation by using Received Signal Strength Indication (RSSI). It has the advantage of implementation with limited resources in the sensor network. However, since RSSI value is affected by multipath and obstacles, position estimation may yield considerable errors. In our research, we propose a range estimation technique with RSSI on Low Frequency (LF) waves. Since RSSI value on LF waves is less affected by multipath and obstacles compared with RSSI on Ultra High Frequency (UHF) waves used for a communication, position estimation with high accuracy can be calculated using this method. We show an RSSI measurement sensor which measures the RSSI on LF waves and a transmitter which sends radio waves on the 125 kHz band. Results of experiments using our developed modules and a ZigBee module demonstrated the robustness of RSSI on LF waves against multipath and obstacles compared with UHF waves. In this paper, a range estimation experiment was performed by applying the proposed modules and range estimation accuracy was evaluated through experiments.
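As a hedged illustration of how RSSI is commonly turned into a range estimate, the sketch below inverts the standard log-distance path-loss model; this is a textbook model with assumed parameter values, not necessarily the estimation procedure used in the paper:

```python
# A minimal sketch of RSSI-based range estimation using the common log-distance
# path-loss model: received power falls off with log10 of distance, so distance
# can be recovered from RSSI. TX_POWER_DBM and the path-loss exponent are
# illustrative values, not the paper's calibration.
TX_POWER_DBM = -50.0     # expected RSSI at the 1 m reference distance
PATH_LOSS_EXP = 2.0      # ~2 in free space, larger with obstacles/multipath

def rssi_to_distance(rssi_dbm):
    """Invert the log-distance model: RSSI = TX_POWER - 10*n*log10(d)."""
    return 10 ** ((TX_POWER_DBM - rssi_dbm) / (10 * PATH_LOSS_EXP))

for rssi in (-50, -60, -70):
    print(f"RSSI {rssi} dBm -> ~{rssi_to_distance(rssi):.1f} m")
```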
Chapter
Full-text available
This paper presents a study of how the Internet of Things can improve the quality of education by making students self-reliant, self-enhanced and self-motivated for their own future and for that of the nation. Students can study at their own pace and learn whatever they want to. The assumption is that traditional methods leave little scope for creativity, are based only on textbook education, and keep students chasing grades and exams. With IoT-enabled online learning, students can enhance their creativity and rely on their own learning strategies. Online learning also saves time and money and avoids some negative sides of traditional classroom teaching, such as gossiping, peer bullying, and skipping class. With the Internet of Things, students can access quality content beyond textbooks and gain space to develop creativity and innovation, helping future generations to think, analyse and decide more freely what they want to do and what is right or wrong. Online learning thus leads students to become more self-sufficient, self-supporting and independent. This up-to-date technology should be used in education for the benefit of students and teachers, offering recent insights into how lecturers can be more engaging with the help of current trends in IoT. The paper reviews the outlooks of IoT and their productivity, which may help education practitioners and learners of the current era draw conclusions from processed data that reveal distinct educational trends and the success rates of the applied methods. It also identifies areas for improvement in various sections, supporting everyone in their self-reliance.
Article
The notion of context awareness and the challenge of reasoning with partially reliable sources are two important aspects within Information Fusion. Context is the information relevant to, but not directly affecting, the problem at hand and can be broadly categorised into either context-for or context-of, referring either to the information related to some situation or to the environment induced by some situation, respectively. In evidence theory, the Behaviour-Based Correction (BBC) model generalises reasoning with partially reliable sources as well as contextual belief correction. In this paper, we propose a model for contextual reasoning framed into evidence theory, which captures both the notions of context-for and context-of. We rephrase the BBC model to explicitly account for variation of metaknowledge regarding source behaviour, and subsequently include within it the variables defining the context-for the problem and the context-for the source. The benefit is two-fold: on the one hand, the explicit inclusion of context in the reasoning provides a better insight into the problem and on the other hand, it can improve the expressiveness of the model. This is illustrated on a case of maritime surveillance involving a missing vessel, where it is shown that this model is not only more expressive than the simple fusion of sources model but also more interpretable.
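For context, the classical way of reasoning with a partially reliable source in evidence theory is Shafer's discounting, sketched below; the paper's Behaviour-Based Correction (BBC) model generalizes this with contextual metaknowledge, and that generalization (and the maritime case) is not reproduced here. The frame and mass values are hypothetical:

```python
# Illustrative only: classical Shafer discounting of a mass function by a source
# reliability factor alpha. A fraction alpha of each mass is kept; the rest is
# transferred to the whole frame (total ignorance).
FRAME = frozenset({"threat", "no_threat"})

def discount(mass, alpha):
    """Keep alpha of each focal mass; move the remainder to total ignorance."""
    out = {A: alpha * m for A, m in mass.items() if A != FRAME}
    out[FRAME] = 1.0 - alpha + alpha * mass.get(FRAME, 0.0)
    return out

source = {frozenset({"threat"}): 0.8, FRAME: 0.2}    # a confident source
print(discount(source, alpha=0.6))                   # confidence reduced, ignorance increased
```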
Article
The manufacturing industry is poised to undergo a paradigm shift with the advent of the Fourth Industrial Revolution, popularly known as ‘Industry 4.0’, which will integrate the physical and digital worlds. As a widely acknowledged phrase among research institutions and universities, the ‘Industry 4.0’ paradigm has attracted significant interest from the academic, business and scientific communities. Even though the concept is not new and has been at the forefront of scholarly research for many years with many interpretations, the ‘Industry 4.0’ concept has just recently been introduced. It is widely accepted not only in the research field but also in the manufacturing ecosystems. However, there is a need to comprehend industry-specific research advancements, trends and gaps due to the diverse applications of different technologies. This article systematically reviews and comprehensively evaluates the research on different Industry 4.0 efforts and their applicability in the textile and manufacturing industries. This article aims to map the current Industry 4.0 literature in the textile and apparel industry to analyse and categorize existing research and identify research gaps. We utilized the PRISMA framework to conduct a systematic review of the literature and 34 research publications on Industry 4.0 in the textile and apparel industry were located using a well-organized keyword search of the Web of Science and Scopus databases. The findings indicate that, of all Industry 4.0 technologies, the internet of things (IoT) and RFID applications are the most extensively investigated in the textile and manufacturing industries. Despite the extensive applications of additive manufacturing (AM)/3D printing and augmented reality (AR), research in both fields is still in its infancy. The study revealed that Germany is the country that has published the most literature on Industry 4.0 projects in the textile and apparel industries. We urge that future academics focus on determining the relevance of Industry 4.0 initiatives to small and medium-sized enterprises (SMEs) as the textile and apparel industry is dominated by SMEs in many developing countries.
Article
Full-text available
Several computing paradigms have emerged over the years, integrated with the Internet of Things (IoT) as the base to realize the complex hyperspace associated with the ubiquitous Cyber-Physical-Social-Thinking hyperspace that society expects. An overlap of the principles that define those paradigms exists and, despite previous efforts, a unified and appropriate definition of each of them is still a challenge. Therefore, the purpose of this work is to survey the existing literature about IoT and its related paradigms to obtain a model that provides a definition usable to guide the selection of the paradigm that best fits the requirements of the system-to-be. For this aim, a rigorous and systematic Thematic Synthesis has been conducted to analyze the most relevant studies of the selected paradigms and specify a model that integrates their definitions, their relations and differences. Furthermore, Cyber-Physical-Social Systems (CPSS) has been identified as the paradigm focusing on social and human factors that best realizes the complex hyperspace of the smart world, since it entails relevant and convenient aspects from other paradigms.
Article
With the transformation led by the Fourth Industrial Revolution, the nature of product innovation has shifted from traditional physical products to smart products, or Smart, Connected Products (SCP). There has not been a comprehensive understanding of SCP since the term was proposed in 2014, and this is a new and unfamiliar arena for designers. To fill this gap in the research, a holistic frame of SCP was established based on product attributes explored through a systematic literature review and expert interviews. As a result, 20 attributes and 44 sub-attributes were obtained and classified into four clusters—Appearance, Function, Experience and Meaning. The frame explains the concepts and four capabilities of SCP, and guides designers’ practice in developing new ideas for SCP.
Conference Paper
Hybrid threat events are rare and cannot be modelled solely based on data. Instead they require a focus on discovery of emergent knowledge through information sharing across agencies and systems. However, multi-intelligence can bring about reasoning challenges with multiple sources such as confirmation biases. In this paper, we present how context can be used to combat these reasoning biases. Firstly, we show how it can reduce the impact of the overly confident sources and secondly, how it can be used to provide counter-evidence. It is shown that when context is used in such a manner the reasoning results display less false confidence while still supporting the original hypothesis. We apply the reasoning scheme to the post-analysis of a real case event. The story of Andromeda was widely reported upon when the vessel loaded with 410 tonnes of explosives supposedly sailing to Libya was arrested near Crete in early 2018. Using media headlines, AIS signals and analyst reports, we show how realistic, uncertain, heterogeneous reports and contextual information can be put together to reason about its intent. We propose a reasoning model framed within the theory of evidence to combine the information from these sources. The modularity of our method allows us to easily compare different approaches to context-aware reasoning. We finally conclude on future steps for this work.