Questions related to Ubiquitous Computing
Affective technologies are interfaces built on the branch of emotional artificial intelligence known as affective computing (Picard, 1997). Examples include facial emotion recognition systems, wearables that measure emotional and internal states, social robots that interact with users by detecting and perhaps expressing emotions, and voice assistants that infer emotional states from modalities such as voice pitch and frequency.
Since these technologies are relatively invasive of our private sphere (our feelings), I am trying to identify factors that might enhance user acceptance of such technologies in everyday life (I am measuring the effects with the Technology Acceptance Model, TAM). Factors such as trust and privacy may be obvious, but moderating factors such as gender and age are also very interesting. Furthermore, I need relevant literature on which to ground my work, since I am writing a literature review on this topic.
I am thankful for any kind of help!
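As an illustration of one modality mentioned above, here is a minimal sketch of pitch extraction by autocorrelation, a low-level acoustic feature sometimes fed into emotion classifiers. The function, its parameters, and the synthetic tone are illustrative assumptions, not part of any specific affective-computing product.

```python
import math

def estimate_pitch(signal, sample_rate, fmin=50.0, fmax=500.0):
    """Estimate the fundamental frequency (Hz) of a mono signal
    by picking the autocorrelation peak in the [fmin, fmax] range."""
    n = len(signal)
    lag_min = int(sample_rate / fmax)              # shortest period considered
    lag_max = min(int(sample_rate / fmin), n - 1)  # longest period considered
    best_lag, best_corr = lag_min, float("-inf")
    for lag in range(lag_min, lag_max + 1):
        corr = sum(signal[i] * signal[i + lag] for i in range(n - lag))
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return sample_rate / best_lag

# A synthetic 220 Hz tone sampled at 8 kHz stands in for a voice recording.
sr = 8000
tone = [math.sin(2 * math.pi * 220 * t / sr) for t in range(2048)]
pitch = estimate_pitch(tone, sr)  # close to 220 Hz
```

In a real system this feature would be computed on short frames of recorded speech and combined with many other cues before any emotional state is inferred.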
Can anyone explain the difference between ambient learning and context-aware ubiquitous learning?
The Fourth Industrial Revolution is the next generation of industry, based on cyber-physical systems and technologies such as artificial intelligence, robotics, augmented reality, IoT, 3-D printing, nanotechnology, and biotechnology.
How do you imagine the classroom of the future?
Some researchers conflate IoT and ubiquitous computing. Is there a clear difference between these terms? What is the role and impact of context-awareness in each field?
I am currently diving into the sea of open-source projects for SPH flow simulation. The two candidates I have come across are DualSPHysics and LAMMPS. Both are open source and have a certain degree of user-modifiable code. However, as I go along with my project (I have only spent a couple of hours playing with each of them), I wonder which one will be easier for me to handle in terms of robustness and code sharing (for future reproducibility).
If you have experience with both packages, may I ask for your insights?
I've developed some new metrics that attempt to gauge the effectiveness of different analytic methods (especially in their ability to discover new findings) and now want to do a more complete literature search. Does anyone have some recommendations for good articles on the development of new metrics for evaluating the effectiveness or power of different analytic/mathematical methods to discover new patterns or relationships?
I am working on IoT with a hierarchical cloud (edge computing). I have designed a framework intuitively. To obtain results, should I go for simulation or practical implementation? Could you also explain why, and how?
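One common argument for simulating first is that a simple model lets you sweep design alternatives in seconds. Below is a toy latency sketch comparing edge versus cloud processing; all timing parameters are made-up assumptions, not measurements of any real system, and a practical implementation would still be needed to validate them.

```python
import random

# Toy model: average per-reading latency of edge vs. cloud processing.
# All timing parameters below are hypothetical assumptions.
random.seed(0)

EDGE_RTT_MS, CLOUD_RTT_MS = 5.0, 80.0    # network round-trip times
EDGE_PROC_MS, CLOUD_PROC_MS = 12.0, 2.0  # per-reading compute times

def mean_latency(rtt_ms, proc_ms, n_readings=1000):
    """Average end-to-end latency per reading, with Gaussian jitter
    standing in for variable network conditions."""
    total = sum(rtt_ms + abs(random.gauss(0, 1)) + proc_ms
                for _ in range(n_readings))
    return total / n_readings

edge_ms = mean_latency(EDGE_RTT_MS, EDGE_PROC_MS)
cloud_ms = mean_latency(CLOUD_RTT_MS, CLOUD_PROC_MS)
```

Under these assumed numbers the edge wins despite its slower compute, because the network round trip dominates; changing the constants flips the conclusion, which is exactly the kind of sensitivity a simulation exposes cheaply before committing to hardware.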
I am trying to find a definition of the IoT term for my research. It seems a good opportunity to ask this question once more. In most cases I know of, the term IoT/IIoT can be replaced by SCADA (Supervisory Control and Data Acquisition) or ICT (Information and Communication Technology) and the text still reads perfectly well.
Do you think a box (or even a pack) of cigarettes could be the "thing"? It has a barcode, so it is a source of data. Is it the sensor? No, because in this case the barcode reader (an industrial scanner) is the "sensor". Can we recognize the barcode reader as the "thing"? The answer is no, if the goal is to provide a GLOBAL cigarette-tracking system. The same applies to drugs, for example. Is it an IoT/IIoT solution? My answer is yes, no doubt; it is vital for selected industries.
Is the "thing" smart? I don't think we can call a barcode smart. The most interesting observation is that we can recognize this case as an IoT solution even though we have not mentioned the Internet, wireless, etc. at all, only that we have important mobile data and that the solution is globally scoped.
Let's now replace the word GLOBAL with LOCAL (for example, a farm of cash desks in a shop): the same application is no longer an IoT deployment, is it? This is true even if the cash desks are interconnected using the IP protocol!
My point is that a good definition of the term is important for working together on common rules, architecture, solutions, requirements, capabilities, limitations, etc. The keyword in the previous sentence is COMMON. The importance of sensor and data robustness requirements applies to many applications, e.g., controlling an airplane engine during flight. The same engine could be monitored and tracked after landing at any airport, using local Wi-Fi to upload archival data to a central advanced analytics system. Is it IIoT? During the flight it isn't, yet the solution is life-critical. After landing it is IIoT, yet the reliability of the data and of the data transfer is no longer so important, is it?
My concern is that your definition provides a pretty good description of the Universe, whereas working on engineering standards is like carving in stone: it is a one-way ticket. To buy a one-way ticket, you must be sure where you are going.
To be constructive, my proposal for the definition is as follows:
Try it against the above example.
In the above proposal the open question is what "mobile data" is, but I believe the definition is much closer to the final expectation. To answer this question I propose the following approach: data is data, it doesn't matter where it comes from!
To implement this concept, we can use Object-Oriented Internet paradigms covered by the:
The only missing piece is how to use these building blocks to assemble a consistent IoT puzzle (deployment domain). In this case, a sponsor is needed to scope the outcome globally.
I believe that, in the end, this approach will give us a good starting point for further standardization.
Let me know how this scenario works for you.
Dan Chalmers has an excellent paper about pervasive computing courses, but I'm interested in hearing from other professors what they think is hardest to teach.
SensorML is a markup language developed for open geospatial information systems. There are Java libraries, XML Schema definitions for validation, and OWL support for semantic reasoning. When I did a quick browse through it, it seemed possible to:
- define self-describing sensors
- define processing (from source to recipient)
- and much more
To me, this standard seems to be a real contender for addressing interoperability issues in ubiquitous computing, ambient computing, and the Internet of Things. So, have you applied it? What is your experience? What are the advantages and disadvantages? Concerning interoperability, are there any real competitors?
I am diving into the implications of ubiquitous computing on the future of learning.
I am working on a project that uses a statistical estimator, ACE (abundance-based coverage estimator). This estimator is like a function that maps a set of samples to a single real value. I need to evaluate the performance of this estimator without ground truth.
I want to evaluate it with leave-one-out cross-validation (LOOCV). If the variance of the LOOCV results is small, implying that removing one sample barely affects the estimate, then I consider the performance of the estimator to be good.
I have been trying to search for relevant papers, but I still do not know how other people handle the evaluation problem without ground truth, or whether LOOCV can be used in this way.
Any suggestions? :)
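A minimal sketch of the leave-one-out stability check described above. The `richness` function is only a hypothetical stand-in for ACE (which is not implemented here); the point is the jackknife-style loop and the variance of the resulting estimates.

```python
import statistics

def loocv_stability(samples, estimator):
    """Apply the estimator to every leave-one-out subset and return
    the population variance of the estimates plus the estimates."""
    estimates = [estimator(samples[:i] + samples[i + 1:])
                 for i in range(len(samples))]
    return statistics.pvariance(estimates), estimates

# Stand-in estimator: distinct-species count. Replace with ACE.
def richness(samples):
    return len(set(samples))

obs = ["a", "b", "a", "c", "b", "a"]       # toy abundance sample
var, ests = loocv_stability(obs, richness)  # low variance => stable estimator
```

Note that a small variance only indicates stability under sample deletion (low sensitivity), not accuracy: a badly biased estimator can still be perfectly stable, which is one reason the literature usually treats jackknife variance and bias as separate quantities.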
Knowing that learning strategies in real-world ubiquitous learning differ from e-learning ones, have learners' needs, preferences, and their descriptions evolved with u-environments?
I am trying to identify the core security issues in Openmoko, an open-source operating system for ubiquitous computing.
Would you please help me find security management issues in Openmoko that are relevant to ubiquitous computing?
Components are units of composition and reuse, and as such they are carriers of a piece of functionality that can be utilized in fulfilling operational demands for systems. In the literature, various types of analogue and digital hardware components, system, application and utility software components, as well as cyberware (information and knowledge structure) components are discussed. However, it is very difficult to find publications in which comprehensive taxonomies or classifications of these are proposed or applied. Are you aware of any general taxonomy or classification schemes of hardware, software and/or cyberware components, no matter if they are off-the-shelf or custom-developed components? Are there any standards or specifications in these fields?
Ubiquitous computing emerged at the beginning of the 1980s. Its main assumption is that, thanks to technological developments, new affordances, and societal demand, computing can be available anywhere, anytime, in any context, and in anything. This idea has been introduced and exploited successfully in many application fields over the years. However, my impression is that ubiquitous computing has so far had only a rather limited impact on computer-aided design. Though many researchers have studied the affordances and possible applications of ubiquitous technologies, it would be counterfactual to claim that ubiquitous computing has managed to revolutionize either the methodologies or the tools and systems of computer-aided design. As the related literature exemplifies, certain new functionalities and novel tools have been developed by researchers in academia, but they have not been integrated into commercial systems and industrial best practices. What is your opinion? How has ubiquitous computing influenced the development of CAD systems, tools, and methods? What new functionalities can still be expected?
Is there a recent study of the forms and tools of context-aware information retrieval? In ubiquitous systems, ontologies are widely used to represent context information, and information retrieval is a highly complex task because of the heterogeneity of sources and devices.
SensorML is a standard for specifying sensor data processing, e.g., information fusion, sensor fusion, and data fusion. The Open Geospatial Consortium (http://www.opengeospatial.org/) is working on SensorML (as well as TransducerML).
The research and growth in mobile, ubiquitous and pervasive computing is occurring at an unprecedented rate. With the emergence of commodity wearable computers such as Google Glass, Recon Jet, etc., and with VLSI and SSD memory apparently exceeding Moore's predictions, where does this leave us by 2020? What will contextually-aware apps do? Will the synergy between ubiquitous computing, machine learning, adaptive systems and HCI reach a singularity where apps that actually pass the Turing Test will be commonplace? What do you think?
Mobile learning technology strategies across educational institutions are not unilateral. A review of the strategies shows diverse information and requirements. Please take 5 minutes to complete this survey in support of mobile strategy research.
In classroom environments, teachers or lecturers may decide to conduct tests or examinations for students. What are some of the challenges that can be encountered while applying ubiquitous computing devices to classroom environments?
As an advanced computing paradigm, pervasive computing has numerous benefits. How can pervasive computing be applied to classroom environments, especially in enhancing learning and promoting well-being of students?
Good answers will be really appreciated.
I would actually be excited to collect and read the most controversial papers in the ubiquitous and pervasive computing fields. I am sure there is a lot of interesting material to read, but collecting a (more or less) refined reading list is not that easy for a single researcher; it should be driven by the community for richer content. Such material could also be used by others and, more importantly, for a heated discussion at ubiquitous computing seminars.
What I have seen so far is that these two concepts have been mixed with more recent ones, such as Ambient Intelligence, and have converged into something fuzzy. People now use both concepts synonymously.