Article

Multiagent mobility and lifestyle recommender system for individuals with visual impairment

Abstract

Background: Individuals with visual impairment currently rely on walking sticks and guide dogs for mobility. However, both tools require the user to have a mental map of the area and cannot help the user establish detailed information about their surroundings, including weather, location, and businesses. Purpose and Methods: This study designed a navigation and recommendation system with context awareness for individuals with visual impairment. The study used the Process for Agent Societies Specification and Implementation (PASSI), a multiagent development methodology that follows the Foundation for Intelligent Physical Agents framework. The model used the Agent Unified Modeling Language (AUML). Results: The developed system contains a context awareness module and a multiagent system. The context awareness module collects data on user context through sensors and constructs a user profile. The user profile is transferred to the multiagent system for service recommendations. The multiagent system has four agents (a consultant agent, a search agent, a combination agent, and a dispatch agent) and integrates machine learning and deep learning. AUML tools were used to describe the implementation and structure of the system through use-case, sequence, class, and state diagrams. Conclusions: The developed system understands the needs of the user through the context awareness module and finds services that best meet the user's needs through the agent recommendation mechanism. The system can be used on Android phones and tablets and improves the ease with which individuals with visual impairment can obtain the services they need.
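The paper itself does not include code, so the following Python sketch only illustrates, under assumed class names, fields, and a toy scoring rule, how a user profile produced by a context-awareness module might flow through the four agents named above; it is not the authors' implementation.

```python
# Hypothetical sketch of the four-agent recommendation pipeline described above.
# All class names, fields, and scoring rules are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class UserProfile:
    location: tuple          # (latitude, longitude) from a GPS sensor
    weather: str             # e.g. "rain", from a weather API
    preferences: list = field(default_factory=list)  # e.g. ["restaurant"]


class ConsultantAgent:
    """Receives the user profile and turns it into a service query."""
    def build_query(self, profile: UserProfile) -> dict:
        return {"near": profile.location, "categories": profile.preferences,
                "indoor_preferred": profile.weather == "rain"}


class SearchAgent:
    """Queries service sources (a static list stands in for real APIs here)."""
    def search(self, query: dict, catalogue: list) -> list:
        return [s for s in catalogue if s["category"] in query["categories"]]


class CombinationAgent:
    """Ranks candidate services against the query (toy scoring rule)."""
    def rank(self, candidates: list, query: dict) -> list:
        def score(s):
            return (s["rating"], query["indoor_preferred"] == s["indoor"])
        return sorted(candidates, key=score, reverse=True)


class DispatchAgent:
    """Formats the top recommendation for a voice interface."""
    def dispatch(self, ranked: list) -> str:
        if not ranked:
            return "No suitable service found."
        best = ranked[0]
        return f"Recommended: {best['name']} ({best['category']})"


if __name__ == "__main__":
    catalogue = [
        {"name": "Cafe A", "category": "restaurant", "rating": 4.5, "indoor": True},
        {"name": "Street Stand B", "category": "restaurant", "rating": 4.0, "indoor": False},
    ]
    profile = UserProfile(location=(25.03, 121.56), weather="rain",
                          preferences=["restaurant"])
    query = ConsultantAgent().build_query(profile)
    found = SearchAgent().search(query, catalogue)
    ranked = CombinationAgent().rank(found, query)
    print(DispatchAgent().dispatch(ranked))
```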

Article
Full-text available
Neuroscience as a Service (NaaS) may enable neuroscience-related healthcare and scientific research to be conducted in natural environments and settings rather than in equipment rooms in laboratories and medical centers. NaaS is somewhat analogous to the concept of Software as a Service (SaaS): decentralized cloud-based computing in which a third-party provider hosts applications and makes them available to stakeholders through the internet, enabling customers to focus on their domain expertise rather than attempting to run complex data centers, technology stacks, and other network infrastructures. By leveraging the interdisciplinary domains of state-of-the-art AI, machine learning, neuroscience, engineering, healthcare, and physics, NaaS can create innovative platforms that may accelerate neuroscience deployment. Recent advancements in multimedia using emerging technologies contribute state-of-the-art methodologies, systems, and innovative uses of multimedia-based emerging technology services for health care.
Article
Full-text available
The COVID-19 epidemic has swiftly disrupted our day-to-day lives, affecting international trade and movement. Wearing a face mask has become the new normal. In the near future, many public service providers will expect clients to wear masks appropriately in order to use their services. Face mask detection has therefore become a critical task in aiding society worldwide. This paper provides a simple way to achieve this objective using fundamental machine learning tools such as TensorFlow, Keras, OpenCV, and Scikit-Learn. The suggested technique successfully recognises the face in an image or video and then determines whether or not it is wearing a mask. Used for surveillance, it can also recognise a masked face in motion as well as in video. The technique attains excellent accuracy. We investigate optimal parameter values for the convolutional neural network (CNN) model in order to detect the presence of masks accurately without over-fitting.
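The abstract names TensorFlow/Keras and OpenCV but gives no code; the sketch below shows a minimal binary mask/no-mask CNN of the general kind described, with an illustrative layer configuration and a hypothetical dataset directory, not the parameters tuned in the paper.

```python
# Minimal mask / no-mask CNN sketch in Keras, assuming a dataset laid out as
# dataset/with_mask and dataset/without_mask (hypothetical path). Layer sizes
# are illustrative only.
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = (128, 128)

model = models.Sequential([
    layers.Rescaling(1.0 / 255, input_shape=IMG_SIZE + (3,)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dropout(0.5),                    # dropout to limit over-fitting
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # 1 = mask, 0 = no mask
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Hypothetical training data; labels are inferred from the two sub-folder names.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "dataset", image_size=IMG_SIZE, batch_size=32, label_mode="binary")
model.fit(train_ds, epochs=5)
```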
Article
Full-text available
In a rapidly growing and evolving world of technology, outbreaks and emerging diseases have become a critical issue. Precaution, prevention, and control of disease through technology have become major challenges for healthcare professionals and healthcare industries, and maintaining a healthy lifestyle has become difficult amid busy work schedules. Smart health monitoring (SHM) systems address these challenges. The recent revolution of Industry 5.0 and 5G has led to the development of smart, cost-effective sensors that support real-time health monitoring of individuals. SHM enables fast, cost-effective, and reliable health monitoring from remote locations, which was not possible with traditional healthcare systems. The integration of blockchain frameworks has improved the security and privacy of confidential patient data and prevents its misuse. Deep learning and machine learning are used to analyze health data for multiple targets, supporting preventive healthcare and fatality management and enabling early detection of chronic diseases that was not possible until recently. To make the services more cost-effective and real-time, cloud computing and cloud storage have been integrated. This work presents a systematic review of SHM, its recent advancements, and the challenges that remain.
Article
Full-text available
On an MRI scan of the brain, the boundaries between tissues are highly convoluted and irregular, which poses a severe test for older segmentation algorithms. Here, researchers categorize normal and abnormal tissue using the fuzzy min-max neural network approach, which classifies tissues such as GM, CSF, WM, OCS, and OSS. Osseous Spongy Substance, SCALP, and Osseous Compact Substance are classified as aberrant tissue on MRI. Denoising and image enhancement are accomplished using the Gabor filtering technique, after which the tumor component is accurately identified during segmentation. A dynamically adjusted region growing approach is applied to the image by modifying the two thresholds of the Modified Region Growing method, raising its upper and lower bounds. Once region growing is complete, the edges can be observed using edge detection on the Modified Region Growing segmented image. After removing the texture, an entropy-based method is used to abstract the colour information. After the results of the Dynamic Modified Region Growing phase are merged with those from the texture feature generation phase, a distance comparison within regions is performed to combine comparable areas in the region merging phase. Once tissues have been identified, a Fuzzy Min-Max Neural Network is used to categorise them.
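As one concrete piece of the pipeline above, the sketch below applies OpenCV Gabor filtering to an MRI slice for denoising and enhancement before segmentation; the kernel parameters, orientation bank, and file name are assumptions, and the region growing and fuzzy min-max classification steps are not shown.

```python
# Gabor filtering of an MRI slice prior to segmentation (illustrative
# parameters; the paper's exact kernel settings are not given here).
import cv2
import numpy as np

img = cv2.imread("brain_slice.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file

# Build a small bank of Gabor kernels at several orientations and keep,
# per pixel, the strongest response, which tends to enhance tissue boundaries.
responses = []
for theta in np.arange(0, np.pi, np.pi / 4):
    kernel = cv2.getGaborKernel(ksize=(21, 21), sigma=4.0, theta=theta,
                                lambd=10.0, gamma=0.5, psi=0, ktype=cv2.CV_32F)
    kernel /= kernel.sum() + 1e-8        # normalise to limit brightness shift
    responses.append(cv2.filter2D(img, cv2.CV_32F, kernel))

enhanced = np.max(np.stack(responses), axis=0)
cv2.imwrite("brain_slice_gabor.png", np.clip(enhanced, 0, 255).astype(np.uint8))
```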
Research Proposal
Full-text available
Neuroscience as a Service (NaaS) may enable neuroscience-related healthcare and scientific research to be conducted in natural environments and settings rather than in equipment rooms in laboratories and medical centers. NaaS is somewhat analogous to the concept of Software as a Service (SaaS): decentralized cloud-based computing in which a third-party provider hosts applications and makes them available to stakeholders through the internet, enabling customers to focus on their domain expertise rather than attempting to run complex data centers, technology stacks, and other network infrastructures. By leveraging the interdisciplinary domains of state-of-the-art AI, machine learning, neuroscience, engineering, healthcare, and physics, NaaS can create innovative platforms that may accelerate neuroscience deployment. Recent advancements in multimedia using emerging technologies contribute state-of-the-art methodologies, systems, and innovative uses of multimedia-based emerging technology services for health care. Topics include, but are not restricted to:
• Emerging multimedia processing in neuroscience as a service for health care
• Emerging applications for managing neuroscience as a service for medical media data
• Innovative use of artificial intelligence techniques, algorithms, and methods to monitor and track casualties and contacts against the outbreak of epidemic diseases and beyond
• Case studies along with the design or development of innovative multimedia smart healthcare materials, tools, and devices
• Mobile multimedia emerging technologies for health care
• Emerging-technology-based health monitoring
• M-QoE/M-QoS/M-QoC variations in health emerging-technology applications
• Emerging-technology-based remote display protocol for health care
• Media-cloud-based resource allocation approaches
• Emerging-technology-based models for speech-enabled healthcare
• ML/DL-based patient condition screening, visualization, and monitoring
• AI-empowered multimedia healthcare data analytics in infectious diseases and beyond
• Emerging media cloud protocols, surveys, applications, and new research approaches
• Emerging-technology-based models for automatic detection of mental disease at home
• Usage of neuroscience as a service in improving customer services
• Use of neuroscience as a service for business improvement
Article
Full-text available
Over the last decades, the development of navigation devices capable of guiding the blind through indoor and/or outdoor scenarios has remained a challenge. In this context, this paper’s objective is to provide an updated, holistic view of this research, in order to enable developers to exploit the different aspects of its multidisciplinary nature. To that end, previous solutions will be briefly described and analyzed from a historical perspective, from the first “Electronic Travel Aids” and early research on sensory substitution or indoor/outdoor positioning, to recent systems based on artificial vision. Thereafter, user-centered design fundamentals are addressed, including the main points of criticism of previous approaches. Finally, several technological achievements are highlighted as they could underpin future feasible designs. In line with this, smartphones and wearables with built-in cameras will then be indicated as potentially feasible options with which to support state-of-art computer vision solutions, thus allowing for both the positioning and monitoring of the user’s surrounding area. These functionalities could then be further boosted by means of remote resources, leading to cloud computing schemas or even remote sensing via urban infrastructure. Link: https://www.mdpi.com/1424-8220/19/15/3404
Article
Full-text available
Introduction: This article describes an evaluation of MagNav, a speech-based, infrastructure-free indoor navigation system. The research was conducted in the Mall of America, the largest shopping mall in the United States, to empirically investigate the impact of memory load on route-guidance performance. Method: Twelve participants who are blind and 12 age-matched sighted controls participated in the study. Comparisons are made for route-guidance performance between use of updated, real-time route instructions (system-aided condition) and a system-unaided (memory-based) condition where the same instructions were only provided in advance of route travel. The sighted controls (who navigated under typical visual perception but used the system for route guidance) represent a best-case comparison benchmark for the blind participants who used the system. Results: Results across all three test measures provide compelling behavioral evidence that blind navigators receiving real-time verbal information from the MagNav system performed route travel faster (navigation time), more accurately (fewer errors in reaching the destination), and more confidently (fewer requests for bystander assistance) compared to conditions where the same route information was only available to them in advance of travel. In addition, no statistically reliable differences were observed for any measure in the system-aided conditions between the blind and sighted participants. Posttest survey results corroborate the empirical findings, further supporting the efficacy of the MagNav system. Discussion: This research provides compelling quantitative and qualitative evidence for the utility of an infrastructure-free, low-memory-demand navigation system for supporting route guidance through complex indoor environments, and it supports the theory that functionally equivalent navigation performance is possible when access to real-time environmental information is available, irrespective of visual status. Implications for designers and practitioners: The findings highlight the importance of developers of accessible navigation systems employing interfaces that minimize memory demands.
Article
Full-text available
Agile methods in general, and the Scrum method in particular, are gaining more and more trust from the software developer community. When it comes to writing functional requirements, user stories are becoming more and more widely used by the community. Furthermore, a considerable effort has already been made by the community regarding the use of the use case tool when drafting requirements and in terms of model transformation; we have reached a certain stage of maturity at this level. The idea of our paper is to profit from this richness and invest it in the drafting of user stories. In this paper, we propose a process for transforming user stories into use cases, which allows us to benefit from all the work done on model transformation according to the MDA approach. To do this, we used natural language processing (NLP) techniques by applying the TreeTagger parser. Our work was validated by a case study in which we obtained very positive precision, between 87% and 98%.
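The authors use the TreeTagger parser and MDA model transformations; as a much simpler stand-in for that pipeline, the sketch below extracts the role, action, and benefit of the canonical user story template with a regular expression and emits a use-case-like dictionary. The template, field names, and output shape are assumptions made for illustration, not the paper's transformation rules.

```python
# Toy user story -> use case extraction. The paper relies on the TreeTagger
# NLP parser and MDA transformations; this regex template is only a
# simplified illustration of the same idea.
import re

STORY_PATTERN = re.compile(
    r"As an? (?P<role>.+?), I want to (?P<action>.+?)"
    r"(?: so that (?P<benefit>.+))?\.?$", re.IGNORECASE)


def story_to_use_case(story: str) -> dict:
    match = STORY_PATTERN.match(story.strip())
    if not match:
        raise ValueError("Story does not follow the 'As a ..., I want ...' template")
    return {
        "actor": match.group("role"),
        "use_case": match.group("action"),
        "goal": match.group("benefit") or "",
        "preconditions": [],   # to be filled by later analysis steps
    }


print(story_to_use_case(
    "As a visually impaired user, I want to hear nearby restaurants "
    "so that I can choose where to eat."))
```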
Conference Paper
Full-text available
Continuous and accurate smartphone-based localization is a promising technology for supporting independent mobility of people with visual impairments. However, despite extensive research on indoor localization techniques, they are still not ready for deployment in large and complex environments, like shopping malls and hospitals, where navigation assistance is needed. To achieve accurate, continuous, and real-time localization with smartphones in such environments, we present a series of key techniques enhancing a probabilistic localization algorithm. The algorithm is designed for smartphones and employs inertial sensors on a mobile device and Received Signal Strength (RSS) from Bluetooth Low Energy (BLE) beacons. We evaluate the proposed system in a 21,000 m2 shopping mall which includes three multi-story buildings and a large open underground passageway. Experiments in this space validate the effect of the proposed technologies to improve localization accuracy. Field experiments with visually impaired participants confirm the practical performance of the proposed system in realistic use cases.
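The paper's localization algorithm fuses inertial sensing with BLE RSS in a probabilistic filter; the sketch below shows only the RSS likelihood and particle re-weighting step under a standard log-distance path-loss model. All constants (transmit power, path-loss exponent, noise) and the beacon layout are assumptions, not the paper's calibrated values.

```python
# Particle re-weighting from BLE RSS under a log-distance path-loss model.
# Constants and the beacon layout are illustrative assumptions.
import numpy as np

TX_POWER = -59.0        # expected RSS in dBm at 1 m
PATH_LOSS_EXP = 2.0     # path-loss exponent
RSS_SIGMA = 4.0         # RSS measurement noise (dB)


def expected_rss(particles: np.ndarray, beacon_xy: np.ndarray) -> np.ndarray:
    """Predicted RSS at each particle position for one beacon."""
    d = np.linalg.norm(particles - beacon_xy, axis=1)
    d = np.maximum(d, 0.1)                      # avoid log(0) right at the beacon
    return TX_POWER - 10.0 * PATH_LOSS_EXP * np.log10(d)


def reweight(particles: np.ndarray, weights: np.ndarray,
             beacons: dict, observed: dict) -> np.ndarray:
    """Multiply weights by the Gaussian likelihood of each observed RSS."""
    for beacon_id, rss in observed.items():
        pred = expected_rss(particles, np.asarray(beacons[beacon_id]))
        weights = weights * np.exp(-0.5 * ((rss - pred) / RSS_SIGMA) ** 2)
    return weights / weights.sum()


# Tiny usage example: 1000 particles in a 20 m x 20 m area, two beacons.
rng = np.random.default_rng(0)
particles = rng.uniform(0, 20, size=(1000, 2))
weights = np.full(1000, 1.0 / 1000)
beacons = {"b1": (2.0, 3.0), "b2": (18.0, 15.0)}
observed = {"b1": -70.0, "b2": -85.0}
weights = reweight(particles, weights, beacons, observed)
print("Most likely position:", particles[np.argmax(weights)])
```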
Article
Full-text available
A novel visual and infrared sensor data-based system to assist visually impaired users in detecting obstacles in their path while independently navigating indoors is presented. The system has been developed for the recently introduced Google Project Tango Tablet Development Kit equipped with a powerful graphics processor and several sensors which allow it to track its motion and orientation in 3D space in real-time. It exploits the inbuilt functionalities of the Unity engine in the Tango SDK to create a 3D reconstruction of the surrounding environment, then associates a Unity collider component with the user and utilizes it to determine his interaction with the reconstructed mesh in order to detect obstacles. The user is provided with audio feedback consisting of obstacle warnings. An extensive empirical evaluation of the obstacle detection component has yielded favorable results, thus, confirming the potential of this system for future development work. (Open access article: full text available at DOI: 10.1109/ACCESS.2017.2766579)
Article
Full-text available
Visually impaired people are often unaware of dangers in front of them, even in familiar environments. Furthermore, in unfamiliar environments, such people require guidance to reduce the risk of colliding with obstacles. This study proposes a simple smartphone-based guiding system for solving the navigation problems of visually impaired people and achieving obstacle avoidance, enabling them to travel smoothly from a starting point to a destination with greater awareness of their surroundings. In this study, a computer image recognition system and a smartphone application were integrated to form a simple assisted guiding system. Two operating modes, online mode and offline mode, can be chosen depending on network availability. When the system begins to operate, the smartphone captures the scene in front of the user and sends the captured images to the backend server to be processed. The backend server uses the faster region-based convolutional neural network (Faster R-CNN) algorithm or the you only look once (YOLO) algorithm to recognize multiple obstacles in every image, and it subsequently sends the results back to the smartphone. The obstacle recognition accuracy in this study reached 60%, which is sufficient for assisting visually impaired people in realizing the types and locations of obstacles around them.
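The system forwards each captured frame to a backend server that runs Faster R-CNN or YOLO; the Flask sketch below shows only the shape of such an endpoint, with the actual detector hidden behind a hypothetical detect_obstacles() hook rather than a specific model. The route name and response format are assumptions.

```python
# Backend endpoint sketch for smartphone -> server obstacle recognition.
# detect_obstacles() is a hypothetical stand-in for Faster R-CNN / YOLO
# inference; only the request/response plumbing is shown.
import cv2
import numpy as np
from flask import Flask, jsonify, request

app = Flask(__name__)


def detect_obstacles(image: np.ndarray) -> list:
    """Placeholder for the real detector; returns [(label, confidence, box), ...]."""
    return [("chair", 0.87, (120, 200, 260, 380))]   # dummy result


@app.route("/detect", methods=["POST"])
def detect():
    raw = request.files["image"].read()
    image = cv2.imdecode(np.frombuffer(raw, np.uint8), cv2.IMREAD_COLOR)
    results = [{"label": label, "confidence": conf, "box": list(box)}
               for label, conf, box in detect_obstacles(image)]
    return jsonify(obstacles=results)


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```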
Article
Full-text available
This paper presents a 6-DOF pose estimation (PE) method and an indoor wayfinding system based on the method for the visually impaired. The PE method involves two graph SLAM processes to reduce the accumulative pose error of the device. In the first step, the floor plane is extracted from the 3D camera’s point cloud and added as a landmark node into the graph for 6-DOF SLAM to reduce roll, pitch and Z errors. In the second step, the wall lines are extracted and incorporated into the graph for 3-DOF SLAM to reduce X, Y and yaw errors. The method reduces the 6-DOF pose error and results in more accurate pose with less computational time than the state-of-the-art planar SLAM methods. Based on the PE method, a wayfinding system is developed for navigating a visually impaired person in an indoor environment. The system uses the estimated pose and floorplan to locate the device user in a building and guides the user by announcing the points of interest and navigational commands through a speech interface. Experimental results validate the effectiveness of the PE method and demonstrate that the system may substantially ease an indoor navigation task.
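One step the abstract describes is extracting the floor plane from the 3D camera's point cloud before adding it to the SLAM graph as a landmark; the numpy sketch below fits a plane to roughly floor-height points by least squares (SVD). The height threshold, synthetic cloud, and frame convention are assumptions, and the graph-SLAM machinery itself is not shown.

```python
# Least-squares floor-plane fit from a depth-camera point cloud, illustrating
# the plane-landmark extraction step only; the 0.2 m height threshold and the
# synthetic data are assumed values.
import numpy as np


def fit_plane(points: np.ndarray):
    """Fit n.x = d to Nx3 points; returns unit normal n and offset d."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                       # direction of least variance
    return normal, float(normal @ centroid)


# Hypothetical cloud: mostly floor (z ~ 0) plus some higher obstacle points.
rng = np.random.default_rng(1)
floor = np.column_stack([rng.uniform(0, 5, 500), rng.uniform(0, 5, 500),
                         rng.normal(0.0, 0.01, 500)])
obstacle = np.column_stack([rng.uniform(1, 2, 100), rng.uniform(1, 2, 100),
                            rng.uniform(0.5, 1.5, 100)])
cloud = np.vstack([floor, obstacle])

# Keep only points near the expected floor height before fitting.
candidates = cloud[cloud[:, 2] < 0.2]
normal, d = fit_plane(candidates)
print("floor normal:", np.round(normal, 3), "offset:", round(d, 3))
```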
Article
Full-text available
The World Health Organization (WHO) reported that there are 285 million visually-impaired people worldwide. Among these individuals, there are 39 million who are totally blind. There have been several systems designed to support visually-impaired people and to improve the quality of their lives. Unfortunately, most of these systems are limited in their capabilities. In this paper, we present a comparative survey of the wearable and portable assistive devices for visually-impaired people in order to show the progress in assistive technology for this group of people. Thus, the contribution of this literature survey is to discuss in detail the most significant devices that are presented in the literature to assist this population and highlight the improvements, advantages, disadvantages, and accuracy. Our aim is to address and present most of the issues of these systems to pave the way for other researchers to design devices that ensure safety and independent mobility to visually-impaired people.
Article
Full-text available
In this paper, we introduce a real-time face recognition (and announcement) system targeted at aiding the blind and low-vision people. The system uses a Microsoft Kinect sensor as a wearable device, performs face detection, and uses temporal coherence along with a simple biometric procedure to generate a sound associated with the identified person, virtualized at his/her estimated 3-D location. Our approach uses a variation of the K-nearest neighbors algorithm over histogram of oriented gradient descriptors dimensionally reduced by principal component analysis. The results show that our approach, on average, outperforms traditional face recognition methods while requiring much less computational resources (memory, processing power, and battery life) when compared with existing techniques in the literature, deeming it suitable for the wearable hardware constraints. We also show the performance of the system in the dark, using depth-only information acquired with Kinect’s infrared camera. The validation uses a new dataset available for download, with 600 videos of 30 people, containing variation of illumination, background, and movement patterns. Experiments with existing datasets in the literature are also considered. Finally, we conducted user experience evaluations on both blindfolded and visually impaired users, showing encouraging results.
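The abstract describes a K-nearest-neighbors variant over HOG descriptors reduced with PCA; a conventional scikit-image/scikit-learn version of that chain is sketched below, using plain k-NN rather than the authors' modified algorithm and random arrays standing in for pre-cropped face images.

```python
# HOG -> PCA -> k-NN face identification sketch. Uses plain KNeighborsClassifier
# rather than the authors' KNN variant; faces and labels are assumed to be
# pre-cropped 64x64 grayscale images (random stand-ins here).
import numpy as np
from skimage.feature import hog
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline


def hog_features(faces: np.ndarray) -> np.ndarray:
    """Compute a HOG descriptor per 64x64 face image."""
    return np.array([hog(face, orientations=9, pixels_per_cell=(8, 8),
                         cells_per_block=(2, 2)) for face in faces])


# Hypothetical training data: random stand-ins for cropped face images.
rng = np.random.default_rng(0)
train_faces = rng.random((60, 64, 64))
train_labels = np.repeat(np.arange(6), 10)      # 6 identities, 10 images each

model = make_pipeline(PCA(n_components=30), KNeighborsClassifier(n_neighbors=3))
model.fit(hog_features(train_faces), train_labels)

test_face = rng.random((1, 64, 64))
print("predicted identity:", model.predict(hog_features(test_face))[0])
```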
Conference Paper
Full-text available
In recent years, cloud computing has been gaining popularity because it can efficiently utilize computing resources and hence contribute to Green IT by saving energy. To commercialize cloud services, cloud markets are necessary and are being developed. As an increasing number of diverse cloud services rapidly evolve in the cloud market, selecting the best and most suitable services becomes a great challenge. In this paper, we present a cloud service selection framework for the cloud market that uses a recommender system (RS) to help a user select the best services from different cloud providers (CPs) matching the user's requirements. The RS recommends a service based on the network QoS and virtual machine (VM) platform factors of different CPs. The experimental results show that our cloud service recommender system (CSRS) can effectively recommend a good combination of cloud services to consumers.
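The described recommender ranks cloud providers on network QoS and VM platform factors; the sketch below illustrates one simple way to do that, a weighted min-max-normalised score over a few assumed attributes (latency, bandwidth, vCPUs, memory, price). The attributes, weights, and normalisation are assumptions, not the CSRS algorithm from the paper.

```python
# Weighted-score ranking of cloud providers over assumed QoS / VM attributes.
providers = {
    "CP-A": {"latency_ms": 40, "bandwidth_mbps": 800, "vcpus": 4, "ram_gb": 16, "price": 0.20},
    "CP-B": {"latency_ms": 25, "bandwidth_mbps": 500, "vcpus": 8, "ram_gb": 32, "price": 0.35},
    "CP-C": {"latency_ms": 60, "bandwidth_mbps": 900, "vcpus": 2, "ram_gb": 8,  "price": 0.10},
}
# Positive weight = higher is better, negative = lower is better.
weights = {"latency_ms": -0.3, "bandwidth_mbps": 0.2, "vcpus": 0.2,
           "ram_gb": 0.2, "price": -0.1}


def score(provider: dict) -> float:
    """Weighted sum of min-max-normalised attributes across all providers."""
    total = 0.0
    for attr, w in weights.items():
        values = [p[attr] for p in providers.values()]
        lo, hi = min(values), max(values)
        norm = (provider[attr] - lo) / (hi - lo) if hi > lo else 0.0
        total += w * norm
    return total


for name, attrs in sorted(providers.items(), key=lambda kv: score(kv[1]), reverse=True):
    print(f"{name}: score={score(attrs):.3f}")
```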
Article
Full-text available
If you only need agents to search the Web for cheap CDs, scalability is not an issue. The Web can support numerous agents if each acts independently. In short order, however, billions of embedded agents that sense their environment and interact with us and other agents will fill our world, making the human environment friendlier and more efficient. These agents will need not only scalable infrastructures and communication services, but also scalable social services encompassing ethics and laws. Research projects are under way around the world to develop and deploy such services. The author takes a look at the critical relationship between scalability and intelligent agents.
Article
Objective: To investigate structural and functional alterations in patients with idiopathic rapid eye movement (REM) sleep behavior disorder (iRBD) compared with healthy controls. Methods: Twenty-seven patients with polysomnography-confirmed iRBD and 33 healthy subjects were recruited. All subjects underwent a 3-tesla structural and resting-state functional magnetic resonance imaging (fMRI) examination. Voxel-based morphometry (VBM) analysis was performed to assess grey matter alterations between groups. The amplitude of low-frequency fluctuations (ALFF) was calculated and then compared to measure differences in spontaneous brain activity. Correlations were performed to explore associations between imaging metrics and clinical characteristics in iRBD patients. Results: Compared with healthy controls, patients with iRBD had decreased grey matter volume in the frontal, temporal, parietal, and occipital cortices as well as increased grey matter volume in the cerebellum posterior lobe, putamen, and thalamus. Patients with iRBD also exhibited increased ALFF values in the right parahippocampal gyrus. Olfaction correlated with ALFF value changes in occipital cortices. Conclusions: Patients with iRBD had widespread decreases of grey matter volume. Increases of grey matter volume in the cerebellum, putamen, and thalamus may suggest a compensatory effect, while the altered ALFF values in the parahippocampal gyrus and occipital cortices may play a role in the underlying process of neurodegeneration in this disorder.
Article
Recent statistics of the World Health Organization (WHO), published in October 2017, estimate that more than 253 million people worldwide suffer from visual impairment (VI), with 36 million blind and 217 million with low vision. In the last decade, there has been a tremendous amount of work in developing wearable assistive devices dedicated to visually impaired people, aiming to increase user cognition when navigating in known/unknown, indoor/outdoor environments and designed to improve the quality of life of people with VI. This paper presents a survey of wearable assistive devices and provides a critical presentation of each system, while emphasizing related strengths and limitations. The paper is designed to inform the research community and people with VI about the capabilities of existing systems and the progress in assistive technologies, and to provide a glimpse into the possible short- and medium-term axes of research that can improve existing devices. The survey is based on various features and performance parameters, established with the help of the blind community, that allow systems to be classified using both qualitative and quantitative measures of evaluation. This makes it possible to rank the analyzed systems based on their potential impact on the lives of people with VI.
Article
This paper presents a new holistic vision-based mobile assistive navigation system to help blind and visually impaired people with indoor independent travel. The system detects dynamic obstacles and adjusts path planning in real-time to improve navigation safety. First, we develop an indoor map editor to parse geometric information from architectural models and generate a semantic map consisting of a global 2D traversable grid map layer and context-aware layers. By leveraging the visual positioning service (VPS) within the Google Tango device, we design a map alignment algorithm to bridge the visual area description file (ADF) and semantic map to achieve semantic localization. Using the on-board RGB-D camera, we develop an efficient obstacle detection and avoidance approach based on a time-stamped map Kalman filter (TSM-KF) algorithm. A multi-modal human-machine interface (HMI) is designed with speech-audio interaction and robust haptic interaction through an electronic SmartCane. Finally, field experiments by blindfolded and blind subjects demonstrate that the proposed system provides an effective tool to help blind individuals with indoor navigation and wayfinding.
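The system above tracks dynamic obstacles with a time-stamped map Kalman filter (TSM-KF); as a generic illustration only, not the TSM-KF itself, the sketch below runs a constant-velocity Kalman filter over noisy 2D obstacle positions, with all noise parameters and the frame interval assumed.

```python
# Generic constant-velocity Kalman filter over noisy 2D obstacle positions,
# shown only to illustrate the kind of filtering involved; this is not the
# paper's time-stamped map Kalman filter (TSM-KF).
import numpy as np

dt = 0.1                                   # assumed frame interval (s)
F = np.array([[1, 0, dt, 0],               # state: [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)  # only position is measured
Q = np.eye(4) * 1e-3                       # process noise (assumed)
R = np.eye(2) * 0.05                       # measurement noise (assumed)

x = np.zeros(4)                            # initial state estimate
P = np.eye(4)

rng = np.random.default_rng(0)
true_pos = np.array([0.0, 0.0])
velocity = np.array([0.5, 0.2])            # obstacle moving at 0.5, 0.2 m/s

for step in range(20):
    true_pos = true_pos + velocity * dt
    z = true_pos + rng.normal(0, 0.2, size=2)          # noisy detection
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    y = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P

print("estimated position:", np.round(x[:2], 2), "true:", np.round(true_pos, 2))
```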
Article
Neuro-ophthalmological pathologies lie at the border between ophthalmology and neurology. They can be divided into visual disorders and oculomotor disorders; here we discuss only the visual disorders. They manifest as decreased visual acuity (the patient does not see well or sees blurry) or as a visual field defect (the patient does not see part of their surroundings). Imaging to look for the pathology causing these disorders is often requested, sometimes urgently. This imaging, essentially MRI, must follow a very specific protocol, and not all radiologists are accustomed to managing these patients. MRI allows examination of the optic pathways as well as the brain. Performing it requires theoretical and practical knowledge of the anatomy of the visual structures and of the main pathologies that can affect them. The role of the radiographer is therefore essential, since the quality of the examination makes it possible, if the radiologist is not familiar with the pathology, to have it re-read in a specialized center. We review the most frequent pathologies likely to cause visual disorders: sudden visual loss from vascular damage, and progressive visual loss from inflammatory or tumoral (often compressive) involvement. We emphasize the MRI protocol, patient preparation with practical tips to improve the images, and assessment of examination quality. An explanation of the required sequences and their parameters is provided to help radiographers best approach these specialized examinations.
Conference Paper
People with visual impairments face challenges when navigating indoor environments, such as train stations and shopping malls. Prior approaches either require dedicated hardware that is expensive and bulky or may not be suitable for such complex spaces. This paper aims to propose a practical solution that enables blind travelers to navigate a complex train station independently using a smartphone without the need for any special hardware. Utilizing Bluetooth Low Energy (BLE) technology and a smartphone's built-in compass, we developed StaNavi -- a navigation system that provides turn-by-turn voice directions inside Tokyo Station, one of the world's busiest train stations, which has more than 400,000 passengers daily. StaNavi was iteratively co-designed with blind users to provide features tailored to their needs that include interfaces for one-handed use while walking with a cane and a route overview to provide a picture of the entire journey in advance. It also offers cues that help users orient themselves in convoluted paths or open spaces. A field test with eight blind users demonstrates that all users could reach given destinations in real-life scenarios, showing that our system was effective in a complex and highly crowded environment and has great potential for large-scale deployment.
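StaNavi combines BLE positioning with the phone's built-in compass to give turn-by-turn voice directions; the sketch below shows one simple way to turn a compass heading and the bearing of the next waypoint into a spoken instruction. The 30-degree threshold and the local flat-earth bearing approximation are assumptions for illustration, not StaNavi's actual logic.

```python
# Turning compass heading + next-waypoint bearing into a voice instruction.
# Threshold and planar bearing are illustrative assumptions.
import math


def bearing_to(current_xy, target_xy) -> float:
    """Bearing in degrees (0 = north, clockwise) on a local planar map."""
    dx, dy = target_xy[0] - current_xy[0], target_xy[1] - current_xy[1]
    return math.degrees(math.atan2(dx, dy)) % 360


def turn_instruction(compass_heading: float, current_xy, target_xy) -> str:
    # Signed difference in (-180, 180]: positive means the waypoint is to the right.
    diff = (bearing_to(current_xy, target_xy) - compass_heading + 540) % 360 - 180
    if abs(diff) < 30:
        return "Continue straight."
    side = "right" if diff > 0 else "left"
    return f"Turn {side} about {abs(round(diff))} degrees."


# Example: user faces east (90 degrees) and the next waypoint is due north.
print(turn_instruction(90.0, (0.0, 0.0), (0.0, 10.0)))   # -> "Turn left about 90 degrees."
```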
Article
This paper presents an overview of the field of recommender systems and describes the current generation of recommendation methods that are usually classified into the following three main categories: content-based, collaborative, and hybrid recommendation approaches. This paper also describes various limitations of current recommendation methods and discusses possible extensions that can improve recommendation capabilities and make recommender systems applicable to an even broader range of applications. These extensions include, among others, an improvement of understanding of users and items, incorporation of the contextual information into the recommendation process, support for multicriteria ratings, and a provision of more flexible and less intrusive types of recommendations.
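Of the three recommendation families named above, the collaborative approach is the easiest to sketch compactly: below, a tiny user-based collaborative filter predicts a missing rating from cosine-similar users. The ratings matrix is invented for illustration and the method is a textbook baseline, not a specific algorithm from the survey.

```python
# Minimal user-based collaborative filtering: predict user 0's rating of item 2
# from the ratings of cosine-similar users. The matrix is an invented example.
import numpy as np

# rows = users, cols = items, 0 = not yet rated
R = np.array([[5, 3, 0, 1],
              [4, 0, 4, 1],
              [1, 1, 5, 5],
              [0, 1, 4, 4]], dtype=float)


def cosine(u: np.ndarray, v: np.ndarray) -> float:
    mask = (u > 0) & (v > 0)              # compare only co-rated items
    if not mask.any():
        return 0.0
    return float(u[mask] @ v[mask] / (np.linalg.norm(u[mask]) * np.linalg.norm(v[mask])))


def predict(user: int, item: int) -> float:
    sims = np.array([cosine(R[user], R[other]) if other != user else 0.0
                     for other in range(R.shape[0])])
    rated = R[:, item] > 0                # neighbours who rated the target item
    if not (sims[rated] > 0).any():
        return 0.0
    return float(sims[rated] @ R[rated, item] / sims[rated].sum())


print("predicted rating of item 2 by user 0:", round(predict(0, 2), 2))
```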
Article
The OMG DDS (Data Distribution Service) standard specifies a middleware for distributing real-time data using a publish-subscribe data-centric approach. Until now, DDS systems have been restricted to a single and isolated DDS domain, normally deployed within a single multicast-enabled LAN. As systems grow larger, the need to interconnect different DDS domains arises. In this paper, we consider the problem of communicating disjoint data-spaces that may use different schemas to refer to similar information. In this regard, we propose a DDS interconnection service capable of bridging DDS domains as well as adapting between different data schemas. A key benefit of our approach is that it is compliant with the latest OMG specifications, so the proposed service does not require any modifications to DDS applications. The paper identifies the requirements for DDS data-space interconnection, presents an architecture that responds to those requirements, and concludes with experimental results gathered on our prototype implementation. We show that the impact of the service on communications performance is well within the acceptable limits for most real-world uses of DDS (latency overhead is on the order of hundreds of microseconds). Reported results also indicate that our service interconnects remote data-spaces efficiently and reduces the network traffic almost N times, with N being the number of final data subscribers.
Article
Ontology evaluation can be defined as assessing the quality and the adequacy of an ontology for being used in a specific context, for a specific goal. Although ontology reuse is being extensively addressed by the Semantic Web community, the lack of appropriate support tools and automatic techniques for the evaluation of certain ontology features is often a barrier to the implementation of successful ontology reuse methods. In this work, we describe the recommender module of CORE (5), a system for Collaborative Ontology Reuse and Evaluation. This module has been designed to confront the challenge of evaluating those ontology features that depend on human judgements and are, by their nature, more difficult for machines to address. Taking advantage of collaborative filtering techniques, the system exploits the ontology ratings and evaluations provided by users to recommend the most suitable ontologies for a given domain. Thus, we claim two main contributions: the introduction of the collaborative filtering notion as a new methodology for ontology evaluation and reuse, and a novel recommendation algorithm, which considers specific user requirements and restrictions instead of general user profiles or item-based similarity measures.
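The module described above filters candidate ontologies by explicit user requirements and then leverages collaborative ratings; the sketch below shows that two-stage idea in a few lines, with invented ontology metadata and ratings. It illustrates only the shape of the approach, not CORE's recommendation algorithm.

```python
# Two-stage ontology recommendation sketch: hard-filter by user restrictions,
# then rank the survivors by average community rating. All metadata and
# ratings are invented.
ontologies = {
    "TravelOnto": {"domain": "travel", "language": "OWL", "ratings": [4, 5, 4]},
    "GeoLite":    {"domain": "travel", "language": "RDFS", "ratings": [3, 3]},
    "MediCore":   {"domain": "medicine", "language": "OWL", "ratings": [5, 5, 4]},
}

requirements = {"domain": "travel", "language": "OWL"}   # user restrictions


def recommend(requirements: dict) -> list:
    candidates = [name for name, meta in ontologies.items()
                  if all(meta.get(k) == v for k, v in requirements.items())]
    return sorted(candidates,
                  key=lambda n: sum(ontologies[n]["ratings"]) / len(ontologies[n]["ratings"]),
                  reverse=True)


print(recommend(requirements))   # -> ['TravelOnto']
```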
Conference Paper
Radio frequency identification (RFID) technology enables information to be remotely stored and retrieved by means of electromagnetic radiation. Compared to other automatic identification technologies, RFID provides an efficient, flexible and inexpensive way of identifying and tracking objects. Asset management is one of the potential applications for RFID technology. Asset management using RFID reduces the workload on asset audit administrators while eliminating the error prone manual audit processes. Successful implementation of RFID asset management system requires an intelligent use of the data harvested from the RFID system. This work describes the development of a multi-agent based middleware solution for processing and managing the data produced by RFID system for asset management applications. The middleware is developed using the agent-oriented software engineering (AOSE) methodology PASSI (Process for Agent Societies Specification and Implementation).
Article
Tapestry is predicated on the belief that information filtering can be more effective when humans are involved in the filtering process. Tapestry was designed to support both content-based filtering and collaborative filtering, which entails people collaborating to help each other perform filtering by recording their reactions to documents they read. The reactions are called annotations; they can be accessed by other people's filters. Tapestry is intended to handle any incoming stream of electronic documents and serves both as a mail filter and repository; its components are the indexer, document store, annotation store, filterer, little box, remailer, appraiser and reader/browser. Tapestry's client/server architecture, its various components, and the Tapestry query language are described.
Article
Agents Interaction Protocols (AIPs) play a crucial role in multi-agent systems development. They allow sequences of messages between agents to be specified. Most proposed protocols suffer from many weaknesses. We present, in this paper, a formal approach supporting the verification of agents' interaction protocols described using the AUML formalism. The considered AUML diagrams are formally translated into Maude specifications. Based on rewriting logic, the formal and object-oriented language Maude offers an interesting way of formally specifying and programming concurrent systems. The Maude environment integrates a model checker based on Linear Temporal Logic (LTL) supporting formal verification of distributed systems. The proposed approach essentially allows: (1) translating the description of agents' interactions, specified using the AUML formalism, into a Maude specification and (2) applying the model-checking techniques supported by Maude to verify some properties of the described system. A case study is presented to illustrate our approach.
• J Pauls. Project BLAID: Toyota's Contribution to Indoor Navigation for the Blind. Available at