Thesis

Working with Trust and Precision of Information and Data in Knowledge Processing Systems

Authors:
  • Pro2Future GmbH

Abstract

Large and complex systems are used in every field of industry and research. Most of them can be classified as knowledge processing systems or subgroups thereof. We investigate how one can trust such systems and their outputs when one only knows how trustworthy the inputs and sources are and how the system works. A broad, structured investigation showed that some methods exist in a wider context, but there is a strong need for embedding trust specifically into work with knowledge processing systems. The first contribution of this thesis is therefore a thorough, comparative literature review on knowledge processing, trust, and their related research fields. One hurdle here is the multidisciplinary use of the term "trust" and the need to find a distinct handling and definition for applying trust in a technical domain. The second contribution is a proposed definition of the term "trust model" in the context of knowledge processing and an investigation of suitable trust models: the Binary Trust Model, the Probabilistic Trust Model, the Opinion-Space Trust Model, and our self-developed Weighted Arithmetic Mean Trust Model, which is particularly suited to knowledge processing systems. As a third contribution, we discuss these models with respect to measurement and application and show ways of working with trust in knowledge processing systems. We focus on how trust can be propagated through knowledge processing systems that execute multiple calculation steps, and we evaluate and compare the investigated and developed trust models on several scenarios. We are convinced that the field of knowledge processing could benefit greatly from using trust. With this research and the evaluation of the models, we are one step closer to our initial goal of finding suitable ways to use trust in knowledge processing.
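The abstract names four trust models but does not reproduce their formulas. As a rough, hedged illustration of how the first two might propagate input trust through a single processing step, the rules below are common textbook choices, not necessarily the thesis' own definitions:

```python
# Hedged sketch: possible propagation rules for two of the trust models
# named above. The concrete formulas are illustrative assumptions.

def binary_trust(inputs):
    """Binary model: the output is trusted only if every input is trusted."""
    return all(inputs)

def probabilistic_trust(inputs):
    """Probabilistic model: treat trust values in [0, 1] as independent
    probabilities; the output trust is their product."""
    result = 1.0
    for t in inputs:
        result *= t
    return result

# A processing step consuming three sources:
print(binary_trust([True, True, False]))       # -> False
print(probabilistic_trust([0.9, 0.8, 0.95]))   # -> ~0.684
```

Note how the probabilistic product can only decrease through successive steps, which is one reason a weighted-mean style model can be attractive for long processing chains.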

Thesis
Full-text available
Communication strategies exist for many institutions and occasions, but not for promoting (combined) eSports events; this became apparent after extensive research by the author and by the organizers of the 10th JKU LAN and JKU DICE. Hence there is a need to create a communication strategy for promoting exactly these events. This thesis defines what such a good communication strategy requires. After a thorough literature review and development of the conceptual foundations, an as-is analysis of the existing promotion and marketing measures is carried out. Subsequently, an empirical study, conducted at the event itself by means of a questionnaire, analyses how the current communication strategy affects event participants and how they perceive it. This study in particular also revealed through which communication channels the event was mainly noticed, which gives direct guidance for promoting future events of the same kind. Based on the conceptual foundations, the as-is analysis, and the findings of the empirical study, the last part of the thesis gives recommendations on how the existing communication strategy can be adapted and extended to achieve a better response when promoting future events.
Article
Full-text available
On-line platforms harness the communication capabilities of the Internet to develop large-scale influence networks in which the quality of interactions can be evaluated based on trust and reputation. So far, this technology is well known for building trust and fostering cooperation in on-line marketplaces such as Amazon (www.amazon.com) and eBay (www.ebay.es). However, these mechanisms are poised to have a broader impact on a wide range of scenarios, from large-scale decision-making procedures, such as those implied in e-democracy, to trust-based recommendations in e-health contexts, or influence and performance assessment in e-marketing and e-learning systems. This contribution surveys the progress in understanding the new possibilities and challenges that trust and reputation systems pose. To do so, it discusses trust, reputation, and influence, which are important measures in network-based communication mechanisms to support the worthiness of information, products, services, opinions, and recommendations. The existing mechanisms to estimate and propagate trust and reputation in distributed networked scenarios, and how these measures can be integrated in decision making to reach consensus among the agents, are analysed. Furthermore, it provides an overview of the relevant work in opinion dynamics and influence assessment as part of social networks. Finally, it identifies challenges and research opportunities on how the so-called trust-based network can be leveraged as an influence measure to foster decision-making processes and recommendation mechanisms in complex social network scenarios with uncertain knowledge, like those mentioned in e-health and e-marketing frameworks.
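One widely used propagation scheme in this literature multiplies trust along a referral chain and keeps the strongest of several independent chains; this is an assumption for illustration here, not a mechanism this survey prescribes:

```python
# Hedged sketch: trust along a chain of referrals is multiplied (so it can
# only decay), and among several independent paths the strongest is kept.

def path_trust(edge_trusts):
    """Trust along one referral chain: product of the edge trust values."""
    result = 1.0
    for t in edge_trusts:
        result *= t
    return result

def network_trust(paths):
    """Aggregate several independent paths by taking the maximum."""
    return max(path_trust(p) for p in paths)

# Two referral paths from agent A to agent D:
paths = [[0.9, 0.9],   # A -> B -> D
         [0.6, 1.0]]   # A -> C -> D
print(network_trust(paths))  # -> ~0.81, the stronger A -> B -> D path
```

Other aggregation rules (weighted averaging, minimum) appear in the literature as well; the max rule is merely the simplest to state.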
Conference Paper
Full-text available
Everybody has a sense of trusting people or institutions, but how is trust defined? It always depends on the specific field of research and application and differs most of the time, which makes this question hard to answer in general at a computational level. For knowledge processing systems we face this question twice: how can we define and calculate trust values for the input data and, much more challenging, what is the trust value of the output? To meet this challenge, we first investigate appropriate ways of defining trust. Within this paper we consider three different existing trust models and a self-developed one. We then show how knowledge processing systems can handle these trust values and propagate them through a network of processing steps in a way that makes the final results representative. To this end, we show the propagation of trust with the three existing trust models and with a recently self-developed approach in which precision and importance values are also considered. With these models, we can give insights into defining and propagating trust in knowledge processing systems.
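The paper's own formulas are not reproduced in this abstract. A minimal sketch of a weighted-arithmetic-mean combination in the spirit described, where each input's trust is weighted by precision and importance factors, might look like this (the weighting rule is an assumption, not the authors' exact definition):

```python
# Hedged sketch: combine input trust values into one output trust value,
# weighting each input by its (assumed) precision and importance.

def weighted_mean_trust(trust, precision, importance):
    """Weighted arithmetic mean of trust values; the weight of each input
    is the product of its precision and its importance to this step."""
    weights = [p * i for p, i in zip(precision, importance)]
    total = sum(weights)
    if total == 0:
        return 0.0
    return sum(t * w for t, w in zip(trust, weights)) / total

# Two inputs: a precise, important one (trust 0.9) dominates a vaguer,
# less important one (trust 0.4):
print(weighted_mean_trust([0.9, 0.4], [1.0, 0.5], [2, 1]))  # -> 0.8
```

Unlike a pure product, this mean does not collapse toward zero across many processing steps, which matches the motivation for using it in long processing chains.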
Article
Full-text available
Societal trust in reflection: an agent-based modelling of trust dynamics. High levels of trust have been linked to a variety of benefits, including well-functioning markets and political institutions and the ability of societies to solve public goods problems endogenously. While there is extensive literature on the macro-level determinants of trust, the micro-level processes underlying the emergence and stability of trust are not yet sufficiently understood. We address this lacuna by means of a computer model. In this paper, conditions under which trust is likely to emerge and be sustained are identified. We focus our analysis mainly on the individual characteristics of agents: their social or geographical mobility, their attitude towards others, and their general uncertainty about the environment. Contrary to predictions from previous literature, we show that immobile agents are detrimental to both the emergence and the robustness of trust. Additionally, we identify a hidden link between trusting others and being trustworthy. © 2018 GESIS - Leibniz Institute for the Social Sciences. All rights reserved.
Conference Paper
Full-text available
Existing knowledge processing systems, especially expert systems, do not always fit to a company's needs. This reduces the benefits of such a technology, or even completely prevents their usage. Therefore, an architectural guideline is needed to enable software engineers to design and implement custom knowledge processing systems. In this paper a first approach via a pattern language for knowledge processing systems, consisting of five patterns covering the basic components needed, is presented. The patterns were extracted from three different open source expert systems / rule engines. The applicability of the patterns is discussed by applying them on an example custom knowledge processing system project that shows how the pattern language supports the design and implementation.
Chapter
Full-text available
As robots become increasingly common in a wide variety of domains, from military and scientific applications to entertainment and home use, there is an increasing need to define and assess the trust humans have when interacting with robots. In human interaction with robots and automation, previous work has discovered that humans often have a tendency to either overuse automation, especially in cases of high workload, or underuse automation, both of which can make negative outcomes more likely. Furthermore, this is not limited to naive users, but applies to experienced ones as well. Robotics brings a new dimension to previous work on trust in automation, as robots are envisioned by many to work as teammates with their operators in increasingly complex tasks. In this chapter, our goal is to highlight previous work in trust in automation and human-robot interaction and draw conclusions and recommendations based on the existing literature. We believe that, while significant progress has been made in recent years, especially in quantifying and modeling trust, there are still several places where more investigation is needed.
Article
Full-text available
The knowledge pyramid has been used for several years to illustrate the hierarchical relationships between data, information, knowledge, and wisdom. An earlier version of this paper presented a revised knowledge-KM pyramid that included processes such as filtering and sense making, reversed the pyramid by positing there was more knowledge than data, and showed knowledge management as an extraction of the pyramid. This paper expands the revised knowledge pyramid to include the Internet of Things and Big Data. The result is a revision of the data aspect of the knowledge pyramid. Previous thought was of data as reflections of reality as recorded by sensors. Big Data and the Internet of Things expand sensors and readings to create two layers of data. The top layer of data is the traditional transaction / operational data and the bottom layer of data is an expanded set of data reflecting massive data sets and sensors that are near mirrors of reality. The result is a knowledge pyramid that appears as an hourglass.
Conference Paper
Full-text available
In knowledge processing systems, when data and knowledge gathered from several (external) sources are used, the trustworthiness and quality of that information and data have to be evaluated before processing continues with these values. We address the problem of evaluating and calculating possible trust values by considering established methods from the literature and recent research.
Conference Paper
Full-text available
In an effort to increase throughput for IFIN, a frequent itemsets mining algorithm, in this paper we introduce a solution, called IFIN⁺, for parallelizing the algorithm IFIN with shared-memory multithreading. Our motivation is that the computational power of today's commodity processors comes from multiple physical computational units; exploiting this fully is a potential solution for improving performance in single-machine environments. Some portions of the serial version are changed in ways that increase efficiency and computational independence, to ease the design of parallel computation with the Work-Pool model, known as a good model for load balancing. We conducted experiments to evaluate IFIN⁺ against its serial version IFIN, the well-known algorithm FP-Growth, and two state-of-the-art ones, FIN and PrePost⁺. The experimental results show that the running time of IFIN⁺ is the most efficient, especially when mining at different support thresholds in the same running session. Compared to its serial version, IFIN⁺'s performance is improved significantly.
Article
Full-text available
In this article, we want to give a first insight into the question "how can you trust data, information or knowledge?", especially in the context of measuring trustworthiness. We think that the topic of smart home security is a very good research environment for this question. Building on this research environment, we made some investigations concerning the measurement of trust.
Thesis
Full-text available
Virtualization is very widespread today, which makes its security, the topic of this thesis, all the more important. Chapter 1 gives a short introduction to the subject, definitions, and a historical review; Chapter 2 classifies types of virtualization and explains the privilege model. Chapter 3 covers general considerations for maintaining security, especially with virtualization, gives an overview of the types and workings of malware, and explains why the sandboxing approach is important for virtualization security. Furthermore, detailed security aspects of system and server virtualization are discussed. Hypervisors, or Virtual Machine Monitors (VMMs), are particularly important in large-scale virtualization deployments. The most widespread VMMs are covered in Chapter 4 and examined with regard to possible security options. Virtual-machine-based rootkits (VMBRs), i.e., hardware-virtualizing rootkits, receive special attention (Chapter 5). They are interesting with respect to virtualization because some VMBR implementations manage to subvert not only running operating systems but also VMMs and to gain control of the hardware. Chapter 6 covers virtualization security in connection with cloud computing, and Chapter 7 gives an outlook, especially with regard to mobile systems.
Conference Paper
Full-text available
In countries where many small rivers exist, the geography can be used to implement environment-friendly small hydro power plants for the generation of energy. The smaller such hydro power plants are, the higher is the impact of environmental incidents. Usually, more than one small hydro power plant is located alongside one river, mostly operated by different owners. To increase the overall power-generating efficiency of all hydro power plants alongside one river, a good communication and cooperation concept is needed. In our work, we propose a system concept and a prototype implementation for several small, private, and independent hydro power plants to increase energy production through a networked intelligent control system. We also show possibilities for avoiding events which usually induce downtimes of the small hydro power plants. If these events can be minimized in number and duration, the overall energy production time is higher.
Article
Full-text available
In countries where many small rivers exist, the geography can be used to implement environment-friendly small hydro power plants for the generation of energy. The smaller such hydro power plants are, the higher is the impact of environmental incidents. Usually, more than one small hydro power plant is located alongside one river, mostly operated by different owners. To increase the overall power-generating efficiency of all hydro power plants alongside one river, a good communication and cooperation concept is needed. In our work, we propose a system concept and a prototype implementation for several small, private, and independent hydro power plants to increase energy production through a networked intelligent control system. We also show possibilities for avoiding events which usually induce downtimes of the small hydro power plants. If these events can be minimized in number and duration, the overall energy production time is higher.
Article
Full-text available
In order to ensure the reliability and credibility of data in wireless sensor networks (WSNs), this paper proposes a trust evaluation model and a trust-based data fusion mechanism. First, it gives the model structure; then the calculation rules for trust are given. In the trust evaluation model, comprehensive trust consists of three parts: behavior trust, data trust, and historical trust. Data trust can be calculated by processing the sensor data. Based on the behavior of nodes in sensing and forwarding, the behavior trust is obtained. The initial value of historical trust is set to the maximum and updated with comprehensive trust. Comprehensive trust can be obtained by weighted calculation, and the model is then used to construct the trust list and guide the process of data fusion. Using the trust model, simulation results indicate that energy consumption can be reduced by an average of 15%. The detection rate of abnormal nodes is at least 10% higher than that of the lightweight and dependable trust system (LDTS) model. Therefore, this model performs well in ensuring the reliability and credibility of the data. Moreover, the energy consumption of transmission is greatly reduced.
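Based only on the abstract, a toy version of the weighted combination and the historical-trust update could look like this; the weights 0.4/0.4/0.2 are illustrative assumptions, not the paper's values:

```python
# Hedged sketch of the weighted trust combination described above.
# Weights and update rule are illustrative assumptions.

def comprehensive_trust(behavior, data, historical,
                        w_b=0.4, w_d=0.4, w_h=0.2):
    """Comprehensive trust as a weighted sum of the three components."""
    return w_b * behavior + w_d * data + w_h * historical

# Historical trust starts at the maximum (1.0) and is updated with each
# new comprehensive value, as the abstract describes:
historical = 1.0
for behavior, data in [(0.9, 0.8), (0.7, 0.6)]:
    historical = comprehensive_trust(behavior, data, historical)
print(round(historical, 3))  # -> 0.696
```

Feeding the comprehensive value back into the historical component gives the node a memory: a brief drop in behavior or data trust lowers future scores gradually rather than abruptly.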
Conference Paper
Full-text available
The origin of data (data provenance) should always be measured or categorized within the context of trusting the source of data. Can we be sure that the information we receive is trustworthy and reliable? Is the source trustable? Is the data certain? And how important is the received data to our current and next steps of processing? We face these questions in the context of knowledge processing systems by developing a convenient approach to bring all these questions and values – trustability, certainty, importance – into a computable, measurable, and comparable form of expression. We do not yet face the question "how to compute trust or certainty?", but rather how to incorporate and process their measured values in knowledge processing systems to receive a representative view of the whole environment and its output.
Conference Paper
Full-text available
Today, software projects often have several independent subsystems which provide resources to clients. To protect all subsystems from unauthorized access, the mechanisms proposed in the OAuth 2.0 framework and the OpenID standard are often used. The communication between the servers, described in the OAuth 2.0 framework, must be encrypted. Usually, this is achieved using Transport Layer Security (TLS), but administrators can forget to activate this protocol in the server configuration. This makes the whole system vulnerable. Neither the developer nor the user of the system is able to check whether the communication between servers is safe. This paper presents a way to ensure secure communication between authentication, authorization, and resource servers without relying on a correct server configuration. For this purpose, this paper introduces an additional encryption of the transmitted tokens to secure the transmission independently of the server configuration. Furthermore, this paper introduces the Central Authentication & Authorization System (CAAS), an implementation of the OpenID standard and the OAuth 2.0 framework that uses the token encryption presented in this paper.
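The paper's token encryption scheme is not detailed in this abstract. As a self-contained stand-in, the sketch below shows the weaker but related idea of protecting a token with an HMAC over a shared secret, so tampering is detectable independently of the transport; the secret, payload format, and field names are purely illustrative:

```python
# Hedged sketch: sign tokens with a shared secret so that a resource server
# can detect tampering even if TLS was misconfigured. This is integrity
# protection only, not the encryption the paper proposes.

import base64
import hashlib
import hmac

SECRET = b"shared-secret-between-auth-and-resource-server"  # illustrative

def sign_token(payload):
    mac = hmac.new(SECRET, payload, hashlib.sha256).digest()
    return base64.urlsafe_b64encode(payload) + b"." + base64.urlsafe_b64encode(mac)

def verify_token(token):
    payload_b64, mac_b64 = token.rsplit(b".", 1)
    payload = base64.urlsafe_b64decode(payload_b64)
    expected = hmac.new(SECRET, payload, hashlib.sha256).digest()
    if hmac.compare_digest(expected, base64.urlsafe_b64decode(mac_b64)):
        return payload
    return None  # tampered or forged token

token = sign_token(b'{"sub": "alice"}')
assert verify_token(token) == b'{"sub": "alice"}'

# A forged payload reusing the original MAC is rejected:
forged = base64.urlsafe_b64encode(b'{"sub": "mallory"}') + b"." + token.rsplit(b".", 1)[1]
assert verify_token(forged) is None
```

Note that unlike encryption, an HMAC does not hide the payload; it only makes modification detectable, which is why the paper pairs token protection with the transport question rather than replacing TLS.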
Article
Full-text available
Mining methods are classified as methods of data analysis and knowledge acquisition and are derived from the methods of "Knowledge Discovery". Within the scope of these methods, there are two main variants associated with the form of the data: "data mining" and "text mining". The author of the paper tries to answer the question of how helpful and useful these methods are for knowledge acquisition in the construction industry. The process of knowledge acquisition itself is essential for systems and tools operating based on knowledge; nowadays, these are the basis for tools which support decision-making processes. The paper presents three case studies. The mining methods have been applied to practical problems: the selection of an adhesive mortar coupled with alternative solutions, an analysis of the locations of residential real estate under construction by a developer company, and support for the technical management of a building facility with a large floor area.
Conference Paper
Full-text available
In this paper, we principally devote our effort to proposing a novel MapReduce-based approach for efficient similarity search in big data. Specifically, we address the drawbacks of using an inverted index in similarity search with MapReduce and then propose a simple yet efficient redundancy-free MapReduce scheme, which not only offers advantages over the baseline inverted-index-based procedures but also adapts to various similarity measures and similarity searches. Additionally, we present other strategic methods that can potentially contribute to eliminating unnecessary data and computations. Last but not least, empirical evaluations are intensively conducted with real massive datasets and the Hadoop framework on a cluster of commodity machines to verify the proposed methods, whose promising results show how beneficial they are when dealing with big data.
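Stripped of MapReduce, the baseline inverted-index idea the paper improves upon can be sketched on a single machine; the token sets and Jaccard scoring here are illustrative assumptions:

```python
# Hedged single-machine sketch of inverted-index similarity search:
# index each record by its tokens, find candidates sharing tokens with
# the query, and rank them by Jaccard similarity.

from collections import defaultdict

def build_index(records):
    """Map each token to the set of record ids containing it."""
    index = defaultdict(set)
    for rid, tokens in records.items():
        for tok in tokens:
            index[tok].add(rid)
    return index

def similar(query_tokens, records, index):
    # Count shared tokens per candidate record via the index.
    counts = defaultdict(int)
    for tok in query_tokens:
        for rid in index.get(tok, ()):
            counts[rid] += 1
    # Jaccard similarity: |intersection| / |union|, best first.
    return sorted(((c / len(records[r] | set(query_tokens)), r)
                   for r, c in counts.items()), reverse=True)

records = {"d1": {"big", "data", "mining"},
           "d2": {"similarity", "search", "data"}}
index = build_index(records)
print(similar({"data", "mining"}, records, index))  # d1 ranks first
```

The drawback the paper targets is visible even here: every query token fans out to every record containing it, which in a distributed setting produces the redundant intermediate data their scheme eliminates.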
Article
Full-text available
The Data Information Knowledge and Wisdom Hierarchy (DIKW) has been gaining popularity in many domains. While there has been a lot of articulation of the hierarchy itself, the origins of this ubiquitous and frequently used hierarchy are largely unexplored. In this short piece we trace the trails of this hierarchy. Like an urban legend, it’s everywhere yet few know where it came from.
Article
Research into data provenance has been active for almost twenty years. What has it delivered and where will it go next? What practical impact has it had and what might it have? We provide speculative answers to these questions which may be somewhat biased by our initial motivation for studying the topic: the need for provenance information in curated databases. Such databases involve extensive human interaction with data; and we argue that the need continues in other forms of human interaction such as those that take place in social media.
Chapter
In this chapter, through the lens of information security, we discuss the use of blockchain technology as a mechanism for facilitating trust between various supply chain agents. The goal is to explicate the use of blockchain technology as a distributed ledger to mitigate a varied set of risks to supply chain virtualization, whereby various agents within the supply chain context can engage in transactions with an immutable and cryptographically secure record. This immutable and secure record would then serve as the foundation of a mutually beneficial relationship, built upon the blockchain technology, by engendering greater trust among supply chain agents. The usefulness of blockchain technology is such that it enables information to be stored using a cryptographically secure hash and be distributed among multiple record-keeping nodes. Each agent within the supply chain context can act as a node and, by maintaining a copy of the record, create a distributed ledger of information. This provides two distinct benefits that facilitate trust among agents within the supply chain context; first, agents acting as nodes within the blockchain possess a copy of the information record, which cannot be altered without their consent. Second, information is stored on the blockchain using a secure method of encryption that provides protection against tampering from malicious sources and security of the information contained on the chain. Therefore, these two benefits of blockchain technology provide supply chain agents with powerful trust-building mechanisms, which extricate fear and allay concerns when interacting with new or unknown agents.
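The tamper-evidence property described here can be illustrated with a toy hash chain; real blockchains add consensus, signatures, and distribution, all omitted in this sketch:

```python
# Hedged toy illustration: each record stores the hash of its predecessor,
# so altering any entry breaks every later link in the chain.

import hashlib

def make_block(data, prev_hash):
    digest = hashlib.sha256(prev_hash + data).hexdigest()
    return {"data": data, "prev": prev_hash, "hash": digest}

def chain_valid(chain):
    """Re-derive every hash; any edited block breaks the chain."""
    prev = b"genesis"
    for block in chain:
        if block["prev"] != prev:
            return False
        if block["hash"] != hashlib.sha256(prev + block["data"]).hexdigest():
            return False
        prev = block["hash"].encode()
    return True

chain = []
prev = b"genesis"
for data in [b"shipment 1 received", b"shipment 2 received"]:
    block = make_block(data, prev)
    chain.append(block)
    prev = block["hash"].encode()

assert chain_valid(chain)
chain[0]["data"] = b"shipment 1 LOST"   # tamper with history
assert not chain_valid(chain)
```

Because every supply chain agent holds a copy of the chain, a tampering node cannot simply recompute the hashes: its rewritten ledger would disagree with every other node's copy.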
Article
The success of e-commerce companies is becoming increasingly dependent on product recommender systems, which have become powerful tools that personalize the shopping experience for users based on user interests and interactions. Most modern recommender systems concentrate on finding the relevant items for each user based on their interests only and ignore the social interactions among users. Some recommender systems rely on the 'trust' of users. However, in social science, trust, as a human characteristic, is a complex concept with multiple facets which has not been fully explored in recommender systems. In this paper, to model a realistic and accurate recommender system, we address the problem of social trust modeling, where trust values are shaped based on user characteristics in a social network. We propose a method that can predict ratings for personalized recommender systems based on similarity, centrality, and social relationships. Compared with traditional collaborative filtering approaches, the advantage of the proposed mechanism is its consideration of social trust values. We use the probabilistic matrix factorization method to predict user ratings for products based on the user-item rating matrix. Similarity is modeled using rating-based (i.e., Vector Space Similarity and Pearson Correlation Coefficient) and connection-based similarity measurements. Centrality metrics are quantified using degree, eigenvector, Katz, and PageRank centralities. To validate the proposed trust model, an Epinions dataset is used and the rating prediction scheme is implemented. Comprehensive analysis shows that the proposed trust model based on similarity and centrality metrics provides better rating prediction than using binary trust values. Based on the results, we find that degree centrality is more effective than the other centralities for rating prediction on this dataset. The trust model based on connection-based similarity also performs better than the rating-based Vector Space Similarity and Pearson Correlation Coefficient similarities. The experimental results on a real-world dataset demonstrate the effectiveness of our proposed model in further improving the accuracy of rating prediction in social recommender systems.
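Of the rating-based measures the paper compares, the Pearson Correlation Coefficient over co-rated items is a standard formulation and easy to sketch; the user names and ratings below are made up:

```python
# Hedged sketch: Pearson Correlation Coefficient between two users,
# computed only over the items both of them have rated.

from math import sqrt

def pearson(ratings_a, ratings_b):
    common = sorted(set(ratings_a) & set(ratings_b))
    if len(common) < 2:
        return 0.0  # not enough co-rated items to correlate
    xs = [ratings_a[i] for i in common]
    ys = [ratings_b[i] for i in common]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sqrt(sum((x - mx) ** 2 for x in xs))
           * sqrt(sum((y - my) ** 2 for y in ys)))
    return num / den if den else 0.0

alice = {"item1": 5, "item2": 3, "item3": 4}
bob   = {"item1": 4, "item2": 2, "item3": 3}
print(pearson(alice, bob))  # -> ~1.0, bob's ratings track alice's exactly
```

A similarity like this would feed the rating-based side of the comparison; the connection-based side the paper favors instead derives values from the social graph.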
Chapter
Computational trust is the digital counterpart of the human notion of trust as applied in social systems. Its main purpose is to improve the reliability of interactions in online communities and of knowledge transfer in information management systems. Trust models are typically composed of two parts: a trust computing part and a trust manipulation part. The former serves the purpose of gathering relevant information and then use it to compute initial trust values; the latter takes the initial trust values as granted and manipulates them for specific purposes, like, e.g., aggregation and propagation of trust, which are at the base of a notion of reputation. While trust manipulation is widely studied, very little attention is paid to the trust computing part. In this paper, we propose a formal language with which we can reason about knowledge, trust and their interaction. Specifically, in this setting it is possible to put into direct dependence possessed knowledge with values estimating trust, distrust, and uncertainty, which can then be used to feed any trust manipulation component of computational trust models.
Chapter
Information has been an essential element in the development of collaborative and cooperative models. From decision making to the attainment of varying goals, people have been relatively adept at making judgments about the trustworthiness of information, based on knowledge and understanding of a normative model of information. However, recent events, for example in elections and referenda, have stretched people's ability to measure the veracity and trustworthiness of information online. The result has been an erosion of trust in information online, its source, its value, and the ability to objectively determine its trustworthiness, a situation made more complex by social networks, since social media have made the spread of (potentially untrustworthy) information easier and faster. We believe that this has exacerbated the need for assisting humans in their judgment of the trustworthiness of information. We have begun working on a social cognitive construct: a trust model for information. In this paper we outline the problems and the beginnings of our trust model and highlight future work.
Article
Pattern warehousing is a significant new technology to model, store, retrieve, and manipulate patterns. The Pattern Cube and On-Line Knowledge Processing (OLKP) are introduced in this work to make pattern representation and processing more efficient. OLKP is the process of carrying out operations over the patterns stored in the pattern warehouse. The complexity of operations on semantically rich patterns makes standard database algebra technology inappropriate. Moreover, no adequate model for OLKP has been introduced to date. In this work, we address this issue by introducing the Pattern Cube and a Pattern Algebra with the underlying fundamental operators to support them. For simplicity, association patterns are used to illustrate the various operations and properties.
Book
This book provides a major forum for the technical advancement of knowledge management and its applications across diversified domains. Pursuing an interdisciplinary approach, it focuses on methods used to identify and acquire valid, potentially useful knowledge sources. Managing the gathered knowledge and applying it to multiple domains including health care, social networks, data mining, recommender systems, image processing, pattern recognition and predictions using machine learning techniques is the major strength of this book. Effective knowledge management has become a key to the success of business organizations, and can offer a substantial competitive edge. So as to be accessible to all scholars, this book combines the core ideas of knowledge management and its applications in numerous domains, illustrated in case studies. The techniques and concepts proposed here can be extended in future to accommodate changing business organizations’ needs as well as practitioners’ innovative ideas.
Conference Paper
To increase the quality of knowledge processing systems and provide help to software developers, selected existing knowledge processing systems are analysed for occurrences of object-oriented design patterns (especially from the Gang-of-Four catalogue). This analysis intends to draw attention to the lack of good software design in the area of knowledge processing systems and at the same time provides a smaller catalogue of design patterns with proven usage in practice, to support development. The design patterns were identified manually in a structured analysis by reverse engineering the source code, supported by a design pattern detection tool. As a result, Gang-of-Four design patterns suitable for developing custom knowledge processing systems are presented and discussed.
Article
Farm equipment, including sensors and mobile machinery, creates increasing amounts of data, and data can also be gained from third-party services. In order to fully take advantage of this, a farmer needs to be able to gather, store, process, and share the data as needed. In this work we describe a prototype of an open environment that can gather, combine, store, select, and share data from arbitrary sources and with external partners, as well as use the data in decision making and provide it as input for various services. The environment is built using the Service-Oriented Architecture paradigm and is therefore not tied to any specific operating system or software framework. We have tested the environment at farm scale in Finland. The system was found suitable for improving the work in all tested tasks.
Chapter
There is a wide variety of methods for describing data resources, applied in different application contexts. Military data resources are generally classified and described according to branches or organizational structure. This paper focuses on describing the relationships between data and tracking the provenance of data, and proposes a provenance-oriented method for describing data sources, which is verified with a simple example. It not only explores how to resolve current problems with collecting, verifying, using, and sharing military data, but also investigates how data provenance can address questions of data credibility, data quality, and version information when data is shared, helping users verify the quality of data, audit its origin, locate errors, optimize the integration process, and so on.
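The core idea of provenance-oriented data description (tracing each data item's origin, versions, and transformation history so that quality can be audited and errors located) can be illustrated with a minimal sketch. The record structure, field names, and operations below are illustrative assumptions, not taken from the cited paper:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ProvenanceRecord:
    """Tracks the origin, version, and transformation history of a data item."""
    source: str                                        # original data source
    version: int = 1                                   # bumped on each derivation
    history: List[str] = field(default_factory=list)   # applied operations, in order

    def derive(self, operation: str) -> "ProvenanceRecord":
        """Return the provenance record of data derived by `operation`."""
        return ProvenanceRecord(
            source=self.source,
            version=self.version + 1,
            history=self.history + [operation],
        )

# Trace a value from collection through two processing steps
raw = ProvenanceRecord(source="sensor-A")
cleaned = raw.derive("remove-outliers")
shared = cleaned.derive("anonymize")
print(shared.history)  # full lineage: audit origin, locate the erroneous step
```

Because each derivation keeps the original source and appends to the history rather than overwriting it, a consumer of shared data can walk the lineage backwards to the exact step where an error was introduced.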
Conference Paper
Many architecture and design patterns exist for enterprise software development. Interest in knowledge processing systems has recently heightened, as these technologies can provide valuable benefits for a company (e.g., by supporting decision making). Nevertheless, the algorithms and technologies used in this domain can be complex and difficult to implement; some parts even go beyond standard software development. This paper identifies similarities to enterprise systems and presents a selection of existing design patterns that can be used to solve knowledge processing difficulties. The aim is to provide a pattern collection that allows software designers and developers unfamiliar with knowledge processing principles to easily design, implement, and integrate such systems.
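To make the idea of reusing enterprise design patterns for knowledge processing concrete, the sketch below applies the Gang-of-Four Strategy pattern to make inference algorithms interchangeable. The pattern choice and all class names are illustrative assumptions; the abstract does not specify which patterns the paper selects:

```python
from abc import ABC, abstractmethod
from typing import List

class InferenceStrategy(ABC):
    """Strategy interface: interchangeable knowledge processing algorithms."""
    @abstractmethod
    def infer(self, facts: List[str]) -> List[str]: ...

class ForwardChaining(InferenceStrategy):
    """Toy stand-in: a real engine would apply rules until a fixpoint."""
    def infer(self, facts: List[str]) -> List[str]:
        return facts + ["derived-fact"]

class KnowledgeProcessor:
    """Context class: clients stay unchanged when the algorithm is swapped."""
    def __init__(self, strategy: InferenceStrategy):
        self.strategy = strategy

    def process(self, facts: List[str]) -> List[str]:
        return self.strategy.infer(facts)

processor = KnowledgeProcessor(ForwardChaining())
print(processor.process(["fact-1"]))
```

The benefit for developers unfamiliar with knowledge processing is exactly the one the paper targets: the complex algorithm is isolated behind a small, conventional interface.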
Conference Paper
In this paper we present a software architecture for a Knowledge Management and Processing Framework, initially for usage in the agricultural domain but customizable for any domain. In contrast to existing Knowledge Management and Processing Systems, the proposed architecture mainly focuses on using a cloud platform as the execution environment and therefore pays special attention to design aspects that utilize the benefits of a cloud infrastructure, by designing the system to be parallelizable and distributable. We identified the main aspects of a cloud-ready system platform and combined them with the functionality needed for a custom Knowledge Management and Processing Framework.
Conference Paper
Similarity search has become a principal operation not only in databases but also in diverse application domains. Very large datasets, however, pose a major challenge because of the enormous processing volume they require. To deal with this challenge, we propose a two-level clustering approach aimed at supporting fast similarity searches in massive datasets. In addition, we embed pruning and filtering strategies into our methods so that data redundancy, inessential data accesses, unnecessary distance computations, and their consequences are avoided while data accuracy is preserved. Furthermore, we validate our methods through a series of empirical experiments on real big datasets. The results show that our approach outperforms two inverted-index-based approaches, especially for large query batches.
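The general mechanism behind such cluster-based similarity search, namely grouping points under coarse centroids and then scanning only the clusters nearest the query, can be sketched as follows. This is a generic illustration of the technique, assuming fixed centroids and Euclidean distance, not the paper's actual two-level algorithm or its pruning rules:

```python
import math
from collections import defaultdict

def dist(a, b):
    return math.dist(a, b)  # Euclidean distance (Python 3.8+)

def build_index(points, centroids):
    """Level 1: assign each point to its nearest centroid (coarse cluster)."""
    index = defaultdict(list)
    for p in points:
        c = min(centroids, key=lambda c: dist(p, c))
        index[c].append(p)
    return index

def search(index, centroids, query, n_probe=1):
    """Level 2: scan only the n_probe closest clusters; all others are pruned,
    so most distance computations and data accesses never happen."""
    probed = sorted(centroids, key=lambda c: dist(query, c))[:n_probe]
    candidates = [p for c in probed for p in index[c]]
    return min(candidates, key=lambda p: dist(query, p))

points = [(0.0, 0.0), (0.1, 0.2), (5.0, 5.0), (5.2, 4.9)]
centroids = [(0.0, 0.0), (5.0, 5.0)]
index = build_index(points, centroids)
print(search(index, centroids, (5.1, 5.0)))  # → (5.0, 5.0)
```

With `n_probe=1` only half the dataset is examined per query here; the trade-off is that a true nearest neighbour lying just across a cluster boundary can be missed, which is why probing a few extra clusters (larger `n_probe`) is the usual accuracy knob.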
Conference Paper
In any mission that requires cooperation and teamwork among multiple agents, it is vital that the agents are able to trust one another to accomplish the mission successfully. This work looks into incorporating the concept of trust into a multi-agent environment, allowing agents to compute the trust they have in each other. The proposed trust evaluation model, called the TD Trust model, is developed by adapting the temporal difference (TD) learning algorithm into its evaluation framework. The model evaluates the trust of an agent based on experience gained from interactions among agents. It is then tested in simulation experiments, and its performance is compared against the Secure Trust model, a comprehensive model reported in the literature.
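The flavour of a temporal-difference style trust update, where each interaction outcome nudges the trust estimate toward the observed experience, can be sketched in a few lines. The abstract does not give the TD Trust model's actual update rule, so the formula and parameter values below are a generic TD-learning-inspired assumption:

```python
def td_trust_update(trust, outcome, alpha=0.1):
    """Move the trust estimate toward the observed interaction outcome.

    trust   -- current trust value in [0, 1]
    outcome -- result of the interaction (1.0 = success, 0.0 = failure)
    alpha   -- learning rate: how strongly one experience shifts trust
    """
    return trust + alpha * (outcome - trust)

# An agent starts neutral and experiences a run of successful interactions
trust = 0.5
for _ in range(10):
    trust = td_trust_update(trust, outcome=1.0)
print(round(trust, 3))  # → 0.826: trust rises toward 1.0 with good experience
```

Because the correction is proportional to the error `outcome - trust`, trust converges smoothly rather than jumping after a single interaction, and a low `alpha` makes the agent slow to forgive failures as well as slow to extend trust.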
Conference Paper
We present OPTIMo: an Online Probabilistic Trust Inference Model for quantifying the degree of trust that a human supervisor has in an autonomous robot "worker". Represented as a Dynamic Bayesian Network, OPTIMo infers beliefs over the human's moment-to-moment latent trust states, based on the history of observed interaction experiences. A separate model instance is trained on each user's experiences, leading to an interpretable and personalized characterization of that operator's behaviors and attitudes. Using datasets collected from an interaction study with a large group of roboticists, we empirically assess OPTIMo's performance under a broad range of configurations. These evaluation results highlight OPTIMo's advances in both prediction accuracy and responsiveness over several existing trust models. This accurate and near real-time human-robot trust measure makes possible the development of autonomous robots that can adapt their behaviors dynamically, to actively seek greater trust and greater efficiency within future human-robot collaborations.
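The kind of belief update a Dynamic Bayesian Network performs over a latent trust state can be shown with a minimal forward filter: predict how the hidden trust state may have drifted, then reweight by the likelihood of the observed interaction. This is a deliberately simplified binary-state sketch with made-up probabilities, not OPTIMo's actual network structure or learned parameters:

```python
def filter_step(p_trust, relied, p_stay=0.9, p_rely=(0.8, 0.3)):
    """One Bayesian forward-filter step for a binary latent trust state.

    p_trust -- prior belief P(trust) before this interaction
    relied  -- observation: did the supervisor rely on the robot?
    p_stay  -- probability the latent trust state persists between steps
    p_rely  -- (P(rely | trust), P(rely | no trust)); illustrative values
    """
    # Predict: propagate the belief through the transition model
    prior = p_trust * p_stay + (1 - p_trust) * (1 - p_stay)
    # Update: weight by the observation likelihood, then normalize
    like_t = p_rely[0] if relied else 1 - p_rely[0]
    like_n = p_rely[1] if relied else 1 - p_rely[1]
    return prior * like_t / (prior * like_t + (1 - prior) * like_n)

# Belief over the supervisor's trust after a short interaction history
belief = 0.5
for relied in [True, True, False, True]:
    belief = filter_step(belief, relied)
print(round(belief, 3))
```

Running the filter step-by-step over the interaction history is what makes such a model "online": the belief is available in near real time after every observation, which is the property that lets a robot adapt its behaviour to the current trust estimate.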