Large and complex systems are used in every field of industry and research. Most of these systems can be classified as knowledge processing systems or subgroups thereof. We investigate how one can trust such a system and its outputs when knowing only how trustworthy its inputs and sources are and how the system works.
A broad, structured investigation showed that some methods exist in a wider context, but that there is a strong need to embed trust specifically into work with knowledge processing systems. The first contribution of this thesis is therefore a thorough, comparative literature review of knowledge processing, trust, and their related research fields.
One hurdle here is the multidisciplinary use of the term "trust" and the difficulty of finding a distinct treatment and definition suitable for applying trust in a technical domain.
The second contribution of this thesis is a proposed definition of the term "trust model" in the context of knowledge processing, together with an investigation of suitable trust models: the "Binary Trust Model", the "Probabilistic Trust Model", the "Opinion-Space Trust Model", and our own "Weighted Arithmetic Mean Trust Model", which is particularly well suited to knowledge processing systems.
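To illustrate the underlying idea of the last model (a sketch only; the symbols t_i for source trust values and w_i for source weights are assumptions for this example, not the thesis's notation), a weighted arithmetic mean combines the trust values of n sources as

\[
T \;=\; \frac{\sum_{i=1}^{n} w_i\, t_i}{\sum_{i=1}^{n} w_i},
\qquad t_i \in [0,1],\; w_i \ge 0,
\]

so that more heavily weighted sources contribute proportionally more to the combined trust value.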
As a third contribution, we discuss how trust can be measured, applied, and worked with in knowledge processing systems using these models. We focus on how trust can be propagated through knowledge processing systems that execute multiple calculation steps, and we evaluate and compare the investigated and developed trust models in several scenarios. We are convinced that the field of knowledge processing could benefit greatly from using trust.
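The following is a minimal sketch of such propagation, assuming trust values in [0, 1] and weighted-mean aggregation at each step; all function names, parameters, and the aggregation rule are illustrative assumptions, not the thesis's definitions.

import math

def combine_trust(trusts, weights):
    """Combine input trust values via a weighted arithmetic mean."""
    assert len(trusts) == len(weights) and sum(weights) > 0
    return sum(t * w for t, w in zip(trusts, weights)) / sum(weights)

def propagate(source_trusts, steps):
    """Propagate trust through sequential calculation steps.

    Each step is given as a list of weights, one per input of that step;
    the combined trust of one step becomes the input trust of the next.
    """
    trust = combine_trust(source_trusts, steps[0])
    for weights in steps[1:]:
        # Here each later step takes the previous result as its only input;
        # real systems may merge several intermediate results instead.
        trust = combine_trust([trust], weights)
    return trust

# Example: two sources with trusts 0.9 and 0.6, weighted 2:1, followed by
# one further step that passes the combined value through unchanged.
print(propagate([0.9, 0.6], [[2, 1], [1]]))  # -> 0.8

In this toy run, the first step yields (2 * 0.9 + 1 * 0.6) / 3 = 0.8, which the second step preserves; more elaborate step topologies would branch and merge these values.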
With this research and the evaluation of the models, we come one step closer to our initial goal of finding suitable ways to use trust in knowledge processing.