Article

A semiotic-inspired machine for personalized multi-criteria intelligent decision support


Abstract

The need for appropriate decisions to tackle complex problems increases every day. Selecting destinations for vacation, comparing and optimizing resources to create valuable products, or purchasing a suitable car are just a few examples of puzzling situations in which there is no standard way to find an appropriate solution. Such scenarios become arduous when the number of possibilities, restrictions, and factors affecting the decision rises, thereby reducing decision makers to little more than spectators. In such circumstances, decision support systems (DSS) can play an important role in guiding people and organizations towards more accurate decision making. However, conventional DSS lack the necessary adaptability to account for dynamic changes and are frequently inadequate to tackle the subjectivity inherent in decision-makers' preferences and intentions. We argue that these shortcomings can be addressed by a suitable combination of Semiotic Theory and Computational Intelligence algorithms, which together can make up a new generation of DSS. In this article, a formal description of an Intelligent Semiotic Machine is provided and tried out in practical decision contexts. The results obtained show that our approach can provide well-suited decisions based on user preferences, achieving appropriateness while fanning out subjective options, without losing decision context, objectivity, or accuracy.
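To ground the multi-criteria setting described above, the sketch below ranks candidate options by a plain preference-weighted sum. The criteria, weights, and values are invented for illustration, and the Intelligent Semiotic Machine proposed in the article is not reduced to such a model.

```python
# Minimal sketch of personalized multi-criteria scoring (illustrative only;
# the article's Intelligent Semiotic Machine is not a plain weighted sum).

def score(criteria, weights):
    """Weighted sum of normalized criterion values (all assumed in [0, 1])."""
    return sum(weights[c] * criteria.get(c, 0.0) for c in weights)

def rank(options, weights):
    """Order candidate options by how well they match the user's preference weights."""
    return sorted(options, key=lambda item: score(item[1], weights), reverse=True)

# Hypothetical car-purchase example: the weights express one user's preferences.
weights = {"price": 0.5, "comfort": 0.3, "fuel_economy": 0.2}
cars = [
    ("compact", {"price": 0.8, "comfort": 0.4, "fuel_economy": 0.9}),
    ("sedan",   {"price": 0.5, "comfort": 0.9, "fuel_economy": 0.6}),
]
print([name for name, _ in rank(cars, weights)])  # ['compact', 'sedan']
```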


... In this context, we introduce a novel database querying approach, following the classical concept of Query By Example (QBE) [3], in which user-provided examples are used to iteratively construct queries in order to better match user intention and information needs, without the need to exploit any database-specific knowledge such as the query language or the database schema. For that, we center our computations on the new construct called the Semiotic Machine (SM) [4]. ...
... This sequence aims at capturing relevant information that is not explicitly stated but can be used to better understand user intention and needs. In the iSM, interactions and feedback are processed as a three-part sequence of general operations [4], depicted in Figure 1: Selection, Assessment+Discrimination, and Adjustment (SADA), relating, respectively, to a selection 'S' of candidate examples similar to the ones given as input, 'A+D' as a means of evaluating and qualifying the constructed query, and an adjustment 'A' that re-focuses the learned query towards the user. This process was inspired by the ideas of Alexander M. Meystel [19], who envisioned computational semiosis as a sequence of three elementary operations: Grouping, Focusing Attention, and Combinatorial Search (GFACS). ...
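A rough sketch of how one SADA round might be organized is given below; the selection, scoring, and adjustment functions are simplified stand-ins, not the iSM's actual operators, and the data structures are assumed for the example.

```python
# Schematic of one SADA round (Selection, Assessment+Discrimination, Adjustment).
# Illustrative sketch only; these stubs stand in for the iSM's actual operators.

def select(examples, database):
    """S: retrieve candidate rows that share attribute values with the examples."""
    return [row for row in database
            if any(set(ex.items()) & set(row.items()) for ex in examples)]

def assess_and_discriminate(candidates, accepted):
    """A+D: score the constructed query by the fraction of candidates the user accepted."""
    return len([c for c in candidates if c in accepted]) / len(candidates) if candidates else 0.0

def adjust(examples, accepted):
    """A: re-focus the learned query by folding accepted candidates back into the examples."""
    return examples + [c for c in accepted if c not in examples]

def sada_round(examples, database, accepted):
    candidates = select(examples, database)                # S
    score = assess_and_discriminate(candidates, accepted)  # A+D
    return adjust(examples, accepted), score               # A

db = [{"make": "fiat", "doors": 4}, {"make": "ford", "doors": 2}]
print(sada_round([{"doors": 4}], db, accepted=[{"make": "fiat", "doors": 4}]))
```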
... In this scenario, an initial population representing SQL queries is generated, and the evolutionary cycle performs a search in the space of queries. Each individual is assessed and discriminated by the fitness function represented by Equation 4. The most prominent individual is w.r.t. ...
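A generic sketch of such an evolutionary cycle is shown below; since Equation 4 is not reproduced in this excerpt, the fitness function is only a placeholder, and the query encoding (a vector of predicate flags) is an assumption.

```python
# Generic evolutionary cycle over candidate queries (illustrative).
# fitness() is a placeholder for Equation 4, which is not reproduced in the excerpt;
# each query is encoded as a vector of on/off predicate flags (an assumption).
import random

def fitness(query):
    """Placeholder fitness: in the cited work this is given by Equation 4."""
    return sum(query)  # toy score rewarding queries with more active predicates

def mutate(query):
    """Assumed variation operator: flip one randomly chosen predicate flag."""
    q = list(query)
    i = random.randrange(len(q))
    q[i] = 1 - q[i]
    return q

def evolve(pop_size=20, length=8, generations=50):
    population = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)  # assess + discriminate
        parents = ranked[: pop_size // 2]                       # selection
        population = parents + [mutate(p) for p in parents]     # variation + replacement
    return max(population, key=fitness)                         # most prominent individual

print(evolve())
```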
... This Machine does not merely obey a cause-effect relation when an input arrives; rather, it associates a meaning with that input. In this sense, according to Lima Neto et al. [28], a materialization of the concept of a Semiotic Machine may be achieved by a sequence of reasoning in which an input has its interpretation enhanced at least at a sub-symbolic level. For instance, a given query would be automatically refined by means of adaptive query expansion and semantic interpretation methods, so that it is transformed into a new query that is more appropriate for solving the problem at hand. ...
Article
Full-text available
Life sciences researchers are under pressure to innovate faster than ever. Big data offer the promise of unlocking novel insights and accelerating breakthroughs. Ironically, although more data are available than ever, only a fraction is being integrated, understood, and analyzed. The challenge lies in harnessing volumes of data, integrating the data from hundreds of sources, and understanding their various formats. New technologies such as cognitive computing offer promise for addressing this challenge because cognitive solutions are specifically designed to integrate and analyze big datasets. Cognitive solutions can understand different types of data such as lab values in a structured database or the text of a scientific publication. Cognitive solutions are trained to understand technical, industry-specific content and use advanced reasoning, predictive modeling, and machine learning techniques to advance research faster. Watson, a cognitive computing technology, has been configured to support life sciences research. This version of Watson includes medical literature, patents, genomics, and chemical and pharmacological data that researchers would typically use in their work. Watson has also been developed with specific comprehension of scientific terminology so it can make novel connections in millions of pages of text. Watson has been applied to a few pilot studies in the areas of drug target identification and drug repurposing. The pilot results suggest that Watson can accelerate identification of novel drug candidates and novel drug targets by harnessing the potential of big data.
Article
Full-text available
One of the challenges in the field of content-based image retrieval is to bridge the semantic gap that exists between the information extracted from visual data using classifiers, and the interpretation of this data made by the end users. The semantic gap is a cascade of 1) the transformation of image pixels into labelled objects and 2) the semantic distance between the label used to name the classifier and what it refers to for the end user. In this paper, we focus on the second part and specifically on (semantically) scalable solutions that are independent from domain-specific vocabularies. To this end, we propose a generic semantic reasoning approach that applies semiotics in its query interpretation. Semiotics is about how humans interpret signs, and we use its text analysis structures to guide the query expansion that we apply. We evaluated our approach using a general-purpose image search engine. In our experiments, we compared several semiotic structures to determine to what extent semiotic structures contribute to the semantic interpretation of user queries. From the results of the experiments we conclude that semiotic structures can contribute to a significantly higher semantic interpretation of user queries and significantly higher image retrieval performance, measured in quality and effectiveness and compared to a baseline with only synonym expansions.
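For reference, the synonym-expansion baseline used for comparison can be sketched in a few lines; the synonym table below is a hypothetical stand-in for a real lexical resource such as WordNet.

```python
# Minimal synonym-based query expansion (the baseline the semiotic approach is compared to).
# The synonym table is a hypothetical stand-in for a lexical resource such as WordNet.
SYNONYMS = {
    "car": ["automobile", "vehicle"],
    "dog": ["canine", "hound"],
}

def expand_query(terms):
    """Return the original terms plus any known synonyms, without duplicates."""
    expanded = []
    for term in terms:
        for word in [term] + SYNONYMS.get(term, []):
            if word not in expanded:
                expanded.append(word)
    return expanded

print(expand_query(["car", "dog"]))
# ['car', 'automobile', 'vehicle', 'dog', 'canine', 'hound']
```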
Book
Full-text available
The fields of artificial intelligence, intelligence control, and intelligent systems are constantly changing in the subject area of information science and technology. Semiotics and Intelligent Systems Development assembles semiotics and artificial intelligence techniques in order to design new kinds of intelligent systems. A reference publication, Semiotics and Intelligent Systems Development brings a new light to the research field of artificial intelligence by incorporating the study of meaning processes (semiosis), from the perspective of formal sciences, linguistics, and philosophy.
Article
Full-text available
MCDM is considered a complex decision-making tool involving both quantitative and qualitative factors. In recent years, several fuzzy MCDM (FMCDM) tools have been suggested for choosing the best possible options. The purpose of this paper is to systematically review the applications and methodologies of fuzzy multi-criteria decision-making (FMCDM) techniques. This study reviewed a total of 403 papers published from 1994 to 2014 in more than 150 peer-reviewed journals (extracted from online databases such as ScienceDirect, Springer, Emerald, Wiley, ProQuest, and Taylor & Francis). According to experts' opinions, these papers were grouped into four main fields: engineering, management and business, science, and technology. Furthermore, these papers were categorized based on authors, publication date, country of origin, methods, tools, and type of research (FMCDM-utilizing research, FMCDM-developing research, and FMCDM-proposing research). The results of this study indicated that scholars published more papers in 2013 than in any other year. In addition, hybrid fuzzy MCDM (among integrated methods) and fuzzy AHP (among individual methods) were ranked as the first and second most used methods. Additionally, Taiwan was ranked as the first country contributing to this survey, and engineering was ranked as the first field to have applied fuzzy DM tools and techniques.
Article
Full-text available
What is Computational Intelligence (CI) and what are its relations with Artificial Intelligence (AI)? A brief survey of the scope of CI journals and books with "computational intelligence" in their title shows that at present it is an umbrella for three core technologies (neural, fuzzy, and evolutionary), their applications, and selected fashionable methods. At present CI has no comprehensive foundations and is more a bag of tricks than a solid branch of science. A change of focus from methods to challenging problems is advocated, with CI defined as the part of computer science devoted to the solution of non-algorithmizable problems. In this view AI is a part of CI focused on problems related to higher cognitive functions, while the rest of the CI community works on problems related to perception and control, or lower cognitive functions. Grand challenges on both sides of this spectrum are addressed.
Article
Full-text available
Data is becoming more and more of a commodity, so that it is not surprising that data has reached the status of tradable goods. An increasing number of data providers is recognizing this and is consequently setting up platforms that deserve the term “marketplace” for data. We identify several categories and dimensions of data marketplaces and data vendors and provide a survey of the current situation.
Conference Paper
Full-text available
This paper presents a new framework to model an assessment process for a complex and multidimensional syndrome such as depression. Since the measurements of depression are inherently imprecise, we explicitly model the context of the assessment process, and we analyze various aspects of imprecision (syntactic, semantic, and pragmatic). The framework is based on fuzzy logic and semiotics. The fuzzy-logic approach allows for the representation of quantitative imprecision of the measurements and the semiotic approach allows for the representation of the qualitative imprecision of the concepts. We have applied this fuzzy-semiotic framework to two types of clinical measurements: the rating by the experts and the filling out of self-administered questionnaires. The proposed framework provides a conceptual foundation for the construction of a medical decision support tool.
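To illustrate how the quantitative imprecision of a clinical rating can be represented with fuzzy sets, a minimal membership-function sketch follows; the break points and labels are invented for the example and are not taken from the cited framework.

```python
# Triangular fuzzy membership functions for a 0-10 symptom rating (illustrative).
# The break points and labels are invented; the cited framework defines its own sets.

def triangular(x, a, b, c):
    """Membership that rises from a to b and falls from b to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify(rating):
    """Map a crisp rating onto overlapping 'mild', 'moderate', 'severe' sets."""
    return {
        "mild": triangular(rating, -1, 1, 4),
        "moderate": triangular(rating, 2, 5, 8),
        "severe": triangular(rating, 6, 9, 11),
    }

print(fuzzify(6.5))  # partial membership in both 'moderate' and 'severe'
```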
Conference Paper
Full-text available
We consider a conversational recommender system based on example-critiquing where some recommendations are suggestions aimed at stimulating preference expression to acquire an accurate preference model. User studies show that suggestions are particularly effective when they present additional opportunities to the user according to the look-ahead principle (32). This paper proposes a strategy for producing suggestions that exploits prior knowledge of preference distributions and can adapt relative to users' reactions to the displayed examples. We evaluate the approach with simulations using data acquired by previous interactions with real users. In two different settings, we measured the effects of prior knowledge and adaptation strategies with satisfactory results.
Article
Full-text available
This paper presents the top 10 data mining algorithms identified by the IEEE International Conference on Data Mining (ICDM) in December 2006: C4.5, k-Means, SVM, Apriori, EM, PageRank, AdaBoost, kNN, Naive Bayes, and CART. These top 10 algorithms are among the most influential data mining algorithms in the research community. With each algorithm, we provide a description of the algorithm, discuss the impact of the algorithm, and review current and further research on the algorithm. These 10 algorithms cover classification, clustering, statistical learning, association analysis, and link mining, which are all among the most important topics in data mining research and development.
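As a pointer to one of the listed algorithms, here is a compact textbook-style k-Means on one-dimensional data; it follows the standard formulation rather than the survey's description.

```python
# Compact k-Means on one-dimensional data (textbook form; one of the ten surveyed algorithms).
import random

def kmeans(points, k=2, iters=20):
    centers = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                                   # assignment step
            clusters[min(range(k), key=lambda i: abs(p - centers[i]))].append(p)
        centers = [sum(c) / len(c) if c else centers[i]    # update step
                   for i, c in enumerate(clusters)]
    return sorted(centers)

print(kmeans([1.0, 1.2, 0.8, 9.0, 9.5, 10.1], k=2))  # roughly [1.0, 9.5]
```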
Article
Full-text available
Modern semiotics is a branch of logic that formally defines symbol-based communication. In recent years, the semiotic classification of signs has been invoked to support the notion that symbols are uniquely human. Here we show that alarm-calls such as those used by African vervet monkeys (Cercopithecus aethiops) logically satisfy the semiotic definition of symbol. We also show that the acquisition of vocal symbols in vervet monkeys can be successfully simulated by a computer program based on minimal semiotic and neurobiological constraints. The simulations indicate that learning depends on the tutor-predator ratio, and that apprentice-generated auditory mistakes in vocal symbol interpretation have little effect on the learning rates of apprentices (up to 80% of mistakes are tolerated). In contrast, just 10% of apprentice-generated visual mistakes in predator identification will prevent any vocal symbol from being correctly associated with a predator call in a stable manner. Tutor unreliability was also deleterious to vocal symbol learning: a mere 5% of "lying" tutors were able to completely disrupt symbol learning, invariably leading to the acquisition of incorrect associations by apprentices. Our investigation corroborates the existence of vocal symbols in a non-human species, and indicates that symbolic competence emerges spontaneously from classical associative learning mechanisms when the conditioned stimuli are self-generated, arbitrary and socially efficacious. We propose that more exclusive properties of human language, such as syntax, may derive from the evolution of higher-order domains for neural association, more removed from both the sensory input and the motor output, able to support the gradual complexification of grammatical categories into syntax.
Conference Paper
Full-text available
This paper presents an enhanced version of the AED (Appropriate Executive Decisions) algorithm, which is based on the biological immune system (BIS) and whose purpose is the generation of appropriate executive decisions aimed at business environments. A new metric has been incorporated into the algorithm, and a larger and more representative database was used to train and validate results. Moreover, this paper offers better directions on how to apply AED in executive decisions, improving the quality of the learning process through immunoinformatics concepts, namely decision cells, thereby producing more appropriate executive decisions. Experiments were carried out with executive officers experienced in executive decisions in order to suitably validate the appropriateness of the responses generated by the enhanced AED algorithm.
Conference Paper
Full-text available
This work presents a suite of hybrid intelligent techniques helpful in decision making, the Hybrid Intelligent Decision Suite (HIDS). The system is composed of two complementary modules, one for forecasting new decision variables and the other for searching among generated results of candidate decisions. Using this synergistic approach, HIDS is also suitable for obtaining the conditioning factors that lead to a desired decision, thus overcoming some of the challenges posed by the 'Inverse Problem'. To test this concept we applied our approach to two distinct problems: (1) diagnosis of cardiologic diseases (from the Proben1 data set) and (2) automobile feature selection (from the UCI data set). In the simulations carried out here, the HIDS comprised Artificial Neural Networks (ANNs) and Fuzzy Logic Controllers. Results showed that the ideas presented here can be effective for assembling tools that reduce uncertainty and improve the quality of decision making about future scenarios.
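A schematic of the two-module arrangement (forecast new decision variables, then search among candidate decisions) might look like the sketch below; both modules are stubs, since the original ANN and fuzzy controllers are not reproduced here.

```python
# Schematic HIDS pipeline: a forecasting module feeds a search module (stubs only;
# the original ANN and fuzzy-logic controllers are not reproduced here).

def forecast(history):
    """Stub forecaster: naive linear projection of the next decision variable."""
    return history[-1] + (history[-1] - history[-2])

def search(candidates, target):
    """Stub search: pick the candidate decision whose outcome is closest to the forecast."""
    return min(candidates, key=lambda c: abs(c["outcome"] - target))

history = [10.0, 11.0, 12.5]                       # past values of a decision variable
candidates = [{"name": "hold", "outcome": 12.0},
              {"name": "expand", "outcome": 14.5}]
print(search(candidates, forecast(history)))       # {'name': 'expand', 'outcome': 14.5}
```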
Article
Full-text available
The overall purpose of this paper is to demonstrate the relevance of semiotics concepts to the analysis of intelligent control systems. Semiotics has had only a minor impact on research within intelligent control or robotics. These areas are currently dominated by mathematical concepts of control theory and information processing concepts of artificial intelligence. This situation is unfortunate because the understanding of a complex control problem requires an analysis of the sign relations between sensory data and their meanings and the sign relations between the physical result of a control action and the intentions of control agents. These problems of sign interpretation inherent in all control situations are relevant for the design of automated controls and of the interaction between human operators and the automation. The relevance of semiotics to control problems is demonstrated by applying the semiotics of action developed by Charles Morris to various aspects of a control situation...
Chapter
This chapter discusses Google DeepMind and Google AlphaGo and then moves on to the future of Reinforcement Learning and compares what’s happening with man versus machine.
Article
Human decision-making involves cognitive processes of selection, evaluation, and interpretation among candidate solutions in order to solve decision problems. Nonintelligent decision support systems (DSS) lack automatic interpretations, at least at a low-level scale, which can lead to undesired solutions. To tackle this limitation, hence producing enhanced decision making, a hybrid intelligent decision support approach is presented, which combines the case-based reasoning cycle, semiotic concepts, and self-organizing maps. In addition, a novel sign deconstruction mechanism is introduced as the foundation of the new approach, affording better interpretability and contextualization of candidate solutions without compromising efficiency and precision. The obtained results confirm that our proposed approach has the potential to be readily applicable to decision problems, particularly those of a subjective nature. Moreover, the proposed approach may integrate some unlikely elements of linguistics and cognitive science, which could fundamentally help enhance DSS.
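To make the retrieval side of such a hybrid concrete, the sketch below uses a tiny self-organizing map (simplified to a winner-take-all update, with no neighborhood function) to retrieve past cases near a new problem, which is roughly the role a SOM can play inside a CBR 'retrieve' phase; the training loop and data are assumptions, not the article's implementation.

```python
# Simplified SOM-assisted retrieval for a CBR 'retrieve' step (illustrative).
# The SOM is reduced to a winner-take-all update with no neighborhood function,
# and the two-dimensional cases are invented; this is not the article's implementation.
import random

def dist2(u, v):
    return sum((a - b) ** 2 for a, b in zip(u, v))

def train_som(cases, units=3, epochs=200, lr=0.3):
    weights = [list(random.choice(cases)) for _ in range(units)]
    for _ in range(epochs):
        x = random.choice(cases)
        bmu = min(range(units), key=lambda u: dist2(weights[u], x))   # best-matching unit
        weights[bmu] = [w + lr * (xi - w) for w, xi in zip(weights[bmu], x)]
    return weights

def retrieve(problem, weights, cases, top=2):
    bmu = min(range(len(weights)), key=lambda u: dist2(weights[u], problem))
    # Past cases closest to the winning prototype become the retrieval candidates.
    return sorted(cases, key=lambda c: dist2(weights[bmu], c))[:top]

cases = [(0.1, 0.2), (0.15, 0.25), (0.9, 0.8), (0.85, 0.9)]
som = train_som(cases)
print(retrieve((0.12, 0.22), som, cases))
```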
Conference Paper
Data and data-related services are increasingly being traded on data marketplaces. However, value attribution of data is still not well-understood, in particular when two competing offers are to be compared. This paper discusses the role data quality can play in this context and suggests a weighted quality score that allows for ‘quality for money’ comparisons of different offerings.
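The weighted quality score idea can be illustrated as follows; the quality dimensions, weights, and prices are made up for the example.

```python
# 'Quality for money' comparison via a weighted quality score (all values invented).

def quality_score(metrics, weights):
    """Weighted average of per-dimension quality values in [0, 1]."""
    return sum(weights[d] * metrics[d] for d in weights) / sum(weights.values())

def quality_for_money(metrics, weights, price):
    return quality_score(metrics, weights) / price

weights = {"accuracy": 0.5, "completeness": 0.3, "timeliness": 0.2}
offer_a = ({"accuracy": 0.9, "completeness": 0.7, "timeliness": 0.8}, 100.0)
offer_b = ({"accuracy": 0.8, "completeness": 0.9, "timeliness": 0.6}, 80.0)
best = max([offer_a, offer_b], key=lambda o: quality_for_money(o[0], weights, o[1]))
print(best)  # the offering with the better quality-per-price ratio
```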
Conference Paper
With the development of the data market, data resources play a key role as part of business resources. However, existing data markets are too simple to reveal the real value of data in practical applications. Motivated by the effectiveness and fairness of the data market, we develop a fair data market system that takes data quality into consideration. In our system, we design a fair data price evaluation mechanism, which aims at meeting the needs of both the supply and demand sides. For the data quality issues in the data market, several critical factors, including accuracy, completeness, consistency, and currency, are integrated in order to provide a comprehensive assessment of the data. Moreover, our system can also provide data repairing recommendations based on data quality evaluation.
Article
Big Data concern large-volume, complex, growing data sets with multiple, autonomous sources. With the fast development of networking, data storage, and the data collection capacity, Big Data are now rapidly expanding in all science and engineering domains, including physical, biological and biomedical sciences. This paper presents a HACE theorem that characterizes the features of the Big Data revolution, and proposes a Big Data processing model, from the data mining perspective. This data-driven model involves demand-driven aggregation of information sources, mining and analysis, user interest modeling, and security and privacy considerations. We analyze the challenging issues in the data-driven model and also in the Big Data revolution.
Article
Introduction. — I. Some general features of rational choice. — II. The essential simplifications. — III. Existence and uniqueness of solutions. — IV. Further comments on dynamics. — V. Conclusion. — Appendix.
Conference Paper
Computer-based support for medical decision making has been a subject of many research projects since the earliest days of computers. Although the early expert systems promised to automate medical diagnosis, very few systems were actually utilized in clinical settings. In the last twenty years, the intent to use computers to replace or simulate medical experts has changed to a less ambitious goal of supporting and assisting the medical decision-making process. Recently, the growing availability of electronically stored patient records has provided a new opportunity for the decision support systems to utilize the data mining tools. In all these types of decision support systems, data play a central role. This paper examines three fundamental issues surrounding data: modeling of data as representation of medical concepts, interpretation of data in multiple contexts, and utilization of data in the decision-making process. The paper introduces a semiotic approach to the analysis of the role of data in medical decision making. This approach assumes that the processes of data modeling, collection, storage, processing, and interpretation are components of a larger communication process - the medical decision-making process. Furthermore, the semiotic approach describes the medical decision-making process in a broader context of representation, interpretation, and meaning making in a social context.
Conference Paper
This article describes a set function that maps a set of Pareto optimal points to a scalar. A theorem is presented that shows that the maximization of this scalar value constitutes the necessary and sufficient condition for the function’s arguments to be maximally diverse Pareto optimal solutions of a discrete, multi-objective, optimization problem. This scalar quantity, a hypervolume based on a Lebesgue measure, is therefore the best metric to assess the quality of multiobjective optimization algorithms. Moreover, it can be used as the objective function in simulated annealing (SA) to induce convergence in probability to the Pareto optima. An efficient, polynomial-time algorithm for calculating this scalar and an analysis of its complexity is also presented.
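For two maximized objectives, the hypervolume indicator reduces to the area dominated by the front above a reference point and can be computed with a simple sweep, as sketched below; the reference point and the example front are arbitrary.

```python
# 2-D hypervolume (Lebesgue measure) dominated by a Pareto front of two maximized
# objectives, relative to a reference point; front and reference are arbitrary examples.

def hypervolume_2d(front, ref=(0.0, 0.0)):
    """Area dominated by the front and bounded below/left by the reference point."""
    pts = sorted(front, key=lambda p: p[0], reverse=True)  # sweep by the first objective
    area, prev_y = 0.0, ref[1]
    for x, y in pts:
        if y > prev_y:
            area += (x - ref[0]) * (y - prev_y)            # add a new horizontal slab
            prev_y = y
    return area

front = [(1.0, 4.0), (2.0, 3.0), (3.0, 1.0)]               # mutually non-dominated points
print(hypervolume_2d(front))  # 8.0
```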
Article
Contents: Part I Decision support and business intelligence: 1) Decision support systems and business intelligence. - Part II Computerized decision support: 2) Decision making, systems, modeling, and support; 3) Decision support system concepts, methodologies, and technologies: an overview; 4) Modeling and analysis. - Part III Business intelligence: Special introductory section: foundations of business intelligence; 5) Data warehousing; 6) Business analytics and data visualization; 7) Data, text, and web mining; 8) Neural networks for data mining; 9) Business performance management. - Part IV Collaboration, communication, group support systems, and knowledge management: 10) Computer-supported collaboration technologies and group support systems; 11) Knowledge management. - Part V Intelligent systems: 12) Artificial intelligence and expert systems; 13) Advanced intelligent systems; 14) Intelligent systems over the Internet. - Part VI Implementing decision support systems: 15) Systems development and acquisition; 16) Integration, impact, and the future of management support systems. Online material: 17) Enterprise systems; 18) Knowledge acquisition, representation, and reasoning. Online tutorials.
Conference Paper
Intelligent systems have been in the focus of attention of the scientific community for a long time. Nevertheless, the concept of an intelligent system is not fully understood, and this affects the interpretation of existing research results as well as the choice of new research directions. In this paper, the subject is considered from both the semiotic and the control-theory points of view. The phenomenon of intelligence is demonstrated as a result of the joint functioning of three operators: grouping, focusing attention, and combinatorial search (GFACS). When information is processed by GFACS, multiresolutional systems of knowledge develop, and nested loops of control emerge.
Nöth, W. Semiotic Machines. Cybernetics & Human Knowing 9 (2002), 5-21.
Pomerol, J., and Adam, F. Practical Decision Making: From the Legacy of Herbert Simon to Decision Support Systems. Decision Support in an Uncertain and Complex World: The IFIP TC8/WG8.3 International Conference 2004 (2004), 647-657.