Toward Human-Level Artificial Intelligence - Representation and Computation of Meaning in Natural Language
Abstract
How can human-level artificial intelligence be achieved? What are the potential consequences? This book describes a research approach toward achieving human-level AI, combining a doctoral thesis and research papers by the author.
The research approach, called TalaMind, involves developing an AI system that uses a 'natural language of thought' based on the unconstrained syntax of a language such as English; designing the system as a collection of concepts that can create and modify concepts to behave intelligently in an environment; and using methods from cognitive linguistics for multiple levels of mental representation. The book proposes a design-inspection alternative to the Turing Test and discusses 'higher-level mentalities' of human intelligence, which include natural language understanding, higher-level forms of learning and reasoning, imagination, and consciousness. Dr. Jackson gives a comprehensive review of other research, addresses theoretical objections to the proposed approach and to achieving human-level AI in principle, and describes a prototype system that illustrates the potential of the approach.
This book discusses economic risks and benefits of AI, considers how to ensure that human-level AI and superintelligence will be beneficial for humanity, and gives reasons why human-level AI may be necessary for humanity's survival and prosperity.
... An approach different from the Turing Test was proposed in [Jackson, 2014]: ...
... The TalaMind thesis (Jackson, 2014) advocates this approach toward achieving human-level AI. ...
... The 'TalaMind thesis' (Jackson, 2014) discusses representation and processing of goals only to a limited extent, sufficient to support the more general discussion. It does not discuss goal reasoning per se, though the prototype illustrates potential to support goal reasoning. ...
What is the nature of goal reasoning needed for human-level artificial intelligence? This presentation contends that to achieve human-level AI, a system architecture for human-level goal reasoning would benefit from a neuro-symbolic approach combining deep neural networks with a ‘natural language of thought’ and would be greatly handicapped if relying only on formal logic systems.
... They are relevant specifically to this paper: The next section will discuss how human-level goal reasoning is important for the higher-level mentalities, and the following sections will discuss how a natural language of thought could support human-level goal reasoning in relation to the higher-level mentalities. More general discussions of the higher-level mentalities are given in (Jackson, 2019), beginning in Chapter 2 section 1.2. ...
... This approach is what I describe as implementing a 'natural language of thought' in an AI system. (Jackson, 2019) Other symbolic languages could be used internally to support this internal use of natural language, e.g., to support pattern-matching of internal natural language data structures, or to support interpretation of natural language data structures. This approach could also be combined with neural networks, in hybrid approaches for processing natural language. ...
... This approach involves more than just representing and using the syntax of natural language expressions to represent thoughts: It also involves representing and using the semantics of natural language words and expressions, to represent thoughts. (Jackson, 2019) And it involves more than representations to annotate meaning of natural language expressions (e.g., Van Gysel et al., 2021; Banarescu et al., 2013). The TalaMind approach envisions annotating and using natural language expressions within an AI system, as representations of thoughts. ...
What is the nature of goal reasoning needed for human-level artificial intelligence? This research position paper contends that to achieve human-level AI, a system architecture for human-level goal reasoning would benefit from a neuro-symbolic approach combining deep neural networks with a 'natural language of thought' and would be greatly handicapped if relying only on formal logic systems.
... This paper is based on the author's previous works ([1] et seq.) about a proposed approach toward eventually achieving human-level artificial intelligence, called the 'TalaMind' approach. Regarding limitations, it should be said at the outset that this position paper can only present reasons why it is plausible this approach may achieve human-level AI and provide better support for human-level knowledge representation than other approaches. ...
... It may be sufficient (and even important, for achieving beneficial human-level AI) to develop systems that are human-like, and understandable by humans, rather than human-identical. [2] Therefore, an approach different from the Turing Test is proposed in [1] and [3]: to define human-level intelligence by identifying capabilities achieved by humans and not yet achieved by any AI system, and to inspect the internal design and operation of any proposed system to see if it can in principle robustly support these capabilities, which I call higher-level mentalities: ...
... • Reflective observation: observation of having observations. This definition was proposed in [1] (p. 136) and adapted from the "axioms of being conscious" proposed by Aleksander and Morton [10]. ...
What is the nature of knowledge representation needed for human-level artificial intelligence? This position paper contends that to achieve human-level AI, a system architecture for human-level knowledge representation would benefit from a neuro-symbolic approach combining deep neural networks with a ‘natural language of thought’ and would be greatly handicapped if relying only on formal logic systems.
... [Jackson, 2018] Therefore, an approach different from the Turing Test was proposed in [Jackson, 2014]: to define human-level intelligence by identifying capabilities achieved by humans and not yet achieved by any AI system, and to inspect the internal design and operation of any proposed system to see if it can in principle robustly support these capabilities. The higher-level mentalities together comprise a qualitative difference which would distinguish human-level AI from current AI systems and computer systems in general. Discussions of the higher-level mentalities are given in [Jackson, 2019a], beginning in Chapter 2 section 1.2. ...
... This paper will advocate a class of hybrid architectures called the 'TalaMind architecture'. [Jackson, 2019a] The approach advocated here belongs in the camp of "neuro-symbolic AI" research. ...
... This approach involves more than just representing and using the syntax of natural language expressions to represent thoughts: It also involves representing and using the semantics of natural language words and expressions, to represent thoughts. [Jackson, 2019a] There is not a consensus based on analysis and discussion among scientists that an AI system cannot use a natural language like English as an internal language for representation and processing of thoughts. Rather, in general it has been an assumption by AI scientists over the decades that computers should use formal logic languages (or simpler symbolic languages) for internal representation and processing within AI systems. ...
How could AI systems achieve human-level qualitative reasoning? This research position paper proposes that a system architecture for human-level qualitative reasoning would benefit from a neuro-symbolic approach combining a 'natural language of thought' with qualitative process semantics.
... Yet all such concepts can also be expressed in natural language, perhaps more completely and understandably across fields of thought [4] [5] [7], especially if augmented by diagrams, pictures, and formal notations for logic, mathematics, fields of science, and metascience. ...
... Linguistics in general is a subdomain of the Social domain, and natural languages like English are profoundly important for scientists in social groups interacting to develop knowledge in all scientific domains. Computational Linguistics exists in the intersection of the domains of Computing and Linguistics, and is arguably a profoundly important topic for achieving human-level artificial intelligence [4]. This suggests that computational linguistics is potentially very important for metascience, and that AI systems which can understand natural language may also support metascience. ...
... Natural language understanding and metacognition are both core problems for achieving human-level AI [4]. Also, there is a connection between natural language and metacognition: natural language can be used to express thoughts about thoughts, thoughts about perceptions, etc. ...
Rosenbloom gave reasons why Computing should be considered as a fourth great domain of science, along with the Physical sciences, Life sciences, and Social sciences. This paper considers Metascience as the future, fifth great domain of science, and discusses reasons why metascience may be closely related to metacognition in human intelligence and human-level artificial intelligence, suggesting that the representation and processing which could support an AI system’s metacognition could also support an AI system reasoning metascientifically about domains of science.
... A brief discussion of his philosophy provides a starting point for considering more recent perspectives of understanding and explanation. (This section repeats information given in [22] et seq.) Peirce described understanding as a process of developing and using explanations of how (by what cause) and why (for what purpose) something happens. He used the term 'abduction' to refer to reasoning that develops explanations: If one observes something, B, then one considers what fact A might naturally cause or explain B, and one concludes it is reasonable to think A might be true (Peirce, [33] 5.189). ...
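Peirce's abductive step described above can be given a minimal sketch. The rule base, observations, and function below are hypothetical illustrations, not part of any cited system:

```python
# Minimal sketch of Peircean abduction: given an observation B and causal
# rules of the form (A, B) meaning "A would explain B", propose candidate
# explanations A. The rules and observations here are illustrative assumptions.

def abduce(observation, rules):
    """Return every antecedent A such that a rule A -> observation exists."""
    return [cause for cause, effect in rules if effect == observation]

rules = [
    ("it rained", "the grass is wet"),
    ("the sprinkler ran", "the grass is wet"),
    ("it is winter", "the pond is frozen"),
]

# Observing wet grass, abduction yields plausible (not certain) explanations.
hypotheses = abduce("the grass is wet", rules)
print(hypotheses)  # ['it rained', 'the sprinkler ran']
```

Note that abduction, unlike deduction, only concludes that each hypothesis *might* be true; further reasoning or observation is needed to choose among them.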
... It does not appear there is any valid theoretical reason why the syntax and semantics of a natural language like English cannot be used directly by an AI system as its 'language of thought', without translation into formal languages, to help achieve human-level AI (cf. Jackson [22], pp.153-174). ...
... Human thoughts, beliefs, and intentions may in general be better represented by natural language expressions than by formal logic: Formal logic is handicapped in representing ambiguities and contradictions for the broad range of human intentions, thoughts, and beliefs (cf. Sowa, [37]; Jackson [22], pp.60-70). Natural languages have been developed for millennia to do this. ...
How can we define and understand the nature of understanding itself? This paper discusses cognitive processes for understanding the world in general and for understanding natural language. The discussion considers whether and how an artificial cognitive system could use a ‘natural language of thought’, and whether the ambiguities of natural language would be a theoretical barrier or could be a theoretical advantage for such a system, in a research approach toward human-level artificial intelligence.
... It may be sufficient (and even important, for achieving beneficial human-level AI) to develop systems that are human-like, and understandable by humans, rather than human-identical. [28] Therefore, an approach different from the Turing Test was proposed in [26]: to define human-level intelligence by identifying capabilities achieved by humans and not yet achieved by any AI system, and to inspect the internal design and operation of any proposed system to see if it can in principle support these capabilities, which I call higher-level mentalities: ...
... The following subsections briefly describe five of the higher-level mentalities. Further discussions are given in [26] and [32], beginning in §2.1.2. ...
... Section 8 discusses implementation and demonstration in a prototype system. Additional discussions are given in [26] et seq. ...
Note: Rather than this paper, I (the author) recommend reading the more recent, published paper "On Achieving Human-Level Knowledge Representation by Developing a Natural Language of Thought", which contains additional discussions. -- PCJ 9/21/21 --
What is the nature of knowledge representation needed for human-level artificial intelligence? This position paper contends that to achieve human-level AI, a system architecture for human-level knowledge representation would benefit from a neuro-symbolic approach combining deep neural networks with a 'natural language of thought', and would be greatly handicapped if relying only on formal logic systems.
... It is easy to make the mistake of assuming that a word usage can only refer to a single word sense. The syntax for Tala in (Jackson, 2014) showed a single word sense as the meaning of a word usage. It was only after studying Kilgarriff's (1997) paper that I realized the syntax should support multiple word senses for a word usage. ...
... Depending on the particular cognitive system, a symbolic language may be a simple notation (e.g. n-tuples of symbols), or it could be a formal, logical language like predicate calculus, or in theory it could even be a natural language like English, an approach investigated in (Jackson, 2014), which will be discussed in the following pages. At this level, meanings of words and sentences can be represented by expressions in formal or natural languages. ...
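The spectrum of symbolic languages mentioned above can be sketched for a single sentence. The three structures below are illustrative assumptions, not the actual notations of Tala or any cited system:

```python
# Three illustrative internal representations of "Leo grinds grain",
# spanning the spectrum described above: a simple notation, a formal
# logical language, and a natural-language parse. All are hypothetical.

# 1. A simple n-tuple notation.
ntuple = ("grind", "Leo", "grain")

# 2. A predicate-calculus-style formula, here held as a string.
fopc = "exists e. grind(e) & agent(e, Leo) & patient(e, grain)"

# 3. A natural-language dependency parse, as nested dicts keyed on head words.
nl_parse = {
    "word": "grinds",
    "pos": "verb",
    "subject": {"word": "Leo", "pos": "noun"},
    "object": {"word": "grain", "pos": "noun"},
}

# Each level trades simplicity for expressiveness; only the third retains
# the surface syntax of the English sentence itself.
print(ntuple[0], nl_parse["word"])
```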
... At the linguistic level of a TalaMind architecture for a cognitive agent (Jackson, 2014), each of these word senses could be represented by a Tala symbolic expression representing a dependency grammar parse-tree for an English definition of the word sense. Thus in the TalaMind prototype's 'discovery of bread' simulation there is a step where one cognitive agent (Leo) says to another agent (Ben): Can you turn grain into fare for people? ...
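The idea of representing each word sense by a parsed English definition, while letting one word usage point at several candidate senses, can be sketched as follows. The sense inventory, structures, and names are hypothetical illustrations, not the prototype's actual Tala expressions:

```python
# Hypothetical sketch: each sense of "fare" is stored as a (much simplified)
# dependency-style parse of an English definition, so that a word usage can
# reference one or more candidate senses rather than a single fixed symbol.

senses = {
    "fare.n.1": {"head": "food", "modifier": "provided"},       # food, provisions
    "fare.n.2": {"head": "price", "modifier": "of transport"},  # a transport fee
}

# A word usage keeps a list of candidate senses, reflecting the point
# (after Kilgarriff) that a usage need not resolve to a single sense.
usage = {"word": "fare", "candidate_senses": ["fare.n.1", "fare.n.2"]}

def definitions(usage, senses):
    """Return the stored definition structures for a usage's candidates."""
    return [senses[s] for s in usage["candidate_senses"]]

for d in definitions(usage, senses):
    print(d["head"])
```

In Leo's question above, context (turning grain into something for people) would favor the food sense over the fee sense, but the representation deliberately keeps both available.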
Understanding the meanings of words is essential for natural language understanding in cognitive systems. This paper discusses the nature and existence of word meanings, and how word meanings can be represented in artificial cognitive systems. An analysis is given of Kilgarriff's (1997) arguments that word senses do not exist, showing there is room to take exception. Kilgarriff's (2007) paper advocating Gricean semantics is compatible with a cognitive systems approach to word senses. Representation of word meanings is discussed for a research approach to a cognitive architecture for human-level artificial intelligence.
... Linde's and Rosenbloom's questions are of interest to me, relevant to my previous paper [15] about the Common Model, and relevant to the research approach I advocate toward human-level AI [14]. So, the following pages give my answers to these questions, and to some additional questions asked by anonymous reviewers of a draft version of this paper. ...
... The TalaMind thesis [14] advocates a very different direction, which involves developing an AI system using an internal language (called Tala) based on the unconstrained syntax of a natural language (English), and taking a principled approach toward supporting the unconstrained semantics of natural language. Tala is used as a symbolic language for representing information and procedures. ...
... Since progress has been very slow in developing natural language understanding systems by translation into formal languages, Jackson [14] investigated whether it may be possible and worthwhile to perform cognitive processing directly with unconstrained natural language. ...
(This paper is available online at https://doi.org/10.1016/j.procs.2018.11.051) This paper discusses how natural language could be supported in further developing the Common Model of Cognition following the TalaMind approach toward human-level artificial intelligence. These thoughts are presented as answers to questions posed by Peter Lindes, Paul Rosenbloom, and reviewers of this paper, followed by a description of the TalaMind demonstration system, and a general discussion of theoretical and strategic issues for the Common Model of Cognition.
... Simply put, metacognition is cognition about cognition. Thus it includes, for example, reasoning about reasoning, reasoning about learning, and learning about reasoning [28,34,43]. Broadly construed, it is any cognitive process or structure about another cognitive process or structure (e.g., data about memory held in memory). ...
... Jackson [28] discussed how computers could potentially obtain enough self-awareness to achieve human-level AI by adapting the 'axioms of being conscious' proposed by Aleksander and Morton [2] for research on artificial consciousness. For a system to approach artificial consciousness, there are a set of metacognitive "observations" it must achieve: ...
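The kind of metacognitive "observation" described above, including reflective observation (observation of having observations), can be given a minimal sketch. The class and event names are illustrative assumptions, not any cited system's design:

```python
# Minimal sketch of reflective observation: an agent records observations,
# and can then form an observation *about* its own observations.
# The class and method names are hypothetical illustrations.

class ReflectiveAgent:
    def __init__(self):
        self.observations = []

    def observe(self, event):
        self.observations.append(("observed", event))

    def reflect(self):
        # A higher-order observation about the agent's own observations.
        report = ("observed-that-I-observed",
                  [e for _, e in self.observations])
        self.observations.append(report)
        return report

agent = ReflectiveAgent()
agent.observe("rain")
agent.observe("thunder")
print(agent.reflect())  # ('observed-that-I-observed', ['rain', 'thunder'])
```

The point of the sketch is only structural: the same memory that holds first-order observations also holds observations about them, which is the minimal shape the cited "axioms of being conscious" require.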
This paper provides a starting point for the development of metacognition in a common model of cognition. It identifies significant theoretical work on metacognition from multiple disciplines that the authors believe worthy of consideration. After first defining cognition and metacognition, we outline three general categories of metacognition, provide an initial list of its main components, consider the more difficult problem of consciousness, and present examples of prominent artificial systems that have implemented metacognitive components. Finally, we identify pressing design issues for the future. (This paper is available online at https://doi.org/10.1016/j.procs.2018.11.046 )
... The artificial neural network (ANN) is a mathematical model based on the natural activity of neurons and the architecture of the human nervous system [26], [27]. Perceptrons are the essential components of a multi-layer neural network that forms an ANN arranged hierarchically in layers [28]. Each layer is made up of neurons that are not connected to one another, whose input data originates from the same source (the outside or another layer), and which deliver their outputs to the same destination (another layer or the outside). ...
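A layered feed-forward network of the kind described can be sketched in a few lines. The layer sizes and weight values below are arbitrary illustrative choices:

```python
import math

# Minimal sketch of a multi-layer perceptron forward pass: each layer's
# neurons take input only from the previous layer and feed only the next,
# matching the layered description above. Weights are arbitrary examples.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    """One dense layer: weighted sum per neuron, then sigmoid activation."""
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# A 2-input, 2-hidden-neuron, 1-output network.
hidden = layer([0.5, -1.0],
               weights=[[0.4, 0.6], [-0.3, 0.8]],
               biases=[0.1, -0.1])
output = layer(hidden,
               weights=[[1.2, -0.7]],
               biases=[0.05])
print(output)
```

In practice (as in the Al6061 study below) the weights are not fixed by hand but fitted to experimental data by a training procedure such as backpropagation, which this sketch omits.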
According to this study, because of its light weight, high specific strength, and stiffness at high temperatures, Al6061 is the most appropriate material in the transportation industry. The major goal of this research is to evaluate the physical properties of Al6061, such as thermal conductivity and electrical resistivity, by experimental investigation utilizing the multivolt drop approach. As Artificial Intelligence techniques become more widespread, they are increasingly used to forecast material properties in engineering research. The second goal of this research is therefore to employ Artificial Neural Networks to build a low-error prediction model from the experimental data, reducing the need for direct observations across the wide range of temperatures where the physical properties of Al6061 are significant. As a consequence, it was found that the optimized ANN accurately predicts the physical properties of interest. The predicted results for electrical resistivity and thermal conductivity had Root Mean Squared Errors of 0.99966 and 0.99401, respectively, with an R-Square average value of 0.820105. Various tests and ANN methodologies were used to validate and compare the results. The comparison of predicted values with multivolt drop experimental results demonstrated that the ANN model predicts the Al6061 properties accurately and efficiently.
... In addition, the generation of concepts will probably extract and digest the contents. In general, meta-cognition includes, for example, reasoning about reasoning, reasoning about learning, and learning about reasoning [143,144]. Meta-cognition is also defined [145] in terms of meta-cognitive experiences: "Meta-cognitive experiences are any conscious cognitive or affective experiences that accompany and pertain to any intellectual enterprise." ...
Aspiring to build intelligent agents that can assist humans in daily life, researchers and engineers from both academia and industry have kept advancing the state-of-the-art in domestic robotics. With the rapid advancement of both hardware (e.g., high-performance computing, smaller and cheaper sensors) and software (e.g., deep learning techniques and computational intelligence technologies), robotic products have become available to ordinary household users. For instance, domestic robots have assisted humans in various daily life scenarios by providing: (1) physical assistance, such as floor vacuuming; (2) social assistance, such as chatting; and (3) education and cognitive assistance, such as offering partnerships. Crucial to the success of domestic robots is their ability to understand and carry out designated tasks from human users via natural and intuitive human-like interactions, because ordinary users usually have no expertise in robotics. To investigate whether and to what extent existing domestic robots can participate in intuitive and natural interactions, we survey existing domestic robots in terms of their interaction ability, and discuss the state-of-the-art research on multi-modal human–machine interaction from various domains, including natural language processing and multi-modal dialogue systems. We relate domestic robot application scenarios to state-of-the-art computational techniques of human–machine interaction, and discuss promising future directions towards building more reliable, capable and human-like domestic robots.
... The Activation Bit Vector Machine is a mathematical model of the brain, an abstract model of the brain hardware, designed to allow implementation of Context Logic reasoners. It is based on the theory, and leverages the operators of, Vector Symbolic Architectures (Kanerva, 1988, 2009; Gayler, 2006). Language, as a mathematical structure, is fundamental to thought, and one may even argue that the mind is the same as language (Jackson, 2019). Predominantly, thinking is experienced as a stream of words, but thinking is more: imagined static constructions such as mental maps and mental images, and dynamic constructions such as mental models or musical pieces. ...
Research towards a new approach to the abstract symbol grounding problem showed that through model counting there is a correspondence between logical/linguistic and coordinate representation in the visuospatial domain. The logical/verbal description of a spatial layout directly gives rise to a coordinate representation that can be drawn, with the drawing reflecting what is described. The main characteristic of this logical property is that it does not need any semantic information or ontology apart from a separation into symbols/words referring to relations and symbols/words referring to objects. Moreover, the complete mechanism can be implemented efficiently on a brain-inspired cognitive architecture, the Activation Bit Vector Machine (ABVM), an architecture that belongs to the Vector Symbolic Architectures. However, the natural language fragment captured previously was restricted to simple predication sentences, with the corresponding logical fragment being atomic Context Logic (CLA), and the only actuation modality leveraged was visualization. This article extends the approach in all three aspects: by adding a third category of action verbs, we move to a fragment of first-order Context Logic (CL1), with modalities requiring a temporal dimension, such as film and music, becoming available. The article presents an ABVM generating sequences of images from texts.
... A longer paper [5] discusses the relation of mathematics to science, and considers how computational linguistics and human-level artificial intelligence [4] are potentially important for studies of metascience and metacognition. ...
Rosenbloom (2013) gave reasons why Computing should be considered as a fourth great domain of science, along with the Physical sciences, Life sciences, and Social sciences. This paper adapts Rosenbloom’s ‘metascience expression language’ to support descriptions and comparison of metascience and metacognition, and discusses the similarity of metascience and metacognition.
This chapter explores meaning representation, a vital component in NLP, before the discussion of semantic and pragmatic analysis. It studies four major meaning representation techniques: first-order predicate calculus (FOPC), semantic nets, conceptual dependency diagrams (CDD), and frame-based representation. After that it explores canonical form and introduces Fillmore's theory of universal cases, followed by predicate logic and inference using FOPC with live examples.
Decision-making is an inevitable part of software engineering. Software engineers make a considerable number of decisions during the software development life cycle. Thus, as a subset of software engineering, software production can be considered a continuous decision-making process. The decision process refers to the steps involved in choosing and evaluating the best fitting alternative solution(s) for software engineers, as decision-makers, according to their preferences and requirements. Additionally, a software product is typically a long-living system, so decisions determine the future of the product and the costs associated with its development.

In order to make informed decisions, the decision-makers around a software product should either acquire knowledge themselves or hire external experts to support them with their decision-making process. The process gets more complicated as the number of decision-makers, alternatives, and criteria increases. Therefore, software production is a suitable domain to deploy decision support systems that intelligently support these decision-makers in the decision-making process. A decision model for each decision-making problem is required to externalize and organize knowledge regarding the selection context.

In this dissertation, we focus on pragmatically selected decision-making problems that software engineers face in software production. The following categories of software production decisions are discussed: (1) decision-making regarding COTS components for inclusion into software products; (2) decision problems related to software development technologies, which deal with finding the best fitting technologies for developing a software product; and (3) architectural design decisions concerning pattern-driven software design.

We developed a theoretical framework to assist software engineers with a set of Multi-Criteria Decision-Making (MCDM) problems in software production. The framework provides a guideline for software engineers to systematically capture knowledge from different knowledge sources to build decision models for MCDM problems in software production. Knowledge has to be collected, organized, and quickly retrieved when it needs to be employed. We designed, implemented, and evaluated a decision support system (DSS) that utilizes such decision models to facilitate decision-making and support software engineers with their daily MCDM problems.

The framework and the decision support system have been used to model and support software engineers with the following decision-making problems:

1. COTS component selection problems:
   - Database Technology Selection
   - Cloud Service Provider Selection
   - Blockchain Platform Selection
2. Software development technology selection problems:
   - Programming Language Ecosystem Selection
   - Model-Driven Software Development Platform Selection
3. Decision-Making in Pattern-Driven Design:
   - Software Architecture Pattern Selection

A broad study has been carried out, based on qualitative and quantitative research, to evaluate the DSS's efficiency and effectiveness, and the decision models inside its knowledge base, for supporting software engineers with their decision-making process in software production. The DSS and the decision models have been evaluated through 21 real-world case studies at different software-producing organizations located in the Netherlands and Iran. The case study participants asserted that the approach and tooling provide significantly more insight into their selection process, provide a richer prioritized option list than if they had done their research independently, and reduce the time and cost of the decision-making process. However, we also found that it is not easy to implement, adopt, and maintain such a system, as its knowledge base must be updated regularly. Moreover, software engineers' strong opinions surrounding technology alternatives make it somewhat more complicated to find consensus in the data. We conducted 89 qualitative semi-structured interviews with senior software engineers to explore expert knowledge about the decision-making problems, decision models, and the outcomes of our study.

The dissertation concludes that software production decisions are best made with decision support systems, but that the steps towards full adoption of such systems are hampered. First, gathering and maintaining appropriate knowledge in a centralized manner is relatively costly and requires more time investment than traditional decision methods. Secondly, software engineers are not used to such technologies and find it challenging to adopt them into their daily practice.
How could AI systems achieve human-level qualitative reasoning? This research position paper proposes that a system architecture for human-level qualitative reasoning would benefit from a neuro-symbolic approach combining a 'natural language of thought' with qualitative process semantics. Keywords: Qualitative reasoning; Human-level artificial intelligence; Neuro-symbolic; Natural language of thought
In this work, we attempt to answer the question: "How to learn robust and interpretable rule-based models from data for machine learning and data mining, and define their optimality?"

Rules provide a simple form of storing and sharing information about the world. As humans, we use rules every day, such as the physician that diagnoses someone with flu, represented by "if a person has either a fever or a sore throat (among others), then she has the flu." Even though an individual rule can only describe simple events, several aggregated rules can represent more complex scenarios, such as the complete set of diagnostic rules employed by a physician.

The use of rules spans many fields in computer science, and in this dissertation we focus on rule-based models for machine learning and data mining. Machine learning focuses on learning the model that best predicts future (previously unseen) events from historical data. Data mining aims to find interesting patterns in the available data. To answer our question, we use the Minimum Description Length (MDL) principle, which allows us to define the statistical optimality of rule-based models. Furthermore, we empirically show that this formulation is highly competitive for real-world problems.
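The MDL idea above, choosing the rule-based model that minimizes total description length, can be sketched with a toy two-part code. The encoding costs (2 bits per rule, 1 bit per misclassified record) and the flu example data are simplified assumptions for illustration, not the dissertation's actual formulation:

```python
# Toy two-part MDL sketch: total cost = L(model) + L(data | model).
# A "rule" here is a symptom that predicts flu; misclassified records
# are paid for in the data term. The cost model is a crude stand-in
# for a real prequential or NML code.

def description_length(rules, data, bits_per_rule=2.0):
    """Bits to encode the rule set plus the residual errors."""
    model_bits = bits_per_rule * len(rules)
    data_bits = sum(1.0 for symptoms, flu in data
                    if any(s in rules for s in symptoms) != flu)
    return model_bits + data_bits

data = ([({"fever"}, True)] * 3 + [({"sore throat"}, True)] * 3
        + [({"cough"}, False), (set(), False)])

# MDL prefers the model minimizing the total: here the two-rule model,
# whose extra model cost is outweighed by the errors it removes.
for rules in [set(), {"fever"}, {"fever", "sore throat"}]:
    print(sorted(rules), description_length(rules, data))
```

The sketch shows the trade-off MDL formalizes: an empty model is cheap to state but expensive in errors, while each added rule costs model bits that must be earned back by better compression of the data.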
Fairness is a critical system-level objective in recommender systems that has been the subject of extensive recent research. It is especially important in multi-sided recommendation platforms where it may be crucial to optimize utilities not just for the end user, but also for other actors such as item sellers or producers who desire a fair representation of their items. Existing solutions do not properly address various aspects of multi-sided fairness in recommendations, as they may either have a solely one-sided view (i.e., improving fairness only for one side) or fail to appropriately measure the fairness for each actor involved in the system. In this thesis, I first investigate the impact of unfair recommendations on the system and how these unfair recommendations can negatively affect major actors in the system. Then, I propose solutions to tackle the unfairness of recommendations. I propose a rating transformation technique that works as a pre-processing step before building the recommendation model to alleviate the inherent popularity bias in the input data and consequently to mitigate the exposure unfairness for items and suppliers in the recommendation lists. Also, as another solution, I propose a general graph-based solution that works as a post-processing approach after recommendation generation for mitigating the multi-sided exposure bias in the recommendation results. For evaluation, I introduce several metrics for measuring the exposure fairness for items and suppliers, and show that these metrics better capture the fairness properties in the recommendation results. I perform extensive experiments to evaluate the effectiveness of the proposed solutions. The experiments on different publicly-available datasets and comparison with various baselines confirm the superiority of the proposed solutions in improving the exposure fairness for items and suppliers.
This thesis broadens current understanding of how location-based games can promote meaningful social interaction in citizens' own neighbourhoods. It investigates social cohesion and the role social interaction plays in promoting it, examines the requirements users have for playing in their neighbourhood and with its citizens, and takes a technical perspective on how such games should be designed to successfully trigger interaction in public space. From this understanding, which stems from studies with adolescents and adults from Rotterdam and The Hague, NL, a specific design and prototype of a location-based game is proposed and tested.

This thesis addresses several gaps in the current body of knowledge. On the one hand, meaningful interactions are person-dependent, can occur in various forms, and their impact on societies is not well understood. On the other hand, it is not well understood how to build location-based games for this aim: it is not known which requirements should be considered, attempts to build location-based games are often products of in-house development that is not centred on users from the start, no guidelines exist for fostering meaningful social interaction, and no consensus exists on what to consider when building location-based games from a technical perspective.

This thesis offers lessons on how best to design location-based games that promote interaction that matters to local communities. It first offers an overview of social cohesion and how multiple factors and actors have the power to influence local communities. It then argues that meaningful social interaction can break down stereotypes and prejudice, empowers people's agency to act, has a positive impact on cohesion, emerges at people's own pace, and addresses conflict. From this, it dives into the preferences, needs, and desires of adolescents and adults to better understand what sorts of interactions are meaningful to them.
Through several case studies, this thesis explores the requirements these target groups have, and advances gameplay dynamics and game activity types that location-based games should implement to successfully invite meaningful social interaction in public space. The case studies also examine the different sorts of interaction each game activity type invites players to have, and elicit specific game ideas tailored to neighbourhoods in The Netherlands perceived as socially challenging. They culminate in the recommendation of several guidelines to be used at different stages of game design: gameplay requirements, guidelines for meaningful social interaction to occur in the studied groups, and the sorts of game activities designers should include to invite specific forms of social interaction. This thesis also proposes a systems architecture with key architectural components, to drive consensus on and inform what to consider when building location-based games for this purpose from a technical perspective.

The lessons advanced in this thesis help practitioners design location-based games that are more tailored to what future players want to play, and help researchers understand what it means to design for meaningful social interaction in any public space around the world. Players have distinct preferences regarding the ways they are exposed to their own neighbourhood and the forms of interaction they would rather experience. Understanding this, and incorporating such preferences into game design, leads to gameplay experiences that can have a positive effect on societies, as they have the power to promote interaction and positive relationships in local communities. These gameplay experiences invite individuals to come together and have meaningful interactions in a playful way, (re)engage with their own neighbourhood, and be part of their local community.
For a long time, humanity has lived by the paradigm that natural resources are unlimited and that the environment has ample regenerative capacity. However, the shift towards sustainability has resulted in the worldwide adoption of policies addressing resource efficiency and the preservation of natural resources.
One of the key environmentally and economically sustainable practices currently promoted and enacted in European Union policy is industrial symbiosis. In industrial symbiosis, firms aim to reduce their total material and energy footprint by circulating the secondary outputs of one firm's production process to become inputs to the production processes of other firms.
This thesis directs attention to the design considerations for recommender systems in the highly dynamic domain of industrial symbiosis. Recommender systems are a promising technology that may facilitate multiple facets of industrial symbiosis creation, as they reduce the complexity of decision making. This typical strength of recommender systems has been responsible for improved sales and a higher return on investment. It offers the prospect that industrial symbiosis recommenders could increase the number of synergistic transactions and thereby reduce the total environmental impact of the process industry in particular.
Microblogs such as Twitter represent a powerful source of information. Part of this information can be aggregated beyond the level of individual posts. Some of this aggregated information refers to events that could or should be acted upon in the interest of e-governance, public safety, or other public interests. Moreover, a significant amount of this information, if aggregated, could complement existing information networks in a non-trivial way. This dissertation proposes a semi-automatic method for extracting actionable information that serves this purpose. First, we show that predicting time to event is possible in both in-domain and cross-domain scenarios. Second, we present a method that facilitates defining relevance for an analyst's context and using that definition to analyze new data. Finally, we propose a method that integrates the machine-learning-based relevant-information classifier with a rule-based classification technique to classify microtexts.

Fully automating microtext analysis has been our goal since the first day of this research project, and our efforts in this direction showed us to what extent this automation can be realized. We usually first developed an automated approach, then extended and improved it by integrating human intervention at various steps. Our experience confirms previous work stating that well-designed human intervention or contribution in the design, realization, or evaluation of an information system either improves its performance or enables its realization. As our studies and results pointed us toward its necessity and value, we drew inspiration from previous studies in designing human involvement and customized our approaches to benefit from human input.
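The integration of rule-based and machine-learning classification, with a human fallback for low-confidence cases, can be sketched as follows. The rule format, the confidence threshold, and the "needs_review" routing are illustrative assumptions, not the dissertation's actual pipeline.

```python
def hybrid_classify(text, rules, ml_model, threshold=0.7):
    """Classify a microtext: rule-based labels take precedence; otherwise
    defer to the ML model, accepting its label only above a confidence
    threshold; route the remainder to a human analyst."""
    tokens = set(text.lower().split())
    for keywords, label in rules:
        if keywords <= tokens:  # all of the rule's keywords are present
            return label, "rule"
    label, confidence = ml_model(text)
    if confidence >= threshold:
        return label, "ml"
    return "needs_review", "human"  # low confidence: human intervention

# Hypothetical rule set and a stand-in for a trained classifier.
rules = [({"fire"}, "emergency")]
ml_model = lambda text: ("irrelevant", 0.9)
```

The final branch makes the human-in-the-loop point concrete: rather than forcing a fully automatic decision, low-confidence microtexts are escalated to the analyst, whose judgments can in turn feed back into the rules or the model.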
Real-life processes are characterized by dynamics involving time. Examples are walking, sleeping, disease progression under medical treatment, and events in a workflow. To understand complex behavior, one needs models expressive enough to capture it yet parsimonious enough to yield insight. Uncertainty is often fundamental to process characterization, e.g., because we can sometimes observe phenomena only partially. This makes probabilistic graphical models a suitable framework for process analysis. In this thesis, new probabilistic graphical models that offer the right balance between expressiveness and interpretability are proposed, inspired by the analysis of complex, real-world problems. We first investigate processes by introducing latent variables, which capture abstract notions (e.g., intelligence, health status) from observable data. Such models often provide more accurate descriptions of processes; in medicine, they can also reveal insight into patient treatment, such as predictive symptoms. The second viewpoint looks at processes by identifying time points in the data where the relationships between observable variables change, providing an alternative characterization of process change. Finally, we try to better understand processes by identifying subgroups of data that deviate from the whole dataset, e.g., process workflows whose event dynamics differ from the general workflow.
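The second viewpoint, identifying time points where the data's behaviour changes, can be illustrated with a crude single-changepoint detector: fit a Gaussian to each candidate pair of segments and choose the split that minimises the total negative log-likelihood. This is a stdlib-only sketch of the underlying idea, far simpler than the graphical models proposed in the thesis.

```python
import math

def gauss_nll(xs):
    """Negative log-likelihood of xs under a maximum-likelihood Gaussian."""
    n = len(xs)
    mu = sum(xs) / n
    var = sum((x - mu) ** 2 for x in xs) / n + 1e-9  # avoid log(0)
    return 0.5 * n * (math.log(2 * math.pi * var) + 1)

def best_changepoint(xs, min_seg=3):
    """Return the split index minimising the two-segment Gaussian cost."""
    return min(range(min_seg, len(xs) - min_seg),
               key=lambda t: gauss_nll(xs[:t]) + gauss_nll(xs[t:]))

# A process whose mean jumps from about 0 to about 5 at index 5.
series = [0.0, 0.1, -0.1, 0.05, 0.0, 5.0, 5.1, 4.9, 5.05, 5.0]
```

On this series the detector recovers the jump at index 5. A full treatment would compare the two-segment cost against the no-change model and penalise extra parameters, which is where model-selection machinery (and the thesis's richer dependency structure between variables) comes in.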
The transportation sector plays an important role in the growth of national economies. Advances in information technology have facilitated new collaboration opportunities among transport companies, and ubiquitous, faster internet now enables them to access real-time data about events in a transport network. These new collaboration opportunities and access to real-time data have given rise to new transport practices. Synchromodal Transport (SmT), or Synchromodality, is a new transport practice in which the route and transport mode for cargo are chosen dynamically, i.e., while the cargo is in transit. If a disruption event occurs, causing a delay in transportation, the cargo may be shifted to another transport mode.

Existing research on SmT is biased towards the routing and capacity-planning challenges it poses; the data integration challenges posed by SmT have not received due attention from researchers. The primary data integration challenge is the integration of contextual event data and transport planning data. This dissertation provides a solution to the data integration challenges posed by SmT by designing a Synchromodal Transport Integration Platform (SmTIP).

I designed SmTIP based on the results of three research activities. The first was interviews with SmT stakeholders, which resulted in a list of requirements for SmTIP. The second was an analysis of SmT practices, which resulted in a list of relevant contextual events and processes for an SmT scenario. The third was a study of the state of the art in integration platform design, which resulted in a reference architecture for integration platforms.

I then developed a prototype based on SmTIP. The prototype integrates transport data and contextual event data, enables dynamic transport planning, and, in case of a disruption event, allows changing the transport mode of cargo. When representatives from transport companies used the SmTIP prototype, their responses led to improvements in the SmTIP design.

This dissertation is useful for transport companies and for researchers in the transportation and information technology sectors. Transport companies can become acquainted with SmT processes, relevant contextual events, the data integration challenges posed by SmT, and how to overcome them. Researchers in the transportation sector can use this dissertation as an introduction to SmT; it will help them understand the SmT scenario, SmT processes, and relevant disruption events. The documented responses of transport companies' representatives during SmTIP validation will help researchers improve SmTIP in the future and design validation experiment setups. This dissertation thus advances SmT research: it fills the research gap around SmT data integration challenges by (1) identifying the data integration challenges, (2) listing the requirements for SmTIP, and (3) designing SmTIP to overcome them. Finally, researchers and practitioners in information technology can use the reference architecture for integration platforms to address data integration challenges in other application domains; for that purpose, the refinement of the reference architecture to the SmT domain, as shown in this dissertation, may serve as a guide.
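The kind of disruption-driven replanning SmTIP enables can be sketched as an event handler over a leg-based transport plan. The data schema (leg slack, alternative modes, and their durations) and all names here are a hypothetical illustration, not SmTIP's actual data model.

```python
def replan_on_disruption(plan, event, alternatives):
    """If a disruption delays a leg beyond its schedule slack, swap that
    leg's transport mode for the fastest available alternative; legs are
    copied, so the original plan is left unchanged."""
    new_plan = []
    for leg in plan:
        if leg["id"] == event["leg_id"] and event["delay_h"] > leg["slack_h"]:
            best = min(alternatives[leg["id"]], key=lambda m: m["duration_h"])
            leg = {**leg, "mode": best["mode"], "duration_h": best["duration_h"]}
        new_plan.append(leg)
    return new_plan

# A one-leg barge plan, a 10-hour disruption, and two alternative modes.
plan = [{"id": "L1", "mode": "barge", "duration_h": 30, "slack_h": 4}]
event = {"leg_id": "L1", "delay_h": 10}
alternatives = {"L1": [{"mode": "rail", "duration_h": 12},
                       {"mode": "truck", "duration_h": 8}]}
```

The point of the sketch is the integration, not the optimisation: the handler consumes a contextual event stream and the transport plan together, which is exactly the data combination the dissertation identifies as SmT's primary integration challenge.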
Automatic control is the discipline of designing devices that control machinery and processes without human intervention. However, devising controllers using conventional control theory requires first-principles design based on a full understanding of the environment and the plant, which is infeasible for complex control tasks such as driving in a highly uncertain traffic environment. Intelligent control offers new opportunities: a control policy can be derived by mimicking human control behavior from demonstrations. In this thesis, we focus on intelligent control techniques from two aspects: (1) how to learn a control policy from supervisors using the available demonstration data; and (2) how to verify that a controller learned from data will safely control the process.
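The first aspect, learning a control policy from demonstrations, is illustrated below by behavioural cloning for a one-dimensional plant: fitting a linear state-feedback law, action = k·state + b, by ordinary least squares over expert (state, action) pairs. The linear policy class and the expert data are illustrative assumptions; the thesis's methods are more general.

```python
def fit_linear_policy(demos):
    """Behavioural cloning for a 1-D plant: fit action = k*state + b by
    ordinary least squares over (state, action) demonstration pairs."""
    n = len(demos)
    sx = sum(s for s, _ in demos)
    sa = sum(a for _, a in demos)
    sxx = sum(s * s for s, _ in demos)
    sxa = sum(s * a for s, a in demos)
    k = (n * sxa - sx * sa) / (n * sxx - sx * sx)  # least-squares gain
    b = (sa - k * sx) / n                          # least-squares offset
    return lambda s: k * s + b

# Demonstrations generated by a hypothetical expert using action = -2*state.
demos = [(1.0, -2.0), (2.0, -4.0), (-1.0, 2.0), (0.5, -1.0)]
policy = fit_linear_policy(demos)
```

The second aspect would then ask whether this learned policy is safe before deployment, e.g., by checking closed-loop stability of the fitted gain against a model of the plant, rather than trusting the imitation alone.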