Trond Grenager's research while affiliated with Stanford University and other places

Publications (15)

Article
The area of learning in multi-agent systems is today one of the most fertile grounds for interaction between game theory and artificial intelligence. We focus on the foundational questions in this interdisciplinary area, and identify several distinct agendas that ought to, we argue, be separated. The goal of this article is to start a discussion in...
Article
Full-text available
This paper presents our work on textual inference and situates it within the context of the larger goals of machine reading. The textual inference task is to determine if the meaning of one text can be inferred from the meaning of another and from background knowledge. Our system generates semantic graphs as a representation of the meaning of a...
Conference Paper
Full-text available
Historically, unsupervised learning techniques have lacked a principled technique for selecting the number of unseen components. Research into non-parametric priors, such as the Dirichlet process, has enabled instead the use of infinite models, in which the number of hidden categories is not fixed, but can grow with the amount of training data...
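The growth behavior this abstract describes can be illustrated with the Chinese restaurant process, the sequential view of a Dirichlet-process prior. The sketch below is a minimal illustration in plain Python (not the paper's model): the number of clusters is never fixed in advance, and new clusters open with probability proportional to a concentration parameter `alpha`.

```python
import random

def crp_partition(n_items, alpha, rng):
    """Sample a partition from the Chinese restaurant process:
    each item joins an existing cluster with probability proportional
    to that cluster's size, or opens a new cluster with probability
    proportional to alpha."""
    clusters = []  # current cluster sizes
    for _ in range(n_items):
        total = sum(clusters) + alpha
        r = rng.uniform(0, total)
        acc = 0.0
        for k, size in enumerate(clusters):
            acc += size
            if r < acc:
                clusters[k] += 1
                break
        else:
            clusters.append(1)  # open a new cluster
    return clusters

# The number of clusters is not chosen ahead of time; it grows
# (roughly like alpha * log n) with the amount of data.
print(crp_partition(1000, alpha=2.0, rng=random.Random(0)))
```

Larger `alpha` yields more, smaller clusters; the partition sizes always sum to the number of items.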
Article
Full-text available
We describe an approach to textual inference that improves alignments at both the typed dependency level and at a deeper semantic level. We present a machine learning approach to alignment scoring, a stochastic search procedure, and a new tool that finds deeper semantic alignments, allowing rapid development of semantic features over the aligned gr...
Conference Paper
Full-text available
This paper presents our work on textual inference and situates it within the context of the larger goals of machine reading. The textual inference task is to determine if the meaning of one text can be inferred from the meaning of another combined with background knowledge. Most existing work either provides only very limited text understanding...
Conference Paper
Full-text available
This paper advocates a new architecture for textual inference in which finding a good alignment is separated from evaluating entailment. Current approaches to semantic inference in question answering and textual entailment have approximated the entailment problem as that of computing the best alignment of the hypothesis to the text, using a local...
Conference Paper
Full-text available
This paper demonstrates how unsupervised techniques can be used to learn models of deep linguistic structure. Determining the semantic roles of a verb's dependents is an important step in natural language understanding. We present a method for learning models of verb argument patterns directly from unannotated text. The learned models are sim...
Conference Paper
Full-text available
The applicability of many current information extraction techniques is severely limited by the need for supervised training data. We demonstrate that for certain field structured extraction tasks, such as classified advertisements and bibliographic citations, small amounts of prior knowledge can be used to learn effective models in a primarily...
Conference Paper
Full-text available
Most current statistical natural language processing models use only local features so as to permit dynamic programming in inference, but this makes them unable to fully account for the long distance structure that is prevalent in language use. We show how to solve this dilemma with Gibbs sampling, a simple Monte Carlo method used to perform...
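As a hedged illustration of the idea in this abstract (a toy, not the paper's actual model or feature set), the sampler below resamples one label at a time conditioned on all the others. Because each step conditions on the full current labeling, it can score a non-local "identical tokens prefer identical labels" feature that a dynamic-programming decoder over local features could not use.

```python
import math
import random

TOKENS = ["Paris", "visited", "Paris", "in", "paris"]  # toy sentence
LABELS = [0, 1]  # 0 = other, 1 = entity

def local_score(token, label):
    # Local feature: capitalized tokens weakly prefer the entity label.
    if token[0].isupper():
        return 1.0 if label == 1 else 0.0
    return 1.0 if label == 0 else 0.0

def global_score(tokens, labels):
    # Non-local feature: identical tokens (case-insensitive) prefer to
    # share a label. This couples arbitrarily distant positions, which
    # breaks dynamic programming but poses no problem for Gibbs sampling.
    score = 0.0
    for i in range(len(tokens)):
        for j in range(i + 1, len(tokens)):
            if tokens[i].lower() == tokens[j].lower() and labels[i] == labels[j]:
                score += 2.0
    return score

def gibbs(tokens, n_sweeps=50, rng=None):
    rng = rng or random.Random(0)
    labels = [rng.choice(LABELS) for _ in tokens]
    for _ in range(n_sweeps):
        for i in range(len(tokens)):
            # Unnormalized conditional P(label_i | all other labels).
            weights = []
            for y in LABELS:
                labels[i] = y
                s = local_score(tokens[i], y) + global_score(tokens, labels)
                weights.append(math.exp(s))
            r = rng.uniform(0, sum(weights))
            labels[i] = LABELS[0] if r < weights[0] else LABELS[1]
    return labels

print(gibbs(TOKENS))
```

With the consistency feature, the lowercase "paris" tends to be pulled toward the same label as the capitalized occurrences, despite its local feature preferring the other label.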
Article
We survey the recent work in AI on multi-agent reinforcement learning (that is, learning in stochastic games). After tracing a representative sample of the recent literature, we argue that, while exciting, much of this work suffers from a fundamental lack of clarity about the problem or problems being addressed. We then propose five well-defined...
Conference Paper
Full-text available
We propose a general model for joint inference in correlated natural language processing tasks when fully annotated training data is not available, and apply this model to the dual tasks of word sense disambiguation and verb subcategorization frame determination. The model uses the EM algorithm to simultaneously complete partially annotated training...
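To make "EM over partially annotated training data" concrete, here is a minimal semi-supervised EM sketch on a stand-in problem, a two-component Bernoulli mixture (the tasks, features, and model in the paper are different). Observed labels fix their responsibilities; unlabeled points receive soft responsibilities in the E-step, and the M-step re-estimates parameters from the expected counts.

```python
def semisupervised_em(data, labels, n_iters=30):
    """EM for a two-component Bernoulli mixture in which some points
    carry an observed component label and the rest are None.
    data: list of 0/1 observations; labels: list of 0, 1, or None."""
    theta = [0.3, 0.7]   # P(x = 1 | component k)
    pi = [0.5, 0.5]      # component priors
    for _ in range(n_iters):
        # E-step: responsibilities; observed labels stay fixed (hard),
        # unlabeled points get posterior (soft) responsibilities.
        resp = []
        for x, y in zip(data, labels):
            if y is not None:
                resp.append([1.0 if k == y else 0.0 for k in (0, 1)])
            else:
                w = [pi[k] * (theta[k] if x == 1 else 1 - theta[k])
                     for k in (0, 1)]
                z = sum(w)
                resp.append([wk / z for wk in w])
        # M-step: re-estimate priors and emission parameters.
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(data)
            theta[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
    return pi, theta

data = [1, 1, 1, 0, 0, 0, 1, 0]
labels = [1, 1, None, 0, 0, None, None, None]
pi, theta = semisupervised_em(data, labels)
print(pi, theta)
```

The few labeled points anchor the two components, and EM fills in the unlabeled ones; `theta[1]` ends up near 1 and `theta[0]` near 0 on this toy data.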
Article
We survey the recent work in AI on multi-agent reinforcement learning (that is, learning in stochastic games). We then argue that, while exciting, this work is flawed; the fundamental flaw is a lack of clarity about the problem or problems being addressed. After tracing a representative sample of the recent literature, we identify four well-defined problem...
Article
Dispersion games are the generalization of the anti-coordination game to arbitrary numbers of agents and actions. In these games agents prefer outcomes in which the agents are maximally dispersed over the set of possible actions. This class of games models a large number of natural problems, including load balancing in computer science, niche selection...
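One natural way to make "maximally dispersed" concrete: with n agents and k actions, an outcome is maximally dispersed when the agents are spread as evenly as possible, i.e. every action is chosen by either floor(n/k) or ceil(n/k) agents. The check below is an illustrative sketch under that characterization, not code from the paper.

```python
from collections import Counter

def is_maximally_dispersed(actions, n_actions):
    """Return True if the agents' action choices are spread as evenly
    as possible: every action's load is floor(n/k) or ceil(n/k)."""
    n = len(actions)
    counts = Counter(actions)
    lo = n // n_actions                      # floor(n/k)
    hi = (n + n_actions - 1) // n_actions    # ceil(n/k)
    loads = [counts.get(a, 0) for a in range(n_actions)]
    return all(lo <= c <= hi for c in loads)

# 5 agents over 3 actions: loads (2, 2, 1) are maximally dispersed,
# loads (3, 2, 0) are not.
print(is_maximally_dispersed([0, 0, 1, 1, 2], 3))  # True
print(is_maximally_dispersed([0, 0, 0, 1, 1], 3))  # False
```

Equivalently, no agent could switch to a strictly less-loaded action and reduce crowding, which is the load-balancing intuition the abstract mentions.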
Article
Full-text available
This paper proposes a new architecture for textual inference in which finding a good alignment is separated from evaluating entailment. Current approaches to semantic inference in question answering and textual entailment have approximated the entailment problem as that of computing the best alignment of the hypothesis to the text, using a local...
Article
Full-text available
The applicability of current information extraction techniques is severely limited by the need for supervised training data. We demonstrate that for certain field structured extraction tasks, small amounts of prior knowledge can be used to effectively learn models in a primarily unsupervised fashion. Many text information sources exhibit a latent...

Citations

... There are many studies that aim to extend traditional table-based reinforcement learning paradigms from single to multi-agent, but this extension is a challenging issue owing to the intricate interrelationships and learning instability, which are due to various factors, such as the mutual effect of simultaneous learning in independent multiple agents, a large number of states (including other agents), interpretation of joint actions, and coordinated/cooperative tasks that may not bring immediate rewards [6,25]. Multi-agent reinforcement learning (MARL) also addresses sequential decision-making problems in which agents interact with their surroundings; thus, the credit assignment problem with feedback mechanisms is needed to reinforce appropriate (sequences of) actions for individual agents [2,5,7,8,26,28]. ...
... As dependency-style annotation became popular in recent years, tools that support dependency patterns have also been developed. For example, Stanford CoreNLP's Semgrex (Chambers et al. 2007) and the spaCy toolkit's JSON-based format (Honnibal et al. 2020) are publicly available. ...
... Evaluation accounted only for correctly identified negated medical terms. The growing demand for reliable negation detection in other computational fields such as, inter alia, sentiment analysis (Councill et al., 2010), textual entailment (de Marneffe et al., 2006), and machine translation (Baker et al., 2010) promoted the development of negation detection as an NLP task in its own right. ...
... We have not implemented all algorithms that have been used in repeated game research. This includes the work of Yoav Shoham (Shoham et al., 2004; Shoham et al., 2006), Rob Powers (Powers and Shoham, 2004; Powers and Shoham, 2005) and Thuc Vu (Vu et al., 2005) who have presented algorithms for learning against specific classes of opponents, in addition to critically looking at the direction of research in multiagent learning. Though we have presented a number of different metrics, we have not covered all of the possible equilibrium types. ...
... To check for well-formedness, we perform POS (Part-of-Speech) tagging [15] (to identify nouns, verbs, etc.) and dependency parsing [2] (to determine relationships between words) on top of the generated sentence. A sentence is then considered well-formed if the root node of its semantic graph [3] is a verb and the object for that verb is explicit. (3) If the sentence is well-formed, we use it as the use case name and move forward to generate a use case name for the next endpoint. ...
... MADRL has achieved important milestones such as defeating the DOTA II world champion [10] and reaching expert performance in Starcraft II [11]. We should note that several technical challenges make multiagent learning fundamentally more difficult than the single-agent case, such as the moving target problem (non-stationarity) [12], the curse of dimensionality, multiagent credit assignment [12] and global exploration [13]. Another limitation in further developing these techniques concerns the lack of environments in which cooperation between multiple agents is incorporated. ...
... In the second stage, the graph partitioning creates clusters of vertices by using a variant of Chinese Whispers [Biemann 2006]. Grenager and Manning [2006] introduce a directed graph model by relating a verb predicate, its semantic roles, and their possible syntactic realizations. The latent variables indicate the semantic roles, and these roles are classified by utilizing the states of these latent variables. ...
... The potential for joint inference of complementary information, such as syntactic verb and semantic argument classes, has a clear and interpretable way forward, in contrast to the pipelined methods described above. This was demonstrated in Andrew et al. (2004), where a Bayesian model was used to jointly induce syntactic and semantic classes for verbs, although that study relied on manually annotated data and a predefined SCF inventory and MLE. More recently, Abend and Rappoport (2010) trained ensemble classifiers to perform argument-adjunct disambiguation of PP complements, a task closely related to SCF acquisition. ...
... However, this model is slightly different from a TD HTMM since it allows hidden states with no associated visible label. Finkel et al. (2007) introduced three different versions of the infinite TD HTMM: all of them are obtained using HDP theory, but each makes a different independence assumption among the children. The first one generates the states of all of the children of a node u jointly (no independence assumption); the second one assumes a first-order process to generate the children (Markovian independence assumption); the third one generates children independently of each other (conditional independence assumption). ...
... In hierarchical agglomerative clustering, every data point starts as a separate cluster, and the algorithm then progressively merges, or agglomerates [37] (a bottom-up technique), pairs of the clusters formed so far. Finally, a dendrogram, also known as a tree structure, depicts the hierarchical relationship between the groups. ...
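The bottom-up procedure described in that snippet can be sketched in a few lines. The following toy single-linkage agglomeration on 1-D points is an illustrative sketch (not code from any cited system): every point begins as its own cluster, and the closest pair of clusters is merged until the requested number remains.

```python
def agglomerate(points, n_clusters):
    """Bottom-up (agglomerative) clustering of 1-D points: start with
    one cluster per point, then repeatedly merge the pair of clusters
    at minimum single-linkage (closest-pair) distance."""
    clusters = [[p] for p in points]
    while len(clusters) > n_clusters:
        best = None  # (distance, i, j) of the closest cluster pair
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(abs(a - b) for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters.pop(j)  # merge the closest pair
    return [sorted(c) for c in clusters]

print(agglomerate([1.0, 1.1, 5.0, 5.2, 9.0], 3))
# → [[1.0, 1.1], [5.0, 5.2], [9.0]]
```

Recording the sequence of merges (and the distances at which they occur) is exactly the information a dendrogram visualizes.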