Conference Paper

Indian Buffet Processes with Power-law Behavior

Conference: Advances in Neural Information Processing Systems 22: 23rd Annual Conference on Neural Information Processing Systems 2009. Proceedings of a meeting held 7-10 December 2009, Vancouver, British Columbia, Canada.
Source: DBLP

ABSTRACT: The Indian buffet process (IBP) is an exchangeable distribution over binary matrices used in Bayesian nonparametric featural models. In this paper we propose a three-parameter generalization of the IBP exhibiting power-law behavior. We achieve this by generalizing the beta process (the de Finetti measure of the IBP) to the stable-beta process and deriving the IBP corresponding to it. We find interesting relationships between the stable-beta process and the Pitman-Yor process (another stochastic process used in Bayesian nonparametric models with interesting power-law properties). We derive a stick-breaking construction for the stable-beta process, and find that our power-law IBP is a good model for word occurrences in document corpora.
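The abstract does not spell out the generative scheme, so here is a minimal sketch of the three-parameter IBP as it is commonly stated: customer n takes a previously sampled dish k with probability (m_k - sigma)/(n - 1 + c), then tries a Poisson number of new dishes. The new-dish rate used below is recalled from the literature rather than stated on this page, so treat it as an assumption; setting sigma = 0 recovers the familiar two-parameter IBP with rate alpha*c/(n - 1 + c).

    import numpy as np
    from scipy.special import gammaln

    def power_law_ibp(n_customers, alpha=2.0, c=1.0, sigma=0.25, rng=None):
        """Sketch of the three-parameter ("power-law") IBP generative scheme.

        Customer n takes existing dish k with probability (m_k - sigma)/(n - 1 + c),
        then samples a Poisson number of new dishes. The new-dish rate below is an
        assumed form that reduces to alpha*c/(n - 1 + c) when sigma = 0.
        """
        rng = np.random.default_rng() if rng is None else rng
        counts = []                                  # m_k: customers who took dish k
        rows = []
        for n in range(1, n_customers + 1):
            row = [rng.random() < (m - sigma) / (n - 1 + c) for m in counts]
            log_rate = (np.log(alpha) + gammaln(1 + c) + gammaln(n - 1 + c + sigma)
                        - gammaln(n + c) - gammaln(c + sigma))
            k_new = rng.poisson(np.exp(log_rate))    # new dishes for this customer
            counts = [m + took for m, took in zip(counts, row)] + [1] * k_new
            rows.append(row + [True] * k_new)
        Z = np.zeros((n_customers, len(counts)), dtype=int)
        for i, r in enumerate(rows):                 # pad rows to a common width
            Z[i, :len(r)] = r
        return Z

For sigma > 0 the total number of sampled dishes grows like a power of the number of customers, which is the power-law behavior the paper exploits.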

Related publications:
  • ABSTRACT: The purpose of this work is to describe a unified, and indeed simple, mechanism for non-parametric Bayesian analysis, construction and generative sampling of a large class of latent feature models which one can describe as generalized notions of Indian Buffet Processes (IBP). This is done via the Poisson process calculus as it now relates to latent feature models. The IBP was ingeniously devised by Griffiths and Ghahramani (2005), and its generative scheme is cast in terms of customers sequentially entering an Indian buffet restaurant and selecting previously sampled dishes as well as new dishes. In this metaphor, dishes correspond to latent features, attributes, and preferences shared by individuals. The IBP and its generalizations represent an exciting class of models well suited to handling the high-dimensional statistical problems now common in this information age. The IBP is based on conditionally independent Bernoulli random variables, coupled with completely random measures acting as Bayesian priors, which together create sparse binary matrices. This Bayesian non-parametric view was a key insight due to Thibaux and Jordan (2007). One way to think of generalizations is to use more general random variables; of note in the current literature are models employing Poisson and negative-binomial random variables. However, unlike their closely related counterparts, generalized Chinese restaurant processes, the ability to analyze IBP models in a systematic and general manner is not yet available. The limitations are both in terms of knowledge about the effects of different priors and in terms of models based on a wider choice of random variables. This work not only provides a thorough description of the properties of existing models but also provides a simple template for devising and analyzing new models. (An illustrative sketch of this Bernoulli construction appears after this list.)
  • ABSTRACT: We introduce the four-parameter IBP compound Dirichlet process (ICDP), a stochastic process that generates sparse non-negative vectors with a potentially unbounded number of entries. If we repeatedly sample from the ICDP we can generate sparse matrices with an infinite number of columns and power-law characteristics. We apply the four-parameter ICDP to sparse nonparametric topic modelling to account for the very large number of topics present in large text corpora and the power-law distribution of the vocabulary of natural languages. The model, which we call latent IBP compound Dirichlet allocation (LIDA), allows for power-law distributions both in the number of topics summarising the documents and in the number of words defining each topic. It can be interpreted as a sparse variant of the hierarchical Pitman-Yor process when applied to topic modelling. We derive an efficient and simple collapsed Gibbs sampler, closely related to the collapsed Gibbs sampler of latent Dirichlet allocation (LDA), making the model applicable in a wide range of domains. Our nonparametric Bayesian topic model compares favourably to the widely used hierarchical Dirichlet process and its heavy-tailed version, the hierarchical Pitman-Yor process, on benchmark corpora. Experiments demonstrate that accounting for the power-law distribution of real data is beneficial and that sparsity provides more interpretable results. (A sketch of the base LDA collapsed Gibbs update appears after this list.)
    IEEE Transactions on Pattern Analysis and Machine Intelligence 37(2), 2014. DOI: 10.1109/TPAMI.2014.2313122
  • ABSTRACT: The quest for a model that is able to explain, describe, analyze and simulate real-world complex networks is of utmost practical as well as theoretical interest. In this paper we introduce and study a network model that is based on a latent attribute structure: each node is characterized by a number of features, and the probability of an edge between two nodes depends on the features they share. Features are chosen according to a process of Indian-buffet type, but with an additional random "fitness" parameter attached to each node that determines its ability to transmit its own features to other nodes. As a consequence, a node's connectivity does not depend on its age alone, so "young" nodes are also able to compete for and succeed in acquiring links. One of the advantages of our model for the latent bipartite "node-attribute" network is that it depends on few parameters with a straightforward interpretation. We provide theoretical as well as experimental results regarding the power-law behaviour of the model and the estimation of its parameters. Using experimental data, we also show how the proposed model for the attribute structure naturally captures most local and global properties (e.g., degree distributions, connectivity and distance distributions) that real networks exhibit. (A toy sketch of this model appears after this list.) Keywords: complex network, social network, attribute matrix, Indian buffet process
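To make the Bernoulli construction in the first abstract above concrete (the Thibaux-Jordan view of the IBP), here is a minimal sketch assuming the stick-breaking representation of the one-parameter beta process: feature weights are products of Beta(alpha, 1) variables, and rows of the binary matrix are conditionally independent Bernoulli draws given those weights. The truncation level and parameter values are illustrative assumptions.

    import numpy as np

    def ibp_via_beta_process(n_rows, K_trunc=50, alpha=2.0, rng=None):
        """Sparse binary matrix from conditionally independent Bernoullis (sketch).

        Feature weights follow the stick-breaking representation of the
        one-parameter beta process: the k-th weight is a product of k
        independent Beta(alpha, 1) variables; K_trunc is a finite truncation.
        """
        rng = np.random.default_rng() if rng is None else rng
        nu = rng.beta(alpha, 1.0, size=K_trunc)
        p = np.cumprod(nu)                        # decreasing feature probabilities
        Z = (rng.random((n_rows, K_trunc)) < p).astype(int)   # Z[n, k] ~ Bernoulli(p[k])
        return Z, p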
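The second abstract above reports a collapsed Gibbs sampler closely related to LDA's. The sketch below shows only the standard collapsed Gibbs update for LDA (Griffiths and Steyvers, 2004), with topic proportions and topic-word distributions integrated out; the LIDA-specific power-law machinery is not reproduced here.

    import numpy as np

    def lda_collapsed_gibbs(docs, V, K=10, alpha=0.1, beta=0.01, n_iter=200, rng=None):
        """Standard collapsed Gibbs sampler for LDA (base sampler only, sketch).

        docs is a list of documents, each a list of word ids in [0, V).
        """
        rng = np.random.default_rng() if rng is None else rng
        ndk = np.zeros((len(docs), K))            # document-topic counts
        nkw = np.zeros((K, V))                    # topic-word counts
        nk = np.zeros(K)                          # topic totals
        z = [rng.integers(K, size=len(d)) for d in docs]
        for d, doc in enumerate(docs):            # initialize counts
            for i, w in enumerate(doc):
                k = z[d][i]; ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
        for _ in range(n_iter):
            for d, doc in enumerate(docs):
                for i, w in enumerate(doc):
                    k = z[d][i]                   # remove the current assignment
                    ndk[d, k] -= 1; nkw[k, w] -= 1; nk[k] -= 1
                    # Collapsed conditional: p(z = k | rest) is proportional to
                    # (ndk + alpha) * (nkw + beta) / (nk + V*beta).
                    p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + V * beta)
                    k = rng.choice(K, p=p / p.sum())
                    z[d][i] = k
                    ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
        return z, ndk, nkw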
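For the third abstract above, the exact feature-adoption and edge probabilities are not given on this page, so the following is only a toy variant under stated assumptions: a node adopts an existing feature with probability proportional to the total fitness of its current holders, and an edge appears with probability 1 - exp(-edge_scale * shared), where shared counts common features. Both functional forms are illustrative choices, not necessarily the paper's.

    import numpy as np

    def fitness_ibp_graph(n_nodes, alpha=3.0, edge_scale=0.5, rng=None):
        """Toy latent-attribute network sketch (assumed forms, see note above)."""
        rng = np.random.default_rng() if rng is None else rng
        fitness = rng.exponential(1.0, size=n_nodes)   # random per-node fitness
        holders = []                                   # holders[k]: nodes with feature k
        feats = [set() for _ in range(n_nodes)]
        for n in range(n_nodes):
            total = fitness[:n].sum() + alpha
            for k, hs in enumerate(holders):           # adopt existing features
                if rng.random() < sum(fitness[h] for h in hs) / total:
                    feats[n].add(k); hs.append(n)
            for _ in range(rng.poisson(alpha / (n + 1))):   # IBP-style new features
                feats[n].add(len(holders)); holders.append([n])
        A = np.zeros((n_nodes, n_nodes), dtype=int)
        for i in range(n_nodes):                        # edges from shared features
            for j in range(i + 1, n_nodes):
                shared = len(feats[i] & feats[j])
                if rng.random() < 1.0 - np.exp(-edge_scale * shared):
                    A[i, j] = A[j, i] = 1
        return A, feats, fitness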
