Conference Paper

Indian Buffet Processes with Power-law Behavior

Conference: Advances in Neural Information Processing Systems 22: 23rd Annual Conference on Neural Information Processing Systems 2009. Proceedings of a meeting held 7-10 December 2009, Vancouver, British Columbia, Canada.
Source: DBLP

ABSTRACT The Indian buffet process (IBP) is an exchangeable distribution over binary matrices used in Bayesian nonparametric featural models. In this paper we propose a three-parameter generalization of the IBP exhibiting power-law behavior. We achieve this by generalizing the beta process (the de Finetti measure of the IBP) to the stable-beta process and deriving the IBP corresponding to it. We find interesting relationships between the stable-beta process and the Pitman-Yor process (another stochastic process used in Bayesian nonparametric models with interesting power-law properties). We derive a stick-breaking construction for the stable-beta process, and find that our power-law IBP is a good model for word occurrences in document corpora.
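The three-parameter IBP admits a sequential (culinary) construction: customer n takes an existing dish k with probability (m_k − σ)/(n − 1 + c), then tries a Poisson number of new dishes whose rate decays as a power law in n. The sketch below is a minimal illustration of that scheme; the function and variable names are ours, with mass α, concentration c, and stability exponent σ ∈ [0, 1).

```python
import math
import random

def sample_poisson(lam, rng):
    """Draw from Poisson(lam) via Knuth's multiplication method."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while p > threshold:
        k += 1
        p *= rng.random()
    return k - 1

def stable_ibp(n_customers, alpha=2.0, c=1.0, sigma=0.25, seed=0):
    """Sample a binary feature matrix from the three-parameter IBP.

    alpha: mass, c: concentration, sigma in [0, 1): stability exponent.
    sigma = 0 recovers the two-parameter IBP.
    """
    rng = random.Random(seed)
    dish_counts = []                   # m_k for each dish seen so far
    rows = []
    for n in range(1, n_customers + 1):
        row = [0] * len(dish_counts)
        # take existing dish k with probability (m_k - sigma)/(n - 1 + c)
        for k, m_k in enumerate(dish_counts):
            if rng.random() < (m_k - sigma) / (n - 1 + c):
                row[k] = 1
                dish_counts[k] += 1
        # brand-new dishes: Poisson with a power-law-decaying rate
        rate = (alpha * math.gamma(1 + c) * math.gamma(n - 1 + c + sigma)
                / (math.gamma(n + c) * math.gamma(c + sigma)))
        new = sample_poisson(rate, rng)
        row.extend([1] * new)
        dish_counts.extend([1] * new)
        rows.append(row)
    width = len(dish_counts)
    return [r + [0] * (width - len(r)) for r in rows]
```

As a sanity check, with σ = 0 and c = 1 the Poisson rate reduces to αΓ(2)Γ(n)/(Γ(n+1)Γ(1)) = α/n, the familiar one-parameter IBP rate for new dishes.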

  • Citing work:
    • "We will describe the dynamics using a culinary metaphor (similarly to what some authors do for other models, see Chinese Restaurant [29], Indian Buffet process [15] [16] [33] and their generalizations [4] [5]). We identify the nodes with the customers of a restaurant and the attributes with the dishes, so that the dishes tried by a customer represent the attributes that a node exhibits. "
    ABSTRACT: The quest for a model that is able to explain, describe, analyze and simulate real-world complex networks is of utmost practical as well as theoretical interest. In this paper we introduce and study a network model that is based on a latent attribute structure: each node is characterized by a number of features and the probability of the existence of an edge between two nodes depends on the features they share. Features are chosen according to a process of Indian-Buffet type but with an additional random "fitness" parameter attached to each node, which determines its ability to transmit its own features to other nodes. As a consequence, a node's connectivity does not depend on its age alone, so even "young" nodes are able to compete and succeed in acquiring links. One of the advantages of our model for the latent bipartite "node-attribute" network is that it depends on few parameters with a straightforward interpretation. We provide some theoretical, as well as experimental, results regarding the power-law behaviour of the model and the estimation of the parameters. Using experimental data, we also show how the proposed model for the attribute structure naturally captures most of the local and global properties (e.g., degree distributions, connectivity and distance distributions) that real networks exhibit. Keywords: complex network, social network, attribute matrix, Indian buffet process
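As a toy illustration of the edge mechanism described in that abstract, the sketch below links two nodes with a probability that grows with the number of attributes they share. The specific link function 1 − q^shared and all names are our illustrative assumptions; the cited model additionally attaches a random fitness weight to each node, which is omitted here.

```python
import random

def attribute_network(features, q=0.5, seed=0):
    """Draw an undirected graph from a binary node-attribute matrix.

    Nodes i and j are linked with probability 1 - q**s_ij, where s_ij
    is the number of attributes they share (no shared attribute means
    no chance of a link). The link function is an illustrative choice.
    """
    rng = random.Random(seed)
    n = len(features)
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            shared = sum(a and b for a, b in zip(features[i], features[j]))
            if rng.random() < 1 - q ** shared:
                edges.add((i, j))
    return edges
```

Feeding this a matrix sampled from an IBP-type process yields the latent bipartite "node-attribute" construction in miniature: attribute sharing, not node age alone, drives connectivity.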
  • Citing work:
    • "To the best of our knowledge, the only known fact is the a.s. behavior of L_n (defined below) and some other related quantities for large n; see [9] and [31]. Nothing is known as regards limiting distributions. "
    ABSTRACT: The three-parameter Indian buffet process is generalized. The possibly different roles played by customers are taken into account by suitable (random) weights. Various limit theorems are also proved for this generalized Indian buffet process. Let L_n be the number of dishes tried by the first n customers, and let {\bar K}_n = (1/n)\sum_{i=1}^n K_i where K_i is the number of dishes tried by customer i. The asymptotic distributions of L_n and {\bar K}_n, suitably centered and scaled, are obtained. The convergence turns out to be stable (and not only in distribution). As a particular case, the results apply to the standard (i.e., not generalized) Indian buffet process.
    The Annals of Applied Probability 04/2013; 25(2). DOI:10.1214/14-AAP1002 · 1.45 Impact Factor
  • Citing work:
    • "There has been significant recent interest in the Indian buffet process (IBP) [1] [2] [3] [4] [5] and in the related beta process (BP) [6] [7] [8] [9]. These models have been applied to factor analysis [2] [3] [6] [7] [8] [9] to infer a set of factors (features/dictionary atoms) with which data may be sparsely represented. In many applications the signal (and hence features) are dependent on observable covariates. "
    ABSTRACT: A dependent hierarchical beta process (dHBP) is developed as a prior for data that may be represented in terms of a sparse set of latent features (dictionary elements), with covariate dependent feature usage. The dHBP is applicable to general covariates and data models, imposing that signals with similar covariates are likely to be manifested in terms of similar features. As an application, we consider the simultaneous sparse modeling of multiple images, with the covariate of a given image linked to its similarity to all other images (as applied in manifold learning). Efficient inference is performed using hybrid Gibbs, Metropolis-Hastings and slice sampling.
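Of the three samplers that abstract mentions, slice sampling is perhaps the least familiar. A generic univariate slice sampler with stepping-out, in the style of Neal (2003), is sketched below; this illustrates the technique only and is not the paper's implementation.

```python
import math
import random

def slice_sample(logp, x0, n_samples, w=1.0, seed=0):
    """Univariate slice sampler with stepping-out.

    logp: unnormalized log-density, x0: starting point,
    w: initial bracket width for the stepping-out phase.
    """
    rng = random.Random(seed)
    x = x0
    out = []
    for _ in range(n_samples):
        # draw the slice height under the density at the current point
        logy = logp(x) + math.log(rng.random())
        # step out until the bracket covers the slice
        left = x - w * rng.random()
        right = left + w
        while logp(left) > logy:
            left -= w
        while logp(right) > logy:
            right += w
        # shrink the bracket until a point inside the slice is drawn
        while True:
            xn = left + rng.random() * (right - left)
            if logp(xn) > logy:
                x = xn
                break
            if xn < x:
                left = xn
            else:
                right = xn
        out.append(x)
    return out
```

In a hybrid scheme like the one described, a step of this kind would update parameters whose conditional density can be evaluated but not sampled in closed form, with Gibbs and Metropolis-Hastings moves handling the rest.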
    Acoustics, Speech and Signal Processing (ICASSP), 2011 IEEE International Conference on; 06/2011