Matt Richardson’s research while affiliated with University of Washington and other places


Publications (2)


Mining the Network Value of Customers
  • Article
  • November 2001 · 728 Reads · 2,474 Citations · Matt Richardson

One of the major applications of data mining is in helping companies determine which potential customers to market to. If the expected profit from a customer is greater than the cost of marketing to her, the marketing action for that customer is executed. So far, work in this area has considered only the intrinsic value of the customer (i.e., the expected profit from sales to her). We propose to model also the customer's network value: the expected profit from sales to other customers she may influence to buy, the customers those may influence, and so on recursively. Instead of viewing a market as a set of independent entities, we view it as a social network and model it as a Markov random field. We show the advantages of this approach using a social network mined from a collaborative filtering database. Marketing that exploits the network value of customers -- also known as viral marketing -- can be extremely effective, but is still a black art. Our work can be viewed as a step towards providing a more solid foundation for it, taking advantage of the availability of large relevant databases.
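
The recursive notion of network value in the abstract can be illustrated with a small propagation loop. The sketch below is not the paper's Markov random field formulation: it assumes a hypothetical directed influence graph (`influence`), per-customer profit margins (`margin`), and an independent-influence fixed-point approximation, purely to show how expected profit can be propagated from one customer to those she may influence, and onward.

```python
# Illustrative sketch only: the paper models the market as a Markov random
# field, which is not reproduced here. This toy version approximates a
# customer's "network value" by propagating purchase probabilities through
# a hypothetical weighted influence graph (independent-influence fixed point).

def network_value(influence, margin, source, iters=30):
    """Expected profit from customers that `source` may influence, recursively.

    influence: dict customer -> {neighbor: P(neighbor buys | customer buys)}
    margin:    dict customer -> profit if that customer buys
    source:    the customer being marketed to
    """
    customers = set(influence) | {n for nbrs in influence.values() for n in nbrs}

    # Predecessor map: who influences each customer, and how strongly.
    preds = {c: {} for c in customers}
    for c, nbrs in influence.items():
        for n, w in nbrs.items():
            preds[n][c] = w

    # p_buy[c]: probability c ends up buying as a consequence of marketing to source.
    p_buy = {c: 0.0 for c in customers}
    p_buy[source] = 1.0

    for _ in range(iters):
        new = {}
        for c in customers:
            if c == source:
                new[c] = 1.0
                continue
            # c buys unless every influencer independently fails to trigger a purchase.
            p_none = 1.0
            for pre, w in preds[c].items():
                p_none *= 1.0 - w * p_buy[pre]
            new[c] = 1.0 - p_none
        p_buy = new

    # Network value excludes the source's own (intrinsic) expected profit.
    return sum(p_buy[c] * margin.get(c, 0.0) for c in customers if c != source)


if __name__ == "__main__":
    # Toy graph: A influences B and C; B also influences C.
    influence = {"A": {"B": 0.4, "C": 0.2}, "B": {"C": 0.5}}
    margin = {"A": 10.0, "B": 10.0, "C": 10.0}
    print(round(network_value(influence, margin, "A"), 2))  # ~7.6
```

In this toy run, marketing to A is credited with roughly 7.6 units of expected profit from B and C, on top of A's own intrinsic value; in the paper this kind of quantity is what justifies marketing to customers whose intrinsic value alone would not.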


The Intelligent Surfer: Probabilistic Combination of Link and Content Information in PageRank

November 2001 · 201 Reads · 384 Citations

The PageRank algorithm, used in the Google search engine, greatly improves the results of Web search by taking into account the link structure of the Web. PageRank assigns to a page a score proportional to the number of times a random surfer would visit that page, if it surfed indefinitely from page to page, following all outlinks from a page with equal probability. We propose to improve PageRank by using a more intelligent surfer, one that is guided by a probabilistic model of the relevance of a page to a query. Efficient execution of our algorithm at query time is made possible by precomputing at crawl time (and thus once for all queries) the necessary terms. Experiments on two large subsets of the Web indicate that our algorithm significantly outperforms PageRank in the (human-rated) quality of the pages returned, while remaining efficient enough to be used in today's large search engines.
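
As a rough illustration of the "intelligent surfer" idea, the sketch below biases both the random jump and the choice of outlink by a page's relevance to the query, instead of treating all outlinks uniformly. The link graph (`links`) and the `relevance(page, query)` scoring function are hypothetical placeholders, and the crawl-time precomputation the abstract mentions for query-time efficiency is not shown; this is a sketch of the ranking idea, not the paper's implementation.

```python
# Minimal sketch of a query-dependent ("intelligent surfer") PageRank,
# assuming: `links[p]` lists the outlinks of page p, and `relevance(p, query)`
# returns a nonnegative content score (e.g. a term-match weight).
# Both the graph and the relevance function are illustrative assumptions.

def qd_pagerank(links, relevance, query, damping=0.85, iters=50):
    pages = list(links)
    rel = {p: relevance(p, query) for p in pages}
    total_rel = sum(rel.values())
    if total_rel == 0.0:
        # No page matches the query: fall back to a uniform surfer (plain PageRank).
        rel = {p: 1.0 for p in pages}
        total_rel = float(len(pages))

    # When the surfer jumps at random, it prefers query-relevant pages.
    teleport = {p: rel[p] / total_rel for p in pages}
    score = dict(teleport)

    for _ in range(iters):
        new = {p: (1.0 - damping) * teleport[p] for p in pages}
        for p in pages:
            out = [q for q in links[p] if rel.get(q, 0.0) > 0.0]
            out_rel = sum(rel[q] for q in out)
            if out_rel == 0.0:
                # Dead end with respect to the query: redistribute like a jump.
                for q in pages:
                    new[q] += damping * score[p] * teleport[q]
            else:
                # Follow outlinks in proportion to their relevance,
                # instead of uniformly as in standard PageRank.
                for q in out:
                    new[q] += damping * score[p] * rel[q] / out_rel
        score = new
    return score
```

With a constant relevance function the iteration reduces to ordinary PageRank (with dangling-node handling), which is a useful sanity check when experimenting with different relevance scores.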

Citations (2)


... Intuitively, this more closely mimics how a user is likely to use a standard, being more likely to jump to higher-level topics than to provisions deep in the hierarchy. The PageRank algorithm can also be modified into the so-called query-dependent PageRank (QD-PageRank), where the β parameter for each node is tuned based on the node's relationship to a user query or a topic [35]. Figure 5. ACI 318 [26,27] complete networks PageRank centrality (CPR) probability distribution. ...

Reference:

Leveraging network analysis to improve navigability of design standards

The Intelligent Surfer: Probabilistic Combination of Link and Content Information in PageRank
  • Citing Article
  • November 2001