Jafar Adibi

Sharif University of Technology, Tehrān, Ostan-e Tehran, Iran

Publications (44) · 10.12 Total Impact

  • ABSTRACT: Personal blogs are one of the most interconnected and socially networked types of social media. The capability of placing "comments" on blog posts makes the blogosphere a rather complex environment. In this paper, we study the behavior of bloggers who place comments on others' posts and examine whether it is possible to detect spam comments. We look at the functionality of different network motif profiles in the comment network, and identify certain subgraphs that are associated with spam comments. We illustrate that some of these patterns and their statistical features can be exploited to classify comments and bloggers as spammers or non-spammers. Our preliminary results are encouraging and show reasonable performance on rich and dense blog networks.
    IEEE International Conference on Data Mining Workshops (ICDMW '08); 01/2009
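    A minimal, hypothetical sketch of the motif-profile idea above: summarize the comment network around a blogger by a census of small directed subgraph patterns and hand the counts to a standard classifier. The edge list, the use of networkx's triadic census, and the classifier mentioned are illustrative assumptions, not the paper's actual motif set or features.

```python
# Illustration only: triad (3-node motif) counts as features for spam-comment detection.
import networkx as nx

def motif_profile(comment_edges):
    """comment_edges: (commenter, post_author) pairs from a blog comment log."""
    g = nx.DiGraph()
    g.add_edges_from(comment_edges)
    # Census of the 16 directed three-node patterns; these counts serve as the
    # motif-profile feature vector for a blogger's neighbourhood.
    return nx.triadic_census(g)

# Toy neighbourhood: b and c comment on a's posts, and a comments back on b's.
print(motif_profile([("b", "a"), ("c", "a"), ("a", "b")]))
# In a full pipeline, one such profile per blogger would be fed to an
# off-the-shelf classifier (e.g. logistic regression) to separate spammers
# from non-spammers.
```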
  • ABSTRACT: Blogs form a large social network, and their analysis is becoming an important research area today. Blogs are growing rapidly on the Internet, because bloggers can rapidly change their content and linking patterns. Visitors of blogs may comment on the postings of a blog, and this leads to a complex interaction between groups of bloggers. One of ...
    01/2007;
  • ABSTRACT: Team ISIS (ISI Synthetic) successfully participated in the first international RoboCup soccer tournament (RoboCup'97), held in Nagoya, Japan, in August 1997. ISIS won the third-place prize among the more than 30 teams that participated in the simulation league of RoboCup'97 (the most popular of the three RoboCup'97 leagues). In terms of research accomplishments, ISIS illustrated the usefulness of an explicit model of teamwork, both in terms of reduced development time and improved teamwork flexibility. ISIS also took some initial steps toward learning individual player skills. This paper discusses the design of ISIS in detail, with particular emphasis on its novel approach to teamwork.
    11/2006: pages 123-131;
  • ABSTRACT: The RoboCup'97 competition provides an excellent opportunity to demonstrate the techniques and methods of artificial intelligence, autonomous agents and computer vision. On a soccer field the core capabilities a player must have are to navigate the field, track the ball and other agents, recognize the difference between agents, collaborate with other agents, and hit the ball in the correct direction. USC's Dreamteam of robots can be described as a group of mobile autonomous agents collaborating in a rapidly changing environment. The key characteristic of this team is that each soccer robot is an autonomous agent, self-contained with all of its essential capabilities on-board. Our robots share the same general architecture and basic hardware, but they have integrated abilities to play different roles (goalkeeper, defender or forward) and utilize different strategies in their behavior. Our philosophy in building these robots is to use the least possible sophistication to make them as robust as possible. In the 1997 RoboCup competition, the Dreamteam played well and won the world championship in the middle-sized robot league.
    04/2006: pages 295-304;
  • ABSTRACT: The goal of this work is to gain insight into whether processing-in-memory (PIM) technology can be used to accelerate the performance of link discovery (LD) algorithms, which represent an important class of emerging knowledge discovery techniques. PIM chips that integrate processor logic into memory devices offer a new opportunity for bridging the growing gap between processor and memory speeds, especially for applications with high memory-bandwidth requirements. As LD algorithms are data-intensive and highly parallel, involving read-only queries over large data sets, parallel computing power extremely close (physically) to the data has the potential of providing dramatic computing speedups. For this reason, we evaluated the mapping of LD algorithms to a processing-in-memory workstation-class architecture, the DIVA/Godiva hardware testbeds developed by USC/ISI. Accounting for differences in clock speed and data scaling, our analysis shows a performance gain on a single PIM, with the potential for greater improvement when multiple PIMs are used. Measured speedups of 8x are shown on two additional bandwidth benchmarks, even though the Itanium-2 has a clock rate 6x faster.
    Workshop on Data Management on New Hardware, DaMoN 2006, Chicago, Illinois, USA, June 25, 2006; 01/2006
  • Jafar Adibi, Hans Chalupsky
    01/2005;
  • ABSTRACT: In this paper we provide a summary of the Workshop on Link Discovery: Issues, Approaches and Applications (LinkKDD-2005), held in conjunction with ACM SIGKDD 2005 on August 21st in Chicago, Illinois, USA. We report in detail about the research issues addressed in the talks at the workshop.
    Sigkdd Explorations. 01/2005; 7(2):123-125.
  • ABSTRACT: A Bayesian blackboard is just a conventional, knowledge-based blackboard system in which knowledge sources modify Bayesian networks on the blackboard. As an architecture for intelligence analysis and data fusion this has many advantages: the blackboard is a shared workspace or "corporate memory" for collaborating analysts; analyses can be developed over long periods of time with information that arrives in dribs and drabs; the computer's contribution to analysis can range from data-driven statistical algorithms up to domain-specific, knowledge-based inference; and, perhaps most important, the control of intelligence-gathering in the world and inference on the blackboard can be rational, that is, grounded in probability and utility theory. Our Bayesian blackboard architecture, called AIID, serves both as a prototype system for intelligence analysis and as a laboratory for testing mathematical models of the economics of intelligence analysis.
    07/2004;
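    A loose, hypothetical sketch of the control loop described above: knowledge sources post evidence to a shared workspace, beliefs are updated by Bayes' rule, and the controller chooses which source to run on value-versus-cost grounds. All names and numbers below are invented for illustration; this is not the AIID architecture itself.

```python
# Illustration only: a toy Bayesian blackboard with utility-based control.
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class Blackboard:
    """Shared workspace: hypotheses with their current probabilities."""
    beliefs: Dict[str, float] = field(default_factory=dict)

    def update(self, hypothesis: str, likelihood_ratio: float) -> None:
        # Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.
        p = self.beliefs.get(hypothesis, 0.5)
        odds = (p / (1.0 - p)) * likelihood_ratio
        self.beliefs[hypothesis] = odds / (1.0 + odds)


@dataclass
class KnowledgeSource:
    name: str
    cost: float                                     # e.g. analyst time or collection cost
    estimate_value: Callable[[Blackboard], float]   # expected value of running, given beliefs
    run: Callable[[Blackboard], None]               # post evidence / modify beliefs


def control_loop(board: Blackboard, sources: List[KnowledgeSource], steps: int = 3) -> None:
    """Rational control: at each step run the source whose estimated value best exceeds its cost."""
    for _ in range(steps):
        best = max(sources, key=lambda ks: ks.estimate_value(board) - ks.cost)
        best.run(board)
        print(f"ran {best.name}: beliefs = {board.beliefs}")


if __name__ == "__main__":
    board = Blackboard(beliefs={"threat_present": 0.5})
    sigint = KnowledgeSource(
        name="sigint_report",
        cost=1.0,
        estimate_value=lambda b: 2.0 * (1.0 - b.beliefs["threat_present"]),
        run=lambda b: b.update("threat_present", likelihood_ratio=3.0),
    )
    control_loop(board, [sigint])
```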
  • ABSTRACT: The need to search for complex and recurring patterns in database sequences is shared by many applications. In this paper, we investigate the design and optimization of a query language capable of expressing, and efficiently supporting, the search for complex sequential patterns in database systems. We first introduce SQL-TS, an extension of SQL to express these patterns, and then we study how to optimize queries in this language. We take the optimal text-search algorithm of Knuth, Morris and Pratt, and generalize it to handle complex queries on sequences. Our algorithm exploits the interdependencies between the elements of a pattern to minimize repeated passes over the same data. Experimental results on typical sequence queries, such as double-bottom queries, confirm that substantial speedups are achieved by our new optimization techniques.
    ACM Trans. Database Syst. 01/2004; 29:282-318.
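    A deliberately naive sketch of what such a sequential pattern means, using an invented "double bottom" predicate pattern over a price series. SQL-TS expresses patterns like this declaratively, and the paper's contribution is a KMP-style optimization that avoids the re-testing the loop below performs; the predicates and data here are made up for illustration.

```python
# Illustration only: a "fall, rise, fall, rise" predicate pattern over a price sequence.
from typing import Callable, List, Sequence

Predicate = Callable[[float, float], bool]   # predicate over (previous, current) values

def falling(prev: float, cur: float) -> bool:
    return cur < prev

def rising(prev: float, cur: float) -> bool:
    return cur > prev

double_bottom: List[Predicate] = [falling, rising, falling, rising]

def find_pattern(prices: Sequence[float], pattern: List[Predicate]) -> List[int]:
    """Return start indices where the whole predicate pattern matches consecutive steps."""
    hits = []
    for start in range(1, len(prices) - len(pattern) + 1):
        if all(pred(prices[start + k - 1], prices[start + k]) for k, pred in enumerate(pattern)):
            hits.append(start)
    return hits

print(find_pattern([5, 4, 6, 3, 7, 8], double_bottom))   # -> [1]
```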
  • Jitesh Shetty, Jafar Adibi
    ABSTRACT: Email logs have been considered a useful resource for research in fields like link analysis, social network analysis and textual analysis. Most of the experiments in these fields are performed on synthetic data due to the lack of an adequate, real-life benchmark. The Enron email dataset is a touchstone for such research. This dataset is very similar to the kind of data collected for fraud detection and counter-terrorism, and hence it is a perfect test bed for evaluating the effectiveness of techniques used for counter-terrorism and fraud detection. In this report we describe the MySQL database prepared for the dataset and statistically analyze its appropriateness for research. We further derive a social network consisting of 151 employees from the email logs, by defining a social contact to be someone with whom an individual has exchanged at least a pre-decided threshold number of emails.
    01/2004;
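    A small, hypothetical sketch of the thresholding rule described above: connect two addresses only if they exchanged at least a chosen number of emails. The message tuples and threshold are invented; this is not the released MySQL schema or the authors' code.

```python
# Illustration only: derive an undirected contact network from (sender, recipient) pairs.
from collections import Counter

messages = [
    ("alice@enron.com", "bob@enron.com"),
    ("bob@enron.com", "alice@enron.com"),
    ("alice@enron.com", "carol@enron.com"),
]

def build_network(messages, threshold=2):
    """Return undirected edges (with counts) for pairs that exchanged >= threshold emails."""
    counts = Counter(frozenset(pair) for pair in messages if pair[0] != pair[1])
    return {tuple(sorted(pair)): n for pair, n in counts.items() if n >= threshold}

print(build_network(messages))   # {('alice@enron.com', 'bob@enron.com'): 2}
```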
  • ABSTRACT: Link discovery (LD) is a new challenge in data mining whose primary concerns are to identify strong links and discover hidden relationships among entities and organizations based on low-level, incomplete and noisy evidence data. To address this challenge, we are developing a hybrid link discovery system called KOJAK that combines state-of-the-art knowledge representation and reasoning (KR&R) technology with statistical clustering and analysis techniques from the area of data mining. In this paper we report on the architecture and technology of its first fully completed module, the KOJAK Group Finder. The Group Finder is capable of finding hidden groups and group members in large evidence databases. Our group-finding approach addresses a variety of important LD challenges, such as being able to exploit heterogeneous and structurally rich evidence, handling the connectivity curse, noise and corruption, as well as the capability to scale up to very large, realistic data sets. The first version of the KOJAK Group Finder has been successfully tested and evaluated on a variety of synthetic datasets.
    Proceedings of the Nineteenth National Conference on Artificial Intelligence, Sixteenth Conference on Innovative Applications of Artificial Intelligence, July 25-29, 2004, San Jose, California, USA; 01/2004
  • ABSTRACT: In this paper we provide a summary of the Workshop on Link Analysis and Group Detection (LinkKDD-2004), held in conjunction with ACM SIGKDD 2004 on August 22 in Seattle, Washington, USA. We report in detail about the research issues addressed in the talks and the workshop.
    Sigkdd Explorations. 01/2004; 6(2):136-139.
  • ABSTRACT: Recently there have been a handful of studies on observing self-similarity and fractals in natural structures and scientific databases, such as traffic data from networks. However, there is little work on employing such information for predictive modeling, data mining and knowledge discovery. In this paper we report our experiments on, and observations of, the self-similar structure embedded in network data, and exploit it for prediction through the Self-Similar Layered Hidden Markov Model (SSLHMM). SSLHMM is a novel alternative to Hidden Markov Models (HMMs), which have proven to be useful in a variety of real-world applications. SSLHMM leverages the power of HMMs, extends this capability to self-similar structures, and exploits this property to reduce the complexity of the predictive modeling process. We show that the SSLHMM approach captures self-similar information and provides a more accurate and interpretable model compared to conventional techniques.
    Advances in Knowledge Discovery and Data Mining, 6th Pacific-Asia Conference, PAKDD 2002, Taipei, Taiwan, May 6-8, 2002, Proceedings; 03/2002
  • J. Adibi, W. M. Shen
    01/2002;
  • Jafar Adibi, Christos Faloutsos
    ABSTRACT: In this report we provide a summary of the first Workshop on Application of Self-Similarity and Fractals in Data Mining: Issues and Approaches, held in conjunction with ACM SIGKDD 2002 on July 23 in Edmonton, Alberta, Canada.
    Sigkdd Explorations. 01/2002; 4(2):115-117.
  • ABSTRACT: The need to search for complex and recurring patterns in database sequences is shared by many applications. In this paper, we discuss how to express, and efficiently support, sophisticated sequential pattern queries in databases. We first introduce SQL-TS, an extension of SQL, to express these patterns, and then we study how to optimize search queries in this language. We take the optimal text-search algorithm of Knuth, Morris and Pratt, and generalize it to handle complex queries on sequences. Our algorithm exploits the interdependencies between the elements of a sequential pattern to minimize repeated passes over the same data. Experimental results on typical sequence queries, such as double-bottom queries, confirm that substantial speedups are achieved by our new optimization techniques.
    Proceedings of the Twentieth ACM SIGACT-SIGMOD-SIGART Symposium on Principles of Database Systems, May 21-23, 2001, Santa Barbara, California, USA; 01/2001
  • ABSTRACT: Increasingly, multi-agent systems are being designed for a variety of complex, dynamic domains. Effective agent interactions in such domains raise some of the most fundamental research challenges for agent-based systems, in teamwork, multi-agent learning and agent modelling. The RoboCup research initiative, particularly the simulation league, has been proposed to pursue such multi-agent research challenges, using the common testbed of simulation soccer. Despite the significant popularity of...
    Autonomous Agents and Multi-Agent Systems 01/2001; 4:115-129. · 0.79 Impact Factor
  • VLDB 2001, Proceedings of 27th International Conference on Very Large Data Bases, September 11-14, 2001, Roma, Italy; 01/2001
  • Jafar Adibi, Wei-Min Shen
    ABSTRACT: Hidden Markov Models (HMMs) have proven to be useful in a variety of real-world applications where considerations for uncertainty are crucial. Such an advantage can be further leveraged if HMMs can be scaled up to deal with complex problems. In this paper, we introduce, analyze and demonstrate the Self-Similar Layered HMM (SSLHMM) for a certain group of complex problems that show a self-similar property, and we exploit this property to reduce the complexity of model construction. We show how the embedded knowledge of self-similar structure can be used to reduce the complexity of learning and increase the accuracy of the learned model. Moreover, we introduce three different types of self-similarity in SSLHMM, and investigate their performance in the context of synthetic data and real-world network databases. We show that SSLHMM has several advantages compared to conventional HMM techniques and is more efficient and accurate than a one-step, flat method for model construction.
    Principles of Data Mining and Knowledge Discovery, 5th European Conference, PKDD 2001, Freiburg, Germany, September 3-5, 2001, Proceedings; 01/2001
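    A loose illustration of the layered, self-similar idea: a toy two-level generator in which a slow "macro" chain and a fast "inner" chain reuse the same transition matrix, so the same dynamics repeat across scales. This is an invented toy for intuition only, not the formal SSLHMM construction or its learning procedure.

```python
# Illustration only: a two-level chain whose layers share one transition matrix.
import random

A = {"calm":   {"calm": 0.9, "bursty": 0.1},    # shared transition matrix (both layers)
     "bursty": {"calm": 0.3, "bursty": 0.7}}
EMIT = {"calm": (10, 2), "bursty": (100, 30)}   # per-state traffic rate (mean, std)

def step(state, trans):
    """Sample the next state from a row of the transition matrix."""
    r, acc = random.random(), 0.0
    for nxt, p in trans[state].items():
        acc += p
        if r <= acc:
            return nxt
    return nxt

def generate(length, inner_steps=5):
    """Emit `length` observations; the macro chain switches regimes, the inner chain reuses A."""
    macro, inner, out = "calm", "calm", []
    for t in range(length):
        if t % inner_steps == 0:          # macro layer moves on a slower time scale
            macro = step(macro, A)
            inner = macro                 # inner chain restarts in the macro regime
        inner = step(inner, A)            # self-similar: same dynamics at a finer scale
        mean, std = EMIT[inner]
        out.append(max(0.0, random.gauss(mean, std)))
    return out

print(generate(20))
```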
  • ABSTRACT: The annual RoboCup soccer competition is an excellent opportunity for our robotics and agent research. We view the competition as a rigorous testbed for our methods and a unique way of validating our ideas. After two years of competition, we have begun to understand what works (we won the competition in Nagoya '97) and what does not work (we failed to advance to the second round in Paris '98). This paper presents an overview of our goals in RoboCup, our philosophy in building soccer-playing robots, and the methods we are employing in our efforts.
    12/1999: pages 59-64;

Publication Stats

777 Citations
10.12 Total Impact Points

Institutions

  • 2009
    • Sharif University of Technology
      • Department of Computer Engineering
      Tehrān, Ostan-e Tehran, Iran
  • 2004
    • University of California, Los Angeles
      Los Angeles, California, United States
  • 1970–2002
    • University of Southern California
      • Information Sciences Institute
      • Spatial Sciences Institute
      Los Angeles, California, United States