D. McLeod

University of Southern California, Los Angeles, California, United States

Publications (188) · 277.16 Total Impact

  • Vesile Evrim, Dennis McLeod
    ABSTRACT: Given today’s vast amount of digital data, finding the set of information that satisfies a Web user’s information request is a challenging problem. Currently available Information Retrieval (IR) systems are designed to return long lists of results, only a few of which are relevant for a specific user. This paper introduces an IR method called Context-Based Information Analysis (CONIA), which uses the context of the user and of the user’s information request to provide relevant results for domain users. Here, relevance is measured by the semantics of the information provided in the documents. Information extracted from lexical and domain ontologies is integrated with the user’s interest information to expand the terms entered in the request. The resulting set of terms is categorized by a novel approach, and the relations between the categories are obtained from the ontologies. This categorization improves the quality of document selection by analyzing the semantic composition of the mapped terms rather than merely checking whether the words appear in the document.
    Knowledge and Information Systems 01/2014; · 2.23 Impact Factor
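    A minimal sketch (not from the paper) of the kind of ontology-driven query expansion the abstract describes; the ontologies, interest terms, and function names below are illustrative assumptions, and CONIA's actual categorization and ranking are not reproduced.

        # Hypothetical sketch of ontology-driven, interest-aware query expansion.
        # The ontology maps and user-interest terms are invented placeholders.

        def expand_query(query_terms, lexical_ontology, domain_ontology, user_interests):
            """Expand query terms with ontology neighbours, keeping expansions that
            overlap with the user's interest profile."""
            expanded = set(query_terms)
            for term in query_terms:
                # Both ontologies are plain dicts: term -> set of related concepts.
                expanded |= lexical_ontology.get(term, set())
                expanded |= domain_ontology.get(term, set())
            # Integrate user interest information with the ontology-derived terms.
            return {t for t in expanded if t in user_interests or t in query_terms}

        lexical = {"earthquake": {"quake", "tremor"}}
        domain = {"earthquake": {"fault", "seismicity"}}
        interests = {"fault", "tremor", "gps"}
        print(sorted(expand_query(["earthquake"], lexical, domain, interests)))
        # ['earthquake', 'fault', 'tremor']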
  •
    ABSTRACT: Many location-based applications are enabled by handling numerous moving queries over mobile objects. Efficient processing of such queries mainly relies on effective probing, i.e., polling the objects to obtain their current locations (required for processing the queries). With effective probing, one can monitor the current location of the objects with sufficient accuracy for the existing queries by striking a balance between the communication cost of probing and the accuracy of the knowledge about the objects' current locations. In this paper, we focus on location-based applications that reduce to processing a large set of proximity monitoring queries simultaneously, where each query continuously monitors whether a pair of objects are within a certain predefined distance. Accordingly, we propose an effective object probing solution for efficient processing of proximity monitoring queries. In particular, our solution formulates optimal probing, for the first time, as a batch processing problem and prioritizes probing the objects such that the total number of probes required to answer all queries is minimized. Our extensive experiments demonstrate the efficiency of our proposed solution for a wide range of applications involving up to hundreds of millions of queries.
    Proceedings of the 21st ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems; 11/2013
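    A rough illustration (an assumption, not the paper's algorithm) of prioritizing probes across a batch of proximity queries, using a toy greedy heuristic: probe first the object that appears in the most unresolved queries.

        # Toy batch probe prioritization for proximity monitoring queries.
        # Each query is a pair of object ids; probing an object reveals its location.

        from collections import Counter

        def probe_order(queries):
            unresolved = set(queries)
            probed, order = set(), []
            while unresolved:
                counts = Counter(obj for q in unresolved for obj in q if obj not in probed)
                if not counts:      # every object in the remaining queries is already probed
                    break
                obj, _ = counts.most_common(1)[0]
                probed.add(obj)
                order.append(obj)
                # A query can be evaluated once both of its objects have fresh locations.
                unresolved = {q for q in unresolved if not (q[0] in probed and q[1] in probed)}
            return order

        print(probe_order([("a", "b"), ("a", "c"), ("b", "c"), ("c", "d")]))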
  •
    ABSTRACT: Earthquake science and emergency response require integration of many data types and models that cover a broad range of scales in time and space. Timely and efficient earthquake analysis and response require automated processes and a system in which the interfaces between models and applications are established and well defined. Geodetic imaging data provide observations of crustal deformation from which strain accumulation and release associated with earthquakes can be inferred. Data products are growing and tend to be either relatively large in size, on the order of 1 GB per image with hundreds or thousands of images, or high data rate, such as from 1 second GPS solutions. The products can be computationally intensive to manipulate, analyze, or model, and are unwieldy to transfer across wide area networks. Required computing resources can be large, even for a few users, and can spike when new data are made available or when an earthquake occurs. A cloud computing environment is the natural extension for some components of QuakeSim as an increasing number of data products and model applications become available to users. Storing the data near the model applications improves performance for the user.
    Aerospace Conference, 2013 IEEE; 01/2013
  •
    ABSTRACT: Advances in understanding earthquakes require the integration of models and multiple distributed data products. Increasingly, data are acquired through large investments, and utilizing their full potential requires a coordinated effort by many users, independent researchers, and groups who are often distributed both geographically and by expertise.
    Computing in Science and Engineering 01/2012; 14(5):31-42. · 1.73 Impact Factor
  • Dongwoo Won, Dennis McLeod
    ABSTRACT: Association rules are a fundamental data mining technique, used for various applications. In this paper, we present an efficient method to make use of association rules for discovering knowledge from transactional data. First, we approach this problem using an ontology. The hierarchical structure of an ontology defines the generalisation relationship between concepts at different abstraction levels, which is utilised to minimise the search space. Next, we have developed an efficient algorithm, hierarchical association rule categorisation (HARC), which uses a novel metric called relevance for categorising association rules. As a result, users are able to find the needed rules efficiently by searching the compact generalised rules first and then the specific rules that belong to them, rather than scanning the entire list of rules.
    Int. J. of Data Mining. 01/2012; 4(4):309-333.
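    A small sketch (not the paper's definitions) of grouping specific association rules under generalized rules via an is-a hierarchy, loosely in the spirit of HARC; the hierarchy, rules, and the placeholder "relevance" ranking below are illustrative.

        # Group specific rules under their generalized form, then rank within groups.

        parent = {"cola": "soft_drink", "lemonade": "soft_drink",
                  "chips": "snack", "pretzels": "snack"}

        def generalize(item):
            return parent.get(item, item)

        def categorize(rules):
            """rules: list of ((antecedent, consequent), support).
            Returns generalized rule -> specific rules, browsed general-first."""
            groups = {}
            for (a, c), support in rules:
                key = (generalize(a), generalize(c))
                groups.setdefault(key, []).append(((a, c), support))
            # Placeholder relevance: rank each group's specific rules by support.
            return {k: sorted(v, key=lambda r: -r[1]) for k, v in groups.items()}

        rules = [(("cola", "chips"), 0.12), (("lemonade", "pretzels"), 0.08),
                 (("cola", "pretzels"), 0.05)]
        for general, specifics in categorize(rules).items():
            print(general, "->", specifics)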
  •
    ABSTRACT: The QuakeSim science gateway environment includes a visually rich portal interface, web service access to data and data processing operations, and the QuakeTables ontology-based database of fault models and sensor data. The integrated tools and services are designed to assist investigators by covering the entire earthquake cycle of strain accumulation and release. The Web interface now includes Drupal-based access to diverse and changing content, with the new ability to access data and data processing directly from the public page, as well as the traditional project management areas that require password access. The system is designed to make initial browsing of fault models and deformation data particularly engaging for new users. Popular data and data processing include GPS time series with data mining techniques to find anomalies in time and space, experimental forecasting methods based on catalogue seismicity, faulted deformation models (both half-space and finite element), and model-based inversion of sensor data. The fault models include the CGS and UCERF 2.0 faults of California and are easily augmented with self-consistent fault models from other regions. The QuakeTables deformation data include the comprehensive set of UAVSAR interferograms as well as a growing collection of satellite InSAR data. Fault interaction simulations, based on Virtual California, are also being incorporated into the web environment. A sample usage scenario is presented which follows an investigation of UAVSAR data from viewing it as an overlay in Google Maps, to selection of an area of interest via a polygon tool, to fast extraction of the relevant correlation and phase information from large data files, to a model inversion of fault slip followed by calculation and display of a synthetic model interferogram.
    AGU Fall Meeting Abstracts. 12/2011;
  •
    ABSTRACT: In recent years, the geo-science community has expanded its need for spaceborne data to study the Earth and its deformations. QuakeTables, the ontology-based federated database system, expanded its radar-based data repository from only housing InSAR interferograms to also include Repeat Pass Interferometry (RPI) products for Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR). Each RPI product is cataloged within QuakeTables using its metadata and the number of products available in the RPI release, allowing users to access all related data files and annotations. Further, QuakeTables provides visualization utilizing multiple levels of resolution via Google Maps and Google Earth. As illustrated by the recent earthquake in Japan, there is an urgent need for scientific data after a natural disaster, and the interferograms generated from repeat passes of the easily deployable UAVSAR can help scientists and first responders study the deformation on the Earth's surface and act accordingly. The QuakeTables infrastructure assures a speedy deployment of such products as soon as they are available. UAVSAR RPI products are constantly being added to the repository as they are released by the JPL UAVSAR group. QuakeTables provides access to both its fault-based and radar-based datasets via a web interface, an API, and a web-services interface. The UAVSAR data repository was developed by the QuakeSim group on USC and IU facilities, with the goal of transferring these capabilities to the Alaska Satellite Facility UAVSAR DAAC.
    AGU Fall Meeting Abstracts. 12/2011;
  • Sang Su Lee, Tagyoung Chung, Dennis McLeod
    ABSTRACT: The need to identify an approach that recommends items that match users' preferences within social networks has grown in tandem with the increasing number of items appearing within these networks. This research presents a novel technique for item recommendation within social networks that matches user and group interests over time. Users often tag items in social networks with words and phrases that reflect their preferred "vocabulary." As such, these tags provide succinct descriptions of the resource, implicitly reveal user preferences, and, as the tag vocabulary of users tends to change over time, reflect the dynamics of user preferences. Based on evaluation of user and group interests over time, we present a recommendation system employing a modified latent Dirichlet allocation (LDA) model in which users and tags associated with an item are represented and clustered by topics, and the topic-based representation is combined with the item's timestamp to show the time-based topic distribution. By representing users via topics, the model can cluster users to reveal group interests. Based on this model, we developed a recommendation system that reflects user as well as group interests in a dynamic manner that accounts for time, allowing it to outperform static recommendation systems in terms of precision. Index Terms: Web mining, tagging, recommender systems, information analysis, social network services.
    Eighth International Conference on Information Technology: New Generations, ITNG 2011, Las Vegas, Nevada, USA, 11-13 April 2011; 01/2011
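    A simplified sketch of time-aware, topic-based scoring in the spirit of the abstract; topic vectors are given directly rather than inferred with LDA, and the exponential time decay is an illustrative assumption, not the paper's model.

        # Rank items by topic affinity with the user, discounted by item age.

        import math

        def score(user_topics, item_topics, item_age_days, half_life_days=30.0):
            """Dot-product topic affinity, discounted by an age-based decay."""
            affinity = sum(u * i for u, i in zip(user_topics, item_topics))
            decay = math.exp(-math.log(2) * item_age_days / half_life_days)
            return affinity * decay

        user = [0.6, 0.3, 0.1]                            # interest over 3 latent topics
        items = {
            "photo_tagged_travel": ([0.7, 0.2, 0.1], 5),  # (topic mix, age in days)
            "post_tagged_music":   ([0.1, 0.8, 0.1], 60),
        }
        ranked = sorted(items, key=lambda k: -score(user, *items[k]))
        print(ranked)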
  • Conference Paper: Geostreaming in cloud.
    ABSTRACT: In recent years, geospatial databases have been commercialized and widely exposed to mass users. Current exponential growth in data generation and querying rates for these data highlights the importance of efficient techniques for streaming. Traditional database technology, which operates on persistent and less dynamic data objects, does not meet the requirements for efficient geospatial data streaming. Geostreaming, the intersection of data stream processing and geospatial querying, is an ongoing research focus in this area. In this paper, we describe why the cloud is the most appropriate infrastructure in which to support geospatial stream data processing. First, we argue that the cloud best fits the requirements of a large-scale geostreaming application. Second, we propose ElaStream, a general cloud-based streaming infrastructure that enables huge parallelism by means of the divide, conquer, and combine paradigm. Third, we examine key related work in the data streaming and (geo)spatial database fields, and describe the challenges ahead in building scalable cloud-based geostreaming applications.
    Proceedings of the 2011 ACM SIGSPATIAL International Workshop on GeoStreaming, IWGS 2011, November 1, 2011, Chicago, IL, USA; 01/2011
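    A toy divide/conquer/combine pass over a batch of geostream points, included only to make the paradigm concrete; the grid partitioning, function names, and per-cell counting are illustrative assumptions, not ElaStream's interface.

        # Divide points by spatial grid cell, count per cell per batch, then merge.

        from collections import Counter, defaultdict

        def divide(points, cell=1.0):
            partitions = defaultdict(list)
            for lon, lat in points:
                partitions[(int(lon // cell), int(lat // cell))].append((lon, lat))
            return partitions

        def conquer(partitions):
            return Counter({cell: len(pts) for cell, pts in partitions.items()})

        def combine(partials):
            total = Counter()
            for p in partials:
                total += p
            return total

        batch_1 = [(-118.2, 34.0), (-118.3, 34.1)]
        batch_2 = [(-118.2, 34.05), (-122.4, 37.8)]
        partials = [conquer(divide(b)) for b in (batch_1, batch_2)]
        print(combine(partials))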
  •
    ABSTRACT: QuakeTables is an ontology-based infrastructure that supports the diverse data types and federated data sets needed to support large-scale modeling of inter-seismic and tectonic processes using boundary element, finite element, and analytic applications. This includes fault, paleoseismic, and space-borne data. Some of the fault data housed in QuakeTables include the CGS 1996, CGS 2002, and official UCERF 2 deformation models. Currently, QuakeTables supports two forms of radar data, namely, Interferometric Synthetic Aperture Radar (InSAR) and Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) Repeat Pass Interferometry (RPI) products in the form of interferograms. All data types are integrated and presented to the end-user with tools to map and visualize the data, along with the ability to download it in the desired format for local and/or remote processing. In QuakeTables, each dataset is represented in a self-consistent form as it was originally found in a publication or resource, along with its metadata. To support modelers' and scientists' need to view different interpretations of the same data, an ontology processor is used to generate derivations into the desired models and formats while preserving the original dataset and maintaining the metadata for the different models and the links to the original dataset. The QuakeSim team developed a reference model that is used by applications such as Simplex and GeoFest. This allows the preservation of data and provides a reference for result comparison in the same tool. Through its API and web-services interfaces, QuakeTables delivers data to both the end-users and the QuakeSim portal.
    AGU Fall Meeting Abstracts. 12/2010;
  •
    ABSTRACT: Pseudorapidity distributions of charged particles emitted in $Au+Au$, $Cu+Cu$, $d+Au$, and $p+p$ collisions over a wide energy range have been measured using the PHOBOS detector at RHIC. The centrality dependence of both the charged particle distributions and the multiplicity at midrapidity were measured. Pseudorapidity distributions of charged particles emitted with $|\eta|<5.4$, which account for between 95% and 99% of the total charged-particle emission associated with collision participants, are presented for different collision centralities. Both the midrapidity density, $dN_{ch}/d\eta$, and the total charged-particle multiplicity, $N_{ch}$, are found to factorize into a product of independent functions of collision energy, $\sqrt{s_{_{NN}}}$, and centrality given in terms of the number of nucleons participating in the collision, $N_{part}$. The total charged-particle multiplicity, observed in these experiments and those at lower energies, exhibits a linear dependence on $(\ln s_{_{NN}})^2$ over the full range of collision energies, $\sqrt{s_{_{NN}}}$=2.7-200 GeV.
    Physical Review C 11/2010; 83(2). · 3.72 Impact Factor
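    Restating the factorization quoted in the abstract in display form; $f$, $g$, $f'$, $g'$ are generic placeholders for the unspecified energy and centrality functions, and $a$, $b$ for the unspecified constants of the linear trend.

        % Restatement of the abstract's result in the abstract's own notation.
        \[
          \left.\frac{dN_{ch}}{d\eta}\right|_{\eta \approx 0}
            = f\!\left(\sqrt{s_{_{NN}}}\right)\, g\!\left(N_{part}\right),
          \qquad
          N_{ch} = f'\!\left(\sqrt{s_{_{NN}}}\right)\, g'\!\left(N_{part}\right),
          \qquad
          N_{ch} \approx a + b \left(\ln s_{_{NN}}\right)^{2},
        \]
        % where a and b may depend on centrality (the abstract does not specify).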
  •
    ABSTRACT: The QuakeSim Project improves understanding of earthquake processes by integrating model applications and various heterogeneous data sources within a web services environment. The project focuses on the earthquake cycle and related crustal deformation. Spaceborne GPS and Interferometric Synthetic Aperture Radar (InSAR) data provide information on near-term crustal deformation, while paleoseismic geologic data provide longer-term information on earthquake fault processes. These data sources are integrated into QuakeSim's QuakeTables database and are accessible by users or various model applications. An increasing amount of UAVSAR data is being added to the QuakeTables database through a map-browsable interface. Model applications can retrieve data from QuakeTables or remotely served GPS velocity data services, or users can manually input parameters into the models. Pattern analysis of GPS and seismicity data has proved useful for mid-term forecasting of earthquakes and for detecting subtle changes in crustal deformation. The GPS time series analysis has also proved useful for detecting changes in processing of the data. Development of the QuakeSim computational infrastructure has benefitted greatly from having the user in the development loop. Improved visualization tools enable more efficient data exploration and understanding. Tools must provide flexibility to science users for exploring data in new ways, but also must facilitate standard, intuitive, and routine uses for end users such as emergency responders.
    IEEE Aerospace Conference Proceedings 01/2010;
  •
    ABSTRACT: The NASA QuakeSim program unites many components of earthquake fault data and modeling toward short-term forecasts of major earthquake events. The QuakeTables component enables widespread web access to multiple self-consistent earthquake fault models and an increasing set of GPS and InSAR displacement data. These data are ingested by a variety of QuakeSim models and pattern analysis techniques, including elastic half-space inversions, finite element continuum models, Hidden Markov models, and Pattern Informatics-based forecasting methods. These tools are migrating to Web 2.0 technologies, such as Google Gadgets.
    AGU Fall Meeting Abstracts. 12/2009;
  •
    ABSTRACT: The 13 papers in this special issue focus on knowledge and data engineering for e-learning. Some of these papers were recommended submissions from the best ranked papers presented at the Sixth International Conference on Web-Based Learning (ICWL '07), held in August 2007 in Edinburgh, United Kingdom.
    IEEE Transactions on Knowledge and Data Engineering 07/2009; · 1.89 Impact Factor
  •
    ABSTRACT: We are using the QuakeSim environment to model interacting fault systems. One goal of QuakeSim is to prepare for the large volumes of data that spaceborne missions such as DESDynI will produce. QuakeSim has the ability to ingest distributed heterogeneous data in the form of InSAR, GPS, seismicity, and fault data into various earthquake modeling applications, automating the analysis when possible. Virtual California simulates interacting faults in California. We can compare output from long time-history Virtual California runs with the current state of strain and the strain history in California. In addition to spaceborne data we will begin assimilating data from UAVSAR airborne flights over the San Francisco Bay Area, the Transverse Ranges, and the Salton Trough. Results of the models are important for understanding future earthquake risk and for providing decision support following earthquakes. Improved models require this sensor web of different data sources, and a modeling environment for understanding the combined data.
    Aerospace conference, 2009 IEEE; 04/2009
  • Seongwook Youn, Dennis McLeod
    ABSTRACT: Image spam, a kind of spam in which the text message is embedded into an attached image to defeat spam filtering techniques, is becoming an increasingly serious problem. For nearly a decade, content-based filtering using text classification or machine learning has been a major trend in anti-spam filtering systems. A key technique used by spammers is to embed text into image(s) in spam email. In (4), we proposed two levels of ontology spam filters: a first-level global ontology filter and a second-level user-customized ontology filter. However, that previous system handles only text e-mail, and the percentage of attached images is increasing sharply. The contribution of this paper is that we add an image e-mail handling capability to the previous anti-spam filtering system, enhancing the effectiveness of spam filtering.
    Proceedings of the 2009 ACM Symposium on Applied Computing (SAC), Honolulu, Hawaii, USA, March 9-12, 2009; 01/2009
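    A sketch of the overall flow the abstract describes: recover text embedded in an attached image, then pass it through the existing text-based spam decision. Both helper functions here are invented stand-ins; extract_text_from_image() represents an OCR step and is_spam_text() represents the ontology-based text filter.

        def extract_text_from_image(image_bytes):
            # Placeholder: a real system would run OCR on the attachment here.
            return "cheap meds click now"

        def is_spam_text(text):
            spam_terms = {"cheap", "click now", "winner"}   # illustrative only
            return any(term in text.lower() for term in spam_terms)

        def classify_email(body_text, image_attachments):
            texts = [body_text] + [extract_text_from_image(img) for img in image_attachments]
            return "spam" if any(is_spam_text(t) for t in texts) else "legitimate"

        print(classify_email("Hi, meeting at 3pm?", image_attachments=[b"\x89PNG..."]))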
  • Anne Yun-An Chen, Dennis McLeod
    ABSTRACT: Many data representation structures, such as web site categories and domain ontologies, have been established for semantic-based information search and retrieval on the web. These structures consist of concepts and their interrelationships. Approaches to determine the similarity in semantics among concepts in data representation structures have been developed in order to facilitate information retrieval and recommendation processes. Some approaches are only suitable for similarity computations in pure tree structures. Other approaches, designed for Directed Acyclic Graph structures, yield high computational complexity for online similarity decisions. In order to provide efficient similarity computations for data representation structures, we propose a geometry-based solution. Similarity computations are based on geometric properties. The similarity model is based on the proposed geometry-based solution, and the online similarity computation is performed in constant time.
    01/2009;
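    A sketch of the general idea of constant-time online similarity: concept coordinates are precomputed offline from the structure, so an online query is just a fixed-cost distance between two stored points. The 2-D coordinates and the distance-to-similarity mapping below are invented placeholders, not the paper's construction.

        import math

        coords = {                      # hypothetical offline embedding of concepts
            "vehicle": (0.0, 0.0),
            "car":     (1.0, 0.2),
            "truck":   (1.1, -0.3),
            "banana":  (5.0, 4.0),
        }

        def similarity(a, b):
            """O(1) per query: one distance computation over precomputed coordinates."""
            (ax, ay), (bx, by) = coords[a], coords[b]
            return 1.0 / (1.0 + math.hypot(ax - bx, ay - by))

        print(similarity("car", "truck"), similarity("car", "banana"))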
  •
    ABSTRACT: Nowadays, computer interaction is mostly done using dedicated devices. But gestures are an easy means of expression between humans that could be used to communicate with computers in a more natural manner. Most of the current research on hand gesture recognition for Human-Computer Interaction relies on either Neural Networks or Hidden Markov Models (HMMs). In this paper, we compare different approaches for gesture recognition and highlight the major advantages of each. We show that gesture recognition based on the bio-mechanical characteristics of the hand provides an intuitive approach with higher accuracy and lower complexity.
    Human-Computer Interaction. Novel Interaction Methods and Techniques, 13th International Conference, HCI International 2009, San Diego, CA, USA, July 19-24, 2009, Proceedings, Part II; 01/2009
  • Seongwook Youn, Dennis McLeod
    ABSTRACT: E-mail is one of the most common communication methods among people on the Internet. However, the increase of e-mail misuse/abuse has resulted in an increasing volume of spam e-mail over recent years. As spammers always try to find a way to evade existing spam filters, new filters need to be developed to catch spam. A statistical learning filter is at the core of many commercial anti-spam filters. It can be trained either globally for all users or personally for each user. Generally, globally-trained filters outperform personally-trained filters for both small and large collections of users in a real environment. However, globally-trained filters sometimes ignore personal data: they cannot retain personal preferences and contexts as to whether a feature should be treated as an indicator of legitimate e-mail or spam. Gray e-mail is a message that could reasonably be considered either legitimate or spam. In this paper, a personalized ontology spam filter was implemented to make decisions for gray e-mail. In the future, by considering both global and personal ontology-based filters, we can achieve a significant improvement in overall performance.
    Proceedings of the 2009 ACM Symposium on Applied Computing (SAC), Honolulu, Hawaii, USA, March 9-12, 2009; 01/2009
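    An illustration of the gray e-mail idea: when the global filter's decision is inconclusive, defer to a per-user decision. The thresholds and scoring functions are illustrative stand-ins, not the paper's ontology filters.

        def global_spam_score(text):
            spam_terms = {"lottery", "viagra", "wire transfer"}   # illustrative only
            hits = sum(term in text.lower() for term in spam_terms)
            return min(1.0, hits / 2)

        def classify(text, user_prefers_as_ham, low=0.3, high=0.7):
            score = global_spam_score(text)
            if score >= high:
                return "spam"
            if score <= low:
                return "legitimate"
            # Gray zone: the globally-trained filter is unsure, so apply the
            # user's personal context for this kind of message.
            return "legitimate" if user_prefers_as_ham(text) else "spam"

        newsletter = "Weekly lottery results for our chess club"
        print(classify(newsletter, user_prefers_as_ham=lambda t: "chess" in t.lower()))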
  •
    ABSTRACT: QuakeSim is a project to develop a modeling environment for studying earthquake processes using a Web services environment. In order to model interseismic processes, multiple data types must be ingested, including spaceborne GPS and InSAR data, geological fault data, and seismicity data. QuakeSim federates data from these multiple sources and integrates the databases with modeling applications. Because the models are complex and compute-intensive, we are using the Columbia computer located at NASA Ames to integrate and run software programs to improve our understanding of the solid Earth and earthquake processes. The complementary software programs are used to simulate interacting earthquake fault systems, model nucleation and slip on faults, and calculate run-up and inundation from tsunamis generated by offshore earthquakes. QuakeSim also applies pattern recognition techniques to real and simulated data to elucidate subtle features in the processes.
    Aerospace Conference, 2008 IEEE; 04/2008

Publication Stats

1k Citations
277.16 Total Impact Points

Institutions

  • 1996–2014
    • University of Southern California
      • Department of Computer Science
      Los Angeles, California, United States
  • 2009
    • California Institute of Technology
      • Jet Propulsion Laboratory
      Pasadena, CA, United States
  • 1997–2007
    • University of California, Los Angeles
      • Department of Computer Science
      Los Angeles, California, United States
  • 1979–2006
    • University of Illinois at Chicago
      • Department of Physics
      Chicago, IL, United States
  • 2003
    • Western Michigan University
      Kalamazoo, Michigan, United States
  • 2000–2001
    • Argonne National Laboratory
      • Division of Physics
      Lemont, Illinois, United States
  • 1986–1987
    • University of Maryland, College Park
      Maryland, United States