D. McLeod

University of Southern California, Los Angeles, California, United States


Publications (228) · 413.32 Total Impact Points

  • Source
    ABSTRACT: Spectator fragments resulting from relativistic heavy ion collisions, consisting of single protons and neutrons along with groups of stable nuclear fragments up to Nitrogen (Z=7), are measured in PHOBOS. These fragments are observed in Au+Au ($\sqrt{s_{NN}}$ = 19.6 GeV) and Cu+Cu ($\sqrt{s_{NN}}$ = 22.4 GeV) collisions at high pseudorapidity ($\eta$). The dominant multiply-charged fragment is the tightly bound Helium ($\alpha$), with Lithium, Beryllium, and Boron all clearly seen as a function of collision centrality and pseudorapidity. We observe that in Cu+Cu collisions, it becomes much more favorable for the $\alpha$ fragments to be released than Lithium. The yields of fragments approximately scale with the number of spectator nucleons, independent of the colliding ion. The shapes of the pseudorapidity distributions of fragments indicate that the average deflection of the fragments away from the beam direction increases for more central collisions. A detailed comparison of the shapes for $\alpha$ and Lithium fragments indicates that the centrality dependence of the deflections favors a scaling with the number of participants in the collision.
    Full-text · Article · Nov 2015
  • S. Leone · A. de Spindler · M.C. Norrie · D. McLeod
    ABSTRACT: Today, web development platforms often follow a modular architecture that enables platform extension. Popular web development frameworks such as Ruby on Rails and Symfony, as well as content management systems (CMS) such as WordPress and Drupal offer extension mechanisms that allow the platform core to be extended with additional functionality. However, such extensions are typically isolated units defining their own data structures, application logic and user interfaces, and are difficult to combine. We address the fact that applications need to be configured more freely through the composition of such extensions. We present an approach and model for component-based web engineering based on the concept of components and connectors between them, supporting composition at the level of the schema and data, the application logic and the user interface. We have realised our approach in two popular web development settings. First, we demonstrate how our approach can be integrated into web development frameworks, thus bringing component-based web engineering to the developer. Second, we present, based on the example of WordPress, how advanced end-users can be supported in component-based web engineering by integrating our approach into CMS. The applicability of our approach in both settings demonstrates its generality.
    No preview · Article · Jul 2014
  • Source
    Vesile Evrim · Dennis McLeod
    ABSTRACT: Finding the set of information that satisfies a Web user's request amid today's vast amount of digital data is a challenging problem. Currently available Information Retrieval (IR) systems are designed to return long lists of results, only a few of which are relevant for a specific user. This paper introduces an IR method, Context-Based Information Analysis (CONIA), that exploits the context of both the user and the information request to provide relevant results for users in a given domain. Here, relevance is measured by the semantics of the information provided in the documents. Information extracted from lexical and domain ontologies is integrated with the user's interest information to expand the terms entered in the request. The resulting set of terms is categorized by a novel approach, and the relations between the categories are obtained from the ontologies. This categorization improves the quality of document selection by going beyond checking for the presence of words in a document to analyzing the semantic composition of the mapped terms.
    Full-text · Article · Jan 2014 · Knowledge and Information Systems
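The ontology-driven term-expansion step this abstract describes can be sketched roughly as follows; the toy ontology, interest profile, and function names are illustrative assumptions, not CONIA's actual implementation.

```python
# Hypothetical sketch of ontology-based query expansion in the spirit of
# CONIA: request terms are expanded with related concepts from a toy
# lexical/domain ontology, filtered by the user's interest profile.

ONTOLOGY = {  # term -> related concepts (illustrative, not from the paper)
    "heart": ["cardiac", "cardiovascular"],
    "attack": ["infarction", "assault"],
}

def expand_query(terms, interests, threshold=0.5):
    """Expand query terms with ontology concepts the user cares about."""
    expanded = list(terms)
    for term in terms:
        for concept in ONTOLOGY.get(term, []):
            # keep only expansions consistent with the user's interests
            if interests.get(concept, 0.0) >= threshold:
                expanded.append(concept)
    return expanded

# For a medically inclined user, the irrelevant sense "assault" is filtered out.
medical_user = {"cardiac": 0.9, "infarction": 0.8, "assault": 0.1}
print(expand_query(["heart", "attack"], medical_user))
# -> ['heart', 'attack', 'cardiac', 'infarction']
```

The interest profile is what disambiguates "attack": the same two keywords would expand differently for a crime-news reader, which is the kind of context sensitivity the abstract argues plain term-frequency retrieval lacks.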
  •
    ABSTRACT: Many location-based applications are enabled by handling numerous moving queries over mobile objects. Efficient processing of such queries mainly relies on effective probing, i.e., polling the objects to obtain their current locations (required for processing the queries). With effective probing, one can monitor the current location of the objects with sufficient accuracy for the existing queries, by striking a balance between communication cost of probing and accuracy of the knowledge about current location of the objects. In this paper, we focus on location-based applications that reduce to processing a large set of proximity monitoring queries simultaneously, where each query continuously monitors if a pair of objects are within a certain predefined distance. Accordingly, we propose an effective object probing solution for efficient processing of proximity monitoring queries. In particular, with our proposed solution for the first time we formulate optimal probing as a batch processing problem and propose a method to prioritize probing the objects such that the total number of probes required to answer all queries is minimized. Our extensive experiments demonstrate the efficiency of our proposed solution for a wide range of applications involving up to hundreds of millions of queries.
    No preview · Conference Paper · Nov 2013
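The batch-probing idea above can be illustrated with a simple greedy prioritization; this is a sketch under assumptions (the paper's actual optimization formulation is more involved), where a query is answerable once both of its objects have been probed, and we always probe next the object that appears in the most unresolved queries.

```python
# Illustrative greedy probe prioritization for proximity monitoring
# (hypothetical structure; not the paper's algorithm). Each query
# monitors one pair of objects; probing an object shared by many
# pending queries resolves more of them per probe.

def probe_order(queries):
    """queries: list of (obj_a, obj_b) pairs. Returns the probe sequence."""
    probed, order = set(), []
    pending = list(queries)
    while pending:
        # count how many pending queries each unprobed object appears in
        counts = {}
        for a, b in pending:
            for o in (a, b):
                if o not in probed:
                    counts[o] = counts.get(o, 0) + 1
        best = max(counts, key=lambda o: counts[o])
        probed.add(best)
        order.append(best)
        # a query is resolved once both of its objects are probed
        pending = [q for q in pending
                   if not (q[0] in probed and q[1] in probed)]
    return order

queries = [("car1", "car2"), ("car1", "car3"), ("car2", "car3")]
print(probe_order(queries))  # three probes answer all three queries
```

Because "car1" participates in two queries, probing it first lets both of those queries be resolved by single follow-up probes, which is the balance between probe cost and query coverage the abstract describes.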
  •
    ABSTRACT: Popular content management systems such as WordPress and Drupal offer a plug-in mechanism that allows users to extend the platform with additional functionality. However, plug-ins are typically isolated extensions defining their own data structures, application logic and user interface, and are difficult to combine. We address the fact that users may want to configure their applications more freely through the composition of such extensions. We present an approach and model for component-based web engineering based on the concept of components and connectors between them, supporting composition at the level of the schema and data, the application logic and the user interface. We show how our approach can be used to integrate component-based web engineering into platforms such as WordPress. We demonstrate the benefits of the approach by presenting a composition plug-in that showcases component composition through configurable connectors based on an eCommerce application scenario.
    No preview · Conference Paper · Jul 2013
  •
    ABSTRACT: Earthquake science and emergency response require integration of many data types and models that cover a broad range of scales in time and space. Timely and efficient earthquake analysis and response require automated processes and a system in which the interfaces between models and applications are established and well defined. Geodetic imaging data provide observations of crustal deformation from which strain accumulation and release associated with earthquakes can be inferred. Data products are growing and tend to be either relatively large in size, on the order of 1 GB per image with hundreds or thousands of images, or high data rate, such as from 1 second GPS solutions. The products can be computationally intensive to manipulate, analyze, or model, and are unwieldy to transfer across wide area networks. Required computing resources can be large, even for a few users, and can spike when new data are made available or when an earthquake occurs. A cloud computing environment is the natural extension for some components of QuakeSim as an increasing number of data products and model applications become available to users. Storing the data near the model applications improves performance for the user.
    No preview · Conference Paper · Jan 2013
  • Source
    ABSTRACT: Advances in understanding earthquakes require the integration of models and multiple distributed data products. Increasingly, data are acquired through large investments, and utilizing their full potential requires a coordinated effort by many users, independent researchers, and groups who are often distributed both geographically and by expertise.
    Full-text · Article · Sep 2012 · Computing in Science and Engineering
  • Source
    Hyun Woong Shin · Eduard Hovy · Dennis Mcleod · Larry Pryor
    ABSTRACT: Most information retrieval systems, including Web search engines, use similarity ranking algorithms based on a vector space model to find relevant information in response to a user's request. However, the retrieved information is frequently irrelevant, because most current information systems employ index terms or other techniques that are variants of term frequency. In this paper, we propose a new criterion, "generality," that provides an additional basis on which to rank retrieved documents. We compared our generality quantification algorithm with human judges' weightings of values and show that the two are significantly correlated.
    Full-text · Article · Mar 2012
  • Dongwoo Won · Dennis McLeod
    ABSTRACT: Association rules are a fundamental data mining technique, used for various applications. In this paper, we present an efficient method to make use of association rules for discovering knowledge from transactional data. First, we approach this problem using an ontology. The hierarchical structure of an ontology defines the generalisation relationship for the concepts of different abstraction levels that are utilised to minimise the search space. Next, we have developed an efficient algorithm, hierarchical association rule categorisation (HARC), which uses a novel metric, called relevance, for categorising association rules. As a result, users are now able to find the needed rules efficiently by searching the compact generalised rules first and then the specific rules that belong to them, rather than scanning the entire list of rules.
    No preview · Article · Jan 2012 · International Journal of Data Mining Modelling and Management
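The ontology-based generalisation step behind this kind of rule categorisation can be sketched as follows; the toy concept hierarchy and rule data are hypothetical, and HARC's relevance metric is not reproduced here.

```python
# Minimal sketch of ontology-driven rule generalisation (illustrative
# assumption, not the HARC algorithm itself): items in mined rules are
# lifted to ancestor concepts, so many specific rules collapse into a
# few general ones that a user can browse first.

PARENT = {  # toy concept hierarchy: child -> parent
    "skim milk": "milk", "whole milk": "milk",
    "white bread": "bread", "rye bread": "bread",
}

def generalize(rule):
    """Lift each item of an (antecedent, consequent) rule one level up."""
    ante, cons = rule
    lift = lambda items: frozenset(PARENT.get(i, i) for i in items)
    return (lift(ante), lift(cons))

def categorize(rules):
    """Group specific rules under their generalized form."""
    groups = {}
    for rule in rules:
        groups.setdefault(generalize(rule), []).append(rule)
    return groups

rules = [
    (frozenset({"skim milk"}), frozenset({"white bread"})),
    (frozenset({"whole milk"}), frozenset({"rye bread"})),
]
# both specific rules fall under the single general rule milk -> bread
print(len(categorize(rules)))  # -> 1
```

Browsing then proceeds top-down, matching the abstract's description: a user inspects the one general rule and drills into its member rules only if it looks relevant.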
  • Jinwoo Kim · D. McLeod
    ABSTRACT: Keyword search is currently a prominent data retrieval method for the Web, because the simplicity and efficiency of keyword processing allow large amounts of information to be handled with fast response times. However, keyword search approaches do not formally capture the meaning of a query and fail to address the semantic relationships between keywords. As a result, their ranking algorithms do not properly reflect the semantic relevance of keywords, and the accuracy (precision and recall) of search results is often low. We therefore propose a new 3-tuple query interface and a corresponding ranking algorithm, and present a comparison of the search accuracy of the new interface against conventional search approaches.
    No preview · Conference Paper · Jan 2012
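The contrast with bag-of-keywords search can be illustrated with a toy triple store; the tuple shape (subject, relation, object) and the data below are assumptions for illustration, not the authors' system.

```python
# Illustrative 3-tuple querying over a toy triple store (hypothetical
# data): unlike a keyword query, the tuple states which relation must
# hold between the terms, so different intents are distinguishable.

TRIPLES = [
    ("aspirin", "treats", "headache"),
    ("aspirin", "causes", "ulcer"),
    ("ibuprofen", "treats", "headache"),
]

def match(query):
    """query: (subject, relation, object); None acts as a wildcard."""
    s, r, o = query
    return [t for t in TRIPLES
            if (s is None or t[0] == s)
            and (r is None or t[1] == r)
            and (o is None or t[2] == o)]

# A keyword query "aspirin headache" cannot separate these two intents:
print(match(("aspirin", "treats", None)))   # what does aspirin treat?
print(match((None, "treats", "headache")))  # what treats headaches?
```

The explicit relation slot is what lets a ranking algorithm reason about semantic relevance instead of term co-occurrence, which is the gap the abstract identifies in conventional keyword search.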
  •
    ABSTRACT: The QuakeSim science gateway environment includes a visually rich portal interface, web service access to data and data processing operations, and the QuakeTables ontology-based database of fault models and sensor data. The integrated tools and services are designed to assist investigators by covering the entire earthquake cycle of strain accumulation and release. The Web interface now includes Drupal-based access to diverse and changing content, with the new ability to access data and data processing directly from the public page, as well as the traditional project management areas that require password access. The system is designed to make initial browsing of fault models and deformation data particularly engaging for new users. Popular data and data processing include GPS time series with data mining techniques to find anomalies in time and space, experimental forecasting methods based on catalogue seismicity, faulted deformation models (both half-space and finite element), and model-based inversion of sensor data. The fault models include the CGS and UCERF 2.0 faults of California and are easily augmented with self-consistent fault models from other regions. The QuakeTables deformation data include the comprehensive set of UAVSAR interferograms as well as a growing collection of satellite InSAR data. Fault interaction simulations are also being incorporated in the web environment based on Virtual California. A sample usage scenario is presented which follows an investigation of UAVSAR data from viewing as an overlay in Google Maps, to selection of an area of interest via a polygon tool, to fast extraction of the relevant correlation and phase information from large data files, to a model inversion of fault slip followed by calculation and display of a synthetic model interferogram.
    No preview · Article · Dec 2011
  •
    ABSTRACT: In recent years, the geo-science community has expanded its need for spaceborne data to study the Earth and its deformations. QuakeTables, the ontology-based federated database system, expanded its radar-based data repository from only housing InSAR interferograms to also include Repeat Pass Interferometry (RPI) products for Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR). Each RPI product is cataloged within QuakeTables using its metadata and the number of products available in the RPI release, allowing users to access all related data files and annotations. Further, QuakeTables provides visualization at multiple levels of resolution via Google Maps and Google Earth. As illustrated by the recent earthquake in Japan, there is an urgent need for scientific data after a natural disaster, and the interferograms generated from repeat passes of the easily deployable UAVSAR platform can help scientists and first responders study the deformation of the Earth's surface and act accordingly. The QuakeTables infrastructure assures a speedy deployment of such products as soon as they are available. UAVSAR RPI products are constantly being added to the repository as they are released by the JPL UAVSAR group. QuakeTables provides access to both its fault-based and radar-based datasets via a web interface, an API and a web-services interface. The UAVSAR data repository was developed by the QuakeSim group on USC and IU facilities, with the goal of transferring its capabilities to the Alaska Satellite Facility UAVSAR DAAC.
    No preview · Article · Dec 2011
  • Source
    Sang Su Lee · Tagyoung Chung · Dennis McLeod
    ABSTRACT: The need to identify an approach that recommends items matching users' preferences within social networks has grown in tandem with the increasing number of items appearing within these networks. This research presents a novel technique for item recommendation within social networks that matches user and group interests over time. Users often tag items in social networks with words and phrases that reflect their preferred "vocabulary." As such, these tags provide succinct descriptions of the resource, implicitly reveal user preferences, and, as the tag vocabulary of users tends to change over time, reflect the dynamics of user preferences. Based on evaluation of user and group interests over time, we present a recommendation system employing a modified latent Dirichlet allocation (LDA) model in which users and tags associated with an item are represented and clustered by topics, and the topic-based representation is combined with the item's timestamp to show time-based topic distribution. By representing users via topics, the model can cluster users to reveal group interests. Based on this model, we developed a recommendation system that reflects user as well as group interests in a dynamic manner that accounts for time, allowing it to outperform static recommendation systems in terms of precision rate. Index Terms: Web mining, tagging, recommender systems, information analysis, social network services.
    Preview · Conference Paper · Apr 2011
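The time-aware flavor of this model can be conveyed with a drastically simplified stand-in; the decay scheme, data, and function names below are assumptions for illustration and do not reproduce the paper's modified LDA model.

```python
import math

# Drastically simplified stand-in for a time-aware tag-based recommender
# (illustrative only, not the paper's LDA model): a user's profile is a
# tag distribution with exponential decay on tag age, so recent tagging
# behaviour dominates, and items are ranked by overlap with that profile.

def profile(tag_events, now, half_life=30.0):
    """tag_events: list of (tag, timestamp_in_days). Decayed tag weights."""
    weights = {}
    for tag, t in tag_events:
        w = math.exp(-math.log(2) * (now - t) / half_life)
        weights[tag] = weights.get(tag, 0.0) + w
    return weights

def rank_items(items, weights):
    """items: dict item -> set of tags. Highest-scoring items first."""
    score = lambda tags: sum(weights.get(tag, 0.0) for tag in tags)
    return sorted(items, key=lambda i: score(items[i]), reverse=True)

# "gardening" was tagged long ago, "python" recently, so the profile
# has drifted toward programming content.
events = [("python", 100.0), ("gardening", 5.0)]
weights = profile(events, now=100.0)
items = {"flask-tutorial": {"python", "web"}, "rose-care": {"gardening"}}
print(rank_items(items, weights))  # -> ['flask-tutorial', 'rose-care']
```

The half-life plays the role the timestamps play in the paper's topic distributions: it lets the ranking track preference drift rather than a static average of all past behaviour.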
  • Source
    Conference Paper: Geostreaming in cloud
    ABSTRACT: In recent years, geospatial databases have been commercialized and widely exposed to mass users. The current exponential growth in data generation and querying rates for these data highlights the importance of efficient techniques for streaming. Traditional database technology, which operates on persistent and less dynamic data objects, does not meet the requirements for efficient geospatial data streaming. Geostreaming, the intersection of data stream processing and geospatial querying, is an ongoing research focus in this area. In this paper, we describe why the cloud is the most appropriate infrastructure in which to support geospatial stream data processing. First, we argue that the cloud best fits the requirements of a large-scale geostreaming application. Second, we propose ElaStream, a general cloud-based streaming infrastructure that enables huge parallelism by means of the divide, conquer, and combine paradigm. Third, we examine key related work in the data streaming and (geo)spatial database fields, and describe the challenges ahead to build scalable cloud-based geostreaming applications.
    Preview · Conference Paper · Jan 2011
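The divide, conquer, and combine paradigm mentioned above can be sketched in miniature; the grid-cell partitioner and per-cell count below are illustrative assumptions, not ElaStream's actual architecture (whose stages run on distributed cloud nodes).

```python
# Minimal single-process sketch of divide/conquer/combine for a
# geostream (illustrative assumption): points are divided among
# workers by grid cell, each worker aggregates its cell, and the
# per-cell results are combined into a global view.

CELL = 1.0  # degrees of latitude/longitude per grid cell

def divide(points):
    """Partition (lat, lon) points into grid cells."""
    cells = {}
    for lat, lon in points:
        key = (int(lat // CELL), int(lon // CELL))
        cells.setdefault(key, []).append((lat, lon))
    return cells

def conquer(cell_points):
    """Per-worker step: aggregate one cell (here, a simple count)."""
    return len(cell_points)

def combine(partials):
    """Merge per-cell results into a global result."""
    return dict(partials)

points = [(34.05, -118.24), (34.10, -118.30), (40.71, -74.00)]
result = combine((cell, conquer(pts)) for cell, pts in divide(points).items())
print(result)  # the two Los Angeles points share one 1-degree cell
```

Because cells are independent, the conquer step parallelizes trivially across workers, which is the source of the "huge parallelism" the abstract attributes to this paradigm.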
  •
    ABSTRACT: QuakeTables is an ontology-based infrastructure that supports the diverse data types and federated data sets needed to support large-scale modeling of inter-seismic and tectonic processes using boundary element, finite element and analytic applications. This includes fault, paleoseismic and space-borne generated data. Some of the fault data housed in QuakeTables includes CGS 1996, CGS 2002 and the official UCERF 2 deformation models. Currently, QuakeTables supports two forms of radar data, namely, Interferometric Synthetic Aperture Radar (InSAR) and Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) Repeat Pass Interferometry (RPI) products in the form of interferograms. All data types are integrated and presented to the end-user with tools to map and visualize the data, with the added ability to download it in the desired format for local and/or remote processing. In QuakeTables, each dataset is represented in a self-consistent form as it was originally found in a publication or resource, along with its metadata. To support modelers' and scientists' need to view different interpretations of the same data, an ontology processor is used to generate derivations to the desired models and formats while preserving the original dataset and maintaining the metadata for the different models and the links to the original dataset. The QuakeSim team developed a reference model that is used by applications such as Simplex and GeoFest, which allows the preservation of data and provides a reference for comparing results in the same tool. Through its API and web-services interfaces, QuakeTables delivers data to both the end-users and the QuakeSim portal.
    No preview · Article · Dec 2010
  • Source
    ABSTRACT: Pseudorapidity distributions of charged particles emitted in $Au+Au$, $Cu+Cu$, $d+Au$, and $p+p$ collisions over a wide energy range have been measured using the PHOBOS detector at RHIC. The centrality dependence of both the charged particle distributions and the multiplicity at midrapidity was measured. Pseudorapidity distributions of charged particles emitted with $|\eta|<5.4$, which account for between 95% and 99% of the total charged-particle emission associated with collision participants, are presented for different collision centralities. Both the midrapidity density, $dN_{ch}/d\eta$, and the total charged-particle multiplicity, $N_{ch}$, are found to factorize into a product of independent functions of collision energy, $\sqrt{s_{_{NN}}}$, and centrality given in terms of the number of nucleons participating in the collision, $N_{part}$. The total charged-particle multiplicity, observed in these experiments and those at lower energies, exhibits a linear dependence on $(\ln s_{_{NN}})^2$ over the full range of collision energies, $\sqrt{s_{_{NN}}}$ = 2.7-200 GeV.
    Full-text · Article · Nov 2010 · Physical Review C
  • Source
    ABSTRACT: The QuakeSim Project improves understanding of earthquake processes by integrating model applications and various heterogeneous data sources within a web services environment. The project focuses on the earthquake cycle and related crustal deformation. Spaceborne GPS and Interferometric Synthetic Aperture Radar (InSAR) data provide information on near-term crustal deformation, while paleoseismic geologic data provide longer-term information on earthquake fault processes. These data sources are integrated into QuakeSim's QuakeTables database and are accessible by users or various model applications. An increasing amount of UAVSAR data is being added to the QuakeTables database through a map-browsable interface. Model applications can retrieve data from QuakeTables or remotely served GPS velocity data services, or users can manually input parameters into the models. Pattern analysis of GPS and seismicity data has proved useful for mid-term forecasting of earthquakes and for detecting subtle changes in crustal deformation. The GPS time series analysis has also proved useful for detecting changes in processing of the data. Development of the QuakeSim computational infrastructure has benefitted greatly from having the user in the development loop. Improved visualization tools enable more efficient data exploration and understanding. Tools must provide flexibility to science users for exploring data in new ways, but also must facilitate standard, intuitive, and routine uses for end users such as emergency responders.
    Full-text · Article · Jan 2010 · IEEE Aerospace Conference Proceedings
  •
    ABSTRACT: The NASA QuakeSim program unites many components of earthquake fault data and modeling toward short-term forecasts of major earthquake events. The QuakeTables component enables widespread web access to multiple self-consistent earthquake fault models and an increasing set of GPS and InSAR displacement data. These data are ingested by a variety of QuakeSim models and pattern analysis techniques, including elastic half-space inversions, finite element continuum models, Hidden Markov models and Pattern Informatics-based forecasting methods. These tools are migrating to Web 2.0 tools, such as Google Gadgets.
    No preview · Article · Dec 2009
  • Source
    Farid Parvini · Dennis McLeod
    ABSTRACT: Feature subset selection has become the focus of much research in application areas for Multivariate Time Series (MTS). MTS data sets are common in many multimedia and medical applications such as gesture recognition, video sequence matching and EEG/ECG data analysis. MTS data sets are high dimensional, as they consist of a series of observations of many variables at a time. The objective of feature subset selection is two-fold: providing a faster and more cost-effective process, and a better understanding of the underlying process that generated the data. We propose a subset selection approach based on biomechanical characteristics, a simple yet effective technique for MTS. We apply our approach to recognizing ASL static signs using a neural network and a multi-layer neural network, and show that the same accuracy can be maintained while selecting just 50% of the generated data.
    Preview · Conference Paper · Nov 2009

Publication Stats

5k Citations
413.32 Total Impact Points


  • 1994-2014
    • University of Southern California
      • Department of Computer Science
      • Information Sciences Institute
      Los Angeles, California, United States
  • 1978-2010
    • University of Illinois at Chicago
      • Department of Physics
      Chicago, IL, United States
  • 1993-2009
    • University of California, Los Angeles
      • Department of Computer Science
      Los Angeles, California, United States
  • 2001-2005
    • University of Rochester
      • Department of Physics and Astronomy
      Rochester, NY, United States
  • 2003
    • University of Maryland, College Park
      CGS, Maryland, United States
  • 2001-2003
    • Argonne National Laboratory
      • Division of Physics
      Lemont, Illinois, United States
  • 2000
    • Yonsei University
      Seoul, South Korea
  • 1992
    • University of Freiburg
      Freiburg, Baden-Württemberg, Germany
  • 1986-1987
    • Indiana University Bloomington
      Bloomington, Indiana, United States