Conference Paper

FINESSD: Near-Storage Feature Selection with Mutual Information for Resource-Limited FPGAs

Abstract

Feature selection is the data analysis process that selects a smaller, curated subset of the original dataset by filtering out data (features) that are irrelevant or redundant. The most important features can be ranked and selected based on statistical measures, such as mutual information. Feature selection not only reduces the size of the dataset and the execution time for training Machine Learning (ML) models, but it can also improve the accuracy of inference. This paper analyses mutual-information-based feature selection for resource-constrained FPGAs and proposes FINESSD, a novel approach that can be deployed for near-storage acceleration. It highlights that the Mutual Information Maximization (MIM) algorithm does not require multiple passes over the data and, when approximated appropriately, offers a good trade-off between accuracy and FPGA resources. The new FPGA accelerator for MIM generated by FINESSD can fully utilize the NVMe bandwidth of a modern SSD and perform feature selection without transferring the full dataset to the main processor. The evaluation on a Samsung SmartSSD over small, large and out-of-core datasets shows that, compared to mainstream multiprocessing Python ML libraries and an optimized C library, FINESSD yields up to 35x and 19x speedup respectively, while being more than 70x more energy efficient for large, out-of-core datasets.
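To make the selection criterion concrete: MIM simply scores every feature by its estimated mutual information with the class label and keeps the top k, so each score depends on one feature and the label only and a single pass over the data suffices. The following Python sketch is our own host-side illustration of that criterion for discrete features (function and variable names are ours), not the FINESSD accelerator itself.

    import numpy as np

    def mutual_information(x, y):
        """Plug-in MI estimate (in bits) between two discrete 1-D integer arrays."""
        joint = np.zeros((x.max() + 1, y.max() + 1), dtype=np.int64)
        np.add.at(joint, (x, y), 1)                # frequency counters, filled in one pass
        pxy = joint / joint.sum()
        px = pxy.sum(axis=1, keepdims=True)
        py = pxy.sum(axis=0, keepdims=True)
        nz = pxy > 0                               # avoid log(0)
        return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

    def mim_select(X, y, k):
        """Rank features by I(X_i; y) and return the indices of the top k."""
        scores = np.array([mutual_information(X[:, i], y) for i in range(X.shape[1])])
        return np.argsort(scores)[::-1][:k]

    # Example: 1000 samples, 50 discrete features, binary label.
    rng = np.random.default_rng(0)
    X = rng.integers(0, 4, size=(1000, 50))
    y = (X[:, 3] + rng.integers(0, 2, size=1000)) % 2   # feature 3 carries signal
    print(mim_select(X, y, 5))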

References
Article
The Compute Express Link (CXL) is an open industry-standard interconnect between processors and devices such as accelerators, memory buffers, smart network interfaces, persistent memory, and solid-state drives. CXL offers coherency and memory semantics with bandwidth that scales with PCIe bandwidth while achieving significantly lower latency than PCIe. All major CPU vendors, device vendors, and datacenter operators have adopted CXL as a common standard. This enables an interoperable ecosystem that supports key computing use cases including highly efficient accelerators, server memory bandwidth and capacity expansion, multi-server resource pooling and sharing, and efficient peer-to-peer communication. This survey provides an introduction to CXL covering the standards CXL 1.0, CXL 2.0, and CXL 3.0. We further survey CXL implementations, discuss CXL's impact on the datacenter landscape, and outline future directions.
Article
The growth of Big Data has resulted in an overwhelming increase in the volume of data available, including the number of features. Feature selection, the process of selecting relevant features and discarding irrelevant ones, has been successfully used to reduce the dimensionality of datasets. However, with numerous feature selection approaches in the literature, determining the best strategy for a specific problem is not straightforward. In this study, we compare the performance of various feature selection approaches to a random selection to identify the most effective strategy for a given type of problem. We use a large number of datasets to cover a broad range of real-world challenges. We evaluate the performance of seven popular feature selection approaches and five classifiers. Our findings show that feature selection is a valuable tool in machine learning and that correlation-based feature selection is the most effective strategy regardless of the scenario. Additionally, we found that using improper thresholds with ranker approaches produces results as poor as randomly selecting a subset of features.
Article
An Orthogonal Least Squares (OLS) based feature selection method is proposed for both binomial and multinomial classification. The novel Squared Orthogonal Correlation Coefficient (SOCC) is defined based on Error Reduction Ratio (ERR) in OLS and used as the feature ranking criterion. The equivalence between the canonical correlation coefficient, Fisher’s criterion, and the sum of the SOCCs is revealed, which unveils the statistical implication of ERR in OLS for the first time. It is also shown that the OLS based feature selection method has speed advantages when applied for greedy search. The proposed method is comprehensively compared with the mutual information based feature selection methods and the embedded methods using both synthetic and real world datasets. The results show that the proposed method is always in the top 5 among the 12 candidate methods. Besides, the proposed method can be directly applied to continuous features without discretisation, which is another significant advantage over mutual information based methods.
Article
In Machine Learning (ML), Feature Selection (FS) plays a crucial part in reducing the dimensionality of data and enhancing the performance of any proposed framework. In real-world applications, however, FS must contend with high dimensionality, computational and storage complexity, noisy or ambiguous data, and demanding performance requirements. The area of FS is vast and challenging, and a large body of work has been reported on FS across many application areas. This paper discusses the framework of FS and its multiple models with detailed descriptions. It also classifies FS algorithms with respect to the data they operate on, i.e., structured or labeled data versus unstructured data, for different applications of ML, and discusses what essential features are, the commonly used FS methods, the widely used datasets, and representative work in the various ML fields for the FS task. The paper also reviews comparative experimental results of FS work across different studies. Overall, it provides a descriptive survey of FS and its associated real-world problem domains, with the main objective of conveying the core ideas of FS and how it can be applied across problem domains.
Article
Array programming provides a powerful, compact and expressive syntax for accessing, manipulating and operating on data in vectors, matrices and higher-dimensional arrays. NumPy is the primary array programming library for the Python language. It has an essential role in research analysis pipelines in fields as diverse as physics, chemistry, astronomy, geoscience, biology, psychology, materials science, engineering, finance and economics. For example, in astronomy, NumPy was an important part of the software stack used in the discovery of gravitational waves and in the first imaging of a black hole. Here we review how a few fundamental array concepts lead to a simple and powerful programming paradigm for organizing, exploring and analysing scientific data. NumPy is the foundation upon which the scientific Python ecosystem is constructed. It is so pervasive that several projects, targeting audiences with specialized needs, have developed their own NumPy-like interfaces and array objects. Owing to its central position in the ecosystem, NumPy increasingly acts as an interoperability layer between such array computation libraries and, together with its application programming interface (API), provides a flexible framework to support the next decade of scientific and industrial analysis.
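As a brief illustration of the array-programming style the article describes (our own toy example, not taken from the paper), per-feature statistics over a dataset can be written without explicit loops, relying on vectorisation and broadcasting:

    import numpy as np

    data = np.random.default_rng(1).normal(size=(10_000, 8))   # samples x features
    # Vectorised per-feature statistics: no explicit Python loop required.
    means = data.mean(axis=0)
    stds = data.std(axis=0)
    standardised = (data - means) / stds          # broadcasting over rows
    print(standardised.mean(axis=0).round(3), standardised.std(axis=0).round(3))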
Article
The massive growth of data in recent years has led to challenges in data mining and machine learning tasks. One of the major challenges is the selection of relevant features from the original set of available features that maximally improves the learning performance over that of the original feature set. This issue has attracted researchers' attention, resulting in a variety of successful feature selection approaches in the literature. Although there exist several surveys on unsupervised learning (e.g., clustering), many works concerning unsupervised feature selection (e.g., evolutionary computation based feature selection for clustering) are missing from these surveys, making it difficult to identify the strengths and weaknesses of those approaches. In this paper, we introduce a comprehensive survey on feature selection approaches for clustering, reflecting the advantages and disadvantages of current approaches from different perspectives and identifying promising trends for future research.
Article
Image analysis is a prolific field of research which has been broadly studied in the last decades and successfully applied to a great number of disciplines. Since the advent of Big Data, the number of digital images has grown explosively, and a large amount of multimedia data is publicly available. Not only is it necessary to deal with this increasing number of images, but also to know which features to extract from them, and feature selection can help in this scenario. The goal of this paper is to survey the most recent feature selection methods developed and/or applied to image analysis, covering the most popular fields such as image classification, image segmentation, etc. Finally, an experimental evaluation on several popular datasets using well-known feature selection methods is presented, bearing in mind that the aim is not to provide the best feature selection method, but to facilitate comparative studies for the research community.
Conference Paper
With the growth of high-dimensional data, feature selection is a vital component of machine learning as well as an important stand-alone data analytics tool. Without it, the computation cost of big data analytics can become unmanageable and spurious correlations and noise can reduce the accuracy of any results. Feature selection removes irrelevant and redundant information, leading to faster, more reliable data analysis. Feature selection techniques based on information theory are among the fastest known, and the Manchester AnalyticS Toolkit (MAST) provides an efficient, parallel and scalable implementation of these methods. This paper considers a number of data structures for storing the frequency counters that underpin MAST. We show that preprocessing the data to reduce the number of zero-valued counters in an array structure results in an order of magnitude reduction in both memory usage and execution time compared to state-of-the-art structures that use explicit mappings to avoid zero-valued counters. We also describe a number of parallel processing techniques that enable MAST to scale linearly with the number of processors even on NUMA architectures. MAST targets scale-up servers rather than scale-out clusters and we show that it performs orders of magnitude faster than existing tools. Moreover, we show that MAST is 3.5 times faster than a scale-out solution built for Spark running on the same server. As an example of the performance of MAST, we were able to process a dataset of 100 million examples and 100,000 features in under 10 minutes on a four-socket server, with each socket containing an 8-core Intel Xeon E5-4620 processor.
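The counter layout described above can be sketched as follows (our illustrative Python, not MAST's internals): if each feature's raw values are first remapped to contiguous integer codes, the joint frequency table becomes a small dense array that can be filled in a single pass, without the explicit key-to-counter mappings that zero-skipping structures require.

    import numpy as np

    def dense_joint_counts(x, y):
        """Joint frequency counters stored in one dense array after remapping."""
        # Remap raw values to contiguous codes 0..n-1 (the preprocessing step).
        _, xc = np.unique(x, return_inverse=True)
        _, yc = np.unique(y, return_inverse=True)
        nx, ny = xc.max() + 1, yc.max() + 1
        # Fused index into a flat counter array, filled in one pass.
        counts = np.bincount(xc * ny + yc, minlength=nx * ny)
        return counts.reshape(nx, ny)

    x = np.array([10, 10, 37, 42, 42, 42])
    y = np.array([0, 1, 1, 0, 0, 1])
    print(dense_joint_counts(x, y))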
Article
Mutual information (MI) is a powerful method for detecting relationships between data sets. There are accurate methods for estimating MI that avoid problems with "binning" when both data sets are discrete or when both data sets are continuous. We present an accurate, non-binning MI estimator for the case of one discrete data set and one continuous data set. This case applies when measuring, for example, the relationship between base sequence and gene expression level, or the effect of a cancer drug on patient survival time. We also show how our method can be adapted to calculate the Jensen-Shannon divergence of two or more data sets.
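A compact and deliberately unoptimised Python sketch of this kind of discrete-continuous estimator is shown below. It follows the nearest-neighbour construction the abstract refers to, but the variable names and the tie-handling convention are ours, so treat it as an approximation of the published method rather than a reference implementation.

    import numpy as np
    from scipy.special import digamma

    def mi_discrete_continuous(d, c, k=3):
        """Nearest-neighbour MI estimate (nats) between discrete d and continuous c."""
        n = len(d)
        terms = []
        for i in range(n):
            same = np.flatnonzero(d == d[i])          # samples sharing the discrete value
            if len(same) <= k:
                continue                               # too few same-label neighbours
            dist = np.abs(c[same] - c[i])
            r = np.sort(dist)[k]                       # distance to k-th same-label neighbour
            m = np.count_nonzero(np.abs(c - c[i]) <= r) - 1   # neighbours over the full set
            terms.append(digamma(n) - digamma(len(same)) + digamma(k) - digamma(m))
        return max(0.0, float(np.mean(terms)))         # clip small negative estimates

    rng = np.random.default_rng(2)
    d = rng.integers(0, 2, 2000)
    c = d + rng.normal(scale=0.5, size=2000)
    print(mi_discrete_continuous(d, c))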
Conference Paper
Probability density functions (PDFs) have a wide range of uses across an array of application domains. Since computing the PDF of real-time data is typically expensive, various estimations have been devised that attempt to approximate the real PDFs based on fitting data to an expected underlying distribution. As we move to more adaptive systems, real-time monitoring of signal statistics increases in importance. In this paper, we present a technique that leverages the heterogeneous resources on modern FPGAs to enable real time computation of PDFs of sampled data at speeds of over 200 Msamples per second. We detail a flexible architecture that can be used to extract statistical information in real time while consuming a moderate amount of area, allowing it to be incorporated into existing FPGA-based applications.
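In software terms, the core of such a design is a streaming histogram that is normalised into an empirical PDF on demand. The Python sketch below is our illustration of that general behaviour, not the paper's FPGA architecture.

    import numpy as np

    class StreamingPDF:
        """Fixed-bin histogram updated sample by sample, normalised on demand."""
        def __init__(self, lo, hi, bins=64):
            self.lo, self.hi, self.bins = lo, hi, bins
            self.counts = np.zeros(bins, dtype=np.int64)

        def update(self, sample):
            idx = int((sample - self.lo) / (self.hi - self.lo) * self.bins)
            self.counts[min(max(idx, 0), self.bins - 1)] += 1   # clamp out-of-range samples

        def pdf(self):
            width = (self.hi - self.lo) / self.bins
            return self.counts / (self.counts.sum() * width)    # integrates to ~1

    est = StreamingPDF(-4.0, 4.0)
    for s in np.random.default_rng(3).normal(size=100_000):
        est.update(s)
    print(est.pdf()[:5])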
Article
We present some new results on the nonparametric estimation of entropy and mutual information. First, we use an exact local expansion of the entropy function to prove almost sure consistency and central limit theorems for three of the most commonly used discretized information estimators. The setup is related to Grenander's method of sieves and places no assumptions on the underlying probability measure generating the data. Second, we prove a converse to these consistency theorems, demonstrating that a misapplication of the most common estimation techniques leads to an arbitrarily poor estimate of the true information, even given unlimited data. This “inconsistency” theorem leads to an analytical approximation of the bias, valid in surprisingly small sample regimes and more accurate than the usual formula of Miller and Madow over a large region of parameter space. The two most practical implications of these results are negative: (1) information estimates in a certain data regime are likely contaminated by bias, even if “bias-corrected” estimators are used, and (2) confidence intervals calculated by standard techniques drastically underestimate the error of the most common estimation methods. Finally, we note a very useful connection between the bias of entropy estimators and a certain polynomial approximation problem. By casting bias calculation problems in this approximation theory framework, we obtain the best possible generalization of known asymptotic bias results. More interesting, this framework leads to an estimator with some nice properties: the estimator comes equipped with rigorous bounds on the maximum error over all possible underlying probability distributions, and this maximum error turns out to be surprisingly small. We demonstrate the application of this new estimator on both real and simulated data.
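The Miller-Madow correction mentioned above is easy to state concretely: the plug-in entropy estimate is biased downward by roughly (K - 1)/(2N), where K is the number of bins with non-zero counts and N the sample size. A small Python illustration of that correction (our own example, in nats):

    import numpy as np

    def entropy_plugin(counts):
        """Plug-in (maximum-likelihood) entropy estimate in nats."""
        p = counts[counts > 0] / counts.sum()
        return float(-np.sum(p * np.log(p)))

    def entropy_miller_madow(counts):
        """Plug-in estimate plus the first-order (K-1)/(2N) bias correction."""
        k_observed = np.count_nonzero(counts)
        n = counts.sum()
        return entropy_plugin(counts) + (k_observed - 1) / (2 * n)

    rng = np.random.default_rng(4)
    samples = rng.integers(0, 50, size=200)            # small sample, 50 possible symbols
    counts = np.bincount(samples, minlength=50)
    print(entropy_plugin(counts), entropy_miller_madow(counts), np.log(50))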
Article
Scikit-learn is a Python module integrating a wide range of state-of-the-art machine learning algorithms for medium-scale supervised and unsupervised problems. This package focuses on bringing machine learning to non-specialists using a general-purpose high-level language. Emphasis is put on ease of use, performance, documentation, and API consistency. It has minimal dependencies and is distributed under the simplified BSD license, encouraging its use in both academic and commercial settings. Source code, binaries, and documentation can be downloaded from http://scikit-learn.sourceforge.net.
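Since scikit-learn is one of the software baselines this kind of accelerator work is typically compared against, a typical mutual-information-based feature selection call looks like the following. The SelectKBest and mutual_info_classif names are standard scikit-learn API; the dataset here is synthetic and ours.

    import numpy as np
    from sklearn.feature_selection import SelectKBest, mutual_info_classif

    rng = np.random.default_rng(5)
    X = rng.normal(size=(1000, 30))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)      # only two informative features

    # Keep the 5 features with the highest estimated mutual information with y.
    selector = SelectKBest(score_func=mutual_info_classif, k=5)
    X_reduced = selector.fit_transform(X, y)
    print(X_reduced.shape, selector.get_support(indices=True))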
Article
We present two classes of improved estimators for mutual information M(X,Y), from samples of random points distributed according to some joint probability density μ(x,y). In contrast to conventional estimators based on binning, they are based on entropy estimates from k-nearest neighbor distances. This means that they are data efficient (with k=1 we resolve structures down to the smallest possible scales), adaptive (the resolution is higher where data are more numerous), and have minimal bias. Indeed, the bias of the underlying entropy estimates is mainly due to nonuniformity of the density at the smallest resolved scale, giving typically systematic errors which scale as functions of k/N for N points. Numerically, we find that both families become exact for independent distributions, i.e. the estimator M(X,Y) vanishes (up to statistical fluctuations) if μ(x,y) = μ(x)μ(y). This holds for all tested marginal distributions and for all dimensions of x and y. In addition, we give estimators for redundancies between more than two random variables. We compare our algorithms in detail with existing algorithms. Finally, we demonstrate the usefulness of our estimators for assessing the actual independence of components obtained from independent component analysis (ICA), for improving ICA, and for estimating the reliability of blind source separation.
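A bare-bones version of the first of these k-nearest-neighbour estimators can be sketched in a few lines of Python. This is our simplified one-dimensional illustration using the usual Chebyshev-metric convention; a production implementation would use KD-trees and handle ties more carefully.

    import numpy as np
    from scipy.special import digamma

    def ksg_mi(x, y, k=3):
        """Kraskov-style k-NN MI estimate (nats) for 1-D continuous x and y."""
        n = len(x)
        terms = []
        for i in range(n):
            # Chebyshev distance in the joint space; index 0 of the sort is the point itself.
            d = np.maximum(np.abs(x - x[i]), np.abs(y - y[i]))
            eps = np.sort(d)[k]                         # distance to the k-th neighbour
            nx = np.count_nonzero(np.abs(x - x[i]) < eps) - 1
            ny = np.count_nonzero(np.abs(y - y[i]) < eps) - 1
            terms.append(digamma(k) + digamma(n) - digamma(nx + 1) - digamma(ny + 1))
        return max(0.0, float(np.mean(terms)))

    rng = np.random.default_rng(6)
    x = rng.normal(size=2000)
    y = x + rng.normal(scale=0.5, size=2000)
    print(ksg_mi(x, y))          # clearly positive for dependent data, near zero if independent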
Article
As the size of data generated every day grows dramatically, the computational bottleneck of computer systems has shifted toward storage devices. The interface between the storage and the computational platforms has become the main limitation due to its limited bandwidth, which does not scale when the number of storage devices increases. Interconnect networks do not provide simultaneous access to all storage devices and thus limit the performance of the system when executing independent operations on different storage devices. Offloading the computations to the storage devices eliminates the burden of data transfer from the interconnects. Near-storage computing offloads a portion of computations to the storage devices to accelerate big data applications. In this article, we propose a generic near-storage sort accelerator for data analytics, NASCENT2, which utilizes Samsung SmartSSD, an NVMe flash drive with an on-board FPGA chip that processes data in situ. NASCENT2 consists of dictionary decoder, sort, and shuffle FPGA-based accelerators to support sorting database tables based on a key column with any arbitrary data type. It exploits data partitioning applied by data processing management systems, such as SparkSQL, to break down the sort operations on colossal tables into multiple sort operations on smaller tables. NASCENT2 generic sort provides 2× speedup and 15.2× energy efficiency improvement as compared to the CPU baseline. It moreover considers the specifications of the SmartSSD (e.g., the FPGA resources, interconnect network, and solid-state drive bandwidth) to increase the scalability of computer systems as the number of storage devices increases. With 12 SmartSSDs, NASCENT2 is 9.9× (137.2×) faster and 7.3× (119.2×) more energy efficient in sorting the largest tables of the TPC-C and TPC-H benchmarks than the FPGA (CPU) baseline.
Article
K-nearest neighbor search is one of the fundamental tasks in various applications, and the hierarchical navigable small world (HNSW) graph has recently drawn attention in large-scale cloud services, as it easily scales up the database while offering fast search. On the other hand, a computational storage device (CSD) that combines programmable logic and storage modules on a single board has become popular to address the data bandwidth bottleneck of modern computing systems. In this paper, we propose a computational storage platform that can accelerate a large-scale graph-based nearest neighbor search algorithm based on the SmartSSD CSD. To this end, we modify the algorithm to be more amenable to hardware and implement two types of accelerators using HLS- and RTL-based methodologies with various optimization methods. In addition, we scale up the proposed platform to four SmartSSDs and apply graph parallelism to boost system performance further. As a result, the proposed computational storage platform achieves 75.59 queries per second throughput for the SIFT1B dataset at 258.66 W power dissipation, which is 12.83x and 17.91x faster and 10.43x and 24.33x more energy efficient than conventional CPU-based and GPU-based server platforms, respectively. With multi-terabyte storage and custom acceleration capability, we believe that the proposed computational storage platform is a promising solution for cost-sensitive cloud datacenters.
Article
The Bluespec hardware-description language presents a significantly higher-level view than hardware engineers are used to, exposing a simpler concurrency model that promotes formal proof, without compromising on performance of compiled circuits. Unfortunately, the cost model of Bluespec has been unclear, with performance details depending on a mix of user hints and opaque static analysis of potential concurrency conflicts within a design. In this paper we present Koika, a derivative of Bluespec that preserves its desirable properties and yet gives direct control over the scheduling decisions that determine performance. Koika has a novel and deterministic operational semantics that uses dynamic analysis to avoid concurrency anomalies. Our implementation includes Coq definitions of syntax, semantics, key metatheorems, and a verified compiler to circuits. We argue that most of the extra circuitry required for dynamic analysis can be eliminated by compile-time BSV-style static analysis.
Article
Faced with the increasing disparity between SSD throughput and CPU-based compute capabilities, there have been growing interests to move compute closer to storage and accelerate the data analytic workloads. In this paper, we propose SmartSSD, an SSD with onboard FPGA, which enables offloading computation within SSD. We perform a detailed model-based evaluation to evaluate the end-to-end performance and energy benefit of SmartSSD for the representative data analytic workloads with Spark SQL and Parquet columnar data format. Our evaluation shows that SmartSSD has the potential to have a transformative impact when building a high performance data analytic system, which enables 3.04x performance improvement and consuming only 45.8% of energy compared to the conventional CPU-based approach.
Article
Since wearable computing systems have grown in importance in recent years, there is an increased interest in implementing machine learning algorithms with reduced-precision parameters and computations. Not only learning but also feature selection, which is most of the time a mandatory preprocessing step in machine learning, is often constrained by the available computational resources. This work considers mutual information, one of the most common measures of dependence used in feature selection algorithms, with a limited number of bits. In order to test the procedure designed, we have implemented it in several well-known feature selection algorithms. Experimental results over several synthetic and real datasets demonstrate that low-bit representations are sufficient to achieve performances close to those of double-precision parameters and thus open the door to the use of feature selection in embedded platforms that minimize energy consumption and carbon emissions.
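The effect studied above is easy to probe in software: quantising the estimated probabilities to a fixed number of fractional bits lets one observe how the MI estimate degrades as precision is reduced. The rough Python experiment below is our own illustration of that idea, not the authors' fixed-point design.

    import numpy as np

    def mi_from_joint(pxy):
        """MI (bits) from a joint probability table."""
        px = pxy.sum(axis=1, keepdims=True)
        py = pxy.sum(axis=0, keepdims=True)
        nz = pxy > 0
        return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

    def quantise(p, frac_bits):
        """Round probabilities to a fixed-point grid with the given fractional bits."""
        scale = 1 << frac_bits
        return np.round(p * scale) / scale

    rng = np.random.default_rng(7)
    counts = rng.integers(1, 100, size=(4, 4)).astype(float)
    pxy = counts / counts.sum()
    for bits in (16, 12, 8, 6):
        print(bits, mi_from_joint(quantise(pxy, bits)))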
Article
Feature selection is a crucial step nowadays in machine learning and data analytics to remove irrelevant and redundant characteristics and thus to provide fast and reliable analyses. Many research works have focused on developing new methods that increase the global relevance of the subset of selected features while reducing the redundancy of information. However, those methods that select features with high relevance and low redundancy are extremely time-consuming when processing large datasets. In this work we present CUDA-JMI, a tool based on Joint Mutual Information that accelerates feature selection by exploiting the computational capabilities of modern heterogeneous systems that contain several CPU cores and GPU devices. The experimental evaluation has been carried out in three systems with different type and amount of CPUs and GPUs using five publicly available datasets from different fields. These results show that CUDA-JMI is significantly faster than its original sequential counterpart for all systems and input datasets. For instance, the runtime of CUDA-JMI is up to 52 times faster than an existing sequential JMI-based implementation in a machine with 24 CPU cores and two NVIDIA M60 boards (four GPUs). CUDA-JMI is publicly available to download from https://sourceforge.net/projects/cuda-jmi.
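The Joint Mutual Information criterion the tool is built around scores a candidate feature against an already-selected set S as the sum of I((X_candidate, X_j); Y) over the selected features X_j. The tiny discrete Python illustration below (ours, not part of CUDA-JMI) shows that score, treating each feature pair as a single joint variable through integer encoding.

    import numpy as np

    def mi_discrete(a, b):
        """Plug-in MI (bits) between two discrete integer arrays."""
        joint = np.zeros((a.max() + 1, b.max() + 1))
        np.add.at(joint, (a, b), 1)
        pxy = joint / joint.sum()
        px, py = pxy.sum(1, keepdims=True), pxy.sum(0, keepdims=True)
        nz = pxy > 0
        return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

    def jmi_score(X, y, candidate, selected):
        """JMI score: sum over selected j of I((X_candidate, X_j); Y)."""
        total = 0.0
        for j in selected:
            pair = X[:, candidate] * (X[:, j].max() + 1) + X[:, j]   # encode the pair as one variable
            total += mi_discrete(pair, y)
        return total

    rng = np.random.default_rng(8)
    X = rng.integers(0, 3, size=(500, 10))
    y = (X[:, 0] + X[:, 1]) % 3
    print(jmi_score(X, y, candidate=1, selected=[0, 2]))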
Conference Paper
Mutual Information (MI) and Transfer Entropy (TE) algorithms compute statistical measurements on the information shared between two dependent random processes. These measurements have focused on pairwise computations of time series in a broad range of fields, such as Econometrics, Neuroscience, Data Mining and Computer Vision. Unlike previous works which mostly focus on 8-bit Computer Vision applications, this work proposes the first generic hardware architectures for the acceleration of the MI and TE algorithms to target any dataset for a realistic, multi-FPGA platform. We evaluate and compare two such systems, the Maxeler MAX3A Vectis and the Convey HC-2ex platforms, and provide insight into each one's benefits and limitations. All reported results are from actual experimental runs, including I/O overhead, and comprise lower bounds of our systems' full capabilities for large-scale datasets. These are compared to equivalent optimized multi-threaded software implementations, yielding ~19x speedup vs. out-of-the-box software packages and ~2.5x speedup vs. highly optimized software that is presented in the related work. These hardware architectures are obtained with a small fraction of the FPGA resources, and are limited by I/O bandwidth. This means that with near-future FPGA I/O capabilities, the performance of the architectures presented in this work for the O(n²) Mutual Information and the O(n³) Transfer Entropy problems will easily scale up.
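For reference, the two measures accelerated in this work have standard discrete-form definitions, written here in LaTeX (our summary of textbook definitions, not formulas taken from the paper):

    \[
      I(X;Y) = \sum_{x,y} p(x,y)\,\log\frac{p(x,y)}{p(x)\,p(y)},
      \qquad
      T_{Y \to X} = \sum_{x_{t+1},\, x_t,\, y_t} p(x_{t+1}, x_t, y_t)\,
        \log\frac{p(x_{t+1} \mid x_t, y_t)}{p(x_{t+1} \mid x_t)}.
    \]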
Article
With the advent of large-scale problems, feature selection has become a fundamental preprocessing step to reduce input dimensionality. The minimum-redundancy-maximum-relevance (mRMR) selector is considered one of the most relevant methods for dimensionality reduction due to its high accuracy. However, it is a computationally expensive technique, sharply affected by the number of features. This paper presents fast-mRMR, an extension of mRMR, which tries to overcome this computational burden. Associated with fast-mRMR, we include a package with three implementations of this algorithm in several platforms, namely, CPU for sequential execution, GPU (graphics processing units) for parallel computing, and Apache Spark for distributed computing using big data technologies.
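The mRMR criterion itself is compact: at each step it adds the feature maximising relevance to the label minus average redundancy with the already-selected set. The small discrete Python sketch below is our illustration of that greedy rule, not the fast-mRMR package.

    import numpy as np

    def mi_discrete(a, b):
        """Plug-in MI (bits) between two discrete integer arrays."""
        joint = np.zeros((a.max() + 1, b.max() + 1))
        np.add.at(joint, (a, b), 1)
        pxy = joint / joint.sum()
        px, py = pxy.sum(1, keepdims=True), pxy.sum(0, keepdims=True)
        nz = pxy > 0
        return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

    def mrmr(X, y, k):
        """Greedy mRMR: maximise I(Xi; y) minus mean I(Xi; Xj) over selected j."""
        relevance = [mi_discrete(X[:, i], y) for i in range(X.shape[1])]
        selected = [int(np.argmax(relevance))]
        while len(selected) < k:
            best, best_score = None, -np.inf
            for i in range(X.shape[1]):
                if i in selected:
                    continue
                redundancy = np.mean([mi_discrete(X[:, i], X[:, j]) for j in selected])
                score = relevance[i] - redundancy
                if score > best_score:
                    best, best_score = i, score
            selected.append(best)
        return selected

    rng = np.random.default_rng(9)
    X = rng.integers(0, 4, size=(500, 12))
    y = (X[:, 2] + X[:, 5]) % 2
    print(mrmr(X, y, 4))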
Article
Particle colliders enable us to probe the fundamental nature of matter by observing exotic particles produced by high-energy collisions. Because the experimental measurements from these collisions are necessarily incomplete and imprecise, machine learning algorithms play a major role in the analysis of experimental data. The high-energy physics community typically relies on standardized machine learning software packages for this analysis, and devotes substantial effort towards improving statistical power by hand-crafting high-level features derived from the raw collider measurements. In this paper, we train artificial neural networks to detect the decay of the Higgs boson to tau leptons on a dataset of 82 million simulated collision events. We demonstrate that deep neural network architectures are particularly well-suited for this task with the ability to automatically discover high-level features from the data and increase discovery significance.
Article
The vast majority of the literature evaluates the performance of classification models using only the criterion of predictive accuracy. This paper reviews the case for considering also the comprehensibility (interpretability) of classification models, and discusses the interpretability of five types of classification models, namely decision trees, classification rules, decision tables, nearest neighbors and Bayesian network classifiers. We discuss both interpretability issues which are specific to each of those model types and more generic interpretability issues, namely the drawbacks of using model size as the only criterion to evaluate the comprehensibility of a model, and the use of monotonicity constraints to improve the comprehensibility and acceptance of classification models by users.
Article
We present a unifying framework for information theoretic feature selection, bringing almost two decades of research on heuristic filter criteria under a single theoretical interpretation. This is in response to the question: "what are the implicit statistical assumptions of feature selection criteria based on mutual information?". To answer this, we adopt a different strategy than is usual in the feature selection literature--instead of trying to define a criterion, we derive one, directly from a clearly specified objective function: the conditional likelihood of the training labels. While many hand-designed heuristic criteria try to optimize a definition of feature 'relevancy' and 'redundancy', our approach leads to a probabilistic framework which naturally incorporates these concepts. As a result we can unify the numerous criteria published over the last two decades, and show them to be low-order approximations to the exact (but intractable) optimisation problem. The primary contribution is to show that common heuristics for information based feature selection (including Markov Blanket algorithms as a special case) are approximate iterative maximisers of the conditional likelihood. A large empirical study provides strong evidence to favour certain classes of criteria, in particular those that balance the relative size of the relevancy/redundancy terms. Overall we conclude that the JMI criterion (Yang and Moody, 1999; Meyer et al., 2008) provides the best tradeoff in terms of accuracy, stability, and flexibility with small data samples.
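The space of criteria unified by this framework can be summarised by a single scoring template (our paraphrase of the commonly cited form, with β weighting redundancy and γ weighting class-conditional redundancy over the selected set S):

    \[
      J(X_k) = I(X_k; Y)
        \;-\; \beta \sum_{X_j \in S} I(X_k; X_j)
        \;+\; \gamma \sum_{X_j \in S} I(X_k; X_j \mid Y)
    \]

Setting β = γ = 0 recovers MIM, β = 1/|S| with γ = 0 gives mRMR, and β = γ = 1/|S| corresponds to JMI, the criterion the study ultimately favours.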
Article
In this issue, “Best of the Web” presents the modified National Institute of Standards and Technology (MNIST) resources, consisting of a collection of handwritten digit images used extensively in optical character recognition and machine learning research.
Article
This paper explores the cultural significance of the AIDS Memorial Quilt, which is concerned with the public commemoration of those who have died from AIDS-related causes. Using ethnographic and iconographic research methods, the analysis of the Quilt provokes questions concerning the cultural importance of public grief and mourning. Moreover, questions of fundraising, public solidarity and the visibility of the Memorial Quilt in places such as Vancouver raise important issues such as society's attitudes towards those who die from AIDS and their surviving family and friends. In conclusion, the author suggests that this Memorial Quilt acts as a public space of citizenship because it allowed people to come together and negotiate their personal and collective memories of those who died from AIDS. This in turn may well have important implications for understanding the cultural and geographical implications of personal and political activity.
Article
In enriched gas drives, for correct interpretation of slim tube displacement (STD) tests and for determination of minimum miscibility pressure (MMP) or minimum enrichment (ME) required to achieve dynamic miscibility, it is extremely important to identify the correct mechanism of miscibility development. Traditional pseudo-ternary diagram construction and limiting tie-line methods, or the criteria of very high (90-95% plus) ultimate oil recoveries, or the criteria of a breakover point in the ultimate oil recovery versus pressure diagram in STD tests, which are used to determine MMP or ME, are not always valid and can lead to solvents designed either too rich or too lean. In this study, STD tests supported by equation-of-state (EOS) predictions were used to evaluate the ability of various solvents, such as CO2, n-butane and various mixtures of Prudhoe Bay natural gas (PBG) and natural gas liquids (NGL), to miscibly displace heavy, asphaltic West Sak crude. Results indicate that for enriched gas drives, the development of dynamic miscibility occurs via simultaneous vaporizing and condensing mechanisms. The solvent minimum enrichments for this dual mechanism were obtained from the solvent-oil pressure-composition isotherms, the compositional path in multicontact test (MCT) calculations and the methane spike disappearance phenomena in STD tests, and were compared to those determined by the condensing-type pseudo-ternary diagram construction method. STD test results indicate that the ultimate oil recoveries, even for a first contact miscible (FCM) solvent, were considerably lower due to asphaltene precipitation. Asphaltene tests were conducted for various solvent-West Sak crude mixtures to determine the amount of precipitation and its effect on oil composition. STD results and EOS predictions indicate that CO2 was unable to develop dynamic miscibility with West Sak crude at reservoir pressure and temperature conditions and that the process mechanism for CO2 drive is a simultaneous vaporizing-condensing drive.
Article
LETOR is a package of benchmark data sets for research on LEarning TO Rank, which contains standard features, relevance judgments, data partitioning, evaluation tools, and several baselines. Version 1.0 was released in April 2007. Version 2.0 was released in Dec. 2007. Version 3.0 was released in Dec. 2008. This version, 4.0, was released in July 2009. Very different from previous versions (V3.0 is an update based on V2.0 and V2.0 is an update based on V1.0), LETOR4.0 is a totally new release. It uses the Gov2 web page collection (~25M pages) and two query sets from Million Query track of TREC 2007 and TREC 2008. We call the two query sets MQ2007 and MQ2008 for short. There are about 1700 queries in MQ2007 with labeled documents and about 800 queries in MQ2008 with labeled documents. If you have any questions or suggestions about the datasets, please kindly email us (letor@microsoft.com). Our goal is to make the dataset reliable and useful for the community.
Article
C. E. Shannon, "A Mathematical Theory of Communication," Bell System Technical Journal, vol. 27, pp. 379-423 (July) and 623-656 (October), 1948.
Conference Paper
Bluespec System Verilog is an EDL toolset for ASIC and FPGA design offering significantly higher productivity via a radically different approach to high-level synthesis. Many other attempts at high-level synthesis have tried to move the design language towards a more software-like specification of the behavior of the intended hardware. By means of code samples, demonstrations and measured results, we illustrate how Bluespec System Verilog, in an environment familiar to hardware designers, can significantly improve productivity without compromising generated hardware quality.
Computational storage: Where are we today?
  • Barbalace
A. Barbalace and J. Do, "Computational storage: Where are we today?" in Conference on Innovative Data Systems Research (CIDR), Jan. 2021.
Parallel Programming for FPGAs
  • R Kastner
  • J Matai
  • S Neuendorffer
R. Kastner, J. Matai, and S. Neuendorffer, "Parallel Programming for FPGAs," ArXiv e-prints, May 2018.
FEAST: A FEAture Selection Toolbox for C/C++ & MATLAB/OCTAVE
  • A Pocock
A. Pocock, FEAST: A FEAture Selection Toolbox for C/C++ & MATLAB/OCTAVE, v2.0.0., 2017. [Online]. Available: https://github.com/Craigacp/FEAST
Reading digits in natural images with unsupervised feature learning
  • Y Netzer
  • T Wang
  • A Coates
  • A Bissacco
  • B Wu
  • A Y Ng
Y. Netzer, T. Wang, A. Coates, A. Bissacco, B. Wu, and A. Y. Ng, "Reading digits in natural images with unsupervised feature learning," in NIPS Workshop on Deep Learning and Unsupervised Feature Learning 2011, 2011.
Introducing LETOR 4.0 datasets
  • T Qin
  • T Liu
T. Qin and T. Liu, "Introducing LETOR 4.0 datasets," CoRR, vol. abs/1306.2597, 2013. [Online]. Available: http://arxiv.org/abs/1306.2597