Recent publications
While learning to rank (LTR) has been widely used in web search to prioritize the most relevant webpages among the retrieved contents for an input query, traditional LTR models fail to deliver decent performance for two main reasons: 1) the lack of well-annotated query-webpage pairs with ranking scores covering search queries of various popularity, and 2) ill-trained models with poor generalization, learned from a limited number of training samples. To improve the performance of LTR models, tremendous efforts have been made on both fronts, such as enlarging training sets with pseudo-labeled ranking scores via self-training, or refining the features used for LTR through feature extraction and dimension reduction. Though LTR performance has increased marginally, we believe these methods can be further improved in the newly fashioned "interpolating regime". Specifically, instead of lowering the number of features used for LTR models, our work proposes to transform the original data with random Fourier features, over-parameterizing the downstream LTR models (e.g., GBRank or LightGBM) with features of ultra-high dimensionality to achieve superb generalization performance. Furthermore, rather than self-training with pseudo-labels produced by the same LTR model in a "self-tuned" fashion, the proposed method exploits the diversity of predictions between listwise and pointwise LTR models, co-training both models with a cyclic labeling-prediction pipeline in a "ping-pong" manner. We deploy the proposed Co-trained and Over-parameterized LTR system, COLTR, at Baidu Search and evaluate it against a large number of baseline methods. The results show that COLTR achieves $\Delta NDCG_{4}$ = 3.64%$\sim$4.92% over baselines under various ratios of labeled samples. We also conduct a 7-day A/B test on realistic web traffic of Baidu Search, where we still observe a significant performance improvement of around $\Delta NDCG_{4}$ = 0.17%$\sim$0.92% in real-world applications. COLTR performs consistently in both online and offline experiments.
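For readers unfamiliar with random Fourier features (RFF), the lift the abstract refers to can be sketched as below; the dimensionality `D`, bandwidth `sigma`, and the downstream usage are illustrative assumptions, not COLTR's actual configuration:

```python
import numpy as np

def random_fourier_features(X, D=4096, sigma=1.0, seed=0):
    """Map X (n, d) into an over-parameterized (n, D) feature space
    approximating an RBF kernel (Rahimi & Recht, 2007)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = rng.normal(scale=1.0 / sigma, size=(d, D))   # random projections
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)        # random phases
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

# Hypothetical usage: lift query-webpage features before LTR training.
X = np.random.rand(100, 32)          # 100 pairs, 32 original features
Z = random_fourier_features(X)       # now 4096-dimensional
```

The lifted matrix `Z` would then be fed to the gradient-boosted ranker in place of the raw features.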
In most blockchain-based application scenarios, a complete application logic consists of multiple continuous transactions, in which the initiation of one transaction depends on the confirmation result of the previous one. This mandates that continuous transactions be processed in the correct order. Unfortunately, existing chain-based blockchains fail to effectively support continuous transaction processing due to the considerable latency in confirming continuous transactions. Recent studies have shifted from chain-based blockchains to Directed Acyclic Graph (DAG) based blockchains, which reduce transaction confirmation latency. However, DAG-based blockchains store transactions in an out-of-order manner, which leads to unordered transaction processing. To address this challenge, we propose FLUID, a new DAG-based blockchain that supports continuous transaction processing while delivering high performance. The fundamental idea of FLUID is a transaction dependency tracking structure that ensures continuous transactions are processed in the correct order. FLUID utilizes a conflict resolution mechanism to provide instant confirmation and to support concurrent transaction processing with lower latency. In addition, FLUID builds a checkpoint-based verification mechanism to achieve deterministic consensus on transaction processing results in the DAG. Extensive experiments demonstrate that FLUID improves throughput over the state-of-the-art OHIE by $66\%$ with two orders of magnitude lower latency.
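A toy sketch of a transaction dependency tracking structure of the kind the abstract describes; the class and method names are hypothetical stand-ins for FLUID's actual design:

```python
from collections import defaultdict

class DependencyTracker:
    """Record which transactions each tx depends on, and release a tx
    for execution only after all of its parents are confirmed."""
    def __init__(self):
        self.children = defaultdict(set)   # parent tx -> dependent txs
        self.pending = {}                  # tx -> unconfirmed parent count

    def add(self, tx, deps):
        self.pending[tx] = len(deps)
        for parent in deps:
            self.children[parent].add(tx)

    def confirm(self, tx):
        """Mark tx confirmed; return dependents that are now ready."""
        ready = []
        for child in self.children[tx]:
            self.pending[child] -= 1
            if self.pending[child] == 0:
                ready.append(child)
        return ready

tracker = DependencyTracker()
tracker.add("tx1", [])
tracker.add("tx2", ["tx1"])        # tx2 must wait for tx1
print(tracker.confirm("tx1"))      # -> ['tx2']
```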
The hardware description of circuits usually contains many loops. Register Transfer Level (RTL) simulation is a critical step in verifying the correctness of circuits and is time-consuming, so accelerating it is necessary. However, the speedup of existing RTL simulation acceleration techniques is usually small. Hardware acceleration achieves large speedups, but its hardware cost is high. Some methods use performance models instead of RTL simulation to obtain rough performance estimates with large speedups, but they do not support functional verification. To address these problems, we propose a loop-oriented RTL simulation acceleration approach based on code instrumentation for designs synthesized by High-Level Synthesis. Our approach reduces RTL simulation time by skipping a large number of repeated loop iterations, while maintaining high cycle-count prediction accuracy by reserving some loop iterations. We establish a performance prediction model and an interval-value formula for skipping loop iterations. We conduct experiments on the MachSuite benchmark. The results show that for the RTL simulation of single data processing and batch data processing, the average speedup of our approach reaches 7.49× and 43.3×, respectively, with average cycle-count prediction errors of 1.71% and 1.06%. The results also reveal that the interval value obtained by our approach for skipping loop iterations can quickly and effectively balance cycle-count prediction accuracy against speedup. Compared to the state-of-the-art approach ESSENT, our approach achieves better speedup while keeping cycle-count prediction accuracy at the same level as performance models.
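A minimal sketch of the iteration-skipping idea: simulate only a subset of loop iterations and extrapolate the cycle counts of the skipped ones. The fixed `interval` and the per-iteration cycle model are assumptions; the paper derives its interval value from a performance prediction model rather than using a constant:

```python
def predict_loop_cycles(simulate_iter, trip_count, interval=16):
    """Simulate every `interval`-th loop iteration and extrapolate
    the cycle counts of the skipped iterations."""
    simulated, total = [], 0
    for i in range(0, trip_count, interval):
        cycles = simulate_iter(i)      # expensive per-iteration RTL sim
        simulated.append(cycles)
        total += cycles
    avg = sum(simulated) / len(simulated)
    skipped = trip_count - len(simulated)
    return total + avg * skipped       # predicted cycles for full loop

# Hypothetical iteration model: near-constant latency with small jitter.
est = predict_loop_cycles(lambda i: 100 + (i % 3), trip_count=1000)
print(est)
```

A smaller interval reserves more iterations (higher accuracy, lower speedup); a larger interval skips more (the reverse), which is exactly the trade-off the abstract describes.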
Transforming physical surfaces into virtual interfaces can extend the interaction capability of many exciting future metaverse applications. Recent advances in vibration-based tap sensing show promise for this vision using passive vibration signals. However, current approaches based on Time-Difference-of-Arrival (TDoA) triangulation suffer from fluctuating wave velocity due to the dispersive and heterogeneous nature of solid media, failing to meet the performance requirements for practical use. In this paper, we present MM-Tap, a vibration-based tap localization system that can transform ubiquitous surfaces into virtual touch screens with low overhead. A novel localization scheme is proposed based on our finding of a spatio-temporal mapping between tap locations and TDoA values, which pushes the accuracy of vibration-based tap sensing from unstable cm-level to mm-level. We investigate the geometry of the sensor layout and design a model-based method to synthesize tap data, which enables MM-Tap to adapt to various surface materials and respond at arbitrary sensing scales after a few seconds of calibration. We combine MM-Tap with a COTS projector to facilitate a digitally augmented surface where users can play video games with low latency.
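TDoA values like the ones MM-Tap maps to locations are commonly estimated from cross-correlation peaks between sensor signals; a minimal sketch (the sampling rate, synthetic signals, and correlation estimator are illustrative, not MM-Tap's pipeline):

```python
import numpy as np

def tdoa(sig_a, sig_b, fs):
    """Estimate the time difference of arrival between two vibration
    sensors via the peak of their cross-correlation."""
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag = np.argmax(corr) - (len(sig_b) - 1)   # lag in samples
    return lag / fs                             # seconds

# Hypothetical 48 kHz capture: sensor B hears the tap 20 samples later.
fs = 48_000
pulse = np.exp(-np.arange(200) / 30.0)
sig_a = np.concatenate([np.zeros(100), pulse, np.zeros(200)])
sig_b = np.concatenate([np.zeros(120), pulse, np.zeros(180)])
print(tdoa(sig_a, sig_b, fs))   # ~ -20 / 48000 s
```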
This work proposes a statistical modeling approach for artificial neural network (ANN) based compact models (CMs). Retaining part of the network features of the nominal device and further fine-tuning the remaining network parameters (variational neurons) is found to accurately reproduce static variation. A mapping from process variation to network parameters is derived by combining the proposed variational neuron selection algorithm with the backward propagation of variance (BPV) method. In addition, a secondary classification of the selected variational neurons is applied to model the fabrication-induced correlation between n- and p-type devices. The NN-based statistical modeling approach has been implemented and verified on GAA simulation data and a 16 nm node foundry FinFET, indicating its great potential for modeling emerging and advanced device technologies.
As one of the key techniques for resolution enhancement technologies (RETs), optical proximity correction (OPC) suffers from prohibitive computational costs as feature sizes continue to shrink. Inverse lithography techniques (ILT) treat the mask optimization process as an inverse imaging problem, yielding high-quality curvilinear masks. However, ILT methods often fall short of printability and manufacturability due to their time-consuming procedures and excessive computational overhead. In this paper, we propose DevelSet, a potent metal layer OPC engine that replaces discrete pixel-based masks with implicit level set-based representations. With a GPU-accelerated lithography simulator, DevelSet achieves end-to-end mask optimization using a neural network to provide quasi-optimized level set initialization and further evolution with a CUDA-based mask optimizer for fast convergence. The backbone of DevelSet-Net is a transformer-based multi-branch neural network that offers a parameter selector to eliminate the need for manual parameter initialization. Experimental results demonstrate that the DevelSet framework outperforms state-of-the-art approaches in terms of printability while achieving fast runtime performance (around 1 second). We expect this enhanced level set technique, coupled with a CUDA/DNN accelerated joint optimization paradigm, to have a substantial impact on industrial mask optimization solutions.
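A toy illustration of the level set representation DevelSet builds on: the mask is the super-level set of a function $\phi$, and optimization evolves $\phi$ instead of discrete pixels. The pixel-wise L2 objective and sigmoid relaxation below are simplistic stand-ins for DevelSet's GPU lithography-simulator loss:

```python
import numpy as np

def level_set_step(phi, target, lr=0.5, beta=20.0):
    """One evolution step: the mask is the region where phi > 0; a
    sigmoid relaxation makes the pixel loss differentiable in phi."""
    mask = 1.0 / (1.0 + np.exp(-beta * phi))           # soft mask in [0, 1]
    grad = 2.0 * (mask - target) * beta * mask * (1.0 - mask)
    return phi - lr * grad                              # gradient descent

# Hypothetical 64x64 target pattern; phi initialized as a signed blob.
target = np.zeros((64, 64)); target[16:48, 16:48] = 1.0
yy, xx = np.mgrid[:64, :64]
phi = 20.0 - np.hypot(yy - 32, xx - 32)                 # signed-distance-like
for _ in range(200):
    phi = level_set_step(phi, target)
```

Because $\phi$ is continuous, the zero contour can take curvilinear shapes that a discrete pixel mask cannot, which is the motivation the abstract gives for the level set formulation.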
Multivariate time series forecasting has wide applications, such as traffic flow prediction and supermarket commodity demand forecasting, and a large number of forecasting models have been developed. Given these models, a natural question arises: what theoretical limits of forecasting accuracy can they achieve? Recent work on urban human mobility prediction has made progress on the maximum predictability that any algorithm can achieve. However, existing approaches to the maximum predictability of multivariate time series fully ignore the interrelationship between multiple variables. In this paper, we propose a methodology to measure the upper limit of predictability for multivariate time series with multivariate constraint relations. The key of the proposed methodology is a novel entropy, named Multivariate Constraint Sample Entropy (McSE), that incorporates the multivariate constraint relations for better predictability estimation. We conduct a systematic evaluation over eight datasets, compare existing methods with our proposed predictability measure, and find that ours yields higher predictability. We also find that forecasting algorithms that capture multivariate constraint information, such as GNNs, achieve higher accuracy, confirming the importance of multivariate constraint relations for predictability.
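The exact definition of McSE is given in the paper; as background, the classic univariate sample entropy it extends can be computed as below (the constraint-aware multivariate generalization is not reproduced here):

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Classic sample entropy: -ln(A/B), where B counts template pairs
    of length m within tolerance r (Chebyshev distance) and A does the
    same for length m+1. Lower values mean more predictable series."""
    x = np.asarray(x, dtype=float)
    r *= x.std()
    def count(mm):
        templ = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.max(np.abs(templ[:, None] - templ[None, :]), axis=2)
        return (np.sum(d <= r) - len(templ)) / 2   # exclude self-matches
    B, A = count(m), count(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

rng = np.random.default_rng(0)
print(sample_entropy(np.sin(np.arange(300) * 0.1)))   # low: predictable
print(sample_entropy(rng.normal(size=300)))           # higher: noisy
```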
Refinement checking is an important formal verification method that checks whether a hardware implementation complies with (in other words, refines) a given specification. It has been widely used in both processor and non-processor verification. In refinement checking, a refinement mapping is needed to relate the implementation and the specification. Despite the wide adoption of refinement checking, there is currently no general format or standard for the mapping: most prior works employ a property specification language (e.g., SystemVerilog assertions) to write ad-hoc properties that describe the mapping relation. These manually written properties are usually not well-structured and are often difficult to design or understand. In this paper, we present r-map, a language for refinement mapping. r-map relates the implementation and the specification in a more concise and comprehensible way. We evaluate r-map in the refinement checking of practical hardware designs. In our case study, r-map significantly reduces human effort compared to manually writing refinement properties. We also show how r-map can help scale up formal verification.
This paper investigates age of information (AoI)-based online scheduling in multi-sensor wireless powered communication networks (WPCNs) for time-sensitive Internet of Things (IoT). Specifically, we consider a typical WPCN model, where a wireless power station (WPS) charges multiple sensor nodes (SNs) via wireless power transfer (WPT), and the SNs are then scheduled in the time domain to transmit their sampled status information, using their harvested energy, to a mobile edge server (MES) for decision making. For such a system, we first derive a closed-form expression for the successful data transmission probability over Nakagami-m fading channels. To pursue an efficient online scheduling policy that minimizes the expected weighted sum AoI (EWSAoI) of the system, a discrete-time scheduling problem is formulated. As the problem is non-convex with no explicit expression for the EWSAoI, we propose a Max-Weight policy based on Lyapunov optimization theory, which schedules the SNs at the beginning of each time slot according to the one-slot conditional Lyapunov drift. Simulations validate our theoretical results and show that the proposed scheduling policy outperforms baselines such as the greedy policy and the random round-robin (RR) policy. In particular, when the number of SNs is relatively small, the gain achieved over the greedy policy is considerable. Moreover, some interesting insights are observed: 1) as the number of SNs increases, the EWSAoI also increases; 2) when the transmit power is relatively small, the larger the number of SNs, the smaller the EWSAoI; 3) the EWSAoI decreases with increasing WPS transmit power and then flattens; 4) the EWSAoI increases with the distance between the SNs and the MES.
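A minimal one-slot sketch of a Max-Weight-style AoI scheduler: each slot, pick the sensor whose successful transmission yields the largest expected weighted AoI reduction. The index $w_k p_k A_k$ is a common form of the one-slot drift-reduction term and is assumed here, not taken from the paper:

```python
import numpy as np

def max_weight_schedule(age, weights, p_success):
    """Pick the SN maximizing expected weighted AoI drop this slot."""
    gain = weights * p_success * age
    return int(np.argmax(gain))

# Hypothetical 4-SN system: all ages grow by 1 each slot; the scheduled
# SN's age resets to 1 if its transmission succeeds.
rng = np.random.default_rng(1)
age = np.ones(4)
w = np.array([1.0, 2.0, 1.0, 0.5])      # per-SN weights
p = np.full(4, 0.8)                     # success probabilities
for _ in range(100):
    k = max_weight_schedule(age, w, p)
    age += 1
    if rng.random() < p[k]:
        age[k] = 1.0
print(age)
```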
The prosperity of blockchain has enabled various decentralized applications, e.g., cross-regional finance, thanks to its openness, immutability, and decentralization. The openness, however, inevitably leads to serious privacy breaches. Recently, various privacy-enhancing works (e.g., Zcash, Monero) have been proposed to address this problem. However, most existing solutions either target the unspent transaction output (UTXO) model or fail to provide full privacy protection for the account-based model with efficient performance. In this paper, we put forward LedgerMaze, an efficient privacy-preserving non-interactive zero-knowledge (NIZK) scheme over account-model blockchains. We design a novel cheque mechanism to break the link between sender and receiver: a sender transfers money to a receiver's cheque, and the receiver later retrieves that cheque from a set of cheques, achieving obfuscation without revealing the original one. We construct several efficient NIZK proofs for instantiating the mechanism and further analyze the security properties of LedgerMaze. Experimental results show that LedgerMaze achieves comparable communication and computation costs while retaining a full privacy guarantee, compared to previous similar constructions.
With the wide application of deep learning, the amount of data required to train deep learning models is becoming increasingly large, resulting in longer training times and higher demands on computing resources. To improve the throughput of a distributed learning system, both task scheduling and resource scheduling are required. This paper proposes combining ARIMA and GRU models to predict future task volume. For task scheduling, multi-priority task queues divide tasks into different queues according to their priorities, ensuring that high-priority tasks complete first. For resource scheduling, reinforcement learning is adopted to manage limited computing resources, with a reward function constructed from the resources occupied by a task, its training time, and the accuracy of the model. When a distributed learning model approaches convergence, its computing resources are gradually reduced so that they can be reallocated to other learning tasks. Experimental results demonstrate that the proposed RLPTO tends to use more computing nodes when facing tasks with large data scale and has good scalability. The distributed learning system reward experiment shows that RLPTO enables the computing cluster to obtain the largest reward.
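The abstract states that the RL reward combines occupied resources, training time, and model accuracy; the linear combination and coefficients below are purely illustrative assumptions, not the paper's formula:

```python
def rlpto_style_reward(acc_gain, gpu_hours, wall_time,
                       a=1.0, b=0.1, c=0.05):
    """Illustrative reward shape: favor accuracy improvement while
    penalizing occupied resources and elapsed training time."""
    return a * acc_gain - b * gpu_hours - c * wall_time

print(rlpto_style_reward(acc_gain=0.8, gpu_hours=4, wall_time=2))
```

Under a reward of this shape, a near-converged task (small `acc_gain`) naturally earns less per unit of resource, which matches the paper's policy of shrinking its allocation.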
3D point cloud maps are widely used in robotic tasks like localization and planning. However, dynamic objects such as cars and pedestrians can introduce ghost artifacts during map generation, reducing map quality and hindering normal robot navigation. Online dynamic object removal methods are restricted to local-scope information and offer limited performance. To address this challenge, we propose DORF (Dynamic Object Removal Framework), a novel coarse-to-fine offline framework that exploits global 4D spatio-temporal LiDAR information to generate clean static point cloud maps, achieving state-of-the-art performance among existing offline methods. DORF first conservatively preserves definitely static points using our proposed Receding Horizon Sampling (RHS) mechanism. It then gradually recovers more ambiguous static points, guided by an inherent characteristic of dynamic objects in urban environments: they must interact with the ground. We validate the effectiveness and robustness of DORF across various types of highly dynamic datasets.
Teleoperation has contributed widely to many applications, making the design of intuitive and ergonomic control interfaces for teleoperation crucial. The rapid advancement of Mixed Reality (MR) has yielded tangible benefits in human-robot interaction: MR provides an immersive environment for interacting with robots, effectively reducing operators' mental and physical workload during teleoperation. Additionally, the incorporation of haptic rendering, including kinaesthetic and tactile rendering, can further amplify the intuitiveness and efficiency of MR-based immersive teleoperation. In this study, we developed an immersive bilateral teleoperation system integrating Digital Twin-driven Mixed Reality (DTMR) manipulation with haptic rendering. The system comprises a commercial remote controller with a kinaesthetic rendering feature and a wearable, cost-effective tactile rendering interface called the Soft Pneumatic Tactile Array (SPTA). We carried out two user studies to assess the system's effectiveness, including a performance evaluation of key components within DTMR and a quantitative assessment of the newly developed SPTA. The results demonstrate an enhancement in both the human-robot interaction experience and teleoperation performance. For more project details, please visit our website: https://sites.google.com/view/hbts-brl/home
Medical image data are often limited due to the expensive acquisition and annotation process. Hence, training a deep-learning model with only raw data can easily lead to overfitting. One solution is to augment the raw data with various transformations, improving the model's ability to generalize to new data. However, manually configuring a generic augmentation combination and its parameters for different datasets is non-trivial due to inconsistent acquisition approaches and data distributions. Automatic data augmentation has therefore been proposed to learn favorable augmentation strategies for different datasets, but it incurs large GPU overhead. To this end, we present a novel method, called Dynamic Data Augmentation (DDAug), which is efficient and has negligible computation cost. DDAug develops a hierarchical tree structure to represent various augmentations and utilizes an efficient Monte-Carlo tree search algorithm to update, prune, and sample the tree. As a result, the augmentation pipeline can be optimized for each dataset automatically. Experiments on multiple prostate MRI datasets show that our method outperforms the current state-of-the-art data augmentation strategies.
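A compact sketch of Monte-Carlo tree search over an augmentation tree using the standard UCB1 rule; the tree, operation names, and evaluation stub are hypothetical stand-ins for DDAug's hierarchical tree and its validation-based reward:

```python
import math, random

class Node:
    """A node in a toy augmentation tree: each child is one transform."""
    def __init__(self, op, children=()):
        self.op, self.children = op, list(children)
        self.visits, self.value = 0, 0.0

def uct_select(node, c=1.4):
    """Pick the child maximizing the UCB1 score (unvisited first)."""
    for ch in node.children:
        if ch.visits == 0:
            return ch
    return max(node.children,
               key=lambda ch: ch.value / ch.visits
               + c * math.sqrt(math.log(node.visits) / ch.visits))

def mcts_round(root, evaluate):
    """Select a leaf, evaluate it (e.g., a short validation run),
    then backpropagate the reward along the selection path."""
    path, node = [root], root
    while node.children:
        node = uct_select(node)
        path.append(node)
    reward = evaluate(node.op)
    for n in path:
        n.visits += 1
        n.value += reward

root = Node("root", [Node("flip"), Node("rotate"), Node("gamma")])
for _ in range(30):
    mcts_round(root, evaluate=lambda op: random.random())
best = max(root.children, key=lambda ch: ch.visits)
print("most visited:", best.op)
```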
It is well known that locomotion-dominated navigation tasks can strongly provoke cybersickness. Past research has proposed numerous approaches to tackle this issue based on offline considerations. In this work, a novel approach to mitigating cybersickness is presented based on online adaptive navigation. Building on the Proportional-Integral-Derivative (PID) control method, we propose a parameterized mathematical model for online adaptive navigation that takes as input the user's electrodermal activity (EDA), an effective indicator of cybersickness level, and outputs adapted navigation accelerations. Minimizing the cybersickness level is thus cast as an optimization problem: find the PID model parameters that reduce the severity of cybersickness. User studies were organized to collect non-adapted navigation accelerations and the corresponding EDA signals. A deep neural network was then trained to learn the correlation between EDA and navigation accelerations, with its hyperparameters obtained through the Optuna open-source framework. To validate the performance of the optimized online adaptive navigation developed through PID control, we performed an analysis in a simulated user study based on the pre-trained deep neural network. Results indicate a significant reduction of cybersickness in terms of EDA signal analysis and motion sickness dose value. This is pioneering work that presents a systematic strategy for adaptive navigation settings from a theoretical standpoint.
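A textbook discrete PID controller of the kind the model builds on; the gain values, the comfort baseline, and the mapping from controller output to an acceleration command are illustrative assumptions, not the paper's calibrated parameters:

```python
class PID:
    """Discrete PID: error is the EDA reading minus a comfort baseline;
    the output is used to damp the navigation acceleration."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_err = 0.0, 0.0

    def step(self, eda, baseline):
        err = eda - baseline
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

pid = PID(kp=0.8, ki=0.1, kd=0.05, dt=0.1)
accel_cmd = 2.0 - pid.step(eda=0.6, baseline=0.4)   # damped acceleration
print(accel_cmd)
```

Searching over (kp, ki, kd) to minimize the simulated cybersickness response is exactly the optimization problem the abstract describes.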
A unique bubble foam flow appears in subcooled flow boiling of 3.5 wt% artificial seawater. Depending on the inlet temperature of the seawater, slug bubbles resulting from foam ruptures may prevail during boiling. It is therefore important to determine the Sauter mean diameters of bubbles in foam flow under different experimental conditions for seawater. Bubble characteristics, including maximum/minimum diameters and Sauter mean diameters, are collected and analyzed from shadowgraph measurements using ImageJ, forming datasets that cover various inlet temperatures, heat fluxes, and mass fluxes for both seawater and de-ionized water. Evidence is presented in this paper to describe the relationship between the maximum/minimum diameters and the Sauter mean diameters. Two existing correlations assuming the Sauter mean diameter is proportional to the maximum diameter are compared against the current datasets. One relationship correlating the Sauter mean diameter with shape characteristics, including the maximum and minimum diameters, is also compared with the datasets. Large mean deviations of 57.10-80.27% for seawater and 54.06-76.79% for de-ionized water are found, respectively, suggesting the poor applicability of these linear correlations. Finally, a new linear relationship for seawater with a slope of 0.376 is proposed, reducing the mean deviation to only 8.99%. It offers the possibility of predicting Sauter mean diameters solely from geometric characteristics of bubbles measured in high-speed camera footage.
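Assuming the new correlation takes the same proportional form as the two existing correlations it is compared against, it can be written as $d_{32} = 0.376\,d_{\max}$, where $d_{32}$ denotes the Sauter mean diameter and $d_{\max}$ the maximum bubble diameter; any intercept term is not stated in the abstract.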
Recent advances in extended reality (XR) technologies have enabled new and increasingly realistic empathy tools and experiences. In XR, all interactions take place in different spatial contexts, all with different features, affordances, and constraints. We present a systematic literature survey of recent work on empathy in XR. As a result, we contribute a research roadmap with three future opportunities and six open questions in XR-enabled empathy research across both physical and virtual spaces.
Car-following is a control process in which a following vehicle adjusts its acceleration to keep a safe distance from the lead vehicle. Recently, there has been a boom in data-driven models that enable more accurate modeling of car-following using real-world driving datasets. Although several public datasets are available, their formats are not always consistent, making it challenging to determine the state of the art and how well a new model performs compared to existing ones. To address this gap and promote the development of microscopic traffic flow modeling, we establish the first public benchmark dataset for car-following behavior modeling. The benchmark consists of more than 80,000 car-following events extracted from five public driving datasets under the same criteria. To give an overview of current progress in car-following modeling, we implemented and tested representative baseline models within the benchmark. The established benchmark provides researchers with consistent data formats and metrics for cross-comparing different car-following models, and comes with open datasets and code.
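One classic baseline of the kind such a benchmark implements is the Intelligent Driver Model (IDM); a compact sketch (the parameter values are typical textbook defaults, not the benchmark's calibration, and IDM's inclusion among the paper's exact baselines is an assumption):

```python
import math

def idm_accel(v, v_lead, gap, v0=33.3, T=1.6, a=1.5, b=2.0, s0=2.0):
    """Intelligent Driver Model acceleration.
    v: follower speed (m/s), v_lead: leader speed (m/s), gap: bumper
    gap (m); v0 desired speed, T time headway, a/b max accel/comfortable
    decel, s0 minimum jam distance."""
    dv = v - v_lead
    s_star = s0 + max(0.0, v * T + v * dv / (2 * math.sqrt(a * b)))
    return a * (1 - (v / v0) ** 4 - (s_star / gap) ** 2)

print(idm_accel(v=25.0, v_lead=22.0, gap=30.0))   # negative: closing in
```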
The dynamic behaviors of large-scale hoop truss antennas (HTAs) are significantly influenced by the nonlinear torque transmission properties of flexible hinges. Due to hinge nonlinearity, an HTA can easily be excited into internal resonances that cause energy exchange between two or three adjacent modes. This paper focuses on the 3:1 internal resonant response of an articulated HTA induced by hinge nonlinearity. The analytical modes are obtained using the global mode method (GMM) and validated by the finite element method (FEM). The partial differential equations (PDEs) of planar motion are then discretized into ordinary differential equations (ODEs) by Galerkin's technique. The method of multiple time scales is employed to obtain the four-dimensional modulation equations of the HTA with primary and 3:1 internal resonance. Using Newton-Raphson iteration and pseudo-arclength continuation, frequency-response and force-response curves are obtained to investigate the theoretical steady-state vibrations of the HTA. Moreover, the influence of the cubic spring stiffness, damping, and external excitation amplitude on the system's nonlinear dynamic behaviors is investigated. Numerical simulation reveals that the articulated multi-beam hoop structure exhibits typical nonlinear phenomena such as hardening-spring characteristics, jumps, bi-stability, and double peaks. These findings contribute to a comprehensive framework for analyzing internal resonances in a class of hinge-connected flexible structures.
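For reference, in the standard multiple-time-scales treatment, primary resonance of the first mode combined with a 3:1 internal resonance is characterized by detuning relations of the form $\omega_2 = 3\omega_1 + \varepsilon\sigma_1$ and $\Omega = \omega_1 + \varepsilon\sigma_2$, where $\omega_1, \omega_2$ are the interacting modal frequencies, $\Omega$ is the excitation frequency, $\varepsilon$ is a small bookkeeping parameter, and $\sigma_1, \sigma_2$ are detuning parameters. This generic form is assumed here; the paper's exact nondimensionalization may differ.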
Information
Address: Kowloon, Hong Kong
Website: http://www.ust.hk/