December 2025 · 6 Reads · The Journal of Finance and Data Science
January 2025 · 9 Reads · 1 Citation · Advances in Data Analysis and Classification
This paper introduces the Side Information Boosted Symbolic Regression (SIBSR) model, an enhanced approach to symbolic regression aimed at improving data analysis. SIBSR integrates side information to increase the accuracy and efficiency of modeling complex data relationships. In addition, we introduce the Side Information Generator, a complementary tool designed to assist in generating a range of potential side information options. This enables users to select the most effective side information for specific tasks, thereby enhancing practical utility. Our experimental findings demonstrate the efficacy of SIBSR on standard symbolic regression tasks and its practical application in economic contexts, notably in formulating Nash Equilibrium expressions in Game Theory. These results underscore SIBSR’s potential for advancing the field of data analysis. The source code is available at: https://github.com/dkflame/SIBSR.
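The abstract above describes folding side information into the symbolic-regression objective. As a rough, hypothetical illustration of that general idea (not the authors' SIBSR implementation, which is available at the linked repository), the sketch below scores randomly sampled expression trees by data fit plus a penalty for violating an assumed piece of side information, here monotonicity of the target on [0, 2]; the expression grammar, toy target, and penalty weight are all assumptions made for the example.

```python
# Hypothetical sketch of side-information-penalized symbolic regression.
# NOT the SIBSR code (see https://github.com/dkflame/SIBSR for that); just a
# random search over a tiny expression grammar whose fitness adds a penalty
# when an assumed piece of side information (monotonicity on [0, 2]) is violated.
import random
import numpy as np

random.seed(0)
np.seterr(over="ignore", invalid="ignore")  # nested exp() may overflow; treated as bad fitness

UNARY = {"sin": np.sin, "exp": np.exp, "neg": np.negative}
BINARY = {"add": np.add, "mul": np.multiply}

def random_expr(depth=2):
    """Sample a random expression tree of bounded depth."""
    if depth == 0 or random.random() < 0.3:
        return ("x",) if random.random() < 0.7 else ("const", random.uniform(-2, 2))
    if random.random() < 0.5:
        return (random.choice(list(UNARY)), random_expr(depth - 1))
    return (random.choice(list(BINARY)), random_expr(depth - 1), random_expr(depth - 1))

def evaluate(expr, x):
    """Evaluate an expression tree on the input array x."""
    op = expr[0]
    if op == "x":
        return x
    if op == "const":
        return np.full_like(x, expr[1])
    if op in UNARY:
        return UNARY[op](evaluate(expr[1], x))
    return BINARY[op](evaluate(expr[1], x), evaluate(expr[2], x))

def fitness(expr, x, y, side_info_weight=1.0):
    """Mean squared error plus a penalty for decreasing on the side-info interval."""
    pred = evaluate(expr, x)
    grid = np.linspace(0.0, 2.0, 50)
    vals = evaluate(expr, grid)
    if not (np.all(np.isfinite(pred)) and np.all(np.isfinite(vals))):
        return np.inf
    data_loss = np.mean((pred - y) ** 2)
    monotonicity_penalty = np.sum(np.clip(-np.diff(vals), 0.0, None))
    return data_loss + side_info_weight * monotonicity_penalty

# Toy target y = x + sin(x), which is indeed increasing on [0, 2].
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 2.0, 100)
y = x + np.sin(x)

best = min((random_expr(3) for _ in range(5000)), key=lambda e: fitness(e, x, y))
print("best expression tree:", best, " fitness:", fitness(best, x, y))
```

In this toy setting the side information simply reweights the search; the actual SIBSR model and its Side Information Generator are described in the paper and repository.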
December 2024 · 14 Reads
August 2024 · 1 Read · 4 Citations · The Journal of Finance and Data Science
November 2022 · 12 Reads · 2 Citations
September 2021 · 138 Reads · 9 Citations
Phage P1 has been shown to potentially play an important role in disseminating antibiotic resistance among bacteria during lysogenization, as evidenced by the prevalence of P1 phage-like elements in animal and human pathogens. In contrast to phage λ, the paradigm of cell-fate decision-making, P1 lysogenization was shown to be independent of the multiplicity of infection (MOI).
January 2021 · 9 Reads · 12 Citations
June 2020 · 21 Reads · 9 Citations
June 2020 · 38 Reads · 11 Citations
May 2020 · 36 Reads · 30 Citations · IEEE Journal on Selected Areas in Information Theory
When neural networks (NeuralNets) are implemented in hardware, their weights need to be stored in memory devices. As noise accumulates in the stored weights, the NeuralNet’s performance degrades. This paper studies how to use error-correcting codes (ECCs) to protect the weights. Unlike classic error correction in data storage, the objective is to optimize the NeuralNet’s performance after error correction, rather than to minimize the uncorrectable bit error rate in the protected bits. That is, by viewing the NeuralNet as a function of its input, the error correction scheme is function-oriented. A main challenge is that a deep NeuralNet often has millions to hundreds of millions of weights, causing a large redundancy overhead for ECCs, and the relationship between the weights and the NeuralNet’s performance can be highly complex. To address this challenge, we propose a Selective Protection (SP) scheme, which chooses only a subset of important bits for ECC protection. To find such bits and achieve an optimized tradeoff between the ECC’s redundancy and the NeuralNet’s performance, we present an algorithm based on deep reinforcement learning. Experimental results verify that, compared to the natural baseline scheme, the proposed algorithm achieves substantially better performance on the functional error correction task.
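As a rough illustration of the selective-protection idea described above, the sketch below shields only the highest-magnitude weights of a toy logistic-regression model from simulated bit flips in an int8 weight store and compares accuracies. The magnitude heuristic stands in for the paper's deep-reinforcement-learning bit selection, and the model, quantization, and noise levels are all assumptions made for the example.

```python
# Toy sketch of selective protection of weight bits, assuming an int8 weight
# store hit by random bit flips. A simple magnitude heuristic replaces the
# paper's deep-reinforcement-learning search for the important bits.
import numpy as np

rng = np.random.default_rng(1)

# Synthetic binary classification task and a plain logistic-regression fit.
X = rng.normal(size=(2000, 32))
true_w = rng.normal(size=32)
y = (X @ true_w + 0.1 * rng.normal(size=2000) > 0).astype(float)

w = np.zeros(32)
for _ in range(300):                                  # gradient descent
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    w -= 0.1 * X.T @ (p - y) / len(y)

def accuracy(weights):
    return np.mean(((X @ weights) > 0) == y)

def inject_bit_flips(weights, protect_mask, flip_prob=0.05):
    """Flip random bits of the int8-quantized weights, except protected ones."""
    scale = np.max(np.abs(weights)) / 127.0
    noisy = np.round(weights / scale).astype(np.int8)
    for i in range(len(noisy)):
        if protect_mask[i]:
            continue                                  # assume ECC restores these exactly
        for bit in range(8):
            if rng.random() < flip_prob:
                noisy[i] ^= np.int8(-128) if bit == 7 else np.int8(1 << bit)
    return noisy.astype(float) * scale

importance = np.abs(w)                                # stand-in for the RL-learned ranking
protect = importance >= np.quantile(importance, 0.75) # protect the top 25% of weights

print("clean accuracy      :", accuracy(w))
print("no protection       :", accuracy(inject_bit_flips(w, np.zeros(32, dtype=bool))))
print("selective protection:", accuracy(inject_bit_flips(w, protect)))
```

In this toy run, shielding only a fraction of the weights typically recovers much of the clean accuracy, which is the qualitative redundancy/performance trade-off the abstract describes.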
... Transformer-based models, Generative Adversarial Network (GAN) variants (Goodfellow, Pouget-Abadie, Mirza, Xu, Warde-Farley, Ozair, Courville and Bengio, 2014), and diffusion-based techniques (Yang, Zhang, Song, Hong, Xu, Zhao, Zhang, Cui and Yang, 2023) can further integrate text logs, biometric signals, or mobile data, creating multifaceted synthetic records. Similar methods have been used in healthcare (e.g., blending clinical notes and diagnostic images) (Moor, Banerjee, Abad, Krumholz, Leskovec, Topol and Rajpurkar, 2023; Giuffrè and Shung, 2023) and finance (e.g., simulating trader actions) (Zuo, Jiang and Zhou, 2024), enabling research free of confidentiality breaches. By aligning generation strategies with the unique traits of regional betting cultures, researchers can reveal subtle risk patterns that standardized models might miss, opening opportunities for more focused and relevant interventions. ...
August 2024 · The Journal of Finance and Data Science
... In 2021, Yu et al. [17] proposed a novel method that combines a template-based QG model with a sequence-to-sequence model for diversity-aware QG. Rather than applying stringent templates, they used adjustable patterns that can be collected effectively at lower cost. ...
January 2021
... The finding of an optimal MOI of 1 provides important guidance for the dose design of future phage preparations. However, it should be noted that the optimal MOI obtained under laboratory conditions may differ from that in actual application environments, so additional factors, such as the physiological state of the host animal and the method of administration, need to be considered when developing actual treatment plans [23][24][25]. ...
September 2021
... These benchmarks and datasets are often adapted from real-life applications, with many containing domain-specific knowledge that may not generalize effectively to unseen SQL domains. Hence, large-scale cross-domain datasets featuring professional SQL queries, such as Squall (Shi et al., 2020), Spider (Yu et al., 2018a), Spider-Syn (Gan et al., 2021), WikiSQL (Zhong et al., 2017), and SparC (Yu et al., 2020), have been introduced to facilitate comprehensive method analyses. In retrospect, we recognize two concurrent works which perform systematic benchmarking of text-to-SQL methods. ...
May 2020
... While average-case robustness is more suited to applications such as malware detection, worst-case robustness is relevant in critical applications such as neuromorphic computing. It was recently shown in Raviv et al. (2020) that worst-case robustness is impossible even against one bit erasure (i.e., setting x_i = 0 for some i), unless redundancy is added, and a simple method of adding such redundancy was given. ...
June 2020
... Conventional fault tolerance methods, such as Error Correction Codes (ECCs) and Triple Modular Redundancy (TMR), often impose significant overheads, undermining the advantages of approximate computing [25], [26]. A comprehensive list of such techniques for AccDNN fault detection and mitigation can be found in [27], [28]. ...
June 2020
... The issue of using error-correcting codes to improve the efficiency of RTCSs is considered in detail in [16,17]. The studies report the results of experimental tests, which confirm that the algorithms developed by the authors demonstrate significantly higher performance in the context of functional error correction, compared to conventional approaches. ...
May 2020 · IEEE Journal on Selected Areas in Information Theory
... The work in [54] also extends the analog computing architectures to support dynamic precision with redundant coding, by repeating operations and averaging the result. Protecting the weights and biases of a neural network from noise using linear and nonlinear analog error-correction codes, in order to prevent performance degradation, has been proposed in [55], [56]. These works also explored the use of an unequal error protection method for weights at different layers of a binarized network, owing to the uneven effect of noise across layers. ...
October 2019
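The excerpt above mentions redundant coding by repeating operations and averaging the result. As a minimal, hypothetical sketch of why repetition helps for noisy analog weight storage (not the coding schemes of the cited works), averaging r independently corrupted copies of each weight reduces the readout noise variance by roughly a factor of r:

```python
# Minimal sketch: repetition coding for noisy analog weight storage.
# Averaging r independently corrupted copies cuts the noise variance by about r.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=10_000)        # "true" stored weights (assumed)
sigma = 0.2                        # per-copy analog noise level (assumed)

def mean_squared_readout_error(repetitions):
    copies = w + sigma * rng.normal(size=(repetitions, w.size))
    return np.mean((copies.mean(axis=0) - w) ** 2)

for r in (1, 4, 16):
    print(f"r={r:2d}  MSE ≈ {mean_squared_readout_error(r):.5f}  (sigma^2/r = {sigma**2 / r:.5f})")
```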
... An effective way to address the above problems is to fully exploit the natural redundancy in the source and the protocol stack, so as to improve the forward error correction capability for the received data. Source natural redundancy [11][12][13] refers to the redundancy that remains when the source is uncompressed or incompletely compressed, while protocol natural redundancy refers to the redundancy that is inevitably introduced into the protocol fields of each layer by the independent design of the network layers. Although the latter is artificially designed, its purpose is not to improve communication reliability. ...
May 2019
... In recent works (including results from the authors of this work), machine learning and algorithmic techniques have been used to exploit NR to correct errors in data [21], [22], [23], [27], [33], [48], [49], [53], [54]. This work studies the Representation-Oblivious scheme for the first time, and also presents new theoretical analysis for the Representation-Aware scheme. ...
October 2017