Published by MDPI
Online ISSN: 1099-4300
Most read in the past 30 days:
Theory and Application of Zero Trust Security: A Brief Survey (November 2023) · 233 reads in the past 30 days · 4,817 Reads · 48 Citations
Water Quality Prediction Based on Machine Learning and Comprehensive Weighting Methods (August 2023) · 127 reads in the past 30 days · 1,707 Reads · 43 Citations
Enterprise Digital Transformation Strategy: The Impact of Digital Platforms (March 2025) · 121 reads in the past 30 days · 156 Reads
Principles Entailed by Complexity, Crucial Events, and Multifractal Dimensionality (February 2025) · 59 reads in the past 30 days · 70 Reads · 1 Citation
Self-Improvising Memory: A Perspective on Memories as Agential, Dynamically Reinterpreting Cognitive Glue (May 2024) · 50 reads in the past 30 days · 587 Reads · 16 Citations
Entropy (ISSN 1099-4300) is an international and interdisciplinary journal of entropy and information studies. It deals with the development and/or application of entropy or information-theoretic concepts in a wide variety of applications.
April 2025 · 1 Read
Yasuji Sawada · Yasukazu Daigaku · Kenji Toma
Research on the birth and evolution of life is reviewed with reference to the maximum entropy production principle (MEPP), which is shown to be essential for a consistent understanding of both. First, recent work on the birth of a self-replicative system as pre-RNA life is reviewed in relation to the MEPP; a dynamical-systems approach identifies a critical polymer concentration in a local system above which an exponential increase of entropy production is guaranteed. Secondly, research on the early stages of evolution is reviewed: experimental work on the number of cells necessary for forming a multi-cellular organization, and numerical work on differentiation in a model system and its relation to the MEPP. The review suggests that the late stage of evolution is characterized by the formation of society and by external entropy production. A hypothesis on the general route of evolution, from its birth to present-day life, following the MEPP is discussed. Some examples of life that happened to face poor thermodynamic conditions are presented with a thermodynamic discussion. Throughout the review, the MEPP proves consistently useful for a thermodynamic understanding of the birth and evolution of life under conditions far from equilibrium.
April 2025 · 1 Read
Dobromir Dotov · Jingxian Gu · Philip Hotor · Joanna Spyra
Full-body movement involving multi-segmental coordination has been essential to our evolution as a species, but its study has been focused mostly on the analysis of one-dimensional data. The field is poised for a change by the availability of high-density recording and data sharing. New ideas are needed to revive classical theoretical questions such as the organization of the highly redundant biomechanical degrees of freedom and the optimal distribution of variability for efficiency and adaptiveness. In movement science, there are popular methods that up-dimensionalize: they start with one or a few recorded dimensions and make inferences about the properties of a higher-dimensional system. The opposite problem, dimensionality reduction, arises when making inferences about the properties of a low-dimensional manifold embedded inside a large number of kinematic degrees of freedom. We present an approach to quantify the smoothness and degree to which the kinematic manifold of full-body movement is distributed among embedding dimensions. The principal components of embedding dimensions are rank-ordered by variance. The power law scaling exponent of this variance spectrum is a function of the smoothness and dimensionality of the embedded manifold. It defines a threshold value below which the manifold becomes non-differentiable. We verified this approach by showing that the Kuramoto model obeys the threshold when approaching global synchronization. Next, we tested whether the scaling exponent was sensitive to participants’ gait impairment in a full-body motion capture dataset containing short gait trials. Variance scaling was highest in healthy individuals, followed by osteoarthritis patients after hip replacement, and lastly, the same patients before surgery. Interestingly, in the same order of groups, the intrinsic dimensionality increased but the fractal dimension decreased, suggesting a more compact but complex manifold in the healthy group. Thinking about manifold dimensionality and smoothness could inform classic problems in movement science and the exploration of the biomechanics of full-body action.
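As a rough illustration of the variance-spectrum idea described above (not the authors' implementation), the sketch below rank-orders the principal-component variances of a multi-dimensional recording and fits a power-law scaling exponent; the toy data and its dimensions are placeholders.

```python
import numpy as np

def variance_scaling_exponent(X):
    """Estimate the power-law exponent of the rank-ordered PCA variance spectrum.

    X: (n_samples, n_dims) array of kinematic degrees of freedom.
    Returns the slope of log(variance) vs. log(rank); its magnitude reflects
    the smoothness and dimensionality of the embedded manifold.
    """
    Xc = X - X.mean(axis=0)                           # center the data
    cov = np.cov(Xc, rowvar=False)                    # covariance across dimensions
    eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]  # variances, descending
    eigvals = eigvals[eigvals > 0]
    ranks = np.arange(1, len(eigvals) + 1)
    slope, _ = np.polyfit(np.log(ranks), np.log(eigvals), 1)
    return slope

# Toy usage: a smooth low-dimensional signal embedded in 30 recorded dimensions
t = np.linspace(0, 20, 2000)
X = np.column_stack([np.sin(t * (k + 1)) / (k + 1) ** 2 for k in range(30)])
X += 1e-3 * np.random.randn(*X.shape)
print(variance_scaling_exponent(X))                   # steep (strongly negative) slope
```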
April 2025
Chia-Wei Huang · Chih-Chiang Fang · Wei-Tai Hsu · [...] · Li-Ting Zhou
Transformer operations are susceptible to both internal and external faults. This study primarily employed software to construct a power system simulation model featuring a step-down transformer. The simulation model comprised three single-phase transformers with ten tap positions at the secondary coil to analyze internal faults. Additionally, ten fault positions between the power transformer and the load were considered for external fault analysis. The protection scheme incorporated percentage differential protection for both the power transformer and the transmission line, aiming to explore fault characteristics. To mitigate the protection device’s sensitivity issues, the scale-dependent intrinsic entropy method was utilized as a decision support system to minimize power system protection misoperations. The results indicated the effectiveness and practicality of the auxiliary method through comprehensive failure analysis.
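A minimal sketch (not the authors' simulation) of the percentage differential criterion mentioned above: the element trips when the differential current exceeds a fixed fraction of the restraint current. The slope and pickup values are illustrative assumptions.

```python
def percentage_differential_trip(i_primary, i_secondary, slope=0.25, pickup=0.1):
    """Return True if the percentage differential element should operate.

    i_primary, i_secondary: per-unit currents referred to a common base.
    slope: restraint percentage (assumed value).
    pickup: minimum differential current for operation (assumed value).
    """
    i_diff = abs(i_primary - i_secondary)                     # operating quantity
    i_restraint = (abs(i_primary) + abs(i_secondary)) / 2.0   # restraint quantity
    return i_diff > max(pickup, slope * i_restraint)

# External fault: through-currents nearly equal, no trip; internal fault: trip
print(percentage_differential_trip(1.00, 0.98))   # False
print(percentage_differential_trip(1.00, 0.30))   # True
```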
April 2025
Weilei Wen · Qianqian Zhao · Xiuli Shao
Omnidirectional image super-resolution (ODISR) is critical for VR/AR applications, as high-quality 360° visual content significantly enhances immersive experiences. However, existing ODISR methods suffer from limited receptive fields and high computational complexity, which restricts their ability to model long-range dependencies and extract global structural features. Consequently, these limitations hinder the effective reconstruction of high-frequency details. To address these issues, we propose a novel Mamba-based ODISR network, termed MambaOSR, which consists of three key modules working collaboratively for accurate reconstruction. Specifically, we first introduce a spatial-frequency visual state space model (SF-VSSM) to capture global contextual information via dual-domain representation learning, thereby enhancing the preservation of high-frequency details. Subsequently, we design a distortion-guided module (DGM) that leverages distortion map priors to adaptively model geometric distortions, effectively suppressing artifacts resulting from equirectangular projections. Finally, we develop a multi-scale feature fusion module (MFFM) that integrates complementary features across multiple scales, further improving reconstruction quality. Extensive experiments conducted on the SUN360 dataset demonstrate that our proposed MambaOSR achieves a 0.16 dB improvement in WS-PSNR and increases the mutual information by 1.99% compared with state-of-the-art methods, significantly enhancing both visual quality and the information richness of omnidirectional images.
April 2025
Sha Ye · Qiong Wu · Pingyi Fan · Qiang Fan
The Internet of Vehicles (IoV), as the core of intelligent transportation systems, enables comprehensive interconnection between vehicles and their surroundings through multiple communication modes, which is significant for autonomous driving and intelligent traffic management. However, with the emergence of new applications, traditional communication technologies face the problems of scarce spectrum resources and high latency. Semantic communication, which focuses on extracting, transmitting, and recovering the useful semantic information in messages, can reduce redundant data transmission, improve spectrum utilization, and provide innovative solutions to communication challenges in the IoV. This paper systematically reviews state-of-the-art semantic communications in the IoV, elaborates the technical background of the IoV and semantic communications, and discusses in depth the key technologies of semantic communications in the IoV, including semantic information extraction, semantic communication architecture, resource allocation and management, and related topics. Through specific case studies, it demonstrates that semantic communications can be effectively employed in traffic environment perception and understanding, intelligent driving decision support, IoV service optimization, and intelligent traffic management. Additionally, it analyzes the current challenges and future research directions. This survey reveals that semantic communications have broad application prospects in the IoV, but the remaining practical problems must be solved by combining advanced technologies in order to promote their wide application in the IoV and to contribute to the development of intelligent transportation systems.
April 2025
Li Xie · Liangyan Li · Jun Chen · [...] · Zhongshan Zhang
A constrained version of Talagrand’s transportation inequality is established, which reveals an intrinsic connection between the Gaussian distortion-rate-perception functions with limited common randomness under the Kullback–Leibler divergence-based and squared Wasserstein-2 distance-based perception measures. This connection provides an organizational framework for assessing existing bounds on these functions. In particular, we show that the best-known bounds of Xie et al. are nonredundant when examined through this connection.
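For context, the unconstrained form of Talagrand's transportation inequality, for the standard Gaussian measure $\gamma$ on $\mathbb{R}^n$, reads

$$ W_2^2(\mu, \gamma) \;\le\; 2\, D_{\mathrm{KL}}(\mu \,\|\, \gamma). $$

The constrained variant established in the paper is what links the Kullback-Leibler-based and squared Wasserstein-2-based perception measures for the Gaussian distortion-rate-perception functions discussed above.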
April 2025
Yubin Li · Weida Zhan · Yichun Jiang · Jinxin Guo
RGB-thermal object detection harnesses complementary information from visible and thermal modalities to enhance detection robustness in challenging environments, particularly under low-light conditions. However, existing approaches suffer from limitations due to their heavy dependence on precisely registered data and insufficient handling of cross-modal distribution disparities. This paper presents RDCRNet, a novel framework incorporating a Cross-Modal Representation Model to effectively address these challenges. The proposed network features a Cross-Modal Feature Remapping Module that aligns modality distributions through statistical normalization and learnable correction parameters, significantly reducing feature discrepancies between modalities. A Cross-Modal Refinement and Interaction Module enables sophisticated bidirectional information exchange via trinity refinement for intra-modal context modeling and cross-attention mechanisms for unaligned feature fusion. Multiscale detection capability is enhanced through a Cross-Scale Feature Integration Module, improving detection performance across various object sizes. To overcome the inherent data scarcity in RGB-T detection, we introduce a self-supervised pretraining strategy that combines masked reconstruction with adversarial learning and semantic consistency loss, effectively leveraging both aligned and unaligned RGB-T samples. Extensive experiments demonstrate that RDCRNet achieves state-of-the-art performance on multiple benchmark datasets while maintaining high computational and storage efficiency, validating its superiority and practical effectiveness in real-world applications.
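The Cross-Modal Feature Remapping Module is described as aligning modality statistics through normalization plus learnable correction parameters; a minimal numpy sketch of that idea (placeholder shapes and parameter values, not the RDCRNet code) could look like this:

```python
import numpy as np

def remap_features(feat, gamma, beta, eps=1e-5):
    """Standardize per-channel statistics, then apply a learnable affine correction.

    feat:  (channels, height, width) feature map from one modality.
    gamma, beta: (channels,) learnable correction parameters (assumed shapes).
    """
    mean = feat.mean(axis=(1, 2), keepdims=True)
    std = feat.std(axis=(1, 2), keepdims=True)
    normalized = (feat - mean) / (std + eps)          # remove modality-specific statistics
    return gamma[:, None, None] * normalized + beta[:, None, None]

rgb = np.random.randn(64, 32, 32) * 3.0 + 1.5         # RGB branch, arbitrary statistics
thermal = np.random.randn(64, 32, 32) * 0.5 - 2.0     # thermal branch, different statistics
gamma, beta = np.ones(64), np.zeros(64)
aligned_rgb = remap_features(rgb, gamma, beta)
aligned_thermal = remap_features(thermal, gamma, beta)
print(aligned_rgb.mean(), aligned_thermal.mean())     # both approximately zero
```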
April 2025 · 3 Reads
Yi Wang · Feng Li · Mengge Lv · [...] · Xiaohang Wang
Under cavitation conditions, hydraulic turbines can suffer from mechanical damage, which will shorten their useful life and reduce power generation efficiency. Timely detection of cavitation phenomena in hydraulic turbines is critical for ensuring operational reliability and maintaining energy conversion efficiency. However, extracting cavitation features is challenging due to strong environmental noise interference and the inherent non-linearity and non-stationarity of a cavitation hydroacoustic signal. A multi-index fusion adaptive cavitation feature extraction and cavitation detection method is proposed to solve the above problems. The number of decomposition layers in the multi-index fusion variational mode decomposition (VMD) algorithm is adaptively determined by fusing multiple indicators related to cavitation characteristics, thus retaining more cavitation information and improving the quality of cavitation feature extraction. Then, the cavitation features are selected based on the frequency characteristics of different degrees of cavitation. In this way, the detection of incipient cavitation and the secondary detection of supercavitation are realized. Finally, the cavitation detection effect was verified using the hydro-acoustic signal collected from a mixed-flow hydro turbine model test stand. The detection accuracy rate and false alarm rate were used as evaluation indicators, and the comparison results showed that the proposed method has high detection accuracy and a low false alarm rate.
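A hedged sketch of the adaptive selection of the number of VMD modes described above: decompose with an increasing number of modes, score each decomposition by fusing several indicators, and keep the best one. The `vmd_decompose` stand-in below is a crude FFT band split so the example runs without a VMD library, and the two fused indicators (retained energy and average spectral entropy) are illustrative choices rather than the exact indices the authors fuse.

```python
import numpy as np

def vmd_decompose(signal, k):
    """Stand-in for a real VMD routine: splits the spectrum into k contiguous bands."""
    spectrum = np.fft.rfft(signal)
    modes = []
    for band in np.array_split(np.arange(len(spectrum)), k):
        masked = np.zeros_like(spectrum)
        masked[band] = spectrum[band]
        modes.append(np.fft.irfft(masked, n=len(signal)))
    return np.array(modes)

def spectral_entropy(mode):
    p = np.abs(np.fft.rfft(mode)) ** 2
    p = p / (p.sum() + 1e-12)
    return -np.sum(p * np.log(p + 1e-12))

def select_mode_number(signal, k_min=2, k_max=8, weights=(0.5, 0.5)):
    """Choose the decomposition level with the best fused score:
    high retained energy and low average spectral entropy of the modes."""
    best_k, best_score = k_min, -np.inf
    for k in range(k_min, k_max + 1):
        modes = vmd_decompose(signal, k)
        energy_ratio = np.sum(modes.sum(axis=0) ** 2) / np.sum(signal ** 2)
        mean_entropy = np.mean([spectral_entropy(m) for m in modes])
        score = weights[0] * energy_ratio - weights[1] * mean_entropy
        if score > best_score:
            best_k, best_score = k, score
    return best_k

t = np.linspace(0, 1, 4096)
sig = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 400 * t) + 0.1 * np.random.randn(t.size)
print(select_mode_number(sig))
```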
April 2025 · 39 Reads
The rapid development of computers has stimulated growing interest in the physical foundations of computation, an interest that arises from both the applicative and the fundamental aspects of computing. The Landauer principle, addressed in this Special Issue, is one of the limiting physical principles that constrain the behavior of computing systems, establishing the minimal energy cost for the erasure of a single memory bit in a system operating at equilibrium temperature T.
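For reference, the Landauer bound mentioned above sets the minimal energy dissipated when erasing one bit in a system at equilibrium temperature T:

$$ E_{\min} = k_B T \ln 2 , $$

where $k_B$ is the Boltzmann constant; at room temperature (about 300 K) this is roughly $3 \times 10^{-21}$ J per bit.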
April 2025
In view of the typical multi-target scenarios of underwater direction-of-arrival (DOA) tracking complicated by uncertain measurement noise in unknown underwater environments, a robust underwater multi-target DOA tracking method is proposed by incorporating Sage–Husa (SH) noise estimation and a backward smoothing technique within the framework of the cardinalized probability hypothesis density (CPHD) filter. First, the kinematic model of underwater targets and the measurement model based on the received signals of a hydrophone array are established, from which the CPHD-based multi-target DOA tracking algorithm is derived. To mitigate the adverse impact of uncertain measurement noise, the Sage–Husa approach is deployed for dynamic noise estimation, thereby reducing noise-induced performance degradation. Subsequently, a backward smoothing technique is applied to the forward filtering results to further enhance tracking robustness and precision. Finally, extensive simulations and experimental evaluations demonstrate that the proposed method outperforms existing DOA estimation and tracking techniques in terms of robustness and accuracy under uncertain measurement noise conditions.
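A simplified sketch of a Sage–Husa style recursive measurement-noise update of the kind referred to above (one scalar channel with exponential forgetting); the forgetting factor and initial values are assumptions, and the full estimator in the paper operates inside the CPHD filtering framework.

```python
def sage_husa_update(r_est, innovation, pred_var, k, b=0.97):
    """Recursively update the measurement-noise variance estimate.

    r_est:      current estimate of the measurement-noise variance R
    innovation: measurement residual y - H @ x_pred at step k
    pred_var:   H P_pred H^T, the predicted measurement variance without noise
    b:          forgetting factor in (0, 1); values near 1 forget slowly (assumed)
    """
    d_k = (1.0 - b) / (1.0 - b ** (k + 1))           # time-varying weight
    r_new = (1.0 - d_k) * r_est + d_k * (innovation ** 2 - pred_var)
    return max(r_new, 1e-9)                          # keep the variance positive
```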
April 2025
Zhenning Chen · Xinyu Zhang · Siyang Wang · Youren Wang
Different from conventional federated learning (FL), which relies on a central server for model aggregation, decentralized FL (DFL) exchanges models among edge servers, thus improving robustness and scalability. When deploying DFL in the Internet of Things (IoT), limited wireless resources cannot provide simultaneous access to massive numbers of devices, so client scheduling must be performed to balance the convergence rate and model accuracy. However, the heterogeneity of computing and communication resources across client devices, combined with the time-varying nature of wireless channels, makes it challenging to accurately estimate the delay associated with client participation during scheduling. To address this issue, we investigate the client scheduling and resource optimization problem in DFL without prior client information. Specifically, the problem is reformulated as a multi-armed bandit (MAB) program, and an online learning algorithm that uses contextual multi-armed bandits for client delay estimation and scheduling is proposed. Theoretical analysis shows that this algorithm achieves asymptotically optimal performance. The experimental results show that the algorithm makes asymptotically optimal client selection decisions and outperforms existing algorithms in reducing the cumulative delay of the system.
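As a toy illustration of bandit-style client scheduling of the kind described above (not the paper's contextual, delay-aware algorithm), the sketch below uses a plain UCB rule to pick the client expected to finish fastest; the reward is the negative observed delay, and the delay values are made up.

```python
import math
import random

class UCBClientScheduler:
    """Schedule one client per round using an upper-confidence-bound rule."""

    def __init__(self, n_clients):
        self.counts = [0] * n_clients          # times each client was scheduled
        self.mean_reward = [0.0] * n_clients   # running mean of -delay

    def select(self, t):
        for i, c in enumerate(self.counts):
            if c == 0:
                return i                        # try every client at least once
        return max(
            range(len(self.counts)),
            key=lambda i: self.mean_reward[i]
            + math.sqrt(2.0 * math.log(t + 1) / self.counts[i]),
        )

    def update(self, i, delay):
        self.counts[i] += 1
        self.mean_reward[i] += (-delay - self.mean_reward[i]) / self.counts[i]

scheduler = UCBClientScheduler(n_clients=5)
true_delay = [0.8, 0.5, 1.2, 0.4, 0.9]                   # unknown to the scheduler
for t in range(200):
    i = scheduler.select(t)
    scheduler.update(i, true_delay[i] + 0.05 * random.random())
print(max(range(5), key=lambda i: scheduler.counts[i]))  # usually client 3 (lowest delay)
```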
April 2025
Francesco Tosti Guerra · Andrea Napoletano · Andrea Zaccaria
In this work, we propose to study the collective behavior of different ensembles of neural networks. These sets define and live on complex manifolds that evolve through training. Each manifold is characterized by its intrinsic dimension, a measure of the variability of the ensemble and, as such, a measure of the impact of the different training strategies. Indeed, higher intrinsic dimension values imply higher variability among the networks and a larger parameter space coverage. Here, we quantify how much the training choices allow the exploration of the parameter space, finding that a random initialization of the parameters is a stronger source of variability than, progressively, data distortion, dropout, and batch shuffle. We then investigate the combinations of these strategies, the parameters involved, and the impact on the accuracy of the predictions, shedding light on the often-underestimated consequences of these training choices.
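One common way to estimate the intrinsic dimension mentioned above is the two-nearest-neighbor (TwoNN) estimator; below is a minimal sketch, applied here to one flattened parameter vector per network in the ensemble, which is an assumption about the data layout rather than the authors' exact pipeline.

```python
import numpy as np

def twonn_intrinsic_dimension(points):
    """Maximum-likelihood TwoNN estimate of intrinsic dimension.

    points: (n_points, n_features) array, e.g. one flattened parameter
    vector per trained network in the ensemble.
    """
    diffs = points[:, None, :] - points[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(-1))
    np.fill_diagonal(dists, np.inf)
    sorted_d = np.sort(dists, axis=1)
    mu = sorted_d[:, 1] / sorted_d[:, 0]          # ratio of 2nd to 1st neighbor distance
    return len(points) / np.sum(np.log(mu))

# Sanity check: points lying on a 2D plane embedded in 50 dimensions
rng = np.random.default_rng(0)
coords = rng.normal(size=(500, 2)) @ rng.normal(size=(2, 50))
print(twonn_intrinsic_dimension(coords))          # close to 2
```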
April 2025 · 2 Reads
This study investigates the properties of financial markets that arise from the multi-scale structure of volatility, particularly intermittency, by employing robust theoretical tools from nonequilibrium thermodynamics. Intermittency in velocity fields along spatial and temporal axes is a well-known phenomenon in developed turbulence, with extensive research dedicated to its structures and underlying mechanisms. In turbulence, such intermittency is explained through energy cascades, where energy injected at macroscopic scales is transferred to microscopic scales. Similarly, analogous cascade processes have been proposed to explain the intermittency observed in financial time series. In this work, we model volatility cascade processes in the stock market by applying the framework of stochastic thermodynamics to a Langevin system that describes the dynamics. We introduce thermodynamic concepts such as temperature, heat, work, and entropy into the analysis of financial markets. This framework allows for a detailed investigation of individual trajectories of volatility cascades across longer to shorter time scales. Further, we conduct an empirical study primarily using the normalized average of intraday logarithmic stock prices of the constituent stocks in the FTSE 100 Index listed on the London Stock Exchange (LSE), along with two additional data sets from the Tokyo Stock Exchange (TSE). Our Langevin-based model successfully reproduces the empirical distribution of volatility—defined as the absolute value of the wavelet coefficients across time scales—and the cascade trajectories satisfy the Integral Fluctuation Theorem associated with entropy production. A detailed analysis of the cascade trajectories reveals that, for the LSE data set, volatility cascades from larger to smaller time scales occur in a causal manner along the temporal axis, consistent with known stylized facts of financial time series. In contrast, for the two data sets from the TSE, while similar behavior is observed at smaller time scales, anti-causal behavior emerges at longer time scales.
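For reference, the Integral Fluctuation Theorem mentioned above constrains the total entropy production $\Delta s_{\mathrm{tot}}$ (in units of $k_B$) along individual cascade trajectories:

$$ \left\langle e^{-\Delta s_{\mathrm{tot}}} \right\rangle = 1 , $$

which, by Jensen's inequality, implies the second-law-like statement $\langle \Delta s_{\mathrm{tot}} \rangle \ge 0$.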
April 2025 · 5 Reads
In 1948, Claude Shannon published a revolutionary paper on communication and information in engineering, one that made its way into the psychology of perception and changed it for good. However, the path to truly successful applications to psychology has been slow and bumpy. In this article, we present a readable account of that path, explaining the early difficulties as well as the creative solutions offered. The latter include Garner’s theory of sets and redundancy as well as mathematical group theory. These solutions, in turn, enabled rigorous objective definitions to the hitherto subjective Gestalt concepts of figural goodness, order, randomness, and predictability. More recent developments enabled the definition of, in an exact mathematical sense, the key notion of complexity. In this article, we demonstrate, for the first time, the presence of the association between people’s subjective impression of figural goodness and the pattern’s objective complexity. The more attractive the pattern appears to perception, the less complex it is and the smaller the set of subjectively similar patterns.
April 2025 · 3 Reads
In this paper, we study the synchronization of dissipative quantum harmonic oscillators in the framework of a quantum open system via the active–passive decomposition (APD) configuration. We show that two or more quantum systems may be synchronized when the quantum systems of interest are embedded in dissipative environments and influenced by a common classical system. Such a classical system is typically termed a controller, which (1) can drive quantum systems across different regimes (e.g., from periodic to chaotic motions) and (2) constructs the so-called active–passive decomposition configuration, such that all the quantum objects under consideration may be synchronized. The main finding of this paper is that complete synchronization, measured using the standard quantum deviation, may be achieved in both stable regimes (quantum limit cycles) and unstable regimes (quantum chaotic motions). As an example, we numerically show in an optomechanical setup that complete synchronization can be realized in quantum mechanical resonators.
April 2025 · 6 Reads
Short-term patterns in financial time series form the cornerstone of many algorithmic trading strategies, yet extracting these patterns reliably from noisy market data remains a formidable challenge. In this paper, we propose an entropy-assisted framework for identifying high-quality, non-overlapping patterns that exhibit consistent behavior over time. We ground our approach in the premise that historical patterns, when accurately clustered and pruned, can yield substantial predictive power for short-term price movements. To achieve this, we incorporate an entropy-based measure as a proxy for information gain: patterns that lead to high one-sided movements in historical data yet retain low local entropy are more “informative” in signaling future market direction. Compared to conventional clustering techniques such as K-means and Gaussian Mixture Models (GMMs), which often yield biased or unbalanced groupings, our approach emphasizes balance over a forced visual boundary, ensuring that quality patterns are not lost due to over-segmentation. By emphasizing both predictive purity (low local entropy) and historical profitability, our method achieves a balanced representation of Buy and Sell patterns, making it better suited for short-term algorithmic trading strategies. This paper offers an in-depth illustration of our entropy-assisted framework through two case studies on Gold vs. USD and GBPUSD. While these examples demonstrate the method’s potential for extracting high-quality patterns, they do not constitute an exhaustive survey of all possible asset classes.
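A toy sketch of the entropy-based purity measure described above: for each candidate pattern, compute the Shannon entropy of its historical up/down outcomes, so that low entropy marks patterns with consistent one-sided movement. The grouping of outcomes per pattern is an assumption made purely for illustration.

```python
import math
from collections import Counter

def outcome_entropy(outcomes):
    """Shannon entropy (bits) of a list of 'up'/'down' outcomes following a pattern."""
    counts = Counter(outcomes)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

print(outcome_entropy(["up"] * 9 + ["down"]))        # ~0.47 bits: informative pattern
print(outcome_entropy(["up"] * 5 + ["down"] * 5))    # 1.0 bit: uninformative pattern
```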
April 2025 · 2 Reads
Earthquakes, as serious natural disasters, have greatly harmed human beings. In recent years, the combination of acoustic emission technology and information entropy has shown good prospects in earthquake prediction. In this paper, we study the application of acoustic emission b-values and information entropy in earthquake prediction in China and analyze their changing characteristics and roles. The acoustic emission b-value is based on the Gutenberg–Richter law, which quantifies the relationship between magnitude and occurrence frequency. Lower b-values are usually associated with higher earthquake risks. Meanwhile, information entropy is used to quantify the uncertainty of the system, which can reflect the distribution characteristics of seismic events and their dynamic changes. In this study, acoustic emission data from several stations around the 2008 Wenchuan 8.0 earthquake are selected for analysis. By calculating the acoustic emission b-value and information entropy, the following is found: (1) Both the b-value and information entropy show obvious changes before the main earthquake: during the seismic phase, the acoustic emission b-value decreases significantly, and the information entropy also shows obvious decreasing entropy changes. The b-values of stations AXI and DFU continue to decrease in the 40 days before the earthquake, while the b-values of stations JYA and JMG begin to decrease significantly in the 17 days or so before the earthquake. The information entropy changes in the JJS and YZP stations are relatively obvious, especially for the YZP station, which shows stronger aggregation characteristics of seismic activity. This phenomenon indicates that the regional underground structure is in an extremely unstable state. (2) The stress evolution process of the rock mass is divided into three stages: in the first stage, the rock mass enters a sub-stabilized state about 40 days before the main earthquake; in the second stage, the rupture of the cracks changes from a disordered state to an ordered state, which occurs about 10 days before the earthquake; and in the third stage, the impending destabilization of the entire subsurface structure is predicted, which occurs in a short period before the earthquake. In summary, the combined analysis of the acoustic emission b-value and information entropy provides a novel dual-parameter synergy framework for earthquake monitoring and early warning, enhancing precursor recognition through the coupling of stress evolution and system disorder dynamics.
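A minimal sketch of the two quantities the study combines: the b-value via the Aki maximum-likelihood estimator for the Gutenberg–Richter law, and the Shannon entropy of the magnitude distribution. The synthetic catalog, binning, and completeness magnitude below are illustrative assumptions.

```python
import numpy as np

def aki_b_value(magnitudes, m_min):
    """Maximum-likelihood b-value of the Gutenberg-Richter law log10 N = a - b M."""
    m = np.asarray(magnitudes)
    m = m[m >= m_min]                               # keep events above completeness
    return np.log10(np.e) / (m.mean() - m_min)

def magnitude_entropy(magnitudes, bins=10):
    """Shannon entropy of the event-magnitude histogram."""
    counts, _ = np.histogram(magnitudes, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log(p))

mags = np.random.default_rng(1).exponential(scale=0.45, size=2000) + 2.0  # synthetic catalog
print(aki_b_value(mags, m_min=2.0))                 # roughly 1 / (0.45 ln 10), i.e. about 1.0
print(magnitude_entropy(mags))
```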
April 2025 · 2 Reads
Recent semantic communication methods explore effective ways to expand the communication paradigm and improve the performance of communication systems. Nonetheless, a common problem with these methods is that the essence of semantics is not explicitly pointed out and directly utilized. A new epistemology suggests that synonymity, which is revealed as the fundamental feature of semantics, guides the establishment of semantic information theory from a novel viewpoint. Building on this theoretical basis, this paper proposes a semantic arithmetic coding (SAC) method for semantic lossless compression using intuitive synonymity. By constructing reasonable synonymous mappings and performing arithmetic coding procedures over synonymous sets, SAC can achieve higher compression efficiency for meaning-contained source sequences at the semantic level and approximate the semantic entropy limits. Experimental results on edge texture map compression show a significant improvement in coding efficiency using SAC without semantic losses compared to traditional arithmetic coding, demonstrating its effectiveness.
April 2025
The rapid development of diffusion models in image generation and processing has led to significant security concerns. Diffusion models are capable of producing highly realistic images that are indistinguishable from real ones. Although deploying a watermarking system can be a countermeasure to verify the ownership or the origin of images, the regeneration attacks arising from diffusion models can easily remove the embedded watermark from the images, without compromising their perceptual quality. Previous watermarking methods that hide watermark information in the carrier image are vulnerable to these newly emergent attacks. To address these challenges, we propose a robust and traceable watermark framework based on the latent diffusion model, where the spread-spectrum watermark is coupled with the diffusion noise to ensure its security and imperceptibility. Since the diffusion model is trained to reduce information entropy from disordered data to restore its true distribution, the transparency of the hidden watermark is guaranteed. Benefiting from the spread spectrum strategy, the decoder structure is no longer needed for watermark extraction, greatly alleviating the training overhead. Additionally, the robustness and transparency are easily controlled by a strength factor, whose operating range is studied in this work. Experimental results demonstrate that our method is robust not only against common attacks but also against regeneration attacks and semantic-based image editing.
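A toy numpy sketch of the spread-spectrum idea described above: a watermark bit is spread over a shared pseudo-random carrier added to the latent noise and recovered by correlation, so no learned decoder is needed. The vector length, embedding strength, and detection rule are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(42)
carrier = rng.choice([-1.0, 1.0], size=4096)       # shared pseudo-random spreading sequence
strength = 0.1                                     # embedding strength factor (assumed)

def embed(latent_noise, bit):
    """Couple one watermark bit with the diffusion latent noise."""
    return latent_noise + strength * (1 if bit else -1) * carrier

def detect(latent_noise):
    """Recover the bit by correlating with the shared carrier."""
    correlation = np.dot(latent_noise, carrier) / len(carrier)
    return correlation > 0

noise = rng.standard_normal(4096)
marked = embed(noise, bit=1)
print(detect(marked))                               # True for the watermarked latent
print(detect(rng.standard_normal(4096)))            # essentially a coin flip for unmarked noise
```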
April 2025
This paper introduces a novel variational autoencoder model termed DVAE to prevent posterior collapse in text modeling. DVAE employs a dual-path architecture within its decoder: path A and path B. Path A feeds text instances directly into the decoder, whereas path B replaces a subset of word tokens in the text instances with a generic unknown token before they are input into the decoder. A stopping strategy is implemented, wherein both paths are concurrently active during the early phases of training; as the model progresses towards convergence, path B is removed. To further refine performance, a KL weight dropout method is employed, which randomly sets certain dimensions of the KL weight to zero during the annealing process. Through path B, DVAE compels the latent variables to encode more information about the input texts and fully utilizes the expressiveness of the decoder, while path A and the stopping strategy help the model avoid the local optimum associated with keeping path B active. Furthermore, the KL weight dropout method increases the number of active units within the latent variables. Experimental results show the excellent performance of DVAE in density estimation, representation learning, and text generation.
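A small numpy sketch of the KL weight dropout idea described above: during annealing, individual dimensions of the KL weight are randomly zeroed, so only a random subset of latent dimensions is penalized at each step. The dropout rate and the linear annealing schedule are placeholders, not the paper's hyperparameters.

```python
import numpy as np

def kl_weight_with_dropout(step, total_steps, latent_dim, drop_rate=0.3, rng=None):
    """Per-dimension KL weight: linear annealing times a random 0/1 dropout mask."""
    rng = rng or np.random.default_rng()
    anneal = min(1.0, step / float(total_steps))               # linear KL annealing
    mask = (rng.random(latent_dim) > drop_rate).astype(float)  # zero out some dimensions
    return anneal * mask

def kl_term(mu, logvar, weights):
    """Weighted KL(q(z|x) || N(0, I)) summed over latent dimensions."""
    kl_per_dim = 0.5 * (np.exp(logvar) + mu ** 2 - 1.0 - logvar)
    return np.sum(weights * kl_per_dim)

mu, logvar = np.full(32, 0.5), np.full(32, -0.2)
w = kl_weight_with_dropout(step=500, total_steps=1000, latent_dim=32)
print(kl_term(mu, logvar, w))
```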
April 2025 · 3 Reads
The review presents arguments emphasizing the importance of using the entropic measure of time (EMT) in the study of irreversibly evolving systems, and shows what this measure offers for obtaining the laws of system evolution. It is demonstrated that the EMT provides a novel and unified perspective on the principle of maximum entropy production (MEPP), which is established in the physics of irreversible processes, as well as on the laws of growth and evolution proposed in biology. Essentially, for irreversible processes, the proposed approach allows one, in a certain sense, to identify concepts such as the duration of existence, the MEPP, and natural selection. The EMT has been used to generalize prior results, indicating that the intrinsic time of a system depends logarithmically on extrinsic (Newtonian) time.
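The logarithmic dependence referred to above can be written schematically (generic symbols, not the review's notation) as

$$ \tau_{\mathrm{intrinsic}} \;\propto\; \ln\!\left(t / t_0\right), $$

so that equal increments of the entropic (intrinsic) time correspond to multiplicative increments of Newtonian time t.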
April 2025
This paper presents a novel unsupervised domain adaptation (UDA) framework that integrates information-theoretic principles to mitigate distributional discrepancies between source and target domains. The proposed method incorporates two key components: (1) relative entropy regularization, which leverages Kullback–Leibler (KL) divergence to align the predicted label distribution of the target domain with a reference distribution derived from the source domain, thereby reducing prediction uncertainty; and (2) measure propagation, a technique that transfers probability mass from the source domain to generate pseudo-measures—estimated probabilistic representations—for the unlabeled target domain. This dual mechanism enhances both global feature alignment and semantic consistency across domains. Extensive experiments on benchmark datasets (OfficeHome and DomainNet) demonstrate that the proposed approach consistently outperforms State-of-the-Art methods, particularly in scenarios with significant domain shifts. These results confirm the robustness, scalability, and theoretical grounding of our framework, offering a new perspective on the fusion of information theory and domain adaptation.
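A minimal numpy sketch of the relative-entropy regularizer described above: the batch-averaged predicted label distribution on the target domain is pulled toward a reference distribution derived from the source domain. The reference distribution and the batch are placeholders for illustration.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) for discrete label distributions."""
    p, q = np.asarray(p) + eps, np.asarray(q) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def relative_entropy_regularizer(target_probs, source_label_dist):
    """Align the average predicted target label distribution with the source reference."""
    avg_pred = target_probs.mean(axis=0)          # batch-averaged class probabilities
    return kl_divergence(avg_pred, source_label_dist)

target_probs = np.random.dirichlet(np.ones(5), size=64)   # softmax outputs on a target batch
source_label_dist = np.array([0.3, 0.3, 0.2, 0.1, 0.1])   # class frequencies in source data
print(relative_entropy_regularizer(target_probs, source_label_dist))
```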
April 2025 · 1 Read
By exploiting the rich automorphisms of Reed–Muller (RM) codes, the recently developed automorphism ensemble (AE) successive cancellation (SC) decoder achieves a near-maximum-likelihood (ML) performance for short block lengths. However, the appealing performance of AE-SC decoding arises from the diversity gain that requires a list of SC decoding attempts, which results in a high decoding complexity. To address this issue, this paper proposes a novel quasi-optimal path convergence (QOPC)-aided early termination (ET) technique for AE-SC decoding. This technique detects strong convergence between the partial path metrics (PPMs) of SC constituent decoders to reliably identify the optimal decoding path at runtime. When the QOPC-based ET criterion is satisfied during the AE-SC decoding, only the identified path is allowed to proceed for a complete codeword estimate, while the remaining paths are terminated early. The numerical results demonstrated that for medium-to-high-rate RM codes in the short-length regime, the proposed QOPC-aided ET method incurred negligible performance loss when applied to fully parallel AE-SC decoding. Meanwhile, it achieved a complexity reduction that ranged from 35.9% to 47.4% at a target block error rate (BLER) of 10−3, where it consistently outperformed a state-of-the-art path metric threshold (PMT)-aided ET method. Additionally, under a partially parallel framework of AE-SC decoding, the proposed QOPC-aided ET method achieved a greater complexity reduction that ranged from 81.3% to 86.7% at a low BLER that approached 10−5 while maintaining a near-ML decoding performance.
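A schematic sketch of the convergence test described above: at a checkpoint during AE-SC decoding, compare the partial path metrics (PPMs) of the constituent decoders and terminate all but the best path once its lead exceeds a threshold. The metric convention (lower is better) and the gap-based rule are assumptions for illustration, not the paper's exact criterion.

```python
def qopc_early_termination(partial_path_metrics, threshold):
    """Return the index of the path to keep, or None if no path has converged yet.

    partial_path_metrics: one PPM per SC constituent decoder (lower is better).
    threshold: required gap between the best and second-best PPM (assumed rule).
    """
    order = sorted(range(len(partial_path_metrics)), key=lambda i: partial_path_metrics[i])
    best, runner_up = order[0], order[1]
    gap = partial_path_metrics[runner_up] - partial_path_metrics[best]
    return best if gap >= threshold else None

print(qopc_early_termination([3.2, 9.8, 11.5, 10.1], threshold=4.0))  # 0: terminate the others early
print(qopc_early_termination([3.2, 4.0, 11.5, 10.1], threshold=4.0))  # None: keep decoding all paths
```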
April 2025 · 5 Reads
Quantum computing gives direct access to the study of the real-time dynamics of quantum many-body systems. In principle, it is possible to directly calculate non-equal-time correlation functions, from which one can detect interesting phenomena, such as the presence of quantum scars or dynamical quantum phase transitions. In practice, these calculations are strongly affected by noise, due to the complexity of the required quantum circuits. As a testbed for the evaluation of the real-time evolution of observables and correlations, the dynamics of the Zn Schwinger model in a one-dimensional lattice is considered. To control the computational cost, we adopt a quantum–classical strategy that reduces the dimensionality of the system by restricting the dynamics to the Dirac vacuum sector and optimizes the embedding into a qubit model by minimizing the number of three-qubit gates. The time evolution of particle-density operators in a non-equilibrium quench protocol is both simulated in a bare noisy condition and implemented on a physical IBM quantum device. In either case, the convergence towards a maximally mixed state is targeted by means of different error mitigation techniques. The evaluation of the particle-density correlation shows a well-performing post-processing error mitigation for properly chosen coupling regimes.
April 2025 · 2 Reads
The Transformer-based target detection model, DETR, has powerful feature extraction and recognition capabilities, but its high computational and storage requirements limit its deployment on resource-constrained devices. To solve this problem, we first replace the ResNet-50 backbone network in DETR with Swin-T, which realizes the unification of the backbone network with the Transformer encoder and decoder under the same Transformer processing paradigm. On this basis, we propose a quantized inference scheme based entirely on integers, which effectively serves as a data compression method for reducing memory occupation and computational complexity. Unlike previous approaches that only quantize the linear layer of DETR, we further apply integer approximation to all non-linear operational layers (e.g., Sigmoid, Softmax, LayerNorm, GELU), thus realizing the execution of the entire inference process in the integer domain. Experimental results show that our method reduces the computation and storage to 6.3% and 25% of the original model, respectively, while the average accuracy decreases by only 1.1%, which validates the effectiveness of the method as an efficient and hardware-friendly solution for target detection.
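A toy sketch of one integer-only non-linear layer of the kind described above: a fixed-point softmax that subtracts the per-row maximum and looks up exponentials from a precomputed integer table. The bit widths, input scale, and table size are illustrative assumptions; the paper's approximations for Sigmoid, LayerNorm, and GELU are analogous in spirit but not reproduced here.

```python
import numpy as np

SCALE = 256                                   # fixed-point scale (8 fractional bits), assumed
TABLE_SIZE = 1024                             # covers quantized inputs in [-TABLE_SIZE + 1, 0]
EXP_LUT = np.round(SCALE * np.exp(np.arange(-TABLE_SIZE + 1, 1) / 64.0)).astype(np.int64)

def integer_softmax(q_logits):
    """Integer-only softmax over the last axis of int64 quantized logits (input scale 1/64)."""
    q = q_logits - q_logits.max(axis=-1, keepdims=True)      # non-positive integers
    idx = np.clip(q + TABLE_SIZE - 1, 0, TABLE_SIZE - 1)     # index into the lookup table
    exp_q = EXP_LUT[idx]                                      # integer exponentials
    denom = exp_q.sum(axis=-1, keepdims=True)
    return (exp_q * SCALE) // denom                           # probabilities scaled by SCALE

q = np.array([[64, 0, -64]], dtype=np.int64)                 # logits 1.0, 0.0, -1.0 at scale 1/64
print(integer_softmax(q))                                     # approx [170, 62, 23], summing to ~SCALE
```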