Recent publications
The recent O-RAN specifications promote the evolution of the radio access network (RAN) architecture through function disaggregation, the adoption of open interfaces, and the instantiation of a hierarchical closed-loop control architecture managed by RAN Intelligent Controller (RIC) entities. This paves the road to novel data-driven network management approaches based on programmable logic. Aided by artificial intelligence (AI) and machine learning (ML), novel solutions targeting traditionally unsolved RAN management issues can be devised. Nevertheless, the adoption of such smart and autonomous systems is limited by the current inability of human operators to understand the decision process of such AI/ML solutions, which affects their trust in these novel tools. Explainable AI (XAI) aims to solve this issue, enabling human users to better understand and effectively manage the emerging generation of artificially intelligent schemes and reducing the human-to-machine barrier. In this survey, we provide a summary of XAI methods and metrics before studying their deployment over the O-RAN Alliance RAN architecture and its main building blocks. We then present various use cases and discuss the automation of XAI pipelines for O-RAN as well as the underlying security aspects. We also review projects and standards that tackle this area. Finally, we identify challenges and research directions that may arise from the heavy adoption of AI/ML decision entities in this context, focusing on how XAI can help to interpret, understand, and improve trust in O-RAN operational networks.
Quality of transmission (QoT) prediction is a fundamental function in optical networks. It is typically embedded within a digital twin and used for operational tasks, including service establishment, service rerouting, and (per-channel or per-amplifier) power management, to optimize the working point of services and hence maximize their capacity. Inaccuracy in QoT prediction results in additional, unwanted design margins. A key contributor to QoT inaccuracy is uncertain knowledge of fiber insertion loss, e.g., the attenuation due to connector losses at the beginning or end of each fiber span, as such loss cannot be directly monitored. Indeed, insertion losses drive the choice of the launch power in fiber spans, which in turn drives key physical effects, including the Kerr and stimulated Raman scattering (SRS) effects, that affect services' QoT. It is thus important to estimate fiber insertion losses at each span and to detect possibly anomalous ones. We therefore propose a novel active input refinement (AIR) technique that uses active probing to estimate insertion losses in C and C + L systems. Here, active probing consists of adjusting amplifier gains span by span to slightly alter SRS. The amount of adjustment must be large enough to be measurable (so that insertion losses can be inferred from the measurements) but small enough to have a negligible impact on running services in a live network. The method is validated by simulations on a European network with 30 optical multiplex sections (OMSs) in C and C + L configurations and by lab experiments on a C-band network, demonstrating that AIR significantly improves insertion loss estimation, network QoT optimization, and QoT prediction compared with other state-of-the-art monitoring techniques. This work underscores the critical role of accurate estimation of QoT inputs in enhancing optical network performance.
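To make the estimation step concrete, below is a minimal Python sketch under strong simplifying assumptions: a toy response model (toy_response, a stand-in for the actual SRS/QoT physics used by AIR) maps per-span insertion losses and small probing gain offsets to measured power tilts, and the per-span losses are recovered by nonlinear least squares over several probes. The model, noise level, and all names are hypothetical; the sketch only illustrates how active probing can make otherwise unmonitored losses identifiable.

# Hypothetical toy model; the real AIR technique relies on a physical SRS model.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
n_spans = 10

def toy_response(losses_db, gain_offsets_db):
    """Toy stand-in for an SRS/QoT model: per-span measured tilt (dB)."""
    # Launch power entering each span depends on the upstream connector loss
    # and on the probing gain offset; the quadratic term mimics a
    # power-dependent SRS tilt.
    launch = gain_offsets_db - losses_db
    return 0.3 * launch + 0.02 * launch**2

true_losses = rng.uniform(0.2, 1.5, n_spans)                   # unknown losses (dB)
probes = [rng.uniform(-0.5, 0.5, n_spans) for _ in range(6)]   # small gain tweaks (dB)

# Noisy measurements collected during probing (monitor noise ~0.02 dB).
measurements = [toy_response(true_losses, p) + rng.normal(0, 0.02, n_spans)
                for p in probes]

def residuals(loss_guess):
    return np.concatenate([toy_response(loss_guess, p) - m
                           for p, m in zip(probes, measurements)])

fit = least_squares(residuals, x0=np.full(n_spans, 0.8), bounds=(0.0, 3.0))
print("max abs estimation error (dB):", np.max(np.abs(fit.x - true_losses)))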
Pangenome graphs can represent all variation between multiple reference genomes, but current approaches to build them exclude complex sequences or are based upon a single reference. In response, we developed the PanGenome Graph Builder, a pipeline for constructing pangenome graphs without bias or exclusion. The PanGenome Graph Builder uses all-to-all alignments to build a variation graph in which we can identify variation, measure conservation, detect recombination events and infer phylogenetic relationships.
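As a purely illustrative aside (not part of the PanGenome Graph Builder itself, and with hypothetical class and field names), a variation graph can be pictured as nodes carrying sequence, edges linking adjacent nodes, and each input genome stored as a path over node IDs; nodes shared by all paths indicate conservation, while divergent nodes form variant bubbles. A minimal Python sketch:

from dataclasses import dataclass, field

@dataclass
class VariationGraph:
    nodes: dict[int, str] = field(default_factory=dict)        # node id -> sequence
    edges: set[tuple[int, int]] = field(default_factory=set)   # directed links
    paths: dict[str, list[int]] = field(default_factory=dict)  # genome -> node walk

    def add_path(self, name: str, walk: list[int]) -> None:
        self.paths[name] = walk
        self.edges.update(zip(walk, walk[1:]))

    def conserved_nodes(self) -> set[int]:
        """Nodes traversed by every path, i.e., shared by all genomes."""
        walks = [set(w) for w in self.paths.values()]
        return set.intersection(*walks) if walks else set()

# Two genomes differing by a single substitution (node 2 vs node 3).
g = VariationGraph(nodes={1: "ACGT", 2: "A", 3: "G", 4: "TTGA"})
g.add_path("genome1", [1, 2, 4])
g.add_path("genome2", [1, 3, 4])
print(g.conserved_nodes())  # {1, 4}; nodes 2 and 3 form a variant bubble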
Morphological operators are crucial in image analysis. Their integration into deep learning pipelines could improve performance by extracting or enhancing important image features, either within network architectures or within loss functions. However, the difficulty of rendering these operators differentiable hinders their integration. In this paper, we present SoftMorph, a novel framework designed to convert any binary morphological operator defined as a Boolean expression into its differentiable and probabilistic counterpart, compatible with gradient-based optimization. Specifically, we define probabilistic operators as the expectation of the binary operator with respect to the probability of generating each binary configuration. This expectation can be computed trivially from the truth table of the binary morphological filter as a multilinear polynomial function. Moreover, we approximate the probabilistic operators with quasi-probabilistic operators translated directly from the Boolean expressions by leveraging fuzzy logic. These quasi-probabilistic operators therefore maintain the computational complexity of the original binary operator. We demonstrate the efficiency and reliability of our method through validation experiments and evaluate the backpropagation capability of the proposed operators. Finally, we showcase several applications of morphological operators integrated into neural networks for image segmentation tasks.
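As an illustration of the fuzzy-logic translation described above, restricted to the two simplest operators (erosion as the AND of a 3x3 neighbourhood, dilation as the OR), here is a short PyTorch sketch; the function names are ours, not the SoftMorph API, and a full implementation would handle arbitrary Boolean expressions and structuring elements.

import torch
import torch.nn.functional as F

def _neighbourhoods(p: torch.Tensor) -> torch.Tensor:
    """3x3 neighbourhood values for every pixel; p: (N, 1, H, W) in [0, 1]."""
    return F.unfold(p, kernel_size=3, padding=1)        # shape (N, 9, H*W)

def soft_erosion(p: torch.Tensor) -> torch.Tensor:
    n, _, h, w = p.shape
    # Fuzzy AND over the window: product of the nine soft values.
    return _neighbourhoods(p).prod(dim=1).view(n, 1, h, w)

def soft_dilation(p: torch.Tensor) -> torch.Tensor:
    n, _, h, w = p.shape
    # Fuzzy OR over the window: 1 - product of (1 - value).
    return (1.0 - (1.0 - _neighbourhoods(p)).prod(dim=1)).view(n, 1, h, w)

# Gradients flow through both operators, so they can sit inside a loss.
p = torch.rand(1, 1, 32, 32, requires_grad=True)
loss = soft_dilation(soft_erosion(p)).mean()            # a soft "opening"
loss.backward()
print(p.grad.abs().sum() > 0)                           # tensor(True)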
Threat modeling (TM) is essential to manage, prevent, and fix security and privacy issues in our society. TM requires a data model to represent threats and tools to exploit such data. Current TM data models and tools have significant limitations that prevent their use in real-world scenarios. For example, it is challenging to threat-model embedded devices with current data models and tools because they cannot model a device's hardware, firmware, and low-level software. Moreover, it is impossible to threat-model a device's lifecycle or its security-privacy tradeoffs, as these data models and tools were developed for other use cases (e.g., software security or user privacy).
We fill this gap by presenting the AttackDefense Framework (ADF), which provides a novel data model and related tools to augment TM. ADF's building block is the AD object, which can be used to represent heterogeneous and complex threats. Moreover, ADF provides automation to process collections of AD objects, including ways to create sets, maps, chains, trees, and wordclouds of AD objects. We also present a toolkit that implements ADF and is composed of four modules (Catalog, Parse, Check, and Analyze).
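As a loose illustration only, with hypothetical field and class names that are not the actual ADF schema, an AD object can be pictured as an attack paired with its defenses, and a chain of AD objects as an attack path:

from dataclasses import dataclass, field

@dataclass
class AD:
    attack: str                                  # short name of the threat
    surface: str                                 # e.g., "hardware", "firmware", "protocol"
    vector: str                                  # how the attack is delivered
    defenses: list[str] = field(default_factory=list)
    tags: set[str] = field(default_factory=set)

def chain(ads: list[AD]) -> str:
    """Render a chain of AD objects as an attack-path summary."""
    return " -> ".join(ad.attack for ad in ads)

glitch = AD("voltage fault injection", "hardware", "physical access",
            defenses=["brown-out detector", "redundant checks"],
            tags={"fault-injection"})
extract = AD("seed extraction via unlocked debug port", "firmware", "JTAG/SWD",
             defenses=["debug lockout", "encrypted key storage"],
             tags={"invasive"})

print(chain([glitch, extract]))
# voltage fault injection -> seed extraction via unlocked debug port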
We confirm that the data model and tools provided by ADF are useful by running an extensive set of experiments while threat modeling a crypto wallet and its lifecycle. Our experiments involved seven expert groups from academia and industry, each using ADF on an orthogonal threat class. The evaluation generated 175 high-quality ADs covering the ISA/IEC 62443-4-1 SecDev Lifecycle, side channels, fault injection, microarchitectural attacks, speculative execution, pre-silicon testing, invasive physical chip modifications, Bluetooth protocol and implementation threats, and FIDO2 authentication.
We characterize learnability for quantum measurement classes by establishing matching necessary and sufficient conditions for their probably approximately correct (PAC) learnability, along with corresponding sample complexity bounds, in the setting where the learner is given access only to prepared quantum states. We first show that the empirical risk minimization (ERM) rule proposed in previous work is not universal, nor does uniform convergence of the empirical risk characterize learnability. Moreover, we show that VC dimension generalization bounds in previous work are in many cases infinite, even for measurement classes defined on a finite-dimensional Hilbert space and even for learnable classes. To surmount the failure of the standard ERM to satisfy uniform convergence, we define a new learning rule, denoised empirical risk minimization. We show this to be a universal learning rule for both classical probabilistically observed concept classes and quantum measurement classes, and that the condition for it to satisfy uniform convergence is a finite fat-shattering dimension of the class. The fat-shattering dimension of a hypothesis class is a measure of complexity that intervenes in sample complexity bounds for regression in classical learning theory. We give sample complexity upper and lower bounds for learnability in terms of finite fat-shattering dimension and approximate finite partitionability into approximately jointly measurable subsets. We link the fat-shattering dimension with partitionability into approximately jointly measurable subsets, leading to our matching conditions. We also show that every measurement class defined on a finite-dimensional Hilbert space is PAC learnable. We illustrate our results on several example POVM classes.
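For reference, the fat-shattering dimension mentioned above is the standard scale-sensitive complexity measure from classical learning theory (this is textbook material, not a contribution of the paper): a set $S=\{x_1,\dots,x_d\}$ is $\gamma$-shattered by a class $\mathcal{F}$ of $[0,1]$-valued functions if there exist witnesses $r_1,\dots,r_d$ such that for every labelling $b\in\{0,1\}^d$ some $f_b\in\mathcal{F}$ satisfies
\[
  f_b(x_i) \ge r_i + \gamma \ \text{if } b_i = 1,
  \qquad
  f_b(x_i) \le r_i - \gamma \ \text{if } b_i = 0,
\]
and $\mathrm{fat}_\gamma(\mathcal{F})$ is the largest such $d$ (or $\infty$ if arbitrarily large sets are $\gamma$-shattered).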