Recent publications
Liver resection is a complex procedure requiring precise removal of tumors while preserving viable tissue. This study proposes a novel approach for automated liver resection planning, using segmentations of the liver, vessels, and tumors from CT scans to predict the future liver remnant (FLR), aiming to improve pre-operative planning accuracy and patient outcomes.
This study evaluates deep convolutional and Transformer-based networks under various computational setups. Using different combinations of anatomical and pathological delineation masks, we assess the contribution of each structure. The method is initially tested with ground-truth masks for feasibility and later validated with predicted masks from a deep learning model.
The experimental results highlight the importance of incorporating anatomical and pathological masks for accurate FLR delineation. Among the tested configurations, the best-performing model achieves an average Dice score of approximately 0.86, closely matching the inter-observer variability reported in the literature. It also achieves an average symmetric surface distance of 0.95 mm, demonstrating its precision in capturing the fine-grained structural details critical for pre-operative planning.
This study highlights the potential of fully-automated FLR segmentation pipelines in liver pre-operative planning. Our approach holds promise for reducing the time and variability associated with manual delineation. Such a method can support better decision-making in liver resection planning by delivering accurate and consistent segmentation results. Future studies should explore its integration into clinical workflows.
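For readers unfamiliar with the two reported metrics, the sketch below shows how a Dice score and an average symmetric surface distance can be computed from binary masks. This is a generic illustration, not the authors' evaluation code; the voxel spacing is an assumed input.

    import numpy as np
    from scipy.ndimage import binary_erosion, distance_transform_edt

    def dice(pred, gt):
        """Dice overlap between two binary masks (e.g., predicted vs. ground-truth FLR)."""
        pred, gt = pred.astype(bool), gt.astype(bool)
        return 2.0 * np.logical_and(pred, gt).sum() / (pred.sum() + gt.sum())

    def assd(pred, gt, spacing=(1.0, 1.0, 1.0)):
        """Average symmetric surface distance (mm), given the CT voxel spacing."""
        pred, gt = pred.astype(bool), gt.astype(bool)
        surf_p = pred & ~binary_erosion(pred)   # surface voxels of the prediction
        surf_g = gt & ~binary_erosion(gt)       # surface voxels of the ground truth
        d_to_g = distance_transform_edt(~surf_g, sampling=spacing)  # distance to GT surface
        d_to_p = distance_transform_edt(~surf_p, sampling=spacing)  # distance to pred surface
        return (d_to_g[surf_p].sum() + d_to_p[surf_g].sum()) / (surf_p.sum() + surf_g.sum())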
In this article, we present a simulation tool for modeling a quasi-optical bench for material characterization. The model combines a Gaussian beam expansion and tracking analysis with a modal analysis, enabling a comparison of the simulated reflection and transmission S-parameters with those measured by a 4-port vector network analyzer. A Thru-Reflect-Line calibration is performed to de-embed the simulated S-parameters of a dielectric slab located between two lens antennas, showing good agreement with the analytical slab model used for experimental permittivity extraction.
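As context, here is a minimal sketch of the normal-incidence analytical slab model commonly used as the reference for permittivity extraction. The Gaussian-beam and lens-antenna specifics of the bench are deliberately left out, and the function name is ours.

    import numpy as np

    C0 = 299_792_458.0  # speed of light, m/s

    def slab_s_params(freq_hz, eps_r, d_m):
        """Normal-incidence S11/S21 of a non-magnetic dielectric slab in free space.

        freq_hz: array of frequencies; eps_r: (complex) relative permittivity;
        d_m: slab thickness in meters.
        """
        n = np.sqrt(eps_r)                 # refractive index of the slab
        gamma = (1 - n) / (1 + n)          # air-dielectric interface reflection
        k0 = 2 * np.pi * freq_hz / C0
        p = np.exp(-1j * k0 * n * d_m)     # one-way propagation through the slab
        denom = 1 - gamma**2 * p**2
        s11 = gamma * (1 - p**2) / denom
        s21 = (1 - gamma**2) * p / denom
        return s11, s21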
Charmonium production in high-energy collisions can be split into a prompt and a non-prompt component. The two components can be distinguished experimentally by studying the displaced decay topology, and they represent valuable tools to investigate the properties of the strongly interacting medium produced in ultrarelativistic heavy-ion collisions. In particular, the study of non-prompt charmonium production gives access to the beauty-hadron production cross section and can be used to investigate the in-medium energy loss of beauty quarks. In these proceedings, the recent measurement of prompt and non-prompt J/ψ carried out by the ALICE Collaboration in Pb–Pb collisions at √sNN = 5.02 TeV at midrapidity (|y| < 0.8) is presented. Moreover, thanks to the installation of the new muon forward tracker (MFT), prompt/non-prompt charmonium separation will also be possible at forward rapidity (2.5 < y < 4) in LHC Run 3.
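As background on the displaced topology: the separation is typically performed with the pseudoproper decay length, which in standard notation (not quoted from these proceedings) reads

    \ell_{J/\psi} = \frac{c \, m_{J/\psi} \, \vec{L} \cdot \vec{p}_{\mathrm{T}}}{|\vec{p}_{\mathrm{T}}|^{2}},

where \vec{L} is the vector from the primary vertex to the J/ψ decay vertex and \vec{p}_{\mathrm{T}} the candidate's transverse momentum. Prompt J/ψ cluster around zero, while non-prompt J/ψ from beauty-hadron decays populate positive values.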
We investigate the influence of the equation of state (EoS) of the strongly interacting matter created in heavy-ion collisions on light cluster and hypernuclei production within the Parton-Hadron-Quantum-Molecular Dynamics (PHQMD) microscopic transport approach. In earlier PHQMD calculations, nucleon interactions were modeled using a static, density-dependent potential corresponding to a soft or hard equation of state. In this study, we incorporate a momentum-dependent potential for the baryon-baryon interaction, derived from the soft EoS, and study its influence on light cluster production.
We describe quarkonium production in pp and heavy-ion collisions using Remler's formalism, in which the quarkonium density operator is applied to all possible combinations of heavy quark and heavy antiquark pairs. In pp collisions, the heavy (anti)quark momenta are provided by the PYTHIA event generator, with pT and rapidity rescaled to imitate FONLL calculations; a spatial separation between heavy quark and heavy antiquark is then introduced based on the uncertainty principle. In heavy-ion collisions, the quarkonium wavefunction changes with temperature, assuming the heavy-quark potential equals the free energy of a heavy quark pair in a heat bath. The density operator is updated whenever a heavy quark or heavy antiquark scatters in the quark-gluon plasma (QGP). Our results are consistent with the experimental data on bottomonia from the ALICE and CMS Collaborations, assuming that the interaction rate of a bottom (anti)quark bound in bottomonium is suppressed to 10% of that of an unbound bottom (anti)quark. We also find that off-diagonal recombination of bottomonium from two different initial pairs barely happens, even in Pb+Pb collisions at √sNN = 5.02 TeV.
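Schematically, the Remler prescription evaluates the quarkonium multiplicity as a Wigner-function projection over all heavy quark–antiquark pairs (our notation, intended only to orient the reader):

    N_\Psi = \sum_{i,\bar{\jmath}} \int \frac{d^3 r \, d^3 p}{(2\pi\hbar)^3} \, W_\Psi(r, p) \, \rho_{i\bar{\jmath}}(r, p),

where W_\Psi is the quarkonium Wigner function (temperature-dependent in the QGP) and \rho_{i\bar{\jmath}} the phase-space density of the pair (i, \bar{\jmath}) in relative coordinates. The contribution of pairs whose quark and antiquark originate from two different initial pairs is what the off-diagonal recombination term refers to.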
The intricate structural and functional architecture of the brain enables a wide range of cognitive processes, from perception and action to higher-order abstract thinking. Despite important progress, the relationship between the brain's structural and functional properties is not yet fully established. In particular, the way the brain's anatomy shapes its electrophysiological dynamics remains elusive. Electroencephalography (EEG) activity recorded during naturalistic tasks is thought to exhibit patterns of coupling with the underlying brain structure that vary as a function of behavior, yet these patterns have not been sufficiently quantified. We address this gap by jointly examining individual Diffusion-Weighted Imaging (DWI) scans and continuous EEG recorded during video watching and resting state, using a Graph Signal Processing (GSP) framework. By decomposing the structural graph into eigenmodes and expressing the EEG activity as an extension of anatomy, GSP provides a way to quantify structure–function coupling, which we characterize in a region-, time-, and frequency-resolved manner. First, our findings indicate that EEG activity in the sensorimotor cortex is strongly coupled with brain structure, while activity in higher-order systems is less constrained by anatomy, that is, shows more flexibility; in addition, watching videos was associated with stronger structure–function coupling in the sensorimotor cortex than resting state. Second, time-resolved analysis revealed that unimodal systems undergo minimal temporal fluctuation in the structure–function association, whereas transmodal systems display the highest temporal fluctuations, with the exception of the posterior cingulate cortex (PCC), which shows low fluctuations. Lastly, frequency-resolved analysis revealed a consistent topography across EEG rhythms, suggesting a similar relationship with the anatomical structure across frequency bands. Taken together, by combining the temporal and spectral resolution of EEG with the methodological advantages of GSP, this characterization of the structure–function link during naturalistic behavior underscores the role of anatomy in shaping ongoing cognitive processes.
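To make the GSP framework concrete, here is a minimal sketch of this kind of coupling measure, assuming a structural connectivity matrix W (regions × regions, e.g., from DWI tractography) and region-level EEG activity X. The split into low- and high-frequency eigenmodes follows common GSP practice and is not necessarily the authors' exact choice.

    import numpy as np

    def structure_function_coupling(W, X, n_low=10):
        """W: (N, N) structural connectivity; X: (N, T) region-level EEG activity.

        Returns a per-region coupling index: energy of X carried by the n_low
        smoothest structural eigenmodes relative to the remaining ones.
        """
        L = np.diag(W.sum(axis=1)) - W             # combinatorial graph Laplacian
        eigvals, U = np.linalg.eigh(L)             # eigenmodes, sorted by smoothness
        X_hat = U.T @ X                            # graph Fourier transform of the EEG
        x_low = U[:, :n_low] @ X_hat[:n_low]       # structure-aligned component
        x_high = U[:, n_low:] @ X_hat[n_low:]      # structure-liberal component
        # strongly coupled regions: most activity lives on smooth eigenmodes
        return (x_low**2).sum(axis=1) / ((x_high**2).sum(axis=1) + 1e-12)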
This study evaluates the performance of analog-based methodologies for statistically predicting the longitudinal velocity in a turbulent flow. The data come from hot-wire measurements in the Modane wind tunnel. We compare different methods and explore the impact of varying the number of analogs and their size on prediction accuracy. We show that the innovation, defined as the difference between the true velocity value and the predicted value, highlights particularly unpredictable events, which we directly link to extreme events of the velocity gradients and hence to intermittency. A statistical study of the innovation indicates that while the estimator effectively captures linear correlations, it fails to fully capture higher-order dependencies. The innovation thus underscores the presence of intermittency, revealing the limitations of current predictive models and suggesting directions for future improvements in turbulence forecasting.
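A minimal sketch of analog-based prediction is given below, assuming a 1-D catalog of past hot-wire samples; the distance weighting is one common choice among several, and the parameter names are ours.

    import numpy as np

    def analog_forecast(train, query, k=50, m=8):
        """Predict the next velocity sample from the k nearest analogs.

        train : 1-D array of past velocity samples (the analog catalog)
        query : the last m observed samples (the current state)
        m     : analog size (embedding length)
        """
        # catalog of all length-m windows and their successors
        windows = np.lib.stride_tricks.sliding_window_view(train[:-1], m)
        successors = train[m:]
        dists = np.linalg.norm(windows - query, axis=1)
        nearest = np.argsort(dists)[:k]
        weights = np.exp(-dists[nearest] / (dists[nearest].mean() + 1e-12))
        return np.average(successors[nearest], weights=weights)

    # innovation = true next sample - analog_forecast(...); large values flag
    # unpredictable events associated with extreme velocity gradients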
Self-regulated learning (SRL) theory comprises the cognitive, metacognitive, and affective aspects that enable learners to autonomously manage their learning processes. This article presents a systematic literature review on the measurement of SRL in digital platforms, compiling the 53 most relevant empirical studies published between 2015 and 2023. The goal of this review is to present current research orientations, the characteristics of their contexts, and the SRL theories they rely on. Part of this study is then dedicated to a categorization of the trace data and indicators used to capture SRL behaviors, as well as the automatic methods used to leverage them. Finally, we describe how learning scaffolds are provided to support SRL. In this respect, current research has brought methodological and theoretical insights to the study of SRL, particularly through the analysis of students' behaviors and profiles and their relationship to learning performance. Researchers guide their work by using recognized theoretical models of SRL to map trace data onto SRL processes and dimensions. Future research is motivated by the development of learning platforms that go beyond data collection to incorporate learning analytics tools providing personalized SRL support to students.
The article is freely available on the journal's website:
https://www.tandfonline.com/doi/full/10.1080/00207543.2024.2432463
In local networks and other short links, polymeric optical fiber may be used thanks to its mechanical properties; however, strong, undesired effects appear in these environments. This work proposes three different models to fine-tune the electrical parameters of discrete multi-tone (DMT) modulation and predict the signal-to-noise ratio (SNR) over each subcarrier, using a 560 nm central wavelength. For the present setup, the devised models achieve strong results, explaining more than 99% of the experimental data in the best scenario. Several models are mathematically fitted to obtain distinct adjustments for the nonlinearity present in the link as a function of the electrical parameters.
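As an illustration of this kind of per-subcarrier model fitting, the sketch below fits an assumed nonlinear SNR model by least squares and evaluates how much of the data it explains (R²). The functional form and all parameter values are hypothetical, not taken from the paper.

    import numpy as np
    from scipy.optimize import curve_fit

    def snr_model(X, a, b, c, d):
        # Hypothetical form: exponential roll-off in subcarrier frequency plus
        # a saturating term in drive amplitude; the paper's models may differ.
        f, v = X
        return a * np.exp(-b * f) + c * np.log1p(d * v)

    f = np.linspace(1e6, 100e6, 50)            # subcarrier frequencies (Hz)
    v = np.full_like(f, 0.5)                   # electrical drive amplitude (a.u.)
    snr_meas = snr_model((f, v), 25, 2e-8, 4, 3) + np.random.normal(0, 0.2, f.size)

    popt, _ = curve_fit(snr_model, (f, v), snr_meas, p0=[20, 1e-8, 1, 1])
    resid = snr_meas - snr_model((f, v), *popt)
    r2 = 1 - (resid**2).sum() / ((snr_meas - snr_meas.mean())**2).sum()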
Wind speed at the sea surface is a key quantity for a variety of scientific applications and human activities. Given its importance, many observation techniques exist, ranging from in situ to satellite observations; however, no single technique captures the full spatiotemporal variability of the phenomenon. Reanalysis products, obtained from data assimilation methods, represent the state of the art for sea-surface wind speed monitoring, but they may be biased by model errors and their spatial resolution is not competitive with satellite products. In this work, we propose a scheme based on both data assimilation and deep learning concepts to process spatiotemporally heterogeneous input sources and reconstruct high-resolution time series of spatial wind speed fields. This method allows us to make the most of the complementary information conveyed by the different sea-surface observations typically available in operational settings. We use synthetic wind speed data to emulate satellite images, in situ time series, and reanalyzed wind fields. Starting from these pseudo-observations, we run extensive numerical simulations to assess the impact of each input source on the model's reconstruction performance. We show that the proposed framework outperforms a deep learning–based inversion scheme and successfully exploits the complementary spatiotemporal information of the different input sources. We also show that the model can learn the bias present in reanalysis products and attenuate it in the output reconstructions.
We explore the decay of bound neutrons in the JUNO liquid scintillator detector into invisible particles (e.g., n → 3ν or nn → 2ν), which do not produce an observable signal. The invisible decay includes two decay modes: n → inv and nn → inv. The invisible decay of an s-shell neutron in ¹²C leaves a highly excited residual nucleus, and some de-excitation modes of the excited residual nuclei can produce a time- and space-correlated triple coincidence signal in the JUNO detector. Based on a full Monte Carlo simulation informed with the latest available data, we estimate all backgrounds, including inverse beta decay events of reactor antineutrinos (ν̄e), natural radioactivity, cosmogenic isotopes, and neutral-current interactions of atmospheric neutrinos. Pulse shape discrimination and multivariate analysis techniques are employed to further suppress backgrounds. With two years of exposure, JUNO is expected to give an order-of-magnitude improvement over the current best limits. After 10 years of data taking, the expected JUNO sensitivities at a 90% confidence level are τ/B(n → inv) > 5.0 × 10³¹ years and τ/B(nn → inv) > 1.4 × 10³² years.
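For orientation, partial-lifetime limits of this kind relate schematically to the exposure via the standard counting-analysis expression (our notation, not quoted from the paper):

    \tau / B \;>\; \frac{N_n \, \varepsilon \, T}{S_{90}},

where N_n is the number of candidate (s-shell) neutrons in the fiducial volume, ε the signal selection efficiency, T the live time, and S_{90} the 90% CL upper limit on the number of signal events given the expected background.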
In this letter, we propose an O-RAN-based framework for Reconfigurable Intelligent Surface (RIS) control in 6G. The key objective is to enable the development of RIS control algorithms as xApps running on the RAN Intelligent Controller (RIC) of Open RAN (O-RAN). To validate the proposed framework, we developed a Golang-based RIS simulator, GoSimRIS, intended to mimic and examine RIS behavior in various environmental scenarios. The simulator is linked to the RIC via a specialized Service Model (SM) devised in this work, namely E2SM RIS, which allows the design of xApps that dynamically optimize the RIS coefficients: the xApps compute the ideal phase shifts and apply them in real time to maximize network performance, using channel information retrieved from the GoSimRIS environment. Finally, we introduce an ML-based RIS control mechanism that runs as an xApp using only the positions of the transmitter (Tx) and receiver (Rx) and the presence of Line-of-Sight (LOS) conditions, which corresponds to a realistic 6G indoor scenario such as Industry 4.0.
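For intuition on what "ideal phase shifts" means here, below is a minimal sketch of the classical phase-alignment rule, assuming known Tx→RIS and RIS→Rx channel coefficients such as those retrieved from GoSimRIS. The ML-based xApp in the letter may implement a different policy, and the function names are ours.

    import numpy as np

    def ideal_phase_shifts(h, g, h_direct=0j):
        """Phase-align the N RIS-reflected paths with the direct Tx-Rx path.

        h : Tx->RIS channel, shape (N,); g : RIS->Rx channel, shape (N,);
        h_direct : direct Tx->Rx coefficient (0 if the direct path is blocked).
        """
        target = np.angle(h_direct) if h_direct != 0 else 0.0
        theta = target - np.angle(h) - np.angle(g)   # per-element phase shift
        return np.mod(theta, 2 * np.pi)

    def effective_channel(h, g, theta, h_direct=0j):
        """Composite channel seen by the receiver for a given RIS configuration."""
        return h_direct + np.sum(h * np.exp(1j * theta) * g)

With this choice, every reflected term adds coherently, so |effective_channel| (and hence the received SNR) is maximized for the given channels.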
Electronic Warfare (EW) receivers are passive systems designed to detect and identify active radar emitters in the environment. The radar pulses emitted by multiple sources are received together and must be deinterleaved, in other words, sorted according to the waveform to which they belong, i.e., according to their emitter. As radar signals become more complex and observations denser, new Machine Learning (ML) approaches are appearing in the literature to enhance traditional Radar Signal Deinterleaving (RSD). In this paper, we propose an overview of ML approaches to RSD. To this end, we identify criteria that characterize a method: the technique used, its underlying assumptions, the parameters it exploits, its input characterization, and its architectural pattern. First, the problem of RSD is detailed, together with its challenges and operational requirements. We then organize the methods of the literature into a taxonomy based on the technique criterion: the first category comprises PRI estimation methods, including histogram-based methods, while ML techniques constitute three further categories: clustering, recurrent neural networks (RNNs), and convolutional neural networks (CNNs). Finally, the other identified criteria are explained and discussed.
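As an example of the histogram-based PRI estimation methods in the first category, the sketch below builds a cumulative time-of-arrival (TOA) difference histogram whose peaks indicate candidate pulse repetition intervals; the parameter names and defaults are ours.

    import numpy as np

    def pri_histogram(toas, max_pri, bin_width, max_lag=5):
        """Histogram of TOA differences up to max_lag pulses apart.

        Peaks in the histogram indicate candidate PRIs of the interleaved
        emitters, which can then seed sequence-search deinterleaving.
        """
        toas = np.sort(toas)
        diffs = []
        for lag in range(1, max_lag + 1):
            diffs.append(toas[lag:] - toas[:-lag])   # differences at this lag
        diffs = np.concatenate(diffs)
        diffs = diffs[diffs <= max_pri]
        bins = np.arange(0, max_pri + bin_width, bin_width)
        hist, edges = np.histogram(diffs, bins=bins)
        return hist, edges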
Smart health and emotional care powered by the Internet of Medical Things (IoMT) are revolutionizing the healthcare industry through technologies for multimodal physiological data collection, communication, intelligent automation, and efficient manufacturing. The security of medical images containing private patient information distributed in the IoMT is of paramount importance, and guaranteeing their reliability has attracted widespread attention. Existing medical image steganography methods often fail to guarantee the original image's integrity, authenticity, and lossless restoration. To solve this problem, we propose MediSR-Net, a restoration-enhanced reversible adaptive information steganography network. Specifically, MediSR-Net consists of an analysis module and an encoding module. The analysis module exploits an image degradation-restoration strategy to improve the effectiveness of information embedding and enlarges the perceptual field to reduce pixel-prediction error. An approach based on information distillation and an attention mechanism is adopted to achieve degradation-restoration of CT medical images at any scale, with particular focus on the degradation-restoration effect at small magnification factors. The encoding module is responsible for embedding and extracting the private information. Ten state-of-the-art methods are compared with MediSR-Net in terms of steganographic effect, embedding rate, distortion analysis, and prediction error; the qualitative and quantitative results demonstrate the effectiveness and superiority of MediSR-Net.