INFN - Istituto Nazionale di Fisica Nucleare
Recent publications
  • M. Aker
  • D. Batzler
  • A. Beglarian
  • [...]
  • G. Zeller
The projected sensitivity of the effective electron neutrino-mass measurement with the KATRIN experiment is below 0.3 eV (90 % CL) after 5 years of data acquisition. The sensitivity is affected by the increased rate of the background electrons from KATRIN’s main spectrometer. A special shifted-analysing-plane (SAP) configuration was developed to reduce this background by a factor of two. The complex layout of electromagnetic fields in the SAP configuration requires a robust method of estimating these fields. We present in this paper a dedicated calibration measurement of the fields using conversion electrons of gaseous $^{83\mathrm{m}}$Kr, which enables the neutrino-mass measurements in the SAP configuration.
Objective. Microdosimetry is gaining increasing interest in particle therapy. Thanks to the advancements in microdosimeter technologies and the increasing number of experimental studies carried out in hadron therapy frameworks, it is proving to be a reliable experimental technique for radiation quality characterisation, quality assurance, and radiobiology studies. However, considering the variety of detectors used for microdosimetry, it is important to ensure the consistency of microdosimetric results measured with different types of microdosimeters. Approach. This work presents a novel multi-thickness microdosimeter and a methodology to characterise the radiation quality of a clinical carbon-ion beam. The novel device is a diamond detector made of three sensitive volumes (SVs) of different thicknesses: 2, 6 and 12 µm. The SVs, which operate simultaneously, were accurately aligned and laterally positioned within 3 mm, allowing the results to be compared with a negligible impact of SV alignment and lateral positioning and ensuring the homogeneity of the measured radiation quality. An experimental campaign was carried out at MedAustron using a carbon-ion beam of typical clinical energy (284.7 MeV u⁻¹). Main results. The measurements allowed for a detailed interpretation of the beam's radiation quality, highlighting the effect of the SV thickness. The consistency of the microdosimetric spectra measured by detectors of different thicknesses is discussed by critically analysing the spectra and the differences observed. Significance. The methodology presented will be highly valuable for future experiments investigating the effects of the target volume size in radiobiology, and could easily be adapted to the other particles employed in hadron therapy for clinical (i.e. protons) and research (e.g. helium, lithium and oxygen ions) purposes.
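For orientation, the quantities behind such microdosimetric spectra follow the standard ICRU definitions; a minimal worked summary (generic definitions, not this paper's specific analysis) shows why the SV thickness enters directly:

$$ y = \frac{\varepsilon}{\bar{\ell}}, \qquad \bar{\ell} = \frac{4V}{S}, \qquad y_D = \frac{\int y^2 f(y)\,\mathrm{d}y}{\int y\, f(y)\,\mathrm{d}y}, $$

where ε is the energy imparted by a single event, $\bar{\ell}$ the mean chord length of a convex SV under isotropic irradiation (Cauchy's formula), f(y) the single-event lineal energy spectrum, and $y_D$ the dose-mean lineal energy. A thinner SV has a shorter $\bar{\ell}$, which shifts and reshapes f(y); this is the thickness effect compared across the 2, 6 and 12 µm volumes.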
  • Ranjeet Kumar
  • Newton Nath
  • Rahul Srivastava
Abstract We introduce a framework for hybrid neutrino mass generation, wherein scotogenic dark sector particles, including dark matter, are charged non-trivially under the $A_4$ flavor symmetry. The spontaneous breaking of the $A_4$ group to a residual $\mathcal{Z}_2$ subgroup results in the “cutting” of the radiative loop. As a consequence the neutrinos acquire mass through the hybrid “scoto-seesaw” mass mechanism, combining aspects of both the tree-level seesaw and one-loop scotogenic mechanisms, with the residual $\mathcal{Z}_2$ subgroup ensuring the stability of the dark matter. The flavor symmetry also leads to several predictions, including the normal ordering of neutrino masses and a “generalized μ-τ reflection symmetry” in leptonic mixing. Additionally, it gives testable predictions for neutrinoless double beta decay and a lower limit on the lightest neutrino mass. Finally, the $A_4 \to \mathcal{Z}_2$ breaking also leaves its imprint on the dark sector and ties it to the neutrino masses and mixing. The model allows only scalar dark matter, whose mass has a theoretical upper limit of ≲ 600 GeV, with viable parameter space satisfying all dark matter constraints available only up to about 80 GeV. Conversely, fermionic dark matter is excluded due to constraints from the neutrino sector. Various aspects of this highly predictive framework can be tested in both current and upcoming neutrino and dark matter experiments.
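As a rough orientation, a generic scoto-seesaw mass matrix combines a tree-level seesaw term with the standard one-loop scotogenic contribution; schematically (flavor structure and O(1) factors suppressed, not the paper's exact expressions):

$$ m_\nu \;\sim\; -\frac{m_D^2}{M_R} \;+\; \frac{Y^2 M_f}{32\pi^2}\left[\frac{m_R^2}{m_R^2 - M_f^2}\ln\frac{m_R^2}{M_f^2} \;-\; \frac{m_I^2}{m_I^2 - M_f^2}\ln\frac{m_I^2}{M_f^2}\right], $$

where $m_D$ and $M_R$ are the Dirac and right-handed Majorana masses of the seesaw sector, $M_f$ the dark fermion mass, Y its Yukawa coupling, and $m_{R,I}$ the masses of the real and imaginary parts of the neutral scotogenic scalar.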
Tensile testing is probably the most important mechanical test that can be performed on materials. This characterization is of great relevance for polymeric materials, where the evaluation of the polymer goes beyond pure chemical composition analysis. On the other hand, chemical labs are not always equipped with complete tensile machines, due to space and budget constraints, and often rely on much simpler machines usually provided with a dynamometer only. In this context, the goal of this work is to provide a useful and effective method to estimate the stress–strain curve based only on force (and therefore specimen stress) data. Of course, to recover the missing information (i.e. the sample elongation, and thus its strain), a suitable model of the tensile machine is needed to complement the dynamometer measurements. Throughout the paper the steps to achieve such a model are described, together with an extensive experimental validation: first, we validated the method on metals, which exhibit a well-defined behaviour. Then, we selected three different polymeric materials (polyvinyl alcohol, polydimethylsiloxane and natural rubber) in order to assess the performance of the proposed approach in estimating their stress–strain characteristics. The obtained results confirm the suitability and effectiveness of the proposed method in real-world applications.
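A minimal sketch of the underlying idea (illustrative only, with hypothetical parameter values; the paper's actual machine model may be more elaborate): at constant crosshead speed, the imposed displacement splits between the specimen and the elastic deformation of the machine, so modelling the load train as a spring of stiffness k_machine recovers the specimen elongation from force data alone.

```python
import numpy as np

def estimate_stress_strain(force_N, time_s, crosshead_speed_mm_s,
                           k_machine_N_mm, gauge_length_mm, area_mm2):
    """Estimate engineering stress-strain from force-vs-time data alone.

    Assumes displacement = crosshead_speed * time, and that the machine
    frame/load train deforms elastically: d_machine = F / k_machine.
    The specimen elongation is the remainder.
    """
    force = np.asarray(force_N, dtype=float)
    displacement = crosshead_speed_mm_s * np.asarray(time_s, dtype=float)
    elongation = displacement - force / k_machine_N_mm   # specimen part only
    strain = elongation / gauge_length_mm                # engineering strain
    stress = force / area_mm2                            # engineering stress, MPa
    return strain, stress

# Hypothetical usage with made-up numbers:
t = np.linspace(0.0, 60.0, 300)          # s
F = 50.0 * (1.0 - np.exp(-t / 20.0))     # N, synthetic force trace
strain, stress = estimate_stress_strain(F, t, crosshead_speed_mm_s=0.05,
                                        k_machine_N_mm=500.0,
                                        gauge_length_mm=25.0, area_mm2=4.0)
```

The key design point is that k_machine is a property of the machine, so it can be calibrated once (e.g. on a quasi-rigid reference sample) and then reused for all subsequent tests.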
This article explores the significant impact that artificial intelligence (AI) could have on food safety and nutrition, with a specific focus on the use of machine learning and neural networks for disease risk prediction, diet personalization, and food product development. Specific AI techniques and explainable AI (XAI) are highlighted for their potential in personalizing diet recommendations, building predictive models for disease prevention, and enhancing data-driven approaches to food production. The article also underlines the importance of high-performance computing infrastructures and data management strategies, including data operations (DataOps) for efficient data pipelines and findable, accessible, interoperable, and reusable (FAIR) principles for open and standardized data sharing. Additionally, it explores the concept of open data sharing and the integration of machine learning algorithms in the food industry to enhance food safety and product development. It highlights the METROFOOD-IT project as a best-practice example of implementing advancements in the agri-food sector, demonstrating successful interdisciplinary collaboration. The project fosters both data security and transparency within a decentralized data space model, ensuring reliable and efficient data sharing. However, challenges such as data privacy, model interoperability, and ethical considerations remain key obstacles. The article also discusses the need for ongoing interdisciplinary collaboration between data scientists, nutritionists, and food technologists to effectively address these challenges. Future research should focus on refining AI models to improve their reliability and exploring how to integrate these technologies into everyday nutritional practices for better health outcomes.
Abstract Scale-separated AdS compactifications of string theory can be constructed at the two-derivative supergravity level in the presence of smeared orientifold planes. The unsmearing corrections are known to leading order in the large volume, weak coupling limit. However, first-order perturbative approximations of non-linear problems can often produce spurious solutions, which are only weeded out by additional consistency conditions imposed at higher orders. In this work, we revisit the unsmearing procedure and present consistency conditions obtained from the second order warp factor and dilaton equations. This requires proper treatment of the near-source singularities. The resulting conditions appear as integral constraints on various non-linear combinations of the first order corrections, which we argue can generally be satisfied by appropriate choice of integration constants of the leading-order solutions. This provides a non-trivial consistency check for the perturbative unsmearing procedure and supports the existence of scale-separated AdS vacua in string theory.
Additive manufacturing technology is exploited for the first time to build a complex-geometry scintillator using a thermosetting photocurable resin filled with a lead halide perovskite as the active material. To this aim, an innovative nanocomposite is developed based on Cs4PbBr6 perovskite powders as fillers and a photocurable resin as the matrix, adopting stereolithography as the manufacturing process. The high-Z lead-based perovskite filler is needed for the detection of ionizing radiation and its conversion into visible light, while the polymer matrix provides 3D printability. On the one hand, the inclusion of the perovskite-based filler in the photocurable resin does not affect the rheological behavior and photocuring properties of the polymer matrix, making the composite suitable for 3D printing by stereolithography. On the other hand, the presence of the polymer does not affect the emission properties of the perovskite, leading to the development of a fast-response scintillator with significantly improved environmental stability. This work opens the way to the development of a completely new class of plastic scintillating materials.
Objective: Currently, treatment planning in cancer hadrontherapy relies on dose-volume criteria and constraints on physical quantities. However, incorporating biologically based models of Tumor Control Probability (TCP) and Normal Tissue Complication Probability (NTCP) would help to further minimize adverse tissue reactions and would allow a more patient-specific strategy. The aim of this work was therefore to develop a mechanistic approach to predict NTCP for late tissue reactions following ion irradiation. Approach: A dataset on the tolerance of the rat spinal cord was considered, providing NTCP (for paresis of at least grade II) experimental data following irradiation by photons, protons, helium and carbon ions under different fractionation schemes. The photon data were fit by a mechanistic NTCP model with four parameters, called the Critical Element Model; this allowed the two parameters that depend only on tissue features to be fixed. Afterwards, the two parameters depending on radiation quality were predicted by applying the BIANCA biophysical model for each ion type and dose-averaged LET value. Main results: The predicted NTCP curves for ion irradiation were tested against the ion experimental data by chi-square and p-value calculations. The model passed a significance test at 1% for all the datasets, and at 5% for 13 out of 16 datasets, thus showing good predictive power. The RBE was also calculated and compared with the data for the endpoint of NTCP equal to 50%, and a considerable discrepancy with the RBE commonly calculated for cell survival was shown. Significance: This study highlights the importance of considering the endpoint of interest when computing the RBE, through the application of an NTCP model, and represents a first step towards the development of an approach to improve treatment plan optimization in therapy. To this aim, the approach needs to be extended to other endpoints and applied to patients’ data.
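For orientation, NTCP models are sigmoid functions of dose; a common logistic parameterization (a generic illustration, not the four-parameter Critical Element Model used in the paper) reads

$$ \mathrm{NTCP}(D) \;=\; \frac{1}{1 + \left(D_{50}/D\right)^{4\gamma_{50}}}, $$

where $D_{50}$ is the dose yielding a 50% complication probability and $\gamma_{50}$ the normalized slope of the curve at that dose; radiation quality enters through the dose-response parameters, which is where the BIANCA-predicted parameters act.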
Abstract We consider an axion-like particle (ALP) coupled to the Standard Model photons and decaying invisibly at Belle II. We propose a new search in the e⁺e⁻ + invisible channel, which we compare against the standard γ + invisible channel. We find that the e⁺e⁻ + invisible channel has the potential to improve the reach over the whole ALP mass range. This search leverages dedicated kinematic variables which significantly suppress the Standard Model background. We explore the implications of our expected reach for Dark Matter freeze-out through ALP-mediated annihilations.
An in-depth analysis of the decay process of β-emitting radionuclides highlights, for some of them, the existence of high-order effects that are usually not taken into account in the literature because they are considered negligible in terms of energy and yield; these effects are referred to as Internal Bremsstrahlung (IB). This set of β radionuclides presents, besides the β spectrum, a continuous γ emission due to the braking action of the Coulomb field of the decaying nucleus on the emitted electron. In this work, we review the theoretical and experimental studies of the IB process, focusing on its actual importance for pure β emitters. It emerges that there is no satisfactory model able to reproduce the experimental IB distribution for most of the investigated beta emitters, and the various measurements are sometimes at odds with each other. Moreover, as recently demonstrated, the IB process can give a relevant contribution to the physics of beta emitters, thus requiring its inclusion in the description of beta decay. We also discuss the importance of considering the IB process both in applicative fields, such as nuclear medicine, industrial applications, and research or calibration laboratories, and in other relevant areas of particle physics and astrophysics, such as research on dark matter or the neutrino mass.
The origin of the tiny neutrino mass is an unsolved puzzle leading to a variety of phenomenological aspects beyond the Standard Model (BSM). We consider a U(1) gauge extension of the Standard Model (SM) in which the seesaw mechanism is realized with the help of three generations of Majorana-type right-handed neutrinos, followed by the breaking of the U(1) and electroweak gauge symmetries, providing an anomaly-free structure. In this framework a neutral BSM gauge boson $Z^\prime$ emerges. To explore the properties of its interactions we consider chiral (flavored) frameworks where the $Z^\prime$ interactions depend on the handedness (generations) of the fermions. In this paper we focus on $Z^\prime$-neutrino interactions, which could be probed from cosmic explosions. We consider the $\nu\bar{\nu} \to e^+e^-$ process, which can energize a gamma-ray burst (GRB221009A, so far the highest energy) through energy deposition. By estimating these rates we constrain the U(1) gauge coupling $g_X$ and the $Z^\prime$ mass $M_{Z^\prime}$ under the Schwarzschild (Sc) and Hartle-Thorne (HT) scenarios. We also study $\nu$-DM scattering through the $Z^\prime$ to constrain the $g_X$-$M_{Z^\prime}$ plane using IceCube data, considering high-energy neutrinos from the cosmic blazar TXS0506+056, the active galaxy NGC1068, the Cosmic Microwave Background (CMB) and the Lyman-$\alpha$ data, respectively. Finally, highlighting complementarity, we compare our results with current and prospective bounds on the $g_X$-$M_{Z^\prime}$ plane from scattering, beam-dump and $g-2$ experiments. [PICS code]
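As a rough scaling argument (dimensional analysis, not a result quoted from the paper): for neutrino energies well below the mediator mass, the $Z^\prime$ can be integrated out into a contact interaction, giving

$$ \sigma\!\left(\nu\bar{\nu} \to e^+e^-\right) \;\sim\; \frac{g_X^4\, s}{4\pi\, M_{Z^\prime}^4}, \qquad \sqrt{s} \ll M_{Z^\prime}, $$

so observables of this kind constrain the ratio $g_X/M_{Z^\prime}$ rather than the coupling and mass separately, which is why such bounds are naturally presented in the $g_X$-$M_{Z^\prime}$ plane.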
Abstract The production cross sections of D⁰, D⁺, and $\Lambda_c^+$ hadrons originating from beauty-hadron decays (i.e. non-prompt) were measured for the first time at midrapidity in proton–lead (p–Pb) collisions at the center-of-mass energy per nucleon pair of $\sqrt{s_{\mathrm{NN}}}$ = 5.02 TeV. Nuclear modification factors ($R_{\mathrm{pPb}}$) of non-prompt D⁰, D⁺, and $\Lambda_c^+$ are calculated as a function of the transverse momentum ($p_{\mathrm{T}}$) to investigate the modification of the momentum spectra measured in p–Pb collisions with respect to those measured in proton–proton (pp) collisions at the same energy. The $R_{\mathrm{pPb}}$ measurements are compatible with unity and with the measurements in the prompt charm sector, and do not show a significant $p_{\mathrm{T}}$ dependence. The $p_{\mathrm{T}}$-integrated cross sections and $p_{\mathrm{T}}$-integrated $R_{\mathrm{pPb}}$ of non-prompt D⁰ and D⁺ mesons are also computed by extrapolating the visible cross sections down to $p_{\mathrm{T}}$ = 0. The non-prompt D-meson $R_{\mathrm{pPb}}$ integrated over $p_{\mathrm{T}}$ is compatible with unity and with model calculations implementing modification of the parton distribution functions of nucleons bound in nuclei with respect to free nucleons. The non-prompt $\Lambda_c^+$/D⁰ and D⁺/D⁰ production ratios are computed to investigate hadronisation mechanisms of beauty quarks into mesons and baryons. The measured ratios as a function of $p_{\mathrm{T}}$ display a similar trend to that measured for charm hadrons in the same collision system.
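For readers outside heavy-ion physics, the nuclear modification factor has a standard definition (generic, not specific to this analysis): the per-nucleon p–Pb yield is divided by the pp reference at the same energy,

$$ R_{\mathrm{pPb}}(p_{\mathrm{T}}) \;=\; \frac{1}{A}\,\frac{\mathrm{d}\sigma_{\mathrm{pPb}}/\mathrm{d}p_{\mathrm{T}}}{\mathrm{d}\sigma_{\mathrm{pp}}/\mathrm{d}p_{\mathrm{T}}}, \qquad A = 208 \ \text{for Pb}, $$

so $R_{\mathrm{pPb}} = 1$ corresponds to p–Pb behaving as an incoherent superposition of nucleon-nucleon collisions, and deviations from unity signal nuclear effects.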
We explored how the Lorentz symmetry breaking parameter ℓ affects the Reissner-Nordström black hole solution in the context of the weak-field deflection angle and the black hole shadow. We derive the general expression for the weak deflection angle using the non-asymptotic version of the Gauss-Bonnet theorem, and we present a way to simplify the calculations under the assumption that the distances of the source and the receiver are the same. Through the Solar System test, ℓ is constrained, over several orders of magnitude, to values near 0, implying that detecting ℓ through the deflection of light rays by the Sun is challenging. We also studied the black hole shadow analytically, applying the EHT results under the far approximation to obtain an estimate expression for ℓ. Using realistic values of the black hole mass and observer distance for Sgr A* and M87*, the constraint was shown to be satisfied, implying the relevance and potential promise of the spontaneous Lorentz symmetry breaking parameter's role in the shadow radius uncertainties as measured by the EHT. We find constraints for ℓ to be negatively valued, with upper and lower bounds of ∼ −1.94 and ∼ −2.04, respectively.
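For reference, in the ℓ → 0 limit any such result must reduce to the standard Reissner-Nordström weak deflection angle (a textbook expression, quoted here for orientation rather than taken from the paper): to leading order in mass M and charge Q, for impact parameter b (G = c = 1),

$$ \hat{\alpha} \;\simeq\; \frac{4M}{b} \;-\; \frac{3\pi Q^2}{4 b^2}, $$

so the charge term slightly reduces the deflection; the ℓ-dependent terms derived in the paper are additional corrections on top of this baseline.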
In this paper, we present an experimental apparatus for the measurement of the detection efficiency of free-space single-photon detectors based on the substitution method. We extend the analysis to account for the wavelength dependence introduced by the transmissivity of the optical window in front of the detector's active area. Our method involves measuring the detector's response at different wavelengths and comparing it to a calibrated reference detector. This allows us to accurately quantify the efficiency variations due to the optical window's transmissivity. The results provide a comprehensive understanding of the wavelength-dependent efficiency, which is crucial for optimizing the performance of single-photon detectors in various applications, including quantum communication and photonics research. This characterization technique offers a significant advancement in the precision and reliability of single-photon detection efficiency measurements.
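A minimal sketch of the substitution-method bookkeeping (illustrative values and function names are hypothetical, not the authors' analysis code): the calibrated reference detector fixes the optical power, which converts to a photon rate at the known wavelength; the device count rate divided by that photon rate gives the efficiency, and the window transmissivity factor isolates the wavelength-dependent correction discussed above.

```python
# Planck constant times speed of light, in J*nm (h*c = 1.98644586e-16 J*nm)
HC_J_NM = 1.98644586e-16

def detection_efficiency(count_rate_hz, ref_power_w, wavelength_nm,
                         window_transmissivity):
    """Substitution method: efficiency = detected counts / incident photons.

    ref_power_w: optical power measured by the calibrated reference detector
    window_transmissivity: transmission of the detector's optical window
        at this wavelength (the wavelength-dependent correction)
    """
    photon_energy_j = HC_J_NM / wavelength_nm
    photon_rate_hz = ref_power_w / photon_energy_j       # photons/s incident
    raw_efficiency = count_rate_hz / photon_rate_hz      # includes window loss
    # Quote the active-area efficiency by removing the window contribution:
    return raw_efficiency / window_transmissivity

# Hypothetical usage: 100 kcps at 1 pW incident power, 850 nm, 92% window
eta = detection_efficiency(1.0e5, 1.0e-12, 850.0, 0.92)
```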
We present a detailed study of an asymmetrically driven quantum Otto engine with a time-dependent harmonic oscillator as its working medium. We obtain analytic expressions for the upper bounds on the efficiency of the engine for two different driving schemes having asymmetry in the expansion and compression work strokes. We show that the Otto cycle under consideration cannot operate as a heat engine in the low-temperature regime. Then, we show that the friction in the expansion stroke is significantly more detrimental to the performance of the engine as compared to the friction in the compression stroke. Further, by comparing the performance of the engine with sudden expansion, sudden compression, and both sudden strokes, we uncover a pattern of connections between different operational points. Finally, we analytically characterize the complete phase diagram of the Otto cycle for both driving schemes and highlight the different operational modes of the cycle as a heat engine, refrigerator, accelerator, and heater.
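For context, the frictionless benchmark against which these results can be read is standard: a quantum Otto cycle with a harmonic oscillator working medium whose frequency alternates between ω₁ (cold stroke) and ω₂ > ω₁ (hot stroke) has, in the adiabatic limit, the efficiency

$$ \eta_{\mathrm{ad}} \;=\; 1 - \frac{\omega_1}{\omega_2}, $$

while nonadiabatic (e.g. sudden) driving generates inner friction, i.e. parasitic excitations that reduce the work output; the upper bounds discussed above apply to such nonadiabatic driving schemes.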
Quasicondensation in one dimension is known to occur for equilibrium systems of hard-core bosons (HCBs) at zero temperature. This phenomenon arises due to the off-diagonal long-range order in the ground state, characterized by a power-law decay of the one-particle density matrix $g_1(x,y) \sim |x-y|^{-1/2}$, a well-known outcome of Luttinger liquid theory. Remarkably, HCBs, when allowed to freely expand from an initial product state (i.e. characterized by initial zero correlation), exhibit quasicondensation and demonstrate the emergence of off-diagonal long-range order during nonequilibrium dynamics. This phenomenon has been substantiated by numerical and experimental investigations in the early 2000s. In this work, we revisit the dynamical quasicondensation of HCBs, providing a fully analytical treatment of the issue. In particular, we derive an exact asymptotic formula for the equal-time one-particle density matrix by borrowing ideas from the framework of quantum Generalized Hydrodynamics. Our findings elucidate the phenomenology of quasicondensation and of dynamical fermionization occurring at different stages of the time evolution, as well as the crossover between the two.
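As standard background (generic hard-core boson results, not the paper's new asymptotic formula): the same power-law coherence controls the momentum distribution, whose infrared divergence is the quasicondensation “peak”:

$$ g_1(x,y) \sim |x-y|^{-1/2} \quad\Longrightarrow\quad n(k) \propto \int \mathrm{d}x\,\mathrm{d}y\; e^{-ik(x-y)}\, g_1(x,y) \sim |k|^{-1/2} \quad (k \to 0), $$

in contrast with free fermions, whose n(k) is bounded; dynamical fermionization is the crossover by which the expanding HCB momentum distribution approaches the fermionic one at long times.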
We explore how strain impacts the band structure of metallic-phase VO2 thin films deposited on TiO2(101) substrates. Employing a combination of X-ray absorption linear dichroism and valence band measurements, we demonstrate that strain can alter the intrinsic band structure anisotropy of metallic VO2. Our findings reveal that reducing the thickness of VO2 films leads to a more isotropic band structure. This observation is further supported by an analysis of the electronic population redistribution in the $d_{\parallel}$-$\pi^*$ bands, which affects the screening length and induces effective mass renormalization. Overall, our results underscore the potential of strain manipulation in tailoring the electronic structure uniformity of thin films, thereby expanding the scope for engineering VO2 functionalities.
The dimensional transition in turbulent jets of a shear-thinning fluid is studied via direct numerical simulations. Our findings reveal that under vertical confinement the flow exhibits a unique mixed-dimensional (or 2.5-dimensional) state, where large-scale two-dimensional and small-scale three-dimensional structures coexist. This transition from three-dimensional turbulence near the inlet to two-dimensional dynamics downstream is dictated by the level of confinement: weak confinement ensures that turbulence remains three-dimensional, whereas strong confinement forces the transition to two dimensions; the mixed-dimensional state is observed for moderate confinement and emerges as soon as flow scales become larger than the vertical length. In this scenario, the mixed-dimensional state is overall more energetic and shows a multi-cascade process, in which a direct cascade of energy at small scales and a direct cascade of enstrophy at large scales coexist. The results provide insights into the complex dynamics of confined turbulent flows, relevant in both natural and industrial settings.
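For orientation, the two classical limits bracketing this mixed state are standard (textbook scalings, not results of the paper): three-dimensional turbulence develops a direct energy cascade with Kolmogorov spectrum, while two-dimensional turbulence develops a direct enstrophy cascade with Kraichnan spectrum,

$$ E_{3\mathrm{D}}(k) \sim \epsilon^{2/3} k^{-5/3}, \qquad E_{2\mathrm{D}}(k) \sim \eta^{2/3} k^{-3}, $$

with ε and η the energy and enstrophy dissipation rates; the mixed-dimensional state described above hosts both behaviours simultaneously in different ranges of scales.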
Abstract Using 7.33 fb⁻¹ of e⁺e⁻ collision data samples collected with the BESIII detector at center-of-mass energies between 4.128 and 4.226 GeV, we search for the radiative decay $D_s^+ \to \gamma\rho(770)^+$ for the first time. A hint of $D_s^+ \to \gamma\rho(770)^+$ is observed with a statistical significance of 2.5σ. The branching fraction of $D_s^+ \to \gamma\rho(770)^+$ is measured to be (2.2 ± 0.9(stat) ± 0.2(syst)) × 10⁻⁴, corresponding to an upper limit of 6.1 × 10⁻⁴ at the 90% confidence level.
1,843 members
Dario Moricciani
  • Laboratori Nazionali di Frascati LNF
Erika De Lucia
  • Laboratori Nazionali di Frascati LNF
Information
Address
Frascati, Italy
Head of institution
Antonio Zoccoli