Parthenope University of Naples
Recent publications
Structural Health Monitoring (SHM) is gaining increasing attention in Italy and worldwide due to structural obsolescence and sudden collapses that occur from time to time because of insufficient maintenance or extreme events. At the same time, technological progress in the SHM field is making it particularly attractive as a complement to visual inspections and in-situ surveys aimed at assessing structural safety. Accordingly, several guidelines have been developed to provide useful recommendations to technicians for the design of SHM systems. Nevertheless, because designs are highly case-specific, a general qualification procedure for assessing the performance of an SHM system is still missing. Construction products, by contrast, have shared a thorough and well-established harmonized standardization framework for many years, which has resulted in reliable control of performance. In this study, a preliminary qualification approach for SHM systems is proposed. The qualification scheme is scenario-dependent and makes it possible to check the effectiveness of a given SHM system defined in terms of both hardware and software components. To validate the approach, different SHM systems are hypothesized and checked for possible qualification with respect to different scenarios, with encouraging results. The proposed approach therefore represents a promising step towards a more exhaustive and comprehensive qualification framework for civil SHM applications.
The accurate simulation of additional interactions at the ATLAS experiment for the analysis of proton–proton collisions delivered by the Large Hadron Collider presents a significant challenge to the computing resources. During the LHC Run 2 (2015–2018), there were up to 70 inelastic interactions per bunch crossing, which need to be accounted for in Monte Carlo (MC) production. In this document, a new method to account for these additional interactions in the simulation chain is described. Instead of sampling the inelastic interactions and adding their energy deposits to a hard-scatter interaction one-by-one, the inelastic interactions are presampled, independent of the hard scatter, and stored as combined events. Consequently, for each hard-scatter interaction, only one such presampled event needs to be added as part of the simulation chain. For the Run 2 simulation chain, with an average of 35 interactions per bunch crossing, this new method provides a substantial reduction in MC production CPU needs of around 20%, while reproducing the properties of the reconstructed quantities relevant for physics analyses with good accuracy.
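The bookkeeping advantage of presampling can be illustrated with a toy sketch (not ATLAS software; the pool size, event counts, and operation counting below are invented for illustration). The naive approach overlays roughly mu minimum-bias interactions per hard scatter, while presampling pays that cost once per pool entry and then attaches a single combined event to each hard scatter:

```python
import random

# Toy illustration of pileup presampling: instead of overlaying ~MU
# minimum-bias interactions for every hard-scatter event, a pool of
# combined pileup events is prepared once and one pool entry is
# attached to each hard scatter.

random.seed(0)
MU = 35            # average interactions per bunch crossing (Run 2 value)
N_HARD = 1000      # hard-scatter events to produce
POOL_SIZE = 200    # presampled combined pileup events (assumed size)

# --- naive approach: sample MU minimum-bias overlays per hard scatter ---
naive_overlay_ops = N_HARD * MU

# --- presampling: build the pool once, then one merge per hard scatter ---
presample_ops = POOL_SIZE * MU   # one-off cost of building the pool
merge_ops = N_HARD * 1           # one presampled event per hard scatter
presampled_total = presample_ops + merge_ops

def combined_event(pool):
    """Pick one presampled combined pileup event to overlay."""
    return random.choice(pool)

pool = [f"pileup_{i}" for i in range(POOL_SIZE)]
events = [("hard_scatter", combined_event(pool)) for _ in range(N_HARD)]

print(naive_overlay_ops, presampled_total)  # 35000 vs 8000 overlay operations
```

With these assumed numbers, the presampled chain performs 8000 overlay operations instead of 35000; the actual CPU saving reported in the abstract (around 20%) depends on the full simulation chain, not only on this bookkeeping.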
The ATLAS experiment at the Large Hadron Collider has a broad physics programme ranging from precision measurements to direct searches for new particles and new interactions, requiring ever larger and ever more accurate datasets of simulated Monte Carlo events. Detector simulation with Geant4 is accurate but requires significant CPU resources. Over the past decade, ATLAS has developed and utilized tools that replace the most CPU-intensive component of the simulation—the calorimeter shower simulation—with faster simulation methods. Here, AtlFast3, the next generation of high-accuracy fast simulation in ATLAS, is introduced. AtlFast3 combines parameterized approaches with machine-learning techniques and is deployed to meet current and future computing challenges, and simulation needs of the ATLAS experiment. With highly accurate performance and significantly improved modelling of substructure within jets, AtlFast3 can simulate large numbers of events for a wide range of physics processes.
Background The assessment of body composition is central to the diagnosis and treatment of paediatric obesity, but a criterion method is not feasible in clinical practice. Even the use of bioelectrical impedance analysis (BIA) is limited in children. The body mass index (BMI) Z-score is frequently used as a proxy index of body composition, but it does not discriminate between fat mass and fat-free mass. We aimed to assess the extent to which fat mass and percentage of body fat estimated by a height-weight equation agreed with a BIA equation in youths with obesity from South Italy. Furthermore, we investigated the correlation between BMI Z-score and fat mass or percentage of body fat estimated by these two models. Methods One hundred seventy-four youths with obesity (52.3% males, mean age 10.8 ± 1.9 years) were enrolled in this cross-sectional study. Fat mass and percentage of body fat were calculated according to a height-weight-based prediction model and to a BIA prediction model. Results According to Bland–Altman statistics, mean differences were relatively small for both fat mass (+0.65 kg) and percentage of body fat (+1.27%), with an overestimation at lower mean values; the majority of values fell within the limits of agreement. BMI Z-score was significantly associated with both fat mass and percentage of body fat, regardless of the method, but the correlation was stronger when the height-weight equation was considered (r = 0.82; p < 0.001). Conclusions This formula may serve as a surrogate for body fat estimation when instrumental tools are not available. Focusing on changes in body fat rather than BMI Z-score may help children and parents concentrate on diet for health.
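The Bland–Altman quantities used in the abstract (mean difference, or bias, and the 95% limits of agreement between two measurement methods) can be computed as in the following sketch; the fat-mass values below are made up for illustration and are not study data:

```python
import statistics

# Toy Bland-Altman computation between two body-fat estimation methods.
# Values are invented fat-mass estimates in kg, not data from the study.
height_weight_fm = [18.2, 22.5, 25.1, 30.4, 27.8, 20.0]
bia_fm           = [17.5, 22.0, 24.0, 30.9, 27.1, 19.2]

diffs = [a - b for a, b in zip(height_weight_fm, bia_fm)]
bias = statistics.mean(diffs)        # mean difference between methods
sd = statistics.stdev(diffs)         # SD of the differences
loa_low, loa_high = bias - 1.96 * sd, bias + 1.96 * sd  # 95% limits

within = sum(loa_low <= d <= loa_high for d in diffs)
print(round(bias, 2), round(loa_low, 2), round(loa_high, 2), within)
```

In a real analysis the differences are also plotted against the pairwise means to reveal trends such as the overestimation at lower mean values that the study reports.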
We establish some higher differentiability results for solutions to non-autonomous obstacle problems of the form min{ ∫_Ω f(x, Dv(x)) dx : v ∈ K_ψ(Ω) }, where the function f satisfies p-growth conditions with respect to the gradient variable, for 1 < p < 2, and K_ψ(Ω) is the class of admissible functions. Here we show that, if the obstacle ψ is bounded, then a Sobolev regularity assumption on the gradient of the obstacle ψ transfers to the gradient of the solution, provided the partial map x ↦ D_ξ f(x, ξ) belongs to a Sobolev space W^{1,p+2}. The novelty here is that we deal with subquadratic growth conditions with respect to the gradient variable, i.e. f(x, ξ) ≈ a(x)|ξ|^p with 1 < p < 2, where the map a belongs to a Sobolev space.
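Typeset in LaTeX, the variational problem and the growth condition described above read:

```latex
\min \left\{ \int_{\Omega} f\bigl(x, Dv(x)\bigr)\, dx \;:\; v \in \mathcal{K}_{\psi}(\Omega) \right\},
\qquad
f(x,\xi) \approx a(x)\,\lvert \xi \rvert^{p}, \quad 1 < p < 2,
```

under the assumption that the partial map $x \mapsto D_{\xi} f(x,\xi)$ belongs to $W^{1,\,p+2}$.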
In thermoacoustics, stacks and regenerators are porous media where energy conversion takes place. Modelling full thermoacoustic devices with a CFD approach, in order to capture some nonlinearities, can be extremely expensive from a computational perspective compared to a standard linear approach used in the frequency domain. At the same time, macroscopic models for porous media developed for steady-state flows cannot be directly applied in oscillating flow conditions. Moreover, the macroscopic models for oscillating flows available in the literature are inaccurate at high frequencies or require a closure coefficient to be determined numerically (with Direct Numerical Simulations) or experimentally. In this article, a time-domain macroscopic model for heat and fluid flow is proposed based on the concepts of complex Darcy and Nusselt numbers in the linear regime. Such coefficients, introduced in the past to describe oscillatory phenomena, are used here for the first time to build a CFD macroscopic model in terms of their real and imaginary parts. For two different porous media, a parallel plate and a transversal pin array, the developed macroscopic model is verified against the microscopic solution. Furthermore, for a transversal pin array stack, the proposed model is validated against experimental data from the available literature, showing very good agreement. The findings of this paper can help substantially reduce the computational costs of oscillatory flow simulations without prior direct numerical simulations of the porous core.
The urban green infrastructure (UGI), with special focus on street trees, is a very complex engineered ecosystem which plays an important role in generating ecosystem services and, if improperly managed, a number of dis-services to be prevented. This study applies the Emergy Accounting method to cost and benefit evaluation, in order to establish a non-monetary "supply-side" assessment framework capable of assigning an environmental value to each kind of service provided by urban forests and other green infrastructures. Further, the study organizes an integrated valuation framework for urban street trees into ecosystem services, avoided costs for human health and biodiversity damage, growing/maintenance costs and ecosystem dis-services. In a like manner, the interactions among the three different component flows in street tree ecosystems (costs, benefits and associated dis-services) are compared by means of a ternary diagram. Taking the case of the street ecosystem in Beijing, China, eleven typical urban tree species, including oak, maple, Chinese ash and linden, are selected for services and dis-services evaluation. Results show that, in general, UGI provides a large number of services to the urban population, but it may also generate dis-services affecting human health, well-being and biodiversity when tree selection, location and management are not carried out carefully. The results may help improve management practices that enhance the overall ecosystem service provision by urban forests, not only in Beijing as the case study but also in other cities, by means of appropriate management.
Water planners must provide end-users with reliable and high-quality access to fresh water while complying with financial, institutional, and water availability constraints. In the pursuit of these goals, over-investment in design can result in stranded assets of significant value and often unwanted environmental implications. Under-investment can lead to supply restrictions affecting human health, the economy, and the environment. The present study uses the Dominance-based Rough Set Approach (DRSA) to develop a balancing strategy for the complexities encountered in water resource planning for irrigation systems. The methodology relies on the Dominance-based Learning from Examples Module (DOMLEM) algorithm, which extracts a minimal set of rules regarding relevant combinations between flexibility allocation and design-cost criteria. The algorithm delineates outcomes in the form of "if…, then…" rules that translate decision possibilities facing water planners into: "if (the design is more flexible by this amount), then (we expect this range of cost increment)". Then, a confusion matrix is computed for each irrigation system in order to exclude the rules generating incorrect and ambiguous classification results. The outcome reveals that cost is more subject to the elasticity at the hydrant (eh) increment than to the network's coefficient (r). Furthermore, the analysis reveals that the parameter P(q) has only a minor impact on the cost and, as a result, on the final decision. Any elasticity (eh) less than 3 assigned to any given coefficient (r) results in a low-cost increment. For any given value of (eh), the cost increases as the coefficient (r) decreases. Elasticity from 4 to 5 with a network's coefficient (r) equal to or greater than 18/24 results in a medium-cost increment. Elasticity (eh) from 5 to 6 associated with an (r) equal to or less than 16/24 results in a very high-cost increment.
Finally, rather than identifying one solution that seems better than others, this approach provides an interactive schematic that helps identify the appropriate range of flexibility justified by the expense criterion, which allows for debate and supports decision-making.
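The example rules quoted above can be encoded directly as a classifier. The sketch below is only a hedged reading of the abstract: the boundary handling and the "unclassified" fallback are our assumptions, and the full rule base extracted by DOMLEM in the paper is richer than this:

```python
# Hedged encoding of the example "if..., then..." decision rules
# reported in the abstract (not the paper's full DOMLEM rule base).

def cost_increment_class(eh, r):
    """Classify the expected cost increment from the hydrant elasticity
    `eh` and the network's coefficient `r` (a fraction such as 18/24)."""
    if eh < 3:
        return "low"                   # any eh < 3, any r
    if 4 <= eh <= 5 and r >= 18 / 24:
        return "medium"
    if 5 <= eh <= 6 and r <= 16 / 24:
        return "very high"
    return "unclassified"              # outside the quoted example rules

print(cost_increment_class(2, 20 / 24))
```

A rule set like this is what the confusion-matrix step then filters, excluding rules that produce incorrect or ambiguous classifications for a given irrigation system.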
The class of two-dimensional (2D) materials is critical in the domain of scientific investigation and technology due to its low dimensionality, which offers a unique platform to modify electronic states and harvest diverse applications. In this context, the discovery of intrinsic ferromagnetism in 2D van der Waals (vdW) crystals offers a fascinating field for understanding and investigating the origin of magnetism, which can invigorate spin transport. This review article covers recent progress on van der Waals 2D ferromagnetic materials, examining intrinsic magnetism, the effect of interlayer coupling on their magnetism, and device structures for spintronics. Herein, we comprehensively discuss magnetic tunnel junctions (MTJs), heterostructures of 2D magnetic materials with TMDCs, and spin transport properties based on the Anomalous Hall Effect (AHE). Moreover, the thermal mobilization of electrons' spins, which generates a spin voltage in ferromagnetic materials through the Anomalous Nernst Effect (ANE) and the Spin Seebeck Effect (SSE), is described. Furthermore, the recent challenges, applications, and perspectives of 2D ferromagnetic materials are described in detail.
Bioelectrochemical systems (BES) have the potential to be used in a variety of applications such as waste biorefinery, pollutant removal, CO2 capture, and the electrosynthesis of clean and renewable biofuels or byproducts, among others. However, many technical challenges need to be addressed before BES can be scaled up and put into real-world applications. This review article presents a state-of-the-art overview of crucial concepts and the most recent innovative results and achievements obtained with BES. Special attention is given to hybrid approaches for product recovery and wastewater treatment, and a comprehensive overview of waste biorefinery designs is included. In conclusion, the significant obstacles and technical concerns found throughout BES studies are discussed, and suggestions and future requirements for the practical use of the BES concept in real waste treatment are outlined.
Maritime transport is one of the largest greenhouse gas emitting sectors of the global economy, responsible for around 1 GtCO2eq every year. To comply with the required reduction of carbon dioxide emissions, research is devoted to introducing low- and zero-carbon fuels and innovative propulsion technologies. In this context, hydrogen fuel cell powertrains can play a crucial role due to their high energy and environmental performance. In this paper, a techno-economic feasibility analysis for replacing an 8.3 MW diesel engine with a polymer electrolyte membrane fuel cell system is performed for a chemical tanker ship. For this purpose, a detailed method is developed to estimate the volume and mass of the fuel cell system as well as of the hydrogen storage technologies. Three on-board hydrogen storage technologies are considered: i) compressed hydrogen, ii) liquid hydrogen, iii) metal hydrides. Results highlight that the fuel cell-based powertrain has 60% less volume and 56% less mass compared to the diesel engine. As far as the hydrogen storage technologies are concerned, all solutions present significantly lower volumetric and gravimetric energy densities, and therefore additional volume and mass are required in comparison with the diesel fuel configuration. These results entail a reduction of the total cargo capacity to comply with the tanker ship's physical constraints. The cargo should be reduced by 1.3%-1.1% for compressed hydrogen (at 350 bar and 700 bar, respectively), 0.1% for liquid hydrogen, and 9% for metal hydrides. Finally, the economic assessment in terms of Operational Expenditures, based upon the predicted hydrogen price reduction as well as the diesel price increase, is carried out for different cost scenarios (2020–2050). Results show that the hydrogen solution becomes competitive at a retail price of 4 $/kg when an incentive of 112 $/ton for the avoided CO2 is considered.
The present article contributes to the theory of Business Model Innovation by incumbent firms via digital servitization. Our research explores the conditions affecting manufacturers' ability to innovate their business models by developing and supplying advanced, digitally-based services. The authors performed a Qualitative Comparative Analysis via a qualitative investigation of the novel business models adopted by 19 Italian small- and medium-sized incumbent manufacturers. Our study found a series of theoretically relevant causal factors for the targeted outcome variable: size and investments, customer intimacy, and external service suppliers are crucial paths for developing successful digitally-based advanced services. The findings suggest three managerial implications: first, managers must capitalise on corporate knowledge and assets, mapping and leveraging useful people and technologies. Second, they should seek external service providers related to technology and strategy/organisation to help them update the value proposition. Third, they must build and foster customer intimacy and capitalise on key customers, either leveraging the extant sales/field service structures or envisioning new direct data exchange channels.
Assessing the uncertainty of precipitation measurements is a challenging problem because precipitation estimates are inevitably influenced by various errors and environmental conditions. A way to characterize the error structure of coincident measurements is to use the triple colocation (TC) statistical method. Unlike more typical approaches, where measurements are compared in pairs and one of the two is assumed error-free, TC has the advantage of characterizing the uncertainties of co-located measurements compared to each other, without requiring knowledge of the true value, which is often unknown. However, TC requires at least three co-located measuring systems and compliance with several initial assumptions. In this work, for the first time, TC is applied to in-situ measurements of rain precipitation acquired by three co-located devices: a weighing rain gauge, a laser disdrometer and a bidimensional video disdrometer. Both parametric and nonparametric formulations of TC are implemented to derive the rainfall product precision associated with the three devices. While the parametric TC technique requires tighter constraints and explicit assumptions which may be violated, causing some artifacts, the nonparametric formulation is more flexible and requires less strict constraints. For this reason, a comparison between the two TC formulations is also presented to investigate the impact of TC constraints and their possible violations. The results are obtained using a statistically robust dataset spanning a 1.5-year period collected in Switzerland and are presented in terms of traditional metrics. According to the triple colocation analysis, the two disdrometers outperform the classical weighing rain gauge, and they have similar measurement error structures regardless of the integration time intervals.
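A minimal parametric triple-colocation sketch is shown below on synthetic data (not the paper's dataset or code; the instrument error levels are invented). It illustrates the core idea: with three co-located instruments whose errors are independent, the classical covariance identities recover each instrument's error variance without ever observing the truth:

```python
import random
import statistics

# Synthetic triple colocation: three instruments observe the same truth
# with independent additive Gaussian errors of different magnitudes.
random.seed(1)
N = 200_000
truth = [random.gauss(5.0, 2.0) for _ in range(N)]
x = [t + random.gauss(0.0, 0.5) for t in truth]   # e.g. weighing rain gauge
y = [t + random.gauss(0.0, 0.3) for t in truth]   # e.g. laser disdrometer
z = [t + random.gauss(0.0, 0.3) for t in truth]   # e.g. video disdrometer

def cov(a, b):
    ma, mb = statistics.fmean(a), statistics.fmean(b)
    return sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / (len(a) - 1)

# Parametric TC identities: err_var(x) = Cxx - Cxy * Cxz / Cyz, cyclically.
err_x = cov(x, x) - cov(x, y) * cov(x, z) / cov(y, z)
err_y = cov(y, y) - cov(x, y) * cov(y, z) / cov(x, z)
err_z = cov(z, z) - cov(x, z) * cov(y, z) / cov(x, y)

print(err_x, err_y, err_z)  # close to the true 0.25, 0.09, 0.09
```

The identities hold only under the TC assumptions (linear calibration, independent zero-mean errors, errors uncorrelated with the truth); when these are violated, artifacts of the kind discussed in the abstract appear, which motivates the nonparametric alternative.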
Repeated polygonal patterns are pervasive in natural forms and structures. These patterns provide inherent structural stability while optimizing strength-per-weight and minimizing construction costs. In echinoids (sea urchins), a visible regularity can be found in the endoskeleton, consisting of a lightweight and resistant micro-trabecular meshwork (stereom). This foam-like structure follows an intrinsic geometrical pattern that has never been investigated. This study aims to analyse and describe it by focusing on the boss of tubercles—spine attachment sites subject to strong mechanical stresses—in the common sea urchin Paracentrotus lividus. The boss microstructure was identified as a Voronoi construction characterized by 82% concordance to the computed Voronoi models, a prevalence of hexagonal polygons, and a regularly organized seed distribution. This pattern is interpreted as an evolutionary solution for the construction of the echinoid skeleton using a lightweight microstructural design that optimizes the trabecular arrangement, maximizes the structural strength and minimizes the metabolic costs of secreting calcitic stereom. Hence, this identification is particularly valuable to improve the understanding of the mechanical function of the stereom as well as to effectively model and reconstruct similar structures in view of future applications in biomimetic technologies and designs.
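The Voronoi construction identified in the stereom can be illustrated with a toy sketch (seed coordinates invented; this is the generic nearest-seed definition, not the study's image-analysis pipeline): every location is assigned to its nearest seed, partitioning the plane into cells:

```python
# Toy Voronoi partition: each grid point is assigned to the nearest of a
# few hypothetical "seeds" (stand-ins for calcification centres).

seeds = [(1.0, 1.0), (4.0, 1.5), (2.5, 4.0)]

def voronoi_cell(point, seeds):
    """Index of the seed whose Voronoi cell contains `point`."""
    return min(range(len(seeds)),
               key=lambda i: (point[0] - seeds[i][0]) ** 2
                           + (point[1] - seeds[i][1]) ** 2)

# Rasterise a small grid and count cell occupancy (a crude area proxy).
counts = [0, 0, 0]
for ix in range(50):
    for iy in range(50):
        p = (ix * 0.1, iy * 0.1)   # grid over [0, 5) x [0, 5)
        counts[voronoi_cell(p, seeds)] += 1

print(counts, sum(counts))  # per-cell sample counts; total = 2500
```

In the study, the observed trabecular meshwork was compared against computed Voronoi models of this kind, yielding the reported 82% concordance.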
Energy transitions (ETs) can solve some societal problems but require transforming societies. Accordingly, socio-technical transitions and other systemic frameworks have been used to assess ETs. However, assessments based on these frameworks miss a value co-creation orientation, a focus on the benefits actors seek and the service exchanges they enable, and consideration of the needed de/re-institutionalization practices. Analyzing those elements could prevent socioeconomic shocks and missed opportunities and reveal possible ET challenges to ET viability and sustainability. Intending to develop a theory-synthesis work enriching previous frameworks, we propose service-dominant logic (S-D logic) as an integrative framework to assess ETs. We offer a literature review of ET systems' frameworks to compare them with the proposal. We also identify the implications of adopting S-D logic for rethinking energy systems' dynamics and ETs. Thus, we contribute to the literature by providing an integrative framework for assessing ETs, and we illustrate its potential by deriving some challenges of the current Italian ET. This study paves the way for deeper analyses of the contribution of S-D logic to ETs and for the operationalization of other systems' frameworks within our integrative one. Merging with quantitative models could also follow.
Using a unique set of Italian non-listed Unlikely to Pay (UTP) positions, which represent the phase that precedes insolvency, when it is still possible for the company to succeed in restructuring, this paper aims to analyze the relationships between corporate governance characteristics and financial distress status. We compare the performance of corporate governance variables in predicting corporate defaults using both the Logit and Random Forest models, which previous researchers have deemed to be among the most efficient machine learning techniques. Our results show that the use of corporate governance variables – especially with regard to CEO renewal and stability in the composition of the board of directors – increases the accuracy of the Random Forest technique and influences the success of the turnaround process. This paper also confirms the Random Forest technique's ability to significantly outperform the Logit model in terms of accuracy.
Cutaneous melanoma incidence is increasing worldwide, representing an aggressive tumor when evolving to the metastatic phase. High‐resolution ultrasound (US) is playing a growing role in the assessment of newly diagnosed melanoma cases, in the locoregional staging prior to the sentinel lymph‐node biopsy procedure, and in the melanoma patient follow‐up. Additionally, US may guide a number of percutaneous procedures in the melanoma patients, encompassing diagnostic and therapeutic modalities. These include fine needle cytology, core biopsy, placement of presurgical guidewires, aspiration of lymphoceles and seromas, and electrochemotherapy.
Innovative entrepreneurship is one of the key drivers of economic development particularly for less developed economies where the economic growth is at the forefront of policymakers’ agenda. Yet, the research on how various factors at different levels interact and bring about innovative entrepreneurship in emerging and developing countries remains relatively scarce. We address this issue by developing a multilevel framework that explains how entrepreneurial competencies attenuate the negative impact of innovation barriers. Our analysis on a sample of individuals from 24 economies, 17 developing and 7 emerging countries, reveals that entrepreneurial competencies become more instrumental for innovative entrepreneurship when general, supply-side, and demand-side innovation barriers are higher. The findings offer unique insights to policymakers particularly in developing countries interested in promoting innovative entrepreneurship and to entrepreneurs and investors seeking to establish and support innovative ventures.
Nowadays, huge volumes of fake news are continuously posted by malicious users with fraudulent goals, leading to very negative social effects on individuals and society and posing continuous threats to democracy, justice, and public trust. This is particularly relevant on social media platforms (e.g., Facebook, Twitter, Snapchat), due to their intrinsically uncontrolled publishing mechanisms. This problem has significantly driven the efforts of both academia and industry to develop more accurate fake news detection strategies: early detection of fake news is crucial. Unfortunately, the availability of information about news propagation is limited. In this paper, we provide a benchmark framework to analyze and discuss the most widely used and promising machine/deep learning techniques for fake news detection, also exploiting different feature combinations with respect to the ones proposed in the literature. Experiments conducted on well-known and widely used real-world datasets show advantages and drawbacks in terms of accuracy and efficiency for the considered approaches, even in the case of limited content information.
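A hedged, self-contained toy of one common content-based baseline in this space, a TF-IDF representation combined with a nearest-centroid classifier, is sketched below (the four-document corpus is invented; this is not the benchmark framework of the paper):

```python
import math
from collections import Counter, defaultdict

# Invented toy corpus: (text, label) pairs for a content-based baseline.
train = [
    ("miracle cure doctors hate this secret trick", "fake"),
    ("shocking secret celebrity scandal exposed", "fake"),
    ("government announces new infrastructure budget", "real"),
    ("study finds moderate exercise improves health", "real"),
]

docs = [(text.split(), label) for text, label in train]
N = len(docs)
df = Counter(w for tokens, _ in docs for w in set(tokens))  # document freq.

def tfidf(tokens):
    """Sparse TF-IDF vector (smoothed idf) as a {word: weight} dict."""
    tf = Counter(tokens)
    return {w: (c / len(tokens)) * (math.log((1 + N) / (1 + df.get(w, 0))) + 1)
            for w, c in tf.items()}

# Per-class centroid: average of the TF-IDF vectors of its documents.
centroids = {}
for label in {"fake", "real"}:
    vec = defaultdict(float)
    members = [tokens for tokens, lab in docs if lab == label]
    for tokens in members:
        for w, v in tfidf(tokens).items():
            vec[w] += v
    centroids[label] = {w: v / len(members) for w, v in vec.items()}

def cosine(a, b):
    dot = sum(a[w] * b.get(w, 0.0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def predict(text):
    v = tfidf(text.split())
    return max(centroids, key=lambda lab: cosine(v, centroids[lab]))

print(predict("secret trick exposed"), predict("new health study announced"))
```

Real benchmarks of the kind the paper describes replace this toy with large labelled datasets, richer feature combinations (content, social context, propagation where available) and stronger machine/deep learning models, and then compare accuracy and efficiency across approaches.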
1,423 members
Rosaria Rita Canale
  • Department of Business and Economics
Antonio Bracale
  • Department of Engineering
Oreste Napolitano
  • Department of Business and Economic Studies
Raffaele Montella
  • Department of Applied Science
Via Acton, 38, 80131 Naples, Italy