In this paper, stochastic planar stick–slip motions are investigated using a slider-on-belt model in which the coefficient of friction (COF) of the contact interface is modelled as a random field. New three-variable stick–slip transition criteria are proposed to improve the accuracy and robustness of the algorithm. Stochastic analyses are performed on the peak-to-valley values of the displacement and friction forces and on the duration of the stick state. It is found that the correlation length of the COF random field is primarily responsible for the stochastic behaviour of the system, whereas the belt velocity and the mean value of the COF significantly influence the duration of the stick state relative to the corresponding deterministic slider-on-belt model.
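The stick–slip transition logic discussed above can be sketched in the deterministic limiting case, with a constant COF instead of a random field. All parameter values below are illustrative choices, not taken from the paper:

```python
import math

def slider_on_belt(mu_s=0.4, mu_k=0.3, m=1.0, k=100.0, g=9.81,
                   v_belt=0.1, dt=1e-4, t_end=5.0):
    """Deterministic slider-on-belt stick-slip integrator (illustrative
    parameters).  Returns a list of (time, displacement, state)."""
    x, v = 0.0, v_belt            # start stuck, moving with the belt
    state = 'stick'
    hist = []
    for i in range(int(t_end / dt)):
        t = i * dt
        spring = -k * x
        if state == 'stick':
            # stick persists while static friction can balance the spring
            if abs(spring) <= mu_s * m * g:
                v = v_belt
                x += v * dt
            else:
                state = 'slip'
        if state == 'slip':
            v_rel = v - v_belt
            fric = -math.copysign(mu_k * m * g, v_rel) if v_rel else 0.0
            v += (spring + fric) / m * dt      # semi-implicit Euler step
            x += v * dt
            # re-stick when the relative velocity returns to zero and
            # static friction can hold the slider against the spring
            if abs(v - v_belt) < 1e-3 and abs(k * x) <= mu_s * m * g:
                state, v = 'stick', v_belt
        hist.append((t, x, state))
    return hist
```

In the stochastic version studied in the paper, `mu_s` and `mu_k` would instead be drawn from a spatially correlated random field along the belt.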
Gearboxes play a vital role in a wide range of mechanical power transmission systems across many industrial applications, including wind turbines, vehicles, mining and material handling equipment, oil and gas processing equipment, offshore vessels, and aircraft. As an inevitable phenomenon during gear service life, gear wear affects the durability of gear teeth and reduces the remaining useful life of the gear transmission system. The propagation of gear wear can lead to severe gear failures such as gear root crack, tooth spall, and tooth breakage, which can in turn cause unexpected equipment shutdowns or hazardous incidents. Therefore, it is necessary to monitor gear wear propagation in order to perform predictive maintenance. Vibration analysis is a widely used and effective technique for monitoring the operating condition of rotating machinery, especially for the diagnosis of localized failures such as gear root crack and tooth surface spalling. However, vibration-based techniques for gear wear analysis and monitoring are very limited, mainly due to the difficulties in identifying the complex vibration characteristics induced by gear wear propagation. Understanding the effect of gear wear on vibration characteristics is essential for developing vibration-based techniques to monitor and track gear wear evolution. However, no previous publication has summarized the research progress in vibration-based gear wear monitoring and prediction. To fill this gap, this review paper conducts a state-of-the-art comprehensive review of vibration-based gear wear monitoring, including the gear surface features caused by different gear wear mechanisms, the relationships between gear surface features and vibration characteristics, and the current research progress of vibration-based gear wear monitoring. This review also makes recommendations for future research work in this area.
It is expected that this review will provide useful information for further development of vibration-based techniques for gear wear monitoring and remaining useful life predictions.
In this contribution, we study the transient scattering of waves from a structural bi-stable element modelled as a Euler elastica. The non-linear elastic element has localized inertia at its end points, each of which is connected to a semi-infinite straight rod. The focus is on longitudinal waves reflected and transmitted into the semi-infinite rods by the non-linear structural interface. The derivation of the transient governing ordinary differential equations for the reflected and transmitted amplitudes is outlined. The key point is the assumption that the semi-infinite rods are linear elastic, with non-linearity embedded into special transient transmission conditions for the scattering amplitudes. The governing equations are solved using standard numerical integration techniques. From a physical point of view, we focus on the effect of the bi-stability on the transmission resonances, identified in the linearized regime. Special attention is given to the up- and downconversion of the frequency content of scattered amplitudes as a function of the amplitude of a harmonic impinging wave.
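The paper's elastica interface model is not reproduced here, but the frequency up-conversion caused by bi-stability can be illustrated with a minimal double-well (Duffing-type) oscillator under harmonic forcing. All coefficients below are illustrative, and the third-harmonic measurement is a generic signature of nonlinear frequency conversion rather than the paper's scattering quantities:

```python
import math

def bistable_response(F=0.1, w=1.0, dt=0.002, cycles=80):
    """Double-well oscillator x'' + c x' - x + x^3 = F cos(w t),
    integrated with RK4 from the x = +1 well.  Returns the steady-state
    Fourier amplitudes at the drive frequency and its third harmonic,
    showing how bistability up-converts a harmonic input."""
    c = 0.3
    def f(t, x, v):
        return v, F * math.cos(w * t) - c * v + x - x ** 3
    x, v = 1.0, 0.0
    T = 2 * math.pi / w
    t, t_end = 0.0, cycles * T
    samples = []
    while t < t_end:
        k1x, k1v = f(t, x, v)
        k2x, k2v = f(t + dt / 2, x + dt / 2 * k1x, v + dt / 2 * k1v)
        k3x, k3v = f(t + dt / 2, x + dt / 2 * k2x, v + dt / 2 * k2v)
        k4x, k4v = f(t + dt, x + dt * k3x, v + dt * k3v)
        x += dt / 6 * (k1x + 2 * k2x + 2 * k3x + k4x)
        v += dt / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
        t += dt
        if t > t_end - 20 * T:          # keep only the steady-state tail
            samples.append((t, x))
    def fourier_amp(n):                 # amplitude at harmonic n*w
        re = sum(xx * math.cos(n * w * tt) for tt, xx in samples)
        im = sum(xx * math.sin(n * w * tt) for tt, xx in samples)
        return 2 * math.hypot(re, im) / len(samples)
    return fourier_amp(1), fourier_amp(3)
```

The nonlinear restoring force pumps energy from the drive frequency into higher harmonics, which is the mechanism behind the up-conversion of frequency content discussed above.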
A tuned mass damper (TMD) is a common strategy for reducing structural vibration in a passive manner, without the need for active power. The basic parameters of a TMD include its mass ratio, natural frequency and damping ratio. While these parameters are factory-calibrated before installation, it is desirable to assess the in-situ properties of the TMD and the ‘primary’ structure in their operational state, e.g., to validate and assess performance and to detect detuning over the service life. In this work, a Bayesian approach is developed for identifying the modal parameters of the TMD and primary structure using only the ambient vibration data measured on the primary structure, i.e., ‘operational modal analysis’. The likelihood function and theoretical PSD matrix of ambient data are formulated, accounting for primary–secondary structure dynamics with non-classical damping, which is not treated in existing Bayesian formulations. An Expectation-Maximisation (EM) algorithm is developed for efficient computation of the most probable values of the modal parameters. Analytical expressions are derived so that the ‘posterior’ (i.e., given data) covariance matrix can be determined accurately and efficiently. The proposed method is verified using synthetic data and applied to field data of a chimney whose close-mode response is attenuated by a TMD.
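As a sketch of the primary–TMD dynamics underlying the identification problem, the modal parameters of a two-DOF primary-plus-TMD model with (generally non-classical) damping follow from state-space eigenvalues. This is not the paper's Bayesian formulation, only the forward model it targets; all values in the usage note are illustrative:

```python
import numpy as np

def tmd_modes(m1, k1, c1, m2, k2, c2):
    """Natural frequencies (Hz) and damping ratios of a 2-DOF
    primary-structure-plus-TMD model, allowing non-classical damping,
    from the eigenvalues of the first-order state-space matrix."""
    M = np.diag([m1, m2])
    K = np.array([[k1 + k2, -k2], [-k2, k2]])
    C = np.array([[c1 + c2, -c2], [-c2, c2]])
    A = np.block([[np.zeros((2, 2)), np.eye(2)],
                  [-np.linalg.solve(M, K), -np.linalg.solve(M, C)]])
    lam = np.linalg.eigvals(A)
    lam = lam[lam.imag > 0]          # one eigenvalue per conjugate pair
    wn = np.abs(lam)                 # natural frequencies, rad/s
    zeta = -lam.real / wn            # modal damping ratios
    order = np.argsort(wn)
    return wn[order] / (2 * np.pi), zeta[order]
```

For a 1 Hz primary structure with a 5% mass-ratio TMD tuned to it, this yields the two close modes (near 0.89 Hz and 1.12 Hz) of the kind identified in the chimney application.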
To ensure that a wind power system (WPS) can realize maximum power point tracking (MPPT) under generator-side faults, an MPPT control system with adaptive active fault-tolerance capability is proposed in this paper. The basic idea of the proposed control strategy is that a perturbation term containing model uncertainties, system nonlinearities and unknown disturbances (generator actuator failure and disturbance torque) is estimated in real time by a designed observer. The estimated perturbation is used to compensate the actual perturbation. An adaptive feedback linearizing control of the variable-speed wind turbine (VSWT) system can then be realized. The control strategy therefore requires neither a detailed system model nor full state measurements, and does not rely on fault detection, diagnosis and isolation techniques when the generator actuator fails. Simulation results show that the proposed control scheme has smaller tracking error and better dynamic performance than the traditional PI control and MPPT control, and higher robustness against system parameter uncertainties, disturbance torque and generator actuator failure than the nonlinear static state feedback based MPPT (NSSF-MPPT) control, traditional PI control and MPPT control.
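The perturbation-observer idea above can be illustrated on a scalar toy plant: a linear extended state observer estimates a lumped perturbation in real time, and the estimate is fed back for compensation. This is a generic sketch with invented gains and a made-up disturbance signal, not the paper's VSWT controller:

```python
import math

def eso_demo(b=1.0, wo=50.0, dt=1e-3, t_end=5.0):
    """Linear extended state observer (ESO) for the scalar plant
    x' = d(t) + b*u, where d(t) lumps model uncertainty and unknown
    disturbance.  The observer estimates d in real time and the
    control law cancels it.  Gains from pole placement at -wo
    (illustrative values).  Returns the perturbation-estimation error."""
    l1, l2 = 2 * wo, wo ** 2
    x = xh = dh = u = 0.0
    errs = []
    for i in range(int(t_end / dt)):
        t = i * dt
        d = 1.0 + 0.5 * math.sin(2 * t)      # "unknown" perturbation
        x += (d + b * u) * dt                # true plant (Euler step)
        e = x - xh
        xh += (dh + b * u + l1 * e) * dt     # observer state estimate
        dh += l2 * e * dt                    # perturbation estimate
        u = (-dh - 5.0 * xh) / b             # compensate d, regulate x
        errs.append(abs(d - dh))
    return errs
```

The estimation error settles to a small residual set by the ratio of the perturbation's rate of change to the observer bandwidth, which is why no explicit fault detection or parameter identification is needed.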
The accurate simulation of additional interactions at the ATLAS experiment for the analysis of proton–proton collisions delivered by the Large Hadron Collider presents a significant challenge to the computing resources. During the LHC Run 2 (2015–2018), there were up to 70 inelastic interactions per bunch crossing, which need to be accounted for in Monte Carlo (MC) production. In this document, a new method to account for these additional interactions in the simulation chain is described. Instead of sampling the inelastic interactions and adding their energy deposits to a hard-scatter interaction one-by-one, the inelastic interactions are presampled, independent of the hard scatter, and stored as combined events. Consequently, for each hard-scatter interaction, only one such presampled event needs to be added as part of the simulation chain. For the Run 2 simulation chain, with an average of 35 interactions per bunch crossing, this new method provides a substantial reduction in MC production CPU needs of around 20%, while reproducing the properties of the reconstructed quantities relevant for physics analyses with good accuracy.
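The amortisation argument can be captured in a toy cost model; all cost figures below are invented placeholders, since actual ATLAS CPU costs depend on workflow internals not given in the abstract:

```python
def cpu_per_event(n_pileup, c_hard, c_pu, c_overlay, presampled):
    """Toy CPU cost model for pileup handling (all costs are invented
    placeholders).  Classic digitization samples and overlays n_pileup
    inelastic interactions per hard-scatter event; presampling prepares
    combined pileup events once, so each hard scatter pays only a
    single overlay of one presampled event."""
    if presampled:
        return c_hard + c_overlay
    return c_hard + n_pileup * (c_pu + c_overlay)
```

With an average of 35 interactions per bunch crossing, the per-event pileup work collapses from 35 sample-and-overlay operations to one, which is the source of the CPU reduction described above (the quoted ~20% figure reflects pileup handling being only part of the full chain).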
Clinical implementation of pharmacogenomics will help personalize drug prescriptions and alleviate the personal and financial burden caused by inefficacy and adverse reactions to drugs. However, such implementation is lagging in many parts of the world, including the Middle East, mainly due to the lack of data on the distribution of actionable pharmacogenomic variation in these ethnicities. We analyzed 6,045 whole genomes from the Qatari population for the distribution of allele frequencies of 2,629 variants in 1,026 genes known to affect 559 drugs or classes of drugs. We also performed a focused analysis of genotypes or diplotypes of 15 genes affecting 46 drugs, which have guidelines for clinical implementation, and predicted their phenotypic impact. The allele frequencies of 1,320 variants in 703 genes affecting 299 drugs or classes of drugs were significantly different between the Qatari population and other world populations. On average, Qataris carry 3.6 actionable genotypes/diplotypes affecting 13 drugs with guidelines for clinical implementation, and 99.5% of individuals had at least one clinically actionable genotype/diplotype. An increased risk of simvastatin-induced myopathy could be predicted in ~32% of Qataris from the diplotypes of SLCO1B1, which is higher than in many other populations, while fewer Qataris may need tacrolimus dosage adjustments to achieve immunosuppression based on CYP3A5 diplotypes compared with other world populations. A distinct distribution of actionable pharmacogenomic variation was also observed among the Qatari subpopulations. Our comprehensive study of the distribution of actionable genetic variation affecting drugs in a Middle Eastern population has potential implications for preemptive pharmacogenomic implementation in the region and beyond.
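Allele-frequency comparisons of the kind reported here are commonly made with a two-proportion test. The sketch below uses a normal approximation and made-up counts, not the study's actual statistics or its (unstated) testing procedure:

```python
import math

def allele_freq_test(x1, n1, x2, n2):
    """Two-proportion z-test comparing an allele count x out of n
    chromosomes between two populations (normal approximation,
    pooled variance).  Returns the z statistic and two-sided p-value."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                      # pooled frequency
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # two-sided p-value from the standard normal CDF via erf
    pval = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, pval
```

In practice, genome-wide comparisons of this sort would also apply a multiple-testing correction across the thousands of variants examined.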
The ATLAS experiment at the Large Hadron Collider has a broad physics programme ranging from precision measurements to direct searches for new particles and new interactions, requiring ever larger and ever more accurate datasets of simulated Monte Carlo events. Detector simulation with Geant4 is accurate but requires significant CPU resources. Over the past decade, ATLAS has developed and utilized tools that replace the most CPU-intensive component of the simulation—the calorimeter shower simulation—with faster simulation methods. Here, AtlFast3, the next generation of high-accuracy fast simulation in ATLAS, is introduced. AtlFast3 combines parameterized approaches with machine-learning techniques and is deployed to meet current and future computing challenges, and simulation needs of the ATLAS experiment. With highly accurate performance and significantly improved modelling of substructure within jets, AtlFast3 can simulate large numbers of events for a wide range of physics processes.
The activation and dysregulation of retrotransposons have been identified in the CNS of individuals with the fatal neurodegenerative disorder amyotrophic lateral sclerosis (ALS). This includes elements from multiple families and subfamilies of retrotransposons; however, there is limited knowledge of the specific loci from which this expression occurs in ALS. The long interspersed element-1 (L1) is the only autonomous retrotransposon in the human genome, and members of this family maintain the ability to mobilise. Although L1s contribute 17% of the human genome, only 80–100 L1s encode the proteins required for mobilisation and are retrotransposition-competent. Identifying the specific loci from which L1 expression occurs will inform on the potential functional consequences of their expression, such as somatic retrotransposition or DNA damage caused by the endonuclease activity of the L1 ORF2 protein. Here we characterised L1 locus expression using the L1EM tool ( https://github.com/FenyoLab/L1EM ) in RNA sequencing data from 518 samples across four tissues (motor cortex, frontal cortex, cerebellum and cervical spinal cord) in the Target ALS cohort obtained from the New York Genome Center. There was a significant reduction in total intact L1 expression (L1s that encode functional proteins) in two brain regions of individuals with ALS compared to controls, and the ALS brain regions clustered by their intact L1 expression profile. Although overall L1 expression was reduced in ALS and ALS with another neurological disorder (ALSND), some individuals expressed L1s at much higher levels than the rest of the ALS/ALSND cohort. Expressed L1 loci were more frequently located in introns than those not expressed, and the level of L1 expression correlated positively with the expression of the gene in which the locus was located.
Significant differences were observed in the expression profiles of L1s in ALS and specific features of these elements, such as location in the genome and whether or not they are intact, were significantly associated with those that were expressed in the cohort.
Background The Arctic tundra is subject to the greatest climate change-induced temperature rises of any biome. Both terrestrial and freshwater biota are responding to recent climate warming through variability in their distribution, abundance, and richness. However, uncertainty arises within models of future change when considering processes that operate over centennial timescales. A systematic evidence synthesis of centennial-scale variability in biodiversity does not currently exist for the Arctic biome. Here, we sought to address the primary research question: what evidence exists for temporal variability in Arctic terrestrial and freshwater biodiversity throughout the Holocene (11,650 years before present (yBP) to 0 yBP)? Methods Consultation with stakeholders informed key definitions, scoping and the appropriateness of the research question. The research question was structured using a PECO framework to inform the search strategy: Arctic biota (P), a timestamped year in the Holocene (E), another year in the Holocene (C), and the dimensions of biodiversity that have been measured (O). Search strings were benchmarked against a test list of 100 known sources to ensure a specific and comprehensive return of literature. Searches will occur across 13 bibliographic databases. The eligibility criteria specify that sources must: (a) use ‘proxy’ methods to measure biodiversity; (b) fall within the spatial extent of the contemporary Arctic tundra biome; and (c) consist of a time series that overlaps with 11,650 yBP to 0 yBP (1950 AD). Information coded from studies will include proxy-specific information to account for both temporal uncertainty (i.e., the characteristics of age-depth models and dating methods) and taxonomic uncertainty (i.e., the samples and processes used for taxonomic identification).
We will assess temporal uncertainty within each source by determining the quality of dating methods and measures; this information will be used to harmonise dates onto the IntCal20 calibration curve and determine the available temporal resolution and extent of evidence through space. Key outputs of this systematic map will be: (1) a graph database containing the spatial–temporal properties of each study dataset with taxonomic harmonisation; and (2) a geographical map of the evidence base.
Common perinatal mental disorders are the most frequent complications of pregnancy, childbirth and the postpartum period, and the prevalence among women in low- and middle-income countries is the highest at nearly 20%. Women are the cornerstone of a healthy and prosperous society and until their mental health is taken as seriously as their physical wellbeing, we will not improve maternal mortality, morbidity and the ability of women to thrive. On the heels of several international efforts to put perinatal mental health on the global agenda, we propose seven urgent actions that the international community, governments, health systems, academia, civil society, and individuals should take to ensure that women everywhere have access to high-quality, respectful care for both their physical and mental wellbeing. Addressing perinatal mental health promotion, prevention, early intervention and treatment of common perinatal mental disorders must be a global priority.
Background Whether the association of cardiovascular diseases (CVDs) with dementia differs by sex remains unclear, and the role of socioeconomic, lifestyle, genetic, and medical factors in their association is unknown. Methods We used data from the UK Biobank, a population-based cohort study of 502,649 individuals. We used Cox proportional hazards models to estimate sex-specific hazard ratios (HRs) and 95% confidence intervals (CIs), and the women-to-men ratio of HRs (RHR), for the association between CVD (coronary heart disease (CHD), stroke, and heart failure) and incident dementia (all-cause dementia, Alzheimer's disease (AD), and vascular dementia (VD)). The moderating roles of socioeconomic (education, income), lifestyle (smoking, BMI, leisure activities, and physical activity), genetic ( APOE allele status), and medical-history factors were also analyzed. Results Compared to people who did not experience a CVD event, the HRs (95% CI) for all-cause dementia were higher in women than in men, with an RHR (female/male) of 1.20 (1.13, 1.28). Specifically, the HRs for AD were higher in women with CHD and heart failure than in men, with RHRs (95% CI) of 1.63 (1.39, 1.91) and 1.32 (1.07, 1.62), respectively. The HR for VD was higher in men with heart failure than in women, with an RHR (95% CI) of 0.73 (0.57, 0.93). Interaction effects with socioeconomic, lifestyle, genetic, and medical-history factors were observed in the sex-specific association between CVD and dementia. Conclusion Women with CVD were 1.5 times more likely to develop AD than men, while having a 15% lower risk of VD than men.
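The women-to-men ratio of hazard ratios can be reproduced from any pair of published sex-specific HRs and CIs using the standard independent-estimates approach, recovering each log-HR standard error from its confidence interval. The sketch below is generic; the numbers in the test are illustrative rather than taken from the study:

```python
import math

def ratio_of_hrs(hr_f, lo_f, hi_f, hr_m, lo_m, hi_m, z=1.96):
    """Women-to-men ratio of hazard ratios (RHR) with a 95% CI.
    Each log-HR standard error is recovered from the reported CI as
    (log(upper) - log(lower)) / (2*z); the two estimates are treated
    as independent."""
    se_f = (math.log(hi_f) - math.log(lo_f)) / (2 * z)
    se_m = (math.log(hi_m) - math.log(lo_m)) / (2 * z)
    log_rhr = math.log(hr_f) - math.log(hr_m)
    se = math.sqrt(se_f ** 2 + se_m ** 2)
    return (math.exp(log_rhr),
            math.exp(log_rhr - z * se),
            math.exp(log_rhr + z * se))
```

An RHR confidence interval excluding 1 indicates that the association differs significantly between women and men.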
Modern web mapping techniques have enhanced the storytelling capability of cartography. In this paper, we present our recent development of a web mapping facility that can be used to extract interesting stories and unique insights from a diverse range of socio-economic and demographic variables and indicators, derived from a variety of datasets. We then use three curated narratives to show that online maps are effective means of interactive storytelling and visualisation, allowing users to tailor their own story maps. We discuss the reasons for the recent revival of attention to narrative mapping and conclude that our interactive web mapping facility, powered by data assets, can be employed by social scientists, journalists, policymakers, and the public as an accessible and powerful toolkit for identifying geographic patterns in various social and economic phenomena.
Background SARS-CoV-2 is known to transmit in hospital settings, but the contribution of infections acquired in hospitals to the epidemic at a national scale is unknown. Methods We used comprehensive national English datasets to determine the number of COVID-19 patients with identified hospital-acquired infections (with symptom onset > 7 days after admission and before discharge) in acute English hospitals up to August 2020. As patients may leave the hospital before their infection is detected or may have rapid symptom onset, we combined measures of length of stay and the incubation period distribution to estimate how many hospital-acquired infections may have been missed. We used simulations to estimate the total number (identified and unidentified) of symptomatic hospital-acquired infections, as well as infections due to onward community transmission from missed hospital-acquired infections, to 31st July 2020. Results In our dataset of hospitalised COVID-19 patients in acute English hospitals with a recorded symptom onset date (n = 65,028), 7% were classified as hospital-acquired. We estimated that only 30% (range across weeks and 200 simulations: 20–41%) of symptomatic hospital-acquired infections would be identified, with up to 15% (mean, 95% range over 200 simulations: 14.1–15.8%) of cases currently classified as community-acquired COVID-19 potentially linked to hospital transmission. We estimated that 26,600 (25,900 to 27,700) individuals acquired a symptomatic SARS-CoV-2 infection in an acute Trust in England before 31st July 2020, accounting for 15,900 (15,200–16,400) identified hospitalised COVID-19 cases, or 20.1% (19.2–20.7%) of the total. Conclusions Transmission of SARS-CoV-2 to hospitalised patients likely caused approximately a fifth of identified cases of hospitalised COVID-19 in the “first wave” in England, but less than 1% of all infections in England.
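The detection-probability estimate can be sketched as a small Monte Carlo: draw a length of stay, an in-hospital infection time, and an incubation period, then count onsets that fall more than 7 days after admission and before discharge. The distributions below are illustrative placeholders, not the fitted distributions used in the study:

```python
import random

def prop_identified(n=200_000, seed=1):
    """Toy Monte Carlo estimating the fraction of hospital-acquired
    infections whose symptom onset occurs > 7 days after admission and
    before discharge (i.e., would be classified as hospital-acquired).
    Length-of-stay and incubation distributions are illustrative
    placeholders."""
    rng = random.Random(seed)
    identified = 0
    for _ in range(n):
        los = rng.expovariate(1 / 8.0)              # mean 8-day stay
        infection = rng.uniform(0, los)             # infected in hospital
        incubation = rng.lognormvariate(1.63, 0.5)  # ~5-day median
        onset = infection + incubation
        if 7 < onset < los:
            identified += 1
    return identified / n
```

Most simulated infections are missed because the patient is discharged before symptom onset, which is the mechanism behind the low identification fraction reported above.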
Using time to symptom onset from admission for inpatients as a detection method likely misses a substantial proportion (> 60%) of hospital-acquired infections.
The licensing of university technologies to private firms has become an important part of the technology transfer mission of many universities. An inherent challenge for university technology licensing is that potential licensees find it difficult to judge early-stage technologies and their ultimate commercial value. We reason that patent litigation against universities can have unintended signaling effects regarding the commercial value of a university's technologies and result in increased licensing income for the university. We ground this hypothesis in theory, integrating signaling mechanisms from patent enforcement research into theoretical models explaining university technology licensing. Within our logic, the public and costly nature of patent litigation against universities creates strong, credible signals to potential licensees about the technologies of a university, even if the signal was not created for that specific purpose. We isolate the signaling mechanism central to our theorizing by exploring two moderating factors that reveal additional information to potential licensees, i.e., the licensing track record of the university and whether the lawsuit involves private firms as co-defendants. We test our theory with a unique dataset of 157 US universities and the 1,408 patent infringement cases in which they were involved as defendants over the period 2005–2016. Results show that defending against claims of patent infringement enhances technology licensing revenues, particularly when universities are already adept at licensing technology and when they are co-defendants with private firms.
In high energy physics (HEP), analysis metadata comes in many forms—from theoretical cross-sections, to calibration corrections, to details about file processing. Correctly applying metadata is a crucial and often time-consuming step in an analysis, but designing analysis metadata systems has historically received little direct attention. Among other considerations, an ideal metadata tool should be easy to use by new analysers, should scale to large data volumes and diverse processing paradigms, and should enable future analysis reinterpretation. This document, which is the product of community discussions organised by the HEP Software Foundation, categorises types of metadata by scope and format and gives examples of current metadata solutions. Important design considerations for metadata systems, including sociological factors, analysis preservation efforts, and technical factors, are discussed. A list of best practices and technical requirements for future analysis metadata systems is presented. These best practices could guide the development of a future cross-experimental effort for analysis metadata tools.
The fabrication of a future generation of (opto)electronic devices based on molecular components and materials will require careful chemical design, coupled with assembly methods that permit precise spatial orientation and arrangement of the functional molecules within device structures. Although unimolecular electronics are already a laboratory reality, the variation in the arrangement of the molecule within a molecular junction from measurement to measurement is considerable. Consequently, controlling the precise geometry at the molecule–metal contacts is a long-standing and largely unresolved challenge. Here, a strategy to fabricate uniform unimolecular junctions distributed in a regular pattern is reported. A monolayer of zinc metalloporphyrins, peripherally functionalised by bulky dendrons, is used to provide a well-defined array of molecular binding sites with precise spatial distribution. The dendrons are then photochemically cross-linked to form a robust base layer. Parallel, uniformly spaced unimolecular structures are subsequently assembled on these binding sites through a layer-by-layer (LbL) strategy. This LbL strategy proceeds through coordination of an α,ω-amino-functionalised oligo(phenylene)ethynylene (OPE) molecule to the zinc ions of the metalloporphyrin template base layer. The structure is then extended through alternating self-assembled layers of (cross-linked) zinc porphyrin and OPE. The coordination interaction between the zinc(II) sites and the bifunctional OPE wire ensures a high degree of registry between the layers, provides good electrical contact through the extended arrays, and offers fine control over the chemical composition.
Large deviations between the partial pressures of oxygen and hydrogen can cause severe membrane damage, which reduces the life of a polymer electrolyte membrane fuel cell (PEMFC). These deviations can be restrained by effective control of the partial pressures. Considering the actual operating environment of a PEMFC, its control system is affected not only by load changes but also by unknown disturbances, such as system parameter variations and failures. Taking these factors into account, this paper proposes a robust nonlinear adaptive pressure control (RNAPC) that employs a perturbation estimation technique to suppress the deviations. The proposed controller does not need to identify or estimate system parameters in real time, as conventional controllers based on parameter estimation do, nor does it rely on fault diagnosis and isolation techniques to detect and estimate failures in real time. It only uses the variations of the system state variables caused by failures to provide the required failure information to the controller. In the design of the RNAPC, an observer estimates a perturbation term containing system uncertainties, nonlinearities, and disturbances. The perturbation estimate is used to compensate the actual perturbation. An adaptive feedback linearizing control (FLC) of the PEMFC system can then be realized. The RNAPC therefore requires only a few state measurements and is independent of an accurate system model. Simulation results show that the RNAPC has smaller deviations of the partial pressures of oxygen and hydrogen and better dynamic performance than traditional PI control under load disturbance, and higher robustness against uncertainties and sensor failures than traditional PI control and FLC. Under the RNAPC, the maximum partial-pressure error remains within 0.5% in four different simulation scenarios.
This paper presents a novel smart soft open point (sSOP) that interconnects a railway electrification system with the local low-voltage (LV) distribution grid to provide additional flexibility and controllability for both networks. The proposed sSOP builds on the concept of soft open points, adding an energy storage system and embedding a rail-and-grid (R + G) energy management system to enable energy transfer between the two networks at different average power levels. The smartness of the proposed soft open point lies in the integration of an energy storage system that acts as an energy buffer between these two networks of different dynamics. Using an integrated simulation platform that couples railway and smart-grid simulators and takes into account train timetables as well as the daily variation of grid loads and renewable energy generation, the paper demonstrates that the proposed sSOP improves the braking efficiency of trains and increases the local use of renewable energy. The results show that an appropriate design and control of the sSOP enables the recuperation of most of the braking energy available from the railway, and that this energy can supply most of the local power distribution grid's demand, thereby effectively implementing the concept of smart grids.
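The energy-buffer role of the storage stage can be sketched with simple state-of-charge bookkeeping; conversion losses, power limits and control dynamics are omitted, and all figures in the test are illustrative:

```python
def buffer_braking_energy(brake_kw, grid_demand_kw, cap_kwh, dt_h=1 / 3600):
    """Toy energy-buffer bookkeeping for an sSOP-style storage stage:
    each step absorbs regenerative braking power into storage, then
    discharges to serve local LV-grid demand.  Returns (energy served
    to the grid, energy spilled when storage is full, final state of
    charge), all in kWh."""
    soc = 0.0
    served = spilled = 0.0
    for p_brake, p_load in zip(brake_kw, grid_demand_kw):
        soc += p_brake * dt_h
        if soc > cap_kwh:                  # storage full: excess is lost
            spilled += soc - cap_kwh
            soc = cap_kwh
        use = min(p_load * dt_h, soc)      # discharge to the LV grid
        soc -= use
        served += use
    return served, spilled, soc
```

Energy is conserved across the three outputs: served plus spilled plus the final state of charge equals the total braking energy absorbed, which makes the bookkeeping easy to check.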
The resonance problem of an industrial fluid-conveying pipeline system can be mitigated by shifting or assigning its natural frequencies. However, the desired natural frequencies are difficult to realize using typical numerical-model-based parameter optimization because of modelling errors. Herein, the first theoretical and experimental study on achieving desired natural frequencies of an industrial pipeline system using measured receptances is reported. The research considers the one-way fluid–structure interaction of steady flow. On this basis, the paper accounts for the changing working conditions of the pipeline system and characterizes the uncertain flow speed as an interval uncertainty. The primary framework of the classic receptance method is employed, and a novel interval-based frequency assignment method is proposed. This method inherits the advantage of the receptance methodology, in which the determination of the optimal structural modifications relies entirely on measured frequency response functions. More specifically, the stiffness modifications obtained with the proposed method improve the robustness of the assignment results to uncertain flow speeds, so that the actually achieved values of the assigned natural frequencies exhibit smaller perturbations. Several numerical examples demonstrate that the proposed method provides effective results. The application of the method to the modification of a real U-shaped fluid-conveying pipeline system gives experimental evidence of its effectiveness. These experimental results give confidence in using the receptance-based structural modification method to improve the dynamic behaviour of real structures.
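For a rank-one modification, the classic receptance method reduces to a closed-form condition: adding a grounded spring k at DOF j assigns a natural frequency w_target when 1 + k * h_jj(w_target) = 0, using only the measured receptance h_jj. A minimal single-DOF illustration follows; it is not the paper's interval-based formulation, and the "measured" receptance is an analytical stand-in:

```python
import math

def stiffness_for_assignment(h_jj, w_target):
    """Rank-one receptance-based frequency assignment: the grounded
    spring that shifts a natural frequency to w_target satisfies
    1 + k * h_jj(w_target) = 0, so k = -1 / h_jj(w_target)."""
    return -1.0 / h_jj(w_target)

# Single-DOF "measured" receptance h(w) = 1 / (k0 - m * w^2),
# standing in for a measured frequency response function.
m, k0 = 1.0, 100.0                          # natural frequency 10 rad/s
h = lambda w: 1.0 / (k0 - m * w * w)
k_add = stiffness_for_assignment(h, 12.0)   # assign 10 -> 12 rad/s
```

No mass or stiffness model of the structure is needed beyond the measured receptance at the target frequency, which is the advantage the interval-based extension above inherits.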