Recent publications
The growth of the IoT demands more comprehensive security measures than ever. RF fingerprinting (RFF) uses features of the signals and waveforms arising from transmitters' physical-layer imperfections to classify and authenticate devices. To prevent impersonation attacks, combinatorial randomness is exploited to augment the RF fingerprints of a high-efficiency power amplifier (PA) for IoT applications. By enabling different subsets of thinly sliced PA elements, the transmitter can be reconfigured into 2^20 subsets that exhibit distinctive RF fingerprints for signal analysis at the edge. In this work, a combinatorial-randomness-based PA was implemented in a BLE system. The in-phase and quadrature (I/Q) samples of the BLE packets transmitted from each configuration were collected at different SNRs to emulate environmental changes in communication channels. A lightweight convolutional neural network (CNN) classifier demonstrates that these unique features can be inferred accurately and quickly in the IoT environment, which our approach exploits to enable on-chip, time-varying RF fingerprints.
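As a concrete illustration of the classification step, below is a minimal sketch of a lightweight 1-D CNN operating on raw I/Q samples. The layer sizes, input length, and number of configuration classes are illustrative assumptions, not the architecture reported in the paper.

```python
# Hypothetical sketch: a small 1-D CNN that classifies transmitter
# configurations from raw I/Q windows. All sizes are assumptions.
import torch
import torch.nn as nn

class IQFingerprintCNN(nn.Module):
    def __init__(self, num_classes: int = 32, input_len: int = 1024):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(2, 16, kernel_size=7, padding=3),  # 2 channels: I and Q
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(8),
        )
        self.classifier = nn.Linear(32 * 8, num_classes)

    def forward(self, x):  # x: (batch, 2, input_len)
        h = self.features(x)
        return self.classifier(h.flatten(1))

# Example: classify a batch of 7 captured I/Q windows.
model = IQFingerprintCNN()
logits = model(torch.randn(7, 2, 1024))
predicted_config = logits.argmax(dim=1)
```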
Mesh-based image warping techniques typically represent image deformation using linear functions on triangular meshes or bilinear functions on rectangular meshes. This enables simple and efficient implementation but, in turn, restricts the representational capability of the deformation, often leading to unsatisfactory warping results. We present a novel, flexible polygonal finite element (poly-FEM) method for content-aware image warping. Image deformation is represented by high-order poly-FEMs on a content-aware polygonal mesh whose cell distribution is adapted to saliency information in the source image. This allows highly adaptive meshes and smoother warping with fewer degrees of freedom, significantly extending the flexibility and capability of the warping representation. Benefiting from the continuous formulation of image deformation, our poly-FEM warping method can compute the optimal image deformation by minimizing existing or even newly designed warping energies consisting of penalty terms for specific transformations. We demonstrate the versatility of the proposed poly-FEM warping method in representing different deformations and its superiority through comparisons with existing state-of-the-art methods.
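For readers unfamiliar with warping energies, the general form the abstract refers to looks like the following. This is an illustrative saliency-weighted energy, not the paper's exact functional:

```latex
% Illustrative content-aware warping energy (assumed generic form):
% the first term pulls the deformation gradient toward target local
% transformations T (e.g. similarity transforms in salient regions),
% weighted by the saliency map s; the second term penalizes
% non-smoothness; u is expanded in the poly-FEM basis.
\[
E(u) \;=\; \int_{\Omega} s(\mathbf{x})\,
  \bigl\lVert \nabla u(\mathbf{x}) - T(\mathbf{x}) \bigr\rVert_F^2 \, d\mathbf{x}
\;+\; \lambda \int_{\Omega}
  \bigl\lVert \nabla^2 u(\mathbf{x}) \bigr\rVert^2 \, d\mathbf{x}.
\]
```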
The extent to which domestic industrial capabilities are essential to a nation's prosperity and well-being is the topic of long-standing debate. On the one hand, globalization and the outsourcing of production can lead to greater productivity, lower product costs, and gains from trade. On the other hand, national capabilities have long been a source of competitiveness and security during wars and other crises. We explore the importance of domestic industrial capabilities during crises through a comparative case study of the responses of two countries, Spain and Portugal, to the sudden spike in demand for mechanical ventilators brought on by the COVID-19 pandemic. Both countries had to work within the framework of EU regulations but had very different internal competencies on which to draw in doing so. In addition, mechanical ventilators serve as a particularly interesting context for study because they involve high risk (loss of patients' lives if incorrectly manufactured) and the market presents high entry barriers (including significant tacit knowledge in production and use, and significant intellectual property embedded in proprietary software at large, established firms). To unpack the processes used by each country, we leverage insights from 60 semi-structured interviews with experts from industry, healthcare, regulatory bodies, non-profit organizations, and research centers. We find that Spanish regulatory measures were more effective, resulting in 12 times more new products receiving regulatory approval to enter the market. Although neither country is known for mechanical ventilator production, the Spanish regulatory and industrial responses were crucially informed by an internal knowledge base of domestic experts and existing capabilities in ventilator production. We conclude by proposing new theory for how nations might identify core competencies to enhance their dynamic (regulatory) capabilities in areas likely to be critical to their social welfare.
This letter presents a continuous probabilistic modeling methodology for spatial point cloud data using finite Gaussian Mixture Models (GMMs), where the number of components is adapted to the scene complexity. Few hierarchical and adaptive methods have been proposed to address the challenge of balancing model fidelity with size; instead, state-of-the-art mapping approaches require tuning parameters for specific use cases and do not generalize across diverse environments. To address this gap, we utilize a self-organizing principle from information-theoretic learning to automatically adapt the complexity of the GMM to the relevant information in the sensor data. The approach is evaluated against existing point cloud modeling techniques on real-world data with varying degrees of scene complexity.
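A hedged sketch of the general idea, adapting the number of GMM components to the data, is shown below. It uses BIC-based model selection from scikit-learn as a simple stand-in; the paper's self-organizing, information-theoretic criterion is different.

```python
# Illustrative stand-in: pick the number of GMM components for a point
# cloud by minimizing BIC. This only shows the general idea of adapting
# model complexity to scene complexity, not the paper's method.
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_adaptive_gmm(points: np.ndarray, max_components: int = 16):
    """points: (N, 3) array of x, y, z samples from a depth sensor."""
    best_model, best_bic = None, np.inf
    for k in range(1, max_components + 1):
        gmm = GaussianMixture(n_components=k, covariance_type="full",
                              random_state=0).fit(points)
        bic = gmm.bic(points)
        if bic < best_bic:
            best_model, best_bic = gmm, bic
    return best_model

cloud = np.random.rand(2000, 3)  # placeholder for real sensor data
model = fit_adaptive_gmm(cloud)
print(model.n_components, "components selected")
```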
Europe has experienced a substantial increase in non-indigenous crayfish species (NICS) since the mid-20th century due to their extensive use in fisheries, aquaculture and, more recently, the pet trade. Despite the relatively long invasion histories of some NICS and their negative impacts on biodiversity and ecosystem functioning, large spatio-temporal analyses of their occurrences are lacking. Here, we used a large freshwater macroinvertebrate database to evaluate what information on NICS can be obtained from widely applied biomonitoring approaches and how usable such data are for describing trends in the identified NICS. We found 160 time series containing NICS between 1983 and 2019 and used them to infer temporal patterns and environmental drivers of species- and region-specific trends. Using a combination of meta-regression and generalized linear models, we found no significant temporal trend in the abundance of any species (Procambarus clarkii, Pacifastacus leniusculus or Faxonius limosus) at the European scale, but identified species-specific predictors of abundance. The spatial range of NICS expanded in England (i.e. increasing spread) and retreated significantly in northern Spain, while no trend was detected in Hungary or the Dutch-German-Luxembourg region. The average invasion velocity varied among countries, ranging from 30 km/year in England to 90 km/year in Hungary, and gradually decreased over time, with the decline being fastest in the Dutch-German-Luxembourg region and much slower in England. Considering that NICS pose a substantial threat to aquatic biodiversity across Europe, our study highlights the utility and importance of collecting high-resolution (i.e. annual) biomonitoring data using a sampling protocol able to estimate crayfish abundance, enabling a more profound understanding of NICS impacts on biodiversity.
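For illustration, a trend analysis of this kind might be set up as below with a negative-binomial GLM. The synthetic data, column names, and model form are assumptions for the sketch, not the paper's exact specification.

```python
# Hypothetical sketch of an abundance-trend GLM (negative binomial),
# fit to a synthetic stand-in for the biomonitoring time series.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "year": rng.integers(1983, 2020, size=300),
    "species": rng.choice(
        ["P. clarkii", "P. leniusculus", "F. limosus"], size=300),
})
df["abundance"] = rng.poisson(5, size=300)  # no built-in trend

model = smf.glm("abundance ~ year + C(species)", data=df,
                family=sm.families.NegativeBinomial()).fit()
print(model.params)  # a near-zero 'year' coefficient indicates no trend
```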
There is growing interest in recycling lithium-ion battery materials, and specifically in direct recycling, wherein the cathode material is harvested and rejuvenated for use in new batteries. A key risk associated with this process is that materials collected from aged batteries will be inhomogeneous due to the variable conditions they experience during use, both between cells and even within cells themselves. In this paper, we investigate the direct recycling process factors that address the effects of three properties of aged and harvested LiNi0.815Co0.15Al0.035O2 (NCA) materials: variation in state-of-charge (SoC), the inconsistent presence of a surface reconstruction layer on the material, and the inhomogeneous growth of the solid electrolyte interphase (SEI). We explore how each property impacts the consistency of the recycled NCA in terms of several key materials properties.
Private Stream Aggregation (PSA) protocols perform secure aggregation of time-series data without leaking information about users' inputs to the aggregator. Previous work in post-quantum PSA used the Ring Learning with Errors (RLWE) problem indirectly via homomorphic encryption (HE), leading to a needlessly complex and computationally intensive construction. In this work, we present SLAP, the first PSA protocol constructed directly from the RLWE problem to gain post-quantum security. By nature of this white-box approach, SLAP is simpler and more efficient than previous PSA protocols that use RLWE indirectly through the black box of HE. We also show how to apply state-of-the-art optimizations for lattice-based cryptography to greatly improve the practical performance of SLAP. The communication overhead of SLAP is much lower than in previous work, with ciphertext sizes reduced by up to 99.96% compared to previous RLWE-based PSA. We demonstrate a 20.76x speedup over the aggregation of the previous state-of-the-art RLWE-based PSA work and show that SLAP achieves a throughput of 390,691 aggregations per second for 1000 users. We also compare SLAP to other state-of-the-art post-quantum PSA schemes, showing comparable latency and improved throughput, and we compare the qualitative features of these schemes with regard to practical usability.
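To make the PSA functionality concrete, here is a toy sketch of the aggregation principle: per-round masks that sum to zero let the aggregator learn only the total of the users' inputs, never an individual input. This illustrates what PSA guarantees, not SLAP's RLWE-based construction.

```python
# Toy PSA illustration (dealer-based one-time masks, one round shown).
import secrets

Q = 2**61 - 1          # modulus (illustrative)
NUM_USERS = 5

def setup_masks(n: int) -> list[int]:
    """Dealer generates masks that sum to 0 mod Q."""
    masks = [secrets.randbelow(Q) for _ in range(n - 1)]
    masks.append((-sum(masks)) % Q)
    return masks

def encrypt(x: int, mask: int) -> int:
    return (x + mask) % Q

inputs = [3, 1, 4, 1, 5]
masks = setup_masks(NUM_USERS)
ciphertexts = [encrypt(x, m) for x, m in zip(inputs, masks)]

# The aggregator sums ciphertexts; the masks cancel, revealing only
# the total, while each individual ciphertext looks uniformly random.
assert sum(ciphertexts) % Q == sum(inputs) % Q
print(sum(ciphertexts) % Q)  # 14
```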
Recently developed methods for video analysis, especially models for pose estimation and behavior classification, are making behavioral quantification more precise, scalable, and reproducible in fields such as neuroscience and ethology. These tools overcome long-standing limitations of manual scoring of video frames and traditional 'center of mass' tracking algorithms to enable video analysis at scale. The expansion of open-source tools for video acquisition and analysis has led to new experimental approaches for understanding behavior. Here, we review currently available open-source tools for video analysis and discuss how labs new to video recording can set up these methods. We also discuss best practices for developing and using video analysis methods, including community-wide standards and critical needs for the open sharing of datasets and code, more widespread comparisons of video analysis methods, and better documentation of these methods, especially for new users. We encourage broader adoption and continued development of these tools, which have tremendous potential for accelerating scientific progress in understanding the brain and behavior.
The past decade has witnessed the development of layered hydroxide-based self-supporting electrodes, but their low active mass ratio impedes all-around energy storage applications. Herein, we break the intrinsic limit of layered hydroxides by engineering F-substituted β-Ni(OH)2 (Ni-F-OH) plates with sub-micron thickness (over 700 nm), producing a super-high mass loading of 29.8 mg cm−2 on the carbon substrate. Theoretical calculations and X-ray absorption spectroscopy demonstrate that Ni-F-OH shares the β-Ni(OH)2-like structure with slightly tuned lattice parameters. More interestingly, the synergistic modulation of NH4+ and F− is found to be the key enabler for tailoring these sub-micron-thickness 2D plates, thanks to its effects on the (001) plane surface energy and the local OH− concentration. Guided by this mechanism, we further develop superstructures of bimetallic hydroxides and their derivatives, demonstrating that they are a versatile family with great promise. The tailored ultra-thick phosphide superstructure achieves a super-high specific capacity of 7144 mC cm−2 and a superior rate capability (79% at 100 mA cm−2). This work highlights a multiscale understanding of how exceptional structure modulation happens in low-dimensional layered materials. The methodology and mechanisms established here will boost the development of advanced materials to better meet future energy demands.
Sensory feedback is essential to both animals and robotic systems for achieving coordinated, precise movements. Mechanosensory feedback, which provides information about body deformation, depends not only on the properties of the sensors but also on the structure in which they are embedded. In insects, wing structure plays a particularly important role in flapping flight: in addition to generating aerodynamic forces, wings provide the mechanosensory feedback needed to guide flight while undergoing dramatic deformations during each wingbeat. However, the role that wing structure plays in determining mechanosensory information is relatively unexplored. Insect wings exhibit characteristic stiffness gradients and are subject to both aerodynamic and structural damping. Here we examine how both of these properties impact sensory performance, using finite element analysis combined with sensor placement optimization. We show that wings with nonuniform stiffness exhibit several advantages over uniform-stiffness wings, including higher accuracy of rotation detection and lower sensitivity to the placement of sensors on the wing. Moreover, we show that higher damping generally improves the accuracy with which body rotations can be detected. These results contribute to our understanding of the evolution of nonuniform stiffness patterns in insect wings and suggest design principles for robotic systems.
We develop a conceptual framework for studying collective adaptation in complex socio-cognitive systems, driven by dynamic interactions of social integration strategies, social environments and problem structures. Going beyond searching for 'intelligent' collectives, we integrate research from different disciplines and outline modelling approaches that can be used to begin answering questions such as why collectives sometimes fail to reach seemingly obvious solutions, how they change their strategies and network structures in response to different problems and how we can anticipate and perhaps change future harmful societal trajectories. We discuss the importance of considering path dependence, lack of optimization and collective myopia to understand the sometimes counterintuitive outcomes of collective adaptation. We call for a transdisciplinary, quantitative and societally useful social science that can help us to understand our rapidly changing and ever more complex societies, avoid collective disasters and reach the full potential of our ability to organize in adaptive collectives.
Assessing sensitivity to unmeasured confounding is an important step in observational studies, which typically estimate effects under the assumption that all confounders are measured. In this paper, we develop a sensitivity analysis framework for balancing weights estimators, an increasingly popular approach that solves an optimization problem to obtain weights that directly minimize covariate imbalance. In particular, we adapt a percentile-bootstrap sensitivity analysis framework to a broad class of balancing weights estimators. We prove that the percentile bootstrap procedure can, with only minor modifications, yield valid confidence intervals for causal effects under restrictions on the level of unmeasured confounding. We also propose an amplification, a mapping from a one-dimensional sensitivity analysis to a higher-dimensional sensitivity analysis, that allows for interpretable sensitivity parameters in the balancing weights framework. We illustrate our method through extensive real-data examples.
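A minimal sketch of the percentile bootstrap for a weighted estimator appears below. The uniform placeholder weights and the simple resampling loop are stand-ins: balancing weights come from an optimization problem, and the paper's procedure adds modifications to account for unmeasured confounding.

```python
# Percentile-bootstrap CI for a weighted treatment-effect estimate.
import numpy as np

def weighted_ate(y, t, w):
    """Weighted difference in means between treated (t=1) and control."""
    treated = np.average(y[t == 1], weights=w[t == 1])
    control = np.average(y[t == 0], weights=w[t == 0])
    return treated - control

def percentile_bootstrap_ci(y, t, w, n_boot=2000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    n = len(y)
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)  # resample units with replacement
        stats.append(weighted_ate(y[idx], t[idx], w[idx]))
    lo, hi = np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi

# Toy usage with synthetic data and placeholder weights:
rng = np.random.default_rng(1)
n = 500
t = rng.integers(0, 2, n)
y = 1.0 * t + rng.normal(size=n)   # true effect of 1.0
w = np.ones(n)                     # stand-in for balancing weights
print(percentile_bootstrap_ci(y, t, w))
```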
We describe a new open-source Python-based package for high accuracy correlated electron calculations using quantum Monte Carlo (QMC) in real space: PyQMC. PyQMC implements modern versions of QMC algorithms in an accessible format, enabling algorithmic development and easy implementation of complex workflows. Tight integration with the PySCF environment allows for a simple comparison between QMC calculations and other many-body wave function techniques, as well as access to high accuracy trial wave functions.
Background
Diagnosis of shockable rhythms leading to defibrillation remains integral to improving out-of-hospital cardiac arrest outcomes. New machine learning techniques have emerged to diagnose arrhythmias on ECGs. In out-of-hospital cardiac arrest, the algorithm within an automated external defibrillator is the major determinant of whether defibrillation is delivered. This study developed and validated a convolutional neural network (CNN) to diagnose shockable arrhythmias within a novel, miniaturized automated external defibrillator.
Methods and Results
The study data set comprised 26,464 single-lead ECGs. Each ECG of 7-s duration was retrospectively adjudicated by 3 physician readers (N=18 readers in total). After exclusions (N=1582), ECGs were divided into training (N=23,156), validation (N=721), and test (N=1005) data sets. CNN performance in diagnosing shockable and nonshockable rhythms was reported with area under the receiver operating characteristic curve (AUC) analysis, F1 scores, and sensitivity and specificity calculations. The time for the CNN to produce an output was measured with the algorithm running within the automated external defibrillator. Internal and external validation analyses included CNN performance on arrhythmias often mistaken for shockable rhythms and on ECGs modified with noise to mimic artifacts. The CNN algorithm achieved an AUC of 0.995 (95% CI, 0.990–1.0), sensitivity of 98%, and specificity of 100% for diagnosing shockable rhythms. The F1 scores were 0.990 and 0.995 for shockable and nonshockable rhythms, respectively. After input of a 7-s ECG, the CNN generated an output in 383±29 ms (total time of 7.383 s). The CNN outperformed the adjudicators in classifying atrial arrhythmias as nonshockable (specificity, 99.3% vs 98.1%) and was robust against noise artifacts (AUC range, 0.871–0.999).
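For reference, the reported metrics can be computed from model outputs as in the following scikit-learn sketch; the arrays and the 0.5 decision threshold are placeholders, not the study's data.

```python
# Computing AUC, F1, sensitivity, and specificity from predictions.
import numpy as np
from sklearn.metrics import roc_auc_score, f1_score, confusion_matrix

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])  # 1 = shockable rhythm
y_prob = np.array([.97, .02, .88, .91, .15, .08, .76, .33])
y_pred = (y_prob >= 0.5).astype(int)         # placeholder threshold

auc = roc_auc_score(y_true, y_prob)
f1_shockable = f1_score(y_true, y_pred)
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"AUC={auc:.3f} F1={f1_shockable:.3f} "
      f"Se={sensitivity:.2%} Sp={specificity:.2%}")
```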
Conclusions
We demonstrate high diagnostic performance of a CNN algorithm for classifying shockable and nonshockable rhythms within a digitally connected automated external defibrillator.
Registration
URL: https://clinicaltrials.gov/ct2/show/NCT03662802; Unique identifier: NCT03662802
The presence of RF components in mixed-signal circuits makes it challenging to resolve tradeoffs among performance specifications. To ease the circuit design process, these tradeoffs are analyzed using multi-objective optimization methodologies. This paper presents a hybrid multi-objective optimization framework (MHPSO) that combines particle swarm optimization and simulated annealing. The framework emphasizes preserving nondominated solutions in an external archive. The multi-dimensional space excluding the archive is divided into several sub-spaces according to a velocity-temperature mapping scheme, and the solutions in each sub-space are optimized using simulated annealing to generate a Pareto front. The framework is extended by incorporating a crowding-distance comparison operator (MHPSO-CD) to maintain nondominated solutions in the archive. The effectiveness of the proposed methodologies is demonstrated for performance-space exploration of three electronic circuits: a two-stage operational amplifier, a folded-cascode operational amplifier, and a low-noise amplifier with inductive source degeneration. Additionally, the performance of the proposed algorithms (MHPSO, MHPSO-CD) is evaluated on various test functions, and the results are compared with standard multi-objective evolutionary algorithms.
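The external-archive bookkeeping that such frameworks rely on can be sketched as follows; this is a generic nondominated-archive update for minimization problems, not the paper's implementation.

```python
# Maintain an archive of mutually nondominated solutions (minimization).
def dominates(a, b):
    """a dominates b: no worse in all objectives, better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and \
           any(x < y for x, y in zip(a, b))

def update_archive(archive, candidate):
    if any(dominates(member, candidate) for member in archive):
        return archive  # candidate is dominated: reject it
    # Keep only members the candidate does not dominate, then add it.
    return [m for m in archive if not dominates(candidate, m)] + [candidate]

# Example with two objectives, e.g. (power, noise figure) of an amplifier:
archive = []
for point in [(3.0, 2.0), (2.0, 3.0), (2.5, 2.5), (1.0, 1.0)]:
    archive = update_archive(archive, point)
print(archive)  # [(1.0, 1.0)] -- it dominates all the others
```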
We prove global existence of a continuous-time Nash equilibrium with endogenous persistent and exogenous temporary price impact. Relative to the analogous Radner and Pareto-efficient equilibria, the Nash equilibrium has a lower interest rate but has similar Sharpe ratios and stock-return volatility.
Background:
Children awaiting transplantation face a high risk of waitlist mortality due to a shortage of pediatric organ donors. Pediatric donation consent rates vary across Organ Procurement Organizations (OPOs), suggesting that some OPOs may use more effective pediatric-focused donor recruitment techniques than others. An online survey of 193 donation requestor staff sheds light on the strategies OPO staff use when approaching potential pediatric deceased organ donors.
Methods:
In collaboration with the Association of Organ Procurement Organizations, the research team contacted the executive directors and medical directors of all 57 of the OPOs in the US. Of these, 51 OPOs agreed to participate, and 47 provided contact information for donation requestor staff. Of the 379 staff invited to participate in the survey, 193 provided complete responses.
Results:
Respondents indicated more comfort approaching adult donors than pediatric donors, and they endorsed approach techniques that were interpersonal and emotional rather than professional and informative. Respondents were accurate in their perceptions of which donor characteristics are associated with consent. However, respondents from OPOs with high consent rates (according to data from the Scientific Registry of Transplant Recipients) and those from OPOs with low consent rates were very similar in terms of demographics, training, experience, and reported techniques.
Conclusions:
Additional research is needed to better determine why some OPOs have higher consent rates than others and whether the factors that lead to high consent rates in high-performing OPOs can be successful when implemented by lower-performing OPOs.
- Judit Prat
- G Zacharegkas
- Y Park
- [...]
- J Weller
Recent cosmological analyses with large-scale structure and weak lensing measurements, usually referred to as 3×2pt, have had to discard significant signal-to-noise from small scales due to our inability to accurately model non-linearities and baryonic effects. Galaxy-galaxy lensing, the position-shear correlation between lens and source galaxies, is one of the three two-point correlation functions included in such analyses, usually estimated with the mean tangential shear. However, tangential shear measurements at a given angular scale θ or physical scale R carry information from all scales below that, forcing the scale cuts applied to real data to be significantly larger than the scale at which theoretical uncertainties become problematic. Recently there have been a few independent efforts to mitigate this non-locality of the galaxy-galaxy lensing signal. Here we compare the different methods, including the Y-transformation, the Point-Mass marginalization methodology and the Annular Differential Surface Density statistic, at the level of cosmological constraints in a combined galaxy clustering and galaxy-galaxy lensing analysis. We find that all the estimators yield equivalent cosmological results for a simulated Rubin Observatory Legacy Survey of Space and Time (LSST) Year 1-like setup and also when applied to DES Y3 data. With the LSST Y1 setup, we find that the mitigation schemes yield S8 constraints that are ∼1.3 times tighter than applying larger scale cuts without any mitigation scheme.
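For context, the standard relation behind this non-locality (textbook material, not specific to this paper) is that the mean tangential shear at projected radius R measures the excess surface density, which integrates the mass profile over all radii below R:

```latex
% Tangential shear and excess surface density: the mean interior
% surface density term makes gamma_t(R) sensitive to all scales < R.
\[
\gamma_t(R)\,\Sigma_{\mathrm{crit}} \;=\; \Delta\Sigma(R)
\;=\; \overline{\Sigma}(<R) \;-\; \Sigma(R),
\qquad
\overline{\Sigma}(<R) \;=\; \frac{2}{R^{2}}\int_{0}^{R} R'\,\Sigma(R')\,dR'.
\]
```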
Existing reaction transition state (TS) databases are comparatively small and lack chemical diversity. Here, this data gap has been addressed using the concept of a graphically defined model reaction to comprehensively characterize a reaction space associated with C-, H-, O-, and N-containing molecules with up to 10 heavy (non-hydrogen) atoms. The resulting dataset comprises 176,992 organic reactions possessing at least one validated TS, activation energy, heat of reaction, reactant and product geometries, frequencies, and atom mapping. For 33,032 reactions, more than one TS was discovered by conformational sampling, allowing conformational errors in TS prediction to be assessed. Data are supplied at the GFN2-xTB and B3LYP-D3/TZVP levels of theory. A subset of reactions was recalculated at the CCSD(T)-F12/cc-pVDZ-F12 and ωB97X-D2/def2-TZVP levels to establish relative errors. The resulting collection of reactions and properties is called the Reaction Graph Depth 1 (RGD1) dataset. RGD1 represents the largest and most chemically diverse TS dataset published to date and should find immediate use in developing novel machine learning models for predicting reaction properties.
Information
Address
5000 Forbes Avenue, 15213, Pittsburgh, PA, United States
Head of institution
Subra Suresh
Website
http://www.cmu.edu
Phone
412-268-2000