In this work, we develop a machine learning-based method to characterize intracluster concentration (ρc), background concentration (ρb), clustering radius (r̄), and radius dispersity (δr) in simulated atom probe tomography data using multiple spatial statistics summary functions to train a Bayesian regularized neural network. We build upon previous work that utilized Ripley's K-function by incorporating additional features from nearest-neighbor spatial statistics summary functions to better characterize concentration-based metrics. The addition of nearest-neighbor based features allows for highly accurate estimates of ρc and ρb, both with 90% of the predictions within 4.0% of the real value; the root-mean-square errors are reduced by 81.5% and 92.8% from predictions using only K-function based features, respectively. Additionally, including these nearest-neighbor based features improves the ability to differentiate between r̄ and δr.
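The abstract does not specify how the nearest-neighbor features are constructed. As a rough illustration of the idea, a normalized histogram of first-nearest-neighbor distances can serve as a fixed-length feature vector for a 3-D point cloud; this is a minimal sketch, with illustrative function names and binning, not the authors' implementation:

```python
import math
import random

def nn_distances(points):
    """First nearest-neighbor distance for each point (brute force)."""
    dists = []
    for i, p in enumerate(points):
        best = min(math.dist(p, q) for j, q in enumerate(points) if j != i)
        dists.append(best)
    return dists

def nn_features(points, n_bins=10, r_max=1.0):
    """Normalized histogram of 1-NN distances as a fixed-length feature
    vector; distances beyond r_max fall into the last bin."""
    d = nn_distances(points)
    counts = [0] * n_bins
    for x in d:
        k = min(int(n_bins * x / r_max), n_bins - 1)
        counts[k] += 1
    total = len(d)
    return [c / total for c in counts]

# Illustrative random point cloud standing in for an APT reconstruction
random.seed(0)
pts = [(random.random(), random.random(), random.random()) for _ in range(200)]
feats = nn_features(pts, n_bins=8, r_max=0.5)
```

In practice such histograms would be computed with a spatial index (e.g. a k-d tree) rather than the brute-force loop shown here, and concatenated with K-function features before training the network.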
The formation of nuclei in slightly proton-rich regions of the neutrino-driven wind of core-collapse supernovae could be attributed to the neutrino-p process (νp-process). Proceeding via a sequence of (p,γ) and (n,p) reactions, it may produce elements between Ni and Sn under suitable conditions. Recent studies have identified a number of decisive (n,p) reactions that control the efficiency of the νp-process. One such (n,p) reaction was studied via a measurement of the reverse (p,n) reaction in inverse kinematics with SECAR at NSCL/FRIB. Proton-induced reaction measurements, especially in the mass region of interest, are notably difficult because the recoils have nearly the same mass as the unreacted projectiles. Such measurements become feasible with the separation achieved by SECAR combined with coincident neutron detection. Adapting SECAR for its first (p,n) reaction measurement required the development of new ion beam optics and the installation of a neutron detection system. These developments are presented, along with a discussion of the preliminary results of the p(⁵⁸Fe,n)⁵⁸Co reaction measurement.
The r-process has been shown to robustly reproduce the abundance distributions of heavy elements, such as europium, observed in ultra-metal-poor stars. In contrast, observed abundances of elements with 26 < Z < 47 exceed r-process model predictions. A proposed additional source of early nucleosynthesis is the weak r-process in neutrino-driven winds of core-collapse supernovae. In this site, (α,n) reactions have been shown to be both crucial to nucleosynthesis and the main source of uncertainty in model-based abundance predictions. To improve the certainty of nucleosynthesis predictions, the cross section of the important reaction ⁸⁶Kr(α,n)⁸⁹Sr has been measured at an energy relevant to the weak r-process. The experiment was conducted in inverse kinematics at TRIUMF with the EMMA recoil mass spectrometer and the TIGRESS gamma-ray spectrometer, using a novel type of solid helium target.
Knowledge of snow cover distribution and disappearance dates over a wide range of scales is imperative for understanding hydrological dynamics and for habitat management of wildlife species that rely on snow cover. Identification of snow refugia, or places with relatively late snow disappearance dates compared to surrounding areas, is especially important as climate change alters snow cover timing and duration. The purpose of this study was to increase understanding of snow refugia in complex terrain spanning the rain-snow transition zone at fine spatial and temporal scales. To accomplish this objective, we used remote cameras to provide relatively high temporal and spatial resolution measurements of snowpack conditions. We built linear models to relate snow disappearance dates (SDDs) at the monitoring sites to topoclimatic and canopy cover metrics. One model quantified SDDs using elevation, aspect, and an interaction between canopy cover and cold-air pooling potential. High-elevation, north-facing sites in cold-air pools had the latest SDDs, but isolated lower-elevation points also exhibited relatively late potential SDDs. Importantly, canopy cover had a much stronger effect on SDDs in cold-air pools than in non-cold-air pools, indicating that best practices in forest management for snow refugia could vary across microtopography. A second model that included in situ hydroclimate observations (DJF temperature and March 1 snow depth) indicated that March 1 snow depth had little impact on SDD at the coldest winter temperatures, and that DJF temperatures had a stronger effect on SDD at lower snow depths, implying that the relative importance of snowfall and temperature in their impact on snow refugia could vary across hydroclimatic contexts. This new understanding of factors influencing snow refugia can guide forest management actions to increase snow retention and inform management of snow-dependent wildlife species in complex terrain.
This chapter synthesizes the physics education research work related to the interplay of visualization and mathematization in physics teaching and learning, specifically as mediated by dynamic, interactive digital visualization tools. In structuring our synthesis, we build on existing theories of visualization and mathematization to propose two “functions” that visualization tools serve in facilitating mathematization: (1) bridging between physical phenomena and formalisms, and (2) bridging between idealized models of physical phenomena and formalisms. We populate these two broad categories with illustrative examples of visualization tools and conclude with a summary of the developmental history of those tools in physics education research.
Environmental stress is a major threat to the existence of coral reefs and has generated considerable interest in the coral research community. Under environmental stress, corals can experience tissue loss and/or the breakdown of symbiosis between the cnidarian host and its symbiotic algae, causing the coral to appear white as the skeleton becomes visible through the transparent tissue. Image analysis is a common method used to assess tissue response under environmental stress, but the traditional approach is limited by the dynamic nature of the coral-algae symbiosis. Here, we observed coral tissue response in the scleractinian coral Montipora capricornis using high-frequency image analysis throughout the experiment, as opposed to the typical start/end point assessment method. Color analysis reveals that the process can be divided into five stages, two of them critical, according to coral tissue morphology and color ratio. We further explore changes to the morphology of individual polyps by means of the Pearson correlation coefficient and recurrence plots, in which quasi-periodic and nonstationary dynamics can be identified. Recurrence quantification analysis also allows comparison between different polyps. Our research provides a detailed visual and mathematical analysis of coral tissue response to environmental stress that is potentially universally applicable. Moreover, our approach provides a robust quantitative advancement for improving insight into a suite of biotic responses from the perspective of coral health evaluation and fate prediction.
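As a minimal sketch of the recurrence-plot machinery mentioned above: a binary recurrence matrix marks pairs of time points whose signal values fall within a threshold ε of each other, and the recurrence rate is the simplest recurrence quantification measure. The thresholding rule and measure are the standard definitions; the signal and parameters here are illustrative, not the coral data:

```python
import math

def recurrence_matrix(series, eps):
    """Binary recurrence plot: R[i][j] = 1 when |x_i - x_j| < eps."""
    n = len(series)
    return [
        [1 if abs(series[i] - series[j]) < eps else 0 for j in range(n)]
        for i in range(n)
    ]

def recurrence_rate(R):
    """RQA measure: fraction of recurrent points in the plot."""
    n = len(R)
    return sum(sum(row) for row in R) / (n * n)

# Illustrative quasi-periodic signal standing in for a polyp color trace
signal = [math.sin(0.3 * t) for t in range(100)]
R = recurrence_matrix(signal, eps=0.1)
rr = recurrence_rate(R)
```

A periodic signal produces diagonal line structures in R; nonstationary dynamics show up as fading or interrupted structures, which is what makes the plots useful for staging the tissue response.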
Copper is expected to play a major role in the global energy transition as solar, wind, and electric vehicle deployment grows. Understanding the metal market and forecasting price changes can help market participants plan for future shifts in supply and demand. Developing dynamic models of demand and supply requires accounting for price elasticity; static prediction models ignore it and forecast future quantity demanded without considering the relationship between price and quantity. A framework is proposed to estimate the price elasticity of copper supply and demand from 1990 to 2020 using production, consumption, and price data. The results show that both supply and demand price elasticities in the copper market are small in the long run but statistically significant: rather than no change in price, there would be a small change in price and, thus, a small change in quantity demanded and supplied.
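The abstract does not state the estimation method; a common way to obtain a long-run price elasticity from annual data is the slope of an ordinary-least-squares regression of log quantity on log price. A minimal sketch, with synthetic data standing in for the 1990-2020 series:

```python
import math

def log_log_slope(prices, quantities):
    """OLS slope of log(quantity) on log(price): the price elasticity."""
    lp = [math.log(p) for p in prices]
    lq = [math.log(q) for q in quantities]
    n = len(lp)
    mp = sum(lp) / n
    mq = sum(lq) / n
    cov = sum((x - mp) * (y - mq) for x, y in zip(lp, lq))
    var = sum((x - mp) ** 2 for x in lp)
    return cov / var

# Synthetic data with a true elasticity of -0.2 (inelastic demand)
prices = [1.0, 1.5, 2.0, 2.5, 3.0]
quantities = [100.0 * p ** -0.2 for p in prices]
e = log_log_slope(prices, quantities)  # recovers about -0.2
```

A small negative slope like this is exactly the "small but statistically significant" elasticity the abstract describes: a price increase reduces quantity demanded, but only modestly.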
A normalizing flow (NF) is a mapping that transforms a chosen probability distribution into a normal distribution. Such flows are a common technique for data generation and density estimation in machine learning and data science. The density estimate obtained with an NF relies on a change-of-variables formula that involves computing the Jacobian determinant of the NF transformation. To make this determinant tractable, continuous normalizing flows (CNFs) estimate the mapping and its Jacobian determinant using a neural ODE. Optimal transport (OT) theory has been used successfully to assist in finding CNFs by formulating them as OT problems with a soft penalty for enforcing the standard normal distribution as the target measure. A drawback of OT-based CNFs is the addition of a hyperparameter that controls the strength of the soft penalty and requires significant tuning. We present JKO-Flow, an algorithm that solves OT-based CNFs without tuning this hyperparameter. This is achieved by integrating the OT CNF framework into a Wasserstein gradient flow framework, also known as the JKO scheme. Instead of tuning the penalty strength, we repeatedly solve the optimization problem for a fixed value, effectively performing a JKO update with a corresponding time step. We thus obtain a "divide and conquer" algorithm that repeatedly solves simpler subproblems instead of one potentially harder problem with a large penalty parameter.
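The change-of-variables formula underlying NF density estimation can be illustrated in one dimension with a simple affine flow: if z = (x - shift)/scale maps the data to a standard normal, then p(x) = N(z; 0, 1) · |dz/dx|. This is a toy stand-in for the neural ODE of a CNF; the parameter names are illustrative:

```python
import math

def standard_normal_pdf(z):
    """Density of the standard normal distribution at z."""
    return math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)

def flow_density(x, scale, shift):
    """Density of x under the change-of-variables formula for the
    affine flow z = (x - shift) / scale, whose Jacobian |dz/dx|
    is simply 1 / scale."""
    z = (x - shift) / scale
    return standard_normal_pdf(z) / scale

# An affine flow with scale=2, shift=1 models a Normal(1, 2^2) density;
# its mode sits at x = 1
p = flow_density(1.0, scale=2.0, shift=1.0)
```

In a CNF the scalar Jacobian 1/scale is replaced by the determinant of a high-dimensional Jacobian, whose log is integrated along the neural ODE trajectory; that is the quantity the OT penalty and the JKO scheme help control.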
How Product Locations Drive Traffic Throughout a Retail Store In “Store-Wide Shelf-Space Allocation with Ripple Effects Driving Traffic,” Flamand, Ghoniem, and Maddah develop a framework for deciding where to place products in a store, in addition to apportioning the shelf space among products, in a way that maximizes impulse profit; impulse buying is a phenomenon that may account for 50% of transactions. By analyzing a large data set of customer receipts from a grocery store in Beirut, the authors develop a regression model that estimates traffic at a shelf based on its location and the “attraction” from products allocated nearby. The traffic model is embedded within a mixed-integer nonlinear program, which they solve via specialized linear approximations. For the store in Beirut, a 65% improvement in impulse profit is anticipated, and the location of products is found to be significantly more important in driving store-wide traffic than the relative shelf-space allocation.
Significant segments of the HRI literature rely on or promote the ability to reason about human identity characteristics, including age, gender, and cultural background. However, attempting to handle identity characteristics raises a number of critical ethical concerns, especially given the spatiotemporal dynamics of these characteristics. In this paper I question whether human identity characteristics can and should be represented, recognized, or reasoned about by robots, with special attention paid to the construct of race, due to its relative lack of consideration within the HRI community. As I will argue, while there are a number of well-warranted reasons why HRI researchers might want to enable robotic consideration of identity characteristics, these reasons are outweighed by a number of key ontological, perceptual, and deployment-oriented concerns. This argument raises troubling questions as to whether robots should even be able to understand or generate descriptions of people, and how they would do so while avoiding these ethical concerns. Finally, I conclude with a discussion of what this means for the HRI community, in terms of both algorithm and robot design, and speculate as to possible paths forward.
Social robots of the future will need to perceive, reason about, and respond appropriately to ethically sensitive situations. At the same time, policymakers and researchers alike are advocating for increased transparency and explainability in robotics: design principles that help users build accurate mental models and calibrate trust. In this short paper, we consider how Rube Goldberg machines might offer a strong analogy on which to build transparent user interfaces for the intricate, but knowable, inner workings of a cognitive architecture's moral reasoning. We present a discussion of these related concepts, a rationale for the suitability of this analogy, and early designs for an initial prototype visualization.
Working Memory (WM) is a central component of cognition. It has a direct impact not only on core cognitive processes, such as learning, comprehension, and reasoning, but also on language-related processes, such as natural language understanding and referring expression generation. Thus, for robots to achieve human-like natural language capabilities, we argue that their cognitive models should include an accurate WM representation that plays a similarly central role. Our research investigates how different WM models from cognitive psychology affect robots' natural language capabilities. Specifically, we explore the limited-capacity nature of WM and how different information forgetting strategies, namely decay and interference, impact the human-likeness of utterances formulated by robots.
The present work aims to optimize the thermal behavior of a building envelope by combining sensitivity analysis (SA) and multi-objective optimization (MOO). An existing classroom located in Marrakech was considered as the case-study building, and the building model was analyzed under six Moroccan climate zones. The SA was applied to 16 design variables and performed using the Morris method implemented in the tool Simlab, ranking each design variable by its influence on the objective function (overall energy demand). The SA results showed that the solar absorptance of the internal roof, wall, and ground floor, as well as the ground hollow-core-slab thickness, had the least impact on the overall energy demand; therefore, only the remaining variables, which showed the most relevant effects, were optimized. The optimization phase was conducted by coupling the generic optimization tool GenOpt with TRNSYS, and the optimal solution was selected using the Pareto front approach. The results confirmed the effectiveness of the adopted methodology in significantly reducing the required thermal loads. Furthermore, the optimal set of design variables differs from one climate zone to another, leading to energy demand reductions of 30 to 42% compared with the original building design.
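The screening step above can be sketched with the textbook Morris elementary-effects computation: perturb one variable at a time by a step Δ and average the absolute effects (μ*) over random base points. This is a minimal illustration of the method, not the Simlab implementation; the toy objective and parameters are hypothetical:

```python
import random

def elementary_effect(f, x, i, delta):
    """Morris elementary effect of variable i at base point x."""
    x_step = list(x)
    x_step[i] += delta
    return (f(x_step) - f(x)) / delta

def morris_mu_star(f, dim, n_traj, delta, rng):
    """Mean absolute elementary effect (mu*) per variable, averaged
    over randomly sampled base points in [0, 1 - delta]^dim."""
    mu = [0.0] * dim
    for _ in range(n_traj):
        x = [rng.random() * (1 - delta) for _ in range(dim)]
        for i in range(dim):
            mu[i] += abs(elementary_effect(f, x, i, delta))
    return [m / n_traj for m in mu]

# Toy objective: variable 0 dominates, variable 2 has no effect,
# mimicking how SA separates influential from negligible variables
def energy_demand(x):
    return 10 * x[0] + 2 * x[1] + 0 * x[2]

rng = random.Random(42)
mu_star = morris_mu_star(energy_demand, dim=3, n_traj=20, delta=0.1, rng=rng)
```

Variables with small μ* (like the third one here) would be frozen at nominal values, and only the influential ones passed on to the GenOpt/TRNSYS optimization stage.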
Diversity, equality, and inclusion (DEI) are critical factors that need to be considered when developing AI and robotic technologies for people. The lack of such considerations exacerbates, and can also perpetuate, existing forms of discrimination and bias in society for years to come. Although concerns have already been voiced around the globe, there is an urgent need to take action within the human-robot interaction (HRI) community. This workshop contributes to filling the gap by providing a platform in which to share experiences and research insights on identifying, addressing, and integrating DEI considerations in HRI. Building on last year's edition, this year's workshop will further engage participants with the problem of sampling biases through hands-on co-design activities aimed at mitigating inequity and exclusion within the field of HRI.
Airborne radiometrics (radiometric surveying) has been used in many applications, such as geological mapping, environmental studies, mineral exploration, and lithology mapping. The standard processing of gamma-ray spectrometry data provides good results when acquisition conditions such as the flight and sensor orientation are constant. In practice, abrupt changes in flight height when flying in mountainous terrain are common, and standard processing that neglects this factor can lead to erroneous interpretation. The primary cause is that the successive corrections applied to radiometric data in standard processing do not consider the effective sampled area of a survey (i.e., the field of view), which can have significant and variable overlaps between adjacent samples due to changing observation height. For this reason, standard height and sensitivity corrections may lead to incorrect estimates of the concentrations of the radioelements on the ground. To ameliorate this problem, we have developed a two-dimensional inversion-based processing method that incorporates the aircraft height, replaces the sensitivity correction, and enforces a positivity condition using a logarithmic barrier. Some minerals associated with potassium, uranium, and thorium allow the mapping of hydrothermal alteration; potassium in particular is an important constituent of hydrothermal fluids. We demonstrate the new method by comparing standard processing with inversion-based processing of airborne gamma-ray spectrometry data in the gold-deposit-rich Mara Rosa Magmatic Arc, Brazil. The inversion-based results enhance the anomalies, suppress interpolation artifacts, and increase the signal-to-noise ratio. The transformed airborne radiometric maps also improved knowledge of the footprint around Cu-Au mineralization and highlighted potential areas for further study.
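The positivity condition via a logarithmic barrier can be sketched on a toy linear inverse problem: minimize the data misfit ‖Gm − d‖² minus μ·Σ log mᵢ, so the barrier term blows up as any model parameter approaches zero and keeps the solution strictly positive. This is a minimal 2-parameter illustration with a hypothetical forward operator, not the authors' two-dimensional inversion:

```python
def invert_with_log_barrier(G, d, mu=1e-6, lr=0.1, iters=2000):
    """Gradient descent on ||G m - d||^2 - mu * sum(log m_j); the
    log barrier keeps the model parameters m strictly positive."""
    n_rows, n_cols = len(G), len(G[0])
    m = [1.0] * n_cols  # strictly positive starting model
    for _ in range(iters):
        # residual r = G m - d
        r = [sum(G[i][j] * m[j] for j in range(n_cols)) - d[i]
             for i in range(n_rows)]
        # gradient: 2 G^T r - mu / m (elementwise)
        grad = [2 * sum(G[i][j] * r[i] for i in range(n_rows)) - mu / m[j]
                for j in range(n_cols)]
        m = [mj - lr * g for mj, g in zip(m, grad)]
    return m

G = [[1.0, 0.5], [0.5, 1.0]]   # hypothetical forward operator
d = [3.5, 4.0]                 # data generated from the true model m = [2, 3]
m_est = invert_with_log_barrier(G, d)
```

With a small μ the barrier barely perturbs the least-squares solution while guaranteeing physically meaningful (non-negative) radioelement concentrations; a production code would use interior-point updates on a large gridded model rather than plain gradient descent.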
A negative-temperature heat engine is achieved with photons.