CUNY Graduate Center
  • New York City, United States
Recent publications
Dual-side permanent magnet (DSPM) machines have gained attention for their high torque density and high efficiency. However, in the regular design the topologies of the stator PM and stator core are mutually constrained, limiting the utilization of the stator PM; consequently, torque production on the stator side is limited. To overcome this constraint, this article proposes a novel DSPM machine with a decoupled topology of stator PMs and iron core. The proposed design allows optimal topologies for both components, enhancing the flux modulation effect on both the stator and rotor sides. This results in the generation of additional even-order harmonics of the equivalent stator magnetomotive force (MMF), thereby improving torque production on the stator side. Additionally, the stator split teeth can be evenly distributed along the stator bore, enhancing a high-order harmonic of the stator permeance. A performance comparison between the conventional and proposed DSPM machines, following a global optimization, demonstrates that the proposed machine exhibits relatively high torque density and efficiency. Notably, the proposed machine achieves a 36.4% higher torque density while maintaining the same copper loss and PM usage. Finally, a prototype is manufactured and tested to validate the analysis.
The dynamic electrical characteristics of insulated-gate bipolar transistors (IGBTs) are of great significance in practical high-power applications and are usually evaluated through the double pulse test (DPT). However, DPTs of IGBTs under various working conditions are time-consuming and laborious, and traditional estimation methods rely on detailed physical parameters and complex formula calculations, making deployment challenging. This article proposes a novel DPT efficiency enhancement method based on a graph convolutional network (GCN) and feature fusion, which estimates and supplements the switching transient waveforms for all working conditions; the dynamic electrical characteristics of the IGBT are then obtained from the estimated DPT waveforms. The method introduces a multimodal attention fusion network to capture and fuse the features of switching transient waveforms at different positions, thereby improving the expressive power and performance of the model. Moreover, the method is the first to use a GCN to embed DPT data under multiple working conditions into a graph structure, whose structural information is used to fuse the features of spatially correlated working-condition data and obtain reliable estimates. The method has been verified to be effective and accurate on a real dataset collected from two batches of IGBTs.
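The graph-convolution building block that such a method rests on can be sketched in plain NumPy. This is the generic Kipf-Welling layer form, not the authors' multimodal fusion network; the toy graph and all names are illustrative:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One generic graph-convolution layer (Kipf-Welling form):
    H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W),
    where A is the adjacency matrix, H the node features, W the weights."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    d = A_hat.sum(axis=1)                   # degree of each node
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))  # symmetric normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

# toy graph: 3 fully connected "working conditions", identity features
A = np.array([[0., 1., 1.],
              [1., 0., 1.],
              [1., 1., 0.]])
out = gcn_layer(A, np.eye(3), np.ones((3, 2)))
```

Each output row mixes a node's features with those of its graph neighbours, which is what lets spatially correlated working conditions share information.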
In this work, we propose two novel quantum walk kernels, the Hierarchical Aligned Quantum Jensen-Shannon Kernels (HAQJSK), between un-attributed graph structures. Unlike most classical graph kernels, the proposed HAQJSK kernels can incorporate hierarchical aligned structure information between graphs and transform graphs of arbitrary sizes into fixed-size aligned graph structures, i.e., the Hierarchical Transitive Aligned Adjacency Matrix of vertices and the Hierarchical Transitive Aligned Density Matrix of the Continuous-Time Quantum Walk (CTQW). Given a pair of graphs, the resulting HAQJSK kernels are defined by computing the Quantum Jensen-Shannon Divergence (QJSD) between their transitive aligned graph structures. We show that the proposed HAQJSK kernels not only reflect richer intrinsic whole-graph characteristics in terms of the CTQW, but also address the neglect of structural correspondence information that affects most R-convolution graph kernels. Moreover, unlike previous graph kernels associated with the QJSD and the CTQW, the proposed HAQJSK kernels simultaneously guarantee permutation invariance and positive definiteness, explaining their theoretical advantages. Experiments indicate the effectiveness of the proposed kernels.
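The QJSD at the heart of such kernels has a standard closed form: QJSD(ρ, σ) = S((ρ+σ)/2) − [S(ρ) + S(σ)]/2, where S is the von Neumann entropy. A minimal NumPy sketch of that quantity alone (illustrative, not the authors' kernel implementation):

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -sum_i l_i log2(l_i) over the nonzero eigenvalues of rho."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

def qjsd(rho, sigma):
    """QJSD(rho, sigma) = S((rho+sigma)/2) - (S(rho) + S(sigma)) / 2."""
    mix = 0.5 * (rho + sigma)
    return von_neumann_entropy(mix) - 0.5 * (
        von_neumann_entropy(rho) + von_neumann_entropy(sigma))
```

For two single-qubit density matrices, e.g. `qjsd(np.diag([1., 0.]), np.diag([.5, .5]))` ≈ 0.311; the divergence is symmetric, vanishes for identical states, and is bounded by 1, which is what makes it usable as a graph (dis)similarity measure.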
Graph-based deep learning models have become prevalent for data-driven traffic prediction in recent years, owing to their competence in exploiting non-Euclidean spatial-temporal traffic data. Nonetheless, these models are approaching a limit where drastically increasing model complexity, in terms of trainable parameters, no longer notably improves prediction accuracy. Furthermore, the diversity of transportation networks requires traffic predictors to be scalable to various data sizes and quantities, and ever-changing traffic dynamics also call for sustainable capacity. To this end, we propose a novel adaptive deep learning scheme for boosting the performance of graph-based traffic predictors. The proposed scheme uses domain knowledge to decompose the traffic prediction task into sub-tasks, each handled by a deep model with low complexity and training difficulty. Further, a stream learning algorithm based on an empirical Fisher information loss is devised to enable predictors to learn incrementally from new data without re-training from scratch. Comprehensive case studies on five real-world traffic datasets show outstanding performance improvements when the proposed scheme is applied to six state-of-the-art predictors. Additionally, the scheme provides impressive autoregressive long-term predictions and incremental learning efficacy on traffic data streams.
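Fisher-information-based stream learning is in the spirit of elastic-weight-consolidation-style penalties, which anchor parameters that were important for past data while the model adapts to new data. A hedged NumPy sketch of that generic ingredient (not the authors' exact algorithm; all names are illustrative):

```python
import numpy as np

def empirical_fisher_diag(per_sample_grads):
    """Diagonal empirical Fisher information: mean of squared
    per-sample loss gradients over previously seen data."""
    g = np.stack(per_sample_grads)
    return (g ** 2).mean(axis=0)

def fisher_penalty(theta, theta_prev, fisher, lam):
    """Quadratic penalty added to the new-data loss, keeping
    important parameters near their previous values:
    0.5 * lam * sum_i F_i * (theta_i - theta_prev_i)^2."""
    return 0.5 * lam * float(np.sum(fisher * (theta - theta_prev) ** 2))
```

During streaming, the total objective is the loss on new samples plus this penalty, so parameters with large Fisher values (informative for old data) move little while unimportant ones adapt freely.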
The out-of-sample error (OO) is the main quantity of interest in risk estimation and model selection. Leave-one-out cross-validation (LO) offers a (nearly) distribution-free yet computationally demanding approach to estimating OO. Recent theoretical work showed that approximate leave-one-out cross-validation (ALO) is a computationally efficient and statistically reliable estimate of LO (and OO) for generalized linear models with differentiable regularizers. For problems involving non-differentiable regularizers, despite significant empirical evidence, a theoretical understanding of ALO's error has been lacking. In this paper, we present a novel theory for a wide class of problems in the generalized linear model family with non-differentiable regularizers. We bound the error |ALO − LO| in terms of intuitive metrics such as the size of leave-i-out perturbations in active sets, the sample size n, the number of features p, and the regularization parameters. As a consequence, for ℓ1-regularized problems we show that |ALO − LO| → 0 as p → ∞ while n/p and the signal-to-noise ratio (SNR) remain bounded.
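The computational appeal of approximating LO is easiest to see in the classical ridge (ℓ2) special case, where a leave-one-out shortcut is exact and requires no refitting; the paper's non-differentiable ℓ1 setting is precisely where such exactness breaks down and error bounds are needed. A small NumPy check of the ridge identity e(i) = e_i / (1 − H_ii), with H = X(XᵀX + λI)⁻¹Xᵀ (a textbook illustration, not the paper's ALO estimator):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, lam = 40, 5, 1.0
X = rng.standard_normal((n, p))
y = X @ rng.standard_normal(p) + 0.1 * rng.standard_normal(n)

# full-data ridge fit and its hat matrix
A_inv = np.linalg.inv(X.T @ X + lam * np.eye(p))
H = X @ A_inv @ X.T
resid = y - H @ y

# leave-one-out residuals from ONE fit, via the leverage shortcut
loo_shortcut = resid / (1.0 - np.diag(H))

# brute-force leave-one-out: refit n times
loo_exact = np.empty(n)
for i in range(n):
    mask = np.arange(n) != i
    Xi, yi = X[mask], y[mask]
    beta_i = np.linalg.solve(Xi.T @ Xi + lam * np.eye(p), Xi.T @ yi)
    loo_exact[i] = y[i] - X[i] @ beta_i
```

The two vectors agree to machine precision: one matrix inversion replaces n refits, which is the efficiency ALO extends to a far broader model class.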
Background Continuously growing teeth are an important innovation in mammalian evolution, yet the genetic regulation of continuous growth by stem cells remains incompletely understood. Dental stem cells responsible for tooth crown growth are lost at the onset of tooth root formation. The genetic signaling that initiates this loss is difficult to study with the ever-growing incisor and rooted molars of mice, the most common mammalian dental model species, because signals for root formation overlap with signals that pattern tooth size and shape (i.e., cusp patterns). Bank and prairie voles (Cricetidae, Rodentia, Glires) have evolved rooted and unrooted molars while retaining similar size and shape, providing alternative models for studying roots. Results We assembled a de novo genome of Myodes glareolus, a vole with high-crowned, rooted molars, and performed genomic and transcriptomic analyses in a broad phylogenetic context of Glires (rodents and lagomorphs) to assess differential selection and evolution in tooth-forming genes. Bulk transcriptomic comparisons of embryonic molar development between bank voles and mice demonstrated overall conservation of gene expression levels, with species-specific differences corresponding to the accelerated and more extensive patterning of the vole molar. We leveraged the convergent evolution of unrooted molars across the clade to examine changes that may underlie their repeated origin. We identified 15 dental genes with changing synteny relationships and six dental genes undergoing positive selection across Glires, two of which, Dspp and Aqp1, were under positive selection in species with unrooted molars. Decreased expression of both genes in prairie voles with unrooted molars compared to bank voles supports the presence of positive selection and may underlie differences in root formation.
Conclusions Our results support ongoing evolution of dental genes across Glires and identify candidate genes for mechanistic studies of root formation. Comparative research using the bank vole as a model species can reveal the complex evolutionary background of convergent evolution for ever-growing molars.
In this study, deterministic current‐induced spin‐orbit torque (SOT) magnetization switching is achieved, particularly in systems with perpendicular magnetic anisotropy (PMA), without the need for a collinear in‐plane field, a traditionally challenging requirement. In a Ta/CoFeB/MgO/NiO/Ta structure, spin reflection at the MgO/NiO interface generates a spin current with an out‐of‐plane spin polarization component σz. Notably, the sample featuring 0.8 nm MgO and 2 nm NiO demonstrates an impressive optimal switching ratio approaching 100% without any in‐plane field. A systematic investigation of the effects of the MgO and NiO thicknesses demonstrates that the formation of noncollinear spin structures and canted magnetization in the ultrathin NiO interlayer plays a pivotal role in the field‐free SOT switching. The integration of NiO as an antiferromagnetic insulator effectively mitigates current shunting effects and enhances the thermal stability of the device. This advancement in the CoFeB/MgO system holds promise for significant applications in spintronics, marking a crucial step toward realizing innovative technologies.
The scale and connectivity of marine resources make them more complex to manage than land resources. Digitization has been recognized as an organizational change process that can effectively improve resource efficiency and enhance network resilience; however, gaps remain in establishing the theoretical links between digitization and marine economic performance. Based on a panel fixed-effects model, this study evaluates these interrelationships and their potential mechanisms using data from the annual reports of listed marine-economy firms in the eastern coastal region of China. The results indicate a 'U-shaped' relationship between digitalization and enterprise efficiency in the maritime sector, with significant heterogeneity across enterprise characteristics. Notably, firms' technological innovation capability can modulate the 'U-shaped' relationship through the interaction of economies of scale and economies of scope. This paper highlights how digitization mitigates the fragmentation and sectionalization of marine information and addresses the digital overload and productivity paradox that firms may face in the early stages of digitization. The study suggests that institutional diversity shapes resilience: governments need to promote top-down regulation and industry collaboration, while marine enterprises need to coevolve with them through bottom-up internal communication and external interaction to enhance the value chain of marine enterprises.
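Empirically, testing for a U-shaped relationship typically amounts to fitting a quadratic term and checking its sign and turning point. A toy illustration with synthetic data (not the study's panel model, data, or variable definitions):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 300)                    # hypothetical digitalization index
y = 2.0 - 1.2 * x + 0.15 * x**2 \
    + rng.normal(0.0, 0.3, 300)                # efficiency first falls, then rises

# OLS with a quadratic term: y = b0 + b1*x + b2*x^2
Z = np.column_stack([np.ones_like(x), x, x**2])
b0, b1, b2 = np.linalg.lstsq(Z, y, rcond=None)[0]

# b2 > 0 indicates a U shape; the minimum sits at -b1 / (2*b2)
turning_point = -b1 / (2 * b2)                 # true value here: 4.0
```

In a panel fixed-effects setting the same quadratic enters alongside firm and year effects; the turning point marks where the early-stage "productivity paradox" gives way to efficiency gains.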
This chapter provides a critical overview of archaeological approaches to gender and social inequality, and suggests future perspectives and approaches. We argue that considering gender as a central framework through which to analyse past social inequality is long overdue in archaeology. Surprisingly, even under Processualism, which focused on the origins and development of social inequality, the issues of gender inequality were rarely raised. Today, in spite of the fact that feminist and gender perspectives have repeatedly demonstrated the significance of gender for the construction of social differences and identities in the past, we identify a continuation of earlier approaches. Much work remains to be done by archaeologists, both addressing gender inequality and placing it within the social context of change in different periods. We identify positive steps in this direction, and propose that multi-proxy approaches are a promising way to address these complex questions and bring social inequality into focus.
The robots of tomorrow should be endowed with the ability to adapt to drastic and unpredicted changes in their environment and interactions with humans. Such adaptations, however, cannot be boundless: the robot must stay trustworthy. So, the adaptations should not be just a recovery into a degraded functionality. Instead, they must be true adaptations: the robot must change its behaviour while maintaining or even increasing its expected performance and staying at least as safe and robust as before. The RoboSAPIENS project will focus on autonomous robotic software adaptations and will lay the foundations for ensuring that they are carried out in an intrinsically trustworthy, safe and efficient manner, thereby reconciling open-ended self-adaptation with safety by design. RoboSAPIENS will transform these foundations into ‘first time right’-design tools and platforms and will validate and demonstrate them.
Neurotropic pathogens, notably herpesviruses, have been associated with significant neuropsychiatric effects. As a group, these pathogens can exploit molecular mimicry mechanisms to manipulate the host central nervous system to their advantage. Here, we present a systematic computational approach that may ultimately be used to unravel protein–protein interactions and molecular mimicry processes that have not yet been solved experimentally. Toward this end, we validate this approach by replicating a set of pre-existing experimental findings that document the structural and functional similarities shared by the human cytomegalovirus-encoded UL144 glycoprotein and human tumor necrosis factor receptor superfamily member 14 (TNFRSF14). We began with a thorough exploration of the Homo sapiens protein database using the Basic Local Alignment Search Tool (BLASTx) to identify proteins sharing sequence homology with UL144. Subsequently, we used AlphaFold2 to predict the independent three-dimensional structures of UL144 and TNFRSF14. This was followed by a comprehensive structural comparison facilitated by Distance-Matrix Alignment and Foldseek. Finally, we used AlphaFold-Multimer and PPIscreenML to elucidate potential protein complexes and confirm the predicted binding activities of both UL144 and TNFRSF14. We then used our in silico approach to replicate the experimental finding that TNFRSF14 binds both B- and T-lymphocyte attenuator (BTLA) and glycoprotein D, whereas UL144 binds BTLA alone. This computational framework offers promise in identifying structural similarities and interactions between pathogen-encoded proteins and their host counterparts. This information will provide valuable insights into the cognitive mechanisms underlying the neuropsychiatric effects of viral infections.
In this study, we report the synthesis of a new type of chiral crystalline organic porous salt, CF2, derived from the ionic reaction between tetrakis(4‐sulfophenyl)methane (TSPM) and the tetra‐(S)‐prolylamide of tetrakis(4‐aminophenyl)methane, (S)‐TPPM, and its ability to stabilize 2 nm palladium nanoparticles to give a novel, nonpyrophoric, chiral, catalytic material, Pd@CF2. The preparation of the catalyst was very simple and was conducted in water. The heterogeneous catalytic performance of Pd@CF2 was tested in hydrogen reductions of olefins and substituted nitroaromatic compounds, using Pd/C as a comparison to determine the specific features of the novel catalyst. Although both catalysts exhibited similar activity in the reductions of diphenylacetylene and nitrobenzene, Pd@CF2 predominantly promoted the reduction of p‐nitrobenzaldehyde to p‐aminobenzyl alcohol, whereas Pd/C gave p‐toluidine. The reduction of p‐dinitrobenzene led to predominant formation of p‐nitrophenylhydroxylamine when promoted by the novel catalyst and to a mixture of products when promoted by Pd/C. In addition, the introduction of p‐alkoxy groups onto nitrobenzenes slowed down the reduction with Pd@CF2 but had no influence on the activity of Pd/C. A hypothesis ascribing these observations to dissimilar equilibrium distributions of nitro and polar groups within the organic framework and on the palladium metal surface is proposed to rationalize the selectivity of the novel catalytic material.
Background In 2024, in the United States, there is an attack on diversity, equity, and inclusion initiatives within education. Politics notwithstanding, medical school curricula that are current and structured to train the next generation of physicians to adhere to our profession's highest values of fairness, humanity, and scientific excellence are of utmost importance to health care quality and innovation worldwide. Whereas the number of anti-racism, diversity, equity, and inclusion (ARDEI) curricular innovations has increased, there is a dearth of published longitudinal health equity curriculum models. In this article, we describe our school's curricular mapping process toward the longitudinal integration of ARDEI learning objectives across 4 years and, ultimately, the creation of an ARDEI medical education program objective (MEPO) domain. Methods Medical students and curricular faculty leaders developed 10 anti-racism learning objectives to create an ARDEI MEPO domain encompassing three ARDEI learning objectives. Results A pilot survey indicates that medical students who have experienced this curriculum are aware of the longitudinal nature of the ARDEI curriculum and endorse its effectiveness. Conclusions A longitudinal health equity and justice curriculum with well-defined anti-racist objectives that is (a) based within a supportive learning environment, (b) bolstered by trusted, structured avenues for student feedback, and (c) amended with iterative revisions is a promising model to ensure that medical students are equipped to effectively address health inequities and deliver the highest quality of care for all patients.
Background Trypanosomatid parasites are a group of protozoans that cause devastating diseases that disproportionately affect developing countries. These protozoans have developed several mechanisms for adaptation and survival in the mammalian host, such as extensive expansion of multigene families involved in host-parasite interaction, adaptations for invading and modulating host cells, and the presence of aneuploidy and polyploidy. Two mechanisms might result in "complex" isolates, with more than two haplotypes present in a single sample: multiplicity of infection (MOI) and polyploidy. We have developed and validated a methodology to identify multiclonal infections and polyploidy using whole-genome sequencing reads, based on fluctuations in allelic read depth at heterozygous positions, which can be easily implemented in experiments ranging from a single sequenced genome to larger population surveys. Results The methodology estimates the complexity index (CI) of an isolate and compares real samples with simulated clonal infections at the individual and population levels, excluding regions with somy and gene copy number variation. It was primarily validated with simulated MOI and known polyploid isolates, respectively from Leishmania and Trypanosoma cruzi. The approach was then used to assess the complexity of infection using genome-wide SNP data from 497 trypanosomatid samples from four clades, L. donovani/L. infantum, L. braziliensis, T. cruzi and T. brucei, providing an overview of multiclonal infection and polyploidy in these cultured parasites. We show that our method robustly detects complex infections in samples with at least 25x coverage, 100 heterozygous SNPs, and where 5–10% of the reads correspond to the secondary clone. We find that relatively small proportions (≤ 7%) of cultured trypanosomatid isolates are complex.
Conclusions The method can accurately identify polyploid isolates and can identify multiclonal infections in scenarios with sufficient genome read coverage. We package our method in a single R script that requires only a standard variant call format (VCF) file to run (https://github.com/jaumlrc/Complex-Infections). Our analyses indicate that multiclonality and polyploidy do occur in all clades, but not very frequently in cultured trypanosomatids. We caution that our estimates are lower bounds due to the limitations of current laboratory and bioinformatic methods.
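The underlying signal, deviation of the alternate-allele read fraction from 0.5 at heterozygous sites, can be illustrated with a toy function. This is a deliberate simplification, not the authors' complexity index, which additionally compares against simulated clonal infections and excludes somy and copy-number-variable regions:

```python
import numpy as np

def allele_balance_deviation(ref_depths, alt_depths):
    """Mean absolute deviation of the alternate-allele read fraction
    from 0.5 across heterozygous sites. Near 0 for a clonal diploid;
    larger values suggest a secondary clone (MOI) or extra ploidy
    skewing the read proportions."""
    ref = np.asarray(ref_depths, dtype=float)
    alt = np.asarray(alt_depths, dtype=float)
    frac = alt / (ref + alt)
    return float(np.abs(frac - 0.5).mean())
```

For a clonal diploid with balanced 50/50 read support the statistic is 0; a sample whose heterozygous sites consistently show, say, 70/30 support yields 0.2, flagging it for closer inspection.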
The relationship between the thermodynamic and computational properties of physical systems has been a major theoretical interest since at least the 19th century. It has also become of increasing practical importance over the last half-century as the energetic cost of digital devices has exploded. Importantly, real-world computers obey multiple physical constraints on how they work, which affect their thermodynamic properties. Moreover, many of these constraints apply both to naturally occurring computers, like brains or eukaryotic cells, and to digital systems. Most obviously, all such systems must finish their computation quickly, using as few degrees of freedom as possible. This means that they operate far from thermal equilibrium. Furthermore, many computers, both digital and biological, are modular, hierarchical systems with strong constraints on the connectivity among their subsystems. Yet another example is that, to simplify their design, digital computers are required to be periodic processes governed by a global clock. None of these constraints were considered in 20th-century analyses of the thermodynamics of computation. The new field of stochastic thermodynamics provides formal tools for analyzing systems subject to all of these constraints. We argue here that these tools may help us understand at a far deeper level just how the fundamental thermodynamic properties of physical systems are related to the computation they perform.
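A touchstone for these questions is Landauer's bound, the canonical 20th-century equilibrium result that the newer constraint-aware analyses refine: erasing one bit of information at temperature T dissipates at least

```latex
Q_{\min} \;=\; k_B T \ln 2 \;\approx\; 2.9 \times 10^{-21}\,\mathrm{J}
\qquad (T = 300\,\mathrm{K})
```

The constraints listed above (finite time, limited degrees of freedom, modularity, a global clock) each push the achievable cost above this equilibrium limit, which is what stochastic thermodynamics quantifies.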
Tokamak à configuration variable (TCV), recently celebrating 30 years of near-continual operation, continues its missions to advance outstanding key physics and operational-scenario issues for ITER and the design of future power plants such as DEMO. The main machine heating systems and operational changes are first described; five sections then follow. Plasma scenarios: ITER baseline (IBL) discharges and triangularity studies, together with X3 heating and N2 seeding. Edge-localised-mode suppression, with a high-radiation region near the X-point, is reported with N2 injection, with and without divertor baffles, in a snowflake configuration. Negative triangularity (NT) discharges attained a record, albeit transient, βN ∼ 3 with lower turbulence, higher low-Z impurity transport, and vertical stability, density limits and core transport better than the IBL. Positive-triangularity L-mode linear and saturated ohmic confinement saturation, often correlated with intrinsic toroidal rotation reversals, was probed for D, H and He working gases. H-mode confinement and pedestal studies were extended to low collisionality with electron cyclotron heating, obtaining a steady-state electron internal transport barrier with neutral beam heating (NBH), and NBH-driven H-mode configurations with off-axis co-electron-cyclotron current drive. Fast particle physics: the physics of disruptions, runaway electrons and fast ions (FIs) was developed using near-full current conversion at disruption, with recombination thresholds characterised for impurity species (Ne, Ar, Kr). Different flushing gases (D2, H2) and pathways to trigger a benign disruption were explored. The 55 kV NBH II generated a rich Alfvénic spectrum modulating the FI loss detector signal. NT configurations showed less toroidal Alfvén excitation activity, preferentially affecting higher FI pitch angles. Scrape-off layer and edge physics:
Gas puff imaging systems characterised turbulent plasma ejection for several advanced divertor configurations, including NT. Combined diagnostic-array analysis of the divertor state in detachment conditions was compared to modelling, revealing the importance of molecular processes. Divertor physics: internal gas baffles were diversified to include shorter/longer structures on the high- and/or low-field side to probe compressive efficiency. Divertor studies concentrated on mitigating target power, facilitating detachment and increasing the radiated-power fraction, employing alternative divertor geometries, optimised X-point radiator regimes and long-legged configurations. Smaller-than-expected improvements with total flux expansion were better modelled when parallel flows were included. Peak outer-target heat-flux reductions (>50%) were achieved for high flux-expansion geometries while maintaining core performance (H98 > 1). A reduction in target heat loads and facilitated detachment access at lower core densities are reported. Real-time control: TCV's real-time control upgrades employed MIMO control of the gas injectors for stable, robust, partial detachment, and plasma-β feedback control avoiding neoclassical tearing modes during plasma confinement changes. Machine-learning enhancements include trajectory-tracking disruption proximity and avoidance, as well as a first-of-its-kind reinforcement-learning-based controller for the plasma equilibrium, trained entirely on a free-boundary simulator. Finally, a short description of TCV's immediate future plans is given.
4,444 members
Robert L. Hatcher
  • Program in Psychology
Glenis Raewyn Long
  • Program in Speech–Language–Hearing Sciences
Thomas Howatt McGovern
  • Program in Anthropology
John Locke
  • Program in Speech–Language–Hearing Sciences
Eitan Friedman
  • Physiology, Pharmacology & Neuroscience
Information
Address
New York City, United States