Recent publications
Self-assembly refers to the process by which small, simple components mix and combine to form complex structures using only local interactions. Designed as a hybrid between tile assembly models and cellular automata, the Tile Automata (TA) model was recently introduced as a platform for studying connections between various models of self-assembly. In this paper we present a result in which we use TA to simulate arbitrary systems within the amoebot model, a theoretical model of programmable matter whose individual components are relatively simple state machines that can sense the states of their neighbors and move via series of expansions and contractions. We show that for every amoebot system there is a TA system capable of simulating the local information transmission built into amoebot particles, and that the TA “macrotiles” used to simulate those particles can simulate movement (via attachment and detachment operations) while maintaining the necessary properties of amoebot particle systems. Although the TA systems use only local interactions, namely state changes and binding and unbinding along tile edges, they fully simulate the dynamics of these programmable matter systems. Since the signal-passing tile assembly model, a mathematical model of “active” DNA-based tile self-assembly, has been shown to be able to simulate the TA model, our result provides a bridge showing how DNA-based self-assembling tiles can simulate the behavior of the amoebot model of programmable matter.
Despite the superior performance achieved on many computer vision tasks, deep neural networks demand high computing power and a large memory footprint. Most existing network pruning methods require laborious human effort and prohibitive computational resources, especially when the constraints change. This practically limits the application of model compression when a model needs to be deployed on a wide range of devices. Moreover, existing methods lack theoretical guidance on how pruning affects the generalization error. In this paper we propose an information-theory-inspired strategy for automated network pruning, grounded in the information bottleneck principle. Concretely, we introduce a new theorem showing that hidden representations should compress information with each other to achieve better generalization. Accordingly, we introduce the normalized Hilbert-Schmidt Independence Criterion (HSIC) on network activations as a stable and generalizable indicator of layer importance. Given a resource constraint, we integrate the HSIC indicator with the constraint to transform the architecture search problem into a linear programming problem with quadratic constraints, which is easily solved by a convex optimization method within a few seconds. We also provide a rigorous proof that optimizing the normalized HSIC simultaneously minimizes the mutual information between different layers. Without any search process, our method achieves better compression trade-offs than state-of-the-art compression algorithms. For instance, on ResNet-50 we achieve a 45.3% FLOPs reduction with 75.75% top-1 accuracy on ImageNet. Code is available at https://github.com/MAC-AutoML/ITPruner.
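The normalized HSIC indicator at the core of this approach has a compact closed form. Below is a minimal sketch assuming a linear kernel on activations (the paper may use a different kernel choice); the function name and shapes are illustrative, not taken from the ITPruner codebase.

```python
import numpy as np

def normalized_hsic(X, Y):
    """Normalized Hilbert-Schmidt Independence Criterion between two
    activation matrices X (n x d1) and Y (n x d2), using linear kernels.
    Returns a value in [0, 1]; 1 when the two sets of activations carry
    identical (linearly detectable) dependence structure."""
    n = X.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    K = X @ X.T                           # linear kernel Gram matrix for X
    L = Y @ Y.T                           # linear kernel Gram matrix for Y
    hsic_xy = np.trace(K @ H @ L @ H)
    hsic_xx = np.trace(K @ H @ K @ H)
    hsic_yy = np.trace(L @ H @ L @ H)
    # Normalization bounds the score in [0, 1] by Cauchy-Schwarz
    return hsic_xy / np.sqrt(hsic_xx * hsic_yy)
```

In the pruning setting sketched in the abstract, such a score between each layer's activations and a reference signal would serve as the layer-importance term fed into the constrained linear program.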
Aims
The paediatric population is vulnerable to adverse drug events (ADEs) such as negative outcomes due to medication (NOMs)–drug-related problems (DRPs), especially adverse drug reactions (ADRs) and medication errors (MEs). Social media (SM) is considered a promising tool for pharmacovigilance. This study aims to assess descriptions of ADRs, NOM-DRPs and MEs in SM.
Methods
Observational, ambispective study assessing NOM-DRPs, ADRs and MEs described in posts on public child-rearing parenting forums, from each forum's inception until December 2021, for drugs dispensed in the outpatient setting. ADEs were classified, with causality assessed using the Liverpool Causality Assessment Tool and seriousness using World Health Organization criteria. Summaries of product characteristics were used to determine ADR prevalence.
Results
In total, 3573 posts from 2 public child-rearing parenting forums were retrieved; 906 (25%) contained descriptions of medicines, of which 823 (91%) were analysed; 425 posts (52%) described 636 NOM-DRPs (median 1 NOM-DRP per child, interquartile range [IQR] 1–8), of which 161 (26%) were ADRs in 105 posts (median 1.5 ADRs per child, IQR 1–4) and 95 (15%) were MEs in 64 posts (median 1 ME per child, IQR 1–4). Of the posts with medicine mentions, 70% included NOM-DRPs, 18% ADRs and 10% MEs. More ADRs occurred in females and infants. Most ADRs (158; 98%) were evaluated as possible, and 17 ADRs (11%) were serious. Uncommon (19, 12%), (14, 9%), very rare (3, 2%) and rare (1, 1%) ADRs were also found.
Conclusion
Results suggest that information retrieved from SM may be useful to assess paediatric ADEs and provide valuable pharmacovigilance complementary data.
Recent theoretical work has argued that moral psychology can be understood through the lens of “resource rational contractualism.” The view posits that the best way of making a decision that affects other people is to get everyone together to negotiate under idealized conditions. The outcome of that negotiation is an arrangement (or “contract”) that would lead to mutual benefit. However, this ideal is seldom (if ever) practical given the resource demands (time, information, computational processing power) that are required. Instead, the theory proposes that moral psychology is organized around a series of resource-rational approximations of the contractualist ideal, efficiently trading off between more resource-intensive, accurate mechanisms and less resource-intensive, approximate ones. This paper presents empirical evidence and a cognitive model that test a central claim of this view: when the stakes of the situation are high, more resource-intensive processes are engaged over more approximate ones. We present subjects with a case that can be judged using virtual bargaining, a resource-intensive process that involves simulating what two people would agree to, or by simply following a standard rule. We find that about a third of our participants use the resource-rational approach, flexibly switching to virtual bargaining in high-stakes situations but deploying the simple rule when stakes are low. A third of the participants are best modeled as consistently using the strict rule-based approach and the remaining third as consistently using virtual bargaining. A model positing the reverse resource-rational hypothesis (that participants use more resource-intensive mechanisms in lower-stakes situations) fails to capture the data.
Background
The ability to non-invasively measure left atrial pressure would facilitate the identification of patients at risk of pulmonary congestion and guide proactive heart failure care. Wearable cardiac monitors, which record single-lead electrocardiogram data, provide information that can be leveraged to infer left atrial pressures.
Methods
We developed a deep neural network using single-lead electrocardiogram data to determine when the left atrial pressure is elevated. The model was developed and internally evaluated using a cohort of 6739 samples from the Massachusetts General Hospital (MGH) and externally validated on a cohort of 4620 samples from a second institution. We then evaluated the model on patch-monitor electrocardiographic data from a small prospective cohort.
Results
The model achieves an area under the receiver operating characteristic curve of 0.80 for detecting elevated left atrial pressures on an internal holdout dataset from MGH and 0.76 on an external validation set from a second institution. A further prospective dataset was obtained using single-lead electrocardiogram data with a patch-monitor from patients who underwent right heart catheterization at MGH. Evaluation of the model on this dataset yielded an area under the receiver operating characteristic curve of 0.875 for identifying elevated left atrial pressures for electrocardiogram signals acquired close to the time of the right heart catheterization procedure.
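The AUROC figures reported above have a useful probabilistic reading via the Mann-Whitney statistic: the probability that a randomly chosen positive case is scored above a randomly chosen negative one. A minimal, library-free sketch (the labels and scores below are illustrative, not the study's data):

```python
def auroc(labels, scores):
    """Area under the ROC curve computed as the Mann-Whitney U statistic:
    the fraction of positive/negative pairs in which the positive is
    ranked higher (ties count one half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy check: perfectly separated scores give AUROC 1.0
print(auroc([0, 0, 1, 1], [0.1, 0.2, 0.8, 0.9]))  # → 1.0
```

Under this reading, an AUROC of 0.875 means the model ranks a patient with elevated left atrial pressure above one without it about 87.5% of the time.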
Conclusions
These results demonstrate the utility and the potential of ambulatory cardiac hemodynamic monitoring with electrocardiogram patch-monitors.
Agent-controlled intelligent subnetworks are envisioned as an integral part of Beyond 5G (B5G) communications for automatically selecting distinct policies based on dynamic performance constraints. In the B5G era, energy efficiency will be a prominent feature of Radio Access Network (RAN) slicing for achieving sustainable networks and lowering operational costs while multiplexing users with heterogeneous Quality-of-Service (QoS) requirements. Considering the importance of Intelligent Transportation Systems (ITS) within the realm of B5G applications, in this paper we propose the LazyRAN framework as an energy-efficient Radio Resource Block (RRB) allocation approach for multi-policy RAN slicing in B5G ITS edge networks. Initially, we focus on resource utilization efficiency and define a Lazy Skip Markov Decision Process (LS-MDP) formulation that lets spectrum agents individually perform fine and coarse stochastic control, incorporating varying levels of laziness depending on performance requirements. Subsequently, we propose the LazyRAN framework, which uses Bayesian Optimization (BO)-based Offline Policy Selection (OPS) for optimality calculations in the case of multiple slicing policies. Our framework enables efficient multi-policy evaluation, employing both exploration and exploitation in the agent policy space. The OPS method fits a Gaussian Process (GP) surrogate function that combines logged data with online agent interactions before searching for the best slicing policy with BO. Using an energy-aware metric, hybrid QoS reward per energy consumption (HQEC), we compare the performance of the LazyRAN framework in centralized and decentralized settings across diverse agent policies. Our results show that the proposed scheme can significantly improve energy utilization, with greater HQEC and higher throughput while satisfying hybrid QoS demands.
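As an illustration of the offline-then-online pattern described above, here is a generic sketch of Bayesian optimisation with a GP surrogate over a single one-dimensional "policy knob". The kernel, the UCB acquisition, and all names are assumptions for illustration only; the actual LazyRAN OPS method operates over full slicing policies, not a scalar.

```python
import numpy as np

def rbf(a, b, ls=0.3):
    """Squared-exponential kernel on 1-D inputs."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(x_tr, y_tr, x_te, noise=1e-4):
    """GP posterior mean and std at test points (zero prior mean)."""
    K = rbf(x_tr, x_tr) + noise * np.eye(len(x_tr))
    Ks = rbf(x_tr, x_te)
    sol = np.linalg.solve(K, Ks)
    mu = sol.T @ y_tr
    var = 1.0 - np.sum(Ks * sol, axis=0)     # k(x*,x*) = 1 for RBF
    return mu, np.sqrt(np.maximum(var, 1e-12))

def bo_select(reward, logged_x, logged_y, budget=5):
    """Warm-start the surrogate with logged (offline) evaluations, then
    spend `budget` online evaluations chosen by an upper-confidence-bound
    acquisition; return the best policy setting observed."""
    xs, ys = list(logged_x), list(logged_y)
    grid = np.linspace(0.0, 1.0, 101)
    for _ in range(budget):
        mu, sd = gp_posterior(np.array(xs), np.array(ys), grid)
        x_next = float(grid[np.argmax(mu + 2.0 * sd)])   # UCB
        xs.append(x_next)
        ys.append(reward(x_next))
    return xs[int(np.argmax(ys))]
```

The warm start from logged data is what makes the selection "offline": the surrogate already has a rough reward landscape before any online interaction is spent.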
Background
White matter hyperintensities (WMHs) are frequently observed on magnetic resonance imaging (MRI) in patients with cerebral amyloid angiopathy (CAA). The neuropathological substrates that underlie WMHs in CAA are unclear, and it remains largely unexplored whether the different WMH distribution patterns associated with CAA (posterior confluent and subcortical multispot) reflect alternative pathophysiological mechanisms.
Methods and Results
We performed a combined in vivo MRI—ex vivo MRI—neuropathological study in patients with definite CAA. Formalin-fixed hemispheres from 19 patients with CAA, most of whom also had in vivo MRI available, underwent 3T MRI, followed by standard neuropathological examination of the hemispheres and targeted neuropathological assessment of WMH patterns. Ex vivo WMH volume was independently associated with CAA severity (P=0.046) but not with arteriolosclerosis (P=0.743). In targeted neuropathological examination, compared with normal-appearing white matter, posterior confluent WMHs were associated with activated microglia (P=0.043) and clasmatodendrosis (P=0.031), a form of astrocytic injury. Trends were found for an association with white matter rarefaction (P=0.074) and arteriolosclerosis (P=0.094). An exploratory descriptive analysis suggested that the histopathological correlates of WMH multispots were similar to those underlying posterior confluent WMHs.
Conclusions
This study confirmed that vascular amyloid β severity in the cortex is significantly associated with WMH volume in patients with definite CAA. The histopathological substrates of both posterior confluent and WMH multispots were comparable, suggesting overlapping pathophysiological mechanisms, although these exploratory observations require confirmation in larger studies.
While transformers have gained recognition as a versatile tool for artificial intelligence (AI), a largely unexplored challenge arises in the context of chess, a classical AI benchmark. We find that incorporating Vision Transformers (ViTs) into AlphaZero is insufficient for chess mastery, mainly due to ViTs' computational limitations. Optimizing their efficiency by combining MobileNet and NextViT yielded a gain of about 30 Elo over AlphaZero. Beyond this, we propose a practical improvement involving a simple change to the input representation and the value loss functions. As a result, we achieve a significant performance boost of up to 180 Elo points beyond what is currently achievable with AlphaZero in chess. In addition to these improvements, our experimental results using the Integrated Gradient technique confirm the effectiveness of the newly introduced features.
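The Elo gains quoted above translate into expected game scores through the standard Elo model; the snippet below shows the conversion (a standard formula, not code from the paper).

```python
def expected_score(elo_diff):
    """Expected game score (win = 1, draw = 0.5, loss = 0) for the
    stronger player under the standard Elo model, given a rating
    advantage of elo_diff points."""
    return 1.0 / (1.0 + 10.0 ** (-elo_diff / 400.0))

for gain in (30, 180):
    # +30 Elo -> roughly 0.54; +180 Elo -> roughly 0.74
    print(f"+{gain} Elo -> expected score {expected_score(gain):.3f}")
```

So a 180-Elo improvement corresponds to scoring roughly three points out of every four against the baseline, versus a near-coin-flip at +30.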
In an era marked by rapid advancements in artificial intelligence (AI), the dynamics of the labour market are undergoing significant transformation. A common concern amidst these changes is the potential obsolescence of traditional disciplines due to AI-driven productivity enhancements. This study delves into the evolving role and resilience of these disciplines within the AI-influenced labour market. Focusing on statistics as a representative field, we investigate its integration with AI and its interplay with other disciplines. Analyzing 279.87 million online job postings in the United States from 2010 to 2022, we observed a remarkable 31-fold increase in the demand for AI-specialized statistical talent, diversifying into 932 distinct AI-related job roles. Additionally, our research identified four major interdisciplinary clusters, encompassing 190 disciplines with a statistical focus. The findings also highlight a growing emphasis on specific hard skills within these AI roles and the differences in demand for AI talent in statistics across economic sectors and regions. Contrary to the pessimistic view of traditional disciplines’ survival in the AI age, our study suggests a more optimistic outlook. We recommend that professionals and organizations proactively adapt to AI advancements. Governments and academic institutions should collaborate to foster interdisciplinary skill development and evaluation for AI talents, thereby enhancing the employability of individuals from traditional disciplines and contributing to broader economic growth.
We analyse two soft notions of stable extensions in abstract argumentation, one that weakens the requirement of having full range and one that weakens the requirement of conflict-freeness. We then consider optimisation problems over these two notions that represent optimisation variants of the credulous reasoning problem with stable semantics. We investigate the computational complexity of these two problems in terms of the complexity of solving the optimisation problem exactly and in terms of approximation complexity. We also present some polynomial-time approximation algorithms for these optimisation problems and investigate their approximation quality experimentally.
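The two "soft" notions above each weaken one of the two defining conditions of a stable extension. For reference, a minimal checker for the classical (unweakened) definition, with hypothetical names:

```python
def is_stable(args, attacks, S):
    """Check whether S is a stable extension of the abstract
    argumentation framework (args, attacks): S must be conflict-free
    (no attack between members of S) and have full range (every
    argument outside S is attacked by some member of S)."""
    S = set(S)
    conflict_free = not any(a in S and b in S for a, b in attacks)
    attacked = {b for a, b in attacks if a in S}
    full_range = S | attacked == set(args)
    return conflict_free and full_range

# Example: a attacks b, b attacks c; {a, c} is stable
print(is_stable({'a', 'b', 'c'}, {('a', 'b'), ('b', 'c')}, {'a', 'c'}))
```

The soft variants studied in the paper relax exactly one of `conflict_free` or `full_range`, turning the credulous reasoning problem into an optimisation over how far the relaxed condition is violated.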
Objective. Voxel-wise visual encoding models based on convolutional neural networks (CNNs) have emerged as prominent tools for predicting human brain activity from functional magnetic resonance imaging (fMRI) signals. While CNN-based models imitate the hierarchical structure of the human visual cortex to generate explainable features in response to natural visual stimuli, there is still a need for a brain-inspired model that predicts brain responses accurately from biomedical data. Approach. To bridge this gap, we propose a response prediction module called the Structurally Constrained Multi-Output (SCMO) module that incorporates the homologous correlations arising between groups of voxels in a cortical region to predict more accurate responses. Main results. This module employs all the responses across a visual area to predict individual voxel-wise blood-oxygen-level-dependent (BOLD) responses and therefore accounts for the population activity and collective behavior of voxels. The module determines the relationships within each visual region by creating a structure matrix that represents the underlying voxel-to-voxel interactions. Moreover, since each response module in visual encoding tasks relies on the image features, we conducted experiments using two different feature extraction modules to assess the predictive performance of our proposed module: a recurrent CNN that integrates both feedforward and recurrent interactions, and the popular AlexNet model, which uses only feedforward connections. Significance. We demonstrate that the proposed framework provides reliable predictions of brain responses across multiple areas, outperforming benchmark models in terms of the stability and coherency of features.
Studying the neural basis of human dynamic visual perception requires extensive experimental data to evaluate the large swathes of functionally diverse brain neural networks driven by perceiving visual events. Here, we introduce the BOLD Moments Dataset (BMD), a repository of whole-brain fMRI responses to over 1000 short (3 s) naturalistic video clips of visual events across ten human subjects. We use the videos’ extensive metadata to show how the brain represents word- and sentence-level descriptions of visual events and identify correlates of video memorability scores extending into the parietal cortex. Furthermore, we reveal a match in hierarchical processing between cortical regions of interest and video-computable deep neural networks, and we showcase that BMD successfully captures temporal dynamics of visual events at second resolution. With its rich metadata, BMD offers new perspectives and accelerates research on the human brain basis of visual event perception.
Alzheimer’s disease is the leading cause of dementia worldwide, but the cellular pathways that underlie its pathological progression across brain regions remain poorly understood1–3. Here we report a single-cell transcriptomic atlas of six different brain regions in the aged human brain, covering 1.3 million cells from 283 post-mortem human brain samples across 48 individuals with and without Alzheimer’s disease. We identify 76 cell types, including region-specific subtypes of astrocytes and excitatory neurons and an inhibitory interneuron population unique to the thalamus and distinct from canonical inhibitory subclasses. We identify vulnerable populations of excitatory and inhibitory neurons that are depleted in specific brain regions in Alzheimer’s disease, and provide evidence that the Reelin signalling pathway is involved in modulating the vulnerability of these neurons. We develop a scalable method for discovering gene modules, which we use to identify cell-type-specific and region-specific modules that are altered in Alzheimer’s disease and to annotate transcriptomic differences associated with diverse pathological variables. We identify an astrocyte program that is associated with cognitive resilience to Alzheimer’s disease pathology, tying choline metabolism and polyamine biosynthesis in astrocytes to preserved cognitive function late in life. Together, our study develops a regional atlas of the ageing human brain and provides insights into cellular vulnerability, response and resilience to Alzheimer’s disease pathology.
Objectives
To evaluate the transferability of deep learning (DL) models for the early detection of adverse events to previously unseen hospitals.
Design
Retrospective observational cohort study utilizing harmonized intensive care data from four public datasets.
Setting
ICUs across Europe and the United States.
Patients
Adult patients admitted to the ICU for at least 6 hours who had good data quality.
Interventions
None.
Measurements and Main Results
Using carefully harmonized data from a total of 334,812 ICU stays, we systematically assessed the transferability of DL models for three common adverse events: death, acute kidney injury (AKI), and sepsis. We tested whether using more than one data source and/or algorithmically optimizing for generalizability during training improves model performance at new hospitals. We found that models achieved high area under the receiver operating characteristic curve (AUROC) for mortality (0.838–0.869), AKI (0.823–0.866), and sepsis (0.749–0.824) at the training hospital. As expected, AUROC dropped when models were applied at other hospitals, sometimes by as much as 0.200. Using more than one dataset for training mitigated the performance drop, with multicenter models performing roughly on par with the best single-center model. Dedicated methods promoting generalizability did not noticeably improve performance in our experiments.
Conclusions
Our results emphasize the importance of diverse training data for DL-based risk prediction. They suggest that as data from more hospitals become available for training, models may become increasingly generalizable. Even so, good performance at a new hospital still depended on the inclusion of compatible hospitals during training.