IMT School for Advanced Studies Lucca
Recent publications
Online Social Networks (OSNs) offer new means for political communication that have quickly come to play a crucial role in political campaigns, due to their pervasiveness and communication speed. However, the OSN environment is quite slippery and hides potential risks: many studies have presented evidence of d/misinformation campaigns and malicious activities by genuine or automated users, putting the effectiveness of online and offline political campaigns at severe risk. This phenomenon is particularly evident during crucial political events, such as elections. In the present paper, we provide a comprehensive description of the networks of interactions among users and bots during the UK elections of 2019. In particular, we focus on the polarised discussion about Brexit on Twitter, analysing a data set of more than 10 million tweets posted over more than a month. We found that automated accounts polluted the debate particularly in the days before the UK national elections, when we observe a steep increase of bots in the discussion; in the days after the election, their incidence returned to values similar to those observed a few weeks before. On the other hand, we found that the number of suspended users (i.e. accounts removed by the platform for violations of Twitter policy) remained constant until election day, after which it reached significantly higher values. Remarkably, after the TV debate between Boris Johnson and Jeremy Corbyn, we observed the injection of a large number of novel bots whose behaviour is markedly different from that of pre-existing ones. Finally, we explored the bots’ political orientation, finding that their activity is spread across the whole political spectrum, although in different proportions, and we studied the different usage of hashtags and URLs by automated accounts and suspended users, tracing the formation of common narratives on different sides of the debate.
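A minimal pandas sketch of the kind of daily bot-incidence series this abstract describes; the column names and the bot flag are hypothetical placeholders (in practice the flag would come from a bot-detection classifier):

```python
import pandas as pd

# Hypothetical minimal schema: one row per tweet, with a timestamp and
# a boolean flag marking accounts classified as automated (both names
# are invented for illustration).
tweets = pd.DataFrame({
    "created_at": pd.to_datetime(
        ["2019-12-10", "2019-12-10", "2019-12-11", "2019-12-13"]),
    "is_bot": [True, False, True, False],
})

# Daily share of tweets posted by flagged accounts: the kind of series
# in which a pre-election spike of automated activity would show up.
daily_bot_share = (
    tweets.set_index("created_at")["is_bot"].resample("D").mean()
)
print(daily_bot_share)
```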
This review examines the alleged crisis of trust in environmental science and its impact on public opinion, on policy decisions in the context of democratic governance, and on the interaction between science and society. In an interdisciplinary manner, the review focuses on the following themes: the trustworthiness of environmental science; empirical studies of levels of trust and trust formation; social media, environmental science, and disinformation; trust in environmental governance and democracy; and co-production of knowledge and the production of trust in knowledge. It explores both the normative issue of trustworthiness and empirical studies on how to build trust. The review does not provide any simple answers as to whether trust in science is generally in decline or whether we are returning to a less enlightened era in public life with decreased appreciation of knowledge and truth. The findings are more nuanced, showing signs of both distrust and trust in environmental science.
In this paper, we consider solving discounted Markov Decision Processes (MDPs) under the constraint that the resulting policy is stabilizing. In practice, MDPs are solved using some form of policy approximation. We leverage recent results proposing Model Predictive Control (MPC) as a structured approximator in the context of Reinforcement Learning, which makes it possible to introduce stability requirements directly inside the MPC-based policy. This restricts the solution of the MDP to stabilizing policies by construction. Because the stability theory for MPC is most mature in the undiscounted case, we first show that stable discounted MDPs can be reformulated as undiscounted ones. This observation entails that the undiscounted MPC-based policy with stability guarantees will produce the optimal policy for the discounted MDP if it is stable, and the best stabilizing policy otherwise.
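One textbook construction illustrates why such a reformulation is possible (this is the standard discount-as-termination device, not necessarily the paper's exact argument): absorb the discount factor γ into the transition kernel as a termination probability,

\[
\tilde{P}(s' \mid s,a) = \gamma\, P(s' \mid s,a), \qquad
\tilde{P}(\bot \mid s,a) = 1-\gamma, \qquad
\tilde{\ell}(\bot,\cdot) = 0,
\]

where \(\bot\) is an absorbing, zero-cost state. The undiscounted value of the augmented MDP then coincides with the discounted value of the original one:

\[
\tilde V(s) \;=\; \mathbb{E}\Big[\sum_{k=0}^{\infty} \tilde\ell(s_k,a_k)\Big]
\;=\; \mathbb{E}\Big[\sum_{k=0}^{\infty} \gamma^k \ell(s_k,a_k)\Big] \;=\; V^{\gamma}(s).
\]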
To efficiently predict crack propagation in thin-walled structures, a global–local approach for phase field modeling using large-deformation solid shell finite elements is developed in this work, considering the enhanced assumed strain (EAS) and the assumed natural strain (ANS) methods for the alleviation of locking effects. To tackle the poor convergence of standard Newton schemes, a quasi-Newton (QN) scheme is proposed for the monolithic solution of the coupled governing equations stemming from the enhanced assumed strain shell formulation. The excellent convergence of this QN monolithic scheme for the multi-field shell formulation is demonstrated through several paradigmatic boundary value problems, including single edge notched tension and shear, fracture of a cylindrical structure under mixed loading, and fatigue-induced crack growth. Compared with the popular alternating minimization (AM), or staggered, solution scheme, the QN monolithic scheme is found to be very efficient without loss of robustness, with significant computational gains observed in all the numerical examples. In addition, to further reduce the computational cost of fracture modeling for large-scale thin-walled structures, a global–local phase field approach for solid shell elements in the 3D setting is proposed, in which the full displacement-phase field problem is considered at the local level, while only the elastic problem is addressed at the global level. Its capability is demonstrated by the modeling of a cylindrical structure subjected to both static and fatigue cyclic loading, which is appealing for industrial applications.
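A minimal sketch of the monolithic quasi-Newton idea on a toy two-field problem; the 0-D energy, parameters, and variable names are illustrative only and do not reproduce the shell formulation:

```python
import numpy as np
from scipy.optimize import minimize

# Toy coupled "displacement-damage" energy with two scalar unknowns,
# standing in for the multi-field shell problem (all values invented).
def energy(z, k=1.0, gc=0.1, eps=0.05, load=0.1):
    u, d = z
    elastic = 0.5 * (1.0 - d) ** 2 * k * u ** 2   # degraded stiffness
    fracture = gc * d ** 2 / (2.0 * eps)          # AT2-like damage term
    return elastic + fracture - load * u          # minus external work

# A quasi-Newton method (BFGS) minimizes the coupled energy
# monolithically, i.e. over displacement and damage at once, without
# assembling the exact (possibly indefinite) coupled Hessian.
res = minimize(energy, x0=[0.0, 0.0], method="BFGS")
print(res.x)  # roughly [0.1, 0.005]: small damage under a small load
```

By contrast, a staggered (alternating minimization) scheme would freeze one field while minimizing over the other and iterate, which is robust but often needs many more passes to converge.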
Repeated polygonal patterns are pervasive in natural forms and structures. These patterns provide inherent structural stability while optimizing strength-per-weight and minimizing construction costs. In echinoids (sea urchins), a visible regularity can be found in the endoskeleton, which consists of a lightweight and resistant micro-trabecular meshwork (stereom). This foam-like structure follows an intrinsic geometrical pattern that has never been investigated. This study aims to analyse and describe it by focusing on the boss of tubercles—spine attachment sites subject to strong mechanical stresses—in the common sea urchin Paracentrotus lividus. The boss microstructure was identified as a Voronoi construction, characterized by 82% concordance with the computed Voronoi models, a prevalence of hexagonal polygons, and a regularly organized seed distribution. This pattern is interpreted as an evolutionary solution for the construction of the echinoid skeleton using a lightweight microstructural design that optimizes the trabecular arrangement, maximizes structural strength and minimizes the metabolic costs of secreting calcitic stereom. This identification is therefore particularly valuable for improving our understanding of the mechanical function of the stereom, as well as for effectively modeling and reconstructing similar structures in view of future applications in biomimetic technologies and designs.
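A small scipy sketch of the kind of comparison involved: build a Voronoi tessellation from seed points and measure the share of hexagonal cells. The random seed coordinates here are stand-ins for points that, in the study, would be extracted from micrographs:

```python
import numpy as np
from scipy.spatial import Voronoi

# Hypothetical seed points (e.g. centroids of stereom pores).
rng = np.random.default_rng(1)
seeds = rng.random((50, 2))
vor = Voronoi(seeds)

# Count the edges of each bounded Voronoi cell; a prevalence of
# hexagons is one signature of a regularly organized seed distribution.
edge_counts = [
    len(region) for region in vor.regions
    if region and -1 not in region  # keep bounded cells only
]
hexagon_share = np.mean(np.array(edge_counts) == 6)
print(f"share of hexagonal cells: {hexagon_share:.2f}")
```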
Sequential emulation is a semantics-based technique to automatically reduce property checking of distributed systems to the analysis of sequential programs. An automated procedure takes as input a formal specification of a distributed system, a property of interest, and the structural operational semantics of the specification language, and generates a sequential program whose execution traces emulate the possible evolutions of the considered system. The problem of whether the property of interest holds for the system can then be expressed either as a reachability or as a termination query on the program. This makes it possible to immediately adapt mature verification techniques developed for general-purpose languages to domain-specific languages, and to effortlessly integrate new techniques as soon as they become available. We test our approach on a selection of concurrent systems originating from different contexts, from population protocols to models of flocking behaviour. By combining a comprehensive range of program verification techniques, from traditional symbolic execution to modern induction-based methods such as property-directed reachability, we are able to draw consistent and correct verification verdicts for the considered systems.
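A hand-written miniature of the idea: a sequential program whose execution traces emulate the interleavings of a two-process system, with the property of interest phrased as an assertion (a reachability query). The paper generates such programs automatically from the operational semantics; this toy mutual-exclusion protocol is purely illustrative:

```python
import random

def emulate(steps=10_000, seed=0):
    random.seed(seed)
    flag, in_cs = 0, [False, False]   # shared lock flag, per-process state
    for _ in range(steps):
        pid = random.randrange(2)     # scheduler picks which process moves
        if not in_cs[pid] and flag == 0:  # atomic test-and-set: enter
            flag, in_cs[pid] = 1, True
        elif in_cs[pid]:                  # leave the critical section
            flag, in_cs[pid] = 0, False
        # Reachability query: can a state with both processes in the
        # critical section be reached in some interleaving?
        assert not all(in_cs), "mutual exclusion violated"

emulate()
```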
Bow-tie structures were introduced to describe the World Wide Web (WWW): in the directed network whose nodes are websites and whose edges are the hyperlinks connecting them, most nodes belong to a bow-tie, i.e. a Weakly Connected Component (WCC) composed of three main sectors: IN, OUT and SCC. SCC is the main Strongly Connected Component of the WCC, i.e. the largest subgraph in which every node is reachable from any other. The IN and OUT sectors are the sets of nodes not included in the SCC that, respectively, can reach and can be reached from nodes in the SCC. In the WWW, most websites are found in the SCC, while search engines belong to IN and authorities, such as Wikipedia, are in OUT. In the analysis of Twitter debates, the recent literature has focused on discursive communities, i.e. clusters of accounts interacting among themselves via retweets. In the present work, we studied discursive communities in 8 thematic Twitter datasets in various languages. Surprisingly, we observed that almost all discursive communities display a bow-tie structure during political or societal debates. They are instead absent when the topic of discussion is different, such as sporting events, as in the case of the Euro2020 Turkish and Italian datasets. We furthermore analysed the quality of the content created in the various sectors of the different discursive communities, using the domain annotation from the fact-checking website NewsGuard: we observe that, when a discursive community is affected by m/disinformation, the lowest-quality content is the one produced and shared in the SCC and, in particular, a strong incidence of low- or non-reputable messages is present in the flow of retweets between the SCC and OUT sectors. In this sense, in discursive communities affected by m/disinformation, most accounts have access to a great variety of content whose quality is, in general, quite low; such a situation perfectly describes the phenomenon of an infodemic, i.e. access to “an excessive amount of information about a problem, which makes it difficult to identify a solution”, according to the WHO.
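The bow-tie decomposition defined above is straightforward to compute with standard graph tooling. A minimal sketch on a toy graph (the function name is ours, not from the paper):

```python
import networkx as nx

def bowtie_sectors(G):
    """Split a directed graph into its SCC / IN / OUT bow-tie sectors."""
    scc = max(nx.strongly_connected_components(G), key=len)
    seed = next(iter(scc))
    # Nodes reachable from the SCC (its descendants) form OUT.
    out_sector = nx.descendants(G, seed) - scc
    # Nodes that can reach the SCC (its ancestors) form IN.
    in_sector = nx.ancestors(G, seed) - scc
    return scc, in_sector, out_sector

# Toy retweet network: nodes 1, 2, 3 form the SCC; 0 is IN; 4 is OUT.
G = nx.DiGraph([(0, 1), (1, 2), (2, 3), (3, 1), (3, 4)])
scc, in_s, out_s = bowtie_sectors(G)
print(scc, in_s, out_s)  # {1, 2, 3} {0} {4}
```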
Recent crises have shown that knowledge of the structure of input–output networks at the firm level is crucial when studying economic resilience from the microscopic point of view of firms that try to rewire their connections under supply and demand constraints. Unfortunately, empirical inter-firm network data are protected by confidentiality and hence rarely accessible. The available methods for network reconstruction from partial information treat all pairs of nodes as potentially interacting, thereby overestimating the rewiring capabilities of the system and the implied resilience. Here, we use two big data sets of transactions in the Netherlands to represent a large portion of the Dutch inter-firm network and document its properties. We then introduce a generalized maximum-entropy reconstruction method that preserves the production function of each firm in the data, i.e. the input and output flows of each node for each product type. We confirm that the new method becomes increasingly reliable in reconstructing the empirical network as a finer product resolution is considered, and can therefore be used as a realistic generative model of inter-firm networks with fine production constraints. Moreover, the likelihood of the model directly enumerates the number of alternative network configurations that leave each firm in its current production state, thereby estimating the reduction in the rewiring capability of the system implied by the observed input–output constraints.
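For context, a minimal sketch of the baseline maximum-entropy (fitness-based) reconstruction that methods of this family build on; it is not the paper's generalized product-level model, and the strengths here are synthetic:

```python
import numpy as np
from scipy.optimize import brentq

# Fitness-based max-entropy reconstruction: link probability
# p_ij = z*s_out_i*s_in_j / (1 + z*s_out_i*s_in_j), with z calibrated
# so that the expected number of links matches a target L.
rng = np.random.default_rng(0)
s_out = rng.pareto(2.0, 200) + 1   # hypothetical output strengths
s_in = rng.pareto(2.0, 200) + 1    # hypothetical input strengths
L_target = 1500

def expected_links(z):
    w = z * np.outer(s_out, s_in)
    np.fill_diagonal(w, 0.0)       # no self-loops
    return (w / (1 + w)).sum()

z = brentq(lambda z: expected_links(z) - L_target, 1e-12, 1e3)
print(f"calibrated z = {z:.3e}, E[L] = {expected_links(z):.1f}")
```

The paper's method adds per-product input and output constraints on top of this kind of entropy-maximizing ensemble, which is what shrinks the set of admissible rewirings.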
Many of the amazing functional capabilities of the brain are collective properties stemming from the interactions of large sets of individual neurons. In particular, the most salient collective phenomena in brain activity are oscillations, which require the synchronous activation of many neurons. Here, we analyse parsimonious dynamical models of neural synchronization running on top of synthetic networks that capture essential aspects of actual brain anatomical connectivity, such as hierarchical-modular and core-periphery structure. These models reveal the emergence of complex collective states with intermediate and flexible levels of synchronization, halfway along the synchronous–asynchronous spectrum. These states are best described as broad Griffiths-like phases, i.e. extensions of standard critical points that emerge in structurally heterogeneous systems. We analyse different routes (bifurcations) to synchronization and stress the relevance of ‘hybrid-type transitions’ in generating rich dynamical patterns. Overall, our results illustrate the complex interplay between structure and dynamics, underlining key aspects leading to the rich collective states needed to sustain brain functionality. This article is part of the theme issue ‘Emergent phenomena in complex physical and socio-technical systems: from cells to societies’.
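A minimal sketch of one such parsimonious synchronization model: Kuramoto-type phase oscillators on a toy two-module network. The topology, coupling strength, and all parameters are illustrative, not those of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two densely connected modules with sparse inter-module links: a toy
# stand-in for hierarchical-modular brain connectivity.
n, half = 40, 20
A = np.zeros((n, n))
A[:half, :half] = rng.random((half, half)) < 0.5
A[half:, half:] = rng.random((half, half)) < 0.5
A[:half, half:] = rng.random((half, half)) < 0.05
A[half:, :half] = rng.random((half, half)) < 0.05
np.fill_diagonal(A, 0)

omega = rng.normal(0.0, 0.5, n)          # heterogeneous frequencies
theta = rng.uniform(0, 2 * np.pi, n)     # random initial phases
K, dt = 0.4, 0.01

for _ in range(20_000):  # Euler integration of the Kuramoto dynamics
    coupling = (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    theta += dt * (omega + K * coupling)

# Global order parameter r in [0, 1]: 1 means full synchrony; modular
# topologies typically sustain intermediate values of r.
r = abs(np.exp(1j * theta).mean())
print(f"order parameter r = {r:.2f}")
```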
Psychopathy is a condition characterized by atypical emotions and socially maladaptive behavioral patterns. Among incarcerated people, psychopathy has been associated with higher rates of crime, recidivism, and resistance to treatment. Many studies have indicated significant heritability of psychopathic traits, but little is known about the specific contribution of genes and their interaction with adverse life experiences. Considering the primary role that serotonin plays in cognition and emotion, we investigated TPH2-rs4570625, 5-HTTLPR, MAOA-uVNTR, HTR1B-rs13212041 and HTR2A-rs6314 as risk factors for psychopathy in the largest sample of institutionalized individuals studied so far, consisting of 793 US White male incarcerated adults, and in a replication sample of 168 US White male incarcerated adolescents. In a subgroup of the adult sample, the interaction between genetics and parenting style, assessed by the Measure of Parental Style (MOPS) questionnaire, was also evaluated. The HTR1B-rs13212041-T/T genotype, as compared to the HTR1B-rs13212041-C allele, predicted higher psychopathy scores in both the adult and the adolescent samples. The interaction between the HTR1B-rs13212041-T/T genotype and paternal MOPS scores, investigated in a subgroup of the adult sample, was an even stronger predictor of higher levels of psychopathy than either genetics or environment taken individually. Overall, these data, obtained in two independent samples, shed new light on the neurobiological correlates of psychopathy, with promising implications at both the clinical and forensic levels.
Economic Model Predictive Control has recently gained popularity due to its ability to directly optimize a given performance criterion while enforcing constraint satisfaction for nonlinear systems. Recent research has developed both numerical algorithms and stability analysis for the undiscounted case. The introduction of a discount factor in the cost, however, can be desirable in some cases of interest, e.g., economics, stochastically terminating processes, and Markov decision processes. Unfortunately, the stability theory for this case is still not fully developed. In this paper we propose a new dissipativity condition to prove asymptotic stability in the infinite-horizon case, and we connect our results with existing ones in the literature on discounted economic optimal control. Numerical examples are provided to illustrate the theoretical results.
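For orientation, one form of discounted strict dissipativity used in the earlier literature (the paper's new condition is not reproduced here) asks for a storage function \(\lambda\) and a class-\(\mathcal{K}\) function \(\rho\) such that, for the optimal steady state \((x_s, u_s)\) and discount factor \(\beta \in (0,1]\),

\[
\ell(x,u) - \ell(x_s,u_s) + \lambda(x) - \beta\,\lambda\big(f(x,u)\big) \;\ge\; \rho\big(\lVert x - x_s \rVert\big)
\quad \text{for all } (x,u),
\]

with \(\beta = 1\) recovering the standard undiscounted condition.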
Background: Vaccine hesitancy (VH) remains a worldwide cause for concern. Most vaccination education strategies have followed a “fact-based” approach, built on the assumption that decision making is a rational process, without considering the influence of cognitive biases and heuristics. Our study aimed to identify factors involved in parents’ vaccination choices in order to inform and shape communication interventions. Methods: We conducted an online national survey among parents between November 2020 and April 2021. The questionnaire consisted of 42 items organised into four parts: (1) personal information; (2) cognitive biases and risk propensity; (3) analytic thinking (Cognitive Reflection Test); (4) conspiracy mentality, health literacy, and VH. Exploratory factor analysis was conducted to identify latent variables underlying the 19 items related to the six cognitive biases. Factors were categorised in quintiles and the corresponding pseudo-continuous variables used as predictors of VH. A logistic regression model was applied to assess the association of VH with the factors, conspiracy mentality, and risk propensity, adjusting for age, gender, economic status, and education level. Results: The study included 939 parents, 764 of whom were women (81.4%); 69.8% had a degree or higher level of education. Considering cognitive biases, four factors explaining 54% of the total variance were identified and characterised as: fear of the side effects of vaccines (scepticism factor); carelessness about the risk and consequences of infections (denial factor); optimistic attitude (optimistic bias factor); and preference for natural products (naturalness bias factor). All factors were positively associated with VH (p
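A minimal sketch of the analysis pipeline described above (exploratory factor analysis, quintile binning, logistic regression). All data and column names are synthetic placeholders, and sklearn's unrotated maximum-likelihood factor analysis stands in for whatever extraction and rotation the study actually used:

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for the survey: 19 Likert-type bias items and a
# binary vaccine-hesitancy outcome (all values random, for shape only).
rng = np.random.default_rng(0)
items = pd.DataFrame(rng.integers(1, 6, size=(939, 19)),
                     columns=[f"item_{i}" for i in range(1, 20)])
vh = rng.integers(0, 2, size=939)

# Step 1: extract four latent factors from the bias items.
fa = FactorAnalysis(n_components=4, random_state=0)
scores = fa.fit_transform(items)

# Step 2: bin factor scores into quintiles, used as ordinal predictors.
quintiles = np.column_stack(
    [pd.qcut(scores[:, k], 5, labels=False) for k in range(4)])

# Step 3: logistic regression of VH on the quintile-coded factors.
model = LogisticRegression().fit(quintiles, vh)
print(model.coef_)
```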
Predictive brain theory challenges the general assumption of a brain extracting knowledge from sensations, and considers the brain as an organ of inference that actively constructs explanations about reality beyond its sensory evidence. The predictive brain has been formalized through Bayesian updating, where top-down predictions are compared with bottom-up evidence. In this article, we propose a different approach to the predictive brain based on quantum probability, which we call the Quantum Predictive Brain (QPB). QPB is consistent with the Bayesian framework, but considers it a special case. The tenet of QPB is that top-down predictions and bottom-up evidence are complementary: they cannot be jointly determined to pursue a univocal model of brain functioning. QPB can account for several high-order cognitive phenomena that are problematic for current predictive brain theories, and it offers new insights into the mechanisms of neural reuse.
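To make the contrast concrete, compare classical Bayesian updating with the standard quantum-probability (Lüders) state update; this is a generic illustration, not necessarily the article's exact formalism:

\[
P(h \mid e) \;=\; \frac{P(e \mid h)\,P(h)}{P(e)}
\qquad \text{vs.} \qquad
\psi_e \;=\; \frac{\Pi_e\,\psi}{\lVert \Pi_e\,\psi \rVert},
\]

where \(\Pi_e\) is the projector associated with evidence \(e\). Since projectors need not commute, in general \(\lVert \Pi_e \Pi_h \psi \rVert^2 \neq \lVert \Pi_h \Pi_e \psi \rVert^2\): the order in which prediction and evidence are evaluated matters, which is one standard way to formalize the complementarity the abstract invokes.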
Recent scientific findings suggest that dopamine plays a central role in impulsivity, and that aversive life experiences may promote the high levels of impulsivity that often underlie violent behavior. To deepen our understanding of the complex gene-by-environment interplay in impulsive behavior, we genotyped six dopaminergic allelic variants (ANKK1-rs1800497, TH-rs6356, DRD4-rs1800955, DRD4-exonIII-VNTR, SLC6A3-VNTR and COMT-rs4680) in 655 US White male inmates convicted of violent crimes, whose impulsivity was assessed by the BIS-11 (Barratt Impulsiveness Scale). Furthermore, in a subsample of 216 inmates from the whole group, we also explored the potential interplay between the genotyped dopaminergic variants and parental maltreatment, measured by the MOPS (Measure of Parental Style), in promoting impulsivity. We found a significant interaction among paternal MOPS scores, the ANKK1-rs1800497-T allele and the TH-rs6356-A allele, which increased the variance of BIS-11 cognitive/attentive scores explained by paternal maltreatment from 1.8% up to 20.5%. No direct association between any of the individual genetic variants and impulsivity was observed. Our data suggest that paternal maltreatment increases the risk of attentive/cognitive impulsivity, and that this risk is higher in carriers of specific dopaminergic alleles that potentiate dopaminergic neurotransmission. These findings add further evidence to the mutual role that genetics and early environmental factors exert in modulating human behavior, and highlight the importance of childhood care interventions.
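A minimal sketch of the kind of gene-by-environment regression described here, comparing variance explained with and without the interaction term; the data and column names are synthetic placeholders, not the study's variables:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic G x E data: binary risk-allele carrier status, a paternal
# MOPS score, and a BIS-11 subscale score with a built-in interaction.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "carrier": rng.integers(0, 2, 216),
    "mops_pat": rng.normal(30, 8, 216),
})
df["bis_cog"] = 10 + 0.1 * df.mops_pat * df.carrier + rng.normal(0, 2, 216)

# Compare R^2 of the main-effects model against the model that adds
# the carrier x maltreatment interaction ("*" expands to both).
base = smf.ols("bis_cog ~ carrier + mops_pat", data=df).fit()
inter = smf.ols("bis_cog ~ carrier * mops_pat", data=df).fit()
print(f"R2 without interaction: {base.rsquared:.3f}")
print(f"R2 with interaction:    {inter.rsquared:.3f}")
```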
Mean-field models are an established method to analyze large stochastic systems with N interacting objects by means of simple deterministic equations that are asymptotically correct as N tends to infinity. For finite N, mean-field equations provide an approximation whose accuracy is model- and parameter-dependent. Recent research has focused on refining the approximation by computing suitable quantities associated with expansions of order $1/N$ and $1/N^2$ of the mean-field equation. In this paper we present a new method for refining mean-field approximations. It couples the master equation governing the evolution of the probability distribution on a truncation of the original state space with a mean-field approximation of a time-inhomogeneous population process that dynamically shifts the truncation across the whole state space. We provide a result of asymptotic correctness in the limit where the truncation covers the state space; for finite truncations, the equations give a correction to the mean-field approximation. We apply our method to examples from the literature to show that, even with modest truncations, it is effective in models that cannot be refined using existing techniques due to non-differentiable drifts, and that it can outperform the state of the art in challenging models whose mean-field equations are unstable due to orbit cycles.
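For the standard density-dependent setting such refinements start from, the mean-field equation and the order-$1/N$ expansion read as follows (a generic statement of the known result, not the paper's new truncation-based coupling):

\[
\dot{x}(t) \;=\; \sum_{\ell \in \mathcal{L}} \ell\, \beta_{\ell}\big(x(t)\big) \;=:\; f\big(x(t)\big),
\qquad
\mathbb{E}\big[X^{(N)}(t)\big] \;=\; x(t) + \frac{v(t)}{N} + O\big(N^{-2}\big),
\]

where $X^{(N)}$ is the scaled population process with jump directions $\ell \in \mathcal{L}$ and rate functions $\beta_\ell$. The correction term $v(t)$ is defined through derivatives of the drift $f$, which is precisely what breaks down for the non-differentiable drifts that the truncation-based method can still handle.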
Artificial intelligence (AI) models and procedures hold remarkable predictive power in the medical domain through their ability to discover hidden, non-obvious clinical patterns in data. However, due to the sparsity, noise, and time-dependency of medical data, AI procedures are raising unprecedented issues related to the mismatch between doctors’ mental reasoning and the statistical answers provided by algorithms. Electronic systems can reproduce or even amplify noise hidden in the data, especially when the diagnoses of the subjects in the training data set are inaccurate or incomplete. In this paper we describe the conditions that need to be met for AI instruments to be truly useful in the orthodontic domain. We report some examples of computational procedures that are capable of extracting orthodontic knowledge through ever deeper patient representations. To have confidence in these procedures, orthodontic practitioners should recognize the benefits, shortcomings, and unintended consequences of AI models, as algorithms that learn from human decisions likewise learn their mistakes and biases.
This work applies Matrix Completion (MC) – a class of machine-learning methods commonly used in recommendation systems – to the analysis of economic complexity. MC is applied to reconstruct the Revealed Comparative Advantage (RCA) matrix, whose elements express the relative advantage of countries in given classes of products, as evidenced by yearly trade flows. A high-accuracy binary classifier is derived from the MC application to discriminate between elements of the RCA matrix that are, respectively, higher and lower than one. We introduce a novel Matrix cOmpletion iNdex of Economic complexitY (MONEY) based on MC, related to the degree of predictability of the RCA entries of different countries (the lower the predictability, the higher the complexity). Unlike previously developed economic complexity indices, MONEY takes into account several singular vectors of the matrix reconstructed by MC, whereas other indices are based on only one or two eigenvectors of a suitable symmetric matrix derived from the RCA matrix. Finally, MC is compared with state-of-the-art economic complexity indices, showing that the MC-based classifier achieves better performance than previous methods based on the application of machine learning to economic complexity.
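A minimal sketch of one standard MC algorithm (nuclear-norm-regularized soft-impute) applied to an RCA-like matrix; the paper's exact algorithm and the MONEY index construction are not reproduced here, and the data are synthetic:

```python
import numpy as np

def soft_impute(M, mask, lam=0.5, n_iter=200):
    """Nuclear-norm matrix completion via iterative SVD soft-thresholding."""
    X = np.where(mask, M, 0.0)
    for _ in range(n_iter):
        # Keep observed entries, fill missing ones with current estimate.
        filled = np.where(mask, M, X)
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        s = np.maximum(s - lam, 0.0)  # shrink singular values
        X = (U * s) @ Vt
    return X

# Toy low-rank "RCA-like" matrix (30 countries x 40 products) with
# roughly 30% of the entries hidden.
rng = np.random.default_rng(0)
true = rng.random((30, 5)) @ rng.random((5, 40))
mask = rng.random(true.shape) > 0.3
completed = soft_impute(true, mask)

# Threshold at 1 to mimic the binary RCA >= 1 classification step.
print(((completed >= 1) == (true >= 1)).mean())
```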
Multi-regional input–output (I/O) matrices provide the networks of within- and cross-country economic relations. In the context of I/O analysis, the methodology adopted by national statistical offices in data collection raises the issue of obtaining reliable data in a timely fashion, which makes the reconstruction of (parts of) I/O matrices of particular interest. In this work, we propose a method combining hierarchical clustering and matrix completion with a LASSO-like nuclear norm penalty to predict missing entries of a partially unknown I/O matrix. Through analyses based on both real-world and synthetic I/O matrices, we study the effectiveness of the proposed method in predicting missing values from both previous years’ data and current data on countries similar to the one whose current data are obscured. To show the usefulness of our method, an application based on World Input–Output Database (WIOD) tables—an example of industry-by-industry I/O tables—is provided. Strong structural similarities between WIOD and other I/O tables are also found, which makes the proposed approach easily generalizable to them.
246 members
Sahizer Samuk Carignani
  • LYNX Center for the Interdisciplinary Analysis of Images
Giulio Bernardi
  • MoMiLab Unit
Emiliano Ricciardi
  • MOMILAB - MOlecular MInd LABoratory
Giorgio Gnecco
  • AXES (Laboratory for the Analysis of Complex Economic Systems) Research Unit
Information
Address
Piazza San Francesco, 16, 55100, Lucca, Italy
Website
https://www.imtlucca.it/