National Institute for Research in Computer Science and Control
Recent publications
This is the story, told in the light of a new analysis of historical data, of a mathematical biology problem that was explored in the 1930s in Thomas Morgan’s laboratory at the California Institute of Technology. It is one of the early developments of evolutionary genetics and quantitative phylogeny, and deals with the identification and counting of chromosomal inversions in Drosophila species from comparisons of genetic maps. A re-analysis of the data produced in the 1930s using current mathematics and computational technologies reveals how a team of biologists, with the help of a renowned mathematician and against their first intuition, came to an erroneous conclusion regarding the presence of phylogenetic signal in gene arrangements. This example illustrates two different aspects of the same story: (1) the appearance of a mathematical problem in biology solved by the development of a combinatorial algorithm, which was unusual at the time, and (2) the role of errors in scientific activity. Also underlying is the possible influence of computational complexity on the direction of research in biology.
Many computer vision applications rely on feature detection and description, hence the need for computationally efficient and robust 4D light field (LF) feature detectors and descriptors. In this paper, we propose a novel light field feature descriptor based on the Fourier disparity layer representation, for light field imaging applications. After Harris feature detection in a scale-disparity space, the proposed feature descriptor is extracted using a circular neighborhood rather than a square neighborhood. It is shown to yield more accurate feature matching, compared with the LiFF LF feature, at a lower computational complexity. In order to evaluate the feature matching performance of the proposed descriptor, we generated a synthetic stereo LF dataset with ground-truth matching points. Experimental results on synthetic and real-world datasets show that our solution outperforms existing methods in terms of both feature detection robustness and feature matching accuracy.
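The circular-neighborhood idea above can be illustrated in isolation. The sketch below (a toy example, not the paper's Fourier-disparity-layer pipeline) extracts the pixels inside a disk around a detected keypoint; a radius-3 disk keeps 29 samples where the enclosing 7×7 square would keep 49, dropping the corner pixels that hurt rotation robustness.

```python
import numpy as np

def circular_patch(img, cy, cx, radius):
    """Extract pixel values inside a disk around (cy, cx).

    A circular neighborhood is rotation-friendly and excludes the
    corner pixels that a square patch of the same extent would include.
    """
    ys, xs = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    mask = ys**2 + xs**2 <= radius**2
    patch = img[cy - radius:cy + radius + 1, cx - radius:cx + radius + 1]
    return patch[mask]

# Toy 16x16 image with a keypoint assumed at the centre.
img = np.arange(256, dtype=float).reshape(16, 16)
desc = circular_patch(img, 8, 8, 3)

# A radius-3 disk holds 29 pixels; the enclosing 7x7 square holds 49.
print(desc.size)  # 29
```

In a real descriptor the disk samples would then be normalized and ordered (e.g. by angle) before matching; here only the sampling geometry is shown.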
Introduction Certain neuropsychiatric symptoms (NPS), namely apathy, depression and anxiety, have demonstrated great value in predicting dementia progression, eventually representing a window of opportunity for timely diagnosis and treatment. However, sensitive and objective markers of these symptoms are still missing. Objectives To investigate the association between automatically extracted speech features and NPS in early-stage dementia patients. Methods Speech of 141 patients aged 65 or older with neurocognitive disorder was recorded while they performed two short narrative speech tasks. The presence of NPS was assessed with the Neuropsychiatric Inventory. Paralinguistic markers relating to prosodic, formant, source, and temporal qualities of speech were automatically extracted and correlated with NPS. Machine learning experiments were carried out to validate the diagnostic power of the extracted markers. Results Different speech variables appear to be associated with specific neuropsychiatric symptoms of dementia: apathy correlates with temporal aspects and anxiety with voice quality, and this was mostly consistent between male and female participants after correction for cognitive impairment. Machine learning regressors are able to extract information from speech features and perform above baseline in predicting anxiety, apathy and depression scores. Conclusions Different NPS seem to be characterized by distinct speech features, which in turn are easily extracted automatically from short vocal tasks. These findings support the use of speech analysis for detecting subtypes of NPS, which could have great implications for future clinical trials. Disclosure No significant relationships.
Many applications of light field (LF) imaging have been limited by the spatial-angular resolution problem, hence the need for efficient super-resolution techniques. Recently, learning-based solutions have achieved remarkably better performance than traditional super-resolution (SR) techniques. Unfortunately, the learning or training process relies heavily on the training dataset, which could be limited for most LF imaging applications. In this paper, we propose a novel LF spatial-angular SR algorithm based on zero-shot learning. We suggest learning cross-scale reusable features in the epipolar plane image (EPI) space, avoiding explicitly modeling scene priors or implicitly learning them from a large number of LFs. Most importantly, without using any external LFs, the proposed algorithm can simultaneously super-resolve an LF in both the spatial and angular domains. Moreover, the proposed solution is free of depth or disparity estimation, which is usually employed by existing LF spatial and angular SR methods. Using a simple 8-layer fully convolutional network, we show that the proposed algorithm can generate results comparable to the state of the art in spatial SR, and outperforms existing methods in terms of angular SR on multiple public LF datasets. The experimental results indicate that cross-scale features can be well learned and reused for LF SR in the EPI space.
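The EPI space the abstract works in is just a 2D slice of the 4D light field. The sketch below (illustrative; the axis convention L(u, v, x, y) is an assumption, not taken from the paper) shows how fixing one angular coordinate and one spatial coordinate yields an epipolar plane image, in which scene depth shows up as the slope of lines.

```python
import numpy as np

# A toy 4D light field L(u, v, x, y): angular coordinates (u, v),
# spatial coordinates (x, y). Axis ordering is an assumed convention.
U, V, X, Y = 5, 5, 32, 32
lf = np.random.rand(U, V, X, Y)

def horizontal_epi(lf, v0, y0):
    """Fix one angular coordinate v0 and one spatial coordinate y0:
    the remaining (u, x) slice is an epipolar plane image (EPI),
    where the slope of each line encodes scene depth/disparity."""
    return lf[:, v0, :, y0]

epi = horizontal_epi(lf, 2, 16)
print(epi.shape)  # (5, 32)
```

Super-resolving in this space, as the abstract describes, amounts to upsampling such slices while keeping their line structure consistent.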
Reirradiation of a tumor recurrence or second cancer in a previously irradiated area is challenging due to the lack of high-quality physical, radiobiological and clinical data, and to the inherent substantial risks of toxicity with cumulative dose and uncertain tissue recovery. Yet major advances have been made in radiotherapy techniques that have the potential to achieve cure while limiting severe toxicity rates; still, much research is necessary to better appraise the therapeutic index in such a complex situation.
Identifying objective and reliable markers to tailor the diagnosis and treatment of psychiatric patients remains a challenge, as conditions like major depression, bipolar disorder, or schizophrenia are characterized by complex behavior observations or subjective self-reports instead of easily measurable somatic features. Recent progress in computer vision, speech processing and machine learning has enabled detailed and objective characterization of human behavior in social interactions. However, the application of these technologies to personalized psychiatry is limited by the lack of sufficiently large corpora that combine multi-modal measurements with longitudinal assessments of patients covering more than a single disorder. To close this gap, we introduce Mephesto, a multi-centre, multi-disorder longitudinal corpus creation effort designed to develop and validate novel multi-modal markers for psychiatric conditions. Mephesto will consist of multi-modal audio, video, and physiological recordings as well as clinical assessments of psychiatric patients, covering a six-week main study period as well as several follow-up recordings spread across twelve months. We outline the rationale and study protocol and introduce four cardinal use cases that will build the foundation of a new state of the art in personalized treatment strategies for psychiatric disorders.
Microtubules and their post-translational modifications are involved in major cellular processes. In severe diseases such as neurodegenerative disorders, tyrosinated tubulin and tyrosinated microtubules are present at lower concentrations. We present here a mechanistic mathematical model of the microtubule tyrosination cycle, combining computational modeling and high-content image analyses to understand the key kinetic parameters governing the tyrosination status in different cellular models. This mathematical model is parameterized, firstly, for neuronal cells using kinetic values taken from the literature, and, secondly, for proliferative cells, by a change of two parameter values obtained, and shown minimal, by a continuous optimization procedure based on temporal logic constraints used to formalize experimental high-content imaging data. In both cases, the mathematical models explain the inability to increase the tyrosination status by activating the Tubulin Tyrosine Ligase enzyme. The tyrosinated tubulin is indeed the product of a chain of two reactions in the cycle: the detyrosinated microtubule depolymerization followed by its tyrosination. The tyrosination status at equilibrium is thus limited by both reaction rates, and activating the tyrosination reaction alone is not effective. Our computational model also predicts the effect of inhibiting the Tubulin Carboxy Peptidase enzyme, which we have experimentally validated in a MEF cellular model. Furthermore, the model predicts that activating two particular kinetic parameters, the tyrosination and detyrosinated microtubule depolymerization rate constants, in synergy, should suffice to enable an increase of the tyrosination status in living cells.
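The "chain of two reactions" argument can be made concrete with a toy steady-state calculation. The sketch below is an illustrative lumped three-state cycle, not the paper's parameterised model: detyrosinated microtubules depolymerize, the released tubulin is tyrosinated, and a lumped back-reaction closes the cycle. At steady state each pool's share is proportional to the inverse of its outgoing rate constant, so boosting tyrosination alone barely helps when depolymerization is limiting.

```python
# Toy three-state cycle (illustrative rate constants, hypothetical values):
# detyrosinated MT --k_depol--> detyrosinated tubulin --k_tyr-->
# tyrosinated tubulin --k_back--> detyrosinated MT
# (re-polymerisation + detyrosination lumped into one back step).
def tyrosinated_fraction(k_depol, k_tyr, k_back):
    """Steady-state fraction of the tyrosinated pool in the cycle.

    At steady state all three fluxes are equal, so each pool's share is
    proportional to the inverse of its outgoing rate constant.
    """
    w = [1.0 / k_depol, 1.0 / k_tyr, 1.0 / k_back]
    return w[2] / sum(w)

base = tyrosinated_fraction(k_depol=0.1, k_tyr=1.0, k_back=1.0)
# Boosting tyrosination tenfold barely helps: depolymerisation still limits.
tyr_only = tyrosinated_fraction(k_depol=0.1, k_tyr=10.0, k_back=1.0)
# Boosting both upstream steps in synergy lifts the ceiling substantially.
both = tyrosinated_fraction(k_depol=1.0, k_tyr=10.0, k_back=1.0)
print(round(base, 3), round(tyr_only, 3), round(both, 3))
```

This mirrors the model's prediction: the tyrosination status at equilibrium is capped by the slowest upstream step, so only the synergistic activation of both rate constants raises it appreciably.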
When estimating full-body motion from experimental data, inverse kinematics followed by inverse dynamics does not guarantee dynamical consistency of the resulting motion, especially in movements where the trajectory depends heavily on the initial state, such as in free fall. Our objective was to estimate dynamically consistent joint kinematics and kinetics of complex aerial movements. A 42-degree-of-freedom model with 95 markers was personalised for five elite trampoline athletes performing various backward and forward twisting somersaults. Using dynamic optimisation, our algorithm estimated joint angles, velocities and torques by tracking the recorded marker positions. Kinematics, kinetics, angular and linear momenta, and marker tracking difference were compared to the results of an Extended Kalman Filter (EKF) followed by inverse dynamics. Angular momentum and horizontal linear momentum were conserved throughout the estimated motion, as required by free-fall dynamics. Marker tracking difference went from 17 ± 4 mm for the EKF to 36 ± 11 mm with dynamic optimisation tracking the experimental markers, and to 49 ± 9 mm with dynamic optimisation tracking EKF joint angles. Joint angles from the dynamic optimisations were similar to those of the EKF, and joint torques were smoother. This approach satisfies the dynamics of complex aerial rigid-body movements while remaining close to the experimental 3D marker dataset.
Emergency department (ED) overcrowding is an ongoing problem worldwide, and scoring systems are available for its detection. This study aims to provide a model that allows both the detection and the management of overcrowding. To that end, it is crucial to implement a system that can model and reason about ED resources and ED performance indicators based on environmental factors. We therefore propose in this paper a new domain ontology (EDOMO), based on a new overcrowding estimation score (OES), to detect critical situations, specify the level of overcrowding and propose solutions to deal with these situations. Our approach is based on a real database built over more than four years at the Lille University Hospital Center (LUHC) in France. The resulting ontology is capable of modeling complete domain knowledge to enable semantic reasoning based on SWRL rules. The evaluation results show that EDOMO is complete and can enhance the functioning of the ED.
The prediction of heat transfers in Reynolds-Averaged Navier–Stokes (RANS) simulations requires corrections for rough surfaces. The turbulence models are adapted to cope with surface roughness impacting the near-wall behaviour compared to a smooth surface. These adjustments in the models correctly predict the skin friction but tend to overpredict the heat transfers compared to experiments. These overpredictions require the use of an additional thermal correction model to lower the heat transfers. Finding the numerical parameters that best fit the experimental results is non-trivial, since roughness patterns are often irregular. The objective of this paper is to develop a methodology to calibrate the roughness parameters of a thermal correction model for a rough curved channel test case. First, a design of experiments allows the generation of metamodels for the prediction of the heat transfer coefficients, using the polynomial chaos expansion approach. The metamodels are then successively used with a Bayesian inversion and a genetic algorithm method to estimate the set of roughness parameters that best fits the available experimental results. Both calibrations are compared to assess their strengths and weaknesses. Starting from unknown roughness parameters, this methodology allows calibrating them, obtaining an average discrepancy of between 4.7% and 10% between the calibrated RANS heat transfer prediction and the experimental results. The methodology is promising, showing the ability to finely select the roughness parameters to input into the numerical model to fit the experimental heat transfer, without a priori knowledge of the actual roughness pattern.
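The surrogate-then-calibrate workflow above can be sketched in miniature. Here an ordinary least-squares polynomial fit stands in for the polynomial chaos expansion, a one-line analytic function stands in for the expensive RANS solver, and a grid search stands in for the Bayesian/genetic calibration; all names and values are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Stand-in for the "expensive" RANS heat-transfer response of a single
# roughness parameter ks (purely illustrative, hypothetical response).
def simulator(ks):
    return 1.0 + 0.8 * ks + 0.3 * ks**2

# 1) Design of experiments: run the simulator at a few design points and
#    fit a polynomial metamodel (a plain degree-2 least-squares fit stands
#    in for the polynomial chaos expansion used in the paper).
design = np.linspace(0.0, 2.0, 7)
metamodel = np.poly1d(np.polyfit(design, simulator(design), deg=2))

# 2) Calibration: search the cheap metamodel for the roughness value whose
#    predicted heat transfer matches the "experimental" target (a grid
#    search stands in for the Bayesian inversion / genetic algorithm).
target = simulator(1.3)  # pretend this is a measured heat-transfer value
grid = np.linspace(0.0, 2.0, 2001)
best = grid[np.argmin((metamodel(grid) - target) ** 2)]
print(round(float(best), 2))  # close to 1.3, the "true" roughness value
```

The point of the two-stage structure is that every calibration query hits the cheap metamodel, so thousands of candidate parameter sets can be evaluated without rerunning the RANS solver.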
We introduce the notion of clone algebra (CA), intended to found a one-sorted, purely algebraic theory of clones. CAs are defined by identities and thus form a variety in the sense of universal algebra. The most natural CAs, the ones the axioms are intended to characterise, are algebras of functions, called functional clone algebras (FCA). The universe of an FCA, called an omega-clone, is a set of infinitary operations on a given set, containing the projections and closed under finitary compositions. The main result of this paper is the general representation theorem, where it is shown that every CA is isomorphic to an FCA and that the variety CA is generated by the class of finite-dimensional CAs. This implies that every omega-clone is algebraically generated by a suitable family of clones by using direct products, subalgebras and homomorphic images. We conclude the paper with two applications. In the first, we use clone algebras to answer a classical question about the lattices of equational theories. The second application is to the study of the category of all varieties of algebras.
Modeling multidimensional data using tensor models, particularly through the Canonical Polyadic (CP) model, can be found in a large number of timely and important signal-based applications. However, the computational complexity in the case of high-order and large-scale tensors remains a challenge that prevents the implementation of the CP model in practice. While some algorithms in the literature deal with large-scale problems, others target high-order tensors. Nevertheless, these algorithms encounter major issues when both problems are present. In this paper, we propose a parallelizable strategy based on tensor network theory to deal simultaneously with both high-order and large-scale problems. We show the usefulness of the proposed strategy in reducing the computation time on a realistic electroencephalography data set.
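For readers unfamiliar with the CP model, a minimal baseline helps fix ideas. The sketch below implements plain alternating least squares (ALS) for a 3-way CP decomposition with NumPy; it is the textbook baseline whose cost the paper's tensor-network strategy is designed to reduce, not the paper's own algorithm.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding of a 3-way tensor into a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(A, B):
    """Column-wise Kronecker product of two factor matrices."""
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, A.shape[1])

def cp_als(T, rank, n_iter=200, seed=0):
    """Plain alternating least squares for the CP model: each factor is
    solved in turn from the corresponding unfolding (no tensor-network
    acceleration; a baseline sketch of the model the paper scales up)."""
    rng = np.random.default_rng(seed)
    A, B, C = (rng.standard_normal((s, rank)) for s in T.shape)
    for _ in range(n_iter):
        A = unfold(T, 0) @ np.linalg.pinv(khatri_rao(B, C)).T
        B = unfold(T, 1) @ np.linalg.pinv(khatri_rao(A, C)).T
        C = unfold(T, 2) @ np.linalg.pinv(khatri_rao(A, B)).T
    return A, B, C

# Build an exactly rank-2 tensor and check that CP-ALS recovers it.
rng = np.random.default_rng(1)
A0, B0, C0 = (rng.standard_normal((s, 2)) for s in (4, 5, 6))
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = cp_als(T, rank=2)
That = np.einsum('ir,jr,kr->ijk', A, B, C)
print(np.linalg.norm(T - That) / np.linalg.norm(T))  # small relative error
```

Each ALS step costs on the order of the full tensor's size, which is exactly what becomes prohibitive for high-order, large-scale tensors and motivates the paper's parallelizable tensor-network reformulation.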
In light field imaging, axial refocusing precision corresponds to the minimum distance in the axial direction between two distinguishable refocusing planes. Refocusing precision can be essential for applications like light field microscopy. In this paper, we introduce a refocusing precision model based on a geometrical analysis of the flow of rays within the virtual camera. The model establishes the relationship between the feature separability of refocusing and different camera settings. As with extending the numerical aperture (NA) in classical imaging, extending the light field baseline also gives more accurate refocusing results. To test the axial refocusing precision, we conduct experiments with a first-generation Lytro camera as well as a Blender light field simulation; the results are essentially consistent with our prediction. We then show that computationally extending the light field baseline increases the axial refocusing precision on real plenoptic camera and light field microscopy datasets.
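The refocusing operation the model analyses is, at its simplest, shift-and-sum over sub-aperture views. The sketch below (a 1D-parallax toy, not the paper's precision model) shows why: a feature realigns into a sharp peak only at the slope matching its true disparity, and the wider the range of view positions (the baseline), the faster misfocused slopes blur out.

```python
import numpy as np

def refocus(views, positions, slope):
    """Shift-and-sum refocusing: shift each sub-aperture view by its
    angular position times a disparity slope, then average. Varying
    `slope` moves the synthetic focal plane along the axial direction."""
    acc = np.zeros_like(views[0], dtype=float)
    for view, u in zip(views, positions):
        acc += np.roll(view, int(round(u * slope)), axis=1)
    return acc / len(views)

# A toy light field: a bright stripe with disparity 2 px per view step.
H, W = 8, 32
positions = range(-2, 3)
views = []
for u in positions:
    img = np.zeros((H, W))
    img[:, 16 + 2 * u] = 1.0  # stripe shifts with viewpoint
    views.append(img)

# Refocusing at the stripe's true slope realigns it into a sharp peak;
# a wrong slope spreads it across neighbouring columns instead.
sharp = refocus(views, positions, slope=-2)
blurred = refocus(views, positions, slope=0)
print(sharp[:, 16].mean(), blurred[:, 16].mean())  # 1.0 0.2
```

Two refocusing planes are "distinguishable" precisely when their slopes differ enough for this realignment to break down between them, which is what ties the axial precision to the baseline.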
Today, in rural isolated areas or so-called ‘medical deserts’, access to diagnosis and care is very limited. With the current pandemic crisis, now more than ever, telemedicine platforms are increasingly employed for remote medical assessment, yet only a few are tailored to comprehensive teleneuropsychological assessment of older adults. Hence, our study focuses on evaluating the feasibility of performing a remote neuropsychological assessment of older adults suffering from a cognitive complaint. Fifty participants (aged 55 and older) were recruited at the local hospital of Digne-les-Bains, France. A brief neuropsychological assessment including a short clinical interview and several validated neuropsychological tests was administered in two conditions, once by teleneuropsychology (TNP) and once face-to-face (FTF), in a crossover design. Acceptability and user experience were assessed through questionnaires. Results show high agreement between the FTF and TNP conditions in most tests, and the TNP condition was overall well accepted by the participants. However, differences in test performance were observed, which underlines the need to validate TNP tests on broader samples with normative data.
Working towards the development of robust motion recognition systems for assistive technology control, the widespread approach has been to use a plethora of, oftentimes multi-modal, sensors. In this paper, we develop single-sensor motion recognition systems. Exploiting the peripheral nature of surface electromyography (sEMG) data acquisition, we optimise the information extracted from sEMG sensors. This allows a reduction in the number of sEMG sensors or the provision of contingencies in a system with redundancies. In particular, we process the sEMG readings captured at the trapezius descendens and platysma muscles. We demonstrate that sEMG readings captured at one muscle contain distinct information on movements or contractions of other agonists. We used the trapezius and platysma muscle sEMG data captured in able-bodied participants and participants with tetraplegia to classify shoulder movements and platysma contractions using white-box supervised learning algorithms. Using the trapezius sensor, shoulder raise is classified with an accuracy of 99%. Implementing subject-specific multi-class classification, shoulder raise, shoulder forward and shoulder backward are classified with 94% accuracy amongst object raise and shoulder raise-and-hold data in able-bodied adults. A three-way classification of the platysma sensor data captured from participants with tetraplegia achieves 95% accuracy on platysma contraction and shoulder raise detection.
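The appeal of white-box classifiers for this setting is that their decisions are inspectable. The sketch below is a deliberately simple illustration on synthetic data (a one-feature decision stump on signal RMS, hypothetical amplitudes; not the study's actual pipeline): a single interpretable threshold separates high-activation from rest windows on one sEMG channel.

```python
import numpy as np

rng = np.random.default_rng(0)

def rms(x):
    """Root-mean-square amplitude, a classic interpretable sEMG feature."""
    return np.sqrt(np.mean(x ** 2))

# Synthetic 200-sample windows: rest = low-amplitude noise,
# "shoulder raise" = high-amplitude activation (hypothetical levels).
rest = [rng.normal(0, 0.05, 200) for _ in range(50)]
active = [rng.normal(0, 0.40, 200) for _ in range(50)]
feats = np.array([rms(w) for w in rest + active])
labels = np.array([0] * 50 + [1] * 50)

# A decision stump: learn the threshold as the midpoint of class means.
thr = (feats[labels == 0].mean() + feats[labels == 1].mean()) / 2
pred = (feats > thr).astype(int)
print((pred == labels).mean())  # 1.0 on this well-separated toy data
```

Real sEMG windows overlap far more than this toy, which is why the study combines several features and subject-specific training; but the learned threshold remains directly readable, which is the white-box property.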
Introduction Identifying cost-effective, non-invasive biomarkers of Alzheimer’s disease (AD) is a clinical and research priority. Speech data are easy to collect, and studies suggest they can identify those with AD. We do not yet know whether speech features can predict AD biomarkers in a preclinical population. Methods and analysis The Speech on the Phone Assessment (SPeAk) study is a prospective observational study. SPeAk recruits participants aged 50 years and over who have previously completed studies with AD biomarker collection. Participants complete a baseline telephone assessment, including spontaneous speech and cognitive tests. A 3-month visit will repeat the cognitive tests with a conversational artificial intelligence bot. Participants complete acceptability questionnaires after each visit. Participants are randomised to receive their cognitive test results either after each visit or only after they have completed the study. We will combine SPeAk data with AD biomarker data collected in a previous study and analyse the correlations between extracted speech features and AD biomarkers. The outcome of this analysis will inform the development of an algorithm for predicting AD risk from speech features. Ethics and dissemination This study has been approved by the Edinburgh Medical School Research Ethics Committee (REC reference 20-EMREC-007). All participants will provide informed consent before completing any study-related procedures; participants must have the capacity to consent to participate in this study. Participants may find that the tests, or receiving their scores, cause anxiety or stress; previous exposure to similar tests may make them more familiar and reduce this anxiety. The study information will include signposting in case of distress. Study results will be disseminated to study participants, presented at conferences and published in a peer-reviewed journal. No study participants will be identifiable in the study results.
A better knowledge of tree vegetative growth phenology and its relationship to environmental variables is crucial to understanding forest growth dynamics and how climate change may affect them. Less studied than that of reproductive structures, vegetative growth phenology focuses primarily on the analysis of growing shoots, from budburst to leaf fall. In temperate regions, low winter temperatures impose a cessation of vegetative shoot growth and lead to a well-known annual growth cycle pattern for most species. The humid tropics, on the other hand, show less seasonality and contain many more tree species, leading to a diversity of patterns that is still poorly known and understood. This study aims to advance knowledge in this area, focusing specifically on herbarium scans, as herbaria offer the promise of tracking phenology over long periods of time. However, such a study requires a large number of shoots in order to draw statistically relevant conclusions. We propose to investigate the extent to which deep learning can help detect and type-classify these relatively rare vegetative structures in herbarium collections. Our results demonstrate the relevance of using herbarium data in vegetative phenology research as well as the potential of deep learning approaches for growing shoot detection.
We examine the emergence of objectivity for quantum many-body systems in a setting without an environment to decohere the system’s state, but where observers can only access small fragments of the whole system. We extend the result of Reidel (2017) to the case where the system is in a mixed state, measurements are performed through POVMs, and imprints of the outcomes are imperfect. We introduce a new condition on states and measurements to recover full classicality for any number of observers. We further show that evolutions of quantum many-body systems can be expected to yield states that satisfy this condition whenever the corresponding measurement outcomes are redundant.
1,608 members
Fabien Lucien Gandon
  • WIMMICS - Web-Instrumented Man-Machine Interactions, Communities and Semantics Research Team
Marcus Denker
  • RMOD - Analyses and Languages Constructs for Object-Oriented Application Evolution Research Team
Fabien Lotte
  • POTIOC - Popular Interaction with 3d Content Research Team
Herve Rivano
  • URBANET - Réseaux Capillaires Urbains Research Team
Md Sahidullah
  • MULTISPEECH - Speech Modeling for Facilitating Oral-Based Communication
Domaine de Voluceau, Rocquencourt - BP 105, 78153, Le Chesnay, France
Head of institution
Bruno Sportisse