National Institute for Research in Computer Science and Control
Recent publications
In this paper, we are interested in comparing solutions to stochastic Volterra equations for the convex order on the space of continuous $\mathbb{R}^d$-valued paths and for the monotonic convex order when d = 1. Even if these solutions are in general neither semimartingales nor Markov processes, we are able to exhibit conditions on their coefficients enabling the comparison. Our approach consists in first comparing their Euler schemes and then taking the limit as the time step vanishes. We consider two types of Euler schemes, depending on the way the Volterra kernels are discretised. The conditions ensuring the comparison are slightly weaker for the first scheme than for the second, and it is the other way around for convergence. Moreover, we weaken the integrability required of the starting values in the existence and convergence results from the literature, so that only finite first moments need to be assumed, which is the natural framework for convex ordering.
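For readers unfamiliar with the setting, a generic stochastic Volterra equation and one natural Euler discretisation of it (evaluating the kernels at the grid points) can be sketched as follows; the notation is illustrative and need not match the paper's:

\[ X_t = X_0 + \int_0^t K_1(t,s)\, b(X_s)\, \mathrm{d}s + \int_0^t K_2(t,s)\, \sigma(X_s)\, \mathrm{d}W_s, \qquad t \in [0,T], \]
\[ \bar X_{t_{k+1}} = X_0 + h \sum_{j=0}^{k} K_1(t_{k+1}, t_j)\, b(\bar X_{t_j}) + \sum_{j=0}^{k} K_2(t_{k+1}, t_j)\, \sigma(\bar X_{t_j}) \bigl( W_{t_{j+1}} - W_{t_j} \bigr), \qquad t_k = kh,\ h = T/N. \]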
Digital twins represent a key technology for precision health. Medical digital twins consist of computational models that represent the health state of individual patients over time, enabling optimal therapeutics and forecasting patient prognosis. Many health conditions involve the immune system, so it is crucial to include its key features when designing medical digital twins. The immune response is complex and varies across diseases and patients, and its modelling requires the collective expertise of the clinical, immunology, and computational modelling communities. This review outlines the initial progress on immune digital twins and the various initiatives to facilitate communication between interdisciplinary communities. We also outline the crucial aspects of an immune digital twin design and the prerequisites for its implementation in the clinic. We propose some initial use cases that could serve as “proof of concept” regarding the utility of immune digital technology, focusing on diseases with a very different immune response across spatial and temporal scales (minutes, days, months, years). Lastly, we discuss the use of digital twins in drug discovery and point out emerging challenges that the scientific community needs to collectively overcome to make immune digital twins a reality.
Coq is built around a well-delimited kernel that performs type checking for definitions in a variant of the Calculus of Inductive Constructions (CIC). Although the metatheory of CIC is very stable and reliable, the correctness of its implementation in Coq is less clear. Indeed, implementing an efficient type checker for CIC is a rather complex task, and many parts of the code rely on implicit invariants which can easily be broken by further evolution of the code. As a result, on average one critical bug has been found every year in Coq. This paper presents the first implementation of a type checker for the kernel of Coq (without the module system, template polymorphism and η-conversion) which is proven sound and complete in Coq with respect to its formal specification. Note that, because of Gödel's second incompleteness theorem, there is no hope of fully proving the soundness of the specification of Coq inside Coq (in particular strong normalization), but it is possible to prove the correctness and completeness of the implementation assuming the soundness of the specification, thus moving from a trusted code base (TCB) to a trusted theory base (TTB) paradigm. Our work is based on the MetaCoq project, which provides meta-programming facilities to work with terms and declarations at the level of the kernel. We verify a relatively efficient type checker based on the specification of the typing relation of the Polymorphic, Cumulative Calculus of Inductive Constructions (PCUIC) at the basis of Coq. It is worth mentioning that during the verification process we found a source of incompleteness in Coq's official type checker, which has since been fixed in Coq 8.14 thanks to our work. In addition to the kernel implementation, another essential feature of Coq is the so-called extraction mechanism: the production of executable code in functional languages from Coq definitions. We present a verified version of this subtle type and proof erasure step, thereby enabling the verified extraction of a safe type checker for Coq in the future.
A correct evaluation of scheduling algorithms and a good understanding of their optimization criteria are key components of resource management in HPC. In this work, we discuss the bias and limitations of the most frequent optimization metrics from the literature. We provide elements on how to evaluate performance when studying HPC batch scheduling. We experimentally demonstrate these limitations by focusing on two use cases: a study of the impact of runtime estimates on scheduling performance, and the reproduction of a recent high-impact work that designed an HPC batch scheduler based on a network trained with reinforcement learning. We demonstrate that focusing on a quantitative optimization criterion (“our work improves the literature by X%”) may hide extremely important caveats, to the point that the results obtained run counter to the actual goals of the authors. Key findings show that mean bounded slowdown and mean response time are hazardous for a purely quantitative analysis in the context of HPC. Despite some limitations, utilization appears to be a good objective. We propose to complement it with the standard deviation of the throughput in some pathological cases. Finally, we argue for a wider use of area-weighted response time, which we find to be a very relevant objective.
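For reference, the objectives discussed above can be written as follows for a job j with wait time w_j, runtime p_j and resource request q_j, on a platform with m cores observed over [0, T] (notation is ours; τ is the usual bounded-slowdown threshold, e.g. 10 s):

\[ R_j = w_j + p_j, \qquad \mathrm{BSLD}_j = \max\!\left( \frac{w_j + p_j}{\max(p_j, \tau)},\, 1 \right), \]
\[ U = \frac{\sum_j q_j\, p_j}{m\, T}, \qquad \mathrm{AWRT} = \frac{\sum_j q_j\, p_j\, (w_j + p_j)}{\sum_j q_j\, p_j}. \]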
Insect biomass is declining globally, likely driven by climate change and pesticide use, yet systematic studies on the effects of various chemicals remain limited. In this work, we used a chemical library of 1024 molecules—covering insecticides, herbicides, fungicides, and plant growth inhibitors—to assess the impact of sublethal pesticide doses on insects. In Drosophila melanogaster, 57% of chemicals affected larval behavior, and a higher proportion compromised long-term survivability. Exposure to sublethal doses also induced widespread changes in the phosphoproteome, as well as in development and reproduction. The negative effects of agrochemicals were amplified when the temperature was increased. We observed similar behavioral changes across multiple insect species, including mosquitoes and butterflies. These findings suggest that widespread sublethal pesticide exposure can alter insect behavior and physiology, threatening long-term population survival.
Truncated differential cryptanalyses were introduced by Knudsen in 1994. They are a well-known family of attacks that has arguably received less attention than some other variants of differential attacks. This paper gives some new insights into the theory of truncated differential attacks, specifically the conditions of provable security of SPN ciphers with MDS diffusion matrices against this type of attack. Furthermore, our study extends to various versions within the QARMA family of block ciphers, unveiling the only valid instances of single-tweak attacks on 10-round QARMAv1-64, 10-round QARMAv1-128, and 10- and 11-round QARMAv2-64. These attacks benefit from the optimal truncated differential distinguishers as well as some evolved key-recovery techniques.
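As background, a truncated differential does not fix exact input and output differences but only sets of differences (typically patterns of active cells). One common way to define its probability for a cipher E on n-bit blocks, written here for illustration rather than as the paper's exact definition, is

\[ \mathrm{DP}\bigl( \mathcal{D}_{\mathrm{in}} \to \mathcal{D}_{\mathrm{out}} \bigr) = \frac{1}{|\mathcal{D}_{\mathrm{in}}|} \sum_{\delta \in \mathcal{D}_{\mathrm{in}}} \Pr_{x} \bigl[ E(x) \oplus E(x \oplus \delta) \in \mathcal{D}_{\mathrm{out}} \bigr], \]

and a distinguisher exists when this probability deviates noticeably from the roughly \( |\mathcal{D}_{\mathrm{out}}| / (2^n - 1) \) expected of a random permutation.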
The electrical properties of rocks are widely used in the geophysical exploration of natural resources, such as minerals, hydrocarbons and groundwater. In mining exploration, the primary goal is to map electrically anomalous geological features associated with different mineralization styles, such as clay alteration haloes, metal oxides and sulphides, weathered crystalline rocks or fractured zones. As such, the reconciliation of geophysical data with geological information (geochemistry, mineralogy, texture and lithology) is a critical step and can be performed based on petrophysical properties collected either on core samples or as downhole measurements. Based on data from 189 diamond drill cores collected for uranium exploration in the Athabasca Basin (Saskatchewan, Canada), this paper presents a case study of reconciling downhole resistivity probing with core sample geochemistry and short-wave infrared spectroscopy (350–2500 nm) through three successive steps: (i) multivariate analysis of resistivity and other petrophysical properties (porosity, density) against geochemical and infrared spectroscopy information, to characterize the electrical properties of rocks with respect to other physical parameters; (ii) a machine-learning workflow integrating geochemistry and spectral signatures in order to infer synthetic resistivity logs along with uncertainties: the best model in the basin was a Light Gradient-Boosting Machine on pairwise log-ratios, which yielded a coefficient of determination R² = 0.80 (root mean square error = 0.16), while in the basement, support vector regression with data fusion of infrared spectroscopy and pairwise log-ratios on geochemistry yielded R² = 0.82 (root mean square error = 0.35); (iii) the best model was then applied to an area excluded from the original dataset (the Getty Russell property) in order to infer synthetic resistivity logs for that zone. Software code is publicly available. This workflow can be re-used for the valorization of legacy datasets.
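As an illustration of step (ii), a minimal sketch of such a workflow (not the authors' code; the element columns, hyperparameters and placeholder data below are made up) could look like the following:

# Pairwise log-ratio features on geochemistry, then a gradient-boosting
# regressor to predict a (log-)resistivity target.
from itertools import combinations
import numpy as np
import pandas as pd
from lightgbm import LGBMRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

def pairwise_log_ratios(geochem: pd.DataFrame) -> pd.DataFrame:
    """Compute log(x_a / x_b) for every pair of (strictly positive) element columns."""
    feats = {f"log_{a}_{b}": np.log(geochem[a] / geochem[b])
             for a, b in combinations(geochem.columns, 2)}
    return pd.DataFrame(feats, index=geochem.index)

# Placeholder data: element concentrations per sample and a synthetic target.
rng = np.random.default_rng(0)
geochem = pd.DataFrame(rng.uniform(0.1, 100.0, size=(500, 6)),
                       columns=["SiO2", "Al2O3", "Fe2O3", "MgO", "K2O", "U"])
y = rng.normal(2.0, 0.5, size=500)  # hypothetical log10 resistivity

X = pairwise_log_ratios(geochem)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

model = LGBMRegressor(n_estimators=500, learning_rate=0.05)
model.fit(X_tr, y_tr)
pred = model.predict(X_te)
print("R2:", r2_score(y_te, pred), "RMSE:", mean_squared_error(y_te, pred) ** 0.5)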
Net-winged midge larvae (Blephariceridae) are known for their remarkable ability to adhere to and crawl on the slippery surfaces of rocks in fast-flowing and turbulent alpine streams, waterfalls, and rivers. This performance can be attributed to the larvae's powerful ventral suckers. In this article, we first develop a theoretical model of the piston-driven sucker that accounts for the lubricated state of the contact area. We then implement a piston-driven robotic sucker featuring a V-shaped notch to explore the adhesion-sliding mechanism. Each biomimetic larval sucker has the unique feature of an anterior-facing V-shaped notch on its soft disc rim; it slides along the shear direction while the entire disc surface maintains powerful adhesion on the benthic substrate, just like its biological counterpart. We found that this biomimetic sucker can reversibly switch between "high friction" (4.26 ± 0.34 kPa) and "low friction" (0.41 ± 0.02 kPa) states due to the piston movement, resulting in a frictional enhancement of up to 93.9%. We also elucidate the frictional anisotropy (forward/backward force ratio: 0.81) caused by the V-shaped notch. To demonstrate the robotic application of this adhesion-sliding mechanism, we designed an underwater crawling robot, Adhesion Sliding Robot-1 (ASR-1), equipped with two biomimetic ventral suckers. This robot can successfully crawl on a variety of substrates, such as curved surfaces, sidewalls, and overhangs, and against turbulent water currents with a flow speed of 2.4 m/s. In addition, we implemented a fixed-wing aircraft, Adhesion Sliding Robot-2 (ASR-2), featuring midge-larva-inspired suckers, enabling the transition from rapid water-surface gliding to adhesion sliding in an aquatic environment. This adhesion-sliding mechanism inspired by net-winged midge larvae may pave the way for future robots with long-term observation, monitoring, and tracking capabilities in a wide variety of aerial and aquatic environments.
Deep neural networks (DNNs) are fundamental to modern applications like face recognition and autonomous driving. However, their security is a significant concern due to various integrity risks, such as backdoor attacks. In these attacks, compromised training data introduce malicious behaviors into the DNN, which can be exploited during inference or deployment. This paper presents a novel game-theoretic approach to model the interactions between an attacker and a defender in the context of a DNN backdoor attack. The contribution of this approach is multifaceted. First, it models the interaction between the attacker and the defender using a game-theoretic framework. Second, it designs a utility function that captures the objectives of both parties, integrating clean data accuracy and attack success rate. Third, it reduces the game model to a two-player zero-sum game, allowing for the identification of Nash equilibrium points through linear programming and a thorough analysis of equilibrium strategies. Additionally, the framework provides varying levels of flexibility regarding the control afforded to each player, thereby representing a range of real-world scenarios. Through extensive numerical simulations, the paper demonstrates the validity of the proposed framework and identifies insightful equilibrium points that guide both players in following their optimal strategies under different assumptions. The results indicate that fully using attack or defense capabilities is not always the optimal strategy for either party. Instead, attackers must balance inducing errors and minimizing the information conveyed to the defender, while defenders should focus on minimizing attack risks while preserving benign sample performance. These findings underscore the effectiveness and versatility of the proposed approach, showcasing optimal strategies across different game scenarios and highlighting its potential to enhance DNN security against backdoor attacks.
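To make the zero-sum reduction concrete, here is a small illustrative sketch (not the paper's code) that computes the row player's maximin mixed strategy of a two-player zero-sum game by linear programming with SciPy; the payoff matrix below is made up:

import numpy as np
from scipy.optimize import linprog

def zero_sum_equilibrium(A: np.ndarray):
    """Return (game value, row strategy) for payoff matrix A (row player maximizes)."""
    m, n = A.shape
    # Variables: x_1..x_m (mixed strategy) and v (game value). Minimize -v.
    c = np.zeros(m + 1); c[-1] = -1.0
    # For every column j: v - sum_i A[i, j] * x_i <= 0
    A_ub = np.hstack([-A.T, np.ones((n, 1))])
    b_ub = np.zeros(n)
    # Probabilities sum to 1
    A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])
    b_eq = np.array([1.0])
    bounds = [(0, None)] * m + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[-1], res.x[:m]

# Hypothetical payoffs (rows: attacker actions, columns: defender actions).
A = np.array([[0.2, 0.7, 0.4],
              [0.6, 0.1, 0.5]])
value, attacker_strategy = zero_sum_equilibrium(A)
print(value, attacker_strategy)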
Background: Interpretability is a topical question in recommender systems, especially in healthcare applications. An interpretable classifier quantifies the importance of each input feature for the predicted item-user association in a non-ambiguous fashion. Results: We introduce the novel Joint Embedding Learning-classifier for improved Interpretability (JELI). By combining the training of a structured collaborative-filtering classifier with an embedding learning task, JELI predicts new user-item associations based on jointly learned item and user embeddings while providing feature-wise importance scores. Therefore, JELI flexibly allows the introduction of priors on the connections between users, items, and features. In particular, JELI simultaneously (a) learns feature, item, and user embeddings; (b) predicts new item-user associations; and (c) provides importance scores for each feature. Moreover, JELI instantiates a generic approach to training recommender systems by encoding generic graph-regularization constraints. Conclusions: We show that the joint training approach yields a gain in the predictive power of the downstream classifier, that JELI can recover feature-association dependencies, and that it requires fewer parameters than baselines on synthetic and drug-repurposing data sets.
We present StarMalloc, a verified, efficient, security-oriented, and concurrent memory allocator. Using the Steel separation logic framework, we show how to specify and verify a multitude of low-level patterns and delicate security mechanisms, by relying on a combination of dependent types, SMT, and modular abstractions to enable efficient verification. We produce a verified artifact, in C, that implements the entire API surface of an allocator, and as such works as a drop-in replacement for real-world projects, notably the Firefox browser. As part of StarMalloc, we develop several generic data structures and proof libraries directly reusable in future systems verification projects. We also extend the Steel toolchain to express several low-level idioms that were previously missing. Finally, we show that StarMalloc exhibits competitive performance by evaluating it against 10 state-of-the-art memory allocators, and against a variety of real-world projects, such as Redis, the Lean compiler, and the Z3 SMT solver.
Type inference is essential for statically-typed languages such as OCaml and Haskell. It can be decomposed into two (possibly interleaved) phases: a generator converts programs to constraints; a solver decides whether a constraint is satisfiable. Elaboration, the task of decorating a program with explicit type annotations, can also be structured in this way. Unfortunately, most machine-checked implementations of type inference do not follow this phase-separated, constraint-based approach. Those that do are rarely executable, lack effectful abstractions, and do not include elaboration. To close the gap between common practice in real-world implementations and mechanizations inside proof assistants, we propose an approach that enables modular reasoning about monadic constraint generation in the presence of elaboration. Our approach includes a domain-specific base logic for reasoning about metavariables and a program logic that allows us to reason abstractly about the meaning of constraints. To evaluate it, we report on a machine-checked implementation of our techniques inside the Coq proof assistant. As a case study, we verify both soundness and completeness for three elaborating type inferencers for the simply typed lambda calculus with Booleans. Our results are the first demonstration that type inference algorithms can be verified in the same form as they are implemented in practice: in an imperative style, modularly decomposed into constraint generation and solving, and delivering elaborated terms to the remainder of the compiler chain.
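The phase separation described above can be illustrated at toy scale (this sketch is in Python rather than Coq, and is not the paper's development): a generator emits equality constraints over metavariables for simply typed lambda-calculus terms with Booleans, and a solver resolves them by first-order unification.

from itertools import count

# Types: BOOL, metavariables ('?', n), and arrows ('->', dom, cod).
fresh = count()
def meta(): return ('?', next(fresh))
def arrow(a, b): return ('->', a, b)
BOOL = ('bool',)

def generate(term, env, ty, constraints):
    """Emit constraints forcing `term` to have type `ty` under `env`."""
    tag = term[0]
    if tag == 'var':
        constraints.append((env[term[1]], ty))
    elif tag == 'bool':                 # Boolean literal
        constraints.append((BOOL, ty))
    elif tag == 'lam':                  # ('lam', x, body)
        a, b = meta(), meta()
        constraints.append((arrow(a, b), ty))
        generate(term[2], {**env, term[1]: a}, b, constraints)
    elif tag == 'app':                  # ('app', f, arg)
        a = meta()
        generate(term[1], env, arrow(a, ty), constraints)
        generate(term[2], env, a, constraints)

def resolve(t, subst):
    while t[0] == '?' and t in subst:
        t = subst[t]
    return t

def unify(constraints):
    """First-order unification (no occurs check, for brevity)."""
    subst = {}
    while constraints:
        s, t = constraints.pop()
        s, t = resolve(s, subst), resolve(t, subst)
        if s == t:
            continue
        if s[0] == '?':
            subst[s] = t
        elif t[0] == '?':
            subst[t] = s
        elif s[0] == '->' and t[0] == '->':
            constraints += [(s[1], t[1]), (s[2], t[2])]
        else:
            raise TypeError(f"cannot unify {s} and {t}")
    return subst

# Infer the type of (\x. x) true
root, cs = meta(), []
generate(('app', ('lam', 'x', ('var', 'x')), ('bool', True)), {}, root, cs)
print(resolve(root, unify(cs)))   # -> ('bool',)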
The paper proposes a new approach to controlling the process of general education. Digital technology tools are used to form spaces of goals, tasks and learning activities, and to record the educational process of each student. Artificial intelligence tools are used to choose a student's personal goals and ways to achieve them, and to make forecasts and recommendations to participants in the educational process. Big data from the entire education system and large language models are used. The effects of the approach include ensuring the success of each student, objective assessment of the work of teachers and schools, and an adequate transition to higher education.
Harvesting radio frequency (RF) energy is an attractive solution for powering ultralow-power (ULP) devices. However, harvesting efficiently from different RF power levels is still a challenge. This work presents a wide power range RF harvester composed of a rectifier circuit and a power management integrated circuit (PMIC). Such a wide range is achieved by associating two independent rectifiers in parallel, one specifically optimized for low RF powers. In the association, the inductive matching technique is employed in each rectifier. It consists of an inductive branch comprised of a lumped inductor and a short-circuited stub whose values are selected to minimize ohmic losses. The operation range is controlled via the inductive branch values and by following the optimal load of the rectifier circuit with the PMIC, based on the input power level. The harvester is designed at 889 MHz and manufactured by using off-the-shelf components. Measured efficiencies of 29% and 63% are obtained at −20 and 0 dBm, respectively, demonstrating the wide power range with high efficiency at low powers. At −20 dBm, the rectifier association keeps the PMIC active and delivers hundreds of nW at a regulated voltage of 1.8 V, which is suitable for ULP devices.
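For context, input powers quoted in dBm convert to watts as

\[ P\,[\mathrm{W}] = 1\ \mathrm{mW} \times 10^{P[\mathrm{dBm}]/10}, \qquad -20\ \mathrm{dBm} = 10\ \mu\mathrm{W}, \qquad 0\ \mathrm{dBm} = 1\ \mathrm{mW}, \]

so the two measurement points above correspond to 10 µW and 1 mW of available RF power at the rectifier input.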
Principal component analysis (PCA) is a popular method for dimension reduction and has attracted unfailing interest for decades. More recently, kernel PCA (KPCA) has emerged as an extension of PCA but, despite its use in practice, a sound theoretical understanding of KPCA is missing. We contribute several empirical generalisation bounds on the efficiency of KPCA, involving the empirical eigenvalues of the kernel Gram matrix. Our bounds are derived through the use of probably approximately correct (PAC)-Bayes theory and highlight the importance of some desirable properties of datasets, expressed as variance-type terms, to attain fast rates, achievable for a wide class of kernels.
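A brief numpy sketch of the quantities the bounds are stated in terms of, namely the empirical eigenvalues of the (centred) kernel Gram matrix used by kernel PCA; the Gaussian kernel and the toy data are illustrative choices, not the paper's experimental setup:

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))          # n samples in R^5
gamma = 0.5

# Gram matrix K_ij = k(x_i, x_j) with a Gaussian (RBF) kernel.
sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
K = np.exp(-gamma * sq_dists)

# Centre the kernel in feature space: K_c = H K H with H = I - (1/n) 11^T.
n = K.shape[0]
H = np.eye(n) - np.ones((n, n)) / n
Kc = H @ K @ H

# Empirical eigenvalues in descending order; the share of "variance" captured
# by a k-dimensional KPCA projection is the mass of the top-k eigenvalues.
eigvals = np.linalg.eigvalsh(Kc)[::-1]
k = 10
print("top-10 eigenvalue mass:", eigvals[:k].sum() / eigvals.sum())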
1,750 members
Fabien Lucien Gandon
  • WIMMICS - Web-Instrumented Man-Machine Interactions, Communities and Semantics Research Team
Marcus Denker
  • RMOD - Analyses and Language Constructs for Object-Oriented Application Evolution Research Team
Fabien Lotte
  • POTIOC - Popular Interaction with 3D Content Research Team
Herve Rivano
  • URBANET - Urban Capillary Networks Research Team
Angelos Mantzaflaris
  • AROMATH - AlgebRa, geOmetry, Modeling and AlgoriTHms
Information
Address
Le Chesnay, France
Head of institution
Bruno Sportisse