MINES ParisTech
  • Paris, Ile-de-France, France
Recent publications
We study the interaction between the holder of a standard‐essential patent (SEP) and two downstream firms using the patented technology to design standard‐compliant products. The SEP holder approaches the downstream firms simultaneously in the shadow of patent litigation and is subject to fair, reasonable, and non‐discriminatory licensing requirements. We show that the patent holder faces a litigation credibility constraint and a license acceptability constraint when setting its licensing terms. For patents of intermediate strength, there is no royalty that allows the patent holder to reconcile these constraints. Consequently, it cannot license its technology and must go to court against infringers. We show that the availability of an injunction improves the patent holder's ability to license its technology, but it tends to inflate the royalty rate for implementers.
Background: Exploring the function or the developmental history of cells in various organisms provides insights into a given cell type's core molecular characteristics and putative evolutionary mechanisms. Numerous computational methods now exist for analyzing single-cell data and identifying cell states. These methods mostly rely on the expression of genes considered markers of a given cell state. Yet there is a lack of scRNA-seq computational tools for studying the evolution of cell states, particularly how cell states change their molecular profiles. Such changes can include novel gene activation or the novel deployment of programs already existing in other cell types, known as co-option. Results: Here we present scEvoNet, a Python tool for predicting cell type evolution in cross-species or cancer-related scRNA-seq datasets. ScEvoNet builds a confusion matrix of cell states and a bipartite network connecting genes and cell states. It allows a user to obtain the set of genes shared by the characteristic signatures of two cell states, even between distantly related datasets. These genes can serve as indicators of either evolutionary divergence or co-option occurring during organism or tumor evolution. Our results on cancer and developmental datasets indicate that scEvoNet is a helpful tool both for the initial screening of such genes and for measuring cell state similarities. Conclusion: The scEvoNet package is implemented in Python and is freely available from https://github.com/monsoro/scEvoNet. Utilizing this framework to explore the continuum of transcriptome states between developmental stages and species will help explain cell state dynamics.
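The core idea, a classifier-based confusion matrix between the cell states of two datasets, can be sketched as follows; the function names and the choice of classifier are our illustrative assumptions, not scEvoNet's actual API:

```python
# Illustrative sketch of a cross-dataset cell-state confusion matrix;
# names and classifier choice are assumptions, not scEvoNet's actual API.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def cell_state_confusion(expr_a, states_a, expr_b, states_b):
    """Train on dataset A's cell states over shared genes, score dataset B's
    cells, and average class probabilities per B state."""
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(expr_a, states_a)
    proba = clf.predict_proba(expr_b)              # cells_B x states_A
    b_states = np.unique(states_b)
    conf = np.vstack([proba[np.asarray(states_b) == s].mean(axis=0)
                      for s in b_states])
    return conf, b_states, clf.classes_            # rows: B states, cols: A states
```

A gene-to-state bipartite network in the spirit described above could then be read off the per-state feature importances of such a classifier.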
Cloud computing is currently one of the prime choices in the computing infrastructure landscape. In addition to advantages such as the pay-per-use billing model and resource elasticity, there are technical benefits regarding heterogeneity and large-scale configuration. Alongside the classical concern for performance, for example time, space, and energy, there is interest in the financial cost that may come from budget constraints. Given scalability considerations and the pricing model of traditional public clouds, a reasonable output for an optimization strategy is the most suitable configuration of virtual machines to run a specific workload. From the perspective of runtime and monetary cost optimization, we adapt a Hadoop application execution cost model from the literature to Spark applications modeled with the MapReduce paradigm. We evaluate our optimizer model by executing an improved version of the Diff Sequences Spark application to perform SARS-CoV-2 coronavirus pairwise sequence comparisons on AWS EC2 virtual machine instances. Our model outperformed 80% of the random resource selection scenarios. By employing only spot worker nodes, which are exposed to revocation, rather than on-demand workers, we obtained an average monetary cost reduction of 35.66% with a slight runtime increase of 3.36%.
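To make the runtime/cost trade-off concrete, here is a toy sketch assuming the simple scaling law runtime(n) = t_serial + t_parallel/n; the prices, times, and deadline are illustrative placeholders, not the paper's model parameters:

```python
# Toy sketch of choosing the cheapest VM count under a runtime deadline,
# assuming runtime(n) = t_serial + t_parallel / n. All numbers illustrative.
def runtime_h(n_workers, t_serial=0.2, t_parallel=8.0):
    return t_serial + t_parallel / n_workers

def monetary_cost(n_workers, price_per_h):
    return n_workers * price_per_h * runtime_h(n_workers)

def cheapest_config(price_per_h, deadline_h, max_workers=64):
    """Cheapest worker count that still meets the runtime deadline."""
    feasible = [n for n in range(1, max_workers + 1)
                if runtime_h(n) <= deadline_h]
    return min(feasible, key=lambda n: monetary_cost(n, price_per_h))

for kind, price in {"on_demand": 0.192, "spot": 0.058}.items():
    n = cheapest_config(price, deadline_h=1.0)
    print(f"{kind}: {n} workers, ${monetary_cost(n, price):.2f}")
```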
The thermal regime of streams is a relevant driver of their ecological functioning. As this regime is presently subjected to numerous alterations (among others, impoundments and climate change), it is important to study both their effects and the potential for recovery. We therefore investigated surface and hyporheic water temperature along a small headwater stream across contrasting environmental contexts: forest landscape, open grassland landscape without riparian vegetation, several artificial run-of-the-river impoundments, and one discharge point of a by-pass impoundment. The main objective was to study the influence of these contrasting contexts on surface and subsurface water temperature at a local scale. The contexts were expected to affect both surface and hyporheic thermal regimes at a local scale, and differences were expected between surface and hyporheic regimes as well as between geological contexts. Sensors at multiple stations monitored stream and hyporheic temperature along the stream, while comparison with an adjacent reference stream provided a benchmark for the surface water thermal regime. Impoundments and landscapes significantly influenced the stream thermal regime at a local scale (impoundments created an average temperature increase of up to +3.7°C). Their effect on the hyporheic thermal regime was less marked than those generated by solar radiation or geological features. The hyporheic thermal regime differed from that of the stream by a delay in temperature dynamics (up to 18 h) and a temperature decrease (on average up to -7°C between surface and hyporheic water). These coupled effects create a mosaic of thermal habitats, which could be used for river biodiversity preservation and restoration.
Building energy efficiency is a key factor in reducing CO2 emissions. For this reason, European Union (EU) member states have developed thermal regulations to ensure building thermal performance. Compliance is often assessed using results from building simulation software during the design stage. However, the actual thermal performance can deviate significantly from the predicted one, a difference known as the energy performance gap. Accurate indicators of actual thermal performance are therefore a valuable tool to guarantee building quality. These indicators, including the heat transfer coefficient (HTC) and the heat loss coefficient (HLC), can be estimated by applying in situ methods. As multi-family housing and tertiary-sector buildings form an important part of the building stock, mature methods to measure their thermal performance are needed. This paper presents a short-duration method for assessing the HTC of large building typologies using a sampling approach. The method was applied to a four-storey building model under different conditions to study its limits and to improve indicator bias and uncertainty. Indicator quality was strongly influenced by the external weather conditions, the temperature variation during the protocol, and the heat exchange with adjacent apartments. Under winter conditions and with stable indoor temperatures, the method had high accuracy when the protocol was applied for half a day. It is recommended that the protocol be used over two days to improve indicator quality under less favorable test conditions.
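For orientation, the HTC is defined by the quasi-steady-state heat balance below (a textbook relation underlying such in situ methods, not the paper's full sampling protocol):

\[
\mathrm{HTC} \approx \frac{\bar{P}_{\mathrm{heating}} + \bar{P}_{\mathrm{internal}}}{\bar{T}_{\mathrm{in}} - \bar{T}_{\mathrm{ext}}} \quad [\mathrm{W\,K^{-1}}],
\]

where the numerator is the average heating power plus internal gains over the test window and the denominator is the mean indoor-outdoor temperature difference, with solar gains assumed negligible or corrected for.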
While many works exploiting an existing Lie group structure have been proposed for state estimation, in particular the invariant extended Kalman filter (IEKF), few papers address the construction of a group structure that allows casting a given system into the framework of invariant filtering. In this article, we introduce a large class of systems encompassing most problems involving a navigating vehicle encountered in practice. For those systems we introduce a novel methodology that systematically provides a group structure for the state space, including vectors of the body frame such as biases. We use it to derive observers having properties akin to those of linear observers or filters. The proposed unifying and versatile framework encompasses all systems where the IEKF has proved successful, improves the state-of-the-art "imperfect" IEKF for inertial navigation with sensor biases, and allows addressing novel examples, such as GNSS antenna lever arm estimation.
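As context, invariant filtering hinges on the group-affine property of the dynamics (in the formulation of earlier IEKF work; the notation here is ours): a dynamics f_u on a matrix Lie group G is group-affine when

\[
f_u(\chi_1 \chi_2) = f_u(\chi_1)\,\chi_2 + \chi_1\, f_u(\chi_2) - \chi_1\, f_u(\mathrm{Id})\,\chi_2
\quad \text{for all } \chi_1, \chi_2 \in G,
\]

which makes the invariant estimation error evolve autonomously, independently of the estimated trajectory. The article's contribution can be read as constructing a group structure under which a given system, including body-frame vectors such as biases, satisfies such a property.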
Various industries rely on numerical tools to simulate multiphase flows, owing to the wide occurrence of this phenomenon in nature, manufacturing processes, and the human body. However, the significant computational burden of such simulations directs research interest toward incorporating data-based approaches into the solution loop. Although these approaches have yielded significant results in various domains, their adoption in computational fluid dynamics is hampered by their casting aside of known governing laws, as well as by the natural incompatibility of many models with unstructured, irregular discretization spaces. This work proposes a coupling framework between a traditional finite element CFD solver and a deep learning model for tackling multiphase fluid flows without forgoing the benefits of physics-enriched traditional solvers. The tailored model architecture, along with the coupling framework, allows tackling the problem on a dynamically adapted, unstructured, irregular triangular mesh, thus dodging the limitations of traditional convolutional neural networks. Moreover, the ingredients that allowed the model to simulate the complex and computationally demanding Navier-Stokes equations, such as relying on a sequential validation dataset and exposing the training to noise inherited from the quality of the model's own inference, along with the proper choice of model inputs, are highlighted and elaborated throughout the paper. To the authors' knowledge, this work is the first to introduce a data-based, graph-based approach for solving multiphase flow problems with a level-set interface-capturing method.
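One of those ingredients, training the surrogate on its own rollouts so that inference-time error accumulation is seen during training, can be sketched generically; every name below is a stand-in, not the authors' code (their model is a graph network on an unstructured triangular mesh):

```python
# Generic sketch of rollout-based training that exposes the surrogate to the
# noise of its own predictions. All components are illustrative stand-ins.
import numpy as np

rng = np.random.default_rng(0)

def solver_step(phi):
    """Stand-in for one finite element solver step (the reference)."""
    return phi + 0.01 * rng.standard_normal(phi.shape)

def surrogate_step(phi, weights):
    """Stand-in for the learned model's one-step prediction."""
    return phi @ weights

def rollout_pairs(phi0, weights, steps=5):
    """Collect (input, target) pairs where each input is the surrogate's own
    previous output, so training sees inference-time error accumulation."""
    phi_pred, pairs = phi0, []
    for _ in range(steps):
        target = solver_step(phi_pred)                 # reference next state
        pairs.append((phi_pred, target))
        phi_pred = surrogate_step(phi_pred, weights)   # feed back prediction
    return pairs

# Usage with a toy field of 100 nodes and 4 features per node.
pairs = rollout_pairs(rng.standard_normal((100, 4)), np.eye(4) * 0.99)
```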
MASK‐air®, a validated mHealth app (Medical Device Regulation Class IIa), has enabled large observational implementation studies in over 58,000 people with allergic rhinitis and/or asthma. It can help address unmet patient needs in rhinitis and asthma care. MASK‐air® is a Good Practice of DG Santé on digitally enabled, patient‐centred care, and a candidate Good Practice of the OECD (Organisation for Economic Co‐operation and Development). MASK‐air® data have enabled novel phenotype discovery and characterisation, as well as novel insights into the management of allergic rhinitis. The data show that most rhinitis patients (i) are not adherent and do not follow guidelines, (ii) use as‐needed treatment, (iii) do not take medication when they are well, (iv) increase their treatment based on symptoms, and (v) do not use the recommended treatment. They also show that control (symptoms, work productivity, educational performance) is not always improved by medications. A combined symptom‐medication score (ARIA‐EAACI‐CSMS) has been validated for clinical practice and trials. The implications of these novel MASK‐air® results should lead to changes in the management of rhinitis and asthma.
This article provides a concise overview of some of the recent advances in the application of rough path theory to machine learning. Controlled differential equations (CDEs) are discussed as the key mathematical model describing the interaction of a stream with a physical control system. A collection of iterated integrals known as the signature naturally arises in the description of the response produced by such interactions. The signature comes equipped with a variety of powerful properties rendering it an ideal feature map for streamed data. We summarise recent advances in the symbiosis between deep learning and CDEs, studying the link with RNNs and culminating with the neural CDE model. We conclude with a discussion of signature kernel methods.
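In the standard formulation (our notation), a CDE driven by a stream X and the signature of X read:

\[
\mathrm{d}Y_t = f(Y_t)\,\mathrm{d}X_t, \qquad Y_0 = y_0,
\]
\[
S(X)_{[0,T]} = \Bigl(1,\ \int_0^T \mathrm{d}X_{t_1},\ \int_{0<t_1<t_2<T} \mathrm{d}X_{t_1}\otimes\mathrm{d}X_{t_2},\ \dots\Bigr),
\]

the response Y being a function of these iterated integrals. The neural CDE model then parametrises the vector field f by a neural network and learns it from data.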
This study aims to improve understanding of in situ water-gypsum interactions so as to better assess the development of dissolution cavities in areas where gypsum is present at depth. Different types of natural gypsum facies are tested: impure alabaster, saccharoidal gypsum, and gypsum in clay/carbonate matrices. An experimental protocol, based on an original leaching experiment under constant flow, is developed to study the effect of erosion and particle transport on gypsum dissolution. The flux of released particles is overall low and mostly composed of solid impurities (insolubles). The distribution of insolubles at the water-gypsum interface is found to have a significant impact on dissolution. Geochemical models are then used to investigate the influence of the most common minerals present in natural gypsum on the dissolution process and on the chemical composition of groundwater in contact with them. These findings, applied to in situ conditions, allow us to evaluate the relevance of the common use of a simple criterion, such as dissolved sulfate content or electrical conductivity, to identify a system saturated with respect to gypsum. Lastly, an effective recession rate, derived from the dissolution rate and evaluated directly from the gypsum saturation index of the groundwater, is used to determine the intensity of natural gypsum dissolution in the study area.
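A standard way to tie the dissolution rate to the saturation state, given here as a generic rate law rather than the paper's exact formulation, is:

\[
R = k\,(1-\Omega)^n, \qquad \Omega = \frac{\mathrm{IAP}}{K_{sp}},
\]

where R is the surface dissolution rate, k a kinetic constant, and Ω the gypsum saturation ratio of the water; an effective recession rate (in m/s) then follows by converting R through the molar volume of gypsum.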
Components of the so-called "multiple-barrier system", from the waste form to the biosphere, include a combination of waste containers, engineered barriers, and natural barriers. The Engineered Barrier System (EBS) is crucial for containment and isolation in a radioactive waste disposal system. The number, types, and assigned safety functions of the various engineered barriers depend, among other factors, on the chosen repository concept, the waste form, the radionuclide inventory of the waste, the selected host rock, and the hydrogeological and geochemical settings of the repository site. EBS properties will evolve over time in response to thermal, hydraulic, mechanical, radiological, and chemical gradients and to interactions between the various constituents of the barriers and the host rock. Assessing how these properties evolve over long time frames is therefore highly relevant for evaluating the performance of a repository system and for safety function evaluations in a safety case. For this purpose, mechanistic numerical models are increasingly used. Such models provide an excellent way to integrate, into a coherent framework, scientific understanding of coupled processes and their consequences for the properties of EBS materials. Their development and validation are supported by R&D actions at the European level. For example, within the HORIZON 2020 project BEACON (Bentonite mechanical evolution), numerical models have been developed, tested, and validated against experimental results in order to predict the evolution of the hydromechanical properties of bentonite during the saturation process. In relation to coupling with mechanics, WP16 MAGIC (chemo Mechanical AGIng of Cementitious materials) of the EURAD Joint Programming Initiative focuses on multi-scale chemo-mechanical modeling of cementitious materials that evolve under chemical perturbation. Integration of chemical evolution in models of varying complexity is a major issue tackled in WP2 ACED (Assessment of Chemical Evolution of ILW and HLW Disposal cells) of EURAD. WP4 DONUT (Development and improvement of numerical methods and tools for modeling coupled processes) of EURAD aims at developing and improving numerical methods and tools to integrate more complexity and coupling between processes. The combined progress of these projects at a pan-European level substantially improves the understanding of, and the capabilities for assessing, the long-term evolution of engineered barrier systems.
This article analyzed how the Residency in Family and Community Medicine (RFCM) in a capital of the Northern region of Brazil contributed to the training and to the current work process of its graduates. This is an exploratory, descriptive, cross-sectional study with a qualitative approach, focused on 31 graduates surveyed through electronic questionnaires with open-ended questions. The answers were interpreted using the thematic content analysis technique, yielding four empirical categories: training in Family and Community Medicine (FCM) in the graduate's work process; recognition and application of the attributes of Primary Health Care (PHC); strengths of the FCM training; and challenges of the specialty. The data show that the programs studied contribute to training for professional practice and to the profile of their graduates by strengthening and implementing the PHC attributes, extending even beyond the specialty.
The segmentation of tomographic images of battery electrodes is a crucial processing step that directly affects the results of material characterization and electrochemical simulation. However, manually labeling X-ray CT (XCT) images is time-consuming, and these images are generally difficult to segment with histogram-based methods. We propose a deep learning approach with an asymmetrical-depth encoder-decoder convolutional neural network (CNN) for real-world battery material datasets. This network achieves high accuracy while requiring small amounts of labeled data, and it predicts volumes of billions of voxels within a few minutes. When applying supervised machine learning to segment real-world data, ground truth is often absent, and segmentation results are usually justified qualitatively by visual judgement. We try to unravel this fuzzy definition of segmentation quality by identifying the uncertainty due to human bias diluted in the training data. Further CNN training using synthetic data shows the quantitative impact of such uncertainty on the determination of material properties. Nano-XCT datasets of various battery materials have been successfully segmented by training this neural network from scratch. We also show that transfer learning, which consists of reusing a well-trained network, can improve accuracy on a similar dataset.
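As a hedged illustration of the asymmetrical encoder-decoder idea, here is a minimal PyTorch sketch; the layer counts and channel sizes are ours, not the paper's architecture:

```python
# Minimal sketch of an encoder-decoder CNN with a deeper encoder than
# decoder, for per-pixel segmentation. Sizes are illustrative only.
import torch
import torch.nn as nn

class AsymEncDec(nn.Module):
    def __init__(self, n_classes=3):
        super().__init__()
        self.encoder = nn.Sequential(                 # deeper encoder path
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(                 # shallower decoder path
            nn.Upsample(scale_factor=4, mode="bilinear"),
            nn.Conv2d(64, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, n_classes, 1),              # per-pixel class logits
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# One grayscale 128x128 tomographic slice -> (1, 3, 128, 128) logits.
logits = AsymEncDec()(torch.randn(1, 1, 128, 128))
```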
The Teaching-Learning-Based Optimization (TLBO) algorithm, which mimics the teaching-learning process, is being extended to a broad range of applied optimization problems in the literature. This paper proposes an Advanced Teaching-Learning-Based Optimization (Ad-TLBO) algorithm to enhance the efficiency and performance of the original TLBO in terms of accuracy, convergence rate, and reliability. The advancement is obtained by modifying the initialization, the search approach, and the structure of the algorithm's two main phases in four steps, to improve exploration and exploitation capability. Efficiency comparisons are carried out on four challenges with various benchmark functions exhibiting multimodal, separable, differentiable, and continuous characteristics, and the results are compared with several intelligent optimization algorithms. The proposed algorithm outperforms all investigated optimization algorithms in terms of accuracy, convergence speed, and success in reaching acceptable solutions for the various benchmark functions.
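For reference, the two phases of the classical TLBO that Ad-TLBO modifies can be sketched as follows; this is a minimal sketch of the standard algorithm, not of the Ad-TLBO modifications:

```python
# Minimal sketch of classical TLBO (minimisation): a teacher phase pulling
# the population toward the best solution, then a peer-learning phase.
import numpy as np

rng = np.random.default_rng(0)

def tlbo_step(pop, f):
    """One TLBO iteration over a population of shape (n_learners, dim)."""
    n, d = pop.shape
    fit = np.apply_along_axis(f, 1, pop)
    # Teacher phase: move toward the best learner, away from the class mean.
    teacher = pop[fit.argmin()]
    tf = rng.integers(1, 3)                        # teaching factor in {1, 2}
    cand = pop + rng.random((n, d)) * (teacher - tf * pop.mean(axis=0))
    cand_fit = np.apply_along_axis(f, 1, cand)
    keep = cand_fit < fit
    pop = np.where(keep[:, None], cand, pop)
    fit = np.where(keep, cand_fit, fit)
    # Learner phase: each learner moves relative to a random peer.
    peer = rng.permutation(n)
    better = fit < fit[peer]                       # move away from worse peers
    step = np.where(better[:, None], pop - pop[peer], pop[peer] - pop)
    cand = pop + rng.random((n, d)) * step
    keep = np.apply_along_axis(f, 1, cand) < fit
    return np.where(keep[:, None], cand, pop)

# Usage: minimise the sphere function with 20 learners in 3 dimensions.
sphere = lambda x: float((x ** 2).sum())
pop = rng.uniform(-5, 5, (20, 3))
for _ in range(100):
    pop = tlbo_step(pop, sphere)
```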
The mapping of vertical and lateral variations in the physical properties of the few-meter-thick cover layer over near-surface aquifers is important for hydrogeological modeling, particularly for quantifying the recharge of groundwater systems. The first ground-based TDEM (time domain electromagnetic) survey over a small catchment (Avenelles, France) of the Orgeval watershed (Seine Basin) was carried out to determine discontinuities in the first silt layer as well as in the Brie multilevel-aquifer limestone horizon. The results highlighted 1) a good sensitivity of the TDEM survey to the presence of multi-decametric resistive sand lenses, particularly in a location where they had previously been identified, and 2) the value of surveying at a fine sampling step while extending to the meso-scale. To overcome the sampling issue over a watershed of several hundred square kilometers, we proposed numerically assessing a prototype of a low-cost airborne transient electromagnetic system towed by light fixed-wing airplanes (with transmitting and receiving loops in the same plane). The present numerical analysis, in 1D for the vertical (i.e., thickness) variation and in 3D for the lateral extent of localized sandy resistive units, showed that a conductive few-meter cover can be mapped even with a system flying at 50 m, though an a priori constraint on the resistivity of the first layer is needed to estimate its thickness variation as accurately as possible. Even though it did not bring more sensitivity to the layer thickness, and despite the severe difficulty of practical implementation with a decametric emission loop, the vertical co-planar (VCP) configuration potentially offered better near-surface lateral resolution (down to ∼40 m) for delineating the sandy units (discontinuities) within the silt layer (if units are at least 50 m in size) and provided better spatial constraints than the classical horizontal co-planar (HCP) geometry used in TDEM. Even though it is not aerodynamically in the plane of the emission loop, measuring the Hx component with a vertical-dipole emission loop (PERPxz geometry, for perpendicular) improved the lateral resolution (down to ∼20 m, still for sand units of at least 50 m in size) and confirmed that a geometry different from the classical HCP configuration could be valuable.
2,110 members
Henry Proudhon
  • Centre des Matériaux PM FOURT (MAT)
Daniel Pino
  • Centre de Mise en Forme des Matériaux (CEMEF)
Clément Nizak
  • Biochemistry
Information
Address
60 boulevard Saint Michel, 75006, Paris, Ile-de-France, France