Recent publications
Objective:
Double-loop H-field probes are often used to measure current on loop antennas for magnetic resonance imaging (MRI). Loop crosstalk limits the dynamic range of direct measurements with such probes. The crosstalk can be removed by a simple calibration. This work analyses the quantitative relation between a probe's calibrated
${S}_{21}$
and the RF coil current.
Method:
The analytical relation between RF coil current and calibrated
${S}_{21}$
measurements of a probe is established with the multi-port network theory, and verified by full-wave simulation and benchtop measurements. The effect of calibration is demonstrated by measuring the
¹H trap frequency, the active detuning, and the preamplifier decoupling.
Results:
The calibration removes the effect of crosstalk in a probe and improves the lower bound of
$| {{S}_{21}} |$
. The calibrated
${S}_{21}$
is proportional to coil current. In the lower frequency range, the ratio of calibrated
${S}_{21}$
to coil current changes almost linearly with frequency.
Impact:
The calibration method improves the sensitivity of probe measurements and facilitates fine-tuning of current-suppressing circuits such as active detuning circuits, traps, and preamplifier decoupling. The linear frequency dependence between
${S}_{21}$
measurements and coil current allows easy, fair comparison of coil current up to 128 MHz, and in some cases 298 MHz, helping build multi-nucleus coils.
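The abstract does not give the calibration formula, but the core idea can be illustrated with a minimal sketch. It assumes crosstalk acts as an additive complex offset in the measured S21 that can be recorded as a baseline with zero coil current and subtracted; the function name `calibrate_s21` and all numeric values are hypothetical, not from the paper.

```python
import cmath

def calibrate_s21(measured, baseline):
    """Remove probe crosstalk by subtracting a baseline S21 recorded
    with zero coil current (additive-crosstalk assumption)."""
    return [m - b for m, b in zip(measured, baseline)]

# Toy data: a fixed crosstalk term masks a much smaller signal that is
# proportional to the coil current.
crosstalk = 0.01 + 0.005j
signal = [1e-4 * cmath.exp(1j * 0.1 * k) for k in range(5)]
measured = [s + crosstalk for s in signal]

calibrated = calibrate_s21(measured, [crosstalk] * 5)
```

After subtraction, |S21| tracks the small current-proportional signal instead of being dominated by the crosstalk floor, which is the sense in which calibration improves the lower bound of |S21|.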
Over the last few years, the web of data has evolved considerably: it allows the sharing of a huge amount of interconnected data in several domains, and it keeps growing continuously. Due to the confidential nature of some data, sectors such as health, finance, and government participate less and publish fewer data. Thus, to develop the web of data and make it more trustworthy, we have to take into consideration the confidentiality, sensitivity, and utility of data. In this paper, we propose a framework for the confidentiality preservation and sharing of linked data. Our approach provides the means to specify privacy policies and protect sensitive data in RDF triples. Applying the policy to the graph replaces sensitive values with their encryption, which ensures a balance between the confidentiality and the utility of data. We evaluated the performance of our proposed solution on benchmarks of different sizes, showing how it preserves the privacy of sensitive data and how hard the protected values are to decrypt. The obtained results show the effectiveness of our developed framework.
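The policy-driven replacement of sensitive RDF values can be sketched as follows. This is not the paper's framework: the predicate names, the `protect_graph` helper, and the XOR cipher are all illustrative stand-ins (a real system would use a proper cipher such as AES).

```python
import base64
import hashlib

def toy_encrypt(value: str, key: str) -> str:
    """Illustrative XOR stream cipher keyed by SHA-256 of `key`.
    Stand-in only; use a real cipher (e.g. AES) in practice."""
    ks = hashlib.sha256(key.encode()).digest()
    data = value.encode()
    enc = bytes(b ^ ks[i % len(ks)] for i, b in enumerate(data))
    return base64.b64encode(enc).decode()

def protect_graph(triples, sensitive_predicates, key):
    """Replace the object of every policy-flagged predicate with
    its ciphertext, leaving other triples untouched."""
    return [(s, p, toy_encrypt(o, key) if p in sensitive_predicates else o)
            for s, p, o in triples]

# Hypothetical graph: the diagnosis is flagged sensitive by the policy.
triples = [("ex:alice", "foaf:name", "Alice"),
           ("ex:alice", "ex:diagnosis", "diabetes")]
protected = protect_graph(triples, {"ex:diagnosis"}, "secret-key")
```

Non-sensitive triples stay queryable in the clear, which is the confidentiality/utility balance the abstract describes.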
For the purpose of automatically identifying significant home appliances based on their usage patterns, this study presents a novel hybridization of segmentation, time-domain feature extraction, and machine learning algorithms. The empirical findings affirm the promising performance of the devised method.
This article provides a new approach for efficient prediction of lithium-ion (Li-ion) battery cell capacities by analyzing and exploiting battery parameters acquired by an event-driven module. The module acquires the intended battery cell parameters in real time and then extracts pertinent features. The feature set is processed with machine learning algorithms. The high-power Li-ion cell dataset provided by NASA is used to evaluate the performance of the devised method. The empirical results affirm that the devised method has promising performance.
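The abstract does not list the extracted features, but time-domain feature extraction of the kind it mentions typically computes simple statistics over a signal window. The sketch below, with hypothetical feature choices and voltage values, shows the general shape of such a step.

```python
import math

def time_domain_features(signal):
    """Common time-domain features for a signal window:
    mean, RMS, peak-to-peak amplitude, and variance."""
    n = len(signal)
    mean = sum(signal) / n
    rms = math.sqrt(sum(x * x for x in signal) / n)
    var = sum((x - mean) ** 2 for x in signal) / n
    return {"mean": mean,
            "rms": rms,
            "peak_to_peak": max(signal) - min(signal),
            "variance": var}

# Hypothetical cell-voltage window (volts) from an event-driven acquisition.
voltage = [3.70, 3.68, 3.65, 3.71, 3.69]
feats = time_domain_features(voltage)
```

A feature vector like this, computed per window, is what would then be fed to the machine learning algorithms.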
Federated Learning (FL) is an AI framework that enables collaborative and distributed training across multiple users to learn a global model while preserving the privacy of the data held locally at different sites. However, the aggregation process of FL, which relies on a centralized server to update the global model parameters, exposes the protocol to several vulnerabilities. Thus, the privacy and security concerns in FL systems need to be further investigated to fully leverage the capabilities of this protocol, especially in industries involving highly sensitive data, such as healthcare. As part of this study, we emphasize the security challenges in FL systems and propose a conceptual solution for a secure and efficient FL protocol based on defensive and compression mechanisms. This work highlights the potential adversaries and attacks that need to be considered. Furthermore, our proposal constitutes a significant step towards a reliable aggregation method specifically designed for healthcare.
Producing a large family of resource-constrained multi-processing systems on chips (MPSoC) is challenging, and the existing techniques are generally geared toward a single product. When they are leveraged for a variety of products, they are expensive and complex. Furthermore, in industry, a considerable lack of analysis support at the architectural level induces a strong dependency on the experience and preferences of the designer. This paper proposes a formal foundation and analysis of MPSoC product lines based on a featured transition system (FTS) to express the variety of products. First, feature diagrams are selected to model MPSoC product lines, which facilitates capturing their semantics as an FTS. A probabilistic model checker then verifies the resulting FTS, which is decorated with task characteristics and processor failure probabilities. The experimental results indicate that the formal approach offers quantitative results on the relevant product that optimizes resource usage when exploring the product family.
Product traceability is one of the major issues in supply chain management (e.g., food, cosmetics, pharmaceuticals). Several studies have shown that traceability enables targeted recalls of products representing a health risk (e.g., counterfeit products), thus enhancing communication and risk management. It can be defined as the ability to track and trace individual items throughout their whole lifecycle, from manufacturing to recycling. This includes real-time data analytics about actual product behavior (ability to track) and product historical data (ability to trace). This paper presents a comparative study of several works on product traceability and proposes a standardized traceability system architecture. In order to implement a counterfeit/nonconforming product detection algorithm, we model a cosmetic supply chain as a multi-agent system implemented in Anylogic©. Data generated by this simulator are then used to identify genuine trajectories across the whole supply chain. The genuine product trajectories (behaviors) are inferred using a frequent pattern mining algorithm (i.e., Apriori). These identified trajectories are used as a reference to identify counterfeit products and to detect false alarms in product behavior.
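The abstract names Apriori as the mining algorithm; a minimal level-wise version over toy trajectory data can illustrate how frequent site patterns would be found. The trajectory encoding (sets of visited supply-chain sites) and the support threshold are illustrative assumptions, not details from the paper.

```python
def apriori(transactions, min_support):
    """Minimal Apriori: grow candidate itemsets level by level,
    keeping only those whose support meets the threshold."""
    def support(itemset):
        return sum(itemset <= t for t in transactions)

    frequent = {}
    level = {frozenset([i]) for t in transactions for i in t}
    level = {c for c in level if support(c) >= min_support}
    k = 1
    while level:
        for c in level:
            frequent[c] = support(c)
        # Join step: combine size-k sets into size-(k+1) candidates.
        level = {a | b for a in level for b in level if len(a | b) == k + 1}
        level = {c for c in level if support(c) >= min_support}
        k += 1
    return frequent

# Hypothetical product trajectories as sets of visited supply-chain sites.
trajectories = [frozenset({"factory", "warehouse", "retail"}),
                frozenset({"factory", "warehouse", "retail"}),
                frozenset({"factory", "port", "retail"})]
freq = apriori(trajectories, min_support=2)
```

Patterns that reach the support threshold (e.g. factory → warehouse → retail here) would serve as the "genuine trajectory" reference against which suspect products are compared.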
Deriving an accurate behavior model from the historical data of a black box for verification and feature forecasting is seen by industry as a challenging issue, especially for large featured datasets. This paper focuses on an alternative approach in which stochastic automata are learned from time-series observations captured from a set of deployed sensors. The main advantage offered by such techniques is that they enable analysis and forecasting from a formal model instead of traditional learning methods. We perform statistical model checking to analyze the learned automata by expressing temporal properties. For this purpose, we consider a critical water infrastructure that provides a scenario based on a set of input and output values of heterogeneous sensors to regulate the dam spill gates. The method derives a consistent approximate model from traces collected over thirty years. The experiments show that the model provides not only an approximation of the desired output of a feature value but also a forecast of the ebb and flow of the sensed data.
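The paper's automata-learning algorithm is not given in the abstract; as a rough intuition for learning stochastic transitions from a trace, one can estimate transition probabilities between discretized sensor levels by frequency counting. The level names and the trace below are invented for illustration.

```python
from collections import defaultdict

def learn_transitions(levels):
    """Estimate a Markov-style transition function from a sequence of
    discretized sensor levels by counting consecutive pairs."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(levels, levels[1:]):
        counts[a][b] += 1
    return {src: {dst: n / sum(dests.values()) for dst, n in dests.items()}
            for src, dests in counts.items()}

# Hypothetical discretized water-level readings.
trace = ["low", "low", "mid", "high", "mid", "low", "mid", "mid"]
model = learn_transitions(trace)
```

Temporal properties (e.g. "the probability of reaching `high` within k steps") can then be checked against such a model rather than against the raw data.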
Thanks to the digital revolution, the construction industry has seen a recognizable evolution, with the world heading towards modern construction based on the use of Building Information Modeling (BIM). This evolution has been marked by the integration of this paradigm with immersive technologies such as Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR). This paper proposes a Systematic Literature Review (SLR) of recent studies on the integration of BIM with immersive environments using VR/AR/MR technologies. Four electronic databases were used to search for eligible studies: Google Scholar, ACM Digital Library, IEEE Xplore, and ScienceDirect. From an initial cohort of 239 studies, 28 were retained for analysis. The main findings of this review focus on the stages of the project life cycle in which immersive technologies are being implemented, the approaches and techniques used to integrate BIM with the three immersive technologies, and the current limitations and perspectives.
The distributed devices in a smart city are characterized by different degrees of sensitivity. Some of them can be accessed by everyone, whereas others are limited to a specific class of users (subjects). Therefore, we created an access control system named SOT-S (Subject-Object-Task System), supported by blockchains, that governs the processes applied by subjects to smart devices. SOT-S depends on three entities: subjects, objects, and tasks. It determines whether subjects have access rights to objects, and it also defines the priorities among subjects. SOT-S principles are applied through an equation that takes the values of the three entities. To increase the level of trust and maintain information integrity in the system, the values associated with the entities and the access control rules are managed through a blockchain mechanism. To ensure the applicability of the proposed solution, we developed a test environment to integrate the proposed concepts. In addition, we created a network integrating the developed components, where the architecture was built through smooth operations. Compared to existing solutions, the evaluated parameters of our system components are protected from damage by blockchain technology. Also, the SOT-S paradigm is easy to understand, implement, and deploy. Further, it assigns a value of trust to a given task/action executed by a subject on an object.
One of the major security issues in the Internet of Things (IoT) is maintaining network availability against attacks and traffic congestion. In practice, the greedy behavioral attack is considered an intelligent Denial of Service (DoS) attack, which aims to compromise the availability of the network by consuming as much of the deployed network's bandwidth as possible. This attack is achieved by tuning the CSMA-CA communication parameters that operate at the physical layer. In this paper, we propose an efficient technique for modeling attacks proper to the behavior of the greedy node in IoT networks while respecting unslotted IEEE 802.15.4. Our greedy-node algorithm relies on the CSMA-CA protocol. This way of representing the attack helped us easily detect greedy nodes in large-scale IoT networks through simulations. Indeed, the numerical results obtained in different scenarios validate our approach and show that greedy nodes can monopolize the transmission channel for a significant period of time. Various relevant parameters (the number of sent/lost packets, the collision rate, and energy consumption) are considered to analyze and evaluate the impact of selfish nodes on IoT networks.
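The greedy behavior described above amounts to tuning the CSMA-CA backoff parameters in the node's favor. The toy simulation below, not the paper's model, illustrates the idea under simple assumptions: a fixed probability that clear channel assessment (CCA) finds the channel busy, and a greedy node that shrinks `macMinBE` (normally 3 in IEEE 802.15.4) to 0 so its random backoffs are nearly zero.

```python
import random

def csma_backoff_delay(mac_min_be, mac_max_be=5, max_backoffs=4,
                       busy_p=0.3, rng=None):
    """Unslotted IEEE 802.15.4 CSMA-CA sketch: draw random backoffs
    (in unit backoff periods) until CCA finds the channel idle."""
    rng = rng or random.Random()
    be, delay = mac_min_be, 0
    for _ in range(max_backoffs + 1):
        delay += rng.randrange(2 ** be)   # wait 0 .. 2^BE - 1 periods
        if rng.random() >= busy_p:        # CCA: channel idle -> transmit
            return delay
        be = min(be + 1, mac_max_be)      # channel busy -> widen backoff
    return delay                          # channel access failure

# Aggregate delay over many attempts: the greedy node (macMinBE = 0)
# waits far less than a compliant node (macMinBE = 3).
normal = sum(csma_backoff_delay(3, rng=random.Random(s)) for s in range(200))
greedy = sum(csma_backoff_delay(0, rng=random.Random(s)) for s in range(200))
```

Because the greedy node almost always finishes its backoff first, it wins most channel-access races, which is how it can monopolize the channel.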
In this paper, a theoretical analysis for determining the elastic and flexural moduli of natural-fibre-reinforced hybrid composites is proposed. The proposed analytical model is based on the classical lamination approach to predict the theoretical modulus. Experimental tensile and 3-point bending tests were carried out using standard specimens of inter- and intra-ply hybrid composite materials. The elastic and flexural moduli predicted by the new approach showed better agreement with the experimental results, with maximum deviations of 11.21% and 11.65%, respectively. From the results, it was observed that the theoretical values of the elastic and flexural moduli were not accurately predicted by the rule of hybrid mixtures method. The proposed approach can be adopted as a primary tool to predict the modulus of the composite, and it can also be used to optimise composite parameters such as fibre volume fraction, ply orientation, and fibre orientation for maximum performance.
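For context, the rule of hybrid mixtures baseline that the paper finds inaccurate is a simple volume-weighted average of constituent moduli. The sketch below shows that baseline with hypothetical fibre and matrix values; the classical-lamination model the paper actually proposes is more involved and is not reproduced here.

```python
def rule_of_hybrid_mixtures(fractions, moduli):
    """Rule-of-mixtures longitudinal modulus: E = sum(V_i * E_i),
    where V_i are volume fractions (matrix included) summing to 1."""
    assert abs(sum(fractions) - 1.0) < 1e-9, "fractions must sum to 1"
    return sum(v * e for v, e in zip(fractions, moduli))

# Hypothetical hybrid: 20% fibre A (E = 40 GPa), 15% fibre B (E = 70 GPa),
# 65% matrix (E = 3 GPa).
E = rule_of_hybrid_mixtures([0.20, 0.15, 0.65], [40.0, 70.0, 3.0])
```

This gives an upper-bound estimate that ignores ply stacking and fibre orientation, which is precisely the information the lamination-based model adds.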
Two years after the first COVID-19 infections and the disease's rapid propagation, death and infection cases are still increasing exponentially. Unfortunately, in this not yet fully controlled situation, we noticed that existing solutions for COVID-19 detection based on chest X-rays were not reliable enough relative to the number of infected patients and the severity of the outbreak. To handle this issue by increasing the reliability and efficiency of COVID-19 detection, we deploy and compare the results of a set of reconfigurable classification approaches and deep learning techniques. We achieved a score of up to 99% accuracy on a dataset of 15,000 X-ray images, which makes the selected detection technique, deep learning, more reliable and effective.
Coverage area maximization is a crucial issue that must be considered in wireless sensor network (WSN) deployment, since it impacts the sensor network's efficiency. In this paper, a novel approach based on particle swarm optimization (PSO) and Voronoi diagrams is developed to solve the WSN deployment problem. The objective of the proposed solution is to reduce both the coverage holes and the coverage overlap in the region of interest (RoI). To achieve this, the PSO fitness function is designed using a Voronoi diagram to efficiently assess the coverage holes of a particle's solution and thereby compute an improved deployment of the sensor nodes within the target area. The simulation results demonstrate that the proposed algorithm provides a noteworthy initial coverage enhancement.
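To make the fitness idea concrete: a PSO particle encodes candidate node positions, and the fitness rewards deployments that cover more of the RoI. The paper's fitness uses a Voronoi diagram; the sketch below substitutes a simpler grid-sampling estimate of coverage purely for illustration, with made-up node positions and sensing range.

```python
def coverage_ratio(nodes, radius, width, height, step=1.0):
    """Fraction of grid sample points within sensing range of at least
    one node. (Grid sampling is a simple stand-in for the paper's
    Voronoi-based coverage-hole assessment.)"""
    covered = total = 0
    y = 0.0
    while y <= height:
        x = 0.0
        while x <= width:
            total += 1
            if any((x - nx) ** 2 + (y - ny) ** 2 <= radius ** 2
                   for nx, ny in nodes):
                covered += 1
            x += step
        y += step
    return covered / total

# Hypothetical particle: two sensors of range 5 in a 10 x 10 RoI.
fitness = coverage_ratio([(2.5, 5.0), (7.5, 5.0)], 5.0, 10.0, 10.0)
```

PSO would then move node positions to maximize this value, shrinking holes and overlap simultaneously.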
Gamification can be seen as the intentional use of game design elements in non-game tasks in order to produce psychological outcomes likely to influence behaviour and/or performance. In this respect, we hypothesize that gamification would produce measurable effects on user performance, that this positive impact would be mediated by specific motivational and attentional processes such as flow, and that gamification would moderate the social comparison process. In three experimental studies, we examine the effects of gamified electronic brainstorming interfaces on fluency, uniqueness, and flow. The first study mainly focuses on time pressure, the second on performance standards, and the third introduces social comparison. The results highlight some effects of the gamified conditions on brainstorming performance, but no or even negative effects on flow. All three studies are congruent in that gamification did not operate as a psychological process, which calls into question popular design trends observed in a number of sectors.