Polytechnique Montréal
  • Montréal, QC, Canada
Recent publications
In this paper, the consensus problem for AC microgrids (MGs) is addressed in the presence of communication delay and packet dropouts. Existing control schemes commonly design the secondary control layer assuming ideal communication channels between distributed generation (DG) units. In practice, however, the communication links are subject to packet dropouts and delays, which can drastically degrade the quality and stability of the DG controllers. Most of the existing literature studies the impact of either delay or packet dropout alone; this paper considers the simultaneous effects of both in the communication links, with a rigorous stability analysis that accounts for their statistical properties. We therefore propose a new distributed control approach that restores the voltage and frequency to their reference values using a sample-based technique applied to the data collected by each DG unit. In addition, sufficient consensusability conditions are derived that take into account the communication network topology, the DG dynamics, the packet dropout rates, and the communication delay. The consensus error of the proposed control strategy is characterized, and its convergence is proven. Finally, the proposed method is simulated for different scenarios in the MATLAB/SimPowerSystems environment to demonstrate the effectiveness of the proposed secondary control scheme.
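The abstract above does not give the control law explicitly; purely as an illustration of the setting, the sketch below shows a sampled distributed consensus update for frequency restoration in which neighbour packets are dropped with a Bernoulli probability. The topology, gains, dropout rate, and reference-pinning term are all assumptions, not the paper's controller.

```python
# Illustrative sketch only: a sampled consensus update for frequency restoration
# under Bernoulli packet dropouts. Gains, topology and dropout rate are made up;
# this is not the controller proposed in the paper.
import numpy as np

rng = np.random.default_rng(0)
n = 4                                   # number of DG units
adj = np.array([[0, 1, 0, 1],           # ring communication topology (assumed)
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]])
f_ref = 60.0                            # reference frequency (Hz)
f = f_ref + rng.normal(0, 0.05, n)      # initial frequency deviations
gain, p_drop, steps = 0.2, 0.3, 200     # consensus gain and packet-dropout probability

for _ in range(steps):
    update = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if adj[i, j] and rng.random() > p_drop:   # packet from neighbour j arrives
                update[i] += gain * (f[j] - f[i])     # neighbour disagreement term
        update[i] += gain * (f_ref - f[i])            # reference restoration (pinning) term
    f = f + update

print("final frequencies:", np.round(f, 4))           # all close to 60 Hz despite dropouts
```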
Metals and sulfur, if not efficiently removed from crude petroleum oil during refining, can have severe detrimental impacts on refinery processes such as fluid catalytic cracking and hydrotreating units. Recently, lab-scale microwave-assisted demetallization and desulfurization (MW-DMDS) of crude oil using bis(2-ethylhexyl) phosphoric acid (D2EHPA) has shown several advantages, such as high removal efficiency, environmental friendliness, and lower energy requirements. This paper presents a comprehensive industrial process scheme for MW-DMDS by designing the required processing units. In addition, an effective methodology to regenerate D2EHPA using sulfuric acid and sodium hydroxide (NaOH) aqueous solutions was developed and experimentally validated. A techno-economic investigation was carried out with the ASPEN Plus process simulator to assess the feasibility of scaling the process up to treat 50,000 barrels per stream day (BPSD) of crude oil. Total capital costs (CAPEX) and total annual operating costs (OPEX) were estimated at 6.77 MUSD and 4.23 MUSD (0.24 $/bbl), respectively. The results indicated the economic superiority of the proposed process over existing technologies such as hydrodemetallization (HDM) and hydrodesulfurization (HDS), owing to its remarkably lower CAPEX and OPEX. A sensitivity analysis varying the primary design parameters demonstrated that the required microwave power and the corresponding purchase cost of the microwave generators account for the largest share of the estimated CAPEX. Moreover, the annual operating costs depend strongly on reagent consumption and the effectiveness of the regeneration process.
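As a quick sanity check on the reported figures, the 0.24 $/bbl operating cost can be reproduced from the annual OPEX and the plant capacity. The number of stream days per year (about 350) is an assumption not stated in the abstract; it is the value that makes the numbers agree.

```python
# Back-of-the-envelope check of the reported 0.24 $/bbl OPEX.
# The ~350 stream days per year is an assumption, not a figure from the abstract.
opex_usd = 4.23e6            # total annual operating cost (USD)
capacity_bpsd = 50_000       # barrels per stream day
stream_days = 350            # assumed operating days per year

opex_per_bbl = opex_usd / (capacity_bpsd * stream_days)
print(f"OPEX ≈ {opex_per_bbl:.2f} $/bbl")   # ≈ 0.24 $/bbl, matching the abstract
```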
Context: Machine Learning (ML) has been at the heart of many innovations over the past years. However, including it in so-called “safety-critical” systems such as automotive or aeronautic systems has proven very challenging, since the paradigm shift that ML brings completely upends traditional certification approaches. Objective: This paper aims to elucidate the challenges related to the certification of ML-based safety-critical systems, as well as the solutions proposed in the literature to tackle them, answering the question “How to Certify Machine Learning Based Safety-critical Systems?”. Method: We conducted a Systematic Literature Review (SLR) of research papers published between 2015 and 2020 covering topics related to the certification of ML systems. In total, we identified 217 papers covering the topics considered to be the main pillars of ML certification: Robustness, Uncertainty, Explainability, Verification, Safe Reinforcement Learning, and Direct Certification. We analyzed the main trends and problems of each sub-field and provided summaries of the extracted papers. Results: The SLR results highlighted the enthusiasm of the community for this subject, as well as the lack of diversity in terms of datasets and types of ML models. They also emphasized the need to further develop connections between academia and industry to deepen the study of the domain. Finally, they illustrated the necessity of building connections between the above-mentioned main pillars, which are for now mainly studied separately. Conclusion: We highlighted current efforts deployed to enable the certification of ML-based software systems and discussed some future research directions.
Developers sometimes choose design and implementation shortcuts due to the pressure of tight release schedules. However, shortcuts introduce technical debt that grows as the software evolves. This debt needs to be repaid as quickly as possible to minimize its impact on software development and software quality. Sometimes, technical debt is admitted by developers in comments and commit messages; such debt is known as self-admitted technical debt (SATD). In data-intensive systems, where data manipulation is a critical functionality, the presence of SATD in the data access logic could seriously harm performance and maintainability. Understanding the composition and distribution of SATD across software systems, and its evolution, could provide insights into managing technical debt efficiently. We present a large-scale empirical study on the prevalence, composition, and evolution of SATD in data-intensive systems. We analyzed 83 open-source systems relying on relational databases as well as 19 systems relying on NoSQL databases. We detected SATD in source code comments obtained from different snapshots of the subject systems. To understand the evolution dynamics of SATD, we conducted a survival analysis. Next, we performed a manual analysis of 361 sample data-access SATD instances, investigating their composition and the reasons behind their introduction and removal. We identified 15 new SATD categories, 11 of which are specific to database access operations. We found that most data-access SATD is introduced in the later stages of the change history rather than at the beginning. We also observed that bug fixing and refactoring are the main reasons behind the introduction of data-access SATD.
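The abstract does not describe the detector used in the study; as a minimal illustration of the general idea of finding SATD in source comments, the sketch below flags comments containing common debt keywords. The keyword list, the comment regex, and the example snippet are assumptions for illustration, not the study's method.

```python
# Minimal illustration of keyword-based SATD detection in source comments.
# The keyword list and regex are illustrative; they are not the study's detector.
import re

SATD_KEYWORDS = ("todo", "fixme", "hack", "workaround", "temporary", "kludge")
COMMENT_RE = re.compile(r"//[^\n]*|/\*.*?\*/|#[^\n]*", re.DOTALL)

def find_satd(source: str):
    """Return comments that look like self-admitted technical debt."""
    hits = []
    for comment in COMMENT_RE.findall(source):
        if any(k in comment.lower() for k in SATD_KEYWORDS):
            hits.append(comment.strip())
    return hits

example = """
    // TODO: this raw SQL bypasses the ORM as a temporary workaround
    cursor.execute("SELECT * FROM orders")  # hack: no pagination yet
"""
print(find_satd(example))   # both comments are flagged as SATD
```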
Introduction: There is a growing need for small-diameter (<6 mm) off-the-shelf synthetic vascular conduits for various surgical bypass procedures, as current synthetic conduits show unacceptable thrombosis rates. The goal of this study was to build vascular grafts with better compliance than standard synthetic conduits and with an inner layer that stimulates endothelialization while remaining antithrombogenic. Methods: Tubular vascular conduits made of a polyurethane/polycaprolactone scaffold combined with a bioactive coating based on chondroitin sulfate (CS) were created using electrospinning and plasma polymerization. In vitro testing followed by a comparative in vivo trial in a sheep model of bilateral carotid bypass was performed to assess the conduits’ performance against the current standard. Results: In vitro, the novel small-diameter (5 mm) electrospun vascular grafts coated with CS showed ten times higher compliance than commercial expanded polytetrafluoroethylene (ePTFE) conduits while maintaining adequate suturability, burst pressure profiles, and structural stability over time. The subsequent in vivo trial was terminated after the CS-coated electrospun vascular grafts proved inferior to their ePTFE counterparts. Conclusions: The inability of the experimental conduits to perform well in vivo despite promising in vitro results may be related to the low porosity of the grafts and the lack of rapid endothelialization despite the presence of the CS coating. Further research is warranted to explore ways to improve the electrospun polyurethane/polycaprolactone scaffold in order to make it conducive to transmural endothelialization while remaining resistant to strenuous conditions.
We formulate a batch reinforcement learning-based demand response approach to prevent distribution network constraint violations in unknown grids. We use fitted Q-iteration to compute a network-safe policy from historical measurements for aggregations of thermostatically controlled loads providing frequency regulation. We test our approach in a numerical case study based on real load profiles from Austin, TX, and compare its performance to a greedy, grid-aware approach and a standard, grid-agnostic approach. The average tracking root mean square error is 0.0932 for our approach, versus 0.0600 and 0.0614 for the grid-aware and grid-agnostic implementations, respectively. Our numerical case study shows that our approach leads to a 95% reduction, on average, in the total number of rounds with at least one constraint violation compared to the grid-agnostic approach. Working under limited information, our approach thus offers lower but acceptable setpoint tracking performance while ensuring safer distribution network operations.
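For readers unfamiliar with fitted Q-iteration, the sketch below shows the core batch loop on historical transitions (state, action, reward, next state): regress Q on (state, action), then repeatedly refit against Bellman targets. The regressor choice, discount factor, discrete action set, and state encoding are placeholders, not the paper's exact design.

```python
# Minimal fitted Q-iteration sketch on a historical batch of transitions.
# Regressor, discount factor and discrete action set are illustrative choices.
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

def fitted_q_iteration(batch, actions, gamma=0.95, iters=50):
    s, a, r, s_next = (np.asarray(x, dtype=float) for x in batch)
    X = np.column_stack([s, a])                    # regress Q on (state, action) pairs
    q = ExtraTreesRegressor(n_estimators=50, random_state=0)
    q.fit(X, r)                                    # Q_0: immediate reward only
    for _ in range(iters):
        # Bellman targets: r + gamma * max_a' Q(s', a') over the discrete action set
        q_next = np.column_stack([
            q.predict(np.column_stack([s_next, np.full(len(s_next), act)]))
            for act in actions
        ])
        q.fit(X, r + gamma * q_next.max(axis=1))
    return q

def greedy_action(q, state, actions):
    """Pick the action maximizing the fitted Q-function in the given state."""
    vals = [q.predict(np.column_stack([np.atleast_2d(state), [[act]]]))[0] for act in actions]
    return actions[int(np.argmax(vals))]
```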
Design systems represent a user interaction design and development approach that is currently of avid interest in the industry. However, little research work has been done to synthesize knowledge related to design systems in order to inform the design of tools that support their creation, maintenance, and usage practices. This paper represents an important step in which we explored the issues that design system projects usually deal with and the perceptions and values of design system project leaders. Through this exploration, we aim to investigate the need for tools that support the design system approach. We found that the open source communities around design systems focused on discussing issues related to the behavior of user interface components. At the same time, leaders of design system projects faced considerable challenges when evolving their design systems to make them both capable of capturing stable design knowledge and flexible to the needs of the various concrete products. They valued a bottom-up approach to design system creation and maintenance, in which components are elevated and merged from the evolving products. Our findings synthesize this knowledge and lay foundations for designing techniques and tools aimed at supporting the design system practice and related modern user interaction design and development approaches.
Since visco-hyperelastic materials are widely used in different industries, a proper constitutive model plays a key role in predicting their behavior. At the same time, given the substantial leap in data science and machine learning methods, it is advantageous to use such methods to simplify the constitutive modeling procedure. This paper aims to develop a new machine learning-based modeling approach that can accurately predict the visco-hyperelastic behavior of materials under different loading conditions. The suggested model draws on a library of inputs that can be adapted to the problem at hand. In addition, the model offers mathematical simplicity, a fast and straightforward calibration process, acceptable accuracy over the full range of stretches, and independence from the structure's geometry. Using these models, four sets of problems with different structures and loading conditions, such as uniaxial tension, pure shear, and torsion, are solved. Comparing the results with several available analytical models as well as experimental data shows that the data-driven models are fast and give accurate predictions.
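As a hedged illustration of the "library of inputs" idea described above (the feature set, the regressor, and the synthetic calibration data below are all invented and are not the paper's model), one could regress stress on a few stretch- and rate-based features:

```python
# Illustrative data-driven constitutive fit: stress regressed on a small
# "library" of stretch/rate features. Features, regressor and synthetic data
# are assumptions for illustration, not the paper's model.
import numpy as np
from sklearn.linear_model import Ridge

def feature_library(stretch, stretch_rate):
    lam, lam_dot = np.asarray(stretch), np.asarray(stretch_rate)
    return np.column_stack([
        lam - 1.0,              # linear strain-like term
        lam**2 - 1.0 / lam,     # neo-Hookean-like term (uniaxial, incompressible)
        lam_dot,                # linear viscous term
        lam_dot * (lam - 1.0),  # strain/rate coupling term
    ])

# Synthetic calibration data standing in for experimental measurements
rng = np.random.default_rng(1)
lam = np.linspace(1.0, 2.0, 200)
lam_dot = rng.uniform(-0.1, 0.1, lam.size)
stress = 0.8 * (lam**2 - 1.0 / lam) + 2.0 * lam_dot + rng.normal(0, 0.01, lam.size)

model = Ridge(alpha=1e-3).fit(feature_library(lam, lam_dot), stress)
print("fitted coefficients:", np.round(model.coef_, 3))
```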
This paper presents a new mathematical formulation for planning and scheduling activities in the Engineer To Order (ETO) context. Designed from an Advanced Planning System perspective, the proposed formulation not only schedules production operations but also takes into account the assembly, design, engineering, and validation phases. The definition of resources is thus generic and makes it possible to model employees, finite-capacity machines, and consumable materials. The definition of operations makes it possible to represent short production operations, respecting precedence relations that represent the assembly of elements, as well as non-physical activities. Non-physical activities are longer, subject to validation, and applied once for multiple identical elements. Furthermore, to integrate planning and scheduling, the proposed formulation is not limited to time-based objectives but also considers financial and organizational aspects. Experiments carried out on instances with up to 100 operations show that our model performs well and requires reasonable computing times. In addition, we propose an ETO strategy that tends to validate the design of non-standard and highly uncertain items first and to delay their production or purchase. Our integrated model, governed by the proposed ETO strategy, is compared to a model that mimics decision processes in existing industrial systems. The comparative study and experimental results highlight how this strategy yields robust integrated solutions that offer a good trade-off between the waste caused by unpredictable changes in the bill of materials (BOM) and the projects' completion times.
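The full formulation is not reproduced in the abstract; the toy model below only sketches the precedence and makespan part of such an integrated planning/scheduling problem, with hypothetical operations, durations, and precedence relations, using PuLP with its default CBC solver. The resource, validation, and financial terms of the paper's model are omitted.

```python
# Toy precedence/makespan sketch of a planning-and-scheduling model, not the
# paper's full ETO formulation (resources, validation and cost terms omitted).
# Operations, durations and precedence relations are hypothetical.
from pulp import LpMinimize, LpProblem, LpVariable

duration = {"design": 5, "validate": 2, "produce": 8, "assemble": 3}
precedence = [("design", "validate"), ("validate", "produce"), ("produce", "assemble")]

prob = LpProblem("eto_sketch", LpMinimize)
start = {op: LpVariable(f"start_{op}", lowBound=0) for op in duration}
makespan = LpVariable("makespan", lowBound=0)
prob += makespan                                      # objective: minimize completion time
for before, after in precedence:                      # an operation starts after its predecessor ends
    prob += start[after] >= start[before] + duration[before]
for op in duration:                                   # makespan bounds every completion time
    prob += makespan >= start[op] + duration[op]

prob.solve()
print({op: start[op].value() for op in duration}, "makespan:", makespan.value())
```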
Three distributed circuit-based approaches, based on the Hopkinson analogy, the Buntenbach analogy, and the duality principle, are proposed in this paper to provide a detailed electromagnetic model of an inductor. The proposed approaches can capture several important aspects of the finite element method (FEM), such as detailed geometrical representation and the incorporation of magnetic saturation. In contrast to FEM, which faces numerous obstacles when studying a magnetic device within a large network, the proposed approaches can be implemented in electromagnetic transient (EMT)-type software and can thus simulate magnetic devices in a large network. The three circuit-based methods are first compared to each other, and a 2-D finite-element model is then used to validate them.
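As a minimal numeric illustration of the Hopkinson (reluctance) analogy for a single lumped core section, the snippet below computes flux and inductance from a magnetomotive force and a reluctance. The core dimensions and permeability are arbitrary, and neither saturation nor the distributed (multi-element) representation discussed in the paper is modelled here.

```python
# Single-element Hopkinson-analogy calculation for an inductor core.
# Geometry and permeability are arbitrary; saturation and the distributed
# multi-element representation of the paper are not modelled.
from math import pi

mu0 = 4e-7 * pi           # vacuum permeability (H/m)
mu_r = 2000.0             # relative permeability of the core (assumed)
length = 0.20             # mean magnetic path length (m)
area = 4e-4               # core cross-section (m^2)
turns = 100
current = 1.0             # winding current (A)

reluctance = length / (mu0 * mu_r * area)      # R = l / (mu * A)   [A-turns/Wb]
mmf = turns * current                          # F = N * I          [A-turns]
flux = mmf / reluctance                        # phi = F / R        [Wb]
inductance = turns * flux / current            # L = N * phi / I = N^2 / R

print(f"reluctance = {reluctance:.3e} A/Wb, L = {inductance*1e3:.2f} mH")
```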
The aim of this study was to investigate the role of MgO in the long-term operation of a mixed media contactor. Specifically, the simultaneous removal of aqueous manganese and the remineralization of soft waters using pure calcite are limited by a remineralization breakthrough after 600 h of operation in the presence of high Mn concentrations (5 mg Mn/L). The introduction of as little as 5 % (m/m) MgO improved the Mn removal kinetics while maintaining remineralization objectives over 720 h of operation. After treating elevated concentrations of Mn, both media exhibited a Mn-coated layer, which contributed to limiting mass transfer from the media core to the liquid phase. X-ray photoelectron spectroscopy (XPS) identified this superficial layer as 10 % Mn oxides (MnOx) on the MgO media compared to 1.4 % MnOx on calcite, suggesting that MgO acts as the preferred reaction surface during Mn removal. It is therefore postulated that, in the presence of MgO, Mn removal is influenced by the high pH conditions (pH = 10.5), which introduce a significant precipitation mechanism as Mn²⁺ is oxidized. For all the examined conditions, the formation of this coating improved Mn removal due to the autocatalytic nature of the adsorption/oxidation of dissolved manganese by MnOx. MgO is therefore thought to contribute to a complex sorption-coprecipitation process in the operation of a mixed media contactor.
Negative-sequence currents at the transmission line terminals are widely adopted in differential protection to achieve excellent sensitivity. The underlying principle is that traditional power sources (synchronous generators) can be treated as an inductive reactance in the negative-sequence network. However, for transmission lines connected to Doubly-Fed Induction Generator (DFIG) based wind parks (WPs), the negative-sequence currents are controlled by fault-ride-through (FRT) solutions. This paper first analyzes the phasor relationship between the negative-sequence voltage and current of a DFIG-based WP under three typical FRT solutions, and then evaluates their impact on the dynamic performance of negative-sequence differential protection elements (87LQ). The analysis indicates that a certain FRT control strategy can make a DFIG-based WP inject capacitive current into the negative-sequence network. This impacts sensitivity and can even result in maloperation, as shown for the first time in this work. The analytical results are validated with detailed time-domain models and simulations in Simulink.
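For readers unfamiliar with the 87LQ element referenced above, the sketch below shows the basic negative-sequence extraction via symmetrical components and a generic percentage-differential check on the two line-end currents. The slope and pickup settings and the sample phasors are illustrative only and are not taken from the paper.

```python
# Generic negative-sequence differential (87LQ) check on two line-end phasors.
# Slope/pickup settings and the sample phasors are illustrative only.
import numpy as np

A = np.exp(2j * np.pi / 3)                         # symmetrical-component operator 'a'

def negative_sequence(ia, ib, ic):
    """I2 = (Ia + a^2*Ib + a*Ic) / 3, with phase currents given as complex phasors."""
    return (ia + A**2 * ib + A * ic) / 3.0

def differential_trips(i2_local, i2_remote, slope=0.2, pickup=0.1):
    operate = abs(i2_local + i2_remote)            # differential (operating) quantity
    restrain = abs(i2_local) + abs(i2_remote)      # restraint quantity
    return operate > max(pickup, slope * restrain)

# Example: an unbalanced (phase-to-ground) fault seen from both ends, made-up phasors in pu
i2_local = negative_sequence(1.2, 0.4 * A**2, 0.4 * A)
i2_remote = negative_sequence(0.9, 0.3 * A**2, 0.3 * A)
print("87LQ operates:", differential_trips(i2_local, i2_remote))
```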
The fault current characteristics of Doubly Fed Induction Generator (DFIG) based wind turbine generators (WTGs) differ from those of traditional generators. Although asymmetrical faults are far more frequent, studies on the transient current characteristics of DFIG-based WTGs mainly focus on symmetrical faults and consider variations in voltage magnitude only. This paper presents highly accurate and concise analytical expressions for the stator and rotor currents of DFIG-based WTGs under crowbar protection, considering the variations in magnitude and phase angle of both the positive- and negative-sequence voltages during asymmetrical faults. The proposed expressions are validated using a detailed simulation model of a 1.5 MW DFIG-based WTG. The expressions can be directly integrated into protection tools for evaluating the dynamic performance of relays and for designing fault ride-through (FRT) solutions.
The interactions between wind parks and the series-compensated transmission lines can bring about sub- or super-synchronous resonance (SSR) incidents which jeopardize the safe operation of the entire network. In practice, such incidents may emerge under various conditions as different types of wind turbines with various parameters can be deployed at several locations in the network. Hence, it is crucial to efficiently identify the conditions that lead to adverse interactions and their subsequent SSR incidents. This paper proposes a simulation-based method, namely disturb and scan (DaS), for fast and automated detection of SSR. The proposed technique uses small-scale disturbances in time-domain electromagnetic transient (EMT) simulations to perform spectral analysis along with positive-sequence impedance scans. Numerical results are presented and validated for benchmark systems that utilize type-III and type-IV wind turbines. The system operators can adopt the developed methodology to quickly assess the risks of SSR, evaluate conditions of instability, and improve their network protection and control schemes.
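A rough sketch of the spectral-analysis step of such a disturb-and-scan workflow: apply an FFT to the post-disturbance current from an EMT simulation and look for sub-synchronous components. The waveform below is synthetic, and the disturbance-injection and positive-sequence impedance-scan parts of the method are not shown.

```python
# Spectral-analysis step of a disturb-and-scan style SSR screening:
# FFT of a post-disturbance current and detection of sub-synchronous components.
# The waveform is synthetic; the EMT disturbance and impedance scan are not shown.
import numpy as np

fs, T, f0 = 5000.0, 2.0, 60.0                      # sampling rate (Hz), window (s), fundamental (Hz)
t = np.arange(0, T, 1 / fs)
# Synthetic current: fundamental plus a growing 22 Hz sub-synchronous component
i = np.sin(2 * np.pi * f0 * t) + 0.1 * np.exp(0.8 * t) * np.sin(2 * np.pi * 22.0 * t)

spectrum = np.abs(np.fft.rfft(i * np.hanning(i.size)))   # windowed FFT magnitude
freqs = np.fft.rfftfreq(i.size, 1 / fs)

mask = (freqs > 5) & (freqs < 55)                  # sub-synchronous band of interest
peak = freqs[np.argmax(spectrum * mask)]
print(f"dominant sub-synchronous component near {peak:.1f} Hz")
```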
Fast delivery is one of the most popular services in e-commerce retail: it consists in shipping items ordered online within short times. Customer orders in this segment come with deadlines, and respecting them is pivotal to ensuring high service quality. The most time-consuming process in the warehouse is order picking, which consists in grouping orders into batches, assigning those batches to order pickers, and sequencing the batches assigned to each picker such that the order deadlines are satisfied and the picking time is minimized. To speed up order picking operations, e-commerce warehouses implement new logistical practices. In this paper, we study the impact of splitting orders (assigning the order lines of an order to multiple pickers). We thus generalize the integrated order batching, batch scheduling, and picker routing problem by allowing orders to be split, and propose a route first-schedule second heuristic to solve the problem. In the routing phase, the heuristic divides the orders into clusters and constructs the picking tours that retrieve the order lines of each cluster using a split-based procedure. In the scheduling phase, the constructed tours are assigned to pickers such that the order deadlines are satisfied, using a constraint programming formulation. On a publicly available benchmark, we compare our results against a state-of-the-art iterated local search algorithm designed for the non-splitting version of the problem. Results show that splitting customer orders using our algorithm reduces the picking time by 30% on average, with a maximum reduction of 60%.
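The paper's scheduling phase uses a constraint-programming formulation; the snippet below is only a simplified greedy stand-in (earliest-deadline-first assignment of pre-built picking tours to pickers) that illustrates the deadline check. The tour durations and deadlines are made up.

```python
# Simplified stand-in for the scheduling phase: assign pre-built picking tours
# to pickers by earliest deadline first and check deadline feasibility.
# (The paper uses constraint programming; durations/deadlines are made up.)
import heapq

def schedule(tours, n_pickers):
    """tours: list of (tour_id, duration, deadline). Returns a schedule or None."""
    pickers = [(0.0, p) for p in range(n_pickers)]     # (time when picker becomes free, id)
    heapq.heapify(pickers)
    plan = []
    for tid, dur, deadline in sorted(tours, key=lambda t: t[2]):   # earliest deadline first
        free_at, p = heapq.heappop(pickers)            # least-loaded picker
        finish = free_at + dur
        if finish > deadline:
            return None                                # a deadline would be violated
        plan.append((tid, p, free_at, finish))
        heapq.heappush(pickers, (finish, p))
    return plan

tours = [("T1", 12, 30), ("T2", 20, 25), ("T3", 8, 40), ("T4", 15, 45)]
print(schedule(tours, n_pickers=2))
```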
The concept of balance plays an important role in many combinatorial optimization problems. Yet there exist various ways of expressing balance, and it is not always obvious how best to achieve it. In this methodology-focused paper, we study three cases where its integration is deficient and analyze the causes of these inadequacies. We examine the characteristics and performance of the measures of balance used in these cases, and provide general guidelines regarding the choice of a measure.
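As a small illustration of how measures of balance can disagree (the workload vectors below are invented), compare the max-min range, the standard deviation, and the max/min ratio on two candidate assignments: the range prefers one candidate while the dispersion-based measures prefer the other.

```python
# Illustration of how common balance measures can rank the same two candidate
# workload assignments differently. The workload vectors are invented.
import statistics

def balance_measures(loads):
    return {
        "range": max(loads) - min(loads),              # max-min spread
        "stdev": round(statistics.pstdev(loads), 3),   # dispersion around the mean
        "ratio": round(max(loads) / min(loads), 3),    # relative imbalance
    }

candidate_a = [10, 10, 9, 3]   # smaller max-min range
candidate_b = [12, 8, 8, 4]    # larger range, but lower standard deviation

print("A:", balance_measures(candidate_a))   # range prefers A
print("B:", balance_measures(candidate_b))   # stdev and ratio prefer B
```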
Local-nonlocal coupling approaches provide a means to combine the computational efficiency of local models and the accuracy of nonlocal models. This paper studies the continuous and discrete formulations of three existing approaches for the coupling of classical linear elasticity and bond-based peridynamic models, namely (1) a method that enforces matching displacements in an overlap region, (2) a variant that enforces a constraint on the stresses instead, and (3) a method that considers a variable horizon in the vicinity of the interfaces. The performance of the three coupling approaches is compared on a series of one-dimensional numerical examples that involve cubic and quartic manufactured solutions. Accuracy of the presented methods is measured in terms of the difference between the solution to the coupling approach and the solution to the classical linear elasticity model, which can be viewed as a modeling error. The objective of the paper is to assess the quality and performance of the discrete formulation for this class of force-based coupling methods.
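For context, bond-based peridynamics replaces the local divergence of stress with a nonlocal integral of pairwise bond forces over a neighbourhood of radius δ (the horizon); classical linear elasticity is recovered as δ → 0 for smooth displacement fields, which is why the difference from the local solution can serve as a modeling error. In standard form (not specific to this paper):

```latex
% Bond-based peridynamic equation of motion (standard form, not specific to the paper)
\rho(x)\,\ddot{u}(x,t) \;=\; \int_{H_x} f\bigl(u(x',t)-u(x,t),\,x'-x\bigr)\,dV_{x'} \;+\; b(x,t),
\qquad H_x = \{\,x' : |x'-x| \le \delta\,\}
```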
Since its emergence a few decades ago, Product Lifecycle Management (PLM) has largely shaped product development and production engineering and has helped achieve a tremendous acceleration of processes and operations. Meanwhile, the Internet of Things (IoT) is currently emerging very strongly, and interest in it keeps growing. Surprisingly, interactions between IoT and PLM systems remain very scarce to this day. Industrialists have enabled a few connections when and where the return on investment was high and certain. However, systems integration remains a struggle, as the cultural difference between PLM’s engineering background and IoT’s computer science background persists. This research work aims to bridge the gap by making explicit all previous research on PLM and IoT through a systematic literature review that traces IoT’s evolving perimeter over the last decade. It also examines how the literature approaches the relationships between PLM and IoT information systems and the humans involved. Finally, it proposes and discusses a framework supporting the integration of PLM and IoT in the manufacturing industry.
5,324 members
Mohamad Sawan
  • Department of Electrical Engineering
Galina Nemova
  • Department of Engineering Physics
Nikola Stikov
  • Department of Electrical Engineering
John Mullins
  • Department of Computer Science and Software Engineering
Jason Robert Tavares
  • Department of Chemical Engineering
Information
Address
H3T 1J4, Montréal, QC, Canada
Head of institution
Philippe A. Tanguy
Website
http://www.polymtl.ca/
Phone
+1 514-340-4711, ext. 7434