University of Waterloo
  • Waterloo, ON, Canada
Recent publications
Under-approximations of reachable sets and tubes have been receiving growing research attention due to their important roles in control synthesis and verification. Available under-approximation methods applicable to continuous-time linear systems typically assume the ability to compute transition matrices and their integrals exactly, which is not feasible in general, and/or suffer from high computational costs. In this note, we attempt to overcome these drawbacks for a class of linear time-invariant (LTI) systems, where we propose a novel method to under-approximate finite-time forward reachable sets and tubes, utilizing approximations of the matrix exponential and its integral. In particular, we consider the class of continuous-time LTI systems with an identity input matrix and initial and input values belonging to full dimensional sets that are affine transformations of closed unit balls. The proposed method yields computationally efficient under-approximations of reachable sets and tubes, when implemented using zonotopes, with first-order convergence guarantees in the sense of the Hausdorff distance. To illustrate its performance, we implement our approach in three numerical examples, where linear systems of dimensions ranging between 2 and 200 are considered.
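The propagation step underlying such reachability computations can be sketched with a zonotope and a truncated Taylor approximation of the matrix exponential. This is a minimal illustration only: it omits the paper's error bounds and the Minkowski-sum term for inputs, and the system matrix, set sizes, and truncation order are our own assumptions.

```python
import numpy as np

def expm_taylor(A, order=10):
    """Truncated Taylor approximation of the matrix exponential e^A."""
    E = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, order + 1):
        term = term @ A / k
        E = E + term
    return E

class Zonotope:
    """Zonotope {c + G b : ||b||_inf <= 1} with center c and generator matrix G."""
    def __init__(self, center, generators):
        self.c = np.asarray(center, dtype=float)
        self.G = np.asarray(generators, dtype=float)

    def linear_map(self, M):
        # The image of a zonotope under a linear map is again a zonotope
        return Zonotope(M @ self.c, M @ self.G)

# Propagate an initial zonotope through x' = A x over one step dt = 0.1.
# A genuine under-approximation must additionally account for the Taylor
# truncation error and the input set, which are not handled here.
A = np.array([[0.0, 1.0], [-2.0, -0.5]])
X0 = Zonotope([1.0, 0.0], np.eye(2) * 0.1)
Phi = expm_taylor(A * 0.1)   # approximate state-transition matrix
X1 = X0.linear_map(Phi)      # image of the initial set at t = 0.1
```

The zonotope representation is what makes the method computationally efficient: a linear map touches only the center and generators, so the cost grows polynomially with dimension.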
The authors present an approach to conceptualising and predicting environmental conflicts in which conflicts are analysed as a continuum of disagreement over values and options. They also operationalise this approach using an online values-centred survey tool, the ‘public-to-public decision support system’ (P2P-DSS). The authors put values and conflict in environmental management into perspective. Next, they review how values are defined in scholarship and operationalised for decision support. The relevance of values research to conflict management is presented. With reference to a real-world aggregate-mining conflict, the authors demonstrate how P2P-DSS can be used to collect data and categorise conflicts to enhance environmental management decision-making. The authors argue that P2P-DSS has potential to support values-sensitive thinking for environmental conflict management. They then set out research priorities to investigate the theoretical and practical implications of this approach. This work contributes to advancing values research in environmental conflict management and expanding values-based decision-making.
A major portion of a power system's asset portfolio comprises distribution transformers on residential premises. The rapid and massive acceptance of electric vehicles is posing challenges for distribution transformers to operate over their expected lifespan. This work proposes a four-layer framework to assess the real-time and anticipated aging of a distribution transformer and estimate its remaining useful life. The first layer stores residential smart meter data to be utilized for the kVA load estimation of a distribution transformer in the second layer. The performance of two powerful forecasting tools, i.e., Time Series Decomposition and Hidden Markov Model, is compared in the third layer. The historical and forecast data, along with the distribution transformer's thermal parameters, are used for its remaining useful life assessment. Numerical validation is conducted on real-world data utilizing electricity consumption and ambient temperature of fifteen households in London, Ontario, Canada. This work also includes the penetration of the most popular electric vehicles in Canada, along with service drop cable data and practical secondary distribution circuit configuration.
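A load forecast of the kind used in the third layer can be sketched with a classical additive time series decomposition: estimate the trend with a moving average, extract a daily seasonal profile, and extrapolate both. This is a generic sketch, not the authors' pipeline, and the synthetic load values are invented for illustration.

```python
import numpy as np

def decompose_and_forecast(load, period=24, horizon=24):
    """Classical additive decomposition: trend via a period-length moving
    average, seasonal profile via slot-wise means of the detrended series;
    forecast = last trend value + seasonal profile."""
    load = np.asarray(load, dtype=float)
    kernel = np.ones(period) / period
    trend = np.convolve(load, kernel, mode="valid")   # length n - period + 1
    detrended = load[period - 1:] - trend             # align tail with trend
    seasonal = np.array([detrended[i::period].mean() for i in range(period)])
    # detrended[j] corresponds to absolute time index period - 1 + j,
    # so future time t maps to seasonal slot (t - (period - 1)) % period
    start = len(load)
    idx = (np.arange(start, start + horizon) - (period - 1)) % period
    return trend[-1] + seasonal[idx]

# Synthetic hourly kVA load: a flat 50 kVA base plus a daily cycle
hours = np.arange(24 * 14)
load = 50 + 10 * np.sin(2 * np.pi * hours / 24)
forecast = decompose_and_forecast(load, period=24, horizon=24)
```

On this purely periodic input the method recovers the next day's cycle exactly; real smart meter data would of course carry noise and weekly effects the sketch ignores.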
This paper presents an adaptable method for fault current derivative calculation in high voltage direct current (HVDC) grids composed of modular multilevel converters (MMCs). The proposed method can be used for current derivative calculation under different fault scenarios including pole-to-pole, pole-to-ground, and pole-to-metallic. The proposed method is adaptable as it can provide accurate fault current derivatives in various grid topologies such as symmetric monopole, asymmetric monopole with ground or metallic return, and bipole with ground or metallic return, as well as grids with different types of converters with and without fault-blocking capability including full-bridge MMCs (FB-MMCs), half-bridge MMCs (HB-MMCs), or a mix of HB- and FB-MMCs. The paper also demonstrates how the proposed current derivative calculation method can be used to form a derivative relay, which is fast, selective, computationally efficient, and insensitive to fault resistance. Furthermore, using the proposed current derivative calculation method, all relay settings are analytically calculated instead of being obtained through time-consuming simulation studies. Simulation results for various fault scenarios, grid topologies, and converter configurations show that the calculation method is accurate and the presented relaying algorithm can detect various faults within 10 $\mu$s, even when the fault resistance is as high as 500 $\Omega$.
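The core idea of a derivative relay — trip when the measured current derivative exceeds a preset threshold — can be sketched in a few lines. The sampling step, fault ramp, and threshold below are hypothetical placeholders, not values from the paper, and the paper's analytically calculated settings are not reproduced.

```python
import numpy as np

def derivative_relay(current, dt, threshold):
    """Trip when the numerical current derivative |di/dt| exceeds a preset
    threshold. Returns the index of the first tripping sample, or None."""
    di_dt = np.diff(current) / dt
    trips = np.flatnonzero(np.abs(di_dt) > threshold)
    return int(trips[0]) if trips.size else None

dt = 1e-6                       # hypothetical 1 us sampling step
t = np.arange(0, 2e-3, dt)
# Steady 1 A pre-fault current, then a 5e3 A/s ramp after a fault at t = 1 ms
current = np.where(t < 1e-3, 1.0, 1.0 + 5e3 * (t - 1e-3))
trip_idx = derivative_relay(current, dt, threshold=1e3)
```

Because the decision uses di/dt rather than current magnitude, a high fault resistance that merely scales down the fault current affects the derivative far less than it affects an overcurrent criterion, which is the intuition behind the relay's resistance insensitivity.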
The formation of hybrid alternating current (AC)-direct current (DC) systems from converters built by different manufacturers has attracted considerable attention in recent years. In multi-vendor AC-DC systems, the converter stations and their controllers are designed independently due to confidentiality requirements. If the converters have a physical connection at their AC side, unforeseen interactions among adjacent converters may disrupt stability and alter the dynamic performance of the converters from that intended by their designers. This paper contributes to seamlessly integrating converters with independently designed controllers into a multi-VSC (voltage-sourced converter) system. An $H_\infty$ control problem is defined to design two supplementary filters (SFs) per converter, one for the direct (d)-axis and one for the quadrature (q)-axis control loop, to simultaneously stabilize the multi-VSC system and minimize the perturbation of the dynamic response of the interconnected converters from the vendors' designed dynamic behavior. Adding the SFs to the control system of converters will not cause new disruptive interactions, because the coupling dynamics among the converters are considered in designing the SFs. It is also analytically shown that employing the proposed SFs increases the robust stability margin of the multi-VSC system. Various studies based on the nonlinear model of a 2-VSC system verify the effectiveness of the presented method in integrating independently designed converters into a multi-VSC system.
With the wide range of Internet of things (IoT) applications, Federated Learning (FL) is commonly adopted to protect the privacy of IoT data. FL enables privacy-preserving model training while keeping the data locally available. To alleviate the additional load caused by FL, an improved hierarchical aggregation framework is presented in this paper to decentralize the model aggregation tasks based on end-device clusters. However, when applying FL to IoT networks, maintaining high efficiency and reliability remains an open challenge due to the large number and vulnerability of IoT end-devices. In this paper, we propose a blockchain-assisted aggregation scheme for FL in IoT networks, where aggregation node selection is applied for efficiency improvement and blockchain for performance verification. During model aggregation, a selection strategy obtained by the Deep Deterministic Policy Gradient (DDPG) algorithm aims to select the optimal subset of IoT end-devices based on multiple metrics. Furthermore, a new performance verification scheme based on the characteristics of blockchain is applied to achieve mutual verification among a number of untrustworthy nodes with the optimal stopping theory, which provides reliable model performance proofs. Simulation results show that the proposed scheme can maintain FL efficiency and reduce the system latency while protecting data privacy.
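The two FL building blocks — selecting a subset of end-devices by multiple metrics, then aggregating their models — can be sketched as follows. A simple weighted-score ranking stands in for the paper's DDPG policy, and the metric weights, device counts, and model sizes are all invented for illustration.

```python
import numpy as np

def select_devices(metrics, k):
    """Greedy stand-in for the DDPG-based selector: rank end-devices by a
    weighted combination of normalized metrics (e.g. data size, link
    quality, remaining energy) and keep the top-k. Weights are illustrative."""
    scores = metrics @ np.array([0.5, 0.3, 0.2])
    return np.argsort(scores)[::-1][:k]

def fed_avg(weights, sample_counts):
    """Standard FedAvg: sample-count-weighted average of model parameters."""
    counts = np.asarray(sample_counts, dtype=float)
    w = counts / counts.sum()
    return sum(wi * m for wi, m in zip(w, weights))

rng = np.random.default_rng(0)
metrics = rng.random((10, 3))              # 10 devices x 3 normalized metrics
chosen = select_devices(metrics, k=4)
models = [rng.random(5) for _ in chosen]   # local model parameter vectors
global_model = fed_avg(models, sample_counts=[100, 80, 120, 60])
```

In the proposed scheme the aggregated model would additionally be verified on-chain by peer nodes before acceptance; that verification step is outside this sketch.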
Intelligent tire systems are promising solutions for achieving precise vehicle state estimation, localization, and motion control in the context of autonomous driving. Tire cornering properties, namely, lateral force, aligning moment, and pneumatic trail, are crucial factors that should be accurately estimated for vehicle dynamics control purposes. In this work, a soft sensor for estimating tire cornering properties based on an intelligent tire and machine learning is developed. The intelligent tire system is based on a triaxial accelerometer mounted on the inner liner of the tire tread, which provides acceleration measurements in the $x$, $y$, and $z$ directions. Partial least squares and variable importance in the projection scores (PLS-VIP) are used in the feature extraction of the acceleration signals over the contact patch. A Gaussian process regression (GPR) model is trained to predict the cornering properties with confidence intervals under different input conditions. Based on the variances in the GPR predictions and the minimum mean-square error criterion, a data fusion method for pneumatic trail estimation is proposed. It is demonstrated that the developed GPR models for cornering properties and the data fusion method for pneumatic trail estimation have satisfactory accuracy and reliability. The experimental results show that the soft sensor proposed in this work is a strong candidate for further applications in the development of vehicle state estimation and control algorithms.
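Fusing estimates by their predictive variances under a minimum mean-square error criterion reduces, for independent estimates, to standard inverse-variance weighting. The sketch below shows that form; the numeric values are illustrative, not measurements from the paper.

```python
import numpy as np

def fuse_estimates(means, variances):
    """Minimum-MSE fusion of independent unbiased estimates:
    inverse-variance weighting. Returns the fused mean and its variance."""
    means = np.asarray(means, dtype=float)
    variances = np.asarray(variances, dtype=float)
    w = (1.0 / variances) / np.sum(1.0 / variances)
    fused_mean = np.sum(w * means)
    fused_var = 1.0 / np.sum(1.0 / variances)   # always <= min(variances)
    return fused_mean, fused_var

# Two hypothetical pneumatic-trail estimates (metres) with GPR variances:
# the lower-variance estimate dominates the fused value
m, v = fuse_estimates([0.031, 0.027], [1e-4, 4e-4])
```

The fused variance is never larger than the smallest input variance, which is why combining a direct GPR prediction with an indirect estimate can only tighten the confidence interval.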
We study the hardness of the problem of finding the distance of quantum error-correcting codes. The analogous problem for classical codes is known to be NP-hard, even in approximate form. For quantum codes, various problems related to decoding are known to be NP-hard, but the hardness of the distance problem has not been studied before. In this work, we show that finding the minimum distance of stabilizer quantum codes exactly or approximately is NP-hard. This result is obtained by reducing the classical minimum distance problem to the quantum problem, using the CWS framework for quantum codes, which constructs a quantum code using a classical code and a graph. A main technical tool used for our result is a lower bound on the so-called graph state distance of 4-cycle free graphs. In particular, we show that for a 4-cycle free graph G, its graph state distance is either δ or δ + 1, where δ is the minimum vertex degree of G. Due to a well-known reduction from stabilizer codes to CSS codes, our results imply that finding the minimum distance of CSS codes is also NP-hard.
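The two graph quantities the bound relies on — minimum vertex degree and 4-cycle-freeness — are easy to compute directly. The sketch below checks both for a 5-cycle, whose graph state distance is therefore 2 or 3 by the stated bound; it only computes the graph-side quantities, not the distance itself.

```python
from itertools import combinations

def min_degree(adj):
    """Minimum vertex degree of a simple graph given as {vertex: neighbour set}."""
    return min(len(nbrs) for nbrs in adj.values())

def has_4_cycle(adj):
    """A simple graph contains a 4-cycle iff some pair of vertices
    shares at least two common neighbours."""
    for u, v in combinations(adj, 2):
        if len(adj[u] & adj[v]) >= 2:
            return True
    return False

# The 5-cycle C5: 4-cycle free with minimum degree delta = 2, so its
# graph state distance is 2 or 3 by the bound stated in the abstract
C5 = {0: {1, 4}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 0}}
delta = min_degree(C5)
```

The common-neighbour test works because a 4-cycle u-a-v-b visits exactly two vertices (a, b) adjacent to both u and v.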
A line of work has looked at the problem of recovering an input from distance queries. In this setting, there is an unknown sequence s ∈ {0, 1}^{≤n}, and one chooses a set of queries y ∈ {0, 1}^{O(n)} and receives d(s, y) for a distance function d. The goal is to make as few queries as possible to recover s. Although this problem is well-studied for decomposable distances, i.e., distances of the form d(s, y) = Σ_{i=1}^{n} f(s_i, y_i) for some function f, which includes the important cases of Hamming distance, ℓ_p-norms, and M-estimators, to the best of our knowledge this problem has not been studied for non-decomposable distances, for which there are important instances including edit distance, dynamic time warping (DTW), Fréchet distance, earth mover's distance, and others. We initiate the study and develop a general framework for such distances. Interestingly, for some distances such as DTW or Fréchet, exact recovery of the sequence s is provably impossible, and so we show that by allowing the characters in y to be drawn from a slightly larger alphabet this becomes possible. In a number of cases we obtain optimal or near-optimal query complexity. One motivation for understanding non-adaptivity is that the query sequence can be fixed and provide a non-linear embedding of the input, which can be used in downstream applications involving, e.g., neural networks for natural language processing.
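For the decomposable case of Hamming distance the setting is easy to make concrete: n + 1 non-adaptive queries (the all-zeros string plus each unit string) suffice to recover s exactly. This classic warm-up is our own illustration of the query model, not a construction from the paper.

```python
def hamming(a, b):
    """Hamming distance between two equal-length 0/1 sequences."""
    return sum(x != y for x, y in zip(a, b))

def recover(oracle, n):
    """Non-adaptively recover s in {0,1}^n from n + 1 Hamming-distance
    queries. With w = d(s, 0^n) = wt(s), each unit-string query e_i gives
    d(s, e_i) = wt(s) + 1 - 2 s_i, so s_i = (w + 1 - d(s, e_i)) / 2."""
    w = oracle([0] * n)
    s = []
    for i in range(n):
        e = [0] * n
        e[i] = 1
        s.append((w + 1 - oracle(e)) // 2)
    return s

secret = [1, 0, 1, 1, 0, 1]
recovered = recover(lambda y: hamming(secret, y), len(secret))
```

Since every query is fixed in advance, the n + 1 responses form exactly the kind of non-adaptive embedding of the input that the abstract's last sentence refers to.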
Energy leapfrogging (i.e., skipping non-renewable grid infrastructures in favour of micro-grid renewable sources) has been promoted by researchers and politicians as a solution both for fighting climate change and for expanding access to electricity in less developed countries. Despite research on its potential, quantitative measurement of leapfrogging is still required to identify the nations that have realized energy leapfrogging's promise. In this study, we present a quantitative analysis using World Bank Open Database data from 2000 to 2015, creating an aggregated leapfrogging estimate (ALE) from renewable energy consumption (i.e., percentage of total energy consumption) and access to electricity (i.e., percent of total population with access). We defined the ALE by subtracting (renewable consumption % in 2000 / access to electricity % in 2015) from (renewable consumption % in 2015 / access to electricity % in 2000). We included only countries whose renewable energy consumption increased during the study interval. Low-income countries collectively leapfrogged more than other income groups. Somalia (48.11), Togo (3.05), Eswatini (2.76), and Timor-Leste (1.04) all had ALE values greater than 1 (range: 1.7 × 10⁻⁵–48.11). We then conducted a policy analysis of these countries, confirming that all four had implemented renewable energy policies to create access to electricity. Our ALE accurately identified countries with energy leapfrogging, uniquely incorporating access to electricity, consistent with the fundamental purpose of leapfrogging as a strategy to increase access. Future studies are needed to understand why low-income countries with low ALEs and access to electricity failed to leapfrog in the past. Future studies are also required to design prospective quantitative statistical models predicting the outcomes of leapfrogging strategies.
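The ALE definition in the abstract is a single arithmetic expression and can be written as a function directly. The input values below are hypothetical, not actual World Bank figures for any country.

```python
def aggregated_leapfrogging_estimate(renew_2000, renew_2015,
                                     access_2000, access_2015):
    """ALE as defined in the abstract: subtracting
    (renewable % in 2000 / access % in 2015) from
    (renewable % in 2015 / access % in 2000)."""
    return renew_2015 / access_2000 - renew_2000 / access_2015

# Hypothetical country: renewables grew 10% -> 45% while access grew 15% -> 60%
ale = aggregated_leapfrogging_estimate(
    renew_2000=10.0, renew_2015=45.0, access_2000=15.0, access_2015=60.0)
```

Dividing the later renewable share by the earlier access level is what rewards countries that expanded renewables while starting from poor grid access — the leapfrogging signature the estimate is built to capture.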
In this chapter, we review models and methods that incorporate uncertainty in hub location problems. In particular, we present stochastic and robust optimization models to formulate different sources of uncertainty including demand and costs and confer when each approach is best suited for use. We further describe and discuss hybrid modeling approaches and other extensions. We also review the common solution methods that are used to solve hub location models under uncertainty.
Citation: Opoku-Yamoah V, Christian LW, Irving EL, Jones D, McCulloch D, Rose K, Leat SJ. Validation of the Waterloo Differential Visual Acuity Test (WatDAT) and comparison with existing pediatric tests of visual acuity. Transl Vis Sci Technol. 2023;12(9):13, Purpose: The new Waterloo Differential Acuity Test (WatDAT) is designed to allow recognition visual acuity (VA) measurement in children before they can typically undertake matching tests. The study purpose was to validate WatDAT in adults with normal and reduced VA. Methods: Eighty adults (18 to <40 years of age) participated (32 normal VA, 12 reduced VA, and 36 simulated reduced VA). Monocular VA was measured on two occasions in random order for WatDAT (versions with 3 and 5 distractors for Faces and Patti Pics house among circles), Lea Symbols, Kay Pictures and Patti Pics matching tests, Teller Acuity Cards, Cardiff Acuity Test, and Early Treatment Diabetic Retinopathy Study (ETDRS) letter chart. Pediatric tests were validated against ETDRS using limits of agreement (LoA), sensitivity, and specificity. The LoA for repeatability were also determined. Results: WatDAT showed minimal bias compared with ETDRS, with LoA similar to those of the pediatric matching tests (0.241-0.250). Both preferential looking tests showed higher bias and larger LoA than ETDRS. Matching tests showed good agreement with ETDRS, except for Kay Pictures and Lea Uncrowded test, which overestimated VA. WatDAT showed high sensitivity (>0.96) and specificity (>0.79), which improved with criterion adjustment and were significantly higher than for the preferential looking tests. LoA for repeatability for WatDAT 3 Faces and WatDAT 5 Faces were comparable with those of ETDRS. Conclusions: WatDAT demonstrates good agreement and repeatability compared with the gold-standard ETDRS letter chart, and performed better than preferential looking tests, the alternative until a child can undertake a matching VA test.
Wind derivatives are financial instruments designed to mitigate losses caused by adverse wind conditions. With the rapid growth of wind power capacity due to efforts to reduce carbon emissions, the demand for wind derivatives to manage uncertainty in wind power production is expected to increase. However, existing wind derivative literature often assumes normally distributed wind speed, despite the presence of skewness and leptokurtosis in historical wind speed data. This paper investigates how the misspecification of wind speed models affects wind derivative prices and proposes the use of the generalized hyperbolic distribution to account for non-normality. The study develops risk-neutral approaches for pricing wind derivatives using the conditional Esscher transform, which can accommodate stochastic processes with any distribution, provided the moment-generating function exists. The analysis demonstrates that model risk varies depending on the choice of the underlying index and the derivative’s payoff structure. Therefore, caution should be exercised when choosing wind speed models. Essentially, model risk cannot be ignored in pricing wind speed derivatives.
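The model-risk point can be illustrated with a toy Monte Carlo experiment: price the same put-style wind payoff under a symmetric (normal) wind-speed model and under a right-skewed model with the same mean and standard deviation. This is a deliberately simplified sketch — the strike, tick size, rate, and distribution parameters are hypothetical, and it does not reproduce the paper's conditional Esscher transform or generalized hyperbolic model.

```python
import numpy as np

rng = np.random.default_rng(42)

def price(wind_speeds, strike=6.0, tick=100.0, rate=0.02, T=0.25):
    """Discounted Monte Carlo price of a derivative paying
    tick * max(strike - wind_speed, 0) per unit."""
    payoff = tick * np.maximum(strike - wind_speeds, 0.0)
    return float(np.exp(-rate * T) * payoff.mean())

n = 200_000
# Symmetric model: normal wind speed, mean 7 m/s, std 2 m/s
normal_price = price(rng.normal(7.0, 2.0, n))
# Right-skewed model with (approximately) the same mean and std:
# shifted lognormal, mean exp(1.1976 + 0.125) + 3.247 ~ 7, std ~ 2
skewed_price = price(3.247 + rng.lognormal(mean=1.1976, sigma=0.5, size=n))
model_risk = abs(normal_price - skewed_price)
```

Even with matched first and second moments, the skewed model's thinner left tail gives a visibly different price for the same contract, which is the sense in which the choice of wind speed model cannot be ignored.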
Background Population-based studies estimating the epidemiology of paediatric-onset multiple sclerosis (PoMS) are scarce. Methods We accessed population-based health administrative data from two provinces in Canada, Ontario and British Columbia (BC). Individuals with PoMS were identified via a validated case definition. The index date (‘MS onset’) was the first demyelinating or MS specific claim recorded ≤18 years of age. We estimated the age-standardised annual incidence and prevalence of PoMS, and 95% CIs between 2003 and 2019. We used negative binomial regression models to assess the temporal changes in the annual crude incidence and prevalence of PoMS, and the ratios comparing sex groups. Results From 2003 to 2019, a total of 148 incident PoMS cases were identified in BC, and 672 in Ontario. The age-standardised annual incidence of PoMS was stable in both provinces, averaging 0.95 (95% CI 0.79 to 1.13) in BC and 0.98 (95% CI 0.84 to 1.12) in Ontario per 100 000 person-years. The incidence ratio by sex (female vs male) was also stable over the study period, averaging 1.5:1 (95% CI 1.06 to 2.08, BC) and 2.0:1 (95% CI 1.61 to 2.59, Ontario). The age-standardised prevalence per 100 000 people rose from 4.75 (2003) to 5.52 (2019) in BC and from 2.93 (2003) to 4.07 (2019) in Ontario, and the increase was statistically significant in Ontario (p=0.002). There were more female than male prevalent PoMS cases in both provinces. Conclusions Canada has one of the highest rates of PoMS globally, and the prevalence, but not incidence, has increased over time. Allocation of resources to support the growing youth population with MS should be a priority.
Following the general theory of categorified quantum groups developed by the author previously, we construct the Drinfel’d double 2-bialgebra associated to a finite group N = G 0 . For N = ℤ 2 , we explicitly compute the braided 2-categories of 2-representations of a certain version of this Drinfel’d double 2-bialgebra, and prove that they characterize precisely the 4d toric code and its spin- ℤ 2 variant. This result relates the two descriptions (categorical vs. field theoretical) of 4d gapped topological phases in existing literature and displays an instance of higher Tannakian duality for braided 2-categories. In particular, we show that particular twists of the underlying Drinfel’d double 2-bialgebra are responsible for much of the higher-structural properties that arise in 4d topological orders.
Memristive devices with threshold switching characteristics can be effectively utilized to mimic biological neurons acting as one of the key building blocks for constructing advanced hardware neural networks. In this work, the emulation of leaky integrate‐and‐fire memristive neuron is realized in one single cell with Ag/Ag−In−Zn−S/silk sericin/W architecture without the need for additional auxiliary circuits. The studied devices demonstrate excellent electrical properties, such as stably repeatable threshold switching, concentratedly low threshold voltage (≈0.4 V), and relatively small device‐to‐device variation. In addition, multiple neural features, such as leaky integrate‐and‐fire neuron functionality and strength‐modulated spike frequency characteristic, have been successfully emulated owing to the forming‐free volatile threshold switching effect. The stable volatile threshold switching behaviors and regular firing event may be attributed to the controllable metallic Ag filamentary mechanism. Furthermore, a solid accuracy of 91.44% of the pattern recognition of Modified National Institute of Standards and Technology (MNIST) data is obtained via a trained spiking neural network (SNN) based on the leaky integrate‐and‐fire behavior of sericin‐based device. These achievements shed light on the fact that employing sericin biomaterials has great application potential in advanced neuromorphic computation.
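The leaky integrate-and-fire behavior the device emulates, including the strength-modulated spike frequency, can be sketched with the standard LIF equations. The ≈0.4 V threshold comes from the abstract; the membrane time constant, time step, and input currents are hypothetical.

```python
import numpy as np

def lif_neuron(current, dt=1e-3, tau=0.02, v_th=0.4, v_reset=0.0):
    """Leaky integrate-and-fire dynamics: tau * dv/dt = -v + I.
    Emit a spike and reset when v crosses the ~0.4 V threshold
    reported for the sericin-based device."""
    v, spikes = v_reset, []
    for i, I in enumerate(current):
        v += dt * (-v + I) / tau     # leaky integration step
        if v >= v_th:
            spikes.append(i)         # fire, then reset
            v = v_reset
    return spikes

# Stronger input current -> higher firing rate: the strength-modulated
# spike frequency characteristic
weak = lif_neuron(np.full(1000, 0.5))
strong = lif_neuron(np.full(1000, 1.0))
```

The leak term is what distinguishes this from a pure integrator: a sub-threshold input whose steady state -v + I = 0 sits below v_th would never fire, mirroring the volatile (self-resetting) nature of the threshold switching.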
Knowledge about exposure to cannabidiol (CBD) in breastfed infants can provide an improved understanding of potential risk. The aim was to predict CBD exposure in breastfed infants from mothers taking CBD and CBD-containing products. Cannabidiol concentrations in milk previously attained from data collected through an existing human milk research biorepository were used to simulate infant doses and identify subgroups. A developed pediatric physiologically based pharmacokinetic model produced virtual breastfed infants administered the simulated CBD doses. Predicted breastfed infant exposures and upper area under the curve ratios were compared to the lowest therapeutic dose for approved indications in children. The existing human milk research biorepository contained 200 samples from 181 unique breastfeeding mothers for whom self-reported administration data and CBD concentrations had previously been measured. Samples that were above the lower limit of quantification with only one maternal administration type revealed that administration type, i.e., joint/blunt or edible versus oil or pipe, resulted in significantly different subgroups in terms of milk concentrations. Resulting simulated infant doses (ng/kg) were described by lognormal distributions with geometric means and geometric standard deviations: 0.61 ± 2.41 all concentrations, 0.10 ± 0.37 joint/blunt or edible, and 2.23 ± 8.15 oil or pipe. The simulated infant doses produced exposures orders of magnitude lower than those in children aged 4–11 years administered the lowest therapeutic dose for approved indications, and low upper area under the curve ratios. Based on real-world use, breastfeeding infants are predicted to receive very small exposures of CBD through milk. Studies examining adverse reactions will provide further insight into potential risk.
Hydrologic pathways beneath ice sheets and glaciers play an important role in regulating ice flow. Antarctica has experienced, and will continue to experience, changes in ice dynamics and geometry, but the associated changes in subglacial hydrology have received less attention. Here, we use the GlaDS subglacial hydrology model to examine drainage evolution beneath an idealised Antarctic glacier in response to steepening ice surface slopes, accelerating ice velocities and subglacial lake drainages. Ice surface slope changes exerted a dominant influence, redirecting basal water to different outlet locations and substantially increasing channelised discharge crossing the grounding line. Faster ice velocities had comparatively negligible effects. Subglacial lake drainage results indicated that lake refilling times play a key role in drainage system evolution, with lake flux more readily accommodated following shorter refilling times. Our findings are significant for vulnerable Antarctic regions currently experiencing dynamic thinning since subglacial water re-routing could destabilise ice shelves through enhanced sub-shelf melting, potentially hastening irreversible retreat. These changes could also affect subglacial lake activity. We, therefore, emphasise that including a nuanced and complex representation of subglacial hydrology in ice-sheet models could provide critical information on the timing and magnitude of sea-level change contributions from Antarctica.
A suffix tree is a fundamental data structure for string processing and information retrieval; however, its structure is still not well understood. The suffix tree reverse engineering problem, whose study aims to reduce this gap, is the following: given an ordered rooted tree T with unlabeled edges, determine whether there exists a string w such that the unlabeled-edges suffix tree of w is isomorphic to T. Previous studies of this problem assume a binary alphabet and consider the relaxation of having the suffix links as input. This paper is the first to consider the suffix tree detection problem, in which the relaxation of having suffix links as input is removed. We study suffix tree detection in two scenarios that are interesting per se. We provide a suffix tree detection algorithm for general alphabet periodic strings. Given an ordered tree T with n leaves, our detection algorithm takes \(O(n+|\varSigma |^p)\) time, where p is the length, unknown in advance, of a period that repeats at least 3 times in a string S having a suffix tree structure identical to T, if such S exists. Therefore, it is a polynomial-time algorithm if p is a constant and a linear-time algorithm if, in addition, the alphabet has sub-linear size. We also show some necessary (but insufficient) conditions for suffix tree detection of binary alphabet general strings. By this we take another step towards understanding suffix tree structure.
Rheumatoid arthritis (RA) is an autoimmune disease which affects the small joints. Early prediction of RA is necessary for the treatment and management of the disease. The current work presents a deep learning and quantum computing-based automated diagnostic approach for RA in hand thermal imaging. The study’s goals are (i) to develop a custom RANet model and compare its performance with the pretrained models and a quanvolutional neural network (QNN) to distinguish between the healthy subjects and RA patients, and (ii) to validate the performance of the custom model using a feature selection method and classification using machine learning (ML) classifiers. The present study developed a custom RANet model and employed pre-trained models such as ResNet101V2, InceptionResNetV2, and DenseNet201 to classify the RA patients and normal subjects. The deep features extracted from the RANet model are fed into the ML classifiers after the feature selection process. The RANet model, RANet + SVM, and QNN model produced accuracies of 95%, 97%, and 93.33%, respectively, in the classification of healthy groups and RA patients. The developed RANet and QNN models based on thermal imaging could be employed as an accurate automated diagnostic tool to differentiate between the RA and control groups.
Institution pages aggregate content on ResearchGate related to an institution. The members listed on this page have self-identified as being affiliated with this institution. Publications listed on this page were identified by our algorithms as relating to this institution. This page was not created or approved by the institution.
20,461 members
Raymond Louis Legge
  • Department of Chemical Engineering
Sherilyn Houle
  • School of Pharmacy
Derek Besner
  • Department of Psychology
Mark Crowley
  • Department of Electrical & Computer Engineering
George Heckman
  • School of Public Health and Health Systems
200 University Avenue West, N2L 3G1, Waterloo, ON, Canada
+1 (519) 888-4567