The MathWorks, Inc.
  • Natick, United States
Recent publications
While recent technological developments have contributed to breakthrough advances in single-particle cryo-electron microscopy (cryo-EM), sample preparation remains a significant bottleneck in the structure determination of macromolecular complexes. A critical time factor is sample optimization, which requires the use of an electron microscope to screen grids prepared under different conditions in order to achieve the ideal thickness of particle-containing vitreous ice. Evaluating sample quality therefore requires access to cryo-electron microscopes and strong expertise in EM. To facilitate and accelerate the selection of samples suitable for high-resolution cryo-EM, we devised a method to assess the vitreous ice layer thickness of sample-coated grids. The experimental setup comprises an optical interferometric microscope equipped with a cryogenic stage, together with image analysis software based on artificial neural networks (ANNs) for unbiased sample selection. We present and validate this approach for different protein complexes and grid types, and demonstrate its performance for the assessment of ice quality. The technique is moderate in cost and can easily be performed on a laboratory bench. We expect that its throughput and versatility will help streamline the sample optimization process for structural biologists.
Linear temporal logic (LTL) formulae are widely used to provide high-level behavioral specifications for mobile robots. We propose a decentralized route-planning method for a networked team of mobile robots to satisfy a common (global) LTL specification. The global specification is assumed to be a conjunction of multiple formulae, each of which is treated as a task to be assigned to one or more vehicles. The vehicle kinematic model consists of two modes: a hover mode and a constant-speed forward motion mode with bounded steering rate. The proposed method leverages the consensus-based bundle algorithm for decentralized task assignment, thereby decomposing the global specification into local specifications for each vehicle. We develop a new algorithm to assign collaborative tasks: those simultaneously assigned to multiple vehicles. Rewards in the proposed task assignment algorithm are computed using the so-called lifted graph, which ensures the satisfaction of minimum turn radius constraints on vehicular motion. This algorithm synchronizes the vehicles’ routes with minimum waiting durations. The proposed method is compared against a multi-vehicle task assignment algorithm from the literature, which we extend to the satisfaction of LTL specifications. The proposed route-planning method is demonstrated via numerical simulation examples and a hardware implementation on networked single-board computers.
Loewner matrix pencils play a central role in the system realization theory of Mayo and Antoulas, an important development in data-driven modeling. The eigenvalues of these pencils reveal system poles. How robust are the poles recovered via Loewner realization? With several simple examples, we show how pseudospectra of Loewner pencils can be used to investigate the influence of interpolation point location and partitioning on pole stability, the transient behavior of the realized system, and the effect of noisy measurement data. We include an algorithm to efficiently compute such pseudospectra by exploiting Loewner structure.
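The pole-recovery step the abstract refers to can be sketched in a few lines. The following is an assumed illustration (not the paper's code or data): it builds the Loewner and shifted Loewner matrices from exact samples of a toy transfer function H(s) = 1/((s+1)(s+2)) and recovers its poles as the eigenvalues of the pencil (Ls, L).

```python
# Loewner realization sketch: recover the poles of
# H(s) = 1/((s+1)(s+2)) from four frequency samples.
import math

def H(s):
    return 1.0 / ((s + 1.0) * (s + 2.0))

mu = [1.0, 2.0]   # left interpolation points, v_j = H(mu_j)
lam = [3.0, 4.0]  # right interpolation points, w_i = H(lam_i)
v = [H(m) for m in mu]
w = [H(l) for l in lam]

# Loewner and shifted Loewner matrices (Mayo-Antoulas construction)
L  = [[(v[j] - w[i]) / (mu[j] - lam[i]) for i in range(2)]
      for j in range(2)]
Ls = [[(mu[j] * v[j] - lam[i] * w[i]) / (mu[j] - lam[i]) for i in range(2)]
      for j in range(2)]

# Poles solve det(Ls - s*L) = 0; for a 2x2 pencil this is a quadratic in s.
(a, b), (c, d) = Ls
(e, f), (g, h) = L
A = e * h - f * g
B = -(a * h + d * e) + (b * g + c * f)
C = a * d - b * c
disc = math.sqrt(B * B - 4 * A * C)
poles = sorted([(-B + disc) / (2 * A), (-B - disc) / (2 * A)])
print(poles)  # recovers the true poles -2 and -1
```

With noisy samples, these pencil eigenvalues perturb; that sensitivity is exactly what the pseudospectral analysis in the paper is designed to probe.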
Characterization of rock samples is relevant to hydrocarbon production, geothermal energy, hydrogen storage, waste storage, and carbon sequestration. Image resolution plays a key role in both property estimation and image analysis. However, poor resolution may lead to underestimation of rock properties such as porosity and permeability. Improving the image resolution is therefore paramount. This study presents a workflow for 2D image super-resolution using a Convolutional Neural Network (CNN) method. The rock samples used to test the networks were three unfractured Wolfcamp shales, a Bentheimer sandstone (Guan et al., 2019), and a Vaca Muerta shale (Frouté et al., 2020). These samples were imaged with a clinical Computed Tomography (CT) scanner (resolution on the order of 100s of µm) as well as a microCT scanner (10s of µm), establishing the training, validation, and test data sets. The deep learning architectures were implemented in Matlab 2021b. Network performance is evaluated using two metrics: i) peak signal-to-noise ratio (PSNR) and ii) the structural similarity index measure (SSIM). In addition, porosity values for the image data sets are presented to illustrate their relevance. Training options and different strategies for network tuning are also discussed in the results section. Results illustrate the potential for AI to improve the resolution of CT images by at least a factor of 4. This level of improvement is essential for resolving fractures, other permeable conduits in impermeable shale samples, and shale fabric features. We outline a pathway to further improvements in resolution.
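As an illustration of the first metric, PSNR = 10·log10(MAX²/MSE) can be computed in a few lines of pure Python; the tiny 2x2 "images" and values below are assumptions for illustration, not the study's data.

```python
# Minimal PSNR sketch for equal-size grayscale images on a 0-255 scale.
import math

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two 2D pixel arrays."""
    flat_ref = [p for row in ref for p in row]
    flat_test = [p for row in test for p in row]
    mse = sum((r - t) ** 2 for r, t in zip(flat_ref, flat_test)) / len(flat_ref)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

ref = [[100, 110], [120, 130]]
test = [[110, 120], [130, 140]]   # off by 10 everywhere -> MSE = 100
print(round(psnr(ref, test), 2))  # about 28.13 dB
```

SSIM, the second metric, additionally compares local luminance, contrast, and structure and is typically computed with library routines rather than by hand.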
Ultrasound imaging technology has undergone a revolution during the last decade due to the availability of transducers that can operate over a large range of frequencies, and also due to the availability of high-speed, high-resolution analog-to-digital converters and signal processors. The large data acquisition and computational bandwidth afforded on these portable and bench-top ultrasound imaging systems could potentially be leveraged for designing energy-efficient bio-telemetry links that can be used for communicating with devices implanted in vivo. In this paper, we discuss an ultrasound water-marking approach that can be used for simultaneous imaging and bio-telemetry. The trade-off involves supporting a reasonable data-rate bio-telemetry link while minimizing artifacts due to bio-telemetry data in the B-scan images. Specifically, we exploit a variance-based signal representation where the image background noise is modulated to encode the data to be transmitted or sensed. Using a B-scan ultrasound imaging system and a phantom setup, we show that the approach can support bio-telemetry links at sub-nanowatt transmission power and at depths greater than 10 cm.
A novel supermesh method, previously presented for computing solutions to the multimaterial heat equation in complex stationary geometries with microstructure, is now applied to computing solutions to the Stefan problem involving complex deforming geometries with microstructure. The supermesh is established by combining the underlying (fixed) structured rectangular grid with the (deforming) piecewise linear interfaces reconstructed by the multi-material moment-of-fluid method. The temperature diffusion equation with Dirichlet boundary conditions at interfaces is solved by the linear exact multi-material finite volume method on the supermesh. The interface propagation equation is discretized using the unsplit cell-integrated semi-Lagrangian method. The level set method is also coupled during this process in order to assist in the initialization of the (transient) provisional velocity field. The resulting method is validated on both canonical and challenging benchmark tests. Algorithm convergence results based on grid refinement are reported. It is found that the new method approximates solutions to the Stefan problem efficiently, compared to traditional approaches, due to the localized finite volume approximation stencil derived from the underlying supermesh. The new deforming boundary supermesh approach enables one to compute many kinds of complex deforming boundary problems with the efficiency properties of a body-fitted mesh and the robustness of a “cut-cell” (a.k.a. “embedded boundary” or “immersed”) method.
This paper is concerned with solving, from a learning-based decomposition control viewpoint, the problem of output tracking with nonperiodic tracking-transition switching. Such a nontraditional tracking problem occurs in applications where sessions for tracking a given desired trajectory are alternated with those for transiting the output with given boundary conditions. It is challenging to achieve precision tracking while maintaining smooth tracking-transition switching, as post-switching oscillations can be induced by the mismatch of the boundary states at the switching instants, and the tracking performance can be limited by the nonminimum-phase zeros of the system and affected by factors such as input constraints and external disturbances. Although an approach combining system inversion with optimization techniques has recently been proposed to tackle these challenges, it requires modeling of the system dynamics and complicated online computation, and the resulting controller can be sensitive to model uncertainties. In this work, a learning-based decomposition control technique is developed to overcome these limitations. A dictionary of input-output bases is first constructed offline via data-driven iterative learning. The input-output bases are then used online to decompose the desired output in the tracking sessions and to design an optimal desired transition trajectory with minimal transition time under an input-amplitude constraint. Finally, the control input is synthesized based on the superposition principle and further optimized online to account for system variations and external disturbances. The proposed approach is illustrated through a nanopositioning control experiment on a piezoelectric actuator.
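The superposition step can be sketched as follows, assuming a linear plant and a two-element basis dictionary with made-up signals (not the paper's data): coefficients that decompose the desired output over the basis outputs are found by least squares, and the same coefficients then combine the basis inputs into the control input.

```python
# Offline-learned basis pairs (u_k, y_k): input u_k produced output y_k
# during data-driven iterative learning. All values are illustrative.
y1 = [1.0, 0.0, 1.0]
y2 = [0.0, 1.0, 1.0]
u1 = [0.5, 0.0, 0.5]
u2 = [0.0, 2.0, 2.0]

y_d = [2.0, 3.0, 5.0]  # desired output, chosen to lie in span{y1, y2}

# Solve the 2x2 normal equations for a = (a1, a2) minimizing
# ||a1*y1 + a2*y2 - y_d||^2.
g11 = sum(p * q for p, q in zip(y1, y1))
g12 = sum(p * q for p, q in zip(y1, y2))
g22 = sum(p * q for p, q in zip(y2, y2))
b1 = sum(p * q for p, q in zip(y1, y_d))
b2 = sum(p * q for p, q in zip(y2, y_d))
det = g11 * g22 - g12 * g12
a1 = (b1 * g22 - b2 * g12) / det
a2 = (g11 * b2 - g12 * b1) / det

# Superposition: the same coefficients combine the basis inputs.
u = [a1 * p + a2 * q for p, q in zip(u1, u2)]
print(a1, a2, u)  # a1 = 2, a2 = 3, u = [1.0, 6.0, 7.0]
```

For linear dynamics, applying u then reproduces y_d exactly; the paper's online optimization step further corrects for system variations and disturbances.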
To enable optical interconnect fluidity in next-generation data centers, we propose adaptive transmission based on machine learning in a wavelength-routing network. We consider programmable transmitters that can apply N possible code rates to connections based on predicted bit error rate (BER) values. To classify the BER, we employ a preprocessing algorithm to feed the traffic data to a neural network classifier. We demonstrate the significance of our proposed preprocessing algorithm and the classifier performance for different values of N and switch port count.
Multifunction radars can search for targets, confirm new tracks, and revisit tracks to update the state. To perform these functions, a multifunction radar is often controlled by a resource manager that creates radar tasks for search, confirmation, and tracking. Multi-target tracking often requires sensor resources to be allocated efficiently to update the different targets while continuing to search for new targets to track. In this work, we present a software environment to develop radar resource management algorithms. Resource management and tracking are closely coupled, especially when tracking maneuvering targets that require the radar to revisit the targets more frequently than tracking non-maneuvering targets. We demonstrate how using an Interacting Multiple Model (IMM) filter, which estimates when the target is maneuvering, helps to manage the radar revisit time and therefore minimizes the resource needed for tracking. We show how existing software tools can help develop and test resource management systems.
Multi-target tracking is at the heart of surveillance and autonomous systems. In multi-sensor systems, there are several options for the system-level tracking architectures that span a range of centralization levels. Therefore, the engineers responsible for building the system require a framework that enables the design, testing, and evaluation of various architectures. In previous works, we discussed a set of sensor fusion and tracking algorithms. In this work, we describe an extension to this set that supports the design of tracking system-of-systems. We provide examples that demonstrate the framework in action.
Automatic prostate tumor segmentation often fails to identify the lesion even when multi-parametric MRI data are used as input, and the segmentation output is difficult to verify due to the lack of clinically established ground truth images. In this work we use an explainable deep learning method to interpret the predictions of a convolutional neural network (CNN) for prostate tumor segmentation. The CNN uses a U-Net architecture which was trained on multi-parametric MRI data from 122 patients to automatically segment the prostate gland and prostate tumor lesions. In addition, co-registered ground truth data from whole-mount histopathology images were available for 15 patients, which were used as a test set during CNN testing. To interpret the segmentation results of the CNN, heat maps were generated using the Gradient-weighted Class Activation Map (Grad-CAM) method. The CNN achieved mean Dice-Sorensen coefficients of 0.62 for the prostate gland and 0.31 for the tumor lesions against the radiologist-drawn ground truth, and 0.32 for the tumor lesions against the whole-mount histology ground truth. Dice-Sorensen coefficients between CNN predictions and manual segmentations from MRI and histology data were not significantly different. Within the prostate, the Grad-CAM heat maps could differentiate between tumor and healthy prostate tissue, which indicates that the image information in the tumor was essential for the CNN segmentation.
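The Dice-Sorensen coefficient reported above is 2|A∩B|/(|A|+|B|) on binary masks; a minimal sketch with toy masks (illustrative values, not the study's data):

```python
# Dice-Sorensen coefficient between two binary segmentation masks.
def dice(mask_a, mask_b):
    a = [p for row in mask_a for p in row]
    b = [p for row in mask_b for p in row]
    inter = sum(1 for x, y in zip(a, b) if x == 1 and y == 1)
    size_a = sum(a)
    size_b = sum(b)
    if size_a + size_b == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * inter / (size_a + size_b)

pred = [[0, 1, 1, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0],
        [0, 0, 0, 0]]
truth = [[0, 1, 1, 0],
         [0, 0, 1, 0],
         [0, 0, 1, 0],
         [0, 0, 0, 0]]
print(dice(pred, truth))  # 2*3 / (4+4) = 0.75
```

A score of 1.0 means perfect overlap and 0.0 no overlap, which puts the reported tumor-lesion values of about 0.3 in context.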
The basal ganglia and pontocerebellar systems regulate somesthetic-guided motor behaviors and receive prominent inputs from sensorimotor cortex. In addition, the claustrum and thalamus are forebrain subcortical structures that have connections with somatosensory and motor cortices. Our previous studies in rats have shown that primary and secondary somatosensory cortex (S1 and S2) send overlapping projections to the neostriatum and pontine nuclei, whereas overlap of primary motor cortex (M1) and S1 was much weaker. In addition, we have shown that M1, but not S1, projects to the claustrum in rats. The goal of the current study was to compare these rodent projection patterns with connections in cats, a mammalian species that evolved in a separate phylogenetic superorder. Three different anterograde tracers were injected into the physiologically identified forepaw representations of M1, S1, and S2 in cats. Labeled fibers terminated throughout the ipsilateral striatum (caudate and putamen), claustrum, thalamus, and pontine nuclei. Digital reconstructions of tracer labeling allowed us to quantify both the normalized distribution of labeling in each subcortical area from each tracer injection and the amount of tracer overlap. Surprisingly, in contrast to our previous findings in rodents, we observed M1 and S1 projections converging prominently in striatum and pons, whereas S1 and S2 overlap was much weaker. Furthermore, whereas rat S1 does not project to claustrum, we confirmed dense claustral inputs from S1 in cats. These findings suggest that the basal ganglia, claustrum, and pontocerebellar systems in rat and cat have evolved distinct patterns of sensorimotor cortical convergence.
The periodic QZ (pQZ) algorithm is a basic tool in many applications, including periodic systems, cyclic matrices and matrix pencils, and the solution of skew-Hamiltonian/Hamiltonian eigenvalue problems, which, in turn, is essential in optimal and robust control and in the characterization of dynamical systems. This iterative algorithm operates on a formal product of matrices. The shifts needed to increase the convergence rate are implicitly defined and applied via an embedding of the Wilkinson polynomial. However, the implicit approach may not converge for some periodic eigenvalue problems, since the shifts involved may remain unsuitable indefinitely. A semi-implicit approach was recently proposed with the aim of avoiding convergence failures and reducing the number of iterations. In this approach, the shifts are chosen based on eigenvalues computed explicitly using a special pQZ algorithm for subproblems of order two, and they are enforced via a suitable embedding. This offers degrees of freedom unavailable to the implicit approach. However, a semi-implicit pQZ step is more expensive than an implicit pQZ step, since an inner iterative process is involved. Moreover, the purely semi-implicit approach may also sometimes fail to converge. Alternating implicit and semi-implicit iterations, by contrast, offers the advantages of both approaches, improving the behavior of the pQZ algorithm. The numerical results in extensive tests have shown no convergence failures and a reduced number of iterations.
In the generalized multiple measurement vectors (GMMV) setup of compressive sensing, we consider the problem of jointly recovering multiple sparse signal vectors sharing a common sparsity pattern based on joint analysis of their linear measurements. The important aspect of the GMMV setup is that the linear measurements of different sparse signal vectors are acquired using different sensing matrices. It is not yet well understood how the GMMV setup affects identifiability of sparse signal vectors from the measurements, in particular compared to the single measurement vector (SMV) setup, wherein each sparse signal vector is recovered independently. Gao et al. [1] presented the first theorem on identifiability of sparse signal vectors in the GMMV setup, stating that the GMMV setup allows signal vectors with more nonzero rows to be identifiable than the SMV setup does. In this correspondence, we aim to present a contrasting perspective on sparse signal identifiability in the GMMV setup. In particular, it is shown that the GMMV setup does not result in an improved identifiability condition for some signal vectors even when their entries are diverse.
As decisions in drug development increasingly rely on predictions from mechanistic systems models, assessing the predictive capability of such models is becoming more important. Several frameworks for the development of quantitative systems pharmacology (QSP) models have been proposed. In this paper, we add to this body of work with a framework that focuses on the appropriate use of qualitative and quantitative model evaluation methods. We provide details and references for those wishing to apply these methods, which include sensitivity and identifiability analyses, as well as concepts such as validation and uncertainty quantification. Many of these methods have been used successfully in other fields, but are not as common in QSP modeling. We illustrate how to apply these methods to evaluate QSP models, and propose methods to use in two case studies. We also share examples of misleading results when inappropriate analyses are used.
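As one example of the methods mentioned, a normalized local sensitivity coefficient S = (p/y)(dy/dp) can be estimated with a forward finite difference. The toy steady-state model below is an assumption for illustration, not a real QSP model: steady-state drug level y = dose_rate / clearance.

```python
# Normalized local sensitivity via forward finite differences.
def model(dose_rate, clearance):
    """Toy steady-state exposure model: y = dose_rate / clearance."""
    return dose_rate / clearance

def normalized_sensitivity(f, params, name, rel_step=1e-6):
    """(p/y) * dy/dp: fractional output change per fractional parameter change."""
    base = f(**params)
    bumped = dict(params)
    bumped[name] = params[name] * (1.0 + rel_step)
    dy = f(**bumped) - base
    dp = params[name] * rel_step
    return (params[name] / base) * (dy / dp)

params = {"dose_rate": 10.0, "clearance": 2.0}
for p in params:
    print(p, round(normalized_sensitivity(model, params, p), 3))
# dose_rate -> +1.0 (output scales proportionally)
# clearance -> about -1.0 (output scales inversely)
```

Ranking parameters by |S| is one simple way to decide which parameters most need experimental constraint, a question that also feeds into identifiability analysis.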
Quantum states can acquire a geometric phase called the Berry phase after adiabatically traversing a closed loop, which depends on the path but not the rate of motion. The Berry phase is analogous to the Aharonov–Bohm phase derived from the electromagnetic vector potential, and can be expressed in terms of an Abelian gauge potential called the Berry connection. Wilczek and Zee extended this concept to include non-Abelian phases—characterized by the gauge-independent Wilson loop—resulting from non-Abelian gauge potentials. Using an atomic Bose–Einstein condensate, we quantum-engineered a non-Abelian SU(2) gauge field, generated by a Yang monopole located at the origin of a 5-dimensional parameter space. By slowly encircling the monopole, we characterized the Wilczek–Zee phase in terms of the Wilson loop, which depended on the solid angle subtended by the encircling path: a generalization of Stokes’ theorem. This marks the observation of the Wilson loop resulting from a non-Abelian point source.
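In standard notation (a textbook form, not taken from the paper), the gauge-invariant Wilson loop for a closed path C in parameter space is

```latex
W(\mathcal{C}) \;=\; \operatorname{Tr}\,
\mathcal{P}\exp\!\left(i \oint_{\mathcal{C}} \mathbf{A}\cdot d\mathbf{R}\right),
```

where A is the matrix-valued (non-Abelian) Berry connection with elements A_{mn} = i⟨m|∇_R|n⟩ and 𝒫 denotes path ordering; in the Abelian case the trace reduces to the ordinary Berry phase factor.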
A central goal of condensed-matter physics is to understand how the diverse electronic and optical properties of crystalline materials emerge from the wavelike motion of electrons through periodically arranged atoms. However, more than 90 years after Bloch derived the functional forms of electronic waves in crystals [1] (now known as Bloch wavefunctions), rapid scattering processes have so far prevented their direct experimental reconstruction. In high-order sideband generation [2–9], electrons and holes generated in semiconductors by a near-infrared laser are accelerated to a high kinetic energy by a strong terahertz field, and recollide to emit near-infrared sidebands before they are scattered. Here we reconstruct the Bloch wavefunctions of two types of hole in gallium arsenide at wavelengths much longer than the spacing between atoms by experimentally measuring sideband polarizations and introducing a theory that ties those polarizations to quantum interference between different recollision pathways. These Bloch wavefunctions are compactly visualized on the surface of a sphere. High-order sideband generation can, in principle, be observed from any direct-gap semiconductor or insulator. We thus expect that the method introduced here can be used to reconstruct low-energy Bloch wavefunctions in many of these materials, enabling important insights into the origin and engineering of the electronic and optical properties of condensed matter.