Taylor T Johnson

Vanderbilt University · Department of Electrical Engineering and Computer Science

PhD

About

190
Publications
32,932
Reads
4,185
Citations
Introduction
Taylor directs the Verification and Validation for Intelligent and Trustworthy Autonomy Laboratory (VeriVITAL) at Vanderbilt in EECS and the Institute for Software Integrated Systems (ISIS). His research focuses on developing formal verification techniques and software tools for cyber-physical systems (CPS), along with applications across CPS domains such as power and energy systems, aerospace and avionics systems, automotive systems, transportation systems, and robotics.
Additional affiliations
August 2013 - August 2016
The University of Texas at Arlington
Position
  • Assistant Professor
July 2016 - present
Vanderbilt University
Position
  • Assistant Professor
August 2008 - August 2013
University of Illinois Urbana-Champaign
Position
  • Research Assistant
Education
May 2010 - August 2013
University of Illinois Urbana-Champaign
Field of study
  • Electrical and Computer Engineering
August 2008 - May 2010
University of Illinois Urbana-Champaign
Field of study
  • Electrical and Computer Engineering
August 2004 - May 2008
Rice University
Field of study
  • Electrical and Computer Engineering

Publications (190)
Conference Paper
Full-text available
Rectangular hybrid automata (RHA) are finite state machines with additional skewed clocks that are useful for modeling real-time systems. This paper is concerned with the uniform verification of safety properties of networks with arbitrarily many interacting RHAs. Each automaton is equipped with a finite collection of pointers to other automata tha...
Article
Full-text available
The Simplex architecture ensures the safe use of an unverifiable complex/smart controller by pairing it with a verified safety controller and a verified supervisory controller (switching logic). This architecture allows smart, high-performance, untrusted, and complex control algorithms to enable autonomy without requirin...
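The switching idea in this abstract can be sketched in a few lines. This is a hedged illustration, not the paper's actual supervisory logic: the function names, the one-dimensional state, and the fixed safe envelope [-10, 10] are all assumptions made for the example.

```python
# Minimal sketch of a Simplex-style supervisory controller (hypothetical
# names and dynamics; the verified switching logic in the paper is more
# involved than this one-step envelope check).

def in_safe_envelope(state, cmd):
    """Hypothetical check: would applying cmd keep the next state
    inside the verified safe operating region [-10, 10]?"""
    next_state = state + cmd
    return -10.0 <= next_state <= 10.0

def supervisory_controller(state, smart_cmd, safety_cmd):
    # Use the untrusted high-performance controller only while its
    # output provably keeps the system recoverable; otherwise fall
    # back to the verified safety controller.
    if in_safe_envelope(state, smart_cmd):
        return smart_cmd
    return safety_cmd

print(supervisory_controller(9.0, 5.0, -1.0))  # unsafe smart cmd -> -1.0
print(supervisory_controller(0.0, 1.0, -1.0))  # safe smart cmd -> 1.0
```

The key design point is that only `in_safe_envelope` and the safety controller need formal verification; the smart controller can remain a black box.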
Article
Full-text available
Embedded systems use increasingly complex software and are evolving into cyber-physical systems (CPS) with sophisticated interaction and coupling between physical and computational processes. Many CPS operate in safety-critical environments and have stringent certification, reliability, and correctness requirements. These systems undergo changes th...
Conference Paper
This paper proposes novel reachability algorithms for both exact (sound and complete) and over-approximate (sound) analysis of deep neural networks (DNNs). The approach uses star sets (stars for short) as a symbolic representation of sets of states, providing an effective representation of high-dimensional polytopes. Our star-b...
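A star set pairs an affine map with a polyhedral predicate, Θ = { c + Vα : Pα ≤ d }, and the abstract's key property is that affine layers map stars to stars exactly. The sketch below is a simplified, assumed encoding (the `Star` class and its fields are illustrative, and the ReLU splitting step of the paper's algorithm is omitted):

```python
import numpy as np

class Star:
    """Star set { c + V @ a : P @ a <= d } (simplified illustration)."""
    def __init__(self, c, V, P, d):
        self.c, self.V, self.P, self.d = c, V, P, d

    def affine(self, W, b):
        # A linear layer x -> W @ x + b maps a star to a star exactly:
        # only the center and basis change; the predicate (P, d) carries over.
        return Star(W @ self.c + b, W @ self.V, self.P, self.d)

# Unit box [-1, 1]^2 as a star: identity basis, predicate |a_i| <= 1.
P = np.vstack([np.eye(2), -np.eye(2)])
d = np.ones(4)
box = Star(np.zeros(2), np.eye(2), P, d)

# Pushing the box through a linear layer skews the basis, not the predicate.
W = np.array([[1.0, 1.0], [1.0, -1.0]])
out = box.affine(W, np.zeros(2))
print(out.V)
```

This exactness under affine maps is what makes stars attractive for layer-by-layer DNN reachability; only the nonlinear activations force case splits or over-approximation.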
Conference Paper
Full-text available
Convolutional Neural Networks (CNN) have redefined the state-of-the-art in many real-world applications, such as facial recognition, image classification, human pose estimation, and semantic segmentation. Despite their success, CNNs are vulnerable to adversarial attacks, where slight changes to their inputs may lead to sharp changes in their output...
Preprint
Neural network verification is a new and rapidly developing field of research. So far, the main priority has been establishing efficient verification algorithms and tools, while proper support from the programming language perspective has been considered secondary or unimportant. Yet, there is mounting evidence that insights from the programming la...
Preprint
Full-text available
This report summarizes the 5th International Verification of Neural Networks Competition (VNN-COMP 2024), held as a part of the 7th International Symposium on AI Verification (SAIV), that was collocated with the 36th International Conference on Computer-Aided Verification (CAV). VNN-COMP is held annually to facilitate the fair and objective compari...
Preprint
In recent years, the rise of machine learning (ML) in cybersecurity has brought new challenges, including the increasing threat of backdoor poisoning attacks on ML malware classifiers. For instance, adversaries could inject malicious samples into public malware repositories, contaminating the training data and potentially misclassifying malware by...
Preprint
Full-text available
Image editing technologies are tools used to transform, adjust, remove, or otherwise alter images. Recent research has significantly improved the capabilities of image editing tools, enabling the creation of photorealistic and semantically informed forged regions that are nearly indistinguishable from authentic imagery, presenting new challenges in...
Preprint
Behavior Trees (BTs) are high-level controllers that are useful in a variety of planning tasks and are gaining traction in robotic mission planning. As they gain popularity in safety-critical domains, it is important to formalize their syntax and semantics, as well as verify properties for them. In this paper, we formalize a class of BTs we call St...
Preprint
Behavior Trees (BTs) are high level controllers that have found use in a wide range of robotics tasks. As they grow in popularity and usage, it is crucial to ensure that the appropriate tools and methods are available for ensuring they work as intended. To that end, we created a new methodology by which to create Runtime Monitors for BTs. These mon...
Preprint
Federated Learning (FL) offers a promising solution to the privacy concerns associated with centralized Machine Learning (ML) by enabling decentralized, collaborative learning. However, FL is vulnerable to various security threats, including poisoning attacks, where adversarial clients manipulate the training data or model updates to degrade overal...
Preprint
Federated Learning (FL) shows promise in preserving privacy and enabling collaborative learning. However, most current solutions focus on private data collected from a single domain. A significant challenge arises when client data comes from diverse domains (i.e., domain shift), leading to poor performance on unseen domains. Existing Federated Doma...
Chapter
Information hiding is the process of embedding data within another form of data, often to conceal its existence or prevent unauthorized access. This process is commonly used in various forms of secure communications (steganography) that can be used by bad actors to propagate malware, exfiltrate victim data, and discreetly communicate. Recent work h...
Preprint
In fragile watermarking, a sensitive watermark is embedded in an object in a manner such that the watermark breaks upon tampering. This fragile process can be used to ensure the integrity and source of watermarked objects. While fragile watermarking for model integrity has been studied in classification models, image transformation/generation model...
Article
Recent advancements in federated learning (FL) have greatly facilitated the development of decentralized collaborative applications, particularly in the domain of Artificial Intelligence of Things (AIoT). However, a critical aspect missing from the current research landscape is the ability to enable data-driven client models with symbolic reasoning...
Chapter
Deep Learning success in a wide range of applications, such as image recognition and natural language processing, has led to the increasing usage of this technology in many domains, including safety-critical applications such as autonomous cars and medicine. The usage of the models, e.g., neural networks, in safety critical applications demands a t...
Chapter
Formal verification of neural networks and broader machine learning models is an emerging field that has gained significant attention due to the growing use and impact of these data-driven methods. This track explores techniques for formally verifying neural networks and other machine learning models across various application domains. It includes...
Chapter
Formal verification utilizes a rigorous approach to ensure the absence of critical errors and validate models against predefined properties. While significant progress has been made in verification methods for various deep neural networks (DNNs), such as feed-forward neural networks (FFNNs) and convolutional neural networks (CNNs), the application...
Chapter
As malware threats continue to increase in both complexity and sophistication, the adoption of advanced detection methods, such as deep neural networks (DNNs) for malware classification, has become increasingly vital to safeguard digital infrastructure and protect sensitive data. In order to measure progress in this safety-critical landscape, we pr...
Chapter
Video capturing devices with limited storage capacity have become increasingly common in recent years. As a result, there is a growing demand for techniques that can effectively analyze and understand these videos. While existing approaches based on data-driven methods have shown promise, they are often constrained by the availability of training d...
Chapter
Full-text available
Steganography, or hiding messages in plain sight, is a form of information hiding that is most commonly used for covert communication. As modern steganographic mediums include images, text, audio, and video, this communication method is being increasingly used by bad actors to propagate malware, exfiltrate data, and discreetly communicate. Current...
Chapter
Data-driven, neural network (NN) based anomaly detection and predictive maintenance are emerging as important research areas. NN-based analytics of time-series data provide valuable insights and statistical evidence for diagnosing past behaviors and predicting critical parameters like equipment’s remaining useful life (RUL), state-of-charge (SOC) o...
Article
In this paper, we present a decentralized safe control (DSC) approach for distributed cyber-physical systems based on conducting reachability analysis in real-time. Each agent can periodically compute the local reachable set from its current local time to some time instant in the near future, and then broadcast a message containing the computed rea...
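The collision check underlying such a scheme can be illustrated with a crude one-dimensional over-approximation. This is a hedged sketch under strong assumptions (interval reachable sets from a simple speed bound, hypothetical function names); the paper's real-time reachability analysis is far more general.

```python
# Toy version of the decentralized safety check: each agent
# over-approximates its reachable positions over a horizon as an
# interval and tests the broadcast intervals for overlap.

def reach_interval(pos, speed_bound, horizon):
    # All positions reachable within `horizon` given |velocity| <= speed_bound.
    return (pos - speed_bound * horizon, pos + speed_bound * horizon)

def may_collide(a, b):
    # Overlapping reachable sets mean a collision cannot be ruled out.
    (lo1, hi1), (lo2, hi2) = a, b
    return not (hi1 < lo2 or hi2 < lo1)

r1 = reach_interval(0.0, 1.0, 2.0)   # (-2.0, 2.0)
r2 = reach_interval(5.0, 1.0, 2.0)   # (3.0, 7.0)
print(may_collide(r1, r2))           # disjoint sets -> False
```

Because the intervals over-approximate the true reachable sets, a `False` answer is a sound safety guarantee, while a `True` answer is conservative and may be a false alarm.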
Preprint
Full-text available
Boolean Satisfiability (SAT) and Satisfiability Modulo Theories (SMT) are widely used in automated verification, but there is a lack of interactive tools designed for educational purposes in this field. To address this gap, we present EduSAT, a pedagogical tool specifically developed to support learning and understanding of SAT and SMT solving. Edu...
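In the pedagogical spirit of the abstract, the SAT problem itself fits in a few lines of brute force. This sketch is illustrative only and does not reflect EduSAT's actual interface; the DIMACS-style clause encoding (positive integer k for variable k, negative for its negation) is a standard convention, not taken from the tool.

```python
from itertools import product

def brute_force_sat(num_vars, clauses):
    """Exhaustive SAT check for teaching purposes.
    clauses: list of lists of nonzero ints (DIMACS-style literals).
    Returns a satisfying assignment {var: bool}, or None if UNSAT."""
    for bits in product([False, True], repeat=num_vars):
        assign = {i + 1: bits[i] for i in range(num_vars)}
        # A clause is satisfied if any literal matches the assignment.
        if all(any(assign[abs(l)] == (l > 0) for l in c) for c in clauses):
            return assign
    return None

# (x1 or x2) and (not x1 or x2): satisfiable, forcing x2 = True.
print(brute_force_sat(2, [[1, 2], [-1, 2]]))
# x1 and not x1: unsatisfiable.
print(brute_force_sat(1, [[1], [-1]]))  # None
```

Real solvers replace this 2^n enumeration with unit propagation and conflict-driven clause learning, which is precisely the gap a teaching tool like EduSAT aims to make visible.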
Preprint
Full-text available
Data-driven, neural network (NN) based anomaly detection and predictive maintenance are emerging research areas. NN-based analytics of time-series data offer valuable insights into past behaviors and estimates of critical parameters like remaining useful life (RUL) of equipment and state-of-charge (SOC) of batteries. However, input time series data...
Chapter
Full-text available
This manuscript presents the updated version of the Neural Network Verification (NNV) tool. NNV is a formal verification software tool for deep learning models and cyber-physical systems with neural network components. NNV was first introduced as a verification framework for feedforward and convolutional neural networks, as well as for neural netwo...
Article
Full-text available
This paper presents a summary and meta-analysis of the first three iterations of the annual International Verification of Neural Networks Competition (VNN-COMP), held in 2020, 2021, and 2022. In the VNN-COMP, participants submit software tools that analyze whether given neural networks satisfy specifications describing their input-output behavior....
Preprint
This paper presents a summary and meta-analysis of the first three iterations of the annual International Verification of Neural Networks Competition (VNN-COMP) held in 2020, 2021, and 2022. In the VNN-COMP, participants submit software tools that analyze whether given neural networks satisfy specifications describing their input-output behavior. T...
Preprint
Full-text available
This report summarizes the 3rd International Verification of Neural Networks Competition (VNN-COMP 2022), held as a part of the 5th Workshop on Formal Methods for ML-Enabled Autonomous Systems (FoMLAS), which was collocated with the 34th International Conference on Computer-Aided Verification (CAV). VNN-COMP is held annually to facilitate the fair...
Article
Neural network approximations have become attractive to compress data for automation and autonomy algorithms for use on storage-limited and processing-limited aerospace hardware. However, unless these neural network approximations can be exhaustively verified to be safe, they cannot be certified for use on aircraft. An example of such systems is th...
Article
Safety is a critical concern for the next generation of autonomy that is likely to rely heavily on deep neural networks for perception and control. Formally verifying the safety and robustness of well-trained DNNs and learning-enabled cyber-physical systems (Le-CPS) under adversarial attacks, model uncertainties, and sensing errors is essential for...
Chapter
Reinforcement Learning (RL) depends critically on how reward functions are designed to capture intended behavior. However, traditional approaches are unable to represent temporal behavior, such as “do task 1 before doing task 2.” In the event they can represent temporal behavior, these reward functions are handcrafted by researchers and often requi...
Chapter
Full-text available
Behavior Trees, which originated in video games as a method for controlling NPCs but have since gained traction within the robotics community, are a framework for describing the execution of a task. BehaVerify is a tool that creates a nuXmv model from a py_tree. For composite nodes, which are standardized, this process is automatic and requires no...
Chapter
Continuous deep learning models, referred to as Neural Ordinary Differential Equations (Neural ODEs), have received considerable attention over the last several years. Despite their burgeoning impact, there is a lack of formal analysis techniques for these systems. In this paper, we consider a general class of neural ODEs with varying architectures...
Chapter
Safety is a critical concern for the next generation of autonomy that is likely to rely heavily on deep neural networks for perception and control. This paper proposes a method to repair unsafe ReLU DNNs in safety-critical systems using reachability analysis. Our repair method uses reachability analysis to calculate the unsafe reachable domain of a...
Preprint
Behavior Trees, which originated in video games as a method for controlling NPCs but have since gained traction within the robotics community, are a framework for describing the execution of a task. BehaVerify is a tool that creates a nuXmv model from a py_tree. For composite nodes, which are standardized, this process is automatic and requires no...
Preprint
Full-text available
Satisfiability Modulo Theories (SMT) solvers have been successfully applied to solve many problems in formal verification such as bounded model checking (BMC) for many classes of systems from integrated circuits to cyber-physical systems. Typically, BMC is performed by checking satisfiability of a possibly long, but quantifier-free formula. However...
Preprint
Full-text available
This work in progress paper introduces robustness verification for autoencoder-based regression neural network (NN) models, following state-of-the-art approaches for robustness verification of image classification NNs. Despite the ongoing progress in developing verification methods for safety and robustness in various deep neural networks (DNNs), r...
Preprint
Full-text available
Continuous deep learning models, referred to as Neural Ordinary Differential Equations (Neural ODEs), have received considerable attention over the last several years. Despite their burgeoning impact, there is a lack of formal analysis techniques for these systems. In this paper, we consider a general class of neural ODEs with varying architectures...
Article
Full-text available
This work in progress paper introduces robustness verification for autoencoder-based regression neural network (NN) models, following state-of-the-art approaches for robustness verification of image classification NNs. Despite the ongoing progress in developing verification methods for safety and robustness in various deep neural networks (DNNs), r...
Preprint
Full-text available
Reinforcement Learning (RL) has become an increasingly important research area as the success of machine learning algorithms and methods grows. To combat the safety concerns surrounding the freedom given to RL agents while training, there has been an increase in work concerning Safe Reinforcement Learning (SRL). However, these new and safe methods...
Preprint
Full-text available
Recent advances in machine learning technologies and sensing have paved the way for the belief that safe, accessible, and convenient autonomous vehicles may be realized in the near future. Despite tremendous advances within this context, fundamental challenges around safety and reliability are limiting their arrival and comprehensive adoption. Auto...
Article
Automata-based modeling of hybrid and cyber-physical systems (CPS) is an important formal abstraction amenable to algorithmic analysis of its dynamic behaviors, such as in verification, fault identification, and anomaly detection. However, for realistic systems, especially industrial ones, identifying hybrid automata is challenging, due in part to...
Article
Full-text available
Dynamic mode decomposition (DMD) has become synonymous with the Koopman operator, where continuous time dynamics are discretized and examined using Koopman (i.e. composition) operators. Using the newly introduced “occupation kernels,” the present manuscript develops an approach to DMD that treats continuous time dynamics directly through the Liouvi...
Preprint
Full-text available
This report summarizes the second International Verification of Neural Networks Competition (VNN-COMP 2021), held as a part of the 4th Workshop on Formal Methods for ML-Enabled Autonomous Systems that was collocated with the 33rd International Conference on Computer-Aided Verification (CAV). Twelve teams participated in this competition. The goal o...
Article
Verification has emerged as a means to provide formal guarantees on learning-based systems incorporating neural networks before using them in safety-critical applications. This paper proposes a new verification approach for deep neural networks (DNNs) with piecewise linear activation functions using reachability analysis. The core of our approach is...
Preprint
Full-text available
Safety is a critical concern for the next generation of autonomy that is likely to rely heavily on deep neural networks for perception and control. Formally verifying the safety and robustness of well-trained DNNs and learning-enabled systems under attacks, model uncertainties, and sensing errors is essential for safe autonomy. This research propos...
Chapter
This paper introduces robustness verification for semantic segmentation neural networks (in short, semantic segmentation networks [SSNs]), building on and extending recent approaches for robustness verification of image classification neural networks. Despite recent progress in developing verification methods for specifications such as local advers...
Preprint
Full-text available
Deep convolutional neural networks have been widely employed as an effective technique to handle complex and practical problems. However, one of the fundamental problems is the lack of formal methods to analyze their behavior. To address this challenge, we propose an approach to compute the exact reachable sets of a network given an input domain, w...
Conference Paper
Full-text available
This paper introduces robustness verification for semantic segmentation networks (SSNs), building on and extending recent approaches for robustness verification of image classification neural networks. Despite recent progress in developing verification methods for specifications such as local adversarial robustness in deep neural networks in terms...
Article
Full-text available
Modern cyber-physical microgrids rely on the information exchanged among power electronics devices (i.e., converters or inverters with local embedded controllers) making them vulnerable to cyber manipulations. The physical devices themselves are susceptible to potential faults and failures. Effects of these cyber and physical anomalies can propagat...
Conference Paper
Aviation has a remarkable safety record ensured by strict processes, rules, certifications, and regulations, in which formal methods have played a role in large companies developing commercial aerospace vehicles and related cyber-physical systems (CPS). This has not been the case for small Unmanned Aircraft Systems (UAS) that are still largely unre...
Conference Paper
Neural network approximations have become attractive to compress data for automation and autonomy algorithms for use on storage-limited and processing-limited aerospace hardware. However, unless these neural network approximations can be exhaustively verified to be safe, they cannot be certified for use on aircraft. This manuscript evaluates the s...
Article
Full-text available
This paper presents an overview survey of verification techniques for autonomous systems, with a focus on safety-critical autonomous cyber-physical systems (CPS) and subcomponents thereof. Autonomy in CPS is enabled by recent advances in artificial intelligence (AI) and machine learning (ML) through approaches such as deep neural networks (DNNs), e...
Chapter
Full-text available
This paper presents the Neural Network Verification (NNV) software tool, a set-based verification framework for deep neural networks (DNNs) and learning-enabled cyber-physical systems (CPS). The crux of NNV is a collection of reachability algorithms that make use of a variety of set representations, such as polyhedra, star sets, zonotopes, and abst...
Chapter
Full-text available
Convolutional Neural Networks (CNN) have redefined state-of-the-art in many real-world applications, such as facial recognition, image classification, human pose estimation, and semantic segmentation. Despite their success, CNNs are vulnerable to adversarial attacks, where slight changes to their inputs may lead to sharp changes in their output in...
Chapter
Full-text available
Neural networks provide quick approximations to complex functions, and have been increasingly used in perception as well as control tasks. For use in mission-critical and safety-critical applications, however, it is important to be able to analyze what a neural network can and cannot do. For feed-forward neural networks with ReLU activation functio...