
Gabriel A. Moreno, PhD
Carnegie Mellon University · Software Engineering Institute
About
52 Publications · 6,687 Reads · 1,552 Citations
Additional affiliations
August 2001 - present
Publications (52)
Verifying complex Cyber-Physical Systems (CPS) is increasingly important given the push to deploy safety-critical autonomous features. Unfortunately, traditional verification methods do not scale to the complexity of these systems and do not provide systematic methods to protect verified properties when not all the components can be verified. To ad...
Artifacts support evaluating new research results and help compare them with the state of the art in a field of interest. Over the past years, several artifacts have been introduced to support research in the field of self-adaptive systems. While these artifacts have shown their value, it is not clear to what extent these artifacts support resear...
Today’s world is witnessing a shift from human-written software to machine-learned software, with the rise of systems that rely on machine learning. These systems typically operate in non-static environments, which are prone to unexpected changes, as is the case of self-driving cars and enterprise systems. In this context, machine-learned software...
Self-adaptation improves the resilience of software-intensive systems, enabling them to adapt their structure and behavior to run-time changes (e.g., in workload and resource availability). Many of these approaches reason about the best way of adapting by synthesizing adaptation plans online via planning or model checking tools. This method enables...
Research in self-adaptive systems often uses web applications as target systems, running the actual software on real web servers. This approach has three drawbacks. First, these systems are not easy and/or cheap to deploy. Second, run-time conditions cannot be replicated exactly to compare different adaptation approaches due to uncontrolled factors...
Self-adaptive systems depend on models of themselves and their environment to decide whether and how to adapt, but these models are often affected by uncertainty. While current adaptation decision approaches are able to model and reason about this uncertainty, they do not consider ways to reduce it. This presents an opportunity for improving decisi...
Proactive latency-aware adaptation is an approach for self-adaptive systems that considers both the current and anticipated adaptation needs when making adaptation decisions, taking into account the latency of the available adaptation tactics. Since this is a problem of selecting adaptation actions in the context of the probabilistic behavior of th...
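The abstract above frames proactive latency-aware adaptation as selecting adaptation actions under probabilistic system behavior, which is the setting of a Markov decision process. As a generic illustration only (the states, tactics, transition probabilities, and costs below are invented for this sketch, not taken from the paper), a minimal value-iteration solver:

```python
def value_iteration(states, actions, transition, reward, gamma=0.9, iters=200):
    """Generic value iteration: pick the action maximizing expected
    immediate reward plus discounted future value."""
    V = {s: 0.0 for s in states}
    for _ in range(iters):
        V = {s: max(sum(p * (reward(s, a, s2) + gamma * V[s2])
                        for s2, p in transition(s, a).items())
                    for a in actions)
             for s in states}
    policy = {s: max(actions,
                     key=lambda a: sum(p * (reward(s, a, s2) + gamma * V[s2])
                                       for s2, p in transition(s, a).items()))
              for s in states}
    return V, policy

# Toy self-adaptation model (invented): a server pool that is either
# "ok" or "overloaded"; the "add_server" tactic has a cost and only
# probabilistically resolves the overload, standing in for tactic latency.
STATES = ["ok", "overloaded"]
ACTIONS = ["noop", "add_server"]

def transition(s, a):
    if a == "add_server":
        return {"ok": 0.9, "overloaded": 0.1}
    return {"ok": 0.5, "overloaded": 0.5} if s == "ok" else {"ok": 0.1, "overloaded": 0.9}

def reward(s, a, s2):
    r = -10.0 if s2 == "overloaded" else 0.0   # penalty for being overloaded
    if a == "add_server":
        r -= 1.0                               # cost of executing the tactic
    return r
```

Under this toy model the computed policy triggers the costly tactic whenever the overload penalty outweighs the tactic cost, which is the flavor of trade-off the abstract describes.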
We describe an approach to Statistical Model Checking (SMC) that produces not only an estimate of the probability that specified properties (a.k.a. predicates) are satisfied, but also an “input attribution” for those predicates. We use logistic regression to generate the input attribution as a set of linear and non-linear functions of the inputs th...
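The combination described above, a Monte Carlo probability estimate plus a logistic-regression "input attribution", can be sketched generically. Everything below (the toy stochastic system, sample sizes, learning rate) is illustrative and not from the paper; the attribution here is simply the sign and magnitude of the fitted coefficients:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def simulate(x1, x2, rng):
    # Toy stochastic system: the predicate holds with a probability
    # that increases with x1 and decreases with x2.
    return rng.random() < sigmoid(2.0 * x1 - 1.0 * x2)

def smc_with_attribution(n_samples=2000, seed=42):
    rng = random.Random(seed)
    xs, ys = [], []
    for _ in range(n_samples):
        x1, x2 = rng.uniform(-1, 1), rng.uniform(-1, 1)
        xs.append((x1, x2))
        ys.append(1.0 if simulate(x1, x2, rng) else 0.0)

    # SMC estimate: fraction of simulation runs satisfying the predicate.
    p_hat = sum(ys) / n_samples

    # Input attribution: logistic regression fit by gradient ascent.
    w, b, lr = [0.0, 0.0], 0.0, 0.5
    for _ in range(300):
        gw, gb = [0.0, 0.0], 0.0
        for (x1, x2), y in zip(xs, ys):
            err = y - sigmoid(w[0] * x1 + w[1] * x2 + b)
            gw[0] += err * x1
            gw[1] += err * x2
            gb += err
        w[0] += lr * gw[0] / n_samples
        w[1] += lr * gw[1] / n_samples
        b += lr * gb / n_samples
    return p_hat, w
```

A positive coefficient marks an input that pushes the system toward satisfying the predicate, a negative one marks an input that pushes it away.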
Run-time generation of adaptation plans is a powerful mechanism that helps a self-adaptive system to meet its goals in a dynamically changing environment. In the past, researchers have demonstrated successful use of various automated planning techniques to generate adaptation plans at run time. However, for a planning technique, there is often a tr...
Self-adaptive systems must decide which adaptations to apply and when. In reactive approaches, adaptations are chosen and executed after some issue in the system has been detected (e.g., unforeseen attacks or failures). In proactive approaches, predictions are used to prepare the system for some future event (e.g., traffic spikes during holidays)....
Distributed Adaptive Real-Time (DART) systems are interconnected and collaborating systems that continuously must satisfy guaranteed and highly critical requirements (e.g., collision avoidance), while at the same time adapt, smartly, to achieve best-effort and low-critical application requirements (e.g., protection coverage) when operating in dynam...
Self-adaptive systems tend to be reactive and myopic, adapting in response to changes without anticipating what the subsequent adaptation needs will be. Adapting reactively can result in inefficiencies due to the system performing a suboptimal sequence of adaptations. Furthermore, when adaptations have latency, and take some time to produce their e...
Self-adaptive systems overcome many of the limitations of human supervision in complex software-intensive systems by endowing them with the ability to automatically adapt their structure and behavior in the presence of runtime changes. However, adaptation in some classes of systems (e.g., safety-critical) can benefit by receiving information from h...
Although different approaches to decision-making in self-adaptive systems have shown their effectiveness in the past by factoring in predictions about the system and its environment (e.g., resource availability), no proposal considers the latency associated with the execution of tactics upon the target system. However, different adaptation tactics ca...
Security features are often hardwired into software applications, making it difficult to adapt security responses to reflect changes in runtime context and new attacks. In prior work, we proposed the idea of architecture-based self-protection as a way of separating adaptation logic from application logic and providing a global perspective for reas...
Cyber-physical systems are an emerging class of applications that require tightly coupled interaction between the computational and physical worlds. These systems are typically realized using sensor/actuator interfaces connected with processing backbones. Safety is a primary concern in cyber-physical systems since the actuators directly influence t...
Power consumption is an increasing concern in real-time systems that operate on battery power or require heat dissipation to keep the system at its operating temperature. Today, most processors allow software to change their frequency and voltage of operation to reduce their power consumption. Frequency scaling in real-time systems must be done in...
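The trade-off the abstract alludes to follows from the standard CMOS dynamic-power relation P ≈ C·V²·f: lowering frequency alone stretches execution time and saves no energy per job, while lowering voltage together with frequency does. A small illustrative sketch (the constants are made up, not from the paper):

```python
def dynamic_power(capacitance, voltage, frequency):
    """Classic CMOS dynamic power model: P = C * V^2 * f."""
    return capacitance * voltage ** 2 * frequency

def energy_per_job(cycles, capacitance, voltage, frequency):
    """Energy = power * execution time, with time = cycles / f.
    At fixed voltage the f terms cancel, so energy per job is
    unchanged; it drops quadratically when voltage scales down
    along with frequency."""
    time = cycles / frequency
    return dynamic_power(capacitance, voltage, frequency) * time
```

For example, halving the frequency at the same voltage halves power but doubles execution time, leaving per-job energy unchanged, whereas halving the frequency and dropping the voltage from 1.2 V to 0.9 V cuts per-job energy by (0.9/1.2)² ≈ 44%.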
This paper describes an analysis of some of the challenges facing one portion of the Electrical Smart Grid in the United States - residential Demand Response (DR) systems. The purposes of this paper are twofold: 1) to discover risks to residential DR systems and 2) to illustrate an architecture-based analysis approach to uncovering risks that span...
Software-reliant systems permeate all aspects of modern society. The resulting interconnectedness and associated complexity have led to a proliferation of diverse stakeholders with conflicting goals. Thus, contemporary software engineering is plagued by incentive conflicts, in settling on design features, allocating resources during the develop...
Model interchange approaches support the analysis of software architecture and design by enabling a variety of tools to automatically exchange performance models using a common schema. This paper builds on one of those interchange formats, the Software Performance Model Interchange Format (S-PMIF), and extends it to support the performance analysis...
Large-scale distributed cyber-physical systems will have many sensors/actuators (each with local micro-controllers), and a distributed communication/computing backbone with multiple processors. Many cyber-physical applications will be safety critical and in many cases unexpected workload spikes are likely to occur due to unpredictable changes in th...
In this paper we present a measurement-based approach that produces both a WCET (Worst Case Execution Time) estimate, and a prediction of the probability that a future execution time will exceed our estimate. Our statistical-based approach uses extreme value theory to build a model of the tail behavior of the measured execution time value. We valid...
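Extreme value theory models the tail of the measured execution times; the paper's actual estimator is not reproduced here. As one common, minimal variant, a Gumbel fit by the method of moments, with the exceedance probability and quantile taken from the fitted CDF:

```python
import math

EULER_GAMMA = 0.5772156649015329

def fit_gumbel(samples):
    """Method-of-moments Gumbel fit to measured execution times."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / (n - 1)
    beta = math.sqrt(6.0 * var) / math.pi          # scale
    mu = mean - EULER_GAMMA * beta                 # location
    return mu, beta

def exceedance_probability(t, mu, beta):
    """P(execution time > t) under the fitted Gumbel model."""
    return 1.0 - math.exp(-math.exp(-(t - mu) / beta))

def wcet_estimate(samples, target_prob=1e-6):
    """Execution-time bound exceeded with probability target_prob,
    obtained by inverting the Gumbel CDF at 1 - target_prob."""
    mu, beta = fit_gumbel(samples)
    return mu - beta * math.log(-math.log(1.0 - target_prob))
```

The returned bound typically lies well above the largest observed time, which is the point of a probabilistic WCET: it quantifies how rare an overrun of the bound should be rather than assuming the worst case was observed.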
The Software Engineering Institute (SEI) annually undertakes several independent research and development (IRAD) projects. These projects serve to (1) support feasibility studies investigating whether further work by the SEI would be of potential benefit and (2) support further exploratory work to determine whether there is sufficient value in even...
Model-Driven Engineering (MDE) is an approach to develop software systems by creating models and applying automated transformations to them to ultimately generate the implementation for a target platform. Although the main focus of MDE is on the generation of code, it is also necessary to support the analysis of the designs with respect to quality...
We consider a tactical data network with limited bandwidth, in which each agent is tracking objects and may have value for receiving data from other agents. The agents are self-interested and would prefer to receive data than share data. Each agent has private information about the quality of its data and can misreport this quality and degrade or...
Software components and the technology supporting component based software engineering contribute greatly to the rapid development and configuration of systems for a variety of application domains. Such domains go beyond desktop office applications and information systems supporting e-commerce, and include systems having real-time performance requi...
The PACC Starter Kit is an Eclipse-based development environment that combines a model-driven development approach with reasoning frameworks that apply performance, safety, and security analyses. These analyses predict runtime behavior based on specifications of component behavior and are accompanied by some measure of confidence.
Component containers are a key part of mainstream component technologies, and play an important role in separating nonfunctional concerns from the core component logic. This paper addresses two different aspects of containers. First, it shows how generative programming techniques, using AspectC++ and metaprogramming, can be used to generate stubs a...
Demands for increased functionality, better quality, and faster time-to-market in software products continue to increase. Component-based development is the software industry’s response to these demands. The industry has developed technologies such as EJB and CORBA to assemble components that are created in isolation. Component technologies availab...
Significant economic and technical benefits accrue from the use of pre-existing and commercially available software components to develop new systems. However, challenges remain that, if not adequately addressed, will slow the adoption of software component technology. Chief among these are a lack of consumer trust in the quality of components, and...
One risk inherent in the use of software components has been that the behavior of assemblies of components is discovered only after their integration. The objective of our work is to enable designers to use known (and certified) component properties as parameters to models that can be used to predict assembly-level properties. Our concern in this p...
This report describes the use of prediction-enabled component technology (PECT) as a means of packaging predictable assembly as a deployable product. A PECT results from integrating a component technology with one or more analysis technologies. Analysis technologies allow analysis and prediction of assembly-level properties prior to component assem...
The SEI has been developing a list of scenarios to characterize quality attributes. The SEI has also been conducting Architecture Tradeoff Analysis Method (ATAM) evaluations. One output of an ATAM evaluation is a collection of scenarios that relate to quality attribute requirements for the specific system being evaluated. In this report, we compare...
The Predictable Assembly from Certifiable Components (PACC) Initiative at the Software Engineering Institute (SEI) is developing methods and technologies for predictable assembly. A software development activity that builds systems from components is predictable if the runtime behavior of an assembly of components can be predicted from known proper...
This report develops a queueing-theoretic solution to predict, for a real-time system, the average-case latency of aperiodic tasks managed by a sporadic server. The report applies this theory to a model problem drawn in the domain of industrial robot control. In this model problem, a controller with hard periodic deadlines is "open" to third-party...
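The report's actual queueing model is not reproduced here; as a rough illustration of the kind of prediction involved, an M/M/1-style approximation in which the sporadic server reserves a fraction of the processor for aperiodic work and service is slowed proportionally (all parameters below are illustrative, not from the report):

```python
def average_aperiodic_latency(arrival_rate, mean_service_time, server_fraction):
    """Approximate average latency of aperiodic tasks handled by a
    sporadic server reserving `server_fraction` of the CPU, modeled
    as an M/M/1 queue with proportionally reduced service rate."""
    # Effective service rate seen by aperiodic work.
    effective_rate = server_fraction / mean_service_time
    if arrival_rate >= effective_rate:
        raise ValueError("aperiodic load exceeds server capacity")
    # M/M/1 mean response time: 1 / (mu - lambda).
    return 1.0 / (effective_rate - arrival_rate)
```

For instance, with 2 aperiodic arrivals per second, a 0.1 s mean service time, and half the CPU reserved, the effective service rate is 5 jobs/s and the predicted mean latency is 1/(5 − 2) ≈ 0.33 s; enlarging the server budget shrinks the latency.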
Today, software engineering is concerned less with individual programs than with large-scale networks of interacting programs. For large-scale networks, engineering problems emerge that go well beyond functional correctness (the purview of programming) and encompass equally crucial nonfunctional qualities such as security, performance, availability...