Annual Conference on Systems Engineering Research
Disciplinary Convergence: Implications for
Systems Engineering Research
Eds.: Azad M. Madni, Barry Boehm
Daniel A. Erwin, Roger Ghanem; University of Southern California
Marilee J. Wheaton, The Aerospace Corporation
Redondo Beach, CA, March 23-25, 2017
Categorical foundations for system engineering
Spencer Breiner, Eswaran Subrahmanian, Albert Jones
National Institute of Standards and Technology, email@example.com
Carnegie Mellon University, firstname.lastname@example.org
In this paper we argue that category theory (CT), the mathematical theory of abstract processes, could
provide a concrete formal foundation for the study and practice of systems engineering. To provide some
evidence for this claim, we trace the classic V-model of systems engineering, stopping along the way to (a)
introduce elements of CT and (b) show how these might apply in a variety of systems engineering contexts.
Keywords: Category theory, Foundations of system engineering, Mathematical modeling
Systems are becoming more complex, both larger and more interconnected. As computation and
communication in system components go from novelty to the norm, this only becomes more true. In
particular, we have no generally accepted method for designing, testing and analyzing systems which mix
both physical and computational dynamics. We believe that a new formal foundation is required to model
and study such complex systems.
Existing approaches, typified by the V-model of systems engineering, are more heuristic than formal.
First we conceptualize the system, setting our various requirements and assumptions. Next we refine this
into a functional decomposition which details how our system will meet its goals. In realization, we map
these functions to components of our systems. Finally, we integrate these components into a true system,
testing along the way, before releasing the system for operation.
This says what we need to do, but not how to do it. A formal foundation would supplement this
framework with concrete tools and formal methods for accomplishing each step. Our goal in this paper is
to propose a candidate approach for such a foundation, based on a branch of mathematics called
category theory (CT).
We should mention some prior work associating CT and systems engineering. For example, CT is
listed as a foundational approach in the Systems Engineering Body of Knowledge (SEBoK), although
there is little detail associated with the entry. More substantively, Arbib & Manes studied applications
of CT in systems control in the 1970's. This work was largely stymied by the unfamiliarity of categorical
ideas and the lack of good tools for implementing them (on which we will have more to say in the
concluding section).
CT is the mathematical theory of abstract processes, and as such it encompasses both physics and
computation. This alone makes it a good candidate for foundational work on modern systems. As we
proceed, we will also argue for other virtues including expressivity, precision, universality and modularity.
To make our argument, we will trace through the classic V-model of systems engineering,
demonstrating along the way how CT might apply at each step in the process. We have chosen the V-
model not for validity (it oversimplifies) but merely for familiarity.
In tracing the V, we hope to accomplish two things. First, we aim to demonstrate the range of
categorical methods, in order to show that CT might provide a holistic foundation for systems
engineering. Second, and more important, we hope to introduce systems engineers to the language and
methods of CT, and pique the interest of the systems engineering community to investigate further. Our
hope is that one day soon this paper might serve as the preface to a much deeper study that systems
engineers and category theorists might write together.
The first role for CT in systems engineering is as a precise technical language in which to express and
analyze models of systems information, ranging from theoretical predictions to raw data. The key feature
of CT in this respect is its abstraction. We can form categorical models from graphs, from logical
ontologies, from dynamical systems and more, and we can use categorical language to analyze the
relationships and interactions between these. To get a sense of what this looks like, we will model some
simple system architectures and the relationships between them.
The categorical model for an abstract network is remarkably simple:
The first thing to observe is that a category contains two types of entities, called objects and arrows.
Intuitively, we think of these as sets and functions, though they are abstract in the model itself. An
instance of the model replaces abstract objects and arrows with concrete sets and functions. It is not hard
to see that any network can be encoded as an instance of N, as in figure 1.
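To make this concrete, an instance of N can be sketched in a few lines of Python. The particular sets and functions below are invented for illustration; the point is only that an instance replaces the abstract objects (Channel, Node) with sets and the abstract arrows (source, target) with functions.

```python
# The schema N has two objects (Channel, Node) and two arrows
# (source, target: Channel -> Node). An instance assigns a set to
# each object and a function (here, a dict) to each arrow.

Node = {"a", "b", "c"}
Channel = {"c1", "c2", "c3"}

source = {"c1": "a", "c2": "a", "c3": "b"}   # each channel's start node
target = {"c1": "b", "c2": "c", "c3": "c"}   # each channel's end node

def is_valid_instance():
    """Check that source and target are total functions Channel -> Node."""
    return all(source[ch] in Node and target[ch] in Node for ch in Channel)

print(is_valid_instance())  # True: a well-formed instance encodes a network
```

Any directed network can be encoded this way, and the validity check above is exactly the requirement that the instance respects the schema.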
The key difference between categories and directed graphs lies in the construction principles which allow
us to combine the elements of our models. Foremost among these construction principles is arrow
composition; whenever we are given sequential arrows f: A → B and g: B → C, we can build a new arrow g.f: A → C.
Another way to think of this: when we draw categories as directed graphs, the arrows include paths of
edges as well as individual arcs. We also allow paths of length 0, called identities.
To see why this is useful, consider the following simple model for a hierarchy of depth n:
Here the primary structure is the self-arrow parent:Node→Node, which sends each node to the level
above it in the hierarchy. By composing parent with itself we can trace our way up the hierarchy from any given node.
By itself, this is too flexible. There is nothing to ensure that all nodes are part of the same hierarchy
and, even worse, our ``hierarchy'' might contain loops! We can eliminate these worries by demanding that
the parent map is ``eventually constant'': after n repetitions, every node ends up at the same place. This
involves two ingredients: a construction and a path equation.
Fig. 1: Network as an instance of N
Categorical constructions generalize most set theoretic operations such as unions, intersections and
Cartesian products. The terminal object 1 stands in for a singleton set, and allows us to express the
notion of a constant value root ∈ Node. The path equation parent^n = const.root forces the nth parent
of any node to equal root, ensuring a single hierarchy with no loops.
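The eventually-constant condition is easy to check computationally. The sketch below uses an invented four-node hierarchy; the path equation says that iterating parent n times sends every node to root.

```python
# A hierarchy instance: one object Node, one self-arrow parent: Node -> Node.
Node = {"root", "x", "y", "z"}
parent = {"root": "root", "x": "root", "y": "x", "z": "x"}

def iterate(f, n):
    """Compose the arrow f (a dict) with itself n times."""
    def fn(v):
        for _ in range(n):
            v = f[v]
        return v
    return fn

def is_hierarchy(n, root="root"):
    """Path equation parent^n = const.root: every node reaches root in n steps."""
    pn = iterate(parent, n)
    return all(pn(v) == root for v in Node)

print(is_hierarchy(3))  # True: the parent map is eventually constant
```

A parent map containing a loop, or one describing two disjoint trees, would fail this check for every n.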
A more interesting example is the layered architecture L (figure 2), in which channels must conform to
a hierarchy of layers. Here the path equations constrain where channels may occur, while the + and /
constructions express the fact that channels may form either between layers (Γ) or within a layer (∆).
All of these models are fairly trivial. The main point is that the sorts of class modeling which systems
engineers already do is not too far away from a precise formal language. By carefully modeling our
concepts at the early stages of systems engineering we can express requirements more precisely,
identify misconceptions and inconsistencies, and establish concrete domain-specific languages. Best of
all, we get both intuitive graphical presentations like those found in UML/SysML class diagrams without
sacrificing the semantic precision associated with OWL and other formal approaches to ontology.
CT also goes beyond these existing languages. A functor is a mapping between categories; it sends
objects to objects and arrows to (paths of) arrows, without changing the effects of composition. These
maps, along with other constructions like colimits and natural transformations, allow us to explicitly
identify and represent the relationships between individual categorical models, thereby linking them into
larger networks. This allows semantic ontologies to emerge organically from the bottom-up, grounded in
practice, in contrast to the ``upper ontology'' approach (e.g., the Basic Formal Ontology), which tries to
impose semantic structure from the top down.
A simple example is the idea that a hierarchy is a special type of network. This fact can be formalized
as a functor G: N → H. To define G we ask, for each component of N, what plays an analogous role in H?
The translation for Node is clear. In the hierarchy we have one channel for each node, so Channel also
maps to the same object Node. Since each channel maps from a node to its parent, target corresponds
with parent and source with the identity (zero-length path). Putting it all together, we have the functor
depicted in figure 3(a). Similarly, we can identify one hierarchy (of layers L) and two networks (of
channels C and layers L') in the layered architecture, corresponding to the four functors in figure 3(b). We
even have a path equation among these functors, which acknowledges that the network of layers in L is just the
same as the network in H which is constructed from the hierarchy in L.
Fig. 2: Categorical model for layered architectures
Fig. 3: Functors translate between categorical models
The stylized models and relationships presented here are fairly trivial, but the general method of
categorical modeling is quite powerful. By varying the constructions we allow ourselves to use, CT
modeling can range in expressiveness from simple equations to full higher-order logic. For more
thorough introductions to categorical modeling, see the references. The main thing to remember is that
categorical methods provide tools for expressing and relating our formal models.
In the last section we met all the essential elements of category theory--objects and arrows,
composition, identities--except one: the associativity axiom. Given a sequence of three composable
arrows f: A → B, g: B → C and h: C → D, we could first compose at B and then at C, or vice versa. Both should yield the
same result: h.(g.f) = (h.g).f. When applied to processes, this axiom is so obvious it is difficult to
express in English:
Doing f and then g, and then doing h
is the same as
doing f, and then doing g and then h.
Because of this, there is no need to keep track of parentheses when we compose arrows.
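For ordinary functions, the associativity axiom can be verified directly. The three functions below are arbitrary examples chosen for this sketch:

```python
# Associativity of composition: h.(g.f) equals (h.g).f as functions.
def compose(g, f):
    """Serial composition: first apply f, then g."""
    return lambda x: g(f(x))

f = lambda x: x + 1      # f: A -> B
g = lambda x: 2 * x      # g: B -> C
h = lambda x: x - 3      # h: C -> D

left = compose(h, compose(g, f))    # h.(g.f)
right = compose(compose(h, g), f)   # (h.g).f

print(all(left(x) == right(x) for x in range(10)))  # True
```

Both groupings produce the same function, which is why parentheses can safely be dropped.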
This allows us to describe complex processes based on only two pieces of information: (i) the
descriptions of simpler subprocesses and (ii) the way they were chained together. Of course, systems
engineers know that complex emergent phenomena may arise from simple subprocesses. This does not
mean that compositional, categorical mathematics does not apply. Instead, it means that the
compositional representations of such systems may require greater complexity than the naïve models we
might produce from scratch. By demanding compositionality from the outset, we are forced to build
interaction into our models from the ground up!
One important step in this direction is to generalize the sorts of composition that we allow. In fact,
there are many different flavors of category theory, each of which supports a different notion of
composition. The plain categories that we met in the last section allow only unary (single-input) processes
and serial composition. Some varieties like groups, which formalize the mathematics of symmetry, restrict
ordinary categories to obtain simpler structures. Others like process categories and operads add in
additional construction principles like parallel composition and multiple input/output. Through these
constructions, categories axiomatize the most fundamental concepts in systems engineering: resources
and processes .
Fig. 4: Process decomposition as a string diagram
All of these share a common theme of composition and associativity. For groups, this allows us to
describe the way that arbitrary rigid motions can be decomposed into translations and rotations. More
generally, this allows us to express complicated structures in terms of smaller and simpler pieces. It can
also help to show when a chain of complicated operations has a simple and predictable outcome.
Process categories embody the mathematical structure of multi-resource functional
decomposition [7,4]. In the mathematical literature these are often referred to as “traced symmetric
monoidal categories”, but we feel that this nomenclature is too imposing given their simplicity and
importance. One particularly nice feature of these structures is that process categories support a
graphical syntax called string diagrams like the one in figure 4. Completely formal and technically precise,
these diagrams are nevertheless as intuitive and easy-to-read as flow charts.
Where string diagrams represent process flows, another class of structures called operads formalizes
the notion of a parts decomposition. In an operad, the objects are interfaces and the arrows are
“wiring diagrams” which connect a set of small interfaces into one larger component. Here associativity
says that there is only one meaning for the phrase “a system of systems of systems.”
These representations make it easier to talk about relationships across scale. Some or all of the
subprocesses in figure 4 will have their own process decompositions. The only substantive constraint
on these decompositions is that they have the appropriate input and output strings. This leaves us with
one high-level categorical model P for the entire process and several low-level models Q1, …, Qn.
To express the relationship between these, we first combine the low-level pieces into a single
aggregate model Q = Q1 ⨁ ⋯ ⨁ Qn. This involves an operation called a colimit which generalizes set-theoretic
unions; building them requires explicitly representing the overlap between different models. Once we
build the aggregate model, we can then define a functor P → Q which essentially pastes copies of the
smaller diagrams Qi into the appropriate bubbles from P. This identifies an explicit model for the total
high-level process P inside the aggregate low-level model Q. Furthermore, we can also allow multiple
decompositions for a given subprocess, providing a framework for modularity and versioning.
During realization we turn our abstract models into concrete realizations. In spirit, the relationship
between these two is analogous to that between the logician's notions of syntax and semantics.
Roughly speaking, syntax is what we say and semantics is what we mean, or what we are talking about.
Models are like syntax: they describe how a product or system is supposed to work in terms of both
structure (decomposition and component interaction) and behavior (requirement and verification
specifications). Attaching semantics to these models means assigning each syntactic component to some
sort of concrete entity, in a way that mirrors the structure and behavior of the model.
Ultimately these concrete entities will be physical components and functioning source code, but before
we reach that point we must pass through many other, more abstract semantics. These might range from
the formal verification of a critical algorithm to a stochastic model of user behavior, but most have some
flavor of simulation. The motivating example to keep in mind is the simulation of a system in terms of
(discrete, continuous or hybrid) dynamical systems .
The key feature of the logician's semantics is compositionality: if we want to determine the truth of a
complex logical formula, it is enough to look at the truth values of its subformulas. This might seem to fail
for a given dynamical system: just because each component of my system is safe in isolation hardly
guarantees safety of the composite system. Doesn't the existence of emergent phenomena mean that the
behavior of a complex system is not determined by the behavior of its components? This
misunderstanding rests on a conflation of two distinct notions of “behavior”.
We can think of system behavior as a path through some high-dimensional state space; component
behavior is the projection of this path onto the subspace of component parameters. The problem is that
component dynamics in isolation trace out different paths than the projected system dynamics would.
This is why component safety in isolation does not entail system safety, even for the same component
metrics. This also means that there is no hope of composing individual component behaviors to derive the behavior of the composite system.
However, dynamical models, the differential equations which generate these paths, are composable:
we can derive the dynamical equations of a system from the dynamics of its components . The
formula for this derivation will, of course, depend on how the components are connected to one another.
Each diagram like the one in Figure 4 generates its own formula. CT structures this relationship, making
the requirements of compositionality explicit through the language of categories and functors.
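A minimal numerical sketch of this distinction, with invented component dynamics loosely echoing the control loop of figure 4: each component's model is given in isolation, and the wiring (a feedback loop) determines the formula for the composite dynamics.

```python
# Component models given in isolation; the names and constants are
# illustrative, not taken from the paper.

def heater_dynamics(T, u):
    """dT/dt for a heater with temperature T and input power u."""
    return -0.1 * T + u

def controller_output(T, setpoint=20.0):
    """A proportional controller producing power from observed temperature."""
    return 0.5 * (setpoint - T)

def system_dynamics(T):
    """Composite dynamics, derived by substituting the controller's output
    into the heater's input, as the feedback wiring dictates."""
    return heater_dynamics(T, controller_output(T))

# Simulate the closed loop with a simple Euler scheme.
T, dt = 5.0, 0.1
for _ in range(1000):
    T += dt * system_dynamics(T)
print(round(T, 2))  # 16.67: the equilibrium of the composed equation
```

Note that neither component's behavior in isolation (the heater cooling to 0, the controller at full power) predicts the closed-loop trajectory; only the composed dynamical model does.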
Logical semantics involves three main elements: (i) a syntactic model to be interpreted, (ii) an
assignment of syntactic elements to semantic objects, and (iii) a satisfaction relation which determines
whether this assignment meets the requirements of the model. However, traditional logic operates in a
fixed context of sets and functions (deterministic semantics), while CT broadens this to allow stochastic
semantics, dynamical semantics and more. Thus categorical semantics adds one further element, (iv) a
universe of semantic entities.
This approach relies on an important though informal distinction in CT between smaller, ``syntactic''
categories and larger, ``semantic'' categories. Syntactic categories are like the architectural models
described in section 1, built directly from graphs (generators), path equations (relations) and
categorical structure (constructions).
Semantic categories instead use some other formalism, like set theory or matrix algebra, to define the
objects and arrows of a category directly. The prototypical example is the category of sets and functions,
denoted Sets, where composition (and hence path equations) is computed explicitly in terms of the rule
(g.f)(x) = g(f(x)). Many other semantic categories like Graph (graphs and homomorphisms) and Vect
(vector spaces and linear maps) can be constructed from set theoretic entities.
Once we adopt this viewpoint, the relationship between syntax and semantics can be represented as a
functor from one type of category to the other. We have already seen one example of this approach, in
figure 1, where we described a network instance in terms of a pair of functions. This is exactly the same
as a functor N → Sets: we map objects of N to objects of Sets and arrows of N to arrows of Sets (i.e., to
sets and functions).
The satisfaction relation for the semantic interpretation is determined by the preservation of categorical
structure. A good example is the coproduct “+”, used in our model for the layered architecture L (figure 2).
Not all functors L → Sets are semantically valid, only those which map the abstract coproduct Γ + ∆ in L to
a concrete coproduct (disjoint union) in Sets. We say that a model of L should preserve coproducts.
Implicit in any categorical model is a minimal set of construction principles which any valid interpretation is required to preserve.
Once we recognize that the traditional (logical) interpretations for a model M are the structure-
preserving functors M → Sets, we are in an easy position to generalize to a much wider array of
semantics. We have explicitly identified the necessary structural context (e.g., coproducts) in M, so we can
replace Sets by any other category which has these same features. We can use a category Dyn whose
objects are dynamical systems; a functor M → Dyn provides dynamical semantics. There is a category
Prob whose arrows are probabilistic mappings; a functor M → Prob describes stochastic semantics for
M. There is a computational category Type where arrows are algorithms; functors M → Type provide
computational interpretations for M. We can often compose these, for example mapping a model to a
dynamical system, and then mapping this to a computational simulation. Sometimes we can even mix
semantics together, so that in figure 4 we could give dynamical models for Heat and Simmer, a
computational model of Control and a stochastic Measure, and compose these to give a hybrid
dynamical model for the whole system.
The main role of our models in system integration is to collect and manage the tremendous amount of
structured data collected and analyzed during the integration process. This data is necessarily
heterogeneous, multi-scale and dispersed across many models and experts. Categorical models have
several nice features which can support the federation of this data.
First of all, we can regard a finite syntactic category M (like one of the architectural models in section
1) as a database schema [14,19,20]. Roughly speaking, the objects are tables and the arrows are foreign
keys. This means that we can use the models already produced during conceptualization and
decomposition to store the data generated during integration. Formally this depends on the functorial
semantics discussed in the previous section; we can think of an instance of the database as a functor
M → Sets mapping each table to a set of rows. Notice that this approach automatically ties the data that
we produce to our semantic models.
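The schema-as-category idea can be sketched directly; the table and column names below are invented for illustration. Objects become tables, arrows become foreign keys, and functoriality becomes referential integrity.

```python
# An instance of a two-object schema: tables are sets of rows (dicts),
# and the arrow tested_component: Test -> Component is a foreign key,
# interpreted as a function between the row sets.

components = {"C1": {"name": "pump"}, "C2": {"name": "valve"}}
tests = {"T1": {"result": "pass"}, "T2": {"result": "fail"}}

tested_component = {"T1": "C1", "T2": "C2"}

def referential_integrity():
    """Functoriality: every foreign key value must land on an existing row."""
    return all(c in components for c in tested_component.values())

print(referential_integrity())  # True: the instance respects the schema
```

Generating data stores this way ties each record of integration data back to an arrow or object in the conceptual model.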
A more significant challenge is the dispersion of data across many engineers using many different
models. In order to build a holistic picture of our system, we need some way of putting models together
and aggregating the data they contain. The CT approach involves a categorical construction called a
colimit, together with an additional twist.
A colimit is a categorical construction that generalizes unions, allowing us to build new objects by
gluing together old ones. For example, any graph can be constructed using colimits by gluing edges
together at nodes. To integrate two objects using a colimit, we first explicitly identify their overlap as a
third object, along with two maps embedding the overlap into each component. Given this data, the
colimit construction then produces a fourth object together with two maps which embed the original
components into the new object. See figure 5(a).
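A pushout, the simplest nontrivial colimit, can be computed explicitly for finite sets. The sketch below glues two invented sets along an explicit overlap, exactly as in figure 5(a):

```python
# Colimit (pushout) of sets A and B along an overlap O, with maps
# f: O -> A and g: O -> B embedding the overlap into each component.

def pushout(A, B, O, f, g):
    """Return the glued set, as a partition of the disjoint union of A and B."""
    # Start with every element in its own class, tagged by origin.
    classes = {("A", a): {("A", a)} for a in A}
    classes.update({("B", b): {("B", b)} for b in B})
    # Glue f(o) to g(o) for each overlap element o.
    for o in O:
        merged = classes[("A", f[o])] | classes[("B", g[o])]
        for x in merged:
            classes[x] = merged
    return {frozenset(c) for c in classes.values()}

A, B, O = {"a1", "a2"}, {"b1", "b2"}, {"o"}
f, g = {"o": "a1"}, {"o": "b1"}
result = pushout(A, B, O, f, g)
print(len(result))  # 3: a1 and b1 are glued; a2 and b2 remain separate
```

The two embedding maps from A and B into the result are simply the assignments of each element to its class.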
The twist is that, instead of looking at categorical constructions inside our models, now we are
interested in performing colimits with our models. This approach depends on the fact that CT is self-
referential: the methods of CT can be applied to study categories themselves. In particular, there is a
semantic category Cat whose objects are categories and whose arrows are functors. Colimits in this and
related semantic contexts can be used to define model integration. A very simple example is given in figure 5(b).
In fact, we can form colimits from any number of components, so long as we accurately represent their
overlaps (and overlaps of overlaps, etc.), providing a scheme for wider integrations. However,
representing all those overlaps may be inefficient. Another alternative is to integrate serially, adding in
one new model at a time. CT provides us with a language to state and prove that either approach is valid,
and that the two options will yield equivalent results.
As for heterogeneity, CT constructions called sheaves have recently been proposed as ``the canonical
datastructure for sensor integration''. The main idea is that when different sensors capture
overlapping information, it must be restricted or transformed before it can be compared. In the simplest
example, to identify overlapping images we must first crop to their common ground (restriction) before
comparing the results. A simplistic algorithm would ask for perfect agreement on the restriction, but a
more sophisticated integration might allow small differences in shading or perspective (transformation).
We can also compare different types of information, so long as we can project them to a common context;
we might match up audio and video by translating both to time series and looking for common patterns.
CT provides the language and spells out the requirements for translating between contexts in this way.
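The restriction-then-compare pattern behind sheaf-style integration can be sketched with invented sensor data: two readings are compatible if, after restricting to the shared region, they agree up to a tolerance (the "transformation").

```python
# Readings by location; the locations and values are illustrative.
sensor_a = {"p1": 20.1, "p2": 20.3, "p3": 19.8}
sensor_b = {"p2": 20.3, "p3": 19.9, "p4": 21.0}

def restrict(readings, region):
    """Restrict a set of readings to a common region."""
    return {p: v for p, v in readings.items() if p in region}

def compatible(a, b, tol=0.2):
    """Check agreement on the overlap, up to a tolerance."""
    overlap = a.keys() & b.keys()
    return all(abs(a[p] - b[p]) <= tol for p in overlap)

print(compatible(sensor_a, sensor_b))  # True: readings agree on p2 and p3
```

A sheaf structures exactly this bookkeeping: which restrictions exist, and when locally compatible data glues to a global picture.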
Finally, by mixing colimits with functors, we can connect our models across layers of abstraction.
Suppose that H is a model one level of abstraction above that of M and N in figure 5. Both M and N are
more detailed than H, but each only covers half the range. When we put them together, though, they do
cover the same range: every entity of H can be defined by mixing structures from M and from N.
Formally, this means that we can construct a refinement functor H → colim(M, N) which tells us how to
compute high-level characteristics in terms of low-level ones, helping to trace high-level requirements to
their low-level realizations.
Fig. 5: The colimit construction
In operation, systems are never static. Components fail and need to be replaced. New models and
versions require tweaks to existing production and control systems. New technology or regulation changes
the environment in which our systems operate. Because of this, it is critical that our models should be
relatively easy to maintain and update. Here again, categorical methods have some nice features which can help.
One significant challenge in updating a model is that we must take existing data attached to the
original model and shift it over to the new one. Thinking of our models as domain-specific languages, we
must translate our data from one language to another. These processes are often messy and ad hoc, but
categorical constructions can help to structure them.
As we mentioned in the last section, a class-type categorical model N like those discussed in section 1
can be translated more-or-less directly into database schemas [14,19,20] where objects are tables and
arrows are foreign keys. An instance of the database is a functor N → Sets which sends each abstract
table to a concrete set of rows. By generating our data stores directly from models, our data is
automatically tied to its semantics.
We can then use functors to formalize the relationship between old and new models. This will provide
a dictionary to guide our translation. Moreover, expressing the transformations in these terms can help to
organize and explain certain inevitable features of this process.
A good example is the phenomenon of duality between models and data. A meticulous reader will
have noted that, in the discussion of architectural models, we said that “every hierarchy is a special kind
of network”, but then proceeded to define a functor G: N → H. The direction has reversed!
The categorical formulation explains this fact: given a functor G: N → H and an instance I: H → Sets, we
can compose these at H to obtain an instance I.G: N → Sets. So every functor between syntactic models
defines a mapping of instances in the opposite direction. We might call this operation model restriction or
projection, and categorically speaking it is simply composition.
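Model restriction is simple enough to sketch concretely. Assuming a functor from the network schema to the hierarchy schema as in section 1 (sending Channel to Node, target to parent, and source to the identity path), composing it with an invented hierarchy instance yields a network instance:

```python
# An instance I: H -> Sets of the hierarchy schema.
I_Node = {"root", "x", "y"}
I_parent = {"root": "root", "x": "root", "y": "x"}

# Restricting along the functor N -> H gives an instance of the
# network schema N, computed by composition at H:
N_Node = set(I_Node)                 # Node maps to Node
N_Channel = set(I_Node)              # Channel maps to Node: one channel per node
N_source = {n: n for n in I_Node}    # source maps to the identity path
N_target = dict(I_parent)            # target maps to parent

print(sorted(N_target[c] for c in N_Channel))  # ['root', 'root', 'x']
```

No new data was produced: the network instance is obtained purely by relabeling the hierarchy instance along the functor, which is why the mapping of instances runs opposite to the functor itself.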
While composition allows us to restrict data backwards along a functor, subtler and more significant
constructions called Kan extensions allow us to push data in the same direction as a functor. In many
cases, data demanded by the new model will be unavailable in the old; in others, we may split one
concept into two, or vice versa. In all of these cases, Kan extensions provide explicit instructions for
building a “best approximation” to the old data, subordinate to the new schema.
Remarkably, the same operation of Kan extension can also be used to encode quantification in formal
logic and periodic states in dynamical systems. This points to a critically important aspect of
categorical methods: uniformity. The abstraction of CT allows us to apply the same set of tools to a
remarkably diverse set of problems and circumstances.
This can be problematic for beginners: even simple applications of CT may require learning several
abstract constructions. Why bother, when there are easier solutions to this problem or that? The value of
the CT approach only becomes apparent for more substantive problems, where the same familiar tools
can still be applied.
Another nice property of categorical models is modularity, which is supported by the fact that the
colimit construction is a functor. Suppose, for example, that we extend one of the models in figure 5(a) via
a functor N → N′. A categorical construction principle for the colimit then guarantees that we can build a
new map colim(M, N) → colim(M, N′). This allows us to update domain-specific models locally and
then lift these changes to a global context.
More generally, the category theoretic property of naturality (over the diagram of the colimit) encodes
the restrictions which must be satisfied if updates to multiple components are to be consistent with one
another. Other categorical constructions called fibrations have been useful in formalizing more general
bidirectional transformations, where updates may not be consistent with one another [13,9]. In fact, the
elucidation of this concept of naturality was the motivating goal in the original development of CT;
categories and functors were merely the supporting concepts which underpin ``natural transformations''.
Our discussion here has tried to indicate the potential breadth of categorical analysis. In so doing, we
have sacrificed depth in return. There is much more to be said.
One by one, the elements of category theory may not seem so impressive. We already have OWL for
representing semantic information, and good tools for interacting with databases. The UML/SysML
language family allows us to build graphical models and translate them into code stubs for programming.
Modelica and other modeling languages allow us to describe component-based decompositions and link
these to dynamical simulations. R and other software packages provide tools for statistical modeling.
The real value of CT is that it provides a context in which all of these can interact, and a rigorous
language for defining and analyzing those interactions. Now we have a chance to formalize entire
toolchains and workflows: we can agree on a graphical model, produce from it a semantic (logical) model
and populate it with data from an existing schema. We can use that data to derive a dynamical model,
and transform this into a computational simulation before piping the results to statistical software for
analysis. This entire process can be structured by categorical models.
This indicates why systems engineering offers an ideal test bed for the emerging discipline of applied
category theory. First, there is no avoiding the need to employ formal methods from multiple disciplines.
The details of our system exist at different scales and layers of abstraction. The need to interface
between many groups and researchers generates many demands: precise language to prevent
misunderstanding, intuitive (e.g., graphical) representations for easy communication, and structural
modularity for putting these pieces together.
Today, CT can supply plausible suggestions for meeting all of these requirements and more. However,
much work is required to turn this promise into practice. We can identify at least two important obstacles
which have stymied the growth of applied category theory.
First of these is CT’s learning curve, which is undeniably steep, but has become more gentle in recent
years. New textbooks [16,22] targeted at scientists and undergraduates have made the mathematical
ideas more accessible. New applications in areas like chemistry, electrical engineering and
machine learning have broadened the base of examples to more concrete, real-world problems.
A more substantial obstacle is tool support. Today CT can solve many problems at the conceptual
level, but there are few good tools for implementing those solutions. Outside of functional programming
(one of the major successes of CT) most software is academic, and it is neither simple enough nor
powerful enough to address system-scale demands. Addressing this deficiency will require substantial
funding and a concerted effort to bring together mathematicians with domain experts to attack complex, real-world problems.
Fortunately, this requirement is less daunting than it seems. Because CT generalizes many other
formalisms, we should be able to use existing tools to solve categorically formulated problems. By turning
a category into a logical theory we can use an OWL theorem prover for validation. To analyze the
behavior of a functional model, we can derive a Petri net for simulation. By projecting our categorical
models back into existing formalisms, we can piggyback on existing tools and methods. The results of
these analyses can then be lifted back to the categorical level for a holistic appraisal.
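To make the Petri-net translation concrete, the following is a minimal sketch in Python. The net structure, the place and transition names, and the toy “sense → process → actuate” functional model are all illustrative assumptions, not constructions taken from this paper: each step of the model becomes a transition, and the artifacts flowing between steps become places, so that firing transitions simulates the model’s behavior.

```python
from dataclasses import dataclass, field

@dataclass
class PetriNet:
    # marking maps each place name to its current token count
    marking: dict
    # each transition consumes one token per input place and
    # produces one token per output place
    transitions: dict = field(default_factory=dict)  # name -> (inputs, outputs)

    def enabled(self, t):
        ins, _ = self.transitions[t]
        return all(self.marking.get(p, 0) >= 1 for p in ins)

    def fire(self, t):
        if not self.enabled(t):
            raise ValueError(f"transition {t} is not enabled")
        ins, outs = self.transitions[t]
        for p in ins:
            self.marking[p] -= 1
        for p in outs:
            self.marking[p] = self.marking.get(p, 0) + 1

# Toy functional model: raw data is processed into an estimate,
# which in turn drives an actuation command.
net = PetriNet(
    marking={"raw": 1, "estimate": 0, "command": 0},
    transitions={
        "process": (["raw"], ["estimate"]),
        "actuate": (["estimate"], ["command"]),
    },
)
net.fire("process")
net.fire("actuate")
print(net.marking)  # {'raw': 0, 'estimate': 0, 'command': 1}
```

Once the behavior has been simulated at this level, the resulting markings can be mapped back to the categorical model for the holistic appraisal described above.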
We envision an open, CT-based platform for information modeling and analysis. The platform should
support modules for the various CT constructions (e.g., functors, colimits) and translations (OWL, SQL,
Petri nets), which could then be assembled on a case-by-case basis to address specific problems. In the
long run, such a platform would be applicable across many domains, but to get there we first need to drill
down and provide a proof of concept. Systems engineering is the perfect candidate.
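As an illustration of one such module, consider the pushout, the simplest nontrivial colimit: it glues two component models together along a shared interface, identifying exactly those elements the interface maps to. The Python sketch below uses hypothetical element names (a sensor model and a controller model sharing a signal); it is an assumption-laden illustration, not an implementation from this paper.

```python
def pushout(shared, f, g, B, C):
    """Pushout of the span B <-f- shared -g-> C: the disjoint union of B
    and C, quotiented by f(a) ~ g(a) for each a in shared (via union-find)."""
    elems = [("B", b) for b in B] + [("C", c) for c in C]
    parent = {e: e for e in elems}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(x, y):
        parent[find(x)] = find(y)

    # Glue the two images of every shared element.
    for a in shared:
        union(("B", f[a]), ("C", g[a]))

    classes = {}
    for e in elems:
        classes.setdefault(find(e), set()).add(e)
    return list(classes.values())

# Merge a sensor model and a controller model along the shared signal "v".
interface = {"v"}
sensor = {"adc", "v"}        # elements of component model B
controller = {"v", "pid"}    # elements of component model C
merged = pushout(interface, {"v": "v"}, {"v": "v"}, sensor, controller)
print(len(merged))  # 3 classes: adc, the glued signal v, and pid
```

A platform module for colimits would generalize this from finite sets to richer model structures, but the gluing pattern is the same.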
Any mention of commercial products within NIST web pages is for information only; it does not imply
recommendation or endorsement by NIST.
1. Abran A, Moore JW, Bourque P, Dupuis R, Tripp LL. Software engineering body of knowledge. New York: IEEE Computer Society.
2. Arbib M, Manes G. Foundations of system theory: decomposable systems. Automatica 1974; 10(3):285-302.
3. Arp R, Smith B, Spear AD. Building ontologies with basic formal ontology. Cambridge: MIT Press; 2015.
4. Baez J, Stay M. Physics, topology, logic and computation: a Rosetta Stone. In: Coecke B, editor. New Structures for Physics 2011.
Heidelberg: Springer; p. 95-168.
5. Baez J, Fong B. A compositional framework for passive linear networks. arXiv preprint 2015:1504.05625.
6. Breiner S, Subrahmanian E, Jones A. Categorical models for process planning. Under review: Computers and Industry, 2016.
7. Coecke B, Fritz T, Spekkens RW. A mathematical theory of resources. Information and Computation 2014; 250:59-86.
8. Culbertson J, Sturtz K. Bayesian machine learning via category theory. arXiv preprint 2013:1312.1445.
9. Diskin Z. Algebraic models for bidirectional model synchronization. In Czarnecki K, et al. editors. International Conference on Model
Driven Engineering Languages and Systems 2008. Springer, p. 21-36.
10. Diskin Z, Maibaum T. Category theory and model-driven engineering: From formal semantics to design patterns and beyond. In
Cretu LG, Dumitriu F, editors. Model-Driven Engineering of Information Systems: Principles, Techniques, and Practice 2014.
Toronto: Apple; p. 173-206.
11. Eilenberg S, Mac Lane S. General theory of natural equivalences. Trans. AMS 1945; 58(2):231-294.
12. Jacobs B. Categorical logic and type theory. New York: Elsevier; 1999.
13. Johnson M, Rosebrugh R, Wood RJ. Lenses, fibrations and universal translations. Math. Struct. in Comp. Sci. 2012; 22(01):25-42.
14. Johnson M, Rosebrugh R, Wood RJ. Entity-relationship-attribute designs and sketches. Theory and Applications of Categories
15. Lawvere FW. Taking categories seriously. Revista Colombiana de Matematicas 1986; XX:147-178.
16. Lawvere FW, Schanuel SH. Conceptual mathematics: a first introduction to categories. Cambridge: Cambridge University Press; 2009.
17. Mac Lane S, Moerdijk I. Sheaves in geometry and logic: A first introduction to topos theory. New York: Springer Science & Business Media.
18. Robinson M. Sheaves are the canonical datastructure for sensor integration. arXiv preprint 2016:1603.01446.
19. Rosebrugh R, Wood RJ. Relational databases and indexed categories. In Seely RAG, editor. Proceedings of the International
Category Theory Meeting 1991. Providence: Canadian Mathematical Society; 1992 (vol. 13):391-407.
20. Spivak DI. Functorial data migration. Information and Computation 2012; 217:31-51.
21. Spivak DI. The operad of wiring diagrams: Formalizing a graphical language for databases, recursion, and plug-and-play circuits.
arXiv preprint 2013:1305.0297.
22. Spivak DI. Category theory for the sciences. Cambridge: MIT Press; 2014.
23. Spivak DI, Kent RE. Ologs: a categorical framework for knowledge representation. PLoS One 2012; 7(1):e24274.
24. Spivak DI, Vasilakopoulou C, Schultz P. Dynamical systems and sheaves. arXiv preprint 2016:1609.08086.
25. Wisnesky R, Breiner S, Jones A, Spivak DI, Subrahmanian E. Using category theory to facilitate multiple manufacturing service
database integration. J Comput Inf Sci in Eng, In press.