Chapter

Synthetic kinds: Kind-making in synthetic biology


Abstract

We focus on the preliminary steps and processes of knowledge production which are prerequisite to the construction or identification of ontologies of parts within synthetic biology. Biological parts repositories serve as a common resource where synthetic biologists can obtain physical samples of DNA together with descriptive data about those samples. Perhaps the best example of a biological parts repository is the iGEM Registry of Standard Biological Parts. These parts have been classified into collections, some labeled with engineering terms (e.g., chassis, receiver), some labeled with biological terms (e.g., proteindomain, binding), and some labeled with vague generality (e.g., classic, direction). Descriptive catalogues appear to furnish part-specific knowledge and individuation criteria that allow us to individuate them as parts. Repositories catalogue parts. It seems straightforward enough to understand what is contained within the repository in terms of the general concept: part. But what are we doing when we describe something as being a part? In this paper, we investigate some problems arising from the varied descriptions of parts contained in different repositories. Following this, we outline problems that arise with naming and tracking parts within and across repositories and explore how the comparison of parts across different databases might be facilitated. This focuses on computational models currently being sought that would allow practitioners to capture information and meta-information relevant to answering particular questions through the construction of similarity measures for different biological ontologies. We conclude by discussing the social and normative aspects of part-making and kind-making in synthetic biology.


... These 'endolichenic' fungi are also part of healthy lichens and associate closely with the algal partner within the lichen thalli but are distinct from the mycobiont (Arnold et al., 2009, p. 283). They found that these endolichenic fungi played a significant role in lichen evolution and speciation: 'endolichenism appears to have served as an evolutionary source for transitions to parasitic/pathogenic, ...
[1] Although an elaboration of grey nomenclatures in synthetic biology is beyond the scope of the present paper, a focused discussion of the individuation and comparison of synthetic parts and synthetic kinds across different repositories including SBOL and Sequence Ontology can be found in Kendig & Bartley, 2019.
[2] For a detailed history of the dissent and uptake of Schwendener's dual hypothesis of lichens, see Honegger, 2000.
[3] For other objections to the 1950 revision to the Code, see also especially Ciferri & Tomaselli, 1955, pp. ...
Article
Ethnobotanical research provides ample justification for comparing diverse biological nomenclatures and exploring ways that retain alternative naming practices. However, how (and whether) comparison of nomenclatures is possible remains a subject of discussion. The comparison of diverse nomenclatural practices introduces a suite of epistemic and ontological difficulties and considerations. Different nomenclatures may depend on whether the communities using them rely on formalized naming conventions; cultural or spiritual valuations; or worldviews. Because of this, some argue that the different naming practices may not be comparable if the ontological commitments employed differ. Comparisons between different nomenclatures cannot assume that either the naming practices or the object to which these names are intended to apply identifies some universally agreed upon object of interest. Investigating this suite of philosophical problems, I explore the role grey nomenclatures play in classification. 'Grey nomenclatures' are defined as those that employ names that are either intentionally or accidentally non-Linnaean. The lichen thallus (a symbiont) has been classified outside the Linnaean system by botanists relying on the International Code of Nomenclature for algae, fungi, and plants (ICN). But, I argue, the use of grey names is not isolated and does not occur exclusively within institutionalized naming practices. I suggest 'grey names' also aptly describe nomenclatures employed by indigenous communities such as the Sámi of northern Finnmark, the Sherpa of Nepal, and the Okanagan First Nations. I pay particular attention to how naming practices are employed in these communities; what ontological commitments they hold; for what purposes these names are used; and what anchors the community's nomenclatural practices.
Exploring the history of lichen naming and early ethnolichenological research, I then investigate the stakes that must be considered for any attempt to preserve, retain, integrate, or compare the knowledge contained in both academically formalized grey names and indigenous nomenclatures in a way that preserves their source-specific informational content.
... For Kendig, this has included research on the comparability and translatability of the varied names and descriptions of parts contained within different synthetic biology repositories. She has critically assessed the problems arising with naming and tracking of parts within and across repositories, but also how comparisons across different databases might be facilitated using computational models that capture information and meta-information as similarity measures for different biological ontologies (Kendig 2016a; Kendig and Bartley 2019). In these, Kendig focuses on the inextricability of epistemological and ontological activities required for data-categorizations in synthetic biology. ...
Article
Full-text available
We undeniably live in an information age—as, indeed, did those who lived before us. After all, as the cultural historian Robert Darnton pointed out: ‘every age was an age of information, each in its own way’ (Darnton 2000: 1). Darnton was referring to the news media, but his insight surely also applies to the sciences. The practices of acquiring, storing, labeling, organizing, retrieving, mobilizing, and integrating data about the natural world have always been an enabling aspect of scientific work. Natural history and its descendant discipline of biological taxonomy are prime examples of sciences dedicated to creating and managing systems of ordering data. In some sense, the idea of biological taxonomy as an information science is commonplace. Perhaps it is because of its self-evidence that the information science perspective on taxonomy has not been a major theme in the history and philosophy of science. The botanist Vernon Heywood once pointed out that historians of biology, in their ‘preoccupation with the development of the sciences of botany and zoology… [have] diverted attention from the role of taxonomy as an information science’ (Heywood 1985: 11). More specifically, he argued that historians had failed to appreciate how principles and practices that can be traced to Linnaeus constituted ‘a change in the nature of taxonomy from a local or limited folk communication system and later a codified folk taxonomy to a formal system of information science [that] marked a watershed in the history of biology’ (ibid.). A similar observation could be made about twentieth-century philosophy of biology, which mostly skipped over practical and epistemic questions about information management in taxonomy. The taxonomic themes that featured in the emerging philosophy of biology literature in the second half of the twentieth century were predominantly metaphysical in orientation.
This is illustrated by what has become known as the ‘essentialism story’: an account about the essentialist nature of pre-Darwinian taxonomy that used to be accepted by many historians and philosophers, and which stimulated efforts to document and interpret shifts in the metaphysical understanding of species and (natural) classification (Richards 2010; Winsor 2003; Wilkins 2009). Although contemporary debates in the philosophy of taxonomy have moved on, much discussion continues to focus on conceptual and metaphysical issues surrounding the nature of species and the principles of classification. Discussions centring on whether species are individuals, classes, or kinds have sprung up as predictably as perennials. Raucous debates have arisen even with the aim of accommodating the diversity of views: is monism, pluralism, or eliminativism about the species category the best position to take? In addition to these, our disciplines continue to interrogate the nature of different approaches to classification: what are the representational and inferential roles of the various approaches (evolutionary taxonomy, phenetics, phylogenetic systematics)? While there is still much to learn from these discussions—in which we both actively participate—our aim with this topical collection has been to seek different entry points and address underexposed themes in the history and philosophy of taxonomy. We believe that approaching taxonomy as an information science prompts new questions and can open up new philosophical vistas worth exploring. A twenty-first century information science turn in the history and philosophy of taxonomy is already underway. In scientific practice and in daily life it is hard to escape the imaginaries of Big Data and the constant threats of being ‘flooded with data’.
In the life sciences, these developments are often associated with the so-called bioinformatics crisis that can hopefully be contained by a new, interdisciplinary breed of bioinformaticians. These new concepts, narratives, and developments surrounding the centrality of data and information systems in the biological and biomedical sciences have raised important philosophical questions about their challenges and implications. But historical perspectives are just as necessary to judge what makes our information age different from those that preceded us. Indeed, as the British zoologist Charles Godfray has often pointed out, the piles of data that are being generated in contemporary systematic biology have led to a second bioinformatics crisis, the first being the one that confronted Linnaeus in the mid-18th century (Godfray 2007). Although our aim is to clear a path for new discussions of taxonomy from an information science-informed point of view, we continue where others in the history, philosophy, and sociology of science have already trod. We believe that an appreciation of biological taxonomy as an information science raises many questions about the philosophical, theoretical, material, and practical aspects of the use and revision of biological nomenclatures in different local and global communities of scientists and citizen scientists. In particular, conceiving of taxonomy as an information science directs attention to the temporalities of managing accumulating data about classified entities that are themselves subject to revision, to the means by which revision is accomplished, and to the semantic, material, and collaborative contexts that mediate the execution of revisions.
Article
Full-text available
The premise of biological modularity is an ontological claim that appears to come out of practice. We understand that the biological world is modular because we can manipulate different parts of organisms in ways that would only work if there were discrete parts that were interchangeable. This is the foundation of the BioBrick assembly method widely used in synthetic biology. It is one of a number of methods that allows practitioners to construct and reconstruct biological pathways and devices using DNA libraries of standardized parts with known functions. In this paper, we investigate how the practice of synthetic biology reconfigures biological understanding of the key concepts of modularity and evolvability. We illustrate how this practice approach takes engineering knowledge and uses it to try to understand biological organization by showing how the construction of functional parts and processes can be used in synthetic experimental evolution. We introduce a new approach within synthetic biology that uses the premise of a parts-based ontology together with that of organismal self-organization to optimize orthogonal metabolic pathways in E. coli. We then use this and other examples to help characterize semisynthetic categories of modularity, parthood, and evolvability within the discipline. Part of a special issue, Ontologies of Living Beings, guest-edited by A. M. Ferner and Thomas Pradeu. Editorial introduction: Catherine Kendig and Todd Eckdahl defend and illustrate a practice-based view of metaphysics of science. The target of their paper is the emerging and fascinating field of synthetic biology—a bioengineering domain that focuses on designing and assembling biological entities. The challenge they discuss is the following: What happens, ontologically-speaking, when as well as describing biological entities we start manufacturing new ones?
Article
Full-text available
We propose a framework to describe, analyze, and explain the conditions under which scientific communities organize themselves to do research, particularly within large-scale, multidisciplinary projects. The framework centers on the notion of a research repertoire, which encompasses well-aligned assemblages of the skills, behaviors, and material, social, and epistemic components that a group may use to practice certain kinds of science, and whose enactment affects the methods and results of research. This account provides an alternative to the idea of Kuhnian paradigms for understanding scientific change in the following ways: (1) it does not frame change as primarily generated and shaped by theoretical developments, but rather takes account of administrative, material, technological, and institutional innovations that contribute to change and explicitly questions whether and how such innovations accompany, underpin, and/or undercut theoretical shifts; (2) it thus allows for tracking of the organization, continuity, and coherence in research practices which Kuhn characterized as ‘normal science’ without relying on the occurrence of paradigmatic shifts and revolutions to be able to identify relevant components; and (3) it requires particular attention be paid to the performative aspects of science, whose study Kuhn pioneered but which he did not extensively conceptualize. We provide a detailed characterization of repertoires and discuss their relationship with communities, disciplines, and other forms of collaborative activities within science, building on an analysis of historical episodes and contemporary developments in the life sciences, as well as cases drawn from social and historical studies of physics, psychology, and medicine.
Article
Full-text available
Recently, synthetic biologists have developed the Synthetic Biology Open Language (SBOL), a data exchange standard for descriptions of genetic parts, devices, modules, and systems. The goals of this standard are to allow scientists to exchange designs of biological parts and systems, to facilitate the storage of genetic designs in repositories, and to facilitate the description of genetic designs in publications. In order to achieve these goals, the development of an infrastructure to store, retrieve, and exchange SBOL data is necessary. To address this problem, we have developed the SBOL Stack, a Resource Description Framework (RDF) database specifically designed for the storage, integration, and publication of SBOL data. This database allows users to define a library of synthetic parts and designs as a service, to share SBOL data with collaborators, and to store designs of biological systems locally. The database also allows external data sources to be integrated by mapping them to the SBOL data model. The SBOL Stack includes two Web interfaces: the SBOL Stack API and SynBioHub. While the former is designed for developers, the latter allows users to upload new SBOL biological designs, download SBOL documents, search by keyword, and visualize SBOL data. Since the SBOL Stack is based on semantic Web technology, the inherent distributed querying functionality of RDF databases can be used to allow different SBOL stack databases to be queried simultaneously, and therefore, data can be shared between different institutes, centers, or other users.
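The triple-based storage and keyword retrieval described above can be sketched with a toy in-memory store; this is an illustrative stand-in only, assuming made-up predicate strings rather than the actual SBOL URIs or the SynBioHub API.

```python
# Sketch of an in-memory triple store in the spirit of an RDF database.
# Predicate strings here are illustrative assumptions, not actual SBOL URIs.

def keyword_search(triples, keyword):
    """Return subjects whose objects mention the keyword (case-insensitive)."""
    kw = keyword.lower()
    return sorted({s for (s, p, o) in triples if kw in str(o).lower()})

triples = [
    ("part:BBa_J23100", "dcterms:title", "constitutive promoter J23100"),
    ("part:BBa_B0034", "dcterms:title", "ribosome binding site"),
    ("part:BBa_E0040", "dcterms:title", "GFP coding sequence"),
    ("part:BBa_J23100", "sbol:role", "promoter"),
]

print(keyword_search(triples, "promoter"))
```

In a real RDF database the same pattern would be expressed as a SPARQL query and could be federated across several SBOL Stack instances at once.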
Article
Full-text available
To collaboratively design synthetic biology systems, it is important to communicate both the structural and functional aspects of a design in a standard manner. This paper presents the Synthetic Biology Open Language (SBOL) 2.0 and demonstrates how this standard enables effective collaborative design across different institutions and tools. SBOL 2.0 serves the diverse interests of the synthetic biology community. The standard includes the ability to describe both functional and structural aspects of a design, including DNA, RNA, small molecules, and proteins, as well as their interactions as part of functional modules. SBOL 2.0 has been developed via consensus, with careful consideration of recent design trends in synthetic biology and real use cases submitted by members of the commercial biotechnology community. The standard thus provides researchers with a standardized representation for describing, manipulating, and reproducing biological designs across the synthetic biology community. This paper demonstrates how a set of SBOL-enabled tools can form a complex workflow to share and exchange designs for representative use cases between different organizations and tool suites. We also describe the development support in the form of software libraries, which facilitate the integration of the SBOL 2.0 standard into software tools.
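The separation of structural and functional description can be pictured with a toy data model; the class and field names below are simplified assumptions for illustration, not the actual SBOL 2.0 schema or library API.

```python
# Illustrative sketch only: a toy data model echoing SBOL 2.0's split between
# structure (component definitions) and function (interactions). Field names
# are simplified assumptions, not the real SBOL 2.0 data model.
from dataclasses import dataclass, field

@dataclass
class ComponentDefinition:          # structural description
    identity: str                   # URI identifying the design element
    comp_type: str                  # e.g. "DNA", "Protein", "SmallMolecule"
    roles: list = field(default_factory=list)

@dataclass
class Interaction:                  # functional description
    interaction_type: str           # e.g. "repression"
    participants: list = field(default_factory=list)

promoter = ComponentDefinition("ex:pTet", "DNA", ["promoter"])
repressor = ComponentDefinition("ex:TetR", "Protein", ["repressor"])
repression = Interaction("repression", [repressor.identity, promoter.identity])

print(repression.participants)
```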
Article
Full-text available
There is an urgent need to improve the infrastructure supporting the reuse of scholarly data. A diverse set of stakeholders—representing academia, industry, funding agencies, and scholarly publishers—have come together to design and jointly endorse a concise and measurable set of principles that we refer to as the FAIR Data Principles. The intent is that these may act as a guideline for those wishing to enhance the reusability of their data holdings. Distinct from peer initiatives that focus on the human scholar, the FAIR Principles put specific emphasis on enhancing the ability of machines to automatically find and use the data, in addition to supporting its reuse by individuals. This Comment is the first formal publication of the FAIR Principles, and includes the rationale behind them, and some exemplar implementations in the community.
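One way to make the machine-actionability emphasis concrete is a small metadata check; the field names below are assumptions chosen for illustration, not a formal FAIR assessment metric.

```python
# A hedged sketch of checking a dataset record against a few FAIR-style
# criteria (persistent identifier, rich metadata, license, provenance).
# The field names are illustrative assumptions, not an official FAIR metric.

REQUIRED = {
    "identifier": "findable: globally unique, persistent identifier",
    "metadata":   "findable/interoperable: rich, structured metadata",
    "license":    "reusable: clear usage license",
    "provenance": "reusable: detailed provenance",
}

def fair_gaps(record):
    """Return descriptions of the FAIR criteria a record fails to satisfy."""
    return [desc for key, desc in REQUIRED.items() if not record.get(key)]

record = {"identifier": "doi:10.1234/example", "metadata": {"title": "demo"}}
print(len(fair_gaps(record)))
```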
Article
Full-text available
The ribosome is a ribonucleoprotein machine responsible for protein synthesis. In all kingdoms of life it is composed of two subunits, each built on its own ribosomal RNA (rRNA) scaffold. The independent but coordinated functions of the subunits, including their ability to associate at initiation, rotate during elongation, and dissociate after protein release, are an established model of protein synthesis. Furthermore, the bipartite nature of the ribosome is presumed to be essential for biogenesis, since dedicated assembly factors keep immature ribosomal subunits apart and prevent them from translation initiation. Free exchange of the subunits limits the development of specialized orthogonal genetic systems that could be evolved for novel functions without interfering with native translation. Here we show that ribosomes with tethered and thus inseparable subunits (termed Ribo-T) are capable of successfully carrying out protein synthesis. By engineering a hybrid rRNA composed of both small and large subunit rRNA sequences, we produced a functional ribosome in which the subunits are covalently linked into a single entity by short RNA linkers. Notably, Ribo-T was not only functional in vitro, but was also able to support the growth of Escherichia coli cells even in the absence of wild-type ribosomes. We used Ribo-T to create the first fully orthogonal ribosome-messenger RNA system, and demonstrate its evolvability by selecting otherwise dominantly lethal rRNA mutations in the peptidyl transferase centre that facilitate the translation of a problematic protein sequence. Ribo-T can be used for exploring poorly understood functions of the ribosome, enabling orthogonal genetic systems, and engineering ribosomes with new functions.
Article
Full-text available
The ways in which the various activities of synthetic biology connect to those of conventional biology display both a multiplicity and variety that reflect the multiplicity and variety of meanings for which the term synthetic biology has been invoked, today as in the past. Central to this variety, as well as to the connection itself, is the complex relationship between knowing (understanding, representing) and making (constructing, intervening) that has prevailed in the life sciences. That relationship is the focus of this article. More specifically, my aim is to explore the different assumptions about how knowing is related to making that have prevailed, implicitly or explicitly, in the various activities, now or in the past, subsumed under the name synthetic biology.
Article
Full-text available
Despite the multidisciplinary dimension of the kinds of research conducted under the umbrella of synthetic biology, the US-based founders of this new research area adopted a disciplinary profile to shape its institutional identity. In so doing they took inspiration from two already established fields with very different disciplinary patterns. The analogy with synthetic chemistry suggested by the term 'synthetic biology' is not the only model. Information technology is clearly another source of inspiration. The purpose of the paper, with its focus on the US context, is to emphasize the diversity of views and agendas coexisting under the disciplinary label synthetic biology, as the two models analysed are only presented as two extreme postures in the community. The paper discusses the question: in which directions do the two models shape this emerging field? Do they chart two divergent futures for synthetic biology?
Article
Full-text available
Knowledge-making practices in biology are being strongly affected by the availability of data on an unprecedented scale, the insistence on systemic approaches and growing reliance on bioinformatics and digital infrastructures. What role does theory play within data-intensive science, and what does that tell us about scientific theories in general? To answer these questions, I focus on Open Biomedical Ontologies, digital classification tools that have become crucial to sharing results across research contexts in the biological and biomedical sciences, and argue that they constitute an example of classificatory theory. This form of theorizing emerges from classification practices in conjunction with experimental know-how and expresses the knowledge underpinning the analysis and interpretation of data disseminated online.
Article
Full-text available
The Joint BioEnergy Institute Inventory of Composable Elements (JBEI-ICEs) is an open source registry platform for managing information about biological parts. It is capable of recording information about ‘legacy’ parts, such as plasmids, microbial host strains and Arabidopsis seeds, as well as DNA parts in various assembly standards. ICE is built on the idea of a web of registries and thus provides strong support for distributed interconnected use. The information deposited in an ICE installation instance is accessible both via a web browser and through the web application programming interfaces, which allows automated access to parts via third-party programs. JBEI-ICE includes several useful web browser-based graphical applications for sequence annotation, manipulation and analysis that are also open source. As with open source software, users are encouraged to install, use and customize JBEI-ICE and its components for their particular purposes. As a web application programming interface, ICE provides well-developed parts storage functionality for other synthetic biology software projects. A public instance is available at public-registry.jbei.org, where users can try out features, upload parts or simply use it for their projects. The ICE software suite is available via Google Code, a hosting site for community-driven open source projects.
Article
Full-text available
This paper introduces a model of the information flows in Product Lifecycle Management (PLM), serving as the basis for understanding the role of standards in PLM support systems. Support of PLM requires a set of complementary and interoperable standards that cover the full range of aspects of the products’ life cycle. The paper identifies a typology of standards relevant to PLM support that addresses the hierarchy of existing and evolving standards and their usage, and it identifies a suite of standards supporting the exchange of product, process, operations and supply chain information. A case study illustrating the use of PLM standards in a large organization is presented. The potential role of harmonization among PLM support standards is described and a proposal is made for using open standards and open source models for this important activity.
Chapter
Full-text available
OBO is an ontology language that has often been used for modeling ontologies in the life sciences. Its definition is relatively informal, so, in this paper, we provide a clear specification for OBO syntax and semantics via a mapping to OWL. This mapping also allows us to apply existing Semantic Web tools and techniques to OBO. We show that Semantic Web reasoners can be used to efficiently reason with OBO ontologies. Furthermore, we show that grounding the OBO language in formal semantics is useful for the ontology development process: using an OWL reasoner, we detected a likely modeling error in one OBO ontology.
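The flavor of the OBO-to-OWL mapping can be sketched by parsing a single [Term] stanza and emitting subclass triples; this toy parser handles only the tags shown and is not a substitute for real OBO/OWL tooling.

```python
# Minimal sketch of parsing one OBO [Term] stanza into a dictionary and
# emitting OWL-style subclass triples, in the spirit of the OBO-to-OWL
# mapping. Real tooling handles far more of the OBO syntax than this.

def parse_obo_term(stanza):
    """Parse 'tag: value' lines of a single OBO [Term] stanza."""
    term = {"is_a": []}
    for line in stanza.strip().splitlines():
        if line == "[Term]" or not line.strip():
            continue
        tag, _, value = line.partition(": ")
        if tag == "is_a":
            # Strip the trailing '! comment' that OBO allows after the ID.
            term["is_a"].append(value.split(" ! ")[0])
        else:
            term[tag] = value
    return term

stanza = """[Term]
id: GO:0006915
name: apoptotic process
is_a: GO:0012501 ! programmed cell death
"""

term = parse_obo_term(stanza)
triples = [(term["id"], "rdfs:subClassOf", parent) for parent in term["is_a"]]
print(triples)
```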
Article
Full-text available
The exploding number of computational models produced by systems biologists in recent years is an invitation to structure and exploit this new wealth of information. Researchers would like to trace models relevant to specific scientific questions, to explore their biological content, to align and combine them, and to match them with experimental data. To automate these processes, it is essential to consider semantic annotations, which describe their biological meaning. As a prerequisite for a wide range of computational methods, we propose general and flexible similarity measures for Systems Biology models computed from semantic annotations. By using these measures and a large extensible ontology, we implement a platform that can retrieve, cluster, and align Systems Biology models and experimental data sets. At present, its major application is the search for relevant models in the BioModels Database, starting from initial models, data sets, or lists of biological concepts. Beyond similarity searches, the representation of models by semantic feature vectors may pave the way for visualisation, exploration, and statistical analysis of large collections of models and corresponding data.
Article
Full-text available
We present genome engineering technologies that are capable of fundamentally reengineering genomes from the nucleotide to the megabase scale. We used multiplex automated genome engineering (MAGE) to site-specifically replace all 314 TAG stop codons with synonymous TAA codons in parallel across 32 Escherichia coli strains. This approach allowed us to measure individual recombination frequencies, confirm viability for each modification, and identify associated phenotypes. We developed hierarchical conjugative assembly genome engineering (CAGE) to merge these sets of codon modifications into genomes with 80 precise changes, which demonstrate that these synonymous codon substitutions can be combined into higher-order strains without synthetic lethal effects. Our methods treat the chromosome as both an editable and an evolvable template, permitting the exploration of vast genetic landscapes.
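The core operation, synonymous substitution of the TAG stop codon with TAA at codon boundaries, can be sketched on a toy sequence; the sequences below are made up, and real MAGE targets codons genome-wide rather than in a single string.

```python
# Sketch of the synonymous stop-codon substitution described above: scan an
# in-frame coding sequence codon-by-codon and replace TAG with TAA.
# Example sequences are invented for illustration.

def recode_tag_to_taa(seq):
    """Replace TAG with TAA at the codon boundaries of an in-frame sequence."""
    codons = [seq[i:i + 3] for i in range(0, len(seq), 3)]
    return "".join("TAA" if c == "TAG" else c for c in codons)

orf = "ATGGCTTAG"          # Met-Ala-stop(TAG), read in frame
print(recode_tag_to_taa(orf))
```

Note that because the scan respects the reading frame, a TAG that straddles two codons is correctly left untouched.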
Article
Full-text available
Ontologies and standards are very important parts of today's bioscience research. With the rapid increase of biological knowledge, they provide mechanisms to better store and represent data in a controlled and structured way, so that scientists can share the data, and utilize a wide variety of software and tools to manage and analyze the data. Most of these standards are initially designed for computers to access large amounts of data that are difficult for human biologists to handle, and it is important to keep in mind that ultimately biologists are going to produce and interpret the data. While ontologies and standards must follow strict semantic rules that may not be familiar to biologists, effort must be spent to lower the learning barrier by involving biologists in the process of development, and by providing software and tool support. A standard will not succeed without support from the wider bioscience research community. Thus, it is crucial that these standards be designed not only for machines to read, but also to be scientifically accurate and intuitive to human biologists.
Article
Full-text available
We have created the Knowledgebase of Standard Biological Parts (SBPkb) as a publicly accessible Semantic Web resource for synthetic biology (sbolstandard.org). The SBPkb allows researchers to query and retrieve standard biological parts for research and use in synthetic biology. Its initial version includes all of the information about parts stored in the Registry of Standard Biological Parts (partsregistry.org). SBPkb transforms this information so that it is computable, using our semantic framework for synthetic biology parts. This framework, known as SBOL-semantic, was built as part of the Synthetic Biology Open Language (SBOL), a project of the Synthetic Biology Data Exchange Group. SBOL-semantic represents commonly used synthetic biology entities, and its purpose is to improve the distribution and exchange of descriptions of biological parts. In this paper, we describe the data, our methods for transformation to SBPkb, and finally, we demonstrate the value of our knowledgebase with a set of sample queries. We use RDF technology and SPARQL queries to retrieve candidate "promoter" parts that are known to be both negatively and positively regulated. This method provides new web-based data access to perform searches for parts that are not currently possible.
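The query pattern, finding promoters with both a positive and a negative regulation assertion, can be mimicked over an in-memory triple list; the predicate and part names are illustrative assumptions, not the actual SBOL-semantic vocabulary or a live SPARQL endpoint.

```python
# Sketch of the dual-regulation query over an in-memory list of triples
# instead of a SPARQL endpoint. Predicate and part names are invented.

triples = [
    ("part:P1", "a", "Promoter"),
    ("part:P1", "regulatedBy", "activation"),
    ("part:P1", "regulatedBy", "repression"),
    ("part:P2", "a", "Promoter"),
    ("part:P2", "regulatedBy", "activation"),
]

def dual_regulated_promoters(triples):
    """Promoters with both a positive and a negative regulation triple."""
    def objs(s, p):
        return {o for (s2, p2, o) in triples if s2 == s and p2 == p}
    promoters = {s for (s, p, o) in triples if p == "a" and o == "Promoter"}
    return sorted(s for s in promoters
                  if {"activation", "repression"} <= objs(s, "regulatedBy"))

print(dual_regulated_promoters(triples))
```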
Article
Full-text available
Circuit diagrams and Unified Modeling Language diagrams are just two examples of standard visual languages that help accelerate work by promoting regularity, removing ambiguity and enabling software tool support for communication of complex information. Ironically, despite having one of the highest ratios of graphical to textual information, biology still lacks standard graphical notations. The recent deluge of biological knowledge makes addressing this deficit a pressing concern. Toward this goal, we present the Systems Biology Graphical Notation (SBGN), a visual language developed by a community of biochemists, modelers and computer scientists. SBGN consists of three complementary languages: process diagram, entity relationship diagram and activity flow diagram. Together they enable scientists to represent networks of biochemical interactions in a standard, unambiguous way. We believe that SBGN will foster efficient and accurate representation, visualization, storage, exchange and reuse of information on all kinds of biological knowledge, from gene regulation, to metabolism, to cellular signaling.
Article
Full-text available
In recent years, ontologies have become a mainstream topic in biomedical research. When biological entities are described using a common schema, such as an ontology, they can be compared by means of their annotations. This type of comparison is called semantic similarity, since it assesses the degree of relatedness between two entities by the similarity in meaning of their annotations. The application of semantic similarity to biomedical ontologies is recent; nevertheless, several studies have been published in the last few years describing and evaluating diverse approaches. Semantic similarity has become a valuable tool for validating the results drawn from biomedical studies such as gene clustering, gene expression data analysis, prediction and validation of molecular interactions, and disease gene prioritization. We review semantic similarity measures applied to biomedical ontologies and propose their classification according to the strategies they employ: node-based versus edge-based and pairwise versus groupwise. We also present comparative assessment studies and discuss the implications of their results. We survey the existing implementations of semantic similarity measures, and we describe examples of applications to biomedical research. This will clarify how biomedical researchers can benefit from semantic similarity measures and help them choose the approach most suitable for their studies. Biomedical ontologies are evolving toward increased coverage, formality, and integration, and their use for annotation is increasingly becoming a focus of both effort by biomedical experts and application of automated annotation procedures to create corpora of higher quality and completeness than are currently available. Given that semantic similarity measures are directly dependent on these evolutions, we can expect to see them gaining more relevance and even becoming as essential as sequence similarity is today in biomedical research.
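A node-based, pairwise measure of the kind surveyed above can be sketched in a few lines: Resnik-style similarity, where two terms are scored by the information content of their most informative common ancestor. The mini ontology and annotation counts below are invented for illustration; real measures run over ontologies such as the Gene Ontology with corpus-derived frequencies.

```python
import math

# Toy is_a DAG (child -> parents) and direct annotation counts per term.
# Both are assumptions made up for this sketch.
parents = {
    "binding": ["molecular_function"],
    "dna_binding": ["binding"],
    "rna_binding": ["binding"],
    "catalysis": ["molecular_function"],
}
annotations = {
    "molecular_function": 0, "binding": 2,
    "dna_binding": 3, "rna_binding": 3, "catalysis": 4,
}

def ancestors(term):
    """A term's ancestors in the DAG, including the term itself."""
    found, stack = {term}, [term]
    while stack:
        for p in parents.get(stack.pop(), []):
            if p not in found:
                found.add(p)
                stack.append(p)
    return found

def ic(term):
    """Information content: -log p(term), counting the term and its descendants."""
    total = sum(annotations.values())
    uses = sum(n for t, n in annotations.items() if term in ancestors(t))
    return -math.log(uses / total)

def resnik(t1, t2):
    """Similarity = IC of the most informative common ancestor."""
    return max(ic(t) for t in ancestors(t1) & ancestors(t2))
```

On this toy ontology, dna_binding and rna_binding share the informative ancestor binding and so score higher than dna_binding and catalysis, whose only common ancestor is the root; that ordering is the essence of the node-based strategy the survey classifies.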
Article
Full-text available
Genomic sequencing has made it clear that a large fraction of the genes specifying the core biological functions are shared by all eukaryotes. Knowledge of the biological role of such shared proteins in one organism can often be transferred to other organisms. The goal of the Gene Ontology Consortium is to produce a dynamic, controlled vocabulary that can be applied to all eukaryotes even as knowledge of gene and protein roles in cells is accumulating and changing. To this end, three independent ontologies accessible on the World-Wide Web (http://www.geneontology.org) are being constructed: biological process, molecular function and cellular component.
Article
Full-text available
The Sequence Ontology (SO) is a structured controlled vocabulary for the parts of a genomic annotation. SO provides a common set of terms and definitions that will facilitate the exchange, analysis and management of genomic data. Because SO treats part-whole relationships rigorously, data described with it can become substrates for automated reasoning, and instances of sequence features described by the SO can be subjected to a group of logical operations termed extensional mereology operators.
Article
Full-text available
Motivation: Several genome-scale efforts are underway to reconstruct metabolic networks for a variety of organisms. As the resulting data accumulates, the need for analysis tools increases. A notable requirement is a pathway alignment finder that enables both the detection of conserved metabolic pathways among different species as well as divergent metabolic pathways within a species. When comparing two pathways, the tool should be powerful enough to take into account both the pathway topology and the node labels (e.g. the enzymes they denote), and allow flexibility by matching similar, rather than identical, pathways. Results: MetaPathwayHunter is a pathway alignment tool that, given a query pathway and a collection of pathways, finds and reports all approximate occurrences of the query in the collection, ranked by similarity and statistical significance. It is based on a novel, efficient graph matching algorithm that extends the functionality of known techniques. The program also supports a visualization interface with which the alignment of two homologous pathways can be graphically displayed. We employed this tool to study the similarities and differences in the metabolic networks of the bacterium Escherichia coli and the yeast Saccharomyces cerevisiae, as represented in highly curated databases. We reaffirmed that most known metabolic pathways common to both the species are conserved. Furthermore, we discovered a few intriguing relationships between pathways that provide insight into the evolution of metabolic pathways. We conclude with a description of biologically meaningful meta-queries, demonstrating the power and flexibility of our new tool in the analysis of metabolic pathways.
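The core idea of matching similar rather than identical pathways can be sketched for the simplest case, two linear pathways represented as chains of enzyme EC numbers, using a standard global alignment (Needleman-Wunsch) with a label-similarity score. This is a deliberately reduced illustration, not MetaPathwayHunter's graph matching algorithm, and the pathway fragments below are assumptions chosen for the example.

```python
# Label-aware alignment of two linear enzyme chains. Two EC numbers score
# by the length of their shared prefix, so enzymes of the same class can
# match even when they are not identical.
def ec_score(a, b):
    """Score two EC numbers 0..4 by shared leading components."""
    shared = 0
    for x, y in zip(a.split("."), b.split(".")):
        if x != y:
            break
        shared += 1
    return shared

def align(p1, p2, gap=-1):
    """Global alignment score over two enzyme chains (Needleman-Wunsch)."""
    n, m = len(p1), len(p2)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = dp[i - 1][0] + gap
    for j in range(1, m + 1):
        dp[0][j] = dp[0][j - 1] + gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            dp[i][j] = max(dp[i - 1][j - 1] + ec_score(p1[i - 1], p2[j - 1]),
                           dp[i - 1][j] + gap,
                           dp[i][j - 1] + gap)
    return dp[n][m]

# Illustrative glycolysis fragments: hexokinase vs. glucokinase differ only
# in the final EC component, so they still contribute a partial match.
ecoli = ["2.7.1.1", "5.3.1.9", "2.7.1.11"]
yeast = ["2.7.1.2", "5.3.1.9", "2.7.1.11"]
print(align(ecoli, yeast))  # → 11
```

Extending this from chains to branched pathways is where the real tool's graph matching algorithm takes over; the scoring intuition (topology plus node-label similarity) is the same.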
Article
Full-text available
The complex genetic circuits found in cells are ordinarily studied by analysis of genetic and biochemical perturbations. The inherent modularity of biological components like genes and proteins enables a complementary approach: one can construct and analyse synthetic genetic circuits based on their natural counterparts. Such synthetic circuits can be used as simple in vivo models to explore the relation between the structure and function of a genetic circuit. Here we describe recent progress in this area of synthetic biology, highlighting newly developed genetic components and biological lessons learned from this approach.
Article
Full-text available
The variation in the sizes of the genomes of distinct life forms remains somewhat puzzling. The organization of proteins into domains and the different mechanisms that regulate gene expression are two factors that potentially increase the capacity of genomes to create more complex systems. High-throughput protein interaction data now make it possible to examine the additional complexity generated by the way that protein interactions are organized. We have studied the reduction in genome size of Buchnera compared to its close relative Escherichia coli. In this well defined evolutionary scenario, we found that among all the properties of the protein interaction networks, it is the organization of networks into modules that seems to be directly related to the evolutionary process of genome reduction. In Buchnera, the apparently non-random reduction of the modular structure of the networks and the retention of essential characteristics of the interaction network indicate that the roles of proteins within the interaction network are important in the reductive process.
Article
Full-text available
The value of any kind of data is greatly enhanced when it exists in a form that allows it to be integrated with other data. One approach to integration is through the annotation of multiple bodies of data using common controlled vocabularies or 'ontologies'. Unfortunately, the very success of this approach has led to a proliferation of ontologies, which itself creates obstacles to integration. The Open Biomedical Ontologies (OBO) consortium is pursuing a strategy to overcome this problem. Existing OBO ontologies, including the Gene Ontology, are undergoing coordinated reform, and new ontologies are being created on the basis of an evolving set of shared principles governing ontology development. The result is an expanding family of ontologies designed to be interoperable and logically well formed and to incorporate accurate representations of biological reality. We describe this OBO Foundry initiative and provide guidelines for those who might wish to become involved.
Article
Full-text available
The purpose of this paper is to defend the systematic introduction of formal ontological principles in the current practice of knowledge engineering, to explore the various relationships between ontology and knowledge representation, and to present the recent trends in this promising research area. According to the "modelling view" of knowledge acquisition proposed by Clancey, the modelling activity must establish a correspondence between a knowledge base and two separate subsystems: the agent's behaviour (i.e. the problem-solving expertise) and its own environment (the problem domain). Current knowledge modelling methodologies tend to focus on the former subsystem only, viewing domain knowledge as strongly dependent on the particular task at hand: in fact, AI researchers seem to have been much more interested in the nature of reasoning rather than in the nature of the real world. Recently, however, the potential value of task-independent knowledge bases (or "ontologies") suitable to large-scale integration has been underlined in many ways. In this paper, we compare the dichotomy between reasoning and representation to the philosophical distinction between epistemology and ontology. We introduce the notion of the ontological level, intermediate between the epistemological and the conceptual level discussed by Brachman, as a way to characterize a knowledge representation formalism taking into account the intended meaning of its primitives. We then discuss some formal ontological distinctions which may play an important role for this purpose.
Article
Synthetic biology was founded as a biophysical discipline that sought explanations for the origins of life from chemical and physical first principles. Modern synthetic biology has been reinvented as an engineering discipline to design new organisms as well as to better understand fundamental biological mechanisms. However, success is still largely limited to the laboratory and transformative applications of synthetic biology are still in their infancy. Here, we review six principles of living systems and how they compare and contrast with engineered systems. We cite specific examples from the synthetic biology literature that illustrate these principles and speculate on their implications for further study. To fully realize the promise of synthetic biology, we must be aware of life’s unique properties.
Article
This paper presents a new validation and conversion utility for the Synthetic Biology Open Language (SBOL). This utility can be accessed directly in software using the libSBOLj library, through a web interface, or using a web service via RESTful API calls. The validator checks all required and best practice rules set forth in the SBOL specification document, and it reports back to the user the location within the document of any errors found. The converter is capable of translating from/to SBOL 1, GenBank, and FASTA formats to/from SBOL 2. The SBOL Validator/Converter utility is released as free, open-source software under the Apache 2.0 license. The online version of the validator/converter utility can be found here: http://www.async.ece.utah.edu/sbol-validator/. The source code for the validator/converter can be found here: http://github.com/SynBioDex/SBOL-Validator/.
Article
This book offers a comprehensive virtue ethics that breaks from the tradition of eudaimonistic virtue ethics. In developing a pluralistic view, it shows how different 'modes of moral response' such as love, respect, appreciation, and creativity are all central to the virtuous response and thereby to ethics. It offers virtue ethical accounts of the good life, objectivity, rightness, demandingness, and moral epistemology.
Article
This paper describes a pattern of explanation prevalent in the biological sciences that I call a ‘lineage explanation’. The aim of these explanations is to make plausible certain trajectories of change through phenotypic space. They do this by laying out a series of stages, where each stage shows how some mechanism worked, and the differences between each adjacent stage demonstrates how one mechanism, through minor modifications, could be changed into another. These explanations are important, for though it is widely accepted that there is an ‘incremental constraint’ on evolutionary change, in an important class of cases it is difficult to see how to satisfy this constraint. I show that lineage explanations answer important questions about evolutionary change, but do so by demonstrating differences between individuals rather than invoking population processes, such as natural selection.
Article
The interests of synthetic biologists may appear to differ greatly from those of evolutionary biologists. The engineering of organisms must be distinguished from the tinkering action of evolution; the ambition of synthetic biologists is to overcome the limits of natural evolution. But the relations between synthetic biology and evolutionary biology are more complex than this abrupt opposition: Synthetic biology may play an important role in the increasing interactions between functional and evolutionary biology. In practice, synthetic biologists have learnt to submit the proteins and modules they construct to a Darwinian process of selection that optimizes their functioning. More importantly, synthetic biology can provide evolutionary biologists with decisive tools to test the scenarios they have elaborated by resurrecting some of the postulated intermediates in the evolutionary process, characterizing their properties, and experimentally testing the genetic changes supposed to be the source of new morphologies and functions. This synthetic, experimental evolution will renew and clarify many debates in evolutionary biology: It will lead to the explosion of such vague concepts as constraints, parallel evolution, and convergence, replacing them with precise mechanistic descriptions. In this way, synthetic biology resurrects the old philosophical debate about the relations between the real and the possible.
Article
Social scientific and humanistic research on synthetic biology has focused quite narrowly on questions of epistemology and ELSI. I suggest that to understand this discipline in its full scope, researchers must turn to the objects of the field—synthetic biological artifacts—and study them as the objects in the making of a science yet to be made. I consider one fundamentally important question: how should we understand the material products of synthetic biology? Practitioners in the field, employing a consistent technological optic in the study and construction of biological systems, routinely employ the mantra ‘biology is technology’. I explore this categorization. By employing an established definition of technological artifacts drawn from the philosophy of technology, I explore the appropriateness of attributing to synthetic biological artifacts the four criteria of materiality, intentional design, functionality, and normativity. I then explore a variety of accounts of natural kinds. I demonstrate that synthetic biological artifacts fit each kind imperfectly, and display a concomitant ontological ‘messiness’. I argue that this classificatory ambivalence is a product of the field’s own nascence, and posit that further work on kinds might help synthetic biology evaluate its existing commitments and practices. Keywords: Synthetic biology, biological engineering, technological artifacts, natural kinds, ontology, classification, philosophy of technology.
Conference Paper
It is no secret that the multidisciplinary sphere of information systems has borrowed the term 'ontology' from philosophy, and reinterpreted it to be more suitable for information systems. However, there is some disagreement about what this reinterpretation should be. This paper examines two prominent and distinct views on what information systems ontology is, and attempts to advance a unified definition that can be understood interdisciplinarily. But the goal of this paper is to show the specific points of variance between information systems ontology and philosophical ontology in order to shed light on the transformation of the term 'ontology' in its adoption by the information systems community. The relatively new information systems ontology is facing great challenges that may be better confronted with the insights that can be discovered through philosophical ontology.
Article
Laboratory evolution has generated many biomolecules with desired properties, but a single round of mutation, gene expression, screening or selection, and replication typically requires days or longer with frequent human intervention. Since evolutionary success is dependent on the total number of rounds performed, a means of performing laboratory evolution continuously and rapidly could dramatically enhance its effectiveness. While researchers have accelerated individual steps in the evolutionary cycle, the only previous example of continuous directed evolution was the landmark study of Joyce, who continuously evolved RNA ligase ribozymes with an in vitro replication cycle that unfortunately cannot be easily adapted to other biomolecules. Here we describe a system that enables the continuous directed evolution of gene-encoded molecules that can be linked to protein production in E. coli. During phage-assisted continuous evolution (PACE), evolving genes are transferred from host cell to host cell through a modified bacteriophage life cycle in a manner that is dependent on the activity of interest. Dozens of rounds of evolution can occur in a single day of PACE without human intervention. Using PACE, we evolved T7 RNA polymerases that recognize a distinct promoter, initiate transcripts with A instead of G, and initiate transcripts with C. In one example, PACE executed 200 rounds of protein evolution over the course of eight days. Starting from undetectable activity levels in two of these cases, enzymes with each of the three target activities emerged in less than one week of PACE. In all three cases, PACE-evolved polymerase activities exceeded or were comparable to that of the wild-type T7 RNAP on its wild-type promoter, representing improvements of up to several hundred-fold. By greatly accelerating laboratory evolution, PACE may provide solutions to otherwise intractable directed evolution problems and address novel questions about molecular evolution.
Book
What is temperature, and how can we measure it correctly? These may seem like simple questions, but the most renowned scientists struggled with them throughout the 18th and 19th centuries. In Inventing Temperature, Chang examines how scientists first created thermometers; how they measured temperature beyond the reach of standard thermometers; and how they managed to assess the reliability and accuracy of these instruments without a circular reliance on the instruments themselves. In a discussion that brings together the history of science with the philosophy of science, Chang presents the simple yet challenging epistemic and technical questions about these instruments, and the complex web of abstract philosophical issues surrounding them. Chang's book shows that many items of knowledge that we take for granted now are in fact spectacular achievements, obtained only after a great deal of innovative thinking, painstaking experiments, bold conjectures, and controversy. Lurking behind these achievements are some very important philosophical questions about how and when people accept the authority of science.
Article
Recent work in Artificial Intelligence (AI) is exploring the use of formal ontologies as a way of specifying content-specific agreements for the sharing and reuse of knowledge among software entities. We take an engineering perspective on the development of such ontologies. Formal ontologies are viewed as designed artifacts, formulated for specific purposes and evaluated against objective design criteria. We describe the role of ontologies in supporting knowledge sharing activities, and then present a set of criteria to guide the development of ontologies for these purposes. We show how these criteria are applied in case studies from the design of ontologies for engineering mathematics and bibliographic data. Selected design decisions are discussed, and alternative representation choices are evaluated against the design criteria.
Mapping Genetic Design Space with Phylosemantics
  • B A Bartley
  • M Galdzicki
  • R S Cox
  • H M Sauro
Bartley, B. A., Galdzicki, M., Cox, R. S. & Sauro, H. M. (2017) Mapping Genetic Design Space with Phylosemantics. Proceedings of the 9th International Workshop on Bio-Design Automation, University of Pittsburgh. Retrieved from www.iwbdaconf.org/2017/docs/IWBDA_2017_Proceedings.pdf.
Perception, Interpretation, and the Sciences: Toward a New Philosophy of Science
  • M Grene
Grene, M. (1985) Perception, Interpretation, and the Sciences: Toward a New Philosophy of Science. In D. J. Depew & B. H. Weber (Eds.), Evolution at a Crossroads (pp. 1-20). Cambridge, MA: MIT Press.
From Universal Languages to Intermediary Languages in Machine Translation
  • J Léon
Léon, J. (2007) From Universal Languages to Intermediary Languages in Machine Translation. In History of Linguistics 2002: Selected Papers from the Ninth International Conference on the History of the Language Sciences, 27-30 August 2002, São Paulo-Campinas. Vol. 110 (p. 123). Amsterdam: John Benjamins Publishing.
Activities of Kinding in Scientific Practice
  • C Kendig
Kendig, C. (2016) Activities of Kinding in Scientific Practice. In C. Kendig (Ed.), Natural Kinds and Classification in Scientific Practice. Abingdon and New York: Routledge.
Ad-Hoc and Personal Ontologies: A Prototyping Approach to Ontology Engineering
  • D Richards
Richards, D. (2006) Ad-Hoc and Personal Ontologies: A Prototyping Approach to Ontology Engineering. In A. Hoffmann, B. Kang, D. Richards & S. Tsumoto (Eds.), Advances in Knowledge Acquisition and Management: PKAW 2006: Lecture Notes in Computer Science. Vol. 4303. Berlin, Heidelberg: Springer.
Re-Engineering Philosophy for Limited Beings
  • W Wimsatt
Wimsatt, W. (2007) Re-Engineering Philosophy for Limited Beings. Cambridge: Harvard University Press.