EAP4EMSIG - Experiment Automation Pipeline for Event-Driven Microscopy to Smart Microfluidic Single-Cells Analysis
Nils Friederich1,2,*, Angelo Jovin Yamachui Sitcheu1,*,
Annika Nassal1,2, Matthias Pesch3, Erenus Yildiz4,
Maximilian Beichter1, Lukas Scholtes4, Bahar Akbaba1,
Thomas Lautenschlager1, Oliver Neumann1, Dietrich Kohlheyer3,
Hanno Scharr4, Johannes Seiffarth3,5,#, Katharina Nöh3,#,
Ralf Mikut1,#
1Institute for Automation and Applied Informatics (IAI)
2Institute of Biological and Chemical Systems (IBCS)
Karlsruhe Institute of Technology
3Institute of Bio- and Geosciences (IBG-1)
4Institute for Data Science and Machine Learning (IAS-8)
Forschungszentrum Jülich GmbH
5Computational Systems Biology (AVT-CSB)
RWTH Aachen University
*Contributed equally
#Supervised equally
Abstract

Microfluidic Live-Cell Imaging (MLCI) generates high-quality data that allows biotechnologists to study cellular growth dynamics in detail. However, obtaining these continuous data over extended periods is challenging, particularly in achieving accurate and consistent real-time event classification at the intersection of imaging and stochastic biology. To address this issue, we introduce the Experiment Automation Pipeline for Event-Driven Microscopy to Smart Microfluidic Single-Cells Analysis (EAP4EMSIG). In particular, we present initial zero-shot results from the real-time segmentation module of our approach. Our findings indicate that among four State-Of-The-Art (SOTA) segmentation methods evaluated, Omnipose delivers the highest Panoptic Quality (PQ) score of 0.9336, while Contour Proposal Network (CPN) achieves the fastest inference time of 185 ms with the second-highest PQ score of 0.8575. Furthermore, we observed that the vision foundation model Segment Anything is unsuitable for this particular use case.
1 Introduction
What are microbes? Microbes, also known as microorganisms, are a group
of tiny living organisms that are invisible to the naked eye. This group includes
bacteria, archaea, fungi and protists [5]. Microbes are present almost everywhere
on Earth, from harsh environments such as hydrothermal vents to the human
body, where they outnumber human cells by a factor of around 1.3 [54]. Despite
their tiny size, microbes play crucial roles in various ecological and biological
processes, making them essential for life on Earth [30].
Why are microbes relevant? Microbes are relevant for several reasons. The
first is ecological balance, where microbes are essential in the nutrient cycle,
decomposing organic matter and contributing to soil fertility [57]. They are
crucial for the carbon, nitrogen and sulfur cycles that sustain life on Earth [27].
Second, in human health, the human microbiome aids digestion, produces
essential vitamins and protects against pathogenic microbes [38]. Disruptions
in the microbiome can lead to health issues such as infections, obesity and
autoimmune diseases [7,29]. Finally, in the context of industrial applications,
microbes are harnessed in biotechnology, pharmaceuticals and agriculture. They
are used to produce antibiotics, biofuels and fermented foods [25]. Microbial
enzymes are also crucial in many manufacturing processes [46].
Why is research on microbes essential? Research on microbes is important
due to their impact on health, industry and the environment. Understanding
microbial behavior, genetics and interactions can advance all three areas. In
medical science, it is crucial to study pathogens to help develop vaccines
and treatments for infectious diseases [2]. Microbe research can potentially
reveal new therapies for chronic diseases [3]. In environmental protection,
microbes can be used in bioremediation to clean up oil spills and toxic waste [18].
Moreover, understanding microbial ecosystems can support conservation efforts
and help combat climate change. In biotechnology, microbial research can lead to the
development of new applications, such as using microbes to produce valuable
compounds, e.g., insulin or biodegradable plastics [36].
Why is the segmentation of microbes relevant? While some biological
analysis is possible at the macroscopic level, other results can only be
obtained by studying organisms at the microscopic single-cell level. MLCI
in particular enables insight into single-cell growth and growth heterogeneity,
since it operates with very small volumes. For example, the effect of antibiotic
concentrations on organism performance can be analyzed through such
experiments. Understanding the dynamics of microbes at this single-cell level
therefore requires accurate and precise automated cell segmentation, as large
amounts of data acquired using automated microscopy must be analyzed to
obtain relevant results. The segmented data can then be used to make statements
about the organism’s growth as a function of various other factors.
What is the challenge in microbe research? MLCI experiments with
microbes are usually not carried out on a single colony but in parallel on
thousands. To achieve this, the microfluidic device is infused with a cell
suspension and cells are randomly seeded into the growth chambers, where
they remain trapped. Optimally, a microbial colony grows in each chamber.
In a standard growth experiment, seeded cells grow until the chamber is filled
with densely packed cells, which can number in the thousands; this marks the
end of the experiment. Subsequent examination of the experiment requires
analyzing thousands of colonies in parallel, in some cases each containing
thousands of microbes.
Each chamber must be manually assessed to determine whether it meets
the experiment’s objectives, even as some chambers become irrelevant as
the experiment advances. This process is extremely time-consuming, costly,
strenuous and monotonous, and at this scale practically infeasible to perform
manually. Therefore, automated and intelligent processing, analysis and
experiment planning are required.
How does this paper address this challenge? In this paper, we introduce
the EAP4EMSIG, designed to automate and intelligently execute MLCI
experiments, during which the human expert specifies settings, monitors
progress and intervenes only to address any issues that may arise. We explain
the concept of the pipeline and its eight primary modules. To achieve this, a
literature review (see Section 2) and an extensive description (see Section 3) of
each module are provided.
Since real-time data evaluation, inference and incorporation into the experimental
design are central parts of our Experiment Automation Pipeline (EAP), this work
also presents initial results. We compare the Average Precision (AP) [28] score,
PQ [23] score and inference time of four SOTA methods from the task-specific,
domain-specific and foundation model areas, using a large publicly available
microbial benchmark dataset [51, 52] (see Section 4). For this purpose, their
zero-shot abilities and real-time capability will be analyzed to determine which
models are suitable for retraining.
Additionally, we will evaluate whether using a foundation model potentially
leads to better results through improved generalization.
2 Related Work
Experiment Automation Pipelines. Various EAP tools have been developed,
ranging from local standalone projects [10] to cloud-based tools [34]. Some
methods focus on automating the data analysis part [17, 37], others focus on
automating the data acquisition part [43], particularly on microscope control [40,
42] and event-based image acquisition [6,33]. However, very few generic tools
for full experiment automation have been proposed due to the complexity of
combining the experiments’ software, hardware and biological components.
One example is the open-source package PYthon Microscopy Environment
(PYME, https://www.python-microscopy.org/), which offers data acquisition,
processing, exploration and visualization modules. PYME is, however, only
tailored for super-resolution techniques.
Another example is Cheetah [39], a Python library that automates real-time
cybergenetic experiments. It offers limited microscope control capabilities and
relies on one specific image segmentation method, namely U-Net [44].
Recently, the EAP tool MicroMator [16] has emerged, strongly aligning with our
goal. Similarly to the idea of smart futuristic microscopy depicted in [4] and [41],
MicroMator supports reactive microscopy experiments. The developed open-
source package is modular, extendable and adjustable for several experiments.
However, it offers limited image analysis possibilities and no tool to improve
the image analysis results. Moreover, the software no longer appears to be
actively used and maintained.
In summary, while many tools exist that each contribute a step towards a fully
automated EAP, no tool, to the best of our knowledge, provides a complete,
modular and extendable pipeline that manages event-based data acquisition,
analysis and reporting.
Segmentation. Deep learning-based segmentation methods have recently
emerged as they are often faster, more accurate and more precise than traditional
methods [14], given sufficient training data [12].
A method with pixel-wise segmentation is required to obtain all the information
needed for event detection in the context of microbes. To allow the extracted
data to flow directly into the EAP during the experiments, this method must be
fast enough, accurate and precise to enable real-time processing [31]. Objects
can be segmented, for example, with generalist approaches such as the SOTA
vision foundation model Segment Anything [24], which attempts to recognize
and segment arbitrary objects. Although such foundation models can recognize
many diverse objects, they may lack precision for a specific use case [22]. To
overcome this problem, there are also SOTA domain-specific biomedical methods
like CPN [58] and StarDist [50] as well as task-specific models like Omnipose [9].
With the wide variety of models available, selecting the most appropriate one
for a given task remains a significant challenge. Currently, this selection is still
performed manually, although solutions that aim to automate it are being
proposed: the works in [19, 35, 55] investigate image similarity metrics to
select the best model for a given task.
Experiment Database. MLCI experiments produce vast amounts of data.
This data and associated metadata must be stored and managed for subsequent
analysis and reporting. In the context of EAP4EMSIG, the data management
tool must support the FAIR data management principles as depicted in [59].
For our work, the most suitable tool is Open Microscopy Environment Re-
mote Objects (OMERO) [1], an open-source tool for managing, analyzing
and visualizing microscopy images and their metadata. It offers a centralized,
secure and scalable solution for handling diverse imaging data types, facilitating
collaboration and data sharing among entities. Compared to other SOTA data
management tools, OMERO provides advanced visualization tools and supports
integration with other image analysis software [49].
Semi-Automated Data Annotations. To train the segmentation methods,
corresponding training data are crucial. Annotating this data is typically time-
consuming, so semi-automated segmentation tools like KaIDA [48] or ObiWan-
Microbi [53] are often employed in biomedical use cases [24, 47, 60]. This
process involves training a network on a small amount of manually annotated
data and having a human annotator refine the network's predictions on new,
unannotated datasets.
AI-ready Image with Ground Truth Cell Simulation. A significant challenge
in applying Deep Learning (DL) techniques is the need for labeled data for
training and validation. Particularly in cell instance segmentation tasks, pixel-
exact masks that accurately distinguish individual cells from the background
are essential. Due to the high cost of generating such labeled data [21],
cell simulators have been developed [26, 56]. Their aim is to create realistic,
controlled and reproducible cellular models that accurately reflect biological
processes. For bacterial microcolony ground truth generation, particularly in the
context of phase contrast microscopy, the cell simulator CellSium [45] emerges
as the most suitable tool for this work. It is an agent-based, highly customizable and
versatile simulator that produces data for different use cases.
Module Interaction. Given the complexity of integrating software, hardware
and biological components in laboratory experiments, a suitable architecture is
required. This architecture must be robust, understandable, modular and most
importantly extendable due to the uniqueness of each laboratory experiment. For
our EAP, we currently use Robot Operating System (ROS) [32], an open-source
framework primarily for developing robot software.
ROS provides a modular architecture based on the central notion of nodes.
Each node represents a functional unit and can exchange messages with other
nodes, particularly in an event-based manner. Hence, ROS is well suited for
real-time tasks in various fields, such as in [20]. Nevertheless, due to the high
complexity of installing and maintaining ROS as well as its dependency
bugs [15], only very few approaches use it. The closest to ours is
Archemist [13], an experiment-automation system for chemistry laboratories.
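To make the node-based interaction concrete, the following minimal sketch shows how two EAP modules could exchange event messages as ROS 2 nodes using rclpy. The node names, topic and message payload are illustrative assumptions, not the actual EAP4EMSIG interfaces.

```python
# Minimal sketch of two EAP modules exchanging event messages as ROS 2 nodes.
# Requires a ROS 2 installation with rclpy; node names, the topic and the
# message payload are illustrative, not the actual EAP4EMSIG interfaces.
import rclpy
from rclpy.executors import SingleThreadedExecutor
from rclpy.node import Node
from std_msgs.msg import String


class SegmentationNode(Node):
    """Publishes an event whenever a new segmentation result is available."""

    def __init__(self):
        super().__init__('segmentation_node')
        self.publisher = self.create_publisher(String, 'segmentation_events', 10)
        # Publish a dummy result once per second; a real module would publish
        # after each segmented frame.
        self.create_timer(1.0, self.publish_result)

    def publish_result(self):
        msg = String()
        msg.data = '{"chamber": 17, "cells": 412}'  # placeholder payload
        self.publisher.publish(msg)


class PlannerNode(Node):
    """Reacts to segmentation events, e.g., to reschedule chamber visits."""

    def __init__(self):
        super().__init__('planner_node')
        self.create_subscription(String, 'segmentation_events', self.on_event, 10)

    def on_event(self, msg: String):
        self.get_logger().info(f'received event: {msg.data}')


def main():
    rclpy.init()
    executor = SingleThreadedExecutor()
    executor.add_node(SegmentationNode())
    executor.add_node(PlannerNode())
    executor.spin()


if __name__ == '__main__':
    main()
```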
An alternative to ROS, which is currently being investigated for our EAP, is
Dataflow-Oriented Robotic Architecture (DORA, https://dora-rs.ai/), a
framework designed to ease and simplify the development of AI-based robotic
applications. To the best of our knowledge, DORA is very new and has not been
used for experiment automation tasks yet. It provides low-latency, composable
and distributed dataflow capabilities. Applications are organized as directed
graphs, often referred to as pipelines. Although it is much faster than ROS,
it is still unstable and has a rather small community.
3 Methodology
To fill the noted gaps, we propose a new EAP approach, which is fully described
module-by-module in this section. As shown in Fig. 1, our system consists of
Figure 1:
EAP4EMSIG visualization. The pipeline consists of eight modules, represented by the
light blue boxes and the OMERO database, arranged in a cyclical process. The microbial
images in the figure come from dataset [51]. The images from the experiment chip are
from an internal dataset.
eight modules arranged in a cyclical process. For image acquisition, the system
utilizes SOTA research microscope setups and low-cost 3D-printed microscope
systems in the first EAP module. Second, the real-time image processing is
executed on incoming images, generating single-cell instance segmentation
predictions. The generated data and metadata are saved and managed in an
OMERO DataBase (DB) instance, the third module. This instance is also
used to manage ground truth data generated with the cell simulator module
CellSium and the ObiWan-Microbi semi-annotation module, the fourth and
fifth modules. Sixth, the real-time data analysis module relies on this data
to provide feedback via a dashboard and to detect events. Based on these
events, the real-time experiment planner, the seventh module, continuously
schedules the experiment and sends the next steps to the microscope control
module, the eighth module, which forwards these instructions back to the image
acquisition module. The modules are described individually in the upcoming sections.
3.1 Microscope Control
µManager. Automatic control of the microscope is essential for experiment
automation. To make our EAP as independent as possible from microscope
manufacturers and thus enable easy transfer to new laboratories and microscopes,
µManager [11] is used. µManager is an open-source software package that
controls microscopes and associated hardware components such as cameras,
stages and shutters. It provides a powerful, flexible and cost-effective solution
for automated microscopy. In this work, the implementation is done in Python
with the help of Pymmcore(-Plus) (https://github.com/micro-manager/pymmcore)
on a Nikon T1-based setup.
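As an illustration of this control layer, the following hedged sketch shows typical Pymmcore-Plus calls for stage movement and image acquisition; the configuration file name and all positions are placeholders, not the parameters of the actual setup.

```python
# Hedged sketch of microscope control via Pymmcore-Plus. The configuration
# file and all positions are placeholders; the calls shown (snapImage,
# setXYPosition, setPosition, ...) are standard MMCore API methods.
from pymmcore_plus import CMMCorePlus

core = CMMCorePlus.instance()
core.loadSystemConfiguration("nikon_setup.cfg")  # hypothetical µManager config

# Move the XY stage to a growth chamber and acquire a frame.
core.setXYPosition(1250.0, 830.0)  # placeholder coordinates in µm
core.waitForSystem()               # block until all devices are idle
core.snapImage()
frame = core.getImage()            # camera frame as a numpy array

# Apply a focus correction, e.g., an offset predicted by the autofocus model.
z = core.getPosition()             # position of the default focus device
core.setPosition(z + 0.5)          # illustrative 0.5 µm correction
```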
Autofocusing. One specific challenge we address here is the autofocusing
of the microscope. We treat autofocusing as a regression problem, where
a simple Multi-Layer Perceptron (MLP) is used to predict the relationship
between the extracted input features from microscopy images and the continuous
target variable, which is the distance to the optimal focus frame (among all
z-stacks). After predicting the focus offset and direction, our system employs a
closed-loop control mechanism to communicate the predicted adjustments to
the microscope control. The focus is then iteratively adjusted until the optimal
focus is reached.
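A minimal sketch of this regression formulation follows, assuming simple sharpness features and synthetic stand-in training data (real training data would come from z-stacks with known offsets); the MLP here uses scikit-learn and is not the actual implementation.

```python
# Sketch of autofocusing as a regression problem: an MLP predicts the signed
# offset (in µm) to the optimal focus plane from simple sharpness features.
# The synthetic frames below only stand in for real z-stacks with known
# offsets; the feature set and tolerances are illustrative assumptions.
import numpy as np
from scipy.ndimage import laplace
from sklearn.neural_network import MLPRegressor


def focus_features(image: np.ndarray) -> np.ndarray:
    """Simple focus-related features of a grayscale frame."""
    lap = laplace(image.astype(np.float64))
    return np.array([lap.var(), image.std(), image.mean()])


# Stand-in training data: frame statistics vary with the known signed offset z.
rng = np.random.default_rng(0)
offsets = rng.uniform(-5.0, 5.0, size=500)
frames = [rng.normal(0.1 * z, 1.0 + abs(z), size=(64, 64)) for z in offsets]
X = np.stack([focus_features(f) for f in frames])

model = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0)
model.fit(X, offsets)

# Closed-loop use: predict the offset for a new frame; the controller would
# then command the focus drive to move by -offset and repeat until converged.
test_frame = rng.normal(0.1 * 2.0, 3.0, size=(64, 64))  # frame taken ~2 µm off
offset = model.predict(focus_features(test_frame).reshape(1, -1))[0]
print(f"predicted offset: {offset:.2f} µm -> move focus drive by {-offset:.2f} µm")
```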
3.2 Image Acquisition
This module handles image acquisition primarily in two different ways:
1. The most common process is using real research microscopes for the
experiments and generating high-quality images. This is still by far the
most used approach, particularly due to the ability of such microscopes
to provide direct, high-resolution and high-fidelity images of biological
samples.

2. The low-cost alternative to such expensive tools is 3D-printed
microscopes, which are emerging as cost-effective and accessible tools,
especially in educational settings and low-resource environments [8].
However, they generally do not match the resolution and functionality of
high-end commercial microscopes.
The acquired image data and metadata are then collected and saved according to
standardized protocols in an OMERO DB. Standardization offers the possibility
of a uniform mask for querying different information for all modules (including
future ones). Furthermore, the data can be distributed, stored and accessed
worldwide.
3.3 Real-Time Image Processing
The image data acquired in the previous step (see Section 3.2) is processed
as shown in Fig. 1. On the one hand, the region of interest, that is the
growth chamber, is extracted by removing any microfluidic structures from
the images. On the other hand, the content of the chamber is segmented
using a suitable method. In this work, we focus on SOTA DL segmentation
methods (see Section 2), which are either task-specific, domain-specific or
foundation models and therefore allow us to address various segmentation tasks
effectively. We also investigate the data processing speed of these methods. This
is important because the classification of events and, therefore, the decisions
of the experiment planner (see Section 3.8) depend heavily on the segmentation
results.
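A schematic sketch of this per-frame step is given below; the chamber ROI and the segment() placeholder (standing in for, e.g., an Omnipose or CPN wrapper) are illustrative assumptions, and the latency check reflects the real-time requirement discussed in Section 4.

```python
# Schematic sketch of the per-frame processing step: crop the growth-chamber
# ROI, run an instance segmentation model and check the latency budget. The
# ROI and the segment() placeholder are illustrative assumptions.
import time
import numpy as np


def crop_chamber(frame: np.ndarray, roi: tuple) -> np.ndarray:
    """Keep only the chamber region, removing microfluidic structures."""
    y0, y1, x0, x1 = roi
    return frame[y0:y1, x0:x1]


def segment(chamber: np.ndarray) -> np.ndarray:
    """Placeholder instance segmentation (0 = background, 1..N = cell IDs)."""
    return (chamber > chamber.mean()).astype(np.int32)  # dummy thresholding


frame = np.random.rand(1024, 1024)
start = time.perf_counter()
mask = segment(crop_chamber(frame, (100, 900, 150, 950)))
latency_ms = (time.perf_counter() - start) * 1e3
print(f"{mask.max()} instance(s), latency {latency_ms:.1f} ms (budget: ~100 ms)")
```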
3.4 OMERO Database
As mentioned in Section 2, we use OMERO to manage not only the images
and the associated metadata in a centralized and standardized manner but also
the results of downstream analyses such as chamber detection and extraction,
segmentation and cell analysis. All modules of the EAP (see Fig. 1) can retrieve
information via a standardized interface. In addition, this makes it possible
for human experts to easily and comprehensibly document their experiments,
including access to the post-processing and analysis results.
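As an illustration, the following hedged sketch accesses an OMERO instance via omero-py's BlitzGateway; host, credentials and the image ID are placeholders.

```python
# Hedged sketch of OMERO access via omero-py's BlitzGateway; host, credentials
# and the image ID are placeholders.
from omero.gateway import BlitzGateway, MapAnnotationWrapper

conn = BlitzGateway("user", "password", host="omero.example.org", port=4064)
conn.connect()
try:
    image = conn.getObject("Image", 123)                # placeholder image ID
    plane = image.getPrimaryPixels().getPlane(0, 0, 0)  # z=0, c=0, t=0 as numpy array

    # Attach analysis results as key-value metadata so that other EAP modules
    # (dashboard, planner) can query them via the same standardized interface.
    ann = MapAnnotationWrapper(conn)
    ann.setValue([["cell_count", "412"], ["module", "real-time segmentation"]])
    ann.save()
    image.linkAnnotation(ann)
finally:
    conn.close()
```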
3.5 ObiWan-Microbi
For intra- and inter-cell analysis to be possible, the best feasible extraction
of objects through segmentation (see Section 3.3) is required. One challenge
in our context is the large number of densely packed cells that need to be
segmented. To date, there are no labeled datasets that accurately represent a
comparable use case, which would facilitate transfer learning or the training of
supervised segmentation methods. Since manual labeling alone would be too
slow and inefficient, the semi-automated annotation tool ObiWan-Microbi is
used in this work. This tool allows the prediction and correction of labels and
the subsequent retraining of the DL segmentation models used. An example of
a dataset created this way is [51], which is used to evaluate the segmentation
methods in Section 4.
3.6 CellSium
However, even the creation of labels using semi-automated methods such as
ObiWan-Microbi (see Section 3.5) costs a lot of human time and therefore
money, especially in the first iteration, because the segmentation methods
initially provide only rough, rectangular pre-segmentations. An alternative is
to give the segmentation methods an initial basis by using automatically
generated images with associated labels, e.g., from simulations. The simulator
CellSium is used
in our work. CellSium enables the generation of microbe images in different
growth stages and also in the density and frequency required in our context. Even
if these images cannot represent the full diversity of real images, combining
them with data augmentation methods can lead to first stable results, as shown
in [45], so that only slight adjustments have to be made in ObiWan-Microbi.
3.7 Real-Time Data Analysis
3.7.1 Dashboard
Once the microbes have been segmented, single-cell data such as average
cell size and growth rate are computed and saved. This data is visualized
and, most importantly, leveraged by the human expert to navigate through the
experiment. For this purpose, a customized dashboard is being developed. The
dashboard provides real-time insights into ongoing experiments such as cell
count, growth rate and average cell size per chamber. The dashboard integrates
various functionalities to facilitate the monitoring and analysis of the experiment.
Due to its modular architecture, which facilitates the seamless integration of
new features and functionalities without disrupting the existing codebase, our
dashboard is highly extendable and can be tailored for other use cases.
3.7.2 Event Detection
The data and metadata gathered from the real-time data analysis and image
processing enable us to detect different events in hundreds of parallel
experiments and resolve their temporal evolution. In our case, we have two
classes of events. On the one hand, there are technical events related to the
devices themselves, e.g., loss of focus or chamber defects. On the other hand,
there are biological events that reflect the behavior of the microbes, such as
changes in growth rate or cell death. The detection is performed based on rules
provided by the domain expert.
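An illustrative sketch of such rule-based detection follows; the thresholds and state fields are assumptions standing in for the expert-provided rules.

```python
# Illustrative rule-based event detection over per-chamber measurements; the
# thresholds are placeholders that a domain expert would configure.
from dataclasses import dataclass


@dataclass
class ChamberState:
    sharpness: float      # focus measure of the latest frame
    cell_count: int
    growth_rate: float    # e.g., ln(N_t / N_{t-1}) per hour


def detect_events(prev: ChamberState, curr: ChamberState) -> list[str]:
    events = []
    # Technical events (device-related).
    if curr.sharpness < 0.5 * prev.sharpness:
        events.append("loss_of_focus")
    # Biological events (microbe behavior).
    if curr.cell_count < prev.cell_count:
        events.append("possible_cell_death_or_washout")
    if curr.growth_rate > 2.0 * max(prev.growth_rate, 1e-9):
        events.append("growth_rate_spike")
    return events


print(detect_events(ChamberState(1.0, 400, 0.30), ChamberState(0.4, 380, 0.28)))
# -> ['loss_of_focus', 'possible_cell_death_or_washout']
```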
3.8 Real-Time Experiment Planner
A central part of the EAP is the intelligent experiment planner. The next
n chambers to be explored are determined based on the most recently recorded
chamber, including the resulting data analysis. The determination follows the
experiment objectives defined by the human domain expert.
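The following sketch illustrates the planning idea as a simple priority ranking; the scoring terms and chamber fields are illustrative assumptions, not the actual policy.

```python
# Sketch of the planner idea: rank chambers by how well they still serve the
# experiment objectives and visit the top n next. The scoring terms are
# illustrative assumptions.
def priority(chamber: dict) -> float:
    if chamber["defect"] or chamber["full"]:
        return 0.0                                # irrelevant chambers drop out
    staleness = chamber["time_since_visit_s"] / 60.0
    return chamber["growth_rate"] + 0.1 * staleness


def next_chambers(chambers: list[dict], n: int) -> list[int]:
    ranked = sorted(chambers, key=priority, reverse=True)
    return [c["id"] for c in ranked[:n] if priority(c) > 0.0]


chambers = [
    {"id": 1, "defect": False, "full": False, "growth_rate": 0.4, "time_since_visit_s": 300},
    {"id": 2, "defect": True,  "full": False, "growth_rate": 0.9, "time_since_visit_s": 600},
    {"id": 3, "defect": False, "full": False, "growth_rate": 0.7, "time_since_visit_s": 240},
]
print(next_chambers(chambers, n=2))  # -> [3, 1]; the defective chamber is skipped
```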
4 Experiments
In this section, preliminary experiments and results of our approach, particularly
for real-time image segmentation, are presented and discussed. Four
segmentation algorithms are compared on an Ubuntu 22.04-based workstation
with an Intel Core i9-13900 Central Processing Unit (CPU), an RTX 3090
Graphics Processing Unit (GPU) and 64 GB of Random-Access Memory (RAM).
This system was deliberately chosen as a lower-end configuration because its
hardware components represent an affordable setup for users interested in such
use cases. The measured inference times can therefore be considered a realistic,
conservative baseline; an improved hardware configuration can achieve an
additional performance boost. We define 100 ms as the upper limit for real-time
inference, since initial tests of the microscope control program have shown that
this is sufficient for the EAP4EMSIG, including autofocusing.
4.1 Dataset, Metrics and Implementation
The benchmark dataset [51] is used to evaluate the methods. The dataset
contains images of Corynebacterium glutamicum microbes and represents a
typical experiment in cell culture. The dataset includes five video sequences of
the development of the microbes with 800 images each and provides ground truth
instance segmentation masks and tracking paths. In the context of this work,
we use all 5 × 800 images purely to evaluate segmentation performance.
To evaluate the segmentation accuracy, the metrics AP, including AP@0.50
and AP@0.75, and PQ, comprising Segmentation Quality (SQ) and Recognition
Quality (RQ), are calculated for all four methods mentioned in Section 2
(see Table 1) using their respective official implementations.
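For reference, PQ as defined in [23] factorizes into SQ and RQ. Restated, with TP the set of matched prediction-ground-truth pairs (IoU > 0.5) and FP, FN the unmatched predictions and unmatched ground truth instances:

```latex
\mathrm{PQ}
  = \underbrace{\frac{\sum_{(p,g) \in TP} \mathrm{IoU}(p,g)}{|TP|}}_{\text{SQ}}
    \times
    \underbrace{\frac{|TP|}{|TP| + \tfrac{1}{2}|FP| + \tfrac{1}{2}|FN|}}_{\text{RQ}}
```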
Since the AP-based metrics require a confidence score for their calculation,
evaluating them on Omnipose was not possible: Omnipose does not directly
return uncertainties for predicted masks, and no official way to extract them
was known at the time of publication of this work.
4.2 Real-Time Image Processing: Segmentation
The evaluation results of the four methods are shown in Table 1. In addition,
Fig. 2 displays an example image from the dataset (Fig. 2a) and the respective
segmentation results (Fig. 2b to Fig. 2e) for a medium population density with
approx. 400 microbes.
(a) Original (b) Omnipose [9] (c) StarDist [50]
(d) CPN [58] (e) SAM [24]
Figure 2: Comparison of zero-shot instance segmentation predictions for [51]. The original image
is shown in Fig. 2a and the predictions are shown in Fig. 2b to Fig. 2e.
From the results in Table 1, Omnipose is the best model with respect to the
PQ, SQ and RQ scores. However, CPN, a domain-specific model, is not far
behind, with a PQ score difference of 0.0761, and is 86 ms faster than
Omnipose. In detail, CPN's RQ score is only slightly lower; the difference in
PQ score is primarily due to the notably worse SQ. The SQ difference can be
seen by directly comparing Fig. 2b and Fig. 2d: while Omnipose segmented the
objects cleanly, including at the edges, CPN struggled. Additionally, for
curved microbes, parts towards the ends are often no longer properly recognized.
Nevertheless, CPN's performance is quite remarkable because, in contrast to
the task-specific model Omnipose, CPN's training datasets contained no such
long, rod-shaped objects or similar microbial colonies. It is also remarkable
that CPN, as a domain model, recognizes the object instances with a similarly
good RQ score (only 0.0172 difference).
Metric      Omnipose [9]   StarDist [50]   CPN [58]   SAM-H [24]
AP          -              0.0000          0.6232     0.0347
AP@0.5      -              0.0000          0.9551     0.0476
AP@0.75     -              0.0000          0.8170     0.0470
PQ          0.9336         0.3629          0.8575     0.0626
PQ-SQ       0.9395         0.7287          0.8779     0.8416
PQ-RQ       0.9935         0.4093          0.9763     0.0736
Inf. [ms]   271            7686            185        1994

Table 1: Average Precision (AP) results and Panoptic Quality (PQ) results, comprising the Segmentation Quality (SQ) and Recognition Quality (RQ) scores, as well as inference times (Inf.), evaluated on the benchmark dataset [51]. When calculating the AP metrics, falsely detected backgrounds were not removed and were counted as false positives. The models were used in their basic configurations for a fair comparison. Bold values are the best across all methods, where results are available. Inference time is defined as the duration from inputting the image to receiving the model's prediction as an instance mask with confidence scores, including any post-processing required by certain methods, such as converting predicted contours to a pixel-wise mask for the EAP4EMSIG pipeline. All inference times were measured with FP32 precision.
Both the number of objects and the difficult boundaries between the objects
are not new for CPN; they also occur, for example, in the NeurIPS 22 Cell
Segmentation Competition dataset (https://neurips22-cellseg.grand-challenge.org/),
one of its pre-training datasets. However, in combination with the shape of the
microbes, this is a noteworthy generalization achievement.
The second domain model, StarDist, on the other hand, with a PQ score of 0.3629
and an AP score of 0, has not yielded sufficient results and is clearly worse
than CPN. The vision foundation model Segment Anything, the fourth model,
is also not convincing, with a PQ score of 0.0626. It is worth noting, however,
that its SQ score is only slightly lower than that of CPN, indicating that the
objects recognized as True Positives (TP) were segmented well. Upon examining
RQ, it appears that the number of TP is very low. Segment Anything's problem
can also be seen in Fig. 2e, where almost the entire cluster of microbes is
predicted as one object. Although the dataset SA-1B [24], on which the model
was trained, contains many scenes with comparably many objects in dense
clusters, it is quite possible that this knowledge does not transfer to the
shape of the microbes, which in turn raises doubts about the claim to segment
"anything".
5 Conclusion
This paper presents the EAP4EMSIG - a novel pipeline for experiment
automation for thousands of microbe colonies on microfluidic chips. For this
purpose, the methodological concept of each of the eight pipeline modules was
introduced, explained and distinguished from existing alternatives. Preliminary
development steps of the pipeline were presented, particularly for the real-time
image segmentation module. To this end, four SOTA methods were compared
and evaluated quantitatively and qualitatively in the paper. CPN and Omnipose
proved to be particularly powerful. Omnipose, trained task-specifically for
bacteria segmentation, is 86 ms slower at inference than CPN but has a
slightly better recognition quality and a noticeably higher segmentation quality.
However, because CPN was not trained explicitly for bacterial segmentation
but on very diverse biomedical cells, such as blood cells or nuclei of different
cell types, future work will investigate retraining different methods to obtain
the best model for real-time segmentation in the EAP4EMSIG.
Future work will also investigate increasing segmentation speed to meet the
100 ms limit required for our task, for example by converting models to
specialized inference formats like TensorRT (https://github.com/NVIDIA/TensorRT)
or by transforming the trained models to a lower precision (e.g., FP16 or INT8).
Even though only initial results for the real-time image processing module were
shown in this work, the other modules are also being developed. So far, the
pipeline as a whole has not yet been tested, but the modules themselves are
already in advanced development and, in some cases, ready for use, such as
CellSium or ObiWan-Microbi. The next steps are to combine the modules and
test the EAP4EMSIG as a whole.
Acknowledgments
This work was supported by the President’s Initiative and Networking Funds of
the Helmholtz Association of German Research Centres [Grant EMSIG ZT-I-
PF-04-44]. The Helmholtz Association funds this project under the "Helmholtz
Imaging Platform", the authors N. Friederich, A. J. Yamachui Sitcheu and
R. Mikut under the program "Natural, Artificial and Cognitive Information
Processing (NACIP)", the authors N. Friederich and A. J. Yamachui Sitcheu
through the graduate school "Helmholtz Information & Data Science School
for Health (HIDSS4Health)" and the author Johannes Seiffarth through the
graduate school "Helmholtz School for Data Science in Life, Earth and Energy
(HDS-LEE)".
The authors have accepted responsibility for the entire content of this manuscript
and approved its submission. We describe here the individual contributions of
N. Friederich (NF), A. J. Yamachui Sitcheu (AJYS), A. Nassal (AN), M. Pesch
(MP), E. Yildiz (EY), M. Beichter (MB), L. Scholtes (LS), B. Akbaba (BA),
T. Lautenschlager (TL), O. Neumann (ON), D. Kohlheyer (DK), H. Scharr (HS),
J. Seiffarth (JS), K. Nöh (KN), R. Mikut (RM): Conceptualization: NF, AJYS,
JS, RM; Methodology: NF, AJYS, EY, JS, DK, HS, KN, RM; Software: NF,
AN, MB; Investigation: NF, AJYS, JS; Resources: JS, KN; Writing Original
Draft: NF, AJYS, MP, MB, ON, EY, JS; Writing Review & Editing: NF, AJYS,
AN, MP, EY, MB, LS, BA, TL, ON, NK, HS, JS, KN, RM; Supervision: DK,
HS, KN, RM; Project administration: DK, HS, KN, RM; Funding Acquisition:
DK, HS, EY, KN, RM.
References
[1] C. Allan, J.-M. Burel, J. Moore, C. Blackburn, M. Linkert, S. Loynton, D. MacDonald, W. J. Moore, C. Neves, A. Patterson, et al. OMERO: flexible, model-driven data management for experimental biology. Nature Methods, 9(3):245–253, 2012.

[2] U. Bethe, Z. D. Pana, C. Drosten, H. Goossens, F. König, A. Marchant, G. Molenberghs, M. Posch, P. van Damme, and O. A. Cornely. Innovative approaches for vaccine trials as a key component of pandemic preparedness - a white paper. Infection, 2024.

[3] K. M. Carbone, R. B. Luftig, and M. Buckley. Microbial Triggers of Chronic Human Illness. Am Soc Microbiol, 2005.

[4] A. E. Carpenter, B. A. Cimini, and K. W. Eliceiri. Smart microscopes of the future. Nature Methods, 20(7):962–964, 2023.

[5] A. Casadevall. Microbes and Climate Change - Science, People & Impacts. 2022.

[6] L. Chiron, M. Le Bec, C. Cordier, S. Pouzet, D. Milunov, A. Banderas, J.-M. Di Meglio, B. Sorre, and P. Hersen. CyberSco.Py an open-source software for event-based, conditional microscopy. Scientific Reports, 12(1):11579, 2022.

[7] A. Christovich and X. M. Luo. Gut microbiota, leaky gut, and autoimmune diseases. Frontiers in Immunology, 13:946248, 2022.

[8] J. T. Collins, J. Knapper, J. Stirling, J. Mduda, C. Mkindi, V. Mayagaya, G. A. Mwakajinga, P. T. Nyakyi, V. L. Sanga, D. Carbery, L. White, S. Dale, Z. J. Lim, J. J. Baumberg, P. Cicuta, S. McDermott, B. Vodenicharski, and R. Bowman. Robotic microscopy for everyone: the OpenFlexure microscope. Biomed. Opt. Express, 11(5):2447–2460, 2020.

[9] K. J. Cutler, C. Stringer, T. W. Lo, L. Rappez, N. Stroustrup, S. Brook Peterson, P. A. Wiggins, and J. D. Mougous. Omnipose: a high-precision morphology-independent solution for bacterial cell segmentation. Nature Methods, 19(11):1438–1448, 2022.

[10] P. Dettinger, T. Frank, M. Etzrodt, N. Ahmed, A. Reimann, C. Trenzinger, D. Loeffler, K. D. Kokkaliaris, T. Schroeder, and S. Tay. Automated microfluidic system for dynamic stimulation and tracking of single cells. Analytical Chemistry, 90(18):10695–10700, 2018.

[11] A. D. Edelstein, M. A. Tsuchida, N. Amodaj, H. Pinkard, R. D. Vale, and N. Stuurman. Advanced methods of microscope control using µManager software. Journal of Biological Methods, 1(2), 2014.

[12] A. Esteva, K. Chou, S. Yeung, N. Naik, A. Madani, A. Mottaghi, Y. Liu, E. Topol, J. Dean, and R. Socher. Deep learning-enabled medical computer vision. npj Digital Medicine, 4(1):5, 2021.

[13] H. Fakhruldeen, G. Pizzuto, J. Glowacki, and A. I. Cooper. Archemist: Autonomous robotic chemistry system architecture. In 2022 International Conference on Robotics and Automation (ICRA), pages 6013–6019. IEEE, 2022.

[14] M. S. Fasihi and W. B. Mikhael. Overview of current biomedical image segmentation methods. In 2016 International Conference on Computational Science and Computational Intelligence (CSCI), pages 803–808, 2016.

[15] A. Fischer-Nielsen, Z. Fu, T. Su, and A. Wąsowski. The forgotten case of the dependency bugs: on the example of the robot operating system. In Proceedings of the ACM/IEEE 42nd International Conference on Software Engineering: Software Engineering in Practice, pages 21–30, 2020.

[16] Z. R. Fox, S. Fletcher, A. Fraisse, C. Aditya, S. Sosa-Carrillo, J. Petit, S. Gilles, F. Bertaux, J. Ruess, and G. Batt. Enabling reactive microscopy with MicroMator. Nature Communications, 13(1):2199, 2022.

[17] N. Friederich, A. J. Yamachui Sitcheu, O. Neumann, S. Eroglu-Kayıkçı, R. Prizak, L. Hilbert, and R. Mikut. AI-based automated active learning for discovery of hidden dynamic processes: A use case in light microscopy. In Proceedings - 33. Workshop Computational Intelligence: Berlin, 23.-24. November 2023, volume 23, page 31. KIT Scientific Publishing, 2023.

[18] M. Ganesan, R. Mani, S. Sai, G. Kasivelu, M. K. Awasthi, R. Rajagopal, N. I. Wan Azelee, P. K. Selvi, S. W. Chang, and B. Ravindran. Bioremediation by oil degrading marine bacteria: An overview of supplements and pathways in key processes. Chemosphere, 303(Pt 1):134956, 2022.

[19] P. Godau and L. Maier-Hein. Task Fingerprinting for Meta Learning in Biomedical Image Analysis. In Medical Image Computing and Computer-Assisted Intervention MICCAI 2021, pages 436–446. Springer, 2021.

[20] F. He and L. Zhang. Design of indoor security robot based on robot operating system. Journal of Computer and Communications, 11(5):93–107, 2023.

[21] H. Jeckel and K. Drescher. Advances and opportunities in image analysis of bacterial cells and communities. FEMS Microbiology Reviews, 45, 2020.

[22] W. Ji, J. Li, Q. Bi, T. Liu, W. Li, and L. Cheng. Segment Anything Is Not Always Perfect: An Investigation of SAM on Different Real-world Applications. Machine Intelligence Research, 21(4):617–630, 2024.

[23] A. Kirillov, K. He, R. Girshick, C. Rother, and P. Dollár. Panoptic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9404–9413, 2019.

[24] A. Kirillov, E. Mintun, N. Ravi, H. Mao, C. Rolland, L. Gustafson, T. Xiao, S. Whitehead, A. C. Berg, W.-Y. Lo, P. Dollár, and R. Girshick. Segment Anything. arXiv:2304.02643, 2023.

[25] G. Lancini and A. L. Demain. Bacterial pharmaceutical products. In E. Rosenberg, E. F. DeLong, S. Lory, E. Stackebrandt, and F. Thompson, editors, The Prokaryotes, pages 257–280. Springer Berlin Heidelberg, Berlin, Heidelberg, 2013.

[26] A. Lehmussola, P. Ruusuvuori, J. Selinummi, H. Huttunen, and O. Yli-Harja. Computational framework for simulating fluorescence microscope images with cell populations. IEEE Transactions on Medical Imaging, 26(7):1010–1016, 2007.

[27] M. Li, A. Fang, X. Yu, K. Zhang, Z. He, C. Wang, Y. Peng, F. Xiao, T. Yang, W. Zhang, X. Zheng, Q. Zhong, X. Liu, and Q. Yan. Microbially-driven sulfur cycling microbial communities in different mangrove sediments. Chemosphere, 273:128597, 2021.

[28] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick. Microsoft COCO: Common Objects in Context. In Computer Vision - ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V, pages 740–755. Springer, 2014.

[29] B.-N. Liu, X.-T. Liu, Z.-H. Liang, and J.-H. Wang. Gut microbiota in obesity. World Journal of Gastroenterology, 27(25):3837–3850, 2021.

[30] K. J. Locey and J. T. Lennon. Scaling laws predict global microbial diversity. Proceedings of the National Academy of Sciences of the United States of America, 113(21):5970–5975, 2016.

[31] A. Lou, S. Guan, and M. Loew. CFPNet-M: A Light-Weight Encoder-Decoder Based Network for Multimodal Biomedical Image Real-Time Segmentation. Computers in Biology and Medicine, 154:106579, 2023.

[32] S. Macenski, T. Foote, B. Gerkey, C. Lalancette, and W. Woodall. Robot Operating System 2: Design, architecture, and uses in the wild. Science Robotics, 7(66):eabm6074, 2022.

[33] D. Mahecic, W. L. Stepp, C. Zhang, J. Griffié, M. Weigert, and S. Manley. Event-driven acquisition for content-enriched microscopy. Nature Methods, 19(10):1262–1267, 2022.

[34] B. Miles and P. L. Lee. Achieving Reproducibility and Closed-Loop Automation in Biological Experimentation with an IoT-Enabled Lab of the Future. SLAS TECHNOLOGY: Translating Life Sciences Innovation, 23(5):432–439, 2018.

[35] M. Molina-Moreno, M. P. Schilling, M. Reischl, and R. Mikut. Automated Style-Aware Selection of Annotated Pre-Training Databases in Biomedical Imaging. In 2023 IEEE 20th International Symposium on Biomedical Imaging (ISBI), pages 1–5, 2023.

[36] J. M. Nduko and S. Taguchi. Microbial production of biodegradable lactate-based polymers and oligomeric building blocks from renewable and waste resources. Frontiers in Bioengineering and Biotechnology, 8:618077, 2020.

[37] J. P. Neto, A. Mota, G. Lopes, B. J. Coelho, J. Frazão, A. T. Moura, B. Oliveira, B. Sieira, J. Fernandes, E. Fortunato, R. Martins, R. Igreja, P. V. Baptista, and H. Águas. Open-source tool for real-time and automated analysis of droplet-based microfluidic. Lab Chip, 23:3238–3244, 2023.

[38] K. Oliphant and E. Allen-Vercoe. Macronutrient metabolism by the human gut microbiome: major fermentation by-products and their impact on host health. Microbiome, 7(1):91, 2019.

[39] E. Pedone, I. De Cesare, C. G. Zamora-Chimal, D. Haener, L. Postiglione, A. La Regina, B. Shannon, N. J. Savery, C. S. Grierson, M. Di Bernardo, et al. Cheetah: a computational toolkit for cybergenetic control. ACS Synthetic Biology, 10(5):979–989, 2021.

[40] H. Pinkard, N. Stuurman, I. E. Ivanov, N. M. Anthony, W. Ouyang, B. Li, B. Yang, M. A. Tsuchida, B. Chhun, G. Zhang, et al. Pycro-Manager: open-source software for customized and reproducible microscope control. Nature Methods, 18(3):226–228, 2021.

[41] H. Pinkard and L. Waller. Microscopes are coming for your job. Nature Methods, 19(10):1175–1176, 2022.

[42] D. M. S. Pinto, M. A. Phillips, N. J. Hall, J. Mateos-Langerak, D. Stoychev, T. S. Pinto, M. J. Booth, I. Davis, and I. M. Dobbie. Python-Microscope - a new open-source Python library for the control of microscopes. Journal of Cell Science, 134, 2021.

[43] F. Rahmanian, J. Flowers, D. Guevarra, M. Richter, M. Fichtner, P. Donnely, J. M. Gregoire, and H. S. Stein. Enabling modular autonomous feedback-loops in materials science through hierarchical experimental laboratory automation and orchestration. Advanced Materials Interfaces, 9(8):2101987, 2022.

[44] O. Ronneberger, P. Fischer, and T. Brox. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Medical Image Computing and Computer-Assisted Intervention MICCAI 2015, pages 234–241. Springer, 2015.

[45] C. C. Sachs, K. Ruzaeva, J. Seiffarth, W. Wiechert, B. Berkels, and K. Nöh. CellSium: versatile cell simulator for microcolony ground truth generation. Bioinformatics Advances, 2(1):vbac053, 2022.

[46] S. Sanchez and A. L. Demain. Useful microbial enzymes - an introduction. In Biotechnology of Microbial Enzymes, pages 1–11. Elsevier, 2017.

[47] T. Scherr, J. Seiffarth, B. Wollenhaupt, O. Neumann, M. P. Schilling, D. Kohlheyer, H. Scharr, K. Nöh, and R. Mikut. microbeSEG: A deep learning software tool with OMERO data management for efficient and accurate cell segmentation. PLoS ONE, 17(11):e0277601, 2022.

[48] M. P. Schilling, S. Schmelzer, L. Klinger, and M. Reischl. KaIDA: a modular tool for assisting image annotation in deep learning. Journal of Integrative Bioinformatics, 19(4):20220018, 2022.

[49] C. Schmidt, J. Hanne, J. Moore, C. Meesters, E. Ferrando-May, S. Weidtkamp-Peters, et al. Research data management for bioimaging: the 2021 NFDI4BIOIMAGE community survey. F1000Research, 11, 2022.

[50] U. Schmidt, M. Weigert, C. Broaddus, and G. Myers. Cell Detection with Star-Convex Polygons. In Medical Image Computing and Computer-Assisted Intervention MICCAI 2018, pages 265–273. Springer, 2018.

[51] J. Seiffarth, L. Blöbaum, K. Löffler, T. Scherr, A. Grünberger, H. Scharr, R. Mikut, and K. Nöh. Data for - Tracking one in a million: Performance of automated tracking on a large-scale microbial data set. https://doi.org/10.5281/zenodo.7260137, 2022.

[52] J. Seiffarth, L. Blöbaum, R. Paul, N. Friederich, A. J. Yamachui Sitcheu, R. Mikut, H. Scharr, A. Grünberger, and K. Nöh. Tracking one-in-a-million: Large-scale benchmark for microbial single-cell tracking with experiment-aware robustness metrics. In European Conference on Computer Vision. Springer, 2024.

[53] J. Seiffarth, T. Scherr, B. Wollenhaupt, O. Neumann, H. Scharr, D. Kohlheyer, R. Mikut, and K. Nöh. ObiWan-Microbi: OMERO-based integrated workflow for annotating microbes in the cloud. SoftwareX, 26:101638, 2024.

[54] R. Sender, S. Fuchs, and R. Milo. Revised estimates for the number of human and bacteria cells in the body. PLoS Biology, 14(8):e1002533, 2016.

[55] A. Y. Sitcheu, N. Friederich, S. Baeuerle, O. Neumann, M. Reischl, and R. Mikut. MLOps for Scarce Image Data: A Use Case in Microscopic Image Analysis. In Proceedings - 33. Workshop Computational Intelligence: Berlin, 23.-24. November 2023, volume 23, page 169. KIT Scientific Publishing, 2023.

[56] D. Svoboda and V. Ulman. MitoGen: A Framework for Generating 3D Synthetic Time-Lapse Sequences of Cell Populations in Fluorescence Microscopy. IEEE Transactions on Medical Imaging, 36:310–321, 2017.

[57] D. M. Sylvia, J. J. Fuhrmann, P. G. Hartel, and D. A. Zuberer. Principles and applications of soil microbiology. Pearson, 2005.

[58] E. Upschulte, S. Harmeling, K. Amunts, and T. Dickscheid. Contour proposal networks for biomedical instance segmentation. Medical Image Analysis, 77:102371, 2022.

[59] M. D. Wilkinson, M. Dumontier, I. J. Aalbersberg, G. Appleton, M. Axton, A. Baak, N. Blomberg, J.-W. Boiten, L. B. da Silva Santos, P. E. Bourne, et al. The FAIR Guiding Principles for scientific data management and stewardship. Scientific Data, 3(1):1–9, 2016.

[60] B. Xiao, H. Wu, W. Xu, X. Dai, H. Hu, Y. Lu, M. Zeng, C. Liu, and L. Yuan. Florence-2: Advancing a unified representation for a variety of vision tasks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4818–4829, 2024.