Analysing the Performance and Costs of Reactive
Programming Libraries in Java
Julien Ponge
Red Hat
Lyon, France
Arthur Navarro
Red Hat
Villeurbanne, France
Clément Escoer
Red Hat
Valence, France
Frédéric Le Mouël
Univ Lyon, INSA Lyon, Inria, CITI, EA3720
Villeurbanne, France
Modern services running in cloud and edge environments
need to be resource-efficient to increase deployment density
and reduce operating costs. Asynchronous I/O combined
with asynchronous programming provides a solid technical
foundation to reach these goals. Reactive programming
and reactive streams are gaining traction in the Java ecosystem.
However, reactive streams implementations tend to be
complex to work with and maintain. This paper discusses
the performance of the three major reactive streams compliant
libraries used in Java applications: RxJava, Project
Reactor, and SmallRye Mutiny. As we will show, advanced
optimization techniques such as operator fusion do not yield
better performance on realistic I/O-bound workloads, and
they significantly increase development and maintenance costs.
CCS Concepts: Software and its engineering;
Keywords: reactive programming, reactive streams, Java
ACM Reference Format:
Julien Ponge, Arthur Navarro, Clément Escoffier, and Frédéric Le
Mouël. 2021. Analysing the Performance and Costs of Reactive Pro-
gramming Libraries in Java. In Proceedings of the 8th ACM SIGPLAN
International Workshop on Reactive and Event-Based Languages and
Systems (REBLS ’21), October 18, 2021, Chicago, IL, USA. ACM, New
York, NY, USA, 10 pages.
REBLS ’21, October 18, 2021, Chicago, IL, USA
© 2021 Copyright held by the owner/author(s). Publication rights licensed
to ACM.
This is the author's version of the work. It is posted here for your personal
use. Not for redistribution. The definitive Version of Record was published in
Proceedings of the 8th ACM SIGPLAN International Workshop on Reactive and
Event-Based Languages and Systems (REBLS '21), October 18, 2021, Chicago, IL, USA.
1 Introduction
Modern applications are made by composing distributed
services that are developed in-house or taken from off-the-shelf
third-party vendors. Services are being increasingly
deployed and operated in Kubernetes clusters in cloud and
edge environments [ ]. Micro-services recently became a popular
architecture style where each service has a tight functional
scope, has data ownership and has its own release
life-cycle. Such services can be scaled up and down in a fine-grained
fashion to respond to fluctuating workloads. For
instance, a service may have 12 instances running at peak
time during the day and 0 at night when there is no traffic.
It is increasingly important to maximize deployment density
in such environments where costs are driven by resource
usage [21], hence to deploy resource-efficient services [5].
One of the key ingredients for resource efficiency is to
move away from traditional software stacks where each network
connection is associated with a thread, and where I/O
operations are blocking [ ]. By moving to asynchronous I/O,
one can multiplex the processing of many concurrent connections
on a limited number of threads [ ], but this requires
abandoning familiar imperative programming constructs.
There is a great interest in the Java ecosystem for embracing
asynchronous I/O and asynchronous programming,
with reactive streams [ ] playing a pivotal role as a foundation
for higher-level programming models and middleware [ ].
Still, reactive streams implementations
such as RxJava [ ], Reactor [ ] and Mutiny [ ] are complex.
The maintenance of such libraries is expensive due to the
complexity of the reactive streams protocol. As reactive is
often associated with hopes for better performance, most
libraries introduced complex optimizations. By contrast, the
Mutiny library took a different approach by not including
any complex optimization, resulting in more straightforward,
easier to maintain code. A natural question arises: how does
Mutiny perform in comparison to the other, optimized libraries?
This paper discusses the performance of the 3 major reactive
programming libraries used in Java software stacks:
RxJava, Reactor and Mutiny. Experiments have been conducted
across a series of CPU-bound then I/O-bound micro-benchmarks
to measure the performance impact of the reactive
pipelines built with these libraries. As we will see, clever
optimization techniques do not necessarily yield better performance
on realistic I/O-bound workloads, which is what
reactive streams were designed for. Still, these techniques
greatly increase development and maintenance costs.
2 Reactive Programming in Java
Java has long provided support for asynchronous types inspired
by promises and futures [ ]. These types encapsulate single
operations. The java.util.stream package deals with functional
stream processing of in-memory collections, not resources
with asynchronous I/O. This led to the reactive
streams initiative that later influenced Java to adopt the proposed
interfaces as part of the standard library, albeit with a
still nascent adoption.
2.1 Reactive Streams
Reactive streams is a specification and a protocol for asynchronous
data stream processing. It defines a non-blocking
back-pressure protocol guaranteeing that a producer's throughput
respects the consumer's processing capacity [16].
Reactive streams are the lingua franca for an open and
vendor-neutral asynchronous programming ecosystem in
Java. For instance, both an event streaming service and a
large data store can expose reactive streams compliant clients.
Applications can then connect these APIs and build reactive
applications enforcing end-to-end back-pressure. Of course,
this requires that clients and the application code honor the
reactive streams protocol and semantics.
Figure 1. A sample reactive streams pipeline.
Reactive streams define an API and a protocol, captured in
implementations as a library and a technology compatibility
kit (TCK). The API defines 4 components, whose interactions
are illustrated by a sample pipeline in Figure 1.

- A publisher produces a potentially infinite sequence of items.
- A subscriber requests a subscription from a publisher,
  then receives zero or more items. The subscriber can
  also be notified of a terminal error, and it can also
  receive a completion signal that marks the end of the stream.
- A subscription is passed to a subscriber as a way to
  signal the publisher. A cancellation can be requested,
  after which the publisher eventually stops sending
  items. A subscription is also used to request a positive
  number of items, and the publisher should not send
  more than the requested amount. Unbounded requests
  are possible by requesting (2^63 − 1) items.
- A processor is both a publisher and a subscriber, used
  as an intermediary operator. For example, a processor
  can transform and filter values, or recover from errors.

Publishers, subscribers and processors have to pass the reactive
streams TCK. The reactive streams API is included
in Java 9 and beyond in the java.util.concurrent.Flow
class, and ships as an independent set of interfaces
in the org.reactivestreams library with Java 6 compatibility.
Reactive streams is a low-level protocol, and applications
should use higher-level reactive programming libraries. Many
compliant libraries exist, such as SmallRye Mutiny, RxJava,
or Project Reactor. In addition to the reactive streams API
and protocol, they offer a rich set of operators (e.g., transform
values, chain operations, manage failures, combine streams,
etc.), as well as publisher and subscriber implementations.
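These components correspond to the java.util.concurrent.Flow interfaces bundled with the JDK since Java 9. As a minimal, self-contained sketch (the class name and the item values are illustrative, not taken from the paper's benchmarks), here is a subscriber that keeps demand bounded by requesting one item at a time while selecting and transforming values:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

public class FlowDemo {

    // Collects n * 10 for the even items of 1..5, requesting one item at a time.
    public static List<Integer> run() throws InterruptedException {
        List<Integer> results = new CopyOnWriteArrayList<>();
        CountDownLatch done = new CountDownLatch(1);
        try (SubmissionPublisher<Integer> publisher = new SubmissionPublisher<>()) {
            publisher.subscribe(new Flow.Subscriber<>() {
                private Flow.Subscription subscription;

                @Override public void onSubscribe(Flow.Subscription s) {
                    subscription = s;
                    s.request(1); // bounded demand: one item at a time
                }

                @Override public void onNext(Integer n) {
                    if (n % 2 == 0) {
                        results.add(n * 10);
                    }
                    subscription.request(1); // ask for the next item
                }

                @Override public void onError(Throwable t) { done.countDown(); }
                @Override public void onComplete() { done.countDown(); }
            });
            for (int i = 1; i <= 5; i++) {
                publisher.submit(i); // blocks if the subscriber's buffer is saturated
            }
        } // close() eventually triggers onComplete
        done.await();
        return List.copyOf(results);
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run()); // [20, 40]
    }
}
```

The publisher never emits more than the outstanding demand, which is the back-pressure guarantee at the heart of the protocol.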
The reactive streams APIs are simple, but the protocol is
complex, with a broad set of rules regarding signal ordering,
serialized concurrent emissions, subscription retention, cancellation,
and more. The TCK offers to validate publishers,
subscribers and processors against the rules of the reactive
streams protocol. Writing correct implementations can be
surprisingly difficult despite the apparent simplicity of the APIs.
2.2 Libraries
We focus our comparison on three reactive programming
libraries for Java: RxJava, Reactor, and Mutiny.
2.2.1 RxJava. RxJava has historically been the first popular
library to offer reactive extensions in Java. It is popular
in the Android ecosystem, especially to respond to user inputs
and network requests in graphical user interfaces. The
popularity of RxJava in the Android space is fading as Android
has now switched focus to the Kotlin programming
language and (Kotlin) internal domain-specific languages for
writing reactive code. RxJava usage can be found in other
areas, from graphical user interfaces to backend development
where it composes asynchronous I/O operations (e.g., in the
Vert.x ecosystem).

Here is an example with a stream (type Flowable) where
even numbers are selected, then each value is recorded into
a database using an asynchronous record method:
flowable.filter(n -> n % 2 == 0)
        .flatMap(s -> record(db, s));
RxJava 1 did not support back-pressure and reactive streams,
which were an addition of RxJava 2 and now version 3. RxJava
brings the concepts from reactive extensions [ ] and
borrows functional programming terminology to name its
operators (e.g., map, flatMap, zip, etc.).

RxJava tries to optimize performance by using several techniques:

- The reactive streams semantics and protocol are not
  always respected but "relaxed" for interactions
  between publishers, processors and subscribers originating
  from the RxJava library.
- Operators can be fused to reduce the number of actual
  operators that items traverse as they go through
  RxJava pipelines. The actual fusion depends on the
  pipeline and the operators' semantics: in some cases, they
  can actually be merged as one; in other cases, internal
  data structures such as queues can be shared, or thread
  synchronization can be removed.
- Some operators attempt to pre-fetch data from their
  upstream publisher, even if their subscriber hasn't requested
  as much or any item yet. In theory this may
  reduce the number of signals for frequent small batch
  requests, as an operator can cache a larger amount of
  pre-fetched data, but the actual gain has to be measured
  against concrete workloads.
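The intuition behind pre-fetching can be illustrated with a toy model (this is a sketch of the general idea, not RxJava's actual fusion or prefetch machinery): a subscriber that replenishes its demand in batches issues far fewer request signals than one requesting items one by one.

```java
import java.util.concurrent.Flow;
import java.util.concurrent.atomic.AtomicInteger;

public class PrefetchDemo {

    // Counts how many request() signals reach the publisher when a subscriber
    // consumes `total` items while replenishing its demand in `batchSize` chunks.
    public static int requestSignals(int total, int batchSize) {
        AtomicInteger requestCount = new AtomicInteger();

        // A trivial synchronous publisher pushing integers on demand.
        Flow.Publisher<Integer> publisher = subscriber ->
            subscriber.onSubscribe(new Flow.Subscription() {
                private long demand = 0;
                private int emitted = 0;
                private boolean emitting = false;

                @Override public void request(long n) {
                    requestCount.incrementAndGet();
                    demand += n;
                    if (emitting) {
                        return; // re-entrant request() from onNext: the loop picks it up
                    }
                    emitting = true;
                    while (demand > 0 && emitted < total) {
                        demand--;
                        subscriber.onNext(emitted++);
                    }
                    emitting = false;
                    if (emitted == total) {
                        subscriber.onComplete();
                    }
                }

                @Override public void cancel() { demand = 0; }
            });

        AtomicInteger consumed = new AtomicInteger();
        publisher.subscribe(new Flow.Subscriber<>() {
            private Flow.Subscription subscription;

            @Override public void onSubscribe(Flow.Subscription s) {
                subscription = s;
                s.request(batchSize); // initial pre-fetch
            }

            @Override public void onNext(Integer item) {
                if (consumed.incrementAndGet() % batchSize == 0) {
                    subscription.request(batchSize); // replenish a whole batch at once
                }
            }

            @Override public void onError(Throwable t) { }
            @Override public void onComplete() { }
        });
        return requestCount.get();
    }

    public static void main(String[] args) {
        System.out.println(requestSignals(64, 1));  // one-by-one: 65 request signals
        System.out.println(requestSignals(64, 16)); // batches of 16: 5 request signals
    }
}
```

Fewer request signals means less protocol traffic per item, which is the gain that pre-fetching chases; whether it matters on a real workload is precisely what the benchmarks in this paper measure.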
2.2.2 Reactor. Reactor is a popular library whose prominent
usage is in the Spring Framework community. Its history
is closely related to that of RxJava 2, when RxJava adopted
the reactive streams specification in the form of the Flowable type.
The codebases share lots of internal code, with the external
APIs differing. RxJava 2 continued the API approach of version
1 and kept Java 6 compatibility to address the Android
development market. Reactor focused on Java 8, especially
with the support of lambdas, and offers an API reduced to
2 types: Mono for single-valued operations, and Flux for
reactive streams. Reactor exhibits an API with functional
programming terminology and idioms, just like RxJava, although
it offers a few shortcuts and helper operators with
more meaningful names (e.g., the then operator).
In fact, the previous stream processing example based on
RxJava is identical when converted to Reactor:

flux.filter(n -> n % 2 == 0)
    .flatMap(s -> record(db, s));
The internal design of Reactor borrows code and techniques
found in RxJava, although the code bases have since
diverged due to the more sustained development pace of the
Reactor project. Reactor hence also uses the operator fusing and
pre-fetching optimizations from RxJava.
2.2.3 Mutiny. Mutiny is a more recent addition to the reactive
Java ecosystem. Mutiny is prominent in the Quarkus
framework and Vert.x toolkit communities.

Two leading principles have guided the design of the
Mutiny APIs.

- Bring meaningful operator names for processing asynchronous
  events rather than borrow from the functional
  programming terminology.
- Ensure API navigability by offering groups (e.g.,
  onItem, onFailure) to limit the number of proposed
  methods when using IDE completion, and
  show only the relevant methods for a given context
  (e.g., responding to a failure, responding to an item, etc.).

Users of both RxJava and Reactor are overwhelmed by
more than a hundred methods when they need an operator.
In contrast, Mutiny users see about ten methods each time
they need an operator. It is worth noting that this design
will sometimes lead to more verbosity. This level of verbosity
combined with modern tools (typically IDE completion) is
a design choice aiming at improving readability and navigability.
The previous example where even numbers from a
stream are selected and then saved to a database could be written
as follows with Mutiny:

multi.select().where(n -> n % 2 == 0)
     .onItem().transformToUniAndMerge(s -> record(s, db));
The internals of Mutiny strictly adhere to the reactive
streams specification. Mutiny does not try to perform
operator fusing and, as we will see in the experiments, this does
not harm performance when processing I/O-bound workloads
(which is what reactive streams are for). It also dramatically
simplifies the code base, as operator fusing strategies in
RxJava and Reactor require complex protocols for operators
to be merged and for internal state and synchronization to be
shared. Mutiny does not perform pre-fetching either. While it is
possible to pass the reactive streams TCK and do pre-fetching
as RxJava / Reactor do, this requires internal buffering. Pre-fetching
also requires an internal, library-specific operator
negotiation mechanism that increases the code complexity.
2.3 Source Code Metrics
Table 1 shows code metrics for RxJava, Reactor and Mutiny.
The metrics have been obtained using the scc tool and only
take the core implementation source code into account, not
documentation snippets, tests and complementary modules.

2 Disclaimer: some of the paper authors work on this project.

Table 1. Code metrics of the reactive programming libraries using scc (excluding tests and documentation).

Library          Java lines of code (LOC)   Number of files   Cyclomatic complexity   Cyclomatic complexity density (CC/KLOC) [3]
RxJava 3.0.13    100313                     907               11750                   117.13
Reactor 3.4.8    72858                      444               13358                   183.34
Mutiny 1.0.0     21177                      300               2840                    134.10
We can see that Mutiny is a more straightforward code
base than that of both RxJava and Reactor. Reactor and RxJava
have comparable cyclomatic complexities, although the
Reactor codebase is more condensed, with about half the
number of files and about 72% of the number of effective lines
of Java code compared to RxJava. The operator fusing and
pre-fetching optimization techniques play a role in explaining
the higher complexity of RxJava and Reactor compared
to that of Mutiny. Codebases tend to grow bigger and more
complex as features are added and bugs are discovered and
fixed. Mutiny is a newer project, so we must assume that
its low complexity is in part due to that factor.
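The density column of Table 1 is simply the cyclomatic complexity divided by thousands of lines of code; a quick check against the table's figures:

```java
public class CcDensity {

    // Cyclomatic complexity density, as in Table 1: CC per 1000 lines of code.
    public static double density(int cyclomaticComplexity, int linesOfCode) {
        return cyclomaticComplexity / (linesOfCode / 1000.0);
    }

    public static void main(String[] args) {
        System.out.printf("RxJava:  %.2f%n", density(11750, 100313)); // 117.13
        System.out.printf("Reactor: %.2f%n", density(13358, 72858));  // 183.34
        System.out.printf("Mutiny:  %.2f%n", density(2840, 21177));   // 134.11
    }
}
```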
The code base of RxJava is the biggest, which can be explained
by its support for a broader palette of reactive types
(Flowable, Observable, Single, Maybe and Completable).
By contrast, Reactor and Mutiny expose just 2 reactive types
each: Mono / Flux for Reactor and Uni / Multi for Mutiny.
RxJava and Mutiny present better maintenance productivity,
whereas Reactor, with a higher cyclomatic complexity
density, is more difficult to maintain both in lines of
code and in complexity [ ]. Mutiny is still only 21% of the number of
lines of code of RxJava, making it a more approachable code base.
3 Benchmarking Reactive Programming
We compare the performance of Mutiny, Reactor and RxJava
in both CPU- and I/O-bound settings. The benchmarks
code can be found in the Git repository at https://github.
com/jponge/rebls21-paper-benchmarks for review and reproducibility
purposes. The library versions used in the benchmarks
are those of Table 1.
3.1 Experimental Approach
The benchmarks are written using JMH, a sub-project of
OpenJDK dedicated to helping write better micro-benchmarks [ ].
The Java virtual machine is an adaptive runtime
that is especially efficient at optimizing code based on speculation
heuristics and runtime profiles [ ], so it is very easy
to write incorrect benchmarks where dead-code elimination,
constant folding or loop unrolling cause the code to not measure
what the benchmark developer intended. JMH provides a harness
API to write benchmarks and executes them with various
techniques in generated code to defeat common JIT optimizations.
We ran benchmarks multiple times because different
runs may not trigger the same optimizations, and we tuned
the warmup rounds so that the JIT compiler had time to reach
a stable state. Using JMH alone does not provide off-the-shelf
correctness in benchmarks, but it considerably helps to limit
the likelihood of such mistakes being made when benchmark
developers are aware of them [6].
The servers used for benchmarking have Intel Xeon CPUs
at 3.50GHz, 16 cores and 252GB of RAM. The operating system
uses a Linux kernel 4.4.0. The Java virtual machine
is an AdoptOpenJDK distribution, version 11.0.11+9, tuned
to use the Shenandoah garbage collector, 1 GB of heap size
and 256MB of stack size. We use Shenandoah for its short
GC pauses, allowing for better latency and more predictable
behavior.
Figure 2. Benchmarking reactive libraries with library-neutral
publishers (random numbers, text lines, …) and a benchmark
subscriber feeding a JMH blackhole.
As we want to measure the performance of Mutiny, Reactor
and RxJava, we need to ensure that the variance in benchmarks
is reduced to the stream processing code in pipelines that
solely exercise code from these libraries. To do so, we developed
publishers and a subscriber that pass the reactive
streams TCK, but that are free from any code from these
libraries, as illustrated in Figure 2. This ensures that all libraries
are exercised with the same event sources and event
consumers. It also prevents any library from doing end-to-end
optimizations from its own publishers, such as operator
fusing and pre-fetching, which are only possible using
internal APIs that are outside of the scope of the reactive
streams specification. The subscriber sends events to a JMH
Blackhole, a helper class for consuming values and preventing
(but not limited to) constant folding and dead-code
elimination, as they are the most common mistakes.
There is neither an established benchmark for reactive
programming libraries in Java, nor a benchmark for reactive
streams implementations. We have developed a benchmark
suite split into 3 families: individual operations (CPU
bound), multiple-operator pipelines (CPU bound), and I/O-bound
pipelines. The benchmarks first compare the performance
of individual operations commonly used in reactive
pipelines: transforming a value with a function, chaining
with another asynchronous operation and selecting values.
We then run benchmarks where pipelines perform several operations
between the initial publisher and the subscriber. For
both individual operations and pipelines, the initial publisher
generates random numbers. These are CPU-bound benchmarks,
and since reactive streams have been designed for
asynchronous I/O, we also compare all libraries on representative
I/O-bound workloads: composing network requests
and processing text lines from a file.
3.2 Single Operation Pipelines
All libraries offer reactive types for modeling single operations:
Mono in Reactor, Single in RxJava and Uni in Mutiny.
These types are useful for representing
and composing one-shot operations such as sending
an event to a message queue or doing a database insert. They
directly compare to CompletableFuture in Java, which is a
form of future / promise [10].
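The correspondence can be sketched with plain CompletableFuture: thenApply plays the role of a value-transforming operator and thenCompose that of a chaining operator. The store method below is a hypothetical stand-in for an asynchronous one-shot operation, not code from the paper:

```java
import java.util.concurrent.CompletableFuture;

public class UniVsFuture {

    // CompletableFuture analogue of a single-valued reactive type:
    // thenApply ~ transform operator, thenCompose ~ chaining operator.
    public static int recordedId() {
        return CompletableFuture.completedFuture(21)
            .thenApply(n -> n * 2)               // transform the value
            .thenCompose(UniVsFuture::store)     // chain another one-shot async op
            .join();
    }

    // Hypothetical stand-in for an asynchronous operation (e.g., a database insert).
    static CompletableFuture<Integer> store(int value) {
        return CompletableFuture.supplyAsync(() -> value);
    }

    public static void main(String[] args) {
        System.out.println(recordedId()); // 42
    }
}
```

Unlike the reactive types, a CompletableFuture carries no subscription protocol, which is why it serves as the baseline in the benchmarks that follow.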
3.2.1 Individual Operators. Here we compare the performance
of transforming an event value (map) and chaining
with another operation on the Uni (Mutiny), Single
(RxJava) and Mono (Reactor) types. The benchmark can be
found in the corresponding class of the source
code repository. We use the Java CompletableFuture
concrete class as a baseline. As
said earlier, it also performs an asynchronous one-shot operation
while avoiding the overhead of the reactive streams
protocols. The throughput results are in Figure 3.
CompletableFuture provides the best throughput, by nearly
a factor of 2 for both types of operations, which is not surprising
since it does not have the overhead of a subscription-based
protocol. Among the reactive libraries, RxJava performs the best,
followed by Mutiny and Reactor, the latter being about 2 times
slower than Mutiny on the map operation.
3.2.2 Multiple Operators. To better appreciate the performance
of the reactive libraries, we need to measure what happens
in a pipeline with multiple operators. The benchmark
can be found in the corresponding class of the source code repository.

We start with a random number from a publisher, then
convert it to an absolute value using Math.abs, then convert
it to a hexadecimal string using Long.toHexString.
Figure 3. Single operation and individual operators performance (ops/ms, more is better).
We then chain the result with an operation to enclose the
string in brackets. The Mutiny version of the pipeline is as
follows:

Uni.createFrom().item(() -> randomNumber())
   .onItem().transform(Math::abs)
   .onItem().transform(Long::toHexString)
   .onItem().transformToUni(s ->
       Uni.createFrom().item("[" + s + "]"));
We again use CompletableFuture as a baseline, but we also
perform the processing in plain imperative code using direct
calls to Math.abs and Long.toHexString as another
baseline. The results are in Figure 4.
Figure 4. Single operation and multiple operators performance (ops/ms, more is better).
REBLS ’21, October 18, 2021, Chicago, IL, USA Julien Ponge, Arthur Navarro, Clément Escoier and Frédéric Le Mouël
With more operations, the subscription-time overhead is
less apparent against CompletableFuture. The relative performance
of the 3 reactive libraries is in line with the results
on individual operators: Reactor is the slowest, RxJava is the
fastest, and Mutiny sits in between them. The plain imperative
baseline also serves as a reminder that boxing values
and transforming them in a monadic fashion is not free.
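The two baselines can be sketched as follows (a simplification of the benchmark's shape; the input value is illustrative): the imperative version calls Math.abs and Long.toHexString directly, while the CompletableFuture version boxes the value and allocates a new future at every stage:

```java
import java.util.concurrent.CompletableFuture;

public class PipelineBaselines {

    // Imperative baseline: direct calls, no boxing into an asynchronous container.
    public static String imperative(long n) {
        return "[" + Long.toHexString(Math.abs(n)) + "]";
    }

    // CompletableFuture baseline: the same three steps expressed monadically.
    public static String monadic(long n) {
        return CompletableFuture.completedFuture(n)
            .thenApply(Math::abs)
            .thenApply(Long::toHexString)
            .thenCompose(s -> CompletableFuture.completedFuture("[" + s + "]"))
            .join();
    }

    public static void main(String[] args) {
        System.out.println(imperative(-255L)); // [ff]
        System.out.println(monadic(-255L));    // [ff]
    }
}
```

Both compute the same result; the difference the benchmark measures is the allocation and indirection cost of the monadic form.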
3.3 Event Stream Pipelines
Back-pressured stream pipelines can be constructed with
Mutiny's Multi, Reactor's Flux and RxJava's Flowable. They
model streams of asynchronous events where back-pressure
is managed through the reactive streams request protocol.
An example would be receiving events from a message queue.
All types define some form of overflow management strategy
for when items are being received but there is no outstanding
subscriber demand. For instance, items can be buffered,
dropped, or an overflow can be a failure that terminates a stream.
3.3.1 Individual Operators. We compare the performance
of transforming values (map), selecting values based on a
predicate (filter), chaining with a single-valued operation,
and chaining with a stream-returning operation.
The benchmark can be found in the corresponding
class of the source code repository.
The results are in Figure 5.
Figure 5. Event stream and individual operators performance (ops/s, more is better).
There is no baseline in this benchmark, so as to focus on just
the relative performance of the libraries. The transformation
and selection operators (map and filter) do not show
any significant performance difference across the 3 reactive
programming libraries.

Mutiny has a slower operation chaining performance compared
to RxJava and Reactor for both the single-valued and
stream cases. We correlated this
performance issue to a bottleneck in the Mutiny subscription
mechanism, because these operations require subscribing to
inner publisher objects. This will likely be revisited
and addressed in a future release of Mutiny.
Figure 5 does not show a stream-chaining result for RxJava
because the library has a bug with the corresponding operator
that causes memory exhaustion when running the
benchmarks. This is a significant problem, as it is very frequent
for applications to perform an asynchronous operation
for each item in a stream (e.g., doing a database insert). The
exact cause of that problem has yet to be identified, reported
and fixed. We also see that the "relaxed" reactive streams
protocol handling of RxJava does not result in significant
performance benefits compared to Reactor.
3.3.2 Multiple Operators. We now compare the reactive
libraries on pipelines with multiple operations. The benchmark
can be found in the corresponding class of the source code repository.

Pipelines start with a random number publisher, followed
by a transformation to an absolute value with Math.abs,
then an odd value selection. Then, for each item, we produce
a stream of strings with the hexadecimal string representation
of the number (Long.toHexString), followed by the same
string suffixed with 1, 2 and 3 '!' characters. Hence a stream
(2, c) becomes (2, 2!, 2!!, 2!!!, c, c!, c!!, c!!!). The Reactor version of
the pipeline is as follows:
Flux.from(new RandomNumberPublisher())
    .filter(n -> n % 2 == 0)
    .concatMap(n -> {
        String s = Long.toHexString(n);
        return Flux.just(s, s + "!", s + "!!", s + "!!!");
    });

Note that we use stream concatenation semantics with
concatMap to preserve ordering, which would not be the
case with a flatMap.
We use 2 baselines: one with Java collection streams, and
one with the equivalent processing done in a plain imperative
loop. The results are in Figure 6.
The imperative baseline is unsurprisingly the fastest, followed
by Java streams, which do not need a back-pressure signalling
and subscription mechanism, and do not have to deal
with potential multi-threaded access in this benchmark. RxJava
and Reactor have the same performance, while Mutiny
is around 13% slower. This can be explained by the impact of
operator fusion in RxJava and Reactor, as well as the inner
streams subscription performance highlighted in the previous
individual operations benchmark. Stream processing of
in-memory data is CPU-bound, and as such Java streams
are a better choice over any reactive streams based library
when a declarative pipeline model is a good fit. Last but not
Figure 6. Event stream and multiple operators performance (ops/s, more is better).
least, good old imperative programming should always be
considered, as it evidently has the best performance.
3.4 I/O Bound Pipelines
e reactive streams specication was motivated by the need
for programming against asynchronous I/O streams. In this
section, we look at the performance of Mutiny, RxJava and
Reactor with I/O bound pipelines. e rst scenario per-
forms I/O on the le system, while the second one performs
network requests.
3.4.1 File Processing. The source publisher in this example
reads text lines from an electronic transcript of the 19th
century book Les Misérables by Victor Hugo (the file weighs
3.2MB). The benchmark can be found in the corresponding
class of the source code repository.

The first operator after the publisher discards blank lines.
The next one computes the number of characters in the line.
We then chain with a filesystem operation that appends
the character count to a file; to make it asynchronous,
that operation is dispatched to a worker thread pool. The
last operator transforms the character count to a string
and prefixes it with an arrow. We hence have 3 simple in-memory
operations (selecting and transforming values) and
2 I/O operations (reading and writing). The Mutiny version
of the pipeline is as follows:

Multi.createFrom().publisher(new TextFileLinePublisher(source))
    .select().where(line -> !line.isBlank())
    .onItem().transform(String::length)
    .onItem().transformToUniAndConcatenate(count ->
        appendToFile(count)) // asynchronous write on a worker pool
    .onItem().transform(count -> "=> " + count);
We compared Reactor, RxJava and Mutiny against a baseline
of the equivalent imperative code. Figure 7 shows the
results, with Figure 7a showing a box plot of the sample data,
and Figure 7b showing a histogram of the processing time
distribution. Neither representation shows any statistical
benefit of using one reactive library over the other. The imperative
code baseline is faster, but when it comes to reactive
programming libraries, they all exhibit the same performance.
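The shape of this pipeline's in-memory operations can be sketched over a few sample lines (the input is illustrative; the real benchmark reads the book from disk and appends each count to a file from a worker pool):

```java
import java.util.List;
import java.util.stream.Collectors;

public class TextPipelineBaseline {

    // Compact sketch of the pipeline's in-memory steps: discard blank lines,
    // count characters, format the count with an arrow prefix. The actual
    // benchmark additionally performs the two I/O operations (read and write).
    public static List<String> run(List<String> lines) {
        return lines.stream()
            .filter(line -> !line.isBlank())
            .map(String::length)
            .map(count -> "=> " + count)
            .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(run(List.of("Jean Valjean", "", "Cosette")));
        // [=> 12, => 7]
    }
}
```

Since the pipeline spends most of its time in file I/O, the per-item overhead of the reactive machinery disappears in the noise, which is what Figure 7 shows.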
3.4.2 Network Requests Processing. The previous benchmark
exhibited filesystem I/O, while the next one focuses
on network requests. The benchmark can be found in the
NetworkRequests class of the source code repository.

In this benchmark, the source publisher issues 6 concurrent
HTTP requests to fetch the content of the Les Misérables
book, which is exposed by an HTTP server. The HTTP server
is running on a separate server so that the HTTP requests actually
go through a network, and not the loopback network
interface.

Once all responses have been triggered and collected, they
become a stream of 6 HTTP responses. Then, for each item,
the HTTP response body (the book text) is extracted, and
the final operation computes the total number of characters.
The baseline uses CompletableFuture objects to perform
6 HTTP requests, then does the same processing as in
the reactive pipelines of Mutiny, RxJava and Reactor. All
pipelines delegate the HTTP request to a CompletableFuture-returning
method that uses the JDK HTTP client from Java 11 and beyond.
The Mutiny variant of the pipeline is built as follows:

Uni.combine().all()
    .unis(request(), request(), request(),
          request(), request(), request()) // (...) times 6
    .combinedWith(responses -> responses)
    .onItem().transformToMulti(list -> Multi.createFrom().iterable(list))
    // (...)

This benchmark interestingly exposes expressiveness differences
between the 3 reactive libraries. Reactor loses parametric
types in the pipeline chain, forcing the usage of extra
conversion operators between steps:

.flatMapMany(tup -> Flux.fromIterable(tup.toList()))
RxJava can produce reactive type objects from a Java
CompletionStage, but, unlike Mutiny and Reactor, not from
a supplier of CompletionStage. The nuance is subtle, as
RxJava effectively caches the HTTP request rather than triggering
a new one for each new subscription. To reproduce the
correct behavior and trigger requests every time a pipeline
subscription happens, we need to use the defer operator:
Figure 7. File processing performance for RxJava, Reactor, Mutiny and the baseline (ms/op, less is better; 1000 samples): (a) box plot, (b) histograms (100 bins).
Flowable.defer(() -> Flowable
// (...) times 6
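The caching nuance can be reproduced with plain JDK types (a toy model, not the benchmark code): a CompletableFuture represents a computation that has already started, so sharing one instance shares its cached result, whereas a supplier of futures, like the defer operator, triggers a fresh operation for every consumer. The request method is a hypothetical stand-in for the HTTP call:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Supplier;

public class DeferDemo {

    private static final AtomicInteger requestCount = new AtomicInteger();

    // Hypothetical stand-in for an HTTP request: each call starts a new "request".
    static CompletableFuture<Integer> request() {
        return CompletableFuture.completedFuture(requestCount.incrementAndGet());
    }

    // Sharing one future: the request runs once, both reads see the cached result.
    public static int[] shared() {
        CompletableFuture<Integer> future = request();
        return new int[] { future.join(), future.join() };
    }

    // Deferring through a supplier: every consumer triggers a fresh request,
    // which is the behavior the defer operator restores in the RxJava pipeline.
    public static int[] deferred() {
        Supplier<CompletableFuture<Integer>> supplier = DeferDemo::request;
        return new int[] { supplier.get().join(), supplier.get().join() };
    }

    public static void main(String[] args) {
        int[] s = shared();
        int[] d = deferred();
        System.out.println(s[0] == s[1]); // true: cached result reused
        System.out.println(d[0] == d[1]); // false: two distinct requests
    }
}
```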
Another interesting issue that was found during the development
of this benchmark is the consistency of the HTTP
server response times. The initial iteration used the Python
embedded HTTP server, but it ended up not responding after
serving the file under load, and response times remained
fluctuating. The next iteration used a simple Node.js HTTP
server, but the latency was too inconsistent due to the adaptive
nature of the V8 runtime, both in terms of just-in-time
compilation and garbage collection. We instead obtained
solid and consistent HTTP response times by running an
HTTP server written in Go, as it features a non-adaptive
runtime.

Figure 8 shows the results. We again used a box plot (Figure
8a) and histogram (Figure 8b) to visualize the data.
Just like in the previous I/O-bound benchmark, there is
again no statistical advantage in choosing one reactive library
over the other. The performance is very comparable,
despite RxJava and Reactor sharing operator fusion and pre-fetching
techniques that Mutiny does not have. Relaxing the
reactive streams protocol semantics between RxJava operators
does not result in any observable benefit either.
4 Related Work
The reactive streams specification emerged due to the need for coordinating publishers and subscribers of asynchronous, back-pressure enabled data streams[16]. It was heavily influenced by the experience of the Akka actor framework[15] and the RxJava reactive programming library[19] that later adopted a back-pressured reactive type called Flowable. Reactor[20] appeared as a heavily RxJava-inspired reactive programming library, sharing the design of internals and the functional programming operators terminology, yet restricting itself to just 2 reactive types and embracing the modern Java constructs of the time (e.g., lambdas and method references). The early designs of Mutiny emerged in 2019, motivated by the need to offer a better developer experience when composing asynchronous operations over reactive types[17], and backed by a fast real-world adoption in the Quarkus[18] and Vert.x[7] projects. The reactive streams interfaces have been ported to the Java standard library as part of the java.util.concurrent.Flow interfaces in Java 9 and beyond, but so far, adoption has remained limited in favor of the original reactive streams interfaces[16].
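The java.util.concurrent.Flow API mentioned above needs nothing but the JDK. The following sketch of ours (not taken from the paper) shows the demand-signalling protocol at the heart of reactive streams: the subscriber requests items one at a time from a SubmissionPublisher, assuming Java 9 or later.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

public class FlowDemo {
    public static void main(String[] args) throws InterruptedException {
        List<Integer> received = new ArrayList<>();
        CountDownLatch done = new CountDownLatch(1);
        try (SubmissionPublisher<Integer> publisher = new SubmissionPublisher<>()) {
            publisher.subscribe(new Flow.Subscriber<Integer>() {
                private Flow.Subscription subscription;

                @Override
                public void onSubscribe(Flow.Subscription subscription) {
                    this.subscription = subscription;
                    subscription.request(1); // signal demand for one item
                }

                @Override
                public void onNext(Integer item) {
                    received.add(item);
                    subscription.request(1); // one-by-one back-pressure
                }

                @Override
                public void onError(Throwable throwable) {
                    done.countDown();
                }

                @Override
                public void onComplete() {
                    done.countDown();
                }
            });
            for (int i = 1; i <= 3; i++) {
                publisher.submit(i); // blocks when the subscriber buffer is full
            }
        } // close() triggers onComplete once pending items are delivered
        done.await();
        System.out.println(received); // prints [1, 2, 3]
    }
}
```

This is the same interaction model (onSubscribe / request / onNext / onComplete) that RxJava, Reactor and Mutiny implement via the original org.reactivestreams interfaces.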
RxJava, Reactor and Mutiny are all descendants of the reactive extensions of Erik Meijer[11] that inspired elements of C#, although it should be noted that reactive extensions were never envisioned for back-pressured streams. Reactive extensions build on ideas from promises and futures that focused on the composition of asynchronous RPC operations[10]. RxJava is itself part of a family of ports of the reactive extensions idioms to other languages such as RxJS for JavaScript[1].
Since not everything is a stream, RxJava, Reactor and Mutiny all expose a reactive type for single asynchronous operations, which is a form of future. These 3 reactive programming libraries for Java are thus a mix of reactive extensions and promises / futures as a programming model, and the reactive streams protocol as an interaction model. Moreover, the reactive streams specification does not mandate any specific programming model: for instance, Akka actors can cooperate with Mutiny pipelines over reactive streams.
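To make the "form of future" analogy concrete, here is a minimal JDK-only sketch using CompletableFuture, whose thenCompose plays the role that flatMap-style chaining plays on single-item reactive types (Single, Mono, Uni). The fetchUserId operation and the numbers are hypothetical placeholders of ours.

```java
import java.util.concurrent.CompletableFuture;

public class ComposeDemo {
    // A single asynchronous operation, modeled as a future rather than a stream
    static CompletableFuture<Integer> fetchUserId(String name) {
        return CompletableFuture.supplyAsync(() -> name.length()); // placeholder async work
    }

    public static void main(String[] args) {
        // thenCompose chains a dependent asynchronous operation,
        // thenApply transforms its result synchronously
        int result = fetchUserId("alice")
                .thenCompose(id -> CompletableFuture.supplyAsync(() -> id * 10))
                .thenApply(score -> score + 1)
                .join(); // block only at the very end, for the demo
        System.out.println(result); // prints 51
    }
}
```

The key difference with the reactive libraries is that a CompletableFuture pipeline starts executing as soon as it is built, whereas subscription-based types defer all work until a subscriber attaches.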
Figure 8. Network requests processing performance (1000 samples, ms/op, less is better), comparing RxJava, Reactor, Mutiny and the baseline: (a) box plot; (b) histograms over 100 bins.
The Reactive Manifesto focuses on the design of reactive systems[5], that is, systems that remain responsive as workloads increase and/or as failures arise[9]. Reactive systems are based on asynchronous message passing for elasticity and resilience purposes, hence reactive streams are key to their design and implementation. Reactive systems, reactive streams and reactive programming are different facets of modern distributed framework stacks, as applications need to face scalability[8] and resiliency challenges[2,7,13,18,22].
We designed our own benchmark suite because none exists for either reactive programming libraries in Java or reactive streams implementations. The wide diversity and discrepancy of operator combinations across the libraries make the creation of a standard benchmark suite difficult. We do not claim that the benchmark suite that we designed is perfect: we selected a set of representative operators based on our experience of reactive programming usage in the field.
The Renaissance benchmark suite addresses parallel applications on the JVM[14], and it contains a benchmark called rx-scrabble that uses an old version of RxJava 1 and file system operations. The scope of this benchmark suite is too distant to be useful in the context of this paper, and the file processing benchmark above already performs a fair comparison of the performance of RxJava, Reactor and Mutiny in the presence of I/O operations.
5 Conclusion
This paper compared the performance of 3 reactive programming libraries for Java that comply with the reactive streams specification: RxJava, Reactor and Mutiny. To that purpose, we developed a suite of custom micro-benchmarks.
We first measured the performance of single operation and stream reactive types in purely CPU-bound micro-benchmarks. We benchmarked commonly used individual operators: transforming values, selecting values, and chaining with individual and stream-producing operations. We also benchmarked pipelines with multiple operators, as this is more representative of how reactive libraries are used in practice.
RxJava performs the best for single operation reactive types, but it does not try to replicate the reactive streams protocol as Mutiny and Reactor do. We found a bug causing out-of-memory exhaustion with an RxJava operator. Reactor ended up slowest in these benchmarks, with Mutiny being half-way to the performance of RxJava on multiple operators pipelines. Single operation reactive types are important as they model frequently-used asynchronous operations such as database insert queries or acknowledging messages. In fact, a typical HTTP-exposing micro-service uses single asynchronous operations more often than it has to deal with streams.
Both Reactor and RxJava have similar performance on stream operations, and outperform Mutiny as soon as chaining re-subscription is involved. In the multiple operations reactive pipeline benchmark, Mutiny ended up 13% slower than RxJava and Reactor. Individual operation benchmarks show that the performance of Mutiny is comparable to that of the other libraries for direct data transformation and selection. RxJava and Reactor share lots of history and operator internals. Their performance in CPU-bound cases can be explained by operator fusing and pre-fetching techniques, but at the cost of more complex code bases, as Table 1 shows. We could not assess whether the “relaxation” of the reactive streams protocol between RxJava operators had any significant effect, and we found a case where memory exhaustion was possible.
Reactive libraries based on reactive streams should not be used for the sole purpose of processing in-memory data such as Java collections. When dealing with collections, the Java streams API offers the ability to build data transformation, selection and aggregation pipelines, including processing parallelization. Reactive libraries inevitably pay the cost of the reactive streams protocol (e.g., demand signalling, multi-threading and serialization requirements) that was primarily designed for asynchronous I/O. Java’s CompletableFuture offers an efficient framework-neutral type to model asynchronous operations, but it is not subscription-based like the reactive library types, so pipeline constructions cannot be lazy.
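As a point of comparison, the kind of in-memory pipeline for which the Java streams API is the better fit can be written with no subscription or demand-signalling machinery at all. The word list below is an arbitrary illustration of ours:

```java
import java.util.List;
import java.util.stream.Collectors;

public class StreamsDemo {
    public static void main(String[] args) {
        List<String> words = List.of("reactive", "streams", "java", "rx");
        // Selection, transformation and aggregation in one pipeline;
        // swapping stream() for parallelStream() parallelizes the work
        List<String> result = words.stream()
                .filter(w -> w.length() > 4)     // selection
                .map(String::toUpperCase)        // transformation
                .sorted()
                .collect(Collectors.toList());   // aggregation
        System.out.println(result); // prints [REACTIVE, STREAMS]
    }
}
```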
The I/O-bound cases depict a more realistic picture of the impact of RxJava, Reactor and Mutiny when used to process what reactive streams were made for. We ran two benchmarks: one that performs operations on the file system, and one that performs network requests. In both benchmarks, there is no statistical evidence that any of the libraries performs better. We could not exhibit any evidence that the operator fusing and pre-fetching techniques employed by RxJava and Reactor have any favorable impact in realistic workloads where non-trivial I/O operations are involved. Such optimizations seem to have a minor effect in CPU-bound workloads, but, again, there are already better alternatives in the Java standard library.
The engineering costs associated with developing optimization techniques such as operator fusing and pre-fetching are questionable in light of code metrics such as the lines to be maintained and the cyclomatic complexity estimates of Table 1. The simpler code base of Mutiny is within range of the performance of RxJava and Reactor in CPU-bound cases, and it performs just as well in realistic I/O-bound workloads. There is potential for exploring alternative library implementation and operation techniques, and possibly improving performance. Still, would the development costs be justified, especially given the results on I/O-bound workloads compared to the non-reactive baselines?
Acknowledgments
This work is partially supported by Red Hat Research. The authors would like to thank Georgios Andrianakis, Rodney Russ and Stéphane Épardaud for their constructive feedback.
References
[1] Manuel Alabor and Markus Stolze. 2020. Debugging of RxJS-based applications. In Proceedings of the 7th ACM SIGPLAN International Workshop on Reactive and Event-Based Languages and Systems. ACM, Virtual USA, 15–24.
[2] Sadek Drobi. 2012. Play2: A New Era of Web Application Development. IEEE Internet Computing 16, 4 (July 2012), 89–94.
[3] G.K. Gill and C.F. Kemerer. 1991. Cyclomatic complexity density and software maintenance productivity. IEEE Transactions on Software Engineering 17, 12 (1991), 1284–1288.
[4] Brian Hayes. 2008. Cloud computing. Commun. ACM 51, 7 (July 2008).
[5] Jonas Bonér, Dave Farley, Roland Kuhn, and Martin Thompson. 2014. The Reactive Manifesto.
[6] Julien Ponge. 2014. Avoiding Benchmarking Pitfalls on the JVM. Oracle Java Magazine (Aug. 2014).
[7] Julien Ponge. 2020. Vert.x in Action: Asynchronous and Reactive Java. Manning Publications.
[8] Dan Kegel. 1999. The C10K problem.
[9] Roland Kuhn, Brian Hanafee, and Jamie Allen. 2017. Reactive design patterns. Manning Publications.
[10] Barbara Liskov and Liuba Shrira. 1988. Promises: Linguistic Support for Efficient Asynchronous Procedure Calls in Distributed Systems. ACM SIGPLAN Notices (1988), 8.
[11] Erik Meijer. 2012. Your mouse is a database. Commun. ACM 55, 5 (May 2012), 66–73.
[12] Michael Paleczny, Christopher Vick, and Cliff Click. 2001. The Java HotSpot Server Compiler. In Proceedings of the 2001 Symposium on JavaTM Virtual Machine Research and Technology Symposium - Volume 1 (JVM’01). USENIX Association, USA.
[13] Julien Ponge and Mark Little. 2019. Scalability and Resilience in Practice: Current Trends and Opportunities. In 2019 38th Symposium on Reliable Distributed Systems (SRDS). IEEE, Lyon, France, 267–2670.
[14] Aleksandar Prokopec, Andrea Rosà, David Leopoldseder, Gilles Duboscq, Petr Tůma, Martin Studener, Lubomír Bulej, Yudi Zheng, Alex Villazón, Doug Simon, Thomas Würthinger, and Walter Binder. 2019. Renaissance: benchmarking suite for parallel applications on the JVM. In Proceedings of the 40th ACM SIGPLAN Conference on Programming Language Design and Implementation. ACM, Phoenix AZ USA, 31–47.
[15] Raymond Roestenburg, Rob Bakker, and Rob Williams. 2016. Akka in Action. Manning Publications.
[16] Reactive Streams Special Interest Group. 2014. Reactive streams.
[17] Red Hat and contributors. 2021. Mutiny.
[18] Red Hat and contributors. 2021. Quarkus.
[19] RxJava contributors. 2021. RxJava.
[20] VMWare and contributors. 2021. Project Reactor.
[21] Bill Williams. 2012. The Economics of Cloud Computing: An Overview For Decision Makers [Book]. Cisco Press.
[22] Christian Wimmer, Codrut Stancu, Peter Hofer, Vojin Jovanovic, Paul Wögerer, Peter B. Kessler, Oleg Pliss, and Thomas Würthinger. 2019. Initialize once, start fast: application initialization at build time. Proceedings of the ACM on Programming Languages 3, OOPSLA (Oct. 2019).