BAMBERGER BEITRÄGE
ZUR WIRTSCHAFTSINFORMATIK UND ANGEWANDTEN INFORMATIK
ISSN 0937-3349
Nr. 106
Lecture Notes: Concurrency Topics in
Java
Johannes Manner and Sebastian Böhm
April 2022
FAKULTÄT WIRTSCHAFTSINFORMATIK UND ANGEWANDTE INFORMATIK
OTTO-FRIEDRICH-UNIVERSITÄT BAMBERG
https://doi.org/10.20378/irb-53739
Lecture Notes: Concurrency Topics in Java
Johannes Manner and Sebastian Böhm
Lehrstuhl für Praktische Informatik, Fakultät WIAI
Otto-Friedrich-Universität Bamberg
An der Weberei 5, 96047 Bamberg
https://github.com/johannes-manner/ConcurrencyTopics
Version 1.2
Keywords
Java Programming, Concurrency, Multi-Threading
Preface
Concurrency is a topic which is hard to grasp since the effects of multi-threaded code cannot be easily deduced by a single-threaded human brain :)
Programming is an easy task, but understanding written code after a few weeks or months
is hard. Finding bugs is even harder. Doing this for concurrent code is - in many cases -
nearly impossible. Brian Goetz, author of Java Concurrency in Practice, gives good advice on how to program and deal with concurrency:
Sometimes abstraction and encapsulation are at odds
with performance — although not nearly as often as many
developers believe — but it is always a good practice
first to make your code right, and then make it fast.
It is far easier to design a class to be thread-safe
than to retrofit it for thread safety later.
These lecture notes are an introduction to concurrency programming in Java and summarize the main topics of our module DSG-PKS-B with the exception of message passing concepts and frameworks. Understanding the concepts and functionality of different elements in the Java API helps to understand synchronous and asynchronous mechanisms discussed in our DSG master courses and all other computer science courses. This document includes all the necessary parts for a solid understanding of concurrency programming in Java but does not include all the nitpicky details presented in some great textbooks like the following ones, which we highly recommend reading:
Brian Goetz with others: Java Concurrency in Practice, Addison-Wesley, 2006.
ISBN 978-0321349606 (or other editions)
Joshua Bloch: Effective Java: Third Edition, Addison-Wesley, 2018.
ISBN 978-0-13-468599-1
We highly recommend reading this document carefully and investing some time to get your hands dirty. Without programming, merely reading this document is useless.
Since concurrency is doing stuff in parallel and NOT sequentially, this document is also
written for parallel use and doesn’t have to be read starting with Section 1 and ending with
the last page.
Sections 1 and 2 are more or less fundamentals and explain why we are teaching and have
to care about concurrency.
Section 3 discusses the two basic classes for concurrency within the Java API: Thread and
Runnable. All the low level aspects, Java monitors, wait/notify, critical sections, locking,
visibility and deadlocks are discussed in Section 4, whereas we climb the high level concepts
mountain in Section 5 to get rid of the error-prone low level stuff. In Section 6 we conclude with examples of hidden concurrency in frameworks. A typical example in this
case is request handling implemented in Jersey and used by SpringBoot where concurrency
considerations are important to get things right.
Where we thought it might be beneficial, we included Rules of Thumb, Examples and Exercises to reinforce the written material. We additionally ask some questions at the end of each Section to recap the content. If you are able to answer all of these questions, you made it!
The source code of all examples and exercise skeletons can be found on GitHub: https://github.com/johannes-manner/ConcurrencyTopics. All examples and exercises were
implemented with Java 11 LTS. Further changes and additions in the concurrency landscape
will be incorporated in new versions of this document. If you want to test your implemented
code, we prepared some CodeRunner tasks in the specific VC course in our master courses
at the University of Bamberg.
Johannes Manner
Sebastian Böhm
Bamberg, April 2022
Acknowledgement
Thanks to Dr. Simon Harrer and Dr. Jörg Lenhard who initiated the basics of the concurrency course in our bachelor degree program. Some figures and examples included in the PDF and source code were designed by them.
Many thanks also to our (former) colleagues Linus Dietz, Dr. Stefan Kolb and Robin Lichtenthäler, who taught the course and improved the material.
Last but not least many thanks to Prof. Dr. Guido Wirtz for giving us the opportunity to
teach this course and supporting us with feedback on the course content.
Changelog
Version 1.0
released in May 2020.
Version 1.1
released in December 2021.
Added Changelog
Added Acknowledgement also to the document
Added Section on Hidden Concurrency in Frameworks (6) and the first example on
Concurrency in Jersey Controllers (6.1)
Version 1.2
released in April 2022.
Added recap sections
Distributed Systems Group
Otto-Friedrich Universität Bamberg
An der Weberei 5, 96047 Bamberg, GERMANY
Prof. Dr. rer. nat. Guido Wirtz
http://www.uni-bamberg.de/pi/
Preface
Due to hardware developments, strong application needs and the overwhelming influence of the net in almost all areas, distributed systems have become one of the most important topics for today's software industry. Owing to their ever increasing importance for everyday business, distributed systems have high requirements with respect to dependability, robustness and performance. Unfortunately, distribution adds its share to the problems of developing complex software systems. Heterogeneity in both hardware and software, permanent changes, concurrency, distribution of components and the need for inter-operability between different systems complicate matters. Moreover, new technical aspects like resource management, load balancing and guaranteeing consistent operation in the presence of partial failures and deadlocks put an additional burden onto the developer.
The long-term common goal of our research efforts is the development, implementation and
evaluation of methods and tools that support the realization of robust and easy-to-use software
for complex systems in general while putting a focus on the problems and issues regarding
distributed systems on all levels. Our current research activities are focussed on different
aspects centered around Cloud computing, esp. Microservice and Serverless architectures:
Integration Testing of Serverless Applications: Many cloud platform providers now offer Function as a Service (FaaS), which became popular with the introduction of Amazon's AWS Lambda in 2014. These offerings are based on serverless functions whose statelessness helps handle dynamic workloads by scaling them dynamically. Serverless functions are often combined with other services like data storage, e.g., to save the state of the application. The interactions of these services with serverless functions
build complex systems. The aim of this project is to support the integration testing
process for serverless applications. While it is easy to test single functions in isola-
tion, the emerging behavior caused by the integration of serverless functions with other
services needs to be tested. Therefore, the relevant aspects of an application have to
be modeled to support the creation of test cases. Coverage criteria are created and
their applicability is investigated. Furthermore, the automatic creation of test cases
for serverless applications shall be supported.
Benchmark and Simulation of Cloud Functions (FaaS): The goal of this project is to
understand runtime characteristics of the platform as well as characteristics of the
deployed cloud functions, and take dependent services like database access into consideration when building a local clone of the platform on a developer's machine. These aspects allow configuring cloud functions appropriately for the specified requirements upfront. Furthermore, a simulation and benchmarking tool to conduct repeatable and
fair experiments is under development.
Architecting Cloud-native Applications: Cloud-native applications are designed and
built to maximally exploit the benefits offered by modern cloud computing. This com-
prises several aspects, such as a fine-grained modular architecture, using existing cloud
services instead of custom solutions, exploiting the rapid elasticity of cloud comput-
ing, achieving robustness by distributing an application over independent nodes, and
finally a faster and more agile development process. The goal of this project is to
analyze how the development of such cloud-native applications can be supported and
improved with regard to all these aspects. The focus is on the overarching architecture
of a cloud-native application, specifically how the individual components are combined
and how they interact, rather than focusing on individual components.
Universal Cloud-Edge-IoT Orchestration: The emergence of the Internet of Things
(IoT) is a significant development in today’s information technology and involves the
ability of physical devices to exchange data over networks. Often, the generated data
is transferred to the cloud and processed there. Likewise, the cloud may take control of
the devices. The ever-increasing number of data-generating IoT devices is creating new
challenges that require modifications of the already existing Cloud-edge architectures.
This project aims to realize an abstracted, configurable and simplified management
of cloud-edge/edge-IoT architectures based on already popular container orchestration
platforms like Kubernetes and other platforms and technologies in order to make the
use of edge computing easier for even more application areas and users.
Visual Programming- and Design-Languages: The goal of this long-term effort is the
utilization of visual metaphors and languages as well as visualization techniques to
make design- and programming languages as well as distributed systems more under-
standable and, hence, more easy-to-use.
More information about our work, i.e., projects, papers and software, is available at our
homepage (see above). If you have any questions or suggestions regarding this report or our
work in general, don't hesitate to contact me at guido.wirtz@uni-bamberg.de.
Guido Wirtz
Bamberg, April, 2022
Contents

1 Fundamentals
  1.1 Call By Value
  1.2 Memory Model
      1.2.1 Heap and Stack
      1.2.2 Java Memory Model - JSR 133
  1.3 Immutability
  1.4 Recap Section
2 Why Threading?
  2.1 Situation
      2.1.1 Models for Concurrent Programming
      2.1.2 Processes vs. Threads
  2.2 Problem
  2.3 Solution
  2.4 Recap Section
3 Thread and Runnable
  3.1 Thread Methods
  3.2 Thread States
  3.3 A Running Example
  3.4 Recap Section
4 Shared Memory
  4.1 Low Level Concepts
      4.1.1 Java Monitor
      4.1.2 Lock - synchronized and Wait Set
      4.1.3 Static Synchronization
      4.1.4 A Running Example
      4.1.5 Another Example - Using Wrapper/final Objects as Locks
      4.1.6 Visibility
      4.1.7 Deadlock
  4.2 Interruption
      4.2.1 Non-Blocking Operations
      4.2.2 Blocking Operations
      4.2.3 Strategies to handle an InterruptedException
  4.3 Thread Safety
  4.4 Recap Section
5 High Level Concepts
  5.1 Semaphor
  5.2 Producer-Consumer Paradigm
  5.3 Managed Runtime Frameworks
      5.3.1 Executor Service and Thread Pools
      5.3.2 Futures and Callables
      5.3.3 A Running Example
  5.4 Other Useful API Stuff
      5.4.1 Atomic Classes Primitive Datatypes
      5.4.2 Thread Safe Data Structures
6 Hidden Concurrency in Frameworks
  6.1 Jersey Controllers (JAX-RS)
List of previous University of Bamberg reports
List of Figures

1  The final Destination for a Concurrency Programmer
2  Difference between Processes and Threads
3  Two Actors A and B increment a shared Variable
4  Two Actors A and B increment a shared Variable. Critical Section enables Atomicity from a User's Perspective.
5  Runnable and Thread Class Diagram
6  Runnable and Thread Code Comparison
7  Thread States and their Transitions
8  Java Monitor with SyncSet, Lock Object and WaitSet
9  Using a Wrapper or other final Objects as Lock Object
10 Producer Consumer Communication Paradigm
11 Executor Service Illustration
List of Tables

1 Stack and Heap Memory Summary from Baeldung
2 Non-Blocking and Blocking Methods Comparison
Listings

1  Call By Value Example
2  Another Call By Value Example with References
3  Calculator as a Scoping Example
4  Mutable Class Age
5  Immutability Example
6  Create and start Threads
7  Join Threads
8  Usage of synchronized Keyword
9  Calling wait()
10 Calling notify()
11 Adding InterruptedException and complete Example
12 Synchronized Methods
13 Synchronized Blocks within Methods
14 Synchronized Methods - Mixture
15 Two Threads doing something
16 Enabling outside Class Synchronization - Compound Action
17 Static Member Synchronization
18 Running Synchronization Example
19 Running Synchronization Example
20 Awake one Thread via notify()
21 Awake all Threads via notifyAll()
22 A simple Counter - so where is the Problem?
23 A simple Visibility Class - so where is the Problem?
24 Visibility Class with a volatile Member
25 Visibility Example with two Threads
26 A Deadlock which you don't wanna experience
27 Always use the same order for acquiring locks
28 Simple Interruption Example
29 Interruption-responsive Thread
30 Thread Interruption with interrupted()
31 Awake all Threads via notifyAll()
32 Interruption - Direct Propagation
33 Interruption - Preserving the State
34 Documentation of Thread Safety and Attribute's Synchronization
35 Acquire and safely release a Semaphore
36 Executor Interface
37 Executor Service Interface
38 Create an Executor Service via Factory Methods
39 Callable Interface
40 Executor Service Interface completed
41 ExecutorService - Runnable - Callable and Future
1 Fundamentals
1.1 Call By Value
In order to understand the problems which come along with concurrent programming, understanding the difference in memory handling for objects and primitives is one cornerstone. Java organizes memory access like C, but references (pointers in C) are hidden in most cases. This is the case because construction (done via special constructor methods) and destruction (done by the garbage collector) are managed by the runtime environment. Primitives and objects are to some extent handled equally in Java.
As a rule of thumb:
Primitives are Called By Value. They are always stored inside the Stack Memory.
Different methods have different Stack Spaces. So changes to the values in another
Stack Space are not present in the former location.
Objects are also Called By Value, but many developers claim that Java uses a Call
By Reference semantic. The explanation is as follows: Objects are stored inside the
Heap. The Reference/Pointer is stored on the Stack. So changes to the values of the
object by another method are present to all References which identify the same object.
The object’s Reference is passed By Value, so the heap address of the object is copied
and passed as a parameter to the called method.
There is NO Call By Reference semantic in Java. For an extensive discussion, we recommend
a stackoverflow discussion1.
For a discussion about the differences of Stack and Heap, read Section 1.2.1.
But what is the difference in memory access paths in the following snippets Listing 1 and
Listing 2?
Listing 1: Call By Value Example
1   public static void main(String[] args) {
2       int x = 1;
3       int y = 2;
4       System.out.println("x=" + x + "; y=" + y); // x=1; y=2
5       modify(x, y);
6       System.out.println("x=" + x + "; y=" + y); // x=1; y=2
7   }
8   private static void modify(int x, int y) {
9       x = 5;
10      y = 10;
11      System.out.println("x=" + x + "; y=" + y); // x=5; y=10
12      return;
13  }
1 https://stackoverflow.com/questions/40480/is-java-pass-by-reference-or-pass-by-value
Listing 2: Another Call By Value Example with References
1   public static void main(String[] args) {
2       Student s = new Student();
3       System.out.println("name=" + s.getName()); // name=
4       modifyStudent(s);
5       System.out.println("name=" + s.getName()); // name=Alex
6   }
7   private static void modifyStudent(Student student) {
8       student.setName("Alex");
9   }
Listing 1 shows an example of a Call By Value parameter assignment with primitive values.
In line 5 the method is invoked with modify(1,2), so the print statement in line 4 results in
x=1;y=2. Since x and y are called By Value (they are primitives), the values of x and y
are copied to new Stack variables x and y within modify and changed only within modify.
Now there are two sets of x and y. One in the main method scope, the other in the modify
method scope. The print in modify in line 11 results in x=5;y=10. The changes to x and y
are not recognized by the main method. The third print result (from an execution point of
view) in line 6, therefore, prints x=1;y=2.
The second listing shows a Call By Value example with references where an object (Student)
is instantiated in line 2. We assume that a name is by default an empty string, so the
print result in line 3 is name=. The modifyStudent method invoked in line 4 receives s as a
parameter. Here s is passed By Value where the reference of the object s is copied. What's
happening technically is that the memory address of Student s is copied and there are now
two stack variables in the main and modifyStudent method scope which contain the same (!)
reference to the Student s (the object is located in the heap). The reference of the object is
copied but the destination (when following the reference) is the same.
Hint: The variable name sis abbreviated due to the length of the line, student might be a
better variable name :)
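For completeness, a minimal sketch of the Student class assumed by Listing 2 could look as follows (this sketch is our own assumption; the class in the example project may differ in detail):

public class Student {
    private String name = ""; // default: empty string, hence the first print shows "name="

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }
}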
If you want to read more about this topic, we highly recommend an article by Hussein Terek
at ProgrammerGate2.
Example:
You can find an example for primitives and objects, which is only
a single concept (Call by Value) when assessing it in detail, under
de.uniba.dsg.concurrency.examples.fundamentals.CallByValueObject and
Value.
2https://www.programmergate.com/java-pass-reference-pass-value/
1.2 Memory Model
1.2.1 Heap and Stack
To deepen your knowledge about primitives, objects, Stack and Heap, we recommend using the python tutor3 and investigating some programs or the examples and exercises we provide via the Virtual Campus.
A summary of the comparison of Stack and Heap memory copied from Baeldung4 can be found in Table 1 where important parts for our concurrency topic are highlighted in bold.
Table 1: Stack and Heap Memory Summary from Baeldung
Application
  Stack Memory: Stack is used in parts, one at a time, during execution of a thread.
  Heap Space:   The entire application uses Heap space during runtime.

Size
  Stack Memory: Stack has size limits depending upon the OS and is usually smaller than Heap.
  Heap Space:   There is no size limit on Heap.

Storage
  Stack Memory: Stores only primitive variables and references to objects that are created in Heap Space.
  Heap Space:   All newly created objects are stored here.

Order
  Stack Memory: It is accessed using a Last-In First-Out (LIFO) memory allocation system.
  Heap Space:   This memory is accessed via complex memory management techniques that include Young Generation, Old or Tenured Generation, and Permanent Generation.

Life
  Stack Memory: Stack memory only exists as long as the current method is running.
  Heap Space:   Heap space exists as long as the application runs.

Efficiency
  Stack Memory: Comparatively much faster to allocate when compared to heap.
  Heap Space:   Slower to allocate when compared to stack.

Allocation/Deallocation
  Stack Memory: Stack memory is automatically allocated and deallocated when a method is called and returned, respectively.
  Heap Space:   Heap space is allocated when new objects are created and deallocated by the Garbage Collector when they are no longer referenced.
It is especially remarkable that stack memory is always thread-safe: each thread has its own stack, only primitives and references are stored there, and the stack memory only exists as long as the method is running!
3http://pythontutor.com/visualize.html: Configure the python tutor with show all frames(Python),
render all objects on the heap (Python/Java) and draw pointers as arrows [default].
4https://www.baeldung.com/java-stack-heap: Complete article very detailed and a good overview.
Summary is taken from section 5.
As a rule of thumb:
Scoping: Try to scope as restrictive as possible. This means that you should avoid
class members (attributes) wherever the application does not require a state within an
object. If an attribute, for example, is only used within a single method, you should
think about deleting the class attribute and changing the method signature so that the method gets this value as a parameter when executing.
In a later part of this document, we will discuss this scoping example of Listing 3 in more
detail. For now you should assess and understand the differences in scoping. The first
CalculatorBefore implementation has two class members. If we share the reference to a
calculator of class CalculatorBefore, the two members are visible within the class for other
methods, in our example getter and setter. If we have more than 1 actor, concurrent accesses
can happen - as a first primer to our topic. Since the calculator is stored on the heap, all
actors access the same CalculatorBefore object and can alter its state in an undefined order.
The second implementation of our code listing does not contain state within the Calcula-
torAfter. If more than one actor concurrently calls the add method of a single CalculatorAfter
instance, they use the same instance but each invocation gets a separate address space on
the stack and, therefore, the different invocations happen concurrently but are separated
from each other due to the stack space.
Listing 3: Calculator as a Scoping Example
// before
class CalculatorBefore {
    private double operand1;
    private double operand2;

    public CalculatorBefore(double operand1, double operand2) {
        this.operand1 = operand1;
        this.operand2 = operand2;
    }

    public double add() {
        return operand1 + operand2;
    }

    // assume getter and setter
}

// after
class CalculatorAfter {
    public double add(double operand1, double operand2) {
        return operand1 + operand2;
    }
}
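To see the effect of the stateless variant in practice, the following sketch (our own addition, anticipating the Thread class from Section 3) lets two threads call add on one shared CalculatorAfter instance. Since operands and the result live on each calling thread's own stack, the invocations cannot interfere with each other:

public class CalculatorDemo {
    public static void main(String[] args) {
        CalculatorAfter calculator = new CalculatorAfter(); // one shared, stateless instance

        Runnable work = () -> {
            // operands and result live on the calling thread's stack
            double result = calculator.add(1.0, 2.0);
            System.out.println(Thread.currentThread().getName() + ": " + result);
        };

        new Thread(work, "A").start();
        new Thread(work, "B").start();
    }
}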
Exercise:
We created a stack/heap exercise to play a bit football. There is no source code
provided for this exercise, you have to look in your VC course for the Code Runner
Task StackHeap.
1.2.2 Java Memory Model - JSR 133
"Java Specification Requests (JSRs) are the actual descriptions of proposed and final spec-
ifications for the Java platform."5The JSR133 (title: Java Memory Model and Thread
Specification Revision) "describes the semantics of threads, locks, volatile variables and data
races. This includes what has been referred to as the Java memory model."6
The Java Memory Model defines rules based on the Java Language Specification which have to be guaranteed by a valid JVM implementation. These so-called happens-before rules are listed in the following7 and can be relied upon independently of the platform.
Program order rule – Each action in a thread happens-before every action in that
thread that comes later in the program order. (As we would assume from sequential
program execution known from the main thread.)
Monitor lock rule – An unlock on a monitor lock happens-before every subsequent
lock on that same monitor lock. (If you are confused now about what a monitor is, read Section 4.1.1. This specification enables the mutual exclusion of competing threads for a critical section (if you are even more confused, read Section 4.1.2).)
Volatile variable rule – A write to a volatile field happens-before every subsequent
read of that same field. (See Section 4.1.6 for a detailed discussion.)
Thread start rule – A call to Thread.start on a thread happens-before every action
in the started thread. (This is somehow self-explanatory.)
Thread termination rule – Any action in a thread happens-before any other thread
detects that thread has terminated, either by successfully returning from Thread.join or by Thread.isAlive returning false. (See Section 3.3 and the Thread's methods.)
Interruption rule – A thread calling interrupt on another thread happens-before
the interrupted thread detects the interrupt (either by having InterruptedException
thrown, or invoking isInterrupted or interrupted). (For interruption and how these
methods interact with each other, have a look at Section 4.2.)
Finalizer rule – The end of a constructor for an object happens-before the start of
the finalizer for that object. (As a remark here: Calling an object’s finalize method is
deprecated since Java 9 (see footnote 8). The mechanism is still valid, but other possible solutions for
implementing finalization should be used.)
Transitivity – If A happens-before B, and B happens-before C, then A happens-before
C.
5Cited from the official JSR page: https://jcp.org/en/jsr/overview
6Cited from the official JSR 133 page: https://jcp.org/en/jsr/detail?id=133
Specification of the JSR 133 (worth reading, but hard to read): https://www.cs.umd.edu/~pugh/java/memoryModel/jsr133.pdf
An easier-to-read explanation of the Java Memory Model: http://tutorials.jenkov.com/java-concurrency/java-memory-model.html. It is probably useful if you read Section 4 first, i.e. the Visibility discussion (Section 4.1.6), the Java Monitor (Section 4.1.1) and the basic synchronization in Section 4.1.2.
7The happens-before enumeration is directly copied from Brian Goetz’s Book "Java Concurrency in Prac-
tice". The citing page is 341. We added some comments from us in brackets.
8Read here for more information: https://docs.oracle.com/en/java/javase/11/docs/api/java.
base/java/lang/Object.html#finalize()
Some of the rules are discussed in later sections directly or implicitly. The summary here is
a first stepping stone so that you have heard some important aspects before digging deeper
into the concurrency issues in Java.
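As a small illustration of the thread start and thread termination rules (our own sketch, anticipating Thread.start and join from Section 3): every write the main thread performs before start() is visible in the started thread, and every write of the started thread is visible in the main thread after a successful join(), even without volatile or synchronized:

public class HappensBeforeDemo {
    private static int data = 0; // deliberately neither volatile nor synchronized

    public static void main(String[] args) throws InterruptedException {
        data = 42; // happens-before the start of t (thread start rule)

        Thread t = new Thread(() -> {
            System.out.println("worker sees: " + data); // guaranteed to print 42
            data = 43;
        });
        t.start();
        t.join(); // thread termination rule: all writes of t are visible afterwards

        System.out.println("main sees: " + data); // guaranteed to print 43
    }
}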
1.3 Immutability
Immutability describes "the state of not changing, or being unable to be changed"9. For
developers in different domains it is a highly important concept that an object does not
change its internal state after construction. This makes concurrent programming much easier
since you do not have to care about concurrent access with immutable objects. (There are
some exceptions but for now we won’t worry about those, for a discussion of these problems
see Section 4.1.5).
For some immutability is the destination of programming10:
Figure 1: The final Destination for a Concurrency Programmer
As a rule of thumb:
An object is immutable if its internal state (values of the class attributes) does NOT
change after construction. A class is immutable if inheritance is NOT possible.
Listing 4: Mutable Class Age
// mutable
class Age {
    private int day, month, year;

    // constructor
    public Age(int day, int month, int year) {
        // ...
    }

    // getter and setter
}
9https://dictionary.cambridge.org/de/worterbuch/englisch/immutability
10 The Figure is copied from https://itnext.io/why-concept-of-immutability-is-so-damn-important-for-a-beginner-front-end-developer-8da85b565c8e.
Listing 5: Immutability Example
//@Immutable
final class ImmutableStudent { // 1

    private final int id; // 2
    // Strings are constant; their values cannot be changed
    private final String name; // 2
    private final Age age; // 2

    public ImmutableStudent(int id, String name, Age age) {
        this.name = name;
        this.id = id;
        this.age = new Age(age.getDay(), age.getMonth(), age.getYear()); // 5 (a)
    }

    // no setters // 3
    // 'normal' getters for id and name (final)
    public Age getAge() {
        return new Age(age.getDay(), age.getMonth(), age.getYear()); // 5 (b)
    }

    public ImmutableStudent hasMarried(String newName) { // 4
        return new ImmutableStudent(this.id, newName, this.age); // 4
    }
}
The following list gives you instructions on how to make a class immutable. Learn these
steps by heart and if you are interested, the following link11 explains some stuff in more
detail:
1. Declare the class as final to avoid inheritance from the class since there are constella-
tions where this could compromise immutability!
2. Declare all attributes as final! Final means that reassigning a value to a variable which has already been initialized is not possible anymore (nice language feature!).
3. No setters! This is also not possible when doing step 2 right.
4. If you implement methods which change the state of the object, the return value of
this method MUST be a new instance since the old instance remains unchanged! The
returned instance then contains the changed values as in our example.
5. If the class contains a mutable object as a class member, the following two steps are
necessary:
(a) During construction: Copy the mutable object (in our case age) so that the
reference/pointer to that object is encapsulated only within the current object.
The reason for this copying is that the caller of the constructor also has a reference
to the age object and can alter its state. HINT: Make a deep copy of the
mutable object! If the mutable object contains other objects, you have
to copy these objects as well!! (So make a recursive deep copy :))
11 https://dzone.com/articles/how-to-create-an-immutable-class-in-java
(b) Copy the mutable object whenever you share the object to the outer object world.
Only share a copy of the object as presented here for the getter. HINT: Keep
your internal state safe from outside accesses!
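The effect of the defensive copies in steps 5(a) and 5(b) can be checked with a short sketch of our own (we assume here that Age from Listing 4 offers the usual getters and setters, e.g. getYear and setYear):

Age age = new Age(1, 1, 2000);
ImmutableStudent student = new ImmutableStudent(1, "Alex", age);

// 5(a): mutating the Age passed to the constructor does not leak into the student
age.setYear(1999);
System.out.println(student.getAge().getYear()); // still 2000

// 5(b): the getter hands out a copy, so mutating it does not change the student either
student.getAge().setYear(1998);
System.out.println(student.getAge().getYear()); // still 2000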
Exercise:
Change two already implemented classes Student and Group to make them immutable.
Don’t add any other public methods to the source code (private methods are ok).
You find the sample under de.uniba.dsg.concurrency.exercises.immutability
and a corresponding Code Runner Task (Code Runner is an automatic code assessment
tool which provides an integration for Virtual Campus) in your VC course.
Example:
de.uniba.dsg.concurrency.examples.fundamentals.Immutability contains the
source code of an immutability example which is quite similar to your exercise!
1.4 Recap Section
What is the difference between primitives and objects?
Name examples for both types!
Is it possible to access the same primitive from two different objects?
Is it possible to access the same object from two other objects?
How does heap and stack relate to each other?
What are the major differences?
What is scoping?
Does scoping help you to improve encapsulation?
What is immutability?
How can I achieve this as a developer?
What implications does immutability have on the memory behavior of my application?
2 Why Threading?
2.1 Situation
It’s all about performance!
To name a few examples: Multi-Core processors, asynchronous event and data processing, background tasks etc. The reasons for concurrency - running more than one application, task etc. concurrently - are manifold.
2.1.1 Models for Concurrent Programming
In general, there are two models for concurrent programming. The first one is shared memory,
discussed in more detail in Section 4 and the second one is message passing (or distributed
memory).
As a primer, in a shared memory scenario, two or more threads share the same memory
space. In a message passing situation two or more processes exchange messages and use
different memory spaces.
But what are processes and threads?
2.1.2 Processes vs. Threads
"Each process provides the resources needed to execute a program. A process has a virtual
address space, executable code, open handles to system objects, a security context, a unique
process identifier, environment variables, a priority class, minimum and maximum working
set sizes, and at least one thread of execution. Each process is started with a single thread,
often called the primary thread, but can create additional threads from any of its threads.
A thread is the entity within a process that can be scheduled for execution. All threads
of a process share its virtual address space and system resources. In addition, each thread
maintains exception handlers, a scheduling priority, thread local storage, a unique thread
identifier, and a set of structures the system will use to save the thread context until it
is scheduled. The thread context includes the thread’s set of machine registers, the kernel
stack, a thread environment block, and a user stack in the address space of the thread’s
process."12
Figure 2 shows the difference of the two concepts and highlights the message passing in an
inter-process communication and the shared memory paradigm between threads. There are
also situations where more processes (and, within a single process, more threads) are active
than CPU cores are available. Scheduling on a process level and context switches on a thread
level make this possible.
12 https://docs.microsoft.com/en-us/windows/win32/procthread/about-processes-and-
threads?redirectedfrom=MSDN
Figure 2: Difference between Processes and Threads
2.2 Problem
Especially in shared memory scenarios the value of a computation is dependent on the
execution order of the program. Think about an increment of a variable counter++. Assume
that more than one actor concurrently increments this counter.
You may probably say now: ’That’s no problem. counter++ is a single expression and
therefore an atomic command’
Sadly not!
counter++ is a compound action, composed of three atomic operations: read the value into a register, add 1 to the current value, write the result back.
Figure 3 depicts the increments of two actors A and B. A reads the value first and sees, for example, 5. B subsequently reads the value and also reads 5. A adds 1 to the counter, B adds 1 to the counter, A writes 6 to the counter, and B also writes 6 to the counter. So a single update is lost (LOST UPDATE problem).
Parallel executions can lead to results that are not deterministic.
Figure 3: Two Actors A and B increment a shared Variable
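The lost update is easy to reproduce with a small sketch of our own (it anticipates the Thread class from Section 3): two threads increment a shared counter 100,000 times each, and the printed result is usually well below the expected 200,000.

public class LostUpdateDemo {
    private static int counter = 0;

    public static void main(String[] args) throws InterruptedException {
        Runnable increment = () -> {
            for (int i = 0; i < 100_000; i++) {
                counter++; // read, add, write - a compound action
            }
        };

        Thread a = new Thread(increment, "A");
        Thread b = new Thread(increment, "B");
        a.start();
        b.start();
        a.join();
        b.join();

        System.out.println(counter); // usually less than 200000 due to lost updates
    }
}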
2.3 Solution
As ascertained in the previous section, the problem of compound actions is that different
actors can interleave and the result is corrupted to some extent. So the obvious solution for
this problem is to define some set of actions as a Critical Section, where only a single
actor is allowed to enter this section and perform the given set of actions as if there was only
a single action from a user point of view.
Figure 4: Two Actors A and B increment a shared Variable. Critical Section enables Atom-
icity from a User’s Perspective.
This concept is presented in Figure 4 where the two Actors A and B want to enter this
critical section of read, add and write. A and B first compete for the mutually exclusive
right to enter the critical section. In our metaphor the critical section is a room. They
enter a SyncSet13 , where a single Actor is randomly picked and gets the key for the room to
lock the door after she enters the room. Then the picked actor reads, adds and writes and
after finishing the critical stuff, she unlocks the door, leaves the room and the next actor is
randomly picked (in our case the remaining Actor), gets the key, locks the door, does her
stuff and leaves by unlocking the room.
So for this critical operation consisting of these 3 atomic steps, interleaving is not possible
anymore since the critical section is protected by the lock.
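In Java such a critical section can be built with the synchronized keyword, which Section 4 discusses in detail. A minimal sketch of our own that fixes the lost update from above:

public class SafeCounter {
    private final Object lock = new Object(); // the 'key' to the room in our metaphor
    private int counter = 0;

    public void increment() {
        synchronized (lock) { // only one actor may be inside at a time
            counter++;        // read, add and write now appear atomic to other actors
        }
    }

    public int get() {
        synchronized (lock) {
            return counter;
        }
    }
}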
2.4 Recap Section
What is the difference between a process and a thread?
How is memory management impacted by threads?
What is the difference between an atomic operation and compound actions?
What is the problem in the latter when having a lot of concurrent, interleaving actors?
13 SyncSet is introduced by us for a later usage in the Java domain. There exists no SyncSet in Java, but we
need it for our metaphor later on.
3 Thread and Runnable
Runnable (interface, introduced with Java 1.0) and Thread (class, introduced with Java
1.0)14 are the two main elements of the concurrency API when Java started in the 90s.
Figure 5: Runnable and Thread Class Diagram
Runnable as an interface only specifies a single method void run(). When implementing this
interface in a class MyRunnable a user has to add his business logic and can extend (inherit
from) other classes. An instance of this class can then be executed by a runtime environment
of a programmer’s choice.
As Figure 5 shows, class Thread implements Runnable and a user has to add his business
logic also to the run method when extending a Thread with a class MyThread. Furthermore,
some lifecycle methods are also implemented by the Thread class and therefore the user does
not have to implement these methods (so he can run concurrent code quite easily). HINT: Since Java does not allow extending more than one class, inheriting from another class is no longer possible when extending Thread.
The code in Figure 6 produces the same output for both sides. On the left hand side,
MyRunnable implements the Runnable interface. Here the thread (execution environment)
and the runnable (logic) are separated. On the right hand side, MyThread extends Thread
class and implements/overrides the method.
As a rule of thumb:
When possible, implement a Runnable since separation of concerns is enforced. The
logic (run method) is separated from the execution environment (Thread). Implement-
ing runnables is also the normal way when using different APIs (see Section 5.3).
14 see the documentation for Runnable: https://docs.oracle.com/en/java/javase/11/docs/api/java.
base/java/lang/Runnable.html
See the documentation for Thread: https://docs.oracle.com/en/java/javase/11/docs/api/java.
base/java/lang/Thread.html
3.1 Thread Methods
The most important methods of class Thread are as follows (for further information look at
the JavaDocs):
setName(String name): void – Sets the thread’s name.
getName(): String – Gets the thread’s name.
getId(): long – Gets a unique ID, but IDs may be reused.
getState(): Thread.State – Gets the current thread state.
interrupt(): void – Interrupts the thread and sets the interrupted flag to true.
isInterrupted(): boolean – Checks the interrupted flag.
interrupted(): boolean – Checks the interrupted flag and resets it to false.
start(): void – Starts the thread and executes it (invokes the run() method) concur-
rently to the main thread and all other running threads and daemon threads.
join(): void – Blocking method. Waits until the thread is finished with its computation
- that is, when the run() method exits.
As shown in the excerpt of Thread‘s methods, there is a method which returns the state
of a thread. Once a thread is terminated, there is no possibility to restart it for a second
execution.
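For example, calling start() a second time on the same Thread instance fails, no matter whether the first execution is still running or already terminated (the exception type is specified by the Java API; the surrounding sketch is our own):

Thread t = new Thread(() -> System.out.println("run once"));
t.start();
// ... later, even after the thread has terminated:
t.start(); // throws java.lang.IllegalThreadStateException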
Left-hand side (Runnable):

public class MyRunnable implements Runnable {
    private String name;
    private int someInt;

    public MyRunnable(String name, int someInt) {
        this.name = name;
        this.someInt = someInt;
    }

    public void run() {
        System.out.println("Thread name is "
            + name + " " + someInt);
    }
}

public class RunnableMain {
    public static void main(String[] args) {
        MyRunnable r = new MyRunnable("My runnable", 1);
        Thread myThread = new Thread(r);
        myThread.start();
    }
}

+ MyRunnable can still inherit from other classes
+ Separation of concerns: run() is separated from the executing object (Thread), Strategy Pattern
+ The normal way when using different APIs
- Little functionality provided

Right-hand side (Thread):

public class MyThread extends Thread {
    private int someInt;

    public MyThread(String name, int someInt) {
        super(name); // attribute in superclass
        this.someInt = someInt;
    }

    public void run() {
        System.out.println("Thread name is "
            + this.getName() + " " + someInt);
    }
}

public class ThreadMain {
    public static void main(String[] args) {
        Thread t = new MyThread("My first thread", 1);
        t.start();
    }
}

+ A lot of functionality already available, easier thread management
- Restricts inheritance (no multiple inheritance in Java)
Figure 6: Runnable and Thread Code Comparison
Tip: The Runnable and Thread classes from Figure 6 are available in the exercise project in the package examples.threading.*.
3.2 Thread States
After the thread is instantiated with new Thread(), it is in the status new. After invoking
start() on the thread instance, the runtime environment starts the thread concurrently (sta-
tus runnable) to already running threads15 . When the thread wants to acquire a lock and
waits for the monitor (due to another thread holding the monitor‘s lock), the thread is in
state blocked and resumes when the lock is granted (so it can acquire the lock and access
the critical section). With wait() and join(), the executing thread is in a monitor’s Wait Set
and can only be awakened by notify(), notifyAll() or interrupt(). Method notify() selects a
random thread within the monitor‘s wait set, whereas notifyAll() awakes all threads which
are competing with each other for the monitor’s lock (remember all besides a single thread,
are then in the state blocked :). Timed waiting is the same as waiting besides the millisecond
period x.
Figure 7: Thread States and their Transitions
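A small sketch of our own to observe some of these states (the intermediate states depend on scheduling, so the output may differ slightly on your machine):

public static void main(String[] args) throws InterruptedException {
    Thread t = new Thread(() -> {
        try {
            Thread.sleep(500);
        } catch (InterruptedException e) {
            // ignored in this small demo
        }
    });

    System.out.println(t.getState()); // NEW
    t.start();
    System.out.println(t.getState()); // typically RUNNABLE
    Thread.sleep(100);
    System.out.println(t.getState()); // typically TIMED_WAITING (inside sleep(500))
    t.join();
    System.out.println(t.getState()); // TERMINATED
}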
3.3 A Running Example
This running example shows the most important methods and how they interact with each
other. We will try to emphasize some common pitfalls, which are often neglected when doing
thread programming.
As a rule of thumb, we separate the logic (Runnable) and the execution environment
(Thread) from each other. We further know from the previous section that start starts the
thread instance concurrently to the main thread and executes the run method of the passed
Runnable. Listing 6 shows how to create and start threads:
Listing 6: Create and start Threads
1   public static void main(String[] args) {
2
3       Runnable r = new Runnable() {
4           @Override
5           public void run() {
6               try {
7                   Thread.sleep(10000);
8                   printIt(") waited for a long time and terminate now!");
9               } catch (InterruptedException e) {
10                  printIt(") was interrupted and terminate now!");
11              }
12          }
13      };
14
15      // thread object creation,
16      // like every other object by calling the constructor
17      Thread a = new Thread(r, "A");
18      Thread b = new Thread(r, "B");
19
20      // starting the threads
21      a.start();
22      b.start();
23
24      // exit main thread
25      printIt(") am the master and terminate now!");
26  }
27
28  private static void printIt(String s) {
29      System.out.println("I (" + Thread.currentThread().getName() + s);
30  }

15 The main thread is always running until the program terminates. Some frameworks also use daemon threads, e.g. the garbage collector (see Java Concurrency in Practice, page 165 for a description), which also run concurrently but do NOT hinder the JVM from terminating properly.
Console output:
I (main) am the master and terminate now!
I (B) waited for a long time and terminate now!
I (A) waited for a long time and terminate now!
As we passed names to the threads, the output here should be no surprise. Or does the output surprise you? When reading the code snippet and thinking about it, we might say: we start the threads in lines 21 and 22, but why is the print in line 25 faster than the prints of our threads in line 8? And why is the print of thread B prior to the one of thread A?
As we already know (and when you execute the snippets of our example project, you will
see it), the threads are executed concurrently. So the ordering is not deterministic, since
the scheduling of different threads on different CPUs happens without any possibility to
intervene16.
So how do we get the correct ordering so that the print in line 25 is present after the prints
within the threads?
A first suggestion would be to call run instead of start in line 21 (a.run()) and 22 (b.run()).
The console output is the following:
16 Besides the stuff we discuss in the next Section.
Console output:
I (main) waited for a long time and terminate now!
I (main) waited for a long time and terminate now!
I (main) am the master and terminate now!
Surprisingly all print statements are executed by the main thread. Remember the dualism
of class Thread here.
As a rule of thumb:
An instance of class Thread is like any other object in Java. When I call a method, in this case run, directly, it is executed in the current thread (which is by default the main thread, since there is no other). The specialty of Thread is that the JVM starts a new concurrent thread when start is called and executes the business logic specified in run within this new thread.
What you also might have recognized is the longer execution time. Since it is executed
sequentially, the JVM needs a bit more than 20 seconds (in each thread we wait for 10
seconds) to terminate. When using start() it only took a bit more than 10 seconds, but the
ordering was corrupted.
So how do we solve this: waiting for the termination of some threads before doing some important stuff, but keeping the performance boost of concurrent execution?
In the previous section we introduced the most important Thread methods, join being one of them. The API documentation of Thread#join() says: "Waits for this thread to die."17
The adjusted code sample looks like the following, we omitted some code compared to the
previous sample:
17 https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/lang/Thread.html#
join()
Listing 7: Join Threads
1   public static void main(String[] args) {
2
3       Runnable r = ...;
4
5       // thread object creation,
6       // like every other object by calling the constructor
7       Thread a = new Thread(r, "A");
8       Thread b = new Thread(r, "B");
9
10      // starting the threads
11      a.start();
12      b.start();
13
14      // joining threads
15      a.join();
16      b.join();
17
18      // exit main thread
19      printIt(") am the master and terminate now!");
20  }
Example:
Under de.uniba.dsg.concurrency.examples.threads.Threading you will find the
example code.
Exercise:
As an exercise, try also to implement some runnables and threads and play a bit around
with thread methods and states.
For the sake of simplicity, we do not discuss join's checked InterruptedException here. You can
find a detailed explanation about interruptions in Section 4.2.
When reading this single sentence from the API doc, two questions arise. Who waits? And
who dies?
As we already know, all the code we see in Listing 7 is executed by the main thread (think
about the run example we did previously and the corresponding console output)18 . So, in
line 15 the main thread calls a.join(), therefore the main thread has to wait here until thread a terminates/dies (when the run method exits). Then the main thread is informed that a finished its execution and the main thread continues. In our case it continues with calling b.join in line 16 and waits until it gets informed again. It is very important that the two joins are executed in sequence and the main thread can only proceed when both threads die (first A, then B).
So in the end the two prints of our threads A and B are executed before the print in line 19,
where the main thread terminates and also the JVM. The order of A and B is not preserved
since both threads are started at the same time and they both sleep for 10 seconds and
18 The stuff specified within the anonymous implementation of Runnable in line 3 is executed concurrently to
the main thread! We hope that this is already clear and the remark here is only to foster your knowledge.
awake at the same time. So both orderings, A before B or vice versa, are possible.
3.4 Recap Section
What is the difference between Runnable and Thread?
What is the difference between a ’normal’ and a daemon thread?
Why favor Runnable over implementing Threads?
Draw the thread’s lifecycle and annotate the arrows between the states with the cor-
responding methods to change them.
When does a thread terminate/die?
What happens when calling join on another thread object?
What happens if the main thread terminates/dies?
4 Shared Memory
4.1 Low Level Concepts
4.1.1 Java Monitor
Each object in Java is associated with a monitor (a type of monitor similar to the ones discussed in operating system courses). For our understanding it is sufficient to think of a
monitor in Java as an entity which consists of an inherent lock object and a Wait Set. The
Wait Set contains threads which are waiting upon acquiring the monitor’s lock (see also
Section 3.2 for a discussion about different thread states).
Figure 8: Java Monitor with SyncSet, Lock Object and WaitSet
When threads compete for the lock, they enter a SyncSet19 and one of the competing threads
can acquire the lock while the others are blocked as depicted and explained in Section 3.2.
4.1.2 Lock - synchronized and Wait Set
We will now discuss the inherent lock object part of a Java monitor. The synchronized
keyword helps us to create a critical section in Java. Listing 8 shows the syntax for using
synchronized. The implications are discussed after the Listing.
Listing 8: Usage of synchronized Keyword
class DoSomeWork {
    private final Object mutex = new Object(); // our lock object

    // ... attributes, constructor etc.

    public void doIt() {
        synchronized (mutex) {
            // critical section
        }
    }
}
The lock object is specified in brackets where the mutual (sequential) access is enforced. For
DoSomeWork we create a private object mutex which does all the monitor stuff for us. As a
first remark here: the lock object mutex is not accessible from outside the class. Therefore,
the synchronization policy is encapsulated within the class (this is an important decision, we will see in Section 4.1.2 why).
19 The concept of a SyncSet was introduced by us. There is no such thing as a SyncSet in Java, but we need it for our metaphor later on.
In our example, all threads that have a reference to the same instance of DoSomeWork and concurrently invoke doIt compete for the lock mutex to gain access to the critical section. But only executing a single function without coordination is not sufficient in most cases and also not really exciting! We now know how the lock object part of a Java monitor works. Let's look at the Wait Set.
As a rule of thumb:
Guarded Wait - wait/notify
In multithreading scenarios, when many actors want to achieve a specific goal, it is common to check a condition before doing some work (Guarded Wait), since there are often scenarios where a thread has to pass control to another thread which is blocked, and reacquire the lock again later.
So the first question is, what is the guard in our example?
The guard is a condition at the beginning of a critical section. Here a thread checks whether
it is its turn or not. If this is not the case, the thread releases the lock and waits until another
thread or framework informs it that it can wake up. When awoken, the thread reacquires
the lock and before entering the critical section stuff, it has to check the condition again! It
might be the case that the situation has changed but its condition is still not true. Then
this thread has to wait again, wake up again and check the condition until it evaluates to
true.
Now we add the concept of Guarded Wait to our previous Listing 8 in Listing 9.
Listing 9: Calling wait()
class DoSomeWork {
    private final Object mutex = new Object(); // our lock object

    // ... attributes, constructor etc.

    public void doIt() {
        synchronized (mutex) {
            while (condition == false) {
                mutex.wait(); // wait() throws the checked InterruptedException; its handling is added in Listing 11
            }

            // critical section stuff
        }
    }
}
There is a second wait method within the Java API for getting a thread to wait and enter the wait set of a lock object. This method takes a time interval which specifies how long the thread waits at most before waking up again. This method can be beneficial in cases where the situation changes but the waiting thread does not recognize it since it is not informed by another thread in the system. So keep in mind that calling wait with a timeout is also a possible solution to wake a thread up after some time and check the condition again.
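A minimal sketch of this timed variant (our own illustration; condition stands for an arbitrary guard set elsewhere):

class DoSomeWorkTimed {
    private final Object mutex = new Object(); // our lock object
    private boolean condition; // hypothetical guard, set by other threads

    public void doIt() throws InterruptedException {
        synchronized (mutex) {
            while (!condition) {
                // wake up after at most one second and re-check the guard,
                // even if no other thread calls notify()/notifyAll()
                mutex.wait(1000);
            }
            // critical section stuff
        }
    }
}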
Since wait is related to the Java monitor, the thread enters the wait set if the condition is false. wait MUST be called on the lock object whose monitor the current thread holds; this way the lock object knows which threads are waiting for it. Otherwise you get an IllegalMonitorStateException.
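A minimal sketch of this pitfall (our own illustration, not part of the example project):

class WrongMonitor {
    private final Object mutex = new Object();

    public void broken() throws InterruptedException {
        synchronized (mutex) {
            this.wait();  // we hold the monitor of mutex, NOT of this -> IllegalMonitorStateException
        }
    }

    public void correct() throws InterruptedException {
        synchronized (mutex) {
            mutex.wait(); // called on the object we synchronized on
        }
    }
}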
Wait API Doc: "Causes the current thread to wait until it is awakened, typically by being
notified or interrupted, or until a certain amount of real time has elapsed."20
Notify API Doc: "The awakened thread will compete in the usual manner with any other
threads that might be actively competing to synchronize on this object; for example, the
awakened thread enjoys no reliable privilege or disadvantage in being the next thread to lock
this object."21 22
As a rule of thumb:
Code Style: Always use wait and notify/notifyAll within the same synchronized code
block. The reason for this rule of thumb is the readability due to proximity of the two
related methods.
As stated in our example, the thread is in the wait set and we consider the two possibilities for the awakening process23. In both cases the waiting thread has to reacquire the lock, otherwise it can't resume.
The two options to wake up a thread are the following:
1. Notify() - as the name suggests - notifies an arbitrary thread within the wait set. This awoken thread reacquires the lock, checks the guard condition and - if applicable - enters the critical section. But what does an implementation look like and where should I call notify? Listing 10 provides some answers.
Listing 10: Calling notify()
class DoSomeWork {
    private final Object mutex = new Object(); // our lock object
    // ... attributes, constructor etc.

    public void doIt() {
        synchronized (mutex) {
            while (condition == false) {
                mutex.wait();
            }
            // critical section stuff

            // leave critical section, notify another thread
            mutex.notify();
        }
    }
}
20 https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/lang/Object.html#wait(long,int)
21 The same holds true for the interrupt we discuss in the second item.
22 https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/lang/Object.html#notify()
23 There is a third possibility, as the documentation says, which are spurious wakeups, but these wake-ups are handled by implementing Guarded Wait correctly. So if you are interested, read it up on your own.
Assume we have two threads: the first enters the synchronized block/critical section, evaluates the condition, which is false, and has to wait. The second thread subsequently checks the condition, evaluates it to true, enters the critical section and afterwards calls notify on the lock object. Only the Java monitor (lock object) knows which threads are waiting. The consequence is that an arbitrary thread within the wait set (we know only the first thread is there) is awoken, reacquires the lock and checks its condition again. Given that it now evaluates to true, it does the critical section stuff, notifies another thread (hint: no one is there, but who cares), returns from the method and the JVM exits (the program terminates).
NotifyAll, as the name implies, wakes up all threads within the monitor’s wait set. The
awoken threads then compete for the lock in order to enter the critical section24.
2. The second option to release a thread from the wait set is to interrupt it directly via the thread's interrupt() method or via some frameworks (for further discussion see Section 5.3). This causes an InterruptedException, which we intentionally haven't discussed so far as it is a difficult problem/exception to handle correctly. Section 4.2 discusses these problems in detail. When the thread is interrupted, it has to compete for the lock (same policy as before). If the lock is acquired, the control flow resumes at line 14 (catch block) in Listing 11.
Listing 11: Adding InterruptedException and complete Example
 1 class DoSomeWork {
 2     private final Object mutex = new Object(); // our lock object
 3
 4     // ... attributes, constructor etc.
 5
 6     public void doIt() {
 7
 8         // do other stuff, which needs no synchronization
 9
10         synchronized (mutex) {
11             while (condition == false) {
12                 try {
13                     mutex.wait();
14                 } catch (InterruptedException e) {
15                     // handle it
16                 }
17             }
18             // critical section stuff
19
20             // leave critical section, notify another thread
21             mutex.notify();
22         }
23
24         // do other stuff, which needs no synchronization
25     }
26 }
24 So in our metaphor, they enter the SyncSet and compete there for the lock.
As a rule of thumb:
Using a dedicated lock object gives a developer the opportunity to decide within a
method which parts need synchronization (critical section) and which do not (do other
stuff ).
As a rule of thumb:
wait(), notify() and notifyAll() are monitor methods. The monitor methods are included in the class Object. Since each class inherits from Object, every object can be used as a lock. When you call one of these three methods, you have to do it by invoking them on the lock object.
Synchronized within the Method Signature
We introduced the synchronized keyword with an explicit lock object to separate different concepts from each other. Our examples in Listings 8-11 implemented a class DoSomeWork. When we create an instance of DoSomeWork, a single member is responsible for all the mutex stuff; it is protected from outside class access via the private modifier and can't be changed because of the final keyword25. But there is also the possibility to include the synchronized keyword in the method signature (HINT: this is more or less the default for Java APIs or code you see implemented by others. It looks easier, but in the end the synchronization policy is harder to enforce - you will see why in a moment :).
So we rewrite our class DoSomeWork and synchronize the doIt method by including the
keyword in the method signature in Listing 12:
Listing 12: Synchronized Methods
class DoSomeWork {
    // ... attributes, constructor etc.
    public synchronized void doIt() {
        // critical section
    }
}
Synchronized is now included in the method signature. But where is the lock object?
As already said and explained, each object in Java has an associated monitor. And as you
might know, you can self-reference an object within its implementation via this.
When synchronized is included in the method signature, implicitly the self-reference of the
object is used as the lock. If this dualism is already in your mind, you will not be surprised
that the following snippet does the same as the previous, except for a single difference.
25 This is also an important restriction! If objects alter their identity during locking, the synchronization
policy is corrupted, see Section 4.1.5 for a discussion about final objects which change their identity.
Listing 13: Synchronized Blocks within Methods
class DoSomeWork {
    // ... attributes, constructor etc.
    public void doIt() {
        synchronized (this) {
            // critical section
        }
    }
}
The only, tiny difference is that we can do some unsynchronized work before or after the synchronized block.
Another difference to our final Object mutex approach is that we expose the synchronization policy to everyone using an instance of DoSomeWork. That's somewhat dangerous, but for many APIs necessary to enable outside class synchronization. To shed some light on this, we adjust our DoSomeWork class with another synchronized method doItAgain. But before we proceed, another rule of thumb.
As a rule of thumb:
Decide which encapsulation strategy you use for the synchronization of a class. As we have seen, there is a huge difference between a private, encapsulated lock object and using the self-reference for synchronization, which means exposing the synchronization policy to everyone using the instance. Unless you have really good arguments for the latter, choose the first strategy.
In Listing 14 we use the notation of Listing 12 for doIt and the notation of Listing 13 for doItAgain. (But they are both the same ;) despite the small syntactic difference.)
Listing 14: Synchronized Methods - Mixture
class DoSomeWork {
    // ... attributes, constructor etc.

    public synchronized void doIt() {
        // critical section
    }

    public void doItAgain() {
        synchronized (this) {
            // critical section
        }
    }
}
Outside class synchronization
Assume we have two threads and a Runnable implementation as shown in the following Listing 15:
Listing 15: Two Threads doing something
 1 public static void main(String[] args) throws InterruptedException {
 2     DoSomeWork work = new DoSomeWork();
 3
 4     Runnable r = new Runnable() {
 5         @Override
 6         public void run() {
 7             work.doItAgain();
 8             work.doIt();
 9         }
10     };
11
12     Thread one = new Thread(r, "1");
13     Thread two = new Thread(r, "2");
14
15     // start threads
16     one.start(); two.start();
17
18     // join threads
19     one.join(); two.join();
20 }
Due to the implementation we provided for the two methods, the console output is in the following order. The reason for this order is a context switch after doItAgain.
Console output:
Executing thread: 1 Do it again
Executing thread: 2 Do it again
Executing thread: 1 Do It
Executing thread: 2 Do It
Executing thread: main DONE
We can see that the two threads interleave. At the beginning thread 1 acquires the lock, thread 2 is blocked. Thread 1 executes doItAgain; when it leaves the synchronized block (now the lock is free again), thread 2 can enter the critical section in doItAgain. Same procedure: thread 2 exits the synchronized block/releases the lock and another context switch occurs26. Thread 1 enters the critical section of doIt, since no other thread currently holds the lock. After executing the method, it releases the lock, returns from the run method and terminates. Thread 2 then executes doIt and also terminates.
Since we have a really urgent business demand, we want to have a compound action of doIt and doItAgain without interleaving. Because locks are reentrant, we can synchronize the two methods outside of the class. As another hint here: most APIs offer this method-style synchronization and a developer can then build compound actions in his/her code.
26 We simulated these context switches via Thread#sleep invocations within the implementations.
The lock object in our DoSomeWork class is implicitly the this reference. It is important to
synchronize with the identical lock, otherwise the synchronization does not work properly.
So we can use the work object from line 2 as the lock object for our compound action. Lines 12 to 20 of Listing 15 are skipped for the sake of simplicity.
Listing 16: Enabling outside Class Synchronization - Compound Action
public static void main(String[] args) {
    DoSomeWork work = new DoSomeWork();

    Runnable r = new Runnable() {
        @Override
        public void run() {
            // outside class synchronization - compound action
            synchronized (work) {
                work.doItAgain();
                work.doIt();
            }
        }
    };
}
And the output is as expected:
Console output:
Executing thread: 1 Do it again
Executing thread: 1 Do It
Executing thread: 2 Do it again
Executing thread: 2 Do It
Executing thread: main DONE
Example:
We also included an example in our Java example project.
de.uniba.dsg.concurrency.examples.lowlevel.DoSomeWork contains the code. We
used another lock object (Object mutex) within the main method to enable the com-
pound action and explain the problems there.
As a primer: when using another lock object, other threads which only execute doIt or doItAgain can interleave between the two method calls of our compound action, since they compete for the this lock every time independently. If we use work here, then there is a competition of T1 & T2, which want to execute the compound action and subsequently (re)enter the lock twice. Any other thread which wants to execute one of the two methods is blocked. (See also the sketch below.)
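A minimal sketch of the broken variant described in the primer (otherMutex is our made-up name): the compound action holds only otherMutex, while doIt/doItAgain internally lock on the work instance, so a thread calling the methods directly can still interleave.

public static void main(String[] args) {
    DoSomeWork work = new DoSomeWork();
    Object otherMutex = new Object(); // NOT the lock used inside DoSomeWork

    Runnable broken = new Runnable() {
        @Override
        public void run() {
            synchronized (otherMutex) { // excludes only other users of otherMutex
                work.doItAgain();       // internally locks on 'work' (this) ...
                // ... a thread calling work.doIt() directly can sneak in right here
                work.doIt();
            }
        }
    };
    // start threads with 'broken' plus one thread calling work.doIt() directly to see the interleaving
}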
Exercise:
Investigate the aforementioned example, play around with the lock objects, understand the problems and differences, and discuss them with your fellow students (when you can explain it - you got it :).
As a rule of thumb:
When not implementing an API which you share via Maven Central or another package repository, use the encapsulated private lock object strategy. Otherwise a developer/user who is normally not aware of all the nit-picky details will use your class in a wrong way and corrupt your internal state.
As a rule of thumb:
All accesses (reading and writing) to private fields (internal state) MUST be synchronized on the same (identical!) lock instance!
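A minimal sketch of this rule (our own illustration): both the writing and the reading access are guarded by the same private lock.

class SharedState {
    private final Object mutex = new Object();
    private int value; // internal state, guarded by mutex

    public void setValue(int newValue) {
        synchronized (mutex) { // write access ...
            value = newValue;
        }
    }

    public int getValue() {
        synchronized (mutex) { // ... and read access use the identical lock instance
            return value;
        }
    }
}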
4.1.3 Static Synchronization
So far, we discussed how to secure the internal state of objects from concurrent accesses, but
what about static members of a class?
As we already explained, objects are stored on the heap. Static members are also stored on the heap (primitives and objects - otherwise they wouldn't be accessible from all threads concurrently), but they are not stored like ordinary instance fields; they belong to the class representation kept in a meta space of the JVM.
Each class also has an implicit, so to say static, instance of type Class which identifies the class and can be used as a lock object (with the same guarantees as discussed in Section 4.1.2).
Listing 17 shows how static fields are synchronized to avoid concurrency issues:
Listing 17: Static Member Synchronization
class CounterClass {

    private static int x;

    public int incrementAndGetX() {
        synchronized (CounterClass.class) {
            x++;
            return x;
        }
    }
}
A good explanation with all the possibilities to synchronize a static member can be found at StackOverflow27.
27 https://stackoverflow.com/questions/2120248/how-to-synchronize-a-static-variable-among-threads-running-different-instances-o
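One of these possibilities is to declare the method static synchronized, which implicitly uses CounterClass.class as the lock, i.e. the same lock object as in Listing 17 (sketch by us):

class CounterClass {

    private static int x;

    // equivalent to wrapping the body in synchronized (CounterClass.class) { ... }
    public static synchronized int incrementAndGetX() {
        x++;
        return x;
    }
}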
4.1.4 A Running Example
For our running example, where we explain the more theoretical considerations of the previous subsection in more detail, we assume that the monitor object is an instance ball of an implemented class Ball. Since playing ball against a wall is somehow boring, we (A) want to
play with our friends B, C and D. Assuming that thread A28 acquires the lock of ball, threads B, C and D are blocked now. For now, acquiring the lock and getting into the critical section works with the synchronized keyword where the lock object (in our case ball) is specified within brackets.
So the current code looks like the following:
Listing 18: Running Synchronization Example
class Ball {
    // ....
}

class Player extends Thread {
    private final Ball ball; // lock object
    private int noOfTurns = 2; // exits when turns == 0

    public Player(Ball ball, String name) {
        super(name);
        this.ball = ball;
    }

    public void run() {
        this.play();
    }

    public void play() {
        synchronized (ball) {
            // critical section
        }
    }

    // other methods, like itsMyTurn ...
}
So thread A is now in the critical section and checks the condition (business logic) to deter-
mine whether it’s its turn.
Thread A B C D
State runnable blocked blocked blocked
Assume that the condition we check is only true when it's thread D's turn. So thread A checks the condition, it is false, and thread A says: "ok, let's wait until someone notifies me and I will then check if it is my turn". So that everybody in the Java world knows that we are waiting for the ball, we call wait() on our lock object ball. Thread A is then in the wait set of ball and one of the other threads can acquire the lock and switch its state from blocked to running. We assume that thread C (a random thread is picked from the SyncSet, where the threads are competing for the lock, see the thread states in Section 3.2) now gets control and can acquire the lock.
28 What’s meant here is the player A which is executed by a thread. For simplicity we assume that player A
is executed by thread A, ...
Listing 19: Running Synchronization Example
class Ball {
    // ....
}

class Player extends Thread {

    // omitted ...

    public void play() {
        synchronized (ball) {
            // critical section
            while (!this.itsMyTurn()) { // check the condition
                try {
                    ball.wait();
                } catch (InterruptedException e) {
                    // we do not care for the moment
                }
            }

            makeMyTurn();
            // end critical section
        }
    }

    // other methods, like itsMyTurn ...
}
The actual thread state situation is now the following:
Thread A B C D
State waiting blocked running blocked
As we already know it's thread D's turn, so thread C checks the condition, evaluates it to false and also calls wait on the lock object ball.
Thread A B C D
State waiting blocked waiting blocked
So the next player getting control is thread B. Same "problem". So finally thread D acquires the lock and executes its turn, the play method returns and also the run method returns. Thread D is now in the state terminated (it leaves the game).
Thread A B C D
State waiting waiting waiting terminated
Coool :) Thread D finally does its turn, BUT what about A, B and C? They are all waiting for someone to tell them that things have changed and that it's another thread's turn. (Also, the JVM does not terminate, which is another urgent problem...)
As we already know from the previous subsection, we forgot to notify another thread when we finished our execution. So the easiest way is to call notify() after we did our turn.
For the sake of simplicity we only show the play method:
Listing 20: Awake one Thread via notify()
public void play() {
    synchronized (ball) {
        // critical section
        while (!this.itsMyTurn()) { // check the condition
            try {
                ball.wait();
            } catch (InterruptedException e) {
                // we do not care for the moment
            }
        }

        makeMyTurn();
        // end critical section
        ball.notify();
    }
}
So thread D, after doing its turn, notifies an arbitrary thread. We assume that after D's turn it is thread B's turn (B's condition becomes true). However, the arbitrary selection picks thread A. A can acquire the lock and currently the situation looks like the following.
Thread A B C D
State running waiting waiting terminated
Now A checks its condition, it is sadly false and calls wait again. So the overall status of
our program is like the following:
Thread A B C D
State waiting waiting waiting terminated
Damn! Another scenario where the program is stuck. So how do we solve this issue?
There is another method, notifyAll(), which wakes up all threads in a monitor's wait set. So we change our method and call notifyAll() instead of notify().
Listing 21: Awake all Threads via notifyAll()
public void play() {
    synchronized (ball) {
        // critical section
        while (!this.itsMyTurn()) { // check the condition
            try {
                ball.wait();
            } catch (InterruptedException e) {
                // we do not care for the moment
            }
        }

        makeMyTurn();
        // end critical section
        ball.notifyAll();
    }
}
All threads included in the wait set wake up and compete to enter the critical section. The scenario now is as follows (all threads from the wait set change their state from waiting to blocked):
Thread A B C D
State blocked blocked blocked terminated
Assuming, as already said, thread A wins the race, recognizes that it is not its turn and waits again. Now B and C are blocked and the lock is granted to B. So the current thread state is as follows:
Thread A B C D
State waiting running blocked terminated
B executes the critical section stuff and terminates. But before terminating, it wakes up all waiting threads (in our case only A). Call it fair or unfair, as you like: A and C are now blocked again and compete for the lock.
After another two or three iterations, all threads reach the terminated state and the JVM exits.
Example:
de.uniba.dsg.concurrency.examples.lowlevel.Play contains the sources of our
running example.
As a rule of thumb:
If you are not sure whether notify() is sufficient and your program will terminate in all conceivable situations, use notifyAll() instead. It is only a question of a micro-performance optimization when using notify().
4.1.5 Another Example - Using Wrapper/final Objects as Locks
To understand locking and the identity of lock objects a bit better, we look at the following example code snippet, where we use an Integer value, which is incremented concurrently, as a lock object. Since the wrapper class Integer is an object and not a primitive type like its little brother int, it has all the necessary methods we need for synchronization. The following code snippet - with the knowledge we gained so far - is syntactically correct, but there is a tiny problem with the lock object:
Listing 22: A simple Counter - so where is the Problem?
class Counter {
    private Integer value;
    public Counter(Integer value) {
        this.value = value;
    }

    public void increment() {
        synchronized (value) {
            value++;
        }
    }
}
You might ask: where is the problem? From the prior chapter and the discussion until now there is no error when looking at the Counter in Listing 22. As a hint here: Integer is immutable (and the class is final). So no instance of Integer ever changes its value; if the value of an Integer variable is "changed", the variable is set to a reference to another Integer object.
So let's do an example. Assume we have three threads which all use the same instance of counter and concurrently call increment on it. As our Figure 9 shows, the first two threads want to do an increment, so the lock object Integer value has the value 5 and a hash code (hashCode method in Java) of 5²⁹. So A and B synchronize on the Integer object 5. A gets the lock, executes the critical section (incrementing the value to 6) and leaves the critical section. B resumes from its blocked state (it was blocked waiting for object 5), changes its state to running, acquires the lock for object 5, enters the critical section and increments value as well.
So our lock object changes its identity (the reference which is stored in the variable value). B doesn't recognize this change since it was blocked on the prior object (5). So the update of the first thread A is lost.
29 For Integer, the hashCode and the actual int value are identical.
Figure 9: Using a Wrapper or other final Objects as Lock Object (the figure depicts several threads calling increment() on the shared Counter, whose Integer lock object value changes its identity with every increment)
As a rule of thumb:
As we know from the Immutability Section 1.3, immutable objects alter their identity if a change happens. Immutable objects can only serve as lock objects if the variable holding them is never reassigned (think about the final keyword) during program execution; otherwise the problem we have seen in this section happens.
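A minimal sketch of a fixed Counter (our own variant): the lock is a dedicated final Object whose identity never changes, while the mutable state is a plain int.

class Counter {
    private final Object lock = new Object(); // identity never changes during program execution
    private int value;                        // mutable state, guarded by lock

    public Counter(int value) {
        this.value = value;
    }

    public void increment() {
        synchronized (lock) {
            value++;
        }
    }
}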
4.1.6 Visibility
We stated in our synchronization example that different threads have access to the same, identical object. But how can this happen if we have a multi-core computer and the threads are scheduled on different CPUs? You may answer: that's fair enough, they are on the same system and therefore have access to the same main memory (RAM). We would agree, but that is not the complete truth.
There is some sort of optimization implemented which we have to discuss here and which makes this access to consistent data somewhat fragile in some situations. Let's look at a simple example in Listing 23³⁰:
Listing 23: A simple Visibility Class - so where is the Problem?
class Visibility {
    public boolean ready = false;
    public int number = 0;
}
Problem
So when we think about two threads which concurrently read and write values (we are aware that writing is not an atomic action, see Section 2.2), they can see different values of the variables due to caching and reordering.
Before explaining the caching and reordering problem, we have to introduce the volatile
keyword which is part of our solution. We also add an additional boolean member to our
Visibility class of Listing 23 in Listing 24.
Listing 24: Visibility Class with a volatile Member
class Visibility {
    public volatile boolean breakLoop;
    public boolean ready = false;
    public int number = 0;
}
30 The public fields ready and number are here for the sake of simplicity. We highly recommend using private as a field modifier. The example is taken and adapted from Brian Goetz's "NoVisibility.java", page 34 in his book "Java Concurrency in Practice".
Volatile
"Volatile fields are special fields which are used for communicating state between threads.
Each read of a volatile will see the last write to that volatile by any thread; in
effect, they are designated by the programmer as fields for which it is never acceptable
to see a "stale" value as a result of caching or reordering. The compiler and runtime are
prohibited from allocating them in registers. They must also ensure that after they
are written, they are flushed out of the cache to main memory, so they can immediately
become visible to other threads. Similarly, before a volatile field is read, the cache must be
invalidated so that the value in main memory, not the local processor cache, is the one seen.
There are also additional restrictions on reordering accesses to volatile variables."31
Caching
So we now know that volatile guarantees to see the most recent consistent value of all fields
over all threads of the program. You may say - ok, that's not new information, I already know this, so where is the real problem?
Each CPU has caches nearby, where it looks for attributes when executing a command. So some attributes are cached for better performance and are not read from and written to the main memory every time a command with this attribute is executed32. The MESI protocol keeps all CPU caches (normally L1 and L2) in sync. So where is the problem? CPU registers are the problem, since they are not kept in sync by the MESI protocol. But volatile prohibits the allocation in CPU registers. So the first problem is solved :)
Reordering
"There are a number of cases in which accesses to program variables (object instance fields,
class static fields, and array elements) may appear to execute in a different order than
was specified by the program. The compiler is free to take liberties with the ordering of
instructions in the name of optimization. Processors may execute instructions out of order
under certain circumstances. Data may be moved between registers, processor caches, and
main memory in different order than specified by the program.
For example, if a thread writes to field a and then to field b, and the value of b
does not depend on the value of a, then the compiler is free to reorder these
operations, and the cache is free to flush b to main memory before a. There are
a number of potential sources of reordering, such as the compiler, the JIT, and the cache.
The compiler, runtime, and hardware are supposed to conspire to create the illusion of as-if-serial semantics, which means that in a single-threaded program, the program should not be able to observe the effects of reorderings. However, reorderings can come into play in incorrectly synchronized multithreaded programs, where one thread is able to observe the effects of other threads, and may be able to detect that variable accesses become visible to other threads in a different order than executed or specified in the program.
Most of the time, one thread doesn't care what the other is doing. But when it does, that's what synchronization is used for."33 34
31 Since this is a good explanation for volatile, we copied it from the FAQ of the JSR133: https://www.cs.umd.edu/~pugh/java/memoryModel/jsr-133-faq.html
32 This is also a good reference to read about caching and how the MESI protocol works for a few examples: https://software.rajivprab.com/2018/04/29/myths-programmers-believe-about-cpu-caches/
Volatile gives us the guarantee from the previous section that all volatile fields are consistent and not affected by caching. All writes a thread performs before the access to a volatile field are therefore also visible to any other thread (that subsequently reads this volatile field). Instructions after the access to a volatile field can be reordered and also executed before the access to the volatile field.
Let’s make a running example to understand the implications: Caching and Reordering.
A running example
We use our example of Listing 24 and add two threads to it, which concurrently change values of our shared object Visibility, in Listing 25.
In our example we start the reader first, so it is up and running. Subsequently, we start the changer which changes the values of breakLoop and the other two attributes. Since breakLoop is volatile, it is guaranteed that the reader sees the change of breakLoop made by the changer. The other instructions might be reordered, or the changes are still kept in a register and not yet written to the caches, and therefore the reader can't see these changes.
For 10,000 iterations, a possible result (dump of our console) could be35:
key        no. of occurrences
true - 42  9799
false - 0  70
true - 0   131
Theoretically, when reordering and caching happen at the same time, false - 42 is also possible. And it is not only theoretically possible, you can also see this in practice36.
You may say now: ok, that's not a proof that visibility is the problem here, maybe the reader read the values before the changer changed them. You are right, we agree completely that this example is not a proof. But there is one possibility to gain clarity that visibility (and therefore caching within a CPU register) is the problem, assuming the MESI protocol is implemented correctly - which it is for mature processors like Intel and AMD. And this clarity is easily gained when removing volatile before breakLoop. When executing the sample with volatile in line 3 (having the guarantee that each thread sees the most recent value and therefore the change of the changer thread), the program executes in 2.8 seconds on our machine. When removing volatile in line 3, the program hangs (print the summary after each iteration and you can also determine which iteration hangs). The only option is to exit the JVM by hand.
33 Since this is a good explanation for reordering, we copied it from the FAQ of the JSR133: https://www.cs.umd.edu/~pugh/java/memoryModel/jsr-133-faq.html
34 Good explanations and figures can also be found on the following sites: http://igoro.com/archive/volatile-keyword-in-c-memory-model-explained/ and http://tutorials.jenkov.com/java-concurrency/java-memory-model.html, but be aware that these sources sometimes do not distinguish between registers and caches and therefore introduce some inaccuracy.
35 You can play around by using the Visibility class of our example.
36 If you are interested in this, you have to deal with JCStress (https://openjdk.java.net/projects/code-tools/jcstress/) and we can provide some tips and sources, but that's really for those who want to understand how the last bit in the JVM is accessed ;)
Example:
de.uniba.dsg.concurrency.examples.lowlevel.Visibility contains the source
code of this running example. Try to understand the different steps and try it on
your machine.
Listing 25: Visibility Example with two Threads
 1 class Visibility {
 2
 3     public volatile boolean breakLoop;
 4     public boolean ready = false;
 5     public int number = 0;
 6
 7     public Thread createChanger() {
 8         Runnable changer = new Runnable() {
 9             @Override
10             public void run() {
11                 breakLoop = true; // 1
12                 ready = true;     // 2
13                 number = 42;      // 3
14             }
15         };
16
17         Thread t = new Thread(changer, "Changer");
18         t.start();
19         return t;
20     }
21
22     public Thread createReader() {
23         Runnable reader = new Runnable() {
24             @Override
25             public void run() {
26                 while (!breakLoop) {
27                     // spin waiting
28                 }
29                 boolean tempReady = ready;
30                 int tempNumber = number;
31                 String key = "" + tempReady + " - " + tempNumber;
32                 System.out.println(key);
33             }
34         };
35
36         Thread s = new Thread(reader, "Reader");
37         s.start();
38         return s;
39     }
40
41     public static void main(String[] args) throws InterruptedException {
42         Visibility v = new Visibility();
43         Thread reader = v.createReader();
44         Thread changer = v.createChanger();
45
46         changer.join();
47         reader.join();
48     }
49 }
The explanation for this possible output is quite easy. Since breakLoop in the reader is used frequently, the CPU caches it in a register, or the changer does not write it directly to L1.
Now we look at the ordering and how volatile helps here. Therefore we swap line 11 and line 12, i.e. the commented statements 1 and 2 in Listing 25. We already explained that all writes to non-volatile variables before a volatile variable access are seen by any other thread. Reordering is prohibited here. Therefore our console output, when running the example again, should not include any false - 0 results.
On our machine the following output was generated when using 10,000 iterations again:
key        no. of occurrences
true - 42  9782
true - 0   218
And finally, swapping the statements of line 11 and line 13, i.e. the commented statements 1 and 3, only a single result remains (true - 42):
key        no. of occurrences
true - 42  10000
Summary and Comparison of volatile and synchronized
Volatile guarantees us that attributes are not cached, that volatile accesses are not reordered and that every thread sees the most recent value of an attribute. It does NOT guarantee atomicity, think about an increment on a counter. Use cases for volatile are completion, interruption and status flags, where a concurrent read and write does not lead to problems since, for example, the flag is accessed via a loop and a read of stale data is not problematic.
Synchronized on the other hand gives us the same guarantee as volatile, BUT enables critical sections. So the set of statements within a synchronized block appears to all threads in the system like an atomic action.
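A minimal sketch of this difference (our own illustration): the read-modify-write of ++ on a volatile field can still interleave and lose updates, whereas the synchronized variant turns the increment into an atomic action.

class Counters {
    private volatile int volatileCounter; // visibility only - increments can get lost
    private int syncCounter;              // guarded by the intrinsic lock of this

    public void incrementVolatile() {
        volatileCounter++;                // NOT atomic: read, add and write can interleave
    }

    public synchronized void incrementSynchronized() {
        syncCounter++;                    // atomic with respect to the other synchronized methods
    }
}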
4.1.7 Deadlock
A deadlock is a reciprocal waiting condition, where A waits for a resource B has acquired
and B waits for a resource A has acquired. Deadlocks are also possible with more than two
members when thinking about circular waiting conditions.
As developers, we have to ensure deadlock-free programs. But how do we do that? Let's have a look at the following example and the problems which might occur and are often easily solvable.
We have two nice and friendly bar keepers, who have two different preparation styles for gin and tonic. One gin tonic recipe is with ice and the other without. All bar keepers believe that for a gin and tonic without ice you need to put in gin first and then tonic. For a gin and tonic with ice it is the other way around.
Sounds like a possible deadlock scenario, doesn’t it?
Two guests concurrently order gin and tonics from the two nice bar keepers. There is only a single bottle each of gin and of tonic in the bar. So the first guest - let's call him Max - orders a gin and tonic without ice. Sebastian, one of the bar keepers, takes the gin and also wants to get hold of the tonic, but the bottle isn't there.
WHY? - A case for Sherlock?
At the same time as Max made his order, Sofia, another guest, ordered a gin and tonic with
ice from the other bar keeper called Leonie. So Leonie picked up the bottle of tonic first,
but can’t find the gin bottle since Sebastian holds it in his hand. As Sebastian has the gin
and Leonie has the tonic, neither of the bar keepers is able to finish their drink. Because
both bar keepers do not talk to each other during work, Max and Sofia don’t get a gin tonic
and leave the bar thirsty.
Listing 26: A Deadlock which you don’t wanna experience
class GinTonic {

    private final Object gin = new Object();
    private final Object tonic = new Object();

    public void prepare() {
        synchronized (gin) {
            synchronized (tonic) {
                mixIt();
            }
        }
    }
    // Warning: DON'T do this
    public void prepareWithIce() {
        synchronized (tonic) {
            synchronized (gin) {
                mixItWithIce();
            }
        }
    }
}
As in this sad real world example ( :)), having two resources which are reciprocally acquired and not freed after some time, the whole system ends in a deadlock.
Since there is no check whether a lock has already been acquired, a developer can only ensure deadlock-free programs by checking the call paths of her solution. So our example in Listing 26 is highly deadlock prone since both methods can be invoked concurrently.
If there really is a need for two (or more) distinct lock objects like in our example, make sure that locks are always acquired in the same order, as shown in Listing 27³⁷. That way, the two locks cannot end up distributed among different threads.
Listing 27: Always use the same order for acquiring locks
class GinTonic {

    private final Object gin = new Object();
    private final Object tonic = new Object();

    public void prepare() {
        synchronized (gin) {
            synchronized (tonic) {
                mixIt();
            }
        }
    }

    public void prepareWithIce() {
        synchronized (gin) {
            synchronized (tonic) {
                mixItWithIce();
            }
        }
    }
}
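If you suspect such a situation at runtime, the JDK can help to diagnose it: a thread dump (for example via the jstack tool) reports deadlocked threads, and the same information is available programmatically via the ThreadMXBean. A minimal sketch (our own addition, not part of the example project):

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

class DeadlockDetector {
    public static void printDeadlockedThreads() {
        ThreadMXBean bean = ManagementFactory.getThreadMXBean();
        long[] ids = bean.findDeadlockedThreads(); // null if no deadlock was detected
        if (ids != null) {
            for (ThreadInfo info : bean.getThreadInfo(ids)) {
                System.out.println(info.getThreadName()
                        + " waits for " + info.getLockName()
                        + " held by " + info.getLockOwnerName());
            }
        }
    }
}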
4.2 Interruption
4.2.1 Non-Blocking Operations
So far, we managed the life cycle of a thread with the methods start() to start the execution
of a thread and join() to wait for a thread’s termination. As shown in the previous Listings,
we implemented the run() method with several statements which are concurrently executed.
Each thread will execute the statements in the run() method until its termination.
Usually that is not a problem if the threads execute computations with a low complexity. If
we have a short total runtime (e.g. only a few seconds) and all calculations are performed
without any external dependencies, there is no need for an abnormal termination. But what
if the user realizes that the input data to a long-running operation was incorrect? Or if
we need an interactive application which properly reacts to the user’s intent to terminate it
early? If we look back to our running example, there are no complex statements in the run()
method. However, we need a mechanism for an abnormal termination of a Player because
we want a responsive application which properly reacts to the user’s intention to terminate
the game while it hasn’t finished yet. Up to now, we only have the possibility to stop the
execution by shutting down the JVM, regardless of the results.
37 You might agree that you need really good reasons for such a design!
Fortunately, it is possible to interrupt a thread during its execution. For this, the class
Thread offers the life cycle methods interrupt(), isInterrupted(), and interrupted(). Thread interruption is a collaborative mechanism. It allows a thread A to disturb another
thread B by sending an interruption signal. Each thread owns the so-called interrupted flag.
It is a boolean member of a thread object and is initially false. We can derive the current
value of the flag with the boolean method isInterrupted(). If thread A wants to interrupt
thread B, it can call the interrupt() method on thread B’s object instance. As a result, the
interrupted flag of thread B changes from false to true. For now, that’s enough. Let’s have
a look at a very simple example to get an understanding of the basic aspects (Listing 28).
Listing 28: Simple Interruption Example
 1 public static void main(String[] args) throws InterruptedException {
 2
 3     Runnable r = new Runnable() {
 4         @Override
 5         public void run() {
 6             while (true) {
 7                 // the run method does nothing
 8             }
 9         }
10     };
11     Thread t1 = new Thread(r);
12
13     // start t1 and check for the interrupted flag
14     t1.start();
15     System.out.println("Interrupted?: " + t1.isInterrupted());
16     // Interrupted?: false
17
18     // interrupt t1 and check for the interrupted flag
19     t1.interrupt();
20     System.out.println("Interrupted?: " + t1.isInterrupted());
21     // Interrupted?: true
22
23     // is t1 still running?
24     System.out.println("Is Alive?: " + t1.isAlive()); // Is Alive?: true
25
26     // wait for the termination of Thread t1
27     t1.join();
28 }
As in the previous listings, we implemented the thread’s logic by using the interface Runnable
(Line 3 – 10). The implementation is just a simple while loop with no other statements (Line
6 – 8). After the creation of the thread object, we start the thread in line 14 and check for
the interrupted flag by using the method isInterrupted(). The method returns the current
state of the interrupted flag which evaluates to false.
Currently, there are two threads running: The main thread and thread t1. The main
thread invokes interrupt() in line 19 on the thread (object) t1. After that, we check for the
interrupted flag again. Finally, the main thread waits for the termination of t1 in line 27.
The interrupt signal sent by the main thread changed t1 ’s interrupted-flag successfully to
true. Nevertheless, thread t1 is still alive. An additional side-effect is that the program/JVM
does not terminate anymore. The main thread waits forever in the wait set of t1 for the
termination of t1. Thus, this implementation will never terminate.
How can we change that? We need to check for the state of the interrupted flag at regular
intervals, for example by using a while loop. Hence, the solution could be:
Listing 29: Interuption-responsive Thread
Runnable r = new Runnable() {
    @Override
    public void run() {
        // Thread will terminate if the interrupted flag is true
        while (!Thread.currentThread().isInterrupted()) {
            // Busy waiting
        }
        // After leaving the while loop,
        // the thread terminates as there are no further statements in run
    }
};
In Listing 29 the thread is continuously evaluating the value of its interrupted flag with the
previously introduced method isInterrupted(). The static method Thread.currentThread()
returns a reference to the currently executing thread on the JVM. Because isInterrupted()
initially returns false, the thread continues the execution as long as there is no change. If the
interrupted flag changes to true because of an interrupt from another thread, the condition
of the while-loop does not hold anymore. The loop terminates and as a consequence, the
executing thread of the Runnable as well. Of course you don’t have to use while-loops but
can also use if-statements at several points in the thread’s logic to achieve responsiveness.
As a rule of thumb:
The interruption signal from thread A to thread B does not necessarily mean the
immediate termination of thread B. It is just a way of politely asking to try an early
termination. The behaviour of the interrupted thread depends on its implementation,
meaning the statements in the run() method. A thread must support its interruption
by providing an implementation that considers the current state of the interrupted flag.
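Putting both sides together, a minimal sketch (names are ours) of such a cooperative shutdown: the worker polls its interrupted flag, the main thread requests the termination via interrupt() and then joins.

public class CooperativeShutdown {
    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                // do a small, non-blocking chunk of work per iteration
            }
            // flag is true -> run() returns and the thread terminates
        }, "Worker");

        worker.start();
        Thread.sleep(50);   // let the worker run for a moment
        worker.interrupt(); // politely ask the worker to stop
        worker.join();      // wait until it has actually terminated
    }
}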
The method interrupted() sounds similar to interrupt() but works differently. Firstly, the method interrupt() is an instance method and can be called on an instance of the class Thread. The method interrupted(), in contrast, is a static method of the class Thread (called as Thread.interrupted()) and does not need an instance; in the sample above we accessed the instance method isInterrupted() via Thread.currentThread(). But there is an important difference: the method interrupted() returns the current state of the interrupted flag and subsequently resets the flag to false, while isInterrupted() only reads the value of the flag without changing it.
Listing 30 is quite similar to the previous listing. However, we replaced the Runnable's implementation by iterating via a for-loop. Within the for-loop, there might be some complex operations with a long runtime (omitted in line 7). To achieve responsiveness, we add a check for the interrupted flag by using the method interrupted() inside the thread's logic.
Listing 30: Thread Interruption with interrupted()
 1 public static void main(String[] args) throws InterruptedException {
 2
 3     Runnable r = new Runnable() {
 4         @Override
 5         public void run() {
 6             for (int i = 0; i < 1_000_000; i++) {
 7                 // some complex operations
 8                 if (Thread.interrupted()) {
 9                     // there was an interrupt
10                     System.out.println("Interrupted after " + i + "!");
11                     // Interrupted after 1332 iterations!
12                     return;
13                 }
14             }
15         }
16     };
17     Thread t2 = new Thread(r);
18
19     // start t2 and check for the interrupted flag
20     t2.start();
21     System.out.println("Interrupted: " + t2.isInterrupted());
22     // Interrupted?: false
23
24     // interrupt t2 and check for the interrupted flag
25     t2.interrupt();
26     System.out.println("Interrupted: " + t2.isInterrupted());
27     // Interrupted?: false
28
29     t2.join(); // wait for termination
30 }
Again, the flag is initially false (line 21). In line 25, the main thread signals t2 the interrupt.
The interrupted flag of t2 is now true. As a consequence, t2 enters the if-statement in line
8–12 and prints the current iteration (line 10), because the condition holds in line 8. Finally,
it returns (line 12) and terminates. However, the second check for the interrupted flag also
returns false in line 26. The reason for that is due to the interrupted() method. It returns
the interrupted flag of the thread and clears the flag immediately! In the end, t2 was
stopped after 1332 iterations. If we execute this program multiple times, we would always
get different results. Can you explain why? What is the range of iterations we can expect?
Discuss the questions with your fellow students.
In this chapter we introduced the basic mechanisms of interruption. We presented two simple ways to make threads responsive to interruptions from outside. Certainly, the method Thread.interrupted() should be used wisely. Often, it can be replaced with the isInterrupted() approach from Listing 29 to achieve the same behavior. In general, there are multiple ways
to make a multi-threaded application responsive to unexpected interactions. Usually, there
is a master thread that is in charge of managing several worker threads. If a user wants
to cancel a complex calculation, the master thread receives the interrupt and p