Java Concurrency

Revision as of 15:14, 3 October 2012 by Suhridk

Concurrent Programming

  • The name given to programming notations and techniques for expressing potential parallelism and solving the resulting communication and synchronization problems.
  • Why do we need it? To allow the expression of potential parallelism so that more than one CPU can be used to solve the problem.
  • To model the parallelism in the real world.
  • The alternative is difficult: the programmer must construct a sequential program as a cyclic execution of tasks to handle the various concurrent activities.
  • This makes programming difficult and introduces structures which are irrelevant to the problem at hand.
  • With such a sequential program, parallel execution on multiple CPUs is difficult.
  • A concurrent program is a collection of autonomous sequential processes executing logically in parallel.
  • Each process has a single thread of control.
  • The actual implementation of this collection of processes can be via:
    • Multiprogramming: Multiple processes multiplex their executions on a single processor.
    • Multiprocessing: Multiple processes multiplex their executions on a multiprocessor system where there is access to shared memory.
    • Distributed processing: Multiple processes multiplex their executions on several processors where they do not share memory.

Problems with concurrent programming

  • Deadlock is a condition where each concurrent activity is waiting for another to perform an operation.
    • Four conditions required for a deadlock:
    • Mutual exclusion : only one concurrent activity can use a resource at once.
    • Hold and wait: there must exist concurrent activities that are holding resources while waiting for other resources to be acquired.
    • No preemption: a resource can only be released voluntarily by a concurrent activity; the locks acquired by a concurrent activity cannot be forcibly taken away from it by another activity.
    • Circular wait: a circular chain of concurrent activities must exist such that each activity holds resources (has locked the objects) that are being requested by the next activity in the chain.
  • Dealing with deadlocks:
    • Prevention: Ensure that at least one of the four conditions does not hold.
    • Avoidance: If the resource-usage pattern is known, the deadlock conditions may exist, but the system dynamically examines resource allocation to ensure that a deadlock state is never reached.
    • Recovery: Take some action once system is in deadlock state.
  • Starvation is a condition where one or more concurrent activities are continuously denied resources as a result of others.
  • Interference may occur when more than one concurrent activity attempts to update shared data.
  • Livelock: A livelock is similar to a deadlock, except that the states of the processes involved constantly change with regard to one another, with none progressing.
    • A real-world example of livelock occurs when two people meet in a narrow corridor, and each tries to be polite by moving aside to let the other pass, but they end up swaying from side to side without making any progress because they both repeatedly move the same way at the same time.
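Breaking the circular-wait condition is a common prevention strategy: if every activity acquires its locks in the same global order, no cycle of "hold one, wait for the other" can form. A minimal sketch (the class and method names here are illustrative, not from the source):

```java
// Illustrative sketch of deadlock prevention by breaking circular wait:
// locks are always acquired in a fixed global order (here, by account id).
public class OrderedLocking {
    static class Account {
        final int id;
        int balance;
        Account(int id, int balance) { this.id = id; this.balance = balance; }
    }

    static void transfer(Account from, Account to, int amount) {
        Account first  = from.id < to.id ? from : to;   // lower id locked first
        Account second = (first == from) ? to : from;
        synchronized (first) {
            synchronized (second) {
                from.balance -= amount;
                to.balance += amount;
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        final Account a = new Account(1, 100), b = new Account(2, 100);
        // Opposite transfer directions could deadlock under naive lock ordering.
        Thread t1 = new Thread(() -> { for (int i = 0; i < 1000; i++) transfer(a, b, 1); });
        Thread t2 = new Thread(() -> { for (int i = 0; i < 1000; i++) transfer(b, a, 1); });
        t1.start(); t2.start();
        t1.join();  t2.join();
        System.out.println(a.balance + " " + b.balance);   // prints 100 100
    }
}
```

With naive ordering, t1 could lock a while t2 locks b, each then waiting forever for the other's lock; the fixed global order makes that cycle impossible.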

Real time systems

  • A real-time system is any system which has to respond to a stimulus within a finite and specified period.
  • Correctness depends on both the result and the time at which it is delivered.
  • Failure to respond is as bad as the wrong response.
  • Types of real time systems
    • Hard real-time — systems where it is absolutely imperative that responses occur within the required deadline. E.g. Flight control systems.
    • Soft real-time — systems where deadlines are important but which will still function correctly if deadlines are occasionally missed. E.g. Data acquisition system.
    • Firm real-time — systems which are soft real-time but in which there is no benefit from late delivery of service.

Characteristics of RTS

  • Concurrent control of separate system components.
  • Facilities to interact with special purpose hardware.
  • Extreme reliability and safety.
  • Guaranteed response times.

Java Concurrency

  • Java supports threads.
  • In the native model, each Java thread maps to a single OS thread.
  • In the green thread model, threads are invisible to the OS. The thread library handles all the threading.
  • There are various models in which concurrency can be integrated with OOP:
    • Asynchronous method calls
    • Early returns from methods.
    • Futures
    • Active Objects
  • Java adopts the "active object" model for concurrency.
  • What are active objects? From [1]:
  • Active objects are different from passive objects (normal objects) because passive objects re-use the execution context of other objects who call methods on them. To make the distinction clearer, consider a normal Java object. When a second object calls a method on it, that method executes inside the same JVM process and thread as the caller method. Many times the object being called will have some concurrency requirements - access to that object may not be thread-safe so it must be serialized, or perhaps internally the called object serializes access to some data. On the other hand, when a method is called on an active object, control returns immediately to the caller, and the active object executes the method in some execution context that it manages. The active object may keep an internal worker thread (or pool of threads) to perform execution. A more advanced active object may delegate execution to a different process or a different computer.
  • Active objects, by definition, will execute concurrently with other active objects. They encapsulate a thread.
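A minimal active object can be sketched with an internal worker thread draining a queue of requests. This is an illustrative sketch under stated assumptions, not a full active-object framework; all class and method names are invented:

```java
// Illustrative active-object sketch: submit() returns to the caller
// immediately, and an encapsulated worker thread executes the requests.
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class ActiveObject {
    private final BlockingQueue<Runnable> mailbox = new LinkedBlockingQueue<Runnable>();
    private final Thread worker = new Thread(() -> {
        try {
            while (true) {
                mailbox.take().run();   // execute requests one at a time
            }
        } catch (InterruptedException e) {
            // interrupted: shut the worker down
        }
    });

    public ActiveObject() { worker.start(); }

    // Control returns at once; execution happens in the worker's context.
    public void submit(Runnable request) { mailbox.add(request); }

    public void shutdown() { worker.interrupt(); }
}
```

Because all requests run on the single worker thread, the object's internal state is accessed serially even when many callers submit concurrently.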

Communication and Synchronization

  • Approaches are broadly classified as shared variable or message passing.
  • A monitor is a popular concurrency model.
  • A monitor encapsulates a shared resource and provides a procedural/functional interface to that resource.
  • In essence, it is rather like an object except for the important property that the procedure/function calls are executed atomically with respect to each other.
  • Therefore a monitor is essentially an object where each of its operation executes in mutual exclusion.
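A minimal monitor sketched in Java: declaring every operation synchronized makes calls on the encapsulated resource execute in mutual exclusion (the class name is illustrative):

```java
// A minimal monitor: the counter is the encapsulated shared resource, and
// each operation runs in mutual exclusion with the others.
public class CounterMonitor {
    private int count = 0;

    public synchronized void increment() {
        count++;   // safe: only one thread executes this at a time
    }

    public synchronized int get() {
        return count;
    }
}
```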

Condition Synchronization

  • Mutual exclusion alone is not adequate for all forms of cooperation.
  • Consider, for example, a printer server that cannot print a file until other threads (clients) have told it what to print. A print-list resource is often used to facilitate communication between the server and the clients. Mutual exclusion is needed to ensure that the print-list remains consistent when accessed by multiple threads. However, the server must also wait when the print-list is empty; clients might need to wait when the list is full.
  • This form of synchronization is called condition synchronization; monitors support it through condition variables.
  • For every condition for which a thread wishes to wait, there is usually an associated condition variable. In the print-list resource example, two conditions listNotEmpty and listNotFull might be declared. Condition variables themselves may be considered as objects (or abstract data types) with two available operations:
    • Wait – this operation will unconditionally suspend the execution of the calling thread and place it on a queue of waiting threads associated with the condition variable.

For example, when the printer server waits on the listNotEmpty condition, it is immediately suspended.

    • Notify (or Signal) – this operation will allow the first thread suspended on the queue associated with the condition variable to continue its execution. For example, if the printer server removes an item from the print-list so that the list is now not full, it will call the notify operation on the listNotFull condition variable, thereby waking up one client thread (if any are waiting).
  • A condition variable queue is usually ordered either in a first-in-first-out manner or according to the priority of the waiting threads. Some monitors also support a third operation called NotifyAll.
  • wait() : Causes the current thread to wait until another thread invokes the notify() method or the notifyAll() method for this object.
    • The current thread must own this object's monitor. The thread releases ownership of this monitor and waits until another thread notifies threads waiting on this object's monitor to wake up either through a call to the notify method or the notifyAll method. The thread then waits until it can re-obtain ownership of the monitor and resumes execution.
  • wait(long millis) : The current thread must own this object's monitor. The thread releases ownership of this monitor and waits until either of the following two conditions has occurred:
    • Another thread notifies threads waiting on this object's monitor to wake up either through a call to the notify method or the notifyAll method.
    • The specified timeout period (in milliseconds) has elapsed.
  • Important: The thread then waits until it can re-obtain ownership of the monitor and resumes execution.
  • notify() : Wakes up a single thread that is waiting on this object's monitor. If any threads are waiting on this object, one of them is chosen to be awakened. The choice is arbitrary and occurs at the discretion of the implementation. A thread waits on an object's monitor by calling one of the wait methods.
    • The awakened thread will not be able to proceed until the current thread relinquishes the lock on this object. The awakened thread will compete in the usual manner with any other threads that might be actively competing to synchronize on this object; for example, the awakened thread enjoys no reliable privilege or disadvantage in being the next thread to lock this object.
    • This method should only be called by a thread that is the owner of this object's monitor.
  • notifyAll() : Wakes up all threads that are waiting on this object's monitor. A thread waits on an object's monitor by calling one of the wait methods.
    • The awakened threads will not be able to proceed until the current thread relinquishes the lock on this object. The awakened threads will compete in the usual manner with any other threads that might be actively competing to synchronize on this object; for example, the awakened threads enjoy no reliable privilege or disadvantage in being the next thread to lock this object.
  • In Java, every object is associated with a mutual exclusion lock.
  • When the synchronized keyword is used, lock association happens automatically.
  • Threads can wait and notify on a single anonymous condition variable.
  • There are no explicit condition variables supported by the Java language.
  • When a thread wakes after a call to wait(), it cannot assume the condition is still true!
  • Because when it wakes up, it has to compete for the lock with other threads, so another thread could get the lock and change the state of the program, so the condition is no longer true.
  • So threads need to re-evaluate their guards (typically done using a while loop).
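The print-list resource described above can be sketched as a Java monitor. Since Java has only the single anonymous condition variable per object, the listNotEmpty and listNotFull conditions must share it, notifyAll() is used, and every guard is re-checked in a while loop (class and method names are illustrative):

```java
// Illustrative print-list monitor: clients put(), the server take()s.
// Guards are re-checked in while loops because a woken thread cannot
// assume its condition still holds.
import java.util.ArrayDeque;
import java.util.Deque;

public class PrintList {
    private final Deque<String> jobs = new ArrayDeque<String>();
    private final int capacity;

    public PrintList(int capacity) { this.capacity = capacity; }

    // Clients wait while the list is full (the "listNotFull" condition).
    public synchronized void put(String job) throws InterruptedException {
        while (jobs.size() == capacity) wait();
        jobs.addLast(job);
        notifyAll();   // may wake the server waiting on "listNotEmpty"
    }

    // The server waits while the list is empty (the "listNotEmpty" condition).
    public synchronized String take() throws InterruptedException {
        while (jobs.isEmpty()) wait();
        String job = jobs.removeFirst();
        notifyAll();   // may wake clients waiting on "listNotFull"
        return job;
    }
}
```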


  • Consider a class Foo and multiple threads accessing Foo's code.
class Foo {
 private static Data staticData;    // one copy, shared by all threads and instances
 private Data instanceData;         // one copy per Foo instance, shared by threads
 private ThreadLocal<Data> tLocal = new ThreadLocal<Data>();  // one copy per thread

 public void go() {
   tLocal.set(new Data());
   Data d = tLocal.get();   // returns the calling thread's own copy
 }
}
  • Now each object of Foo, say foo1, foo2 will have its own copy of instanceData and a shared copy of static data.
  • But what if many threads are accessing the same object say foo1 and we need to store some data of Foo specific to each Thread. Both instance data and static data cannot be used, since they are shared.
  • So use a ThreadLocal variable. Threads running foo's methods will set() and get() on the ThreadLocal object.
  • Note that if T1 does a set and T2 does a get, T2 will get null data because the data is thread-local to T1.
  • So if the ThreadLocal variable is an instance variable of Foo, then every instance of Foo will have its own copy of ThreadLocal, which means thread local data can be stored per instance of Foo.
  • If we don't care about instance specific data, then Foo can have a static ThreadLocal object which will still allow each Thread to maintain its own copy of Data.
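The static-ThreadLocal case can be sketched as follows (class and method names are illustrative; withInitial requires Java 8):

```java
// Illustrative sketch: a static ThreadLocal gives each thread its own counter.
public class ThreadLocalDemo {
    // One Integer per thread; withInitial supplies each thread's starting value.
    private static final ThreadLocal<Integer> counter =
            ThreadLocal.withInitial(() -> 0);

    static int incrementAndGet() {
        counter.set(counter.get() + 1);
        return counter.get();
    }

    public static void main(String[] args) throws InterruptedException {
        incrementAndGet();
        incrementAndGet();
        System.out.println("main: " + counter.get());             // main: 2
        Thread t = new Thread(() -> {
            // A fresh thread sees its own copy, starting at the initial value.
            System.out.println("worker: " + incrementAndGet());   // worker: 1
        });
        t.start();
        t.join();
    }
}
```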

Synchronized Blocks

  • Used in its full generality, the synchronized block can undermine one of the advantages of monitor-like mechanisms, that of encapsulating the synchronization constraints associated with an object into a single place in the program.
  • This is because it is not possible to understand the synchronization associated with a particular object by just looking at the object itself when other objects can name that object in a synchronized statement.

Java Memory Model

  • If programmers ensure that all shared variables are only accessed by threads that hold appropriate monitor locks, they need not be concerned with: multiprocessor implementations, compiler optimisations, out-of-order instruction executions etc.
  • However, synchronization can be expensive.
  • See Double Checked Locking for a good explanation.
  • The double-checked locking algorithm illustrates that the synchronized method (and statement) in Java serves a dual purpose.
  • Not only do they enable mutual exclusive access to a shared resource but they also ensure that data written by one thread (the writer) becomes visible to another thread (the reader).
  • The visibility of data written by the writer is only guaranteed when it releases a lock that is subsequently acquired by the reader.
  • Consider this code (from Effective Java)
public class StopThread {
	private static boolean stopRequested;

	public static void main(String[] args) throws InterruptedException {
		Thread backgroundThread = new Thread(new Runnable() {
			public void run() {
				int i = 0;
				while (!stopRequested)
					i++;
			}
		});
		backgroundThread.start();
		Thread.sleep(1000);
		System.out.println("Requesting BGThread to Stop");
		stopRequested = true;
	}
}
  • Now, in the absence of synchronization, the background thread is not guaranteed to see the update of the stopRequested variable made by the main thread.
  • This can be fixed by synchronizing read and write access to the stopRequested flag.
  • So here synchronization is being used for its communication effects, rather than mutex because the code is atomic anyway.
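The synchronized fix routes both the write and the read of the flag through synchronized methods, guaranteeing visibility. A sketch (the class is renamed StopThreadSync here, and the accessors are given package access, purely for illustration):

```java
// Illustrative sketch of the synchronized fix: the flag is only ever
// written and read through synchronized methods, so the update is
// guaranteed to become visible to the background thread.
public class StopThreadSync {
    private static boolean stopRequested;

    static synchronized void requestStop() { stopRequested = true; }

    static synchronized boolean stopRequested() { return stopRequested; }

    public static void main(String[] args) throws InterruptedException {
        Thread backgroundThread = new Thread(new Runnable() {
            public void run() {
                int i = 0;
                while (!stopRequested()) i++;   // reads through the lock
            }
        });
        backgroundThread.start();
        Thread.sleep(1000);
        System.out.println("Requesting BGThread to Stop");
        requestStop();                          // writes through the lock
    }
}
```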


  • The other alternative to synchronization in the above problem is to mark the flag as volatile.
  • While the volatile modifier performs no mutual exclusion, it guarantees that any thread that reads the field will see the most recently written value.
  • The JMM requires that a volatile field not be held in local memory and that all reads and writes go straight to main memory. Furthermore, operations on volatile fields must be performed in exactly the order that a thread requests. A further rule requires that volatile double or long variables must be read and written atomically.
  • Note the below code has a race condition even though the variable is volatile. The other thread is not guaranteed to see the updated value of nextSerialNumber, because the ++ operation is not atomic.
private static volatile int nextSerialNumber = 0;

public static int generateNextSerialNumber() {
     return nextSerialNumber++;   // read-modify-write: not atomic, despite volatile
}
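One common fix, sketched here with java.util.concurrent.atomic.AtomicInteger, whose getAndIncrement() performs the read-modify-write as a single atomic operation (the wrapper class name is illustrative):

```java
// Illustrative fix: AtomicInteger makes the increment a single atomic step,
// so concurrent callers can never observe or produce a duplicate number.
import java.util.concurrent.atomic.AtomicInteger;

public class SerialNumber {
    private static final AtomicInteger nextSerialNumber = new AtomicInteger(0);

    public static int generateNextSerialNumber() {
        return nextSerialNumber.getAndIncrement();   // atomic read-modify-write
    }
}
```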

Thread Priorities

  • Priorities are only used as guides to the scheduler.
  • Thread can give up CPU using yield() - which places the thread at the back of the run queue for its priority level.
  • Priority can be set using the setPriority(int priority) method.
  • Threading model is weak for RT Systems:
    • No guarantee that highest priority thread will run ahead of others.
    • Equal priority threads may or may not be time sliced.
    • If native threads are used, different java thread priorities may be mapped to the same OS thread priority.
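A small sketch of setPriority() and yield() (illustrative; as noted above, priorities are only hints to the scheduler, so no execution order is guaranteed):

```java
// Illustrative sketch: set a priority hint and politely offer the CPU.
public class PrioritySketch {
    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(new Runnable() {
            public void run() {
                Thread.yield();   // places the thread at the back of its run queue
                System.out.println("worker finished");
            }
        });
        worker.setPriority(Thread.MAX_PRIORITY);   // a hint, not a guarantee
        worker.start();
        worker.join();
    }
}
```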

Delaying Threads

  • The time over-run associated with both relative and absolute delays is called the local drift, and it cannot be eliminated.
  • Causes of local drift:
    • A difference in granularity between the sleep time specified for the thread and the system clock.
    • Interrupts being disabled, which delays the wake-up.
    • The thread becoming runnable after sleeping but not immediately executing.
  • Cumulative drift: local drifts adding up.
  • It is possible, with absolute delays, to eliminate the cumulative drift that could arise if local drifts were allowed to superimpose.
    • e.g. a relative delay such as Thread.sleep(1000) can produce a local drift on each call, and these drifts can add up. With an absolute delay (e.g. wake the thread up at a fixed time such as 15 Dec 2013), this accumulation is avoided.
  • Timeout on waiting. Threads can wait() indefinitely for a notify. However wait() can also be specified with a timeout.
  • It is not possible to know whether the thread woke up because the timeout expired or because another thread called notify.
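Eliminating cumulative drift with absolute deadlines can be sketched as follows: each period's sleep target is computed from a fixed start time, so one period's local drift is not carried into the next (the class name and period value are illustrative):

```java
// Illustrative sketch: absolute deadlines prevent cumulative drift.
// Each deadline advances by a fixed step from the original start time,
// so a late wake-up in one period does not push later deadlines back.
public class PeriodicTask {
    public static void main(String[] args) throws InterruptedException {
        final long periodMs = 100;
        long next = System.currentTimeMillis() + periodMs;
        for (int tick = 0; tick < 5; tick++) {
            long delay = next - System.currentTimeMillis();
            if (delay > 0) Thread.sleep(delay);   // sleep until the absolute deadline
            System.out.println("tick " + tick);
            next += periodMs;                     // fixed step, not "now + period"
        }
    }
}
```

By contrast, calling Thread.sleep(periodMs) at the top of each iteration would add that iteration's over-run to every subsequent period.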

Thread State


Thread Groups

  • Thread groups allow collections of threads to be grouped together and manipulated as a group rather than as individuals.
  • Thread groups can contain other groups.
  • Interrupt method can be used to interrupt all members of the Thread group.
  • It also has an uncaught-exception hook that pre-Java 5 code could use.

Strengths of Java Concurrency Model

  • Simple and supported by the language.
    • So, many of the errors that arise when using an OS interface for concurrency do not exist in Java.
    • Language syntax and strong type checking give protection, e.g. you can't forget to close a synchronized block.
  • Portability is enhanced because concurrency model is OS independent.

Weaknesses of Java Concurrency Model

  • Lack of language support for condition variables.
  • Poor support for absolute time outs and time outs on waiting.
  • No preference given for threads waking up from notify over threads waiting to gain a lock for the first time.
  • Poor support for priorities.
  • Synchronized code should be kept as short as possible.
  • Nested monitor calls should be avoided because the outer lock is not released when the inner monitor waits.
  • Not obvious when a nested call is being made.
    • Methods not declared as synchronized can still contain a synchronized block.
    • Non-synchronized superclass methods can be overridden by synchronized subclass methods.
    • Interface methods cannot be marked as synchronized.

Bloch's Thread Safety Levels

  • How a class behaves when its instances or static methods are accessed concurrently is an important part of the class's contract. This behavior needs to be documented.
  • To enable safe concurrent use, a class must clearly document what level of thread safety it supports.
  • The following is a list:
    • Immutable : Instances of this class are constants. e.g. String, Long.
    • Unconditionally Thread safe: Instances are mutable, but the class has sufficient internal synchronization that its instances can be used concurrently without the need for external synchronization. e.g. StringBuffer.
    • Conditionally thread safe: Like unconditionally thread safe, but some methods require external synchronization for safe concurrent use. e.g. collections returned by the Collections.synchronized wrappers.
    • Thread compatible / Not thread safe: Instances are mutable; to use them concurrently, clients must externally synchronize each method invocation. e.g. ArrayList, HashMap.
    • Thread hostile: Not safe for concurrent use even if methods are surrounded by external synchronization. Usually happens when static data is modified without synchronization or accessing external data.
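The conditionally-thread-safe case can be sketched with Collections.synchronizedList: individual method calls are internally synchronized, but iteration must be externally synchronized on the list object, as the Collections.synchronizedList documentation requires (the wrapper class and method names here are illustrative):

```java
// Illustrative sketch of conditional thread safety: individual add() calls
// are safe, but iterating the synchronized wrapper requires holding its lock.
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class ConditionalSafety {
    public static int sum(List<Integer> syncList) {
        int total = 0;
        synchronized (syncList) {   // external synchronization for iteration
            for (int n : syncList) total += n;
        }
        return total;
    }

    public static void main(String[] args) {
        List<Integer> list = Collections.synchronizedList(new ArrayList<Integer>());
        list.add(1);                 // individual calls are internally synchronized
        list.add(2);
        list.add(3);
        System.out.println(sum(list));   // prints 6
    }
}
```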

Communication Paradigms

Semaphores

  • A semaphore is a technique for coordinating or synchronizing activities in which multiple processes compete for the same resource.
  • A useful way to think of a semaphore is as a record of how many units of a particular resource are available, coupled with operations to safely (i.e. without race conditions) adjust that record as units are required or become free, and if necessary wait until a unit of the resource becomes available.
  • Semaphores which allow an arbitrary resource count are called counting semaphores, while semaphores which are restricted to the values 0 and 1 (or locked/unlocked, unavailable/available) are called binary semaphores (the same functionality that mutexes have).
public class Semaphore {
	protected int value;

	public Semaphore(int initial) {
		value = initial;
	}

	public synchronized void acquire() throws InterruptedException {
		while(value == 0) wait();
		value--;
	}

	public synchronized void release() {
		value++;
		notify();
	}
}

  • Now this semaphore can be used to control access to a certain resource by a specified number of threads.
  • E.g. a tunnel (the resource) that only a certain number of cars (threads) can use at any given time.
  • The tunnel can have a counting semaphore initialized to the maximum number of cars it can allow.
  • Each car then has to call entryProtocol(), which calls semaphore.acquire(), make the journey, and then call exitProtocol(), which calls semaphore.release().
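The tunnel example can be sketched with the standard java.util.concurrent.Semaphore (the Tunnel class and its method names are invented for this example):

```java
// Illustrative sketch of the tunnel: a counting semaphore initialized to the
// maximum number of cars allowed inside at once.
import java.util.concurrent.Semaphore;

public class Tunnel {
    private final Semaphore slots;   // one permit per car allowed inside

    public Tunnel(int maxCars) { slots = new Semaphore(maxCars); }

    public void entryProtocol() throws InterruptedException {
        slots.acquire();   // blocks while the tunnel is full
    }

    public void exitProtocol() {
        slots.release();   // frees a slot for a waiting car
    }

    public int freeSlots() { return slots.availablePermits(); }
}
```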

Signals

  • Often a thread needs to wait for a signal from another thread before it can proceed. There are various types of signals. A persistent signal (sometimes called a latch or a gate ) is a signal that remains set until a single thread has received it.
  • Example:
package net.suhrid;

public class PersistentSignal {
	private boolean arrived = false;

	public synchronized void waitS() throws InterruptedException {
		while(!arrived) wait();
		arrived = false;   // consume the signal
	}

	public synchronized void signal() {
		arrived = true;    // the signal persists until one waiter consumes it
		notify();
	}

	public static void main(String[] args) {
		final PersistentSignal ps = new PersistentSignal();
		Thread t1 = new Thread() {
			public void run() {
				try {
					System.out.println("T1 computing part 1");
					Thread.sleep(500);
					System.out.println("Done Part 1 .. signalling");
					ps.signal();
					System.out.println("T1 computing part 2");
				} catch(InterruptedException e) {
					Thread.currentThread().interrupt();
				}
			}
		};
		Thread t2 = new Thread() {
			public void run() {
				try {
					System.out.println(" T2 Waiting for a signal");
					ps.waitS();
				} catch (InterruptedException e) {
					Thread.currentThread().interrupt();
					return;
				}
				System.out.println(" T2 Received a signal from t1");
			}
		};
		t1.start();
		t2.start();
	}
}
  • A transient signal (or pulse ) is a signal that releases one or more waiting threads but is lost if no threads are waiting.
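A transient signal can be sketched by omitting the arrived flag from the persistent-signal class: notifyAll() releases only the threads waiting at that moment, and a pulse sent while no one is waiting is simply lost (the class name is illustrative):

```java
// Illustrative transient signal (pulse): there is no flag, so only threads
// already blocked in wait() are released; an unobserved pulse is lost.
public class TransientSignal {
    public synchronized void waitS() throws InterruptedException {
        wait();         // no guard flag: a pulse sent earlier is not remembered
    }

    public synchronized void pulse() {
        notifyAll();    // releases only the threads currently waiting
    }
}
```

Note the contrast with PersistentSignal above: there the arrived flag records the signal until a waiter consumes it, whereas here nothing is recorded.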