
If you are a Java developer, you are probably aware of the ConcurrentModificationException that is thrown when you modify a Collection object while using an iterator to go through its elements. Java 1.5 introduced the java.util.concurrent package with Collection implementations that allow you to modify your collection object at runtime. ConcurrentHashMap is similar to HashMap but works fine when you try to modify your map at runtime. Let's run a sample program to explore this:

ConcurrentHashMapExample.java

```java
package com.journaldev.util;

import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ConcurrentHashMapExample {

    public static void main(String[] args) {

        // ConcurrentHashMap
        Map<String, String> myMap = new ConcurrentHashMap<String, String>();
        myMap.put("1", "1");
        myMap.put("2", "1");
        myMap.put("3", "1");
        myMap.put("4", "1");
        myMap.put("5", "1");
        myMap.put("6", "1");
        System.out.println("ConcurrentHashMap before iterator: " + myMap);
        Iterator<String> it = myMap.keySet().iterator();

        while (it.hasNext()) {
            String key = it.next();
            if (key.equals("3")) myMap.put(key + "new", "new3");
        }
        System.out.println("ConcurrentHashMap after iterator: " + myMap);

        // HashMap
        myMap = new HashMap<String, String>();
        myMap.put("1", "1");
        myMap.put("2", "1");
        myMap.put("3", "1");
        myMap.put("4", "1");
        myMap.put("5", "1");
        myMap.put("6", "1");
        System.out.println("HashMap before iterator: " + myMap);
        Iterator<String> it1 = myMap.keySet().iterator();

        while (it1.hasNext()) {
            String key = it1.next();
            if (key.equals("3")) myMap.put(key + "new", "new3");
        }
        System.out.println("HashMap after iterator: " + myMap);
    }
}
```

When we run the above class, the output is:

```
ConcurrentHashMap before iterator: {1=1, 5=1, 6=1, 3=1, 4=1, 2=1}
ConcurrentHashMap after iterator: {1=1, 3new=new3, 5=1, 6=1, 3=1, 4=1, 2=1}
HashMap before iterator: {3=1, 2=1, 1=1, 6=1, 5=1, 4=1}
Exception in thread "main" java.util.ConcurrentModificationException
    at java.util.HashMap$HashIterator.nextEntry(HashMap.java:793)
    at java.util.HashMap$KeyIterator.next(HashMap.java:828)
    at com.test.ConcurrentHashMapExample.main(ConcurrentHashMapExample.java:44)
```

Looking at the output, it's clear that ConcurrentHashMap takes care of the new entry in the map, whereas HashMap throws ConcurrentModificationException. Let's look at the exception stack trace closely. The statement that threw the exception is:

```java
String key = it1.next();
```

This means that the new entry got inserted into the HashMap, but the iterator is failing. Iterators on Collection objects are fail-fast, i.e. any modification of the structure or the number of entries in the collection object will trigger this exception from the iterator. So how does the iterator know that there has been a modification to the HashMap? We took the set of keys from the HashMap once and are then iterating over it. The HashMap contains a variable that counts the number of structural modifications, and the iterator checks it each time you call its next() function to get the next entry:

HashMap.java

```java
/**
 * The number of times this HashMap has been structurally modified.
 * Structural modifications are those that change the number of mappings in
 * the HashMap or otherwise modify its internal structure (e.g.,
 * rehash). This field is used to make iterators on Collection-views of
 * the HashMap fail-fast. (See ConcurrentModificationException).
 */
transient volatile int modCount;
```

Now, to prove the above point, let's change the code a little bit to come out of the iterator loop when we insert the new entry.
All we need to do is add a break statement after the put call:

```java
if (key.equals("3")) {
    myMap.put(key + "new", "new3");
    break;
}
```

Now execute the modified code and the output will be:

```
ConcurrentHashMap before iterator: {1=1, 5=1, 6=1, 3=1, 4=1, 2=1}
ConcurrentHashMap after iterator: {1=1, 3new=new3, 5=1, 6=1, 3=1, 4=1, 2=1}
HashMap before iterator: {3=1, 2=1, 1=1, 6=1, 5=1, 4=1}
HashMap after iterator: {3=1, 2=1, 1=1, 3new=new3, 6=1, 5=1, 4=1}
```

Finally, what if we don't add a new entry but update the existing key-value pair? Will it throw the exception? Change the code in the original program and check for yourself:

```java
// myMap.put(key + "new", "new3");
myMap.put(key, "new3");
```

If you get confused (or shocked) by the output, comment below and I will be happy to explain it further.
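For readers who would rather check before running it themselves, here is a minimal sketch (class and method names are ours, not from the article) of the update-an-existing-key variant. In the JDK source, put() on an existing key overwrites the value and returns before modCount is incremented, so the fail-fast iterator never notices:

```java
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

public class ExistingKeyUpdateExample {

    static String updateWhileIterating() {
        Map<String, String> map = new HashMap<String, String>();
        map.put("1", "1");
        map.put("2", "1");
        map.put("3", "1");

        Iterator<String> it = map.keySet().iterator();
        while (it.hasNext()) {
            String key = it.next();
            // Overwriting the value of an existing key is NOT a structural
            // modification: put() returns before modCount is incremented,
            // so no ConcurrentModificationException is thrown here.
            if (key.equals("3")) map.put(key, "new3");
        }
        return map.get("3");
    }

    public static void main(String[] args) {
        System.out.println("No exception, value is now: " + updateWhileIterating());
    }
}
```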

As you may have seen from my past performance-related articles and HashMap case studies, Java thread safety problems can bring down your Java EE application and the Java EE container fairly easily. One of the most common problems I have observed when troubleshooting Java EE performance problems is infinite looping triggered by the non-thread-safe HashMap get() and put() operations. This problem has been known for several years, but recent production problems have forced me to revisit this issue one more time.

This article will revisit this classic thread safety problem and demonstrate, using a simple Java program, the risk associated with wrong usage of the plain old java.util.HashMap data structure in a concurrent-threads context. This proof-of-concept exercise will attempt to achieve the following 3 goals:

- Revisit and compare the Java program performance level between the non-thread-safe and thread-safe Map data structure implementations (HashMap, Hashtable, synchronized HashMap, ConcurrentHashMap)
- Replicate and demonstrate the HashMap infinite looping problem using a simple Java program that everybody can compile, run and understand
- Review the usage of the above Map data structures in a real-life and modern Java EE container implementation such as JBoss AS7

For more detail on the ConcurrentHashMap implementation strategy, I highly recommend the great article from Brian Goetz on this subject.

Tools and server specifications

As a starting point, find below the different tools and software used for the exercise:

- Sun/Oracle JDK & JRE 1.7 64-bit
- Eclipse Java EE IDE
- Windows Process Explorer (CPU per Java thread correlation)
- JVM thread dump (stuck thread analysis and CPU per thread correlation)

The following local computer was used for the problem replication process and performance measurements:

- Intel(R) Core(TM) i5-2520M CPU @ 2.50GHz (2 CPU cores, 4 logical cores)
- 8 GB RAM
- Windows 7 64-bit

* Results and performance of the Java program may vary depending on your workstation or server specifications.

Java program

In order to help us achieve the above goals, a simple Java program was created as per below:

- The main Java program is HashMapInfiniteLoopSimulator.java
- A worker thread class, WorkerThread.java, was also created

The program performs the following:

- Initialize the different static Map data structures with an initial size of 2
- Assign the chosen Map to the worker threads (you can choose between 4 Map implementations)
- Create a certain number of worker threads (as per the header configuration); 3 worker threads were created for this proof of concept (NB_THREADS = 3)
- Each of these worker threads has the same task: look up and insert a new element into the assigned Map data structure, using a random Integer between 1 and 1,000,000; each worker thread performs this task for a total of 500K iterations
- The overall program performs 50 iterations in order to allow enough ramp-up time for the HotSpot JVM
- The concurrent-threads context is achieved using the JDK ExecutorService

As you can see, the Java program task is fairly simple, but complex enough to generate the following critical criteria:

- Generate concurrency against a shared / static Map data structure
- Use a mix of get() and put() operations in order to attempt to trigger internal locks and / or internal corruption (for the non-thread-safe implementation)
- Use a small Map initial size of 2, forcing the internal HashMap to trigger an internal rehash/resize

Finally, the following parameters can be modified at your convenience:

## Number of worker threads

```java
private static final int NB_THREADS = 3;
```

## Number of Java program iterations

```java
private static final int NB_TEST_ITERATIONS = 50;
```

## Map data structure assignment. You can choose between 4 structures

```java
// Plain old HashMap (since JDK 1.2)
nonThreadSafeMap = new HashMap<String, Integer>(2);

// Plain old Hashtable (since JDK 1.0)
threadSafeMap1 = new Hashtable<String, Integer>(2);

// Fully synchronized HashMap
threadSafeMap2 = new HashMap<String, Integer>(2);
threadSafeMap2 = Collections.synchronizedMap(threadSafeMap2);

// ConcurrentHashMap (since JDK 1.5)
threadSafeMap3 = new ConcurrentHashMap<String, Integer>(2);

/*** Assign map at your convenience ****/
assignedMapForTest = threadSafeMap3;
```

Now find below the source code of our sample program.

#### HashMapInfiniteLoopSimulator.java

```java
package org.ph.javaee.training4;

import java.util.Collections;
import java.util.Map;
import java.util.HashMap;
import java.util.Hashtable;

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

/**
 * HashMapInfiniteLoopSimulator
 * @author Pierre-Hugues Charbonneau
 *
 */
public class HashMapInfiniteLoopSimulator {

    private static final int NB_THREADS = 3;
    private static final int NB_TEST_ITERATIONS = 50;

    private static Map<String, Integer> assignedMapForTest = null;
    private static Map<String, Integer> nonThreadSafeMap = null;
    private static Map<String, Integer> threadSafeMap1 = null;
    private static Map<String, Integer> threadSafeMap2 = null;
    private static Map<String, Integer> threadSafeMap3 = null;

    /**
     * Main program
     * @param args
     */
    public static void main(String[] args) {

        System.out.println("Infinite Looping HashMap Simulator");
        System.out.println("Author: Pierre-Hugues Charbonneau");
        System.out.println("http://javaeesupportpatterns.blogspot.com");

        for (int i = 0; i < NB_TEST_ITERATIONS; i++) {

            // Plain old HashMap (since JDK 1.2)
            nonThreadSafeMap = new HashMap<String, Integer>(2);

            // Plain old Hashtable (since JDK 1.0)
            threadSafeMap1 = new Hashtable<String, Integer>(2);

            // Fully synchronized HashMap
            threadSafeMap2 = new HashMap<String, Integer>(2);
            threadSafeMap2 = Collections.synchronizedMap(threadSafeMap2);

            // ConcurrentHashMap (since JDK 1.5)
            threadSafeMap3 = new ConcurrentHashMap<String, Integer>(2);

            /*** Assign map at your convenience ****/
            assignedMapForTest = threadSafeMap3;

            long timeBefore = System.currentTimeMillis();
            long timeAfter = 0;
            Float totalProcessingTime = null;

            ExecutorService executor = Executors.newFixedThreadPool(NB_THREADS);

            for (int j = 0; j < NB_THREADS; j++) {
                /** Assign the Map at your convenience **/
                Runnable worker = new WorkerThread(assignedMapForTest);
                executor.execute(worker);
            }

            // This will make the executor accept no new threads
            // and finish all existing threads in the queue
            executor.shutdown();

            // Wait until all threads are finished
            while (!executor.isTerminated()) {

            }

            timeAfter = System.currentTimeMillis();
            totalProcessingTime = new Float((float) (timeAfter - timeBefore) / (float) 1000);

            System.out.println("All threads completed in " + totalProcessingTime + " seconds");
        }
    }
}
```

#### WorkerThread.java

```java
package org.ph.javaee.training4;

import java.util.Map;

/**
 * WorkerThread
 *
 * @author Pierre-Hugues Charbonneau
 *
 */
public class WorkerThread implements Runnable {

    private Map<String, Integer> map = null;

    public WorkerThread(Map<String, Integer> assignedMap) {
        this.map = assignedMap;
    }

    @Override
    public void run() {

        for (int i = 0; i < 500000; i++) {

            // Generate 2 random integers between 1-1000000 inclusive
            Integer newInteger1 = (int) Math.ceil(Math.random() * 1000000);
            Integer newInteger2 = (int) Math.ceil(Math.random() * 1000000);

            // 1. Attempt to retrieve a random Integer element
            Integer retrievedInteger = map.get(String.valueOf(newInteger1));

            // 2. Attempt to insert a random Integer element
            map.put(String.valueOf(newInteger2), newInteger2);
        }
    }
}
```

Performance comparison between thread-safe Map implementations

The first goal is to compare the performance level of our program when using different thread-safe Map implementations:

- Plain old Hashtable (since JDK 1.0)
- Fully synchronized HashMap (via Collections.synchronizedMap())
- ConcurrentHashMap (since JDK 1.5)

Find below the graphical results of the execution of the Java program for each iteration along with a sample of the program console output.

# Output when using ConcurrentHashMap

```
Infinite Looping HashMap Simulator
Author: Pierre-Hugues Charbonneau
http://javaeesupportpatterns.blogspot.com
All threads completed in 0.984 seconds
All threads completed in 0.908 seconds
All threads completed in 0.706 seconds
All threads completed in 1.068 seconds
All threads completed in 0.621 seconds
All threads completed in 0.594 seconds
All threads completed in 0.569 seconds
All threads completed in 0.599 seconds
```

As you can see, the ConcurrentHashMap is the clear winner here, taking on average only about half a second (after an initial ramp-up) for all 3 worker threads to concurrently read and insert data across 500K loop iterations against the assigned shared Map. Please note that no problem was found with the program execution, e.g. no hang situation. The performance boost is definitely due to improved ConcurrentHashMap internals such as the non-blocking get() operation. The performance of the 2 other Map implementations was fairly similar, with a small advantage for the synchronized HashMap.

HashMap infinite looping problem replication

The next objective is to replicate the HashMap infinite looping problem observed so often in Java EE production environments. In order to do that, you simply need to assign the non-thread-safe HashMap implementation as per the code snippet below:

```java
/*** Assign map at your convenience ****/
assignedMapForTest = nonThreadSafeMap;
```

Running the program as is with the non-thread-safe HashMap should lead to:

- No output other than the program header
- A significant CPU increase observed on the system
- At some point the Java program will hang and you will be forced to kill the Java process

What happened? In order to understand this situation and confirm the problem, we will perform a CPU-per-thread analysis on Windows using Process Explorer and a JVM thread dump.

1 - Run the program again, then quickly capture the thread-per-CPU data from Process Explorer as per below. Under explorer.exe, right-click on javaw.exe and select Properties; the Threads tab will be displayed. We can see overall 4 threads using almost all the CPU of our system.

2 - Now you have to quickly capture a JVM thread dump using the JDK 1.7 jstack utility. For our example, we can see our 3 worker threads, which seem busy/stuck performing get() and put() operations:

```
..\jdk1.7.0\bin>jstack 272
2012-08-29 14:07:26
Full thread dump Java HotSpot(TM) 64-Bit Server VM (21.0-b17 mixed mode):

"pool-1-thread-3" prio=6 tid=0x0000000006a3c000 nid=0x18a0 runnable [0x0000000007ebe000]
   java.lang.Thread.State: RUNNABLE
    at java.util.HashMap.put(Unknown Source)
    at org.ph.javaee.training4.WorkerThread.run(WorkerThread.java:32)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
    at java.lang.Thread.run(Unknown Source)

"pool-1-thread-2" prio=6 tid=0x0000000006a3b800 nid=0x6d4 runnable [0x000000000805f000]
   java.lang.Thread.State: RUNNABLE
    at java.util.HashMap.get(Unknown Source)
    at org.ph.javaee.training4.WorkerThread.run(WorkerThread.java:29)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
    at java.lang.Thread.run(Unknown Source)

"pool-1-thread-1" prio=6 tid=0x0000000006a3a800 nid=0x2bc runnable [0x0000000007d9e000]
   java.lang.Thread.State: RUNNABLE
    at java.util.HashMap.put(Unknown Source)
    at org.ph.javaee.training4.WorkerThread.run(WorkerThread.java:32)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
    at java.lang.Thread.run(Unknown Source)
..............
```

It is now time to convert the Process Explorer thread IDs from DECIMAL to HEXADECIMAL format. The HEX values allow us to map and identify each thread as per below:

## TID: 1748 (nid=0x6D4) - Thread name: pool-1-thread-2 - CPU @25.71%
## Task: worker thread executing a HashMap.get() operation

```
    at java.util.HashMap.get(Unknown Source)
    at org.ph.javaee.training4.WorkerThread.run(WorkerThread.java:29)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
    at java.lang.Thread.run(Unknown Source)
```

## TID: 700 (nid=0x2BC) - Thread name: pool-1-thread-1 - CPU @23.55%
## Task: worker thread executing a HashMap.put() operation

```
    at java.util.HashMap.put(Unknown Source)
    at org.ph.javaee.training4.WorkerThread.run(WorkerThread.java:32)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
    at java.lang.Thread.run(Unknown Source)
```

## TID: 6304 (nid=0x18A0) - Thread name: pool-1-thread-3 - CPU @12.02%
## Task: worker thread executing a HashMap.put() operation

```
    at java.util.HashMap.put(Unknown Source)
    at org.ph.javaee.training4.WorkerThread.run(WorkerThread.java:32)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
    at java.lang.Thread.run(Unknown Source)
```

## TID: 5944 (nid=0x1738) - Thread name: main - CPU @20.88%
## Task: main Java program execution

```
"main" prio=6 tid=0x0000000001e2b000 nid=0x1738 runnable [0x00000000029df000]
   java.lang.Thread.State: RUNNABLE
    at org.ph.javaee.training4.HashMapInfiniteLoopSimulator.main(HashMapInfiniteLoopSimulator.java:75)
```

As you can see, the above correlation and analysis is quite revealing. Our main Java program is in a hang state because our 3 worker threads are using a lot of CPU and not going anywhere. They may appear 'stuck' performing HashMap get() & put() operations, but in fact they are all trapped in an infinite loop condition. This is exactly what we wanted to replicate.

3 - HashMap infinite looping deep dive

Now let's push the analysis one step further to better understand this looping condition. For this purpose, we added tracing code within the JDK 1.7 HashMap Java class itself in order to understand what is happening. Similar logging was added to the put() operation, along with a trace indicating that the internal, automatic rehash/resize got triggered. The tracing added to the get() and put() operations allows us to determine whether the for() loop is dealing with a circular dependency, which would explain the infinite looping condition.

#### HashMap.java get() operation

```java
public V get(Object key) {

    if (key == null)
        return getForNullKey();
    int hash = hash(key.hashCode());

    /*** P-H add-on - iteration counter ***/
    int iterations = 1;

    for (Entry<K,V> e = table[indexFor(hash, table.length)]; e != null; e = e.next) {

        /*** Circular dependency check ***/
        Entry<K,V> currentEntry = e;
        Entry<K,V> nextEntry = e.next;
        Entry<K,V> nextNextEntry = e.next != null ? e.next.next : null;

        K currentKey = currentEntry.key;
        K nextNextKey = nextNextEntry != null
                ? (nextNextEntry.key != null ? nextNextEntry.key : null) : null;

        System.out.println("HashMap.get() #Iterations : " + iterations++);

        if (currentKey != null && nextNextKey != null) {
            if (currentKey == nextNextKey || currentKey.equals(nextNextKey))
                System.out.println(" ** Circular Dependency detected! ["
                        + currentEntry + "][" + nextEntry + "][" + nextNextEntry + "]");
        }
        /***** END ***/

        Object k;
        if (e.hash == hash && ((k = e.key) == key || key.equals(k)))
            return e.value;
    }
    return null;
}
```

```
HashMap.get() #Iterations : 1
HashMap.put() #Iterations : 1
HashMap.put() #Iterations : 1
HashMap.put() #Iterations : 1
HashMap.put() #Iterations : 1
HashMap.resize() in progress...
HashMap.put() #Iterations : 1
HashMap.put() #Iterations : 2
HashMap.resize() in progress...
HashMap.resize() in progress...
HashMap.put() #Iterations : 1
HashMap.put() #Iterations : 2
HashMap.put() #Iterations : 1
HashMap.get() #Iterations : 1
HashMap.get() #Iterations : 1
HashMap.put() #Iterations : 1
HashMap.get() #Iterations : 1
HashMap.get() #Iterations : 1
HashMap.put() #Iterations : 1
HashMap.get() #Iterations : 1
HashMap.put() #Iterations : 1
 ** Circular Dependency detected! [362565=362565][333326=333326][362565=362565]
HashMap.put() #Iterations : 2
 ** Circular Dependency detected! [333326=333326][362565=362565][333326=333326]
HashMap.put() #Iterations : 1
HashMap.put() #Iterations : 1
HashMap.get() #Iterations : 1
HashMap.put() #Iterations : 1
.............................
HashMap.put() #Iterations : 56823
```

Again, the added logging was quite revealing. We can see that, following a few internal HashMap.resize() calls, the internal structure became affected, creating circular dependency conditions and triggering this infinite looping condition (#Iterations increasing and increasing...) with no exit condition. It also shows that the resize()/rehash operation is the most at risk of internal corruption, especially when using the small initial HashMap size of 2. This means that the initial size of the HashMap appears to be a big factor in the risk and problem replication. Finally, it is interesting to note that we were able to successfully run the test case with the non-thread-safe HashMap by assigning an initial size of 1000000, preventing any resize at all. Find below the merged graph results:

The HashMap was our top performer, but only when preventing an internal resize. Again, this is definitely not a solution to the thread safety risk, just a way to demonstrate that the resize operation is the most at risk, given the amount of internal manipulation of the HashMap performed at that time. The ConcurrentHashMap is, by far, our overall winner, providing both fast performance and thread safety in this test case.
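One caveat worth keeping in mind when reaching for the synchronized HashMap option: Collections.synchronizedMap() only guards individual calls such as get() and put(); its Javadoc requires you to synchronize on the returned map manually for the whole of any iteration, otherwise a concurrent put() can still throw ConcurrentModificationException. A minimal sketch (class and method names are ours, not from the article):

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public class SynchronizedMapIterationExample {

    static int sumValues(Map<String, Integer> map) {
        int sum = 0;
        // Iteration over a synchronizedMap is not atomic: the Javadoc
        // requires holding the map's own lock for the whole traversal.
        synchronized (map) {
            for (Integer value : map.values()) {
                sum += value;
            }
        }
        return sum;
    }

    public static void main(String[] args) {
        Map<String, Integer> map =
                Collections.synchronizedMap(new HashMap<String, Integer>());
        map.put("a", 1);
        map.put("b", 2);
        System.out.println("sum=" + sumValues(map));
    }
}
```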

Wait and notify

It's important to understand that sleep() does not release the lock when it is called. On the other hand, the method wait() does release the lock, which means that other synchronized methods in the thread object can be called during a wait(). When a thread enters a call to wait() inside a method, that thread's execution is suspended, and the lock on that object is released.

There are two forms of wait(). The first takes an argument in milliseconds that has the same meaning as in sleep(): pause for this period of time. The difference is that in wait():

1. The object lock is released during the wait().
2. You can come out of the wait() due to a notify() or notifyAll(), or by letting the clock run out.

One fairly unique aspect of wait(), notify(), and notifyAll() is that these methods are part of the base class Object and not part of Thread, as sleep() is. Although this seems a bit strange at first (to have something that's exclusively for threading as part of the universal base class), it's essential, because these methods manipulate the lock that's also part of every object. As a result, you can put a wait() inside any synchronized method, regardless of whether that class extends Thread or implements Runnable. In fact, the only place you can call wait(), notify(), or notifyAll() is within a synchronized method or block (sleep() can be called within non-synchronized methods since it doesn't manipulate the lock). If you call any of these within a method that's not synchronized, the program will compile, but when you run it, you'll get an IllegalMonitorStateException with the somewhat nonintuitive message "current thread not owner". This message means that the thread calling wait(), notify(), or notifyAll() must own (acquire) the lock for the object before it can call any of these methods.

You can ask another object to perform an operation that manipulates its own lock. To do this, you must first capture that object's lock.
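The difference between sleep() holding the lock and wait() releasing it can be observed directly. The following sketch (all names are ours; a rough illustration under timing assumptions, not a strict proof) parks one thread in a timed wait() while another thread successfully enters a synchronized block on the very same object:

```java
public class WaitReleasesLockExample {

    static boolean lockFreedDuringWait() throws InterruptedException {
        final Object lock = new Object();

        Thread waiter = new Thread(new Runnable() {
            public void run() {
                synchronized (lock) {
                    try {
                        // wait() releases the lock while this thread is
                        // suspended; sleep() here would keep holding it.
                        lock.wait(2000);
                    } catch (InterruptedException e) {
                        throw new RuntimeException(e);
                    }
                }
            }
        });
        waiter.start();
        Thread.sleep(200); // give the waiter time to enter wait()

        boolean acquired;
        synchronized (lock) { // enters promptly because wait() freed the lock
            acquired = true;
            lock.notify();    // wake the waiter so it can finish
        }
        waiter.join();
        return acquired;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("Lock acquired while other thread waited: "
                + lockFreedDuringWait());
    }
}
```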
For example, if you want to notify() an object x, you must do so inside a synchronized block that acquires the lock for x:

```java
synchronized(x) {
  x.notify();
}
```

Typically, wait() is used when you're waiting for some condition that is under the control of forces outside the current method to change (typically, this condition will be changed by another thread). You don't want to idly loop while testing the condition inside your thread; this is called a busy wait, and it's a very bad use of CPU cycles. So wait() allows you to put the thread to sleep while waiting for the world to change, and only when a notify() or notifyAll() occurs does the thread wake up and check for changes. Thus, wait() provides a way to synchronize activities between threads.

As an example, consider a restaurant that has one chef and one waitperson. The waitperson must wait for the chef to prepare a meal. When the chef has a meal ready, the chef notifies the waitperson, who then gets the meal and goes back to waiting. This is an excellent example of thread cooperation: the chef represents the producer, and the waitperson represents the consumer. Here is the story modeled in code:

```java
//: c13:Restaurant.java
// The producer-consumer approach to thread cooperation.
import com.bruceeckel.simpletest.*;

class Order {
  private static int i = 0;
  private int count = i++;
  public Order() {
    if(count == 10) {
      System.out.println("Out of food, closing");
      System.exit(0);
    }
  }
  public String toString() { return "Order " + count; }
}

class WaitPerson extends Thread {
  private Restaurant restaurant;
  public WaitPerson(Restaurant r) {
    restaurant = r;
    start();
  }
  public void run() {
    while(true) {
      while(restaurant.order == null)
        synchronized(this) {
          try {
            wait();
          } catch(InterruptedException e) {
            throw new RuntimeException(e);
          }
        }
      System.out.println("Waitperson got " + restaurant.order);
      restaurant.order = null;
    }
  }
}

class Chef extends Thread {
  private Restaurant restaurant;
  private WaitPerson waitPerson;
  public Chef(Restaurant r, WaitPerson w) {
    restaurant = r;
    waitPerson = w;
    start();
  }
  public void run() {
    while(true) {
      if(restaurant.order == null) {
        restaurant.order = new Order();
        System.out.print("Order up! ");
        synchronized(waitPerson) {
          waitPerson.notify();
        }
      }
      try {
        sleep(100);
      } catch(InterruptedException e) {
        throw new RuntimeException(e);
      }
    }
  }
}

public class Restaurant {
  private static Test monitor = new Test();
  Order order; // Package access
  public static void main(String[] args) {
    Restaurant restaurant = new Restaurant();
    WaitPerson waitPerson = new WaitPerson(restaurant);
    Chef chef = new Chef(restaurant, waitPerson);
    monitor.expect(new String[] {
      "Order up! Waitperson got Order 0",
      "Order up! Waitperson got Order 1",
      "Order up! Waitperson got Order 2",
      "Order up! Waitperson got Order 3",
      "Order up! Waitperson got Order 4",
      "Order up! Waitperson got Order 5",
      "Order up! Waitperson got Order 6",
      "Order up! Waitperson got Order 7",
      "Order up! Waitperson got Order 8",
      "Order up! Waitperson got Order 9",
      "Out of food, closing"
    }, Test.WAIT);
  }
} ///:~
```

Order is a simple self-counting class, but notice that it also includes a way to terminate the program: on order 10, System.exit() is called.

A WaitPerson must know what Restaurant they are working for, because they must fetch the order from the restaurant's order window, restaurant.order. In run(), the WaitPerson goes into wait() mode, stopping that thread until it is woken up with a notify() from the Chef. Since this is a very simple program, we know that only one thread will be waiting on the WaitPerson's lock: the WaitPerson thread itself. For this reason it's safe to call notify(). In more complex situations, multiple threads may be waiting on a particular object lock, so you don't know which thread should be awakened. The solution is to call notifyAll(), which wakes up all the threads waiting on that lock. Each thread must then decide whether the notification is relevant.

Notice that the wait() is wrapped in a while() statement that tests for the same thing that is being waited for. This seems a bit strange at first: if you're waiting for an order, once you wake up, the order must be available, right? The problem is that in a multithreaded application, some other thread might swoop in and grab the order while the WaitPerson is waking up. The only safe approach is to always use the following idiom for a wait():

```java
while(conditionIsNotMet)
  wait();
```

This guarantees that the condition will be met before you get out of the wait loop, and if you have either been notified of something that doesn't concern the condition (as can happen with notifyAll()), or the condition changes before you get fully out of the wait loop, you are guaranteed to go back into waiting.
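The guarded-wait idiom can be packaged into a small, self-contained sketch (class and method names are ours, not from the book). The while loop re-tests the condition on every wakeup, so a stray notifyAll() or a lost race for the condition simply sends the thread back into wait():

```java
public class GuardedWaitExample {
    private final Object lock = new Object();
    private boolean ready = false;

    String awaitCondition() throws InterruptedException {
        synchronized (lock) {
            // Always wait in a while loop re-testing the condition: a
            // notification meant for someone else, or a wakeup that loses
            // the race for the condition, goes straight back into wait().
            while (!ready) {
                lock.wait();
            }
            return "condition met";
        }
    }

    void signal() {
        synchronized (lock) { // must own the lock to call notifyAll()
            ready = true;
            lock.notifyAll();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        final GuardedWaitExample example = new GuardedWaitExample();
        Thread signaller = new Thread(new Runnable() {
            public void run() {
                try {
                    Thread.sleep(100);
                } catch (InterruptedException e) {
                    throw new RuntimeException(e);
                }
                example.signal();
            }
        });
        signaller.start();
        System.out.println(example.awaitCondition());
        signaller.join();
    }
}
```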

A Chef object must know what restaurant he or she is working for (so the Orders can be placed in restaurant.order) and the WaitPerson who is picking up the meals, so that the WaitPerson can be notified when an order is ready. In this simplified example, the Chef generates the Order objects, then notifies the WaitPerson that an order is ready. Observe that the call to notify() must first capture the lock on waitPerson. The call to wait() in WaitPerson.run() automatically releases the lock, so this is possible. Because the lock must be owned in order to call notify(), it's guaranteed that two threads trying to call notify() on one object won't step on each other's toes.

The preceding example has only a single spot for one thread to store an object so that another thread can later use it. However, in a typical producer-consumer implementation, you use a first-in, first-out queue to store the objects being produced and consumed. See the exercises at the end of the chapter to learn more about this.

Producer-Consumer Problem using Blocking Queue

Before we start with the actual example, let's have a look at a few concepts we should be aware of.

Producer-Consumer Problem

Wikipedia says:

The producer-consumer problem (also known as the bounded-buffer problem) is a classical example of a multi-process synchronization problem. The problem describes two processes, the producer and the consumer, who share a common, fixed-size buffer used as a queue. The producer's job is to generate a piece of data, put it into the buffer and start again. At the same time, the consumer is consuming the data (i.e., removing it from the buffer) one piece at a time. The problem is to make sure that the producer won't try to add data into the buffer if it's full and that the consumer won't try to remove data from an empty buffer.

The solution for the producer is to either go to sleep or discard data if the buffer is full. The next time the consumer removes an item from the buffer, it notifies the producer, who starts to fill the buffer again. In the same way, the consumer can go to sleep if it finds the buffer to be empty. The next time the producer puts data into the buffer, it wakes up the sleeping consumer. The solution can be reached by means of inter-process communication, typically using semaphores. An inadequate solution could result in a deadlock where both processes are waiting to be awakened. The problem can also be generalized to have multiple producers and consumers.

There are numerous ways to solve a producer-consumer problem, and in this post I will show one simple way to solve it using the data structures and other constructs provided in the JDK. Java 5 introduced a new set of concurrency-related APIs in its java.util.concurrent package. As I said here, not many developers are aware of these APIs, and very few of them make use of them in their code. Quite a few new Collection classes were introduced in Java 5, and one of them is the BlockingQueue.
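The sleep/wake-up bookkeeping described above is exactly what the JDK's BlockingQueue (covered next) encapsulates. As a preview, here is a minimal hedged sketch (class and method names are ours) with one producer thread, one consumer, and a capacity-2 ArrayBlockingQueue as the bounded buffer:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class BlockingQueueDemo {

    static int produceAndConsume() throws InterruptedException {
        // Bounded buffer of capacity 2: put() blocks when it is full,
        // take() blocks when it is empty -- no manual wait()/notify() needed.
        final BlockingQueue<Integer> queue = new ArrayBlockingQueue<Integer>(2);

        Thread producer = new Thread(new Runnable() {
            public void run() {
                try {
                    for (int i = 1; i <= 5; i++) {
                        queue.put(i); // blocks while the consumer lags behind
                    }
                    queue.put(-1);    // "poison pill" telling the consumer to stop
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
        producer.start();

        int sum = 0;
        while (true) {
            int value = queue.take(); // blocks until an element is available
            if (value == -1) break;
            sum += value;
        }
        producer.join();
        return sum;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("Consumed sum: " + produceAndConsume());
    }
}
```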

BlockingQueue in Java

The JavaDoc describes BlockingQueue as "a Queue that additionally supports operations that wait for the queue to become non-empty when retrieving an element, and wait for space to become available in the queue when storing an element". There are multiple methods for retrieving elements from and adding elements to the queue: one pair throws an exception when the operation is not possible, one pair returns a special value, one pair waits for a fixed time, and one pair blocks until the queue has an element or free space. The methods that block the thread are put(e) and take().

In a typical producer-consumer problem we want the consumer thread to be blocked until there is something in the queue to be consumed, and the producer thread to be blocked until there is some free space in the queue to add an element. With normal collection classes it becomes quite a bit of work to implement the inter-thread communication needed to wait for, and notify other threads about, the status of the queue. The put(e) and take() methods of BlockingQueue are what make it very easy to solve producer-consumer-like problems.

Let's take a scenario where the producer thread watches for files being modified in some directory and adds those files to the queue, and the consumer thread prints the contents of those files to the console.

Producer thread

If you are not familiar with the WatchService API in Java, you should first read up on how it works.
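Before diving into the file-watching example, the four method families mentioned above can be seen in a minimal sketch. This is only an illustration, using an ArrayBlockingQueue of capacity 1 so that the "not possible" cases are easy to trigger:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

public class QueueMethodFamilies {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(1);
        queue.put("first"); // succeeds: the queue has space

        // Family 1: throws an exception when the operation is not possible
        try {
            queue.add("second");
        } catch (IllegalStateException e) {
            System.out.println("add() threw IllegalStateException");
        }

        // Family 2: returns a special value (false/null) instead of throwing
        System.out.println("offer() returned: " + queue.offer("second"));

        // Family 3: waits up to a timeout for the operation to become possible
        boolean added = queue.offer("second", 100, TimeUnit.MILLISECONDS);
        System.out.println("timed offer() returned: " + added);

        // Family 4: put()/take() block indefinitely; take() succeeds at once
        // here because the queue is non-empty
        System.out.println("take() returned: " + queue.take());
    }
}
```

Running this prints that add() threw, both offer() calls returned false, and take() returned "first".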
```java
class FileProducer implements Runnable {

    BlockingQueue<Path> filesList;
    Path rootPath;

    public FileProducer(BlockingQueue<Path> filesList, Path rootPath) {
        this.filesList = filesList;
        this.rootPath = rootPath;
    }

    @Override
    public void run() {
        try {
            WatchService service =
                FileSystems.getDefault().newWatchService();

            rootPath.register(service,
                StandardWatchEventKinds.ENTRY_MODIFY);

            while (true) {
                WatchKey key = service.take();

                for (WatchEvent<?> event : key.pollEvents()) {

                    Path relativePath = (Path) event.context();

                    Path absolutePath =
                        Paths.get(rootPath.toString(),
                                  relativePath.toString());

                    filesList.put(absolutePath);
                }

                // reset is invoked to put the key back in the ready state
                boolean valid = key.reset();

                // If the key is invalid, just exit.
                if (!valid) {
                    break;
                }
            }
        } catch (IOException e) {
            e.printStackTrace();
        } catch (InterruptedException e) {
            e.printStackTrace();
            Thread.currentThread().interrupt();
        }
    }
}
```

The producer thread above watches a directory for file modifications and adds the absolute path of each modified file to the BlockingQueue passed to it via its constructor.

Consumer Thread

The consumer thread invokes take() on the BlockingQueue instance and then uses the Files API to read the contents. As take() is a blocking call, if the filesList collection is empty it simply blocks and waits for data to become available.

```java
class FileConsumer implements Runnable {

    BlockingQueue<Path> filesList;
    Path rootPath;

    public FileConsumer(BlockingQueue<Path> filesList, Path rootPath) {
        this.filesList = filesList;
        this.rootPath = rootPath;
    }

    @Override
    public void run() {
        try {
            while (true) {

                Path fileToRead = filesList.take();

                List<String> linesInFile =
                    Files.readAllLines(fileToRead,
                                       Charset.defaultCharset());

                System.out.println("reading file: " + fileToRead);

                for (String line : linesInFile) {
                    System.out.println(line);
                }
            }
        } catch (InterruptedException e) {
            e.printStackTrace();
            Thread.currentThread().interrupt();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
```

Note: If you are writing to a file using Vim or other editors that create temporary files, make sure you exclude such files from being added to the queue.

Invoking the producer and consumer

```java
public class ProducerConsumerSample {

    public static void main(String[] args) {

        BlockingQueue<Path> filesList =
            new LinkedBlockingQueue<>(10);

        Path rootPath = Paths.get("/tmp/nio");

        Thread producerThread =
            new Thread(new FileProducer(filesList, rootPath));
        Thread consumerThread =
            new Thread(new FileConsumer(filesList, rootPath));

        producerThread.start();
        consumerThread.start();
    }
}
```

Pretty straightforward: create instances of both threads and then launch them. You can create multiple consumer threads as well! In the above example we make use of LinkedBlockingQueue, which is one of the implementations of BlockingQueue. Make sure you import the corresponding classes in your source code. You can define all three classes in the same file, name the file ProducerConsumerSample.java, and compile and run the code. Once you have the code running, go to your terminal and type:

/tmp/nio$ touch file1
/tmp/nio$ echo "this is file1" >> file1
/tmp/nio$ touch file2
/tmp/nio$ echo "this is file2" >> file2

and the output you see on the terminal of your Java program is:

reading file: /tmp/nio/file1
reading file: /tmp/nio/file1
this is file1
reading file: /tmp/nio/file2
reading file: /tmp/nio/file2
this is file2

Note: This code was compiled and tested on a Linux platform; find similar ways of creating files on Windows when you run the code.
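The WatchService plumbing aside, the core producer-consumer pattern above can be boiled down to a self-contained, filesystem-free sketch. The POISON sentinel used to stop the consumer is our own convention for this illustration, not part of the BlockingQueue API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class MinimalProducerConsumer {
    private static final String POISON = "POISON";

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>(10);
        List<String> consumed = new ArrayList<>();

        Thread consumer = new Thread(() -> {
            try {
                String item;
                // take() blocks while the queue is empty
                while (!(item = queue.take()).equals(POISON)) {
                    consumed.add(item); // "process" the item
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();

        // Producer: put() blocks if the queue is full, so no manual
        // wait/notify code is needed on either side
        for (int i = 1; i <= 5; i++) {
            queue.put("item-" + i);
        }
        queue.put(POISON); // signal the consumer to stop

        consumer.join();
        System.out.println("consumed: " + consumed);
        // prints: consumed: [item-1, item-2, item-3, item-4, item-5]
    }
}
```

The same shape (blocking take() in a loop, a sentinel to shut down) recurs in the logger-thread example later in this document.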

The producer-consumer pattern in Java 5: using blocking queues in preference to wait()/notify()

A common use for the wait/notify mechanism is to implement what is sometimes called a producer-consumer pattern: one thread "produces" work that another thread, or various other threads, then carry out at a convenient moment. Examples of this pattern include: a messaging thread logs messages "passed" to it from other threads; worker threads of a web server "notify" a statistics thread to update some central statistics on each request.

A typical case for using the pattern is thus to separate tasks with different priorities. Logging, for example, can be a relatively expensive operation, and we may not want it to delay completing another operation. By delegating logging to another thread, we effectively allow logging to take place at a future moment when "there's nothing better to do". The producer-consumer pattern works by having a queue of pending tasks: the producer places tasks in the queue, the consumer removes them, and both parties use suitable synchronization.

Producer-consumer before Java 5: using a List with wait/notify

Before Java 5, the common way to implement a producer-consumer pattern was to use a plain old LinkedList with explicit synchronization. When we add a "job" to the list, we call notify(); in another thread, the consumer sits waiting for the job to come in. The code would look something like this:

```java
public class LoggingThread extends Thread {
    private LinkedList linesToLog = new LinkedList();
    private volatile boolean terminateRequested;

    public void run() {
        try {
            while (!terminateRequested) {
                String line;
                synchronized (linesToLog) {
                    while (linesToLog.isEmpty())
                        linesToLog.wait();
                    line = (String) linesToLog.removeFirst();
                }
                doLogLine(line);
            }
        } catch (InterruptedException ex) {
            Thread.currentThread().interrupt();
        }
    }

    private void doLogLine(String line) {
        // ... write to wherever
    }

    public void log(String line) {
        synchronized (linesToLog) {
            linesToLog.add(line);
            linesToLog.notify();
        }
    }
}
```

The code is a little messy because we have no explicit queue object: we just use an everyday list with code around it to perform the queuing. The queuing code might get more complex, for example, if we wanted to limit the number of items that could be queued, or if we wanted to prioritise items in the queue rather than having a simple first-in-first-out policy. The wait/notify mechanism also provides us no means of imposing fairness: if two threads want to add a line while the list is locked (because a line is being logged from it), which line gets logged first is essentially random. In the case of logging, this may not seem such a big deal (though in rare debugging cases it could complicate things if you don't know "what happened first"), but in other cases it could matter more.

The Java 5 producer-consumer pattern

Java 5 improves the producer-consumer pattern by providing explicit blocking queue classes. A blocking queue effectively takes the place of the list in the code above, and also handles the associated synchronization, waiting and notifying (though under the hood, these new classes use the Java 5 lock features rather than a "raw" wait/notify).

The producer-consumer pattern in Java 5: using blocking queues in preference to wait()/notify() (ctd)

On the previous page, we looked at using wait/notify to implement a producer-consumer pattern. As mentioned, some new Java 5 classes allow us to separate out the queue which holds pending tasks from the "real" functionality of the producer/consumer.

The BlockingQueue interface

The Java 5 concurrent package provides the BlockingQueue interface and various implementations. Blocking queues offer (among others) the following methods:

```java
public void put(E o);
public E take();
```

This is a parametrised class: E represents the type of object that we declare the queue to hold. A common queue implementation is ArrayBlockingQueue, which has a fixed bound on the number of elements it can hold, or LinkedBlockingQueue, which need not have a limit (other than that of available memory). Using the latter, this is how our logging thread might look:

```java
public class LoggingThread extends Thread {
    private BlockingQueue<String> linesToLog =
        new LinkedBlockingQueue<String>();
    private volatile boolean terminateRequested;

    public void run() {
        try {
            while (!terminateRequested) {
                String line = linesToLog.take();
                doLogLine(line);
            }
        } catch (InterruptedException ex) {
            Thread.currentThread().interrupt();
        }
    }

    public void log(String line) {
        linesToLog.put(line);
    }
}
```

The code is now cleaner since the gubbins of synchronization and notifying is taken care of by the blocking queue. In addition, it is now simple for us to modify the code to place different requirements on the queue.

Queues in Java 5: the Queue interface

Java 5 introduces several queue implementations to the Collections framework. Queue implementations firstly share a new Queue interface, which has several methods for accessing the head and tail of the queue. Recall that items are always placed on the end or "tail" of the list, and always read from the beginning or "head" of the list.

Methods specified by the Java Queue interface:

  Operation                 Throws exception       Returns special value
                            if not possible        if not possible
  Add item to tail          add()                  offer()
  Remove item from head     remove()               poll()
  "Peek" item at head       element()              peek()
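The six methods in the table can be exercised directly. Here is a quick sketch using a LinkedList as a plain (non-blocking) Queue, contrasting the two failure styles on an empty queue:

```java
import java.util.LinkedList;
import java.util.NoSuchElementException;
import java.util.Queue;

public class QueueInterfaceDemo {
    public static void main(String[] args) {
        Queue<String> queue = new LinkedList<>();

        // Add to tail: add() throws on failure, offer() returns false
        queue.add("a");
        queue.offer("b");

        // Peek at head without removing: element() throws on empty,
        // peek() returns null
        System.out.println(queue.element()); // a
        System.out.println(queue.peek());    // a

        // Remove from head: remove() throws on empty, poll() returns null
        System.out.println(queue.remove());  // a
        System.out.println(queue.poll());    // b

        // Queue is now empty: compare the two failure styles
        System.out.println(queue.poll());    // null
        try {
            queue.remove();
        } catch (NoSuchElementException e) {
            System.out.println("remove() on empty queue threw");
        }
    }
}
```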

Types of Queues

Java provides Queue implementations depending on a few key criteria:

- thread-safety: if you don't require the queue to be accessed concurrently from multiple threads, then a plain LinkedList can be used as a Queue; the advantage of the other implementations is that they offer efficient thread-safety;
- blocking or non-blocking: various blocking implementations add extra methods to put and remove items from the queue, blocking until the operation is possible, with an optional time limit;
- bound or non-bound: sometimes it is useful to put an upper limit on the number of items that can fit in the queue, e.g. to prevent a thread pool from queueing up too many jobs when the machine is busy;
- other special operations: Java provides an implementation that orders by priority, and another that applies a delay to queued items.

As of Java 6, the various queue classes are as follows:

  Thread-safe, blocking, bound:             ArrayBlockingQueue
  Thread-safe, blocking, non-bound:         LinkedBlockingQueue
  Thread-safe, blocking, priority-based:    PriorityBlockingQueue
  Thread-safe, blocking, delayed:           DelayQueue
  Thread-safe, non-blocking:                ConcurrentLinkedQueue
  Non thread-safe:                          LinkedList
  Non thread-safe, priority-based:          PriorityQueue

One further type of queue not included above is the SynchronousQueue, which is effectively a zero-length queue (so that a thread adding an item to the queue will block until another thread removes the item).
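The direct-handoff behaviour of SynchronousQueue can be seen in a small sketch: put() does not return until another thread has taken the item.

```java
import java.util.concurrent.SynchronousQueue;

public class SynchronousQueueDemo {
    public static void main(String[] args) throws InterruptedException {
        SynchronousQueue<String> handoff = new SynchronousQueue<>();

        Thread receiver = new Thread(() -> {
            try {
                // take() blocks until another thread offers an item
                System.out.println("received: " + handoff.take());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        receiver.start();

        // put() blocks until the receiver takes the item: a direct handoff
        handoff.put("hello");
        receiver.join();
    }
}
```

This prints "received: hello"; note that neither thread can "run ahead" of the other, which is exactly the zero-capacity semantics described above.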

BlockingQueue

In our overview of Java queue types, we said that perhaps the most significant type is the blocking queue. A blocking queue has the following characteristics:

- methods to add an item to the queue, waiting for space to become available in the queue if necessary;
- corresponding methods that take an item from the queue, waiting for an item to appear in the queue if it is empty;
- optional time limits and interruptibility on the latter calls;
- efficient thread-safety: blocking queues are specifically designed to have their put() method called from one thread and the take() method from another; in particular, items posted to the queue will be published correctly to any other thread taking the item from the queue; significantly, the implementations generally achieve this without locking the entire queue, making them highly concurrent components;
- integration with Java thread pools: a flavour of blocking queue can be passed into the constructor of ThreadPoolExecutor to customise the behaviour of the thread pool.

These features make BlockingQueues useful for cases such as the following:

- a server, where incoming connections are placed on a queue, and a pool of threads picks them up as those threads become free;
- a variety of parallel processes, where we want to manage or limit resource usage at different stages of the process.
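The thread-pool integration mentioned above can be sketched as follows. The pool sizes and queue capacity here are arbitrary illustration values, not recommendations:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedPool {
    public static void main(String[] args) throws InterruptedException {
        // A pool of 2-4 threads; at most 10 jobs may wait in the bounded
        // queue. When both the pool and the queue are full, submissions
        // are rejected (the default RejectedExecutionHandler throws
        // RejectedExecutionException).
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 4, 60, TimeUnit.SECONDS,
                new ArrayBlockingQueue<Runnable>(10));

        for (int i = 0; i < 6; i++) {
            final int jobId = i;
            pool.execute(() -> System.out.println("handling job " + jobId));
        }

        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

Bounding the work queue this way is how a server can avoid queueing up unlimited jobs when the machine is busy, as noted in the "bound or non-bound" criterion earlier.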

Example use of BlockingQueue

On the next page, we'll examine the facilities provided by BlockingQueue implementations. We'll work through a BlockingQueue example, using it to construct a logger thread.

BlockingQueue example: a background logger thread A simple use of a BlockingQueue is where we want one or more threads to pass jobs to some "background" or "processing" thread. The processing thread will sit waiting for jobs and execute them one at a time. On a server, for example, we might want to perform "lazy logging": we want a busy thread to be able to add a string to the queue of "things to be logged"; at moments when the server is less busy, a logger thread will then pick strings off the queue and actually log them to the console (or disk or "place where things are logged"...).

With a BlockingQueue, the task becomes simple. The logger thread holds a queue instance, and its run() method effectively sits waiting for things to log until it is told to shut down. From other threads, we call a method to add an item to the queue. The BlockingQueue implementation handles thread-safety and the actual notification to the logger thread when an item is added to the queue.

Creating the thread

Our logger will run in its own thread, so our logger class will be an extension of Thread. We won't dwell too much on thread creation in Java, which is covered in a separate section of this web site. However, we will mention that we want our logger to be a singleton: there'll be a maximum of one static instance of it. So the outer shell of the class looks as follows:

```java
public class LoggerThread extends Thread {
    private static final LoggerThread instance = new LoggerThread();

    public static LoggerThread getLogger() {
        return instance;
    }

    private LoggerThread() {
        start();
    }
}
```

With this pattern, all accesses to the logger thread must be made via the getLogger() method, and the first caller to getLogger() will actually cause the logger thread to be started. Inside the LoggerThread constructor, we could consider setting options such as the thread name and thread priority, though, as discussed in the latter article, thread priority actually means different things on different systems.

Constructing the queue

First, we need to create our queue. There are two flavours of "normal" blocking queue: a LinkedBlockingQueue, which uses a linked list as its internal structure and thus has no fixed capacity, and an ArrayBlockingQueue, which uses a fixed array and thus has a maximum capacity. In this case, we go for a fixed capacity, but that's just an arbitrary design decision. We'll create a queue with space for up to 100 strings:

```java
public class LoggerThread extends Thread {
    private BlockingQueue<String> itemsToLog =
        new ArrayBlockingQueue<String>(100);
    ...
}
```

In this case, if the queue fills up quicker than the logger thread can process the strings, then further threads trying to add strings to the queue will either hang until the logger catches up, or just drop the surplus strings. Which behaviour is adopted depends on which method we decide to call when adding a string to the queue, and is thus a design decision we have to make. Arguably, ArrayBlockingQueue is also slightly more efficient in terms of object overhead, though that's not such a great concern here: we won't be logging so many things per second! Notice that the queue, like much of the collections framework, is parametrised: thanks to Java 5 generics, we can declare it as a queue of Strings and from then on avoid ugly casts. Notice too that, like blocking methods in Java in general, the take() method can be interrupted.
If that ever happens, we just let the interruption cause the thread's run() method to terminate.

Pulling items off the queue

Now we need a run() method with a loop that continually takes the next item off the queue and logs it. We need to solve two main issues:

- when there's no item on the queue, we need to wait for one to appear;
- we need a mechanism for shutting down the logger.

First, the problem of waiting. Recall that in a non-blocking queue, the possibilities would have been remove() or poll(). But these methods return immediately (either with an item if there's one on the queue, or else a null return value or exception if the queue is empty). To get round this, the BlockingQueue provides a take() method. This method waits for an item to appear if there is none on the queue, then returns the item at the head of the queue.

At some point, we assume that the logger will be shut down "cleanly" in response to the user quitting the application. At that moment, we want the logger to log all pending strings on the queue before shutting down. One way to handle this is to post a special object to the queue that is a signal for the logger thread to shut down. Our run() method then looks as follows:

```java
public class LoggerThread extends Thread {
    private static final String SHUTDOWN_REQ = "SHUTDOWN";
    private volatile boolean shuttingDown, loggerTerminated;
    ...

    // Sit in a loop, pulling strings off the queue and logging
    public void run() {
        try {
            String item;
            while ((item = itemsToLog.take()) != SHUTDOWN_REQ) {
                System.out.println(item);
            }
        } catch (InterruptedException iex) {
        } finally {
            loggerTerminated = true;
        }
    }
}
```

So in each iteration of the while loop, we wait for and take the next item on the queue. If that's the special "shutdown request" string, then we exit the while loop and let the run() method exit (thus terminating the logger thread). Otherwise, we print the string. When the logger eventually does terminate, we set a flag to say so, which we'll come back to in a moment. (See the separate section for information on why this flag is declared as a volatile variable.)

Adding items to the queue

Now we need to write our log() method, which we can call from any thread to add a string to the queue for subsequent logging. The method we use on the blocking queue for this is put().
This adds an item immediately to the queue if it has space, else waits for space to become available:

```java
public void log(String str) {
    if (shuttingDown || loggerTerminated) return;
    try {
        itemsToLog.put(str);
    } catch (InterruptedException iex) {
        Thread.currentThread().interrupt();
        throw new RuntimeException("Unexpected interruption");
    }
}
```

When we add an item to the queue, the logic of BlockingQueue will automatically handle "waking up" the logger thread (or any thread waiting for a job on the queue). This means that some time in the future, when the logger thread is scheduled in, it will pick up the item that we added to the queue (and if it is already running, say, on another processor, it is likely to pick it up immediately).

Again, because it's a blocking operation, we must handle the possibility of put() being interrupted. We could declare our log() method to throw InterruptedException, but requiring every log operation to handle this exception makes for a slightly messy API for what will be an unlikely scenario in practice. So we just re-cast the exception as a RuntimeException (which we don't have to explicitly declare and which the caller doesn't have to explicitly catch), remembering to re-mark the current thread as interrupted, in case the handler that eventually catches the RuntimeException wants to know this. (For more details on why this is good practice, see the section on thread interruption.) Notice the first line of the method, where we check if the logger has shut down: if it has, then nothing is going to pull jobs off the queue, and the put() method would block forever if the queue was empty.

Shutting down the queue

We've really already dealt with this issue: we saw that our run() method waits for a special "shutdown" notification to be posted to the queue. So we just need our shutdown method to post this object to the queue:

```java
public void shutDown() throws InterruptedException {
    shuttingDown = true;
    itemsToLog.put(SHUTDOWN_REQ);
}
```

We also set a "shutting down" flag, checked from the log() method, so that more messages won't be posted to the queue from this moment onwards. In the design we've chosen, we assume that there'll be a well-defined "shutdown routine" in our application which will call the logger's shutDown() method, presumably towards the end of the shutdown process. Other things we could think about in a more sophisticated implementation include:

- what happens if a string is posted to the queue after a shutdown request has been issued? (in our implementation, such strings will effectively be ignored)
- what if the application is shut down at various arbitrary moments? We could consider using a shutdown hook, and/or making the logger thread a daemon thread.


Java CopyOnWriteArrayList with example: a solution for ConcurrentModificationException / fail-fast iterators

Vector and a synchronized ArrayList are not fully thread-safe; both are conditionally thread-safe, meaning that all the individual operations (methods) are thread-safe. For example, many threads can call the add() method of a Vector or a synchronized ArrayList simultaneously to add elements to the list. This concurrent operation will never fail because the add() method is synchronized (not applicable to a plain ArrayList). But when one thread changes the list via mutative operations (add, remove, etc.) while another thread is traversing it through an Iterator, the iterators implemented in the java.util collection classes fail by throwing ConcurrentModificationException (see Example 1). The exception occurs when the hasNext() or next() method of the Iterator is called. The same error also occurs (see Example 2) when elements are added to an ArrayList or Vector after the iterator() method is called.

Example 1:

```java
import java.util.*;
import java.util.concurrent.CopyOnWriteArrayList;

class AddElement implements Runnable {
    private List list;
    private int element;

    public AddElement(List myList, int e) {
        list = myList;
        element = e;
    }

    public void run() {
        list.add(element);
    }
}

class PrintElement implements Runnable {
    private List list;

    public PrintElement(List myList) {
        list = myList;
    }

    public void run() {
        Iterator it = list.iterator();
        while (it.hasNext())
            System.out.print(it.next() + " ");
    }
}

public class ConcurrentModificationExceptionTest2 {
    public static void main(String[] args) throws InterruptedException {
        List list = new Vector();
        //CopyOnWriteArrayList list=new CopyOnWriteArrayList();
        for (int i = 0; i < 1000; i++) {
            Thread a1 = new Thread(new AddElement(list, i));
            a1.start();
        }
        Thread p1 = new Thread(new PrintElement(list));
        p1.start();
    }
}
```

Running the above program may throw a ConcurrentModificationException, though not necessarily on every run. The error occurs when the adding threads and the printing thread run simultaneously.

Example 2:

```java
import java.util.*;

public class ConcurrentModificationException {
    public static void main(String args[]) {
        List list = Collections.synchronizedList(new ArrayList());
        //CopyOnWriteArrayList list=new CopyOnWriteArrayList();
        list.add("1");
        list.add("2");
        list.add("3");

        Iterator it = list.iterator();
        list.add("4");
        list.add("5");
        while (it.hasNext()) {
            System.out.print(it.next() + " ");
        }
    }
}
```

Running the above code will throw the error below:

D:\as2\JF3>java ConcurrentModificationException
Exception in thread "main" java.util.ConcurrentModificationException
        at java.util.AbstractList$Itr.checkForComodification(Unknown Source)
        at java.util.AbstractList$Itr.next(Unknown Source)
        at ConcurrentModificationException.main(ConcurrentModificationException.java:13)

One solution to prevent ConcurrentModificationException is to use external synchronization to lock the entire list while iterating. Another solution is to use the CopyOnWriteArrayList class. We can use CopyOnWriteArrayList (a thread-safe variant of ArrayList) as a replacement for ArrayList in concurrent applications where many iterations over the list are required. This class lives in java.util.concurrent; it extends Object and implements List.

Now let us see what happens exactly when we use CopyOnWriteArrayList. All mutative operations (add, remove, and so on) make a fresh copy of the underlying array, apply the change to the copy, and then replace the original array with the copy. Iterators operate on the array as it was at the time the iterator was constructed, and thus never throw ConcurrentModificationException. CopyOnWriteArrayList provides higher concurrency while preserving thread safety. It is not necessary to use CopyOnWriteArrayList everywhere instead of ArrayList; it is suited to situations where concurrent mutative operations and iterations over the list are both required.

To solve the error of Example 1, just uncomment the line //CopyOnWriteArrayList list=new CopyOnWriteArrayList(); and comment out the line List list=new Vector();. To solve the error of Example 2, uncomment the line //CopyOnWriteArrayList list=new CopyOnWriteArrayList(); and comment out the line List list=Collections.synchronizedList(new ArrayList());. The output will be 1 2 3. It will not print 4 5 because the iterator uses the snapshot of the array taken at the time the iterator was constructed.
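The snapshot behaviour described above can be seen in a small self-contained sketch:

```java
import java.util.Iterator;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class CopyOnWriteDemo {
    public static void main(String[] args) {
        List<String> list = new CopyOnWriteArrayList<>();
        list.add("1");
        list.add("2");
        list.add("3");

        // The iterator works on the snapshot taken at construction time
        Iterator<String> it = list.iterator();

        list.add("4");
        list.add("5");

        StringBuilder sb = new StringBuilder();
        while (it.hasNext()) {
            // no ConcurrentModificationException here
            sb.append(it.next()).append(" ");
        }
        System.out.println("iterator saw: " + sb.toString().trim()); // 1 2 3
        System.out.println("list is now: " + list); // [1, 2, 3, 4, 5]
    }
}
```

The iterator never sees "4" and "5" even though they are in the list by the time iteration happens, which is exactly why this class is a drop-in cure for the fail-fast exception (at the cost of copying on every mutation).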

CyclicBarrier vs. CountDownLatch

Answer 1 (accepted):

One major difference is that CyclicBarrier takes an (optional) Runnable task which is run once the common barrier condition is met. It also allows you to get the number of clients waiting at the barrier and the number required to trigger the barrier. Once triggered, the barrier is reset and can be used again. For simple use cases (services starting, etc.) a CountDownLatch is fine. A CyclicBarrier is useful for more complex coordination tasks. An example of such a thing would be parallel computation, where multiple subtasks are involved in the computation, kind of like MapReduce.

Answer 2:

There's another difference. When using a CyclicBarrier, the assumption is that you specify the number of waiting threads that trigger the barrier. If you specify 5, you must have at least 5 threads to call await(). When using a CountDownLatch, you specify the number of calls to countDown() that will result in all waiting threads being released. This means that you can use a CountDownLatch with only a single thread. "Why would you do that?", you may say. Imagine that you are using a mysterious API coded by someone else that performs callbacks. You want one of your threads to wait until a certain callback has been called a number of times. You have no idea which threads the callback will be called on. In this case, a CountDownLatch is perfect, whereas I can't think of any way to implement this using a CyclicBarrier (actually, I can, but it involves timeouts... yuck!).

Answer 3:

The main difference is documented right in the Javadocs for CountDownLatch. Namely: A CountDownLatch is initialized with a given count. The await methods block until the current count reaches zero due to invocations of the countDown() method, after which all waiting threads are released and any subsequent invocations of await return immediately. This is a one-shot phenomenon: the count cannot be reset. If you need a version that resets the count, consider using a CyclicBarrier.
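The differences described in the answers above can be condensed into a short sketch: a CountDownLatch counted down several times from a single thread, and a CyclicBarrier that requires two threads and runs a barrier action when it trips.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.CyclicBarrier;

public class LatchVsBarrier {
    public static void main(String[] args) throws Exception {
        // CountDownLatch: any thread(s) may call countDown(); one-shot
        CountDownLatch latch = new CountDownLatch(3);
        for (int i = 0; i < 3; i++) {
            latch.countDown(); // e.g. three callbacks arriving on any thread
        }
        latch.await(); // returns immediately: count already reached zero
        System.out.println("latch released, count = " + latch.getCount());

        // CyclicBarrier: needs that many *threads* calling await();
        // reusable, and runs an optional action when the barrier trips
        CyclicBarrier barrier = new CyclicBarrier(2,
                () -> System.out.println("barrier tripped"));
        Thread worker = new Thread(() -> {
            try {
                barrier.await();
            } catch (Exception e) {
                Thread.currentThread().interrupt();
            }
        });
        worker.start();
        barrier.await(); // trips once both threads have arrived
        worker.join();
    }
}
```

Note that the latch is counted down three times by one thread, something a barrier cannot express, while the barrier could be awaited again by another pair of threads, something the latch cannot do.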
