When two or more threads modify the state of a single shared object, we can get unexpected results. For example, we expect to read the value set by Thread-A, but we read some other value instead, because the object was also modified by Thread-B.
Thread-A changes the state of the object using setVal, then Thread-B changes it as well, and Thread-A ends up printing the value that was set by Thread-B. The code snippet below shows the thread initialization.
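The original snippet is not reproduced in this text; a minimal reconstruction of what it might look like (the class name `Shared`, the field name, and the busy loop inside `setVal` are all assumptions) is:

```java
// Hypothetical reconstruction of the missing snippet; names are assumptions.
class Shared {
    private int val;

    public void setVal(int val) {
        // A deliberately slow "useless cycle" that widens the race window.
        for (int i = 0; i < 1_000_000; i++) { /* busy work */ }
        this.val = val;
    }

    public int getVal() {
        return val;
    }
}

public class RaceDemo {
    public static void main(String[] args) throws InterruptedException {
        Shared shared = new Shared();
        Thread a = new Thread(() -> {
            shared.setVal(5);
            // May print the value written by Thread-B instead of 5.
            System.out.println("Thread-A sees: " + shared.getVal());
        }, "Thread-A");
        Thread b = new Thread(() -> shared.setVal(9), "Thread-B");
        a.start();
        b.start();
        a.join();
        b.join();
    }
}
```

The output is nondeterministic: depending on scheduling, Thread-A may observe its own value or the one written by Thread-B.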
The output is going to be "9", "9". Of course, if you removed the useless busy loop from setVal you would likely get the expected result, because Thread-A would print the value before Thread-B changes it. Another way to prevent this behavior is the synchronized keyword.
Synchronized may be used with an argument, synchronized (someObject), or without one, in a method signature – which implicitly means synchronized with the ‘this’ reference as the argument. Marking a method synchronized tells all threads that the method is atomic: its body is treated as a single operation that can be executed by one and only one thread at a time.
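The equivalence of the two forms can be sketched like this (the class and its fields are hypothetical, not from the original snippet):

```java
class SharedSync {
    private int val;

    // The method form locks the monitor of 'this' for the whole body...
    public synchronized void setVal(int val) {
        this.val = val;
    }

    // ...and is equivalent to an explicit synchronized (this) block:
    public void setValBlock(int val) {
        synchronized (this) {
            this.val = val;
        }
    }

    public synchronized int getVal() {
        return val;
    }
}
```

Both methods compete for the same monitor, so only one thread at a time can be inside either of them for a given instance.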
Marking the method setVal ‘synchronized’ makes Thread-B wait until Thread-A completes the atomic operation (leaves the synchronized block).
Atomic operations work because of Java’s locking mechanism: when a thread acquires the object’s lock on entering a synchronized block, no other thread may enter a block guarded by the same lock until the lock is released.
Every Java object supports this locking mechanism; it is implemented by a monitor, which is part of every Java object. So when you hear or see the words lock or monitor, you can treat them as the same thing. There is exactly one monitor per object, and the object is specified as the argument of the synchronized operator (don’t forget that ‘synchronized’ in a method signature means synchronized (this)).
Static methods can also be synchronized, but in that case the monitor belongs to the java.lang.Class object that represents the whole class.
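As a sketch of the two equivalent static forms (the Counter class is an assumption for illustration):

```java
class Counter {
    private static int total;

    // Locks the monitor of Counter.class, not of any instance.
    public static synchronized void add(int n) {
        total += n;
    }

    // Equivalent explicit form:
    public static void addBlock(int n) {
        synchronized (Counter.class) {
            total += n;
        }
    }

    public static synchronized int total() {
        return total;
    }
}
```

Note that the class-level monitor and the instance-level monitors are independent: a static synchronized method does not block a non-static synchronized method of the same class.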
Let’s take a closer look at monitors.
To implement locking, monitors support the following operations (the first two appear as bytecode instructions if you decompile a synchronized block; the rest are methods of java.lang.Object):
monitorenter – acquires the lock
monitorexit – releases the lock
wait – moves the thread into the monitor’s wait set (which you can think of as going from the RUNNING state to the WAITING state), where it awaits a ‘notify’ call.
When the lock-holding thread calls the ‘wait’ method, the lock is released and any other thread may acquire it.
notify / notifyAll – wakes one or all threads in the monitor’s wait set (strictly speaking, just removes them from the wait set), giving them a chance to acquire the lock again.
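The wait/notify interplay can be sketched like this (the names and the boolean flag are illustrative):

```java
public class WaitNotifyDemo {
    private static final Object monitor = new Object();
    private static boolean ready = false;

    public static void main(String[] args) throws InterruptedException {
        Thread waiter = new Thread(() -> {
            synchronized (monitor) {
                while (!ready) {            // loop guards against spurious wakeups
                    try {
                        monitor.wait();     // releases the lock, enters the wait set
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        return;
                    }
                }
                System.out.println("done waiting");
            }
        }, "Waiter");
        waiter.start();

        Thread.sleep(100);                  // give the waiter time to start waiting
        synchronized (monitor) {
            ready = true;
            monitor.notify();               // removes one thread from the wait set
        }
        waiter.join();
    }
}
```

Note that both wait and notify must be called while holding the monitor’s lock, otherwise an IllegalMonitorStateException is thrown.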
How does a thread acquire and release the monitor’s lock, and how fast are these operations? The answer depends on the locking type. There are three types of locking: thin, fat and biased. There are more details in a nice answer from stackoverflow.com:
“There are 3 type of locking in HotSpot
1. Fat: JVM relies on OS mutexes to acquire lock.
2. Thin: JVM is using CAS algorithm.
3. Biased: CAS is rather expensive operation on some of the architecture. Biased locking - is special type of locking optimized for scenario when only one thread is working on object.
By default JVM uses thin locking. Later if JVM determines that there is no contention thin locking is converted to biased locking. Operation that changes type of the lock is rather expensive, hence JVM does not apply this optimization immediately. There is special JVM option -XX:BiasedLockingStartupDelay=delay which tells JVM when this kind of optimization should be applied.
Once biased, that thread can subsequently lock and unlock the object without resorting to expensive atomic instructions”
Now that we understand what an atomic operation is, we can make some guesses about atomic types like AtomicInteger and AtomicLong. Variables of these types are thread-safe by default, i.e. protected from concurrent modification by different threads at the same moment of time. What about variables marked with volatile? Operations on volatile variables are atomic, aren’t they?
Before answering, we need to clarify what happens when we declare a variable as volatile. There are two levels of storing a variable’s value: main memory and a CPU cache. For the sake of performance, when a thread reads a variable’s value from memory and modifies it, the new value is not immediately written back to memory; it is kept in the cache and eventually flushed back. With more than one thread we can get the following situation: Thread-A reads and modifies the value of variable A and keeps the result in its cache; Thread-B reads the value of variable A from main memory (Thread-B does not read from Thread-A’s cache) and gets a stale value that differs from the one saved by Thread-A.
The volatile keyword prevents this caching: with a volatile variable, Thread-B will see the actual value of A, because the value is written to main memory immediately after Thread-A modifies it. However, marking a variable as volatile does not guarantee that operations on it are atomic.
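The visibility guarantee can be illustrated with a classic stop-flag example (hypothetical, not from the original text; without volatile the worker thread might never observe the update):

```java
public class VolatileFlag {
    // volatile guarantees that a write by one thread is visible to others.
    static volatile boolean stop = false;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (!stop) {
                // busy-wait until the main thread's write becomes visible
            }
            System.out.println("stopped");
        }, "Worker");
        worker.start();

        Thread.sleep(50);
        stop = true;    // volatile write: the worker is guaranteed to see it
        worker.join();
    }
}
```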
What do you think about the code below: is it thread-safe?
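The code in question is not reproduced in this text; a typical example that fits the description (class and field names are assumptions) is a volatile counter incremented by two threads:

```java
public class VolatileCounter {
    // volatile guarantees visibility, but NOT atomicity of count++.
    static volatile int count = 0;

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) {
                count++;    // read-modify-write: not a single atomic action
            }
        };
        Thread a = new Thread(task);
        Thread b = new Thread(task);
        a.start();
        b.start();
        a.join();
        b.join();
        // Typically prints less than 200000 because some increments are lost.
        System.out.println(count);
    }
}
```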
The code above can be broken down as follows:
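Written out explicitly, a single count++ hides three separate steps (shown here single-threaded for clarity):

```java
public class IncrementSteps {
    static volatile int count = 0;

    public static void main(String[] args) {
        // count++ is really three steps, not one atomic action:
        int tmp = count;    // 1. read the current value
        tmp = tmp + 1;      // 2. compute the new value
        count = tmp;        // 3. write it back
        // Another thread can run between any two of these steps,
        // so two concurrent increments may be lost as one.
        System.out.println(count);  // prints 1 here, single-threaded
    }
}
```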
It turns out that when a value is read, modified and then written back as a new value, the result is not thread-safe. What should we use instead? The classes from the java.util.concurrent.atomic package. These classes provide concurrent access to their variables and thread-safe modification operations (e.g. increment, decrement, updating an array element). The package contains classes such as AtomicInteger, AtomicBoolean, AtomicLong, AtomicIntegerArray and so on. In effect, these classes are wrappers for Integer, Boolean, Long and other types with improved concurrency support. For example, you can think of AtomicInteger as a class that has a volatile int field with every method that uses this field marked synchronized (in reality it uses compare-and-swap rather than locks, but the mental model is close).
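As a sketch, a counter based on AtomicInteger gives the correct total even under concurrent increments (the class name is an assumption):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicCounter {
    static final AtomicInteger count = new AtomicInteger(0);

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) {
                count.incrementAndGet();    // atomic read-modify-write via CAS
            }
        };
        Thread a = new Thread(task);
        Thread b = new Thread(task);
        a.start();
        b.start();
        a.join();
        b.join();
        System.out.println(count.get());    // always 200000
    }
}
```

Unlike the volatile counter, no increments are lost here, because incrementAndGet performs the read, the addition and the write as one indivisible operation.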