
Concurrent

Programming
All threads are equal, but some
threads are more equal than other…

© Eyal Segalis, Sep. 2009
Agenda
Concurrency terminology and
concepts
◦ Atomicity, Ordering and Visibility
 Main Memory, Working Memory and
Memory Barrier
◦ Lock (aka Mutex), Semaphore and
Condition
◦ Deadlock, Livelock, Scheduling,
Starvation, Race Condition (aka Data
Race) and Thread Safety

Java Memory Model (Java 5 or later)


◦ Monitors, Synchronization, volatile and final fields
Agenda
Doug Lea’s java.util.concurrent
package
◦ Concurrent collections
◦ Active Objects using BlockingQueue
◦ Thread pools and Task scheduling
◦ Locks, Conditions and
Synchronizers
◦ Atomic variables
Demonstration: The Double
Check Locking (DCL) problem
and solutions
 Part I

CONCURRENCY
TERMINOLOGY
AND CONCEPTS
Introduction
What is a sequential program?
◦ A single thread of control that
executes one instruction and,
when it is finished, executes the
next logical instruction
What is a concurrent program?
◦ A collection of autonomous
sequential threads, executing
(logically) in parallel
Concurrency is not (only)
parallelism
Introduction
Why use concurrent programming?
◦ Natural Application Structure
 The world is not sequential
◦ Increased application throughput and
responsiveness
Not blocking the entire application due
to blocking IO
◦ Performance from multi-core
hardware
Parallel execution
◦ Distributed systems
1-tier, 2-tiers, etc.
Introduction
Speed me up! But what’s the
problem?
◦ Consider the following code:
Start with: x = y = 0
◦ Thread #1:
x = 1;
j = y;
◦ Thread #2:
y = 1;
i = x;
◦ Possible result of execution: i = j = 0
Introduction
How is this possible?
◦ Reorder of assignments by compiler
◦ Reorder of assignments by
processor
◦ Values kept in registers
◦ Values from the working memory
are not synchronized with main
memory

What?
Introduction
 But it gets even worse…
 Look carefully at this one
◦ Start with some pointers: p = q, p.x = 0
◦ Thread #1:
r1 = p;
r2 = r1.x;
r3 = q;
r4 = r3.x;
r5 = r1.x;
◦ Thread #2:
r6 = p;
r6.x = 3;
◦ Surprisingly r4 might be equal to 3 and at the
same time r5 = 0!
◦ This is because the compiler can replace the last line
by: r5 = r2;
Huh??
 This is called forward substitution
Three Aspects of
Synchronization
Atomicity
◦ Locking to obtain mutual exclusion
Visibility
◦ Ensuring that changes to object
fields made in one thread are seen
in other threads
Ordering
◦ Ensuring that you aren’t surprised
by the order in which statements
are executed
Atomicity
A set of operations can be
considered atomic when two
conditions are met
◦ Until the entire set of operations
completes, no other process can
know about the changes being
made (invisibility)
◦ If any of the operations fail then the
entire set of operations fails, and
the state of the system is restored
to the state it was in before any of
the operations began
Atomicity
Example
◦ Consider storing a value into the
memory
◦ If two processes store to the
same memory location at the
same time, and the storing
operation is not atomic, an unknown
value may be left in that memory
location

Atomicity
Atomicity cannot be implemented
at the user level alone
A critical section is a piece of
code that accesses a shared
resource that must not be
concurrently accessed by more
than one thread of execution
A critical section is implemented
in user-level to be atomic using
locks
Atomicity in Java
Accesses and updates to the
memory cells corresponding to
fields of any type except long or
double are guaranteed to be
atomic
◦ This includes fields serving as
references
to other objects (either 32/64-bit
references)
volatile long and double are also
guaranteed to be atomic
Visibility
Memory updates performed by
one thread are not necessarily
seen by other threads
◦ There are no time constraints on
memory synchronization
May never happen
◦ Some threads might see the
updated value while others won't
Visibility
Every thread in the system has
(logically) its own working
memory in which it operates
◦ Including registers, CPU cache
(L1/L2), etc.
◦ Can even reside on a different
machine
To share data between threads, a
main memory is used
◦ In practice, the separation is
often logical only
Visibility
Example
◦ Thread #1:
alive = true;
while (alive) {
// do work
}

◦ Thread #2:
alive = false;

◦ May result in an infinite loop!
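The fix for this visibility hazard, which the deck develops later, is to publish the flag through the main memory. A minimal sketch using a `volatile` flag (the class and field names here are illustrative, not from the slides):

```java
import java.util.concurrent.TimeUnit;

class VolatileFlag {
    // Without volatile, the reader thread might never observe the update
    private static volatile boolean alive = true;

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (alive) {
                // spin until the writer publishes alive = false
            }
        });
        reader.start();
        TimeUnit.MILLISECONDS.sleep(100);
        alive = false;       // volatile write: published to main memory
        reader.join(5000);   // returns promptly because the read sees the write
        System.out.println("reader terminated: " + !reader.isAlive());
    }
}
```

With a plain `boolean` the reader's loop condition may legally be hoisted out of the loop; `volatile` forbids that.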


Visibility
For updates to be seen by other
threads two conditions must be
met
◦ The writer thread must publish the
data to the main memory
◦ The reader thread must
synchronize its working memory
from the main memory

Ordering
Both compiler and processor may
perform reordering of code
pieces to boost up performance
From the point of view of the
thread performing the actions in
a method, instructions proceed
in the normal as-if-serial manner
that applies in sequential
programming
Ordering
 …But…
From the point of view of other
threads
that might be "spying" on this
thread by concurrently running
unsynchronized methods, almost
anything can happen
◦ In other words:
memory operations performed by
one thread will not necessarily be
perceived as happening in the
Ordering
Example
◦ Start with: num = 0, finished = false
◦ Thread #1:
num = 100;
finished = true;

◦ Thread #2:
if (finished)
System.out.println(num);

◦ May print 0
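One way to rule out the reordering in this example, anticipating the volatile discussion later in the deck, is to make `finished` a volatile field: the write to `num` then cannot be reordered after the volatile write. A runnable sketch of the slide's scenario (field names taken from the slide; the spin-wait is added so the output is deterministic):

```java
class OrderingDemo {
    static int num = 0;
    static volatile boolean finished = false; // orders the preceding plain write

    public static void main(String[] args) throws InterruptedException {
        Thread writer = new Thread(() -> {
            num = 100;        // plain write...
            finished = true;  // ...may not be reordered after this volatile write
        });
        Thread reader = new Thread(() -> {
            while (!finished) { } // spin until the volatile read sees true
            System.out.println("num = " + num); // guaranteed to see 100
        });
        reader.start();
        writer.start();
        writer.join();
        reader.join();
    }
}
```

If `finished` were not volatile, printing 0 (or never terminating) would be legal outcomes.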
Ordering
Memory barrier is a class of
instructions which enforce an
ordering constraint on memory
operations issued before and
after the barrier instruction
 Simplest barrier: full fence
◦ A full fence ensures that all load
and store operations prior to the
fence will have been committed
prior to any loads and stores
issued following the fence
Lock
A lock (aka mutex, short for
mutual exclusion) is a tool for
controlling access to a shared
resource by multiple threads
◦ It is the user level tool for critical
sections
Usually, a lock will provide two
atomic operations: lock and
unlock
Locking a lock will alter its state
in a way that will not permit any
other thread to lock it until it is
unlocked
Lock
The result of an attempt to lock an
already locked lock is
implementation specific
Usually, the attempter thread is
blocked until the lock is unlocked
by the original locking thread
◦ A queue is used to remember the
blocked threads awaiting the lock
◦ Whenever the lock is unlocked, the
first blocked thread will resume and
lock the lock
This is called a fair lock; other scheduling
policies exist as well
Lock
Example
◦ Thread #1:
lock.lock();
foo();
lock.unlock();

◦ Thread #2:
lock.lock(); // blocks until Thread #1 calls unlock
bar();
lock.unlock();

◦ foo and bar methods will never be
executed concurrently
Lock
What happens if the same thread
tries to lock the same lock
twice?
◦ Naively, the thread will be blocked
forever
◦ A reentrant lock is a lock that
remembers its locking thread
On the first call to lock, the lock is locked
and its owner thread is recorded
Unless the lock was already owned by
another thread
On a second call to lock by its owner
thread, a simple counter is increased
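The counter behavior can be seen directly with `java.util.concurrent.ReentrantLock` (part of the package covered in Part III); a short sketch:

```java
import java.util.concurrent.locks.ReentrantLock;

class ReentrancyDemo {
    public static void main(String[] args) {
        ReentrantLock lock = new ReentrantLock();
        lock.lock();   // first acquisition: owner recorded
        lock.lock();   // same thread: hold count incremented, no deadlock
        System.out.println("hold count = " + lock.getHoldCount());
        lock.unlock();
        lock.unlock(); // the lock is released only after matching unlocks
        System.out.println("locked = " + lock.isLocked());
    }
}
```

A non-reentrant lock would deadlock on the second `lock()` call.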
Lock
Hmmm…. This is cool…
But can be very inefficient!
◦ Consider a resource that is changed
rarely but accessed frequently
◦ If we use a simple lock to protect it, all
readers will block each other for no
reason
Read-Write-Lock to the rescue!
◦ A read-write-lock is a lock that permits
concurrent reading at the same time,
but only one writer
◦ During a write operation, no reading is
permitted
Lock
How is it used?
Look at the following code
◦ Writer Threads:
lock.lockWrite();
// do some writing to shared resource
lock.unlockWrite();
◦ Reader Threads:
lock.lockRead();
// do some reading from shared resource
lock.unlockRead();

lockRead will not block as long
as lockWrite was not called
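In Java this pattern is provided by `java.util.concurrent.locks.ReentrantReadWriteLock`. A minimal sketch showing that a second reader is admitted while a writer is refused (using `tryLock` so the outcome is deterministic):

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

class RwLockDemo {
    public static void main(String[] args) throws InterruptedException {
        ReentrantReadWriteLock rw = new ReentrantReadWriteLock();
        rw.readLock().lock(); // main thread holds a read lock

        boolean[] results = new boolean[2];
        Thread other = new Thread(() -> {
            results[0] = rw.readLock().tryLock();   // concurrent read: allowed
            results[1] = rw.writeLock().tryLock();  // write while readers exist: refused
            if (results[0]) rw.readLock().unlock();
        });
        other.start();
        other.join();
        System.out.println("second read allowed: " + results[0]);
        System.out.println("write allowed: " + results[1]);
        rw.readLock().unlock();
    }
}
```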
Lock
Atomicity
But what about visibility and
ordering?
◦ The behavior of a lock regarding
these issues is implementation
specific and depends on the
language memory model
◦ A lock is often also a memory
barrier to ensure ordering of
statements
◦ A lock is often implemented to
synchronize the working and main
memory
Semaphore
A Semaphore is a lock that enables
locking for a predefined maximum
number of times by different threads
 The semaphore blocks the locking
thread iff the maximum number of
locks has been reached
◦ It is unblocked whenever one of the
locking threads unlocks it
 This is useful if you want to limit the
number of accesses to a shared resource
◦ e.g. a DB connection that supports
concurrent requests
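The permit-counting behavior maps directly onto `java.util.concurrent.Semaphore`; a short sketch of the connection-limiting idea above (the pool size of 2 is an illustrative assumption):

```java
import java.util.concurrent.Semaphore;

class SemaphoreDemo {
    public static void main(String[] args) {
        Semaphore permits = new Semaphore(2); // e.g. at most 2 concurrent DB requests
        permits.acquireUninterruptibly();
        permits.acquireUninterruptibly();     // both permits now taken
        System.out.println("third acquire ok: " + permits.tryAcquire());
        permits.release();                    // one user finishes
        System.out.println("after release ok: " + permits.tryAcquire());
    }
}
```

A semaphore initialized with one permit behaves like a (non-reentrant) lock.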
Condition
Locks solve the problem of
updating and accessing a shared
resource concurrently
Locks do not solve the problem of
synchronization points between
threads
Suppose you have one thread
responsible for fetching data and
another thread responsible for
analyzing it

Condition
Conditions (aka condition variables)
provide a means for one thread to
suspend execution (to "wait") until
notified by another thread that
some state condition may now be
true
The access to the shared state
information occurs in different
threads, therefore it must be
protected using a lock
The idea is that the condition is
always used together with a lock
Condition
Here is how it works
◦ Thread #1:
lock.lock();
// do stuff
condition.wait(); // atomically unlocks the lock and suspends; re-locks on wakeup
// do more stuff
lock.unlock();

◦ Thread #2:
lock.lock();
// do stuff
condition.notify();
// do more stuff
lock.unlock();

T1: lock … running … wait (unlock, blocked) … re-lock … running
T2: lock (blocked) … running … notify … unlock
Condition
More realistic example

◦ Reader Thread:
while (alive) {
  newInput = getInput(); // IO
  lock.lock();
  queue.push(newInput);
  newDataArrived = true;
  condition.notify();
  lock.unlock();
}

◦ Analyzer Thread:
while (alive) {
  lock.lock();
  while (!newDataArrived) {
    condition.wait();
  }
  newDataArrived = false;
  newInput = queue.pop();
  lock.unlock();
  // analyze newInput
}
Condition
If more than one thread is waiting
on a condition, the notify (aka
signal) call will wakeup only one
thread
◦ Usually the first one to wait
This behavior is very useful for
thread-pools, but that’s about
it…
Most of the time, you would want
all awaiting threads to wake up
◦ For that matter use notifyAll
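The reader/analyzer pattern above can be sketched with the explicit `Lock`/`Condition` API from `java.util.concurrent.locks` (covered in Part III); this is an illustrative reduction to one item, with the wait inside a loop as the slides require:

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

class ConditionDemo {
    static final ReentrantLock lock = new ReentrantLock();
    static final Condition dataArrived = lock.newCondition();
    static final Queue<String> queue = new ArrayDeque<>();

    public static void main(String[] args) throws InterruptedException {
        Thread analyzer = new Thread(() -> {
            lock.lock();
            try {
                while (queue.isEmpty()) {          // loop guards against spurious wake-ups
                    dataArrived.awaitUninterruptibly();
                }
                System.out.println("analyzed: " + queue.poll());
            } finally {
                lock.unlock();
            }
        });
        analyzer.start();

        lock.lock();
        try {
            queue.add("input");
            dataArrived.signalAll();               // wake all waiters, like notifyAll
        } finally {
            lock.unlock();
        }
        analyzer.join();
    }
}
```

Note the state check is a `while`, not an `if`: the condition may have changed, or the wake-up may be spurious.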
Liveness
A concurrent application's ability
to execute in a timely manner is
known as its liveness
◦ Deadlock
◦ Livelock
◦ Starvation
A common metaphor to
demonstrate those concepts is
the dining philosophers problem
Dining Philosophers
Five philosophers sitting at a circular
table doing one of two things:
eating or thinking
Each philosopher has a plate of
spaghetti, and a fork is placed in
between each
pair of adjacent philosophers
A philosopher must eat
with two forks
Each philosopher can
only use the forks on his
immediate left and right
Deadlock
A deadlock is a situation wherein
two or more competing actions are
waiting for the other to finish, and
thus neither ever does
◦ “…When two trains approach each
other at a crossing, both shall come
to a full stop and neither shall start
up again until the other has gone…”
(a law in Kansas)
In the dining philosophers problem,
a deadlock will arise if all
philosophers pick up their left fork
at the same time
Deadlock
Example
◦ Thread #1:
lock1.lock();
// do something
lock2.lock();
// more stuff
lock2.unlock();
lock1.unlock();

◦ Thread #2:
lock2.lock();
// whatever
lock1.lock();
// more of that
lock1.unlock();
lock2.unlock();

Deadlock !!!
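A standard cure, not spelled out on the slide, is to impose a global lock ordering: if every thread acquires lock1 before lock2, the wait-for cycle cannot form. A runnable sketch of the fixed version:

```java
class LockOrdering {
    static final Object lock1 = new Object();
    static final Object lock2 = new Object();

    // Both threads acquire lock1 before lock2, so a cycle can never form
    static void doWork(String name) {
        synchronized (lock1) {
            synchronized (lock2) {
                System.out.println(name + " done");
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> doWork("t1"));
        Thread t2 = new Thread(() -> doWork("t2"));
        t1.start(); t2.start();
        t1.join(); t2.join(); // always completes: no deadlock possible
    }
}
```

The same idea resolves the dining philosophers: number the forks and always pick up the lower-numbered one first.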
Livelock
A livelock is similar to a deadlock,
except that the states of the
processes involved in the
livelock constantly change with
regard to one another, none
progressing
Think of two people meeting in a
narrow corridor: each tries to be
polite by moving aside to let the
other pass, but they end up
swaying from side to side
Scheduling and Starvation
A scheduler is a module responsible
for deciding which thread / process
to execute on each CPU at any
given time
Starvation is a situation in which a
thread / process is unable to gain
regular access to shared resource
and is unable to make progress
Usually, the shared resource is CPU
time, and the starvation is caused
by the scheduler
Starvation
In the dining philosophers
problem, starvation will occur for
the following algorithm:
◦ Philosophers #1 and #3 eat in
turns
◦ Philosopher #2 eats only if he sees
both forks on the table for some
time
Philosopher #2 will starve
because there is no time at
which both forks are resting on
the table
Race Condition and Thread
Safety
Race condition (aka data race) is a
situation in which the result of a
set of threads or processes is
unexpectedly and critically
dependent on some shared state
◦ Result depends on scheduling
sequence
◦ Result depends on ordering of code
pieces
◦ Result depends on visibility of memory
cells
Thread safe code, ftw!
A program is called thread safe if it
functions correctly during simultaneous
execution by multiple threads
 Part II

JAVA MEMORY MODEL


Introduction
A memory model describes, given
a program and an execution
trace of that program, whether
the execution trace is a legal
execution of the program
Together with the description of
single-threaded execution of
code, the Java memory model
provides the semantics of the
Java programming language
The Java memory model is specified
as part of the Java Language
Specification
Introduction
The original Java memory model,
developed in 1995, was widely
perceived as broken, preventing
many runtime optimizations and
not providing strong enough
guarantees for code safety
It was updated through the Java
Community Process, as Java
Specification Request 133 (JSR-
133), which took effect in 2004,
for Tiger (Java 5.0)
Monitors
A monitor is an object intended to
be used safely by more than one
thread
In Java, every Object is associated
with a monitor that supports two
kinds of synchronization
◦ Mutual exclusion (lock)
◦ Cooperation (condition)

Synchronization
 To use the lock associated with objects, Java
offers the following syntax
 Object myMonitor = new Object();
 synchronized(myMonitor) {
 // some critical section code
 }

 In practice, this code is interpreted to (pseudo-code)
 Object myMonitor = new Object();
 myMonitor.getLock().lock(); // acquire
 try {
 // some critical section code
 } finally {
 myMonitor.getLock().unlock(); // release
 }
Synchronization
 Java also offers some syntactic sugar
 synchronized void myMethod() {
 // critical section code
 }

 Is interpreted to
 void myMethod() {
 synchronized(this) {
 // critical section code
 }
 }

Synchronization
 And also
 class MyClass { …
 static synchronized void myMethod() {
 // critical section code
 }
 }

 Is interpreted to
 …
 static void myMethod() {
 synchronized(MyClass.class) {
 // critical section code
 }
 }
 }
Synchronization
How strong is this lock?
◦ As any lock, any piece of code
synchronized using the same
monitor is atomic
◦ It is a reentrant lock
◦ Java also puts constraints on
reordering
 Synchronized blocks with the same
monitor cannot be reordered with
respect to each other
Yet, reordering is permitted for the
rest of the code
 including moving statements into
the synchronized block
Synchronization
And visibility?
◦ All memory updates before a
release are visible after its
matching acquire
In other words, all updates inside a
synchronized block are visible to
any later synchronized block on
the same monitor
You can think of a release as a point
in which working memory is
published to main memory, and an
acquire as a point in which the
working memory is synchronized
from the main memory
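The release/acquire guarantee is what makes a synchronized counter both atomic and visible across threads; a small sketch (class and counts are illustrative):

```java
class SynchronizedCounter {
    private int count = 0;

    // The monitor's release/acquire makes each increment atomic
    // and its result visible to the next acquiring thread
    synchronized void increment() { count++; }
    synchronized int get() { return count; }

    public static void main(String[] args) throws InterruptedException {
        SynchronizedCounter c = new SynchronizedCounter();
        Thread[] workers = new Thread[4];
        for (int i = 0; i < workers.length; i++) {
            workers[i] = new Thread(() -> {
                for (int j = 0; j < 10_000; j++) c.increment();
            });
            workers[i].start();
        }
        for (Thread w : workers) w.join();
        System.out.println("count = " + c.get()); // always 40000
    }
}
```

Without `synchronized`, lost updates would make the final count unpredictable.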
Volatile Fields
 In terms of atomicity, visibility, and
ordering, declaring a field as volatile is
like using a little fully synchronized
class protecting only that field, as in

 final class VFloat {


 private float value;
 final synchronized void set(float f) {
 value = f;
 }
 final synchronized float get() {
 return value;
 }
 }
Volatile Fields
Because no locking is involved,
declaring fields as volatile is likely to
be cheaper than using
synchronization, especially for reads
◦ If volatile fields are accessed frequently
though, it is more efficient to
synchronize the entire block
Attention
◦ Incrementing a volatile is not atomic
If two threads try to increment (e.g. i++) a
volatile at the same time, one of the
updates might get lost
◦ No way to make elements of an array be
volatile
Volatile Fields
So, what does volatile do?
◦ Reads and writes go directly to
memory - not cached in registers
(or anywhere else)
◦ volatile longs and doubles are
atomic
◦ Reordering of volatile accesses is
restricted
Volatile accesses cannot be reordered
with other volatile accesses
Since JSR-133, ordinary accesses cannot
be reordered past volatile accesses
either (acquire / release semantics)
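Since a volatile `i++` can still lose updates, the usual remedy is an atomic variable from `java.util.concurrent.atomic` (previewed here; the package is covered in Part III):

```java
import java.util.concurrent.atomic.AtomicInteger;

class AtomicCounter {
    public static void main(String[] args) throws InterruptedException {
        AtomicInteger counter = new AtomicInteger();
        Thread[] workers = new Thread[4];
        for (int i = 0; i < workers.length; i++) {
            workers[i] = new Thread(() -> {
                // incrementAndGet is a single atomic read-modify-write,
                // unlike i++ on a volatile int
                for (int j = 0; j < 10_000; j++) counter.incrementAndGet();
            });
            workers[i].start();
        }
        for (Thread w : workers) w.join();
        System.out.println("counter = " + counter.get()); // no lost updates
    }
}
```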
Final Fields
 Fields declared final are initialized once, but
never changed under normal circumstances
 Optimization of final fields
◦ Java allows aggressive optimization of
final fields
◦ Hoisting of reads of final fields across
synchronization and unknown method
calls
still maintains immutability
A thread that can only see an object after
it has been completely initialized (i.e.
its constructor finished) is guaranteed to
see the correctly initialized values for that
object's final fields
Final Fields
final fields are not that final
◦ Using a PrivilegedAction it is possible to
dynamically change a final field
declaration to allow modification
◦ This is done using the
Field.setAccessible(true) reflection
method
◦ However, there is no guarantee that this
change will be reflected anywhere
Especially if the compiler replaced the field
with a constant
◦ Use this only on object creation and with
a lot of care that no other thread has
access to the field
Wait / Notify
The monitor associated with any Java
Object is also a Condition
And the syntax is…

◦ Thread #1:
 synchronized(myMonitor) {
 myMonitor.wait(); // atomically releases the lock and suspends; re-acquires on wakeup
 }

◦ Thread #2:
 synchronized(myMonitor) {
 myMonitor.notifyAll();
 }
Wait / Notify
A call to wait / notify outside a
synchronization block will result in
IllegalMonitorStateException
Recall the differences between notify /
notifyAll
The call to wait can be time limited
◦ When wait returns, there is no way to
know whether it was woken up or the
timeout was reached
The JVM is allowed to wake up a waiting
thread for no reason
◦ This is called a spurious wake-up
Sleep and Yield
Class Thread contains also the
sleep and yield static methods
Both have no impact on monitors,
but on scheduling only
Sleep causes the Thread not to be
scheduled for execution for a
minimum amount of time
(unless interrupted)
Yield only suggests that the scheduler
perform a context switch
Thread Life Cycle
A Thread in Java has the following life-
cycle
◦ It starts after the call to its start()
method
Calling start synchronizes with the
thread's actual start, so no reordering or
visibility issues exist between the
two
◦ It ends after its run method returns
Either the method returns or throws
an exception
The end of a thread is synchronized
with any attempt to detect its
termination (e.g. join())
Thread Interruption
Thread can be interrupted using the
interrupt() method in the class
Thread
Interrupting a thread will cause one of
the following to that thread
◦ If thread is blocked in the invocation of a
wait, join or a sleep methods it will
throw an InterruptedException
◦ If thread is blocked on an IO operation it
might throw some interruption
exception (too long for here…)
On some OS it might also interrupt a system
call
Thread Interruption
If the interruption status flag is set, a call
to either wait, join or sleep will
immediately throw InterruptedException
and then the status will be cleared
A call to the static interrupted()
method of the class Thread will
return the current thread interruption
status, and clear it right afterwards
Confusingly, the isInterrupted()
method of class Thread will return
the interruption status without
altering its state
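The difference between the two query methods is easy to demonstrate on the current thread itself:

```java
class InterruptStatus {
    public static void main(String[] args) {
        Thread.currentThread().interrupt(); // set our own interruption status
        // isInterrupted() only reads the flag
        System.out.println(Thread.currentThread().isInterrupted()); // true
        System.out.println(Thread.currentThread().isInterrupted()); // still true
        // the static interrupted() reads AND clears the flag
        System.out.println(Thread.interrupted());                   // true
        System.out.println(Thread.interrupted());                   // false - cleared
    }
}
```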
Deprecated Thread
Methods
Class Thread had many design
mistakes when it was first published
For backwards compatibility those
mistakes still exists as deprecated
methods
First deprecated mechanism is stop
◦ Its purpose was to enable killing a thread
brutally
◦ This is unsafe since it breaks all monitors
that the thread is locking
◦ The right solution is a user-level flag for
thread’s liveness, that is checked
periodically by the thread
Deprecated Thread
Methods
Another deprecated mechanism is
Thread.suspend() and
Thread.resume()
◦ Its purpose was to enable an external thread
to decide on another thread's scheduling
◦ It is deprecated since it promotes
deadlocks
The suspended thread still locks its monitors
The call to resume() might get lost before the
call to suspend() and thus the thread will
never resume
◦ The right solution is to use the new
java.util.concurrent.lock.LockSupport
It contains the park() and unpark() static methods
Thread Groups
A thread group represents a set
of threads
Thread group is a composite, so it
can contain other thread groups
as well
A thread is allowed to access
information about its own thread
group, but not to access
information about its thread
group's parent thread group or
any other thread groups
Thread Specific Storage
(TLS)
Use static or global memory that is
localized to a Thread
◦ Normally all heap data is shared across
all threads
◦ Write parallel code that feels like a
single thread
◦ No locking is needed
Classical example
◦ errno: Global storage of the last system
call result
Implementation
◦ Simply use the ThreadLocal class
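A minimal `ThreadLocal` sketch: each thread gets its own independently initialized copy, so no locking is needed (the sequential joins are only there to make the output order deterministic):

```java
class ThreadLocalDemo {
    // Each thread sees its own StringBuilder, created on first access
    static final ThreadLocal<StringBuilder> buffer =
            ThreadLocal.withInitial(StringBuilder::new);

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            buffer.get().append(Thread.currentThread().getName());
            System.out.println(Thread.currentThread().getName()
                    + " sees: " + buffer.get());
        };
        Thread a = new Thread(task, "A");
        Thread b = new Thread(task, "B");
        a.start(); a.join();
        b.start(); b.join(); // B's buffer is fresh: A's append is not visible to it
    }
}
```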
Word Tearing
 Java guarantees that every field and array
element is considered distinct
◦ Updates to one field or element must not interact
with reads or updates of any other field or
element
 In particular, two threads that update
adjacent elements of a byte array
separately must not interfere or interact
even if no synchronization is defined
 Some processors do not provide the ability to
write to a single byte
◦ It would be illegal to implement byte array
updates on such a processor by simply reading
an entire word, updating the appropriate byte,
and then writing the entire word back
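The guarantee can be exercised directly: two threads writing disjoint halves of a byte array with no synchronization must leave every element intact (the joins provide the visibility needed to check the result):

```java
class AdjacentBytes {
    public static void main(String[] args) throws InterruptedException {
        byte[] data = new byte[1024];
        // Two threads update adjacent regions with no synchronization at all
        Thread low  = new Thread(() -> { for (int i = 0;   i < 512;  i++) data[i] = 1; });
        Thread high = new Thread(() -> { for (int i = 512; i < 1024; i++) data[i] = 2; });
        low.start(); high.start();
        low.join(); high.join(); // join happens-before: writes are visible here
        boolean intact = true;
        for (int i = 0; i < 1024; i++) {
            if (data[i] != (i < 512 ? 1 : 2)) intact = false;
        }
        System.out.println("no tearing: " + intact); // Java guarantees true
    }
}
```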
 Part III

JAVA.UTIL.CONCURRENT

“The java.util.concurrent package in JDK 1.5
is worth its weight in Internet porn”
From studdugie on Java
Introduction
The util.concurrent package was
originally written as open source by
Doug Lea
In JDK 1.5, Sun imported it into the JDK
and since then it is widely used and
recommended
The goal of the package was described
as
◦ “…to make concurrent programs clearer,
shorter, faster, more reliable, more
scalable, easier to write, easier to read,
and easier to maintain…”
◦ And it does so pretty well!
java.util.concurrent
◦ Presentation from JavaOne 2006
Asynchronous Method Invocation
(AMI)
Invoke a method on a different thread
than the callers thread
◦ Method returns immediately
◦ Several ways to retrieve the result
 Method can return a Future Object
 Can tell whether the method finished its
execution
Can wait for method to finish and return its
result
Sometimes the execution process can
also be cancelled
 Method can receive a callback reference to
notify when it finishes
Can also be preregistered as a listener, thus
enabling more than one listener
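The Future-based variant maps directly onto `ExecutorService.submit`; a minimal sketch:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

class AmiDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        // submit() returns immediately with a Future for the eventual result
        Future<Integer> result = executor.submit(() -> 6 * 7);
        // ... the caller is free to do other work here ...
        System.out.println("result = " + result.get()); // blocks until done
        executor.shutdown();
    }
}
```

`Future` also provides `isDone()` and `cancel()`, matching the bullet points above.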
Active Object
Decouple method execution from
method invocation using AMI
◦ Store a queue of requests
Potentially “smart” queue (e.g. priority
queue)
Potentially Command requests
◦ Each method execution is replaced by
adding a request to the queue
◦ Active Object runs in its own thread,
taking requests from the queue and
invoking them
Response can be returned as discussed in AMI
Active Object
 Pros
◦ Write the critical (synchronous) section once
(avoid race conditions)
◦ Simple interface to invoke Commands
◦ Asynchronous
◦ Easy to debug
Compared to asynchronous code
 Cons
◦ Non-intuitive response method
◦ Difficult to debug
Compared to synchronous code
 Implementation
◦ Use BlockingQueue
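A hand-rolled sketch of the pattern, assuming an illustrative `compute` method (not from the slides): requests are enqueued on a `LinkedBlockingQueue` and executed one at a time by the object's own worker thread, with results returned via `FutureTask`:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.FutureTask;
import java.util.concurrent.LinkedBlockingQueue;

class ActiveObject {
    private final BlockingQueue<Runnable> requests = new LinkedBlockingQueue<>();
    private final Thread worker = new Thread(() -> {
        try {
            while (true) requests.take().run(); // serialize all requests
        } catch (InterruptedException e) { /* shutdown */ }
    });

    ActiveObject() { worker.start(); }

    // Invocation: enqueue a request and return a Future immediately
    FutureTask<Integer> compute(int x) {
        FutureTask<Integer> task = new FutureTask<>(() -> x * x);
        requests.add(task);
        return task;
    }

    void shutdown() { worker.interrupt(); }

    public static void main(String[] args) throws Exception {
        ActiveObject obj = new ActiveObject();
        System.out.println("9*9 = " + obj.compute(9).get()); // waits for the worker
        obj.shutdown();
    }
}
```

Because only the worker thread ever runs requests, the method bodies need no internal locking.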
 Part IV

DOUBLE CHECK
LOCKING
Singleton
 Consider
the naïve Singleton
implementation

 public final class Singleton {


 private static final Singleton INSTANCE =
new Singleton();

 private Singleton() { … }

 public static Singleton instance() {


 return INSTANCE;
 }
 }

 But what about lazy-fetched Singleton?


Singleton
 This is straightforward
 public final class Singleton {
 private static Singleton INSTANCE = null;

 private Singleton() { … }

 public static synchronized Singleton instance() {


 if (INSTANCE == null) {
 INSTANCE = new Singleton();
 }
 return INSTANCE;
 }
 }

 But this is too expensive


◦ Synchronizing all reads just for the one-time
instantiation
Double Check Locking
 public final class DoubleLockMechanism {

 private static DoubleLockMechanism INSTANCE = null;

 private DoubleLockMechanism() { … }

 public static DoubleLockMechanism instance() {
 if (INSTANCE == null) {
 synchronized(DoubleLockMechanism.class) {
 if (INSTANCE == null) {
 INSTANCE = new DoubleLockMechanism();
 }
 }
 }
 return INSTANCE;
 }
 }

Ø Doesn't work!
Ø Might return an unconstructed instance
Double Check Locking –
Wrong
public static Singleton getInstance() {

 if (instance == null) {
 synchronized(Singleton.class) {
 Singleton inst = instance;
 if (inst == null) {
 synchronized(Singleton.class) {
 inst = new Singleton();
 }
 }
 instance = inst;
 }
 }
 return instance;
}


Double Check Locking -
Wrong
 private volatile boolean initialized = false;
 private static Singleton instance;
  
 public static Singleton getInstance() {
 if (instance == null || !initialized) {
 synchronized(Singleton.class) {
 if (instance == null) {
 instance = new Singleton();
 }
 }
 initialized = (instance != null);
 }
 return instance;
 }
Double Check Locking -
Wrong
 public final class FullMemoryBarrierSingleton {
 private static boolean initialized = false;
 private static Resource resource = null;
 private static Object lock = new Object();
 public static Resource getResource() {
 if (!initialized) {
 synchronized (lock) {
 if (!initialized && resource == null) resource = new Resource();
 }
 synchronized (lock) {
 initialized = true;
 }
 }
 return resource;
 }
 }

 The JMM gives the writer a memory barrier here, but the unsynchronized reader still has no read barrier, so this is still broken
Double Check Locking –
Solution 1
 Remember this?
 public final class Singleton {
 private static final Singleton INSTANCE = new
Singleton();

 private Singleton() { … }

 public static Singleton instance() {


 return INSTANCE;
 }
 }

 This is a lazy implementation: the
class loader initializes INSTANCE
only when the class is first used

Double Check Locking –
Solution 2
synchronized guarantees that only
one thread can enter a block of
code
◦ It doesn't guarantee that variable
modifications done within the synchronized
section will be visible to other threads
◦ Only a thread that enters the
synchronized block is guaranteed to see
the changes
◦ This is the reason why double checked
locking is broken - it is not synchronized
on the reader's side
◦ The reading thread may see that the singleton is
not null while its fields are not yet initialized
Double Check Locking –
Solution 2
Ordering is provided by volatile
◦ volatile guarantees ordering; for instance,
a write to a volatile singleton static field
guarantees that writes to the singleton
object are finished before the write
to the volatile static field
◦ It doesn't prevent creation of two
singleton objects; that is provided by
synchronized
Final static fields of a class don't need to
be volatile, the JVM takes care of this
problem
So the preferred code for a lazy singleton
Double Check Locking –
Solution 2
 public final class Singleton {
 private Singleton() {
 // Initialize object
 }
 private static class SingletonHolder {
 private static final Singleton INSTANCE = new
Singleton();
 }
 public static Singleton instance() {
 return SingletonHolder.INSTANCE;
 }
}

 Bill Pugh
Double Check Locking –
Solution 3
 public final class DoubleLockMechanism {

 private static volatile DoubleLockMechanism INSTANCE = null;

 private DoubleLockMechanism() { … }

 public static DoubleLockMechanism instance() {
 if (INSTANCE == null) {
 synchronized(DoubleLockMechanism.class) {
 if (INSTANCE == null) {
 INSTANCE = new DoubleLockMechanism();
 }
 }
 }
 return INSTANCE;
 }
 }
Double Check Locking –
Solution 4
 public enum Singleton {
 INSTANCE;
}

Joshua Bloch
References
Wikipedia
http://java.sun.com/docs/books/jls/thir
http://gee.cs.oswego.edu/dl/cpj/jmm.h
http://www.softwaresummit.com/2003
http://docs.huihoo.com/javaone/2006/
http://developers.sun.com/learning/jav
