Thursday, June 30, 2011

Concurrency: A Method for Reducing Contention and Overhead in Worker Queues for Multithreaded Java Applications

Introduction
Many server applications, such as Web servers, application servers, database servers, file servers, and mail servers, maintain worker queues and thread pools to handle large numbers of short tasks that arrive from remote sources. In general, a "worker queue" holds all the short tasks that need to be executed, and the threads in the thread pool retrieve the tasks from the worker queue and complete the tasks.

Since multiple threads act on the worker queue, adding tasks to and deleting tasks from the worker queue needs to be synchronized, which introduces contention in the worker queue. This article explains the contention involved with the traditional approach (using a common queue for the thread pool) and helps you reduce the contention by maintaining one queue per thread. This article also explains a work stealing technique that is important for utilizing the CPU effectively in multicore systems.

Note: The source code for the examples described in this article can be downloaded here: workerqueue.zip

Common Worker Queue: The Traditional Approach

Today, most server applications use a common worker queue and thread pool to exploit the concurrency provided by the underlying hardware. As shown in Figure 1, server applications use a common worker queue to hold short tasks that arrive from remote sources. A pool of threads acts on the worker queue by retrieving tasks from the worker queue and running the tasks to completion. Threads are blocked on the queue if there is no task in the worker queue.

This method of using a common worker queue resolves the issues created by earlier approaches, such as creating a thread per task, which caused lots of threads to be spawned. However, the common worker queue becomes a bottleneck when the number of tasks is high and the task times are very short. The single-background-thread approach also has flaws when an application has a huge number of short, independent tasks.

Figure 1. Common Worker Queue.


Listing 1 shows how you can create a common worker queue with just a few lines of code.

Listing 1. Creating a Common Worker Queue


 /*
  * Defines a common worker queue and a pool of
  * threads to execute tasks from remote sources.
  */

public class SimpleWorkQueue {
    private final PoolWorker[] threads;
    private final BlockingDeque<Runnable> queue;
    private volatile boolean stopNow = false;  // flag to shut the workers down

    public SimpleWorkQueue(int nThreads) {
        queue = new LinkedBlockingDeque<Runnable>();
        threads = new PoolWorker[nThreads];
        for (int i = 0; i < nThreads; i++) {
            threads[i] = new PoolWorker();
            threads[i].start();
        }
    }

    /*
     * Worker thread to execute remote tasks.
     */
    private class PoolWorker extends Thread {
        /*
         * Retrieves a task from the worker queue and executes it.
         * The thread waits in takeLast() if there is no task in the queue.
         */
        public void run() {
            while (!stopNow) {
                try {
                    Runnable r = queue.takeLast();
                    r.run();
                } catch (java.lang.Throwable e) {
                    // ignore and keep the worker alive
                }
            }
        }
    }
}

As shown in Listing 1, the SimpleWorkQueue class initializes a deque and starts a fixed number of threads at startup. Each thread then calls queue.takeLast() in a loop, retrieving a task from the worker queue if one is available or waiting for a new task to arrive if the queue is empty.

Once a task is retrieved, each thread then calls the run method, r.run(), of the task.
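The listing does not show how tasks enter the queue. The following self-contained sketch (class and method names are mine, not from the article) completes the picture under one assumption: producers enqueue at the head with putFirst() while the pool workers drain from the tail with takeLast(), giving FIFO order.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingDeque;
import java.util.concurrent.atomic.AtomicInteger;

/* Minimal sketch of the common-worker-queue pattern (names assumed):
   producers call putFirst(), the pool workers block in takeLast(). */
public class CommonQueueDemo {
    public static int runDemo(int nThreads, int nTasks) throws InterruptedException {
        LinkedBlockingDeque<Runnable> queue = new LinkedBlockingDeque<>();
        AtomicInteger done = new AtomicInteger();
        CountDownLatch latch = new CountDownLatch(nTasks);
        for (int i = 0; i < nThreads; i++) {
            Thread worker = new Thread(() -> {
                while (true) {
                    try {
                        queue.takeLast().run();   // blocks while the queue is empty
                    } catch (InterruptedException e) {
                        return;
                    }
                }
            });
            worker.setDaemon(true);
            worker.start();
        }
        for (int i = 0; i < nTasks; i++) {
            queue.putFirst(() -> { done.incrementAndGet(); latch.countDown(); });
        }
        latch.await();                            // wait until every task has run
        return done.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("completed=" + runDemo(4, 1000));
    }
}
```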

Worker Queues per Thread

The approach above is simple and improves performance over the traditional approach of creating a thread for each incoming task. However, as shown in Figure 2, this method creates contention: multiple threads compete for a single work queue to get their tasks, and the condition worsens as the number of threads (cores) grows.

Figure 2. Contention in a Common Worker Queue.


Today, with the advent of multicore processors, it has become a challenge for software applications to utilize the underlying cores effectively. (For example, IBM's Power7, Oracle's UltraSPARC, and Intel's Nehalem are multicore processors capable of running multiple threads.)

There are various solutions available for overcoming the contention in the common worker queue approach:
  • Using lock-free data structures
  • Using concurrent data structures with multiple locks
  • Maintaining multiple queues to isolate the contention
In this article, we explain how to maintain multiple queues—a queue-per-thread approach—to isolate the contention, as shown in Figure 3.

Figure 3. Queue-per-Thread Queue.


In this approach, each thread has its own worker queue and can retrieve tasks only from its own queue, not from any other queue. This isolates the contention involved in retrieving tasks, because no thread competes for another thread's queue. It also ensures that a thread does not sleep while tasks remain in its own worker queue, which utilizes the cores effectively.

Listing 2 shows how you can easily migrate from the common worker queue approach to the queue-per-thread approach by making just a few modifications to the code that was shown in Listing 1. In Listing 2, the constructor initializes multiple queues (equal to the number of threads) at startup and each thread maintains an ID called thread_id. Then, thread_id is used to isolate the contention by helping each thread retrieve tasks from its own queue.

Listing 2. Creating a Queue-per-Thread Queue


 /* Modification: initialize one queue per thread */
queue = new LinkedBlockingDeque[nThreads];
threads = new PoolWorker[nThreads];
for (int i = 0; i < nThreads; i++) {
    queue[i] = new LinkedBlockingDeque<Runnable>();
    threads[i] = new PoolWorker(i);   // each worker stores its own thread_id
    threads[i].start();
}

/* Each worker retrieves tasks only from its own queue */
Runnable r = (Runnable) queue[thread_id].takeLast();

Queue-per-Thread Queue with Work Stealing
Although the queue-per-thread approach greatly reduces the contention, it does not guarantee that the underlying cores are used effectively all the time. For example, what happens if a couple of queues empty long before the others? This is a common situation, in which only a few threads execute tasks while the threads whose queues have emptied wait for new tasks to arrive. This can happen due to the following:
  • Unpredictable nature of the scheduling algorithm
  • Unpredictable nature of the incoming tasks (short versus long)
A solution to this problem is work stealing.
Work stealing lets one thread steal work from another thread's queue when it finds that its own queue is empty. This ensures that all the threads (and, in turn, the cores) are busy all the time. Figure 4 shows a scenario where Thread 2 steals work from Thread 1's queue because its own queue is empty. Work stealing can be implemented with standard queues, but using a deque greatly reduces the contention involved in stealing the work:
  • Only the worker thread accesses the head of its own deque, so there is never contention for the head of a deque.
  • The tail of a deque is accessed only when a thread runs out of work, so there is rarely contention for the tail of any thread's deque either.

Figure 4. Work Stealing.


Listing 3 shows how you can steal work from other queues with just a few modifications to the queue-per-thread approach. As shown, each thread calls pollLast() instead of takeLast() because a thread should not block on its own queue when there is no task in it. Once a thread finds that its own queue is empty, it steals work from another queue by calling pollFirst() on the other thread's queue.


Listing 3. Implementing Work Stealing

/* Do not block if there is no task in the current queue */
r = (Runnable) queue[thread_id].pollLast();

if (null == r) {
    /* There is no task in the current queue;
       steal one from another thread's queue */
    r = stealWork(thread_id);
}

/*
 * Method to steal work from other queues.
 */
Runnable stealWork(int index) {
    for (int i = 0; i < nThreads; i++) {
        if (i != index) {
            Object o = queue[i].pollFirst();
            if (o != null) {
                return (Runnable) o;
            }
        }
    }
    return null;
}
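Because pollLast() no longer blocks, a worker that finds every queue empty needs something to do; a common choice is to yield briefly and retry. The following self-contained sketch (all names are mine, not from the article's download) wires the pieces together and shows that even when every task lands on one queue, the other workers drain it by stealing from the opposite end.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingDeque;
import java.util.concurrent.atomic.AtomicInteger;

/* Work-stealing sketch: every task is put on queue 0, yet all workers
   participate because idle workers steal from the head of other queues
   while each owner consumes from the tail of its own queue. */
public class StealingDemo {
    @SuppressWarnings("unchecked")
    public static int runDemo(int nThreads, int nTasks) throws InterruptedException {
        LinkedBlockingDeque<Runnable>[] queues = new LinkedBlockingDeque[nThreads];
        for (int i = 0; i < nThreads; i++) queues[i] = new LinkedBlockingDeque<>();
        AtomicInteger done = new AtomicInteger();
        CountDownLatch latch = new CountDownLatch(nTasks);
        for (int t = 0; t < nThreads; t++) {
            final int id = t;
            Thread worker = new Thread(() -> {
                while (true) {
                    Runnable r = queues[id].pollLast();        // own queue first
                    if (r == null) r = stealWork(queues, id);  // then try to steal
                    if (r != null) r.run(); else Thread.yield();
                }
            });
            worker.setDaemon(true);
            worker.start();
        }
        for (int i = 0; i < nTasks; i++) {
            queues[0].putFirst(() -> { done.incrementAndGet(); latch.countDown(); });
        }
        latch.await();
        return done.get();
    }

    static Runnable stealWork(LinkedBlockingDeque<Runnable>[] queues, int index) {
        for (int i = 0; i < queues.length; i++) {
            if (i != index) {
                Runnable r = queues[i].pollFirst();  // opposite end from the owner
                if (r != null) return r;
            }
        }
        return null;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("completed=" + runDemo(4, 1000));
    }
}
```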

Building the Benchmark

To demonstrate these approaches, we developed a small test scenario for the three approaches mentioned in this article and studied their behavior. The test creates a large number of 10 x 10 matrix multiplication tasks and executes them using the three approaches.
The test defines the following classes:
  • MainClass: A class that initiates, starts, and coordinates various elements of the benchmark.
  • WorkAssignerThread: A thread that creates a lot of 10 x 10 matrix multiplication tasks and queues them.
  • Task: A class that defines a 10 x 10 matrix multiplication.
  • WorkQueue: An interface that defines a set of methods any worker queue must implement.
  • WorkerQueueFactory: A factory class that returns the workQueue object based on the queue type.
  • SimpleWorkQueue: A class that defines a simple worker queue and initiates a set of threads. This depicts the first queue type mentioned in this article (common worker queue).
  • MultiWorkQueue: A class that isolates the contention by defining multiple worker queues (one per thread) and depicts the second queue type mentioned in this article.
  • WorkStealingQueue: A class that isolates the contention by defining multiple queues and steals work when a thread finds its own queue empty. This depicts the third queue type mentioned in this article.
The test can be executed by specifying the queue type, number of threads, and number of tasks, as shown in Listing 4. Listing 4 also shows how to invoke a test with the first queue type (common worker queue), with the number of threads equal to 10 and the number of tasks equal to 10000.


Listing 4. Executing the Test
 java MainClass <Queue type> <number of threads> <number of tasks>

 /* for example: */
 java MainClass 1 10 10000
Experimental Results
We evaluated the performance on different architectures, and the results are very positive. Initially, we evaluated the performance on an AMD Opteron system with eight processor cores running Linux, and we found that performance for queue type 3 improved by 12 to 18.4% over queue type 1, depending on the load, as shown in Figure 5.

Figure 5. Performance Comparison Between Type 1 and Type 3 Queues on Linux AMD Opteron System with Eight Core Processors. [Disclaimer: This is not to compare any products or claim performance for any product; it is just to showcase the advantage of the techniques proposed in this article, which are purely the authors' views.]
We also evaluated the performance on a Linux Power system that had four dual-core processors (Power4), and we found that performance improved by 12 to 16% for the same load, as shown in Figure 6.

Figure 6. Performance Comparisons Between Type 1 and Type 3 Queues on Linux System with Four Dual-Core Power Processors. [Disclaimer: This is not to compare any products or claim performance for any product; it is just to showcase the advantage of the techniques proposed in this article, which are purely the authors' views.]


As shown in Figure 5 and Figure 6, we varied the number of tasks from 0.1 million to 0.5 million and measured the performance in seconds. The outcome of our experiment clearly indicates that a large amount of contention is created in queue type 1 and that it can be eliminated by creating multiple queues and stealing work.

Summary

This article demonstrated the contention involved in the common worker queue approach and then isolated the contention by creating one queue per thread. This article also demonstrated, through a simple benchmark, why work stealing is important and how it improves the overall performance of an application.

Friday, June 24, 2011

JPA: Don't use JPA's RESOURCE_LOCAL on the server


The JPA 1.0 / 2.0 specifications are clear about JTA vs. RESOURCE_LOCAL usage on application servers:

"The transaction-type attribute is used to specify whether the entity managers provided by the entity manager factory for the persistence unit must be JTA entity managers or resource-local entity managers. The value of this element is JTA or RESOURCE_LOCAL. 

A transaction-type of JTA assumes that a JTA data source will be provided—either as specified by the jta-data-source element or provided by the container. In general, in Java EE environments, a transaction-type of RESOURCE_LOCAL assumes that a non-JTA datasource will be provided.

In a Java EE environment, if this element is not specified, the default is JTA. In a Java SE environment, if this element is not specified, the default is RESOURCE_LOCAL."

See section 8.2.1.2, Page 312 from JSR317


If you deploy the following persistence.xml:

<persistence>
  <persistence-unit name="integration" transaction-type="RESOURCE_LOCAL">
    <class>...AnEntity</class>
    <exclude-unlisted-classes>true</exclude-unlisted-classes>
    <properties>
      <property name="javax.persistence.jdbc.url" value="jdbc:derby:memory:testDB;create=true"/>
      <property name="javax.persistence.jdbc.driver" value="org.apache.derby.jdbc.EmbeddedDriver"/>
      <property name="eclipselink.ddl-generation" value="create-tables"/>
    </properties>
  </persistence-unit>
</persistence>

into a Java EE application server, you will have to manage both the EntityManager and its JTA transaction yourself. This ends up in lots of plumbing. Instead of RESOURCE_LOCAL, you should use the JTA setting in production. With the JTA setting, you don't have to specify the JDBC connection; you use a preconfigured JTA data source instead:


<persistence>
  <persistence-unit name="prod" transaction-type="JTA">
    <jta-data-source>jdbc/sample</jta-data-source>
    <properties>
      <property name="eclipselink.ddl-generation" value="drop-and-create-tables"/>
    </properties>
  </persistence-unit>
</persistence>

Wednesday, June 22, 2011

ADF: Call custom JavaScript method during ADF component initialization

If you need to invoke a custom JavaScript method during the initialization of a specific UI (ADF Faces) component, then this post is for you. The idea is to use ClientListenerSet::addBehavior(javax.el.ValueExpression) to associate a deferred value expression (referring to a JavaScript method) with the UI component; the expression is evaluated later, when the DOM is rendered on the page.



public void setSomeInputField(RichInputText inpField) {
    this.inputField = inpField;

    ClientListenerSet clientListenerSet = inputField.getClientListeners();

    if (clientListenerSet == null) {
        clientListenerSet = new ClientListenerSet();
        clientListenerSet.addBehavior("new CustomCompBehavior()");
        inputField.setClientListeners(clientListenerSet);
    }
}

Sunday, June 19, 2011

WS: Keep REST/HTTP/HESSIAN client state by one line of code


Stateful HTTP-services require you to pass the jsessionid back and forth between a Java-based HTTP client (like Hessian, REST-client like Jersey/RESTeasy, SOAP, or plain HTTP-connection) as URL-extension or cookie. 

JDK 1.6 comes with java.net.CookieManager which handles all the bookkeeping for you.

I modified the already introduced Hessian example to be stateful - it takes a single (but bold) line of code:

import java.net.CookiePolicy;
import java.net.CookieManager;
import java.net.CookieHandler;
public class HessianStatefulTimeEndpoint {
    private TimeService timeService;

    @Before
    public void initProxy() throws MalformedURLException {
        
        CookieHandler.setDefault(new CookieManager(null /*=default in-memory store*/, CookiePolicy.ACCEPT_ALL));
        String url = "http://localhost:8080/EJB31AndHessian/TimeService";
        HessianProxyFactory factory = new HessianProxyFactory();
        this.timeService = (TimeService) factory.create(TimeService.class,url);
        assertNotNull(timeService);
    }

    
    @Test
    public void statefulness(){
        int numberOfSessions = this.timeService.getNumberOfSessions();
        int nextInvocation = this.timeService.getNumberOfSessions();
        assertEquals(numberOfSessions,nextInvocation);
    }
}
On the server you can inject @SessionScoped beans into the HessianServlet:

public class HessianTimeEndpoint extends HessianServlet implements TimeService{

    @Inject
    CurrentTime currentTime;
...
}

@SessionScoped
public class CurrentTime implements Serializable{
    
    private static AtomicInteger instanceCount = new AtomicInteger(0);
    
    @PostConstruct
    public void onNewSession(){
        System.out.println("On new session: " + new Date());
        instanceCount.incrementAndGet();
    }
    
    public int getNumberOfSessions(){
        return instanceCount.get();
    }
    
    public long nanos(){
        return System.nanoTime();
    }
}

JEE6: JSF + JPA - EJB = More Code

Without an EJB you will need at least 4 lines of code for every EntityManager interaction:
  1. trx.begin
  2. EntityManager interaction
  3. trx.commit
  4. consistent error handling + trx.rollback

The creation and management of an EntityManager is not even included. The code will look like this:
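Plumbing of this kind typically looks like the following sketch (the variable names and the persistence-unit name are illustrative, not from the post, and a JPA provider plus database would be required to run it):

```java
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.EntityTransaction;
import javax.persistence.Persistence;

// Manual plumbing repeated for every EntityManager interaction
// (sketch; "messages" and Message are assumed names).
EntityManagerFactory emf = Persistence.createEntityManagerFactory("messages");
EntityManager em = emf.createEntityManager();
EntityTransaction tx = em.getTransaction();
try {
    tx.begin();
    em.persist(new Message());   // the actual business interaction
    tx.commit();
} catch (RuntimeException e) {
    if (tx.isActive()) tx.rollback();  // consistent error handling
    throw e;
} finally {
    em.close();
}
```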

With a single EJB 3 you can eliminate all the bloat - the EntityManager will be properly managed and injected by the container.
The same code will look like:
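A hedged sketch of the EJB variant (class and method names are assumptions, not from the post) shows how the boilerplate disappears:

```java
import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

// With an EJB, the container injects the EntityManager and wraps each
// business method in a transaction: no begin/commit/rollback code at all.
@Stateless
public class MessageService {
    @PersistenceContext
    EntityManager em;

    public void store(Message message) {
        em.persist(message);   // runs inside a container-managed transaction
    }
}
```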

You could even build a generic and reusable CRUD Service. The @Stateless session bean can be directly injected into the backing bean then:
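Such a generic CRUD service might be sketched as follows (a rough illustration with invented names, not the post's actual implementation):

```java
import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

// Minimal generic CRUD service; the @Stateless bean can be injected
// directly into a JSF backing bean via @Inject or @EJB.
@Stateless
public class CrudService {
    @PersistenceContext
    EntityManager em;

    public <T> T save(T entity) { return em.merge(entity); }
    public <T> T find(Class<T> type, Object id) { return em.find(type, id); }
    public void delete(Object entity) { em.remove(em.merge(entity)); }
}
```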

There is no additional XML, and no extra libraries or frameworks are needed. You can just JAR everything in a WAR and you are done. The whole EJB 3 container in Glassfish v3 is < 1MB.

Design Pattern: Value Objects (VOs) vs. Data Transfer Objects (DTOs)


The pattern which is known today as Data Transfer Object was mistakenly (see this definition) called Value Object in the first version of the Core J2EE Patterns. The name was corrected in the second edition of the Core J2EE Patterns book, but the name "Value Object" became very popular and is still used as an alias for the actual DTOs. There is, however, a real difference between both patterns:
  1. A Data Transfer Object (DTO) is just a dumb data container used to transport data between layers and tiers. It consists mainly of attributes. You could even use public attributes without any getters/setters, but this would probably cause too many meetings and discussions :-). DTOs are anemic in general and do not contain any business logic. DTOs are often java.io.Serializable, but that is only needed if you are going to transfer the data across JVMs.
  2. A Value Object [1,2] represents a fixed set of data and is similar to a Java enum. A Value Object doesn't have any identity: it is entirely identified by its value and is immutable. Real-world examples would be Color.RED, Color.BLUE, SEX.FEMALE, etc.
Data Transfer Objects are widely overused, while "real" Value Objects are a bit neglected. Most developers who use the term Value Object actually have DTOs in mind. I have to continuously correct myself as well :-).
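The difference can be made concrete in a few lines of Java (the class names are invented for illustration):

```java
import java.io.Serializable;

// DTO: a dumb, mutable data container shuttled between layers and tiers.
class CustomerDTO implements Serializable {
    public String name;    // public attributes are fine for a pure carrier
    public String email;
}

// Value Object: immutable, no identity of its own. Two instances with
// the same value are interchangeable, like enum constants.
final class Money {
    private final long cents;
    Money(long cents) { this.cents = cents; }
    long cents() { return cents; }
    @Override public boolean equals(Object o) {
        return o instanceof Money && ((Money) o).cents == cents;
    }
    @Override public int hashCode() { return Long.hashCode(cents); }
}
```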

Saturday, June 18, 2011

JEE5: EJB3, JPA CRUD in 2 minutes


Requirements:
  1. Installed JDK 1.5 (better 1.6) 
  2. An IDE of your choice, e.g. netbeans 6.1 (SE or EE), Eclipse (SE or EE), or JDeveloper
  3. @Stateless, @Local, @Entity, @Id Annotations in classpath 
  4. A Java EE 5 capable application server of your choice. It will work with Glassfish v3+, JBoss 5+, and WLS 10+.
What to do:
  1. In the IDE you will have to point to a JAR containing the three annotations. If you have the Reference Implementation (Glassfish) installed, just put glassfish\lib\javaee.jar on the classpath. You will need a persistence provider as well; in the case of TopLink, there is only one JAR (glassfish\lib\toplink-essentials.jar). IDEs with built-in EE support already have everything you need. However, for the very first time I would prefer to develop an EJB "from scratch". 
  2. Start with the Entity class. Just create a class and put @Entity tag on it:

    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.Id;

    @Entity
    public class Book {

        @Id
        @GeneratedValue
        private Long id;
        private String title;

        public Book() {
        }


        public Book(String title) {
            this.title = title;
        }
    }

    The @Id and @GeneratedValue annotations denote the id field as the primary key. An entity must also contain a parameterless constructor.
  3. Set up a DataSource in the application server. jdbc/sample already exists in every fresh Glassfish installation (so nothing to do here).
  4. Create an interface with CRUD methods:
    import javax.ejb.Local;
    @Local public interface BookServiceLocal {
        Book createOrUpdate(Book book);
        void remove(Book book);
        Book find(Object id);
    }
  5. Create a class which implements this interface. You will be forced by a good IDE to implement this interface:
    @Stateless
    public class BookService implements  BookServiceLocal {
        @PersistenceContext
        private EntityManager em;

        public Book createOrUpdate(Book book) {
            return em.merge(book);
        }
        public void remove(Book book) {
            em.remove(em.merge(book));
        }
        public Book find(Object id) {
            return em.find(Book.class, id);
        }
    }

    The merge method creates or updates an entity; all other methods should be self-explanatory. Hint: you cannot remove detached entities directly; you have to find them first. This is the "Seek And Destroy" pattern :-). 
  6. You have to create a small XML file (persistence.xml). However, it will not grow:
    <persistence>
      <persistence-unit name="sample" transaction-type="JTA">
        <jta-data-source>jdbc/sample</jta-data-source>
        <properties>
          <property name="toplink.ddl-generation" value="create-tables"/>
        </properties>
      </persistence-unit>
    </persistence>

    There is only one persistence-unit element, with the name "sample". EJB 3 dependency injection works according to the "Convention Over Configuration" principle. This allows us to keep the injection of the EntityManager very lean: if there is only one possibility, you don't have to configure it.
  7. Compile everything and JAR (the persistence.xml into META-INF) the output (in Netbeans just "build", in Eclipse "Export -> JAR")
  8. Copy the JAR into the autodeploy folder of WLS 10 (bea10\user_projects\domains\YOUR_DOMAIN\autodeploy), or glassfish\domains\domain1\autodeploy in the case of Glassfish v3, or jboss-5.0.GA\server\default\deploy in case of JBoss
  9. Inspect the log files, you are done :-)
What you have gained:
  1. It's threadsafe (in multicore environments as well) 
  2. Remoting: you can access the interface remotely
  3. It's transactional - transactions are started for you
  4. It's pooled - you can control the concurrency and prevent "denial of service" attacks.
  5. It's monitored: an EJB has to be visible through JMX. Application servers provide additional monitoring services as well.
  6. Dependency Injection just works - you can inject persistence, other beans, legacy pojos (I will cover this in some upcomings posts)
  7. It's portable and thus vendor-neutral. Deployment to different application servers just works.
  8. There is almost NO XML.
  9. It's easily accessible (via DI) from RESTful services, JSF, servlets, etc.
  10. Clustering and security are beneficial as well - but not the main reason to use EJBs
  11. The EntityManager is injected in a thread-safe manner.
  12. Transactions are managed for you - the EntityManager participates in the transactions (no additional setup etc. necessary)

EJB3: Learn Interceptors (EJB 3) for the absolute beginner, or aspect-oriented programming in 2 minutes

I would like to explain the essence of interception and thus the realization of cross-cutting aspects.

Requirements:
  1. Installed JDK 1.5 (better 1.6) 
  2. An IDE of your choice e.g. vi, emacs, netbeans 6.1/6.5 (SE or EE), Eclipse Ganymede (SE or EE)
  3. @Stateless, @Local Annotations in classpath 
  4. A Java EE 5 capable application server of your choice. It will work with Glassfish v1+ (better v2), JBoss 4.2+, WLS 10+, and Geronimo.
What to do:
  1. Create and deploy a simple Stateless or Stateful Session Bean.
  2. Create a simple Java class with one method with the following signature:  public Object <any name you like>(InvocationContext context) throws Exception:

    import javax.interceptor.AroundInvoke;
    import javax.interceptor.InvocationContext;
    public class TracingInterceptor {
       
        @AroundInvoke
        public Object logCall(InvocationContext context) throws Exception {
            System.out.println("Invoking method: " + context.getMethod());
            return context.proceed();
        }
    }
  3. The method has to be annotated with  @AroundInvoke. It is the one and only available annotation.
  4. Inside the method you can "decorate" existing functionality. The invocation context.proceed() invokes the actual method and returns the value. An interceptor wraps the method completely.
  5. Apply the interceptor on any Session Bean you like e.g.:

    @Interceptors(TracingInterceptor.class)
    @Stateless
    public class HelloWorldBean implements HelloWorld {
        
         public void sayHello() {
            System.out.println("Hello!");
        }
    }
  6. The @Interceptors annotation can be applied to the whole class or to chosen methods. You can even exclude interceptors with @ExcludeClassInterceptors, but this is rarely needed.
  7. Compile everything and JAR the output (in Netbeans just "build", in Eclipse "Export -> JAR")
  8. Copy the JAR into the autodeploy folder of WLS 10 (bea10\user_projects\domains\YOUR_DOMAIN\autodeploy), or glassfish\domains\domain1\autodeploy in the case of Glassfish v2, or jboss-4.2.2.GA\server\default\deploy in case of JBoss
  9. Inspect the log files, you are done :-)
What you have gained:
  1. There is no XML needed - it's DRY.
  2. It's robust - the compiler checks for the existence of the interceptor, the correct spelling of the annotation, etc.
  3. Cross cutting functionality can be easily factored out into reusable interceptors.
  4. DI works in interceptors. You can easily inject resources or other beans into an interceptor.
  5. The whole method is wrapped - you have full access to the parameters and return values. You can even re-execute the method or skip it entirely (for caching purposes).
  6. It's self documented: there is no surprise - the annotation is visible in code.
  7. They are portable and run on every Java EE 5 compliant application server.
  8. No additional frameworks or libraries are needed. This is good for maintenance.
  9. It's flexible - if you prefer XML, no problem: just configure the decoration in an XML descriptor:
    <ejb-jar xmlns = "http://java.sun.com/xml/ns/javaee"
             version = "3.0"
             xmlns:xsi = "http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation = "http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/ejb-jar_3_0.xsd">
        <interceptors>
            <interceptor>
                <interceptor-class>com.taman.logging.interceptor.TracingInterceptor</interceptor-class>
            </interceptor>
        </interceptors>
        <assembly-descriptor>
            <interceptor-binding>
                <ejb-name>HelloWorldBean</ejb-name>
                <interceptor-order>
                    <interceptor-class>com.taman.logging.interceptor.TracingInterceptor</interceptor-class>
                </interceptor-order>
            </interceptor-binding>
        </assembly-descriptor>
    </ejb-jar>

Interceptors and EJBs seem to be controversial. Nevertheless, interceptors are absolutely sufficient for most use cases.

Friday, June 17, 2011

JEE 6: Simplicity by Design

Leverage new Java EE 6 features to build simple and maintainable applications.
The introduction of Java Platform, Enterprise Edition (Java EE) 5, in 2006, did a lot to simplify enterprise application development. Java EE 6, released in 2009, simplifies design and architecture tasks even further. Java EE 6 is a good choice for building small situational applications quickly and without any overhead. This article discusses various Java EE 6 architectures and design approaches that help developers build efficient, simple, and maintainable apps.
Java EE 6 consists of a set of independent APIs released together under the Java EE name. Although these APIs are independent, they fit together surprisingly well. For a given application, you could use only JavaServer Faces (JSF) 2.0, you could use Enterprise JavaBeans (EJB) 3.1 for transactional services, or you could use Contexts and Dependency Injection (CDI) with Java Persistence API (JPA) 2.0 and the Bean Validation model to implement transactions.
With a pragmatic mix of available Java EE 6 APIs, you can entirely eliminate the need to implement infrastructure services such as transactions, threading, throttling, or monitoring in your application. The real challenge is in selecting the right subset of APIs that minimizes overhead and complexity while making sure you don’t have to reinvent the wheel with custom code. As a general rule, you should strive to use existing Java SE and Java EE services before expanding your search to find alternatives. 

CDI: The Standard Glue

CDI, introduced with Java EE 6 to act as a glue for the different parts of the Java EE 6 specification, manages the lifecycle of POJO (Plain Old Java Object) beans and uses a type-safe mechanism for dependency injection. CDI also introduces many powerful features such as events, interceptors, decorators, standardized extension points, and the service provider interface. 
Because CDI is new and designed to be an integration layer, there is some overlap with older technologies. Although you can continue to use EJB 3.1 injection or JSF managed beans directly, you should consider using CDI wherever possible. CDI is more powerful, and you can simplify your application by using a single API.
CDI uses annotations to perform dependency injection. The most important annotation is javax.inject.Inject. The example in Listing 1 shows how this annotation can be used to inject a POJO into a servlet. All you need to do is to declare a field and annotate it with @Inject. When that code is executed, the container automatically initializes fields annotated with the @Inject annotation before the execution of any business methods.
Code Listing 1: POJO injection into a servlet with @Inject 
@WebServlet(name="HelloWorldService", urlPatterns={"/HelloWorldService"})
public class HelloWorldHTTPService extends HttpServlet {
   
    @Inject
    private Hello hello;
    
    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
    throws ServletException, IOException {
        PrintWriter out = response.getWriter();
        out.println(hello.helloWorld());
        out.flush();
        out.close();
    } 
}

There are no specific requirements for the injected class, beyond having to contain a default constructor: 
public class Hello {
    public String helloWorld(){
        return "Hello World";
    }
}

To make the above example work, you would also need an empty beans.xml deployment descriptor with the following content: <beans></beans>. The existence of this configuration file in the WEB-INF folder activates CDI capabilities.
Note that the Hello class is a POJO and not an EJB. It doesn’t have to be declared or configured—the @Inject annotation ensures proper creation and lifecycle management. In the real world, you would rarely inject POJOs into a servlet; you would probably use a UI framework (such as JSF 2) or expose your service via representational state transfer (REST). In such cases, the use of CDI is even more beneficial.
To illustrate, consider a simple MessageMe application that stores a message string in a database. The JSF 2 markup consists of two components: inputText and commandButton. As shown in Listing 2, inputText is value-bound to the content property of a message object exposed by a backing bean named index. The commandButton's action attribute is bound to the save method of the same backing bean.
Code Listing 2: index.xhtml: binding the values to a CDI backing bean 
<h:body>
     <h:form>
     Content:<h:inputText value="#{index.message.content}"/>
      <br/>
     <h:commandButton value="Save" action="#{index.save}"/>
     </h:form>
</h:body>

Listing 3 shows the backing bean implemented as a request-scoped CDI bean, using the @RequestScoped annotation. A JSF 2 managed bean (using the @ManagedBean annotation) would also work, but CDI is just as powerful, and using CDI everywhere simplifies the architecture: a single glue API across all application layers.
Code Listing 3: A CDI backing bean with injected EJB 
package com.tm.messageme.presentation;
import com.tm.messageme.business.messaging.boundary.Messaging;
import com.tm.messageme.business.messaging.entity.Message;
import javax.enterprise.context.RequestScoped;
import javax.inject.Inject;
import javax.inject.Named;
@Named
@RequestScoped
public class Index {
    @Inject
    Messaging ms;
    private Message message = new Message();
    public Message getMessage() {
        return message;
    }
    public void save(){
        ms.store(message);
    }
}

The annotation @Named (specified by JSR 330 and also implemented by Guice and Spring) makes the index backing bean visible in all expression language (EL) markup. It works according to the "convention over configuration" principle: the name of the backing bean in JSF 2 is derived from the class name, with the first letter decapitalized (Index becomes index).
The Message class is implemented as a JPA 2 entity, as shown in Listing 4.
Code Listing 4: JPA 2 entity validated with Bean Validation 
package com.tm.messageme.business.messaging.entity;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.validation.constraints.Size;
@Entity
public class Message {
    @Id
    @GeneratedValue
    private Long id;
    @Size(min=2,max=140)
    private String content;

    public Message(String content) {
        this.content = content;
    }
    public Message() { /*required by JPA */}
    public String getContent() {
        return content;
    }
    public void setContent(String content) {
        this.content = content;
    }
    public Long getId() {
        return id;
    }
}

The next class in this example is the Messaging class, which is implemented as an EJB 3.1 session bean. This class represents a pragmatic exception to the “CDI everywhere” rule. EJBs provide many capabilities, such as transactions, pooling, Java Management Extensions (JMX) monitoring, and asynchronous execution—all for the price of a single additional @Stateless annotation. In future Java EE releases, these aspects are likely to be extracted from EJBs and made available in CDI as well. In Java EE 6, however, a boundary or facade of a business component is most effectively implemented as a stateless session bean.
The @Asynchronous annotation in Listing 5 is particularly interesting. It enables the asynchronous but transactional execution of methods and is available only for EJBs. Note that the Messaging EJB is injected with @Inject and not @EJB. In practice, either annotation would work, with virtually no difference. The use of @Inject is slightly more powerful and supports inheritance. The @EJB annotation, on the other hand, works only with EJBs.
Code Listing 5: A boundary implemented as an EJB session bean 
package com.tm.messageme.business.messaging.boundary;
import com.tm.messageme.business.messaging.control.MessageStore;
import com.tm.messageme.business.messaging.entity.Message;
import javax.ejb.Asynchronous;
import javax.ejb.Stateless;
import javax.inject.Inject;
@Stateless
public class Messaging {
    @Inject
    MessageStore messageStore;
    @Asynchronous
    public void store(Message message){
        messageStore.store(message);
    }
}
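Fire-and-forget is not the only option: an @Asynchronous method can also return a result as a java.util.concurrent.Future (the EJB container wraps the return value in an AsyncResult). Conceptually, the container treats such a method much like work submitted to a thread pool. The following plain-JDK sketch illustrates those semantics; AsyncSketch is a hypothetical class, not container code:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Plain-JDK sketch of @Asynchronous semantics: the caller gets a Future
// back immediately while the work runs on a pooled thread, roughly what
// the EJB container does behind the scenes.
public class AsyncSketch {
    private final ExecutorService pool = Executors.newFixedThreadPool(2);

    public Future<String> store(String message) {
        // In an EJB, this would be: return new AsyncResult<>("stored: " + message);
        return pool.submit(() -> "stored: " + message);
    }

    public void shutdown() {
        pool.shutdown();
    }
}
```

The caller decides whether to block on Future.get() or poll with Future.isDone(), exactly as with a container-managed asynchronous EJB method.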

The MessageStore class in Listing 6 is a Data Access Object (DAO) that encapsulates access to the EntityManager.
Code Listing 6: A CDI bean from the control layer 
package com.tm.messageme.business.messaging.control;
import com.tm.messageme.business.messaging.entity.Message;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
public class MessageStore {
    @PersistenceContext
    EntityManager em;
    public void store(Message message){
        em.persist(message);
    }
}

ECB: A Pragmatic Separation of Concerns
If you review the packaging of the application described above, you will notice separate boundary, control, and entity packages. This packaging approach is an implementation of the Entity Control Boundary (ECB) pattern. The boundary layer is the facade, the control layer is responsible for the implementation of process- and entity-independent logic, and the entity layer contains rich domain objects. 
With Java EE 6 and especially the availability of JPA 2, CDI, and EJB, the implementation of all three layers can lead to empty delegate code. For example, many CRUD-based use cases can be implemented very efficiently with a single boundary acting as a facade for accessing multiple entities.
However, a direct one-to-one relationship between the concepts in the ECB pattern and packages inside a component can still be beneficial. When packages are kept separate, static analysis tools can be used more easily to measure dependencies between packages. Furthermore, frameworks such as OSGi and Jigsaw rely on the existence of separate packages to expose public APIs.
In Java EE 6, the boundary is always realized with EJBs. The control layer can contain either CDIs or EJBs, and the entity layer can contain either JPA 2 entities or transient, unmanaged entities. The final decision of whether to use a CDI or an EJB in the control layer does not have to be made up front. You can start with a CDI and convert it into an EJB down the road by using the @Stateless annotation. You may need to use an EJB in some cases, such as when you need to start a subsequent transaction with @RequiresNew, when you need to execute a method asynchronously, or when you need to roll back the current transaction by invoking SessionContext.setRollbackOnly().
CDI, on the other hand, is more suitable for integrating legacy code or implementing Strategy, Factory, or Observer software design patterns. All of these capabilities are already built in and result in far less code than with the Java SE counterpart.
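To see why the Observer pattern in particular becomes nearly free, compare CDI's one-annotation approach (an @Observes parameter on the consumer, Event<T>.fire() on the producer) with what you would otherwise write by hand. The following plain-Java sketch, with a hypothetical EventBus class, shows the boilerplate the container eliminates:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Hand-rolled observer registry; with CDI this entire class disappears:
// the producer injects Event<String> and calls fire(...), and any method
// with an @Observes String parameter is matched by the container.
public class EventBus {
    private final List<Consumer<String>> observers = new ArrayList<>();

    public void register(Consumer<String> observer) {
        observers.add(observer);
    }

    public void fire(String event) {
        // Deliver the event to every registered observer, in order.
        observers.forEach(o -> o.accept(event));
    }
}
```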
When you are developing applications with the ECB pattern, the ECB layering should evolve iteratively and not be forced in a top-down way. You should start with the persistence (Entity) layer, perform unit testing, and then implement the boundary layer. For building the unit test, the EntityManager and the associated transactions need to be created and managed manually (as shown in Listing 7).
Code Listing 7: Standalone JPA unit tests 
package com.tm.messageme.business.messaging.entity;
import javax.persistence.*;
import org.junit.Test;

public class MessageMappingTest {

    @Test
    public void mappingSmokeTest() {
        EntityManagerFactory emf = Persistence.createEntityManagerFactory("test");
        EntityManager em = emf.createEntityManager();
        EntityTransaction tx = em.getTransaction();
        tx.begin();
        em.persist(new Message("duke"));
        tx.commit();
    }
}

The persistence.xml file must also be adjusted to handle standalone execution. Specifically, the transaction type should be changed to RESOURCE_LOCAL and a JDBC connection (instead of a datasource) must be configured explicitly, as shown in Listing 8.
Code Listing 8: persistence.xml for standalone JPA unit tests 
<persistence>
  <persistence-unit name="test" transaction-type="RESOURCE_LOCAL">
    <provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
    <class>com.tm.messageme.business.messaging.entity.Message</class>
    <exclude-unlisted-classes>true</exclude-unlisted-classes>
    <properties>
      <property name="javax.persistence.jdbc.url" value="jdbc:derby:
./sample;create=true"/>
      <property name="javax.persistence.jdbc.password" value="app"/>
      <property name="javax.persistence.jdbc.driver" value="org.apache.derby
.jdbc.EmbeddedDriver"/>
      <property name="javax.persistence.jdbc.user" value="app"/>
      <property name="eclipselink.ddl-generation" value="drop-and-create-tables"/>
    </properties>
  </persistence-unit>
</persistence>

When building the control layer, note that its content will be the product of entity and boundary layer refactoring. The reusable and noncohesive parts of the boundary layer, such as queries, algorithms, or validations, along with cross-cutting concerns from the entity layer, will be extracted into CDI managed beans in the control layer. 

Using the CEC Pattern

The main purpose of the boundary in the ECB pattern is to provide a clear separation between business and presentation logic. By definition, the boundary needs to be independent of presentation logic. Even with the many compromises you may make in your architecture, a clear separation between the business and UI technology is a must. In practice, UI logic tends to vary more frequently than business logic. It is common to produce business logic that can be accessed by a Web client (such as JSF 2), a rich client (such as Swing or Eclipse RCP), and REST at the same time. 
In the case of JSF 2, CDI is again the easiest choice for implementing a controller or a presenter. CDI managed beans can be directly bound to the JSF 2 view via EL, and the boundary (EJB 3.1) can be directly injected into the presenter. The presenter (or controller) role can be captured with a custom stereotype. A stereotype works like a macro: the CDI annotations you place on it are expanded wherever the stereotype is applied. It is a regular Java annotation that is itself annotated with @Stereotype:
@Named
@RequestScoped
@Stereotype
@Retention(RUNTIME)
@Target(TYPE)
public @interface Presenter {}

This custom stereotype can be applied instead of @Named and @RequestScoped—just like a macro. All CDI annotations identifying the presenter pattern can then be replaced with 
@Presenter
public class Index { /* ... */ }

The purpose of the presenter is to implement presentation logic. The structure of the presenter is tightly coupled with the view, in that the state of a JSF component in the view is mapped to a property inside the presenter. The property can be either a value (with value binding) or the component instance itself (with component binding). In trivial cases, there is a one-to-one relationship between the view and the presenter. The presenter contains the view’s data as well as all the presentation logic. Injecting the boundary into the presenter involves using the @Inject annotation.
As the amount of presentation logic grows inside the presenter, the code can become harder to maintain and test. With CDI, it is fairly easy to split the monolithic presenter into separate data and presentation logic parts. For example, the following code shows how to refactor the backing bean from the earlier example by moving the save method into a newly created IndexPresenter bean. The @Presenter stereotype is duplicated and renamed @View, and the data-holding bean is renamed IndexView:
@View
public class IndexView {
    private Message message = new Message();
    public Message getMessage() {
        return message;
    }
}

The IndexPresenter bean gets the old @Presenter annotation. As the following code shows, the only purpose of the IndexPresenter bean in this case is to implement the presentation logic. 
@Presenter
public class IndexPresenter {
    @Inject
    Messaging boundary;
    @Inject
    IndexView indexView;
    public void save(){
        boundary.store(indexView.getMessage());
    }
}

Because the boundary and the view are injected into the IndexPresenter, they can be easily mocked out. In a unit test environment, both fields are set directly with the mock, whereas in a production environment, the container performs the injection and sets the actual dependency. Because the unit test and the IndexPresenter reside in the same package, the package-private fields can be set directly. Private fields with public setters could be used instead, but package-private fields are good enough in most cases and reduce code size.
Listing 9 shows how to test the presentation logic by mocking out the IndexView as well as the boundary Messaging class. The test, which invokes the IndexPresenter.save() method, is successful if the store method gets invoked exactly once with the Message instance returned by the IndexView. Verifying the invocation means passing the mock to the Mockito.verify() method. The IndexView is mocked out to manipulate the return value without interacting with JSF rendering.
Code Listing 9: IndexPresenterTest—with mocked-out view and boundary 
package com.tm.messageme.presentation;
//...other imports
import org.junit.Before;
import org.junit.Test;
import static org.mockito.Mockito.*;

public class IndexPresenterTest {
    private IndexPresenter cut;
    @Before
    public void initialize(){
        this.cut = new IndexPresenter();
    }
    @Test
    public void save() {
        this.cut.boundary = mock(Messaging.class);
        this.cut.indexView = mock(IndexView.class);
        Message expected = new Message("duke");
        when(this.cut.indexView.getMessage()).thenReturn(expected);
        cut.save();
        verify(this.cut.boundary,times(1)).store(expected);
    }
}

The Messaging boundary is mocked out for a different reason: to verify that the expected method actually gets invoked: 
public void save(){
    boundary.store(indexView.getMessage());
}

The design of the JSF 2 presentation is similar to that of a rich Swing application. Common patterns such as Model-View-Controller and their refinements—Supervising Controller and Passive View—can be applied to JSF 2 as well. The main difference between JSF and a rich client technology is the way the view is rendered. In Swing the developer implements the view in Java, whereas in JSF 2 the developer uses XHTML markup. In JSF 2 the values of the component can be directly bound to a corresponding class, whereas in Swing they are usually stored in the view itself or in the model.
For the implementation of data-driven use cases such as CRUD, the Supervising Controller is a better choice than the Passive View. In the Supervising Controller pattern, a single backing bean (IndexView) is responsible for managing both the presentation logic and the state of the view. In more-sophisticated use cases, the Passive View variant may be more applicable. In the Passive View pattern, the backing bean is split into the view and presentation logic, and the presentation logic is extracted from the IndexView to the IndexPresenter.
CDI is best suited for the implementation of the presentation layer. Because of the built-in cross-cutting concerns (such as transactions, concurrency, asynchronous execution, monitoring, and throttling), the boundary of the business logic is realized as an EJB. The business component can be realized either as EJBs or CDIs. In general, you can start with CDIs and, over time, replace managed beans in special cases with EJBs. The CDI-EJB-CDI (CEC) pattern is the simplest and most pragmatic choice for Java EE 6. 

Making Interfaces Useful

EJB 3.0 (in Java EE 5) required separate interfaces for bean classes. To avoid naming collisions, developers often had to resort to well-defined naming conventions such as XyzLocal/XyzRemote and XyzBean. In Java EE 6, interfaces for EJBs and CDI beans are optional: a bean can expose a "no-interface" view, with no loss of functionality.
This new functionality makes interfaces meaningful again. As opposed to the obligatory, nondescript use of interfaces with earlier releases, interfaces in Java EE 6 can be used for the implementation of the Strategy pattern; implementation of a public API; or strict separation of modules, which makes the code more expressive. An interface can also signal the “protected variations” of a system, and direct dependencies between classes can be used for code that is less likely to vary.
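As a plain-Java sketch of such an intentional interface, consider a Strategy for formatting messages. The names MessageFormatter, PlainFormatter, and ShoutingFormatter are hypothetical; in a CDI application, each implementation would be a managed bean and the caller would select one at the injection point with a qualifier, whereas here the strategy is wired by hand:

```java
// The interface now exists because there genuinely are multiple strategies,
// not because the platform demands it.
public interface MessageFormatter {
    String format(String content);
}

// One strategy: pass the content through unchanged.
class PlainFormatter implements MessageFormatter {
    public String format(String content) {
        return content;
    }
}

// Another strategy: emphasize the content.
class ShoutingFormatter implements MessageFormatter {
    public String format(String content) {
        return content.toUpperCase() + "!";
    }
}
```

The decision of which implementation to use stays at the injection point (or, here, at the construction site), so new strategies can be added without touching existing callers.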
You can safely start without any interfaces and introduce them later as the need arises. This approach is fundamentally different from that in Java EE 5. Compared to Java 2 Platform, Enterprise Edition (J2EE) from 2003, Java EE 6 code is simpler, in terms of the elimination of several layers, indirections, and abstractions. Unlike J2EE, Java EE 6 consists of annotated classes without any dependencies on the platform. This approach eliminates the need to separate business logic from the infrastructure and makes the majority of J2EE patterns and best practices superfluous. In Java EE 6, simple cases can be solved with two layers: presentation and business logic. The EntityManager is already a good enough abstraction of the underlying persistence, so there is no need for additional indirections.
Maintainable Java EE 6 applications are written according to the YAGNI (You Ain’t Gonna Need It), DRY (Don’t Repeat Yourself), and KISS (Keep It Simple, Stupid) principles. Design patterns and best practices are introduced in a bottom-up—not a top-down—fashion. The patterns are always motivated by functional and nonfunctional requirements, not by the shortcomings of the platform. This approach represents the biggest difference between Java EE 6 and previous J2EE releases. In J2EE many of the design decisions were made up front, driven by the J2EE platform dependencies.
By contrast, the Java EE 6 development process focuses on function:
  1. Write simple code that directly solves the business problem.
  2. Verify the business logic with unit tests.
  3. Cut redundancies and improve the design with refactoring.
  4. Stress-test your application.
  5. Go back to step 1.
Design and architecture are driven by concrete requirements rather than by generic architectural best practices. By continually stress-testing your application (at least once per week), you gain insight into system behavior under load and can justify a simple design with hard facts.