Running Batch jobs in J2SE applications

This tutorial shows how you can run the Java Batch API (JSR 352) as part of a J2SE application.

The Java Batch API (JSR 352) allows executing batch activities, described with a Job Specification Language (JSL), using two main programming models: chunk steps and batchlets. I have already blogged some examples of chunk steps and batchlets designed to run on the WildFly application server.
The implementation of the Java Batch API (JSR 352) used by WildFly is provided by a project named JBeret, which also allows executing batch activities as part of Java Standard Edition applications. In this tutorial we will see a basic example of how to run a Batchlet from within a J2SE application.

Defining the Batch Job

The first step is defining the job via the Job Specification Language (JSL). Let's create this file, named simplebatchlet.xml, in the src/main/resources/META-INF/batch-jobs folder of a Maven project:

<job id="simplebatchlet" xmlns="http://xmlns.jcp.org/xml/ns/javaee"
    version="1.0">

    <step id="step1">
        <properties>
            <property name="file" value="/home/jboss/log.txt" />
            <property name="destination" value="/var/opt/log.txt" />
        </properties>
        <batchlet ref="sampleBatchlet" />
    </step>
</job>

In this simple JSL file we execute a Batchlet named "sampleBatchlet" as part of "step1", which also defines two properties. We will use these properties to copy a file from a source to a destination.

Defining the Batchlet

Here is the Batchlet, a CDI @Named bean that collects the properties from the StepContext and uses the Java 7 Files API to copy the file from the source to the destination:

package com.mastertheboss.jberet;

import javax.batch.api.AbstractBatchlet;
import javax.inject.Inject;
import javax.batch.runtime.context.*;
import javax.inject.Named;
import java.io.*;
import java.nio.file.Files;
 
@Named
public class SampleBatchlet extends AbstractBatchlet {

    @Inject
    StepContext stepContext;

    @Override
    public String process() {
        String source = stepContext.getProperties().getProperty("source");
        String destination = stepContext.getProperties().getProperty("destination");

        try {
            Files.copy(new File(source).toPath(), new File(destination).toPath());
            System.out.println("File copied!");
            return "COMPLETED";
        } catch (IOException e) {
            e.printStackTrace();
        }
        return "FAILED";
    }
}
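
Since the Batchlet is a CDI @Named bean, the Weld SE container must be able to discover it. Depending on the Weld version in use, you may need to add a beans.xml descriptor to src/main/resources/META-INF so that the archive is treated as a bean archive; a minimal sketch of such a descriptor (this file is an assumption, not part of the original example):

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://xmlns.jcp.org/xml/ns/javaee" />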

You can trigger the execution of your job with a simple main Java class:

package com.mastertheboss.jberet;

import javax.batch.operations.JobOperator;
import javax.batch.operations.JobSecurityException;
import javax.batch.operations.JobStartException;
import javax.batch.runtime.BatchRuntime;

public class Main {

    public static void main(String[] args) {
        try {
            JobOperator jo = BatchRuntime.getJobOperator();

            long id = jo.start("simplebatchlet", null);

            System.out.println("Batchlet submitted: " + id);
            // start() is asynchronous: give the job time to complete before the JVM exits
            Thread.sleep(5000);

        } catch (Exception ex) {
            System.out.println("Error submitting Job! " + ex.getMessage());
            ex.printStackTrace();
        }

    }

}
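
Note that JobOperator.start is asynchronous, which is why the Main class sleeps before exiting. As a sketch of a more robust alternative, you could replace the Thread.sleep call with a loop that polls the execution status through the standard API (this requires importing javax.batch.runtime.BatchStatus; the 100 ms interval is an arbitrary choice):

// poll the BatchStatus until the job leaves the STARTING/STARTED states
BatchStatus status = jo.getJobExecution(id).getBatchStatus();
while (status == BatchStatus.STARTING || status == BatchStatus.STARTED) {
    Thread.sleep(100);
    status = jo.getJobExecution(id).getBatchStatus();
}
System.out.println("Job ended with status: " + status);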

Compiling the project

In order to run our project, we need to add to our pom.xml the javax.batch Batch API, JBeret core, the Weld container API, and their related dependencies:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
   <modelVersion>4.0.0</modelVersion>
   <groupId>com.mastertheboss.jberet</groupId>
   <artifactId>batch-job</artifactId>
   <packaging>jar</packaging>
   <version>1.0-SNAPSHOT</version>
   <name>batch-job</name>
   <url>http://maven.apache.org</url>
   <repositories>
      <repository>
         <id>jboss-public-repository-group</id>
         <name>JBoss Public Repository Group</name>
         <url>http://repository.jboss.org/nexus/content/groups/public/</url>
      </repository>
   </repositories>
   <dependencies>
      <dependency>
         <groupId>org.jboss.spec.javax.batch</groupId>
         <artifactId>jboss-batch-api_1.0_spec</artifactId>
         <version>1.0.0.Final</version>
      </dependency>
      <dependency>
         <groupId>org.jberet</groupId>
         <artifactId>jberet-core</artifactId>
         <version>1.0.2.Final</version>
      </dependency>
      <dependency>
         <groupId>org.jberet</groupId>
         <artifactId>jberet-support</artifactId>
         <version>1.0.2.Final</version>
      </dependency>
 
      <dependency>
         <groupId>org.jboss.spec.javax.transaction</groupId>
         <artifactId>jboss-transaction-api_1.2_spec</artifactId>
         <version>1.0.0.Final</version>
      </dependency>
      <dependency>
         <groupId>org.jboss.marshalling</groupId>
         <artifactId>jboss-marshalling</artifactId>
         <version>1.4.2.Final</version>
      </dependency>
      <dependency>
         <groupId>org.jboss.weld</groupId>
         <artifactId>weld-core</artifactId>
         <version>2.1.1.Final</version>
      </dependency>
      <dependency>
         <groupId>org.jboss.weld.se</groupId>
         <artifactId>weld-se</artifactId>
         <version>2.1.1.Final</version>
      </dependency>
      <dependency>
         <groupId>org.jberet</groupId>
         <artifactId>jberet-se</artifactId>
         <version>1.0.2.Final</version>
      </dependency>

   </dependencies>
   <build>
      <plugins>
         <plugin>
            <groupId>org.codehaus.mojo</groupId>
            <artifactId>exec-maven-plugin</artifactId>
            <version>1.2.1</version>
            <executions>
               <execution>
                  <goals>
                     <goal>java</goal>
                  </goals>
               </execution>
            </executions>
            <configuration>
               <mainClass>com.mastertheboss.jberet.Main</mainClass>
            </configuration>
         </plugin>
      </plugins>
   </build>
</project>

Optionally, you can also include the following dependencies if you need an XML processor or the streaming JSON processor:

<dependency>
    <groupId>com.fasterxml</groupId>
    <artifactId>aalto-xml</artifactId>
     <version>0.9.9</version>
</dependency>

<dependency>
    <groupId>org.codehaus.woodstox</groupId>
    <artifactId>stax2-api</artifactId>
    <version>3.1.4</version>
</dependency>
        
<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-core</artifactId>
    <version>2.4.1</version>
</dependency>
<dependency>
    <groupId>com.fasterxml.jackson.dataformat</groupId>
    <artifactId>jackson-dataformat-xml</artifactId>
    <version>2.4.1</version>
</dependency>

You can execute your application with:

mvn clean install exec:java

After some INFO messages you should see the following on the console:

Batchlet submitted: 1
File copied!

Configuring the JBeret engine

When using batch jobs within the WildFly container you can configure job persistence and thread pools via the batch subsystem. When running a standalone application you can do it via a file named jberet.properties, which has to be placed in src/main/resources of your Maven project.
Here is a sample jberet.properties file:

# Optional, valid values are jdbc (default), mongodb and in-memory
job-repository-type = jdbc

# Optional, default is jdbc:h2:~/jberet-repo for h2 database as the default job repository DBMS.
# For h2 in-memory database, db-url = jdbc:h2:mem:test;DB_CLOSE_DELAY=-1
# For mongodb, db-url includes all the parameters for MongoClientURI
# (hosts, ports, username, password, etc.)

# Use the target directory to store the DB
db-url = jdbc:h2:./target/jberet-repo
db-user = sa
db-password = sa
db-properties =

# Configured: java.util.concurrent.ThreadPoolExecutor is created with thread-related properties as parameters.
thread-pool-type =

# New tasks are serviced first by creating core threads.
# Required for Configured type.
thread-pool-core-size =

# If all core threads are busy, new tasks are queued.
# int number indicating the size of the work queue. If 0 or negative, a java.util.concurrent.SynchronousQueue is used.
# Required for Configured type.
thread-pool-queue-capacity =

# If queue is full, additional non-core threads are created to service new tasks.
# int indicating the maximum size of the thread pool.
# Required for Configured type.
thread-pool-max-size =

# long number indicating the number of seconds a thread can stay idle.
# Required for Configured type.
thread-pool-keep-alive-time =

# Optional, valid values are true and false, defaults to false.
thread-pool-allow-core-thread-timeout =

# Optional, valid values are true and false, defaults to false.
thread-pool-prestart-all-core-threads =

# Optional, fully-qualified name of a class that implements java.util.concurrent.ThreadFactory.
# This property should not be needed in most cases.
thread-factory =

# Optional, fully-qualified name of a class that implements java.util.concurrent.RejectedExecutionHandler.
# This property should not be needed in most cases.
thread-pool-rejection-policy =

As you can see, this file largely relies on defaults for settings such as the thread pool. We have, however, set the job-repository-type so that jobs are persisted to a database (H2). In this case we need to add the H2 JDBC driver to our Maven project as follows:

      <dependency>
         <groupId>com.h2database</groupId>
         <artifactId>h2</artifactId>
         <version>1.4.178</version>
      </dependency>

That's all! Enjoy the Batch API using JBeret!

Acknowledgments: I'd like to express my gratitude to Cheng Fang (JBeret project lead) for providing useful insights for this article.

Java EE Batchlets with WildFly tutorial

This is the second tutorial about the Java API for Batch Applications (JSR-352) on the WildFly application server. In the first Java EE Batch tutorial we learnt how to use chunk steps, which are iterative executions of read/process/write operations on a set of items. Batchlets, on the other hand, are activities which are not executed in an iterative style like chunks, but as atomic activities. In short, a batchlet is a step which is called once: it either succeeds or fails, and if it fails it can be restarted and runs again.

In the following example we will show how to combine batchlets, flows and deciders in order to create workflow-like logic using just a few Java classes and the Job Specification Language XML file. Our simple job takes care of copying some files, then a decision node evaluates whether the hard disk space is getting low. If so, an email is sent to the administrator; otherwise the job completes.

Here's the job XML file which reflects this logic:

<job id="bpmJob" xmlns="http://xmlns.jcp.org/xml/ns/javaee" version="1.0">

   <flow id="mainprocess" next="sendemail">

      <step id="copyfiles" next="decider1">

         <batchlet ref="copyFilesBatchlet" />
      </step>

      <decision id="decider1" ref="decisionNode">
         <next on="DSK_SPACE_LOW" to="sendemail" />
         <end on="DSK_SPACE_OK" />
      </decision>
   </flow>

 

   <step id="sendemail">
      <properties>
          <property name="mail.from" value="SENDER-EMAIL" />
          <property name="mail.to" value="DESTINATION-EMAIL"  />
      </properties>
      <batchlet ref="mailBatchlet" />
   </step>
</job>

The JSL XML file needs to be created in the META-INF/batch-jobs directory of your application, so if you are using Maven to build your project a good place for the XML file is the src/main/resources/META-INF/batch-jobs folder.
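
For reference, here is a minimal sketch of the Maven layout for this application (the class names are taken from the listings below; the com.mastertheboss package name is an assumption):

src/main/java/com/mastertheboss/CopyFilesBatchlet.java
src/main/java/com/mastertheboss/DecisionNode.java
src/main/java/com/mastertheboss/MailBatchlet.java
src/main/resources/META-INF/batch-jobs/bpmJob.xml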

The job includes a flow and a step named "sendemail". A flow is a composition of steps which run as a single unit. In our case, the flow "mainprocess" includes a first step named "copyfiles", which copies the files, and a decision node which performs some checks and drives the execution either towards the step "sendemail" or to the end of the job.

The copyFilesBatchlet is a CDI bean which extends javax.batch.api.AbstractBatchlet and implements the process method as follows:

@Named
public class CopyFilesBatchlet extends AbstractBatchlet {

    @Inject
    JobContext jobContext;

    @Override
    public String process() {

        System.out.println("Running inside CopyFilesBatchlet batchlet");

        Properties parameters = getParameters();
        String source = parameters.getProperty("source");
        String destination = parameters.getProperty("destination");

        // JDK 1.7 Files API
        try {
            Files.copy(new File(source).toPath(), new File(destination).toPath());
            return "COMPLETED";
        } catch (IOException e) {
            e.printStackTrace();
        }
        return "FAILED";
    }

    private Properties getParameters() {
        JobOperator operator = BatchRuntime.getJobOperator();
        return operator.getParameters(jobContext.getExecutionId());
    }
}

As you can see, the actual file copy happens within the process method. The source and destination files are taken from the parameters included when starting the job, so that you can change the file names dynamically. In order to collect the job parameters we programmatically interact with the JobOperator, which is available via the BatchRuntime class. The JobOperator requires a job execution id, which is collected from the JobContext interface. If the Batchlet completes successfully it returns the "COMPLETED" exit code, otherwise "FAILED" is returned.

The other component of our job is the Decider, which checks the free space on the file system and returns an exit code as well.

@Named
public class DecisionNode implements Decider {

    @Inject
    JobContext jobContext;

    @Override
    public String decide(StepExecution[] ses) throws Exception {
        Properties parameters = getParameters();
        String fs = parameters.getProperty("filesystem");
        File file = new File(fs);
        // check the space still available on the partition
        long freeSpace = file.getUsableSpace();
        if (freeSpace > 100000) {
            return "DSK_SPACE_OK";
        } else {
            return "DSK_SPACE_LOW";
        }
    }

    private Properties getParameters() {
        JobOperator operator = BatchRuntime.getJobOperator();
        return operator.getParameters(jobContext.getExecutionId());
    }
}

As you can see, a Decider has to implement the javax.batch.api.Decider interface and its decide method. Its return value is used in the JSL file to drive the execution down one path or another.

The last Batchlet included in this example is the mailBatchlet, which is triggered if the Decider returns "DSK_SPACE_LOW".

@Named
public class MailBatchlet extends AbstractBatchlet {

    @Resource(mappedName = "java:jboss/mail/Default")
    private Session mailSession;

    @Inject StepContext stepContext;
    @Inject JobContext jobContext;

    @Override
    public String process() {
        System.out.println("Running inside MailBatchlet batchlet");
        String fromAddress = stepContext.getProperties().getProperty("mail.from");
        String toAddress = stepContext.getProperties().getProperty("mail.to");

        try {
            MimeMessage m = new MimeMessage(mailSession);
            Address from = new InternetAddress(fromAddress);
            Address[] to = new InternetAddress[] { new InternetAddress(toAddress) };

            m.setFrom(from);
            m.setRecipients(Message.RecipientType.TO, to);
            m.setSubject("WildFly Mail");
            m.setSentDate(new java.util.Date());
            m.setContent("Job Execution id " + jobContext.getExecutionId() + " warned disk space getting low!", "text/plain");
            Transport.send(m);
        } catch (javax.mail.MessagingException e) {
            e.printStackTrace();
            return "FAILED";
        }
        return "COMPLETED";
    }
}

What is interesting to note in this Batchlet is that it reads some step-level properties which are specified in the JSL file:

<step id="sendemail">
    <properties>
         <property name="mail.from" value="SENDER-EMAIL" />
         <property name="mail.to" value="DESTINATION-EMAIL"  />
    </properties>
    <batchlet ref="mailBatchlet" />
</step>

Properties provide a convenient way to specify attributes which are less dynamic than parameters, yet can still be changed without recompiling the project. In order to execute our job we need to code a simple client, which can be a Servlet for example:

protected void processRequest(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException {
    response.setContentType("text/html;charset=UTF-8");
    PrintWriter out = response.getWriter();
    try {
        JobOperator jo = BatchRuntime.getJobOperator();

        // job parameters, read by the Batchlets and the Decider at runtime
        Properties p = new Properties();
        p.setProperty("source", "/home/user1/file.dmp");
        p.setProperty("destination", "/usr/share/dmp");
        p.setProperty("filesystem", "/usr/");
        long id = jo.start("bpmJob", p);

        out.println("Job submitted: " + id);
    } catch (JobStartException | JobSecurityException ex) {
        out.println("Error submitting Job! " + ex.getMessage());
        ex.printStackTrace();
    }
    out.flush();
}

 
If you have already gone through our first tutorial about chunk steps, the Servlet client should look familiar. The only difference here is that we are including some parameters in order to make the job dynamic. You can compile the example by including the Maven dependencies required for the Batch API and CDI:

 <repositories>
  <repository>
   <id>JBoss Repository</id>
   <url>https://repository.jboss.org/nexus/content/groups/public/</url>
  </repository>
 </repositories>

 <dependencyManagement>
  <dependencies>
   <dependency>
    <groupId>org.jboss.spec</groupId>
    <artifactId>jboss-javaee-7.0</artifactId>
    <version>1.0.0.Final</version>
    <type>pom</type>
    <scope>import</scope>
   </dependency>

  </dependencies>
 </dependencyManagement>
 <dependencies>

  <!-- Import the Batch API -->
  <dependency>
   <groupId>org.jboss.spec.javax.batch</groupId>
   <artifactId>jboss-batch-api_1.0_spec</artifactId>
   <version>1.0.0.Final</version>
  </dependency>
  
  <!-- Import the Mail API -->
  <dependency>
   <groupId>com.sun.mail</groupId>
   <artifactId>javax.mail</artifactId>
   <version>1.5.1</version>
  </dependency>

  <!-- Import the CDI API -->
  <dependency>
   <groupId>javax.enterprise</groupId>
   <artifactId>cdi-api</artifactId>
   <scope>provided</scope>
  </dependency>
  
  <!-- Import the Common Annotations API (JSR-250) -->
  <dependency>
   <groupId>org.jboss.spec.javax.annotation</groupId>
   <artifactId>jboss-annotations-api_1.2_spec</artifactId>
   <scope>provided</scope>
  </dependency>
  
  <!-- Import the Servlet API -->
  <dependency>
   <groupId>org.jboss.spec.javax.servlet</groupId>
   <artifactId>jboss-servlet-api_3.1_spec</artifactId>
   <scope>provided</scope>
  </dependency>
 </dependencies>

 
That's all! Enjoy the Java EE Batch API with the WildFly application server!

Batch Applications tutorial on WildFly

This tutorial discusses Batch Applications for the Java Platform (JSR-352), which can be used to define, implement and run batch jobs. Batch jobs are composed of a set of tasks which can be executed automatically, without user interaction. These tasks are executed periodically or when resource usage is low, and they often process large amounts of information such as log files, database records or images. Examples include billing, report generation, data format conversion and image processing. The batch framework is a rather rich one, as it includes a Java API, an XML configuration and a batch runtime.

Batch applications are broken down into a set of steps which specify their execution order. A simple batch might just process a set of records sequentially, but more advanced ones may specify additional elements like decision elements or parallel execution of steps.

Before diving into the example, some definitions first: what is a step? Put simply, a step is an independent, sequential phase of a batch job. A step can be either chunk-oriented or task-oriented.

Chunk-oriented steps process data by reading items from a source, applying some transformation/business logic to each item, and storing the results. Chunk steps operate on one item at a time and group the results into a chunk. The results are stored when the chunk reaches a configurable size. Chunk-oriented processing makes storing results more efficient and facilitates transaction demarcation.


Task-oriented steps, on the other hand, execute actions other than processing single items from a source. A typical example of a task-oriented step might be some DDL on a database or an operation on a file system. In terms of comparison, a chunk-oriented step can be used for massive, long-running tasks, whilst a task-oriented step might fit a set of batch operations that are to be executed periodically.

JSR 352 also defines a roll-your-own kind of step called a batchlet. A batchlet is free to use anything to accomplish the step, such as sending an e-mail.

In this tutorial we will learn how to use chunk-oriented steps. Each chunk step is in turn broken into three parts:

  • The reader chunk part reads single items from a source of data (database, file system, LDAP, etc.)
  • The processor chunk part manipulates one item at a time using the logic defined by the application (e.g. sorting, filtering, transforming data)
  • The writer chunk part writes the items which have been processed in the earlier phase

Due to their nature, chunk steps are usually long-running activities, therefore it is possible to bookmark their progress using checkpoints. A checkpoint can be used to restart the execution of a step which has been interrupted, as illustrated in the sketch below.
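
As an illustration of the concept (this class is not part of the sample that follows), an ItemReader can expose its progress to the batch runtime via the checkpointInfo() callback and resume from it in open(); a minimal line-number based sketch:

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.Serializable;
import javax.batch.api.chunk.AbstractItemReader;
import javax.inject.Named;

@Named
public class CheckpointedReader extends AbstractItemReader {

    private BufferedReader reader;
    private int linesRead;

    @Override
    public void open(Serializable checkpoint) throws Exception {
        reader = new BufferedReader(new FileReader("/tmp/mydata.csv")); // path is an assumption
        if (checkpoint != null) {
            // on restart, skip the lines already processed in the previous run
            linesRead = (Integer) checkpoint;
            for (int i = 0; i < linesRead; i++) {
                reader.readLine();
            }
        }
    }

    @Override
    public Object readItem() throws Exception {
        String line = reader.readLine();
        if (line != null) {
            linesRead++;
        }
        return line;
    }

    @Override
    public Serializable checkpointInfo() {
        // stored by the batch runtime at each chunk commit
        return linesRead;
    }
}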

In this tutorial we will see a simple yet powerful example of a batch job which takes as input a CSV file that is read, processed and inserted into a database. This example has been taken from Arun Gupta's Java EE 7 samples (https://github.com/arun-gupta/javaee7-samples/tree/master/batch/chunk-csv-database) and has been slightly modified in its configuration to use the default WildFly datasource, the WildFly Java EE 7 dependencies and the JBoss Maven plugin.

The Job file

Each job must be named uniquely and its definition file must be placed in the META-INF/batch-jobs directory. So here's our job definition file (myJob.xml):

<job id="myJob" xmlns="http://xmlns.jcp.org/xml/ns/javaee" version="1.0">
    <step id="myStep" >
        <chunk item-count="3">
            <reader ref="myItemReader"/>
            <processor ref="myItemProcessor"/>
            <writer ref="myItemWriter"/>
        </chunk>    
    </step>
</job> 

This is the job definition file which describes the steps and chunks we are going to execute, the reference implementations for them, and the size of the chunk via the item-count attribute (here a chunk is committed every 3 items). The three implementations follow.

Writing the ItemReader

@Named
public class MyItemReader extends AbstractItemReader {

    private BufferedReader reader;

    @Override
    public void open(Serializable checkpoint) throws Exception {
        // Class.getResourceAsStream resolves a leading slash from the classpath root
        reader = new BufferedReader(
                new InputStreamReader(
                        this.getClass().getResourceAsStream("/META-INF/mydata.csv")));
    }

    @Override
    public String readItem() {
        try {
            return reader.readLine();
        } catch (IOException ex) {
            Logger.getLogger(MyItemReader.class.getName()).log(Level.SEVERE, null, ex);
        }
        return null;
    }
}

Note that the class is annotated with the @Named annotation. Because the @Named annotation uses the default value, the Contexts and Dependency Injection (CDI) name for this bean is myItemReader.

Writing the ItemProcessor
Our MyItemProcessor follows a pattern similar to myItemReader, but it is in charge of creating a Person object from the CSV line of text which has been read:

@Named
public class MyItemProcessor implements ItemProcessor {
    SimpleDateFormat format = new SimpleDateFormat("M/dd/yy");

    @Override
    public Person processItem(Object t) {
        System.out.println("processItem: " + t);
        
        StringTokenizer tokens = new StringTokenizer((String)t, ",");

        String name = tokens.nextToken();
        String date;
        
        try {
            date = tokens.nextToken();
            format.setLenient(false);
            format.parse(date);
        } catch (ParseException e) {
            return null;
        }
        
        return new Person(name, date);
    }
}

The processItem() method receives (from the batch runtime) a String object which is tokenized and used to create a Person object as output. Notice that the type of object returned by an ItemProcessor can be different from the type of object it received from ItemReader.
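
The Person class itself is not listed here; a minimal sketch of a JPA entity matching the constructor used above could look like the following (field names and generation strategy are assumptions):

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

@Entity
public class Person {

    @Id
    @GeneratedValue
    private Long id;

    private String name;
    private String dateOfBirth;

    // a no-argument constructor is required by JPA
    public Person() {
    }

    public Person(String name, String dateOfBirth) {
        this.name = name;
        this.dateOfBirth = dateOfBirth;
    }

    // getters and setters omitted for brevity
}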

Writing the ItemWriter
Here is the ItemWriter which, as we said, is in charge of persisting the items (Person objects) to the default database (ExampleDS).

@Named
public class MyItemWriter extends AbstractItemWriter {
    
    @PersistenceContext
    EntityManager em;

    @Override
    public void writeItems(List list) {
        System.out.println("writeItems: " + list);
        for (Object person : list) {
            em.persist(person);
        }
    }
}

Starting a Batch Job from a Servlet

Note that the mere presence of a job XML file or other batch artifacts (such as an ItemReader) doesn't mean that a batch job is automatically started when the application is deployed. A batch job must be initiated explicitly, for instance from a Servlet, an EJB timer or an EJB business method.

protected void processRequest(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        response.setContentType("text/html;charset=UTF-8");
        try (PrintWriter out = response.getWriter()) {
            out.println("<html>");
            out.println("<head>");
            out.println("<title>CSV-to-Database Chunk Job</title>");
            out.println("</head>");
            out.println("<body>");
            out.println("<h1>CSV-to-Database Chunk Job</h1>");
            JobOperator jo = BatchRuntime.getJobOperator();
            long jid = jo.start("myJob", new Properties());
            out.println("Job submitted: " + jid + "<br>");
            out.println("<br><br>Check server.log for output, also look at \"myJob.xml\" for Job XML.");
            out.println("</body>");
            out.println("</html>");
        } catch (JobStartException | JobSecurityException ex) {
            Logger.getLogger(TestServlet.class.getName()).log(Level.SEVERE, null, ex);
        }
    }

The first step is to obtain an instance of JobOperator. This can be done by calling the following:

JobOperator jo = BatchRuntime.getJobOperator(); 

The servlet then creates an empty Properties object; this is where any input parameters for the job would be stored. Finally, a new batch job is started by calling the following:

jo.start("myJob", new Properties());

The job name is nothing but the name of the job JSL XML file (minus the .xml extension). The Properties parameter serves to pass any input data to the job.

The batch runtime assigns a unique ID, called the execution ID, to identify each execution of a job, whether it is a freshly submitted job or a restarted one. Many of the JobOperator methods take the execution ID as a parameter. Using the execution ID, a program can obtain the current (and past) execution status and other statistics about the job. The JobOperator.start() method returns the execution ID of the job that was started.
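
For example, a sketch of querying the outcome of the job submitted above, using only standard JobOperator/JobExecution methods (requires importing javax.batch.runtime.JobExecution):

JobExecution execution = jo.getJobExecution(jid);
System.out.println("Batch status: " + execution.getBatchStatus());
System.out.println("Exit status: " + execution.getExitStatus());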

Compiling the Batch Code

In order to compile the project, I've changed the original pom.xml by including the org.jboss.spec.javax.batch dependency. Note also that I'm using the new jboss-javaee-7.0 BOM since, as far as I know, the jboss-javaee-7.0-with-hibernate BOM is not yet available.

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
   <modelVersion>4.0.0</modelVersion>
   <groupId>org.javaee7.batch</groupId>
   <artifactId>batch-samples</artifactId>
   <version>1.0-SNAPSHOT</version>
   <packaging>war</packaging>
   <name>Batch Applications for the Java Platform (JSR-352) Example</name>
   <description>Batch Applications for the Java Platform (JSR-352) Example</description>
   <url>http://jboss.org/jbossas</url>
   <licenses>
      <license>
         <name>Apache License, Version 2.0</name>
         <distribution>repo</distribution>
         <url>http://www.apache.org/licenses/LICENSE-2.0.html</url>
      </license>
   </licenses>
   <properties>
      <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
      <version.jboss.maven.plugin>7.4.Final</version.jboss.maven.plugin>
      <version.jboss.spec.javaee.6.0>3.0.2.Final</version.jboss.spec.javaee.6.0>
      <version.war.plugin>2.1.1</version.war.plugin>
      <version.compiler.plugin>2.3.1</version.compiler.plugin>
      <!-- maven-compiler-plugin -->
      <maven.compiler.target>1.7</maven.compiler.target>
      <maven.compiler.source>1.7</maven.compiler.source>
      <version.jboss.bom>1.0.4.Final</version.jboss.bom>
   </properties>
   <dependencyManagement>
      <dependencies>
         <dependency>
            <groupId>org.jboss.spec</groupId>
            <artifactId>jboss-javaee-7.0</artifactId>
            <version>1.0.0.Beta2</version>
            <type>pom</type>
            <scope>import</scope>
         </dependency>
         <dependency>
            <groupId>org.jboss.bom</groupId>
            <artifactId>jboss-javaee-6.0-with-hibernate</artifactId>
            <version>${version.jboss.bom}</version>
            <type>pom</type>
            <scope>import</scope>
         </dependency>
      </dependencies>
   </dependencyManagement>
   <dependencies>
      <dependency>
         <groupId>org.hibernate.javax.persistence</groupId>
         <artifactId>hibernate-jpa-2.0-api</artifactId>
         <scope>provided</scope>
      </dependency>
      <dependency>
         <groupId>org.hibernate</groupId>
         <artifactId>hibernate-validator</artifactId>
         <scope>provided</scope>
         <exclusions>
            <exclusion>
               <groupId>org.slf4j</groupId>
               <artifactId>slf4j-api</artifactId>
            </exclusion>
         </exclusions>
      </dependency>
      <!-- Import the Batch API which is included in WildFly 8 -->
      <dependency>
         <groupId>org.jboss.spec.javax.batch</groupId>
         <artifactId>jboss-batch-api_1.0_spec</artifactId>
         <version>1.0.0.Final</version>
      </dependency>
      <!-- Import the CDI API -->
      <dependency>
         <groupId>javax.enterprise</groupId>
         <artifactId>cdi-api</artifactId>
         <scope>provided</scope>
      </dependency>
      <!-- Import the Common Annotations API (JSR-250) -->
      <dependency>
         <groupId>org.jboss.spec.javax.annotation</groupId>
         <artifactId>jboss-annotations-api_1.1_spec</artifactId>
         <scope>provided</scope>
      </dependency>
      <!-- Import the Servlet API -->
      <dependency>
         <groupId>org.jboss.spec.javax.servlet</groupId>
         <artifactId>jboss-servlet-api_3.0_spec</artifactId>
         <scope>provided</scope>
      </dependency>
   </dependencies>
   <build>
      <!-- Set the name of the war, used as the context root when the app is deployed -->
      <finalName>${project.artifactId}</finalName>
      <plugins>
         <plugin>
            <artifactId>maven-war-plugin</artifactId>
            <version>${version.war.plugin}</version>
            <configuration>
               <!-- Java EE 6 doesn't require web.xml, Maven needs to catch up! -->
               <failOnMissingWebXml>false</failOnMissingWebXml>
            </configuration>
         </plugin>
         <!-- JBoss AS plugin to deploy war -->
         <plugin>
            <groupId>org.jboss.as.plugins</groupId>
            <artifactId>jboss-as-maven-plugin</artifactId>
            <version>${version.jboss.maven.plugin}</version>
         </plugin>
         <!-- Compiler plugin enforces Java 1.7 compatibility and activates
                annotation processors -->
         <plugin>
            <artifactId>maven-compiler-plugin</artifactId>
            <version>${version.compiler.plugin}</version>
            <configuration>
               <source>${maven.compiler.source}</source>
               <target>${maven.compiler.target}</target>
            </configuration>
         </plugin>
      </plugins>
   </build>
</project>

Download the Maven project for this example from http://www.mastertheboss.com/code/chunk-csv-database.zip