Running Batch jobs in J2SE applications

This tutorial shows how you can run the Java Batch API (JSR 352) as part of a J2SE application.

The Java Batch API (JSR 352) lets you execute batch activities, described in a Job Specification Language (JSL), using two main programming models: Chunk Steps and Batchlets. I have already blogged some examples of Chunk Steps and Batchlets which are designed to run on the WildFly application server.
An implementation of the Java Batch API (JSR 352) is provided by a project named JBeret, which also allows executing batch activities as part of Java Standard Edition applications. In this tutorial we will see a basic example of how to run a Batchlet from within a J2SE application.

Defining the Batch Job

The first step is defining the job via the Job Specification Language (JSL). Let's create this file, named simplebatchlet.xml, in the folder src/main/resources/META-INF/batch-jobs of a Maven project:

<job id="simplebatchlet" xmlns="http://xmlns.jcp.org/xml/ns/javaee" version="1.0">

    <step id="step1">
        <properties>
            <property name="source" value="/home/jboss/log.txt" />
            <property name="destination" value="/var/opt/log.txt" />
        </properties>
        <batchlet ref="sampleBatchlet" />
    </step>

</job>

In this simple JSL file we execute a Batchlet named "sampleBatchlet" as part of "step1", which also defines two properties, "source" and "destination". We will use these properties to copy a file from a source to a destination.

Defining the Batchlet

Here is the Batchlet, a CDI @Named bean that collects the properties from the StepContext and uses the Java 7 Files API to copy the source file to the destination:

package com.mastertheboss.jberet;

import java.io.File;
import java.io.IOException;
import java.nio.file.Files;

import javax.batch.api.AbstractBatchlet;
import javax.batch.runtime.context.StepContext;
import javax.inject.Inject;
import javax.inject.Named;

@Named
public class SampleBatchlet extends AbstractBatchlet {

    @Inject
    StepContext stepContext;

    @Override
    public String process() {
        String source = stepContext.getProperties().getProperty("source");
        String destination = stepContext.getProperties().getProperty("destination");

        try {
            Files.copy(new File(source).toPath(), new File(destination).toPath());
            System.out.println("File copied!");
            return "COMPLETED";
        } catch (IOException e) {
            return "FAILED";
        }
    }
}
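
One detail worth knowing about the Files.copy call used above: it throws FileAlreadyExistsException when the target file already exists, unless you pass StandardCopyOption.REPLACE_EXISTING. Here is a minimal, self-contained stdlib sketch (the temp-file names are purely illustrative and unrelated to JBeret):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class CopyDemo {
    public static void main(String[] args) throws IOException {
        // Create a source file with some content
        Path source = Files.createTempFile("jberet-source", ".txt");
        Files.write(source, "hello".getBytes());

        // createTempFile already creates the destination file on disk,
        // so a plain Files.copy(source, destination) would throw
        // FileAlreadyExistsException; REPLACE_EXISTING overwrites it.
        Path destination = Files.createTempFile("jberet-dest", ".txt");
        Files.copy(source, destination, StandardCopyOption.REPLACE_EXISTING);

        System.out.println(new String(Files.readAllBytes(destination)));
    }
}
```

If your Batchlet may run against an existing destination file, consider passing REPLACE_EXISTING there as well.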

You can trigger the execution of your job with a simple main Java class. Note that JobOperator.start is asynchronous: it returns the job execution id immediately while the job runs on the batch runtime's thread pool:

package com.mastertheboss.jberet;

import javax.batch.operations.JobOperator;
import javax.batch.operations.JobSecurityException;
import javax.batch.operations.JobStartException;
import javax.batch.runtime.BatchRuntime;

public class Main {

    public static void main(String[] args) {
        try {
            JobOperator jo = BatchRuntime.getJobOperator();

            long id = jo.start("simplebatchlet", null);

            System.out.println("Batchlet submitted: " + id);

        } catch (JobStartException | JobSecurityException ex) {
            System.out.println("Error submitting Job! " + ex.getMessage());
        }
    }
}


Compiling the project

In order to compile and run our project, we need to include in our pom.xml a set of dependencies: the javax.batch Batch API, JBeret core, the JBeret SE runtime, and the Weld CDI container. Their remaining dependencies are pulled in transitively by Maven. The versions below are indicative; check the JBeret documentation for the current ones:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">

    <modelVersion>4.0.0</modelVersion>
    <groupId>com.mastertheboss.jberet</groupId>
    <artifactId>jberet-demo</artifactId>
    <version>1.0-SNAPSHOT</version>

    <repositories>
        <repository>
            <id>jboss-public-repository-group</id>
            <name>JBoss Public Repository Group</name>
            <url>https://repository.jboss.org/nexus/content/groups/public/</url>
        </repository>
    </repositories>

    <dependencies>
        <dependency>
            <groupId>javax.batch</groupId>
            <artifactId>javax.batch-api</artifactId>
            <version>1.0.1</version>
        </dependency>
        <dependency>
            <groupId>org.jberet</groupId>
            <artifactId>jberet-core</artifactId>
            <version>1.3.0.Final</version>
        </dependency>
        <dependency>
            <groupId>org.jberet</groupId>
            <artifactId>jberet-se</artifactId>
            <version>1.3.0.Final</version>
        </dependency>
        <dependency>
            <groupId>org.jboss.weld.se</groupId>
            <artifactId>weld-se</artifactId>
            <version>2.4.8.Final</version>
        </dependency>
    </dependencies>
</project>


Optionally, you can also include some other dependencies in case you need an XML processor or a streaming JSON processor:


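For example, JBeret can use Aalto for XML parsing and Jackson for streaming JSON. The version numbers below are indicative only; check Maven Central for current releases:

```xml
<!-- Optional: XML processor -->
<dependency>
    <groupId>com.fasterxml</groupId>
    <artifactId>aalto-xml</artifactId>
    <version>1.0.0</version>
</dependency>

<!-- Optional: streaming JSON processor -->
<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-core</artifactId>
    <version>2.9.8</version>
</dependency>
```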

You can then build and execute your application with:

mvn clean install exec:java
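
For mvn exec:java to find the entry point, the exec-maven-plugin has to be configured in the pom.xml build section with the main class. A sketch (the plugin version is indicative):

```xml
<plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>exec-maven-plugin</artifactId>
    <version>1.6.0</version>
    <configuration>
        <mainClass>com.mastertheboss.jberet.Main</mainClass>
    </configuration>
</plugin>
```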

After some INFO messages you should see on the console:

Batchlet submitted: 1
File copied!

Configuring JBeret engine

When using Batch jobs within the WildFly container you can configure job persistence and thread pools via the batch subsystem. When running as a standalone application you can do it via a file named jberet.properties, which has to be placed in src/main/resources of your Maven project.
Here follows a sample file:

# Optional, valid values are jdbc (default), mongodb and in-memory
job-repository-type = jdbc

# Optional, default is jdbc:h2:~/jberet-repo for h2 database as the default job repository DBMS.
# For h2 in-memory database, db-url = jdbc:h2:mem:test;DB_CLOSE_DELAY=-1
# For mongodb, db-url includes all the parameters for MongoClientURI, including hosts, ports, username, password and options.

# Use the target directory to store the DB
db-url = jdbc:h2:./target/jberet-repo
db-user =sa
db-password =sa
db-properties =

# Optional, valid values are Cached (default), Fixed and Configured.
# For Configured, a java.util.concurrent.ThreadPoolExecutor is created with the thread-related properties below as parameters.
thread-pool-type =

# New tasks are serviced first by creating core threads.
# Required for Configured type.
thread-pool-core-size =

# If all core threads are busy, new tasks are queued.
# int number indicating the size of the work queue. If 0 or negative, a java.util.concurrent.SynchronousQueue is used.
# Required for Configured type.
thread-pool-queue-capacity =

# If queue is full, additional non-core threads are created to service new tasks.
# int indicating the maximum size of the thread pool.
# Required for Configured type.
thread-pool-max-size =

# long number indicating the number of seconds a thread can stay idle.
# Required for Configured type.
thread-pool-keep-alive-time =

# Optional, valid values are true and false, defaults to false.
thread-pool-allow-core-thread-timeout =

# Optional, valid values are true and false, defaults to false.
thread-pool-prestart-all-core-threads =

# Optional, fully-qualified name of a class that implements java.util.concurrent.ThreadFactory.
# This property should not be needed in most cases.
thread-factory =

# Optional, fully-qualified name of a class that implements java.util.concurrent.RejectedExecutionHandler.
# This property should not be needed in most cases.
thread-pool-rejection-policy =

As you can see, this file largely relies on defaults for many settings, such as the thread pool. We have kept the default jdbc job-repository-type, which persists jobs on a DB (H2 by default), and pointed db-url at the project's target directory. In this case we need to add the H2 JDBC driver to our Maven project as follows:


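The version shown is indicative; any recent H2 1.4.x release should work:

```xml
<dependency>
    <groupId>com.h2database</groupId>
    <artifactId>h2</artifactId>
    <version>1.4.197</version>
</dependency>
```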
That’s all! Enjoy the Batch API with JBeret!

Acknowledgments: I’d like to express my gratitude to Cheng Fang (JBeret project lead) for providing useful insights for writing this article.