How to solve the “Too many Open Files” error in Java applications

This tutorial will discuss how to fix one of the most common errors for Java applications: “Too many open files“.

The Java IOException “Too many open files” can happen on high-load servers: it means that a process has opened too many files (file descriptors) and cannot open new ones. On Linux, the maximum number of open files is set by default for each process or user, and the default values are quite small.

Also note that socket connections are treated like files and they use file descriptors, which are a limited resource.

You can approach this issue with the following checklist:

1) Check what your application is doing. Are you using resources (sockets/IO streams/database connections) without properly closing them?

A common way to release resources safely is the try-with-resources statement.

The try-with-resources statement is a try statement that declares one or more resources. A resource is an object that must be closed after the program is finished with it. The try-with-resources statement ensures that each resource is closed at the end of the statement. Any object that implements java.lang.AutoCloseable, which includes all objects which implement, can be used as a resource.

The following example reads the first line from a file. It uses an instance of BufferedReader to read data from the file. BufferedReader is a resource that must be closed after the program is finished with it:

static String readFromFile(String path) throws IOException {
    try (BufferedReader br =
                   new BufferedReader(new FileReader(path))) {
        return br.readLine();
    }
}
Also, consider that if you do not close your handles explicitly, the underlying file descriptor will be released only when the object is garbage collected. It is worth checking, with a tool like Eclipse MAT, which objects are retained in memory when the “Too many open files” error is raised.

2) If you don’t find any apparent flaw in your code, you should check the Global OS limits for your machine.

On a Linux box the global maximum number of open files is configured in the /proc/sys/fs/file-max file. You can use the sysctl command (with root privileges) to check the current value:

$ sudo /sbin/sysctl fs.file-max
fs.file-max = 3220524

The default value for fs.file-max depends on the OS version you are using; however, it is calculated as approximately 1/10 of the physical RAM size at boot. If the calculated value is smaller than NR_FILE (8192), the default value is 8192.

Then, investigate on the Java process. Determine the pid of the Java process first:

$ ps -ef | grep jboss

frances+  7790  7671 58 12:10 pts/0    00:01:17 /home/java/jdk-11.0.4/bin/java -D[Standalone] -server -Xlog:gc*:file=/home/jboss/wildfly-23.0.0.Final/standalone/log/gc.log:time,uptimemillis:filecount=5,filesize=3M -Xms64m -Xmx512m -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=256m -Djboss.modules.system.pkgs=org.jboss.byteman,org.graalvm.visualvm -Djava.awt.headless=true --add-exports=java.base/ --add-exports=jdk.unsupported/sun.misc=ALL-UNNAMED --add-exports=jdk.unsupported/sun.reflect=ALL-UNNAMED -Dorg.jboss.boot.log.file=/home/jboss/wildfly-23.0.0.Final/standalone/log/server.log -Dlogging.configuration=file:/home//jboss/wildfly-23.0.0.Final/standalone/configuration/ -jar /home/jboss/wildfly-23.0.0.Final/jboss-modules.jar -mp /home//jboss/wildfly-23.0.0.Final/modules -Djboss.home.dir=/home//jboss/wildfly-23.0.0.Final -Djboss.server.base.dir=/home//jboss/wildfly-23.0.0.Final/standalone -c standalone-ha.xml

Then, check the number of file handles which are opened by the process:

$ lsof -p 7790 | wc -l
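As an alternative sketch (assuming a Linux /proc filesystem), you can count the entries in the process's fd directory; here we use the current shell ($$) as an example PID:

```shell
# Count the open file descriptors of a process via /proc.
# Replace $$ (the current shell) with the PID of your Java process.
ls /proc/$$/fd | wc -l
```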

If you are not sure which process is hogging your file descriptor table, you can run the following command, which returns the number of open file descriptors for each process ID:

$ lsof | awk '{print $2}' | sort | uniq -c | sort -n

On the other hand, if you want to count the handles per user, you can run lsof with the -u parameter:

$ lsof -u jboss | wc -l

So, right now you have the count of open handles on your machine. Besides that, you should also check what your sockets are actually doing. This can be verified with the netstat command. For example, to verify the status of the sockets opened by a single process, you can execute the following command:

$ netstat -tulpn | grep 7790
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
tcp        0      0*               LISTEN      7790/java           
tcp        0      0*               LISTEN      7790/java           
tcp        0      0*               LISTEN      7790/java           
tcp        0      0*               LISTEN      7790/java           
tcp        0      0*               LISTEN      7790/java           
udp        0      0*                           7790/java           
udp        0      0*                           7790/java           
udp        0      0*                           7790/java           
udp        0      0*               TIME_WAIT   7790/java           
udp        0      0*               TIME_WAIT   7790/java  

From the netstat output, we can see a small number of TIME_WAIT sockets, which is absolutely normal. You should worry if you detect thousands of sockets in TIME_WAIT state. If that happens, you should consider some possible actions such as:

  • Make sure you close the TCP connection on the client before closing it on the server.
  • Consider reducing the timeout of TIME_WAIT sockets. On most Linux machines, you can do it by adding the following contents to the sysctl.conf file (e.g. to reduce it to 30 seconds):
net.ipv4.tcp_syncookies = 1 
net.ipv4.tcp_tw_reuse = 1 
net.ipv4.tcp_tw_recycle = 1 
net.ipv4.tcp_timestamps = 1 
net.ipv4.tcp_fin_timeout = 30 
net.nf_conntrack_max = 655360 
net.netfilter.nf_conntrack_tcp_timeout_established  = 1200  
  • Use more client ports by setting net.ipv4.ip_local_port_range to a wider range.
  • Have the application listen on more server ports. (For example, httpd defaults to port 80; you can add extra ports.)
  • Add more client IPs by configuring additional IPs on the load balancer and using them in a round-robin fashion.
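As an illustration of the port-range tweak in the list above, you could widen the local port range in sysctl.conf (the values below are only an example, not a recommendation):

```
net.ipv4.ip_local_port_range = 1024 65000
```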

Also, consider that other processes are running on your machine and you should account for them as well, if your machine is not dedicated only to your Java application.

That being said, what should you do if the number of process handles is greater than fs.file-max? You can increase the value of fs.file-max as follows:

$ sudo /sbin/sysctl -w fs.file-max=<NEWVALUE>

3) Check the limits imposed per user/shell

On Linux RHEL/Fedora, these limits are configured through the Pluggable Authentication Modules (PAM), in the file /etc/security/limits.conf. To verify the current limit for your user, run the ulimit command:

$ ulimit -n
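Note that there are both a soft and a hard limit, which you can inspect separately:

```shell
# Soft limit (can be raised by the user, up to the hard limit)
ulimit -Sn
# Hard limit (can only be raised by root)
ulimit -Hn
```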


To change this value for the user jboss, who is running the Java application, edit the /etc/security/limits.conf file as follows:

jboss soft nofile 2048
jboss hard nofile 2048

You need to login again and restart the process for the changes to take effect.

Finally, please note that the user limits will be ignored when your application is started from a cron job, as cron does not use a login shell.

Writing high performance Java HTTP Client applications

This tutorial provides a detailed guide to writing high performance Java HTTP Client applications with the Apache HTTP Client library.

Out of the box, Apache HttpClient is configured to provide high reliability and standards compliance rather than raw performance. There are however several configuration tweaks and optimization techniques which can significantly improve the performance of applications using HttpClient. This tutorial covers various techniques to achieve maximum HttpClient performance.

Let’s start from a simple HTTPClient application:

import org.apache.http.HttpEntity;
import org.apache.http.HttpHeaders;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;

import;


public class App {

    public static void main(String[] args) throws IOException {

        HttpGet request = new HttpGet("");

        request.addHeader(HttpHeaders.CACHE_CONTROL, "max-age=0");
        request.addHeader(HttpHeaders.USER_AGENT, "Mozilla/5.0");

        try (CloseableHttpClient httpClient = HttpClients.createDefault();
             CloseableHttpResponse response = httpClient.execute(request)) {

            // Get HttpResponse Status

            System.out.println(response.getStatusLine().getStatusCode());   // Prints 200

            HttpEntity entity = response.getEntity();
            if (entity != null) {
                // return it as a String
                String result = EntityUtils.toString(entity);
                System.out.println(result);
            }
        }
    }
}

This example uses the try-with-resources statement which ensures that each resource is closed at the end of the statement. It can be used both for the client and for each response.

In terms of performance, it is recommended to have a single instance of HttpClient/CloseableHttpClient per communication component or even per application unless your application makes use of HttpClient only very infrequently.
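As a minimal sketch of this reuse pattern (shown here with the JDK's built-in rather than Apache HttpClient, so it runs without extra dependencies; the class name is illustrative):

```java

// Share one HttpClient instance application-wide instead of creating
// a new client for every request.
public final class SharedClient {

    private static final HttpClient CLIENT = HttpClient.newHttpClient();

    public static HttpClient client() {
        return CLIENT;
    }

    private SharedClient() {
    }
}
```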

For example, when an instance of CloseableHttpClient is no longer needed and is about to go out of scope, the connection manager associated with it must be shut down by calling the CloseableHttpClient#close() method.

CloseableHttpClient httpclient = HttpClients.createDefault();
try {
    //do something
} finally {
    httpclient.close();
}
This example uses the Apache HTTP Client 4 API. To build it, include the library in your pom.xml:
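For reference, the Apache HTTP Client 4 Maven coordinates look like this (the version number is only an example; check Maven Central for the latest release):

```xml
<dependency>
    <groupId>org.apache.httpcomponents</groupId>
    <artifactId>httpclient</artifactId>
    <version>4.5.13</version>
</dependency>
```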


Using Apache HTTPClient 5

A more advanced example, which requires using Apache HTTPClient 5, demonstrates the use of the HttpClientResponseHandler to simplify the process of processing the HTTP response and releasing associated resources.


import org.apache.hc.client5.http.ClientProtocolException;
import org.apache.hc.client5.http.classic.methods.HttpGet;
import org.apache.hc.client5.http.impl.classic.CloseableHttpClient;
import org.apache.hc.client5.http.impl.classic.HttpClients;
import org.apache.hc.core5.http.ClassicHttpResponse;
import org.apache.hc.core5.http.HttpEntity;
import org.apache.hc.core5.http.HttpStatus;
import org.apache.hc.core5.http.ParseException;
import org.apache.hc.core5.http.io.HttpClientResponseHandler;
import org.apache.hc.core5.http.io.entity.EntityUtils;

import;

public class AppWithResponseHandler {

    public static void main(final String[] args) throws Exception {
        try (final CloseableHttpClient httpclient = HttpClients.createDefault()) {
            final HttpGet httpget = new HttpGet("");

            System.out.println("Executing request " + httpget.getMethod() + " " + httpget.getUri());

            // Create a custom response handler
            final HttpClientResponseHandler<String> responseHandler = new HttpClientResponseHandler<String>() {

                @Override
                public String handleResponse(
                        final ClassicHttpResponse response) throws IOException {
                    final int status = response.getCode();
                    if (status >= HttpStatus.SC_SUCCESS && status < HttpStatus.SC_REDIRECTION) {
                        final HttpEntity entity = response.getEntity();
                        try {
                            return entity != null ? EntityUtils.toString(entity) : null;
                        } catch (final ParseException ex) {
                            throw new ClientProtocolException(ex);
                        }
                    } else {
                        throw new ClientProtocolException("Unexpected response status: " + status);
                    }
                }
            };

            final String responseBody = httpclient.execute(httpget, responseHandler);
            System.out.println(responseBody);
        }
    }
}


To compile and run the above example, you need to include the Apache HTTP Client 5 libraries in your pom.xml:
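For reference, the Apache HTTP Client 5 Maven coordinates look like this (the version number is only an example; check Maven Central for the latest release):

```xml
<dependency>
    <groupId>org.apache.httpcomponents.client5</groupId>
    <artifactId>httpclient5</artifactId>
    <version>5.0.3</version>
</dependency>
```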


Configuring an HTTP Client Connection Pool

By default, the maximum number of connections is 20 and the maximum number of connections per route is 2. However, these values are generally too low for real-world applications. For example, when all the connections are busy handling other requests, HttpClient won’t create a new connection once the number exceeds 20. As a result, any class that tries to execute a request won’t get a connection. Instead, it will eventually get a ConnectionPoolTimeoutException.

There are two main strategies to configure an HTTP Client Connection Pool:

1) Configure the connection pool by directly creating an instance of PoolingHttpClientConnectionManager:

public void executeWithPooled() throws Exception {
    PoolingHttpClientConnectionManager connectionManager = new PoolingHttpClientConnectionManager();
    connectionManager.setMaxTotal(100);          // example value
    connectionManager.setDefaultMaxPerRoute(20); // example value
    try (CloseableHttpClient httpClient = HttpClients.custom()
                                                     .setConnectionManager(connectionManager)
                                                     .build()) {
        final HttpGet httpGet = new HttpGet(GET_URL);
        try (CloseableHttpResponse response = httpClient.execute(httpGet)) {
            EntityUtils.consume(response.getEntity());
        }
    }
}

The other option is to use the HttpClientBuilder class, which provides shortcut configuration methods for setting the total maximum connections and the maximum connections per route:

public void executeWithPooledUsingHttpClientBuilder() throws Exception {
    try (CloseableHttpClient httpClient = HttpClients.custom()
                                                     .setMaxConnTotal(100)   // example value
                                                     .setMaxConnPerRoute(20) // example value
                                                     .build()) {
        final HttpGet httpGet = new HttpGet(GET_URL);
        try (CloseableHttpResponse response = httpClient.execute(httpGet)) {
            EntityUtils.consume(response.getEntity());
        }
    }
}

In the above example, we’re using setMaxConnTotal() and setMaxConnPerRoute() methods to set the pool properties.

Using Multithread HTTP Clients

The main reason for using multiple threads in HttpClient is to allow the execution of multiple methods at once (simultaneously downloading the latest builds of HttpClient and Tomcat, for example). During execution, each method uses an instance of an HttpConnection. Since connections can only be safely used by a single thread and method at a time, and are a finite resource, we need to ensure that connections are properly allocated to the methods that require them. This job goes to the MultiThreadedHttpConnectionManager.

To get started one must create an instance of the MultiThreadedHttpConnectionManager and give it to an HttpClient:

import org.apache.commons.httpclient.HttpClient;
import org.apache.commons.httpclient.MultiThreadedHttpConnectionManager;
import org.apache.commons.httpclient.methods.GetMethod;

public class AppMultiThread {

    public static void main(String[] args) {

        HttpClient httpClient = new HttpClient(new MultiThreadedHttpConnectionManager());
        // Set the default host/protocol for the methods to connect to.
        // This value will only be used if the methods are not given an absolute URI
        httpClient.getHostConfiguration().setHost("", 80, "http");
        // The list of URIs we will connect to
        String[] urisToGet = { "" };
        // create a thread for each URI
        GetThread[] threads = new GetThread[urisToGet.length];
        for (int i = 0; i < threads.length; i++) {
            GetMethod get = new GetMethod(urisToGet[i]);
            threads[i] = new GetThread(httpClient, get, i + 1);
        }
        // start the threads
        for (int j = 0; j < threads.length; j++) {
            threads[j].start();
        }
    }

    /** The thread which performs a GET request */
    static class GetThread extends Thread {
        private HttpClient httpClient;
        private GetMethod method;
        private int id;

        public GetThread(HttpClient httpClient, GetMethod method, int id) {
            this.httpClient = httpClient;
            this.method = method;
   = id;
        }

        public void run() {
            try {
                System.out.println(id + " - about to get something from " + method.getURI());
                // execute the method
                httpClient.executeMethod(method);
                System.out.println(id + " - get executed");
                // get the response body as a String
                String response = method.getResponseBodyAsString();
            } catch (Exception e) {
                System.out.println(id + " - error: " + e);
            } finally {
                // always release the connection after we're done
                method.releaseConnection();
                System.out.println(id + " - connection released");
            }
        }
    }
}

The MultiThreadedHttpConnectionManager supports the following options:

  • connectionStaleCheckingEnabled: The connectionStaleCheckingEnabled flag to set on all created connections. This value should be left true except in special circumstances. Consult the HttpConnection docs for more detail.
  • maxConnectionsPerHost: The maximum number of connections that will be created for any particular HostConfiguration. Defaults to 2.
  • maxTotalConnections: The maximum number of active connections. Defaults to 20.

Streaming HTTP Client request and response

The standard way of using HTTP Client buffers entities in memory, which is inefficient for large entities. A more efficient pattern is to use request/response body streaming.

In order to use response streaming, you can consume the HTTP response body as a stream of bytes/characters using the HttpMethod#getResponseBodyAsStream method.

  HttpClient httpclient = new HttpClient();
  GetMethod httpget = new GetMethod("");
  try {
    httpclient.executeMethod(httpget);
    Reader reader = new InputStreamReader(
            httpget.getResponseBodyAsStream(), httpget.getResponseCharSet());
    // consume the response entity
  } finally {
    httpget.releaseConnection();
  }

Please note that the use of HttpMethod#getResponseBody and HttpMethod#getResponseBodyAsString is strongly discouraged for large entities.

Request streaming: The main difficulty encountered when streaming request bodies is that some entity enclosing methods need to be retried due to an authentication failure or an I/O failure. Obviously non-buffered entities cannot be reread and resubmitted. The recommended approach is to create a custom RequestEntity capable of reconstructing the underlying input stream.

Let’s see an example:

import org.apache.commons.httpclient.methods.RequestEntity;

public class FileRequestEntity implements RequestEntity {

    private File file = null;

    public FileRequestEntity(File file) {
        this.file = file;
    }

    public boolean isRepeatable() {
        return true;
    }

    public String getContentType() {
        return "text/plain; charset=UTF-8";
    }

    public void writeRequest(OutputStream out) throws IOException {
        InputStream in = new FileInputStream(this.file);
        try {
            int l;
            byte[] buffer = new byte[1024];
            while ((l = != -1) {
                out.write(buffer, 0, l);
            }
        } finally {
            in.close();
        }
    }

    public long getContentLength() {
        return file.length();
    }
}

File myfile = new File("myfile.txt");
PostMethod httppost = new PostMethod("/stuff");
httppost.setRequestEntity(new FileRequestEntity(myfile));

In this tutorial we have covered various aspects to improve the performance of HTTP Client applications using Apache HTTP Client library.

How to solve java.lang.OutOfMemoryError: Compressed class space error

This article discusses how to solve the “java.lang.OutOfMemoryError: Compressed class space” error, an OutOfMemory error that can pop up on 64-bit platforms.

Starting with Java 1.8, loaded classes are confined to a native space called the “Compressed Class Metadata Space” or CCMS.

The default size for the Compressed Class Metadata Space is 1 GB of memory, unless you specify a different value by passing an argument on the JVM command line. When classes are loaded by the classloader, the JVM tries to allocate the metadata in the CCMS. If the JVM cannot make room in the CCMS arena, the Compressed class space error is thrown. So in most cases, the simplest solution is to increase the value of CompressedClassSpaceSize, for example to 2 GB:

-XX:CompressedClassSpaceSize=2g

There are, however, two variables to consider:

  • MaxMetaspaceSize is the maximum amount of memory that can be committed to the class metadata and the committed compressed class space combined. If you don’t specify this flag, the Metaspace will dynamically resize depending on the application demand at runtime.
  • CompressedClassSpaceSize is the amount of virtual memory reserved just for the compressed class space.

As a rule, you should set the MaxMetaspaceSize to be larger than the CompressedClassSpaceSize as in the following example:

-XX:MaxMetaspaceSize=512m -XX:CompressedClassSpaceSize=256m 

We can check the values of both variables with ‘jcmd’, passing as arguments the PID of the Java process and “VM.metaspace”:

 jcmd 13966 VM.metaspace

  . . 
MaxMetaspaceSize: 512.00 MB
InitialBootClassLoaderMetaspaceSize: 4.00 MB
UseCompressedClassPointers: true
CompressedClassSpaceSize: 256.00 MB

As you can see, our proposed CompressedClassSpaceSize has been committed in the JVM.

Conversely, if MaxMetaspaceSize is set to a lower value than CompressedClassSpaceSize, the JVM will self-adjust CompressedClassSpaceSize based on the following formula:

CompressedClassSpaceSize = MaxMetaspaceSize – 2 * InitialBootClassLoaderMetaspaceSize

For example, suppose we use the following JVM options:

-XX:MaxMetaspaceSize=256m -XX:CompressedClassSpaceSize=512m

Running jcmd again against the Java process reveals the actual value of CompressedClassSpaceSize (256 − 2 × 4 = 248 MB):

MaxMetaspaceSize: 256.00 MB
InitialBootClassLoaderMetaspaceSize: 4.00 MB
UseCompressedClassPointers: true
CompressedClassSpaceSize: 248.00 MB

Finally, it is worth mentioning that you can disable both UseCompressedClassPointers and UseCompressedOops, which should allow the JVM to load as many classes as fit into memory, instead of being bound by the limit imposed by the CompressedClassSpaceSize virtual memory region.

You can check the default values for your JDK as follows:

$ java -XX:+PrintFlagsFinal -version | grep Compressed

     size_t CompressedClassSpaceSize               = 1073741824                                {product} {default}
     bool UseCompressedClassPointers               = true                                 {lp64_product} {ergonomic}
     bool UseCompressedOops                        = true                                 {lp64_product} {ergonomic}
java version "11.0.4" 2019-07-16 LTS

How to configure the HeapDumpOnOutOfMemoryError parameter on JBoss EAP or WildFly

How to configure the HeapDumpOnOutOfMemoryError parameter on JBoss EAP or WildFly ? It’s pretty simple.

The -XX:+HeapDumpOnOutOfMemoryError command-line option tells the HotSpot VM to generate a heap dump when an allocation from the Java heap or the permanent generation cannot be satisfied.

Just like any other JVM parameters, it can be added in the standalone.conf for standalone applications:

JAVA_OPTS="$JAVA_OPTS -XX:+HeapDumpOnOutOfMemoryError"

Check this tutorial to learn how to set this parameter for Domain Mode: Configuring JVM Settings in a WildFly / JBoss Domain

Out of the box, the heap dump is generated in the current working directory of the Java process, that is, in the $JBOSS_HOME/bin folder. If you want to place it in a different folder, you need to include also the HeapDumpPath parameter, as in this example:

JAVA_OPTS="$JAVA_OPTS -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/dumps"

It is worth mentioning that there is no overhead in running with this option, so it can be useful for production systems where an OutOfMemoryError takes a long time to surface.

The heap dump is in HPROF binary format, and so it can be analyzed using any tools that can import this format. For example, the jhat tool can be used to do rudimentary analysis of the dump.

How to measure the time spent on methods execution in Java

In this tutorial we will show how to measure the time spent executing a Java method by using the Byteman tool.

There are several tools or products which can trace the execution of Java methods and calculate how much time you are spending in the individual methods of your application. This is key to identifying bottlenecks in your code. The great advantage of Byteman is that you won’t need to change a single line of code in your application: you rely entirely on a simple Byteman script that is injected into your JVM.
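For comparison, here is what manual timing looks like without Byteman; this hand-rolled approach, which requires editing the application code, is exactly what Byteman spares you from (class and method names are illustrative):

```java
// Manually timing a method with System.nanoTime(): every timed method
// needs to be wrapped in code like this, unlike with Byteman rules.
public class TimerDemo {

    static long doWork() {
        long sum = 0;
        for (int i = 0; i < 1_000_000; i++) {
            sum += i;
        }
        return sum;
    }

    public static void main(String[] args) {
        long start = System.nanoTime();
        doWork();
        long elapsedMillis = (System.nanoTime() - start) / 1_000_000;
        System.out.println("doWork elapsedTime = " + elapsedMillis + " ms");
    }
}
```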

So, supposing you want to trace how much time you are spending in the method “doSend” of the class “org.springframework.jms.core.JmsTemplate” then you will have to add two Rules:

1) A Rule for capturing the start time, when the method has been fired.

2) A Rule to measure the time spent when the method has terminated its execution. A linked map is used to store the start time, keyed by the thread that ran the method.

Here is our Byteman Rule file:

RULE doSend start time
CLASS org.springframework.jms.core.JmsTemplate
METHOD doSend
AT ENTRY
BIND thread = Thread.currentThread();
startTime = System.currentTimeMillis()
IF true
DO link("", thread, startTime)

RULE doSend end time
CLASS org.springframework.jms.core.JmsTemplate
METHOD doSend
AT EXIT
BIND thread = Thread.currentThread();
startTime:long = unlink("", thread);
endTime = System.currentTimeMillis()
IF true
DO traceln("[BYTEMAN] org.springframework.jms.core.JmsTemplate.doSend elapsedTime = " + (endTime - startTime))

To install the Rule on WildFly:

1) Download and unzip the file (Available at:

2) It is recommended to set this location in the environment variable BYTEMAN_HOME:


$ export BYTEMAN_HOME=/home/jboss/byteman

3) Copy the above rule file (rule.btm) into the $JBOSS_HOME/bin folder. (It can also be placed in another folder, but then you need to specify the full path of the rule in the JVM settings.)

4) Add the following JVM settings to WildFly/EAP:

-Dorg.jboss.byteman.transform.all -javaagent:${BYTEMAN_HOME}/lib/byteman.jar=script:rule.btm,boot:${BYTEMAN_HOME}/lib/byteman.jar -Dorg.jboss.byteman.debug=true

That’s all you need to instrument the Rule file.

If all the above steps have been executed correctly, you will see in your server log this information, which shows the time spent on the methods specified in the Rule file:

09:52:24,001 INFO  [stdout] (ServerService Thread Pool -- 92) [BYTEMAN] org.springframework.jms.core.JmsTemplate.doSend elapsedTime = 34

Measuring time spent on method execution using Spring

For the sake of completeness, if you are using Spring Beans throughout your application, you can use AspectJ as a wrapper around your method execution. The advantage of this approach is that you can use wildcards to capture a whole set of packages. See the following example:

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Pointcut;

@Aspect
public class MyProfiler {

    @Pointcut("execution(* com.sample.*.*(..))")
    public void businessMethods() { }

    @Around("businessMethods()")
    public Object profile(ProceedingJoinPoint pjp) throws Throwable {

        long start = System.currentTimeMillis();
        System.out.println("Going to call the method." + pjp.getTarget());
        Object output = pjp.proceed();
        System.out.println("Method execution completed." + pjp.getTarget());
        long elapsedTime = System.currentTimeMillis() - start;
        System.out.println("Method execution time: " + elapsedTime + " milliseconds.");
        return output;
    }
}

In this code, we are capturing the execution of all methods in the com.sample packages.

@Pointcut marks a method as a pointcut.

@Around is an advice type which ensures that the advice can run before and after the method execution.

JBang: Create Java scripts like a pro

JBang is a scripting tool which allows you to run Java applications with minimal setup, without needing a project configuration. In this tutorial we will learn how to use it to run a simple Java application; then we will look at a more complex example which starts a Quarkus application.

JBang features a new way of running Java code as a script, similar to JShell. However, unlike JShell, JBang automatically downloads your dependencies without an external project file. Furthermore, JBang can even run without Java being installed: it will simply download an appropriate JVM if needed.

Finally, you can easily edit your JBang classes in IntelliJ IDEA, Eclipse, Visual Studio Code, Apache NetBeans, vim and emacs.

Installing JBang

There are several options for installing JBang and they are available at:

You can either choose to use your OS installation procedure, for example on Fedora:

dnf install jbang

Otherwise, you can just download the latest binary release and unzip it:

Starting JBang

Once installed, include the JBANG_HOME/bin folder in your system PATH, so that you can execute the ‘jbang’ command:

export PATH=$PATH:~/tools/jbang-0.53.2/bin

Then, in order to run JBang, just execute the jbang command against a Java file, for example:

$ jbang
The jbang tool also includes a template option which allows you to create a basic Java class template:

jbang init --template=cli

If you take a look at the generated file, you will see an example of a Java class which uses an external dependency (picocli) to create a minimalist command line:

///usr/bin/env jbang "$0" "$@" ; exit $?
//DEPS info.picocli:picocli:4.5.0

import picocli.CommandLine;
import picocli.CommandLine.Command;
import picocli.CommandLine.Parameters;

import java.util.concurrent.Callable;

@Command(name = "helloworld", mixinStandardHelpOptions = true, version = "helloworld 0.1",
        description = "helloworld made with jbang")
class helloworld implements Callable<Integer> {

    @Parameters(index = "0", description = "The greeting to print", defaultValue = "World!")
    private String greeting;

    public static void main(String... args) {
        int exitCode = new CommandLine(new helloworld()).execute(args);
        System.exit(exitCode);
    }

    @Override
    public Integer call() throws Exception { // your business logic goes here...
        System.out.println("Hello " + greeting);
        return 0;
    }
}
You can run it as follows:

$ jbang

[jbang] Resolving dependencies...
[jbang]     Resolving info.picocli:picocli:4.5.0...Done
[jbang] Dependencies resolved
[jbang] Building jar...
Hello World!

So the dependencies have been loaded (that is required only on the first run) and the class executed.

What is really cool is that, on Linux/Mac, thanks to this line:

///usr/bin/env jbang "$0" "$@" ; exit $?

…you can even run the Java file directly:

$ ./

Creating a sample Quarkus REST Service with jBang

Using the same strategy, we can deliver a minimal Quarkus REST Service:

///usr/bin/env jbang "$0" "$@" ; exit $?
//DEPS io.quarkus:quarkus-resteasy:1.8.1.Final

import io.quarkus.runtime.Quarkus;
import javax.enterprise.context.ApplicationScoped;

@Path("/hello")
public class quarkus {

    public String sayHello() {
        return "hello from Quarkus with";
    }

    public static void main(String... args) {
        Quarkus.run(args);
    }
}

Notice that we have just added the quarkus-resteasy dependency as a comment to our class.

You can run it with:

$ jbang
__  ____  __  _____   ___  __ ____  ______ 
 --/ __ \/ / / / _ | / _ \/ //_/ / / / __/ 
 -/ /_/ / /_/ / __ |/ , _/ ,< / /_/ /\ \   
--\___\_\____/_/ |_/_/|_/_/|_|\____/___/   
2020-11-12 14:22:26,610 INFO  [io.quarkus] (main) Quarkus 1.8.1.Final on JVM started in 0.682s. Listening on:
2020-11-12 14:22:26,661 INFO  [io.quarkus] (main) Profile prod activated. 
2020-11-12 14:22:26,661 INFO  [io.quarkus] (main) Installed features: [cdi, resteasy]

And test it with:

$ curl http://localhost:8080/hello
hello from Quarkus with

JBang options

If you want to use application properties, you can use the “//Q” comment in your class to add them as Quarkus properties:


Then, if you want to build the container image of your Java class file, you can do it as follows:

$ jbang build

On the other hand, if you want to use specific JVM settings when starting JBang, you can use the following comment marker:

//JAVA_OPTIONS -Djava.util.logging.manager=org.jboss.logmanager.LogManager


We have covered the basics of JBang scripting toolkit. Continue learning about JBang in the next tutorial: Running JBangs apps from the Catalog

How to initialize an Array in Java in 4 simple ways

This article discusses array initialization in Java, showing multiple ways to initialize an array, some of which you probably don’t know!

Basic Array Initialization in Java

Firstly, some background. Java classifies types as primitive types, user-defined types, and array types. An array type is a region of memory that stores values in equal-size and contiguous slots, which we call elements. The array is declared with the element type and one or more pairs of square brackets that indicate the number of dimensions. A single pair of brackets means a one-dimensional array.


For example, this is a one-dimensional array:

String array[];

On the other hand, this is a two-dimensional array:

double[][] matrix;

So you basically specify the datatype and the declared variable name. Note that declaring an array does not initialize it. You can initialize an array, and assign memory to it, by providing just the array size or also the content of the array.

How to init an array specifying the array size

The following example shows how to initialize an array of Strings which contains 2 elements:

String array[] = new String[2];
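Keep in mind that the elements of such an array are initialized to the default value of the element type (null for object types, 0 for int). A minimal runnable sketch (the class name is made up for this example):

```java
public class DefaultValuesDemo {
    public static void main(String[] args) {
        // Object arrays default to null elements
        String[] array = new String[2];
        System.out.println(array[0]); // null

        // Primitive int arrays default to 0 elements
        int[] nums = new int[2];
        System.out.println(nums[0]); // 0
    }
}
```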

How to init an array using Array Literal

The following examples show how to initialize different elements -in a single line- using Array Literals:

String array[] = new String[] { "Pear", "Apple", "Banana" };
int[] datas = { 12, 48, 91, 17 };
char grade[] = { 'A', 'B', 'C', 'D', 'F' };
float[][] matrixTemp = { { 1.0F, 2.0F, 3.0F }, { 4.0F, 5.0F, 6.0F }};
int x = 1, y[] = { 1, 2, 3, 4, 5 }, k = 3;

You can of course also split the declaration from the assignment:

int[] array;
array = new int[]{2,3,5,7,11};

When retrieving an element of an array by its index, it is important to remember that the index starts from 0. Therefore, you can iterate over the array by index as follows:

   for (int i = 0; i < array.length; i++) {
       System.out.println(array[i]);
   }

On the other hand, if you want to init a multidimensional array in a loop, you need to loop over the array twice. Example:

double[][] temperatures = new double[3][2];

for (int row = 0; row < temperatures.length; row++)
   for (int col = 0; col < temperatures[row].length; col++)
      temperatures[row][col] = Math.random()*100;

Finally, if you don’t need to operate on the array index, you can use the enhanced for loop to iterate through the array:

int[] array = new int[]{2,3,5,7,11};

for (int a : array)
    System.out.println(a);

Initializing an array in Java using Arrays.copyOf()

The java.util.Arrays.copyOf(int[] original, int newLength) method copies the specified array, truncating it or padding it with zeros (if needed) so that the copy has the specified length.


int[] array = new int[]{2,3,5,7,11};
int[] copy =  Arrays.copyOf(array, 5);
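To illustrate the truncating and padding behavior, here is a minimal runnable sketch (the class name CopyOfDemo is made up for this example):

```java
import java.util.Arrays;

public class CopyOfDemo {
    public static void main(String[] args) {
        int[] src = {2, 3, 5};

        // A copy longer than the source is padded with zeros...
        int[] padded = Arrays.copyOf(src, 5);
        System.out.println(Arrays.toString(padded));    // [2, 3, 5, 0, 0]

        // ...while a shorter copy is truncated
        int[] truncated = Arrays.copyOf(src, 2);
        System.out.println(Arrays.toString(truncated)); // [2, 3]
    }
}
```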

A similar option also exists in the System packages, using the System.arraycopy static method:

int[] src  = new int[]{2,3,5,7,11};
int[] dest = new int[5];
System.arraycopy( src, 0, dest, 0, src.length );

The core difference is that Arrays.copyOf does not just copy elements, it also creates a new array. On the other hand, System.arraycopy copies into an existing array.

In most cases, System.arrayCopy will be faster because it uses a direct native memory copy. Arrays.copyOf uses Java primitives to copy although the JIT compiler could do some clever special case optimization to improve the performance.

Using Arrays functions to fill an array

The method Arrays.setAll sets all elements of an array using a generator function. This is the most flexible option as it lets you use a Lambda expression to initialize an array using a generator. Example:

int[] arr = new int[10];
Arrays.setAll(arr, (index) -> 1 + index);

This can be useful, for example, to quickly initialize an Array of Objects:

Customer[] customerArray = new Customer[7];
// setting values to customerArray using setAll() method
Arrays.setAll(customerArray, i -> new Customer(i+1, "Index "+i));

A similar function is Arrays.fill, which is the best choice if you don’t need a generator function but just need to initialize the whole array with a single value. Here is how to fill an array of 10 int with the value “1”:

int [] myarray = new int[10];
Arrays.fill(myarray, 1);

Finally, if you want to convert a List into an array, you can use the .toArray() method on the Collection. Example:

List<String> list = Arrays.asList("Apple", "Pear", "Banana");
String[] array = list.toArray(new String[list.size()]);

Initialize an array in Java using the Stream API

Finally, you can also use the Java 8 Stream API to copy an array into another. Let’s check with an example:

String[] strArray = {"apple", "tree", "banana"};
String[] copiedArray = Arrays.stream(strArray).toArray(String[]::new);

Note that the stream copy is a shallow copy of the objects when using non-primitive types.

In this tutorial we have covered four basic strategies to initialize, prefill and iterate over an array in Java.

Troubleshooting OutOfMemoryError: Direct buffer memory

The java.nio.DirectByteBuffer class is a special implementation of java.nio.ByteBuffer that has no byte[] lying underneath. The main feature of DirectByteBuffer is that the JVM will try to work natively on the allocated memory, without any additional buffering, so operations performed on it may be faster than those performed on ByteBuffers backed by arrays.

We can allocate such ByteBuffer by calling:

ByteBuffer directBuffer = ByteBuffer.allocateDirect(64);

When such an object is created via the ByteBuffer.allocateDirect() call, it allocates the specified amount (capacity) of native memory using the malloc() OS call. This memory is released only when the given DirectByteBuffer object is garbage collected and its internal “cleanup” method is called (the most common scenario), or when this method is invoked explicitly via getCleaner().clean().
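As a quick sketch of the difference between a direct and a heap buffer (the class name DirectBufferDemo is made up for this example):

```java
import java.nio.ByteBuffer;

public class DirectBufferDemo {
    public static void main(String[] args) {
        // Direct buffer: capacity allocated in native memory, outside the Java heap
        ByteBuffer direct = ByteBuffer.allocateDirect(64);

        // Heap buffer: backed by a regular byte[] inside the Java heap
        ByteBuffer heap = ByteBuffer.allocate(64);

        System.out.println(direct.isDirect()); // true
        System.out.println(heap.isDirect());   // false
        System.out.println(heap.hasArray());   // true: backed by a byte[]
    }
}
```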

Symptoms of the Direct Buffer Memory issue

As we said, Direct Buffers are allocated in native memory space, outside of the JVM’s established heap/perm gens. If this memory space outside of heap/perm is exhausted, the java.lang.OutOfMemoryError: Direct buffer memory error will be thrown.

A good runtime indicator of a growing Direct Buffers allocation is the size of Non-Heap Java Memory usage, which can be collected with any tool, like jconsole:

In terms of Operating System, the amount of Memory used by a Java process includes the following elements: Java Heap Size + Metaspace + CodeCache + DirectByteBuffers + Jvm-native-c++-heap.

You can obtain this information using the following command:

pmap -x [PID]

The above command will display the amount of RSS (in KB) for the process, as you can see from the third column of the output:

total kB         14391640 12343808 12272896

Once you know the full size of the JVM process, you have to subtract the Java Heap Size + Metaspace to get a rough estimate of the JVM native memory size.

Java Native Memory Tracking

A good diagnostic indicator is Native Memory Tracking, which can be enabled through the following JVM settings:

-XX:+UnlockDiagnosticVMOptions -XX:NativeMemoryTracking=detail -XX:+PrintNMTStatistics

When Native Memory Tracking is enabled, you can request a report on the JVM memory usage using the following command:

jcmd <pid> VM.native_memory

If you check the jcmd output, you will find at the bottom the amount of native memory committed/used in the Internal (committed) section:

Native Memory Tracking:

Total: reserved=1334532KB, committed=369276KB
-                 Java Heap (reserved=524288KB, committed=132096KB)
                            (mmap: reserved=524288KB, committed=132096KB) 
-                     Class (reserved=351761KB, committed=112629KB)
                            (classes #19111)
                            (  instance classes #17977, array classes #1134)
                            (malloc=3601KB #66765) 
                            (mmap: reserved=348160KB, committed=109028KB) 
                            (  Metadata:   )
                            (    reserved=94208KB, committed=92824KB)
                            (    used=85533KB)
                            (    free=7291KB)
                            (    waste=0KB =0.00%)
                            (  Class space:)
                            (    reserved=253952KB, committed=16204KB)
                            (    used=12643KB)
                            (    free=3561KB)
                            (    waste=0KB =0.00%)
-                    Thread (reserved=103186KB, committed=9426KB)
                            (thread #100)
                            (stack: reserved=102712KB, committed=8952KB)
                            (malloc=352KB #524) 
                            (arena=122KB #198)
-                      Code (reserved=249312KB, committed=23688KB)
                            (malloc=1624KB #7558) 
                            (mmap: reserved=247688KB, committed=22064KB) 
-                        GC (reserved=71049KB, committed=56501KB)
                            (malloc=18689KB #13308) 
                            (mmap: reserved=52360KB, committed=37812KB) 
-                  Compiler (reserved=428KB, committed=428KB)
                            (malloc=302KB #923) 
                            (arena=126KB #5)
-                  Internal (reserved=1491KB, committed=1491KB)
                            (malloc=1451KB #4873) 
                            (mmap: reserved=40KB, committed=40KB) 
-                     Other (reserved=1767KB, committed=1767KB)
                            (malloc=1767KB #50) 
-                    Symbol (reserved=21908KB, committed=21908KB)
                            (malloc=19503KB #252855) 
                            (arena=2406KB #1)
-    Native Memory Tracking (reserved=5914KB, committed=5914KB)
                            (malloc=349KB #4947) 
                            (tracking overhead=5565KB)

Setting MaxDirectMemorySize

There is a JVM parameter named -XX:MaxDirectMemorySize which allows you to set the maximum amount of memory which can be reserved for Direct Buffer usage. As a matter of fact, for JDK 8, the default value in the code is set to 64MB:

private static long directMemory = 64 * 1024 * 1024;

However, by digging into sun.misc.VM you will see that, if not configured, it derives its value from Runtime.getRuntime().maxMemory(), thus the value of -Xmx. So if you don’t configure -XX:MaxDirectMemorySize and do configure -Xmx2g, the “default” MaxDirectMemorySize will also be 2 GB, and the total JVM memory usage of the app (heap + direct) may grow up to 2 + 2 = 4 GB.
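As an illustrative sketch (the application name myapp.jar and the 512m cap are assumptions, not recommendations), you could start the JVM with an explicit direct memory cap alongside the heap setting:

```shell
# Cap the heap at 2 GB and direct buffer memory at 512 MB
java -Xmx2g -XX:MaxDirectMemorySize=512m -jar myapp.jar
```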

Collecting the Heap Dump

Even if the DirectByteBuffer is allocated outside of the JVM Heap, the JVM still provides important hints. In fact, when the JVM requests a DirectByteBuffer, there will be a reference to it in the Heap.

From the Heap Dump, you can therefore check how much native memory these DirectByteBuffers are using.

If you are using an advanced tool like JXRay, it’s enough to load your Heap dump and it will automatically pinpoint your Off-Heap memory usage, with the relevant amounts already calculated:

With another tool like Eclipse MAT, you have to calculate it yourself by using the following OQL expression:

SELECT x, x.capacity FROM java.nio.DirectByteBuffer x WHERE ((x.capacity > 1024 * 1024) and (x.cleaner != null))

The above query will list all DirectByteBuffer which have been allocated and not released and whose capacity is bigger than 1MB.

Checking the Reference chain.

Now that we have checked how much native memory your DirectByteBuffers are using, the next step is walking through the reference chain and trying to understand who’s holding the ByteBuffers.

Still using Eclipse Mat, you can right-click on the result of your OQL (x.capacity field) and choose “merge shortest path to GC roots“. That will show you which class is holding the memory for the DirectBuffer thus preventing it from being garbage-collected:

So, in this case you have your XNIO worker threads holding a reference to your DirectBuffers. This might be either a temporary problem or a bug.

If it’s a temporary problem (such as a spike in native memory which gradually reduces), that might be something you can tune, for example by reducing the number of io threads used by your application.

In WildFly / JBoss EAP the number of io-threads to create for your workers is configured in the io subsystem:

{
    "outcome" => "success",
    "result" => {
        "io-threads" => undefined,
        "stack-size" => 0L,
        "task-keepalive" => 60,
        "task-max-threads" => undefined
    }
}
If not specified, a default will be chosen, which is calculated as cpuCount * 2.
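As a sketch, assuming your worker is the default one named “default”, the attribute could be set from the JBoss CLI as follows (the value 4 is only an example):

```shell
# jboss-cli.sh commands; worker name "default" and the value 4 are example assumptions
/subsystem=io/worker=default:write-attribute(name=io-threads, value=4)
:reload
```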

Another option is to limit the per-thread DirectByteBuffer cache size using the -Djdk.nio.maxCachedBufferSize JVM property.

Finally, if are using WildFly application server or JBoss EAP, a more drastic solution is to disable direct buffers, at the expense of an increased Heap usage:


Out of Memory caused by allocation failures

When using G1GC (the default Garbage Collector since Java 9) there are additional options to manage an allocation failure. First of all, some definitions: a GC allocation failure means that the garbage collector could not move objects from the young generation to the old generation fast enough, because it does not have enough free memory in the old generation. To address this issue there are some potential solutions, which include:

  • Increasing the number of concurrent marking threads by setting the ‘-XX:ConcGCThreads’ value. Increasing the number of Concurrent Marking Threads will make garbage collection run faster, at the price of a higher CPU cost.
  • You can force the G1 Garbage Collector to start the Marking phase earlier by lowering the ‘-XX:InitiatingHeapOccupancyPercent’ value. The default value is 45, which means the G1 GC marking phase begins only when heap usage reaches 45%. By reducing this value, the G1 GC marking phase will start earlier, so that a Full GC can be avoided.
  • Set -XX:+UseG1GC -XX:SoftRefLRUPolicyMSPerMB=1. This enables immediate flushing of softly referenced objects in the JVM options. As it turns out, Direct Buffers are stored outside the Heap, and a reference to them is generally held as a PhantomReference in the tenured generation. If there’s no pressure to run a Garbage Collector on the tenured generation, you might hit an Out of Memory because of the accumulation of soft references in the tenured generation.
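Putting the options above together, a tuning experiment could look like the following (the flag values and the myapp.jar name are illustrative assumptions, not recommendations):

```shell
java -XX:+UseG1GC \
     -XX:ConcGCThreads=4 \
     -XX:InitiatingHeapOccupancyPercent=35 \
     -XX:SoftRefLRUPolicyMSPerMB=1 \
     -jar myapp.jar
```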

Tuning glibc

glibc is the default native memory allocator for Java applications. As a performance improvement, memory freed by the application may not be returned to the OS right away. This improvement, however, comes at the price of increased memory fragmentation. The fragmentation can grow unboundedly, eventually causing an Out of Memory condition.

MALLOC_ARENA_MAX is an environment variable that controls how many memory pools glibc can create. By default, it is 8 * CPU cores. You can experiment with reducing this value to 2 or 1 and see if the Out of Memory issue is gone. The lower this value, the fewer memory pools will be created (at the expense of reduced performance).
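For example, you can export the variable in the shell (or systemd unit / container environment) that starts the JVM (myapp.jar is a placeholder):

```shell
# Limit glibc to 2 malloc arenas before starting the JVM
export MALLOC_ARENA_MAX=2
java -jar myapp.jar
```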


Explicit Garbage Collection disabled?

In some cases, memory allocated by direct buffers may accumulate for a long time before it is collected. In the long run that’s not really a leak, but it will increase peak memory usage. In this case, explicit Garbage Collection (done with System.gc()) is there to free buffers when the reserveMemory limit is hit.

The OpenJDK invokes System.gc() during direct ByteBuffer allocation to provide a hint, hoping for timely reclamation of direct memory by the GC.

So, it is worth checking if you are using the -XX:+DisableExplicitGC flag in your JVM settings, as it also disables this safety net.



Check Open issues

In most cases, the issue lies in some library used by your application; therefore, you don’t have direct control over the source code to fix the issue. So it is worth checking for known issues in frameworks that use DirectByteBuffer, such as Netty:

Also, check if your specific version of the application server (WildFly / EAP ) needs to be upgraded to fix an older issue for the DirectByteBuffer.

Thanks to Francisco De Melo for taking the time to review and improve this article. Francisco runs a cool blog on Java/JDK at: