Are you preparing for a JBoss / WildFly interview? Here is a comprehensive list of JBoss / WildFly interview questions that will boost your preparation!
General JBoss interview questions
Q: Name all possible ways you know to start WildFly application server
You can do it in at least five ways:
- Download and unzip WildFly from https://download.jboss.org/wildfly, then start it with standalone.sh or domain.sh (see the commands right after this list)
- Launch it as a Docker image (How to start a Docker image of WildFly with an application deployed)
- Start it as a Bootable JAR (Turn your WildFly applications in bootable JARs)
- Use OpenShift’s Source-to-Image builder to start WildFly with a project from a repository (Java EE example application on Openshift)
- Use a Helm chart to deploy WildFly on OpenShift / Kubernetes (WildFly on the Cloud with Helm)
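For the first option, the minimal sequence of commands looks like this (Linux/macOS; replace the archive name with the version you downloaded):
unzip wildfly-<version>.zip
cd wildfly-<version>/bin
./standalone.sh    # or ./domain.sh to start in Domain mode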
Q: What are the hard requirements to use WildFly ?
Firstly, WildFly runs on top of the Java platform, therefore it needs at least a Java Runtime Environment (JRE), whose version varies depending on which WildFly version you are running. As we will also need to compile and build Java applications, we will need the Java Development Kit (JDK), which provides the necessary tools to work with the Java source code. In the JDK panorama we can find the Oracle JDK, developed and maintained by Oracle, and OpenJDK, which relies on community contributions.
Q: Do we need to use Maven to run WildFly ?
No, but in order to develop applications it’s highly recommended, as most examples and quickstarts use Maven. You don’t even need to install it: you can use the Maven wrapper script (mvnw), which downloads the correct Maven version if it’s not already available.
Q: How do you set the start-up memory of WildFly in a standalone server?
The recommended way is to place the JVM settings in standalone.conf (or standalone.conf.bat on Windows).
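For example, a minimal sketch of the relevant line in standalone.conf (the heap and metaspace sizes below are purely illustrative and should be tuned for your workload):
JAVA_OPTS="-Xms512m -Xmx1024m -XX:MetaspaceSize=96m -XX:MaxMetaspaceSize=256m"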
Q: Which profile would you use for applications that require IIOP support for your EJBs?
You need to use a “full” profile (such as standalone-full.xml or standalone-full-ha.xml) as they include the iiop-openjdk subsystem.
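To start the server with that profile you can pass the configuration file name at startup, for example:
./bin/standalone.sh -c standalone-full.xml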
Q: How many profiles can you define in your standalone configuration?
You can define only one profile in a standalone configuration (the one included in the configuration file itself).
Q: Which property would you set to place the server configuration, logs and data in a custom folder?
You have to set the property “jboss.server.base.dir”. E.g. ./standalone.sh -Djboss.server.base.dir=/home/jboss/myserver
Q: How can you reverse engineer your XML configuration into CLI commands?
There is no automatic way to do that out of the box, however there is a tool named “Profile Cloner” that you can install for this purpose. See this tutorial: Reverse engineer your JBoss AS-WildFly configuration to CLI
Q: Can you bind the HTTP Port of the Web server on a custom value, without changing the whole server’s port offset?
Yes, for example:
./bin/standalone.sh -Djboss.http.port=9080
Domain mode interview questions
Q: What’s the difference between Standalone mode and Domain mode ?
In Standalone mode, each distribution starts a single JVM process with its own configuration, management instruments and deployments. When configured in Domain mode, multiple servers are managed from a centralized point called the Domain Controller, which maintains the configuration and provisions applications for deployment on the individual nodes that are part of the Domain.
Q: What happens if the Domain Controller fails ?
Firstly, if the Domain Controller fails, it is not possible to manage the Domain configuration anymore but applications running on the single nodes are preserved. It is however possible to choose a backup Domain Controller server as in the following configuration snippet:
<domain-controller>
    <remote host="127.0.0.1" port="9999" security-realm="ManagementRealm" username="eap7admin">
        <discovery-options>
            <static-discovery name="backup" protocol="remote" host="127.0.0.1" port="19999"/>
        </discovery-options>
    </remote>
</domain-controller>
Q: Do you need Domain mode in order to run a Cluster of JBoss EAP / WildFly nodes ?
No, you can configure a cluster both in Standalone mode and in Domain mode, with no difference in terms of Profile configuration. The advantage of configuring a Cluster in Domain mode is that you don’t need to manually synchronize the configuration on all your Servers, as you would in a cluster of Standalone Servers. The same applies to provisioning applications: in a Domain running an “ha” (or “full-ha”) profile you can provision your applications on a whole Server Group with a single operation, while in Standalone mode you have to deploy applications on each Server.
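For example, from the Domain Controller’s CLI you can deploy an application to an entire Server Group in one shot (the archive and group names below are just placeholders):
deploy myapp.war --server-groups=main-server-group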
Q: Multiple Domains in the same cluster and Multiple clusters in the same Domain: which one of these configurations is possible ?
Both! Let’s see each of them:
- Multiple Domains in the same cluster: start two Domains on the same network sharing the same multicast address and port. This is a quite rare scenario, however it’s technically possible.
- Multiple clusters in the same Domain: Define a different multicast address on each Server Group or even at Server Level. This way, your Servers, even if in the same Domain, will communicate through different multicast addresses so they will make up separate clusters.
Q: Porting an application from Standalone mode to Domain mode: is it guaranteed that you won’t run into any issues ?
Firstly, one core difference between Standalone mode and Domain mode is that Standalone mode allows manual deployment of applications by copying archives into the deployment folder. In Domain mode, on the other hand, you manage applications through the CLI or the Admin Console, and they are stored in the content repository (data folder) of the individual nodes. That being said, some applications might need to know the physical path where they have been deployed: one good example is the Liferay portal, which requires some workarounds to run in Domain mode. So always check the application’s requirements before committing to a change from Standalone mode to Domain mode.
Q: If you have JVM Settings at Server Group Level and your Servers are configured to use the Host JVM Settings, which one prevails?
In a managed Domain, you can configure the JVM settings at different scopes: for a specific Server Group, for a Host or for a particular Server. If not declared, the settings are inherited from the parent scope. In this case you have the following configuration (domain.xml):
<server-groups>
    <server-group name="main-server-group" profile="default">
        <jvm name="SGdefault">
            <heap size="64m" max-size="512m"/>
        </jvm>
        <socket-binding-group ref="standard-sockets"/>
    </server-group>
</server-groups>
While at Host level you have (host.xml):
<servers>
    <server name="server-one" group="main-server-group" auto-start="true">
        <jvm name="default"/>
    </server>
</servers>
In this case, the Host configuration (the JVM named “default”) prevails, as it is more specific to the Server.
Q: I want to display all the JVM options when Servers in a Domain start. How can I achieve it ?
A simple and effective way to do it is to add the following JVM option, which will print all the JVM settings at startup:
<jvm-options> <option value="-XshowSettings:all"/> </jvm-options>
Datasource interview questions
Q: How to detect Connection leaks in your code?
Firstly, you can detect Connection leaks by verifying that use-ccm=”true” in the datasource settings (by default it’s true).
Also, verify that <cached-connection-manager> exists in the jca subsystem and set debug=”true”.
/subsystem=jca/cached-connection-manager=cached-connection-manager:write-attribute(name=debug,value=true)
By setting debug=”true” the application server will log a message when a connection leak is detected:
“Closing a connection for you. Please close them yourself”
Q: Say you have frequent “Connection closed” in your logs. You ping the database but it’s reachable. What would you do?
It’s very likely that you have some infrastructure policy which closes idle connections while the application server still considers them active. In order to cope with it, you have to include a Validation configuration in your Datasource, which will prevent your application from using dead Connections. Example for Oracle:
<validation>
    <check-valid-connection-sql>select 1 from dual</check-valid-connection-sql>
    <validate-on-match>true</validate-on-match>
    <background-validation>false</background-validation>
    <stale-connection-checker class-name="org.jboss.jca.adapters.jdbc.extensions.oracle.OracleStaleConnectionChecker"/>
    <exception-sorter class-name="org.jboss.jca.adapters.jdbc.extensions.oracle.OracleExceptionSorter"/>
</validation>
Q: What is the difference between validate-on-match and background-validation? can you use both of them ?
If you set the validate-on-match option to true, the database connection is validated every time it is checked out from the connection pool, using the configured validation mechanism.
On the other hand, background-validation runs the validation periodically in a background thread, at the interval dictated by background-validation-millis.
Finally, you cannot apply both of them: they are mutually exclusive.
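Here is a minimal sketch of a background validation setup (the Oracle checker class and the 60-second interval are illustrative choices, not mandatory):
<validation>
    <valid-connection-checker class-name="org.jboss.jca.adapters.jdbc.extensions.oracle.OracleValidConnectionChecker"/>
    <background-validation>true</background-validation>
    <background-validation-millis>60000</background-validation-millis>
</validation>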
Q: What if I remove the ExampleDS default Datasource ?
If you remove the default Datasource the server will still start, however you might see some errors because the ee subsystem contains a reference to the default Datasource, which is the ExampleDS Datasource:
<default-bindings context-service="java:jboss/ee/concurrency/context/default" datasource="java:jboss/datasources/ExampleDS" . . . . . />
Q: Can I configure a Driver without a Datasource and vice versa ?
You can configure a Driver without having a Datasource that uses it, but not the opposite: a Datasource always requires a Driver.
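As a sketch, this is how you could register a JDBC driver on its own from the CLI (the driver and module names are placeholders for a module you have previously installed):
/subsystem=datasources/jdbc-driver=postgresql:add(driver-name=postgresql, driver-module-name=org.postgresql, driver-class-name=org.postgresql.Driver)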
Q: Your JDBC and pool statistics are empty. Why?
By default, statistics are disabled for performance reasons. You can enable both of them with:
/subsystem=datasources/data-source=yourDS/statistics=jdbc:write-attribute(name=statistics-enabled,value=true)
/subsystem=datasources/data-source=yourDS/statistics=pool:write-attribute(name=statistics-enabled,value=true)
Messaging
Q: When does a message go to the Dead Letter Queue and when does it go to the Expiry Queue? Explain the difference
A Dead Letter Queue (DLQ) comes into play after a specified number of unsuccessful deliveries. If that happens, messages are removed from their queue and sent to the DLQ.
An Expiry Queue relates to the message’s Time To Live (TTL): when a message stays in the queue longer than its expiration time (or the configured expiry-delay), it is removed from the queue and sent to the expiry address.
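Both destinations are configured through an address-setting in the messaging-activemq subsystem. A minimal sketch (the queue names, delivery attempts and delay are illustrative):
<address-setting name="jms.queue.ExampleQueue"
                 dead-letter-address="jms.queue.DLQ"
                 max-delivery-attempts="5"
                 expiry-address="jms.queue.ExpiryQueue"
                 expiry-delay="300000"/>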
Q: Which JMS Resource can you use to connect to a remote Messaging system?
You can configure a ConnectionFactory, which references a remote-connector, to send messages to or receive messages from the remote server. The remote-connector, in turn, references a socket binding with the remote destination address.
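A sketch of the three pieces involved (the host, port, names and JNDI entry are placeholders; the outbound-socket-binding lives in the socket-binding-group, the other two elements in the messaging-activemq subsystem):
<outbound-socket-binding name="remote-artemis">
    <remote-destination host="remotehost" port="61616"/>
</outbound-socket-binding>

<remote-connector name="remote-artemis" socket-binding="remote-artemis"/>
<connection-factory name="RemoteCF" connectors="remote-artemis" entries="java:/jms/RemoteCF"/>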
Q: Which messaging protocol should you use to consume JMS messages (in an MDB) against a remote ArtemisMQ server?
The CORE protocol.
Q: You need to store the logic for routing the JMS Message somewhere. Which part of the JMS Message would you use for this purpose?
You should use the JMS Message Properties: the producer sets them on the message, and consumers can filter on them with message selectors (see the sketch below).
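A minimal sketch in Java, assuming a plain JMS Session is available (the property name "region" and its value are purely illustrative):
import javax.jms.JMSException;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

public class RoutingExample {

    // Producer side: the routing logic lives in a message property, not in the body
    public void send(Session session, MessageProducer producer) throws JMSException {
        TextMessage message = session.createTextMessage("order created");
        message.setStringProperty("region", "EMEA");
        producer.send(message);
    }

    // Consumer side: a message selector filters on that property
    public MessageConsumer subscribe(Session session, Queue queue) throws JMSException {
        return session.createConsumer(queue, "region = 'EMEA'");
    }
}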
Q: What wildcard expressions can you use in the Messaging Security Settings?
A wildcard expression in Artemis MQ configuration can be one of the following:
• The character # matches any sequence of zero or more words.
• The character * matches a single word. You can use it anywhere in the expression.
Command Line
Q: How can you upload an application into the data repository without deploying it?
A: You can use the "--disabled" option as follows:
deploy example.war --disabled
Q: What happens to your HTTP requests if you issue the “:suspend” command from the CLI?
In-flight HTTP requests are allowed to complete: Undertow waits for active requests to finish before the server is fully suspended, while new requests are rejected.
Q: How can you make a backup of your configuration from the CLI?
You can simply take a snapshot of it: the snapshot will be copied into the “snapshot” folder of your configuration, with the file name prefixed by the current date and time.
:take-snapshot
Network interview questions
Q: Why should you never put just the IP address in a network interface definition ?
If you configure your network interfaces as in the following example:
<interface name="public"> <inet-address value="192.168.10.1"/> </interface>
Then you won’t be able to use the startup options to change the IP address / hostname (-b or -Djboss.bind.address). Therefore, never remove the jboss.bind.address alias from the property expression.
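A safer definition keeps the expression and only changes the default value (the address below is just an example):
<interface name="public">
    <inet-address value="${jboss.bind.address:192.168.10.1}"/>
</interface>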
Q: Can you use Environment variables in your configuration ?
Yes, that’s actually possible. You can do it in two ways:
Firstly, in the Java way, by passing it as an argument to the startup script:
./standalone.sh -Dvariable=$variable
And then referencing it as a System Property in your configuration.
Secondly, by using the env. prefix in a property expression. Example:
<interface name="public"> <inet-address value="${jboss.bind.address,env.HOST:127.0.0.1}"/> </interface>
In the above example, if you don’t set jboss.bind.address, the binding will fall back to the environment variable HOST. If none is set, it will default to 127.0.0.1
Clustering interview questions
Q: How can I persist the cache of my Stateful EJBs in a Database ?
You can use a JDBC cache store, which persists data in a relational database using a JDBC driver. There are three implementations of the JDBC cache store: JdbcBinaryCacheStore, JdbcStringBasedCacheStore and JdbcMixedCacheStore, which you can choose depending on the type of key to be stored in the cache.
Q: What happens if there’s a concurrent access to an Entity in a clustered application ?
It depends on the isolation level defined for the “hibernate” cache. The default isolation level for the hibernate cache is READ_COMMITTED, which means that the non-repeatable reads phenomenon can occur; this is, however, also the default in most databases like Oracle or PostgreSQL. It would make sense to configure REPEATABLE_READ in case the application evicts/clears entities from the Hibernate Session and then expects to repeatably re-read them in the same transaction.
Q: What is an L1 cache and when I am allowed to use it ?
When running in distributed mode, it is possible to configure a special kind of cache named the “L1” cache, which temporarily holds entries locally. When an L1 cache is available, it is consulted before checking the caches on remote servers. L1 entries are invalidated when the entry is changed elsewhere in the cluster, so you are sure you don’t have stale entries cached in L1.
Q: Why should I prefer mod_cluster over mod_jk ?
In terms of configuration: the httpd side does not need to know the cluster topology in advance, so the configuration is dynamic rather than static; as a consequence you need very little configuration on the httpd side.
In terms of load balancing: load balancing is improved because the main calculations are done on the backend servers, where more information is available, and this also allows a fine-grained web application lifecycle control.
Q: Which protocol does mod_cluster use by default ?
By default, the mod_cluster balancer uses UDP multicast to advertise its availability to the backend workers.
Q: Can I still use mod_cluster even if multicast is not available ?
Yes, you can. You have to modify the httpd configuration to disable server advertising and use a proxy list instead. The proxy list is configured on the worker side and contains all of the mod_cluster-enabled httpd servers the worker can talk to.
Q: What are the options to use a Web server as front-end for a WildFly cluster ?
There are multiple alternatives:
- Firstly, you can use a WildFly server with the “standalone-load-balancer” profile as a front-end to your Cluster. This option leverages mod_cluster to connect the front-end to the back-end.
- Then, you can use an Apache HTTP Server with the mod_cluster shared libraries installed as a front-end to your Cluster.
- Then, you can use Apache’s mod_jk and the AJP protocol to front a WildFly cluster with Apache.
- Finally, you can use Apache’s mod_proxy HTTP connector to front a WildFly cluster with Apache.
Maven
Q: How to deploy applications to WildFly using Maven ?
A: There are several strategies for doing that. The recommended way is to include the WildFly Maven plugin in your application:
<plugin>
    <groupId>org.wildfly.plugins</groupId>
    <artifactId>wildfly-maven-plugin</artifactId>
    <version>2.0.0.Final</version>
    <configuration>
        <filename>${project.build.finalName}.war</filename>
    </configuration>
</plugin>
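With the plugin in place, and assuming a server is running locally with the default management port, you can then deploy the application with:
mvn wildfly:deploy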
Q: In your pom.xml file, what is the difference between the <dependencies> and <dependencyManagement> section ?
- Artifacts specified in the <dependencies> section will always be included as a dependency of the child modules.
- Artifacts specified in the <dependencyManagement> section will only be included in the child module if they are also specified in the <dependencies> section of the child module itself (typically without a version, which is inherited). Here’s an example of it:
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.wildfly.bom</groupId>
            <artifactId>wildfly-javaee7-with-tools</artifactId>
            <version>10.0.0.Final</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>
Q: What is wrong with the following plugin configuration ?
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
</plugin>
Although technically correct, not specifying a version for a Maven plugin is a very bad idea: plugins can completely change their behaviour from one version to another, causing your build to break or produce unwanted results.
Hibernate / JPA interview questions
Q: Are EJB 2.x beans still supported in WildFly 10 and EAP 7 ?
WildFly 10 and EAP 7 still support Session Beans and MDBs written using the 2.x specification. What is not supported anymore are EJB 2.x CMP Entity Beans.
Q: What’s the difference between Hibernate and JPA ?
JPA is a standard Java EE specification, meaning that by itself it provides no implementation. You can annotate your classes using JPA annotations, however without an implementation nothing will happen. You can think of JPA as an interface, or the abstract guidelines that must be followed, while Hibernate’s JPA implementation is code that meets the API defined by the JPA specification and provides the under-the-hood functionality.
When you use Hibernate with JPA you are actually using the Hibernate JPA implementation behind the scenes. The benefit of this is that you can swap out Hibernate’s implementation of JPA for another implementation of the JPA specification. When you use straight Hibernate, instead, you lock yourself into the implementation, because other ORMs may use different methods/configurations and annotations, so you cannot just switch over to another ORM.
On the other hand, the benefit of using Hibernate directly is that you get access to APIs and functionalities which are not available in JPA.
To learn more check this tutorial: JPA vs Hibernate in a nutshell
Q: Which Hibernate association fetches data eagerly and which lazily?
By default, the @ManyToOne and @OneToOne associations are fetched eagerly (FetchType.EAGER), while the @OneToMany and @ManyToMany associations are fetched lazily (FetchType.LAZY).
Q: Should you use EAGER fetching or LAZY fetching?
You should stick to LAZY fetching as your default strategy.
FetchType.LAZY fetches the child entities lazily, that is, at the time you actually access them through the parent Entity. FetchType.EAGER, on the other hand, fetches the child entities along with the parent.
- Lazy initialization improves performance by avoiding unnecessary queries. It also reduces memory requirements.
- Eager initialization, on the other hand, simplifies coding but requires more memory and processing time.
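A minimal sketch of how you can set the fetch type explicitly (the Author/Book entities are purely illustrative):
import java.util.List;
import javax.persistence.Entity;
import javax.persistence.FetchType;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.ManyToOne;
import javax.persistence.OneToMany;

@Entity
public class Author {

    @Id
    @GeneratedValue
    private Long id;

    // LAZY is already the default for @OneToMany; stating it makes the intent explicit
    @OneToMany(mappedBy = "author", fetch = FetchType.LAZY)
    private List<Book> books;
}

@Entity
class Book {

    @Id
    @GeneratedValue
    private Long id;

    // EAGER is the default for @ManyToOne; switching to LAZY avoids loading the author with every book
    @ManyToOne(fetch = FetchType.LAZY)
    private Author author;
}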
Q: Which Hibernate object wraps the JDBC Connection ?
The Session interface wraps a JDBC Connection. It is a single-threaded object which represents a single unit of work between the application and the database. You retrieve it with the SessionFactory’s openSession() method.
Q: Is the SessionFactory Thread safe?
Yes: many threads can access it concurrently and request Sessions from it. It holds cached data that has been read in one unit of work and may be reused in a future unit of work. A good practice is to create it upon application initialization.
Q: Is the EntityManager Thread safe?
EntityManager instances are not thread-safe. However, if you are using a container-managed EntityManager (e.g. one injected into Stateless Session Beans), thread safety is handled for you: the Container provides an EntityManager proxy that gives it control over the lifecycle of the underlying JPA EntityManagers, allowing each of them to be tied to the current thread and transaction.
Q: Which options are available to map a Compound primary key with Hibernate /JPA ?
There are mainly two options: using an @IdClass and using an @Embeddable Compound key class.
Both approaches are similar from the model point of view, but there are some subtle differences.
Check this tutorial: Configuring Composite Primary key in JPA applications
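As a quick reminder, here is a minimal sketch of the @Embeddable approach (class and field names are purely illustrative):
import java.io.Serializable;
import java.util.Objects;
import javax.persistence.Embeddable;
import javax.persistence.EmbeddedId;
import javax.persistence.Entity;

@Embeddable
public class OrderItemId implements Serializable {

    private Long orderId;
    private Long productId;

    // equals() and hashCode() are mandatory for a composite key class
    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof OrderItemId)) return false;
        OrderItemId other = (OrderItemId) o;
        return Objects.equals(orderId, other.orderId) && Objects.equals(productId, other.productId);
    }

    @Override
    public int hashCode() {
        return Objects.hash(orderId, productId);
    }
}

@Entity
class OrderItem {

    @EmbeddedId
    private OrderItemId id;

    private int quantity;
}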
Q: What is the difference between load() and get() in Hibernate ?
The main difference between the load() and get() methods is that load() throws an exception if there is no row for the given id, while get() returns null. The other difference is that load() returns a placeholder object, also called a proxy: Hibernate will hit the database only when the proxy object is actually used.
Transaction
Q: How can you start a JTA transaction from a Servlet deployed on JBoss ?
JBoss registers a JTA UserTransaction object in the JNDI tree, which you can inject and use to start a Transaction:
@Resource UserTransaction userTransaction;
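For instance, here is a minimal sketch of a Servlet that demarcates a JTA transaction (the servlet name and URL pattern are illustrative):
import javax.annotation.Resource;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.transaction.UserTransaction;

@WebServlet("/transfer")
public class TransferServlet extends HttpServlet {

    @Resource
    private UserTransaction userTransaction;

    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp) {
        try {
            userTransaction.begin();
            // ... perform the JDBC/JPA work that must be enlisted in the JTA transaction ...
            userTransaction.commit();
        } catch (Exception e) {
            try {
                userTransaction.rollback();
            } catch (Exception ignored) {
                // nothing else we can do here; the transaction reaper will eventually clean up
            }
            throw new RuntimeException(e);
        }
    }
}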
Q: What is the difference between EntityManager.getTransaction() and @Resource UserTransaction ?
In the context of a standalone application, you would use EntityManager.getTransaction() to demarcate the transaction yourself.
When you are in a managed environment (like applications running on WildFly) which integrates with JTA, you use UserTransaction: the EntityManager hooks itself into the JTA transaction manager so that the changes are flushed to the database when you issue a commit on the UserTransaction object. Finally, when running in a managed environment with EJBs using CMT (container-managed transactions), you don’t even need UserTransaction: the application server starts and ends the transactions for you.
Q: What if you need to span your transaction across multiple Servlet invocations ?
You can’t with a Servlet: a JTA transaction must start and finish within a single invocation (of the service() method). You should consider using a Stateful Session Bean instead: in a SFSB with a JTA transaction, the association between the bean instance and the transaction is retained across multiple client calls.
Q: What is the difference between a JTA datasource and an XA Datasource ?
Let’s take this example:
<datasource jta="true" . . . >
When JTA is true, the JCA connection pool manager knows to enlist the connection into the JTA transaction. This means that, if the Driver and the database support it, you can use JTA transaction for a single resource.
@PersistenceContext(unitName = "unit01")
private EntityManager entityManager;

public void addMovie(Movie movie) throws Exception {
    entityManager.persist(movie);
}
In practice this means that if you try to manage a JDBC transaction by yourself when jta is set to true, an exception will be raised:
12:11:17,145 SEVERE [com.sample.Bean] (http-/127.0.0.1:8080-1) null:
java.sql.SQLException: You cannot set autocommit during a managed transaction!
    at org.jboss.jca.adapters.jdbc.BaseWrapperManagedConnection.setJdbcAutoCommit(BaseWrapperManagedConnection.java:961)
    at org.jboss.jca.adapters.jdbc.WrappedConnection.setAutoCommit(WrappedConnection.java:716)
On the other hand, an XA transaction is usually referred to as a “global transaction”, that is, a set of two or more related transactions that must be managed in a coordinated way. The transactions that make up a distributed transaction might be in the same database, but more typically they are in different databases and often in different locations. Each individual transaction of a distributed transaction is referred to as a transaction branch.
For example, a distributed transaction might consist of money being transferred from an account in one bank to an account in another bank: you would not want either transaction committed without assurance that both will complete successfully. In short, an XA transaction may span multiple resources, while a non-XA transaction always involves just one resource.
Q: Can a non-XA resource participate in an XA Transaction?
Yes, it’s possible using an optimization known as the Last Resource Commit Optimization (LRCO). With this protocol, the non-XA resource is processed at the end of the prepare phase, and an attempt is made to commit it. If the commit succeeds, the transaction log is written and the remaining resources go through the commit phase. If the last resource fails to commit, the transaction is rolled back. This procedure should be used only as a last resort, as an error between the commit of the LRCO and the writing of the transaction log will cause data inconsistency.
Q: How do you enable LRCO ? Is it available by default ?
No, it’s not available by default. You need to:
- Create tables in a database for storing Transaction Xid attributes.
- Enable the datasource attribute “connectable” (see the CLI example right after this list).
- Add a commit-markable-resource section to the transactions subsystem referencing the datasource.
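For the second step, a minimal sketch from the CLI (the datasource name MyDS is just a placeholder):
/subsystem=datasources/data-source=MyDS:write-attribute(name=connectable,value=true)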
Web Services – JBoss interview questions
Q: If you have defined a web service that needs to transfer quite a lot of data, how would you do it ?
You might consider using an attachment to transfer the information: JAX-WS Web services provide attachment support through MTOM (Message Transmission Optimization Mechanism).
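A minimal sketch of an MTOM-enabled endpoint (the class name, threshold and method are illustrative):
import javax.jws.WebMethod;
import javax.jws.WebService;
import javax.xml.ws.soap.MTOM;

// Payloads above the threshold travel as binary attachments instead of base64-encoded XML
@MTOM(threshold = 1024)
@WebService
public class FileTransferService {

    @WebMethod
    public byte[] download(String fileName) {
        // ... load and return the file content; it will be transmitted as an MTOM attachment ...
        return new byte[0];
    }
}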
Q: What’s the difference between JAX-WS and JAX-RPC ?
Java API for XML-Based RPC (JAX-RPC) is a legacy Web Services Java API: it uses SOAP and HTTP to perform RPCs over the network and enables building Web services and Web applications based on the SOAP 1.1 specification and Java SE 1.4 or lower. JAX-WS 2.0 is the successor of JAX-RPC 1.1. JAX-WS still supports SOAP 1.1 over HTTP 1.1, so interoperability is not affected. However, there are lots of differences:
- JAX-WS maps to Java 5.0 and relies on many of the features that were new in Java 5.0, like Web Service annotations.
- JAX-RPC has its own data mapping model; JAX-WS’s data mapping model is JAXB, which promises mappings for all XML schemas.
- JAX-WS introduces message-oriented functionality and dynamic asynchronous functionality, which are missing in JAX-RPC.
- JAX-WS also adds support, via JAXB, for MTOM, the new attachment specification.
Q: Do you know how you could add support for Web Service transactions ?
JBossTS supports Web Services transactions, including extended transaction models designed specifically for loosely-coupled, long-running business processes. Java EE transactions can integrate seamlessly with Web Services transactions using its integrated, bi-directional transaction bridge. Interoperability with many other vendors is provided out-of-the-box, and JBoss is an active participant in these standards.
Various JBoss interview questions
Q: What are the differences between EJB 3.0 and EJB 2.0 ?
EJBs are now plain old Java objects (POJOs) that expose regular business interfaces (POJIs), and there is no requirement for home interfaces. Other differences include:
- Use of metadata annotations, an extensible, metadata-driven, attribute-oriented approach that replaces most XML deployment descriptors.
- Removal of the requirement for specific interfaces and deployment descriptors (deployment descriptor information can be replaced by annotations).
- Interceptor facility to invoke user methods at the invocation of business methods or at life cycle events.
- Reliance upon default values whenever possible (the “configuration by exception” approach).
- Reduction in the requirements for usage of checked exceptions.
- A completely new persistence model (based on the JPA standard) that supersedes EJB 2.x entity beans.
Q: What optimization could you use if the EJB container is the only point of write access to the database ?
You could activate “Commit Option A”, that is, the container caches entity bean state between transactions. This option assumes that the container has exclusive access to the persistent store and therefore it doesn’t need to synchronize the in-memory bean state with the persistent store at the beginning of each transaction.
Q: Which component handles cluster communication in JBoss / WildFly ?
The JGroups framework provides services to enable peer-to-peer communication between nodes in a cluster. It is built on top of a stack of network communication protocols that provide transport, discovery, reliability and failure detection, and cluster membership management services.
Spring Boot Interview questions
Check this article: Spring Boot Interview Questions