Clustering the messaging subsystem

We will conclude the clustering chapter by discussing the messaging subsystem, which uses HornetQ as its JMS provider.

HornetQ clusters allow groups of HornetQ servers to work together in order to share the message-processing load. Each node in the cluster is an active HornetQ server that manages its own messages and handles its own connections.

In order to enable clustering, you need a few simple enhancements to your server configuration file. First, the JMS server must be configured as clustered, so you will need to set the clustered element at the top of the messaging subsystem to true (this element defaults to false).
<subsystem xmlns="urn:jboss:domain:messaging:1.1">
   <hornetq-server>
      <clustered>true</clustered>
      . . . . . .
   </hornetq-server>
</subsystem>

Next, you need to configure the cluster connections. The cluster is formed by each node declaring cluster connections to the other server nodes. Behind the scenes, when a node forms a cluster connection to another node, it internally creates a core bridge connection between the two nodes. Once the connection has been established, it can be used to let messages flow between the nodes of the cluster and to balance the load.

Let’s see a typical cluster connection configuration which can be added to your messaging configuration within your <hornetq-server> definition:

<cluster-connections>
   <cluster-connection name="mycluster">
      <address>jms</address>
      <connector-ref>netty</connector-ref>
      <retry-interval>500</retry-interval>
      <use-duplicate-detection>true</use-duplicate-detection>
      <forward-when-no-consumers>true</forward-when-no-consumers>
      <discovery-group-ref discovery-group-name="dg-group1"/>
   </cluster-connection>
</cluster-connections>

In the previous configuration, we have explicitly specified several parameters, although you might use the defaults for some. You can also reference jboss-as-messaging_1_1.xsd for the full list of available parameters (available in the JBOSS_HOME/docs/schema folder of your server distribution).

The cluster-connection element's name attribute defines the name of the cluster connection we are going to configure (there can be zero or more cluster connections configured in your messaging subsystem).

The address element is a mandatory parameter and determines how messages are distributed across the cluster. In this example, the cluster connection will load balance messages sent to any address that starts with jms. This cluster connection will, in effect, apply to all JMS queues and topic subscriptions, because they map to core queues whose names start with the substring jms.

The connector-ref element references the connector which has been defined in the connectors section of the messaging subsystem. In this case, we are using the netty connector (See Chapter 3, Configuring Enterprise Services, for more information about the available connectors).
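For reference, here is a minimal sketch of how the netty connector is declared in the connectors section; the socket-binding name shown is the default from the standard profiles and may differ in your configuration:

<connectors>
   <netty-connector name="netty" socket-binding="messaging"/>
   . . . . . .
</connectors>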

The retry-interval element determines the interval, in milliseconds, between retry attempts. If a cluster connection is created while the target node has not been started, or is being rebooted, the cluster connections from the other nodes will retry connecting at this interval.

Next, when use-duplicate-detection is enabled, any duplicate messages will be detected, filtered out, and ignored on receipt at the target node.

The forward-when-no-consumers element, when set to true, ensures that each incoming message is distributed round robin even if there are no consumers on some nodes of the cluster.
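The discovery-group-ref element in the previous example points to a discovery group. Together with a broadcast group, this is how HornetQ nodes announce and locate each other over UDP multicast. A sketch of a typical definition follows, using the default names and socket bindings from the ha server profiles; adjust them to your own setup:

<broadcast-groups>
   <broadcast-group name="bg-group1">
      <socket-binding>messaging-group</socket-binding>
      <broadcast-period>5000</broadcast-period>
      <connector-ref>netty</connector-ref>
   </broadcast-group>
</broadcast-groups>
<discovery-groups>
   <discovery-group name="dg-group1">
      <socket-binding>messaging-group</socket-binding>
      <refresh-timeout>10000</refresh-timeout>
   </discovery-group>
</discovery-groups>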

The communication between cluster nodes is achieved through the JGroups API that, by default, uses UDP multicast messages to handle the cluster lifecycle events.
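As a reference, the default UDP stack is declared in the jgroups subsystem of the ha profiles; here is a trimmed-down sketch with the protocol list elided:

<subsystem xmlns="urn:jboss:domain:jgroups:1.1" default-stack="udp">
   <stack name="udp">
      <transport type="UDP" socket-binding="jgroups-udp"/>
      . . . . . .
   </stack>
</subsystem>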

The other building block of a clustered application is delegated to Infinispan, an advanced data grid and caching platform.

Each key element of an enterprise application that needs to preserve data consistency across the cluster can be configured through its own cache container, which is part of the Infinispan subsystem.

The sfsb cache-container is configured to replicate stateful session bean (SFSB) data across the cluster nodes.

The web cache-container is likewise configured to replicate HTTP session data across the cluster nodes.

The hibernate cache-container uses a more complex approach: it defines a local-query cache for handling query results locally and an invalidation-cache for entities, so that when data is updated the other cluster nodes are informed and can invalidate their stale copies.

Finally, a replicated-cache is used to replicate the query timestamps.
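To make this concrete, the following is an abridged sketch of how the hibernate cache-container appears in the ha server profiles (eviction and expiration settings elided); the exact contents may vary with your server version:

<cache-container name="hibernate" default-cache="local-query">
   <transport lock-timeout="60000"/>
   <local-cache name="local-query">
      . . . . . .
   </local-cache>
   <invalidation-cache name="entity" mode="SYNC">
      . . . . . .
   </invalidation-cache>
   <replicated-cache name="timestamps" mode="ASYNC">
      . . . . . .
   </replicated-cache>
</cache-container>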

To summarize, we have covered the messaging subsystem, which can easily be clustered by setting the clustered element to true. This way, messages are transparently load-balanced across your JMS servers. You can fine-tune your cluster connections by defining a cluster-connection section, which determines how messages are distributed across the cluster.

Configuring messaging credentials

When starting the cluster, you might have noticed the following warning in the server console (or in the server log):

09:29:07,573 WARNING [org.hornetq.core.server.impl.HornetQServerImpl] (MSC service thread 1-1) Security risk! It has been detected that the cluster admin user and password have not been changed from the installation default. Please see the HornetQ user guide, cluster chapter, for instructions on how to do this.

When creating connections between the nodes of a cluster to form a cluster connection, HornetQ uses a cluster user and a cluster password. It is imperative that these values are changed from their defaults, or remote clients will be able to make connections to the server using the default values. If they are not changed, HornetQ will detect this and pester you with a warning on every start-up, as seen in the preceding log.
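To fix this, set the cluster-user and cluster-password elements within your <hornetq-server> definition; the values below are, of course, just placeholders:

<hornetq-server>
   <clustered>true</clustered>
   <cluster-user>jmscluster</cluster-user>
   <cluster-password>mysecretpassword</cluster-password>
   . . . . . .
</hornetq-server>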

