Along with the definition of connection factories in the JMS subsystem, you can find the JMS destinations (Queues and Topics), which are part of the server distribution:
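A minimal sketch of such a definition follows (element names come from the HornetQ messaging subsystem schema; testQueue and the queue/test JNDI entry are the example names used throughout this section):

```xml
<jms-destinations>
    <jms-queue name="testQueue">
        <entry name="queue/test"/>
        <entry name="java:jboss/exported/jms/queue/test"/>
    </jms-queue>
</jms-destinations>
```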
The name attribute of the queue element defines the name of the queue. At the JMS level, the actual queue name follows a naming convention, so it will be jms.queue.testQueue.
The entry element configures the name that will be used to bind the queue to JNDI. This is a mandatory element, and a queue can contain multiple entry elements in order to bind the same queue under different names.
So, for example, here’s how you would configure a MessageDrivenBean component to consume messages from the “queue/test” Queue:
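A sketch of such a bean follows (the class name is illustrative; the activation config properties are the standard JCA ones used by the EE container to wire the MDB to the destination):

```java
import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationType",
                              propertyValue = "javax.jms.Queue"),
    @ActivationConfigProperty(propertyName = "destination",
                              propertyValue = "queue/test")
})
public class TestQueueMDB implements MessageListener {

    @Override
    public void onMessage(Message message) {
        try {
            // Handle only text payloads in this sketch
            if (message instanceof TextMessage) {
                System.out.println("Received: " + ((TextMessage) message).getText());
            }
        } catch (JMSException e) {
            throw new RuntimeException(e);
        }
    }
}
```

The container invokes onMessage for every message delivered to the queue/test destination; no explicit connection or session management is needed in the bean.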
At first glance, it may seem unimportant to know the server-side destination name (in the example, jms.queue.testQueue); usually, what matters is the JNDI entry where the destination is bound. However, the actual destination name plays an important role if you want to define some properties across a set of destinations. See the next section, Customizing destinations with address settings.
The selector element defines what JMS message selector the pre-defined queue will have. Only messages that match the selector will be added to the queue. This is an optional element with a default value of null when omitted.
The durable element specifies whether the queue will be persisted. This again is optional and defaults to true, if omitted.
If you want to provide some custom settings for JMS destinations, you can use the address-setting block, which can be applied both to a single destination and to a set of them. The default configuration applies a set of minimal attributes to all destinations:
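For reference, a sketch of that default block follows (element names come from the messaging subsystem schema; the values shown are the defaults shipped with the server, so treat them as indicative rather than authoritative):

```xml
<address-settings>
    <address-setting match="#">
        <dead-letter-address>jms.queue.DLQ</dead-letter-address>
        <expiry-address>jms.queue.ExpiryQueue</expiry-address>
        <redelivery-delay>0</redelivery-delay>
        <max-size-bytes>10485760</max-size-bytes>
        <address-full-policy>BLOCK</address-full-policy>
        <message-counter-history-day-limit>10</message-counter-history-day-limit>
    </address-setting>
</address-settings>
```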
Here is a brief description of the address settings:
The address-setting element's match attribute defines a filter for the destinations. When using the wildcard "#", the properties apply across all destinations. Other examples:
With match="jms.queue.#", the settings would apply to all queues defined in the destination section.
With match="jms.queue.testQueue", the settings would apply only to the queue named jms.queue.testQueue.
A short description of the destination’s properties follows here:
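As a hedged summary (these are the standard HornetQ address-setting attributes; the values shown are illustrative, not defaults), the most commonly used properties are sketched below with their meanings as comments:

```xml
<address-setting match="jms.queue.testQueue">
    <!-- Where undeliverable messages are routed after redelivery attempts are exhausted -->
    <dead-letter-address>jms.queue.DLQ</dead-letter-address>
    <!-- Where expired messages are routed -->
    <expiry-address>jms.queue.ExpiryQueue</expiry-address>
    <!-- Delay in milliseconds before a cancelled message is redelivered -->
    <redelivery-delay>5000</redelivery-delay>
    <!-- Maximum memory (in bytes) the address may use before the full policy applies -->
    <max-size-bytes>10485760</max-size-bytes>
    <!-- What to do when max-size-bytes is exceeded: PAGE, BLOCK, or DROP -->
    <address-full-policy>PAGE</address-full-policy>
</address-setting>
```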
The last piece of information we need to cover is about message persistence. HornetQ has its own optimized persistence engine, which can be further tuned when you know all about its building blocks.
The secret of HornetQ's high-performance persistence is that it appends data to journal files instead of using costly random-access operations, which require a higher degree of disk-head movement.
Journal files are pre-created and filled with padding characters at runtime. By pre-creating files, as one is filled, the journal can immediately resume with the next one without pausing to create it.
The following is the default journal configuration, which ships with JMS subsystem:
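A minimal sketch of those defaults follows (element names come from the messaging subsystem schema; the values match the defaults described below):

```xml
<journal-file-size>102400</journal-file-size>
<journal-min-files>2</journal-min-files>
<journal-type>ASYNCIO</journal-type>
<persistence-enabled>true</persistence-enabled>
```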
The default journal-file-size (expressed in bytes) is 100 KB (102,400 bytes). The minimum number of files the journal will maintain is set by the journal-min-files property; with its default value of 2, at least two files will be maintained.
The property journal-type indicates the type of input/output libraries used for data persistence. Valid values are NIO or ASYNCIO.
Choosing NIO selects the Java NIO journal. Choosing ASYNCIO selects the Linux asynchronous I/O (AIO) journal. If you choose ASYNCIO but are not running Linux, or you do not have libaio installed, HornetQ will detect this and automatically fall back to using NIO.
When the property persistence-enabled is set to false, message persistence is disabled entirely: no bindings data, message data, large-message data, duplicate-ID caches, or paging data will be persisted. Disabling data persistence will give your applications a remarkable performance boost; the flip side is that your messaging will inevitably lose reliability.
For the sake of completeness, we include some additional properties that can be included if you want to customize the messages/paging and journal storage directories:
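A sketch of those elements follows (the path values and the relative-to attribute are illustrative; by default the directories are resolved against the server's data directory):

```xml
<paging-directory path="messagingpaging" relative-to="jboss.server.data.dir"/>
<bindings-directory path="messagingbindings" relative-to="jboss.server.data.dir"/>
<journal-directory path="messagingjournal" relative-to="jboss.server.data.dir"/>
<large-messages-directory path="messaginglargemessages" relative-to="jboss.server.data.dir"/>
```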
For the best performance, we recommend locating the journal on its own physical volume in order to minimize disk-head movement. If the journal is on a volume shared with other processes that might be writing other files (for example, the bindings journal, a database, or a transaction coordinator), the disk head may well be moving rapidly between these files as it writes them, drastically reducing performance.
Ravindra Savaram is a Content Lead at Mindmajix.com. His passion lies in writing articles on the most popular IT platforms including Machine learning, DevOps, Data Science, Artificial Intelligence, RPA, Deep Learning, and so on. You can stay up to date on all these technologies by following him on LinkedIn and Twitter.