The Hibernate cache container is a key element of your configuration because it handles the data tier, which is the backend of every application. As you probably know, JBoss AS uses Hibernate as its default JPA provider, so the concepts described in this chapter apply both to Hibernate applications (configured to run on JBoss AS) and to JPA-based applications.
Hibernate caches are conceptually different from session-based caches because they rest on a different assumption: you have permanent storage for your data (the database), so it is not necessary to replicate or distribute copies of the entities across the cluster in order to achieve high availability. You just need to inform your nodes when data has been modified, so that it can be invalidated.
If a cache is configured for invalidation rather than replication, every time data is changed in a cache, other caches in the cluster receive a message informing them that their data is now stale and should be evicted from memory.
The benefit of this is twofold: network traffic is minimized, as invalidation messages are very small compared to replicated updated data, and the other caches in the cluster look up the modified data lazily, only when it is actually needed.
Now let’s see in practice how the mechanism works: whenever a new entity or collection is read from a database, it’s only cached locally in order to reduce intra-cluster traffic:
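The following is a sketch of such a local cache, modeled on the default hibernate cache container shipped with the HA profiles; the cache name and the exact values may differ in your distribution (note that max-idle is expressed in milliseconds):

```xml
<cache-container name="hibernate" default-cache="local-query">
    <!-- Local cache: entries are never replicated to other nodes -->
    <local-cache name="local-query">
        <transaction mode="NONE"/>
        <!-- Keep at most 10,000 entries, evicting the least recently used -->
        <eviction strategy="LRU" max-entries="10000"/>
        <!-- Expire entries idle for 100 seconds (value in milliseconds) -->
        <expiration max-idle="100000"/>
    </local-cache>
</cache-container>
```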
The local query cache is configured by default to store up to 10,000 entries, evicted using an LRU strategy. Each entry is automatically removed from the cache if it has been idle for 100 seconds.
Once a cached entity is updated, your cache sends a message to the other members of the cluster, telling them that the entity has been modified. This is where the invalidation cache comes into play:
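A sketch of the corresponding invalidation cache, again modeled on the default hibernate cache container (the cache name entity and the exact values are illustrative and may vary in your distribution):

```xml
<!-- Invalidation cache: updates broadcast invalidation messages, not data -->
<invalidation-cache name="entity" mode="SYNC">
    <transaction mode="NON_XA"/>
    <!-- Same eviction and expiration settings as the local query cache -->
    <eviction strategy="LRU" max-entries="10000"/>
    <expiration max-idle="100000"/>
</invalidation-cache>
```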
By default, invalidation uses the same eviction and expiration settings as the local query cache, that is, a maximum of 10,000 entries and an idle time of 100 seconds before expiration.
Invalidation, too, can be synchronous (SYNC) or asynchronous (ASYNC). Just as with replication, synchronous invalidation blocks until all caches in the cluster have received the invalidation messages and evicted the stale data, while asynchronous invalidation works in a fire-and-forget mode: invalidation messages are broadcast, but the sender does not block waiting for responses.
By default, entities and collections are configured to use READ_COMMITTED as their cache isolation level. It would, however, make sense to configure REPEATABLE_READ if the application evicts/clears entities from the Hibernate session and then expects to repeatedly re-read them in the same transaction. If you really need REPEATABLE_READ, you can simply configure the entity or collection caches to use that isolation level:
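For illustration, the isolation level can be raised through the locking element of the cache definition; this is a sketch, not a complete cache configuration:

```xml
<invalidation-cache name="entity" mode="SYNC">
    <!-- Raise the cache isolation level from the READ_COMMITTED default -->
    <locking isolation="REPEATABLE_READ"/>
    <transaction mode="NON_XA"/>
</invalidation-cache>
```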
The last piece of configuration contained in the Infinispan subsystem concerns the timestamps cache. The timestamps cache keeps track of the last update timestamp for each table (this timestamp is updated on any table modification).
This cache is strictly connected with the query cache, which is used to store the result sets of queries made against the database. We will discuss the query cache further in the section named “Entity clustering”; in short, if the query cache is enabled, any time a query is issued, the query cache is checked before the query is sent to the database. If the timestamp of the last update on a table is newer than the time at which the query results were cached, the entry is removed and the lookup is a miss.
By default, the timestamps cache is configured with asynchronous replication as its clustering mode. Local and invalidation cluster modes are not allowed, since all cluster nodes must store all timestamps. As a consequence, no eviction or expiration is allowed for the timestamps cache either.
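A sketch of the timestamps cache definition, consistent with these constraints (the name timestamps is the one commonly used in the HA profiles, but may vary):

```xml
<!-- Timestamps must be replicated to every node; eviction is disabled -->
<replicated-cache name="timestamps" mode="ASYNC">
    <transaction mode="NONE"/>
    <eviction strategy="NONE"/>
</replicated-cache>
```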
There are situations in which you may want to replicate your entity cache across the other cluster nodes instead of using local caches and invalidation. This can be the case when the following conditions are met:
In order to switch to a replicated cache, you have to configure your default-cache attribute as follows:
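A minimal sketch of such a change, assuming a replicated cache named replicated-entity (the cache name and values are illustrative):

```xml
<cache-container name="hibernate" default-cache="replicated-entity">
    <!-- Entities are now copied to every node instead of invalidated -->
    <replicated-cache name="replicated-entity" mode="ASYNC">
        <transaction mode="NON_XA"/>
        <eviction strategy="LRU" max-entries="10000"/>
        <expiration max-idle="100000"/>
    </replicated-cache>
</cache-container>
```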
Infinispan has, however, a wealth of options available to further customize your cache. In this section, we will discuss customizing the thread configuration and the default transport configuration.
Configuring Infinispan threads

Just as for the JGroups transport, you can externalize your Infinispan thread configuration, moving it into the thread pool subsystem. The following thread pools can be configured on a cache-container basis:
Customizing the thread pools can be advantageous in some cases; for example, if you plan to apply a cache replication algorithm, it can be worthwhile to tune the number of threads used for replicating data. In the following example, we externalize the thread pools of the web cache-container, defining up to twenty-five threads for transporting data to the other nodes and five threads for replicating data.
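A sketch of such a configuration, with the thread pool names (infinispan-transport, infinispan-repl-queue) chosen purely for illustration:

```xml
<!-- In the threads subsystem: define the bounded pools -->
<subsystem xmlns="urn:jboss:domain:threads:1.1">
    <!-- Up to twenty-five threads for transporting data to other nodes -->
    <bounded-queue-thread-pool name="infinispan-transport">
        <core-threads count="25"/>
        <queue-length count="100"/>
        <max-threads count="25"/>
    </bounded-queue-thread-pool>
    <!-- Up to five threads for the replication queue -->
    <bounded-queue-thread-pool name="infinispan-repl-queue">
        <core-threads count="5"/>
        <queue-length count="100"/>
        <max-threads count="5"/>
    </bounded-queue-thread-pool>
</subsystem>

<!-- In the Infinispan subsystem: reference the pools by name -->
<cache-container name="web" default-cache="repl"
                 replication-queue-executor="infinispan-repl-queue">
    <transport executor="infinispan-transport"/>
    <!-- cache definitions omitted -->
</cache-container>
```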
The Infinispan subsystem uses the JGroups subsystem to provide the foundation for the network transport of cache data. By default, cache containers use the default-stack, which is defined in the JGroups subsystem.
The default UDP transport is usually suitable for large clusters, or if you are using replication or invalidation, as it avoids opening a large number of sockets.
The TCP stack performs better for smaller clusters, in particular if you are using distribution, as TCP is more efficient as a point-to-point protocol.
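For instance, a cache container can be pointed at the TCP stack through the stack attribute of its transport element. This is a sketch; the stack name tcp must match a stack defined in your JGroups subsystem:

```xml
<cache-container name="hibernate" default-cache="local-query">
    <!-- Use the JGroups stack named "tcp" instead of the default stack -->
    <transport stack="tcp"/>
    <!-- cache definitions omitted -->
</cache-container>
```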