How Hyperwallet Fell in Love with Hazelcast
We first came across Hazelcast at the JavaOne conference in 2015. At that point, we were using OpenMQ, an open-source Java Message Service (JMS) implementation, to handle our messaging needs. I won’t go into all the details, but suffice it to say that we were not completely happy with this solution. We had spent a lot of (wo)man-hours investigating and trying to resolve our concerns around its general stability, with very little luck. We had two options: switch to an alternate JMS provider, or switch to a different messaging solution entirely. The first option would have been easier, as it would have required minimal code changes, and since we were already familiar with the JMS standard, it would have made for a smooth transition.
But where’s the fun in that?
Enter Hazelcast (or as we fondly refer to it, Hazelnut).
Getting Started with Hazelcast
There were a number of reasons why Hazelcast was appealing to our development team: WAN replication for disaster recovery, dynamic scalability, fast transaction speeds, XA transactions, and so on. But there were plenty of unknowns as well. That’s why the first thing we did was build a proof of concept (POC) to test and ensure Hazelcast could handle our messages in a reliable, performant, and scalable manner.
It’s worth quickly unpacking why these three factors were so important for us, specifically in terms of Hazelcast’s in-memory data grid solution:
- Reliability: Because multiple nodes back one another up, if a node crashes or becomes unavailable, the remaining nodes pick up the slack and cover for it. We wanted to make sure Hazelcast lived up to its claim of fault tolerance (no single point of failure).
- Performance: With our previous solution, we saw a performance impact from disk I/O; persisting to a database likewise meant higher latency. This would no longer be an issue with Hazelcast, which offers high throughput and low latency because data is stored in memory rather than persisted to disk or a database.
- Scalability: It’s very easy to spin up another node, or another 100 Hazelcast nodes, in a cluster. Copy an existing Hazelcast instance to a different location, make minimal configuration changes, and start it up.
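To make the reliability and scalability claims concrete, here is a minimal sketch (not our production code) of how Hazelcast members form a cluster and share one data grid. It assumes the Hazelcast library is on the classpath; the map name `demo` is purely illustrative.

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

public class ClusterDemo {
    // Starts two members in one JVM; members discover each other and
    // form a cluster, partitioning data (with backups) across nodes.
    static String replicateThroughCluster() {
        HazelcastInstance member1 = Hazelcast.newHazelcastInstance();
        HazelcastInstance member2 = Hazelcast.newHazelcastInstance();

        // Write through one member, read through the other: both see
        // the same distributed map, which is what lets surviving nodes
        // cover for a node that goes down.
        member1.getMap("demo").put("key", "value");
        String seen = (String) member2.getMap("demo").get("key");

        Hazelcast.shutdownAll();
        return seen;
    }
}
```

Adding a third node is the same `newHazelcastInstance()` call (or the same startup script on another machine); the cluster rebalances partitions automatically.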
After successfully proving our POC through testing, we got down to the real work of integrating Hazelcast into our existing application.
How We Implemented: A Five-Step Approach
Our Hazelcast implementation followed a five-step approach:
Step 1: Build a standalone module that was completely transparent to the main application
This triggered a period of versioning and build-automation hell, but by and large, we were able to integrate things into the application with minimal disruption.
Step 2: Enable SSL to ensure secure communication
This involved two steps:
- Generate the certificate files.
- Modify the Hazelcast configuration file:
  a. Enable SSL in the file.
  b. Point it to the generated certificate.
Step 3: Port over existing functionality
This required changing only a couple of lines of code for each piece of functionality. Open-source JMS solution out, Hazelcast solution in!
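As a hedged illustration of what that swap might look like (not our actual code), the sketch below replaces a JMS producer/consumer pair with a Hazelcast distributed queue. The queue name `payment-events` and the message payload are made up for the example.

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IQueue;

public class QueuePortingSketch {
    // Where the JMS code created a Connection, Session, and
    // MessageProducer, the Hazelcast version just asks the
    // instance for a named distributed queue.
    static String roundTrip() throws InterruptedException {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        IQueue<String> queue = hz.getQueue("payment-events"); // hypothetical queue name

        queue.put("payment-created"); // replaces producer.send(message)
        String event = queue.take();  // replaces consumer.receive()

        hz.shutdown();
        return event;
    }
}
```

Because `IQueue` implements `java.util.concurrent.BlockingQueue`, the consuming side reads naturally: `take()` blocks until a message arrives, much like a blocking JMS `receive()`.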
Step 4: Testing, testing, testing
Our testing included a whole array of regression, failover, and edge-case tests to ensure that the aforementioned requirements of reliability, performance, and scalability were met.
Step 5: Start monitoring
I’ve purposefully saved the best for last! For those of you who aren’t familiar with Hazelcast, it comes with an out-of-the-box management center that both looks cool and offers a ton of functionality. We use it extensively to manage and monitor our nodes and data.
In conclusion, I’d be lying if I said our Hazelcast implementation came without some hiccups (ugh, build automation). That being said, it was a relatively easy product to set up and use, and I’m looking forward to making the most of the various features that Hazelcast offers.