How to view Hazelcast cache logs - java

I have a project with two microservices and a gateway, all generated with JHipster. I use the Hibernate second-level cache for caching, implemented with Hazelcast as the cache provider. Everything is deployed to Docker using docker-compose. My question is: when I load the entities a second time, how do I know whether the microservice is hitting the database or fetching from the cache? Where can I see the generated Hibernate queries for each request, and the cache provider's logs? The code is uploaded to GitHub for reference.

One approach, which can be used all the way up to production, is the Hazelcast Management Center application. With it you can monitor put and get throughput and many other performance statistics of your cluster. Alternatively, you can use JConsole or Java Mission Control (if using Oracle's JVM) to connect to each individual node and view its JMX stats.
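If you want to verify cache behavior from inside the application itself, Hibernate's statistics API is another option. In a Spring Boot/JHipster app, setting hibernate.generate_statistics to true and logging org.hibernate.SQL at DEBUG level will also print every generated query, so a second load that produces no new SQL came from the cache. Below is a minimal sketch, assuming you can inject the JPA EntityManagerFactory; the class name is illustrative:

    import javax.persistence.EntityManagerFactory;
    import org.hibernate.SessionFactory;
    import org.hibernate.stat.Statistics;

    public class CacheStatsLogger {

        private final EntityManagerFactory emf;

        public CacheStatsLogger(EntityManagerFactory emf) {
            this.emf = emf;
        }

        public void logSecondLevelCacheStats() {
            // Unwrap Hibernate's SessionFactory from the JPA EntityManagerFactory.
            Statistics stats = emf.unwrap(SessionFactory.class).getStatistics();
            stats.setStatisticsEnabled(true); // or set hibernate.generate_statistics=true

            System.out.println("2nd level cache hits:   " + stats.getSecondLevelCacheHitCount());
            System.out.println("2nd level cache misses: " + stats.getSecondLevelCacheMissCount());
            System.out.println("Queries executed:       " + stats.getQueryExecutionCount());
        }
    }

If the hit count climbs on the second load while the query count stays flat, the entities are coming from Hazelcast rather than the database.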

Related

Design considerations for J2EE webapp on Tomcat in Amazon Web Services

My project is looking to deploy a new J2EE application to Amazon's cloud. Elastic Beanstalk supports Tomcat apps, which seems perfect. Are there any particular design considerations to keep in mind when writing said app that might differ from just a standalone Tomcat on a server?
For example, I understand that the server is meant to scale automatically. Is this like a cluster? Our application framework tends to like to stick state in the HttpSession, is that a problem? Or when it says it scales automatically, does that just mean memory and CPU?
Automatic scaling on AWS is done by adding more servers, not by adding more CPU/RAM to an existing one. You can add more CPU/RAM manually, but doing so requires shutting the server down for a minute to make the change and then configuring any software running on the server to take advantage of the added RAM, so that is not how automatic scaling is done.
Elastic Beanstalk is basically a management interface for Amazon EC2 servers, Elastic Load Balancers and Auto Scaling Groups. It sets all that up for you and provides a convenient way of deploying new versions of your application easily. Elastic Beanstalk will create EC2 servers behind an Elastic Load Balancer and use an Auto Scaling configuration to add more servers as your application load increases. It handles adding the servers to the load balancer when they are ready to receive traffic, and removing them from the load balancer and deleting the extra servers when they are no longer needed.
For your Java application running on Tomcat you have a few options to handle horizontal scaling well. You can enable sticky sessions on the Load Balancer so that all requests from a specific user will go to the same server, thus keeping the HttpSession tied to the user. The main problem with this is that if a server is removed from the pool you may lose some HttpSessions and cause any users that were "stuck" to that server to be logged out of your application. The solution to this is to configure your Tomcat instances to store sessions in a shared location. There are Tomcat session store implementations out there that work with AWS services like ElastiCache (Redis) and DynamoDB. I would recommend using one of those, probably the Redis implementation if you aren't already familiar with DynamoDB.
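If you would rather solve this inside the application than with a Tomcat-specific session manager, Spring Session can back the HttpSession with Redis. A minimal sketch, assuming the spring-session-data-redis dependency is on the classpath; the endpoint host name is a placeholder:

    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory;
    import org.springframework.session.data.redis.config.annotation.web.http.EnableRedisHttpSession;

    @Configuration
    @EnableRedisHttpSession // replaces the container's HttpSession with a Redis-backed one
    public class SessionConfig {

        @Bean
        public LettuceConnectionFactory redisConnectionFactory() {
            // Point this at your ElastiCache (Redis) endpoint; the host is a placeholder.
            return new LettuceConnectionFactory("my-elasticache-endpoint.example.com", 6379);
        }
    }

With sessions externalized like this, losing a server no longer logs users out, and sticky sessions become an optimization rather than a requirement.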
Another consideration for moving a Java application to AWS is that you cannot use any tools or libraries that rely on multicast. You may not be using multicast for anything, but in my experience every Java app I've had to migrate to AWS relied on multicast for clustering, and I had to modify it to use a different clustering method.
Also, for a successful migration to AWS I suggest you read up a bit on VPCs, private IP versus public IP, and Security Groups. A solid understanding of those topics is key to setting up your network so that your web servers can communicate with your DB and cache servers in a secure and performant manner.

How do I scale a Java app with a REST API and a Database?

I have a typical stateless Java application which provides a REST API and performs updates (CRUD) on a PostgreSQL database.
However, the number of clients is growing and I feel the need to:
Increase redundancy, so that if one server fails another takes its place.
For this I will probably need a load balancer?
Increase response speed by not flooding the network and the CPU of just one server (however, how will the load balancer itself not get flooded?).
Maybe I will need to distribute the database?
I want to be able to update my app seamlessly (I have seen a thingy called Kubernetes doing this): kill each redundant node one by one and immediately replace it with an updated version.
My app also stores some image files, which grow quickly in disk size; I need to be able to distribute them.
All of this must be backup-able.
This is my current setup (both the Java app and the DB are on the same server).
What is the best/correct way of scaling this?
Thanks!
Web Servers:
Run your app on multiple servers, behind a load balancer. Use AWS Elastic Beanstalk, or roll your own solution with EC2 + Auto Scaling Groups + ELB.
You mentioned a concern about "flooding" of the load balancer, but if you use Amazon's Elastic Load Balancer service it will scale automatically to handle whatever traffic you get, so you don't need to worry about this.
Database Servers:
Move your database to RDS and enable Multi-AZ failover. This will create a hot standby server that your database will automatically fail over to if there are issues with your primary server. Optionally, add read replicas to scale out your database's read capacity.
Start caching your database queries in Redis if you aren't already. There are plugins out there to do this with Hibernate fairly easily. This will take a huge load off your database servers if your app performs the same queries regularly. Use AWS ElastiCache or RedisLabs for your Redis server(s).
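To illustrate what that looks like on the Hibernate side, here is a minimal sketch of a cacheable entity and a query routed through the query cache. It assumes the second-level cache and query cache are enabled in your Hibernate configuration and that a Redis-backed region factory (for example, the Redisson Hibernate integration) is wired in; the entity and repository names are illustrative:

    import java.util.List;
    import javax.persistence.Cacheable;
    import javax.persistence.Entity;
    import javax.persistence.EntityManager;
    import javax.persistence.Id;
    import org.hibernate.annotations.Cache;
    import org.hibernate.annotations.CacheConcurrencyStrategy;

    @Entity
    @Cacheable
    @Cache(usage = CacheConcurrencyStrategy.READ_WRITE) // entity data cached in Redis
    public class Product {
        @Id
        private Long id;
        private String name;
    }

    class ProductQueries {
        static List<Product> findAll(EntityManager em) {
            return em.createQuery("select p from Product p", Product.class)
                     // Route this query's result set through the query cache as well.
                     .setHint("org.hibernate.cacheable", Boolean.TRUE)
                     .getResultList();
        }
    }

Repeated calls to findAll then hit Redis instead of PostgreSQL until the cached regions are invalidated by a write.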
Images:
Stop storing your image files on your web servers! That creates lots of scalability issues. Move them to S3: it gives you unlimited storage space and automated backups, and serving the images directly from S3 takes that load off your web servers.
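As a sketch of what the application side of that move looks like with the AWS SDK for Java; the bucket name and key scheme are placeholders:

    import java.io.File;
    import com.amazonaws.services.s3.AmazonS3;
    import com.amazonaws.services.s3.AmazonS3ClientBuilder;

    public class ImageStore {

        private final AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

        public String store(File image) {
            // Bucket and key are placeholders; derive the key from your own domain model.
            String bucket = "my-app-images";
            String key = "uploads/" + image.getName();
            s3.putObject(bucket, key, image);
            // Hand this URL to clients so they fetch the image from S3, not your servers.
            return s3.getUrl(bucket, key).toString();
        }
    }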
Deployments:
There are so many solutions here that it just becomes a question about which method someone prefers. If you use Elastic Beanstalk then it provides a solution for deployments. If you don't use EB, then there are hundreds of solutions to pick from. I'd recommend designing your environment first, then choosing an automated deployment solution that will work with the environment you have designed.
Backups:
If you do this right you shouldn't have much on your web servers to backup. With Elastic Beanstalk all you will need in order to rebuild your web servers is the code and configuration files you have checked into Git. If you end up having to backup EC2 servers you will want to look into EBS snapshots.
For database backups, RDS will perform a daily backup automatically. If you want backups outside RDS, you can schedule them yourself by running pg_dump from a cron job.
For images, you can enable S3 versioning and cross-region replication.
CDN:
You didn't mention this, but you should look into a CDN. This will allow your application to be served faster while reducing the load on your servers. AWS provides the CloudFront CDN, and I would also recommend looking at CloudFlare.

WebSphere propagating changes to customized cache across all servers in a cluster

We are using WAS 7.0 with a customized local cache in a clustered environment. In our application there is some very commonly used (and very seldom updated) reference information that is retrieved from the database and stored in a customized cache on server start-up (we do not use the application server's cache). New reference values can be entered through the application; when this is done, the data in the database is updated and the cached data on that single server (one of three) is refreshed. If a user hits any of the other servers in the cluster, they will not see the newly entered reference values (unless the server is bounced). Restarting the cluster is not a feasible solution, as it would bring down production. My question is: how do I tell the other servers to update their local cache as well?
I looked at the JMS publish/subscribe mechanism. I was thinking of publishing a message every time I update the values of any reference table; the other servers can act as subscribers, get the message, and refresh their cache. The servers need to act as both publishers and subscribers. I am not sure how to incorporate this solution into my application. I am open to suggestions and other solutions as well.
In general, you should consider the dynamic cache service provided by the application server. It already has replication options out of the box. Check "Using the DistributedMap and DistributedObjectCache interfaces for the dynamic cache" for more details.
And regarding your custom JMS solution: you would have an MDB in your application, configured against a topic. Once the cache is changed, your application publishes a message to that topic; the MDB on each server reads the message and updates the local cache.
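A rough sketch of the subscriber side of that design is below. ReferenceDataCache is a hypothetical stand-in for your custom cache, and the topic name is an assumption; on WAS the MDB would typically be bound to the topic through an activation specification in the admin console:

    import javax.ejb.ActivationConfigProperty;
    import javax.ejb.MessageDriven;
    import javax.jms.Message;
    import javax.jms.MessageListener;

    @MessageDriven(activationConfig = {
        @ActivationConfigProperty(propertyName = "destinationType",
                                  propertyValue = "javax.jms.Topic"),
        @ActivationConfigProperty(propertyName = "destination",
                                  propertyValue = "jms/CacheRefreshTopic")
    })
    public class CacheRefreshListener implements MessageListener {

        @Override
        public void onMessage(Message message) {
            // Every cluster member subscribes to the topic, so each one
            // reloads its own local copy of the reference data.
            ReferenceDataCache.getInstance().reload();
        }
    }

The publishing side would send a simple message to the same topic from the code path that updates the reference table.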
But since that is quite a complex change, I'd strongly consider switching to the built-in cache.

Get current application instance using WebSphere API?

I have an app deployed to a cluster with two JVMs. The web application has a cache implemented using MBeans, and the cache runs on each JVM. The cache is refreshed by requests matching the pattern */refresh. The problem is that when the request goes through the ODR, it is routed to only one server, so the cache on only one server is refreshed. How do I solve this problem? Cache replication? I think it might be a lot of work to implement cache replication. Any other solutions? WebSphere APIs?
If I can get the current instance of the application, I'm thinking of using AdminClient to get the cluster members and then invoke the refresh request on all the nodes on which the application is installed, except for the current instance.
The WebSphere way to do this is to use the DynaCache feature with DRS (Data Replication Service). DynaCache is essentially a hash map that can be distributed across the DRS cluster members. It has an API, DistributedMap, which extends java.util.Map.
There are also many configuration options (through the admin console and cachespec.xml) and monitoring possibilities (PMI with the Tivoli Performance Viewer).
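A minimal sketch of using the DistributedMap API is below. The lookup uses the default object cache instance's JNDI name, services/cache/distributedmap; if you define your own cache instance in the admin console, substitute its JNDI name, and note that replication must be enabled on the instance for puts to propagate:

    import javax.naming.InitialContext;
    import javax.naming.NamingException;
    import com.ibm.websphere.cache.DistributedMap;

    public class ClusterCache {

        public static DistributedMap lookup() throws NamingException {
            // Default DynaCache object cache instance; custom instances get their own JNDI names.
            return (DistributedMap) new InitialContext().lookup("services/cache/distributedmap");
        }

        public static void refreshEntry(String key, Object value) throws NamingException {
            // With DRS replication enabled on the cache instance, this put
            // is propagated to the other cluster members automatically.
            lookup().put(key, value);
        }
    }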
Technical overview:
http://pic.dhe.ibm.com/infocenter/lnxinfo/v3r0m0/index.jsp?topic=%2Fliaag%2Fcache%2Fpubwasdynacachoverview.htm
DistributedMap API:
http://pic.dhe.ibm.com/infocenter/adiehelp/v5r1m1/index.jsp?topic=%2Fcom.ibm.wasee.doc%2Finfo%2Fee%2Fjavadoc%2Fee%2Fcom%2Fibm%2Fwebsphere%2Fcache%2FDistributedMap.html
A good article from developerWorks:
http://www.ibm.com/developerworks/websphere/library/techarticles/0906_salvarinov/0906_salvarinov.html
The crude way we did something similar was to directly hit each Web Container on its own port. If you're able to reach them, that is.

Advantages and disadvantages of Tomcat clustering

Currently we have the same web application deployed on four different Tomcat instances, each running on an independent machine. A load balancer distributes requests to these servers. Our web application makes database calls and maintains a cache (key-value pairs). All Tomcat instances read the same data (XML) from the same data source (another server) and serve it to clients. In the future, we are planning to collect some usage data from requests, process it, and store it in a database. This functionality should be common (one module) across all Tomcat servers.
Now we are thinking of using Tomcat clustering. I have done some research, but I am not able to figure out how to factor out the data-fetching operations (reading the same XML from the same data source) from all the Tomcat web apps and make them common, so that once one server fetches the data, it keeps it (perhaps in a cache) and the same data can be used by another server to serve clients. This could be implemented using a distributed cache, but there are other modules as well that could be made common across all Tomcat instances.
So basically: is there any advantage to using Tomcat clustering? And if so, how can I implement modules that are common to all Tomcat servers?
Read the Tomcat configuration reference and clustering guide. The available clustering features are as follows:
The tomcat cluster implementation provides session replication,
context attribute replication and cluster wide WAR file deployment.
So, by clustering, you'll gain:
High availability: when one node fails, another will be able to take over without losing access to the data. For example, an HTTP session can still be handled without the user noticing the error.
Farm deployment: you can deploy your .war to a single node, and the rest will synchronize automatically.
The costs are mainly in performance:
Replication implies object serialization between the nodes. This may be undesirable in some cases, but it is also possible to fine-tune.
If you just want to share some state between the nodes, then you don't need clustering at all (unless you're going to use context or session replication). Just use a database and/or a distributed cache model like ehcache (or anything else).
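As a sketch of that shared-state approach, the XML data could live in a distributed map that all four Tomcat instances join as cluster members; Hazelcast is used here as one example, and the map name and fetch method are illustrative:

    import java.util.concurrent.ConcurrentMap;
    import com.hazelcast.core.Hazelcast;
    import com.hazelcast.core.HazelcastInstance;

    public class SharedXmlCache {

        // Each Tomcat instance starts one member; the members discover each
        // other and form a single cluster sharing the map contents.
        private static final HazelcastInstance hz = Hazelcast.newHazelcastInstance();

        public static String getXml(String key) {
            ConcurrentMap<String, String> cache = hz.getMap("xml-data");
            String value = cache.get(key);
            if (value == null) {
                // Only a node that misses fetches from the remote data source;
                // every other node then reads the shared copy.
                value = fetchFromDataSource(key);
                cache.putIfAbsent(key, value);
            }
            return value;
        }

        // Hypothetical placeholder for the real call to the remote XML server.
        private static String fetchFromDataSource(String key) {
            return "<data key='" + key + "'/>";
        }
    }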
