Vaadin session scaling - java

Can anyone suggest a solution for storing sessions created in Vaadin, in order to make an application fault tolerant?
Can sessions be stored in Redis or Memcached?
The idea is to run a large-scale application (Vaadin + Spring) on AWS using only Spot instances.

Related

What is the best way to use Redis sessions for Spring Boot with an external Tomcat container?

Folks,
I am using the Spring Boot framework with Tomcat containers, and for several operational reasons I am trying to share sessions through Redis. I used spring-session-data-redis, as recommended by this guide:
https://www.baeldung.com/spring-session
However, I have a question about session sharing with Redis via spring-session-data-redis. If I need to run multiple servers behind a load balancer to reduce traffic stress, do I also have to configure Tomcat itself to use Redis sessions, or is spring-session-data-redis enough to share sessions across the cluster?
If someone visits a wrong sub-path on the domain (for example somewheredomain.com/not_spring_project/some_path), I suspect Spring Session will not share the session. If that user first hits Tomcat server A with the correct path and is then routed to another Tomcat server with the wrong path, the second server might generate (or overwrite) the JSESSIONID.
Can anyone explain the best practice for session sharing with Spring Boot on an external Tomcat container?
I struggled for a while to find the answer to this question. After building a test bed with multiple VMs on Google Cloud Platform, I finally found it.
The answer is very simple: Spring Session does not use JSESSIONID as its session key (by default it issues its own SESSION cookie). :P
So developers don't need to worry about multiple Tomcat servers issuing new JSESSIONIDs. Just use spring-session-data-redis, and session data is shared through the Redis server.
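As a minimal sketch of that setup (assuming Spring Boot 2.x with the spring-session-data-redis dependency on the classpath; the host name below is a placeholder), the configuration usually comes down to a few application properties and no Tomcat-level changes:

```properties
# Sketch: tell Spring Boot to back the HttpSession with Redis.
# Spring Session then issues its own SESSION cookie instead of JSESSIONID.
spring.session.store-type=redis

# Placeholder connection details for the shared Redis server.
spring.redis.host=redis.internal.example.com
spring.redis.port=6379
```

Every Tomcat instance pointed at the same Redis server then sees the same session data, regardless of which node the load balancer picks.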

How to view Hazelcast cache logs

I have a project with two microservices and a gateway, both generated through JHipster. I use the Hibernate second-level cache, implemented with Hazelcast, and everything is deployed to Docker using docker-compose. My question is: when I load the entities a second time, how do I know whether the microservice is hitting the database or fetching from the cache? Where can I see the generated Hibernate queries for each request, and the cache provider's logs? The code is uploaded to GitHub for reference.
One approach, which can be used all the way up to production, is the Hazelcast Management Center application. You can monitor put and get throughput and many other performance statistics of your cluster with it. Alternatively, you can use jconsole, or Mission Control (if using Oracle's JVM), to connect to each individual node and view the JMX stats.
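To see the cache behaviour in the application logs themselves, a sketch of the relevant Spring Boot properties (assuming a standard JHipster/Spring Boot setup): with these enabled, a second entity load that is served from the cache produces no SQL log line, while a cache miss logs the query.

```properties
# Sketch: log every SQL statement Hibernate actually sends to the database.
logging.level.org.hibernate.SQL=DEBUG
# Log second-level cache activity (puts, hits, misses).
logging.level.org.hibernate.cache=DEBUG
# Have Hibernate collect and log session statistics, including cache hit counts.
spring.jpa.properties.hibernate.generate_statistics=true
```

If a repeated query shows up in the SQL log both times, the second call hit the database rather than the cache.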

Liferay 6 persistent sessions with very large table

I've implemented a Liferay 6.2 cluster (with Tomcat 7.x) and configured persistent sessions in the Tomcat configuration.
Everything is working fine, but I've noticed that the table containing the sessions is very large:
almost 46 GB of space for ~2,000 persisted sessions.
Is there any way to reduce the amount of data saved into the session?
I see there is a Liferay property:
session.shared.attributes=COMPANY_,LIFERAY_SHARED_,org.apache.struts.action.LOCALE,PORTLET_RENDER_PARAMETERS_,PUBLIC_RENDER_PARAMETERS_POOL_,USER_
but I don't know whether it is relevant.
As Liferay says, session replication is not recommended:
https://web.liferay.com/es/community/wiki/-/wiki/Main/Clustering
"Install an HTTP load balancer and make sure your load balancer is set to sticky session mode. It is not recommended to use session replication for clustering."
Why? Because it does not scale. In 99% of cases you can use a load balancer with session affinity; that is not the best way to scale a cluster, but it is better than replication.
The best approach would be to implement session management with JWT (JSON Web Token) or a similar mechanism, without Java sessions at all: the nodes then know nothing about sessions, and that is the only way to scale linearly.
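The stateless idea behind JWT can be sketched with a minimal HMAC-signed token: the server signs the payload instead of storing session state, so any node can verify a request without shared session storage. This sketch omits the JWT header/claims/expiry format; in practice you would use a JWT library, and all names here are illustrative.

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;

// Minimal sketch of a signed, stateless token (the core idea behind JWT).
public class TokenUtil {

    private static byte[] hmac(String data, String secret) {
        try {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(secret.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
            return mac.doFinal(data.getBytes(StandardCharsets.UTF_8));
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    // Issue a token: base64url(payload) + "." + base64url(signature).
    public static String sign(String payload, String secret) {
        Base64.Encoder enc = Base64.getUrlEncoder().withoutPadding();
        String body = enc.encodeToString(payload.getBytes(StandardCharsets.UTF_8));
        return body + "." + enc.encodeToString(hmac(body, secret));
    }

    // Verify a token: returns the payload if the signature matches, null otherwise.
    public static String verify(String token, String secret) {
        int dot = token.lastIndexOf('.');
        if (dot < 0) return null;
        String body = token.substring(0, dot);
        String sig = token.substring(dot + 1);
        String expected = Base64.getUrlEncoder().withoutPadding()
                .encodeToString(hmac(body, secret));
        // Constant-time comparison to avoid timing attacks.
        if (!MessageDigest.isEqual(expected.getBytes(StandardCharsets.UTF_8),
                                   sig.getBytes(StandardCharsets.UTF_8))) {
            return null;
        }
        return new String(Base64.getUrlDecoder().decode(body), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        String token = sign("user=42", "server-secret");
        System.out.println(verify(token, "server-secret")); // user=42
        System.out.println(verify(token, "wrong-secret"));  // null
    }
}
```

Because verification only needs the shared secret, every node in the cluster can authenticate requests independently, which is exactly what makes this approach scale linearly.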

Design considerations for J2EE webapp on Tomcat in Amazon WebServices

My project is looking to deploy a new J2EE application to Amazon's cloud. Elastic Beanstalk supports Tomcat apps, which seems perfect. Are there any particular design considerations to keep in mind when writing such an app that might differ from a standalone Tomcat on a server?
For example, I understand that the server is meant to scale automatically. Is this like a cluster? Our application framework tends to stick state in the HttpSession; is that a problem? Or when it says it scales automatically, does that just mean memory and CPU?
Automatic scaling on AWS is done via adding more servers, not adding more CPU/RAM. You can add more CPU/RAM manually, but it requires shutting down the server for a minute to make the change, and then configuring any software running on the server to take advantage of the added RAM, so that's not the way automatic scaling is done.
Elastic Beanstalk is basically a management interface for Amazon EC2 servers, Elastic Load Balancers and Auto Scaling Groups. It sets all that up for you and provides a convenient way of deploying new versions of your application easily. Elastic Beanstalk will create EC2 servers behind an Elastic Load Balancer and use an Auto Scaling configuration to add more servers as your application load increases. It handles adding the servers to the load balancer when they are ready to receive traffic, and removing them from the load balancer and deleting the extra servers when they are no longer needed.
For your Java application running on Tomcat you have a few options to handle horizontal scaling well. You can enable sticky sessions on the Load Balancer so that all requests from a specific user will go to the same server, thus keeping the HttpSession tied to the user. The main problem with this is that if a server is removed from the pool you may lose some HttpSessions and cause any users that were "stuck" to that server to be logged out of your application. The solution to this is to configure your Tomcat instances to store sessions in a shared location. There are Tomcat session store implementations out there that work with AWS services like ElastiCache (Redis) and DynamoDB. I would recommend using one of those, probably the Redis implementation if you aren't already familiar with DynamoDB.
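As one example of such a shared session store (Redisson's Tomcat session manager is one of several Redis-backed implementations; the config path below is a placeholder, and the exact jar/attributes depend on your Tomcat version), a sketch of the conf/context.xml entry:

```xml
<!-- Sketch of Tomcat's conf/context.xml: store HttpSessions in Redis
     (e.g. AWS ElastiCache) via the Redisson session manager, so a node
     leaving the pool does not log its users out. -->
<Context>
  <Manager className="org.redisson.tomcat.RedissonSessionManager"
           configPath="${catalina.base}/conf/redisson.yaml" />
</Context>
```

The referenced Redisson config file would point at the ElastiCache endpoint; every Tomcat node in the Auto Scaling Group shares the same store, so sticky sessions become an optimization rather than a requirement.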
Another consideration when moving a Java application to AWS is that you cannot use any tools or libraries that rely on multicast. You may not be using multicast for anything, but in my experience every Java app I've had to migrate to AWS relied on multicast for clustering, and I had to modify it to use a different clustering method.
Also, for a successful migration to AWS I suggest you read up a bit on VPCs, private IP versus public IP, and Security Groups. A solid understanding of those topics is key to setting up your network so that your web servers can communicate with your DB and cache servers in a secure and performant manner.

How do I scale a Java app with a REST API and a Database?

I have a typical stateless Java application which provides a REST API and performs updates (CRUD) against a PostgreSQL database.
However, the number of clients is growing, and I feel the need to:
- Increase redundancy, so that if one server fails another takes its place. For this I will probably need a load balancer?
- Increase response speed by not flooding the network and the CPU of just one server (but then, how does the load balancer itself avoid getting flooded?). Maybe I will need to distribute the database?
- Update my app seamlessly (I have seen a thing called Kubernetes do this): kill each redundant node one by one and immediately replace it with an updated version.
- Distribute the image files my app stores, which grow quickly in disk size.
- Back all of this up.
This is the diagram of what I have now (both Java app and DB are on the same server):
What is the best/correct way of scaling this?
Thanks!
Web Servers:
Run your app on multiple servers, behind a load balancer. Use AWS Elastic Beanstalk or roll your own solution with EC2 + Autoscaling Groups + ELB.
You mentioned a concern about "flooding" of the load balancer, but if you use Amazon's Elastic Load Balancer service it will scale automatically to handle whatever traffic you get so that you don't need to worry about this concern.
Database Servers:
Move your database to RDS and enable Multi-AZ failover. This will create a hot standby server that your database will automatically fail over to if there are issues with your primary server. Optionally, add read replicas to scale out your database's read capacity.
Start caching your database queries in Redis if you aren't already. There are plugins out there to do this with Hibernate fairly easily. This will take a huge load off your database servers if your app performs the same queries regularly. Use AWS ElastiCache or RedisLabs for your Redis server(s).
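The cache-aside pattern such a query cache implements can be sketched as follows. A ConcurrentHashMap stands in for the Redis client here (in production the get/put calls would go to Redis, e.g. via Jedis or Lettuce, with a TTL), and all names are illustrative.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Sketch of cache-aside: check the cache first, fall through to the
// database only on a miss, then keep the result for later requests.
public class QueryCache {
    private final Map<String, String> store = new ConcurrentHashMap<>();
    private int dbHits = 0; // counts how often we actually queried the "database"

    public String findUser(String id, Function<String, String> dbLookup) {
        return store.computeIfAbsent("user:" + id, key -> {
            dbHits++;                    // cache miss: hit the database once
            return dbLookup.apply(id);
        });
    }

    public int dbHits() { return dbHits; }

    public static void main(String[] args) {
        QueryCache cache = new QueryCache();
        Function<String, String> db = id -> "row-for-" + id; // pretend DB query
        cache.findUser("42", db);
        cache.findUser("42", db); // served from cache, no second DB hit
        System.out.println(cache.dbHits()); // 1
    }
}
```

This is essentially what a Hibernate second-level cache or query cache does for you behind the scenes: repeated identical queries stop reaching the database at all.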
Images:
Stop storing your image files on your web servers! That creates lots of scalability issues. Move those to S3 and serve them directly from S3. S3 gives you unlimited storage space, automated backups, and the ability to serve the images directly from S3 which reduces the load on your web servers.
Deployments:
There are so many solutions here that it just becomes a question about which method someone prefers. If you use Elastic Beanstalk then it provides a solution for deployments. If you don't use EB, then there are hundreds of solutions to pick from. I'd recommend designing your environment first, then choosing an automated deployment solution that will work with the environment you have designed.
Backups:
If you do this right you shouldn't have much on your web servers to backup. With Elastic Beanstalk all you will need in order to rebuild your web servers is the code and configuration files you have checked into Git. If you end up having to backup EC2 servers you will want to look into EBS snapshots.
For database backups, RDS will perform a daily backup automatically. If you want backups outside RDS you can schedule those yourself using pg_dump with a cron job.
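A sketch of such a scheduled dump (the backup path and connection details are placeholders; DATABASE_URL would need to be defined in the crontab's environment):

```shell
# Crontab entry: logical backup every night at 02:30, compressed with gzip.
# Note that % must be escaped as \% inside a crontab line.
30 2 * * * pg_dump "$DATABASE_URL" | gzip > /backups/db-$(date +\%F).sql.gz
```

Shipping the resulting dump files to S3 keeps the off-RDS backups off the web servers as well.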
For images, you can enable S3 versioning and multi-region replication.
CDN:
You didn't mention this, but you should look into a CDN. This will allow your application to be served faster while reducing the load on your servers. AWS provides the CloudFront CDN, and I would also recommend looking at CloudFlare.
