JHipster app memory consumption on Amazon EC2 - java

My application is just a bigger version of the default JHipster app; I don't even have a cache configured.
I deployed it successfully on an Amazon free tier t1.micro instance.
I experienced some random 503 errors. I checked the health of the instance: sometimes it said "no data sent", other times "93% of memory is in use". Now it's down (red).
I cloned the environment, then terminated the original one. I still get those various errors.
I deployed the WAR with the dev Spring profile, but I don't believe that alone is causing this much trouble.
Do I need to configure the Java memory usage? Why could the app be this memory hungry?
I posted the question on Stack Overflow because I care more about performance tuning of the deployed JHipster WAR, but if you think it's more of a problem with Amazon, please let me know why.
Thanks

Deploy the application on an instance with much more memory, e.g. a t2.large (8 GB).
An existing instance can be resized from the AWS console: "stop" the instance, open "Instance Settings" > "Instance Type", change it, and start it again.
Ensure that your application has a way to attach jconsole to it (apparently the development profile does, via JMX). See http://docs.oracle.com/javase/8/docs/technotes/guides/management/jconsole.html for more information on jconsole.
Run the application and monitor the nice graphs in jconsole.
See what the peak is over a few days of normal use. Also log on to the server with ssh and use free -m to see the system memory use (see http://www.linuxatemyram.com/ for a guide to interpreting the data).
Once you know the actual amount of RAM it uses, choose an appropriate instance size; see http://www.ec2instances.info/
You might need to adjust the -Xmx setting, as in the sketch below. I don't know the specifics with JHipster, but this is a common requirement for Java applications.
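A minimal sketch of what that looks like; the heap sizes and WAR name here are assumptions for a small (~600 MB) t1.micro and should be tuned against what jconsole reports:
Ex: java -Xms128m -Xmx256m -jar jhipster-app.war
Keeping -Xmx comfortably below the instance's physical RAM leaves headroom for the JVM's own overhead (permgen/metaspace, thread stacks, native buffers) and the operating system.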

Related

Profiling memory leak in a non-redundant uptime-critical application

We have a major challenge which has been stumping us for months now.
A couple of months ago, we took over the maintenance of a legacy application, where the last developer to touch the code left the company several years ago.
This application needs to be more or less always online. It was developed many years ago without staging and test environments, and without a redundant infrastructure setup.
We're dealing with a legacy Java EJB application running on Payara application server (Glassfish derivative) on an Ubuntu server.
Within the last year or two, it has been necessary to restart Payara approximately once a week, and the Ubuntu server once a month.
This is due to a memory leak which slows down the application over a period of around a week. The GUI becomes almost entirely non-responsive, but a restart of Payara fixes this, at least for a while.
However after each Payara restart, there is still some kind of residual memory use. The baseline memory usage increases, thereby reducing the time between Payara restarts. Around every month, we thus do a full Ubuntu reboot, which fixes the issue.
Naturally we want to find the memory leak, but we are unable to run a profiler on the server because it's resource intensive, and would need to run for several days in order to capture the memory leak.
We have also tried several times to dump the heap using the "gcore" command, but it always results in a segfault, and then we need to reboot the Ubuntu server.
What other options / approaches do we have to figure out which objects in the heap are not being garbage collected?
I would try to clone the server in some way to another system where you can perform tests without clients being affected. It could even be a system with fewer resources, if you want to trigger a resource-based problem.
To be able to observe the memory leak without having to wait for days, I would create a load test, maybe with Apache JMeter, to compress a week's worth of accesses into a day or even hours or minutes (I don't know if the base load is at a level where that is feasible for the server and network infrastructure). A sketch of running such a plan in headless mode follows.
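Once a test plan exists, it can be run without the GUI; the file names here are placeholders:
Ex: jmeter -n -t weekly-load.jmx -l results.jtl
(-n runs JMeter headless, -t names the test plan, -l records the results for later analysis.)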
First you could set up the load test to act as a "regular" mix of requests like those seen in the wild. Once you can trigger the loss of response, you can try to find out whether specific requests are more likely to be the cause of the leak than others. (It could also be that some basic component that is reused in nearly every call contains the leak, so you cannot single out "the" call with the leak.)
Then you can instrument this test server with a profiler.
For another approach (you could do it in parallel), you can also use a static code inspection tool like SonarQube to analyze the source code for typical patterns of memory leaks.
One other idea comes to my mind, but it comes with many preconditions: if you have recorded typical scenarios for the backend calls, if you have enough development resources, and if it is a stateless web application where each call can be inspected more or less individually, then you could try to set up partial integration tests where you simulate the incoming web calls, with database and file access but, if possible, without the application server, and record the increase of heap usage after each of the calls. Statistically you might be able to find the "bad" call this way. (This is something I would try as a very last option.)
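A minimal sketch of that last idea, assuming the recorded scenario can be replayed in-process; System.gc() is only a hint to the JVM, so the deltas are statistical evidence over many runs rather than exact measurements:

import java.util.concurrent.TimeUnit;

public final class HeapDelta {

    // Approximate heap currently in use.
    static long usedHeap() {
        Runtime rt = Runtime.getRuntime();
        return rt.totalMemory() - rt.freeMemory();
    }

    public static void main(String[] args) throws InterruptedException {
        System.gc();                        // best-effort collection before measuring
        TimeUnit.MILLISECONDS.sleep(200);   // give the collector a moment
        long before = usedHeap();

        simulateBackendCall();              // hypothetical stand-in for the replayed call

        System.gc();
        TimeUnit.MILLISECONDS.sleep(200);
        long after = usedHeap();
        System.out.printf("heap delta: %d bytes%n", after - before);
    }

    static void simulateBackendCall() {
        // placeholder: invoke the recorded scenario here
    }
}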
Apart from heap dumps, have you tried any real-time application performance monitoring (APM) tool like AppDynamics, or an open-source alternative like https://github.com/scouter-project/scouter?
An alternative approach would be to look for known issues in the existing stack, e.g. Payara issues like https://github.com/payara/Payara/issues/4098, or issues with the Ubuntu patch level you are currently running the app on.
You can use jmap, a tool bundled with the JDK, to check the memory. From the documentation:
jmap prints shared object memory maps or heap memory details of a given process or core file or a remote debug server.
For more information you can see the documentation, or see the Stack Overflow question How to analyse the heap dump using jmap in java.
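Typical invocations look like this; the PID and file name are placeholders:
Ex: jmap -heap 1234 (print a heap summary for process 1234)
Ex: jmap -dump:live,format=b,file=heap.hprof 1234 (write a binary dump of live objects)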
There is also a tool called jhat which can be used to analyse the Java heap. From the documentation:
The jhat command parses a java heap dump file and launches a webserver. jhat enables you to browse heap dumps using your favorite webbrowser. jhat supports pre-designed queries (such as 'show all instances of a known class "Foo"') as well as OQL (Object Query Language) - a SQL-like query language to query heap dumps. Help on OQL is available from the OQL help page shown by jhat. With the default port, OQL help is available at http://localhost:7000/oqlhelp/
See the jhat documentation, or How to analyze the heap dump using jhat.
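A typical use is to point jhat at the dump produced by jmap above and browse it on the default port (7000):
Ex: jhat -port 7000 heap.hprof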

Any idea of health check for Cloud Foundry (Java) application?

We have a Cloud Foundry (Java) application running on IBM Bluemix and we are looking for a way to health-check it. We mainly would like to monitor memory usage (both CF instance memory and JVM heap). We know that Auto-Scaling can do a similar thing, but we think it only keeps memory usage for the most recent 2 hours. (Please correct us if we are misunderstanding.) We would prefer to monitor memory usage for at least the most recent 24 hours. Any suggestions or comments would be appreciated. Thank you.
From a platform standpoint, you don't have a lot of options:
You can configure an HTTP-based health check for your app. Instead of just monitoring the port your application is listening to, this will actually send an HTTP request and check that it gets a valid response. If it does not, then your application will get automatically restarted by the platform. This does not keep track of any of the metrics that you listed. It's just purely a check to determine if your application is still alive.
Ex: cf set-health-check my-super-cool-app http --endpoint /health
https://docs.cloudfoundry.org/devguide/deploy-apps/healthchecks.html
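As an illustration only, here is a minimal sketch of the kind of endpoint such a check could hit; the class name, URL mapping, and JSON shape are made up, and it additionally reports JVM heap figures via the standard MemoryMXBean, which also speaks to the heap-monitoring part of the question:

import java.io.IOException;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet("/health")
public class HealthServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        // Standard JMX bean for heap statistics -- no extra dependencies needed.
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        resp.setStatus(HttpServletResponse.SC_OK);
        resp.setContentType("application/json");
        resp.getWriter().printf(
                "{\"status\":\"UP\",\"heapUsed\":%d,\"heapMax\":%d}%n",
                heap.getUsed(), heap.getMax());
    }
}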
You can connect to the firehose and pull metrics. This will include the container metrics of CPU, RAM & disk usage. The firehose is just a method to obtain this information, though; the whole problem of storage and pretty graphs is one that you'd still need to solve.
The firehose plugin is the best example of doing this: https://github.com/cloudfoundry-community/firehose-plugin
Beyond the platform itself, you might want to look at an APM (application performance monitoring) tool. I'm not going to list examples here; there are many which you can find with a quick Internet search. Some even integrate nicely with the Java buildpack; those are listed here.
This is probably the solution you want as it will give you all sorts of metrics, including the ones you mentioned above.
Hope that helps!

How to catch OutOfMemory errors on Amazon EBS (Elastic Beanstalk)

Here's a tricky one for ya: we have a Java web application deployed on Tomcat web servers on Amazon Elastic Beanstalk, and we believe we have a memory leak because the JVM seems to crash every night with an OutOfMemoryError.
The problem is that after the crash, EBS automatically scraps the old EC2 instance and starts a fresh one. All the logs and info get scrapped too...
I am now developing a custom CloudWatch metric to monitor the memory of the JVM (you would think there should be a prepared one...), but that won't help me generate heap dumps.
Has anyone gone through a similar problem and knows how to catch these errors on EBS?
This certainly sounds like unusual EC2 (not EBS) instance behaviour. It's interesting that if Tomcat falls over, the machine instance gets affected (in terms of stopping or terminating).
This is what I would suggest to diagnose:
Get a running instance ready to examine / play with.
Take a look at "Termination Protection" - is this set to "enabled" or not? That could explain the "scrapping" part of your problem (if by scrapping you mean the instance terminates and is removed). You can find this in the properties of your EC2 instance using the AWS console.
Take a look at the Java memory settings your Tomcat server is configured with. Perhaps the maximum (-Xmx) is bigger than the memory the virtual machine has!? If so, perhaps Tomcat is literally running the machine out of memory, which could explain some of the EC2 response to your out-of-memory condition. I assume you mean "stopped" rather than "scrapped", otherwise how would you know you are getting an out-of-memory error?
If you manually kill the tomcat/java process on a working instance, does the instance stay operational (or do you get booted off and the instance gets stopped)? If something happens simply because you stop Tomcat, it means some monitoring process is kicking in and taking down the machine explicitly.
Use -XX:+HeapDumpOnOutOfMemoryError to produce a dump file - this will help you work out where your leak is and hopefully fix the root cause. (Note the +: written with a -, as -XX:-HeapDumpOnOutOfMemoryError, the option is disabled.)
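For a Tomcat deployment, these flags typically go into CATALINA_OPTS; the dump path here is an assumption:
Ex: export CATALINA_OPTS="-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/tmp"
Pointing -XX:HeapDumpPath at storage that survives (or is synced off) the instance matters here, since Elastic Beanstalk may replace the instance after the crash.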
Good luck. Hope that helps.
Consider a log collection service like Sumologic. The log files you specify are collected and available for analysis online, so even if your EC2 instances get replaced, you can do forensics to see what happened to them.

Tomcat 6 Web Application Eating Up Memory Over Time

I have a Grails application that is deployed on a Tomcat 6 server. The application runs fine for a while (a day or two), but slowly eats up more and more memory over time until it grinds to a halt and then surpasses the maximum value. Once I restart the container, everything is fine. I have been verifying this with the Grails JavaMelody plugin as well as the Application Info plugin, but I need help in determining what I should be looking for.
It sounds like an application leak, but to my knowledge there is no access to any unmanaged resources. Also, the Hibernate cache seems to be in check. It looks like if I run the garbage collector I get a decent chunk of memory back, but I don't know how to do this sustainably.
So:
How can I use these (or other) monitoring tools to figure out where the problem is?
Is there any other advice that could help me?
Thanks so much.
EDIT
I am using Grails 1.3.7 and I am using the Quartz plugin.
You can use the VisualVM application in the Oracle JDK to attach to the Tomcat instance while it is running (if you are already using an Oracle JVM) and inspect what goes on. The memory profiler can tell you quite a bit and point you in the right direction. You are most likely looking for either objects that grow, or types of objects that get allocated more and more.
If you need more than the free VisualVM application can tell you, a commercial profiler may be useful.
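For a local Tomcat running under the same user, VisualVM lists the process automatically once you start it:
Ex: jvisualvm
For a remote Tomcat, you would first expose JMX via the standard system properties (the port here is an arbitrary choice, and disabling auth/SSL is only acceptable on a locked-down network):
Ex: export CATALINA_OPTS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=9010 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false"
and then add a JMX connection to host:9010 in VisualVM.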
Depending on your usage of Quartz, it may be directly related to a known memory leak with the Quartz plugin involving persistence and thread-locals. You may want to double-check and see if this applies to your situation.

How to debug Java memory errors?

There is a Java Struts application running on Tomcat that has some memory errors. Sometimes it becomes slow and hoards all of the memory of Tomcat until it crashes.
I know how to find and repair "normal code errors", using tests, debugging, etc., but I don't know how to deal with memory errors. (How can I reproduce them? How can I test for them? What are the places in the code where it is most common to create a memory error?)
In one question: where can I start? Thanks
EDIT:
A snapshot sent by the IT Department (I don't have direct access to the production application)
Use one of the many "profilers". They hook into the JVM and can tell you things like how many new objects are being created per second, what type they are, etc.
Here's just one of many: http://www.ej-technologies.com/products/jprofiler/overview.html
I've used this one and it's OK.
http://kohlerm.blogspot.com/
It is quite a good intro to finding memory leaks using the Eclipse Memory Analyzer.
If you prefer video tutorials, try YouTube; although it is Android-specific, it is very informative.
If your application becomes slow, you could create a heap dump and compare it to another heap dump created when the system is in a healthy condition. Look for differences in the larger data structures.
You should run it under a profiler (JProfiler or YourKit, for example) for some time and watch the memory/resource usage. Also try taking thread dumps.
There are a couple of options. A profiler is one of them; another is to dump the Java heap to a file and analyze it with a dedicated tool (e.g. the IBM JVM provides a very good tool called Memory Analyzer that presents a very detailed report of the allocated memory at the time of a JVM crash - http://www.ibm.com/developerworks/java/jdk/tools/memoryanalyzer/).
A third option is to start your server with the JMX server enabled and connect to it via JConsole; with this approach you can monitor memory usage/allocation at runtime. JConsole is provided with the standard Sun JDK under the bin directory (here you can find how to connect to Tomcat via JConsole: Connecting remote tomcat JMX instance using jConsole).
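Assuming the server was started with the standard remote-JMX system properties (-Dcom.sun.management.jmxremote.port=9010 and the related authenticate/ssl flags; the port is an arbitrary choice), connecting is then just:
Ex: jconsole yourhost:9010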
