Java web application really slow - java

I am using Rackspace as a hosting provider, on their Cloud Server hosting with the 256 MB plan.
I am using Geronimo 2.2 to run my Java application.
The server starts up with no problem and loads Geronimo quite fast. However, deploying my web application takes forever, and once it is deployed, navigating through pages takes forever too.
I've been monitoring the server activity: the CPU is not very busy, but 60% of the memory is being used. Could this be the problem?
If so, what are my options? Should I upgrade this cloud server to something with more RAM, or change to a hosting provider that better suits my needs?
Edit:
I should note that even if I don't deploy my application, just having Geronimo loaded, I sometimes get a connection timeout when I try to shut Geronimo down.
Also, the database is on the same server as the application (however, I wouldn't say it's query-intensive).
Update:
After what matiu suggested, I tried running free -m, and this is the output I get:
                     total       used       free     shared    buffers     cached
Mem:                   239        232          6          0          0          2
-/+ buffers/cache:                229          9
Swap:                  509        403        106
This is a totally different result from what ps ux showed, which is where my earlier 60% figure came from.
I also ran an iostat check: iowait time is around 25%, and the device is constantly reading and writing.
Update:
I upgraded my hosting plan to 512 MB, and now it is up to speed! One thing I should note: I had forgotten about Java's Permanent Generation memory, which Geronimo also uses. So it turns out I did need more RAM, and more RAM did solve my problem (as expected). Yay.

I'm guessing you're running into 'swapping'.
As you'll know, Linux swaps some memory out to disk. This is fine for memory that's not accessed very often.
When Java starts eating heaps and heaps of memory, Linux starts:
Swapping memory block A out to disk to make space to read in block B
Reading/writing block B
Swapping block B back out to disk to make space for some other block
Since disk is thousands of times slower than RAM, as memory usage increases your machine grinds ever closer to a halt.
With 256 MB Cloud Servers you get 512 MB of Swap space.
Checking:
You can check whether this is the case with free -m; the '-/+ buffers/cache' row shows real memory usage once buffers and cache are excluded.
Next I'd check the disk I/O rate on the swap partition with 'iostat 5'. I would say a write rate of 300 blocks a second or more means you're almost dead in the water. You want to keep the swap partition's write rate below 50 blocks a second and its read rate below 500 blocks a second; if possible, both should be zero most of the time. Remember, disk is thousands of times slower than RAM.
You can check whether it's Java eating the RAM by running top and hitting Shift+M to order the processes by memory consumption.
If you want, you can disable the swap partition with swapoff -a, then open up the web console and hit the site a bit. You'll soon see error messages in the console like 'OOM Killed process xxx' (OOM stands for Out Of Memory); that's Linux trying to satisfy memory requests by killing processes. Once that happens, it's best to hard reboot.
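Putting those checks together, a quick diagnostic session might look like this (a sketch, assuming a standard Linux install; vmstat is an alternative to iostat that shows swap-ins and swap-outs directly):
$ free -m     # compare the Swap row's used column against its total
$ vmstat 5    # watch the si/so columns: sustained non-zero values mean active swapping
$ iostat 5    # watch blocks read/written per second on the swap device
$ top         # then press Shift+M to sort processes by resident memory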
Fixing:
If it's Java using the RAM, this link might help.
I think the easy fix would be just to upgrade the size of the Cloud Server.
You may find that a different Java runtime works better.
Running it in a 32-bit chroot may also use less RAM.

You should consider running a virtual dedicated Linux server, from something like Linode.
You'd have to worry about how to start a Java service, and about things like firewalls, but once you get it right you are, in effect, your own hosting provider, able to do anything a standalone physical Linux box can do.
As for memory, I wouldn't upgrade until you have evidence that you do not have enough. 60% used is less than 100% used...
Java normally assumes it can take whatever it is assigned. Meaning, if you give it a max of 200 MB, it thinks it's OK to take 200 MB even though it's using much less.
There is a way to make Java use less memory: the -Xincgc incremental garbage collector. It actually gives chunks of memory back to the system when it no longer needs them. This is a bit of a well-kept secret, really; you won't see many people point it out...
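For example, a startup line might look like this (a sketch: yourapp.jar and the 200 MB cap are placeholders, and -Xincgc only exists on older HotSpot JVMs; it was deprecated in Java 8 and later removed):
$ java -Xincgc -Xmx200m -jar yourapp.jar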

Based on my experience, memory and CPU load on VPSes are quite related: when the application server takes up all available memory, CPU usage starts to skyrocket, finally making the application inaccessible.
This is just a side effect, though; you really need to investigate where your problem originates!
If the memory consumption is very high, there can be multiple causes:
It's normal - maybe you have reached a point where all processes (application server, applications within it, background processes, daemons, the operating system, etc.) put together need that amount of memory. This is the least probable scenario.
Memory leak - can happen due to a bug in a framework or some library (not likely) or in your own code (possible). This can be monitored and solved.
A huge number of requests - each request takes both CPU and memory to process. Look at the correlation between requests per second and memory consumption; this, too, can be monitored and resolved.
If you are interested in CPU usage:
Again, monitor requests to your application. For a constant request rate, nothing extraordinary should happen.
One component may be exhausting most resources (maybe your database is installed on the same server and uses all the CPU power due to inefficient queries? The slow query log would help.)
As you can see, it's not a trivial task, but there is tool support that can help you out. I personally use JavaMelody and Probe.

Related

How to profile spring-boot application memory consumption?

I have a Spring Boot app that I suspect might have a memory leak. Over time the memory consumption seems to increase, reaching around 500M until I restart the application. After a fresh restart it takes something like 150M. The Spring Boot app should be a pretty stateless REST app, and there shouldn't be any objects left around after a request is completed. I would expect the garbage collector to take care of this.
Currently in production the Spring Boot app seems to use 343M of memory (RSS). I took a heap dump of the application and analysed it. According to the analysis, the heap dump is only 31M in size. So where is the missing 300M? How does the heap dump correlate with the actual memory the application is using? And how can I profile the memory consumption beyond the heap dump? If the memory used is not in the heap, then where is it? How do I discover what is consuming the memory of the Spring Boot application?
So where is the missing 300M?
A lot of research has gone into this, especially in trying to tune the parameters that control the non-heap. One result of this research is the memory calculator (binary).
You see, in Docker environments with a hard limit on the available amount of memory, the JVM will crash when it tries to allocate more memory than is available. Even with all that research, the memory calculator still has a slack option called "head-room", usually set to 5 to 10% of total available memory, in case the JVM decides to grab some more memory anyway (e.g. during intensive garbage collection).
Apart from "head-room", the memory calculator needs 4 additional input parameters to calculate the Java options that control memory usage.
total-memory - a minimum of 384 MB for a Spring Boot application; start with 512 MB.
loaded-class-count - about 19,000 for a recent Spring Boot application. This seems to grow with each Spring version. Note that this is a maximum: setting too low a value will result in all kinds of weird behavior (sometimes an "OutOfMemory: non-heap" exception is thrown, but not always).
thread-count - 40 for a "normal usage" Spring Boot web application.
jvm-options - see the two parameters below.
The "Algorithm" section mentions additional parameters that can be tuned, of which I found two worth the effort to investigate per application and specify:
-Xss set to 256 KB. Unless your application has really deep stacks (recursion), going from 1 MB to 256 KB per thread saves a lot of memory.
-XX:ReservedCodeCacheSize set to 64 MB. Peak "CodeCache" usage is often during application startup, and going from 192 MB to 64 MB saves a lot of memory that can be used as heap instead. Applications with a lot of active code at runtime (e.g. a web application with a lot of endpoints) may need more "CodeCache". If the "CodeCache" is too small, your application will use a lot of CPU without getting much done; this can also show up during startup, where the application takes a very long time to start. The "CodeCache" is reported by the JVM as a non-heap memory region, so it should not be hard to measure.
The output of the memory calculator is a bunch of Java options that all have an effect on what memory the JVM uses. If you really want to know where "the missing 300M" is, study and research each of these options in addition to the "Java Buildpack Memory Calculator v3" rationale.
# Memory calculator 4.2.0
$ ./java-buildpack-memory-calculator --total-memory 512M --loaded-class-count 19000 --thread-count 40 --head-room 5 --jvm-options "-Xss256k -XX:ReservedCodeCacheSize=64M"
-XX:MaxDirectMemorySize=10M -XX:MaxMetaspaceSize=121289K -Xmx290768K
# Combined JVM options to keep your total application memory usage under 512 MB:
-Xss256k -XX:ReservedCodeCacheSize=64M -XX:MaxDirectMemorySize=10M -XX:MaxMetaspaceSize=121289K -Xmx290768K
Besides the heap, you have thread stacks, metaspace, the JIT code cache, native shared libraries and off-heap storage (direct allocations).
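If you want the JVM itself to account for those regions, Native Memory Tracking can break them down (a sketch; app.jar is a placeholder, and NMT adds a small overhead, so use it for diagnosis rather than permanently):
$ java -XX:NativeMemoryTracking=summary -jar app.jar
$ jcmd <pid> VM.native_memory summary   # reports heap, thread stacks, metaspace, code cache, etc.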
I would start with thread stacks: how many threads does your application spawn at peak? (See the quick checks below.) Each thread is likely to allocate 1 MB for its stack by default, depending on Java version, platform, etc. With (say) 300 active threads (idle or not), you'll allocate 300 MB of stack memory.
Consider making all your thread pools fixed-size (or at least giving them reasonable upper bounds). Even if this proves not to be the root cause of what you observed, it makes the app's behaviour more deterministic and will help you isolate the problem.
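Two quick ways to measure this (a sketch; <pid> stands for your application's process id, and the thread count is approximate since a few VM-internal threads are reported differently):
$ jstack <pid> | grep -c 'java.lang.Thread.State'            # rough count of live JVM threads
$ java -XX:+PrintFlagsFinal -version | grep ThreadStackSize  # default stack size (in KB) on this JVM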
You can view the memory consumption of a Spring Boot app this way:
Package the Spring Boot app as a .jar file and execute it using java -jar springboot-example.jar
Now open a command prompt, type jconsole and hit Enter.
Note: you need to run the .jar file before opening jconsole.
jconsole will open a window listing the application you just started under the Local Process section.
Select springboot-example.jar and click the Connect button.
When prompted, choose the Insecure connection option.
Finally you can see the Overview (heap memory, threads, ...).
You can use JProfiler (https://www.ej-technologies.com/products/jprofiler/overview.html), remotely or locally, to monitor a running Java app's memory usage.
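If you cannot attach a profiler directly, plain JMX remoting also works for memory monitoring (a sketch; port 9010 is arbitrary, and disabling authentication/SSL like this is only sensible on a trusted network):
$ java -Dcom.sun.management.jmxremote.port=9010 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -jar app.jar
You can then point jconsole or a profiler at host:9010.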
You can use YourKit with IntelliJ, if that is your IDE, to troubleshoot memory-related issues in your Spring Boot app. I have used this before and it provides good insight into applications.
https://www.yourkit.com/docs/java/help/idea.jsp
Interesting article about memory profiling: https://www.baeldung.com/java-profilers

Java iSCSI I/O performance

I have an interesting problem/situation I'm dealing with in Eclipse. I'm running an application that processes lots of data, several tens of gigabytes' worth.
I have more than enough RAM for how this application is supposed to operate, and a very beefy CPU. My local disk is the first issue: while the application processes this data, I run out of space on the local disk due to temp files. I solved this by moving my temp directory to my NAS, which is mounted using iSCSI:
-Djava.io.tmpdir=E:\tmp
Here's the actual question:
When I switched to my iSCSI drive, I noticed more consistent memory consumption and quicker execution by the application. Even with the iSCSI drive being in a RAID 10 over a link-aggregated connection, I would have assumed that memory consumption would increase due to overhead and that application execution would slow down, but that isn't the case.
Are a reduced memory footprint and quicker application execution expected in this situation? If so, why? If not, where might I begin looking for the reason this happened?

Use large pages in JBoss

I was going through the JBoss manual, where I read about LARGEPAGES.
We have allocated more than 10 GB to the heap.
I want to know whether we really get any benefit from using this, or whether there are any side effects.
Is there any tool or utility that will let us know the pages being created? Basically I want to analyse page-related performance.
What I have done so far:
1) Changed the local security policy on the machine running JBoss to include the user account that runs JBoss. Do I need to do this where JBoss is running, or where the database is running?
2) Added -XX:+UseLargePages -XX:LargePageSizeInBytes=2m to the JBoss startup script.
Is there anything apart from the above that needs to be done or taken care of?
Machine details
a) Windows Server 8
b) Processor: Intel Xeon
In my opinion, a 10 GB heap for a web application server like JBoss is very large.
A big heap means longer full GC times, and as you know, a full GC freezes the world.
Is there any specific reason to use so much memory?
If you have a box with a lot of memory, I recommend running several JBoss instances on it and load balancing across them with a reverse proxy. It will be more efficient.
It depends on your OS how large pages are handled and whether they really help. For example, a processor up to Sandy Bridge can hold up to 8 large pages in the TLB cache, while the latest Haswell can hold up to 1024, which is obviously a big difference.
In some versions of Linux you have to reserve the large pages in advance, and there may be an advantage in pre-allocating this memory.
However, I am not aware of any big performance advantage to using large pages in the JVM, especially for larger heaps and pre-Haswell processors.
BTW, 10 GB isn't that big these days. The limit for CompressedOops is around 30 GB, and anything up to that is a medium-sized heap IMHO.
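One way to check what the JVM decided about large pages is to inspect the final flag values (a sketch; on Windows, pipe to findstr instead of grep):
$ java -XX:+UseLargePages -XX:+PrintFlagsFinal -version | grep -i largepage
If the JVM cannot actually obtain large pages it prints a startup warning and falls back to regular pages; on Windows that usually means the account running the JVM lacks the "Lock pages in memory" privilege, which is the local security policy change mentioned in the question.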

Page Reads - Memory or coding issue?

According to MSDN documentation, Page Reads/sec is a good way to determine whether the system's issue is a lack of memory or a coding issue/memory leak.
I'm looking for some advice from others and ways to go down the path to find out more.
I'm running the following on my machine (Windows 7, 64-bit, 4 GB RAM):
1. IntelliJ 10 (Tomcat for my web services and JSP for the front end)
2. Oracle 11g
I'm trying to identify what/where the problem is, so I created a JMeter script to stress the system a bit, creating data and searching the system for data.
Running Performance Monitor for a 10-minute period, my data is as follows:
Page Reads/sec (average): 26.841
Pages Input/sec ÷ Page Faults/sec = hard fault % = 5%
Page Faults/sec (average): 2300
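(For reference, the same counters can be captured from the command line with typeperf instead of the Performance Monitor GUI; a sketch, where -si 5 samples every 5 seconds:)
C:\> typeperf "\Memory\Page Reads/sec" "\Memory\Pages Input/sec" "\Memory\Page Faults/sec" -si 5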
MSDN says a sustained value over 5 Page Reads/sec is often a strong indicator of a memory shortage, but that's a sustained value, not an average. Mine spikes a few times, but over the long haul it stays between 0 and 3, with a few spikes that go really high.
I thought a memory leak might be the issue; however, after inspecting the code and checking that streams were closed (files/inputs/DB connections/etc.), I'm not so sure.
Does this data point more towards a lack of memory, a memory leak in the services, or a configuration issue?
Edit 1: Looking at heap
I currently can only access my development system, not production. I would need to coordinate with someone else to get access to the logs and to use jvisualvm on the system.
However, I did take a few heap dumps on the development system last week and today. Nothing crazy in class usage apart from String and char[]. Looking at the Monitor, my heap size in development is 463,863 KB, the max is around 480 MB, and used fluctuates between 415 and 450 MB.
Using Eclipse Memory Analyzer and the "suspected leak report" on the heap dump shows 3 problem suspects:
1. One instance of "org.apache.jasper.servlet.JspServlet" loaded by "org.apache.catalina.loader.StandardClassLoader" occupies 18.70%. The memory is accumulated in one instance of "java.util.concurrent.ConcurrentHashMap$Segment[]".
2. The thread org.apache.tomcat.util.threads.TaskThread # http-apr-8080-exec-19 keeps local variables with total size 12.30%. The memory is accumulated in one instance of "org.apache.tomcat.util.threads.TaskThread" loaded by "org.apache.catalina.loader.StandardClassLoader".
3. The thread org.apache.tomcat.util.threads.TaskThread # http-apr-8080-exec-24 keeps local variables with total size 10.72%. The memory is accumulated in one instance of "org.apache.tomcat.util.threads.TaskThread" loaded by "org.apache.catalina.loader.StandardClassLoader".
I was under the impression (could be wrong) that some of this is normal, as Tomcat is going to load a lot of stuff initially and keep it around.
One thing you can do is look in Task Manager and observe how much memory your Java-related processes are taking up.
Sum up the Private Working Set memory of, say, the top 10 memory-intensive processes (whether they are Java-related or not). If this value is close to or greater than 75% of your physical memory capacity, OR if you're running a 32-bit operating system with PAE disabled (you didn't state your operating system), it may be a physical memory limitation.
If the private working set sum is much less than your total physical memory, the problem is likely not related to that. It could also be heavy disk I/O related to the Oracle database; more memory would allow your system to cache disk pages in RAM, which dramatically speeds up reads, since RAM is 20 to 50 times faster than a hard disk.
If you're diagnosing an Oracle database, MSDN is a very poor source of guidance: SQL Server's architecture is too different to be relevant. You should read Oracle's Concepts manual instead.
The data dictionary in Oracle has views which provide various insights into the performance of the system. The one relevant to you right now is V$SGA_TARGET_ADVICE: it shows the predicted effect of increasing or decreasing the system's memory allocation.
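For example, from sqlplus (a sketch; the view and its columns are described in the Oracle reference documentation):
SQL> SELECT sga_size, sga_size_factor, estd_db_time, estd_physical_reads
       FROM v$sga_target_advice ORDER BY sga_size;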

Application servers (Java): should adding RAM to a server depend on each domain's -Xmx value?

We have the Glassfish application server running on Linux servers.
Each Glassfish installation hosts 3 domains. Each domain has a JVM configuration such as -Xms 1 GB and -Xmx 2 GB. That means if all three domains are running at max memory, the server should be able to allocate 6 GB in total to the JVMs.
With that math, each of our servers has 8 GB RAM (2 GB of buffer).
First of all, is this a good approach? I didn't think so, because when we analyzed memory utilization on this server over the past few months, it only went up to 1 GB.
Now there are requests to add an additional domain to these servers. Does that mean adding another 2 GB of RAM just to be safe, or, based on the trend, continuing with whatever memory the server has?
A few rules of thumb:
You really want the sum of your -Xmx values, plus the RAM needed for everything that must run on the box constantly (including the OS), to be lower than the physical RAM available. Otherwise, something will be swapped to disk, and when that something needs to run, things will slow down dramatically. If things constantly go in and out of swap, nothing will perform.
You might be able to get by with a lower -Xmx on some of your application servers (your question seems to imply some of your JVMs have more RAM allocated than they need). I believe Tomcat can start with as little as 64 MB, but many applications will run out of memory pretty quickly with that setup. Tuning your memory allocation and the garbage collector might be worth it. Frankly, I'm not very up to date with GC performance lately (I haven't had to tune anything to get decent performance in 4-5 years), so I don't know how valuable that will be. GC pauses used to be bad, and bigger heaps meant longer pauses...
Any memory you spare will not be "wasted": the OS will use it for disk cache, and that can be a benefit.
In any case, the answer is normally: you need to run tests. Being able to run stress tests is invaluable, and I suggest you spend some time writing one and running it. It will let you make educated decisions in this matter.
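For example, JMeter can run a test plan headless from the command line (a sketch; testplan.jmx is a placeholder for your own plan):
$ jmeter -n -t testplan.jmx -l results.jtl   # -n non-GUI, -t test plan, -l results log
Watching memory usage and GC behaviour while such a test runs will tell you far more than idle-time averages.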
