Page Reads - Memory or coding issue? - java

According to the MSDN documentation, Page Reads/sec is a good counter for determining whether a system's problem is a lack of physical memory or a coding issue/memory leak.
I'm looking for some advice from others and ways to go down the path to find out more.
I'm running the following on my machine (Windows 7, 64-bit, 4 GB RAM):
1. IntelliJ IDEA 10 (Tomcat for my web services & JSP for the front end)
2. Oracle 11g
I'm trying to identify what/where the problem might be, so I created a JMeter script to stress the system a bit by creating data and searching the system for data.
Running Performance Monitor for a 10-minute period, my data is as follows:
Page Reads/Sec (Average): 26.841
Pages Input/sec ÷ Page Faults/sec (hard fault %) = 5%
Page Faults/sec (Avg) = 2300
MSDN says a sustained value over 5 Page Reads/sec is often a strong indicator of a memory shortage, but that's a sustained value, not an average. The counter spikes a few times, but over the long haul it stays between 0 and 3, with a few spikes that get really large.
I thought a memory leak might be the issue; however, after inspecting the code and checking that streams were closed (files/input streams/DB connections/etc.), I'm not so sure.
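For reference, this is the kind of pattern I checked for everywhere (a minimal sketch, assuming Java 7 try-with-resources; the DAO class and query are just placeholders):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

public class CustomerDao {
    private final DataSource dataSource;

    public CustomerDao(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    // Connection, statement and result set are all closed automatically,
    // even if an exception is thrown, so nothing should leak here.
    public int countCustomers() throws SQLException {
        try (Connection con = dataSource.getConnection();
             PreparedStatement ps = con.prepareStatement("SELECT COUNT(*) FROM customers");
             ResultSet rs = ps.executeQuery()) {
            return rs.next() ? rs.getInt(1) : 0;
        }
    }
}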
Does this data point more towards a lack of memory, a memory leak in the services, or a configuration issue?
Edit 1: Looking at heap
I currently can only access my development system, not production. I would need to coordinate with someone else to get access to the logs and run jvisualvm on that system.
However, I did take a few heap dumps on the development system last week and today. Nothing crazy in class usage apart from String & char[]. Looking at the Monitor, my heap size in development is 463,863 and the max is around 480. Used fluctuates between 415 and 450.
Using Eclipse Memory Analyzer and the "suspected leak report" on the heapdump shows 3 problem suspects.
1. One instance of "org.apache.jasper.servlet.JspServlet" loaded by "org.apache.catalina.loader.StandardClassLoader" occupies 18.70%. The memory is accumulated in one instance of "java.util.concurrent.ConcurrentHashMap$Segment[]".
2. The thread org.apache.tomcat.util.threads.TaskThread # http-apr-8080-exec-19 keeps local variables with total size 12.30%. The memory is accumulated in one instance of "org.apache.tomcat.util.threads.TaskThread" loaded by "org.apache.catalina.loader.StandardClassLoader".
3. The thread org.apache.tomcat.util.threads.TaskThread # http-apr-8080-exec-24 keeps local variables with total size 10.72%. The memory is accumulated in one instance of "org.apache.tomcat.util.threads.TaskThread" loaded by "org.apache.catalina.loader.StandardClassLoader".
I was under the impression (could be wrong) that some of this is normal, as Tomcat is going to load a lot of stuff initially and keep it around.

One thing you can do is look in Task Manager and observe how much memory your Java-related processes are taking up.
Sum up all the Private Working Set memory of, say, the top 10 memory-intensive processes (whether they are Java-related or not). If this value is close to or greater than 75% of your physical memory capacity, or if you were running a 32-bit operating system with PAE disabled (not the case here, since you're on 64-bit Windows 7), it may be a physical memory limitation.
If your private working set sum is much less than your total physical memory, it's likely not a problem related to that. It could also be heavy disk I/O related to the Oracle database; more memory will allow your system to cache disk pages in RAM, which dramatically speeds up reads because they are served from RAM instead of from disk (and RAM is 20 to 50 times faster than a hard disk).

If you're diagnosing an Oracle database, MSDN is a very poor source of guidance: SQL Server's architecture is too different to be relevant. You should read the Concepts manual.
The data dictionary in Oracle has views which provide various insights into the performance of the system. The one which is relevant to you right now is V$SGA_TARGET_ADVICE; this will show you the predicted effect of increasing or decreasing the system's memory. Find out more.
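If you want to pull that view from Java rather than SQL*Plus, here is a rough sketch over plain JDBC (connection details are placeholders, and you should verify the column names against the documentation for your Oracle version):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SgaAdvice {
    public static void main(String[] args) throws Exception {
        // Placeholder connection string and credentials; requires the Oracle JDBC driver.
        try (Connection con = DriverManager.getConnection(
                 "jdbc:oracle:thin:@//localhost:1521/ORCL", "system", "password");
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery(
                 "SELECT sga_size, sga_size_factor, estd_db_time, estd_physical_reads"
                 + " FROM v$sga_target_advice ORDER BY sga_size")) {
            while (rs.next()) {
                System.out.printf("SGA %d (factor %.2f): estd DB time %d, estd physical reads %d%n",
                        rs.getLong(1), rs.getDouble(2), rs.getLong(3), rs.getLong(4));
            }
        }
    }
}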

Related

64-bit JVM limited to 300GB of memory?

I am attempting to run a Java application on a cluster computing environment (IBM LSF running CentOS release 6.2 Final) that can provide me with up to 1TB of RAM space.
I could create a JVM with up to 300GB of maximum memory (-Xmx), but I need more than that (I can provide details if requested).
However, it seems to be impossible to create a JVM with more than 300GB of maximum memory using the -Xmx option. To be more specific, I get the classic error message:
Error occurred during initialization of VM.
Could not reserve enough space for object heap.
The details of my (64-bit) JVM are below:
OpenJDK Runtime Environment (IcedTea6 1.10.6) (rhel-1.43.1.10.6.el6_2-x86_64)
OpenJDK 64-Bit Server VM (build 20.0-b11, mixed mode)
I've also tried with a Java 7 64-bit JVM but I've had exactly the same problem.
Moreover, I tried to create a JVM to run a HelloWorld.jar, but still JVM creation fails if you ask for more than -Xmx300G, so I don't think it has anything to do with the specific application.
Does anyone have any idea why I cannot create a JVM with more than 300G of max memory?
Can anyone please suggest a solution/workaround?
I can think of a couple of possible explanations:
Other applications on your system are using so much memory that there isn't 300GB available right now.
There could be a resource limit on the per-process memory size. You can check this using ulimit. (Note that according to this bug, you will get the error message if the per-process resource limit stops the JVM from allocating the heap regions.)
It is also possible that this is an "overcommit" issue; e.g. if your application is running in a virtual machine and the system as a whole cannot meet the demand because there is too much competition from other virtual machines.
A couple of the other ideas suggested are (IMO) unlikely:
Switching the JRE is unlikely to make any difference. I've never heard of or seen arbitrary memory limits in specific 64-bit JVMs.
It is unlikely to be due to not having enough contiguous memory. Certainly contiguous physical memory is not required. The only possibility might be contiguous space on the swap device, but I don't recall that being an issue for typical Linux OSes.
Can anyone please suggest a solution/workaround?
Check the ulimit.
Write a tiny C program that attempts to malloc lots of memory, and see how much it can allocate before it fails.
Ask the system (or hypervisor) administrator for help.
(edited, see added section on swap space)
SHMMAX and SHMALL
Since you are using CentOS, you may have run into a similar issue with the SHMMAX and SHMALL kernel settings as described here for configuring the Oracle DB. That same link includes an example calculation for getting and setting the correct SHMALL value.
Contiguous memory
Some users have reported that not enough contiguous memory was available; others have said it is irrelevant.
I am not certain whether the JVM on CentOS requires a contiguous block of memory. According to SAS, fragmented memory can prevent your JVM from starting up with a large -Xmx or -Xms memory setting, but other claims on the internet say it doesn't matter. I tried to prove or disprove that claim on my 48GB Windows workstation, but managed to start the JVM with an initial and max setting of 40GB. I am pretty sure that no contiguous block of that size was available, but JVMs on different OSes may behave differently, because memory management can differ per OS (e.g., Windows typically hides the physical addresses from individual processes).
Finding the largest contiguous memory block
Use /proc/meminfo to find the largest contiguous memory block available; see the value under VmallocChunk. Here's a guide and explanation of all the values. If the value you see there is smaller than 300GB, try a value that falls just under the value of VmallocChunk.
However, this number is usually higher than the physically available memory (because it is a virtual memory value), so it may give you a false positive. It is the value you can reserve, but once you start using it, it may require swapping. You should therefore also check the MemFree and Inactive values. Conversely, you can also look at the whole list and see which values do not surpass 300GB.
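A small sketch for pulling those fields out of /proc/meminfo programmatically; note the field is spelled VmallocChunk in the file, and exact field names can vary by kernel:

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class MemInfo {
    public static void main(String[] args) throws IOException {
        // Fields of interest, as they appear in /proc/meminfo on most kernels.
        String[] fields = {"MemFree", "Inactive", "VmallocChunk"};
        try (BufferedReader reader = new BufferedReader(new FileReader("/proc/meminfo"))) {
            String line;
            while ((line = reader.readLine()) != null) {
                for (String field : fields) {
                    if (line.startsWith(field + ":")) {
                        System.out.println(line.trim());
                    }
                }
            }
        }
    }
}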
Other tuning options you can check for 64 bit JVM
I am not sure why you seem to hit a memory limit at 300GB. For a moment I thought you might have hit a maximum number of pages. With the default page size of 4 kB, 300GB gives 78,643,200 pages, which doesn't look like a well-known magical number. If, for instance, 2^24 were the maximum, then 16,777,216 pages, or 64GB, would be your theoretical allocatable maximum.
However, suppose for the sake of argument that you need larger pages (which, as it turns out, is better for the performance of large-memory Java applications); you should then consult this manpage on JBoss, which explains how to use -XX:+UseLargePages and set kernel.shmmax (there it is again), vm.nr_hugepages and vm.huge_tlb_shm_group (not sure the latter is required).
Stress your system
Others have suggested this already as well. To find out whether the problem lies with the JVM or with the OS, you should stress test it. One tool you could use is Stresslinux. In this tutorial you'll find some options you can use. Of particular interest to you is the following command:
stress --vm 2 --vm-bytes 300G --timeout 30s --verbose
If that command fails, or locks your system, you know that the OS is limiting the use of that amount of memory. If it succeeds, we should try to tweak the JVM such that it can use the available memory.
EDIT Apr6: check swap space
It is not uncommon for systems with very large amounts of internal memory to use little or no swap space. For many applications this may not be a problem, but the JVM requires the available swap space to be larger than the requested memory size. According to this bug report, the JVM will try to increase the swap space itself; however, as some answers in this SO thread suggest, it may not always be capable of doing so.
Hence: check the currently available swap space with cat /proc/swaps or free and, if it is smaller than 300GB, follow the instructions on this CentOS manpage to increase the swap space for your system.
Note 1: we can deduce from bug report #4719001 that a contiguous block of available swap space is not a necessity. But if you are unsure, remove all swap space and recreate it, which should remove any fragmentation.
Note 2: I have seen several posts like this one reporting 0MB of swap space and still being able to run the JVM. That is probably because the JVM increases the swap space itself. It still doesn't hurt to increase the swap space by hand to find out whether that fixes your issue.
Premature conclusion
I realize that none of the above is an out-of-the-box answer to your question. I hope it gives you some pointers to what you can try to get your JVM working, though. You might also try other JVMs if the problem turns out to be a limit of the JVM you are currently using, but from what I have read so far, no such limit should be imposed for 64-bit JVMs.
That you get the error right at initialization of the JVM leads me to believe that the problem is not with the JVM but with the OS not being able to satisfy the reservation of the 300GB of memory.
My own tests showed that the JVM can access all virtual memory and doesn't care about the amount of physical memory available. It would be odd if the virtual memory were lower than the physical memory, but the VmallocChunk value should give you a hint in that direction (it is usually much larger).
If you have a look at the FAQ section of the Java HotSpot VM, it's mentioned that on 64-bit VMs there are only 64 address bits to work with, and hence the maximum Java heap size is dependent on the amount of physical memory and swap space present on the system.
If you calculate it theoretically, you can address 18,446,744,073,709,551,616 bytes (2^64), but the limitations above apply.
You have to use the -Xmx option to define the maximum heap size for the JVM; by default, Java uses 64 MB + 30% = 83.2 MB on 64-bit JVMs.
I tried the command below on my machine and it appeared to work fine.
java -Xmx500g com.test.TestClass
I also tried to define the maximum heap in terabytes, but that didn't work.
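A simple cross-check (just a sketch; the class name is made up) is to print Runtime.maxMemory() from a process started with the large -Xmx, to confirm how much heap the JVM actually accepted:

public class MaxHeapCheck {
    public static void main(String[] args) {
        // Reports the heap ceiling the JVM actually accepted; typically slightly below -Xmx.
        long maxBytes = Runtime.getRuntime().maxMemory();
        System.out.println("Max heap: " + maxBytes / (1024L * 1024 * 1024) + " GB");
    }
}

Run it with, for example, java -Xmx500g MaxHeapCheck.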
Run ulimit -a as the JVM process's user and verify that your kernel isn't limiting your maximum memory size. You may need to edit /etc/security/limits.conf.
According to this discussion, LSF does not pool node memory into a single shared space. You are using something else for that. Read that something's documentation, because it is possible it cannot do what you are asking it to do. In particular, it may not be able to allocate a single contiguous region of memory that spans all the nodes. Usually that's not necessary, as an application will make many calls to malloc. But the JVM, to simplify things for itself, wants to allocate (or reserve) a single contiguous region for the entire heap by effectively calling malloc just once. Or it could be something else related to whatever you are using to emulate a giant shared memory machine.

What is the maximum object size before GAE throws Heap overflow error

I never thought of this until now. I have been using GAE for quite some time already, but I never thought about its memory model; since the JVM is provided for me, I can't say which JVM or which version of the JVM they are using.
So my question would be: when will GAE throw a heap overflow error? Or at least break my app, or whatever GAE does in that case. I don't know.
For example, suppose I push a String to its limits and store data with a size of 2^31 - 1.
Design-wise: I know this is crazy, but the idea is the same as having millions or billions of users pushing data into your GAE application; your application's job is then to process it (serialize/deserialize) before persisting it.
The sum of those heaps will be huge. They might not all happen at the same time, but there will surely be a point where heap use becomes huge.
Is this something a GAE application has to be concerned with?
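Back-of-the-envelope (my own arithmetic, not anything from the GAE docs): a single String that large already needs roughly 4 GB for its backing array alone:

public class StringSizeEstimate {
    public static void main(String[] args) {
        long chars = Integer.MAX_VALUE;  // 2^31 - 1 characters
        long bytes = chars * 2L;         // UTF-16: 2 bytes per char in the backing char[]
        System.out.println(bytes / (1024 * 1024) + " MB just for the char[]"); // ~4 GB
    }
}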
You can read more about Adjusting Application Performance for your running application based on your needs; the same link shows the memory and CPU that each frontend class provides.
You need to code it so you will never run out of memory, independent of user load. If you allow multithreading, your instance might get reused, and you need to take that into account. If memory, CPU, or queue load gets too high, App Engine will automatically launch more instances, each with its own RAM as specified in the app settings (128 MB, 256 MB, etc.).
An application on GAE is distributed across many, many instances. If you're handling millions of simultaneous users, you'll probably be running thousands of instances. Each instance has its own RAM (stack + heap space).
Your total heap may be huge, but at any one point you only need to manage the heap for the requests running on a particular instance, which can only handle a fairly limited number of requests at once. For the memory sizes of the different instance types, refer to:
https://developers.google.com/appengine/docs/adminconsole/performancesettings?hl=en
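As an illustration of keeping per-request heap usage bounded, here is a rough servlet sketch (the class name is made up and this is plain Servlet API, nothing GAE-specific) that streams the request body in small chunks instead of buffering the whole payload on the heap:

import java.io.IOException;
import java.io.InputStream;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class StreamingUploadServlet extends HttpServlet {
    private static final int CHUNK_SIZE = 8 * 1024; // 8 KB buffer per request

    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        byte[] buffer = new byte[CHUNK_SIZE];
        long total = 0;
        InputStream in = req.getInputStream();
        int read;
        while ((read = in.read(buffer)) != -1) {
            // Process or persist each chunk here instead of accumulating
            // the whole payload on the heap.
            total += read;
        }
        resp.getWriter().println("Processed " + total + " bytes");
    }
}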

Java web application really slow

I am using Rackspace as a hosting provider, using their Cloud Server hosting with a 256 MB plan.
I am using Geronimo 2.2 to run my Java application.
The server starts up with no problem and loads Geronimo quite fast. However, when I deploy my web application, it takes forever, and once it is deployed, it takes forever to navigate through the pages.
I've been monitoring the server activity, the CPU is not so busy, however, 60% of the memory is being used up. Could this be the problem?
If so, what are my options? Should I consider upgrading this cloud server to something with more RAM, or changing a host provider to better suit my needs?
Edit:
I should note that, even if I don't deploy my application and just have Geronimo loaded, I sometimes get a connection timeout when I try to shut down Geronimo.
Also, the database is on the same server as the application (however, I wouldn't say it's query-intensive).
Update:
After what @matiu suggested, I tried running free -m, and this is the output I get:
                   total   used   free  shared  buffers  cached
Mem:                 239    232      6       0        0       2
-/+ buffers/cache:           229      9
Swap:                509    403    106
This was a totally different result from running ps ux, which is how I got my previous 60%.
I also did an iostat check: about 25% iowait time, and the device is constantly reading and writing.
update:
Upgraded my hosting to 512 MB; now it is up to speed! Something I should note: I had forgotten about Java's permanent generation memory, which Geronimo also uses. So it turns out I did need more RAM, and more RAM did solve my problem. (As expected.) Yay.
I'm guessing you're running into 'swapping'.
As you'll know Linux swaps out some memory to disk. This is great for memory that's not accessed very much.
When Java starts eating heaps and heaps, linux starts:
Swapping memory block A out to disk to make space to read in block B
Reading/writing block B
Swapping block B to disk to make space for some other block.
As disk is 1000s of times slower than RAM, as memory usage increases your machine grinds closer and closer to a halt.
With 256 MB Cloud Servers you get 512 MB of Swap space.
Checking:
You can check if this is the case with free -m .. this page shows how to read the output:
Next I'd check with 'iostat 5' to see what the disk IO rate on the swap partition is. I would say a write rate of 300 or more means you're almost dead in the water. I'd say you'd want to keep the write rate of the swap partition down below 50 blocks a second and the read rate down below 500 blocks a second .. if possible both should be zero most of the time. Remember disk is 1000s of times slower than RAM.
You can check if it's Java eating the ram by running top and hitting shift+m to order the processes by memory consumption.
If you want .. you can disable the swap partition with swapoff -a .. then open up the web console, and hit the site a bit .. you'll soon see error messages in the console like 'OOM Killed process xxx' (OOM is for Out of Memory I think). If you see those that's linux trying to satisfy memory requests by killing processes. Once that happens, it's best to hard reboot.
Fixing:
If it's Java using the RAM .. this link might help.
I think the easy fix would be just to upgrade the size of the Cloud Server.
You may find that a different Java RTE works better.
If you run it in a 32 bit chroot it may use less RAM.
You should consider running a virtual dedicated Linux server, from something like linode.
You'd have to worry about how to start a Java service and things like firewalls, etc., but once you get it right, you are in effect your own hosting provider, allowing you to do anything an actual standalone Linux box can do.
As for memory, I wouldn't upgrade until you have evidence that you do not have enough. 60% being used up is less than 100% used up...
Java normally assumes that it can take whatever is assigned to it. Meaning, if you give it a max of 200 MB, it thinks it's OK to take 200 MB even though it's using much less.
There is a way to make Java use less memory, by using the -Xincgc incremental garbage collector. It actually ends up giving chunks of memory back to the system when it no longer needs them. This is a bit of a kept secret really. You won't see anyone point this out...
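If you want to see the difference between what Java has grabbed and what it is actually using, a quick illustrative sketch with the standard Runtime API:

public class HeapFootprint {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long max = rt.maxMemory();               // the -Xmx ceiling
        long committed = rt.totalMemory();       // what the JVM has taken from the OS
        long used = committed - rt.freeMemory(); // what is actually in use
        System.out.printf("max=%dMB committed=%dMB used=%dMB%n",
                max / (1024 * 1024), committed / (1024 * 1024), used / (1024 * 1024));
    }
}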
Based on my experience, memory and CPU load on VPSes are quite related. Meaning, when the application server takes up all the available memory, CPU usage starts to skyrocket, finally making the application inaccessible.
This is just a side effect though; you really need to investigate where your problem originates!
If the memory consumption is very high, then you can have multiple causes:
It's normal - maybe you have reached a point where all processes (application server, applications within it, background processes, daemons, operating system, etc.) put together need that huge amount of memory. This is the least probable scenario.
Memory leak - can happen due to bug in framework or some library (not likely) or your own code (possible). This can be monitored and solved.
Huge amount of requests - each request will take both CPU and memory to be processed. You can have a look at the correlation between requests per second and memory consumption, meaning, it can be monitored and resolved.
If you are interested in CPU usage:
Again, monitor requests to your application. For constant count of requests - nothing extraordinary should happen.
One component is exhausting most resources (maybe your database is installed on the same server and uses all the CPU power due to inefficient queries? The slow query log would help.)
As you can see, it's not a trivial task, but you have tool support that can help you out. I personally use JavaMelody and Probe.

Java Memory Occupation

I made a server-side application that uses 18 MB of non-heap and around 6 MB of heap out of a max of 30 MB. I set the heap limits with -Xms and -Xmx. The problem is that when I run the program on an Ubuntu server it takes around 170 MB instead of 18 + 30, or at most about 100 MB. Does anyone know how to get the VM to use only 100 MB?
The JVM uses the heap plus other memory such as thread stacks and shared libraries. The shared libraries can be relatively large, but they don't use real memory unless they are actually used. If you run multiple JVMs, the shared libraries are shared between them, but you cannot see this in the process information.
In a modern PC, 1 GB of memory costs around $100, so reducing every last MB may not be seen as being as important as it used to be.
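To see how the footprint splits between heap and non-heap from inside the process, a small illustrative sketch using the standard java.lang.management API (note that thread stacks and shared libraries are not included in either figure):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class MemorySplit {
    public static void main(String[] args) {
        MemoryMXBean bean = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = bean.getHeapMemoryUsage();
        MemoryUsage nonHeap = bean.getNonHeapMemoryUsage();
        // "committed" is what the JVM has actually reserved from the OS.
        System.out.println("Heap     used/committed: "
                + heap.getUsed() / 1024 + " / " + heap.getCommitted() / 1024 + " KB");
        System.out.println("Non-heap used/committed: "
                + nonHeap.getUsed() / 1024 + " / " + nonHeap.getCommitted() / 1024 + " KB");
    }
}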
In response to your comment
I have made some tests with JConsole and VisualVM: Xmx 40 MB, Xms 25 MB. The problem is that I am restricted to 512 MB since it's a VPS and I can't pay for more right now. The other thing is that with 100 MB each I could have at least 3 processes running.
The problem is, you are going about it the wrong way. Don't try to get your VM super small so you can run 3 VMs. Combine everything into one VM. If you have 512 MB of memory, then make one VM with 256 MB of heap and let it do everything. You can have tens or hundreds of threads in a single VM. In most cases, this will perform better and use less total memory than trying to run many small VMs.

Debugging a strange memory leak - Java/Tomcat

I'm experiencing a very odd problem with a Java application running under Tomcat.
We tried to update the production code with a fresh build produced in a one-week sprint. The application had been running for months without hiccups, and then this new code makes our Linux servers start swapping after some time.
The very strange thing is that when looking at memory usage in VisualVM, it never exceeds the maximum heap size and the JVM does not throw an OutOfMemoryError; the machine just starts swapping, and the JVM keeps running even after that.
So it seems something is leaking memory from somewhere. It seems to be the new code, but it's odd that it's not inside the JVM. Any ideas on how to debug this?
Thanks!
Swapping is not a conclusive indicator of leakage. It results from low physical memory. Use vmstat on Linux to get swap usage. Try using a different machine, experiment with configurations --swap size, physical memory size, address space.
If you are confident that the problem is in your program try this:
Estimate the median and peak memory that your program should use. You must be able to account for all deviations from these metrics. If you cannot, proceed to step 3.
Assuming you did step 1 correctly and were able to account for all deviations, you can rule out a leak (sorry about such vague suggestions, but debugging is only as good as the detective). You should now focus on GC tuning. First, enable GC logging. See if your heap is actually full and where the GC is spending most of its time collecting. This may be a good starting point for optimization. Try to see if adjusting GC options helps. Try experimenting with collection algorithms, max/min heap sizes, gen ratios, etc. Only experiment once you have ruled out a leak (step 1).
Assuming you did step 1 correctly and were not able to account for all deviations, you can assume that you have a leak somewhere. Use a memory profiler to see which objects contribute most to heap size growth. Leave a profiler running for an extended period of time: have your program handle some requests it routinely expects to get, and then leave it relatively isolated after that. If the memory level keeps growing, you may have a leak somewhere; if not, then it is probably not a memory leak. Can you pinpoint the part of your program that may be creating the objects? If yes, try sending several requests that only target that part of your program. Does it replicate the problem deterministically? If no, repeat step 3. If yes, use divide and conquer and reapply step 3 until you can find the class/method that is the culprit. It can be a certain combination of multiple portions as well (meaning that individually they may look innocent, but together they may form a brilliant crime syndicate).
Hope this helps, if not then please leave a comment to my post.
All the very best on your exercise!
I would suggest you look into creating dumps without using jvisualvm. For Unix-based Oracle JVMs, sending signal 3 (kill -3) to the JVM produces a thread dump; for an actual heap dump you would typically use jmap or start the JVM with -XX:+HeapDumpOnOutOfMemoryError.
For full details see http://www.startux.de/index.php/java/45-java-heap-dumpyvComment45
You can then see if the pattern changes.
If you do not get an idea from this, then this might be because you are storing a sub-string from a very large original string (which carries the underlying string array around), or because you hold on to operating system resources like open database connections etc.
Have you checked that your connection pool looks healthy?
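For the substring case mentioned above, a small illustrative sketch of the defensive copy that avoids retaining the large backing array (this applies to JVMs where String.substring shares the original char[], which was the behaviour before Java 7u6):

public class SubstringLeak {
    // On older JVMs, caching small tokens produced like this can pin the
    // entire multi-megabyte source string in memory.
    static String leakyToken(String hugeDocument) {
        return hugeDocument.substring(0, 32);  // shares hugeDocument's char[]
    }

    // Copying the substring into a new String detaches it from the large array.
    static String safeToken(String hugeDocument) {
        return new String(hugeDocument.substring(0, 32));
    }
}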
If you aren't using it already, I'd recommend VisualVM version 1.3.2 with all the plug-ins. It's a big jump up from earlier versions.
What happens to the perm gen space?
What are the memory settings you're using? Min and max, of course, but what about perm space size?
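If you can't attach VisualVM in production, here is a rough sketch for logging perm gen usage from inside the app via the standard MemoryPoolMXBean API (the pool name match is an assumption; names vary by JVM and collector):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryUsage;

public class PermGenCheck {
    public static void main(String[] args) {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            // On HotSpot the pool is usually called "Perm Gen" or "PS Perm Gen".
            if (pool.getName().contains("Perm")) {
                MemoryUsage usage = pool.getUsage();
                System.out.println(pool.getName() + ": used=" + usage.getUsed() / 1024
                        + " KB, max=" + usage.getMax() / 1024 + " KB");
            }
        }
    }
}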
