Track native memory usage from Java?

Is there any way I can log native memory usage from Java, i.e., either native memory directly or the total memory the process is using (e.g., ask the OS)?
I'd like to run this on users' machines behind the scenes, so the NativeMemoryTracking command-line tool isn't really the most appealing option. I already log free/max/total heap sizes.
Background: A user of my software reported an exception (below) and I have no idea why. My program does use SWIG'd native code, but it's a simple API, I don't think it has a memory leak, and it wasn't on the stack trace (or run immediately before the error). My log indicated there was plenty of heap space available when the error occurred, so I'm really at a loss for how to track this down.
java.lang.OutOfMemoryError: null
at java.io.RandomAccessFile.writeBytes0(Native Method) ~[na:1.7.0_45]
at java.io.RandomAccessFile.writeBytes(Unknown Source) ~[na:1.7.0_45]
at java.io.RandomAccessFile.write(Unknown Source) ~[na:1.7.0_45]
The error occurred on Windows (7 or 10?) from within Java Web Start, configured with these parameters:
<java href="http://java.sun.com/products/autodl/j2se" initial-heap-size="768m" java-vm-args="" max-heap-size="900m" version="1.7+"/>

If you want to track JVM memory usage around a particular method or block of code, you can use the Runtime API.
Runtime runtime = Runtime.getRuntime();
NumberFormat format = NumberFormat.getInstance(); // java.text.NumberFormat
long maxMemory = runtime.maxMemory();
long allocatedMemory = runtime.totalMemory();
long freeMemory = runtime.freeMemory();
// All values below are heap figures, printed in KB.
System.out.println("free memory: " + format.format(freeMemory / 1024));
System.out.println("allocated memory: " + format.format(allocatedMemory / 1024));
System.out.println("max memory: " + format.format(maxMemory / 1024));
System.out.println("total free memory: " + format.format((freeMemory + (maxMemory - allocatedMemory)) / 1024));

I ended up using this code, which asks the OS for the process's RSS and peak memory usage. It was straightforward for me to add since I already have a SWIG module set up. The code might not be thread-safe: I hit a random malloc error while testing, so I'm not sure I want to keep it in there.
I'm really surprised the JVM doesn't provide a way to do this. Please let me know if there's a way.
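On Linux, one way to read the same numbers without any native code is to parse /proc/self/status. Here is a minimal sketch under that assumption (Java 8+, Linux-only; the class and method names are arbitrary; VmRSS is the current resident set size, VmHWM the peak):
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class ProcStatus {
    // Reads a field such as VmRSS or VmHWM from /proc/self/status.
    // Lines look like "VmRSS:     123456 kB"; values are reported in kB.
    public static long readKb(String field) throws IOException {
        for (String line : Files.readAllLines(Paths.get("/proc/self/status"))) {
            if (line.startsWith(field + ":")) {
                return Long.parseLong(line.replaceAll("[^0-9]", ""));
            }
        }
        return -1; // field not present on this kernel
    }

    public static void main(String[] args) throws IOException {
        System.out.println("RSS:  " + readKb("VmRSS") + " kB");
        System.out.println("Peak: " + readKb("VmHWM") + " kB");
    }
}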

Here is a snippet that builds a string showing the amount of memory used (MB) out of total memory (MB), with a percentage. You can then log it however you want!
long mb = 1024 * 1024; // bytes per megabyte (was undefined in the original snippet)
Runtime instance = Runtime.getRuntime();
String mem = "Memory Used: " + (instance.totalMemory() - instance.freeMemory()) / mb + "MB ("
        + (int) ((instance.totalMemory() - instance.freeMemory()) * 1.0 / instance.totalMemory() * 100.0) + "%)";

public final void writeBytes(String s) throws IOException {
    int len = s.length();
    byte[] b = new byte[len];
    s.getBytes(0, len, b, 0);
    writeBytes(b, 0, len);
}
Looking at the source, it is possible that a sufficiently large String caused the OutOfMemoryError: writeBytes allocates a temporary byte[] as large as the whole String. I suspect your heap log was written before this allocation happened, which explains the free heap space you saw. I suggest you verify whether this is the case and, if so, limit the String size and/or increase the heap size.
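If limiting the String size is the chosen fix, one approach (a sketch, not from the original answer; the chunk size and method name are arbitrary) is to write the String in slices, so each writeBytes call only allocates a small temporary array:
import java.io.IOException;
import java.io.RandomAccessFile;

public class ChunkedWrite {
    static final int CHUNK = 64 * 1024; // 64 KB per slice (arbitrary)

    // Writes a large String in fixed-size slices so writeBytes() only ever
    // allocates a small temporary byte[], instead of one array as large as
    // the whole String.
    public static void writeInChunks(RandomAccessFile raf, String s) throws IOException {
        for (int off = 0; off < s.length(); off += CHUNK) {
            int end = Math.min(off + CHUNK, s.length());
            raf.writeBytes(s.substring(off, end));
        }
    }
}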

Why is my Java app in Docker getting killed because of memory?

Problem: the application is killed due to memory usage.
Status reason: OutOfMemoryError: Container killed due to memory usage
Exit code: 137
Environment: Spring Boot app in a Docker container on an AWS ECS instance with this configuration:
AWS hard memory limit / total RAM - 384 MB
-Xmx134m
-Xms134m
-XX:MaxMetaspaceSize=110m
According to the Java max memory formula (which I found during weeks of research - https://dzone.com/articles/why-does-my-java-process - and improved a bit):
max_memory = Xmx + non-heap_memory + threads_number * Xss
non-heap_memory = metaspace + (compressed_class_space + CodeHeap_profiled_methods + CodeHeap_non-methods + CodeHeap_non-profiled_methods)
Take into account that the second part of non-heap memory (everything except metaspace) takes nearly 40 MB combined,
so in my case: max_memory = 134 (Xmx) + 110 (metaspace max) + 40 (non-heap, non-metaspace) + 9 (threads) * 1 (default Xss) = 293 MB
However, under load heapUsedMemory = ~105-120 MB and non-heapUsedMemory (metaspace + JVM overhead) = ~140 MB, which means there should be 384 - 120 - 140 = 124 MB of free memory.
So the problem is that there is plenty of free memory, and all the Java tools confirm it (jstat -gc, Spring metrics in Grafana, various Java APIs, etc.).
Here is a code snippet of an API I developed and used during my research:
@GetMapping("/memory/info")
public Map<String, String> getMmUsage() {
    Map<String, Long> info = new HashMap<>();
    List<MemoryPoolMXBean> memPool = ManagementFactory.getMemoryPoolMXBeans();
    for (MemoryPoolMXBean p : memPool) {
        if ("Metaspace".equals(p.getName())) {
            info.put("metaspaceMax", p.getUsage().getMax());
            info.put("metaspaceCommitted", p.getUsage().getCommitted());
            info.put("metaspaceUsed", p.getUsage().getUsed());
        }
    }
    info.put("heapMax", ManagementFactory.getMemoryMXBean().getHeapMemoryUsage().getMax());
    info.put("heapCommitted", ManagementFactory.getMemoryMXBean().getHeapMemoryUsage().getCommitted());
    info.put("heapUsed", ManagementFactory.getMemoryMXBean().getHeapMemoryUsage().getUsed());
    info.put("non-heapMax", ManagementFactory.getMemoryMXBean().getNonHeapMemoryUsage().getMax());
    info.put("non-heapCommitted", ManagementFactory.getMemoryMXBean().getNonHeapMemoryUsage().getCommitted());
    info.put("non-heapUsed", ManagementFactory.getMemoryMXBean().getNonHeapMemoryUsage().getUsed());
    Map<String, String> memoryData = info.entrySet().stream().collect(Collectors.toMap(Entry::getKey, e -> {
        long kb = e.getValue() / 1024;
        return (kb / 1024) + " Mb (" + kb + " Kb)";
    }, (v1, v2) -> v1, TreeMap::new));
    Set<Thread> threads = Thread.getAllStackTraces().keySet();
    memoryData.put("threadsCount", Integer.toString(threads.size()));
    memoryData.put("threadsCountRunning",
            Long.toString(threads.stream().filter(t -> t.getState() == Thread.State.RUNNABLE).count()));
    return memoryData;
}
So my application should be stable, since it has plenty of memory to work with. But that's not the case: as described above, the container is killed due to memory usage.
Java tools show that there is plenty of memory and that (heap) memory is being released. On the other hand, the AWS CloudWatch metric MemoryUtilization shows constant growth of memory (in very small increments):
Interesting observation: during endless testing I found the following: when I set Xmx = 134 MB the application lived longer and was able to survive 5 rounds of performance tests; when I set Xmx/Xms = 200 MB it survived only 1 round. How could this be possible?
My opinion: it looks like something is using memory and not releasing it properly.
I would like to hear your opinions on why my app keeps dying when there are 50+ MB of free memory, and why the AWS metrics show different results than the Java tools.
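A note on tooling: the allocations that none of the Java-level APIs above report can be broken down with the JVM's Native Memory Tracking, the same facility mentioned in the first question of this page. A minimal invocation might look like this (app.jar and the PID are placeholders):
java -XX:NativeMemoryTracking=summary -Xms134m -Xmx134m -jar app.jar
jcmd <pid> VM.native_memory summary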

Java OutOfMemory Error

I am seeing an OutOfMemory problem, and I am not sure whether it is the PermGen area or the heap space. The error message does not say which area ran out of memory.
Here is a partial stack trace:
The following is information that may be useful to the developer of BETWEENNESS:
java.lang.OutOfMemoryError
    at java.util.zip.ZipFile.open(Native Method)
    at java.util.zip.ZipFile.<init>(ZipFile.java:127)
    at java.util.zip.ZipFile.<init>(ZipFile.java:143)
    at com..util.internal.ZipFiles.unzip(ZipFiles.java:91)
I looked at the heap space just before it ran out of memory using the jmap -heap command:
using thread-local object allocation.
Parallel GC with 23 thread(s)
Heap Configuration:
MinHeapFreeRatio = 40
MaxHeapFreeRatio = 70
MaxHeapSize = 31675383808 (30208.0MB)
NewSize = 1310720 (1.25MB)
MaxNewSize = 17592186044415 MB
OldSize = 5439488 (5.1875MB)
NewRatio = 2
SurvivorRatio = 8
PermSize = 21757952 (20.75MB)
MaxPermSize = 536870912 (512.0MB)
Heap Usage:
PS Young Generation
Eden Space:
capacity = 9762177024 (9309.9375MB)
used = 7286558512 (6949.003707885742MB)
free = 2475618512 (2360.933792114258MB)
74.64071276403028% used
From Space:
capacity = 396230656 (377.875MB)
used = 340623360 (324.84375MB)
free = 55607296 (53.03125MB)
85.96592788620576% used
To Space:
capacity = 398131200 (379.6875MB)
used = 0 (0.0MB)
free = 398131200 (379.6875MB)
0.0% used
PS Old Generation
capacity = 1992163328 (1899.875MB)
used = 1455304512 (1387.8865356445312MB)
free = 536858816 (511.98846435546875MB)
73.05146578825087% used
PS Perm Generation
capacity = 418578432 (399.1875MB)
used = 418567008 (399.1766052246094MB)
free = 11424 (0.010894775390625MB)
99.99727076238844% used
And also, I had supplied the following arguments to the JVM: -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/ but I do not see any heap dump.
My questions are: why is the heap dump not being generated, and how do I figure out which part of the JVM is getting full?
Thanks.
The information you provided says that your PermGen is 99% full and your heap is already 73% full, so increasing both would not be bad at all.
Further, you could activate the garbage collector's logging with -XX:+PrintGCDetails to get detailed information on how your JVM is using memory. Additionally activate -XX:+PrintGCTimeStamps and -XX:+PrintGCDateStamps. -Xloggc:$filename sends the logs to a file, which you can easily analyze with something like the IBM PMAT tool or GCViewer.
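For example, a combined invocation might look like this (gc.log and YourMainClass are placeholders):
java -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -Xloggc:gc.log YourMainClass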
Additionally you should consider using VisualVM to monitor your application while running.
Besides:
A colleague of mine found a smart way to get a heap dump many times faster through gdb:
cat > /tmp/dodump <<EOF
gcore jvm.core
detach
quit
EOF
time gdb --batch --command=/tmp/dodump --pid=`pgrep java`
jmap -dump:format=b,file=jvm.hprof `which java` jvm.core
rm jvm.core
gzip -9 jvm.hprof
Source
Credits go fully to him.
According to the stack trace, the OutOfMemoryError happens inside the ZipFile.open() native method. This means the problem has nothing to do with the Java heap size or with PermGen. It is likely related to the ZIP cache, which is malloc'ed or mmap'ed internally by the JDK library.
Try adding the -Dsun.zip.disableMemoryMapping=true JVM option to see if it helps.
How large is the ZIP file being opened?
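Independent of file size, a common way to exhaust that native zip memory is leaking ZipFile handles, since each open ZipFile holds a malloc'ed/mmap'ed structure until close() is called. A minimal sketch of the close-in-finally pattern (ZipScan and listEntries are made-up names):
import java.io.IOException;
import java.util.Enumeration;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;

public class ZipScan {
    // Each open ZipFile holds natively allocated state that only close()
    // releases, so leaked handles consume native memory that never shows
    // up in heap or PermGen statistics.
    public static void listEntries(String path) throws IOException {
        ZipFile zip = new ZipFile(path);
        try {
            for (Enumeration<? extends ZipEntry> e = zip.entries(); e.hasMoreElements(); ) {
                System.out.println(e.nextElement().getName());
            }
        } finally {
            zip.close(); // releases the native zip structure
        }
    }
}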

Resolving java.lang.OutOfMemoryError: Java heap space

As the title suggests, I'm getting this error inside a thread.
The offending LOCs look like this:
for (int i = 0; i < objectListSize; i++) {
    logger.INFO("Loop repeat: " + i + " ...", true);
    final Double discreteScore = sp.getDouble(superPeerSocket);
    final int expectedObjectIDs = sp.getInteger(superPeerSocket);
    final String discreteObjects[] = new String[expectedObjectIDs];
    for (int j = 0; j < expectedObjectIDs; j++)
        discreteObjects[j] = sp.getString(superPeerSocket);
    htPlus.attachInitialDiscreteList2L1(discreteScore, discreteObjects);
}
The final String discreteObjects[] declaration is where I get the error. I am running this code inside a thread; I have two threads currently active when I get this. I also tried using the MAT tool from Eclipse. Here is a link with some chart files inside:
PLC chart files (dropbox URL)
If anyone has any idea for this problem I would be grateful.
P.S.: I am thinking of removing the loop, although it fails on the first pass anyway.
(I get this in the output when the program fails)
Expected data size: 10
Repeat: 0 ...
java.lang.OutOfMemoryError: Java heap space
Dumping heap to java_pid3793.hprof ...
Heap dump file created [1404020 bytes in 0.015 secs]
Exception in thread "1" java.lang.OutOfMemoryError: Java heap space
at planetlab.app.factory.Worker$15.run(Worker.java:796)
at java.lang.Thread.run(Thread.java:662)
Something irrelevant:
What's with the "code not properly formatted/indented" error when making posts on Stack Overflow? It took me 15 minutes to figure out what to do :# :S :#
Every Java program runs in a sandbox: while your OS might have 10 GB of RAM available, your app might only have 128 MB.
You need to make sure your app has enough RAM allocated to the JVM by using the -Xms and -Xmx arguments: -Xms sets the minimum (initial) heap size and -Xmx the maximum.
It's been suggested in the comments that your expectedObjectIDs seems kinda high; I would certainly check that out. However, you can use the following code to get an idea of memory usage and available memory. Using that info you can adjust your -Xms and -Xmx accordingly.
Good luck!
Runtime runtime = Runtime.getRuntime();
long maxMemory = runtime.maxMemory();
long allocatedMemory = runtime.totalMemory();
long freeMemory = runtime.freeMemory();
System.out.println("free memory: " + freeMemory / 1024);
System.out.println("allocated memory: " + allocatedMemory / 1024);
System.out.println("max memory: " + maxMemory /1024);
System.out.println("total free memory: " +
(freeMemory + (maxMemory - allocatedMemory)) / 1024);

OutOfMemory error in toString when ISO-8859-1 encoding in SuSE Linux

I've been beating my head against this issue for a while. I'm taking a 27K encoded string (similar to URL encoding) and turning it back into a 9K ISO-8859-1 plaintext string.
byte outarray[] = new byte[decoded_msg_length]; // 9K
byte inarray[];
try {
    inarray = instring.getBytes("ISO-8859-1"); // eg: "ÀÀÀÚßÐÀÀÃÐéÙÓåäàÈÂÁÙÈ...."
    inarray = null; // free up whatever memory possible.
    // ... for loop decodes chunks of 4 bytes...
    Runtime runtime = Runtime.getRuntime();
    System.out.println("freeMemory1=" + runtime.freeMemory()); // freeMemory1=86441120
    // yes I've tried methods like new String( outarray, "ISO-8859-1" );, etc.
    ByteArrayOutputStream byteStream = new ByteArrayOutputStream();
    byteStream.write(outarray);
    outarray = null;
    runtime.gc();
    System.out.println("freeMemory2=" + runtime.freeMemory()); // freeMemory2=133761568
    // return new String(outarray,"ISO-8859-1"); // OutOfMemoryException thrown here
    // return new String(outarray); // OutOfMemoryException thrown here too
    return byteStream.toString("ISO-8859-1"); // OutOfMemoryException thrown here also
    // sample output: "JOHN H SMITH 123 OAK ST..."
} catch (IOException ioe) {
    ...
}
// Thrown exception:
Exception in thread "main" java.lang.OutOfMemoryError
at java.lang.StringCoding.decode(StringCoding.java:510)
at java.lang.String.<init>(String.java:232)
at java.io.ByteArrayOutputStream.toString(ByteArrayOutputStream.java:195)
...
It looks like I have plenty of memory. This same code runs fine with less than half as much free memory on Windows. I'm running this as a single standalone class. Does anyone know of any Linux encoding issues or of a JRE memory leak?
$ java -version
java version "1.5.0"
Java(TM) 2 Runtime Environment, Standard Edition (build pxi32dev-20080315 (SR7))
IBM J9 VM (build 2.3, J2RE 1.5.0 IBM J9 2.3 Linux x86-32 j9vmxi3223-20080315 (JIT enabled)
J9VM - 20080314_17962_lHdSMr
JIT - 20080130_0718ifx2_r8
GC - 200802_08)
JCL - 20080314
The Java heap size may have a different default limit in your Linux environment than on Windows. You can check this via the Runtime.maxMemory() method:
http://download.oracle.com/javase/1.5.0/docs/api/java/lang/Runtime.html#maxMemory()
If the limit is smaller under Linux, you can increase it via the -Xmx command-line argument to java:
java -Xmx1024m YourClassNameHere
The 1024m will increase the size of the heap to 1 GB; you can adjust the amount as needed. This is a maximum: your program may use much less.
I found the solution, though I'm not sure of the exact reason why it occurs - most likely some internal static buffer variable. Even though the error is thrown in toString, the fix was to resize decoded_msg_length to match the length of instring.
For some reason I have yet to fathom, instring.getBytes("ISO-8859-1") sets the size of some internal buffer that byteStream.toString("ISO-8859-1") fills. Setting decoded_msg_length even one byte short of that length causes Java to throw the error, even though there's nothing thread-unsafe here and I'm working with two different variables.
To top it off, I can use CharsetDecoder and it'll still fail. I'll chalk it up to an OS/JVM bug; without that freakish fix, the code works fine on other OSes and JVMs.

What does "Insufficient system resources..." error mean?

This question spans both Server Fault and Stack Overflow, so I just picked this one.
I get the following exception with some simple file-copy code. It's running on Windows Server 2003 x64.
Caused by: java.io.IOException: Insufficient system resources exist to complete the requested service
at sun.nio.ch.FileDispatcher.pwrite0(Native Method)
at sun.nio.ch.FileDispatcher.pwrite(Unknown Source)
at sun.nio.ch.IOUtil.writeFromNativeBuffer(Unknown Source)
at sun.nio.ch.IOUtil.write(Unknown Source)
at sun.nio.ch.FileChannelImpl.write(Unknown Source)
at sun.nio.ch.FileChannelImpl.transferFromFileChannel(Unknown Source)
at sun.nio.ch.FileChannelImpl.transferFrom(Unknown Source)
at Tools.copy(Tools.java:473)
public static void copy(FileChannel input, FileChannel output) throws IOException {
    final long size = input.size();
    long pos = 0;
    while (pos < size) {
        final long count = (size - pos) > FIFTY_MB ? FIFTY_MB : (size - pos);
        pos += output.transferFrom(input, pos, count);
    }
}
The thing is, the server running this code is brand new and super powerful, so I don't understand what system resource it could possibly be running out of.
This looks like the error described here:
http://support.microsoft.com/kb/304101
But I've tried adding the registry edits to increase kernel paged-pool memory, and that didn't help.
What I really don't get is that I've seen code that uses FileChannel.transferFrom with chunks a lot larger than 50 MB; I've seen that code work on files well over 1 GB in one chunk. But the file the server is getting stuck on is just 32 MB!
What is going on here? Is this a problem with FileChannel or Windows?
It may be related to "Bug" ID 4938442: Insufficient System Resources When Copying Large Files with NIO FileChannels.
Evaluation: Not a bug. This is most likely a file-server (or possibly client) configuration issue.
CUSTOMER SUBMITTED WORKAROUND:
1. Don't use NIO; we'd prefer to avoid this workaround since NIO offers a significant performance boost for large files (at least when performing local disk-to-local disk copies).
2. We can transfer using a smaller number of bytes. The actual number of bytes that may be copied without encountering this error seems to differ on Windows XP and Windows 2000 Server. Certainly a value of 32 Mb appears to work.
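Following the second workaround (transfer a smaller number of bytes), a minimal adaptation of the copy() method from the question might look like this (the 8 MB chunk size is an arbitrary, conservative choice, well under the 32 MB that reportedly still works):
import java.io.IOException;
import java.nio.channels.FileChannel;

public class SmallChunkCopy {
    private static final long CHUNK = 8L * 1024 * 1024; // 8 MB per transfer

    // Same loop as the copy() in the question, but with a much smaller
    // per-call transfer size to stay clear of the resource limit.
    public static void copy(FileChannel input, FileChannel output) throws IOException {
        final long size = input.size();
        long pos = 0;
        while (pos < size) {
            long count = Math.min(CHUNK, size - pos);
            pos += output.transferFrom(input, pos, count);
        }
    }
}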
