As the title suggests, I'm getting this error inside a thread. The offending lines of code look like this:
for (int i = 0; i < objectListSize; i++) {
    logger.INFO("Loop repeat: " + i + " ...", true);
    final Double discreteScore = sp.getDouble(superPeerSocket);
    final int expectedObjectIDs = sp.getInteger(superPeerSocket);
    final String discreteObjects[] = new String[expectedObjectIDs];
    for (int j = 0; j < expectedObjectIDs; j++)
        discreteObjects[j] = sp.getString(superPeerSocket);
    htPlus.attachInitialDiscreteList2L1(discreteScore, discreteObjects);
}
The final String discreteObjects[] declaration is where I get the error. This code runs inside a thread, and two threads are active when the error occurs. I also tried the MAT tool from Eclipse; here is a link with some chart files inside:
PLC chart files (dropbox URL)
If anyone has any idea about this problem, I would be grateful.
P.S.: I am considering removing the loop, although it fails on the first pass anyway.
(I get this in the output when the program fails)
Expected data size: 10
Repeat: 0 ...
java.lang.OutOfMemoryError: Java heap space
Dumping heap to java_pid3793.hprof ...
Heap dump file created [1404020 bytes in 0.015 secs]
Exception in thread "1" java.lang.OutOfMemoryError: Java heap space
at planetlab.app.factory.Worker$15.run(Worker.java:796)
at java.lang.Thread.run(Thread.java:662)
Something irrelevant:
What's with the "code not properly formatted/indented" error when making posts on Stack Overflow? It took me 15 minutes to figure out what to do.
Every Java program runs in a sandbox: while your OS might have 10 GB of RAM available, your app might only get 128 MB.
You need to make sure your app has enough RAM allocated to the JVM by using the -Xms and -Xmx arguments. -Xms sets the initial heap size and -Xmx the maximum.
It has been suggested in the comments that your expectedObjectIDs seems rather high; I would certainly check that out. In the meantime, you can use the following code to get an idea as to memory usage and available memory, and adjust your -Xms and -Xmx accordingly.
Good luck!
Runtime runtime = Runtime.getRuntime();
long maxMemory = runtime.maxMemory();
long allocatedMemory = runtime.totalMemory();
long freeMemory = runtime.freeMemory();
System.out.println("free memory: " + freeMemory / 1024);
System.out.println("allocated memory: " + allocatedMemory / 1024);
System.out.println("max memory: " + maxMemory / 1024);
System.out.println("total free memory: " +
        (freeMemory + (maxMemory - allocatedMemory)) / 1024);
Related
Problem: application is killed due to memory usage
Status reason OutOfMemoryError: Container killed due to memory usage
Exit Code 137
Environment: Spring Boot app in docker container on AWS ECS instance with configuration:
AWS hard memory limit/total RAM - 384 MB
-Xmx134m
-Xms134m
-XX:MaxMetaspaceSize=110m
According to the Java max-memory formula (which I found during weeks of research, see https://dzone.com/articles/why-does-my-java-process, and improved a bit):
max_memory = xmx + non-heap_memory + threads_number * xss
non-heap_memory = metaspace + (compressed_class_space + codeHeap_profiledmethods + CodeHeap_non-methods+ CodeHeap_non-profiled_methods).
Note that the second part of non-heap memory takes nearly 40 MB combined,
so in my case max_memory = 134 (xmx) + 110 (metaspace max) + 40 (non-heap, non-metaspace) + 9 (threads) * 1 (default Xss) = 293 MB.
However, under load, heapUsedMemory = ~105-120 MB and non-heapUsedMemory (metaspace + JVM overhead) = ~140 MB, which means there must be 384 - 120 - 140 = 124 MB of free memory.
So the problem is that there is plenty of free memory, and all Java tools show it (jstat -gc, Spring metrics on Grafana, various Java APIs, etc.).
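The accounting above can be sketched as a quick back-of-the-envelope calculation. The MB values are the ones quoted in this question (the 1 MB default Xss is an assumption); this is a rough estimate, not an exact JVM memory model:

```java
// Rough sketch of the max_memory formula above, using the MB figures
// quoted in this question.
public class MaxMemoryEstimate {
    public static void main(String[] args) {
        int xmx = 134;           // -Xmx134m
        int metaspaceMax = 110;  // -XX:MaxMetaspaceSize=110m
        int otherNonHeap = 40;   // code cache + compressed class space, approx.
        int threads = 9;
        int xssMb = 1;           // assumed default thread stack size, MB

        int maxMemory = xmx + metaspaceMax + otherNonHeap + threads * xssMb;
        int containerLimit = 384; // AWS hard memory limit

        System.out.println("estimated max_memory: " + maxMemory + " MB");        // 293 MB
        System.out.println("headroom: " + (containerLimit - maxMemory) + " MB"); // 91 MB
    }
}
```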
Here is code-snippet of an API I have developed and used during my research:
@GetMapping("/memory/info")
public Map<String, String> getMmUsage() {
    Map<String, Long> info = new HashMap<>();
    List<MemoryPoolMXBean> memPool = ManagementFactory.getMemoryPoolMXBeans();
    for (MemoryPoolMXBean p : memPool) {
        if ("Metaspace".equals(p.getName())) {
            info.put("metaspaceMax", p.getUsage().getMax());
            info.put("metaspaceCommitted", p.getUsage().getCommitted());
            info.put("metaspaceUsed", p.getUsage().getUsed());
        }
    }
    info.put("heapMax", ManagementFactory.getMemoryMXBean().getHeapMemoryUsage().getMax());
    info.put("heapCommitted", ManagementFactory.getMemoryMXBean().getHeapMemoryUsage().getCommitted());
    info.put("heapUsed", ManagementFactory.getMemoryMXBean().getHeapMemoryUsage().getUsed());
    info.put("non-heapMax", ManagementFactory.getMemoryMXBean().getNonHeapMemoryUsage().getMax());
    info.put("non-heapCommitted", ManagementFactory.getMemoryMXBean().getNonHeapMemoryUsage().getCommitted());
    info.put("non-heapUsed", ManagementFactory.getMemoryMXBean().getNonHeapMemoryUsage().getUsed());

    Map<String, String> memoryData = info.entrySet().stream().collect(Collectors.toMap(Entry::getKey, e -> {
        long kb = e.getValue() / 1024;
        return (kb / 1024) + " Mb (" + kb + " Kb)";
    }, (v1, v2) -> v1, TreeMap::new));

    Set<Thread> threads = Thread.getAllStackTraces().keySet();
    memoryData.put("threadsCount", Integer.toString(threads.size()));
    memoryData.put("threadsCountRunning",
            Long.toString(threads.stream().filter(t -> t.getState() == Thread.State.RUNNABLE).count()));
    return memoryData;
}
So my application should be stable, since it has plenty of memory to work with. But that is not the case: as described above, the container is killed due to memory usage.
Java tools show that there is plenty of memory and that heap memory is being released. On the other hand, the AWS CloudWatch MemoryUtilization metric shows constant growth of memory (in very small portions).
Interesting observation: during endless testing I found the following: when I set xmx = 134 MB, the application lived longer and was capable of surviving 5 rounds of performance tests; when I set xmx/xms = 200 MB, it survived only 1 round. How is this possible?
My opinion: it looks like something is using memory and not releasing it properly.
I would like to hear your opinions on why my app constantly dies when there is 50+ MB of free memory, and why the AWS metrics show a different result compared to the Java tools.
I am writing a program in Java to periodically display the CPU and memory usage of a given process ID. My implementation invokes tasklist. It is pretty straightforward to get the memory usage by the following command:
tasklist /fi "memusage ge 0" /fi "pid eq 2076" /v
This will return the memory usage of process ID 2076, and I can use this for my task. By invoking the following command, I can extract the CPU time.
tasklist /fi "pid eq 2076" /fi "CPUTIME ge 00:00:00" /v
My question is, how would I go about getting the CPU usage of this process?
I found a post on StackOverflow for my question but the answer isn't clear and I don't understand what to type in the command to get what I need. The question was answered in 2008 and someone asked for clarification in 2013 but the person that answered the question hasn't replied.
Here is the post that I have found.
Memory is like a tea cup: it may be full or empty, and an instantaneous look at the cup lets you see how full of tea it is (that is your "memusage" command).
CPU is like a ski lift. It moves at a reasonably constant rate whether or not you are riding it. It is not possible to determine your usage from a single instantaneous observation; we need to know how long you were riding it (that is your "cputime" command). You have to use the "cputime" command at least twice!
For example:
At 7:09 pm, you run the cputime command on your process, and it returns "28 minutes"
At 7:17 pm, you run the cputime command on your process again, and it returns "32 minutes"
From the first time you ran the cputime command to the second time, the usage has increased from 28 minutes to 32 minutes -- the process has used 4 minutes of CPU time.
From 7:09pm to 7:17pm is a duration of 8 minutes -- A total of 8 minutes of time were available, but your process just used 4 minutes: 4 / 8 = 50% average system usage.
If your system has multiple processors, then you can divide by the total number of CPUs to get an average per CPU - e.g. 50% / 2 = 25% average in a dual cpu system.
I used minutes above for ease of writing - in reality you may be looking at how many nanoseconds of CPU time the process used during a time window that is just milliseconds long.
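The worked example above reduces to a two-sample calculation. Here it is as a small sketch, using the example's hypothetical numbers (the 7:09 pm and 7:17 pm samples):

```java
// CPU usage from two cputime samples, as in the example above.
public class CpuUsageFromSamples {
    public static void main(String[] args) {
        double firstCpuMinutes = 28;   // cputime at 7:09 pm
        double secondCpuMinutes = 32;  // cputime at 7:17 pm
        double wallClockMinutes = 8;   // 7:09 pm -> 7:17 pm

        // CPU time consumed divided by wall-clock time elapsed
        double usage = (secondCpuMinutes - firstCpuMinutes) / wallClockMinutes;
        System.out.println("average usage: " + usage * 100 + "%");  // 50.0%

        int cpuCount = 2;  // e.g. a dual-CPU system
        System.out.println("per-CPU average: " + usage * 100 / cpuCount + "%");  // 25.0%
    }
}
```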
tasklist does not provide the information you are looking for. I would suggest using Get-Counter. A comment on an answer from the SuperUser site looks to be on track for what you're after.
Get-Counter '\Process(*)\% Processor Time' |
    Select-Object -ExpandProperty CounterSamples |
    Select-Object -Property InstanceName, CookedValue |
    Where-Object { $_.InstanceName -notmatch "^(idle|_total|system)$" } |
    Sort-Object -Property CookedValue -Descending |
    Select-Object -First 25 |
    Format-Table InstanceName, @{L='CPU'; E={($_.CookedValue/100/$env:NUMBER_OF_PROCESSORS).ToString('P')}} -AutoSize
I once wrote a class:
private static class PerformanceMonitor {
    private final int availableProcessors = ManagementFactory.getOperatingSystemMXBean().getAvailableProcessors();
    private long lastSystemTime = 0;
    private long lastProcessCpuTime = 0;

    /**
     * Gets the CPU usage of the JVM.
     *
     * @return the CPU usage as a fraction between 0.0 and 1.0
     */
    private synchronized double getCpuUsage() {
        if (lastSystemTime == 0) {
            baselineCounters();
            return 0d;
        }
        long systemTime = System.nanoTime();
        long processCpuTime = 0;
        if (ManagementFactory.getOperatingSystemMXBean() instanceof com.sun.management.OperatingSystemMXBean) {
            processCpuTime = ((com.sun.management.OperatingSystemMXBean) ManagementFactory.getOperatingSystemMXBean()).getProcessCpuTime();
        }
        double cpuUsage = ((double) (processCpuTime - lastProcessCpuTime)) / ((double) (systemTime - lastSystemTime));
        lastSystemTime = systemTime;
        lastProcessCpuTime = processCpuTime;
        return cpuUsage / availableProcessors;
    }

    private void baselineCounters() {
        lastSystemTime = System.nanoTime();
        if (ManagementFactory.getOperatingSystemMXBean() instanceof com.sun.management.OperatingSystemMXBean) {
            lastProcessCpuTime = ((com.sun.management.OperatingSystemMXBean) ManagementFactory.getOperatingSystemMXBean()).getProcessCpuTime();
        }
    }
}
Which is used like:
private static final PerformanceMonitor _MONITOR = new PerformanceMonitor();
_MONITOR.getCpuUsage();
This returns the fraction of CPU consumed by this JVM's process.
Is there any way I can log native memory usage from Java, i.e., either native memory directly or the total memory the process is using (e.g., ask the OS)?
I'd like to run this on users' machines behind the scenes, so the NativeMemoryTracking command-line tool isn't really the most appealing option. I already log free/max/total heap sizes.
Background: A user of my software reported an exception (below), and I have no idea why. My program does use SWIG'd native code, but it's a simple API; I don't think it has a memory leak, and it wasn't on the stack trace (or run immediately before the error). My log indicated there was plenty of heap space available when the error occurred, so I'm really at a loss for how to track this down.
java.lang.OutOfMemoryError: null
at java.io.RandomAccessFile.writeBytes0(Native Method) ~[na:1.7.0_45]
at java.io.RandomAccessFile.writeBytes(Unknown Source) ~[na:1.7.0_45]
at java.io.RandomAccessFile.write(Unknown Source) ~[na:1.7.0_45]
The error occurred on Windows (7 or 10)(?) from within webstart, configured with these parameters:
<java href="http://java.sun.com/products/autodl/j2se" initial-heap-size="768m" java-vm-args="" max-heap-size="900m" version="1.7+"/>
If you want to track the JVM memory around a certain method or lines of code, you can use the Runtime API.
Runtime runtime = Runtime.getRuntime();
NumberFormat format = NumberFormat.getInstance();
long maxMemory = runtime.maxMemory();
long allocatedMemory = runtime.totalMemory();
long freeMemory = runtime.freeMemory();
System.out.println("free memory: " + format.format(freeMemory / 1024));
System.out.println("allocated memory: " + format.format(allocatedMemory / 1024));
System.out.println("max memory: " + format.format(maxMemory / 1024));
System.out.println("total free memory: " + format.format((freeMemory + (maxMemory - allocatedMemory)) / 1024));
I ended up using this code which asks the OS for RSS and Peak memory usage. It was straightforward for me to add since I already have a SWIG module set up. The code might not be threadsafe since I hit a random malloc exception when I was testing, meaning I'm not sure I want to keep it in there.
I'm really surprised the JVM doesn't provide a way to do this. Please someone let me know if there's a way.
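On Linux, one way to ask the OS for this without a SWIG module is to parse /proc/self/status, which reports the current RSS (VmRSS) and the peak RSS (VmHWM). A minimal sketch (Linux-only; the parseKb helper is my own illustration, not a JDK API):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class RssLogger {
    // Extracts a value in kB from a "VmRSS:   123456 kB"-style line
    // of /proc/self/status text; returns -1 if the key is absent.
    static long parseKb(String status, String key) {
        for (String line : status.split("\n")) {
            if (line.startsWith(key + ":")) {
                // strip the key, whitespace, and the "kB" suffix
                return Long.parseLong(line.replaceAll("[^0-9]", ""));
            }
        }
        return -1;
    }

    public static void main(String[] args) throws IOException {
        String status = new String(Files.readAllBytes(Paths.get("/proc/self/status")));
        System.out.println("RSS (kB):      " + parseKb(status, "VmRSS"));
        System.out.println("Peak RSS (kB): " + parseKb(status, "VmHWM"));
    }
}
```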
Here is a snippet of code that builds a string with the amount of memory used (MB) out of total memory (MB). You can then log it however you want!
long mb = 1024L * 1024L;
Runtime instance = Runtime.getRuntime();
long used = instance.totalMemory() - instance.freeMemory();
String mem = "Memory Used: " + used / mb + "MB ("
        + (int) (used * 100.0 / instance.totalMemory()) + "%)";
public final void writeBytes(String s) throws IOException {
    int len = s.length();
    byte[] b = new byte[len];
    s.getBytes(0, len, b, 0);
    writeBytes(b, 0, len);
}
Looking at the source, it is possible that a sufficiently large String caused the out-of-memory error. I suspect that your heap log was taken before this happened, which explains the free heap space you saw. I suggest you verify whether this is the case and, if so, limit the String size and/or increase the heap size.
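To see why a large String can blow the heap here: as the source above shows, writeBytes(String) allocates a fresh byte[] the size of the string on every call, so each write temporarily needs that many extra bytes of heap. A small illustration with a modest size (the real failure would need a string close to the remaining heap):

```java
// Illustration of the transient allocation inside writeBytes(String):
// a fresh byte[] the size of the string is created on every call.
public class WriteBytesAllocation {
    public static void main(String[] args) {
        String s = new String(new char[10_000_000]); // a 10M-character string
        byte[] b = new byte[s.length()];             // the extra ~10 MB writeBytes would allocate
        System.out.println("transient bytes needed: " + b.length);
    }
}
```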
I am seeing an OutOfMemory problem, and I am not sure whether it is the perm gen area or the heap space. The error message does not say which area ran out of memory.
Here is a partial stack trace:
The following is information that may be useful to the developer of BETWEENNESS:
java.lang.OutOfMemoryError
    at java.util.zip.ZipFile.open(Native Method)
    at java.util.zip.ZipFile.<init>(ZipFile.java:127)
    at java.util.zip.ZipFile.<init>(ZipFile.java:143)
    at com..util.internal.ZipFiles.unzip(ZipFiles.java:91)
I looked at the heap space just before it ran out of memory using the jmap -heap command:
using thread-local object allocation.
Parallel GC with 23 thread(s)
Heap Configuration:
MinHeapFreeRatio = 40
MaxHeapFreeRatio = 70
MaxHeapSize = 31675383808 (30208.0MB)
NewSize = 1310720 (1.25MB)
MaxNewSize = 17592186044415 MB
OldSize = 5439488 (5.1875MB)
NewRatio = 2
SurvivorRatio = 8
PermSize = 21757952 (20.75MB)
MaxPermSize = 536870912 (512.0MB)
Heap Usage:
PS Young Generation
Eden Space:
capacity = 9762177024 (9309.9375MB)
used = 7286558512 (6949.003707885742MB)
free = 2475618512 (2360.933792114258MB)
74.64071276403028% used
From Space:
capacity = 396230656 (377.875MB)
used = 340623360 (324.84375MB)
free = 55607296 (53.03125MB)
85.96592788620576% used
To Space:
capacity = 398131200 (379.6875MB)
used = 0 (0.0MB)
free = 398131200 (379.6875MB)
0.0% used
PS Old Generation
capacity = 1992163328 (1899.875MB)
used = 1455304512 (1387.8865356445312MB)
free = 536858816 (511.98846435546875MB)
73.05146578825087% used
PS Perm Generation
capacity = 418578432 (399.1875MB)
used = 418567008 (399.1766052246094MB)
free = 11424 (0.010894775390625MB)
99.99727076238844% used
Also, I had supplied the following arguments to the JVM: -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/, but I do not see any heap dump.
My question is: why is the heap dump not being generated, and how do I figure out which part of the JVM is getting full?
Thanks.
The information you provided shows that your perm gen is 99% full and your heap is already 73% full, so increasing both would not be a bad idea.
Further, you could activate the garbage collector's logging with -XX:+PrintGCDetails to get detailed information on how your JVM is using memory. Additionally activate -XX:+PrintGCTimeStamps and -XX:+PrintGCDateStamps. -Xloggc:$filename sends the logs to a file, which you can easily analyze with something like the IBM PMAT tool or GCViewer.
Additionally you should consider using VisualVM to monitor your application while running.
Besides:
A colleague of mine found a smart way to get a heap dump many times faster through gdb:
cat > /tmp/dodump <<EOF
gcore jvm.core
detach
quit
EOF
time gdb --batch --command=/tmp/dodump --pid=`pgrep java`
jmap -dump:format=b,file=jvm.hprof `which java` jvm.core
rm jvm.core
gzip -9 jvm.hprof
Source
Credits go fully to him.
According to the stack trace, the OutOfMemoryError happens inside the ZipFile.open() native method. This means the problem has nothing to do with the Java heap size nor with PermGen. It is likely related to the ZIP cache, which is malloc'ed or mmap'ed internally by the JDK library.
Try adding the -Dsun.zip.disableMemoryMapping=true JVM option to see if it helps.
How large is the ZIP file being opened?
I have 16 GB RAM on my Linux machine and have set the maximum Java heap memory to 4 GB using the -Xmx4096m argument. But I am getting the following error when I start my process.
Invalid maximum heap size: -Xmx4096m The specified size exceeds the
maximum representable size. Could not create the Java virtual machine.
It works fine when I set the value to 2048m.
Is there any other configuration parameter that I need to change to increase the heap size?
Thanks in advance!
It's not only about how much RAM you have.
On a 32-bit machine, the maximum heap available is about 1628 MB.
On a 64-bit machine, it is 2^64 bytes (theoretically), but there are practical limitations.
What OS are you using?
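You can check from Java which data model the JVM itself is running under (a 32-bit JVM keeps its 32-bit heap limit even on a 64-bit OS) and what heap it actually got. Note that sun.arch.data.model is a HotSpot-specific property and may be absent on other JVMs:

```java
public class BitnessCheck {
    public static void main(String[] args) {
        // "32" or "64" on HotSpot; may be null on other JVMs
        System.out.println("JVM data model: " + System.getProperty("sun.arch.data.model"));
        System.out.println("os.arch: " + System.getProperty("os.arch"));

        // The heap limit that actually took effect (reflects -Xmx)
        long maxHeapMb = Runtime.getRuntime().maxMemory() / (1024 * 1024);
        System.out.println("max heap: " + maxHeapMb + " MB");
    }
}
```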
Take a look at the Oracle Hotspot FAQ. Look out for the following section:
Why can't I get a larger heap with the 32-bit JVM?
If you are using a 32-bit system, lower it to 1.6G. For 64-bit systems, check the supported systems list in the link provided.
Try -Xmx4066M
It's a bit less than 4G
After typing at the command prompt (cmd):
java -XX:+PrintFlagsFinal -version | findstr /i "HeapSize PermSize ThreadStackSize"
it shows me the following result.
To get from bytes to MiB, you have to divide by 1024^2 = 1,048,576 bytes/MiB:
uintx HeapSizePerGCThread        = 87241520 bytes (about 83.2 MiB)
uintx InitialHeapSize            = 268435456 bytes (256 MiB)
uintx LargePageHeapSizeThreshold = 134217728 bytes (128 MiB)
uintx MaxHeapSize                = 4263510016 bytes (4066 MiB, just under 4 GiB)
When I tried -Xmx4G, the display showed MaxHeapSize = 0, but -Xmx4066M works for me.
My problem is with InitialHeapSize: I tried -Xms2G and -Xms2000M, and neither works for me; it resets every time.
After I make a new project
public class MaxMemory {

    /**
     * @param args the command line arguments
     */
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long totalMem = rt.totalMemory();
        long maxMem = rt.maxMemory();
        long freeMem = rt.freeMemory();
        long usedMem = totalMem - freeMem;
        double megs = 1048576.0;
        System.out.println("Max Memory: " + maxMem + " (" + (maxMem / megs) + " MiB)");
        System.out.println("Total Memory: " + totalMem + " (" + (totalMem / megs) + " MiB)");
        System.out.println("Used: " + usedMem + " (" + (usedMem / megs) + " MiB)");
        System.out.println("Free Memory: " + freeMem + " (" + (freeMem / megs) + " MiB)");
    }
}
it gives me the following result in one of my programs:
Max Memory: 259522560 (247.5 MiB)
Total Memory: 259522560 (247.5 MiB)
Used: 246496784 (235.07765197753906 MiB)
Free Memory: 13025776 (12.422348022460938 MiB)
and I cannot increase this "Max Memory", which I think is the InitialHeapSize.
And I've tried to change it in
Control Panel > Java Control Panel > Java (tab) > View (button) > Runtime Parameters > -Xms2000M
and of course I clicked Apply before exiting, but nothing happened; I still run out of memory.