Why does the JVM always do GC at startup? - java

My application always does a GC at startup, even when no requests have arrived yet.
JVM options :
/opt/java/bin/java -server -XX:+UnlockExperimentalVMOptions -XX:+UseG1GC -Xms4g -Xmx4g -XX:MaxMetaspaceSize=128m -Xss256k -XX:G1ReservePercent=10 -XX:MaxGCPauseMillis=100 -XX:+AggressiveOpts -XX:+UseStringDeduplication -XX:+UseBiasedLocking -XX:+UseFastAccessorMethods -XX:+DisableExplicitGC -XX:+PrintAdaptiveSizePolicy -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime -XX:+PrintGCDateStamps -XX:+PrintReferenceGC -XX:G1LogLevel=finest -XX:+PrintGCCause -verbose:gc -Xloggc:/data/logs/shiva-rds-nio/gc.log -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/data/logs/shiva-rds-nio -Djava.library.path=/opt/shiva-rds/lib -DSHIVA_RDS_HOME=/opt/shiva-rds -Dlogback.configurationFile=/opt/shiva-rds/conf/logback.xml -DLOG_HOME=/data/logs/shiva-rds-nio -jar lib/shiva-rds-proxy-2.3.1130-RELEASE.jar nio
gc logs:
Java HotSpot(TM) 64-Bit Server VM (25.111-b14) for linux-amd64 JRE (1.8.0_111-b14), built on Sep 22 2016 16:14:03 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)
Memory: 4k page, physical 65937908k(5662448k free), swap 0k(0k free)
CommandLine flags: -XX:+AggressiveOpts -XX:+DisableExplicitGC -XX:G1LogLevel=finest -XX:G1ReservePercent=10 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/data/logs/shiva-rds-nio -XX:InitialHeapSize=4294967296 -XX:MaxGCPauseMillis=100 -XX:MaxHeapSize=4294967296 -XX:MaxMetaspaceSize=134217728 -XX:+PrintAdaptiveSizePolicy -XX:+PrintGC -XX:+PrintGCApplicationStoppedTime -XX:+PrintGCCause -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -XX:+PrintReferenceGC -XX:+PrintTenuringDistribution -XX:ThreadStackSize=256 -XX:+UnlockExperimentalVMOptions -XX:+UseBiasedLocking -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseFastAccessorMethods -XX:+UseG1GC -XX:+UseStringDeduplication
0.022: [G1Ergonomics (Heap Sizing) expand the heap, requested expansion amount: 4294967296 bytes, attempted expansion amount: 4294967296 bytes]
2016-12-13T15:06:54.946+0800: 0.279: Total time for which application threads were stopped: 0.0001481 seconds, Stopping threads took: 0.0000189 seconds
2016-12-13T15:06:55.168+0800: 0.501: Total time for which application threads were stopped: 0.0002564 seconds, Stopping threads took: 0.0000233 seconds
2016-12-13T15:06:55.173+0800: 0.506: Total time for which application threads were stopped: 0.0000858 seconds, Stopping threads took: 0.0000148 seconds
2016-12-13T15:06:55.302+0800: 0.635: Total time for which application threads were stopped: 0.0003145 seconds, Stopping threads took: 0.0000431 seconds
2016-12-13T15:06:55.388+0800: 0.721: Total time for which application threads were stopped: 0.0001337 seconds, Stopping threads took: 0.0000349 seconds
2016-12-13T15:06:55.460+0800: 0.793: [GC pause (G1 Evacuation Pause) (young)
Desired survivor size 13631488 bytes, new threshold 15 (max 15)
0.793: [G1Ergonomics (CSet Construction) start choosing CSet, _pending_cards: 0, predicted base time: 10.00 ms, remaining time: 90.00 ms, target pause time: 100.00 ms]
0.793: [G1Ergonomics (CSet Construction) add young regions to CSet, eden: 102 regions, survivors: 0 regions, predicted young region time: 3100.70 ms]
0.793: [G1Ergonomics (CSet Construction) finish choosing CSet, eden: 102 regions, survivors: 0 regions, old: 0 regions, predicted pause time: 3110.70 ms, target pause time: 100.00 ms]
, 0.1121020 secs]
[Parallel Time: 108.1 ms, GC Workers: 18]

From reading the JVM source code, there is a subtle distinction between the -Xms and -XX:InitialHeapSize options.
The first is the minimum heap size, implying that the JVM should never make the heap any smaller.
The second one is the initial heap size, implying that the JVM could make the heap smaller.
There is also some rather convoluted logic for determining what a "reasonable" initial size should be, which looks like it could override the InitialHeapSize.
I suspect that what happens in your case is that the JVM is using a smaller initial size than what you specified (-XX:InitialHeapSize=4294967296), and then resizing. Presumably the GC run and the resize are happening while your application is starting.
If you (really) wanted to avoid garbage collection during startup, I'd suggest using -Xms instead. However, that will cause the JVM to occupy more (virtual) memory, which is not necessarily a good thing.
UPDATE - apparently that is incorrect.
I now see that you are using -Xms on the command line. I guess that means that the "reasonableness" logic must apply to that option as well. If so, I doubt that there is anything you can do to avoid the GC. (But I wouldn't worry about that.)
The best I can say is that the meanings of -Xms and -XX:InitialHeapSize and their respective behaviors are not clearly specified.
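One way to see what the JVM actually resolves these two options to is the standard -XX:+PrintFlagsFinal flag (a hedged check only; the exact output format varies by HotSpot build):
# print the final heap-related flag values HotSpot settled on
java -Xms4g -Xmx4g -XX:+PrintFlagsFinal -version | grep -E 'InitialHeapSize|MaxHeapSize'
Comparing InitialHeapSize there against the value printed in the gc.log "CommandLine flags" header shows whether the "reasonableness" logic changed anything.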

Related

LineageOS build error OutOfMemoryError : Java Heap Space

I've been trying to build LineageOS 18.1 but keep running into
OutOfMemoryError : Java Heap Space
I've increased the heap size with -Xxm25g and I can confirm it with java -version that the new heap size is indeed picked up by java, which shows Picked up _JAVA_OPTIONS: -Xxm25g
I've also set up a /swapfile of 40GB.
I have an 8GB RAM iMac with Ubuntu 18.04.6 on VMware Fusion, using 4 processors.
No matter how much I increase the -Xxm size (I even tried -Xxm50g), it still always errors out at this point of the build process:
//frameworks/base:api-stubs-docs-non-updatable metalava merged [common]
OutOfMemoryError : Java Heap Space
Is there a way to tweak the build process somewhere to get it to build?
I've read elsewhere that reducing the number of processors might also help, so I've tried to reduce it to just 1 with brunch -j1 <target_name>, but that doesn't work either; I believe Lineage uses all available processors and is not accepting the -j argument. Is there a way to tell brunch to use just 1 processor?
I know 8GB of RAM is not the ideal build setup, but I've read elsewhere that it's possible. Thanks for any pointers/help.
Here's the memory statistics right before, during and after the failure :
dev@ubuntu:~$ free -h
total used free shared buff/cache available
Mem: 7.4G 3.9G 2.5G 5.1M 1.0G 3.2G
Swap: 49G 495M 49G
dev@ubuntu:~$ free -h
total used free shared buff/cache available
Mem: 7.4G 3.9G 2.4G 5.1M 1.0G 3.2G
Swap: 49G 495M 49G
dev@ubuntu:~$ free -h
total used free shared buff/cache available
Mem: 7.4G 4.2G 2.0G 5.1M 1.2G 3.0G
Swap: 49G 495M 49G
dev@ubuntu:~$ free -h
total used free shared buff/cache available
Mem: 7.4G 4.2G 2.0G 5.1M 1.2G 2.9G
Swap: 49G 495M 49G
dev@ubuntu:~$ free -h
total used free shared buff/cache available
Mem: 7.4G 4.4G 1.6G 5.1M 1.4G 2.7G
Swap: 49G 495M 49G
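As an aside (not from the original question): a more direct way to confirm the maximum heap the launcher actually applies than reading the _JAVA_OPTIONS banner is the standard -XX:+PrintFlagsFinal flag (output format varies by JDK build):
# show the effective maximum heap size, in bytes
java -XX:+PrintFlagsFinal -version | grep MaxHeapSize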

java process cannot be started

Background: over the past few days, the Java services on my Linux development machine have been killed by the system one by one; the system logs show OOM as the cause. Now I can't start a Java process if I set the initial heap too large.
The usual troubleshooting steps haven't turned up anything. The development machine is a virtual machine (I can't rule out a problem with the physical host; a colleague's machine, provisioned at the same time as mine, has the same problem). Total memory is about 6G, and buff/cache + free add up to about 5G. Thank you all for your help.
The startup crash log is attached; the system information and JDK information are in there.
Start-up log:
[~ jdk1.8.0_281]$java -Xms1000m -version
Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000a5400000, 699400192, 0) failed; error='Cannot allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 699400192 bytes for committing reserved memory.
# An error report file with more information is saved as:
# /tmp/hs_err_pid7617.log
Memory usage is as follows:
[~ jdk1.8.0_281]$free -h
total used free shared buff/cache available
Mem: 5.7G 502M 213M 4.6G 5.0G 328M
Swap: 0B 0B 0B
The I/O situation is as follows:
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
vda 0.03 2.77 0.84 2.80 48.95 97.58 80.57 0.05 22.14 65.48 9.22 0.55 0.20
scd0 0.00 0.00 0.00 0.00 0.01 0.00 66.96 0.00 0.34 0.34 0.00 0.24 0.00
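Given the free output above (about 4.6G in shared and only 328M available, against a ~667 MB commit request), one hedged thing to check is what is holding that shared memory, assuming it lives in tmpfs mounts:
# show how much memory tmpfs mounts such as /dev/shm or /tmp are consuming
df -h -t tmpfs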

Eclipse Insufficient memory for the Java Runtime Environment

I have run into a problem where the Java virtual machine simply does not have enough memory to compile. What should I do about this?
The JVM is launched by Eclipse and it is already the 64-bit server VM. I am running approximately 600 lines of code in one class, with dependencies on a few other classes, with data read from and saved to a text file.
The error report is shown below
There is insufficient memory for the Java Runtime Environment to continue.
Native memory allocation (malloc) failed to allocate 1048576 bytes for
AllocateHeap
Possible reasons:
The system is out of physical RAM or swap space
In 32 bit mode, the process size limit was hit
Possible solutions:
Reduce memory load on the system
Increase physical memory or swap space
Check if swap backing store is full
Use 64 bit Java on a 64 bit OS
Decrease Java heap size (-Xmx/-Xms)
Decrease number of Java threads
Decrease Java thread stack sizes (-Xss)
Set larger code cache with -XX:ReservedCodeCacheSize=
This output file may be truncated or incomplete.
Out of Memory Error (memory/allocation.inline.hpp:61), pid=25196,
tid=0x0000000000006218
JRE version: (8.0_144-b01) (build )
Java VM: Java HotSpot(TM) 64-Bit Server VM (25.144-b01 mixed mode windows-
amd64 compressed oops)
Failed to write core dump. Minidumps are not enabled by default on client
versions of Windows
--------------- T H R E A D ---------------
Current thread (0x00000000023a0800): JavaThread "Unknown thread"
[_thread_in_vm, id=25112, stack(0x00000000021b0000,0x00000000022b0000)]
Stack: [0x00000000021b0000,0x00000000022b0000]
[error occurred during error reporting (printing stack bounds), id
0xc0000005]
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native
code)
--------------- P R O C E S S ---------------
Java Threads: ( => current thread )
Other Threads:
=>0x00000000023a0800 (exited) JavaThread "Unknown thread" [_thread_in_vm,
id=25112, stack(0x00000000021b0000,0x00000000022b0000)]
VM state:not at safepoint (normal execution)
VM Mutex/Monitor currently owned by a thread: None
Heap:
PSYoungGen total 37888K, used 655K [0x00000000d6400000,
0x00000000d8e00000, 0x0000000100000000)
eden space 32768K, 2% used
[0x00000000d6400000,0x00000000d64a3d80,0x00000000d8400000)
from space 5120K, 0% used
[0x00000000d8900000,0x00000000d8900000,0x00000000d8e00000)
to space 5120K, 0% used
[0x00000000d8400000,0x00000000d8400000,0x00000000d8900000)
ParOldGen total 86016K, used 0K [0x0000000082c00000,
0x0000000088000000, 0x00000000d6400000)
object space 86016K, 0% used
[0x0000000082c00000,0x0000000082c00000,0x0000000088000000)
Metaspace used 746K, capacity 4480K, committed 4480K, reserved
1056768K
class space used 75K, capacity 384K, committed 384K, reserved 1048576K
Card table byte_map: [0x0000000011860000,0x0000000011c50000] byte_map_base:
0x000000001144a000
Marking Bits: (ParMarkBitMap*) 0x000000006bb9d850
Begin Bits: [0x00000000122f0000, 0x0000000014240000)
End Bits: [0x0000000014240000, 0x0000000016190000)
Polling page: 0x0000000000630000
CodeCache: size=245760Kb used=328Kb max_used=328Kb free=245431Kb
bounds [0x00000000024a0000, 0x0000000002710000, 0x00000000114a0000]
total_blobs=58 nmethods=0 adapters=38
compilation: enabled
Compilation events (0 events):
No events
GC Heap History (0 events):
No events
Deoptimization events (0 events):
No events
Internal exceptions (0 events):
No events
Events (10 events):
Event: 0.310 loading class java/lang/Short
Event: 0.311 loading class java/lang/Short done
Event: 0.311 loading class java/lang/Integer
Event: 0.312 loading class java/lang/Integer done
Event: 0.312 loading class java/lang/Long
Event: 0.313 loading class java/lang/Long done
Event: 0.320 loading class java/lang/NullPointerException
Event: 0.320 loading class java/lang/NullPointerException done
Event: 0.320 loading class java/lang/ArithmeticException
Event: 0.320 loading class java/lang/ArithmeticException done
--------------- S Y S T E M ---------------
OS: Windows 10.0 , 64 bit Build 14393 (10.0.14393.1198)
CPU:total 4 (initial active 4) (2 cores per cpu, 2 threads per core) family
6 model 78 stepping 3, cmov, cx8, fxsr, mmx, sse, sse2, sse3, ssse3, sse4.1,
sse4.2, popcnt, avx, avx2, aes, clmul, erms, 3dnowpref, lzcnt, ht, tsc,
tscinvbit, bmi1, bmi2, adx
Memory: 4k page, physical 8204552k(3151664k free), swap 33370376k(6360k
free)
vm_info: Java HotSpot(TM) 64-Bit Server VM (25.144-b01) for windows-amd64
JRE (1.8.0_144-b01), built on Jul 21 2017 21:57:33 by "java_re" with MS VC++
10.0 (VS2010)
time: Sat Nov 04 21:20:44 2017
elapsed time: 0 seconds (0d 0h 0m 0s)
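The report's own suggestions boil down to giving the launched JVM less to reserve. A hedged sketch of VM arguments along those lines (values are illustrative, not taken from the report), which in Eclipse can be entered under Run Configurations > Arguments > VM arguments:
-Xms128m -Xmx512m -Xss512k
Since the failure here is a native malloc of only 1 MB, freeing up system RAM or swap (the first two suggestions in the report) is just as relevant as shrinking the Java heap.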

My VPS has enough memory, but the VM cannot run

It happened on my VPS.
[root@kunphen ~]# free -m
total used free shared buffers cached
Mem: 12067 87 11980 0 0 0
-/+ buffers/cache: 87 11980
Swap: 0 0 0
[root@kunphen ~]# java -version
Error occurred during initialization of VM
Could not reserve enough space for object heap
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
When I change it to "java -Xms16m -Xmx16m -version", it works.
I tried many times; the largest size that works is 22m, yet my memory still shows plenty free.
run it like this:
_JAVA_OPTIONS="-Xmx384m" play <your commands>
or play "start 60000" -Xms64m -Xmx128m -server

UseConcMarkSweepGC vs UseParallelGC

I'm currently having problems with very long garbage collection times; please see the following. My current setup is -Xms1g and -Xmx3g, my application is using Java 1.4.2, and I don't have any garbage collection flags set. By the looks of it, 3 GB is not enough and I really have a lot of objects to garbage collect.
question:
should I change my garbage collection algorithm?
What should I use? Is it better to use -XX:+UseParallelGC or -XX:+UseConcMarkSweepGC,
or should I use this combination:
-XX:+UseParNewGC -XX:+UseConcMarkSweepGC
The objects occupying the memory are largely report data, not cache data. Also, the machine has 16 GB of memory and I plan to increase the heap to 8 GB.
What is the difference between the two options? I still find it hard to understand.
The machine has multiple processors. I can take pauses of up to 5 seconds, but 30 to 70 seconds is really hard.
Thanks for the help.
Line 151493: [14/Jan/2012:11:47:48] WARNING ( 8710): CORE3283: stderr: [GC 1632936K->1020739K(2050552K), 1.2462436 secs]
Line 157710: [14/Jan/2012:11:53:38] WARNING ( 8710): CORE3283: stderr: [GC 1670531K->1058755K(2050552K), 1.1555375 secs]
Line 163840: [14/Jan/2012:12:00:42] WARNING ( 8710): CORE3283: stderr: [GC 1708547K->1097282K(2050552K), 1.1503118 secs]
Line 169811: [14/Jan/2012:12:08:02] WARNING ( 8710): CORE3283: stderr: [GC 1747074K->1133764K(2050552K), 1.1017273 secs]
Line 175879: [14/Jan/2012:12:14:18] WARNING ( 8710): CORE3283: stderr: [GC 1783556K->1173103K(2050552K), 1.2060946 secs]
Line 176606: [14/Jan/2012:12:15:42] WARNING ( 8710): CORE3283: stderr: [Full GC 1265571K->1124875K(2050552K), 25.0670316 secs]
Line 184755: [14/Jan/2012:12:25:53] WARNING ( 8710): CORE3283: stderr: [GC 2007435K->1176457K(2784880K), 1.2483770 secs]
Line 193087: [14/Jan/2012:12:37:09] WARNING ( 8710): CORE3283: stderr: [GC 2059017K->1224285K(2784880K), 1.4739291 secs]
Line 201377: [14/Jan/2012:12:51:08] WARNING ( 8710): CORE3283: stderr: [Full GC 2106845K->1215242K(2784880K), 30.4016208 secs]
xaa:1: [11/Oct/2011:16:00:28] WARNING (17125): CORE3283: stderr: [Full GC 3114936K->2985477K(3114944K), 53.0468651 secs] --> garbage collection is occurring too often, as the timestamps show; the amount actually collected is quite low, and the occupancy stays quite close to the heap size. The 53 seconds is effectively one long pause.
xaa:2087: [11/Oct/2011:16:01:35] WARNING (17125): CORE3283: stderr: [Full GC 3114943K->2991338K(3114944K), 58.3776291 secs]
xaa:3897: [11/Oct/2011:16:02:33] WARNING (17125): CORE3283: stderr: [Full GC 3114940K->2997077K(3114944K), 55.3197974 secs]
xaa:5597: [11/Oct/2011:16:03:00] WARNING (17125): CORE3283: stderr: [Full GC[Unloading class sun.reflect.GeneratedConstructorAccessor119]
xaa:7936: [11/Oct/2011:16:04:36] WARNING (17125): CORE3283: stderr: [Full GC 3114938K->3004947K(3114944K), 55.5269911 secs]
xaa:9070: [11/Oct/2011:16:05:53] WARNING (17125): CORE3283: stderr: [Full GC 3114937K->3012793K(3114944K), 70.6993328 secs]
Since you have extremely long GC pauses, I don't think that changing the GC algorithm would help.
Note that it's highly suspicious that you have only full collections. Perhaps you need to increase the size of the young generation and/or the survivor space.
See also:
Tuning Garbage Collection with the 1.4.2 Java[tm] Virtual Machine
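A rough sketch of what that could look like on a 1.4.2-era HotSpot (the flag names are real, but the sizes are purely illustrative and would need tuning against your own logs; <your-app> is a placeholder):
java -Xms3072m -Xmx3072m -XX:NewSize=1024m -XX:MaxNewSize=1024m -XX:SurvivorRatio=6 -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -verbose:gc -XX:+PrintGCDetails <your-app>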
Your heap is too small. The pause is so large because it's busy repeatedly scanning the entire heap desperately looking for anything to collect.
You need to do one, or possibly more, of the following:
find and fix a memory leak
tune the application to use less memory
configure the JVM to use a bigger heap
Are you tied to 1.4.2 for some reason? GC implementations really have moved on since then so you should consider upgrading if possible. I realise this may be a non trivial undertaking but it's worth considering anyway.
If you have a high survival rate, your heap may be too large. The larger the heap, the longer the JVM can go without GC'ing, so once a collection hits, it has that much more to move around.
Step 1:
Make sure that you have set enough memory for your application.
Make sure that you don't have memory leaks in your application. Eclipse Memory Analyzer Tool or visualvm will help you to identify leaks in your application.
Step 2:
If you don't have any issues with Step 1 with respect to memory leaks, refer to the Oracle documentation on use cases for each garbage collection algorithm in the "Java Garbage Collectors" section and the GC tuning article.
Since you have decided to configure larger heaps (>= 8 GB), G1GC should work fine for you. Refer to this related SE question on fine-tuning the key parameters:
Java 7 (JDK 7) garbage collection and documentation on G1
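As a hedged sketch only (sizes illustrative; note that G1 requires moving well past 1.4.2, since it only became a supported collector in Java 7; <your-app> is a placeholder):
java -Xms8g -Xmx8g -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -verbose:gc <your-app>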
