LineageOS build error: OutOfMemoryError: Java heap space

I've been trying to build LineageOS 18.1 but keep running into
OutOfMemoryError: Java heap space
I've increased the heap size with -Xmx25g, and I can confirm with java -version that the new heap size is indeed picked up: it shows Picked up _JAVA_OPTIONS: -Xmx25g
I've also set up a 40GB /swapfile.
I have an 8GB RAM iMac running Ubuntu 18.04.6 on VMware Fusion, using 4 processors.
No matter how much I increase the -Xmx size (I even tried -Xmx50g), the build always errors out at this point:
//frameworks/base:api-stubs-docs-non-updatable metalava merged [common]
OutOfMemoryError: Java heap space
Is there a way to tweak the build process somewhere to get it to build?
I've read elsewhere that reducing the number of processors might also help, so I've tried reducing it to just 1 with brunch -j1 <target_name>, but that doesn't work either: I believe Lineage uses all available processors, so the -j argument isn't being accepted. Is there a way to tell brunch to use just 1 processor? (See the sketch below for the kind of workaround I have in mind.)
I know 8GB of RAM is not the ideal build setup, but I've read elsewhere that it's possible. Thanks for any pointers.
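For reference, this is the kind of workaround I have in mind: a minimal sketch, assuming brunch is just breakfast plus mka under the hood and that m accepts -j the way make does (the 4g heap value is only illustrative):
export _JAVA_OPTIONS="-Xmx4g"   # heap for every JVM the build spawns; note the flag spelling is -Xmx
source build/envsetup.sh
breakfast <target_name>
m -j1 bacon                     # build the flashable zip with a single job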
Here are the memory statistics right before, during, and after the failure:
dev@ubuntu:~$ free -h
              total        used        free      shared  buff/cache   available
Mem:           7.4G        3.9G        2.5G        5.1M        1.0G        3.2G
Swap:           49G        495M         49G
dev@ubuntu:~$ free -h
              total        used        free      shared  buff/cache   available
Mem:           7.4G        3.9G        2.4G        5.1M        1.0G        3.2G
Swap:           49G        495M         49G
dev@ubuntu:~$ free -h
              total        used        free      shared  buff/cache   available
Mem:           7.4G        4.2G        2.0G        5.1M        1.2G        3.0G
Swap:           49G        495M         49G
dev@ubuntu:~$ free -h
              total        used        free      shared  buff/cache   available
Mem:           7.4G        4.2G        2.0G        5.1M        1.2G        2.9G
Swap:           49G        495M         49G
dev@ubuntu:~$ free -h
              total        used        free      shared  buff/cache   available
Mem:           7.4G        4.4G        1.6G        5.1M        1.4G        2.7G
Swap:           49G        495M         49G

Related

Java process cannot be started

Background: over the past few days, the Java services on my Linux development machine have been getting killed by the system one by one; the system logs show the OOM killer was responsible. Now I can't start a Java process at all if I set the initial heap too large.
The usual troubleshooting steps haven't turned up anything. The development machine is a virtual machine (I can't rule out a problem with the physical host it runs on; a colleague's machine, provisioned at the same time as mine, has the same problem). Total memory is about 6G, and buff/cache plus free add up to about 5G. Thank you all for your help.
The crash logs from startup are attached, along with the system information and JDK information.
Start-up log:
[~ jdk1.8.0_281]$java -Xms1000m -version
Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000a5400000, 699400192, 0) failed; error='Cannot allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 699400192 bytes for committing reserved memory.
# An error report file with more information is saved as:
# /tmp/hs_err_pid7617.log
Memory usage is as follows:
[~ jdk1.8.0_281]$free -h
              total        used        free      shared  buff/cache   available
Mem:           5.7G        502M        213M        4.6G        5.0G        328M
Swap:            0B          0B          0B
The I/O situation is as follows:
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
vda 0.03 2.77 0.84 2.80 48.95 97.58 80.57 0.05 22.14 65.48 9.22 0.55 0.20
scd0 0.00 0.00 0.00 0.00 0.01 0.00 66.96 0.00 0.34 0.34 0.00 0.24 0.00
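One thing that stands out in the free output above is shared at 4.6G; on Linux that is usually tmpfs or SysV shared memory, which would explain why only 328M is available despite 5.0G of buff/cache. A quick check with standard tools (the tmpfs suspicion is only a guess):
df -h -t tmpfs   # how full each tmpfs mount is; large /dev/shm or /tmp usage shows up under "shared"
ipcs -m          # SysV shared memory segments and their sizes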

Jenkins - Unable to create new native thread

I did a clean install on my Debian 9 VPS server with 4GB RAM and 2 CPUs.
The installation is successful, but when I configure any Maven project (on Java 8), I get the following error:
Exception in thread "main" java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at jenkins.maven3.agent.Maven35Main.main(Maven35Main.java:137)
at jenkins.maven3.agent.Maven35Main.main(Maven35Main.java:65)
Caused by: java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:717)
at hudson.remoting.AtmostOneThreadExecutor.execute(AtmostOneThreadExecutor.java:95)
at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112)
at hudson.remoting.RemoteInvocationHandler$Unexporter.watch(RemoteInvocationHandler.java:826)
at hudson.remoting.RemoteInvocationHandler$Unexporter.access$100(RemoteInvocationHandler.java:409)
at hudson.remoting.RemoteInvocationHandler.wrap(RemoteInvocationHandler.java:166)
at hudson.remoting.Channel.<init>(Channel.java:582)
at hudson.remoting.ChannelBuilder.build(ChannelBuilder.java:360)
at hudson.remoting.Launcher.main(Launcher.java:770)
at hudson.remoting.Launcher.main(Launcher.java:751)
at hudson.remoting.Launcher.main(Launcher.java:742)
at hudson.remoting.Launcher.main(Launcher.java:738)
... 6 more
Finished: FAILURE
I made some changes, but nothing works.
Try 1 (Not Working): Ulimit Configuration
To see the current limits of the system, run ulimit -a on the command line as the user running Jenkins (usually jenkins).
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 30
file size (blocks, -f) unlimited
pending signals (-i) 30654
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 99
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 1024
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
I increased the limits by adding these lines to /etc/security/limits.conf:
jenkins soft nofile 4096
jenkins hard nofile 8192
jenkins soft nproc 30654
jenkins hard nproc 30654
Got the same error.
Try 2 (Not Working): MAVEN_OPTS
Added some options to the Maven configuration:
-Xmx256m -Xss228k
Got the same error.
Try 3 (Not Working): Default Task Max
Updated the default task max:
nano /etc/systemd/system.conf
Put the following line in the file:
DefaultTasksMax=10000
Reboot the system.
Got the same error.
I'm getting frustrated trying to run a CI environment on Debian.
Any suggestions? I'd even accept a different CI server that works similarly to Jenkins.
Reduce the thread stack size.
Edit the Jenkins configuration file:
nano /etc/default/jenkins
Change the JAVA_ARGS line:
JAVA_ARGS="-Xmx3584m -XX:MaxPermSize=512m -Xms128m -Xss256k -Djava.awt.headless=true"
(for 4GB RAM)
Restart the Jenkins service:
systemctl restart jenkins
Note: per OP @JPMG Developer's own edit to the question.
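To confirm which limit is actually being exhausted before and after a change like this, a short sketch may help (the jenkins user and the jenkins.war process pattern are assumptions about a default Debian install):
sudo -u jenkins bash -c 'ulimit -u'       # max user processes; native threads count against it
PID=$(pgrep -f jenkins.war | head -n 1)   # locate the Jenkins JVM
grep Threads /proc/$PID/status            # threads currently in use by that process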

JVM memory usage more than reported by OS [duplicate]

This question already has an answer here:
Why does a JVM report more committed memory than the linux process resident set size?
(1 answer)
Closed 3 years ago.
I have a JVM which reports committed heap memory as around 8GB (other sections should be over and above this), but my OS shows memory usage of around 5GB. I understand that usage can be more than the committed memory due to non-heap areas, metaspace, etc., but how can the usage be less than what the JVM reports?
The output of free shows memory usage of 5.5GB
#free -m
              total        used        free      shared  buff/cache   available
Mem:          24115        5536       16355          10        2223       18209
Swap:             0           0           0
Output of Native Memory Tracking (NMT) shows ~12GB reserved and ~11GB committed:
#jcmd <pid> VM.native_memory
Total: reserved=12904933KB, committed=11679661KB
- Java Heap (reserved=8388608KB, committed=8388608KB)
(mmap: reserved=8388608KB, committed=8388608KB)
- Class (reserved=1161913KB, committed=127417KB)
(classes #20704)
(malloc=2745KB #33662)
(mmap: reserved=1159168KB, committed=124672KB)
- Thread (reserved=2585224KB, committed=2585224KB)
(thread #2505)
(stack: reserved=2574004KB, committed=2574004KB)
(malloc=8286KB #12532)
(arena=2934KB #5004)
- Code (reserved=264623KB, committed=90231KB)
(malloc=15023KB #22507)
(mmap: reserved=249600KB, committed=75208KB)
- GC (reserved=378096KB, committed=378096KB)
(malloc=34032KB #45794)
(mmap: reserved=344064KB, committed=344064KB)
- Compiler (reserved=776KB, committed=776KB)
(malloc=645KB #1914)
(arena=131KB #7)
- Internal (reserved=53892KB, committed=53892KB)
(malloc=53860KB #67113)
(mmap: reserved=32KB, committed=32KB)
- Symbol (reserved=26569KB, committed=26569KB)
(malloc=22406KB #204673)
(arena=4163KB #1)
- Native Memory Tracking (reserved=6756KB, committed=6756KB)
(malloc=494KB #6248)
(tracking overhead=6262KB)
- Arena Chunk (reserved=11636KB, committed=11636KB)
(malloc=11636KB)
- Tracing (reserved=10456KB, committed=10456KB)
(malloc=10456KB #787)
- Unknown (reserved=16384KB, committed=0KB)
(mmap: reserved=16384KB, committed=0KB)
OS - Debian 9
Java -
java version "1.8.0_172"
Java(TM) SE Runtime Environment (build 1.8.0_172-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.172-b11, mixed mode)
I have read through some awesome answers like this one which explains NMT very well, but it doesn't address this issue. I would like to understand how this is possible.
Duplicate of Why does a JVM report more committed memory than the linux process resident set size?
This pertains to the difference between reserved, committed, and resident memory, as explained in the linked answer.
Closing this question.
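For anyone wanting to reproduce the comparison, a minimal sketch (it assumes the JVM was started with -XX:NativeMemoryTracking=summary; <pid> is a placeholder):
jcmd <pid> VM.native_memory summary | grep Total   # reserved/committed as the JVM sees it
grep VmRSS /proc/<pid>/status                      # resident set size as the OS sees it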

Why does the JVM always do GC at startup

My application always does GC at startup, even when no requests have arrived.
JVM options:
/opt/java/bin/java -server -XX:+UnlockExperimentalVMOptions -XX:+UseG1GC -Xms4g -Xmx4g -XX:MaxMetaspaceSize=128m -Xss256k -XX:G1ReservePercent=10 -XX:MaxGCPauseMillis=100 -XX:+AggressiveOpts -XX:+UseStringDeduplication -XX:+UseBiasedLocking -XX:+UseFastAccessorMethods -XX:+DisableExplicitGC -XX:+PrintAdaptiveSizePolicy -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime -XX:+PrintGCDateStamps -XX:+PrintReferenceGC -XX:G1LogLevel=finest -XX:+PrintGCCause -verbose:gc -Xloggc:/data/logs/shiva-rds-nio/gc.log -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/data/logs/shiva-rds-nio -Djava.library.path=/opt/shiva-rds/lib -DSHIVA_RDS_HOME=/opt/shiva-rds -Dlogback.configurationFile=/opt/shiva-rds/conf/logback.xml -DLOG_HOME=/data/logs/shiva-rds-nio -jar lib/shiva-rds-proxy-2.3.1130-RELEASE.jar nio
GC logs:
Java HotSpot(TM) 64-Bit Server VM (25.111-b14) for linux-amd64 JRE (1.8.0_111-b14), built on Sep 22 2016 16:14:03 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)
Memory: 4k page, physical 65937908k(5662448k free), swap 0k(0k free)
CommandLine flags: -XX:+AggressiveOpts -XX:+DisableExplicitGC -XX:G1LogLevel=finest -XX:G1ReservePercent=10 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/data/logs/shiva-rds-nio -XX:InitialHeapSize=4294967296 -XX:MaxGCPauseMillis=100 -XX:MaxHeapSize=4294967296 -XX:MaxMetaspaceSize=134217728 -XX:+PrintAdaptiveSizePolicy -XX:+PrintGC -XX:+PrintGCApplicationStoppedTime -XX:+PrintGCCause -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -XX:+PrintReferenceGC -XX:+PrintTenuringDistribution -XX:ThreadStackSize=256 -XX:+UnlockExperimentalVMOptions -XX:+UseBiasedLocking -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseFastAccessorMethods -XX:+UseG1GC -XX:+UseStringDeduplication
0.022: [G1Ergonomics (Heap Sizing) expand the heap, requested expansion amount: 4294967296 bytes, attempted expansion amount: 4294967296 bytes]
2016-12-13T15:06:54.946+0800: 0.279: Total time for which application threads were stopped: 0.0001481 seconds, Stopping threads took: 0.0000189 seconds
2016-12-13T15:06:55.168+0800: 0.501: Total time for which application threads were stopped: 0.0002564 seconds, Stopping threads took: 0.0000233 seconds
2016-12-13T15:06:55.173+0800: 0.506: Total time for which application threads were stopped: 0.0000858 seconds, Stopping threads took: 0.0000148 seconds
2016-12-13T15:06:55.302+0800: 0.635: Total time for which application threads were stopped: 0.0003145 seconds, Stopping threads took: 0.0000431 seconds
2016-12-13T15:06:55.388+0800: 0.721: Total time for which application threads were stopped: 0.0001337 seconds, Stopping threads took: 0.0000349 seconds
2016-12-13T15:06:55.460+0800: 0.793: [GC pause (G1 Evacuation Pause) (young)
Desired survivor size 13631488 bytes, new threshold 15 (max 15)
0.793: [G1Ergonomics (CSet Construction) start choosing CSet, _pending_cards: 0, predicted base time: 10.00 ms, remaining time: 90.00 ms, target pause time: 100.00 ms]
0.793: [G1Ergonomics (CSet Construction) add young regions to CSet, eden: 102 regions, survivors: 0 regions, predicted young region time: 3100.70 ms]
0.793: [G1Ergonomics (CSet Construction) finish choosing CSet, eden: 102 regions, survivors: 0 regions, old: 0 regions, predicted pause time: 3110.70 ms, target pause time: 100.00 ms]
, 0.1121020 secs]
[Parallel Time: 108.1 ms, GC Workers: 18]
From reading the JVM source code, there is a subtle distinction between the -Xms and -XX:InitialHeapSize options.
The first is the minimum heap size, implying that the JVM should never make the heap any smaller.
The second one is the initial heap size, implying that the JVM could make the heap smaller.
There is also some rather convoluted logic for determining what a "reasonable" initial size should be, which looks like it could override the InitialHeapSize.
I suspect that what happens in your case is that the JVM is using a smaller initial size than what you specified (-XX:InitialHeapSize=4294967296), and then resizing. Presumably the GC run and the resize are happening while your application is starting.
If you (really) wanted to avoid garbage collection during startup, I'd suggest using -Xms instead. However, that will cause the JVM to occupy more (virtual) memory, which is not necessarily a good thing.
UPDATE - apparently that is incorrect.
I now see that you are using -Xms on the command line. I guess that means that the "reasonableness" logic must apply to that option as well. If so, I doubt that there is anything you can do to avoid the GC. (But I wouldn't worry about that.)
The best I can say is that the meanings of -Xms and -XX:InitialHeapSize and their respective behaviors are not clearly specified.
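One way to see what the JVM actually settled on, rather than what was requested, is to dump the final flag values; a sketch (flag names as printed by HotSpot 8):
java -Xms4g -Xmx4g -XX:+PrintFlagsFinal -version | grep -E 'InitialHeapSize|MaxHeapSize'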

My VPS has enough memory, but the VM cannot run

This happened on my VPS.
[root@kunphen ~]# free -m
             total       used       free     shared    buffers     cached
Mem:         12067         87      11980          0          0          0
-/+ buffers/cache:         87      11980
Swap:            0          0          0
[root@kunphen ~]# java -version
Error occurred during initialization of VM
Could not reserve enough space for object heap
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
When I change it to "java -Xms16m -Xmx16m -version", it works.
I tried many times; the largest size that works is 22m, but my memory still shows plenty free.
Run it like this:
_JAVA_OPTIONS="-Xmx384m" play <your commands>
or play "start 60000" -Xms64m -Xmx128m -server
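If the VPS is OpenVZ-based (a guess, but this failure pattern with plenty of apparently free memory is typical there), the container's bean counters are what matter rather than free:
cat /proc/user_beancounters   # a non-zero failcnt on privvmpages means the container limit, not RAM, blocked the allocation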
