I did a clean install of Jenkins on my Debian 9 VPS with 4 GB RAM and 2 CPUs.
The installation is successful, but when I configure any Maven project (on Java 8) I get the following error:
Exception in thread "main" java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at jenkins.maven3.agent.Maven35Main.main(Maven35Main.java:137)
at jenkins.maven3.agent.Maven35Main.main(Maven35Main.java:65)
Caused by: java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:717)
at hudson.remoting.AtmostOneThreadExecutor.execute(AtmostOneThreadExecutor.java:95)
at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112)
at hudson.remoting.RemoteInvocationHandler$Unexporter.watch(RemoteInvocationHandler.java:826)
at hudson.remoting.RemoteInvocationHandler$Unexporter.access$100(RemoteInvocationHandler.java:409)
at hudson.remoting.RemoteInvocationHandler.wrap(RemoteInvocationHandler.java:166)
at hudson.remoting.Channel.<init>(Channel.java:582)
at hudson.remoting.ChannelBuilder.build(ChannelBuilder.java:360)
at hudson.remoting.Launcher.main(Launcher.java:770)
at hudson.remoting.Launcher.main(Launcher.java:751)
at hudson.remoting.Launcher.main(Launcher.java:742)
at hudson.remoting.Launcher.main(Launcher.java:738)
... 6 more
Finished: FAILURE
I tried several changes, but nothing works.
Try 1 (Not Working): Ulimit Configuration
To see the current limits of your system, run ulimit -a on the command line as the user running Jenkins (usually jenkins).
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 30
file size (blocks, -f) unlimited
pending signals (-i) 30654
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 99
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 1024
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
Increasing the limits by adding these lines to /etc/security/limits.conf:
jenkins soft nofile 4096
jenkins hard nofile 8192
jenkins soft nproc 30654
jenkins hard nproc 30654
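For what it's worth, after a re-login the effective values can be confirmed from a shell running as the jenkins user (a minimal sketch; the two limits shown are the ones that matter for "unable to create new native thread"):

```shell
# Sanity check (sketch): print the limits relevant to thread creation
# in the current shell. Run via `sudo -u jenkins -i` to check them for
# the jenkins user specifically.
ulimit -u   # max user processes; every Java thread counts against this
ulimit -n   # open files
```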
I get the same error.
Try 2 (Not Working): MAVEN_OPTS
Adding some options to the Maven configuration:
-Xmx256m -Xss228k
I get the same error.
Try 3 (Not Working): Default Task Max
Update Default Task Max
nano /etc/systemd/system.conf
Add the following line to the file:
DefaultTasksMax=10000
Reboot the system.
I get the same error.
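One more variant worth trying (my own suggestion, not something from the question): scope the limit to the Jenkins unit itself with a systemd drop-in instead of changing system.conf globally, e.g. a hypothetical /etc/systemd/system/jenkins.service.d/override.conf:

```ini
; Hypothetical drop-in: /etc/systemd/system/jenkins.service.d/override.conf
; TasksMax limits the jenkins unit only; LimitNPROC mirrors the
; limits.conf nproc value for processes started by this unit.
[Service]
TasksMax=10000
LimitNPROC=30654
```

followed by systemctl daemon-reload and systemctl restart jenkins.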
I am getting frustrated trying to run a CI environment on Debian.
Any suggestions? I would even accept a different CI server that works similarly to Jenkins.
Reduce the thread stack size.
Open the Jenkins configuration file:
nano /etc/default/jenkins
Change the JAVA_ARGS line:
JAVA_ARGS="-Xmx3584m -XX:MaxPermSize=512m -Xms128m -Xss256k -Djava.awt.headless=true"
(for a machine with 4 GB RAM)
Restart the Jenkins service:
systemctl restart jenkins
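The -Xss reduction is the key part: each native thread reserves its own stack outside the heap, so for a fixed amount of memory left over for stacks, a smaller -Xss allows proportionally more threads before "unable to create new native thread". A back-of-the-envelope sketch (the numbers are illustrative, per GiB of stack memory):

```shell
# Threads per GiB of stack memory for two stack sizes (illustrative).
# 1 GiB = 1048576 KiB; divide by the per-thread stack size in KiB.
echo $(( 1048576 / 1024 ))   # -Xss1m   -> ~1024 threads per GiB
echo $(( 1048576 / 256 ))    # -Xss256k -> ~4096 threads per GiB
```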
Note: per OP @JPMG Developer's own edit to the question.
Related
I've been trying to build LineageOS 18.1 but keep running into
OutOfMemoryError : Java Heap Space
I've increased the heap size with -Xxm25g, and I can confirm with java -version that the new setting is indeed picked up: it shows Picked up _JAVA_OPTIONS: -Xxm25g
I've also set up a 40 GB /swapfile.
I have an 8 GB RAM iMac with Ubuntu 18.04.6 on VMware Fusion, using 4 processors.
No matter how much I increase the -Xxm size (I even tried -Xxm50g), it always errors out at this point of the build process:
//frameworks/base:api-stubs-docs-non-updatable metalava merged [common]
OutOfMemoryError : Java Heap Space
Is there a way to tweak the build process somewhere to get it to build?
I've read elsewhere that reducing the number of processors might also help, so I've tried to reduce it to just 1 with brunch -j1 <target_name>, but that doesn't work either; I believe Lineage uses all available processors, so it's not accepting the -j argument. Is there a way to tell brunch to use just 1 processor?
I know 8 GB RAM is not the ideal build setup, but I've read elsewhere that it's possible. Thanks for any pointers.
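Two things worth double-checking here (my assumptions, since the question doesn't show the shell session): the real JVM flag is spelled -Xmx, not -Xxm, and _JAVA_OPTIONS only reaches the JVMs the build spawns if it is exported in the shell that runs brunch. A minimal sketch:

```shell
# Export so child processes (including every JVM the build starts)
# inherit the setting; the flag is -Xmx (max heap), not -Xxm.
export _JAVA_OPTIONS="-Xmx25g"
env | grep '^_JAVA_OPTIONS='   # each JVM also prints "Picked up _JAVA_OPTIONS: ..."
```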
Here's the memory statistics right before, during and after the failure :
dev@ubuntu:~$ free -h
total used free shared buff/cache available
Mem: 7.4G 3.9G 2.5G 5.1M 1.0G 3.2G
Swap: 49G 495M 49G
dev@ubuntu:~$ free -h
total used free shared buff/cache available
Mem: 7.4G 3.9G 2.4G 5.1M 1.0G 3.2G
Swap: 49G 495M 49G
dev@ubuntu:~$ free -h
total used free shared buff/cache available
Mem: 7.4G 4.2G 2.0G 5.1M 1.2G 3.0G
Swap: 49G 495M 49G
dev@ubuntu:~$ free -h
total used free shared buff/cache available
Mem: 7.4G 4.2G 2.0G 5.1M 1.2G 2.9G
Swap: 49G 495M 49G
dev@ubuntu:~$ free -h
total used free shared buff/cache available
Mem: 7.4G 4.4G 1.6G 5.1M 1.4G 2.7G
Swap: 49G 495M 49G
Background: over the past few days, the Java services on my Linux development machine have been killed by the system one by one; the system logs show they were OOM-killed. Now I can't start a Java process at all if I set the initial heap too large.
The usual troubleshooting methods show nothing. The development machine is a virtual machine (I can't rule out a problem with the physical host it runs on; a colleague's machine, provisioned at the same time as mine, has the same problem). Total memory is about 6G, and buff/cache plus free add up to about 5G. Thank you all for your help.
The crash logs at startup are in the attachment, and the system information and jdk information are in there.
Start-up log:
[~ jdk1.8.0_281]$java -Xms1000m -version
Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000a5400000, 699400192, 0) failed; error='Cannot allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 699400192 bytes for committing reserved memory.
# An error report file with more information is saved as:
# /tmp/hs_err_pid7617.log
Memory usage is as follows:
[~ jdk1.8.0_281]$free -h
total used free shared buff/cache available
Mem: 5.7G 502M 213M 4.6G 5.0G 328M
Swap: 0B 0B 0B
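The striking figure in that output is shared: 4.6G. Memory held by tmpfs/shmem counts as used and the kernel cannot reclaim it for a JVM heap, which would explain commit_memory failing despite a large buff/cache number. Pulling that column out of free-style output (a throwaway sketch, using the line pasted above):

```shell
# 'shared' is the 5th field of the Mem: line in `free -h` output
echo "Mem: 5.7G 502M 213M 4.6G 5.0G 328M" \
  | awk '/^Mem:/ {print "shared =", $5}'
```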
The io situation is as follows:
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
vda 0.03 2.77 0.84 2.80 48.95 97.58 80.57 0.05 22.14 65.48 9.22 0.55 0.20
scd0 0.00 0.00 0.00 0.00 0.01 0.00 66.96 0.00 0.34 0.34 0.00 0.24 0.00
I recently started using Docker and deployed my Java application into a Tomcat Docker container. But I hit a very specific error when NIO memory-maps a file:
File mark = new File("/location/to/docker/mounted/file");
m_markFile = new RandomAccessFile(mark, "rw");
MappedByteBuffer m_byteBuffer = m_markFile.getChannel().map(MapMode.READ_WRITE, 0, 20);
And the last function call failed as:
Caused by: java.io.IOException: Invalid argument
at sun.nio.ch.FileChannelImpl.map0(Native Method)
at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:906)
at com.dianping.cat.message.internal.MessageIdFactory.initialize(MessageIdFactory.java:127)
at com.dianping.cat.message.internal.DefaultMessageManager.initialize(DefaultMessageManager.java:197)
... 34 more
I don't know what happened. I tested it in my local Mac environment and it's OK. Within the Tomcat Docker container, if I change the file location to a normal file path, it's OK too. It seems this happens only with a Docker-mounted file.
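Whether map() works depends on the filesystem backing the path: with boot2docker the Mac folder is shared into the VM (the "Users" filesystem in the df output is likely a VirtualBox shared folder), and such share filesystems often do not support mmap, producing exactly this EINVAL. A sketch for checking the backing filesystem type of a path (run it inside the container against /data, the mount from the question; /tmp is used here as a stand-in):

```shell
# Print the filesystem type backing a path; compare the Docker-mounted
# path with a normal container path like /tmp.
stat -f -c %T /tmp
```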
Other information:
root@4520355ed3ac:/usr/local/tomcat# uname -a
Linux 4520355ed3ac 4.4.27-boot2docker #1 SMP Tue Oct 25 19:51:49 UTC 2016 x86_64 GNU/Linux
I mounted a folder from the Mac's Users directory to /data.
root@4520355ed3ac:/usr/local/tomcat# df
Filesystem 1K-blocks Used Available Use% Mounted on
none 18745336 6462240 11292372 37% /
tmpfs 509832 0 509832 0% /dev
tmpfs 509832 0 509832 0% /sys/fs/cgroup
Users 243924992 150744296 93180696 62% /data
/dev/sda1 18745336 6462240 11292372 37% /etc/hosts
shm 65536 0 65536 0% /dev/shm
Docker versions:
huanghaideMacBook-Pro:cat huanghai$ docker --version
Docker version 1.12.3, build 6b644ec
huanghaideMacBook-Pro:cat huanghai$ docker-machine --version
docker-machine version 0.8.2, build e18a919
I am working on a Spring MVC application in which I compute statistics every night. The problem is, yesterday's computation failed, and I got this error and an hs_err_something.log file. The file basically says out of memory error, but our servers have 32 GB RAM and quite a lot of disk space too. Also, the server is fairly idle at night. Why am I getting this error? I will post the relevant code.
StatisticsServiceImpl :
@Override
@Scheduled(cron = "0 2 2 * * ?")
public void computeStatisticsForAllUsers() {
// One of the count as part of statistics
int groupNotesCount = this.groupNotesService.getNoteCountForUser(person.getUsername());
}
GroupNotesDAOImpl :
@Override
public int getNoteCountForUser(String noteCreatorEmail) {
Session session = this.sessionFactory.getCurrentSession();
Query query = session.createQuery("select count(*) from GroupNotes as gn where gn.noteCreatorEmail=:noteCreatorEmail");
query.setParameter("noteCreatorEmail", noteCreatorEmail);
return new Integer(String.valueOf(query.uniqueResult()));
}
Error log :
Aug 05, 2015 2:02:02 AM org.apache.catalina.loader.WebappClassLoader loadClass
INFO: Illegal access: this web application instance has been stopped already. Could not load gn. The eventual following stack trace is caused by an error thrown for debugging purposes as well as to attempt to terminate the thread which caused the illegal access, and has no functional impact.
java.lang.IllegalStateException
at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1612)
at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1571)
at com.journaldev.spring.dao.GroupNotesDAOImpl.getNoteCountForUser(GroupNotesDAOImpl.java:359)
hs_err.log file :
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 741867520 bytes for committing reserved memory.
# Possible reasons:
# The system is out of physical RAM or swap space
# In 32 bit mode, the process size limit was hit
# Possible solutions:
# Reduce memory load on the system
# Increase physical memory or swap space
# Check if swap backing store is full
# Use 64 bit Java on a 64 bit OS
# Decrease Java heap size (-Xmx/-Xms)
# Decrease number of Java threads
# Decrease Java thread stack sizes (-Xss)
# Set larger code cache with -XX:ReservedCodeCacheSize=
# This output file may be truncated or incomplete.
#
# Out of Memory Error (os_linux.cpp:2673), pid=20080, tid=140319513569024
#
# JRE version: Java(TM) SE Runtime Environment (8.0_45-b14) (build 1.8.0_45-b14)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.45-b02 mixed mode linux-amd64 compressed oops)
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
What should I do? Any help would be nice. Thanks a lot.
It happened on my VPS.
[root@kunphen ~]# free -m
total used free shared buffers cached
Mem: 12067 87 11980 0 0 0
-/+ buffers/cache: 87 11980
Swap: 0 0 0
[root@kunphen ~]# java -version
Error occurred during initialization of VM
Could not reserve enough space for object heap
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
When I change it to "java -Xms16m -Xmx16m -version", it works.
I tried many times. The largest size that works is 22m, but my memory still shows plenty free.
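When free shows gigabytes free but the JVM cannot even reserve a small heap, the usual suspects are the kernel's overcommit policy (strict accounting, especially with no swap configured, as here) or a container/cgroup memory limit imposed by the VPS host. A quick sketch for inspecting the overcommit settings:

```shell
# overcommit_memory: 0 = heuristic, 1 = always allow, 2 = strict
# accounting. Mode 2 with no swap commonly breaks JVM heap reservation
# even when plenty of memory appears free.
cat /proc/sys/vm/overcommit_memory
cat /proc/sys/vm/overcommit_ratio
```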
Run it like this:
_JAVA_OPTIONS="-Xmx384m" play <your commands>
or play "start 60000" -Xms64m -Xmx128m -server