PermGen Space issue in Jenkins - java

I am building a Maven project. The build had been working fine for the last 1.5 years, but now I am getting a PermGen space error.
[ERROR] Internal error: java.lang.RuntimeException: org.jfrog.build.extractor.maven.BuildInfoRecorder.sessionEnded() listener has failed: java.io.IOException: Remote call on channel failed: PermGen space -> [Help 1]
[ERROR] org.jfrog.build.extractor.maven.BuildInfoRecorder.sessionEnded() listener has failed:
java.lang.OutOfMemoryError: PermGen space
I have tried the options below to resolve it:
1) Under Manage Jenkins, then Configure System, in the Global properties section, added an environment variable called MAVEN_OPTS with the value -Xmx200m -XX:MaxPermSize=512m
2) Under the job configuration, then Build, added the properties below to MAVEN_OPTS:
-DXms512m
-DXmx1024m
-DXX:PermSize=512m
-DXX:MaxPermSize=1024m
-DXX:+CMSClassUnloadingEnabled
-DXX:+UseConcMarkSweepGC
The error still occurs.
Note: the error is not permanent; it goes away after a few builds, but then it starts appearing again and again goes away after several retries.
Thanks.

Maybe I'm wrong, but it seems that your memory error happens when Jenkins tries to send the build information to Artifactory (in a post-build step action).
Did you upgrade the Jenkins Artifactory plugin recently?
Can you try to disable the "Capture and publish build info" option?
And as a last option, can you try updating JENKINS_JAVA_OPTIONS (in the Jenkins service config file) to increase the MaxPermSize?
## Type: string
## Default: "-Djava.awt.headless=true"
## ServiceRestart: jenkins
#
# Options to pass to java when running Jenkins.
#
JENKINS_JAVA_OPTIONS="-Djava.awt.headless=true -Xms2G -Xmx4G -XX:MaxPermSize=256m"

As a Maven project deploys artifacts not during the build step on a node but on the master, I suspect the PermGen problem could be resolved either by upgrading the Jenkins core and plugins or by increasing the PermGen max size in the Jenkins startup options.
P.S. Java 8 obsoletes PermGen.
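Worth noting if the master is ever moved to Java 8: the -XX:MaxPermSize flag is simply ignored there, and the equivalent class-metadata limit is Metaspace. A rough sketch, assuming the same sysconfig-style startup file as in the answer above:
JENKINS_JAVA_OPTIONS="-Djava.awt.headless=true -Xms2G -Xmx4G -XX:MaxMetaspaceSize=512m"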

Maybe I am blind and you have written it somewhere, but when I faced a similar issue I upgraded the JRE version for Jenkins itself from 7 to 8. The problem started when I used Jenkins 2.x with Java 7.
Best regards,
Max

On CentOS 6, with Jenkins installed via RPM or yum, edit the service config file:
vim /etc/sysconfig/jenkins
JENKINS_JAVA_OPTIONS="-Djava.awt.headless=true -Xmx1024m -XX:MaxPermSize=512m"
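The new options take effect only after the Jenkins service is restarted; a minimal follow-up, assuming the stock SysV init script installed by the RPM:
sudo service jenkins restart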

Related

FATAL: Node Jenkins doesn't seem to be running on RedHat-like distro

I installed a new Jenkins node on an Ubuntu server following the documentation on jenkins.io.
On my first deploy, the console shows me this error:
Running as SYSTEM
Building in workspace /var/lib/jenkins/workspace/SIAL/SIAL_SVC
Checking OpenJDK installation...
FATAL: Node Jenkins doesn't seem to be running on RedHat-like distro
java.lang.IllegalArgumentException: Node Jenkins doesn't seem to be running on RedHat-like distro
at org.jenkinsci.plugins.openjdk_native.OpenJDKInstaller.isInstalled(OpenJDKInstaller.java:96)
at org.jenkinsci.plugins.openjdk_native.OpenJDKInstaller.performInstallation(OpenJDKInstaller.java:56)
at hudson.tools.InstallerTranslator.getToolHome(InstallerTranslator.java:70)
at hudson.tools.ToolLocationNodeProperty.getToolHome(ToolLocationNodeProperty.java:107)
at hudson.tools.ToolInstallation.translateFor(ToolInstallation.java:220)
at hudson.model.JDK.forNode(JDK.java:147)
at hudson.model.AbstractProject.getEnvironment(AbstractProject.java:339)
at hudson.model.Run.getEnvironment(Run.java:2419)
at hudson.model.AbstractBuild.getEnvironment(AbstractBuild.java:943)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1290)
at hudson.scm.SCM.checkout(SCM.java:505)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1213)
at hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:637)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:85)
at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:509)
at hudson.model.Run.execute(Run.java:1888)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:99)
at hudson.model.Executor.run(Executor.java:432)
Finished: FAILURE
I can't find any similar error on the internet, and I didn't make any changes on the server. Does anybody know why I get this error?
It has to do with the org.jenkinsci.plugins.openjdk_native plugin. You need to configure it inside of Jenkins to find a suitable JDK.
Go to Manage Jenkins -> Global Tool Configuration
Down to the JDK section
Add JDK -> Give it a name -> Add Installer and select OpenJDK Installer
I don't have the newer ones available so I selected the java-1.8.0-openjdk
Save, and go try your job again.
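Alternatively, since the node in this question is an Ubuntu server (which is exactly what the "RedHat-like distro" exception is complaining about), the OpenJDK native installer can be bypassed entirely: install a JDK on the node yourself and point the JDK tool at it. A rough sketch, assuming OpenJDK 8 is available from the node's standard repositories and installs to the usual Ubuntu location:
sudo apt-get update
sudo apt-get install -y openjdk-8-jdk
# then, in Manage Jenkins -> Global Tool Configuration -> JDK, add a JDK entry
# with "Install automatically" unchecked and JAVA_HOME set to /usr/lib/jvm/java-8-openjdk-amd64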
I had the same issue; I changed the default JDK in Jenkins (upgraded it). For me this problem appeared when I updated Jenkins.

How to enable TLSv1.2 in Java 7?

I'm trying to enable TLSv1.2 on my machine, which has Java 1.7, using the commands mvn -Dhttps.protocols=TLSv1,TLSv1.1,TLSv1.2 install / mvn -Dhttps.protocols=TLSv1.2 install.
But it's throwing errors saying my pom.xml has unresolved dependencies; those dependency errors are there because my project couldn't download from the Maven repository due to the TLSv1.2 issue.
Seems like a deadlock to me. Can anyone help me with how to resolve it?
You need to configure the MAVEN_OPTS environment variable or settings.xml to pass the proper VM args to the JVM (the Maven JVM).
For a quick test, try this:
Set MAVEN_OPTS with -Dhttps.protocols=TLSv1.2,TLSv1.1
export MAVEN_OPTS=-Dhttps.protocols=TLSv1.2,TLSv1.1
(export is for Unix-based systems; for Windows see here, or the sketch just below)
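For a Windows command prompt, the equivalent for the current session would presumably be (same property, just cmd syntax):
set MAVEN_OPTS=-Dhttps.protocols=TLSv1.2,TLSv1.1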
and re-run your Maven command.
Read more on Maven's configuration.

Error upon jar execution - unable to allocate file descriptor table

Upon trying to use a jar on the local Linux machine, I am getting the following error:
library initialization failed - unable to allocate file descriptor table - out of memory
The machine has 32 GB of RAM.
I can provide additional information, if needed.
Any help would be appreciated.
In recent versions of Linux, the default limit for the number of open files has been increased significantly. Java 8 does the wrong thing by trying to allocate memory upfront for this number of file descriptors (see https://bugs.openjdk.java.net/browse/JDK-8150460). Previously this worked, when the default limit was much lower, but now it tries to allocate too much and fails. The workaround is to set a lower limit for the number of open files (or use a newer Java):
$ mvn
library initialization failed - unable to allocate file descriptor table - out of memoryAborted
$ ulimit -n 10000
$ mvn
[INFO] Scanning for projects...
...
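If the lower limit fixes it, it can be made to apply to every new shell instead of being typed each time; a small sketch, assuming an interactive bash shell that reads ~/.bashrc:
echo 'ulimit -n 10000' >> ~/.bashrc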
Had this happen to me on some Java applications and, curiously, all Electron-based apps (such as Spotify) after upgrading my Manjaro Linux about a week ago (today is November 19, 2019).
What fixed it was this command (as root; sudo didn't do it):
echo >/etc/security/limits.d/systemd.conf "* hard nofile 1048576"
Then reboot
Hope this helps someone.
None of the other fixes I found online worked for me; however, I noticed that the bug responsible for this defect is in Java 9 and has since been resolved.
I'm on Arch Linux, so when I tried to start elasticsearch.service, journalctl -xe showed that for some reason JRE 8 was running it, and indeed archlinux-java status showed that Java 8 was the default. Setting it to Java 11 fixed the problem for me:
# archlinux-java set java-11-openjdk
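To double-check that the switch took effect, the same status command mentioned above can be re-run afterwards:
# archlinux-java status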

IntelliJ occasionally unable to reserve enough space for object heap

RESOLVED: check below for the solution.
I'm using IntelliJ IDEA 2017.2.2. My IntelliJ specs are below.
My IntelliJ would only occasionally fail a Maven build or a JBoss server start with the error
Error occurred during initialization of VM
Could not reserve enough space for 1048576KB object heap
If I were to run the Maven build with the VM args of
-Xms512m -Xmx1024m
The build would fail 9 out of 10 times (not exactly every 10th, just randomly), but on the 10th time it would work. I simply have to keep pressing the install button until it works.
The major problem before was that if I didn't specify the VM args, the build would go about halfway and then fail by running out of Java heap space.
The exact same behavior can be observed for my JBoss server (JBoss 6.4 - 7.5.0.Final-redhat-21), where the server would fail to start 9 out of 10 times and then start up just as randomly.
Specs
IntelliJ IDEA 2017.2.2
Build #IU-172.3757.52, built on August 14, 2017
Licensed to -----
Subscription is active until May 31, 2018
JRE: 1.8.0_152-release-915-b10 amd64
JVM: OpenJDK 64-Bit Server VM by JetBrains s.r.o
Windows 7 6.1
What I tested
I upgraded from IntelliJ 2017.1 to 2017.2 and the behavior did not change.
I also tried to do the same in Eclipse, which also did not help.
Clean restart of the PC, then closed all unnecessary apps, opened IntelliJ, and did a Maven build, and yet it fails; a few more clicks and it works, inconsistently as usual. (Note that at this moment only 6 GB out of 16 is used, so there is no way memory is insufficient.)
*Edits
This PC has 16 GB of RAM. While the failures are happening, about 9.5 GB is in use at that moment.
Ultimately I was able to resolve the issue by setting the proper JDK.
My project was picking up an incorrect JDK and hence was running a 32-bit JDK as opposed to a 64-bit one.
I simply added the correct JDK under File > Project Settings.
It seemed that my project never required that much memory before, but once the need arose, a 64-bit JDK became required.
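As a quick way to tell whether a given JDK is 32- or 64-bit before selecting it, running its java -version is enough; the output mentions "64-Bit Server VM" for a 64-bit build. The path below is only a placeholder for wherever the candidate JDK lives:
"C:\path\to\jdk\bin\java" -version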
I have solved this problem by changing the build settings in Intellij.
Please follow the below steps.
For SBT:
1. Go to File -> Settings -> Build, Execution, Deployment -> sbt
2. Change the value of "Maximum heap size, MB" to fit your available memory, e.g. 512.
(Previously this value was 1536, which is why I faced the problem.)
For Maven:
1. Go to File -> Settings -> Build, Execution, Deployment -> Maven -> Importing
2. Change the value of "VM options for importer" to fit your available memory, e.g. -Xmx512m.
I had to apply the following options to get past the error.
Spec used: IntelliJ IDEA 2019.3.5 (Community Edition)
Increase the memory in IntelliJ VM Options
Step 1:
Go to Help -> Edit Custom VM Options
Step 2: Change the heap sizes as given below
Run the app with increased memory by setting the VM options
Run -> Edit Configurations
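For reference, the values in both places are ordinary JVM heap flags; illustrative numbers only (pick sizes that fit the machine), usable either in the custom VM options file from Step 1 or in the run configuration's VM options field:
-Xms512m
-Xmx2048m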
My issue was resolved by installing Visual C++ Redistributable Packages:
https://aka.ms/vs/16/release/vc_redist.x64.exe

PermGen size error while generating Cobertura Report

I am running into an issue where my application throws a PermGen size error whenever I generate a Cobertura report using the command clean cobertura:cobertura. I have tried almost everything, such as the following:
Increase the PermGen size in sts.ini
Increase the PermGen size in the JDK VM arguments by going to Windows -> Preferences
-> Java -> Installed JREs -> click the JDK -> Edit, and editing the VM args
Increase the PermGen size in the maven.bat file
Some on Stack Overflow recommended specifying the version in the Surefire plugin, and I have that in place already
None of the above is helping me at all. I am using Mock/PowerMock objects a lot in my JUnit test cases. Maven test runs perfectly fine.
How can I fix this issue?
Thanks
Since you tried to change sts.ini, I'm assuming you are running it from inside STS. Even when Maven is kicked off by STS, it runs in a separate process from STS, so changing sts.ini won't help. You need to set MAVEN_OPTS with the increased PermGen size in the run configuration for the mvn job.
Set MAVEN_OPTS.
The value will be: -Xms1024m -Xmx3000m -XX:MaxPermSize=1024m
export MAVEN_OPTS="-Xms1024m -Xmx3000m -XX:MaxPermSize=1024m" // Unix
set MAVEN_OPTS=-Xms1024m -Xmx3000m -XX:MaxPermSize=1024m // Windows (no quotes needed)
