I am using Gradle 2.5 to compile a Java project which consists of 5 modules. In order to speed things up I also use the Gradle daemon. However, during compilation there are up to 18 instances of the Gradle daemon running, and after compilation finishes there are still 15 daemon instances left. The daemon process consumes about 600 MB of RAM. Is it normal to have that many daemons running in the background, or is the Gradle daemon misconfigured?
UPDATE:
My operating system is Debian Jessie. Java version is Oracle Java 8.
Following Antoniossss' advice, I got in touch with a developer. As it turns out, Gradle is in fact quite resource hungry: even for a simple "Hello World" application the daemon may well use up to 150 MB, and possibly more.
It is also fine for multiple daemon threads to be started, as long as they run within the same JVM.
The user has only limited control over memory usage.
One can set the GRADLE_OPTS variable to pass -Xmx options to the JVM; for example, I managed to build my Android project with the following settings:
$ export GRADLE_OPTS="-Xmx64m -Dorg.gradle.jvmargs='-Xmx256m -XX:MaxPermSize=64m'"
The first -Xmx option applies to the Gradle client you start on the CLI; the second one (after -Dorg.gradle.jvmargs) is the -Xmx value for the Gradle daemon.
The less memory you give the JVM, the higher the risk that your build fails, so you may have to tune these settings until they suit your purposes.
Those settings can also be set in the gradle.properties file.
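For reference, the daemon part of those settings maps to the org.gradle.jvmargs property; a minimal gradle.properties sketch with the same values as above (the client-side -Xmx64m still has to come from GRADLE_OPTS):
# gradle.properties (project root or ~/.gradle)
org.gradle.jvmargs=-Xmx256m -XX:MaxPermSize=64m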
I started learning Flutter recently and I noticed that even when VS Code is closed, "OpenJDK Platform Binary" stays open and uses too much RAM. Should I force-close it in Task Manager every time I finish working in VS Code? Is there any way to close it automatically?
This is documented behaviour of Gradle. You can see this Stack Overflow answer and this closed issue in the Flutter GitHub project.
Daemon processes will automatically terminate themselves after 3 hours of inactivity. If you wish to stop a Daemon process before this, you can either kill the process via your operating system or run the gradle --stop command. The --stop switch causes Gradle to request that all running Daemon processes, of the same Gradle version used to run the command, terminate themselves.
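For example, to see which daemons are still around and then stop them (a minimal sketch; jps ships with the JDK, and Gradle daemon JVMs typically show up with a GradleDaemon main class):
jps -l          # list running JVM processes with their main classes
gradle --stop   # ask daemons of this Gradle version to terminate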
You can disable it permanently as follows:
The Gradle Daemon is enabled by default, and we recommend always enabling it. You can disable the long-lived Gradle daemon via the --no-daemon command-line option, or by adding org.gradle.daemon=false to your gradle.properties file. You can find details of other ways to disable (and enable) the Daemon in the Daemon FAQ further down.
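Concretely, either of the following does it (taken straight from the quote above):
gradle build --no-daemon
or, in gradle.properties:
org.gradle.daemon=false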
You can find an explanation of why the daemon is important for performance here:
Why the Gradle Daemon is important for performance
The Daemon is a long-lived process, so not only are we able to avoid the cost of JVM startup for every build, but we are able to cache information about project structure, files, tasks, and more in memory. The reasoning is simple: improve build speed by reusing computations from previous builds. However, the benefits are dramatic: we typically measure build times reduced by 15-75% on subsequent builds. We recommend profiling your build by using --profile to get a sense of how much impact the Gradle Daemon can have for you.
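For instance, to compare timings with and without the daemon (Gradle writes the --profile report as HTML under build/reports/profile by default):
gradle build --profile               # with the daemon (the default)
gradle build --profile --no-daemon   # the same build without the daemon, for comparison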
When building the project on Jenkins, this error is thrown:
Failed to execute goal org.codehaus.mojo:findbugs-maven-plugin:3.0.3:findbugs (findbugs) on project module-set-view: Execution findbugs of goal org.codehaus.mojo:findbugs-maven-plugin:3.0.3:findbugs failed: Java returned: 137
Does anyone know what could be the problem?
Exit code 137 usually means the JVM was killed (SIGKILL, 128 + 9), typically because the machine or container ran out of memory. Give the build additional memory via the MAVEN_OPTS setting; the plugin's FAQ (linked below) recommends at least -Xmx384M.
https://gleclaire.github.io/findbugs-maven-plugin/faq.html#How_do_I_avoid_OutOfMemory_errors
How do I avoid OutOfMemory errors?
When running FindBugs on a project, the default heap size might not be enough to complete the build. For now there is no way to fork FindBugs and run it with its own memory requirements, but the following environment variable will allow you to give Maven (and thus the plugin) more memory:
export MAVEN_OPTS=-Xmx384M
You can also use the fork option, which will fork a new JVM; you then use the maxHeap option to control its heap size.
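As a rough sketch of that fork/maxHeap combination in the POM (parameter names as described by the plugin FAQ; double-check them against your plugin version's documentation):
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>findbugs-maven-plugin</artifactId>
  <version>3.0.3</version>
  <configuration>
    <fork>true</fork>      <!-- run FindBugs in its own forked JVM -->
    <maxHeap>512</maxHeap> <!-- heap for that forked JVM, in MB -->
  </configuration>
</plugin>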
Configuring MAVEN_OPTS on your computer
set MAVEN_OPTS=-Xmx384M
Configuring MAVEN_OPTS in Jenkins
(The link is just a screenshot; use -Xmx384M as the value, not what is shown in the screenshot.)
https://wiki.jenkins.io/plugins/servlet/mobile?contentId=65667926#content/view/65667926
If further troubleshooting is required
Jenkins hardware requirements can be found here. If you are running many jobs, then increase your system resources.
https://jenkins.io/doc/book/hardware-recommendations/
Jenkins's own JVM memory can be increased by editing the jenkins.xml file on the Jenkins server, using the same -Xmx approach.
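That file is the Windows service wrapper configuration; the -Xmx flag sits inside its <arguments> element. A rough sketch (the exact contents vary between Jenkins versions and installations):
<arguments>-Xrs -Xmx1024m -jar "%BASE%\jenkins.war" --httpPort=8080</arguments>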
Even though the memory requirements for Jenkins are generally low, having at least 2 GB of host memory seems reasonable for a very basic server.
For a production environment, CloudBees' recommendations for heap settings should be followed:
https://support.cloudbees.com/hc/en-us/articles/204859670-Java-Heap-settings-best-practice?mobile_site=true
I have a gradle task which starts a java project. Basically like this:
gradle run -PmainClass=package.path.ServiceMain
Now, I want to increase the heap for the Java process started by Gradle, because the default heap size is too small. The problem is that I only manage to increase the heap of the Gradle process itself, not of the Java process that Gradle launches.
I check the heap size with this command in my Java code:
Runtime.getRuntime().totalMemory()
I have tested this check and it is valid to use it like this, but it shows that Gradle always starts my Java process with the same heap size.
I experimented with these options:
DEFAULT_JVM_OPTS='-Xmx1024m -Xms512m'
GRADLE_OPTS='-Xmx1024m -Xms512m'
JAVA_OPTS='-Xmx1024m -Xms512m'
No success.
I also tried this:
gradle run -PmainClass=package.path.ServiceMain -DXmx1024m -DXms512m
Still, no success.
Of course, I already searched the web, but I only found hints saying that I could modify the build.gradle file. Unfortunately, that is not something I want to (or can) do.
I need to specify the java heap size for my java program on the command line when starting it by a gradle run task (due to the actual project structure).
Thanks in advance for support. Any help is appreciated.
As @Opal states above, it is not possible.
The easiest/simplest alternative I could find (for now) is to add this little snippet to the build.gradle file:
tasks.withType(JavaExec) {
    jvmArgs = ['-Xms512m', '-Xmx512m']
}
Alternatively, the environment variable _JAVA_OPTIONS can be used.
Even better: the environment variable JAVA_TOOL_OPTIONS; the content of this variable is used as (additional) JVM options.
Thanks @ady for the hints.
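For example, for the gradle run case from the question (illustrative values; the JVM prints a "Picked up JAVA_TOOL_OPTIONS" line when it honours the variable, and note that Gradle's own JVMs pick it up as well):
export JAVA_TOOL_OPTIONS="-Xms512m -Xmx1024m"
gradle run -PmainClass=package.path.ServiceMain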
On Linux, use export _JAVA_OPTIONS="-Xms4000m -Xmx8000m", where the values 4000 and 8000 can be adjusted. Note that the variable is _JAVA_OPTIONS, not JAVA_OPTS.
Old answer:
You can set or increase the memory limits (or other JVM arguments) used for Gradle builds and the Gradle Daemon by editing $GRADLE_USER_HOME/gradle.properties (~/.gradle/gradle.properties by default) and setting org.gradle.jvmargs:
org.gradle.jvmargs=-Xmx2024m -XX:MaxPermSize=512m
Source: https://riptutorial.com/gradle/example/11911/tuning-jvm-memory-usage-parameters-for-gradle
I am trying to reproduce locally an issue I see in Jenkins with my Maven project (a resource-allocation issue: OOM, "can't create native thread"). Hence, I want to run the exact java command that Jenkins runs in the background, along with its arguments, but I am not sure where to find it or how to figure that out. The only thing I see in the configuration is the Maven commands I have given it.
Any pointers?
Running jps will give you a list of instrumented JVMs running on the machine; with the -v and -m flags it also shows their runtime arguments.
http://docs.oracle.com/javase/1.5.0/docs/tooldocs/share/jps.html
Other useful commands:
Running jmap will give you a heap dump, or a histogram of the objects allocated on the heap.
Running jvisualvm will start a monitoring tool that allows you to study the JVM interactively.
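A few concrete invocations (the <pid> placeholders come from the jps output):
jps -lvm                                        # JVMs with full main class, JVM args and program args
jmap -histo <pid>                               # histogram of object counts and sizes on the heap
jmap -dump:live,format=b,file=heap.hprof <pid>  # full heap dump for offline analysis
jvisualvm                                       # interactive monitoring and profiling UI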
I've created a simple one-file Java application that iterates through a loop, calls some functions, allocates some memory, adds some numbers, etc. I run that application via Eclipse's Run As -> Java Application.
The running application shows up in Java VisualVM under Local.
I double click on that application and go to the Profiler tab.
The default settings are:
Start profiling from classes: my.main.package.**
Do not profile classes: java.*, javax.*, sun.*, sunw.*, com.sun.*
I click on CPU. The CPU and Memory buttons gray out. Nothing happens.
The Status says profiling inactive.
When my application terminates the Status says application terminated.
What am I doing wrong here? Are there some settings I need to tweak? Do I need to set a VM flag when I launch my application?
I had the same issue after the Java 1.7.0_45 update. I had to delete the following folder:
C:\users\'username'\AppData\Local\Temp\hsperfdata_'username'
After doing so, everything works like a charm.
I'd guess the issue relates to the application being started from within Eclipse: JVisualVM expects to find its data under the java.io.tmpdir directory (usually C:\Users\[your username]\AppData\Local\Temp\hsperfdata_[your username] on a Windows system).
I assume Eclipse puts the data in its own temp folder rather than in the normal location where jps, JVisualVM etc. expect it?
If so, try invoking JVisualVM using jvisualvm -J-Djava.io.tmpdir=[Eclipse's temp directory] to explicitly tell it where that data is.
If you can't find the hsperfdata_$USER folder, try just running your application outside Eclipse in the usual command line Java way.
Also note that there was a bug affecting the temp folder (case sensitivity) introduced around 1.6.0_23, so you might benefit from updating to a more recent Java 6 (or 7) build.
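If you're unsure which temp directory the Eclipse-launched JVM actually uses (and therefore where its hsperfdata_<user> folder ends up), a quick check from inside the application helps; this is just an illustrative snippet, not part of the original answers:
public class TmpDirCheck {
    public static void main(String[] args) {
        // hsperfdata_<user> is created under java.io.tmpdir
        System.out.println("java.io.tmpdir = " + System.getProperty("java.io.tmpdir"));
        System.out.println("user.name      = " + System.getProperty("user.name"));
    }
}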
Mikaveli, Kuba and Somaiah Kumbera have provided great solutions. Just adding what I have done to make things work.
I first checked the location C:\users\'username'\AppData\Local\Temp\hsperfdata_'username'. There was no file named with the process ID of my program running inside Eclipse.
I simply stopped the program and added the following parameter to the Run Configurations of the program (Run Configurations -> Arguments -> VM Arguments)
-Djava.io.tmpdir=C:\users\'username'\AppData\Local\Temp\hsperfdata_'username'
I started the program again. I still could not profile it, but now a file was created for the process in the given temp directory.
Then, a simple restart of VisualVM did the trick.
I had the same issue, but with the following symptoms:
I started Jetty with its work directory in C:\Users\t852124\AppData\Local\Temp.
Jetty was creating the hsperfdata_ directory but not creating a process-ID file in it, so when I started VisualVM it could not get any Java process info.
I solved this by starting Jetty with the -Djava.io.tmpdir=C:/temp/java option.
Now when I started Jetty, the process ID was created as a file in the hsperfdata_ directory, and VisualVM was able to see my local Java process.
I had the same problem and running VisualVM with elevated privileges (admin rights) solved the issue.
On Linux with VisualVM 1.3.3 I had to remove the application's local settings in ~/.visualvm/1.3.3/ to enable the CPU Profiler and CPU Sampler.
Also note that /usr/bin/jvisualvm contains a hardcoded path to OpenJDK (set via the jdkhome variable), which seems to cause a lot of issues compared to running on Oracle JDK 1.7.
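If the hardcoded JDK is the problem, VisualVM can be pointed at a different JDK explicitly via the standard --jdkhome launcher switch (the path below is only an example):
jvisualvm --jdkhome /usr/lib/jvm/java-7-oracle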
Also note that if your application is using a recent non-Oracle JVM, you may need to download the "bleeding edge" VisualVM from GitHub.
For example, the VisualVM bundled with JDK 1.8.0.111 doesn't seem to work with the IBM 1.8 JVM. Possibly the IBM JVM was simply released after the Oracle 1.8 JVM, so including the necessary changes wasn't possible at that time.