I'm running a conversion project from SVN to Git. As the application is single-threaded, I'm moving the project to a faster PC.
With no options other than httpSpooling = true, it runs OK on a VM with 4 CPUs and 20 GB of RAM.
RAM usage with two separate instances is 8 GB, hitting a maximum of 9.8 GB.
I paused the jobs, zipped the repositories, and SCP'd them to the new machine: a bare-metal Debian 9 build (same as the VM) with an i7 (8 effective CPUs) and 16 GB of RAM.
However, when starting just one instance of SubGit, I get either a Java out-of-memory error or a 'GC overhead limit exceeded' error.
I've tried adding the following permutations to the [daemon] section of repo.git/subgit/config:
javaOptions = -noverify -client -Djava.awt.headless=true -Xmx8g -XX:+UseParallelGC -XX:-UseGCOverheadLimit (this gives a 'GC overhead limit exceeded' error)
#javaOptions = -noverify -client -Djava.awt.headless=true -Xmx8g -XX:+UseParallelGC -XX:-UseGCOverheadLimit (options disabled; this gives an out-of-memory error)
javaOptions = -noverify -client -Djava.awt.headless=true -Xmx12g -XX:-UseGCOverheadLimit (this gives out-of-memory errors)
I've tried other settings too, including changing -client to -server, but that appears to be aimed more at two-way conversion, which is not something I'm trying to do.
Based on the application's usage on the system where it runs successfully, there should be plenty of RAM, so unless SubGit is ignoring some of these values, I can't tell what's wrong.
The 'javaOptions' in the [daemon] section may indeed be ignored depending on the operation you run: those Java options affect the SubGit daemon, but not the 'subgit install' or 'subgit fetch' operations. Since you've mentioned that the repositories were moved to another machine, I believe you have invoked one of those two commands to restart the mirror, and that's why 'daemon.javaOptions' is ignored. To tune SubGit's Java options, edit them directly in the SubGit launch script (the EXTRA_JVM_ARGUMENTS line):
EXTRA_JVM_ARGUMENTS="-Dsun.io.useCanonCaches=false -Djava.awt.headless=true -Djna.nosys=true -Dsvnkit.http.methods=Digest,Basic,NTLM,Negotiate -Xmx512m"
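For example, to let the translation use most of the new machine's 16 GB, you could raise the -Xmx value on that line, leaving headroom for the OS (a sketch; the 8g figure is illustrative, not a SubGit recommendation):
EXTRA_JVM_ARGUMENTS="-Dsun.io.useCanonCaches=false -Djava.awt.headless=true -Djna.nosys=true -Dsvnkit.http.methods=Digest,Basic,NTLM,Negotiate -Xmx8g"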
As for the memory consumption itself, it depends on which operations are being run. It's not completely clear how you paused the jobs on the virtual machine ('subgit shutdown' or some other way?), which operations were running at the time (the initial translation or regular fetches), and how you restarted the jobs on the new machine.
Related
I had a unit test failing with an OOM and wondered why I needed to manually allow a bigger maxHeapSize, like so:
test {
    maxHeapSize = '4G'
}
I assumed the max heap was not capped, and I couldn't find anything in the Gradle API sources to prove me wrong. However, running Gradle with --info clearly reveals otherwise:
Starting process 'Gradle Test Executor 427'. Working directory: [project-dir] Command: /Library/Java/JavaVirtualMachines/jdk1.8.0_91.jdk/Contents/Home/bin/java -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -javaagent:build/tmp/expandedArchives/org.jacoco.agent-0.8.6.jar_a26f6511a7813217be4cd6439d66563b/jacocoagent.jar=destfile=build/jacoco/test.exec,append=true,inclnolocationclasses=false,dumponexit=true,output=file,jmx=false -Xmx512m -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /Users/[user]/.gradle/caches/6.4/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 427'
My question is where does the -Xmx512m above come from?
Next to that, is there a way to allow for unlimited max heap and would that be a totally unreasonable thing to do?
Possible duplicate of Default maximum heap size for JavaExec Gradle task
The 512m default is set in DefaultWorkerProcessBuilder. 512m is a tradeoff: it works for small to medium-sized projects, given that multiple workers can be running simultaneously. According to the commit message:
Limit memory for worker processes by default
Our workers for compilation, testing etc. had no default
memory settings until now, which mean that we would use
the default given by the JVM, which is 1/4th of system memory
in most cases. This is way too much when running with a high
number of workers. Limiting this to 256m should be enough for
small to medium sized projects. Larger projects will have to
tweak this value.
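A sketch of such a tweak in the Groovy DSL (the 4G figure is illustrative; pick a value that fits your project and worker count):
// build.gradle
tasks.withType(Test).configureEach {
    minHeapSize = '512m'   // start at the old worker default
    maxHeapSize = '4G'     // raise the 512m worker cap
}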
Next to that, is there a way to allow for unlimited max heap and would that be a totally unreasonable thing to do?
At the time of writing, I am not aware of any JVM implementation that allows for unlimited memory, and I honestly don't think it would be advisable, since it can slow down the system considerably.
Can someone please help me with the Out of Memory (OOM) error that I encountered while running JMeter?
I am a newbie in Java, in JMeter, and in performance testing. I ran JMeter from the command prompt, but I am still encountering the out-of-memory issue. I tried adjusting the heap size, but the run still failed every time I changed it.
My current heap setting is below:
set HEAP=-Xms1g -Xmx1g -XX:MaxMetaspaceSize=256m
I also added more memory: from 8 GB, the machine now has 16 GB.
But when I run it again with 1000 threads, the errors below are shown:
Uncaught Exception java.lang.OutOfMemoryError: unable to create new native thread in thread Thread[Thread Group 1-130,5,main]. See the log file for details.
Uncaught Exception java.lang.OutOfMemoryError: unable to create new native thread in thread Thread[Thread Group 1-63,5,main]. See log file for details.
Uncaught Exception java.lang.OutOfMemoryError: unable to create new native thread in thread Thread[Thread Group 1-135,5,main]. See log file for details.
Uncaught Exception java.lang.OutOfMemoryError: unable to create new native thread in thread Thread[Thread Group 1-19,5,main]. See log file for details.
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 32756 bytes for ChunkPool::allocate
Sometimes I encounter just this error:
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 32756 bytes for ChunkPool::allocate
Can someone please help me? I am just a beginner, so I would appreciate replies that are not too technical.
Thank you in advance.
The error you're getting indicates that the underlying operating system is not able to create a new native thread/process.
Try increasing the operating system limits; e.g. if you're on Linux, you can check the maximum number of user processes with the ulimit -u command (ulimit -n shows the open-files limit).
In any case, you can decrease the thread stack size via the -Xss JVM argument, so that more threads fit into the same amount of memory; see the sketch below.
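A sketch of passing a smaller stack size to JMeter on Windows via the JVM_ARGS environment variable, which the JMeter startup scripts pick up (the 256k value is illustrative; too small a stack will cause StackOverflowError):
set JVM_ARGS=-Xss256k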
Unfortunately, in its current shape your question doesn't provide enough information to come up with a comprehensive answer. If the above hints don't help, consider adding more details like:
Operating system version and architecture
JMeter version
Java version and architecture
At least first 20 lines of the .HPROF file
Also make sure to follow the recommendations from the 9 Easy Solutions for a JMeter Load Test “Out of Memory” Failure article.
The -Xmx flag sets the maximum heap size for the JVM. You should increase your -Xmx value to something more appropriate for your usage.
try:
set HEAP=-Xms1g -Xmx4g -XX:MaxMetaspaceSize=256m
Change the heap size in the jmeter.sh file if you are using Linux/Ubuntu, or in jmeter.bat on Windows. Increase -Xmx1g to half of the system's memory, e.g. 4 GB (-Xmx4g). Also try increasing the user load gradually from 500, and watch the CPU and RAM utilization in the Windows Task Manager.
I faced the out-of-memory issue when trying to run 10k virtual users on 8 GB of RAM. After increasing the instance memory to 16 GB, the 10k virtual-user load ran successfully.
1. Increase the JVM memory to 12 GB in /etc/profile.
2. Increase the JMeter heap to 10 GB in the "jmeter" file (Linux; the 'set HEAP' option is in "jmeter.bat" on Windows servers).
3. Raise the open-files limit to 65535.
4. Raise the maximum user-process limit to 65535.
Open a command window as the root user and execute the following commands, updating the values to change the default limits:
vim /etc/security/limits.conf
root soft nproc 65535
root hard nproc 65535
root soft nofile 65535
root hard nofile 65535
vim /etc/sysctl.conf
fs.file-max = 65535
sysctl -p
reboot
After rebooting, the changes take effect on the machine; cross-verify that they did.
Run the JMeter script in non-GUI mode, as the root user only, as shown below.
Note: delete any heap dump files in "bin", if present.
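A typical non-GUI invocation looks like this (a sketch; test.jmx and results.jtl are placeholder file names):
./jmeter -n -t test.jmx -l results.jtl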
Has anyone experienced, or can someone explain, the circumstances in which the following occurs (and ideally help resolve it)?
What normally happens:
We execute a large user query/build report process
Java.exe memory steadily climbs, as does CPU usage
The report renders, and Java.exe CPU utilization drops back down
What, I assume, should not be happening:
We execute a large user report query/build report process
ColdFusion.exe memory steadily climbs, as does CPU usage
Java.exe memory and CPU usage do not budge
I should note that I do not believe ColdFusion.exe existed, or at least it wasn't present in the process list, prior to this behavior. Basically, instead of the server running off Java.exe, it's running off ColdFusion.exe. My only theory is that ColdFusion can't find the default Java.
Thanks in advance.
Additional Details
Server is, and has been, running on ColdFusion 11 Standard
I foolishly and absent-mindedly hit the 'Update' button without taking a snapshot. Unfortunately, I believe it was patching CF and updating to the most recent version of Java. CF would not restart, so I used Adobe's uninstaller and removed the patches. It restarted, and I decided to run the update at a later date.
My nightly backups had rolled off the server by a day by the time I realized the server was running so slowly and that ColdFusion.exe was running instead of Java.exe, so a simple restore is off the table.
The thing is, the server is running pretty well, but not as well as it was. Additionally, I cannot access the debug information by enabling it in the admin and adding my IP. I can access an error with a cftry/cfcatch, but the debug info gives me this and only this:
Debugging Information
ColdFusion Server Standard 11,0,07,296330
Template /FBI/witsec/locations/map/index.cfm
Time Stamp 31-Jan-16 01:47 PM
Locale English (US)
User Agent Mozilla/5.0 (Windows NT 10.0; WOW64; Trident/7.0; Touch; rv:11.0) like Gecko
Remote IP 10.14.7.128
Host Name 10.14.7.128
------
Execution Time
It should be noted that the Execution Time heading is included in the above snippet, but obviously nothing follows it.
Environment Settings - User:
Environment Settings - System:
Java and JVM CF 11 Admin Page
Java Virtual Machine Path:
C:\ColdFusion11\jre
Minimum JVM Heap Size: 4096 Maximum JVM Heap Size: 4096
JVM Arguments:
-Xdebug
-Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=49325
-server
-XX:NewRatio=3
-XX:SurvivorRatio=7
-XX:+UseCompressedOops
-Xss768k
-XX:MaxPermSize=256m
-XX:PermSize=128m
-XX:+DisableExplicitGC
-XX:+UseParNewGC
-XX:+UseConcMarkSweepGC
-XX:+UseCMSInitiatingOccupancyOnly
-XX:+CMSClassUnloadingEnabled
-XX:+CMSScavengeBeforeRemark
-XX:CMSInitiatingOccupancyFraction=68
-Dcoldfusion.home={application.home}
-Dorg.eclipse.jetty.util.log.class=org.eclipse.jetty.util.log.JavaUtilLog
-Duser.language=en
-Dcoldfusion.rootDir={application.home}
-Dcoldfusion.libPath={application.home}/lib
-Dorg.apache.coyote.USE_CUSTOM_STATUS_MSG_IN_HEADER=true
-Dcoldfusion.jsafe.defaultalgo=FIPS186Random
-javaagent:C:/FusionReactor/instance/ColdFusionOnSGI/fusionreactor.jar=name=ColdFusionOnSGI,address=8088
I am using Gradle 2.5 to compile a Java project which consists of 5 modules. In order to speed things up, I also use the Gradle daemon. However, during compilation there are up to 18 instances of the Gradle daemon running, and after compilation finishes there are still 15 instances left. The daemon process consumes about 600 MB of RAM. Is it normal to have that many daemons running in the background, or is the Gradle daemon misconfigured?
UPDATE:
My operating system is Debian Jessie. Java version is Oracle Java 8.
Following Antoniossss' advice, I got in touch with a developer. As it turns out, Gradle is in fact quite resource-hungry: even for a simple "Hello World" application, the daemon may well use up to 150 MB, and maybe even more.
It is also fine that multiple daemon threads are started, as long as they run within the same JVM.
There is only limited control on the user's side to limit memory usage.
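That said, if stale daemon processes pile up after a build, you can at least stop them all explicitly:
gradle --stop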
One could set the GRADLE_OPTS variable in order to pass -Xmx options to the JVM; e.g., I managed to build my Android project with the following settings:
$ export GRADLE_OPTS="-Xmx64m -Dorg.gradle.jvmargs='-Xmx256m -XX:MaxPermSize=64m'"
The first -Xmx option is set for the Gradle process that you start from the CLI; the second one (after -Dorg.gradle.jvmargs) is the -Xmx value for the Gradle daemon.
The less memory you allow your JVM, the higher the risk that your build fails, obviously, so you might have to tune those settings until they suit your purposes.
Those settings can also be set in the gradle.properties file.
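For example, the daemon part of the settings above would look like this in gradle.properties (a sketch; the sizes are the ones from the CLI example):
# gradle.properties
org.gradle.jvmargs=-Xmx256m -XX:MaxPermSize=64m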
I am using the Apache Solr EC2 AMI provided by Bitnami. Solr is running, but I'd like to change the startup configuration to increase the amount of memory allocated to the JVM.
I have tried modifying the startup script at /opt/bitnami/apache-solr/scripts/ctl.sh, changing the following line:
SOLR="$JAVABIN -Dsolr.solr.home=$SOLR_HOME
-Djetty.logs=$INSTALL_PATH/logs/ -Djetty.home=$INSTALL_PATH/ -jar $INSTALL_PATH/start.jar $INSTALL_PATH/etc/jetty.xml"
I've tried different permutations for the memory flags and none of them work (some of them cause the Solr server to fail to start at all, while others allow it to start but have no effect on the JVM memory allocated). This is what I've tried adding to the line:
-Xmx 1000 -Xms 8000
-Xms1000m -Xmx8000m
-Xms1000 -Xmx8000
-Xms 1000m -Xmx 8000m
What is the correct way of going about this?
It turns out that the arguments needed to come earlier in the line: JVM options such as -Xms/-Xmx must appear before -jar, since anything after -jar is passed as an argument to the application rather than to the JVM (and the values need a size suffix with no space, e.g. -Xmx7168m). The following works:
SOLR="$JAVABIN -Xmx7168m -Xms1024m -Dsolr.solr.home=$SOLR_HOME
-Djetty.logs=$INSTALL_PATH/logs/ -Djetty.home=$INSTALL_PATH/ -jar $INSTALL_PATH/start.jar $INSTALL_PATH/etc/jetty.xml"
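To confirm the options actually reached the JVM, a quick sanity check (not Bitnami-specific) is to inspect the running process and look for -Xmx7168m in its command line:
ps aux | grep start.jar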