I have a Java web app running on Heroku which keeps generating "Memory quota exceeded" messages. The app itself is quite big and pulls in a lot of libraries, but it gets very few requests (it is only used by a handful of users, so if none of them are online the system may not get a single request for hours), and thus performance is not a primary concern.
Even though there is very little happening in my app, the memory consumption is consistently high:
Before moving to Heroku I deployed the app in Docker containers and never worried much about memory settings, leaving everything at the defaults. The whole container usually consumed about 300 MB.
The first thing I tried was to reduce memory consumption by setting -Xmx256m -Xss512k, but this did not seem to have any effect.
The Heroku manual suggests logging some data about garbage collection, so I used the following flags to run my application: -Xmx256m -Xss512k -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintTenuringDistribution -XX:+UseConcMarkSweepGC. This gives me, for example, the following output:
2017-01-11T22:43:39.605180+00:00 heroku[web.1]: Process running mem=588M(106.7%)
2017-01-11T22:43:39.605545+00:00 heroku[web.1]: Error R14 (Memory quota exceeded)
2017-01-11T22:43:40.431536+00:00 app[web.1]: 2017-01-11T22:43:40.348+0000: [GC (Allocation Failure) 2017-01-11T22:43:40.348+0000: [ParNew
2017-01-11T22:43:40.431566+00:00 app[web.1]: Desired survivor size 4456448 bytes, new threshold 1 (max 6)
2017-01-11T22:43:40.431579+00:00 app[web.1]: - age 1: 7676592 bytes, 7676592 total
2017-01-11T22:43:40.431593+00:00 app[web.1]: - age 2: 844048 bytes, 8520640 total
2017-01-11T22:43:40.431605+00:00 app[web.1]: - age 3: 153408 bytes, 8674048 total
2017-01-11T22:43:40.431772+00:00 app[web.1]: : 72382K->8704K(78656K), 0.0829189 secs] 139087K->78368K(253440K), 0.0830615 secs] [Times: user=0.06 sys=0.00, real=0.08 secs]
2017-01-11T22:43:41.298146+00:00 app[web.1]: 2017-01-11T22:43:41.195+0000: [GC (Allocation Failure) 2017-01-11T22:43:41.195+0000: [ParNew
2017-01-11T22:43:41.304519+00:00 app[web.1]: Desired survivor size 4456448 bytes, new threshold 1 (max 6)
2017-01-11T22:43:41.304537+00:00 app[web.1]: - age 1: 7271480 bytes, 7271480 total
2017-01-11T22:43:41.304705+00:00 app[web.1]: : 78656K->8704K(78656K), 0.1091697 secs] 148320K->81445K(253440K), 0.1092897 secs] [Times: user=0.10 sys=0.00, real=0.11 secs]
2017-01-11T22:43:42.589543+00:00 app[web.1]: 2017-01-11T22:43:42.526+0000: [GC (Allocation Failure) 2017-01-11T22:43:42.526+0000: [ParNew
2017-01-11T22:43:42.589562+00:00 app[web.1]: Desired survivor size 4456448 bytes, new threshold 1 (max 6)
2017-01-11T22:43:42.589564+00:00 app[web.1]: - age 1: 6901112 bytes, 6901112 total
2017-01-11T22:43:42.589695+00:00 app[web.1]: : 78656K->8704K(78656K), 0.0632178 secs] 151397K->83784K(253440K), 0.0633208 secs] [Times: user=0.06 sys=0.00, real=0.06 secs]
2017-01-11T22:43:57.653300+00:00 heroku[web.1]: Process running mem=587M(106.6%)
2017-01-11T22:43:57.653498+00:00 heroku[web.1]: Error R14 (Memory quota exceeded)
Unfortunately I am no expert at reading these logs, but at a first naive glance it does not look like the app is consuming an amount of memory that would be a problem (or am I horribly misreading the logs?).
My Procfile reads:
web: java $JAVA_OPTS -jar target/dependency/webapp-runner.jar --port $PORT --context-xml context.xml app.war
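For reference, the flags above are passed in via $JAVA_OPTS, which on Heroku can be set with something like:
heroku config:set JAVA_OPTS="-Xmx256m -Xss512k"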
Update
As codefinger suggested, I added the Heroku Java agent to my app. For some reason, after adding the java-agent the problem did not occur anymore. But now I have been able to capture the problem. In the following excerpt the memory limit was exceeded only for a short moment:
2017-01-24T10:30:00.143342+00:00 app[web.1]: measure.mem.jvm.heap.used=92M measure.mem.jvm.heap.committed=221M measure.mem.jvm.heap.max=233M
2017-01-24T10:30:00.143399+00:00 app[web.1]: measure.mem.jvm.nonheap.used=77M measure.mem.jvm.nonheap.committed=78M measure.mem.jvm.nonheap.max=0M
2017-01-24T10:30:00.143474+00:00 app[web.1]: measure.threads.jvm.total=41 measure.threads.jvm.daemon=24 measure.threads.jvm.nondaemon=2 measure.threads.jvm.internal=15
2017-01-24T10:30:00.147542+00:00 app[web.1]: measure.mem.linux.vsz=4449M measure.mem.linux.rss=446M
2017-01-24T10:31:00.143196+00:00 app[web.1]: measure.mem.jvm.heap.used=103M measure.mem.jvm.heap.committed=251M measure.mem.jvm.heap.max=251M
2017-01-24T10:31:00.143346+00:00 app[web.1]: measure.mem.jvm.nonheap.used=101M measure.mem.jvm.nonheap.committed=103M measure.mem.jvm.nonheap.max=0M
2017-01-24T10:31:00.143468+00:00 app[web.1]: measure.threads.jvm.total=42 measure.threads.jvm.daemon=25 measure.threads.jvm.nondaemon=2 measure.threads.jvm.internal=15
2017-01-24T10:31:00.153106+00:00 app[web.1]: measure.mem.linux.vsz=4739M measure.mem.linux.rss=503M
2017-01-24T10:31:24.163943+00:00 heroku[web.1]: Process running mem=517M(101.2%)
2017-01-24T10:31:24.164150+00:00 heroku[web.1]: Error R14 (Memory quota exceeded)
2017-01-24T10:32:00.143066+00:00 app[web.1]: measure.mem.jvm.heap.used=108M measure.mem.jvm.heap.committed=248M measure.mem.jvm.heap.max=248M
2017-01-24T10:32:00.143103+00:00 app[web.1]: measure.mem.jvm.nonheap.used=108M measure.mem.jvm.nonheap.committed=110M measure.mem.jvm.nonheap.max=0M
2017-01-24T10:32:00.143173+00:00 app[web.1]: measure.threads.jvm.total=40 measure.threads.jvm.daemon=23 measure.threads.jvm.nondaemon=2 measure.threads.jvm.internal=15
2017-01-24T10:32:00.150558+00:00 app[web.1]: measure.mem.linux.vsz=4738M measure.mem.linux.rss=314M
2017-01-24T10:33:00.142989+00:00 app[web.1]: measure.mem.jvm.heap.used=108M measure.mem.jvm.heap.committed=248M measure.mem.jvm.heap.max=248M
2017-01-24T10:33:00.143056+00:00 app[web.1]: measure.mem.jvm.nonheap.used=108M measure.mem.jvm.nonheap.committed=110M measure.mem.jvm.nonheap.max=0M
2017-01-24T10:33:00.143150+00:00 app[web.1]: measure.threads.jvm.total=40 measure.threads.jvm.daemon=23 measure.threads.jvm.nondaemon=2 measure.threads.jvm.internal=15
2017-01-24T10:33:00.146642+00:00 app[web.1]: measure.mem.linux.vsz=4738M measure.mem.linux.rss=313M
In the following case the limit was exceeded for a much longer time:
2017-01-25T08:14:06.202429+00:00 heroku[web.1]: Process running mem=574M(111.5%)
2017-01-25T08:14:06.202429+00:00 heroku[web.1]: Error R14 (Memory quota exceeded)
2017-01-25T08:14:26.924265+00:00 heroku[web.1]: Process running mem=574M(111.5%)
2017-01-25T08:14:26.924265+00:00 heroku[web.1]: Error R14 (Memory quota exceeded)
2017-01-25T08:14:48.082543+00:00 heroku[web.1]: Process running mem=574M(111.5%)
2017-01-25T08:14:48.082615+00:00 heroku[web.1]: Error R14 (Memory quota exceeded)
2017-01-25T08:15:00.142901+00:00 app[web.1]: measure.mem.jvm.heap.used=164M measure.mem.jvm.heap.committed=229M measure.mem.jvm.heap.max=233M
2017-01-25T08:15:00.142972+00:00 app[web.1]: measure.mem.jvm.nonheap.used=121M measure.mem.jvm.nonheap.committed=124M measure.mem.jvm.nonheap.max=0M
2017-01-25T08:15:00.143019+00:00 app[web.1]: measure.threads.jvm.total=40 measure.threads.jvm.daemon=23 measure.threads.jvm.nondaemon=2 measure.threads.jvm.internal=15
2017-01-25T08:15:00.149631+00:00 app[web.1]: measure.mem.linux.vsz=4740M measure.mem.linux.rss=410M
2017-01-25T08:15:09.339319+00:00 heroku[web.1]: Process running mem=574M(111.5%)
2017-01-25T08:15:09.339319+00:00 heroku[web.1]: Error R14 (Memory quota exceeded)
2017-01-25T08:15:30.398980+00:00 heroku[web.1]: Process running mem=574M(111.5%)
2017-01-25T08:15:30.399066+00:00 heroku[web.1]: Error R14 (Memory quota exceeded)
2017-01-25T08:15:51.140193+00:00 heroku[web.1]: Process running mem=574M(111.5%)
2017-01-25T08:15:51.140280+00:00 heroku[web.1]: Error R14 (Memory quota exceeded)
2017-01-25T08:16:00.143016+00:00 app[web.1]: measure.mem.jvm.heap.used=165M measure.mem.jvm.heap.committed=229M measure.mem.jvm.heap.max=233M
2017-01-25T08:16:00.143084+00:00 app[web.1]: measure.mem.jvm.nonheap.used=121M measure.mem.jvm.nonheap.committed=124M measure.mem.jvm.nonheap.max=0M
2017-01-25T08:16:00.143135+00:00 app[web.1]: measure.threads.jvm.total=40 measure.threads.jvm.daemon=23 measure.threads.jvm.nondaemon=2 measure.threads.jvm.internal=15
2017-01-25T08:16:00.148157+00:00 app[web.1]: measure.mem.linux.vsz=4740M measure.mem.linux.rss=410M
For the latter log, here is the bigger picture:
(memory consumption dropped because I restarted the server)
At the time the memory limit is first exceeded, a cron job (Spring scheduled) imports CSV files. The CSV files are processed in batches of 10,000 lines, so there are never more than 10,000 rows referenced in memory at once. Nevertheless a lot of memory is of course consumed overall, as many batches are processed. I also tried to trigger the imports manually to check whether I can reproduce the memory consumption peak, but I can't: it does not always happen. A rough sketch of the import job follows.
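Roughly, the scheduled import looks like the sketch below (class name, schedule, path, and delimiter are invented for illustration; the real code differs, but the batching structure is the same):
import java.io.BufferedReader;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class CsvImportJob {

    private static final int BATCH_SIZE = 10_000;

    @Scheduled(cron = "0 0 3 * * *") // placeholder schedule
    public void importCsv() {
        Path csvPath = Paths.get("/data/import.csv"); // placeholder path
        try (BufferedReader reader = Files.newBufferedReader(csvPath, StandardCharsets.UTF_8)) {
            List<String[]> batch = new ArrayList<>(BATCH_SIZE);
            String line;
            while ((line = reader.readLine()) != null) {
                batch.add(line.split(";")); // placeholder delimiter
                if (batch.size() == BATCH_SIZE) {
                    persistBatch(batch); // hypothetical persistence step
                    batch.clear();       // drop references so processed rows can be collected
                }
            }
            if (!batch.isEmpty()) {
                persistBatch(batch);
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    private void persistBatch(List<String[]> rows) {
        // hypothetical: write the rows via a repository / JDBC batch insert
    }
}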
This isn't really an answer, but may help:
It looks like you have a surge (or possibly a leak) in off-heap memory consumption. The source is almost certainly the CSV processing. Here's a good article that describes a similar problem:
http://www.evanjones.ca/java-native-leak-bug.html
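If I remember the article correctly, the gist is that native memory held by things like java.util.zip streams is only released when the stream is closed (or eventually finalized), so off-heap usage can balloon even though the heap numbers look healthy. A defensive pattern for the CSV pipeline is strict try-with-resources around anything that might wrap native resources; a minimal sketch (file name and method are made up):
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.zip.GZIPInputStream;

public class SafeCsvRead {
    // Closing the stream promptly releases the native zlib buffers it holds;
    // waiting for finalization lets off-heap memory pile up while the heap looks fine.
    static long countBytes(String path) throws IOException {
        try (InputStream in = new GZIPInputStream(Files.newInputStream(Paths.get(path)))) {
            byte[] buf = new byte[8192];
            long total = 0;
            int n;
            while ((n = in.read(buf)) != -1) {
                total += n;
            }
            return total;
        }
    }
}
The same idea applies to Deflater/Inflater instances and direct ByteBuffers: if they are not released promptly, the RSS Heroku measures keeps growing while the JVM heap metrics stay flat.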
If you are seeing values like:
Process running mem=574M(111.5%)
it may simply not be possible to get your app below 500 MB. I had similar problems with my app, and it now runs correctly under 512 MB.
You can try some of these options (from my Docker example):
ENTRYPOINT ["java","-Dserver.port=$PORT","-Xmx268M","-Xss512K","-
XX:CICompilerCount=2","-Dfile.encoding=UTF-8","-
XX:+UseContainerSupport","-Djava.security.egd=file:/dev/./urandom","-
Xlog:gc","-jar","/app.jar"]
Of course, you will have to match the -Xmx value to your own case (maybe more, maybe less).
Related
My application always does a GC at startup, even if no requests have arrived.
JVM options:
/opt/java/bin/java -server -XX:+UnlockExperimentalVMOptions -XX:+UseG1GC -Xms4g -Xmx4g -XX:MaxMetaspaceSize=128m -Xss256k -XX:G1ReservePercent=10 -XX:MaxGCPauseMillis=100 -XX:+AggressiveOpts -XX:+UseStringDeduplication -XX:+UseBiasedLocking -XX:+UseFastAccessorMethods -XX:+DisableExplicitGC -XX:+PrintAdaptiveSizePolicy -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime -XX:+PrintGCDateStamps -XX:+PrintReferenceGC -XX:G1LogLevel=finest -XX:+PrintGCCause -verbose:gc -Xloggc:/data/logs/shiva-rds-nio/gc.log -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/data/logs/shiva-rds-nio -Djava.library.path=/opt/shiva-rds/lib -DSHIVA_RDS_HOME=/opt/shiva-rds -Dlogback.configurationFile=/opt/shiva-rds/conf/logback.xml -DLOG_HOME=/data/logs/shiva-rds-nio -jar lib/shiva-rds-proxy-2.3.1130-RELEASE.jar nio
gc logs:
Java HotSpot(TM) 64-Bit Server VM (25.111-b14) for linux-amd64 JRE (1.8.0_111-b14), built on Sep 22 2016 16:14:03 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)
Memory: 4k page, physical 65937908k(5662448k free), swap 0k(0k free)
CommandLine flags: -XX:+AggressiveOpts -XX:+DisableExplicitGC -XX:G1LogLevel=finest -XX:G1ReservePercent=10 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/data/logs/shiva-rds-nio -XX:InitialHeapSize=4294967296 -XX:MaxGCPauseMillis=100 -XX:MaxHeapSize=4294967296 -XX:MaxMetaspaceSize=134217728 -XX:+PrintAdaptiveSizePolicy -XX:+PrintGC -XX:+PrintGCApplicationStoppedTime -XX:+PrintGCCause -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -XX:+PrintReferenceGC -XX:+PrintTenuringDistribution -XX:ThreadStackSize=256 -XX:+UnlockExperimentalVMOptions -XX:+UseBiasedLocking -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseFastAccessorMethods -XX:+UseG1GC -XX:+UseStringDeduplication
0.022: [G1Ergonomics (Heap Sizing) expand the heap, requested expansion amount: 4294967296 bytes, attempted expansion amount: 4294967296 bytes]
2016-12-13T15:06:54.946+0800: 0.279: Total time for which application threads were stopped: 0.0001481 seconds, Stopping threads took: 0.0000189 seconds
2016-12-13T15:06:55.168+0800: 0.501: Total time for which application threads were stopped: 0.0002564 seconds, Stopping threads took: 0.0000233 seconds
2016-12-13T15:06:55.173+0800: 0.506: Total time for which application threads were stopped: 0.0000858 seconds, Stopping threads took: 0.0000148 seconds
2016-12-13T15:06:55.302+0800: 0.635: Total time for which application threads were stopped: 0.0003145 seconds, Stopping threads took: 0.0000431 seconds
2016-12-13T15:06:55.388+0800: 0.721: Total time for which application threads were stopped: 0.0001337 seconds, Stopping threads took: 0.0000349 seconds
2016-12-13T15:06:55.460+0800: 0.793: [GC pause (G1 Evacuation Pause) (young)
Desired survivor size 13631488 bytes, new threshold 15 (max 15)
0.793: [G1Ergonomics (CSet Construction) start choosing CSet, _pending_cards: 0, predicted base time: 10.00 ms, remaining time: 90.00 ms, target pause time: 100.00 ms]
0.793: [G1Ergonomics (CSet Construction) add young regions to CSet, eden: 102 regions, survivors: 0 regions, predicted young region time: 3100.70 ms]
0.793: [G1Ergonomics (CSet Construction) finish choosing CSet, eden: 102 regions, survivors: 0 regions, old: 0 regions, predicted pause time: 3110.70 ms, target pause time: 100.00 ms]
, 0.1121020 secs]
[Parallel Time: 108.1 ms, GC Workers: 18]
From reading the JVM source code, there is a subtle distinction between the -Xms and -XX:InitialHeapSize options.
The first is the minimum heap size, implying that the JVM should never make the heap any smaller.
The second one is the initial heap size, implying that the JVM could make the heap smaller.
There is also some rather convoluted logic for determining what a "reasonable" initial size should be, which looks like it could override the InitialHeapSize.
I suspect that what happens in your case is that the JVM is using a smaller initial size than what you specified (-XX:InitialHeapSize=4294967296) and then resizing. Presumably the GC runs and the resize happen while your application is starting.
If you (really) wanted to avoid the garbage collection during startup, I'd suggest using -Xms instead. However that will cause the JVM to occupy more (virtual) memory, which is not necessarily a good thing.
UPDATE - apparently that is incorrect.
I now see that you are using -Xms on the command line. I guess that means that the "reasonableness" logic must apply to that option as well. If so, I doubt that there is anything you can do to avoid the GC. (But I wouldn't worry about that.)
The best I can say is that the meanings of -Xms and -XX:InitialHeapSize and their respective behaviors are not clearly specified.
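If you want to see what the JVM actually settled on (a generic trick, not specific to your setup), you can dump the resolved flag values and look at the heap sizes:
java -XX:+PrintFlagsFinal -version | grep -i heapsize
The InitialHeapSize and MaxHeapSize values printed there are the ones the ergonomics logic ended up with, which makes it easier to tell whether your -Xms/-Xmx actually took effect.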
I am trying to deploy a Spring MVC app through a Codeship CI flow to a 1x dyno on Heroku with embedded Jetty (version 8).
The Codeship flow works, but in the deployment step R14 and R10 Heroku errors appear.
I have tried locally with identical Java parameters and it deploys fine in 15-20 seconds.
What could be the reason?
PROCFILE
web: java -Dserver.port=$PORT $JAVA_OPTS -jar target/dependency/jetty-runner.jar target/*.war
system.properties
java.runtime.version=1.7
HEROKU LOG
2014-09-06T16:12:07.070516+00:00 heroku[web.1]: Starting process with command `java -Dserver.port=17223 -Xmx384m -Xms384m -Xss512k -XX:+UseCompressedOops -jar target/dependency/jetty-runner.jar target/*.war`
2014-09-06T16:12:07.698033+00:00 app[web.1]: Picked up JAVA_TOOL_OPTIONS: -Djava.rmi.server.useCodebaseOnly=true -Djava.rmi.server.useCodebaseOnly=true
2014-09-06T16:12:08.350797+00:00 app[web.1]: 2014-09-06 16:12:08.349:INFO:omjr.Runner:Runner
2014-09-06T16:12:08.350934+00:00 app[web.1]: 2014-09-06 16:12:08.350:WARN:omjr.Runner:No tx manager found
2014-09-06T16:12:08.454514+00:00 app[web.1]: 2014-09-06 16:12:08.454:INFO:omjr.Runner:Deploying file:/app/target/MagmaInside221B.war # /
2014-09-06T16:12:08.477820+00:00 app[web.1]: 2014-09-06 16:12:08.477:INFO:oejs.Server:jetty-8.y.z-SNAPSHOT
2014-09-06T16:12:08.607664+00:00 app[web.1]: 2014-09-06 16:12:08.607:INFO:oejw.WebInfConfiguration:Extract jar:file:/app/target/MagmaInside221B.war!/ to /app/target/MagmaInside221B
2014-09-06T16:12:19.847416+00:00 app[web.1]: 2014-09-06 16:12:19.847:INFO:oejpw.PlusConfiguration:No Transaction manager found - if your webapp requires one, please configure one.
2014-09-06T16:12:23.593483+00:00 heroku[web.1]: source=web.1 dyno=heroku.29253714.85382d49-d1fa-4998-86f7-12cea60f83a4 sample#memory_total=319.66MB sample#memory_rss=234.84MB sample#memory_cache=84.83MB sample#memory_swap=0.00MB sample#memory_pgpgin=97083pages sample#memory_pgpgout=15249pages
2014-09-06T16:12:44.726949+00:00 heroku[web.1]: source=web.1 dyno=heroku.29253714.85382d49-d1fa-4998-86f7-12cea60f83a4 sample#memory_total=557.48MB sample#memory_rss=511.58MB sample#memory_cache=0.33MB sample#memory_swap=45.57MB sample#memory_pgpgin=199724pages sample#memory_pgpgout=68675pages
2014-09-06T16:12:44.727477+00:00 heroku[web.1]: Process running mem=557M(108.9%)
2014-09-06T16:12:44.727730+00:00 heroku[web.1]: Error R14 (Memory quota exceeded)
2014-09-06T16:13:05.520637+00:00 heroku[web.1]: source=web.1 dyno=heroku.29253714.85382d49-d1fa-4998-86f7-12cea60f83a4 sample#load_avg_1m=1.20
2014-09-06T16:13:05.520729+00:00 heroku[web.1]: source=web.1 dyno=heroku.29253714.85382d49-d1fa-4998-86f7-12cea60f83a4 sample#memory_total=635.76MB sample#memory_rss=511.89MB sample#memory_cache=0.11MB sample#memory_swap=123.77MB sample#memory_pgpgin=250735pages sample#memory_pgpgout=119665pages
2014-09-06T16:13:05.521268+00:00 heroku[web.1]: Process running mem=635M(124.2%)
2014-09-06T16:13:05.521494+00:00 heroku[web.1]: Error R14 (Memory quota exceeded)
2014-09-06T16:13:07.096312+00:00 heroku[web.1]: Error R10 (Boot timeout) -> Web process failed to bind to $PORT within 60 seconds of launch
2014-09-06T16:13:07.096536+00:00 heroku[web.1]: Stopping process with SIGKILL
2014-09-06T16:13:07.998339+00:00 heroku[web.1]: Process exited with status 137
2014-09-06T16:13:08.009655+00:00 heroku[web.1]: State changed from starting to crashed
You're exceeding the Heroku memory limit for your configured dyno. This has nothing to do with how you deploy (i.e. this will also happen when you push from your local machine). As for solving this problem, you'd either need to upgrade to a more powerful dyno or reduce the memory footprint of your application.
Disclaimer: I work for Codeship. I talked to Antonio via our in-app support tool and we solved the issue, but I wanted to provide a public answer as well.
I've written an application on the Play 2 framework for Heroku and am having memory issues.
2013-03-21T01:28:35+00:00 heroku[web.1]: Process running mem=543M(106.1%)
2013-03-21T01:28:35+00:00 heroku[web.1]: Error R14 (Memory quota exceeded)
Locally I've profiled it with the same JVM settings and memory restrictions as on Heroku (512 MB), but almost instantly when I send requests to the Heroku deployment it runs out of heap space.
JAVA_OPTS: -Xmx384m -Xss512k -XX:+UseCompressedOops
I wouldn't have any issues if I could profile what's going on there, but the java-agent doesn't seem to work for me.
I haven't come across any memory leaks. I do know that every object I create is only used once, so I could make my young gen large and my old gen small. I've tried different JVM values but can't seem to find the right combination to get this working without proper profiling.
I've read all the Heroku docs on tuning and such, to no avail. Does anyone have any ideas, or can maybe point me in the right direction?
EDIT
I still have not been able to get remote monitoring working, but here are some dumps from my local test system before and after one full GC.
{Heap before GC invocations=1747 (full 0):
PSYoungGen total 42496K, used 42496K [0x00000000f5560000, 0x00000000fded0000, 0x0000000100000000)
eden space 42176K, 100% used [0x00000000f5560000,0x00000000f7e90000,0x00000000f7e90000)
from space 320K, 100% used [0x00000000fde80000,0x00000000fded0000,0x00000000fded0000)
to space 640K, 0% used [0x00000000fdd90000,0x00000000fdd90000,0x00000000fde30000)
PSOldGen total 106176K, used 105985K [0x00000000e0000000, 0x00000000e67b0000, 0x00000000f5560000)
object space 106176K, 99% used [0x00000000e0000000,0x00000000e67804c8,0x00000000e67b0000)
PSPermGen total 43712K, used 43684K [0x00000000d5a00000, 0x00000000d84b0000, 0x00000000e0000000)
object space 43712K, 99% used [0x00000000d5a00000,0x00000000d84a9338,0x00000000d84b0000)
2013-03-21T14:09:36.827-0700: [GC [PSYoungGen: 42496K->384K(41536K)] 148481K->106450K(147712K), 0.0027940 secs] [Times: user=0.02 sys=0.00, real=0.00 secs]
Heap after GC invocations=1747 (full 0):
PSYoungGen total 41536K, used 384K [0x00000000f5560000, 0x00000000fde90000, 0x0000000100000000)
eden space 41152K, 0% used [0x00000000f5560000,0x00000000f5560000,0x00000000f7d90000)
from space 384K, 100% used [0x00000000fdd90000,0x00000000fddf0000,0x00000000fddf0000)
to space 640K, 0% used [0x00000000fddf0000,0x00000000fddf0000,0x00000000fde90000)
PSOldGen total 106176K, used 106066K [0x00000000e0000000, 0x00000000e67b0000, 0x00000000f5560000)
object space 106176K, 99% used [0x00000000e0000000,0x00000000e6794968,0x00000000e67b0000)
PSPermGen total 43712K, used 43684K [0x00000000d5a00000, 0x00000000d84b0000, 0x00000000e0000000)
object space 43712K, 99% used [0x00000000d5a00000,0x00000000d84a9338,0x00000000d84b0000)
}
{Heap before GC invocations=1748 (full 1):
PSYoungGen total 41536K, used 384K [0x00000000f5560000, 0x00000000fde90000, 0x0000000100000000)
eden space 41152K, 0% used [0x00000000f5560000,0x00000000f5560000,0x00000000f7d90000)
from space 384K, 100% used [0x00000000fdd90000,0x00000000fddf0000,0x00000000fddf0000)
to space 640K, 0% used [0x00000000fddf0000,0x00000000fddf0000,0x00000000fde90000)
PSOldGen total 106176K, used 106066K [0x00000000e0000000, 0x00000000e67b0000, 0x00000000f5560000)
object space 106176K, 99% used [0x00000000e0000000,0x00000000e6794968,0x00000000e67b0000)
PSPermGen total 43712K, used 43684K [0x00000000d5a00000, 0x00000000d84b0000, 0x00000000e0000000)
object space 43712K, 99% used [0x00000000d5a00000,0x00000000d84a9338,0x00000000d84b0000)
2013-03-21T14:09:36.830-0700: [Full GC [PSYoungGen: 384K->0K(41536K)] [PSOldGen: 106066K->13137K(52224K)] 106450K->13137K(93760K) [PSPermGen: 43684K->43684K(87936K)], 0.0666250 secs] [Times: user=0.06 sys=0.01, real=0.07 secs]
Heap after GC invocations=1748 (full 1):
PSYoungGen total 41536K, used 0K [0x00000000f5560000, 0x00000000fde90000, 0x0000000100000000)
eden space 41152K, 0% used [0x00000000f5560000,0x00000000f5560000,0x00000000f7d90000)
from space 384K, 0% used [0x00000000fdd90000,0x00000000fdd90000,0x00000000fddf0000)
to space 640K, 0% used [0x00000000fddf0000,0x00000000fddf0000,0x00000000fde90000)
PSOldGen total 52224K, used 13137K [0x00000000e0000000, 0x00000000e3300000, 0x00000000f5560000)
object space 52224K, 25% used [0x00000000e0000000,0x00000000e0cd4528,0x00000000e3300000)
PSPermGen total 87936K, used 43684K [0x00000000d5a00000, 0x00000000dafe0000, 0x00000000e0000000)
object space 87936K, 49% used [0x00000000d5a00000,0x00000000d84a9338,0x00000000dafe0000)
}
EDIT
This is what I can get -- which isn't much, but it shows what happens after 100 requests as everything starts to degrade; you can see web.2 has already swapped in this dump:
2013-03-21T22:24:23+00:00 heroku[web.1]: source=heroku.13369226.web.1.d615093e-77a3-42b1-8da1-a228bd7582a1 measure=load_avg_1m val=0.41
2013-03-21T22:24:23+00:00 heroku[web.1]: source=heroku.13369226.web.1.d615093e-77a3-42b1-8da1-a228bd7582a1 measure=memory_total val=246.95 units=MB
2013-03-21T22:24:23+00:00 heroku[web.1]: source=heroku.13369226.web.1.d615093e-77a3-42b1-8da1-a228bd7582a1 measure=memory_rss val=246.91 units=MB
2013-03-21T22:24:23+00:00 heroku[web.1]: source=heroku.13369226.web.1.d615093e-77a3-42b1-8da1-a228bd7582a1 measure=memory_cache val=0.05 units=MB
2013-03-21T22:24:23+00:00 heroku[web.1]: source=heroku.13369226.web.1.d615093e-77a3-42b1-8da1-a228bd7582a1 measure=memory_swap val=0.00 units=MB
2013-03-21T22:24:23+00:00 heroku[web.1]: source=heroku.13369226.web.1.d615093e-77a3-42b1-8da1-a228bd7582a1 measure=memory_pgpgin val=72259 units=pages
2013-03-21T22:24:23+00:00 heroku[web.1]: source=heroku.13369226.web.1.d615093e-77a3-42b1-8da1-a228bd7582a1 measure=memory_pgpgout val=9039 units=pages
2013-03-21T22:24:25+00:00 heroku[web.2]: source=heroku.13369226.web.2.cb423d08-dd15-41c1-9843-95bcdc269111 measure=load_avg_1m val=0.30
2013-03-21T22:24:25+00:00 heroku[web.2]: source=heroku.13369226.web.2.cb423d08-dd15-41c1-9843-95bcdc269111 measure=memory_total val=532.83 units=MB
2013-03-21T22:24:25+00:00 heroku[web.2]: source=heroku.13369226.web.2.cb423d08-dd15-41c1-9843-95bcdc269111 measure=memory_rss val=511.86 units=MB
2013-03-21T22:24:25+00:00 heroku[web.2]: source=heroku.13369226.web.2.cb423d08-dd15-41c1-9843-95bcdc269111 measure=memory_cache val=0.04 units=MB
2013-03-21T22:24:25+00:00 heroku[web.2]: source=heroku.13369226.web.2.cb423d08-dd15-41c1-9843-95bcdc269111 measure=memory_swap val=20.93 units=MB
2013-03-21T22:24:25+00:00 heroku[web.2]: source=heroku.13369226.web.2.cb423d08-dd15-41c1-9843-95bcdc269111 measure=memory_pgpgin val=145460 units=pages
2013-03-21T22:24:25+00:00 heroku[web.2]: source=heroku.13369226.web.2.cb423d08-dd15-41c1-9843-95bcdc269111 measure=memory_pgpgout val=14414 units=pages
2013-03-21T22:24:25+00:00 heroku[web.2]: Process running mem=532M(104.1%)
2013-03-21T22:24:25+00:00 heroku[web.2]: Error R14 (Memory quota exceeded)
2013-03-21T22:24:29+00:00 heroku[web.4]: source=heroku.13369226.web.4.25274242-a3af-4d2e-9da3-44e5e0a45c09 measure=load_avg_1m val=1.83
2013-03-21T22:24:29+00:00 heroku[web.4]: source=heroku.13369226.web.4.25274242-a3af-4d2e-9da3-44e5e0a45c09 measure=memory_total val=400.66 units=MB
2013-03-21T22:24:29+00:00 heroku[web.4]: source=heroku.13369226.web.4.25274242-a3af-4d2e-9da3-44e5e0a45c09 measure=memory_rss val=400.61 units=MB
2013-03-21T22:24:29+00:00 heroku[web.4]: source=heroku.13369226.web.4.25274242-a3af-4d2e-9da3-44e5e0a45c09 measure=memory_cache val=0.05 units=MB
2013-03-21T22:24:29+00:00 heroku[web.4]: source=heroku.13369226.web.4.25274242-a3af-4d2e-9da3-44e5e0a45c09 measure=memory_swap val=0.00 units=MB
2013-03-21T22:24:29+00:00 heroku[web.4]: source=heroku.13369226.web.4.25274242-a3af-4d2e-9da3-44e5e0a45c09 measure=memory_pgpgin val=113336 units=pages
2013-03-21T22:24:29+00:00 heroku[web.4]: source=heroku.13369226.web.4.25274242-a3af-4d2e-9da3-44e5e0a45c09 measure=memory_pgpgout val=10767 units=pages
2013-03-21T22:24:29+00:00 heroku[web.3]: source=heroku.13369226.web.3.2132f01f-94b1-4151-8fa8-09cdb2774919 measure=load_avg_1m val=0.25
2013-03-21T22:24:29+00:00 heroku[web.3]: source=heroku.13369226.web.3.2132f01f-94b1-4151-8fa8-09cdb2774919 measure=memory_total val=397.70 units=MB
2013-03-21T22:24:29+00:00 heroku[web.3]: source=heroku.13369226.web.3.2132f01f-94b1-4151-8fa8-09cdb2774919 measure=memory_rss val=397.64 units=MB
2013-03-21T22:24:29+00:00 heroku[web.3]: source=heroku.13369226.web.3.2132f01f-94b1-4151-8fa8-09cdb2774919 measure=memory_cache val=0.05 units=MB
2013-03-21T22:24:29+00:00 heroku[web.3]: source=heroku.13369226.web.3.2132f01f-94b1-4151-8fa8-09cdb2774919 measure=memory_swap val=0.00 units=MB
2013-03-21T22:24:29+00:00 heroku[web.3]: source=heroku.13369226.web.3.2132f01f-94b1-4151-8fa8-09cdb2774919 measure=memory_pgpgin val=112163 units=pages
2013-03-21T22:24:29+00:00 heroku[web.3]: source=heroku.13369226.web.3.2132f01f-94b1-4151-8fa8-09cdb2774919 measure=memory_pgpgout val=10353 units=pages
I had the same issue. Heroku is telling you the machine is running out of memory, not the Java VM. There is actually a bug in the Heroku Play 2.2 deployment: the startup script reads java_opts, not JAVA_OPTS.
I fixed it by setting both:
heroku config:add java_opts='-Xmx384m -Xms384m -Xss512k -XX:+UseCompressedOops'
heroku config:add JAVA_OPTS='-Xmx384m -Xms384m -Xss512k -XX:+UseCompressedOops'
I also had to set -Xms, otherwise I got an error saying the min and max were incompatible. I guess Play 2.2 was using a default higher than 384m.
To find out your total memory use, this is a useful equation (pre-Java 8):
Max memory = [-Xmx] + [-XX:MaxPermSize] + number_of_threads * [-Xss]
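As a hypothetical worked example (numbers invented purely to illustrate the formula, not taken from your app): with -Xmx384m, -XX:MaxPermSize=128m and 40 threads at -Xss512k, that gives
384 MB + 128 MB + 40 * 0.5 MB = 532 MB
which already overshoots a 512 MB dyno before any native or NIO buffer memory is counted.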
There are 3 diagnostic tools in this Heroku devcenter article that may be helpful:
https://devcenter.heroku.com/articles/java-memory-issues
Have a look at the memory logging agent, verbose GC flags, and log-runtime-metrics (https://devcenter.heroku.com/articles/log-runtime-metrics). Those should give you some more visibility.
You forgot to factor in the PermGen (pre-JRE 8) or Metaspace (JRE 8+) memory needs; this is the memory reserved for Java class information and certain static data. Plan on it being another 100-150 MB on top of the heap; it looks like yours is higher. You can cap it with the -XX:MaxMetaspaceSize flag, but be aware that if you exceed that limit you'll get errors.
I have a server also capped at -Xmx384M, and it turns out that real memory use is also around 500 MB with Metaspace factored in (it would go higher with more complex applications). It's running a fairly complex app (Jenkins), so the Metaspace ends up being about 170 MB; if the heap hits its limit, that's 554 MB of RAM used.
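As an illustration (placeholder values, not a recommendation for your app, and app.jar is just a stand-in), capping both pools explicitly makes the JVM-enforced ceiling predictable:
java -Xmx384m -XX:MaxMetaspaceSize=150m -Xss512k -jar app.jar
Anything the dyno reports beyond heap plus Metaspace plus thread stacks is then native/off-heap memory, which narrows down where to look.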
I'm currently having problems with very long garbage collection times; please see the logs below. My current setup is -Xms1g and -Xmx3g, and my application runs on Java 1.4.2. I don't have any garbage collection flags set. By the looks of it, 3 GB is not enough, and I really have a lot of objects to garbage collect.
Questions:
Should I change my garbage collection algorithm?
What should I use? Is it better to use -XX:+UseParallelGC or -XX:+UseConcMarkSweepGC,
or should I use this combination:
-XX:+UseParNewGC -XX:+UseConcMarkSweepGC
The objects occupying the memory are largely report data, not cache data. Also, the machine has 16 GB of memory, and I plan to increase the heap to 8 GB.
What is the difference between the two options? I still find it hard to understand.
The machine has multiple processors. I can take pauses of up to 5 seconds, but 30 to 70 seconds is really hard.
Thanks for the help.
Line 151493: [14/Jan/2012:11:47:48] WARNING ( 8710): CORE3283: stderr: [GC 1632936K->1020739K(2050552K), 1.2462436 secs]
Line 157710: [14/Jan/2012:11:53:38] WARNING ( 8710): CORE3283: stderr: [GC 1670531K->1058755K(2050552K), 1.1555375 secs]
Line 163840: [14/Jan/2012:12:00:42] WARNING ( 8710): CORE3283: stderr: [GC 1708547K->1097282K(2050552K), 1.1503118 secs]
Line 169811: [14/Jan/2012:12:08:02] WARNING ( 8710): CORE3283: stderr: [GC 1747074K->1133764K(2050552K), 1.1017273 secs]
Line 175879: [14/Jan/2012:12:14:18] WARNING ( 8710): CORE3283: stderr: [GC 1783556K->1173103K(2050552K), 1.2060946 secs]
Line 176606: [14/Jan/2012:12:15:42] WARNING ( 8710): CORE3283: stderr: [Full GC 1265571K->1124875K(2050552K), 25.0670316 secs]
Line 184755: [14/Jan/2012:12:25:53] WARNING ( 8710): CORE3283: stderr: [GC 2007435K->1176457K(2784880K), 1.2483770 secs]
Line 193087: [14/Jan/2012:12:37:09] WARNING ( 8710): CORE3283: stderr: [GC 2059017K->1224285K(2784880K), 1.4739291 secs]
Line 201377: [14/Jan/2012:12:51:08] WARNING ( 8710): CORE3283: stderr: [Full GC 2106845K->1215242K(2784880K), 30.4016208 secs]
xaa:1: [11/Oct/2011:16:00:28] WARNING (17125): CORE3283: stderr: [Full GC 3114936K->2985477K(3114944K), 53.0468651 secs] --> garbage collection is occurring too often, as the timestamps show; the amount collected is quite low and, as you can see, quite close to the heap size. During those 53 seconds the application is effectively paused.
xaa:2087: [11/Oct/2011:16:01:35] WARNING (17125): CORE3283: stderr: [Full GC 3114943K->2991338K(3114944K), 58.3776291 secs]
xaa:3897: [11/Oct/2011:16:02:33] WARNING (17125): CORE3283: stderr: [Full GC 3114940K->2997077K(3114944K), 55.3197974 secs]
xaa:5597: [11/Oct/2011:16:03:00] WARNING (17125): CORE3283: stderr: [Full GC[Unloading class sun.reflect.GeneratedConstructorAccessor119]
xaa:7936: [11/Oct/2011:16:04:36] WARNING (17125): CORE3283: stderr: [Full GC 3114938K->3004947K(3114944K), 55.5269911 secs]
xaa:9070: [11/Oct/2011:16:05:53] WARNING (17125): CORE3283: stderr: [Full GC 3114937K->3012793K(3114944K), 70.6993328 secs]
Since you have extremely long GC pauses, I don't think that changing the GC algorithm alone would help.
Note that it is highly suspicious that you see only full collections. Perhaps you need to increase the size of the young generation and/or the survivor spaces (an illustrative set of flags follows the link below).
See also:
Tuning Garbage Collection with the 1.4.2 Java[tm] Virtual Machine
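Purely as an illustration of that suggestion (the numbers are placeholders you would have to tune against your own object lifetimes), explicit young-generation sizing on a 3 GB heap could look like:
-Xms3g -Xmx3g -XX:NewSize=768m -XX:MaxNewSize=768m -XX:SurvivorRatio=6
A larger eden and survivor area lets short-lived report objects die young instead of being promoted into the old generation, which is what drives the repeated full collections.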
Your heap is too small. The pause is so large because the collector is busy repeatedly scanning the entire heap, desperately looking for anything to collect.
You need to do one or possibly more of the following:
find and fix a memory leak
tune the application to use less memory
configure the JVM to use a bigger heap
Are you tied to 1.4.2 for some reason? GC implementations really have moved on since then, so you should consider upgrading if possible. I realise this may be a non-trivial undertaking, but it's worth considering anyway.
If you have a high survival rate, your heap may be too large. The larger the heap, the longer the JVM can go without GCing, so once a collection does hit, it has that much more to move around.
Step 1:
Make sure that you have set enough memory for your application.
Make sure that you don't have memory leaks in your application. The Eclipse Memory Analyzer Tool or VisualVM will help you identify leaks in your application.
Step 2:
If Step 1 turns up no memory-leak issues, refer to the Oracle documentation page on use cases for specific garbage collection algorithms in the "Java Garbage Collectors" section, and to the GC tuning article.
Since you have decided to configure larger heaps (>= 8 GB), G1GC should work fine for you. Refer to this related SE question on fine-tuning key parameters, and see the illustrative flags after the link:
Java 7 (JDK 7) garbage collection and documentation on G1
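As an illustrative starting point only (not tuned for any particular workload), a G1 configuration for an 8 GB heap might be:
-Xms8g -Xmx8g -XX:+UseG1GC -XX:MaxGCPauseMillis=500 -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps
MaxGCPauseMillis is a pause-time goal rather than a guarantee, but with G1 it is the main knob for trading throughput against pause length.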
The Java Virtual Machine supports several garbage collection strategies.
This article explains them.
Now I am wondering which (automatically selected) strategy my application is using; is there any way to let the JVM (version 1.6) print this information?
Edit: The JVM detects whether it is in client or server mode, so the question really is: how can I see which has been detected?
jmap -heap
Prints a heap summary: the GC algorithm used, the heap configuration, and generation-wise heap usage.
http://java.sun.com/javase/6/docs/technotes/tools/share/jmap.html
http://java.sun.com/j2se/1.5.0/docs/guide/vm/gc-ergonomics.html which is applicable for J2SE 6 as well states that the default is the Parallel Collector.
We tested this once on a JVM 1.5 by setting only
-server -Xms3g -Xmx3g -XX:PermSize=128m -XX:LargePageSizeInBytes=4m -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps
and the output showed
41359.597: [GC [PSYoungGen: 90499K->32K(377344K)] 268466K->181862K(2474496K), 0.0183138 secs]
41359.615: [Full GC [PSYoungGen: 32K->0K(377344K)] [PSOldGen: 181830K->129760K(2097152K)] 181862K->129760K(2474496K) [PSPermGen: 115335K->115335K(131072K)], 4.4590942 secs]
where PS stands for Parallel Scavenging
Put this in the JAVA_OPTS:
-XX:+UseSerialGC -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps
For the UseSerialGC we will see in the log:
7.732: [GC 7.732: [DefNew: 419456K->47174K(471872K), 0.1321800 secs] 419456K->47174K(1520448K), 0.1322500 secs] [Times: user=0.10 sys=0.03, real=0.14 secs]
For the UseConcMarkSweepGC we will see in the log:
5.630: [GC 5.630: [ParNew: 37915K->3941K(38336K), 0.0123210 secs] 78169K->45163K(1568640K), 0.0124030 secs] [Times: user=0.02 sys=0.00, real=0.01 secs]
For the UseParallelGC we will see in the log:
30.250: [GC [PSYoungGen: 441062K->65524K(458752K)] 441062K->76129K(1507328K), 0.1870880 secs] [Times: user=0.33 sys=0.03, real=0.19 secs]
It looks like there is a more convenient way to determine the GC in use at runtime; my suggestion is to always use tools.
To determine the GC in use we need two tools that come with the JVM (located in your jdk/bin directory):
VisualVM - start it and try to profile some process (for example, you can profile VisualVM itself). The profile will show you the PID of the process (see the green rectangles in the screenshot).
jmap - start this tool with the -heap <PID> option and find the line dedicated to the garbage collector type (see the pink line in the screenshot).
As Joachim already pointed out, the article you refer to describes the VM strategies offered by Sun's VM. The VM specification itself does not mandate specific GC algorithms and hence it won't make sense to have e.g. enumerated values for these in the API.
You can however get some infos from the Management API:
List<GarbageCollectorMXBean> beans =
ManagementFactory.getGarbageCollectorMXBeans();
Iterating through these beans, you can get the name of the GC (although only as a string) and the names of the memory pools, which are managed by the different GCs.
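For example, a minimal self-contained sketch of that iteration (the reported names vary by collector, e.g. "PS Scavenge"/"PS MarkSweep" for the parallel collector) could be:
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.util.Arrays;

public class PrintGcNames {
    public static void main(String[] args) {
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            // the bean name identifies the collector; the pools show which generations it manages
            System.out.println(gc.getName() + " -> " + Arrays.toString(gc.getMemoryPoolNames()));
        }
    }
}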
The best way to get this is to go to the command line and run:
java -XX:+PrintCommandLineFlags -version
It will show you a result like:
C:\windows\system32>java -XX:+PrintCommandLineFlags -version
-XX:InitialHeapSize=132968640 -XX:MaxHeapSize=2127498240 -XX:+PrintCommandLineFlags -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:-UseLargePagesIndividualAllocation -XX:+UseParallelGC
java version "1.8.0_66"
Java(TM) SE Runtime Environment (build 1.8.0_66-b17)
Java HotSpot(TM) 64-Bit Server VM (build 25.66-b17, mixed mode)
You can write a simple program which connects to your Java process via JMX:
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class PrintJMX {
    public static void main(String[] args) throws Exception {
        String rmiHostname = "localhost";
        String defaultUrl = "service:jmx:rmi:///jndi/rmi://" + rmiHostname + ":1099/jmxrmi";
        JMXServiceURL jmxServiceURL = new JMXServiceURL(defaultUrl);
        JMXConnector jmxConnector = JMXConnectorFactory.connect(jmxServiceURL);
        MBeanServerConnection mbsc = jmxConnector.getMBeanServerConnection();

        // query all GarbageCollector MXBeans exposed by the remote JVM
        ObjectName gcName = new ObjectName(ManagementFactory.GARBAGE_COLLECTOR_MXBEAN_DOMAIN_TYPE + ",*");
        for (ObjectName name : mbsc.queryNames(gcName, null)) {
            GarbageCollectorMXBean gc = ManagementFactory.newPlatformMXBeanProxy(mbsc,
                    name.getCanonicalName(),
                    GarbageCollectorMXBean.class);
            System.out.println(gc.getName());
        }
    }
}