Why does the JVM use at most 40% of the CPU? - java

The problem: during load peaks the CPU gets stuck at 40% and response times slow down.
I have a dedicated backend API server with a real load of ~7,332 hits/min.
The database is on a dedicated server and its load is fine.
There are almost no I/O operations on this machine.
2 CPUs x 12 cores = 24 cores
OS: Linux 4.4.0-98-generic, amd64/64 (24 cores)
java version "1.7.0_151"
OpenJDK Runtime Environment (IcedTea 2.6.11) (7u151-2.6.11-0ubuntu1.14.04.1)
OpenJDK 64-Bit Server VM (build 24.151-b01, mixed mode)
Tomcat 7.0.82
-Xms50g
-Xmx50g
-XX:PermSize=512m
-XX:MaxPermSize=512m
-XX:MaxJavaStackTraceDepth=-1
-Djava.net.preferIPv4Stack=true
-XX:+UseConcMarkSweepGC
-XX:+CMSParallelRemarkEnabled
-XX:+UseCMSInitiatingOccupancyOnly
-XX:CMSInitiatingOccupancyFraction=70
-XX:+ScavengeBeforeFullGC
-XX:+CMSScavengeBeforeRemark
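Since the JavaMelody stats below report a large total GC time, one way to see whether GC pauses coincide with the slow responses is to add GC logging; these are standard HotSpot 7 flags (the log path is just a placeholder):
-verbose:gc
-Xloggc:/var/log/tomcat7/gc.log
-XX:+PrintGCDetails
-XX:+PrintGCDateStamps
-XX:+PrintGCApplicationStoppedTime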
We used the BIO connector before (same behavior); we now use the NIO connector.
<Connector port="8080" redirectPort="8443"
protocol="org.apache.coyote.http11.Http11NioProtocol"
maxThreads="2000"
minSpareThreads="20"
acceptCount="200"
acceptorThreadCount="2"
connectionTimeout="20000"
processorCache="-1"
URIEncoding="UTF-8" />
Stats from JavaMelody
Busy threads = 121 / 2,000
Bytes received = 8,679,400,308
Bytes sent = 83,345,586,407
Request count = 6,169,418
Error count = 961
Sum of processing times (ms) = 2,396,325,165
Max processing time (ms) = 4,168,515
Memory: Non heap memory = 275 Mb (Perm Gen, Code Cache),
Buffered memory = 5 Mb,
Loaded classes = 48,952,
Garbage collection time = 1,238,271 ms,
Process cpu time = 197,922,070 ms,
Committed virtual memory = 66,260 Mb,
Free physical memory = 267 Mb,
Total physical memory = 64,395 Mb,
Free swap space = 0 Mb,
Total swap space = 0 Mb
Perm Gen memory: 247 Mb / 512 Mb
Free disk space: 190,719 Mb
I can't reproduce this on the test server.
Where is my bottleneck?
[screenshot: CPU usage chart from JMX]
[screenshot: htop stats]
Update:
[profiler screenshot: threads hanging in TaskQueue.poll()]
[profiler screenshot: ordered by self CPU time]

Related

Will the process be allocated enough heap when no explicit parameters are specified?

I am running a Java process through a main class on Java 8. I have not specified the min (Xms) or max (Xmx) heap size anywhere. But when I check through VisualVM it is
4267704320 (i.e. ~4.27 GB), which is the default max heap size for the process (also confirmed on Windows with
java -XX:+PrintFlagsFinal -version 2>&1 | findstr MaxHeapSize, and similarly on a Linux box).
My question: if my process (on the Linux machine) requires more than 5 GB (I have 30 GB RAM), will it be allocated 5 GB of memory even though I have not specified
any explicit heap size (Xms and Xmx) parameters?
Given the available RAM on your machine, if you are running a 64-bit JVM in server mode, then yes, the heap will be able to grow up to approximately 7.5 GB.
Documentation (I highlighted the relevant parts):
Client JVM Default Initial and Maximum Heap Sizes
The default maximum heap size is half of the physical memory up to a physical memory size of 192 megabytes (MB) and otherwise one fourth of the physical memory up to a physical memory size of 1 gigabyte (GB).
Server JVM Default Initial and Maximum Heap Sizes
The default initial and maximum heap sizes work similarly on the server JVM as it does on the client JVM, except that the default values can go higher. On 32-bit JVMs, the default maximum heap size can be up to 1 GB if there is 4 GB or more of physical memory. On 64-bit JVMs, the default maximum heap size can be up to 32 GB if there is 128 GB or more of physical memory. You can always set a higher or lower initial and maximum heap by specifying those values directly; see the next section.
Again, assuming a 64-bit JVM running in server mode, the default max heap size according to the documentation will be one fourth of your total RAM. So approximately 7.5 GB in your case (1/4 of 30 GB).
If running a 32-bit JVM in server mode, you'll be capped at 1 GB. And in client mode, your max will be 256 MB.
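If you want to confirm what the JVM actually picked at run time, you can ask it directly; a minimal sketch using the standard Runtime API (class name is arbitrary):
public class MaxHeap {
    public static void main(String[] args) {
        // maxMemory() returns the effective -Xmx: the ceiling the heap may grow to
        long maxBytes = Runtime.getRuntime().maxMemory();
        System.out.println("Max heap: " + (maxBytes >> 20) + " MB");
    }
}
On the 30 GB machine above, a 64-bit server JVM with no -Xmx should print roughly a quarter of the physical RAM.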

Loading data bigger than the memory size in h2o

I am experimenting with loading data bigger than the memory size in h2o.
H2o blog mentions: A note on Bigger Data and GC: We do a user-mode swap-to-disk when the Java heap gets too full, i.e., you’re using more Big Data than physical DRAM. We won’t die with a GC death-spiral, but we will degrade to out-of-core speeds. We’ll go as fast as the disk will allow. I’ve personally tested loading a 12Gb dataset into a 2Gb (32bit) JVM; it took about 5 minutes to load the data, and another 5 minutes to run a Logistic Regression.
Here is the R code to connect to h2o 3.6.0.8:
h2o.init(max_mem_size = '60m') # allotting 60 MB for h2o; R is running on an 8 GB RAM machine
gives
java version "1.8.0_65"
Java(TM) SE Runtime Environment (build 1.8.0_65-b17)
Java HotSpot(TM) 64-Bit Server VM (build 25.65-b01, mixed mode)
.Successfully connected to http://127.0.0.1:54321/
R is connected to the H2O cluster:
H2O cluster uptime: 2 seconds 561 milliseconds
H2O cluster version: 3.6.0.8
H2O cluster name: H2O_started_from_R_RILITS-HWLTP_tkn816
H2O cluster total nodes: 1
H2O cluster total memory: 0.06 GB
H2O cluster total cores: 4
H2O cluster allowed cores: 2
H2O cluster healthy: TRUE
Note: As started, H2O is limited to the CRAN default of 2 CPUs.
Shut down and restart H2O as shown below to use all your CPUs.
> h2o.shutdown()
> h2o.init(nthreads = -1)
IP Address: 127.0.0.1
Port : 54321
Session ID: _sid_b2e0af0f0c62cd64a8fcdee65b244d75
Key Count : 3
I tried to load a 169 MB csv into h2o.
dat.hex <- h2o.importFile('dat.csv')
which threw an error,
Error in .h2o.__checkConnectionHealth() :
H2O connection has been severed. Cannot connect to instance at http://127.0.0.1:54321/
Failed to connect to 127.0.0.1 port 54321: Connection refused
which is indicative of an out-of-memory error.
Question: if H2O promises to load a data set larger than its memory capacity (via the swap-to-disk mechanism the blog quote above describes), is this the correct way to load the data?
Swap-to-disk was disabled by default a while ago, because performance was so bad. The bleeding edge (not the latest stable) has a flag to enable it: "--cleaner" (for "memory cleaner").
Note that your cluster has EXTREMELY tiny memory:
H2O cluster total memory: 0.06 GB
That's 60 MB! Barely enough to start a JVM with, much less run H2O. I would be surprised if H2O could come up properly there at all, never mind the swap-to-disk. Swapping is limited to swapping the data alone. If you're trying to do a swap test, up your JVM to 1 or 2 GB of RAM, and then load datasets that sum to more than that.
Cliff
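For reference, applying Cliff's suggestion to the h2o.init call from the question would look something like this (the 2g value is illustrative):
h2o.init(max_mem_size = '2g', nthreads = -1) # give the H2O JVM 2 GB instead of 60 MB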

How to reduce committed heap memory in JVM

Our JVMs are consuming more memory than expected. We did some profiling and found that there is no leak. Used heap goes up to 2.9 GB max, but it comes down to 800 MB during idle time. Committed heap, however, grows to 3.5 GB (sometimes 4 GB) and never comes down. Also, after the idle time, when used heap climbs again from 800 MB, committed heap grows further from 3.5 GB. So our servers soon reach their max memory size and we have to restart them every other day.
My questions are:
My understanding is that committed heap is the currently allocated memory. When used heap shrinks, why does committed memory not shrink as well?
Why, when used heap grows again from its level (800 MB), does committed heap also grow from its level (3.5 GB)?
We have the below memory settings in our servers:
-Xmx4096M -Xms1536M -XX:PermSize=128M -XX:MaxPermSize=512M
You can try tuning -XX:MaxHeapFreeRatio, which is the "maximum percentage of heap free after GC to avoid shrinking". The default value is 70.
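If you want to watch used versus committed heap from inside the JVM rather than through a profiler, the standard MemoryMXBean exposes both numbers; a minimal sketch (class name is arbitrary):
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

public class HeapWatch {
    public static void main(String[] args) {
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        // committed = memory currently reserved from the OS; it shrinks only
        // when the collector decides to return pages (see MaxHeapFreeRatio)
        System.out.println("used      = " + (heap.getUsed() >> 20) + " MB");
        System.out.println("committed = " + (heap.getCommitted() >> 20) + " MB");
        System.out.println("max       = " + (heap.getMax() >> 20) + " MB");
    }
}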

Tomcat was killed by kernel

My Tomcat suddenly shut down by itself. I checked the log file and found that it was killed with the message:
kernel: Killed process 17420, UID 0, (java) total-vm:8695172kB, anon-rss:4389088kB, file-rss:20kB
My settings for running Tomcat are: -Xms2048m -Xmx4096m -XX:NewSize=256m -XX:MaxNewSize=512m -XX:PermSize=256m -XX:MaxPermSize=1024m
My system, when I run "free -m", shows:
             total       used       free     shared    buffers     cached
Mem:          7859       7713        146          0         97       1600
-/+ buffers/cache:       6015       1844
Swap:            0          0          0
I monitor the program with "top -p"; the result is below:
Cpu(s):  0.1%us,  0.0%sy,  0.0%ni, 99.9%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:   8048440k total,  7900616k used,   147824k free,   100208k buffers
Swap:        0k total,        0k used,        0k free,  1640888k cached
  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 4473 root      20   0 8670m 2.5g 6568 S  0.0 32.6  71:07.84 java
My questions are:
1. Why is VIRT = 8670m (in the "top -p" result) greater than "Mem: 8048440k total", yet my application is still running?
2. Why was my Tomcat killed by the kernel? I don't see anything strange with the memory (it looks similar to when it was running normally).
3. To avoid this error, what should I do, and why?
The only thing I know of that causes the kernel to kill tasks in Linux is the out-of-memory killer. This article from Oracle might be a little more recent and relevant.
The solution depends on what else is running on the system. From what you showed, you have less than 2GB of usable memory, but your Java heap max is topping out around 4GB. What we don't know is how big the Java heap is at the time you took that snapshot. If it's at its initial 2GB, then you could be running close to the limit. Also based on your formatting, you have no swap space to use as a fallback.
If you have any other significant processes on the system, you need to account for their maximum memory usage. The short answer: try to reduce Xmx and MaxPermSize if at all possible (see the illustrative values sketched below); you'll have to analyze your load to see whether this is possible or will cause unreasonable GC CPU usage.
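For example, keeping the poster's young-gen settings but lowering the two values this answer calls out might look like this (purely illustrative numbers; only load testing can validate them):
-Xms1024m -Xmx2048m -XX:NewSize=256m -XX:MaxNewSize=512m -XX:PermSize=256m -XX:MaxPermSize=512m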
Some notes:
Java uses more memory than the heap; it needs memory for the native code running the VM itself.
Java 8 replaced permgen with metaspace, which lives outside the heap, so I believe it adds memory on top of the Xmx parameter; you may want to note that if running Java 8.
As you reduce the memory limit, you'll hit 3 ranges:
Far above real requirements: no noticeable difference
Very close to real requirements: server freezes/stops responding and uses 100% CPU (GC overhead)
Below real requirements: OutOfMemoryErrors
It's possible for a process's VM size to exceed RAM+swap size per your first question. I remember running Java on a swapless embedded system with 256MB RAM and seeing 500MB of memory usage and being surprised. Some reasons:
In Linux you can allocate memory, but it's not actually used until you write to it
Memory-mapped files (and probably things like shared memory segments) count towards this limit. I believe Java opens all of the jar files as memory mapped files so included in that virt size are all of the jars on your classpath, including the 80MB or so rt.jar.
Shared objects probably count towards VIRT but only occupy space once (i.e. one copy of a .so is loaded for many processes).
I've heard, but can't find a reference right now, that Linux can actually use binaries/.so files as read-only "swap" space: loading a 2 MB binary/.so will increase your VM size by 2 MB but not actually use all of that RAM, because it pages in from disk only the parts actually accessed.
Linux has an OOM mechanism: when the OS's memory is insufficient, the OOM killer kills the program using the most memory (in most situations; see Linux Out Of Memory Management). Obviously your Tomcat owns the most memory.
How to solve it? In my experience, you should observe the OS's memory usage with the top command and find the offending process; at the same time, you can use jvisualvm to observe Tomcat's memory usage.

Cannot understand Intellij IDEA's memory usage and management

I have been developing with IDEA again for a few years now, and I am happy with it so far.
The problem is weird memory usage behaviour and GC activity while I am working on projects, which causes my IDE to freeze for a few seconds while GC does its job.
Regardless of how big the project I am working on is, after a few days the memory usage climbs up to 500 MB (my max heap space is 512 MB, which I assume should be sufficient for web projects with ca. 100 Java files). After GC does its job, I'm left with 400 MB used - not collected - and only ca. 100 MB free on the heap, and within a few minutes the memory usage grows until the heap is full again.
JVM version is 19.0-b09
using thread-local object allocation.
Parallel GC with 2 thread(s)
Heap Configuration:
MinHeapFreeRatio = 40
MaxHeapFreeRatio = 70
MaxHeapSize = 536870912 (512.0MB)
NewSize = 178257920 (170.0MB)
MaxNewSize = 178257920 (170.0MB)
OldSize = 4194304 (4.0MB)
NewRatio = 2
SurvivorRatio = 8
PermSize = 16777216 (16.0MB)
MaxPermSize = 314572800 (300.0MB)
Heap Usage:
PS Young Generation
Eden Space:
capacity = 145489920 (138.75MB)
used = 81242600 (77.4789810180664MB)
free = 64247320 (61.271018981933594MB)
55.84070704004786% used
From Space:
capacity = 16384000 (15.625MB)
used = 0 (0.0MB)
free = 16384000 (15.625MB)
0.0% used
To Space:
capacity = 16384000 (15.625MB)
used = 0 (0.0MB)
free = 16384000 (15.625MB)
0.0% used
PS Old Generation
capacity = 358612992 (342.0MB)
used = 358612992 (342.0MB)
free = 0 (0.0MB)
100.0% used
PS Perm Generation
capacity = 172621824 (164.625MB)
used = 172385280 (164.3994140625MB)
free = 236544 (0.2255859375MB)
99.86296981776765% used
That is how my heap space looks. It is remarkable that the Old Generation and Perm Generation are at ca. 100% of their capacity, even though I have triggered GC manually several times. The question is: how can I get the IDE to sweep these objects out of the old generation without restarting the IDE? (After startup the memory usage is about 60-90 MB.) And how can I find out what these objects are? There are threads visible in VisualVM such as RMI TCP Connection, RMI TCP Accept and XML RPC Weblistener; although I do nothing in the IDE, they keep consuming memory, even 5-10 MB per second.
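As for finding out where the memory sits: the per-pool numbers from the jmap dump above can also be read programmatically through the standard MemoryPoolMXBean API; a minimal sketch (for the actual objects, a heap dump via jmap or VisualVM is the usual route):
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

public class PoolDump {
    public static void main(String[] args) {
        // one entry per pool: Eden Space, From/To Space, Old Gen, Perm Gen, ...
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            System.out.printf("%-20s %s%n", pool.getName(), pool.getUsage());
        }
    }
}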
$ uname -a
Linux bagdemir 2.6.32-28-generic #55-Ubuntu SMP Mon Jan 10 21:21:01 UTC 2011 i686 GNU/Linux
$ java -version
java version "1.6.0_24"
Java(TM) SE Runtime Environment (build 1.6.0_24-b07)
Java HotSpot(TM) Server VM (build 19.1-b02, mixed mode)
UPDATE:
memory configuration:
-Xms256m -Xmx512m -Xmn170m -XX:MaxPermSize=300m
You may find this useful: Intellij Idea JVM options benchmark: default settings are worst
The right way to go is to get a memory snapshot and submit a corresponding ticket to the JetBrains tracker with the snapshot attached.
Excess memory usage persists to this day, April 2, 2019. Out of the box, IntelliJ IDEA Ultimate Edition has 131 plugins enabled by default.
I turned off about 50 of those plugins.
Go to File >> Settings >> Plugins to manage plugins. Then click Installed to view the plugins already active.
