How to do JVM clustering for load balancing in Java

By default the JVM uses at most about 1.5 GB of RAM per Java application.
But my server has 8 GB, and the application still needs more RAM. How do I start a cluster of JVMs on a single server?
If I instead increase the memory of a single JVM, the garbage collector and the other JVM daemon threads slow down...
What is the solution for this? Is a cluster of JVMs the right thing?
The application needs a high-end configuration: when requests start arriving, the JVM slows down and memory usage sits at 95% to 99%.
My server configuration: Linux
4-core multiprocessor
8 GB RAM
no issue with HDD space.
Any solution for this problem?

You might want to look into memory grids like:
Oracle Coherence: http://www.oracle.com/technetwork/middleware/coherence/overview/index.html
GridGain: http://www.gridgain.com/
Terracotta: http://terracotta.org/
We use Coherence to run 3 JVMs on 1 machine, each process using 1 GB of RAM.
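As a rough illustration of that setup (the jar name and the port property are hypothetical; -Xmx is the standard heap cap), launching several heap-capped JVMs on one box can be as simple as:

    java -Xmx1g -Dnode.port=9001 -jar myapp.jar &
    java -Xmx1g -Dnode.port=9002 -jar myapp.jar &
    java -Xmx1g -Dnode.port=9003 -jar myapp.jar &

The data grid (or a front-end load balancer) then distributes work across the three processes.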

There are a number of solutions.
Use a larger heap size (possibly with a 64-bit JVM).
Use less heap and more off-heap memory; off-heap memory can scale into the terabytes (see the sketch below).
Split the JVM into multiple processes. This is easier for some applications than others; I tend to avoid this as my applications can't be split easily.
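As a minimal off-heap sketch (my own illustration; the 256 MB size is arbitrary), a direct ByteBuffer lives outside the managed heap, so the garbage collector never scans its contents:

    import java.nio.ByteBuffer;

    public class OffHeapDemo {
        public static void main(String[] args) {
            // 256 MB allocated in native memory, outside the Java heap
            ByteBuffer buf = ByteBuffer.allocateDirect(256 * 1024 * 1024);
            buf.putLong(0, 42L);                    // write at absolute offset 0
            System.out.println(buf.getLong(0));     // reads back 42
        }
    }

Note that a single direct buffer is still int-indexed (at most 2 GB), so terabyte-scale off-heap storage means managing many buffers or memory-mapped files.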

Related

How to dockerize Java microservices efficiently

While a Java application server runs several (micro)services inside a single JVM, a dockerized Java microservices architecture runs one JVM per dockerized microservice.
With 20+ Java microservices and a limited number of hosts, the amount of resources consumed by the JVMs on each host seems huge.
Is there an efficient way to manage this problem? Is it possible to tune each JVM to limit resource consumption?
The aim is to limit the overhead of using Docker in a Java microservices architecture.
Each running Docker container and JVM copy uses memory. Normally, multiple JVMs on a single node would share memory (for example, for common class data and libraries), but this isn't an option across Docker containers.
What you can do is reduce the maximum heap size of each JVM. However, I would allow at least 1 GB per Docker image as overhead, plus your heap size, for each JVM. While that sounds like a lot of memory, it doesn't cost much these days.
Say you give each JVM a 2 GB heap and add 1 GB for Docker + JVM overhead: you are looking at needing a 64 GB server to run 20 JVMs/containers.
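As a hedged sketch of such a cap (the image name is hypothetical, and this assumes java is on the image's PATH; -m, -Xmx, -Xss and -XX:MaxMetaspaceSize are standard Docker/JVM options):

    docker run -m 3g --name order-service my-registry/order-service:latest \
        java -Xmx2g -Xss512k -XX:MaxMetaspaceSize=256m -jar app.jar

Here -m caps the whole container at 3 GB while -Xmx caps the heap at 2 GB, leaving roughly 1 GB for thread stacks, metaspace and other native JVM overhead.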

How to choose the value of the maximum JVM heap size

I want to choose a value for the maximum JVM heap size. I have some questions about it.
1) Is it usually related to the available physical memory size on the system? If so, is there any formula for this?
2) For a 32-bit JVM, can I set a value larger than 4 GB if the physical memory is very large?
3) Do I need to consider the impact of the OS (e.g. Windows, Linux)?
Thanks
1) Yes, the amount of memory you can allocate to the JVM is related to the physical memory available on your system. There is no fixed formula, but there is a rule of thumb you can use to decide the allocation for your process: each operating system has a recommended memory requirement, so (PhysicalMemory - RecommendedOSMemory) can be allocated to the Java process. This differs between 32-bit and 64-bit operating systems. If you don't run any other processes, this formula works fine; if you do have other memory-consuming processes, you need to take their memory allocation into account as well.
E.g. Windows 7 has a recommended requirement of 1 GB RAM (32-bit) or 2 GB RAM (64-bit). So if you have 4 GB of physical memory, you can allocate 2 GB to your process, assuming 2 GB is used by the OS.
2) For a 32-bit JVM you cannot set a value larger than 4 GB. In fact, if your total physical memory is 4 GB, you cannot allocate more than about 2 GB to your Java process. Check out this answer for a more detailed explanation.
3) Yes, you should consider the OS. Check the recommended memory configuration for the different operating systems and decide the memory for your Java process accordingly.
Hope this helps.
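Once you have chosen a value, a small check like this (my addition, not part of the answer) confirms what the JVM actually received:

    public class HeapCheck {
        public static void main(String[] args) {
            long maxBytes = Runtime.getRuntime().maxMemory();  // the effective -Xmx
            System.out.printf("Max heap: %d MB%n", maxBytes / (1024 * 1024));
        }
    }

Running java -Xmx2g HeapCheck should print roughly 2048 MB (HotSpot typically reports slightly less).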

Java OutOfMemoryError in Ubuntu even though enough memory is available

I have a VPS with 20 GB RAM running Ubuntu. I am trying to give Java a 10 GB maximum heap via JAVA_TOOL_OPTIONS, but I can't. Please see the attached screenshots; they show 17 GB of available memory. It works when I set the heap to 7 GB, but a heap error occurs whenever it is > 7 GB. I have already installed GlassFish and allocated 3 GB to its cluster, and that works fine. So why am I not able to allocate more than 7 GB when 17 GB of RAM is free?
(Screenshots attached: top output, ulimit settings, java -version, and the overcommit-memory setting.)
My hardware is virtual (hosted). Below is the configuration (lscpu output):
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 16
Vendor ID: GenuineIntel
CPU family: 6
Model: 26
Stepping: 5
CPU MHz: 2266.802
BogoMIPS: 4533.60
Virtualization: VT-x
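Some standard commands that help diagnose this kind of limit (my suggestion; all are stock Linux tools):

    free -h                               # physical RAM and swap actually available
    ulimit -v                             # per-process virtual memory limit, in KB
    cat /proc/sys/vm/overcommit_memory    # 0/1/2: kernel overcommit policy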
If I had to guess, you don't have a contiguous block of RAM that's 7 GB, which does seem weird, but without knowing more about your VM's allocation it's hard to say.
Here's what Oracle has to say on the matter (http://www.oracle.com/technetwork/java/hotspotfaq-138619.html#gc_oom):
The VM prints "OutOfMemoryError" and exits. Increasing max heap size
doesn't help. What's going on?
The Java HotSpot VM cannot expand its heap size if memory is
completely allocated and no swap space is available. This can occur,
for example, when several applications are running simultaneously.
When this happens, the VM will exit after printing a message similar
to the following.
Exception java.lang.OutOfMemoryError: requested <size> bytes for <reason>. Out of swap space?
For more information, see the evaluation section of bug 4697804.
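One general way to surface such failures at JVM startup instead of at some later allocation (my suggestion, not from the FAQ) is to set the initial heap equal to the maximum, so the whole region is reserved up front:

    java -Xms10g -Xmx10g -jar myapp.jar   # fails immediately if 10 GB cannot be reserved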
I think you may be out of swap space. When I add up the memory in the "virt" column, it comes to 40+ GB.
Why is it taking that much swap space? What needs to be done in order to fix this?
Well, according to top you are running:
Glassfish - 9.1G
MySQL daemon - 5.4G
Hudson - 8.9G
Nexus - 6G
Glassfish - 6.9G (2nd instance)
and sundry other stuff. The "virt" is their total virtual memory footprint, and some of that will be code segments which may be shared.
They mostly have a small "res" (resident memory) at the moment, which is why there is so much free RAM. However, if a few of them sprang into life at the same time, the demand for RAM would skyrocket and the system might start to thrash.
My recommendation would be to move the Hudson and Nexus services to a separate VM. Or if that is not possible, increase the size of your swap space ... and hope that you don't thrash.
This is true. But is this normal behaviour?
Yes.
is this how memory allocation works?
Yes. This is indeed how virtual memory works.
I am confused about resident memory, virtual memory and physical memory now.
Rather than explain it in detail, I suggest you start by reading the Wikipedia page on virtual memory.
The reason I wasn't able to allocate more is that privvmpages is set to 5G on this VPS.
You can get that information on Linux with: cat /proc/user_beancounters
Also, on a VPS the hosting provider will not allow us to customize this value. We have to move to a larger virtual or dedicated server to raise this limit.
This was the root cause. However, Stephen's and Robin's explanations of virtual memory and RES memory were spot on. Thanks, guys.
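For reference, the relevant counter can be pulled out directly; on OpenVZ-style containers privvmpages is, as far as I know, counted in 4 KB pages, and reading the file usually requires root:

    sudo grep privvmpages /proc/user_beancounters   # held / maxheld / barrier / limit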

32-bit JVM commits >3 GB of virtual memory

We have a 32-bit JVM running under 64-bit RHEL5 on a box that has plenty of memory (32 GB). For various reasons, this process requires a pretty large managed heap and permgen space; currently it runs with the following VM arguments:
-Xmx2200M -XX:MaxPermSize=128M -XX:+CMSClassUnloadingEnabled
I have started seeing JVM crashes recently because it seemingly ran out of native memory (it could not create native threads, failed to allocate native memory, etc.). These crashes were not (directly) related to the state of the managed heap: when the crashes happened, the managed heap was only ~50-70% full.
I know that the memory reserved for the managed heap is close to 2.5 GB, which leaves no more than 0.5 GB for the JVM itself, BUT:
- I don't understand why 0.5 GB isn't enough for the JVM, even if there is constant GC going on;
- the real question is this: when I connect to the process using jconsole, it says that (currently)
Committed virtual memory:
3,211,180 kbytes
which is more than 3 GB. I can imagine that for some reason the JVM thinks it has 3,211,180 kbytes (3.06 GB) of memory, but when it tries to go over 3 GB the allocation fails.
Any ideas on
a) why does this happen
b) how is it possible to avoid this
Thanks.
Mate
There is a lot of overhead in a typical VM that is not counted in the JVM's own accounting, because it is essentially consumed by the native parts of the process. For example, the .so files mapped in to run native-level code for system libraries are not counted in the base heap accounting. Your typical shared library is mapped in at the top gigabyte of the address space, so if you try to allocate memory into that region you will be denied, because it would overlap the shared libraries' memory region. On most OSes, memory allocation is performed by a simple "bar" that is raised when you ask for more memory; when the raised bar would conflict with other uses, the request simply fails. Most of the details that follow are about this.
You need to avoid needing so much memory in a 32-bit process; that is the fundamental challenge. It is trivial to get a 64-bit VM that lets you use far more memory than would otherwise be accessible; it is simply the right tool in this situation.
If you are using a 32-bit process, there is a high probability that you are hitting the effective address-space limit of a 32-bit process. For Windows, this is a maximum of about 3 GB; anything above that is reserved for I/O space and the kernel. You can move this boundary, but doing so tends to break applications/drivers that are designed for the 32-bit OS.
For Linux, you end up with ~3 GB of usable addressable RAM per process; the rest is used by things like the kernel and mapped-in shared libraries. The limit is referred to as the address-space limit, and I presume it can be tuned.
How do you avoid it? For the most part you can't: it is a physical limitation of the 32-bit address space and of having the kernel/IO share the address space with the process on a 32-bit OS.
With a 64-bit OS you have (most of) the whole 64-bit address space to play with, which is vastly more than you can use.
When you start a JVM, it reserves its maximum heap size immediately; how much of that memory is actually used doesn't really matter. Your application can address about 3 GB, of which you have given about 2.3 GB to the heap and permgen. The rest is available for shared libraries (typically around 200 MB) and thread stacks.
Worrying about why you can't use the full 3 GB of address space isn't very useful when the solution is relatively trivial: use a 64-bit JVM. I am assuming you don't have any shared libraries that are only available in 32-bit; if you do have additional shared libraries, they can easily use hundreds of MB.
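A tiny check (my addition) to confirm which data model the running JVM uses before and after the switch; both properties are available on HotSpot:

    public class BitnessCheck {
        public static void main(String[] args) {
            // Prints 32 or 64 on HotSpot JVMs
            System.out.println(System.getProperty("sun.arch.data.model"));
            // e.g. x86 for a 32-bit JVM, amd64 for a 64-bit one
            System.out.println(System.getProperty("os.arch"));
        }
    }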

Practical limitations of JVM memory and CPU usage?

Let's say money was not a limiting factor, and I wanted to write a Java program that ran on a single powerful machine.
The goal would be to make the Java program run as fast as possible without having to swap or go to disk for anything.
Let's say that this computer has:
1 TB of RAM (64 × 16 GB DIMMs)
64 processor cores (8 × 8-core processors)
running 64-bit Ubuntu
Could a single instance of a java program running in a JVM take advantage of this much RAM and processors?
Are there any practical considerations that might limit the usage and efficiency?
OS process (memory & threads) limitations?
JVM memory/heap limitations?
JVM thread limitations?
Thanks,
Galen
A single instance can try to access all the memory; however, NUMA regions mean that things such as GC perform badly when accessing memory in another region. This is getting faster, and the JVM has some NUMA support, but it needs to improve if you want scalability. Even so, you can have a 256 MB heap and use 700 GB of native/direct memory without this issue. ;)
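(An aside of mine, not the answerer's: HotSpot's parallel collector can partition the young generation across NUMA nodes, enabled with a single flag.)

    java -XX:+UseParallelGC -XX:+UseNUMA -Xmx256g -jar app.jar   # heap size illustrative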
The biggest limitation, if you have loads of memory, is that arrays, collections and ByteBuffer (for memory-mapped files) are all limited to a size of about 2 billion elements (2^31 - 1).
You can work around these problems with custom collections (a sketch follows), but it's really something Java should support, IMHO.
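A minimal sketch of such a custom collection (my illustration): a long-indexed double array backed by int-indexed chunks, sidestepping the 2^31-1 element limit of a single Java array:

    public class BigDoubleArray {
        // 2^27 doubles = 1 GB per chunk, comfortably below the 2^31-1 array limit
        private static final int CHUNK = 1 << 27;
        private final double[][] chunks;

        public BigDoubleArray(long size) {
            int n = (int) ((size + CHUNK - 1) / CHUNK);
            chunks = new double[n][];
            for (int i = 0; i < n; i++) {
                long remaining = size - (long) i * CHUNK;
                chunks[i] = new double[(int) Math.min(CHUNK, remaining)];
            }
        }

        public double get(long index) {
            return chunks[(int) (index / CHUNK)][(int) (index % CHUNK)];
        }

        public void set(long index, double value) {
            chunks[(int) (index / CHUNK)][(int) (index % CHUNK)] = value;
        }
    }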
BTW: You can buy a Dell R910 with 1 TB of memory and 24 cores/48 threads with Ubuntu for £40K.
BTW: I only have experience of JVMs up to 40 GB in size.
First of all, regarding the Java program itself: poorly designed code won't make good use of that much computing power. Badly implemented threading, for example, could make performance slow.
The OS is a limiting factor, too: not every OS can handle that amount of memory well.
I think the JVM can deal with that amount of memory, as long as the OS supports it.
