Java seems to ignore -Xms and -Xmx options

I'd like to run a very simple bot written in java on my VPS.
I want to limit jvm memory to let's say 10MB (I doubt it would need any more).
I'm running the bot with the following command:
java -Xms5M -Xmx10M -server -jar IrcBot.jar "/home/jbot"
But top shows that actual memory reserved for java is 144m (or am I interpreting things wrong here?).
13614 jbot 17 0 144m 16m 6740 S 0.0 3.2 0:00.20 java
Any ideas what can be wrong here?
Java version "1.6.0_20" Java(TM) SE Runtime Environment (build 1.6.0_20-b02) Java HotSpot(TM) Client VM (build 16.3-b01, mixed mode)
BTW. I'm running CentOS - if it matters.
EDIT:
Thank you for your answers.
I can't really accept any of them, since it turns out the problem lies within the language I chose to write the program, not the JVM itself.

-Xmx specifies the max Java heap allocation (-Xms specifies the min heap allocation). The Java process has its own overhead (the actual JVM etc), and the loaded classes and the perm gen space (set via -XX:MaxPermSize=128m) sit outside of that value too.
Think of your heap allocation as simply Java's "internal working space", not the process as a whole.
Try experimenting and you'll see what I mean:
java -Xms512m -Xmx1024m ...
Also, try using a tool such as JConsole or JVisualVM (both are shipped with the Sun / Oracle JDK) and you'll be able to see graphical representations of the actual heap usage (and the settings you used to constrain the size).
Finally, as @Peter Lawrey very rightly states, the resident memory is the crucial figure here - in your case the JVM is only using 16 MiB RSS (according to 'top'). The shared / virtual allocation won't cause any issues as long as the JVM's heap isn't pushed into swap (non-RAM). Again, as I've stated in some of the comments, there are other JVMs available - "Java" is quite capable of running on low-resource or embedded platforms.
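If you want a quick sanity check without a GUI tool, a tiny throwaway program can print what the JVM itself thinks its heap limits are (just an illustrative sketch; HeapCheck is a made-up class name):

// Prints the heap figures the JVM is actually working with.
// Run with e.g. java -Xms5M -Xmx10M HeapCheck and compare the output
// with what top shows for the whole process.
public class HeapCheck {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long mb = 1024 * 1024;
        System.out.println("max heap (~ -Xmx):     " + rt.maxMemory() / mb + " MB");
        System.out.println("committed (>= -Xms):   " + rt.totalMemory() / mb + " MB");
        System.out.println("free within committed: " + rt.freeMemory() / mb + " MB");
    }
}

The max heap figure should track your -Xmx value even though the process as a whole shows up much larger in top.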

Xmx is the max heap size, but besides that there are a few other things that the JVM needs to keep in memory: the stack, the classes, etc. For a brief introduction see this post about the JVM memory structure.

The JVM maps in shared libraries which are about 150m. The amount of virtual memory used is unlikely to be important to you if you are trying to minimise physical main memory.
The number you want to look at is the resident memory, which is the amount of physical main memory actually used (16 MB in your case).
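To see that distinction from inside the process on Linux, you can read /proc/self/status and compare the kernel's figures with the JVM's own heap numbers. A minimal sketch (Linux-only, and it assumes a Java 7+ runtime for try-with-resources):

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

// Compares the JVM heap figures with the process-level numbers top reads
// from /proc: VmSize roughly corresponds to VIRT, VmRSS to RES.
public class RssCheck {
    public static void main(String[] args) throws IOException {
        Runtime rt = Runtime.getRuntime();
        System.out.println("heap committed: " + rt.totalMemory() / (1024 * 1024) + " MB");
        try (BufferedReader r = new BufferedReader(new FileReader("/proc/self/status"))) {
            String line;
            while ((line = r.readLine()) != null) {
                if (line.startsWith("VmSize") || line.startsWith("VmRSS")) {
                    System.out.println(line); // values are reported in kB
                }
            }
        }
    }
}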

Related

Why is the java process using more memory than -XX:MaxRAM? [duplicate]

This question already has answers here:
How JVM -XX:MaxRAM option can be correctly used? [duplicate]
(1 answer)
Java using much more memory than heap size (or size correctly Docker memory limit)
(5 answers)
Closed 2 years ago.
I've set the options for a Java process to use 80% of 1g max RAM. But when I use 'ps -o vsz', I see it is using 3.5g (starting from 2.5g). This causes a lot of swapping and thus freezes the device. What explains the discrepancy?
UPDATE: The options to the JVM are now: -Xmx256m -Xshare:off -XX:+UseSerialGC -XX:NativeMemoryTracking=summary -XX:MaxRAM=768m -XX:MaxRAMPercentage=60. They don't seem to change anything. The process starts at 2.4g and grows to 3.5g
UPDATE 2:
openjdk version "14" 2020-03-16
OpenJDK Runtime Environment (build 14+36)
OpenJDK 64-Bit Server VM (build 14+36, mixed mode)
The right option is -Xmx1g:
-X: Option specific to this implementation of java.
mx: max heap memory
1g: 1 gigabyte.
You may also want to apply -Xms1g, which sets the minimum heap size, so the heap allocation of your Java app stays stable.
Note that the memory load of your VM can still be more than 1GB (though 3.5 sounds excessive): heap isn't the only memory the VM uses; every thread takes '1 stack' worth, which is also configurable (via -Xss128k for example), so if you have a ton of threads, memory load goes up (with half a meg of stack per thread, 4000 threads imply 2 GB worth of stack memory!). The VM's own runtime stuff also lives outside of the heap.
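If you want to sanity-check that kind of estimate on a running VM, a rough sketch along these lines works (the 512 KB figure is an assumption; substitute whatever -Xss you actually use):

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

// Rough stack-memory estimate: live thread count multiplied by an assumed
// per-thread stack size. This is an upper bound, since stack pages are only
// physically committed as they are touched.
public class StackEstimate {
    public static void main(String[] args) {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        int count = threads.getThreadCount();
        long assumedStackKb = 512; // assumption: matches -Xss512k
        System.out.println(count + " threads * " + assumedStackKb + " KB ~= "
                + (count * assumedStackKb) / 1024 + " MB of stack space");
    }
}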
Also, the memory taken by shared libraries needs to be 'bookkept' by the OS; I believe OSes usually just add their full memory load to each and every process that uses the shared library, and Java tends to claim that it 'uses' most of the ones that are already loaded regardless, which inflates the number as well.
Turns out measuring how much memory an app takes is surprisingly difficult sometimes.
-XX:MaxRAM isn't the correct option; use -Xmx instead.

multiple java instances -Xms -Xmx

I am running a Java game-server and a game-client on the same computer.
The game-client runs with
java -Xms512m -Xmx1024m -cp etc etc
and the game-server
java -Xmx1024M -Xms1024M -jar etc etc
Computer Properties:
Windows 7 64 bit
8GB RAM
CPU i5-2500 @ 3.3GHz
Intel HD Graphics
Problem: The game-client experiences serious lag. Another player connected to the game-server via LAN has no lag issues!
Does the lag have anything to do with the Java virtual machine? Am I using one instance of the virtual machine or two?
Can I set something up differently in order to optimize performance?
I am thinking that the problem is that a single machine is running both instances and its memory is not enough for both, but I do not really know how to solve that.
Edit: Neither app runs out of memory.
Solution:
1:
Updated Java version from:
java version "1.6.0_31"
Java(TM) SE Runtime Environment (build 1.6.0_31-b05)
Java HotSpot(TM) 64-Bit Server VM (build 20.6-b01, mixed mode)
to
java version "1.7.0_15"
Java(TM) SE Runtime Environment (build 1.7.0_15-b03)
Java HotSpot(TM) 64-Bit Server VM (build 23.7-b01, mixed mode)
2:
Changed the server properties in order to minimize requirements; this seems to have been the main fix.
3:
Increased memory:
game-client with java -Xms1024m -Xmx1024m -cp etc etc
and the game-server java -Xmx2048M -Xms2048M -jar etc etc
Server runs at about 700MB now.
Does the lag have anything to do with the Java virtual machine?
Possibly. You haven't presented enough evidence to be sure one way or the other.
The puzzling thing is that the client running on a different machine is not laggy.
Am I using one instance of the virtual machine or two?
If you are running two copies of java, then you will have two JVMs.
Can I set something up differently in order to optimize performance?
The answer is probably yes. But you haven't provided enough information to allow us to make solid suggestions.
Lagginess can be caused by a number of things, including:
A network with high latency.
A JVM that has a heap that is too small.
An application that is generating lots of garbage and triggering an excessive number of garbage collections.
A mix of applications that is competing for resources; e.g. physical memory, CPU time or disc or network I/O time.
If you are going to get to the root cause of your problem, you will need to do some monitoring to figure out which of the above is the likely problem. Use the task manager or whatever to check whether the system is CPU bound, short of memory, doing lots of disk or network stuff, etc. Use VisualVM to see what is going on inside the JVMs.
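If you'd rather log this from inside the game processes than click around in VisualVM, a small monitoring loop using the standard java.lang.management beans is enough to show whether heap pressure and GC activity line up with the lag spikes (a sketch; GcMonitor is a made-up name, and in practice you would run something like this as a background thread inside the client):

import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;

// Periodically logs heap usage and cumulative GC counts/times so you can see
// whether a too-small heap or excessive garbage collection is behind the lag.
public class GcMonitor {
    public static void main(String[] args) throws InterruptedException {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        while (true) {
            long usedMb = memory.getHeapMemoryUsage().getUsed() / (1024 * 1024);
            long maxMb = memory.getHeapMemoryUsage().getMax() / (1024 * 1024);
            StringBuilder gc = new StringBuilder();
            for (GarbageCollectorMXBean bean : ManagementFactory.getGarbageCollectorMXBeans()) {
                gc.append(bean.getName()).append("=").append(bean.getCollectionCount())
                  .append(" collections/").append(bean.getCollectionTime()).append("ms ");
            }
            System.out.println("heap " + usedMb + "/" + maxMb + " MB, GC: " + gc);
            Thread.sleep(5000); // sample every 5 seconds
        }
    }
}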
Alternatively, you could try to fix with some totally unscientific "knob twiddling":
try making the -Xms and -Xmx parameters the same (that may reduce lagginess at the start ...)
try increasing the sizes of the JVMs' heaps; e.g. make them 2gb instead of 1gb
try using a more recent version of Java
try using a 64 bit JVM so that you can increase the heap size further
try enabling the CMS or G1 collectors (depending on what version of JVM you are using).
If I knew more about what you were currently using, I could possibly give more concrete suggestions ...
You are running two Java apps on the same computer, resulting in two JVMs running.
For a 64-bit system with 8GB RAM, it is recommended to use at most 2GB (25% of physical memory, or 75% of free physical memory up to 2GB) per JVM for better performance.
You may have to look at adjusting the JVM sizes. For better performance, the Xms and Xmx sizes can be kept the same, within a maximum size bracket.
Assigning memory to the heap is not the only area to think of. The JVM uses more memory than just the heap: other memory areas include the thread stacks, method area, class loader subsystem, native method stacks, etc.
While both of the apps (game-server, game-client) are running, there is a chance of contention in how the OS manages memory between them, resulting in slowness.
In that case, the client app can be pinned to another core, if available.

Java using up far more memory than allocated with -Xmx

I have a project I'm writing (in Java) for a class where the prof says we're not allowed to use more than 200m.
I limit the heap memory to 50m (just to be absolutely sure) with -Xmx50m, but according to top it's still using 300m.
I tried running Eclipse Memory Analyzer and it reports only 26m
Could this all be memory on the stack? I'm pretty sure I never go further than about 300 method calls deep (yes, it is a recursive DFS search), so that would have to mean every stack frame is using up almost a megabyte, which seems hard to believe.
The program is single-threaded. Does anyone know any other places in which I might reduce memory usage? Also, how can I check/limit how much memory the stack is using?
UPDATE: I'm using the following JVM options now with no effect (still about 300m according to top): -Xss104k -Xms40m -Xmx40m -XX:MaxPermSize=1k
Another UPDATE: Actually, if I let it run a little bit longer (with all these options), about half the time it suddenly drops to 150m after 4 or 5 seconds (the other half it doesn't drop). What makes this really strange is that my program has nothing stochastic in it (and as I said it's single-threaded), so there's no reason it should behave differently on different runs.
Could it have something to do with the JVM I'm using?
java version "1.6.0_27"
OpenJDK Runtime Environment (IcedTea6 1.12.3) (6b27-1.12.3-0ubuntu1~10.04)
OpenJDK 64-Bit Server VM (build 20.0-b12, mixed mode)
According to java -h, the default JVM is -server. I tried adding -cacao and now (with all the other options) it's only 59m. So I suppose this solves my problem. Can anyone explain why this was necessary? Also, are there any drawbacks I should know about?
One more update: cacao is really really slow compared to server. This is an awful option
The top command reflects the total amount of memory used by the Java application. This includes, among other things:
A basic memory overhead of the JVM itself
The heap space (bounded with -Xmx)
The permanent generation space (-XX:MaxPermSize - not standard in all JVMs)
The thread stack space (-Xss per stack), which may grow significantly depending on the number of threads
Space used by native allocations (using the ByteBuffer class, or JNI)
Max memory = [-Xmx] + [-XX:MaxPermSize] + number_of_threads * [-Xss]
Here -Xmx is the max heap memory, -Xms the min heap memory, -Xss the per-thread stack memory,
and -XX:MaxPermSize the permanent generation size.
The following example illustrates this situation. I have launched my tomcat with the following startup parameters:
-Xmx168m -Xms168m -XX:PermSize=32m -XX:MaxPermSize=32m -Xss1m
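Plugging those numbers into the rule of thumb above, and assuming purely for illustration that the Tomcat instance runs around 50 threads, the rough ceiling would be 168m (heap) + 32m (perm gen) + 50 x 1m (stacks), which is about 250 MB, before counting the JVM's own base overhead and any native allocations.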
With -Xmx you are configuring the heap size. To configure the stack size use the -Xss parameter. The sum of those two parameters should be approximately what you want:
-Xmx150m -Xss50m
for example.
Additionally there is also the -XX:MaxPermSize parameter, which controls the size of the permanent generation. For -client this parameter has a default value of 32mb and for -server 64mb. Factor it into your calculation as well. PermGen space is:
The permanent generation is used to hold reflective data of the VM itself, such as class objects and method objects.
So basically it stores internal data of the JVM, like class definitions and interned strings.
In the end I must say that there is one part which you can't control: the memory used by the native Java process itself. Java is a program just like any other, so it also uses memory. If you are watching memory usage in Task Manager you will see this memory as well, together with your program's memory consumption.
It's important to note that "total memory used" (RSS in Linux land) includes JDK heap (+ other JDK areas) as well as any "native memory" allocated.
For instance, these people found that allocating too many JAXBContexts (which have associated native memory) between GCs could cause it to use a lot of extra RAM. Another common one is apparently the zip Inflater if you don't call end() on it (or a GZIP stream you don't close, etc.).
http://sleeplessinslc.blogspot.com/2014/08/jvm-native-memory-leak.html
His final workaround/fix was to either GC "more often" (by using the G1 garbage collector, or specifying a smaller [ironically] -Xmx setting) or to cache the JAXBContext objects (since they have no close method, you can't control the leak otherwise).
Also note that sometimes you can find memory culprits just by examining jstack output: http://javaeesupportpatterns.blogspot.com/2011/09/jaxbcontext-performance-problem-case.html
It's also sometimes possible to accidentally "miss" closing, for instance, GZIP streams: http://kohsuke.org/2011/11/03/quiz-time-memory-leak-in-java
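As an illustration of the "forgot to close it" case (a sketch, not taken from the linked posts; it assumes Java 7+ for try-with-resources): closing a GZIPInputStream promptly releases its native Inflater memory instead of leaving that to finalization.

import java.io.BufferedReader;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.zip.GZIPInputStream;

public class GzipRead {
    // try-with-resources guarantees close(), which ends the underlying native
    // Inflater immediately rather than waiting for the garbage collector.
    static String readFirstLine(String path) throws IOException {
        try (BufferedReader in = new BufferedReader(new InputStreamReader(
                new GZIPInputStream(new FileInputStream(path)), "UTF-8"))) {
            return in.readLine();
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(readFirstLine(args[0]));
    }
}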
Have you tried using JVisualVM?
http://docs.oracle.com/javase/6/docs/technotes/tools/share/jvisualvm.html
I've often found it helps me track this stuff down. It will show you how much of each kind of memory is being used, and it even lets you drill in and find out what is using it.

application terminates on heapsize

I have defined -Xmx 1.3GB in the Java VM parameters and my Eclipse does not allow more than this. When running the application I got the exception below:
Exception in thread "Thread-3" java.lang.OutOfMemoryError: Java heap space
What can I do?
You can set the maximum memory Eclipse itself uses with -mx1300m or the like. This limitation will be because you are running 32-bit Java on Windows. On a 64-bit OS, you won't have this problem.
However, it's the maximum memory size you set for each application launched from Eclipse which matters. What have you set in your run options in Eclipse?
Your question is very unclear:
Are you running the application in a new JVM?
Did you set the -Xmx / -Xms parameters in the launcher for the child JVM?
If the answer to either of those questions is "no", then try doing ... both. (In particular, if you don't set at least -Xmx for the child JVM, you'll get the default heap size which is relatively small.)
If the answer to both of those questions is "yes", then the problem is that you are running into the limits of your hardware and/or operating system configuration:
On a typical 32-bit Windows, a user process can only address a total of 2**31 bytes of virtual memory, and some of that will be used by the JVM binaries, native libraries and various non-heap memory allocations. (On a 32-bit Linux, I believe you can have up to 2**31 + 2**30.) The "fix" for this is to use a 64-bit OS and a 64-bit JVM.
In addition, a JVM is limited on the amount of memory that it can request by the resources of the OS'es virtual memory subsystem. This is typically bounded by the sum of the available RAM and the size of the disc files / partitions used for paging. The "fix" for this is to increase the size of the paging file / partition. Adding more RAM would probably be a good idea too.
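If the child JVM is started programmatically rather than through an Eclipse launch configuration, the same advice applies: pass the heap flags explicitly when building the command line. A rough sketch (MainClass is a placeholder, and it assumes Java 7+ for inheritIO):

import java.io.IOException;

// Launches a child JVM with an explicit heap limit; without -Xmx the child
// falls back to the platform default heap size, which is relatively small.
public class LaunchChild {
    public static void main(String[] args) throws IOException, InterruptedException {
        Process child = new ProcessBuilder(
                "java", "-Xms256m", "-Xmx1300m",              // explicit heap settings
                "-cp", System.getProperty("java.class.path"),
                "MainClass")                                  // placeholder main class
                .inheritIO()
                .start();
        System.out.println("child exited with code " + child.waitFor());
    }
}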
You may want to look at the AggressiveHeap option: http://java.sun.com/docs/hotspot/gc1.4.2/#4.2.2.%20AggressiveHeap|outline
It solved a similar issue for me.

How to calculate (and specify) the total memory space allowed for java process?

I have a system which cannot provide more than 1.5 GB for the Java process. Thus I need an exact way to specify the Java process settings, including all kinds of memory used inside Java and a possible fork.
One specific java process and system to illustrate my problem:
My current environment is java 1.6.0_18 under Ubuntu Linux 9.10.
I start a large Java server process with the following JVM options:
"-Xms512m -Xmx1024m -XX:PermSize=256m -XX:MaxPermSize=512m"
Now, "top" command reports that the process uses 1.6gb memory...
Questions:
1 - How is the maximal space used by the Java process calculated? Please provide an exact formula if possible.
(Something like: max heap + max perm + stack + JVM space = maximal space.)
2 - What is the infamous fork behavior under Linux in my case? Will the forked JVM occupy an extra 1.6 GB (resulting in a total of 3.2 GB of used memory)?
3 - Which options must be used to absolutely ensure that no more than 1.5 GB is used at any time?
thank you
@rancidfishbreath: "ulimit" will ensure that java cannot take more than the specified amount of memory. My purpose is to ensure that java doesn't ever try to do that.
top reports 1.6GB because PermSize is on top of the maximum heap size. In your case you set MaxPermSize to 512m and Xmx to 1024m. This amounts to 1536m. Just like in other languages, an absolutely precise number can not be calculated unless you know precisely how many threads are started, how many file handles are used, etc. The stack size per thread depends on the OS and JDK version; in your case it's 1024k (if it is a 64-bit machine). So if you have 10 threads you use 10240k extra, as the stack is not allocated from the heap (Xmx). Most applications that behave nicely work perfectly when setting a lower stack size and MaxPermSize. Try setting the ThreadStackSize to 128k and, if you get a StackOverflowError (i.e. if you do lots of deep recursion), increase it in small steps until the problem disappears.
So my answer is essentially that you can not control, down to the MB, how much the Java process will use, but you can come fairly close by setting e.g. -Xmx1024m -XX:MaxPermSize=384m -XX:ThreadStackSize=128k -XX:+UseCompressedOops. Even if you have lots of threads you will still have plenty of headroom until you reach 1.5GB. The UseCompressedOops flag tells the VM to use narrow pointers even when running on a 64-bit JVM, thus saving some memory.
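To make that concrete with a rough, illustrative calculation: with -Xmx1024m, -XX:MaxPermSize=384m and, say, 100 threads at -XX:ThreadStackSize=128k, the total comes to about 1024m + 384m + 100 x 0.125m, roughly 1420 MB, which still fits under the 1.5 GB limit with some room left for the JVM's own native allocations.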
At a high level, the JVM address space is divided into three main parts:
Kernel space: ~1GB (this also depends on the platform; on Windows it's more than 1GB)
Java heap: the Java heap specified by the user using -Xmx, -XX:MaxPermSize, etc.
The rest of the virtual address space goes to native usage by the JVM, to accommodate the malloc/calloc done by the JVM and the native thread stacks: one per Java thread, plus additional JVM native threads for GC, etc.
So you have (4GB - kernel space of 1-1.25GB) ~2.75GB to play with, and you can set your Java/native heap accordingly. But generally you should keep at least 500MB for the JVM native heap, otherwise there is a chance that you get a native OOM. So you need to make a trade-off here based on your application's Java heap utilization.
