Max memory you can give to java? [closed] - java

Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
I run a Minecraft server on a 32-bit Ubuntu system. If I upgrade to 64-bit, what is the max memory I can give to Java? I want to give it about 12 GB of RAM, but I can't do that on 32-bit.

There is effectively no maximum to the amount of RAM a 64-bit system can address; you will be stopped only by your computer's hardware. I don't think Java has a maximum amount of allotted RAM either, provided you use the right switch on the command line.
http://en.wikipedia.org/wiki/64-bit

Just to be clear, "hardware" includes paging / swap space, so if you actually require 12GB and only have 8GB of RAM, you'll need to be sure to have 4GB of spare swap space in order for Java to allocate additional memory successfully.

From the Java Tuning white paper:
For a 32-bit process model, the maximum virtual address size of the process is typically 4 GB, though some operating systems limit this to 2 GB or 3 GB. The maximum heap size is typically -Xmx3800m (1600m for 2 GB limits), though the actual limitation is application dependent. For 64-bit process models, the maximum is essentially unlimited.
But the truth is, such huge heap usage (12 GB) can be counterproductive: after running for a long time, the time your application spends doing garbage collection may negate the benefit of having so much memory available.
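Whatever value you pass with -Xmx, you can verify the heap limit the JVM actually granted at runtime. A minimal sketch using the standard `Runtime` API (the class name is just an example):

```java
// Prints the heap limit the running JVM was granted, so you can confirm
// that a flag like -Xmx12g took effect (run: java -Xmx12g MaxHeapCheck).
public class MaxHeapCheck {
    public static long maxHeapBytes() {
        return Runtime.getRuntime().maxMemory();
    }

    public static void main(String[] args) {
        System.out.printf("Max heap: %.1f MB%n",
                maxHeapBytes() / (1024.0 * 1024.0));
    }
}
```

Note that `maxMemory()` can report slightly less than the -Xmx value, since the VM reserves part of the heap for internal bookkeeping.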


Where is permgen located in JDK 7 JVM? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 2 years ago.
I'm studying about JDK 7 JVM's Runtime Data Areas.
I want to compare JDK 7 JVM and JDK 8 JVM. There are some memory areas in JDK 7 JVM, but I'm confused.
I'm looking for pictures and blog articles on the JDK 7 JVM Runtime Data Areas architecture, but the articles all say different things.
Heap (including Young Generation, Old Generation)
Method Area (where is it located in the JVM? heap? non-heap? native memory? or independent?)
Runtime Constant Pool in Method Area
JVM Stack in Native Memory
Native Method Stack in Native Memory
PC Register in Native Memory
But I'm confused about PermGen's location in the Runtime Data Areas.
Some say PermGen is part of the Method Area.
Some say the Method Area is part of PermGen.
Some say PermGen is non-heap.
(Then is PermGen located in Native Memory? Then are the Runtime Data Areas separated into 3 parts: Heap, non-Heap (Method Area), and Native Memory?)
Some pictures show PermGen as part of the Heap.
What is correct?
If you differentiate simply between heap and native memory, PermGen is part of the heap area. So is the method area.
The image you've attached is basically right in this regard.
In the Hotspot-VM Permanent Generation (PermGen) is/was one of the Heap areas.
It is, however, a special heap space separated from the main memory heap.
It is not affected by Java options like -Xmx or -Xms and has its own limits and garbage-collection behavior. Therefore, depending on the viewpoint and context, one could also say it is non-heap.
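You can see how your own JVM classifies these areas by listing its memory pools through the standard `java.lang.management` API. On a JDK 7 HotSpot VM the list typically includes a PermGen pool (e.g. "PS Perm Gen") reported as NON_HEAP; on JDK 8+ it is replaced by "Metaspace". Pool names vary by VM and garbage collector, so treat the exact names as illustrative:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

// Lists the JVM's memory pools and whether each is HEAP or NON_HEAP,
// which shows directly how this VM classifies PermGen/Metaspace.
public class MemoryPools {
    public static void main(String[] args) {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            System.out.printf("%-25s %s%n", pool.getName(), pool.getType());
        }
    }
}
```

This also demonstrates the "depends on the viewpoint" point above: the management API itself files PermGen under NON_HEAP even though HotSpot implements it as a separate heap space.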

Why would a Java-based simulation program operate at the same speed on a faster computer? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 4 years ago.
Context
Hi All,
I'm running a program called Firstrate 5 for work. It's a simple Java program to calculate thermal performance for buildings. After inputting a model, the user presses Calculate and it computes the results. Some models are complex and take on the order of minutes to calculate, which, as part of an iterative process, is very time consuming. I had it on good authority that my old HDD was likely slowing things down. It was time for an upgrade anyway, so I splashed out on a new computer with an SSD, a faster CPU, and 2x the RAM. I was excited to see the performance improvement, so I decided to compare the time required to calculate the same model on my new and old machines. Lo and behold, they both take exactly 3 minutes 15 seconds to calculate.
Question
What possible explanations are there for a Java program exhibiting equivalent performance on a faster machine? Is this likely something to do with the way the software is coded, or could there be an equivalently sized hardware bottleneck on both systems? Does it have anything to do with the JRE? My main goal is to make the process faster, so any education or pointers you can give me may help me find a solution.
Old specs - (Toshiba Laptop Satellite L50-C) , Windows 10
CPU = Intel i7-5500U, 2 cores @ 2.4 GHz
RAM = 8GB (DDR3, 1600 MHz)
HDD = 5400 RPM
CPU usage during calculation ~ 54%
HDD usage during calculation ~ 20%
RAM usage during calculation ~ 53%
New specs - (PC Build), Windows 10
CPU = AMD Ryzen 5 2600X, 6 cores @ 3.6 GHz
RAM = 16GB (DDR4, 2400 MHz)
SSD (Samsung 860 EVO)
CPU usage during calculation = 9.7%
HDD usage during calculation = 19%
RAM usage during calculation ~ 0%
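One observation from the numbers above: ~9.7% CPU on a 6-core/12-thread machine is roughly one fully busy thread, which suggests a single-threaded, per-core-speed-bound program; identical times on very different hardware can also mean the program is paced by a timer or by waiting rather than by computation. A quick way to tell the two apart is to compare wall-clock time against the CPU time the thread actually consumed. A hedged sketch using the standard `ThreadMXBean` API (class and workload names are illustrative):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

// Compares wall-clock time with actual CPU time for a piece of work.
// A CPU share far below 100% means the thread spent most of its time
// waiting (timers, sleeps, I/O, locks) rather than computing.
public class CpuVsWall {
    public static double cpuShare(Runnable work) {
        ThreadMXBean tm = ManagementFactory.getThreadMXBean();
        if (!tm.isThreadCpuTimeSupported()) {
            return 0; // not measurable on this VM
        }
        long cpu0 = tm.getCurrentThreadCpuTime(); // nanoseconds
        long wall0 = System.nanoTime();
        work.run();
        long cpu = tm.getCurrentThreadCpuTime() - cpu0;
        long wall = System.nanoTime() - wall0;
        return (double) cpu / wall;
    }

    public static void main(String[] args) {
        // A compute-bound loop should be near 100% CPU share...
        System.out.printf("busy loop: %.0f%% CPU%n", 100 * cpuShare(() -> {
            long s = 0;
            for (int i = 0; i < 50_000_000; i++) s += i;
        }));
        // ...while a sleep-dominated task should be near 0%.
        System.out.printf("sleep:     %.0f%% CPU%n", 100 * cpuShare(() -> {
            try { Thread.sleep(200); } catch (InterruptedException e) { }
        }));
    }
}
```

If the real application's CPU time is close to its 3:15 wall time, it is single-thread CPU-bound and only a higher per-core speed helps; if CPU time is far lower, the program is waiting on something (timers, I/O, locks) that new hardware cannot accelerate.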

JMeter: How much java heap size one can increase for 64 bit windows OS [duplicate]

This question already has answers here:
What is the largest possible heap size with a 64-bit JVM?
(6 answers)
Closed 6 years ago.
To avoid OutOfMemoryError in JMeter, I am increasing the heap size with -Xms5120m.
I would like to know how much I can increase Java heap size?
With a 64-bit Java, you can increase the heap to whatever you want, provided you follow these rules:
Don't exceed your RAM, and keep enough memory for the OS: your heap should be RAM minus what the OS and other software use. This ensures your machine does not swap.
With big JVM heaps (> 4 GB), you may start facing long GC pauses, which require GC tuning, which is complex. As a rule of thumb, use the latest Java version (currently Java 8) and the G1 GC algorithm (using -XX:+UseG1GC).
Finally, with JMeter there is no reason to increase the heap too much, provided you follow best-practices:
http://www.ubik-ingenierie.com/blog/jmeter_performance_tuning_tips/
http://jmeter.apache.org/usermanual/best-practices.html
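As a concrete example of applying the advice above: recent JMeter versions read their JVM options from a HEAP variable defined in the startup script, and some honor a HEAP environment variable set before launch. A sketch assuming such a version (the test-plan filename is a placeholder; for older versions, edit the HEAP= line in bin/jmeter or bin/jmeter.bat directly):

```shell
# One-off run with a 5 GB heap and the G1 collector, in non-GUI mode
# (non-GUI mode is itself a JMeter best practice for load tests).
HEAP="-Xms5g -Xmx5g -XX:+UseG1GC" ./bin/jmeter -n -t testplan.jmx
```

Setting -Xms equal to -Xmx avoids heap-resizing pauses during the test.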

Does JVM memory overhead scale linearly, or is it constant? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 9 years ago.
In my experience, a C program that uses around 10 megabytes of resident memory may use around 40 to 50 MB when translated into Java, and around 100 in Clojure or Scala. My question is whether this JVM memory overhead scales linearly; if the C version used 1 gigabyte, would the Java version require 4 GB? Or is the JVM memory overhead more a constant factor, such that the 1 GB C program might only use 1.5 GB in Java?
I'm aware that I could benchmark this, but I think hearing people's experience regarding JVM memory use in production would be more informative than an artificial benchmark, which could be skewed to favour either result depending on how it was designed.
The overhead is roughly 10 MB plus about 4x the C program's memory.
The 10 MB is the JVM itself without anything loaded; a 64-bit Java 7 VM uses about that much.
The 4x factor is obviously a guesstimate, because it depends on which data types you use. If you use references for everything in Java, they take up roughly four times as much memory; it is the same kind of difference as between int and Integer.
If your C code does a lot of malloc/new, the Java version will too, and Java's GC might not run when you want it to, so there is also an overhead of "dead references not yet cleaned up" that depends greatly on things outside your control (GC timing).
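The int-vs-Integer difference mentioned above can be made concrete with back-of-envelope arithmetic. A sketch assuming a 64-bit HotSpot VM with compressed oops (4-byte references, 16-byte object headers with 8-byte alignment); these are estimates, not measurements, so use a tool such as JOL for real numbers:

```java
// Estimated footprint of one million values stored as int[] versus
// Integer[], under the compressed-oops assumptions stated above.
public class FootprintEstimate {
    static final int N = 1_000_000;

    public static long intArrayBytes() {
        return 16 + 4L * N;           // array header + 4 bytes per int
    }

    public static long integerArrayBytes() {
        // reference array + one ~16-byte Integer object per element
        return 16 + 4L * N + 16L * N;
    }

    public static void main(String[] args) {
        System.out.println("int[]:     ~" + intArrayBytes() + " bytes");
        System.out.println("Integer[]: ~" + integerArrayBytes() + " bytes");
        System.out.printf("ratio: ~%.1fx%n",
                (double) integerArrayBytes() / intArrayBytes());
    }
}
```

The boxed version comes out roughly 4-5x larger, which is where the "4x" guesstimate comes from; data kept in primitive arrays stays close to its C footprint.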

Why is the call stack in Android 8KB? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 9 years ago.
I wrote some simple tree/graph algorithms, but quickly ran into a lot of StackOverflowError exceptions with some pretty small data. It turns out the stack is 8KB by default on my Samsung Galaxy S3, which has 2GB of RAM. My computer 10 years ago had a 1MB stack. The Linux machine I'm using right now has 4GB of RAM. My phone's RAM is only half the size of my computer's RAM, yet my phone's stack is over 1000 times smaller. Why?
What is the actual technical reason that the developers of Android had to limit the stack much more than other operating systems? E.g., is it because some Android devices have a small amount of RAM, like 1MB or 10MB? I haven't surveyed the range of devices, but I find it hard to believe that any device would be so small.
Just create a Thread with the stack size you want and run your code in it.
I believe the default of 8KB is due to the fact that the stack is most probably allocated on the heap, and a 32-bit architecture does not have a lot of virtual address space to waste on it (with a 2GB memory split, only 2GB is available per process). Allocating a big stack for every thread would permanently take that space out of the process's available address space. This is different from traditional native processes, where the stack grows down from the top of the address space while the heap grows up from the bottom.
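The workaround in the first sentence looks like this in practice, using the standard `Thread(ThreadGroup, Runnable, String, long)` constructor. Note the stack size is only a hint: the VM may round it or ignore it entirely, so the recursion depth you can safely reach is platform-dependent (the class name and depth are illustrative):

```java
// Runs a deep recursion on a thread with an explicitly requested
// stack size, sidestepping the small default stack.
public class BigStack {
    static int depth(int n) {
        return n == 0 ? 0 : 1 + depth(n - 1);
    }

    public static void main(String[] args) throws InterruptedException {
        Runnable deep = () ->
                System.out.println("reached depth " + depth(50_000));
        // null thread group; 16 MB requested stack
        Thread t = new Thread(null, deep, "big-stack", 16 * 1024 * 1024);
        t.start();
        t.join();
    }
}
```

Running the same `depth(50_000)` call directly on a thread with a small default stack would throw StackOverflowError; on the 16 MB thread it completes. For tree/graph algorithms, an explicit stack or iterative formulation is usually the more robust fix than a bigger thread stack.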
