Correct me if I'm wrong, but in C/C++ the heap has no limit; you don't need to pass any argument to increase it.
So why does Java have a limit on the heap?
There are several reasons:
First, there is the minimum heap size, which exists to prevent slow startup.
The maximum heap size is there so that the GC knows when to start doing its job; without it, collection would still be doable, but much harder, since the JVM would have to rely on heuristics such as the number of allocations.
The maximum heap size also prevents the JVM from hogging all the resources on the machine, which can be a really good thing.
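You can observe these limits from inside a program. A minimal sketch (the exact numbers depend on your JVM, platform, and any -Xms/-Xmx flags you pass):

```java
public class HeapLimits {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // Heap memory the JVM has reserved so far (grows toward the max).
        long total = rt.totalMemory();
        // Upper bound the heap may grow to (the -Xmx limit).
        long max = rt.maxMemory();
        System.out.printf("total: %d MB, max: %d MB%n",
                total / (1024 * 1024), max / (1024 * 1024));
        // Unlike a C/C++ heap, which is bounded only by the OS and
        // available memory, this max is a fixed, finite JVM setting.
    }
}
```

Running the same class with, say, `java -Xmx64m HeapLimits` would show the max dropping to roughly 64 MB.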
I am using HashMap in my application in various places. Especially while executing jobs, it consumes more and more memory, eventually causing the server to shut down. What could be a replacement for HashMap in a Spring Boot application?
HashMap itself is very standard, and there is no equivalent replacement that is more memory-efficient in general. You'll need to change how you are using it. You haven't provided enough information to diagnose the issue, but my first guess would be that you are storing things in maps that keep them "reachable" after they are needed, which prevents the garbage collector from freeing the memory.
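To illustrate that guess (hypothetical code, not taken from the question): entries left in a long-lived map stay reachable, so the GC can never reclaim them until they are explicitly removed.

```java
import java.util.HashMap;
import java.util.Map;

public class JobCache {
    // A long-lived map: everything stored here stays reachable,
    // so the GC cannot reclaim it while the entry remains.
    private static final Map<String, byte[]> results = new HashMap<>();

    static void runJob(String jobId) {
        results.put(jobId, new byte[1024 * 1024]); // 1 MB of job output
        // ... use the result ...
        // Removing the entry once the job is done makes the array
        // unreachable again, allowing the GC to free it. Forgetting
        // this line is the classic slow leak.
        results.remove(jobId);
    }

    public static void main(String[] args) {
        for (int i = 0; i < 1000; i++) {
            runJob("job-" + i);
        }
        System.out.println("entries left: " + results.size());
    }
}
```

Without the `remove` call, the loop would pin roughly 1 GB of arrays in the map even though no job needs them anymore.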
I've written a simple program in two different languages, and the result astonished me!
My application is a simple "Hello world!" program.
The C# program took about 3 MB of RAM, but the JavaFX one took about 78 MB.
Is Java really using that much memory?!
Is there a way to reduce the amount of memory?
The default initial heap size depends on the version of the Java virtual machine: it is a reasonable platform-dependent minimum, and it can be overridden at startup. So yes, you can reduce it.
For details on changing the size, see: https://docs.oracle.com/javase/8/docs/technotes/guides/vm/gc-ergonomics.html
About the default heap size: How is the default java heap size determined?
With the module linking and custom runtime image creation features of JDK 9, can a program have a lower RAM footprint than the same program running on the standard JRE 8? I imagine this would be because there are fewer classes to load from the classpath. It seems as though the RAM savings would depend on whether the JVM lazily loads class definitions into RAM when an app starts up.
We are using the CMS GC for our Java application. We wonder what would happen if we set the GC parameter -XX:+UseParNewGC instead of leaving the default in a single-CPU environment. Will it change performance? If we use the -server flag, will the parallel copying collector be chosen by the JVM, or should we always specify it explicitly?
From the Oracle GC tuning docs about the parallel GC:
On a machine with N hardware threads where N is greater than 8, the parallel collector uses a fixed fraction of N as the number of garbage collector threads. The fraction is approximately 5/8 for large values of N. At values of N below 8, the number used is N.
found here
So you probably won't benefit unless your CPU has multiple hardware threads. Section 5 of the aforementioned docs is also a good read.
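One plausible reading of the quoted heuristic can be sketched as follows (an interpolation consistent with the docs; the exact HotSpot formula may differ):

```java
public class GcThreads {
    // Approximate number of parallel GC threads for n hardware threads:
    // all of them up to 8, then roughly 5/8 of each additional thread,
    // so the fraction approaches 5/8 for large n.
    static int gcThreads(int n) {
        return n <= 8 ? n : 8 + (n - 8) * 5 / 8;
    }

    public static void main(String[] args) {
        for (int n : new int[]{1, 4, 8, 16, 32}) {
            System.out.println(n + " hardware threads -> "
                    + gcThreads(n) + " GC threads");
        }
        // On a single-CPU machine the parallel collector gets exactly
        // one thread, so there is little to gain over a serial collector.
    }
}
```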
Can anybody tell me of a way to determine the efficiency of a program I have written that is not specific to a particular computer? For instance, a piece of code running on an i3 may take 1 second, but on an i7 it may take 0.95 seconds. If you then test the program again while the computer is doing just a little more work, the times may increase to 1.0001 and 0.950003 seconds respectively. I want a way to measure efficiency that would give the same result on all architectures. Is that even possible (mathematically or otherwise)?
I want a way to measure efficiency in a way that would be the same on all archs
You won't get exactly the same number even on the same machine, running at the same CPU speed, with the same code and the same version of Java.
You can't hope to get a number that will be the same across architectures, with different versions of Java, CPUs, speeds, loads, and OSes.
In short, what you ask for is not possible on any real machine, only on a theoretical one. That is why big-O is defined for a theoretical machine (and is derived from the code, not measured).
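One practical approximation (my suggestion, not a standard technique from the answer above): instead of measuring wall-clock time, count the abstract operations the algorithm performs. That count depends only on the input, not on the hardware.

```java
public class OpCount {
    // Linear search instrumented with a comparison counter: the count
    // is identical on an i3, an i7, or any other machine.
    static long comparisons;

    static int find(int[] a, int target) {
        for (int i = 0; i < a.length; i++) {
            comparisons++;
            if (a[i] == target) {
                return i;
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        int[] data = {7, 3, 9, 1, 5};
        find(data, 5);
        System.out.println("comparisons: " + comparisons);
    }
}
```

Plotting such counts against input size is essentially how big-O behavior is checked empirically, without any dependence on CPU speed or system load.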