Java's Heap Structure Implementation [closed] - java

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 7 years ago.
I understand that Java allocates objects on the heap, and why. I also understand that primitive values and reference variables live on the stack.
What I don't understand is its efficiency. It seems to work pretty fast compared to other languages.
My questions are related to each other: What makes Java's heap structure efficient? How is it implemented?

The HotSpot JVM uses several different garbage collectors in tandem to increase efficiency. Since most objects have extremely short lifetimes, it uses a stop-and-copy garbage collector over a small region for most allocations. Because stop-and-copy allows almost instantaneous allocation (usually one or two assembly instructions to bump a pointer), most allocations are fast. The cost of the "copy" step is low because most objects are already dead by the time it runs, and the small size reserved for the copy collector gives a good upper bound on the maximum time spent copying.
For objects that survive long enough in the first level of the stop-and-copy collector, HotSpot has a second stop-and-copy region into which it relocates them. This frees up space in the top-level copy collector. Objects that survive long enough there are then promoted to an area managed by mark-and-sweep collection. The idea is that anything that ends up there is likely to stay around for a long time, because it has already survived so long.
This hybrid approach - plus a number of other optimizations - explains why allocations and deallocations are so fast. Notice that the key trick used here - relocating objects - is hard to implement in low-level languages because of exposed pointers.
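A tiny illustration (not a rigorous benchmark) of why this matters: churning through millions of immediately-dead objects is cheap, because the allocation path in a copying collector is essentially a pointer bump. The class and method names below are illustrative.

```java
public class ShortLivedAllocations {
    /** Allocates n tiny, immediately-dead objects and returns a checksum. */
    static long churn(int n) {
        long sum = 0;
        for (int i = 0; i < n; i++) {
            int[] temp = new int[]{i};  // young-generation allocation: typically a pointer bump
            sum += temp[0];             // temp never survives a minor GC
        }
        return sum;
    }

    public static void main(String[] args) {
        long start = System.nanoTime();
        long sum = churn(1_000_000);
        long ms = (System.nanoTime() - start) / 1_000_000;
        System.out.println("checksum " + sum + " in " + ms + " ms");
    }
}
```

Running this with `-verbose:gc` shows only cheap minor collections, since nothing is ever promoted.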


How to resolve segmented code cache related memory issues in Java 9? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 5 years ago.
The segmented code cache is a new feature introduced in Java 9 that divides the code cache into distinct segments; this improves performance considerably. However, because each code heap has a fixed size, memory can be wasted: with a small code cache, one code heap may be full while another still has free space.
So what are possible workarounds for these memory issues?
The Risks and Assumptions section of the Segmented Code Cache JEP states the same thing more precisely:
Having a fixed size per code heap leads to a potential waste of memory
in case one code heap is full and there is still space in another code
heap. Especially for very small code cache sizes, it may happen that
the compilers are shut off even if there is still space available.
To solve this problem an option will be added to turn off the
segmentation for small code-cache sizes.
The following command-line switches are introduced to control the sizes of the code heaps:
-XX:NonProfiledCodeHeapSize: Sets the size in bytes of the code heap containing non-profiled methods.
-XX:ProfiledCodeHeapSize: Sets the size in bytes of the code heap containing profiled methods.
-XX:NonMethodCodeHeapSize: Sets the size in bytes of the code heap containing non-method code.

Java list expand strategy [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 6 years ago.
For example, in an ArrayList where each item is very big, the list may grow large enough to exceed available memory. What is the strategy for expanding a list in this situation?
Thanks for all the replies. I have encountered this problem when receiving a list of objects via a remote call: each object in the list may be quite large, while the list may contain 10000 elements or more. I wonder how to hold this list in memory during execution.
List<BigItem> list = queryService.queryForList(params...);
Your question is very generic, but I think it is possible to give a certain "fact based" answer nonetheless:
If your setup is as such that memory becomes a bottleneck; then your application needs to be aware about that fact. In other words: you need to implement measurements within your application.
You have to enable your application to decide whether "growing" a list (and "loading" those expensive objects, for example) is possible or not.
A simple starting point is described here; but of course, this is really a complicated undertaking. Your code has to constantly monitor its memory usage; and take appropriate steps if you get closer to your limits.
Alternatively, you should do profiling to really understand the memory-consumption "behavior" of your application. There is no point in putting effort into "self-controlling" if your application happens to have various memory leaks, for example, or if your code generates "garbage" at a rate that keeps the garbage collector spinning constantly.
You see, a lot of aspects come into play here. You should focus on them one by one. Start with understanding your application; then decide if you have to improve its "garbage collection" behavior; or if you have to go down the full nine yards and make your application manage itself!
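A minimal sketch of such a memory-aware guard, using `Runtime` heap statistics. The `MemoryAwareList` class and its threshold are illustrative, not a production design:

```java
import java.util.ArrayList;
import java.util.List;

class MemoryAwareList<T> {
    private final List<T> items = new ArrayList<>();
    private final double maxHeapFraction;

    MemoryAwareList(double maxHeapFraction) {
        this.maxHeapFraction = maxHeapFraction;
    }

    /** Adds the item only if estimated heap usage stays below the threshold. */
    boolean tryAdd(T item) {
        Runtime rt = Runtime.getRuntime();
        long used = rt.totalMemory() - rt.freeMemory();
        if ((double) used / rt.maxMemory() > maxHeapFraction) {
            return false; // caller decides how to degrade: spill to disk, page, abort, ...
        }
        items.add(item);
        return true;
    }

    int size() { return items.size(); }
}
```

Note that `freeMemory()` is only an estimate; a real implementation would also listen to GC activity (e.g., via `MemoryPoolMXBean` thresholds) rather than polling alone.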

Java Zero Allocation [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 7 years ago.
I have come across the term "zero-allocation" several times and was looking for some clarification on the subject.
When "zero-allocation" is mentioned, does it refer to programs that allocate very little, or that allocate everything at start-up time, for example? It seems to me that allocating no objects at all is unfeasible in a non-trivial program, but I might be wrong.
On the other hand, is using off-heap memory also considered "zero-allocation"? In that case, would "zero-allocation" mean no memory managed by the garbage collector?
I first heard about this in the context of this presentation: http://www.infoq.com/presentations/panel-java-performance, around 15:35.
If you have a very tight, hot loop (i.e. a loop that runs thousands if not millions of times in a very short time), then it makes sense to move allocation outside the loop.
I wrote a simulation in Java ten years ago. An object list was manipulated inside the loop. The loop ran thirty times a second, had to complete within 30 milliseconds, and yet manipulated up to fifty thousand objects. One difficulty was that objects were created and deleted within each loop iteration.
We realized soon that we should avoid object allocation (and by consequence garbage collection). We solved this problem with a zero allocation approach within the loop. How?
We replaced the list with an array of flyweight objects. The array of fifty thousand objects is allocated before the loop starts. The second trick is a variant of the flyweight pattern: instead of deleting and creating objects in the loop, we started with fifty thousand pre-allocated objects and added a flag marking each one as "active" or "inactive". Whenever we wanted to remove an object, we simply marked it inactive. There were many such little tricks to avoid allocation.
And it helped! The simulation ran in real time and without garbage-collection jitter (sudden drops in frame rate caused by a major garbage-collection run).
This is a small example of how zero allocation can work and why it is sometimes necessary.
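A minimal sketch of the pre-allocated pool with an "active" flag described above; the `Particle` type and its fields are hypothetical, stand-ins for whatever the simulation manipulated:

```java
// Hypothetical object type used only for illustration.
final class Particle {
    boolean active;
    double x, y;
}

final class ParticlePool {
    private final Particle[] pool;

    ParticlePool(int capacity) {
        pool = new Particle[capacity];
        for (int i = 0; i < capacity; i++) {
            pool[i] = new Particle(); // allocate everything once, before the hot loop starts
        }
    }

    /** "Creates" a particle by reusing an inactive slot; no allocation inside the loop. */
    Particle acquire() {
        for (Particle p : pool) {
            if (!p.active) {
                p.active = true;
                return p;
            }
        }
        return null; // pool exhausted
    }

    /** "Deletes" a particle by flagging it, so the garbage collector never sees garbage. */
    void release(Particle p) {
        p.active = false;
    }
}
```

A real pool would track a free-list index instead of scanning linearly, but the principle is the same: object lifetime is managed by flags, not by allocation and collection.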

Best Way to Know When to Free Up Memory in an App [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question appears to be off-topic because it lacks sufficient information to diagnose the problem. Describe your problem in more detail or include a minimal example in the question itself.
Closed 8 years ago.
Okay, so I have an app that works with several large data structures. For performance, these over-allocate array sizes and hold onto cleared slots to ensure they can expand quickly when new items are added, i.e., they avoid creating new arrays as much as possible.
However, this can be a bit wasteful if a device is low on memory. To account for this I currently have some sanity checks that shrink the arrays if the number of unused slots exceeds a certain amount within a certain time since the array size last changed, but this seems a clunky thing to do, as I don't know whether the space actually needs to be freed up.
What I'm wondering is: if I have a method that tells my object to reclaim space, is there a way to detect when my app should release some memory (e.g., memory is low and/or garbage collection is about to become more aggressive), so that I can shrink my data structures? I ask because some devices aren't memory-constrained at all, so it hardly matters if my app is a bit wasteful for the sake of speed there, while others benefit from having as much free space as possible - but my current method treats both cases exactly the same way.
is there a way that I can detect when my app should release some memory (e.g - memory is low and/or garbage collection is about to become more aggressive), so that I can shrink my data structures?
Override onTrimMemory() in relevant classes implementing ComponentCallbacks2 (e.g., your activities). In particular, states like TRIM_MEMORY_BACKGROUND, TRIM_MEMORY_MODERATE, and TRIM_MEMORY_COMPLETE are likely candidate times to "tighten your belt" from a memory consumption standpoint.
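A plain-Java sketch of the pattern, runnable without the Android SDK: the constant values below mirror the real `android.content.ComponentCallbacks2` trim levels, but the `TrimAware` interface and `CachingStore` class are hypothetical stand-ins for your activity and data structures.

```java
/** Plain-Java stand-in for ComponentCallbacks2; constants match the real Android values. */
interface TrimAware {
    int TRIM_MEMORY_BACKGROUND = 40;
    int TRIM_MEMORY_MODERATE = 60;
    int TRIM_MEMORY_COMPLETE = 80;
    void onTrimMemory(int level);
}

final class CachingStore implements TrimAware {
    private int capacity = 10_000; // over-allocated for speed, as in the question

    @Override
    public void onTrimMemory(int level) {
        if (level >= TRIM_MEMORY_COMPLETE) {
            capacity = 0;      // process is next in line to be killed: release everything
        } else if (level >= TRIM_MEMORY_MODERATE) {
            capacity /= 4;     // shrink aggressively
        } else if (level >= TRIM_MEMORY_BACKGROUND) {
            capacity /= 2;     // app went to the background: trim the slack
        }
    }

    int capacity() { return capacity; }
}
```

In a real app you would override `onTrimMemory(int)` in your `Activity` (which already implements `ComponentCallbacks2`) and call your shrink method from there, scaled to the severity of the level.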

How many clock cycles for variable assignment in C? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
Does anyone know how many clock cycles a variable assignment takes in C on an x86 platform? It is generally considered to take fewer cycles than in Java - what is the reason behind that?
The difference between C and Java does not come from the languages themselves, but from the technology behind them:
C is compiled to native machine code, which is executed directly by the processor. By contrast, Java is (most generally) never fully compiled ahead of time; it is instead compiled into bytecode.
This bytecode is designed to be interpreted by a virtual machine (the JVM in Java's case), allowing much easier portability: while you need to adapt your C code to make it portable (see NetBSD for an example), or build different versions for each target, you just need a different JVM to run the same Java bytecode on a different target.
It is worth noting that Java follows the JIT model, allowing optimizations that are normally impossible because they rely on conditions only known at run time.
Now, in the case of your question, the real things to compare are: on a given machine, how many cycles does it take for a value to be copied to memory (RAM - though some C compilers can keep frequently used variables, such as loop counters, in CPU registers) with an assembly instruction, versus how many cycles the JVM takes to do the same task upon reading the corresponding bytecode instruction.
And I would say that with a good JVM implementation, there would be no difference for the assignment itself, as far as I understand. Now, there are other criteria to consider: Java makes heavy use of objects, which take up a lot of space in RAM due to their complex nature and therefore also take more time to allocate. Also, I believe that Java performs more checks to avoid common mistakes, such as accessing an uninitialized variable, and those cost time too.
But keep in mind that a badly coded C program can take much more time to execute than a very well coded Java program.
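A small Java sketch of why "cycles per assignment" is not a well-defined constant on the JVM: the same loop typically runs much slower the first time (interpreted) than on later runs (JIT-compiled), and the JIT may even eliminate the assignment entirely. Class and method names here are illustrative.

```java
public class AssignmentTiming {
    /** Times n long assignments; the JIT may hoist or eliminate them entirely. */
    static long timeAssignments(int n) {
        long x = 0;
        long start = System.nanoTime();
        for (int i = 0; i < n; i++) {
            x = i;  // the "assignment" under discussion
        }
        long elapsed = System.nanoTime() - start;
        if (x < 0) throw new AssertionError(); // use x so the loop is not trivially dead code
        return elapsed;
    }

    public static void main(String[] args) {
        // The first round runs interpreted; later rounds may be JIT-compiled and far faster,
        // so any per-assignment cycle count you compute from this is an artifact of when you measured.
        for (int round = 0; round < 3; round++) {
            System.out.println("round " + round + ": " + timeAssignments(10_000_000) + " ns");
        }
    }
}
```

The same caveat applies to C, in a different form: the compiler may keep the variable in a register or delete the store, so the "number of cycles" depends on the generated machine code, not the source line.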
You will never understand any answer to your question until you understand this statement:
"Exactly zero lines of C code have ever been executed by a computer. Also, exactly zero lines of Java code have ever been executed by a computer."
The answer to your question is:
"An assignment, written in C, takes an unknown number of clock cycles to complete."
