I have come across the term "zero-allocation" several times and was looking for some clarification on the subject.
When "zero-allocation" is mentioned, is it referring to programs that use little allocation or allocate everything at start-up time, for example? Because it seems to me that allocating no objects at all is unfeasible in a non-trivial program, but I might be wrong.
On the other hand, using off-heap memory, is that also considered "zero-allocation" and in this case, "zero-allocation" would mean no memory allocated to be handled by the Garbage Collector?
I first heard about this in the context of this presentation: http://www.infoq.com/presentations/panel-java-performance, around 15:35.
If you have a very tight and hot loop (i.e. a loop which is run thousands if not millions of times in a very short time), then it makes sense to move allocation outside the loop.
I wrote a simulation in Java ten years ago. There was an object list being manipulated in the loop. The loop ran thirty times a second and had to complete within 30 milliseconds, yet it manipulated up to fifty thousand objects. One difficulty was that objects were created and deleted within a loop iteration.
We realized soon that we should avoid object allocation (and by consequence garbage collection). We solved this problem with a zero allocation approach within the loop. How?
We replaced the list by an array of flyweight objects. The array of fifty thousand objects is allocated before the loop starts. The second trick is using a variant of the flyweight pattern. Instead of deleting and creating objects in the loop we started with fifty thousand pre-allocated objects and added a flag to mark them as "active" or not "active". Whenever we wanted to remove an object we marked it as inactive. There were many such little tricks to avoid allocation.
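The pre-allocation plus active-flag trick can be sketched roughly like this (the class and field names such as Particle are illustrative, not from the original simulation, and a real pool would track free slots instead of scanning linearly):

```java
// Sketch of a zero-allocation object pool: all objects are allocated once,
// up front; "create" and "delete" inside the loop only flip a flag.
public class ParticlePool {
    static final int SIZE = 50_000;

    static final class Particle {
        boolean active;   // "deleted" objects are simply marked inactive
        double x, y;
    }

    final Particle[] pool = new Particle[SIZE];

    ParticlePool() {
        // All allocation happens before the hot loop starts.
        for (int i = 0; i < SIZE; i++) pool[i] = new Particle();
    }

    // "Create" an object: reuse an inactive slot, no new allocation.
    Particle acquire() {
        for (Particle p : pool)
            if (!p.active) { p.active = true; return p; }
        return null;  // pool exhausted
    }

    // "Delete" an object: mark it inactive; nothing for the GC to collect.
    void release(Particle p) {
        p.active = false;
    }
}
```

Inside the loop, only acquire() and release() are called, so no garbage is produced and the collector has nothing to do.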
And it helped! The simulation was able to run in realtime and without garbage collection jitter (sudden drops of the frame rate because of a major garbage collection run).
This is a little example to show you how zero allocation might work and why it is necessary.
The segmented code cache is a new feature introduced in Java 9 that divides the code cache into segments, which improves performance to a considerable extent. But the fixed size per code heap can waste memory: with a small code cache, one code heap may be full while there is still space in another.
So what are possible workarounds to overcome these memory issues?
The Risks and Assumptions section of the Segmented Code Cache JEP states the same in a cleaner way:
Having a fixed size per code heap leads to a potential waste of memory
in case one code heap is full and there is still space in another code
heap. Especially for very small code cache sizes, it may happen that
the compilers are shut off even if there is still space available.
To solve this problem an option will be added to turn off the
segmentation for small code-cache sizes.
The following command-line switches are introduced to control the sizes of the code heaps:
-XX:NonProfiledCodeHeapSize: Sets the size in bytes of the code heap containing non-profiled methods.
-XX:ProfiledCodeHeapSize: Sets the size in bytes of the code heap containing profiled methods.
-XX:NonMethodCodeHeapSize: Sets the size in bytes of the code heap containing non-method code.
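Putting the switches together, a sized invocation might look like this (the application name and the sizes are purely illustrative, not recommended values):

```shell
java -XX:+SegmentedCodeCache \
     -XX:NonProfiledCodeHeapSize=32m \
     -XX:ProfiledCodeHeapSize=64m \
     -XX:NonMethodCodeHeapSize=8m \
     MyApp
```

Tuning these three sizes to match your application's actual compilation profile is how you work around the fixed-size waste; alternatively, for very small total code caches, segmentation can simply be left off.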
For example, in an ArrayList each item is very big, and the size of the list may be large enough to exceed the available memory. What is the strategy for expanding a list in this situation?
Thanks for all the replies. I have encountered a problem where I receive a list of objects via a remote call; each object in the list may be quite large, while the size of the list may be 10000 or more. I wonder how to hold this list in memory during execution.
List<BigItem> list = queryService.queryForList(params...);
Your question is very generic, but I think it is possible to give a certain "fact based" answer nonetheless:
If your setup is such that memory becomes a bottleneck, then your application needs to be aware of that fact. In other words: you need to implement measurements within your application.
You have to enable your application to make the decision if "growing" a list (and "loading" those expensive objects for example) is possible, or not.
A simple starting point is described here; but of course, this is really a complicated undertaking. Your code has to constantly monitor its memory usage; and take appropriate steps if you get closer to your limits.
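One minimal sketch of such a measurement polls the Runtime API before growing; the 85% threshold here is an arbitrary figure for illustration, and a production version would need hysteresis and probably a GC-aware strategy:

```java
// Sketch: decide whether it is safe to grow a list (and load more expensive
// objects) based on current heap usage relative to the maximum heap size.
public class MemoryGuard {
    // Assumption for illustration: refuse to grow above 85% heap usage.
    private static final double THRESHOLD = 0.85;

    public static boolean canGrow() {
        Runtime rt = Runtime.getRuntime();
        long used = rt.totalMemory() - rt.freeMemory(); // currently occupied heap
        long max  = rt.maxMemory();                     // -Xmx ceiling
        return (double) used / max < THRESHOLD;
    }
}
```

The loading loop would then call canGrow() before each expensive fetch and fall back to, say, spilling to disk or paging when it returns false.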
Alternatively, you should do profiling to really understand the memory consumption "behavior" of your application. There is no point in putting effort into "self-controlling" if your application happens to have various memory leaks, for example, or if your code is generating garbage at a rate that makes the garbage collector spin constantly.
You see, a lot of aspects come into play here. You should focus on them one by one. Start with understanding your application; then decide whether you have to improve its garbage collection behavior, or whether you have to go down the whole nine yards and make your application manage itself!
I understand that Java allocates objects on the heap and why. I understand that primitive data types and reference variables are allocated on the stack.
What I don't understand is its efficiency. It seems to work pretty fast in comparison to other languages'.
My questions are related to each other: What makes Java's Heap structure efficient? How is it implemented?
The HotSpot JVM uses several different garbage collectors in tandem to increase efficiency. Since most objects have extremely short lifetimes, it uses a stop-and-copy garbage collector over a small region for most allocations. Since stop-and-copy allows for almost instantaneous allocation (usually one or two assembly instructions), this makes most allocations fast. The cost of the "copy" step is low because most objects are reclaimed, and the small size reserved for the copy collector gives a good upper bound on the maximum time spent copying.
For objects that survive a long time in the first layer of the stop-and-copy collector, HotSpot has a second level of stop-and-copy memory to which it relocates those objects. This frees up more space in the top-level copy collector. Objects that survive long enough there are then moved to an area that uses mark-and-sweep collection. The idea is that anything that ends up there is likely to be around for a long time, because it has survived so long.
This hybrid approach - plus a bunch of other optimizations - explains why allocations and deallocations are so fast. Notice that the key tricks used here - namely, relocating objects - are hard to implement in low-level languages because of exposed pointers.
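A rough illustration of how cheap short-lived allocations are follows; note this is not a rigorous benchmark (the JIT may even elide the dead allocation entirely, and JMH should be used for real measurements):

```java
// Sketch: millions of short-lived allocations die in the young generation
// and are reclaimed almost for free by the copy collector.
public class AllocDemo {
    static long timeAllocations(int n) {
        long start = System.nanoTime();
        for (int i = 0; i < n; i++) {
            Object o = new Object(); // dies immediately; never copied to an older generation
        }
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        long ms = timeAllocations(10_000_000) / 1_000_000;
        System.out.println("10M short-lived allocations took " + ms + " ms");
    }
}
```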
Okay, so I have an app that works with several large data structures; for performance, these over-allocate array sizes and hold onto cleared spaces in order to ensure they can expand quickly when new items are added, i.e. the app avoids having to create new arrays as much as possible.
However, this can be a bit wasteful if a device is low on memory. To account for this I currently have some sanity checks that shrink the arrays if the number of unused spaces has exceeded a certain amount for a certain time since the array size last changed, but this seems a clunky thing to do, as I don't know whether the space actually needs to be freed up.
What I'm wondering is: if I have a method that tells my object to reclaim space, is there a way that I can detect when my app should release some memory (e.g. memory is low and/or garbage collection is about to become more aggressive), so that I can shrink my data structures? I ask because obviously some devices aren't very memory constrained at all, so it likely won't matter much if my app is a bit wasteful for the sake of speed on those, while others benefit from having as much free space as possible; my current method treats both cases in exactly the same way.
is there a way that I can detect when my app should release some memory (e.g. memory is low and/or garbage collection is about to become more aggressive), so that I can shrink my data structures?
Override onTrimMemory() in relevant classes implementing ComponentCallbacks2 (e.g., your activities). In particular, states like TRIM_MEMORY_BACKGROUND, TRIM_MEMORY_MODERATE, and TRIM_MEMORY_COMPLETE are likely candidate times to "tighten your belt" from a memory consumption standpoint.
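A sketch of a shrink policy keyed to the trim level follows. The constant values mirror those documented in ComponentCallbacks2 and are redeclared here so the sketch compiles without the Android SDK; the shrink fractions are arbitrary illustrations:

```java
// Sketch: map an onTrimMemory() level to how much over-allocated
// capacity the app's data structures should give back.
public class TrimPolicy {
    // Values mirror android.content.ComponentCallbacks2 (assumption: SDK not on classpath).
    static final int TRIM_MEMORY_BACKGROUND = 40;
    static final int TRIM_MEMORY_MODERATE   = 60;
    static final int TRIM_MEMORY_COMPLETE   = 80;

    /** Fraction of spare capacity to release for a given trim level. */
    static double shrinkFraction(int level) {
        if (level >= TRIM_MEMORY_COMPLETE) return 1.0;   // about to be killed: release everything reclaimable
        if (level >= TRIM_MEMORY_MODERATE) return 0.5;   // middle of the LRU list: release half
        if (level >= TRIM_MEMORY_BACKGROUND) return 0.25; // just backgrounded: trim lightly
        return 0.0; // foreground with memory to spare: keep the speed-oriented over-allocation
    }
}
```

Your onTrimMemory(int level) override would then call something like shrinkFraction(level) and pass the result to the reclaim method on each large structure, so memory-rich devices keep the speed win while constrained ones tighten up.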
I have a program that runs extremely slowly when using big arrays.
I use an int[3000][3000], a String[27000], and a String[5000] array in the final code. This code takes forever to run. Could this be because the arrays take up too much space?
It depends a good deal on the complexity of the algorithms with which you are manipulating the data. This determines how much time the program will take as you throw more data at it (by making the arrays larger and larger). If you are just iterating through the data, the running time is on the order of O(n), meaning it is proportional to the amount of data given; so if you doubled the length of your arrays, your program would take twice as long to execute. If you were, say, comparing every element with every other, it would be on the order of O(n^2), so doubling the length of your arrays would make them take around four times as long to process.
You would have to post your program for us to have any idea whether your algorithm is simply too complex for your computer to handle.
see also: Big O notation
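The two growth rates can be contrasted with a small sketch (the method names are illustrative):

```java
// Sketch: an O(n) pass versus an O(n^2) pass over the same array.
public class Complexity {
    // O(n): one pass; doubling the array doubles the work.
    static long sum(int[] a) {
        long s = 0;
        for (int x : a) s += x;
        return s;
    }

    // O(n^2): every element compared with every other;
    // doubling the array roughly quadruples the work.
    static int countEqualPairs(int[] a) {
        int c = 0;
        for (int i = 0; i < a.length; i++)
            for (int j = i + 1; j < a.length; j++)
                if (a[i] == a[j]) c++;
        return c;
    }
}
```

On a 3000x3000 grid, an O(n^2) pass over all nine million cells paired against each other is what turns "seconds" into "forever".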
Many factors:
Processor speed.
Memory allocation: 3000 * 3000 * 4 bytes = 36,000,000 bytes, roughly 36 MB for the int array alone.
Consider using a List implementation instead of raw arrays.