For example, in an ArrayList each item is very big, and the list may grow large enough to exceed the available memory. What is the strategy for expanding a list in this situation?
Thanks for all the replies. I have run into this problem when receiving a list of objects from a remote call: each object in the list may be quite large, and the list may contain 10,000 entries or more. I wonder how to keep this list in memory during execution.
List<BigItem> list = queryService.queryForList(params...);
Your question is very generic, but I think it is possible to give a certain "fact-based" answer nonetheless:
If your setup is such that memory becomes a bottleneck, then your application needs to be aware of that fact. In other words: you need to implement measurements within your application.
You have to enable your application to decide whether "growing" a list (and "loading" those expensive objects, for example) is possible or not.
A simple starting point is described here; but of course, this is really a complicated undertaking. Your code has to constantly monitor its memory usage and take appropriate steps as you get closer to your limits.
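To make this concrete, here is a minimal sketch of such a measurement using the standard Runtime API; the 20% headroom threshold and the MemoryAwareLoader/tryAdd names are made up for illustration and would need tuning for a real application.

import java.util.List;
import java.util.function.Supplier;

public final class MemoryAwareLoader {

    // Assumed policy: refuse to grow once less than 20% of the maximum heap is left.
    private static final double MIN_FREE_RATIO = 0.20;

    static boolean memoryIsTight() {
        Runtime rt = Runtime.getRuntime();
        long used = rt.totalMemory() - rt.freeMemory();   // currently occupied heap
        long available = rt.maxMemory() - used;           // what could still be allocated
        return (double) available / rt.maxMemory() < MIN_FREE_RATIO;
    }

    // Only "loads" the expensive object and grows the list when there is enough headroom;
    // otherwise the caller decides what to do (spill to disk, stop loading, ...).
    static <T> boolean tryAdd(List<T> list, Supplier<T> loader) {
        if (memoryIsTight()) {
            return false;
        }
        list.add(loader.get());
        return true;
    }
}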
Alternatively, you should do profiling to really understand the memory consumption "behavior" of your application. There is no point in putting effort into "self-controlling" if your application happens to have various memory leaks, for example, or if your code is generating "garbage" at a rate that makes the garbage collector spin constantly.
You see, a lot of aspects come into play here. You should focus on them one by one. Start with understanding your application; then decide whether you have to improve its "garbage collection" behavior, or whether you have to go the whole nine yards and make your application manage itself!
I need a hint on an interview question that I came across. I tried to find a solution, but I need advice from the experts here. What different strategies would you employ if you came across this particular situation? The question and my thoughts are as follows:
Q. You want to store a huge number of objects in a list in Java. The number of objects is huge and gradually increasing, but you have very limited memory available. How would you do that?
A. I answered by saying that, once the number of elements in the list gets over a certain threshold, I would dump them to a file. I would then build a cache-like data structure that holds the most frequently used or most recently added elements. I gave an analogy to the page swapping employed by the OS.
Q. But this would involve disk access, and that would be slower and affect execution.
I did not know the solution to this and could not think clearly during the interview. I tried to answer:
A. In this case, I would think of horizontally scaling the system or adding more RAM.
Immediately after I answered this, my telephone interview ended. I felt that the interviewer was not happy with the answer. But then, what should the answer have been?
Moreover, I am curious not only about the answer; I would like to learn about the different ways this can be handled.
I am not sure, but this somewhat suggests the Flyweight pattern. It is the same pattern used for the String pool, and an efficient implementation of it is a must. Apart from that, we would need to focus on database-related tasks to persist the data once the threshold limit is surpassed. Another technique is to serialize the objects, but as you said, the interviewer was not satisfied and wanted some other explanation.
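To illustrate the "keep a bounded cache in memory, serialize the rest" idea discussed above, here is a rough sketch built on LinkedHashMap's eviction hook; the capacity of 1,000, the file naming scheme and the SpillingCache name are arbitrary assumptions, and a real implementation would also need a way to read spilled entries back (and a safer key-to-file mapping than hashCode()).

import java.io.*;
import java.util.LinkedHashMap;
import java.util.Map;

class SpillingCache<K, V extends Serializable> extends LinkedHashMap<K, V> {

    private static final int CAPACITY = 1_000;   // entries kept in memory
    private final File spillDir;

    SpillingCache(File spillDir) {
        super(16, 0.75f, true);                  // accessOrder = true gives LRU behaviour
        this.spillDir = spillDir;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        if (size() <= CAPACITY) {
            return false;                        // still under the threshold, keep everything
        }
        // Serialize the evicted value to disk instead of losing it.
        File file = new File(spillDir, eldest.getKey().hashCode() + ".bin");
        try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream(file))) {
            out.writeObject(eldest.getValue());
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return true;                             // tell LinkedHashMap to drop it from memory
    }
}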
I have come across the term "zero-allocation" several times, and I was looking for some clarification on the subject.
When "zero-allocation" is mentioned, is it referring to programs that use little allocation or allocate everything at start-up time, for example? Because it seems to me that allocating no objects at all is unfeasible in a non-trivial program, but I might be wrong.
On the other hand, using off-heap memory, is that also considered "zero-allocation" and in this case, "zero-allocation" would mean no memory allocated to be handled by the Garbage Collector?
I first heard about this in the context of this presentation: http://www.infoq.com/presentations/panel-java-performance, around 15:35.
If you have a very tight and hot loop (i.e. a loop which is run thousands if not millions of times in a very short time), then it makes sense to move allocation outside the loop.
I wrote a simulation in Java ten years ago. There was an object list being manipulated in the loop. The loop ran thirty times a second and had to complete within 30 milliseconds, yet manipulate up to fifty thousand objects. A further difficulty was that objects were created and deleted within a loop iteration.
We soon realized that we had to avoid object allocation (and, by consequence, garbage collection). We solved this problem with a zero-allocation approach within the loop. How?
We replaced the list with an array of flyweight objects. The array of fifty thousand objects is allocated before the loop starts. The second trick is a variant of the flyweight pattern: instead of deleting and creating objects in the loop, we started with fifty thousand pre-allocated objects and added a flag to mark each of them as "active" or "inactive". Whenever we wanted to remove an object, we marked it as inactive. There were many such little tricks to avoid allocation.
And it helped! The simulation was able to run in real time and without garbage collection jitter (sudden drops in the frame rate caused by a major garbage collection run).
This is a little example to show you how zero allocation might work and why it is necessary.
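As a minimal sketch of the pre-allocated pool with an "active" flag described above (the Particle class and its fields are invented for illustration; the original simulation's objects would of course differ):

final class Particle {
    boolean active;              // marks the slot as in use instead of creating/deleting objects
    double x, y, vx, vy;
}

final class ParticlePool {
    private final Particle[] slots;

    ParticlePool(int capacity) {
        slots = new Particle[capacity];
        for (int i = 0; i < capacity; i++) {
            slots[i] = new Particle();           // all allocation happens up front, before the loop
        }
    }

    // "Creates" a particle by reusing an inactive slot; returns null if the pool is exhausted.
    Particle acquire() {
        for (Particle p : slots) {
            if (!p.active) {
                p.active = true;
                return p;
            }
        }
        return null;
    }

    // "Deletes" a particle without freeing any memory.
    void release(Particle p) {
        p.active = false;
    }

    // The hot loop touches only active slots and allocates nothing.
    void update(double dt) {
        for (Particle p : slots) {
            if (p.active) {
                p.x += p.vx * dt;
                p.y += p.vy * dt;
            }
        }
    }
}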
Okay, so I have an app that works with several large data structures; for performance, these over-allocate their array sizes and hold onto cleared slots, in order to ensure they can expand quickly when new items are added, i.e. the app avoids having to create new arrays as much as possible.
However, this can be a bit wasteful if a device is low on memory. To account for this I currently have some sanity checks that shrink the arrays if the number of unused slots exceeds a certain amount within a certain time since the array size was last changed, but this seems a clunky thing to do, as I don't know whether the space actually needs to be freed up.
What I'm wondering is: if I have a method that tells my object to reclaim space, is there a way I can detect when my app should release some memory (e.g. memory is low and/or garbage collection is about to become more aggressive), so that I can shrink my data structures? I ask because obviously some devices aren't memory-constrained at all, so it likely won't matter much if my app is a bit wasteful for the sake of speed on those, while others benefit from having as much free space as possible; my current method treats both cases in exactly the same way.
is there a way that I can detect when my app should release some memory (e.g - memory is low and/or garbage collection is about to become more aggressive), so that I can shrink my data structures?
Override onTrimMemory() in relevant classes implementing ComponentCallbacks2 (e.g., your activities). In particular, states like TRIM_MEMORY_BACKGROUND, TRIM_MEMORY_MODERATE, and TRIM_MEMORY_COMPLETE are likely candidate times to "tighten your belt" from a memory consumption standpoint.
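A rough sketch of what that might look like; shrinkToFit() and the Shrinkable placeholder stand in for whatever reclaim method your own data structures expose.

import android.app.Activity;
import android.content.ComponentCallbacks2;

public class MainActivity extends Activity {

    interface Shrinkable { void shrinkToFit(); }   // placeholder for your over-allocating structures
    private Shrinkable largeStructure;

    @Override
    public void onTrimMemory(int level) {
        super.onTrimMemory(level);
        switch (level) {
            case ComponentCallbacks2.TRIM_MEMORY_BACKGROUND:
            case ComponentCallbacks2.TRIM_MEMORY_MODERATE:
            case ComponentCallbacks2.TRIM_MEMORY_COMPLETE:
                // The system is under real memory pressure: drop the over-allocated slack.
                if (largeStructure != null) {
                    largeStructure.shrinkToFit();
                }
                break;
            default:
                // Milder levels (e.g. TRIM_MEMORY_RUNNING_LOW) could trigger a partial trim instead.
                break;
        }
    }
}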
First of all, I know there are several questions about "Java inline", but they are all about how the compiler or the JVM inlines function calls. I'm interested in doing this myself, or creating some kind of view for it. I want to pick a method of a class and see everything inlined: every method call should get inlined. I'm not sure how to handle instantiation of new objects, but that doesn't matter as much.
The goal is manual optimization, e.g. spotting when a parameter is checked against null too often. Is there a tool to do something like this? I would prefer a GUI, but some kind of command-line tool where I can specify a class method and it dumps some text somewhere would suffice, too.
EDIT: For clarification:
Today I argued for using the NullObjectPattern, because some colleagues defensively over-check for null everywhere. This makes the code unreadable and unclean. I don't like it and wanted some kind of tool to show them how often they actually check the very same parameter for null again and again.
As was said: don't guess, especially when you don't know what the JIT compiler will do after the code has been running for a while. You can waste infinite time infinitely improving something that accounts for 1% of runtime and only save 1%, or you can spend a short time getting a 10% improvement on something that accounts for 20% of your runtime and save 2%; the latter is by far the better choice.
The way you determine what's worth improving is by properly profiling your code after it has been fully warmed up.
And the way you get a significant improvement generally has more to do with improved algorithms than with microtuning of single instructions.
I am looking for an algorithm in the area of clustering or machine learning that will facilitate creating a typical data reading for a group of readings. The issue is that it must handle time series data; thus some traditional techniques (such as k-means) are not as useful.
Can anyone recommend places to look, or particular algorithms that would provide a typical reading and be relatively simple to implement (in Java), manipulate and understand?
As an idea: try to convert all data types into time; then you will have vectors of the same type (time), and any clustering strategy will work fine.
By converting to time I mean that any measurement or data type we know about has time in its nature. Time is not a fourth dimension, as many think! Time is actually the zeroth dimension. Even a point with no physical dimensions, which may not exist in space, exists in time.
Distance, weight, temperature, pressure, directions, speed... all the measurements we take can be converted into certain functions of time.
I have tried this approach on several projects and it paid off with really nice solutions.
Hope this might help you here as well.
For most machine learning problems in Java, weka usually works pretty well.
See, for example: http://facweb.cs.depaul.edu/mobasher/classes/ect584/weka/k-means.html
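As a hedged illustration of the Weka route, here is a minimal SimpleKMeans run; readings.arff is a placeholder file in which each row would be one fixed-length series, and note that plain k-means with Euclidean distance still has the time-series limitation raised in the question, so a time-series-aware distance or a feature-extraction step would likely be needed on top of this.

import weka.clusterers.SimpleKMeans;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class TypicalReading {
    public static void main(String[] args) throws Exception {
        // Each row of the (placeholder) ARFF file is one reading, e.g. a fixed-length series of values.
        Instances data = new DataSource("readings.arff").getDataSet();

        SimpleKMeans kmeans = new SimpleKMeans();
        kmeans.setNumClusters(3);                // how many "typical readings" you want
        kmeans.buildClusterer(data);

        // Each centroid can serve as the "typical" reading for its cluster.
        Instances centroids = kmeans.getClusterCentroids();
        System.out.println(centroids);
    }
}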