Object creation is a bottleneck in my application.
I think that adding more threads for object creation makes the situation worse, because object creation is a CPU-bound task, right?
How can I improve performance, then?
Often the problem is not object creation itself, but repeated object creation and garbage generation. That causes two performance hits: creating all those objects and extra garbage collection stalls.
First, you should use profiling tools to verify that excessive object creation is the source of your performance problems. Assuming that you have verified that this is the problem, there are various things to look for and strategies to try. It all depends on how your code is written, so there's no one recommendation that will work. This list of Java performance guidelines from IBM is definitely worth applying. It identifies how to avoid many of the most common sins: don't create objects inside loops; use StringBuilder instead of a series of string concatenation expressions; use primitive types and avoid auto-boxing/unboxing where possible; cache frequently used objects; allocate collection classes with an explicit capacity instead of allowing them to grow; etc.
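For example, here is a small before/after sketch of the StringBuilder and explicit-capacity guidelines (illustrative code, not taken from the IBM list):

import java.util.ArrayList;
import java.util.List;

public class CreationGuidelines {
    // Before: each += allocates a new String plus a temporary StringBuilder.
    static String joinSlow(List<String> words) {
        String result = "";
        for (String w : words) {
            result += w + " ";   // creates garbage on every iteration
        }
        return result;
    }

    // After: one StringBuilder reused across the loop, sized up front.
    static String joinFast(List<String> words) {
        StringBuilder sb = new StringBuilder(words.size() * 8); // rough capacity guess
        for (String w : words) {
            sb.append(w).append(' ');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        List<String> words = new ArrayList<>(3); // explicit capacity, no regrowth
        words.add("avoid"); words.add("needless"); words.add("garbage");
        System.out.println(joinFast(words));
    }
}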
Another nice resource is Chapter 4 of the book Java Performance Tuning. (You can read it on-line here.)
If you search the web for "excessive object creation java", you can find lots of other recommendations.
You can still get a significant performance improvement by multi-threading CPU-bound tasks when your app is running on a machine with multiple processors.
As @pst says: are you sure it's the bottleneck? Because these days it's not a common one.
But given that it is: one thing you could try is avoiding creation by caching and reusing instances. But that totally depends on what your program does.
Java uses a TLAB (Thread Local Allocation Buffer) for small to medium-sized objects. This means each thread can allocate objects concurrently, i.e. you don't get a slowdown for using multiple threads.
In general, more CPUs improve CPU-bound problems. It's I/O-bound tasks, where one CPU can use all the available bandwidth (like disk access), that are no faster when you use multiple CPUs.
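To illustrate (this is a toy demo, not a rigorous benchmark): every core below allocates small objects flat out, and because each thread allocates from its own TLAB (on by default in HotSpot), they don't contend on a shared allocation lock:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Toy demonstration: several threads allocating concurrently. With TLABs,
// each thread allocates from its own buffer on the fast path.
public class AllocDemo {
    public static void main(String[] args) throws Exception {
        int threads = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        long start = System.nanoTime();
        for (int t = 0; t < threads; t++) {
            pool.submit(() -> {
                byte[] last = null;
                for (int i = 0; i < 10_000_000; i++) {
                    last = new byte[32];   // small object: TLAB fast path
                }
                return last;               // keep the allocations observable
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        System.out.printf("done in %d ms%n", (System.nanoTime() - start) / 1_000_000);
    }
}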
The simplest way to reduce the cost of object creation is to create and discard fewer objects. There is a common assumption that object creation is unavoidable, but for the last 2.5 years I have worked on applications which GC less than once per day, even under production load.
Most applications don't work this way because they don't need to. However, if you have a need to minimise object creation, you can.
Related
I'm designing a class that provides statistical information about groups of Collatz sequences. One of my goals is to be able to process a large number of sequences containing enormous terms (on the scale of hundreds or even thousands of digits) simultaneously, with maximum efficiency.
To this end, I plan on using the best data collection technique for each individual statistic, which means some tasks may be more efficiently dealt with by a ForkJoinPool, others by the standard cached and fixed thread pools provided in Executors. Would the overhead of creating multiple thread pools, or shutting one down and creating another, if I went that route, cost me more than I would save?
Would the overhead of creating multiple thread pools, or shutting one down and creating another, if I went that route, cost me more than I would save?
How could we possibly tell you that?
There is definitely an overhead in shutting down and restarting a thread pool of any kind. Creating threads is not cheap.
However, we have no way of quantifying how much you save by using different kinds of thread pool. If we can't quantify that, it is impossible to advise you on whether your strategy will work ... or not.
(But I think that repeatedly shutting down and recreating thread pools would be a bad idea. The performance impact of an idle pool is minimal.)
This "smells" of premature optimization. (It is like trying to tune the engine of a racing car before you have manufactured the engine block!)
My advice would be to (largely¹) forget about performance to start with. For now, focus on getting something that works. Here's what I would do:
1. Implement the code using the easiest strategy, write test cases, and test / debug until it works.
2. Choose a sample problem or set of problems that is typical of the kind you will be trying to solve.
3. Implement a test harness that allows you to measure the code's performance for the sample problems. (Beware of the standard problems with Java benchmarking; see the JMH sketch after this list.)
4. Benchmark your code.
5. Is it fast enough? Stop NOW.
6. If not, continue.
7. Implement one of the alternative strategies, and test / debug.
8. Benchmark the modified code.
9. Is it fast enough? Stop NOW.
10. Is it clear that it doesn't help? Abandon it, and try another strategy.
11. Can you tweak it? If so, try that.
12. Go to 5.
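For step 3, a harness built on JMH sidesteps most of the classic traps (no warm-up, dead-code elimination). A minimal sketch, assuming the JMH dependency is on the classpath; solveSample() is a hypothetical stand-in for your real code:

import java.util.concurrent.TimeUnit;
import org.openjdk.jmh.annotations.*;
import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.options.OptionsBuilder;

// Minimal JMH harness sketch; replace solveSample() with a call into
// whatever strategy you are measuring.
@State(Scope.Benchmark)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MILLISECONDS)
public class StrategyBenchmark {

    @Benchmark
    public long currentStrategy() {
        return solveSample();
    }

    private long solveSample() {
        long n = 27, steps = 0;   // tiny Collatz walk as a placeholder workload
        while (n != 1) { n = (n % 2 == 0) ? n / 2 : 3 * n + 1; steps++; }
        return steps;
    }

    public static void main(String[] args) throws Exception {
        new Runner(new OptionsBuilder()
                .include(StrategyBenchmark.class.getSimpleName())
                .forks(1).warmupIterations(3).measurementIterations(5)
                .build()).run();
    }
}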
Also, it may be worthwhile implementing the different strategies in such a way that you can tune them or switch between them using command line or config file settings.
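For example, a small factory sketch that picks the pool implementation from a system property, so you can benchmark each strategy from the same build (the property name and defaults here are made up):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ForkJoinPool;

// Hypothetical sketch: choose the pool from a system property, e.g.
//   java -Dpool.strategy=forkjoin MyApp
final class PoolFactory {
    static ExecutorService create() {
        String strategy = System.getProperty("pool.strategy", "fixed");
        switch (strategy) {
            case "forkjoin": return new ForkJoinPool();
            case "cached":   return Executors.newCachedThreadPool();
            default:         return Executors.newFixedThreadPool(
                                 Runtime.getRuntime().availableProcessors());
        }
    }
}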
As a general rule, it is hard to determine a priori how well any complicated algorithm or strategy is going to perform. Generally speaking, there are too many factors to take into account for a theoretical ... or intuitive ... approach to give a reliable prediction. Benchmarking and tuning is the way to go.
¹ Obviously, if you know that some technique or algorithm will perform badly, and you have a better alternative that is about the same effort to implement ... do the sensible thing.
Since you are only talking about two different types of pools (fork-join and the Executors-based pools), and you claim that at least some of your tasks are better suited to one type of pool or the other, it is entirely likely that the overhead of using two types of pools is worth it.
After all, you can just keep both types of pools alive, so there is only a one-time cost to setting up the pools and creating the threads, while the (apparent) benefit of the two pool types will apply across the entirety of your processing. Since you are doing an "enormous" amount of work, even small benefits will eventually add up and overwhelm the one-time costs (which are probably measured in, at most, milliseconds per thread).
Key to this observation is that there is no real ongoing overhead for existing but inactive threads in the pool you aren't using.
Of course, that said, the short answer is "just try both approaches and measure it!".
Should Java objects be reused as often as they can be? Or should we reuse them only when they are "heavyweight", i.e. when they have OS resources associated with them?
All the old articles on the internet talk about object reuse and object pooling as much as possible, but I have read recent articles that say new Object() is highly optimized now (around 10 instructions) and that object reuse is not as big a deal as it used to be.
What is the current best practice, and how are you people doing it?
I let the garbage collector do that kind of deciding for me. The only time I've hit the heap limit with freshly allocated objects was after running a buggy recursive algorithm for a couple of seconds, which generated 3 * 27 * 27... new objects as fast as it could.
Do what's best for readability and encapsulation. Sometimes reusing objects may be useful, but generally you shouldn't worry about it.
If you use them very intensively and their construction is costly, you should try to reuse them as much as you can.
If your objects are very small and cheap to create (like Object), you should create new ones.
For instance, database connections are pooled because the cost of creating a new one is much higher than the cost of creating, say, a new Integer.
So the answer to your question is: reuse objects when they are heavy AND used often (it is not worth pooling a 3 MB object that is only used twice).
Edit:
Additionally, the item "Favor Immutability" from Effective Java is worth reading and may apply to your situation.
Object creation is cheap, yes, but sometimes not cheap enough.
If you create a lot (and I mean A LOT) of temporary objects in rapid succession, the costs for the garbage collector are considerable. However, even with a good profiler you may not necessarily see the costs easily, as the garbage collector nowadays works in short intervals instead of blocking the whole application for a second or two.
Most of the performance improvements I got in my projects came from either avoiding object creation or avoiding the whole work (including the object creation) through aggressive caching. No matter how big or small the object is, it still takes time to create it and to manage the references and heap structures for it. (And of course, the cleanup and the internal heap-defrag/copying also takes time.)
I would not start to be religious about avoiding object creation at all costs, but if you see a sawtooth pattern in your memory profiler, it means your garbage collector is on heavy duty. And if your garbage collector is using the CPU, that CPU time is not available to your application.
Regarding object pooling: doing it right, without running into either memory leaks or invalid states, or spending more time on the management than you save, is difficult. So I never used that strategy.
My strategy has been to simply strive for immutable objects. Immutable things can be cached easily and therefore help to keep the system simple.
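A minimal sketch of that strategy: an immutable value class whose factory caches a common instance, much as Integer.valueOf does (the class and names here are illustrative):

// Illustrative immutable value class; instances can be cached and shared
// freely across threads because their state never changes.
final class Point {
    private static final Point ORIGIN = new Point(0, 0);  // cached common case
    private final int x, y;

    private Point(int x, int y) { this.x = x; this.y = y; }

    static Point of(int x, int y) {
        return (x == 0 && y == 0) ? ORIGIN : new Point(x, y);
    }

    Point translate(int dx, int dy) {   // returns a new object, never mutates
        return of(x + dx, y + dy);
    }

    int x() { return x; }
    int y() { return y; }
}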
However, no matter what you do: Make sure you check your hotspots with a profiler first. Premature optimization is the root of most evilness.
Let the garbage collector do its job; it can be considered better at this than your code.
Unless a profiler proves it guilty. And don't even use common sense to try to figure out when it's wrong: in unusual cases, even cheap objects like byte arrays are better pooled.
Rule 1 of optimization: don't do it.
Rule 2 (for experts only): don't do it yet.
The rule of thumb should be to use your common sense and reuse objects when their creation consumes significant resources, such as I/O, network traffic, DB connections, etc.
If it's just creating a new String(), forget about the reuse; you'll gain nothing from it. Code readability takes precedence.
I would worry about performance issues when they arise. Do what makes sense first (would you do this with primitives?). If you then run a profiling tool and find that it is new that is causing you problems, start to think about pre-allocation (i.e. allocating when your program isn't doing much other work).
Re-using objects sounds like a disaster waiting to happen by the way:
SomeClass someObject = new SomeClass();
someObject.doSomething();
someObject.changeState();
someObject.changeOtherState();
someObject.sendSignal();
// stuff
//re-use
someObject.reset(); // urgh, had to put this in to support reuse
someObject.doSomethingElse(); // oh oh, this is wrong after calling changeOtherState, regardless of reset
someObject.changeState(); // crap, now this is wrong but it's not obvious yet
someObject.doImportantStuff(); // what's going on?
Object creation is certainly faster than it used to be. The newer generational GCs in JDK 5 and later are an improvement, too.
I don't think either of these makes excessive creation of objects cost-free, but they do reduce the importance of object pooling. I think pooling makes sense for database connections, but I don't attempt it for my own domain objects.
Reuse puts a premium on thread-safety. You need to think carefully to ensure that you can reuse objects safely.
If I decided that object reuse was important, I'd do it with products like Terracotta, Tangosol, GridGain, etc., and make sure that my server had scads of memory available to it.
Second the above comments.
Don't try to second-guess the GC and HotSpot. Object pooling may have been useful once, but these days it's not so useful unless you are talking about database connections or unique system resources.
Just try to write clean and simple code and be amazed at what HotSpot can do.
Why not use VisualVM or a profiler to take a look at your code?
Maybe this is a well-known question, but I didn't find a good reference for this question...
What is the formula for calculating and assigning the default ulimit, verbose GC, and max heap memory values?
If there is no specific formula, what are the criteria for specifying these for a particular machine?
If possible, could anyone please explain these concepts as well?
Are there any other concepts we need to consider for performance improvement?
"How to tune the JVM for better performance?"
Stop what you're doing right now.
Tuning the JVM is probably the last thing you should worry about. Until you've gone through every other performance trick in the book, the default settings should be just fine.
Firstly you need to profile your application and find out where the bottlenecks are. Specifically, you will want to know:
What functions/methods are consuming the majority of CPU time?
Where are all the memory allocations happening?
What kind of objects are taking up most space on the heap?
Then you should apply targeted optimisations to the areas that are causing problems. There are thousands of valid techniques, but here are the ones that I find are most useful:
Improve algorithms - anything that is taking up a decent chunk of CPU time and has complexity of O(n^2) or worse is probably a good candidate for improvement. Try to get it to O(n log n) or better.
Share immutable data - if you have a lot of copies of the same data then it makes sense to turn these into immutable objects and share a single instance. This can save a lot of memory (and has the nice effect of improving thread safety / concurrency)
Use primitive types - replace Integer with int etc. This saves memory and makes numerical operations faster.
Be lazy - don't compute things until they are definitely needed.
Cache things - if something is expensive to compute but frequently requested, store it in a cache after the first request. Back the cache with soft references (a "SoftHashMap"; see the sketch after this list) so that the memory can still be released if needed.
Offload work - Can you make use of multiple cores? Can the client application do some of the work for you?
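On the caching point: the JDK itself has no SoftHashMap, but you can approximate one in a few lines. A hedged sketch (for simplicity it never cleans out emptied SoftReference entries, which a real implementation should do):

import java.lang.ref.SoftReference;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Memoizing cache whose values are softly referenced, so the GC may
// reclaim them under memory pressure and we recompute on demand.
final class SoftCache<K, V> {
    private final Map<K, SoftReference<V>> map = new ConcurrentHashMap<>();
    private final Function<K, V> loader;

    SoftCache(Function<K, V> loader) { this.loader = loader; }

    V get(K key) {
        SoftReference<V> ref = map.get(key);
        V value = (ref == null) ? null : ref.get();
        if (value == null) {                       // missing or reclaimed by GC
            value = loader.apply(key);             // the expensive computation
            map.put(key, new SoftReference<>(value));
        }
        return value;
    }
}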
After making any changes you then need to profile again. At the very least, you will want to confirm that your optimisations actually helped. Additionally, fixing one bottleneck will usually move the bottleneck to another part of the application. So you will need to identify the new place to focus next.
Repeat until your application is fast enough (as defined by your own or your customers' requirements).
I have heard this statement many times when reading Java books and articles. My question is straightforward: when do we say that creating some object is "very expensive"?
What is "expensive" referring to here, and in what scenarios should we use this term? It would be very easy for me to understand if someone illustrated it with a small example, and showed how to avoid the expense.
Expensive usually means it'll take a while, but it can also mean it'll take a lot of some other resource, such as memory, bandwidth, hosting budget, disk space, or anything else you'd like to use less of. For example,
new int[1000000000]
will be expensive, because it allocates and zeroes an incredible amount of memory: roughly 4 GB, i.e. one billion 4-byte ints.
Expensive means it requires quite a good amount of system resources, like memory or disk I/O. A good example would be the creation of a database connection object, which requires quite a number of steps before you get an actual connection object. Each step may itself perform operations like reading configuration from a file (which requires I/O), loading the database driver, registering the driver, etc.
Creating big arrays is also expensive, because it takes a big chunk of memory.
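Going back to the connection example, here is a sketch using the standard JDBC API that makes the cost visible; the URL and credentials are placeholders, and a real driver must be on the classpath:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

// Each call to DriverManager.getConnection() does a TCP handshake,
// authentication, and session setup, which is why connections are pooled.
public class ConnectionCost {
    public static void main(String[] args) throws SQLException {
        long start = System.nanoTime();
        try (Connection con = DriverManager.getConnection(
                "jdbc:postgresql://localhost/testdb", "user", "secret")) {
            System.out.printf("connect took %d ms%n",
                    (System.nanoTime() - start) / 1_000_000);
        }
        // A pool (e.g. HikariCP) pays this cost once per pooled connection
        // and then hands out the same objects for the life of the app.
    }
}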
The word "expensive" is used in the sense of the activity performed (like I/O operations), the actions taken, or the creation of objects.
In the simplest words, anything that can cause even a slight performance hit is called expensive.
In object-oriented programming, "expensive" relates to the memory and other resources your object uses. If your object unnecessarily uses lots of memory or resources, then somewhere you are not programming well.
Going through the Goetz "Java Concurrency in Practice" book, he makes a case against using object pooling (section 11.4.7) - main arguments:
1) allocation in Java is faster than C's malloc
2) threads requesting objects from a pool require costly synchronization
My problem is not so much that allocation is slow, but that periodic garbage collection introduces outliers in response time that could be reduced by using object pools.
Are there any issues that I am not seeing in using this approach? Essentially I am partitioning an object pool across the threads...
If it's thread-local then you can forget about this:
2) threads requesting objects from a pool require costly synchronization
Being thread-local you need not worry about synchronization to retrieve from the pool itself.
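A sketch of what such a per-thread partition might look like; acquire and release must happen on the same thread, and all the names here are illustrative:

import java.util.ArrayDeque;
import java.util.function.Supplier;

// Per-thread pool: each thread has its own free list, so acquire/release
// never touch shared state and need no synchronization.
final class ThreadLocalPool<T> {
    private final ThreadLocal<ArrayDeque<T>> freeList =
            ThreadLocal.withInitial(ArrayDeque::new);
    private final Supplier<T> factory;

    ThreadLocalPool(Supplier<T> factory) {
        this.factory = factory;
    }

    T acquire() {
        T obj = freeList.get().pollFirst();
        return (obj != null) ? obj : factory.get();  // allocate only on a miss
    }

    void release(T obj) {            // must be called on the owning thread
        freeList.get().offerFirst(obj);
    }
}

The trade-off is exactly the one discussed below: each thread holds on to its own free objects, so total memory use scales with the number of threads.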
(Sun's) GC scans live objects. The assumption is that there are way more dead objects than live objects in a typical Java program's runtime; it marks the live objects and disposes of the rest.
If you cache a lot of objects, they are all live. And if you have several GBs of such objects, the GC is going to waste a lot of time scanning them in vain. Long GC pauses can paralyze your application.
Caching something just to make it non-garbage does not help the GC.
That's not to say caching is wrong. If you have 15 GB of memory and your database is 10 GB, why not cache everything in memory so responses are lightning fast? Note that this is caching something that would otherwise be slow to fetch.
To prevent the GC from fruitlessly scanning the 10 GB cache, the cache must live outside the GC's control. For example, use memcached, which lives in another process and has its own cache-optimized memory management.
The latest news is Terracotta's BigMemory, a pure-Java solution that does a similar thing by keeping the cache off-heap.
An example of thread-local pooling is Sun's direct ByteBuffer pooling. When we call
channel.read(byteBuffer)
if byteBuffer is not "direct", a "direct" one must be allocated under the hood and used to exchange data with the OS. In a network application, such allocations can be very frequent, and it seems wasteful to discard a just-allocated buffer and immediately allocate another one in the next statement. Sun's engineers, who apparently don't trust the GC that much, created a thread-local pool of "direct" ByteBuffers.
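A simplified sketch of the same idea, loosely modeled on what the JDK does internally (the real logic lives in sun.nio.ch.Util; this version is made up and cut down):

import java.nio.ByteBuffer;

// Per-thread direct-buffer reuse: each thread keeps one direct buffer and
// grows it on demand. Nothing is shared, so no synchronization is needed.
final class TemporaryDirectBuffers {
    private static final ThreadLocal<ByteBuffer> CACHE =
            ThreadLocal.withInitial(() -> ByteBuffer.allocateDirect(64 * 1024));

    static ByteBuffer get(int minCapacity) {
        ByteBuffer buf = CACHE.get();
        if (buf.capacity() < minCapacity) {   // cached buffer too small: replace it
            buf = ByteBuffer.allocateDirect(minCapacity);
            CACHE.set(buf);
        }
        buf.clear();                          // reset position/limit for reuse
        return buf;
    }
}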
In Java 1.4, object allocation was relatively expensive, so object pools even for simple objects could help. In Java 5.0, object allocation was significantly improved; however, synchronization still had a way to go, meaning that object allocation was faster than synchronization, i.e. removing object pools improved performance in many cases. In Java 6, synchronization has improved to the point where an object pool can make a little difference to performance in simple cases.
Avoiding simple object pools is a good idea because it is simpler, not for performance reasons.
For more complex/larger objects, object pools can be useful in Java 6 even if you use synchronization, e.g. for a socket, file stream, or database connection.
I think your case is a reasonable situation in which to use pooling. There is no evil in pooling; Goetz means that you should not use it when it is not necessary. Another example is connection pooling, because the creation of a connection is very expensive.
If it is thread-local, it's very likely you may not even need pooling. Of course it depends on the use cases, but the chances are that on a given thread you will only need one object of that type at a given time.
The caveat with thread-locals, however, is memory management. Note that thread-local values don't go away easily until the thread that owns them goes away. Therefore, if you have a large number of threads and a large number of thread-locals, they may contribute to used memory quite a bit.
I'd definitely try it out. Although it is now "common knowledge" that one should not care about object creation, in fact there may be a lot of performance to be gained from using object pools for specific classes. For a file-processing framework, I gained 5% read performance from pooling Object[] arrays.
So try it out and time your executions to see if you gain anything.
Even though it's an old question, point 2 ("threads requesting objects from a pool require costly synchronization") does not completely hold true.
It's possible to write a concurrent (synchronization-free) object pool that doesn't even exhibit sharing (not even false sharing) on the fast path. In the simplest case, of course, each thread might have its own pool (more like an associated object), but then such a greedy approach can lead to resource waste (or starvation/errors if the resource cannot be allocated).
Pools are good for heavy objects like ByteBuffers (especially direct ones), connections, sockets, threads, etc. Overall, they suit any objects that require non-Java intervention.
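For a shared (non-thread-local) variant, here is a sketch built on the JDK's non-blocking queue. It avoids locks, although, unlike the scheme described above, threads still share the queue's head and tail, so it is not entirely sharing-free:

import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.function.Supplier;

// Shared pool without locks: ConcurrentLinkedQueue is a non-blocking
// (CAS-based) structure, so threads never block on a monitor.
final class LockFreePool<T> {
    private final ConcurrentLinkedQueue<T> queue = new ConcurrentLinkedQueue<>();
    private final Supplier<T> factory;

    LockFreePool(Supplier<T> factory) { this.factory = factory; }

    T acquire() {
        T obj = queue.poll();                     // null if the pool is empty
        return (obj != null) ? obj : factory.get();
    }

    void release(T obj) { queue.offer(obj); }
}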