Java Concurrent Dixon's Algorithm

I have endeavored to implement Dixon's algorithm concurrently, with poor results. For small numbers (under roughly 40 bits) it takes about twice as long as other implementations in my class, and beyond roughly 40 bits it takes far longer.
I've done everything I can think of, but I fear it has some fatal flaw that I can't find.
My code (fairly lengthy) is located here. Ideally the algorithm would run faster than the non-concurrent implementations.

Why would you think it would be faster? Spinning up a thread and adding synchronized calls are HUGE time sinks. If you can't avoid the synchronized keyword, I highly recommend a single-threaded solution.
You may be able to avoid it in various ways: for instance, by ensuring that a given variable is only ever written by one thread even if read by others, or by acting like a functional language, making all your variables final and using recursion for variable storage (iffy; it's hard to imagine this speeding anything up).
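The single-writer idea mentioned above can be sketched as follows; the class and field names are hypothetical, not taken from the original code. A volatile field gives readers visibility, and correctness follows from only one thread ever writing it:

```java
// Single-writer pattern: no synchronized needed because only one thread
// ever writes the field; volatile makes the writes visible to readers.
class SingleWriterCounter {
    private volatile long relationsFound = 0; // written by one thread only

    // Called only from the single producer thread.
    void increment() {
        relationsFound = relationsFound + 1; // safe: there is no other writer
    }

    // May be called from any thread.
    long current() {
        return relationsFound;
    }
}
```

Note that this only stays correct as long as the write path really is confined to one thread; a second writer would reintroduce lost updates.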
If you really need to be fast, however, I did find some very counter-intuitive things out recently from my own attempt at finding a speedy solution...
Static methods didn't help over actual class instances.
Breaking the code down into smaller classes and methods actually INCREASED speed.
Final methods helped more than I expected they would.
At one point I noticed that adding a method call actually sped things up.
Don't stress over one-time class allocations or data allocations but avoid allocating objects in loops (This one is obvious but I think it's the most critical)
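The last point can be illustrated with a hedged sketch (the method names are invented here): the slow version allocates a fresh object on every iteration, while the fast one hoists a single allocation out of the loop:

```java
class LoopAllocation {
    // Anti-pattern: a new StringBuilder is allocated on every iteration.
    static String joinSlow(String[] parts) {
        String result = "";
        for (String p : parts) {
            result = new StringBuilder(result).append(p).toString();
        }
        return result;
    }

    // One allocation before the loop, reused throughout.
    static String joinFast(String[] parts) {
        StringBuilder sb = new StringBuilder();
        for (String p : parts) {
            sb.append(p);
        }
        return sb.toString();
    }
}
```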
What I've been able to intuit is that the compiler is extremely smart at optimizing and is tuned to optimize "ideal" Java code. Static methods are nowhere near ideal; they are something of a counter-pattern, one of the worst.
I suggest you write the clearest, best OO code you can that actually runs correctly as a reference--then time it and start attempting tweaks to speed it up.

Related

Saving commonly called properties in variables, in Java?

When I was learning C, I was taught that if I wanted to loop through something strlen(string) times, I should save that value in an 'auxiliary' variable, say count, and put that in the for condition clause instead of the strlen call, to avoid recomputing it on every iteration.
Now that I've started learning Java, I've noticed this is not the norm. I've seen lots of code and programmers doing exactly what I was told not to do in C.
What's the reason for this? Are they trading efficiency for readability? Or does the compiler manage to fix that?
Edit: This is NOT a duplicate of the linked question. I'm not simply asking about string length; mine is a more general question.
In the old days, every function call was expensive, compilers were dumb, usable profilers were yet to come, and computers were slow. That is how C macros and other terrible things were born. Java is not that old.
Efficiency is important, but the impact of most program parts on efficiency is very small. But reading code still needs programmers time and this is much more costly than CPU. So we'd better optimize for readability most of the time and care about speed just in the most important places.
A local variable can make the code simpler, when it avoids repetitions of complicated expressions - this happens sometimes. It can make it faster, when it avoids expensive computation which the compiler can't do - this happens rather rarely. When neither condition is met, it's just a wasted line, so why bother?
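As a sketch of the "expensive computation" case (names invented for illustration): the bound is computed once into a local, and the loop condition then reads only that local:

```java
import java.util.List;

class HoistExample {
    // The loop bound is computed once into a local variable, so the loop
    // condition is a plain compare instead of a repeated computation.
    static int totalLength(List<String> items, int limit) {
        int n = Math.min(items.size(), limit); // cached in a local on purpose
        int total = 0;
        for (int i = 0; i < n; i++) {
            total += items.get(i).length();
        }
        return total;
    }
}
```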

Can method extraction negatively impact code performance?

Assume you have a quite long method with around 200 lines of very time-sensitive code. Is it possible that extracting some parts of the code into separate methods will slow down execution?
Most probably, you'll get a speedup. The problem is that optimizing a 200-line beast is hard; in fact, HotSpot gives up when a method is too long. I once achieved a speedup factor of 2 simply by splitting a long method.
Short methods are fine, and they'll be inlined as needed, so the method-call overhead gets minimized. Through inlining, HotSpot may re-create your original method (improbable, given its excessive length) or create multiple methods, some of which may contain code not present in the original method.
The answer is "yes, it may get slower." The problem is that the chosen inlining may be suboptimal. However, it's very improbable, and I'd expect a speedup instead.
The overhead is negligible, the compiler will inline the methods to execute them.
EDIT:
similar question
I don't think so.
Yes, some calls and stack frames would be added, but that doesn't cost much time, and depending on your compiler it might even optimize the code so that there is basically no difference between the version with one method and the one with many.
The loss of readability and reusability you would incur by implementing everything in one method is definitely not worth the performance increase (if one exists at all).
It is important that the factored-out methods be declared either private or final. The just-in-time compiler in the JVM will then inline everything, which means a single big method will be executed as a result.
However, always benchmark your code when modifying it.
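A hedged sketch of the idea (the method names here are invented): the hot inner step is factored into a small private static method that HotSpot can easily inline:

```java
class Extracted {
    // The long original method would mix everything inline; here the hot
    // inner step lives in a small private method instead.
    static long sumOfSquares(int[] data) {
        long total = 0;
        for (int value : data) {
            total += square(value);
        }
        return total;
    }

    // Small and private: a prime candidate for JIT inlining.
    private static long square(int value) {
        return (long) value * value;
    }
}
```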

Downsides of structuring all multi-threading CSP-like

Disclaimer: I don't know much about the theoretical background of CSP.
Since I read about it, I tend to structure most of my multi-threading "CSP-like", meaning I have threads waiting for jobs on a BlockingQueue.
This works very well and simplified my thinking about threading a lot.
What are the downsides of this approach?
Can you think of situations where I'm performance-wise better off with a synchronized block?
...or Atomics?
If I have many threads mostly sleeping/waiting, is there some kind of performance impact, except the memory they use? For example during scheduling?
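The "CSP-like" structure described (a thread blocking on a BlockingQueue for jobs) might look roughly like this sketch; all names and the shutdown-by-interrupt choice are illustrative assumptions:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.atomic.AtomicInteger;

class CspWorker {
    // A single worker thread blocks on the queue and runs jobs as they arrive.
    static int runJobs(int jobCount) {
        BlockingQueue<Runnable> jobs = new ArrayBlockingQueue<>(16);
        AtomicInteger done = new AtomicInteger();
        Thread worker = new Thread(() -> {
            try {
                while (true) {
                    jobs.take().run(); // blocks until a job is available
                }
            } catch (InterruptedException e) {
                // interrupt is used as the shutdown signal in this sketch
            }
        });
        worker.start();
        try {
            for (int i = 0; i < jobCount; i++) {
                jobs.put(done::incrementAndGet);
            }
            while (done.get() < jobCount) {
                Thread.sleep(1); // crude wait for the worker to drain the queue
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        worker.interrupt();
        try {
            worker.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return done.get();
    }
}
```

In practice an ExecutorService wraps essentially this pattern and also handles shutdown for you.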
This is one possible way of designing the architecture of your code to prevent threading issues from ever happening; it is, however, not the only one, and sometimes not the best one.
First of all, you obviously need a series of tasks that can be split up and put into such a queue, which is not always the case: for example, if you have to calculate the result of a single yet very demanding formula, which simply cannot be taken apart to exploit multi-threading.
Then there is the issue of the task at hand being so tiny that creating the task and adding it to the queue is already more expensive than the task itself. Example: you need to set a boolean flag on many objects to true. Splittable, but the operation itself is not complex enough to justify a new Runnable for each boolean.
You can of course come up with workarounds for this sometimes; for example, the second case could be made reasonable for your approach by having each task set 100 flags per execution, but then this is only a workaround.
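The batching workaround from the previous paragraph can be sketched like this (pool size and batch size are arbitrary choices for illustration): each task sets a whole disjoint range of flags instead of one flag per Runnable:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

class BatchedFlags {
    static boolean[] setAll(int size, int batchSize) {
        boolean[] flags = new boolean[size];
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int start = 0; start < size; start += batchSize) {
            final int from = start;
            final int to = Math.min(start + batchSize, size);
            // Each task handles a disjoint range, so no two tasks touch the
            // same index and no per-flag synchronization is needed.
            pool.execute(() -> {
                for (int i = from; i < to; i++) {
                    flags[i] = true;
                }
            });
        }
        pool.shutdown();
        try {
            pool.awaitTermination(10, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return flags;
    }
}
```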
You should see these threading ideas for what they are: tools to help you solve your problem. The concurrency framework and the patterns that use it are altogether nothing but a big toolbox, and each time you have a task at hand you need to select one tool from that box, because in the end driving in a screw with a hammer is possible, but probably not the best solution.
My recommendation for getting more familiar with the tools: each time you have a problem that involves threading, go through the tools, select the one you think fits best, then experiment with it until you are satisfied that this specific tool fits the specific task best. Prototyping is, after all, another tool in the box. ;)
What are the downsides of this approach?
Not many. A queue may require more overhead than an uncontended lock: a lock of some sort is required internally by the queue class to protect it from concurrent access. Compared with the advantages of thread pooling and queued communication in general, some extra overhead does not bother me much.
better off with a synchronized block?
Well, if you absolutely MUST share mutable data between threads :(
is there some kind of performance impact,
Not so that anyone would notice. A not-ready thread is, effectively, an extra pointer entry in some container in the kernel (e.g. a queue belonging to a semaphore). Not worth worrying about.
You need synchronized blocks, Atomics, and volatiles whenever two or more threads access mutable data. Keep this to a minimum and it needn't affect your design. There are lots of Java API classes that can handle this for you, such as BlockingQueue.
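For the simplest kind of shared mutable data, a counter, an Atomic is often enough; this is a minimal sketch with invented names, not code from the question:

```java
import java.util.concurrent.atomic.AtomicInteger;

class SharedCounter {
    // Two threads update the same counter; the Atomic replaces a
    // synchronized block for this simple read-modify-write case.
    static int countFromTwoThreads(int perThread) {
        AtomicInteger hits = new AtomicInteger();
        Runnable task = () -> {
            for (int i = 0; i < perThread; i++) {
                hits.incrementAndGet(); // atomic increment, no lock
            }
        };
        Thread a = new Thread(task);
        Thread b = new Thread(task);
        a.start();
        b.start();
        try {
            a.join();
            b.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return hits.get();
    }
}
```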
However, you could get into trouble if the nature of your problem/solution is perverse enough. If your threads try to read/modify the same data at the same time, you'll find that most of your threads are waiting for locks and most of your cores are doing nothing. To improve response time you'll have to let a lot more threads run, perhaps forgetting about the queue and letting them all go.
It becomes a trade-off. More threads chew up a lot of CPU time, which is okay if you've got it, and improve response time. Fewer threads use less CPU time for a given amount of work (but what will you do with the savings?) and slow your response time.
Key point: In this case you need a lot more running threads than you have cores to keep all your cores busy.
This sort of programming (multithreaded as opposed to parallel) is difficult and prone to irreproducible bugs, so you want to avoid it if you can before you even start to think about performance. Plus, it only helps noticeably if you've got more than 2 free cores. And it's only needed for certain sorts of problems. But you did ask for downsides, and it might pay to know this is out there.

Java call type performance

I put together a microbenchmark that seemed to show that the following types of calls took roughly the same amount of time across many iterations after warmup.
static.method(arg);
static.finalAnonInnerClassInstance.apply(arg);
static.modifiedNonFinalAnonInnerClassInstance.apply(arg);
Has anyone found evidence that these different types of calls will, in the aggregate, have different performance characteristics? My finding is that they don't, but I found that a little surprising (especially knowing the bytecode is quite different, at least for the static call), so I want to see whether others have evidence either way.
If they indeed had the same exact performance, then that would mean there was no penalty to having that level of indirection in the modified non final case.
I know the standard optimization advice is "write your code and profile it", but I'm writing a framework code-generation kind of thing, so there is no specific code to profile, and the choice between static and non-final is fairly important for both flexibility and possibly performance. I am using framework code in the microbenchmark, which is why I can't include it here.
My test was run on Windows JDK 1.7.0_06.
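Since the actual framework code couldn't be shared, the three call shapes might be reconstructed roughly like this; the Op interface and the doubling bodies are placeholders:

```java
class CallShapes {
    interface Op {
        int apply(int x);
    }

    // Shape 1: a plain static call.
    static int twice(int x) {
        return x * 2;
    }

    // Shape 2: a final field holding an anonymous inner class instance.
    static final Op finalInstance = new Op() {
        public int apply(int x) { return x * 2; }
    };

    // Shape 3: a non-final field, so each call involves a field load that
    // the JIT cannot simply treat as a constant.
    static Op mutableInstance = new Op() {
        public int apply(int x) { return x * 2; }
    };
}
```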
If you benchmark it in a tight loop, the JVM will cache the instance, so there's no apparent difference.
If the code is executed in a real application:
if it's expected to be executed back-to-back very quickly, for example String.length() used in for(int i=0; i<str.length(); i++){ short_code; }, the JVM will optimize it; no worries.
if it's executed frequently enough that the instance is most likely in the CPU's L1 cache, the extra load of the instance is very fast; no worries.
otherwise, there is a non-trivial overhead, but the code is executed so infrequently that the overhead is almost impossible to detect in the overall cost of the application; no worries.

How expensive is Java Locking?

In general, how expensive is locking in Java?
Specifically in my case: I have a multi-threaded app in which there is one main loop that takes objects off a DelayQueue and processes them (using poll()). At some point a different thread will have to remove errant elements from the queue (using remove()).
Given that the remove() is relatively uncommon, I am worried that locking on each poll() will result in slow code. Are my worries justified?
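The setup described might look like the following sketch; the Job class and its fields are hypothetical stand-ins for the real queue elements:

```java
import java.util.concurrent.DelayQueue;
import java.util.concurrent.Delayed;
import java.util.concurrent.TimeUnit;

class QueueSketch {
    // Minimal Delayed element; the 'name' field is purely illustrative.
    static final class Job implements Delayed {
        final String name;
        final long readyAtNanos;

        Job(String name, long delayMillis) {
            this.name = name;
            this.readyAtNanos = System.nanoTime()
                    + TimeUnit.MILLISECONDS.toNanos(delayMillis);
        }

        public long getDelay(TimeUnit unit) {
            return unit.convert(readyAtNanos - System.nanoTime(),
                                TimeUnit.NANOSECONDS);
        }

        public int compareTo(Delayed other) {
            return Long.compare(getDelay(TimeUnit.NANOSECONDS),
                                other.getDelay(TimeUnit.NANOSECONDS));
        }
    }

    // The main loop polls; another thread occasionally calls remove().
    static String drainOne() {
        DelayQueue<Job> queue = new DelayQueue<>();
        Job errant = new Job("errant", 0);
        queue.put(errant);
        queue.put(new Job("good", 0));
        queue.remove(errant); // the uncommon removal path from the question
        try {
            Job next = queue.poll(1, TimeUnit.SECONDS); // the main loop's poll()
            return next == null ? null : next.name;
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return null;
        }
    }
}
```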
They are not justified unless you profile your app and find that this is a bottleneck.
Generally speaking, uncontended locking (i.e. locks that don't have to wait for someone to release them most of the time) has become a lot cheaper with changes made in Java 5 and Java 6.
Implement it safe and simple and profile if it's fast enough.
Have you taken some measurements and found that locking is too slow? No? Then it isn’t.
Honestly, though: too many people worry about too many irrelevant things. Get your code working before you worry about things like whether “++i” is faster than “i++” or similar stuff.
