Inside my Java program I need a control structure that gives three different outcomes for three different inputs.
Input 1: Text 1
Input 2: Text 2
Input 3: Text 3
My question here is: best-practice-wise and efficiency-wise, which control structure should be used? My first thought was a switch-case, but why would I choose that over if-structures or nested ? operators?
I think you'll find it generally agreed that the switch statement is mildly preferred in this situation, purely on the basis of readability. It scales a lot better if additional cases are added and even with three options, it's still a bit more readable, particularly with the cases being simply three variations of one input. Performance differences are negligible but there are surely discussions out there if you really want to get into that specific aspect.
I'd suggest avoiding the ternary operator (i.e., inline if/'?') for any more than two cases for similar reasons of readability. Personally, I don't parse it as well and I avoid it unless all expressions involved are very brief.
Mostly off-topic, but interestingly, switching on Strings wasn't added to Java until Java 7.
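For illustration, a minimal sketch of the switch-on-String approach (the method name getText is invented; the input/output strings are taken from the question):

// A sketch of the switch approach; requires Java 7+ for switching on Strings.
public static String getText(String input) {
    switch (input) {
        case "Input 1":
            return "Text 1";
        case "Input 2":
            return "Text 2";
        case "Input 3":
            return "Text 3";
        default:
            // always handle unexpected input explicitly
            throw new IllegalArgumentException("Unknown input: " + input);
    }
}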
You should strive for good readability first, then for efficiency if required. If there are many options, use switch; if there are only a few, use if/else.
Maybe a nested ternary operator? Just kidding. I think efficiency here is going to be almost identical whatever structure is used. I would go for the if/else because I think it is more readable than a switch (and less error-prone: don't forget the breaks), but it's just an opinion.
Some information (don't want to confuse you with a lot of shitty code):
I've written a pretty large console program (my largest project so far) which helps me a lot with managing some accounts / assets and more. I'm constantly adding more features, but at the same time I reshape the code to improve on my shitty coding style.
The console program has a lot of commands the user can type and for every command different methods get called / objects get created / manipulated and so on.
My keywords are saved in an ArrayList<String>, and my commands have this form: [keyword] [...n more Strings]
DESIGN PROBLEM 1:
I have a method cmdProcessor(String[] arguments) which handles the input (command) of the user, and the [keyword] is always the first argument arguments[0]. That means I have a large number of if-statements of this type:
if (arguments[0].equalsIgnoreCase("keyword")) callMethod(argmts); where the String[] argmts holds the remaining arguments[1] ... [n].
Is this a good way to handle this or should I go with switch-case?
Or something else (what?)? Is it better to save the keywords in a HashMap<String, Method>?
DESIGN PROBLEM 2:
The methods (see callMethod(argmts) above) which are triggered by the entered keyword look even more chaotic. Since the same method can receive different numbers and forms of arguments in the String[] argmts, the method is full of if (argmts.length == ...) checks, and each of these if-blocks contains a bunch of switch-case options which in turn contain a lot of ifs, and so on. The last else and the default case I always use for error handling (throwing error codes and an explanation of why the pattern doesn't match, and so on).
Is this good or are there better ways?
I thought about using lots of submethods, which would also blow up my program and cost a lot of time, but might improve readability and overview. Is this okay, or what is the best option in such cases (lots of ifs and switch-cases)?
Since I want to build more and more around this program maybe I should start now to fix bad design before it's too late. :)
About Design-Problem 1:
My go-to would be to register a set of handlers, based on a common interface, each implementing the specific behavior individually. This is good because the central method handling your input stays slim, and you only need to register the singletons once, on initialization. Disadvantage: if you forget one, it will not work. So maybe you can register them automatically (via reflection or something similar).
Aside from that, a map is better than a list in this case, because (I assume) you don't need an ordering. You need a mapping from key to behavior, so a map seems the better fit (though even a very large set of keywords would probably not be terribly inefficient if you stick with a list).
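A rough sketch of the handler idea; CommandHandler, CommandProcessor, and the registration method are invented names for illustration, not an existing API:

import java.util.HashMap;
import java.util.Map;

// Common interface every command implements.
interface CommandHandler {
    void handle(String[] args); // args = arguments[1..n], without the keyword
}

class CommandProcessor {
    private final Map<String, CommandHandler> handlers = new HashMap<>();

    // Register each handler once, on initialization, e.g.
    // processor.register("add", new AddHandler()); (AddHandler is hypothetical)
    void register(String keyword, CommandHandler handler) {
        handlers.put(keyword.toLowerCase(), handler);
    }

    // The central dispatch shrinks to a single map lookup.
    void process(String[] arguments) {
        CommandHandler handler = handlers.get(arguments[0].toLowerCase());
        if (handler == null) {
            System.err.println("Unknown command: " + arguments[0]);
            return;
        }
        String[] argmts = new String[arguments.length - 1];
        System.arraycopy(arguments, 1, argmts, 0, argmts.length);
        handler.handle(argmts);
    }
}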
About Design Problem 2:
If I were you, I'd use actual regular-expression patterns. Take a look at the java.util.regex.Pattern class. You can isolate groups and validate the values you receive. Though it does not spare you the exception/error handling, it helps a lot with segmentation and interpretation.
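For example, a small sketch with Pattern; the "add <account> <amount>" command syntax is invented for illustration:

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class CmdParse {
    // Hypothetical command: "add <account> <amount>"
    private static final Pattern ADD_CMD =
        Pattern.compile("add\\s+(\\w+)\\s+(\\d+(?:\\.\\d+)?)", Pattern.CASE_INSENSITIVE);

    public static void main(String[] args) {
        Matcher m = ADD_CMD.matcher("add savings 42.50");
        if (m.matches()) {
            String account = m.group(1);                    // isolated group: "savings"
            double amount = Double.parseDouble(m.group(2)); // validated number: 42.5
            System.out.println(account + " += " + amount);
        } else {
            // pattern doesn't match: your error codes / explanations go here
            System.err.println("Input does not match the 'add' pattern");
        }
    }
}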
We developed an API call which uses Java 8 parallel streams, and we got very good performance in stress tests, almost double compared to sequential processing.
I know it depends on the use case, but I am using it for crypto operations, so I assume that this is a good use case.
However, I have read many articles that encourage being very careful with them. There are also articles arguing that they are not very well designed internally, like the one here.
Thus: are parallel streams production-ready, and are they widely used in production systems?
This question invites "opinions", but I'll try to answer fact-based.
Fork/Join
These classes aren't new! As you can see, they were introduced back in Java 1.7. In other words: these classes have been around for several years now and are used in many places. Thus: low risk.
Parallel Streams
Were added "just recently" in Java terms (keep in mind how much legacy Java carries around in 2017, and how slowly [compared to other languages] Java evolves). I think the simple answer here is: we don't know yet whether parallel streams will become a "cornerstone" of Java programming, or whether people will at some point prefer other ways to solve the problems addressed by parallel streams.
Beyond that: users of other languages (such as JavaScript) are used to "changing gears" (i.e., frameworks) on an almost "monthly" basis. That means a lot of churn, but it also means that "good things" get adopted quickly; as in: why postpone improving things?!
What I mean by that: when you find that parallel streams help you to improve performance; and when your team agrees "yes, we can deal with the stream()-way of writing code" ... then just go forward.
In other words: when parallel streams help your team/product to "get better", then why not try to capitalize on that? Now, not in 12 or 24 months.
If streams turn out not to be "that great big thing", then, well, maybe you have to rewrite some code at some point in the future.
Long story short: this is about balancing potential risks against potential gains. It seems you have already had some positive experiences, so I think a reasonable compromise would be: apply streams, but in a controlled way, so that a later decision of "wrong turn, get rid of them" doesn't become too expensive.
Since I wrote the article you linked to, I should say a few words.
As others have said, try it and see. Parallel streams want to split the work into a balanced tree: split left, right, left, right. If you achieve that, performance is good. If not, performance is terrible.
The framework uses dyadic recursive division. Streams are linear. That is not a good match. And never forget that volume changes everything: adding scale to the mix may surprise you, but you won't know until you try it.
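To make the splitting point concrete, a hedged sketch (the workload is a stand-in; measure on your own data): a source like IntStream.range splits into balanced halves, while Stream.iterate must generate elements sequentially before they can be handed to worker threads.

import java.util.stream.IntStream;
import java.util.stream.Stream;

public class SplitDemo {
    public static void main(String[] args) {
        // IntStream.range splits into balanced halves: parallelizes well.
        long good = IntStream.range(0, 1_000_000)
                .parallel()
                .mapToLong(SplitDemo::expensiveOp)
                .sum();

        // Stream.iterate cannot be split evenly up front: expect poor speedup.
        long bad = Stream.iterate(0, i -> i + 1)
                .limit(1_000_000)
                .parallel()
                .mapToLong(SplitDemo::expensiveOp)
                .sum();

        System.out.println(good + " " + bad);
    }

    // Stand-in for a CPU-bound operation (e.g. a crypto primitive).
    static long expensiveOp(int i) {
        return (long) i * i;
    }
}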
Let us know how it works out.
I understand the JVM optimizes some things for you (I'm not clear on which things yet), but let's say I were to do this:
while (true) {
    int var = 0;
}
would doing:
int var;
while (true) {
    var = 0;
}
take less space? Since you aren't declaring a new variable every time, you don't have to specify the type every time.
I understand I would really only need to put var outside the while if I wanted to use it outside of that loop (instead of only being able to use it locally, as in the first example). Also, what about objects: would it be different with them than with primitive types in that situation? I understand it's a small case, but a build-up of this kind of thing could cause my application to take a lot of memory/CPU. I'm trying to use the fewest operations possible, but I don't completely understand what's going on behind the scenes.
If someone could help me out, or even link me to somewhere I can learn about saving CPU by reducing the number of operations, it would be highly appreciated. Please no books (unless they're free! :D), no way of getting one right now /:
Don't. Premature optimization is the root of all evil.
Instead, write your code as it makes most sense conceptually. Write it thoughtfully, yes. But don't think you can be a 'human compiler' and optimize and still write good code.
Once you have written your code (more or less naively, depending on your level of experience) you write performance tests for it. Try to think of different ways in which the code may be used (many times in a row, from front to back or reversed, many concurrent invocations etc) and try to cover these in test cases. Then benchmark your code.
If you find that some test cases are not performing well, investigate why. Measure parts of the test case to see where the time is going. Zoom into the parts where most time is spent.
Mostly, you will find weird loops where, upon reading the code again, you will think 'that was silly to write it that way. Of course this is slow' and easily fix it. In my experience most performance problems can be solved this way and 'hardcore optimization' is hardly ever needed.
In the end you will find that 99* percent of all performance problems can be solved by touching only 1 percent of the code. The other code never comes into play. This is why you should not 'prematurely' optimize. You will be spending valuable time optimizing code that had no performance issues in the first place. And making it less readable in the process.
Numbers made up of course but you know what I mean :)
Hot Licks points out that this isn't much of an answer, so let me expand on it with some good ol' performance tips:
Keep an eye out for I/O
Most performance problems are not in pure Java. Instead, they are in interfacing with other systems. In particular, disk access is notoriously slow; so is the network. So minimize their use.
Optimize SQL queries
SQL queries can add seconds, even minutes, to your program's execution time if you don't watch out. So think about those very carefully. Again, benchmark them. You can write very optimized Java code, but if it first spends ten seconds waiting for the database to run some monster SQL query, then it will never be fast.
Use the right kind of collections
Most performance problems are related to doing things lots of times, usually when working with big sets of data. Putting your data in a Map instead of in a List can make a huge difference. Also, there are specialized collection types for all sorts of performance requirements. Study them and pick wisely.
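As a small illustration (the data set is invented): a List scans linearly on lookup, while a hash-based collection finds an element in constant time on average.

import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class LookupDemo {
    public static void main(String[] args) {
        List<String> list = new ArrayList<>();
        Set<String> set = new HashSet<>();
        for (int i = 0; i < 1_000_000; i++) {
            String id = "user-" + i;
            list.add(id);
            set.add(id);
        }
        // O(n): up to a million comparisons per lookup.
        boolean inList = list.contains("user-999999");
        // O(1) on average: a single hash lookup.
        boolean inSet = set.contains("user-999999");
        System.out.println(inList + " " + inSet);
    }
}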
Don't write code
When performance really matters, squeezing the last 'drops' out of some piece of code becomes a science all in itself. Unless you are writing some very exotic code, chances are great there will be some library or toolkit to solve your kind of problems. It will be used by many in the real world. Tried and tested. Don't try to beat that code. Use it.
We humble Java developers are end-users of code. We take the building blocks that the language and its ecosystem provide and tie them together to form an application. For the most part, performance problems are caused by us not using the provided tools correctly, or not using any tools at all for that matter. But we really need specifics to be able to discuss those. Benchmarking gives you that specificity. And when the slow code is identified, it is usually just a matter of changing a collection from list to map, or sorting it beforehand, or dropping a join from some query, etc.
Attempting to optimise code which doesn't need to be optimised increases complexity and decreases readability.
However, there are cases where improving readability also comes with improved performance.
For example,
if a numeric value cannot be null, use a primitive instead of a wrapper. This makes it clearer that the value cannot be null but also uses less memory and reduces pressure on the GC.
use a Set when you have a collection which cannot have duplicates. Often a List is used where a Set would in fact be more appropriate; depending on the operations you perform, the Set can also be faster by reducing time complexity.
consider using an enum with one instance for a singleton (if you have to use singletons at all). This is much simpler, as well as faster, than double-checked locking. Hint: try to only have stateless singletons.
writing simpler, well-structured code is also easier for the JIT to optimise. Trying to outsmart the JIT with more complex solutions tends to backfire: you end up confusing the JIT, and what you think should be faster is actually slower (and more complicated as well).
try to reduce how much you write to the console (and I/O in general) in critical sections. Writing to the console is so expensive, both for the program and for the poor human having to read it, that it is worth spending extra time producing concise console output.
try to use a StringBuilder when you have a loop of elements to append. Note: avoid using StringBuilder for one-liners that are just a series of append() calls, as this can actually be slower and harder to read.
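A small sketch of that last point (the word list is invented):

import java.util.Arrays;
import java.util.List;

public class BuilderDemo {
    public static void main(String[] args) {
        List<String> words = Arrays.asList("alpha", "beta", "gamma");

        // In a loop, '+' would allocate a new String per iteration;
        // a StringBuilder appends into a single buffer instead.
        StringBuilder sb = new StringBuilder();
        for (String w : words) {
            sb.append(w).append(',');
        }
        System.out.println(sb);

        // For a one-liner, plain concatenation is clearer (and the compiler
        // already uses a StringBuilder under the hood).
        String oneLiner = "x=" + 1 + ", y=" + 2;
        System.out.println(oneLiner);
    }
}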
Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away.
-- Antoine de Saint-Exupéry, French writer (1900-1944)
Developers like to solve hard problems, and there is a very strong temptation to solve problems which don't need to be solved. This is very common behaviour for developers with up to 10 years of experience (it was for me, anyway ;). After about this point you have already solved most common problems before, and you start selecting the best/minimum set of solutions which will solve a problem. This is the point you want to get to in your career, and you will be able to develop quality software in far less time than you could before.
If you dream up an interesting problem to solve, go ahead and solve it in your own time, see what difference it makes, but don't include it in your working code unless you know (because you measured) that it really makes a difference.
However, if you find a simpler, more elegant solution to a problem, it is worth including, not because it might be faster (though it might be), but because it should make the code easier to understand and maintain, and that is usually a far more valuable use of your time. Successfully used software usually costs three times as much to maintain as it cost to develop. Do what will make life easier for the poor person who has to understand why you did something (which is harder if you didn't do it for any good reason in the first place), as this might be you one day ;)
A good example of when you might make an application slower to improve reasoning is the use of immutable values with concurrency. Immutable values are usually slower than mutable ones, sometimes much slower, but when used with concurrency, mutable state is very hard to get provably right, and provably right is what you need, because testing concurrent code is good but not reliable. With concurrency you have much more CPU to burn, so the extra cost of immutable objects is a very sensible trade-off. In some cases, using immutable objects allows you to avoid locks and actually improve throughput, e.g. CopyOnWriteArrayList, if you have a high read-to-write ratio.
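A minimal sketch of that trade-off (the listener scenario is invented): CopyOnWriteArrayList copies the whole backing array on every write, but reads are lock-free, which pays off when reads vastly outnumber writes.

import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class ListenerDemo {
    // Writes (register) are rare; reads (fire) happen constantly.
    private final List<Runnable> listeners = new CopyOnWriteArrayList<>();

    public void register(Runnable r) {
        listeners.add(r); // copies the backing array: expensive, but rare
    }

    public void fire() {
        // Iterates over an immutable snapshot: no locks, and no
        // ConcurrentModificationException even if a listener registers another.
        for (Runnable r : listeners) {
            r.run();
        }
    }
}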
Example: a simple program for swapping two numbers.
int a = 10;
int b = 20;
a = a + b; // a = 30
b = a - b; // b = 30 - 20 = 10 (the old a)
a = a - b; // a = 30 - 10 = 20 (the old b)
Now in the following piece of code:
a=a+b-(b=a);
I mean, what is the difference between these two pieces of code?
Addendum: what if the sum of the two values exceeds the limit of an integer (which differs between Java and C++)?
Neither of these looks good to me. Readability is key. If you want to swap values, the most "obvious" way to do it is via a temporary value:
int a = 10;
int b = 20;
int tmp = a;
a = b;
b = tmp;
I neither know nor would I usually care whether this was as efficient as the "clever" approaches involving arithmetic. Until someone proves that the difference in performance is significant within a real application, I'd aim for the simplest possible code that works. Not just here, but for all code. Decide how well you need it to perform (and in what dimensions), test it, and change it to be more complicated but efficient if you need to.
(Of course, if you've got a swap operation available within your platform, use that instead... even clearer.)
In C++, the code yields undefined behavior because there's no sequence point in a+b-(b=a): you're both modifying b and reading from it in the same expression.
You're better off using std::swap(a,b), it is optimized for speed and much more readable than what you have there.
Since your specific code has already been commented upon, I would just add a general point. Writing one-liners doesn't really matter, because at the instruction level you cannot escape the number of steps your code translates into in machine code. Most compilers will already optimize accordingly.
That is, unless the one-liner actually uses a different mechanism to achieve the goal. For example, when swapping two variables, if you avoid the third variable as well as hurdles such as type overflow by using bitwise operators, then you might save one memory location and thereby the access time to it.
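For completeness, a sketch of the bitwise variant alluded to above; it avoids both the third variable and the overflow issue, though a temporary variable remains the readable choice:

int a = 10;
int b = 20;
// XOR swap: no third variable, no overflow. Caution: this breaks if both
// operands refer to the same variable (they would XOR to zero).
a ^= b;
b ^= a;
a ^= b;
// a is now 20, b is now 10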
In practice, this is of almost no value and hurts readability, as already mentioned in other answers. Professional programs need to be maintained by people, so they should be easy to understand.
One definition of good code is: "Code actually does what it appears to be doing."
Even you yourself would find it hard to fix your own code if it is written cleverly in terms of somewhat shortened but complex operations. Readability should always be prioritized, and most of the time the efficiency you actually need comes from improving the design, the approach, or the data structures/algorithms, rather than from short one-liners.
Quoting Dijkstra: "The competent programmer is fully aware of the limited size of his own skull. He therefore approaches his task with full humility, and avoids clever tricks like the plague."
A couple points:
Code should first reflect your intentions; after all, it's meant for humans to read. After that, if you really, really must, you can start to tweak the code for performance. Most of all, never write code just to demonstrate a gimmick or bit-twiddling hack.
Breaking code onto multiple lines has absolutely no impact on performance.
Don't underestimate the compiler's optimizer. Just write the code as intuitively as possible, and the optimizer will ensure it has the best performance.
In this regard, the most descriptive, intuitive, fastest code, is:
std::swap(a, b);
Readability and instant understandability are what I personally rate highest (and several others may vote likewise) when writing and reading code. They improve maintainability. In the particular example provided, it is difficult to understand immediately what the author is trying to achieve in those few lines.
The single-line code a=a+b-(b=a);, although very clever, does not obviously convey the author's intent to others.
In terms of efficiency, optimisation by the compiler will achieve that anyway.
In terms of Java at least, I remember reading that the JVM is optimized for normal, straightforward usage, so you often just fool yourself if you try to do stuff like that.
Moreover it looks awful.
OK, try this. Next time you have a strange bug, start by squashing up as much code into single lines as you can.
Wait a couple weeks so you've forgotten how it's supposed to work.
Try to debug it.
Of course it depends on the compiler, although I cannot foresee any kind of earth-shattering difference. The main result is abstruse code.
When developing for Android, is a switch statement more efficient than an if-else chain? A switch statement takes more lines of code, but judging by anecdotal evidence it seems to be more commonly used in Android applications.
The examples below illustrate the same programming construct with a case statement and if-else chain. The switch statement requires 10 lines while the if-else chain requires 7.
Switch statement
public void onClickWithSwitch(View v) {
    switch (v.getId()) {
        case R.id.buttonA:
            buttonA();
            break;
        case R.id.buttonB:
            buttonB();
            break;
        case R.id.buttonC:
            buttonC();
    }
}
If-else chain
public void onClickWithIf(View v) {
    int id = v.getId();
    if (id == R.id.buttonA)
        buttonA();
    else if (id == R.id.buttonB)
        buttonB();
    else if (id == R.id.buttonC)
        buttonC();
}
Why would switch be more common than an if-else chain? Do switch statements offer better performance when compared to if-else chains?
The reason languages have switch statements is to allow the compiler to generate a jump table, which is fast if it's large, because at run time it can get to the desired code in O(1) rather than O(N) time.
It only helps speed-wise if there are many cases, the code executed in each case does not take much time, and the program spends a significant percentage of its time in this code in the first place.
Other than that it's purely a matter of taste.
There is no relationship between the number of code lines and speed. What matters is the kind of assembly language code that's generated, which I'd encourage you to get familiar with.
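To connect this to Java specifically (a sketch; you can verify the generated bytecode with javap -c): javac compiles a switch with dense case values to a tableswitch, the jump table described above, and falls back to a lookupswitch (a key search) when the values are sparse.

class SwitchShapes {
    static String dense(int x) {
        switch (x) { // contiguous cases 1..4: javac emits a tableswitch (jump table)
            case 1: return "a";
            case 2: return "b";
            case 3: return "c";
            case 4: return "d";
            default: return "?";
        }
    }

    static String sparse(int x) {
        switch (x) { // widely spaced cases: javac falls back to a lookupswitch
            case 1:      return "a";
            case 1000:   return "b";
            case 100000: return "c";
            default:     return "?";
        }
    }
}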
Unless your sequence of ifs/cases is truly vast, I don't think it matters. With the switch statement, it's clearer what's going on. The only downside is all the break statements and the potential to miss one, but a static analyzer should catch that.
Best would be a map keyed by the id, or some clever use of subclassing.
"More efficient" is a vague concept, because there are so many ways to measure it. I suppose most people think of execution time. On the other hand, most people don't think of memory efficiency. A switch statement with widely spaced test values can be a horrible memory hog, unless the compiler is smart enough to re-interpret it as an if-else chain.
There's a lot to be said, as well, for programming efficiency, including maintenance and readability. As sblundy noted, a switch statement can be clearer about the programmer's intent than an if-else chain. Comments can counterbalance that, but that requires more work for the programmer and there's also the risk that the comments and code don't match (particularly after a few maintenance cycles).
I imagine that most people follow whatever style they have been taught (or told to follow), without thinking about it too much. The rest of the time, I think most decisions about switch vs. if-else are based on which one best matches the programmer's thinking at the moment the code is being generated.
You asked: Is a switch statement really more efficient?
Anybody claiming to have a definitive and general answer to this question is talking nonsense. There is exactly one way to find out which is faster in your case: use a proper micro-benchmarking framework on your target platform with your complete software, not a simplified example. If that reveals a measurable and statistically significant difference, I'd be interested in hearing about it. I doubt you'll find any measurable difference for a real program.
Therefore, I would strictly go for readability.
While we're on the subject, nobody has mentioned that you should always have a default case in a switch statement. Usually you want to throw an exception, but at the very least you should assert and/or log the error.
This is just good basic defensive programming. It alerts you that you have a programming error if you later add another button (in this case).
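Applied to the onClick example from the question, a sketch of that default case:

public void onClickWithSwitch(View v) {
    switch (v.getId()) {
        case R.id.buttonA:
            buttonA();
            break;
        case R.id.buttonB:
            buttonB();
            break;
        case R.id.buttonC:
            buttonC();
            break;
        default:
            // Alerts you at run time if a new button is wired to this
            // listener but never handled here.
            throw new IllegalStateException("Unhandled view id: " + v.getId());
    }
}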