Tool for Java code performance analysis - java

There are many tools for code quality, but sometimes I also need to gain performance, even if the code doesn't conform to the code quality rules. Does an open source tool for this exist?
Thanks.

There's no tool for exactly that, but you can try out jVisualVM.
http://download.oracle.com/javase/6/docs/technotes/tools/share/jvisualvm.html
It usually comes with your JDK, e.g. under C:\Program Files\Java\jdk1.6.0_21\bin

No tool is going to tell you about both performance and quality. Both are hard to measure.
You can certainly use something like FindBugs or IntelliJ's Inspector to examine your code, but they'll just look for rule violations. I'm not aware of a tool that will point out when I've written code that performs badly. How will a Java code inspector know that your database has no indexes?

I can't answer you regarding code quality. Others can. But when you "need to gain performance", I would rather tell you how to do it than tell you what tools to use.
There are tools, but more important than tools is understanding what you're doing.
The most important thing to understand is that measuring doesn't tell you what to fix to get higher performance; it only tells you how much improvement you got.
The way to improve performance is to find activities, whatever they are, that account for a significant fraction of time and can be improved.
Measuring is not finding.
Example:
I can manually sample the state of a program, several times, and see that much of the time it is doing container class manipulations, like fetching elements, testing for end conditions, etc.
(That's the finding part.)
This can be happening in many different places in the code, so no particular routine appears to be causing a large fraction of time to be spent.
There is no particular hotspot or obvious bottleneck.
There is no "bad algorithm" or "slow routine", the kinds of thing people say they look for.
Nevertheless, I can see in those few samples that it is doing container class operations, and I can see exactly where.
If I can replace those container class operations with something else that accomplishes the same purpose, I can save time.
How much time? Up to roughly the fraction of time I saw those operations happening, and that can be quite large.
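A minimal sketch of such a manual sampler, for when you want to capture those states from inside the same JVM (the class and thread names here are made up; running jstack against the process works just as well):

    import java.util.Map;

    // Poor-man's sampler: periodically dump every thread's stack so you can
    // eyeball where the program is spending its time.
    public class StackSampler implements Runnable {
        private final long intervalMillis;

        public StackSampler(long intervalMillis) {
            this.intervalMillis = intervalMillis;
        }

        @Override
        public void run() {
            try {
                while (true) {
                    Thread.sleep(intervalMillis);
                    for (Map.Entry<Thread, StackTraceElement[]> e
                            : Thread.getAllStackTraces().entrySet()) {
                        System.out.println("--- " + e.getKey().getName());
                        for (StackTraceElement frame : e.getValue()) {
                            System.out.println("    at " + frame);
                        }
                    }
                }
            } catch (InterruptedException ignored) {
                // stop sampling when interrupted
            }
        }

        public static void main(String[] args) {
            Thread sampler = new Thread(new StackSampler(500), "sampler");
            sampler.setDaemon(true);
            sampler.start();
            // ... run the workload you want to inspect here ...
        }
    }

A handful of such dumps is enough: if the same container operations show up in most of them, that is where the time goes.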
The real payoff for doing this is there can be multiple issues.
Suppose issue A costs 40% of the time, B costs 20%, and C costs 10%,
and the total time is, say, 10 seconds.
You go after A, the most obvious one.
Fixing it reduces time to about 6 seconds. (Speedup 10/6 = 1.67).
Then problem B takes a larger percent of time (2/6 = .33) so it is easier to find with samples.
Fixing it reduces time to 4 seconds (Speedup 6/4 = 1.5).
Then C is (1/4 = 25%) and is much easier to find than before.
Removing it reduces time to 3 seconds (Speedup 4/3 = 1.33).
The total speedup factor is 10/3 = 3.33.
You can look at it as the compounded product of each speedup: 10/6 * 6/4 * 4/3 = 10/3.
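The same compounding, spelled out in a few lines of Java using the made-up percentages from above:

    // A costs 40%, B costs 20%, C costs 10% of an original 10-second run.
    public class SpeedupDemo {
        public static void main(String[] args) {
            double original = 10.0, time = original;
            double[] costs = {0.4, 0.2, 0.1}; // fractions of the original time
            for (double f : costs) {
                double before = time;
                time -= f * original; // each fix removes its absolute cost
                System.out.printf("%.1fs -> %.1fs (speedup %.2f)%n",
                        before, time, before / time);
            }
            System.out.printf("total speedup: %.2f%n", original / time); // 3.33
        }
    }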
Now I'm dealing in numbers here, but none of these had to be measurements of time spent in localized pieces of code.
They were just rough estimates obtained by describing what was happening in a small number of detailed samples of what the program was doing.
The samples aren't really concerned with measuring.
They are concerned with exposing the problems.

Related

Genetic Algorithm - convergence

I have a few questions about my genetic algorithm and GAs overall.
I have created a GA that, given points on a curve, tries to figure out what function produced that curve.
An example is the following:
Points
{{-2, 4},{-1, 1},{0, 0},{1, 1},{2, 4}}
Function
x^2
Sometimes I give it points for which it never produces a function, and sometimes it does produce one. It can even depend on how deep the initial trees are.
Some questions:
Why does the tree depth matter in trying to evaluate the points and
produce a satisfactory function?
Why do I sometimes get premature convergence, where the GA never
breaks out of the cycle?
What can I do to prevent a premature convergence?
What about annealing? How can I use it?
Can you take a quick look at my code and tell me if anything is obviously wrong with it? (This is test code, I need to do some code clean up.)
https://github.com/kevkid/GeneticAlgorithmTest
Source: http://www.gp-field-guide.org.uk/
EDIT:
Looks like Thomas's suggestions worked well: I get very fast results and less premature convergence. I feel like increasing the gene pool gives better results, but I am not exactly sure whether it is actually getting better over every generation or whether the randomness simply allows it to find a correct solution.
EDIT 2:
Following Thomas's suggestions I was able to get it to work properly; it seems I had an issue with selecting survivors and with expanding my gene pool. I also recently added constants to my GA, in case anyone else wants to look at it.
In order to avoid premature convergence you can also use multiple subpopulations. Each subpopulation evolves independently, and at the end of each generation you can exchange some individuals between subpopulations.
I did an implementation with multiple subpopulations for a Genetic Programming variant: http://www.mepx.org/source_code.html
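A minimal sketch of that migration step (Population, best(n) and replaceWorst(...) are hypothetical placeholders for whatever your GA implementation provides):

    import java.util.ArrayList;
    import java.util.List;

    // Ring-topology migration between independently evolving sub-populations.
    public class IslandModel {

        interface Population<I> {
            List<I> best(int n);                    // the n fittest individuals
            void replaceWorst(List<I> immigrants);  // swap them in for the weakest
        }

        // Call this at the end of each generation.
        static <I> void migrate(List<Population<I>> islands, int migrants) {
            int n = islands.size();
            // collect emigrants first so islands don't re-export fresh immigrants
            List<List<I>> outgoing = new ArrayList<>();
            for (Population<I> island : islands) {
                outgoing.add(island.best(migrants));
            }
            for (int i = 0; i < n; i++) {
                islands.get((i + 1) % n).replaceWorst(outgoing.get(i)); // ring
            }
        }
    }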
I don't have the time to dig into your code, but I'll try to answer from what I remember about GAs:
Sometimes I give it points for which it never produces a function, and sometimes it does produce one. It can even depend on how deep the initial trees are.
I'm not sure what the question is here, but if you need a result you could select the function that has the least distance to the given points (that could be a sum, a mean, a number of matching points, etc., depending on your needs).
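As a sketch of what such a fitness measure could look like, using the sample points from the question (Candidate and its evaluate method are made-up stand-ins for your expression tree):

    public class FitnessExample {
        interface Candidate {
            double evaluate(double x); // evaluate the expression tree at x
        }

        // Sum of squared errors over the sample points; lower is better.
        static double fitness(Candidate c, double[][] points) {
            double error = 0;
            for (double[] p : points) {
                double diff = c.evaluate(p[0]) - p[1]; // predicted minus expected
                error += diff * diff;
            }
            return error;
        }

        public static void main(String[] args) {
            double[][] points = {{-2, 4}, {-1, 1}, {0, 0}, {1, 1}, {2, 4}};
            Candidate square = x -> x * x;
            System.out.println(fitness(square, points)); // 0.0 for a perfect fit
        }
    }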
Why does the tree depth matter in trying to evaluate the points and produce a satisfactory function?
I'm not sure which tree depth you mean, but it could affect two things:
accuracy: the higher the depth, the more accurate the solution might be, and the more possibilities for mutations there are
performance: depending on which tree you mean, a higher depth might increase performance (allowing for more educated guesses on the function) or decrease it (requiring more solutions to be generated and compared).
Why do I sometimes get a premature convergence and the GA never breaks out if the cycle?
That might be due to too little mutation. If you have a set of solutions that all converge around a local optimum, slight mutations might not move the resulting solutions far enough away from that local optimum to break out.
What can I do to prevent a premature convergence?
You could allow for bigger mutations, e.g. when solutions start to converge. Alternatively you could throw entirely new solutions into the mix (think of it as "immigration").
What about annealing? How can I use it?
Annealing could be used to gradually improve your solutions once they start to converge on a certain point/optimum, i.e. you'd improve the solutions in a more controlled way than with "random" mutations.
You can also use it to break out of a local optimum, depending on how those optima are distributed. As an example, you could run your GA until solutions start to converge, then use annealing and/or larger mutations and/or completely new solutions (you could generate several sets of solutions with different approaches and compare them at the end), create your new population, and, if the convergence is broken, start a new iteration with the GA. If the solutions still converge at the same optimum, you could stop, since no bigger improvement is to be expected.
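One common way to wire annealing in is the Metropolis acceptance rule; here is a minimal sketch, assuming fitness is an error you minimize, as in the curve-fitting setup above:

    import java.util.Random;

    // Metropolis rule: always take improvements, sometimes take worse
    // solutions, with a probability that shrinks as the temperature cools.
    public class Annealing {
        static boolean accept(double currentError, double candidateError,
                              double temperature, Random rnd) {
            if (candidateError <= currentError) {
                return true; // never reject an improvement
            }
            double p = Math.exp((currentError - candidateError) / temperature);
            return rnd.nextDouble() < p;
        }

        // Typical cooling schedule: shrink temperature each step, 0 < alpha < 1.
        static double cool(double temperature, double alpha) {
            return temperature * alpha;
        }
    }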
Besides all that, heuristic algorithms may still hit a local optimum but that's the tradeoff they provide: performance vs. accuracy.

Java JUnit precise benchmarking and estimating complexity

I am using Java and have used junit-benchmarks-0.7.2 for JUnit performance tests. It works fine for warm-ups, multiple runs of test functions, and plotting results. I just want to ask about two features that I can't find in junit-benchmarks:
1. It is not precise about execution time in milliseconds (especially in plots), so I only get plots for functions taking more than 0.1 s to execute.
2. Is there a plugin that can give a rough or exact estimate of the complexity of my code, even if it just displays the performance of my code vs. the expected performance for cases like O(N^2) or O(N), however it calculates it? (It doesn't matter whether it is a free or paid plugin; I just want one that does the task.)
I guess this is so far the answer:
Won't fix; will stick with millis as the default granularity. If somebody really needs nanosecond-grade timing, run your benchmarks with caliper.
This is what I'd actually recommend, too, as junit-benchmarks doesn't seem to me to be as advanced. But I may be wrong, as I haven't watched it closely.
You can write a JUnit test which is also a caliper benchmark like I did, if it helps.
Concerning the complexity estimator, there were such plans for caliper, but I strongly doubt that anyone implemented them. You could do it yourself... in a few hours, I guess. I'm afraid it won't be really useful: it can only extrapolate from what it sees, and there may be problems which manifest themselves only outside of the measured range. So you should rather interpolate only, and then the tool loses its point, since you can spot such problems without it.
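If you do want to roll it yourself, a rough sketch of the idea: time the workload at a few sizes and fit the slope of the (log n, log t) points. This uses naive System.nanoTime measurement, so treat the result as a hint, not a verdict:

    import java.util.function.IntConsumer;

    public class ComplexityGuess {
        static double estimateExponent(IntConsumer workload, int[] sizes) {
            int k = sizes.length;
            double[] x = new double[k], y = new double[k];
            for (int i = 0; i < k; i++) {
                long start = System.nanoTime();
                workload.accept(sizes[i]);
                y[i] = Math.log(System.nanoTime() - start);
                x[i] = Math.log(sizes[i]);
            }
            // least-squares slope of y over x
            double mx = 0, my = 0, sxx = 0, sxy = 0;
            for (int i = 0; i < k; i++) { mx += x[i] / k; my += y[i] / k; }
            for (int i = 0; i < k; i++) {
                sxx += (x[i] - mx) * (x[i] - mx);
                sxy += (x[i] - mx) * (y[i] - my);
            }
            return sxy / sxx; // ~1 suggests O(n), ~2 suggests O(n^2), ...
        }

        public static void main(String[] args) {
            double e = estimateExponent(n -> {
                long s = 0;
                for (int i = 0; i < n; i++)
                    for (int j = 0; j < n; j++) s += i ^ j;
                if (s == 42) System.out.println(); // defeat dead-code elimination
            }, new int[]{500, 1000, 2000, 4000});
            System.out.printf("estimated exponent: %.2f%n", e); // expect ~2
        }
    }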

Can you/How do you save CPU and memory by choosing wisely [closed]

I understand the JVM optimizes some things for you (I'm not clear on which things yet), but let's say I were to do this:
    while (true) {
        int var = 0;
    }
would doing:
    int var;
    while (true) {
        var = 0;
    }
take less space? Since you aren't declaring a new reference every time, you don't have to specify the type every time.
I understand you really would only need to put var outside of the while if I wanted to use it outside of that loop (instead of only being able to use it locally, like in the first example). Also, what about objects; would it be different with them than with primitive types in that situation? I understand it's a small thing, but a build-up of this kind of stuff can cause my application to take a lot of memory/CPU. I'm trying to use the least amount of operations possible, but I don't completely understand what's going on behind the scenes.
If someone could help me out, or even link me to somewhere I can learn about saving CPU by decreasing the number of operations, it would be highly appreciated. Please no books (unless they're free! :D), no way of getting one right now /:
Don't. Premature optimization is the root of all evil.
Instead, write your code as it makes most sense conceptually. Write it thoughtfully, yes. But don't think you can be a 'human compiler' and optimize and still write good code.
Once you have written your code (more or less naively, depending on your level of experience) you write performance tests for it. Try to think of different ways in which the code may be used (many times in a row, from front to back or reversed, many concurrent invocations etc) and try to cover these in test cases. Then benchmark your code.
If you find that some test cases are not performing well, investigate why. Measure parts of the test case to see where the time is going. Zoom into the parts where most time is spent.
Mostly, you will find weird loops where, upon reading the code again, you will think 'that was silly to write it that way. Of course this is slow' and easily fix it. In my experience most performance problems can be solved this way and 'hardcore optimization' is hardly ever needed.
In the end you will find that 99* percent of all performance problems can be solved by touching only 1 percent of the code. The other code never comes into play. This is why you should not 'prematurely' optimize. You will be spending valuable time optimizing code that had no performance issues in the first place. And making it less readable in the process.
Numbers made up of course but you know what I mean :)
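A bare-bones sketch of the kind of benchmark meant here, assuming nothing beyond the JDK (for anything serious, a harness like caliper, mentioned elsewhere on this page, is safer):

    // Warm up first so the JIT has compiled the code, then time repeated runs.
    // Good enough for spotting the big offenders.
    public class MiniBench {
        public static void main(String[] args) {
            Runnable workload = MiniBench::workUnderTest;
            for (int i = 0; i < 10_000; i++) workload.run(); // warm-up
            int runs = 100;
            long start = System.nanoTime();
            for (int i = 0; i < runs; i++) workload.run();
            System.out.printf("avg %.2f us per run%n",
                    (System.nanoTime() - start) / 1_000.0 / runs);
        }

        static void workUnderTest() {
            // ... the code path you want to measure (example body) ...
            StringBuilder sb = new StringBuilder();
            for (int i = 0; i < 1_000; i++) sb.append(i);
        }
    }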
Hot Licks points out the fact that this isn't much of an answer, so let me expand on this with some good ol' performance tips:
Keep an eye out for I/O
Most performance problems are not in pure Java. Instead they are in interfacing with other systems. In particular, disk access is notoriously slow. So is the network. So minimize its use.
Optimize SQL queries
SQL queries will add seconds, even minutes, to your program's execution time if you don't watch out. So think about those very carefully. Again, benchmark them. You can write very optimized Java code, but if it first spends ten seconds waiting for the database to run some monster SQL query then it will never be fast.
Use the right kind of collections
Most performance problems are related to doing things lots of times. Usually when working with big sets of data. Putting your data in a Map instead of in a List can make a huge difference. Also there are specialized collection types for all sorts of performance requirements. Study them and pick wisely.
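For example, a membership check that is linear on a List is effectively constant-time on a HashSet (a small illustrative demo; absolute timings will vary):

    import java.util.ArrayList;
    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;

    // Same data, same question ("do we know this value?"), very different cost:
    // List.contains scans every element, HashSet.contains hashes straight to it.
    public class LookupDemo {
        public static void main(String[] args) {
            List<Integer> list = new ArrayList<>();
            Set<Integer> set = new HashSet<>();
            for (int i = 0; i < 1_000_000; i++) { list.add(i); set.add(i); }

            long t = System.nanoTime();
            boolean inList = list.contains(999_999); // O(n): walks the whole list
            System.out.println("list: " + (System.nanoTime() - t) + " ns " + inList);

            t = System.nanoTime();
            boolean inSet = set.contains(999_999);   // O(1): one hash lookup
            System.out.println("set:  " + (System.nanoTime() - t) + " ns " + inSet);
        }
    }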
Don't write code
When performance really matters, squeezing the last 'drops' out of some piece of code becomes a science all in itself. Unless you are writing some very exotic code, chances are great there will be some library or toolkit to solve your kind of problems. It will be used by many in the real world. Tried and tested. Don't try to beat that code. Use it.
We humble Java developers are end-users of code. We take the building blocks that the language and its ecosystem provide and tie them together to form an application. For the most part, performance problems are caused by us not using the provided tools correctly, or not using any tools at all for that matter. But we really need specifics to be able to discuss those. Benchmarking gives you that specificity. And when the slow code is identified it is usually just a matter of changing a collection from a list to a map, or sorting it beforehand, or dropping a join from some query, etc.
Attempting to optimise code which doesn't need to be optimised increases complexity and decreases readability.
However, there are cases where improving readability also comes with improved performance.
For example,
if a numeric value cannot be null, use a primitive instead of a wrapper. This makes it clearer that the value cannot be null but also uses less memory and reduces pressure on the GC.
use a Set when you have a collection which cannot have duplicates. Often a List is used when in fact a Set would be more appropriate; depending on the operations you perform, this can also be faster by reducing time complexity.
consider using an enum with one instance for a singleton (if you have to use singletons at all). This is much simpler as well as faster than double-checked locking; see the sketch after this list. Hint: try to only have stateless singletons.
writing simpler, well-structured code is also easier for the JIT to optimise. This is where trying to outsmart the JIT with more complex solutions will backfire, because you end up confusing the JIT and what you think should be faster is actually slower. (And it's more complicated as well.)
try to reduce how much you write to the console (and IO in general) in critical sections. Writing to the console is so expensive, both for the program and for the poor human having to read it, that it is worth spending more time producing concise console output.
try to use a StringBuilder when you have a loop of elements to append. Note: avoid using StringBuilder for one-liners (a plain series of append() calls), as this can actually be slower and harder to read.
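The enum-singleton sketch referenced above (the class name is illustrative):

    // The JVM guarantees a single instance, thread-safe initialization and
    // serialization safety for free; simpler and faster than double-checked
    // locking.
    public enum ConnectionPool {
        INSTANCE;

        public void execute(Runnable task) {
            // ... keep singletons stateless where possible ...
            task.run();
        }
    }

    // usage: ConnectionPool.INSTANCE.execute(() -> System.out.println("hi"));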
Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away.
-- Antoine de Saint-Exupéry, French writer (1900-1944)
Developers like to solve hard problems, and there is a very strong temptation to solve problems which don't need to be solved. This is very common behaviour for developers with up to 10 years' experience (it was for me anyway ;); after about this point you have already solved most common problems before, and you start selecting the best/minimum set of solutions which will solve a problem. This is the point you want to get to in your career, and you will be able to develop quality software in far less time than you could before.
If you dream up an interesting problem to solve, go ahead and solve it in your own time, see what difference it makes, but don't include it in your working code unless you know (because you measured) that it really makes a difference.
However, if you find a simpler, elegant solution to a problem, it is worth including not because it might be faster (though it might be), but because it should make the code easier to understand and maintain, and this is usually a far more valuable use of your time. Successfully used software usually costs three times as much to maintain as it cost to develop. Do what will make life easier for the poor person who has to understand why you did something (which is harder if you didn't do it for any good reason in the first place), as this might be you one day ;)
A good example of when you might make an application slower to improve reasoning is in the use of immutable values and concurrency. Immutable values are usually slower than mutable ones, sometimes much slower; however, when used with concurrency, mutable state is very hard to get provably right, and you need that proof, because testing it is good but not reliable. Using concurrency you have much more CPU to burn, so a bit more cost in using immutable objects is a very sensible trade-off. In some cases using immutable objects can allow you to avoid using locks and actually improve throughput, e.g. CopyOnWriteArrayList, if you have a high read-to-write ratio.
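A small sketch of that read-heavy CopyOnWriteArrayList case (the EventBus name is made up):

    import java.util.List;
    import java.util.concurrent.CopyOnWriteArrayList;

    // Listener registry: written rarely (subscribe), read on every event.
    // CopyOnWriteArrayList lets readers iterate lock-free over a stable
    // snapshot; each write pays for a full array copy, which is fine at
    // this read-to-write ratio.
    public class EventBus {
        private final List<Runnable> listeners = new CopyOnWriteArrayList<>();

        public void subscribe(Runnable listener) { // rare
            listeners.add(listener);
        }

        public void publish() {                    // hot path, no locks needed
            for (Runnable l : listeners) {
                l.run();
            }
        }
    }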

How to measure C++ or Java file complexity?

I want to start measuring what Michael Feathers has referred to as the turbulence of code, namely churn vs. complexity.
To do this, I need to measure the complexity of a C++ or Java file. So I found a couple tools that measure cyclomatic complexity (CC). They each measure CC well at the function or method level. However, I need a metric at the file level, and they don't do so well there. One tool just returns the average of all method complexities in the file, and the other tool treats the whole file like it is one giant method, i.e., it counts all the decision points in the whole file.
So I did some research and found that McCabe defines CC only in terms of modules--and they define a module as a function--not as a file (see slides 20 and 30 of this presentation). And I think that makes sense.
So now I'm left with trying to figure out how to represent file complexity. My thought is that I should just use the maximum method CC for that file.
Any thoughts about that approach or any other suggestions?
Thanks!
Ken
A few years ago I had the same question. I answered it in the following way, and it has worked for me perfectly ever since:
The purpose of minimizing complexity is to improve maintainability. Cyclomatic complexity is an indicator of logical complexity, and you are right: it is applied to the smallest 'unit', i.e. a function. It is possible to derive 'summary' metrics, like total/max/min/etc., but they rarely show anything useful when it comes to cyclomatic complexity. I tried to use 'summary' metrics to compare two code bases, but concluded that only distribution graphs of cyclomatic complexity are really useful here.
So, what could be used to indicate something about the maintainability level of bigger units/levels of abstraction, like files/components/subsystems? I found that the first metric is the size of a unit in lines of code. If you limit the size of a file, say to 1000 lines, and limit the cyclomatic complexity of each function in the file, you will have a relatively "simple" file, because it is "small" and contains only "simple" functions. You may include or exclude comment/blank lines, or count only statements or only executable lines...
However, I concluded that it does not really matter in this particular application. Just limit some 'size' metric and it will serve the purpose in most cases. Later you may think about limiting the total number of lines of code per component/subsystem. It will have the same effect: a component is "simple" because it contains a "small" number of "simple" files.
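A tiny sketch of that two-threshold rule (the thresholds and the input shape are illustrative):

    import java.util.Map;

    // A file counts as "simple" if it stays under a size limit and every
    // function in it stays under a cyclomatic-complexity limit.
    public class SimplicityCheck {
        static final int MAX_FILE_LINES = 1000;
        static final int MAX_FUNCTION_CC = 10;

        static boolean isSimple(int fileLines, Map<String, Integer> ccPerFunction) {
            if (fileLines > MAX_FILE_LINES) return false;
            return ccPerFunction.values().stream()
                    .allMatch(cc -> cc <= MAX_FUNCTION_CC);
        }
    }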
The post you referred to is very good. It can be extended to a broader metric, usually called a 'maintainability index'. The index is very high if a function is complex, the file is big and changes frequently, test coverage is low, and so on (add here whatever you think defines maintainability). It is the best way I know to find hot spots for refactoring...
Disclaimer: I am the maintainer of the Metrix++ tool, which implements the use-case scenario I explained above.

Speed of memory read vs. simple arithmetic and conditionals

I'm writing a program that requires massive number crunching in any case.
Often, I have the option of either calculating a value by means of 3-4 additions or multiplications and an if-else check or two (maybe a sort of about five numbers), or looking the value up in a lookup table. Everything is int.
How fast is a memory read in comparison to such simple operations, roughly?
Basic principle of performance tuning: "Don't guess, measure it."
This is impossible to answer in any meaningful way. It depends on the actual code, and on the platform you are using. As a general rule, if there are simple local optimizations that might work, the JIT compiler will do them for you.
You are better off doing the following:
1. Write the program in a simple and natural way.
2. Get it working.
3. Run it on a typical input dataset / problem. If it is fast enough, then stop.
4. Profile the code as it executes a typical input dataset / problem.
5. Use the profiling results to identify the most critical hotspot in your code.
6. Examine the code, and identify a possible optimization.
7. Code the optimization and rerun the profiling. Did it improve things?
8. Repeat from step 3 until either the program is running fast enough, or you have run out of possible optimizations.
The problem with lookup tables is that you are trading time for space, and the space usage depends on the number of combinations of inputs that your application uses. The lookup table approach only pays off in limited cases.
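In the spirit of "measure it", a naive sketch comparing the two options (results are only indicative: cache behaviour depends on table size and access pattern, and the JIT may optimize either side; a real benchmark harness is preferable):

    public class LookupVsCompute {
        static final int N = 1 << 20;
        static final int[] TABLE = new int[N];
        static { for (int i = 0; i < N; i++) TABLE[i] = compute(i); }

        // A few adds/multiplies and a branch, like the case in the question.
        static int compute(int x) {
            int y = 3 * x + 7;
            return (y & 1) == 0 ? y : y + x;
        }

        public static void main(String[] args) {
            long sum = 0, t = System.nanoTime();
            for (int i = 0; i < N; i++) sum += compute(i);
            System.out.println("compute: " + (System.nanoTime() - t) + " ns");

            t = System.nanoTime();
            for (int i = 0; i < N; i++) sum += TABLE[i];
            System.out.println("lookup:  " + (System.nanoTime() - t) + " ns");

            if (sum == 42) System.out.println(); // defeat dead-code elimination
        }
    }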
