SonarQube UI slow to update - Java

We have recently installed a SonarQube instance to check our source code.
The codebase is pretty large, with more than 1 million lines of code.
We run sonar-runner automatically via Jenkins.
Now I get that the UI only gets updated after sonar-runner stores its results in the database.
But it sometimes seems to really take ages: up to an hour after sonar-runner succeeds before we are able to see anything appear in the UI.
So I have a couple of questions, all related:
Is there a way to see analyses that are still 'in the pipes'?
Where can I see whether the conversion from database to the UI has failed?
Is there a way to speed the process?
So, to summarize: how can I reduce the sonar-runner to SonarQube UI latency?
I went through all the docs but couldn't find much about this yet.
Thanks for the info,

Is there a way to see analyses that are still 'in the pipes'?
Yes, log in as admin and go to Settings > System > Analysis report
Where can I see whether the conversion from database to the UI has failed?
have a look at the content of the "Current Activity" and "Past Reports" tabs
Is there a way to speed the process?
This is a very broad question which implies tons of different answers. It all depends on where the time is spent: you may be CPU bound, memory bound, database bound, ...
Having a look at the queue of report processing might give you a hint.

1 MLoC is not that huge. I run SonarQube through sonar-runner + Jenkins, and when Jenkins indicates in the log that the analysis has been successful, I am able to see it in SonarQube's dashboard. So I would say your 'latency' is not normal.
Could you please give details about your environment? Physical/virtual? OS? DB? SQ release? etc.

After loads of searching around, I realized that for some reason SonarQube didn't correctly handle the fact that I was running several sonar-runner analyses right after each other.
After the 'Store results in database' message, there is a window of a couple of seconds during which starting a new analysis will cause the SonarQube GUI to not see the analysis.
Running analyses with a bit more time between them reduced the latency by a great deal.
Since Seb gave a lot of insight about SonarQube itself, I will accept his answer. It is also probably better suited to a general audience and less specific to my situation.


How to speed up frequent writing

We created a Java agent which does a check on our application suite to see if, for instance, the parent/child structure is still correct. To do this it needs to check 8000+ documents across several applications.
The check itself goes very fast. We use a navigator to retrieve data from views and only read data from those entries. The problem is within our logging mechanism. Whenever we report a log entry with level SEVERE (i.e. a really big issue), the backend document is directly updated. This is because we don't want to lose any info about these issues.
In our test runs we see that everything runs smoothly, but as soon as we 'create' a lot of severe issues the performance drops enormously because of all the writes. I would like to see if there are any Notes developers facing the same challenge: how could we speed up the writing without losing any data?
-- added more info after a comment from Simon --
It's a scheduled agent which runs every night to check for inconsistencies. The goal is of course to find inconsistencies and fix the cause, and to eventually have no inconsistencies reported at all.
It's a scheduled agent which runs every night to check for inconsistencies.
OK. So there are a number of factors to take into account.
Are there any embedded JARs? When an agent has embedded JARs, the server has to detach them from the agent to disk before it can run the code. This is done every time the agent executes, which can be a performance hit. If your agent runs a number of times, remove the embedded JARs and put them into the lib\ext folder on the server instead (this requires a server restart).
You mention it runs at night. By default, general housekeeping processes run at night. Check the notes.ini for scheduled server tasks and appraise what impact they have on the server/agent when running. For example:
ServerTasksAt1=Catalog,Design
ServerTasksAt2=Updall
ServerTasksAt5=Statlog
In this case, if the agent ran between 2 and 5, then UPDALL could have an impact on it. Also check Program documents for scheduled executions.
In what way are you writing? If you are creating a document for each incident and the document content is not large, then the write time should be reasonable. What is liable to be a performance hit is one of the following:
Multi-threading those writes.
Pulling a log document, appending a line, saving and then repeating.
One last thing to think about: if you are getting 3000 errors, there must be a point where X errors means there is no point continuing, and instead the admin should be alerted via SNMP/email/etc. It might be worth coding that in as well (a rough sketch follows after this answer).
Other than that, you should probably post some sample code in relation to the write.
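To illustrate the per-incident write with a bail-out threshold mentioned above, here is a minimal sketch; the form name, field names and threshold are assumptions for illustration, not anything your application defines:

import lotus.domino.Database;
import lotus.domino.Document;
import lotus.domino.NotesException;

// Hypothetical logger: one small document per incident, plus a threshold so a
// pathological run stops and alerts an admin instead of writing all night.
public class IncidentLog {
    private static final int MAX_SEVERE = 500; // assumption: tune to your environment
    private final Database logDb;
    private int severeCount = 0;

    public IncidentLog(Database logDb) {
        this.logDb = logDb;
    }

    // Returns false once the threshold is reached and the agent should stop.
    public boolean severe(String sourceUnid, String message) throws NotesException {
        Document doc = logDb.createDocument();
        doc.replaceItemValue("Form", "LogEntry");
        doc.replaceItemValue("Level", "SEVERE");
        doc.replaceItemValue("SourceUNID", sourceUnid);
        doc.replaceItemValue("Message", message);
        doc.save(true, false); // small documents keep each save cheap
        doc.recycle();
        return ++severeCount < MAX_SEVERE;
    }
}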
Hmm, a difficult, or rather general, question.
As far as I understand, you update the documents in the view you are walking through. I would set view.AutoUpdate to false. This ensures that the view is not reloaded while your code is running, which should speed it up.
This is an extract from the Designer help:
Avoid automatically updating the parent view by explicitly setting AutoUpdate to False. Automatic updates degrade performance and may invalidate entries in the navigator ("Entry not found in index"). You can update the view as needed with Refresh.
Hope that helps.
If that does not help you might want to post a code fragment or more details.
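For illustration, here is a minimal sketch of that pattern, assuming the check walks a single view with a ViewNavigator; the view name and the check itself are placeholders:

import lotus.domino.AgentBase;
import lotus.domino.AgentContext;
import lotus.domino.Database;
import lotus.domino.NotesException;
import lotus.domino.Session;
import lotus.domino.View;
import lotus.domino.ViewEntry;
import lotus.domino.ViewNavigator;

public class ConsistencyCheckAgent extends AgentBase {
    public void NotesMain() {
        try {
            Session session = getSession();
            AgentContext ctx = session.getAgentContext();
            Database db = ctx.getCurrentDatabase();
            View view = db.getView("($Documents)");   // placeholder view name
            view.setAutoUpdate(false);                // do not refresh the view while walking it
            ViewNavigator nav = view.createViewNav();
            ViewEntry entry = nav.getFirst();
            while (entry != null) {
                // read only the column values; no document is opened here
                java.util.Vector values = entry.getColumnValues();
                // ... run the parent/child consistency check on 'values' ...
                ViewEntry next = nav.getNext(entry);
                entry.recycle();                      // free the backend handle as we go
                entry = next;
            }
        } catch (NotesException e) {
            e.printStackTrace();
        }
    }
}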
Create separate documents for each error rather than one huge document.
or
Write to a text file directly rather than to a database, then pull the data into a document later if necessary. This should speed things up considerably.
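A minimal sketch of that second option, assuming the agent is allowed to write a file on the server; the class name and path handling are made up for illustration:

import java.io.BufferedWriter;
import java.io.FileWriter;
import java.io.IOException;

// Hypothetical file-based issue log: one line per SEVERE finding, flushed immediately,
// which is still far cheaper than saving a Notes document per entry.
public class FileIssueLog {
    private final BufferedWriter out;

    public FileIssueLog(String path) throws IOException {
        out = new BufferedWriter(new FileWriter(path, true)); // append mode
    }

    public void severe(String unid, String message) throws IOException {
        out.write(System.currentTimeMillis() + "\tSEVERE\t" + unid + "\t" + message);
        out.newLine();
        out.flush(); // nothing is lost if the agent dies mid-run
    }

    public void close() throws IOException {
        out.close();
    }
}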

jvisualvm doesn't exclude certain methods from CPU profiling

I am trying to profile an application with jvisualvm. The application consists of a loop, in which data is loaded from a database and then some complex calculations are performed on the data. When a set of data is processed, the next set is loaded and calculated.
When I start my application and attach jvisualvm, I set up a filter on the CPU profiling page ("Start profiling from classes" and "Do not profile classes"), since I am not interested in anything that relates to the database access, and other input/output related stuff.
The filter works - almost. My problem is, that the profiler reports most of the time is spent in sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(), even though sun.* is entered into the "Do not profile classes" filter. This is the only method in sun.* appearing in my profiling results.
Has anyone seen this before and knows how to get rid of it? Problem is, all other methods show up only with tiny amounts (<1%) in the "Self Time" column, most are displayed with 0%.
The jvisualvm version used is 1.3.2.
Thanks in advance,
Axel
Sounds like most of the time is spent waiting for the database. If you want to profile the rest of the stuff, you can either
stub the database so that it returns quickly, thus making the rest of your code take most of the time (see the sketch below), or
use a better profiler such as YourKit or JProfiler (paid, and they definitely support what you want) or TPTP (free, but I'm not sure how powerful it is).
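For the first option, here is a minimal sketch of what such a stub could look like, assuming your loop already loads data through an interface; DataLoader and its method are hypothetical names, so adjust them to your real data-access type:

import java.util.Collections;
import java.util.List;

// Hypothetical data-access interface the calculation loop depends on.
interface DataLoader {
    List<double[]> loadNextBatch();
}

// Stub that returns canned in-memory data instantly, so nearly all profiled
// time is spent in the calculation code instead of JDBC/RMI waits.
class StubDataLoader implements DataLoader {
    private int remainingBatches = 100;

    public List<double[]> loadNextBatch() {
        if (remainingBatches-- <= 0) {
            return Collections.emptyList(); // "no more data" in this sketch
        }
        return Collections.singletonList(new double[] { 1.0, 2.0, 3.0 });
    }
}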
Uncheck 'Profile new Runnables' on the CPU profiling page.
To answer your other question about "Self Time": you need to take a CPU snapshot of the profiled data. The snapshot contains total method time info.

How to find an infinite loop in a java web application?

One day our Java web application went up to 100% CPU usage.
A restart solved the incident but not the problem, because a few hours later the problem came back.
We suspected an infinite loop introduced by a new version, but we hadn't made any change to the code or to the server.
We managed to find the problem by making several thread dumps with kill -QUIT and by looking at and comparing every thread's details.
We found that one thread's call stack appeared in all the thread dumps.
After analysis, it turned out there was a while loop whose condition never went false for some data that was regularly updated in the database.
Analyzing several thread dumps of a web application is really tedious.
So do you know of any better way or tools to find such an issue in a production environment?
After some searching, I found an answer in Monitoring and Managing Java SE 6 Platform Applications:
You can diagnose a looping thread by using the JDK-provided tool called JTop, which shows the CPU time each thread is using:
With the thread name, you can find the stack trace of this thread in the "Threads" tab or by making a thread dump with kill -QUIT.
You can now focus on the code that causes the infinite loop.
PS.: It seems OK to answer my own question according to https://blog.stackoverflow.com/2008/07/stack-overflow-private-beta-begins/ :
[…]
“yes, it is OK and even encouraged to answer your own questions, if you find a good answer before anyone else.”
[…]
PS.: In case the sun.com domain no longer exists:
You can run JTop as a stand-alone GUI:
$ <JDK>/bin/java -jar <JDK>/demo/management/JTop/JTop.jar
Alternately, you can run it as a JConsole plug-in:
$ <JDK>/bin/jconsole -pluginpath <JDK>/demo/management/JTop/JTop.jar
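In case the JTop demo jar is not shipped with your JDK either, the same per-thread CPU figures can be obtained with the standard java.lang.management API. A minimal sketch, with the caveat that it has to run inside the affected JVM (for example from a diagnostic JSP or a scheduled task), since it only sees its own JVM's threads:

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

// Prints every live thread with its accumulated CPU time and top stack frames;
// the biggest consumers are the candidates for the looping thread.
public class ThreadCpuDump {
    public static void main(String[] args) {
        ThreadMXBean tmx = ManagementFactory.getThreadMXBean();
        if (tmx.isThreadCpuTimeSupported() && !tmx.isThreadCpuTimeEnabled()) {
            tmx.setThreadCpuTimeEnabled(true);
        }
        for (long id : tmx.getAllThreadIds()) {
            ThreadInfo info = tmx.getThreadInfo(id, 5); // keep the top 5 frames
            if (info == null) {
                continue; // thread ended in the meantime
            }
            long cpuMillis = tmx.getThreadCpuTime(id) / 1000000L;
            System.out.println(cpuMillis + " ms  " + info.getThreadName()
                    + " (" + info.getThreadState() + ")");
            for (StackTraceElement frame : info.getStackTrace()) {
                System.out.println("    at " + frame);
            }
        }
    }
}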
Fix the problem before it occurs! Use a static analysis tool like FindBugs or PMD as part of your build system. It won't find everything, but it is a good first step.
Consider using coverage tools like Cobertura.
It would have shown you that you hadn't tested these code paths.
Testing something like this can become really cumbersome, so try to avoid it by introducing quality measurements.
Anyway, tools like VisualVM will give you a nice overview of all threads, so it becomes relatively easy to identify threads which have been working for an unexpectedly long time.

Limiting Profiling in Visual VM

I am trying out the VisualVM program that comes with the new JDKs. I am doing profiling on it and trying to profile CPU on only methods in a particular package.
I put the following in the "Profile Only Classes:"
jig.*
Where jig is the package I want to instrument. Unfortunately I get back results on other methods that are not in that package or any subpackages.
The only way I can reproduce your problem is if I leave the "Profile new Runnables" box checked. When I leave that checked, the profiler picks up code started as new threads, even if that code does not meet the filtering criteria. I guess this functionality is unclear.
You should make sure you uncheck that box before you do your profiling activity. Just be aware that with it unchecked, that probably means you won't see profile information of any of your own code that happens to be started as a separate Thread. (But I figure there's a good chance you're not doing that, so you have nothing to be concerned about.)
Actually there's an open bug about that:
https://java.net/jira/browse/VISUALVM-546
I totally agree with the submitter (and with your disappointment about the "strange" behavior of VisualVM). Even with "Profile new Runnables" checked, the filter should be honored, in my opinion.
Profiling is an important task, especially with large projects typically deployed on an application server, where it is the common way (and the right way) to have threads for background tasks and to serve user requests.
I invite everybody to vote for it so that it gets attention from the VisualVM developers.
You can enter a filtering criterion in the text field at the bottom of the "Profiling Results" list, that should do the trick.

Java: How can I see what parts of my code are running the most? (profiling)

I am writing a simple checkers game in Java. When I mouse over the board my processor ramps up to 50% (100% on a core).
I would like to find out what part of my code (assuming it's my fault) is executing during this.
I have tried debugging, but step-through debugging doesn't work very well in this case.
Is there any tool that can tell me where my problem lies? I am currently using Eclipse.
This is called "profiling". Your IDE probably comes with one: see Open Source Profilers in Java.
Use a profiler (e.g yourkit )
Profiling? I don't know what IDE you are using, but Eclipse has a decent profiler and there is also a list of some open-source profilers at java-source.
In a nutshell, profilers will tell you which parts of your program are being called and how often.
I don't profile my programs much, so I don't have too much experience, but I have played around with the NetBeans IDE profiler when I was testing it out. (I usually use Eclipse as well. I will also look into the profiling features in Eclipse.)
The NetBeans profiler will tell you which thread was executing for how long, and which methods were called and how long they took, and will give you bar graphs showing how much time each method has taken. This should give you a hint as to which method is causing problems. You can take a look at the Java profiler that the NetBeans IDE provides, if you are curious.
Profiling is a technique usually used to measure which parts of a program take up the most execution time, which in turn can be used to evaluate whether performing optimizations would be beneficial to increase the performance of the program.
Good luck!
1) It is your fault :)
2) If you're using Eclipse or NetBeans, try using the profiling features -- it should pretty quickly tell you where your code is spending a lot of time.
3) Failing that, add console output where you think the inner loop is -- you should be able to find it quickly.
Yes, there are such tools: you have to profile the code. You can either try TPTP in Eclipse or perhaps try JProfiler. That will let you see what is being called and how often.
Use a profiler. There are many. Here is a list: http://java-source.net/open-source/profilers.
For example you can use JIP, a java coded profiler.
Clover will give a nice report showing hit counts for each line and branch. For example, this line was executed 7 times.
Plugins for Eclipse, Maven, Ant and IDEA are available. It is free for open source, or you can get a 30 day evaluation license.
If you're using Sun Java 6, then the most recent JDK releases come with JVisualVM in the bin directory. This is a capable monitoring and profiling tool that will require very little effort to use - you don't even need to start your program with special parameters - JVisualVM simply lists all the currently running java processes and you choose the one you want to play with.
This tool will tell you which methods are using all the processor time.
There are plenty of more powerful tools out there, but have a play with a free one first. Then, when you read about what other features are available out there, you'll have an inkling of how they might help you.
This is a typically 'High CPU' problem.
There are two kinds of high CPU problems:
a) Where one thread is using 100% CPU of one core (this is your scenario).
b) Where CPU usage is 'abnormally high' when certain actions are executed. In such cases the CPU may not be at 100% but will be abnormally high. Typically this happens when there are CPU-intensive operations in the code, like XML parsing or serialization/de-serialization.
Case (a) is easy to analyze. When you experience 100% CPU, take 5-6 thread dumps at 30-second intervals. Look for a thread which is active (in "runnable" state) and which stays inside the same method (you can infer that by monitoring the thread stack). Most probably you will see a 'busy wait' (see the code below for an example):
while (true) {
    if (status) break;
    // Thread.sleep(60000); // such a statement would have avoided the busy wait
}
Case (b) can also be analyzed using thread dumps taken at equal intervals. If you are lucky, you will be able to find the problem code; if you cannot identify it from the thread dumps, you need to resort to profilers. In my experience, the YourKit profiler is very good.
I always try thread dumps first; profilers are only a last resort. In 80% of cases we are able to identify the problem using thread dumps.
Or use JUnit test cases and a code coverage tool on some common components of yours. If there are components that call other components, you'll quickly see which ones are executed many more times.
I use Clover with JUnit test cases, but for open-source, I hear EMMA is pretty good.
In single-threaded code, I find adding some statements like this:
System.out.println("A: "+ System.currentTimeMillis());
is simpler and as effective as using a profiler. You can soon narrow down the part of the code causing the problem.
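Along the same lines, here is a quick sketch that measures elapsed time around a suspected block rather than only printing timestamps; suspectSection() is a placeholder for whatever code is under investigation:

// Poor man's profiling: bracket a suspect section and print how long it took.
public class Timed {
    public static void main(String[] args) {
        long start = System.nanoTime();
        suspectSection(); // replace with the mouse-over / repaint code under suspicion
        long elapsedMs = (System.nanoTime() - start) / 1000000L;
        System.out.println("suspect section took " + elapsedMs + " ms");
    }

    private static void suspectSection() {
        for (int i = 0; i < 1000000; i++) { // dummy work, stands in for the real code
            Math.sqrt(i);
        }
    }
}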
