How does sonar:sonar work? - java

We want to use SonarQube with a CI tool in our project. The SonarQube server URL is configured in the main pom.xml.
There are several team members in the project. So what happens when one team member executes sonar:sonar locally with his local changes, then another team member executes sonar:sonar with his local changes, and then someone executes sonar:sonar from the CI tool (which is configured to analyse the source code in the Git repository)?
Will SonarQube display issues related to team members' local changes? What if there are differences between a team member's local source code and the source code in the Git repository?

sonar:sonar executes analysis and sends the results to the server - assuming you're running it with the token of an account that has the appropriate privileges.
Developers should not be running this type of analysis locally to check their changes. Instead, they should be using SonarLint and perhaps pull request analysis (depending on your infrastructure).
(EDIT: Pull request analysis has been deprecated and replaced by a fuller-featured Branch analysis($).)
To expand a little on why developers shouldn't be using sonar:sonar locally: it updates the central server in a last-saved-wins manner. So if you've edited A.java and analyzed it locally before commit, and I've renamed A.java to B.java and done a similar pre-commit local analysis... what's visible on the SonarQube server? Depends on who saved/analyzed last.
Instead, sonar:sonar should be run only from your CI tool on the checked-in code that's already visible to the whole team.
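For reference, a minimal sketch of how such a setup commonly looks; the server URL and the token placeholder below are illustrative, not taken from the question's pom.xml:
<!-- Parent pom.xml: every analysis, local or CI, points at the same server -->
<properties>
  <sonar.host.url>https://sonarqube.example.com</sonar.host.url>
  <!-- With the Maven scanner the project key defaults to groupId:artifactId,
       so every machine running "mvn sonar:sonar" overwrites the same project. -->
</properties>
<!-- Only the CI job should then run something like:
     mvn clean verify sonar:sonar -Dsonar.login=<ci-token>                   -->
This is exactly why the last analysis wins: all of those runs target one project key on one server.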

As you said, the SonarQube server URL is configured in the main pom.xml and your team members run the sonar:sonar command on their local machines. Because they all point to the same server URL, every analysis updates the same project on that server. If you want to see each team member's results separately, you would have to give each analysis a distinct project name/key; otherwise you will only see the latest analysis there.
Sonar shows you the differences graphically, and you can also compare two builds.
Second, a user can push updates to the Sonar server only if they have the required permission (e.g. an administrator grant); otherwise they cannot.

Related

Jenkinsfile path location "@2 converted to %402" and build is failing

I have one Jenkins job which does the following things:
Checks out the Jenkinsfile from GitHub at some location (c:\jenkins\workspace\my_build)
The Jenkinsfile checks out the Java source code to (c:\jenkins\workspace\my_build@2)
mvn clean install
When I run "mvn clean install" on my build machine it works perfectly fine.
But when I run it through the Jenkinsfile, a few unit tests that run while building my project fail with a java.io.FileNotFoundException (The system cannot find the path specified).
When I ran Maven in debug mode (using -X) I found out the workspace path (c:\jenkins\workspace\my_build@2) is being converted to c:\jenkins\workspace\my_build%402, hence Maven is unable to find the file which is required for my unit test cases to pass.
How can I fix this issue?
I managed to fix the issue by using a custom workspace. Something like this:
ws("c:\\jenkins\\my_custom_location") {   // backslashes must be escaped in a Groovy string
    // git checkout
    // mvn clean install
}
Jenkins didn't create any directory with @2 or @3 when using the custom workspace.
@user3847894,
You did not fix the issue, merely worked around the problem (avoidance). Now, if you run builds in parallel, they will all use the same workspace, probably with horribly unintended consequences.
You can try choosing a different symbol:
hudson.slaves.WorkspaceList
Since: 1.424
Default: @
Description: When concurrent builds is enabled, a unique workspace directory name is required for each concurrent build. To create this name, this token is placed between the project name and a unique ID, e.g. "my-project@123".
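If Jenkins runs as a Windows service (the c:\jenkins paths above suggest it does), one place to set that property is the <arguments> element of the service's jenkins.xml. This is only a sketch assuming a default installation; your existing arguments will differ:
<!-- jenkins.xml (Windows service wrapper): add the system property to the JVM arguments.
     "_" is an arbitrary replacement for the default "@" separator. -->
<service>
  <arguments>-Xrs -Dhudson.slaves.WorkspaceList=_ -jar "%BASE%\jenkins.war" --httpPort=8080</arguments>
</service>
Restart the Jenkins service afterwards; existing @2-style workspaces are not renamed, only new ones use the new separator.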
Or figure out the real problem:
Your system may be pulling the wrong character set (ANSI vs UTF-8, locale), or the encoding or something else is wrongly "sanitizing" the path (e.g. an OWASP sanitizer).
You'd have to provide far more info (OS, JDK, system and startup parameters, the list of Maven and Jenkins plugins, etc.) to diagnose this further. Check the controller and agent system info (${JENKINS_URL}/systemInfo and ${JENKINS_URL}/computer/myNode/systemInfo) and also see what Maven reports in its settings and via help:system. You're on your own; good luck.

How to configure Auto Commit in build step

I'm using the TeamCity plugin with IntelliJ and working with the following workflow:
https://confluence.jetbrains.com/display/TCD10/Pre-Tested+%28Delayed%29+Commit
My question is whether this process can be automated so that the last step, "commit my local patch if the build succeeded", is performed on the TeamCity side and not on my local laptop.
Can I add some extra step to the build that says "if the build succeeded, commit this patch to SVN" (from the TeamCity server)?
It just feels wrong that the server needs to keep a connection to my laptop until the end of the build.
It causes a lot of commit failures due to network glitches, authentication failures, files at a different revision, etc.
This setup is not possible, as TeamCity is not aware of your local version control credentials and settings, which are used when committing from the IDE.

New Sonar IntelliJ plugin incremental analysis and detection not working

I’d like to ask for help. I installed the plugin and successfully added the SonarQube server. I upgraded to version 4.1.2. I use IntelliJ IDEA 13.0.2.
I also successfully associated the project and inspection warnings appeared. But when I fix an issue, the status is not updated.
How do I synchronize?
I also ran the inspection named 'SonarQube issues'.
Nevertheless, after running the inspection the warnings are still there, even where the code was fixed. It seems to me the synchronization somehow fails.
Any idea what to do, please?
What I did in my situation:
Installed SonarQube 4.4.
Installed IntelliJ 13.1.4b (the same story with 13.0.4).
Configured the SonarQube IntelliJ plugin (not the community-provided one, but the plugin from SonarSource), following the instructions linked from the wiki.
Attached it to the Sonar project, every step just as SonarSource recommends.
This gave me a mapping from Sonar onto the source code, but it does not update when I simply fix an issue. However, if I fix issues and then re-publish the project to Sonar through mvn sonar:sonar, I get an updated picture the next time I run the inspection through the SonarQube plugin.
This is not 100% of what is wanted, but definitely better than nothing. I have also imported the Sonar rules through QAPlug - far less useful, but notably faster.
Overall this configuration lets me do what is needed, but I'd like real incremental processing without publishing local changes to the Sonar server.
Give the SonarQube IntelliJ Community Plugin a try; it is made for fixing issues detected by Jenkins on the local dev machine. Unlike the official plugin you will need to do more configuration yourself, but it also gives you the freedom to specify your local analysis script as you need.
What you need to do:
configure the plugin
set up the Sonar server
set up a local analysis script
Afterwards you can run the "SonarQube (new issues)" inspection; this will run the script and show the results in IntelliJ. If you are fixing issues, you can just rerun the inspection from inside the inspection results tool window. This will rerun the script and show the new results.
see also: https://github.com/sonar-intellij-plugin/sonar-intellij-plugin
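For the "local analysis script" step, a rough sketch of what such a script could drive on a Maven project of that era (SonarQube 4.x/5.x); the profile id and values here are made up, and the sonar.analysis.mode property has since been removed from current SonarQube versions:
<!-- pom.xml profile the local script could activate with: mvn sonar:sonar -Plocal-sonar -->
<profiles>
  <profile>
    <id>local-sonar</id>
    <properties>
      <!-- analyse locally without pushing results to the server -->
      <sonar.analysis.mode>incremental</sonar.analysis.mode>
      <sonar.host.url>http://localhost:9000</sonar.host.url>
    </properties>
  </profile>
</profiles>
As I understand it, the community plugin then reads the issues produced by that local run instead of requiring a full publish to the server.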

Coq as part of continuous integration

In my current project we use Java and Coq. We have continuous integration set up, using Maven. We want to check the Coq files as part of it, i.e. we need to:
Download and install Coq locally if it isn't installed (like Maven does with frameworks such as GWT, etc.)
Check that the Coq files are correct
Did anybody try setting this up? How can this be done?
I don't recommend automating that from your CI build. Instead, it looks more like a machine configuration dependency.
In cases like this, it is worth relying on tools like Puppet and Vagrant to ensure your development environment conforms to a given context, so your build can treat this as a premise or - better yet - ensure Coq is available on your PATH.
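If Coq is assumed to be on the PATH, one way to make the Maven build fail on incorrect proofs is to call coqc via the exec-maven-plugin. A sketch only; the plugin version, phase, and file path are placeholders:
<!-- pom.xml: run coqc during the build; it exits non-zero on incorrect proofs -->
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>exec-maven-plugin</artifactId>
  <version>3.1.0</version>
  <executions>
    <execution>
      <id>check-coq</id>
      <phase>verify</phase>
      <goals>
        <goal>exec</goal>
      </goals>
      <configuration>
        <executable>coqc</executable>
        <arguments>
          <argument>src/main/coq/Main.v</argument>
        </arguments>
      </configuration>
    </execution>
  </executions>
</plugin>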
I know this is a really old question, but I have a different answer.
I have a similar CI setup that needs to install build tools. In some cases, such as on Bitbucket, I pre-build a Docker image containing the tools and update the build configuration each time I update the tools. In Bitbucket this works well because the source code of the package being built points to the particular Docker image version to use to build it, which ensures that older builds can still be built, assuming the older Docker images are retained.
Otherwise, I just script the installation of the tools using wget or curl to download as necessary.

What is build automation software (for example, Ant)?

I see references to Ant a lot, but I don't get exactly what it's meant to do. From what I've heard it's supposed to compile your projects, but can't I just do that by clicking Run->Run in Eclipse?
Edit: I guess I should rephrase my question. I already know that Ant is 'build automation software'; my question is, what exactly is build automation? I thought that you're supposed to test your app, and when it is running you click the 'build' button in Eclipse or use command-line java, and it makes a .jar file out of it. So why do you need to 'automate' this process?
Not all Java development is done through Eclipse, and not all jars may (or should) be built from the command line.
You may additionally need to run test cases, unit tests, and many, many other processes.
What Ant does is provide a mechanism to automate all this work (so you don't have to do it every time), and you may then invoke this Ant script, say, each day at 6 p.m.
For instance, in some projects a daily build is needed; the following are tasks that may be automated with Ant so they can run without human intervention (a sketch of such a build file follows the list).
Connect to the Subversion server.
Download/update with the latest version
Compile the application
Run the test cases
Pack the application ( in jar, war, ear, or whatever )
Commit the build binaries to Subversion.
Install the application in a remote server
Restart the server
Send an email with the summary of the job.
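A minimal sketch of what such a build file could look like; the target names, paths, and layout are illustrative, not taken from any real project:
<!-- build.xml: illustrative daily-build targets; paths and names are placeholders -->
<project name="daily-build" default="package" basedir=".">
  <property name="src.dir" value="src"/>
  <property name="build.dir" value="build"/>

  <target name="compile">
    <mkdir dir="${build.dir}/classes"/>
    <javac srcdir="${src.dir}" destdir="${build.dir}/classes" includeantruntime="false"/>
  </target>

  <target name="test" depends="compile">
    <!-- requires the JUnit jars on Ant's classpath -->
    <junit haltonfailure="true">
      <classpath path="${build.dir}/classes"/>
      <batchtest>
        <fileset dir="${src.dir}" includes="**/*Test.java"/>
      </batchtest>
    </junit>
  </target>

  <target name="package" depends="test">
    <jar destfile="${build.dir}/app.jar" basedir="${build.dir}/classes"/>
  </target>
</project>
A CI server or a scheduled job can then simply run "ant" (or "ant package") on a clean checkout.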
Of course for other projects this is overkill, but for some others it is very helpful.
rogeriopvl is absolutely correct, but to answer your "can't I just do that by clicking Run->Run in Eclipse?" question: that's fine for a project that you're working on on your own, where you don't need a repeatable, scriptable build in multiple environments.
If you're working on an open source project, however, or professional software which needs to be able to build on a build server etc, requiring a particular IDE to be running isn't a good idea.
Ant is used to automate a build process, but a build process is often much more than compiling. Ant has "tasks" that can be used to perform miscellaneous useful functions. You can create your own task to do just about anything by writing a java class and telling ant where to find it. You can then mix and match these tasks to create targets that will execute a set of tasks.
You can also set up a dynamic environment in which to build your application. You can set up property files to hold variables that can be used in the build process, e.g. file paths, class paths, etc. This is useful, for instance, to differentiate between test and production builds where deployment paths, database instances, etc. might change. Ant also includes flow control (if, etc.).
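As an illustration of such property files, here is a fragment; the file names and property keys are made up:
<!-- build.xml fragment: load test.properties or production.properties depending on
     -Denv=test / -Denv=production passed on the Ant command line (default: test) -->
<property name="env" value="test"/>
<property file="${env}.properties"/>

<target name="deploy">
  <echo message="Deploying to ${deploy.path} against database ${db.url}"/>
  <copy todir="${deploy.path}">
    <fileset dir="build/webapp"/>
  </copy>
</target>
Because Ant properties are immutable once set, a value passed with -Denv=... wins over the default declared in the file.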
Some things I've seen Ant do:
Compile code
Use version control to check out the latest version or to tag the version being built
Run SQL scripts to build or rebuild a test database
Copy files from an external resource for inclusion in a project
Bundle code into a jar, war or ear file
Deploy a web application to an application server
Restart an application server
Execute a test suite
Static analysis, e.g. CheckStyle or PMD
Send email to a team to alert them to a build.
Generate files based on information from the build.
Example: I have a jsp in my app that does nothing but display version/build information. It is generated by Ant when I run a build, and the production operations team checks this page when they deploy the application to make sure they've deployed the correct build.
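A sketch of such a generation target; the file name and the build.number property are hypothetical (e.g. set by the CI server or an Ant <buildnumber/> task):
<!-- illustrative target: write build info into a page served by the app -->
<target name="version-info">
  <tstamp>
    <format property="build.time" pattern="yyyy-MM-dd HH:mm"/>
  </tstamp>
  <echo file="web/version.jsp">Build ${build.number} at ${build.time}</echo>
</target>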
In many larger companies (and likely some smaller ones), you'll find that production code is not built by the people who developed it. Instead, the developers may check their code into a source code repository and tag it. Then they give this tag to a build team.
The build team, in a separate (clean) area - possibly on some headless server (i.e. with no GUI) - will then check out the code and run a build script. The build script will be completely independent of the desktop environment/IDE.
This ensures that nothing which happens to be on any one developer's computer is "polluting" the build. (Or, more likely, nothing outside source control is required for the system to work!)
So most software you use will never, ever be built from a developer's desktop.
PS. You might also want to look at the idea of Continuous Integration
The short answer is that Ant is a great way to create a complete project build that is independent of any particular tool any developer may be using. Without an independent build, things can go haywire quickly - especially for large project teams.
And now for the long answer... I have been brought into several projects without any sense of an independent build. On one project, there was one guy, who was not a developer, tasked with building and deploying the software. He had created 147 separate Windows batch files to compile each EJB, each servlet, and each client component. There was no error checking in this build. All log messages, including error messages, went to standard out. It was up to him to manually recognize, by reading this log, which exception or message was normal and which was an error. He also had to deploy the software he had just built. Deploying was equally complex since there were several load-balanced tiers. Each module had to be placed in the right place manually, with options set up to match downstream and upstream tiers. Building and deploying this software took him at least 3 days using this method. Of course, only then could anyone determine if the build "worked". Usually, after this period all the programmers would scramble to debug the build. Programmers would say, "My module works fine in my IDE. I just click run like this, see?"
Indeed, the individual software modules usually worked, but the build and deployment were horribly ineffective. And just as bad, it was equally difficult for anyone to deploy a build to more than one environment. Management would say: OK, you now have this build working in our regression testing environment; now deploy that same build in this other environment so the sales guys can demo up-and-coming software. That should be simple to do, but it also took at least 2 days, followed by a "debugging the build" period. Builds and deploys were never simple and never accurate. It really slowed the project down.
Anyway, we replaced this entire procedure with a complete Ant-based build and deploy mechanism. The end result was that a complete build could be created and deployed in less than 30 minutes, completely automated. The QA guy managing the builds and deploys could keep a whiteboard of which environment had which build deployed to it and which group was using that environment. This was something that was just not possible with the old system.
Ant is for automating software build processes:
http://en.wikipedia.org/wiki/Apache_Ant
Ant allows CRISP (complete, repeatable, informative, schedulable, portable) builds. You can find great info on it in this presentation by Mike Clark and in his book, Pragmatic Project Automation.
Ant is a build tool, akin to makefiles (albeit with a very different syntax in XML). If you're only using Eclipse it's fine to stick to that and you can always convert an Ant build file into an Eclipse project (Eclipse's launch configurations are then, if I remember correctly, the equivalent of Ant's build targets).
If you want to deploy the source code of the application and allow others to easily build or set it up, automating that using Ant is probably not a bad idea. But it's usually not a consistent experience for users, or at least I haven't seen much consensus so far on which targets should be there and what they should do.
Ant may also be used for regular automated builds (you wouldn't want to hit Run in Eclipse every night, right? :-))
If there's one close to you I think you'd get a lot out of CITCON, the Continuous Integration and Testing Conference. You get to talk with lots of people about the benefits of automation applied to building and testing software.
Basically people use Ant (with other tools) to automate everything they want to have happen after a commit. The basic advantages of such automation are faster, better and cheaper.
Faster because things happen right away without waiting for a human to get around to it.
Better because computers are really really good at doing the same thing the same way every time. (Humans tend to suck at that.)
Cheaper because you have fewer mistakes, and the mistakes that do occur are caught sooner and are therefore cheaper to fix.
You are also referring to the "Export Ant buildfile" feature.
If you write your own Ant script for building your application outside Eclipse, you can write your own targets that use the <ant> task to delegate to the generated build.xml.
Also, you can configure a project's 'builders' (project properties » Builders) to run any script (Ant or otherwise) you want when you build the project, manually or automatically.
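A sketch of such a delegating wrapper, assuming Eclipse has exported a build.xml with a "build" target (the file and target names may differ in your export):
<!-- my-build.xml: wrapper around the buildfile Eclipse generated -->
<project name="wrapper" default="full-build">
  <target name="full-build">
    <!-- call a target in the generated buildfile -->
    <ant antfile="build.xml" target="build"/>
    <!-- then add your own steps, e.g. packaging the compiled classes -->
    <jar destfile="dist/app.jar" basedir="bin"/>
  </target>
</project>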
Joel (Spolsky) has a great article on "The Joel Test." Many of its points revolve around being able to do important things often, quickly, and reliably. One of those things is your build.
Eclipse uses Ant for building, running, deploying, ...
"Ant is a Java-based build tool. In theory, it is kind of like Make, without Make's wrinkles and with the full portability of pure Java code." (from link text
