Is there any plugin or other way in Eclipse to forbid the use of certain methods (by showing an error or warning)? In my code, I want a way to make sure that System.currentTimeMillis() / Calendar.getInstance() is not being called by anyone, and if someone tries to use them, Eclipse should show an error.
You could write your own FindBugs plugin that relies on the FindBugs API to provide you with metadata about method invocations (among other opcodes found in the bytecode). Specifically, you'll need to implement a custom BytecodeScanningDetector that checks whether an instruction invokes one of a set of banned APIs (in your case, System.currentTimeMillis() and Calendar.getInstance()).
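A rough sketch of such a detector follows; the class name and the BANNED_API_CALL bug pattern are placeholders (the pattern would also need to be declared in the plugin's findbugs.xml/messages.xml), and older FindBugs versions expose the opcode constants via org.apache.bcel.Constants rather than org.apache.bcel.Const:

```java
import edu.umd.cs.findbugs.BugInstance;
import edu.umd.cs.findbugs.BugReporter;
import edu.umd.cs.findbugs.BytecodeScanningDetector;
import edu.umd.cs.findbugs.Priorities;
import org.apache.bcel.Const;

public class BannedApiDetector extends BytecodeScanningDetector {

    private final BugReporter bugReporter;

    public BannedApiDetector(BugReporter bugReporter) {
        this.bugReporter = bugReporter;
    }

    @Override
    public void sawOpcode(int seen) {
        // Both banned APIs are static methods, so only INVOKESTATIC is of interest here.
        if (seen != Const.INVOKESTATIC) {
            return;
        }
        String owner = getClassConstantOperand(); // e.g. "java/lang/System"
        String name = getNameConstantOperand();   // e.g. "currentTimeMillis"

        boolean banned =
                ("java/lang/System".equals(owner) && "currentTimeMillis".equals(name))
             || ("java/util/Calendar".equals(owner) && "getInstance".equals(name));

        if (banned) {
            bugReporter.reportBug(new BugInstance(this, "BANNED_API_CALL", Priorities.HIGH_PRIORITY)
                    .addClassAndMethod(this)
                    .addSourceLine(this));
        }
    }
}
```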
Here's a tutorial to get you started on writing a FindBugs plugin. You can use this plugin in Eclipse, but it is preferable to run it in a CI server as part of your build.
Other static analysis tools such as PMD may offer the same capability, exposed through their own APIs.
Also, if you want to run this only as part of your build, you can delegate these checks to Sonar, which allows for easy construction of architectural rules like these. Note: I haven't tried this on methods, so I'd warn that Sonar architectural constraints appear to be better suited to banning the use of entire classes rather than individual methods.
Eclipse is an IDE. It happens to include a Java compiler, but there are plenty of other Java compilers that run outside of Eclipse too. Eclipse shows errors returned by the Java compiler (the IDE itself isn't the one coming up with the errors).
Your best bet is to use Eclipse's ability to search for references within a project. It's a manual process, but it will find most references to what you want. Of course if someone uses reflection, all bets are off.
I'm working in Java development. I have recently come into a situation where I have to comply with coding standards: member and method ordering, naming conventions, modifier sequence. I am thinking about methods to either automate checking for compliance, or generate some sort of mechanism that does the reordering.
We're developing with Eclipse, but we're open regarding the technology. One way it might work is to create an external builder tool and add it to the projects. The disadvantage would be that it would automagically apply to all files, which could run into problems with legacy code, blowing up the error count to a degree where it is no longer a sensible metric of compliance. It also makes code reviews much more difficult, which is not wanted.
Another way would be some kind of parser with only informative capabilities. We could run a process inside Jenkins, and that certainly would work, but that would also mean that the code had already passed a review, which is usually a little late for a code compliance check.
Are there suggested or even easy methods to integrate such functionality into either IDE, the source control system (Mercurial) or even Jenkins? How is this enforced elsewhere?
I would not recommend making such changes automatically. Even though most of the Checkstyle/PMD complaints are valid, it happens that I need to ignore some of the warnings/errors. Moreover, there is only a very small pool of such easy issues; most of the notifications require more complex changes and probably couldn't be handled without human interaction.
I'm using the Sonar integration. It bundles many external checkers like PMD, CPD, Checkstyle and FindBugs, and can integrate with other useful tools like Cobertura (test coverage statistics). It's almost trivial to bind a Sonar build to a Jenkins build, and trying to avoid major/critical issues is a good approach.
In the development environment I use the Eclipse integration with FindBugs. There is also some Sonar integration, but it requires either submitting the code to a server or running a server locally, which I personally don't like. However, after a few cycles of polishing the code following Sonar code reviews, you will notice that you (and other team members) stick to most of the rules, and checking the reports on a daily basis is enough.
One solution is to use JCSC, Checkstyle, or other command-line-friendly tools and integrate them with your build process; individual developers can run them on their own branches.
Most of these tools integrate well with Jenkins (via plugins), which can then be used as a dashboard.
Further to Jayan's answer, Jenkins has a CheckStyle plugin that will display the results of each CheckStyle run and let you set the build status depending on how many violations are found. So your setup steps would be:
Set up your CheckStyle rules to fit your coding standards
Add a step to your Jenkins build that runs CheckStyle
Add a post-build step to publish the CheckStyle results.
Jayan mentions checkstyle, which is great for checking against coding standards.
I remember using Jalopy years ago for automated code formatting, it might suit your needs as well.
In all honesty though, I wouldn't reformat the code automatically. Using tools such as checkstyle to raise warnings is one thing. Taking control of a developer's source code is quite another, and most people find it terribly intrusive and unpleasant.
Also, a bug in a code checker will at worst generate incorrect warnings. A bug in a code beautifier might corrupt and destroy hours worth of work.
You can use JArchitect to check best practices using CQLinq queries, which make it easy to create your own custom rules.
JArchitect is free for open-source contributors: http://www.jarchitect.com/JArchitectForOSS.aspx
I have a series of eclipse projects that use a bunch of third party jars.
Many of them are included in different versions.
I have noticed that, due to code changes over time, some of these libraries are no longer used, but the references to them are still there.
Is there any plugin that shows the jar dependencies of each project and which jars I can remove safely?
JarAnalyzer can be used for this purpose.
I am not aware of any plugin or tool that does what you want. However, there may be rules or procedures that help to reach the end goal: a reduced, consistent set of libraries.
I have found the "Java API Compliance Checker", which allows you to compare two versions of the same library. It may help to reduce the number of libraries used for the same purpose. I have not used it, so I cannot tell you about my experience.
Define whether it is allowed to have the same kind of library available in different versions. Depending on the environment, this may or may not be allowed.
An incremental process to reduce the number of libraries needed:
Remove one library at a time from Eclipse.
Check whether compile errors result from that.
If so, resolve the compile errors.
When all are resolved, start your unit tests (you have unit tests, of course :-)) and see if any unit test breaks.
Do these steps for each library you want to remove.
In the end it could be worthwhile to look at a tool like Ivy, which lets you manage the libraries explicitly, or even to switch to Maven, which offers the same.
Final remark: the usage of a library should be decided only by the application's architect and documented in the architecture handbook, together with the reasons for doing so.
Try opening your manifest file. You can edit it and remove the dependency from there.
A Java app I have been writing for the last few years has now become quite bloated, and I am conscious that it includes unused code, etc.
I know that for CSS and JavaScript you can perform code cleanups and compact the code; does anything like this exist for Java?
For some reason developers always dismiss the importance of the Java garbage collector.
This should help: http://www.devdaily.com/java/edu/pj/pj010008
The fact that you are quoting CSS as an example of a language that enables code cleanup tells me that you are probably new to Java.
Java has one of the richest sets of tools for performing code cleanup (or general refactoring). There are two ways to perform code cleanups: one is to use a static code analyzer, and the other is to use instrumentation support to monitor your running programs.
There are many static code analyzers on the market, and some of the best ones are open source. If you are a serious Java developer, you cannot get a better IDE than IntelliJ (no, I do not get paid by them). It has one of the best code analysis tools integrated into an IDE. This link should help you.
You do not have to use IntelliJ to get the same kind of code analysis. You can use the PMD and FindBugs plugins with any leading IDE. Among all the code analyzers, I believe these two are the best at digging out deep problems in the code rather than just superficial mistakes (like formatting issues).
These tools will help you clean up dead code and find probable bugs and unclosed resources. You should also customize the tools based on your requirements, but that comes later.
Once you have fixed all the problems identified by your static analyzer, you need to monitor your program for potential problems like memory leaks; some problems are only found at runtime. Java has a built-in instrumentation mechanism called JMX, and almost all major server vendors expose it. You can also use JConsole, which provides a monitoring GUI on top of JMX.
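As a rough, self-contained sketch of what that looks like (the CacheStats class and its hit counter are invented purely for illustration): you define an MBean interface, register an implementation with the platform MBean server, and the attribute then shows up in JConsole.

```java
import java.lang.management.ManagementFactory;
import java.util.concurrent.atomic.AtomicInteger;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// Management interface: by JMX convention it must be named <ImplClass>MBean.
interface CacheStatsMBean {
    int getHitCount();
}

class CacheStats implements CacheStatsMBean {
    private final AtomicInteger hitCount = new AtomicInteger();

    void recordHit() {
        hitCount.incrementAndGet();
    }

    @Override
    public int getHitCount() {
        return hitCount.get();
    }
}

public class JmxExample {
    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        CacheStats stats = new CacheStats();
        // Appears in JConsole under the "com.example" domain.
        server.registerMBean(stats, new ObjectName("com.example:type=CacheStats"));

        stats.recordHit();
        Thread.sleep(Long.MAX_VALUE); // keep the JVM alive so JConsole can attach
    }
}
```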
Have you considered using any code coverage tools to determine exactly what portions of your code are getting executed? Take a look at EMMA:
http://emma.sourceforge.net/
Optimizing and cleaning up are two separate issues. You can start on both using Eclipse.
Take a look at these sections of your Eclipse configuration:
Window > Preferences > Java > Code Style > Clean up / Code Templates / Formatter
And:
Window > Preferences > Java > Compiler > Errors/Warnings
You'll find a lot of handy features for flagging and even automatically fixing things that are commonly considered to be bad code.
For performance optimization, you might try the Eclipse Test & Performance Tools Platform Project.
I'm coming up with performance goals for the new year, and I thought it'd be fun to set a goal to reduce the size of the code base, especially boilerplate. One action I've come up with to address this is to use Project Lombok to make beans as short as they should be. But I have a habit of overlooking downsides to new software and approaches, so I'm relying on the Stack Overflow community: can anyone tell me why Lombok is a bad idea?
A limitation of Lombok is the fact that it is closely tied to the Java compiler. Since the annotation processor API only allows the creation of new files during compilation (and not the modification of existing files), Lombok uses that API as an entry point to modify the Java compiler. Unfortunately, these compiler modifications make heavy use of non-public APIs. Using Lombok may be a good idea, but you must be aware that upgrading your compiler may break your code. The probability is low, but I always feel uncomfortable using non-public APIs.
In my opinion, source code in "Java+Lombok" is not Java source code anymore. I see it as something similar to what Borland did many years ago in their C++ Builder IDE for VCL: they introduced "properties" into C++ code, effectively creating a new programming language that wasn't C++ anymore (not C++ in the sense of the C++ standard). Sources using "Java+Lombok" are not valid sources in the sense of the Java Language Specification. Moreover, I think annotations were not designed to influence language semantics.
A major downside is IDE support. Since Lombok is not actually a language change, and since your IDE only understands Java, you will need an IDE that supports Lombok to get things working right. As of now, that means Eclipse and IntelliJ. If you use Eclipse, that might be OK, but remember that you are making a decision for future developers as well.
I'd suggest you consider moving some of your code into a less ceremonial language such as Groovy. We've had success moving some of our business logic and models into Groovy, and it works really smoothly.
One potential disadvantage to something like Lombok is that with the setters/getters "missing", source tools may not "recognize" aspects of the resulting object that give it "bean" qualities, since those qualities only manifest in the compiled class.
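As a small illustration of that point (the Customer class below is made up for the example): the source contains no getters or setters at all, so a tool that only parses .java files sees no bean properties, even though the compiled class has getName(), setName(String), equals(), hashCode() and toString().

```java
import lombok.Data;

// To a source-only analysis tool this is just a class with two private fields;
// the accessors that make it a "bean" exist only in the compiled .class file.
@Data
public class Customer {
    private String name;
    private int loyaltyPoints;
}
```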
Another disadvantage is that it's Yet Another piece of "black magic" within the tool chain. Fortunately, it seems to be a rather benign piece (I have not used it), and the fact that it happens at compile time rather than runtime is actually a blessing (IMHO). But you're not going to be able to reuse or share your code without Project Lombok, since it adds artifacts to your code base. So, while the compiled class file may be a "POJO", I'd argue that your source code is NOT a POJO.
Neither of these are crippling downsides, rather just aspects to be aware of looking forward.
Following up on what user Jcs pointed out in another answer, I would like to add more.
In our project we are using MapStruct, which generates mapper classes before the code is compiled (via the mvn generate-sources command); this is done in the process phase using the Maven processor plugin.
Project Lombok adds the bytecode for the getters/setters to the class file in the compile phase.
Since the process phase is executed before compile, MapStruct finds that there are no getters/setters available in the class.
There are some workarounds available to execute the compile phase more than once.
See this GitHub ticket for more details.
Note: I am using the STS IDE by Spring, and it is supported by Lombok. :)
It's a third-party library, and there are developers who don't know it well.
Your IDE has to support annotation processing (there are plugins for IDEA and Eclipse).
As mentioned above, your code will be without getters/setters, which leads to Sonar/Checkstyle violations.
In my opinion, the most obvious risk with Project Lombok is that when you decide to use Lombok, you also decide that everyone else who deals with your code uses Lombok. This is true of all libraries, but Lombok is special in that it is a build-time dependency and your IDE needs plugins to figure out what is going on. That means anyone who has reason to touch your code (e.g. someone trying to debug weird behavior) needs to know how to set it up and how it works. That can be a little frustrating.
To add to the other responses:
The main reason not to use it is the new record keyword, added as a preview feature in Java 14. Java 16 brings records out of preview, which makes Project Lombok obsolete in many cases.
Since Java 14, one is able to write:
record Book(String title, String author, String isbn) { }
which automatically gives you the constructor, accessor methods (title() rather than getTitle()), hashCode, equals and toString, without any annotations.
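For comparison, the closest Lombok shorthand that such a record replaces is roughly the sketch below (the generated members differ in detail, e.g. a record exposes title() rather than getTitle(), and a record is always immutable):

```java
import lombok.Value;

// @Value generates the constructor, getters, equals, hashCode and toString
// for an immutable class, much like the record declaration above.
@Value
public class Book {
    String title;
    String author;
    String isbn;
}
```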
What tools do you use to find unused/dead code in large Java projects? Our product has been in development for some years, and it is getting very hard to manually detect code that is no longer in use. We do, however, try to delete as much unused code as possible.
Suggestions for general strategies/techniques (other than specific tools) are also appreciated.
Edit: Note that we already use code coverage tools (Clover, IntelliJ), but these are of little help. Dead code still has unit tests, and shows up as covered. I guess an ideal tool would identify clusters of code which have very little other code depending on them, allowing for manual inspection.
An Eclipse plugin that works reasonably well is Unused Code Detector.
It processes an entire project, or a specific file, and shows unused/dead methods, as well as suggesting visibility changes (e.g. a public method that could be protected or private).
CodePro was recently released by Google for the Eclipse platform. It is free and highly effective. The plugin has a 'Find Dead Code' feature with one or many entry points. It works pretty well.
I would instrument the running system to keep logs of code usage, and then start inspecting code that is not used for months or years.
For example if you are interested in unused classes, all classes could be instrumented to log when instances are created. And then a small script could compare these logs against the complete list of classes to find unused classes.
Of course, if you go down to the method level you should keep performance in mind. For example, the methods could log only their first use. I don't know how this is best done in Java. We have done it in Smalltalk, which is a dynamic language and thus allows for code modification at runtime: we instrument all methods with a logging call and uninstall the logging code after a method has been logged for the first time, so after some time no more performance penalties occur. Maybe a similar thing can be done in Java with static boolean flags...
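A rough Java approximation of that idea (the OrderService class and its method are hypothetical, and in practice such guards would more likely be woven in with AspectJ or a bytecode agent than written by hand): each method logs only its first invocation, guarded by a flag, so the steady-state overhead is a single atomic check.

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.logging.Logger;

public class OrderService {

    private static final Logger USAGE = Logger.getLogger("usage");
    // One flag per instrumented method; flipped exactly once on first use.
    private static final AtomicBoolean CANCEL_ORDER_SEEN = new AtomicBoolean();

    public void cancelOrder(long orderId) {
        if (CANCEL_ORDER_SEEN.compareAndSet(false, true)) {
            USAGE.info("first call: OrderService.cancelOrder");
        }
        // ... actual business logic ...
    }
}
```

Comparing the collected log against the full method list after a few months then gives a candidate list of dead methods.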
I'm surprised ProGuard hasn't been mentioned here. It's one of the most mature products around.
ProGuard is a free Java class file shrinker, optimizer, obfuscator, and preverifier. It detects and removes unused classes, fields, methods, and attributes. It optimizes bytecode and removes unused instructions. It renames the remaining classes, fields, and methods using short meaningless names. Finally, it preverifies the processed code for Java 6 or for Java Micro Edition.
Some uses of ProGuard are:
Creating more compact code, for smaller code archives, faster transfer across networks, faster loading, and smaller memory footprints.
Making programs and libraries harder to reverse-engineer.
Listing dead code, so it can be removed from the source code.
Retargeting and preverifying existing class files for Java 6 or higher, to take full advantage of their faster class loading.
Here is an example of listing dead code: https://www.guardsquare.com/en/products/proguard/manual/examples#deadcode
One thing I've been known to do in Eclipse, on a single class, is change all of its methods to private and then see what complaints I get. For methods that are used, this will provoke errors, and I return them to the lowest access level I can. For methods that are unused, this will provoke warnings about unused methods, and those can then be deleted. And as a bonus, you often find some public methods that can and should be made private.
But it's very manual.
Use a test coverage tool to instrument your codebase, then run the application itself, not the tests.
Emma and Eclemma will give you nice reports of what percentage of what classes are run for any given run of the code.
We've started to use FindBugs to help identify some of the funk in our codebase's target-rich environment for refactorings. I would also consider Structure101 to identify spots in your codebase's architecture that are too complicated, so you know where the real swamps are.
In theory, you can't deterministically find unused code. There's a mathematical proof of this (well, this is a special case of a more general theorem). If you're curious, look up the halting problem.
This can manifest itself in Java code in many ways:
Loading classes based on user input, config files, database entries, etc. (see the sketch after this list);
Loading external code;
Passing object trees to third party libraries;
etc.
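For instance, nothing in a static analysis of the following (hypothetical) snippet reveals which class will be instantiated, so a class with no source-level references is not necessarily dead:

```java
import java.util.Properties;

public class PluginLoader {

    // The class to load is only known at runtime, from configuration.
    public Object loadReportGenerator(Properties config) throws Exception {
        String className = config.getProperty("report.generator.class");
        return Class.forName(className).getDeclaredConstructor().newInstance();
    }
}
```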
That being said, I use IntelliJ IDEA as my IDE of choice, and it has extensive analysis tools for finding dependencies between modules, unused methods, unused members, unused classes, etc. It's quite intelligent too: a private method that isn't called is tagged as unused, but a public method requires more extensive analysis.
In Eclipse, go to Window > Preferences > Java > Compiler > Errors/Warnings
and change all of them to errors. Fix all the errors. This is the simplest way. The beauty is that this will allow you to clean up the code as you write.
IntelliJ has code analysis tools for detecting unused code. You should try making as many fields/methods/classes non-public as possible, and that will reveal more unused methods/fields/classes.
I would also try to locate duplicate code as a way of reducing code volume.
My last suggestion is to try to find open-source code which, if used, would make your code simpler.
The Structure101 slice perspective will give a list (and dependency graph) of any "orphans" or "orphan groups" of classes or packages that have no dependencies to or from the "main" cluster.
DCD is not a plugin for an IDE, but it can be run from Ant or standalone. It looks like a static analysis tool, and it can do things that PMD and FindBugs can't. I will try it.
P.S. As mentioned in a comment below, the project now lives on GitHub.
There are tools which profile code and provide code coverage data. This lets you see (as code is run) how much of it is being called. You can get any of these tools to find out how much orphan code you have.
FindBugs is excellent for this sort of thing.
PMD (Project Mess Detector) is another tool that can be used.
However, neither can find public static methods that are unused in a workspace. If anyone knows of such a tool then please let me know.
Use coverage tools, such as EMMA. But it's not a static tool (i.e. it requires you to actually run the application through regression testing, and through all possible error cases, which is, well, impossible :) )
Still, EMMA is very useful.
Code coverage tools, such as Emma, Cobertura, and Clover, will instrument your code and record which parts of it gets invoked by running a suite of tests. This is very useful, and should be an integral part of your development process. It will help you identify how well your test suite covers your code.
However, this is not the same as identifying real dead code. It only identifies code that is covered (or not covered) by tests. This can give you false positives (if your tests do not cover all scenarios) as well as false negatives (if your tests access code that is actually never used in a real world scenario).
I imagine the best way to really identify dead code would be to instrument your code with a coverage tool in a live running environment and to analyse code coverage over an extended period of time.
If you are running in a load-balanced, redundant environment (and if not, why not?) then I suppose it would make sense to only instrument one instance of your application and to configure your load balancer so that a random, but small, portion of your users run on your instrumented instance. If you do this over an extended period of time (to make sure that you have covered all real-world usage scenarios, such as seasonal variations), you should be able to see exactly which areas of your code are accessed under real-world usage and which parts are never accessed and hence are dead code.
I have never personally seen this done, and do not know how the aforementioned tools can be used to instrument and analyse code that is not being invoked through a test suite - but I am sure they can be.
There is a Java project, Dead Code Detector (DCD). For source code it doesn't seem to work well, but for a .jar file it's really good. Plus you can filter by class and by method.
For NetBeans, here is a dead code detector plugin.
It would be better if it could link to and highlight the unused code. You can vote and comment here: Bug 181458 - Find unused public classes, methods, fields
Eclipse can show/highlight code that can't be reached. JUnit can show you code coverage, but you'd need some tests and have to decide if the relevant test is missing or the code is really unused.
I found the Clover coverage tool, which instruments code and highlights the code that is used and the code that is unused. Unlike Google CodePro Analytics, it also works for web applications (in my experience, and I may be incorrect about Google CodePro).
The only drawback I noticed is that it does not take Java interfaces into account.
I use Doxygen to develop a method call map to locate methods that are never called. On the graph you will find islands of method clusters without callers. This doesn't work for libraries, since you always need to start from some main entry point.