IntelliJ, Gradle "implementation" doesn't work, transitive dependencies leak - java

Note: title updated, the problem seems to be in IntelliJ.
I've created a simple repo to test this problem:
https://github.com/fvigotti/gradle-implementation-error
Gradle 5.4.1, but also tested with previous versions before upgrading.
Expected Behavior
implementation should not leak "implementation libraries"
Current Behavior
everything leaks.
Context
I've created an empty ad-hoc sample project to demonstrate this..
https://github.com/fvigotti/gradle-implementation-error
Steps to Reproduce
gradle clean build publishToMavenLocal
then go to another project and import:
implementation "net.me:library-sample:1.0-SNAPSHOT"
this should not be exposed:
import org.apache.commons.codec.Decoder, but it is!
I'm testing more sources found on GitHub, and it seems that almost everyone uses
from components.java without worrying about implementation dependency leakage... I'm sure I'm missing something here.
thank you,
Francesco
UPDATE
here is the video of the issue :
https://vimeo.com/334392418

You say in your bug report to IntelliJ, "I'm surprised no one surfaced this". Probably the reason it hasn't surfaced is that what you are encountering is not a problem in the real world.
I think the real problem might be a matter of understanding the notoriously arcane nature of class loading in Java.
Class loading must be able to work at both the system level and the application level. Class.forName(String) is really working at the system level. But you're sort of driving the system level from the application level with that call on Class.
Gradle's api/implementation constraints apply at the application level only. Not at the system level. Outside of the context of a Gradle build run, Gradle can't enforce constraints on how the Java class loading system itself is designed to operate.
It's like I shouldn't be able to access my computer's CPU instruction register from a word processor application. But that doesn't mean that the operating system should be forbidden from interfacing with the CPU somehow.
In your case, your net.me.consumer.Main class is the "word processor". org.apache.commons.csv.CSVFormat and org.apache.commons.codec.Decoder are analogous to instructions in the CPU. And java.lang.Class is analogous to an operating system.
You can't access the CPU directly from your word processor. However, you can do things in a word processor that cause the OS to interface with the CPU. You're doing something analogous to that with Class.forName(decoderClazzName).
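For instance, here is a minimal sketch (the Decoder class name is taken from your repo; the wrapper class is made up) of how the reflective lookup bypasses compile-time scoping entirely, as long as commons-codec is somewhere on the runtime classpath:
public class ReflectiveLoad {
    public static void main(String[] args) throws ClassNotFoundException {
        // The class loader resolves this name at run time; javac and Gradle's
        // api/implementation separation play no part in the lookup.
        Class<?> decoderClazz = Class.forName("org.apache.commons.codec.Decoder");
        System.out.println("Loaded: " + decoderClazz.getName());
    }
}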
The same "leakage" your project demonstrates is reproducible in Eclipse too, and also in Visual Studio Code. In fact, I can add this to both your Main and MainTest classes of your consumer project:
org.apache.commons.codec.Decoder decoder = new Decoder() {
    @Override
    public Object decode(Object source) throws DecoderException {
        // TODO Auto-generated method stub
        return null;
    }
};
Eclipse not only allows it, it automatically creates it for me. Plus it adds the necessary import statements without me even asking it to. And very snappily and happily compiles it with no problems nor complaints. Even though there is no dependency on commons-codec defined in your consumer's build.gradle.
And though I haven't tried it in NetBeans, I suspect that you would get the same results in that or any other Java IDE.
By the same reasoning you used to file a bug with IntelliJ, you would also have to file a bug with every Java IDE, every Java-based application server, every JVM-based compiler, and so on.
I don't think it is reasonable to expect every software vendor out there in the Java ecosystem to comply with the constraints defined by one single dependency management tool. Regardless of how elephantine they like to think they are ;)

Related

Library does not find its own "sublibrary"

I'm trying to make an address list in Java, which saves its contents in an SQLite database.
Therefore (and for other future uses), I tried to create my own library for all kinds of database connections ("PentagonsDatabaseConnector-1.0.jar"). It currently supports SQLite and MySQL.
It references other libraries for them to provide the JDBC-drivers ("mysql-connector-java-8.0.16.jar" and "sqlite-jdbc-3.30.1.jar").
Problem: My library works just fine if I'm accessing it from its own project folder, but as soon as I compile it and add it to the "Adressliste" project, it isn't able to find the JDBC drivers anymore (I can access the rest of my self-written library without problems, though). Also, as shown in the screenshot, "PentagonsDatabaseConnector-1.0.jar" brings the JDBC libraries with it in its "lib" folder.
LINK TO THE SCREENSHOT
Do you guys have an idea what's wrong?
Thank you for your help!
Ps: Sorry for bad English, I'm German :)
Java cannot read jars-in-jars.
Dependencies come in a few flavours. In this case, PentagonsDC is a normal dependency; it must be there at runtime, and also be there at compile time.
The JDBC libraries are a bit special; they are runtime-only deps. You don't need them to be around at compile time. You want this, because JDBC libraries are, as a concept, pluggable.
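To make that concrete, here is a minimal sketch (assuming the sqlite-jdbc jar is on the runtime classpath; the database file name is made up): the code below compiles against nothing but java.sql, and the actual driver class is only looked up by DriverManager when the program runs.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class SqliteSmokeTest {
    public static void main(String[] args) throws SQLException {
        // Only java.sql types appear at compile time; the sqlite-jdbc driver
        // is discovered via the ServiceLoader mechanism at run time.
        try (Connection conn = DriverManager.getConnection("jdbc:sqlite:addresses.db")) {
            System.out.println("Connected: " + !conn.isClosed());
        }
    }
}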
Okay, so what do I do?
Using a build system to manage your dependencies is the answer 90%+ of Java programmers go to, and it is what I recommend you do here. Particularly for someone starting out, I advise Maven. Here you'd just put the names of your dependencies in a text file and Maven takes care of it, at least at compile time.
For the runtime aspect, you have a few options. It depends on how your java app runs.
Some examples:
Manifest-based classpaths
You run your java application 'stand alone', as in, you wrote the psv main(String[]) method that starts the app and you distribute it everywhere it needs to run. In this case, the usual strategy is to have an installer (you need a JVM on the client to run your application, and neither Oracle nor any OS vendor supports maintaining a functioning JVM on end users' systems anymore; it is now your job, which is unfortunately non-trivial). Given that you have that, you should deploy your jars such that they contain this in the manifest (jars are zips; the manifest ends up at META-INF/MANIFEST.MF):
Main-Class: com.of.yourproj.Main
Class-Path: lib/sqlite-jdbc.jar lib/mysql-jdbc.jar lib/guava.jar
And then have a directory structure like so:
C:\Program Files\yourapp\yourapp.jar
C:\Program Files\yourapp\lib\sqlite-jdbc.jar
C:\Program Files\yourapp\lib\mysql-jdbc.jar
Or the equivalent on any other OS. The classpath entries in the manifest are space separated and resolved relative to the dir that 'yourapp.jar' is in. Done this way, you can run yourapp.jar from anywhere, and all entries listed in Class-Path are available to it.
Build tools can make this manifest for you.
Shading / Uberjars
Shading is the notion of packing everything into a single giant jar; not jars-in-jars, but unpack the contents of your dependency jars into the main app jar. This can be quite slow in the build (if you have a few hundred MB worth of deps, those need to be packed in and all class files need analysis for the shade rewrite, that's a lot of bits to process, so it always takes some time). The general idea behind shading is that deployment 'is as simple as transferring one jar file', but this is not actually practical, given that you can no longer assume that end users have a JVM installed, and even if they do, you cannot rely on it being properly up to date. I mention it here because you may hear this from others, but I wouldn't recommend it.
If you really do want to go for this, the only option is build systems: They have a plugin to do it; there is no command line tool that ships with java itself that can do this. There are also caveats about so-called 'signed jars' which cannot just be unpacked into a single uberjar.
App container
Not all java apps are standalone where you provide the main. If you're writing a web service, for example, you have no main at all; the framework does. Instead of a single entrypoint ('main' - the place where your code initially begins execution), web services have tons of entrypoints: one for every URL you want to respond to. The framework takes care of invoking them, and usually these frameworks have their own documentation and specs for how dependencies are loaded. Usually it is a matter of putting a jar in one place and its dependencies in a subdir named 'lib', or you build a so-called war file, but, really, there are many web frameworks and many options for how they do this. The good news is, usually it's simple and the tutorial of said framework will cover it.
This advice applies to any 'app container' system; those are usually web frameworks, but there are non-web related frameworks that take care of launching your app.
Don't do these
Don't force your users to manually supply the -classpath option or mess with the CLASSPATH environment variable.
Don't try to write a custom classloader that loads jars-in-jars.
NB: SQLite is rather complicated for Java; it's not getting you many of the benefits that the 'lite' is supposed to bring you, as it is a native dependency. The simple, works-everywhere solution in the Java sphere is 'h2', which is written entirely in Java, so shipping the entire H2 engine as part of the Java app is possible with zero native components.

NiFi custom processors with shared references use the oldest version

I've been working with NiFi for a few months and noticed a dependency pattern that doesn't make sense; hopefully I can describe it clearly in case someone can shed light.
We've been prototyping several related but distinct custom processors, and these processors all use some common jar libraries.
For example,
Processors A and B use Library1
ProcessorA gets developed along with some of the code in Library1; we build Library1 and then build ProcessorA for testing on the NiFi server, and all is good
ProcessorB also gets developed along with some of the code in Library1, and Library1 is rebuilt before building ProcessorB as a deployable nar
Then, when testing ProcessorB on the NiFi server (including attaching to the process to step through), we find we need to update relevant code in Library1
But, despite this update to the library, ProcessorB still executes the old library code. A specific example was a for loop; it would still initialize int i = 1 after I changed the code to int i = 0.
Only after I did a clean/rebuild of ALL the processors (both A and B) did ProcessorB start recognizing the updated library code.
This is surprising since every processor nar package should have its own copy of the library jar, but it is acting like the earliest version takes precedence.
My question, finally: is this expected behavior when using shared code libraries among nar packages, or is there a better practice for architecting these? Or am I missing some wisdom from the java/maven realm?
NiFi is v1.8, but we also experienced this in 1.5.
Many TIAs

A tool to detect broken JAR dependencies on class and method signature level

The problem scenario is as follows (note: this is not a cross-jar dependency issue, so tools like JarAnalyzer, ClassDep or Tattletale would not help. Thanks).
I have a big project which is compiled into 10 or more jar artifacts. All jars depend on each other and form a dependency hierarchy.
Whenever I need to modify one of the jars, I would check out the relevant source code and the source code for projects that depend on it. Modify the code, compile, repackage the jars. So far so good.
The problem is: I may forget to check out one of the dependent projects, because inter-jar dependencies can be quite long and may change with time. If this happens, some jars may go "out-of-sync" and I will eventually get a NoSuchMethodException or some other class incompatibility issue at run-time, which is what I want to avoid.
The only solution I can think of, the most straightforward one, is to check out all projects and recompile the bunch. But this takes time, especially if I re-build it on every small change. I do have a continuous integration server that could do this for me, but it's shared with other developers, so just waiting to see whether the build breaks there is not an option for me.
However, I do have all the jars, so hypothetically it should be possible to verify whether the jars which depend on the code that I modified have an inconsistency in method signatures, class names, etc. But how could I perform such a check?
Has anyone faced a similar problem before? If so, how did you solve it? Any tools or methodologies would be appreciated.
Let me know if you need clarification. Thanks.
EDIT:
I would like to clarify my question a little bit.
The ultimate goal of this task is to check that the changes that I have made will compile against the whole project. I am looking for a tool/technique that would aid me perform such check.
Consider this example:
You have 2 projects: A and B which are deployed as A.jar and B.jar respectively. A depends on B.
You wish to modify B, so you check it out and modify a method signature that A happens to depend on. You can compile B and run all tests by itself without any problems because B itself does not depend on anything. So you happily commit your changes.
In a few hours the complete project integration fails because A could not be compiled!
How do I avoid this?
The kind of tool I am looking for would retrieve A.jar and check that all dependencies in A on the new modified B are still fine. Like a potential compilation error that would happen if I were to recompile A and B sources together.
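For what it's worth, here is a rough sketch of such a check using the compiler API shipped with the JDK (all paths here are hypothetical): compile A's sources with the freshly built B.jar on the classpath and treat a non-zero exit code as a broken contract.
import javax.tools.JavaCompiler;
import javax.tools.ToolProvider;

public class RecompileCheck {
    public static void main(String[] args) {
        JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
        // Compile A's sources against the modified B.jar; a non-zero result
        // means the change in B broke A's compile-time contract.
        int result = compiler.run(null, null, null,
                "-classpath", "build/B.jar",
                "-d", "out",
                "src/a/Main.java");
        System.out.println(result == 0 ? "A still compiles against the new B" : "Breakage detected");
    }
}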
Another solution, as was suggested by many of you, is to set up a local continuous integration system that would recompile the whole project locally. I don't mind doing this, but I want to avoid doing it inside my workspace. On the other hand, if I check-out all sources to another temporary workspace, then I need to mirror my local changes to the temporary workspace.
This is quite a big issue in my team, as builds break very often because somebody forgot to check out (or open in Eclipse) the right set of projects. I tried persuading people to check out the source and recompile the bunch before commits, but not only does it take time, it requires running quite a few commands, so most people just find it too troublesome to do. If the technique is not easy or automated, then it's unusable.
If you do not want to use your shared continuous integration server, you should set up a local one on your developer machine, where you perform the rebuild process on every change.
I know Jenkins: it is easy to set up (just start) on a local machine, and I would advise running it locally if none that fits your needs is provided in the IT infrastructure.
Checking signatures is unfortunately not enough. Having the correct signatures does not mean it'll work. It's all about contracts and not just signatures. I mean, what happens if the new version of a library has the same method signature, but now expects the elements of an ArrayList parameter in reversed order? You will run into issues, sooner or later. I guess you should maybe consider adopting tools like Ivy or Maven:
http://ant.apache.org/ivy/
http://maven.apache.org/
Yes, it can be a pain to implement, but once you have it, it will "guard" your versions forever. You should never run into such an issue again. But even those build tools are not 100% accurate. The only proper way of dealing with incompatible libraries, and I know you won't like my answer, is extensive regression testing. For this you need a bunch of testing tools. There are plenty of them out there: from very basic unit testing (JUnit) to database testing (JDBC Proxy) and UI testing frameworks like SWTBot (depending on whether your app is a web app or a thick client).
Please note that if your project gets really huge and you have a large number of dependencies, you are never using all of the code in them. Trying to check all interfaces and all signatures is way too much. It's not necessary to test it all when your code uses, let's say, 30% of the library code. What you need is to test what you really use. And this can only be done with extensive regression testing.
I have finally found a whole treasure box of answers at this post. Thanks for help, everyone!
The bounty goes to K. Claszen for the quickest and most input.
I also think that just setting up a local Jenkins is the best idea. What tool do you use for your build? Maybe you can improve your situation by switching to Maven as the build tool? It is smarter and doesn't recompile the full project unless you ask it to directly. But switching to it can be a HUGE pain in the neck; it largely depends on how your project is organized now...
As for the VCS: there is a Mercurial/SVN bridge, so you can use local Mercurial for your development...
check this link: https://www.mercurial-scm.org/wiki/WorkingWithSubversion
There is a solution, jarjar, which allows different versions of the same library to be included multiple times in the dependency graph.
I use IntelliJ, not Eclipse, so maybe my answer is too IDE-specific. But in IntelliJ, I would simply include the modules from B into A, so that when I make changes to A, it breaks B immediately when compiling in the IDE. Modules can belong to multiple projects, so this is not anything like duplication, it's just adding references in the IDE to modules in other projects.

IntelliJ Doesn't Notice Changes in Interface?

[I've decided to give IntelliJ another go (to replace Eclipse), since its Groovy support is supposed to be the best. But back to Java...]
I have an Interface that defines a constant
public static final int CHANNEL_IN = 1;
and about 20 classes in my Module that implement that interface. I've decided that this constant was a bad idea so I did what I do in Eclipse: I deleted the entire line. This should cause the Project tree to light up like a Christmas tree and all classes that implement that interface and use that constant to break. Instead, this is not happening. If I don't actually double-click on the relevant classes -- which I find using grep -- the module even builds correctly (using Build -> Make Module). If I double-click on a relevant class, the error is shown both in the Project Tree and in the Editor.
I am not able to replicate this behavior in small tests, but in large modules it works (incorrectly) this way. Is there some relevant setting in IntelliJ for this?
What you have here is an interaction between a standard java issue and a standard IDEA behavior. Constant expressions like this are inlined in the class compilation (as per the Java Language Specification), so in fact the class referencing this constant did not change just because you removed the line (obviously) and there is no recorded dependency between the constant and the class anymore since it was inlined. This causes the compilation to not fail (the class wouldn't fail at runtime either if that was the only change - it will only fail when you do a clean build).
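A small illustration of the inlining (names are invented, not from your project):
interface Channels {
    int CHANNEL_IN = 1; // a compile-time constant (implicitly public static final)
}

class Receiver {
    int channel() {
        // javac copies the literal 1 into Receiver.class here, so the compiled
        // class no longer references Channels.CHANNEL_IN at all; deleting the
        // constant therefore does not invalidate the already-compiled class.
        return Channels.CHANNEL_IN;
    }
}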
One way around that in IDEA is to do a Build->Rebuild Project when you have such a change. The other is in Settings->Compiler there is an Honor Dependencies on "Compile" command. This can adversely affect performance in large projects (hence it is disabled by default), but is supposed to solve this kind of problem.
The other part of this problem is that IDEA does not automatically recalculate all inspections on a change like that. It recalculates when you open a file. I'm not aware of a setting that makes IDEA do that. When you rebuild, any problems found will get highlighted (up to where the compiler gave up), but the highlight won't go away until you open the class or recompile as well.

How to mark some code that must be removed before production?

Sometimes for testing/developing purposes we make some changes in the code that must be removed in a production build. I wonder if there is an easy way of marking such blocks so that production build would fail as long as they are present or at least it will warn you during the build somehow.
A simple "//TODO:" doesn't really work because it is often forgotten and mixed with tons of other todos. Is there anything stronger?
Or maybe I could even create some external txt file with instructions on what to do before production, and Ant would check whether that file is present and cancel the build if it is.
We are using Eclipse/Ant (and java + Spring).
Update: I don't mean that there are big chunks of code that are different in local and production. In fact, all the code is the same and should be the same. Let's just say I comment out some line of code to save a lot of time during development and forget to uncomment it, or something along those lines. I just want to be able to flag the project somehow that something needs attention, so that a production build would fail or show a warning.
Avoid the necessity. If you're placing code into a class that shouldn't be there in production, figure out how to do it differently. Provide a hook, say, so that the testing code can do what it needs to, but leave the testing code outside the class. Or subclass for testing, or use Dependency Injection, or any other technique that leaves your code valid and safe for production, while still testable. Many such techniques are well-documented in Michael Feathers' fantastic book, Working Effectively with Legacy Code.
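A quick sketch of the "hook plus test subclass" idea (all names here are invented):
public class ReportGenerator {
    public String generate() {
        return "Report created at " + timestamp();
    }

    // Hook: production behaviour lives here; a test subclass can override it.
    protected long timestamp() {
        return System.currentTimeMillis();
    }
}

// Lives only in the test sources and is never shipped with the production build.
class FixedTimeReportGenerator extends ReportGenerator {
    @Override
    protected long timestamp() {
        return 0L;
    }
}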
You could also just define stronger task comment markers: FIXME (high priority) and XXX (normal priority) are standard in Eclipse, and you could define more task tags (Eclipse Properties -> Java -> Compiler -> Task Tags)
If you want to fail your build, you could use the Ant (1.7) contains file selector to look for files containing specified text:
<target name="fixmeCheck">
    <fail message="Fixmes found">
        <condition>
            <not>
                <resourcecount count="0">
                    <fileset dir="${pom.build.sourceDirectory}"
                             includes="**/*.java">
                        <contains text="FIXME" casesensitive="yes"/>
                    </fileset>
                </resourcecount>
            </not>
        </condition>
    </fail>
</target>
<target name="compile" depends="fixmeCheck">
Obviously, change ${pom.build.sourceDirectory} to your source directory, and FIXME to the comment that you want to search for.
Does anyone know a nice way to print out the files found in this fileset in the build file (other than just looking in Eclipse again)?
Add a unit test that fails if the block is present. Maybe the block sets a global variable CODE_BLOCK_IS_NOT_DELETED = true; that the unit test checks for.
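A minimal sketch of that unit test, assuming JUnit 4 and a hypothetical flag holder class:
import static org.junit.Assert.assertFalse;
import org.junit.Test;

public class NoDebugCodeTest {

    // Stand-in for a flag that the debug-only block would set in the main sources.
    static class DebugFlags {
        static final boolean CODE_BLOCK_IS_NOT_DELETED = true;
    }

    @Test
    public void debugCodeMustBeRemovedBeforeProduction() {
        assertFalse("Debug-only code block is still present",
                DebugFlags.CODE_BLOCK_IS_NOT_DELETED);
    }
}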
However, your bigger problem is that you test/develop with code that you don't need or use in production. That doesn't sound right.
One somewhat dirty suggestion would be to create a class with a static method, let's say
class Prod {
    public static void uction() {
    }
}
and then mark the places you want with
Prod.uction();
Then before production simply delete the class and you will get compiler errors where needed :D
However you technically solve this, I would recommend to do it the other way round: do not do something special for the production build but structure your code and build environment in such a way that the magic happens during the development build. The production build should be as foolproof (or Murphy proof) as possible.
If something goes wrong in the development build: so what.
Anything going wrong in the production build will hurt much more.
[edit:] Works for C++... :-)
Use these preprocessor definitions and all your problems will be solved:
#ifdef _DEBUG
#define COMMENT(code) /* code */
#else
#define COMMENT(code) #error "Commented out code in release!"
#endif
Not sure if the syntax is entirely correct, but you get the idea.
We added a trigger to Subversion that blocks //NOCOMMIT: You could have a //NODEPLOY: tag that your build script would look for before allowing a build.
TDD and Dependency Inversion concepts might help you here. By putting the code that varies into a class that implements an interface, you can control when the Test version of that code runs and when the prod version runs.
Then you have a file, clearly named as being for testing, that you can leave out of your build.
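For example, a rough sketch of that split (all names are invented):
interface Clock {
    long now();
}

// Production implementation, kept in the main source folder.
class SystemClock implements Clock {
    public long now() {
        return System.currentTimeMillis();
    }
}

// Test implementation, kept in a clearly named testing folder that the production build leaves out.
class FixedClock implements Clock {
    public long now() {
        return 42L;
    }
}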
In projects I've worked on, I've had various tidbits of code that are in place to enable easy testing during development. I wrap these in an if block that checks a final boolean. When the boolean is true, the code can be accessed. When the boolean is false, I depend on the compiler removing the code from the resulting .class files as an optimization. For instance:
public class Test {
    public static void main(String[] args) {
        final boolean TESTABLE = true;
        if (TESTABLE) {
            // do something
        }
    }
}
Typically, I manage these variables on my own, using them during development and setting TESTABLE to false when I'm done. A development team could easily agree to a convention for variable names, like TESTABLE, and the build file's production target could check for and fail if any source files had a TESTABLE variable = true.
In addition to all the above suggestions (what's with all the manual crap and adding cruft to the code? automate things people...), I notice that you're using Eclipse, Spring, and ANT. Eclipse supports multiple source folders - separate your code out into a "source" and "testing" folder, put anything for production in the source folder and put anything "not production" in the testing folder. Spring allows you to have multiple configurations that reference different implementations - so you can have a production configuration that references classes only in production, and testing configuration(s) to run with your testing code. Have the ANT script build the production and testing versions of your app - for testing add the "testing" folder to your compile path, for production leave it off. If a class references a testing class from production you'll get a compile error - if a production Spring configuration references a class from testing in production, it will fail as soon as it tries to load it.
Maybe if you mark those classes/methods as deprecated, they would be flagged at compile time?
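A minimal sketch of that idea (the class and method names are made up): callers get a deprecation warning at compile time, which javac can escalate with -Xlint:deprecation -Werror.
public class DebugHelpers {
    /** Debug-only helper; remove before production. */
    @Deprecated
    public static void dumpState() {
        System.out.println("debug state");
    }
}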
For our production environments, we have a couple of simple C tools for stripping out sections marked with very special comments: /*#BEGIN_SKIP*/ and /*#END_SKIP*/. Stick to the standard C run-time, and you can compile in any environment.
You can change your entire build cycle to replicate the source code, transform it, and compile it.
I would try to avoid this as far as possible. An alternative approach would be to use dependency injection to inject different implementations for testing.
Or...
Add an inTest boolean field to the objects and wrap the optional code in an if statement.
if (inTest) {
    testMethod();
}
You could set this boolean with dependency injection or read it from a passed-in system property (-DinTest=true).
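For the system-property variant, a minimal sketch (the class and property name are just examples):
public class TestMode {
    // True only when the JVM was started with -DinTest=true.
    public static final boolean IN_TEST = Boolean.getBoolean("inTest");
}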
Hope this helps.
You can use a java preprocessor. For j2me applications I use antenna preprocessor. The code looks like this
public void someMethod() {
    //#ifdef DEBUG
    doDebugStuff();
    //#endif
}
Eclipse allows for other markers than just //TODO, you can add, for example, //TOBEREMOVED and give it a high priority, so it shows up before all the other TODO markers.
Just add some //TODO: markers, then make a C# script (cs-script.net) which looks for //TODO in your code and displays them. You can then add this script to your automated builds (if you're doing that), so each time you do a build, you can see what there is to do. Review your code's todo list before deploying.
Alternatively to writing your own script, there's some instructions on how to integrate vstudio with some tool that points out your todo lines as well: http://predicatet.blogspot.com/2006/12/show-all-tasks-in-visual-studion-2005-c.html
However, it seems to me, setting up that tool is more of a pain than writing a simple C# script with a regex.
I use the //FIXME keyword that eclipse displays, together with //TODO, in the Tasks View (you can filter what to see on it).
You shouldn't go out to production if there is some //FIXME around :)
My solution is to work on two separate branches of code: one production branch which only gets clean code without any debugging code, and another (sometimes I even have several of these) for testing, debugging, trying new stuff, etc.
In eclipse these are separate projects (or groups of projects).
Mercurial is a nice VCS for this type of work but CVS and subversion are good too.
The obvious way to solve this is to have a unit test that only runs on the build that is intended to build the files for production (or checks if the current build is targeted for production and runs the test if it is) and fail the build if the test fails.
You won't ever forget. In terms of what kind of test, ideally it would actually check for whatever the code does. If this is not possible, then a global static as Terry Lorber suggested would be a lot better than what you have now.
