In my multi-module project, I created module-info.java for only a few modules. During compilation with maven-compiler-plugin:3.7.0 I get the following warning:
[WARNING]
* Required filename-based automodules detected. Please don't
publish this project to a public artifact repository! *
What does it mean? Is that because I have only a few modules with module-info.java and not the whole project?
Automatic module recap
An explicit module (i.e. one with a module-info.java) can only access code from modules that it requires (ignoring implied readability for a moment). That's great if all dependencies are modularized, but what if they are not? How do you refer to a JAR that isn't modular?
Automatic modules are the answer: Any JAR that ends up on the module path gets turned into a module. If a JAR contains no module declaration, the module system creates an automatic module with the following properties:
inferred name (this is the important bit here)
reads all other modules
exports all packages
Maven relies on that mechanism: once you create a module-info.java, it places all dependencies on the module path.
Automatic names
There are two ways to infer an automatic module's name:
entry in the manifest
guess from the JAR file name
In the first case, the name was deliberately picked by the maintainer, so it can be assumed to be stable (for example, it doesn't change when the project gets modularized). The second is obviously unstable across the ecosystem - not all project setups lead to the exact same file names for their dependencies. As an example of the derivation, a file named guava-19.0.jar becomes the module guava.
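If you are unsure which name a given JAR ends up with, you can ask the module system itself. A minimal sketch (the class name and argument handling are just for illustration):

import java.lang.module.ModuleFinder;
import java.nio.file.Paths;

public class PrintModuleName {
    public static void main(String[] args) {
        // Scan the JAR given as the first argument and print the module name
        // the module system derives: explicit, manifest-based, or filename-based.
        ModuleFinder.of(Paths.get(args[0])).findAll().forEach(ref ->
                System.out.println(ref.descriptor().name()
                        + (ref.descriptor().isAutomatic() ? " (automatic)" : "")));
    }
}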
What does it mean?
The reason for the warnings is that some of your dependencies are automatic modules and do not define their future module name in the manifest. Instead, their name is derived from the file name, which makes them unstable.
Stable names
So why are unstable names such a problem? Assume your library gets published with requires guava and my framework gets published with requires com.google.guava. Now somebody uses your library with my framework and suddenly they need the modules guava and com.google.guava on their module path. There is no painless solution to that problem, so it needs to be prevented!
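To make this concrete, here is a hypothetical sketch of the two declarations (each in its own module-info.java; the module names your.library and my.framework are made up):

// your library's module-info.java - guava was derived from the file name
module your.library {
    requires guava;
}

// my framework's module-info.java - the name came from a manifest entry
module my.framework {
    requires com.google.guava;
}

The same Guava JAR resolves to only one module name, so no single module path satisfies both requires clauses painlessly.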
How? For example by discouraging developers from publishing artifacts that depend on filename-based automatic modules. 😉
[WARNING] * Required filename-based automodules detected. Please don't
publish this project to a public artifact repository! *
Is that because I have only a few modules with module-info.java and not the whole project?
No, it's not because only a few of your modules have a module-info.java; the warning is generated by the maven-compiler-plugin for all the automatic modules found in the module graph.
What does it mean?
You are urged not to publish the current project because the automatic modules among its dependencies are expected to be converted into named, explicit modules by their owners and republished, which may well change their module names. An additional precautionary point: according to Maven's progress document (Java+9+-+Jigsaw), the JDK 9 compatible plugin versions are not yet completely ready.
To portray an example of such a use case, think along these lines:
I've published an artifact com-foo-bar:1.0.0-SNAPSHOT:jar.
Another project of mine com-xyz:1.0.0 depends on it.
Eventually a project of yours relies on com-foo-bar transitively via com-xyz.
You plan to modularize your code and make use of something like
module your.module {
    requires com.foo.bar;
    requires com.xyz;
}
(you need to specify the transitive dependencies in the module declarations separately)
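As an aside, implied readability is how an explicit com.xyz could spare you that separate requires; a hypothetical sketch:

module com.xyz {
    // consumers of com.xyz automatically read com.foo.bar as well
    requires transitive com.foo.bar;
}

Since com.xyz here is itself not yet modularized, you cannot count on that, hence the two requires entries above.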
All worked fine, until the day I decided to modularize my libraries.
Now, the first thing I did was name my modules!
And I did something fantastic to explicitly call out my efforts, like this:
module modular.com.foo.bar {}
With that rename, I end up breaking the code of any library that depends on mine, and eventually any that depends on yours, in a modularized way.
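Concretely, a consumer whose module-info.java still says requires com.foo.bar would now fail at startup with an error along these lines:

Error occurred during initialization of boot layer
java.lang.module.FindException: Module com.foo.bar not found, required by your.module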
Note: I agree that SNAPSHOTs should not be used in production, but there can be cases where you end up relying on an artifact that is still in its development phase.
Edit: From the comments by @khmarbaise
It's understood that people may wish to publish to artifact repositories, but if they are not aware of the consequences, they will be bitten by this in the future.
Maven wants to make it clear that the WARNING in this case is very serious. It could have been a FAILURE instead, but that would have been a bad user experience to deal with.
The ideal way to deal with this is for library owners to migrate their artifacts to JDK 9, traversing the dependency tree bottom-up, so that in the end only named, explicit modules prevail, with no need for automatic module names and such warnings.
With the maven-compiler-plugin v3.7.0 it's an informational message. Not sure why you see it as a warning...
Here's what I get when I build my Java 10 based project with a module-info.java:
[INFO] Required filename-based automodules detected. Please don't publish this project to a public artifact repository!
From the viewpoint of a Gradle Java library author, I understand that a dependency specified in the implementation configuration will be marked with the runtime scope in the resulting POM file that gets published (using the maven-publish Gradle plugin). This makes sense: anyone consuming my published library doesn't need the dependency for compilation (it is an internal dependency), only at runtime. If I specify a dependency in the api configuration, it will be marked with the compile scope in the resulting POM file, which again makes sense, as anyone consuming my library needs it for compilation (and runtime).
This makes me believe that the meaning of a Maven dependency scope is relative to anyone consuming the component, not relative to the component itself. For a published Maven library (containing Java class files), a dependency marked with compile should then mean:
If you compile against me, then use this dependency on the compilation classpath too!
However, according to the Maven docs, it seems that it means:
I was compiled with that dependency on my compilation classpath, and if you want to compile me again, do the same!
If this were true, then one could not distinguish between API-dependencies and implementation-dependencies like Gradle does. Also, it would only make sense if the published component actually contains the sources, not only the class files.
Did Gradle actually "misuse" the meaning of these scopes to make some improvements, or did I fundamentally misunderstand something?
Gradle cleverly "misuses" the scopes.
Maven has the design flaw that the build POM is published 1:1 as the consumer POM (this will change with the upcoming Maven 4.x). So Maven has no way to use a dependency for compilation within the project but expose it only at runtime to consuming projects (at least not without applying tricks). The Maven docs therefore do not discuss an "implementation/api" distinction.
I have a JavaFX application that works as expected. I need to use Apache POI to read and write excel files. The following are the steps I have taken:
Added the required dependency
implementation 'org.apache.poi:poi-ooxml:5.2.3'
Added the module to module-info.java
requires org.apache.poi.ooxml;
Tried to use the library within a function:
import org.apache.poi.xssf.usermodel.XSSFWorkbook;

@FXML
private void downloadTemplate() {
    XSSFWorkbook workbook = new XSSFWorkbook();
}
All this is fine, with no issues. However, when I try to run the application, I get the following two errors (alternating between runs):
> Task :Start.main() FAILED
Error occurred during initialization of boot layer
java.lang.module.FindException: Module SparseBitSet not found, required by org.apache.poi.ooxml
and
> Task :Start.main() FAILED
Error occurred during initialization of boot layer
java.lang.module.FindException: Module commons.math3 not found, required by org.apache.poi.ooxml
I can, however, clearly see both libraries under 'External Libraries'.
I am using IntelliJ Community Edition 2022.1.2 and running the project using Java 17.0.1. Any help would be highly appreciated.
SparseBitSet is an automatic module: it has no module-info of its own and no Automatic-Module-Name entry in its manifest (commons-math3 probably is the same).
Gradle puts libraries without a module-info.class or an Automatic-Module-Name in their manifest on the class path, not the module path, so they won't be treated as modules, and the module finder won't find them.
You can:
Hack the Gradle build to allow the modules to be found. (I don't use Gradle, so I have no specific advice on how to do that other than referring to the documentation.)
Hack the library jar which you want to be treated as a module to include a module-info.class or an Automatic-Module-Name in its manifest.
Or, switch to Maven, which automatically places automatic modules on the module path.
The easiest way to do this, IMO, is to create a new JavaFX project in Idea, then add the required dependencies as maven dependencies and add your code.
Or, as swpalmer suggests in the comments, request that library maintainers update their codebase to make their libraries modular.
And, when you run your app, make sure all jars are on the module path, not the class path.
Or, make your app non-modular by removing its module-info.java, then manually place the JavaFX modules on the module path and add them with the --add-modules switch (a sketch of such a launch command follows below).
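For that last, non-modular option, the launch command might look something like this (the SDK location is a placeholder, and the module list should match the JavaFX modules your app actually uses):

java --module-path /path/to/javafx-sdk/lib --add-modules javafx.controls,javafx.fxml -jar yourapp.jar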
FAQ
Are you SURE that automatic modules are put on the class path by Gradle?
From the Gradle documentation section Building Modules for the Java Module System:
To tell the Java compiler that a Jar is a module, as opposed to a
traditional Java library, Gradle needs to place it on the so called
module path. It is an alternative to the classpath, which is the
traditional way to tell the compiler about compiled dependencies.
Gradle will automatically put a Jar of your dependencies on the module
path, instead of the classpath, if these three things are true:
java.modularity.inferModulePath is not turned off
We are actually building a module (as opposed to a traditional
library) which we expressed by adding the module-info.java file.
(Another option is to add the Automatic-Module-Name Jar manifest
attribute as described further down.)
The Jar our module depends on is itself a module, which Gradle
decides based on the presence of a module-info.class — the compiled
version of the module descriptor — in the Jar. (Or, alternatively,
the presence of an Automatic-Module-Name attribute in the Jar manifest.)
It is the third point that is key. Java can treat a library with no module-info.class and no Automatic-Module-Name in the Jar manifest as an automatic module if it is on the module path. However, Gradle will, by default, only place libraries that fulfill one of those two conditions on the module path.
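A quick way to confirm where a library ended up at runtime is to ask one of its classes for its module, since class-path code belongs to the unnamed module. A one-line sketch:

// Prints e.g. "module org.apache.poi.ooxml" when the library was loaded from
// the module path, or "unnamed module @..." when it came from the class path.
System.out.println(org.apache.poi.xssf.usermodel.XSSFWorkbook.class.getModule());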
Using jewelsea's answer above, I have been able to solve the problem. I am posting the answer here to help anyone else who encounters the problem in future.
So, the overall problem is, as said in the answer above, both SparseBitSet and commons-math3 are automatic modules with no module-info of their own. The solution that worked for me was to convert them into the modules expected by the project. Here are the steps I took:
Use the Gradle plugin 'extra-java-module-info'. The GitHub page didn't show how to apply it in a regular Gradle build file, so here it is:
plugins {
    id 'org.gradlex.extra-java-module-info' version '1.0'
}
Note the names that your application expects for the modules. In my case, from the error messages thrown, they were 'SparseBitSet' and 'commons-math3'
Locate the said libraries in the sidebar under 'External Libraries' and note the jar file names. In my case, they were 'commons-math3-3.6.1.jar' and 'SparseBitSet-1.2.jar'.
Add an 'extraJavaModuleInfo' section to your Gradle file and use the parameters as follows: module('jar file name', 'name expected by your project', 'jar version').
extraJavaModuleInfo {
    module('commons-math3-3.6.1.jar', 'commons.math3', '3.6.1')
    module('SparseBitSet-1.2.jar', 'SparseBitSet', '1.2')
}
That's it. Try to sync and run your project. Thanks jewelsea.
I have a project 'java11-core' that generates a test jar artifact to share with project 'java11-app'. These projects build fine with command line Maven, but within Eclipse, the classes shared in the test jar cannot be found.
Version Info:
Apache Maven 3.6.0 (command line and Eclipse)
Java version: 11.0.1, vendor: Oracle Corporation
Eclipse IDE: Version: 2018-09 (4.9.0)
M2E Plugin: 1.9.1.20180912-1601
I originally created these two projects as traditional non-JPMS projects. They compiled and ran tests normally, as expected. After I added module-info.java to both java11-core and java11-app, the Eclipse compiler could not recognize the shared test files from the core project.
Here is a snapshot of the package explorer for an overview of the project structure.
The added java11-app and java11-core module-info contents respectively:
module com.java11.app {
    exports com.java11.app;
    requires com.java11.core;
}

module com.java11.core {
    exports com.java11.core;
}
As you can see, I do not export the test utilities package from com.java11.core. I do not want to export the test packages because this would make the test classes publicly available. I also do not wish to introduce a new test project, because in real-world scenarios, this is very likely to require cyclic dependencies between test utilities and the projects they assist in testing.
Build errors occur in AppTest.java. What is interesting about the failure reported by Eclipse is that it does not claim it cannot find the CoreTestUtil class, but rather:
The type com.java11.test.core.util.CoreTestUtil is not accessible AppTest.java /java11-app/src/test/java/com/java11/app line 8 Java Problem
CoreTestUtil cannot be resolved AppTest.java /java11-app/src/test/java/com/java11/app line 21 Java Problem
My assumption is that the lack of an export for this package from java11-core, and/or the lack of a requires for it in java11-app, makes Eclipse believe the access is restricted, even though the classes exist in a separate test-jar.
The module path for java11-app shows it includes java11-core as a module, and the Without test code is set to No.
I know I am working with newly released features and suspect that sharing test classes across Eclipse JPMS projects is not yet supported. But I am not sure where to look (Eclipse? the M2E plugin?) for an update on it being supported. I am also not aware of a workaround that would allow me to be productive while adopting JPMS for my software projects.
For those that believe test utilities should not be shared this way...
This subject has been characterized as a best-practice issue that should be resolved by refactoring test utilities into a separate module. I respect this perspective, but in attempting to follow that guidance, I found myself being forced to violate other best-practices, including DRY (Don't Repeat Yourself), and cyclic dependencies between packages.
It is common for a test utility to emerge while developing a module that both assists in effective testing of that module and depends on that module. This creates a cycle if those utilities are pulled out into a separate module. Furthermore, some of these utilities can be equally useful when testing other modules that depend on that module. This creates duplicate code if those utilities are copied into a new test module for dependents. This reasoning may be why Maven 'test-jar' support was originally added.
Eclipse does not support multiple module-info files per project: across all source folders (main or test), you may have only one module-info.
From Eclipse's point of view, your only option is to create a new Java project referencing the other, with its own module-info/exports:
module mod.a {
    exports com.example.a;
    // com.example.a.Main
}

module mod.a.tests { // (1)
    exports com.example.a.tests;
    // com.example.a.tests.MainUtils calling com.example.a.Main
    requires mod.a;
}
In case (1), you will have problems if you don't use mod.a.tests: Java will never find com.example.a.Main, probably because the second project shadows the first project.
I am not an OSGi expert, but I think this is one of the reasons why most Eclipse plugins have separate main and test projects: org.eclipse.m2e.core is patched by org.eclipse.m2e.core.tests.
However, module-info has no notion of "patches": you may patch a module on the command line (java --patch-module), but not in module-info itself. Perhaps Eclipse could do that on your behalf, but it doesn't.
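For reference, the command-line form of such a patch folds a directory of compiled test classes into an existing module at launch time. A hypothetical sketch using the projects above (bin and bin-tests are made-up output directories):

java --module-path bin --patch-module mod.a=bin-tests --module mod.a/com.example.a.tests.MainUtils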
As you can see, two projects in Eclipse = two Maven modules.
In the case of Maven, you can certainly create additional artifacts from the same build (though I think this tends to pollute the dependencies, because every time a secondary artifact requires a dependency, it has to go into the common scope).
This can be done using maven-compiler-plugin, maven-shade-plugin and maven-jar-plugin:
I think you should not rely on test-jar, because you want to emulate Java's --patch-module by merging the classes and test-classes directories.
You don't want to import this project into Eclipse due to its multiple module-info files; or you must ensure that its module-info is only visible to Maven (you can use a profile + the m2e.version property to detect m2e and disable it).
I fully agree with you. Why should I only use the src-main code of some core module when I could also inherit some of its src-test functionality?
But how do you handle the scope problem? When I use the test scope, I lose the relation to the src-main code. When I don't use the test scope, I lose the relation to the src-test code.
My core test code does not change very often, so to get things working in Eclipse I install the test-jar into my local repository, and everything works fine.
EDIT: This is about doing Continuous Delivery with Maven, orchestrated by Jenkins. Maven is definitely not designed for that, and this question is part of our effort to get an efficient workflow without using Maven releases. Help is appreciated.
We use Maven -SNAPSHOTs within major versions to ensure customers always get the latest code for that given version, which works well. For technical reasons we have two independent Maven jobs - one for compiling sources to jars, and one for combining the appropriate jars to a given deployment. This also works well.
We then have Jenkins orchestrating when to invoke the various steps, and this is where it gets a bit tricky. If we do the normal mvn clean install in step one, all the snapshot artifacts get recompiled, which in turn makes Jenkins think that all the snapshots changed (as their fingerprint - aka MD5 checksum - changed) even if the sources used to generate the artifacts did not change. That triggers all the downstream builds instead of just those whose dependencies did change.
I have so far identified these things as varying between builds:
META-INF/maven/.../pom.properties (as it contains a timestamp)
META-INF/MANIFEST.MF (contains JDK and user)
timestamps in jar file
I have found ways around the first two, but the last is a bit more difficult. It appears that AbstractZipArchiver (which does all the work in zipFile() and zipDir()) is not written to allow any kind of extension to how the archive is generated.
For now I can imagine four approaches (but more ideas are very welcome):
Create a derivative of the current maven-jar-plugin implementation allowing for a timestamp=<number> attribute which is then used for all entries inserted into the jar file. If not set, the current behavior is kept.
Revise the Jenkins fingerprinting scheme so it knows about jar files and only looks at the entries contents, not their metadata.
Attach a plugin to the prepare-package stage responsible for touching the files with a specific time stamp. This requires all files to be present at that time (meaning that the jar plugin cannot be allowed to touch the MANIFEST.MF file)
Attach an extra plugin to the "package" phase which rewrites the finished jar file, zeroing out all zip entry timestamps in the process (see the sketch after this list).
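To gauge the effort of that fourth approach, here is a minimal sketch of such a rewrite using plain java.util.zip (the class name and paths are placeholders; a production version would also have to preserve entry metadata such as compression method and external attributes):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;
import java.util.zip.ZipOutputStream;

public class NormalizeJarTimestamps {
    // Rewrites a jar, giving every entry the same fixed timestamp, so that
    // two builds from identical inputs produce byte-identical archives.
    public static void main(String[] args) throws IOException {
        Path in = Paths.get(args[0]);
        Path out = Paths.get(args[1]);
        long fixedTime = 315532800000L; // 1980-01-01, the zip format's epoch
        try (ZipInputStream zin = new ZipInputStream(Files.newInputStream(in));
             ZipOutputStream zout = new ZipOutputStream(Files.newOutputStream(out))) {
            for (ZipEntry e; (e = zin.getNextEntry()) != null; ) {
                ZipEntry copy = new ZipEntry(e.getName());
                copy.setTime(fixedTime);
                zout.putNextEntry(copy);
                zin.transferTo(zout); // copies the entry's uncompressed bytes
                zout.closeEntry();
            }
        }
    }
}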
Again, the goal is to make Maven SNAPSHOT artifacts fully time-independent, so that given the same source you get an artifact with the same MD5 checksum. I also believe, however, that this could be beneficial for release builds.
How should I approach this?
As per my comment, I still think the answer is to do none of the things you suggest, and instead use releases in preference to snapshots for artifacts which you are in fact releasing to customers.
The problems you describe are:
you have a multi-module project which takes a long time to build because you have more than 100 modules,
you have two snapshot artifacts which you think ought to be identical (because the source code and metadata were identical at build time), but they have different checksums.
My experience with Maven tells me that if you try and adhere to the "Maven Way", tools will work well for you out-of-the-box, but if you deviate then you'll have a bad time. Unfortunately, the Maven Way is sometimes elusive :-)
Multi-module projects in Maven are very useful when you have families of modules with code that varies in sympathy, e.g. you have a module containing a bunch of interfaces, and some sibling modules providing implementations. It would be unusual to have more than a dozen modules in a multi-module project. All the modules ought to share the version number of the parent (Maven doesn't enforce this, which in my opinion is confusing).
When you build a snapshot version of a multi-module project, snapshots of all modules are built, even if the code in a particular module hasn't changed. Therefore you can look at a family of modules in your repository and know that at compile time the inter-module code references were satisfied.
For example, in a domain model module you might have an interface:
public interface Student {
    void study();
}
and in some sibling modules, which would declare compile-scoped dependencies on the domain model in their POMs, you might have implementations.
If you were then to change the interface in the domain model module:
public interface Student {
    void study();
    void drink(Beer beer);
}
and rebuild the multi-module project, the build will fail. The dependent modules will fail to build, even though their code and POMs have remained the same. In a multi-module project, you only install or deploy artifacts if all the child modules build successfully, so rebuilding snapshots is usually very desirable - it's telling you something about the inter-module dependencies.
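For instance, a sibling implementation module might contain a class like this hypothetical one, which compiled fine against the old interface:

public class PartTimeStudent implements Student {
    @Override
    public void study() {
        // hits the books occasionally
    }
    // There is no drink(Beer) override here, so rebuilding against the new
    // interface fails along the lines of "PartTimeStudent is not abstract and
    // does not override abstract method drink(Beer) in Student".
}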
If:
you have an excessive number of modules, and/or
those modules can't reasonably share the same version number, and/or
you don't need any guarantees about code references between modules,
then your modularisation is incorrect. Don't use multi-module projects as a build system (you have Jenkins for that); use them instead to express relationships between the modules of your code.
In your comment, you say:
RELEASE artifacts behave the same way when being rebuilt by Jenkins.
The point of release artifacts is that you do not rebuild them - they are definitive! If you use something like Artifactory, you will find that you cannot deploy a release artifact more than once - your Jenkins job should fail if you attempt it.
This is a fundamental tenet of Maven. One of Maven's aims is that if two developers on separate workstations were to attempt the same release, they would build artifacts that were functionally identical. If you build an artifact which expresses a dependency on another (maybe for compilation purposes, or because it's being assembled into a .war, etc.), then:
if the dependency is a snapshot, Maven might seek a newer version from the repository.
if the dependency is a release, the version in your local repository is assumed to be definitive.
If you could rebuild a release artifact, you would create the possibility that two developers have dissimilar versions in their repository, and you'd have dissimilar builds depending on which workstation you used. Don't do it.
Another critical detail is that a release artifact cannot depend on snapshot artifacts; again, you would lose various guarantees.
Releases are definitive, and it sounds like you want your assembly to depend on definitive artifacts. Jenkins makes tagging and releasing multi-module projects very straightforward.
In summary:
Check your modularisation: one enormous multi-module project is not useful.
If you don't want to continually rebuild snapshots, you need to do releases.
Never release snapshots to your customer.
Follow the dependency graph of your assembly project and release any snapshots.
Release the assembly project, bumping your minor version.
Ensure your customer refers to the complete version number of your assembly in communications.
What are the possibilities to enforce restrictions on the package dependencies in a Java build system? For example, the myapp.server.bl.Customer class should not be allowed to refer to the myapp.client.ui.customlayout package.
I'm interested in either Ant-based or IDE-specific solutions.
I'd like to get an error message in the build indicating that a (custom) package dependency rule has been violated and the build aborted. I also would like to maintain the dependencies in a list, preferably in a text file, outside of the Ant scripts or IDE project files.
(I don't know Maven, but I've read here that it has better support for module dependency management.)
I believe Checkstyle has a check for that: it's called Import Control.
You can configure Eclipse projects to specify Access Rules. Access rules can specify "Forbidden", "Discouraged", and "Accessible" levels all with wildcard rules. You can then configure violations of either Discouraged or Forbidden to be flagged as either warnings or errors during builds.
Kind of an old article on the idea (details may be out of date):
http://www.eclipsezone.com/eclipse/forums/t53736.html
If you're using Eclipse (or OSGi) plugins, then the "public" parts of the plugin/module are explicitly defined and this is part of the model.
Ivy seems like a good solution for your problem (if you are using Ant). Ivy is the official dependency management component of Ant and thus integrates nicely with it. It is capable of resolving dependencies, handling conflicts, creating exclusions, and so on.
It uses a simple XML structure to describe dependencies and is easier to use than Maven, because it only tries to address dependency resolution problems.
From the Ivy homepage:
Ivy is a tool for managing (recording, tracking, resolving and reporting) project dependencies. It is characterized by the following:
flexibility and configurability - Ivy is essentially process agnostic and is not tied to any methodology or structure. Instead it provides the necessary flexibility and configurability to be adapted to a broad range of dependency management and build processes.
tight integration with Apache Ant - while available as a standalone tool, Ivy works particularly well with Apache Ant providing a number of powerful Ant tasks ranging from dependency resolution to dependency reporting and publication.
For the IDE specific solutions, IntelliJ IDEA has a dependency analysis tool that allows one to define invalid dependencies as well.
http://www.jetbrains.com/idea/webhelp2/dependency-validation-dialog.html
The dependency violation will be shown both when compiling and live, while editing the dependent class (as error/warning stripes in the right side error bar).
Even more automation can be obtained with JetBrains' TeamCity build server, that can run inspection builds and report the above configured checks.
For another IDE independent solution, AspectJ can be used to declare invalid dependencies (and integrate the step in the build process, in order to obtain warning/error info for the issues).
Eclipse has support for this via Build Path properties / jar properties. I think it may only work across jar / project boundaries.
Maybe Classycle can be used:
http://classycle.sourceforge.net/ddf.html
You can use multiple modules in IDEA or Maven or multiple projects in Eclipse and Gradle. The concept is the same in all cases.
A trivial interpretation would be a module for myapp.server.bl and another for myapp.client.ui.customlayout, with no compile-time dependencies between them. Now any attempt to compile code or code-complete against the opposite module/project will fail, as desired.
To audit how extensive the problem already is, a useful starting point for IntelliJ IDEA is Analyzing Dependencies:
http://www.jetbrains.com/idea/webhelp/analyzing-dependencies.html
From that article you can see how to run and act on dependency analysis for your project.