I am trying to build a core module to be used across services; it is a collection of enums and interfaces (with or without default implementations) only. I wanted to know if it is possible to impose this rule, either as part of the mvn build or of the CI flow, to check whether someone has pushed a class into it. Being able to do it at the mvn level would be preferred, though.
Is this even possible? If yes, can someone point me to how?
Not sure which is the best way, but I would suggest a custom rule on Sonar/FindBugs/Checkstyle/PMD with the appropriate plugin.
Or a custom annotation processor that you use at compile time.
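To illustrate the annotation processor route, here is a minimal sketch (the processor name is made up, and in a real setup you would still register it under META-INF/services/javax.annotation.processing.Processor). It reports a compile error whenever a plain class shows up among the compiled sources, while letting enums and interfaces through, so javac - and therefore the mvn build - fails:

    import java.util.Set;
    import javax.annotation.processing.AbstractProcessor;
    import javax.annotation.processing.RoundEnvironment;
    import javax.annotation.processing.SupportedAnnotationTypes;
    import javax.annotation.processing.SupportedSourceVersion;
    import javax.lang.model.SourceVersion;
    import javax.lang.model.element.Element;
    import javax.lang.model.element.ElementKind;
    import javax.lang.model.element.TypeElement;
    import javax.tools.Diagnostic;

    @SupportedAnnotationTypes("*") // participate in every round, not just for annotated code
    @SupportedSourceVersion(SourceVersion.RELEASE_8)
    public class NoClassesProcessor extends AbstractProcessor {

        @Override
        public boolean process(Set<? extends TypeElement> annotations, RoundEnvironment roundEnv) {
            for (Element e : roundEnv.getRootElements()) {
                // ElementKind.CLASS matches plain classes only; enums and interfaces have their own kinds
                if (e.getKind() == ElementKind.CLASS) {
                    processingEnv.getMessager().printMessage(Diagnostic.Kind.ERROR,
                            "Only enums and interfaces are allowed in this module", e);
                }
            }
            return false; // don't claim any annotations
        }
    }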
I'm looking for different ways to prevent internals leaking into an API. This is a huge problem because once these internals leak into the API, you can run either into unexpected incompatibility issues or into frozen internals.
One of the simplest ways to do so is just make use of different Maven modules; one module with API and one module with implementation. This way it is impossible to expose the implementation from the API.
Unfortunately, not everyone agrees this is the best approach. But are there other alternatives, e.g. using Checkstyle or other 'architecture checking' tools?
PS: Java 9 is not usable for us, since we are about to upgrade to Java 8 and this will be the lowest supported version for quite some time to come.
Following your checkstyle idea, it should be possible to set up rules which examine import statements in source files.
Checkstyle has built-in support for that, specifically the IllegalImport and ImportControl rules.
This of course works best if public and internal classes can be easily separated by package names.
The idea for IllegalImport would be that you configure a TreeWalker in Checkstyle which only looks at your API sources and which flags imports from internal packages.
With the ImportControl rule on the other hand you can define very detailed access rules for the whole application/module in a separate XML file.
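As a minimal sketch of the IllegalImport variant (the package name com.acme.stuff.impl is just an illustration), the checkstyle.xml could contain:

    <module name="Checker">
      <module name="TreeWalker">
        <module name="IllegalImport">
          <!-- fail the check whenever an internal package is imported -->
          <property name="illegalPkgs" value="com.acme.stuff.impl"/>
        </module>
      </module>
    </module>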
It is standard in Java to define an API using interfaces and implement them using classes. That way you can change the "internals" however you want and nothing changes for the user(s) of the API.
One alternative is to have one module (jar file) for both API and implementation (but then again, is it an API or just any kind of library?). Inside it, one separates classes and interfaces by using packages, e.g. com.acme.stuff.api and com.acme.stuff.impl. It is important to give the classes inside the latter package package-private (default) visibility.
Not only does the package name show the consuming developer "hey, this is the implementation", it is also impossible to use anything inside it (let's ignore reflection at this point for the sake of simplicity).
But again: this is against the idea of an API, because usually the implementation can be changed. With this approach one cannot separate API from implementation, because both are inside the same module.
If it is only about hiding the internals of a library, then this is one (not the only) feasible approach.
And just in case you meant a library instead of an API, one which only exposes its "frontend" (by using interfaces or abstract classes and such): use different package names, e.g. com.acme.stuff and com.acme.stuff.internal. The same visibility rules apply, of course.
Also: this way one does not need Checkstyle and other burdens.
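A minimal sketch of that layout (all names invented): the interface is public, the implementing class is package-private, and a public factory in the impl package serves as the single entry point.

    // com/acme/stuff/api/Stuff.java -- the public API
    package com.acme.stuff.api;

    public interface Stuff {
        String greet(String name);
    }

    // com/acme/stuff/impl/DefaultStuff.java -- package-private, unreachable from outside its package
    package com.acme.stuff.impl;

    import com.acme.stuff.api.Stuff;

    class DefaultStuff implements Stuff {
        @Override
        public String greet(String name) {
            return "Hello, " + name;
        }
    }

    // com/acme/stuff/impl/StuffFactory.java -- the only public way to obtain an instance
    package com.acme.stuff.impl;

    import com.acme.stuff.api.Stuff;

    public final class StuffFactory {
        private StuffFactory() {}

        public static Stuff newStuff() {
            return new DefaultStuff();
        }
    }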
Here is a good start: http://wiki.netbeans.org/API_Design
Key point: Do not expose more than you want. Obviously, the less of the implementation is expressed in the API, the more flexibility one has in the future. There are some tricks one can use to hide the implementation but still deliver the desired functionality.
I think you don't need Checkstyle or anything like that; just good old solid design and architecture should be enough. Polymorphism is all you need here.
One of the simplest ways to do so is just make use of different Maven modules; one module with API and one module with implementation. This way it is impossible to expose the implementation from the API.
Yes, I totally agree: hide as much as possible and separate your interface into a standalone project.
I am wondering if it is possible to implement an interface project in Maven as below.
Project-A and Project-B have the same classes implementing the interfaces in Project-C.
Project-Z uses only Project-I (POM), which includes Projects A, B and C as modules.
I want to use the profile mechanism so that either Project-A's or Project-B's implementation will be used by Project-Z.
Please give me a simple example if you think it is possible.
Thanks.
You can use the Java SPI (Service Provider Interface) mechanism, by which the two implementing jars declare the same service under META-INF/services. Using apps can do a lookup, iterate over the implementations and pick one. (I did not find a simple example on the spot.)
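A minimal sketch of how that could look (all names invented): the interface lives in Project-C, each implementing jar ships a provider file, and Project-Z looks the implementation up via java.util.ServiceLoader.

    // In Project-C: the shared interface
    package com.example.spi;

    public interface Greeter {
        String greet(String name);
    }

    // In Project-A (Project-B analogously): an implementation
    package com.example.a;

    import com.example.spi.Greeter;

    public class GreeterA implements Greeter {
        @Override
        public String greet(String name) {
            return "A says hi, " + name;
        }
    }

    // Project-A's jar additionally contains the provider file
    //   META-INF/services/com.example.spi.Greeter
    // whose single line is: com.example.a.GreeterA

    // In Project-Z: pick up whichever implementation is on the classpath
    import java.util.ServiceLoader;

    import com.example.spi.Greeter;

    public class Main {
        public static void main(String[] args) {
            for (Greeter g : ServiceLoader.load(Greeter.class)) {
                System.out.println(g.greet("world")); // take the first, or filter by type
            }
        }
    }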
The simplest way that comes to mind, if you really want to do it with a Maven profile: set a property with the class name in the profile, then instantiate the class named in this property at runtime.
    <profile>
        <id>a</id>
        <properties>
            <myproject.componentX.implementation.class>com.foo.bar.BazA</myproject.componentX.implementation.class>
        </properties>
    </profile>
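At runtime the class name has to reach the code somehow. One hedged sketch (file name and property key are illustrative): let Maven resource filtering write the property into a properties file, then instantiate the class reflectively.

    import java.io.InputStream;
    import java.util.Properties;

    public class ComponentXLoader {

        public static Object loadImplementation() throws Exception {
            Properties props = new Properties();
            // componentx.properties is produced by Maven resource filtering and contains e.g.
            //   implementation.class=com.foo.bar.BazA
            try (InputStream in = ComponentXLoader.class.getResourceAsStream("/componentx.properties")) {
                props.load(in);
            }
            String className = props.getProperty("implementation.class");
            return Class.forName(className).getDeclaredConstructor().newInstance();
        }
    }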
It seems perhaps strange that you'd want to do this at build time in the root POM... maybe you plan on not changing it often. There are better ways, depending on what you plan to do.
Could you explain in which cases you want to change it, why, and how often?
This is going to be a tough question to describe, but here goes.
We are using the Delphi Spring Framework. (http://code.google.com/p/delphi-spring-framework/)
Let's say I have UnitA that declares InterfaceA which is implemented by ClassA.
Similarly, I have UnitB that declares InterfaceB which is implemented by ClassB.
Both registered their interface and their class with the Spring Container in their respective initialization sections.
InterfaceA has a dependency on InterfaceB, but because we are using Spring, UnitA doesn't have UnitB in its uses clause. In other words, we've done our job -- we've decoupled UnitA and UnitB, but we still are able to have InterfaceA depend on InterfaceB.
However, given the above scenario, we need to make sure that both UnitA and UnitB are included in the project so that the dependencies can be resolved.
Imagine, now, that we start a new project. That new project uses UnitA, but the developer doesn't realize that if one is to use UnitA, one also has to include UnitB in the project. There will be no compiler error, because the dependency is resolved at run time, not compile time.
And herein lies the question: What is the right way to ensure that this dependency on UnitB is known before the app gets deployed?
We can foresee a situation in a complex app where, despite thorough testing, a given code path isn't executed possibly for a long time, and this missing dependency isn't discovered before deployment.
We've implemented a system where each interface resolution call is accompanied by a Requires call that checks and raises an exception at startup, ensuring we'll see the error. But we are wondering if there is a "best practice" or standard way to detect this or otherwise deal with this issue.
Added: Is this an issue in Java and other languages?
You need to use a dependency management solution like Maven or Ivy. Once you do this, you will be able to declare that UnitA depends on UnitB, and once someone adds UnitA as a dependency, the tool (either Maven or Ivy) will download UnitB as well and include it in the project.
Maven also has an Eclipse plugin that can even detect whether you already have the other project in your current workspace.
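In Maven terms, a sketch of such a declaration (the coordinates are invented; UnitB would have to be published as its own artifact) would sit in UnitA's POM:

    <dependency>
        <groupId>com.example</groupId>
        <artifactId>unit-b</artifactId>
        <version>1.0.0</version>
    </dependency>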
I'm a little confused. It's the use of interfaces that gives you loose coupling, not an IoC container. It's only natural that UnitA would use UnitB if that's where InterfaceB is declared.
The implementations of the interfaces are a different story. Those would need references to the interfaces they implement and any interfaces they make use of, but they should not have references to any other implementations.
I've not used Spring for Delphi, but I'm familiar with other IoC containers. If it behaves similarly, then you are registering an interface along with an implementation of it. When you call resolve, you pass in either the interface's name or some other information (type info) about the interface and expect the IoC container to return a reference to the interface you requested. Which implementation is behind that interface is determined by which ones were registered and what rules are in place for resolving the request.
If the interface you requested was never registered you get an exception. Some IoC containers can resolve entire chains of dependencies with a single call.
You're asking for a way to determine at build time whether a dependency will be resolved at runtime but the dependencies aren't registered until runtime. I don't think that can be guaranteed no matter what tool you use.
UnitA should include UnitB in its uses clause. The implementations of the interfaces in UnitA and UnitB could, and probably should, be located in units separate from A and B, especially if there is more than one implementation of each interface.
The developer that uses UnitA in a project would then be forced to include UnitB in the project as well. If they are using test-driven development in the new project, they would find out pretty quickly that they need to provide implementations (even if they are only mocks) for both InterfaceA and InterfaceB for their tests to pass.
Quite frankly, if a critical dependency is overlooked and the project gets deployed without it, then the tests weren't thorough enough. Unit tests might not catch this, but this is exactly what a suite of integration tests would normally catch.
I would recommend something like Fit or FitNesse for integration testing (though I'm not sure how mature the Fit4Delphi project is). The tests can be written by anyone who can write a Word document or edit a wiki, and the developer just has to write classes that allow the tests to drive the production code. If set up correctly, you can run most of the integration tests against the actual release version of the project.
I'm looking for some ideas on how to compile Java code when some other pieces of code (method calls) are missing. I am fully aware that javac will not compile Java files if it cannot find all dependencies. But maybe there is some way to bypass that, something like a forced compile.
My bytecode knowledge is not that good, but I think a method invocation is essentially just the fully qualified class name plus the method name and parameters. So the compiler could just put this data into the class file and assume the dependency will be available in the running process (and if it isn't, you simply get a NoSuchMethodError).
The only workaround I have found so far is to create the missing classes with empty methods to "cheat" the compiler. It works perfectly, but there should be an easier way :)
Any ideas?
Use interfaces.
Create interfaces that have the methods you need. At runtime, inject (Spring, Guice, etc.) or generate (cglib...) classes that implement those interfaces.
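A minimal sketch of the idea (names invented): the calling code compiles against the interface alone, and a concrete class only needs to exist at runtime.

    // Stands in for the code that is not available at compile time
    public interface Backend {
        String fetch(String key);
    }

    // Compiles with no implementation of Backend on the classpath
    public class Consumer {
        private final Backend backend;

        public Consumer(Backend backend) { // the implementation is injected later
            this.backend = backend;
        }

        public String run() {
            return backend.fetch("answer");
        }
    }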
If you're modifying a jar, you can extract the class files you are not modifying to another directory and include that in the classpath. That way they will be available to the compiler.
Bad luck! Probably all you can do is create mock objects for the missing parts of the code, just to make your code compile (empty methods, so the compiler can find them).
Another question: if you are missing some classes, how will you execute that code?
UPDATE, according to the information provided:
Well, there is another option for modifying classes in a jar: you can use AOP. To get it done, read about AspectJ - for me this is actually the easiest option (typically you need to spend time mocking objects and writing empty methods, so I would rather invest that time in studying a new technology, which will help you many times over ;)
And by the way, the easiest way to implement it, if you use Eclipse, is:
- install AJDT
- create an aspect project
- create an aspect which modifies the code (depending on what you need to change)
- add the jar file you want to modify
- immediately get the modified code in another, already packed jar file
Sounds like magic :)
In this case you don't need any dependencies on the classpath, except for the libraries needed by the new code you add!
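To make that concrete, here is a hedged sketch in annotation-style AspectJ (plain Java syntax; the class and method names are invented). Woven into the existing jar (e.g. via the AspectJ compiler's inpath), it replaces the behaviour of one method without touching the jar's source:

    import org.aspectj.lang.ProceedingJoinPoint;
    import org.aspectj.lang.annotation.Around;
    import org.aspectj.lang.annotation.Aspect;

    @Aspect
    public class ReplaceLookup {

        // Wraps every execution of com.foo.Registry.lookup(String) in the woven jar
        @Around("execution(String com.foo.Registry.lookup(String)) && args(key)")
        public String aroundLookup(ProceedingJoinPoint pjp, String key) throws Throwable {
            if ("legacy".equals(key)) {
                return "patched-value";    // new behaviour for this input
            }
            return (String) pjp.proceed(); // otherwise run the original code
        }
    }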
Methods aren't dependencies. They are part of the class definition. The only places the Java runtime looks for method definitions are the class that was compiled at compile time and its parent classes. If your problem is that a superclass is incomplete, I don't think I can help you.
If not, you could define some of these methods as abstract and then have a child class implement them.
What kind of code is missing? Normally this happens if you refer to libraries the compiler can't find. Maybe you simply need to extend the classpath the compiler uses to search for classes.
If you really are referring to code that does not exist yet, you need to implement at least those methods you refer to. But that sounds strange... maybe you can clear things up.
I'm developing a Maven plugin that will provide 5 goals. You can either execute goals 1-4 individually, or execute goal5, which will execute goals 1-4 in sequence. I've been looking for a way to reuse (i.e. invoke) one Maven goal from within another, but haven't found one yet.
Of course, I could just have goalX delegate to ClassX for most of its functionality; then, when goal5 is invoked, it delegates to Class1...Class4. But this still involves a certain amount of code duplication in terms of specifying, reading and validating each goal's configuration.
Is there a way to reuse one goal within another?
Thanks,
Don
Is there a way to reuse one goal within another?
AFAIK, the Maven API doesn't offer any facility for this, because the Maven folks don't want to promote a practice leading to strong coupling between plugins, which is considered bad. You'll find background on that in Re: calling plugin in another plugin?.
That being said, this blog post shows how you could instantiate a Mojo and use reflection to set its fields before calling execute.
You might also want to check the mojo-executor library.
But be sure to read the mentioned thread, I think it's important.
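For the mojo-executor route, a sketch along the lines of that library's documented usage (the plugin coordinates and configuration below are just an example; in a real plugin the three collaborators would be injected via the usual Mojo annotations):

    import org.apache.maven.execution.MavenSession;
    import org.apache.maven.plugin.AbstractMojo;
    import org.apache.maven.plugin.BuildPluginManager;
    import org.apache.maven.plugin.MojoExecutionException;
    import org.apache.maven.project.MavenProject;

    import static org.twdata.maven.mojoexecutor.MojoExecutor.*;

    public class Goal5Mojo extends AbstractMojo {
        // Injected by Maven in a real plugin (@Parameter/@Component)
        private MavenProject mavenProject;
        private MavenSession mavenSession;
        private BuildPluginManager pluginManager;

        @Override
        public void execute() throws MojoExecutionException {
            // Runs another plugin's goal as if it were configured in the POM
            executeMojo(
                plugin(groupId("org.apache.maven.plugins"),
                       artifactId("maven-dependency-plugin"),
                       version("2.8")),
                goal("copy-dependencies"),
                configuration(element(name("outputDirectory"), "${project.build.directory}/lib")),
                executionEnvironment(mavenProject, mavenSession, pluginManager));
        }
    }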
Of course, I could just have goalX delegate to ClassX for most of its functionality; then, when goal5 is invoked, it delegates to Class1...Class4. But this still involves a certain amount of code duplication in terms of specifying, reading and validating each goal's configuration.
So then why not provide a common base class for your goal classes that handles that validation? I think the easiest thing to do here is to have one goal invoke the other in your code.
The "Maven mindset" appears to be that configuration is the responsibility of the pom.xml author, not the Mojo implementor. If you move all your configuration and such into a common base class, you end up bypassing this mechanism.
It kind of sounds like what you want are sub-projects: each of your goals 1-4 lives in its own project, or you can run goal 5, which runs them all. Perhaps this might help: http://i-proving.com/space/Technologies/Maven/Maven+Recipes/Split+Your+Project+Into+Sub-Projects
If your source trees don't split nicely along project lines, you might be able to do something with profiles (though I haven't tried this). Check out the accepted answer here: How to bind a plugin goal to another plugin goal.