How to share entity classes between REST services in two microservices? - java

I have created two microservices using Java. I need to make a REST API call from service A to service B; the data sent will be in JSON format. Using JAX-RS, I need to create the entity class in both services.
Since the entity class will be the same in both projects, do I:
Create a common jar and use it for all my entity/domain objects? Doesn't this make my microservices more tightly coupled?
Create the same class in both microservice projects? That just means repeating the work in both projects.
Is there a better way to communicate between the services?
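For context, the call itself would look roughly like this with the JAX-RS 2.0 client API, where OrderDto and the URL are placeholders for whatever entity and endpoint both services have to agree on:

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.core.MediaType;

public class ServiceAClient {

    public OrderDto fetchOrder(long id) {
        Client client = ClientBuilder.newClient();
        try {
            // Service B returns JSON; JAX-RS unmarshals it into the entity
            // class, which is why service A needs the same (or a compatible)
            // class on its classpath.
            return client.target("http://service-b/api/orders/" + id)
                         .request(MediaType.APPLICATION_JSON)
                         .get(OrderDto.class);
        } finally {
            client.close();
        }
    }
}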

In terms of keeping your two microservices independent now and in the future, I would also duplicate the code. We had the exact same situation before: several microservices seemed to use some "common" classes that could be put into a separate jar.
In the end we had the following situation:
several (5+) services using the same JAR
it turned out that classes we thought were the same actually had slightly different semantics in different services
a change to one of the classes more or less forced us to release every microservice (no independence here anymore)
developers tend to see "common" behavior everywhere, so you most likely end up with some "Helper/Utility" classes there as well, which is meanwhile considered a code smell in OOP
Long story short, we have meanwhile switched to duplicating the code, which gives us the freedom to handle our microservices truly independently, as we only need to stick to the service contract. What happens internally is fully up to each service, and we don't have to release all services at the end of an iteration. I'm not saying the other option is wrong, but it turned out not to be suitable for us. If you really see common classes between two services and you are sure you won't mess your common library up with other cruft, you're safe to go.
EDIT
Maybe as a follow-up: we had the same discussion about tests (unit and integration) sharing test code in some common classes. In the end this was hell, as every slight change in code or acceptance criteria made 50% of the tests fail. Meanwhile our strategy is to share nothing at the test level and keep everything right at the test's place. That way you are super fast at eliminating or changing tests. In the end the lesson for us was to keep the business code as clean and elegant as suitable, and the test code in whatever shape gives us the least headache possible.
EDIT 2
Meanwhile, we define all our REST interfaces with OpenAPI specifications and generate the actual DTO objects that are exchanged via the openapi-generator Maven plugin. The spec resides in the project that implements the interface, and it is published to Artifactory. The project implementing the client pulls it and generates its DTOs from it. That way you have a single point of truth and no need to write DTO boilerplate code.
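A minimal sketch of such a plugin configuration, assuming the spec is shipped as src/main/resources/service-b-api.yaml (file name, package, and version here are placeholders):

<plugin>
  <groupId>org.openapitools</groupId>
  <artifactId>openapi-generator-maven-plugin</artifactId>
  <version>6.6.0</version>
  <executions>
    <execution>
      <goals>
        <goal>generate</goal>
      </goals>
      <configuration>
        <inputSpec>${project.basedir}/src/main/resources/service-b-api.yaml</inputSpec>
        <generatorName>java</generatorName>
        <!-- only generate the exchanged DTOs, not client/server stubs -->
        <generateApis>false</generateApis>
        <generateModels>true</generateModels>
        <modelPackage>com.example.serviceb.dto</modelPackage>
      </configuration>
    </execution>
  </executions>
</plugin>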

I'd say it depends on the situation. If you use a shared package, this introduces coupling between the two projects. That makes sense if both projects build on the same data classes and therefore work with the same DTO objects. Ideally you would have your own Nexus, which simplifies using the shared artifact.
Otherwise, if only a few classes are redundant, I would probably implement them in each service separately, which also decouples the services.
I'm afraid you have to decide yourself which solution is right for your project.

This is a common situation where we as developers get confused. I would suggest having a common (shared) jar that can be used in both microservices (A and B). It is nothing but sharing a third resource, just as we use third-party libraries.
In my current project we were in the same situation, and we found the best approach was to have a separate shared library (api-shared, say) and consume it as a jar in the different microservices.
With your second approach you end up with redundant code that is also difficult to maintain: if the entity changes, you have to change it in both places, which is not a good way to keep things in sync.
All in all, I would suggest you use a shared jar for both microservices.

Related

Prevent Internals leaking into API

I'm looking for different ways to prevent internals leaking into an API. This is a huge problem, because once these internals leak into the API you can run either into unexpected incompatibility issues or into frozen internals.
One of the simplest ways to do so is just make use of different Maven modules; one module with API and one module with implementation. This way it is impossible to expose the implementation from the API.
Unfortunately not everyone agrees this is the best approach. But are there other alternatives? E.g. using Checkstyle or other 'architecture checking' tools?
PS: Java 9 for us is not usable, since we are about to upgrade to Java 8 and this will be the lowest supporting version for quite some time to come.
Following your checkstyle idea, it should be possible to set up rules which examine import statements in source files.
Checkstyle has built-in support for that, specifically the IllegalImport and ImportControl rules.
This of course works best if public and internal classes can be easily separated by package names.
The idea for IllegalImport would be that you configure a TreeWalker in Checkstyle which only looks at your API sources, and which forbids imports from internal packages.
With the ImportControl rule on the other hand you can define very detailed access rules for the whole application/module in a separate XML file.
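As a sketch, a Checkstyle configuration along those lines could look like this (the package name is a placeholder, and ImportControl points at a separate rules file):

<?xml version="1.0"?>
<!DOCTYPE module PUBLIC
    "-//Checkstyle//DTD Checkstyle Configuration 1.3//EN"
    "https://checkstyle.org/dtds/configuration_1_3.dtd">
<module name="Checker">
  <module name="TreeWalker">
    <!-- reject imports of implementation packages in API sources -->
    <module name="IllegalImport">
      <property name="illegalPkgs" value="com.acme.stuff.impl"/>
    </module>
    <!-- or: fine-grained per-package access rules in their own file -->
    <module name="ImportControl">
      <property name="file" value="import-control.xml"/>
    </module>
  </module>
</module>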
It is standard in Java to define an API using interfaces and implement them using classes. That way you can change the "internals" however you want and nothing changes for the user(s) of the API.
One alternative is to have one module (jar file) for both API and implementation (but then again, is it an API or just any kind of library?). Inside it, one separates classes and interfaces by packages, e.g. com.acme.stuff.api and com.acme.stuff.impl. It is important to make the classes inside the latter package package-private.
Not only does the package name show the consuming developer "hey, this is the implementation", it is also impossible to use anything inside it (let's omit reflection at this point for the sake of simplicity).
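A minimal sketch of that layout, with illustrative names (the two types would live in separate source files):

// com/acme/stuff/api/WeatherService.java -- the only type consumers see
package com.acme.stuff.api;

public interface WeatherService {
    String forecastFor(String city);
}

// com/acme/stuff/impl/DefaultWeatherService.java
package com.acme.stuff.impl;

import com.acme.stuff.api.WeatherService;

// Package-private: cannot be referenced from outside com.acme.stuff.impl,
// so only a factory inside this package can hand instances out.
class DefaultWeatherService implements WeatherService {
    @Override
    public String forecastFor(String city) {
        return "sunny in " + city; // stand-in for the real implementation
    }
}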
But again: This is against the idea of an API, because usually the implementation can be changed. With this approach one cannot separate API from implementation, because both are inside the same module.
If it is only about hiding internals of a library, then this is one (not the one) feasible approach.
And just in case you meant a library rather than an API, one which only exposes its "frontend" (using interfaces or abstract classes and such): use different package names, e.g. com.acme.stuff and com.acme.stuff.internal. The same visibility rules apply, of course.
Also: This way one does not need Checkstyle and other burdens.
Here is a good start: http://wiki.netbeans.org/API_Design
Key point: do not expose more than you want. Obviously, the less of the implementation is expressed in the API, the more flexibility you have in the future. There are some tricks you can use to hide the implementation but still deliver the desired functionality.
I think you don't need Checkstyle or anything like that; good old solid design and architecture should be enough. Polymorphism is all you need here.
One of the simplest ways to do so is just make use of different Maven modules; one module with API and one module with implementation. This way it is impossible to expose the implementation from the API.
Yes, I totally agree: hide as much as possible, and separate your interface into a standalone project.

Java SE - Clever way to implement "plug and play" for different library modules

I'm trying to do something clever. I am creating a weather application in which we can replace the weather API with another weather API without affecting the code base. So I started with a Maven project with multiple modules.
I have a Base module that contains the Interface class and the Base class. The Interface class contains the calls to the APIs (all calls are similar, if not identical) and the Base class contains the properties for the APIs (again, all properties are similar, if not identical).
I have a module for each of the two weather APIs we are testing with plans to create more modules for new weather APIs as we grow the application.
Finally, I have created a Core module (includes main) to implement the specific module class for the weather API I want to test.
Now, I know the simplest way to do this would be to use a switch statement and enumeration. But I want to know if there is a more clever way to do this. Maybe using a Pattern? Any suggestions?
Here is a picture of the structure I have just described:
Here is the UML representation:
This is a learning process for me. I want to discover how a real Java Guru would implement the appropriate module and class based on a specified configuration.
Thank you for your suggestions.
I'm trying to do something clever. I am creating a weather application in which we can replace the weather API with another weather API without affecting the code base.
Without reading further down, this first statement makes me think of a plugin architecture. But in software design, decisions must not be rushed: the more you delay them, the more information you have and the better informed your decision can be. For now it is just an idea to keep in mind.
I have a Base module that contains the Interface class and the Base class. The Interface class contains the calls to the APIs (all calls are similar, if not identical) and the Base class contains the properties for the APIs (again, all properties are similar, if not identical).
When different modules share behaviour/state, it is a good idea to refactor them and produce abstract base classes and interfaces, so you are on the right track. But if there are differences, those shouldn't be refactored into the base module. The reason is simple: maintainability. If you start adding if clauses or switches to deal with these differences, you have just introduced coupling between modules, and you will always have to change the base module whenever you add or modify other modules, which is not desirable at all.
This is reflected by the Open/Closed principle from the SOLID principles, which states that a class should be open for extension but closed for modification.
So after you've refactored the common behaviour into the base module, each new API should extend the base module, as you did.
Finally, I have created a Core module (includes main) to implement the specific module class for the weather API I want to test.
Now, I know the simplest way to do this would be to use a switch statement and enumeration. But I want to know if there is a more clever way to do this. Maybe using a Pattern? Any suggestions?
Indeed, using a switch makes it work, but it is not a clean design at all, for the same reason as before: adding, modifying or removing modules would require modifying this module as well, and the code can potentially break.
One possible solution would be to delegate this responsibility to a new component and use a creational design pattern like the Abstract Factory, which provides an interface for instantiating components without specifying their concrete classes.
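A sketch of how that could look with the question's IWeather contract (the factory names and the getForecast method are invented for the example; ForecastIO is the module-specific class mentioned in the question):

// Shared contract in the base module.
public interface IWeather {
    String getForecast(String city);
}

// The abstract factory: the core module codes against this only.
public interface WeatherFactory {
    IWeather create();
}

// One concrete factory per weather-API module.
public final class ForecastIoFactory implements WeatherFactory {
    @Override
    public IWeather create() {
        return new ForecastIO();
    }
}

// In the Core module: one decision point, after which only IWeather is used.
public final class Core {
    public static void main(String[] args) {
        WeatherFactory factory = new ForecastIoFactory(); // could come from configuration
        IWeather weather = factory.create();
        System.out.println(weather.getForecast("Berlin"));
    }
}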
As for the architecture, the plugin idea still makes sense so far. But what if the different modules extend the base contract, adding more features? One option is to use the Facade pattern to adapt the module calls and provide an output that implements the interface clients expect.
Then again, with the provided details this is the solution I'd suggest; the scenario should still be studied carefully and in greater detail to be sure these are the right tools for the job before committing to them.
In addition to Salvador Juan Martinez's answer...
To implement a plugin architecture, Java's Jar File Specification provides support for service provider interfaces (SPI) and how they are looked up.
As of Java 1.6 you can use the ServiceLoader to look up service providers. For Java 1.5 and below you must do it on your own or use a library, e.g. commons-discovery.
The usage is quite simple. In your case, put a META-INF/services/com.a2i.weatherbase.IWeather file in each plugin module.
In the Weather Forecast IO module the file should contain only one line:
com.a2i.weatherforecastio.ForecastIO
The line must be the fully qualified name of an IWeather implementation class.
Do the same for the other module and you can load the implementations via ServiceLoader.
ServiceLoader<IWeather> weatherServicesLoader = ServiceLoader.load(IWeather.class);
Iterator<IWeather> weatherServices = weatherServicesLoader.iterator();
Now it depends on your runtime classpath how many services will be found. Try adding and removing module jar archives from the classpath and run your application.
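For completeness, a minimal consuming loop might look like this (IWeather as above):

ServiceLoader<IWeather> loader = ServiceLoader.load(IWeather.class);
for (IWeather weather : loader) {
    // one instance per provider entry found on the classpath
    System.out.println("Found provider: " + weather.getClass().getName());
}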
EDIT
I wrote a blog post about a pluggable architecture with standard Java. See http://www.link-intersystems.com/blog/2016/01/02/a-plug-in-architecture-implemented-with-java/
Source code is also available at https://github.com/link-intersystems/blog/tree/master/java-plugin-architecture
One solution is to define a common interface with all the identified common operations. The extensions/plugins then implement that interface and provide the implementations of those common operations.
You can use the Abstract Factory design pattern to hook up the exact implementation at runtime based on the input parameters.
Interfaces and abstract classes are always good in such scenarios.

Android App Architecture : How should the packages be formed?

I am new to Android programming. I often see that programmers create packages as collections of activities, fragments, adapters, etc. To me it seems more intuitive to put all the Java code required for an activity/screen in one place. For example, for the home screen I would keep the activity, fragments, adapters, custom views, etc. all in one place.
Is there any definite reason for the general practice, or is it just a traditional practice?
This has to do with creating components, reusable objects and code maintenance in a codebase as it grows. Your approach will work for a small application, and there is no rule against it. However, generally creating package/file structures according to the recommended and common approaches makes it easier to make modifications to code and work with others on the same project. Consider the following:
If you have many Activities spread across many packages or folders, then someone tasked with changing the UI will have to traverse those packages. That makes it difficult to identify UI patterns that could be used across Activities and even harder to use those patterns, since you will need to implement them in each package/folder.
This also creates a problem seeing less obvious patterns in non-UI components like data object models, view controllers, etc. For example, if you need a "user" object in two different Activities do you create 2 different objects? This is not reusable code.
So let's say you decide to reuse the "user" object so that you only have 1 class. Then do you sub-class in the other packages that need it in order to follow your pattern? Then if one UI element needs a new method, do you implement it in just that place? Or the base object?
Or do you make the "user" object public and reference it from other packages/folders? If this is your answer then you will begin to create objects in places based on the evolution of the code, instead of based on logic or ease of maintenance. Among other things, this makes it very difficult to train a new person on "where everything is" in your codebase. The "user" object will sit in one place, and then the "user account" object ends up where it is first needed, but not likely to be with the "user" object.
As a project grows to hundreds of classes, I think it is obvious that this approach becomes unmanageable for many applications. Classes will appear in packages based on the UI requirement, not based on the function it performs. Maintaining them becomes challenging.
For example, in the case of Lollipop to Marshmallow, the Apache HTTP client became deprecated. If you had this dependency scattered throughout your project, you would be looking in a lot of places at how to handle the change. On a small project that might be fine, but on a larger project, if you try to do this while other development is taking place, it can become a real mess, since you are now modifying many packages and folders instead of only a few locations.
If, however, you have a Data Access Layer or Model Layer components that encapsulate the behavior in one or several folders, then the scope of your changes is easier to see to those around you. When you merge your changes into the project, it is easy for the people you work with to know if other components were impacted.
So while it is not necessary to follow these guidelines (especially for small projects), as a project grows and several or many people become involved in the development, you will see variations but the general practice is to group by purpose or function rather than group by UI / visual component. If you start off with some of this in place, you will have less work later to deal with the change. (However, starting with too much structural support early in a project can put the project at risk of never being completed...)
Several answers provide links to the guidelines. I hope this answer helps to explain why those guidelines exist, which I believe is at the heart of your question.
Is there any definite reason for the general practice, or is it just a traditional practice?
Yes. In my current application I have over 50 custom UI views and a few activities, at least 10 singleton controllers, and a lot of database models. So as not to get lost in the project, I'm using a tidy structure like this:
Activity
Adapter
Controller
Native
Model
-Database
-Rest
Ui
I suggest you to use this structure.
There are no official rules, well, maybe best practices, but none I have in mind.
So you now get an opinion-based answer:
I use package names for grouping classes by logical topic, like adapters, activities, etc.
If you want another structure, do it as you like; it just could confuse other devs.
Keep in mind that the package name should be unique, so you should use a prefix like a domain you own or are allowed to use (in reversed order, of course).
Also check this link, where some more ideas are pointed out: http://www.javapractices.com/topic/TopicAction.do?Id=205
The first question in building an application is "How do I divide it up into packages?". For typical business applications, there seem to be two ways of answering this question.
Package By Feature
Package-by-feature uses packages to reflect the feature set. It tries to place all items related to a single feature (and only that feature) into a single directory/package. This results in packages with high cohesion and high modularity, and with minimal coupling between packages. Items that work closely together are placed next to each other. They aren't spread out all over the application. It's also interesting to note that, in some cases, deleting a feature can reduce to a single operation - deleting a directory. (Deletion operations might be thought of as a good test for maximum modularity: an item has maximum modularity only if it can be deleted in a single operation.)
Normally the activities are placed in the main package, while fragments, adapters, utils, and models go into their own packages, e.g. fragments into a fragments package, and an ISODateParser class could go into a utils package.
You can find more about it in the Android Best Practices guide.
The guidelines about which classes should be placed in which packages are discussed under the "Java packages architecture" heading in the guide.
Hope it helps!

How to organize the specs definition in Cucumber?

We are considering using Cucumber on our project for acceptance testing.
When we write a scenario in a Cucumber feature, we write a list of Given, When and Then statements.
As we use the cucumber-jvm project, the Given, When and Then statements are bound to Java methods in (JUnit) classes.
I want to know the best way to organize the code behind Given/When/Then in the project structure. My main concern is the maintenance of the Cucumber tests on a big project, where the number of scenarios is quite large, especially regarding items shared between features.
I can see at least 2 main approaches:
Each feature is related to its own JUnit class. So if I have a foo/bar/baz.feature Cucumber file, I will find the related foo.bar.Baz JUnit class with the adequate @Given, @When and @Then annotated methods.
Separate the @Given, @When and @Then methods into "thematic" classes and packages. For example, if a Cucumber scenario contains the statement Given user "foo" is logged, then the @Given("^user \"([^\"]*)\" is logged$") annotated method will be located in the foo.user.User class, but potentially the @When method used later in the same scenario will be in a different Java class and package (say foo.car.RentCar).
For me, the first approach seems good in that I can easily relate my Cucumber features to my Java code. The drawback is that I can end up with a lot of redundancy and code duplication. Also, it may be hard to find an existing @Given method, to avoid recreating it (the IDE can help, but we are using Eclipse here, and it does not seem to give a list of existing Given statements).
The other approach seems better essentially when you have Given conditions shared among several Cucumber features and thus want to avoid code duplication. The drawback here is that it can be hard to make the link between the @Given Java method and the Given Cucumber statement (maybe, again, the IDE can help?).
I'm quite new to Cucumber, so maybe my question is not a good one, and with time and experience the structure will become self-evident, but I want to get good feedback on its usage...
I would suggest grouping your code according to the objects it refers to, similar to option #2 you presented in your question. The reasons being:
Structuring your code based on how and where it's being used is a big no-no. It's actually creating coupling between your feature files and your code.
Imagine such a thing in your product's code: the SendEmail() function wouldn't be in a class called NewEmailScreenCommands, would it? It would be in EmailActions or some such.
So the same applies here; structure your code according to what it does, and not who uses it.
The first approach would make it difficult to re-organize your feature files; You'd have to change your code files whenever you change your feature files.
Keeping code grouped by theme makes DRYing it much easier; you know exactly where all the code dealing with the user entity is, so it's easier for you to reuse it.
On our project we use that approach (e.g. a BlogPostStepDefinitions class), further separating the code, if a class gets too large, by step type (e.g. BlogPostGivenStepDefinitions).
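As a sketch, a theme-grouped step definition class for the user steps from the question might look like this (package name and method bodies are illustrative; the imports match the older cucumber.api packages in use at the time):

package foo.user;

import cucumber.api.java.en.Given;
import cucumber.api.java.en.When;

public class UserStepDefinitions {

    @Given("^user \"([^\"]*)\" is logged$")
    public void userIsLogged(String userName) {
        // drive the application to a logged-in state for userName
    }

    @When("^user \"([^\"]*)\" logs out$")
    public void userLogsOut(String userName) {
        // trigger the logout flow for userName
    }
}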
We have also started using Cucumber-JVM for acceptance testing and have similar problems organizing code. We have opted for one step definition class per feature. At the moment this is fine, as the features we are testing aren't very complex and are quite separate; there is very little overlap between our features.
The second approach you mention would be better, I think, but it is often challenging to tie several different step definition classes together for a single scenario. I think the best project structure will become clearer once you start adding more features and refactor as usual.
In the meantime, here is an Eclipse plugin for Cucumber:
https://github.com/matthewpietal/Eclipse-Plugin-for-Cucumber
It offers syntax highlighting as well as a list of the existing available steps when writing a feature.
On the current project I am taking part in, we asked ourselves the very same question.
After fiddling a bit with the possibilities, we opted for a mix of both the solutions you exposed.
Have steps regrouped in theme-centric common step classes:
app-start steps
security check steps
[place random feature concern here] steps
And classes of scenario-specific (in some cases even feature-specific) steps.
This was to have, at the same time, the grouping of factorized code, which is pretty easily identifiable as to its whatabouts and whereabouts.
Yet it allows us not to clutter those common classes with overly specific code.
The wiring between all these classes is handled by Spring (with cucumber-spring, which does a great job once you get the hang of it).

Mocking / Testing a core object in my system

I've been asked to work on changing a number of classes that are core to the system we work on. The classes in question each require 5-10 different related objects, which themselves need a similar number of objects.
Data is also pulled in from several data sources, and the project uses EJB2, so when testing, I'm running without a container to pull in the dependencies I need!
I'm beginning to get overwhelmed with this task. I have tried unit testing with JUnit and Easymock, but as soon as I mock or stub one thing, I find it needs lots more. Everything seems to be quite tightly coupled such that I'm reaching about 3 or 4 levels out with my stubs in order to prevent NullPointerExceptions.
Usually with this type of task, I would simply make changes and test as I went along. But the shortest build cycle is about 10 minutes, and I like to code with very short iterations between executions (probably because I'm not very confident with my ability to write flawless code).
Anyone know a good strategy / workflow to get out of this quagmire?
As you suggest, it sounds like your main problem is that the API you are working with is too tightly coupled. If you have the ability to modify the API, it can be very helpful to hide immediate dependencies behind interfaces so that you can cut off your dependency graph at the immediate dependency.
If this is not possible, an Auto-Mocking Container may be of help. This is basically a container that automatically figures out how to return a mock with good default behavior for nested abstractions. As I work on the .NET framework, I can't recommend any for Java.
If you would like to read up on unit testing patterns and best practices, I can only recommend xUnit Test Patterns.
For strategies for decoupling tightly coupled code I recommend Working Effectively with Legacy Code.
The first thing I'd try to do is shorten the build cycle. Maybe add the option to build and test only the components currently under development.
Next I'd look at decoupling some of the dependencies by introducing interfaces to sit between the components. I'd also want to move the coupling out into the open, most likely using Dependency Injection. If I could not move to DI, I would have two constructors: a no-arg constructor that uses the service locator (or what have you) and an injectable constructor.
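A sketch of that two-constructor idea (InvoiceService, CustomerRepository and ServiceLocator are invented for the example):

public class InvoiceService {

    private final CustomerRepository customers; // an interface, trivial to mock

    // Injectable constructor: what the tests (and a future DI container) use.
    public InvoiceService(CustomerRepository customers) {
        this.customers = customers;
    }

    // No-arg constructor: kept for the legacy call sites.
    public InvoiceService() {
        this(ServiceLocator.lookup(CustomerRepository.class));
    }
}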
the project uses EJB2 so when testing, I'm running without a container to pull in the dependencies I need!
Is that "without" meant to be a "with"? I would look at moving as much as you can into POJOs so it can be tested without needing to know anything EJB-y.
If your project can compile with Java 1.5, you should look at JMock. Things can be stubbed pretty quickly with the 2.* versions of this framework.
The 1.* versions will work with a 1.3+ Java compiler, but the mocking is much more verbose, so I would not recommend them.
As for the strategy, my advice is to embrace interfaces. Even if you have a single implementation of a given interface, always create the interface. Interfaces can be mocked very easily and will allow you much better decoupling when testing your code.
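A short JMock 2 sketch of that interface-first style (PaymentGateway and Billing are invented for the example):

import org.jmock.Expectations;
import org.jmock.Mockery;
import org.junit.Test;

public class BillingTest {

    private final Mockery context = new Mockery();

    @Test
    public void chargesTheCustomerOnce() {
        // mock the interface, never the concrete EJB-backed class
        final PaymentGateway gateway = context.mock(PaymentGateway.class);

        context.checking(new Expectations() {{
            oneOf(gateway).charge("customer-1", 100);
        }});

        new Billing(gateway).processOrder("customer-1", 100);

        context.assertIsSatisfied();
    }
}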
