What do you consider as verticle design guidelines in Vert.x? - java

While the core manual (and other documentation) of Vert.x shows several use cases and gives good explanations of Vert.x in general, I am curious what the dos and don'ts are when designing verticle classes.
Foreword: I am aware that the design philosophy of Vert.x is in general AGAINST giving strict design guidelines, so there is no need to mention this in answers.
An example that led me to this question is the following: I created a verticle named ServiceDiscoveryVerticle.java which has these responsibilities:
read in a configuration file of services and then publish them via the Vert.x ServiceDiscovery
additionally manage services in lists (published/unpublished) to keep track of unpublished ones
receive messages via the event bus, for either publishing or unpublishing a certain service
All of this code lives in the overridden start method.
So the core questions I am asking here are:
What are the dos and don'ts when designing verticle classes? (by your personal preference/opinion)
Are there any general guidelines on what belongs in a verticle and what does not? (officially or community-wise)
Is it advisable to split the start method up into private methods (and if so, should they stay in the same class or be moved into a separate one like OwnServiceDiscovery.java)?
Any other ideas/remarks on my example (ServiceDiscoveryVerticle.java)?

One could philosophize a lot here, but I will try to keep it simple.
The fact is that a verticle and its start() method are, and will remain, the main place where you initialize your system, mount handlers and trigger things like loading the configuration. So don't be too hard on yourself; this part is correct.
If you are using the Web API Service or a Service Proxy, then handlers are mounted automatically for you. The actual code of these handlers lives in external classes, and you can decide how to structure them.
If you are mounting your handlers on your own, you can use a lot of inline code, or you can decide to extract the handlers into classes. In a larger application, however, you will probably split and extract code as much as you can.
I personally extract code out of the verticle as much as I can and make it more of a coordination and setup place. My start() method (or rather rxStart()) is then just a series of calls to other methods whose names give me an overview of what is going on at system start-up, rather than a wall of code I can't read. But these are all personal preferences, as you said; Vert.x does not impose any of this on you!
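To illustrate that last point, here is a minimal sketch of what such a "coordinating" start() can look like. It assumes Vert.x 4's Future-based API; the file name, event bus addresses and helper method names are made up for illustration:

    import io.vertx.core.AbstractVerticle;
    import io.vertx.core.Future;
    import io.vertx.core.Promise;
    import io.vertx.core.json.JsonObject;

    public class ServiceDiscoveryVerticle extends AbstractVerticle {

        @Override
        public void start(Promise<Void> startPromise) {
            // start() only coordinates; the actual work lives in small, named methods
            loadServiceConfig()
                .compose(this::publishServices)
                .compose(v -> registerEventBusHandlers())
                .onComplete(startPromise);
        }

        private Future<JsonObject> loadServiceConfig() {
            // hypothetical: read the services configuration file from the file system
            return vertx.fileSystem().readFile("services.json")
                    .map(buffer -> buffer.toJsonObject());
        }

        private Future<Void> publishServices(JsonObject config) {
            // publish each configured service via ServiceDiscovery (omitted here)
            return Future.succeededFuture();
        }

        private Future<Void> registerEventBusHandlers() {
            vertx.eventBus().consumer("services.publish", msg -> { /* publish a service */ });
            vertx.eventBus().consumer("services.unpublish", msg -> { /* unpublish a service */ });
            return Future.succeededFuture();
        }
    }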

Don't block the Event Loop
Don't call one verticle from another directly; use the EventBus instead (see the sketch after this list)
If you're using executeBlocking, you're probably doing something wrong
If you're constantly deploying/undeploying verticles, you're probably doing something wrong
Don't share state using verticles
Keep your verticles small, but not too small
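As a small sketch of the EventBus point above (the address name and payload are made up), one verticle asks another to do work via a request/reply message instead of holding a reference to it:

    import io.vertx.core.AbstractVerticle;

    public class PublisherClientVerticle extends AbstractVerticle {
        @Override
        public void start() {
            // ask whichever verticle listens on this (hypothetical) address to publish a service
            vertx.eventBus().<String>request("services.publish", "my-service", reply -> {
                if (reply.succeeded()) {
                    System.out.println("Published: " + reply.result().body());
                } else {
                    System.err.println("Publish failed: " + reply.cause().getMessage());
                }
            });
        }
    }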

Related

How to share an entity between REST services of two microservices?

I have created two microservices using Java. I need to make a REST API call from service A to service B. The data sent will be in JSON format. Using JAX-RS, I need to create an entity class in both services.
Since the entity class will be the same in both projects, do I:
Create a common jar and use it for all my entity/domain objects? Does this make my microservices more tightly coupled?
Create the same class in both microservice projects? This would just mean repeating the work in both projects.
Or is there a better way to communicate between the services?
In terms of keeping your two microservices independent, now and in the future, I would duplicate the code. We had the exact same situation before: several microservices seemed to use some "common" classes that could be put into a separate jar.
In the end we had the following situation:
several (5+) services using the same JAR
it turned out that classes we thought were the same actually had slightly different semantics in different services
a change to one of the classes more or less forced us to release every microservice (no independence anymore)
developers tend to see "common" behavior everywhere, so you most likely end up with some "Helper/Utility" classes in there as well, which is meanwhile considered a code smell in OOP
Long story short, we eventually switched to duplicating the code, which gives us the freedom to handle our microservices truly independently, as we only need to stick to the service contract. What happens internally is fully up to each service, and we don't have to release all services at the end of an iteration. I'm not saying the other option is wrong, but it turned out not to be suitable for us. If you really see common classes between two services and you are sure you won't clutter your common library with unrelated code, you're safe to go.
EDIT
As a follow-up: we had the same discussion regarding tests (unit and integration), namely sharing test code in some common classes. In the end this was hell, as every slight change in code or acceptance criteria made 50% of the tests fail. Meanwhile our strategy is to not share anything at the test level and to keep everything right at the test itself. That way you can eliminate or change tests very quickly. In the end the lesson for us was to keep the business code as clean and elegant as appropriate, and the test code in whatever shape gives us the least headache.
EDIT 2
Meanwhile, we define all our REST interfaces with OpenAPI specifications and generate the actual DTO objects that are exchanged via the openapi-generator Maven plugin. The spec resides in the project that implements the interface and is published to Artifactory. The project implementing the client pulls it and generates its DTOs from it. That way you have a single source of truth and no need to write DTO boilerplate code.
I'd say it depends on the situation. If you use a shared package, this will introduce a coupling between the two projects. This makes sense if both projects build on the same data classes and therefore work with the same DTO objects. Ideally you would have your own Nexus, which simplifies the usage of the shared artifact.
Otherwise, if only a few classes are redundant, I would probably implement them in each service separately, which also keeps the services decoupled.
I am afraid you need to decide which one is the right solution for your project.
This is a common situation where we as developers get confused. I would suggest having a common (shared) jar which can be used in both microservices (A and B). It is nothing but sharing a third resource, just as we use third-party libraries.
In my current project we were in the same situation, and we found the best approach was to have a separate shared library (named api-shared) and consume it as a jar in the different microservices.
With your second approach you end up with redundant code that is also difficult to maintain. If anything changes in the entity, you have to change it in both places, which is not a good way to keep things in sync.
All in all, I would suggest using a shared jar for both microservices.

Logging and Dependency Injection

I am trying to build an application based on Java.
For dependency injection I use Google Guice.
Now I have run into the problem of logging some information throughout the application. I am not talking about general logging in the sense of tracing method calls etc.; I know about AOP and that I can do things like method call tracing with it.
What I am looking for is manual logging. I need some way of logging in nearly every class in my application. So I thought about two options:
letting the Guice injection framework hand me the logger through the constructor (or a setter, or a private field ...), but it feels like this adds the logging concern to each class and pollutes my constructors (a small sketch of this option follows below)
using a global service locator in the method where I want to log. Uhh, but all DI fans will hate me for doing that
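For reference, a minimal sketch of the first option (class names are made up): Guice has a built-in binding for java.util.logging.Logger, so the logger can be injected through the constructor without any extra module:

    import com.google.inject.Guice;
    import com.google.inject.Inject;
    import com.google.inject.Injector;
    import java.util.logging.Logger;

    class OrderService {                     // hypothetical example class
        private final Logger log;

        @Inject
        OrderService(Logger log) {           // the logging concern shows up in the constructor
            this.log = log;
        }

        void placeOrder(String id) {
            log.info("Placing order " + id);
        }
    }

    class Demo {
        public static void main(String[] args) {
            Injector injector = Guice.createInjector();
            injector.getInstance(OrderService.class).placeOrder("42");
        }
    }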
So what is the best way from a practical point of view?
I need some way of logging in nearly every class in my application.
Think again. If you think you need logging in nearly every class, your design might be suboptimal and might cause maintenance issues in the long run. This Stack Overflow answer talks about what the issue is and how to improve it. It is answered in the context of .NET, but it applies to Java as well.
That answer mainly talks about exception logging; for non-exception logging I would say: avoid logging too much information in too many places. For each info or warning that you want to log, ask whether it shouldn't have been an exception in the first place. For instance, don't log things like "we shouldn't be in this branch"; throw an exception!
And even when you want to log debug information, is anyone ever going to read it? You'll end up with log files containing thousands and thousands of lines that nobody ever reads. And if they do read them, they have to wade through all those lines of text and run complicated regex searches to find the information they are looking for.
Another reason I see developers do this is to compensate for their coding practices, just as comments are often used this way. I see developers log things like "we have executed this block" or "this if branch was skipped", so that they can trace through the code and its big methods.
However, instead of writing big methods, we all know by now that methods should be small. No, even smaller. Besides, if you unit test your code thoroughly, there is not much reason to debug it, and you will have verified that it does what it is supposed to do.
And again good design can help here. When you use a design as described in that Stack Overflow answer (with command handlers), you can again create a single decorator that can serialize any arbitrary command message and log it to disk before the execution starts. This gives you an amazingly accurate log. Just add some context information (such as execution time and user name) to the log and you have an audit trail that could even be used to replay commands during debugging or even load testing.
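A rough sketch of that decorator idea (the interface and class names are made up, and SLF4J is assumed as the logging facade): a single generic decorator logs every command before and after it is handled, so no business class needs its own logger:

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    interface CommandHandler<C> {
        void handle(C command);
    }

    class LoggingCommandHandlerDecorator<C> implements CommandHandler<C> {
        private static final Logger log = LoggerFactory.getLogger(LoggingCommandHandlerDecorator.class);
        private final CommandHandler<C> decorated;

        LoggingCommandHandlerDecorator(CommandHandler<C> decorated) {
            this.decorated = decorated;
        }

        @Override
        public void handle(C command) {
            long start = System.currentTimeMillis();
            log.info("Executing {}: {}", command.getClass().getSimpleName(), command);
            decorated.handle(command);
            log.info("Finished {} in {} ms",
                    command.getClass().getSimpleName(), System.currentTimeMillis() - start);
        }
    }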
I have been using this type of application design for a couple of years now, and since then I hardly ever have a reason to do extra logging within the business logic. It is needed now and then, but those cases are pretty rare.
but it feels like this adds the logging concern to each class and pollutes my constructors
It does, and you'll end up with constructors that have too many parameters. But don't blame the logger; blame your code. You are violating the Single Responsibility Principle here. You can 'hide' this dependency by calling it through a static facade, but that doesn't lower the number of dependencies or the overall complexity of a class.
using a global service locator in the method where I want to log. Uhh, but all DI fans will hate me for doing that
In the end, you will hate yourself for that, because every class still has an extra dependency (a well-hidden one in this case). This makes each class more complicated and forces you to have more code: more code to test, more code to contain bugs, more code to maintain.
Logging, and how one should go about it, is actually a more complex topic than one might at first think.
As with many questions, the answer to how one should approach logging is "It depends". There are certainly some use cases which can be mitigated without the need of components taking on a logging dependency. For example, a need to uniformly log all method calls within a library can be addressed with the Decorator Pattern and superfluous uses of logging exceptions can be addressed by centralizing such logging at the top of a call stack. Such use cases are important to consider, but they don't speak to the essence of the question which really is "When we would like to add detailed logging to a component in strongly-typed languages such as Java and C#, should the dependency be expressed through the component's constructor?"
Use of the Service Locator pattern is considered an anti-pattern because its misuse leads to opaque dependencies. That is to say, a component which obtains all its dependencies through a Service Locator doesn't express everything that's needed without knowledge of the internal implementation details. Avoiding the Service Locator pattern is a good rule of thumb, but adherents of this rule should understand the when and why, so as not to fall into the cargo-cult trap.
The goal of avoiding the Service Locator pattern is ultimately to make components easier to use. When constructing a component, we don't want consumers to have to guess at what is needed for the component to function as expected. Developers using our libraries shouldn't have to look at the implementation details to understand which dependencies are needed for the component to work. In many cases, however, logging is an ancillary and optional concern that serves only to provide tracing information for library maintainers to diagnose issues, or to keep an audit log of usage details that the consumers are neither aware of nor interested in. In cases where consumers of your library must provide dependencies not needed for the primary function of the component, expressing such dependencies as invariants (i.e. constructor parameters) actually negates the very goal sought by avoiding the Service Locator pattern. Additionally, because logging is a cross-cutting concern (meaning such needs may be widely desired across many components within a library), injecting logging dependencies through the constructor further amplifies usage difficulty.
Still another consideration is minimizing changes to a library's surface-area API. The surface-area API of your library is any public interfaces or classes required for construction. It's often the case that libraries are constructed through a DI container, particularly for internally-maintained libraries not meant for public consumption. In such cases, registration modules may be supplied by the library for specific DI containers or techniques such as convention-based registration may be employed which hide the top level types, but that doesn't change the fact that they are still part of the surface-area API. A library which can easily be used with a DI container, but can also be used without one is better than one which must be used with a DI container. Even with a DI container, it's often the case with complex libraries to reference implementation types directly for custom registration purposes. If a strategy of injecting optional dependencies is employed, the public interface is changed each time a developer wants to add logging to a new type.
A better approach is to follow the pattern established by most logging libraries such as Serilog, log4net, NLog, etc. and obtain loggers through a logger factory (e.g. Log.ForContext<MyClass>();). This has other benefits as well, such as utilizing the filtering capabilities of each respective library. For more discussion on this topic, see this article.
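In the Java world the same pattern is available through facades such as SLF4J, where a static factory hands out a logger tied to the class; a minimal sketch (the class name is made up):

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    public class PaymentProcessor {   // hypothetical class
        private static final Logger log = LoggerFactory.getLogger(PaymentProcessor.class);

        public void process(String paymentId) {
            // detailed logging without constructor injection or a service locator
            log.debug("Processing payment {}", paymentId);
        }
    }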

How to refactor procedural start-up code?

I have a class (Android Activity) which handles start-up of my application. The application has some pretty complex start-up rules. Right now it looks like a bunch of spaghetti and I'm looking for strategies for refactoring it.
It's honestly such a mess that I'm having problems boiling it down to pseudocode. In general there are some rules for start-up that are basically codified in the logic:
Steps:
Check for error on last exit and flush local cache if necessary
Download settings file
Parse settings and save settings to local native format
Using the values in settings, do a bunch of 'house keeping'
Using a value in settings, download core data component A
Parse component A and load up local cache
While this logic runs, it is also updating the user interface. All of this is handled in a single, zig-zagging monolithic class. It's very long, it has a bunch of dependencies, the logic is very hard to follow, and it seems to touch way too many parts of the application.
Is there a strategy or framework that can be used to break up procedural start-up code?
Hmmm. Based on your steps, I see various different "concerns":
Reading and saving settings.
Downloading settings and components (not sure what a "component" is here) from the server.
Reading and instantiating components.
Flush and read cache.
Housekeeping (not really sure what this all entails).
UI updates (not really sure what this requires either).
You might try splitting up the code into various objects along the lines of the above, for example:
SettingsReader
ServerCommunicationManager (?)
ComponentReader
Cache
Not sure about 5 and 6, since I don't have much to go on there.
Regarding frameworks, well, there are various ones, such as the previously mentioned RoboGuice, that can help with dependency injection. Those may come in handy, or it may be easier just to do this by hand. I think that before you consider dependency injection, though, you need to untangle the code. All that dependency injection frameworks do is initialize your objects for you -- you have to make sure that the objects make sense first.
Without any more details, the only suggestion I can think of is to group the various steps behind well-structured functions which do one thing and one thing only.
Your 6 steps look like a good start for the 6 functions your init function should call. If #2 were synchronous (I doubt it), I would merge #2 and #3 into a getSettings function.
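As a rough sketch of that suggestion (all names are made up, and the asynchronous steps are reduced to simple callbacks), the start-up could then read like the list of steps itself:

    import java.util.function.Consumer;

    class StartupCoordinator {

        void start(Runnable onReady) {
            recoverFromPreviousCrashIfNeeded();                  // step 1: check last exit, flush cache
            downloadSettings(settings -> {                       // step 2
                saveSettingsLocally(settings);                   // step 3
                doHousekeeping(settings);                        // step 4
                downloadComponentA(settings, component -> {      // step 5
                    loadCache(component);                        // step 6
                    onReady.run();                               // hand control back to the UI
                });
            });
        }

        private void recoverFromPreviousCrashIfNeeded() { /* ... */ }
        private void downloadSettings(Consumer<String> callback) { callback.accept("{}"); }
        private void saveSettingsLocally(String settings) { /* ... */ }
        private void doHousekeeping(String settings) { /* ... */ }
        private void downloadComponentA(String settings, Consumer<byte[]> callback) { callback.accept(new byte[0]); }
        private void loadCache(byte[] component) { /* ... */ }
    }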

Introduce per-customer personalization in a Java application

I've searched on the internet and here on SO, but couldn't wrap my mind around the various options.
What I need is a way to introduce customer-specific customization at any point of my app, in an "external" way, external as in "add a drop-in jar and get the customized behavior".
I know that I should implement some sort of plugin system, but I really don't know where to start.
I've read some comments about Spring, OSGi, etc., but can't figure out which is the best approach.
Currently, I have a really simple structure like this:
com.mycompany.module.client.jar // client side applet
com.mycompany.module.server.jar // server side services
I need a way of doing something like:
1) extend com.mycompany.module.client.MyClass as com.mycompany.module.client.MyCustomerClass
2) jar it separately from the "standard" jars: com.mycompany.customer.client.jar
3) drop in the jar
4) start the application, and have MyCustomerClass used everywhere the original MyClass was used.
Also, since the existing application is pretty big and based on a custom third-party framework, I can't introduce sweeping changes.
Which is the best way of doing this?
Also, I need the solution to work with Java 1.5, since the above-mentioned third-party framework requires it.
Spring 3.1 is probably the easiest way to go about implementing this, as their dependency injection framework provides exactly what you need. With Spring 3.1's introduction of Bean Profiles, separating concerns can be even easier.
But integrating Spring into an existing project can be challenging, as there is some core architecture that must be created. If you are looking for a quick and non-invasive solution, using Spring containers programmatically may be an ideal approach.
Once you've initialized your Spring container in your startup code, you can explicitly access beans of a given interface simply by querying the container. Placing a single jar file with the necessary configuration classes on the classpath will essentially include them automatically.
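A minimal sketch of that idea, with hypothetical names (doWork() and the scanned package are assumptions; in practice the customer bean would have to be the only, or the primary, bean of the requested type):

    import org.springframework.context.annotation.AnnotationConfigApplicationContext;

    public class Bootstrap {
        public static void main(String[] args) {
            // scans com.mycompany; a customer drop-in jar on the classpath can
            // contribute a bean such as MyCustomerClass extending MyClass
            AnnotationConfigApplicationContext context =
                    new AnnotationConfigApplicationContext("com.mycompany");
            MyClass impl = context.getBean(MyClass.class);  // the customer class is used if it is the registered bean
            impl.doWork();                                  // hypothetical method
            context.close();
        }
    }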
Personalization depends strongly on the application design. You can search for pluggable applications on the internet and read a good article (for example: http://solitarygeek.com/java/a-simple-pluggable-java-application). In a pluggable application, you can add or remove features as the user decides. One way to build a pluggable application is to use interfaces to de-couple the API layer from its implementation.
There is a good article here.
User personalization is something which needs to be part of the design. What you can change as an afterthought, if the main body of code cannot be changed, is likely to be very limited.
You need to start by identifying what can be changed on a per-user basis. As it appears this cannot be changed, this is your main limiting factor. From this list, determine what would be useful to change, and implement that.

Use of AspectJ for debugging Enterprise Java applications

The idea is to utilize AOP to design applications/tools that debug/view the execution flow of an application at runtime. To begin with, a simple data (state) dump at the start and end of each method invocation will do the necessary data collection.
The target audience is not application developers but high-level business analysts or support people for whom an execution flow could prove helpful. The runtime application flow can also be useful in reducing the learning curve of an application for new developers, especially in configuration-heavy systems.
I wanted to know if such tools/applications already exist. Or, better, if this makes sense at all, is there a better way to achieve it?
You could start with Spring Insight (http://www.springsource.org/insight) and add your own plugins to collect data appropriate for business analysts/support staff. If that doesn't meet your needs, you can write your own custom aspects. It is not that hard.
You could write your own aspects, as ramnivas suggested, but to prepare for requests from the users you may want to have the aspects compiled into the application, so that you don't take a hit at run-time. The users could then select which execution flows or method groups they are interested in, and you call the server and set a variable to give them the information desired.
Writing the aspects is easy, but to limit recompiling you may want to get an idea of what the users will want. For example, if they want a log of every call made from the time a web service is called until it reaches the database, you can build that in, but it is easier to know this up front.
If the variable is not set, the aspect simply does nothing; you could also unset the variable when finished.
You could also let them pick which type of logging to enable and for which user, which may lead to more useful information.
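A small sketch of such an aspect (the package name and the toggle are assumptions): it traces entry and exit of public methods, but only while a runtime flag is enabled, as suggested above:

    import java.util.Arrays;

    import org.aspectj.lang.ProceedingJoinPoint;
    import org.aspectj.lang.annotation.Around;
    import org.aspectj.lang.annotation.Aspect;

    @Aspect
    public class ExecutionFlowAspect {

        // flipped at runtime (e.g. via an admin call) so the aspect costs little when disabled
        public static volatile boolean tracingEnabled = false;

        @Around("execution(public * com.mycompany..*(..))")
        public Object trace(ProceedingJoinPoint pjp) throws Throwable {
            if (!tracingEnabled) {
                return pjp.proceed();
            }
            System.out.println("ENTER " + pjp.getSignature().toShortString()
                    + " args=" + Arrays.toString(pjp.getArgs()));
            Object result = pjp.proceed();
            System.out.println("EXIT  " + pjp.getSignature().toShortString()
                    + " result=" + result);
            return result;
        }
    }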
