I have two microservices (ms), ms1 and ms2. Both contain duplicated reusable code (hashing, security-related, ORM, etc.). Just FYI, this reusable code can sometimes maintain some state in the DB as well (in case that matters).
There are two ways I can proceed:
Either extract the reusable code as a separate library and include it in both ms
Or create a separate ms for this reusable code and expose it as REST endpoints
If I take approach 2, the advantage is that I just have to redeploy ms3 in case of any change. If I take approach 1, I need to redeploy both ms. At the same time, approach 2 will require separate maintenance/resources/monitoring.
Which is the more ideal approach in terms of system design, considering that hardware resources are not a challenge? I mentioned two microservices, but in some cases there are more than two ms sharing duplicate code.
I am not sure what criteria can help me decide between a shared library and a microservice.
Update:
I have got some clarity from the blogs below but still have questions. I will think it over and post a new question if required.
https://blog.staticvoid.co.nz/2017/library_vs_microservice/
https://dzone.com/articles/dilemma-on-utility-module-making-a-jar-or-separate-2
Microservices are only one architectural style. In some cases it is better, in some it is worse than other styles. If you don't use microservices, it does not mean that your architecture is not good.
If you still want to have microservices, then none of these approaches (shared library vs. library as a new "microservice") is good.
I'd suggest considering the following.
The microservice approach does not mean that each endpoint should be encapsulated in a separate microservice. It is normal for one microservice to provide several different endpoints. If this is your case, then put your two services into a single microservice and make them reachable via two different endpoints. Then it is fine that both of them share some classes.
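To make that concrete, here is a minimal sketch, assuming Spring Boot purely for illustration; the endpoint paths, class names and hashing placeholder are all invented, not from the question:

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

// One microservice, two endpoints, one shared class: no remote call needed
@RestController
public class AccountController {

    private final HashingService hashingService = new HashingService();

    @GetMapping("/api/tokens")        // e.g. what used to live in ms1
    public String issueToken(@RequestParam String userId) {
        return hashingService.hash(userId);
    }

    @GetMapping("/api/signatures")    // e.g. what used to live in ms2
    public String sign(@RequestParam String payload) {
        return hashingService.hash(payload);
    }
}

// The shared code both endpoints reuse directly, without a library or a third service
class HashingService {
    String hash(String input) {
        return Integer.toHexString(input.hashCode()); // placeholder for real hashing
    }
}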
Microservices should normally have independent persistence layers. If there is a strong dependency on a common persistence layer, check what the reason was to split them into different microservices. Do they really work with different business domains? Can these services be developed and deployed independently of each other? If not, then maybe there is no reason to put them into different microservices, and it could be better to put them into a single microservice. Then it would be fine if they share some classes.
A good microservice should provide functionality for some domain. If you put shared code into a separate microservice, it may be that your shared "microservice" does not provide any functionality for a domain, but is just a wrapper for utilities. That would not be a microservice.
If you have a strong reason to separate your services into two different microservices, then duplicate the code. Each microservice should be independent of the others. It should be possible to replace the database and to replace any classes of one microservice without affecting the other one. One normal way to make them independent is to duplicate the classes that you (currently) consider shared. If the services are really independent, over time this duplicated code will change and become different in each microservice. If you have to change this code in both services simultaneously, then your split is not correct and what you have are not microservices.
I already have a blog application built with Spring Boot as a monolith.
There are 2 entities:
user
post
The mapping is one-to-many: a single user can create multiple blog posts.
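For reference, the monolith mapping looks roughly like this (simplified; the field names are just for illustration):

import javax.persistence.*;   // jakarta.persistence on newer Spring Boot versions
import java.util.List;

@Entity
class User {
    @Id @GeneratedValue
    private Long id;
    private String name;

    // one user -> many posts
    @OneToMany(mappedBy = "createdBy")
    private List<Post> posts;
}

@Entity
class Post {
    @Id @GeneratedValue
    private Long id;
    private String title;

    // each post belongs to exactly one user
    @ManyToOne
    private User createdBy;
}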
How can I recreate the same functionality as separate microservice applications?
So far, from researching on the internet, what I see people saying is to create a database per service, etc.
Suppose I create 2 services, say:
UserService (which has a separate DB and the CRUD operations associated with it)
PostService (which has a separate DB and the CRUD operations associated with it)
How can I make them communicate with each other?
In the monolith app the Post entity has createdBy mapped to User. But how does this work in a microservices architecture?
Can anyone please help me design such an architecture?
First list out the reasons why you want to break it into microservices, for example to make it more scalable in scenarios like the following.
Posting comments becomes slow, and during this period registration of new users should remain unaffected.
Very few users upload/download files, and you want general users who simply view and post comments to be unaffected, while upload/download may remain slow.
Answering the above and analyzing and prioritizing the other NFRs will help you determine how and what to break out.
Additionally:
The Post service only needs to validate whether the user is a valid, logged-in user (correct?); a sketch of this follows this list.
The User service does not really need to communicate with the Post service at all.
Further, you might want to decouple other minor features as well, which in turn talk to each other and can be authenticated by other means (certificates, etc.) since they are internal, e.g. updating stats (user rankings) or aggregating data.
The system might also have a lot of smaller hidden features which may have nothing to do with the Post service at all, and which can also be separated into different microservices (like video/file/picture/binary upload and download), prioritized by the computation power needed, hit frequency, and business priority.
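Here is the promised sketch of the Post/User interaction. It is hypothetical: the URL, class and method names are assumptions, not from the question. The key idea is that Post stores only the user's id and validates it against the User service over HTTP, instead of holding a JPA relation across databases.

import org.springframework.web.client.HttpClientErrorException;
import org.springframework.web.client.RestTemplate;

public class PostService {

    private final RestTemplate restTemplate = new RestTemplate();

    public void createPost(Post post, Long userId) {
        try {
            // Ask the User service whether the user exists, instead of a DB join
            // ("user-service" assumes some form of service discovery or DNS)
            restTemplate.getForObject(
                    "http://user-service/users/" + userId, UserDto.class);
        } catch (HttpClientErrorException.NotFound e) {
            throw new IllegalArgumentException("Unknown user: " + userId);
        }
        post.setCreatedByUserId(userId); // store the id, not a JPA relation
        // ... save the post in the Post service's own database
    }
}

// Minimal stand-ins so the sketch is self-contained
class UserDto { public Long id; public String name; }
class Post {
    private Long createdByUserId;
    void setCreatedByUserId(Long id) { this.createdByUserId = id; }
}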
After breaking it into microservices, you need to run some stress tests (based on current load) to learn which services need replication and automatic load balancing and which do not. Writing the stress tests first, before breaking up the monolith, can also help you understand which features need to be moved out first.
I have been doing research on microservices. I have used Spring Boot, seeing how simple it is to start with. During my research I have read that it is a good approach for every microservice that accesses a database to have its own database.
I am curious how this works when starting multiple instances of the same microservice. Can those instances work with just one database, or do they each need a separate one? The dilemma for me is that the data would be different across multiple databases. How does load balancing microservices work in such situations?
Edited after the first 3 comments:
I appreciate the comments. I feel I was lacking in explaining my thoughts behind this question. I am used to building monolithic applications. I made use of Spring and Hibernate (HibernateDaoSupport) and lately also Hibernate Envers. I use Spring's transaction management to handle commits and rollbacks. This has worked for me so far. I have started looking into microservices and so far have been unable to find a proper explanation of how Spring transaction management, used with Hibernate and Envers, would work in a microservice against a single database. I can understand just one instance of this microservice working, but I am curious whether multiple instances of it would work properly with just one database, especially considering that Hibernate caches database objects for performance reasons, not to mention Envers and its actions.
There is no requirement that every microservice must have a different database; you could share one database across all your microservices or have one per microservice. It depends on you and your architectural decisions, taking the different tradeoffs into account.
If you decide on one database per microservice and you have many instances of the same microservice, those instances must share that one database (as with a monolith). Regarding your concerns about Hibernate and its cache: you must handle the cache differently, for example by using Hazelcast (https://hazelcast.com/use-cases/caching/hibernate-second-level-cache/) or EhCache as a distributed second-level cache so the instances do not hold stale data independently.
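As a rough sketch of what switching the second-level cache to Hazelcast involves (this assumes the hazelcast-hibernate integration jar is on the classpath; the property keys are standard Hibernate ones, but verify them against the versions you use):

import java.util.Properties;

public class SecondLevelCacheConfig {
    // Properties that point Hibernate's second-level cache at Hazelcast,
    // so all instances of the microservice share one distributed cache
    public static Properties hibernateCacheProperties() {
        Properties props = new Properties();
        props.setProperty("hibernate.cache.use_second_level_cache", "true");
        props.setProperty("hibernate.cache.region.factory_class",
                "com.hazelcast.hibernate.HazelcastCacheRegionFactory");
        return props;
    }
}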
Anyway, design patterns are just best practices with different tradeoffs; you must understand the advantages and disadvantages of each pattern before taking a decision.
A relative of mine wants me to make a program to organize some information where she works. It's fairly simple; however, I don't know what kind of office drama they have going on, but she doesn't want to bother IT with all sorts of questions about which database can be used or how it will connect (multiple people will read info stored in a database of some sort hosted within the company's intranet).
Anyway, I'm thinking it shouldn't be a problem to just use something like a local Microsoft Access database file for now, and then rewrite the database component when I have more information. Is that an insane idea? This program is not hard; it could probably be written and tested in a week if I were working on it full time (I'm still in college). For that matter, I am thinking of using Java in NetBeans simply because I am comfortable with it. Should I worry about finding out they use some sort of database or other solution that cannot be (easily) worked with in Java?
While knowing a requirement like database type upfront is a good idea, being able to adapt to new requirements is a part of Agile development.
I'd argue it's not an insane idea. If you're careful about your design, switching out the database won't be too bad. If you don't mind, I'll elaborate on a (possible) pattern that might save you trouble.
Overview
In my experience I have found it best to separate the database logic (how to communicate with the database) from the business logic (how to accomplish a task). These two layers will make your code much more maintainable for when you find out the company is running an Oracle database and not Access.
Data Access Layer
The DAL has one job, and that is to communicate with the database. It needs to read, it needs to write, and that is it. Its classes will likely include details like table attribute names or queries. It's OK to do that here, since each class is specific to a particular database. Using a DAL will greatly simplify your database calls later on.
I would highly suggest looking into the factory pattern for how to design these classes. Using the factory pattern will completely decouple the business layer from the database-specific classes by way of interfaces. In other words, you can completely change out the database without needing to modify the business logic.
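A possible sketch of what that could look like (the class names are invented for the example; Book is the transfer object discussed further below):

// Business code depends only on this interface
public interface BookDao {
    Book getBookById(String id);
}

// One implementation per database; only these classes know SQL details
class AccessBookDao implements BookDao {
    public Book getBookById(String id) { /* JDBC against the Access file */ return null; }
}

class OracleBookDao implements BookDao {
    public Book getBookById(String id) { /* JDBC against Oracle */ return null; }
}

// The factory is the single place that picks the implementation;
// swap this one line (or drive it from config) to change databases
class DaoFactory {
    static BookDao bookDao() {
        return new AccessBookDao();
    }
}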
Business Layer
In fancy terms, all I'm talking about is the logic for how to accomplish a task. The business layer doesn't have anything to do with where buttons appear on a screen, nor should it worry about table names.
In this layer you will find yourself needing access to the database to read/write information, and that is when you call on your Data Access Layer. It will handle the ugly details, keeping your business logic from having to know what type of database you are using.
Data Transfer Object
Lastly, you're going to be pushing a lot of information between these layers. I suggest you design some classes that help you transfer data that belongs together. Consider a call to the DAL requesting a book...
Book book = libraryAccessObject.getBookById("ABC123.45");
Getting a book is going to return a lot of information. Creating a Book object to organize that information will make life easier.
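For example, a minimal Book transfer object might look like this (the fields are assumed for illustration):

public class Book {
    private final String id;
    private final String title;
    private final String author;

    public Book(String id, String title, String author) {
        this.id = id;
        this.title = title;
        this.author = author;
    }

    public String getId() { return id; }
    public String getTitle() { return title; }
    public String getAuthor() { return author; }
}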
In summary, it's not a far-fetched idea, but be careful with your design. Poor design now could cause you a lot of problems next week. Layering your code will make it much more maintainable.
My team has been tasked with converting our application's existing SOAP API to REST. My preference is to rewrite the API from scratch (reusing only the actual business logic that each operation performs). Others in my team want to just have REST as a wrapper over the existing SOAP, meaning we would expose REST services, but when a request comes in, our application would internally call the existing SOAP operations.
Could you please offer suggestions on which of these is the best approach? It looks like my way is cleaner, lighter, and faster, but their way allows some code reuse.
It depends on what your priority is and whether you are going to receive many requests for changes in the API's behavior.
Ample time and more changes expected:
If you have the time, writing from scratch is of course recommended, as it would mean a cleaner, lighter, and faster API. This will also make shipping new features easy.
Less time and fewer changes expected, or the API is too big to regression-test:
But if you have time constraints, I would suggest going with REST over the SOAP API. You are going to expose only the REST API to clients anyway, so you can do internal refactoring and phase out SOAP as and when time permits. Changing the whole codebase means regression testing of the entire module.
"Could you please offer suggestions on which of these is the best approach? It looks like my way is cleaner and lighter and faster but their way allows some code re-use."
I wrote a framework that does the SOAP -> REST conversion. It was used internally at one of the companies I used to work for. The framework was capable of doing this with a mapping file in less than 10 minutes, but we did not use it for all services. Here's why...
Not all (WSDL-based) services are designed with REST in mind. Some of them are just remote methods being invoked on a service and nothing more.
The service may not have resources that can be mapped.
Even if there are resources, they may not map cleanly to REST verbs (GET/POST etc.), and some of the calls are not easily translatable.
A mapping framework has an overhead of its own. Our framework's SLA was quite low (single-digit milliseconds), but even a small overhead may not be suitable for critical services. The time it takes to profile and get this overhead down should not be underestimated.
In summary, the approach works for some services, but it takes some engineering effort to get there. It makes sense to do this if you have, say, 500+ services that need to be converted quickly in a short span of time, as a temporary measure.
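For what it's worth, the wrapper idea itself is simple even without a framework. A hypothetical sketch (Spring-style, with invented names standing in for a generated SOAP client stub):

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class OrderRestController {

    private final OrderSoapClient soapClient = new OrderSoapClient();

    // Translate the REST call into the existing SOAP operation
    @GetMapping("/orders/{id}")
    public OrderDto getOrder(@PathVariable String id) {
        return OrderDto.from(soapClient.getOrder(id));
    }
}

// Stand-ins for the generated SOAP client and its response type
class OrderSoapClient {
    OrderResponse getOrder(String id) { return new OrderResponse(); }
}
class OrderResponse { String id; }
class OrderDto {
    String id;
    static OrderDto from(OrderResponse r) {
        OrderDto dto = new OrderDto();
        dto.id = r.id;
        return dto;
    }
}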
The fact that you would have to convert your REST calls to SOAP calls in order to reuse your current code definitely suggests a rewrite to me!
To make testing, debugging, and profiling easier, you should have a pure POJO-based API that is called by your REST API.
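A minimal sketch of what "pure POJO" means here (the names and logic are invented):

// Business logic in a plain class with no web imports,
// so it can be exercised directly in tests without a server
public class PricingService {
    public long discountedPrice(long price, int percentOff) {
        return price - (price * percentOff) / 100;
    }
}

// e.g. in a unit test: new PricingService().discountedPrice(200, 10) == 180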
What back-end server are you using? Several of the more popular Java web servers have tooling that makes creating REST APIs really easy.
I have a website consisting of about 20 Java web applications (Servlet/JSP-based webapps) of varying sizes, each handling a different area of the site.
The combined size of all 20 WARs is 350 MB; however, by combining them I anticipate being able to reduce that and realize combined caching benefits.
Is it best to keep them separate, or merge them into a single uber-webapp WAR file? (And why?)
I'm particularly interested in knowing any technical drawbacks of merging them.
I "vote" to combine them.
Pros
Code sharing: If you combine them, you can share code between them (because there will be only one app). This applies not just to your own code but also to all the external libraries you use, which will be the bigger gain, I think.
Less memory: Combined, they will also require less memory (possibly significantly less), because the external libraries used by multiple apps will only have to be loaded once.
Maintainability: Also, if you change something in your code base or database, you only have to change it in one place and re-deploy one app.
Easier synchronization: If the separate apps do something critical in the database, for example, it's harder to synchronize them than when everything is in one app.
Easier collaboration between different parts/modules of the code: if they are combined, you can simply call methods of other modules. If they are in different web apps, you have to do it in a roundabout way, like HTTP calls, RMI, etc.
Cons
It will be bigger (obviously). If you worry about it being too big, just exclude the libs from the deployment WAR and place them under Tomcat's shared lib directory.
The separate apps might use different versions of the same lib. It's better to sort such conflicts out early, while it can still be done easily and with less work.
Another drawback can be the longer deployment time. Again, "outsourcing" the libs can help make it faster.
There is no drawback in terms of size, memory, or performance from using a single WAR, as systems are getting faster every day. And as you said, whether they run as different apps or as one, the total combined resources consumed will be the same in terms of processing power. It is maintenance and administration concerns that decide between a single app and multiple apps. If you have multiple modules that change frequently and independently of one another, it's better to have multiple webapps, talking via RMI or WS calls for intercommunication (if required). If all of them are oriented as one unit, where everything changes at once, you may go with a single app. Having multiple apps helps you install and update each one easily when functionality changes at the module level.
See also this discussion on deploying multiple applications to Tomcat: http://www.coderanch.com/t/471496/Tomcat/Deploying-multiple-applications-WAR
Hope it helps