two logical java projects in one physical project [closed] - java

I have a project which will have 2 different parts: one part will send messages to RabbitMQ, the other will receive and process messages from RabbitMQ. Both parts use the same common Java classes, for example a Java wrapper class for the objects that are sent and received. I think I should create one Java project with Maven or Gradle but build it in 2 different ways, one for each part. Or maybe it's better to split it into 2 projects, but then how do I share the common Java files?
UPDATE:
I have 2 projects: project A sends objects to a RabbitMQ queue, project B receives objects from the RabbitMQ queue, and both A and B use common classes, so those classes should be shared between A and B. Currently, I have created one Maven project which includes A and B as modules, and I can compile both of them at the same time. So I don't have the mess of a third project (for the common Java classes), and everything is in one place. I just wanted to ask how to do this correctly. Should I separate it into 3 projects, leave it as it is, or do it some other way?

You should create 3 projects.
A) one for sending
B) one for receiving and processing
C) one for common code
When you build, you will build A+C and B+C separately. This keeps A and B decoupled from each other, with the common code shared between them.
The reason to keep the common code in a separate project is that it acts as the intermediary that defines the API between the two. Think of deploying on a large scale. You may have separate developers for each of the three projects, so you may choose to version releases to clearly define what the dependencies are. If everything is in one project, every change ripples through immediately, and you may not want your "common code" developer messing with your sending code (e.g. just to keep everything compiling). If the different builds are deployed on different machines, only the common code needs to be updated, not all of the projects. Having separate projects takes a bit of extra setup work, but it will save you from numerous headaches in the future.
Unless you have a specific need to always deploy A+B+C together, 3 projects is the best design. One example of such a need is making your own self-contained TX/RX protocol (e.g. a walkie-talkie) where the client and server need to be co-located and there is no central server. But even in that case, to make it compatible with other applications using your protocol, breaking it up into 3 projects still makes a lot of sense.
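As a minimal sketch of what the three-project layout can look like with Maven (the group, artifact and module names here are hypothetical), a parent aggregator lists the three modules, and the sender and receiver each declare a dependency on the common module:

<!-- parent/pom.xml: aggregator that builds all three modules together -->
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example</groupId>
  <artifactId>messaging-parent</artifactId>
  <version>1.0.0</version>
  <packaging>pom</packaging>
  <modules>
    <module>common</module>    <!-- C: shared wrapper/message classes -->
    <module>sender</module>    <!-- A: publishes to RabbitMQ -->
    <module>receiver</module>  <!-- B: consumes from RabbitMQ -->
  </modules>
</project>

<!-- sender/pom.xml (receiver/pom.xml looks the same): depends only on common -->
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <parent>
    <groupId>com.example</groupId>
    <artifactId>messaging-parent</artifactId>
    <version>1.0.0</version>
  </parent>
  <artifactId>sender</artifactId>
  <dependencies>
    <dependency>
      <groupId>com.example</groupId>
      <artifactId>common</artifactId>
      <version>${project.version}</version>
    </dependency>
  </dependencies>
</project>

You can still build everything at once from the parent (mvn package), or build and release common on its own so that A and B can depend on a fixed, versioned artifact.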

Related

Shared library vs Microservice? [closed]

I have two microservices (ms), ms1 and ms2. In both of them there is duplicated reusable code (hashing, security-related, ORM, etc.). Just FYI, this reusable code can sometimes also maintain some state in the DB (if that matters by any chance).
There are two ways I can proceed:
Either extract the reusable code as a separate library and include it in both ms
Or create a separate ms for this reusable code and expose it as REST endpoints
If I take approach 2, the advantage is that I only have to redeploy the new ms (ms3) in case of any change; if I take approach 1, I need to redeploy both ms. At the same time, approach 2 will require separate maintenance/resources/monitoring.
Which approach is more suitable in terms of system design, considering that hardware resources are not a constraint? I mentioned just two microservices, but in some cases there are more than two ms sharing duplicated code.
I am not sure what criteria can help me decide whether to go with a shared library or a microservice.
Update:
I have got some clarity from the blogs below, but I still have questions. I will think it over and post a new question if required.
https://blog.staticvoid.co.nz/2017/library_vs_microservice/
https://dzone.com/articles/dilemma-on-utility-module-making-a-jar-or-separate-2
Microservices are only one of many architectural styles. In some cases it is better, in others it is worse than the alternatives. Not using microservices does not mean that your architecture is bad.
If you still want to have microservices, then none of these approaches (shared library vs. library as a new "microservice") is good.
I'd suggest considering the following.
The microservice approach does not mean that each endpoint should be encapsulated in a separate microservice. It is normal for one microservice to provide several different endpoints. If this is your case, then put your two services into a single microservice and make them reachable via two different endpoints. Then it is fine that both of them share some classes.
Microservices should normally have an independent persistence layer. If there is a strong dependency on a common persistence layer, check what the reason was for splitting them into different microservices. Do they really work with different business domains? Can these services be developed and deployed independently of each other? If not, then maybe there is no reason to put them into different microservices and it could be better to combine them into a single one. Then it would be fine if they share some classes.
A good microservice should provide functionality for some domain. If you put the shared code into a separate microservice, it may turn out that your shared "microservice" does not provide any functionality for a domain but is just a wrapper for utilities. That would not be a microservice.
If you have a strong reason to separate your services into two different microservices, then duplicate the code. Each microservice should be independent of the others: it should be possible to replace the database or any classes of one microservice without affecting the other one. One normal way to make them independent is to duplicate the classes that you (currently) consider shared. If the services are really independent, over time this duplicated code will diverge and become different in each microservice. If you have to change this code in both services simultaneously, then your split is not correct and what you have are not really microservices.

Single Large Webapp or multiple small webapps? [closed]

I have a website, consisting of about 20 Java Web applications (Servlet/JSP based webapps), of varying sizes, each handling different areas of the site.
The combined size of all 20 WARs is 350 MB; however, by combining them I anticipate being able to ultimately reduce that and realise shared caching benefits.
Is it best to keep them separate, or merge them into a single Uber webapp war file? (and why)
I'm particularly interested in knowing any technical drawbacks of merging them.
I "vote" to combine them.
Pros
Code sharing: If you combine them, you can share code between them (because there will be only one codebase).
This applies not only to your own code but also to all the external libraries you use, which I think will be the bigger gain.
Less memory: The combined app will also require less memory (possibly significantly less), because external libraries used by multiple apps only have to be loaded once.
Maintainability: If you change something in your code base or database, you only have to change it in one place and redeploy only one app.
Easier synchronization: If the separate apps do something critical in the database, for example, it's harder to synchronize them than when everything is in one app.
Easier collaboration between different parts/modules of the code: if they are combined, you can simply call methods of other modules. If they are in different web apps, you have to do it in a clumsier way, such as HTTP calls, RMI, etc.
Cons
It will be bigger (obviously). If you worry about it being too big, just exclude the libraries from the deployment WAR and place them under Tomcat's lib directory (see the sketch after this list).
The separate apps might use different versions of the same library. But it's better to sort that out early, while it can still be done easily and with less work.
Another drawback can be a longer deployment time. Again, "outsourcing" the libraries can help make it faster.
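If the merged webapp is built with Maven, one way to "outsource" a library is to mark it as provided, so it is available at compile time but not packaged into the WAR; the jar is then copied once into Tomcat's lib directory and loaded a single time for the whole container. A minimal sketch, using a hypothetical shared artifact:

<dependency>
  <groupId>com.example</groupId>
  <artifactId>shared-utils</artifactId>
  <version>1.4.0</version>
  <!-- provided: compile against it, but expect it on the container's classpath
       (e.g. $CATALINA_HOME/lib) instead of bundling it inside the WAR -->
  <scope>provided</scope>
</dependency>

The trade-off is an extra deployment step: the container has to be prepared with the right jar versions before the WAR is deployed.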
There is no real drawback in terms of size, memory or performance from using a single WAR, as systems are getting faster every day, and as you said, whether the modules run in different apps or in the same one, the total combined resources consumed will be roughly the same in terms of processing power. It is really maintenance and administration concerns that decide between a single app and multiple apps. If you have multiple modules that change frequently and independently of one another, it's better to have multiple webapps, talking via RMI or WS calls for intercommunication (if required). If all of them operate as one unit, where everything changes at once, you may go with a single app. Having multiple apps makes it easier to install and update each one when functionality changes at the module level.
deploying multiple applications to Tomcat
http://www.coderanch.com/t/471496/Tomcat/Deploying-multiple-applications-WAR
Hope it helps

many identical websites on one server [closed]

I have a server (Linux/Apache-Tomcat & MySQL) which hosts several almost identical websites. At least, the java libraries are identical.
Right now, every website has its own .jar file with these Java classes.
I'd like to know whether this is good practice, or whether I should keep these classes in one place where each of the websites can access them. Would this improve performance in any way? Would it result in less memory usage for the JVM? Are there any downsides?
I haven't been able to find any information related to this situation.
Upsides: a small amount of disk space and RAM is saved. Remember that the only heap space taken belongs to the java.lang.Class instances representing the types you actually load from that JAR file.
Downsides: all applications in the JVM are locked into using the shared version of the library. If you really want all deployed webapps to be identical, then this is no downside at all. Deployments can get tricky because you have to maintain a non-standard deployment process (e.g. the webapp is not self-contained) that may differ from container to container or between versions of the same container (e.g. Tomcat changed its mind between versions 4 and 5, 5 and 5.5, and 5 and 6 about how to configure "common" and "shared" libraries).
If the web applications are identical, you should ask yourself: should you even be deploying more than one? Instead, you could sniff the URL and use a configuration for each kind of client instead of deploying the applications separately.

What's the best way to manage multiple dependent projects in Git or Mercurial? [closed]

I have a dozen Java projects that depend on each other, and I frequently make changes that cross-cut all of them. However, many of the projects are libraries that could be used independently of each other as well.
Right now I use mercurial subrepos, which works well except that very few third-party tools support it - it's hard to set up code review tools, continuous integration, etc.
What's the best way to address this situation? Split everything into separate projects and build separate JARs? Migrate to git and use git subrepositories? Check everything in to a single repo and accept that I have to check out everything to use anything? Something else?
I would say the best way to do it would be to cut your dependencies so that they can be referenced as external jars. That way, when you make potentially breaking changes, you don't necessarily have to fix the affected areas straight away; since each project depends on a previously built jar, your changes stay properly isolated. If you use something like Maven to manage your dependencies, you will also benefit from being able to keep track of the different versions of your jars more easily.
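As a minimal sketch of what that looks like in a downstream project's pom (the group, artifact and version here are hypothetical), each project pins the released version of the library jar it was built against and only bumps it when it is ready to absorb the changes:

<dependency>
  <groupId>com.example</groupId>
  <artifactId>core-utils</artifactId>
  <!-- pin a released version; bump deliberately when this project is ready to take the new API -->
  <version>2.3.1</version>
</dependency>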
If the subprojects are sufficiently autonomous, I would advise setting them up as separate maven projects with separate VCS repos.
This will give you the modularity you need, paired with working dependency management.

Complex profiles in maven [closed]

I've been looking at profiles in Maven for selecting different sets of dependencies. This is fine when you want to build, say, a debug build differently from a release build. My problem is that I want to do a fair bit more than this. For my application (a mobile Java app where J2ME is just one target among many) there may be a large number of possible combinations of variations on a build.
Using some made-up command line syntax to illustrate what I'd like to see, I'd imagine typing in something like
mvn -Pmidp,debug,local-resources
What Maven does in this case is build three different builds. What I want to do is use those three (or more, or fewer) switches to affect just one build. So I'd get a MIDP-targeting debug build with 'local resources' (whatever that might mean to me; I'm sure you can imagine better examples).
The only way I can think of doing this would be to have lots and lots of profiles, which becomes quite problematic. In my example, I'd have
-Pmidp-debug-localresources
-Pmidp-release-localresources
-Pmidp-debug-remoteresources
-Pmidp-release-remoteresources
...
Each with its own frustratingly similar set of dependencies and build tag.
I'm not sure I've explained my problem well enough, but I can re-write the question to clarify it if comments are left.
UPDATE:
The question isn't actually valid, since I'd made a false assumption about the way Maven works.
-Pmidp,debug,local-resources
does not do 3 builds. It in fact enables those 3 profiles on one build, which was ironically what I was looking for in the first place.
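For completeness, a minimal sketch of how the combinable profiles can look in the pom (the custom property names are only illustrative placeholders): each profile contributes just its own slice of configuration, and mvn -Pmidp,debug,local-resources merges all three into the same build.

<profiles>
  <!-- target platform: selects the MIDP toolchain/dependencies via a property -->
  <profile>
    <id>midp</id>
    <properties>
      <target.platform>midp</target.platform>
    </properties>
  </profile>
  <!-- build type: compile with debug information -->
  <profile>
    <id>debug</id>
    <properties>
      <maven.compiler.debug>true</maven.compiler.debug>
    </properties>
  </profile>
  <!-- resource source: point the build at locally held resources -->
  <profile>
    <id>local-resources</id>
    <properties>
      <resources.location>local</resources.location>
    </properties>
  </profile>
</profiles>

Because active profiles are merged, the combinations (midp+debug+local-resources, midp+release+remote-resources, and so on) no longer need their own frustratingly similar profile definitions.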
The Maven way is to create many artifacts, each with less complexity. I'd say your best bet is to abstract the common parts of each build into a separate artifact, then create a project for each build that defines the build-specific parts. This will leave you with a lot of projects, but each will be much simpler.
