Run multiple spring boot jars in one jvm - java

My project contains several services, each annotated with @SpringBootApplication, and each can be run on a random port via "gradle bootRun".
Is it possible to build the services into jars and run them together in one JVM? It doesn't matter whether this is done programmatically or by putting them in a container.
Please show me some instructions if possible. Thanks!

It's a little hacky, but can be done. I wrote a blog post about it some time ago: Running Multiple Spring Boot Apps in the Same JVM. The basic idea is to run every Spring Boot application in a different classloader (because otherwise there would be resource conflicts).
I, personally, only used it for testing. I would prefer to run the different applications in different docker containers in production. But for testing it's pretty cool: You can quickly boot up your application and debug everything...

If you want to launch multiple Spring Boot microservices in a single JVM, you can achieve this by launching multiple threads. Please refer to the sample code here: https://github.com/rameez4ever/springboot-demo.git

Yes you can, please check this SO answer.
However, if separating the running user processes and simplicity are core concerns, I would recommend using Docker containers: each running instance of the container (your app) will run in its own JVM on the same host or on distributed hosts.

This is possible, as David Tanzer said, by using two classloaders to start each Spring application in one JVM process, and no special code changes are required in the Spring apps.
In this way, almost every resource under those classloaders is separated: Spring beans, class instances, and even static fields of the same class.
But there are still concerns if you decide to hack like this:
Some resources, like ports, cannot be reused in one JVM.
JVM system properties are shared within the JVM process, so pay attention if the two apps read a system property with the same name. If you are using Spring, you could try setting properties via command-line arguments to override those from system properties.
Classes loaded by the system class loader or its parents will share static fields and class definitions. For example, Spring Boot's thin jar launcher uses the system class loader to load bean class definitions by default, so there will be only one class definition even if you have launched the Spring apps in separate class loaders.
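This separation can be sketched with plain JDK classloaders; ClassLoaderIsolationDemo and Counter are made-up names, with Counter standing in for an application class holding static state:

```java
import java.net.URL;
import java.net.URLClassLoader;

public class ClassLoaderIsolationDemo {
    // Stand-in for an application class with static state (e.g. a singleton).
    public static class Counter {
        public static int value = 0;
    }

    public static void main(String[] args) throws Exception {
        URL classpath = ClassLoaderIsolationDemo.class
                .getProtectionDomain().getCodeSource().getLocation();

        // parent = null: neither loader delegates to the application class
        // loader, so each one defines its own copy of Counter.
        try (URLClassLoader appA = new URLClassLoader(new URL[]{classpath}, null);
             URLClassLoader appB = new URLClassLoader(new URL[]{classpath}, null)) {

            Class<?> counterA = appA.loadClass("ClassLoaderIsolationDemo$Counter");
            Class<?> counterB = appB.loadClass("ClassLoaderIsolationDemo$Counter");

            // Same class file, two distinct Class objects ...
            System.out.println(counterA == counterB);                 // false
            // ... and therefore two independent static fields.
            counterA.getField("value").set(null, 42);
            System.out.println(counterB.getField("value").get(null)); // 0
        }
    }
}
```

If Counter were instead loaded by the system classloader (the parent of both), both apps would see one shared copy, which is exactly the thin-launcher caveat above.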

Related

Spring: Dynamic registrations of beans, rest-controllers, and more

I am new to Spring and would like to convert my existing applications to Spring Boot.
However, I am using a self-written module framework that allows me to add or remove components or additional functions of the application dynamically at runtime. The whole thing can be compared to plugin frameworks like PF4J or the plugin mechanism in Minecraft servers.
The advantage of this is obvious. The application is much more dynamic and certain parts of the program can be updated at runtime without having to restart the whole application.
Under the hood, a new ClassLoader is created for each module when it is loaded. The classpath of this ClassLoader contains the JAR file of the module. Afterwards, I load the respective classes with this ClassLoader and execute an init method that each module contains.
Now, of course, I would like this to work with Spring: dependency injection should function in the modules, and beans or, for example, REST controllers that live in the modules should be registered when the module is loaded and unregistered when it is unloaded.
Example: I have a staff module. When I register it, the employee endpoint is registered and is functional. When I unload the module, the employee endpoint is removed again.
Now to my problem:
Unfortunately, I don't know how to implement this with Spring, or if something like this is even possible in Spring. Or are there even already other solutions for this?
I also read something about application contexts. Do I have to create a new application context for each module, which I then somehow "close" when unloading the module?
I hope you can help me, also with code examples.
This post helped me a bit: https://hdpe.me/post/modular-architecture-with-spring-boot/
In short, for each module a new ApplicationContext (e.g. AnnotationConfigApplicationContext) is created. If you want to share beans between the modules, you have to publish them to the main application context.
Beans can be registered at runtime via ((GenericApplicationContext) applicationContext).registerBeanDefinition(name, beanDefinition); on the main application context.
Another problem is that additional configuration is required, for example for @RestController or similar, in order for them to work. See my other questions on Stack Overflow.
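A minimal sketch of the child-context-per-module idea, assuming spring-context is on the classpath (EmployeeService and all other names here are invented for illustration):

```java
import org.springframework.context.support.GenericApplicationContext;

public class ModuleContextDemo {
    // Stand-in for a bean contributed by the "staff" module.
    public static class EmployeeService {
        public String hello() { return "employees"; }
    }

    public static void main(String[] args) {
        // Long-lived context representing the core application.
        GenericApplicationContext main = new GenericApplicationContext();
        main.refresh();

        // One child context per module; closing it "unloads" the module's beans.
        GenericApplicationContext module = new GenericApplicationContext(main);
        module.registerBean("employeeService", EmployeeService.class);
        module.refresh();

        System.out.println(module.getBean(EmployeeService.class).hello());

        module.close(); // the module's beans are gone; the parent keeps running
        System.out.println(main.isActive());
    }
}
```

Registering REST endpoints dynamically needs extra wiring on top of this (re-triggering handler-mapping detection), as the answer notes.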

Let a container use the OpenJDK and libraries of an existing container

I am doing some experiments for my thesis involving the cold start problem that occurs with containers. My test application is a Spring Boot application that is built on the openjdk image. The first thing I want to try in order to solve the cold start problem is the following:
Have a container ready that contains the OpenJDK and the libraries the Spring Boot app uses. I then start my other container, using the IPC and network namespace of the already existing container, and use the OpenJDK and the libraries of that container to run the jar file.
I am not exactly sure how to achieve this. Can I achieve it by using volumes, or should I be looking for a completely different approach?
On another note, if I want x containers to run, I will make sure there are x pre-existing containers running. This is to make sure that every container has its own specific library container to work with. Would this be okay?
In short, any way I can speed up the Spring Boot application by using a second container connected through IPC/net would be helpful for my problem.
Spring boot is a purely "runtime" framework.
If I've got your question right, you describe the following situation:
So, say you have a container A with a JDK and some jars. This alone doesn't mean that you have a running process, though. It's more like a volume with files ready to be reused (or maybe a layer in terms of Docker images).
In addition you have another container B with a Spring Boot application that should be started somehow (probably with the OpenJDK from container A or its own dedicated JDK).
Now what exactly would you like to "speed up"? The size of the image (a smaller image means faster deployment in a CI/CD pipeline, for example)? The Spring Boot application startup time (the interval between spawning the JVM and the Spring Boot application being up and running)? Or maybe you're trying to load fewer classes at runtime?
The techniques that solve these issues are different. But all in all I think you might want to check out the GraalVM integration, which among other things can create native images and speed up the startup time. This stuff is fairly new and I haven't tried it myself yet. I believe it's work in progress and Spring will put effort into pushing it forward (this is only my speculation, so take it with a grain of salt).
Anyway, you may be interested in reading this article
However, I doubt it has something to do with your research as you've described it.
Update 1
Based on your comments - let me give some additional information that might help. This update contains information from the "real-life" working experience and I post it because it might help to find directions in your thesis.
So, we have a spring boot application on the first place.
By default it's a JAR, and that's Pivotal's recommendation; there is also the option of WARs (as Josh Long, their developer advocate, says: "Make JAR not WAR").
This Spring Boot application usually includes some web server: Tomcat by default for traditional Spring Web MVC applications, but you can switch it to Jetty or Undertow. If you're running a "reactive application" (Spring WebFlux, supported since Spring Boot 2) your default choice is Netty.
One side note: not all Spring Boot driven applications have to include some kind of embedded web server, but I'll put this subtle point aside since you seem to target the case with web servers (you mention Tomcat, a quicker ability to serve requests, etc., hence my assumption).
Ok, now let's try to analyze what happens when you start a Spring Boot application JAR.
First of all the JVM itself starts: the process is started, the heap is allocated, the internal classes are loaded, and so on. This can take some time (around a second or even slightly more, depending on the server, parameters, the speed of your disk, etc.).
This thread addresses the question of whether the JVM is really slow to start; I probably won't be able to add more to that.
Ok, so now it's time to load the Tomcat internal classes. This again can take a couple of seconds on modern servers. Netty seems to be faster. You can try to download a standalone distribution of Tomcat and start it up on your machine, or create a sample application without Spring Boot but with embedded Tomcat, to see what I'm talking about.
So far so good, now comes our application. As I said in the beginning, Spring Boot is a purely runtime framework, so the classes of Spring/Spring Boot itself must be loaded, and then the classes of the application itself. If the application uses some libraries, they'll also be loaded, and sometimes custom code will even be executed during application startup: Hibernate may check and/or scan DB schema definitions and even update the underlying schema, Flyway/Liquibase can execute schema migrations, Swagger might scan controllers and generate documentation, and what not.
Now, in "real life" this process can take a minute or even more, but that's not because of Spring Boot itself, but rather because of beans created in the application that have some code in their constructors/post-construct hooks, that is, something that happens during the Spring application context initialization. Another side note: I won't really dive into the internals of the Spring Boot startup process here; Spring Boot is an extremely powerful framework that has a lot of things happening under the hood. I assume you've worked with Spring Boot in one way or another; if not, feel free to ask concrete questions about it and I/my colleagues will try to address them.
If you go to start.spring.io you can create a sample demo application; it will load pretty fast. So it all depends on your application beans.
In this light, what exactly should be optimized?
You've mentioned in comments that there might be a Tomcat running with some JARs, so that they won't be loaded when the Spring Boot application starts.
Well, as our colleagues mentioned, this indeed more resembles the "traditional" web servlet container/application server model that we in the industry have used for ages (around 20 years, more or less).
This kind of deployment indeed has an always up-and-running JVM process that is "always" ready to accept WAR files, package archives of your application.
Once it detects a WAR thrown into some folder, it will "deploy" the application by creating a hierarchical classloader and loading up the application's JARs/classes. What's interesting in your context is that it was possible to "share" libraries between multiple WARs so that they were loaded only once. For example, if your Tomcat hosts, say, 3 applications (read: 3 WARs) and all of them use the Oracle database driver, you can put the driver's jar in some shared libs folder and it will be loaded only once, by the classloader which is a "parent" of the classloaders created per WAR. This classloader hierarchy is crucial, but I believe it's outside the scope of the question.
I used to work with both models (Spring Boot with an embedded server, an application without Spring Boot but with an embedded Jetty server, and "old-school" Tomcat/JBoss deployments).
From my experience, and as time proves many of our colleagues agree on this point, Spring Boot applications are much more convenient operations-wise for many reasons (again, these reasons are out of scope for the question IMO; let me know if you need more on this). That's why it's the current "trend", while "traditional" deployments are still in the industry for many non-technical reasons (historical reasons, the system is "defined to be" in maintenance mode, you already have a deployment infrastructure, a team of "sysadmins" that "know" how to deploy, you name it; bottom line, nothing purely technical).
Now with all this information you probably understand better why I suggested taking a look at GraalVM, which will allow a faster application startup by means of native images.
One more point that might be relevant: if you're choosing a technology that allows a fast startup, you're probably into AWS Lambda or the alternatives offered by other cloud providers these days.
This model allows virtually infinite scalability of "computational" power (CPU); under the hood they "start" containers and "kill" them immediately once they detect that the container is actually doing nothing. For this kind of application Spring Boot simply is not a good fit, and neither, basically, is Java, again because the JVM process is relatively slow to start, so once they start a container like this it takes too long until it becomes operational.
You can read here about what the Spring ecosystem has to offer in this field, but it's not really relevant to your question (I'm trying to provide directions).
Spring Boot shines when you need an application that might take some time to start, but once it starts it can do its job pretty fast. And yes, it's possible to stop the application (we use the terms scale out/scale in) if it's not "occupied" doing actual work. This approach is also relatively new (~3-4 years) and works best in "managed" deployment environments like Kubernetes, Amazon ECS, etc.
So if speeding up the application start is your goal, I think you would need a different approach. Here is a summary of why I think so:
docker: a container is a running instance of an image; you can see an image as a filesystem (it's actually more than that, but we are talking about libraries). In a container you have a JDK (and I guess your image is based on Tomcat). The Docker engine has a very well designed cache system, so containers start very quickly: if no changes are made to a container, Docker only needs to retrieve some info from a cache. These containers are isolated, and for good reasons (security, modularity, and, talking about libraries, isolation lets you have multiple versions of a library in different containers). Volumes do not do what you think; they are not designed to share libraries. They let you break isolation to do certain things: for example, you can create a volume for your codebase so you don't have to rebuild an image for each change during the programming phase, but you usually won't see them in a production environment (maybe for some config files).
java/spring: Spring is a framework based on Java, Java is based on a JDK, and Java code runs on a VM. So to run a Java program you have to start that VM (there is no other way), and of course you cannot eliminate this startup time. The Java environment is very powerful, but this is why a lot of people prefer Node.js, especially for little services: Java code is slow to start up (minutes versus seconds). Spring, as said before, is based on Java, servlets and a context. A Spring application lives in that context, so to run a Spring application you have to initialize that context.
You are running a container; on top of that you are running a VM; then you are initializing a Spring context; and finally you are initializing the beans of your application. These steps are sequential for dependency reasons. You cannot initialize Docker, the VM and a Spring context and then run your application somewhere else. For example, if you add a filter chain to a Spring application, you need to restart the application, because you need to add a servlet to your system. If you want to speed up the startup process, you would need to change the Java VM or make some changes to the Spring initialization. In summary, you are trying to deal with this problem at a high level instead of a low level.
To answer your first question:
I am not exactly sure on how to achieve this? Can I achieve this by using volumes or should I be looking for a completely different approach?
This has to be balanced with the actually capabilities of your infrastructure.
One could have a full optical fibre network, a Docker repository inside this network, and thus bandwidth that allows for "big" images.
One could have a poor connection to one's Docker repository, and so need extra-small images, but a quick connection to the library repositories.
One could have a blue/green deployment technique and care about neither the size of the images and layers nor the boot time.
Caring about image and layer size is good, and it is definitely a good practice advised by Docker, but everything depends on your needs. The recommendation about keeping images and layers small is for images that you'll distribute. If this is your own image for your own application, then you should act upon your needs.
Here is a little bit of my own experience: in a company I was working for, we needed the database to be synchronised back from production to the user acceptance test and developer environments.
Because of the size of the production environment, importing the data from an SQL file in the entrypoint of the container took around twenty minutes. This might have been alright for the UAT environment, but it was not for the developers'.
So after trying all sorts of minor improvements to the SQL file (like disabling foreign key checks and the like), I came up with a totally new approach: I created a big fat image, in a nightly build, that would contain the database already. This is, indeed, against all Docker good practice, but the bandwidth at the office allowed the container to start in a matter of 5 minutes at worst, compared to the twenty minutes before.
So I indeed ended up with a humongous build time for my Docker SQL image, but an acceptable download time, considering the bandwidth available, and a run time reduced to the minimum.
This takes advantage of the fact that the build of an image happens only once, while the start time applies to every container derived from this image.
To answer your second question:
On another note, if I want x containers to run, I will make sure there are x pre-existing containers running. This is to make sure that every container has its own specific library container to work with. Would this be okay?
I would say the answer is: no.
Even in a microservices architecture, each service should be able to do something. As I understand it, your actual not-library containers are unable to do anything because they are tightly coupled to the pre-existence of another container.
This said there are two things that might be of interest to you:
First: remember that you can always build from another pre-existing image, even your own.
Given this would be your library-container Dockerfile:
FROM openjdk:8-jdk-alpine
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java","-jar","/app.jar"]
Credits: https://spring.io/guides/topicals/spring-boot-docker/#_a_basic_dockerfile
And that you build it via
docker build -t my/spring-boot .
Then you can have another container built on top of that image:
FROM my/spring-boot
COPY some-specific-lib lib.jar
Secondly: there is a nice technique in Docker for dealing with libraries, called multi-stage builds, which can be used exactly for your case.
FROM openjdk:8-jdk-alpine as build
WORKDIR /workspace/app
COPY mvnw .
COPY .mvn .mvn
COPY pom.xml .
COPY src src
RUN ./mvnw install -DskipTests
RUN mkdir -p target/dependency && (cd target/dependency; jar -xf ../*.jar)
FROM openjdk:8-jdk-alpine
VOLUME /tmp
ARG DEPENDENCY=/workspace/app/target/dependency
COPY --from=build ${DEPENDENCY}/BOOT-INF/lib /app/lib
COPY --from=build ${DEPENDENCY}/META-INF /app/META-INF
COPY --from=build ${DEPENDENCY}/BOOT-INF/classes /app
ENTRYPOINT ["java","-cp","app:app/lib/*","hello.Application"]
Credits: https://spring.io/guides/topicals/spring-boot-docker/#_multi_stage_build
And as you can see in the credits of this multi-stage build, there is even a reference to this technique in the guide of the Spring website.
The manner in which you are attempting to reach your goal defies the entire point of containerisation.
We may cycle back to firmly focus on the goal -- you are aiming to "resolve the problem of cold start" and to "speed up the spring boot application".
Have you considered actually compiling your Java application to a native binary?
The essence of the JVM is to support Java's interoperability across host environments. Since containers by their nature inherently resolve interoperability, another layer of resolution (by the JVM) is redundant.
Native compilation of your application will factor out the JVM from your application runtime, therefore ultimately resolving the cold start issue. GraalVM is a tool you could use to do native compilation of a Java application. There are GraalVM Container Images to support your development of your application container.
Below is a sample Dockerfile that demonstrates building a Docker image for a native compiled Java application.
# Dockerfile
FROM oracle/graalvm-ce AS builder
LABEL maintainer="Igwe Kalu <igwe.kalu@live.com>"
COPY HelloWorld.java /app/HelloWorld.java
RUN \
set -euxo pipefail \
&& gu install native-image \
&& cd /app \
&& javac HelloWorld.java \
&& native-image HelloWorld
FROM debian:10.4-slim
COPY --from=builder /app/helloworld /app/helloworld
CMD [ "/app/helloworld" ]
# .dockerignore
**/*
!HelloWorld.java
// HelloWorld.java
public class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello, Native Java World!");
    }
}
Build the image and run the container:
# Building...
docker build -t graalvm-demo-debian-v0 .
# Running...
docker run graalvm-demo-debian-v0:latest
## Prints
## Hello, Native Java World!
Spring Tips: The GraalVM Native Image Builder Feature is an article that demos building a Spring Boot application with GraalVM.

Can multiple portlets share a singleton? [duplicate]

I have a couple of Singleton classes in a Liferay application that hold several configuration parameters and a ServiceLocator with instances to WebServices I need to consume.
I have put these classes in a jar that is declared as a dependency on all my portlets.
The thing is, I have put some logging lines for initialization in these singleton classes, and when I deploy my portlets I can see these lines multiple times, once for every portlet, since each portlet has its own class context.
For the AppConfig class it might not be such a big deal but my ServiceLocator does actually hold a bunch of references that take a good bit of memory.
Is there any way that I can put these Singleton references in some kind of Shared context in my Liferay Portal?
The problem is that every portlet runs in its own WAR file and each WAR file has its own classloader.
Usually when I had to meet a requirement like this, I had to put the singleton classes in a JAR file and put this JAR file in the common classloader library instead of packing it into each WAR. (In Tomcat: <tomcatHome>/common/lib or something like that.)
Then you'll also have to put all dependent libraries into that common lib dir, too. I don't know how to do that in Liferay, though. For Tomcat see this thread: stackoverflow.com/questions/267953/ and this documentation: http://tomcat.apache.org/tomcat-7.0-doc/class-loader-howto.html. It depends on the servlet container.
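The difference between packing the jar into each WAR and putting it in the common classloader can be sketched with plain JDK classloaders (CommonLibDemo and ServiceLocator are made-up names; ServiceLocator stands in for your singleton-holding class):

```java
import java.net.URL;
import java.net.URLClassLoader;

public class CommonLibDemo {
    // Stand-in for the singleton class shipped in the shared jar.
    public static class ServiceLocator {
        public static final ServiceLocator INSTANCE = new ServiceLocator();
    }

    public static void main(String[] args) throws Exception {
        URL cp = CommonLibDemo.class.getProtectionDomain()
                .getCodeSource().getLocation();

        // Case 1: each "portlet" WAR carries the jar itself -> two singletons.
        try (URLClassLoader warA = new URLClassLoader(new URL[]{cp}, null);
             URLClassLoader warB = new URLClassLoader(new URL[]{cp}, null)) {
            System.out.println(warA.loadClass("CommonLibDemo$ServiceLocator")
                    == warB.loadClass("CommonLibDemo$ServiceLocator")); // false
        }

        // Case 2: the jar lives in the shared parent ("common/lib") and the
        // per-WAR loaders delegate to it -> one shared singleton.
        try (URLClassLoader common = new URLClassLoader(new URL[]{cp}, null);
             URLClassLoader warA = new URLClassLoader(new URL[0], common);
             URLClassLoader warB = new URLClassLoader(new URL[0], common)) {
            System.out.println(warA.loadClass("CommonLibDemo$ServiceLocator")
                    == warB.loadClass("CommonLibDemo$ServiceLocator")); // true
        }
    }
}
```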
Alexander's answer gives the general answer that's true with or without Liferay in mind.
Liferay (as you mention it) adds another option to this: ServiceBuilder. You'll end up with the actual instances contained in exactly one web application, and you'll have an interfacing jar that you can distribute with every dependent application. This way you can more easily update your implementation: It's easy to hot-deploy new and updated web applications to your application server - it's harder to update code that's living on the global classpath.
The global classpath (Alexander's answer), however, brings you immediate success, while ServiceBuilder comes with its own learning curve and introduces some more dependencies. I don't mind those dependencies, but your mileage may vary. Decide for yourself.
With a Maven portlet you can make a common Spring component and import it in the pom of each portlet.
Another solution is to use service builder.
Spring MVC portlet would be the most recommended for this.

Spring multiple application contexts vs single application context in an ear

I have inherited an app which is packaged as an ear file with the following structure:
- ear:
  - APP-INF/lib (persistence.jar - Hibernate + Spring ... etc.)
  - war (web services)
  - jar (mdb)
  - jar (mdb)
As I studied the app I noticed that each module inside the ear creates its own Spring application context that is loaded at runtime.
It works OK, but would it not be better to have only one application context?
I wonder what the benefits and drawbacks of this structure are compared to one where only a single application context is used.
To be clearer: at runtime there are 3 application context roots loaded. It is not only that there are more application context files.
Thanks
First of all: are you sure it isn't the same application context everywhere? Have you tested this?
If they are all separate:
The advantage is that those application contexts are shielded from each other, which you could call loose coupling, which is a good thing; one can't influence the other, and it keeps things clearer for the programmer.
The disadvantage is that it might be harder to access one application context from the other, but you can always find a way around this.
If the application is very large then an application context per module is easy to manage and doesn't create any overhead. When the application starts up, all the context XML files will be combined.
I would also prefer to maintain separate application context files for separate sets of configuration, e.g. security, datasource, AOP, etc. should be placed in separate context files.
When the application is small you can go for a single application context file for the whole application. Otherwise, different application contexts for different modules are easier to manage when you need to make changes in any one of them. If you combine all of them, it will be very difficult to make any changes.
Hope this helps you. Cheers.
A few points against a single application-context file that I can think of:
One file will get huge and it will be a maintenance nightmare.
Developers of each component modifying and updating the same file can lead to errors.
Changes in one component will lead to changes in one centralized file, which again may lead to issues.
Separate files give every component developer separation of concerns; they don't have to see or know about others' work while carrying out their own task.
I stumbled on this thread while seeking a solution for a multi-tenant application. Most articles are just about the datasource (Spring hot-swappable datasource targets) etc., but what you have is a separate context at the WAR level.
This gives me another idea for bundling my application in a way that makes it multi-tenant.
If the WARs are skinny, serving just to inject special runtimes and provide additional context-path qualifiers, this may work for a large multi-tenant application. Common classes will be loaded at the EAR level, with application contexts per WAR. I guess this should be OK for a small number of tenants.

Do servlet containers prevent web applications from causing each other interference and how do they do it?

I know that a servlet container, such as Apache Tomcat, runs in a single instance of the JVM, which means all of its servlets will run in the same process.
I also know that the architecture of the servlet container means each web application exists in its own context, which suggests it is isolated from other web applications.
Accepting that each web application is isolated, I would expect that you could create 2 copies of an identical web application, change the names and context paths of each (as well as any other relevant configuration), and run them in parallel without one affecting the other. The answers to this question appear to support this view.
However, a colleague disagrees based on their experience of attempting just that.
They took a web application and tried to run 2 separate instances (with different names etc) in the same servlet container and experienced issues with the 2 instances conflicting (I'm unable to elaborate more as I wasn't involved in that work).
Based on this, they argue that since the web applications run in the same process space, they can't be isolated, and things such as class attributes would end up being inadvertently shared. This answer appears to suggest the same thing.
The two views don't seem to be compatible, so I ask you:
Do servlet containers prevent web applications deployed to the same container from conflicting with each other?
If yes, How do they do this?
If no, Why does interference occur?
and finally: under what circumstances could separate web applications conflict and cause each other interference? Perhaps scenarios involving resources on the file system, native code, or database connections?
The short answer is that the servlet container isolates the applications by using a separate classloader for each application - classes loaded by separate classloaders (even when from the same physical class files) are distinct from each other. However, classloaders share a common parent classloader and the container may provide a number of other container-wide resources, so the applications are not completely isolated from each other.
For example, if two applications share some common code by each including the same jar in their war, then each application will load their own instance of the classes from the jar and a static variable (e.g. a singleton) of a class in one application will be distinct from the static variable of the same class in the other application.
Now suppose, for example, that the applications try to use java.util.logging.Logger (and presumably don't include their own copy of the Logger classes in their war files). Each application's own classloader will not find the class in the war file, so they will defer to their parent classloader, which is probably the shared, container-wide classloader. The parent classloader will load the Logger class and both applications will then be sharing the same Logger class.
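A small JDK-only sketch of this delegation behaviour (SharedParentDemo is a made-up name; here the bootstrap loader stands in for the container-wide parent, and java.lang.String for a class that only the parent provides):

```java
import java.net.URL;
import java.net.URLClassLoader;

public class SharedParentDemo {
    public static void main(String[] args) throws Exception {
        URL cp = SharedParentDemo.class.getProtectionDomain()
                .getCodeSource().getLocation();
        try (URLClassLoader webappA = new URLClassLoader(new URL[]{cp}, null);
             URLClassLoader webappB = new URLClassLoader(new URL[]{cp}, null)) {
            // Classes only the parent provides are shared: both "webapps"
            // get the very same Class object via parent delegation.
            System.out.println(webappA.loadClass("java.lang.String")
                    == webappB.loadClass("java.lang.String"));   // true
            // Classes found on each webapp's own path are defined per loader.
            System.out.println(webappA.loadClass("SharedParentDemo")
                    == webappB.loadClass("SharedParentDemo"));   // false
        }
    }
}
```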
Servlets in the same container will share some resources. I think it should be possible to deploy the same web application twice in the same container provided that you give each a different name and they don't collide on a particular resource. This would theoretically be the same as deploying two different servlets which just happen to have the same implementation, which we do all the time.
Some shared resources, off the top of my head (and I'm not an expert so don't quote any of this!):
Libraries (jars) in tomcat/common/lib (Tomcat 5) or tomcat/lib (Tomcat 6).
Settings in the global server.xml, web.xml, tomcat-users.xml
OS provided things, such as stdin/stdout/stderr, network sockets, devices, files, etc.
The logging system.
Java system properties (System.getProperty(), System.setProperty())
I suspect... static variables? I'm not sure if the ClassLoader design would prevent this or not.
Memory. This is the most common problem: one servlet can deny others availability by consuming all memory.
CPU - especially with multi-threaded apps. On the HotSpot JVM, each Java thread is actually an OS-level thread, which are expensive and you don't want more than a few thousand of them.
Doubtless there are more.
Many of these things are protected by a security manager, if you're using one.
I believe the isolation is in the class loader. Even if two applications use the same class name and package, their class loader will load the one deployed with the application.

Categories

Resources