I develop applications using a customized version of Tomcat.
There is support for dynamic class loading, which I use a lot in test and development environments, and I'm curious about the impact of using it in production as well.
As far as I know, dynamic class loading should not be used in production because of performance issues: the class loader polls the classes for changes on each access.
However, this application server supports configuring the polling frequency for loading new classes.
If I understand it correctly, configuring dynamic class loading to poll at a reasonably long interval (e.g. one hour) should avoid most of the adverse performance impact.
In production, this mechanism would be used in a limited number of ways:
1) emergency patching without user downtime (the fixes would be properly released at the first shutdown)
2) permanent patching of classes for which we do not have the source (legacy third-party libs)
Is this a reliable solution?
Thanks
IMHO you should not use this feature in production. Apart from the polling overhead, there are a lot of other scenarios that would not cause problems in a dev environment but could in production.
A few things that come to mind:
Impact on any constants that are inlined at compile time (see the sketch after this list)
Impact on perm space
You lose track of the version you have deployed in production
There can be mistakes, like missing a class file during the patch
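To illustrate the first point, here is a minimal sketch (the file and class names are made up): `javac` copies the value of a compile-time constant into the caller's bytecode, so hot-swapping the class that declares the constant changes nothing for already-compiled callers.

```java
// Constants.java -- old version, already compiled into the webapp
public class Constants {
    public static final int MAX_RETRIES = 3; // compile-time constant
}

// Caller.java -- compiled against the old Constants. javac inlines the
// literal 3 here, so even after hot-swapping a new Constants class with
// MAX_RETRIES = 5, this still prints 3 until Caller itself is
// recompiled and reloaded.
public class Caller {
    public static void main(String[] args) {
        System.out.println(Constants.MAX_RETRIES);
    }
}
```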
Without knowing what "customized Tomcat version" you have, it is obviously difficult to tell whether you understand it correctly. However, yes: if it does what it says on the tin, you will avoid most adverse effects.
Still, for emergency patching and the like, why would you want to wait up to an hour for the classes to be reloaded when you could just use the Tomcat Manager application as described below?
If you want to use this feature for emergency patching or general patching, I would strongly suggest using the Tomcat Manager application to trigger a reload when required:
reloadable
Set to true if you want Catalina to monitor classes in
/WEB-INF/classes/ and /WEB-INF/lib for changes, and automatically
reload the web application if a change is detected. This feature is
very useful during application development, but it requires
significant runtime overhead and is not recommended for use on
deployed production applications. That's why the default setting for
this attribute is false. You can use the Manager web application,
however, to trigger reloads of deployed applications on demand.
http://tomcat.apache.org/tomcat-7.0-doc/config/context.html
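For completeness, a minimal sketch of triggering such an on-demand reload through the Manager's text interface (host, context path, and credentials are placeholders; the user needs the manager-script role):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Base64;

public class ManagerReload {
    public static void main(String[] args) throws Exception {
        // Ask the Manager app to reload the webapp at context path /myapp
        URL url = new URL("http://localhost:8080/manager/text/reload?path=/myapp");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        String credentials = Base64.getEncoder()
                .encodeToString("admin:secret".getBytes("UTF-8"));
        conn.setRequestProperty("Authorization", "Basic " + credentials);
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            System.out.println(in.readLine()); // e.g. "OK - Reloaded application ..."
        }
    }
}
```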
After updating log4j on the classpath to mitigate the Log4Shell vulnerability (CVE-2021-44228) (or any other library, generally), do I need to restart the JVM for the update to take effect?
It is mentioned here that "newly added classes" on the classpath get loaded into the JVM automatically without a restart, but what about classes with the same name that are already loaded via a class loader? Do they get overwritten?
The same question applies to Tomcat (although I guess it would behave the same as the JVM?).
Even if the new classpath were immediately used by the JVM, there may be a number of objects instantiated from the old classes in memory. The new classes would then only apply to new instances. AFAIK log4j does not throw away its objects at runtime.
To be on the safe side you definitely want to restart the JVM.
Is it required to restart the JVM after updating log4j on the classpath?
Probably yes. It depends on which classloader loads log4j.
If the log4j libraries are exclusively part of your webapps, you might be able to get away with hot-loading all of the webapps.
But you said "in classpath" and I guess that means Tomcat's classpath; i.e. the shared libraries.
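If you want to check rather than guess, a small sketch like this, dropped into one of the webapps, shows which class loader actually serves log4j (the class name below is log4j 2's API entry point; this assumes javax.servlet, i.e. Tomcat 9 or earlier, and the servlet mapping is omitted):

```java
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class Log4jLoaderCheckServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        try {
            Class<?> cls = Class.forName("org.apache.logging.log4j.LogManager");
            // A webapp loader (e.g. WebappClassLoader) means a webapp reload
            // may suffice; a shared/system loader means restart the JVM.
            resp.getWriter().println(cls + " loaded by " + cls.getClassLoader());
        } catch (ClassNotFoundException e) {
            resp.getWriter().println("log4j is not visible from this webapp");
        }
    }
}
```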
My advice would be not to take the risk. Restart Tomcat.
(Your systems should be designed so that Tomcat restarts are not a significant problem. There are various ways to do that. Indeed, one could argue that if downtime of your (single) Tomcat instance is an operational concern, then you should be running multiple copies.)
... but what about classes with the same name that are already loaded via a class loader? Do they get overwritten?
A classloader won't notice that things have changed in its classpath. Classloaders are not designed to work that way.
And even if it did, a classloader cannot reload a class. The JVM architecture and its runtime type safety don't allow it.
The hot-loading feature that (some) people use to avoid Tomcat restarts actually involves creating a brand new classloader to load the new version. The old version of the class will still exist in its original classloader, and other code will remain bound to it.
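A small sketch of that point (the paths and the class name `Greeter` are made up): loading the same class name through two independent class loaders produces two distinct, incompatible `Class` objects, which is why "reloading" really means a new classloader rather than an overwrite.

```java
import java.net.URL;
import java.net.URLClassLoader;

public class TwoLoaders {
    public static void main(String[] args) throws Exception {
        // Two loaders, each pointing at a directory containing a Greeter.class
        ClassLoader a = new URLClassLoader(new URL[]{ new URL("file:/opt/patch/v1/") }, null);
        ClassLoader b = new URLClassLoader(new URL[]{ new URL("file:/opt/patch/v2/") }, null);
        Class<?> c1 = a.loadClass("Greeter");
        Class<?> c2 = b.loadClass("Greeter");
        System.out.println(c1 == c2);                // false: distinct runtime types
        System.out.println(c1.isAssignableFrom(c2)); // false: not interchangeable
    }
}
```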
I have to write a Java library that will need to be embedded in various run-time environments (some web services, desktop applications, even possibly in an Oracle database JVM).
It is important that the library run in the JVM of the application that uses it (e.g., I can't just make all the apps call a web service that uses this library).
It is also important that every application running the library runs the same version of it, as the library will be enforcing business logic that must be applied consistently across the enterprise.
Finally, some of the applications using the library are customer facing and have demanding up-time requirements.
Goal: I want to load a Java library from a central network location, but also cache it locally, to be used in the event that the central location is unavailable.
**Question: can I do this somehow by writing a custom class loader that will search first at a network URL and, if found, load the class and save it locally and, if not found, load a previously saved local copy of the class?**
I don't know everything about classloaders and I am concerned about a few things. The location of the locally saved (cached) classes would probably not be in the CLASSPATH of the JVM. Would that be a problem? What other factors would make this approach problematic / unworkable?
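The approach being asked about would look roughly like this (a sketch only; the base URL and cache directory are placeholders, and real code would want signature verification and version checks):

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Path;

public class CachingNetworkClassLoader extends ClassLoader {
    private final String baseUrl; // e.g. "http://repo.example.com/classes/"
    private final Path cacheDir;  // local cache; need not be on the CLASSPATH

    public CachingNetworkClassLoader(String baseUrl, Path cacheDir, ClassLoader parent) {
        super(parent);
        this.baseUrl = baseUrl;
        this.cacheDir = cacheDir;
    }

    @Override
    protected Class<?> findClass(String name) throws ClassNotFoundException {
        String rel = name.replace('.', '/') + ".class";
        Path cached = cacheDir.resolve(rel);
        byte[] bytes;
        try (InputStream in = new URL(baseUrl + rel).openStream()) {
            bytes = in.readAllBytes();              // network first...
            Path parent = cached.getParent();
            if (parent != null) Files.createDirectories(parent);
            Files.write(cached, bytes);             // ...refreshing the local cache
        } catch (IOException network) {
            try {
                bytes = Files.readAllBytes(cached); // ...fall back to the cache
            } catch (IOException cacheMiss) {
                throw new ClassNotFoundException(name, network);
            }
        }
        // defineClass takes the bytes directly, so the cache location does
        // not have to be on the JVM's CLASSPATH.
        return defineClass(name, bytes, 0, bytes.length);
    }
}
```

Note that the normal delegation model still applies: `findClass` is only consulted after the parent loaders fail, so classes already on the CLASSPATH win.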
I'm using the Liferay platform to develop a company portal (version 6.1.1). This portal already has a considerable implementation and database size (174 tables).
As expected, the 'build services' runs and deploys have been getting slower as the project grows.
The problem is that with the current implementation it takes about 20 minutes to perform the 'build services' and about 3-4 minutes to perform a deploy, even if I change a simple string in the code. And after every 3 deploys it's necessary to restart the server because it seems to freeze.
My machine specs are:
Intel core i5-3210M
8GB RAM
64bits
And these are the memory args of my Liferay server:
-Xms1024m -Xmx1024m -XX:PermSize=1024m -XX:MaxPermSize=1024m
As you can imagine, these waiting times cause a huge drop in development productivity.
My questions are: is this normal? If yes, what kind of alternatives do I have for a future portal implementation?
Thank you.
174 tables is quite a lot - more than Liferay itself brings. I'd recommend spreading your application out into separately deployable plugins - they don't (technically) need to be in the same plugin; Service Builder allows you to use the services across different plugins.
Proper dependency management should help you to isolate the functionality that you'll extract into separate applications. Declare which application needs which other application deployed before, and you can access the services cross-context.
To answer your comment question, here is a sample with only two projects, both created with Service Builder. Let's call them common-portlet and app1-portlet. Obviously, app1-portlet uses components (and services) from common-portlet.
In app1-portlet, edit docroot/WEB-INF/liferay-plugin-package.properties and add the line
required-deployment-contexts=common-portlet
This will make sure that app1-portlet is only deployed when common-portlet is available. Also, common-service.jar, the API of common-portlet generated with Service Builder, will automatically be put on the classpath of app1-portlet; in other words, you can call the services that you implemented in common-portlet.
Assuming your more abstract portlets have a more stable interface (typically this indicates a proper architecture), changes to app1-portlet (or app2-portlet, etc.) will only affect the portlet you make the change in. Even if you have a change in common-portlet, Service Builder will be relatively quick; however, on interface changes you still need to recompile everything, but that's the nature of dependencies. If you don't change your interfaces, you'll only need a redeploy.
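A hedged sketch of what such a cross-plugin call looks like from app1-portlet (the entity and method names are placeholders for whatever Service Builder generated in your common-portlet):

```java
// Inside app1-portlet; the static facade below stands in for the
// *LocalServiceUtil class that Service Builder generated in common-portlet.
import com.example.common.service.AccountLocalServiceUtil;

public class AccountSummary {
    public int countAccounts() throws Exception {
        // Resolved through common-service.jar on app1-portlet's classpath
        return AccountLocalServiceUtil.getAccountsCount();
    }
}
```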
We have a web application written in Java, which uses Struts2, Spring, and JasperReports. This application runs on GlassFish 4.0.
The application's libraries are in the WEB-INF/lib folder, and four more applications that use the same libraries are also installed in GlassFish.
GlassFish is configured to use 1024 MB for heap space and 512 MB for PermGen, and most of the memory consumption when we bundle libraries per application is in the Struts actions and Spring AOP classes (according to the NetBeans profiler).
The problem we are having is the amount of memory consumed by having the libraries in each application's class loader: it is too high, it generates PermGen errors, and we have also noticed that the applications run slower with more users.
Because of that, we tried shared libraries: we put them in the domain1/lib folder and found that with a single deployed application the load time and memory consumption are much lower, and the application works faster in general. But when we deploy the rest of the applications on the server, only the first application loaded works well, and the rest throw errors when we call Struts2 actions.
We believe this is because each application has slightly different Struts2 and log4j settings.
We have also tried putting only certain libraries in GlassFish and leaving only Struts2 in the application, but that produces InvocationTargetException errors, because all the libraries depend on the Apache Commons libraries, and it doesn't matter whether we put those libs in one place or the other. Also, if we put them in both places, the application doesn't start.
Are there any special settings or best practices for using shared libraries?
Is there a way to use shared libraries but load settings per application? Or do we have to change the settings to make them all the same?
Are there any special settings or best practices for using shared libraries? Is there a way to use shared libraries but load settings per application? Or do we have to change the settings to make them all the same?
These are actually interesting questions... I don't use GlassFish, but according to the documentation:
Application-Specific Class Loading
[...]
You can specify module- or application-specific library classes [...] Use the asadmin deploy command with the --libraries option and specify comma-separated paths
[...]
Circumventing Class Loader Isolation
Since each application or individually deployed module class loader universe is isolated, an application or module cannot load classes from another application or module. This prevents two similarly named classes in different applications or modules from interfering with each other.
To circumvent this limitation for libraries, utility classes, or individually deployed modules accessed by more than one application, you can include the relevant path to the required classes in one of these ways:
Using the Common Class Loader
Sharing Libraries Across a Cluster
Packaging the Client JAR for One Application in Another Application
Using the Common Class Loader
To use the Common class loader, copy the JAR files into the domain-dir/lib or as-install/lib directory or copy the .class files (and other needed files, such as .properties files) into the domain-dir/lib/classes directory, then restart the server.
Using the Common class loader makes an application or module accessible to all applications or modules deployed on servers that share the same configuration. However, this accessibility does not extend to application clients. For more information, see Using Libraries with Application Clients. [...]
Then I would try:
Solution 1
put all the libraries except the Struts2 jars under domain1/lib,
put only the Struts2 jars under domain1/lib/applibs,
then run
$ asadmin deploy --libraries struts2-core-2.3.15.2.jar FooApp1.war
$ asadmin deploy --libraries struts2-core-2.3.15.2.jar FooApp2.war
This isolates the class loading of the Struts2 libraries while keeping the rest under the Common class loader's control.
Solution 2
put all the libraries except the Struts2 jars under domain1/lib,
put only the Struts2 jars under domain1/lib/applibs, in different copies with different names, e.g. appending _appname to the jar names,
then run
$ asadmin deploy --libraries struts2-core-2.3.15.2_FooApp1.jar FooApp1.war
$ asadmin deploy --libraries struts2-core-2.3.15.2_FooApp2.jar FooApp2.war
This prevents sharing of the libraries by instantiating (mock) different versions of them.
Hope that helps, let me know if some of the above works.
You can try to create what is known as a skinny WAR. Pack all your WARs inside an EAR and move all the common JARs from WEB-INF/lib to the lib/ folder in the EAR (don't forget to set <library-directory> in the application.xml).
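A sketch of what the EAR's META-INF/application.xml might look like (the module names are placeholders); the JARs moved into the EAR's lib/ directory are then shared by all the WARs:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<application xmlns="http://java.sun.com/xml/ns/javaee" version="6">
  <!-- Shared JARs live in the EAR's lib/ directory -->
  <library-directory>lib</library-directory>
  <module>
    <web>
      <web-uri>app1.war</web-uri>
      <context-root>/app1</context-root>
    </web>
  </module>
  <module>
    <web>
      <web-uri>app2.war</web-uri>
      <context-root>/app2</context-root>
    </web>
  </module>
</application>
```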
I'd bet that placing the libs under lib/ or lib/ext won't resolve your performance issues. You did not write anything about the applications or server settings, like the size of the applications or the available heap and PermGen space, but nonetheless I would recommend staying with separate libs per app.
If you place the libs in the server dirs, they will be shared among all apps. You will lose the option to upgrade only one of your applications to a new framework or to get rid of any of them. Your deployment will be bound to a specific server architecture.
And you wrote that it did not solve your problems; it may even raise new ones.
I would recommend investing some hours into tuning the server. If it runs with defaults, allocate more PermGen and heap space.
If this does not help, you should analyze in depth what's going wrong. Shared libs might be a solution, but you don't know the problem yet. IBM offers some cool and free tools to analyze heap dumps; this could be a good starting point.
I came here in search of guidance about installing libraries that are shared among multiple applications or projects. I am deeply disappointed to read that the accepted practice favors installing a copy of every shared library into each project. So, if you have ten web applications, all of which use, e.g., httpcomponents-client, mysql-connector-java, etc., then your installation contains ten copies of each.
This behavior reminds me, painfully, of the way of thinking that motivated me to abandon the mainframe in favor of the PC; the thinking seemed to be "I don't care how many resources my application consumes. In fact, I'd like to be able to brag about what a resource hog it is." Excuse me, please, while I hurl.
The interface exposed by a library is an immutable contract that is not subject to change at the developer's whim.
There is this concept called backwards compatibility. If you break it, you create a new interface.
I know of at least two types of interfaces that adhere to the letter and spirit of these rules.
By far the oldest is the IBM System/370 system libraries. You might have Foo and Foo2, where the latter extends and/or breaks the contract made by the Foo interface in some way that made it incompatible.
From its beginnings in the Bell Labs Unix project, the standard C runtime library has adhered to the above rules.
Though it is much newer, the Microsoft COM interface specification enforces the same rule.
To their credit, Microsoft generally adheres to those rules in the Win32 API, too, although there are a handful of exceptions in that API. To a degree, they went backwards with the .NET Framework, which seems slavishly to follow in the footsteps of the Java environment that it so eagerly seeks to replace.
I've been using libraries since 1978, and my understanding was and is that the goal of putting code into a library was to make it reusable. While maintaining copies of the library code in each application eliminates the need to implement it again for each new project, it severely complicates upgrading, since you now have ten (or more) copies of the library, each of which must be updated.
If libraries adhere to the rule that an interface is an immutable contract, why shouldn't they live in a shared library directory, as the Unix system libraries do in /lib, from which everything that runs on the host shares a single copy of the standard C runtime library, zlib, and so forth?
Color me seriously disappointed.
Our current app runs in a single JVM.
We are now splitting up the app into separate logical services where each service runs in its own JVM.
The split is being done to allow a single service to be modified and deployed without impacting the entire system. This reduces the need to QA the entire system - just need to QA the interaction with the service being changed.
For inter service communication we use a combination of REST, an MQ system bus, and database views.
What I don't like about this:
REST means we have to marshal data to/from XML
DB views couple the systems together which defeats the whole concept of separate services
MQ / system bus is added complexity
There is inevitably some code duplication between services
With n JBoss server configurations, we have to do n deployments, maintain n setup scripts, etc., etc.
Is there a better way to structure an internal application to allow modular development and deployment while allowing the app to run in a single JVM (and achieving the associated benefits)?
I'm a little confused as to what you're really asking here. If you split your application up into different services running across the network, then data marshalling has to occur somewhere.
Having said that, have you investigated OSGi? You can deploy different bundles (basically, jar files with additional metadata defining the interfaces) into the same OSGi server, and the server will facilitate communication between these bundles transparently, since everything is running within the same JVM - i.e. you call methods on objects in different bundles as you would normally.
An OSGi server will permit unloading and upgrades of bundles at runtime and applications should run normally (if in a degraded fashion) provided the OSGi bundle lifecycle states are respected.
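A minimal sketch of what a bundle looks like (the `GreeterService` names are placeholders, not from the question): one bundle registers a service, and any other bundle in the same JVM can look it up and call it like a plain Java object, with no marshalling involved.

```java
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceRegistration;

interface GreeterService {
    String greet(String name);
}

class GreeterImpl implements GreeterService {
    public String greet(String name) { return "Hello, " + name; }
}

// The bundle's entry point, named in the MANIFEST.MF Bundle-Activator header.
public class Activator implements BundleActivator {
    private ServiceRegistration<GreeterService> registration;

    @Override
    public void start(BundleContext ctx) {
        // Publish the service; other bundles obtain it via a ServiceReference
        // (or declarative services) and call it directly.
        registration = ctx.registerService(GreeterService.class, new GreeterImpl(), null);
    }

    @Override
    public void stop(BundleContext ctx) {
        registration.unregister(); // runs on bundle unload or upgrade
    }
}
```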
It sounds like your team has a manual QA process and the real issue is automating regression tests so that you can deploy new releases quickly and with confidence. Breaking up the code into separate servers is a workaround for that.
If you're willing to restart the server then one approach might be to compile the code into separate jar files, and deploy a module by dropping in a new jar and restarting. This is largely a matter of structuring your code base so that bad dependencies don't creep in and the calls between jars are made via interfaces that don't change. (Or alternately, use abstract classes so you can add a new method with a default implementation.) Your build system could help by making sure that separately deployed modules can only depend on common interfaces and anything else is a compile error. But note that your compiler isn't going to help you detect incompatibilities when you're swapping in jars that you didn't compile against, so I'm not sure this really avoids having a good QA process.
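A hedged sketch of that jar-per-module structure using the JDK's `ServiceLoader` (the `ReportModule` interface and names are made up): the host only compiles against the stable interface, and each separately built jar advertises its own implementation.

```java
import java.util.ServiceLoader;

// Stable contract shared by host and modules; lives in a common API jar.
interface ReportModule {
    String name();
    void run();
}

public class ModuleHost {
    public static void main(String[] args) {
        // Each module jar lists its implementation class in
        // META-INF/services/ReportModule; dropping in a rebuilt jar and
        // restarting picks it up without recompiling the host.
        for (ReportModule module : ServiceLoader.load(ReportModule.class)) {
            System.out.println("running " + module.name());
            module.run();
        }
    }
}
```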
If you want to deploy new code without restarting the JVM, then OSGi is the standard way to do that. (But it's one that I know little about.)