Manage a WebSphere shared library without OS access - java

We have two JEE applications in our WebSphere 8.5.x environment that both depend on a common JAR. Without going into too much detail, suffice it to say that the JAR needs to be set up as a shared library and attached at the server level. I would like to know how to manage this shared library without having OS access to update the file, instead updating through the WAS console.
As I understand it, for a WAS shared library you need to have placed it somewhere on the OS before you can set up the shared library reference since you have to provide the path to the e.g. JAR file.
I would like to be able to complete a deployment, replacing all binaries including the shared library, without having to remote into the OS and update the JAR on disk. I would prefer a method where someone using the WAS console could update the shared library on disk much like they can upload a new application.
I've looked into a WAS "Asset" (WAS Console -> Applications -> Application Types -> Assets), with the hope that this would allow me to manage and upload the JAR file through the console. I am able to upload the JAR as an asset and can see it when it lands on disk, but I don't understand what to expect from this and am loath to use it without knowing everything that's going on.
I see the "Asset binaries destination URL" option but that doesn't seem to do anything. I can put anything I want in that field when importing the asset and it always goes to "${APP_SERVER_ROOT}/config/cells/${CELL_NAME}/assets/${ASSET_NAME}/aver/BASE/bin/" on the management node, not the worker node. This location is also the configuration repository, and I get a funny feeling mapping directly into a repository location like this.
I could, I suppose, create the shared library with a path directly into that location but I am concerned that I'm missing something and that this isn't a good idea.
Does anyone have any insight into this they could share?

To use a shared library with WebSphere you can go to Environment > Shared Libraries and upload/configure a shared library on that page (including choosing which applications you want to associate with the shared library).
For full info, see IBM's doc: Creating shared libraries

You could try to use Business level applications and upload jars as asset, specify them as shared libraries and map during application installation (however I've never done that and it might not work). Check the following links:
Business level applications
Creating business-level applications with the console

Related

Custom classloader, JSP execution and resource retrieval inside webapp

Due to project requirements, I need to create a webapp that, when running, will allow some users to upload zip files which are like small apps and will contain .class files, resources (images, css, js, ...) and even lib files. That zip file is almost like a war file.
Is there any way to code this easily? I think I know how to code the custom ClassLoader to load classes from inside the zip file ( Java - Custom ClassLoader - trying to load a class using class file full path ) and even code the resource retrieval when the browser requests it, but I have no idea how to execute JSP files that are inside the zip file or how to load the JAR lib files inside the zip file.
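For reference, the kind of ClassLoader code I have in mind is roughly this (a minimal sketch with illustrative names; the lib JARs inside the zip would still need to be added as extra URLs):

import java.io.File;
import java.net.URL;
import java.net.URLClassLoader;

public class ZipAppLoader {
    // Load a named class from an uploaded zip that is laid out like a jar
    // (compiled .class files in package folders at the root of the archive).
    public static Class<?> loadFromZip(File uploadedZip, String className) throws Exception {
        URLClassLoader loader = new URLClassLoader(
                new URL[] { uploadedZip.toURI().toURL() },
                Thread.currentThread().getContextClassLoader()); // parent: the "master" webapp's loader
        return Class.forName(className, true, loader);
    }
}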
EDIT: the webapp must manage the loaded applications; there is no way to implement this as answered below because the uploaded apps need the "master" webapp to live. That "master" webapp also handles versioning of the applications: the user will be able to upload a new version and upgrade to it, and even downgrade if the new version starts to fail.
There is no easy way to do this. It's a lot of work. Classloaders are very finicky beasts. Arguably the bulk of the work of creating something like Tomcat is wrangling the class loaders, the rest is just configuration. And even after all these years, we still have problems.
Tomcat, for example, is very aggressive on how it tries to unload existing webapps, using internal information of the Java class libraries to try and hunt down places for class loader leaks, etc. And despite their efforts, there's still problems.
The latest version of Glassfish has (or will have) the ability to version application deployments. You might have good luck simply hacking on Tomcat's internal routing and mapping code to manage versioning.
If you're running an EJB container, you could put your core services in the EJBs and let the WARs talk to them (you could do this with web services in a generic servlet container, but many EJB containers can convert Remote semantics in to Local semantics for calls in to the same container).
You can also look at OSGi. That's another real pain to manage, but it might have enough granularity to even give you versioning, though none of your users will want to use it. Did I mention it's a real pain to manage? We do this for dynamic loading of web content and logic, but we don't version this.
If you must have everything under control of a single WAR, then your best bet is to punt on Java and instead use a scripting language. You tend to have a bit more control over the runtime of a scripting environment, particularly if you DON'T let them access arbitrary Java classes.
With this you can upload whatever payload you want, handle all of the dispatch yourself to static resources and logic (which means you get to handle the versioning aspect). Use something like Velocity for your "JSP" pages, and then use Javascript or whatever for logic.
The versioned environment can be a pain to pull off. If you don't care about doing it atomically, it's obviously easier. If you can afford "down time" (bring v1 offline, then bring up v2), it's a lot easier. If you're uploading the full contents of each version, it's really easy. My system allowed incremental changes and had copy-on-write semantics, so it was a lot harder. But I didn't really want to upload several GB of media for each version.
The basic takeaway is that when dealing with classloaders, there be dragons -- nothing is easy with those, and there are alternatives that actually get code into production rather than creating scars and pissed off dragons. Using a scripting language simplifies that immensely. All the rest is dispatch, and that can be done with a filter or servlet.
You WILL get the great joy of reimplementing a solid chunk of the HTTP protocol doing this; that's always a treat, since the servlet container doesn't really expose that functionality to you. That is, you'll want to do that if you want to be a good citizen on the web. You could always just continually shove content down the client's throat, caching and proxies be damned.
You could manually create a WAR-like structure inside your web container's webapps directory and put classes, JARs and JSPs there.
Given that hot redeployment is enabled in your web container, it will automatically assign a separate classloader to this new web application when it finds it.
In most cases web containers consider any folder having a WEB-INF subfolder containing a valid web.xml file to be a web application. You can restrict access to this new webapp by modifying its context configuration, located in META-INF/context.xml in case of Tomcat.
Controlling hot redeployment, classloader policies, etc. depends on the type of your web container, but I hope yours is no worse than Tomcat, which can handle all of that.

Java EE EAR shared location of read/write resources within clustered environment

Within a Java EE environment (it happens to be WAS 6.1 but could be any application server) I need to place an XML file, which is a configuration file, so that I can read and write to it.
This needs to be available in a clustered environment so I am looking at using the class path to load the file.
I am thinking I can store this file in the EAR root, reference it in the manifest and then load and save it.
I have tried this approach by having my file in a JAR and making this available via the MANIFEST, and I can load the config file from the classpath with no problem using the following.
this.getClass().getClassLoader().getResourceAsStream("configFileName");
That loads the file that is in the JAR, which is fantastic. But if I want to edit this file programmatically, I cannot access the JAR file's location (the EAR root); it returns an interpreted path like this:
/usr/IBM/WebSphere/AppServer/profiles/AppSrv01/installedApps/localhostNode01Cell/MyApp.ear/MyApp.war/TB_config.jar
That is not the correct location of the JAR; the correct location is at the root of MyApp.ear.
So the question is: how can I access and update (copy contents, create new, save, delete old) the JAR with my config file?
Or should I put the config file somewhere else?
What is the standard Java EE way to make files that need read/write access available to WARs in a cluster?
OK, I have built a solution for this. It is more WebSphere-based (our platform) but it is J2EE, and I am surprised it was not mentioned. Basically I have used JMX to synchronise the nodes. The files are stored on, and saved to, the deployment manager; the nodes are then resynchronised using JMX calls, and then the engines within the applications are restarted by calling servlets within the applications.
It works like a dream.
So #stacker, the nodes are managed, and the manager distributes the files to the nodes.
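For anyone curious, the JMX piece is roughly along these lines (a sketch only; the connection details are illustrative and you should check the NodeSync MBean documentation for your WAS version):

import java.util.Properties;
import java.util.Set;
import javax.management.ObjectName;
import com.ibm.websphere.management.AdminClient;
import com.ibm.websphere.management.AdminClientFactory;

public class NodeSyncExample {
    public static void syncNode(String dmgrHost, String soapPort, String nodeName) throws Exception {
        Properties props = new Properties();
        props.setProperty(AdminClient.CONNECTOR_TYPE, AdminClient.CONNECTOR_TYPE_SOAP);
        props.setProperty(AdminClient.CONNECTOR_HOST, dmgrHost); // deployment manager host
        props.setProperty(AdminClient.CONNECTOR_PORT, soapPort); // SOAP connector port, e.g. "8879"
        AdminClient client = AdminClientFactory.createAdminClient(props);

        // Find the NodeSync MBean for the target node and ask it to resynchronise
        ObjectName query = new ObjectName("WebSphere:type=NodeSync,node=" + nodeName + ",*");
        Set beans = client.queryNames(query, null);
        if (!beans.isEmpty()) {
            client.invoke((ObjectName) beans.iterator().next(), "sync", null, null);
        }
    }
}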
The problem that you've hit is not unique. A lot of Java EE programmers struggle with providing a "configurable" property file to administrators of a cluster. And the solution that you've chosen, well, has its limitations.
The problem with embedding a config file inside a JAR is getting at the absolute (physical) path of the file when you need to update it. If your container will not explode your EAR and WAR files, then placing the config file alongside the code is a bad idea - the administrator will have to deploy a newer version of the EAR/WAR/JAR. That is unless, of course, you can configure the container to explode the artifacts - WebLogic Server does this; I'm not sure about WAS.
There are several ways to resolve this problem:
Store the config file in a SAN that is accessible to all the nodes in the cluster via a 'canonical' path. That way, you could locate the file from any node in the cluster and update it. Remind yourself to restrict access to this directory. Although this sounds simple, it need not be - Java objects might have to be 'flushed' across nodes, once the configuration file has been updated. Moreover, you might have to cater to the scenario where property files can get edited outside the application.
Use a database. Much simpler and almost hassle-free, except that the Java objects might have to be flushed again.
Use an MBean. As good as a database, except that I haven't known a lot of people vouching for the MBean support in WAS. Also, I'm not really sure if object states can go haywire across a cluster in this case.
You cannot write to an EAR file; you should place the XML file in the DB as a text LOB (large object).
Actually, as I am using WebSphere, it appears I can use the dynamic cache provided by the WebSphere deployment manager. The last chapter in the link below discusses using the Dynamic Cache to provide a shared object in a cluster. The configuration file is XML that the application's engine parses into a Document object, so it is a Java object and can be placed into the DistributedMap.
Looks like a clean solution. Thanks all for reading and your replies.
http://www.ibm.com/developerworks/websphere/library/techarticles/0606_zhou/0606_zhou.html
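A minimal sketch of what the DistributedMap usage might look like (the JNDI name below is WebSphere's default instance, and cross-member visibility depends on your cache replication settings; verify both for your own cell):

import javax.naming.InitialContext;
import org.w3c.dom.Document;
import com.ibm.websphere.cache.DistributedMap;

public class ConfigCache {
    public static void shareConfig(Document parsedConfig) throws Exception {
        // Look up the default DistributedMap instance provided by the dynamic cache service
        DistributedMap cache = (DistributedMap) new InitialContext()
                .lookup("services/cache/distributedmap");
        cache.put("appConfig", parsedConfig);
    }

    public static Document readConfig() throws Exception {
        DistributedMap cache = (DistributedMap) new InitialContext()
                .lookup("services/cache/distributedmap");
        return (Document) cache.get("appConfig");
    }
}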

How to configure Classpath in Websphere application server?

I need to add the log4j JAR to the classpath of the WAS server but I am unable to get it picked up. Please suggest how.
I tried adding this JAR in the start script of the WAS server.
As Michael Ransley mentioned, you need to determine who needs log4j. If it is a web application, then WEB-INF/lib is the best location.
If it is used by EJB components, then place log4j as a utility JAR in the EAR.
Alternatively, create a Shared Library and associate the shared library to your application.
Another choice would be to associate the shared library to your server (instead of the application) in which case, it becomes available to all the applications that are running on that server.
Storing it in the app server's lib/ext or the other base classpath(s) is usually a bad idea. The reason is that this could cause conflicts (log4j does not cause conflicts, but other JARs likely could) and might prevent the application server from even starting up.
Also remember, depending on where the log4j.jar is kept (or associated via shared libraries) different class loaders would be picking up this JAR file.
From the Admin console, select Environment->Shared Libraries
Then in the page displayed, select New and follow the directions to add your library.
It depends why you want to add it. Do you need access to log4j from within your applications? If so, you can add it into the application (i.e. in the WEB-INF/lib directory). If you are writing a component that needs to run within the WebSphere runtime (e.g. a JMX library), then you can put it into WebSphere/AppServer/lib/ext.
If you have multiple webapps that need to share the same log4j.xml, you could drop it in IBM\WebSphere\PortalServer\shared\app\\
Otherwise, put it in WEB-INF/lib of your web app.
PROFILE_ROOT/properties
This folder is on the classpath, and it's used to store properties.
If you have different profiles, for example for test or integration, they may have different settings.

How do you manage embedded configuration files and libraries in java webapps?

I'm currently working on a J2EE project that's been in beta for a while now. Right now we're just hammering out some of the issues with the deployment process. Specifically, there are a number of files embedded in the war (some XML files and .properties) that need different versions deployed depending on whether you are in a dev, testing or production environment. Stuff like log levels, connection pools, etc.
So I was wondering how developers here structure their process for deploying webapps. Do you offload as much configuration as you can to the application server? Do you replace the settings files programmatically before deploying? Pick a version during build process? Manually edit the wars?
Also, how far do you go in providing dependencies through the application server's static libraries, and how much do you put in the wars themselves? All this just to get some ideas of what the common (or perhaps best) practice is at the moment.
I think that if the properties are machine/deployment specific, then they belong on the machine. If I'm going to wrap things up in a war, it should be drop-innable, which means nothing that's specific to the machine it's running on. This idea breaks if the war has machine-dependent properties in it.
What I like to do is build a project with a properties.example file; each machine has a .properties file that lives somewhere the war can access it.
An alternative way would be to have Ant tasks, e.g. dev-war, stage-war, prod-war, and have the sets of properties be part of the project, baked in during the war build. I don't like this as much, because you end up having things like file locations on an individual server as part of your project build.
I work in an environment where a separate server team performs the configuration of the QA and Production servers for our applications. Each application is generally deployed on two servers in QA and three servers in Production. My dev team has discovered that it is best to minimize the amount of configuration required on the server by putting as much configuration as possible in the war (or ear). This makes server configuration easier and also minimizes the chance that the server team will incorrectly configure the server.
We don't have machine-specific configuration, but we do have environment-specific configuration (Dev, QA, and Production). We have configuration files stored in the war file that are named by environment (ex. dev.properties, qa.properties, prod.properties). We put a -D property on the server VM's java command line to specify the environment (ex. java -Dapp.env=prod ...). The application can look for the app.env system property and use it to determine the name of the properties file to use.
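As a rough illustration of that pattern (a sketch; the "dev" default is just an assumption):

import java.io.InputStream;
import java.util.Properties;

public class EnvConfig {
    public static Properties load() throws Exception {
        // -Dapp.env=prod (for example) is set on the server's java command line
        String env = System.getProperty("app.env", "dev");
        Properties props = new Properties();
        try (InputStream in = Thread.currentThread().getContextClassLoader()
                .getResourceAsStream(env + ".properties")) {
            if (in == null) {
                throw new IllegalStateException("Missing " + env + ".properties on the classpath");
            }
            props.load(in);
        }
        return props;
    }
}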
I suppose if you have a small number of machine-specific properties then you could specify them as -D properties as well. Commons Configuration provides an easy way to combine properties files with system properties.
We configure connection pools on the server. We name the connection pool the same for every environment and simply point the servers that are assigned to each environment to the appropriate database. The application only has to know the one connection pool name.
Regarding configuration files, I think Steve's answer is the best one so far. I would add the suggestion of making the external files relative to the installation path of the war file - that way you can have multiple installations of the war on one server with different configurations.
e.g. If my dev.war gets unpacked into /opt/tomcat/webapps/dev, then I would use ServletContext.getRealPath to find the base folder and war folder name, so then the configuration files would live in ../../config/dev relative to the war, or /opt/tomcat/config/dev for absolute.
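A small sketch of that derivation (assuming the container explodes the war, so getRealPath returns a real directory):

import java.io.File;
import javax.servlet.ServletContext;

public class ConfigLocator {
    // For a war unpacked at /opt/tomcat/webapps/dev this resolves to /opt/tomcat/config/dev
    public static File configDir(ServletContext ctx) {
        String realPath = ctx.getRealPath("/"); // may be null if the war is not exploded
        if (realPath == null) {
            throw new IllegalStateException("getRealPath returned null; war is not exploded");
        }
        File warDir = new File(realPath);                            // .../webapps/dev
        File containerHome = warDir.getParentFile().getParentFile(); // two levels up, e.g. /opt/tomcat
        return new File(containerHome, "config/" + warDir.getName());
    }
}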
I also agree with Bill about putting as little as possible in these external configuration files. Use the database or JMX, depending on your environment, to store as much as it makes sense to. Apache Commons Configuration has a nice object for handling configurations backed by a database table.
Regarding libraries, I agree with unknown about having all the libs in the WEB-INF/lib folder in the war file (self-packaged). The advantage is that each installation of the application is autonomous, and you may have different builds of the war using different versions of the libraries concurrently.
The disadvantage is that it will use more memory as each web application will have its own copy of the classes, loaded by its own class loader.
If this poses a real concern, then you could put the jars in the common library folder for your servlet container ($CATALINA_HOME/lib for tomcat). All installations of your web application running on the same server have to use the same versions of the libraries though. (Actually, that's not strictly true as you could put overriding versions in the individual WEB-INF/lib folder if necessary, but that's getting pretty messy to maintain.)
I would build an automated installer for the common libraries in this case, using InstallShield or NSIS or equivalent for your operating system. Something that can make it easy to tell if you have the most up to date set of libraries, and upgrade, downgrade, etc.
I usually make two properties files:
one for app specifics (messages, internal "magic" words) embedded in the app,
the other for environment specifics (db access, log levels & paths...) exposed on each server's classpath and left in place (not delivered with my app). Usually I "mavenise" or "anttise" this one to put in specific values, depending on the target env.
Cool guys use JMX to maintain their app conf (conf can be modified in realtime, without redeploying), but it's too complex for my needs.
Server (static?) libraries: I strongly discourage the use of server libraries in my apps, as it adds a dependency on the server:
IMO, my app must be "self-packaged": drop my war, and that's all. I have seen wars with 20 MB of jars in them, and that doesn't bother me.
A common best practice is to limit your external dependencies to what is offered by the J2EE dogma: the J2EE API (use of Servlets, EJBs, JNDI, JMX, JMS...). Your app has to be "server agnostic".
Putting dependencies in your app (war, ear, whatever) is self-documenting: you know what libraries your app depends on. With server libs, you have to clearly document these dependencies as they are less obvious (and soon your developers will forget this little magic).
If you upgrade your app server, chances are that the server lib you depend on will also change. App server editors are not supposed to maintain compatibility of their internal libs from version to version (and most of the time, they don't).
If you use a widely-used lib embedded in your app server (jakarta commons logging, aka JCL, comes to mind) and want to upgrade its version to get the latest features, you take the huge risk that your app server will not support it.
If you rely on a static server object (in a static field of a server class, e.g. a Map or a log), you'll have to reboot your app server to clean up this object. You lose the ability to hot-redeploy your app (the old server object will still exist between redeployments). Using appserver-wide objects (other than those defined by J2EE) can lead to subtle bugs, especially if such an object is shared between multiple apps. That's why I strongly discourage the use of objects which reside in a static field of an app server lib.
If you absolutely need "this object in this appserver's jar", try to copy the jar into your app, hoping there's no dependency on other server jars, and check your app's classloading policy (I am in the habit of putting a "parent last" classloading policy on all my apps: I'm sure I won't be "polluted" by the server's jars - but I don't know if it is a "best practice").
I put all configuration in the database. The container (Tomcat, WebSphere, etc) gives me access to the initial database connection and from then on, everything comes out of the database. This allows for multiple environments, clustering, and dynamic changes without downtime (or at least without a redeploy). Especially nice is being able to change the log level on the fly (although you'll need either an admin screen or a background refresher to pick up the changes). Obviously this only works for things that aren't required to get the app started, but generally, you can get to the database pretty quickly after startup.
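A rough sketch of that approach (the datasource JNDI name and table layout here are made up for illustration):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.Properties;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class DbConfig {
    public static Properties load() throws Exception {
        // The container provides the initial connection via a configured DataSource
        DataSource ds = (DataSource) new InitialContext().lookup("jdbc/AppDS"); // illustrative JNDI name
        Properties config = new Properties();
        try (Connection c = ds.getConnection();
             PreparedStatement ps = c.prepareStatement("SELECT name, value FROM app_config"); // illustrative table
             ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                config.setProperty(rs.getString("name"), rs.getString("value"));
            }
        }
        return config;
    }
}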

What is the proper way to store app's conf data in Java?

Where do you store user-specific and machine-specific runtime configuration data for J2SE application?
(For example, C:\Users\USERNAME\AppData\Roaming on Windows and /home/username on Unix)
How do you get these locations in the filesystem in platform-independent way?
First on the format:
Java property files are good for key/value pairs (they also automatically handle newline chars). A degree of structure is possible by using 'dot notation'. The drawback is that the structure does not allow you to easily enumerate top-level configuration entities and work in a drill-down manner. Best used for a small set of often-tweaked, environment-specific settings.
XML files - quite often used for more complex configuration of various Java frameworks (notably J2EE and Spring). I would advise that you at least learn about Spring - it contains many ideas worth knowing even if you decide not to use it. If you decide to roll your own XML configuration, I'd recommend using XStream with customized serialization options, or if you just need to parse some XML, take a look at XOM. BTW Spring also allows you to plug in your custom XML configuration language, but it's a relatively complex task. XML configuration is best used for more complex 'internal' configuration that is not seen or tweaked by the end user.
Serialized Java objects - a quick and easy way to persist the state of an object and restore it later. Useful if you write a configuration GUI and you don't care if the configuration is human readable. Beware of compatibility issues when you evolve classes.
Preferences - introduced in Java 1.4, allow you to store typed text, numbers, byte arrays and other primitives in platform-specific storage. On Windows, that is the registry (you can choose between /Software/JavaSoft/Prefs under HKLM or HKCU). Under Unix the same API creates files under the user home or /etc. Each prefs hive can be exported and imported as an XML file. You can specify a custom implementation of the PreferencesFactory interface by setting the "java.util.prefs.PreferencesFactory" JVM property to your implementation class name.
In general using the prefs API can be a good or a bad thing based on your app scenario.
If you plan to have multiple versions of the same code running on the same machine with different configuration, then using the Preferences API is a bad idea.
If you plan using the application in a restricted environment (Windows domain or tightly managed Unix box) you need to make sure that you have proper access to the necessary registry keys/directories. This has caught me by surprise more than once.
Beware of roaming profiles (replicated home dirs); they make for some funny scenarios when more than one active machine is involved.
Preferences are not as obvious as a configuration file under the application's directory. Most desktop support staff don't expect them and don't like them.
Regarding the file layout of the prefs it again depends on your application. A generic suggestion is:
Package most of your XML files inside application's JAR either in the root or under /META-INF directory. These files will be read-only and are considered private for the application.
Put the user-modifiable configuration under $APP_HOME/conf. It should consist mainly of properties files and occasionally a simple XML file (XStream serialization). These files are tweaked as part of the installation process and are usually not user-serviceable.
Under the user home, in a dot-directory (i.e. '~/.myapplication'), store any user configuration. The user configuration may override the one in the application conf directory (a sketch of this override order follows the list). Any changes made from within the application go here (see also the next point).
You can also use an $APP_HOME/var directory to store any other mutable data which is specific to this application instance (as opposed to the user). Another advantage of this approach is that you can move and back up the whole application and its configuration by a simple copy of one directory.
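A minimal sketch of the override order described above (directory and file names are illustrative):

import java.io.File;
import java.io.FileInputStream;
import java.io.InputStream;
import java.util.Properties;

public class LayeredConfig {
    public static Properties load(File appHome) throws Exception {
        Properties conf = new Properties();
        // 1. Application defaults shipped under $APP_HOME/conf
        File base = new File(appHome, "conf/app.properties");
        if (base.exists()) {
            try (InputStream in = new FileInputStream(base)) { conf.load(in); }
        }
        // 2. Per-user overrides in the dot-directory under the user home
        File user = new File(System.getProperty("user.home"), ".myapplication/app.properties");
        if (user.exists()) {
            try (InputStream in = new FileInputStream(user)) { conf.load(in); } // the later load wins per key
        }
        return conf;
    }
}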
This illustrates some standard techniques for managing configuration. You can implement them using different libraries and tools, starting from raw JRE, adding Spring/Guice or going for a full J2EE container (possibly with embedded Spring)
Other approaches for managing configuration are:
Using multiple base directories for running multiple instances of the application using different configurations.
Using lightweight registries for centralized configuration management
A centrally managed Configuration Management Database (CMDB) file, containing the host-specific values for each machine is rsync-ed every night to all production hosts. The application uses a templated configuration and selects from the CMDB during runtime based on the current hostname.
That depends on your kind of J2SE Application:
J2SE executable JAR file (very simple): use the user.home system property to find the home dir, then make a subdir accordingly (like e.g. PGP, SVN, ... do); see the sketch after this list.
Java Web Start provides very nice built-in methods to save properties; always user-specific.
Finally, Eclipse RCP: there you have the notion of the workspace (also derived from user.home) for users, and the configuration area (not totally sure how to access that; it's tricky on Vista) for computer-wide usage.
All these approaches are, when used with care -- use correct separatorChar -- OS neutral.
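The user.home approach from the first item might look roughly like this (the directory name is just an example):

import java.io.File;

public class UserConfigDir {
    public static File get() {
        // A per-user, per-application dot-directory, in the style of PGP or SVN
        File dir = new File(System.getProperty("user.home"), ".myapp");
        if (!dir.exists() && !dir.mkdirs()) {
            throw new IllegalStateException("Could not create " + dir);
        }
        return dir;
    }
}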
Java has a library specifically for doing this in java.util.prefs.Preferences.
Preferences userPrefs = Preferences.getUserNodeForPackage(MyClass.class); // Gets user preferences node for MyClass
Preferences systemPrefs = Preferences.getSystemNodeForPackage(MyClass.class); // Gets system preferences node for MyClass
Preferences userPrefsRoot = Preferences.getUserRoot(); // Gets user preferences root node
Preferences systemPrefsRoot = Preferences.getSystemRoot(); // Gets system preferences root node
I use this
String OS = System.getProperty("os.name").toLowerCase(); // detect the operating system
String pathFile = null;
if (OS.contains("win")) {
    pathFile = System.getenv("AppData");        // the AppData folder on Windows
} else {
    pathFile = System.getProperty("user.home"); // the user's home directory elsewhere
}
I save the settings of my application here
C:\Users\USERNAME\AppData\ on Windows
user.home (/home/USERNAME) on other platforms
For user specific config, you could write a config file to the folder pointed to by the "user.home" system property. Would only work on that machine of course.
You might want to look at Resource Bundles.
