How to hold data for a jar used as a library, with cluster awareness - java

I am creating a library (a Java jar file) to provide a solution to a problem. The library is mainly targeted at web applications (Java EE applications) and can be used with Spring and other frameworks.
The target Java EE application will be deployed in a clustered environment. Users will use the library by adding it to the application classpath.
The library depends on some configuration, which is packaged inside the library (jar) itself and used at run time.
The configuration can be modified at run time.
Because the library targets clustered environments, any modification to the configuration must be replicated to all nodes of the cluster.
As I understand it, there are two ways to hold the configuration for use at run time (I am not sure, correct me if I am wrong):
1. Store the configuration in a file
2. Store the configuration in a database
In the first approach (store the configuration in a file):
There is a property file in the library holding the initial configuration.
At server start-up, the configuration from the property file is copied to a file (abc.xml) at a physical location on the server.
A set of APIs performs CRUD operations on the abc.xml file in the user's home location.
From then on, the abc.xml file is used every time.
In this approach holding the data is possible, but I do not see how a modification would be propagated to all nodes of the cluster.
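To make the first approach concrete, here is a minimal sketch of such a file-backed store. The class name and the use of a properties file (rather than the abc.xml from the question) are my own illustration, not part of the original design:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.Properties;

public class FileConfigStore {

    private final Path externalFile;

    public FileConfigStore(Path externalFile) {
        this.externalFile = externalFile;
    }

    /** On first start, copy the defaults bundled inside the jar to the external location. */
    public void initFromClasspath(String resource) throws IOException {
        if (Files.notExists(externalFile)) {
            Path parent = externalFile.getParent();
            if (parent != null) {
                Files.createDirectories(parent);
            }
            try (InputStream in = FileConfigStore.class.getResourceAsStream(resource)) {
                if (in == null) {
                    throw new IOException("missing bundled defaults: " + resource);
                }
                Files.copy(in, externalFile, StandardCopyOption.REPLACE_EXISTING);
            }
        }
    }

    /** Read the current configuration from the external file. */
    public Properties load() throws IOException {
        Properties props = new Properties();
        try (InputStream in = Files.newInputStream(externalFile)) {
            props.load(in);
        }
        return props;
    }

    /** Update a single key and persist (the "U" of the CRUD API). */
    public void put(String key, String value) throws IOException {
        Properties props = load();
        props.setProperty(key, value);
        try (OutputStream out = Files.newOutputStream(externalFile)) {
            props.store(out, "library configuration");
        }
    }
}
```

Note that this only solves single-node storage; it does nothing about replication, which is exactly the gap described above.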
In the second approach (store the configuration in a database table):
The SQL scripts for the required tables are published together with the toolkit (jar file).
The user has to create the tables using those scripts.
There is a property file in the library holding the initial configuration.
At server start-up, the configuration from the property file is copied to the database.
A set of APIs performs CRUD operations on the database.
Whenever the configuration is modified, all nodes of the cluster can be updated with the latest data using a third-party tool (Hazelcast or something else).
In my analysis I found that Quartz uses the database approach to hold its configuration.
So when one downloads the Quartz distribution, it also contains the SQL scripts to create the tables that Quartz itself uses.
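For illustration, a schema script shipped alongside such a library could look like this (the table and column names are invented; Quartz's own scripts define its scheduler tables, which are more elaborate):

```sql
-- hypothetical schema shipped with the jar; the user runs this once per database
CREATE TABLE LIB_CONFIG (
    CONFIG_KEY   VARCHAR(128)  NOT NULL PRIMARY KEY,
    CONFIG_VALUE VARCHAR(1024),
    UPDATED_AT   TIMESTAMP     NOT NULL
);
```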
I want to know the standard design practices for holding configuration in a library (jar) and the factors that need to be considered in such cases.

There are other solutions as well. Use a cluster-aware caching technology such as Ehcache, Apache JCS, or Hazelcast, and use the cache API to retrieve the configuration data from the library. You could add a listener within your library that watches the configuration file and updates the cache.
If you are planning to use solution 1, you could set up a listener within your library which listens to the configuration file and updates the server copy whenever there is a change; the same goes for solution 2. But if I were in your situation, I would rather use a caching technology for frequently accessed data such as configuration. The advantage is that I would not have to update the configuration on every node myself, because the cache replicates itself.
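A minimal, dependency-free sketch of the listener idea: reload the file whenever its modification time changes. In a cluster you would publish the reloaded values into a replicated cache (e.g. a Hazelcast or Ehcache map) instead of a local Properties object; the class name here is invented for illustration:

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.FileTime;
import java.util.Properties;

/**
 * Reloads the configuration file whenever its modification time changes.
 * In a clustered setup, get() would read from a replicated cache instead
 * of this local Properties copy, so an update on one node becomes
 * visible on all of them.
 */
public class ReloadingConfig {

    private final Path file;
    private FileTime lastSeen;            // null until the first read
    private Properties current = new Properties();

    public ReloadingConfig(Path file) {
        this.file = file;
    }

    public synchronized String get(String key) throws IOException {
        FileTime modified = Files.getLastModifiedTime(file);
        if (!modified.equals(lastSeen)) { // file changed since last read
            Properties fresh = new Properties();
            try (InputStream in = Files.newInputStream(file)) {
                fresh.load(in);
            }
            current = fresh;
            lastSeen = modified;
        }
        return current.getProperty(key);
    }
}
```

Checking the timestamp on every read keeps the sketch simple; a real library would typically poll in a background thread or use java.nio.file.WatchService.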

Related

How to manage shared bootstrap data across multiple projects

I have 3 web projects that use the same database and the same models. These systems require partly the same bootstrap data in the database in order to run properly. All systems share library code that reads the data from the database and updates it according to the bootstrap data in the code (add new, remove unused, update changed). Every application performs this when it starts, and most of the time nothing needs to be done since the data is already correct. This data is also used by some of the integration tests.
The problem is that when some of the common data needs to change, all 3 applications need to be re-deployed with the new bootstrap data, because otherwise they will bootstrap with the old data if they are restarted (after a server reboot, for example).
I'm looking for the best way to manage shared bootstrap data for multiple projects.
You could create a plugin that contains a service doing what you need and include the plugin in all projects. Then simply call the plugin service from each bootstrap.
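The "add new, remove unused, update changed" logic that such a plugin service would run on startup can be sketched as follows (plain Maps stand in for the database here; in the shared plugin the same three steps would run against your DAO):

```java
import java.util.Map;

/**
 * Idempotent bootstrap sync: bring the stored data in line with the
 * bootstrap data defined in code. Running it twice is a no-op the
 * second time, which is why every application can safely call it
 * on every start.
 */
public class BootstrapSync {

    public static void sync(Map<String, String> stored, Map<String, String> bootstrap) {
        // remove unused: rows no longer present in the bootstrap definition
        stored.keySet().retainAll(bootstrap.keySet());
        // add new + update changed
        for (Map.Entry<String, String> e : bootstrap.entrySet()) {
            String existing = stored.get(e.getKey());
            if (!e.getValue().equals(existing)) {
                stored.put(e.getKey(), e.getValue());
            }
        }
    }
}
```

This addresses the code sharing, but not the original re-deployment problem: the bootstrap definition still lives in the code, so changing it still means releasing a new plugin version to all three applications.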

Is it best practice to design a Spring application in such a way that we need to create the same context multiple times?

I am going to start a new project using the Spring framework. As I don't have much experience with Spring, I need your help to sort out a few confusions.
Let's look at the use case.
My application uses the Spring Integration framework. The core functionality of my app is:
poll multiple directories on the file system,
read the files (mostly CSV),
run some processing on them and insert them into a database.
Currently I have set up a Spring Integration flow for this, which has an inbound-channel-adapter for polling; files then traverse the channels and are inserted into the database at the end.
My concerns are:
The number of directories the application is supposed to poll is decided at run time. Hence I need to create the inbound-channel-adapters at run time (as one channel adapter can poll only one directory) and can't define them statically in my Spring context XML (as I don't know how many I will need).
Each directory has certain properties which should be applied to the files while processing (while going through the integration flow).
So right now I am loading a new ClassPathXmlApplicationContext("/applicationContext.xml") for each directory, caching the required properties in that newly created context, and using them at processing time (in <int:service-activator>).
Drawbacks of the current design:
A separate context is created for each directory.
Beans are duplicated unnecessarily (database session factories and the like).
So is there any way to design the application such that the context is not duplicated, while I can still use each directory's properties throughout the integration flow?
Thanks in advance.
See the dynamic FTP sample and the links in its README about creating child contexts on demand, containing the new inbound components.
Also see my answer to a similar question about multiple IMAP mail adapters using Java configuration, and the follow-up question.
You can also use a message-source advice to reconfigure the FileReadingMessageSource on each poll to look at different directories. See Smart Polling.

JSF, website first setup and external configuration files

I have a website written in JSF, backed by a MySQL database, running on Tomcat 7. Now there is only one missing part: the project's first setup/installation. I want my war, when deployed for the first time, to offer an installation/first-time setup with the following steps:
Set up the database - enter the MySQL parameters needed to successfully connect to the MySQL server.
Write those parameters into some external file for further use (encrypted, of course).
Install the database - take a file with the SQL that creates all the tables in the database.
Create the first user, etc.
Delete the installation files.
Similar steps are used in PHP content management systems like Drupal. I know perfectly well how to work with files in Java. I also know that I can't change content inside a jar once it's deployed and running, so I have to put my files with the SQL and database parameters somewhere else.
My questions are:
Where can I put these configuration files to make them readable, and how?
Is there another way to achieve this goal? What is commonly used by Java developers?
Thank you for your answers.
You can use JPA (Java Persistence API), put all this configuration in persistence.xml, and set schema generation to create the tables. Things related to roles and users depend on the application server.
JPA uses ORM (object-relational mapping) to map between your objects (entities) and database tables.
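A persistence.xml sketch along those lines, assuming a JPA 2.1+ provider; the unit name and connection values are placeholders:

```xml
<!-- persistence.xml sketch (JPA 2.1+); unit name and JDBC details are placeholders -->
<persistence xmlns="http://xmlns.jcp.org/xml/ns/persistence" version="2.1">
  <persistence-unit name="setupUnit">
    <properties>
      <!-- create the tables from the entity mappings on deployment -->
      <property name="javax.persistence.schema-generation.database.action" value="create"/>
      <property name="javax.persistence.jdbc.url" value="jdbc:mysql://localhost:3306/mydb"/>
      <property name="javax.persistence.jdbc.user" value="app"/>
      <property name="javax.persistence.jdbc.password" value="secret"/>
    </properties>
  </persistence-unit>
</persistence>
```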

Coordinated save in database and disk in java

In my task I need to save a file on disk and update information about it in a database.
An exception can happen while saving the file or while updating the database.
Are there ready open-source solutions for this, or does it need to be written from scratch?
Thanks.
There's XADisk, which provides transactional access to file systems. From their web site:
XADisk (pronounced 'x-a-disk') enables transactional access to existing file systems by providing APIs to perform file/directory operations. With simple steps, it can be deployed over any JVM and can then start serving all kinds of Java/JavaEE applications running anywhere.
In Java, enterprise transaction management is governed by the JTA spec, which is used in Java EE.
JTA allows you to create several TransactionManagers with different implementations (one for the database, one for the file system) and make them work together in a single cross-resource transaction.
I think this could be a way for you to do what you want.
Outside of a container, it is also possible to integrate JTA; you should have a look at the Spring or JBoss implementations.
Look at this blog post for more information about Spring and transaction usage.
The file system isn't directly supported by Java EE, but you could implement or search for a resource adapter for the file system.
http://docs.oracle.com/javaee/6/tutorial/doc/gipgl.html
http://docs.oracle.com/javaee/6/tutorial/doc/giqjk.html
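If you do not want a transactional file system or full JTA, a common hand-rolled alternative is write-then-compensate: write the file first, then update the database, and delete the file again if the database step fails. A minimal sketch; the Db interface is a stand-in for your real DAO or JDBC transaction:

```java
import java.nio.file.Files;
import java.nio.file.Path;

/**
 * Hand-rolled coordination of a file write and a database update:
 * if the database step fails, the file write is compensated (undone)
 * so the two stores do not drift apart. This is weaker than a real
 * distributed transaction (a crash between the steps still leaves an
 * orphan file), but it covers the common exception cases.
 */
public class CoordinatedSave {

    public interface Db {
        void updateFileInfo(Path file) throws Exception; // e.g. JDBC update in a transaction
    }

    public static void save(Path file, byte[] content, Db db) throws Exception {
        Files.write(file, content);          // step 1: file on disk
        try {
            db.updateFileInfo(file);         // step 2: metadata in the database
        } catch (Exception e) {
            Files.deleteIfExists(file);      // compensate: undo step 1
            throw e;
        }
    }
}
```

For the leftover-orphan case (crash between the two steps), a periodic cleanup job that deletes files with no database row is the usual complement to this pattern.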

Syncing of files in a filesystem on a clustered environment

I have a Java-based CRUD service which allows creation, retrieval, update and deletion of files on the file system. This service can be deployed in a clustered environment.
Are there any design patterns or solutions which can help sync these files between the nodes of a cluster?
Can the folders be configured for sync?
Is there a chance (e.g. in the case of an update) that a user on one node will not get the updated file?
I am fine with solutions that are Tomcat-, WebSphere- or WebLogic-specific.
Thank you.
Unless you specifically want to code this yourself, why not use a distributed file system like NFS or, if you would like something Java-based, the Hadoop Distributed File System (HDFS)? More information can be found here.
