Best strategy to persist configuration variables of a JSP web application? - java

I'm implementing a web application using Java (JSP). Now I'm adding some configuration buttons to the application, and I need to know the best strategy for persisting their values.
For example, I need to add a switch which tells whether the main worker must be stopped after every 10 jobs with errors. It is a boolean variable called "safeStop", so I need to persist whether that config value is activated. The persisted value must survive even if the server is restarted, so keeping it in RAM is not enough; it must be persisted on disk.
This application will process thousands of jobs a day, so I need a safe and efficient way of persisting configuration.
I can't find any coherent information about the different strategies for doing this or which one is the best option. Any help or strategies would be welcome.
Thank you

I think the best practice would be to keep all the configuration variables in a database, and set a cookie on the client side if that data needs to be processed there. Once operations are performed, update the values on the server at session end, after a certain time or data volume, or when all processing operations are complete. This way you'll achieve good performance and store the information somewhere other than RAM, while still using RAM for the shorter transactional operations.
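For illustration, here is a minimal sketch of a key/value configuration store backed by a database table. The table name app_config and its two-column layout are assumptions, not anything from the question; connection handling is deliberately simplified (a real app would use a connection pool, and the UPDATE-then-INSERT pair below has a small race that an UPSERT would avoid).

import java.sql.*;

// Sketch of a config store backed by a hypothetical table:
//   app_config(name VARCHAR PRIMARY KEY, value VARCHAR)
public class ConfigStore {
    private final String url; // e.g. "jdbc:mysql://localhost/mydb"

    public ConfigStore(String url) {
        this.url = url;
    }

    public boolean getBoolean(String name, boolean defaultValue) throws SQLException {
        try (Connection c = DriverManager.getConnection(url);
             PreparedStatement ps = c.prepareStatement(
                     "SELECT value FROM app_config WHERE name = ?")) {
            ps.setString(1, name);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? Boolean.parseBoolean(rs.getString(1)) : defaultValue;
            }
        }
    }

    public void setBoolean(String name, boolean value) throws SQLException {
        try (Connection c = DriverManager.getConnection(url);
             PreparedStatement ps = c.prepareStatement(
                     // UPSERT syntax varies by database, so shown as two steps
                     "UPDATE app_config SET value = ? WHERE name = ?")) {
            ps.setString(1, Boolean.toString(value));
            ps.setString(2, name);
            if (ps.executeUpdate() == 0) {
                try (PreparedStatement ins = c.prepareStatement(
                        "INSERT INTO app_config (name, value) VALUES (?, ?)")) {
                    ins.setString(1, name);
                    ins.setString(2, Boolean.toString(value));
                    ins.executeUpdate();
                }
            }
        }
    }
}

Usage would then be something like new ConfigStore(url).setBoolean("safeStop", true), with the worker reading the flag back on startup.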

Related

Interprocess communication via a database

If I have two processes running on different nodes that share a database, is there a pattern that lets one process send a notification to the other via the database?
Is polling a table the usual approach, or is there a better way?
Instead of polling (which means burning not only CPU cycles but, in this case, also database resources and bandwidth), how about this: if you were using Oracle, you could define an ON UPDATE trigger for the table you want to be notified about and call a Java Stored Procedure from the trigger. The stored procedure could then use whatever notification mechanism you like to tell the other component about the change. This is not going to be extremely fast, but well...
The proper way would be to have the component that updates the database send a parallel notification to the other component, using any available technology for this: RMI, JMS, etc.
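As a hedged sketch of what the JMS variant might look like: after committing the database update, publish a message so the other component learns about the change without polling. The JNDI names ("jms/ConnectionFactory", "jms/Updates") are placeholders that depend on your broker or application server.

import javax.jms.*;
import javax.naming.InitialContext;

// Publishes a notification after a successful DB update.
public class UpdateNotifier {
    public void notifyUpdate(String recordId) throws Exception {
        InitialContext ctx = new InitialContext();
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/ConnectionFactory");
        Topic topic = (Topic) ctx.lookup("jms/Updates");

        Connection conn = cf.createConnection();
        try {
            Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(topic);
            TextMessage msg = session.createTextMessage(recordId);
            producer.send(msg); // subscribers wake up immediately
        } finally {
            conn.close();
        }
    }
}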
If you want to use a database, you can insert entries into a table on the producing side and poll to find new entries on the consuming side. This may be the simplest option for your project.
There are many possible alternatives, such as JMS, RMI, sockets, NoSQL databases, and files, but without more information it's not possible to tell whether these would be better. (Often simplest is best.)
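A minimal polling sketch along those lines, assuming a hypothetical events table with a monotonically increasing id column (both names are made up for illustration): the consumer remembers the highest id it has processed and periodically selects anything newer.

import java.sql.*;

public class EventPoller {
    private long lastSeenId = 0;

    public void pollOnce(Connection conn) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(
                "SELECT id, payload FROM events WHERE id > ? ORDER BY id")) {
            ps.setLong(1, lastSeenId);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    lastSeenId = rs.getLong("id"); // advance the high-water mark
                    handle(rs.getString("payload"));
                }
            }
        }
    }

    private void handle(String payload) {
        System.out.println("new event: " + payload);
    }
}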
Polling is not an optimal solution. If you have a large number of clients or users, the database is going to be kept busy answering the pollers.
Users blocking or waiting for an update is much preferable, if possible. Users generally prefer a responsive system.
The two main criteria to consider before deciding are the maximum number of concurrent users and how quickly users need to be notified of the event they have expressed an interest in.
A better solution than polling, if your database supports it, is an event mechanism in the spirit of inotify(). For instance, PostgreSQL supports LISTEN/NOTIFY, so a listener can wait for changes without a busy loop against the table. That being said, database-as-IPC is considered an anti-pattern.
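With the PostgreSQL JDBC driver, the listening side can look roughly like this. This is a sketch: the channel name config_changed is made up, and a real listener would handle reconnects and shutdown. Issuing a trivial query forces the driver to read any pending notifications off the socket, so nothing ever scans the table.

import java.sql.*;
import org.postgresql.PGConnection;
import org.postgresql.PGNotification;

public class PgListener {
    public static void main(String[] args) throws Exception {
        Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/mydb", "user", "password");
        try (Statement st = conn.createStatement()) {
            st.execute("LISTEN config_changed"); // writer runs NOTIFY config_changed
        }
        PGConnection pg = conn.unwrap(PGConnection.class);
        while (true) {
            try (Statement st = conn.createStatement()) {
                st.execute("SELECT 1"); // lets the driver pick up notifications
            }
            PGNotification[] notifications = pg.getNotifications();
            if (notifications != null) {
                for (PGNotification n : notifications) {
                    System.out.println("change on channel " + n.getName());
                }
            }
            Thread.sleep(500); // cheap local wait, no table query
        }
    }
}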
The simplest solution is just polling from the other process.
However, if you want the other process to receive the data change immediately, you should consider a notification mechanism such as RPC, an HTTP request, etc.

Java synchronization options for preventing duplicate orders (file, db locking?)

I have two use cases for placing an order on a website. One is an order submitted directly from the web front end with a credit card, and the other is a notification of an external payment from a processor like PayPal. In both situations, I need to ensure that the order is placed only once.
I would like to use the same mechanism for both scenarios if possible, to help with code reuse. In the first use case, the user can submit the order form multiple times, resulting in different threads trying to place the order. I can use Ajax to stop this, but I need a server-side solution for certainty. In the second use case, the notification messages may be delivered in duplicate, so I need to protect against that too.
I want the solution to be scalable across a distributed environment, so a memory lock is out of the question. I was looking at saving a unique token to the database to prevent multiple submissions there, but I really don't want to be messing with the existing database transactions. The real solution, it seems, is to lock on something external, like a file in a shared location across JVMs.
All orders have a unique long id, so I could use that to synchronize. What would be the best way of doing this? I could potentially create a file per id, or do something fancier with a region of the file. However I don't have much experience with file locking, so if there is a better option I would love to hear it. Any code samples would help very much.
If you already have a unique long id, nothing beats a simple database table with manually assigned primary keys. Every RDBMS (and also key-value NoSQL databases) will effectively and efficiently detect primary key clashes. It is basically:
1. Start transaction
2. INSERT INTO orders VALUES (your_unique_id)
3. Commit
Depending on the database, step 2 or 3 will throw an exception which you can easily catch, as in the sketch below.
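A hedged sketch of that insert-and-catch approach: the primary key on orders(id) makes the database reject a second attempt to place the same order, no matter which JVM or thread it comes from.

import java.sql.*;

public class OrderPlacer {
    /** Returns true if this call placed the order, false if it was a duplicate. */
    public boolean placeOrderOnce(Connection conn, long orderId) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO orders (id) VALUES (?)")) {
            ps.setLong(1, orderId);
            ps.executeUpdate();
            return true; // first submission wins
        } catch (SQLIntegrityConstraintViolationException e) {
            // Note: some drivers throw a plain SQLException for key violations;
            // in that case check the SQLState (class 23) instead.
            return false; // duplicate: the order was already placed
        }
    }
}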
If you really want to avoid databases (could you elaborate a little bit more why?), you can:
Use file locking (nasty and not scalable); don't go that way.
In-memory locking with clustering (with Terracotta it's like working with normal boolean that is magically clustered)
Queuing requests and having only a single consumer.
Using JMS and a single-threaded consumer looks promising; however, you still have to detect duplicates (though at least you avoid concurrently placed orders), and it might be terribly slow...

caching readonly data for java application

I have a database with around 150K records and a primary key on the table. Each record takes less than 1 kB. Constructing a POJO from a DB record takes about 1-2 seconds (there is some business logic that takes too much time). This is read-only data, so I'm planning to cache it. What I'm thinking of doing is: load the data in subsets (200 records each time) and create a thread that constructs the POJOs and keeps them in a hashtable. While the cache is being loaded (when I start the application) the user will see a wait sign. In case storing the data in a hashtable is an issue, I'll actually store the processed data in another DB table (marshalling the POJO to XML).
I use a third-party API to load the data from the database. Once I load a record, I have to load the associations for it, and then the associations of the associations found at the top level. It's like loading a family tree.
I can't use Hibernate or any ORM framework, as I'm using a third-party API to load the data which is shipped with the database itself (it's a product). Moreover, I don't think loading the data once is a big issue.
If there is a possibility to fine tune the business logic I wouldn't have asked this question here.
Caching the data on demand is an option, but I'm trying to see if I can do anything better.
If you are aware of a better idea, please suggest it. Thank you.
If you are aware of a better idea, please suggest it.
Yes, fix the business logic so that it doesn't take 1 to 2 seconds per record. That's a ridiculously long time.
Before you do that, profile your application to make sure that it is really the business logic that is causing the slow record loading, and not something else. (For example, it could be a pathological data structure, or a database issue.)
Once you've fixed the root cause of the slow record loading, it is still a good idea to cache the read-only records, but you probably don't need to preload the cache. Instead, just load the records on demand.
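A minimal on-demand cache along those lines: each record is built at most once, when first requested, instead of preloading all 150K up front. Record and loadAndBuild() are placeholders standing in for the third-party API and the business logic.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class RecordCache {
    private final Map<Long, Record> cache = new ConcurrentHashMap<>();

    public Record get(long id) {
        // computeIfAbsent runs the expensive construction only on a cache miss
        return cache.computeIfAbsent(id, this::loadAndBuild);
    }

    private Record loadAndBuild(long id) {
        // ... call the third-party API, walk associations, apply business logic
        return new Record(id);
    }

    public static class Record {
        final long id;
        Record(long id) { this.id = id; }
    }
}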
It sounds like you are reinventing the wheel. I'd look at using Hibernate. Apart from simplifying the code that accesses the database, Hibernate has built-in caching and lazy loading, so it only creates objects as you request them. Ergo, a lot of what you describe above is already in place, and you can concentrate on sorting out your business logic. I suspect that once you solve the business logic performance issue, there will be no need for such a complicated caching system, and Hibernate's defaults will be sufficient.
As maximdim said in a comment, preloading the whole thing will take a lot of time. If your system is not very strange, the user won't need all data at once. Just cache on demand instead. I would also recommend using an established caching solution, such as EHCache, which has persistence via DiskStore -- the only issue is that whatever you cache in this case has to be Serializable. Since you can marshall it as XML, I'm betting you can serialize it too, which should be faster.
In a past project, we had to query a very busy, very sluggish service running on an off-site mainframe in order to assemble one of the entities. Average response times from our app were dominated by this query. Since the data we retrieved was mostly read-only, caching with EHCache solved our problems.
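If you go the EHCache route, a rough sketch with the Ehcache 2.x API might look like this. Names and sizes are illustrative; disk overflow also needs a diskStore path in ehcache.xml, which the default configuration provides, and anything that overflows to the DiskStore must be Serializable.

import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.Element;

public class DiskBackedCache {
    public static void main(String[] args) {
        CacheManager manager = CacheManager.create(); // picks up ehcache.xml defaults
        // name, maxElementsInMemory, overflowToDisk, eternal, ttl, tti
        Cache cache = new Cache("records", 10000, true, true, 0, 0);
        manager.addCache(cache);

        cache.put(new Element(42L, "some serializable record"));
        Element hit = cache.get(42L);
        System.out.println(hit != null ? hit.getObjectValue() : "miss");

        manager.shutdown(); // flushes and closes the disk store
    }
}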
jdbm has a nice, persistent map implementation (http://code.google.com/p/jdbm2/) - that may help you do local caching - it would certainly be a lot faster than serializing your POJOs to XML and writing them back into a SQL database.
If your data is truly read-only, then I'd think that the best solution would be to treat the source database as an input queue that feeds your app database. Create a background process (heck, a service would be better), and have it monitor the source database and keep your app database synced.
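A sketch of that background process using a scheduled executor; syncOnce() is just a placeholder for the actual copy logic against your two databases.

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class SourceDbSync {
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    public void start() {
        // re-check the source database once a minute
        scheduler.scheduleWithFixedDelay(this::syncOnce, 0, 60, TimeUnit.SECONDS);
    }

    private void syncOnce() {
        // query the source DB for rows newer than the last sync marker,
        // transform them, and upsert them into the app database
    }

    public void stop() {
        scheduler.shutdown();
    }
}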

How much session data is too much?

We are running into unusually high memory usage issues. I observed that in many places in our code we pull hundreds of records from the DB, pack them into custom data objects, add them to an ArrayList, and store that in the session. I wish to know the recommended upper limit for storing data in the session. Just a good-practice/bad-practice kind of thing.
I am using JRockit 1.5 and 1.6 GB of RAM. I profiled with JProbe and found that some parts of the app have a very heavy memory footprint. Most of this data is being put into the session to be used later.
That depends entirely on how many sessions are typically present (which in turn depends on how many users you have, how long they stay on the site, and the session timeout) and how much RAM your server has.
But first of all: have you actually used a memory profiler to tell you that your "high memory usage" is caused by session data, or are you just guessing?
If the only problem you have is "high memory usage" on a production machine (i.e. it can handle the production load but is not performing as well as you'd like), the easiest solution is to get more RAM for the server - much quicker and cheaper than redesigning the app.
But caching entire result sets in the session is bad for a different reason as well: what if the data changes in the DB and the user expects to see that change? If you're going to cache, use one of the existing systems that do this at the DB request level - they'll allow you to cache results between users and they have facilities for cache invalidation.
If you're storing data in the session to improve performance, consider using true caching, since a cache is application-wide whereas the session is per-user, which results in unnecessary duplication of otherwise similar objects.
If, however, you're storing them for the user to edit these objects (which I doubt, since hundreds of objects is way too much), try minimizing the amount of data stored, or research optimistic concurrency control.
I'd say this heavily depends on the number of active sessions you expect. If you're writing an intranet application with < 20 users, it's certainly no problem to put a few MB in the session. However, if you're expecting 5000 live sessions, for instance, each MB of data stored per session accounts for 5 GB of RAM.
However, I'd generally recommend not to store any data from DB in session. Just fetch from DB for every request. If performance is an issue, use an application-wide cache (e.g. Hibernate's 2nd level cache).
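For example, marking an entity read-only for Hibernate's second-level cache looks roughly like this. This assumes a cache provider (e.g. Ehcache) is configured in hibernate.cfg.xml, and the Product entity is made up for illustration.

import javax.persistence.Entity;
import javax.persistence.Id;
import org.hibernate.annotations.Cache;
import org.hibernate.annotations.CacheConcurrencyStrategy;

// Reads of this entity are served from the application-wide second-level
// cache instead of being duplicated per user session.
@Entity
@Cache(usage = CacheConcurrencyStrategy.READ_ONLY)
public class Product {
    @Id
    private Long id;
    private String name;
    // getters/setters omitted
}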
What kind of data is it? Is it really needed per session or could it be cached at application level? Do you really need all the columns or only a subset? How often is it being accessed? What pages does it need to be available on? And so on.
It may make much more sense to retrieve the records from the DB when you really need to. Storing hundreds of records in session is never a good strategy.
I'd say try to store the minimum amount of data that will be enough to recreate the necessary environment in a subsequent request. If you're storing in memory to avoid a database round-trip, then a true caching solution such as Memcache might be helpful.
If you're storing these sessions in memory instead of a database, then the round-trip is saved and requests are served faster as long as the memory load is low and there's no paging. Once the number of clients goes up and paging begins, most clients will see a huge degradation in response times; the two variables are inversely related.
It's better to measure the latency to your database server; it is usually low enough that, in most cases, the database is a viable means of storage instead of keeping everything in memory.
Try to split the data you are currently storing in the session into user-specific and static data. Then implement caching for all the static parts. This will give you a lot of reuse application-wide and still allow you to cache the specific data a user is working on.
You could also create a per-user mini SQLite database, connect to it, and store the data the user is accessing in it; then retrieve records from it as the user requests them, and simply delete the database once the user disconnects.

web application session cache

I want to cache data for a user session in a web application built on Struts. What is the best way to do it? Currently we store certain information from the DB in Java objects in the user's session. That works fine so far, but people are now concerned about memory usage etc.
Any thoughts on how best to get around this problem?
Works fine till now but people are now concerned about memory usage etc.
Being "concerned" is relatively meaningless - do they have any concrete reason for it? Statistics that show how much memory session objects are taking up? Along the same line: do you have concrete reasons for wanting to cache the data in the user session? Have you profiled the app and determined that fetching this data from the DB for each request is slowing down your app significantly?
Don't guess. Measure.
It's usually bad practice to store whole objects in the user session, for exactly this reason. You should probably store just the keys in the session and re-query the database when you need the objects again. This is a trade-off, but it is usually acceptable for most use cases: re-querying the database by key between requests is usually cheap compared to keeping the objects in the session.
If you must have them in the session, consider using something like an LRUMap (from Apache Commons Collections).
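A sketch of that idea with commons-collections4; the helper class, attribute name, and size are made up, and LRUMap itself is not thread-safe, hence the synchronized wrapper.

import java.util.Collections;
import java.util.Map;
import javax.servlet.http.HttpSession;
import org.apache.commons.collections4.map.LRUMap;

public class SessionCacheHelper {
    private static final String KEY = "sessionCache";
    private static final int MAX_ENTRIES = 100;

    // Returns a bounded per-session cache; the least-recently-used entry
    // is evicted once MAX_ENTRIES is reached, so the session cannot grow
    // without limit.
    @SuppressWarnings("unchecked")
    public static Map<String, Object> cacheFor(HttpSession session) {
        Map<String, Object> cache = (Map<String, Object>) session.getAttribute(KEY);
        if (cache == null) {
            cache = Collections.synchronizedMap(new LRUMap<String, Object>(MAX_ENTRIES));
            session.setAttribute(KEY, cache);
        }
        return cache;
    }
}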
