We have a situation where we have to perform a lengthy query to the database based on human input. As the input changes, the query has to be done over and over again, and the input may change once per second.
The problem is, we know that this will cause a spike in server activity for several seconds, and since it is not critical to have an answer immediately or on every input change, we can afford to sometimes skip the query.
The criterion we would like to use is the current state of the database server: only run the query when the server is in a low or medium load state, and skip it when the database server is under stress.
We use an Oracle database for this, and so far the only way we have found, from Java, is to actually run a known query against the server and benchmark it, but that in itself adds load to the server. So my question: is there any other way, specifically with Oracle, to discover the load on the database from the Java side of the application?
Depending on how you define "low or medium load state", I'd guess that hitting v$osstat would give you the information you're after. Of course, hitting v$osstat constantly will also add to the load on the server, so you may want to write a job that periodically copies the v$osstat data to a table you control (and can thus index appropriately), so that your application hits that table rather than the dynamic performance view.

Depending on the goal (i.e. are you trying to ensure that other users have enough resources, or that your app remains responsive?), you may want to use Resource Manager to control resource utilization among users, run the query asynchronously from the application, and/or use some sort of cache at the middle tier to avoid hitting the database every time.
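For illustration, a minimal sketch of reading v$osstat over JDBC, assuming you have SELECT privilege on the view; the 'LOAD' stat name (the OS run-queue load average) and the threshold are assumptions to tune for your host:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class DbLoadGate {
        // Threshold is an assumption; tune it to whatever "medium load" means on your server.
        private static final double MAX_ACCEPTABLE_LOAD = 4.0;

        // Returns true if the server looks idle enough to run the expensive query.
        public static boolean serverHasCapacity(Connection conn) throws SQLException {
            String sql = "SELECT value FROM v$osstat WHERE stat_name = 'LOAD'";
            try (PreparedStatement ps = conn.prepareStatement(sql);
                 ResultSet rs = ps.executeQuery()) {
                return rs.next() && rs.getDouble(1) < MAX_ACCEPTABLE_LOAD;
            }
        }
    }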
We have to implement a monitoring module for an existing web application. Its purpose is to record the time taken by each method executed at run time and save it to two monitoring tables. Since there are many method invocations in the application, we can't write to the tables every time a method executes. We used Spring AOP to intercept the timings and Redis to cache the data, and every ten minutes we take the data from the cache and save it to the database. But Redis has turned out to be troublesome: it keeps making new connections, which gives the application a nightmare. Is there an alternative way to do this? We considered writing to a file and reading it periodically, but that also seemed like a resource-consuming solution.
I would suggest storing everything in your Java application. If all you want is a few stats, you can keep track of this information in a HashMap and write it to the database every 10 minutes from another thread. Min/max/avg times won't take much memory, and you can reset the stats once they're written to the database.
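A minimal sketch of that approach, assuming a Spring AOP advice calls record(); TimingRecorder, MethodStats, and saveToMonitoringTable are illustrative names, not an existing API:

    import java.util.Map;
    import java.util.concurrent.*;
    import java.util.concurrent.atomic.AtomicLong;

    public class TimingRecorder {
        static class MethodStats {
            final AtomicLong count = new AtomicLong();
            final AtomicLong totalNanos = new AtomicLong();
            final AtomicLong maxNanos = new AtomicLong();  // min tracked the same way if needed
        }

        private final ConcurrentHashMap<String, MethodStats> stats = new ConcurrentHashMap<>();
        private final ScheduledExecutorService flusher = Executors.newSingleThreadScheduledExecutor();

        public TimingRecorder() {
            flusher.scheduleAtFixedRate(this::flush, 10, 10, TimeUnit.MINUTES);
        }

        // Called from the Spring AOP advice around each method.
        public void record(String method, long nanos) {
            MethodStats s = stats.computeIfAbsent(method, k -> new MethodStats());
            s.count.incrementAndGet();
            s.totalNanos.addAndGet(nanos);
            s.maxNanos.accumulateAndGet(nanos, Math::max);
        }

        private void flush() {
            Map<String, MethodStats> snapshot = new ConcurrentHashMap<>(stats);
            stats.clear();  // reset after snapshotting; a tiny window of loss is acceptable here
            snapshot.forEach((method, s) -> saveToMonitoringTable(method, s));
        }

        private void saveToMonitoringTable(String method, MethodStats s) {
            // Your DAO/JDBC insert into the two monitoring tables goes here.
        }
    }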
If that doesn't suit your needs for some reason, Redis should be fast and can even help you compute your statistics. I'd set up a connection pool to prevent it from making a new network connection each time. Then I'd use asynchronous writes (pipelining) in your Redis client to speed up the writes; Jedis and JRedis both support this.
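If you stay with Redis, a hedged sketch of the pooled, pipelined variant using the Jedis API (host, port, and key naming are assumptions):

    import java.util.Map;

    import redis.clients.jedis.Jedis;
    import redis.clients.jedis.JedisPool;
    import redis.clients.jedis.JedisPoolConfig;
    import redis.clients.jedis.Pipeline;

    public class RedisTimingSink {
        // One shared pool for the whole application instead of a connection per write.
        private final JedisPool pool = new JedisPool(new JedisPoolConfig(), "localhost", 6379);

        public void writeBatch(Map<String, Long> methodTimings) {
            try (Jedis jedis = pool.getResource()) {
                Pipeline p = jedis.pipelined();
                // Queue all commands locally, then send them in one network round trip.
                methodTimings.forEach((method, nanos) ->
                        p.rpush("timings:" + method, Long.toString(nanos)));
                p.sync();
            }
        }
    }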
I need to create a Java agent that becomes aware of, and executes its instructions for, any update to particular tables in a MySQL or PostgreSQL database as soon as it occurs.
Everything needs to be done automatically.
Given that I'm a novice in Java, I was wondering if you could give me any advice.
My options are:
1) Having a trigger that, after a commit, could wake my Java application (using pg_notify and the like).
2) Having the Java application subscribe to a particular ID in the database (not sure this can be done, given that asynchronous updates are not possible and I might need my agent to ask the database for changes every xx seconds).
Thanks!
Yes, a trigger that uses NOTIFY is a good way to do it in PostgreSQL. The important problem when using the JDBC driver is that there is no way to receive notifications asynchronously, you have to poll. This is usually fine as the NOTIFY/LISTEN mechanism is very light-weight: if you want to poll 10 (100?) times a second, then you can do so without causing performance problems. See http://jdbc.postgresql.org/documentation/83/listennotify.html for more.
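A hedged sketch of that polling pattern (channel name and connection details are assumptions; the dummy SELECT 1 makes the driver read pending notifications off the socket, as the linked document describes):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    import org.postgresql.PGConnection;
    import org.postgresql.PGNotification;

    public class TableWatcher {
        public static void main(String[] args) throws Exception {
            Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost/mydb", "user", "secret");
            try (Statement st = conn.createStatement()) {
                st.execute("LISTEN table_changed");  // channel raised by the trigger's NOTIFY
            }
            // Cast works on the raw driver connection; unwrap it first if you use a pool.
            PGConnection pgConn = (PGConnection) conn;
            while (true) {
                // A cheap dummy query makes the driver pick up any pending notifications.
                try (Statement st = conn.createStatement();
                     ResultSet rs = st.executeQuery("SELECT 1")) { /* ignore result */ }
                PGNotification[] notes = pgConn.getNotifications();
                if (notes != null) {
                    for (PGNotification n : notes) {
                        System.out.println("change on channel " + n.getName());
                    }
                }
                Thread.sleep(100);  // ~10 polls per second, as suggested above
            }
        }
    }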
MySQL is a little less helpful; you'll need to have triggers INSERT rows into a monitoring table and repeatedly poll that table with SELECT * (and then DELETE). This will work, but you are more likely to end up in a latency/performance trade-off.
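For the MySQL side, a hedged sketch of the poll-and-delete loop; the change_log table and its columns are assumptions, to be populated by your triggers:

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class ChangeLogPoller {
        // Polls a trigger-fed table, processes new rows, then deletes what it has seen.
        public static void pollForever(Connection conn, long intervalMillis) throws Exception {
            while (true) {
                long maxSeen = 0;
                try (Statement st = conn.createStatement();
                     ResultSet rs = st.executeQuery(
                             "SELECT id, table_name, row_id FROM change_log ORDER BY id")) {
                    while (rs.next()) {
                        maxSeen = rs.getLong("id");
                        handleChange(rs.getString("table_name"), rs.getLong("row_id"));
                    }
                }
                if (maxSeen > 0) {
                    try (Statement st = conn.createStatement()) {
                        // Delete only rows we have processed, so nothing inserted
                        // between the SELECT and the DELETE is lost.
                        st.executeUpdate("DELETE FROM change_log WHERE id <= " + maxSeen);
                    }
                }
                Thread.sleep(intervalMillis);
            }
        }

        private static void handleChange(String table, long rowId) {
            System.out.println("row " + rowId + " changed in " + table);
        }
    }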
We have a J2EE application built using Hibernate and Struts, with an RMI-registry-based implementation of the business functionality.
In our application, around 250 concurrent users upload batches containing huge amounts of data, called BATCHDET. These batches are first run through 30 validations and then inserted into tables that have a parent-child relationship. Similarly, there are other operations that need heavy processing, like printing.
There is one table, containing 10 million records, that is accessed by all types of transactions; every process inserts into and updates it. This table has emerged as the bottleneck. We have added all the required indexes as well.
After 30 minutes of running, the JVM uses all of the allocated 6 GB of RAM and goes into a no-response state. When we tried to find the root cause, we realized there was a lock on the database side and all the update queries on the BATCHDET table were in a wait state. We tried everything we could, but no luck.
The system runs smoothly with 50 concurrent users but dies with the expected 250. BATCHDET has dependencies on almost every module, and we are in no mood to rewrite the implementation; could you please provide a quick fix?
We have thread-based transaction demarcation in Hibernate, implemented with HibernateUtil.java. The transaction isolation is READ COMMITTED. Is there any way to specify no locking for all search operations? We have Oracle 10g RDBMS.
Let me know if you need any other details.
~Amar
" Is there any way where we can define no lock for all search operation. we have oracle 10G RDBMS."
Oracle doesn't lock on selects, so in effect this is already in place.
Oracle also locks at a row level, so you need to stop thinking about the table as a whole and start thinking individual rows.
You need to talk with your DBA. There's a whole bunch of stuff to monitor in Oracle at both the system and session level. The DBA will be able to look at v$session and tell you what the individual sessions are waiting on. There might be locks, it might be a disk bottleneck, it may be index contention, or it may be that the database is sitting there idle and all the inefficiency is in the Java layer.
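To make that concrete, here is a hedged sketch of the kind of check the DBA would run, shown from JDBC; it requires SELECT privilege on v$session, and the columns used are standard in 10g:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class SessionWaits {
        // Prints what each active session is currently waiting on (locks, I/O, etc.).
        public static void dump(Connection conn) throws SQLException {
            String sql = "SELECT sid, username, event, wait_class, seconds_in_wait "
                       + "FROM v$session WHERE username IS NOT NULL";
            try (PreparedStatement ps = conn.prepareStatement(sql);
                 ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.printf("sid=%d user=%s waiting on '%s' (%s) for %ds%n",
                            rs.getLong("sid"), rs.getString("username"),
                            rs.getString("event"), rs.getString("wait_class"),
                            rs.getLong("seconds_in_wait"));
                }
            }
        }
    }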
I'm doing a school software project with my classmates in Java.
We store the info in a remote DB.
When we start the application, we pull all the information from the database and transform it into objects to use in our application (using java.sql statements). In the application we edit some of these objects, and then when we exit the application we save or update the information in the database using Hibernate.
As you see, we don't use Hibernate for pulling in information, just for saving and updating.
We have two very similar problems:
The loading of objects (when we start the app) and the saving of objects with Hibernate (when closing the app) both take too much time.
Our project is not a huge enterprise application; it's quite a small app. We just manage some students, teachers, homework and tests, so our DB is also very, very small.
How could we increase performance?
Later edit: with a local database it runs very quickly; it is only slow with remote databases.
Are you saying you are loading the entire database into memory and then manipulating it? If that is the case, why don't you instead simply use the database as a storage device and do lookups and manipulation as necessary (using Hibernate if you like, or something else if you don't)? The key there is to make sure you are using connection pooling, as that will reduce the connection time.
If you stop caching the entire database in memory, you will also reduce memory use, avoid potential memory issues, and spread the network load out from the beginning/end of the run to the times when it actually needs to happen.
These two sentences are red flags for me:

"When we start the application we pull all the information from the database and transform it into objects to use in our application (using java.sql statements). In the application we edit some of these objects, and then when we exit the application we save or update the information in the database using Hibernate."
Is there a requirements reason that you are loading all the information from the database into memory at startup, or why you're waiting until shutdown to save changes back in the database?
If not, I'd suggest a design change. If you've already got Hibernate mappings for the tables in the DB, I'd use Hibernate for all of your CRUD (create, read, update, delete) operations. And I'd only load the data that each page in your app needs, as it needs it.
If you can't make that kind of design change at this point, I think you've got to look closely at how you're managing the database connections. Are you using connection pools? Are you opening up multiple connections? Forgetting to release them?
Something else to look at. How are you using Hibernate to save the entities to the db? Are you doing a getHibernateTemplate().get on each one and then doing an entity.save or entity.update on each one? If so, that means you are also causing Hibernate to run a select query for each database object before it does a save or update. So, essentially, you'd be loading each database object twice (once at the beginning of the program, once before saving). To see if that's what's happening, you can turn on the show_sql property or use P6Spy to see exactly what queries Hibernate is running.
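For example, one hedged way to turn on statement logging with the classic Configuration API (equivalently, set hibernate.show_sql=true in hibernate.cfg.xml):

    import org.hibernate.SessionFactory;
    import org.hibernate.cfg.Configuration;

    public class FactoryBuilder {
        public static SessionFactory build() {
            Configuration cfg = new Configuration().configure(); // reads hibernate.cfg.xml
            cfg.setProperty("hibernate.show_sql", "true");       // print every SQL statement
            cfg.setProperty("hibernate.format_sql", "true");     // pretty-print for readability
            return cfg.buildSessionFactory();
        }
    }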
For what you are doing, you may very well be better off serializing your objects and writing them out to a flat file.
But, much more likely, you should just read / update objects directly from your database as needed instead of all at once, for all the reasons aperkins gives.
Also, consider what happens if your application crashes? If all of your updates are saved only in memory until the application is closed, everything would be lost if the app closes unexpectedly.
The difference between loading everything from a remote DB server and loading it from a local DB server is network latency / pipe size. The network is a much smaller pipe than anything else. Two questions: first, how much data are we really talking about? Second, what is your network speed: 10/100/1000? Figure that between 10 and 20% of your pipe size will be overhead, from networking protocols to the queries themselves.
As others have stated, the way you've architected this is usually high on the list of "don'ts". When starting, pull only enough data to initialize the app. As the user works through it, pull what you need for that task.
The ONLY time you pull everything is when they are working in a disconnected state. In that case, you still don't load everything as objects in the application, you just work from a local data store which gets sync'ed with the remote server every so often.
The project is pretty much complete; we can't do large refactoring on it now.
I tried to use a second-level cache for Hibernate when saving: EhCacheProvider.
In hibernate.xml I set the cache provider:

    net.sf.ehcache.hibernate.EhCacheProvider

I wrote a configuration for the cache (ehcache.xml), put the cache jar on the project build path, and set the Hibernate cache property for every class in the mappings.
But this cache doesn't seem to have any effect. I don't know whether it works (whether it is even used).
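One hedged way to check whether the cache is used at all is Hibernate's Statistics API (it requires hibernate.generate_statistics=true). Note that the second-level cache only speeds up repeated reads; it won't make the big save at shutdown any faster.

    import org.hibernate.SessionFactory;
    import org.hibernate.stat.Statistics;

    public class CacheCheck {
        // Prints second-level cache counters; all zeros means the cache is never touched.
        public static void print(SessionFactory sessionFactory) {
            Statistics stats = sessionFactory.getStatistics();
            System.out.println("2nd level hits:   " + stats.getSecondLevelCacheHitCount());
            System.out.println("2nd level misses: " + stats.getSecondLevelCacheMissCount());
            System.out.println("2nd level puts:   " + stats.getSecondLevelCachePutCount());
        }
    }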
Try minimising the number of SQL queries, since every query has its own overhead.
You can enable database compression, which should speed things up when there is a lot of data.
Maybe you are connecting to the database many times?
Check the ping time to the remote database server; it might be the problem.
As your application is just slow when running on a remote database server, I'd assume that the performance loss is due to:
Connecting to the server: try to reuse connections (pass the instance around) or use connection pooling
Query round-trip time: use as few queries as possible, see here in case of a hand-written DAL:
Preferred way of retrieving row with multiple relating rows
For Hibernate you may use its batch functionality and adjust hibernate.jdbc.batch_size; see the sketch below.
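A hedged sketch of the batching pattern from the Hibernate documentation; the batch size of 50 is illustrative and should match your hibernate.jdbc.batch_size setting:

    import java.util.List;

    import org.hibernate.Session;
    import org.hibernate.SessionFactory;
    import org.hibernate.Transaction;

    public class BulkSaver {
        public static void saveAll(SessionFactory factory, List<?> entities) {
            Session session = factory.openSession();
            Transaction tx = session.beginTransaction();
            int i = 0;
            for (Object entity : entities) {
                session.saveOrUpdate(entity);
                if (++i % 50 == 0) {   // same value as hibernate.jdbc.batch_size
                    session.flush();   // push the batched statements to the DB
                    session.clear();   // release the session-level cache
                }
            }
            tx.commit();
            session.close();
        }
    }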
In all cases, especially when you can't refactor larger parts of the codebase, use a profiler (method times or SQL queries) to find the bottleneck. I bet you'll find thousands of queries, each taking ~10 ms RTT, which could be merged into one.
Some other things you can look into:
You can allocate more memory to the JVM
Use the jconsole tool to investigate what the bottlenecks are.
Why don't you have two separate threads?
Thread 1 will load your objects one by one.
Thread 2 will process objects as they are loaded.
Your app will seem more interactive at startup.
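A minimal sketch of that idea using a BlockingQueue; Student, loadNext(), and process() are illustrative placeholders:

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    public class ParallelStartup {
        private static final Student POISON = new Student(); // marks the end of loading
        private final BlockingQueue<Student> queue = new ArrayBlockingQueue<>(100);

        public void start() {
            Thread loader = new Thread(() -> {
                try {
                    Student s;
                    while ((s = loadNext()) != null) { // one row at a time from the DB
                        queue.put(s);                  // blocks if the processor lags
                    }
                    queue.put(POISON);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            Thread processor = new Thread(() -> {
                try {
                    Student s;
                    while ((s = queue.take()) != POISON) {
                        process(s);                    // build the in-app object, update UI
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            loader.start();
            processor.start();
        }

        private Student loadNext() { return null; }    // placeholder: fetch the next row
        private void process(Student s) { }            // placeholder: use the object
        static class Student { }
    }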
It never hurts to review the basics:
Improving speed means reducing time (obviously), and to do that you find activities that take significant time but can be eliminated or replaced with something that uses less time. By an activity I mean, almost always, a function, method, or property call performed on a specific line of code for a specific purpose. It may invoke I/O, or computation, or both. If its purpose is not essential, then it can be optimized.
Many people use profilers to try to find these time-wasting lines of code, but most profilers miss the target because they look at functions rather than lines, they go to sleep during I/O, and they worry about "self time".
Many more people try to guess what could be the problem, or they ask others to guess, such as by asking on SO. Such guesses, in the nature of guesses, are sometimes right - more often not, but people still invest time and resources in them.
There's a very simple way to find out for sure, without guessing, what could fruitfully be optimized, and here is one way to do it in Java.
Thanks for your answers. They were more than helpful.
We completely solved this problem like so:
Refactored the LOAD code. Now it uses Hibernate with lazy fetching.
Refactored the SAVE code. Now it saves just the data that was modified, right after it was modified. This way we don't have a HUGE save at the end.
I'm amazed at how well it all went. The amount of new code we had to write was very, very small.