Sorry to ask this in case it has been answered before, but I heard (from another potential noob) that Hibernate has/had some kind of connection pool manager that also handles locking of the database. Now I read this was removed in Hibernate 3, so I, as a noob, am very confused about what to use.
I have a Postgresql db with multiple clients that each use max. one db connection at any given time. I use JDBC but want to move to Hibernate.
So in case two concurrent update operations occur, I don't know if this is handled by the DBMS correctly. I thought about locking a db table manually in case someone operates on it, but there must be a better way.
I only operate with simple, single sql-statements, sometimes prepared statements. No big updates, just single line updates.
Do you have any idea how this, generally, is to be solved? Is this even a problem?
This is too general for a truly useful answer, and I should really just close-vote it. But I'll try to help.
The connection pool has nothing to do with locking. The two are unrelated topics.
I think you're vaguely trying to refer to the optimistic concurrency control in Hibernate. This is an alternative strategy to normal row locking, with a different set of advantages and disadvantages.
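For illustration, version-based optimistic locking with Hibernate/JPA annotations looks roughly like this; the entity and field names are just an invented example, not anything from your schema:

```java
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Version;

@Entity
public class Account {

    @Id
    private Long id;

    // Hibernate increments this column on every update; if two transactions
    // update the same row concurrently, the later commit fails with an
    // optimistic-lock exception instead of silently overwriting the first.
    @Version
    private int version;

    private long balance;

    // getters and setters omitted
}
```

That way concurrent updates are detected at commit time rather than prevented up front with row or table locks.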
See the Hibernate documentation for more information, and the wikipedia article on optimistic concurrency control.
I also wrote a recent blog entry on this topic that may be useful.
Above all else, though, there's no substitute for actually understanding concurrency in the application and database. I very strongly recommend reading the PostgreSQL documentation chapter on concurrency control in detail.
[Background]
- There are two java applications (A and B), and they can only communicate via Oracle DB
- A and B share the same database table
- A and B store the data in a cache
[Problem]
If A performs a simple transaction (insert/update/delete), the cache in A is updated. The cache in B should then be updated automatically as well!
[Current Status]
Two solutions I found and tried
- Solution1) Using DatabaseChangeListener
- Solution2) Using Socket Programming
[Question]
The solution will be used in a company, and I would like to know if there is anything I can improve about my solutions.
1) What could be the disadvantages if I use DatabaseChangeListener?
2) What could be the disadvantages if I use socket programming? (Maybe it's too low-level for developers to maintain, or problematic due to company policy?)
3) I heard there are third-party caches that also support synchronization. Is that correct?
Please let me know if you need more information!
Thank you very much in advance!
[EDIT]
It would be much appreciated if you could leave a comment when you down-vote this. I would like to know how I can improve this question with your feedback! Thank you
Your question appears every now and then with slightly different aspects. One useful answer to that is here: Guava Cache, how to block access while doing removal
About using the DatabaseChangeListener:
Although you are fine with Oracle, I would discourage the use of vendor-specific interfaces. For me, they would be okay as a performance optimization, but I would never rely on vendor-specific interfaces for basic functionality.
Second, the usage of the change listener may still lead to dirty reads.
About "distributed caches" as veritas suggested:
There is a difference between distributed caches and clustered caches. Distributed caches spread (aka distribute) the cached data on different nodes, clustered caches are caches for clustered applications that keep track of data consistency within the cluster. A distributed cache usually is a clustered cache, but not the other way around. For a general idea on the topic I recommend the infinispan documentation on clustering as an intro: http://infinispan.org/docs/7.0.x/user_guide/user_guide.html#_clustering
Wrap up:
A clustered cache implementation is the thing you need. However, if you want data consistency, you still need to carefully design your transaction handling.
You can, of course, also do socket communication yourself and send simple object invalidate messages to the other applications. The challenging part is the error handling. When was the invalidate successful? Is there a timeout for the other nodes to acknowledge? When to drop a node and maintain a cluster state at all?
I would suggest a third-party cache if you have many similar use cases or many tables that need to be updated.
Please read about the Terracotta distributed cache.
It gives exactly what you want.
You can also look at Hazelcast or Memcached.
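To give a feel for how little code a clustered map takes, here is a rough Hazelcast sketch; the map name and contents are illustrative, and cluster configuration is left at the defaults (check the docs of whichever version you use):

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import java.util.Map;

public class CacheNode {
    public static void main(String[] args) {
        // Each application (A and B) starts its own member; the members
        // discover each other and form a cluster.
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();

        // The map is shared across the cluster: a put() done by application A
        // becomes visible to application B without any socket code of your own.
        Map<Long, String> rowCache = hz.getMap("table-cache");
        rowCache.put(42L, "some cached row");
    }
}
```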
I am shifting back from Hibernate to plain JDBC in order to overcome the overhead incurred by using Hibernate. I want to know how to deal with the sessions associated with Hibernate. How should I convert back to plain JDBC so that all my sessions are replaced with JDBC connections? And please let me know if I am wrong in thinking that replacing a session with a connection converts the code back to plain JDBC, as I am not well versed in these concepts and don't know if I am going about it the right way.
I have used Hibernate extensively in high-performance tasks, including batch insertion of millions of records. Your problem is not with Hibernate, but with the way you are using it.
Above all, do not use Hibernate as a persistent state manager; use it as a thin layer above the raw SQL and you won't complain about performance.
- Always prefer StatelessSession (it works for everything you need except save operations);
- never use lazy fetching, use explicit joins for everything;
- never fetch whole objects, use SELECT to fetch exactly what you need;
- fetch as much as possible in a single statement, avoid n+1 selects at all costs;
- for large result sets, never use list, use iterate or scroll.
The list goes on, but this is what I have come up with at this moment.
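For illustration, the StatelessSession + scroll style looks roughly like this; Payment is an invented entity and the query is just an example:

```java
import org.hibernate.ScrollMode;
import org.hibernate.ScrollableResults;
import org.hibernate.SessionFactory;
import org.hibernate.StatelessSession;

public class PaymentExporter {

    private final SessionFactory sessionFactory;

    public PaymentExporter(SessionFactory sessionFactory) {
        this.sessionFactory = sessionFactory;
    }

    public void export() {
        // StatelessSession: no first-level cache, no dirty checking, no
        // cascades -- just rows in, rows out.
        StatelessSession session = sessionFactory.openStatelessSession();
        try {
            ScrollableResults rows = session
                    .createQuery("select p from Payment p where p.processed = false")
                    .scroll(ScrollMode.FORWARD_ONLY);
            while (rows.next()) {
                Payment payment = (Payment) rows.get(0);
                // ... write the row out; Hibernate keeps nothing in memory
            }
            rows.close();
        } finally {
            session.close();
        }
    }
}
```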
As far as your direct question, it depends on the application. If it is a Spring application, then you will certainly want to use its declarative transaction management. Basically, you just put a few lines of XML config and you'll have an open DataSource in your DAO code ready to be used, with no management on your part.
If you are doing something more raw, then by all means use a connection pool library, such as the great BoneCP. You acquire connections from it and later return them to it, again with no explicit management.
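Roughly, acquiring and returning connections with BoneCP looks like this; the JDBC URL and credentials are placeholders:

```java
import com.jolbox.bonecp.BoneCP;
import com.jolbox.bonecp.BoneCPConfig;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

public class PoolExample {
    public static void main(String[] args) throws Exception {
        BoneCPConfig config = new BoneCPConfig();
        config.setJdbcUrl("jdbc:postgresql://localhost/mydb"); // placeholder URL
        config.setUsername("app");
        config.setPassword("secret");

        BoneCP pool = new BoneCP(config);       // the pool manages the physical connections
        Connection conn = pool.getConnection(); // borrow one...
        try (Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("select 1")) {
            rs.next();
        }
        conn.close(); // ...and give it back; close() returns it to the pool
        pool.shutdown();
    }
}
```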
Lastly, if you really want a bare-bones, unsafe and non-scalable approach, then you can create connections directly from the JDBC driver. This approach is really only for schoolwork and it is not recommended even in the smallest of production-worthy projects.
A Hibernate session is much more than a JDBC connection. It manages one or more such connections (usually obtained from a JDBC connection pool which recycles JDBC Connection instances), a bunch of entities which are attached to and managed by said session, and other things as well (caching, etc.).
Removing Hibernate and doing everything with the JDBC API alone will imply more than just replacing Hibernate Session instances with one or more JDBC connections followed by a duplication of the Hibernate code into analogous JDBC API calls. If you'd only do that, you'd simply do a lot of work for nothing, as you'd lose all of Hibernate's advantages (less verbose code, a higher level of abstraction, etc.) and gain none of JDBC's advantages (less heap memory used, fewer method calls (yes, even with Hibernate's Javassist magic, this still counts towards performance in some cases), finer-grained control of the database interactions, etc.).
My advice is to first really look into the problems your app has (apparently due to Hibernate) and, at least for the major ones, try to first see if you can't do something to optimize them without getting rid of Hibernate. Yes, Hibernate can become heavy and memory hungry, but more often than not the performance issue comes from improper use of the framework (are you sure you're fetching all the necessary associated entities in one query, or do you make Hibernate perform hidden joins or pseudo-joins in the background? Are you doing your data operations on the database side, or is some of that done in Java code after a more-generic-than-necessary Hibernate query is executed to fetch the data? etc.)
If you really need to get rid of Hibernate (maybe you need to use some very specific features of your database which are not standard SQL and which Hibernate doesn't let you access, like MySQL's ability to import big amounts of data via a custom flat-file format), then make sure that whatever you're replacing it with (plain JDBC, or maybe some other ORM like EclipseLink) can tackle the issue and solve it in a more performant way. Doing a small POC to test these options before you start overhauling your code can save you a ton of time.
While I strongly urge you to heed the advice of Marko and Shivan, you could use hibernate to manage your connections/sessions/transactions and to execute your SQL queries without much overhead being generated.
A quick Google search yielded this guide on executing SQL from a Hibernate session:
http://www.informit.com/guides/content.aspx?g=java&seqNum=575
While I agree with both of the earlier answers, if you truly want to go down the road of executing straight SQL, I would look into this option for two reasons.
1) Your sessions are already in place. If you don't have Hibernate load up all of your entities, I don't see how Hibernate would generate that much overhead.
2) If the problem is speed rather than overhead (which I have run into before), you can use this approach to quickly execute native SQL in your problem areas and keep all of Hibernate's ORM goodies in place.
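For reference, running native SQL through an existing Hibernate Session (Hibernate 3/4 style) looks roughly like this; Customer and the table/column names are invented:

```java
import java.util.List;
import org.hibernate.Session;

public class NativeSqlDao {

    @SuppressWarnings("unchecked")
    public List<Customer> findVipCustomers(Session session) {
        // createSQLQuery() bypasses HQL entirely; addEntity() tells Hibernate
        // how to map the raw result rows back onto an entity class.
        return session.createSQLQuery("select * from customers where vip = 'Y'")
                .addEntity(Customer.class)
                .list();
    }
}
```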
All of that being said, I would also urge you to dig into the documentation for hibernate. I have used hibernate for several high performance solutions with great success. While the nuances can be hard to grapple with in the beginning, the benefits of using hibernate (or at least something that adheres to JPA standard) far outweigh the cost of not doing so down the road scalability wise.
I have a scenario where in I need to keep a log of all incoming files (flat, xml) to an application. This log table is hardly used, except for fault investigation or regulatory purposes and things like that, and data will be purged regularly.
We are using JPA 2.0 for persistence. We tried the initial prototype with pure JPA persistence, using entityManager.persist() with an immediate flush. But the performance was not up to expectations. So I suggested NativeNamedQueries for this operation, and the performance improvement was huge (300 milliseconds vs 47 milliseconds) in tests.
But the lead engineer is a bit adamant against using NativeNamedQueries, saying that they are coupled to the database, less maintainable, and things like that.
Questions :
What is your take on this, if you had to make the decision? How often do database or schema changes happen once the application goes to production?
Is there any other way to improve performance? Performance is very very critical for this application.
It's only been 4 years since I started programming, but I have never seen a DB schema change or DB provider change happen for an existing application.
Note: We are using EclipseLink 2.3 and Oracle. Also, it's a fresh application that we are developing, just in case these points make the question clearer.
How often do database or schema changes happen once the application goes to production?
This is immaterial to your problem at hand. The quantity of changes to database schemas does not matter. What matters is the maintainability of your database model, how well it has been designed. Most business apps will see a lot of changes being done if sufficient performance testing hasn't been done, which is sadly true for most apps.
If you are writing a typical line-of-business application, I would expect some form of round-trip engineering between the object model and the database model to occur in development. Your DBAs ought to own and know the database model quite well, so that they can aid in or perform the fine-tuning of the queries issued by your ORM framework. This is keeping in mind that you may not rely on the queries issued by the ORM framework alone. All changes should preferably be done and tested in the development and integration-testing (and possibly UAT, if you have one) environments before they are rolled out to production, and as common sense would suggest, all changes should be under version control.
On the topic of coupling the queries to a database, that is a decision your business has to take. If you are in the business of supporting multiple databases, then you ought to be testing against all of them. Also, you should be capable of providing different distributions for supporting different databases; this is made easier if you place your native queries in database-specific orm.xml files like orm-oracle.xml, orm-mysql.xml etc. and rename the right file to orm.xml before you prepare a distribution. Using Maven or Ant would make the proposed change easy to implement.
Is there any other way to improve performance? Performance is very very critical for this application.
That would depend on how well you have designed your object and data models, how well you've understood your ORM framework and how willing you are in "corrupting" your object model.
The first bit of performance tuning any application is to always measure twice and cut once. You cannot simply iterate through a list of possible solutions and try each one of them without knowing how they work and in what circumstances they are useful; okay, you could do that if your business is willing to invest time in that, but it is often not the case.
To begin, you'll need to understand why native queries are providing or appear* to provide a better performance. Maybe this has got a lot to do with the fact that you are merely inserting data, and it would be better for an ORM framework to simply issue the INSERT statement rather than construct one from HQL or the abstract query notation used under the hood; only a profiler will reveal the difference.
If the above is true, then you could reconsider whether your audit tables must be managed by the ORM framework. If your application is responsible for only writing to these tables and not reading from them (and it is quite possible that another app is responsible for reading the entries), then I would suspect that not managing these tables in ORM would provide better performance, especially if you use plain JDBC to issue the INSERT statement. The reason is quite simple - if your ORM framework is managing the entity, then it is also responsible for managing the persistence context (which now includes the class and the associated table); not having ORM manage the entity would possibly result in the scenario where the persistence context need not be updated at all for audit entries.
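As a sketch of what "not managing the audit table through the ORM" could look like, assuming a plain DataSource and an invented incoming_file_log table:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Timestamp;
import javax.sql.DataSource;

public class FileAuditLogger {

    private final DataSource dataSource;

    public FileAuditLogger(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public void logIncomingFile(String fileName, String fileType) throws SQLException {
        // One prepared INSERT, no entity, no persistence context to maintain.
        String sql = "insert into incoming_file_log (file_name, file_type, received_at) values (?, ?, ?)";
        try (Connection conn = dataSource.getConnection();
             PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, fileName);
            ps.setString(2, fileType);
            ps.setTimestamp(3, new Timestamp(System.currentTimeMillis()));
            ps.executeUpdate();
        }
    }
}
```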
There is a healthy possibility of other performance tuning measures that you can undertake, but like I stated earlier, it would require you to understand a profiler report and estimate which possible choices would be better in your application.
* I'm afraid that unless you publish benchmarks and how you conducted them I will be skeptical of claims.
It's quite rare that you actually DO switch the database provider, especially once you've paid several hundred thousand in licenses for an excellent and high-performance database like Oracle. Besides, the SQL syntax variants of the INSERT statement are not so distinct that you wouldn't be able to switch the database later, even if you use native SQL in exceptional cases.
I don't see why patching a single query that needs extra tuning is bad. Ask your lead developer why he's so strict. But before you do, use a profiler, such as JProfiler, or Yourkit to identify the exact spot that's causing the performance issues. With JPA, any of these may cause issues: caching, eager loading of dependent data (which you wouldn't need, probably), inefficient SQL generation, a bad query execution plan in your Oracle database, etc... Maybe you don't need a native query after all.
If performance is so critical, then maybe JPA is not good enough for the job. Have you (and your lead developer) considered other frameworks such as jOOQ, QueryDSL, MyBatis or anything similar? I have understood from your comments that your main use-cases are OLAP-querying, and not OLTP, hence you might even like to use advanced Oracle features, such as analytic functions and data-warehousing functionality, for which jOOQ has native support, for instance...
1) I have seen only 2 applications that moved from Oracle to MySQL (to save on license costs) in 10 years, so it's not something that happens very often, BUT if you want to write integration tests using another database (e.g. HSQLDB) you'll be in trouble.
About how often the schema changes after an app goes to production, my answer is: A LOT!! If the app is updated regularly, expect LOTS of changes, as the team usually understands the business better over time. I even worked on a project in which the schema was considerably different after one year of the app being live.
At the same time, it looks like you deferred optimizing until the last possible moment (a good thing to do) and now you need to optimize the SQL using some native queries (which also happens quite regularly)... What I'm trying to say is that your idea doesn't sound bad at all to me.
2) In the past I've used a mix of Hibernate and iBatis (or MyBatis nowadays) for similar situations (in case you want to check out iBatis). And one question: why are you doing a flush() after each persist()? You shouldn't really need to do that.
Also, I'm quite surprised that the inserts take so much longer when they're done in EclipseLink. The calls to persist() should take almost the same amount of time as the native query (I assume they'll take longer if there are any lifecycle callbacks). I assume you've seen the SQL generated by EclipseLink; is it that different?
I know my answer is not specific at all, but I hope it helps.
I am working with an object that serves as a database in my application. However, I need to have redundant copies of this database. So, on init, I create multiple instances (say 5) copies of the same object. (I am using JAVA for this, so any hint of pre-existing libraries could be helpful as well.)
The object is a server that listens on a port for request for the information it is holding. This information may be updated by other entities via the same or a different port at any time.
My question is as follows:
Would a lock strategy work in this case? That is, every time an update is made in any instance, that instance contacts all other instances and passes the update. During this time, all the requests (read or update) from other entities are queued.
Would this approach work? I have my doubts because, even if this works, I think the system is creating its own bottleneck. What do you guys say? Is there a better way of doing this distributed synchronization?
What you're describing is a distributed cache. The big player in that space is currently Coherence though I believe JBoss Cache is catching up.
As for rolling your own, having seen the complexity in what superficially sounds like quite a simple problem, I wouldn't recommend it in a commercial setting, though it'd be a fun home project.
Are you talking about a distributed cache? Have you looked at ehcache?
Would this approach work? I have my doubts because, even if this works, I think the system is creating its own bottleneck.
It would be creating its own bottleneck. You'd be better off using an in-memory database like HSQLDB or an embedded database like SQLite.
There is a lot more to distributed synchronization than it's possible to mention in a single answer. You have to worry about two-phase commits, network partitions, etc. etc. I would advise you to look into an existing distributed DB solution combined with an n-tier Java EE architecture that includes load-balancing.
I have a thick client, java swing application with a schema of 25 tables and ~15 JInternalFrames (data entry forms for the tables). I need to make a design choice of straight JDBC or ORM (hibernate with spring framework in this case) for DBMS interaction. Build out of the application will occur in the future.
Would hibernate be overkill for a project of this size? An explanation of either yes or no answer would be much appreciated (or even a different approach if warranted).
TIA.
Good question with no single simple answer.
I used to be a big fan of Hibernate after using it in multiple projects over multiple years.
I used to believe that any project should default to hibernate.
Today I am not so sure.
Hibernate (and JPA) is great for some things, especially early in the development cycle.
It is much faster to get to something working with Hibernate than it is with JDBC.
You get a lot of features for free - caching, optimistic locking and so on.
On the other hand it has some hidden costs. Hibernate is deceptively simple when you start. Follow some tutorial, put some annotations on your class - and you've got yourself persistence. But it's not simple, and being able to write good code with it requires a good understanding of both its internal workings and database design. If you are just starting you may not be aware of some issues that may bite you later on, so here is an incomplete list.
Performance
The runtime performance is good enough; I have yet to see a situation where Hibernate was the reason for poor performance in production. The problem is the startup performance and how it affects your unit test times and development speed. When Hibernate loads it analyzes all entities and does a lot of pre-caching - it can take 5-15 seconds for a not-very-big application. So your 1-second unit test is going to take 11 seconds now. Not fun.
Database Independency
It is very cool as long as you don't need to do some fine tuning on the database.
In-memory Session
For every transaction Hibernate will store an object in memory for every database row it "touches". It's a nice optimization when you are doing some simple data entry. If you need to process lots of objects for some reason though, it can seriously affect performance, unless you explicitly and carefully clean up the in-memory session on your own.
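The usual way to keep the session from growing in such cases is to flush and clear it periodically. A rough sketch (Product is an invented entity; the batch size of 50 is arbitrary):

```java
import java.math.BigDecimal;
import java.util.List;
import org.hibernate.Session;

public class BulkRepricer {

    public void reprice(Session session, List<Long> productIds) {
        int count = 0;
        for (Long id : productIds) {
            Product product = (Product) session.get(Product.class, id);
            product.setPrice(product.getPrice().multiply(new BigDecimal("1.1")));
            if (++count % 50 == 0) {
                session.flush(); // push the pending UPDATEs to the database
                session.clear(); // drop the accumulated in-memory entities
            }
        }
        session.flush();
        session.clear();
    }
}
```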
Cascades
Cascades allow you to simplify working with object graphs. For example, if you have a root object and some children and you save the root object, you can configure Hibernate to save the children as well. The problem starts when your object graph grows complex. Unless you are extremely careful and have a good understanding of what goes on internally, it's easy to mess this up. And when you do, it is very hard to debug those problems.
Lazy Loading
Lazy loading means that every time you load an object, Hibernate will not load all of its related objects but will instead provide placeholders which are resolved as soon as you try to access them. Great optimization, right? It is, except that you need to be aware of this behaviour, otherwise you will get cryptic errors. Google "LazyInitializationException" for an example. And be careful with performance. Depending on the order in which you load your objects and on your object graph, you may hit the "n+1 selects problem". Google it for more information.
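As an example, a fetch join loads the children in the same statement, so iterating over them later doesn't trigger one extra select per parent (Order, items and customer are invented names):

```java
import java.util.List;
import org.hibernate.Session;

public class OrderDao {

    @SuppressWarnings("unchecked")
    public List<Order> findOrdersWithItems(Session session, long customerId) {
        // "join fetch" loads each order together with its items in one SQL
        // statement, so touching order.getItems() later never fires the
        // hidden one-select-per-order queries (the n+1 problem).
        return session.createQuery(
                "select distinct o from Order o join fetch o.items"
                + " where o.customer.id = :customerId")
                .setParameter("customerId", customerId)
                .list();
    }
}
```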
Schema Upgrades
Hibernate allows easy schema changes by just refactoring java code and restarting. It's great when you start. But then you release version one. And unless you want to lose your customers you need to provide them schema upgrade scripts. Which means no more simple refactoring as all schema changes must be done in SQL.
Views and Stored Procedures
Hibernate requires exclusive write access to the data it works with. Which means you can't really use views, stored procedures and triggers, as those can cause changes to the data without Hibernate being aware of them. You can have some external processes writing data to the database in separate transactions. But if you do, your cache will have invalid data. Which is one more thing to care about.
Single Threaded Sessions
Hibernate sessions are single threaded. Any object loaded through a session can only be accessed (including reading) from the same thread. This is acceptable for server-side applications but might complicate things unnecessarily if you are writing a GUI-based application.
I guess my point is that there are no free meals.
Hibernate is a good tool, but it's a complex tool, and it requires time to understand it properly. If you or your team members don't have that knowledge, it might be simpler and faster to go with pure JDBC (or Spring JDBC) for a single application. On the other hand, if you are willing to invest time into learning it (including learning by doing and debugging), then in the future you will be able to understand the tradeoffs better.
Hibernate can be good but it and other JPA ORMs tend to dictate your database structure to a degree. For example, composite primary keys can be done in Hibernate/JPA but they're a little awkward. There are other examples.
If you're comfortable with SQL I would strongly suggest you take a look at Ibatis. It can do 90%+ of what Hibernate can but is far simpler in implementation.
I can't think of a single reason why I'd ever choose straight JDBC (or even Spring JDBC) over Ibatis. Hibernate is a more complex choice.
Take a look at the Spring and Ibatis Tutorial.
No doubt Hibernate has its complexity.
But what I really like about the Hibernate approach (some others too) is that the conceptual model you get in Java is better. Although I don't think of OO as a panacea, and I don't look for theoretical purity of design, I have found many times that OO does in fact simplify my code. As you asked specifically for details, here are some examples:
The added complexity is not in the model and entities, but in the framework for manipulating all the entities. For maintainers, the hard part is not a few framework classes but your model, so Hibernate allows you to keep the hard part (the model) at its cleanest.
If a field (like an id, or audit fields, etc.) is used in all your entities, then you can create a superclass with it (see the sketch after this list). Therefore:
you write less code, but more importantly ...
there are fewer concepts in your model (each concept is unique in the code)
for free, you can write more generic code that, given any entity (without type-switching or casts), lets you access the id.
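A minimal sketch of that superclass idea with JPA annotations (all names are invented):

```java
import java.util.Date;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.MappedSuperclass;
import javax.persistence.Temporal;
import javax.persistence.TemporalType;

// The common fields live once, in the superclass...
@MappedSuperclass
public abstract class BaseEntity {

    @Id
    @GeneratedValue
    private Long id;

    @Temporal(TemporalType.TIMESTAMP)
    private Date lastModified;

    public Long getId() { return id; }
    public Date getLastModified() { return lastModified; }
    public void setLastModified(Date lastModified) { this.lastModified = lastModified; }
}

// ...and every entity (in its own file) simply inherits them.
@Entity
public class Invoice extends BaseEntity {
    private String number;
}
```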
Hibernate also has many features to deal with other model characteristics you might need (now or later; add them only as needed). Take it as an extensibility quality for your design.
You might replace inheritance (subclassing) with composition (several entities having the same kind of member, which contains a few related fields that happen to be needed in several entities).
There can be inheritance between a few of your entities. It often happens that you have two tables that have pretty much the same structure (but you don't want to store all the data in one table, because you would lose referential integrity to a different parent table).
With reuse between your entities (but only appropriate inheritance, and composition), there are usually some additional advantages. Examples:
there is often some way to read the data of the entities that is similar but different. Suppose I read the "title" field for three entities, but for some I replace the result with a differing default value if it is null. It is easy to have a signature "getActualTitle" (in a superclass or an interface), and implement the default value handling in the three implementations. That means the code out of my entities just deals with the concept of an "actual title" (I made this functional concept explicit), and the method inheritance takes care of executing the correct code (no more switch or if, no code duplication).
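In code, that is just ordinary polymorphism on the entities; a rough sketch with invented names (each class would normally live in its own file):

```java
public abstract class TitledEntity {

    protected String title; // mapped column, may be null in the database

    // Each entity decides what its "actual title" means.
    public abstract String getActualTitle();
}

class Book extends TitledEntity {
    @Override
    public String getActualTitle() {
        return title != null ? title : "(untitled book)";
    }
}

class Article extends TitledEntity {
    @Override
    public String getActualTitle() {
        return title != null ? title : "(untitled article)";
    }
}
```

Code outside the entities only ever calls getActualTitle(), and the right default is picked with no switch or cast.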
...
Over time, the requirements evolve. There will be a point where your database structure has problems. With JDBC alone, any change to the database must impact the code (i.e. double cost). With Hibernate, many changes can be absorbed by changing only the mapping, not the code. The same happens the other way around: Hibernate lets you change your code (between versions for example) without altering your database (by changing the mapping, although it is not always sufficient). To summarize, Hibernate lets you evolve your database and your code independently.
For all these reasons, I would choose Hibernate :-)
I think either is a fine choice, but personally I would use hibernate. I don't think hibernate is overkill for a project of that size.
Where Hibernate really shines for me is dealing with relationships between entities/tables. Doing JDBC by hand can take a lot of code if you deal with modifying parent and children (grandchildren, siblings, etc) at the same time. Hibernate can make this a breeze (often a single save of the parent entity is enough).
There are certainly complexities when dealing with Hibernate though, such as understanding how the Session flushing works, and dealing with lazy loading.
Straight JDBC would fit the simplest cases at best.
If you want to stay within Java and OOD then going Hibernate or Hibernate/JPA or any-other-JPA-provider/JPA should be your choice.
If you are more comfortable with SQL then having Spring for JDBC templates and other SQL-oriented frameworks won't hurt.
In contrast, besides transactional control, there is not much help from having Spring when working with JPA.
Hibernate is best suited for middleware applications. Assume that we build a middleware layer on top of the database, and that middleware is accessed by around 20 applications; in that case we can have one Hibernate layer that satisfies the requirements of all 20 applications.
In JDBC, if we open a database connection we need to wrap the work in try; if any exception occurs, the catch block takes care of it, and finally is used to close the connection.
In JDBC all exceptions are checked exceptions, so we must write code with try, catch and throws, but in Hibernate we only have unchecked exceptions.
With JDBC, the programmer must close the connection, or we may end up getting an "out of connections" message!
Actually, if we don't close the connection in the finally block, JDBC is not responsible for closing that connection.
In JDBC we need to write SQL commands in various places; after the program has been created, if the table structure is modified the JDBC program doesn't work, and we need to modify, recompile and re-deploy it, which is tedious.
JDBC generates database-related error codes when an exception occurs, but Java programmers are usually unfamiliar with these error codes.
While inserting a record, if the particular table does not exist in the database, JDBC raises an error like "table or view does not exist" and throws an exception; but in the case of Hibernate, if it does not find the table in the database it can create the table for us (with the appropriate schema-generation setting).
Hibernate supports both lazy and eager loading of associations; plain JDBC has no such concept, you get exactly what your query selects.
Hibernate supports Inheritance, Associations, Collections
In Hibernate, if we save a derived-class object then its base-class state is also stored in the database; this means Hibernate supports inheritance.
Hibernate supports relationships like One-To-One, One-To-Many, Many-To-One and Many-To-Many.
Hibernate supports a caching mechanism; by this, the number of round trips between the application and the database is reduced, and using this caching technique can noticeably improve application performance.
Getting pagination in Hibernate is quite simple.
Hibernate has the capability to generate primary keys automatically while we are storing records in the database.
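For instance, the pagination mentioned above is just two extra calls on the query (Employee is an invented entity):

```java
import java.util.List;
import org.hibernate.Session;

public class EmployeeDao {

    @SuppressWarnings("unchecked")
    public List<Employee> findPage(Session session, int pageNumber, int pageSize) {
        return session.createQuery("from Employee e order by e.id")
                .setFirstResult(pageNumber * pageSize) // row offset
                .setMaxResults(pageSize)               // page size
                .list();
    }
}
```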
... In-memory Session ... LazyInitializationException ...
You could look at Ebean ORM, which doesn't use session objects ... and where lazy loading just works. It's certainly an option, not overkill, and it will be simpler to understand.
If billions of users are using our app or website, then with JDBC the query will be executed billions of times, but with Hibernate (thanks to its query cache) the query may be executed only once for any number of users. This is the most important and easiest-to-see advantage of Hibernate over JDBC.
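What this point is getting at is Hibernate's query cache: if the second-level/query cache is enabled in the configuration and a query is marked cacheable, repeated identical executions can be served from the cache instead of hitting the database each time. A rough sketch (Country is an invented entity):

```java
import java.util.List;
import org.hibernate.Session;

public class CountryDao {

    @SuppressWarnings("unchecked")
    public List<Country> findAll(Session session) {
        // With the query cache enabled, identical executions of this query can
        // be answered from the cache until an update invalidates the region,
        // so the SQL does not have to run once per user.
        return session.createQuery("from Country")
                .setCacheable(true)
                .list();
    }
}
```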