How to keep Hibernate mapping use under control as requirements grow - java

I've worked on a number of Java web apps where persistence is via Hibernate, and we start off with some central class (e.g. an insurance application) without any time being spent considering how to break things up into manageable chunks. Over time, as features are added, we add more mappings (rates, clients, addresses, etc.) and the amount of time spent saving and loading an insurance object and everything it connects to grows. In particular, as you get close to a go-live date, performance testing with larger amounts of data in each table starts to demonstrate that it's all too slow.
Obviously there are a number of ways that we could attempt to partition things up, e.g. map only the client classes for the client CRUD screens, etc., which would have been better to get in place earlier rather than trying to work it in at the end of the dev cycle.
I'm just wondering if there are recommendations about ways to handle/mitigate this.

Does Java have a port of this? http://fluentnhibernate.org/ It should ease your life, speed things up, plus it offers many other benefits.
Edit:
To add, it's often much cheaper to simply throw hardware at the problem. If things are too slow it's worth spending a few hundred or thousand dollars on hardware rather than embarking on a lengthy (read: costly) re-engineering effort.

If you aren't making use of Hibernate's lazy associations, this may be the place to start. Converting can be a challenge, as you'll discover how much code assumes the whole graph is loaded when you don't have a Session open. In which case, you'll need this pattern.

This type of problem can quickly devolve into an ugly religious debate. Many aspects of the drawbacks of ORMs have been discussed here, and it is hard to have a discussion in this area without it turning into a religious war. ORMs are great, but for some situations they may not be the best fit. Perhaps you are experiencing a situation where the ORM is an impediment to what your system needs to accomplish. Perhaps not, and some Hibernate experts can navigate you through the shoals.

A couple of tricks:
Don't use eager fetching. If you are using annotations, you should make sure that every *ToMany association is mapped lazy (fetch = FetchType.LAZY). Eager fetching is evil. If for some reason you really want eager fetching, you can always specify it in the query (see the sketch below).
Don't load too many objects in a single transaction. Say you have a transaction that will process a large number of entities (> 1000). You should use some pagination and process each page in its own transaction. Loading a lot of objects in a single transaction will bloat the session and performance will degrade. Alternatively, you could evict objects that are no longer used, but this can be tricky if an object graph is loaded.
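A minimal sketch of both tricks, assuming hypothetical Insurance and Rate entities and a plain JPA EntityManagerFactory (the names are illustrative, not taken from the question):

    import javax.persistence.*;
    import java.util.List;

    @Entity
    class Rate {
        @Id @GeneratedValue
        Long id;

        @ManyToOne(fetch = FetchType.LAZY)
        Insurance insurance;
    }

    @Entity
    class Insurance {
        @Id @GeneratedValue
        Long id;

        // Trick 1: keep every *ToMany association lazy by default.
        @OneToMany(mappedBy = "insurance", fetch = FetchType.LAZY)
        List<Rate> rates;
    }

    class InsuranceBatch {

        // Eager fetching only where a use case needs it, expressed in the query:
        List<Insurance> loadWithRates(EntityManager em) {
            return em.createQuery(
                    "select distinct i from Insurance i join fetch i.rates",
                    Insurance.class)
                .getResultList();
        }

        // Trick 2: process large volumes page by page, each page in its own
        // transaction, so the persistence context never grows without bound.
        void processAll(EntityManagerFactory emf, int pageSize) {
            int page = 0;
            List<Insurance> batch;
            do {
                EntityManager em = emf.createEntityManager();
                em.getTransaction().begin();
                batch = em.createQuery("select i from Insurance i order by i.id",
                        Insurance.class)
                    .setFirstResult(page * pageSize)
                    .setMaxResults(pageSize)
                    .getResultList();
                for (Insurance i : batch) {
                    // work on each entity here
                }
                em.getTransaction().commit();
                em.close();
                page++;
            } while (!batch.isEmpty());
        }
    }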
A final note: I have used Hibernate on systems with more than 300 tables and with tables having more than 30 million records. The tables have a lot of relations and we don't see any performance issues with the database.

Related

How Can I Make Fast Desktop Application With Remote Database?

I am going to make a desktop application with a MySQL database. My database tables are frequently changing -- almost 60% of the tables. So I think caching may be a bad idea. Can anyone suggest:
How can I make a fast desktop application with a remote database?
My language is Java.
The biggest problem with most projects that have performance as their primary concern is that people tend to make exotic choices that end up complicating the project without any real benefit. Unless you have previous hands-on experience with the environment you will be working in, start simple.
Set some realistic goals about how often you have to refresh your data before you start. If your data changes very frequently, e.g. every second, does it make sense to try and show the changes in real time? A query every second will make everyone involved miserable.
Use a thread to take care of the queries. You don't need more than one, since any more will only make the race conditions in the database worse.
Design your database layer to be insulated from the rest of the application. Also time your DB-related operations from the beginning in order to track the impact of your optimizations.
Start with Hibernate / ORMLite. Although I cannot talk about ORMLite, I have used (optimized) Hibernate in heavy load environments without any problems. If you have complicated objects you should give it a try, it sure beats using plain JDBC and implementing the cache mechanism yourself.
Find out when you need lazy loading and when it's slowing you down (due to the select n+1 problem).
If you have performance issues, optimize. You don't have to map every single relationship. Use custom SQL in separate methods to get the objects you need when you need them. You can write a query that only returns table ids and afterwards ask Hibernate to load the corresponding objects (a sketch follows at the end of this answer).
Optimize your SQL. Avoid unnecessary joins; use subselects, where id in (...), etc.
Implement (database) paging if it makes sense.
If all else fails, start using plain SQL. You'll have already written the most complex queries and you'll know where your biggest bottlenecks are.
You could use a local SQLite to save the less volatile data and talk to the database mainly to get lists of ids and the stuff that you're missing. For example if you have users and orders, you can assume that you will have many more new orders per minute/second than users per hour.
To sum up, set clear performance goals before you start, always use a separate thread for data retrieval, avoid reinventing the wheel and keep it as simple as possible.
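A sketch of the ids-first idea mentioned above, assuming a hypothetical ClientOrder entity (all names are illustrative):

    import javax.persistence.*;
    import java.util.Collections;
    import java.util.List;

    @Entity
    class ClientOrder {
        @Id
        Long id;
        String status;
    }

    class OrderLoader {

        // Step 1: a cheap, possibly hand-tuned query that returns only primary keys.
        List<Long> findOpenOrderIds(EntityManager em) {
            return em.createQuery(
                    "select o.id from ClientOrder o where o.status = 'OPEN'", Long.class)
                .getResultList();
        }

        // Step 2: ask the ORM to load (or find in its cache) only the entities
        // you actually need right now.
        List<ClientOrder> loadByIds(EntityManager em, List<Long> ids) {
            if (ids.isEmpty()) {
                return Collections.emptyList();
            }
            return em.createQuery(
                    "select o from ClientOrder o where o.id in :ids", ClientOrder.class)
                .setParameter("ids", ids)
                .getResultList();
        }
    }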
Here are some generic approaches to the problem.
0) HW: make sure you don't have hardware bottlenecks that you could cheaply remove by upgrading (adding HW is faster and cheaper than dev hours in most cases).
1) Caching:
Perhaps you can cache (locally or in a distributed cache like memcache) the 40% of data that tends to be immutable. You could invalidate the cache when data gets modified. You should choose the right entities and granularity level for building the keys.
2) Replication:
If the first option is too much overhead, you could create read slaves of your MySQL database and read from there. Again, you have to know when you can afford to have some stale data.
3) NoSQL:
Moving in that direction, but increasing the dev effort, you could move to some distributed store (take a look at the CAP theorem before making a choice)
Hope it helps
Depends on your database structure and application. You can use an object-relational mapping library like ORMLite and refresh objects loaded from the database in the background with threads. With ORMLite you may also use LazyForeignCollection to load only the required data in your application.
Minimize unnecessary database calls.
If your database fields are changing, you can shift from a relational database to a NoSQL database like MongoDB.
You can use multithreading on the server side for data processing, and clustering of application servers. While using multithreading, use it effectively: be aware of the synchronized keyword, as it can degrade performance to some extent.
Follow coding best practices: avoid unnecessary instance variables and prefer local variables, which also helps with thread safety.
You can also use MyBatis as the ORM, especially for large queries.
You can cache at the DAO layer, the service layer and even on the client side, but be sure to keep the cache synchronized with the database; there are various caching solutions you can use.
You can add database indexes to speed up retrieval.
Do not use the same service for querying large amounts of data; break it down into different services, which will let you process in a multithreaded way.
If the application is not a hard real-time system, you can also use a messaging solution, i.e. process data asynchronously.

Hibernate multiple users, dynamically changing

There are technically two questions here, but they are tightly coupled :)
I'm using Hibernate in a new project. It's a POS project.
It uses Oracle database.
We have decided to use Hibernate because the project is large, and because it provides (the most popular) ORM capabilities.
Spring is, for now, out of the question - the reason being: the project is a Swing client-server application, and it adds needless complexity. And, also, Spring is supposed to be very hungry on the hardware resources.
There is a possibility to throw away Hibernate and use JDBC. Why? The project requirement is precise database interaction. Meaning, we should have complete control over the connections, sessions and transactions (and, yes, going as low as unoptimized queries).
The first question is - what are your opinions on the mentioned requirement?
The second question revolves around Hibernate.
We developed a simple Hibernate pilot project.
Another project requirement is - one database user / one connection per user / one session per user / transactions are flexible (we can end them when we want, as with sessions).
Multiple users can log in to the application at the same time.
We achieved something like that. To be precise, we achieved the full described functionality except for the multiple users requirement.
Now, looking at the available resources, I came to the conclusion that if we are to have multiple users on the database (on the same schema), we will end up using multiple SessionFactory instances, implementing a dynamic ConnectionProvider for new user connections. Why?
The users' hashed passwords are in the database, so we need to dynamically add a user to the list of current users.
The second question is - can this be done a little more easily? It seems weird that Hibernate doesn't support such configurations.
Thank you.
If you're pondering whether to use Hibernate or JDBC, honestly go for JDBC. If your domain model is not too complex, you don't really get a lot of advantages from using Hibernate. On the other hand, using JDBC will greatly improve performance, as you have better control over your queries, and you get A LOT less memory usage from not having all the Hibernate overhead. Balance this by making as detailed a first sketch of your model as possible. If you're able to sketch it all from the start (no parts that are likely to change wildly throughout the project), and if said model doesn't look too involved, JDBC will be your friend.
About your users and sessions there, I think you might be mistaken (though it could just be me), but I don't think you need multiple SessionFactories to have multiple sessions. SessionFactory is a heavy object to initialize, but once you have one you can get multiple lightweight Hibernate Session objects from it (see the sketch below).
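A minimal sketch of that pattern (one heavyweight SessionFactory built at startup, a lightweight Session per logical user / unit of work), assuming a standard hibernate.cfg.xml; it does not address the separate one-database-user-per-connection requirement:

    import org.hibernate.Session;
    import org.hibernate.SessionFactory;
    import org.hibernate.cfg.Configuration;

    public class HibernateUtil {

        // Heavyweight: build exactly one per application.
        private static final SessionFactory SESSION_FACTORY =
                new Configuration().configure()   // reads hibernate.cfg.xml
                                   .buildSessionFactory();

        // Lightweight: open one Session per logical user / unit of work.
        public static void doUnitOfWork() {
            Session session = SESSION_FACTORY.openSession();
            try {
                session.beginTransaction();
                // ... load and save entities for this user here ...
                session.getTransaction().commit();
            } catch (RuntimeException e) {
                session.getTransaction().rollback();
                throw e;
            } finally {
                session.close();
            }
        }
    }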
As a final remark, if you truly stick with an ORM solution (for whatever reason), if possible choose the EclipseLink JPA 2 implementation. JPA 2 has more features than Hibernate, and the EclipseLink implementation is less buggy than Hibernate.
So, as far as Hibernate goes, I still don't know if the only way to dynamically change database users (change database connections) is to create multiple session factories, but I presume it is.
We have lowered our requirements and decided to use Hibernate with only one user on the database (one connection) and one session per user (multiple sessions / multiple "logical" users). We created a couple of Java classes to wrap that functionality. The resources on how this can be done can be found here.
Why did we use Hibernate eventually? Using JDBC is more precise and more flexible, but the effort of mapping the ResultSet values into objects is, again, the same manual ORM approach.
For example, if I have a GUI that needs to save a Page, first I have to fetch all the Page's Articles and then, after I save the Page, update all the Articles' FKs to that Page. Notice that I'm speaking in nouns (objects), and I don't see any other way to wrap the Page/Articles except using global state. This is the one thing I wouldn't like to see in my application, and we are, after all, using Java, an OO language.
When we already have an ORM mapper that can be configured (forced would be the more precise word in this particular example) to handle these things itself, why go and program it ourselves?
Also, we decided to use Google Guice - it's much faster, typesafe, and could significantly simplify our development/maintenance/testing.

JPA Native Queries versus 'pure' JPA persistence

I have a scenario where I need to keep a log of all incoming files (flat, XML) to an application. This log table is hardly used, except for fault investigation, regulatory purposes and things like that, and the data will be purged regularly.
We are using JPA 2.0 for persistence. We tried the initial prototype with pure JPA persistence, using entityManager.persist() and flushing immediately, but the performance was not up to expectations. So I suggested NativeNamedQueries for this operation and the performance improvement was huge (300 milliseconds vs 47 milliseconds) in tests.
But the lead engineer is a bit adamant against using NativeNamedQueries, saying that they are coupled to the database, less maintainable, and things like that.
Questions :
What is your take on this, if you had to make the decision? How often do database or schema changes happen once the application goes to production?
Is there any other way to improve performance? Performance is very very critical for this application.
It's only 4 years since I started programming, but I have never seen a DB schema change or DB provider change happen for an existing application.
Note: We are using EclipseLink 2.3 and Oracle. Also, it's a fresh application that we are developing. Just in case these points make the question clearer.
How often do database or schema changes happen once the application goes to production?
This is immaterial to your problem at hand. The quantity of changes to database schemas does not matter. What matters is the maintainability of your database model, how well it has been designed. Most business apps will see a lot of changes being done if sufficient performance testing hasn't been done, which is sadly true for most apps.
If you are a writing a typical line-of-business application, I would expect some form of round-trip engineering between the object model and the database model to occur in development. Your DBAs ought to own and know the database model quite well, so that they can aid or perform the fine-tuning the queries issued by your ORM framework. This is keeping in mind that you may not rely on the queries issued by the ORM framework alone. All changes should preferably be done and tested in the development and integration-testing (and possibly UAT, if you have one) environments before it is rolled out to production, and as common sense would suggest, all changes would be under version control.
On the topic of coupling the queries to a database, that is a decision your business has to take. If you are in the business of supporting multiple databases, then you ought to test against all of them. Also, you should be capable of providing different distributions for supporting different databases; this is made easier if you place your native queries in database-specific orm.xml files like orm-oracle.xml, orm-mysql.xml etc. and rename the files to orm.xml before you prepare a distribution. Using Maven or Ant would make the proposed change easy to implement.
Is there any other way to improve performance? Performance is very very critical for this application.
That would depend on how well you have designed your object and data models, how well you've understood your ORM framework and how willing you are in "corrupting" your object model.
The first bit of performance tuning any application is to always measure twice and cut once. You cannot simply iterate through a list of possible solutions and try each one of them without knowing how they work and in what circumstances they are useful; okay, you could do that if your business is willing to invest time in that, but it is often not the case.
To begin, you'll need to understand why native queries are providing or appear* to provide a better performance. Maybe this has got a lot to do with the fact that you are merely inserting data, and it would be better for an ORM framework to simply issue the INSERT statement rather than construct one from HQL or the abstract query notation used under the hood; only a profiler will reveal the difference.
If the above is true, then you could reconsider whether your audit tables must be managed by the ORM framework. If your application is responsible for only writing to these tables and not reading from them (and it is quite possible that another app is responsible for reading the entries), then I would suspect that not managing these tables in ORM would provide better performance, especially if you use plain JDBC to issue the INSERT statement. The reason is quite simple - if your ORM framework is managing the entity, then it is also responsible for managing the persistence context (which now includes the class and the associated table); not having ORM manage the entity would possibly result in the scenario where the persistence context need not be updated at all for audit entries.
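As an illustration only (the table and column names are invented, not taken from the question), writing such an audit entry over plain JDBC, bypassing the persistence context entirely, might look like this:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;
    import java.sql.Timestamp;
    import javax.sql.DataSource;

    public class FileAuditDao {

        private final DataSource dataSource;

        public FileAuditDao(DataSource dataSource) {
            this.dataSource = dataSource;
        }

        // Write-only audit entry: no entity, no persistence-context bookkeeping.
        public void logIncomingFile(String fileName, String fileType) {
            String sql = "insert into file_audit_log (file_name, file_type, received_at) "
                       + "values (?, ?, ?)";
            try (Connection con = dataSource.getConnection();
                 PreparedStatement ps = con.prepareStatement(sql)) {
                ps.setString(1, fileName);
                ps.setString(2, fileType);
                ps.setTimestamp(3, new Timestamp(System.currentTimeMillis()));
                ps.executeUpdate();
            } catch (SQLException e) {
                throw new RuntimeException("Could not write audit entry", e);
            }
        }
    }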
There is a healthy possibility of other performance tuning measures that you can undertake, but like I stated earlier, it would require you to understand a profiler report and estimate which possible choices would be better in your application.
* I'm afraid that unless you publish benchmarks and how you conducted them I will be skeptical of claims.
It's quite rare that you actually DO switch the database provider, especially once you've paid several hundred thousand in license fees for an excellent and high-performance database like Oracle. Besides, the SQL syntax variants of the INSERT statement are not so distinct that you wouldn't be able to switch the database in an exceptional case, even when using native SQL.
I don't see why patching a single query that needs extra tuning is bad. Ask your lead developer why he's so strict. But before you do, use a profiler, such as JProfiler, or Yourkit to identify the exact spot that's causing the performance issues. With JPA, any of these may cause issues: caching, eager loading of dependent data (which you wouldn't need, probably), inefficient SQL generation, a bad query execution plan in your Oracle database, etc... Maybe you don't need a native query after all.
If performance is so critical, then maybe JPA is not good enough for the job. Have you (and your lead developer) considered other frameworks such as jOOQ, QueryDSL, MyBatis or anything similar? I have understood from your comments that your main use-cases are OLAP-querying, and not OLTP, hence you might even like to use advanced Oracle features, such as analytic functions and data-warehousing functionality, for which jOOQ has native support, for instance...
1) I have seen only 2 applications that moved from Oracle to MySQL (to save on license costs) in 10 years, so it's not something that happens very often, BUT if you want to write integration tests using another database (e.g. HSQLDB) you'll be in trouble.
About how often the schema changes after an app goes to production, my answer is: A LOT!! If the app is updated regularly, expect LOTS of changes, as the team usually comes to understand the business better. I even worked on a project in which the schema was considerably different after one year of the app going live.
At the same time, it looks like you deferred optimizing until the last possible moment (a good thing to do) and now you need to optimize the SQL using some native queries (which also happens quite regularly)... What I'm trying to say is that your idea doesn't sound bad at all to me.
2) In the past I've used a mix of Hibernate and iBatis (or MyBatis nowadays) for similar situations (in case you want to check iBatis). And one question: why are you doing a flush() after each persist()? You shouldn't really need to do that.
Also, I'm quite surprised that the inserts take so much longer when they're done through EclipseLink. The calls to persist() should take almost the same amount of time as a native query (I assume they'll take longer if there are any lifecycle callbacks). I assume you've seen the SQL generated by EclipseLink; is it that different?
I know my answer is not specific at all, but I hope it helps.

When to use Hibernate caching (second level)?

This is a basic question about Hibernate caching, but I have to be sure before going forward. I have used query caching before in small projects, but now I'm involved in a big project, so here it is:
In really big projects (national) what are your suggestion about when to use Query Caching in Hibernate?
note: *The platform is Struts2, Spring3, Hibernate, Java6 WAS6 *
The second-level cache is used when your DB relations are complex, because in that case you know that hitting the DB each and every time will be a costly operation. The performance of the app can be increased by using a cache in such cases.
I reckon you mean the second-level cache, that is, a cache which spans more than one Hibernate session.
Generally, query cache is used for queries that are heavy or often accessed, to make your app hit the database less often.
I'm not sure if your question includes entity cache, but you definitely should investigate it as well. This cache includes individual entities or their collections regardless of context (i.e. concrete queries). I would say it's the most beneficial type of caching.
The bigger your TPS or number of entities, the more you will benefit from using such cache. When you run into having a few thousand queries per transaction, fetching entities from cache (usually in RAM) rather than querying database and mapping can save a lot of precious time.
Be careful when you need 100% up-to-date (online) results.
See also:
Improving Performance at Hibernate docs.
I highly recommend the article truly understanding the second level and query caches. In general, caching has a lot of benefits but also introduces a lot of complexity, and you should have a good reason for caching, and understand what benefits/risks it will give you.
Note that turning on the query cache is by itself not enough, you need to mark things as cacheable, here is an explanation. This whole article is really good and discusses when the query cache is not helpful. Again, make sure you have a good reason for turning on query caching in your application.
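For illustration, marking an entity and a query as cacheable with Hibernate might look roughly like this (the entity and region strategy are invented; the second-level cache and query cache must also be enabled and backed by a cache provider in your configuration):

    import java.util.List;
    import javax.persistence.Cacheable;
    import javax.persistence.Entity;
    import javax.persistence.Id;
    import org.hibernate.Session;
    import org.hibernate.annotations.Cache;
    import org.hibernate.annotations.CacheConcurrencyStrategy;

    @Entity
    @Cacheable
    @Cache(usage = CacheConcurrencyStrategy.READ_WRITE)
    class Country {
        @Id
        Long id;
        String name;
    }

    class CountryDao {

        // The query cache stores only ids; the entities themselves must also be
        // cacheable (as above), otherwise every hit still goes back to the database.
        @SuppressWarnings("unchecked")
        List<Country> findAll(Session session) {
            return session.createQuery("from Country")
                          .setCacheable(true)   // requires hibernate.cache.use_query_cache=true
                          .list();
        }
    }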

Hibernate or JDBC

I have a thick client, java swing application with a schema of 25 tables and ~15 JInternalFrames (data entry forms for the tables). I need to make a design choice of straight JDBC or ORM (hibernate with spring framework in this case) for DBMS interaction. Build out of the application will occur in the future.
Would hibernate be overkill for a project of this size? An explanation of either yes or no answer would be much appreciated (or even a different approach if warranted).
TIA.
Good question with no single simple answer.
I used to be a big fan of Hibernate after using it in multiple projects over multiple years.
I used to believe that any project should default to hibernate.
Today I am not so sure.
Hibernate (and JPA) is great for some things, especially early in the development cycle.
It is much faster to get to something working with Hibernate than it is with JDBC.
You get a lot of features for free - caching, optimistic locking and so on.
On the other hand it has some hidden costs. Hibernate is deceptively simple when you start. Follow some tutorial, put some annotations on your class - and you've got yourself persistence. But it's not simple, and being able to write good code with it requires a good understanding of both its internal workings and database design. If you are just starting you may not be aware of some issues that may bite you later on, so here is an incomplete list.
Performance
The runtime performance is good enough; I have yet to see a situation where Hibernate was the reason for poor performance in production. The problem is the startup performance and how it affects your unit test times and development performance. When Hibernate loads it analyzes all entities and does a lot of pre-caching - it can take about 5-15 seconds for a not very big application. So your 1-second unit test is going to take 11 seconds now. Not fun.
Database Independency
It is very cool as long as you don't need to do some fine tuning on the database.
In-memory Session
For every transaction Hibernate will store an object in memory for every database row it "touches". It's a nice optimization when you are doing some simple data entry. If you need to process lots of objects for some reason though, it can seriously affect performance, unless you explicitly and carefully clean up the in-memory session on your own.
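A common way to keep the session small during bulk work is to flush and clear it periodically; a sketch (the batch size and entities are illustrative):

    import org.hibernate.Session;
    import org.hibernate.Transaction;

    class BulkImporter {

        private static final int BATCH_SIZE = 50;

        void importAll(Session session, Iterable<?> newEntities) {
            Transaction tx = session.beginTransaction();
            int count = 0;
            for (Object entity : newEntities) {
                session.save(entity);
                if (++count % BATCH_SIZE == 0) {
                    session.flush();  // push the pending SQL to the database
                    session.clear();  // drop the in-memory copies Hibernate keeps
                }
            }
            tx.commit();
        }
    }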
Cascades
Cascades allow you to simplify working with object graphs. For example if you have a root object and some children and you save the root object, you can configure Hibernate to save the children as well. The problem starts when your object graph grows complex. Unless you are extremely careful and have a good understanding of what goes on internally, it's easy to mess this up. And when you do, it is very hard to debug those problems.
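For reference, a minimal cascade mapping of the kind described (the entity names are illustrative):

    import java.util.ArrayList;
    import java.util.List;
    import javax.persistence.CascadeType;
    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.Id;
    import javax.persistence.ManyToOne;
    import javax.persistence.OneToMany;

    @Entity
    class Invoice {
        @Id @GeneratedValue
        Long id;

        // Saving an Invoice also persists any new InvoiceLine in this collection.
        // On a complex graph, this is exactly where unintended saves and deletes
        // sneak in, so keep cascades narrow and deliberate.
        @OneToMany(mappedBy = "invoice", cascade = CascadeType.PERSIST)
        List<InvoiceLine> lines = new ArrayList<InvoiceLine>();
    }

    @Entity
    class InvoiceLine {
        @Id @GeneratedValue
        Long id;

        @ManyToOne
        Invoice invoice;
    }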
Lazy Loading
Lazy loading means that every time you load an object, Hibernate will not load all its related objects but will instead provide placeholders which are resolved as soon as you try to access them. Great optimization, right? It is, except you need to be aware of this behaviour, otherwise you will get cryptic errors. Google "LazyInitializationException" for an example. And be careful with performance: depending on the order in which you load your objects and your object graph, you may hit the "n+1 selects" problem. Google it for more information.
Schema Upgrades
Hibernate allows easy schema changes by just refactoring java code and restarting. It's great when you start. But then you release version one. And unless you want to lose your customers you need to provide them schema upgrade scripts. Which means no more simple refactoring as all schema changes must be done in SQL.
Views and Stored Procedures
Hibernate requires exclusive write access to the data it works with. That means you can't really use views, stored procedures and triggers, as those can cause changes to the data without Hibernate being aware of them. You can have some external processes writing data to the database in separate transactions, but if you do, your cache will have invalid data - which is one more thing to care about.
Single Threaded Sessions
Hibernate sessions are single-threaded. Any object loaded through a session can only be accessed (including reading) from the same thread. This is acceptable for server-side applications but might complicate things unnecessarily if you are building a GUI-based application.
I guess my point is that there are no free meals.
Hibernate is a good tool, but it's a complex tool, and it requires time to understand it properly. If you or your team members don't have that knowledge, it might be simpler and faster to go with pure JDBC (or Spring JDBC) for a single application. On the other hand, if you are willing to invest time into learning it (including learning by doing and debugging), then in the future you will be able to understand the tradeoffs better.
Hibernate can be good but it and other JPA ORMs tend to dictate your database structure to a degree. For example, composite primary keys can be done in Hibernate/JPA but they're a little awkward. There are other examples.
If you're comfortable with SQL I would strongly suggest you take a look at Ibatis. It can do 90%+ of what Hibernate can but is far simpler in implementation.
I can't think of a single reason why I'd ever choose straight JDBC (or even Spring JDBC) over Ibatis. Hibernate is a more complex choice.
Take a look at the Spring and Ibatis Tutorial.
No doubt Hibernate has its complexity.
But what I really like about the Hibernate approach (and some others too) is that the conceptual model you can get in Java is better. Although I don't think of OO as a panacea, and I don't look for theoretical purity of the design, I have found so many times that OO does in fact simplify my code. As you asked specifically for details, here are some examples:
the added complexity is not in the model and entities, but in the framework code that manipulates them. For maintainers, the hard part is not a few framework classes but your model, so Hibernate allows you to keep the hard part (the model) at its cleanest.
if a field (like an id, or audit fields, etc.) is used in all your entities, then you can create a superclass with it. Therefore:
you write less code, but more importantly ...
there are fewer concepts in your model (the unique concept is unique in the code)
for free, you can write more generic code that, given any entity (of unknown type, with no type-switching or casts), allows you to access the id.
Hibernate also has many features to deal with other model characteristics you might need (now or later; add them only as needed). Take it as an extensibility quality for your design.
You might replace inheritance (subclassing) by composition (several entities having the same member, which contains a few related fields that happen to be needed in several entities).
There can be inheritance between a few of your entities. It often happens that you have two tables with pretty much the same structure (but you don't want to store all the data in one table, because you would lose referential integrity to a different parent table).
With reuse between your entities (but only appropriate inheritance, and composition), there are usually some additional advantages to come. Examples:
there is often some way to read the data of the entities that is similar but different. Suppose I read the "title" field for three entities, but for some I replace the result with a different default value if it is null. It is easy to have a signature "getActualTitle" (in a superclass or an interface) and implement the default-value handling in the three implementations. That means the code outside my entities just deals with the concept of an "actual title" (I made this functional concept explicit), and method inheritance takes care of executing the correct code (no more switch or if, no code duplication).
...
Over time, the requirements evolve. There will be a point where your database structure has problems. With JDBC alone, any change to the database must impact the code (i.e. double the cost). With Hibernate, many changes can be absorbed by changing only the mapping, not the code. The same happens the other way around: Hibernate lets you change your code (between versions, for example) without altering your database (by changing the mapping, although that is not always sufficient). To summarize, Hibernate lets you evolve your database and your code independently.
For all these reasons, I would choose Hibernate :-)
I think either is a fine choice, but personally I would use hibernate. I don't think hibernate is overkill for a project of that size.
Where Hibernate really shines for me is dealing with relationships between entities/tables. Doing JDBC by hand can take a lot of code if you deal with modifying parent and children (grandchildren, siblings, etc) at the same time. Hibernate can make this a breeze (often a single save of the parent entity is enough).
There are certainly complexities when dealing with Hibernate though, such as understanding how the Session flushing works, and dealing with lazy loading.
Straight JDBC would fit the simplest cases at best.
If you want to stay within Java and OOD then going Hibernate or Hibernate/JPA or any-other-JPA-provider/JPA should be your choice.
If you are more comfortable with SQL then having Spring for JDBC templates and other SQL-oriented frameworks won't hurt.
In contrast, besides transactional control, there is not much help from having Spring when working with JPA.
Hibernate is best suited for middleware applications. Assume that we build a middleware layer on top of the database, and that the middleware is accessed by around 20 applications; in that case we can have a single Hibernate layer which satisfies the requirements of all 20 applications.
In JDBC, if we open a database connection we need to write the code in a try block, handle any exceptions that occur in a catch block, and close the connection in a finally block.
In JDBC all exceptions are checked exceptions, so we must write code with try, catch and throws, but in Hibernate we only have unchecked exceptions.
With JDBC, as the programmer we must close the connection ourselves, or we may end up with an out-of-connections message!
Actually, if we don't close the connection in the finally block, JDBC doesn't take responsibility for closing that connection.
In JDBC we need to write SQL commands in various places; after the program has been written, if the table structure is modified the JDBC program stops working, and we need to modify, recompile and redeploy it, which is tedious.
JDBC produces database-specific error codes when an exception occurs, but Java programmers generally don't know what these error codes mean.
While inserting a record, if the particular table does not exist in the database, JDBC will raise an error like "view not exist" and throw an exception, whereas Hibernate (with schema generation enabled) can create the table for us.
JDBC has no notion of lazy loading; Hibernate supports both lazy and eager loading.
Hibernate supports inheritance, associations and collections.
In Hibernate, if we save a derived class object, its base class state will also be stored in the database; in other words, Hibernate supports inheritance.
Hibernate supports relationships like One-To-One, One-To-Many, Many-To-One and Many-To-Many.
Hibernate supports a caching mechanism; with it, the number of round trips between the application and the database is reduced, and application performance increases.
Getting pagination in Hibernate is quite simple (see the sketch after this list).
Hibernate can generate primary keys automatically while storing records in the database.
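For instance, a minimal paging query (the Product entity is illustrative); Hibernate translates setFirstResult/setMaxResults into the database-specific limit/offset syntax:

    import java.util.List;
    import javax.persistence.Entity;
    import javax.persistence.Id;
    import org.hibernate.Session;

    @Entity
    class Product {
        @Id
        Long id;
        String name;
    }

    class ProductPager {

        // Fetch a single page of results instead of the whole table.
        @SuppressWarnings("unchecked")
        List<Product> page(Session session, int pageNumber, int pageSize) {
            return session.createQuery("from Product p order by p.id")
                          .setFirstResult(pageNumber * pageSize)
                          .setMaxResults(pageSize)
                          .list();
        }
    }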
... In-memory Session ... LazyInitializationException ...
You could look at Ebean ORM which doesn't use session objects ... and where lazy loading just works. Certainly an option, not overkill, and will be simpler to understand.
If billions of users are using our app or website, then with JDBC the query will be executed billions of times, but with Hibernate (thanks to caching) the query may be executed only once for any number of users; this is the most important and easiest advantage of Hibernate over JDBC.
