What can be the pain points when executing a performance test on a Java application that uses Hibernate as the ORM tool and Oracle 11g as the database?
I am also thinking of benchmarking the application. What should I do for that?
Thanks
The key things are:
1) As far as practical, test the application using real-world usage scenarios. This can be rather complicated in practice; I've used Perl scripts based on WWW::Mechanize and HTTP::Recorder for this in the past.
2) Failing that, use ab or JMeter.
3) Record as much as possible (you don't mention which web server you are using; if it's Apache, add %D to the log format so request durations are captured).
4) Make sure you saturate the system: you want to see some major garbage collections (or prove the application is homeostatic, which is a very rare thing for a Java program).
5) Analyse the web server and GC logs.
The first place to start is to agree what is acceptable performance. Without that agreement, anything else is premature.
Different application types will have different pain points: the mix of reads and writes, concurrent updates (especially on the same data, e.g. selling concert tickets or airplane seats), and data volumes.
I'm not sure to what extent your app "uses Oracle 11g as database" or even what type of environment you have (I assume a typical OLTP one), but from the Oracle side you can do several things (to name a few):
From an overall db standpoint, look into AWR (Automatic Workload Repository, formerly Statspack). I believe this is built into Enterprise Manager as well.
SQL Trace + tkprof.
If using any PL/SQL, DBMS_HPROF (the hierarchical profiler).
If using any PL/SQL, log significant actions to log tables (via autonomous transactions), recording the timestamp of each entry, the action taken, etc. Roll your own or use an existing framework (there are several out there); just make sure it's flexible (the level of logging output can be changed).
Have Hibernate log all executed SQL.
Check this link for configuration properties and set hibernate.show_sql to true. Once you can see what's being executed, check any unusual statements and profile them if you suspect they're slower than expected.
Afterwards, checking their execution plans and tuning them will help both your application and your database.
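If it helps, here is a minimal sketch of setting those properties programmatically when building the session factory (this assumes a hibernate.cfg.xml on the classpath; the property names are standard Hibernate settings):

```java
import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

public class HibernateSqlLogging {
    public static SessionFactory buildSessionFactory() {
        // hibernate.show_sql prints each executed statement to stdout;
        // format_sql and use_sql_comments (both optional) make the output readable.
        return new Configuration()
                .configure() // reads hibernate.cfg.xml from the classpath
                .setProperty("hibernate.show_sql", "true")
                .setProperty("hibernate.format_sql", "true")
                .setProperty("hibernate.use_sql_comments", "true")
                .buildSessionFactory();
    }
}
```

Alternatively, setting the org.hibernate.SQL log category to DEBUG in your logging framework routes the same statements through your normal logs.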
I'm not sure that the technologies in use (e.g. Hibernate in your case) change the way you would performance-test an application.
There are standard tools for running performance tests, and they should be applicable regardless of the technologies your application uses.
Setting show_sql to true would definitely help in looking at the queries and analyzing them further, but may not help with the overall performance test of your application.
Look at the following post Java benchmarking tool for benchmarking tools.
Hope this helps.
We have a web-based application in the development phase, using Spring 5, JPA (Hibernate) and PostgreSQL 9.4.
Until now we have been using one instance of the PostgreSQL db for our work. We don't have any schema generation script; we simply updated the db whenever we needed a new table, column, etc. For Hibernate, we generated the classes from the db.
Now that we have some test data, each change to the db brings a lot of trouble and confusion. We realized that we need to create, and start maintaining, a schema generation file along with scripts that generate the test data.
After some research, we see two options:
Create two *.sql files: the first containing the schema generation script, the second the SQL that creates the test data. Then add a small module with a class that executes the *.sql files using plain JDBC (see the sketch after this list). We continue developing, and whenever we make a change we quickly wipe->create->populate the db. This approach looks the most appealing to us at this point: it's quick, simple and robust.
The second is to set up a tool that can help with this, e.g. Liquibase.
This approach also looks good in terms of versioning support and other capabilities. However, we are not in production yet; we are in an active development phase. We don't have many devs making db changes, and we are not sure how frequently we will update the db schema in production; it could be rare.
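For reference, a minimal sketch of what the runner class in the first option might look like. The file names, connection URL and credentials are made up, and the naive split on ';' would break on functions or triggers whose bodies contain semicolons:

```java
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class DbRebuilder {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/devdb", "dev", "dev")) {
            runScript(con, "schema.sql");    // wipe + recreate the schema
            runScript(con, "test-data.sql"); // populate the test data
        }
    }

    // Naive runner: splits the file on ';' and executes each statement.
    private static void runScript(Connection con, String file) throws Exception {
        String sql = new String(Files.readAllBytes(Paths.get(file)), StandardCharsets.UTF_8);
        try (Statement st = con.createStatement()) {
            for (String statement : sql.split(";")) {
                if (!statement.trim().isEmpty()) {
                    st.execute(statement);
                }
            }
        }
    }
}
```

Spring's ScriptUtils/ResourceDatabasePopulator could also handle the script execution, which would remove most of this hand-written code.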
The question is: would the first approach be bad practice, and would applying the second one give the most benefits and be worth using?
Would appreciate any comments or any other suggestions!
The first approach is NOT a bad practice in itself, but given the growth of tools like Liquibase it is becoming one.
If you are in the early or middle stages of the development phase, go ahead with Liquibase, along with Spring Data. Conversely, in the closing stages of development, think about whether you really need it.
I would suggest the second approach, as it automatically finds new scripts as you add them and executes them on startup. Moreover, when tools like Liquibase and Flyway are available, why reinvent the wheel?
The second approach also removes the unnecessary code for manually executing the *.sql files; that code needs testing too, and is error-prone when updated.
Moreover, with the first approach the manual code that executes the scripts also has to work out which scripts need to be executed: if you already have an existing database and are adding some new scripts, only those new scripts must run. All of this is taken care of automatically with the second approach, so you don't need to worry about an already-executed script being executed again (see the sketch below).
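As an illustration, a sketch of wiring Liquibase into a Spring application; the bean style and changelog path are illustrative, not prescriptive:

```java
import javax.sql.DataSource;
import liquibase.integration.spring.SpringLiquibase;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class LiquibaseConfig {
    // Runs any not-yet-applied changesets on application startup;
    // Liquibase records executed changesets in its DATABASECHANGELOG table,
    // so already-applied scripts are never run twice.
    @Bean
    public SpringLiquibase liquibase(DataSource dataSource) {
        SpringLiquibase liquibase = new SpringLiquibase();
        liquibase.setDataSource(dataSource);
        liquibase.setChangeLog("classpath:db/changelog/db.changelog-master.xml");
        return liquibase;
    }
}
```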
Hope this answers your concern. Happy coding
Within our company it's kind of a standard to create repositories for data that is originally stored in the database, as described for example in https://thinkinginobjects.com/2012/08/26/dont-use-dao-use-repository/.
Our web infrastructure consists of a few independent web applications within Tomcat 7 for printing, product description, product order (this one is not persisted in the database!), category description, etc.
They are all built on the Servlet 2 API.
So each repository instance/implementation holds a specialised kind of data represented by serializable classes, and the instances of these serializable classes are set up/filled by a periodically executed database query (for every result row the field setters are called; this reminds me of domain-oriented entity beans with CMP).
The repositories are initialized in the servlets' init sequences (so every servlet keeps its own set of instances).
Each context has its own connection to the Oracle database (set up by a resource description file on deployment).
All the data is read only, we never need to write back to the database.
Because we need some of these data types in more than one web application (context), and some even in more than one servlet within the same web context, repositories for an identical data type are instantiated more than once; e.g. four times, twice within the same application.
In the end some of the data is duplicated, and I'm not sure this is as clever and efficient as it should be. It should be possible to share the same repository object across more than one application (JNDI?), but at the very least it must be possible to share it between several servlets within the same application context (a sketch of the latter follows).
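For the within-one-context half of that point, a minimal sketch of sharing a single instance between servlets via the ServletContext; the RepositoryBootstrap and ProductRepository names are hypothetical:

```java
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

// Registered in web.xml via a <listener> element; builds the repository once
// per web application and publishes it to every servlet in that context.
public class RepositoryBootstrap implements ServletContextListener {

    static class ProductRepository { /* placeholder for the real repository */ }

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        sce.getServletContext().setAttribute("productRepository", new ProductRepository());
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        sce.getServletContext().removeAttribute("productRepository");
    }
}
```

Each servlet would then fetch it with getServletContext().getAttribute("productRepository") instead of building its own copy.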
Besides, I'm puzzled by the idea of using a "self-built" repository instead of something like a well-tested, openly developed cache (Ehcache, JCS, ...), especially since some of these caches also provide options for distributed caching (so it should also work within the same container).
When certain entries are searched for, the search algorithm iterates over all entries in the repository (see the link above). For every search pattern there are specialised functions that are called directly from the business logic classes using the "entity beans"; there is no specification object or interface.
In the end the application server as a whole does not perform that well, and it uses a hell of a lot of RAM (at least for approximately 10,000 DB entries); in my opinion this is most probably related to the use of serializable XSD-to-JAXB-generated classes.
Additionally, every time an application is deployed for tests you have to wait at least two minutes until all entries of the database have been loaded into the repositories; when deploying to live there is a clearly noticeable out-of-service phase on context/servlet start-up.
I tend to think all of this is closely related to the solutions I described above.
Because I haven't got any experience in this field and I'm new to the company, I don't want to be too obtrusive.
Maybe you can help me to evaluate ideas for a better setup:
Is it better for performance and memory to unify all the repositories into one "repository servlet" and request objects from it via HTTP (I don't think so, though it seems quite modular/distributed-system friendly), or should I try to go with JNDI (never did that before) and connect to the repository similarly to a JDBC database?
Wouldn't it be even more sensible, faster and more efficient to at least use one single connection pool for the whole Tomcat instance (and reference this pool from within each web app's deployment descriptor; see the sketch after this list)? Or might that slow down connections or limit them in some other aspect?
I was told that the cache system (Ehcache) didn't work well (at least not compared with the performance of the self-written solution - though I can't believe that). I imagine that repositories backed by a distributed cache (shared across all contexts) used in all web applications would not only reduce the memory footprint significantly but also not be significantly slower; in fact I believe it would be faster, have shorter start-up times, and not need to be redeployed as often.
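Regarding the single connection pool idea, a hedged sketch of the lookup side: the pool would be declared once as a global <Resource> in Tomcat's server.xml and exposed to each webapp through a <ResourceLink>; the JNDI name here is illustrative:

```java
import java.sql.Connection;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class SharedPoolLookup {
    // Looks up the container-managed pool; "jdbc/OracleShared" must match
    // the <ResourceLink> in each webapp and the global <Resource> in server.xml.
    public static Connection getConnection() throws Exception {
        InitialContext ctx = new InitialContext();
        DataSource ds = (DataSource) ctx.lookup("java:comp/env/jdbc/OracleShared");
        return ds.getConnection();
    }
}
```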
I'm very grateful for every tip or hint and your thoughts. It would be marvellous to get a peer review of my ideas based on practical experience.
So thank you very much in advance!
Is it better to hold a repository for every web application (context), or is it better to share a common instance via JNDI or a similar technique?
Unless someone proves otherwise, I would say there is no standard way to do it - standard meaning as defined in the Servlet Spec or the rest of the Java EE spec canon.
There are technical ways to do it which probably depend on a specific application server implementation, but this cannot be "better" in its universal sense.
If you have two applications that operate on the same data, I wonder whether the partitioning of the applications is useful. Maybe all functionality operating on some kind of data needs to be in the same application?
Within our company it's kind of a standard to create repositories for data that is originally stored in the database, as described for example in https://thinkinginobjects.com/2012/08/26/dont-use-dao-use-repository/.
I looked up Evans on our bookshelf. The blog post is quite weird. A repository and a DAO are basically the same thing: each provides CRUD operations for an object or for a tree of objects (Evans says only for the aggregate roots).
The repositories are initialized in the servlets' init sequences (so every servlet keeps its own set of instances). Each context has its own connection to the Oracle database (set up by a resource description file on deployment). [ ... ]
In the end the application server as a whole does not perform that well, and it uses a hell of a lot of RAM
When something performs badly, it's best to do profiling, e.g. with YourKit, or with perf and FlameGraphs if you are on Linux. If your applications need a lot of RAM, analyze the heap, e.g. with Eclipse MAT. There is no way somebody can give you a recommendation or a best-practice hint without seeing a single line of code.
A general answer would have to include everything about performance tuning for Oracle DBs, JDBC, Java collections and concurrent programming, networking and operating systems.
I was told that the cache system (Ehcache) didn't work well (at least not compared with the performance of the self-written solution - though I can't believe that)
I can. Ehcache is between 10 and 20 times slower than a simple HashMap. See: cache benchmarks. You only need a map when you do a complete preload and don't have any mutations.
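To illustrate the point, a sketch of a preloaded read-only map repository; the table and column names are invented:

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public class PreloadedProducts {
    private final Map<Long, String> namesById;

    // One query at startup; afterwards every lookup is a plain O(1) map read
    // with no cache layer in between.
    public PreloadedProducts(Connection con) throws Exception {
        Map<Long, String> map = new HashMap<Long, String>();
        try (Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("SELECT id, name FROM products")) {
            while (rs.next()) {
                map.put(rs.getLong("id"), rs.getString("name"));
            }
        }
        this.namesById = Collections.unmodifiableMap(map);
    }

    public String findName(long id) {
        return namesById.get(id);
    }
}
```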
I imagine that repositories backed by a distributed cache (shared across all contexts) used in all web applications would not only reduce the memory footprint significantly but also not be significantly slower
Distributed caches need to go over the network and add serialization/deserialization overhead. That's probably another factor of 30 slower. And when would the distributed cache be updated?
I'm very grateful for every tip or hint and your thoughts.
Wrap up:
Do the normal software engineering homework: profile, analyse, and spend the tuning effort in the right places.
Ask specific questions on one topic on Stack Overflow and share your code and performance data. Ask about one thing at a time, and read https://stackoverflow.com/help/on-topic
You may also come to the conclusion that there is nothing to tune. There are applications out there that need a day to build up an in-memory data structure from persistent data. Maybe it's just a lot of data? If you don't like the downtime, use blue-green deployment. Also, use smaller data sets for development and testing.
I need to log the SQL execution times in my Java EE application (Any further statistics would be an optional bonus).
Things are set up in a more or less standard way: a DataSource on the application server serving pooled JDBC connections.
The application uses a mix of the following for DB access:
Hibernate and
Spring JdbcTemplate
It runs on:
GlassFish OSE and
an Oracle DB
I know about "Anything better than P6Spy?", however the question and its answers are outdated from my point of view.
What I've found so far:
I could go for the pure Hibernate approach (see "hibernate show query execution time"),
but it's not feasible due to the mixed DB access in my case.
I could use one of the custom JDBC drivers:
p6spy - however, the project seems to have been dead for a couple of years (last commit 3 years ago: http://sourceforge.net/p/p6spy/code/23/tree/trunk/)
log4jdbc - however, there has been no release for more than a year, and the source seems to have been untouched for about 6 months (http://code.google.com/p/log4jdbc/source/list)
another two: log4jdbc-log4j2 and log4jdbc-remix - these seem alive, but I'm not sure about their stability and breadth of usage
Recommendations based on experiences are very welcome.
Please note, I'm interested in answers of the kind "We're using XYZ and this is our experience", rather than "I googled just now and feel like..."
If you want dead-accurate SQL execution times, the best option is SQL trace. But since you want to capture them inside your Java EE application, I take it a somewhat accurate execution time is good enough.
The following are the things I would suggest (I have implemented them in my own code):
If you just want it for logging purposes, add appropriate Log4j debug messages that print the time along with the log entry.
I implemented a BatchLog table for my application which records the start and end time of each operation; in your case that would be the start and end time of your query. If it is just a single query then triggers might help here, or else you can update the log table just before and after running the query. Better still, a stored procedure can take care of the whole thing and give more accurate data (see the sketch after this list).
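A sketch of the simple logging variant, timing a statement by hand with Log4j; the SQL and logger wiring are placeholders, and you could swap the log call for an INSERT into a BatchLog table if you want the timings queryable later:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import org.apache.log4j.Logger;

public class TimedQuery {
    private static final Logger LOG = Logger.getLogger(TimedQuery.class);

    // Executes the statement and logs its wall-clock duration.
    public static void execute(Connection con, String sql) throws Exception {
        long start = System.nanoTime();
        try (PreparedStatement ps = con.prepareStatement(sql)) {
            ps.execute();
        } finally {
            long elapsedMs = (System.nanoTime() - start) / 1000000;
            LOG.debug("SQL took " + elapsedMs + " ms: " + sql);
        }
    }
}
```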
Measuring time and logging it seems like a job for AOP. If you are using EJB, a simple interceptor should solve your problem (for example http://www.javacodegeeks.com/2013/07/java-ee-ejb-interceptors-tutorial-and-example.html). If it's Spring (judging from the JdbcTemplate), try AspectJ.
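For the EJB case, a minimal interceptor sketch; the class name and message format are illustrative:

```java
import javax.interceptor.AroundInvoke;
import javax.interceptor.InvocationContext;

// Attached to a DAO/service bean with @Interceptors(TimingInterceptor.class);
// logs the wall-clock time of every intercepted business method.
public class TimingInterceptor {

    @AroundInvoke
    public Object time(InvocationContext ctx) throws Exception {
        long start = System.nanoTime();
        try {
            return ctx.proceed();
        } finally {
            long elapsedMs = (System.nanoTime() - start) / 1000000;
            System.out.println(ctx.getMethod().getName() + " took " + elapsedMs + " ms");
        }
    }
}
```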
OK, thanks a lot for your answers.
Finally I decided to go for P6Spy, with patches specific to our scenario.
Moreover, as I believe the gap P6Spy fills still exists these days, I decided to participate in its development (https://github.com/p6spy/p6spy). Feel free to report your issues or feature requests at https://github.com/p6spy/p6spy/issues to make it fit your needs.
I have a scenario wherein I need to keep a log of all files (flat, XML) coming into an application. This log table is hardly used, except for fault investigation, regulatory purposes and the like, and its data will be purged regularly.
We are using JPA 2.0 for persistence. We tried an initial prototype with pure JPA persistence using entityManager.persist() with an immediate flush, but the performance was not up to expectations. So I suggested native named queries for this operation, and the performance improvement in tests was huge (300 milliseconds vs 47 milliseconds).
But the lead engineer is a bit adamant against using native named queries, saying that they are coupled to the database, less maintainable, and so on.
Questions:
What is your take on this, if you had to make the decision? How often do database or schema changes happen once the application goes to production?
Is there any other way to improve performance? Performance is very, very critical for this application.
It's only been 4 years since I started programming, but I have never seen a DB schema change or a DB provider change happen for an existing application.
Note: we are using EclipseLink 2.3 and Oracle. Also, it's a fresh application that we are developing, in case these points make the question clearer.
How often do database or schema changes happen once the application goes to production?
This is immaterial to your problem at hand. The quantity of changes to database schemas does not matter; what matters is the maintainability of your database model and how well it has been designed. Most business apps will see a lot of changes being made if sufficient performance testing hasn't been done, which is sadly true for most apps.
If you are writing a typical line-of-business application, I would expect some form of round-trip engineering between the object model and the database model to occur in development. Your DBAs ought to own and know the database model quite well, so that they can aid in, or perform, the fine-tuning of the queries issued by your ORM framework. This is keeping in mind that you may not be able to rely on the queries issued by the ORM framework alone. All changes should preferably be done and tested in the development and integration-testing (and possibly UAT, if you have one) environments before being rolled out to production, and, as common sense would suggest, all changes should be under version control.
On the topic of coupling queries to a database: that is a decision your business has to take. If you are in the business of supporting multiple databases, then you ought to test against all of them. You should also be capable of providing different distributions for different databases; this is made easier if you place your native queries in database-specific orm.xml files (orm-oracle.xml, orm-mysql.xml, etc.) and rename the appropriate file to orm.xml when you prepare a distribution. Using Maven or Ant would make the proposed change easy to implement.
Is there any other way to improve performance? Performance is very, very critical for this application.
That would depend on how well you have designed your object and data models, how well you understand your ORM framework, and how willing you are to "corrupt" your object model.
The first rule of performance-tuning any application is: always measure twice and cut once. You cannot simply iterate through a list of possible solutions and try each one without knowing how they work and in what circumstances they are useful; okay, you could do that if your business is willing to invest the time, but that is often not the case.
To begin, you'll need to understand why the native queries provide, or appear* to provide, better performance. Maybe it has a lot to do with the fact that you are merely inserting data, and it is better for the ORM framework to simply issue the INSERT statement rather than construct one from HQL or the abstract query notation used under the hood; only a profiler will reveal the difference.
If the above is true, then you could reconsider whether your audit tables must be managed by the ORM framework at all. If your application only writes to these tables and never reads from them (and it is quite possible that another app is responsible for reading the entries), then I would suspect that not managing these tables in the ORM would provide better performance, especially if you use plain JDBC to issue the INSERT statements. The reason is quite simple: if your ORM framework manages an entity, it is also responsible for managing the persistence context (which now includes the class and the associated table); not having the ORM manage the entity means the persistence context need not be updated at all for audit entries.
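A hedged sketch of that idea, writing the audit entry with plain JDBC; the table and column names are invented for illustration:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.Timestamp;

public class AuditLogWriter {
    // A plain JDBC INSERT: no managed entity, so the persistence context
    // is never touched for audit entries.
    public void logIncomingFile(Connection con, String fileName, String fileType)
            throws Exception {
        String sql = "INSERT INTO incoming_file_log (file_name, file_type, received_at) "
                   + "VALUES (?, ?, ?)";
        try (PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setString(1, fileName);
            ps.setString(2, fileType);
            ps.setTimestamp(3, new Timestamp(System.currentTimeMillis()));
            ps.executeUpdate();
        }
    }
}
```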
There is a healthy range of other performance-tuning measures you can undertake, but as I stated earlier, it would require you to understand a profiler report and estimate which of the possible choices would work best in your application.
* I'm afraid that unless you publish your benchmarks and how you conducted them, I will be skeptical of such claims.
It's quite rare that you actually DO switch the database provider, especially once you've paid several hundred thousand in license fees for an excellent and high-performance database like Oracle. Besides, the SQL syntax variants of the INSERT statement are not so distinct that you would be unable to switch databases, even when using native SQL in exceptional cases.
I don't see why patching a single query that needs extra tuning is bad. Ask your lead developer why he's so strict. But before you do, use a profiler such as JProfiler or YourKit to identify the exact spot that's causing the performance issues. With JPA, any of these may cause issues: caching, eager loading of dependent data (which you probably don't need), inefficient SQL generation, a bad query execution plan in your Oracle database, etc. Maybe you don't need a native query after all.
If performance is so critical, then maybe JPA is not good enough for the job. Have you (and your lead developer) considered other frameworks such as jOOQ, QueryDSL, MyBatis or anything similar? I understand from your comments that your main use cases are OLAP-style querying, not OLTP, so you might even like to use advanced Oracle features such as analytic functions and data-warehousing functionality, for which jOOQ has native support, for instance...
1) I have seen only 2 applications that moved from Oracle to MySQL (to save on license costs) in 10 years, so it's not something that happens very often; BUT if you want to write integration tests using another database (e.g. HSQLDB), you'll be in trouble.
About how often the schema changes after an app goes to production, my answer is: a lot! If the app is updated regularly, expect lots of changes, as the team usually comes to understand the business better. I even worked on a project in which the schema was considerably different after one year of the app being live.
At the same time, it looks like you deferred optimization until the last possible moment (a good thing to do) and now you need to optimize the SQL using some native queries (which also happens quite regularly)... What I'm trying to say is that your idea doesn't sound bad at all to me.
2) In the past I've used a mix of Hibernate and iBatis (or MyBatis nowadays) for similar situations (in case you want to check out iBatis). And one question: why are you doing a flush() after each persist()? You shouldn't really need to do that.
Also, I'm quite surprised that the inserts take so much longer when done through EclipseLink. The calls to persist() should take almost the same amount of time as the native query (I assume they would take longer if there were any lifecycle callbacks). I assume you've seen the SQL generated by EclipseLink; is it that different?
I know my answer is not specific at all, but I hope it helps.
I'm hoping to find out what tools folks use to synchronize data between databases. I'm looking for a JDBC solution that can be used as a command-line tool.
There used to be a tool called Sync4J that used the SyncML framework but this seems to have fallen by the wayside.
I have heard that the Data Replication Service provided by db4o is really good. It allows you to use Hibernate to back onto an RDBMS; I don't think it supports JDBC though (http://www.db4o.com/about/productinformation/drs/Default.aspx?AspxAutoDetectCookieSupport=1).
There is an open source project called Daffodil, but I haven't investigated it at all. (https://daffodilreplicator.dev.java.net/)
The one I am currently considering using is called SymmetricDS (http://symmetricds.sourceforge.net/)
There are others, they each do it slightly differently. Some use triggers, some poll, some use intercepting JDBC drivers. You need to decide what technical limitations you are under to determine which one you really want to use.
Wikipedia provides a nice overview of different techniques (http://en.wikipedia.org/wiki/Multi-master_replication) and also provides a link to another alternative DBReplicator (http://dbreplicator.org/).
If you already have a model and DAO layer for your codebase, you can just create your own sync framework; it isn't hard.
Copying the data is as simple as:
read an object from database A
remove database metadata (uuid, etc)
insert into database B
Syncing requires some knowledge of what has been synced already. You can either determine it at runtime, by getting the lists of UUIDs from TableInA and TableInB and working out which entries are new, or you can keep a table of items that need to be synced (populated by a trigger upon insert/update in TableInA) and run from that. Your tool can be a TimerTask, so the databases are kept in sync at whatever time granularity you desire (a sketch follows).
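A rough sketch of the copy step driven by a TimerTask; the Dao and Record types are hypothetical stand-ins for your existing model/DAO layer:

```java
import java.util.List;
import java.util.Timer;
import java.util.TimerTask;

public class SyncJob extends TimerTask {

    interface Record { void clearDatabaseMetadata(); }

    interface Dao {
        List<Record> findNotYetSynced(); // however you track synced state
        void insert(Record r);
    }

    private final Dao source; // reads from database A
    private final Dao target; // writes to database B

    public SyncJob(Dao source, Dao target) {
        this.source = source;
        this.target = target;
    }

    @Override
    public void run() {
        for (Record r : source.findNotYetSynced()) { // 1. read new objects from A
            r.clearDatabaseMetadata();               // 2. strip uuid/identity columns
            target.insert(r);                        // 3. insert into B
        }
    }

    public static void start(Dao source, Dao target) {
        // daemon timer, re-syncing every 60 seconds; choose your own granularity
        new Timer(true).schedule(new SyncJob(source, target), 0, 60000);
    }
}
```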
However, there is probably some tool out there that does it all without any of this implementation faff, and each implementation would differ based on business needs anyway. In addition, there will be replication tools at the database level.
True synchronization requires some data that I hope your database schema has (you can read the SyncML docs to see how they proceed). Sync4J won't help you much; it's really high-level and XML-oriented. If you don't foresee any conflicts (which means really easy synchronisation), you could try a lightweight ETL like Enhydra Octopus.
I'm primarily using Oracle at the moment, and the most full-featured route I've come across is Red Gate's Data Compare:
http://www.red-gate.com/products/oracle-development/data-compare-for-oracle/
This old blog post gives a good summary of the available solution routes:
http://www.novell.com/coolsolutions/feature/17995.html
The JDBC-specific offerings I've come across have been very basic. The solution mentioned by Aidos seems the most feature-complete if you want to go down the publish-subscribe route:
http://symmetricds.codehaus.org/
Hope this helps.