I came across the statement below, which describes the performance improvement you get with the JDBC PreparedStatement class:
If you submit a new, full SQL statement for every query or update to
the database, the database has to parse the SQL and for queries create
a query plan. By reusing an existing PreparedStatement you can reuse
both the SQL parsing and query plan for subsequent queries. This
speeds up query execution, by decreasing the parsing and query
planning overhead of each execution.
Let's say I am creating the statement and providing different values when running the queries, like this:
String sql = "update people set firstname=? , lastname=? where id=?";
PreparedStatement preparedStatement = connection.prepareStatement(sql);

preparedStatement.setString(1, "Gary");
preparedStatement.setString(2, "Larson");
preparedStatement.setLong(3, 123);
int rowsAffected = preparedStatement.executeUpdate();

preparedStatement.setString(1, "Stan");
preparedStatement.setString(2, "Lee");
preparedStatement.setLong(3, 456);
rowsAffected = preparedStatement.executeUpdate(); // reuse the variable; redeclaring it would not compile
Will I still get the performance benefit, given that I am setting different values, so the final query changes based on those values?
Can you please explain exactly when we get the performance benefit? Do the values also have to be the same?
When you use a prepared statement (i.e. a pre-compiled statement), the database compiles it as soon as it receives it, and caches the result so it can reuse the compiled form for successive calls of the same statement. So the statement is pre-compiled for successive calls.
You generally use a prepared statement with bind variables, where you provide the values at run time. For successive executions of the prepared statement, you can supply bind values that differ from previous calls. From the database's point of view, it does not have to compile the statement every time; it just plugs in the bind values at run time. So it becomes faster.
Another advantage of prepared statements is protection against SQL injection attacks.
So no, the values do not have to be the same.
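To make the reuse concrete, here is a minimal sketch (the table matches the question; the helper method and data array are hypothetical) that runs the same update many times through one PreparedStatement, adding JDBC batching on top of the plan reuse:
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

// The statement is parsed and planned once; each iteration only supplies new
// bind values. Batching additionally amortizes network round-trips.
static void renamePeople(Connection connection, Object[][] rows) throws SQLException {
    String sql = "update people set firstname=?, lastname=? where id=?";
    try (PreparedStatement ps = connection.prepareStatement(sql)) {
        for (Object[] row : rows) {
            ps.setString(1, (String) row[0]);  // firstname
            ps.setString(2, (String) row[1]);  // lastname
            ps.setLong(3, (Long) row[2]);      // id
            ps.addBatch();                     // queue against the same cached plan
        }
        ps.executeBatch();                     // one round-trip for the whole batch
    }
}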
Although it is not obvious, SQL is not a scripting language but a "compiled" one, and this compilation (a.k.a. optimization, a.k.a. hard parse) is a very expensive task. Oracle has a lot of work to do: it must parse the query, resolve table names, validate access privileges, and perform some algebraic transformations, and then it has to find an effective execution plan. Oracle (and other databases too) can join only TWO tables at a time, not more. That means that when you join several tables in SQL, Oracle has to join them one by one; i.e. if you join n tables in a query, there can be up to n! possible execution plans. By default Oracle is limited to 8000 permutations when searching for an "optimal" (not necessarily the best) execution plan.
So the compilation (hard parse) might be more expensive than the query execution itself. To spare resources, Oracle shares execution plans between sessions in a memory structure called the library cache. And here another problem can occur: too much parsing requires exclusive access to a shared resource.
So if you do too much (hard) parsing, your application cannot scale - sessions block each other.
On the other hand, there are situations where bind variables are NOT helpful.
Imagine such a query:
update people set firstname=? , lastname=? where group=? and deleted='N'
Since the column deleted is indexed, and Oracle knows that 98% of the values are 'Y' and only 2% are 'N', it will decide to use the index on the deleted column. If you used a bind variable for the condition on the deleted column, Oracle could not find an effective execution plan, because the plan also depends on an input value that is unknown at compile time.
(PS: since 11g this is more complicated because of bind variable peeking.)
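A hedged sketch of how you might apply this in JDBC (the column names are illustrative; group_id is used here to avoid the reserved word in the example above): bind the values that do not affect plan choice, but keep the skewed column as a literal so the optimizer can use its statistics:
// Bind firstname/lastname/group, but leave deleted='N' as a literal: the
// optimizer sees the skewed value at parse time and can pick the index.
String sql = "update people set firstname=?, lastname=? "
           + "where group_id=? and deleted='N'";
try (PreparedStatement ps = connection.prepareStatement(sql)) {
    ps.setString(1, "Gary");
    ps.setString(2, "Larson");
    ps.setLong(3, 42L);  // hypothetical group id
    ps.executeUpdate();
}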
I am trying to write a database-independent application with JDBC. I now need a way to fetch the top N entries out of some table. I saw there is a setMaxRows method in JDBC, but I don't feel comfortable using it, because I am scared the database will push out all results and only the JDBC driver will trim the result set. If I need the top 5 results in a table with a billion rows, this will break my neck (the table has a usable index).
Writing special SQL statements for every kind of database isn't very nice, but will let the database do clever query planning and stop fetching more results than necessary.
Can I rely on setMaxRows to tell the database not to do too much work?
I guess in the worst case I can't rely on this working in the hoped-for way. I'm mostly interested in Postgres 9.1 and Oracle 11.2, so if someone has experience with these databases, please step forward.
will let the database do clever query planning and stop fetching more
results than necessary.
If you use PostgreSQL:
SELECT * FROM tbl ORDER BY col1 LIMIT 10;  -- slow without index
Or:
SELECT * FROM tbl LIMIT 10;  -- fast even without index
Or, for Oracle:
SELECT *
FROM (SELECT * FROM tbl ORDER BY col1 DESC)
WHERE ROWNUM <= 10;
.. then only 10 rows will be returned. But if you sort your rows before picking the top 10, basically all qualifying rows have to be read before they can be sorted.
Matching indexes can prevent this overhead!
If you are unsure what JDBC actually sends to the database server, run a test and have the database engine log the statements it receives. In PostgreSQL you can set, in postgresql.conf:
log_statement = all
(and reload) to log all statements sent to the server. Be sure to reset that setting after the test or your log files may grow huge.
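For comparison, a small sketch of the two approaches in JDBC (assuming an open Connection conn and the tbl/col1 names from the examples above). Note that setMaxRows is driver-dependent; as discussed below, the PostgreSQL driver does push the limit to the server, but other drivers may only trim rows client-side:
// Driver-level cap: behavior depends on the driver's implementation.
try (Statement st = conn.createStatement()) {
    st.setMaxRows(10);
    ResultSet rs = st.executeQuery("SELECT * FROM tbl ORDER BY col1");
    while (rs.next()) { /* at most 10 rows reach the application */ }
}

// Dialect-level cap: the limit is unambiguously part of the SQL the server sees.
try (Statement st = conn.createStatement()) {
    ResultSet rs = st.executeQuery("SELECT * FROM tbl ORDER BY col1 LIMIT 10");
    while (rs.next()) { /* the server never produces more than 10 rows */ }
}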
The thing that may kill you with billion(s) of rows is the (highly likely) ORDER BY clause in your query. If that ordering cannot be established using an index, then . . . it'll break your neck :)
I would not depend on the JDBC driver here. As a previous comment suggests, it's unclear what it really does (and that may differ across RDBMSs).
If you are concerned about the speed of your query, you can use a LIMIT clause as well. With LIMIT you can at least be sure that it's passed on to the DB server.
Edit: Sorry, I was not aware that Oracle doesn't support LIMIT.
In direct answer to your question regarding PostgreSQL 9.1: Yes, the JDBC driver will tell the server to stop generating rows beyond what you set.
As others have pointed out, depending on indexes and the plan chosen, the server might scan a very large number of rows to find the five you want. Proper server configuration can help it model the costs accurately and prevent this, but if the value distribution is unusual you may need to introduce an optimization barrier (for example with a CTE) to coerce the planner into a good plan.
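A hedged sketch of that CTE trick in JDBC (table and column names hypothetical; PostgreSQL before version 12 treats a CTE as an optimization fence, which is what this exploits):
// The CTE is materialized first, so the outer LIMIT cannot tempt the planner
// into a bad index choice on the inner query.
String sql = "WITH candidates AS (SELECT * FROM tbl WHERE col2 = ?) "
           + "SELECT * FROM candidates ORDER BY col1 LIMIT 5";
try (PreparedStatement ps = conn.prepareStatement(sql)) {
    ps.setString(1, "some-value");  // hypothetical filter value
    ResultSet rs = ps.executeQuery();
}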
Hibernate's Criteria support provides a setMaxResults() method to limit the results returned from the DB.
I can't find any answer to this in their documentation: how is this implemented? Is it querying for the entire result set and then returning only the requested number? Or is it truly limiting the query on the database end (think of the LIMIT keyword as in MySQL)?
This is important because if a query could potentially return many, many results, I really need to know whether setMaxResults() will still query for all the rows in the database (which would be bad).
Also, if it's truly limiting the number of rows on the database end, how does it achieve this across databases (since I don't think every RDBMS supports a LIMIT clause like MySQL does)?
Hibernate asks the database to limit the results returned by the query. It does this via the dialect, which uses whatever database-specific mechanism there is to do this (so for SQL Server it will do something like "select top n * from table", Oracle will do "select * from table where rownum < n", MySQL will do "select * from table limit n", etc.). Then it just returns what the database returns.
The class org.hibernate.dialect.Dialect contains a method called supportsLimit(). If dialect subclasses override this method, they can implement row limit handling in a fashion native to their database flavor. You can see where this code is called from in the class org.hibernate.loader.Loader, which has a method titled prepareQueryStatement; just search for the word limit.
However, if the dialect does not support this feature, there is a hard check in place against the ResultSet iterator that ensures Java object (entity) results stop being constructed when the limit is reached. This code is also located in Loader.
I use both Hibernate and Hibernate Search, and without looking at the underlying implementation I can tell you that they definitely do not return all results. I have implemented the same query returning all results and then changed it to set the first result and max results (to implement pagination), and the performance gains were massive.
They likely use dialect-specific SQL for this, e.g. LIMIT in MySQL or ROWNUM in Oracle. Your entity manager is aware of the dialect you are using, so this is simple.
Lastly, if you really want to check what SQL Hibernate is producing for this query, just set the "show_sql" property to true when you create your entity manager / factory, and it spits out all the SQL it runs to the console.
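For example, a sketch of the relevant configuration (standard Hibernate property keys, set wherever you build your factory):
// When building the session / entity manager factory, enable SQL logging:
java.util.Properties props = new java.util.Properties();
props.put("hibernate.show_sql", "true");    // print each SQL statement to the console
props.put("hibernate.format_sql", "true");  // pretty-print it (optional)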
HQL does not support a limit inside the query the way SQL does; there is only the setMaxResults() that you already found.
To find out whether it transforms setMaxResults() into a LIMIT query, you can turn on your SQL logging.
I know the question is a bit old, but yes, setMaxResults() truly limits the number of rows on the database end.
If you look at your Hibernate SQL output, you will find the following appended to your query:
limit ?
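A minimal usage sketch (Person is a hypothetical mapped entity, and session is an open org.hibernate.Session): the offset and limit are translated into the dialect's native syntax, such as the limit ? shown above on MySQL:
List<?> page = session.createQuery("from Person p order by p.id")
        .setFirstResult(40)  // offset: skip the first 40 rows
        .setMaxResults(20)   // at most 20 rows, enforced in the generated SQL
        .list();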
I have a Java program that runs a bunch of queries against a SQL Server database. The first of these, which queries a view, returns about 750k records. I can run the query via SQL Server Management Studio and get results in about 30 seconds. However, I kicked off the program last night, and when I checked on it this morning, this query still had not returned results to the Java program, some 15 hours later.
I have access to the database to do just about anything I want, but I'm really not sure how to begin debugging this. What should one do to figure out what is causing a situation like this? I'm not a DBA and am not intimately familiar with the SQL Server tool set, so the more detail you can give me on how to do what you suggest, the more it will be appreciated.
Here's the code:
stmt = connection.createStatement();
clientFeedRS = stmt.executeQuery(queryBuffer.toString()); // queryBuffer: the StringBuffer holding the SQL (name assumed)
EDIT1:
Well, it's been a while and this got sidetracked, but the issue is back. I looked into upgrading from JDBC driver v1.2 to v2.0, but we are stuck on JDK 1.4, and v2.0 requires JDK 1.5, so that's a non-starter. Now I'm looking at my connection string properties. I see two that might be useful.
SelectMethod=cursor|direct
responseBuffering=adaptive|full
Currently, with the latency issue, I am running with cursor as the selectMethod and with the default for responseBuffering, which is full. Is changing these properties likely to help? If so, what would be the ideal settings? I'm thinking, based on what I can find online, that using the direct select method and adaptive response buffering might solve my issue. Any thoughts?
EDIT2:
Well, I ended up changing both of these connection string params, using the default select method (direct) and specifying the responseBuffering as adaptive. This ends up working best for me and alleviates the latency issues I was seeing. Thanks for all the help.
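For reference, a sketch of what that connection string looks like with the Microsoft driver (server and database names are placeholders; selectMethod defaults to direct and is spelled out only for clarity):
String url = "jdbc:sqlserver://yourserver;databaseName=yourDBName;"
           + "selectMethod=direct;responseBuffering=adaptive";
Connection conn = DriverManager.getConnection(url, user, password);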
I had a similar problem, with a very simple request (SELECT . FROM . WHERE = .) taking up to 10 seconds to return a single row when using a JDBC connection in Java, while taking only 0.01s in sqlshell. The problem was the same whether I was using the official MS SQL driver or the jTDS driver.
The solution was to set this property in the JDBC URL:
sendStringParametersAsUnicode=false
Full example if you are using MS SQL official driver : jdbc:sqlserver://yourserver;instanceName=yourInstance;databaseName=yourDBName;sendStringParametersAsUnicode=false;
Instructions for other JDBC drivers and more detailed information about the problem are here: http://emransharif.blogspot.fr/2011/07/performance-issues-with-jdbc-drivers.html
SQL Server differentiates its data types that support Unicode from the ones that just support ASCII. For example, the character data types that support Unicode are nchar, nvarchar and longnvarchar, whereas their ASCII counterparts are char, varchar and longvarchar respectively. By default, all of Microsoft's JDBC drivers send strings in Unicode format to SQL Server, irrespective of whether the data type of the corresponding column defined in SQL Server supports Unicode or not. In the case where the data types of the columns support Unicode, everything is smooth. But in cases where the data types of the columns do not support Unicode, serious performance issues arise, especially during data fetches. SQL Server tries to convert the non-Unicode data in the table to Unicode before doing the comparison. Moreover, if an index exists on the non-Unicode column, it will be ignored. This ultimately leads to a whole-table scan during data fetch, thereby slowing down search queries drastically.
In my case, I had 30M+ records in the table I was searching. The duration of the request went from more than 10 seconds to approximately 0.01s after applying the property.
Hope this helps someone!
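The same fix can also be expressed with a Properties object instead of embedding it in the URL (a sketch; host, database and credentials are placeholders). With the flag off, string parameters are sent as ASCII, so varchar indexes stay usable:
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

Properties props = new Properties();
props.setProperty("user", "appUser");
props.setProperty("password", "secret");
props.setProperty("sendStringParametersAsUnicode", "false"); // the key setting
Connection conn = DriverManager.getConnection(
        "jdbc:sqlserver://yourserver;databaseName=yourDBName", props);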
It appears this may not have applied to your particular situation, but I wanted to provide another possible explanation for someone searching for this problem.
I just had a similar problem where a query executed directly in SQL Server took 1 minute while the same query took 5 minutes through a Java prepared statement. I tracked it down to the fact that it was being run as a prepared statement.
When you execute a query directly in SQL Server, you are providing it a non-parameterized query, in which it knows all of the search criteria at optimization time. In my case, my search criteria included a date range, and SQL Server was able to look at it, decide "that date range is huge, let's not use the date index", and then choose something much better.
When I execute the same query through a Java prepared statement, at the time SQL Server optimizes the query you haven't yet provided any of the parameter values, so it has to guess which index to use. In the case of my date range, if it optimizes for a small range and I give it a large range, it will perform slower than it could. Likewise, if it optimizes for a large range and I give it a small one, it's again going to perform slower than it could.
To demonstrate that this was indeed the problem, as an experiment I tried giving it hints about what to optimize for using SQL Server's OPTIMIZE FOR option. When I told it to use a tiny date range, my Java query (which actually had a wide date range) took twice as long as before (10 minutes, as opposed to 5 minutes before, and as opposed to 1 minute in SQL Server). When I told it the exact dates to optimize for, the execution time was identical between the Java prepared statement and direct execution in SQL Server.
So my solution was to hard code the exact dates into the query. This worked for me because this was just a one-off statement. The PreparedStatement was not intended to be reused, but merely to parameterize the values to avoid SQL injection. Since these dates were coming from a java.sql.Date object, I didn't have to worry about my date values containing injection code.
However, for a statement that DOES need to be reused, hard coding the dates wouldn't work. Perhaps a better option for that would be to create multiple prepared statements optimized for different date ranges (one for a day, one for a week, one for a month, one for a year, and one for a decade...or maybe you only need 2 or 3 options...I don't know) and then for each query, execute the one prepared statement whose time range best matches the range in the actual query.
Of course, this only works well if your date ranges are evenly distributed. If 80% of your records were in the last year and 20% spread out over the previous 10 years, the "multiple queries based on range size" approach might not be best. You'd have to optimize your queries for specific ranges or something; you'd need to figure that out through trial and error.
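The answer above used OPTIMIZE FOR with explicit values; a related variant that needs no parameter names is OPTIMIZE FOR UNKNOWN (SQL Server 2008+), sketched here with hypothetical table and column names:
// OPTIMIZE FOR UNKNOWN makes SQL Server plan from average statistics instead
// of sniffing the first bound values, evening out small- vs. large-range cases.
String sql = "select * from orders where order_date between ? and ? "
           + "option (optimize for unknown)";
try (PreparedStatement ps = conn.prepareStatement(sql)) {
    ps.setDate(1, java.sql.Date.valueOf("2015-01-01"));
    ps.setDate(2, java.sql.Date.valueOf("2015-12-31"));
    ResultSet rs = ps.executeQuery();
}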
Be sure that your JDBC driver is configured to use a direct connection and not a cursor-based connection. You can post your JDBC connection URL if you are not sure.
Make sure you are using a forward-only, read-only result set (this is the default if you are not setting it).
And make sure you are using updated JDBC drivers.
If none of this works, then you should look at SQL Profiler, capture the SQL query as the JDBC driver executes the statement, run that statement in Management Studio, and see if there is a difference.
Also, since you are pulling so much data, you should try to be sure you aren't having any memory/garbage collection slowdowns in the JVM (although in this case that doesn't really explain the time discrepancy).
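A sketch of those first two suggestions combined (assuming an open Connection connection and a query string sql): an explicitly forward-only, read-only statement with a larger fetch size, so the driver streams rows in chunks instead of materializing everything at once:
Statement st = connection.createStatement(
        ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
st.setFetchSize(1000); // rows fetched per network round-trip (a driver hint)
ResultSet rs = st.executeQuery(sql);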
If the query is parameterized, it can be a missing parameter, or a parameter set with the wrong setter, e.g. setLong for a string column, etc.
Try running your query with all parameters hardcoded into the query body, without any ?, to see if this is the problem.
I know this is an old question, but since it's one of the first results when searching for this issue, I figured I should post what worked for me. I had a query that took less than 10 seconds when I used the SQL Server JDBC driver, but more than 4 minutes when using jTDS. I tried all the suggestions mentioned here and none of them made any difference. The only thing that worked was adding ";prepareSQL=1" to the URL.
See Here for more
I know this is a very old question, but since it's one of the first results when searching for this issue, I thought I should post what worked for me.
I had a query that took about 3 seconds in SQL Server Management Studio (SSMS) but 3.5 minutes when run via the jTDS JDBC driver's executeQuery method.
None of the suggestions mentioned above worked for me, mainly because I was using a plain Statement and not a PreparedStatement. The only thing that worked was to specify the name of the initial or default database in the connection string, one in which the connecting user has at least the db_datareader database role. Having only the public role is not sufficient.
Here’s the sample connection string:
jdbc:jtds:sqlserver://YourSqlServer.name:1433/DefaultDbName
Please ensure that you have the trailing /DefaultDbName in the connection string. Here DefaultDbName is the name of a database in which the user ID used for the JDBC connection has at least the db_datareader role. If it is omitted, SQL Server defaults to the master database, and if that user only has the public role in master, the query takes exceptionally long.
I don’t know why this happens. However, I know a different query plan is used in such circumstances. I confirmed this using the SQL Profiler tool.
Environment details:
SQL Server version: 2016
jTDS driver version: 1.3.1
Java version: 11
Pulling back that much data is going to take a lot of time. You should probably figure out a way to avoid needing that much data in your application at any given time; page the data or use lazy loading, for example. Without more details on what you're trying to accomplish, it's hard to say.
The fact that it is quick when run from Management Studio could be due to an incorrectly cached query plan and out-of-date indexes (say, due to a large import or deletions). Does it return all 750K records quickly in SSMS?
Try rebuilding your indexes (or, if that would take too long, updating your statistics); and maybe flush the procedure cache (use caution if this is a production system...): DBCC FREEPROCCACHE
To start debugging this, it would be good to determine whether the problem area is in the database or in the app. Have you tried changing the query so that it returns a much smaller result? If that doesn't return either, I would suggest looking at the way you access the DB from Java.
Try adjusting the fetch size of the Statement, and try the selectMethod of cursor:
http://technet.microsoft.com/en-us/library/aa342344(SQL.90).aspx
We had issues with large result sets using MySQL and needed to make the driver stream the result set, as explained in the following link:
http://helpdesk.objects.com.au/java/avoiding-outofmemoryerror-with-mysql-jdbc-driver
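A sketch of MySQL result-set streaming as described at that link (assuming an open Connection conn; table name hypothetical): Connector/J streams rows one at a time only with this exact forward-only, read-only, Integer.MIN_VALUE fetch-size combination:
Statement st = conn.createStatement(
        ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
st.setFetchSize(Integer.MIN_VALUE); // sentinel value that enables streaming in Connector/J
ResultSet rs = st.executeQuery("select * from huge_table");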
Quote from the MS Adaptive buffer guidelines:
Avoid using the connection string property selectMethod=cursor to allow the application to process a very large result set. The adaptive buffering feature allows applications to process very large forward-only, read-only result sets without using a server cursor. Note that when you set selectMethod=cursor, all forward-only, read-only result sets produced by that connection are impacted. In other words, if your application routinely processes short result sets with a few rows, creating, reading, and closing a server cursor for each result set will use more resources on both client-side and server-side than is the case where the selectMethod is not set to cursor.
And
There are some cases where using selectMethod=cursor instead of responseBuffering=adaptive would be more beneficial, such as:
If your application processes a forward-only, read-only result set slowly, such as reading each row after some user input, using selectMethod=cursor instead of responseBuffering=adaptive might help reduce resource usage by SQL Server.
If your application processes two or more forward-only, read-only result sets at the same time on the same connection, using selectMethod=cursor instead of responseBuffering=adaptive might help reduce the memory required by the driver while processing these result sets.
In both cases, you need to consider the overhead of creating, reading, and closing the server cursors.
See more: http://technet.microsoft.com/en-us/library/bb879937.aspx
Sometimes it can be due to the way parameters are bound to the query object.
I found the following code was very slow when executed from a Java program:
Query query = em().createNativeQuery(queryString)
                  .setParameter("param", SomeEnum.DELETED.name());
Once I removed the parameter and appended the "DELETED" string directly to the query, it became super fast. It may be because SQL Server expects all the parameters to be bound before it decides on the optimized plan.
Two connections instead of two Statements
I had one connection to SQL Server and used it to run all the queries I needed, creating a new Statement in each method that needed DB interaction.
My application was traversing a master table and, for each record, fetching all related information from other tables, so the first and largest query would be running from the beginning to the end of the execution while its result set was being iterated.
Connection conn = DriverManager.getConnection(
        "jdbc:jtds:sqlserver://myhostname:1433/DB1", user, password);
Statement st = conn.createStatement();
ResultSet rs = st.executeQuery("select * from MASTER;");
// iterating rs will cause the other queries to complete entities read from MASTER
// ...
Statement st1 = conn.createStatement();
ResultSet rs1 = st1.executeQuery("select * from TABLE1 where id=" + masterId + ";");
// st1.executeQuery() forces rs to be cached in full
// ...
Statement st2 = conn.createStatement();
ResultSet rs2 = st2.executeQuery("select * from TABLE2 where id=" + masterId + ";");
// ...
This meant that any subsequent query (to read single records from the other tables) would cause the first result set to be cached in its entirety, and only then would the other queries run at all.
The solution was to run all the other queries on a second connection. This left the first query and its result set alone and undisturbed, while the rest of the queries ran swiftly on the other connection.
Connection conn = DriverManager.getConnection(
        "jdbc:jtds:sqlserver://myhostname:1433/DB1", user, password);
Statement st = conn.createStatement();
ResultSet rs = st.executeQuery("select * from MASTER;");
// ...
Connection conn2 = DriverManager.getConnection(
        "jdbc:jtds:sqlserver://myhostname:1433/DB1", user, password);
Statement st1 = conn2.createStatement();
ResultSet rs1 = st1.executeQuery("select * from TABLE1 where id=" + masterId + ";");
// ...
Statement st2 = conn2.createStatement();
ResultSet rs2 = st2.executeQuery("select * from TABLE2 where id=" + masterId + ";");
// ...
Does it take a similar amount of time with SQLWB? If the Java version is much slower, then I would check a couple of things:
You should get the best performance with a forward-only, read-only ResultSet.
I recall that the older JDBC drivers from MSFT were slow. Make sure you are using the latest and greatest. I think there is a generic SQL Server one and one specifically for SQL 2005.