According to the MySQL documentation:
https://dev.mysql.com/doc/connector-j/8.0/en/connector-j-usagenotes-last-insert-id.html
At times, it can be tricky to use the SELECT LAST_INSERT_ID() query,
as that function's value is scoped to a connection. So, if some other
query happens on the same connection, the value is overwritten. On the
other hand, the getGeneratedKeys() method is scoped by the Statement
instance, so it can be used even if other queries happen on the same
connection, but not on the same Statement instance.
First, I considered LAST_INSERT_ID().
The SQL function LAST_INSERT_ID() is connection-safe, but not session/transaction/statement-safe. It can't be used in production, because in a real environment it is very common for multiple sessions/transactions/statements to share one connection.
Next, getGeneratedKeys() via JDBC. While using getGeneratedKeys() in Java, I wanted to see what it does in the database. After a simple JDBC insert into a demo table with an auto-increment primary key, I tried to track the SQL it issues with the following statements:
SET GLOBAL log_output = 'TABLE';
SET GLOBAL general_log = 'ON';
SELECT * FROM mysql.general_log;
I'm sure the new row is inserted correctly and that getGeneratedKeys() brings the auto-incremented id back. However, the log contains nothing but the insert statement JDBC executed earlier, plus some static queries like "SELECT database(),version()...".
So the conclusion is that getGeneratedKeys() doesn't execute any SQL statement to fetch the auto-incremented id. Looking for another explanation, I debugged into the call stack and saw that JDBC reads the auto-incremented id from an object called OkPacket, which has a property called last_insert_id. That is where the value finally comes from.
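For reference, here is a minimal sketch of the JDBC usage described above (the table and column names are hypothetical):

import java.sql.*;

// Hypothetical demo table with an AUTO_INCREMENT primary key.
static long insertAndGetId(Connection conn) throws SQLException {
    try (PreparedStatement ps = conn.prepareStatement(
            "INSERT INTO demo (name) VALUES (?)",
            Statement.RETURN_GENERATED_KEYS)) {
        ps.setString(1, "example");
        ps.executeUpdate();
        try (ResultSet keys = ps.getGeneratedKeys()) {
            keys.next();
            return keys.getLong(1); // read client-side; no extra SQL round trip
        }
    }
}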
My questions are:
Is there really no way to get a STATEMENT-SAFE (or at least transaction-safe) auto-incremented id using a pure SQL statement (without JDBC)?
How does OkPacket work under the hood? How does it get a statement-safe auto-incremented id? Does it call some low-level C function in the MySQL driver, or rely on the MySQL client/server protocol?
MySQL has an API that clients use to communicate commands and get results.
Actually, it has two forms of this API. One is called the "SQL protocol" in which statements are sent as strings like SELECT * FROM mytable, etc. The other form is called the "binary protocol" where commands are sent using some byte that the server recognizes, even though they are not human-readable strings.
Some commands can be executed by either the SQL protocol or the binary protocol.
For example, START TRANSACTION, COMMIT, PREPARE... there are textual SQL statements for these commands, but there are also non-textual ways for the API to invoke these commands.
You can certainly query SELECT LAST_INSERT_ID(); and get the most recently generated id, but only the most recent one. Another INSERT statement on the same connection will overwrite this value, as you read.
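For example (a hypothetical sketch; the table names are made up), a second INSERT on the same connection replaces the value:

import java.sql.*;

// Hypothetical tables t1 and t2, both with AUTO_INCREMENT ids.
static void overwritePitfall(Connection conn) throws SQLException {
    try (Statement stmt = conn.createStatement()) {
        stmt.executeUpdate("INSERT INTO t1 (v) VALUES ('a')"); // suppose this generates id 1
        stmt.executeUpdate("INSERT INTO t2 (v) VALUES ('b')"); // suppose this generates id 7
        try (ResultSet rs = stmt.executeQuery("SELECT LAST_INSERT_ID()")) {
            rs.next();
            long id = rs.getLong(1); // 7 -- the id generated for t1 is already lost
        }
    }
}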
The OkPacket is filled in by the binary protocol. That is, the MySQL Server returns an OkPacket with several pieces of metadata about any statement execution.
See https://github.com/mysql/mysql-connector-j/blob/release/8.0/src/main/protocol-impl/java/com/mysql/cj/protocol/a/result/OkPacket.java#L55
The OK Packet includes the following:
Affected row count (if any)
Last insert id (if any)
Status flags
Warning count (if any)
String with error message (if any)
The MySQL Server code that documents the OK Packet is unusually thorough with examples:
https://github.com/mysql/mysql-server/blob/8.0/sql/protocol_classic.cc#L665-L838
There's no way to fetch the OK Packet for earlier SQL statements. The client must save the result immediately after a statement execution. In object-oriented code such as the JDBC driver, it makes sense to store that in the NativeResultset object: https://github.com/mysql/mysql-connector-j/blob/release/8.0/src/main/protocol-impl/java/com/mysql/cj/protocol/a/result/NativeResultset.java#L77-L82
My scenario: I have a big query with a lot of joins and a lot of DECODE/CASE calls in the SELECT list, and I am passing one parameter into the WHERE condition from Java. For 150,000 rows, the fetch from Java is very slow, but the same query runs fast in the SQL Developer client.
I thought of creating (or replacing) a view that takes one parameter and calling that view from Java.
I could not find any resource explaining how to pass params to a CREATE OR REPLACE VIEW statement from Java.
Can anyone suggest another approach that fetches rows quickly?
Using Oracle 12c; the driver is JDBC 7 and JDK 8.
First (and easiest):
Set the JDBC fetch size to a high number in your statement. There is a setFetchSize(int) method on Statement, PreparedStatement, CallableStatement, and ResultSet objects.
This defaults to something small like 10 rows. Set that to a reasonably high number, such as 500 or more.
Left at the small default, this setting will definitely slow down a query that pulls back hundreds of thousands of records.
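A minimal sketch of that change, assuming an already-open Connection and an arbitrary query:

import java.sql.*;

// Raise the fetch size so the driver pulls rows in batches of 500
// instead of the small default.
static void fetchLargeResult(Connection conn, String sql) throws SQLException {
    try (PreparedStatement ps = conn.prepareStatement(sql)) {
        ps.setFetchSize(500); // rows fetched per network round trip
        try (ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                // process each row
            }
        }
    }
}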
Second:
Verify that the query is indeed running fast in SQL Developer, to the last row.
You can export to a file or try wrapping the query in a PL/SQL statement that will loop through all records.
If you wish, you can use AUTOTRACE in SQL*Plus to your advantage:
SET TIMING ON
SET AUTOTRACE TRACEONLY
<your query>
This will run the query to the end, pulling all records over the network but not displaying them.
The goal here is to prove that your SQL statement is indeed returning all records as quickly as needed.
If not, then you have a standard tuning exercise. Get it running to completion quickly in SQL Developer first.
I have a web application that needs a database back-end.
My back-end is really small (4 tables at most) and there are not that many SQL operations.
So I decided that a robust ORM solution would be like hitting a mosquito with a hammer, and that I would just use a little DAO pattern so that the code is cleaner (instead of hitting the DB directly with SQL commands).
So far it works, but I am not sure that I haven't stepped into a pitfall without knowing it.
I use Tomcat's connection pool and I expect concurrent access to the database.
My question is related to concurrency and the use of the java sql objects.
Example:
I do the following:
run a query
get a ResultSet and use it to build an object (DTO)
while building this object, run a new SQL query (using the same connection,
with the previous ResultSet still open)
Is this correct/safe?
Also, can I reuse the same connection in a re-entrant manner?
I assume there is no problem using it from multiple threads, right?
In general, any tips or guidance to get on the right track are welcome.
Regarding the connections: as long as you use the connection pool, you are guaranteeing that each thread takes its own connection, so from that point of view there is no problem with your approach in a multithreaded environment (see Is java.sql.Connection thread safe?).
With respect to the ResultSet and the second query you are performing, you must take into account that a ResultSet maintains a cursor pointing to its current row of data. So the key point in your question is whether you are reusing the same Statement object for the second query, because in that case the first ResultSet is closed and problems will arise.
Check ResultSet's javadoc, especially this sentence:
A ResultSet object is automatically closed when the Statement object that generated it is closed, re-executed, or used to retrieve the next result from a sequence of multiple results.
and see How can I avoid "ResultSet is closed" exception in Java?.
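A minimal sketch of the safe pattern, assuming hypothetical orders/order_lines tables: give the nested query its own Statement, so executing it does not close the outer ResultSet.

import java.sql.*;

static void buildDtos(Connection conn) throws SQLException {
    try (Statement outer = conn.createStatement();
         ResultSet orders = outer.executeQuery("SELECT id FROM orders")) {
        while (orders.next()) {
            // A separate Statement for the nested query; re-executing
            // `outer` here would close the `orders` ResultSet.
            try (PreparedStatement inner = conn.prepareStatement(
                    "SELECT qty FROM order_lines WHERE order_id = ?")) {
                inner.setLong(1, orders.getLong("id"));
                try (ResultSet lines = inner.executeQuery()) {
                    while (lines.next()) {
                        // build the DTO from both result sets
                    }
                }
            }
        }
    }
}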
I am trying to create a program that updates 2 different tables using SQL commands. The thing I am worried about is: if the program updates one of the tables, then loses its connection (or fails for any other reason) and does NOT update the other table, there could be an issue. Is there a way I could either:
A. Update them at the exact same time
or
B. Revert the first update if the second one fails.
Yes, use a SQL transaction. Here is a tutorial: JDBC Transactions.
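A minimal sketch of the JDBC transaction pattern (table and column names are hypothetical); both updates commit together or not at all:

import java.sql.*;

static void updateBothTables(Connection conn) throws SQLException {
    boolean oldAutoCommit = conn.getAutoCommit();
    conn.setAutoCommit(false); // start the transaction
    try (Statement stmt = conn.createStatement()) {
        stmt.executeUpdate("UPDATE table_a SET col = 1 WHERE id = 42");
        stmt.executeUpdate("UPDATE table_b SET col = 2 WHERE id = 42");
        conn.commit(); // both updates become visible atomically
    } catch (SQLException e) {
        conn.rollback(); // reverts the first update if the second fails
        throw e;
    } finally {
        conn.setAutoCommit(oldAutoCommit);
    }
}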
Depending on the database, I'd suggest using a stored procedure or function based on the operations involved. They're supported by:
MySQL
Oracle
SQL Server
PostgreSQL
These encapsulate a database transaction (atomic in nature -- either it all happens, or none of it does), without the extra weight of sending the individual queries over the wire to the database. Because they already exist on the database, the queries are parameterized (safe from SQL injection attacks), which means less data is sent -- only the parameter values.
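From the Java side, calling such a procedure is just a CallableStatement; this sketch assumes a hypothetical procedure update_both_tables that wraps the two updates in one server-side transaction:

import java.sql.*;

static void callUpdateBothTables(Connection conn, long id, int value) throws SQLException {
    // {call ...} is the standard JDBC escape syntax for stored procedures.
    try (CallableStatement cs = conn.prepareCall("{call update_both_tables(?, ?)}")) {
        cs.setLong(1, id);
        cs.setInt(2, value);
        cs.execute();
    }
}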
Most SQL servers support transactions, that is, queueing up a set of actions and then having them happen atomically. To do this, you wrap your queries as such:
START TRANSACTION;
*do stuff*
COMMIT;
You can consult your server's documentation for more information about what additional features it supports. For example, here is a more detailed discussion of transactions in MySQL.
I have a Java program that runs a bunch of queries against a SQL Server database. The first of these, which queries a view, returns about 750k records. I can run the query via SQL Server Management Studio and get results in about 30 seconds. However, I kicked off the program last night, and when I checked on it this morning this query still had not returned results to the Java program, some 15 hours later.
I have access to the database to do just about anything I want, but I'm really not sure how to begin debugging this. What should one do to figure out what is causing a situation like this? I'm not a DBA and am not intimately familiar with the SQL Server tool set, so the more detail you can give me on how to do whatever you suggest, the better.
Here's the code:
stmt = connection.createStatement();
// queryBuffer is assumed to be the StringBuffer holding the SQL text
clientFeedRS = stmt.executeQuery(queryBuffer.toString());
EDIT1:
Well, it's been a while, and this got sidetracked, but the issue is back. I looked into upgrading from JDBC driver v1.2 to v2.0, but we are stuck on JDK 1.4, and v2.0 requires JDK 1.5, so that's a non-starter. Now I'm looking at my connection string properties. I see 2 that might be useful.
SelectMethod=cursor|direct
responseBuffering=adaptive|full
Currently, with the latency issue, I am running with cursor as the selectMethod and with the default for responseBuffering, which is full. Is changing these properties likely to help? If so, what would be the ideal settings? I'm thinking, based on what I can find online, that using the direct select method and adaptive response buffering might solve my issue. Any thoughts?
EDIT2:
Well, I ended up changing both of these connection string params, using the default select method (direct) and specifying the responseBuffering as adaptive. This ends up working best for me and alleviates the latency issues I was seeing. Thanks for all the help.
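For reference, a connection URL with those two properties might look roughly like this (server and database names are placeholders; selectMethod and responseBuffering are properties of Microsoft's SQL Server JDBC driver):

import java.sql.*;

static Connection connect(String user, String password) throws SQLException {
    String url = "jdbc:sqlserver://yourserver;databaseName=yourDb;"
            + "selectMethod=direct;responseBuffering=adaptive";
    return DriverManager.getConnection(url, user, password);
}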
I had a similar problem, with a very simple request (SELECT . FROM . WHERE = .) taking up to 10 seconds to return a single row over a JDBC connection in Java, while taking only 0.01s in sqlshell. The problem was the same whether I was using the official MS SQL driver or the jTDS driver.
The solution was to set this property in the JDBC URL:
sendStringParametersAsUnicode=false
Full example if you are using the official MS SQL driver: jdbc:sqlserver://yourserver;instanceName=yourInstance;databaseName=yourDBName;sendStringParametersAsUnicode=false;
Instructions for other JDBC drivers, and more detailed info about the problem, here: http://emransharif.blogspot.fr/2011/07/performance-issues-with-jdbc-drivers.html
SQL Server differentiates data types that support Unicode from those that only support ASCII. For example, the character data types that support Unicode are nchar, nvarchar and longnvarchar, whereas their ASCII counterparts are char, varchar and longvarchar, respectively. By default, all of Microsoft's JDBC drivers send strings in Unicode format to SQL Server, irrespective of whether the data type of the corresponding column defined in SQL Server supports Unicode or not. When the columns' data types support Unicode, everything is smooth. But when they do not, serious performance issues arise, especially during data fetches: SQL Server tries to convert the table's non-Unicode data to Unicode before doing the comparison. Moreover, if an index exists on the non-Unicode column, it will be ignored. This ultimately leads to a whole table scan during data fetch, thereby slowing down search queries drastically.
In my case, I had 30M+ records in the table I was searching. The duration of the request went from more than 10 seconds to approximately 0.01s after applying the property.
Hope this will help someone!
It appears this may not have applied to your particular situation, but I wanted to provide another possible explanation for someone searching for this problem.
I just had a similar problem where a query executed directly in SQL Server took 1 minute while the same query took 5 minutes through a Java prepared statement. I tracked it down to the fact that it was done as a prepared statement.
When you execute a query directly in SQL Server, you are providing it a non-parameterized query, in which it knows all of the search criteria at optimization time. In my case, my search criteria included a date range, and SQL server was able to look at it, decide "that date range is huge, let's not use the date index" and then it chose something much better.
When I execute the same query through a java prepared statement, at the time that SQL Server is optimizing the query, you haven't yet provided it any of the parameter values, so it has to make a guess which index to use. In the case of my date range, if it optimizes for a small range and I give it a large range, it will perform slower than it could. Likewise if it optimizes for a large range and I give it a small one, it's again going to perform slower than it could.
To demonstrate that this was indeed the problem, as an experiment I tried giving it hints about what to optimize for using SQL Server's "OPTIMIZE FOR" option. When I told it to optimize for a tiny date range, my Java query (which actually had a wide date range) took twice as long as before (10 minutes, as opposed to 5 minutes before, and as opposed to 1 minute in SQL Server). When I told it the exact dates to optimize for, the execution time of the Java prepared statement matched that of the query run directly in SQL Server.
So my solution was to hard-code the exact dates into the query. This worked for me because this was just a one-off statement. The PreparedStatement was not intended to be reused, but merely to parameterize the values and avoid SQL injection. Since these dates were coming from a java.sql.Date object, I didn't have to worry about my date values containing injection code.
However, for a statement that DOES need to be reused, hard coding the dates wouldn't work. Perhaps a better option for that would be to create multiple prepared statements optimized for different date ranges (one for a day, one for a week, one for a month, one for a year, and one for a decade...or maybe you only need 2 or 3 options...I don't know) and then for each query, execute the one prepared statement whose time range best matches the range in the actual query.
Of course, this only works well if your date ranges are evenly distributed. If 80% of your records were in the last year and 20% were spread out over the previous 10 years, the "multiple queries based on range size" approach might not be best. You'd have to optimize your queries for specific ranges or something. You'd need to figure that out through trial and error.
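For what it's worth, here is a hedged sketch of the hard-coding approach described above (table and column names are made up). java.sql.Date.toString() yields yyyy-MM-dd, which is why inlining these particular values is injection-safe:

import java.sql.*;

static ResultSet queryRange(Connection conn, java.sql.Date from, java.sql.Date to)
        throws SQLException {
    // Inline the dates as literals so the optimizer sees the real range at plan time.
    String sql = "SELECT * FROM events WHERE event_date BETWEEN '"
            + from + "' AND '" + to + "'";
    return conn.createStatement().executeQuery(sql);
}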
Be sure that your JDBC driver is configured to use a direct connection and not a cursor-based connection. You can post your JDBC connection URL if you are not sure.
Make sure you are using a forward-only, read-only result set (this is the default if you are not setting it); a minimal sketch follows at the end of this answer.
And make sure you are using up-to-date JDBC drivers.
If none of this works, then you should look at SQL Profiler, capture the SQL query as the JDBC driver executes the statement, and run that statement in Management Studio to see if there is a difference.
Also, since you are pulling so much data, you should try to be sure you aren't having any memory/garbage-collection slowdowns on the JVM (although in this case that doesn't really explain the time discrepancy).
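As referenced above, a minimal sketch of explicitly requesting a forward-only, read-only result set (these are the defaults when createStatement is called with no arguments):

import java.sql.*;

static void readForwardOnly(Connection conn, String sql) throws SQLException {
    try (Statement stmt = conn.createStatement(
            ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
         ResultSet rs = stmt.executeQuery(sql)) {
        while (rs.next()) {
            // process each row
        }
    }
}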
If the query is parametrized, the cause can be a missing parameter, or a parameter set with the wrong setter, e.g. setLong for a string, etc.
Try to run your query with all parameters hardcoded into the query body, without any ?, to see if this is the problem.
I know this is an old question, but since it's one of the first results when searching for this issue, I figured I should post what worked for me. I had a query that took less than 10 seconds when I used the SQL Server JDBC driver but more than 4 minutes when using jTDS. I tried all the suggestions mentioned here and none of them made any difference. The only thing that worked was adding ";prepareSQL=1" to the URL.
See here for more.
I know this is a very old question but since it's one of the first results when searching for this issue I thought that I should post what worked for me.
I had a query that took about 3 seconds when run in SQL Server Management Studio (SSMS), but 3.5 minutes when run via the jTDS JDBC driver's executeQuery method.
None of the suggestions mentioned above worked for me, mainly because I was using a plain Statement and not a PreparedStatement. The only thing that worked was to specify, in the connection string, the name of an initial or default database in which the connecting user has at least the db_datareader database role membership. Having only the public role is not sufficient.
Here’s the sample connection string:
jdbc:jtds:sqlserver://YourSqlServer.name:1433/DefaultDbName
Please ensure that you have the ending /DefaultDbName specified in the connection string. Here DefaultDbName is the name of the database to which the user ID specified for making the JDBC connection has at least the db_datareader database role. If omitted, SQL Server defaults to using the master database. If the user ID used to make the JDBC connection only has the public role in the master database, the query takes exceptionally long.
I don’t know why this happens. However, I know a different query plan is used in such circumstances. I confirmed this using the SQL Profiler tool.
Environment details:
SQL Server version: 2016
jTDS driver version: 1.3.1
Java version: 11
Pulling back that much data is going to require lots of time. You should probably figure out a way to not require that much data in your application at any given time. Page the data or use lazy loading for example. Without more details on what you're trying to accomplish, it's hard to say.
The fact that it is quick when run from management studio could be due to an incorrectly cached query plan and out of date indexes (say, due to a large import or deletions). Is it returning all 750K records quickly in SSMS?
Try rebuilding your indexes (or if that would take too long, update your statistics); and maybe flushing the procedure cache (use caution if this is a production system...): DBCC FREEPROCCACHE
To start debugging this, it would be good to determine whether the problem area is in the database or in the app. Have you tried changing the query so that it returns a much smaller result? If that doesn't return quickly either, I would suggest looking at the way you are accessing the DB from Java.
Try adjusting the fetch size of the Statement, and try the selectMethod=cursor setting:
http://technet.microsoft.com/en-us/library/aa342344(SQL.90).aspx
We had issues with large result sets using MySQL, and needed to make the driver stream the result set, as explained in the following link.
http://helpdesk.objects.com.au/java/avoiding-outofmemoryerror-with-mysql-jdbc-driver
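In short (per the linked article), Connector/J streams a result set when you combine a forward-only, read-only statement with a fetch size of Integer.MIN_VALUE, roughly like this:

import java.sql.*;

// Streaming mode: rows are read from the wire one at a time instead of
// being buffered entirely in client memory.
static void streamRows(Connection conn, String sql) throws SQLException {
    try (Statement stmt = conn.createStatement(
            ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY)) {
        stmt.setFetchSize(Integer.MIN_VALUE); // Connector/J's signal for row-by-row streaming
        try (ResultSet rs = stmt.executeQuery(sql)) {
            while (rs.next()) {
                // process each row as it arrives
            }
        }
    }
}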
Quote from the MS Adaptive buffer guidelines:
Avoid using the connection string property selectMethod=cursor to allow the application to process a very large result set. The adaptive buffering feature allows applications to process very large forward-only, read-only result sets without using a server cursor. Note that when you set selectMethod=cursor, all forward-only, read-only result sets produced by that connection are impacted. In other words, if your application routinely processes short result sets with a few rows, creating, reading, and closing a server cursor for each result set will use more resources on both client-side and server-side than is the case where the selectMethod is not set to cursor.
And
There are some cases where using selectMethod=cursor instead of responseBuffering=adaptive would be more beneficial, such as:
If your application processes a forward-only, read-only result set slowly, such as reading each row after some user input, using selectMethod=cursor instead of responseBuffering=adaptive might help reduce resource usage by SQL Server.
If your application processes two or more forward-only, read-only result sets at the same time on the same connection, using selectMethod=cursor instead of responseBuffering=adaptive might help reduce the memory required by the driver while processing these result sets.
In both cases, you need to consider the overhead of creating, reading, and closing the server cursors.
See more: http://technet.microsoft.com/en-us/library/bb879937.aspx
Sometimes it could be due to the way parameters are binding to the query object.
I found the following code to be very slow when executed from a Java program.
Query query = em().createNativeQuery(queryString)
        .setParameter("param", SomeEnum.DELETED.name());
Once I removed the parameter and appended the "DELETED" string directly to the query, it became super fast. It may be that SQL Server expects to have all the parameters bound before it decides on an optimized plan.
Two connections instead of two Statements
I had one connection to SQL server and used it for running all queries I needed, creating a new Statement in each method that needed DB interaction.
My application was traversing a master table and, for each record, fetching all related information from other tables, so the first and largest query would be running from beginning to end of the execution while iterating its result set.
Connection conn;
conn = DriverManager.getConnection("jdbc:jtds:sqlserver://myhostname:1433/DB1", user, password);
Statement st = conn.createStatement();
ResultSet rs = st.executeQuery("select * from MASTER;");
// iterating rs: for each entity read from MASTER, the queries below must run
// ...
Statement st1 = conn.createStatement();
ResultSet rs1 = st1.executeQuery("select * from TABLE1 where id=" + masterId + ";");
// st1.executeQuery() forces rs to be cached in its entirety
// ...
Statement st2 = conn.createStatement();
ResultSet rs2 = st2.executeQuery("select * from TABLE2 where id=" + masterId + ";");
// ...
This meant that any subsequent query (to read single records from the other tables) would first cause the entire first result set to be cached; only then would the other queries run at all.
The solution was to run all the other queries on a second connection. This leaves the first query and its result set alone and undisturbed while the rest of the queries run swiftly on the other connection.
Connection conn;
conn = DriverManager.getConnection("jdbc:jtds:sqlserver://myhostname:1433/DB1", user, password);
Statement st = conn.createStatement();
ResultSet rs = st.executeQuery("select * from MASTER;");
// ...
Connection conn2 = DriverManager.getConnection("jdbc:jtds:sqlserver://myhostname:1433/DB1", user, password);
Statement st1 = conn2.createStatement();
ResultSet rs1 = st1.executeQuery("select * from TABLE1 where id=" + masterId + ";");
// ...
Statement st2 = conn2.createStatement();
ResultSet rs2 = st2.executeQuery("select * from TABLE2 where id=" + masterId + ";");
// ...
Does it take a similar amount of time with SQLWB? If the Java version is much slower, then I would check a couple of things:
You should get the best performance with a forward-only, read-only ResultSet.
I recall that the older JDBC drivers from MSFT were slow. Make sure you are using the latest-n-greatest. I think there is a generic SQL Server one and one specifically for SQL 2005.