The following simple query does not complete. Prepared Statement - Java

I have the following query:
String updatequery = "UPDATE tbl_page SET linkCount = ?, pageProcessed = 1 WHERE pageUrl =?";
PreparedStatement updatestmt = kon.prepareStatement(updatequery);
updatestmt.clearParameters();
//updatestmt.setQueryTimeout(10);
updatestmt.setInt(1, linkCount);
updatestmt.setString(2, urlLink);
updatestmt.executeUpdate();
When I set the query timeout to 10 seconds, it catches an exception saying the query timed out, but when I don't, it waits indefinitely. What's wrong with the query? The pageUrl column is the primary key, a varchar(900).
I know something might be wrong with the prepared statement, because when I run this query in MS SQL Server Management Studio (with '?' replaced by its value) it works fine.
Am I missing something in Java or MSSQL?

Since the code looks just fine, this could be an issue on the database side. Maybe another session has locked the row by updating it and not committing or rolling back (quite possibly your own MS SQL Server Management Studio session!). You could look for locks held by other processes on the same record, so you can be sure this is not a database issue; one way to check is sketched below.
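A minimal diagnostic sketch, assuming SQL Server 2005 or later (where the dynamic management views exist) and that your login is allowed to read them:
// Lists sessions that are currently blocked and which session blocks them.
// 'kon' is the Connection from the question.
String blockSql =
        "SELECT session_id, blocking_session_id, wait_type, wait_time " +
        "FROM sys.dm_exec_requests WHERE blocking_session_id <> 0";
try (Statement st = kon.createStatement();
     ResultSet rs = st.executeQuery(blockSql)) {
    while (rs.next()) {
        System.out.printf("session %d blocked by session %d (%s, %d ms)%n",
                rs.getInt("session_id"), rs.getInt("blocking_session_id"),
                rs.getString("wait_type"), rs.getInt("wait_time"));
    }
}
If a blocking session shows up while your update hangs, commit or roll back the open transaction in Management Studio and the update should complete.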

Create an index on pageUrl:
create index tbl_page_pageUrl_index on tbl_page(pageUrl);
That will allow speedy access to the rows you want to update.
Without this index, the database must do a full table scan, and when combined with an update command, that is likely to lead to lock contention and possibly even deadlocks, depending on your locking options.

Related

How to set lock timeout in Postgres - Hibernate

I'm trying to set a lock on the row I'm working on until the next commit:
entityManager.createQuery("SELECT value from Table where id=:id")
.setParameter("id", "123")
.setLockMode(LockModeType.PESSIMISTIC_WRITE)
.setHint("javax.persistence.lock.timeout", 10000)
.getSingleResult();
What I thought should happen is that if two threads try to write to the db at the same time, one thread will reach the update operation before the other; the second thread should wait 10 seconds and then throw a PessimisticLockException.
But instead the thread hangs until the other thread finishes, regardless of the timeout set.
Look at this example:
database.createTransaction(transaction -> {
    // Execute the first request to the db, and lock the table
    requestAndLock(transaction);
    // Open another transaction, and execute the second request in
    // a different transaction
    database.createTransaction(secondTransaction -> {
        requestAndLock(secondTransaction);
    });
    transaction.commit();
});
I expected that in the second request the transaction would wait for the configured timeout and then throw the PessimisticLockException, but instead it deadlocks forever.
Hibernate generates my request to the db this way:
SELECT value from Table where id=123 FOR UPDATE
In this answer I saw that Postgres allows only SELECT FOR UPDATE NOWAIT, which sets the timeout to 0, but it isn't possible to set a non-zero timeout that way.
Is there any other way that I can use with Hibernate / JPA?
Maybe this way is somehow recommended?
Hibernate supports a bunch of query hints. The one you're using sets the timeout for the query, not for the pessimistic lock. The query and the lock are independent of each other, and you need to use the hint shown below.
But before you do that, please be aware that Hibernate doesn't handle the timeout itself. It only sends it to the database, and it depends on the database if and how it applies it.
To set a timeout for the pessimistic lock, you need to use the javax.persistence.lock.timeout hint instead. Here's an example:
entityManager.createQuery("SELECT value from Table where id=:id")
.setParameter("id", "123")
.setLockMode(LockModeType.PESSIMISTIC_WRITE)
.setHint("javax.persistence.lock.timeout", 10000)
.getSingleResult();
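The same hint can also be passed per call through the standard EntityManager.find overload that accepts a properties map; a minimal sketch, where MyEntity and the id are hypothetical placeholders:
// Pass the lock timeout hint for a single find call
// (javax.persistence.LockModeType, java.util.HashMap).
Map<String, Object> props = new HashMap<>();
props.put("javax.persistence.lock.timeout", 10000);
MyEntity row = entityManager.find(MyEntity.class, 123L,
        LockModeType.PESSIMISTIC_WRITE, props);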
I think you could try
SET LOCAL lock_timeout = '10s';
SELECT ....;
I doubt Hibernate supports this out of the box. You could try to find a way to extend it, but I'm not sure it's worth it, because using locks on a Postgres database (which is MVCC) is not the smartest option.
You could also use NOWAIT and delay-retry several times from your code, as in the sketch below.
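A minimal retry sketch of that approach; the entity, id, and retry/delay values are assumptions, not from the original post:
// Acquire the row lock with NOWAIT and retry a few times on failure
// (LockTimeoutException and PessimisticLockException are javax.persistence types).
MyEntity lockWithRetry(EntityManager em) throws InterruptedException {
    for (int attempt = 1; attempt <= 3; attempt++) {
        try {
            return em.createQuery(
                        "SELECT e FROM MyEntity e WHERE e.id = :id", MyEntity.class)
                    .setParameter("id", 123L)
                    .setLockMode(LockModeType.PESSIMISTIC_WRITE)
                    // a timeout of 0 is rendered as FOR UPDATE NOWAIT on Postgres
                    .setHint("javax.persistence.lock.timeout", 0)
                    .getSingleResult();
        } catch (LockTimeoutException | PessimisticLockException e) {
            Thread.sleep(1000); // back off, then retry
        }
    }
    throw new PessimisticLockException("Could not lock the row after 3 attempts");
}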
There is the lock_timeout parameter that does exactly what you want.
You can set it in postgresql.conf, or per user or per database with ALTER ROLE or ALTER DATABASE.
The javax.persistence.lock.timeout hint (.setHint("javax.persistence.lock.timeout", 10000)) doesn't work for me on PostgreSQL 9.6.
The only solution I found is uncommenting the lock_timeout property in postgresql.conf:
lock_timeout = 10000 # in milliseconds, 0 is disabled
For anyone who's still looking for a Spring Data JPA solution, this is how I managed to do it.
First I created a function in Postgres:
CREATE FUNCTION function_name (some_var bigint)
RETURNS TABLE (id bigint, counter bigint, organisation_id bigint) -- here you list all the columns you want to be returned in the select statement
LANGUAGE plpgsql
AS
$$
BEGIN
    SET LOCAL lock_timeout = '5s';
    RETURN QUERY SELECT * FROM some_table WHERE some_table.id = some_var FOR UPDATE;
END;
$$;
Then in the repository interface I created a native query that calls the function. This will apply the lock timeout within that particular transaction:
@Transactional
@Query(value = """
        select * from function_name(:id);
        """, nativeQuery = true)
Optional<SomeTableEntity> findById(Long id);
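A hypothetical usage sketch: since SET LOCAL is transaction-scoped, the timeout only holds inside the repository call's transaction, and a timed-out lock surfaces as a Postgres lock_not_available error (SQLSTATE 55P03), which Spring wraps in a DataAccessException:
// Caller sketch; the repository field and the id are placeholders.
try {
    Optional<SomeTableEntity> row = repository.findById(123L);
    row.ifPresent(e -> { /* mutate and save within the same transaction */ });
} catch (org.springframework.dao.DataAccessException e) {
    // Postgres reported "canceling statement due to lock timeout" (55P03)
}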

Slow queries with PreparedStatement.executeQuery but not with Statement.executeQuery

I'm having a weird problem with a Grails application accessing data. Digging deeper, I've isolated the problem to a small plain Java 8 application using PreparedStatement.executeQuery vs Statement.executeQuery.
Consider the following snippet of code:
// executes in milliseconds
directSql = "select top(10) * from vdocuments where codcli = 'CCCC' and serial = 'SSSS' ORDER BY otherField DESC;";
stmt = con.createStatement();
rs = stmt.executeQuery(directSql);
// More than 10 minutes
sqlPrepared = "select top(10) * from vdocuments where codCli = ? and serial = ? ORDER BY otherField DESC;";
PreparedStatement pStatement = con.prepareStatement( sqlPrepared );
pStatement.setString(1, "CCCC");
pStatement.setString(2, "SSSS");
rsPrepared = pStatement.executeQuery();
Same query.
Data comes from a view on SQL Server (2008, I think; I have no access right now) over a table with more than 15 million records. There are indexes for all needed fields, and the same query (the first one) executed from the console also runs quite fast.
If I execute the slow PreparedStatement query without the ORDER BY clause, it also runs fast.
It looks clear to me that for some reason the database is not using the indexes and does a full scan when using the PreparedStatement, but maybe I'm wrong, so I'm open to any idea.
I thought maybe the driver was holding the data waiting for some kind of EOF from the connection (both the official SQL Server driver and jTDS have been tested), but I've checked with tcpdump on my side and no data is received.
I can't figure out why this is happening, so any ideas are welcome.
Thank you in advance!
I've finally found a solution, at least for my case. I got it here: http://mehmoodbluffs.blogspot.com.es/2015/03/hibernate-queries-are-slow-sql-servers.html . Telling the driver (and thereby SQL Server) not to send parameters as Unicode has resolved the problem.
The current connection string is now:
String connectionUrl = "jdbc:sqlserver://server:port;databaseName=myDataBase;sendStringParametersAsUnicode=false";
And now both direct queries and preparedStatements runs at millisecond speed.
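The likely mechanism (well-known SQL Server behavior, though not spelled out in the original answer) is that the driver sends string parameters as nvarchar by default; compared against a varchar column, that forces an implicit conversion which prevents an index seek. If you build connections in code rather than via a URL, the same flag can be set on the Microsoft driver's data source; a minimal sketch with placeholder server name and credentials:
// Equivalent setting through com.microsoft.sqlserver.jdbc.SQLServerDataSource.
com.microsoft.sqlserver.jdbc.SQLServerDataSource ds =
        new com.microsoft.sqlserver.jdbc.SQLServerDataSource();
ds.setServerName("server");
ds.setPortNumber(1433);
ds.setDatabaseName("myDataBase");
ds.setSendStringParametersAsUnicode(false); // send varchar, not nvarchar
Connection con = ds.getConnection("user", "password");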
Thank you @DanGuzman for your suggestions!

Same SQL Query returning 2 different results

I have a Java application that runs a SQL query against an Oracle database, which for some reason returns far fewer values when executed from the application than from SQL Developer.
Now to the technicalities. The application obtains a connection to the db using a wrapper library that employs c3p0. The c3p0 config has been checked, so we know these things can't be the cause:
- Pointing to the wrong database/schema
- A restricted user
Then there's the query:
select to_char(AGEPINDW.TRANSACTION.TS_TRANSACTION,'yyyy-mm') as Time,result, count(*) as TOTAL, sum(face_value) as TOTAL_AMOUNT
from AGEPINDW.TRANSACTION
where (ts_transaction >= to_timestamp(to_char(add_months(sysdate,-1),'yyyy-mm'),'yyyy-mm')
and ts_transaction < to_timestamp(to_char(sysdate,'yyyy-mm'),'yyyy-mm')) and service_id in (2,23)
group by to_char(AGEPINDW.TRANSACTION.TS_TRANSACTION,'yyyy-mm'), result;
It doesn't have any parameters and is executed via a standard PreparedStatement. Yet the result from the app is wrong, and I don't know what the cause may be. Any suggestions?

MySQL query from the Workbench client takes much less time than JDBC executeQuery

I am using this code:
double timeBefore = System.currentTimeMillis();
ResultSet rs = st.executeQuery(sql);
double timeAfter = System.currentTimeMillis();
System.out.println(timeAfter - timeBefore);
The return is 9904.0
While when I run the exact same query from the MySQL Workbench client:
SELECT DISTINCT completeAddress FROM DB_M3_Medium.AvailableAddressesV2 where postNr = 2300 ORDER BY completeAddress ASC;
it takes 0.285s
How is that possible?
PS: I tried it with different payload sizes and it's always approx. 10s with Java JDBC
EDIT:
I tried a PreparedStatement with the same query as above and it took about the same time, approximately 1s less.
I have also tried pinging with the following code:
String query = "/* ping */ SELECT 1";
double timeBefore = System.currentTimeMillis();
PreparedStatement preparedStatement = DBConnect.getInstance().con.prepareStatement(query);
ResultSet rs = preparedStatement.executeQuery(query);
double timeAfter = System.currentTimeMillis();
System.out.println(timeAfter - timeBefore);
And the response was: 1306.0, which is not perfect, but better.
But I am still not getting what is wrong with it.
EDIT2:
I have figured out that the time it takes is related to the amount of data in the DB (not the payload that I am retrieving). It appears to me that indexing didn't work. But why would I then have the issue only when I go through JDBC and not with Workbench?
While you code in Java, you are creating a connection and then passing the query. That query is compiled in the SQL server (as you are using Statement), and then you get the result. This whole process needs some time. But when you execute directly in Workbench, you are neither creating a connection nor compiling; you are simply running the SQL. Hence the time taken is less.
As @SpringLearner suggested, in JDBC every time you execute a query you make a new connection, and that costs some time. You can use a DataSource to avoid this overhead and get better performance, as sketched below.
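A minimal pooled-DataSource sketch, assuming HikariCP (my choice of pool, not mentioned in the original answer) and placeholder credentials:
// Reuse pooled connections instead of opening one per query
// (com.zaxxer.hikari.HikariConfig / HikariDataSource).
HikariConfig config = new HikariConfig();
config.setJdbcUrl("jdbc:mysql://localhost:3306/DB_M3_Medium");
config.setUsername("user");
config.setPassword("password");
HikariDataSource ds = new HikariDataSource(config);

try (Connection con = ds.getConnection();  // cheap: borrowed from the pool
     Statement st = con.createStatement();
     ResultSet rs = st.executeQuery(
             "SELECT DISTINCT completeAddress FROM DB_M3_Medium.AvailableAddressesV2 "
             + "WHERE postNr = 2300 ORDER BY completeAddress ASC")) {
    while (rs.next()) {
        // process rows
    }
}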
One thing to bear in mind is that the JDBC driver is pure Java, so you are probably running into some early JIT compilation that would not apply to MySQL Workbench. After the JDBC driver code has been through the JIT, you will probably see comparable performance. The real test would be to run that code a few more times in the same process and see what happens.
You can also use a PreparedStatement and see if that helps, since that should be the API most comparable to what MySQL Workbench is using, avoiding unnecessary recompilation of the query. A timing sketch combining both suggestions follows.
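A small sketch of both suggestions together, warm-up iterations plus a reused PreparedStatement; the iteration count is an arbitrary assumption:
// Time the query after JIT warm-up, reusing one PreparedStatement.
String sql = "SELECT DISTINCT completeAddress FROM DB_M3_Medium.AvailableAddressesV2 "
        + "WHERE postNr = ? ORDER BY completeAddress ASC";
try (PreparedStatement ps = con.prepareStatement(sql)) {
    ps.setInt(1, 2300);
    for (int i = 0; i < 5; i++) { // the first runs warm up the JIT
        long before = System.currentTimeMillis();
        try (ResultSet rs = ps.executeQuery()) {
            while (rs.next()) { /* drain the result set */ }
        }
        System.out.println("run " + i + ": "
                + (System.currentTimeMillis() - before) + " ms");
    }
}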

Deadlock Found Error in Batch DELETE Prepared Statement - MySQL

I am deleting more than 90k rows from my table through a JDBC prepared statement.
My code looks like this:
//Open JDBC connection
//MySQL query
String fetchDataSQL = "DELETE FROM MYTABLE WHERE ID=? AND X=? AND Y=?";
dbConnection.setAutoCommit(false);
preparedStatement = dbConnection.prepareStatement(fetchDataSQL);
for (Data dt : dataList) {
    preparedStatement.setLong(1, dt.getID());
    preparedStatement.setLong(2, dt.getX());
    preparedStatement.setLong(3, dt.getY());
    preparedStatement.addBatch();
}
preparedStatement.executeBatch();
dbConnection.commit();
//close prepared statement
//close connection
Here dataList contains more than 90k records which I want to delete. I have also created a MySQL index on MYTABLE for (ID, X, Y).
Unfortunately I got this error:
Deadlock found when trying to get lock; try restarting transaction
I have googled, but I did not find a working solution.
Please help me find a solution, or an alternative if one exists.
Thank you
As @bodi0 points out, the MySQL Reference Manual (14.2.7.9. How to Cope with Deadlocks) has lots of advice on how to diagnose deadlocks, and how to deal with them.
In this case I can think of two possible explanations:
You are deadlocking against some other transaction being performed by a different database connection or a different database client.
You have entries in dataList that share the same {ID, X, Y} values, so you end up adding multiple deletes for the same row or rows to the batch. Maybe this results in MySQL attempting to lock the same row twice, and deadlocking. (Just a theory ...)
But a better idea would be to diagnose the deadlock for yourself. The manual's usual advice is to keep transactions small and retry on deadlock; a minimal sketch of that approach is below.
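In this sketch the chunk size and retry count are arbitrary assumptions; MySQL signals deadlocks with SQLSTATE 40001, which is what the catch block tests:
// Delete in small committed chunks, retrying a chunk when it deadlocks.
String sql = "DELETE FROM MYTABLE WHERE ID=? AND X=? AND Y=?";
dbConnection.setAutoCommit(false);
int chunkSize = 1000;
for (int from = 0; from < dataList.size(); from += chunkSize) {
    List<Data> chunk = dataList.subList(from, Math.min(from + chunkSize, dataList.size()));
    for (int attempt = 1; ; attempt++) {
        try (PreparedStatement ps = dbConnection.prepareStatement(sql)) {
            for (Data dt : chunk) {
                ps.setLong(1, dt.getID());
                ps.setLong(2, dt.getX());
                ps.setLong(3, dt.getY());
                ps.addBatch();
            }
            ps.executeBatch();
            dbConnection.commit();
            break; // this chunk is done
        } catch (SQLException e) {
            dbConnection.rollback();
            if (!"40001".equals(e.getSQLState()) || attempt >= 3) {
                throw e; // not a deadlock, or out of retries
            }
        }
    }
}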
