Batch delete throws QueryCollectorSignal exception? - Java

When I try to do a batch delete, nothing happens. I stepped into the debugger and saw that BatchCRUD::executeAction calls UpdatableRecord::delete, and down in the call stack, UpdatableRecordImpl::checkIfChanged calls fetchOne(), which throws a QueryCollectorSignal. The SQL works fine when executed in PGAdmin (Postgres), so I'm wondering what's going on here. How do I do a proper batch delete?

When does this happen? When both of the following hold:
you have turned on the executeWithOptimisticLocking setting
this particular table doesn't have any timestamp or version column
Why does this happen?
batchStore(), batchDelete() and similar calls execute the respective store(), delete(), etc. calls on each individual UpdatableRecord, but with an ExecuteListener that aborts execution (via this QueryCollectorSignal exception) and just collects the SQL query that would have been executed. It then batches these SQL queries rather than executing them individually. Unfortunately, the ExecuteListener is also applied to the SELECT query that is needed for optimistic locking.
The safest solution would probably be to just turn off optimistic locking before we provide a fix. I've registered an issue for this:
https://github.com/jOOQ/jOOQ/issues/5383
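In the meantime, a minimal workaround sketch (the connection and records variables are assumed to exist already) that disables optimistic locking just for the context running the batch:
import org.jooq.DSLContext;
import org.jooq.SQLDialect;
import org.jooq.conf.Settings;
import org.jooq.impl.DSL;

// Disable optimistic locking for this DSLContext only, so the batch
// no longer triggers the SELECT that throws QueryCollectorSignal.
Settings settings = new Settings().withExecuteWithOptimisticLocking(false);
DSLContext ctx = DSL.using(connection, SQLDialect.POSTGRES, settings);

// records: the UpdatableRecords to delete, from wherever you fetched them
ctx.batchDelete(records).execute();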

Related

Trigger on standard Oracle base table recommended? Oracle Applications

I want to write a trigger on wf_notifications, one of the vital base tables of Oracle Applications (ERP). From this trigger I will be calling a Java concurrent program using fnd_request.submit_request; all of my operations will be done in the Java class file used by the concurrent program. So, will other operations be affected if the concurrent program fails? On the other hand, Oracle doesn't recommend writing triggers on Oracle standard base tables.
If you put a trigger on a table and the trigger throws an exception, your DML statement (insert, update, delete) will fail with that exception.
You can put a catch-all exception handler in the trigger to make sure your DML continues to work even if the trigger fails, but then you may never know that there is a problem with the triggered function.
So that would be something along these lines:
create or replace trigger mytrigger
before insert or update or delete
on mytable
for each row
declare
  -- submit_request is a function, so its return value must be captured
  l_request_id number;
begin
  -- arguments (application, program, ...) omitted here; pass whatever your program needs
  l_request_id := fnd_request.submit_request;
exception
  -- Hide whatever error this trigger throws at us.
  when others then null;
end mytrigger;
Also, the trigger could slow down your table inserts and updates because of the extra work being done.

JPA using EclipseLink and java.sql: when does the connection to the DB happen?

I will explain my question in more detail, since from the title alone it may not be very clear, and I didn't find a way to summarize the problem in a few words. Basically, I have a web application whose DB has 5 tables. 3 of these are managed using JPA with the EclipseLink implementation. The other 2 tables are managed directly with SQL using the java.sql package. By "managed" I just mean that queries, insertions, deletions and updates are performed in two different ways.
Now, the problem is that I have to monitor the response time of each call to the DB. To do this I have a library that uses aspects, so at runtime I can monitor the execution time of any code snippet. The question is: if I want to monitor the response time of a DB request (let's suppose the DB is remote, so the response time will also include network latency, but that is actually fine), which instructions' execution time has to be measured in each of the two cases described above?
Let me give an example to be clearer.
Take the case of using JPA to execute a DB update. I have the following code:
EntityManagerFactory emf = Persistence.createEntityManagerFactory(persistenceUnit);
EntityManager em = emf.createEntityManager();
EntityToPersist e = new EntityToPersist();
em.persist(e);
Is it correct to suppose that only the em.persist(e) instruction connects and makes a request to the DB?
The same question applies when using java.sql:
Connection c = dataSource.getConnection();
Statement statement = c.createStatement();
statement.executeUpdate(stm);
statement.close();
c.close();
In this case, is it correct to suppose that only statement.executeUpdate(stm) connects and makes a request to the DB?
In case it is useful to know, the remote DBMS is MySQL.
I tried to search the web, but it is quite a specific problem and I'm not sure what to look for, short of reading the full JPA or java.sql specifications.
If you have any questions or if something in my description is unclear, please don't hesitate to ask.
Thanks a lot in advance.
In JPA (so also in EclipseLink) you have to differentiate between SELECT queries, which do not need any transaction, and queries that change data (DELETE, INSERT, UPDATE), which all need a transaction. When you select data, it is enough to measure the time of Query.getResultList() (and calls alike). For the other operations (EntityManager.persist(), merge() or remove()) there is a flushing mechanism, which forces the queue of queries (or a single query) from the cache to actually hit the database. The question then is when the EntityManager is flushed: usually on transaction commit, or when you call EntityManager.flush(). And that raises yet another question: when is the transaction committed? The answer: it depends on your connection setup (whether autocommit is true or not), but a proper setup has autocommit=false, with you beginning and committing your transactions in your code.
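To illustrate, a minimal sketch (reusing the persistence unit and entity from the question) that measures the flush which actually hits the database:
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

// persist() usually only queues the INSERT; the flush is what hits the DB.
EntityManagerFactory emf = Persistence.createEntityManagerFactory(persistenceUnit);
EntityManager em = emf.createEntityManager();
em.getTransaction().begin();
em.persist(new EntityToPersist());              // typically no DB round trip yet
long start = System.nanoTime();
em.flush();                                     // the queued INSERT hits the database here
long elapsedMs = (System.nanoTime() - start) / 1_000_000;
em.getTransaction().commit();
em.close();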
When working with statement.executeUpdate(stm), it is enough to measure only those calls.
PS: usually you do not connect directly to any database; that is done by a pool (even if you work with a DataSource), which simply hands you an already established connection. But that again depends on your setup.
PS2: for EclipseLink, probably the most correct approach would be to take a look at the source code to find where the internal flush happens and measure that part.

PHP PDO Batch Update - Exists?

I am looking for something similar to the JDBC driver in Java, to perform a batch of updates in PHP.
In JDBC there is the PreparedStatement.executeBatch() API, which executes a whole set of statements in one round trip to the DB.
Does PHP PDO have a similar API? And if not, will starting a transaction, doing the updates and then committing have the same effect of executing all updates in one round trip to the DB, or will each update make its own round trip and execute immediately (although not visible to others, since it is inside a transaction)?
There is no such thing as a "batch update" in MySQL. There are only SQL queries.
As long as you can do your updates in one query, your updates will be done in one round trip. Otherwise there will be many, no matter what API is being used.
Speaking of single SQL queries, there are 2 possible ways:
a CASE expression in the UPDATE's SET clause
a neat trick with an INSERT(!) query with an ON DUPLICATE KEY UPDATE clause, which will actually update your data
PHP PDO doesn't have batch execution of queries.
Running many inserts and updates within a transaction usually greatly improves execution speed. If you're running batch jobs against a database, you should run queries in bulk within a transaction.
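For comparison, this is what the JDBC batching mentioned in the question looks like; a sketch with a made-up table and a made-up users list:
import java.sql.Connection;
import java.sql.PreparedStatement;

// One prepared statement, many parameter sets, executed as a single batch.
try (Connection c = dataSource.getConnection();
     PreparedStatement ps = c.prepareStatement(
             "UPDATE users SET name = ? WHERE id = ?")) {
    c.setAutoCommit(false);
    for (User u : users) {            // User and users are illustrative
        ps.setString(1, u.getName());
        ps.setInt(2, u.getId());
        ps.addBatch();
    }
    ps.executeBatch();                // the driver may send these in fewer round trips
    c.commit();
}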

Batch prepared statement auto commit

I am trying to create multiple tables (up to 20) via java.sql prepared statement batch execute. Most of the tables are related to each other, but there is some confusion in my mind.
1) Should I set connection auto commit to true or false?
2) Is there any special ordering pattern for batch execute, like top-down? I want the parent table's create query to execute first.
3) If an error occurs, is the whole batch rolled back?
The behavior of batch execution with auto commit on is implementation-defined; some drivers may not even support it. So if you want to use batch execution, set auto commit to false.
That said, some databases implicitly commit each DDL statement, which might interfere with the correct working of batched execution. I would advise taking the safe route and not using batched execution for DDL, but using a normal Statement and execute(String) to run the DDL.
Actually, using batch execution in this case does not make much sense anyway. Batch execution gives you a (big) performance improvement when inserting or updating thousands of rows at once.
You just need to have all your statements within a transaction:
call Connection.setAutoCommit(false)
execute your create-table statements with Statement.executeUpdate
call Connection.commit()
You need to order the create-table statements yourself based on the foreign-keys between them.
As Mark pointed out, the DB you are using might commit each create-table right away and ignore the transaction. Not all DBs support transactional creation of tables. You will need to test this or do some more research regarding this aspect.
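Putting those steps together, a minimal sketch (connection URL and table definitions are placeholders):
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

try (Connection c = DriverManager.getConnection(url, user, pass)) {  // placeholders
    c.setAutoCommit(false);
    try (Statement st = c.createStatement()) {
        // parent table first, then the tables that reference it
        st.executeUpdate("CREATE TABLE parent (id INT PRIMARY KEY)");
        st.executeUpdate("CREATE TABLE child (id INT PRIMARY KEY, "
                + "parent_id INT REFERENCES parent (id))");
        c.commit();
    } catch (SQLException e) {
        c.rollback();  // may be a no-op if the DB already auto-committed the DDL
        throw e;
    }
}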

Oracle SQL from Java using Spring returns nothing, and doesn't throw an exception

I have Java code that uses Spring to connect to an Oracle DB and execute SQL. I have a query that takes a long time to execute (20 minutes or sometimes more). I have an ExecutorService with a thread that executes the query and processes the results. If I put a timeout on the DB and Spring, the system times out correctly but returns nothing before that. If I run the query from SQL*Plus, it returns values. The timeout is set to 3 times what the query takes to execute in SQL Developer.
Any ideas!?
Assuming that your Spring query is using bind variables, are you using bind variables when you execute the query in SQL*Plus/SQL Developer? Or are you using literals?
What version of Oracle are you using?
Have you checked to see whether the query plans for the two environments are different?
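For reference, a Spring query along these lines uses bind variables under the hood (JdbcTemplate and the query text here are just illustrative):
import java.util.List;
import java.util.Map;
import org.springframework.jdbc.core.JdbcTemplate;

// The ? placeholder becomes a bind variable; testing in SQL Developer with a
// hard-coded literal instead can produce a different execution plan.
JdbcTemplate jdbc = new JdbcTemplate(dataSource);
List<Map<String, Object>> rows = jdbc.queryForList(
        "SELECT * FROM orders WHERE customer_id = ?", customerId);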
20 minutes for a query in Oracle? I'll bet you don't have appropriate indexes on the columns in your WHERE clause.
The dead giveaway is to do an EXPLAIN PLAN on the query. If you see a TABLE SCAN, take appropriate measures.
If you can run the same query in SQL*Plus and see it return in a reasonable time, then I'm incorrect and the problem is due to something else that you did in Java code.
I don't see why you need a separate thread for a query. I'd run the code straight, without a thread, and see how it behaves. If you aren't indexed properly, add the appropriate indexes; if the query brings back too much data, add WHERE clauses to restrict it. You've taken extraordinary measures without really understanding the root cause.
