Is there a DB2 system table or batch runtime log on the mainframe? In DB2 for i (iSeries), there is a table function QSYS2.GET_JOB_INFO() that returns job information at runtime, including the status (active/complete) and, most importantly, V_SQL_STATEMENT_TEXT, the text of the last SQL statement run.
Scenario:
I want to retrieve the last executed SQL statement at runtime in a COBOL batch job. The main purpose is to determine whether a COMMIT or ROLLBACK has been issued while the job is running. The aim is to create a small program, let's call it the "controller", to monitor DB2 for when a COMMIT (or commit interval) or even a ROLLBACK is issued. To be more specific, this "controller" will act as a mini OS and will have the capacity to trigger the main programs.
For instance, if the main program issues a ROLLBACK, the "controller" program can apply specific business logic and control the updates. Updates can be done over both T1 and T2 types of DB2 connection, meaning they may come from the batch client side or from the Java side running under EXCI (EXCI using RRS recovery).
A quick look at the IBM documentation for DB2 seems to indicate "no."
However, while not an exact match for your situation, here's what we used to do...
Create a table, call it APP_RESTART_DATA, with columns to uniquely identify an execution of your process. We used PROC_NAME and STEP_NAME, as we were confined to batch jobs. Also have a KEY column and any other metadata you might find helpful in a restart situation. Some people stored the record number instead of the actual key value.
In your controller program, begin by doing a SELECT with your unique identifier(s) to determine if you're in restart mode. If you get an SQLCODE of 0 then you are in restart mode and have retrieved the last KEY for which a COMMIT was successfully executed. Under these circumstances you must locate that key in your input data and then begin normal processing with the data immediately following it. If you get an SQLCODE of 100 then you are not in restart mode; under these circumstances you can just begin normal processing at the start of your input data.
As you process the input data and reach a COMMIT point, also UPDATE your APP_RESTART_DATA table with the new KEY. Then COMMIT. Our COMMIT points were also dictated by a parameter indicating how many logical units of work to process between COMMITs. We could decrease this parameter if it became necessary to run batch processes during prime shift that were normally run off-shift.
When you complete processing of your input data, DELETE the row for your process in the APP_RESTART_DATA table.
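Putting the pieces together, here is a rough JDBC sketch of that restart-table idea; the COBOL original would use embedded SQL, the LAST_KEY column name and the helper methods are my own, and a fresh run would first INSERT its row so the UPDATE at commit points has something to hit:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class RestartController {
    // Restart check: returns the last committed key if we are in restart mode,
    // or null for a fresh run (SQLCODE 100 in the description above).
    static String findRestartKey(Connection con, String procName, String stepName)
            throws SQLException {
        try (PreparedStatement ps = con.prepareStatement(
                 "SELECT LAST_KEY FROM APP_RESTART_DATA WHERE PROC_NAME = ? AND STEP_NAME = ?")) {
            ps.setString(1, procName);
            ps.setString(2, stepName);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getString("LAST_KEY") : null;
            }
        }
    }

    // Commit point: record the key that is now safe, then commit it together
    // with the business updates in the same unit of work.
    static void commitPoint(Connection con, String procName, String stepName, String key)
            throws SQLException {
        try (PreparedStatement ps = con.prepareStatement(
                 "UPDATE APP_RESTART_DATA SET LAST_KEY = ? WHERE PROC_NAME = ? AND STEP_NAME = ?")) {
            ps.setString(1, key);
            ps.setString(2, procName);
            ps.setString(3, stepName);
            ps.executeUpdate();
        }
        con.commit();
    }
}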
Catching a ROLLBACK might be tricky. You could flag your row in APP_RESTART_DATA as having performed a ROLLBACK when it is done explicitly in the code, but if it happens implicitly in an abend situation you may find yourself registering a condition handler via the Language Environment CEEHDLR callable service so that you get control and can record that a ROLLBACK occurred.
I want to delete entries from multiple tables in a PostgreSQL DB.
The tables have foreign key constraints, so I need to delete them in particular order only (otherwise delete will fail).
I am thinking of adding them to a batch and running executeBatch()
I understand that executeBatch submits all the statements together to the driver, but how are the statements executed? Will the order of deletion be maintained as per the order in which they were added to the batch? I can't find it mentioned in the API doc.
The JDBC 4.3 specification explicitly specifies the behaviour of a batch execution in section 14.1.2 Successful Execution:
Batch commands are executed serially (at least logically) in the order in which they were added to the batch.
and
The entries in the array are ordered according to the order in which the commands were processed (which, again, is the same as the order in which the commands were originally added to the batch).
The "at least logically" gives databases some leeway to reorder things as an optimization, as long as the resulting behaviour is the same as if the batch was executed in the specified order. Execution in-order is also necessary to ensure the returned update counts match, and for exception behaviour.
They are executed in order.
The purpose of "batching" is to collect the SQL statements and transmit them as a block, a sequence of statements, in order to reduce the network overhead of communicating with the database server.
A full "send SQL, wait for response" takes time, so by sending multiple requests together, a lot of waiting time can be eliminated.
In my application I have the problem that sometimes SELECT statements run into a java.sql.SQLException: Lock wait timeout exceeded; try restarting transaction exception. Sadly I can't create an example as the circumstances are very complex. So the question is just about general understanding.
A little bit of background information: I'm using MySQL (InnoDB) with the READ_COMMITTED isolation level.
Actually I don't understand how a SELECT can ever run into a lock timeout with that setup. I thought a plain SELECT would never lock, as it just returns the latest committed state (managed by MySQL). Anyway, judging from what is happening, this seems to be wrong. So how does it really work?
I have already read https://dev.mysql.com/doc/refman/8.0/en/innodb-locking.html, but that didn't really give me a clue. No SELECT ... FOR UPDATE or anything like that is used.
That is probably due to your database rather than the programming side that accesses it. In my experience these problems usually come from the database; in the end, the programming side is just the "go and get that for me from that DB" part.
I found this without much effort.
It basically explains that:
Lock wait timeout typically occurs when a transaction is waiting on row(s) of data to update which have already been locked by some other transaction.
You should also check this answer, which deals with a specific transaction problem and might help you, as trying to change different tables can cause the timeout:
the query was attempting to change at least one row in one or more InnoDB tables. Since you know the query, all the tables being accessed are candidates for being the culprit.
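If it helps to see who is actually holding the locks, one quick diagnostic (a sketch, assuming you can open a plain JDBC connection to the same server) is to dump SHOW ENGINE INNODB STATUS, whose TRANSACTIONS section lists waiting and blocking transactions:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class InnodbLockDump {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                 "jdbc:mysql://localhost:3306/mydb", "user", "secret");
             Statement stmt = con.createStatement();
             ResultSet rs = stmt.executeQuery("SHOW ENGINE INNODB STATUS")) {
            if (rs.next()) {
                // The "Status" column holds the full monitor text, including the
                // TRANSACTIONS section that shows current lock waits.
                System.out.println(rs.getString("Status"));
            }
        }
    }
}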
To speed up queries, a database can execute several transactions at the same time. For example, if someone runs a SELECT over a table for the wages of a company's employees (each employee identified by an id) while someone else changes the last name of an employee who has, say, just married, both queries can run at the same time because they don't interfere.
But in other cases even a SELECT statement might interfere with another statement.
To prevent unexpected results, SQL transactions follow the ACID model, which stands for Atomicity, Consistency, Isolation and Durability (see Wikipedia for further information).
Let's say transaction 1 starts to calculate something and then wants to write the results to table A. Before writing, it blocks SELECT statements against table A; otherwise this would violate the isolation requirement, because if a transaction 2 started while 1 is still writing, 2's results would depend on which rows 1 had already written and which it had not.
This can even produce a deadlock. For example, before transaction 1 can write the last field in table A, it still has to write something to table B, but transaction 2 has already locked table B so it can read from it safely after having read from A. Now you have a deadlock: 2 wants to read from A, which is blocked by 1, so it waits for 1 to finish, but 1 is waiting for 2 to unlock table B before it can finish itself.
To solve this problem, one strategy is to roll back certain transactions after a certain timeout (more here).
So that might be a reason for your SELECT statement to get a "lock wait timeout exceeded" error.
But a deadlock usually just happens by coincidence, so if transaction 2 was forced to roll back, transaction 1 should be able to finish, and 2 should succeed on a later try.
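Because such timeouts and deadlock rollbacks are transient, a common pattern is simply to retry the whole transaction a few times. A minimal JDBC sketch of that idea (the Work callback, method names and retry count are made up for illustration; 1205 and 1213 are MySQL's error codes for lock wait timeout and deadlock):

import java.sql.Connection;
import java.sql.SQLException;

public class RetryingTransaction {
    interface Work { void run(Connection con) throws SQLException; }

    // Runs the given unit of work, retrying when it is rolled back because of a
    // lock wait timeout (error 1205) or a deadlock (error 1213).
    static void runWithRetry(Connection con, Work work, int maxAttempts) throws SQLException {
        for (int attempt = 1; ; attempt++) {
            try {
                con.setAutoCommit(false);
                work.run(con);
                con.commit();
                return;
            } catch (SQLException e) {
                con.rollback();
                boolean retryable = e.getErrorCode() == 1205 || e.getErrorCode() == 1213;
                if (!retryable || attempt >= maxAttempts) {
                    throw e;   // not a lock problem, or out of retries
                }
                // otherwise loop and run the whole transaction again
            }
        }
    }
}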
I have a J2EE server, currently running only one thread (the problem arises even within one single request) to save its internal model of data to MySQL/INNODB-tables.
Basic idea is to read data from flat files, do a lot of calculation and then write the result to MySQL. Read another set of flat files for the next day and repeat with step 1. As only a minor part of the rows change, I use a recordset of already written rows, compare to the current result in memory and then update/insert it correspondingly (no delete, just setting a deletedFlag).
Problem: despite a purely sequential process I get lock timeout errors (#1204), and the InnoDB status dump shows record locks (though I do not know how to work out the details). To complicate things, everything works on my Windows machine, while the production system (where I can't install innotop) has some record locks.
Now to the critical code:
1. Read data and calculate (works)
2. Get a connection from the Tomcat pool and set autocommit=false
3. Use a Statement to issue "LOCK TABLES order WRITE"
4. Open an updatable ResultSet on table order
5. For each row in the ResultSet: if there is a difference, update it from the in-memory object
6. For objects not yet in the database: insert the data
7. Commit the connection, close the connection
Steps 5/6 have a commit counter so that the rows are committed every 500 changes (to avoid having 50,000 rows uncommitted). In the first run (so without any locks) this takes at most 30 seconds per table.
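For reference, a stripped-down JDBC sketch of steps 2-7 (the orders table, its columns and the in-memory comparison are placeholders here):

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class OrderSync {
    // Lock the table, walk an updatable ResultSet, update changed rows,
    // and commit every 500 changes.
    static void sync(Connection con) throws SQLException {
        con.setAutoCommit(false);
        try (Statement lock = con.createStatement()) {
            lock.execute("LOCK TABLES orders WRITE");
        }
        int changes = 0;
        try (Statement stmt = con.createStatement(
                 ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_UPDATABLE);
             ResultSet rs = stmt.executeQuery("SELECT id, amount, deleted_flag FROM orders")) {
            while (rs.next()) {
                boolean differs = false;   // placeholder: compare row with in-memory object
                double newAmount = 0.0;    // placeholder: value from the in-memory object
                if (differs) {
                    rs.updateDouble("amount", newAmount);
                    rs.updateRow();
                    if (++changes % 500 == 0) {
                        con.commit();      // commit counter: flush every 500 changes
                    }
                }
            }
        }
        // inserts for objects not yet in the database would go here (step 6)
        con.commit();
        try (Statement unlock = con.createStatement()) {
            unlock.execute("UNLOCK TABLES");   // release the table lock
        }
    }
}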
As stated above, right now I avoid any other interaction with the database, but in the future other processes (user requests) might read data or even write some fields. I would not mind those processes reading either old or new data, or having to wait a couple of minutes (i.e. on a lock) to save their changes to the DB.
I would be happy about any recommendation on how to do this better.
Summary: complex code calculates in-memory objects which are to be synchronized with the database. This sync currently seems to lock itself out, despite the fact that it sequentially locks, changes and unlocks the tables without any exceptions being thrown. But for some reason row locks seem to remain.
Kind regards
Additional information:
MySQL: SHOW PROCESSLIST lists no active connections (all asleep, or alternatively waiting for table locks on table order), while SHOW ENGINE INNODB STATUS reports a number of row locks (unfortunately I can't tell which transaction is meant, as the output is quite cryptic).
Solved: I had wrongly declared a ResultSet as updatable. The ResultSet was only closed in a finalize() method via the garbage collector, which was not fast enough; before that happened I reopened the ResultSet and therefore tried to acquire a lock on an already locked table.
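For illustration, a small sketch of the fix (table and column names are just placeholders): ask for CONCUR_READ_ONLY where no updates are made, and close the ResultSet deterministically with try-with-resources instead of relying on finalize():

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class ReadOrders {
    static void readOrders(Connection con) throws SQLException {
        // Read-only access: no CONCUR_UPDATABLE, so no unnecessary locks are taken.
        try (Statement stmt = con.createStatement(
                 ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
             ResultSet rs = stmt.executeQuery("SELECT id, amount FROM orders")) {
            while (rs.next()) {
                // ... use the row ...
            }
        }   // rs and stmt are closed here, immediately releasing their resources
    }
}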
Yet it was odd that innotop showed another of my queries hanging on a completely different table. But as it works for me now, I don't care about the oddities :-)
Is it possible to abort an INSERT ... SELECT statement from Java? Using either JDBC or Hibernate, it doesn't matter. The DB is Oracle.
I reckon it's not possible because there is a single DB call and the process is running in Oracle, not the JVM.
Oracle OCI (the C driver) provides an OCIBreak() function. It's even thread-safe, and you can call it from any background thread while the main thread is using the same connection.
Maybe Statement.cancel() does the same thing.
OCIBreak() requires a round trip to the DB server (i.e. the network must be functional), and then the main thread receives an error:
java.sql.SQLException: ORA-01013: user requested cancel of current operation
You should be able to mark this exception as non-critical at the JBoss level (using an ExceptionSorter).
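As a rough sketch of the JDBC side (the SQL text, table names and the 30-second delay are just placeholders), a background thread can call cancel() on the same Statement while the main thread is blocked in executeUpdate(); the main thread then sees ORA-01013:

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class CancelLongInsert {
    static void run(Connection con) throws Exception {
        try (Statement stmt = con.createStatement()) {
            ScheduledExecutorService canceller = Executors.newSingleThreadScheduledExecutor();
            // Ask a background thread to cancel the statement after 30 seconds.
            canceller.schedule(() -> {
                try {
                    stmt.cancel();   // sends the break to the server
                } catch (SQLException ignored) {
                }
            }, 30, TimeUnit.SECONDS);
            try {
                stmt.executeUpdate("INSERT INTO target_tab SELECT * FROM huge_source_tab");
            } catch (SQLException e) {
                // Typically ORA-01013: user requested cancel of current operation
                System.err.println("Cancelled: " + e.getMessage());
            } finally {
                canceller.shutdownNow();
            }
        }
    }
}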
PS: I'm really curious whether this could be called from Hibernate, as JPA leaves many long-running queries on our DB servers.
This kind of thing has been done a million times I'm sure, but my search-fu appears weak today, and I'd like to get opinions on what is generally considered the best way to accomplish this goal.
My application keeps track of sessions for online users in a system. Each session corresponds to a single record in a database. A session can be ended in one of two ways. Either a "stop" message is received, or the session can timeout. The former case is easy, it is handled in the message processing thread and everything is fine. The latter case is where the concern comes from.
In order to process timeouts, each record has an end-time column that is updated each time a message is received for that session. To make timeouts work, I have a thread that returns all records from the database whose endtime < NOW() (i.e. whose end time is in the past) and goes through the processing to close those sessions. The problem here is that it's possible I might receive a message for a session while the timeout thread is processing that same session. I end up with a race between the timeout thread and the message processing thread.
I could use a semaphore or the like and just prevent the message thread from processing while the timeout is taking place, as it only needs to run every 30 seconds or a minute. However, as the user table gets large this is going to run into some serious performance issues. What I think I would like is a way for the message thread to know that this record is currently being processed by the timeout thread. If I could achieve that, I could either discard the message or wait for the timeout thread to end, but only in the case of a conflict instead of always.
Currently my application uses JDBC directly. Would there be an easier/standard method for solving this issue if I used a framework such as Hibernate?
This is a great opportunity for all kinds of crazy bugs to occur, and some of the cures can cause performance issues.
The classic solution would be to use transactions (http://dev.mysql.com/doc/refman/5.0/en/commit.html). This allows you to guarantee the consistency of your data - but a long-running transaction on the database turns it into a huge bottleneck; if your "find timed-out sessions" code runs for a minute, the transaction may run for that entire period, effectively locking write access to the affected table(s). Most systems would not deal well with this.
My favoured solution for this kind of situation is to have a "state machine" for status; I like to implement this as a history table, but that does tend to lead to a rapidly growing database.
You define the states of a session as "initiated", "running", "timed-out - closing", "timed-out - closed", and "stopped by user" (for example).
You implement code which honours the state transition logic in whatever data access logic you've got. The pseudo code for your "clean-up" script might then be:
update all records whose endtime < now() and whose status is "running", set status = "timed-out - closing"
for each record whose status is "timed-out - closing"
do whatever other stuff you need to do
update that record to set status "timed-out - closed" where status = "timed-out - closing"
next record
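A rough JDBC version of that pseudocode might look like this (the sessions table and its endtime, status and session_id columns are assumptions here):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;

public class SessionCleanup {
    static void closeTimedOutSessions(Connection con) throws SQLException {
        // Step 1: claim the timed-out sessions by moving them to an intermediate state.
        try (Statement stmt = con.createStatement()) {
            stmt.executeUpdate(
                "UPDATE sessions SET status = 'timed-out - closing' " +
                "WHERE endtime < NOW() AND status = 'running'");
        }

        // Step 2: collect the sessions we just claimed.
        List<Long> claimed = new ArrayList<>();
        try (Statement stmt = con.createStatement();
             ResultSet rs = stmt.executeQuery(
                 "SELECT session_id FROM sessions WHERE status = 'timed-out - closing'")) {
            while (rs.next()) {
                claimed.add(rs.getLong("session_id"));
            }
        }

        // Step 3: do the per-session clean-up work, then finalise the state.
        try (PreparedStatement close = con.prepareStatement(
                 "UPDATE sessions SET status = 'timed-out - closed' " +
                 "WHERE session_id = ? AND status = 'timed-out - closing'")) {
            for (long id : claimed) {
                // ... whatever other stuff you need to do for this session ...
                close.setLong(1, id);
                close.executeUpdate();
            }
        }
    }
}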
All other attempts to modify the current state of the session record must check that the current status is valid for the attempted change.
For instance, the "manual" stop code should be something like this:
update sessions
set status = 'stopped by user'
where session_id = xxxxx
and status = 'running'
If the auto-close routine has kicked in between the user interface being shown and the database code running, the WHERE clause won't match any records, so the rest of the code simply doesn't run.
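In JDBC you can detect that case directly, since executeUpdate() returns the number of rows that matched; a small sketch (column names again assumed):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class StopSession {
    // Returns true if this call actually stopped the session; false means the
    // auto-close routine (or another stop) already changed the status.
    static boolean stopByUser(Connection con, long sessionId) throws SQLException {
        try (PreparedStatement ps = con.prepareStatement(
                 "UPDATE sessions SET status = 'stopped by user' " +
                 "WHERE session_id = ? AND status = 'running'")) {
            ps.setLong(1, sessionId);
            return ps.executeUpdate() == 1;   // 0 rows updated -> we lost the race
        }
    }
}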
For this to work, all code that modifies the session status must check its pre-conditions; the most maintainable way is to encode status and allowed transitions into a separate database table.
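One way to encode that, sketched here with a hypothetical status_transitions(from_status, to_status) table, is to list the allowed from/to pairs and check them before every status change:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class StatusTransitions {
    // Returns true if the transition appears in the status_transitions table.
    static boolean isAllowed(Connection con, String from, String to) throws SQLException {
        try (PreparedStatement ps = con.prepareStatement(
                 "SELECT 1 FROM status_transitions WHERE from_status = ? AND to_status = ?")) {
            ps.setString(1, from);
            ps.setString(2, to);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next();   // a matching row means the change is allowed
            }
        }
    }
}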
You could also write triggers to enforce this logic, though I'm normally not a fan of triggers - only do this if you have to.
I don't think this adds significant performance worries - but test and optimize. The majority of the extra work on the database comes from adding extra WHERE clauses to your update statements; assuming you have an index on status, it's unlikely to have a measurable impact.