I have a bunch of MySQL queries that use temporary tables to split complex/expensive queries into small pieces.
create temporary table product_stats (
product_id int
,count_vendors int
,count_categories int
,...
);
-- Populate initial values.
insert into product_stats(product_id) select product_id from product;
-- Incrementally collect stats info.
update product_stats ... join vendor ... set count_vendors = count(vendor_id);
update product_stats ... join category... set count_categories = count(category_id);
....
-- Consume the resulting temporary table.
select * from product_stats;
The problem is that, since I use a connection pool, these tables are not cleared even when I close the java.sql.Connection.
I can drop them manually (drop temporary table x;) one by one before executing the queries I need, but that leaves room for mistakes.
Is there a way (JDBC/MySQL, API/configuration) to reset all the temporary tables created within the current session without closing the database connection (as you know, I'm not referring to java.sql.Connection.close()), so that I can still keep the advantages a connection pool provides?
Edited:
It seems that MySQL only started implementing a "reset connection" feature in version 5.7.3. (Release note: https://dev.mysql.com/doc/relnotes/mysql/5.7/en/news-5-7-3.html) However, I will not use it for the moment because version 5.7 is still a development release.
Q: Is there a way (JDBC/MySQL, API/configuration) to reset all the temporary tables created within the current session without closing the database connection?
A: No. There's no "reset" available. You can issue DROP TEMPORARY TABLE foo statements within the session, but you have to provide the name of the temporary table you want to drop.
The normative pattern is for the process that created the temporary table to drop it, before the connection is returned to the pool. (I typically handle this in the finally block.)
If we are expecting other processes may leave temporary tables in the session (and to be defensive, that's what we expect), we typically do a DROP TEMPORARY TABLE IF EXISTS foo before we attempt to create a temporary table.
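For example, a minimal JDBC sketch combining both habits, reusing the product_stats table from the question (the column list is trimmed for brevity):

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class ProductStatsJob {
    public void run(Connection con) throws SQLException {
        try (Statement st = con.createStatement()) {
            // Defensive: some other process may have left this table behind in the session.
            st.execute("DROP TEMPORARY TABLE IF EXISTS product_stats");
            st.execute("CREATE TEMPORARY TABLE product_stats (product_id INT, count_vendors INT)");
            // ... populate and consume product_stats here ...
        } finally {
            // Normative pattern: drop before the connection goes back to the pool.
            try (Statement st = con.createStatement()) {
                st.execute("DROP TEMPORARY TABLE IF EXISTS product_stats");
            }
        }
    }
}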
EDIT
The answer above is correct for MySQL up through version 5.6.
#mr.Kame (OP) points out the new mysql_reset_connection function (introduced in MySQL 5.7.3).
Reference: 22.8.7.60 mysql_reset_connection() http://dev.mysql.com/doc/refman/5.7/en/mysql-reset-connection.html
Looks like this new function achieves nearly the same result as we'd get by disconnecting from and reconnecting to MySQL, but with less overhead.
(Now I'm wondering if MariaDB has introduced a similar feature.)
Related
I am trying to log a “change summary” from each INSERT/UPDATE MySQL/SQL Server query that executes in a Java program. For example, let’s say I have the following query:
Connection con = ...
PreparedStatement ps = con.prepareStatement("INSERT INTO cars (color, brand) VALUES (?, ?)");
ps.setString(1, "red");
ps.setString(2, "toyota");
ps.executeUpdate();
I want to build a "change set" from this query so I know that one row was inserted into the cars table with the values color=red and brand=toyota.
Ideally, I would like MySQL/SQL Server to tell me this information as that would be the most accurate. I want to avoid using a Java SQL parser because I may have queries with “IF EXISTS BEGIN ELSE END”, in which case I would want to know what was the final query that was inserted/updated.
I only want to track INSERT/UPDATE queries. Is this possible?
What ORM do you use? If you don't use one, now could be the time to start - you give the impression that you have all these prepared statements scattered throughout the code, which is something that needs improving anyway.
Using something like Hibernate means you can just activate its logging and keep the query/parameter data. It might also make you focus your data layer a bit more (if it's a bit haphazardly structured right now).
If you're not willing to switch to an ORM, consider creating your own class, perhaps called LoggingPreparedStatement, that is identical to a normal PreparedStatement (a subclass or wrapper of PreparedStatement that uses all the same method names, so it's a drop-in replacement) and logs whatever you want; a sketch follows below. Use find/replace across the code base to switch to using it.
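A minimal sketch of such a wrapper, assuming you only need to capture string parameters and update counts (a real drop-in replacement would implement java.sql.PreparedStatement and delegate every one of its methods):

import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.Map;
import java.util.TreeMap;

public class LoggingPreparedStatement {
    private final PreparedStatement delegate;
    private final String sql;
    private final Map<Integer, Object> params = new TreeMap<>();

    public LoggingPreparedStatement(PreparedStatement delegate, String sql) {
        this.delegate = delegate;
        this.sql = sql;
    }

    public void setString(int index, String value) throws SQLException {
        params.put(index, value); // remember the parameter for the log
        delegate.setString(index, value);
    }

    public int executeUpdate() throws SQLException {
        int rows = delegate.executeUpdate();
        // The change summary: statement, parameters, affected row count.
        System.out.println(sql + " " + params + " -> " + rows + " row(s)");
        return rows;
    }
}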
As an alternative to doing it on the client side, you can get the database to do it. SQL Server has change tracking; I don't know what MySQL offers, but it will be something proprietary. For something consistent, most databases have triggers with some mechanism for identifying old and new data, and you can stash this in a history table (or tables) to see what was changed and when. Triggers that keep history have a regularity to their code that means they can be generated programmatically from a list of the table's columns and datatypes, so you can query the DB for the column names (most databases have virtual tables that describe the real tables) and generate your triggers in code, (re)applying them whenever the schema changes.

The advantage of using triggers is that they identify the changed data very easily. The disadvantage is that this is all they can see, so if you want your trigger to know more you have to add that info to the table or the session so the trigger can access it - stuff like who ran the query and what the query was. If you're not willing to add otherwise-useless columns to a table (and indeed, why should you), you can rename all your tables and provide a set of views that select from the new names and are named after the old names. These views can expose extra columns that your client side can update, and the views themselves can have INSTEAD OF triggers that update the real tables. This doesn't help for deletes, though, because deleting data doesn't need any data from the client, so the whole thing gets messy. If you were going that wholesale on your DB, you'd just switch to stored procedures for your data modifications and embark on a massive job to change your client-side calls.

An alternative that is also well leveraged for SQL Server is the CONTEXT_INFO variable, a 128-byte block of binary data that lives for the life of your connection/session, or its newer upgrade SESSION_CONTEXT, a 256 KB set of key/value pairs. If you're building something on the client side that logs the user, query, and parameter data, and you're also building a trigger that logs the data change, you can use these variables, set programmatically at the start of each data-modification statement, to give your trigger something more robust than "what is the current time" for matching a triggered dataset to a logged query. Generating a GUID in the client and passing it to the DB in some globally readable way that the trigger can see and log in the history table ties the client-side log of the statement and parameters to the server-side set of logged row changes.
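The client side of that GUID idea could look roughly like this on SQL Server 2016+ (sp_set_session_context is a real procedure there; the change_id key and the cars table are illustrative), with a history-table trigger on the server reading SESSION_CONTEXT(N'change_id'):

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.UUID;

public class TaggedInsert {
    public void insertCar(Connection con, String color, String brand) throws SQLException {
        String changeId = UUID.randomUUID().toString();
        // Tag the session so the trigger can correlate its history rows with this statement.
        try (CallableStatement cs = con.prepareCall("{call sp_set_session_context(?, ?)}")) {
            cs.setString(1, "change_id");
            cs.setString(2, changeId);
            cs.execute();
        }
        try (PreparedStatement ps = con.prepareStatement(
                "INSERT INTO cars (color, brand) VALUES (?, ?)")) {
            ps.setString(1, color);
            ps.setString(2, brand);
            ps.executeUpdate();
        }
        // Client-side log entry, joinable to the trigger's history rows via changeId.
        System.out.println("change_id=" + changeId + " INSERT cars color=" + color + ", brand=" + brand);
    }
}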
I have a Java application that uses Spring JPA and Hibernate to connect to an Oracle 11g database.
From time to time, I need to drop a partition in the DB and rebuild all the UNUSABLE global indexes back to a USABLE state. (The indexes become unusable due to the drop partition command.)
Between the time my partition is dropped and the UNUSABLE indexes are rebuilt, my online application fails with an ORA-01502 error like the one below.
Caused by: java.sql.BatchUpdateException: ORA-01502: index 'USER.INDEX_NAME' or partition of such index is in unusable state
at oracle.jdbc.driver.OraclePreparedStatement.executeBatch(OraclePreparedStatement.java:10070)
at oracle.jdbc.driver.OracleStatementWrapper.executeBatch(OracleStatementWrapper.java:213)
at org.hibernate.jdbc.BatchingBatcher.doExecuteBatch(BatchingBatcher.java:70)
at org.hibernate.jdbc.AbstractBatcher.executeBatch(AbstractBatcher.java:268)
... 94 more
In Oracle there is an option to ignore UNUSABLE indexes by setting skip_unusable_indexes=TRUE. This way the query optimizer selects a different (more expensive) execution plan that does not use the index, and DML queries do not report any failure due to unusable indexes.
Is there any such similar option in Hibernate that I can use to not to fail when indexes are in UNUSABLE status?
Versions I am using
Hibernate: 3.6.9
Oracle: 11g
Java: 7
You can rebuild the index:
ALTER INDEX USER.INDEX_NAME REBUILD;
You may try to execute:
ALTER SESSION SET skip_unusable_indexes=true
like this, but this session will be returned to the connection pool and reused, so this will affect more than one query.
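If you do try it, one way to limit that risk is to set the option around the work and restore it before the connection goes back to the pool. A sketch (restoring to FALSE is an assumption here; restore whatever your system's actual default is):

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class SkipUnusableIndexes {
    public static void runIgnoringUnusableIndexes(Connection con, Runnable work) throws SQLException {
        try (Statement st = con.createStatement()) {
            st.execute("ALTER SESSION SET skip_unusable_indexes = TRUE");
            work.run(); // the queries that must survive the rebuild window
        } finally {
            // Hand the connection back to the pool in a known state.
            try (Statement st = con.createStatement()) {
                st.execute("ALTER SESSION SET skip_unusable_indexes = FALSE");
            }
        }
    }
}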
If I were you, I would ask myself: why are my indexes unusable? This is a situation that should not happen unless you are doing maintenance or running some very large batch process. You may have a 24/7 system where you don't really want to stop the system for maintenance. In that case you can set the option system-wide without a single change to your code; the system will be slower but will behave more nicely during maintenance. Just remember that indexes used to enforce constraints can't be ignored, so insert/update queries will fail anyway. Also add some automated check that reports unusable indexes in production at certain times; just a PL/SQL process that sends emails can be OK.
Another alternative is to change the option only during changes in the database:
ALTER SYSTEM SET skip_unusable_indexes=true;
ALTER TABLE T1 DROP PARTITION P1;
ALTER INDEX I1 REBUILD ONLINE;
ALTER SYSTEM SET skip_unusable_indexes=false;
On dba.stackexchange.com there is a discussion about the best way to drop a partition, so you are not alone; but that solution is for Oracle 12c.
I am doing a bulk insert using the Sybase temporary table approach (# table name). This happens in a transaction, yet the operation is committing the transaction (I am not calling connection.commit myself). I don't want this commit to happen, since I might have to roll back the entire transaction later on. Any idea why an insert using a temp table commits the transaction without being asked? How do I fix this issue?
The sql is something like
select * into #MY_TABLE_BUFFER from MY_TABLE where 0=1;
load table #MY_TABLE_BUFFER from 'C:\temp\123.tmp' WITH CHECKPOINT ON;
insert into MY_TABLE on existing update select * from #MY_TABLE_BUFFER;
drop table #MY_TABLE_BUFFER;
And I am using statement.executeUpdate() to execute it
Figured out that it's due to the temp table not participating in the transaction, which forces a commit.
Is there any workaround for this?
Sybase is funny about using user-specified (aka explicit) transactions in conjunction w/ the use of #temp tables (where the temp table is created while in the transaction). For better or worse, Sybase considers the creation of a #temp table (including via 'select into' statement) to be a DDL statement in the context of tempdb. In the editor, w/ default server/db settings, you'll get an error when you do this.
As a test, you could try setting the 'ddl in tran' setting (in the context of the tempdb database) to true. Then, see if the behavior changes.
Note, however, that permanently leaving that setting in place is a bad idea (per Sybase documentation). I'm proposing it for investigative purposes only.
The real solution (if my assumption of the problem is correct) likely lies in creating the #temp table first, then beginning the transaction, to avoid any DDL stmts in the scope of the transaction.
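A rough JDBC sketch of that ordering, assuming the commit really is caused by the temp-table DDL (note that load table has its own checkpoint behavior you would want to verify separately):

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class BulkLoad {
    public void load(Connection con) throws SQLException {
        try (Statement st = con.createStatement()) {
            // DDL first, while still in autocommit mode.
            st.executeUpdate("select * into #MY_TABLE_BUFFER from MY_TABLE where 0=1");
            con.setAutoCommit(false); // now start the transaction, for the DML only
            st.executeUpdate("load table #MY_TABLE_BUFFER from 'C:\\temp\\123.tmp' WITH CHECKPOINT ON");
            st.executeUpdate("insert into MY_TABLE on existing update select * from #MY_TABLE_BUFFER");
            con.commit();
        } catch (SQLException e) {
            if (!con.getAutoCommit()) {
                con.rollback();
            }
            throw e;
        } finally {
            con.setAutoCommit(true);
            try (Statement st = con.createStatement()) {
                st.execute("drop table #MY_TABLE_BUFFER");
            }
        }
    }
}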
sp_dboption tempdb, 'ddl in tran',true
The above should work; I was also not able to create/update #tables when the proc was created with anymode.
I am trying to create a program that updates 2 different tables using SQL commands. The only thing I am worried about is that if the program updates one of the tables, then loses connection (or whatever) and does NOT update the other table, there could be an issue. Is there a way I could either
A. Update them at the exact same time
or
B. Revert the first update if the second one fails.
Yes, use a SQL transaction. Here is the tutorial: JDBC Transactions
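In outline, it looks like this (table and column names are made up):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class TwoTableUpdate {
    public void updateBoth(Connection con) throws SQLException {
        con.setAutoCommit(false); // start the transaction
        try (PreparedStatement first = con.prepareStatement("UPDATE table_a SET col = ? WHERE id = ?");
             PreparedStatement second = con.prepareStatement("UPDATE table_b SET col = ? WHERE id = ?")) {
            first.setString(1, "value");
            first.setInt(2, 1);
            first.executeUpdate();
            second.setString(1, "value");
            second.setInt(2, 1);
            second.executeUpdate();
            con.commit(); // both updates become visible together
        } catch (SQLException e) {
            con.rollback(); // option B: the first update is reverted if the second fails
            throw e;
        } finally {
            con.setAutoCommit(true);
        }
    }
}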
Depending on the database, I'd suggest using a stored procedure or function based on the operations involved. They're supported by:
MySQL
Oracle
SQL Server
PostgreSQL
These encapsulate a database transaction (atomic in nature -- it either happens completely, or it doesn't happen at all), without the extra weight of sending the queries over the wire to the database... Because they already exist on the database, the queries are parameterized (safe from SQL injection attacks), which means less data is sent -- only the parameter values.
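Calling such a procedure from JDBC is then a single round trip; the procedure name below is hypothetical:

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.SQLException;

public class ProcCaller {
    public void updateBothTables(Connection con, int id, String value) throws SQLException {
        // The procedure wraps both updates in one server-side transaction.
        try (CallableStatement cs = con.prepareCall("{call update_both_tables(?, ?)}")) {
            cs.setInt(1, id);
            cs.setString(2, value);
            cs.execute();
        }
    }
}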
Most SQL servers support transactions, that is, queueing up a set of actions and then having them happen atomically. To do this, you wrap your queries as such:
START TRANSACTION;
*do stuff*
COMMIT;
You can consult your server's documentation for more information about what additional features it supports. For example, here is a more detailed discussion of transactions in MySQL.
I am stuck at a point where I need to detect database changes in Java code. The request is that any record updated, added, or deleted in any table of the DB should be recognized by the Java program. How could it be implemented? With JMS? Or a Java thread?
Update: Thanks guys for your support. I am actually using Oracle as the DB and WebLogic 10.3 Workshop. I want to get the updates from a table on which I have only read permission, so what do you all suggest? I can't update the DB. The only thing I can do is read it, and if there is any change in the table I have to get the information/notification that certain data rows have been added/deleted or updated.
Unless the database can send a message to Java, you'll have to have a thread that polls.
A better, more efficient model would be one that fires events on changes. A database that has Java running inside (e.g., Oracle) could do it.
We do it by polling the DB using an EJB timer task. In essence, we have a status field which we update when we have processed that row.
So the EJB timer thread calls a procedure that grabs rows which are flagged "un-treated".
Dirty, but also very simple and robust. Especially, after a crash or something, it can still pick up from where it crashed without too much complexity.
The disadvantage is the wasted load on the DB, and response time is limited as well (latency is probably on the order of seconds).
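A sketch of that polling pattern (table, column, and status values are illustrative):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class StatusFlagPoller {
    public void pollOnce(Connection con) throws SQLException {
        try (PreparedStatement select = con.prepareStatement(
                 "SELECT id FROM mytable WHERE status = 'UNTREATED'");
             PreparedStatement markDone = con.prepareStatement(
                 "UPDATE mytable SET status = 'DONE' WHERE id = ?");
             ResultSet rs = select.executeQuery()) {
            while (rs.next()) {
                long id = rs.getLong("id");
                // ... process the changed row ...
                markDone.setLong(1, id); // flag it; after a crash, unflagged rows are simply re-picked
                markDone.executeUpdate();
            }
        }
    }
}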
We have accomplished this in our firm by adding triggers to database tables that call an executable to issue a Tib Rendezvous message, which is received by all interested Java applications.
However, the ideal way to do this IMHO is to be in complete control of all database writes at the application level, and to notify any interested parties at this point (via multi-cast, Tib, etc). In reality this isn't always possible where you have a number of disparate systems.
You're indeed dependent on whether the database in question supports it. You'll also need to take the overhead into account: a lot of inserts/updates also means a lot of notifications, and your Java code has to handle them consistently, else the backlog will pile up.
If the data model allows it, just add an extra column which holds a timestamp that gets updated on every insert/update. Most major DBs support auto-updating such a column. I don't know which DB server you're using, so I'll give only a MySQL-targeted example:
CREATE TABLE mytable (
id BIGINT NOT NULL AUTO_INCREMENT PRIMARY KEY,
somevalue VARCHAR(255) NOT NULL,
lastupdate TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
INDEX (lastupdate)
)
This way you don't need to worry about inserting/updating the lastupdate yourself. You can just do an INSERT INTO mytable (somevalue) VALUES (?) or UPDATE mytable SET somevalue = ? WHERE id = ? and the DB will do the magic.
After ensuring that the DB server's time and Java application's time are the same, you can just fire a background thread (using either Timer with TimerTask, or ScheduledExecutorService with Runnable or Callable) which does roughly this:
// java.sql.Timestamp matches the TIMESTAMP column; a java.util.Date would not compile with setDate().
Timestamp now = new Timestamp(System.currentTimeMillis());
statement = connection.prepareStatement("SELECT id FROM mytable WHERE lastupdate BETWEEN ? AND ?");
statement.setTimestamp(1, this.lastTimeChecked);
statement.setTimestamp(2, now);
resultSet = statement.executeQuery();
while (resultSet.next()) {
    // Handle the changed row accordingly.
}
this.lastTimeChecked = now;
Update: as per the question update, it turns out that you have no control over the DB. Well, then you don't have many good/efficient options. Either just refresh the entire list in Java memory with the entire data from the DB, without checking/comparing for changes (probably the fastest way), or dynamically generate a SQL query based on the current data which excludes the current data from the results.
I assume that you're talking about a situation where anything can update a table. If for some reason you're instead talking about a situation where only the Java application will be updating the table that's different. If you're using Java only you can put this code in your DAO or EJB doing the update (it's much cleaner than using a trigger in this case).
An alternative way to do this is to funnel all database calls through a web service API, or perhaps a JMS API, which does the actual database calls. Processes could register there to get a notification of a database update.
We have a similar requirement. In our case we have a legacy system, and we do not want to adversely impact the performance of the existing transaction table.
Here's my proposal:
A new work table with a PK to the transaction and an insert timestamp
A new audit table that has the same columns as the transaction table + audit columns
A trigger on the transaction table to dump all inserts/updates/deletes to the audit table
A Java process to poll the work table, join to the audit table, publish the event in question, and delete from the work table (sketched below)
The question is: what do you use for polling? Is Quartz overkill? How can you scale back the polling frequency based on the current DB load?
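For the polling piece, a plain ScheduledExecutorService may be enough where Quartz would be overkill. A sketch of step 4 under that assumption (all table and column names are illustrative):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import javax.sql.DataSource;

public class WorkTablePoller {
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    private final DataSource dataSource;

    public WorkTablePoller(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public void start(long periodSeconds) {
        // Fixed delay rather than fixed rate: a slow poll under DB load naturally backs off.
        scheduler.scheduleWithFixedDelay(this::pollOnce, 0, periodSeconds, TimeUnit.SECONDS);
    }

    private void pollOnce() {
        try (Connection con = dataSource.getConnection();
             PreparedStatement select = con.prepareStatement(
                 "SELECT w.txn_id FROM work_table w JOIN audit_table a ON a.txn_id = w.txn_id");
             ResultSet rs = select.executeQuery()) {
            while (rs.next()) {
                long id = rs.getLong("txn_id");
                // ... publish the event built from the audit row ...
                try (PreparedStatement delete = con.prepareStatement(
                        "DELETE FROM work_table WHERE txn_id = ?")) {
                    delete.setLong(1, id);
                    delete.executeUpdate();
                }
            }
        } catch (SQLException e) {
            e.printStackTrace(); // log and carry on, so the scheduled task keeps running
        }
    }
}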