Hibernate Interceptor - Try-Catch T-SQL - java

I have the requirement to ignore primary key violations from SQL Server in my inserts. While there are several ways to accomplish this task, many of them have performance implications or are not feasible. I need to batch insert into a table and there is a possibility of duplicates.
To accomplish the task, I've written a Hibernate interceptor that intercepts onPrepareStatement and wraps the Hibernate-generated SQL in a T-SQL TRY...CATCH construct. The TRY...CATCH looks for primary key violations and silences them. Below is my code for the interceptor.
private static final String FORMAT_TRY_CATCH = "BEGIN TRY %s END TRY\r\n"
        + "BEGIN CATCH\r\n" + "\r\n"
        + "DECLARE @ErrorMessage NVARCHAR(4000),\r\n"
        + "        @ErrorNumber INT,\r\n"
        + "        @ErrorSeverity INT,\r\n"
        + "        @ErrorState INT,\r\n"
        + "        @ErrorLine INT,\r\n"
        + "        @ErrorProcedure NVARCHAR(200);\r\n" + "\r\n"
        + "SELECT @ErrorNumber = ERROR_NUMBER(), @ErrorSeverity = ERROR_SEVERITY(),\r\n"
        + "       @ErrorState = ERROR_STATE(), @ErrorLine = ERROR_LINE(),\r\n"
        + "       @ErrorProcedure = ISNULL(ERROR_PROCEDURE(), '-');\r\n" + "\r\n"
        + "SELECT @ErrorMessage = N'Error %%d, Level %%d, State %%d, Procedure %%s, Line %%d, ' +\r\n"
        + "       'Message: ' + ERROR_MESSAGE();\r\n"
        + "IF @ErrorNumber <> 2627\r\n"
        + "BEGIN\r\n"
        + "    RAISERROR (@ErrorMessage, @ErrorSeverity, 1,\r\n"
        + "        @ErrorNumber,    -- parameter: original error number.\r\n"
        + "        @ErrorSeverity,  -- parameter: original error severity.\r\n"
        + "        @ErrorState,     -- parameter: original error state.\r\n"
        + "        @ErrorProcedure, -- parameter: original error procedure name.\r\n"
        + "        @ErrorLine       -- parameter: original error line number.\r\n"
        + "    );\r\n"
        + "END\r\n"
        + "END CATCH;";

@Override
public String onPrepareStatement(String sql) {
    if (sql.toLowerCase().startsWith("insert")) {
        return String.format(FORMAT_TRY_CATCH, sql);
    }
    return sql;
}
The interceptor works, except that when there is a primary key constraint violation the statement returns 0 affected rows. Hibernate checks that the number of affected rows matches the expected number and throws an exception when it doesn't. I can work around the issue by executing a dummy T-SQL statement that returns one affected row:
DECLARE @dummy TABLE (col1 int)
INSERT INTO @dummy VALUES (1)
This dummy logic decreases performance whenever a primary key constraint violation occurs. Is there a better-performing dummy script, or a better way to catch primary key constraint violations?
Note: doing a select before each insert is not acceptable for performance reasons, and I cannot recreate the table's primary key to turn on IGNORE_DUP_KEY.

Related

java postgresql using trigger

SQL = "create view CSaccept as select sID, cName from Apply where major = 'CS' and decision = 'Y' ";
stmt.executeUpdate(SQL);
SQL = "create or replace function test1() returns trigger as $$\n" +
      "begin\n" +
      "  update Apply set cName = New.cName where (sID = Old.sID and cName = Old.cName and Apply.major = 'CS' and Apply.decision = 'Y');\n" +
      "  return Old;\n" +
      "end;\n" +
      "$$\n" +
      "language 'plpgsql';\n" +
      "create trigger CSacceptUpdate\n" +
      "instead of update of cName on CSaccept\n" +
      "for each row\n" +
      "execute procedure test1();";
stmt.executeUpdate(SQL);
Hi, I am writing a Java program using PostgreSQL, and the error below keeps popping up for the above SQL statement. What is the problem?
Exception in thread "main" org.postgresql.util.PSQLException: ERROR: INSTEAD OF triggers cannot have column lists
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2553)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2285)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:323)
at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:473)
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:393)
at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:322)
at org.postgresql.jdbc.PgStatement.executeCachedSql(PgStatement.java:308)
at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:284)
at org.postgresql.jdbc.PgStatement.executeUpdate(PgStatement.java:258)
at SqlTest2.main(SqlTest2.java:270)
SQL = "create or replace function test2() returns trigger as $$\n" +
      "begin\n" +
      "  delete from Apply where sID = old.sID and cName = old.cName and major = 'CS' and decision = 'Y';\n" +
      "  return Old;\n" +
      "end;\n" +
      "$$\n" +
      "language 'plpgsql';\n" +
      "create trigger CSacceptDelete\n" +
      "instead of delete on CSaccept\n" +
      "for each row\n" +
      "execute procedure test2();";
The one above works well. I don't know why the first one is not working.
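For what it's worth, the error text itself points at the difference between the two statements: the failing trigger carries the column list "of cName", which PostgreSQL does not allow on INSTEAD OF triggers, while the working delete trigger has no column list. A hedged sketch of the first trigger with the column list removed (otherwise unchanged from the question):

```sql
-- Assumption: removing "of cName" resolves the reported error;
-- an INSTEAD OF trigger must fire for the whole row, and the
-- trigger function can still decide which columns to touch.
create trigger CSacceptUpdate
instead of update on CSaccept
for each row
execute procedure test1();
```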

Oracle merge query failing with unique constraint in Hibernate transaction when multiple threads execute

I have a method that runs a merge query. If the entry is not already present, the method inserts a new row; if it is present, it updates the existing record.
The problem occurs when two threads execute this method at the same time. The first thread enters the method and has not yet committed its transaction when the second thread enters. One of them inserts the row; the other then also tries to insert instead of updating, causing a unique constraint error.
I am wondering why this happens; the second thread should update instead of inserting a new row.
I am new to Hibernate, so is this a Hibernate transaction problem or something else?
Please help me understand this. Thanks.
Below is the mergeFunction() code, added for clarity. When the problem is simulated, the log lines are printed in the following sequence:
updating MY_TABLE ... thread-1
updating MY_TABLE ... thread-2
updated MY_TABLE ... thread-1
Unique Key constraint error ... thread-2
This problem reproduces only occasionally, when two threads happen to execute the same method at the same time.
public void mergeFunction(long customerId, long assetId, String module, Status status, String errorMsg) {
    log.debug("Updating MY_TABLE for asset {} module {} status ={} customer ={}", assetId, module, status.name(), customerId);
    StatelessSession statelessSession = session.getSessionFactory().openStatelessSession();
    try {
        Transaction tx = statelessSession.beginTransaction();
        try {
            String sql = "merge into MY_TABLE x " +
                    "using (select " + customerId + " customer_id, '" + module + "' module, " + assetId + " asset_id from dual) y " +
                    "on (x.asset_id = y.asset_id and x.module = y.module) " +
                    "when matched then " +
                    "  update set x.status = :status, x.ERROR = :errMsg where x.asset_id = :aid and x.module = :module " +
                    "when not matched then " +
                    "  insert (id, uuid, customer_id, module, asset_id, status, activated_on) " +
                    "  values (id_MY_TABLE.nextval, portal_pck.get_uuid(), " + customerId + ", '" + module + "', " + assetId + ", '" + status + "', sysdate)";
            statelessSession.createSQLQuery(sql)
                    .setParameter("aid", assetId)
                    .setParameter("module", module)
                    .setParameter("status", status.name())
                    .setParameter("errMsg", StringUtils.isEmpty(errorMsg) ? " " : errorMsg)
                    .executeUpdate();
            tx.commit();
            log.debug("Updated MY_TABLE for {} asset for {} status ={} customer={}", assetId, module, status.name(), customerId);
        } catch (Exception exe) {
            log.debug("error while updating MY_TABLE table for asset {} module{} status {} for customer {} exception ={}",
                    assetId, module, status.name(), customerId, exe);
            tx.rollback();
            throw exe;
        }
    } finally {
        statelessSession.close();
    }
}
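The race described above (both threads evaluate the "when not matched" branch before either commits) is possible because MERGE is not atomic across concurrent transactions. One common mitigation, sketched here as a generic retry helper rather than code from the post, is to catch the failure and run the MERGE again: on the retry the other thread's row is committed, so the "when matched" branch fires and the statement updates instead of inserting.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.atomic.AtomicInteger;

public class MergeRetry {
    // Generic retry helper: run the action and, if it throws (e.g. a
    // unique-constraint exception from the MERGE), try again up to
    // maxAttempts times before giving up and rethrowing.
    static <T> T withRetry(Callable<T> action, int maxAttempts) throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return action.call();
            } catch (Exception e) {
                last = e;
            }
        }
        throw last;
    }

    public static void main(String[] args) throws Exception {
        // Simulated action that fails once (the losing thread's first MERGE)
        // and succeeds on the second attempt.
        AtomicInteger calls = new AtomicInteger();
        String result = withRetry(() -> {
            if (calls.incrementAndGet() == 1) {
                throw new IllegalStateException("ORA-00001: unique constraint violated");
            }
            return "updated";
        }, 3);
        System.out.println(result); // prints "updated"
    }
}
```

In the real method, the retried action would be the whole transaction (beginTransaction, MERGE, commit), not just the statement.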

Copy Tables From Source Database to Destination Database On Same Host System (java.lang.OutOfMemoryError)

I need to query a database and copy the result set into another database, which has the same database structure and is also on the same host system.
The following Java function works well (fast and without errors) if the query result is small:
public void copyTableData(Connection dbConnOnSrcDB, Connection dbConnOnDestDB,
                          String sqlQueryOnSrcDB, String tableNameOnDestDB)
        throws SQLException {
    try (
        PreparedStatement prepSqlStatmOnSrcDB = dbConnOnSrcDB.prepareStatement(sqlQueryOnSrcDB);
        ResultSet sqlResultsFromSrcDB = prepSqlStatmOnSrcDB.executeQuery()
    ) {
        ResultSetMetaData sqlMetaResults = sqlResultsFromSrcDB.getMetaData();
        // Collect the column names of the result set
        List<String> columnsOfQuery = new ArrayList<>();
        for (int i = 1; i <= sqlMetaResults.getColumnCount(); i++) {
            columnsOfQuery.add(sqlMetaResults.getColumnName(i));
        }
        try (
            PreparedStatement prepSqlStatmOnDestDB = dbConnOnDestDB.prepareStatement(
                "INSERT INTO " + tableNameOnDestDB +
                " (" + columnsOfQuery.stream().collect(Collectors.joining(", ")) + ") " +
                "VALUES (" + columnsOfQuery.stream().map(c -> "?").collect(Collectors.joining(", ")) + ")")
        ) {
            while (sqlResultsFromSrcDB.next()) {
                for (int i = 1; i <= sqlMetaResults.getColumnCount(); i++) {
                    prepSqlStatmOnDestDB.setObject(i, sqlResultsFromSrcDB.getObject(i));
                }
                prepSqlStatmOnDestDB.addBatch();
            }
            prepSqlStatmOnDestDB.executeBatch();
        }
    }
}
But I have very large database queries with result sets in the range of several hundred megabytes.
Problem A: I found out that the OutOfMemoryError below is raised when the second line of this code is processed:
ResultSet sqlResultsFromSrcDB = prepSqlStatmOnSrcDB.executeQuery()
Java exception:
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at java.lang.Class.getDeclaredFields0(Native Method)
at java.lang.Class.privateGetDeclaredFields(Class.java:2583)
at java.lang.Class.getDeclaredField(Class.java:2068)
at java.util.concurrent.atomic.AtomicReferenceFieldUpdater$AtomicReferenceFieldUpdaterImpl$1.run(AtomicReferenceFieldUpdater.java:323)
at java.util.concurrent.atomic.AtomicReferenceFieldUpdater$AtomicReferenceFieldUpdaterImpl$1.run(AtomicReferenceFieldUpdater.java:321)
at java.security.AccessController.doPrivileged(Native Method)
at java.util.concurrent.atomic.AtomicReferenceFieldUpdater$AtomicReferenceFieldUpdaterImpl.<init>(AtomicReferenceFieldUpdater.java:320)
at java.util.concurrent.atomic.AtomicReferenceFieldUpdater.newUpdater(AtomicReferenceFieldUpdater.java:110)
at java.sql.SQLException.<clinit>(SQLException.java:372)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2156)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:300)
at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:428)
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:354)
at org.postgresql.jdbc.PgPreparedStatement.executeWithFlags(PgPreparedStatement.java:169)
at org.postgresql.jdbc.PgPreparedStatement.executeQuery(PgPreparedStatement.java:117)
at Application.copyTableData(Application.java:159)
at Application.main(Application.java:585)
Problem B: The copy job takes a very long time. Is there a way to speed up the copy process?
The DB queries are:
String[] tables = new String[]{
"table1",
"table1_properties",
"table1_addresses",
"table2",
"table3",
"table4",
"table5",
"table6",
"table7",
"table8",
"table9",
"table10"
};
Function call:
for (String table : tables) {
    getDataFromSrcDB = "SELECT " + table + ".* " +
            "FROM table1 " +
            "FULL JOIN table1_properties " +
            "ON table1_properties.d_id=table1.d_id " +
            "FULL JOIN table1_addresses " +
            "ON table1_addresses.d_id=table1_properties.d_id " +
            "FULL JOIN table2 " +
            "ON table2.p_id=table1_properties.p_id " +
            "FULL JOIN table3 " +
            "ON table3.d_id=table1.d_id " +
            "FULL JOIN table4 " +
            "ON table4.d_id=table1.d_id " +
            "FULL JOIN table5 " +
            "ON table5.d_id=table1.d_id " +
            "FULL JOIN table6 " +
            "ON table6.d_id=table1.d_id " +
            "FULL JOIN table7 " +
            "ON table7.d_id=table1.d_id " +
            "FULL JOIN table8 " +
            "ON table8.id=table4.id " +
            "FULL JOIN table9 " +
            "ON table9.d_id=table1.d_id " +
            "FULL JOIN table10 " +
            "ON table10.a_id=table1_addresses.a_id " +
            "WHERE ST_Intersects(ST_MakeEnvelope(" +
            minLong + "," + minLat + "," + maxLong + "," + maxLat +
            ",4326), geom :: GEOMETRY) OR " +
            "ST_Intersects(ST_MakeEnvelope(" +
            minLong + "," + minLat + "," + maxLong + "," + maxLat +
            ",4326), CAST(table3.location AS GEOMETRY))";
    copyTableData(dbConnOnSrcDB, dbConnOnDestDB, getDataFromSrcDB, table);
}
When the size of the batch is huge, you get this error :
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
I have some solutions.
First Solution
Instead, you can divide the batch into smaller batches, persisting the data every 1,000 rows, for example. You also need some configuration, as Mark Rotteveel mentions in the comment, and as the documentation describes under Getting results based on a cursor:
By default the driver collects all the results for the query at once.
This can be inconvenient for large data sets so the JDBC driver
provides a means of basing a ResultSet on a database cursor and only
fetching a small number of rows.
So what you should do:
The connection to the server must use the V3 protocol.
The Connection must not be in autocommit mode.
The query given must be a single statement.
The fetch size of the Statement must be set to an appropriate value.
...read the details in the documentation.
In this case your code can look like this:
// Note: here you disable auto-commit on the source connection
dbConnOnSrcDB.setAutoCommit(false);
final int batchSize = 1000;
final int fetchSize = 50;
int count = 0;
...
// Set an appropriate fetch size
sqlResultsFromSrcDB.setFetchSize(fetchSize);
while (sqlResultsFromSrcDB.next()) {
    for (int i = 1; i <= sqlMetaResults.getColumnCount(); i++) {
        prepSqlStatmOnDestDB.setObject(i, sqlResultsFromSrcDB.getObject(i));
    }
    prepSqlStatmOnDestDB.addBatch();
    if (++count % batchSize == 0) {
        prepSqlStatmOnDestDB.executeBatch();
    }
}
prepSqlStatmOnDestDB.executeBatch(); // insert remaining records
Second Solution
Because you are using PostgreSQL, you could use dblink to transfer data from one database to another.
Some useful links:
https://viralpatel.net/blogs/batch-insert-in-java-jdbc/
How to use (install) dblink in PostgreSQL?
http://www.postgresonline.com/journal/archives/44-Using-DbLink-to-access-other-PostgreSQL-Databases-and-Servers.html
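For illustration, a minimal dblink sketch: the connection string and the column list here are placeholders, not taken from the question (table1 has a d_id column per the joins above; "name" is hypothetical). The copy then happens entirely server-side, with no data passing through the Java heap.

```sql
-- Assumes the dblink extension is installed in the destination database.
INSERT INTO table1
SELECT *
FROM dblink('dbname=source_db', 'SELECT d_id, name FROM table1')
     AS t(d_id integer, name text);
```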
You have many ways to achieve this. Here are a few options you can apply:
Read the data from the first database, write it into a CSV file, and then read the CSV file back in chunks and write it to the other database. (Easy to implement, but more coding.)
https://gauravmutreja.wordpress.com/2011/10/13/exporting-your-database-to-csv-file-in-java/
If you don't have much data manipulation to do before the transfer, you can write a simple DB function to read data from one database and write it into the other.
Or you can try Spring Batch to do this.
When you fetch the data all at once, it fills up RAM until everything has been fetched, so you can easily run into an OutOfMemoryError.
If you fetch the data as a stream, you can handle an unlimited amount of data: rows are fetched and processed in parts (sized by fetchSize), and the RAM used for already-processed rows can be reclaimed.

How can I create a Table with two Foreign Key References to one other Table via UCanAccess?

Building the references directly in MS Access is no problem.
Doing it with UCanAccess results in a "net.ucanaccess.jdbc.UcanaccessSQLException: ...".
Class.forName("net.ucanaccess.jdbc.UcanaccessDriver");
Connection connection = DriverManager.getConnection("jdbc:ucanaccess://e:/TestDB.accdb;memory=true");
Statement statement = connection.createStatement();
//
String tableToBeReferenced = "PersonsTable";
String tableWithTheReferences = "RelationShipsTable";
try { // Tidy up
    statement.execute("DROP TABLE " + tableWithTheReferences);
} catch (Exception exception) {}
try { // Tidy up
    statement.execute("DROP TABLE " + tableToBeReferenced);
} catch (Exception exception) {}
statement.execute("CREATE TABLE " + tableToBeReferenced + " (ID autoincrement NOT NULL PRIMARY KEY," //
        + "Name VARCHAR(255)" //
        + ")");
statement.execute("CREATE TABLE " + tableWithTheReferences + " (ID LONG NOT NULL PRIMARY KEY," //
        + "RelationShip VARCHAR(255) NOT NULL DEFAULT 'FRIENDS'," //
        + "Person1Id LONG NOT NULL," //
        + "Person2Id LONG NOT NULL)");
// reference #1
statement.execute("ALTER TABLE " + tableWithTheReferences + //
        " ADD CONSTRAINT FOREIGN_KEY_1 FOREIGN KEY (Person1Id) REFERENCES " //
        + tableToBeReferenced + " (ID) ON DELETE CASCADE");
// reference #2
statement.execute("ALTER TABLE " + tableWithTheReferences + //
        " ADD CONSTRAINT FOREIGN_KEY_2 FOREIGN KEY (Person2Id) REFERENCES " //
        + tableToBeReferenced + " (ID) ON DELETE CASCADE");
If I create only the first reference, it works.
If I create only the second reference, it works.
But when I try to build both references, it fails.
I am able to reproduce the issue under UCanAccess 4.0.3. Neither HSQLDB nor Jackcess has a problem with creating two independent FK relationships between the same two tables, so it looks like it might be a bug in UCanAccess. I will report the issue to the UCanAccess development team and update this answer with any news.
Update:
A fix for this issue has been implemented and will be included in the UCanAccess 4.0.4 release.
I think it will not work since you have "ON DELETE CASCADE" for both your foreign keys.

Unable to insert record in MySql using JAVA

I am new to Java and MySQL, in fact I am using this combination for the first time, and I'm facing real trouble. I want to insert a few records into a table but am unable to do so. Following are the fields and datatypes of the table named tbl_cdr in MySQL.
Field         Type
DATEANDTIME   datetime NULL
VALUE1        int(50) NULL
VALUE2        varchar(50) NULL
VALUE3        varchar(50) NULL
VALUE4        varchar(50) NULL
VALUE5        varchar(50) NULL
The record I want to insert contains the following values:
2014-05-19 02:37:18, 405, MGW190514023718eab4, 923016313475, IN, ALERTSC
I am using the following query and statements to insert the record into the table:
sqlQuery = "INSERT INTO tbl_cdr (DATEANDTIME,VALUE1,VALUE2,VALUE3,VALUE4,VALUE5)"
        + "VALUES (" + forDateAndTime.format(date)
        + ", " + columnsList.get(1)
        + ", " + columnsList.get(2)
        + ", " + columnsList.get(3)
        + ", " + columnsList.get(4)
        + ", " + columnsList.get(5) + ")";
try
{
Statement qryStatement = conn.createStatement();
qryStatement.executeUpdate(sqlQuery);
qryStatement.close();
} catch (SQLException ex)
{
Logger.getLogger(CdrProject.class.getName()).log(Level.SEVERE, null, ex);
}
But when I reach the statement qryStatement.executeUpdate(sqlQuery);, the following exception is thrown:
MySQLSyntaxErrorException: You have an error in your SQL syntax;
check the manual that corresponds to your MySQL server version for the
right syntax to use near '02:37:18, 405, MGW190514023718eab4,
923016313475, IN, ALERTSC)' at line 1
value2, value3, value4 and value5 are varchars, so they should be written within single quotes (and the datetime literal needs quotes as well).
Do it like this:
sqlQuery = "INSERT INTO tbl_cdr (DATEANDTIME,VALUE1,VALUE2,VALUE3,VALUE4,VALUE5)"
        + "VALUES ('" + forDateAndTime.format(date)
        + "', " + columnsList.get(1)
        + ", '" + columnsList.get(2)
        + "', '" + columnsList.get(3)
        + "', '" + columnsList.get(4)
        + "', '" + columnsList.get(5) + "')";
You're inserting the date incorrectly. MySQL allows you to insert a string literal or a number.
You're trying to use 02:37:18 as a number, when really you should be using it as a string literal: '02:37:18'
Here is the MySql Reference describing this.
You're also not treating your varchars as strings; they should be enclosed in quotes as well.
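Beyond fixing the quoting, the usual way to avoid this whole class of error is a PreparedStatement, which lets the JDBC driver handle quoting of dates and varchars itself. A sketch under the question's assumptions (conn, date and columnsList as in the original code; the class and helper names are illustrative):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Timestamp;
import java.util.List;

public class CdrInsert {
    // One placeholder per column: the driver quotes values itself,
    // so no manual '...' wrapping is needed anywhere.
    static final String SQL =
            "INSERT INTO tbl_cdr (DATEANDTIME, VALUE1, VALUE2, VALUE3, VALUE4, VALUE5)"
            + " VALUES (?, ?, ?, ?, ?, ?)";

    // cols mirrors columnsList in the question: cols.get(1) is the int
    // column VALUE1, cols.get(2..5) are the varchar columns.
    static void insertRecord(Connection conn, Timestamp when, List<String> cols)
            throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(SQL)) {
            ps.setTimestamp(1, when);
            ps.setInt(2, Integer.parseInt(cols.get(1)));
            for (int i = 2; i <= 5; i++) {
                ps.setString(i + 1, cols.get(i));
            }
            ps.executeUpdate();
        }
    }

    public static void main(String[] args) {
        System.out.println(SQL);
    }
}
```

This also protects against SQL injection if any of the values come from outside input.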
