Can anyone help me: how do I create a function in Java that scans a MySQL table every 5 seconds to detect newly inserted data?
If you need a Java-only solution, you can do it with the Timer and TimerTask classes provided by the JDK.
Here is the code:
java.util.TimerTask task = new java.util.TimerTask() {
    int prevCount = 0;

    @Override
    public void run() {
        try (Connection conn = getConnection();
             PreparedStatement ps = conn.prepareStatement("SELECT COUNT(*) FROM table");
             ResultSet rs = ps.executeQuery()) {
            if (rs.next()) {
                int count = rs.getInt(1);
                System.out.println("Count diff: " + (count - prevCount));
                prevCount = count;
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
};
java.util.Timer timer = new java.util.Timer(true); // true: run the timer as a daemon thread
timer.schedule(task, 0, 5000); // run the task every 5 seconds
try {
    Thread.sleep(60000); // cancel the task after 1 minute
} catch (InterruptedException e) {
    e.printStackTrace();
}
timer.cancel();
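As a side note, a ScheduledExecutorService is usually preferred over Timer in modern Java (it survives uncaught exceptions in the task and supports thread pools). A minimal sketch of the same scheduling pattern, with the database query replaced by a println so the example is self-contained:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class PollScheduler {

    // Schedule 'task' to run immediately and then every 'periodMillis' ms.
    static void startPolling(ScheduledExecutorService scheduler,
                             Runnable task, long periodMillis) {
        scheduler.scheduleAtFixedRate(task, 0, periodMillis, TimeUnit.MILLISECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        // In a real task you would run the COUNT(*) query from the answer above;
        // a println keeps this sketch self-contained.
        startPolling(scheduler, () -> System.out.println("polling the table..."), 5000);
        Thread.sleep(60000); // poll for one minute, as in the Timer example
        scheduler.shutdown();
    }
}
```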
Searching big tables is a heavy operation, so you can probably reduce the number of table reads by detecting new data in some other way.
For example, you can check the table's size before the actual data fetch, either by running a "select count(*) from table" query or by calculating the table's size on disk, as described here: How to get the sizes of the tables of a mysql database?
A database trigger can also help. For example, the trigger could update a marker recording the table's last update, and your Java app would watch that marker instead. This variant also avoids performing idle reads of your table.
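A minimal sketch of the marker idea, assuming a hypothetical readMarker() supplier that returns the table's last-update marker (e.g. a timestamp maintained by the trigger). The full-table read then runs only when the marker has actually advanced:

```java
import java.util.function.LongSupplier;

public class ChangeDetector {
    private long lastSeenMarker;
    private final LongSupplier readMarker; // hypothetical: e.g. SELECT last_update FROM table_marker

    public ChangeDetector(LongSupplier readMarker) {
        this.readMarker = readMarker;
        this.lastSeenMarker = readMarker.getAsLong();
    }

    // Returns true only when the marker advanced since the last check,
    // so the expensive full-table read runs only when there is new data.
    public boolean hasNewData() {
        long current = readMarker.getAsLong();
        if (current != lastSeenMarker) {
            lastSeenMarker = current;
            return true;
        }
        return false;
    }
}
```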
You don't need a Java program to scan for changes in the DB. Apart from being redundant, it will be costly in terms of network and DB load, and it's not good practice to implement a feature for which a standard solution already exists.
CREATE TRIGGER `some_update_happened` BEFORE/AFTER INSERT/UPDATE/DELETE
ON `mydb`.`mytable`
FOR EACH ROW BEGIN
  -- your trigger logic here, e.g. update a marker row that your Java app watches
END;
What you can do is use DB update triggers; refer to this link & this link.
After implementing the trigger, I suppose you need to catch it using a service implemented in PHP, Java, etc. You need to implement an event listener to receive the trigger notification, just like it is done here for PHP; there is also an example in the Oracle docs, but that is for Oracle, and here is an example in Java + MySQL.
I understand you are a beginner; just go step by step, error by error, and you will get there. Good luck.
I am working on a project where we are using DynamoDB as the database.
I used TableUtils from com.amazonaws.services.dynamodbv2.util.TableUtils to create the table if it does not exist.
CreateTableRequest tableRequest = dynamoDBMapper.generateCreateTableRequest(cls);
tableRequest.setProvisionedThroughput(new ProvisionedThroughput(5L, 5L));
boolean created = TableUtils.createTableIfNotExists(amazonDynamoDB, tableRequest);
Now, after creating the table, I have to push the data once the table is active.
I saw there is a method to do this:
try {
TableUtils.waitUntilActive(amazonDynamoDB, cls.getSimpleName());
} catch (Exception e) {
// TODO: handle exception
}
But this is taking 10 minutes.
Is there a method in TableUtils that returns as soon as the table becomes active?
You may try something like the following:
Table table = dynamoDB.createTable(request);
System.out.println("Waiting for " + tableName + " to be created...this may take a while...");
table.waitForActive();
For more information check out this link.
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/AppendixSampleDataCodeJava.html
I implemented the solution for this in Go.
Here is the summary:
You have to use the DescribeTable API (or the corresponding call in your SDK).
The input to this API is a DescribeTableInput, where you specify the table name.
You will need to poll in a loop until the table becomes active.
The output of DescribeTable provides the status of the table (result.Table.TableStatus).
If the status is "ACTIVE", you can insert the data; otherwise, continue the loop.
In my case, the tables become active in less than one minute.
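The polling loop above translates directly to Java. The sketch below abstracts the DescribeTable call behind a status supplier so the pattern is visible without an AWS connection; with the AWS SDK for Java v1, the supplier could wrap amazonDynamoDB.describeTable(tableName).getTable().getTableStatus() (check your SDK version):

```java
import java.util.function.Supplier;

public class TableWaiter {

    // Polls 'statusSupplier' (standing in for a DescribeTable call) until it
    // reports "ACTIVE", sleeping 'pollMillis' between attempts; gives up and
    // returns false after 'maxAttempts'.
    static boolean waitUntilActive(Supplier<String> statusSupplier,
                                   long pollMillis, int maxAttempts)
            throws InterruptedException {
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            if ("ACTIVE".equals(statusSupplier.get())) {
                return true;
            }
            Thread.sleep(pollMillis);
        }
        return false;
    }
}
```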
I have a requirement to update more than 1,000,000 records in a DB2 database.
I tried using Hibernate in a multi-threaded application to update the records. However, on doing so I was getting a LockAcquisitionException. I suspect it's because of the bulk commits I am doing together with multiple threads.
Can someone recommend a better solution or a better way to do this?
Please let me know if I need to upload the code I am using.
Thanks in advance.
// Code running multiple times across threads
Transaction tx = session.beginTransaction();
for (EncryptRef abc : arList) {
    String encrypted = keyUtils.encrypt(abc.getNumber()); // encrypt some data
    Object o = session.load(EncryptRef.class, new Long(abc.getId())); // load by primary key
    EncryptRef object = (EncryptRef) o;
    object.setEncryptedNumber(encrypted); // update the row
}
tx.commit(); // bulk-commit the updates
Table contains just three columns. ID|PlainText|EncryptedText
Update:
I tried batch updates using JDBC prepared statements. However, I am still facing the exception below:
com.ibm.db2.jcc.am.BatchUpdateException:
[jcc][t4][102][10040][3.63.75] Batch failure. The batch was
submitted, but at least one exception occurred on an individual member
of the batch. Use getNextException() to retrieve the exceptions for
specific batched elements. ERRORCODE=-4229, SQLSTATE=null at
com.ibm.db2.jcc.am.fd.a(fd.java:407) at
com.ibm.db2.jcc.am.n.a(n.java:386) at
com.ibm.db2.jcc.am.zn.a(zn.java:4897) at
com.ibm.db2.jcc.am.zn.c(zn.java:4528) at
com.ibm.db2.jcc.am.zn.executeBatch(zn.java:2837) at
org.npci.ThreadClass.run(ThreadClass.java:63) at
java.lang.Thread.run(Thread.java:748)
Below is the code, executed with a batch size of 50-100 records:
String queryToUpdate = "UPDATE INST1.ENCRYPT_REF SET ENCR_NUM=? WHERE ID=?";
PreparedStatement pstmtForUpdate = conn.prepareStatement(queryToUpdate);
for (Map.Entry<Long,String> entry : encryptMap.entrySet()) {
pstmtForUpdate.setString(1, entry.getValue());
pstmtForUpdate.setLong(2, entry.getKey());
pstmtForUpdate.addBatch();
}
pstmtForUpdate.executeBatch();
conn.close();
Without knowing anything about your database structure it’s hard to recommend a specific solution. If you can change the database, a good strategy would be to partition your table and then arrange for each thread to update a separate partition. Instead of having multiple threads updating one large database and conflicting with each other, you would effectively have each thread each updating its own smaller database.
You should also make sure you’re effectively batching updates and not committing too often.
If your table has a lot of indexes, it might be more efficient to drop some or all of them and rebuild them after your update than to update them on an ongoing basis. Similarly, you might consider removing triggers, referential integrity constraints, etc., and patching things up afterwards.
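The partitioning idea can be sketched as follows: split the ID space into disjoint ranges and give each thread its own range, so threads never contend for the same rows. The splitRange helper below is illustrative and not tied to DB2:

```java
import java.util.ArrayList;
import java.util.List;

public class RangePartitioner {

    // Splits the inclusive ID range [minId, maxId] into 'parts' disjoint
    // sub-ranges (each a long[]{lo, hi}); each worker thread would then run
    // "UPDATE ... WHERE ID BETWEEN lo AND hi" on its own range only.
    static List<long[]> splitRange(long minId, long maxId, int parts) {
        List<long[]> ranges = new ArrayList<>();
        long total = maxId - minId + 1;
        long base = total / parts;
        long remainder = total % parts;
        long lo = minId;
        for (int i = 0; i < parts; i++) {
            long size = base + (i < remainder ? 1 : 0);
            if (size == 0) break; // more parts than IDs
            long hi = lo + size - 1;
            ranges.add(new long[]{lo, hi});
            lo = hi + 1;
        }
        return ranges;
    }
}
```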
Not an answer to the question; posted separately for better formatting.
To catch the actual DB2 SQLCODE, use the following technique. Otherwise it's impossible to understand the root cause of the problem.
try {
...
} catch (SQLException ex) {
while (ex != null) {
if (ex instanceof com.ibm.db2.jcc.DB2Diagnosable) {
com.ibm.db2.jcc.DB2Diagnosable db2ex =
(com.ibm.db2.jcc.DB2Diagnosable) ex;
com.ibm.db2.jcc.DB2Sqlca sqlca = db2ex.getSqlca();
if (sqlca != null) {
System.out.println("SQLCODE: " + sqlca.getSqlCode());
System.out.println("MESSAGE: " + sqlca.getMessage());
} else {
System.out.println("Error code: " + ex.getErrorCode());
System.out.println("Error msg : " + ex.getMessage());
}
} else {
System.out.println("Error code (no db2): " + ex.getErrorCode());
System.out.println("Error msg (no db2): " + ex.getMessage());
}
ex = ex.getNextException();
}
...
}
As for the ENCR_NUM field:
Is it possible for actual values of this column to be set outside of your application?
Or can such values be generated by your application only?
Do you have to update all the table rows, or is there some condition defining the set of IDs which need to be updated?
I am experiencing unexpected behaviour using the Java API to truncate an HBase table. In detail, I am performing the following operations:
Disable the table
Truncate the table
Enable the table
The code corresponding to these operations is the following:
Configuration conf = HBaseConfiguration.create();
// ...
// Setting properly the configuration information
// ...
try (HBaseAdmin admin = new HBaseAdmin(conf)) {
if (admin.isTableEnabled(TABLE_NAME)) {
admin.disableTable(TABLE_NAME);
}
admin.truncateTable(TableName.valueOf(TABLE_NAME), false);
// Enabling the table after having truncated
admin.enableTable(TABLE_NAME);
} catch (MasterNotRunningException e) {
e.printStackTrace();
} catch (ZooKeeperConnectionException e) {
e.printStackTrace();
} catch (IOException e) {
e.printStackTrace();
}
Now, the statement admin.enableTable(TABLE_NAME) after the truncate operation throws an org.apache.hadoop.hbase.TableNotDisabledException. Is this correct? Does truncating a table via the Java API re-enable it automagically?
I have checked the API docs and did not find any reference to this behaviour.
I am using HBase version 1.0.0-cdh5.5.0.
HBase truncate needs to perform three operations:
Disable the table if it already exists (since it drops the table in the second step, and a table must be disabled before it can be dropped)
Drop the table if it already exists
Recreate the table (any create automatically enables the table)
The behavior is the same as the hbase-shell truncate command. The only difference between the shell and the Java API is that the shell performs all the steps automatically, whereas with the Java API the table needs to be disabled explicitly first so the drop can succeed; the last step recreates the table, so the new table is enabled by default.
Hope this explains it.
So I'm trying to understand how to use the RowSet API, specifically CachedRowSet, and I feel like I've been bashing my head against a wall for the last hour or so and could use some help.
I've got some very simple tables set up in a MySQL database that I'm using to test this. I should also add that everything I'm attempting to do with RowSet I've been able to do successfully with ResultSet, which leads me to believe that the issue is with my usage of the RowSet API, rather than the operation I'm attempting to do itself.
Anyway, I'm trying to insert a new row using RowSet. I'll paste my code here, then add some notes about it below:
CachedRowSet rowSet = null;
try {
RowSetFactory rsFactory = RowSetProvider.newFactory();
rowSet = rsFactory.createCachedRowSet();
rowSet.setUrl("jdbc:mysql://localhost:3306/van1");
rowSet.setUsername("####");
rowSet.setPassword("####");
rowSet.setKeyColumns(new int[]{1});
} catch (SQLException e) {
e.printStackTrace();
}
String query = "select * from phone";
try {
rowSet.setCommand(query);
rowSet.execute();
printTable(rowSet);
rowSet.moveToInsertRow();
rowSet.setInt(1, 4);
rowSet.setString(2, "Mobile");
rowSet.setString(3, "1");
rowSet.setString(4, "732");
rowSet.setString(5, "555");
rowSet.setString(6, "1234");
rowSet.setString(7, "");
rowSet.insertRow();
rowSet.moveToCurrentRow();
rowSet.acceptChanges();
printTable(rowSet);
} catch (SQLException e) {
e.printStackTrace();
}
So, as you can see, I'm trying to update a table of phone numbers with a new phone number. Here are the details:
1) All the phone number fields are datatype char, so that leading zeroes are not lost.
2) I'm using the default CachedRowSet implementation provided by the JDBC API, as opposed to anything specific from the MySQL driver. Not sure if that matters or not, but I'm putting it here just in case. Also, I didn't see an option to import CachedRowSet from the driver library anyway.
3) I'm setting a value for every column in the table, because the RowSet API doesn't allow for rows to be inserted without a value for every column.
4) I've tried the operation using both the setter methods and the update methods. Same result either way.
5) As far as I can tell, I'm on the insert row when executing the insertRow() method. I also return to the current row before invoking acceptChanges(), but since my code never gets that far I can't really comment on that part.
6) The exception is a SQLException (no chained exception within it) thrown on the invocation of the insertRow() method. Here is the stack trace:
java.sql.SQLException: Failed on insert row
at com.sun.rowset.CachedRowSetImpl.insertRow(Unknown Source)
at firsttry.RowSetPractice.rowSetTest(RowSetPractice.java:87)
at firsttry.RowSetPractice.main(RowSetPractice.java:20)
So, I'm out of ideas. Any help would be appreciated. I've searched every thread on this site I could find, all I see is stuff about it failing on the acceptChanges() method rather than insertRow().
I have two Java apps: the first inserts records into Table1.
The second reads the first N items and removes them.
When the first application inserts data intensively, the second fails with a CannotSerializeTransactionException when I try to delete any rows. I don't see the problem: inserted items are visible to select/delete only after the insert transaction has finished. How can I fix it? Thanks.
TransactionTemplate tt = new TransactionTemplate(platformTransactionManager);
tt.setIsolationLevel(Connection.TRANSACTION_SERIALIZABLE);
tt.execute(new TransactionCallbackWithoutResult() {
@Override
protected void doInTransactionWithoutResult(TransactionStatus status) {
List<Record> records = getRecords(); // jdbc select
if (!records.isEmpty()) {
try {
processRecords(records); // no database
removeRecords(records); // jdbc delete - exception here
} catch (CannotSerializeTransactionException e) {
log.info("Transaction rollback");
}
} else {
pauseProcessing();
}
}
});
pauseProcessing() simply sleeps.
public void removeRecords(int changeId) {
    String sql = "delete from RECORDS where ID <= ?";
    getJdbcTemplate().update(sql, new Object[]{changeId});
}
Are you using Connection.TRANSACTION_SERIALIZABLE in the first application as well? It looks like the first application locks the table, so the second one cannot access it (cannot start its transaction). Maybe Connection.TRANSACTION_REPEATABLE_READ would be enough?
You could probably also configure the second application not to throw an exception when it cannot access the resource, but to wait for it instead.
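The "wait instead of failing" suggestion can be sketched as a small retry helper. This is a generic pattern, not Spring API; the exception class to retry on would be CannotSerializeTransactionException in the question's setup:

```java
import java.util.concurrent.Callable;

public class RetryHelper {

    // Retries 'work' up to 'maxAttempts' times, sleeping 'backoffMillis'
    // between attempts, whenever it fails with the given exception type.
    // Any other exception propagates immediately.
    static <T> T retryOnConflict(Callable<T> work,
                                 Class<? extends Exception> retryOn,
                                 int maxAttempts, long backoffMillis) throws Exception {
        Exception last = null;
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            try {
                return work.call();
            } catch (Exception e) {
                if (!retryOn.isInstance(e)) throw e; // only retry the conflict case
                last = e;
                Thread.sleep(backoffMillis);
            }
        }
        throw last; // all attempts conflicted
    }
}
```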
This sounds as if you're reading uncommitted data. Are you sure you're setting the isolation level properly?
It seems to me that you're mixing up constants from two different classes: shouldn't you be passing TransactionDefinition.ISOLATION_SERIALIZABLE instead of Connection.TRANSACTION_SERIALIZABLE to the setIsolationLevel method?
Why do you set the isolation level anyway? Oracle's default isolation level (read committed) is usually the best compromise between consistency and speed, and should work nicely in your case.