I have an Oracle database and I want to monitor a particular table/column in it. If the column value crosses a threshold, I want to call my Java application code to perform some operation, for example a scale-up or shutdown. How can I achieve this? Any pointers or help appreciated.
SET SERVEROUTPUT ON

DECLARE
  comm VARCHAR2(2000);
BEGIN
  -- OSCommand_Run is assumed to be a user-defined function (typically a
  -- Java stored procedure wrapper) that runs an OS command and returns its output.
  comm := OSCommand_Run('/home/jguy/runJavaApp.sh');
  DBMS_OUTPUT.PUT_LINE(comm);
END;
/
I suggest you use triggers; they are designed for exactly this kind of thing, and I really wouldn't mix Java into it in this case.
You can use a trigger (in this case an update trigger) that executes a particular function/stored procedure when certain conditions are met.
There are multiple ways you can do this:
A trigger on the table, which checks the column value on each DML statement and fires when the value crosses the threshold.
Since a trigger may have some performance impact on the table, you can instead create a scheduler job that invokes a stored procedure on a periodic basis (every minute, or whatever interval you like), checks the column value, and calls the Java app if it crosses the threshold (see the sketch below).
I prefer the trigger, but check its performance impact on the table.
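If you go the scheduler route, the polling can just as well live on the Java side. Here is a minimal sketch, assuming a JDBC DataSource plus hypothetical metrics/load_value names and a placeholder scaleUp() handler; adapt it to your actual schema and scaling logic:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import javax.sql.DataSource;

public class ThresholdMonitor {
    private static final double THRESHOLD = 100.0; // assumed threshold value

    public static void start(DataSource ds) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        // Poll once a minute, mirroring the DBMS_SCHEDULER approach
        scheduler.scheduleAtFixedRate(() -> check(ds), 0, 1, TimeUnit.MINUTES);
    }

    private static void check(DataSource ds) {
        String sql = "SELECT load_value FROM metrics WHERE metric_name = ?"; // hypothetical table/column
        try (Connection con = ds.getConnection();
             PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setString(1, "app_load"); // hypothetical metric key
            try (ResultSet rs = ps.executeQuery()) {
                if (rs.next() && rs.getDouble(1) > THRESHOLD) {
                    scaleUp(); // plug in your scale-up/shutdown logic here
                }
            }
        } catch (Exception e) {
            // Catch everything so one failed poll doesn't cancel the scheduled task
            e.printStackTrace();
        }
    }

    private static void scaleUp() {
        // placeholder for the actual operation
    }
}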
If I need to perform an automated housekeeping task, and this is my query:
delete from sample_table where id = '1'
And this scheduled query gets executed from multiple service instances.
Will this have a significant performance impact? What would be an appropriate way of testing this?
Issuing multiple deletes for the same partition can have a significant impact on your cluster.
Remember that all writes in Cassandra (INSERT, UPDATE, DELETE) are inserts under the hood. Since Cassandra does not perform a read-before-write (with the exception of lightweight transactions), issuing a DELETE will insert a tombstone marker regardless of whether the data exists or has already been deleted.
Every single DELETE you issue counts as a write request, so depending on how busy your cluster is, it may have a measurable impact on its performance. Cheers!
Erick's answer is pretty solid, but I'd just like to add that the time when you'll most likely see performance issues is at read time. That's because doing a:
SELECT * FROM sample_table WHERE id='1';
...will read ALL of the times that the DELETE was written (tombstones) from the SSTable files. The default settings on a table result in deleted data staying around for 10 days (to ensure proper replication) before it can be picked up by compaction.
So figure out how many times that DELETE happens per key over a 10-day period, and that's about how many tombstones Cassandra will have to reconcile at read time. For example, a delete that runs hourly from three service instances writes 3 x 24 x 10 = 720 tombstones per key within the default window.
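To make the write-path point concrete, here is a minimal sketch assuming the DataStax Java driver 4.x (the question doesn't name a driver) and a hypothetical my_ks keyspace. Each execution writes a fresh tombstone for the partition, whether or not the row still exists:

import com.datastax.oss.driver.api.core.CqlSession;

public class HousekeepingJob {
    public static void main(String[] args) {
        // Contact points and credentials come from application.conf in driver 4.x
        try (CqlSession session = CqlSession.builder().withKeyspace("my_ks").build()) {
            // This is a blind write, not a read-then-delete: every run from
            // every service instance adds another tombstone for partition id='1'.
            session.execute("DELETE FROM sample_table WHERE id = '1'");
        }
    }
}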
I am trying to learn multi-threading in Java.
Suppose I have a very big application with 100 running threads trying to execute a synchronized method which inserts a row into a database.
As the method is synchronized, only one thread will get the lock for the method and the other 99 will wait.
Only one thread is able to edit the database at a time while the rest wait, which seems slow, since all the threads edit the DB one by one. Is there another way or concept to safely edit the database faster?
I recommend you read about isolation levels in transactions to handle some cases in concurrent applications: https://en.wikipedia.org/wiki/Isolation_(database_systems). Sometimes this is handled by default.
If, for instance, you are only adding new rows to the table, you shouldn't need to care about it and can remove synchronized.
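A minimal sketch of that idea: drop synchronized and let each thread insert through its own pooled connection, leaving consistency to the database. The DataSource, table name, and isolation level shown here are assumptions:

import java.sql.Connection;
import java.sql.PreparedStatement;
import javax.sql.DataSource;

public class InsertWorker implements Runnable {
    private final DataSource pool; // e.g., HikariCP or an app-server pool

    public InsertWorker(DataSource pool) {
        this.pool = pool;
    }

    @Override
    public void run() {
        // No synchronized needed: each thread uses its own connection, and
        // the database itself serializes the physical writes safely.
        try (Connection con = pool.getConnection()) {
            // Raise the isolation level only if your reads require it
            con.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);
            try (PreparedStatement ps = con.prepareStatement(
                    "INSERT INTO audit_log (message) VALUES (?)")) { // hypothetical table
                ps.setString(1, "inserted by " + Thread.currentThread().getName());
                ps.executeUpdate();
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

All 100 threads can run this Runnable concurrently; inserts of distinct rows do not block each other.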
I am writing a report-generation program in Java with an Oracle DB. I have a stored procedure that retrieves one value at a time, and from my Java program I am calling it repeatedly. In the extreme case, I have to call the procedure 60,000 times. But it shows problems: a wrong value is returned after a certain number of calls (around 300). Kindly tell me how to sort this out.
Thanks.
It's not good practice to call the DB with such high frequency. You can use a cursor in your stored procedure and fetch the required records all at once. Check this link for reference: Cursors in Oracle Stored Procedure.
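On the Java side, that translates to one set-based query (or one cursor-returning procedure call) instead of 60,000 round trips. A minimal sketch, with an assumed connection string and a hypothetical report_rows table:

import java.math.BigDecimal;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class ReportFetcher {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:oracle:thin:@//dbhost:1521/ORCL"; // assumed connection details
        try (Connection con = DriverManager.getConnection(url, "user", "pass");
             PreparedStatement ps = con.prepareStatement(
                     "SELECT id, amount FROM report_rows ORDER BY id")) { // hypothetical table
            ps.setFetchSize(1000); // pull rows from Oracle in batches of 1000
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    process(rs.getLong("id"), rs.getBigDecimal("amount"));
                }
            }
        }
    }

    private static void process(long id, BigDecimal amount) {
        // report-generation logic for one row goes here
    }
}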
I think I have to ask my question in another way.
For a transactional database, which of the following is recommended:
writing two (or more) INSERT queries to save the program's log in the DB, which puts more pressure on the application server, or
writing an AFTER INSERT trigger to save the program's log, which puts more pressure on the DB?
Thanks for your attention.
If you are sure that inserts to the DB will happen only from your application, then I would go with the first option: create a procedure and include both INSERT statements in a transaction block, which guarantees an atomic operation.
But if inserts to the DB may also happen through ad-hoc queries or a third-party ETL tool, then you have no option other than an AFTER INSERT trigger to perform the log insert (the second option), since there is no other way to have the second INSERT run automatically.
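A minimal sketch of the first option from the Java side, assuming JDBC and hypothetical orders/orders_log tables (the SYSDATE call is Oracle-flavored); both inserts commit or roll back together:

import java.sql.Connection;
import java.sql.PreparedStatement;
import javax.sql.DataSource;

public class OrderDao {
    private final DataSource ds;

    public OrderDao(DataSource ds) {
        this.ds = ds;
    }

    public void insertWithLog(long orderId, double amount) throws Exception {
        try (Connection con = ds.getConnection()) {
            con.setAutoCommit(false); // open the transaction block
            try (PreparedStatement ins = con.prepareStatement(
                         "INSERT INTO orders (id, amount) VALUES (?, ?)");
                 PreparedStatement log = con.prepareStatement(
                         "INSERT INTO orders_log (order_id, logged_at) VALUES (?, SYSDATE)")) {
                ins.setLong(1, orderId);
                ins.setDouble(2, amount);
                ins.executeUpdate();
                log.setLong(1, orderId);
                log.executeUpdate();
                con.commit(); // both rows are saved, or neither is
            } catch (Exception e) {
                con.rollback(); // atomicity on failure
                throw e;
            }
        }
    }
}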
Let's say I have a method that writes to a file or database. What happens if different parts of the application call this method many times at the same time or within the same interval of time? Are all those method calls maintained in some stack/queue in memory, waiting for the previous requests to be served?
Whether writing to the same file is safe is platform dependent; Unix, for example, allows concurrent writes to the same file.
You need to look at synchronization techniques and decide how you want to manage the read/write operations.
From the DB perspective, the database engine handles this properly: whichever request comes first is served first. The next insert then depends on the first one (if you already inserted a row with the same key in the previous operation, it will obviously throw an exception).
Also, if different parts of your application are appending data to the same file at the same time, there could be a design flaw and you should reconsider the design; a single-writer queue, sketched below, is one common alternative.
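Here is a minimal sketch of that single-writer pattern, with an assumed class name and file path: callers enqueue lines from any thread, and one dedicated thread does all the writing, so the file is never written concurrently.

import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class SingleWriterLog {
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();

    public SingleWriterLog(String path) {
        Thread writer = new Thread(() -> {
            try (PrintWriter out = new PrintWriter(new FileWriter(path, true))) {
                while (!Thread.currentThread().isInterrupted()) {
                    out.println(queue.take()); // blocks until a line arrives
                    out.flush();
                }
            } catch (IOException | InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }, "log-writer");
        writer.setDaemon(true);
        writer.start();
    }

    // Safe to call from any number of threads concurrently: the queue
    // buffers the calls, much as the question suggests.
    public void write(String line) {
        queue.add(line);
    }
}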