Safely edit the database with multiple threads in Java [closed]

I am trying to learn multithreading in Java.
Suppose I have a very big application with 100 running threads trying to execute a synchronized method that inserts a row into the database.
Since the method is synchronized, only one thread gets the lock for that method at a time and the other 99 wait.
Only one thread can edit the database while the rest wait, which seems slow, since all the threads end up editing the DB one by one. Is there any other way or concept to safely edit the database faster?

I recommend reading about isolation levels in transactions to handle some cases in concurrent applications: https://en.wikipedia.org/wiki/Isolation_(database_systems). Sometimes this is handled by default.
If, for instance, you are only adding new rows to a table, you shouldn't need to care about it at all and can remove synchronized.
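As a rough sketch of the "remove synchronized" suggestion (assuming a pooled DataSource and a hypothetical items table, neither of which is from the original question), each thread can borrow its own connection and let the database serialize the inserts:

import javax.sql.DataSource;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ConcurrentInsertExample {

    private final DataSource dataSource; // assumed: a pooled DataSource (HikariCP, DBCP, ...)

    public ConcurrentInsertExample(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    // No synchronized here: every task borrows its own connection, and the
    // database itself serializes the inserts according to its isolation level.
    public void insertRow(String value) {
        String sql = "INSERT INTO items (value) VALUES (?)"; // hypothetical table
        try (Connection con = dataSource.getConnection();
             PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setString(1, value);
            ps.executeUpdate();
        } catch (SQLException e) {
            throw new RuntimeException(e);
        }
    }

    public void insertHundredRowsInParallel() {
        ExecutorService pool = Executors.newFixedThreadPool(10);
        for (int i = 0; i < 100; i++) {
            final int n = i;
            pool.submit(() -> insertRow("row-" + n));
        }
        pool.shutdown();
    }
}

The 100 tasks still contend for connections from the pool, but several inserts can be in flight at once instead of being forced through a single lock in the JVM.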

Related

Need of SingleThreadPool in Java? [closed]

I have a few questions regarding ExecutorService.
In what cases should we use newSingleThreadExecutor() rather than the others, and why?
Can you tell me a real use case for newSingleThreadExecutor()?
If we have a single thread, whether from newSingleThreadExecutor(), newFixedThreadPool(1), or newCachedThreadPool(1), do we still need to check for thread safety?
Why do we need newSingleThreadExecutor() if we can already create a single thread using newFixedThreadPool(1)?
When you don't want tasks to run in parallel because they share common data.
Swing's Event Dispatch Thread. It is not called an executor, but in effect it is one; its execute method is just called invokeLater.
It depends on what data you access. If that data can be accessed outside the tasks running on this executor, then yes. It does not depend on how you built your executor.
We do not need it. I don't know which SingleThreadPool you mean; there is no such class in the Java runtime library.
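A minimal sketch of the first point (the shared list and task names are made up for illustration): tasks submitted to a single-thread executor run strictly one after another, so they can safely use data that only those tasks touch.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class SingleThreadExecutorExample {
    public static void main(String[] args) throws InterruptedException {
        // Every task submitted here runs one after another on the same worker
        // thread, so the tasks never touch the shared list concurrently.
        ExecutorService executor = Executors.newSingleThreadExecutor();
        List<String> sharedLog = new ArrayList<>(); // not thread-safe, but only this executor's tasks use it

        for (int i = 0; i < 5; i++) {
            final int taskId = i;
            executor.submit(() -> sharedLog.add("task " + taskId + " done"));
        }

        executor.shutdown();
        executor.awaitTermination(1, TimeUnit.MINUTES); // also makes the writes visible to the main thread
        sharedLog.forEach(System.out::println);
    }
}

If the same list were also read or written by other threads while the tasks run, you would be back to needing explicit synchronization, which is the point of the third answer.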

How can I monitor the changes in database? [closed]

I want to monitor changes that occur in the database, for instance when an item is removed. I want to be able to use an interface to monitor such changes and create an alert showing how many items are left in the database, every 5 minutes or so. Is there any plugin for Java, some kind of interface, or something else? I will be grateful for the help.
The best option is to use a trigger, since it fires on database change events:
CREATE TRIGGER trg_items_after_delete
ON items AFTER DELETE
AS
    /* YOUR ACTION STATEMENT */ ;

What is better for performance - Make a bulk call to DB or single call with a loop for calculation? [closed]

I am making an online test with predefined questions and their correct answers. I let students enter their answer to each question, then take those answers and check them against the correct ones from the DB. Is it better to call the DB for each answer, or to fetch all the correct answers at once and compare them in a loop (O(n^2))? I am using Hibernate.
A bulk operation is best. It requires a single database roundtrip, and you can do all the processing in the database so that you don't have to move one million records back and forth between the application and the DB.
For more details about Bulk Updates, check out this article.
In your example, it does not matter if the user had a test with 1000 questions. If they have a predefined right answer, you can match that in the DB automatically.
If you need to manually validate answers, do it with batch processing: process only N answers at a time and send them to the DB in a single batch.
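Here is a sketch of the batching idea with plain JDBC (the student_answers table and column names are made up; with Hibernate the equivalent is enabling hibernate.jdbc.batch_size and flushing periodically):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;

public class AnswerBatchWriter {

    // Insert all submitted answers in batches of 50 instead of one DB call per answer.
    public static void saveAnswers(Connection con, List<String> answers, long studentId) throws SQLException {
        String sql = "INSERT INTO student_answers (student_id, answer) VALUES (?, ?)"; // hypothetical table
        try (PreparedStatement ps = con.prepareStatement(sql)) {
            int count = 0;
            for (String answer : answers) {
                ps.setLong(1, studentId);
                ps.setString(2, answer);
                ps.addBatch();
                if (++count % 50 == 0) {
                    ps.executeBatch(); // send a chunk of 50 in one roundtrip
                }
            }
            ps.executeBatch(); // send the remaining answers
        }
    }
}

Each executeBatch() call sends a whole chunk of answers in one roundtrip instead of one network call per answer.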

TRIGGER or MULTI Insert [closed]

I think I have to ask my question in another way.
In a transactional database, which of the following is recommended:
writing two (or more) INSERT queries from the application to save the program's log in the DB, which puts more pressure on the server, or
writing an AFTER INSERT trigger to save the program's log in the DB, which puts more pressure on the DB?
Thanks for your attention.
If you are sure that inserts into the DB will only ever come from your application, I would go with the first option: create a procedure and wrap both INSERT statements in a TRANSACTION block, which makes the operation atomic.
But if inserts into the DB may also happen through ad hoc queries or a third-party ETL tool, then you have no option other than an AFTER INSERT trigger to perform the log insert (the second option), since there is no other way to run the second INSERT automatically.
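For illustration, a sketch of the first option from plain JDBC (the orders and order_log tables are hypothetical; the same idea applies if both INSERTs live inside a stored procedure called in one transaction):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class LoggedInsert {

    // Both inserts commit together or not at all.
    public static void insertWithLog(Connection con, String payload) throws SQLException {
        boolean oldAutoCommit = con.getAutoCommit();
        con.setAutoCommit(false);
        try (PreparedStatement insertRow = con.prepareStatement(
                 "INSERT INTO orders (payload) VALUES (?)");        // hypothetical table
             PreparedStatement insertLog = con.prepareStatement(
                 "INSERT INTO order_log (message) VALUES (?)")) {   // hypothetical log table
            insertRow.setString(1, payload);
            insertRow.executeUpdate();

            insertLog.setString(1, "inserted order with payload " + payload);
            insertLog.executeUpdate();

            con.commit(); // atomic: either both rows exist or neither does
        } catch (SQLException e) {
            con.rollback();
            throw e;
        } finally {
            con.setAutoCommit(oldAutoCommit);
        }
    }
}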

What happens if a method is called too many times at the same time [closed]

Let us say I have a method that writes to a file or a database. What happens if different parts of the application call this method many times at the same time, or within the same interval of time? Are all those method calls maintained in some stack/queue in memory, waiting for previous requests to be served?
Writing to the same file is platform dependent; Unix, for example, allows concurrent writes to the same file.
You need to look at synchronization techniques and decide how you want to manage the read/write operations.
From the DB perspective, the database engine handles it properly: whichever call comes first is served first. Whether the next insert succeeds can depend on the first one (for example, if the previous operation already inserted a row with the same key, the next insert will obviously throw an exception).
Also, I would say that if different parts of your application are appending data to the same file at the same time, there could be a design flaw and you need to reconsider the design.
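Method calls are not queued for you automatically; if you want that behavior you have to build it yourself, for example by funneling every write through one worker thread. A sketch (the file name and class are made up for illustration):

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class SerializedFileWriter {

    private final Path file = Path.of("app.log"); // hypothetical output file
    // Single worker thread: submitted writes are queued and executed one by one.
    private final ExecutorService writer = Executors.newSingleThreadExecutor();

    public void write(String line) {
        writer.submit(() -> {
            try {
                Files.writeString(file, line + System.lineSeparator(),
                        StandardCharsets.UTF_8,
                        StandardOpenOption.CREATE, StandardOpenOption.APPEND);
            } catch (IOException e) {
                e.printStackTrace();
            }
        });
    }

    public void shutdown() {
        writer.shutdown();
    }
}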
