Synchronizing an Oracle package invoked from JDBC - Java

In my application's data model there is a table with two columns, Id_1 and Id_2, both of the NUMBER data type. The table has no primary key or unique key.
I have a package with a procedure named persist. This procedure is used for adding a row to the table.
The procedure in the package is as follows:
procedure persist(id_1 in number,
                  id_2 in number)
is
begin
  insert into middle_table values (id_1, id_2);
end;
The problem is this scenario: thread 1 and thread 2 call the procedure above concurrently with the same parameters, and the result is two identical rows added to the table, which is wrong for my application.
My question is: what can I do in the procedure to prevent this situation?

You should always have a constraint, but this requirement may still be valid in some cases.
A simple and elegant solution is to do a MERGE, or a SELECT and INSERT, so that no matter how many times the proc is executed, you are safe.
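For the SELECT-and-INSERT variant, a minimal sketch inside a procedure whose parameters are named p_id_1/p_id_2 (hypothetical names, chosen to avoid colliding with the column names); note that without a constraint this still needs the serialization discussed below:
INSERT INTO middle_table (id_1, id_2)
SELECT p_id_1, p_id_2
FROM dual
WHERE NOT EXISTS (SELECT 1
                  FROM middle_table
                  WHERE id_1 = p_id_1
                  AND id_2 = p_id_2);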
On the client side, you can put the procedure call in a synchronized method of your thread class, so it cannot be run in parallel:
public void yourMethod() {
    // Other statements
    synchronized (this) { // only one thread at a time can enter this block on the same object
        // call your Oracle stored proc here
    }
}
But if there are multiple clients across different platforms, you may have to write something in Oracle itself!
A simple and elegant solution is to do a MERGE or do a SELECT and INSERT:
procedure persist(id_1 in number,
                  id_2 in number)
is
  retcode number := 0;
begin
  retcode := 100;
  /* Check for the semaphore, else wait! */
  while (retcode = 100)
  loop
    retcode := check_semaphore(); /* Returns 100 if present, else 0 */
    if (retcode = 100) then
      /* Semaphore present */
      null;
    else
      write_semaphore;
      /* probably an entry in a table with a commit;
         you would have to use savepoints, else every other transaction would be committed! */
      merge into middle_table m
      using (select id_1, id_2 from dual) new_values
      on (    new_values.id_1 = m.id_1
          and new_values.id_2 = m.id_2)
      when not matched then
        insert (id_1, id_2) values (new_values.id_1, new_values.id_2);
      delete_semaphore;
      /* delete that entry */
      exit;
    end if;
  end loop;
end;
/
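As an alternative to a hand-rolled semaphore table, Oracle's DBMS_LOCK package can serialize the critical section. A sketch under that assumption (the lock name is arbitrary; note that ALLOCATE_UNIQUE issues a commit of its own):
procedure persist(id_1 in number,
                  id_2 in number)
is
  l_handle varchar2(128);
  l_status integer;
begin
  /* Serialize callers on a named application lock instead of a semaphore table */
  dbms_lock.allocate_unique('persist_middle_table_lock', l_handle);
  l_status := dbms_lock.request(l_handle, dbms_lock.x_mode, release_on_commit => true);
  if l_status in (0, 4) then /* 0 = success, 4 = lock already owned */
    merge into middle_table m
    using (select id_1, id_2 from dual) new_values
    on (    new_values.id_1 = m.id_1
        and new_values.id_2 = m.id_2)
    when not matched then
      insert (id_1, id_2) values (new_values.id_1, new_values.id_2);
    commit; /* release_on_commit frees the lock here */
  end if;
end;
/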

What about adding a unique constraint to the table in question?
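For example (a minimal sketch; the constraint name is arbitrary), after which a duplicate insert fails with ORA-00001 instead of silently succeeding:
ALTER TABLE middle_table
  ADD CONSTRAINT middle_table_uq UNIQUE (id_1, id_2);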
Or, instead of having the threads write directly to the DB, put the new objects in a hashtable, check for duplicates, join the threads, and then use JPA to persist the objects found in the hashtable.

Related

JPA concurrent postgresql counter column with retrieving value

Prerequisites
PostgreSQL
Spring Boot with Spring Data JPA
Problem
I have 2 tables, Products and ProductsLocationCounter. Each product has location_id and counter_value fields, among others; location_id is also the primary key of ProductsLocationCounter.
ProductsLocationCounter is meant to keep a counter of the number of products grouped by a specific location_id whenever a new product is added.
The problem is that I also need the counter value at that point in time to be attached to the product entity.
So the flow would be like
1. create product
2. counter_value = get counter
3. increment counter
4. product.counter_value = counter_value
Of course, this has to be done in a concurrency-safe manner.
Now, I've read about and tried different solutions.
This Stack Overflow post suggests that I should let the DB handle the concurrency, which sounds fine to me. But the trick is that I need the value of the counter in the same transaction. So I've created a trigger:
CREATE FUNCTION maintain_location_product_count_fun() RETURNS TRIGGER AS
$$
DECLARE
    counter_var BIGINT;
BEGIN
    IF TG_OP IN ('INSERT') THEN
        SELECT product_location_count.counter INTO counter_var
        FROM product_location_count
        WHERE id = new.location_id FOR UPDATE;
        UPDATE product_location_count SET counter = counter + 1 WHERE id = new.location_id;
        UPDATE products SET counter_value = counter_var WHERE location_id = new.location_id;
    END IF;
    RETURN NULL;
END
$$
LANGUAGE plpgsql;
CREATE TRIGGER maintain_location_product_count_trig
AFTER INSERT ON products
FOR EACH ROW
EXECUTE PROCEDURE maintain_location_product_count_fun();
and tested it with a parallel stream:
IntStream.range(1, 5000)
    .parallel()
    .forEach(value -> {
        executeInsideTransactionTemplate(status -> {
            var location = locationRepository.findById(locationId).get();
            return addProductWithLocation(location);
        });
    });
I got no duplication in the counter_value column. Is this trigger safe for multi-threaded apps? I haven't worked with triggers/PostgreSQL functions before, so I'm not sure what to expect.
The second solution I tried was to add PESSIMISTIC_WRITE to the findById method of the ProductsLocationCounter entity, but I ended up getting
cannot execute SELECT FOR UPDATE in a read-only transaction
even though I was executing the code in a @Transactional annotated method (which by default has read-only = false).
The third one was to update and retrieve the value of the counter in the same statement, but Spring JPA doesn't allow that (nor does the underlying DB), as an update statement only returns the number of rows affected.
Is there any other solution, or do I need to add something to the trigger function to make it thread-safe? Thank you.
This is how I achieved what I needed.
Long story short, I used a SQL function and called it inside the repository. I didn't need the trigger anymore.
https://stackoverflow.com/a/74208072/3018285
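As an illustration only (not necessarily the exact function from the linked answer), a PostgreSQL function along these lines increments the counter and hands back the previous value in one atomic statement; the table and column names are taken from the question, and the function name is hypothetical:
CREATE FUNCTION next_location_counter(loc_id BIGINT) RETURNS BIGINT AS
$$
    -- The row lock taken by UPDATE serializes concurrent callers;
    -- RETURNING hands back the pre-increment value in the same statement.
    UPDATE product_location_count
    SET counter = counter + 1
    WHERE id = loc_id
    RETURNING counter - 1;
$$
LANGUAGE sql;
A Spring Data repository can then call it with a native query, e.g. @Query(value = "SELECT next_location_counter(:locationId)", nativeQuery = true).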

Oracle RESET_PACKAGE does not reset value of a variable in the session

I have an app where JDBC connections are pooled. This is related to my question. For simplicity, let's assume I have one connection and I need to set a variable, then reset the session / context state. However, the idea is not to reverse / reset the 'app1_ctx' variable in particular, as in the actual case users can run many procedures that set many variables; what I need is one procedure that clears all session-related variables, or even restarts the session (please check this question too to understand the problem).
Below is my procedure:
CREATE OR REPLACE CONTEXT app1_ctx USING app1_ctx_package;
CREATE OR REPLACE PACKAGE app1_ctx_package IS
PROCEDURE set_empno (empno NUMBER);
END;
CREATE OR REPLACE PACKAGE BODY app1_ctx_package IS
PROCEDURE set_empno (empno NUMBER) IS
BEGIN
DBMS_SESSION.SET_CONTEXT('app1_ctx', 'empno', empno);
END;
END;
Then, when checking the value of 'empno':
select SYS_CONTEXT ('app1_ctx', 'empno') employee_num from dual;
I get employee_num = null
To set the empno variable we run the following:
begin
APP1_CTX_PACKAGE.SET_EMPNO(11);
end;
Then, when re-checking the value of 'empno', I get employee_num = 11.
What we need is to clear all session / package variables after this.
I tried to clear the session variables using RESET_PACKAGE and the similar procedures below:
begin
DBMS_SESSION.RESET_PACKAGE;
end;
begin
DBMS_SESSION.modify_package_state(DBMS_SESSION.reinitialize);
end;
begin
DBMS_SESSION.MODIFY_PACKAGE_STATE(DBMS_SESSION.FREE_ALL_RESOURCES);
end;
But then, when re-checking the variable, it still has the same value.
How can I achieve this? I am not sure how to use the CLEAR_ALL_CONTEXT procedure.
dbms_session.clear_all_context( 'app1_ctx' );
You'd need to pass the same namespace to clear_all_context that you passed as the first parameter to set_context.
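For example, continuing the session from the question (assuming the 'app1_ctx' namespace and 'empno' attribute above):
begin
  dbms_session.clear_all_context('app1_ctx');
end;
-- re-checking now returns NULL again
select sys_context('app1_ctx', 'empno') employee_num from dual;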
If you don't know all the contexts your application uses but you do know all the schemas it uses
for ctx in (select *
from dba_context
where schema in (<<schemas your application uses>>))
loop
dbms_session.clear_all_context( ctx.namespace );
end loop;
In this example, there are no package variables so there would be no need to call reset_package or modify_package_state.

Deadlock during insert/update in parallel

I am new to MS SQL Server, and I am trying to update a record by incrementing its occurrence counter (+1), or, if the data is missing, insert it fresh with a counter value of zero ('0').
Moreover, my application processes each element of the data array a[] in parallel. When processing the array in parallel, SQL Server throws a deadlock on the table. Though I set the transaction isolation level, the same deadlock keeps happening. My application is written in Java/Camel/Hibernate.
Stored procedure:
IF(@recordCount = 0 OR @recordCount > 1)
BEGIN
    IF(@chargeAbbreviation IS NOT NULL)
    BEGIN
        set transaction isolation level READ COMMITTED;
        begin transaction;
        UPDATE dbo.SLG_Charge_Abbreviation_Missing_Report WITH (UPDLOCK, HOLDLOCK)
        SET dbo.SLG_Charge_Abbreviation_Missing_Report.Occurrence_Count += 1, dbo.SLG_Charge_Abbreviation_Missing_Report.ModifiedAt = GETDATE()
        WHERE dbo.SLG_Charge_Abbreviation_Missing_Report.Jurisdiction_ID = @jurisdictionId AND
              UPPER(dbo.SLG_Charge_Abbreviation_Missing_Report.Charge_Abbreviation) = @chargeAbbreviation AND
              (UPPER(dbo.SLG_Charge_Abbreviation_Missing_Report.Statute_Code) = @statuteCode OR (dbo.SLG_Charge_Abbreviation_Missing_Report.Statute_Code IS NULL AND @statuteCode IS NULL)) AND
              dbo.SLG_Charge_Abbreviation_Missing_Report.Product_Category_id = @productCategoryId
        IF(@@ROWCOUNT = 0)
        BEGIN
            INSERT INTO dbo.SLG_Charge_Abbreviation_Missing_Report VALUES(@OriginalChargeAbbreviation, @jurisdictionId, @OriginalStatuteCode, @productCategoryId, GETDATE(), GETDATE(), 1);
        END
        commit
    END
    SELECT TOP 0 * FROM dbo.SLG_Charge_Mapping
END
It looks like you're trying to use some version of Sam Saffron's upsert method.
To take advantage of the Key-Range Locking when using holdlock/serializable you need to have an index that covers the columns in the query.
If you don't have one that covers this query, you could consider creating one like this:
create unique nonclustered index ux_slg_Charge_Abbreviation_Missing_Report_jid_pcid_ca_sc
on dbo.slg_Charge_Abbreviation_Missing_Report (
Jurisdiction_id
, Product_Category_id
, Charge_Abbreviation
, Statute_Code
);
I don't think the line: set transaction isolation level read committed; is doing you any favors in this instance.
set nocount on;
set xact_abort on;

if(@recordCount = 0 or @recordCount > 1)
begin;
    if @chargeAbbreviation is not null
    begin;
        begin tran;
        update camr with (updlock, serializable)
           set camr.Occurrence_Count = camr.Occurrence_Count + 1
             , camr.ModifiedAt = getdate()
          from dbo.slg_Charge_Abbreviation_Missing_Report as camr
         where camr.Jurisdiction_id = @jurisdictionId
           and camr.Product_Category_id = @productCategoryId
           and upper(camr.Charge_Abbreviation) = @chargeAbbreviation
           and (
                 upper(camr.Statute_Code) = @statuteCode
              or (camr.Statute_Code is null and @statuteCode is null)
               )
        if @@rowcount = 0
        begin;
            insert into dbo.slg_Charge_Abbreviation_Missing_Report values
                (@OriginalChargeAbbreviation, @jurisdictionId
                ,@OriginalStatuteCode, @productCategoryId
                ,getdate(), getdate(), 1);
        end;
        commit tran;
    end;
    select top 0 * from dbo.slg_Charge_Mapping;
end;
Note: holdlock is the same as serializable.
Links related to the solution above:
Insert or Update pattern for Sql Server - Sam Saffron
Key-Range Locking - MSDN
Documentation on serializable and other Table Hints - MSDN
Error and Transaction Handling in SQL Server Part One – Jumpstart Error Handling - Erland Sommarskog
SQL Server Isolation Levels: A Series - Paul White
Simpletalk - SQL Server Deadlocks by Example - Gail Shaw

Proper way to insert record with unique attribute

I am using Spring, Hibernate and PostgreSQL.
Let's say I have a table looking like this:
CREATE TABLE test
(
  id integer NOT NULL,
  name character(10),
  CONSTRAINT test_unique UNIQUE (id)
);
So whenever I insert a record, the attribute id must be unique.
I would like to know which is the better way to insert a new record (in my Spring Java app):
1) Check if a record with the given id exists, and insert only if it doesn't, something like this:
if (testDao.find(id) == null) {
    Test test = new Test(id, name);
    testDao.create(test);
}
2) Call the create method straight away and catch the DataAccessException if one is thrown:
Test test = new Test(id, name);
try {
    testDao.create(test);
} catch (DataAccessException e) {
    System.out.println("Error inserting record");
}
I consider the 1st way more appropriate, but it means more processing for the DB. What is your opinion?
Thank you in advance for any advice.
Option (1) is subject to a race condition: a concurrent session could create the record between your check for its existence and your insert. The window is longer than you might expect, because the record might already have been inserted by another transaction that has not yet committed.
Option (2) is better, but will result in a lot of noise in the PostgreSQL error logs.
The best way is to use PostgreSQL 9.5's INSERT ... ON CONFLICT ... support to do a reliable, race-condition-free insert-if-not-exists operation.
On older versions you can use a loop in plpgsql.
Both those options require use of native queries, of course.
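For reference, a minimal sketch of the 9.5+ statement against the test table from the question (the inserted values are placeholders):
INSERT INTO test (id, name)
VALUES (1, 'somename')
ON CONFLICT (id) DO NOTHING;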
It depends on the source of your ID. If you generate it yourself, you can ensure uniqueness and rely on catching an exception, e.g. http://docs.oracle.com/javase/1.5.0/docs/api/java/util/UUID.html
Another way would be to let Postgres generate the ID using the SERIAL data type:
http://www.postgresql.org/docs/8.1/interactive/datatype.html#DATATYPE-SERIAL
If you have to take the ID from an untrusted source, do the prior check.
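For illustration, the table from the question rewritten with a generated key (a sketch; SERIAL makes id an auto-incrementing integer backed by a sequence):
CREATE TABLE test
(
  id   SERIAL PRIMARY KEY,  -- PostgreSQL fills this from a sequence
  name character(10)
);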

JPA executeUpdate always returns 1

I'm having an issue here where the executeUpdate command always returns the value 1, even though there's no record to be updated.
First I retrieve several records, do a bit of calculation, and then update the status of some of the retrieved records.
The JPA update code:
private int executeUpdateStatusToSuccess(Long id, Query updateQuery) {
updateQuery.setParameter(1, getSysdateFromDB());
updateQuery.setParameter(2, id);
int cnt = updateQuery.executeUpdate();
return cnt; // always return 1
}
The update query:
UPDATE PRODUCT_PARAM SET STATUS = 2, DATA_TIMESTAMP=? WHERE ID = ? AND STATUS=-1
Note that the STATUS column practically never holds a value < 0. I'm purposely adding this condition here just to show that even though it shouldn't have updated any record, executeUpdate() still returns the value 1.
As an additional note, there is no update process anywhere between the data retrieval and the update. It's all done within my local environment.
Any advice on what I might be missing here, or whether there's some configuration parameter that I need to check?
EDIT:
For the JPA I'm using EclipseLink.
For the database I'm using Oracle 10g with driver ojdbc5.jar.
In the end I had to look into the EclipseLink JPA source code. It turns out the system actually executes this line
return Integer.valueOf(1);
from the code inside the basicExecuteCall method of the DatabaseAccessor class, shown below:
if (isInBatchWritingMode(session)) {
    // if there is nothing returned and we are not using optimistic locking then batch
    //if it is a StoredProcedure with in/out or out parameters then do not batch
    //logic may be weird but we must not batch if we are not using JDBC batchwriting and we have parameters
    // we may want to refactor this some day
    if (dbCall.isNothingReturned() && (!dbCall.hasOptimisticLock() || getPlatform().canBatchWriteWithOptimisticLocking(dbCall))
            && (!dbCall.shouldBuildOutputRow()) && (getPlatform().usesJDBCBatchWriting() || (!dbCall.hasParameters())) && (!dbCall.isLOBLocatorNeeded())) {
        // this will handle executing batched statements, or switching mechanisms if required
        getActiveBatchWritingMechanism().appendCall(session, dbCall);
        //bug 4241441: passing 1 back to avoid optimistic lock exceptions since there
        // is no way to know if it succeeded on the DB at this point.
        return Integer.valueOf(1);
    } else {
        getActiveBatchWritingMechanism().executeBatchedStatements(session);
    }
}
One easy hack is to not use batching. I've tried turning it off in persistence.xml, and the update then returns the expected value, which is 0:
<property name="eclipselink.jdbc.batch-writing" value="none" />
I'm expecting a better solution, but this one will do for now in my situation.
I know that this question and answer are pretty old, but since I stumbled upon this same problem recently and figured out a solution for my use case (keep batch writing enabled and still get the updated row count for some queries), I figured my solution might be helpful to somebody else in the future.
Basically, you can use a query hint to signal that a specific query does not support batch execution. The code to do this looks like this:
import org.eclipse.persistence.config.HintValues;
import org.eclipse.persistence.config.QueryHints;
import javax.persistence.Query;

public class EclipseLinkUtils {
    public static void disableBatchWriting(Query query) {
        query.setHint(QueryHints.BATCH_WRITING, HintValues.FALSE);
    }
}
