I am new to MS SQL Server and I am trying to update a record by incrementing its occurrence counter (+1) if the data is missing, or freshly insert it with a counter value of zero ('0').
Moreover, my application runs in parallel to process each element of a data array a[]. When the array is processed in parallel, SQL Server throws a deadlock on the table. Even though I set the transaction isolation level, the same deadlock keeps happening. My application is written in Java/Camel/Hibernate.
Stored procedure:
IF(@recordCount = 0 OR @recordCount > 1)
BEGIN
    IF(@chargeAbbreviation IS NOT NULL)
    BEGIN
        SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
        BEGIN TRANSACTION;
        UPDATE dbo.SLG_Charge_Abbreviation_Missing_Report WITH (UPDLOCK, HOLDLOCK)
        SET dbo.SLG_Charge_Abbreviation_Missing_Report.Occurrence_Count += 1,
            dbo.SLG_Charge_Abbreviation_Missing_Report.ModifiedAt = GETDATE()
        WHERE dbo.SLG_Charge_Abbreviation_Missing_Report.Jurisdiction_ID = @jurisdictionId AND
              UPPER(dbo.SLG_Charge_Abbreviation_Missing_Report.Charge_Abbreviation) = @chargeAbbreviation AND
              (UPPER(dbo.SLG_Charge_Abbreviation_Missing_Report.Statute_Code) = @statuteCode OR (dbo.SLG_Charge_Abbreviation_Missing_Report.Statute_Code IS NULL AND @statuteCode IS NULL)) AND
              dbo.SLG_Charge_Abbreviation_Missing_Report.Product_Category_id = @productCategoryId
        IF(@@ROWCOUNT = 0)
        BEGIN
            INSERT INTO dbo.SLG_Charge_Abbreviation_Missing_Report VALUES(@OriginalChargeAbbreviation, @jurisdictionId, @OriginalStatuteCode, @productCategoryId, GETDATE(), GETDATE(), 1);
        END
        COMMIT
    END
    SELECT TOP 0 * FROM dbo.SLG_Charge_Mapping
END
It looks like you're trying to use some version of Sam Saffron's upsert method.
To take advantage of key-range locking when using holdlock/serializable, you need an index that covers the columns in the query.
If you don't have one that covers this query, you could consider creating one like this:
create unique nonclustered index ux_slg_Charge_Abbreviation_Missing_Report_jid_pcid_ca_sc
on dbo.slg_Charge_Abbreviation_Missing_Report (
Jurisdiction_id
, Product_Category_id
, Charge_Abbreviation
, Statute_Code
);
I don't think the line: set transaction isolation level read committed; is doing you any favors in this instance.
set nocount on;
set xact_abort on;
if(@recordCount = 0 or @recordCount > 1)
begin;
  if @chargeAbbreviation is not null
  begin;
    begin tran;
      update camr with (updlock, serializable)
        set camr.Occurrence_Count = camr.Occurrence_Count + 1
          , camr.ModifiedAt = getdate()
      from dbo.slg_Charge_Abbreviation_Missing_Report as camr
      where camr.Jurisdiction_id = @jurisdictionId
        and camr.Product_Category_id = @productCategoryId
        and upper(camr.Charge_Abbreviation) = @chargeAbbreviation
        and (
              upper(camr.Statute_Code) = @statuteCode
           or (camr.Statute_Code is null and @statuteCode is null)
            )
      if @@rowcount = 0
      begin;
        insert into dbo.slg_Charge_Abbreviation_Missing_Report values
            (@OriginalChargeAbbreviation, @jurisdictionId
            ,@OriginalStatuteCode, @productCategoryId
            ,getdate(), getdate(), 1);
      end;
    commit tran;
  end;
  select top 0 * from dbo.slg_Charge_Mapping;
end;
Note: holdlock is the same as serializable.
Links related to the solution above:
Insert or Update pattern for Sql Server - Sam Saffron
Key-Range Locking - MSDN
Documentation on serializable and other Table Hints - MSDN
Error and Transaction Handling in SQL Server Part One – Jumpstart Error Handling - Erland Sommarskog
SQL Server Isolation Levels: A Series - Paul White
Simpletalk - SQL Server Deadlocks by Example - Gail Shaw
Prerequisites
PostgreSQL
Spring Boot with Spring Data JPA
Problem
I have 2 tables: Products and ProductsLocationCounter. Each product has location_id and counter_value fields, among others; location_id is also the primary key of ProductsLocationCounter.
ProductsLocationCounter is meant to keep a count of the products at a specific location_id, updated whenever a new product is added.
The problem is that I also need the counter value at that point in time to be attached to the product entity.
So the flow would be like
1. create product
2. counter_value = get counter
3. increment counter
4. product.counter_value = counter_value
Of course this has to be done in a concurrent manner.
Now, I've read about and tried different solutions.
This Stack Overflow post suggests that I should let the db handle the concurrency, which sounds fine to me. But the trick is that I need the value of the counter in the same transaction. So I've created a trigger:
CREATE FUNCTION maintain_location_product_count_fun() RETURNS TRIGGER AS
$$
DECLARE
    counter_var BIGINT;
BEGIN
    IF TG_OP IN ('INSERT') THEN
        SELECT product_location_count.counter INTO counter_var
        FROM product_location_count
        WHERE id = new.location_id
        FOR UPDATE;
        UPDATE product_location_count SET counter = counter + 1 WHERE id = new.location_id;
        UPDATE products SET counter_value = counter_var WHERE location_id = new.location_id;
    END IF;
    RETURN NULL;
END
$$
LANGUAGE plpgsql;
CREATE TRIGGER maintain_location_product_count_trig
AFTER INSERT ON products
FOR EACH ROW
EXECUTE PROCEDURE maintain_location_product_count_fun();
and tested it with a parallel stream
IntStream.range(1, 5000)
    .parallel()
    .forEach(value -> {
        executeInsideTransactionTemplate(status -> {
            // locationId refers to the single pre-created location row used by the test
            var location = locationRepository.findById(locationId).get();
            return addProductWithLocation(location);
        });
    });
Got no duplication in the counter_value column. Is this trigger safe for multi-threaded apps? I haven't worked with triggers/PostgreSQL functions before, so I'm not sure what to expect.
The second solution I tried was to add PESSIMISTIC_WRITE on the findById method of the ProductsLocationCounter entity, but I ended up getting
cannot execute SELECT FOR UPDATE in a read-only transaction even though I was executing the code in a @Transactional annotated method (which by default has read-only = false).
The third one was to update and retrieve the value of the counter in the same statement, but Spring JPA doesn't allow that (nor does the underlying db), as an update statement only returns the number of rows affected.
Is there any other solution, or do I need to add something to the trigger function to make it thread-safe? Thank you.
This is how I achieved what I needed.
Long story short, I used a SQL function and called it inside the repository. I didn't need the trigger anymore.
https://stackoverflow.com/a/74208072/3018285
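For anyone looking for the shape of that approach, here is a minimal sketch, not the exact code from the linked answer: the function name increment_location_counter, the entity ProductLocationCount, and the repository method are my assumptions. The idea is that a Postgres function increments the counter and returns the new value atomically, and the Spring Data repository exposes it through a native query.

import org.springframework.data.jpa.repository.Query;
import org.springframework.data.repository.Repository;
import org.springframework.data.repository.query.Param;

// ProductLocationCount is assumed to be the JPA entity mapped to product_location_count
public interface ProductsLocationCounterRepository
        extends Repository<ProductLocationCount, Long> {

    // The Postgres side (created separately, sketched here as a comment):
    //   create function increment_location_counter(loc bigint) returns bigint as $$
    //     update product_location_count set counter = counter + 1
    //     where id = loc
    //     returning counter;
    //   $$ language sql;
    //
    // UPDATE ... RETURNING is atomic per row, so concurrent callers each
    // receive a distinct counter value without an explicit FOR UPDATE.
    @Query(value = "select increment_location_counter(:locationId)", nativeQuery = true)
    Long incrementAndGet(@Param("locationId") Long locationId);
}

The returned value can then be set on the new product inside the same transaction, which is what the trigger was trying to do.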
I am testing a plpgsql function in JMeter.
The following sample replicates the issue. I have a table named sing with a definition as follows:
db=# \d sing
      Table "schema1.sing"
 Column |  Type
--------+---------
 id     | bigint
 valr   | numeric
and my plpgsql function is as follows
create or replace function schema1.insissue(val text) returns text as $$
declare
    _p text; _h text;
    ids text[];
    valid numeric := functiontochangetoid(val); -- a sample function to change the value into an id
    slid bigint := nextval('rep_s'); -- sequence value
    dup text := null;
begin
    select array_agg(id) into ids from sing where valr = valid;
    raise notice 'ids %', ids;
    if coalesce(array_upper(ids,1),0) > 0 then
        dup := 'FAIL';
    end if;
    raise notice 'dup %', dup;
    if dup is null then
        insert into sing values (slid, valid);
        return 'SUCCESS' || slid;
    end if;
    return 'FAIL';
exception
    when others then
        get stacked diagnostics
            _p := pg_exception_context, _h := pg_exception_hint;
        raise notice 'sqlerrm >> :%', sqlerrm;
        raise notice 'position >> :%', _p;
        raise notice 'hint >> :%', _h;
        return 'FAIL';
end;
$$ language plpgsql;
Simply put, my function checks whether the value exists in the valr column of the sing table and, if it does not exist, inserts the value into the table.
Now my JMeter config:
To connect I use postgresql-42.2.14.jar.
When the ramp-up period is 1 second, i.e. 200 requests in one second, the function creates duplicate values like this; when the ramp-up period is 100 seconds there is no issue.
db=# select * from sing;
 id  | valr
-----+------
 897 | 1095
 898 | 1095
 899 | 1095
 900 | 1095
 901 | 1095
 902 | 1095
 903 | 1095
but it should actually be like this:
db=# select * from sing;
 id  | valr
-----+------
 897 | 1095
How can I avoid these kinds of duplicate values? My app will have high traffic, maybe 100 calls per second. Also, I can't make the valr column a primary key, because it contains other types of values.
My Postgres version:
db=# select version();
version
------------------------------------------------------------------------------------------------------------------
PostgreSQL 12.3 (Debian 12.3-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
At last I found the solution: serializable transaction isolation works for my actual problem. Check out this link: https://www.postgresql.org/docs/12/sql-set-transaction.html. Transactions are read committed by default; when we change the transaction to serializable on a session, it works.
To make a transaction serializable you can use the SET command on a session before any select query:
SET transaction isolation level serializable;
It cannot be done inside a function or procedure in PostgreSQL, only per session. If we use SET in a procedure, there will be an error like this:
NOTICE: sqlerrm >> :SET TRANSACTION ISOLATION LEVEL must be called before any query
NOTICE: position >> :SQL statement "SET transaction isolation level serializable"
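Since the isolation level has to be set on the session before the first query, it is the client that has to request it. A minimal JDBC sketch of the idea (dataSource is a placeholder for however you obtain connections; only schema1.insissue comes from the post above):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

public class InsIssueClient {
    private final DataSource dataSource; // placeholder connection source

    public InsIssueClient(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public String insertValue(String val) throws SQLException {
        try (Connection conn = dataSource.getConnection()) {
            conn.setAutoCommit(false);
            // must be set before the first query of the transaction
            conn.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
            try (PreparedStatement ps = conn.prepareStatement("select schema1.insissue(?)")) {
                ps.setString(1, val);
                try (ResultSet rs = ps.executeQuery()) {
                    if (!rs.next()) {
                        throw new SQLException("no result from insissue");
                    }
                    String outcome = rs.getString(1); // 'SUCCESS<id>' or 'FAIL'
                    conn.commit();
                    return outcome;
                }
            }
        }
    }
}

Note that under serializable isolation, concurrent writers can fail with serialization errors (SQLSTATE 40001), so callers should be prepared to retry the transaction.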
I'm performing a test with Couchbase 4.0 and Java SDK 2.2. I'm inserting 10 documents whose keys always start with "190".
After inserting these 10 documents I query them with:
cb.restore("190", cache);
Thread.sleep(100);
cb.restore("190", cache);
The query within the 'restore' method is:
Statement st = Select.select("meta(c).id, c.*").from(this.bucketName + " c").where(Expression.x("meta(c).id").like(Expression.s(callId + "_%")));
N1qlQueryResult result = bucket.query(st);
The first call to restore returns 0 documents:
Query 'SELECT meta(c).id, c.* FROM cache c WHERE meta(c).id LIKE "190_%"' --> Size = 0
The second call (100ms later) returns the 10 documents:
Query 'SELECT meta(c).id, c.* FROM cache c WHERE meta(c).id LIKE "190_%"' --> Size = 10
I tried adding PersistTo.MASTER to the 'insert' statement, but that didn't work either.
It seems that the 'insert' is not persisted immediately.
Any help would be really appreciated.
Joan.
You're using N1QL to query the data - and N1QL is only eventually consistent (by default), so it only shows up after the indices are recalculated. This isn't related to whether or not the data is persisted (meaning: written from RAM to disc).
You can try to change the scan_consistency level from its default - NOT_BOUNDED - to get consistent results, but the query will take longer to return.
read more here
Java scan_consistency options
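For example, with the 2.x Java SDK the consistency is requested per query through N1qlParams; a sketch along these lines (st and bucket are the Statement and Bucket from the question above):

import com.couchbase.client.java.query.N1qlParams;
import com.couchbase.client.java.query.N1qlQuery;
import com.couchbase.client.java.query.N1qlQueryResult;
import com.couchbase.client.java.query.consistency.ScanConsistency;

// REQUEST_PLUS makes the query wait until the index has caught up with all
// mutations made before the request, so freshly inserted documents are visible.
N1qlParams params = N1qlParams.build().consistency(ScanConsistency.REQUEST_PLUS);
N1qlQueryResult result = bucket.query(N1qlQuery.simple(st, params));

This trades query latency for read-your-own-writes behavior, which matches the symptom in the question: the data was there, the index just hadn't caught up yet.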
In my program's data model there is a table with two columns, Id_1 and Id_2, with the Number data type. This table has no primary key or unique key.
I have a package with a procedure named persist. This procedure is used for adding a row to the table.
My procedure in the package is as follows:
procedure persist(id_1 in Number,
                  id_2 in Number) is
begin
  insert into middle_table values (id_1, id_2);
end;
The problem is the following scenario:
Thread 1 and thread 2 concurrently call the above procedure with the same parameters, and the result is two identical rows added to the table, which is wrong for my application.
My question is: what can I do in the procedure to prevent this situation?
You should always have a constraint, but this requirement may still be valid in some cases.
A simple and elegant solution is to do a MERGE, or do a SELECT and INSERT. That way, no matter how many times the proc is executed, you are safe.
You can have a client-side implementation by putting the procedure call in a synchronized method of your thread class, so it can't be run in parallel.
public void your_method() {
// Other statements
synchronized( this ) { // blocks "this" from being executed by parallel threads
// call your oracle stored proc here
}
}
But if there are multiple clients across different platforms, you may have to write something in Oracle itself!
A Simple and elegant solution is to do a MERGE or do a SELECT and INSERT
procedure persist(id_1 in Number,
                  id_2 in Number)
is
  retcode NUMBER := 0;
begin
  retcode := 100;
  /* Checking for semaphore, else wait! */
  WHILE (retcode = 100)
  LOOP
    retcode := check_semaphore(); /* Returns 100 if present, else 0 */
    IF (retcode = 100) THEN
      /* Semaphore present */
      NULL;
    ELSE
      write_semaphore;
      /* probably an entry in a table with a commit;
         have to use savepoints, else every other transaction would be committed! */
      MERGE INTO middle_table m
      USING (SELECT id_1, id_2 FROM dual) new_Values
      ON (new_Values.id_1 = m.id_1
          AND new_Values.id_2 = m.id_2)
      WHEN NOT MATCHED THEN
        INSERT (id_1, id_2) VALUES (new_Values.id_1, new_Values.id_2);
      delete_semaphore;
      /* delete that entry */
      EXIT;
    END IF;
  END LOOP;
end;
/
What about adding a unique constraint to the table in question?
Or, instead of having the threads write directly to the db, put the new objects in a hashtable, check for duplicates, join the threads, and then use JPA to persist the objects found in the hashtable, as sketched below.
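A rough sketch of that in-memory de-duplication idea (MiddleRow, the key format, and the collector class are my inventions for illustration, not from the original post):

import java.util.Collection;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class MiddleRowCollector {
    // hypothetical row holder for this sketch
    public record MiddleRow(long id1, long id2) {}

    // duplicates collapse onto one key, so each (id_1, id_2) pair survives once
    private final Map<String, MiddleRow> pending = new ConcurrentHashMap<>();

    // called by each worker thread instead of inserting directly
    public void enqueue(long id1, long id2) {
        pending.putIfAbsent(id1 + ":" + id2, new MiddleRow(id1, id2));
    }

    // called once after all threads are joined; hand the de-duplicated rows
    // to JPA (e.g. a repository's saveAll) in one batch
    public Collection<MiddleRow> drain() {
        return pending.values();
    }
}

This only protects against duplicates within one JVM, so a database constraint is still the safer option when multiple clients are involved.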
I don't know if I'm wording this question correctly, but here it goes.
This is a web application using java, oracle, hibernate.
I have 2 tables in a one (items) to many (tasks) relationship.
Items
item_id
name
active_status
etc
Tasks
task_id
item_id
active_status
progress_status
etc
The item's status is made up of the statuses of all of its tasks. Here's the logic...
If Item Status is Canceled or On Hold...return Item Active Status
If there are no tasks, return Completed
If All Tasks are Active and NOT Superseded, then
...return Not Started if all tasks are Not Started
...return Completed if all tasks are Completed
...return On Hold if all tasks are On Hold
Otherwise return Started
I want to do this using SQL and map it to a field in my hibernate mapping file.
I've tried many things over the past several days and can't seem to get it to work. I tried grouping the records and, if one record was found, returning that status. I've used decode, case, etc.
Here are a few examples of things I've tried. In the second example I get a 'not a single-group group function' error.
Any thoughts?
select decode(i.active_status_id, 'OH', i.active_status_id, 'Ca', i.active_status_id,t.progress_status_id)
from tasks t
LEFT OUTER JOIN Items i
ON i.item_id = t.item_id
where t.item_id = 10927815 and t.active_status_id = 'Ac' and t.active_status_id != 'Su'
group by i.active_status_id, t.progress_status_id;
select case
when (count(*) = 1) then progress_status_id
else 'St'
end
from
(select progress_status_id
from tasks t
where t.item_id = 10927815 and (t.active_status_id = 'Ac' and t.active_status_id != 'Su') group by t.progress_status_id)
Perhaps something like this:
SELECT
    item_id
  , CASE
      WHEN active_status IN ('Canceled', 'On Hold') THEN active_status
      WHEN t_num = 0 THEN 'Completed'
      WHEN flag_all_active = 1 AND flag_all_not_started = 1 THEN 'Not Started'
      WHEN flag_all_active = 1 AND flag_all_completed = 1 THEN 'Completed'
      WHEN flag_all_active = 1 AND flag_all_on_hold = 1 THEN 'On Hold'
      ELSE 'Started'
    END
FROM
  (
    SELECT
        i.item_id
      , i.active_status
      , SUM(CASE WHEN t.task_id IS NULL THEN 0 ELSE 1 END) AS t_num
      , MIN(CASE t.active_status WHEN 'Ac' THEN 1 ELSE 0 END) AS flag_all_active
      , MIN(CASE t.progress_status_id WHEN 'Not Started' THEN 1 ELSE 0 END) AS flag_all_not_started
      , MIN(CASE t.progress_status_id WHEN 'Completed' THEN 1 ELSE 0 END) AS flag_all_completed
      , MIN(CASE t.progress_status_id WHEN 'On Hold' THEN 1 ELSE 0 END) AS flag_all_on_hold
    FROM
      items i
      LEFT OUTER JOIN tasks t ON (t.item_id = i.item_id)
    GROUP BY i.item_id, i.active_status
  )
;
If you're using annotations you can use @Formula("sql query here") for your derived properties.
See the Hibernate formula docs for a (surprisingly brief) explanation.
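For illustration, a trimmed-down sketch of what that can look like on the entity. The subquery here encodes only two of the status rules and guesses at column names from your table listing; the full CASE logic from the query above would slot into the same place:

import javax.persistence.Entity;
import javax.persistence.Id;
import org.hibernate.annotations.Formula;

@Entity
public class Item {
    @Id
    private Long itemId;

    private String activeStatus;

    // Hibernate appends this fragment as a correlated subquery to every
    // select of Item, so the status is computed by the database on load.
    @Formula("(select case when count(t.task_id) = 0 then 'Completed' else 'Started' end " +
             "from tasks t where t.item_id = item_id)")
    private String derivedStatus;
}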
Alternatively, since you're dealing with relatively large lists of items, it would be better to make the status calculation part of your initial query, thus avoiding the database getting hammered by hundreds or thousands of requests. This is what will probably happen if you iterate over each item in the list.
I would recommend joining the status calculation to whatever query you are using to generate your list (presumably in a NamedQuery). This lets your database do all the heavy lifting without being slowed down by the network, which is what it is best at. The Hibernate docs give lots of helpful examples of queries you can try.