Issue regarding Postgres function test run in JMeter - Java

I am testing a PL/pgSQL function in JMeter.
The following sample replicates the issue. I have a table named sing defined as follows:
db=# \d sing
     Table "schema1.sing"
 Column |  Type
--------+---------
 id     | bigint
 valr   | numeric
and my PL/pgSQL function is as follows:
create or replace function schema1.insissue(val text) returns text as $$
declare
    _p text; _h text;
    ids text[];
    valid numeric := functiontochangetoid(val); -- a sample function to change the value into an id
    slid bigint := nextval('rep_s');            -- sequence value
    dup text := null;
begin
    select array_agg(id) from sing where valr = valid into ids;
    raise notice 'ids %', ids;
    if coalesce(array_upper(ids, 1), 0) > 0 then
        dup := 'FAIL';
    end if;
    raise notice 'dup %', dup;
    if dup is null then
        insert into sing values (slid, valid);
        return 'SUCCESS' || slid;
    end if;
    return 'FAIL';
exception
    when others then
        get stacked diagnostics
            _p := pg_exception_context,
            _h := pg_exception_hint;
        raise notice 'sqlerrm >> :%', sqlerrm;
        raise notice 'position >> :%', _p;
        raise notice 'hint >> :%', _h;
        return 'FAIL';
end;
$$ language plpgsql;
Simply put, the function checks whether the value already exists in the valr column of the sing table and, if it does not exist, inserts it.
Now my JMeter config:
To connect I use postgresql-42.2.14.jar.
When the ramp-up period is 1 second, i.e. 200 requests in one second, the function creates duplicate values like this; when the ramp-up period is 100 seconds there is no issue.
db=# select * from sing;
 id  | valr
-----+------
 897 | 1095
 898 | 1095
 899 | 1095
 900 | 1095
 901 | 1095
 902 | 1095
 903 | 1095
but it should actually be like this:
db=# select * from sing;
 id  | valr
-----+------
 897 | 1095
How can I avoid these kinds of duplicate values? My app will have high traffic, maybe 100 calls per second. Also, I can't make the "valr" column a primary key, because it contains other types of values.
My Postgres version:
db=# select version();
version
------------------------------------------------------------------------------------------------------------------
PostgreSQL 12.3 (Debian 12.3-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
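
For reference, the same race can be reproduced without JMeter with a small JDBC harness. This is a minimal sketch in which the connection URL, credentials and the input value are placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Minimal sketch: fire 200 concurrent calls to schema1.insissue with the same value,
// roughly what a 200-thread, 1-second ramp-up does in JMeter.
// The JDBC URL, user, password and the input value are placeholders.
public class InsIssueLoadTest {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(200);
        for (int i = 0; i < 200; i++) {
            pool.submit(() -> {
                try (Connection c = DriverManager.getConnection(
                             "jdbc:postgresql://localhost:5432/db", "user", "password");
                     PreparedStatement ps = c.prepareStatement("select schema1.insissue(?)")) {
                    ps.setString(1, "same-value");
                    try (ResultSet rs = ps.executeQuery()) {
                        if (rs.next()) {
                            System.out.println(rs.getString(1));
                        }
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
            });
        }
        pool.shutdown();
    }
}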

At last I found the solution: changing the transaction isolation level to serializable fixes my actual problem. Check out this link: https://www.postgresql.org/docs/12/sql-set-transaction.html. Transactions are read committed by default; when we change the transaction to serializable on the session, it works.
To make a transaction serializable you can use the SET command on the session before any SELECT query:
SET transaction isolation level serializable;
It cannot be done inside a function or procedure in PostgreSQL, only on the session. We can use SET in a procedure, but there will be an error like this:
NOTICE: sqlerrm >> :SET TRANSACTION ISOLATION LEVEL must be called before any query
NOTICE: position >> :SQL statement "SET transaction isolation level serializable"
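
From the client side (for example in a plain Java program, or the connection used by a JMeter JDBC sampler) the same isolation level can be set on the connection before calling the function. A minimal sketch, assuming a plain JDBC call and a retry on serialization failure (SQLSTATE 40001); the URL and credentials are placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Minimal sketch: call schema1.insissue under SERIALIZABLE isolation and retry
// when PostgreSQL aborts the transaction with a serialization failure (SQLSTATE 40001).
public class SerializableInsert {
    public static String insertSerialized(String value) throws SQLException {
        try (Connection c = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/db", "user", "password")) {
            c.setAutoCommit(false);
            c.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
            while (true) {
                try (PreparedStatement ps = c.prepareStatement("select schema1.insissue(?)")) {
                    ps.setString(1, value);
                    try (ResultSet rs = ps.executeQuery()) {
                        rs.next();
                        String result = rs.getString(1);
                        c.commit();
                        return result;
                    }
                } catch (SQLException e) {
                    c.rollback();
                    if (!"40001".equals(e.getSQLState())) {
                        throw e; // not a serialization failure, give up
                    }
                    // serialization failure: loop and retry the whole transaction
                }
            }
        }
    }
}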

Related

ORA-06502: PL/SQL: numeric or value error when using TO_BINARY_DOUBLE with JDBC

I have a small snippet of code that grabs some data and generates SQL statements to insert them in an Oracle database. These are then executed via a JDBC driver on the Oracle server.
The issue I am running into is that if these statements contain a TO_BINARY_DOUBLE call they always fail for me and not for anyone else on my team, who supposedly have the exact same driver and environment as I do, which is incredibly strange.
CREATE TABLE "SOME_TABLE" (
"_id" BINARY_DOUBLE NOT NULL,
"double" BINARY_DOUBLE,
PRIMARY KEY ("_id")
);
DECLARE
"#value__id" BINARY_DOUBLE;
"#value_double" BINARY_DOUBLE;
BEGIN
"#value__id" := TO_BINARY_DOUBLE('0.0');
"#value_double" := TO_BINARY_DOUBLE('1.2');
INSERT INTO "SOME_TABLE" ("_id", "double")
VALUES(
"#value__id",
"#value_double"
);
END;
And the error:
Unable to execute SQL statement batch: error occurred during batching: ORA-06502: PL/SQL: numeric or value error
ORA-06512: at line 5
Hoping someone could shed some light on the source, or point me in the right direction to try to find it.
You appear to have different NLS settings to your colleagues; specifically NLS_NUMERIC_CHARACTERS. With that set to '.,' the code works; with it set to ',.' (i.e. expecting a comma as the decimal separator, rather than a period) it throws the error you see.
You can either change your Java environment to match your colleagues', which will probably involve changing the locale, either of your PC or via JVM flags; or override it in the function call:
"#value__id" := TO_BINARY_DOUBLE('0.0', '999D999', 'nls_numeric_characters=''.,''');
"#value_double" := TO_BINARY_DOUBLE('1.2', '999D999', 'nls_numeric_characters=''.,''');
using a format mask that covers whatever string values you might have to deal with - I'm guessing that those values would normally come from a user or a procedure argument. Of course, this then assumes that the string values will always have a period as the decimal separator.
db<>fiddle
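
For the "change your Java environment" route, here is a minimal sketch of the two usual options; the separators in the ALTER SESSION call are an assumption about what the colleagues' environments use:

import java.sql.Connection;
import java.sql.Statement;
import java.util.Locale;

// Minimal sketch of two ways to get a period as the decimal separator.
// The Oracle JDBC driver derives the session NLS settings from the JVM default
// locale, so option 1 must run before the connection is opened
// (or pass -Duser.language=en -Duser.country=US as JVM flags).
public class NlsSetup {
    // Option 1: change the JVM default locale before connecting.
    public static void forceUsLocale() {
        Locale.setDefault(Locale.US);
    }

    // Option 2: override only the numeric characters on an already-open session.
    public static void forceNumericCharacters(Connection conn) throws Exception {
        try (Statement st = conn.createStatement()) {
            st.execute("ALTER SESSION SET NLS_NUMERIC_CHARACTERS = '.,'");
        }
    }
}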

Oracle xmlsequence - java.sql.SQLException

I tried to use xmlsequence in this statement with datagrip:
select xmlsequence(extract(river, '/river/cities/*'))
from river_xml
where extractValue(river, '/river/name/text()')='Rhein';
the output was fine:
[2020-06-18 19:09:36] 1 row retrieved starting from 1 in 38 ms (execution: 0 ms, fetching: 38 ms)
but from the select statement I got:
<failed to load>
java.sql.SQLException: Interner Fehler: makeJavaArray doesn't support type 2007
at oracle.sql.ArrayDescriptor.makeJavaArray(ArrayDescriptor.java:1075)
at oracle.jdbc.oracore.OracleTypeCOLLECTION.unpickle81ImgBodyElements(OracleTypeCOLLECTION.java:571)
at oracle.jdbc.oracore.OracleTypeCOLLECTION.unpickle81ImgBody(OracleTypeCOLLECTION.java:527)
at oracle.jdbc.oracore.OracleTypeCOLLECTION.unpickle81(OracleTypeCOLLECTION.java:339)
at oracle.jdbc.oracore.OracleTypeCOLLECTION.unlinearizeInternal(OracleTypeCOLLECTION.java:235)
at oracle.jdbc.oracore.OracleTypeCOLLECTION.unlinearize(OracleTypeCOLLECTION.java:214)
at oracle.sql.ArrayDescriptor.toJavaArray(ArrayDescriptor.java:790)
at oracle.sql.ARRAY.getArray(ARRAY.java:301)
in JdbcHelperImpl.wrapIfNeeded(JdbcHelperImpl.java:461)
I can't find this problem in the internet, so maybe someone here knows how I can solve this?
Thanks for your help
The xmlsequence, extract and extractvalue functions are all deprecated. At the moment you're getting a result which is a single collection of type XMLSEQUENCETYPE, with each element of the collection a city node. Presumably it's that collection type that DataGrip isn't happy about.
You can use xmltable instead, which will give you a result with one row per city:
select x.*
from river_xml r
cross join xmltable(
'/river[name="Rhein"]/cities/city'
passing r.river
columns city xmltype path '.'
) x;
You can adapt that to get information about the city in separate columns instead of as an XMLType value instead if you want; it depends what you do with the result.
db<>fiddle doesn't seem to know what to do with XMLSEQUENCETYPE either, which is fair enough; but you can see the output from the XMLTable query.
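
If the result is consumed from Java rather than DataGrip, pulling plain columns out of XMLTable avoids the collection type entirely. A minimal sketch; the column paths ('name', 'population') are assumptions about the XML structure of the city nodes:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

// Minimal sketch: read city data as plain VARCHAR2 columns via XMLTABLE,
// so the JDBC driver never has to materialise an XMLSEQUENCETYPE collection.
// The column paths are assumptions about the XML structure.
public class RiverCities {
    public static void printCities(Connection conn) throws Exception {
        String sql =
            "select x.city_name, x.population "
          + "from river_xml r "
          + "cross join xmltable("
          + "  '/river[name=\"Rhein\"]/cities/city'"
          + "  passing r.river"
          + "  columns city_name  varchar2(200) path 'name',"
          + "          population varchar2(50)  path 'population'"
          + ") x";
        try (PreparedStatement ps = conn.prepareStatement(sql);
             ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                System.out.println(rs.getString("city_name") + " / " + rs.getString("population"));
            }
        }
    }
}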

Deadlock during insert/update in parallel

I am new to MS SQL Server and I am trying to record missing data by incrementing an occurrence counter (+1) if the row already exists, or by inserting it fresh with an initial counter value otherwise.
My application processes each element of a data array a[] in parallel, and when it does, SQL Server throws a deadlock on this table. Although I set the transaction isolation level, the same deadlock still happens. My application is written in Java/Camel/Hibernate.
Stored procedure:
IF(@recordCount = 0 OR @recordCount > 1)
BEGIN
    IF(@chargeAbbreviation IS NOT NULL)
    BEGIN
        set transaction isolation level READ COMMITTED;
        begin transaction;
        UPDATE dbo.SLG_Charge_Abbreviation_Missing_Report WITH (UPDLOCK, HOLDLOCK)
        SET dbo.SLG_Charge_Abbreviation_Missing_Report.Occurrence_Count += 1,
            dbo.SLG_Charge_Abbreviation_Missing_Report.ModifiedAt = GETDATE()
        WHERE dbo.SLG_Charge_Abbreviation_Missing_Report.Jurisdiction_ID = @jurisdictionId AND
              UPPER(dbo.SLG_Charge_Abbreviation_Missing_Report.Charge_Abbreviation) = @chargeAbbreviation AND
              (UPPER(dbo.SLG_Charge_Abbreviation_Missing_Report.Statute_Code) = @statuteCode OR (dbo.SLG_Charge_Abbreviation_Missing_Report.Statute_Code IS NULL AND @statuteCode IS NULL)) AND
              dbo.SLG_Charge_Abbreviation_Missing_Report.Product_Category_id = @productCategoryId
        IF(@@ROWCOUNT = 0)
        BEGIN
            INSERT INTO dbo.SLG_Charge_Abbreviation_Missing_Report VALUES(@OriginalChargeAbbreviation,@jurisdictionId,@OriginalStatuteCode,@productCategoryId,GETDATE(),GETDATE(),1);
        END
        commit
    END
    SELECT TOP 0 * FROM dbo.SLG_Charge_Mapping
END
It looks like you're trying to use some version of Sam Saffron's upsert method.
To take advantage of the Key-Range Locking when using holdlock/serializable you need to have an index that covers the columns in the query.
If you don't have one that covers this query, you could consider creating one like this:
create unique nonclustered index ux_slg_Charge_Abbreviation_Missing_Report_jid_pcid_ca_sc
on dbo.slg_Charge_Abbreviation_Missing_Report (
Jurisdiction_id
, Product_Category_id
, Charge_Abbreviation
, Statute_Code
);
I don't think the line: set transaction isolation level read committed; is doing you any favors in this instance.
set nocount on;
set xact_abort on;
if(@recordCount = 0 or @recordCount > 1)
begin;
    if @chargeAbbreviation is not null
    begin;
        begin tran;
            update camr with (updlock, serializable)
                set camr.Occurrence_Count = camr.Occurrence_Count + 1
                  , camr.ModifiedAt = getdate()
            from dbo.slg_Charge_Abbreviation_Missing_Report as camr
            where camr.Jurisdiction_id = @jurisdictionId
              and camr.Product_Category_id = @productCategoryId
              and upper(camr.Charge_Abbreviation) = @chargeAbbreviation
              and (
                    upper(camr.Statute_Code) = @statuteCode
                 or (camr.Statute_Code is null and @statuteCode is null)
                  )
            if @@rowcount = 0
            begin;
                insert into dbo.slg_Charge_Abbreviation_Missing_Report values
                    (@OriginalChargeAbbreviation,@jurisdictionId
                    ,@OriginalStatuteCode,@productCategoryId
                    ,getdate(),getdate(),1);
            end;
        commit tran
    end;
    select top 0 * from dbo.slg_Charge_Mapping;
end;
Note: holdlock is the same as serializable.
Links related to the solution above:
Insert or Update pattern for Sql Server - Sam Saffron
Key-Range Locking - MSDN
Documentation on serializable and other Table Hints - MSDN
Error and Transaction Handling in SQL Server Part One – Jumpstart Error Handling - Erland Sommarskog
SQL Server Isolation Levels: A Series - Paul White
Simpletalk - SQL Server Deadlocks by Example - Gail Shaw

Couchbase query does not see documents added recently

I'm performing a test with Couchbase 4.0 and Java SDK 2.2. I'm inserting 10 documents whose keys always start with "190".
After inserting these 10 documents I query them with:
cb.restore("190", cache);
Thread.sleep(100);
cb.restore("190", cache);
The query within the 'restore' method is:
Statement st = Select.select("meta(c).id, c.*").from(this.bucketName + " c").where(Expression.x("meta(c).id").like(Expression.s(callId + "_%")));
N1qlQueryResult result = bucket.query(st);
The first call to restore returns 0 documents:
Query 'SELECT meta(c).id, c.* FROM cache c WHERE meta(c).id LIKE "190_%"' --> Size = 0
The second call (100ms later) returns the 10 documents:
Query 'SELECT meta(c).id, c.* FROM cache c WHERE meta(c).id LIKE "190_%"' --> Size = 10
I tried adding PersistTo.MASTER in the 'insert' statement, but that doesn't work either.
It seems that the 'insert' is not persisted immediately.
Any help would be really appreciated.
Joan.
You're using N1QL to query the data - and N1QL is only eventually consistent (by default), so it only shows up after the indices are recalculated. This isn't related to whether or not the data is persisted (meaning: written from RAM to disc).
You can try to change the scan_consistency level from its default - NOT_BOUNDED - to get consistent results, but that would take longer to return.
read more here
java scan_consistency options
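With the 2.x Java SDK that looks roughly like this (a minimal sketch; REQUEST_PLUS makes the query wait until the index has caught up with earlier mutations, at the cost of extra latency):

import com.couchbase.client.java.Bucket;
import com.couchbase.client.java.query.N1qlParams;
import com.couchbase.client.java.query.N1qlQuery;
import com.couchbase.client.java.query.N1qlQueryResult;
import com.couchbase.client.java.query.Select;
import com.couchbase.client.java.query.Statement;
import com.couchbase.client.java.query.consistency.ScanConsistency;
import com.couchbase.client.java.query.dsl.Expression;

// Minimal sketch: the same query as in the question, executed with REQUEST_PLUS
// scan consistency so the index reflects all mutations made before the query.
public class ConsistentRestore {
    public static N1qlQueryResult restore(Bucket bucket, String bucketName, String callId) {
        Statement st = Select.select("meta(c).id, c.*")
                .from(bucketName + " c")
                .where(Expression.x("meta(c).id").like(Expression.s(callId + "_%")));

        N1qlParams params = N1qlParams.build().consistency(ScanConsistency.REQUEST_PLUS);

        return bucket.query(N1qlQuery.simple(st, params));
    }
}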

ORMLite groupByRaw and groupBy issue on android SQLite db

I have a SQLite table content with following columns:
-----------------------------------------------
|id|book_name|chapter_nr|verse_nr|word_nr|word|
-----------------------------------------------
the sql query
select count(*) from content where book_name = 'John'
group by book_name, chapter_nr
in DB Browser returns 21 rows (which is the count of chapters)
the equivalent with ORMLite android:
long count = getHelper().getWordDao().queryBuilder()
.groupByRaw("book_name, chapter_nr")
.where()
.eq("book_name", book_name)
.countOf();
returns 828 rows (which is the count of verse numbers)
as far as I know the above code is translated to:
select count(*) from content
where book_name = 'John'
group by book_name, chapter_nr
result of this in DB Browser:
| count(*)
------------
1 | 828
2 | 430
3 | 653
...
21| 542
---------
21 Rows returned from: select count(*)...
so it seems to me that ORMLite returns the first row of the query as the result of countOf().
I've searched stackoverflow and google a lot. I found this question (and more interestingly the answer)
You can also count the number of rows in a custom query by calling the > countOf() method on the Where or QueryBuilder object.
// count the number of lines in this custom query
int numRows = dao.queryBuilder().where().eq("name", "Joe Smith").countOf();
this is (correct me if I'm wrong) exactly what I'm doing, but somehow I just get the wrong number of rows.
So... either I'm doing something wrong here or countOf() is not working the way it is supposed to.
Note: It's the same with groupBy instead of groupByRaw (according to ORMLite documentation joining groupBy's should work)
...
.groupBy("book_name")
.groupBy("chapter_nr")
.where(...)
.countOf()
EDIT: getWordDao returns from class Word:
@DatabaseTable(tableName = "content")
public class Word { ... }
returns 828 rows (which is the count of verse numbers)
This seems to be a limitation of the QueryBuilder.countOf() mechanism. It is expecting a single value and does not understand the addition of GROUP BY to the count query. You can tell that it doesn't because that method returns a single long.
If you want to extract the counts for each of the groups, it looks like you will need to do a raw query; check out the docs.
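A minimal sketch of the raw-query route, assuming the goal is the number of chapters (i.e. the number of groups) rather than the individual per-group counts; it reuses the same DAO as in the question:

// Count the number of groups (chapters) with a raw query, since
// QueryBuilder.countOf() only returns a single, ungrouped value.
long chapterCount = getHelper().getWordDao().queryRawValue(
        "select count(*) from ("
      + "  select 1 from content"
      + "  where book_name = ?"
      + "  group by book_name, chapter_nr"
      + ")",
        book_name);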
