I have the following method where I am doing DB insertions. I want to perform the inserts in a transaction, meaning that when there are 100 values in records, I want to insert them all and commit once.
How could I amend the following so that I can get the record.value() info into each of the insert queries below? This would essentially equate to calling andThen() 100 times, but of course I do not want to write andThen() 100 times, nor do I know the number of records, which can vary.
To note: using RxJava 1.
Please advise. Thank you.
public Observable<?> insert(Observable<Record<String, String>> records) {
    // I am looking for a way to get this record.value() into the following return block.
    records.flatMap(record -> {
        String value = record.value();
        return null;
    });
    return client.rxGetConnection()
        // making it transactional by setting auto-commit to false
        .flatMap(connection -> connection.rxSetAutoCommit(false)
            // was looking to insert the above records and flatMap operations here, but it is not possible from what I have explored.
            .toCompletable()
            // .andThen(connection.rxExecute("INSERT (name) VALUES " + record.value())) // trying to achieve this, to be able to get record.value() for each insert
            .andThen(connection.rxExecute("INSERT (name) VALUES some_value"))
            .flatMap(rows -> connection.rxCommit()))
        .toObservable();
}
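For what it's worth, one way this could be wired up — a rough sketch, not tested against your setup: it assumes the rxified Vert.x SQL client from the snippet, the table name names is hypothetical, and in real code the value should be a bound parameter rather than concatenated. The idea is to flatten the records stream into a single Completable with concatMap, so the chain grows with the number of records instead of hard-coding andThen() calls:
public Observable<?> insert(Observable<Record<String, String>> records) {
    return client.rxGetConnection()
        .flatMap(connection -> connection.rxSetAutoCommit(false) // transactional
            .toCompletable()
            // one INSERT per record, executed sequentially on the same connection
            .concatWith(records
                .concatMap(record -> connection
                    // hypothetical table; bind record.value() as a parameter in real code
                    .rxExecute("INSERT INTO names (name) VALUES ('" + record.value() + "')")
                    .toObservable())
                .toCompletable())
            // commit once, after all inserts have completed
            .andThen(connection.rxCommit()))
        .toObservable();
}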
Related
I have a scenario in which some DB calls depend upon others, and I am not getting how to chain all these operations.
Scenario: 3 tables are considered. One for getting an incremental number from a table (DB call 1); after getting the number, increment and save it (DB call 2); then use it to store a row in another table (DB call 3); then store some more information in a child table of the previous operation's table (DB call 4).
I am not able to work out how to keep all these operations in one pipeline, so I used nested operations. Because of this, the API returns a value before all the DB operations have completed. So, in the worst case, if the 4th DB call fails, the API has still returned a value, which shouldn't happen.
Can anyone suggest how to make this task one chain?
How do I pass the first DB call's result to the other operations in the same chain?
public Mono<MasterResponse> createMasterDetails(MasterRequest request)
{
    MasterResponse response = new MasterResponse();
    // DB op 1
    seriesRepo.findByItemType(request.getItemType())
        .doOnSuccess(series -> {
            if (series == null)
                throw new RuntimeException("Series detail not found for itemType: " + request.getItemType());
            // 2nd DB operation is within this method
            String billNo = getBillNumberByType(series);
            MasterDetails masterDetails = new MasterDetails();
            // operation to copy request info to masterDetails
            masterDetails.setBillNo(billNo);
            // returning dto value set
            response.setBillNo(billNo);
            // DB op 3
            masterDetailsRepo.save(masterDetails)
                .doOnSuccess(masterData -> {
                    MasterAttribute masterAttribute = new MasterAttribute();
                    // operation to copy request info to masterAttribute
                    masterDetails.setMasterId(masterData.getId());
                    masterDetailsRepo.save(masterDetails)
                        .doOnSuccess(ardRes -> log.info("master details saved in DB."))
                        .subscribe();
                }).subscribe();
        }).doOnError(err -> {
            log.error("Unable to fetch information from series :: {}", err.getMessage());
            Mono.error(new RuntimeException("Unable to fetch information from series :: " + err.getMessage()));
        }).block();
    return Mono.just(response);
}
Based on my understanding you could actually make it more succinct, and you don't need to use block().
Here's some sample code which you could use:
return seriesRepo
    .findByItemType(request.getItemType())
    .switchIfEmpty(Mono.error({YOUR EXCEPTION}))
    .flatMap(series -> Mono.just(getBillNumberByType(series)))
    .flatMap(billNo -> {
        MasterDetails masterDetails = new MasterDetails();
        // operation to copy request info to masterDetails
        masterDetails.setBillNo(billNo);
        return Mono.just(masterDetails);
    })
    .flatMap(masterDetailsRepo::save)
    .flatMap(masterDetails -> {
        MasterResponse response = new MasterResponse();
        response.setBillNo(masterDetails.getBillNo());
        return Mono.just(response);
    });
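The question's 4th DB call (saving the child MasterAttribute) could be chained the same way, just before the response step — a sketch, where masterAttributeRepo and setMasterId on the attribute are assumed names:
.flatMap(masterDetails -> {
    MasterAttribute masterAttribute = new MasterAttribute();
    // operation to copy request info to masterAttribute
    masterAttribute.setMasterId(masterDetails.getId());
    // DB op 4: save the child row, then pass masterDetails on to the response step
    return masterAttributeRepo.save(masterAttribute).thenReturn(masterDetails);
})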
I am making the following query, which works and updates the info in the database as expected. But is there a way I can get an output from the Single<UpdateResult>?
public Single<UpdateResult> update(String sta_cd, String id) {
    return client.rxUpdateWithParams(
        "UPDATE mydb.sample SET sta_cd=?, some_ts=current_timestamp WHERE id=? RETURNING sta_cd",
        new JsonArray(Arrays.asList(sta_cd, id)));
}
From the following e variable, I was hoping to get the value "10012", but it doesn't seem possible. I tried map and flatMap, and inspected the options available on e. The only result data in e is 'keys', which is an empty list, and 'updated', which is an integer value of 1. My DB is Postgres, and I was expecting results from the Single<UpdateResult> since I am using RETURNING in the query.
I have done the same for an insert operation, which works, but that is via the method rxQueryWithParams(), which returns a Single<ResultSet> instead. Thus I am wondering if this is even possible. I have been looking at the docs, and maybe it is not possible since an update query returns a Single<UpdateResult>. Looking for advice on whether it is possible to return data from an update query, or for a way around this. Please advise. Thanks.
Single<UpdateResult> result = someClass.update("10012", "78632");
result.subscribe(
e -> {
System.out.println("success: " + e); // I land here as expected
},
error -> {
System.out.println("error: " + error);
}
);
Because you are using RETURNING in these commands, treat these INSERT and UPDATE commands as queries.
Run them through rxQueryWithParams() so you can retrieve the results.
When you run rxUpdateWithParams(), the UpdateResult contains only the number of rows affected.
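A sketch of what that could look like for the update in the question (same rxified Vert.x client; assumes exactly one row comes back, so the first row / first column holds the returned value):
public Single<String> update(String sta_cd, String id) {
    return client.rxQueryWithParams(
            "UPDATE mydb.sample SET sta_cd=?, some_ts=current_timestamp WHERE id=? RETURNING sta_cd",
            new JsonArray(Arrays.asList(sta_cd, id)))
        // RETURNING makes the statement produce a result set
        .map(resultSet -> resultSet.getResults().get(0).getString(0));
}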
I wrote a stream UDF for a range query and it doesn't work properly. Do you have any idea how to set many filters with Lua?
The query:
SELECT id1, id2, link_type, visibility, data, time, version FROM linktable
WHERE id1 = <id1> AND
link_type = <link_type> AND
time >= <minTime> AND
time <= <maxTimestamp> AND
visibility = VISIBILITY_DEFAULT
ORDER BY time DESC LIMIT <offset>, <limit>;
Java code to invoke this lua function:
stmt = new Statement();
stmt.setNamespace(dbid);
stmt.setSetName("links");
stmt.setIndexName("time");
stmt.setFilters(Filter.range("time", minTimestamp, maxTimestamp));
stmt.setAggregateFunction("linkbench", "check_id1", Value.get(id1));
stmt.setAggregateFunction("linkbench", "check_linktype", Value.get(link_type));
resultSet = client.queryAggregate(null, stmt, "linkbench", "check_visibility", Value.get(VISIBILITY_DEFAULT));
Lua Script:
local function map_links(record)
    -- Return the id2 bin for each matching record.
    -- Could return other record bins here as well.
    return record.id2
end

function check_id1(stream, id1)
    local function filter_id1(record)
        return record.id1 == id1
    end
    return stream : filter(filter_id1) : map(map_links)
end

function check_linktype(stream, link_type)
    local function filter_linktype(record)
        return record.link_type == link_type
    end
    return stream : filter(filter_linktype) : map(map_links)
end

function check_visibility(stream, visibility)
    local function filter_visibility(record)
        return record.visibility == visibility
    end
    return stream : filter(filter_visibility) : map(map_links)
end
Any idea how to write the filter for all the query restrictions?
Thank you!
Since release 3.12, a predicate filter would be the correct approach, avoiding Lua completely for better performance and scalability.
Take a look at the PredExp class of the Java client and its examples for building complex filters. Predicate filtering also currently exists for the C, C# and Go clients.
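As a sketch, the question's restrictions might look like this with predicate filters (Aerospike Java client 3.12+; the bin types are assumed to be integers, and variable names are taken from the question):
Statement stmt = new Statement();
stmt.setNamespace(dbid);
stmt.setSetName("links");
// secondary-index range filter on time, as in the question
stmt.setFilter(Filter.range("time", minTimestamp, maxTimestamp));
// remaining restrictions as one predicate expression
// (postfix notation: three comparisons followed by and(3))
stmt.setPredExp(
    PredExp.integerBin("id1"),
    PredExp.integerValue(id1),
    PredExp.integerEqual(),
    PredExp.integerBin("link_type"),
    PredExp.integerValue(link_type),
    PredExp.integerEqual(),
    PredExp.integerBin("visibility"),
    PredExp.integerValue(VISIBILITY_DEFAULT),
    PredExp.integerEqual(),
    PredExp.and(3));
RecordSet rs = client.query(null, stmt);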
Multiple aggregation functions are not supported. Aggregation and Filter functions must be combined.
function combined_aggregation(stream, id1, link_type, visibility)
    local function combined_filter(record)
        return record.id1 == id1 and
               record.link_type == link_type and
               record.visibility == visibility
    end
    return stream : filter(combined_filter) : map(map_links)
end
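With the combined UDF, the Java invocation collapses to a single aggregate call carrying all three values (the statement setup with the time range filter stays as in the question):
resultSet = client.queryAggregate(null, stmt, "linkbench", "combined_aggregation",
        Value.get(id1), Value.get(link_type), Value.get(VISIBILITY_DEFAULT));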
I want to get a unique ID from my domain object (table) ID each time a method is called, so that IDs do not repeat. I have a function that returns a unique ID.
public static Long generateID (Short company)
    throws Exception
{
    IDDAO iDDAO = SpringApplicationContext.getBean (IDDAO.class);
    ID iD = iDDAO.findByNaturalKey (new IDNatKey (company));
    if (iD != null)
    {
        // Check if ID has reached limit, then reset the ID to the first ID
        if (iD.getLatestIDno ().longValue () == iD.getLastIDno ().longValue ())
        {
            iD.setLatestIDno (iD.getFrstIDno ());
        }
        // Get next ID
        iD.setLatestIDno (iD.getLatestIDno () + 1);
        // update database with latest id
        iDDAO.update (iD);
        return iD.getLatestIDno ();
    }
    return null; // no ID row found for this company
}
The issue is that if I access the application from two machines and press the button in the UI to generate an ID at exactly the same time, this method sometimes returns duplicate IDs,
e.g.
Long ID = TestClass.generateID ((short) 123);
This sometimes gives me duplicates.
I made the method like this:
public static synchronized Long generateID (Short company)
    throws Exception
so that only one thread can enter this function at a time, but the duplicate issue is still there.
I do not want to use database sequences, because I do not want gaps in the ID sequence if a transaction rolls back; in that case the sequence is still incremented, which I do not want. Gaps in the middle are OK, but not at the end. E.g. if we have 1, 2 and 3 as IDs and 2 rolls back, that is OK. But if 3 rolls back, we should get 3 again when another user comes; with a sequence, it will give 4.
Please help me see what I am doing incorrectly. Will static synchronized still allow other threads to enter this function at the same time? I have many other static (but not synchronized) functions in the class. Will making this one static synchronized cause issues for them too?
Thanks
Aiden
You can use java.util.UUID; it will generate a universally unique ID.
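A minimal illustration, using only the standard library:
import java.util.UUID;

String id = UUID.randomUUID().toString(); // e.g. "f47ac10b-58cc-4372-a567-0e02b2c3d479"
Note, though, that random UUIDs are neither sequential nor gap-less, so on their own they don't meet the question's gap-free requirement.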
Keep 2 unique IDs:
a db-provided, internal transaction ID, created by an autoincrement every time a new transaction is built. Gaps may appear if transactions are rolled back.
a pretty, gap-less "ticket ID", assigned only once the transaction commits successfully.
Assign both from the DB - it is best to keep all shared state there, as the DB will guarantee ACID, while Java concurrency is far trickier to get right.
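A sketch of how the gap-less "ticket ID" could be assigned from the DB (plain JDBC; the ticket_counter table and its columns are hypothetical). The counter row is locked with FOR UPDATE inside the same transaction that commits the business data, so concurrent callers serialize on it and a rolled-back transaction never consumes a number:
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.sql.DataSource;

public class TicketIdAssigner {
    private final DataSource dataSource;

    public TicketIdAssigner(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public long assignTicketId(short company) throws Exception {
        try (Connection con = dataSource.getConnection()) {
            con.setAutoCommit(false);
            try {
                long next;
                // lock the counter row for this company until commit/rollback
                try (PreparedStatement lock = con.prepareStatement(
                        "SELECT last_ticket_no FROM ticket_counter WHERE company = ? FOR UPDATE")) {
                    lock.setShort(1, company);
                    try (ResultSet rs = lock.executeQuery()) {
                        if (!rs.next())
                            throw new IllegalStateException("No counter row for company " + company);
                        next = rs.getLong(1) + 1;
                    }
                }
                try (PreparedStatement update = con.prepareStatement(
                        "UPDATE ticket_counter SET last_ticket_no = ? WHERE company = ?")) {
                    update.setLong(1, next);
                    update.setShort(2, company);
                    update.executeUpdate();
                }
                // ... insert the business row carrying `next` here ...
                con.commit();
                return next;
            } catch (Exception e) {
                con.rollback(); // the number is released together with the data
                throw e;
            }
        }
    }
}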
In that case, I think you can try the below. Note that generateID is static, so there is no this to lock on; use the class object (or a dedicated lock) instead:
synchronized (TestClass.class)
{
    if (iD != null)
    {
        // Check if ID has reached limit, then reset the ID to the first ID
        if (iD.getLatestIDno ().longValue () == iD.getLastIDno ().longValue ())
        {
            iD.setLatestIDno (iD.getFrstIDno ());
        }
        // Get next ID
        iD.setLatestIDno (iD.getLatestIDno () + 1);
        // update database with latest id
        iDDAO.update (iD);
        return iD.getLatestIDno ();
    }
}
I have a table named Person. My select SQL usually brings back, let's say, 100K persons; since it takes so much time, I am getting a read-timeout exception.
So I know that I have to use ROWNUM to limit the result size.
class MyService {
    @Transactional(rollbackFor = Exception.class)
    public void doJob() {
        jobService.process();
    }
}

class JobService {
    public void process() {
        List<Person> personList = jdbcQuery.query("Select * from ... ... where rownum<1000", ROWMAPPAR, parameter);
        // Process all record list
    }
}
Everything is OK till now. But I want to be sure all records, let's say 100K, are processed, and if there is an error while processing one of the batches, a rollback should occur.
Do I need to invoke the process() method recursively?
Using
Spring 3.5
Oracle 11g
Using ROWNUM as shown in your query may very well not give you the results you expect. (But on the other hand it may, at least sometimes :-). ROWNUM is generated as rows are emitted from the query, AFTER the WHERE clause is evaluated, but BEFORE any ORDER BY or HAVING clauses are applied. This can cause your query to return results which may surprise you.
Try creating the following table:
create table t(n number);
And populating it with:
insert into t (n)
select n from
(select rownum n from dual connect by level <= 2000)
where n > 1234;
Thus the table will have rows with values of 1235 through 2000.
Then run each of the following queries in order:
select *
from t
order by n;
select n, rownum
from t
where rownum < 100
order by n;
select n, rownum as r from
(select n
from t
order by n);
select n, r from
(select n, rownum as r from
(select n
from t
order by n))
where r < 100
order by n;
and observe the differences in the output you get.
For those who don't have an Oracle instance handy, here's an SQLFiddle with the above in it.
Share and enjoy.
Do I need to invoke the process() method recursively?
I wouldn't do that. Simply rewrite your code to this:
class MyService {
    @Transactional(rollbackFor = Exception.class)
    void doJob() {
        // Continue processing within the same transaction, until process() returns false
        while (jobService.process());
    }
}

class JobService {
    public boolean process() {
        List<Person> personList = jdbcQuery.query(
            "Select * from ... ... where rownum<=1000", ROWMAPPAR, parameter);
        // I've changed your predicate ------^^
        // process() returns false when the above select returns less than 1000 records
        return personList.size() == 1000;
    }
}
Beware, though, that one of the problems that you may be experiencing is the fact that you're keeping a very long-running transaction alive. This will cause a lot of concurrency inside your database and might contribute to the batch job running slow. If you don't absolutely need an atomic batch job (everything committed or everything rolled back), you might consider running each sub-job in its own transaction.
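If you can relax the atomicity requirement, a minimal sketch of that per-chunk variant (Spring's declarative transactions; assumes jobService is a separate Spring bean, so the proxy applies the annotation):
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

class JobService {
    // Each call commits (or rolls back) on its own, so a failure only loses
    // the current chunk rather than the whole batch.
    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public boolean process() {
        // ... fetch and process one chunk of up to 1000 rows, as above ...
        return false; // return true while full chunks keep coming
    }
}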