Keep row locked when exception occurs inside transaction - Spring Boot - java

We are having trouble keeping a transaction running when an exception occurs, and also keeping its rows locked when this happens.
First, a bit of code to clarify.
With this query we perform a FOR UPDATE SKIP LOCKED select (JobDao):
@NotNull
default Optional<EventEntity> getJobForProcessing(@NotNull final Instant now) {
    final List<EventEntity> events = getUnprocessedEvent(now, PageRequest.of(0, 1));
    if (events.isEmpty()) {
        return Optional.empty();
    }
    return Optional.ofNullable(events.get(0));
}
@NotNull
@Query("""
        select ee from EventEntity ee
        where ee.processed = false
        order by ee.createdAt""")
@Lock(LockModeType.PESSIMISTIC_WRITE)
@QueryHints({@QueryHint(name = "javax.persistence.lock.timeout", value = LockOptions.SKIP_LOCKED + "")})
List<EventEntity> getUnprocessedEvent(@NotNull final Instant now, @NotNull final Pageable pageable);
The following code runs on another thread using @Async (JobTransaction, called from a scheduler):
@Transactional
public void runInTransaction() {
    final Instant now = Now.getInstantUtc();
    final Optional<Job> jobForProcessing = dao.getJobForProcessing(now);
    if (jobForProcessing.isPresent()) {
        final Job job = jobForProcessing.get();
        jobProcessor.processJob(job);
    } else {
        Log.d(TAG, "No more jobs, stopping thread %s.".formatted(Thread.currentThread().getName()));
    }
}
The processJob(job) method — simplified — is the following (JobProcessor):
public void processJob(@NotNull final Job job) {
    try {
        worker.execute(job);
        jobService.markJobAsProcessed(job);
    } catch (final Throwable throwable) {
        Log.e(TAG, "Error processing job: " + job.getId(), throwable);
        jobService.rescheduleJobExecution(job, throwable);
        throw new RuntimeException(throwable);
    }
}
In the code above, the important part is the catch block that calls jobService.rescheduleJobExecution(...).
This is the rescheduleJobExecution (JobService):
public void rescheduleJobExecution(@NotNull final Job job,
                                   @NotNull final Throwable throwable) {
    final var now = Now.getInstantUtc();
    final var nextRetryAt = backoffCalculator.calculateNextRetryAt(job, now);
    final var stackTraceAsString = Throwables.getStackTraceAsString(throwable);
    final var executionAttempts = job.getExecutionAttempts() + 1;
    final var newJob = job.cloneEntity()
            .withExecutionAttempts(executionAttempts)
            .withErrorLogLastRetry(stackTraceAsString)
            .withNextRetryAt(nextRetryAt)
            .build();
    jobDao.save(newJob);
}
The problem I'm facing is that when an error occurs and we try to reschedule the job, the following is thrown: Transaction silently rolled back because it has been marked as rollback-only.
I understand why this happens, but I have not found any way to fix it. What I have already tried:
Changing runInTransaction to @Transactional(noRollbackFor = RuntimeException.class);
Adding @Transactional(propagation = Propagation.REQUIRES_NEW) to the rescheduleJobExecution method. This solves the problem but brings a new one: since the outer transaction has already failed, the row becomes visible to other threads, and in a multithreaded environment this can lead to race conditions (i.e. the job is re-selected for execution before the reschedule is done).
So I'm looking for a solution that lets me handle job rescheduling and also keeps jobs safe in a multithreaded environment. The getUnprocessedEvent method is used by many threads, and we would also like to keep using the FOR UPDATE SKIP LOCKED that Postgres provides.
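One direction that might reconcile both requirements (a sketch, not a verified fix): invert the propagation, so that the failure-prone worker call runs in an inner REQUIRES_NEW transaction while the outer transaction, which holds the SKIP LOCKED row, never sees the rollback-only flag and can commit the reschedule. The WorkerTx wrapper bean below is hypothetical, and this assumes worker.execute does not itself update the locked row (from its own connection it would block on the outer lock).

// Outer transaction: holds the FOR UPDATE SKIP LOCKED row until commit,
// so no other thread can re-select the job while we reschedule it.
@Transactional
public void runInTransaction() {
    final Instant now = Now.getInstantUtc();
    dao.getJobForProcessing(now).ifPresent(job -> {
        try {
            workerTx.execute(job);                 // hypothetical wrapper bean, see below
            jobService.markJobAsProcessed(job);
        } catch (final RuntimeException e) {
            // Only the inner transaction rolled back; the outer one is still
            // healthy, still holds the row lock, and commits the reschedule.
            jobService.rescheduleJobExecution(job, e);
        }
    });
}

// Separate bean, so the @Transactional proxy is honored (self-invocation
// would bypass it). A failure here marks only this inner transaction
// rollback-only, leaving the lock-holding outer transaction untouched.
@Component
public class WorkerTx {
    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void execute(final Job job) {
        worker.execute(job);
    }
}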

Related

Spring WebFlux - how to determine when my client has finished working

I need to call a certain API with multiple query params simultaneously; to do that I wanted to use a reactive approach. I ended up with a reactive client that can call the endpoint based on a passed SearchQuery, handle pagination of the response, call for the remaining pages, and return Flux<Item>. So far it works fine; however, what I need to do now is:
Collect data for all search queries and save it as the initial state.
Once the initial data is collected, start repeating those calls at small time intervals and validate each item against the initial data. Basically, I need to find new items from there.
But I'm running out of options for how to solve that. I came up with probably the dirtiest solution ever, but I bet there are much better ways to do it.
So first of all, this is the relevant code of my client:
public Flux<Item> collectData(final SearchQuery query) {
    final var iteration = new int[]{0};
    return invoke(query, 0)
            .expand(res -> this.handleResponse(res, query, iteration))
            .flatMap(response -> Flux.fromIterable(response.collectItems()));
}

private Mono<ApiResponse> handleResponse(final ApiResponse response, final SearchQuery searchQuery, final int[] iteration) {
    return hasNextPage(response) ? invoke(searchQuery, calculateOffset(++iteration[0])) : Mono.empty();
}

private Mono<ApiResponse> invoke(final SearchQuery query, final int offset) {
    final var url = offset == 0 ? query.toUrlParams() : query.toUrlParamsWithOffset(offset);
    return doInvoke(url).onErrorReturn(ApiResponse.emptyResponse());
}

private Mono<ApiResponse> doInvoke(final String endpoint) {
    return webClient.get()
            .uri(endpoint)
            .retrieve()
            .bodyToMono(ApiResponse.class);
}
And here is my service that is using this client
private final Map<String, Item> initialItems = new ConcurrentHashMap<>();

void work() {
    final var executorService = Executors.newSingleThreadScheduledExecutor();
    queryRepository.getSearchQueries().forEach(query ->
            reactiveClient.collectData(query).subscribe(item -> initialItems.put(item.getId(), item)));
    executorService.scheduleAtFixedRate(() -> {
        if (isReady()) {
            queryRepository.getSearchQueries().forEach(query ->
                    reactiveClient.collectData(query).subscribe(this::process));
        }
    }, 0, 3, TimeUnit.SECONDS);
}

/**
 * If after a 2-second sleep the size of initialItems remains the same,
 * that most likely means the initial population phase is over,
 * and we can proceed with further data processing.
 **/
private boolean isReady() {
    try {
        final var snapshotSize = initialItems.size();
        Thread.sleep(2000);
        return snapshotSize == initialItems.size();
    } catch (Exception e) {
        return false;
    }
}
I think the code speaks for itself: I just want to finish the first phase, the initial data population, and then start processing all incoming data.
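A more idiomatic reactive shape for this (a sketch, reusing the collectData, queryRepository, initialItems, and process names from the question) is to complete the initial phase by merging the per-query fluxes and only then switch to an interval-driven repeat phase, instead of polling the map size:

// Phase 1: populate initialItems; .then() completes only when every
// query's Flux has completed, which replaces the sleep-based isReady() check.
Flux.fromIterable(queryRepository.getSearchQueries())
        .flatMap(reactiveClient::collectData)
        .doOnNext(item -> initialItems.put(item.getId(), item))
        .then()
        // Phase 2: once phase 1 is done, re-run all queries every 3 seconds.
        .thenMany(Flux.interval(Duration.ofSeconds(3))
                .flatMap(tick -> Flux.fromIterable(queryRepository.getSearchQueries())
                        .flatMap(reactiveClient::collectData)))
        .subscribe(this::process);

If a round of calls can take longer than the interval, concatMap on the interval instead of flatMap would prevent overlapping rounds.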

Why is db connection closed after trying and failing to get a lock with spring-data-jpa?

So I would like to wrap a PessimisticLockingFailureException that gets thrown in a JPA repo when trying to get a lock for an entity that is already locked, and handle the wrapped exception in my exception handlers.
But it seems that when Spring tries to end the transaction, the connection is already closed, and Spring throws a new exception that overwrites the exception I would like to see.
In the logs I get "Application exception overridden by rollback exception", and it is this I would like to avoid. (The cause of the rollback exception is that "Connection is closed".)
Is there a solution to this? Or am I doing something wrong?
(Here's some pseudo code of what I'm doing:)
String restControllerMethod(String args) {
    try {
        return service.serviceMethod(args);
    } catch (Exception e1) {
        throw e1; // org.springframework.orm.jpa.JpaSystemException caused by org.hibernate.TransactionException caused by java.sql.SQLException
    }
}

@Transactional
String serviceMethod(String args) {
    Entity entity;
    try {
        entity = repo.repoFindMethod(args);
    } catch (Exception e2) {
        throw new WrappingException(e2); // org.springframework.dao.PessimisticLockingFailureException caused by org.hibernate.PessimisticLockException
    }
    // do some processing with entity
    return result;
}
@Lock(LockModeType.PESSIMISTIC_READ)
String repoFindMethod(String args);
I'm using spring-boot-starter-parent 2.3.2.RELEASE with spring-boot-starter-web, spring-boot-starter-data-jpa, and an embedded H2 db.
Fixed this by adding a com.zaxxer.hikari.SQLExceptionOverride implementation and pointing spring.datasource.hikari.exception-override-class-name to it.
This causes Hikari not to close the connection when the db throws an exception with the specified error code.
I've also added @QueryHints({@QueryHint(name = "javax.persistence.lock.timeout", value = "0")}) to the locking query, since default lock wait times can be vendor specific.
The issue with this solution is that it is vendor specific (both for H2 and Hikari). And not all vendors support a custom timeout for obtaining locks (H2, for example, does not support this, but it matters less since its timeout is very short anyway).
Example of my solution (for H2):
spring.datasource.hikari.exception-override-class-name=com.example.H2SQLExceptionOverride

public class H2SQLExceptionOverride implements SQLExceptionOverride {
    private static final Logger logger = LoggerFactory.getLogger(H2SQLExceptionOverride.class);
    public static final int LOCK_TIMEOUT_ERROR_CODE = 50200;

    @java.lang.Override
    public Override adjudicate(SQLException sqlException) {
        if (sqlException.getErrorCode() == LOCK_TIMEOUT_ERROR_CODE) {
            logger.debug("Diverting from default hikari impl and continuing transaction with errorCode: "
                    + sqlException.getErrorCode() + " and sqlState: " + sqlException.getSQLState());
            return Override.DO_NOT_EVICT;
        }
        return Override.CONTINUE_EVICT;
    }
}
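For completeness, the locking query with the zero-wait hint mentioned above would look roughly like this (a sketch based on the pseudo code names from the question):

// Fail fast instead of waiting for the lock; the resulting
// PessimisticLockingFailureException can then be wrapped and handled.
@Lock(LockModeType.PESSIMISTIC_READ)
@QueryHints({@QueryHint(name = "javax.persistence.lock.timeout", value = "0")})
String repoFindMethod(String args);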

the use of threads and the java Future interface in AWS Lambda

I want to create an AWS Lambda function in Java that writes to a database in Firestore. The short story is that, while the code does what it should when I execute it on my own computer using NetBeans (the truth is that it works most of the time, but not always, maybe due to problems with my internet connection), nothing at all happens when I deploy it as a Lambda function and invoke it. I suspect that this has less to do with Firestore itself than with how AWS Lambda handles asynchronous operations.
Now to the details!
As a simple example, the method that writes to the Firestore object db reads
public static void writeFirestore(Firestore db){
    try {
        DateTime now = DateTime.now();
        String time = now.toString();
        Map<String, String> data = new HashMap<>();
        data.put("time", time);
        String collTitle = "Notebook";
        String docTitle = "Document: " + time;
        db.collection(collTitle).document(docTitle).set(data);
        System.out.println("wrote to Firestore");
    } catch (Exception e) {
        System.out.println("Could not write to db: " + e.toString());
    }
}
Now, as it takes some time to connect to Firestore and initialize db, I want to make sure that db is not passed as an argument into writeFirestore() before it has been properly retrieved. So I define a version of db in the form of a Future object, using ExecutorService, and then retrieve the object db with the get() method. For this, I define the class TaskRunner:
public class TaskRunner {
    ExecutorService executor;

    public TaskRunner(){
        executor = Executors.newSingleThreadExecutor();
    }

    public static interface Callback<T>{
        public void onCallback(T result);
    }

    public <T> void executeAsync(Callable<T> callable, Callback<T> callback) throws Exception{
        try {
            Future<T> future = executor.submit(callable);
            T result = future.get();
            if (result != null) {
                System.out.println("result is not null; applying callback...");
                callback.onCallback(result);
            } else {
                System.out.println("result is null");
            }
        } catch (Exception e) {
            System.out.println("Problem running executeAsync: " + e.toString());
        }
    }
}
Writing the example document to my fixed database db now goes as follows:
I define the class FirestoreCreator that implements Callable with the purpose of retrieving the Firestore object db:
public static class FirestoreCreator implements Callable<Firestore>{
    @Override
    public Firestore call() throws Exception {
        String projectId = "myProjectId";
        GoogleCredentials credentials =
                GoogleCredentials.fromStream(new FileInputStream("myCredentialsFile.json"));
        FirestoreOptions firestoreOptions = FirestoreOptions.getDefaultInstance()
                .toBuilder()
                .setProjectId(projectId)
                .setCredentials(credentials)
                .build();
        Firestore db = firestoreOptions.getService();
        return db;
    }
}
I implement the TaskRunner.Callback interface using writeFirestore().
I create a TaskRunner object, taskRunner, and call its executeAsync() method with the above two objects as parameters.
These three steps are collected in the final method testUpdateFirestoreInterface() that does the job:
public static void testUpdateFirestoreInterface(){
    FirestoreCreator fsCreator = new FirestoreCreator();
    TaskRunner.Callback<Firestore> updateCallback = new TaskRunner.Callback<Firestore>() {
        @Override
        public void onCallback(Firestore result) {
            writeFirestore(result);
        }
    };
    TaskRunner taskRunner = new TaskRunner();
    try {
        taskRunner.executeAsync(fsCreator, updateCallback);
    } catch (Exception ex) {
        System.out.println("Failed to run executeAsync");
    }
}
As I already mentioned in the introduction, the code works (most of the time) when I run it on my computer, but not at all in AWS Lambda. No exception is thrown, and yet no document is written to Firestore.
The discussion about threads in AWS Lambda (https://dzone.com/articles/multi-threaded-programming-with-aws-lambda) made me suspect that the reason is that some thread spawned via the ExecutorService is not being handled properly.
Does anyone know what goes wrong and what a solution could look like?
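One detail worth checking (an assumption based on the Firestore Java client, not something confirmed in the question): set() is asynchronous and returns an ApiFuture<WriteResult>. If the Lambda handler returns before that future completes, the execution environment is frozen and the pending write may never be flushed, which would match the "no exception, no document" symptom. Blocking on the future before returning would rule this out:

// Variant of writeFirestore that waits for the write to be acknowledged
// before the Lambda handler can return.
public static void writeFirestore(Firestore db) {
    try {
        String time = DateTime.now().toString();
        Map<String, String> data = new HashMap<>();
        data.put("time", time);
        ApiFuture<WriteResult> future =
                db.collection("Notebook").document("Document: " + time).set(data);
        WriteResult result = future.get(); // block until Firestore confirms the write
        System.out.println("wrote to Firestore at " + result.getUpdateTime());
    } catch (Exception e) {
        System.out.println("Could not write to db: " + e);
    }
}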

#Transactional method insert value on exception and multithread wildfly CDI

I have a method in a CDI bean which is transactional; on error it creates an entry in the database with the exception message. This method can be called by a REST endpoint, potentially from multiple threads at the same time.
We have a SQL constraint to avoid duplicates in the database.
@Transactional
public RegistrationRuleStatus performCheck(RegistrationRule rule, User user) {
    try {
        // check if rule is dependent on other rules and, if all proved, perform check
        List<RegistrationRule> rules = rule.getRuleParentDependencies();
        boolean parentDependenciesAreProved = true;
        if (!CollectionUtils.isEmpty(rules)) {
            parentDependenciesAreProved = ruleDao.areParentDependenciesProved(rule, user.getId());
        }
        if (parentDependenciesAreProved) {
            Object service = CDI.current().select(Object.class, new NamedAnnotation(rule.getProvider().name())).get();
            Method method = service.getClass().getMethod(rule.getProviderType().getMethod(), Long.class, RegistrationRule.class);
            return (RegistrationRuleStatus) method.invoke(service, user.getId(), rule);
        } else {
            RegistrationRuleStatus status = statusDao.getStatusByUserAndRule(user, rule);
            if (status == null) {
                status = new RegistrationRuleStatus(user, rule, RegistrationActionStatus.START, new Date());
                statusDao.create(status);
            }
            return status;
        }
    } catch (Exception e) {
        LOGGER.error("could not perform check {} for provider {}", rule.getProviderType().name(), rule.getProvider().name(), e.getCause() != null ? e.getCause() : e);
        return statusDao.createErrorStatus(user, rule, e.getCause() != null ? e.getCause().getMessage() : e.getMessage());
    }
}
The createErrorStatus method:
@Transactional
public RegistrationRuleStatus createErrorStatus(User user, RegistrationRule rule, String message) {
    RegistrationRuleStatus status = getStatusByUserAndRule(user, rule);
    if (status == null) {
        status = new RegistrationRuleStatus(user, rule, RegistrationActionStatus.ERROR, new Date());
        status.setErrorCode(CommonPropertyResolver.getMicroServiceErrorCode());
        status.setErrorMessage(message);
        create(status);
    } else {
        status.setStatus(RegistrationActionStatus.ERROR);
        status.setStatusDate(new Date());
        status.setErrorCode(CommonPropertyResolver.getMicroServiceErrorCode());
        status.setErrorMessage(message);
        update(status);
    }
    return status;
}
The problem is that the method is called twice at the same time, and the error recorded is a DuplicateException, which we don't want. We verify at the beginning whether the object already exists, but I think both calls run that check at exactly the same time.
Java 8 / WildFly / CDI / JPA / EclipseLink
Any ideas?
I'd suggest considering the following approaches:
1) Implement retry logic (see the sketch after this list). Catch the exception and analyze it. If it indicates an unexpected duplicate (like you described), don't treat it as an error; just repeat the method call. Your code will now behave differently: it will notice that a record already exists and will not create a duplicate.
2) Use isolation level SERIALIZABLE. Then within a single transaction you will "see" consistent behaviour: if a select operation hasn't found a particular record, then until the end of that transaction no other transaction can insert such a record, and there will be no duplicate-related exception. But the price is that the whole table is effectively locked for each such transaction, which can degrade application performance considerably.
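A minimal sketch of option 1, assuming the createErrorStatus method above and that the duplicate surfaces as a javax.persistence.PersistenceException wrapping the constraint violation (the exact type depends on the JPA provider, and in a container-managed setup the retry may need its own transaction, e.g. REQUIRES_NEW, so it does not run inside the already-failed one):

// Retry once: if a concurrent thread won the race and inserted the row,
// the second call finds it via getStatusByUserAndRule() and updates instead.
public RegistrationRuleStatus createErrorStatusWithRetry(User user, RegistrationRule rule, String message) {
    try {
        return createErrorStatus(user, rule, message);
    } catch (PersistenceException e) {
        // Inspect the cause chain for your provider's constraint-violation
        // exception here rather than retrying blindly on any failure.
        return createErrorStatus(user, rule, message);
    }
}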

Rxjava2 + Retrofit2 + Android. Best way to do hundreds of network calls

I have an app with a big button that allows the user to sync all their data at once to the cloud, and a re-sync feature that allows them to send all their data again (300+ entries).
I am using RxJava 2 and Retrofit 2. I have my unit test working with a single call; however, I need to make N network calls.
What I want to avoid is having the observable call the next item in a queue. I am at the point where I need to implement my Runnable. I have seen a bit about Maps, but I have not seen anyone use one as a queue. Also, I want to avoid having one item fail and it being reported back as ALL items failing, like the zip feature would do. Should I just write the nasty manager class that keeps track of a queue, or is there a cleaner way to send several hundred items?
NOTE: THE SOLUTION CANNOT DEPEND ON JAVA 8 / LAMBDAS. That has proved to be way more work than is justified.
Note: all items are the same object.
@Test
public void test_Upload() {
    TestSubscriber<Record> testSubscriber = new TestSubscriber<>();
    ClientSecureDataToolKit clientSecureDataToolKit = ClientSecureDataToolKit.getClientSecureDataKit();
    clientSecureDataToolKit.putUserDataToSDK(mPayloadSecureDataToolKit).subscribe(testSubscriber);
    testSubscriber.awaitTerminalEvent();
    testSubscriber.assertNoErrors();
    testSubscriber.assertValueCount(1);
    testSubscriber.assertCompleted();
}
My helper to gather and send all my items
public class SecureDataToolKitHelper {
    private final static String TAG = "SecureDataToolKitHelper";
    private final static SimpleDateFormat timeStampSimpleDateFormat =
            new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");

    public static void uploadAll(Context context, RuntimeExceptionDao<EventModel, UUID> eventDao) {
        List<EventModel> eventModels = eventDao.queryForAll();
        QueryBuilder<EventModel, UUID> eventsQuery = eventDao.queryBuilder();
        String[] columns = {...};
        eventsQuery.selectColumns(columns);
        try {
            List<EventModel> models;
            models = eventsQuery.orderBy("timeStamp", false).query();
            if (models == null || models.size() == 0) {
                return;
            }
            ArrayList<PayloadSecureDataToolKit> toSendList = new ArrayList<>();
            for (EventModel eventModel : models) {
                try {
                    PayloadSecureDataToolKit payloadSecureDataToolKit = new PayloadSecureDataToolKit();
                    if (eventModel != null) {
                        // map my items ... not shown
                        toSendList.add(payloadSecureDataToolKit);
                    }
                } catch (Exception e) {
                    Log.e(TAG, "Error adding payload! " + e + " ..... Skipping entry");
                }
            }
            doAllNetworkCalls(toSendList);
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }
}
my Retrofit stuff
public class ClientSecureDataToolKit {
    private static ClientSecureDataToolKit mClientSecureDataToolKit;
    private static Retrofit mRetrofit;

    private ClientSecureDataToolKit(){
        mRetrofit = new Retrofit.Builder()
                .baseUrl(Utilities.getSecureDataToolkitURL())
                .addCallAdapterFactory(RxJavaCallAdapterFactory.create())
                .addConverterFactory(GsonConverterFactory.create())
                .build();
    }

    public static ClientSecureDataToolKit getClientSecureDataKit(){
        if (mClientSecureDataToolKit == null) {
            mClientSecureDataToolKit = new ClientSecureDataToolKit();
        }
        return mClientSecureDataToolKit;
    }

    public Observable<Record> putUserDataToSDK(PayloadSecureDataToolKit payloadSecureDataToolKit){
        InterfaceSecureDataToolKit interfaceSecureDataToolKit = mRetrofit.create(InterfaceSecureDataToolKit.class);
        Observable<Record> observable = interfaceSecureDataToolKit.putRecord(NetworkUtils.SECURE_DATA_TOOL_KIT_AUTH, payloadSecureDataToolKit);
        return observable;
    }
}
public interface InterfaceSecureDataToolKit {
    @Headers({
            "Content-Type: application/json"
    })
    @POST("/api/create")
    Observable<Record> putRecord(@Query("api_token") String api_token, @Body PayloadSecureDataToolKit payloadSecureDataToolKit);
}
Update: I have been trying to apply this answer without much luck, and I am running out of steam for tonight. I am trying to implement this as a unit test, like I did for the original call for one item. It looks like something is not right with the use of lambdas, maybe:
public class RxJavaBatchTest {
    Context context;
    final static List<EventModel> models = new ArrayList<>();

    @Before
    public void before() throws Exception {
        context = new MockContext();
        // manually set all my eventmodel data here.. not shown
        EventModel eventModel = new EventModel();
        eventModel.setSampleId("SAMPLE0");
        models.add(eventModel);
        eventModel = new EventModel();
        eventModel.setSampleId("SAMPLE1");
        models.add(eventModel);
        eventModel = new EventModel();
        eventModel.setSampleId("SAMPLE3");
        models.add(eventModel);
    }

    @Test
    public void testSetupData() {
        Assert.assertEquals(3, models.size());
    }

    @Test
    public void testBatchSDK_Upload() {
        Callable<List<EventModel>> callable = new Callable<List<EventModel>>() {
            @Override
            public List<EventModel> call() throws Exception {
                return models;
            }
        };
        Observable.fromCallable(callable)
                .flatMapIterable(models -> models)
                .flatMap(eventModel -> {
                    PayloadSecureDataToolKit payloadSecureDataToolKit = new PayloadSecureDataToolKit(eventModel);
                    return doNetworkCall(payloadSecureDataToolKit) // I assume this is just my normal network call.. I am getting incompatibility errors when I apply a testsubscriber...
                            .subscribeOn(Schedulers.io());
                }, true, 1);
    }

    private Observable<Record> doNetworkCall(PayloadSecureDataToolKit payloadSecureDataToolKit) {
        ClientSecureDataToolKit clientSecureDataToolKit = ClientSecureDataToolKit.getClientSecureDataKit();
        Observable observable = clientSecureDataToolKit.putUserDataToSDK(payloadSecureDataToolKit); //.subscribe((Observer<? super Record>) testSubscriber);
        return observable;
    }
}
The result is:
An exception has occurred in the compiler (1.8.0_112-release). Please file a bug against the Java compiler via the Java bug reporting page (http://bugreport.java.com) after checking the Bug Database (http://bugs.java.com) for duplicates. Include your program and the following diagnostic in your report. Thank you.
com.sun.tools.javac.code.Symbol$CompletionFailure: class file for java.lang.invoke.MethodType not found
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':app:compile<MyBuildFlavorhere>UnitTestJavaWithJavac'.
> Compilation failed; see the compiler error output for details.
Edit: No longer trying lambdas. Even after setting up the path on my Mac, pointing JAVA_HOME to 1.8, etc., I could not get it to work. If this were a newer project I would push harder. However, as this is an inherited Android application written by web developers trying Android, it is just not a great option, nor is it worth the time sink to get it working. I am already days into this assignment instead of the half day it should have taken.
I could not find a good non-lambda flatMap example. I tried it myself and it was getting messy.
If I understand you correctly, you want to make your calls in parallel?
The rx-y way of doing this would be something like:
Observable.fromCallable(() -> eventsQuery.orderBy("timeStamp", false).query())
        .flatMapIterable(models -> models)
        .flatMap(model -> {
            // map your model
            // avoid throwing exceptions in a chain; just return Observable.error(e) if you really need to
            // try to wrap your methods that throw exceptions in an Observable via Observable.fromCallable()
            return doNetworkCall(someParameter)
                    .subscribeOn(Schedulers.io());
        }, true /* because you don't want to terminate a stream if an error occurs */, maxConcurrent /* number of concurrent calls, typically available processors + 1 */)
        .subscribe(result -> {/* handle result */}, error -> {/* handle error */});
In your ClientSecureDataToolKit, move this part into the constructor:
InterfaceSecureDataToolKit interfaceSecureDataToolKit = mRetrofit.create(InterfaceSecureDataToolKit.class);
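Since the question rules out lambdas, here is the same chain written with anonymous inner classes (a sketch using the RxJava 2 types io.reactivex.functions.Function and io.reactivex.functions.Consumer and the names from the question; the concurrency level 4 is an arbitrary choice):

Observable.fromCallable(new Callable<List<EventModel>>() {
    @Override
    public List<EventModel> call() throws Exception {
        return eventsQuery.orderBy("timeStamp", false).query();
    }
})
.flatMapIterable(new Function<List<EventModel>, Iterable<EventModel>>() {
    @Override
    public Iterable<EventModel> apply(List<EventModel> models) {
        return models;
    }
})
.flatMap(new Function<EventModel, Observable<Record>>() {
    @Override
    public Observable<Record> apply(EventModel model) {
        return doNetworkCall(new PayloadSecureDataToolKit(model))
                .subscribeOn(Schedulers.io());
    }
}, true, 4) // delayErrors = true: one failed upload doesn't cancel the rest; 4 concurrent calls
.subscribe(new Consumer<Record>() {
    @Override
    public void accept(Record record) {
        // handle each successful upload
    }
}, new Consumer<Throwable>() {
    @Override
    public void accept(Throwable error) {
        // handle the (delayed) errors
    }
});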

Categories

Resources