I have written a custom Flume sink named MySink; its process() method is shown in the first snippet below. I am getting an IllegalStateException as follows (the detailed stack trace is in the second snippet below):
Caused by: java.lang.IllegalStateException: begin() called when transaction is OPEN!
QUESTION: I followed KafkaSink and similar existing sink implementations in the Flume code base while writing the process() method, and I am applying the very same transaction-handling logic as those existing sinks. Could you please tell me what is wrong in my process() method? How can I fix the problem?
PROCESS method (I have marked where the exception is thrown):
@Override
public Status process() throws EventDeliveryException {
    Status status = Status.READY;
    Channel ch = getChannel();
    Transaction txn = ch.getTransaction();
    Event event = null;
    try {
        LOG.info(getName() + " BEFORE txn.begin()");
        //!!!! EXCEPTION IS THROWN in the following LINE !!!!!!
        txn.begin();
        LOG.info(getName() + " AFTER txn.begin()");
        LOG.info(getName() + " BEFORE ch.take()");
        event = ch.take();
        LOG.info(getName() + " AFTER ch.take()");
        if (event == null) {
            // No event found, request back-off semantics from the sink runner
            LOG.info(getName() + " - EVENT is null! ");
            return Status.BACKOFF;
        }
        Map<String, String> keyValueMapInTheMessage = event.getHeaders();
        if (!keyValueMapInTheMessage.isEmpty()) {
            mDBWriter.insertDataToDB(keyValueMapInTheMessage);
        }
        LOG.info(getName() + " - EVENT: " + EventHelper.dumpEvent(event));
        if (txn != null) {
            txn.commit();
        }
    } catch (Exception ex) {
        String errMsg = getName() + " - Failed to publish events. Exception: ";
        LOG.info(errMsg);
        status = Status.BACKOFF;
        if (txn != null) {
            try {
                txn.rollback();
            } catch (Exception e) {
                LOG.info(getName() + " - EVENT: " + EventHelper.dumpEvent(event));
                throw Throwables.propagate(e);
            }
        }
        throw new EventDeliveryException(errMsg, ex);
    } finally {
        if (txn != null) {
            txn.close();
        }
    }
    return status;
}
EXCEPTION STACK:
2016-01-22 14:01:15,440 (SinkRunner-PollingRunner-DefaultSinkProcessor) [ERROR - org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:160)] Unable to deliver event. Exception follows.
org.apache.flume.EventDeliveryException: MySink - Failed to publish events. Exception:
    at com.XYZ.flume.maprdb.MySink.process(MySink.java:116)
    at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
    at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.IllegalStateException: begin() called when transaction is OPEN!
    at com.google.common.base.Preconditions.checkState(Preconditions.java:145)
    at org.apache.flume.channel.BasicTransactionSemantics.begin(BasicTransactionSemantics.java:131)
    at com.XYZ.flume.maprdb.MySink.process(MySink.java:82)
    ... 3 more
if (event == null) {
    // No event found, request back-off semantics from the sink runner
    LOG.info(getName() + " - EVENT is null! ");
    return Status.BACKOFF;
}
This code causes the problem. When the event is null, you just return; the correct way is to commit (or roll back) first. A transaction must go through three stages: begin, then commit or rollback, and finally close. We can look at the following source code to see how this is implemented.
BasicChannelSemantics:
public Transaction getTransaction() {
    if (!initialized) {
        synchronized (this) {
            if (!initialized) {
                initialize();
                initialized = true;
            }
        }
    }
    BasicTransactionSemantics transaction = currentTransaction.get();
    if (transaction == null || transaction.getState().equals(
            BasicTransactionSemantics.State.CLOSED)) {
        transaction = createTransaction();
        currentTransaction.set(transaction);
    }
    return transaction;
}
When currentTransaction is null or its state is CLOSED, the channel creates a new transaction; otherwise it returns the old one. That is why this exception does not happen immediately. The first time process() executes, you get a new transaction, but the event is null, so you just return and the finally block closes; the close() call does not succeed because of how it is implemented (the transaction is still OPEN). So the second time process() executes, you don't get a new transaction, you get the old one. The following code shows how the transaction is implemented.
BasicTransactionSemantics:
protected BasicTransactionSemantics() {
    state = State.NEW;
    initialThreadId = Thread.currentThread().getId();
}

public void begin() {
    Preconditions.checkState(Thread.currentThread().getId() == initialThreadId,
            "begin() called from different thread than getTransaction()!");
    Preconditions.checkState(state.equals(State.NEW),
            "begin() called when transaction is " + state + "!");
    try {
        doBegin();
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        throw new ChannelException(e.toString(), e);
    }
    state = State.OPEN;
}

public void commit() {
    Preconditions.checkState(Thread.currentThread().getId() == initialThreadId,
            "commit() called from different thread than getTransaction()!");
    Preconditions.checkState(state.equals(State.OPEN),
            "commit() called when transaction is %s!", state);
    try {
        doCommit();
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        throw new ChannelException(e.toString(), e);
    }
    state = State.COMPLETED;
}

public void rollback() {
    Preconditions.checkState(Thread.currentThread().getId() == initialThreadId,
            "rollback() called from different thread than getTransaction()!");
    Preconditions.checkState(state.equals(State.OPEN),
            "rollback() called when transaction is %s!", state);
    state = State.COMPLETED;
    try {
        doRollback();
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        throw new ChannelException(e.toString(), e);
    }
}

public void close() {
    Preconditions.checkState(Thread.currentThread().getId() == initialThreadId,
            "close() called from different thread than getTransaction()!");
    Preconditions.checkState(
            state.equals(State.NEW) || state.equals(State.COMPLETED),
            "close() called when transaction is %s"
            + " - you must either commit or rollback first", state);
    state = State.CLOSED;
    doClose();
}
When a transaction is created, its state is NEW.
When begin() is called, the state must be NEW; it then becomes OPEN.
When commit() or rollback() is called, the state must be OPEN; it then becomes COMPLETED.
When close() is called, the state must be NEW or COMPLETED; it then becomes CLOSED.
So when you close the transaction correctly, the next call gets a new transaction; otherwise you get back the old one, whose state is no longer NEW, so you cannot call transaction.begin() on it: begin() needs a fresh transaction.
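The fix, then, is to complete the transaction on the empty-take path as well. A minimal sketch of the corrected null-event branch, assuming the rest of your process() method stays as posted: commit the empty transaction before returning, so close() in the finally block succeeds and the next call gets a fresh transaction.

if (event == null) {
    // No event found: commit the empty take-transaction so it reaches
    // COMPLETED and close() in the finally block can succeed.
    LOG.info(getName() + " - EVENT is null! ");
    txn.commit();
    return Status.BACKOFF;
}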
Related
I needed to migrate the Kinesis client library to version 2.2.11, so I followed the tutorial: https://docs.aws.amazon.com/streams/latest/dev/kcl-migration.html
I need to run multiple instances of my consumer app, so each of them needs a unique application name in order to have a separate lease table in DynamoDB.
When initializing the consumer, Kinesis runs DynamoDBLeaseRefresher.createLeaseTableIfNotExists, which checks whether a table needs to be created for this application name and creates one if it cannot be found.
So two operations are performed:
DescribeTable - it returns the table info or throws a ResourceNotFoundException;
if needed - CreateTable.
The problem for me is with the DescribeTable method. When I look for an existing table, it is returned with no problem. But when I look for a non-existent table, it throws the ResourceNotFoundException - so far, so good. Unfortunately, the exception then gets wrapped and becomes:
java.util.concurrent.CompletionException: software.amazon.awssdk.core.exception.SdkClientException: Unable to execute HTTP request: software.amazon.awssdk.awscore.exception.AwsServiceException$Builder.extendedRequestId(Ljava/lang/String;)Lsoftware/amazon/awssdk/awscore/exception/AwsServiceException$Builder;
and the app, expecting a ResourceNotFoundException, gets something different instead and crashes.
The wrapped exception message is a bit misleading ("Unable to execute HTTP request"), since the request was performed and returned the proper message: "Resource not found".
The funny thing is that it sometimes works: the exception does not get wrapped, the CreateTable operation is performed, and the consumer starts properly.
For now I have made a workaround where I just create the table before the initialization of the LeaseCoordinator, so it always finds an existing table.
Here is my code:
public KinesisStreamReaderService(String streamName, String applicationName, String regionName) {
    KinesisAsyncClient kinesisClient = KinesisAsyncClient.builder()
            .credentialsProvider(EnvironmentVariableCredentialsProvider.create())
            .region(Region.of(regionName))
            .httpClientBuilder(createHttpClientBuilder())
            .build();
    DynamoDbAsyncClient dynamoClient = DynamoDbAsyncClient.builder().region(Region.of(regionName)).build();
    CloudWatchAsyncClient cloudWatchClient = CloudWatchAsyncClient.builder().region(Region.of(regionName)).build();

    // if (!dynamoDbTableExists(dynamoClient, applicationName)) {
    //     createDynamoDbTable(dynamoClient, applicationName);
    // }

    ConfigsBuilder configsBuilder = new ConfigsBuilder(streamName, applicationName, kinesisClient,
            dynamoClient, cloudWatchClient, workerId(), KinesisReaderProcessor::new);
    configsBuilder.retrievalConfig().initialPositionInStreamExtended(
            InitialPositionInStreamExtended.newInitialPosition(InitialPositionInStream.LATEST));

    scheduler = new Scheduler(
            configsBuilder.checkpointConfig(),
            configsBuilder.coordinatorConfig(),
            configsBuilder.leaseManagementConfig(),
            configsBuilder.lifecycleConfig(),
            configsBuilder.metricsConfig(),
            configsBuilder.processorConfig(),
            configsBuilder.retrievalConfig().retrievalSpecificConfig(new PollingConfig(streamName, kinesisClient))
    );
}

private void createDynamoDbTable(DynamoDbAsyncClient dynamoClient, String applicationName) {
    log.info("Creating new lease table: {}", applicationName);
    CompletableFuture<CreateTableResponse> createTableFuture = dynamoClient
            .createTable(CreateTableRequest.builder()
                    .provisionedThroughput(ProvisionedThroughput.builder().readCapacityUnits(10L).writeCapacityUnits(10L).build())
                    .tableName(applicationName)
                    .keySchema(KeySchemaElement.builder().attributeName("leaseKey").keyType(KeyType.HASH).build())
                    .attributeDefinitions(AttributeDefinition.builder().attributeName("leaseKey").attributeType(
                            ScalarAttributeType.S).build())
                    .build());
    try {
        CreateTableResponse createTableResponse = createTableFuture.get();
        log.debug("Created new lease table: {}", createTableResponse.tableDescription().tableName());
    } catch (InterruptedException | ExecutionException e) {
        throw new DataStreamException(e.getMessage(), e);
    }
}

private boolean dynamoDbTableExists(DynamoDbAsyncClient dynamoClient, String tableName) {
    CompletableFuture<DescribeTableResponse> describeTableResponseCompletableFutureNew = dynamoClient
            .describeTable(DescribeTableRequest.builder()
                    .tableName(tableName).build());
    try {
        DescribeTableResponse describeTableResponseNew = describeTableResponseCompletableFutureNew.get();
        return nonNull(describeTableResponseNew);
    } catch (InterruptedException | ExecutionException e) {
        log.info(e.getMessage(), e);
    }
    return false;
}

private static String workerId() {
    String workerId;
    try {
        workerId = format("%s_%s", getLocalHost().getCanonicalHostName(), randomUUID().toString());
    } catch (UnknownHostException e) {
        workerId = randomUUID().toString();
    }
    return workerId;
}

@Override
public void read(Consumer<String> consumer) {
    this.consumer = consumer;
    scheduler.run();
}

private class KinesisReaderProcessor implements ShardRecordProcessor {
    private String shardId;

    @Override
    public void initialize(InitializationInput initializationInput) {
        this.shardId = initializationInput.shardId();
        log.info("Initializing record processor for shard: {}", shardId);
    }

    @Override
    public void processRecords(ProcessRecordsInput processRecordsInput) {
        log.debug("Checking shard {} for new records", shardId);
        List<KinesisClientRecord> records = processRecordsInput.records();
        if (!records.isEmpty()) {
            log.debug("Processing {} records from kinesis stream shard {}", records.size(), shardId);
            records.forEach(record -> {
                String json = UTF_8.decode(record.data()).toString();
                log.info(json);
                consumer.accept(json);
            });
        }
    }

    @Override
    public void leaseLost(LeaseLostInput leaseLostInput) {
        log.info("Record processor has lost lease, terminating");
    }

    @Override
    public void shardEnded(ShardEndedInput shardEndedInput) {
        try {
            shardEndedInput.checkpointer().checkpoint();
        } catch (ShutdownException | InvalidStateException e) {
            log.error(e.getMessage(), e);
        }
    }

    @Override
    public void shutdownRequested(ShutdownRequestedInput shutdownRequestedInput) {
        try {
            shutdownRequestedInput.checkpointer().checkpoint();
        } catch (ShutdownException | InvalidStateException e) {
            log.error(e.getMessage(), e);
        }
    }
}
}
Am I missing some configuration for the scheduler or something? Why does it sometimes work?
Thanks
Edit:
The problem is this block of code in DynamoDBLeaseRefresher.tableStatus(), which is invoked to check whether the table exists:
DescribeTableResponse result;
try {
    try {
        result = (DescribeTableResponse) FutureUtils.resolveOrCancelFuture(
                this.dynamoDBClient.describeTable(request), this.dynamoDbRequestTimeout);
    } catch (ExecutionException var5) {
        throw exceptionManager.apply(var5.getCause());
    } catch (InterruptedException var6) {
        throw new DependencyException(var6);
    }
} catch (ResourceNotFoundException var7) {
    log.debug("Got ResourceNotFoundException for table {} in leaseTableExists, returning false.", this.table);
    return null;
}
In my case it should get a ResourceNotFoundException if the table is not found, but as I said, the exception gets wrapped into a CompletionException before it reaches the appropriate catch block, and is instead caught in the code here:
catch (ExecutionException var5) {
throw exceptionManager.apply(var5.getCause());
This happens 20 times in a loop while trying to initialize the LeaseCoordinator, which then just stops trying to initialize the connection. (As mentioned above, it works occasionally, but that makes it even stranger to me.)
With my workaround it only needs one try to get initialized.
You don't need to create a lease table manually - DynamoDBLeaseCoordinator will create one on initialization if it does not exist, and wait until it exists:
@Override
public void initialize() throws ProvisionedThroughputException, DependencyException, IllegalStateException {
    final boolean newTableCreated =
            leaseRefresher.createLeaseTableIfNotExists(initialLeaseTableReadCapacity, initialLeaseTableWriteCapacity);
    if (newTableCreated) {
        log.info("Created new lease table for coordinator with initial read capacity of {} and write capacity of {}.",
                initialLeaseTableReadCapacity, initialLeaseTableWriteCapacity);
    }
    // Need to wait for table in active state.
    final long secondsBetweenPolls = 10L;
    final long timeoutSeconds = 600L;
    final boolean isTableActive = leaseRefresher.waitUntilLeaseTableExists(secondsBetweenPolls, timeoutSeconds);
    if (!isTableActive) {
        throw new DependencyException(new IllegalStateException("Creating table timeout"));
    }
}
The issue in your case, I think, is that the table is eventually created, and you should probably check periodically until the table appears - like DynamoDBLeaseCoordinator#initialize() does.
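If you do keep the manual pre-creation workaround, the same "wait until active" idea can be applied there. A rough sketch under that assumption, reusing the DynamoDbAsyncClient and model classes from your code above (waitUntilTableActive is my own helper name, not a KCL API; TableStatus comes from software.amazon.awssdk.services.dynamodb.model):

// Hypothetical helper: poll DescribeTable until the table is ACTIVE or the timeout expires.
private boolean waitUntilTableActive(DynamoDbAsyncClient dynamoClient, String tableName,
                                     long secondsBetweenPolls, long timeoutSeconds) throws InterruptedException {
    long deadline = System.currentTimeMillis() + timeoutSeconds * 1000;
    while (System.currentTimeMillis() < deadline) {
        try {
            DescribeTableResponse response = dynamoClient
                    .describeTable(DescribeTableRequest.builder().tableName(tableName).build())
                    .get();
            if (response.table().tableStatus() == TableStatus.ACTIVE) {
                return true;
            }
        } catch (ExecutionException e) {
            // Most likely a wrapped ResourceNotFoundException: table not created yet, retry.
            log.debug("Lease table {} not ready yet: {}", tableName, e.getMessage());
        }
        Thread.sleep(secondsBetweenPolls * 1000);
    }
    return false;
}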
On acquiring a state machine with stateMachineService, the machine is started even though I passed 'false' as the second parameter.
stateMachine = stateMachineService.acquireStateMachine(id, false)
According to the console output, acquireStateMachine starts the machine.
I'm using DefaultStateMachineService:
@Bean
public StateMachineService<BookingItemState, BookingItemEvent> stateMachineService(
        StateMachineFactory<BookingItemState, BookingItemEvent> stateMachineFactory,
        StateMachineRuntimePersister<BookingItemState, BookingItemEvent, String> stateMachineRuntimePersister) {
    return new DefaultStateMachineService<>(stateMachineFactory, stateMachineRuntimePersister);
}
The issue is in the DefaultStateMachineService class. I suppose that you have configured your SM with the autoStartup property enabled, as below:
@Override
public void configure(StateMachineConfigurationConfigurer<String, String> config) throws Exception {
    config
        .withConfiguration()
        .autoStartup(true);
}
If you call acquireStateMachine, DefaultStateMachineService creates a new SM using stateMachineFactory; because your SM has autoStartup enabled, it starts the new SM and stores it to the DB.
Let's consider the method:
public StateMachine<S, E> acquireStateMachine(String machineId, boolean start) {
    log.info("Acquiring machine with id " + machineId);
    StateMachine<S, E> stateMachine;
    // naive sync to handle concurrency with release
    synchronized (machines) {
        stateMachine = machines.get(machineId);
        if (stateMachine == null) {
            log.info("Getting new machine from factory with id " + machineId);
            stateMachine = stateMachineFactory.getStateMachine(machineId);
            if (stateMachinePersist != null) {
                try {
                    StateMachineContext<S, E> stateMachineContext = stateMachinePersist.read(machineId);
                    stateMachine = restoreStateMachine(stateMachine, stateMachineContext);
                } catch (Exception e) {
                    log.error("Error handling context", e);
                    throw new StateMachineException("Unable to read context from store", e);
                }
            }
            machines.put(machineId, stateMachine);
        }
    }
    // handle start outside of sync as it might take some time and would block other machines acquire
    return handleStart(stateMachine, start);
}
To avoid this issue you can disable the autoStartup option or implement your own custom StateMachineService. But then you have to call stateMachine.start() explicitly.
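For example, a minimal sketch with autoStartup disabled (state and event types taken from your bean definition; the explicit start() call is now your responsibility):

@Override
public void configure(StateMachineConfigurationConfigurer<BookingItemState, BookingItemEvent> config) throws Exception {
    config
        .withConfiguration()
        .autoStartup(false);   // the factory no longer starts machines on creation
}

// ...wherever the machine is acquired:
StateMachine<BookingItemState, BookingItemEvent> stateMachine =
        stateMachineService.acquireStateMachine(id, false);   // returned without being started
stateMachine.start();   // start explicitly, only when you actually want it running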
My app is built upon Spring + SockJS. The main page presents a table of available connections so that the user can monitor them in real time. Each URL monitor can be suspended/resumed separately from the others. The problem is that once you suspend a monitor, you can never resume it, because the ApplicationEvents property of the MonitoringFacade bean suddenly becomes null for that SINGLE entity. For the other entities the listener keeps working perfectly well. Yet when methods are invoked on such a null listener, a NullPointerException is never thrown.
class IndexController implements ApplicationEvents
...
    public IndexController(SimpMessagingTemplate simpMessagingTemplate, MonitoringFacade monitoringFacade) {
        this.simpMessagingTemplate = simpMessagingTemplate;
        this.monitoringFacade = monitoringFacade;
    }

    @PostConstruct
    public void initialize() {
        if (logger.isDebugEnabled()) {
            logger.debug(">>Index controller initialization.");
        }
        monitoringFacade.addDispatcher(this);
    }
...
    @Override
    public void monitorUpdated(String monitorId) {
        if (logger.isDebugEnabled()) {
            logger.debug(">>Sending monitoring data to client with monitor id " + monitorId);
        }
        try {
            ConfigurationDTO config = monitoringFacade.findConfig(monitorId);
            Report report = monitoringFacade.findReport(monitorId);
            ReportReadModel readModel = ReportReadModel.mapFrom(config, report);
            simpMessagingTemplate.convertAndSend("/client/update", readModel);
        } catch (Exception e) {
            logger.log(Level.ERROR, "Exception: ", e);
        }
    }
public class MonitoringFacadeImpl implements MonitoringFacade
...
    private ApplicationEvents dispatcher;

    public void addDispatcher(ApplicationEvents dispatcher) {
        logger.info("Setting up dispatcher");
        this.dispatcher = dispatcher;
    }
...
    @Override
    public void refreshed(RefreshEvent event) {
        final String monitorId = event.getId().getIdentity();
        if (logger.isDebugEnabled()) {
            logger.debug(String.format(">>Refreshing monitoring data with monitor id '%s'", monitorId));
        }
        Configuration refreshedConfig = configurationService.find(monitorId);
        reportingService.compileReport(refreshedConfig, event.getData());
        if (logger.isDebugEnabled()) {
            logger.debug(String.format(">>Notifying monitoring data updated with monitor id '%s'", monitorId) + dispatcher);
        }
        dispatcher.monitorUpdated(monitorId); // here dispatcher has null value... or it's actually not
    }
The refreshed(RefreshEvent event) method successfully receives updates from the Quartz scheduler through the interface and sends them back to the controller.
The question is: how can a singleton-scoped bean have different property values for the different objects it is applied to, and why does such a property become null even though I never set it to null?
UPD:
@MessageMapping("/monitor/{monitorId}/suspend")
public void handleSuspend(@DestinationVariable String monitorId) {
    if (logger.isDebugEnabled()) {
        logger.debug(">>>Handling suspend request for monitor with id " + monitorId);
    }
    try {
        monitoringFacade.disableUrlMonitoring(monitorId);
        monitorUpdated(monitorId); // force client update
    } catch (Exception e) {
        logger.log(Level.ERROR, "Exception: ", e);
    }
}

@MessageMapping("/monitor/{monitorId}/resume")
public void handleResume(@DestinationVariable String monitorId) {
    if (logger.isDebugEnabled()) {
        logger.debug(">>>Handling resume request for monitor with id " + monitorId);
    }
    try {
        monitoringFacade.enableUrlMonitoring(monitorId);
        monitorUpdated(monitorId); // force client update
    } catch (Exception e) {
        logger.log(Level.ERROR, "Exception: ", e);
    }
}
I am using @Aspect to implement retry logic (max_retries = 5) for database stale-connection problems. In this advice I have a ThreadLocal object which keeps track of how many times the logic has retried to get a connection; it is incremented whenever a connection cannot be obtained, to avoid unlimited retries on the stale-connection issue (the maximum number of retries is the constant 5).
But the problem I have is that in this @Aspect class the ThreadLocal never gets incremented, and this causes an endless loop in the code, which of course should stop retrying after the maximum number of retries but never reaches that count and never breaks out of the while loop.
Please let me know if anybody has had this problem with @Aspect and a ThreadLocal object.
I will be happy to share the code.
private static ThreadLocal<Integer> retryCounter = new ThreadLocal<Integer>() {};
private static final String STALE_CONNECTION_EXCEPTION = "com.ibm.websphere.ce.cm.StaleConnectionException";

@Around("service")
public Object retryConnection(ProceedingJoinPoint pjp) throws Throwable {
    if (staleConnectionException == null) {
        return pjp.proceed();
    }
    Throwable exception = null;
    retryCounter.set(new Integer(0));
    while (retryCounter.get() < MAX_TRIES) {
        try {
            return pjp.proceed();
        } catch (AppDataException he) {
            exception = retry(he.getCause());
        } catch (NestedRuntimeException e) {
            exception = retry(e);
        }
    }
    if (exception != null) {
        Logs.error("Stale connection exception occurred, no more retries left", this.getClass(), null);
        logException(pjp, exception);
        throw new AppDataException(exception);
    }
    return null;
}

private Throwable retry(Throwable e) throws Throwable {
    if (e instanceof NestedRuntimeException && ((NestedRuntimeException) e).contains(staleConnectionException)) {
        retryCounter.set(retryCounter.get() + 1);
        LogUtils.log("Stale connection exception occurred, retrying " + retryCounter.get() + " of " + MAX_TRIES, this.getClass());
        return e;
    } else {
        throw e;
    }
}
As mentioned in the comments, I'm not sure why you are using a ThreadLocal... but given that you are, what might be causing the infinite loop is recursive use of this aspect. Run it through a debugger or profile it to see if you are hitting the same aspect in a nested fashion; if the advice re-enters itself, the inner invocation resets the counter with retryCounter.set(new Integer(0)), so the outer loop never sees the count reach MAX_TRIES.
To be honest, looking at your code, I think you would be better off not doing this at all, and instead just configuring connection testing in your connection pool (assuming you are using one): http://pic.dhe.ibm.com/infocenter/wasinfo/v6r1/index.jsp?topic=/com.ibm.websphere.nd.multiplatform.doc/info/ae/ae/tdat_pretestconn.html
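For what it's worth, a plain local variable gives the same bookkeeping without shared state that a nested invocation can clobber. A minimal, simplified sketch under that assumption, reusing the names from your snippet (MAX_TRIES, staleConnectionException, AppDataException, LogUtils):

@Around("service")
public Object retryConnection(ProceedingJoinPoint pjp) throws Throwable {
    Throwable lastException = null;
    // A local counter cannot be reset by a nested invocation of this advice.
    for (int attempt = 1; attempt <= MAX_TRIES; attempt++) {
        try {
            return pjp.proceed();
        } catch (NestedRuntimeException e) {
            if (!e.contains(staleConnectionException)) {
                throw e; // not a stale connection: do not retry
            }
            lastException = e;
            LogUtils.log("Stale connection exception occurred, retrying " + attempt + " of " + MAX_TRIES, getClass());
        }
    }
    throw new AppDataException(lastException);
}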
I am getting a
org.hibernate.TransactionException: nested transactions not supported
    at org.hibernate.engine.transaction.spi.AbstractTransactionImpl.begin(AbstractTransactionImpl.java:152)
    at org.hibernate.internal.SessionImpl.beginTransaction(SessionImpl.java:1395)
    at com.mcruiseon.server.hibernate.ReadOnlyOperations.flush(ReadOnlyOperations.java:118)
Below is the code that throws that exception. I am calling flush from a thread that runs in an infinite loop, waiting until there is data to flush.
public void flush(Object dataStore) throws DidNotSaveRequestSomeRandomError {
    Transaction txD;
    Session session;
    session = currentSession();
    // Below Line 118
    txD = session.beginTransaction();
    txD.begin();
    session.saveOrUpdate(dataStore);
    try {
        txD.commit();
        while (!txD.wasCommitted());
    } catch (ConstraintViolationException e) {
        txD.rollback();
        throw new DidNotSaveRequestSomeRandomError(dataStore, feedbackManager);
    } catch (TransactionException e) {
        txD.rollback();
    } finally {
        // session.flush();
        txD = null;
        session.close();
    }
    // mySession.clear();
}
Edit:
I am calling flush in an independent thread whenever the datastore list contains data. From what I can see, flush is a synchronous call, so ideally flush should not return until the transaction is complete - at the very least, that is what I would expect. Since an independent thread is doing this job, all I care about is that flush is a synchronous operation. Now my question is: is txD.commit() an asynchronous operation? Does it return before the transaction has had a chance to finish? If yes, is there a way to get commit to wait until the transaction completes?
public void run() {
    Object dataStore = null;
    while (true) {
        try {
            synchronized (flushQ) {
                if (flushQ.isEmpty())
                    flushQ.wait();
                if (flushQ.isEmpty()) {
                    continue;
                }
                dataStore = flushQ.removeFirst();
                if (dataStore == null) {
                    continue;
                }
            }
            try {
                flush(dataStore);
            } catch (DidNotSaveRequestSomeRandomError e) {
                e.printStackTrace();
                log.fatal(e);
            }
        } catch (HibernateException e) {
            e.printStackTrace();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}
Edit 2: I added while (!txD.wasCommitted()); (in the code above), and I still get the nested transactions not supported error. In fact, due to this exception a record is not being written to my table either. Does it have something to do with the type of the table? I have INNODB for all my tables.
I finally got the nested transactions not supported error fixed. The changes made to the code are:
if (session.getTransaction() != null
        && session.getTransaction().isActive()) {
    txD = session.getTransaction();
} else {
    txD = session.beginTransaction();
}
// txD = session.beginTransaction();
// txD.begin();
session.saveOrUpdate(dataStore);
try {
    txD.commit();
    while (!txD.wasCommitted());
}
Credit for the above code also goes to Venkat. I did not find HbTransaction, so I just used getTransaction and beginTransaction. It worked.
I also made changes to the Hibernate properties based on advice given here. I added these lines to hibernate.properties. This alone did not solve the issue, but I am leaving it in:
hsqldb.write_delay_millis=0
shutdown=true
You probably already began a transaction before calling this method.
Either this should be part of the enclosing transaction, and you should thus not start another one; or it shouldn't be part of the enclosing transaction, and you should thus open a new session and a new transaction rather than using the current session.
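If you go with the second option, a minimal sketch, assuming a sessionFactory is available in the class (names are illustrative, not taken from your code):

// Open a dedicated session so this flush never joins the caller's transaction.
Session session = sessionFactory.openSession();
Transaction txD = session.beginTransaction();
try {
    session.saveOrUpdate(dataStore);
    txD.commit();
} catch (RuntimeException e) {
    txD.rollback();
    throw e;
} finally {
    session.close();
}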