My app is built on Spring + SockJS. The main page shows a table of available connections so that a user can monitor them in real time. Each URL monitor can be suspended/resumed separately from the others. The problem is that once you suspend a monitor, you can never resume it, because the ApplicationEvents property of the MonitoringFacade bean suddenly becomes null for that SINGLE entity. For other entities the listener keeps working just fine. Yet when methods are invoked on this null listener, no NullPointerException is ever thrown.
class IndexController implements ApplicationEvents
...
public IndexController(SimpMessagingTemplate simpMessagingTemplate, MonitoringFacade monitoringFacade) {
this.simpMessagingTemplate = simpMessagingTemplate;
this.monitoringFacade = monitoringFacade;
}
@PostConstruct
public void initialize() {
if (logger.isDebugEnabled()) {
logger.debug(">>Index controller initialization.");
}
monitoringFacade.addDispatcher(this);
}
...
@Override
public void monitorUpdated(String monitorId) {
if (logger.isDebugEnabled()) {
logger.debug(">>Sending monitoring data to client with monitor id " + monitorId);
}
try {
ConfigurationDTO config = monitoringFacade.findConfig(monitorId);
Report report = monitoringFacade.findReport(monitorId);
ReportReadModel readModel = ReportReadModel.mapFrom(config, report);
simpMessagingTemplate.convertAndSend("/client/update", readModel);
} catch (Exception e) {
logger.log(Level.ERROR, "Exception: ", e);
}
}
public class MonitoringFacadeImpl implements MonitoringFacade
...
private ApplicationEvents dispatcher;
public void addDispatcher(ApplicationEvents dispatcher) {
logger.info("Setting up dispatcher");
this.dispatcher = dispatcher;
}
...
@Override
public void refreshed(RefreshEvent event) {
final String monitorId = event.getId().getIdentity();
if (logger.isDebugEnabled()) {
logger.debug(String.format(">>Refreshing monitoring data with monitor id '%s'", monitorId));
}
Configuration refreshedConfig = configurationService.find(monitorId);
reportingService.compileReport(refreshedConfig, event.getData());
if (logger.isDebugEnabled()) {
logger.debug(String.format(">>Notifying monitoring data updated with monitor id '%s'", monitorId) + dispatcher);
}
dispatcher.monitorUpdated(monitorId); // here dispatcher has null value... or it's actually not
}
The refreshed(RefreshEvent event) method successfully receives updates from the Quartz scheduler through the interface and sends them back to the controller.
The question is: how can a singleton-scoped bean have different property values for the different objects it is applied to, and why does such a property become null even though I never set it to null?
UPD:
@MessageMapping("/monitor/{monitorId}/suspend")
public void handleSuspend(@DestinationVariable String monitorId) {
if (logger.isDebugEnabled()) {
logger.debug(">>>Handling suspend request for monitor with id " + monitorId);
}
try {
monitoringFacade.disableUrlMonitoring(monitorId);
monitorUpdated(monitorId);// force client update
} catch (Exception e) {
logger.log(Level.ERROR, "Exception: ", e);
}
}
@MessageMapping("/monitor/{monitorId}/resume")
public void handleResume(@DestinationVariable String monitorId) {
if (logger.isDebugEnabled()) {
logger.debug(">>>Handling resume request for monitor with id " + monitorId);
}
try {
monitoringFacade.enableUrlMonitoring(monitorId);
monitorUpdated(monitorId);// force client update
} catch (Exception e) {
logger.log(Level.ERROR, "Exception: ", e);
}
}
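One way to narrow this down (a diagnostic sketch, not part of the original code) is to log identity hash codes on both sides of the registration; if the controller and MonitoringFacadeImpl report different facade instances, more than one bean instance is in play, for example because the class is component-scanned by both the root context and the DispatcherServlet context:

@PostConstruct
public void initialize() {
    // Diagnostic: record exactly which instances get wired together.
    logger.info("IndexController@" + System.identityHashCode(this)
            + " registering on MonitoringFacade@" + System.identityHashCode(monitoringFacade));
    monitoringFacade.addDispatcher(this);
}

And on the facade side, inside refreshed():

logger.info("refreshed() on MonitoringFacade@" + System.identityHashCode(this)
        + " with dispatcher " + dispatcher);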
I have a Spring Boot application that uses the libraries SimpleMessageListenerContainer (https://docs.spring.io/spring-amqp/docs/current/api/org/springframework/amqp/rabbit/listener/SimpleMessageListenerContainer.html) and SimpleMessageListenerContainerFactory (https://www.javadoc.io/static/org.springframework.cloud/spring-cloud-aws-messaging/2.2.0.RELEASE/org/springframework/cloud/aws/messaging/config/SimpleMessageListenerContainerFactory.html). The application uses AWS SQS and Kafka, but I'm experiencing some out-of-order data and trying to investigate why. Is there a way to view logging from the libraries? I know I cannot edit them directly, but when I create the bean I want to be able to see the logs from those two libraries and, if possible, add to them.
Currently I'm setting up the bean in this way:
@ConditionalOnProperty(value = "application.listener-mode", havingValue = "SQS")
@Component
public class SqsConsumer {
private final static Logger logger = LoggerFactory.getLogger(SqsConsumer.class);
@Autowired
private ConsumerMessageHandler consumerMessageHandler;
@Autowired
private KafkaProducer producer;
@PostConstruct
public void init() {
logger.info("Loading SQS Listener Bean");
}
@SqsListener("${application.aws-iot.sqs-url}")
public void receiveMessage(String message) {
byte[] decodedValue = Base64.getDecoder().decode(message);
consumerMessageHandler.handle(decodedValue, message);
}
@Bean
public SimpleMessageListenerContainerFactory simpleMessageListenerContainerFactory(AmazonSQSAsync amazonSqs) {
SimpleMessageListenerContainerFactory factory = new SimpleMessageListenerContainerFactory();
factory.setAmazonSqs(amazonSqs);
factory.setMaxNumberOfMessages(10);
factory.setWaitTimeOut(20);
logger.info("Created simpleMessageListenerContainerFactory");
logger.info(factory.toString());
return factory;
}
}
For reference, this is a method in SimpleMessageListenerContainer. It is these logs that I would like to investigate and potentially add to:
@Override
public void run() {
while (isQueueRunning()) {
try {
ReceiveMessageResult receiveMessageResult = getAmazonSqs()
.receiveMessage(
this.queueAttributes.getReceiveMessageRequest());
CountDownLatch messageBatchLatch = new CountDownLatch(
receiveMessageResult.getMessages().size());
for (Message message : receiveMessageResult.getMessages()) {
if (isQueueRunning()) {
MessageExecutor messageExecutor = new MessageExecutor(
this.logicalQueueName, message, this.queueAttributes);
getTaskExecutor().execute(new SignalExecutingRunnable(
messageBatchLatch, messageExecutor));
}
else {
messageBatchLatch.countDown();
}
}
try {
messageBatchLatch.await();
}
catch (InterruptedException e) {
Thread.currentThread().interrupt();
}
}
catch (Exception e) {
getLogger().warn(
"An Exception occurred while polling queue '{}'. The failing operation will be "
+ "retried in {} milliseconds",
this.logicalQueueName, getBackOffTime(), e);
try {
// noinspection BusyWait
Thread.sleep(getBackOffTime());
}
catch (InterruptedException ie) {
Thread.currentThread().interrupt();
}
}
}
SimpleMessageListenerContainer.this.scheduledFutureByQueue
.remove(this.logicalQueueName);
}
How would I be able to see all of that logging from where I create the bean?
Any help would be much appreciated!
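One approach (a sketch, assuming Spring Boot's default Logback setup and the package names from the linked Javadoc) is to raise the log level for the library packages in application.properties; the container writes through SLF4J under its own class name, so this surfaces its existing output:

# spring-cloud-aws listener container classes (SimpleMessageListenerContainer and friends)
logging.level.org.springframework.cloud.aws.messaging=DEBUG
# the underlying AWS SDK client, if needed
logging.level.com.amazonaws=DEBUG

This only makes the library's own getLogger() output visible; adding new messages to those logs would require subclassing or wrapping the container, since the library code cannot be edited directly.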
I need to migrate the Kinesis library to version 2.2.11, so I followed the tutorial: https://docs.aws.amazon.com/streams/latest/dev/kcl-migration.html
I need to run multiple instances of my consumer app, so each of them needs a unique application name in order to have a separate lease table in DynamoDB.
When initializing the consumer, Kinesis runs DynamoDBLeaseRefresher.createLeaseTableIfNotExists, which checks whether a table needs to be created for this application name and creates one if it cannot be found.
So two operations are performed:
DescribeTable - it returns the table info or throws a ResourceNotFoundException,
if needed - CreateTable.
The problem for me is with the DescribeTable method. When I look for an existing table, it is returned with no problem. But when I look for a non-existent table, it throws the ResourceNotFoundException -> so far so good. Unfortunately, the exception then gets wrapped and becomes:
java.util.concurrent.CompletionException: software.amazon.awssdk.core.exception.SdkClientException: Unable to execute HTTP request: software.amazon.awssdk.awscore.exception.AwsServiceException$Builder.extendedRequestId(Ljava/lang/String;)Lsoftware/amazon/awssdk/awscore/exception/AwsServiceException$Builder;
and the app, expecting a ResourceNotFoundException, gets something different instead and crashes.
The wrapped exception message is a bit misleading ("Unable to execute HTTP request"), since the request was performed and returned the proper message: "Resource not found".
The funny thing is that it sometimes works: the exception does not get wrapped, the CreateTable operation is performed, and the consumer starts properly.
For now I have made a workaround where I simply create the table before the initialization of the LeaseCoordinator, so it always finds an existing table.
Here is my code:
public KinesisStreamReaderService(String streamName, String applicationName, String regionName) {
KinesisAsyncClient kinesisClient = KinesisAsyncClient.builder()
.credentialsProvider(EnvironmentVariableCredentialsProvider.create())
.region(Region.of(connectionProperties.getRegion()))
.httpClientBuilder(createHttpClientBuilder())
.build();
DynamoDbAsyncClient dynamoClient = DynamoDbAsyncClient.builder().region(Region.of(regionName)).build();
CloudWatchAsyncClient cloudWatchClient = CloudWatchAsyncClient.builder().region(Region.of(regionName)).build();
// if(!dynamoDbTableExists(dynamoClient, applicationName)) {
// createDynamoDbTable(dynamoClient, applicationName);
// }
ConfigsBuilder configsBuilder = new ConfigsBuilder(streamName, applicationName, kinesisClient,
dynamoClient, cloudWatchClient, workerId(), KinesisReaderProcessor::new);
configsBuilder.retrievalConfig().initialPositionInStreamExtended(
InitialPositionInStreamExtended.newInitialPosition(
InitialPositionInStream.LATEST));
scheduler = new Scheduler(
configsBuilder.checkpointConfig(),
configsBuilder.coordinatorConfig(),
configsBuilder.leaseManagementConfig(),
configsBuilder.lifecycleConfig(),
configsBuilder.metricsConfig(),
configsBuilder.processorConfig(),
configsBuilder.retrievalConfig().retrievalSpecificConfig(new PollingConfig(streamName, kinesisClient))
);
}
private void createDynamoDbTable(DynamoDbAsyncClient dynamoClient, String applicationName) {
log.info("Creating new lease table: {}", applicationName);
CompletableFuture<CreateTableResponse> createTableFuture = dynamoClient
.createTable(CreateTableRequest.builder()
.provisionedThroughput(ProvisionedThroughput.builder().readCapacityUnits(10L).writeCapacityUnits(10L).build())
.tableName(applicationName)
.keySchema(KeySchemaElement.builder().attributeName("leaseKey").keyType(KeyType.HASH).build())
.attributeDefinitions(AttributeDefinition.builder().attributeName("leaseKey").attributeType(
ScalarAttributeType.S).build())
.build());
try {
CreateTableResponse createTableResponse = createTableFuture.get();
log.debug("Created new lease table: {}", createTableResponse.tableDescription().tableName());
} catch (InterruptedException | ExecutionException e) {
throw new DataStreamException(e.getMessage(), e);
}
}
private boolean dynamoDbTableExists(DynamoDbAsyncClient dynamoClient, String tableName) {
CompletableFuture<DescribeTableResponse> describeTableResponseCompletableFutureNew = dynamoClient
.describeTable(DescribeTableRequest.builder()
.tableName(tableName).build());
try {
DescribeTableResponse describeTableResponseNew = describeTableResponseCompletableFutureNew
.get();
return nonNull(describeTableResponseNew);
} catch (InterruptedException | ExecutionException e) {
log.info(e.getMessage(), e);
}
return false;
}
private static String workerId() {
String workerId;
try {
workerId = format("%s_%s", getLocalHost().getCanonicalHostName(), randomUUID().toString());
} catch (UnknownHostException e) {
workerId = randomUUID().toString();
}
return workerId;
}
@Override
public void read(Consumer<String> consumer) {
this.consumer = consumer;
scheduler.run();
}
private class KinesisReaderProcessor implements ShardRecordProcessor {
private String shardId;
@Override
public void initialize(InitializationInput initializationInput) {
this.shardId = initializationInput.shardId();
log.info("Initializing record processor for shard: {}", shardId);
}
@Override
public void processRecords(ProcessRecordsInput processRecordsInput) {
log.debug("Checking shard {} for new records", shardId);
List<KinesisClientRecord> records = processRecordsInput.records();
if (!records.isEmpty()) {
log.debug("Processing {} records from kinesis stream shard {}", records.size(), shardId);
records.forEach(record -> {
String json = UTF_8.decode(record.data()).toString();
log.info(json);
consumer.accept(json);
});
}
}
@Override
public void leaseLost(LeaseLostInput leaseLostInput) {
log.info("Record processor has lost lease, terminating");
}
@Override
public void shardEnded(ShardEndedInput shardEndedInput) {
try {
shardEndedInput.checkpointer().checkpoint();
} catch (ShutdownException | InvalidStateException e) {
log.error(e.getMessage(), e);
}
}
@Override
public void shutdownRequested(ShutdownRequestedInput shutdownRequestedInput) {
try {
shutdownRequestedInput.checkpointer().checkpoint();
} catch (ShutdownException | InvalidStateException e) {
log.error(e.getMessage(), e);
}
}
}
}
Am I missing some configuration for the scheduler or something? Why is it sometimes working?
Thanks
Edit:
The problem is this block of code in DynamoDBLeaseRefresher.tableStatus(), which is invoked to check whether the table exists:
DescribeTableResponse result;
try {
try {
result =
(DescribeTableResponse)FutureUtils.resolveOrCancelFuture(this.dynamoDBClient.describeTable(request), this.dynamoDbRequestTimeout);
} catch (ExecutionException var5) {
throw exceptionManager.apply(var5.getCause());
} catch (InterruptedException var6) {
throw new DependencyException(var6);
}
} catch (ResourceNotFoundException var7) {
log.debug("Got ResourceNotFoundException for table {} in leaseTableExists, returning false.", this.table);
return null;
}
In my case it should get a ResourceNotFoundException if the table is not found, but as I said, the exception gets wrapped into a CompletionException before it reaches the appropriate catch block, and it is instead caught in the code here:
catch (ExecutionException var5) {
throw exceptionManager.apply(var5.getCause());
This happens 20 times in the loop while trying to initialize the LeaseCoordinator, and then it just stops trying to initialize the connection. (As mentioned above, it occasionally works, which makes it even stranger to me.)
With my workaround it needs only one try to get initialized.
You don't need to create the lease table manually: DynamoDBLeaseCoordinator will create one on initialization if it does not exist, and wait until it exists:
@Override
public void initialize() throws ProvisionedThroughputException, DependencyException, IllegalStateException {
final boolean newTableCreated =
leaseRefresher.createLeaseTableIfNotExists(initialLeaseTableReadCapacity, initialLeaseTableWriteCapacity);
if (newTableCreated) {
log.info("Created new lease table for coordinator with initial read capacity of {} and write capacity of {}.",
initialLeaseTableReadCapacity, initialLeaseTableWriteCapacity);
}
// Need to wait for table in active state.
final long secondsBetweenPolls = 10L;
final long timeoutSeconds = 600L;
final boolean isTableActive = leaseRefresher.waitUntilLeaseTableExists(secondsBetweenPolls, timeoutSeconds);
if (!isTableActive) {
throw new DependencyException(new IllegalStateException("Creating table timeout"));
}
}
The issue in your case, I think, is that the table is created eventually, and you should probably poll periodically until it appears, like DynamoDBLeaseCoordinator#initialize() does.
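If you keep the manual pre-creation workaround anyway, a polling helper in the same spirit might look like this (a sketch reusing only the DescribeTable call already shown above; the method name and retry limits are illustrative):

private void waitUntilTableActive(DynamoDbAsyncClient dynamoClient, String tableName)
        throws InterruptedException {
    // Poll DescribeTable until the table reports ACTIVE, mirroring what
    // leaseRefresher.waitUntilLeaseTableExists(10, 600) does in initialize().
    for (int attempt = 0; attempt < 60; attempt++) {
        try {
            DescribeTableResponse response = dynamoClient
                    .describeTable(DescribeTableRequest.builder().tableName(tableName).build())
                    .get();
            if (response.table().tableStatus() == TableStatus.ACTIVE) {
                return;
            }
        } catch (ExecutionException e) {
            // Most likely a wrapped ResourceNotFoundException: table not there yet, keep polling.
        }
        Thread.sleep(10_000L);
    }
    throw new IllegalStateException("Lease table " + tableName + " did not become ACTIVE in time");
}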
On acquiring a state machine with stateMachineService, the machine is started, even though I passed 'false' as the second parameter.
stateMachine = stateMachineService.acquireStateMachine(id, false)
According to the console output, acquireStateMachine starts the machine.
I'm using DefaultStateMachineService:
@Bean
public StateMachineService<BookingItemState, BookingItemEvent> stateMachineService(
StateMachineFactory<BookingItemState, BookingItemEvent> stateMachineFactory,
StateMachineRuntimePersister<BookingItemState, BookingItemEvent, String> stateMachineRuntimePersister) {
return new DefaultStateMachineService<>(stateMachineFactory, stateMachineRuntimePersister);
}
The issue is in the DefaultStateMachineService class. I suppose that you have configured the SM as below, enabling the autoStartup property:
@Override
public void configure(StateMachineConfigurationConfigurer<String, String> config) throws Exception {
config
.withConfiguration()
.autoStartup(true);
}
If you call acquireStateMachine, DefaultStateMachineService creates a new SM using stateMachineFactory; since your SM has autoStartup enabled, it starts the new SM and stores it in the DB.
Let's consider the method:
public StateMachine<S, E> acquireStateMachine(String machineId, boolean start) {
log.info("Acquiring machine with id " + machineId);
StateMachine<S, E> stateMachine;
// naive sync to handle concurrency with release
synchronized (machines) {
stateMachine = machines.get(machineId);
if (stateMachine == null) {
log.info("Getting new machine from factory with id " + machineId);
stateMachine = stateMachineFactory.getStateMachine(machineId);
if (stateMachinePersist != null) {
try {
StateMachineContext<S, E> stateMachineContext = stateMachinePersist.read(machineId);
stateMachine = restoreStateMachine(stateMachine, stateMachineContext);
} catch (Exception e) {
log.error("Error handling context", e);
throw new StateMachineException("Unable to read context from store", e);
}
}
machines.put(machineId, stateMachine);
}
}
// handle start outside of sync as it might take some time and would block other machines acquire
return handleStart(stateMachine, start);
}
To avoid this issue you may disable the autoStartup option or implement your own custom StateMachineService. But then you have to call stateMachine.start() explicitly.
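A minimal sketch of the first option, reusing the generic types from the stateMachineService bean above:

@Override
public void configure(StateMachineConfigurationConfigurer<BookingItemState, BookingItemEvent> config)
        throws Exception {
    config
        .withConfiguration()
        .autoStartup(false); // acquireStateMachine(id, false) should now leave the machine stopped
}

Then start explicitly only where you actually want the machine running:

StateMachine<BookingItemState, BookingItemEvent> machine = stateMachineService.acquireStateMachine(id, false);
machine.start();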
I have written a custom Flume sink, named MySink, whose process method is shown in the first snippet below. I am getting an IllegalStateException as follows (the detailed stack trace is in the second snippet below):
Caused by: java.lang.IllegalStateException: begin() called when
transaction is OPEN!
QUESTION: I followed KafkaSink and similar existing sink implementations in the Flume code base while writing the process method, and I am applying the very same transaction-handling logic as those existing sinks. Could you please tell me what is wrong with my process method here? How can I fix the problem?
PROCESS method (I have marked where the exception is thrown):
@Override
public Status process() throws EventDeliveryException {
Status status = Status.READY;
Channel ch = getChannel();
Transaction txn = ch.getTransaction();
Event event = null;
try {
LOG.info(getName() + " BEFORE txn.begin()");
//!!!! EXCEPTION IS THROWN in the following LINE !!!!!!
txn.begin();
LOG.info(getName() + " AFTER txn.begin()");
LOG.info(getName() + " BEFORE ch.take()");
event = ch.take();
LOG.info(getName() + " AFTER ch.take()");
if (event == null) {
// No event found, request back-off semantics from the sink runner
LOG.info(getName() + " - EVENT is null! ");
return Status.BACKOFF;
}
Map<String, String> keyValueMapInTheMessage = event.getHeaders();
if (!keyValueMapInTheMessage.isEmpty()) {
mDBWriter.insertDataToDB(keyValueMapInTheMessage);
}
LOG.info(getName() + " - EVENT: " + EventHelper.dumpEvent(event));
if (txn != null) {
txn.commit();
}
} catch (Exception ex) {
String errMsg = getName() + " - Failed to publish events. Exception: ";
LOG.info(errMsg);
status = Status.BACKOFF;
if (txn != null) {
try {
txn.rollback();
} catch (Exception e) {
LOG.info(getName() + " - EVENT: " + EventHelper.dumpEvent(event));
throw Throwables.propagate(e);
}
}
throw new EventDeliveryException(errMsg, ex);
} finally {
if (txn != null) {
txn.close();
}
}
return status;
}
EXCEPTION STACK:
2016-01-22 14:01:15,440 (SinkRunner-PollingRunner-DefaultSinkProcessor) [ERROR - org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:160)]
Unable to deliver event. Exception follows.
org.apache.flume.EventDeliveryException: MySink - Failed to publish events.
Exception: at com.XYZ.flume.maprdb.MySink.process(MySink.java:116)
at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.IllegalStateException: begin() called when transaction is OPEN!
at com.google.common.base.Preconditions.checkState(Preconditions.java:145)
at org.apache.flume.channel.BasicTransactionSemantics.begin(BasicTransactionSemantics.java:131)
at com.XYZ.flume.maprdb.MySink.process(MySink.java:82)
... 3 more
if (event == null) {
// No event found, request back-off semantics from the sink runner
LOG.info(getName() + " - EVENT is null! ");
return Status.BACKOFF;
}
This code causes the problem. When the event is null, you just return. However, the correct way is to commit or roll back: a transaction must go through three stages (begin; commit or rollback; finally close). We can look at the following source code to see how this is implemented.
BasicChannelSemantics:
public Transaction getTransaction() {
if (!initialized) {
synchronized (this) {
if (!initialized) {
initialize();
initialized = true;
}
}
}
BasicTransactionSemantics transaction = currentTransaction.get();
if (transaction == null || transaction.getState().equals(
BasicTransactionSemantics.State.CLOSED)) {
transaction = createTransaction();
currentTransaction.set(transaction);
}
return transaction;
}
When currentTransaction is null, or its state is CLOSED, the channel creates a new transaction; otherwise it returns the old one. This exception does not happen immediately. The first time the process method executes, you get a new transaction, but the event is null, so you just return and the finally block calls close(); that close() does not actually close the transaction, because of how it is implemented (its precondition is not met). So the second time the process method executes, you don't get a new transaction: you get the old one. The following code shows how the transaction is implemented.
BasicTransactionSemantics:
protected BasicTransactionSemantics() {
state = State.NEW;
initialThreadId = Thread.currentThread().getId();
}
public void begin() {
Preconditions.checkState(Thread.currentThread().getId() == initialThreadId,
"begin() called from different thread than getTransaction()!");
Preconditions.checkState(state.equals(State.NEW),
"begin() called when transaction is " + state + "!");
try {
doBegin();
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
throw new ChannelException(e.toString(), e);
}
state = State.OPEN;
}
public void commit() {
Preconditions.checkState(Thread.currentThread().getId() == initialThreadId,
"commit() called from different thread than getTransaction()!");
Preconditions.checkState(state.equals(State.OPEN),
"commit() called when transaction is %s!", state);
try {
doCommit();
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
throw new ChannelException(e.toString(), e);
}
state = State.COMPLETED;
}
public void rollback() {
Preconditions.checkState(Thread.currentThread().getId() == initialThreadId,
"rollback() called from different thread than getTransaction()!");
Preconditions.checkState(state.equals(State.OPEN),
"rollback() called when transaction is %s!", state);
state = State.COMPLETED;
try {
doRollback();
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
throw new ChannelException(e.toString(), e);
}
}
public void close() {
Preconditions.checkState(Thread.currentThread().getId() == initialThreadId,
"close() called from different thread than getTransaction()!");
Preconditions.checkState(
state.equals(State.NEW) || state.equals(State.COMPLETED),
"close() called when transaction is %s"
+ " - you must either commit or rollback first", state);
state = State.CLOSED;
doClose();
}
When created, the state is NEW.
When begin() is called, the state must be NEW; it then becomes OPEN.
When commit() or rollback() is called, the state must be OPEN; it then becomes COMPLETED.
When close() is called, the state must be NEW or COMPLETED; it then becomes CLOSED.
So if you close the transaction the right way, the next time you will get a new transaction; otherwise you get the old one, whose state is no longer NEW, so you cannot call transaction.begin() on it. It needs a new transaction.
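Concretely, the smallest fix is to commit the empty transaction before returning BACKOFF, so that the close() in the finally block sees state COMPLETED instead of OPEN (only the null-event branch of process() is shown):

if (event == null) {
    // No event found, request back-off semantics from the sink runner,
    // but commit first so that the finally { txn.close(); } is legal.
    LOG.info(getName() + " - EVENT is null! ");
    txn.commit();
    return Status.BACKOFF;
}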
I'm using Hibernate 3 and Spring.
When I start a thread, an exception occurs:
org.hibernate.HibernateException: Illegal attempt to associate a collection with two open sessions
I don't know how to detach entities or close the session with this architecture.
I'd appreciate some help.
CommunicationService.sendCommunications() code:
public void sendCommunications(HibernateMessageToSendRepository messageToSendRepository) {
Long messageId = new Long(41); //this is only for test. the idea is get a list of id and generate a thread group.
MessageSender sender = new SmsSender(messageId, messageToSendRepository);
sender.start();
}
Invoking sendCommunications code:
ApplicationContext appCont = new ClassPathXmlApplicationContext("appContext.xml");
ServiceLocator serviceLocator = ServiceLocator.getInstance();
HibernateMessageToSendRepository messageToSendRepository = (HibernateMessageToSendRepository) appCont.getBean("messageToSendRepository");
CommunicationService communication = serviceLocator.getCommunicationService();
communication.sendCommunications(messageToSendRepository);
SmsSender (extends from MessageSender (thread)) code:
public class SmsSender extends MessageSender {
public SmsSender(Long messageToSendId, HibernateMessageToSendRepository messageToSendRepository) {
super(messageToSendRepository);
MessageToSend messageToSendNew = this.messageToSendRepository.getById(messageToSendId);
this.messageToSend = messageToSendNew;
}
public void run() {
try {
MessageToSendSms messageToSendSms = (MessageToSendSms) this.messageToSend;
Iterator<CustomerByMessage> itCbmsgs = messageToSendSms.getCustomerByMessage().iterator();
while (itCbmsgs.hasNext()) {
CustomerByMessage cbm = (CustomerByMessage) itCbmsgs.next();
//sms sending
this.getGateway().sendSMS(cbm.getBody(), cbm.getCellphone());
cbm.setStatus(CustomerByMessageStatus.SENT_OK);
cbm.setSendingDate(Calendar.getInstance().getTime());
}
messageToSendSms.getMessage().setStatus(MessageToSendStatus.ALL_MESSAGES_SENT);
this.messageToSendRepository.update(messageToSendSms);
} catch (Exception e) {
this.log.error("Error en sms sender " + e.getMessage());
}
}
}
MessageToSendRepository code:
public void update(MessageToSend messageToSend) {
try {
this.getSession().update(messageToSend);
} catch (HibernateException e) {
this.log.error(e.getMessage(), e);
throw e;
}
}
You need to detach messageToSendNew after you retrieve it, but before you share it with another thread. You can detach the object by calling Session.close() on your Hibernate session.
Caveat: you must eagerly populate all the fields that you need.
If you need to reconnect it to a new session, you can use the merge() method.
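A sketch of what that looks like in SmsSender (assuming the repository can expose its Hibernate Session; the getSession() accessor is illustrative, matching the one its update() method already uses internally):

public SmsSender(Long messageToSendId, HibernateMessageToSendRepository messageToSendRepository) {
    super(messageToSendRepository);
    MessageToSend messageToSendNew = this.messageToSendRepository.getById(messageToSendId);
    // Eagerly initialize the lazy collection while the first session is still open.
    ((MessageToSendSms) messageToSendNew).getCustomerByMessage().size();
    // Detach the entity graph by closing the session before handing it to this thread.
    this.messageToSendRepository.getSession().close();
    this.messageToSend = messageToSendNew;
}

And inside run(), reattach to the new thread's session instead of updating the detached object directly:

MessageToSendSms messageToSendSms =
        (MessageToSendSms) this.messageToSendRepository.getSession().merge(this.messageToSend);
// merge() returns an attached copy and schedules its state for saving; work with it from here on.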