I'm using Hibernate 3 and Spring.
When I start a thread, an exception occurs:
org.hibernate.HibernateException: Illegal attempt to associate a collection with two open sessions
I don't know how to detach entities or close the session with this architecture.
I would appreciate some help.
CommunicationService.sendCommunications() code:
public void sendCommunications(HibernateMessageToSendRepository messageToSendRepository) {
Long messageId = new Long(41); // for testing only; the idea is to get a list of ids and spawn a thread group.
MessageSender sender = new SmsSender(messageId, messageToSendRepository);
sender.start();
}
Invoking sendCommunications code:
ApplicationContext appCont = new ClassPathXmlApplicationContext("appContext.xml");
ServiceLocator serviceLocator = ServiceLocator.getInstance();
HibernateMessageToSendRepository messageToSendRepository = (HibernateMessageToSendRepository) appCont.getBean("messageToSendRepository");
CommunicationService communication = serviceLocator.getCommunicationService();
communication.sendCommunications(messageToSendRepository);
SmsSender (which extends MessageSender, a Thread subclass) code:
public class SmsSender extends MessageSender {
public SmsSender(Long messageToSendId, HibernateMessageToSendRepository messageToSendRepository) {
super(messageToSendRepository);
MessageToSend messageToSendNew = this.messageToSendRepository.getById(messageToSendId);
this.messageToSend = messageToSendNew;
}
public void run() {
try {
MessageToSendSms messageToSendSms = (MessageToSendSms) this.messageToSend;
Iterator<CustomerByMessage> itCbmsgs = messageToSendSms.getCustomerByMessage().iterator();
while (itCbmsgs.hasNext()) {
CustomerByMessage cbm = (CustomerByMessage) itCbmsgs.next();
//sms sending
this.getGateway().sendSMS(cbm.getBody(), cbm.getCellphone());
cbm.setStatus(CustomerByMessageStatus.SENT_OK);
cbm.setSendingDate(Calendar.getInstance().getTime());
}
messageToSendSms.getMessage().setStatus(MessageToSendStatus.ALL_MESSAGES_SENT);
this.messageToSendRepository.update(messageToSendSms);
} catch (Exception e) {
this.log.error("Error en sms sender " + e.getMessage());
}
}
}
MessageToSendRepository code:
public void update(MessageToSend messageToSend) {
try {
this.getSession().update(messageToSend);
} catch (HibernateException e) {
this.log.error(e.getMessage(), e);
throw e;
}
}
You need to detach messageToSendNew after you retrieve it, but before you share it with another thread. You can detach the object by calling Session.close() on your Hibernate session.
Caveat: you must eagerly populate all the fields that you need before closing the session.
If you need to reconnect the object with a new session, you can use the merge() method.
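A hedged sketch of that sequence (assuming a Hibernate SessionFactory is available; the names here are illustrative, not taken from the original code):
// Load the entity, eagerly initialize what the worker thread needs,
// then close the session so the object is detached before sharing it.
Session session = sessionFactory.openSession();
MessageToSendSms messageToSend = (MessageToSendSms) session.get(MessageToSendSms.class, messageId);
Hibernate.initialize(messageToSend.getCustomerByMessage()); // force-load the lazy collection
session.close(); // messageToSend is now detached
// Later, in the worker thread, reattach it to a fresh session:
Session workerSession = sessionFactory.openSession();
Transaction tx = workerSession.beginTransaction();
MessageToSendSms managed = (MessageToSendSms) workerSession.merge(messageToSend);
tx.commit();
workerSession.close();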
I have a Spring Boot application that uses the libraries SimpleMessageListenerContainer (https://docs.spring.io/spring-amqp/docs/current/api/org/springframework/amqp/rabbit/listener/SimpleMessageListenerContainer.html) and SimpleMessageListenerContainerFactory (https://www.javadoc.io/static/org.springframework.cloud/spring-cloud-aws-messaging/2.2.0.RELEASE/org/springframework/cloud/aws/messaging/config/SimpleMessageListenerContainerFactory.html). The application uses AWS SQS and Kafka, but I'm experiencing some out-of-order data and am trying to investigate why. Is there a way to view the logging from those libraries? I know I cannot edit them directly, but when I create the bean I want to be able to see the logs from those two libraries and, if possible, add to them.
Currently I'm setting up the bean in this way:
@ConditionalOnProperty(value = "application.listener-mode", havingValue = "SQS")
@Component
public class SqsConsumer {
private final static Logger logger = LoggerFactory.getLogger(SqsConsumer.class);
@Autowired
private ConsumerMessageHandler consumerMessageHandler;
@Autowired
private KafkaProducer producer;
@PostConstruct
public void init() {
logger.info("Loading SQS Listener Bean");
}
@SqsListener("${application.aws-iot.sqs-url}")
public void receiveMessage(String message) {
byte[] decodedValue = Base64.getDecoder().decode(message);
consumerMessageHandler.handle(decodedValue, message);
}
@Bean
public SimpleMessageListenerContainerFactory simpleMessageListenerContainerFactory(AmazonSQSAsync amazonSqs) {
SimpleMessageListenerContainerFactory factory = new SimpleMessageListenerContainerFactory();
factory.setAmazonSqs(amazonSqs);
factory.setMaxNumberOfMessages(10);
factory.setWaitTimeOut(20);
logger.info("Created simpleMessageListenerContainerFactory");
logger.info(factory.toString());
return factory;
}
}
For reference, this is a method in SimpleMessageListenerContainer. These are the logs I would like to investigate and potentially add to:
@Override
public void run() {
while (isQueueRunning()) {
try {
ReceiveMessageResult receiveMessageResult = getAmazonSqs()
.receiveMessage(
this.queueAttributes.getReceiveMessageRequest());
CountDownLatch messageBatchLatch = new CountDownLatch(
receiveMessageResult.getMessages().size());
for (Message message : receiveMessageResult.getMessages()) {
if (isQueueRunning()) {
MessageExecutor messageExecutor = new MessageExecutor(
this.logicalQueueName, message, this.queueAttributes);
getTaskExecutor().execute(new SignalExecutingRunnable(
messageBatchLatch, messageExecutor));
}
else {
messageBatchLatch.countDown();
}
}
try {
messageBatchLatch.await();
}
catch (InterruptedException e) {
Thread.currentThread().interrupt();
}
}
catch (Exception e) {
getLogger().warn(
"An Exception occurred while polling queue '{}'. The failing operation will be "
+ "retried in {} milliseconds",
this.logicalQueueName, getBackOffTime(), e);
try {
// noinspection BusyWait
Thread.sleep(getBackOffTime());
}
catch (InterruptedException ie) {
Thread.currentThread().interrupt();
}
}
}
SimpleMessageListenerContainer.this.scheduledFutureByQueue
.remove(this.logicalQueueName);
}
How would I be able to see all of that logging from where I create the bean?
Any help would be much appreciated!
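Both libraries log through the standard logging facade, so one way to surface their logs without modifying them is to raise the log level for their packages. A minimal sketch, assuming Spring Boot's default Logback setup and the package names from the javadocs linked above:
# application.properties
# spring-cloud-aws SQS listener internals (SimpleMessageListenerContainer and friends)
logging.level.org.springframework.cloud.aws.messaging=DEBUG
# the spring-amqp listener container, if that one is in play as well
logging.level.org.springframework.amqp.rabbit.listener=DEBUG
This surfaces the existing logs (such as the polling warning in run() above); adding new log lines would still require wrapping or extending the container classes yourself.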
I need to migrate the Kinesis library to version 2.2.11, so I followed the tutorial: https://docs.aws.amazon.com/streams/latest/dev/kcl-migration.html
I need to run multiple instances of my consumer app, so each one needs a unique application name in order to have a separate lease table in DynamoDB.
When initializing the consumer, Kinesis runs DynamoDBLeaseRefresher.createLeaseTableIfNotExists, which checks whether a new table needs to be created for this application name and creates one if it cannot be found.
So 2 operations are performed:
DescribeTable - it returns the table info or throws a ResourceNotFoundException,
if needed - CreateTable.
The problem for me is with the DescribeTable method. When I am looking for an existing table, it returns it with no problem. But when I am looking for a non-existent table, it throws the ResourceNotFoundException -> so far so good. Unfortunately, it then gets wrapped and is now:
java.util.concurrent.CompletionException: software.amazon.awssdk.core.exception.SdkClientException: Unable to execute HTTP request: software.amazon.awssdk.awscore.exception.AwsServiceException$Builder.extendedRequestId(Ljava/lang/String;)Lsoftware/amazon/awssdk/awscore/exception/AwsServiceException$Builder;
and the app expecting ResourceNotFoundException gets something different instead and crashes.
The wrapped exception message is a bit misleading: "Unable to execute HTTP request" since the request was performed and returned the proper message: "Resource not found".
Funny thing is that it sometimes works, the exception does not get wrapped, the CreateTable operation is performed and the consumer starts properly.
I have made a workaround for it for now where I just create the table before the initialization of the LeaseCoordinator, so it always gets the existing table.
Here is my code:
public KinesisStreamReaderService(String streamName, String applicationName, String regionName) {
KinesisAsyncClient kinesisClient = KinesisAsyncClient.builder()
.credentialsProvider(EnvironmentVariableCredentialsProvider.create())
.region(Region.of(connectionProperties.getRegion()))
.httpClientBuilder(createHttpClientBuilder())
.build();
DynamoDbAsyncClient dynamoClient = DynamoDbAsyncClient.builder().region(Region.of(regionName)).build();
CloudWatchAsyncClient cloudWatchClient = CloudWatchAsyncClient.builder().region(Region.of(regionName)).build();
// if(!dynamoDbTableExists(dynamoClient, applicationName)) {
// createDynamoDbTable(dynamoClient, applicationName);
// }
ConfigsBuilder configsBuilder = new ConfigsBuilder(streamName, applicationName, kinesisClient,
dynamoClient, cloudWatchClient, workerId(), KinesisReaderProcessor::new);
configsBuilder.retrievalConfig().initialPositionInStreamExtended(
InitialPositionInStreamExtended.newInitialPosition(
InitialPositionInStream.LATEST));
scheduler = new Scheduler(
configsBuilder.checkpointConfig(),
configsBuilder.coordinatorConfig(),
configsBuilder.leaseManagementConfig(),
configsBuilder.lifecycleConfig(),
configsBuilder.metricsConfig(),
configsBuilder.processorConfig(),
configsBuilder.retrievalConfig().retrievalSpecificConfig(new PollingConfig(streamName, kinesisClient))
);
}
private void createDynamoDbTable(DynamoDbAsyncClient dynamoClient, String applicationName) {
log.info("Creating new lease table: {}", applicationName);
CompletableFuture<CreateTableResponse> createTableFuture = dynamoClient
.createTable(CreateTableRequest.builder()
.provisionedThroughput(ProvisionedThroughput.builder().readCapacityUnits(10L).writeCapacityUnits(10L).build())
.tableName(applicationName)
.keySchema(KeySchemaElement.builder().attributeName("leaseKey").keyType(KeyType.HASH).build())
.attributeDefinitions(AttributeDefinition.builder().attributeName("leaseKey").attributeType(
ScalarAttributeType.S).build())
.build());
try {
CreateTableResponse createTableResponse = createTableFuture.get();
log.debug("Created new lease table: {}", createTableResponse.tableDescription().tableName());
} catch (InterruptedException | ExecutionException e) {
throw new DataStreamException(e.getMessage(), e);
}
}
private boolean dynamoDbTableExists(DynamoDbAsyncClient dynamoClient, String tableName) {
CompletableFuture<DescribeTableResponse> describeTableResponseCompletableFutureNew = dynamoClient
.describeTable(DescribeTableRequest.builder()
.tableName(tableName).build());
try {
DescribeTableResponse describeTableResponseNew = describeTableResponseCompletableFutureNew
.get();
return nonNull(describeTableResponseNew);
} catch (InterruptedException | ExecutionException e) {
log.info(e.getMessage(), e);
}
return false;
}
private static String workerId() {
String workerId;
try {
workerId = format("%s_%s", getLocalHost().getCanonicalHostName(), randomUUID().toString());
} catch (UnknownHostException e) {
workerId = randomUUID().toString();
}
return workerId;
}
@Override
public void read(Consumer<String> consumer) {
this.consumer = consumer;
scheduler.run();
}
private class KinesisReaderProcessor implements ShardRecordProcessor {
private String shardId;
@Override
public void initialize(InitializationInput initializationInput) {
this.shardId = initializationInput.shardId();
log.info("Initializing record processor for shard: {}", shardId);
}
@Override
public void processRecords(ProcessRecordsInput processRecordsInput) {
log.debug("Checking shard {} for new records", shardId);
List<KinesisClientRecord> records = processRecordsInput.records();
if (!records.isEmpty()) {
log.debug("Processing {} records from kinesis stream shard {}", records.size(), shardId);
records.forEach(record -> {
String json = UTF_8.decode(record.data()).toString();
log.info(json);
consumer.accept(json);
});
}
}
@Override
public void leaseLost(LeaseLostInput leaseLostInput) {
log.info("Record processor has lost lease, terminating");
}
@Override
public void shardEnded(ShardEndedInput shardEndedInput) {
try {
shardEndedInput.checkpointer().checkpoint();
} catch (ShutdownException | InvalidStateException e) {
log.error(e.getMessage(), e);
}
}
@Override
public void shutdownRequested(ShutdownRequestedInput shutdownRequestedInput) {
try {
shutdownRequestedInput.checkpointer().checkpoint();
} catch (ShutdownException | InvalidStateException e) {
log.error(e.getMessage(), e);
}
}
}
}
Am I missing some configuration for the scheduler or something? Why is it sometimes working?
Thanks
Edit:
The problem is in this block of code from DynamoDBLeaseRefresher.tableStatus(), which is invoked to check whether the table exists:
DescribeTableResponse result;
try {
try {
result =
(DescribeTableResponse)FutureUtils.resolveOrCancelFuture(this.dynamoDBClient.describeTable(request), this.dynamoDbRequestTimeout);
} catch (ExecutionException var5) {
throw exceptionManager.apply(var5.getCause());
} catch (InterruptedException var6) {
throw new DependencyException(var6);
}
} catch (ResourceNotFoundException var7) {
log.debug("Got ResourceNotFoundException for table {} in leaseTableExists, returning false.", this.table);
return null;
}
and in my case it should get a ResourceNotFoundException if the table is not found, but as I said, the exception gets wrapped in a CompletionException before it reaches the appropriate catch block, and is instead caught in the code here:
catch (ExecutionException var5) {
throw exceptionManager.apply(var5.getCause());
This happens 20 times in the loop while trying to initialize the LeaseCoordinator, and then it just stops trying to initialize the connection. (As mentioned above, it works occasionally, but that makes it even stranger to me.)
With my workaround it only needs one try to get initialized.
You don't need to create the lease table manually: DynamoDBLeaseCoordinator creates one during initialization if it does not exist, and waits until it becomes active:
@Override
public void initialize() throws ProvisionedThroughputException, DependencyException, IllegalStateException {
final boolean newTableCreated =
leaseRefresher.createLeaseTableIfNotExists(initialLeaseTableReadCapacity, initialLeaseTableWriteCapacity);
if (newTableCreated) {
log.info("Created new lease table for coordinator with initial read capacity of {} and write capacity of {}.",
initialLeaseTableReadCapacity, initialLeaseTableWriteCapacity);
}
// Need to wait for table in active state.
final long secondsBetweenPolls = 10L;
final long timeoutSeconds = 600L;
final boolean isTableActive = leaseRefresher.waitUntilLeaseTableExists(secondsBetweenPolls, timeoutSeconds);
if (!isTableActive) {
throw new DependencyException(new IllegalStateException("Creating table timeout"));
}
}
The issue in your case, I think, is that the table is created eventually, so you should poll periodically until it appears, like DynamoDBLeaseCoordinator#initialize() does.
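If the table creation race keeps biting, here is a hedged sketch of such a periodic check, reusing the SDK v2 async client and model classes from the question's code (the attempt count and sleep interval are arbitrary):
private boolean waitUntilTableActive(DynamoDbAsyncClient dynamoClient, String tableName)
        throws InterruptedException {
    // Poll DescribeTable until the lease table reports ACTIVE, or give up.
    final int maxAttempts = 60;
    for (int attempt = 0; attempt < maxAttempts; attempt++) {
        try {
            DescribeTableResponse response = dynamoClient
                    .describeTable(DescribeTableRequest.builder().tableName(tableName).build())
                    .get();
            if (response.table().tableStatus() == TableStatus.ACTIVE) {
                return true;
            }
        } catch (ExecutionException e) {
            // most likely a wrapped ResourceNotFoundException: the table is not there yet
        }
        Thread.sleep(1000L);
    }
    return false;
}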
My app is built upon Spring + SockJS. The main page presents a table of available connections so that the user can monitor them in real time. Each URL monitor can be suspended/resumed separately from the others. The problem is that once you suspend a monitor, you can never resume it, because the ApplicationEvents property of the MonitoringFacade bean suddenly becomes null for that SINGLE entity. For the other entities the listener keeps working pretty well. When methods are invoked on such a null listener, though, a NullPointerException is never thrown.
class IndexController implements ApplicationEvents
...
public IndexController(SimpMessagingTemplate simpMessagingTemplate, MonitoringFacade monitoringFacade) {
this.simpMessagingTemplate = simpMessagingTemplate;
this.monitoringFacade = monitoringFacade;
}
@PostConstruct
public void initialize() {
if (logger.isDebugEnabled()) {
logger.debug(">>Index controller initialization.");
}
monitoringFacade.addDispatcher(this);
}
...
@Override
public void monitorUpdated(String monitorId) {
if (logger.isDebugEnabled()) {
logger.debug(">>Sending monitoring data to client with monitor id " + monitorId);
}
try {
ConfigurationDTO config = monitoringFacade.findConfig(monitorId);
Report report = monitoringFacade.findReport(monitorId);
ReportReadModel readModel = ReportReadModel.mapFrom(config, report);
simpMessagingTemplate.convertAndSend("/client/update", readModel);
} catch (Exception e) {
logger.log(Level.ERROR, "Exception: ", e);
}
}
public class MonitoringFacadeImpl implements MonitoringFacade
...
private ApplicationEvents dispatcher;
public void addDispatcher(ApplicationEvents dispatcher) {
logger.info("Setting up dispatcher");
this.dispatcher = dispatcher;
}
...
@Override
public void refreshed(RefreshEvent event) {
final String monitorId = event.getId().getIdentity();
if (logger.isDebugEnabled()) {
logger.debug(String.format(">>Refreshing monitoring data with monitor id '%s'", monitorId));
}
Configuration refreshedConfig = configurationService.find(monitorId);
reportingService.compileReport(refreshedConfig, event.getData());
if (logger.isDebugEnabled()) {
logger.debug(String.format(">>Notifying monitoring data updated with monitor id '%s'", monitorId) + dispatcher);
}
dispatcher.monitorUpdated(monitorId); // here dispatcher has null value... or it's actually not
}
The refreshed(RefreshEvent event) method successfully receives updates from the Quartz scheduler through the interface and sends them back to the controller.
The question is: how can a singleton-scoped bean have different property values for the different objects it is applied to, and why does such a property become null even though I never set it to null?
UPD:
@MessageMapping("/monitor/{monitorId}/suspend")
public void handleSuspend(@DestinationVariable String monitorId) {
if (logger.isDebugEnabled()) {
logger.debug(">>>Handling suspend request for monitor with id " + monitorId);
}
try {
monitoringFacade.disableUrlMonitoring(monitorId);
monitorUpdated(monitorId);// force client update
} catch (Exception e) {
logger.log(Level.ERROR, "Exception: ", e);
}
}
@MessageMapping("/monitor/{monitorId}/resume")
public void handleResume(@DestinationVariable String monitorId) {
if (logger.isDebugEnabled()) {
logger.debug(">>>Handling resume request for monitor with id " + monitorId);
}
try {
monitoringFacade.enableUrlMonitoring(monitorId);
monitorUpdated(monitorId);// force client update
} catch (Exception e) {
logger.log(Level.ERROR, "Exception: ", e);
}
}
I have an EJB timer (EJB 2.1) that uses a bean-managed transaction.
The timer code calls a business method that deals with two resources in a single transaction: a database and an MQ queue server.
The application server is WebSphere Application Server 7 (WAS). In order to ensure consistency across the two resources (database and queue manager), we have enabled the option to support two-phase commit in WAS. This is to ensure that in case of any exception during a database operation, the message posted to the queue is rolled back along with the database rollback, and vice versa.
Below is the flow explained:
When a timeout occurs in the timer code, startProcess() in DIRECTProcessor, our business method, is called. This method has a try block containing a call to createPostXMLMessage() in the same class, which in turn calls postMessage() in the class PostMsg.
The issue is that when we encounter a database exception in createPostXMLMessage(), the message posted earlier does not roll back, although the database part is rolled back successfully. Please help.
In ejb-jar.xml
<session id="Transmit">
<ejb-name>Transmit</ejb-name>
<home>com.TransmitHome</home>
<remote>com.Transmit</remote>
<ejb-class>com.TransmitBean</ejb-class>
<session-type>Stateless</session-type>
<transaction-type>Bean</transaction-type>
</session>
public class TransmitBean implements javax.ejb.SessionBean, javax.ejb.TimedObject {
public void ejbTimeout(Timer arg0) {
....
new DIRECTProcessor().startProcess(mySessionCtx);
}
}
public class DIRECTProcessor {
public String startProcess(javax.ejb.SessionContext mySessionCtx) {
....
UserTransaction ut= null;
ut = mySessionCtx.getUserTransaction();
try {
ut.begin();
createPostXMLMessage(interfaceObj, btch_id, dpId, errInd);
ut.commit();
}
catch (Exception e) {
ut.rollback();
ut=null;
}
}
public void createPostXMLMessage(ArrayList<InstrInterface> arr_instrObj, String batchId, String dpId,int errInd) throws Exception {
...
PostMsg pm = new PostMsg();
try {
pm.postMessage( q_name, final_msg.toString());
// database update operations using jdbc
}
catch (Exception e) {
throw e;
}
}
}
public class PostMsg {
public String postMessage(String qName, String message) throws Exception {
QueueConnectionFactory qcf = null; // NOTE: qcf is never initialized in this snippet; presumably it is looked up via ServiceLocator/JNDI in code elided here
Queue que = null;
QueueSession qSess = null;
QueueConnection qConn = null;
QueueSender qSender = null;
String retval = null; // declaration missing from the original snippet; the success value is presumably assigned in elided code
que = ServiceLocator.getInstance().getQ(qName);
try {
qConn = (QueueConnection) qcf.createQueueConnection(
Constants.QCONN_USER, Constants.QCONN_PSWD);
qSess = qConn.createQueueSession(true, Session.AUTO_ACKNOWLEDGE);
qSender = qSess.createSender(que);
TextMessage txt = qSess.createTextMessage();
txt.setJMSDestination(que);
txt.setText(message);
qSender.send(txt);
} catch (Exception e) {
retval = Constants.ERROR;
e.printStackTrace();
throw e;
} finally {
closeQSender(qSender);
closeQSession(qSess);
closeQConn(qConn);
}
return retval;
}
}
I have a RESTful API that makes use of an entity class annotated with @EntityListeners. And in EntityListner.java, I have a method annotated with @PostPersist. So, when that event fires, I want to extract all the information regarding the entity that just got persisted to the database. But when I try to do that, GlassFish generates an exception and the method in the EntityListner class does not execute as expected. Here is the code:
public class EntityListner {
private final static String QUEUE_NAME = "customer";
@PostUpdate
@PostPersist
public void notifyOther(Customer entity){
CustomerFacadeREST custFacade = new CustomerFacadeREST();
Integer customerId = entity.getCustomerId();
String custData = custFacade.find(customerId).toString();
String successMessage = "Entity added to server";
try{
ConnectionFactory factory = new ConnectionFactory();
factory.setHost("localhost");
Connection connection = factory.newConnection();
Channel channel = connection.createChannel();
channel.queueDeclare(QUEUE_NAME, false, false, false, null);
// channel.basicPublish("", QUEUE_NAME, null, successMessage .getBytes());
channel.basicPublish("", QUEUE_NAME, null, custData.getBytes());
channel.close();
connection.close();
}
catch(IOException ex){
}
finally{
}
}
}
If I send the commented-out successMessage instead of custData, everything works fine.
http://www.objectdb.com/java/jpa/persistence/event says the following regarding the entity lifecycle methods, and I am wondering if that is the situation here.
To avoid conflicts with the original database operation that fires the entity lifecycle event (which is still in progress) callback methods should not call EntityManager or Query methods and should not access any other entity objects
Any ideas?
As that paragraph says, the standard does not support calling entity manager methods from inside entity listeners. I strongly recommend building custData from the persisted entity, as Heiko Rupp says in his answer. If that is not feasible, consider:
notifying asynchronously. I do not really recommend this as it probably depends on timing to work properly:
public class EntityListener {
private final static String QUEUE_NAME = "customer";
private ScheduledExecutorService getExecutorService() {
// get asynchronous executor service from somewhere
// you will most likely need a ScheduledExecutorService
// instance, in order to schedule notification with
// some delay. Alternatively, you could try Thread.sleep(...)
// before notifying, but that is ugly.
}
private void doNotifyOtherInNewTransaction(Customer entity) {
// For all this to work correctly,
// you should execute your notification
// inside a new transaction. You might
// find it easier to do this declaratively
// by invoking some method demarcated
// with REQUIRES_NEW
try {
// (begin transaction)
doNotifyOther(entity);
// (commit transaction)
} catch (Exception ex) {
// (rollback transaction)
}
}
@PostUpdate
@PostPersist
public void notifyOther(final Customer entity) {
ScheduledExecutorService executor = getExecutorService();
// This is the "raw" version
// Most probably you will need to call
// executor.schedule and specify a delay,
// in order to give the old transaction some time
// to flush and commit
executor.execute(new Runnable() {
@Override
public void run() {
doNotifyOtherInNewTransaction(entity);
}
});
}
// This is exactly as your original code
public void doNotifyOther(Customer entity) {
CustomerFacadeREST custFacade = new CustomerFacadeREST();
Integer customerId = entity.getCustomerId();
String custData = custFacade.find(customerId).toString();
String successMessage = "Entity added to server";
try {
ConnectionFactory factory = new ConnectionFactory();
factory.setHost("localhost");
Connection connection = factory.newConnection();
Channel channel = connection.createChannel();
channel.queueDeclare(QUEUE_NAME, false, false, false, null);
channel.basicPublish("", QUEUE_NAME, null, custData.getBytes());
channel.close();
connection.close();
}
catch(IOException ex){
}
finally {
}
}
}
registering some post-commit trigger (my recommendation if Heiko Rupp's answer is not feasible). This is not timing-dependent, because it is guaranteed to execute after you have flushed to the database. Furthermore, it has the added benefit that you don't notify if you end up rolling back your transaction. The way to do this depends on what you are using for transaction management, but basically you create an instance of some particular class and then register it in some registry. For example, with JTA it would be:
public class EntityListener {
private final static String QUEUE_NAME = "customer";
private Transaction getTransaction() {
// get current JTA transaction reference from somewhere
}
private void doNotifyOtherInNewTransaction(Customer entity) {
// For all this to work correctly,
// you should execute your notification
// inside a new transaction. You might
// find it easier to do this declaratively
// by invoking some method demarcated
// with REQUIRES_NEW
try {
// (begin transaction)
doNotifyOther(entity);
// (commit transaction)
} catch (Exception ex) {
// (rollback transaction)
}
}
@PostUpdate
@PostPersist
public void notifyOther(final Customer entity) {
Transaction transaction = getTransaction();
transaction.registerSynchronization(new Synchronization() {
@Override
public void beforeCompletion() { }
@Override
public void afterCompletion(int status) {
if (status == Status.STATUS_COMMITTED) {
doNotifyOtherInNewTransaction(entity);
}
}
});
}
// This is exactly as your original code
public void doNotifyOther(Customer entity) {
CustomerFacadeREST custFacade = new CustomerFacadeREST();
Integer customerId = entity.getCustomerId();
String custData = custFacade.find(customerId).toString();
String successMessage = "Entity added to server";
try {
ConnectionFactory factory = new ConnectionFactory();
factory.setHost("localhost");
Connection connection = factory.newConnection();
Channel channel = connection.createChannel();
channel.queueDeclare(QUEUE_NAME, false, false, false, null);
channel.basicPublish("", QUEUE_NAME, null, custData.getBytes());
channel.close();
connection.close();
}
catch(IOException ex){
}
finally {
}
}
}
If you are using Spring transactions, the code will be very similar, with just some class name changes.
Some pointers:
ScheduledExecutorService Javadoc, for triggering asynchronous actions.
transaction synchronization with JTA: Transaction Javadoc and Synchronization Javadoc
EJB transaction demarcation
the Spring equivalents: TransactionSynchronizationManager Javadoc and TransactionSynchronization Javadoc.
And some Spring documentation on Spring transactions
I guess you may be seeing an NPE, as you may be violating the paragraph you were citing:
String custData = custFacade.find(customerId).toString();
The find seems to be implicitly querying for the object (as you describe), which may not yet be fully synced to the database and thus not accessible.
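In other words, build the payload from the entity instance the callback already receives instead of re-querying through the facade. A minimal sketch (assuming Customer exposes the fields you want to publish):
@PostUpdate
@PostPersist
public void notifyOther(Customer entity) {
    // Use the in-memory state of the just-persisted entity; no EntityManager
    // or Query calls, so the lifecycle-callback restriction is not violated.
    String custData = entity.toString(); // or serialize selected fields explicitly
    // ... publish custData to the queue as in the original code ...
}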
In his answer, gpeche noted that it's fairly straightforward to translate his option #2 into Spring. To save others the trouble of doing that:
package myapp.entity.listener;
import javax.persistence.PostPersist;
import javax.persistence.PostUpdate;
import org.springframework.transaction.support.TransactionSynchronizationAdapter;
import org.springframework.transaction.support.TransactionSynchronizationManager;
import myapp.util.ApplicationContextProvider;
import myapp.entity.NetScalerServer;
import myapp.service.LoadBalancerService;
public class NetScalerServerListener {
@PostPersist
@PostUpdate
public void postSave(final NetScalerServer server) {
TransactionSynchronizationManager.registerSynchronization(
new TransactionSynchronizationAdapter() {
@Override
public void afterCommit() { postSaveInNewTransaction(server); }
});
}
private void postSaveInNewTransaction(NetScalerServer server) {
ApplicationContext appContext =
ApplicationContextProvider.getApplicationContext();
LoadBalancerService lbService = appContext.getBean(LoadBalancerService.class);
lbService.updateEndpoints(server);
}
}
The service method (here, updateEndpoints()) can use the JPA EntityManager (in my case, to issue queries and update entities) without any issue. Be sure to annotate the updateEndpoints() method with @Transactional(propagation = Propagation.REQUIRES_NEW) to ensure that there's a new transaction to perform the persistence operations.
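For reference, the service side might look roughly like this (a sketch; the class and method names simply follow the listener above):
@Service
public class LoadBalancerService {
    @PersistenceContext
    private EntityManager entityManager;
    // REQUIRES_NEW gives this method its own transaction, since afterCommit()
    // runs only once the original transaction has already completed.
    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void updateEndpoints(NetScalerServer server) {
        // queries and entity updates via entityManager go here
    }
}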
Not directly related to the question, but ApplicationContextProvider is just a custom class to return an app context since JPA 2.0 entity listeners aren't managed components, and I'm too lazy to use #Configurable here. Here it is for completeness:
package myapp.util;
import org.springframework.beans.BeansException;
import org.springframework.context.ApplicationContext;
import org.springframework.context.ApplicationContextAware;
public class ApplicationContextProvider implements ApplicationContextAware {
private static ApplicationContext applicationContext;
public static ApplicationContext getApplicationContext() {
return applicationContext;
}
@Override
public void setApplicationContext(ApplicationContext appContext)
throws BeansException {
applicationContext = appContext;
}
}