How to use JPA lifecycle events to get entity data - Java

I have a RESTful API that makes use of an entity class annotated with @EntityListeners. In EntityListener.java, I have a method annotated with @PostPersist. So, when that event fires, I want to extract all the information about the entity that was just persisted to the database. But when I try to do that, GlassFish throws an exception and the method in the EntityListener class does not execute as expected. Here is the code:
public class EntityListener {
private final static String QUEUE_NAME = "customer";
@PostUpdate
@PostPersist
public void notifyOther(Customer entity){
CustomerFacadeREST custFacade = new CustomerFacadeREST();
Integer customerId = entity.getCustomerId();
String custData = custFacade.find(customerId).toString();
String successMessage = "Entity added to server";
try{
ConnectionFactory factory = new ConnectionFactory();
factory.setHost("localhost");
Connection connection = factory.newConnection();
Channel channel = connection.createChannel();
channel.queueDeclare(QUEUE_NAME, false, false, false, null);
// channel.basicPublish("", QUEUE_NAME, null, successMessage.getBytes());
channel.basicPublish("", QUEUE_NAME, null, custData.getBytes());
channel.close();
connection.close();
}
catch(IOException ex){
}
finally{
}
}
}
If I send the commented-out successMessage string instead of custData, everything works fine.
http://www.objectdb.com/java/jpa/persistence/event says the following regarding the entity lifecycle methods, and I am wondering if that is the situation here.
To avoid conflicts with the original database operation that fires the entity lifecycle event (which is still in progress) callback methods should not call EntityManager or Query methods and should not access any other entity objects
Any ideas?

As that paragraph says, the standard does not support calling entity manager methods from inside entity listeners. I strongly recommend building custData from the persisted entity, as Heiko Rupp says in his answer. If that is not feasible, consider:
notifying asynchronously. I do not really recommend this as it probably depends on timing to work properly:
public class EntityListener {
private final static String QUEUE_NAME = "customer";
private ScheduledExecutorService getExecutorService() {
// get asynchronous executor service from somewhere
// you will most likely need a ScheduledExecutorService
// instance, in order to schedule notification with
// some delay. Alternatively, you could try Thread.sleep(...)
// before notifying, but that is ugly.
}
private void doNotifyOtherInNewTransaction(Customer entity) {
// For all this to work correctly,
// you should execute your notification
// inside a new transaction. You might
// find it easier to do this declaratively
// by invoking some method demarcated
// with REQUIRES_NEW
try {
// (begin transaction)
doNotifyOther(entity);
// (commit transaction)
} catch (Exception ex) {
// (rollback transaction)
}
}
@PostUpdate
@PostPersist
public void notifyOther(final Customer entity) {
ScheduledExecutorService executor = getExecutorService();
// This is the "raw" version
// Most probably you will need to call
// executor.schedule and specify a delay,
// in order to give the old transaction some time
// to flush and commit
executor.execute(new Runnable() {
@Override
public void run() {
doNotifyOtherInNewTransaction(entity);
}
});
}
// This is exactly as your original code
public void doNotifyOther(Customer entity) {
CustomerFacadeREST custFacade = new CustomerFacadeREST();
Integer customerId = entity.getCustomerId();
String custData = custFacade.find(customerId).toString();
String successMessage = "Entity added to server";
try {
ConnectionFactory factory = new ConnectionFactory();
factory.setHost("localhost");
Connection connection = factory.newConnection();
Channel channel = connection.createChannel();
channel.queueDeclare(QUEUE_NAME, false, false, false, null);
channel.basicPublish("", QUEUE_NAME, null, custData.getBytes());
channel.close();
connection.close();
}
catch(IOException ex){
}
finally {
}
}
}
registering some post-commit trigger (my recommendation if Heiko Rupp's answer is not feasible). This is not timing-dependent because it is guaranteed to execute after you have flushed to the database. Furthermore, it has the added benefit that you don't notify if you end up rolling back your transaction. The way to do this depends on what you are using for transaction management, but basically you create an instance of some particular interface and then register it in some registry. For example, with JTA it would be:
public class EntityListener {
private final static String QUEUE_NAME = "customer";
private Transaction getTransaction() {
// get current JTA transaction reference from somewhere
}
private void doNotifyOtherInNewTransaction(Customer entity) {
// For all this to work correctly,
// you should execute your notification
// inside a new transaction. You might
// find it easier to do this declaratively
// by invoking some method demarcated
// with REQUIRES_NEW
try {
// (begin transaction)
doNotifyOther(entity);
// (commit transaction)
} catch (Exception ex) {
// (rollback transaction)
}
}
@PostUpdate
@PostPersist
public void notifyOther(final Customer entity) {
Transaction transaction = getTransaction();
transaction.registerSynchronization(new Synchronization() {
@Override
public void beforeCompletion() { }
@Override
public void afterCompletion(int status) {
if (status == Status.STATUS_COMMITTED) {
doNotifyOtherInNewTransaction(entity);
}
}
});
}
// This is exactly as your original code
public void doNotifyOther(Customer entity) {
CustomerFacadeREST custFacade = new CustomerFacadeREST();
Integer customerId = entity.getCustomerId();
String custData = custFacade.find(customerId).toString();
String successMessage = "Entity added to server";
try {
ConnectionFactory factory = new ConnectionFactory();
factory.setHost("localhost");
Connection connection = factory.newConnection();
Channel channel = connection.createChannel();
channel.queueDeclare(QUEUE_NAME, false, false, false, null);
channel.basicPublish("", QUEUE_NAME, null, custData.getBytes());
channel.close();
connection.close();
}
catch(IOException ex){
}
finally {
}
}
}
If you are using Spring transactions, the code will be very similar, with just some class name changes.
Some pointers:
ScheduledExecutorService Javadoc, for triggering asynchronous actions.
transaction synchronization with JTA: Transaction Javadoc and Synchronization Javadoc
EJB transaction demarcation
the Spring equivalents: TransactionSynchronizationManager Javadoc and TransactionSynchronization Javadoc.
And some Spring documentation on Spring transactions

I guess you may be seeing an NPE, as you may be violating the paragraph you were citing:
String custData = custFacade.find(customerId).toString();
The find seems to be implicitly querying for the object (as you describe), which may not yet be fully synced to the database and thus not yet accessible.
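A minimal sketch of the safer approach, building the payload from the entity instance the callback already receives, so no EntityManager or Query call is needed:
@PostUpdate
@PostPersist
public void notifyOther(Customer entity) {
    // Use the in-memory entity handed to the callback instead of
    // re-querying it through the REST facade / EntityManager.
    String custData = entity.toString();
    // ... then publish custData to RabbitMQ exactly as in the original code
}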

In his answer, gpeche noted that it's fairly straightforward to translate his option #2 into Spring. To save others the trouble of doing that:
package myapp.entity.listener;
import javax.persistence.PostPersist;
import javax.persistence.PostUpdate;
import org.springframework.context.ApplicationContext;
import org.springframework.transaction.support.TransactionSynchronizationAdapter;
import org.springframework.transaction.support.TransactionSynchronizationManager;
import myapp.util.ApplicationContextProvider;
import myapp.entity.NetScalerServer;
import myapp.service.LoadBalancerService;
public class NetScalerServerListener {
@PostPersist
@PostUpdate
public void postSave(final NetScalerServer server) {
TransactionSynchronizationManager.registerSynchronization(
new TransactionSynchronizationAdapter() {
@Override
public void afterCommit() { postSaveInNewTransaction(server); }
});
}
private void postSaveInNewTransaction(NetScalerServer server) {
ApplicationContext appContext =
ApplicationContextProvider.getApplicationContext();
LoadBalancerService lbService = appContext.getBean(LoadBalancerService.class);
lbService.updateEndpoints(server);
}
}
The service method (here, updateEndpoints()) can use the JPA EntityManager (in my case, to issue queries and update entities) without any issue. Be sure to annotate the updateEndpoints() method with @Transactional(propagation = Propagation.REQUIRES_NEW) to ensure that there's a new transaction to perform the persistence operations.
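For reference, a minimal sketch of what such a service method could look like (the method body is only a placeholder, not the actual implementation):
package myapp.service;

import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

import myapp.entity.NetScalerServer;

@Service
public class LoadBalancerService {

    @PersistenceContext
    private EntityManager entityManager;

    // REQUIRES_NEW starts a fresh transaction for this call, so it is safe
    // to query and update entities here even though the caller is an
    // after-commit callback of the original transaction.
    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void updateEndpoints(NetScalerServer server) {
        // placeholder body; the real logic lives in the actual service
        entityManager.merge(server);
    }
}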
Not directly related to the question, but ApplicationContextProvider is just a custom class to return an app context, since JPA 2.0 entity listeners aren't managed components and I'm too lazy to use @Configurable here. Here it is for completeness:
package myapp.util;
import org.springframework.beans.BeansException;
import org.springframework.context.ApplicationContext;
import org.springframework.context.ApplicationContextAware;
public class ApplicationContextProvider implements ApplicationContextAware {
private static ApplicationContext applicationContext;
public static ApplicationContext getApplicationContext() {
return applicationContext;
}
@Override
public void setApplicationContext(ApplicationContext appContext)
throws BeansException {
applicationContext = appContext;
}
}
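For this to work, ApplicationContextProvider must itself be registered as a Spring bean so the container invokes setApplicationContext() at startup. One way to do that, sketched with Java config (the config class name is made up):
package myapp.config;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import myapp.util.ApplicationContextProvider;

@Configuration
public class ListenerSupportConfig {

    // Registering the provider as a bean is what triggers the
    // setApplicationContext() callback at startup.
    @Bean
    public ApplicationContextProvider applicationContextProvider() {
        return new ApplicationContextProvider();
    }
}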

Related

Initializing Consumer Module with Play! Framework

I have a Play application with a ConsumerService that I want to start and have listen to a particular RabbitMQ queue on startup. In Play 2.5, my understanding is that this is now done via a Guice module, so I have a Module.java class in my app's root directory that looks like this:
public class Module extends AbstractModule {
@Override
protected void configure() {
bind(ConsumerService.class).asEagerSingleton();
}
}
Here is my ConsumerService class:
@Singleton
public class ConsumerService {
private static final String TASK_QUEUE_NAME = "queue";
private final JPAApi jpaApi;
@Inject
public ConsumerService(JPAApi api) throws Exception {
this.jpaApi = api;
pullMessages();
}
@Transactional
public void pullMessages() throws Exception {
ConnectionFactory factory = new ConnectionFactory();
factory.setHost("localhost");
final Connection connection = factory.newConnection();
final Channel channel = connection.createChannel();
channel.queueDeclare(TASK_QUEUE_NAME, true, false, false, null);
Logger.info(" [*] Waiting for messagez. To exit press CTRL+C");
channel.basicQos(1);
final Consumer consumer = new DefaultConsumer(channel) {
@Override
public void handleDelivery(String consumerTag, Envelope envelope, AMQP.BasicProperties properties, byte[] body) throws IOException {
try {
JPA.em();
} catch (Exception e) {
System.out.println("JPA.em() failed: " + e.getMessage());
}
try {
jpaApi.em();
} catch (Exception e) {
System.out.println("jpaApi.em() failed: " + e.getMessage());
}
}
};
channel.basicConsume(TASK_QUEUE_NAME, false, consumer);
}
}
Clearly, binding this service as an eager singleton has its downsides: attempting to get an EntityManager via either of these methods throws an exception. My understanding is that this is because the class is bound/loaded before Play has initialized the EntityManager factory. Basically, the application hasn't started yet.
Forgive me, but even though I've worked with JPA for years, I find this very confusing and am not sure what my best approach should be for working around the basic issue: start up a "listener" that ultimately needs to do some DB action when it consumes a message.
I'm curious if there's a way I can put the "handleDelivery" method in a transaction, or redesign my initialization flow such that I can call/inject the jpaApi cleanly.
Also, is there any other way to start up this consumer in Play 2.5 than the way I'm doing it here? I'm having trouble finding one.
I've looked into the JPAApi.withTransaction documentation, but I'm hoping there's a better way that I'm not aware of.
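For illustration only, wrapping the database work in jpaApi.withTransaction inside handleDelivery is one way to get an EntityManager bound to the consumer thread. A sketch of the Consumer from the code above, assuming Play 2.5's withTransaction(Runnable) overload and an extra import of javax.persistence.EntityManager (the persistence logic is a placeholder):
final Consumer consumer = new DefaultConsumer(channel) {
    @Override
    public void handleDelivery(String consumerTag, Envelope envelope,
                               AMQP.BasicProperties properties, byte[] body) throws IOException {
        // withTransaction binds an EntityManager to this worker thread for
        // the duration of the block and commits (or rolls back) for us.
        jpaApi.withTransaction(() -> {
            EntityManager em = jpaApi.em();
            // placeholder: do whatever DB work the message requires with em
        });
    }
};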

Broadcasting with Jersey SSE: Detect closed connection

I believe this question is not a duplicate of Server sent event with Jersey: EventOutput is not closed after client drops, but probably related to Jersey Server-Sent Events - write to broken connection does not throw exception.
In chapter 15.4.2 of the Jersey documentation, the SseBroadcaster is described:
However, the SseBroadcaster internally identifies and handles also client disconnects. When a client closes the connection the broadcaster detects this and removes the stale connection from the internal collection of the registered EventOutputs as well as it frees all the server-side resources associated with the stale connection.
I cannot confirm this. In the following testcase, I see the subclassed SseBroadcaster's onClose() method never being called: not when the EventInput is closed, and not when another message is broadcasted.
public class NotificationsResourceTest extends JerseyTest {
final static Logger log = LoggerFactory.getLogger(NotificationsResourceTest.class);
final static CountingSseBroadcaster broadcaster = new CountingSseBroadcaster();
public static class CountingSseBroadcaster extends SseBroadcaster {
final AtomicInteger connectionCounter = new AtomicInteger(0);
public EventOutput createAndAttachEventOutput() {
EventOutput output = new EventOutput();
if (add(output)) {
int cons = connectionCounter.incrementAndGet();
log.debug("Active connection count: "+ cons);
}
return output;
}
@Override
public void onClose(final ChunkedOutput<OutboundEvent> output) {
int cons = connectionCounter.decrementAndGet();
log.debug("A connection has been closed. Active connection count: "+ cons);
}
@Override
public void onException(final ChunkedOutput<OutboundEvent> chunkedOutput, final Exception exception) {
log.trace("An exception has been detected", exception);
}
public int getConnectionCount() {
return connectionCounter.get();
}
}
#Path("notifications")
public static class NotificationsResource {
@GET
@Produces(SseFeature.SERVER_SENT_EVENTS)
public EventOutput subscribe() {
log.debug("New stream subscription");
EventOutput eventOutput = broadcaster.createAndAttachEventOutput();
return eventOutput;
}
}
@Override
protected Application configure() {
ResourceConfig config = new ResourceConfig(NotificationsResource.class);
config.register(SseFeature.class);
return config;
}
@Test
public void test() throws Exception {
// check that there are no connections
assertEquals(0, broadcaster.getConnectionCount());
// connect subscriber
log.info("Connecting subscriber");
EventInput eventInput = target("notifications").request().get(EventInput.class);
assertFalse(eventInput.isClosed());
// now there are connections
assertEquals(1, broadcaster.getConnectionCount());
// push data
log.info("Broadcasting data");
String payload = UUID.randomUUID().toString();
OutboundEvent chunk = new OutboundEvent.Builder()
.mediaType(MediaType.TEXT_PLAIN_TYPE)
.name("message")
.data(payload)
.build();
broadcaster.broadcast(chunk);
// read data
log.info("Reading data");
InboundEvent inboundEvent = eventInput.read();
assertNotNull(inboundEvent);
assertEquals(payload, inboundEvent.readData());
// close subscription
log.info("Closing subscription");
eventInput.close();
assertTrue(eventInput.isClosed());
// at this point, the subscriber has disconnected itself,
// but Jersey doesn't realise that
assertEquals(1, broadcaster.getConnectionCount());
// wait, give TCP a chance to close the connection
log.debug("Sleeping for some time");
Thread.sleep(10000);
// push data again, this should really flush out the not-connected client
log.info("Broadcasting data again");
broadcaster.broadcast(chunk);
Thread.sleep(100);
// there is no subscriber anymore
assertEquals(0, broadcaster.getConnectionCount()); // FAILS!
}
}
Maybe JerseyTest is not a good way to test this. In a less ... clinical setup, where a JavaScript EventSource is used, I see onClose() being called, but only after a message is broadcasted on the previously closed connection.
What am I doing wrong?
Why doesn't SseBroadcaster detect the closing of the connection by the client?
Follow-up
I've found JERSEY-2833 which was rejected with Works as designed:
According to the Jersey Documentation in SSE chapter (https://jersey.java.net/documentation/latest/sse.html) in 15.4.1 it's mentioned that Jersey does not explicitly close the connection, it's the responsibility of the resource method or the client.
What does that mean exactly? Should the resource enforce a timeout and kill all active and closed-by-client connections?
In the documentation of the constructor org.glassfish.jersey.media.sse.SseBroadcaster.SseBroadcaster(), it says:
Creates a new instance. If this constructor is called by a subclass, it assumes the the reason for the subclass to exist is to implement onClose(org.glassfish.jersey.server.ChunkedOutput) and onException(org.glassfish.jersey.server.ChunkedOutput, Exception) methods, so it adds the newly created instance as the listener. To avoid this, subclasses may call SseBroadcaster(Class) passing their class as an argument.
So you should not rely on the default constructor; instead, implement your own constructor that invokes super with your class:
public CountingSseBroadcaster(){
super(CountingSseBroadcaster.class);
}
I believe it might be better to set a timeout on your resource and kill only that connection, for example:
#Path("notifications")
public static class NotificationsResource {
@GET
@Produces(SseFeature.SERVER_SENT_EVENTS)
public EventOutput subscribe() {
log.debug("New stream subscription");
EventOutput eventOutput = broadcaster.createAndAttachEventOutput();
new Timer().schedule(new TimerTask()
{
@Override
public void run()
{
try {
eventOutput.close();
} catch (IOException e) {
log.warn("Could not close the event output", e);
}
}
}, 10000); // 10 second timeout
return eventOutput;
}
}
I'm wondering if by subclassing you may have changed the behaviour.
@Override
public void onClose(final ChunkedOutput<OutboundEvent> output) {
int cons = connectionCounter.decrementAndGet();
log.debug("A connection has been closed. Active connection count: "+ cons);
}
In this override you don't close the ChunkedOutput, so it won't release the connection. Could this be the problem?
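If that is the case, a sketch of what explicitly closing the output in the override might look like (assuming java.io.IOException is imported; closing may be a no-op if the broadcaster has already released the output):
@Override
public void onClose(final ChunkedOutput<OutboundEvent> output) {
    int cons = connectionCounter.decrementAndGet();
    log.debug("A connection has been closed. Active connection count: " + cons);
    try {
        // explicitly release the underlying output and its connection
        output.close();
    } catch (IOException e) {
        log.warn("Could not close chunked output", e);
    }
}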

MQ Queue transaction not rolled back in a 2 phase transaction

I have an EJB timer (EJB 2.1) which has bean managed transaction.
The timer code calls a business method which deals with 2 resources in a single transaction. One is database and other one is MQ queue server.
The application server used is WebSphere Application Server 7 (WAS). In order to ensure consistency across the two resources (database and queue manager), we have enabled the option to support two-phase commit in WAS. This is to ensure that in case of any exception during the database operation, the message posted to the queue is rolled back along with the database rollback, and vice versa.
Below is the flow explained:
When a timeout occurs in the timer code, startProcess() in DIRECTProcessor is called, which is our business method. This method has a try block within which there is a call to createPostXMLMessage() in the same class. This in turn calls postMessage() in the PostMsg class.
The issue is that when we encounter a database exception in createPostXMLMessage(), the message posted earlier is not rolled back, although the database part is rolled back successfully. Please help.
In ejb-jar.xml
<session id="Transmit">
<ejb-name>Transmit</ejb-name>
<home>com.TransmitHome</home>
<remote>com.Transmit</remote>
<ejb-class>com.TransmitBean</ejb-class>
<session-type>Stateless</session-type>
<transaction-type>Bean</transaction-type>
</session>
public class TransmitBean implements javax.ejb.SessionBean, javax.ejb.TimedObject {
public void ejbTimeout(Timer arg0) {
....
new DIRECTProcessor().startProcess(mySessionCtx);
}
}
public class DIRECTProcessor {
public String startProcess(javax.ejb.SessionContext mySessionCtx) {
....
UserTransaction ut= null;
ut = mySessionCtx.getUserTransaction();
try {
ut.begin();
createPostXMLMessage(interfaceObj, btch_id, dpId, errInd);
ut.commit();
}
catch (Exception e) {
ut.rollback();
ut=null;
}
}
public void createPostXMLMessage(ArrayList<InstrInterface> arr_instrObj, String batchId, String dpId,int errInd) throws Exception {
...
PostMsg pm = new PostMsg();
try {
pm.postMessage( q_name, final_msg.toString());
// database update operations using jdbc
}
catch (Exception e) {
throw e;
}
}
}
public class PostMsg {
public String postMessage(String qName, String message) throws Exception {
QueueConnectionFactory qcf = null;
Queue que = null;
QueueSession qSess = null;
QueueConnection qConn = null;
QueueSender qSender = null;
que = ServiceLocator.getInstance().getQ(qName);
try {
qConn = (QueueConnection) qcf.createQueueConnection(
Constants.QCONN_USER, Constants.QCONN_PSWD);
qSess = qConn.createQueueSession(true, Session.AUTO_ACKNOWLEDGE);
qSender = qSess.createSender(que);
TextMessage txt = qSess.createTextMessage();
txt.setJMSDestination(que);
txt.setText(message);
qSender.send(txt);
} catch (Exception e) {
retval = Constants.ERROR;
e.printStackTrace();
throw e;
} finally {
closeQSender(qSender);
closeQSession(qSess);
closeQConn(qConn);
}
return retval;
}
}

Problems with threads and hibernate sessions

I'm using Hibernate 3 and Spring.
When I start a thread, an exception occurs:
org.hibernate.HibernateException: Illegal attempt to associate a collection with two open sessions
I don't know how to detach entities or close the session with this architecture.
I would appreciate some help.
CommunicationService.sendCommunications() code:
public void sendCommunications(HibernateMessageToSendRepository messageToSendRepository) {
Long messageId = new Long(41); // this is only for a test; the idea is to get a list of ids and generate a thread group
MessageSender sender = new SmsSender(messageId, messageToSendRepository);
sender.start();
}
Invoking sendCommunications code:
ApplicationContext appCont = new ClassPathXmlApplicationContext("appContext.xml");
ServiceLocator serviceLocator = ServiceLocator.getInstance();
HibernateMessageToSendRepository messageToSendRepository = (HibernateMessageToSendRepository) appCont.getBean("messageToSendRepository");
CommunicationService communication = serviceLocator.getCommunicationService();
communication.sendCommunications(messageToSendRepository);
SmsSender (extends from MessageSender (thread)) code:
public class SmsSender extends MessageSender {
public SmsSender(Long messageToSendId, HibernateMessageToSendRepository messageToSendRepository) {
super(messageToSendRepository);
MessageToSend messageToSendNew = this.messageToSendRepository.getById(messageToSendId);
this.messageToSend = messageToSendNew;
}
public void run() {
try {
MessageToSendSms messageToSendSms = (MessageToSendSms) this.messageToSend;
Iterator<CustomerByMessage> itCbmsgs = messageToSendSms.getCustomerByMessage().iterator();
while (itCbmsgs.hasNext()) {
CustomerByMessage cbm = (CustomerByMessage) itCbmsgs.next();
//sms sending
this.getGateway().sendSMS(cbm.getBody(), cbm.getCellphone());
cbm.setStatus(CustomerByMessageStatus.SENT_OK);
cbm.setSendingDate(Calendar.getInstance().getTime());
}
messageToSendSms.getMessage().setStatus(MessageToSendStatus.ALL_MESSAGES_SENT);
this.messageToSendRepository.update(messageToSendSms);
} catch (Exception e) {
this.log.error("Error en sms sender " + e.getMessage());
}
}
}
MessageToSendRepository code:
public void update(MessageToSend messageToSend) {
try {
this.getSession().update(messageToSend);
} catch (HibernateException e) {
this.log.error(e.getMessage(), e);
throw e;
}
}
You need to detach messageToSendNew after you retrieve it, but before you share it with another thread. You can detach the object by calling Session.close() on your Hibernate session.
Caveat: you must eagerly populate all the fields that you need.
If you need to reconnect it with a new session you can use the merge() method.
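A rough sketch of that advice as a hypothetical repository method (getByIdDetached is not part of the poster's actual API; it assumes org.hibernate.Hibernate and org.hibernate.Session are imported):
public MessageToSend getByIdDetached(Long id) {
    Session session = this.getSession();
    MessageToSendSms message = (MessageToSendSms) session.get(MessageToSendSms.class, id);
    // eagerly initialize everything the sender thread will touch
    Hibernate.initialize(message.getCustomerByMessage());
    // closing the session detaches the loaded objects, per the advice above
    session.close();
    return message;
}

// Later, in SmsSender.run(), reattach the detached instance to the worker
// thread's session before updating, e.g.:
//   MessageToSendSms attached = (MessageToSendSms) session.merge(this.messageToSend);
//   messageToSendRepository.update(attached);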

Are there any good tutorials or examples on how to use Java ObjectPool/pools?

I am trying to create a pool of channels/connections to a queue server and was trying to use ObjectPool, but I am having trouble using it from the example on their site.
So far I have threads that do work, but I want each of them to grab a channel from the pool and then return it. I understand how to use it (borrowObject/returnObject) but am not sure how to create the initial pool.
Here's how channels are made in rabbitmq:
ConnectionFactory factory = new ConnectionFactory();
factory.setHost("localhost");
Connection connection = factory.newConnection();
Channel channel = connection.createChannel();
and my code just uses channel to do stuff. I'm confused because the only example I could find (on their site) starts it like this:
private ObjectPool<StringBuffer> pool;
public ReaderUtil(ObjectPool<StringBuffer> pool) {
this.pool = pool;
}
Which does not make sense to me. I realize this is common for establishing database connections, so I tried to find tutorials using databases and ObjectPool, but they seem to use DBCP, which is specific to databases (and I can't seem to adapt the logic to my queue server).
Any suggestions on how to use it? Or is there a another approach used for pools in java?
They create a class that creates objects & knows what to do when they are returned. That might be something like this for you:
public class PoolConnectionFactory extends BasePoolableObjectFactory<Connection> {
private final ConnectionFactory factory;
public PoolConnectionFactory() {
factory = new ConnectionFactory();
factory.setHost("localhost");
}
// for makeObject we'll simply return a new Connection
public Connection makeObject() throws Exception {
return factory.newConnection();
}
// when an object is returned to the pool,
// we'll clear it out
public void passivateObject(Connection con) {
// not sure what, if anything, needs to be reset on a RabbitMQ Connection here
}
// for all other methods, the no-op
// implementation in BasePoolableObjectFactory
// will suffice
}
Now you create an ObjectPool<Connection> somewhere:
ObjectPool<Connection> pool = new StackObjectPool<Connection>(new PoolConnectionFactory());
Then you can use the pool inside your threads like this:
Connection c = pool.borrowObject();
c.doSomethingWithMe();
pool.returnObject(c);
The lines that don't make sense to you are a way to pass the pool object to a different class. See the last line: they create the pool while creating the reader.
new ReaderUtil(new StackObjectPool<StringBuffer>(new StringBufferFactory()))
You'll need a custom implementation of PoolableObjectFactory to create, validate, and destroy the objects you want to pool. Then pass an instance of your factory to an ObjectPool's constructor and you're ready to start borrowing objects.
Here's some sample code. You can also look at the source code for commons-dbcp, which uses commons-pool.
import org.apache.commons.pool.BasePoolableObjectFactory;
import org.apache.commons.pool.ObjectPool;
import org.apache.commons.pool.PoolableObjectFactory;
import org.apache.commons.pool.impl.GenericObjectPool;
public class PoolExample {
public static class MyPooledObject {
public MyPooledObject() {
System.out.println("hello world");
}
public void sing() {
System.out.println("mary had a little lamb");
}
public void destroy() {
System.out.println("goodbye cruel world");
}
}
public static class MyPoolableObjectFactory extends BasePoolableObjectFactory<MyPooledObject> {
@Override
public MyPooledObject makeObject() throws Exception {
return new MyPooledObject();
}
@Override
public void destroyObject(MyPooledObject obj) throws Exception {
obj.destroy();
}
// PoolableObjectFactory has other methods you can override
// to validate, activate, and passivate objects.
}
public static void main(String[] args) throws Exception {
PoolableObjectFactory<MyPooledObject> factory = new MyPoolableObjectFactory();
ObjectPool<MyPooledObject> pool = new GenericObjectPool<MyPooledObject>(factory);
// Other ObjectPool implementations with special behaviors are available;
// see the JavaDoc for details
try {
for (int i = 0; i < 2; i++) {
MyPooledObject obj;
try {
obj = pool.borrowObject();
} catch (Exception e) {
// failed to borrow object; you get to decide how to handle this
throw e;
}
try {
// use the pooled object
obj.sing();
} catch (Exception e) {
// this object has failed us -- never use it again!
pool.invalidateObject(obj);
obj = null; // don't return it to the pool
// now handle the exception however you want
} finally {
if (obj != null) {
pool.returnObject(obj);
}
}
}
} finally {
pool.close();
}
}
}
