I want to use Netflix Ribbon as a TCP client load balancer without Spring Cloud, and I wrote the following test code.
public class App implements Runnable {

    public static String msg = "hello world";

    public BaseLoadBalancer lb;
    public RxClient<ByteBuf, ByteBuf> client;
    public Server echo;

    App() {
        lb = new BaseLoadBalancer();
        echo = new Server("localhost", 8000);
        lb.setServersList(Lists.newArrayList(echo));
        DefaultClientConfigImpl impl = DefaultClientConfigImpl.getClientConfigWithDefaultValues();
        client = RibbonTransport.newTcpClient(lb, impl);
    }

    public static void main(String[] args) throws Exception {
        for (int i = 40; i > 0; i--) {
            Thread t = new Thread(new App());
            t.start();
            t.join();
        }
        System.out.println("Main thread is finished");
    }

    public String sendAndRecvByRibbon(final String data) {
        String response = "";
        try {
            response = client.connect()
                    .flatMap(new Func1<ObservableConnection<ByteBuf, ByteBuf>, Observable<ByteBuf>>() {
                        public Observable<ByteBuf> call(ObservableConnection<ByteBuf, ByteBuf> connection) {
                            connection.writeStringAndFlush(data);
                            return connection.getInput();
                        }
                    })
                    .timeout(1, TimeUnit.SECONDS).retry(1).take(1)
                    .map(new Func1<ByteBuf, String>() {
                        public String call(ByteBuf byteBuf) {
                            return byteBuf.toString(Charset.defaultCharset());
                        }
                    })
                    .toBlocking()
                    .first();
        } catch (Exception e) {
            System.out.println(((LoadBalancingRxClientWithPoolOptions) client).getMaxConcurrentRequests());
            System.out.println(lb.getLoadBalancerStats());
        }
        return response;
    }

    public void run() {
        for (int i = 0; i < 200; i++) {
            sendAndRecvByRibbon(msg);
        }
    }
}
I find that it creates a new socket every time I call sendAndRecvByRibbon, even though poolEnabled is set to true. This confuses me; am I missing something?
Also, there seems to be no option to configure the size of the pool, only PoolMaxThreads and MaxConnectionsPerHost.
My question is: how do I use a connection pool in this simple code, and what is wrong with my sendAndRecvByRibbon? It opens a socket and then uses it only once; how can I reuse the connection? Thanks for your time.
The server is just a simple echo server written in Python 3. I commented out conn.close() because I want to keep the connection alive.
import socket
import threading
import time
import socketserver

class ThreadedTCPRequestHandler(socketserver.BaseRequestHandler):
    def handle(self):
        conn = self.request
        while True:
            client_data = conn.recv(1024)
            if not client_data:
                time.sleep(5)
            conn.sendall(client_data)
            # conn.close()

class ThreadedTCPServer(socketserver.ThreadingMixIn, socketserver.TCPServer):
    pass

if __name__ == "__main__":
    HOST, PORT = "localhost", 8000
    server = ThreadedTCPServer((HOST, PORT), ThreadedTCPRequestHandler)
    ip, port = server.server_address
    server_thread = threading.Thread(target=server.serve_forever)
    server_thread.daemon = True
    server_thread.start()
    server.serve_forever()
And the Maven POM; I just added two dependencies to the IDE's auto-generated POM.
<dependency>
<groupId>commons-configuration</groupId>
<artifactId>commons-configuration</artifactId>
<version>1.6</version>
</dependency>
<dependency>
<groupId>com.netflix.ribbon</groupId>
<artifactId>ribbon</artifactId>
<version>2.2.2</version>
</dependency>
The code for printing the source port (src_port):
@Sharable
public class InHandle extends ChannelInboundHandlerAdapter {
public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
System.out.println(ctx.channel().localAddress());
super.channelRead(ctx, msg);
}
}
public class Pipeline implements PipelineConfigurator<ByteBuf, ByteBuf> {
public InHandle handler;
Pipeline() {
handler = new InHandle();
}
public void configureNewPipeline(ChannelPipeline pipeline) {
pipeline.addFirst(handler);
}
}
And I changed
client = RibbonTransport.newTcpClient(lb, impl);
to
Pipeline pipe = new Pipeline();
client = RibbonTransport.newTcpClient(lb, pipe, impl, new DefaultLoadBalancerRetryHandler(impl));
So, your App() constructor does the initialization of lb/client/etc.
Then you're starting 40 different threads with 40 different RxClient instances (each instance has its own pool by default) by calling new App() in the first for loop. To make things clear: the way you spawn multiple RxClient instances here does not allow them to share any common pool. Try to use one RxClient instance instead.
What if you change your main method like below, does it stop creating extra sockets?
public static void main( String[] args ) throws Exception
{
App app = new App(); // Create things just once
for( int i = 40; i > 0; i--)
{
Thread t = new Thread(()->app.run()); // pass the run()
t.start();
t.join();
}
System.out.println("Main thread is finished");
}
If the above does not help fully (at least it should reduce the number of created sockets by a factor of 40), can you please clarify how exactly you determine that:
i find it will create a new socket everytime i call sendAndRecvByRibbon
and what are your measurements after you update the constructor with these lines:
DefaultClientConfigImpl impl = DefaultClientConfigImpl.getClientConfigWithDefaultValues();
impl.set(CommonClientConfigKey.PoolMaxThreads,1); //Add this one and test
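For completeness, a minimal sketch of the pool-related keys you could pin while measuring (both keys are the ones you already mentioned; the values of 1 are only for the experiment):

DefaultClientConfigImpl impl = DefaultClientConfigImpl.getClientConfigWithDefaultValues();
impl.set(CommonClientConfigKey.PoolMaxThreads, 1);        // cap the pool's worker threads
impl.set(CommonClientConfigKey.MaxConnectionsPerHost, 1); // cap connections to the single echo server
client = RibbonTransport.newTcpClient(lb, impl);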
Update
Yes, looking at sendAndRecvByRibbon, it seems to lack marking the PooledConnection as no longer acquired by calling close once you don't expect any further reads from it.
As long as you expect only a single read event, just change this line
return connection.getInput();
to:
return connection.getInput().zipWith(Observable.just(connection), new Func2<ByteBuf, ObservableConnection<ByteBuf, ByteBuf>, ByteBuf>() {
@Override
public ByteBuf call(ByteBuf byteBuf, ObservableConnection<ByteBuf, ByteBuf> conn) {
conn.close();
return byteBuf;
}
});
Note that if you design a more complex protocol over TCP, the input ByteBuf can be analyzed for your specific 'end of communication' marker, which indicates the connection can be returned to the pool.
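For illustration only, a rough sketch of that idea, assuming the (made-up) convention that every reply ends with a newline:

return connection.getInput().map(new Func1<ByteBuf, ByteBuf>() {
    public ByteBuf call(ByteBuf byteBuf) {
        // assumption: '\n' marks the end of one logical reply in your protocol
        if (byteBuf.toString(Charset.defaultCharset()).endsWith("\n")) {
            connection.close(); // reply complete, hand the connection back to the pool
        }
        return byteBuf;
    }
});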
Related
I have a Vert.x application where I deploy multiple instances of verticle A (HttpVerticle.java) and multiple instances of verticle B (AerospikeVerticle.java). The Aerospike verticles need to share a single AerospikeClient. The HttpVerticle listens on port 8888 and calls AerospikeVerticle over the event bus. My questions are:
Is using sharedData the right way to share singleton client instances? Is there any other recommended/cleaner approach? I plan to create and share more such singleton objects (Cosmos DB clients, meterRegistry, etc.) in the application, and I plan to use sharedData.localMap to share them in a similar fashion.
Is it possible to use Vert.x's event loop as the backing event loop for the Aerospike client, so that the Aerospike client initialisation does not need to create its own event loop? Currently it looks like the onRecord part of the Aerospike get call runs on Aerospike's event loop.
public class SharedAerospikeClient implements Shareable {
public final EventLoops aerospikeEventLoops;
public final AerospikeClient client;
public SharedAerospikeClient() {
EventPolicy eventPolicy = new EventPolicy();
aerospikeEventLoops = new NioEventLoops(eventPolicy, 2 * Runtime.getRuntime().availableProcessors());
ClientPolicy clientPolicy = new ClientPolicy();
clientPolicy.eventLoops = aerospikeEventLoops;
client = new AerospikeClient(clientPolicy, "localhost", 3000);
}
}
Main.java
public class Main {
public static void main(String[] args) {
Vertx vertx = Vertx.vertx();
LocalMap localMap = vertx.sharedData().getLocalMap("SHARED_OBJECTS");
localMap.put("AEROSPIKE_CLIENT", new SharedAerospikeClient());
vertx.deployVerticle("com.demo.HttpVerticle", new DeploymentOptions().setInstances(2 * 4));
vertx.deployVerticle("com.demo.AerospikeVerticle", new DeploymentOptions().setInstances(2 * 4));
}
}
HttpVerticle.java
public class HttpVerticle extends AbstractVerticle {
@Override
public void start(Promise<Void> startPromise) throws Exception {
vertx.createHttpServer().requestHandler(req -> {
vertx.eventBus().request("read.aerospike", req.getParam("id"), ar -> {
req.response()
.putHeader("content-type", "text/plain")
.end(ar.result().body().toString());
System.out.println(Thread.currentThread().getName());
});
}).listen(8888, http -> {
if (http.succeeded()) {
startPromise.complete();
System.out.println("HTTP server started on port 8888");
} else {
startPromise.fail(http.cause());
}
});
}
}
AerospikeVerticle.java
public class AerospikeVerticle extends AbstractVerticle {
private SharedAerospikeClient sharedAerospikeClient;
@Override
public void start(Promise<Void> startPromise) throws Exception {
EventBus eventBus = vertx.eventBus();
sharedAerospikeClient = (SharedAerospikeClient) vertx.sharedData().getLocalMap("SHARED_OBJECTS").get("AEROSPIKE_CLIENT");
MessageConsumer<String> consumer = eventBus.consumer("read.aerospike");
consumer.handler(this::getRecord);
System.out.println("Started aerospike verticle");
startPromise.complete();
}
public void getRecord(Message<String> message) {
sharedAerospikeClient.client.get(
sharedAerospikeClient.aerospikeEventLoops.next(),
new RecordListener() {
@Override
public void onSuccess(Key key, Record record) {
if (record != null) {
String result = record.getString("value");
message.reply(result);
} else {
message.reply("not-found");
}
}
@Override
public void onFailure(AerospikeException exception) {
message.reply("error");
}
},
sharedAerospikeClient.client.queryPolicyDefault,
new Key("myNamespace", "mySet", message.body())
);
}
}
I don't know about the Aerospike Client.
Regarding sharing objects between verticles, indeed shared data maps are designed for this purpose.
However, it is easier to:
create the shared client in your main class or custom launcher
provide the client as a parameter of the verticle constructor
The Vertx interface has a deployVerticle(Supplier<Verticle>, DeploymentOptions) method which is convenient in this case:
MySharedClient client = initSharedClient();
vertx.deployVerticle(() -> new SomeVerticle(client), deploymentOptions);
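For instance, a minimal sketch of such a verticle (SomeVerticle and MySharedClient are just the placeholder names used above; the body is illustrative):

public class SomeVerticle extends AbstractVerticle {
    private final MySharedClient client;

    public SomeVerticle(MySharedClient client) {
        this.client = client; // injected once, shared by every instance
    }

    @Override
    public void start(Promise<Void> startPromise) {
        vertx.eventBus().consumer("read.aerospike", msg -> {
            // use the shared client here instead of looking it up in sharedData
        });
        startPromise.complete();
    }
}

// deploy several instances that all close over the same client:
MySharedClient client = initSharedClient();
vertx.deployVerticle(() -> new SomeVerticle(client), new DeploymentOptions().setInstances(8));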
I have joined the ranks of Vert.x lovers. However, the single-threaded model may not work for me, because at any moment my server might be handling 50 file download requests. As a workaround I have created this class:
public abstract class BackgroundExecutor<T> {
public abstract T onRun() throws Exception;
public abstract void onSuccess(T result);
public abstract void onException();
private static final int poolSize = Runtime.getRuntime().availableProcessors();
private static final long maxExecuteTime = 120000;
private static WorkerExecutor mExecutor;
private static final String BG_THREAD_TAG = "BG_THREAD";
protected RoutingContext ctx;
private boolean isThreadInBackground(){
return Thread.currentThread().getName() != null && Thread.currentThread().getName().equals(BG_THREAD_TAG);
}
//on success will not be called if exception be thrown
public BackgroundExecutor(RoutingContext ctx){
this.ctx = ctx;
if(mExecutor == null){
mExecutor = MyVertxServer.vertx.createSharedWorkerExecutor("my-worker-pool",poolSize,maxExecuteTime);
}
if(!isThreadInBackground()){
/** we are unlocking the lock before res.succeeded , because it might take long and keeps any thread waiting */
mExecutor.executeBlocking(future -> {
try{
Thread.currentThread().setName(BG_THREAD_TAG);
T result = onRun();
future.complete(result);
}catch (Exception e) {
GUI.display(e);
e.printStackTrace();
onException();
future.fail(e);
}
/** false here means they should not be parallel , and will run without order multiple times on same context*/
},false, res -> {
if(res.succeeded()){
onSuccess((T)res.result());
}
});
}else{
GUI.display("AVOIDED DUPLICATE BACKGROUND THREADING");
System.out.println("AVOIDED DUPLICATE BACKGROUND THREADING");
try{
T result = onRun();
onSuccess((T)result);
}catch (Exception e) {
GUI.display(e);
e.printStackTrace();
onException();
}
}
}
}
allowing the handlers to extend it and use it like this
public abstract class DefaultFileHandler implements MyHttpHandler{
public abstract File getFile(String suffix);
@Override
public void Handle(RoutingContext ctx, VertxUtils utils, String suffix) {
new BackgroundExecutor<Void>(ctx) {
@Override
public Void onRun() throws Exception {
File file = getFile(URLDecoder.decode(suffix, "UTF-8"));
if(file == null || !file.exists()){
utils.sendResponseAndEnd(ctx.response(),404);
return null;
}else{
utils.sendFile(ctx, file);
}
return null;
}
@Override
public void onSuccess(Void result) {}
@Override
public void onException() {
utils.sendResponseAndEnd(ctx.response(),404);
}
};
}
}
And here is how I initialize my Vert.x server:
vertx.deployVerticle(MainDeployment.class.getCanonicalName(),res -> {
if (res.succeeded()) {
GUI.display("Deployed");
} else {
res.cause().printStackTrace();
}
});
server.requestHandler(router::accept).listen(port);
And here is my MainDeployment class:
public class MainDeployment extends AbstractVerticle{
@Override
public void start() throws Exception {
// Different ways of deploying verticles
// Deploy a verticle and don't wait for it to start
for(Entry<String, MyHttpHandler> entry : MyVertxServer.map.entrySet()){
MyVertxServer.router.route(entry.getKey()).handler(new Handler<RoutingContext>() {
@Override
public void handle(RoutingContext ctx) {
String[] handlerID = ctx.request().uri().split(ctx.currentRoute().getPath());
String suffix = handlerID.length > 1 ? handlerID[1] : null;
entry.getValue().Handle(ctx, new VertxUtils(), suffix);
}
});
}
}
}
This is working just fine when and where I need it, but I still wonder whether there is a better way to handle concurrency like this in Vert.x. If so, an example would be really appreciated. Thanks a lot.
I don't fully understand your problem and reasons for your solution. Why don't you implement one verticle to handle your http uploads and deploy it multiple times? I think that handling 50 concurrent uploads should be a piece of cake for vert.x.
When deploying a verticle using a verticle name, you can specify the number of verticle instances that you want to deploy:
DeploymentOptions options = new DeploymentOptions().setInstances(16);
vertx.deployVerticle("com.mycompany.MyOrderProcessorVerticle", options);
This is useful for scaling easily across multiple cores. For example, you might have a web-server verticle to deploy and multiple cores on your machine, so you want to deploy multiple instances to utilise all the cores.
http://vertx.io/docs/vertx-core/java/#_specifying_number_of_verticle_instances
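For example, a bare-bones sketch of that approach (class and path names are made up; the point is that the verticle itself stays single-threaded and you simply scale it with setInstances):

public class FileDownloadVerticle extends AbstractVerticle {
    @Override
    public void start() {
        vertx.createHttpServer()
            .requestHandler(req -> {
                // sendFile streams the file without blocking the event loop
                req.response().sendFile("files" + req.path());
            })
            .listen(8080);
    }
}

// e.g. in main():
vertx.deployVerticle("com.example.FileDownloadVerticle", new DeploymentOptions().setInstances(16));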
Vert.x is designed so that concurrency issues do not arise in the first place.
Generally, Vert.x does not recommend a multi-threaded model, because it is not easy to handle correctly; if you choose a multi-threaded model, you have to think about shared data.
If you simply want to spread work across event loops, first check the number of CPU cores and then set the instance count accordingly:
DeploymentOptions options = new DeploymentOptions().setInstances(4);
vertx.deployVerticle("com.mycompany.MyOrderProcessorVerticle", options);
But if you have 4 CPU cores, don't configure more than 4 instances; setting the count higher than the number of cores won't improve performance.
vertx concurrency reference
http://vertx.io/docs/vertx-core/java/
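A small sketch of that sizing rule, deriving the instance count from the available cores (the verticle name is the same example as above):

int cores = Runtime.getRuntime().availableProcessors();
DeploymentOptions options = new DeploymentOptions().setInstances(cores);
vertx.deployVerticle("com.mycompany.MyOrderProcessorVerticle", options);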
I'm writing a program that connects to various TCP network devices. The GUI is made with JavaFX. The whole connection part is in its own package, "Network". Roughly described, it looks like this (I don't know much about UML, no blaming please :/ ; I just needed a way to quickly describe how my program is structured): http://i.stack.imgur.com/PSdsH.jpg
Okay, that's how it is:
The TCP classes are stored in a synchronized List in "NetworkManager". These classes hold information about the connection (how much data has been received so far, IP, MAC, etc.). The Rcv-Thread constantly tries to receive data.
Well, this is what I want:
As soon as the Rcv-Thread receives a specific message, the controller should be invoked to do something (GUI refresh or whatever). Also, the controller should stay decoupled from the "Network" module -> it is reused in another project. I want to achieve this behaviour through a custom event. In short: the TCP-Rcv-Thread needs to be able to give information to the Controller. But I don't really know how to get it all working. Let's see where I am:
I have an event class in the "Network" module.
import java.util.EventObject;
public class XEvent extends EventObject{
String message;
public XEvent(Object source, String message) {
super(source);
this.message = message;
}
public String getMessage() {
return message;
}
}
I have a listener class in the "Network" module.
import java.util.EventListener;
public interface XListener extends EventListener{
void handlerMethod1(XEvent event);
void handlerMethod2(XEvent event);
}
I tried to prepare my Rcv-Thread for firing the event:
import javax.swing.event.EventListenerList;
import java.io.IOException;
public class ReceiveDataThread implements Runnable {
protected EventListenerList listenerList = new EventListenerList();
protected void addXListener(XListener xListener) {
listenerList.add(XListener.class, xListener);
}
protected void removeListener(XListener xListener) {
listenerList.remove(XListener.class, xListener);
}
protected void fireHandlerMethod1(String message) {
XEvent event = null;
Object[] list = listenerList.getListenerList();
for (int i = 0; i < list.length; i += 2) {
if (list[i] == XListener.class) {
if (event == null) event = new XEvent(this, message);
XListener l = (XListener) list[i + 1];
l.handlerMethod1(event);
}
}
}
protected void fireHandlerMethod2(String message) {
XEvent event = null;
Object[] list = listenerList.getListenerList();
for (int i = 0; i < list.length; i += 2) {
if (list[i] == XListener.class) {
if (event == null) event = new XEvent(this, message);
XListener l = (XListener) list[i + 1];
l.handlerMethod2(event);
}
}
}
@Override
public void run() {
String s;
while (!stopThread) {
s = receiveData();
System.out.println("test");
fireHandlerMethod1(s);
}
}
}
The Controller (this class should react to the custom events) implements the listener:
public class Controller implements Initializable, XListener {
@Override
public void handlerMethod1(XEvent event) {
System.out.println("Event1: " + event.getMessage());
}
@Override
public void handlerMethod2(XEvent event) {
}
}
And from there on I'm not really sure how to make my events (fired from my Rcv-Thread) get noticed by my controller class. I think I have to add a listener to every Rcv-Thread object via the controller class (just like when I use a ButtonListener, ...). The problem is: from my TCP class I can't access the Rcv-Thread object's addXListener method, even when it is set to public (but I can access the Rcv-Thread classes from the list). I tried to read as much as I can about the problem but can't figure out how to get this to work. What am I missing?
Edit 1: the TCP class:
public class TCPClass{
private Thread receiveDataThread;
private String MAC;
private InetAddress IP;
private Socket socket = new Socket();
private int tcpSendPort;
private int timeOut = 10;
private ObjectOutputStream objectOutputStream;
private BufferedReader bufferedReader;
private String connectionStatus = "offline";
public TCPClass(DatagramPacket datagramPacket) {
IP = datagramPacket.getAddress();
setConnectionStatusOnline();
tcpSendPort = 50000 + NetworkManager.getNumberOfConnections();
MAC = extractMac(datagramPacket);
}
public void connect(int tcpPort) {
try {
socket = new Socket(IP, tcpPort, null, tcpSendPort);
bufferedReader = new BufferedReader(new InputStreamReader(socket.getInputStream()));
receiveDataThread = new Thread(new ReceiveDataThread(this));
receiveDataThread.start();
} catch (IOException e) {
e.printStackTrace();
System.out.println("on MAC: " + getMAC() + "\non Device:" + toString());
}
if (socket.isConnected()) {
setConnectionStatusConnected();
}
}
}
The NetworkManager creates an object of TCPClass and calls the connect() method.
OK, so after days I figured it out myself.
The main problem was that I was not able to call the addXListener() method of the Rcv-Thread from the Controller. I took the custom event stuff out of the Rcv-Thread and moved it to the TCP class. Now I'm able to add the listener to these classes. If I want to fire an event from the Rcv-Thread, I simply call fireHandlerMethod() from its superclass (the TCP class), and everything works as expected.
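For readers landing here, a rough sketch of what that refactoring looks like (class and method names follow the code above; the NetworkManager accessor is an assumption):

// TCPClass now owns the listener machinery (moved out of ReceiveDataThread):
public class TCPClass {
    protected EventListenerList listenerList = new EventListenerList();

    public void addXListener(XListener l) {
        listenerList.add(XListener.class, l);
    }

    // fireHandlerMethod1/fireHandlerMethod2 stay exactly as shown earlier,
    // and the Rcv-Thread (which now extends TCPClass) simply calls
    // fireHandlerMethod1(receivedMessage) from run().
}

// The Controller registers itself on each connection object it knows about:
for (TCPClass connection : networkManager.getConnections()) { // accessor name assumed
    connection.addXListener(this);
}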
I am currently working on a Java homework assignment. I am asked to create a basic DNS server.
There is a UDPSender class, which is a thread listening on port 53.
There is also another thread called UDPManager.
UDPManager starts a thread with a nested Runnable class which holds an ArrayList of DatagramPacket. The UDPSender aggregates the UDPManager and, whenever it receives a UDP packet, sends it to the manager so it can add it to the ArrayList.
import java.net.DatagramPacket;
import java.util.ArrayList;
import java.util.HashMap;
public class UDPManager {
private UDPManagerRunnable manager;
public UDPManager(String hostsFile, String remoteDNS, boolean localResolution) {
manager = new UDPManagerRunnable(hostsFile, remoteDNS, localResolution);
new Thread(manager).start();
}
public void managePacket(DatagramPacket p) {
manager.managePacket(p);
}
public void close() {
manager.close();
}
private class UDPManagerRunnable implements Runnable {
private ArrayList<DatagramPacket> packets;
private HashMap<Integer, String> clients;
private boolean localResolution;
private boolean running;
private String hostsFile;
private String remoteDNS;
public UDPManagerRunnable(String hostsFile, String remoteDNS, boolean localResolution) {
packets = new ArrayList<DatagramPacket>();
clients = new HashMap<Integer, String>();
this.localResolution = localResolution;
this.running = true;
this.hostsFile = hostsFile;
this.remoteDNS = remoteDNS;
}
public void managePacket(DatagramPacket p) {
packets.add(p);
System.out.println("Received packet. "+packets.size());
}
public void close() {
running = false;
}
public void run() {
DatagramPacket currentPacket = null;
while(running) {
if(!packets.isEmpty()) {
currentPacket = packets.remove(0);
byte[] data = currentPacket.getData();
int anCountValue = data[Constant.ANCOUNT_BYTE_INDEX];
if(anCountValue == Constant.ANCOUNT_REQUEST)
this.processRequest(currentPacket);
else if(anCountValue == Constant.ANCOUNT_ONE_ANSWER)
this.processResponse(currentPacket);
}
}
}
private void processRequest(DatagramPacket packet) {
System.out.println("it's a request!");
}
private void processResponse(DatagramPacket packet) {
System.out.println("it's a response!");
}
}
}
This is the UDPManager. The packets are transmitted to the manager correctly, as the System.out.println displays "Received packet." and the size of the list does increase. The problem I'm running into is that inside run() it never sees the size increasing. The weird thing is that it works perfectly fine in debug mode.
Any idea why it's acting this way?
Thanks a lot for your help.
The problem is that your first thread is putting the new data into the packets list, but for the second thread this change is not visible. You should synchronize access to the list.
Without synchronization, the second thread may keep working with its own stale, cached view of those variables. You need to synchronize access to these variables so that changes are made visible to the other threads.
You should synchronize on packets whenever you access or modify it.
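For instance, a minimal sketch of one way to do that, synchronizing on the list itself (a BlockingQueue would work as well):

public void managePacket(DatagramPacket p) {
    synchronized (packets) {
        packets.add(p);
    }
}

// inside run():
DatagramPacket currentPacket = null;
synchronized (packets) {
    if (!packets.isEmpty()) {
        currentPacket = packets.remove(0);
    }
}
if (currentPacket != null) {
    // ... process the packet as before ...
}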
I am trying to create a pool of channels/connections to a queue server and was trying to use ObjectPool, but I am having trouble using it based on the example on their site.
So far I have threads that do work, but I want each of them to grab a channel from the pool and then return it. I understand how to use it (borrowObject/returnObject) but am not sure how to create the initial pool.
Here's how channels are made in RabbitMQ:
ConnectionFactory factory = new ConnectionFactory();
factory.setHost("localhost");
Connection connection = factory.newConnection();
Channel channel = connection.createChannel();
And my code just uses the channel to do stuff. I'm confused because the only example I could find (on their site) starts like this:
private ObjectPool<StringBuffer> pool;
public ReaderUtil(ObjectPool<StringBuffer> pool) {
this.pool = pool;
}
Which does not make sense to me. I realized this is common for establishing database connections, so I tried to find tutorials using databases and ObjectPool, but they seem to use DBCP, which is specific to databases (and I can't seem to adapt the logic to my queue server).
Any suggestions on how to use it? Or is there another approach used for pools in Java?
They create a class that creates objects and knows what to do when they are returned. That might look something like this for you:
public class PoolConnectionFactory extends BasePoolableObjectFactory<Connection> {
private final ConnectionFactory factory;
public PoolConnectionFactory() {
factory = new ConnectionFactory();
factory.setHost("localhost");
}
// for makeObject we'll simply return a new Connection
public Connection makeObject() throws Exception {
return factory.newConnection();
}
// when an object is returned to the pool,
// we'll clear it out
public void passivateObject(Connection con) {
con.I_don't_know_what_to_do();
}
// for all other methods, the no-op
// implementation in BasePoolableObjectFactory
// will suffice
}
Now you create an ObjectPool<Connection> somewhere:
ObjectPool<Connection> pool = new StackObjectPool<Connection>(new PoolConnectionFactory());
Then you can use the pool inside your threads like this:
Connection c = pool.borrowObject();
c.doSomethingWithMe();
pool.returnObject(c);
The lines that don't make sense to you are a way to pass the pool object to a different class. See last line, they create the pool while creating the reader.
new ReaderUtil(new StackObjectPool<StringBuffer>(new StringBufferFactory()))
You'll need a custom implementation of PoolableObjectFactory to create, validate, and destroy the objects you want to pool. Then pass an instance of your factory to an ObjectPool's constructor and you're ready to start borrowing objects.
Here's some sample code. You can also look at the source code for commons-dbcp, which uses commons-pool.
import org.apache.commons.pool.BasePoolableObjectFactory;
import org.apache.commons.pool.ObjectPool;
import org.apache.commons.pool.PoolableObjectFactory;
import org.apache.commons.pool.impl.GenericObjectPool;
public class PoolExample {
public static class MyPooledObject {
public MyPooledObject() {
System.out.println("hello world");
}
public void sing() {
System.out.println("mary had a little lamb");
}
public void destroy() {
System.out.println("goodbye cruel world");
}
}
public static class MyPoolableObjectFactory extends BasePoolableObjectFactory<MyPooledObject> {
@Override
public MyPooledObject makeObject() throws Exception {
return new MyPooledObject();
}
@Override
public void destroyObject(MyPooledObject obj) throws Exception {
obj.destroy();
}
// PoolableObjectFactory has other methods you can override
// to validate, activate, and passivate objects.
}
public static void main(String[] args) throws Exception {
PoolableObjectFactory<MyPooledObject> factory = new MyPoolableObjectFactory();
ObjectPool<MyPooledObject> pool = new GenericObjectPool<MyPooledObject>(factory);
// Other ObjectPool implementations with special behaviors are available;
// see the JavaDoc for details
try {
for (int i = 0; i < 2; i++) {
MyPooledObject obj;
try {
obj = pool.borrowObject();
} catch (Exception e) {
// failed to borrow object; you get to decide how to handle this
throw e;
}
try {
// use the pooled object
obj.sing();
} catch (Exception e) {
// this object has failed us -- never use it again!
pool.invalidateObject(obj);
obj = null; // don't return it to the pool
// now handle the exception however you want
} finally {
if (obj != null) {
pool.returnObject(obj);
}
}
}
} finally {
pool.close();
}
}
}