How to run async / non-blocking MySQL queries in Play framework?

Just starting out on Play. The documentation talks about how Play can be run asynchronously.
But how do you run MySQL queries when running Play asynchronously? Normal MySQL queries are blocking, right? So that wouldn't work.
Node.js has its own non-blocking MySQL clients just for this purpose, but I can't find anything similar for Play.
How do you run MySQL queries within an asynchronous Play application?

Play Jobs are executed in a separate thread and release the main HTTP thread. When the Job (wrapped in a Promise object) returns after completing, the main HTTP thread resumes where it left off.
So the main HTTP thread is not held up and can be made available for handling other incoming HTTP requests.
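As a rough sketch of that pattern (assuming the Play 1.x Job API this answer describes; SlowJob and runSlowMysqlQuery are hypothetical names):
// The blocking MySQL query runs on a job thread, not the HTTP thread.
public class SlowJob extends play.jobs.Job<String> {
    @Override
    public String doJobWithResult() {
        return runSlowMysqlQuery(); // hypothetical blocking JDBC call
    }
}

// In the controller: await() suspends the request while the Promise
// is pending and resumes on the HTTP thread when it completes.
public static void show() {
    String result = await(new SlowJob().now());
    renderText(result);
}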

In general, SQL calls to the database are blocking and execute sequentially. Play has great support for asynchronous execution, which improves the performance of your app.
Working code sample for Play 2.0
public static Result slow() {
    Logger.debug("slow started");
    // Start execution
    Promise<DataObject> userObject1 = SlowQuery.getUser(440);
    Promise<DataObject> userObject2 = SlowQuery.getCategory(420);
    // ... here execution is already in progress ...
    // Map to Promise objects
    Promise<DataObject> res1 = userObject1.map(new Function<DataObject, DataObject>() {
        public DataObject apply(DataObject res) {
            Logger.debug("Got result (userObject1): " + res.toString());
            return res;
        }
    });
    Promise<DataObject> res2 = userObject2.map(new Function<DataObject, DataObject>() {
        public DataObject apply(DataObject res) {
            Logger.debug("Got result (userObject2): " + res.toString());
            return res;
        }
    });
    // Here we wait for completion - this blocks
    userObject1.getWrappedPromise().await();
    userObject2.getWrappedPromise().await();
    // The result is available
    Logger.debug(res1.get().toString());
    Logger.debug(res2.get().toString());
    Logger.debug("slow finished");
    return ok("done");
}
Feel free to improve this using the community wiki feature - I am sure some parts can be shortened.
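For what it's worth, the blocking await() calls above can be avoided entirely in Play 2.0: async() accepts a Promise<Result>, so the action method returns immediately. A sketch reusing the sample's hypothetical SlowQuery and DataObject classes:
public static Result fast() {
    Promise<DataObject> userObject = SlowQuery.getUser(440);
    return async(userObject.map(new Function<DataObject, Result>() {
        public Result apply(DataObject res) {
            // Runs when the query completes; no thread blocks meanwhile.
            return ok(res.toString());
        }
    }));
}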

Related

How can I turn blocking code into a reactive style?

I have always been used to blocking programming and working in the Spring MVC framework. Recently, I considered learning reactive programming, but I was full of doubts about how to convert my previous logic into the new style.
See the following processing (pseudocode):
public Mono<List<String>> a() {
    // 1...
    List<String> strings = new ArrayList<>();
    for (int i = 0; i < 100; i++) {
        strings.add("hello " + i);
    }
    Mono<List<String>> mono = Mono.just(strings);
    // 2...
    mono.subscribe(e -> {
        b();
    });
    // 3...
    mono.subscribe(e -> {
        c();
    });
    mono.subscribeOn(Schedulers.boundedElastic());
    return mono;
}

// Simulate a time-consuming process.
public void b() {
    try {
        Thread.sleep(100);
    } catch (InterruptedException err) {
        throw new RuntimeException(err);
    }
}

// Simulate the process of requesting an external HTTP interface once.
public int c() {
    try {
        Thread.sleep(300);
    } catch (InterruptedException err) {
        throw new RuntimeException(err);
    }
    return 1;
}
I tried to convert it into code that conforms to the reactive programming style, but found that the time-consuming logic still blocks the current thread, which is inconsistent with my expectation.
I tested WebFlux and Tomcat respectively, and the results show that the performance of the former is very poor. I suspect that the IO thread is blocked, which can be seen from the thread sleep time.
Thread.sleep() will pause the JVM thread running the request. You can't call this method in a Spring WebFlux application. By design, WebFlux uses very few threads to handle requests in order to avoid context switching, but if your code intentionally blocks them, you break the whole design.
In practice, Spring WebFlux can be faster than regular Spring MVC if the application workload is I/O bound, e.g. a micro-service that calls multiple external APIs and doesn't perform significant calculations.
I'd suggest that you try simulating the I/O operations by making a network call to an actual server using a reactive library like Reactor Netty. Otherwise you would have to dig into the code and figure out how to create a meaningful mock for a network I/O operation, which might be tricky.
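To make that concrete, the Thread.sleep() simulations from the question could be rewritten with Reactor's timer operators, so the delay is scheduled instead of parking a thread. A minimal sketch, assuming only reactor-core on the classpath; the method names mirror the question's a/b/c:
import java.time.Duration;
import java.util.List;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

public class NonBlockingSample {
    // Build the list reactively instead of with an imperative loop.
    public Mono<List<String>> a() {
        return Flux.range(0, 100)
                .map(i -> "hello " + i)
                .collectList()
                .flatMap(strings -> b().then(Mono.just(strings)));
    }

    // Non-blocking stand-in for Thread.sleep(100): the delay runs on
    // Reactor's timer scheduler, no thread is parked.
    public Mono<Void> b() {
        return Mono.delay(Duration.ofMillis(100)).then();
    }

    // Non-blocking stand-in for a 300 ms external HTTP call.
    public Mono<Integer> c() {
        return Mono.delay(Duration.ofMillis(300)).thenReturn(1);
    }
}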
Reactive programming is good for juggling work across threads when "someone else other than this JVM" is busy.
So, hand over to the OS for a file write, or to the DB for a record update, or to another remote server over an API call. Let them call you back when they have the result. Eventually you are just juggling, while the work goes on somewhere else and results flow in. This is where reactivity shines.
So, it's difficult to simulate with Thread.sleep or a for loop running 10_000 times.
Also, it means the whole flow has to be reactive - your disk IO library should be reactive, your DB library should be reactive, your REST client for calling other networked services should be reactive. Reactor should have solutions for each of these.
Also, it's not all-or-nothing. Even if your disk IO is blocking, you will still gain benefits if at least the REST client is non-blocking.

Vert.x Event loop - How is this asynchronous?

I'm playing around with Vert.x and quite new to the servers based on event loop as opposed to the thread/connection model.
public void start(Future<Void> fut) {
    vertx
        .createHttpServer()
        .requestHandler(r -> {
            LocalDateTime start = LocalDateTime.now();
            System.out.println("Request received - " + start.format(DateTimeFormatter.ISO_DATE_TIME));
            final MyModel model = new MyModel();
            try {
                for (int i = 0; i < 10000000; i++) {
                    // some simple operation
                }
                model.data = start.format(DateTimeFormatter.ISO_DATE_TIME) + " - " + LocalDateTime.now().format(DateTimeFormatter.ISO_DATE_TIME);
            } catch (Exception e1) {
                e1.printStackTrace();
            }
            r.response().end(
                new Gson().toJson(model)
            );
        })
        .listen(4568, result -> {
            if (result.succeeded()) {
                fut.complete();
            } else {
                fut.fail(result.cause());
            }
        });
    System.out.println("Server started ..");
}
I'm just trying to simulate a long running request handler to understand how this model works.
What I've observed is that the so-called event loop is blocked until my first request completes. Whatever little time it takes, a subsequent request is not acted upon until the previous one completes.
Obviously I'm missing a piece here and that's the question that I have here.
Edited based on the answers so far:

Isn't accepting all requests considered to be asynchronous? If a new connection can only be accepted when the previous one is cleared off, how is it async?

Assume a typical request takes anywhere between 100 ms to 1 sec (based on the kind and nature of the request). So it means the event loop can't accept a new connection until the previous request finishes (even if it winds up in a second). And if I as a programmer have to think through all these and push such request handlers to a worker thread, then how does it differ from a thread/connection model?

I'm just trying to understand how this model is better than traditional thread/connection server models. Assume there is no I/O op, or all the I/O ops are handled asynchronously. How does it even solve the c10k problem, when it can't start all concurrent requests in parallel and has to wait till the previous one terminates?

Even if I decide to push all these operations to a worker thread (pooled), then I'm back to the same problem, isn't it? Context switching between threads?

Edits and topping this question for a bounty

I do not completely understand how this model is claimed to be asynchronous.
Vert.x has an async JDBC client (asynchronous is the keyword) which I tried to adapt with RxJava.
Here is a code sample (relevant portions):
server.requestStream().toObservable().subscribe(req -> {
    LocalDateTime start = LocalDateTime.now();
    System.out.println("Request for " + req.absoluteURI() + " received - " + start.format(DateTimeFormatter.ISO_DATE_TIME));
    jdbc.getConnectionObservable().subscribe(
        conn -> {
            // Now chain some statements using flatmap composition
            Observable<ResultSet> resa = conn.queryObservable("SELECT * FROM CALL_OPTION WHERE UNDERLYING='NIFTY'");
            // Subscribe to the final result
            resa.subscribe(resultSet -> {
                req.response().end(resultSet.getRows().toString());
                System.out.println("Request for " + req.absoluteURI() + " Ended - " + LocalDateTime.now().format(DateTimeFormatter.ISO_DATE_TIME));
            }, err -> {
                System.out.println("Database problem");
                err.printStackTrace();
            });
        },
        // Could not connect
        err -> {
            err.printStackTrace();
        }
    );
});
server.listen(4568);
The select query takes approximately 3 seconds to return the complete table dump.
When I fire concurrent requests (tried with just 2), I see that the second request waits completely for the first one to complete.
If the JDBC select is asynchronous, isn't it a fair expectation that the framework handles the second connection while it waits for the select query to return?
Vert.x's event loop is, in fact, a classical event loop that exists on many platforms. And of course, most explanations and docs can be found for Node.js, as it's the most popular framework based on this architecture pattern. Take a look at this more or less good explanation of the mechanics behind the Node.js event loop. The Vert.x tutorial also has fine explanations in "Don't call us, we'll call you" and "Verticles".
Edit for your updates:
First of all, when you are working with an event loop, the main thread should work very quickly for all requests. You shouldn't do any long job in this loop. And of course, you shouldn't wait for a response to your call to the database.
- Schedule a call asynchronously
- Assign a callback (handler) to result
- The callback will be executed in a worker thread, not the event loop thread. This callback, for example, will return a response to the socket.
So, your operations in the event loop should just schedule all asynchronous operations with callbacks and go to the next request without awaiting any results.
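For example, with Vert.x's executeBlocking the slow part runs on a worker thread while the event loop moves straight on to the next request. A sketch against the Vert.x 3 API; doSlowWork is a hypothetical stand-in for the long-running job:
vertx.createHttpServer()
    .requestHandler(req -> {
        // Schedule the slow job on a worker thread...
        vertx.<String>executeBlocking(future -> {
            future.complete(doSlowWork()); // hypothetical long-running job
        }, res -> {
            // ...and finish the response back on the event loop.
            if (res.succeeded()) {
                req.response().end(res.result());
            } else {
                req.response().setStatusCode(500).end();
            }
        });
    })
    .listen(8080);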
Assume a typical request takes anywhere between 100 ms to 1 sec (based on the kind and nature of the request).
In that case, your request has some computationally expensive parts or access to IO - your code in the event loop shouldn't wait for the result of these operations.
I'm just trying to understand how this model is better than traditional thread/conn server models? Assume there is no I/O op or all the I/O ops are handled asynchronously?
When you have too many concurrent requests and a traditional programming model, you will create a thread per request. What will these threads do? They will mostly be waiting for IO operations (for example, a result from the database). It's a waste of resources. In our event loop model, you have one main thread that schedules operations and a pre-allocated pool of worker threads for long tasks. None of these workers actually wait for the response; they can execute other code while waiting for an IO result (it can be implemented as callbacks or by periodically checking the status of the IO jobs currently in progress). I would recommend you go through Java NIO and Java NIO 2 to understand how this async IO can actually be implemented inside the framework. Green threads are a closely related concept that is good to understand too. Green threads and coroutines are a kind of shadowed event loop trying to achieve the same thing - fewer threads, because we can reuse a system thread while a green thread is waiting for something.
How does it even solve the c10k problem, when it can't start all concurrent requests in parallel and has to wait till the previous one terminates?
For sure, we don't wait in the main thread to send the response for the previous request: get a request, schedule the long/IO task's execution, move on to the next request.
Even if I decide to push all these operations to a worker thread (pooled), then I'm back to the same problem, isn't it? Context switching between threads?
If you do everything right - no. Even better, you will get good data locality and execution-flow prediction. One CPU core will execute your short event loop and schedule async work without context switching, and nothing more. Other cores make calls to the database and return responses, and only this. Switching between callbacks or checking different channels for IO status doesn't actually require any system-thread context switching - it's actually done in one worker thread. So, we have one worker thread per core, and this one system thread awaits/checks the availability of results from, for example, multiple connections to the database. Revisit the Java NIO concept to understand how it can work this way. (A classical example for NIO is a proxy server that can accept many parallel connections (thousands), proxy requests to some other remote servers, listen for responses, and send responses back to clients, all of this using one or two threads.)
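To make the NIO reference concrete, here is the bare selector skeleton in which a single thread watches many channels; a minimal sketch, not production code:
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;

public class SelectorSketch {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(8080));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);
        while (true) {
            selector.select(); // one thread sleeps until any channel is ready
            for (SelectionKey key : selector.selectedKeys()) {
                if (key.isAcceptable()) {
                    // accept the connection and register it for OP_READ
                } else if (key.isReadable()) {
                    // data is already available: read without blocking
                }
            }
            selector.selectedKeys().clear();
        }
    }
}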
About your code, I made a sample project for you to demonstrate that everything works as expected:
public class MyFirstVerticle extends AbstractVerticle {
    @Override
    public void start(Future<Void> fut) {
        JDBCClient client = JDBCClient.createShared(vertx, new JsonObject()
            .put("url", "jdbc:hsqldb:mem:test?shutdown=true")
            .put("driver_class", "org.hsqldb.jdbcDriver")
            .put("max_pool_size", 30));
        client.getConnection(conn -> {
            if (conn.failed()) { throw new RuntimeException(conn.cause()); }
            final SQLConnection connection = conn.result();
            // create a table
            connection.execute("create table test(id int primary key, name varchar(255))", create -> {
                if (create.failed()) { throw new RuntimeException(create.cause()); }
            });
        });
        vertx
            .createHttpServer()
            .requestHandler(r -> {
                int requestId = new Random().nextInt();
                System.out.println("Request " + requestId + " received");
                client.getConnection(conn -> {
                    if (conn.failed()) { throw new RuntimeException(conn.cause()); }
                    final SQLConnection connection = conn.result();
                    connection.execute("insert into test values ('" + requestId + "', 'World')", insert -> {
                        // query some data with arguments
                        connection
                            .queryWithParams("select * from test where id = ?", new JsonArray().add(requestId), rs -> {
                                connection.close(done -> { if (done.failed()) { throw new RuntimeException(done.cause()); } });
                                System.out.println("Result " + requestId + " returned");
                                r.response().end("Hello");
                            });
                    });
                });
            })
            .listen(8080, result -> {
                if (result.succeeded()) {
                    fut.complete();
                } else {
                    fut.fail(result.cause());
                }
            });
    }
}
@RunWith(VertxUnitRunner.class)
public class MyFirstVerticleTest {
    private Vertx vertx;

    @Before
    public void setUp(TestContext context) {
        vertx = Vertx.vertx();
        vertx.deployVerticle(MyFirstVerticle.class.getName(),
            context.asyncAssertSuccess());
    }

    @After
    public void tearDown(TestContext context) {
        vertx.close(context.asyncAssertSuccess());
    }

    @Test
    public void testMyApplication(TestContext context) {
        for (int i = 0; i < 10; i++) {
            final Async async = context.async();
            vertx.createHttpClient().getNow(8080, "localhost", "/",
                response -> response.handler(body -> {
                    context.assertTrue(body.toString().contains("Hello"));
                    async.complete();
                })
            );
        }
    }
}
Output:
Request 1412761034 received
Request -1781489277 received
Request 1008255692 received
Request -853002509 received
Request -919489429 received
Request 1902219940 received
Request -2141153291 received
Request 1144684415 received
Request -1409053630 received
Request -546435082 received
Result 1412761034 returned
Result -1781489277 returned
Result 1008255692 returned
Result -853002509 returned
Result -919489429 returned
Result 1902219940 returned
Result -2141153291 returned
Result 1144684415 returned
Result -1409053630 returned
Result -546435082 returned
So, we accept a request, schedule a query to the database, and go to the next request; we consume all of them, and send a response for each request only when everything is done with the database.
About your code sample, I see two possible issues. First, it looks like you don't close() the connection, which is important for returning it to the pool. Second, how is your pool configured? If there is only one free connection, these requests will serialize, waiting for that connection.
I recommend adding some timestamp printing for both requests to find the place where you serialize. You have something that makes the calls in the event loop blocking. Or... check that you send the requests in parallel in your test, not the next one only after getting a response to the previous.
How is this asynchronous? The answer is in your question itself:
What I've observed is that the so-called event loop is blocked until my first request completes. Whatever little time it takes, a subsequent request is not acted upon until the previous one completes.
The idea is that instead of having a new thread for serving each HTTP request, the same thread is used - the one which you have blocked with your long-running task.
The goal of the event loop is to save the time spent on context switching from one thread to another and to utilize idle CPU time while a task is waiting on IO/network activities. If, while handling your request, it has to do another IO/network operation, e.g. fetching data from a remote MongoDB instance, your thread will not be blocked during that time; instead, another request will be served by the same thread, which is the ideal use case of the event loop model (considering that you have concurrent requests coming to your server).
If you have long-running tasks which do not involve network/IO operations, you should consider using a thread pool instead; if you block the main event loop thread itself, other requests will be delayed. I.e. for long-running tasks you are okay with paying the price of context switching so that the server stays responsive.
EDIT:
The way a server can handle requests varies:
1) Spawn a new thread for each incoming request (in this model the context switching would be high, and there is the additional cost of spawning a new thread every time)
2) Use a thread pool to serve the requests (the same set of threads serves requests, and extra requests get queued up)
3) Use an event loop (a single thread for all the requests; negligible context switching, because there would only be a few threads running, e.g. to queue up the incoming requests)
First of all, context switching is not bad; it is required to keep an application server responsive. But too much context switching can be a problem if the number of concurrent requests gets too high (roughly more than 10k). If you want to understand this in more detail, I recommend you read the C10K article.
Assume a typical request takes anywhere between 100 ms to 1 sec (based on the kind and nature of the request). So it means the event loop can't accept a new connection until the previous request finishes (even if it winds up in a second).
If you need to respond to a large number of concurrent requests (more than 10k), I would consider anything over 500 ms a long-running operation. Secondly, like I said, there are some threads/context switching involved, e.g. to queue up incoming requests, but the context switching amongst threads would be greatly reduced as there would be very few threads at a time. Thirdly, if there is a network/IO operation involved in resolving the first request, the second request would get a chance to be resolved before the first is resolved - this is where this model plays well.
And if I as a programmer have to think through all these and push such request handlers to a worker thread, then how does it differ from a thread/connection model?
Vert.x is trying to give you the best of threads and the event loop, so as a programmer you can make the call on how to make your application efficient under both scenarios, i.e. long-running operations with and without network/IO operations.
I'm just trying to understand how this model is better than traditional thread/connection server models. Assume there is no I/O op, or all the I/O ops are handled asynchronously. How does it even solve the c10k problem, when it can't start all concurrent requests in parallel and has to wait till the previous one terminates?
The above explanation should answer this.
Even if I decide to push all these operations to a worker thread (pooled), then I'm back to the same problem, isn't it? Context switching between threads?
Like I said, both have pros and cons, and Vert.x gives you both models; depending on your use case, you have to choose what is ideal for your scenario.
In these sorts of processing engines, you are supposed to turn long-running tasks into asynchronously executed operations, and there is a methodology for doing this, so that the critical thread can complete as quickly as possible and return to perform another task. I.e. any IO operations are passed to the framework, which calls you back when the IO is done.
The framework is asynchronous in the sense that it supports producing and running these asynchronous tasks, but it doesn't change your code from synchronous to asynchronous.

Java Multithreaded - Better way to cancel Future task with database and http connections?

I am having difficulty trying to correctly program my application in the way I want it to behave.
Currently, my application (a Java Servlet) will query the database for a list of items to process. For every item in the list, it will submit an HTTP POST request. I am trying to create a way to stop this processing (and even terminate an HTTP POST request in progress) if the user requests it. There can be simultaneous threads that are separately processing different queries. Right now, I stop processing in all threads.
My current attempt involves implementing the database query and HTTP Post in a Callable class. Then I submit the Callable class via the Executor Service to get a Future object.
However, in order to stop the processing properly, I need to abort the HTTP POST and close the database's Connection, Statement and ResultSet - because Future.cancel() will not do this for me. How can I do this when I call cancel() on the Future object? Do I have to store a list of arrays that contains the Future object, HttpPost, Connection, Statement, and ResultSet? This seems like overkill - surely there must be a better way?
Here is some code I have right now that only aborts the HttpPost (and not any database objects).
private static final ExecutorService pool = Executors.newFixedThreadPool(10);

public static Future<HttpClient> upload(final String url) {
    CallableTask ctask = new CallableTask();
    ctask.setFile(largeFile);
    ctask.setUrl(url);
    // This will create an HttpPost that posts 'largeFile' to the 'url'
    Future<HttpClient> f = pool.submit(ctask);
    // Storing the objects for when I cancel later
    linklist.add(new tuple<Future<HttpClient>, HttpPost>(f, ctask.getPost()));
    return f;
}

// This method cancels all running Future tasks and aborts any POSTs in progress
public static void cancelAll() {
    System.out.println("Checking status...");
    for (tuple<Future<HttpClient>, HttpPost> t : linklist) {
        Future<HttpClient> f = t.getFuture();
        HttpPost post = t.getPost();
        if (f.isDone()) {
            System.out.println("Task is done!");
        } else {
            if (f.isCancelled()) {
                System.out.println("Task was cancelled!");
            } else {
                while (!f.isDone()) {
                    f.cancel(true);
                    try {
                        Thread.sleep(5000);
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                    System.out.println("!Aborting Post!");
                    try {
                        post.abort();
                    } catch (Exception ex) {
                        System.out.println("Aborted Post, swallowing exception: ");
                        ex.printStackTrace();
                    }
                }
            }
        }
    }
}
Is there an easier way or a better design? Right now I terminate all processing threads - in the future, I would like to terminate individual threads.
I think keeping a list of all the resources to be closed is not the best approach. In your current code, it seems that the HTTP request is initiated by the CallableTask but the closing is done by somebody else. Closing resources is the responsibility of the one who opened them, in my opinion.
I would let the CallableTask initiate the HTTP request, connect to the database and do its work, and, when it finishes or aborts, close everything it opened. This way you only have to keep track of the Future instances representing your currently running tasks.
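A minimal sketch of that idea, assuming Apache HttpClient 4.x and plain JDBC; the table name, query, and constructor arguments are illustrative only:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.concurrent.Callable;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;

// The task owns every resource it opens and releases them itself,
// whether it finishes normally or gets cancelled.
public class SelfClosingTask implements Callable<Void> {
    private final String url;
    private final String jdbcUrl;

    public SelfClosingTask(String url, String jdbcUrl) {
        this.url = url;
        this.jdbcUrl = jdbcUrl;
    }

    @Override
    public Void call() throws Exception {
        // try-with-resources closes the client and the JDBC objects
        // even when cancel(true) interrupts us mid-loop.
        try (CloseableHttpClient client = HttpClients.createDefault();
             Connection conn = DriverManager.getConnection(jdbcUrl);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT item FROM work_items")) {
            while (rs.next()) {
                // Honour cancel(true) between posts and unwind; the
                // try-with-resources block then closes everything we opened.
                if (Thread.currentThread().isInterrupted()) {
                    throw new InterruptedException("task cancelled");
                }
                HttpPost post = new HttpPost(url);
                // ... build the request body from rs, then fire it ...
                client.execute(post).close();
            }
            return null;
        }
    }
}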
I think your approach is correct. You would need to handle the rollback yourself when you are canceling the thread.
cancel(true) just calls interrupt() on the already-executing thread. Have a look here:
http://docs.oracle.com/javase/tutorial/essential/concurrency/interrupt.html:
As it says
An interrupt is an indication to a thread that it should stop what it
is doing and do something else. It's up to the programmer to decide
exactly how a thread responds to an interrupt, but it is very common
for the thread to terminate.
An interrupted thread will throw an InterruptedException:
when a thread is waiting, sleeping, or otherwise paused for a long
time and another thread interrupts it using the interrupt() method in
class Thread.
So you need to explicitly code for scenarios like the one you mentioned, checking for a possible interruption in the executing thread.
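For illustration, here is a small self-contained sketch of an interrupt-aware task: the flag check covers CPU-only stretches, while blocking calls such as sleep() throw InterruptedException on their own when cancel(true) fires:
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class InterruptAwareTask {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Callable<Integer> task = () -> {
            int processed = 0;
            for (int i = 0; i < 1_000; i++) {
                // Poll the flag set by Future.cancel(true) and unwind cleanly.
                if (Thread.currentThread().isInterrupted()) {
                    throw new InterruptedException("cancelled after " + processed);
                }
                TimeUnit.MILLISECONDS.sleep(10); // blocking calls throw on interrupt
                processed++;
            }
            return processed;
        };
        Future<Integer> result = pool.submit(task);
        TimeUnit.MILLISECONDS.sleep(100);
        result.cancel(true); // interrupts the worker thread
        pool.shutdown();
    }
}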

Architecture for executing long running jobs for a non-blocking node.js application in a distributed environment

I'm building an HTTP Proxy in node.js. When the incoming request meets some conditions, a long running job is executed. When this happens, all the subsequent requests must wait for the job to end (due to the node's architecture):
function proxy(request, response) {
    if (isSpecial(request)) {
        // Long running job
    }
    // Proxy request
}
This is not good. So let's say that the long running job can be implemented in Java, and for this purpose I build a Java server application that executes the long running job in a separate thread every time a request is made by the node application.
So, when the conditions are met, node.js makes a connection (TCP, HTTP, whatever) to the Java server. The Java server initializes a new Thread for the request, executes the long running job in this separate thread, and returns back, let's say, a JSON response (can be binary, whatever) that node can easily, asynchronously, handle:
var javaServer = initJavaServer(); // pseudo-code

function proxy(request, response) {
    var special = isSpecial(request);
    if (special) {
        var jobResponse;
        javaServer.request( ... );
        javaServer.addListener("data", function(chunk) {
            // Read response
            // jobResponse = ...
        });
        javaServer.addListener("end", function(jobResult) {
            doProxy(jobResponse, request, response);
        });
    } else {
        doProxy(null, request, response);
    }
}
In this way, I can execute long running jobs for those requests that meet the conditions, without blocking the whole node application.
So here the requirements are:
Speed
Scalability of both apps (the node proxy runs on a cluster, and the Java app on another one)
Maybe a message broker like RabbitMQ could help (node pushes messages, Java subscribes to them and pushes responses back).
Thoughts?
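On the RabbitMQ idea above, the Java side could be a plain RPC-style consumer: read a job message, run the job, and publish the result to the reply queue named in the request. A hedged sketch against the RabbitMQ Java client 5.x; the queue name and runLongJob are hypothetical:
import com.rabbitmq.client.AMQP;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DeliverCallback;

public class JobWorker {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        Connection conn = factory.newConnection(); // left open for the consumer
        Channel channel = conn.createChannel();
        channel.queueDeclare("jobs", false, false, false, null);

        DeliverCallback onMessage = (consumerTag, delivery) -> {
            String result = runLongJob(new String(delivery.getBody()));
            AMQP.BasicProperties reply = new AMQP.BasicProperties.Builder()
                    .correlationId(delivery.getProperties().getCorrelationId())
                    .build();
            // Node published with a replyTo queue; send the result there.
            channel.basicPublish("", delivery.getProperties().getReplyTo(),
                    reply, result.getBytes());
        };
        channel.basicConsume("jobs", true, onMessage, consumerTag -> { });
    }

    private static String runLongJob(String payload) {
        return "done: " + payload; // hypothetical long-running job
    }
}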
Take a look at Q-Oper8 (https://github.com/robtweed/Q-Oper8), which is designed to provide a native Node.js solution to situations such as this.

How to terminate CXF webservice call within Callable upon Future cancellation

Edit
This question has gone through a few iterations by now, so feel free to look through the revisions to see some background information on the history and things tried.
I'm using a CompletionService together with an ExecutorService and a Callable, to concurrently call a number of functions on a few different webservices through CXF-generated code. These services all contribute different information towards a single set of information I'm using for my project. The services, however, can fail to respond for a prolonged period of time without throwing an exception, prolonging the wait for the combined set of information.
To counter this, I'm running all the service calls concurrently, and after a few minutes I would like to terminate any of the calls that have not yet finished, and preferably log which ones weren't done yet, either from within the Callable or by throwing a detailed Exception.
Here's some highly simplified code to illustrate what I'm doing already:
private Callable<List<Feature>> getXXXFeatures(final WiwsPortType port,
        final String accessionCode) {
    return new Callable<List<Feature>>() {
        @Override
        public List<Feature> call() throws Exception {
            List<Feature> features = new ArrayList<Feature>();
            // getXXXFeatures are methods of the WS proxy
            // that can take anywhere from a second to never to return
            for (RawFeature raw : port.getXXXFeatures(accessionCode)) {
                Feature ft = convertFeature(raw);
                features.add(ft);
            }
            if (Thread.currentThread().isInterrupted())
                log.error("XXX was interrupted");
            return features;
        }
    };
}
And the code that concurrently starts the WS calls:
WiwsPortType port = new Wiws().getWiws();
List<Future<List<Feature>>> ftList = new ArrayList<Future<List<Feature>>>();

// Counting wrapper around CompletionService,
// so I could implement ccs.hasRemaining()
CountingCompletionService<List<Feature>> ccs =
    new CountingCompletionService<List<Feature>>(threadpool);

ftList.add(ccs.submit(getXXXFeatures(port, accessionCode)));
ftList.add(ccs.submit(getYYYFeatures(port, accessionCode)));
ftList.add(ccs.submit(getZZZFeatures(port, accessionCode)));

List<Feature> allFeatures = new ArrayList<Feature>();
while (ccs.hasRemaining()) {
    // Low for testing, eventually a little more lenient
    Future<List<Feature>> polled = ccs.poll(5, TimeUnit.SECONDS);
    if (polled != null)
        allFeatures.addAll(polled.get());
    else {
        // Still jobs remaining, but unresponsive: cancel them all
        int jobsCanceled = 0;
        for (Future<List<Feature>> job : ftList)
            if (job.cancel(true))
                jobsCanceled++;
        log.error("Canceled {} feature jobs because they took too long",
            jobsCanceled);
        break;
    }
}
The problem I'm having with this code is that the Callables aren't actually canceled while waiting for port.getXXXFeatures(...) to return, but somehow keep running. As you can see from the if (Thread.currentThread().isInterrupted()) log.error("XXX was interrupted"); statement, the interrupted flag is set after port.getXXXFeatures returns - but that point is only reached after the webservice call completes normally, instead of being interrupted when I called cancel.
Can anyone tell me what I am doing wrong and how I can stop the running CXF Webservice call after a given time period, and register this information in my application?
Best regards, Tim
Edit 3: New answer.
I see these options:
- Post your problem to Apache CXF as a feature request
- Fix Apache CXF yourself and expose the needed features
- Look for asynchronous WS call support within Apache CXF
- Consider switching to a different WS provider (JAX-WS?)
- Do the WS call yourself using a RESTful API if the service supports it (e.g. a plain HTTP request with parameters)
- For über experts only: use true threads/thread groups and kill the threads with unorthodox methods
The CXF docs have some instructions for setting the read timeout on the HTTPURLConnection:
http://cwiki.apache.org/CXF20DOC/client-http-transport-including-ssl-support.html
That would probably meet your needs. If the server doesn't respond in time, an exception is raised and the callable gets that exception. (Except there is a bug where it MAY hang instead; I cannot remember if that was fixed for 2.2.2 or if it's just in the SNAPSHOTS right now.)
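For reference, the same timeouts can also be set programmatically on the generated proxy through CXF's HTTPConduit; a sketch, where port is the WiwsPortType proxy from the question:
import org.apache.cxf.endpoint.Client;
import org.apache.cxf.frontend.ClientProxy;
import org.apache.cxf.transport.http.HTTPConduit;
import org.apache.cxf.transports.http.configuration.HTTPClientPolicy;

Client client = ClientProxy.getClient(port);
HTTPConduit conduit = (HTTPConduit) client.getConduit();
HTTPClientPolicy policy = new HTTPClientPolicy();
policy.setConnectionTimeout(30000);  // ms to establish the connection
policy.setReceiveTimeout(120000);    // ms to wait for the response
conduit.setClient(policy);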
