Check client is alive with HttpServletRequest object - java

I'm writing a Spring web application and I'm mapping the "/do" URL path to the following controller method:
@Controller
public class MyController
{
    @RequestMapping(value="/do", method=RequestMethod.GET)
    public String do()
    {
        File f = new File("otherMethodEnded.tmp");
        while (!f.exists())
        {
            try {
                Thread.sleep(5000);
            } catch (InterruptedException e) {
            }
        }
        // ok, let's continue
    }
}
The otherMethodEnded.tmp file is written by another controller method, so when the client calls the second URL I expect the first method to exit the while loop.
Everything works, except when the client calls the "/do" URL and then closes the connection before the response is received. The problem is that the server remains in the while (!f.exists()) loop even though the client is down and cannot call the second URL to unlock it.
I would like to retrieve the connection status of the "/do" request and exit the loop when the connection is closed by the client, but I cannot find any way to do so.
I tried the HttpServletRequest.getSession(false) method, but the returned HttpSession object is never null, so the HttpServletRequest object is not updated when the client closes the connection.
How can I check whether the client is still waiting for the response or not?

The simplest way to detect that something is wrong is to define a timeout value and then, during your loop, test whether the time spent waiting has exceeded that timeout.
Something like:
@Controller
public class MyController
{
    private static final long MAX_LOOP_TIME = 1000 * 60 * 5; // 5 minutes? choose a value

    @RequestMapping(value="/do", method=RequestMethod.GET)
    public String do()
    {
        File f = new File("otherMethodEnded.tmp");
        long startedAt = System.currentTimeMillis();
        boolean forcedExit = false;
        while (!forcedExit && !f.exists())
        {
            try {
                Thread.sleep(5000);
                if (System.currentTimeMillis() - startedAt > MAX_LOOP_TIME) {
                    forcedExit = true;
                }
            } catch (InterruptedException e) {
                forcedExit = true;
            }
        }
        // ok, let's continue
        // if forcedExit, handle error scenario?
    }
}
Additionally: InterruptedException is not something to blindly catch and ignore (see this discussion).
In your case I would really exit the while loop if you're interrupted.
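A minimal sketch of what exiting on interrupt looks like, with the interrupt status restored rather than swallowed (the helper name is mine):
import java.io.File;

static boolean waitForFile(File f, long pollMillis, long maxWaitMillis) {
    long startedAt = System.currentTimeMillis();
    while (!f.exists()) {
        if (System.currentTimeMillis() - startedAt > maxWaitMillis) {
            return false; // timed out
        }
        try {
            Thread.sleep(pollMillis);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // restore the interrupt status for callers
            return false;                       // treat interruption as a forced exit
        }
    }
    return true;
}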
You only find out that the client is no longer waiting on your connection when a write to the output stream (response.getOutputStream()) fails; there is no way to detect the disconnect proactively.
(See this question for details.)
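That said, the closest you can get is to probe the connection yourself. A best-effort sketch, assuming you switch the handler to take the HttpServletResponse and write the response directly instead of returning a view name (due to TCP buffering, the first flush after a disconnect may still succeed, so this only notices the close after a write or two):
import java.io.File;
import java.io.IOException;
import java.io.OutputStream;
import javax.servlet.http.HttpServletResponse;

// probe the client while waiting for the marker file
void waitForFileOrDisconnect(HttpServletResponse response, File f) throws InterruptedException {
    try {
        OutputStream out = response.getOutputStream();
        while (!f.exists()) {
            out.write(' '); // padding byte the client ignores
            out.flush();    // throws IOException (e.g. Tomcat's ClientAbortException) once the client is gone
            Thread.sleep(5000);
        }
    } catch (IOException clientGone) {
        // stop waiting: the client closed the connection
    }
}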
Seeing as you've indicated your client does occasional callbacks, you could poll from the client side to check whether the other call has completed. If it has, do the operation; otherwise return directly and have the client make the call again. (This assumes you are sending JSON, but adapt as you require.)
something like
public class MyController
{
    @RequestMapping(value="/do", method=RequestMethod.GET)
    public String do()
    {
        File f = new File("otherMethodEnded.tmp");
        if (f.exists()) {
            // do what you set out to do
            // ok, let's continue
            // and return a response that indicates the call did what it did
            // view that returns json { "result" : "success" }
            return "viewThatSignalsToClientThatOperationSucceeded";
        } else {
            // view that returns json: { "result" : "retry" }
            return "viewThatSignalsToClientToRetryIn5Seconds";
        }
    }
}
Then the client side would run something like this (pseudo-JavaScript, as it's been a while):
var callinterval = setInterval(function() { checkServer() }, 5000);
function checkServer() {
    $.ajax({
        // ...
        success: successFunction
    });
}
function successFunction(response) {
    // connection succeeded
    var json = $.parseJSON(response);
    if (json.result === "retry") {
        // the interval will call this again
    } else {
        clearInterval(callinterval);
        if (json.result === "success") {
            // do the good stuff
        } else if (json.result === "failure") {
            // report that the server reported an error
        }
    }
}
Of course this is just semi-serious code, but it's roughly how I'd try it if I had this dependency. If this is regarding a file upload, keep in mind that the file may not contain all of its bytes yet: file exists != file completely uploaded, unless you move it into place (a rename on the same filesystem is atomic; cp / scp / etc. are not).
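If you control the side that writes the file, one way to avoid that race is to write to a temporary name and publish it with an atomic rename, so the existence check can never see a half-written file. A sketch with java.nio (the .part name is mine; both paths must be on the same filesystem):
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

Path tmp = Paths.get("otherMethodEnded.tmp.part"); // write everything here first
Path done = Paths.get("otherMethodEnded.tmp");     // the name the poller looks for
Files.move(tmp, done, StandardCopyOption.ATOMIC_MOVE);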

Related

How to continue the program when there is an exception while calling a Feign client API in Java

public String prospect(List<ProspectRequest> prospectRequest, String primaryClientId) {
    if (p2p(primaryClientId) == "Success") {
        for (ProspectRequest prospect : prospectRequest) {
            p2p(prospect.getId());
        }
        // rest of code I would like to continue
    }
}
public String p2p(String id) {
    crmApi.getProspectId(id); // this is an external client API
    String message = "Success";
    return message;
}
If p2p(primaryClientId) fails, then I need to stop the entire process. How do I continue with the "rest of code I would like to continue"?
If the Feign client API crmApi.getProspectId succeeds, then the success message is returned and it's the good case.
If the crmApi.getProspectId API gives an error, I still need to continue the p2p() calls for the next set of clients. How does that work?
Thanks in advance.
Seems like you just need to use a try/catch/finally block:
public String prospect(List<ProspectRequest> prospectRequest, String primaryClientId) {
    try {
        if ("Success".equals(p2p(primaryClientId))) { // use equals(), not ==, to compare strings
            for (ProspectRequest prospect : prospectRequest) {
                p2p(prospect.getId());
            }
        }
    } catch (RuntimeException e) {} // the caught type can be narrowed, e.g. RuntimeException
    finally {
        // rest of code I would like to continue
    }
}
public String p2p(String id) {
    crmApi.getProspectId(id); // this is the external client API
    String message = "Success";
    return message;
}
If you need something out of the first if-block you can put a fallback (for when the if-block fails) in the catch-block. If
if ("Success".equals(p2p(primaryClientId))) {
    for (ProspectRequest prospect : prospectRequest) {
        p2p(prospect.getId());
    }
}
throws a checked exception, you need to change RuntimeException to its superclass Exception.
edited:
for (ProspectRequest prospect : prospectRequest) {
    try {
        p2p(prospect.getId());
    } catch (RuntimeException e) {}
}
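Since Feign surfaces HTTP errors as feign.FeignException, which is a RuntimeException, you could also catch that type specifically so only Feign failures are skipped; a sketch:
for (ProspectRequest prospect : prospectRequest) {
    try {
        p2p(prospect.getId());
    } catch (feign.FeignException e) {
        // log and continue with the next prospect
    }
}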

Vertx http server Thread has been blocked for xxxx ms, time limit is 2000

I have written a large-scale HTTP server using Vert.x, but I'm getting this error when the number of concurrent requests increases:
WARNING: Thread Thread[vert.x-eventloop-thread-1,5,main] has been blocked for 8458 ms, time limit is 1000
io.vertx.core.VertxException: Thread blocked
Here is my full code:
public class MyVertxServer {
    public Vertx vertx = Vertx.vertx(new VertxOptions().setWorkerPoolSize(100));
    private HttpServer server = vertx.createHttpServer();
    private Router router = Router.router(vertx);

    public void bind(int port) {
        server.requestHandler(router::accept).listen(port);
    }

    public void createContext(String path, MyHttpHandler handler) {
        if (!path.endsWith("/")) {
            path += "/";
        }
        path += "*";
        router.route(path).handler(new Handler<RoutingContext>() {
            @Override
            public void handle(RoutingContext ctx) {
                String[] handlerID = ctx.request().uri().split(ctx.currentRoute().getPath());
                String suffix = handlerID.length > 1 ? handlerID[1] : null;
                handler.Handle(ctx, new VertxUtils(), suffix);
            }
        });
    }
}
And here is how I call it:
ver.createContext("/getRegisterManager",new ProfilesManager.RegisterHandler());
ver.createContext("/getLoginManager", new ProfilesManager.LoginHandler());
ver.createContext("/getMapcomCreator",new ItemsManager.MapcomCreator());
ver.createContext("/getImagesManager", new ItemsManager.ImagesHandler());
ver.bind(PORT);
However, I don't find eventBus() useful for HTTP servers that send/receive files, because you would need to send the RoutingContext in the message, which is not possible.
Could you please point me in the right direction? Thanks.
Added a little bit of the handler's code:
class ProfileGetter implements MyHttpHandler {
    @Override
    public void Handle(RoutingContext ctx, VertxUtils utils, String suffix) {
        String username = utils.Decode(ctx.request().headers().get("username"));
        String lang = utils.Decode(ctx.request().headers().get("lang"));
        display("profile requested : " + username);
        Profile profile = ProfileManager.FindProfile(username, lang);
        if (profile == null) {
            ctx.request().response().putHeader("available", "false");
            utils.sendResponseAndEnd(ctx.response(), 400);
            return;
        } else {
            ctx.request().response().putHeader("available", "true");
            utils.writeStringAndEnd(ctx, new Gson().toJson(profile));
        }
    }
}
Here ProfileManager.FindProfile(username, lang) does a long-running database job on the same thread.
...
Basically all of my processing happens on the main thread, because if I use an executor I get strange exceptions and NullPointerExceptions in Vert.x, which makes me feel like the request processors in Vert.x are parallel.
Given the small amount of code in the question, let's agree that the problem is on the line:
Profile profile = ProfileManager.FindProfile(username,lang);
Assuming that this is internally doing some blocking JDBC call, which is an anti-pattern in Vert.x, you can solve this in several ways.
Say that you can totally refactor the ProfileManager class (which IMO is the best option): then you can update it to be reactive, so your code would look like:
ProfileManager.FindProfile(username, lang, res -> {
    if (res.failed()) {
        // handle error, send 500 back, etc...
    } else {
        Profile profile = res.result();
        if (profile == null) {
            ctx.request().response().putHeader("available", "false");
            utils.sendResponseAndEnd(ctx.response(), 400);
            return;
        } else {
            ctx.request().response().putHeader("available", "true");
            utils.writeStringAndEnd(ctx, new Gson().toJson(profile));
        }
    }
});
What would be happening behind the scenes is that your JDBC call would not block (which is tricky, because JDBC is blocking by nature). To get there, if you're lucky enough to use MySQL or Postgres, you can code your JDBC against the async-client; if you're stuck with another RDBMS, you need the jdbc-client, which in turn uses a thread pool to offload the work from the event loop thread.
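For illustration, a rough sketch of the jdbc-client route (io.vertx:vertx-jdbc-client for Vert.x 3.x; the connection settings, table and column names are placeholders, not from the question):
import io.vertx.core.json.JsonArray;
import io.vertx.core.json.JsonObject;
import io.vertx.ext.jdbc.JDBCClient;
import io.vertx.ext.sql.SQLConnection;

JDBCClient jdbc = JDBCClient.createShared(vertx, new JsonObject()
        .put("url", "jdbc:mysql://localhost/mydb")    // placeholder
        .put("driver_class", "com.mysql.jdbc.Driver") // placeholder
        .put("max_pool_size", 30));

jdbc.getConnection(connRes -> {
    if (connRes.failed()) {
        ctx.response().setStatusCode(500).end();
        return;
    }
    SQLConnection conn = connRes.result();
    conn.queryWithParams("SELECT * FROM profiles WHERE username = ?", // placeholder SQL
            new JsonArray().add(username), queryRes -> {
        conn.close();
        if (queryRes.failed()) {
            ctx.response().setStatusCode(500).end();
        } else {
            // rows arrive back on the event loop; no thread is blocked
            // build the Profile from queryRes.result().getRows() and respond as before
        }
    });
});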
Now say that you cannot change the ProfileManager code; you can still offload it to the worker pool by wrapping the call in an executeBlocking block:
vertx.<Profile>executeBlocking(future -> {
    Profile profile = ProfileManager.FindProfile(username, lang);
    future.complete(profile);
}, false, res -> {
    if (res.failed()) {
        // handle error, send 500 back, etc...
    } else {
        Profile profile = res.result();
        if (profile == null) {
            ctx.request().response().putHeader("available", "false");
            utils.sendResponseAndEnd(ctx.response(), 400);
            return;
        } else {
            ctx.request().response().putHeader("available", "true");
            utils.writeStringAndEnd(ctx, new Gson().toJson(profile));
        }
    }
});

Download files with netty

I am creating a very basic webserver using Netty and Java. It will have basic functionality: its main responsibilities are to serve responses for API calls made by a client (e.g. a browser, or a console app I am building) in JSON form, or to send a zip file. For that reason I have created the HttpServerHandler class, which is responsible for getting the request, parsing it to find the command, and calling the appropriate API call. It extends SimpleChannelInboundHandler
and overrides the following functions:
@Override
public void channelActive(ChannelHandlerContext ctx) throws Exception {
    LOG.debug("channelActive");
}
@Override
public void channelReadComplete(ChannelHandlerContext ctx) {
    LOG.debug("In channelComplete()");
    ctx.flush();
}
@Override
public void channelRead0(ChannelHandlerContext ctx, Object msg)
        throws IOException {
    ctx = processMessage(ctx, msg);
    if (!HttpHeaders.isKeepAlive(request)) {
        // If keep-alive is off, close the connection once the content is
        // fully written.
        ctx.writeAndFlush(Unpooled.EMPTY_BUFFER).addListener(
                ChannelFutureListener.CLOSE);
    }
}
private ChannelHandlerContext processMessage(ChannelHandlerContext ctx, Object msg) {
    if (msg instanceof HttpRequest) {
        HttpRequest request = this.request = (HttpRequest) msg;
        if (HttpHeaders.is100ContinueExpected(request)) {
            send100Continue(ctx);
        }
        // parse the message to find the command, parameters and cookies
        ctx = executeCommand(command, parameters, cookies);
    }
    if (msg instanceof LastHttpContent) {
        LOG.debug("msg is of LastHttpContent");
        if (!HttpHeaders.isKeepAlive(request)) {
            // If keep-alive is off, close the connection once the content is
            // fully written.
            ctx.writeAndFlush(Unpooled.EMPTY_BUFFER).addListener(
                    ChannelFutureListener.CLOSE);
        }
    }
    return ctx;
}
private ChannelHandlerContext executeCommand(String command, HashMap<String, List<String>> parameters, Set<Cookie> cookies) {
    // switch to see which command has to be invoked
    switch (command) {
        // many cases
        case "/report":
            ctx = myApi.getReport(parameters, cookies); // this is a member var of ServerHandler
            break;
        // many more cases
    }
    return ctx;
}
In my API class, here is the getReport function:
public ChannelHandlerContext getReportFile(Map<String, List<String>> parameters,
        Set<Cookie> cookies) {
    // some initializations; the actual file handling happens below
    File file = new File(fixedReportPath);
    RandomAccessFile raf = null;
    long fileLength = 0L;
    try {
        raf = new RandomAccessFile(file, "r");
        fileLength = raf.length();
        LOG.debug("creating response for file");
        this.response = Response.createFileResponse(fileLength);
        this.ctx.write(response);
        this.ctx.write(new HttpChunkedInput(new ChunkedFile(raf, 0,
                                                            fileLength,
                                                            8192)),
                       this.ctx.newProgressivePromise());
    } catch (FileNotFoundException fnfe) {
        LOG.debug("File was not found", fnfe);
        this.response = Response.createStringResponse("failure");
        this.ctx.write(response);
    } catch (IOException ioe) {
        LOG.debug("Error getting file size", ioe);
        this.response = Response.createStringResponse("failure");
        this.ctx.write(response);
    } finally {
        try {
            raf.close();
        } catch (IOException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
    }
    return this.ctx;
}
The Response class is responsible for creating the various types of responses (JSON string, JSON array, JSON integer, file, etc.):
public static FullHttpResponse createFileResponse(long fileLength) {
    FullHttpResponse response = new DefaultFullHttpResponse(HttpVersion.HTTP_1_1, HttpResponseStatus.OK);
    HttpHeaders.setContentLength(response, fileLength);
    response.headers().set(HttpHeaders.Names.CONTENT_TYPE, "application/octet-stream");
    return response;
}
My API works great for my JSON responses (those were easier to achieve), but not for my file response. When making a request from e.g. Chrome, it just hangs and does not download the file. Should I do something else when downloading a file using Netty? I know it's not the best-written code; I still think I have some bits and pieces missing from totally understanding it, but I would like your advice on how to handle downloads in my code. For my code I took into consideration this and this.
First, some remarks on your code...
Instead of returning ctx, I would prefer to return the last Future for the last command, so that your final step (when keep-alive is off) can use it directly.
public void channelRead0(ChannelHandlerContext ctx, Object msg)
        throws IOException {
    ChannelFuture future = processMessage(ctx, msg);
    if (future != null && !HttpHeaders.isKeepAlive(request)) {
        // If keep-alive is off, close the connection once the content is
        // fully written.
        future.addListener(ChannelFutureListener.CLOSE);
    }
}
Doing it this way allows you to close directly, without any "pseudo" send, not even an empty one.
Important: note that in HTTP, the response is managed such that chunks are sent for all data after the first HttpResponse item, until the last one, which is empty (LastHttpContent). Sending another empty chunk that is not a LastHttpContent could break the internal logic.
Moreover, you're doing the work twice (once in channelRead0, once in processMessage), which could perhaps lead to some issues.
Also, since you check for keep-alive, you should ensure it is set back in the response:
if (HttpHeaders.isKeepAlive(request)) {
    response.headers().set(CONNECTION, HttpHeaders.Values.KEEP_ALIVE);
}
For your send, you have two choices (depending on whether SSL is used): you've selected only the second one, which is more general, and therefore valid in all cases, but less efficient.
// Write the content.
ChannelFuture sendFileFuture;
ChannelFuture lastContentFuture;
if (ctx.pipeline().get(SslHandler.class) == null) {
    sendFileFuture =
            ctx.write(new DefaultFileRegion(raf.getChannel(), 0, fileLength), ctx.newProgressivePromise());
    // Write the end marker.
    lastContentFuture = ctx.writeAndFlush(LastHttpContent.EMPTY_LAST_CONTENT); // <= last writeAndFlush
} else {
    sendFileFuture =
            ctx.writeAndFlush(new HttpChunkedInput(new ChunkedFile(raf, 0, fileLength, 8192)),
                    ctx.newProgressivePromise()); // <= last writeAndFlush
    // HttpChunkedInput will write the end marker (LastHttpContent) for us.
    lastContentFuture = sendFileFuture;
}
It is this lastContentFuture that you can hand back to the caller for the keep-alive check.
Note, however, that you didn't include a single flush there (except with your EMPTY_BUFFER, which could be the main reason for your issue!), contrary to the example (from which I copied the source).
Note that both branches use a writeAndFlush for the last call (or the only one).
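Applied to the original getReportFile, a minimal sketch of the fix (assuming a ChunkedWriteHandler is already in the pipeline, which HttpChunkedInput requires) is to make the chunked write the flushing call and keep its future for the keep-alive handling:
// write the headers, then write-and-flush the chunked body;
// HttpChunkedInput emits the trailing LastHttpContent for us
ctx.write(response);
ChannelFuture lastContentFuture = ctx.writeAndFlush(
        new HttpChunkedInput(new ChunkedFile(raf, 0, fileLength, 8192)),
        ctx.newProgressivePromise());
// return lastContentFuture so the caller can attach ChannelFutureListener.CLOSE when keep-alive is off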

Why am I seeing lots of TimeoutExceptions if any one server goes down?

Here is my DataClientFactory class.
public class DataClientFactory {
    public static IClient getInstance() {
        return ClientHolder.INSTANCE;
    }
    private static class ClientHolder {
        private static final DataClient INSTANCE = new DataClient();
        static {
            new DataScheduler().startScheduleTask();
        }
    }
}
Here is my DataClient class.
public class DataClient implements IClient {
    private ExecutorService service = Executors.newFixedThreadPool(15);
    private RestTemplate restTemplate = new RestTemplate();

    // for initialization purposes
    public DataClient() {
        try {
            new DataScheduler().callDataService();
        } catch (Exception ex) { // swallow the exception
            // log exception
        }
    }

    @Override
    public DataResponse getDataSync(DataKey dataKeys) {
        DataResponse response = null;
        try {
            Future<DataResponse> handle = getDataAsync(dataKeys);
            response = handle.get(dataKeys.getTimeout(), TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            // log error
            response = new DataResponse(null, DataErrorEnum.CLIENT_TIMEOUT, DataStatusEnum.ERROR);
        } catch (Exception e) {
            // log error
            response = new DataResponse(null, DataErrorEnum.ERROR_CLIENT, DataStatusEnum.ERROR);
        }
        return response;
    }

    @Override
    public Future<DataResponse> getDataAsync(DataKey dataKeys) {
        Future<DataResponse> future = null;
        try {
            DataTask dataTask = new DataTask(dataKeys, restTemplate);
            future = service.submit(dataTask);
        } catch (Exception ex) {
            // log error
        }
        return future;
    }
}
I get my client instance from the above factory as shown below and then call the getDataSync method, passing a DataKey object. The DataKey object has userId and timeout values in it. After this, the call reaches my DataTask class as soon as handle.get is called.
IClient dataClient = DataClientFactory.getInstance();
long userid = 1234L;
long timeout_ms = 500;
DataKey keys = new DataKey.Builder().setUserId(userid).setTimeout(timeout_ms)
        .remoteFlag(false).secondaryFlag(true).build();
// call the getDataSync method
DataResponse dataResponse = dataClient.getDataSync(keys);
System.out.println(dataResponse);
Here is my DataTask class, which has all the logic:
public class DataTask implements Callable<DataResponse> {
    private DataKey dataKeys;
    private RestTemplate restTemplate;

    public DataTask(DataKey dataKeys, RestTemplate restTemplate) {
        this.restTemplate = restTemplate;
        this.dataKeys = dataKeys;
    }

    @Override
    public DataResponse call() {
        DataResponse dataResponse = null;
        ResponseEntity<String> response = null;
        int serialId = getSerialIdFromUserId();
        boolean remoteFlag = dataKeys.isRemoteFlag();
        boolean secondaryFlag = dataKeys.isSecondaryFlag();
        List<String> hostnames = new LinkedList<String>();
        Mappings mappings = ClientData.getMappings(dataKeys.whichFlow());
        String localPrimaryHostIPAddress = null;
        String remotePrimaryHostIPAddress = null;
        String localSecondaryHostIPAddress = null;
        String remoteSecondaryHostIPAddress = null;
        // use the mappings object to get the above addresses from the serialId, then
        // populate the hostnames list based on remoteFlag and secondaryFlag
        if (remoteFlag && secondaryFlag) {
            hostnames.add(localPrimaryHostIPAddress);
            hostnames.add(localSecondaryHostIPAddress);
            hostnames.add(remotePrimaryHostIPAddress);
            hostnames.add(remoteSecondaryHostIPAddress);
        } else if (remoteFlag && !secondaryFlag) {
            hostnames.add(localPrimaryHostIPAddress);
            hostnames.add(remotePrimaryHostIPAddress);
        } else if (!remoteFlag && !secondaryFlag) {
            hostnames.add(localPrimaryHostIPAddress);
        } else if (!remoteFlag && secondaryFlag) {
            hostnames.add(localPrimaryHostIPAddress);
            hostnames.add(localSecondaryHostIPAddress);
        }
        for (String hostname : hostnames) {
            // if the hostname is null or in the local blocked-host list, skip sending a request to it
            if (hostname == null || ClientData.isHostBlocked(hostname)) {
                continue;
            }
            try {
                String url = generateURL(hostname);
                response = restTemplate.exchange(url, HttpMethod.GET, dataKeys.getEntity(), String.class);
                // make DataResponse
                break;
            } catch (HttpClientErrorException ex) {
                // make DataResponse
                return dataResponse;
            } catch (HttpServerErrorException ex) {
                // make DataResponse
                return dataResponse;
            } catch (RestClientException ex) {
                // If it comes here, it means one of the servers is down.
                // Add this server to the blocked-host list.
                ClientData.blockHost(hostname);
                // log an error
            } catch (Exception ex) {
                // If it comes here, something weird has happened.
                // log an error
                // make DataResponse
            }
        }
        return dataResponse;
    }

    private String generateURL(final String hostIPAddress) {
        // make a url
    }

    private int getSerialIdFromUserId() {
        // get the id
    }
}
Now, based on the userId, I get the serialId and then the list of hostnames I am supposed to call, depending on which flags are passed. I then iterate the hostnames list and make calls to the servers. Let's say I have four hostnames (A, B, C, D) in the linked list: I make the call to A first and, if I get the data back, return the DataResponse. But if A is down, I need to add A to the block list instantly so that no other thread makes a call to host A, and then call hostname B instead (repeating the same thing if B is also down).
I have a background thread as well, which runs every 10 minutes; it gets started as soon as we get the client instance from the factory, and it parses my other service URL to get the list of blocked hostnames that we are not supposed to call. Since it runs every 10 minutes, any server that goes down will show up in that list only after up to 10 minutes; likewise, once a server comes back up, the list is updated within 10 minutes as well.
Here is my background thread code, DataScheduler:
public class DataScheduler {
    private RestTemplate restTemplate = new RestTemplate();
    private static final Gson gson = new Gson();
    private final ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);

    public void startScheduleTask() {
        scheduler.scheduleAtFixedRate(new Runnable() {
            public void run() {
                try {
                    callDataService();
                } catch (Exception ex) {
                    // log an error
                }
            }
        }, 0, 10L, TimeUnit.MINUTES);
    }

    public void callDataService() throws Exception {
        String url = null;
        // execute the url and get the responseMap from it as a string
        parseResponse(responseMap);
    }

    private void parseResponse(Map<FlowsEnum, String> responses) throws Exception {
        // .. some code here to calculate partitionMappings
        // build the block list of hostnames
        List<String> blockList = new ArrayList<String>();
        Map<String, List<String>> coloExceptionList = gson.fromJson(response.split("blocklist=")[1], Map.class);
        for (Map.Entry<String, List<String>> entry : coloExceptionList.entrySet()) {
            for (String hosts : entry.getValue()) {
                blockList.add(hosts);
            }
        }
        if (update) {
            ClientData.setAllMappings(partitionMappings);
        }
        // update the block list of hostnames
        if (!DataUtils.isEmpty(responses)) {
            ClientData.replaceBlockedHosts(blockList);
        }
    }
}
And here is my ClientData class, which holds all the information for the block list of hostnames and the partitionMappings details (which are used to get the list of valid hostnames).
public class ClientData {
    private static final AtomicReference<ConcurrentHashMap<String, String>> blockedHosts = new AtomicReference<ConcurrentHashMap<String, String>>(
            new ConcurrentHashMap<String, String>());

    // some code here to set the partitionMappings using a CountDownLatch,
    // so that reads block until the first load completes

    public static boolean isHostBlocked(String hostName) {
        return blockedHosts.get().contains(hostName);
    }

    public static void blockHost(String hostName) {
        blockedHosts.get().put(hostName, hostName);
    }

    public static void replaceBlockedHosts(List<String> blockList) {
        ConcurrentHashMap<String, String> newBlockedHosts = new ConcurrentHashMap<>();
        for (String hostName : blockList) {
            newBlockedHosts.put(hostName, hostName);
        }
        blockedHosts.set(newBlockedHosts);
    }
}
Problem statement:
When all the servers are up (A, B, C, D, for example) the above code works fine and I don't see any TimeoutExceptions from handle.get. But if one server (A), which the main thread was supposed to call, goes down, then I start seeing a lot of TimeoutExceptions; by a lot I mean a huge number of client timeouts.
I am not sure why this is happening. In general this shouldn't happen, right? As soon as a server goes down it gets added to the blockList, so no thread should call that server; instead it should try another server in the list. So the process should be smooth, and as soon as those servers come back up, the blockList gets updated by the background thread and calls can resume.
Is there any problem in my code above that could cause this? Any suggestions would be a great help.
In general, what I am trying to do is: build a hostnames list depending on the user id passed, using the mappings object; then call the first hostname and get the response back, and if that hostname is down, add it to the block list and call the second hostname in the list.
Here is the stack trace I am seeing:
java.util.concurrent.TimeoutException
    at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:258)
    at java.util.concurrent.FutureTask.get(FutureTask.java:119)
    at com.host.client.DataClient.getDataSync(DataClient.java:20)
NOTE: Multiple userIds can resolve to the same server, meaning server A can be resolved from multiple userIds.
In the DataClient class, the problem is at the line below:
public class DataClient implements IClient {
    ----code code---
    Future<DataResponse> handle = getDataAsync(dataKeys);
    // BELOW LINE IS THE PROBLEM
    response = handle.get(dataKeys.getTimeout(), TimeUnit.MILLISECONDS); // <--- HERE
} catch (TimeoutException e) {
    // log error
    response = new DataResponse(null, DataErrorEnum.CLIENT_TIMEOUT, DataStatusEnum.ERROR);
} catch (Exception e) {
    // log error
    response = new DataResponse(null, DataErrorEnum.ERROR_CLIENT, DataStatusEnum.ERROR);
    ----code code-----
You have assigned a timeout to handle.get(...), which is timing out before your REST connections can respond. The REST connections themselves may or may not be timing out, but since the future's get method times out before the thread finishes executing, blocking the hosts has no visible effect, even though the code inside DataTask's call method may be performing as expected. Hope this helps.
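One concrete thing to try (a sketch; the 300 ms values are placeholders you should tune below DataKey's timeout) is to give the shared RestTemplate its own connect/read timeouts, so a dead host fails fast with a RestClientException and actually reaches the branch that blocks it:
import org.springframework.http.client.SimpleClientHttpRequestFactory;
import org.springframework.web.client.RestTemplate;

SimpleClientHttpRequestFactory factory = new SimpleClientHttpRequestFactory();
factory.setConnectTimeout(300); // placeholder value
factory.setReadTimeout(300);    // placeholder value
RestTemplate restTemplate = new RestTemplate(factory);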
You asked for suggestions, so here are some:
1.) Unexpected return value
The blocked-host check may not do what you intend: ConcurrentHashMap.contains(Object) tests values, not keys, so it only works here by accident, because the keys and values are identical.
if (ClientData.isHostBlocked(hostname)) // please check this
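A minimal fix, using containsKey to state the intent:
public static boolean isHostBlocked(String hostName) {
    // containsKey tests keys; ConcurrentHashMap.contains(Object) tests values
    return blockedHosts.get().containsKey(hostName);
}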
2.) Exception handling
Are you really sure that a RestClientException occurs? Only when that exception occurs is the host added to the blocked list.
Your posted code also seems to skip logging (it is commented out!):
...catch (HttpClientErrorException ex) {
    // make DataResponse
    return dataResponse;
} catch (HttpServerErrorException ex) {
    // make DataResponse
    return dataResponse;
} catch (RestClientException ex) {
    // If it comes here, it means one of the servers is down.
    // Add this server to the blocked-host list.
    ClientData.blockHost(hostname);
    // log an error
} catch (Exception ex) {
    // If it comes here, something weird has happened.
    // log an error
    // make DataResponse
}

MINA: Performing synchronous write requests / read responses

I'm attempting to perform a synchronous write/read in a demux-based client application with MINA 2.0 RC1, but it seems to get stuck. Here is my code:
public boolean login(final String username, final String password) {
    // block inbound messages
    session.getConfig().setUseReadOperation(true);
    // send the login request
    final LoginRequest loginRequest = new LoginRequest(username, password);
    final WriteFuture writeFuture = session.write(loginRequest);
    writeFuture.awaitUninterruptibly();
    if (writeFuture.getException() != null) {
        session.getConfig().setUseReadOperation(false);
        return false;
    }
    // retrieve the login response
    final ReadFuture readFuture = session.read();
    readFuture.awaitUninterruptibly();
    if (readFuture.getException() != null) {
        session.getConfig().setUseReadOperation(false);
        return false;
    }
    // stop blocking inbound messages
    session.getConfig().setUseReadOperation(false);
    // determine if the login info provided was valid
    final LoginResponse loginResponse = (LoginResponse) readFuture.getMessage();
    return loginResponse.getSuccess();
}
I can see on the server side that the LoginRequest object is retrieved, and a LoginResponse message is sent. On the client side, the DemuxingProtocolCodecFactory receives the response, but after throwing in some logging, I can see that the client gets stuck on the call to readFuture.awaitUninterruptibly().
I can't for the life of me figure out why it gets stuck here, based upon my own code. I properly set the read operation to true on the session config, meaning that messages should be blocked. However, it seems as if the message no longer exists by the time I try to read the response messages synchronously.
Any clues as to why this won't work for me?
The reason this wasn't working for me was an issue elsewhere in my code, where I stupidly neglected to implement the encoder/decoder for the response message. Ugh. Anyway, the code in my question worked as soon as I fixed that.
I prefer this one (Christian Mueller: http://apache-mina.10907.n7.nabble.com/Mina-Client-which-sends-receives-messages-synchronous-td35672.html):
public class UCPClient {
    private Map<Integer, BlockingQueue<UCPMessageResponse>> concurrentMap = new ConcurrentHashMap<Integer, BlockingQueue<UCPMessageResponse>>();

    // some other code

    public UCPMessageResponse send(UCPMessageRequest request) throws Throwable {
        BlockingQueue<UCPMessageResponse> queue = new LinkedBlockingQueue<UCPMessageResponse>(1);
        UCPMessageResponse res = null;
        try {
            if (sendSync) {
                concurrentMap.put(Integer.valueOf(request.getTransactionReference()), queue);
            }
            WriteFuture writeFuture = session.write(request);
            if (sendSync) {
                boolean isSent = writeFuture.await(transactionTimeout, TimeUnit.MILLISECONDS);
                if (!isSent) {
                    throw new TimeoutException("Could not send the request in " + transactionTimeout + " milliseconds.");
                }
                if (writeFuture.getException() != null) {
                    throw writeFuture.getException();
                }
                res = queue.poll(transactionTimeout, TimeUnit.MILLISECONDS);
                if (res == null) {
                    throw new TimeoutException("Could not receive the response in " + transactionTimeout + " milliseconds.");
                }
            }
        } finally {
            if (sendSync) {
                concurrentMap.remove(Integer.valueOf(request.getTransactionReference()));
            }
        }
        return res;
    }
}
and the IoHandler:
public class InnerHandler implements IoHandler {
    // some other code

    public void messageReceived(IoSession session, Object message) throws Exception {
        if (sendSync) {
            UCPMessageResponse res = (UCPMessageResponse) message;
            BlockingQueue<UCPMessageResponse> queue = concurrentMap.get(res.getTransactionReference());
            queue.offer(res);
        }
    }
}
I had this exact problem. It turns out that it's because I was doing reads/writes in my IoHandler.sessionCreated() implementation. I moved the processing onto the thread that established the connection, instead of just waiting for the close future.
You must not use your login() function on the IoHandler thread:
if you call IoFuture.awaitUninterruptibly() in an overridden event function of the IoHandler, the IoHandler cannot do its work and gets stuck.
You can call login() in another thread and it will work properly.
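For example (a sketch; client stands for whatever object exposes your login() method):
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// run the blocking login() off the I/O processor thread
ExecutorService pool = Executors.newSingleThreadExecutor();
Future<Boolean> loggedIn = pool.submit(new Callable<Boolean>() {
    public Boolean call() {
        return client.login(username, password);
    }
});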
