I'm issuing 10 HTTP GET requests at the same time. The calling method creates a different thread for each request (e.g. Thread 1, Thread 2, ...).
Caller Method:
for (int i = 0; i < 10; i++) {
    Thread.currentThread().getId();
    HttpClient httpClient = new HttpClient(url);
    res = httpClient.get(5000);
}
Each request then hits the application entry point, which creates a new thread for every incoming request (e.g. Thread 11, Thread 12, ...).
public void DoProcess() {
    Thread.currentThread().getId(); // a new thread handles each request
    ...
}
But I want to know which caller thread's request created which application thread, for example:
Thread 1 belongs to Thread 11
Thread 2 belongs to Thread 12
Please let me know how to achieve this.
The client connects through TCP, so there is a client socket IP and port involved.
I don't know the HttpClient API by heart, but if there is something like a getClientPort(), you should be able to print out the time, thread name, and client IP+port. On the server, whatever accepts the socket has the client IP and port too. If this is a servlet container, the servlet request has getRemoteAddr() and getRemotePort(). There too you can print out the time, IP+port, and thread name. If you pile those events into two tables, you should be able to join them by IP+port with a tolerance on client time vs. server time (try less than 2 seconds apart, assuming client and server clocks are synchronized via NTP).
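As a rough illustration of the server-side half of that correlation log (assuming a servlet container with the HttpServletRequest in scope as request; the log format is just an example):

// Inside the entry point: one line per request that can later be joined
// against the client-side log by IP:port.
System.out.println(System.currentTimeMillis() + " "
        + Thread.currentThread().getName() + " "
        + request.getRemoteAddr() + ":" + request.getRemotePort());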
The other trivial way (though it changes the HTTP payload) is to inject an HTTP header from the client into the request, stating the current thread name/ID, e.g. "my_custom_remote_thread_id: Thread-11". That way the server can pull the header from the request to figure out the client thread name/ID.
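A minimal sketch of that header approach, using plain HttpURLConnection on the client (since the HttpClient class in the question isn't a standard one) and the Servlet API on the server; the header name X-Caller-Thread is just an example:

import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

public class TaggedGet {
    // Client side: tag the request with the calling thread's name and id.
    public static int get(String url) throws IOException {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        conn.setRequestMethod("GET");
        conn.setConnectTimeout(5000);
        conn.setReadTimeout(5000);
        conn.setRequestProperty("X-Caller-Thread",
                Thread.currentThread().getName() + "/" + Thread.currentThread().getId());
        return conn.getResponseCode();
    }
}

// Server side (inside DoProcess or the servlet), with the request in scope:
String callerThread = request.getHeader("X-Caller-Thread");
System.out.println("Server thread " + Thread.currentThread().getId()
        + " handles a request from caller thread " + callerThread);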
I am consuming a REST web service from Java code using the Apache Commons HttpClient API. If no response comes back within the socket timeout configured in the connection manager parameters, a socket timeout exception occurs. In that case the thread returns the exception to the caller class, so even if the REST service responds a few seconds later, the response is lost.
Is it possible to create a new thread that keeps listening to the service even after the timeout and just logs the response, while the main thread returns the exception to the caller class?
Is there any better way to achieve this?
Thanks.
The pattern you are most likely looking for involves asynchronous requests. For every action you post you create a unique "job" id and with that a specific URL for the job status. After starting the job, you can then query on that specific job instance's status. For example:
POST to /actions
Returns 202 Accepted & include a Location header to /actions/results/1234
Immediately GET /actions/results/1234 to ascertain its status.
If it returns a 2xx your job is done.
If it returns 404, wait 10 seconds (or whatever) and try again.
Once you are happy with the result, issue a DELETE to /actions/results/1234 to clean up after yourself.
Of course you don't have to return 404 if the job is not done, there are other strategies for checking on the status - the key thing is that it's a subsequent call.
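A rough sketch of that flow from the client side, using java.net.http (Java 11+) rather than Commons HttpClient; the host, URLs, payload, and 10-second delay are illustrative only:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class JobPollingClient {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        String base = "http://example.com";

        // 1. POST to /actions to start the job.
        HttpResponse<String> post = client.send(
                HttpRequest.newBuilder(URI.create(base + "/actions"))
                        .POST(HttpRequest.BodyPublishers.ofString("{\"action\":\"doWork\"}"))
                        .build(),
                HttpResponse.BodyHandlers.ofString());

        // 2. 202 Accepted plus a Location header pointing at /actions/results/1234.
        String resultUrl = base + post.headers().firstValue("Location").orElseThrow();

        // 3. Poll the result URL until it stops returning 404.
        HttpResponse<String> result;
        while (true) {
            result = client.send(
                    HttpRequest.newBuilder(URI.create(resultUrl)).GET().build(),
                    HttpResponse.BodyHandlers.ofString());
            if (result.statusCode() != 404) {
                break;
            }
            Thread.sleep(10_000); // not done yet, wait and retry
        }
        System.out.println("Job finished: " + result.body());

        // 4. Clean up after yourself.
        client.send(
                HttpRequest.newBuilder(URI.create(resultUrl)).DELETE().build(),
                HttpResponse.BodyHandlers.discarding());
    }
}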
I'm new to socket programming and am writing a simple Java UDP client-server application: a time/date server and client. The client can ask the server for the time and date and then waits for a response. Also, every minute the server updates all clients with the current time. The client needs to
be able to initiate contact with the server and wait for a message back
listen for periodic updates from the server
How can I do this using a single DatagramSocket?
I was thinking of creating two threads: one that listens and one that writes. The problem is that in the case that the client initiates contact with the server, it needs to wait to receive an acknowledgement from the server. So, the writing thread also needs to listen for packets from the server sometimes. But in this case, I have two threads listening and the wrong thread will get the acknowledgement.
Is there a way to specify which thread gets the input? Or is there some other way to solve this problem?
I've been searching for an answer but have been unable to find one. The closest I've found is Java sockets: can you send from one thread and receive on another?
If there is just one writer thread then it could send the request and go into a wait loop. The listener thread would then get the response, add it to a shared variable (maybe an AtomicReference), and then notify the writer that response has been received.
// import java.util.concurrent.atomic.AtomicReference;

// both the writer and listener threads need to share this
private final AtomicReference<Response> responseRef =
        new AtomicReference<Response>();
...
// writer-thread
writeRequest(request);
synchronized (responseRef) {
    while (responseRef.get() == null) {
        // maybe put a timeout here; wait() throws InterruptedException
        responseRef.wait();
    }
}
processResponse(responseRef.get());
...
// listener-thread
Response response = readResponse();
synchronized (responseRef) {
    responseRef.set(response);
    responseRef.notify();
}
If you have multiple writers or multiple requests being sent at the same time then it gets more complicated. You'll need to send some sort of unique id with each request and return it in the response. Then the response thread can match up the request with the response. You'd need a ConcurrentHashMap or other shared collection so that the responder can match up the particular request, add the response, and notify the appropriate waiting thread.
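For that multi-request case, one way to sketch the id-based matching is with CompletableFuture (a Java 8 alternative to hand-rolled wait/notify); all names here are illustrative, not a fixed API:

import java.util.concurrent.*;

public class ResponseCorrelator {

    // outstanding requests, keyed by the unique id sent with each request
    private final ConcurrentMap<Long, CompletableFuture<Response>> pending =
            new ConcurrentHashMap<>();

    // writer thread: register the id, send, then wait for the matching response
    public Response sendAndWait(long requestId, long timeoutMs) throws Exception {
        CompletableFuture<Response> future = new CompletableFuture<>();
        pending.put(requestId, future);
        writeRequest(requestId); // send the datagram (not shown)
        try {
            return future.get(timeoutMs, TimeUnit.MILLISECONDS);
        } finally {
            pending.remove(requestId);
        }
    }

    // listener thread: the server echoes the id back, so we can find the waiter
    public void onResponse(long requestId, Response response) {
        CompletableFuture<Response> future = pending.remove(requestId);
        if (future != null) {
            future.complete(response);
        }
    }

    private void writeRequest(long requestId) { /* write to the DatagramSocket */ }

    static class Response { /* parsed reply fields */ }
}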
As the topic suggests I have a server and some clients.
The server accepts I/O connections concurrently (no queueing in socket connections) but I have this troubling issue and I do not know how to bypass it!
If I force a client to throw an I/O exception, the server detects it and terminates the client thread correctly (verified with Task Manager on Windows and System Monitor on Ubuntu). But if I emulate an I/O operation that "hangs", e.g. Thread.sleep(60 * 1000); or
private static Object lock = new Object();

synchronized (lock) {
    while (true) {
        try {
            lock.wait();
        } catch (InterruptedException e) {
            /* Foo */
        }
    }
}
then all subsequent I/O operations (connection & data transfer) seem to block or wait until the "hanging" client is terminated. The application makes use of an ExecutorService, so if the "hanging" client does not complete its operations within the suggested time limit, the task times out and the client is forced to exit. The subsequent "blocked" I/Os then resume, but I wonder why the server doesn't accept any connections or perform any I/O operations while a client "hangs"?
NOTE: The client threading takes place in the server main like this:
while (true) {
    accept client connection;
    submit client task; // via the ExecutorService, in the form
    // spService.submit(new Callable<Tuple<String[], BigDecimal[]>>() {
    //     ... code ... }}).get(taskTimeout, taskTimeUnit);
    check task result & perform cleanup if result is null;
    otherwise continue;
}
The problem:
This may very well indicate that your server ACCEPTS client connections concurrently, but handles them synchronously. That means that even if a million clients connect successfully, if any one of them takes a long time (or hangs), it will hold up the others.
The test:
To verify this, I would vary the amount of time a client takes by adding Thread.sleep(1000) statements in your clients.
Expected result:
I believe you will see that even a single Thread.sleep(1000) statement in a client delays all other connecting clients by 1000 ms.
I think I have found the source of my problems!
I do use a one-thread-per-client model, but I run my tests locally, i.e. on the same machine, which means all clients have the same IP as the server! I guess that leaves server and clients to differ only in port number, but since each client is mapped to a different local port for each server connection, the server shouldn't block. I have confirmed that each client and the server use different I/O objects (compared references), and I wrap their socket streams in BufferedReaders & PrintWriters, but still, when one client hangs all other clients hang too (so maybe the I/O channels are indeed the same???). I will test this on another machine and check the results back with you! :)
EDIT: Confirmed the erratic behaviour. It seems that even with remote clients, if one hangs the other clients hang too! :/
I don't know why, but I am determined to fix this. It's just pretty weird, since I am fairly sure I use one thread per client (the I/O objects differ, the client sockets differ, the IPs seem not to be the problem, I even map each client in the server to a local port of my choice...).
Maybe I'll switch to NIO if I don't find a solution soon enough.
SOLUTION: Solved the problem! It turned out that the ExecutorService had to be run in a separate thread, otherwise if an I/O operation in one client blocked, all I/Os would block! That's strange given that I tried both Executors.newFixedThreadPool(<nThreads>) and Executors.newCachedThreadPool(), and the client actions (i.e. the I/Os) should take place in a new thread for each client.
In any case, I wrapped the calls in a method so that each client instance uses a final ExecutorService baseWorker = Executors.newSingleThreadExecutor(); and I explicitly create a new Thread each time using <Thread instance>.start(); so each one runs in the background :)
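For anyone hitting the same thing: the likely culprit is calling get(taskTimeout, taskTimeUnit) on the submitted Future inside the accept loop, which turns the pool back into synchronous handling. Below is a minimal sketch of an accept loop that never blocks on the result; the class, port, and handleClient method are illustrative, not the original code:

import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.*;

public class NonBlockingAcceptLoop {

    // hypothetical per-client work: read the request, write the response
    static String handleClient(Socket client) throws IOException {
        // ... protocol handling goes here ...
        return "done";
    }

    public static void main(String[] args) throws IOException {
        ExecutorService workers = Executors.newCachedThreadPool();
        // a second pool waits on the (possibly hanging) tasks,
        // so the accept loop itself never calls get()
        ExecutorService watchers = Executors.newCachedThreadPool();
        long taskTimeout = 30;

        try (ServerSocket server = new ServerSocket(9090)) {
            while (true) {
                Socket client = server.accept(); // never blocked by a slow client
                Future<String> result = workers.submit(() -> handleClient(client));
                watchers.submit(() -> {
                    try {
                        result.get(taskTimeout, TimeUnit.SECONDS);
                    } catch (TimeoutException e) {
                        result.cancel(true);               // force the hanging client out
                        try { client.close(); } catch (IOException ignored) {}
                    } catch (Exception e) {
                        // log and clean up
                    }
                });
            }
        }
    }
}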
I have made an client - server example using netty.
I have defined handlers for the server and the client.
Basically I connect with the client to the server and send some messages.
Every received message gets send back (with the content of the message being converted to upper case).
All the work on the received messages, on server and client-side, is done by the defined handlers.
But I would like to receive/accept some of the messages directly in the client program,
not (just) in the handler. So my question is: is it possible to have some listener that receives messages directly in the client program and not in its handlers? The thing is, I would like to access the received messages within the (executable) program (basically the class with a main method) that created the client object, using something like a timer (or a loop) which would periodically check for new messages.
I would appreciate it if someone could help me with this issue, or at least tell me whether it's even possible with netty.
You're looking to translate netty's event-based model into a polling model. A simple way to do that is to create a message queue:
//import java.util.concurrent.BlockingQueue;
//import java.util.concurrent.LinkedBlockingQueue;
BlockingQueue<Object> queue = new LinkedBlockingQueue<Object>(); // substitute your message type for Object
You need to make the queue available to your handler as a constructor argument, and when a message arrives you put it into the queue:
// Your netty handler:
queue.put(message);
On the client end, you can poll the queue for messages:
// The polling loop in your program:
message = queue.poll(5, TimeUnit.SECONDS);
The BlockingQueue offers you the choice between waiting for a message to arrive (take()), waiting a certain amount of time for a message to arrive (poll(long, TimeUnit)), or merely checking whether any message is available right now (poll()).
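Putting the pieces together, here is a minimal sketch of such a handler, assuming Netty 4 (where you extend SimpleChannelInboundHandler and override channelRead0; in Netty 3 the equivalent hook is messageReceived on SimpleChannelHandler) and assuming your messages are decoded to String:

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;
import java.util.concurrent.BlockingQueue;

// A handler that hands every received message to the client program via a queue.
public class QueueingClientHandler extends SimpleChannelInboundHandler<String> {

    private final BlockingQueue<String> queue;

    public QueueingClientHandler(BlockingQueue<String> queue) {
        this.queue = queue;
    }

    @Override
    protected void channelRead0(ChannelHandlerContext ctx, String msg) throws Exception {
        queue.put(msg); // hand the message over to whoever polls the queue
    }
}

// In the client program (the class with the main method):
BlockingQueue<String> queue = new LinkedBlockingQueue<String>();
// ... add new QueueingClientHandler(queue) to the client's ChannelPipeline ...
String msg = queue.poll(5, TimeUnit.SECONDS); // null if nothing arrived within 5 seconds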
From a design perspective, this approach kills the non-blocking IO advantage netty is supposed to give you. You could have used a normal Socket connection for the same net result.
I would like some advice on this issue:
I am using JBoss 5.1.0 and EJB 3.0.
I have a system which sends requests via UDP to remote modems and is supposed to wait for an answer from the target modem.
The remote modems support only UDP calls, therefore I want to design an asynchronous mechanism (also because I want to send requests to X modems in parallel).
This is what I am trying to do:
All calls are retrieved from the database, then each call is added as a message to a JMS queue.
Let's say I set X MDBs on that queue, so I can work asynchronously. Each MDB will then send a UDP request to the IP address (remote modem) parsed from the queue message.
So basically each MDB that takes a message sends a UDP request to the remote modem and waits for an answer from that modem.
Now here is the bug:
A scenario can happen where an MDB gets an answer, but not from the right modem (the one it requested in the first place).
That bad scenario causes two problems:
a. The sender which sent the request will wait forever, since the answer never returns to it (it got accepted by another MDB).
b. The MDB which received the answer is not the right one; if it was in "listener" mode, it was supposed to wait for an answer from a different sender (otherwise it wouldn't get any messages).
Of course I can handle everything with a RETRY mechanism, so both MDBs (the one that got a message from the wrong sender, and the one that never got its answer) will try again, hoping that next time they will succeed.
This is the mechanism; maybe you could tell me if there is a design pattern, or any other effective solution, for this problem?
Thanks,
ray.
It's tough to define an exacting solution without knowing the details, but I will assume that when a response is received from a modem (either the correct one or not), it is possible to determine which exact modem the request came from.
If this is the case, I would separate out the request handler from the response handler:
RequestMDB receives a message from the [existing] queue, dispatches the request and returns.
A new component (call it the ResponseHandler) handles all incoming responses from the modems. The response sender is identified (a modem ID ?) and packages the response into a JMS message which is sent to a JMS Response Queue.
A new MDB (ResponseMDB) listens on the JMS Response Queue and processes the response for which the modem ID is now known.
In short, by separating concerns, you remove the constraint that the response-processing MDB may only handle responses from one specific modem; it can now process any response queued by the ResponseHandler.
The ResponseHandler (listening for responses from the modems) would need to be a multithreaded service. You could implement this as a JBoss ServiceMBean with some sort of ThreadPool support. It will need a reference to the JMS QueueConnectionFactory and the JMS response queue.
In order to handle request timeouts, I propose you create a scheduled task, one for each modem, named after the modem ID. When a request is sent, the task is scheduled for execution after a delay of the timeout period. When a response is received by the ResponseHandler, the ResponseHandler queues the response and then cancels the named task. If the timeout period elapses without a cancellation, the scheduled task executes and queues another request (and reschedules the timeout task).
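A rough sketch of that named-task bookkeeping, using a ScheduledExecutorService keyed by modem ID; the class and method names are illustrative, not part of the original design:

import java.util.concurrent.*;

public class ModemTimeoutTracker {

    private final ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(4);
    // one pending timeout task per modem ID
    private final ConcurrentMap<String, ScheduledFuture<?>> pending = new ConcurrentHashMap<>();
    private final long timeoutSeconds;

    public ModemTimeoutTracker(long timeoutSeconds) {
        this.timeoutSeconds = timeoutSeconds;
    }

    // Called right after a UDP request is sent to the modem.
    public void requestSent(String modemId, Runnable resendRequest) {
        ScheduledFuture<?> task = scheduler.schedule(() -> {
            pending.remove(modemId);
            resendRequest.run();        // timeout elapsed: queue the request again
        }, timeoutSeconds, TimeUnit.SECONDS);
        ScheduledFuture<?> previous = pending.put(modemId, task);
        if (previous != null) {
            previous.cancel(false);     // keep only one timeout task per modem
        }
    }

    // Called by the ResponseHandler when a response from this modem arrives.
    public void responseReceived(String modemId) {
        ScheduledFuture<?> task = pending.remove(modemId);
        if (task != null) {
            task.cancel(false);         // response arrived in time: no retry needed
        }
    }
}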
Easier said than done, I suppose, but I hope this helps.
//Nicholas