Closely related to this question (How to use HttpClient with multithreaded operation?): I'm wondering whether Apache HttpAsyncClient is thread safe, or whether it, too, requires the use of a MultiThreadedHttpConnectionManager or a ThreadSafeClientConnManager.
If it does require such a connection manager, does one exist in the async libraries?
I was able to find a PoolingClientAsyncConnectionManager in the async libraries, but I'm not sure if that's what I need.
Alternately, I was thinking of using ThreadLocal to create one HttpAsyncClient object per thread.
Note that unlike the question I referenced earlier, I need state to be independent across sessions, even if multiple sessions hit the same domain. If a cookie is set in session 1, the cookie should not be visible to session 2. For this reason, I've also considered creating a brand new HttpAsyncClient object for every single request, though I get the impression there should be a better way.
Thanks.
You mention "independent across sessions". If this just means cookies, then I would think creating your own CookieStore, cleared each time one of your threads goes to use an HttpClient, would be enough.
I would use ThreadLocal to create a per-thread client, skip the shared connection manager, and clear the cookies aggressively. This answer was useful around cookie clearing:
Android HttpClient persistent cookies
Something like the following code would work. I've overridden the ThreadLocal.get() method to call clear() in case each request needs to be independent. You could also call clear() in the execute(...) method.
private static final ThreadLocal<ClientContext> localHttpContext =
    new ThreadLocal<ClientContext>() {
        @Override
        protected ClientContext initialValue() {
            return new ClientContext();
        }
        @Override
        public ClientContext get() {
            ClientContext clientContext = super.get();
            // could do this to clear the context before usage by the thread
            clientContext.clear();
            return clientContext;
        }
    };
...
ClientContext clientContext = localHttpContext.get();
// if this wasn't in the get method above
// clientContext.clear();
HttpGet httpGet = new HttpGet("http://www.google.com/");
HttpResponse response = clientContext.execute(httpGet);
...
private static class ClientContext {
    final HttpClient httpClient = new DefaultHttpClient();
    final CookieStore cookieStore = new BasicCookieStore();
    final HttpContext localContext = new BasicHttpContext();
    public ClientContext() {
        // bind cookie store to the local context; the constant is fully qualified
        // because this class shadows Apache's ClientContext constants class
        localContext.setAttribute(
            org.apache.http.client.protocol.ClientContext.COOKIE_STORE, cookieStore);
    }
    public HttpResponse execute(HttpUriRequest request) throws IOException {
        // in case you want each execute to be independent
        // clear();
        return httpClient.execute(request, localContext);
    }
    public void clear() {
        cookieStore.clear();
    }
}
After load testing both with and without the PoolingClientAsyncConnectionManager, we discovered that we got inconsistent results when we did not use the PoolingClientAsyncConnectionManager.
Amongst other things, we tracked the number of HTTP calls we were making and the number of HTTP calls that completed (through either the cancelled(...), completed(...), or failed(...) methods of the associated FutureCallback). Without the PoolingClientAsyncConnectionManager, and under heavy load, the two figures sometimes did not match up, leading us to believe that somewhere, some connections were stomping on connection information from other threads (just a guess).
Either way, with the PoolingClientAsyncConnectionManager the figures always matched and the load tests were all successful, so we are definitely using it.
The final code we used goes like this:
public class RequestProcessor {
    private static final RequestProcessor instance = new RequestProcessor();
    private PoolingClientAsyncConnectionManager pcm = null;
    private HttpAsyncClient httpAsyncClient = null;

    private RequestProcessor() {
        // Initialize the PoolingClientAsyncConnectionManager, and the HttpAsyncClient
    }

    public static RequestProcessor getInstance() {
        return instance;
    }

    public void process(...) {
        this.httpAsyncClient.execute(httpMethod,
            new BasicHttpContext(), // Use a separate HttpContext for each request so information is not shared between requests
            new FutureCallback<HttpResponse>() {
                @Override
                public void cancelled() {
                    // Do stuff
                }
                @Override
                public void completed(HttpResponse httpResponse) {
                    // Do stuff
                }
                @Override
                public void failed(Exception e) {
                    // Do stuff
                }
            });
    }
}
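A sketch of one way to fill in the elided constructor is below. The exact constructors vary between HttpAsyncClient versions (this assumes the 4.0.x API, where PoolingClientAsyncConnectionManager wraps a ConnectingIOReactor), and the pool limits are placeholders, so treat it as an illustration rather than a drop-in:
private RequestProcessor() {
    try {
        // one I/O reactor drives all non-blocking connections; the pool shares it across threads
        ConnectingIOReactor ioReactor = new DefaultConnectingIOReactor();
        this.pcm = new PoolingClientAsyncConnectionManager(ioReactor);
        this.pcm.setMaxTotal(100);             // placeholder pool limits; tune for your load
        this.pcm.setDefaultMaxPerRoute(20);
        this.httpAsyncClient = new DefaultHttpAsyncClient(this.pcm);
        this.httpAsyncClient.start();          // the client must be started before execute(...) is used
    } catch (IOReactorException e) {
        throw new IllegalStateException("Unable to initialize the async HTTP client", e);
    }
}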
We are working on a solution that looks like this:
Request: (We receive the request via an API call and send it to the third party via a library we use)
OUR-Client --> OUR-API --> THIRD-PARTY
Response: (We receive this response from the third party asynchronously, through a callback method provided by the library we are using)
THIRD-PARTY --> OUR-CODE --> OUR-Client
Here is the code; we want to get rid of the Thread.sleep() call and instead use the callback to provide the response.
----- API Method -------------
@GetMapping
public ResponseEntity<String> getData(@RequestBody String requestId) throws SessionNotFound, InterruptedException {
    dataService.get(requestId);
    String msg;
    long start = System.currentTimeMillis();
    do {
        // We want to get rid of this sleep() statement and instead be called back here as soon as there is a message.
        Thread.sleep(30);
        msg = clientApp.getResponse(requestId);
    } while (msg == null);
    return ResponseEntity.ok(msg);
}
------- Service Class and Methods ---------------
@Service
public class DataService {
    @Autowired
    private ClientApp clientApp;

    public void get(String requestId) throws SessionNotFound {
        // This method is from the library we use. It only submits the request; the response is received on a different method.
        send(requestId);
    }
}
------- Component Class and Methods ---------------
@Component
public class ClientFixApp {
    private Map<String, String> responseMap = new HashMap<>();

    // This method is a callback from the third-party library; whenever there is a response this method
    // gets invoked, and this message needs to be sent back as the response of the API call.
    @Override
    public void onResponse(String requestId)
            throws FieldNotFound, IncorrectDataFormat, IncorrectTagValue, UnsupportedMessageType {
        // jsonMsg is built from the callback payload (details elided)
        responseMap.put(requestId, jsonMsg);
    }

    public String getResponse(String requestId) {
        return responseMap.get(requestId);
    }
}
DataService and ClientFixApp are flawed by design (the very fact that they are two different classes when there should be one says a lot). Truly asynchronous programs must let you register a user procedure as a callback that is invoked when the I/O operation finishes (successfully or not). ClientFixApp silently writes the result into a table, leaving the client no option other than polling.
You can use any existing asynchronous HTTP library. For example, below is code using the java.net.http library included in Java 11 and later. Other libraries have similar functionality.
public static CompletableFuture<HttpResponse<String>> doGet(String uri) {
    HttpClient client = HttpClient.newHttpClient();
    HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create(uri))
            .build();
    return client.sendAsync(request, HttpResponse.BodyHandlers.ofString());
}

public static void main(String[] args) {
    CompletableFuture<HttpResponse<String>> future = doGet("https://postman-echo.com/get");
    future.thenApply(HttpResponse::body)
          .thenAccept(System.out::println)
          .join();
}
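To tie this back to the original controller, one option is sketched below, assuming Spring MVC 4.2+ (which accepts CompletableFuture return values from controller methods) and that you can hook the third-party library's callback. The controller completes a future from the callback instead of polling; the class, field, and method names here are illustrative, not part of your library:
@RestController
public class DataController {

    @Autowired
    private DataService dataService;

    // one pending future per requestId, completed by the third-party callback
    private final ConcurrentHashMap<String, CompletableFuture<String>> pending = new ConcurrentHashMap<>();

    @GetMapping
    public CompletableFuture<ResponseEntity<String>> getData(@RequestBody String requestId) throws SessionNotFound {
        CompletableFuture<String> future = new CompletableFuture<>();
        pending.put(requestId, future);
        dataService.get(requestId);                    // submit the request to the third party
        return future.thenApply(ResponseEntity::ok);   // Spring writes the response when the future completes
    }

    // call this from the library's onResponse callback instead of writing into a map
    public void complete(String requestId, String jsonMsg) {
        CompletableFuture<String> future = pending.remove(requestId);
        if (future != null) {
            future.complete(jsonMsg);
        }
    }
}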
I am using Spring's WebServiceTemplate to make a SOAP call to a service. I ran a performance test to see how it behaves under load. I also have an interceptor to copy the header parameters from my incoming request over to the service I am calling.
@Component
public class HeaderPropagationInterceptor implements ClientInterceptor {
    @Override
    public boolean handleRequest(MessageContext messageContext) throws WebServiceClientException {
        SoapMessage request = (SoapMessage) messageContext.getRequest();
        Result result = request.getSoapHeader().getResult();
        JAXB.marshal(getRequestHeader(), result);
        return true;
    }
When I ran the performance test, I saw the statement below blocking for 4-5 seconds:
JAXB.marshal(getRequestHeader(), result);
Is there a reason why this might be blocking?
The JAXB utility class will create the JAXBContext the first time it is called (an expensive operation). It is weakly cached, which means that if memory runs low the context may be recycled and recreated on a later call. You really should create your context and keep it explicitly. Something like this (as already suggested in the comments by others) should solve your problem:
@Component
public class HeaderPropagationInterceptor implements ClientInterceptor {

    private JAXBContext jaxbContext;

    @PostConstruct
    public void createJaxbContext() {
        try {
            jaxbContext = JAXBContext.newInstance(RequestHeader.class);
        } catch (JAXBException e) {
            throw new IllegalStateException("Unable to create JAXBContext.", e);
        }
    }

    @Override
    public boolean handleRequest(MessageContext messageContext) throws WebServiceClientException {
        SoapMessage request = (SoapMessage) messageContext.getRequest();
        Result result = request.getSoapHeader().getResult();
        try {
            jaxbContext.createMarshaller().marshal(getRequestHeader(), result);
        } catch (JAXBException e) {
            throw new IllegalStateException("Unable to marshal request header.", e);
        }
        return true;
    }
}
You have to replace the RequestHeader.class with the actual class used by your code. If performance needs to be improved further, it's also possible to use a thread-local for reusing the marshaller, but you should probably do further profiling to verify that is really a bottleneck. Good luck with your project!
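If you do try the thread-local route, a minimal sketch is below, assuming Java 8+ and the jaxbContext field from the code above; measure first, since createMarshaller() is normally cheap compared to building the context:
// A Marshaller is not thread safe, so keep one per thread instead of creating one per request.
private final ThreadLocal<Marshaller> threadLocalMarshaller = ThreadLocal.withInitial(() -> {
    try {
        return jaxbContext.createMarshaller();
    } catch (JAXBException e) {
        throw new IllegalStateException("Unable to create Marshaller.", e);
    }
});

// then, inside handleRequest(...), replace the createMarshaller() call with:
// threadLocalMarshaller.get().marshal(getRequestHeader(), result);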
I am working on a project in which I need to make an HTTP call to my server, which runs a RESTful service that returns the response as a JSON string. I am using RestTemplate here along with HttpComponentsClientHttpRequestFactory to execute a URL.
I have set up HTTP request timeouts (read and connection timeouts) on my RestTemplate by using HttpComponentsClientHttpRequestFactory.
Below is my Interface:
public interface Client {
    // for synchronous
    public String getSyncData(String key, long timeout);

    // for asynchronous
    public Future<String> getAsyncData(String key, long timeout);
}
Below is my implementation of the Client interface:
public class DataClient implements Client {

    private final RestTemplate restTemplate = new RestTemplate();
    private ExecutorService executor = Executors.newFixedThreadPool(10);

    // for synchronous call
    @Override
    public String getSyncData(String key, long timeout) {
        String response = null;
        try {
            Task task = new Task(key, restTemplate, timeout);
            // direct call; implementing a sync call as async + waiting is a bad idea.
            // It is meaningless and consumes one thread from the thread pool per call.
            response = task.call();
        } catch (Exception ex) {
            PotoLogging.logErrors(ex, DataErrorEnum.CLIENT_ERROR, key);
        }
        return response;
    }

    // for asynchronous call
    @Override
    public Future<String> getAsyncData(String key, long timeout) {
        Future<String> future = null;
        try {
            Task task = new Task(key, restTemplate, timeout);
            future = executor.submit(task);
        } catch (Exception ex) {
            PotoLogging.logErrors(ex, DataErrorEnum.CLIENT_ERROR, key);
        }
        return future;
    }
}
And below is my simple Task class
class Task implements Callable<String> {

    private RestTemplate restTemplate;
    private String key;
    private long timeout; // in milliseconds

    public Task(String key, RestTemplate restTemplate, long timeout) {
        this.key = key;
        this.restTemplate = restTemplate;
        this.timeout = timeout;
    }

    public String call() throws Exception {
        String url = "some_url_created_by_using_key";
        // does this look right, the way I am setting the request factory?
        // or is there any other more efficient way to do this?
        restTemplate.setRequestFactory(clientHttpRequestFactory());
        ResponseEntity<String> response = restTemplate.exchange(url, HttpMethod.GET, null, String.class);
        return response.getBody();
    }

    private ClientHttpRequestFactory clientHttpRequestFactory() {
        // is it ok to create a new instance of HttpComponentsClientHttpRequestFactory every time?
        HttpComponentsClientHttpRequestFactory factory = new HttpComponentsClientHttpRequestFactory();
        factory.setReadTimeout((int) timeout); // setting timeout as read timeout
        factory.setConnectTimeout((int) timeout); // setting timeout as connect timeout
        return factory;
    }
}
Now my question is: is the way I am using RestTemplate, calling setRequestFactory in the call() method of the Task class every time, efficient? Since RestTemplate is heavy to create, I'm not sure whether I got this right.
And is it OK to create a new instance of HttpComponentsClientHttpRequestFactory every time? Will it be expensive?
What is the right and efficient way to use RestTemplate if we need to set up read and connection timeouts on it?
This library will be used like this -
String response = DataClientFactory.getInstance().getSyncData(keyData, 100);
From what I can tell, you're reusing the same RestTemplate object repeatedly, but each Task is performing this line: restTemplate.setRequestFactory(clientHttpRequestFactory());. This seems like it can have race conditions, e.g. one Task can set the RequestFactory that another Task will then accidentally use.
Otherwise, it seems like you're using RestTemplate correctly.
How often do your timeouts change? If you mostly use one or two timeouts, you can create one or two RestTemplates using the RequestFactory constructor with the pre-loaded timeout. If you're a stickler for efficiency, create a HashMap<Integer, RestTemplate> that caches a RestTemplate with a particular timeout each time a new timeout is requested.
Otherwise, looking at the code for RestTemplate's constructor, and for HttpComponentsClientHttpRequestFactory's constructor, they don't look exceptionally heavy, so calling them repeatedly probably won't be much of a bottleneck.
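If you do want that per-timeout cache, a rough sketch is below, assuming Java 8; the ConcurrentHashMap, the millisecond keys, and the int conversions are my assumptions rather than anything from your code:
// Cache one RestTemplate per timeout value so no request factory is mutated after construction.
private final ConcurrentHashMap<Long, RestTemplate> templates = new ConcurrentHashMap<>();

private RestTemplate templateFor(long timeoutMillis) {
    return templates.computeIfAbsent(timeoutMillis, t -> {
        HttpComponentsClientHttpRequestFactory factory = new HttpComponentsClientHttpRequestFactory();
        factory.setReadTimeout(t.intValue());     // both setters take int milliseconds
        factory.setConnectTimeout(t.intValue());
        return new RestTemplate(factory);
    });
}
Each Task would then call templateFor(timeout) instead of mutating the shared RestTemplate, which also removes the race condition mentioned above.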
I am working on a Java/Spring web application that for each incoming request does the following:
fires off a number of requests to third party web servers,
retrieves the response from each,
parses each response into a list of JSON objects,
collates the lists of JSON objects into a single list and returns it.
I am creating a separate thread for each request sent to the third party web servers. I am using the Apache PoolingClientConnectionManager. Here is an outline of the code I am using:
public class Background {

    static class CallableThread implements Callable<ArrayList<JSONObject>> {
        private HttpClient httpClient;
        private HttpGet httpGet;

        public CallableThread(HttpClient httpClient, HttpGet httpGet) {
            this.httpClient = httpClient;
            this.httpGet = httpGet;
        }

        @Override
        public ArrayList<JSONObject> call() throws Exception {
            HttpResponse response = httpClient.execute(httpGet);
            return parseResponse(response);
        }

        private ArrayList<JSONObject> parseResponse(HttpResponse response) {
            ArrayList<JSONObject> list = null;
            // details omitted
            return list;
        }
    }

    public ArrayList<JSONObject> getData(List<String> urlList, PoolingClientConnectionManager connManager) {
        ArrayList<JSONObject> jsonObjectsList = null;
        int numThreads = urlList.size();
        ExecutorService executor = Executors.newFixedThreadPool(numThreads);
        List<Future<ArrayList<JSONObject>>> list = new ArrayList<Future<ArrayList<JSONObject>>>();
        HttpClient httpClient = new DefaultHttpClient(connManager);
        for (String url : urlList) {
            HttpGet httpGet = new HttpGet(url);
            CallableThread worker = new CallableThread(httpClient, httpGet);
            Future<ArrayList<JSONObject>> submit = executor.submit(worker);
            list.add(submit);
        }
        for (Future<ArrayList<JSONObject>> future : list) {
            try {
                if (future != null) {
                    if (jsonObjectsList == null) {
                        jsonObjectsList = future.get();
                    } else {
                        if (future.get() != null) {
                            jsonObjectsList.addAll(future.get());
                        }
                    }
                }
            } catch (InterruptedException e) {
                e.printStackTrace();
            } catch (ExecutionException e) {
                e.printStackTrace();
            }
        }
        executor.shutdown();
        return jsonObjectsList;
    }
}
This all works fine. My question is about how well this code will scale as the traffic to my website increases. Is there a better way to implement this, for example by using non-blocking I/O to reduce the number of threads being created? Are there libraries or frameworks that might help?
At the moment, I am using Java 6 and Spring Framework 3.1
Thanks in advance
I wouldn't recommend implementing this as a synchronous service. Do it asynchronously: accept the request, queue the callables, and return a resource location where the client can later request the result.
You've got to queue these callables in an executor, poll the executor in a background process, and make the results available at the location you returned in response to the first request. Done this way, it is easier to control your available resources and to cleanly refuse to process a request if there are no resources left.
Non-blocking I/O won't reduce the number of threads; it just delegates the "job" to another thread, so that the service thread is not blocked and can receive more requests.
Use REST.
Receive a POST request, and answer with something like this:
HTTP/1.1 202 Accepted
Location: /result/to/consult/later
The client can then request the result at the given location. If the processing has not finished, then answer with:
HTTP/1.1 201 Created
If it's done, then return an HTTP/1.1 200 OK with the resulting JSON.
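A rough sketch of that flow in a Spring MVC controller is below. The class, map, and URL names are illustrative, fetchAndCollate stands in for the existing fan-out logic, and real code would also need result expiry and proper error handling:
@Controller
public class BackgroundController {

    private final ExecutorService executor = Executors.newFixedThreadPool(10);
    private final ConcurrentHashMap<String, Future<String>> results = new ConcurrentHashMap<String, Future<String>>();

    @RequestMapping(value = "/jobs", method = RequestMethod.POST)
    public ResponseEntity<String> submit(@RequestBody final String urls) {
        String id = UUID.randomUUID().toString();
        results.put(id, executor.submit(new Callable<String>() {
            public String call() throws Exception {
                return fetchAndCollate(urls);   // placeholder for the existing fan-out logic
            }
        }));
        HttpHeaders headers = new HttpHeaders();
        headers.set("Location", "/jobs/" + id);
        return new ResponseEntity<String>(headers, HttpStatus.ACCEPTED);   // 202 Accepted
    }

    @RequestMapping(value = "/jobs/{id}", method = RequestMethod.GET)
    public ResponseEntity<String> result(@PathVariable String id) throws Exception {
        Future<String> future = results.get(id);
        if (future == null || !future.isDone()) {
            return new ResponseEntity<String>(HttpStatus.NOT_FOUND);       // or whatever "not ready" status you prefer
        }
        return new ResponseEntity<String>(future.get(), HttpStatus.OK);    // 200 OK with the JSON
    }
}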
I'm new to both Java and Jersey. I want to use Jersey to build a REST service with extra processing after sending the response (specifically, sleep a fixed number of seconds and then fire a different REST request in the same servlet context, so it's not quite a REST proxy). I googled for a while, but everything seems to take it for granted that the response is implicitly flushed at the end of the method. Here is the current code, with JAXB enabled, that I'm struggling to make work.
#Path("/chat")
public class LoadSimulator {
#Context private UriInfo uriInfo;
#Path("/outbound/{senderAddress}/requests")
#POST
#Consumes({MediaType.APPLICATION_JSON, MediaType.APPLICATION_XML})
#Produces({MediaType.APPLICATION_JSON, MediaType.APPLICATION_XML})
public Response createOutboundSMSMessage(OutboundSMSMessageRequest inSmsReq, #PathParam("senderAddress") String senderAddress) throws JAXBException {
String requestId = UUID.randomUUID().toString();
URI uri = uriInfo.getAbsolutePathBuilder().path(requestId).build();
ObjectFactory factory = new ObjectFactory();
ResourceReference resourceReference = new ResourceReference();
resourceReference.setResourceURL(uri.toString());
JAXBElement<ResourceReference> inSmsResponse = factory.createResourceReference(resourceReference);
return Response.created(uri).entity(inSmsResponse).build();
//// want to flush or commit the response explicitly like:
// out.flush();
// out.close();
//// Then sleep for a few second and fire a new REST request
// sleep(5);
// ....
// ClientConfig config = new DefaultClientConfig();
// String response = r.path("translate").queryParams(params).get(String.class);
}
}
If you could do what you're trying to do, you would exhaust the resources on your server, because every request would take X seconds and you have a finite number of threads available before the box cries uncle.
Without commenting on why you'd want to do this: if you used the @Singleton annotation for your LoadSimulator, you could set up a thread that listens on a (concurrent) queue in @PostConstruct public void init() - that gets called when your servlet starts up.
@Singleton
@Path("/chat")
public class LoadSimulator {

    private Thread restCaller;
    private ConcurrentLinkedQueue<MyInfo> queue = new ConcurrentLinkedQueue<MyInfo>();
    ...

    @PostConstruct
    public void init() {
        restCaller = new Thread(new MyRunnable(queue));
        restCaller.start();
    }
...
Then in your REST call, you'd put whatever information is needed to make the second REST call on that queue, and have the aforementioned thread pulling it off and making queries.
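For completeness, a rough sketch of what MyRunnable might look like; MyInfo, the delays, and the Jersey client call are placeholders for whatever your second request actually needs:
// Illustrative worker: drains the queue and fires the delayed follow-up REST call.
class MyRunnable implements Runnable {
    private final ConcurrentLinkedQueue<MyInfo> queue;

    MyRunnable(ConcurrentLinkedQueue<MyInfo> queue) {
        this.queue = queue;
    }

    @Override
    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            MyInfo info = queue.poll();
            if (info == null) {
                sleepQuietly(100);   // nothing queued yet; back off briefly
                continue;
            }
            sleepQuietly(5000);      // the delay wanted before the follow-up request
            // fire the second REST request here using the data carried in 'info',
            // e.g. with a Jersey Client built once outside the loop
        }
    }

    private void sleepQuietly(long millis) {
        try {
            Thread.sleep(millis);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();  // preserve the interrupt so the loop can exit
        }
    }
}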