Dynamic Rest Client in Java

I'm trying to create a dynamic REST client where I can set the HTTP method (GET, POST, PUT, DELETE), query params and body (JSON, plain text, XML). For the request side I think I know how to do it, but my concern is reading the response: I know what format I should get back, but I don't know how to read it properly. So far I return an Object. Below is the code (only for POST, but the idea is the same):
Response responseRest = null;
Client client = null;
try {
    client = new ResteasyClientBuilder()
            .establishConnectionTimeout(TIME_OUT, TimeUnit.MILLISECONDS)
            .socketTimeout(TIME_OUT, TimeUnit.MILLISECONDS)
            .build();
    WebTarget target = client.target(request.getUrlTarget());
    MediaType type = assignResponseType(request.getTypeResponse());
    switch (request.getProtocol()) {
        case POST: {
            if (request.getParamQuery() != null) {
                for (VarRequestDTO varRequest : request.getParamQuery()) {
                    target = target.queryParam(varRequest.getName(), varRequest.getValue());
                }
            }
            responseRest = target.request().post(Entity.entity(new ResponseWrapper(), type));
            break;
        }
        default:
            // HTTP method not supported
    }
    Object result = responseRest.readEntity(Object.class);
} catch (Exception e) {
    response.setError(Boolean.TRUE);
    response.setMessage(e.getMessage());
    e.printStackTrace();
} finally {
    if (responseRest != null) {
        responseRest.close();
    }
    if (client != null) {
        client.close();
    }
}
What I basically need is to return the object in the expected format; the caller is then supposed to cast it to the correct type. I just need it to be dynamic and usable for any service.
Thanks

Every request that a REST client makes to a REST service passes an "Accept" header.
This indicates to the service the MIME type(s) of the resource that the client is willing to accept.
In your case, what are the acceptable formats (JSON, plain text, etc.) for you?
Depending on the "Accept" format you choose and the "Content-Type" header you receive, you can write a deserializer that accepts and processes that data.
Also, instead of returning an Object, which is too generic, consider returning a readable Stream to the caller.
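For illustration, here is a minimal sketch (not the poster's code) of a generic reader: the caller supplies the Java type it expects, and the JAX-RS providers on the classpath (e.g. Jackson for JSON, JAXB for XML) pick the deserializer based on the Content-Type of the response.
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

// Hypothetical helper, not part of the question's code.
static <T> T readResponse(Response response, Class<T> expectedType) {
    try {
        MediaType contentType = response.getMediaType();
        if (contentType == null) {
            return null;                         // empty body, e.g. 204 No Content
        }
        // readEntity() selects the MessageBodyReader matching the Content-Type,
        // so the same call handles JSON, XML or plain text.
        return response.readEntity(expectedType);
    } finally {
        response.close();                        // idempotent, safe after readEntity()
    }
}
The caller would then do something like ResponseWrapper wrapper = readResponse(responseRest, ResponseWrapper.class); instead of reading a raw Object and casting it.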

Related

Vertx client is taking time to check for failure

I have a requirement where I connect one microservice to another via a Vert.x web client. In the code I check whether the other microservice is down; on failure it should create a JsonObject with "solrError" as the key and the failure message as the value. If there is a Solr error, i.e. the other microservice (which calls Solr via load balancing) is down, an error response should be returned. However, the Vert.x client takes some time to report the failure, so by the time the condition is checked there is no "solrError" in the JsonObject yet; the condition fails and resp comes back as null. What can be done so that the failure is known before the "solrError" condition is checked, so that an Internal Server Error response is returned?
Below is the code:
solrQueryService.executeQuery(query).subscribe().with(jsonObject -> {
    ObjectMapper objMapper = new ObjectMapper();
    SolrOutput solrOutput = new SolrOutput();
    List<Doc> docs = new ArrayList<>();
    try {
        if (null != jsonObject.getMap().get("solrError")) {
            resp = Response.status(Response.Status.INTERNAL_SERVER_ERROR)
                    .entity(new BaseException(
                            exceptionService.processSolrDownError(request.header.referenceId))
                            .getResponse())
                    .build();
        }
        solrOutput = objMapper.readValue(jsonObject.toString(), SolrOutput.class);
        if (null != solrOutput.getResponse()
                && CollectionUtils.isNotEmpty(solrOutput.getResponse().getDocs())) {
            docs.addAll(solrOutput.getResponse().getDocs());
            uniDocList = Uni.createFrom().item(docs);
        }
    } catch (JsonProcessingException e) {
        e.printStackTrace();
    }
});
if (null != resp && resp.getStatus() != 200) {
    return resp;
}
SolrQueryService prepares the query and sends the URL and query to the Vert.x web client as below:
public Uni<JsonObject> search(URL url, SolrQuery query, Integer timeout) {
    int port = url.getPort();
    if (port == -1 && "https".equals(url.getProtocol())) {
        port = 443;
    }
    if (port == -1 && "http".equals(url.getProtocol())) {
        port = 80;
    }
    HttpRequest<Buffer> request = client.post(port, url.getHost(), url.getPath()).timeout(timeout);
    return request.sendJson(query).map(resp -> {
        return resp.bodyAsJsonObject();
    }).onFailure().recoverWithUni(f -> {
        return Uni.createFrom().item(new JsonObject().put("solrError", f.getMessage()));
    });
}
I have not used the Vert.x client, but I assume it's reactive and non-blocking. Assuming that's the case, your code mixes imperative and reactive constructs. The subscribe in the first line is reactive, and the lambda you provide will be called when the server responds to the client request. However, the imperative code after the subscribe runs before the lambda even has a chance to be called, so your checks and access to the "resp" object will never reflect what happened in the lambda.
You need to move all the code into the lambda, or at least chain the subsequent code onto the result of the subscribe.
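A rough sketch of the chaining idea, assuming the Mutiny Uni API that the question already uses; solrQueryService, exceptionService, request and SolrOutput are the question's own names and are not defined here.
Uni<Response> responseUni = solrQueryService.executeQuery(query)
        .onItem().transform(jsonObject -> {
            if (jsonObject.getMap().get("solrError") != null) {
                // build the 500 response inside the pipeline, not after subscribe()
                return Response.status(Response.Status.INTERNAL_SERVER_ERROR)
                        .entity(new BaseException(
                                exceptionService.processSolrDownError(request.header.referenceId))
                                .getResponse())
                        .build();
            }
            try {
                SolrOutput solrOutput = new ObjectMapper()
                        .readValue(jsonObject.toString(), SolrOutput.class);
                return Response.ok(solrOutput.getResponse().getDocs()).build();
            } catch (JsonProcessingException e) {
                return Response.serverError().build();
            }
        });
// The caller (e.g. a resource method returning Uni<Response>) subscribes to this Uni;
// no check on a shared "resp" variable is needed any more.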

How to handle case when API response returns a null body in Java?

I am calling a File Server's REST API using the POST method and sending it the file content to be uploaded. The REST API should ideally save the file and send a response which contains the fileName.
My code is something like this.
public String uploadFile() {
    UploadResponse response = restTemplate.postForObject(
            FILE_SERVER_URL + "/upload",
            new HttpEntity<>(fileContent, headers),
            UploadResponse.class);
    return response.getFileName();
}
In the above code, the compiler complains that UploadResponse response could be null, and I should handle that.
I plan to handle it with the below code.
public String uploadFile() {
    UploadResponse response = restTemplate.postForObject(
            FILE_SERVER_URL + "/upload",
            new HttpEntity<>(fileContent, headers),
            UploadResponse.class);
    if (response != null) {
        return response.getFileServiceId();
    } else {
        throw new RuntimeException("File upload failed");
    }
}
However, I am not sure this is the right way to handle it; it doesn't feel like a RuntimeException to me. Please guide me on how I should handle the case where the response could be null.
You can use Optional to avoid null checks in your code.
You can read this really insightful Q/A here - https://softwareengineering.stackexchange.com/questions/309134/why-is-using-an-optional-preferential-to-null-checking-the-variable
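For instance, a minimal sketch, assuming the same restTemplate, fileContent and headers as in the question; the exception type here is just a placeholder for whatever fits your error handling.
public String uploadFile() {
    UploadResponse response = restTemplate.postForObject(
            FILE_SERVER_URL + "/upload",
            new HttpEntity<>(fileContent, headers),
            UploadResponse.class);
    // java.util.Optional makes the "body may be missing" case explicit
    return Optional.ofNullable(response)
            .map(UploadResponse::getFileName)
            .orElseThrow(() -> new IllegalStateException(
                    "File server returned an empty body for the upload request"));
}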

Do I need try-finally for JAX-RS Response? [duplicate]

It's not clear to me whether I must close JAX-RS Client/Response instances or not, and if I must, whether that is always required.
According to the documentation of the Client class:
Calling this method effectively invalidates all resource targets
produced by the client instance.
The WebTarget class does not have any invalidate()/close() method, but the Response class does.
According to its documentation:
Close the underlying message entity input stream (if available and
open) as well as releases any other resources associated with the
response (e.g. buffered message entity data).
... The close() method
should be invoked on all instances that contain an un-consumed entity
input stream to ensure the resources associated with the instance are
properly cleaned-up and prevent potential memory leaks. This is
typical for client-side scenarios where application layer code
processes only the response headers and ignores the response entity.
The last paragraph is not clear to me. What does "un-consumed entity input stream" mean? If I get an InputStream or a String from the response, should I close the response explicitly?
We can get a response result without getting access to a Response instance:
Client client = ...;
WebTarget webTarget = ...;
Invocation.Builder builder = webTarget.request(MediaType.APPLICATION_JSON_TYPE);
Invocation invocation = builder.buildGet();
InputStream reso = invocation.invoke(InputStream.class);
I'm working with the RESTEasy implementation, and I expected that the response would be closed inside the RESTEasy implementation, but I could not find where that happens. Could anyone tell me why?
I know that the Response class is supposed to implement the Closeable interface,
but even so, the Response is used here without being closed.
According to the documentation close() is idempotent.
This operation is idempotent, i.e. it can be invoked multiple times with the same effect which also means that calling the close() method on an already closed message instance is legal and has no further effect.
So you can safely close the InputStream yourself, and you should.
That being said, style-wise I would not do invocation.invoke(InputStream.class), since invoke(Class) is meant for entity transformation. If you want an InputStream, you should probably just call invocation.invoke() and deal with the Response object directly, as you may want some header info before reading the stream.
The reason you want the headers when dealing with a response InputStream is typically that you either don't care about the body, or the body requires special processing and size considerations, which is what the documentation is alluding to (e.g. a HEAD request to ping a server).
See also link
A message instance returned from this method will be cached for subsequent retrievals via getEntity(). Unless the supplied entity type is an input stream, this method automatically closes an unconsumed original response entity data stream if open. In case the entity data has been buffered, the buffer will be reset prior to consuming the buffered data to enable subsequent invocations of readEntity(...) methods on this response.
So if you choose anything other than InputStream you will not have to close the Response (but regardless it's safe to do so anyway, as close() is idempotent).
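For example, a sketch of that style (the names are assumed, not from the question): get the Response first, inspect the headers, then decide how to read the body.
Response response = invocation.invoke();
try {
    String contentType = response.getHeaderString("Content-Type");
    if (response.hasEntity()) {
        try (InputStream in = response.readEntity(InputStream.class)) {
            // stream the body here; contentType (or Content-Length) can drive how it is processed
        } catch (IOException e) {
            // handle or rethrow
        }
    }
} finally {
    response.close();   // idempotent, so an extra close() elsewhere is harmless
}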
In short: do call close(), or use the Response as a resource in a try-with-resources statement.
If you use the JAX-RS Client reference, calling close() on the client closes open sockets.
Calling close() on a Response releases the connection, but not necessarily any open socket.
It is not strictly required to call close(), since Resteasy will release the connection under the covers, but it should be done if the result is an InputStream or if you're dealing with Response results.
Resources/Reference:
According to the Resteasy documentation you should call close() on Response references.
In section 47.3 at the end it states that
Resteasy will release the connection under the covers. The only
counterexample is the case in which the response is an instance of
InputStream, which must be closed explicitly.
On the other hand, if the result of an invocation is an instance of
Response, then the Response.close() method must be used to release the
connection.
You should probably execute this in a try/finally block. Again,
releasing a connection only makes it available for another use. It
does not normally close the socket.
Note that if ApacheHttpClient4Engine has created its own instance of
HttpClient, it is not necessary to wait for finalize() to close open
sockets. The ClientHttpEngine interface has a close() method for this
purpose.
Finally, if your javax.ws.rs.client.Client class has created the
engine automatically for you, you should call Client.close() and
this will clean up any socket connections.
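A minimal sketch of that pattern, assuming JAX-RS 2.1+ (where Response implements AutoCloseable) and a placeholder URL:
Client client = ClientBuilder.newClient();
try (Response response = client.target("http://example.org/api/items")   // placeholder URL
                               .request(MediaType.APPLICATION_JSON)
                               .get()) {
    if (response.getStatus() == 200) {
        String body = response.readEntity(String.class);   // non-stream entity: connection is released
        // use body ...
    }
} finally {
    client.close();   // closes the engine and any sockets it created
}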
Looking into the resteasy-client source code, Invocation#invoke(Class<T>) simply calls Invocation#invoke() and then Invocation#extractResult(GenericType<T> responseType, Response response, Annotation[] annotations) to extract the result from the Response:
@Override
public <T> T invoke(Class<T> responseType)
{
    Response response = invoke();
    if (Response.class.equals(responseType)) return (T) response;
    return extractResult(new GenericType<T>(responseType), response, null);
}
Invocation#extractResult(GenericType<T> responseType, Response response, Annotation[] annotations) closes the Response in the finally block:
/**
 * Extracts result from response throwing an appropriate exception if not a successful response.
 *
 * @param responseType
 * @param response
 * @param annotations
 * @param <T>
 * @return
 */
public static <T> T extractResult(GenericType<T> responseType, Response response, Annotation[] annotations)
{
   int status = response.getStatus();
   if (status >= 200 && status < 300)
   {
      try
      {
         if (response.getMediaType() == null)
         {
            return null;
         }
         else
         {
            T rtn = response.readEntity(responseType, annotations);
            if (InputStream.class.isInstance(rtn)
                  || Reader.class.isInstance(rtn))
            {
               if (response instanceof ClientResponse)
               {
                  ClientResponse clientResponse = (ClientResponse) response;
                  clientResponse.noReleaseConnection();
               }
            }
            return rtn;
         }
      }
      catch (WebApplicationException wae)
      {
         try
         {
            response.close();
         }
         catch (Exception e)
         {
         }
         throw wae;
      }
      catch (Throwable throwable)
      {
         try
         {
            response.close();
         }
         catch (Exception e)
         {
         }
         throw new ResponseProcessingException(response, throwable);
      }
      finally
      {
         if (response.getMediaType() == null) response.close();
      }
   }
   try
   {
      // Buffer the entity for any exception thrown as the response may have any entity the user wants
      // We don't want to leave the connection open though.
      String s = String.class.cast(response.getHeaders().getFirst("resteasy.buffer.exception.entity"));
      if (s == null || Boolean.parseBoolean(s))
      {
         response.bufferEntity();
      }
      else
      {
         // close connection
         if (response instanceof ClientResponse)
         {
            try
            {
               ClientResponse.class.cast(response).releaseConnection();
            }
            catch (IOException e)
            {
               // Ignore
            }
         }
      }
      if (status >= 300 && status < 400) throw new RedirectionException(response);
      return handleErrorStatus(response);
   }
   finally
   {
      // close if no content
      if (response.getMediaType() == null) response.close();
   }
}
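The practical takeaway, as a short sketch rather than part of the quoted source: if you ask for an InputStream, Resteasy calls noReleaseConnection(), so closing the stream is on you; for other types the entity is read or buffered and the connection is released for you.
// InputStream result: the caller must close it to release the connection
try (InputStream in = invocation.invoke(InputStream.class)) {
    // read the stream ...
} catch (IOException e) {
    // handle or rethrow
}

// Any other type: extractResult() has already read/buffered the entity,
// so no explicit close() is needed (though calling it anyway is harmless).
String asString = invocation.invoke(String.class);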

How to follow Single Responsibility principle in my HttpClient executor?

I am using RestTemplate as my HTTP client to execute a URL, and the server returns a JSON string as the response. Customers call this library by passing a DataKey object which has a userId in it.
Using the given userId, I find out which machines I can hit to get the data and store those machines in a LinkedList, so that I can try them sequentially.
After that I check whether the first hostname is in the block list or not. If it is not in the block list, I build a URL with the first hostname in the list, execute it, and return the response if it is successful. But if that first hostname is in the block list, I move on to the next hostname in the list and build the URL with it; so basically, I first find a hostname that is not in the block list before building the URL.
Now, suppose we selected the first hostname that was not in the block list, executed the URL, and the server was down or not responding; then I execute the next hostname in the list and keep doing this until I get a successful response, again skipping any hostname that is in the block list, as per the point above.
If all servers are down or in the block list, I simply log and return an error saying the service is unavailable.
Below is my DataClient class, which will be called by customers; they pass a DataKey object to the getData method.
public class DataClient implements Client {
    private RestTemplate restTemplate = new RestTemplate(new HttpComponentsClientHttpRequestFactory());
    private ExecutorService service = Executors.newFixedThreadPool(15);

    public Future<DataResponse> getData(DataKey key) {
        DataExecutorTask task = new DataExecutorTask(key, restTemplate);
        Future<DataResponse> future = service.submit(task);
        return future;
    }
}
Below is my DataExecutorTask class:
public class DataExecutorTask implements Callable<DataResponse> {
    private DataKey key;
    private RestTemplate restTemplate;

    public DataExecutorTask(DataKey key, RestTemplate restTemplate) {
        this.restTemplate = restTemplate;
        this.key = key;
    }

    @Override
    public DataResponse call() {
        DataResponse dataResponse = null;
        ResponseEntity<String> response = null;
        MappingsHolder mappings = ShardMappings.getMappings(key.getTypeOfFlow());
        // given a userId, find all the hostnames
        // the list can contain one, four, six or any number of hostnames
        List<String> hostnames = mappings.getListOfHostnames(key.getUserId());
        for (String hostname : hostnames) {
            // if the hostname is null or in the local block list, skip sending a request to it
            if (ClientUtils.isEmpty(hostname) || ShardMappings.isBlockHost(hostname)) {
                continue;
            }
            try {
                String url = generateURL(hostname);
                response = restTemplate.exchange(url, HttpMethod.GET, key.getEntity(), String.class);
                if (response.getStatusCode() == HttpStatus.NO_CONTENT) {
                    dataResponse = new DataResponse(response.getBody(), DataErrorEnum.NO_CONTENT,
                            DataStatusEnum.SUCCESS);
                } else {
                    dataResponse = new DataResponse(response.getBody(), DataErrorEnum.OK,
                            DataStatusEnum.SUCCESS);
                }
                break;
                // the catch blocks below look duplicated
            } catch (HttpClientErrorException ex) {
                HttpStatusCodeException httpException = (HttpStatusCodeException) ex;
                DataErrorEnum error = DataErrorEnum.getErrorEnumByException(httpException);
                String errorMessage = httpException.getResponseBodyAsString();
                dataResponse = new DataResponse(errorMessage, error, DataStatusEnum.ERROR);
                return dataResponse;
            } catch (HttpServerErrorException ex) {
                HttpStatusCodeException httpException = (HttpStatusCodeException) ex;
                DataErrorEnum error = DataErrorEnum.getErrorEnumByException(httpException);
                String errorMessage = httpException.getResponseBodyAsString();
                dataResponse = new DataResponse(errorMessage, error, DataStatusEnum.ERROR);
                return dataResponse;
            } catch (RestClientException ex) {
                // if we get here, the server is down, so add it to the block list
                ShardMappings.blockHost(hostname);
            }
        }
        if (ClientUtils.isEmpty(hostnames)) {
            dataResponse = new DataResponse(null, DataErrorEnum.PERT_ERROR, DataStatusEnum.ERROR);
        } else if (response == null) { // either all the servers are down or all of them are in the block list
            dataResponse = new DataResponse(null, DataErrorEnum.SERVICE_UNAVAILABLE, DataStatusEnum.ERROR);
        }
        return dataResponse;
    }
}
My block list keeps getting updated by another background thread every minute. If any server is down and not responding, I need to block that server by using this:
ShardMappings.blockHost(hostname);
And to check whether a server is in the block list or not, I use this:
ShardMappings.isBlockHost(hostname);
I return SERVICE_UNAVAILABLE if the servers are down or in the block list, based on the response == null check; I'm not sure whether that's the right approach.
I guess I am not following the Single Responsibility Principle here at all.
Can anyone provide an example of the best way to apply SRP here?
After thinking about it a lot, I was able to extract a Hosts class as shown below, but I'm not sure what the best way is to use it in my DataExecutorTask class above.
public class Hosts {
    private final LinkedList<String> hostsnames = new LinkedList<String>();

    public Hosts(final List<String> hosts) {
        checkNotNull(hosts, "hosts cannot be null");
        this.hostsnames.addAll(hosts);
    }

    public Optional<String> getNextAvailableHostname() {
        while (!hostsnames.isEmpty()) {
            String firstHostname = hostsnames.removeFirst();
            if (!ClientUtils.isEmpty(firstHostname) && !ShardMappings.isBlockHost(firstHostname)) {
                return Optional.of(firstHostname);
            }
        }
        return Optional.absent();
    }

    public boolean isEmpty() {
        return hostsnames.isEmpty();
    }
}
Your concern is valid. First, let's see what the original data executor does:
First, it gets the list of hostnames.
Next, it loops through every hostname and does the following:
- checks whether the hostname is valid to send a request to; if not valid, skip it, else continue
- generates the URL based on the hostname
- sends the request
- translates the HTTP response into a domain response
- handles exceptions
If the hostname list is empty, it generates an empty response.
Finally, it returns the response.
Now, what can we do to follow SRP? As I see it, these operations can be grouped and split into:
HostnameValidator: checks whether the hostname is valid to send a request to
--------------
HostnameRequestSender: generates the URL and sends the request
--------------
HttpToDataResponse: translates the HTTP response into a domain response
--------------
HostnameExceptionHandler: handles exceptions
That is one approach to decouple your operations and follow SRP. There are also other approaches, for example simplifying your operations first:
First, it gets the list of hostnames.
If the hostname list is empty, it generates an empty response.
Next, it loops through every hostname and does the following:
- checks whether the hostname is valid to send a request to
- if not valid: removes the hostname
- else: generates the URL based on the hostname
Next, it loops through every valid hostname and does the following:
- sends the request
- translates the HTTP response into a domain response
- handles exceptions
Finally, it returns the response.
Then it can also be split into:
HostnameValidator: checks whether the hostname is valid to send a request to
--------------
ValidHostnameData: gets the list of hostnames and loops through every hostname, checking whether it is valid to send a request to; if not valid it removes the hostname, else it generates the URL based on the hostname
--------------
HostnameRequestSender: sends the request
--------------
HttpToDataResponse: translates the HTTP response into a domain response
--------------
HostnameExceptionHandler: handles exceptions
Of course there are also other ways to do it, and I leave the implementation details open because there are many ways to implement it; a rough sketch of the interfaces is given below.
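For example, one possible shape of that split, sketched only at the interface level; the names mirror the groups above, and DataKey, DataResponse and ResponseEntity come from the question.
interface HostnameValidator {
    boolean isUsable(String hostname);                          // null/empty and block-list check
}

interface HostnameRequestSender {
    ResponseEntity<String> send(String hostname, DataKey key);  // builds the URL and executes the call
}

interface HttpToDataResponse {
    DataResponse translate(ResponseEntity<String> response);    // HTTP response -> domain response
}

interface HostnameExceptionHandler {
    DataResponse handle(String hostname, Exception ex);         // block the host, map errors
}
DataExecutorTask then only orchestrates: iterate the hostnames, validate, send, translate, and delegate failures, with each concern behind its own interface.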

Inputstream handled by different objects depending on the content

I am writing a crawler/parser that should be able to process different types of content: RSS, Atom and plain HTML files. To determine the correct parser, I wrote a class called ParseFactory, which takes a URL, tries to detect the content type, and returns the correct parser.
Unfortunately, checking the content type using the method provided by URLConnection doesn't always work. For example,
String contentType = url.openConnection().getContentType();
doesn't always provide the correct content type (e.g. "text/html" where it should be RSS) or doesn't allow distinguishing between RSS and Atom (e.g. "application/xml" could be either an Atom or an RSS feed). To solve this problem, I started looking for clues in the InputStream itself. The problem is that I am having trouble coming up with an elegant class design where I only need to download the InputStream once. In my current design, I first wrote a separate class that determines the correct content type; next, the ParseFactory uses this information to create an instance of the corresponding parser, which in turn, when its parse() method is called, downloads the entire InputStream a second time.
public Parser createParser() {
    InputStream inputStream = null;
    String contentType = null;
    String contentEncoding = null;
    ContentTypeParser contentTypeParser = new ContentTypeParser(this.url);
    Parser parser = null;
    try {
        inputStream = new BufferedInputStream(this.url.openStream());
        contentTypeParser.parse(inputStream);
        contentType = contentTypeParser.getContentType();
        contentEncoding = contentTypeParser.getContentEncoding();
        assert (contentType != null);
        inputStream = new BufferedInputStream(this.url.openStream());
        if (contentType.equals(ContentTypes.rss)) {
            logger.info("RSS feed detected");
            parser = new RssParser(this.url);
            parser.parse(inputStream);
        } else if (contentType.equals(ContentTypes.atom)) {
            logger.info("Atom feed detected");
            parser = new AtomParser(this.url);
        } else if (contentType.equals(ContentTypes.html)) {
            logger.info("html detected");
            parser = new HtmlParser(this.url);
            parser.setContentEncoding(contentEncoding);
        } else if (contentType.equals(ContentTypes.UNKNOWN)) {
            logger.debug("Unable to recognize content type");
        }
        if (parser != null) {
            parser.parse(inputStream);
        }
    } catch (IOException e) {
        e.printStackTrace();
    } finally {
        try {
            inputStream.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
    return parser;
}
Basically, I am looking for a solution that allows me to eliminate the second "inputStream = new BufferedInputStream(this.url.openStream())".
Any help would be greatly appreciated!
Side note 1: Just for the sake of being complete, I also tried using the URLConnection.guessContentTypeFromStream(inputStream) method, but this returns null way too often.
Side note 2: The XML-parsers (Atom and Rss) are based on SAXParser, the Html-parser on Jsoup.
Can you just call mark and reset?
inputStream = new BufferedInputStream(this.url.openStream());
inputStream.mark(2048); // Or some other sensible number
contentTypeParser.parse(inputStream);
contentType = contentTypeParser.getContentType();
contentEncoding = contentTypeParser.getContentEncoding();
inputStream.reset(); // Let the parser have a crack at it now
Perhaps your ContentTypeParser should cache the content internally and feed it to the appropriate ContentParser instead of reacquiring the data from the InputStream. For example:
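A sketch of that caching idea (using the question's names and meant to sit inside the existing try/catch in createParser): read the stream fully into memory once, then hand each parser a fresh in-memory stream.
byte[] content;
try (InputStream in = new BufferedInputStream(this.url.openStream());
     ByteArrayOutputStream buffer = new ByteArrayOutputStream()) {
    byte[] chunk = new byte[8192];
    int read;
    while ((read = in.read(chunk)) != -1) {
        buffer.write(chunk, 0, read);
    }
    content = buffer.toByteArray();
}

contentTypeParser.parse(new ByteArrayInputStream(content));  // first pass: detect the content type
parser.parse(new ByteArrayInputStream(content));             // second pass: the real parsing
This only pays off for feeds/pages of reasonable size, since the whole document is held in memory.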
