I want to invoke a series of API calls in Java. The requirement is that one API's response will be used in the subsequent API call's request. I can achieve this with plain loops, but I want to use a design pattern so that the implementation is generic. Any help?
Chain of Responsibility doesn't serve my need, as I won't know what my request context is at the beginning.
String out = "";
Response res = execute(req);
out += res.getOut();
req.setXYZ(res.getXYZ());
res = execute(req);
out += res.getOut();
req.setABC(res.getABC());
res = execute(req);
out += res.getOut();
System.out.println("Final response::" + out);
The following come to mind:
For function calls that return an object: never return null.
For function calls that do not (otherwise) return anything: return this.
Accept functional interfaces in your API so users can customize behavior.
For stateful objects that expose an API as described above, provide a Builder pattern so users don't end up choosing between constructors.
The Builder's methods would otherwise be void, so have them return this instead.
You can create a ResponseStringBuilder class that takes a Function<Response,String> to get the String from the Response.
import java.util.function.Function;

public class ResponseStringBuilder {

    private final Request request;
    private final StringBuilder resultBuilder = new StringBuilder();

    public ResponseStringBuilder(Request req) {
        this.request = req;
    }

    public ResponseStringBuilder fromExtractor(Function<Request, Response> getResponse,
                                               Function<Response, String> extract) {
        Response response = getResponse.apply(request);
        resultBuilder.append(extract.apply(response));
        return this;
    }

    public String getResult() {
        return resultBuilder.toString();
    }
}
That would make your calls
ResponseStringBuilder builder = new ResponseStringBuilder(req);
@SuppressWarnings("unchecked")
Function<Response,String>[] extractors = new Function[] {
Response::getABC, Response::getXYZ
};
for (Function<Response,String> ext : extractors) {
builder = builder.fromExtractor(this::execute, ext);
}
System.out.println("final response: " + builder.getResult());
Not sure if the array declaration actually compiles, but it should work with minor modification and you get the gist.
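If you'd rather avoid the raw array and the unchecked cast, a List works the same way. A small sketch, assuming Java 9+ for List.of:
// Sketch: same loop as above, but with a List instead of a raw Function array,
// so it compiles without the @SuppressWarnings (assumes Java 9+ for List.of).
List<Function<Response, String>> extractors =
        List.of(Response::getABC, Response::getXYZ);

ResponseStringBuilder builder = new ResponseStringBuilder(req);
for (Function<Response, String> ext : extractors) {
    builder = builder.fromExtractor(this::execute, ext);
}
System.out.println("final response: " + builder.getResult());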
You can use a CompletableFuture to implement promises in Java. The problem is, you're trying to pass two different things down the 'pipeline': the request, which is mutable and (sometimes) changes, and the result, which is accumulated over the course of the calls.
I've gotten around that by creating a class called Pipe which has a request, and the accumulator for the results so far. It has getters for both, and it has a few convenience methods to return a new object with the accumulated results or even mutate the request and accumulate in one call. This makes the code of the API chaining a lot cleaner.
The with* methods after the fields, constructor, and getters are the ones that handle the accumulation and mutation. The chain method puts it all together:
import java.util.concurrent.CompletableFuture;
public class Pipe {
private Request req;
private String out;
public Pipe(Request req, String out) {
this.req = req;
this.out = out;
}
public Request getReq() {
return req;
}
public String getOut() {
return out;
}
public Pipe with(String data) {
return new Pipe(req, out + data);
}
public Pipe withABC(String abc, String data) {
req.setABC(abc);
return new Pipe(req, out + data);
}
public Pipe withXYZ(String xyz, String data) {
req.setXYZ(xyz);
return new Pipe(req, out + data);
}
public static void chain(Request req) throws Exception {
var promise = CompletableFuture.supplyAsync(() -> new Pipe(req, ""))
.thenApply(pipe -> {
Response res = execute(pipe.getReq());
return pipe.withABC(res.getABC(), res.getOut());
})
.thenApply(pipe -> {
Response res = execute(pipe.getReq());
return pipe.withXYZ(res.getXYZ(), res.getOut());
})
.thenApply(pipe -> {
Response res = execute(pipe.getReq());
return pipe.with(res.getOut());
});
var result = promise.get().getOut();
System.out.println(result);
}
public static Response execute(Request req) {
return req.getResponse();
}
}
Because it runs asynchronously, it can throw InterruptedException, and it can also throw ExecutionException if something else breaks. I don't know how you want to handle that, so I just declared chain to throw.
If you wanted to apply n operations in a loop, you'd have to keep reassigning the promise, as follows:
var promise = CompletableFuture.supplyAsync(() -> new Pipe(req, ""));
for (...) {
promise = promise.thenApply(pipe -> {
Response res = execute(pipe.getReq());
return pipe.with(res.getOut());
});
}
var result = promise.get().getOut();
I've used Java 10 type inference with var here, but the types of promise and result would be CompletableFuture<Pipe> and String, respectively.
(Note: it might be better to make Request immutable and pass a new, altered one down the pipeline rather than mutating it. On the other hand, you could also wrap a StringBuilder instead of a String, and have the data you're accumulating be mutable, too. Right now it's an odd mix of mutable and immutable, but that matches what your code was doing.)
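A rough sketch of that fully immutable variant, assuming Request gains hypothetical copy-style withABC/withXYZ methods instead of setters:
// Sketch only: req.withABC(...)/req.withXYZ(...) are hypothetical copy methods
// on Request that return a new, altered instance instead of mutating this one.
public final class ImmutablePipe {
    private final Request req;
    private final String out;

    public ImmutablePipe(Request req, String out) {
        this.req = req;
        this.out = out;
    }

    public Request getReq() { return req; }
    public String getOut() { return out; }

    public ImmutablePipe with(String data) {
        return new ImmutablePipe(req, out + data);
    }

    public ImmutablePipe withABC(String abc, String data) {
        return new ImmutablePipe(req.withABC(abc), out + data);
    }

    public ImmutablePipe withXYZ(String xyz, String data) {
        return new ImmutablePipe(req.withXYZ(xyz), out + data);
    }
}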
Thanks all for the inputs; I finally landed on a solution which meets my need. I used a Singleton for the request execution. For each type of command, there is a set of requests to be executed in one particular order. Each command has a particular order of requests, which I stored in an array of the requests' unique IDs. Then I kept the array in a map against the command name.
In a loop, I ran through the array and executed each request; after each iteration I set the response back into the request object and eventually prepared the output response.
private static Map<RequestAction,String[]> actionMap = new HashMap<RequestAction, String[]>();
static{
actionMap.put(RequestAction.COMMAND1,new String[]{WebServiceConstants.ONE,WebServiceConstants.TWO,WebServiceConstants.FOUR,WebServiceConstants.THREE});
actionMap.put(RequestAction.THREE,new String[]{WebServiceConstants.FIVE,WebServiceConstants.ONE,WebServiceConstants.TWO});
}
public Map<String,Object> execute(ServiceParam param) {
    String[] requestChain = getRequestChain(param);
    Map<String,Object> responseMap = new HashMap<String, Object>();
    Map<String,Object> tempMap = null;
    for (String reqId : requestChain) {
        prepareForProcessing(param, tempMap, responseMap);
        param.getRequest().setReqId(reqId);
        // processing the request
        tempMap = Service.INSTANCE.process(param);
        // prepare responseMap using tempMap, then set the prepared response
        // back into the request for the next iteration
        param.setResponse(response);
    }
    return responseMap;
}
Say we have an instance of o.s.w.reactive.function.server.ServerResponse.
What is the proper way to fetch the contents of its body, in other words, how would one implement the fetchBodyAsString function used below?
@Test
void test() {
    ServerResponse response = getResponseFromService("mock data");
    String body = fetchBodyAsString(response);
    assertEquals("hello", body);
}
Could you also elaborate a bit on why ServerResponse has methods for everything (cookies(), headers(), statusCode()) but not for the response body? I guess there should be a way to get the body with the writeTo() method, although it is entirely unclear how to use it.
I was digging around for something similar for unit testing purposes, and stitched together the below code. It's in Kotlin, but should be relatively easy to translate to Java and solve your problem (though it definitely does seem a bit hacky).
fun fetchBodyAsString(serverResponse: ServerResponse): String {
val DEFAULT_CONTEXT: ServerResponse.Context = object : ServerResponse.Context {
override fun messageWriters(): List<HttpMessageWriter<*>> {
return HandlerStrategies.withDefaults().messageWriters()
}
override fun viewResolvers(): List<ViewResolver> {
return Collections.emptyList()
}
}
// Only way I could figure out how to get the ServerResponse body was to have it write to an exchange
val request = MockServerHttpRequest.get("http://thisdoenstmatter.com").build()
val exchange = MockServerWebExchange.from(request)
serverResponse.writeTo(exchange, DEFAULT_CONTEXT).block()
val response = exchange.response
return response.bodyAsString.block()!!
}
Basically I needed to create a fake MockServerWebExchange and have the ServerResponse write to it, translating it into a MockServerHttpResponse from which you can pull the response body fairly painlessly. This is definitely not elegant, but it works.
Also note, I didn't test the above function itself, just that it compiles. It should work though as the function's inner code is exactly what we're using.
As for your other questions about ServerResponse, I don't know the answers, but am curious about that as well!
As far as I know, ServerResponse is used at the controller or router function level.
For testing you can use WebTestClient:
@Autowired
WebTestClient webTestClient;
@Test
void test() {
webTestClient.get()
.exchange()
.expectStatus()
.isOk()
.expectHeader()
.contentType(MediaType.APPLICATION_JSON)
.expectBody()
.jsonPath("data.name").isEqualTo("name");
}
or
@Autowired
WebTestClient webTestClient;
@Test
void test() {
FluxExchangeResult<String> result = webTestClient.get()
.exchange()
.returnResult(String.class);
int rawStatusCode = result.getRawStatusCode();
HttpStatus status = result.getStatus();
HttpHeaders responseHeaders = result.getResponseHeaders();
String stringResponseBody = result.getResponseBody().blockFirst();
}
This is based on Alan Yeung's solution above, except in Java. There has to be a better 'native' way to do this without loading the application context.
public class ServerResponseExtractor {
public static <T> T serverResponseAsObject(ServerResponse serverResponse,
ObjectMapper mapper, Class<T> type) {
String response = serverResponseAsString(serverResponse);
try {
return mapper.readValue(response, type);
} catch (JsonProcessingException e) {
throw new RuntimeException(e);
}
}
public static String serverResponseAsString(ServerResponse serverResponse) {
MockServerWebExchange exchange = MockServerWebExchange.from(
MockServerHttpRequest.get("/foo/foo"));
DebugServerContext debugServerContext = new DebugServerContext();
serverResponse.writeTo(exchange, debugServerContext).block();
MockServerHttpResponse response = exchange.getResponse();
return response.getBodyAsString().block();
}
private static class DebugServerContext implements ServerResponse.Context {
@Override
public List<HttpMessageWriter<?>> messageWriters() {
return HandlerStrategies.withDefaults().messageWriters();
}
@Override
public List<ViewResolver> viewResolvers() {
return Collections.emptyList();
}
}
}
Another way to test the body inside a unit test is to cast the ServerResponse to an EntityResponse. This does show a warning for an unchecked cast but inside a controlled unit test I wasn't too worried about it. This just exposes the object that was set using bodyValue() before it is serialized. If you are trying to test the serialization of said body this might not work for your needs.
val entityResponse = serverResponse as EntityResponse<{ Insert Class of Body }>
val bodyObject = entityResponse.entity()
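In Java the same unchecked cast would look roughly like this (a sketch; MyBody stands in for whatever class was passed to bodyValue()):
// Sketch: unchecked cast from ServerResponse to EntityResponse in a unit test.
// MyBody is a placeholder for the class of the object given to bodyValue().
@SuppressWarnings("unchecked")
EntityResponse<MyBody> entityResponse = (EntityResponse<MyBody>) serverResponse;
MyBody bodyObject = entityResponse.entity();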
I have a Play application that handles WebSocket requests. The routes file contains this line:
GET /testsocket controllers.HomeController.defaultRoomSocket
An already working, synchronous version looks like this (adapted from the 2.7.x docs):
public WebSocket defaultRoomSocket() {
return WebSocket.Text.accept(
request -> ActorFlow.actorRef(MyWebSocketActor::props, actorSystem, materializer));
}
As stated in https://www.playframework.com/documentation/2.7.x/JavaWebSockets#Accepting-a-WebSocket-asynchronously I changed the signature to
public CompletionStage<WebSocket> defaultRoomSocket(){
//returning a CompletionStage here, using the "ask pattern"
//to get the needed Flow from an other Actor
}
From here I run into the following problem:
Cannot use a method returning java.util.concurrent.CompletionStage[play.mvc.WebSocket] as a Handler for requests
Furthermore, WebSocket has no type parameter, even though the documentation suggests it does. What is the appropriate way to accept a WebSocket request asynchronously?
The documentation indeed needs to be updated; I think some bits were missed in the refactoring of the WebSockets in #5055.
To get async processing, you should use the acceptOrResult method, whose callback returns a CompletionStage instead of a plain Flow. The stage can then resolve to either a Result or an Akka Flow, via a functional programming helper (F.Either). In fact, here's how the accept method is implemented:
public WebSocket accept(Function<Http.RequestHeader, Flow<In, Out, ?>> f) {
return acceptOrResult(
request -> CompletableFuture.completedFuture(F.Either.Right(f.apply(request))));
}
As you can see, all it does is call the async version with a completedFuture.
To fully make it async and get to what I think you're trying to achieve, you'd do something like this:
public WebSocket ws() {
return WebSocket.Json.acceptOrResult(request -> {
if (sameOriginCheck(request)) {
final CompletionStage<Flow<JsonNode, JsonNode, NotUsed>> future = wsFutureFlow(request);
final CompletionStage<Either<Result, Flow<JsonNode, JsonNode, ?>>> stage = future.thenApply(Either::Right);
return stage.exceptionally(this::logException);
} else {
return forbiddenResult();
}
});
}
@SuppressWarnings("unchecked")
private CompletionStage<Flow<JsonNode, JsonNode, NotUsed>> wsFutureFlow(Http.RequestHeader request) {
long id = request.asScala().id();
UserParentActor.Create create = new UserParentActor.Create(Long.toString(id));
return ask(userParentActor, create, t).thenApply((Object flow) -> {
final Flow<JsonNode, JsonNode, NotUsed> f = (Flow<JsonNode, JsonNode, NotUsed>) flow;
return f.named("websocket");
});
}
private CompletionStage<Either<Result, Flow<JsonNode, JsonNode, ?>>> forbiddenResult() {
final Result forbidden = Results.forbidden("forbidden");
final Either<Result, Flow<JsonNode, JsonNode, ?>> left = Either.Left(forbidden);
return CompletableFuture.completedFuture(left);
}
private Either<Result, Flow<JsonNode, JsonNode, ?>> logException(Throwable throwable) {
logger.error("Cannot create websocket", throwable);
Result result = Results.internalServerError("error");
return Either.Left(result);
}
(this is taken from the play-java-websocket-example, which might be of interest)
As you can see, it first goes through a few stages before returning either a WebSocket connection or an HTTP status.
I have to implement a filter to prevent XSS attacks in my Liferay Portal. I have read a lot of answers about it, so I used an HttpServletRequestWrapper to add sanitized parameters to my request. My filter works properly: debugging the code, I realized that the filter takes the parameters and sanitizes them.
My problem is that in the processAction of a portlet I am not able to retrieve the sanitized parameter using request.getParameter(); I always get the old, unsanitized parameter.
For example, suppose I have a portlet with a simple form like this:
As you can see, in the input field there is a <b> tag to sanitize. When the form is submitted my filter is invoked and its doFilter() method runs.
My doFilter method iterates over all parameters, sanitizing them. Then I add them to my wrapped request:
/*
* Did it make any difference?
*/
if (!Arrays.equals(processedParams, params)) {
logger.info("Parameter: " + params[0] + " sanitized with: " + processedParams[0] );
/*
* If so, wrap up the request with a new version that will return the trimmed version of the param
*/
HashMap<String, String[]> map = new HashMap<>();
map.put(name, processedParams);
final HttpServletRequestWrapper newRequest = new ExtendedRequestWrapper(httpServletRequest,map);
/*
* Return the wrapped request and forward the processing instruction from
* the validation rule
*/
return newRequest;
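For context, a minimal sketch of the surrounding doFilter; sanitizeParameters() is a hypothetical helper standing in for the loop above, and the key point is that the wrapped request is the one handed to chain.doFilter:
// Minimal sketch: whatever request object is passed to chain.doFilter is the one
// downstream code (including the portlet) will read parameters from.
// sanitizeParameters() is a hypothetical helper wrapping the loop shown above.
@Override
public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
        throws IOException, ServletException {
    HttpServletRequest httpServletRequest = (HttpServletRequest) request;
    ServletRequest possiblyWrapped = sanitizeParameters(httpServletRequest);
    chain.doFilter(possiblyWrapped, response);
}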
My ExtendedRequestWrapper class overrides the getParameter method:
public class ExtendedRequestWrapper extends HttpServletRequestWrapper {
private final Map<String, String[]> modifiableParameters;
private Map<String, String[]> allParameters = null;
public ExtendedRequestWrapper(final HttpServletRequest request,
final Map<String, String[]> additionalParams)
{
super(request);
this.modifiableParameters = new TreeMap<String, String[]>();
this.modifiableParameters.putAll(additionalParams);
}
@Override
public String getParameter(final String name)
{
String[] strings = getParameterMap().get(name);
if (strings != null)
{
return strings[0];
}
return super.getParameter(name);
}
@Override
public Map<String, String[]> getParameterMap()
{
if (this.allParameters == null)
{
this.allParameters = new TreeMap<String, String[]>();
this.allParameters.putAll(super.getParameterMap());
this.allParameters.putAll(modifiableParameters);
}
//Return an unmodifiable collection because we need to uphold the interface contract.
return Collections.unmodifiableMap(allParameters);
}
@Override
public Enumeration<String> getParameterNames()
{
return Collections.enumeration(getParameterMap().keySet());
}
@Override
public String[] getParameterValues(final String name)
{
return getParameterMap().get(name);
}
}
Now, when I try to access the sanitized params in my processAction(), I get the old, unsanitized value:
@Override
public void processAction(ActionRequest request, ActionResponse response) throws PortletException, IOException {
    String azione = request.getParameter("MyXSSaction");
    if (azione.equals("XSSAttack")) {
        String descr = request.getParameter("mydescr");
}
}
How can I solve this?
You should not do this generically in your input handling. First of all, there is no XSS in <b>, as the second 'S' in XSS stands for 'scripting', and <b> doesn't contain any script.
Second, a general and thorough application of such a filter will effectively keep you from adding proper web content, blog articles and other content where formatting is legitimately needed.
Third, say you have a random book management system: why shouldn't people be able to enter Let x < 5 as a book title, or even <script>alert('XSS')</script>? Wouldn't those be proper book titles? In fact, they'd be proper data, and you want to escape them when you display them.
There might be an argument for sanitizing (like Liferay's AntiSamy plugin does) certain elements if they're meant to be displayed as HTML. But anything else just needs to be properly escaped during output.
Another way to put it: Those parameters are only dangerous if you incorrectly don't escape them when they're shown in HTML pages - but if you embed them in a text/plain email body, they're completely harmless.
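For example, escaping at output time keeps the stored data intact while rendering it harmless. A small sketch using Liferay's HtmlUtil (com.liferay.portal.kernel.util.HtmlUtil):
// Sketch: store the raw value as-is, escape only at the moment it is written into HTML.
String rawTitle = "<script>alert('XSS')</script>"; // perfectly valid data to store
String safeHtml = HtmlUtil.escape(rawTitle);       // escaped markup, safe to render in a page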
Is there any way to execute multiple requests in sequence in Retrofit?
These requests use the same Java interface and differ only in the parameters they take, which are contained in an ArrayList.
For requests A1, A2, A3, A4, A5...... An
Hit A1,
onResponse() of A1 is called
Hit A2,
onResponse() of A2 is called
Hit A3
...
onResponse() of An is called.
The problem can be easily solved with RxJava.
Assuming you have a Retrofit Api interface that returns a Completable:
interface Api {
@GET(...)
fun getUser(id: String): Completable
}
Then you can perform this:
// Create a stream, that emits each item from the list, in this case "param1" continued with "param2" and then "param3"
Observable.fromIterable(listOf("param1", "param2", "param3"))
// we are converting `Observable` stream into `Completable`
// also we perform request here: first time parameter `it` is equal to "param1", so a request is being made with "param1"
// execution will halt here until the response is received. Only if the response is successful will a call with the second param ("param2") be executed
// then the same again with "param3"
.flatMapCompletable { api.getUser(it) }
// we want request to happen on a background thread
.subscribeOn(Schedulers.io())
// we want to be notified about completion on the UI thread
.observeOn(AndroidSchedulers.mainThread())
// here we'll get notified that the operation has either completed successfully OR failed for some reason (specified by `Throwable it`)
.subscribe({ println("completed") }, { println(it.message) })
If your retrofit API does not return a Completable, then change api.getUser(it) to api.getUser(it).toCompletable().
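For Java users, a rough equivalent of the snippet above (assuming RxJava 2 and the same Api interface) might look like this; concatMapCompletable makes the one-request-at-a-time ordering explicit:
// Rough sketch in Java, assuming RxJava 2 and the Api interface shown above.
Observable.fromIterable(Arrays.asList("param1", "param2", "param3"))
        .concatMapCompletable(param -> api.getUser(param))
        .subscribeOn(Schedulers.io())
        .observeOn(AndroidSchedulers.mainThread())
        .subscribe(
                () -> System.out.println("completed"),
                error -> System.out.println(error.getMessage()));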
You can do this easily by using the zip function in Rx (for example, each request in Retrofit 2 returns an Observable<Object>). It will run the requests sequentially. You can try my code below:
public Observable buildCombineObserverable() {
List<Observable<Object>> observables = new ArrayList<>();
for (int i = 0; i < number_of_your_request; i++) {
observables.add(your_each_request_with_retrofit);
}
return Observable.zip(observables, new FuncN<Object>() {
@Override
public Object call(Object... args) {
return args;
}
});
}
You can subscribe to this Observable and get the data from all requests. The zip function will perform the requests sequentially, zip their data, and return it as an Object... array.
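For example, subscribing could look roughly like this (a sketch, RxJava 1; since the zip function above returns its args array, the emitted item is an Object[] of the individual responses):
// Sketch: subscribe to the combined Observable and unpack the zipped results.
buildCombineObserverable()
        .subscribeOn(Schedulers.io())
        .observeOn(AndroidSchedulers.mainThread())
        .subscribe(result -> {
            Object[] responses = (Object[]) result;
            for (Object response : responses) {
                // handle each individual response here
            }
        }, throwable -> throwable.printStackTrace());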
OK, I don't know about Retrofit; I use loopj's library, but the concept is the same: both have a method for success and a method for failure. So here is my general suggestion:
ArrayList<MyRequest> requests = new ArrayList<>();
int numberOfRequests = 10;
JSONObject params = null;
try{
params = new JSONObject("{\"key\":\"value\"}");
}catch(JSONException e){
e.printStackTrace();
}
MyRequest firstRequest = new MyRequest();
requests.add(firstRequest);
for(int i = 0; i < numberOfRequests; i++){
MyRequest myRequest = new MyRequest();
requests.get(requests.size() - 1).addNextRequest(myRequest);
myRequest.addPreviousRequest(requests.get(requests.size() - 1));
//don't invoke sendRequest before addNextRequest
requests.get(requests.size() - 1).sendRequest(params, "example.com", App.context);
requests.add(myRequest);
}
requests.get(requests.size() - 1).sendRequest(params, "example.com", App.context);
and the MyRequest class:
import android.content.Context;
import com.loopj.android.http.AsyncHttpClient;
import com.loopj.android.http.AsyncHttpResponseHandler;
import org.json.JSONObject;
import cz.msebera.android.httpclient.Header;
import cz.msebera.android.httpclient.entity.StringEntity;
public class MyRequest{
private Object result, nextRequestsResult;
private MyRequest nextRequest, previousRequest;
public void addNextRequest(MyRequest nextRequest){
this.nextRequest = nextRequest;
}
public void addPreviousRequest(MyRequest previousRequest){
this.previousRequest = previousRequest;
}
public void sendRequest(JSONObject parameters, String url, Context ctx){
AsyncHttpClient mClient = new AsyncHttpClient();
StringEntity entity = new StringEntity(parameters.toString(), "UTF-8");
String contentType = "application/json";
mClient.post(ctx, url, entity, contentType,
new AsyncHttpResponseHandler(){
private void sendResult(Object... results){
MyRequest.this.result = results;
if(previousRequest != null){
if(nextRequest != null){
if( nextRequestsResult != null){
previousRequest.onResult(results, nextRequestsResult);
}else{
//next request's result is not ready yet
//so we don't do anything here. When nextRequestsResult
//gets ready, it will invoke this request's onResult
}
}else {
//nextRequest == null means this the last request
previousRequest.onResult(results);
}
}else{
//previousRequest == null means this is the first request
if(nextRequest != null){
if(nextRequestsResult != null){
doFinalJobWithResults(results, nextRequestsResult); //previousRequest is null here, so this is the final destination
}else{
//next request's result is not ready yet
//so we don't do anything here. When nextRequestsResult
//gets ready, it will invoke this request's onResult
}
}else{
//next request and previous request are null so it means
//this is the only request, so this is the final destination
doFinalJobWithResults(results);
}
}
}
@Override
public void onSuccess(final int statusCode, final Header[] headers,
final byte[] responseBody){
sendResult(responseBody, true, null, false);//whatever
}
@Override
public void onFailure(final int statusCode, final Header[] headers,
final byte[] responseBody,
final Throwable error){
sendResult(responseBody, error);//or just sendResult();
}
});
}
/**
This method should be invoked only by the next request.
@param nextRequestsResult
results of the next request which this request is expecting.
*/
private void onResult(Object... nextRequestsResult){
this.nextRequestsResult = nextRequestsResult;
//do whatever you want with the result of next requests here
if(previousRequest != null){
if(result != null){
previousRequest.onResult(result, this.nextRequestsResult);
}
}else{
//if it doesn't have previous request then it means this is the first request
//so since this method gets invoked only by next request then it means
//all of the next requests have done their job and this is the final destination
if(nextRequestsResult != null){
if(this.result != null){
doFinalJobWithResults(nextRequestsResult, this.result);
}
}
}
}
private void doFinalJobWithResults(Object... results){
//whatever
}
}
It's a general-purpose class: you can send hundreds of network requests simultaneously, but their results will be processed in sequence.
This way, for example, 100 requests can be sent to the server, yet it takes roughly the time of one request to get all of their responses and process them.
I haven't tested this code at all; it may have some bugs and mistakes. I wrote it just for this question, to give an idea.
I want to generate a unique MD5 hash for every HTTP request that hits my REST API.
So far I have just used a String of request parameters, but the actual HTTP request will have many other things.
How can I achieve this?
public final class MD5Generator {
public static String getMd5HashCode(String requestParameters) {
return DigestUtils.md5DigestAsHex(requestParameters.getBytes());
}
}
My Controller
@RequestMapping(value = { "/dummy" }, method = RequestMethod.GET)
public String processOperation(HttpServletRequest request) {
serviceLayer = new ServiceLayer(request);
return "wait operation is executing";
}
Service layer
private String httpRequestToString() {
String request = "";
Enumeration<String> requestParameters = httpRequest.getParameterNames();
while (requestParameters.hasMoreElements()) {
request += String.valueOf(requestParameters.nextElement());
}
if (!request.equalsIgnoreCase(""))
return request;
else {
throw new HTTPException(200);
}
}
private String getMD5hash() {
return MD5Generator.getMd5HashCode(httpRequestToString());
}
Do you see any issues with generating a UUID for every request and using that instead?
For example, you could generate the UUID and attach it to the request object if you need it during the request life-cycle:
String uuid = UUID.randomUUID().toString();
request.setAttribute("request-id", uuid);
You can combine the request time (System.currentTimeMillis()) and the remote address from HttpServletRequest. However, if you're expecting high load, multiple requests may arrive from a particular client within the same millisecond. To overcome this, you may add a global atomic counter to your String combination.
Once you generate the MD5 key, you can set it in a ThreadLocal so it can be reached later.
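A small sketch of that combination (the names and the reuse of your MD5Generator are illustrative, not a fixed API):
// Sketch: time + remote address + global counter, hashed and kept in a ThreadLocal.
private static final AtomicLong COUNTER = new AtomicLong();
private static final ThreadLocal<String> REQUEST_ID = new ThreadLocal<>();

public static String assignRequestId(HttpServletRequest request) {
    String raw = System.currentTimeMillis()
            + "|" + request.getRemoteAddr()
            + "|" + COUNTER.incrementAndGet();
    String md5 = MD5Generator.getMd5HashCode(raw);
    REQUEST_ID.set(md5); // reachable afterwards on the same thread
    return md5;
}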
Maybe something like this will be possible in the future, but I searched and did not find an automated way to achieve it:
@GetMapping("/user/{{md5(us)}}")