I generally dislike the use of singletons or static classes, since most of the time I can refactor them into something better.
However, I am currently designing the access point to an HTTP API in an Android app, and I am thinking about the following environment:
I need to send HTTP requests in the majority of my code modules (Activities).
The code for sending a request does not depend on the particular request being sent.
There will always be only one specific user on the app per session (unlike the server side, which has to handle many different users).
Therefore, I was thinking that this could be a situation where it is justifiable to use a singleton, or even a static class, for placing HTTP requests. In the rest of my code, I would then simply call something like:
MyHttpAccess.attemptLogin(name, pass, callback)
in order to complete the request. I'm even leaning towards a static class, as I cannot think of any variable data I would need to store.
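For illustration, a minimal sketch of what I have in mind; the class name, the callback shape and attemptLogin's body are just placeholders:

// Hypothetical static access point: every HTTP call in the app goes through here.
public final class MyHttpAccess {

    public interface Callback {
        void onSuccess(String responseBody);
        void onFailure(Exception error);
    }

    private MyHttpAccess() { }   // no instances, purely static

    public static void attemptLogin(String name, String pass, Callback callback) {
        // build the login request, run it off the main thread,
        // then call callback.onSuccess(...) or callback.onFailure(...)
    }
}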
Does this seem like good or bad design, and what should I potentially change?
See this article on making HTTP requests in Kotlin using a single class:
https://medium.com/@umesh8346/android-kotlin-api-integration-different-method-b5b84eb4f386
I am looking at microservices, and the possibility of migrating some of our code to this architecture. I understand the general concept but am struggling to see how it would work for our example.
Supposing I have an interface called RatingEngine and an implementation called RatingEngineImpl, both running inside my monolithic application. The principle is simple: the RatingEngineImpl could run on a different machine and be accessed by the monolithic application via (say) a REST API, serializing the DTOs as JSON over HTTP. We even have an interface to help with this decoupling.
But how do I actually go about this? As far as I can see, I need to create a new implementation of the interface for the rump monolith (ie now the client), which takes calls to the interface methods, converts them into a REST call, and sends them over the network to the new 'rating engine service'. Then I also need to implement a new http server, with an endpoint for each interface method, which then deserializes the DTOs (method parameters) and routes the call to our original RatingEngineImpl, which sits inside the server. Then it serializes the response and sends it back to the client.
So that seems like an awful lot of plumbing code. It also adds maintenance overhead, since if you tweak a method in the interface you need to make changes in two more places.
Am I missing something? Is there some clever way we can automate this boilerplate code construction?
The microservice pattern does not suggest you move every single service you have into its own deployable. Only move self-sustaining pieces of logic that will benefit from their own release cycle. For example, if your RatingEngine needs rating-logic updates weekly but the rest of your system is pretty stable, it will likely benefit from being a service of its own.
And yes, microservices add complexity, but not really the boilerplate code of HTTP servers. There are plenty of frameworks that take care of that. Vert.x is a good one; others are Spring Boot, Apache Camel, etc. A complete microservice setup could look like this with Vert.x:
import io.vertx.core.AbstractVerticle;
import io.vertx.core.json.JsonObject;

public class RatingService extends AbstractVerticle implements RatingEngine {

    @Override
    public void start() {
        // Expose the current rating as JSON over HTTP on port 8080.
        vertx.createHttpServer().requestHandler(req -> {
            req.response()
               .putHeader("content-type", "application/json")
               .end(computeCurrentRating().encodePrettily());
        }).listen(8080);
    }

    @Override
    public int getRating() {
        return 4; // or whatever.
    }

    protected JsonObject computeCurrentRating() {
        return new JsonObject().put("rating", getRating());
    }
}
Even the standard Java framework JAX-RS helps you build a microservice in relatively few lines of code.
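For example, a bare-bones JAX-RS version of the rating endpoint might look roughly like this; the resource path is made up, and how the server is started depends on the JAX-RS implementation you pick:

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// Hypothetical JAX-RS resource exposing the rating over HTTP.
@Path("/rating")
public class RatingResource {

    private final RatingEngine engine = new RatingEngineImpl();

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public String currentRating() {
        // hand-rolled JSON for brevity; a real service would use a mapper
        return "{\"rating\": " + engine.getRating() + "}";
    }
}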
The really hard work with microservices is adding error-handling logic in the clients. Some common pitfalls:
Microservice may go down. If a call to the RatingService gives a connection-refused exception, can you deal with it? Can you estimate a rating in the client so that further processing is not blocked? Can you reuse old responses to estimate the rating? Or, at the very least, you need to signal the error to the support staff.
Reactive app? How long can you wait for a response? A call to an in-memory method returns within nanoseconds; a call to an external HTTP service may take seconds or minutes, depending on a number of factors. As long as the application is "reactive" and can continue to work without a rating, presenting it to the user once it becomes available, that's fine. If you block on a call to the rating service for more than a few milliseconds, response time becomes an obstacle. It's not as convenient or common to build reactive apps in Java as in node.js, and a reactive approach will likely trigger a remake of your entire system.
Tolerant client. Unit/integration testing a single project is easy; testing a complex net of microservices is not. The best thing you can do about it is to make your client calls less picky. Schema validations and the like are actually bad things here. In XML, use single XPaths to get exactly the data you want from the response, no more, no less. That way, a change in the microservice response will not require updates to all clients. JSON is a bit easier to deal with than XML in this respect.
No, unfortunately you are not missing anything substantial. The microservice architecture comes with its own cost, and the one that caught your eye (boilerplate code) is one well-known item on the list. This very good article from Martin Fowler explains the various advantages and disadvantages of the idea. It covers topics like:
added complexity
increased operational maintenance cost
struggle to keep consistency (while allowing special cases to be treated in exceptional ways)
... and many more.
There are some frameworks out there to reduce such boilerplate code. I use Spring Boot in a current project (though not for microservices). If you already have Spring-based projects, it really simplifies the development of microservices (or any other non-microservice application based on Spring). Check out some of the examples: https://github.com/spring-projects/spring-boot/tree/master/spring-boot-samples
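As a rough idea of how little code is involved, the rating endpoint as a Spring Boot application could look something like this; the endpoint path and the direct instantiation of RatingEngineImpl are assumptions for the sketch:

import java.util.Collections;
import java.util.Map;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical Spring Boot microservice for the rating example.
@SpringBootApplication
@RestController
public class RatingServiceApplication {

    private final RatingEngine engine = new RatingEngineImpl();

    @GetMapping("/rating")
    public Map<String, Integer> rating() {
        // the returned map is serialized to JSON automatically
        return Collections.singletonMap("rating", engine.getRating());
    }

    public static void main(String[] args) {
        SpringApplication.run(RatingServiceApplication.class, args);
    }
}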
I'm looking to implement a very simple REST web service in Java. This is not my primary line of work, so everything is new to me.
I've been researching Java and JAX-RS implementations. They do not appear to be that difficult, but I haven't been able to understand the lifetime of the service and how it is created by the web server.
I'm afraid that my service may have to do some costly initialization, such as loading a bunch of setup data from a file or resource in order to be able to process requests. I don't want it to have to do that every time it processes a request.
So, my question is: what is the lifetime of my service? Can I load a bunch of parameters for my web service from a file before responding to requests? The parameters I need to load do not change and are the same for all requests (so the service is still stateless), but I'll need to load that data from somewhere, and I'm worried that the service will be forced to do it for each request. So, can my web service "live" or be cached such that it only needs to do that initialization once, or once per thread, but not once per request?
Edit: I haven't decided yet which JAX-RS implementation or which server to use. I'm just interested in whether this can be done, and whether it matters which implementation I choose.
To give an example using Jersey, which is an implementation of JAX-RS: the default life-cycle of a root resource class is that each request creates its own instance, as specified here. So if you have some initial setup in the service that is the same for all requests, you can put it in a static field of the resource class and use a static block to initialize it, since static variables are created once per class. Something like this:
private static MyParam params;

static {
    params = new MyParam("/path/to/file/setup.conf");
}
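If you would rather avoid static state, Jersey also lets you give a resource class singleton scope, so the costly setup runs once in the constructor; a rough sketch, reusing the hypothetical MyParam class and a made-up resource path:

import javax.inject.Singleton;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// Hypothetical singleton-scoped resource: one instance for the whole
// application, so the constructor (and the costly loading) runs only once.
@Singleton
@Path("/setup")
public class SetupResource {

    private final MyParam params;

    public SetupResource() {
        params = new MyParam("/path/to/file/setup.conf");
    }

    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String currentSetup() {
        return params.toString();
    }
}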
In my app I have lots of GET, POST and PUT requests. Right now, I have a singleton class that holds my downloaded data and has many inner classes that extend AsyncTask.
In my singleton class I also have a few interfaces like this:
/**
 * Handlers for notifying listeners when data is downloaded.
 */
public interface OnQuestionsLoadedListener {
    public void onDataLoadComplete();
    public void onDataLoadingError();
}
Is there something wrong with this pattern (many inner classes that extend AsyncTask)?
Could it be done more efficiently, with maybe just one inner class for each HTTP verb (one for GET, one for POST, ...)? If so, how do I decide what to do after, e.g., a GET request finishes?
As a whole, you should move away from AsyncTasks for performing network requests.
Your AsyncTasks are linked to your Activity. That means that if your Activity stops, your AsyncTask stops.
This isn't the biggest problem when fetching data to show in that Activity, since you won't care that the fetching has stopped. But when you want to send some saved data to the server and your user presses 'back' or something like that before everything is sent, the data could be lost and never sent.
What you want to have instead is a Service, which will keep running regardless of what happens to your Activities.
I'd advise you to take a look at RoboSpice. Even if you decide not to use it, reading what it does and why will give you good insight into the pretty long list of reasons not to use AsyncTasks for network requests, and why Services are the better choice.
If you use it, the rest of your question about making network requests efficiently is moot too, since it will handle that for you in the best way possible.
Nothing wrong with many async classes.
What I do is have a network layer, a Service class. Send an Intent to the Service with a ResultReceiver object as part of the Intent; then, in the Service, make the HTTP request in an AsyncTask and send the result back through the ResultReceiver.
A good design abstracts the UI (Activity or Fragment) away from network access.
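A rough sketch of that flow, with made-up class and extra names; using an IntentService means the work already runs on a background thread, so the AsyncTask inside the Service becomes optional:

import android.app.IntentService;
import android.content.Intent;
import android.os.Bundle;
import android.os.ResultReceiver;
import java.io.IOException;

// Hypothetical Service: the Activity passes a ResultReceiver in the Intent
// and gets the result back without tying the request to its own lifetime.
public class NetworkService extends IntentService {

    public static final String EXTRA_RECEIVER = "receiver";
    public static final String EXTRA_URL = "url";
    public static final String KEY_BODY = "body";

    public NetworkService() {
        super("NetworkService");
    }

    @Override
    protected void onHandleIntent(Intent intent) {
        ResultReceiver receiver = intent.getParcelableExtra(EXTRA_RECEIVER);
        String url = intent.getStringExtra(EXTRA_URL);
        try {
            String body = httpGet(url);          // already off the main thread
            Bundle data = new Bundle();
            data.putString(KEY_BODY, body);
            receiver.send(200, data);            // success
        } catch (IOException e) {
            receiver.send(500, Bundle.EMPTY);    // failure
        }
    }

    private String httpGet(String url) throws IOException {
        // the actual request (HttpURLConnection, OkHttp, ...) is omitted for brevity
        return "";
    }
}

The Activity creates a ResultReceiver subclass that overrides onReceiveResult, puts it in the Intent and calls startService; the Service also has to be declared in the manifest.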
In a recently developed app I followed a similar scheme but in addition implemented a WebRequest class doing the actual GET, POST, PUT etc.
What I now have is a "Connector" class which has a whole lot of AsyncTask subclasses within.
In my implementation, however, I made them accept a Callback object to which each of those subclasses passes the Http result.
I think this is a valid if perhaps not ideal way.
What I imagine could be an improvement would be a single AsyncTask subclass to which I would pass the request body (which is currently built within those different tasks), the request URL and the HTTP method, as well as the callback (which is, in my opinion, a rather nice way to get the results).
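Something like the following sketch is what I have in mind; class, field and callback names are made up, and error handling is kept to a minimum:

import android.os.AsyncTask;
import java.io.IOException;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Scanner;

// Hypothetical generic request task: one AsyncTask for all verbs,
// parameterized by URL, method, body and a callback for the result.
public class HttpRequestTask extends AsyncTask<Void, Void, String> {

    public interface Callback {
        void onResult(String responseBody);
        void onError(Exception error);
    }

    private final String url;
    private final String method;   // "GET", "POST", "PUT", ...
    private final String body;     // may be null, e.g. for GET
    private final Callback callback;
    private Exception failure;

    public HttpRequestTask(String url, String method, String body, Callback callback) {
        this.url = url;
        this.method = method;
        this.body = body;
        this.callback = callback;
    }

    @Override
    protected String doInBackground(Void... ignored) {
        try {
            HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
            conn.setRequestMethod(method);
            if (body != null) {
                conn.setDoOutput(true);
                try (OutputStream out = conn.getOutputStream()) {
                    out.write(body.getBytes("UTF-8"));
                }
            }
            try (Scanner s = new Scanner(conn.getInputStream(), "UTF-8").useDelimiter("\\A")) {
                return s.hasNext() ? s.next() : "";
            }
        } catch (IOException e) {
            failure = e;
            return null;
        }
    }

    @Override
    protected void onPostExecute(String result) {
        // back on the UI thread; hand the outcome to the caller
        if (failure != null) {
            callback.onError(failure);
        } else {
            callback.onResult(result);
        }
    }
}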
I really like functional programming; I like its immutability concepts and also its notion of side-effect-free functions.
I'm trying to take some of these concepts into Java.
Now I have some kind of servlet which receives a request, and if the browser did not send a cookie to the server, I would like to create a cookie with a certain path for the user.
I don't want to hold that logic inside the servlet, because it's common to multiple servlets,
so I extract it into some kind of cookie manager which will do this:
CookieManager.java.handleCookies(request, response)
Check if the browser sent a cookie.
If not, set a cookie with a new session value and a certain path.
However, I don't like this, because now the servlet's call to CookieManager.handleCookies will have a side effect. I would rather it return some kind of response that I can then use in my servlet, without it affecting the parameters I pass into it.
Can anyone suggest a solution that is elegant, has no side effects, and is excellent in performance?
Thanks.
You can make use of a servlet filter; it would be well suited to your case. You can map your filter to a URL pattern and write your code inside the doFilter method. Filters are the recommended way to pre- and post-process requests and responses, and since you are preprocessing the request, they fit your case. A filter is also loosely coupled, because you can remove it, modify it, or add another rule at any time without modifying the core servlet code.
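A minimal sketch of such a filter; the cookie name, path and value generation are made up for illustration:

import java.io.IOException;
import java.util.UUID;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical cookie filter: checks for the session cookie before the
// request reaches any servlet, and sets it if the browser did not send one.
public class SessionCookieFilter implements Filter {

    private static final String COOKIE_NAME = "APP_SESSION";

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) res;

        if (!hasCookie(request, COOKIE_NAME)) {
            Cookie cookie = new Cookie(COOKIE_NAME, UUID.randomUUID().toString());
            cookie.setPath("/some/path");   // the "certain path" from the question
            response.addCookie(cookie);
        }
        chain.doFilter(req, res);           // the servlets stay free of cookie logic
    }

    @Override
    public void init(FilterConfig config) { }

    @Override
    public void destroy() { }

    private boolean hasCookie(HttpServletRequest request, String name) {
        Cookie[] cookies = request.getCookies();
        if (cookies == null) {
            return false;
        }
        for (Cookie c : cookies) {
            if (name.equals(c.getName())) {
                return true;
            }
        }
        return false;
    }
}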
One good solution is to create a servlet which will act as a parent class for all your other servlets.
In this servlet, put the cookie-handling logic in a common function, say handleCookie.
In the doGet and doPost methods of this servlet, first call handleCookie and then a service method of the servlet (kept empty in the parent).
In all child servlet classes you then only override the service method inherited from the parent class, and things should work fine for you.
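A rough sketch of that parent servlet; class and method names are made up:

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical parent servlet: handles the cookie once, then delegates
// to a template method that each child servlet overrides.
public abstract class CookieAwareServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        handleCookie(req, resp);
        handle(req, resp);              // child-specific logic
    }

    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        handleCookie(req, resp);
        handle(req, resp);
    }

    private void handleCookie(HttpServletRequest req, HttpServletResponse resp) {
        // check req.getCookies() and add the session cookie if it is missing
    }

    // Child servlets override only this method.
    protected abstract void handle(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException;
}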
Servlet filters are another solution that you can make use of.
In my app I have, for example, 3 logical blocks, created by the user in this order:
FirstBlock -> SecondBlock -> ThirdBlock
There is no class inheritance between them (none of them extends any other), but a logical hierarchy exists (for example, an Image contains an Area, which contains a Message). Sorry, I'm not strong with the terminology; I hope you'll understand me.
Each block sends requests to the server (to create information about it on the server side) and then handles responses independently (but using the same implementation of the HTTP client), just like in this image (red lines are responses, black lines are requests):
http://s2.ipicture.ru/uploads/20120121/z56Sr62E.png
Question
Is this a good model? Or is it better to create some controller class that sends the requests on its own and then handles the responses and redirects the results to my blocks? Or should the implementation of the HTTP client be the controller itself?
P.S. If I forgot to provide some information, please tell me. Also, if there are errors in my English, please edit the question.
Here's why I would go with a separate controller class to handle the HTTP requests and responses:
Reduce code duplication (do you really need three separate HTTP implementations?)
If/when the communication protocol between your app and the server changes, you have to rewrite all your classes. Say, for example, you add another field to your response payload and your app isn't built to handle it; you now have to rewrite FirstBlock, SecondBlock, and ThirdBlock. Not ideal.
Modify your implementation of the HTTP client into a controller class such that:
All HTTP requests/responses go through it
It is responsible for routing the responses to the appropriate class.
Advantages?
If/when you change the communication protocol, all the relevant code is in this controller class and you don't have to touch FirstBlock, SecondBlock, or ThirdBlock
Debugging your HTTP requests becomes much easier!
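A rough sketch of what such a controller class might look like; the listener interface, tag-based routing and method names are made up:

import java.util.HashMap;
import java.util.Map;

// Hypothetical controller: the three blocks register listeners here and
// all HTTP traffic goes through this single class.
public class HttpController {

    public interface ResponseListener {
        void onResponse(String requestTag, String responseBody);
    }

    private final Map<String, ResponseListener> listeners = new HashMap<String, ResponseListener>();

    public void register(String requestTag, ResponseListener listener) {
        listeners.put(requestTag, listener);
    }

    // Single entry point for requests from FirstBlock, SecondBlock and ThirdBlock.
    public void send(String requestTag, String url) {
        String body = doHttpGet(url);                 // shared HTTP client code
        ResponseListener listener = listeners.get(requestTag);
        if (listener != null) {
            listener.onResponse(requestTag, body);    // route the result back to the block
        }
    }

    private String doHttpGet(String url) {
        // one place to change if the communication protocol ever changes
        return "";
    }
}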
I would suggest that your 3 blocks not deal with HttpClient directly. They should each deal with some interface which handles the remote connection, the sending of the request and the processing of the results. For example:
public interface FirstBlockConnector {
    public SomeResultObject askForSomeResult(SomeRequestObject request);
}
Then the details of the HTTP request and response live in the connector implementations. You may find that you only need one connector that implements all 3 RPC interfaces. Once you separate out the RPC mechanism, you can pull the code that actually deals with the HttpClient object into common code, and you can also swap HTTP for another RPC mechanism without changing your block code.
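For instance, an HTTP-backed implementation of that connector might look roughly like this; the base URL, endpoint path and JSON helpers are placeholders:

// Hypothetical connector implementation: the block only sees the interface,
// while the HTTP details live here.
public class HttpFirstBlockConnector implements FirstBlockConnector {

    private final String baseUrl;

    public HttpFirstBlockConnector(String baseUrl) {
        this.baseUrl = baseUrl;
    }

    @Override
    public SomeResultObject askForSomeResult(SomeRequestObject request) {
        String json = toJson(request);                     // serialize the request DTO
        String responseBody = post(baseUrl + "/first-block", json);
        return fromJson(responseBody);                     // deserialize the result DTO
    }

    private String post(String url, String body) {
        // the one place that actually uses HttpClient; swap it out without
        // touching FirstBlock, SecondBlock or ThirdBlock
        return "";
    }

    private String toJson(SomeRequestObject request) {
        return "{}";                                       // placeholder serialization
    }

    private SomeResultObject fromJson(String json) {
        return new SomeResultObject();                     // placeholder deserialization
    }
}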
In terms of controllers, I think of them as a server-side web term rather than something for the client, but maybe you meant something like the connector above.
Hope this helps.