Way to design an app that uses HTTP requests - Java

In my app I have, for example, 3 logical blocks, created by the user in this order:
FirstBlock -> SecondBlock -> ThirdBlock
There is no class inheritance between them (none of them extends any other), but a logical hierarchy exists (for example, an Image contains an Area, which contains a Message). Sorry, I'm not strong on terminology - hope you'll understand me.
Each block sends requests to the server (to create information about it on the server side) and then handles the responses independently (but using the same implementation of the HTTP client), just like in this image (red lines are responses, black lines are requests).
http://s2.ipicture.ru/uploads/20120121/z56Sr62E.png
Question
Is this a good model? Or is it better to create a controller class that sends the requests on its own and then handles the responses and redirects the results to my blocks? Or should the implementation of the HTTP client be the controller itself?
P.S. If I forgot to provide some information - please tell me. Also, if there are errors in my English - please edit the question.

Here's why I would go with a separate controller class to handle the HTTP requests and responses:
Reduce code duplication (do you really need three separate HTTP implementations?)
If/when the communication protocol between your app and the server changes, you have to rewrite all your classes. Say, for example, you add another field to your response payload and your app isn't built to handle it: you now have to rewrite FirstBlock, SecondBlock, and ThirdBlock. Not ideal.
Modify your implementation of the HTTP client into a controller class such that:
All HTTP requests/responses go through it
It is responsible for routing the responses to the appropriate class.
Advantages?
If/when you change the communication protocol, all the relevant code is in this controller class and you don't have to touch FirstBlock, SecondBlock, or ThirdBlock
Debugging your HTTP requests!
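For illustration, a rough sketch of such a controller (all names and the string-based payloads here are hypothetical, not from the question):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;

// Hypothetical sketch: one controller owns the shared HTTP implementation
// and routes each response back to the block that registered for it.
class RequestController {
    private final Map<String, Consumer<String>> handlers = new HashMap<>();

    // Each block registers a callback for its own responses.
    public void register(String blockName, Consumer<String> onResponse) {
        handlers.put(blockName, onResponse);
    }

    // In a real app this would perform the HTTP call; here it is stubbed.
    public void send(String blockName, String requestBody) {
        String response = doHttp(requestBody);       // single shared implementation
        Consumer<String> handler = handlers.get(blockName);
        if (handler != null) {
            handler.accept(response);                // route to the right block
        }
    }

    private String doHttp(String requestBody) {
        return "response-to:" + requestBody;         // stand-in for the real call
    }
}
```

With this shape, a protocol change touches only `doHttp` and the routing, never the blocks themselves.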

I would suggest that your 3 blocks not deal with HttpClient directly. They should each deal with some interface which handles the remote connection: sending the request and processing the results. For example:
public interface FirstBlockConnector {
    public SomeResultObject askForSomeResult(SomeRequestObject request);
}
Then the details of the HTTP request and response will be in the connector implementations. You may find that you only need one connector that implements all 3 RPC interfaces. Once you separate out the RPC mechanisms then you can find common code in the implementations that actually deal with the HttpClient object. You can also swap out HTTP with another RPC mechanism without changing your block code.
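To make the "one connector implementing all the interfaces" idea concrete, here is a sketch (the request/result types and the second interface are hypothetical stand-ins for whatever the blocks actually exchange):

```java
// Sketch only: strings stand in for the real request/result objects.
interface FirstBlockConnector {
    String askForSomeResult(String request);
}

interface SecondBlockConnector {
    String askForSomethingElse(String request);
}

// One class implements all the block-facing interfaces, so the HTTP
// details (client setup, serialization, error handling) live in one place.
class SingleHttpConnector implements FirstBlockConnector, SecondBlockConnector {
    @Override
    public String askForSomeResult(String request) {
        return post("/first", request);
    }

    @Override
    public String askForSomethingElse(String request) {
        return post("/second", request);
    }

    // Shared plumbing; a real version would call HttpClient here.
    private String post(String path, String body) {
        return path + ":" + body;
    }
}
```

Each block only ever sees its own small interface, so swapping HTTP for another RPC mechanism means swapping this one class.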
In terms of controllers, I think of them as a web-server-side term rather than a client-side one, but maybe you meant a connector like the above.
Hope this helps.

Related

Http requests in Android: Good use of Singleton or static class?

I generally dislike the use of singletons or static classes, since I can usually refactor them into something different.
However, I am currently designing my access point to an HTTP API in an Android app, and I noticed that I have the following environment:
I need to send HTTP requests in the majority of my code modules (Activities).
The code for sending a request does not depend on the request being sent
There will always only be one specific user on the app per session (unlike the server-side that has to handle different users etc)
Therefore, I was thinking that this could be a situation where it is justifiable to use a singleton, or even a static class, to place HTTP requests. In the rest of my code, I would then simply have to use something like:
MyHttpAccess.attemptLogin(name, pass, callback)
in order to complete the request. I'm even leaning towards using a static class, as I can't think of any variable data I would need to store.
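For concreteness, a minimal sketch of what I have in mind (the `attemptLogin` signature and the callback type are my assumptions, not a real API):

```java
import java.util.function.Consumer;

// Hypothetical sketch of the singleton access point described above.
final class MyHttpAccess {
    private static final MyHttpAccess INSTANCE = new MyHttpAccess();

    private MyHttpAccess() { }                 // no outside instantiation

    public static MyHttpAccess get() {
        return INSTANCE;
    }

    public void attemptLogin(String name, String pass, Consumer<Boolean> callback) {
        // A real implementation would fire the HTTP request off the main
        // thread and invoke the callback with the parsed server result.
        callback.accept(name != null && pass != null);
    }
}
```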
Does this seem like good or bad design, and what should I potentially change?
HTTP request in Kotlin using a single class
https://medium.com/@umesh8346/android-kotlin-api-integration-different-method-b5b84eb4f386

Why should the HTTP method PUT be idempotent, and not POST, when implementing a RESTful service?

There are many resources available on the internet where PUT vs POST is discussed. But I could not understand how that would affect the Java implementation, or the back-end implementation done underneath, for a RESTful service. The links I viewed are mentioned below:
https://www.keycdn.com/support/put-vs-post/
https://spring.io/understanding/REST#post
https://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html
http://javarevisited.blogspot.com/2016/10/difference-between-put-and-post-in-restful-web-service.html
For example, let's say there is a RESTful web service for Address.
So POST /addresses will do the job of updating the Address, and PUT /addresses/1 will do the job of creating one.
Now, how can the HTTP methods PUT and POST control what the web service code is doing behind the scenes?
PUT /addresses/1
may end up creating multiple entries of the same address in the DB.
So my question is: why is the idempotent behavior linked to the HTTP method?
How will you control the idempotent behavior by using specific HTTP methods? Or is it just a guideline or suggested standard practice?
I am not looking for an explanation of what idempotent behavior is, but for what makes us tag these HTTP methods so.
This is HTTP specific, as the RFC you linked states: https://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html (see up-to-date RFC links at the bottom of this answer). It is not described as part of REST: https://www.ics.uci.edu/~fielding/pubs/dissertation/top.htm
Now you wrote,
I am not looking for an explanation of what is idempotent behavior but
what make us tag these HTTP methods so?
An idempotent operation always has the same result (I know you know that), but the result is not the same thing as the HTTP response. It should be obvious from the HTTP perspective that multiple requests with any method, even with all the same parameters, can have different responses (e.g. timestamps). So the responses can actually differ.
What should not change is the result of the operation. So calling PUT /addresses/1 multiple times should not create multiple addresses.
As you can see, it's called PUT, not CREATE, for a reason. It may create the resource if it does not exist. If the resource exists, it may overwrite it with the new version (an update). If the new version is exactly the same, it should do nothing on the server and return the same answer as the original request (because it may be the same request repeated: the previous request was interrupted and the client did not receive the response).
Compared to SQL, PUT would be more like INSERT OR UPDATE, not only INSERT or only UPDATE.
So my question is, why the idempotent behavior is linked to the HTTP method?
It is linked to the HTTP method so that some services (proxies) know that, in case of a request failure, they can safely repeat it (not in the sense of a safe HTTP method, but in the sense of idempotence).
How will you control the idempotent behavior by using specif HTTP methods?
I'm not sure what you are asking for.
But:
GET, HEAD - just return data and do not change anything (apart from maybe some logs, stats, metadata?), so they are safe and idempotent.
POST, PATCH - can do anything; they are neither safe nor idempotent.
PUT, DELETE - are not safe (they change data), but they are idempotent, so it is "safe" to repeat them.
This basically means that safe methods can be performed by proxies, caches, web crawlers, etc. without changing anything, and idempotent methods can be repeated by software without changing the outcome.
Or is it that just a guideline or standard practice suggested?
It is a "standard". Maybe the RFC is not a standard yet, but it will eventually be one, and we don't have anything else we could (and should) follow.
Edit:
As RFC mentioned above is outdated here are some references to current RFCs about that topic:
Retrying idempotent requests by a client or proxy: https://www.rfc-editor.org/rfc/rfc7230#section-6.3.1
Pipelining idempotent requests: https://www.rfc-editor.org/rfc/rfc7230#section-6.3.2
Idempotent methods in HTTP: https://www.rfc-editor.org/rfc/rfc7231#section-4.2.2
Thanks to Roman Vottner for the suggestion.
So my question is, why the idempotent behavior is linked to the HTTP method?
I am not looking for an explanation of what is idempotent behavior but what make us tag these HTTP methods so?
So that generic, domain-agnostic participants in the exchange of messages can make useful contributions.
RFC 7231 calls out a specific example in its definition of idempotent:
Idempotent methods are distinguished because the request can be repeated automatically if a communication failure occurs before the client is able to read the server's response. For example, if a client sends a PUT request and the underlying connection is closed before any response is received, then the client can establish a new connection and retry the idempotent request. It knows that repeating the request will have the same intended effect, even if the original request succeeded, though the response might differ.
A client, or intermediary, doesn't need to know anything about your bespoke API, or its underlying implementation, to act this way. All of the necessary information is in the specification (RFC 7231's definitions of PUT and idempotent), and in the server's announcement that the resource supports PUT.
Note that idempotent request handling is required of PUT, but it is not forbidden for POST. It's not wrong to have an idempotent POST request handler, or even one that is safe. But generic components, that have only the metadata and the HTTP spec to work from, will not know or discover that the POST request handler is idempotent.
I could not understand how would that affect the Java implementation or back end implementation which is done underneath for a RestFul service?
There's no magic; using PUT doesn't automatically change the underlying implementation of the service; technically, it doesn't even constrain the underlying implementation. What it does do is clearly document where the responsibility lies.
It's analogous to Fielding's 2002 observation about GET being safe:
HTTP does not attempt to require the results of a GET to be safe. What
it does is require that the semantics of the operation be safe, and
therefore it is a fault of the implementation, not the interface
or the user of that interface, if anything happens as a result that
causes loss of property (money, BTW, is considered property for the
sake of this definition).
An important thing to realize is that, as far as HTTP is concerned, there is no "resource hierarchy". There's no relationship between /addresses and /addresses/1 -- for example, messages to one have no effect on cached representations of the other. The notion that /addresses is a "collection" and /addresses/1 is an "item in the /addresses collection" is an implementation detail, private to the origin server.
(It used to be the case that the semantics of POST would refer to subordinate resources, see for example RFC 1945; but even then the spelling of the identifier for the subordinate was not constrained.)
I mean, is PUT /employee acceptable, or does it have to be PUT /employee/<employee-id>?
PUT /employee has the semantics of "replace the current representation of /employee with the representation I'm providing". If /employee is a representation of a collection, it is perfectly fine to modify that collection by passing with PUT a new representation of the collection.
GET /collection
200 OK
{/collection/1, /collection/2}
PUT /collection
{/collection/1, /collection/2, /collection/3}
200 OK
GET /collection
200 OK
{/collection/1, /collection/2, /collection/3}
PUT /collection
{/collection/4}
200 OK
GET /collection
200 OK
{/collection/4}
If that's not what you want, i.e. if you want to append to the collection rather than replace the entire representation, then PUT has the wrong semantics when applied to the collection. You either need to PUT the item representation to an item resource, or you need to use some other method on the collection (POST or PATCH are suitable).
GET /collection
200 OK
{/collection/1, /collection/2}
PUT /collection/3
200 OK
GET /collection
200 OK
{/collection/1, /collection/2, /collection/3}
PATCH /collection
{ op: add, path: /4, ... }
200 OK
GET /collection
200 OK
{/collection/1, /collection/2, /collection/3, /collection/4 }
How will you control the idempotent behavior by using specific HTTP
methods? Or is it that just a guideline or standard practice
suggested?
It is more about the HTTP specification, and an app must follow these specifications. Nothing stops you from altering the behavior on the server side.
There is always a difference between a web service and a RESTful web service.
Consider some legacy apps which use servlets. Servlets have doGet and doPost methods. doPost was always recommended over doGet for security reasons and for storing data on the server/DB, as the info is embedded in the request itself and is not exposed to the outer world.
Even there, nothing stops you from saving data in doGet or returning some static pages in doPost; hence, it's all about following the underlying specs.
So my question is, why the idempotent behavior is linked to the HTTP method?
Because the HTTP specifications says so:
4.2.2. Idempotent Methods
A request method is considered "idempotent" if the intended effect on
the server of multiple identical requests with that method is the
same as the effect for a single such request. Of the request methods
defined by this specification, PUT, DELETE, and safe request methods
are idempotent.
(Source: RFC 7231)
Why do the specs say this?
Because it is useful for implementers of HTTP-based systems to be able to distinguish idempotent and non-idempotent requests, and the method provides a good way to make the distinction.
Because other parts of the HTTP specification are predicated on being able to distinguish idempotent methods; e.g. proxies and caching rules.
How will you control the idempotent behavior by using specific HTTP methods?
It is up to the server to implement PUT & DELETE to have idempotent behavior. If a server doesn't, then it is violating the HTTP specification.
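As an illustration of that contract (a map standing in for the database table; all names here are invented):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicLong;

// Illustration only: a map stands in for the database table.
class AddressStore {
    private final Map<Long, String> rows = new HashMap<>();
    private final AtomicLong nextId = new AtomicLong(1);

    // POST-style: the server assigns a new id on every call -> not idempotent.
    public long create(String address) {
        long id = nextId.getAndIncrement();
        rows.put(id, address);
        return id;
    }

    // PUT-style: the client names the id, so repeating the same call
    // just overwrites the same row -> idempotent.
    public void put(long id, String address) {
        rows.put(id, address);
    }

    public int size() {
        return rows.size();
    }
}
```

Repeating `create` keeps adding rows, while repeating `put` with the same id leaves the store unchanged after the first call; the server code has to be written this way, the method name alone enforces nothing.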
Or is it that just a guideline or standard practice suggested?
It is required behavior.
Now, you could ignore the requirement (there are no protocol police!), but if you do, it is liable to cause your systems to break. Especially if you need to integrate them with systems implemented by other people ... who might write their client code assuming that if it replays a PUT or DELETE, your server won't say "error".
In short, we use specifications like HTTP so that our systems are interoperable. But this strategy only works properly if everyone's code implements the specifications correctly.
POST - is generally not idempotent, so multiple calls will create multiple objects with different IDs
PUT - given the ID in the URI, you would apply a "create or update" query to your database; thus, once the resource is created, every subsequent call will make no difference to the backend state
I.e. there is a clear difference in the backend in how you generate new / update existing stored objects. E.g. assuming you are using MySQL and an auto-generated ID:
POST will end up as an INSERT query
PUT will end up as an INSERT ... ON DUPLICATE KEY UPDATE query
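Sketched as the SQL each handler would typically issue (MySQL syntax; the table and column names are made up for the example):

```java
// Illustration only: table/column names are invented.
class AddressQueries {
    // POST handler: the database assigns the auto-increment id,
    // so every call inserts a fresh row.
    static final String POST_SQL =
        "INSERT INTO addresses (street) VALUES (?)";

    // PUT handler: the client supplies the id, so repeated calls hit
    // the same row ("create or update"), which keeps PUT idempotent.
    static final String PUT_SQL =
        "INSERT INTO addresses (id, street) VALUES (?, ?) "
        + "ON DUPLICATE KEY UPDATE street = VALUES(street)";
}
```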
Normally in REST APIs we use:
POST - add data
GET - get data
PUT - update data
DELETE - delete data
Read the post below to get a better idea.
REST API Best Practices

RabbitMQ java client RpcClient/RpcServer example

I'm new to RabbitMQ and am trying to implement an app where RpcClient and RpcServer seem to be a good fit. This is how the app works: when a request comes in, it calls RpcClient to enqueue the request and then waits for the response. On the server side, a listener dequeues the request, processes it, and then enqueues the response using RpcServer. In theory, this should work. I also found a page on RabbitMQ that explains how to improve the performance by using direct reply-to: https://www.rabbitmq.com/direct-reply-to.html. However, I could not tell how to apply this when using com.rabbitmq.client.RpcClient and com.rabbitmq.client.RpcServer to implement my app. Could someone shed some light on this? Thanks!
com.rabbitmq.client.RpcClient and com.rabbitmq.client.RpcServer are two convenience classes that make it easy to implement the RPC pattern.
You can also implement it with the standard classes.
Read this post and also this one (using the standard classes)

How bad is it to use @POST to retrieve data?

I am currently using a @POST web service to retrieve data.
My idea, at the beginning, was to pass a map of parameters. My function on the server side would then take care of reading the needed parameters from the map and returning the response.
This is to prevent having a large number of almost identical functions on the server side.
But if I understood correctly, @POST should be used for creating content.
So my question: is it a big programming mistake to use @POST for data retrieval?
Is it better to create one web service per use case, even if that means a lot of them?
Thanks.
Romain.
POST is used to say that you are submitting data.
GET requests can be bookmarked; POST requests can't. Before there were single-page web applications, we used post-redirect-get to accept a data submission and display a bookmarkable page.
If you use POST to retrieve data, then web caching doesn't work, because the caching code doesn't cache POSTs; it expects a POST to mean it needs to invalidate its cache. If you split your services out use-case-wise and use GET, then you can have something like Squid cache the responses.
You may not need to implement caching right now, but it would be good to keep the option open. Making your services act in a compliant way means you can get leverage from existing tools and infrastructure (which is a selling point of REST).
doGet()
Called by the server (via the service method) to allow a servlet to handle a GET request.
doPost()
Called by the server (via the service method) to allow a servlet to handle a POST request.
No issues with either of them; both will handle your request.

Establishing the right relationship between two classes

For my program I have a Server class and a Protocols class.
When my Server receives a message from the Client, I want the Server to send the message to the Protocols. The Protocols then figure out what needs to be done with the message and invoke the proper methods. Now, the methods that need to be invoked are inside the Server.
So essentially, the Server needs to have access to the Protocols and the Protocols needs to have access to the Server.
What is the best way to establish such a relationship? How would I do it?
I don't want a circular reference, but is there another way?
What about following the Servlet model of request/response objects?
Every time you receive a message, you package it up in a request object, create a response object, and send both to your protocol handler (acting as a kind of servlet).
Your handler deals with the request, and whatever it needs to pass back, it puts in the response object, which is ultimately used by the server to send the actual response to the client. If the server needs to make any decisions, it can do so based on the information already provided in the response object after the request has been attended to by your protocol handler.
You may later add similar concepts to those of the servlet model, like filters or event handlers to deal with similar requirements.
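The request/response idea above can be sketched like this (all names invented for the example): the Server packages the incoming message, the handler fills in the response, and neither class holds a reference back into the other.

```java
// Sketch of the servlet-style request/response pattern.
class Request {
    final String message;
    Request(String message) { this.message = message; }
}

class Response {
    private String body;
    void setBody(String body) { this.body = body; }
    String getBody() { return body; }
}

interface ProtocolHandler {
    void handle(Request req, Response resp);
}

class EchoProtocol implements ProtocolHandler {
    @Override
    public void handle(Request req, Response resp) {
        resp.setBody("echo:" + req.message);   // decide what goes back
    }
}

class Server {
    private final ProtocolHandler handler;
    Server(ProtocolHandler handler) { this.handler = handler; }

    // The server only dispatches; the protocol never calls back into it.
    String onMessage(String message) {
        Request req = new Request(message);
        Response resp = new Response();
        handler.handle(req, resp);
        return resp.getBody();                 // sent to the client
    }
}
```

The circular reference disappears because the Protocols act on the Request/Response pair rather than on the Server itself.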
