Establishing the right relationship between two classes - java

For my program I have a Server class and a Protocols class.
When my Server receives a message from the Client, I want the Server to pass the message to the Protocols. The Protocols then figure out what needs to be done with the message and invoke the proper methods. Now, the methods that need to be invoked are inside the Server.
So essentially, the Server needs access to the Protocols, and the Protocols need access to the Server.
What is the best way to establish such a relationship? How would I do it?
I don't want a circular reference, but is there another way?

What about following the Servlet model of request/response objects?
Every time you receive a message, you package it up in a request object, create a response object, and send both to your protocol handler (acting as a kind of servlet).
Your handler deals with the request, and whatever it needs to pass back it puts into the response object, which the server ultimately uses to send the actual response to the client. If the server needs to make any decisions, it can base them on the information already placed in the response object once the request has been handled by your protocol handler.
You may later add other concepts from the servlet model, such as filters or event handlers, to deal with similar requirements.
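A minimal sketch of that idea in plain Java (all class and method names here are illustrative, not from your code):

// The server packages each incoming message into a request and hands it,
// together with an empty response, to the protocol handler.
class ProtocolRequest {
    private final String message;
    ProtocolRequest(String message) { this.message = message; }
    String getMessage() { return message; }
}

class ProtocolResponse {
    private String reply;
    private boolean broadcast;   // example of a "decision" the server can read back
    void setReply(String reply) { this.reply = reply; }
    String getReply() { return reply; }
    void setBroadcast(boolean broadcast) { this.broadcast = broadcast; }
    boolean isBroadcast() { return broadcast; }
}

interface ProtocolHandler {
    void handle(ProtocolRequest request, ProtocolResponse response);
}

// In the Server, no reference from the handler back to the Server is needed:
// ProtocolRequest request = new ProtocolRequest(rawMessageFromClient);
// ProtocolResponse response = new ProtocolResponse();
// handler.handle(request, response);
// sendToClient(response.getReply());            // hypothetical Server method
// if (response.isBroadcast()) { /* broadcast to all clients */ }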

Related

Restful way of sending Pre-Post or Pre-Put warning message

I am trying to convert a legacy application to RESTful web services. One of our old forms displayed a warning message immediately on form load. This warning message depends on a user property.
For example, there is a property isInactiveByDefault which, when set to "true", sets the status of any employee created via POST v1/employees to "Inactive". When the "Employee" form loads, the user should see the warning message "Any new employee created will have inactive status".
I originally thought of providing a resource to get the value of the property and letting the client decide whether to display the warning message based on it. But my manager wants to avoid any business logic on the client side. According to him, this is business logic and should be handled by the server.
I am wondering what the RESTful way of sending this warning message is. I could send the message in the POST response, but that would mean the client receives it after the action.
I think that delegating to the client the choice of whether to display the message is at the very frontier of what one may call "business logic". I believe there is room for debate here.
But leaving that point aside: if we consider the message(s) to display as data, then as soon as your client can make REST calls (via JS, AJAX or whatever), you can query the server before or while the form loads (and wait/sync on that).
So it is perfectly fine and "RESTful" to perform a GET request on the service to, for example, retrieve a list of warning messages (possibly internationalized) and display them. The server will build this list of warnings based on whichever properties apply and return it (as JSON or whatever). Your client will only have to display the parsed result of the request (0-N messages) next to your form.
EDIT
Example REST service for your use case:
@POST
@Path("/v1/employees")
public Response someFormHandlingMethod() {
    // Your form processing
    ...
    if (EmployeeParameters.isInactiveByDefault()) {
        emp.setInactive(true);
    }
    ...
}

@GET
@Path("/v1/employees/formwarnings")
public Response getEmployeeFormWarnings() {
    // "Business logic" -> determine the warning messages to return to the client
    ...
    if (EmployeeParameters.isInactiveByDefault()) {
        warnings.add("Any new employee created will have inactive status");
    }
    ...
    // and return them in the Response (entity as JSON or whatever). Could also be a List<String>, for example.
}
It is just an example so that you get the idea, but the actual implementation will depend on your Jersey configuration, mappers/converters, annotations (such as @Produces), etc.
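For instance, a slightly more complete (still hypothetical) version of the GET endpoint, assuming a standard JAX-RS/Jersey setup with a JSON provider configured (EmployeeParameters being the same placeholder settings holder used above), might look like this:

import java.util.ArrayList;
import java.util.List;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

@Path("/v1/employees")
public class EmployeeResource {

    @GET
    @Path("/formwarnings")
    @Produces(MediaType.APPLICATION_JSON)
    public Response getEmployeeFormWarnings() {
        List<String> warnings = new ArrayList<>();
        // "Business logic": decide which warnings apply to the current configuration
        if (EmployeeParameters.isInactiveByDefault()) {
            warnings.add("Any new employee created will have inactive status");
        }
        // Serialized to a JSON array by the configured JSON provider (e.g. Jackson)
        return Response.ok(warnings).build();
    }
}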
The focus of a REST architecture is on decoupling clients from servers by defining a set of constraints both parties should adhere to. If those constraints are followed stringently, servers can evolve freely in the future without risking breaking clients that also adhere to the REST architecture, and such clients become more robust to change as a consequence. As such, a server should teach clients how certain things need to look (i.e. by providing forms and form controls) and what further actions (i.e. links) the client can perform from the current state it is in.
By asking for pre-POST or pre-PUT operations I guess you already assume an HTTP interaction, which is quite common for so-called "RESTful" systems. Though, as Jim Webber pointed out, HTTP is just a transport protocol whose application domain is the transfer of a document from a source to a target, and any business rules we conclude from that interaction are just a side effect of the actual document management. So we have to come up with ways to trigger certain business activities based on the processing of certain documents. In addition, HTTP does not know anything like pre-POST or pre-PUT operations.
Usually a rule of thumb in distributed computing is to never trust any client input and therefore to validate the input again on the server side. The earliest validation approach had the server provide the client with a Web form that teaches the client the available properties of a resource and gives it a set of input controls to interact with that form (i.e. a send button that, when clicked, marshals the input data into a representation format sent with an HTTP operation the server supports). A client therefore did not need to know any internals of the server, as it was served all the information it needed to make up the next request. This is in essence what HATEOAS is all about.
Once a server received a request on a certain endpoint, it started to process the received document, i.e. the input data, and if certain constraints it put on that resource were violated, it basically sent the same form back to the client, though this time with the client's input data included and additional markup indicating the input error. Through style attributes added to those errors, they usually stood out notably from the general page design (usually red text and/or a box next to the input control producing the violation), which made the problems easier to spot and correct.
With the introduction of JavaScript, some clever people came up with a way to send not only the form to the client but also a script that automatically checks the user input in the background and, if a constraint is violated, dynamically adds the above-mentioned failure markup to the DOM of the current form/page, informing the user of possible problems with the current input. This, however, requires that the representation format, or more formally its media type, supports dynamic content such as scripts and manipulation of the current representation.
Unfortunately, many representation formats exchanged in typical "REST APIs/services" do not support such properties. A plain JSON document, for instance, just defines the basic syntax (objects are key-value containers between curly braces, ...) but does not define semantics for any of the fields it might contain; it does not even know what a URI is. As such, JSON is not a good representation format for a REST architecture to start with. While HAL/JSON and JSON Hyper-Schema attempt to add support for links and rudimentary semantics, they create their own media types and thus representation formats of their own. Other JSON-based media types, such as hal-forms, halo+json (halform) and ion, attempt to add form support. Some of these formats may allow restrictions to be attached to certain form elements that automatically produce a user warning and block the actual document transmission if violated, though most current media types may still lack support for client-side scripts or dynamic content updates.
Such circumstances led Fielding, in one of his famous blog posts, to a statement like this:
A REST API should spend almost all of its descriptive effort in defining the media type(s) used for representing resources and driving application state, or in defining extended relation names and/or hypertext-enabled mark-up for existing standard media types
Usually the question should not be which media type to support but how many different ones your server/client should support, since the more media type representations it can handle, the better it will be able to interact with its peers. Ideally you want to reuse the same client for all interactions, while the same server should be able to serve a multitude of clients, especially ones not under your direct control.
In regard to a "RESTful" solution, I would not recommend using custom headers or proprietary message formats, as they are usually only understood and thus processable by a limited number of clients, which is the opposite of what the REST architecture aims to achieve. Such messages also usually require a priori knowledge of the server's behavior, which may lead to problems if the server ever needs to change.
So, long story short: the safest way to introduce "RESTful" input validation (i.e. validation in accordance with the constraints the REST architecture puts in place) for a resource is to use a form-based representation that teaches a client everything it needs to create an actual request; the server processes that request and, if certain input constraints are violated, returns the form to the client again with a hint about the respective input problem.
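As a purely illustrative sketch (not taken verbatim from any of the formats above), a form-based representation in a HAL-FORMS-like style might look roughly like this; the property names and the top-level warnings field are assumptions, not part of any standard:

{
  "_links": {
    "self": { "href": "/v1/employees" }
  },
  "_templates": {
    "default": {
      "title": "Create employee",
      "method": "POST",
      "contentType": "application/json",
      "properties": [
        { "name": "firstName", "required": true, "prompt": "First name" },
        { "name": "lastName", "required": true, "prompt": "Last name" }
      ]
    }
  },
  "warnings": [
    "Any new employee created will have inactive status"
  ]
}

A client that understands such a media type can render the form, enforce the required fields and show the warnings before anything is POSTed, without hard-coding any server behavior.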
If you want to minimize the interaction between client and server for performance reasons (which are usually not the core problem at hand), you might need to define your own media type that adds script support and dynamic content updates, through means such as DOM manipulation or the like. Such a custom media type, however, should be registered with IANA so that other implementors can add support for it and other clients can interact with your system in the future.

Why use HttpMethod.POST instead of HttpMethod.GET in Google App Engine?

In the Udacity Developing Scalable Apps with Java course, some API methods of the backend application (which conceptually should be HttpMethod.GET) are implemented using HttpMethod.POST.
The following comment is found in the Javadoc:
/**
* Normally this kind of method is supposed to get invoked by a GET HTTP
* method, but we do it with POST, in order to receive conferenceQueryForm
* Object via the POST body.
*/
Does it mean that, even through an HTTPS connection, the JSON form would be sent in plaintext (i.e. not encrypted) if we used HttpMethod.GET?
No. What that means is that this method (whatever it is) is used to retrieve data from the server in response to a request from the client. Normally, when a client is just asking for something, it should use an HTTP GET. HTTP POST requests are intended for sending data to the server.
However, in this case, the client wants to send a (potentially large) object (called conferenceQueryForm) to the server to describe what it wants. That may be too big or cumbersome to do using a GET request, so instead they've used POST.
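As a rough sketch of what such an Endpoints method tends to look like (the class names and the query-building call are assumptions, not the course's exact code):

import java.util.List;

import com.google.api.server.spi.config.Api;
import com.google.api.server.spi.config.ApiMethod;
import com.google.api.server.spi.config.ApiMethod.HttpMethod;

@Api(name = "conference", version = "v1")
public class ConferenceApi {

    // Conceptually a read-only query, but declared as POST so the potentially
    // large ConferenceQueryForm travels in the request body instead of the URL.
    @ApiMethod(
            name = "queryConferences",
            path = "queryConferences",
            httpMethod = HttpMethod.POST)
    public List<Conference> queryConferences(ConferenceQueryForm queryForm) {
        // Build and run the query described by the form (details depend on the course code)
        return queryForm.getQuery().list();
    }
}

Either way, the request and response bodies are encrypted when the call is made over HTTPS; the choice between GET and POST is about where the query data travels, not about encryption.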

Best way to fetch a global variable for a thrift service from a client

I have a client-server architecture set up through Thrift. The service is written in Java, while the client is in PHP. Now, as there may be many clients, I want to introduce something like a unique client id.
The current structure is such that all the clients have the same client code at their end. Hence, the only way for me to determine the client id is to do it when a request is made.
As the service has a lot of exposed functions (>50), I would not like to add the client id as a parameter to all of them (as that would mean a change for ALL the clients).
Is there a clean way to do this: when the client creates the service object for the first time, it sends its id, which then becomes a global object for the service thread and is available to all subsequent calls to the exposed functions? Please guide.
You can use the client IP address as the reference.
"How can I get the client's IP address from the Thrift server?"
It looks like subclassing TServerSocket/TNonBlockingServerSocket (and your chosen server class) will allow you to access the IP address (or hostname) pretty easily right from the Sockets they manage...
Given that you simply want the client id to be available globally, you could simply make it a global. If that doesn't fit the OOP pattern, you could dress it up with a registry pattern, although that's really just sugarcoating.
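A minimal sketch of that registry idea on the Java service side, assuming you add a single registerClient(clientId) method to the Thrift service (the method and service names here are assumptions) that each client calls once after connecting; with a thread-per-connection server such as TThreadPoolServer, a ThreadLocal then acts as an effectively per-client "global":

// Registry holding the id of the client served by the current thread.
// With a thread-per-connection server, each client connection gets its own
// handler thread, so a ThreadLocal behaves like a per-client global.
public final class ClientRegistry {

    private static final ThreadLocal<String> CLIENT_ID = new ThreadLocal<String>();

    private ClientRegistry() {}

    public static void register(String clientId) {
        CLIENT_ID.set(clientId);
    }

    public static String currentClientId() {
        return CLIENT_ID.get();
    }

    public static void clear() {
        CLIENT_ID.remove();
    }
}

// In the Thrift handler (MyService and the other method names are illustrative):
public class MyServiceHandler implements MyService.Iface {

    public void registerClient(String clientId) {
        ClientRegistry.register(clientId);                    // called once by the client after connecting
    }

    public String someExposedFunction() {
        String clientId = ClientRegistry.currentClientId();   // no extra parameter needed
        // ... existing business logic ...
        return "handled for client " + clientId;
    }
}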

Java Client/Server application

I have a multithreaded server waiting for socket connections.
The first exchange of messages is always of the same type: the client sends an object with the authentication details (userid/pwd), the server checks it and replies to the client whether the authentication passed or not.
After this first exchange of messages, the client will send some requests corresponding to the various tasks the server is able to execute.
How do I model those heterogeneous requests? In particular, my question concerns the type of object sent between client and server with ObjectInputStream/ObjectOutputStream.
I had 2 ideas:
Using a "generic message" object with 2 attributes: a task identifier and a HashMap without generics, able to carry the various types of parameters required to execute the task.
An object for every type of task. This solution is "cleaner", but I don't know how to make the server understand the type of the message received. I thought about casting the received message to every possible "specific task message" and ignoring the many ClassCastExceptions. That just sounds bad; is there any way to avoid it?
Why not combine the two ideas?
Start with a common top-level interface that the server can cast to, to determine what it should do or how to react.
As the object is passed off, the handler responsible for handling the request can further cast it (based on a deeper level of interface implementation).
IMHO
The first approach is very generic but will be hard to maintain. After a while you'll notice that you no longer remember what kind of objects should be in this generic map. You'll have to keep the dictionary in sync.
The second approach is much better. Essentially you receive an abstract Request object with various subclasses. The base class can hold some general information. Normally you would use polymorphism and implement the action in each subclass, overriding an abstract method from the Request class. But you can't here, because the request objects would then have to hold server-side logic.
The best you can do here is the visitor design pattern. With it, for the price of slightly obscuring your code, you get a very generic and safe design; instanceof tends to get ugly after a while.
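A minimal sketch of the visitor idea (the concrete request types are illustrative): the server implements the visitor, so the request classes stay free of server-side logic and no instanceof checks or blind casts are needed.

import java.io.Serializable;

interface RequestVisitor {
    void visit(LoginRequest request);
    void visit(TaskRequest request);
}

abstract class Request implements Serializable {
    // general information shared by all requests (timestamps, session id, ...) can live here
    public abstract void accept(RequestVisitor visitor);
}

class LoginRequest extends Request {
    String userId;
    String password;
    public void accept(RequestVisitor visitor) { visitor.visit(this); }
}

class TaskRequest extends Request {
    String taskName;
    public void accept(RequestVisitor visitor) { visitor.visit(this); }
}

// Server side: read the object and dispatch without any casting beyond the base type.
// Request request = (Request) objectInputStream.readObject();
// request.accept(serverSideVisitor);   // serverSideVisitor implements RequestVisitor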
What you could do is use XML messages for the communication. You could prepend a few bytes indicating which XML object the message should be mapped to; on receiving a message, just check those bytes to find the indicator and unmarshal the rest of the byte sequence into the corresponding XML object (using JAXB, Simple XML, DOM or any other XML parser). XML is very verbose, and you could use it here to encapsulate your messages.
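A rough sketch of that prefix-byte idea using the JAXB convenience API (the type codes and message classes are illustrative assumptions):

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import javax.xml.bind.JAXB;

public final class XmlWire {

    public static final byte TYPE_LOGIN = 1;
    public static final byte TYPE_TASK = 2;

    // Client side: write the type indicator, a length prefix, then the XML payload.
    public static void write(DataOutputStream out, byte type, Object message) throws Exception {
        ByteArrayOutputStream xml = new ByteArrayOutputStream();
        JAXB.marshal(message, xml);               // message would be a simple JAXB-annotated bean
        byte[] bytes = xml.toByteArray();
        out.writeByte(type);
        out.writeInt(bytes.length);               // so the reader knows where the XML ends
        out.write(bytes);
        out.flush();
    }

    // Server side: read the indicator, then unmarshal the rest into the matching class.
    public static Object read(DataInputStream in) throws Exception {
        byte type = in.readByte();
        byte[] bytes = new byte[in.readInt()];
        in.readFully(bytes);
        Class<?> target = (type == TYPE_LOGIN) ? LoginMessage.class : TaskMessage.class;
        return JAXB.unmarshal(new ByteArrayInputStream(bytes), target);
    }
}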

Way to design app that uses http requests

In my app I have, for example, 3 logical blocks, created by user in such order:
FirstBlock -> SecondBlock -> ThirdBlock
There is no class inheritance between them (none of them extends another), but a logical hierarchy exists (for example, Image contains Area, which contains Message). Sorry, I'm not strong on terminology; I hope you'll understand me.
Each of the blocks sends requests to the server (to create information about it on the server side) and then handles the responses independently (but using the same implementation of the HTTP client), just like in the image below (red lines are responses, black ones are requests).
http://s2.ipicture.ru/uploads/20120121/z56Sr62E.png
Question
Is this a good model? Or is it better to create some controller class that sends the requests on its own, then handles the responses and redirects the results to my blocks? Or should the implementation of the HTTP client be the controller itself?
P.S. If I forgot to provide some information, please tell me. Also, if there are errors in my English, please edit the question.
Here's why I would go with a separate controller class to handle the HTTP requests and responses:
Reduce code duplication (do you really need three separate HTTP implementations?)
If/when the communication protocol between your app and the server changes, you have to rewrite all your classes. Say, for example, you add another field to your response payload and your app isn't built to handle it: you now have to rewrite FirstBlock, SecondBlock, and ThirdBlock. Not ideal.
Modify your implementation of the HTTP client into a controller class (a minimal sketch follows the list below) such that:
All HTTP requests/responses go through it
It is responsible for routing the responses to the appropriate class.
Advantages?
If/when you change the communication protocol, all the relevant code is in this controller class and you don't have to touch FirstBlock, SecondBlock, or ThirdBlock
Debugging your HTTP requests!
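A minimal sketch of what such a controller could look like using plain HttpURLConnection (all names are illustrative; each block registers a handler instead of talking to the HTTP client itself):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class RequestController {

    // Each block implements this to receive its own responses.
    public interface ResponseHandler {
        void onResponse(String body);
    }

    // Every request from every block goes through this single method, so the
    // protocol details (method, encoding, error handling) live in one place.
    public void send(String url, String payload, ResponseHandler handler) throws Exception {
        HttpURLConnection connection = (HttpURLConnection) new URL(url).openConnection();
        connection.setRequestMethod("POST");
        connection.setDoOutput(true);
        try (OutputStream out = connection.getOutputStream()) {
            out.write(payload.getBytes(StandardCharsets.UTF_8));
        }

        StringBuilder body = new StringBuilder();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(connection.getInputStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = in.readLine()) != null) {
                body.append(line);
            }
        }
        handler.onResponse(body.toString());   // route the result back to the calling block
    }
}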
I would suggest that your 3 blocks not deal with HttpClient directly. They should each deal with some interface that handles the remote connection, the sending of the request and the processing of the results. For example:
public interface FirstBlockConnector {
    SomeResultObject askForSomeResult(SomeRequestObject request);
}
The details of the HTTP request and response then live in the connector implementations. You may find that you only need one connector that implements all 3 RPC interfaces. Once you separate out the RPC mechanism, you can factor the common code into the implementations that actually deal with the HttpClient object. You can also swap HTTP for another RPC mechanism without changing your block code.
In terms of controllers, I think of them as a web-server-side term rather than a client-side one, but maybe you meant a connector like the above.
Hope this helps.
