Java Client/Server application

I have a multithreaded server waiting for socket connections.
The first exchange of messages is always of the same type: the client sends an object with the authentication details (userid/password), the server checks it and replies to the client indicating whether authentication succeeded.
After this first exchange, the client will send requests corresponding to the various tasks the server is able to execute.
How do I model these heterogeneous requests? In particular, my question concerns the type of object sent between client and server via ObjectInputStream/ObjectOutputStream.
I had 2 ideas:
Using a "generic message" object with two attributes: a task identifier and a raw (non-generic) HashMap able to carry the various parameters required to execute the task.
An object for every type of task. This solution is cleaner, but I don't know how to make the server recognize the type of the message it receives. I thought about casting the received message to every possible "specific task message" in turn and ignoring the many ClassCastExceptions, but that just sounds bad. Is there any way to avoid it?

Why not combine the two ideas?
Start with a common top-level interface that the server can cast to in order to determine what it should do or how to react.
As the object is passed off to the handler responsible for the request, that handler can cast the object further (based on a deeper level of interface implementation).
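A minimal sketch of that layered-interface idea (all class and method names here are illustrative, not from the question):

```java
import java.io.Serializable;

// Top-level interface: the first thing the server casts to.
interface Request extends Serializable {
}

// Deeper interfaces group related tasks; the matching handler casts further.
interface AuthRequest extends Request {
    String userId();
}

interface TaskRequest extends Request {
    String taskName();
}

record Login(String userId, String password) implements AuthRequest { }

record RunReport(String taskName) implements TaskRequest { }

class Dispatcher {
    // First-level dispatch: route the deserialized object to the right handler.
    static String dispatch(Request r) {
        if (r instanceof AuthRequest auth) {
            return "auth:" + auth.userId();   // hand off to the auth handler
        }
        if (r instanceof TaskRequest task) {
            return "task:" + task.taskName(); // hand off to the task handler
        }
        return "unknown";
    }
}
```

The server only needs one `readObject()` call followed by a cast to `Request`; the per-task casts happen inside the handlers that actually know the deeper interfaces.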
IMHO

The first approach is very generic but will be hard to maintain. After a while you'll no longer remember what kinds of objects belong in the generic map, and you'll have to keep a dictionary in sync.
The second approach is much better. Essentially you receive an abstract Request object with various subclasses. The base class can hold some general information. Normally you would use polymorphism and implement the action in each subclass by overriding an abstract method of Request, but here you can't, because the request object would then have to hold server-side logic.
The best you can do here is the visitor design pattern. For the price of slightly obscuring your code, it gives you a very generic and type-safe design; instanceof chains tend to get ugly over time.
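A sketch of the visitor approach described above (names are illustrative): the request classes stay dumb, serializable data carriers, while all server-side behaviour lives in a single visitor implementation that exists only on the server.

```java
import java.io.Serializable;

// One visit method per concrete request type.
interface RequestVisitor<T> {
    T visit(LoginRequest r);
    T visit(ReportRequest r);
}

abstract class Request implements Serializable {
    abstract <T> T accept(RequestVisitor<T> v);
}

class LoginRequest extends Request {
    final String userId;
    LoginRequest(String userId) { this.userId = userId; }
    @Override
    <T> T accept(RequestVisitor<T> v) { return v.visit(this); }
}

class ReportRequest extends Request {
    final String reportName;
    ReportRequest(String reportName) { this.reportName = reportName; }
    @Override
    <T> T accept(RequestVisitor<T> v) { return v.visit(this); }
}

// Server-side only: double dispatch picks the right overload at compile time,
// with no instanceof chains and no ClassCastExceptions.
class ServerHandler implements RequestVisitor<String> {
    @Override
    public String visit(LoginRequest r) { return "logging in " + r.userId; }
    @Override
    public String visit(ReportRequest r) { return "running " + r.reportName; }
}
```

Adding a new request type forces a compile error in every visitor until a matching `visit` overload is written, which is exactly the safety the map-based approach lacks.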

What you could do is use XML messages for the communication. You could prepend, in the first bytes, an indicator of which XML type the message should be mapped to; on reception, check those bytes to find the indicator and unmarshal the remaining byte sequence into the corresponding object (using JAXB, Simple XML, DOM, or any other XML parser). XML is verbose, and you can use that verbosity here to encapsulate your messages.
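A rough sketch of that indicator-byte framing, assuming a single type byte followed by the XML payload (the type constants and labels are invented for illustration; the JDK's built-in DOM parser stands in for whichever XML binding you pick):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.Arrays;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

class XmlFraming {
    static final byte TYPE_LOGIN = 1;
    static final byte TYPE_TASK  = 2;

    // Producer side: first byte says which XML type follows.
    static byte[] frame(byte type, String xml) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        out.write(type);
        out.write(xml.getBytes("UTF-8"));
        return out.toByteArray();
    }

    // Consumer side: read the indicator, then parse the rest as XML.
    // A real server would map the indicator to the matching unmarshaller.
    static String parse(byte[] frame) throws Exception {
        byte type = frame[0];
        byte[] payload = Arrays.copyOfRange(frame, 1, frame.length);
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(payload));
        String root = doc.getDocumentElement().getTagName();
        return (type == TYPE_LOGIN ? "login:" : "task:") + root;
    }
}
```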

Related

Restful way of sending Pre-Post or Pre-Put warning message

I am trying to convert a legacy application to RESTful web services. One of our old forms displayed a warning message immediately on form load. This warning message depends on a user property.
For example, there is a property isInactiveByDefault which, when set to "true", will set the status of a newly created employee (via POST v1/employees) to "Inactive". On loading the "Employee" form, the user will see the warning message "Any new employee created will have inactive status".
I originally thought of providing a resource to get the value of the property and letting the client decide whether to display the warning based on it. But my manager wants to avoid any business logic on the client side; as per him, this is business logic and should be handled by the server.
I am wondering what the RESTful way of sending this warning message is. I could send the message with the POST response, but that would mean the client receives it after the action.
I think that delegating the choice of whether to display the message to the client is at the very frontier of what one may call "business logic". I believe there is room for debate here.
But leaving that point aside: if we consider the message(s) to display as data, then as soon as your client can make REST calls (via JS, AJAX or whatever), you should be able to query the server before or while the form loads (and wait/sync on that).
So it is perfectly fine and "RESTful" to perform a GET request on the service to, e.g., retrieve a list of warning messages (possibly internationalized) and display them. The server prepares this list of warnings based on whichever properties apply and returns it (as JSON or whatever). Your client only has to display the parsed result of the request (0-N messages) next to your form.
EDIT
Example REST service for your use case:
@POST
@Path("/v1/employees")
public Response someFormHandlingMethod() {
    // Your form processing
    ...
    if (EmployeeParameters.isInactiveByDefault()) {
        emp.setInactive(true);
    }
    ...
}

@GET
@Path("/v1/employees/formwarnings")
public Response getEmployeeFormWarnings() {
    // "Business logic" -> determine the warning messages to return to the client
    ...
    if (EmployeeParameters.isInactiveByDefault()) {
        warnings.add("Any new employee created will have inactive status");
    }
    ...
    // ... and return them in the Response (entity as JSON or whatever).
    // It could also be a List<String>, for example.
}
It is just an example so that you get the idea, but the actual implementation will depend on your Jersey configuration, mappers/converters, annotations (such as @Produces), etc.
The focus of a REST architecture is on decoupling clients from servers by defining a set of constraints both parties should adhere to. If those constraints are followed stringently, they allow servers to evolve freely in the future without risking breaking clients that also adhere to the REST architecture, while such clients in turn become more robust to change. As such, a server should teach clients how certain things need to look (e.g. by providing forms and form controls) and which further actions (i.e. links) the client can perform from the current state it is in.
By asking for Pre-Post or Pre-Put operations I guess you are already assuming an HTTP interaction, which is quite common for so-called "RESTful" systems. Though, as Jim Webber pointed out, HTTP is just a transport protocol whose application domain is the transfer of documents from a source to a target, and any business rules we conclude from that interaction are just a side effect of the actual document management. So we have to come up with ways to trigger certain business activities based on the processing of certain documents. In addition, HTTP knows nothing of pre-POST or pre-PUT operations.
A usual rule of thumb in distributed computing is to never trust any client input and therefore to validate it again on the server side. The Web's original validation solution had the server provide the client with a form that teaches it the available properties of a resource and gives it a set of input controls (e.g. a send button that, when clicked, marshals the input data into a representation format sent via an HTTP operation the server supports). The client therefore did not need to know any server internals, as it was served all the information it needed to make up the next request; this is in essence what HATEOAS is all about. Once the server received a request on a certain endpoint, it processed the received document (the input data), and if any constraints it put on that resource were violated it basically sent back the same form as before, this time with the client's input data included and additional markup indicating the input error. Style attributes added to those errors made them stand out from the general page design (usually red text and/or a box next to the input control producing the violation), allowing the problems to be spotted easily.
With the introduction of JavaScript, some clever people came up with a way to send not only the form to the client but also a script that automatically checks the user input in the background and, on violation, dynamically adds the above-mentioned failure markup to the DOM of the current form/page, informing the user of problems with the current input. This, however, requires that the representation format, or more formally its media type, supports dynamic content such as scripts and manipulation of the current representation.
Unfortunately, many representation formats exchanged in typical "REST APIs/services" do not support such properties. A plain JSON document, for example, just defines the basic syntax (objects are key-value containers between curly braces, ...) but no semantics for any of the fields it might contain; it does not even know what a URI is. As such, JSON is not a good representation format for a REST architecture to start with. While HAL/JSON and JSON Hyper-Schema attempt to add support for links and rudimentary semantics, they still create their own media types, and thus representation formats, all the same. Other JSON-based media types, such as hal-forms, halo+json (halform) and ion, attempt to add form support. Some of these formats allow adding restrictions to certain form elements that automatically produce a user warning and block the actual document transmission when violated, though most current media types may still lack support for client-side scripts or dynamic content updates.
Such circumstances led Fielding, in one of his famous blog posts, to a statement like this:
A REST API should spend almost all of its descriptive effort in defining the media type(s) used for representing resources and driving application state, or in defining extended relation names and/or hypertext-enabled mark-up for existing standard media types
Usually the question should not be which media type to support but how many different ones your server/client should support, as the more media-type representations it can handle, the better it will be able to interact with its peers. Ideally you want to reuse the same client for all interactions, while the same server should be able to serve a multitude of clients, especially ones not under your direct control.
In regard to a "RESTful" solution, I would not recommend using custom headers or proprietary message formats, as they are usually understood, and thus processable, by only a limited number of clients, which is the opposite of what the REST architecture aims to achieve. Such messages usually also require a priori knowledge of the server's behavior, which may lead to problems if the server ever needs to change.
So, long story short: the safest way to introduce "RESTful" input validation (i.e. one in accordance with the constraints the REST architecture puts in place) to a resource is a form-based representation that teaches a client everything it needs to create the actual request. The server processes the request and, if certain input constraints are violated, returns the form to the client again with a hint about the respective input problem.
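As a rough illustration of what such a form-based representation could carry for the employee use case (the structure and field names below are purely illustrative, not any registered media type):

```json
{
  "form": {
    "target": "/v1/employees",
    "method": "POST",
    "fields": [
      { "name": "firstName", "type": "string", "required": true },
      {
        "name": "status",
        "type": "string",
        "readOnly": true,
        "value": "Inactive",
        "warning": "Any new employee created will have inactive status"
      }
    ]
  }
}
```

A client that understands this (hypothetical) format can render the warning before submission without hard-coding any business rule itself.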
If you want to minimize the interaction between client and server for performance reasons (which are usually not the core problem at hand), you may need to define your own media type that adds script support and dynamic content updates through means similar to DOM manipulation. Such a custom media type, however, should be registered with IANA so that other implementors can add support for it, allowing other clients to interact with your system in the future.

RabbitMQ message signing

I want to use RabbitMQ to communicate between multiple applications which are deployed on different networks and maintained by different people. As the receiver of a message (consumer), I want to be convinced that the sender of the message (producer) is who he claims to be. The best approach I can think of is message signing and verification of those signatures. As this is my first time doing anything with RabbitMQ, I am stuck on how to implement this.
Message senders and receivers are Java applications. I've decided to use the Spring AMQP template to make things somewhat easier. In a perfect scenario I would like to intercept the message once it's already a byte array/stream, sign this blob and attach the signature as a message header. On the receiving end I would again intercept the message before it's deserialized, verify the signature from the header against the blob and, if everything is OK, deserialize it. But I haven't found any means of doing this in Spring-Rabbit.
There is a concept of a MessagePostProcessor in Spring-Rabbit, but when it is invoked the message is still not fully serialized. This feels like a common problem that someone must have solved already, but my research has left me empty-handed.
Currently I am using AmqpTemplate.convertAndSend for message sending and @RabbitListener for message receiving, but I am not stuck with Spring; I can use whatever I like, it just seemed like an easy way to get going. I am using Jackson for message serialization to/from JSON. The problem is how to intercept sending and receiving in the right place.
My backup plan is to put both the data and the signature in the body and join them in a wrapper object, but this would mean double serialization and is not as clean as I would like the solution to be.
So, has anyone got experience with this and can perhaps advise me on how to approach the problem?
There is a concept of MessagePostProcessor in Spring-Rabbit, but when this is invoked, the message is still not fully serialized.
I am not sure what you mean by that; the MessagePostProcessor is exactly what you need: the body is the byte[] that will be sent to RabbitMQ. You can use an overloaded convertAndSend method that takes an MPP, or add your MPP to the template (in the beforeSendMessagePostProcessors).
On the receiving side, the listener container factory can be configured with afterReceiveMessagePostProcessors. Again; the body is the byte[] received from RabbitMQ.
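The crypto that those post-processors would call might look like the sketch below. This is only the signing/verification logic, using an HMAC over the serialized body (the class name and header name are invented; the Spring wiring via beforeSendMessagePostProcessors / afterReceiveMessagePostProcessors is as described above and is not shown):

```java
import java.security.MessageDigest;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

class BodySigner {
    private final SecretKeySpec key;

    BodySigner(byte[] secret) {
        this.key = new SecretKeySpec(secret, "HmacSHA256");
    }

    // Producer side: value to put in e.g. a "signature" message header,
    // computed over the exact byte[] body handed to the post-processor.
    String sign(byte[] body) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(key);
        return Base64.getEncoder().encodeToString(mac.doFinal(body));
    }

    // Consumer side: verify the header against the received byte[] body
    // before handing the message on to the deserializer.
    boolean verify(byte[] body, String signatureHeader) throws Exception {
        byte[] expected = Base64.getDecoder().decode(sign(body));
        byte[] actual = Base64.getDecoder().decode(signatureHeader);
        return MessageDigest.isEqual(expected, actual); // constant-time compare
    }
}
```

Note that an HMAC only proves possession of a shared secret; if the different organizations must not share a key, a java.security.Signature with per-sender key pairs would be the analogous asymmetric approach.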

Best approach to stream multiple types with GRPC

I have a server that passes messages to a client. The messages are of different types, and the server has generic handleMessage and passMessage methods for the clients.
Now I intend to adapt this to use gRPC. I know I could expose all methods of the server by defining services in my .proto file. But is there also a way to stream heterogeneous types with one RPC call using gRPC?
There is oneof, which allows me to define a message that has only one of its fields set. I could have a MessageContainer with a oneof covering all my message types. The container then holds exactly one of the types, and I would only need to write one
service MessageService {
  rpc messageHandler(ClientInfo) returns (stream MessageContainer);
}
This way, the server could stream multiple types to the client through a single interface. Does this make sense, or is it better to expose all methods individually?
UPDATE
I found this thread which argues oneof would be the way to go. I'd like that, obviously, as it avoids having to create potentially dozens of services and stubs. It would also help ensure a FIFO setup, instead of multiplexing several streams and not being sure which message came first. But it feels dirty for some reason.
Yes, this makes sense (and what you are calling MessageContainer is best understood as a sum type).
... but it is still better to define different methods when you can ("better" here means "more idiomatic, more readable by future maintainers of your system, and better able to be changed in the future when method semantics need to change").
The question of whether to express your service as a single RPC method returning a sum type or as multiple RPC methods comes down to whether or not the particular addend type that will be used can be known at RPC invocation time. Is it the case that when you set request.my_type_determining_field to 5 that the stream transmitted by the server always consists of MessageContainer messages that have their oneof set to a MyFifthKindOfParticularMessage instance? If so then you should probably just write a separate RPC method that returns a stream of MyFifthKindOfParticularMessage messages. If, however, it is the case that at RPC invocation time you don't know with certainty what the used addend types of the messages transmitted from the server will be (and "messages with different addend types in the same stream" is a sub-use-case of this), then I don't think it's possible for your service to be factored into different RPCs and the right thing for you to do is have one RPC method that returns a stream of a sum type.
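Concretely, the sum-type container discussed above could be sketched like this (message and field names are illustrative):

```proto
syntax = "proto3";

message ClientInfo {
  string client_id = 1;
}

message StatusUpdate { string state = 1; }
message ChatMessage  { string text = 1; }
message Heartbeat    { int64 timestamp = 1; }

// Sum type: exactly one of the oneof fields is set per streamed message,
// and the receiver switches on the payload_case to dispatch.
message MessageContainer {
  oneof payload {
    StatusUpdate status = 1;
    ChatMessage chat = 2;
    Heartbeat heartbeat = 3;
  }
}

service MessageService {
  rpc messageHandler(ClientInfo) returns (stream MessageContainer);
}
```

Adding a new addend type later is a backward-compatible change: old clients simply see the oneof as not set for cases they don't know.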

Multiple clients for multiple purposes in TCP socket?

After seeing some examples in my class, I know that if I want to send a "TypeA" object to the server and receive a "ProcessedA" object as a result, I only need one client class.
But if I want to send "TypeA", "TypeB", and "TypeC" objects (not at the same time) to the server, do I need to make three different client classes, each of which sends objects of one of those types, or can I make one client class with three different "send" methods?
You can have only one method if the objects you send inherit from a single common class or interface, and the same logic applies to the result class.
It's a bit hard to understand what you need without a sample of the code you are trying (as was noted in another post related to your request).
If your case meets this condition, you can use the instanceof operator inside the server method to detect the type of the received object and cast it to the known subtype, and apply the same logic to process the response in the client.
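A sketch of that single-send-method idea (types and labels are illustrative; byte arrays stand in for the socket streams so the round trip is self-contained):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Common supertype for everything the client can send.
interface Payload extends Serializable { }

record TypeA(int a) implements Payload { }
record TypeB(String b) implements Payload { }
record TypeC(double c) implements Payload { }

class Client {
    // One send method covers all three types because they share a supertype.
    static byte[] send(Payload p) throws IOException {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(buffer)) {
            out.writeObject(p); // over a real socket: socket.getOutputStream()
        }
        return buffer.toByteArray();
    }
}

class Receiver {
    // The server reads a Payload and branches with instanceof.
    static String receive(byte[] bytes) throws Exception {
        try (ObjectInputStream in =
                 new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            Payload p = (Payload) in.readObject();
            if (p instanceof TypeA a) return "A=" + a.a();
            if (p instanceof TypeB b) return "B=" + b.b();
            if (p instanceof TypeC c) return "C=" + c.c();
            return "unknown";
        }
    }
}
```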

Spring REST representation class

I'm reading the two introductory articles about building and consuming Spring REST web services.
What's weird: they create a Greeting representation class in the client app (second link ref) for storing the GET response (the greeting method on the server side returns a Greeting object). But the Greeting classes on the server and client side are different classes: two distinct classes with identical names and identical field names and types (the client's doesn't have a constructor).
Does it mean I have to similarly rewrite the class from scratch when building the client app? In order to do that, I'd need a spec of the fields' types in the JSON-packed objects the server passes. The server serializes an object of class ABCClass to JSON and sends it to the client. Even if some field called 'abc' has the value 10, that doesn't make it an integer; next time it might contain a string.
My question is: how much information from the server app's devs do I need in order to create a client application? How is it usually done?
It all depends on your deserializer and on your needs. With Jackson, for example, you might use mix-ins (wiki ref) and custom deserializers (wiki ref) that build your object with your required field names and your structure.
Having the same field names and structure is just the simplest way, not the only one.
In any case, you do need to know the structure of the server's reply in order to deserialize it.
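The simplest-way case is small enough to show in full. This sketch assumes Jackson on the classpath and mirrors the Greeting shape used in the Spring guides (a numeric id and a string content); the client class carries no logic, just fields matching the JSON:

```java
import com.fasterxml.jackson.databind.ObjectMapper;

// Client-side mirror of the server's Greeting: same field names, no behaviour.
// A default constructor plus getters/setters is all Jackson needs.
public class Greeting {
    private long id;
    private String content;

    public long getId() { return id; }
    public void setId(long id) { this.id = id; }
    public String getContent() { return content; }
    public void setContent(String content) { this.content = content; }

    public static void main(String[] args) throws Exception {
        // The JSON the server actually sends is the de facto "spec".
        String json = "{\"id\": 1, \"content\": \"Hello, World!\"}";
        Greeting g = new ObjectMapper().readValue(json, Greeting.class);
        System.out.println(g.getId() + " " + g.getContent());
    }
}
```

If the server changed a field's type (the 'abc' holding 10 one day and a string the next, as in the question), this deserialization would fail at runtime, which is why the reply structure has to be agreed on, or documented, between both sides.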
