Desktop program communicating with server - Java

I am in the process of moving the business logic of my Swing program onto the server.
What would be the most efficient way to communicate client-server and server-client?
The server will be responsible for authentication and for fetching and storing data, so the program will have to communicate with it frequently.

It depends on a lot of things. If you want a real answer, you should clarify exactly what your program will be doing and exactly what falls under your definition of "efficient".
If rapid productivity falls under that definition, a method I have used in the past involves serialization to send plain old Java objects down a socket. Recently I have found that, in combination with the Netty API, I can rapidly prototype fairly robust client/server communication.
The guts are fairly simple: the client and server both run Netty with an ObjectDecoder and an ObjectEncoder in the pipeline, and a class is made for each kind of message, for example a HandshakeRequest class and a HandshakeResponse class.
A handshake request could look like:
public class HandshakeRequest extends Message {
    private static final long serialVersionUID = 1L;
}
And a handshake response may look like:
public class HandshakeResponse extends Message {
    private static final long serialVersionUID = 1L;

    private final HandshakeResult handshakeResult;

    public HandshakeResponse(HandshakeResult handshakeResult) {
        this.handshakeResult = handshakeResult;
    }

    public HandshakeResult getHandshakeResult() {
        return handshakeResult;
    }
}
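These classes extend a common Message base type, which isn't shown in the original; a reasonable assumption is that it simply implements Serializable so the ObjectEncoder/ObjectDecoder pair can (de)serialize it. Here is a sketch of that base class and the pipeline wiring, against the Netty 3 API that the callbacks below use (MessageChannelHandler is a made-up name for the handler whose methods follow):
import java.io.Serializable;

import org.jboss.netty.channel.ChannelPipeline;
import org.jboss.netty.channel.ChannelPipelineFactory;
import org.jboss.netty.channel.Channels;
import org.jboss.netty.handler.codec.serialization.ClassResolvers;
import org.jboss.netty.handler.codec.serialization.ObjectDecoder;
import org.jboss.netty.handler.codec.serialization.ObjectEncoder;

// Assumed base class for all messages: Serializable is what lets
// ObjectEncoder/ObjectDecoder move these objects across the wire.
public abstract class Message implements Serializable {
    private static final long serialVersionUID = 1L;
}

// Client and server bootstraps both install the same kind of pipeline:
public class MessagePipelineFactory implements ChannelPipelineFactory {
    public ChannelPipeline getPipeline() {
        return Channels.pipeline(
                new ObjectDecoder(ClassResolvers.cacheDisabled(null)), // inbound: bytes -> Message
                new ObjectEncoder(),                                   // outbound: Message -> bytes
                new MessageChannelHandler());                          // hypothetical handler, shown below
    }
}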
In Netty, the server would send a handshake request when a client connects, like so:
@Override
public void channelConnected(ChannelHandlerContext ctx, ChannelStateEvent e) {
    Channel ch = e.getChannel();
    ch.write(new HandshakeRequest());
}
The client receives the HandshakeRequest object, but it needs a way to tell what kind of message the server just sent. For this, a Map<Class<?>, Method> can be used. When your program starts, it should iterate through the methods of a class with reflection and place them in the map. Here is an example:
public HashMap<Class<?>, Method> populateMessageHandler() {
    HashMap<Class<?>, Method> temp = new HashMap<Class<?>, Method>();
    for (Method method : getClass().getMethods()) {
        if (method.getAnnotation(MessageHandler.class) != null) {
            Class<?>[] methodParameters = method.getParameterTypes();
            // index 1 is the message type; index 0 is the ChannelHandlerContext
            temp.put(methodParameters[1], method);
        }
    }
    return temp;
}
This code iterates through the current class and looks for methods marked with the @MessageHandler annotation, then takes the method's second parameter (the message type, in a handler such as public void handleHandshakeRequest(ChannelHandlerContext ctx, HandshakeRequest request), matching the invoke call below) and places that class into the map as a key, with the actual method as its value.
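The @MessageHandler annotation itself isn't shown in the original; a minimal definition, assuming it does nothing but mark methods for the reflective scan, would be:
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Retention must be RUNTIME, otherwise getAnnotation(MessageHandler.class)
// in populateMessageHandler() would always return null.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface MessageHandler {
}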
With this map in place, it is very easy to receive a message and dispatch it directly to the method that should handle it:
@Override
public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
    try {
        Message message = (Message) e.getMessage();
        Method method = messageHandlers.get(message.getClass());
        if (method == null) {
            System.out.println("No handler for message!");
        } else {
            method.invoke(this, ctx, message);
        }
    } catch (Exception exception) {
        exception.printStackTrace();
    }
}
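For completeness, a handler method that this reflective dispatch would discover might look like the sketch below. The two-parameter signature is required because the map stores the second parameter type and invoke passes ctx plus the message; HandshakeResult.SUCCESS is a made-up constant for illustration.
@MessageHandler
public void handleHandshakeRequest(ChannelHandlerContext ctx, HandshakeRequest request) {
    // respond to the server's handshake; SUCCESS is illustrative only
    ctx.getChannel().write(new HandshakeResponse(HandshakeResult.SUCCESS));
}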
There's not really anything left to it. Netty handles all of the messy stuff, allowing us to send serialized objects back and forth with ease. If you decide that you do not want to use Netty, you can wrap your own protocol around Java's ObjectOutputStream. You will have to do a bit more work overall, but the simplicity of the communication remains intact.
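As a rough illustration of that fallback, a client round-trip over a plain socket could look like this sketch (the host and port are made up, and the Message classes are the ones defined above):
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.net.Socket;

public class PlainSocketClient {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("localhost", 4444); // host/port are illustrative
             ObjectOutputStream out = new ObjectOutputStream(socket.getOutputStream());
             ObjectInputStream in = new ObjectInputStream(socket.getInputStream())) {
            out.writeObject(new HandshakeRequest());   // reuse the same message classes
            Message reply = (Message) in.readObject(); // blocks until the server answers
        }
    }
}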

It's a bit hard to say which method is "most efficient" in terms of what, and I don't know your use cases, but here are a couple of options:
The most basic way is to simply use "raw" TCP sockets. The upside is that nothing extra moves across the network and you create the protocol yourself; the latter is also a downside, since you have to design and implement your own protocol for the communication, plus the basic framework for handling multiple connections on the server end (if there is a need for such).
Using UDP sockets, you'll probably save a little latency and bandwidth (not much; unless you're using something like mobile data, you probably won't notice any difference from TCP in terms of latency), but the networking code is a harder task: UDP sockets are "connectionless", meaning all the clients' messages end up in the same handler and must be distinguished from one another. If the server needs to keep track of client state, this can be somewhat troublesome to implement correctly.
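To make the "connectionless" point concrete, here is a minimal sketch of a UDP receive loop; the port is illustrative. Every client's packets arrive at the same socket, so the source address is the only way to tell clients apart:
import java.net.DatagramPacket;
import java.net.DatagramSocket;

DatagramSocket socket = new DatagramSocket(4445); // illustrative port
byte[] buffer = new byte[1024];
while (true) {
    DatagramPacket packet = new DatagramPacket(buffer, buffer.length);
    socket.receive(packet); // blocks; packets from all clients land here
    // packet.getSocketAddress() identifies the sender; use it to look up per-client state
}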
MadProgrammer brought up RMI (Remote Method Invocation). I've personally never used it, and it seems a bit cumbersome to set up, but it might pay off in the long run in terms of implementation effort.
Probably one of the most common ways is to use HTTP for the communication, for example via a REST interface for web services. There are multiple frameworks (I personally prefer Spring MVC) to help with the implementation, but learning a new framework might be out of scope for now. Also, complex HTTP queries or long URLs can eat a bit more of your bandwidth, but unless we're talking about very large numbers of simultaneous clients this usually isn't a problem (assuming you run your server(s) in a datacenter with something like 100/100 Mbit connections). This is probably the easiest solution to scale if it ever comes to that, as there are lots of load-balancing solutions available for web servers.
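To give a feel for this option, a minimal Spring MVC endpoint for the fetch-data case might look like the sketch below; the Item type, the ItemRepository interface, and the path are all made-up names for illustration.
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class DataController {
    private final ItemRepository itemRepository; // hypothetical data-access interface

    public DataController(ItemRepository itemRepository) {
        this.itemRepository = itemRepository;
    }

    @GetMapping("/api/items/{id}")
    public Item getItem(@PathVariable long id) {
        return itemRepository.findById(id); // hypothetical lookup call
    }
}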

Related

Manage sagas between a microservice that uses Axon and one that doesn't?

I'm working on a project where there are, for the sake of this question, two microservices:
A new OrderService (Spring Boot)
A "legacy" Invoice Service (Jersey Web Application)
Additionally, there is a RabbitMQ message broker.
In the OrderService, we've used the Axon framework for event-sourcing and CQRS.
We would now like to use sagas to manage the transactions between the OrderService and InvoiceService.
From what I've read, in order to make this change, we would need to do the following:
Switch from a SimpleCommandBus -> DistributedCommandBus
Change configuration to enable distributed commands
Connect the microservices using either Spring Cloud or JGroups
Add AxonFramework to the legacy InvoiceService project and handle the saga events received.
It is the fourth point where we have trouble: the invoice service is maintained by a separate team which is unwilling to make changes.
In this case, is it feasible to use the message broker instead of the command gateway? For instance, instead of doing something like this:
public class OrderManagementSaga {
    private boolean paid = false;
    private boolean delivered = false;
    @Inject
    private transient CommandGateway commandGateway;

    @StartSaga
    @SagaEventHandler(associationProperty = "orderId")
    public void handle(OrderCreatedEvent event) {
        // client generated identifiers
        ShippingId shipmentId = createShipmentId();
        InvoiceId invoiceId = createInvoiceId();
        // associate the Saga with these values, before sending the commands
        SagaLifecycle.associateWith("shipmentId", shipmentId);
        SagaLifecycle.associateWith("invoiceId", invoiceId);
        // send the commands
        commandGateway.send(new PrepareShippingCommand(...));
        commandGateway.send(new CreateInvoiceCommand(...));
    }

    @SagaEventHandler(associationProperty = "shipmentId")
    public void handle(ShippingArrivedEvent event) {
        delivered = true;
        if (paid) { SagaLifecycle.end(); }
    }

    @SagaEventHandler(associationProperty = "invoiceId")
    public void handle(InvoicePaidEvent event) {
        paid = true;
        if (delivered) { SagaLifecycle.end(); }
    }
    // ...
}
We would do something like this:
public class OrderManagementSaga {
    private boolean paid = false;
    private boolean delivered = false;
    @Inject
    private transient RabbitTemplate rabbitTemplate;

    @StartSaga
    @SagaEventHandler(associationProperty = "orderId")
    public void handle(OrderCreatedEvent event) {
        // client generated identifiers
        ShippingId shipmentId = createShipmentId();
        InvoiceId invoiceId = createInvoiceId();
        // associate the Saga with these values, before sending the commands
        SagaLifecycle.associateWith("shipmentId", shipmentId);
        SagaLifecycle.associateWith("invoiceId", invoiceId);
        // send the commands
        rabbitTemplate.convertAndSend(new PrepareShippingCommand(...));
        rabbitTemplate.convertAndSend(new CreateInvoiceCommand(...));
    }

    @SagaEventHandler(associationProperty = "shipmentId")
    public void handle(ShippingArrivedEvent event) {
        delivered = true;
        if (paid) { SagaLifecycle.end(); }
    }

    @SagaEventHandler(associationProperty = "invoiceId")
    public void handle(InvoicePaidEvent event) {
        paid = true;
        if (delivered) { SagaLifecycle.end(); }
    }
    // ...
}
In this case, when we receive a message from the InvoiceService on the exchange, we would publish the corresponding event on the event gateway or via the SpringAMQPPublisher.
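For illustration, the receiving side of that idea might look like the following sketch; the queue name and the InvoicePaidMessage type are made up, and EventGateway is Axon's interface for publishing events from non-aggregate code.
import org.axonframework.eventhandling.gateway.EventGateway;
import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.stereotype.Component;

@Component
public class InvoiceMessageRelay {
    private final EventGateway eventGateway;

    public InvoiceMessageRelay(EventGateway eventGateway) {
        this.eventGateway = eventGateway;
    }

    @RabbitListener(queues = "invoice-events") // illustrative queue name
    public void on(InvoicePaidMessage message) {
        // translate the broker message into the event the saga associates on
        eventGateway.publish(new InvoicePaidEvent(message.getInvoiceId()));
    }
}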
Questions:
Is this a valid approach?
Is there a documented way of handling this kind of scenario in Axon? If so, can you please provide a link to the documentation or any sample code?
First off, and not completely tailored to your question: you're referring to the Axon extensions to enable distributed messaging. Although this is indeed an option, know that it will require you to configure several separate solutions dedicated to distributed commands, events, and event storage. Using a unified solution like Axon Server ensures that you don't have to dive into three (or more) different approaches to make it all work. Instead, Axon Server is attached to the Axon Framework application, and it does all the distribution and event storage for you.
That means things like the DistributedCommandBus and the SpringAMQPPublisher are unnecessary to fulfill your goal if you use Axon Server.
That's a piece of FYI which can simplify your life; by no means a necessity, of course. So let's move on to your actual questions:
Is this a valid approach?
I think it is perfectly fine for a Saga to act as an anti-corruption layer in this form. A Saga reacts to events and sends out operations. Whether those operations take the form of commands or calls to another third-party service is entirely up to you.
Note though that I feel AMQP is more a solution for distributed events (a broadcast approach) than a means to send commands (to a direct handler). It can be morphed to suit your needs, but I'd regard it as suboptimal for command dispatching, since it needs to be adjusted for that purpose.
Lastly, make sure that your Saga can cope with exceptions from sending those operations over RabbitMQ. You wouldn't want the InvoiceService to fail on that message while your OrderService's Saga thinks it's on the happy path with your transaction.
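A hedged sketch of that point, reusing the question's saga and RabbitTemplate (the exchange and routing key are illustrative):
@StartSaga
@SagaEventHandler(associationProperty = "orderId")
public void handle(OrderCreatedEvent event) {
    try {
        rabbitTemplate.convertAndSend("invoice-exchange", "invoice.create",
                new CreateInvoiceCommand(...));
    } catch (AmqpException e) {
        // don't silently stay on the happy path: compensate, schedule a retry
        // via a deadline, or end the saga explicitly
        SagaLifecycle.end();
    }
}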
Concluding though, it's indeed feasible to use another message broker within a Saga.
Is there a documented way of handling this kind of scenario in Axon?
There's no documented Axon way to deal with such a scenario at the moment, as there is no one-size-fits-all solution. From a pure Axon standpoint, using commands would be the way to go. But as you stated, that's not an option within your domain due to the other (unwilling?) team. Hence you would fall back to the actual intent of a Saga, without taking Axon into account, which I would summarize as follows:
A Saga manages a complex business transaction.
It reacts to things happening (events) and sends out operations (commands).
The Saga has a notion of time.
The Saga maintains state over time to know how to react.
That's my two cents, hope this helps you out!

Danger of instantiating a class in a verticle (Vert.x)

Let me explain my problem. I have a verticle in which I define all the routes, and I have simple Java classes containing the methods I call from my verticle depending on the route. For example, my downloadFile() method is in the MyFile class, like this:
public class MyFile {
    public final void downloadFile(RoutingContext rc, Vertx vertx) {
        final HttpServerResponse response = rc.response();
        response.putHeader("Content-Type", "text/html");
        response.setChunked(true);
        rc.fileUploads().forEach(file -> {
            final String fileNameWithoutExtension = file.uploadedFileName();
            final JsonObject jsonObjectWithFileName = new JsonObject();
            response.setStatusCode(200);
            response.end(jsonObjectWithFileName.put("fileName", fileNameWithoutExtension).encodePrettily());
        });
    }

    public final void saveFile(RoutingContext rc, Vertx vertx) {
        //TODO
    }
}
And I use this class in my verticle like this:
public class MyVerticle extends AbstractVerticle {
    private static final MyFile myFile = new MyFile();

    @Override
    public void start(Future<Void> startFuture) {
        final Router router = Router.router(vertx);
        final EventBus eventBus = vertx.eventBus();
        router.route("/getFile").handler(routingContext -> {
            myFile.downloadFile(routingContext, vertx);
        });
        router.route("/saveFile").handler(routingContext -> {
            myFile.saveFile(routingContext, vertx);
        });
    }
}
My colleague tells me that it is not good to instantiate a class in a verticle, and when I asked him why, he replied that it becomes stateful. I have doubts about what he says because I don't see how. And since I declared my MyFile instance "static final" in my verticle, I would say I even gain performance, because I use the same instance for each incoming request instead of creating a new instance.
If it's bad to instantiate a class in a verticle, please explain why.
In addition, I would like to know what the point is of using two verticles for a task that a single verticle can handle.
For example, say I want to build a JsonObject with the data I select from my database. Why send this data to another verticle that does nothing but build the JsonObject and reply with it so I can forward the response to the client, when I could build the JsonObject in the verticle where I made the query and send the response to the client immediately? Here is some pseudo-code to illustrate:
public class MyVerticle1 extends AbstractVerticle {
    public void start(Future<Void> startFuture) {
        connection.query("select * from file", result -> {
            if (result.succeeded()) {
                List<JsonArray> rowsSelected = result.result().getResults();
                eventBus.send("address", rowsSelected, res -> {
                    if (res.succeeded()) {
                        routingContext.response().end(res.result().encodePrettily());
                    }
                });
            } else {
                LOGGER.error(result.cause().toString());
            }
        });
    }
}
public class MyVerticle2 extends AbstractVerticle {
    public void start(Future<Void> startFuture) {
        eventBus.consumer("address", message -> {
            JsonArray resultOfSelect = new JsonArray(); // fresh array per message
            List<JsonArray> rowsSelected = (List<JsonArray>) message.body();
            rowsSelected.forEach(jsa -> {
                JsonObject row = new JsonObject();
                row.put("id", jsa.getInteger(0));
                row.put("name", jsa.getString(1));
                resultOfSelect.add(row);
            });
            message.reply(resultOfSelect);
        });
    }
}
I really do not see the point of making two verticles, since I can use the result of my query in the first verticle without involving the second one.
For me, the EventBus is important for transmitting information between verticles for parallel processing.
Bear in mind that the answers you're looking for are unfortunately very nuanced and will vary depending on a number of conditions (e.g. the experience of whoever is answering, design idioms in the codebase, tools/libraries at your disposal, etc.), so there isn't an authoritative answer, just whatever suits you (and your co-workers).
My colleague tells me that it is not good to instantiate a class in a verticle, and when I asked him why, he replied that it becomes stateful. I have doubts about what he says because I don't see how.
Your colleague is correct in the general sense that you don't want individual nodes in a cluster maintaining their own state, because that will in fact hinder the ability to scale reliably. But in this particular case, MyFile appears to be stateless, so introducing it as a member of a Verticle does not automagically make the server stateful.
(If anything, I'd take issue with MyFile doing more than file-based operations: it also handles HTTP requests and responses.)
And since I declared my MyFile instance "static final" in my verticle, I would say I even gain performance, because I use the same instance for each incoming request instead of creating a new instance.
I'd say this comes down to design preferences. There isn't any real "harm" done here, per se, but I tend to avoid using static members for anything other than constant literals, and prefer instead to use dependency injection to wire up my dependencies. But maybe this is a very simple project and introducing a DI framework is beyond the complexity you wish to take on. It totally depends on your particular set of circumstances.
In addition, I would like to know what the point is of using two verticles for a task that a single verticle can handle.
Again, this depends on your circumstances and your "complexity budget". If the processing is simple and your desire is to keep the design equally simple, a single Verticle is fine (and arguably easier to understand and support). In larger applications, I tend to create many Verticles along the lines of the different logical domains in play (e.g. Verticles for authentication, Verticles for user-account functionality, etc.), and orchestrate any complex processing through the EventBus, as in the sketch below.
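A tiny sketch of that layout, in the same Vert.x 3 style as the question's code; the verticle names and the EventBus address are illustrative:
public class MainVerticle extends AbstractVerticle {
    @Override
    public void start() {
        // one verticle per logical domain, orchestrated over the EventBus
        vertx.deployVerticle(new AuthVerticle());
        vertx.deployVerticle(new UserAccountVerticle());

        // e.g. some other component asks the auth domain to validate a token
        vertx.eventBus().send("auth.validate", "some-token", reply -> {
            // react to the auth verticle's reply here
        });
    }
}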

Reactor / WebFlux: implement a reactive HTTP news ticker

I have a requirement that is rather simple to formulate, but I cannot pull it off without leaking resources.
I want to return a response of type application/stream+json, featuring news events someone posted. I do not want to use WebSockets, not because I don't like them; I just want to know how to do it with a stream.
For this I need to return a Flux<News> from my REST controller that is continuously fed with news once someone posts any.
My attempt for this was creating a Publisher:
public class UpdatePublisher<T> implements Publisher<T> {
    private List<Subscriber<? super T>> subscribers = new ArrayList<>();

    @Override
    public void subscribe(Subscriber<? super T> s) {
        subscribers.add(s);
    }

    public void pushUpdate(T message) {
        subscribers.forEach(s -> s.onNext(message));
    }
}
And a simple News Object:
public class News {
    String message;
    // Constructor, getters, some properties omitted for readability...
}
And endpoints to publish news and to get the stream of news:
// ...
private UpdatePublisher<String> updatePublisher = new UpdatePublisher<>();

@GetMapping(value = "/news/ticker", produces = "application/stream+json")
public Flux<News> getUpdateStream() {
    return Flux.from(updatePublisher).map(News::new);
}

@PutMapping("/news")
public void putNews(@RequestBody News news) {
    updatePublisher.pushUpdate(news.getMessage());
}
This WORKS, but I cannot unsubscribe from or access any given subscription again; once a client disconnects, the updatePublisher will just continue to push onto a growing number of dead channels, as I have no way to call the onComplete() handler on the subscriptions.
TL;DR:
Can one push messages onto a possibly endless Flux from a different thread and still terminate the Flux on demand, without relying on a "connection reset by peer" exception or something along those lines?
You should never try to implement the Publisher interface yourself, as it boils down to getting the Reactive Streams implementation right. This is exactly the issue you're facing here.
Instead you should use one of the generator operators provided by Reactor itself (this is actually a Reactor question; nothing here is specific to Spring WebFlux).
In this case, Flux.create or Flux.push are probably the best candidates, given your code uses some type of event listener to push events down the stream. See the Reactor reference documentation on those operators.
Without more details, it's hard to give you a concrete code sample that solves your problem. Here are a few pointers though:
you might want to .share() the stream of events for all subscribers if you'd like some multicast-like communication pattern
pay attention to the push/pull/push+pull model that you'd like to have here; how is backpressure supposed to work? What if we produce more events than the subscribers can handle?
this model would only work on a single application instance. If you'd like this to work on multiple application instances, you might want to look into messaging patterns using a broker
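That said, a minimal sketch of the Flux.create route is shown below, under the assumption that news arrives through some listener API; the NewsSource and NewsListener types are made up to stand in for however news gets posted in your application. The key point is that sink.onDispose unregisters the listener when a client disconnects, which is exactly what the hand-rolled Publisher could not do.
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

import reactor.core.publisher.Flux;

// Hypothetical listener API standing in for "someone posted news".
interface NewsListener {
    void onNews(String message);
}

class NewsSource {
    private final List<NewsListener> listeners = new CopyOnWriteArrayList<>();

    void register(NewsListener listener)   { listeners.add(listener); }
    void unregister(NewsListener listener) { listeners.remove(listener); }
    void publish(String message)           { listeners.forEach(l -> l.onNews(message)); }
}

class NewsStream {
    static Flux<News> newsFlux(NewsSource source) {
        return Flux.create(sink -> {
            NewsListener listener = message -> sink.next(new News(message));
            source.register(listener);
            // runs on cancel, complete, or error: no more dead channels
            sink.onDispose(() -> source.unregister(listener));
        });
    }
}
If every client should see the same stream, you can .share() the resulting Flux (the first pointer above), so a single upstream registration is multicast to all subscribers.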

Transfer data dynamically with Netty

I've been learning Netty for a while, and in Netty's tutorials (the MEAP book) almost all the examples are based on a fixed skeleton: the EventLoop, the Bootstrap. It seems that the implementations of the handlers in the ChannelPipeline are the only things we really need to be concerned about.
Here I want to design a simple chess game based on a server/client mode, where the two players are on different computers, and I want to use Netty to transmit the underlying game data. (I just want to practice using Netty.)
In such a game, the GUI detects that a player has placed a piece and then makes some change to the data. Then I need to deliver this data to the other player, and here comes the question.
I don't know how to implement a ChannelHandler in this situation, because in most examples the data is not produced dynamically by the handler. For instance, the data is created when the channel becomes active, in channelActive() or something similar, and all these methods are invoked by Netty itself.
The only method I can think of is write(). However, it seems I have to call this method myself if I implement it, and I don't know where to get the ChannelHandlerContext parameter.
So, how do I solve a problem like this?
P.S. I'm not so familiar with Java network programming, nor with Netty. Everything I know is from the book, which I haven't finished reading yet. :)
A channel handler in Netty looks like this:
package netty_sample;

import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.channel.MessageEvent;
import org.jboss.netty.channel.SimpleChannelHandler;

/**
 * Server side action
 */
public class EchoServerHandler extends SimpleChannelHandler {
    /**
     * This method will be invoked when the server receives a message
     */
    @Override
    public void messageReceived(ChannelHandlerContext ctx, MessageEvent event) {
        String msg = (String) event.getMessage(); // extract the received message
        // You can write any code here that handles the message, changes data,
        // creates a message for the client, etc.
        ctx.getChannel().write(someMessageToClient); // send back to client
    }
}
As I understand it, the handler is invoked dynamically (event-driven) when the server receives a message.
So the code in the handler runs in response to events, and you can write whatever you need in it.
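To address the "where do I get the ChannelHandlerContext" part of the question: a common pattern is to capture the Channel when the connection comes up and expose a send method the GUI can call. Here is a sketch against the same Netty 3 API; the class and method names are made up for illustration.
import org.jboss.netty.channel.Channel;
import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.channel.ChannelStateEvent;
import org.jboss.netty.channel.SimpleChannelHandler;

public class GameConnection extends SimpleChannelHandler {
    private volatile Channel channel;

    @Override
    public void channelConnected(ChannelHandlerContext ctx, ChannelStateEvent e) {
        channel = e.getChannel(); // remember the channel for writes outside Netty callbacks
    }

    /** Called from the GUI when the local player makes a move. */
    public void sendMove(Object moveMessage) {
        Channel ch = channel;
        if (ch != null && ch.isConnected()) {
            ch.write(moveMessage); // Netty performs the I/O asynchronously
        }
    }
}
The same idea works on the server side: keep the Channel (or a ChannelGroup of connected players) that Netty hands you in the callbacks, and write to it whenever the game state changes.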

Akka design for Authentication (Finite State Machine)

I am quite new to Akka and I'd love some support for a design decision in my application. I have a rather typical client/server application. At startup the client should authenticate at the application level, and after that it should run in a normal operational mode. There are also other states, like closing, disconnected, etc.
At the moment, I implement this using become():
public class MyServerActor extends UntypedActor {
    Procedure<Object> normal = new Procedure<Object>() {
        @Override
        public void apply(Object msg) {
            handleMessage(msg);
        }
    };

    @Override
    public void onReceive(Object msg) throws Exception {
        if (msg instanceof LoginMessage) {
            // do login stuff, assume the login was successful
            getContext().become(normal);
        }
    }
}
So I would use a different Procedure for a different state.
However, the docs at http://doc.akka.io/docs/akka/snapshot/java/fsm.html describe a Finite State Machine module, which works pretty much like a standard state machine: depending on the state, certain actions are performed.
I wonder which is the better approach? And what is the usual approach to implementing a client/server application in Akka with Java?
If you're going for a state-based approach, then use Procedure and become(). It makes it very clear that you're in a specific state, since all the code for that state is grouped together.
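For example, the question's actor could grow a second state like the sketch below; LogoutMessage is a made-up message type, and become(..., false) pushes the behavior so unbecome() can pop back to the login state.
import akka.actor.UntypedActor;
import akka.japi.Procedure;

public class MyServerActor extends UntypedActor {

    private final Procedure<Object> operational = new Procedure<Object>() {
        @Override
        public void apply(Object msg) {
            if (msg instanceof LogoutMessage) {
                getContext().unbecome(); // fall back to the unauthenticated state
            } else {
                handleMessage(msg);      // normal operation
            }
        }
    };

    @Override
    public void onReceive(Object msg) throws Exception {
        if (msg instanceof LoginMessage) {
            // authenticate; on success, push the operational state
            getContext().become(operational, false); // discardOld = false so unbecome() works
        } else {
            unhandled(msg); // reject everything else until the client logs in
        }
    }

    private void handleMessage(Object msg) { /* normal operational logic */ }
}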
