Akka design for Authentication (Finite State Machine) - java

I am quite new to Akka and I'd love some support for a design decision in my application. I have a rather typical client/server application. At startup the client should authenticate at the application level, and after that it should run in normal operational mode. There are also other states like closing, disconnected, etc.
At the moment I have implemented this using become():
public class MyServerActor extends UntypedActor {

    Procedure<Object> normal = new Procedure<Object>() {
        @Override
        public void apply(Object msg) {
            handleMessage(msg);
        }
    };

    @Override
    public void onReceive(Object msg) throws Exception {
        if (msg instanceof LoginMessage) {
            // do login stuff, assume the login was successful
            getContext().become(normal);
        }
    }
}
So I would use a different Procedure for a different state.
However, the doc at http://doc.akka.io/docs/akka/snapshot/java/fsm.html describes a Finite State Machine, which works pretty much like a standard state machine: depending on the state, certain actions are performed.
I wonder which is the better approach? Or what is the usual approach implementing a Client/Server application in Akka with java?

If you're going for a state-based approach, then use Procedure and become(). It makes it very clear that you're in a specific state, since all the code for that state is grouped together.
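For example, adding the question's other states (closing, disconnected) follows the same pattern: one Procedure per state, with a become() call for each transition. A minimal sketch, where CloseMessage is an assumed message type:

// Hypothetical extra state: all "closing" behaviour lives in one place.
private final Procedure<Object> closing = new Procedure<Object>() {
    @Override
    public void apply(Object msg) {
        // flush pending work, reject new requests, eventually stop
    }
};

private final Procedure<Object> normal = new Procedure<Object>() {
    @Override
    public void apply(Object msg) {
        if (msg instanceof CloseMessage) {   // assumed message type
            getContext().become(closing);    // state transition
        } else {
            handleMessage(msg);              // normal operational mode
        }
    }
};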

Related

Manage sagas between a microservice that uses axon and one that doesn't?

I'm working on a project where there are, for the sake of this question, two microservices:
A new OrderService (Spring Boot)
A "legacy" Invoice Service (Jersey Web Application)
Additionally, there is a RabbitMQ message broker.
In the OrderService, we've used the Axon framework for event-sourcing and CQRS.
We would now like to use sagas to manage the transactions between the OrderService and InvoiceService.
From what I've read, in order to make this change, we would need to do the following:
Switch from a SimpleCommandBus -> DistributedCommandBus
Change configuration to enable distributed commands
Connect the microservices using either Spring Cloud or JGroups
Add AxonFramework to the legacy InvoiceService project and handle the saga events received.
It is the fourth point where we have trouble: the invoice service is maintained by a separate team which is unwilling to make changes.
In this case, is it feasible to use the message broker instead of the command gateway? For instance, instead of doing something like this:
public class OrderManagementSaga {

    private boolean paid = false;
    private boolean delivered = false;

    @Inject
    private transient CommandGateway commandGateway;

    @StartSaga
    @SagaEventHandler(associationProperty = "orderId")
    public void handle(OrderCreatedEvent event) {
        // client generated identifiers
        ShippingId shipmentId = createShipmentId();
        InvoiceId invoiceId = createInvoiceId();
        // associate the Saga with these values, before sending the commands
        SagaLifecycle.associateWith("shipmentId", shipmentId);
        SagaLifecycle.associateWith("invoiceId", invoiceId);
        // send the commands
        commandGateway.send(new PrepareShippingCommand(...));
        commandGateway.send(new CreateInvoiceCommand(...));
    }

    @SagaEventHandler(associationProperty = "shipmentId")
    public void handle(ShippingArrivedEvent event) {
        delivered = true;
        if (paid) { SagaLifecycle.end(); }
    }

    @SagaEventHandler(associationProperty = "invoiceId")
    public void handle(InvoicePaidEvent event) {
        paid = true;
        if (delivered) { SagaLifecycle.end(); }
    }

    // ...
}
We would do something like this:
public class OrderManagementSaga {

    private boolean paid = false;
    private boolean delivered = false;

    @Inject
    private transient RabbitTemplate rabbitTemplate;

    @StartSaga
    @SagaEventHandler(associationProperty = "orderId")
    public void handle(OrderCreatedEvent event) {
        // client generated identifiers
        ShippingId shipmentId = createShipmentId();
        InvoiceId invoiceId = createInvoiceId();
        // associate the Saga with these values, before sending the commands
        SagaLifecycle.associateWith("shipmentId", shipmentId);
        SagaLifecycle.associateWith("invoiceId", invoiceId);
        // send the commands
        rabbitTemplate.convertAndSend(new PrepareShippingCommand(...));
        rabbitTemplate.convertAndSend(new CreateInvoiceCommand(...));
    }

    @SagaEventHandler(associationProperty = "shipmentId")
    public void handle(ShippingArrivedEvent event) {
        delivered = true;
        if (paid) { SagaLifecycle.end(); }
    }

    @SagaEventHandler(associationProperty = "invoiceId")
    public void handle(InvoicePaidEvent event) {
        paid = true;
        if (delivered) { SagaLifecycle.end(); }
    }

    // ...
}
In this case, when we receive a message from the InvoiceService on the exchange, we would publish the corresponding event on the event gateway or via the SpringAMQPPublisher.
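As an illustration, a minimal sketch of such a relay on the OrderService side, assuming Spring AMQP is used for consuming; the queue name, InvoicePaidMessage, and its accessor are assumptions:

import org.axonframework.eventhandling.gateway.EventGateway;
import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.stereotype.Component;

// Turns AMQP messages from the legacy InvoiceService into Axon events,
// so the saga's @SagaEventHandler methods fire as if the event were local.
@Component
public class InvoiceMessageRelay {

    private final EventGateway eventGateway;

    public InvoiceMessageRelay(EventGateway eventGateway) {
        this.eventGateway = eventGateway;
    }

    @RabbitListener(queues = "invoice.events")    // assumed queue name
    public void on(InvoicePaidMessage message) {  // assumed message type
        eventGateway.publish(new InvoicePaidEvent(message.getInvoiceId()));
    }
}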
Questions:
Is this a valid approach?
Is there a documented way of handling this kind of scenario in Axon? If so, can you please provide a link to the documentation or any sample code?
First off, and not completely tailored towards your question: you're referring to the Axon Extensions to enable distributed messaging. Although this is indeed an option, know that it will require you to configure several separate solutions dedicated to distributed commands, events, and event storage. Using a unified solution for this, like Axon Server, ensures that a user does not have to dive into three (or more) different approaches to make it all work. Instead, Axon Server is attached to the Axon Framework application, and it does all the distribution and event storage for you.
This means that things like the DistributedCommandBus and the SpringAMQPPublisher are unnecessary to fulfill your goal if you use Axon Server.
That's a piece of FYI which can simplify your life; by no means a necessity of course. So let's move to your actual question:
Is this a valid approach?
I think it is perfectly fine for a Saga to act as an anti-corruption layer in this form. A Saga, simply stated, reacts to events and sends out operations. Whether those operations take the form of commands or of calls to another third-party service is entirely up to you.
Note though that I feel AMQP is more a solution for distributing events (in a broadcast approach) than a means to send commands (to one direct handler). It can be morphed to suit your needs, but I'd regard it as suboptimal for command dispatching.
Lastly, make sure that your Saga can cope with exceptions thrown while sending those operations over RabbitMQ. You wouldn't want the Invoice service to fail on that message while your Order service's Saga thinks it's on the happy path with your transaction.
Concluding though, it's indeed feasible to use another message broker within a Saga.
Is there a documented way of handling this kind of scenario in Axon?
There's no documented Axon way to deal with such a scenario at the moment, as there is no "one size fits all" solution. From a pure Axon approach, using commands would be the way to go. But as you stated, that's not an option within your domain due to an (unwilling?) other team. Hence you would go back to the actual intent of a Saga, without taking Axon into account, which I would summarize as follows:
A Saga manages a complex business transaction.
It reacts to things happening (events) and sends out operations (commands).
It has a notion of time (see the sketch below).
It maintains state over time, so it knows how to react.
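To illustrate the notion-of-time point, a sketch of how the saga above could use Axon's EventScheduler to put a deadline on the legacy service's reply; PaymentDeadlineExpiredEvent and the two-day duration are assumptions for illustration:

import java.time.Duration;
import org.axonframework.eventhandling.scheduling.EventScheduler;
import org.axonframework.eventhandling.scheduling.ScheduleToken;

// Sketch of additions to a saga like OrderManagementSaga above: schedule
// a deadline when the order is created, cancel it once the invoice is paid.
@Inject
private transient EventScheduler eventScheduler;

private ScheduleToken paymentDeadline;

@StartSaga
@SagaEventHandler(associationProperty = "orderId")
public void handle(OrderCreatedEvent event) {
    paymentDeadline = eventScheduler.schedule(
            Duration.ofDays(2),
            new PaymentDeadlineExpiredEvent(event.getOrderId()));
}

@SagaEventHandler(associationProperty = "orderId")
public void handle(InvoicePaidEvent event) {
    eventScheduler.cancelSchedule(paymentDeadline);  // happy path
}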
That's my two cents, hope this helps you out!

Akka and Backup/Fallback Actors

I am coming to Akka after spending quite a bit of time over in Hystrix-land where, like Akka, failure is a first-class citizen.
In Hystrix, I might have a SaveFizzToDbCmd that attempts to save a Fizz instance to an RDB (MySQL, whatever), and a backup/“fallback” SaveFizzToMemoryCmd that saves that Fizz to an in-memory cache in case the primary (DB) command goes down/starts failing:
// Groovy pseudo-code
class SaveFizzToDbCmd extends HystrixCommand<Fizz> {
    SaveFizzToMemoryCmd memoryFallback
    Fizz fizz

    @Override
    Fizz run() {
        // Use raw JDBC to save ‘fizz’ to an RDB.
    }

    @Override
    Fizz getFallback() {
        // This only executes if the ‘run()’ method above throws
        // an exception.
        memoryFallback.fizz = this.fizz
        memoryFallback.execute()
    }
}
In Hystrix, if run() throws an exception (say a SqlException), its getFallback() method is invoked. If enough exceptions get thrown within a certain amount of time, the HystrixCommand’s “circuit breaker” is “tripped” and only the getFallback() method will be invoked.
I am interested in accomplishing the same in Akka, but with actors. With Akka, we might have a JdbcPersistor actor and an InMemoryPersistor backup/fallback actor like so:
class JdbcPersistor extends UntypedActor {
    @Override
    void onReceive(Object message) {
        if (message instanceof SaveFizz) {
            SaveFizz saveFizz = message as SaveFizz
            Fizz fizz = saveFizz.fizz
            // Use raw JDBC to save ‘fizz’ to an RDB.
        }
    }
}

class InMemoryPersistor extends UntypedActor {
    // Should be obvious what this does.
}
The problems I’m struggling with are:
How to get InMemoryPersistor correctly configured/wired as the backup to JdbcPersistor when the latter is failing; and
How to fail back over to the JdbcPersistor if/when it “heals” (though it may never).
I would imagine this is logic that belongs inside JdbcPersistor’s SupervisorStrategy, but I can find nothing in the Akka docs nor any code snippets that implement this kind of behavior. Which tells me “hey, maybe this isn’t the way Akka works, and perhaps there’s a different way of doing this sort of circuit breaking/failover/failback in Akka-land.” Thoughts?
Please note: Java examples are enormously appreciated as Scala looks like hieroglyphics to me!
One way would be to have a FailoverPersistor actor that consuming code communicates with. It has both a JdbcPersistor and an InMemoryPersistor as children, plus a flag that decides which one to use, and it routes traffic to the correct child depending on that state. The flag could then be manipulated by both the supervisor and timed logic/statistics inside the actor.
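A minimal sketch of that parent, reusing the SaveFizz message and the two persistor actors from the question; how dbHealthy gets flipped (supervision, timers, statistics) is left out:

import akka.actor.ActorRef;
import akka.actor.Props;
import akka.actor.UntypedActor;

public class FailoverPersistor extends UntypedActor {

    private final ActorRef jdbc =
            getContext().actorOf(Props.create(JdbcPersistor.class), "jdbc");
    private final ActorRef inMemory =
            getContext().actorOf(Props.create(InMemoryPersistor.class), "inMemory");

    // The "flag": true while the database path is believed healthy.
    private boolean dbHealthy = true;

    @Override
    public void onReceive(Object message) throws Exception {
        if (message instanceof SaveFizz) {
            ActorRef target = dbHealthy ? jdbc : inMemory;
            target.forward(message, getContext());  // keep the original sender
        } else {
            unhandled(message);
        }
    }
}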
There is a circuit breaker in the contrib package of akka that might be an inspiration (or maybe even useable to achieve what you want): http://doc.akka.io/docs/akka/current/common/circuitbreaker.html
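If you use that breaker, a fragment like the following (inside the FailoverPersistor sketch above) could drive the flag from the breaker's callbacks; the thresholds are illustrative:

import akka.pattern.CircuitBreaker;
import scala.concurrent.duration.Duration;

// Open after 5 failures or calls slower than 10s; probe the DB again after 30s.
private final CircuitBreaker breaker = new CircuitBreaker(
        getContext().dispatcher(), getContext().system().scheduler(),
        5, Duration.create(10, "seconds"), Duration.create(30, "seconds"))
        .onOpen(new Runnable() {
            public void run() { dbHealthy = false; }  // fall back to memory
        })
        .onClose(new Runnable() {
            public void run() { dbHealthy = true; }   // DB has "healed"
        });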

Designing Akka Supervisor Hierarchy

Please note: I am a Java developer with no working knowledge of Scala (sadly). I would ask that any code examples provided in the answer would be using Akka's Java API.
I am brand-spanking-new to Akka and actors, and am trying to set up a fairly simple actor system:
So a DataSplitter actor runs and splits up a rather large chunk of binary data, say 20GB, into 100 KB chunks. For each chunk, the data is stored in the DataCache via the DataCacher. In the background, a DataCacheCleaner rummages through the cache and finds data chunks that it can safely delete. This is how we prevent the cache from becoming 20GB in size.
After sending the chunk off to the DataCacher for caching, the DataSplitter then notifies the ProcessorPool of the chunk which now needs to be processed. The ProcessorPool is a router/pool consisting of tens of thousands of different ProcessorActors. When each ProcessorActor receives a notification to "process" a 100KB chunk of data, it then fetches the data from the DataCacher and does some processing on it.
If you're wondering why I am bothering even caching anything here (hence the DataCacher, DataCache and DataCacheCleaner), my thinking was that 100KB is still a fairly large message to pass around to tens of thousands of actor instances (100KB * 1,000 = 100MB), so I am trying to just store the 100KB chunk once (in a cache) and then let each actor access it by reference through the cache API.
There is also a Mailman actor that subscribes to the event bus and intercepts all DeadLetters.
So, altogether, 6 actors:
DataSplitter
DataCacher
DataCacheCleaner
ProcessorPool
ProcessorActor
Mailman
The Akka docs preach that you should decompose your actor system based on dividing up subtasks rather than purely by function, but I'm not exactly seeing how this applies here. The problem at hand is that I'm trying to organize a supervisor hierarchy between these actors and I'm not sure what the best/correct approach is. Obviously ProcessorPool is a router that needs to be the parent/supervisor to the ProcessorActors, so we have this known hierarchy:
/user/processorPool/
processorActors
But other than that known/obvious relationship, I'm not sure how to organize the rest of my actors. I could make them all "peers" under one common/master actor:
/user/master/
dataSplitter/
dataCacher/
dataCacheCleaner/
processorPool/
processorActors/
mailman/
Or I could omit a master (root) actor and try to make things more vertical around the cache:
/user/
dataSplitter/
cacheSupervisor/
dataCacher/
dataCacheCleaner/
processorPool/
processorActors/
mailman/
Being so new to Akka I'm just not sure what the best course of action is, and if someone could help with some initial hand-holding here, I'm sure the lightbulbs will all turn on. And, just as important as organizing the hierarchy: I'm not even sure which API constructs I can use to actually create the hierarchy in code.
Organising them under one master makes it easier to manage since you can access all the actors watched by the supervisor (in this case master).
One hierarchical implementation can be:
Master Supervisor Actor
class MasterSupervisor extends UntypedActor {

    private final LoggingAdapter log = Logging.getLogger(getContext().system(), this);
    private ActorRef dataSplitter;

    // restart() and stop() come from a static import of akka.actor.SupervisorStrategy
    private final SupervisorStrategy strategy = new AllForOneStrategy(2,
            Duration.create(5, TimeUnit.MINUTES),
            new Function<Throwable, Directive>() {
                @Override
                public Directive apply(Throwable t) {
                    if (t instanceof SQLException) {
                        log.error("Error: SQLException");
                        return restart();
                    } else if (t instanceof IllegalArgumentException) {
                        log.error("Error: IllegalArgumentException");
                        return stop();
                    } else {
                        log.error("Error: GeneralException");
                        return stop();
                    }
                }
            });

    @Override
    public SupervisorStrategy supervisorStrategy() {
        return strategy;
    }

    @Override
    public void onReceive(Object message) throws Exception {
        if (message.equals("SPLIT")) {
            // create the child on first use
            if (dataSplitter == null) {
                dataSplitter = getContext().actorOf(
                        FromConfig.getInstance().props(Props.create(DataSplitter.class)),
                        "DataSplitter");
                // watch the child so we are notified when it terminates
                getContext().watch(dataSplitter);
                log.info("{} created and is watching DataSplitter", self().path());
            }
            // forward so DataSplitter sees the original sender
            dataSplitter.forward(message, getContext());
        }
    }
}
DataSplitter Actor
class DataSplitter extends UntypedActor {

    private final LoggingAdapter log = Logging.getLogger(getContext().system(), this);

    // Inject a service to do the main operation
    DataSplitterService dataSplitterService;

    @Override
    public void onReceive(Object message) throws Exception {
        if (message.equals("SPLIT")) {
            log.info("{} received message: {} from {}", self().path(), message, sender());
            dataSplitterService.splitData();
        }
    }
}
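As for which API constructs actually build the hierarchy: the top-level actor is created from the ActorSystem, and every child via context().actorOf(...), exactly as MasterSupervisor does above. A minimal bootstrap sketch (the system and actor names are arbitrary):

import akka.actor.ActorRef;
import akka.actor.ActorSystem;
import akka.actor.Props;

public class Main {
    public static void main(String[] args) {
        ActorSystem system = ActorSystem.create("pipeline");
        // Creates /user/master; everything MasterSupervisor spawns with
        // getContext().actorOf(...) lives under it and is supervised by it.
        ActorRef master = system.actorOf(Props.create(MasterSupervisor.class), "master");
        master.tell("SPLIT", ActorRef.noSender());
    }
}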

Desktop program communicating with server

I am in the process of moving the business logic of my Swing program onto the server.
What would be the most efficient way to communicate client-server and server-client?
The server will be responsible for authentication and for fetching and storing data, so the program will have to communicate frequently.
It depends on a lot of things. If you want a real answer, you should clarify exactly what your program will be doing and exactly what falls under your definition of "efficient".
If rapid productivity falls under your definition of efficient, a method that I have used in the past involves serialization to send plain old Java objects down a socket. Recently I have found that, in combination with the Netty API, I am able to rapidly prototype fairly robust client/server communication.
The guts are fairly simple; the client and server both run Netty with an ObjectDecoder and ObjectEncoder in the pipeline. A class is made for each kind of message. For example, a HandshakeRequest class and a HandshakeResponse class.
A handshake request could look like:
public class HandshakeRequest extends Message {
    private static final long serialVersionUID = 1L;
}
And a handshake response may look like:
public class HandshakeResponse extends Message {
    private static final long serialVersionUID = 1L;

    private final HandshakeResult handshakeResult;

    public HandshakeResponse(HandshakeResult handshakeResult) {
        this.handshakeResult = handshakeResult;
    }

    public HandshakeResult getHandshakeResult() {
        return handshakeResult;
    }
}
In Netty, the server would send a handshake request when a client connects, like so:
@Override
public void channelConnected(ChannelHandlerContext ctx, ChannelStateEvent e) {
    Channel ch = e.getChannel();
    ch.write(new HandshakeRequest());
}
The client receives the HandshakeRequest object, but it needs a way to tell what kind of message the server just sent. For this, a Map<Class<?>, Method> can be used. When your program is run, it should iterate through the methods of a class with reflection and place them in the map. Here is an example:
public HashMap<Class<?>, Method> populateMessageHandler() {
    HashMap<Class<?>, Method> temp = new HashMap<Class<?>, Method>();
    for (Method method : getClass().getMethods()) {
        if (method.getAnnotation(MessageHandler.class) != null) {
            Class<?>[] methodParameters = method.getParameterTypes();
            temp.put(methodParameters[1], method);
        }
    }
    return temp;
}
This code iterates through the current class and looks for methods marked with a @MessageHandler annotation, then looks at the second parameter of each method (handlers have the form public void handleHandshakeRequest(ChannelHandlerContext ctx, HandshakeRequest request), matching the invoke call below) and places that parameter's class into the map as a key, with the actual method as its value.
With this map in place, it is very easy to receive a message and dispatch it directly to the method that should handle it:
@Override
public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
    try {
        Message message = (Message) e.getMessage();
        Method method = messageHandlers.get(message.getClass());
        if (method == null) {
            System.out.println("No handler for message!");
        } else {
            method.invoke(this, ctx, message);
        }
    } catch (Exception exception) {
        exception.printStackTrace();
    }
}
There's not really anything left to it. Netty handles all of the messy stuff, allowing us to send serialized objects back and forth with ease. If you decide that you do not want to use Netty, you can wrap your own protocol around Java's ObjectOutputStream. You will have to do a little bit more work overall, but the simplicity of communication remains intact.
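For the ObjectOutputStream route, a bare-bones sketch of the server side, reusing the Message subclasses from above (the port is arbitrary; the client mirrors this with a Socket and the streams created in the opposite order):

import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class PlainSocketServer {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(9000);
             Socket client = server.accept();
             // Create the output stream first and flush the stream header,
             // or both sides can deadlock in the ObjectInputStream constructor.
             ObjectOutputStream out = new ObjectOutputStream(client.getOutputStream());
             ObjectInputStream in = new ObjectInputStream(client.getInputStream())) {
            out.writeObject(new HandshakeRequest());
            out.flush();
            Object reply = in.readObject();  // e.g. a HandshakeResponse
            System.out.println("Got " + reply.getClass().getSimpleName());
        }
    }
}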
It's a bit hard to say which method is "most efficient" in terms of what, and I don't know your use cases, but here's a couple of options:
The most basic way is to simply use "raw" TCP-sockets. The upside is that there's nothing extra moving across the network and you create your protocol yourself, the latter being also a downside; you have to design and implement your own protocol for the communication, plus the basic framework for handling multiple connections in the server end (if there is a need for such).
Using UDP-sockets, you'll probably save a little latency and bandwidth (not much, unless you're using something like mobile data, you probably won't notice any difference with TCP in terms of latency), but the networking code is a bit harder task; UDP-sockets are "connectionless", meaning all the clients messages will end up in the same handler and must be distinguished from one another. If the server needs to keep up with client state, this can be somewhat troublesome to implement right.
MadProgrammer brought up RMI (remote method invocation), I've personally never used it, and it seems a bit cumbersome to set up, but might be pretty good in the long run in terms of implementation.
Probably one of the most common ways is to use HTTP for the communication, for example via a REST interface for web services. There are multiple frameworks (I personally prefer Spring MVC) to help with the implementation, but learning a new framework might be out of your scope for now. Also, complex HTTP queries or long URLs could eat your bandwidth a bit more, but unless we're talking about very large numbers of simultaneous clients, this usually isn't a problem (assuming you run your server(s) in a datacenter with something like 100/100Mbit connections). This is probably the easiest solution to scale, if it ever comes to that, as there are lots of load-balancing solutions available for web servers.

Benefits and disadvantages of using java rmi

What are the advantages and disadvantages of RMI?
The advantages and disadvantages are similar to those of any RPC-like (Remote Procedure Call) system. There is a superficial appearance of simplicity, because objects which are in fact remote can be treated as though they were local.
This would seem like a great benefit to simplicity of programming, but there are hidden costs. Distributed systems have issues of latency and potential for partial failure which the programmer has to be aware of. An invocation of a remote method is subject to potential failure from security, latency problems, network failure, etc. Papering over these sorts of problems can be a disaster for reliability.
From my experience:
Pros:
Easy to start
Dynamic class loading is very powerful
If you implement something like the interfaces below, you can leave the server side unchanged for a long time while developing the client (with one exception: the RMI server has to have the client's Task classes on its classpath, so either serve them over the network or include them and rebuild the server).
You can implement two interfaces like that:
Common task interface:
public interface Task<T extends Serializable> extends Serializable {
    T execute();
}
Rmi interface:
public interface RmiTask extends Remote {
    <T extends Serializable> T executeTask(Task<T> task) throws RemoteException;
}
RmiTask implementation on server side:
public class RmiTaskExecutor implements RmiTask {
    public <T extends Serializable> T executeTask(Task<T> task) {
        return task.execute();
    }
}
Example client Task implementation:
public class IsFileTask implements Task<Boolean> {

    final String path;

    public IsFileTask(String path) {
        this.path = path;
    }

    public Boolean execute() {
        return new File(path).isFile();
    }
}
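For completeness, a hypothetical client-side usage, assuming the server has bound its RmiTaskExecutor under the name "RmiTask" in an RMI registry on the default port:

import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;

public class RmiClient {
    public static void main(String[] args) throws Exception {
        Registry registry = LocateRegistry.getRegistry("server.example.com", 1099);
        RmiTask remoteTask = (RmiTask) registry.lookup("RmiTask");  // assumed binding name
        // The IsFileTask instance is serialized, shipped to the server,
        // executed there, and only the Boolean result travels back.
        Boolean exists = remoteTask.executeTask(new IsFileTask("/etc/hosts"));
        System.out.println("File exists on server: " + exists);
    }
}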
Cons:
Can be insecure when using dynamic class loading (the client serves the implementation of passed types): for example, you know that the RMI server calls method() on PassedObject, but a malicious client could override this method and execute whatever it wants there...
Hard to implement callbacks that work over the Internet (the server needs to establish a new connection back to the client, which can be challenging to pass through NAT/routers/firewalls).
If the connection suddenly breaks during execution of a remote method, that method may never return (I recommend wrapping RMI calls in Callables and running them with defined timeouts; see the sketch below).
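A sketch of that timeout wrapper, assuming the RmiTask stub from the example above; the five-second limit is arbitrary:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Run the RMI call on a worker thread so we can give up after a timeout
// instead of blocking forever on a dead connection.
ExecutorService pool = Executors.newSingleThreadExecutor();
Future<Boolean> future =
        pool.submit(() -> remoteTask.executeTask(new IsFileTask("/etc/hosts")));
try {
    Boolean exists = future.get(5, TimeUnit.SECONDS);
} catch (TimeoutException e) {
    future.cancel(true);  // interrupt the stuck call
} finally {
    pool.shutdown();
}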
