I have a J2EE application that receives messages (events) via a web service. The messages are of varying types (requiring different processing depending on type) and are sent in a specific sequence. I have identified a problem where some message types take longer to process than others, so a message received second in a sequence may be processed before the first in the sequence. I have tried to address this by placing a synchronized block around the method that processes the messages. This seems to work, but I am not confident that it is the "correct" approach. Is there an alternative that may be more appropriate, or is this acceptable? I have included a small snippet of code below to explain more clearly. Any advice / guidance is appreciated.
public class EventServiceImpl implements EventService {

    public String submit(String msg) {
        if (msg == null) {
            return "NAK";
        }
        EventQueue.getInstance().submit(msg);
        return "ACK";
    }
}
public class EventQueue {

    private static EventQueue instance = null;
    private static final int QUEUE_LENGTH = 10000;

    protected volatile boolean done = false;

    BlockingQueue<String> myQueue = new LinkedBlockingQueue<String>(QUEUE_LENGTH);

    protected EventQueue() {
        new Thread(new Consumer(myQueue)).start();
    }

    public static EventQueue getInstance() {
        if (instance == null) {
            instance = new EventQueue();
        }
        return instance;
    }

    public void submit(String event) {
        try {
            myQueue.put(event);
        } catch (InterruptedException ex) {
        }
    }

    class Consumer implements Runnable {

        protected BlockingQueue<String> queue;

        Consumer(BlockingQueue<String> theQueue) {
            this.queue = theQueue;
        }

        public void run() {
            try {
                while (true) {
                    Object obj = queue.take();
                    process(obj);
                    if (done) {
                        return;
                    }
                }
            } catch (InterruptedException ex) {
            }
        }

        void process(Object obj) {
            Event event = new Event((String) obj);
            EventHandler handler = EventHandlerFactory.getInstance(event);
            handler.execute();
        }
    }

    // Close queue gracefully
    public void close() {
        this.done = true;
    }
}
I am not sure which framework (EJB (MDB) / JMS) you are working with. Generally, using synchronization inside a managed environment like EJB/JMS should be avoided (it is not good practice). One way to get around this is to have the client wait for the acknowledgement from the server before it sends the next message; this way the client itself controls the sequence of events. Please note this won't work if there are multiple clients submitting messages.
EDIT:
You have a situation wherein the client of the web service sends messages in sequence without taking the message processing time into account; it simply dumps one message after another. This is a good case for a queue (first in, first out) based solution. I suggest the following ways to accomplish this:
Use JMS. This has the additional overhead of adding a JMS provider and writing some plumbing code.
Use a multithreading pattern like Producer-Consumer, wherein your web service handler dumps the incoming message into a queue and a single-threaded consumer consumes one message at a time (see the sketch after this list, using the java.util.concurrent package).
Use a database. Dump the incoming messages into a database and use a separate scheduler-based program to scan the database (based on sequence number) and process the messages accordingly.
The first and third solutions are very standard for this type of problem. The second approach would be quick and won't need any additional libraries in your code.
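A minimal sketch of the second option, reusing the EventService, Event and EventHandlerFactory names from the question (an assumption-laden sketch, not a drop-in replacement): a single-threaded executor is itself a FIFO work queue plus one consumer thread, so submissions are processed strictly in arrival order.
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class OrderedEventServiceImpl implements EventService {

    // one worker thread + an unbounded FIFO work queue
    private static final ExecutorService EXECUTOR = Executors.newSingleThreadExecutor();

    public String submit(final String msg) {
        if (msg == null) {
            return "NAK";
        }
        EXECUTOR.submit(new Runnable() {
            public void run() {
                // messages are handled one at a time, in submission order
                Event event = new Event(msg);
                EventHandlerFactory.getInstance(event).execute();
            }
        });
        return "ACK";
    }
}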
If the events are to be processed in a specific sequence, then why not try adding "eventID" and "orderID" fields to the messages? This way your EventServiceImpl class can sort, order and then execute them in the proper order (regardless of the order in which they are created and/or delivered to the handler).
Synchronizing the handler.execute() block will not get the desired results, I expect. All the synchronized keyword does is prevent multiple threads from executing that block at the same time. It does nothing in the realm of properly ordering which thread goes next.
If the synchronized block does seem to make things work, then I assert you are getting very lucky in that the messages are being created, delivered and then acted upon in the proper order. In a multithread environment, this is not assured! I'd take steps to assure you are controlling this, rather than relying on good fortune.
Example:
Messages are created in the order 'client01-A', 'client01-C', 'client01-B', 'client01-D'.
Messages arrive at the handler in the order 'client01-D', 'client01-B', 'client01-A', 'client01-C'.
The EventHandler can distinguish messages from one client to another and starts to cache 'client01's messages.
The EventHandler receives the 'client01-A' message, knows it can process it, and does so.
The EventHandler looks in the cache for message 'client01-B', finds it and processes it.
The EventHandler cannot find 'client01-C' because it hasn't arrived yet.
The EventHandler receives 'client01-C' and processes it.
The EventHandler looks in the cache for 'client01-D', finds it, processes it, and considers the 'client01' interaction complete.
Something along these lines would assure proper processing and would promote good use of multiple threads.
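A rough sketch of that caching/reordering idea, assuming each message carries a contiguous sequence number (the proposed orderID) and ignoring the per-client separation for brevity:
import java.util.HashMap;
import java.util.Map;

// Illustrative only: buffer out-of-order messages and release them in sequence.
class InOrderProcessor {

    private final Map<Long, String> pending = new HashMap<Long, String>();
    private long nextSequence = 0;

    // Called whenever a message arrives, in any order.
    synchronized void onMessage(long sequence, String msg) {
        pending.put(sequence, msg);
        // Drain everything that is now contiguous with what has already been processed.
        String next;
        while ((next = pending.remove(nextSequence)) != null) {
            process(next);
            nextSequence++;
        }
    }

    private void process(String msg) {
        // hand the message to the real EventHandler here
    }
}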
I used to implement the Runnable interface to peek() an item from a queue and send it to an API.
But now I need to use the Callable interface to peek() the queue and send an item to the API; if it returns 200, then delete the item from the queue.
Here is the code I used to implement this functionality. How can I modify it? Any examples or references on this? Thanks.
public class QueueProcessor implements Runnable {

    // value assumed; the original did not show this constant
    private static final int DEFAULT_RANGE_FOR_SLEEP = 1000;

    private static ObjectQueue<JSONObject> objectQueue;

    static {
        objectQueue = new ObjectQueue<JSONObject>();
    }

    public void run() {
        // items are added to the queue elsewhere, e.g. objectQueue.add(jsonObject);
        Random r = new Random();
        try {
            while (true) {
                try {
                    if (!objectQueue.isEmpty()) {
                        JSONObject o = objectQueue.peek(); // look at the head without removing it
                        sendRequest(o);
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
                Thread.sleep(r.nextInt(DEFAULT_RANGE_FOR_SLEEP));
            }
        } catch (InterruptedException e) {
            e.printStackTrace();
            Thread.currentThread().interrupt();
        }
    }

    public void sendRequest(JSONObject json) {
        Client client = ClientBuilder.newClient();
        WebTarget baseTarget = client.target("someUrl");
        Invocation.Builder builder = baseTarget.request();
        Response response = builder.post(Entity.entity(json.toString(), MediaType.APPLICATION_JSON));
        int code = response.getStatus();
        if (200 == code) {
            objectQueue.remove(); // delete the item only after a successful call
        }
    }
}
Just to get you started, refer to this other SO question and note point #1 in the question itself.
To achieve asynchronous calls, you first need to decouple task submission/execution (pick an item from the queue and make the API call) from response processing after the API call (remove the item from the queue if the response status is 200). This decoupling can be achieved with an ExecutorService.
So first introduce an ExecutorService into your Runnable code, i.e. start executing your Runnable from some controller class (the class with the main method) which uses an Executor to submit/execute requests. You have not shown how you are triggering your thread, so you might already be doing that.
Now change your Runnable into a Callable<Response>, i.e. create a Callable similar to your Runnable that implements Callable<Response>, and make your API call in the call() method. You do need to share your ObjectQueue<JSONObject> between your main controller class and this Callable, so either the queue implementation needs to be thread-safe or you need to make the call() method thread-safe.
I mean, you either loop over your queue in the controller and keep submitting a request for each item, or you pass the whole queue to the Callable and the looping is done there; it's your call.
The point to note so far is that the call() method of a Callable returns a value, while the run() method of a Runnable does not; that is a major difference between the two.
Now, going back to the controller class: the ExecutorService's submit method will wrap your Response in a Future<Response>.
Now use a combination of the isDone() and get() methods on your Future to decide when to remove the item from the queue.
Remember that you should be able to identify the processed object in the queue from the API response; if not, you need to combine the API response with the submitted JSONObject and wrap both in the Future to figure out which object to remove. The status alone is not going to be enough, and you might need another data structure to hold objects if the queue only allows removal of its head element. This complication doesn't arise if you are simply replacing Runnable with Callable and don't wish to make your program truly asynchronous.
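For orientation only, a rough sketch of that flow, assuming the ObjectQueue from the question and a variant of sendRequest that returns the JAX-RS Response instead of void:
ExecutorService executor = Executors.newSingleThreadExecutor();

// submit one API call; the Callable returns the Response so the controller can inspect it
final JSONObject item = objectQueue.peek();          // look at the head without removing it
Future<Response> pending = executor.submit(new Callable<Response>() {
    public Response call() {
        return sendRequest(item);                    // assumed: sendRequest now returns the Response
    }
});

// later, in the controller (once pending.isDone(), or simply by blocking on get())
try {
    Response response = pending.get();
    if (response.getStatus() == 200) {
        objectQueue.remove();                        // delete the item only after a 200
    }
} catch (InterruptedException | ExecutionException e) {
    e.printStackTrace();
}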
Beyond that sketch, these are broad guidelines; providing a complete ready-made solution is something I wouldn't do, and you will find lots of examples on the Internet provided your basics are correct. Also, please make it a practice to include import statements when pasting code.
A few links:
How to send parallel GET requests and wait for result responses?
How to send multiple asynchronous requests to different web services?
This question is about the Camunda BPM engine.
I would like to implement an ExecutionListener that I can attach to any process events. This listener should send process state messages to a message queue. The process message should contain a state that would be "PENDING" for the process if the process is waiting in a UserTask somewhere.
Now I wonder if there is an easy way to find out, inside the delegation code (using the provided DelegateExecution object of the delegation code method), whether the process is waiting (somewhere) in a UserTask. I could not find one so far.
For example:
import org.camunda.bpm.engine.RuntimeService;
import org.camunda.bpm.engine.delegate.DelegateExecution;
import org.camunda.bpm.engine.delegate.ExecutionListener;
import org.camunda.bpm.engine.runtime.ActivityInstance;

public class ExampleExecutionListener implements ExecutionListener {

    public void notify(DelegateExecution execution) throws Exception {
        RuntimeService runtimeService = execution.getProcessEngineServices().getRuntimeService();
        ActivityInstance activityInstance = runtimeService.getActivityInstance(execution.getProcessInstanceId());
        boolean isInAnyUserTask = isInAnyUserTask(activityInstance);
    }

    protected boolean isInAnyUserTask(ActivityInstance activityInstance) {
        if ("userTask".equals(activityInstance.getActivityType())) {
            return true;
        } else {
            for (ActivityInstance child : activityInstance.getChildActivityInstances()) {
                boolean isChildInUserTask = isInAnyUserTask(child);
                if (isChildInUserTask) {
                    return true;
                }
            }
            return false;
        }
    }
}
Note that this does not consider called process instances.
DelegateExecution does not have all the information that you need. You will have to use the task query and check whether it returns at least one result for the process instance that is currently running.
There is a TaskListener interface. You can implement it yourself and attach your own TaskListener to each UserTask in the BPMN code. You can also define on which event type your TaskListener should be executed (create, assignment, complete, delete).
The notify method is called with a DelegateTask, which contains more specific information about the concrete UserTask. You could extract this information and send it to your queue (when your implementation of the TaskListener is invoked on the create event).
Otherwise you can use the TaskService to create a query that retrieves all open tasks. For a working query you need the process instance id of the current execution, which you can retrieve from the delegate execution. To make things short, take this code snippet: taskService.createTaskQuery().processInstanceId(delegateExecution.getProcessInstanceId()).list().isEmpty().
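Put together, a minimal sketch of that query inside an ExecutionListener could look like this (how you actually publish the "PENDING" message to your queue is left out):
import org.camunda.bpm.engine.TaskService;
import org.camunda.bpm.engine.delegate.DelegateExecution;
import org.camunda.bpm.engine.delegate.ExecutionListener;

public class PendingStateListener implements ExecutionListener {

    public void notify(DelegateExecution execution) throws Exception {
        TaskService taskService = execution.getProcessEngineServices().getTaskService();
        // at least one open task on this process instance means the process is waiting in a UserTask
        boolean hasOpenUserTask = taskService.createTaskQuery()
                .processInstanceId(execution.getProcessInstanceId())
                .count() > 0;
        if (hasOpenUserTask) {
            // send the "PENDING" state message to the message queue here
        }
    }
}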
Please note: I am a Java developer with no working knowledge of Scala (sadly). I would ask that any code examples provided in the answer would be using Akka's Java API.
I am brand-spanking-new to Akka and actors, and am trying to set up a fairly simple actor system:
So a DataSplitter actor runs and splits up a rather large chunk of binary data, say 20GB, into 100 KB chunks. For each chunk, the data is stored in the DataCache via the DataCacher. In the background, a DataCacheCleaner rummages through the cache and finds data chunks that it can safely delete. This is how we prevent the cache from becoming 20GB in size.
After sending the chunk off to the DataCacher for caching, the DataSplitter then notifies the ProcessorPool of the chunk which now needs to be processed. The ProcessorPool is a router/pool consisting of tens of thousands of different ProcessorActors. When each ProcessorActor receives a notification to "process" a 100KB chunk of data, it then fetches the data from the DataCacher and does some processing on it.
If you're wondering why I am bothering even caching anything here (hence the DataCacher, DataCache and DataCacheCleaner), my thinking was that 100KB is still a fairly large message to pass around to tens of thousands of actor instances (100KB * 1,000 = 100MB), so I am trying to just store the 100KB chunk once (in a cache) and then let each actor access it by reference through the cache API.
There is also a Mailman actor that subscribes to the event bus and intercepts all DeadLetters.
So, altogether, 6 actors:
DataSplitter
DataCacher
DataCacheCleaner
ProcessorPool
ProcessorActor
Mailman
The Akka docs preach that you should decompose your actor system based on dividing up subtasks rather than purely by function, but I'm not exactly seeing how this applies here. The problem at hand is that I'm trying to organize a supervisor hierarchy between these actors and I'm not sure what the best/correct approach is. Obviously ProcessorPool is a router that needs to be the parent/supervisor to the ProcessorActors, so we have this known hierarchy:
/user/processorPool/
    processorActors
But other than that known/obvious relationship, I'm not sure how to organize the rest of my actors. I could make them all "peers" under one common/master actor:
/user/master/
    dataSplitter/
    dataCacher/
    dataCacheCleaner/
    processorPool/
        processorActors/
    mailman/
Or I could omit a master (root) actor and try to make things more vertical around the cache:
/user/
    dataSplitter/
    cacheSupervisor/
        dataCacher/
        dataCacheCleaner/
    processorPool/
        processorActors/
    mailman/
Being so new to Akka I'm just not sure what the best course of action is, and if someone could help with some initial hand-holding here, I'm sure the lightbulbs will all turn on. And, just as important as organizing this hierarchy is, I'm not even sure what API constructs I can use to actually create the hierarchy in the code.
Organising them under one master makes it easier to manage since you can access all the actors watched by the supervisor (in this case master).
One hierarchical implementation can be:
Master Supervisor Actor
import java.sql.SQLException;
import java.util.concurrent.TimeUnit;

import akka.actor.ActorRef;
import akka.actor.AllForOneStrategy;
import akka.actor.Props;
import akka.actor.SupervisorStrategy;
import akka.actor.SupervisorStrategy.Directive;
import akka.actor.UntypedActor;
import akka.event.Logging;
import akka.event.LoggingAdapter;
import akka.japi.Function;
import akka.routing.FromConfig;
import scala.concurrent.duration.Duration;

class MasterSupervisor extends UntypedActor {

    private final LoggingAdapter log = Logging.getLogger(getContext().system(), this);

    // child reference, created lazily on the first SPLIT message
    private ActorRef dataSplitter;

    private final SupervisorStrategy strategy = new AllForOneStrategy(2,
            Duration.create(5, TimeUnit.MINUTES),
            new Function<Throwable, Directive>() {
                @Override
                public Directive apply(Throwable t) {
                    if (t instanceof SQLException) {
                        log.error("Error: SQLException");
                        return SupervisorStrategy.restart();
                    } else if (t instanceof IllegalArgumentException) {
                        log.error("Error: IllegalArgumentException");
                        return SupervisorStrategy.stop();
                    } else {
                        log.error("Error: GeneralException");
                        return SupervisorStrategy.stop();
                    }
                }
            });

    @Override
    public SupervisorStrategy supervisorStrategy() {
        return strategy;
    }

    @Override
    public void onReceive(Object message) throws Exception {
        if (message.equals("SPLIT")) {
            // CREATE THE DataSplitter CHILD ON FIRST USE
            if (dataSplitter == null) {
                dataSplitter = getContext().actorOf(
                        FromConfig.getInstance().props(Props.create(DataSplitter.class)), "DataSplitter");
                // WATCH THE CHILD
                getContext().watch(dataSplitter);
                log.info("{} created and is watching DataSplitter, forwarding message {}", getSelf().path(), message);
            }
            // forward so the DataSplitter sees the original sender
            dataSplitter.forward(message, getContext());
        } else {
            unhandled(message);
        }
    }
}
DataSplitter Actor
class DataSplitter extends UntypedActor {

    private final LoggingAdapter log = Logging.getLogger(getContext().system(), this);

    // Inject a service to do the main operation
    private DataSplitterService dataSplitterService;

    @Override
    public void onReceive(Object message) throws Exception {
        if (message.equals("SPLIT")) {
            log.info("{} received message: {} from {}", getSelf().path(), message, getSender());
            // delegate the actual splitting work to the service
            dataSplitterService.splitData();
        }
    }
}
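To actually build the hierarchy in code, a bootstrap along these lines would start the master and let it create its children (the system and actor names here are just illustrative):
import akka.actor.ActorRef;
import akka.actor.ActorSystem;
import akka.actor.Props;

public class Bootstrap {

    public static void main(String[] args) {
        ActorSystem system = ActorSystem.create("processing-system");
        // /user/master becomes the root of your own hierarchy
        ActorRef master = system.actorOf(Props.create(MasterSupervisor.class), "master");
        // the master lazily creates /user/master/DataSplitter and forwards the message to it
        master.tell("SPLIT", ActorRef.noSender());
    }
}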
I have a Play Framework 2 application that also uses Akka. I have an actor that receives messages from a remote system, and the number of such messages can be very large. After a message is received, I log it to the database (using the built-in Ebean ORM) and then continue to process it. I don't care how fast this database logging is, but it definitely should not block the further processing. Here is a simplified code sample:
public class MessageReceiver extends UntypedActor {

    @Override
    public void onReceive(Object message) throws Exception {
        if (message instanceof ServerMessage) {
            ServerMessage serverMessage = (ServerMessage) message;
            ServerMessageModel serverMessageModel = new ServerMessageModel(serverMessage);
            serverMessageModel.save(); // blocking Ebean call
            // now send the message to another actor for further processing
        } else {
            unhandled(message);
        }
    }
}
As I understand it, the database insert is blocking in this implementation, so it does not meet my needs. But I can't figure out how to make it non-blocking. I've read about the Future class, but I can't get it to work, since it should return some value, and serverMessageModel.save() returns void. I understand that writing a lot of messages into the database one by one is inefficient, but that is not the issue at the moment.
Am I right that this implementation is blocking? If it is, how can I make it run asynchronously?
The Future solution seems good to me. I haven't used Futures from Java, but you can just return an arbitrary Integer or String if you definitely need some return value.
The other option is to send the message to some other actor which does the saving to the DB (see the sketch below). Then you should make sure that the mailbox of that actor does not overfill.
Have you considered akka-persistence for this? Maybe that would suit your use-case.
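A minimal sketch of that second option, reusing the ServerMessage and ServerMessageModel classes from the question (just an illustration, not a complete solution):
public class DbWriterActor extends UntypedActor {

    @Override
    public void onReceive(Object message) throws Exception {
        if (message instanceof ServerMessage) {
            // the blocking save happens here, on this actor's thread, off the receiver's hot path
            new ServerMessageModel((ServerMessage) message).save();
        } else {
            unhandled(message);
        }
    }
}
The MessageReceiver would then create it once, e.g. getContext().actorOf(Props.create(DbWriterActor.class), "db-writer"), and simply tell it each ServerMessage instead of calling save() itself.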
If you wish to use a Future: construct an Akka Future with a Callable (anonymous class) whose call() actually implements the db save code. You can put all of this (future creation and call()) in your ServerMessageModel class; maybe call it asyncSave(). Your Future may be a Future<Status>, where Status is the result of asyncSave...
public Future<Status> asyncSave(...) { /* should the params be ServerMessageModel? */
    return Futures.future(new Callable<Status>() {
        public Status call() {
            /* do db work here and return a Status describing the outcome */
        }
    }, ec); // ec is an ExecutionContext, e.g. context().dispatcher() inside an actor
}
In your onReceive you can then go ahead with the tell to the other actor. Note: if you want to make sure you only fire the tell to the other actor after this future completes, you can use the Future's onSuccess callback.
Future<Status> f = serverMessageModel.asyncSave();
f.onSuccess(new OnSuccess<Status>() {
    public void onSuccess(Status result) { otherActor.tell(serverMessage, self()); }
}, context().dispatcher());
You can also do failure handling... see http://doc.akka.io/docs/akka/2.3.4/java/futures.html for further details.
Hope that helps.
Persist actor state with Martin Krasser's akka-persistence extension and my JDBC persistence provider, akka-persistence-jdbc: https://github.com/dnvriend/akka-persistence-jdbc
I am creating a set of widgets in Java that decodes and displays messages received at a serial interface.
The message type is defined by a unique identifier.
Each widget is only interested in a particular identifier.
How do I program the application in a way that distributes the messages correctly to the relevant widgets?
If this is for a single app (i.e. a main and couple of threads), JMS is overkill.
The basics of this is a simple queue (of which Java has several good ones, BlockingQueue waving its hand in the back over there).
The serial port reads its data, formats a relevant message object, and dumps it on a central message queue. This can be as simple as a BlockingQueue singleton.
Next, you'll need a queue listener/dispatcher.
This is a separate thread that sits on the queue, waiting for messages.
When it gets a message it then dispatches it to the waiting "widgets".
How it "knows" what widgets get what is up to you.
It can be a simple registration scheme:
String messageType = "XYZ";
MyMessageListener listener = new MyMessageListener();
EventQueueFactory.registerListener(messageType, listener);
Then you can do something like:
public void registerListener(String type, MessageListener listener) {
    List<MessageListener> listeners = registrationMap.get(type);
    if (listeners == null) {
        listeners = new ArrayList<MessageListener>();
        registrationMap.put(type, listeners);
    }
    listeners.add(listener);
}

public void dispatchMessage(Message msg) {
    // assumes Message exposes the type it was registered under
    List<MessageListener> listeners = registrationMap.get(msg.getType());
    if (listeners != null) {
        for (MessageListener listener : listeners) {
            listener.send(msg);
        }
    }
}
Also, if you're using Swing, it has a whole suite of Java Bean property listeners and what not that you could leverage as well.
That's the heart of it. That should give you enough rope to keep you in trouble.
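For illustration, a minimal sketch of that central queue plus dispatcher thread; the Message type and the dispatcher object holding registerListener/dispatchMessage are assumptions carried over from the code above:
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

class MessagePump implements Runnable {

    // the serial-port reader calls MessagePump.QUEUE.put(msg) for each decoded message
    static final BlockingQueue<Message> QUEUE = new LinkedBlockingQueue<Message>();

    private final MessageDispatcher dispatcher; // hypothetical wrapper around registerListener/dispatchMessage

    MessagePump(MessageDispatcher dispatcher) {
        this.dispatcher = dispatcher;
    }

    public void run() {
        try {
            while (!Thread.currentThread().isInterrupted()) {
                Message msg = QUEUE.take();       // blocks until something arrives
                dispatcher.dispatchMessage(msg);  // fan out to the registered widget listeners
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();   // exit cleanly when interrupted
        }
    }
}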
Sounds like a JMS topic/subscription. Why reinvent the wheel?
One easy way to do this is to add each widget to a map by ID, and to provide each message to the widget by pulling it out of the map and calling some method on it. This means that each widget has to implement an interface that you can call to display the message. If the widgets are not in your control, then you can create a thin wrapper class (implementing an interface) and add this wrapper class -- with a widget -- to the map, one instance per ID.
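A rough sketch of that map-by-ID approach (all names here are hypothetical; the widgets, or thin wrappers around them, implement the display interface):
import java.util.HashMap;
import java.util.Map;

interface MessageDisplay {
    void display(String payload);
}

class WidgetDispatcher {

    private final Map<String, MessageDisplay> widgetsById = new HashMap<String, MessageDisplay>();

    // register a widget (or a thin wrapper around one) under the identifier it cares about
    void register(String messageId, MessageDisplay widget) {
        widgetsById.put(messageId, widget);
    }

    // route a decoded message to the single widget interested in its identifier
    void dispatch(String messageId, String payload) {
        MessageDisplay target = widgetsById.get(messageId);
        if (target != null) {
            target.display(payload);
        }
    }
}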