How to invoke two actors at a time, i.e. in parallel? - java

I have a scenario where two functionalities run in parallel.
Below is sample pseudocode.
MainActor{
// retrieve company ids
// for each company id I need to run two other actions simultaneously
tell(A_Actor)
tell(B_Actor)
// If I call the above, they run sequentially, i.e. first tell(A_Actor) runs,
// then tell(B_Actor).
// If tell(A_Actor) fails, it won't run tell(B_Actor).
}
A_Actor{
// do ingest into a file.
}
B_Actor{
// do ingest into a DB.
}
Question:
How do I make the two functionalities, i.e. tell(A_Actor) and tell(B_Actor), run in parallel?

The tell method is asynchronous. When you fire a tell to actorA, it doesn't wait until actorA finishes or crashes before executing the next action, which here is to tell actorB.
If you need to parallelize the two tell calls themselves, then you can do the following:
val tellActions = Vector(() => actorA.tell(messageA, senderActor), () => actorB.tell(messageB, senderActor))
tellActions.par.foreach(_.apply())
Note that this is Scala code
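A rough Java analogue (a sketch, assuming Akka classic ActorRef references and message objects that are not shown in the question; as the next answer explains, this is rarely worth doing) would be:

import akka.actor.ActorRef;
import java.util.Arrays;
import java.util.List;

class ParallelTell {
    // Fire both (already asynchronous) tell calls from parallel worker threads.
    static void tellBoth(ActorRef actorA, Object messageA,
                         ActorRef actorB, Object messageB,
                         ActorRef sender) {
        List<Runnable> tellActions = Arrays.asList(
                () -> actorA.tell(messageA, sender),
                () -> actorB.tell(messageB, sender));
        tellActions.parallelStream().forEach(Runnable::run);
    }
}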

This has been pointed out in several comments (including mine), but I felt it deserved an answer.
In short, you need to distinguish between calling the tell method in parallel and having the functionality that the actors execute inside their receive methods run in parallel. The functionality will be executed in parallel automatically, and calling the tell method in parallel doesn't make any sense.
The code you show will execute the ingest into a file and the ingest into the DB in parallel. This is automatic and requires no action on your part; this is how actors and tell work. And, despite what you say, if something goes wrong with the file ingestion it will not affect the ingestion into the DB. (Assuming you built the actors and messages correctly, since you don't show their implementation.)
The tell method is asynchronous: it returns almost immediately and doesn't perform the actual logic (ingestion in this case); the only thing it does is place the message in the recipient's mailbox. Ismail's answer shows how you could, in theory, "invoke tell" in parallel, but in that example you sequentially create the collection that is used for the parallel tells, and the whole process is very inefficient. His code, while technically doing what you ask, accomplishes nothing in practice except slowing the code down.
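As a minimal sketch of this (Akka classic actors in Java; the class names and message shapes are illustrative, not taken from the question), telling both child actors from the parent is all that is needed for their work to proceed concurrently:

import akka.actor.AbstractActor;
import akka.actor.ActorRef;
import akka.actor.Props;

// Illustrative child actor: stands in for "ingest into a file".
class AActor extends AbstractActor {
    @Override
    public Receive createReceive() {
        return receiveBuilder()
                .match(String.class, companyId -> {
                    // ... write this company's data to a file ...
                })
                .build();
    }
}

// Illustrative child actor: stands in for "ingest into a DB".
class BActor extends AbstractActor {
    @Override
    public Receive createReceive() {
        return receiveBuilder()
                .match(String.class, companyId -> {
                    // ... write this company's data to the database ...
                })
                .build();
    }
}

class MainActor extends AbstractActor {
    // Child actors created when this actor starts.
    private final ActorRef aActor = getContext().actorOf(Props.create(AActor.class));
    private final ActorRef bActor = getContext().actorOf(Props.create(BActor.class));

    @Override
    public Receive createReceive() {
        return receiveBuilder()
                .match(String.class, companyId -> {
                    // Both tells return immediately; A_Actor and B_Actor then process
                    // the message concurrently on the dispatcher's threads.
                    aActor.tell(companyId, getSelf());
                    bActor.tell(companyId, getSelf());
                })
                .build();
    }
}

Each actor processes its mailbox on the dispatcher's thread pool, so the file ingestion and the DB ingestion run independently; a failure in A_Actor is handled by its supervisor and does not stop B_Actor.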
In short, I think either:
You have something fundamentally wrong with your actors and how you are calling them, or
You actually are executing the functionality in parallel and just aren't realizing it, because you are measuring or observing something incorrectly.

Related

Concurrent findAndModify queries succeed in updating the same document

I have a task worker written in Java, using a MongoDB 3.4 replica set, that runs many threads, each doing essentially this:
Run task
Signal that task is complete by updating a document for that task in MongoDB
Run a query to see if all the tasks in this set of tasks are done
If so, continue to next stage of processing
Otherwise, do nothing
As you may be able to see, there is a race condition here; multiple tasks can all finish at about the same time and think that they are the last task to complete. I want to use MongoDB to make sure only one of those tasks is allowed to start the next stage of processing.
I have the following code that is meant to ensure that only one of those tasks can continue (I'm using Jongo to interface with MongoDB).
Chipset modified = chipsets
.findAndModify("{_id: #, status: {$ne: #}}", new Object[] { chipset.getId(), Chipset.Status.Queued })
.with("{$set: {status: #}}", new Object[] { Chipset.Status.Queued })
.returnNew().as(Chipset.class);
if (modified != null)
runNextProcessingStep();
Pretty simple here; I'm just using findAndModify to change the status of the Chipset (set of tasks) to Queued. The one that successfully makes the change gets to execute runNextProcessingStep().
Or that's how I think it should work. In reality, several tasks, even ones that finish 2 seconds apart, are somehow getting back a non-null modified. As I understand it, MongoDB locks the document while running findAndModify, so a non-null document should be returned to at most one of the concurrent callers.
I've read Linearizable Reads via findAndModify and have implemented everything said in there. I've set the connection write concern to Majority and the read concern to Linearizable. I've created a unique composite index on _id and status. Still nothing. Perhaps I have misunderstood how findAndModify actually behaves? What am I doing wrong?
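For comparison, here is the same gate written against the plain MongoDB Java driver (a sketch; the collection and field names are assumed from the question). It relies on the same atomicity: once one caller flips the status to Queued, the filter no longer matches for anyone else.

import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Filters;
import com.mongodb.client.model.FindOneAndUpdateOptions;
import com.mongodb.client.model.ReturnDocument;
import com.mongodb.client.model.Updates;
import org.bson.Document;
import org.bson.conversions.Bson;

class ChipsetGate {
    // Returns true for exactly one caller per chipset: findOneAndUpdate is atomic,
    // so after the status is set to "Queued" the filter no longer matches.
    static boolean tryClaim(MongoCollection<Document> chipsets, Object chipsetId) {
        Bson filter = Filters.and(
                Filters.eq("_id", chipsetId),
                Filters.ne("status", "Queued"));
        Document modified = chipsets.findOneAndUpdate(
                filter,
                Updates.set("status", "Queued"),
                new FindOneAndUpdateOptions().returnDocument(ReturnDocument.AFTER));
        return modified != null;
    }
}

Only the caller that gets a non-null document back proceeds to runNextProcessingStep().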
Well, this is embarrassing but in the interest of being a good internet citizen I'll update this with what happened. There was another thread that was changing statuses out from under me. I had convinced myself this couldn't be the case but, well, concurrency can be a real pain sometimes. findAndModify works exactly how I thought it should.

Execute the method after previous method finishes

I am making an Android app in which I fetch data from the internet and store it in an ArrayList with a custom adapter. Fetching the data takes time, and during that time the next function runs on its own. I only want the next function to run when the data has been completely fetched. What can I do? I think it has something to do with threads; kindly explain what threads are and how I can use them.
Let's say there are 2 functions
Function A
Function B
I only want function B to run when function A has completed its task. Is there any way to do that?
There are lots of resources available online where you can obtain information on Threads in Java.
I highly recommend the official Java Documentation.
This Introduction isn't half bad either.
As for obtaining information in one method and then waiting until it is done to run the next: as @cHao said, just call the methods sequentially, like this:
A();
B();
Unless you already have multiple threads set up in your code, this should work just fine.
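If the fetch in A has to run off the main thread (as network calls on Android do), a minimal plain-Java sketch (the fetchData/showData names are illustrative) is to run A on a background thread and invoke B only when A's result is ready:

import java.util.Arrays;
import java.util.List;
import java.util.concurrent.CompletableFuture;

class FetchThenShow {
    // A: stands in for the slow network fetch.
    static List<String> fetchData() {
        // ... perform the HTTP call and parse the response ...
        return Arrays.asList("item1", "item2");
    }

    // B: runs only after fetchData() has completed.
    static void showData(List<String> data) {
        // On Android, post UI updates back to the main thread (e.g. via runOnUiThread).
        data.forEach(System.out::println);
    }

    public static void main(String[] args) {
        CompletableFuture
                .supplyAsync(FetchThenShow::fetchData)  // run A on a worker thread
                .thenAccept(FetchThenShow::showData)    // then run B with A's result
                .join();                                // block here only for the demo
    }
}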

JVM: is it possible to manipulate frame stack?

Suppose I need to execute N tasks in the same thread. The tasks may sometimes need values from external storage. I have no idea in advance which task may need such a value, or when. It is much faster to fetch M values in one go than to fetch the same M values in M separate queries to the external storage.
Note that I cannot expect cooperation from the tasks themselves; they can be considered as nothing more than java.lang.Runnable objects.
Now, the ideal procedure, as I see it, would look like
Execute all tasks in a loop. If a task requests an external value, remember this, suspend the task and switch to the next one.
Fetch the values requested at the previous step, all at once.
Remove all completed tasks (suspended ones don't count as completed).
If there are still tasks left, go to step 1, but instead of executing a task, continue its execution from the suspended state.
As far as I see, the only way to "suspend" and "resume" something would be to remove its related frames from JVM stack, store them somewhere, and later push them back onto the stack and let JVM continue.
Is there any standard (not involving hacking at lower level than JVM bytecode) way to do this?
Or can you maybe suggest another possible way to achieve this (other than starting N threads or making tasks cooperate in some way)?
It's possible using something like quasar that does stack-slicing via an agent. Some degree of cooperation from the tasks is helpful, but it is possible to use AOP to insert suspension points from outside.
(IMO it's better to be explicit about what's going on, using e.g. Future and ForkJoinPool. If some plain code runs on one thread for a while and is then "magically" suspended and jumps to another thread, this can be very confusing to debug or reason about. With modern languages and libraries the overhead of being explicit about the asynchronicity boundaries should not be overwhelming. If your tasks are written in terms of generic types then it's fairly easy to pass through something like scalaz Future. But that wouldn't meet your requirements as given.)
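A minimal sketch of that explicit style (the class name and the String key/value types are illustrative, not from the question): each task asks a batching service for the value it needs and gets a CompletableFuture back, and the driver fetches all requested keys in one round trip before completing them:

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.CompletableFuture;

class BatchingFetcher {
    private final Map<String, CompletableFuture<String>> pending = new HashMap<>();

    // Called by tasks: registers interest in a key and returns a future for its value.
    CompletableFuture<String> request(String key) {
        return pending.computeIfAbsent(key, k -> new CompletableFuture<>());
    }

    // Called by the driver between rounds: fetch every requested key in one go,
    // then complete the futures so the waiting continuations can run.
    void fetchAll() {
        Map<String, CompletableFuture<String>> round = new HashMap<>(pending);
        pending.clear(); // continuations may register new requests for the next round
        Map<String, String> values = externalBulkFetch(new ArrayList<>(round.keySet()));
        round.forEach((key, future) -> future.complete(values.get(key)));
    }

    // Stand-in for the real bulk query against the external storage.
    private Map<String, String> externalBulkFetch(List<String> keys) {
        Map<String, String> result = new HashMap<>();
        keys.forEach(k -> result.put(k, "value-of-" + k));
        return result;
    }
}

Tasks then look like fetcher.request("someKey").thenAccept(value -> ...) rather than plain Runnables, which is exactly the cooperation the question rules out; Quasar-style fibers sidestep that by instrumenting the bytecode instead.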
As mentioned, Quasar does exactly that (it usually schedules N fibers on M threads, but you can set M to 1), using bytecode transformations. It even gives each task (AKA "fiber") its own stack trace, so you can dump it and get a complete stack trace without any interference from any other task sharing the thread.
Well, you could try this. You need:
A mechanism to save the current state of a task, because when the task returns, its frame is popped from the call stack. Based on the return value (or something similar) you can determine whether it completed or not; since you will need to re-execute it from the point where it left off, you need to preserve that state information.
A request data structure for each task. Whenever a task wants to request something, it logs the request there. The data structure should support every kind of request a task can make.
Store these data structures in a map. At the end of the loop you can query them to determine the kind of resource required by each task.
Fetch the resources, put them into the data structures, and restart each task from the state in which it returned.
The task then queries its data structure and gets the resource; it should use this data structure whenever it wants to use an external resource.
You will need to design the method through which a resource is requested with special care, because when you re-execute the task you will call this method yourself so that the task can continue from where it left off.
Hope it helps.

Time Based Streaming

I am trying to figure out how to get time-based streaming but on an infinite stream. The reason is pretty simple: Web Service call latency results per unit time.
But, that would mean I would have to terminate the stream (as I currently understand it) and that's not what I want.
In words: if 10 WS calls came in during a 1-minute interval, I want a list/stream of their latency results (in order) passed to stream processing. But obviously I hope to get more WS calls, at which point I would want to invoke the processors again.
I could totally be misunderstanding this. I had thought of using Collectors.groupingBy(x -> someTimeGrouping), so that all calls are grouped by whatever measurement interval I chose. But then no code will be aware of this until I call a closing function, at which point the monitoring process is done.
I'm just trying to learn Java 8 by applying it to previous code.
By definition and construction a stream can only be consumed once, so if you send your results to an infinite stream, you will not be able to access them more than once. Based on your description, it would make more sense to store the latency results in a collection, say an ArrayList, and use the stream functionality to group them whenever you need to analyse the data.
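A minimal sketch of that approach (the LatencySample type and the epoch-minute bucket key are assumptions, not from the question):

import java.time.Instant;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

class LatencyStats {
    // One recorded web-service call.
    static class LatencySample {
        final Instant timestamp;
        final long latencyMillis;
        LatencySample(Instant timestamp, long latencyMillis) {
            this.timestamp = timestamp;
            this.latencyMillis = latencyMillis;
        }
    }

    // Filled as calls complete; synchronized because calls arrive concurrently.
    private final List<LatencySample> samples =
            Collections.synchronizedList(new ArrayList<>());

    void record(long latencyMillis) {
        samples.add(new LatencySample(Instant.now(), latencyMillis));
    }

    // Group the latencies seen so far into one-minute buckets (epoch minute -> latencies).
    Map<Long, List<Long>> latenciesPerMinute() {
        synchronized (samples) {
            return samples.stream().collect(Collectors.groupingBy(
                    s -> s.timestamp.getEpochSecond() / 60,
                    Collectors.mapping(s -> s.latencyMillis, Collectors.toList())));
        }
    }
}

Each call to latenciesPerMinute() re-streams whatever has been recorded so far, so the stream itself never needs to be infinite or kept open.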

How to Synchronize Red5 NetConnection Calls

I am developing an online game using Red5 and Flex over an RTMP connection. I use only netConnection.call. My issue is that the Red5 calls are not reaching the client in a synchronized manner: some calls arrive at the client immediately, while others take time. I want the calls to reach the client side in order. Please, can anyone help?
The following are my opinions; I'm sure there are far better ways to do this.
Write a class that is responsible for executing NetConnection.call. In this class, make sure that no call is made before the previous one has completed. This ensures ordering, but slows execution.
Alternatively, write a class built around a data structure, maybe an array in its simplest form. The array holds objects that record the call order, the callback function and the result returned from the server. When you make a call, add it to the array in calling order. When you receive a result from the server, check the array: if previous calls have not returned yet, store the result; if no previous calls are pending, invoke your callback function, along with the callbacks of any calls that were made later but finished earlier than this one, and remove those items from the array.
But (there is always a "but" in Red5), if your application needs results in order, maybe you should reconsider your architecture. Most of the time, a carefully thought-out event-handling mechanism removes the need for ordered results.
Red5 offers two application adapters which support synchronized and multithreaded access. To use them, simply extend org.red5.server.adapter.ApplicationAdapter for synchronized access, or org.red5.server.adapter.MultiThreadedApplicationAdapter for multithreaded access, in your application.
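A minimal server-side sketch (class and method names are illustrative, not from the question): public methods on the adapter are what the Flex client can invoke with netConnection.call.

import org.red5.server.adapter.ApplicationAdapter;

// Sketch of an application handler; extending ApplicationAdapter gives the
// synchronized variant mentioned above.
public class GameApplication extends ApplicationAdapter {

    // Illustrative remote method, invoked from Flex as
    // netConnection.call("echo", responder, "hello").
    public String echo(String message) {
        return message;
    }
}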
