Currently I'm learning how to use RxJava. I fully understand the concept of the reactive programming paradigm, where the program needs to react to certain kinds of changes (user input, sensor data, etc.).
A lot of tutorials, and even the RxJava GitHub page, explain RxJava in a very simple way: create an Observable and an Observer, subscribe the Observer to the Observable, and you get the stream of data you just created manually yourself. So the way I see it, every time I restart the program / app my Observer subscribes --> gets the data --> and then receives onComplete. Does that mean that the Observer is still subscribed to the Observable at this point? Or does an onComplete message unsubscribe the Observer?
I just can't get my head around this. I am thinking of a program (app or backend service) that gets random sensor data from a local Arduino. The sensor data comes in at random times and in totally random variety. Can I do a one-time subscription so that, as long as the program runs (on my server or on my smartphone), the Observer stays subscribed to the specific "sensor data" Observable and, even after onComplete (after receiving the arriving data), keeps listening for the next data that eventually comes from the sensor?
Is that right? Or do I have some kind of misunderstanding?
Once you get onComplete, the stream is considered terminated. Check:
http://www.reactive-streams.org/reactive-streams-1.0.3-javadoc/org/reactivestreams/Subscriber.html#onComplete()
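For a sensor feed like the one you describe, the usual approach is a source that simply never calls onComplete(). A minimal sketch, assuming RxJava 2 and using plain doubles in place of a real sensor reading type:

import io.reactivex.subjects.PublishSubject;

public class SensorStreamSketch {
    public static void main(String[] args) {
        // A hot, never-completing stream of sensor readings (plain doubles stand in for real readings).
        PublishSubject<Double> sensorData = PublishSubject.create();

        sensorData.subscribe(
                reading -> System.out.println("got " + reading),   // onNext: fired for every reading
                Throwable::printStackTrace,                        // onError: terminates the stream
                () -> System.out.println("completed"));            // onComplete: also terminates the stream

        // Wherever the Arduino/driver hands you data, push it into the subject:
        sensorData.onNext(21.5);
        sensorData.onNext(21.7);   // the subscriber keeps receiving these for as long as the program runs

        // Only call onComplete() when the source is really finished; after that the stream is
        // terminated and the subscriber receives nothing more.
        // sensorData.onComplete();
    }
}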
Related
My programme is a notification service; it basically receives HTTP requests (clients send notifications) and forwards them to a device.
I want it to work the following way:
1. receive the client's notification request
2. save it to the database (yes, I need this step, it's mandatory)
3. async threads watch for new requests in the database
4. async threads forward them to the destination (the device)
In this case the programme can send the client a confirmation straight away after step 2,
thus not waiting for the destination to respond (the device's response time can be too long).
If I stored the client notifications in memory I would use a BlockingQueue. But I need to persist my notifications in the DB. Also, I cannot use message queues, because clients want REST endpoints to send notifications.
Help me work out the architecture of such a mechanism.
PS: In Java, PostgreSQL
Here are some ideas that can lead to the solution:
Probably step 2 is mandatory to make sure that the request is persisted so that it can be queried later. So we're talking about some "data model" here.
With this in mind, if you "send" the confirmation "right away after step 2": what if later you want to do some action with this data (say, send it somewhere) and this action doesn't succeed? Do you store it on disk? What happens if the disk is full?
The most important question is what happens to your data model (in the database) in this case. Should the entry in the database still be there, or has the whole "logical" action failed? This is something you should figure out; depending on the actual system, the answers can be different.
The most "strict" solution would use transactions in the following (schematic) way:
tr = openTransaction();
try {
    saveRequestIntoDB(data);       // step 2: persist the request
    forwardToDestination(data);    // forward to the device
    tr.commit();                   // both steps succeeded: make the record permanent
} catch (SomeException ex) {
    tr.rollback();                 // either step failed: the record disappears as well
}
With this design, if something goes wrong during the "saveRequest" step, nothing will happen at all. If the data is stored in the DB but then forwardToDestination fails, the transaction will be rolled back and the record won't be stored in the DB.
If all the operations succeed, the transaction will be committed.
Now it looks like you can still use a messaging system in step 4. Sending a message can be fast and won't add any significant overhead to the whole request.
On the other hand, the benefits of a messaging system are obvious:
- Who listens to these "notifications"? If you send something and only one service should receive and process the notification, how do you make sure that the others won't get it? How would you implement the opposite: what if all the services should get the notification and process it independently?
These facilities are already implemented by any decent messaging system.
I can't really understand the statement:
I cannot use message queues, because clients want REST endpoints to send notifications.
Since the whole flow is originated by the client's request, I don't see any contradiction here. The code that is called from the REST endpoint (which, after all, is a logical entry point that should be implemented by you) can call the database, persist the data and then send the notification...
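As a rough sketch of that entry point (assuming a Spring-style REST controller; NotificationRepository, ForwardingQueue and Notification are hypothetical stand-ins for your own types):

import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class NotificationController {

    private final NotificationRepository repository;  // persists notifications to PostgreSQL (hypothetical)
    private final ForwardingQueue forwardingQueue;    // watched by the async forwarding threads (hypothetical)

    public NotificationController(NotificationRepository repository, ForwardingQueue forwardingQueue) {
        this.repository = repository;
        this.forwardingQueue = forwardingQueue;
    }

    @PostMapping("/notifications")
    public ResponseEntity<Void> accept(@RequestBody Notification notification) {
        repository.save(notification);                  // step 2: persist the request
        forwardingQueue.enqueue(notification.getId());  // steps 3/4: async threads pick it up and forward it
        return ResponseEntity.accepted().build();       // confirm to the client straight away
    }
}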
I have a service (ServiceA) with an endpoint to which clients can subscribe, and after subscription this service produces data continuously using server-sent events.
If this is important, I am using Project Reactor with Java.
It may be important, so I'll explain what this endpoint does. Every 15 seconds it fetches data from another service (ServiceB), checks whether there are any changes compared to the data it fetched 15 seconds ago, and if there are, it produces a new event with this data; if there are no changes, it does not send anything (so the payload to the client is as small as possible).
Now, this application can have multiple clients connected at once and they all ask for the same data - it is not filtered per user etc.
Is it sensible for this observable producing the output to be shared between multiple clients?
Of course it would save us a lot of unnecessary calls to ServiceB, but I wonder whether there are any contraindications to this approach - it is the first time I am writing a reactive program on the backend (coming from RxJS) and I don't know if this would cause any concurrency problems or any other sort of problems.
The other benefit I can see is that a newly connecting client would immediately be served the last data received from ServiceB (it usually takes about 4 s per call to retrieve this data).
I also wonder whether it would be possible for this observable to call ServiceB only when there are subscribers - i.e. as long as there is at least one subscriber, call the service; when there are no subscribers, stop calling it; when a new subscriber subscribes, call it again, but first give that client the last fetched data (no matter how old or stale it may be).
Your SSE source can perfectly well be shared using the following pattern:
source.publish().refCount();
Note that you need to store the return value of that call and return that same instance to subsequent callers in order for the sharing to occur.
Once all subscribers unsubscribe, refCount will also cancel its subscription to the original source. After that the first subscriber to come in will trigger a new subscription to the source, which you should craft so that it fetches the latest data and re-initializes a polling cycle every 15s.
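A rough sketch of how such a shared polling source could be assembled in Project Reactor (Data and the roughly-4-second fetchFromServiceB() call are hypothetical stand-ins for your ServiceB client):

import java.time.Duration;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

public class SharedServiceBUpdates {

    // Poll ServiceB every 15 s, but only while at least one client is subscribed,
    // and replay the latest value to clients that connect later.
    private final Flux<Data> sharedUpdates = Flux.interval(Duration.ZERO, Duration.ofSeconds(15))
            .concatMap(tick -> fetchFromServiceB())   // hypothetical ServiceB client returning Mono<Data>
            .distinctUntilChanged()                   // emit only when the data actually changed
            .replay(1)                                // remember the last value for new subscribers
            .refCount();                              // connect on the first subscriber, cancel after the last leaves

    // Every SSE handler returns this same instance so that all clients share one polling cycle.
    public Flux<Data> updates() {
        return sharedUpdates;
    }

    private Mono<Data> fetchFromServiceB() {
        // placeholder for the real (roughly 4 s) call to ServiceB
        return Mono.empty();
    }
}

Using replay(1) instead of publish() additionally replays the last emitted value to a client that connects between polling ticks, which covers the requirement of serving the latest data immediately.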
Here is the problem:
I have a network request which downloads some information. However, it is essential that this request is made only once during some period of time (you will get the idea later on) and that all subscribers get the same result. My first thought was to use the share() operator, so it would multicast the result while keeping a single request source. But I am not sure what is going to happen if I try to subscribe to it again after the share operator has already disposed the resources due to refCount dropping to 0.
The thing I am trying to accomplish here is that every request I make depends on the current state of the stored information, and those requests update this information. Once I make the first request, I need to keep a reference to it and inform every subscriber that subscribes until the request completes. After the request is finished, all subscribers get their notification and unsubscribe. However, if there is a new subscription after the disposal, I need it to repeat the request, thus resubscribing to the original source observable that was modified using share.
Is something like this possible with the simple share operator, or do I need to create a Subject and control the emissions manually?
There is a nice library, RxReplayingShare, which I think does exactly what you are trying to achieve.
It passes the same result to all Subscribers while at least one is subscribed. When there are no subscribers anymore, the Observable completes. When subscribing again, the original Observable is called.
The RxMarble diagram shows it better than the description.
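A minimal usage sketch, assuming RxJava 2 with Jake Wharton's RxReplayingShare on the classpath (the import below matches its RxJava 2 artifact as far as I recall; Result, networkRequest() and handle() are hypothetical):

import com.jakewharton.rx.ReplayingShare;
import io.reactivex.Observable;

// Multicast a single in-flight request; every subscriber attached while it is
// running gets the same result, and a subscriber arriving after everyone has
// unsubscribed triggers a fresh request.
Observable<Result> shared = networkRequest()             // hypothetical method building the request Observable
        .compose(ReplayingShare.<Result>instance());

shared.subscribe(result -> handle(result));              // handle() is a hypothetical consumer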
I am developing a testing application on an architecture which is based on a producer-consumer structure. I have a producer-consumer problem, especially because a producer callback mechanism is used in an Android service, i.e. the consumer. Consumers are not supposed to hold the call for longer than the minimum time necessary to hand over the info, since the producer's callbacks run in a different thread than the consumer's.
In my specific case, within the Producer's callback only a reference to the passed object should be moved, and control released right away. The object has to be consumed in the consumer thread. Currently I call a method which takes the data arriving in the callback, processes it, and returns it via an Intent back to the Android Activity.
Now, Android Intents are well known to be resource-consuming entities which are not meant (and not supposed) to be used for transferring data streams.
Within the test app, one Intent is generated per callback. These overflow the whole system. For example, at 25% load, about a thousand Android Intents per second are triggered.
I want a way that does not involve Android Intents (and without any third-party jar) by which I can send data back to my Android Activity, or route it to the host machine, at a very high rate, so that my producer callback doesn't crash.
Use a socket connection between the Service and the Activity for streaming data. Intents are the wrong technique.
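A bare-bones sketch of that idea using android.net.LocalSocket (error handling and the surrounding threads are omitted; "sensor_stream" is an arbitrary socket name and payloadBytes stands for whatever you currently put into the Intent):

import android.net.LocalServerSocket;
import android.net.LocalSocket;
import android.net.LocalSocketAddress;
import java.io.DataInputStream;
import java.io.DataOutputStream;

// --- In the Service (producer side) ---
LocalServerSocket server = new LocalServerSocket("sensor_stream"); // the name is arbitrary
LocalSocket serviceSide = server.accept();                         // blocks until the Activity connects
DataOutputStream out = new DataOutputStream(serviceSide.getOutputStream());
// Inside the producer callback: hand the bytes over and return immediately.
out.write(payloadBytes);                                           // payloadBytes = whatever the Intent used to carry
out.flush();

// --- In the Activity (consumer side), on a background thread ---
LocalSocket activitySide = new LocalSocket();
activitySide.connect(new LocalSocketAddress("sensor_stream"));
DataInputStream in = new DataInputStream(activitySide.getInputStream());
byte[] buffer = new byte[4096];
int read = in.read(buffer);                                        // consume the data in the consumer's own thread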
I am working with Android USB Host mode and would like to perform an asynchronous bulk transfer. I have so far been successfully using synchronous bulk transfers, but am having a little trouble grasping how the pieces come together for an asynchronous transfer. From the UsbRequest documentation (bold mine):
Requests on bulk endpoints can be sent synchronously via bulkTransfer(UsbEndpoint, byte[], int, int) or asynchronously via queue(ByteBuffer, int) and requestWait() [a UsbDeviceConnection method].
OK, so does this mean I call queue() from the existing thread of execution and then requestWait() somewhere else, in another thread? Where does requestWait() get my logic from to execute when the request completes? Most of the async work I have done has been in languages like JavaScript and Python, generally by passing a callback function as an argument. In Java I would perhaps expect to pass an object that implements a specific method as a callback, but I can't see that happening anywhere. Perhaps my mental model of the whole thing is wrong.
Can someone provide an isolated example of sending an asynchronous bulk transfer?
Basically, the requestWait() method is going to return once a queued UsbRequest has completed. You can do this on the same thread or on another. Use the setClientData() and getClientData() methods to determine which request has just completed, assuming that you had more than one outstanding!
You can queue multiple UsbRequests across multiple endpoints and then consume their completion status by repeatedly calling requestWait() until you have no more outstanding requests.
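A bare-bones sketch of that flow (connection and endpoint are the UsbDeviceConnection and bulk IN UsbEndpoint you already obtained for the synchronous case; exception handling omitted):

import android.hardware.usb.UsbDeviceConnection;
import android.hardware.usb.UsbEndpoint;
import android.hardware.usb.UsbRequest;
import java.nio.ByteBuffer;

// Queue an asynchronous bulk transfer on an IN endpoint.
UsbRequest request = new UsbRequest();
request.initialize(connection, endpoint);            // bind the request to your connection/endpoint
ByteBuffer buffer = ByteBuffer.allocate(endpoint.getMaxPacketSize());
request.setClientData("my-transfer");                // any tag, so you can tell requests apart later
request.queue(buffer, buffer.capacity());            // returns immediately; the transfer runs in the background

// Later, on the same thread or on a dedicated reader thread:
UsbRequest completed = connection.requestWait();     // blocks until *some* queued request completes
if (completed != null && "my-transfer".equals(completed.getClientData())) {
    // For an IN endpoint the received bytes are now in 'buffer'.
}
request.close();                                     // release the request when you are done with it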