I'm learning RxJava. I am developing an application and I want to take the following points into account:
The user can work offline, or where the connection is very bad.
Always display up-to-date data to the user.
Prioritize displaying the data. By this I mean I don't want to wait for the network timeout before consulting the data on disk.
I have defined the following Observables in the application:
Observable<Data> disk = ...;
Observable<Data> network = ...;
Observable<Data> networkWithSave = network.doOnNext(data -> {
saveToDisk(data);
});
I have also declared the following subscriber:
new Observer<List<Items>>() {
    @Override
    public void onCompleted() {
        mView.hideProgressBar();
    }

    @Override
    public void onError(Throwable e) {
        mView.showLoadingError();
    }

    @Override
    public void onNext(List<Items> items) {
        processItems(items);
    }
}
I would like some advice on the correct way to concatenate these Observables. I want the data on disk to be displayed first; then the network should be checked, and if there is new data, the view updated. The network query could run in parallel, but if it returns before the disk query, the stale disk data should not be displayed. Thank you very much.
Perhaps Dan Lew's blog post can give you some ideas.
I think disk.concatWith(networkWithSave).subscribe(ui) will do.
Disk data (if any) will always come first, though.
If there is no data on disk, your disk source must complete without emitting anything. The disk source must never complete with an error, as that would effectively block your network source.
In your UI subscriber you may want to silently ignore onError (coming from the network) if you have already received data from disk.
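For completeness, a minimal sketch of those caveats wired together (RxJava 1 style, matching the onCompleted() subscriber above; swallowing network errors unconditionally is an assumption, you may want to do it only when disk actually emitted something):

// The disk source must complete empty (never error) so it cannot block the network source
Observable<Data> diskSafe = disk.onErrorResumeNext(Observable.empty());

// Silently ignore network errors so the cached data already on screen survives
Observable<Data> networkSafe = networkWithSave.onErrorResumeNext(Observable.empty());

// Disk data (if any) is emitted first, then the fresh copy from the network
diskSafe.concatWith(networkSafe)
        .subscribe(ui);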
I have a Spring Boot web application with the functionality to update an entity called StudioLinking. This entity describes a temporary, mutable, descriptive logical link between two IoT devices for which my web app is their cloud service. The links between these devices are ephemeral in nature, but the StudioLinking entity persists in the database for reporting purposes. StudioLinking is stored to the SQL-based datastore in the conventional way using Spring Data / Hibernate. From time to time this StudioLinking entity will be updated with new information from a REST API. When that link is updated, the devices need to respond (change colors, volume, etc.). Right now this is handled by polling every 5 seconds, but that creates lag between when a human user enters an update into the system and when the IoT devices actually update. It could be as little as a millisecond or up to 5 seconds! Clearly, increasing the polling frequency is unsustainable, and the vast majority of the time there are no updates at all!
So, I am trying to develop another REST API on this same application with HTTP long polling, which will return when a given StudioLinking entity is updated or after a timeout. The listeners do not support WebSocket or similar, leaving me with long polling. Long polling introduces a race condition: with consecutive messages, one message may be "lost" as it comes in between HTTP requests (while the connection is closing and reopening, a new update might come in and not be "noticed" if I used plain Pub/Sub).
It is important to note that this "subscribe to updates" API should only ever return the LATEST and CURRENT version of the StudioLinking, and should only do so when there is an actual update or when an update happened since the last check-in. The "subscribe to updates" client will initially POST an API request to set up a new listening session and pass that along so the server knows who they are, because it is possible that multiple devices will need to monitor updates to the same StudioLinking entity. I believe I can accomplish this by using separately named consumers in the Redis XREAD. (Keep this in mind for later in the question.)
After hours of research I believe the way to accomplish this is using Redis Streams.
I have found these two links regarding Redis Streams in Spring Data Redis:
https://www.vinsguru.com/redis-reactive-stream-real-time-producing-consuming-streams-with-spring-boot/
https://medium.com/@amitptl.in/redis-stream-in-action-using-java-and-spring-data-redis-a73257f9a281
I have also read this link about long polling; these examples just use a sleep timer during the long poll, which is fine for demonstration purposes, but obviously I want to do something useful:
https://www.baeldung.com/spring-deferred-result
Both of these links were very helpful. Right now I have no problem figuring out how to publish updates to the Redis Stream (this is untested "pseudo-code", but I don't anticipate any issues implementing it):
// In my StudioLinking entity
@PostUpdate
public void postToRedis() {
    StudioLinking link = this;
    ObjectRecord<String, StudioLinking> record = StreamRecords.newRecord()
            .ofObject(link)
            .withStreamKey(streamKey); // I am creating a stream for each individual linking, probably?
    this.redisTemplate
            .opsForStream()
            .add(record)
            .subscribe(System.out::println);
    atomicInteger.incrementAndGet();
}
But I fall flat when it comes to subscribing to said stream. Here is basically what I want to do; please excuse the butchered pseudocode, it is for illustrating the idea only. I am well aware that the code is in no way indicative of how the language and framework actually behave. :)
// Parameter linkId refers to the StudioLinking that the requester wants to monitor
// updatesId is a unique token to track individual consumers in Redis
@GetMapping("/subscribe-to-updates/{linkId}/{updatesId}")
public DeferredResult<ResponseEntity<?>> subscribeToUpdates(@PathVariable("linkId") Integer linkId, @PathVariable("updatesId") Integer updatesId) {
    LOG.info("Received async-deferredresult request");
    DeferredResult<ResponseEntity<?>> output = new DeferredResult<>(5000L);
    output.onTimeout(() ->
            output.setErrorResult(
                    ResponseEntity.status(HttpStatus.REQUEST_TIMEOUT)
                            .body("IT WAS NOT UPDATED!")));
    ForkJoinPool.commonPool().submit(() -> {
        //----------------------------------------------
        // Made up stuff... here is where I want to subscribe to a stream and block!
        //----------------------------------------------
        LOG.info("Processing in separate thread");
        try {
            // Subscribe to the Redis Stream, get any updates that happened between long polls,
            // then block until/if a new message comes over the stream
            var subscription = listenerContainer.receiveAutoAck(
                    Consumer.from(String.valueOf(linkId), String.valueOf(updatesId)),
                    StreamOffset.create(String.valueOf(linkId), ReadOffset.lastConsumed()),
                    streamListener);
            listenerContainer.start();
        } catch (InterruptedException e) {
        }
        output.setResult(ResponseEntity.ok("IT WAS UPDATED!"));
    });
    LOG.info("servlet thread freed");
    return output;
}
So is there a good explanation of how I would go about this? I think the answer lies within https://docs.spring.io/spring-data/redis/docs/current/api/org/springframework/data/redis/core/ReactiveRedisTemplate.html but I am not a big enough Spring power user to really understand the terminology in the Javadocs (the Spring documentation is really good, but the Javadocs are written in dense technical language which I appreciate but don't quite understand yet).
There are two more hurdles to my implementation:
My exact understanding of Spring is not at 100% yet. I haven't yet reached that a-ha moment where I fully understand why all these beans are floating around. I think this is key to why I am not getting things here: the Redis configuration is floating around in the Spring ether and I am not grasping how to just call it. I really need to keep investigating this (it is a huge hurdle of Spring for me).
These StudioLinking entities are short-lived, so I need to do some cleanup too. I will implement this later, once I get the whole thing off the ground; I do know it will be needed.
Why don't you use a blocking polling mechanism? There is no need for the fancy parts of spring-data-redis. Just use a simple blocking read with a 5-second timeout, so this call might take around 6 seconds or so in total. You can decrease or increase the blocking timeout.
class LinkStatus {
    private final boolean updated;

    LinkStatus(boolean updated) {
        this.updated = updated;
    }
}

// Parameter linkId refers to the StudioLinking that the requester wants to monitor
// updatesId is a unique token to track individual consumers in Redis
@GetMapping("/subscribe-to-updates/{linkId}/{updatesId}")
public LinkStatus subscribeToUpdates(
        @PathVariable("linkId") Integer linkId, @PathVariable("updatesId") Integer updatesId) {
    StreamOperations<String, String, String> op = redisTemplate.opsForStream();
    Consumer consumer = Consumer.from("test-group", "test-consumer");
    // auto-ack blocking stream read of size 1 with a timeout of 5 seconds
    StreamReadOptions readOptions = StreamReadOptions.empty().block(Duration.ofSeconds(5)).count(1);
    // consumer-group reads must use lastConsumed() (the ">" offset); "$"/latest is not valid for XREADGROUP
    List<MapRecord<String, String, String>> records =
            op.read(consumer, readOptions, StreamOffset.create("test-stream", ReadOffset.lastConsumed()));
    return new LinkStatus(!CollectionUtils.isEmpty(records));
}
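One assumption hiding in this sketch: the consumer group must already exist, or the read fails with NOGROUP. A hedged one-time setup, using the same stream and group names as above (catching and ignoring the "already exists" error is the usual approach):

// Create the group once, starting from the beginning of the stream.
// Redis raises BUSYGROUP if the group already exists; that is safe to ignore.
try {
    redisTemplate.opsForStream().createGroup("test-stream", ReadOffset.from("0"), "test-group");
} catch (RedisSystemException e) {
    // group already exists
}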
I have an issue that requires going from one step to the next strictly in sequence. I presently do this by setting three Runnables with different delays, which lets the data appear sequentially on my Bluetooth connection. However, whilst this does work, I feel there must be a better / cleaner implementation. My present code is below.
My code works like this:
Write command 1
Wait till command is written
Write command 2
Wait till command is written
Write command 3
Wait till command is written
Please could you give me some suggestions as to how I can perform my write functions one after another in a better manner.
Handler h = new Handler();
h.postDelayed(new Runnable() {
    public void run() {
        Log.d(TAG, "Write 1");
        mBluetoothLeService.writeCharacteristic(10);
    }
}, 1000);

Handler h1 = new Handler();
final int Command_to_run = value;
h1.postDelayed(new Runnable() {
    public void run() {
        Log.d(TAG, "Write 2");
        mBluetoothLeService.writeCharacteristic(Command_to_run);
    }
}, 2000);

Handler h2 = new Handler();
h2.postDelayed(new Runnable() {
    public void run() {
        Log.d(TAG, "Write 3");
        mBluetoothLeService.writeCharacteristic(20);
    }
}, 3000);
The write code:
public void writeCharacteristic(int Data) {
    if (mBluetoothAdapter == null || mBluetoothGatt == null) {
        Log.w(TAG, "BluetoothAdapter not initialized");
        return;
    }
    byte[] value = intToByteArray(Data);
    BluetoothGattService mCustomService =
            mBluetoothGatt.getService(UUID.fromString("f3641400-00b0-4240-ba50-05ca45bf8abc"));
    if (mCustomService == null) {
        Log.w(TAG, "Custom BLE Service not found");
        return;
    }
    /* get the write characteristic from the service */
    BluetoothGattCharacteristic characteristic =
            mCustomService.getCharacteristic(UUID.fromString("f3641401-00b0-4240-ba50-05ca45bf8abc"));
    characteristic.setValue(value);
    mBluetoothGatt.writeCharacteristic(characteristic);
}
I think calls like mBluetoothLeService.writeCharacteristic(10); already block the thread, so simply calling them in order, without the handlers, could be your solution. I don't think that function is asynchronous. They are boolean functions, so if one returns true you can move on to the next one.
I've examined the source code, and if an exception is thrown inside it, it returns false. Otherwise, it returns whether the operation was successful or not.
Side note: this behavior might differ between API versions; the source code I looked into was for API 29. I believe the behavior would be the same, though, except that you might need to wrap the mBluetoothLeService.writeCharacteristic(10); calls in a try-catch block.
I have to edit this since the answer is wrong: the boolean return value is not enough to determine whether the operation was successful. The operation is indeed asynchronous, but there is a callback you can use (this callback) to see if the write operation was successful, and then move on to the next one.
Check this answer for more information, and, if possible, remove the tick from this one please.
Android's BLE API is fully asynchronous and there are no blocking methods. The methods return true if the operation was initiated successfully and false otherwise. In particular, false will be returned if there is already an operation ongoing.
When you call requestMtu, writeCharacteristic, readCharacteristic and so on, the corresponding callback onMtuChanged, onCharacteristicWrite or onCharacteristicRead will be called when the operation completes. Note that this usually means a round trip to the remote device, which might take a varying amount of time depending on how noisy the environment is and which connection parameters you have, so it is never a good idea to sleep or delay some fixed amount of time and assume the operation has then completed.
To make the code structure a bit better and avoid the "callback hell", you could for example implement a (thread-safe) GATT queue which is then used by your application. That way you can just push what you want into the queue and let your GATT queue library handle the dirty stuff. Or you can use an Android BLE library that already does this.
See https://medium.com/@martijn.van.welie/making-android-ble-work-part-3-117d3a8aee23 for a more thorough discussion.
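A minimal sketch of such a queue (all names here are illustrative, not from the question's service): operations are queued, and the next one starts only from the write callback, never on a timer.

import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicBoolean;

// Thread-safe GATT queue: at most one BLE operation in flight at a time
private final Queue<Runnable> gattQueue = new ConcurrentLinkedQueue<>();
private final AtomicBoolean gattBusy = new AtomicBoolean(false);

public void enqueueWrite(final int data) {
    gattQueue.add(() -> writeCharacteristic(data));
    driveQueue();
}

private void driveQueue() {
    if (gattBusy.compareAndSet(false, true)) {
        Runnable op = gattQueue.poll();
        if (op != null) {
            op.run();            // kicks off the asynchronous GATT call
        } else {
            gattBusy.set(false); // queue drained
        }
    }
}

// In your BluetoothGattCallback: release the queue when the device confirms the write
@Override
public void onCharacteristicWrite(BluetoothGatt gatt,
                                  BluetoothGattCharacteristic characteristic, int status) {
    gattBusy.set(false);
    driveQueue();
}

With something like this in place, the three delayed Handlers collapse into three consecutive enqueueWrite(...) calls.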
If the work you want to perform sequentially can be done asynchronously, you can consider the new WorkManager included in Android Jetpack. With WorkManager you can organize all your work very smartly; as per the documentation, you can do it as follows:
WorkManager.getInstance(myContext)
    // Candidates to run in parallel
    .beginWith(listOf(filter1, filter2, filter3))
    // Dependent work (only runs after all previous work in chain)
    .then(compress)
    .then(upload)
    // Don't forget to enqueue()
    .enqueue()
The library takes good care of the order of the execution for you. You can find more information on this matter here: https://developer.android.com/topic/libraries/architecture/workmanager/how-to/chain-work
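For reference, roughly the Java equivalent of that Kotlin snippet (filter1, filter2, filter3, compress and upload are assumed to be OneTimeWorkRequest instances, as in the documentation example):

WorkManager.getInstance(myContext)
        // Candidates to run in parallel
        .beginWith(Arrays.asList(filter1, filter2, filter3))
        // Dependent work (only runs after all previous work in the chain)
        .then(compress)
        .then(upload)
        // Don't forget to enqueue()
        .enqueue();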
Thanks for your curiosity!
I'm in doubt whether it's really necessary to call addOnCompleteListener when I'm using setPersistence(true).
When I use addOnCompleteListener and my internet is offline, the screen keeps loading because addOnCompleteListener never finishes while it waits for the connection. So I can't add anything offline, because the loading screen is stuck waiting for a connection (breaking the concept of persistence).
The example below shows what I'm talking about:
getDatabaseReference()
        .child("Example")
        .child(userID)
        .push()
        .setValue(dataExample)
        .addOnCompleteListener(new OnCompleteListener<Void>() {
            @Override
            public void onComplete(@NonNull Task<Void> task) {
                if (task.isSuccessful()) {
                    // callback (my response interface) called with a success response
                    callback.onSuccess();
                } else {
                    // callback called with a failure response
                    callback.onFailure(task.getException().getMessage());
                }
            }
        });
So, as I said, if my internet is offline I can't complete the action; the loading screen keeps loading forever, waiting for the complete listener.
I figured out that without addOnCompleteListener I can add and remove values while offline, because they're in the cache, so when I have internet again the app sends the updates to the online database. That's brilliant.
The example below shows what I am proposing:
getDatabaseReference()
        .child("Example")
        .child(userID)
        .push()
        .setValue(dataExample);
// callback called with a successful response, without handling errors (because they won't surface here)
callback.onSuccess();
So, my question is: is it right to do this? Is it good practice to drop addOnCompleteListener and keep my app usable offline, sending updates only when a connection is available?
What you are seeing is the expected behavior. A write to RealtimeDatabase isn't considered "complete" until it reaches the server and becomes visible to other clients. A locally cached write is not considered complete, until the SDK manages to synchronize it. If you don't need to know when the write is finally received by the server, or if it was successful, then there is no need to register a listener with the Task it generates.
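In practice that suggests a hedged pattern like the sketch below (callback stands in for the question's response interface; the listener becomes optional logging rather than a gate for the UI):

DatabaseReference ref = getDatabaseReference()
        .child("Example")
        .child(userID)
        .push();

Task<Void> write = ref.setValue(dataExample);
callback.onSuccess(); // optimistic UI update; the local cache already holds the value

// Optional: observe when (and whether) the server actually acknowledged the write
write.addOnCompleteListener(task ->
        Log.d(TAG, "synced to server: " + task.isSuccessful()));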
I'm trying to implement an Excel export for a fair amount of data. After 5 minutes I receive a 504 Gateway Timeout, while in the backend the process continues with its work.
For the whole service to finish, I need approximately 15 minutes. Is there anything I can do to prevent this? I don't have access to the servers in production.
The app is Spring Boot with an Oracle database. I'm using POI for the export.
One common way to handle these kinds of problems is to have the first request start the process in the background and serve the file from another place once it has been generated. The first request finishes immediately, and the user can then check another view to see whether the file is ready and download it from there.
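A minimal sketch of that pattern in Spring Boot (ExportService, the endpoint paths and the in-memory job map are illustrative assumptions, not from the question):

import java.io.File;
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import org.springframework.core.io.FileSystemResource;
import org.springframework.core.io.Resource;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.*;

@RestController
public class ExportController {

    private final ExecutorService pool = Executors.newFixedThreadPool(2);
    private final Map<String, File> finished = new ConcurrentHashMap<>();
    private final ExportService exportService; // hypothetical service wrapping the POI export

    public ExportController(ExportService exportService) {
        this.exportService = exportService;
    }

    @PostMapping("/exports")
    public ResponseEntity<String> startExport() {
        String jobId = UUID.randomUUID().toString();
        pool.submit(() -> finished.put(jobId, exportService.generateExcel())); // the ~15-minute job
        return ResponseEntity.accepted().body(jobId); // responds immediately, so no 504
    }

    @GetMapping("/exports/{jobId}")
    public ResponseEntity<Resource> download(@PathVariable String jobId) {
        File file = finished.get(jobId);
        if (file == null) {
            return ResponseEntity.status(HttpStatus.NO_CONTENT).build(); // still generating
        }
        return ResponseEntity.ok(new FileSystemResource(file));
    }
}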
You can export the data in smaller chunks. Run a test with, say, 10K records, note the id of the last record, and repeat the export starting at the next record. If 10K finishes quickly, try 50K. A timer might come in handy here. Good luck.
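A hedged sketch of that chunked export using keyset pagination (table, column and helper names are made up; the FETCH FIRST syntax assumes Oracle 12c or newer):

// Restart each query after the last id seen, so no single statement
// runs long enough to hit the gateway timeout.
long lastId = 0L;
final int chunkSize = 10_000;
while (true) {
    List<ExportRow> rows = jdbcTemplate.query(
            "SELECT id, payload FROM export_data WHERE id > ? ORDER BY id FETCH FIRST ? ROWS ONLY",
            rowMapper, lastId, chunkSize);
    if (rows.isEmpty()) {
        break; // export complete
    }
    appendToSheet(rows); // hypothetical POI helper appending rows to the workbook
    lastId = rows.get(rows.size() - 1).getId();
}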
I had the same situation, where the timeout of the network calls wasn't in our hands, so I guess you have a setup where the gateway allows 5 minutes to receive the first byte and then gives up.
My solution was: let's assume you have a controller and a query layer that talks to the database. In this case, make your process asynchronous. The call to the controller should just trigger the async execution and return a success status immediately, without waiting; the execution happens in the background. Futures can be used here, as they are async, and you can also handle the result once it completes by using the callback methods of a Future.
You can implement this in Java using Guava's Futures.addCallback (a ListenableFuture API rather than plain Java 8) like below:
Futures.addCallback(
        exportData,
        new FutureCallback<String>() {
            public void onSuccess(String message) {
                System.out.println(message);
            }

            public void onFailure(Throwable thrown) {
                thrown.getCause();
            }
        },
        service);
and in Scala like this:

val result = Future {
  exportData(data)
}

result.onComplete {
  case Success(message) => println(s"Got the callback result: $message")
  case Failure(e) => e.printStackTrace()
}
I'm currently working on a mobile application, and for easy portability I'm making use of Unity3D. The application's design looks nice and crisp and scales well to all of my target resolutions, but the networking is giving me an issue.
I'm using Java for the server backend, and JDBC to manage database connections. The problem is that this application is sure to have a few thousand users at minimum (based on my current following, blogs, and marketing), and I would like to make sure I'm doing this correctly to avoid any lockups from SQL being used concurrently.
This application pulls everything it needs from the database. For security and glitch-prevention reasons, if the database cannot be reached, the server lets the client (application) know that there was an error.
Here's what I'm worried about: when a user logs in, a few things are done almost instantly. The database is checked for their login credentials; if that succeeds, the client loads the next stage of the application and sends a packet to the server. The server then grabs more information from the database (done through a total of three queries in the shortest form I can come up with). However, what happens if 200-300 people are logging in, 300-400 are spending tokens (which requires DB calls), and 200-300 are requesting database data elsewhere? That's around 1,000 database calls coming in.
I don't want the application to seem really laggy.
Here's how I'm currently handling it after a little research, but it doesn't feel quite right. I'm looking for advice and corrections. (decodeTcp is called when a packet with the header id of X is received.)
public void decodeTcp(Session session) throws IOException {
    try {
        ScheduledExecutorService scheduledExecutorService = Executors.newScheduledThreadPool(1);
        ScheduledFuture<?> scheduledFuture = scheduledExecutorService.schedule(new Callable<Object>() {
            public Object call() throws Exception {
                return Database.getSingleton().fetch((User) session.getAttachment());
            }
        }, 0, TimeUnit.SECONDS);
        int results = (int) scheduledFuture.get();
    } catch (InterruptedException | ExecutionException e) {
        e.printStackTrace();
    }
}
Where Database#fetch(Object) returns an int.