I aim to insert the name into Cloud Firestore when the button is clicked, but I don't want the save to be left pending if the user is not connected to the internet. I don't like Firebase's behavior of queueing pending writes and then committing them once the internet connection is restored.
I researched and found that Firebase developers suggest using transactions to prevent pending writes when there is no internet. I have tried this, but the result is still the same: if there is no internet, it queues a pending write, and when the internet connection is restored, it writes it to Firestore. Why aren't transactions working as expected?
HashMap<String, Object> hashMap = new HashMap<>();
hashMap.put("Name", "Test");
FirebaseFirestore.getInstance()
        .batch()
        .set(FirebaseFirestore.getInstance().collection("LISTS").document(), hashMap)
        .commit()
        .addOnCompleteListener(new OnCompleteListener<Void>() {
            @Override
            public void onComplete(@NonNull Task<Void> task) {
                Toast.makeText(MainActivity.this, "" + task, Toast.LENGTH_SHORT).show();
                if (task.isSuccessful())
                    activityMainBinding.activityMainMaterialToolbarTopBar.setTitle("Done");
                else
                    activityMainBinding.activityMainMaterialToolbarTopBar.setTitle(task.getException().getMessage());
            }
        });
I am short on time as I want to deliver this project to the company on the specified date, and I don't want to waste time. If it's not possible to solve the problem with transactions, please let me know so I can look for alternative solutions.
When you're using FirebaseFirestore#batch() you aren't performing a transaction, but creating a WriteBatch:
Creates a write batch, used for performing multiple writes as a single atomic operation.
So, it's an atomic operation and not a transaction. If you need to perform a transaction, please check the official documentation below:
https://firebase.google.com/docs/firestore/manage-data/transactions#transactions
So you should use FirebaseFirestore#runTransaction() which:
Executes the given updateFunction and then attempts to commit the changes applied within the transaction.
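For illustration, a minimal sketch of how the write from the question could look with runTransaction(), using the same "LISTS" collection and data; the completion handling here is an assumption, not the only way to do it. Note that, unlike a batched write, a transaction fails when the client is offline instead of being queued as a pending write:
final FirebaseFirestore db = FirebaseFirestore.getInstance();
final DocumentReference docRef = db.collection("LISTS").document();
final Map<String, Object> data = new HashMap<>();
data.put("Name", "Test");

db.runTransaction(new Transaction.Function<Void>() {
    @Override
    public Void apply(@NonNull Transaction transaction) throws FirebaseFirestoreException {
        transaction.set(docRef, data);
        return null;
    }
}).addOnCompleteListener(task -> {
    if (task.isSuccessful()) {
        // the write reached the server
    } else {
        // offline or failed; nothing was left pending
    }
});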
Related
I have a Spring Boot web application with the functionality to update an entity called StudioLinking. This entity describes a temporary, mutable, descriptive logical link between two IoT devices for which my web app is their cloud service. The links between these devices are ephemeral in nature, but the StudioLinking entity persists in the database for reporting purposes. StudioLinking is stored to the SQL-based datastore in the conventional way using Spring Data/Hibernate. From time to time this StudioLinking entity will be updated with new information from a REST API. When that link is updated, the devices need to respond (change colors, volume, etc.). Right now this is handled by polling every 5 seconds, but this creates lag between when a human user enters an update into the system and when the IoT devices actually update. It could be as little as a millisecond or up to 5 seconds! Clearly increasing the frequency of the polling is unsustainable, and the vast majority of the time there are no updates at all!
So, I am trying to develop another REST API on this same application with HTTP long polling, which will return when a given StudioLinking entity is updated or after a timeout. The listeners do not support WebSocket or similar, leaving me with long polling. Long polling can leave a race condition where you have to account for the possibility that, with consecutive messages, one message may be "lost" as it comes in between HTTP requests (while the connection is closing and reopening, a new "update" might come in and not be "noticed" if I used a plain Pub/Sub).
It is important to note that this "subscribe to updates" API should only ever return the LATEST and CURRENT version of the StudioLinking, but should only do so when there is an actual update or if an update happened since the last check-in. The "subscribe to updates" client will initially POST an API request to set up a new listening session and pass that along so the server knows who they are, because it is possible that multiple devices will need to monitor updates to the same StudioLinking entity. I believe I can accomplish this by using separately named consumers in the Redis XREAD. (Keep this in mind for later in the question.)
After hours of research I believe the way to accomplish this is using Redis Streams.
I have found these two links regarding Redis Streams in Spring Data Redis:
https://www.vinsguru.com/redis-reactive-stream-real-time-producing-consuming-streams-with-spring-boot/
https://medium.com/@amitptl.in/redis-stream-in-action-using-java-and-spring-data-redis-a73257f9a281
I have also read this link about long polling. Its examples just use a sleep timer during the long poll, which is for demonstration purposes, but obviously I want to do something useful:
https://www.baeldung.com/spring-deferred-result
And both these links were very helpful. Right now I have no problem figuring out how to publish the updates to the Redis stream (this is untested "pseudo-code", but I don't anticipate any issues implementing it):
// In my StudioLinking entity
@PostUpdate
public void postToRedis() {
    StudioLinking link = this;
    ObjectRecord<String, StudioLinking> record = StreamRecords.newRecord()
            .ofObject(link)
            .withStreamKey(streamKey); // I am creating a stream for each individual linking, probably?
    this.redisTemplate
            .opsForStream()
            .add(record)
            .subscribe(System.out::println);
    atomicInteger.incrementAndGet();
}
But I fall flat when it comes to subscribing to said stream. Below is basically what I want to do; please excuse the butchered pseudocode, it is for idea purposes only. I am well aware that the code is in no way indicative of how the language and framework actually behave :)
// Parameter linkId refers to the StudioLinking that the requester wants to monitor
// updatesId is a unique token to track individual consumers in Redis
@GetMapping("/subscribe-to-updates/{linkId}/{updatesId}")
public DeferredResult<ResponseEntity<?>> subscribeToUpdates(@PathVariable("linkId") Integer linkId, @PathVariable("updatesId") Integer updatesId) {
    LOG.info("Received async-deferredresult request");
    DeferredResult<ResponseEntity<?>> output = new DeferredResult<>(5000L);
    output.onTimeout(() ->
            output.setErrorResult(
                    ResponseEntity.status(HttpStatus.REQUEST_TIMEOUT)
                            .body("IT WAS NOT UPDATED!")));
    ForkJoinPool.commonPool().submit(() -> {
        //----------------------------------------------
        // Made up stuff... here is where I want to subscribe to a stream and block!
        //----------------------------------------------
        LOG.info("Processing in separate thread");
        try {
            // Subscribe to the Redis stream, get any updates that happened between long polls,
            // then block until/if a new message comes over the stream
            var subscription = listenerContainer.receiveAutoAck(
                    Consumer.from(String.valueOf(linkId), String.valueOf(updatesId)),
                    StreamOffset.create(String.valueOf(linkId), ReadOffset.lastConsumed()),
                    streamListener);
            listenerContainer.start();
        } catch (InterruptedException e) {
            // ignored in this pseudocode
        }
        output.setResult(ResponseEntity.ok("IT WAS UPDATED!"));
    });
    LOG.info("servlet thread freed");
    return output;
}
So is there a good explanation of how I would go about this? I think the answer lies within https://docs.spring.io/spring-data/redis/docs/current/api/org/springframework/data/redis/core/ReactiveRedisTemplate.html but I am not a big enough Spring power user to really understand the terminology in the Javadocs (the Spring documentation is really good, but the Javadocs are written in dense technical language which I appreciate but don't quite understand yet).
There are two more hurdles to my implementation:
My understanding of Spring is not at 100% yet. I haven't reached that a-ha moment where I fully understand why all these beans are floating around, and I think that is the key to why I am not getting things here. The Redis configuration is floating around in the Spring ether and I am not grasping how to just call it. I really need to keep investigating this (it is a huge hurdle to Spring for me).
These StudioLinking are short-lived, so I need to do some cleanup too. I will implement this later once I get the whole thing off the ground; I do know it will be needed.
Why don't you use a blocking polling mechanism? There is no need to use the fancy stuff in spring-data-redis. Just use a simple blocking read with a timeout of 5 seconds, so the call might take around 6 seconds or so. You can decrease or increase the blocking timeout.
class LinkStatus {
    private final boolean updated;

    LinkStatus(boolean updated) {
        this.updated = updated;
    }

    // getter so the status can be serialized in the response
    public boolean isUpdated() {
        return updated;
    }
}

// Parameter linkId refers to the StudioLinking that the requester wants to monitor
// updatesId is a unique token to track individual consumers in Redis
@GetMapping("/subscribe-to-updates/{linkId}/{updatesId}")
public LinkStatus subscribeToUpdates(
        @PathVariable("linkId") Integer linkId, @PathVariable("updatesId") Integer updatesId) {
    StreamOperations<String, String, String> op = redisTemplate.opsForStream();
    Consumer consumer = Consumer.from("test-group", "test-consumer");
    // auto-ack blocking stream read of size 1 with a timeout of 5 seconds
    StreamReadOptions readOptions =
            StreamReadOptions.empty().autoAcknowledge().block(Duration.ofSeconds(5)).count(1);
    List<MapRecord<String, String, String>> records =
            op.read(consumer, readOptions, StreamOffset.latest("test-stream"));
    return new LinkStatus(!CollectionUtils.isEmpty(records));
}
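One thing to keep in mind with a consumer-group read like the one above: the group ("test-group" here) must already exist on the stream. You would create it once, e.g. with XGROUP CREATE or StreamOperations#createGroup; otherwise the read fails with a NOGROUP error.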
I have an issue that requires waiting sequentially when going from one thing to the next. I presently do this by setting three Runnables with different delays to allow a sequential flow of data over my Bluetooth connection. However, while this does work, I feel there must be a better/cleaner implementation. My present code is below.
My code works like this:
Write command 1
Wait till command is written
Write command 2
Wait till command is written
Write command 3
Wait till command is written
Please could you give me some suggestions as to how I can perform my write functions one after another in a better manner.
Handler h = new Handler();
h.postDelayed(new Runnable() {
    public void run() {
        Log.d(TAG, "Write 1");
        mBluetoothLeService.writeCharacteristic(10);
    }
}, 1000);

Handler h1 = new Handler();
final int Command_to_run = value;
h1.postDelayed(new Runnable() {
    public void run() {
        Log.d(TAG, "Write 2");
        mBluetoothLeService.writeCharacteristic(Command_to_run);
    }
}, 2000);

Handler h2 = new Handler();
h2.postDelayed(new Runnable() {
    public void run() {
        Log.d(TAG, "Write 3");
        mBluetoothLeService.writeCharacteristic(20);
    }
}, 3000);
Write code
public void writeCharacteristic(int Data) {
    if (mBluetoothAdapter == null || mBluetoothGatt == null) {
        Log.w(TAG, "BluetoothAdapter not initialized");
        return;
    }
    byte[] value = intToByteArray(Data);
    BluetoothGattService mCustomService =
            mBluetoothGatt.getService(UUID.fromString("f3641400-00b0-4240-ba50-05ca45bf8abc"));
    if (mCustomService == null) {
        Log.w(TAG, "Custom BLE Service not found");
        return;
    }
    /* get the write characteristic from the service */
    BluetoothGattCharacteristic characteristic =
            mCustomService.getCharacteristic(UUID.fromString("f3641401-00b0-4240-ba50-05ca45bf8abc"));
    characteristic.setValue(value);
    mBluetoothGatt.writeCharacteristic(characteristic);
}
I think calls like mBluetoothLeService.writeCharacteristic(10); already block the thread, so using them in order without the handlers could be your solution. I don't think that function is asynchronous, so if it returns true, you can write the next one. They are boolean functions, so if one returns true you can move on to the next.
I've examined the source code, and if it throws an exception inside, it returns false. Otherwise, it returns whether the call was successful or not.
Side note: this behavior might differ across API versions; the source code I looked into was for API 29. Though I believe the behavior would be the same, except you might need to wrap the mBluetoothLeService.writeCharacteristic(10); calls in a try-catch block.
I have to edit this since the answer is wrong: the boolean return value is not enough to determine whether the operation was successful. The operation is indeed asynchronous, but there is a callback you can use (this callback) to see if the write operation was successful, and then move on to the next one.
Check this answer for more information, and, if possible, remove the tick from this one please.
Android's BLE API is fully asynchronous and there are no blocking methods. The methods return true if the operation was initiated successfully and false otherwise. In particular, false will be returned if there is already an operation ongoing.
When you call requestMtu, writeCharacteristic, readCharacteristic and so on, the corresponding callback onMtuChanged, onCharacteristicWrite, onCharacteristicRead will be called when the operation completes. Note that this usually means a round trip to the remote device, which might take a varying amount of time depending on how noisy the environment is and which connection parameters you have, so it is never a good idea to sleep or delay for some fixed amount of time and assume the operation has then completed.
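For example, here is a minimal, untested sketch of driving the three writes from the question off onCharacteristicWrite instead of fixed delays; the command values and mBluetoothLeService come from the question, the rest of the wiring is assumed:
// The three commands from the question, in order (Command_to_run as in the question)
private final Deque<Integer> pendingCommands =
        new ArrayDeque<>(Arrays.asList(10, Command_to_run, 20));

private void writeNextCommand() {
    Integer command = pendingCommands.poll();
    if (command != null) {
        mBluetoothLeService.writeCharacteristic(command);
    }
}

// In your BluetoothGattCallback:
@Override
public void onCharacteristicWrite(BluetoothGatt gatt,
                                  BluetoothGattCharacteristic characteristic,
                                  int status) {
    if (status == BluetoothGatt.GATT_SUCCESS) {
        writeNextCommand(); // previous write confirmed; safe to issue the next one
    }
}

// Kick things off once connected and ready:
// writeNextCommand();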
To make the code structure a bit better and avoid "callback hell", you could for example implement a (thread-safe) GATT queue which is then used by your application. That way you can just push what you want onto the queue and let your GATT queue library handle the dirty work. Or you can use an Android BLE library that already does this.
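A bare-bones sketch of what such a queue could look like (all names are made up; completeOperation() would be called from the BluetoothGattCallback methods mentioned above):
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical GATT operation queue: operations run strictly one at a time.
public class GattQueue {
    private final Queue<Runnable> operations = new ConcurrentLinkedQueue<>();
    private final AtomicBoolean busy = new AtomicBoolean(false);

    // e.g. queue.enqueue(() -> gatt.writeCharacteristic(characteristic));
    public void enqueue(Runnable operation) {
        operations.add(operation);
        driveNext();
    }

    // Call from onCharacteristicWrite / onCharacteristicRead / onMtuChanged
    public void completeOperation() {
        busy.set(false);
        driveNext();
    }

    private void driveNext() {
        if (busy.compareAndSet(false, true)) {
            Runnable next = operations.poll();
            if (next != null) {
                next.run(); // busy stays true until completeOperation()
            } else {
                busy.set(false);
            }
        }
    }
}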
See https://medium.com/@martijn.van.welie/making-android-ble-work-part-3-117d3a8aee23 for a more thorough discussion.
If the work you want to perform sequentially can be done asynchronously, you can consider the new WorkManager included in Android Jetpack. With WorkManager you can organize all your work very smartly; as per the documentation, you can do it as follows:
WorkManager.getInstance(myContext)
// Candidates to run in parallel
.beginWith(listOf(filter1, filter2, filter3))
// Dependent work (only runs after all previous work in chain)
.then(compress)
.then(upload)
// Don't forget to enqueue()
.enqueue()
The library takes good care of the order of the execution for you. You can find more information on this matter here: https://developer.android.com/topic/libraries/architecture/workmanager/how-to/chain-work
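The snippet above is the Kotlin form from the documentation; in Java, the same chain would look roughly like this, assuming filter1, filter2, filter3, compress and upload are OneTimeWorkRequest instances you have defined:
WorkManager.getInstance(myContext)
        // candidates to run in parallel
        .beginWith(Arrays.asList(filter1, filter2, filter3))
        // dependent work, runs only after the previous step completes
        .then(compress)
        .then(upload)
        .enqueue();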
Thanks for your curiosity!
I'm in doubt whether it's really necessary to call addOnCompleteListener when I'm using setPersistence(true).
When I use addOnCompleteListener and my internet is offline, the screen keeps loading because addOnCompleteListener never finishes - it keeps waiting for a connection. So I can't add anything offline, because the loading screen is stuck waiting for the connection (breaking the concept of persistence).
The example below shows what I'm talking about:
getDatabaseReference()
        .child("Example")
        .child(userID)
        .push()
        .setValue(dataExample)
        .addOnCompleteListener(new OnCompleteListener<Void>() {
            @Override
            public void onComplete(@NonNull Task<Void> task) {
                if (task.isSuccessful()) {
                    // callback with success response ("interface" is a reserved
                    // word in Java, so a callback field is used here)
                    callback.onSuccess();
                } else {
                    // callback with failure response
                    callback.onFailure(task.getException().getMessage());
                }
            }
        });
So, as I said, if my internet is offline I can't complete the action; the loading screen keeps loading forever (waiting for the complete listener).
I figured out that without addOnCompleteListener I can add and remove values while I'm offline, because they're kept in the cache, so when I have internet again the app sends the updates to the online database - that's brilliant.
The example below shows what I am proposing:
getDatabaseReference()
        .child("Example")
        .child(userID)
        .push()
        .setValue(dataExample);
// callback with a success response, without error handling (because it won't happen)
callback.onSuccess();
So, my question is: is it right to do this?
Is it good practice to drop addOnCompleteListener, staying able to use my app offline and sending updates only when a connection is available?
What you are seeing is the expected behavior. A write to RealtimeDatabase isn't considered "complete" until it reaches the server and becomes visible to other clients. A locally cached write is not considered complete, until the SDK manages to synchronize it. If you don't need to know when the write is finally received by the server, or if it was successful, then there is no need to register a listener with the Task it generates.
I'm learning RxJava. I am developing an application and I want to take the following points into account:
The user can work offline or where the connection is very bad.
Always display updated data to the user
I want to prioritize displaying the data. By this I mean that I don't want to wait for the network timeout before consulting the data on disk.
I developed the following observables in the application
Observable<Data> disk = ...;
Observable<Data> network = ...;
Observable<Data> networkWithSave = network.doOnNext(data -> {
saveToDisk(data);
});
I have also declared the following subscriber
new Observer<List<Vault>>() {
    @Override
    public void onCompleted() {
        mView.hideProgressBar();
    }

    @Override
    public void onError(Throwable e) {
        mView.showLoadingError();
    }

    @Override
    public void onNext(List<Vault> vaults) {
        processItems(vaults);
    }
}
I would like some advice on the correct way to concatenate these Observables.
I want the data on disk to be displayed first; then check the network and, if there is new data, update the display.
The network query might run in parallel, but if it returns before the disk query, the disk data should not be displayed.
Thank you very much.
Sorry for my English
Perhaps Dan Lew's blog post can give you some ideas.
I think disk.concatWith(networkWithSave).subscribe(ui) will do.
Disk data (if any) will always come first, though.
In case there is no data on disk, your disk source must complete without emitting anything. The disk source must never complete with an error, as that would effectively block your network source.
In your UI subscriber you may want to silently ignore onError (coming from the network) if you have already received data from disk.
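Putting those pieces together, a minimal sketch in the question's RxJava style; the gotDiskData flag is made up for illustration:
// Track whether disk delivered anything, so a later network error
// can be swallowed instead of reaching the UI.
final AtomicBoolean gotDiskData = new AtomicBoolean(false);

Observable<Data> source = disk
        .doOnNext(data -> gotDiskData.set(true))
        .concatWith(networkWithSave
                .onErrorResumeNext(error -> gotDiskData.get()
                        ? Observable.<Data>empty()  // disk data already shown; ignore network failure
                        : Observable.<Data>error(error)));

source.subscribe(ui);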
I am trying to write a mechanism that will manage my save-to-DB operation.
I send the server a list of objects; it iterates over them and saves each one.
Now, if one fails for some strange reason (an exception), it is saved to another list that
has a timer running every 5 seconds, which tries to re-save it.
I then have a locking problem, which I can solve with another boolean.
My function that saves the lost objects is:
private void saveLostDeals() {
    synchronized (unsavedDeals) {
        // Use an explicit iterator: removing from the list inside a for-each
        // loop would throw ConcurrentModificationException.
        Iterator<DealBean> iterator = unsavedDeals.iterator();
        while (iterator.hasNext()) {
            DealBean unsavedDeal = iterator.next();
            boolean successfullySaved = reportDeal(unsavedDeal, false);
            if (successfullySaved) {
                iterator.remove();
            }
        }
    }
}
And my reportDeal() method that's being called for regular reports and for lost-deal reports:
try {
    ...
} catch (HibernateException e) {
    ...
    if (fallback) {
        synchronized (unsavedDeals) {
            unsavedDeals.add(deal);
        }
    }
    session.getTransaction().rollback();
} finally {
    ....
}
Now, when a lost deal is being saved and an exception occurs, the synchronized block will block the concurrent access.
What do you have to say about this save-fallback mechanism? Are there better design patterns to deal with this common issue?
I would suggest using either a proxy or aspects to handle the rollback/retry mechanism. The proxy could use something like the strategy pattern for advice on what action to take.
If, however, you don't want to retry immediately but rather in, say, 5 seconds as you propose, I would consider building that into the contract of your database layer by providing asynchronous routines to begin with. Something like dao.scheduleStore(o); or dao.asyncStore(o);.
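For illustration, a minimal sketch of such an asynchronous contract, assuming a hypothetical synchronous DealDao underneath and a fixed 5-second retry:
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Hypothetical wrapper: callers never block, and failed saves retry in 5 seconds.
public class AsyncDealStore {
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();
    private final DealDao dao; // hypothetical synchronous DAO

    public AsyncDealStore(DealDao dao) {
        this.dao = dao;
    }

    public void asyncStore(DealBean deal) {
        scheduler.execute(() -> attempt(deal));
    }

    private void attempt(DealBean deal) {
        try {
            dao.store(deal); // hypothetical save call
        } catch (RuntimeException e) {
            // schedule a retry instead of keeping a shared "unsaved" list
            scheduler.schedule(() -> attempt(deal), 5, TimeUnit.SECONDS);
        }
    }
}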
It depends.
For example:
Request to save entity ---> exception occurs ---> DB connection problem ---> in the exception block, retry saving the entity to a fallback DB ---> return the response to the request.
Request to save entity ---> exception occurs ---> DB connection problem ---> in the exception block, retry saving the entity to the application's in-memory store ---> return the response to the request.
Request to save entity ---> exception occurs ---> unknown exception ---> in the exception block, save the entity to an XML file store (serialized as XML) ---> return a response saying it was temporarily saved and will be written to the DB later.
Timer ---> checks the file store for any serialized XML ---> updates the DB.
Points to watch out for:
Async calls are better in such scenarios, rather than making the requesting client wait.
In the case of in-memory saving, watch the amount of data held in memory during a prolonged DB failure; it might bring down the whole application.
Transactions: decide whether you want to roll back or save the intermediate state.
Consistency of the data needs to be watched.