I have a task to fetch data from the server at application start. For this purpose I am calling an API via AsyncTask to get notification codes and parse the data. The data can go up to 60,000 notification codes.
For each notification code I have to call a different API to get data. After performing the operation I need to call an acknowledge API to tell the server the notification has been acknowledged, so that it is not repeated the next time I fetch notification codes.
So in this case I have to start approximately 60,000 AsyncTasks for the operations and another 60,000 for the acknowledgements: each operation hits a different API URL, each acknowledgement hits a different URL, and they all run at roughly the same time.
My app works when there are fewer than 1,000 notifications, but with more than that it gets stuck.
Can anyone guide me on the best way to implement this?
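For reference, one rough sketch of the flow described above: instead of one AsyncTask per notification code, a single background task could fetch the codes and feed them to a small, bounded thread pool. The method names below are placeholders for the actual API calls:

import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class NotificationSync {

    // a small, fixed pool instead of ~60,000 parallel AsyncTasks
    private final ExecutorService pool = Executors.newFixedThreadPool(8);

    public void sync() {
        List<String> codes = fetchNotificationCodes();   // one call returning all pending codes
        for (String code : codes) {
            pool.execute(() -> {
                callOperationApi(code);                   // per-code operation against its own URL
                acknowledge(code);                        // tell the server this code was handled
            });
        }
    }

    // placeholders for the app's real network calls
    private List<String> fetchNotificationCodes() { return List.of(); }
    private void callOperationApi(String code) { }
    private void acknowledge(String code) { }
}

The pool size caps how many requests run at once, so memory use stays roughly flat no matter how many codes come back.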
My program is a notification service: it receives HTTP requests (clients sending notifications) and forwards them to a device.
I want it to work the following way:
1) receive client notification request
2) save it to the database (yes, I need this step, it's mandatory)
3) async threads watch for new requests in the database
4) async threads forward them to the destination (device)
This way the program can send the client a confirmation straight away after step 2,
without waiting for the destination to respond (the device's response time can be too long).
If I stored the client notifications in memory I would use a BlockingQueue, but I need to persist my notifications in the database. Also, I cannot use message queues, because clients want REST endpoints to send notifications.
Help me work out the architecture of such a mechanism.
PS: In Java, with PostgreSQL.
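For steps 3 and 4 above, a rough sketch of a polling forwarder in Java over PostgreSQL; the table layout, column names and forwardToDevice() are assumptions made for the example, not part of the question:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import javax.sql.DataSource;

public class ForwardingWorker {

    private final DataSource dataSource;
    private final ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);

    public ForwardingWorker(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public void start() {
        // step 3: watch the table for new rows
        scheduler.scheduleWithFixedDelay(this::forwardPending, 0, 1, TimeUnit.SECONDS);
    }

    private void forwardPending() {
        String select = "SELECT id, payload FROM notifications WHERE status = 'NEW' FOR UPDATE SKIP LOCKED";
        String update = "UPDATE notifications SET status = 'SENT' WHERE id = ?";
        try (Connection conn = dataSource.getConnection()) {
            conn.setAutoCommit(false);
            try (PreparedStatement ps = conn.prepareStatement(select);
                 ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    forwardToDevice(rs.getString("payload"));      // step 4: may be slow, the client already got its reply
                    try (PreparedStatement upd = conn.prepareStatement(update)) {
                        upd.setLong(1, rs.getLong("id"));
                        upd.executeUpdate();
                    }
                }
            }
            conn.commit();
        } catch (Exception ex) {
            ex.printStackTrace();   // row stays NEW and is retried on the next poll
        }
    }

    private void forwardToDevice(String payload) { /* device call goes here */ }
}

FOR UPDATE SKIP LOCKED is PostgreSQL-specific and lets several worker instances poll the same table without picking up the same row twice.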
Here are some ideas that can lead to the solution:
Step 2 is probably mandatory to make sure the request is persisted so that it can be queried later. So we're talking about some "data model" here.
With this in mind, if you "send" the confirmation "right away after step 2", what happens if you later want to do some action with this data (say, send it somewhere) and that action doesn't succeed? Do you store it on disk? What happens if the disk is full?
The most important question is what happens to your data model (in the database) in this case: should the entry still be there, or has the whole "logical" action failed? This is something you should figure out; depending on the actual system the answer can differ.
The most "strict" solution would use transactions in the following (schematic) way:
tr = openTransaction();
try {
    saveRequestIntoDB(data);        // step 2: persist the request
    forwardToDestination(data);     // forward to the device
    tr.commit();                    // both steps succeeded
} catch (SomeException ex) {
    tr.rollback();                  // undo the insert if forwarding fails
}
With this design, if something goes wrong during the saveRequestIntoDB step, nothing happens at all. If the data is stored in the database but forwardToDestination then fails, the transaction is rolled back and the record is not kept in the DB.
If all the operations succeed, the transaction is committed.
Now, it looks like you can still use a messaging system in step 4. Sending a message is fast and won't add any significant overhead to the whole request.
On the other hand, the benefits are obvious:
- Who listens to these "notifications"? If you send something and only one service should receive and process the notification, how do you make sure the others won't get it? How would you implement the opposite: what if all the services should get the notification and process it independently?
These facilities are already implemented by any decent messaging system.
I can't really understand the statement:
I cannot use message queues, because clients want REST endpoints to send notifications.
Since the whole flow originates from the client's request, I don't see any contradiction here. The code that is called from the REST endpoint (which after all is the logical entry point you implement) can call the database, persist the data and then send the notification...
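A minimal sketch of that entry point, assuming Spring Boot with JDBC and JMS (the table and queue names are purely illustrative):

import org.springframework.http.ResponseEntity;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jms.core.JmsTemplate;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class NotificationEndpoint {

    private final JdbcTemplate jdbc;
    private final JmsTemplate jms;

    public NotificationEndpoint(JdbcTemplate jdbc, JmsTemplate jms) {
        this.jdbc = jdbc;
        this.jms = jms;
    }

    @PostMapping("/notifications")
    public ResponseEntity<Void> receive(@RequestBody String payload) {
        // step 2: persist first, so the request is never lost
        jdbc.update("INSERT INTO notifications (payload, status) VALUES (?, 'NEW')", payload);
        // hand off to the messaging system; a listener forwards to the device asynchronously
        jms.convertAndSend("notifications.forward", payload);
        // confirm to the client right away, without waiting for the device
        return ResponseEntity.accepted().build();
    }
}

The client gets its confirmation as soon as the row is committed and the message is handed off; the slow device call happens in a listener on the other side of the queue.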
I am sending a data notification to the device every 2 minutes in order to collect data (GPS, signal strength, ...).
The FCM message triggers a service that sends the collected data to an API.
However, in some cases the device stops receiving the FCM notifications that start that app, sometimes until the device is restarted.
Is there any reason for this, and how can I get around it?
Note: I have already added the server to the whitelist in the Firebase console.
Generally there are two types of FCM messages we can send to registered devices: data messages and notification messages. The default priority of a data message is NORMAL, whereas notification messages have HIGH priority. Messages with HIGH priority are delivered first (which means there is a chance of not receiving data messages in some cases). But it is possible to change the priority of a data message to HIGH when sending it from the server:
https://firebase.google.com/docs/cloud-messaging/concept-options#setting-the-priority-of-a-message
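For example, with the Firebase Admin SDK for Java the priority can be raised per message (the token and data keys below are placeholders, and FirebaseApp is assumed to be initialised already):

import com.google.firebase.messaging.AndroidConfig;
import com.google.firebase.messaging.FirebaseMessaging;
import com.google.firebase.messaging.Message;

public class DataMessageSender {

    public static void send(String deviceToken) throws Exception {
        // assumes FirebaseApp.initializeApp(...) has already been called
        Message message = Message.builder()
                .setToken(deviceToken)
                .putData("type", "collect")                        // data-only message, no notification block
                .setAndroidConfig(AndroidConfig.builder()
                        .setPriority(AndroidConfig.Priority.HIGH)  // raise the default NORMAL priority
                        .build())
                .build();

        String id = FirebaseMessaging.getInstance().send(message);
        System.out.println("Sent message: " + id);
    }
}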
But in my case there were some situations where I was not able to receive FCM messages even after setting the priority to HIGH. So I opted for an FCM wrapper service like OneSignal to send push notifications. It is much better in terms of message delivery; under the hood it is still FCM, but they have handled the delivery problem to an extent. Take a look at
https://onesignal.com/
I want to implement push notifications in Java, so please help me out.
1. Each time a new record (message) is pushed into the database (due to an event created by some other user), a push notification should be sent to a specific logged-in user automatically.
2. The content of the push notification should be the message present in the DB.
3. If there are multiple messages, the user should receive them one by one, in queue fashion.
4. Most importantly, the logged-in user should not have to trigger any event to get a notification; the user should receive notifications automatically throughout the session.
You could use Server-Sent Events (SSE). Spring provides SseEmitter to send timely notifications.
You can use the EventSource API in JavaScript to consume the SSE event stream; on the server side, loop the database query code inside a task submitted to an ExecutorService, which runs it on a separate thread.
Set the SSE timeout to -1 to listen for an infinite amount of time.
Please note this answer is only a hint; use it to explore more on your own.
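As a starting point, here is a rough sketch of the server side, assuming Spring Boot; MessageDao is a hypothetical data-access interface over the messages table, and the polling interval and endpoint path are illustrative:

import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.servlet.mvc.method.annotation.SseEmitter;

@RestController
public class NotificationStreamController {

    interface MessageDao {          // hypothetical: reads new rows for the logged-in user
        List<String> fetchUnsent();
    }

    private final MessageDao messageDao;
    private final ExecutorService executor = Executors.newCachedThreadPool();

    public NotificationStreamController(MessageDao messageDao) {
        this.messageDao = messageDao;
    }

    @GetMapping("/notifications/stream")
    public SseEmitter stream() {
        SseEmitter emitter = new SseEmitter(-1L);       // -1: keep listening indefinitely, as suggested above
        executor.execute(() -> {
            try {
                while (true) {
                    for (String message : messageDao.fetchUnsent()) {
                        emitter.send(message);          // pushed to the browser one by one
                    }
                    Thread.sleep(2000);                 // simple polling interval for the DB query loop
                }
            } catch (Exception ex) {
                emitter.completeWithError(ex);
            }
        });
        return emitter;
    }
}

On the browser side, new EventSource('/notifications/stream') subscribes to this endpoint and fires its onmessage callback for every send.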
I have an interesting use case which I want to test with Flink. I have an incoming stream of Message which is either PASS or FAIL. Now if the message is of type FAIL, I have a downstream ProcessFunction which saves the Message state and then sends pause commands to everything that depends on this. When I receive a PASS message which is associated with the FAIL I had received earlier (keying by message id), I send resume commands to everything I had paused earlier.
Now I plan on using State TTL to expire the stored FAIL state and resume everything after a certain timeout even if I haven't received a PASS message with the same message id. Could this be done with Flink alone or would I need to have some external timer to send timeout messages to my program?
I had something like this in mind to get it working in Flink:
For each Message, add a timestamp and pass it on to a process function which waits until current_ts - timestamp >= timeout before sending it on to resume everything paused by the module. Is there a better way, or do you think this is OK?
Seems like it would be more straightforward to use a timer to expire the state (by calling state.clear() in the onTimer method), rather than using state TTL. The same onTimer method can also arrange for things to resume at the same time.
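A rough sketch of that approach, keyed by message id; plain strings stand in for the real Message/Command types, and the timeout value is illustrative:

import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.util.Collector;

// Input: "FAIL" / "PASS" status strings keyed by message id; output: pause/resume commands.
public class PauseResumeFunction extends KeyedProcessFunction<String, String, String> {

    private static final long TIMEOUT_MS = 60_000;      // illustrative timeout

    private transient ValueState<String> failState;

    @Override
    public void open(Configuration parameters) {
        failState = getRuntimeContext().getState(
                new ValueStateDescriptor<>("fail-state", String.class));
    }

    @Override
    public void processElement(String status, Context ctx, Collector<String> out) throws Exception {
        if ("FAIL".equals(status)) {
            failState.update(status);
            out.collect("pause " + ctx.getCurrentKey());
            // fall back to resuming automatically if no PASS arrives within the timeout
            ctx.timerService().registerProcessingTimeTimer(
                    ctx.timerService().currentProcessingTime() + TIMEOUT_MS);
        } else if (failState.value() != null) {          // matching PASS arrived in time
            failState.clear();
            out.collect("resume " + ctx.getCurrentKey());
        }
    }

    @Override
    public void onTimer(long timestamp, OnTimerContext ctx, Collector<String> out) throws Exception {
        if (failState.value() != null) {                 // still no PASS: expire the stored FAIL state
            failState.clear();
            out.collect("resume " + ctx.getCurrentKey());
        }
    }
}

If the PASS arrives first, the state is cleared and the later timer firing is a no-op, so no external timer service is needed.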
For the last few years we have used our own RM Application to process events related to our applications. This works by polling a database table every few minutes, looking for any rows that have a due date before now, and have not been processed yet.
We are currently making the transition to SNS, with SQS Worker tiers processing them. The problem with this approach is that we can't future date our messages. Our applications sometimes have events that we don't want to process until a week later.
Are there any design approaches, alternative services, or clever tricks we could employ that would allow us to achieve this?
One solution would be to keep our existing application running, at a simplified level, so all it does is send the SNS notifications when they are due, but the aim of this project is to try and do away with our existing app.
The database approach would be the wisest, being careful that each row is only processed once.
Amazon Simple Notification Service (SNS) is designed to send notifications immediately. There is no functionality for a delayed send (although some notification types are retried if they fail).
Amazon Simple Queue Service (SQS) does have a delay feature, but only up to 15 minutes -- this is useful if you need to do some work before the message is processed, such as copying related data to Amazon S3.
Given that your requirement is to wait until some future arbitrary time (effectively like a scheduling system), you could either start a process and tell it to sleep for a certain amount of time (a bad idea in case systems are restarted), or continue your approach of polling from a database.
If all jobs are scheduled for a distant future (eg at least one hour away), you theoretically only need to poll the database once an hour to retrieve the earliest scheduled time.
A week might be too long, as SQS message retention itself is at most 14 days. If you are okay with a maximum retention of 14 days, one idea is to keep changing the visibility of a message every time you receive it, until it is ready for processing. The maximum allowed visibility timeout is 12 hours. More on visibility timeouts and the API for changing them:
http://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_ChangeMessageVisibility.html
http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/AboutVT.html
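A sketch of that loop with the AWS SDK for Java v1; the queue URL and the dueAt message attribute are assumptions made for the example:

import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;
import com.amazonaws.services.sqs.model.Message;
import com.amazonaws.services.sqs.model.ReceiveMessageRequest;

public class DelayedConsumer {

    private static final int MAX_VISIBILITY_SECONDS = 12 * 60 * 60;   // 12 hours, the SQS limit

    public static void main(String[] args) {
        AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();
        String queueUrl = "https://sqs.us-east-1.amazonaws.com/123456789012/scheduled-events"; // placeholder

        ReceiveMessageRequest request = new ReceiveMessageRequest(queueUrl)
                .withMessageAttributeNames("dueAt");                  // epoch seconds set by the producer (assumed)

        for (Message message : sqs.receiveMessage(request).getMessages()) {
            long dueAt = Long.parseLong(
                    message.getMessageAttributes().get("dueAt").getStringValue());
            long remainingSeconds = dueAt - System.currentTimeMillis() / 1000;

            if (remainingSeconds <= 0) {
                process(message);                                     // due now: handle it and delete it
                sqs.deleteMessage(queueUrl, message.getReceiptHandle());
            } else {
                // not due yet: hide it again for up to 12 hours and pick it up on a later receive
                int hideFor = (int) Math.min(remainingSeconds, MAX_VISIBILITY_SECONDS);
                sqs.changeMessageVisibility(queueUrl, message.getReceiptHandle(), hideFor);
            }
        }
    }

    private static void process(Message message) {
        System.out.println("Processing " + message.getBody());
    }
}

Keep in mind that each re-hide counts as a receive, so if the queue has a redrive policy, maxReceiveCount needs to be large enough that delayed messages don't end up in the dead-letter queue.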
I found this approach: https://github.com/alestic/aws-sns-delayed. Basically, you can use a Step Functions state machine with a Wait state in it.