Write to GAE datastore asynchronously - java

In my Java app, sometimes my users do some work that requires a datastore write, but I don't want to keep the user waiting while the datastore is writing. I want to immediately return a response to the user while the data is stored in the background.
It seems fairly clear that I could do this by using GAE task queues, enqueueing a task to store the data. But I also see that there's an Async datastore API, which seems like it would be much easier than dealing with task queues.
Can I just call AsyncDatastoreService.put() and then return from my servlet? Will that API store my data without keeping my users waiting?

I think you are right that the Async calls seem easier. However, the docs for AsyncDatastore mention one caveat that you should consider:
Note: Exceptions are not thrown until you call the get() method. Calling this method allows you to verify that the asynchronous operation succeeded.
The "get" in that note is being called on the Future object returned by the async call. If you just return from your servlet without ever calling get on the Future object, you might not know for sure whether your put() worked.
With a queued task, you can handle the error cases more explicitly, or just rely on the automatic retries. If all you want to queue is datastore puts, you should be able to create (or find) a utility class that does most of the work for you.

Unfortunately, there aren't any really good solutions here. You can enqueue a task, but there are several big problems with that:
Task payloads are limited in size, and that size is smaller than the entity size limit.
Writing a record to the datastore is actually pretty fast, in wall-clock time. A significant part of the cost, too, is serializing the data, which you have to do to add it to the task queue anyway.
By using the task queue, you're creating more eventual consistency - the user may come back and not see their changes applied, because the task has not yet executed. You may also be introducing transaction issues - how do you handle concurrent updates?
If something fails, it could take an arbitrarily long time to apply the user's updates. In such situations, it probably would have been better to simply return an error to the user.
My recommendation would be to use the async API where possible, but to always write to the datastore directly. Note that you need to wait on all your outstanding API calls, as Peter points out, or you won't know whether they failed - and even if you don't wait on them yourself, the app server will wait for them before returning a response to the user.
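To make that last point concrete, here is a small sketch of issuing several async puts in parallel and then blocking on every Future before the response goes out (this reuses the same AsyncDatastoreService as in the earlier snippet):

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Future;
import com.google.appengine.api.datastore.AsyncDatastoreService;
import com.google.appengine.api.datastore.Entity;
import com.google.appengine.api.datastore.Key;

class ParallelPutSketch {
    void putAll(AsyncDatastoreService datastore, List<Entity> entities) throws Exception {
        List<Future<Key>> pending = new ArrayList<Future<Key>>();
        for (Entity entity : entities) {
            pending.add(datastore.put(entity));   // all writes proceed in parallel
        }
        for (Future<Key> write : pending) {
            write.get();                          // surfaces any failure before we return
        }
    }
}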

If all you need is for the user to have a responsive interface while the work churns away in the database in the background, all you have to do is make the asynchronous call at the client level: fire off an AJAX request that sends the write, update the user's display immediately, and then update the view again in the request's callback with whatever you wish.
You can easily add GWT support to your GAE project (either via the Eclipse plugin or the Maven GAE plugin) and have the time of your life doing asynchronous stuff.
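As a rough sketch of that client-side pattern with GWT-RPC (the SaveServiceAsync interface and the widget names below are hypothetical, not part of any particular API):

import com.google.gwt.user.client.rpc.AsyncCallback;
import com.google.gwt.user.client.ui.Label;

// Hypothetical async half of a standard GWT-RPC service pair.
interface SaveServiceAsync {
    void save(String payload, AsyncCallback<Void> callback);
}

class SaveHandler {
    private final SaveServiceAsync saveService;
    private final Label status = new Label();

    SaveHandler(SaveServiceAsync saveService) {
        this.saveService = saveService;
    }

    void onSaveClicked(String payload) {
        status.setText("Saving...");                 // update the view immediately
        saveService.save(payload, new AsyncCallback<Void>() {
            public void onSuccess(Void result) {
                status.setText("Saved");             // update again when the server replies
            }
            public void onFailure(Throwable caught) {
                status.setText("Save failed: " + caught.getMessage());
            }
        });
    }
}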

Related

How can I ensure that my Android app doesn't access a file simultaneously?

I am building a fitness app which continually logs activity on the device. I need to log quite often, but I also don't want to unnecessarily drain my users' batteries, which is why I am thinking about batching network calls together and transmitting them all at once as soon as the radio is active, the device is connected to WiFi, or it is charging.
I am using a filesystem based approach to implement that. I persist the data first to a File - eventually I might use Tape from Square to do that - but here is where I encounter the first issues.
I am continually writing new log data to the File, but I also need to periodically send all the logged data to my backend. When that happens I delete the contents of the File. The problem now is how can I prevent both of those operations from happening at the same time? Of course it will cause problems if I try to write log data to the File at the same time as some other process is reading from the File and trying to delete its contents.
I am thinking about using an IntentService to essentially act as a queue for all those operations. And since - at least that is what I have read - an IntentService handles Intents sequentially in a single worker Thread, it shouldn't be possible for two of those operations to happen at the same time, right?
Currently I want to schedule a PeriodicTask with the GcmNetworkManager which would take care of sending the data to the server. Is there any better way to do all this?
1) You are overthinking this whole thing!
Your approach is way more complicated than it has to be! And for some reason none of the other answers point this out, but GcmNetworkManager already does everything you are trying to implement! You don't need to implement anything yourself.
2) Optimal way to implement what you are trying to do.
You don't seem to be aware that GcmNetworkManager already batches calls in a battery-efficient way, with automatic retries and so on. It also persists the tasks across device boots and can ensure their execution as soon as it is battery efficient and required by your app.
Just whenever you have data to save schedule a OneOffTask like this:
final OneoffTask task = new OneoffTask.Builder()
        // The Service which executes the task.
        .setService(MyTaskService.class)
        // A tag which identifies the task.
        .setTag(TASK_TAG)
        // Sets a time frame for the execution of this task in seconds.
        // This specifically means that the task can either be executed
        // right now, or must have executed at the latest in one hour.
        .setExecutionWindow(0L, 3600L)
        // Task is persisted on the disk, even across boots.
        .setPersisted(true)
        // Unmetered connection required for the task.
        .setRequiredNetwork(Task.NETWORK_STATE_UNMETERED)
        // Attach data to the task in the form of a Bundle.
        .setExtras(dataBundle)
        // If you set this to true and this task already exists (that just
        // depends on the tag set above) then the old task will be
        // overwritten with this one.
        .setUpdateCurrent(true)
        // Sets whether this task should only be executed when the device is charging.
        .setRequiresCharging(false)
        .build();
mGcmNetworkManager.schedule(task);
This will do everything you want:
The Task will be persisted on the disk
The Task will be executed in a batched and battery efficient way, preferably over Wifi
You will have configurable automatic retries with a battery efficient backoff pattern
The Task will be executed within a time window you can specify.
I suggest for starters you read the GcmNetworkManager documentation to learn more about it.
So to summarize:
All you really need to do is implement your network calls in a Service extending GcmTaskService, and later, whenever you need to perform such a network call, you schedule a OneoffTask and everything else will be taken care of for you!
Of course you don't need to call each and every setter of the OneoffTask.Builder like I do above - I just did that to show you all the options you have. In most cases scheduling a task would just look like this:
mGcmNetworkManager.schedule(new OneoffTask.Builder()
        .setService(MyTaskService.class)
        .setTag(TASK_TAG)
        .setExecutionWindow(0L, 300L)
        .setPersisted(true)
        .setExtras(bundle)
        .build());
And if you put that in a helper method, or even better create factory methods for all the different tasks you need, then everything you were trying to do should boil down to just a few lines of code!
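For completeness, the receiving side (the MyTaskService referenced above) is just a GcmTaskService. A minimal sketch might look like the following, with the actual upload left as a hypothetical sendToBackend() helper (remember to also declare the service in your manifest with the GCM task intent filter):

import android.os.Bundle;
import com.google.android.gms.gcm.GcmNetworkManager;
import com.google.android.gms.gcm.GcmTaskService;
import com.google.android.gms.gcm.TaskParams;

public class MyTaskService extends GcmTaskService {

    @Override
    public int onRunTask(TaskParams taskParams) {
        Bundle extras = taskParams.getExtras();   // the dataBundle attached when scheduling
        boolean sent = sendToBackend(extras);     // hypothetical: your actual network call
        // RESULT_RESCHEDULE triggers the automatic retry/backoff mentioned above
        return sent ? GcmNetworkManager.RESULT_SUCCESS
                    : GcmNetworkManager.RESULT_RESCHEDULE;
    }

    private boolean sendToBackend(Bundle extras) {
        // ... perform the HTTP request here ...
        return true;
    }
}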
And by the way: Yes, an IntentService handles every Intent one after another, sequentially, in a single worker Thread. You can look at the IntentService implementation in the Android source - it's actually very simple and quite straightforward.
All UI and Service methods are by default invoked on the same main thread. Unless you explicitly create threads or use AsyncTask, there is no concurrency in an Android application per se.
This means that all intents, alarms and broadcasts are by default handled on the main thread.
Also note that doing I/O and/or network requests may be forbidden on the main thread (depending on Android version, see e.g. How to fix android.os.NetworkOnMainThreadException?).
Using AsyncTask or creating your own threads will introduce concurrency problems, but they are the same as in any multi-threaded programming; there is nothing special to Android there.
One more point to consider when doing concurrency is that background threads need to hold a WakeLock or the CPU may go to sleep.
Just an idea:
You could make use of a serial executor for your file operations, so that only one task can execute at a time.
http://developer.android.com/reference/android/os/AsyncTask.html#SERIAL_EXECUTOR
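A rough sketch of that idea, funnelling every file operation through AsyncTask.SERIAL_EXECUTOR so a log write and a flush can never overlap (the appendToLogFile()/uploadAndClearLogFile() helpers are hypothetical placeholders for your own code):

import android.os.AsyncTask;

class LogFileAccess {

    void logAsync(final String entry) {
        AsyncTask.SERIAL_EXECUTOR.execute(new Runnable() {
            @Override
            public void run() {
                appendToLogFile(entry);           // runs alone on the serial executor
            }
        });
    }

    void flushAsync() {
        AsyncTask.SERIAL_EXECUTOR.execute(new Runnable() {
            @Override
            public void run() {
                uploadAndClearLogFile();          // never overlaps with a write
            }
        });
    }

    private void appendToLogFile(String entry) { /* append to the File */ }

    private void uploadAndClearLogFile() { /* send to the backend, then truncate */ }
}

Note that SERIAL_EXECUTOR is shared with AsyncTasks started via execute(), so long uploads will delay them; a dedicated single-thread executor (Executors.newSingleThreadExecutor()) gives you the same guarantee in isolation.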

Java simple Analytics/Event Stream Processing with front end

My application takes a lot of measurements of its internal processes. For example, I time certain methods, I time external web service calls, and I also have variables which have a changing value, and processes which have a 'state' (e.g. PAUSED, WAITING etc).
The application uses 100 to 200 threads, and each bit of data would be associated with a particular thread.
I am looking for some software that I can channel all this information into, that would produce useful metrics and graphs of the data (ideally in real time or close to real time), let me set thresholds to trigger warnings, allow me to filter the data by thread or thread group, etc.
The application is performing time critical tasks so the software/api would need to be very fast and never block.
The application is written in java, and ideally the software/api would be in java as well. I think what I'm looking for is called Event Stream Processing, but I'm really not sure what language to use to describe it.
All I've found so far are Esper and ERMA. Can anyone give me a recommendation? I'm the only one working on this project so I'm hoping for something that is pretty easy to set up and use, and has a workable front end.
In the end I found Graphite, which was pretty close to being exactly what I wanted. It's not the simplest thing to set up and configure, but I got it working in the end.
http://graphite.wikidot.com/
In my case I send data directly from my application to StatsD (via UDP), which collects the data and does some pre-processing before it ends up in the Whisper back end. There is a simple example of a Java interface here: https://github.com/etsy/statsd/commit/2253223f3c19d2149d65ec5bc802198ff93da4cb
Alternatively you could send your data directly to Graphite; there's an example here: http://neopatel.blogspot.co.uk/2011/04/logging-to-graphite-monitoring-tool.html
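To give an idea of how little code the StatsD route needs, here is a bare-bones UDP sketch (the host, port and metric names are placeholders; the linked Java example is more complete). Because it's fire-and-forget UDP, a lost packet costs one data point rather than ever blocking a time-critical thread:

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

class StatsdSketch {
    private final DatagramSocket socket;
    private final InetAddress host;
    private final int port;

    StatsdSketch(String hostname, int port) throws Exception {
        this.socket = new DatagramSocket();
        this.host = InetAddress.getByName(hostname);
        this.port = port;
    }

    // Record a timing in milliseconds, e.g. timing("webservice.search", 42)
    void timing(String metric, long millis) {
        byte[] payload = (metric + ":" + millis + "|ms").getBytes();
        try {
            socket.send(new DatagramPacket(payload, payload.length, host, port));
        } catch (Exception e) {
            // never let metrics reporting block or break the application
        }
    }
}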

Synchronous, Asynchronous and Command Client Requests with GWT and GAE

In designing my GWT/GAE app, it has become evident to me that my client-side (GWT) will be generating three types of requests:
Synchronous - "answer me right now! I'm important and require a real-time response!!!"
Asynchronous - "answer me when you can; I need to know the answer at some point but it's really not all that ugent."
Command - "I don't need an answer. This isn't really a request, it's just a command to do something or process something on the server-side."
My game plan is to implement my GWT code so that I can specify, for each specific server-side request (note: I've decided to go with RequestFactory over traditional GWT-RPC for reasons outside the scope of this question), which type of request it is:
SynchronousRequest - Synchronous (from above); sends a command and eagerly awaits a response that it then uses to update the client's state somehow
AsynchronousRequest - Asynchronous (from above); makes an initial request and somehow - either through polling or the GAE Channel API - is notified when the response is finally received
CommandRequest - Command (from above); makes a server-side request and does not wait for a response (even if the server fails to, or refuses to, oblige the command)
I guess my intention with SynchronousRequest is not to produce a totally blocking request; however, it may block the user's ability to interact with a specific Widget or portion of the screen.
The added kicker here is this: GAE strongly enforces a timeout on all of its frontend instances (60 seconds). Backend instances have much more relaxed constraints for timeouts, threading, etc. So it is obvious to me that AsynchronousRequests and CommandRequests should be routed to backend instances so that GAE timeouts do not become an issue with them.
However, if GAE is behaving badly, or if we're hitting peak traffic, or if my code just plain sucks, I have to account for the scenario where a SynchronousRequest is made (which would have to go through a timeout-regulated frontend instance) and will time out unless my GAE server code does something fancy. I know there is a method in the GAE API that I can call to see how many milliseconds a request has left before it's about to time out; although the name of it escapes me right now, it's what this "fancy" code would be based on. Let's call it public static long GAE.timeLeftOnRequestInMillis() for the sake of this question.
In this scenario, I'd like to detect that a SynchronousRequest is about to time out, and somehow dynamically convert it into an AsynchronousRequest so that it doesn't. Perhaps this means sending an AboutToTimeoutResponse back to the client and forcing the client to decide whether to resend as an AsynchronousRequest or just fail. Or perhaps we can just transform the SynchronousRequest into an AsynchronousRequest and push it to a queue where a backend instance will consume it, process it and return a response. I don't have any preferences when it comes to implementation, so long as the request doesn't fail or time out because the server couldn't handle it fast enough (because of GAE-imposed regulations).
So then, here is what I'm actually asking here:
How can I wrap a RequestFactory call inside SynchronousRequest, AsynchronousRequest and CommandRequest in such a way that the RequestFactory call behaves the way each of them is intended? In other words, so that the call either partially-blocks (synchronous), can be notified/updated at some point down the road (asynchronous), or can just fire-and-forget (command)?
How can I implement my requirement to let a SynchronousRequest bypass GAE's 60-second timeout and still get processed without failing?
Please note: timeout issues are easily circumvented by re-routing things to backend instances, but backends don't/can't scale. I need scalability here as well (that's primarily why I'm on GAE in the first place!) - so I need a solution that deals with scalable frontend instances and their timeouts. Thanks in advance!
If the computation that you want GAE to do is going to take longer than 60 seconds, then don't wait for the results to be computed before sending a response. According to your problem definition, there is no way to get around this. Instead, clients should submit work orders, and wait for a notification from the server when the results are ready. Requests would consist of work orders, which might look something like this:
class ComputeDigitsOfPiWorkOrder {
    // parameters for the computation
    int numberOfDigitsToCompute;

    // Used by the GAE app to contact the requester when results are ready.
    ClientId clientId;
}
This way, your GAE app can respond as soon as the work order is saved (e.g. in Task Queue), and doesn't have to wait until it actually finishes calculating a billion digits of pi before responding. Your GWT client then waits for the result using the Channel API.
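As a rough sketch of the "respond as soon as the work order is saved" part, the handler can drop the work order onto a push queue and return right away (the queue name and worker URL below are placeholders; the worker would typically be routed to a backend):

import com.google.appengine.api.taskqueue.Queue;
import com.google.appengine.api.taskqueue.QueueFactory;
import com.google.appengine.api.taskqueue.TaskOptions;

class WorkOrderSubmitter {
    void submit(int numberOfDigitsToCompute, String clientId) {
        Queue queue = QueueFactory.getQueue("pi-work");      // placeholder queue name
        queue.add(TaskOptions.Builder
                .withUrl("/tasks/computePi")                 // placeholder worker URL
                .param("digits", Integer.toString(numberOfDigitsToCompute))
                .param("clientId", clientId));
        // respond to the client here; the Channel API notifies it when results are ready
    }
}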
In order to give some work orders higher priority, you can use multiple task queues. If you want Task Queue work to scale automatically, you'll want to use push queues. Implementing priority using push queues is a little tricky, but you can configure high-priority queues to have a faster feed rate.
You could replace the Channel API with some other notification solution, but the Channel API would probably be the most straightforward.

Handling asynchronous saving with the possibility of time-critical errors?

So, to explain this, I'll start out by going through the application stack.
The system is running JSP with jQuery on top, talking through a controller layer with a service layer, which in turn utilizes a persistence layer implemented in Hibernate.
Now, traditionally, errors like having overlapping contracts have been handled by throwing exceptions up through the layers until they're translated into an error message for the user.
Now I have an object that at any given time can only be tied to one contract. At the moment, when I save a contract, I look at all of these objects and check if they're already covered by an existing contract. However, since multiple clients can be saving at any given time, this introduces the risk of getting past the check on two separate contracts, leading to one object being tied to two contracts at the same time.
To combat this, the idea was to use a queue, put objects into the queue from the main thread, and then have a separate thread take them out one by one, saving them.
However, here's the problem. For one, I would like the user to know that the save is currently happening; for another, if the scenario above happens by accident and two contracts covering the same object for the same period are in the queue, the second one will fail, and this needs to be reported back to the user.
My initial attempt was to keep data fields on the object put into the queue, and then check against those in a blocking wait, and then throw an exception or report success based on what happens. That deadlocked the system completely.
Anyone able to point me in the right direction with regards to techniques and patterns I should be using for this?
I can't really tell why you have a deadlock without seeing your code. I can think of some other options though:
Poll the thread to see its state (not as good).
Use some kind of eventing system. You would have an event listener (OverlappingContractEventListener perhaps) and then you would trigger the event from the thread when the scenario happens. The event handler would need to persist this information somehow.
If you are going for this approach, then on the client side you will need to poll.
You can poll a specific controller (using setInterval and AJAX) that looks up the corresponding information for the object to see what state it's in. This information should have been persisted by your event listener.
You can use web workers (supported in Chrome, Firefox, Safari and Opera; IE will support it in version 10) and perform the polling in the background.
There is one other way that doesn't involve eventing. It depends on you figuring out the source of your deadlock though. Once you fix the source of your deadlock you can do one of two things:
Perform an AJAX call to the controller. The controller will wait for the service to return information. The code to issue feedback to the user will be inside the success handler of your controller.
Use a web worker to perform the call in the background. The web worker would also perform an AJAX call and wait for the response.
Shouldn't you be doing the check for duplicate contracts in the database? Depending on the case, you can do this with a constraint, trigger, or stored procedure. If it fails, send an exception up the stack. That's normally the way to handle things like this. You can then catch the exception in jQuery and display an error:
jQuery Ajax error handling, show custom exception messages
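For illustration, a hedged sketch of the constraint approach with JPA/Hibernate annotations (the entity and column names are invented). When a second concurrent save hits the constraint, Hibernate raises a ConstraintViolationException that your service layer can translate into the same kind of user-facing error you already use. Note that a plain unique constraint only catches exact duplicates; genuinely overlapping date ranges still need a database-side check such as a trigger, as mentioned above.

import java.util.Date;
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.Table;
import javax.persistence.Temporal;
import javax.persistence.TemporalType;
import javax.persistence.UniqueConstraint;

// One row per (object, period start) covered by a contract; the database
// rejects the second of two concurrent saves for the same combination.
@Entity
@Table(name = "contract_coverage",
       uniqueConstraints = @UniqueConstraint(columnNames = {"covered_object_id", "period_start"}))
public class ContractCoverage {

    @Id
    @GeneratedValue
    private Long id;

    @Column(name = "covered_object_id", nullable = false)
    private Long coveredObjectId;

    @Temporal(TemporalType.TIMESTAMP)
    @Column(name = "period_start", nullable = false)
    private Date periodStart;

    // getters and setters omitted for brevity
}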
Hope this helps.

If a REST web service call fails, should a message or event queue be used to retry later?

I'm building a web service with a RESTful interface (let's call it MY_API). This service relies on another RESTful web service to handle certain aspects (call it OTHER_API). I'd like to determine what best practices I should consider using to handle failures of OTHER_API.
Scenario
My UI is a single page javascript application. There are some fairly complex actions a user can take, which can easily take the user a minute or two to complete. When they are done, they click the SAVE button and MY_API is called to save the data.
MY_API has everything it needs to persist the information submitted by the user. However, there is an action that must take place that is handled by OTHER_API. For instance, OTHER_API might handle sending out emails. Or perhaps it handles adding line items to my user's billing statement. In both cases, these are critical things that must be completed, but they don't have to happen right now; they just need to happen eventually.
If OTHER_API fails, I don't want to simply tell the user their action has failed, as they spent a lot of time doing it and this will make the experience less than optimal.
Questions
So should I create some sort of Message or Event Queue that can save these failed REST requests to OTHER_API and process them later?
Any advice or suggestions on techniques to go about saving REST requests for delayed processing?
Is there a recommended open source message queue solution that would work for this type of scenario with JSON-based REST web services? Java is preferred as my backend is written in it.
Are there other techniques I should consider?
Rather than approach this by focusing on the failure state, it'd be faster and more robust to recognize that these actions should be performed asynchronously and out-of-band from the request by the UI. You should indeed use a message/event/job queue, and just pop those jobs right onto that queue as quickly as possible, and respond to the original request as quickly as possible. Once you've done that, the asynchronous job can be performed independently of the original request, and at its own pace — including with retries as needed.
If you want your API to indicate that there are aspects of the request which have not completed, you can use the HTTP response Status Code 202 (Accepted).
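A bare-bones, in-memory sketch of that queue-and-retry shape is below, just to show the moving parts; a real deployment would use a durable broker (ActiveMQ, RabbitMQ, etc.) so jobs survive restarts, and the retry/backoff numbers here are arbitrary:

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

class OtherApiJobQueue {
    private final BlockingQueue<Runnable> jobs = new LinkedBlockingQueue<Runnable>();

    OtherApiJobQueue() {
        Thread worker = new Thread(new Runnable() {
            public void run() {
                try {
                    while (true) {
                        runWithRetries(jobs.take(), 5);   // block until a job arrives
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
        worker.setDaemon(true);
        worker.start();
    }

    // Called from the MY_API request handler: enqueue and return immediately,
    // at which point the HTTP layer can answer 202 Accepted.
    void submit(Runnable callToOtherApi) {
        jobs.add(callToOtherApi);
    }

    private void runWithRetries(Runnable job, int maxAttempts) throws InterruptedException {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                job.run();
                return;                                               // success, we're done
            } catch (RuntimeException e) {
                Thread.sleep((long) Math.pow(2, attempt) * 1000L);    // simple exponential backoff
            }
        }
        // after maxAttempts the job is given up on; a real system would move it
        // to a dead-letter store for later inspection
    }
}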
