Queuing operations inbound from API - java

I've written a web service (Dropwizard) that accepts requests via POST to perform operations that may take considerable time, meaning anywhere from 1 to 5 minutes to complete.
That said, the caller doesn't need a response; a simple 200 to acknowledge receipt of the message is enough. (It's actually a PayPal IPN webhook, for anybody who is curious.)
I only want to perform one of these operations at a time (with the option to increase this in the future) so that my system doesn't overload.
What kind of queue mechanism should I consider using? This probably goes without saying, but I must assume that the API instance can be killed at any time, clearing memory. I will need a durable place to store the queue so I can resume where the server left off when restarted.
Thank you.

You could use Apache Kafka. The documentation is pretty clear; it should help you out.
http://kafka.apache.org/
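For illustration, here is a minimal sketch of producing the webhook payload to a durable Kafka topic, assuming the standard kafka-clients library (the topic name and server address are placeholders):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class IpnEnqueuer {
        private final KafkaProducer<String, String> producer;

        public IpnEnqueuer() {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // placeholder address
            props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
            props.put("acks", "all"); // wait for the broker to persist the record
            this.producer = new KafkaProducer<>(props);
        }

        /** Called from the POST resource: persist the payload, then return 200. */
        public void enqueue(String ipnPayload) {
            producer.send(new ProducerRecord<>("ipn-operations", ipnPayload));
            producer.flush(); // make sure it is stored before acknowledging PayPal
        }
    }

A separate consumer process (in a consumer group of one) would then read the topic and process operations sequentially; since Kafka persists messages and tracks consumer offsets, processing resumes where it left off after a restart.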
Hope that helps!

You can use ActiveMQ with persistence. It's very lightweight and easy to use. Have a look at http://activemq.apache.org/persistence.html; it will guide you through the process step by step.
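As a rough sketch, sending to a persistent ActiveMQ queue over JMS might look like the following (the broker URL and queue name are placeholders, assuming the activemq-client and JMS 1.1 libraries):

    import javax.jms.Connection;
    import javax.jms.DeliveryMode;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class OperationQueue {
        public void enqueue(String payload) throws Exception {
            ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616"); // placeholder
            Connection connection = factory.createConnection();
            try {
                connection.start();
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                Queue queue = session.createQueue("ipn.operations"); // placeholder name
                MessageProducer producer = session.createProducer(queue);
                // PERSISTENT delivery means messages survive a broker restart
                producer.setDeliveryMode(DeliveryMode.PERSISTENT);
                producer.send(session.createTextMessage(payload));
            } finally {
                connection.close();
            }
        }
    }

A single JMS consumer on the same queue would process messages one at a time, and the broker's persistence store replays any unconsumed messages after a restart.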

Related

What is the difference between scheduling a GET API call every single second and using a subscriber API?

I am writing a Java application where an image should change when the data changes.
My colleagues are asking me to use a scheduler that calls a GET API every second.
My suggestion is to use pub/sub, so that the data is updated only when an event actually happens.
Are a subscriber and a scheduler one and the same?
Publish/subscribe is the nicer option, theoretically.
The differences:
Polling is a kind of busy waiting, and with multiple clients it causes superfluous network traffic. The client is the active party.
Publish/subscribe needs an active server that pushes notifications to all subscribers. There is meanwhile sufficient support for this in HTML5/JavaScript and in Java. The server is the active party.
Unfortunately, publish/subscribe will probably be a bit harder to realize. It would be best to build a proof of concept in a separate application; things like asynchronous Ajax might come into play.
Also, some publish/subscribe libraries might still use polling on the client side under the hood, instead of push notifications.
So your colleagues' advice might be based on the simpler, less problematic implementation.
Depending on the leeway you are given, and in the interest of architectural research: a prototype with a load test for both implementations would be ideal. Hope never dies.
It's not the same:
A scheduler is when you explicitly choose when to make the request. You can do it every second, every minute, or whatever; every time, you create a new request.
Pub/sub is when you create a permanent connection to the source of events, and when an event is published you consume it. You don't have multiple requests here; it's more like a socket connection.
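To make the contrast concrete, here is a minimal sketch of both styles in plain Java (the DataSource interface and its methods are hypothetical stand-ins for the real API):

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;
    import java.util.function.Consumer;

    public class PollingVsPubSub {
        /** Hypothetical data source supporting both access styles. */
        interface DataSource {
            String fetchCurrent();                       // used by the poller
            void addListener(Consumer<String> listener); // used by pub/sub
        }

        /** Scheduler style: a new request every second, whether or not anything changed. */
        static void poll(DataSource source) {
            ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
            scheduler.scheduleAtFixedRate(
                () -> updateImage(source.fetchCurrent()), 0, 1, TimeUnit.SECONDS);
        }

        /** Pub/sub style: the callback fires only when an event is actually published. */
        static void subscribe(DataSource source) {
            source.addListener(PollingVsPubSub::updateImage);
        }

        static void updateImage(String data) {
            System.out.println("Updating image for: " + data);
        }
    }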

Notifications of Lease expiration in Jini/Apache-River and JavaSpaces

I've been looking at the use of Leases, and specifically their expiration. I am a little confused about how to reflect this expiration in a client-side application. This is pretty trivial using some kind of polling mechanism, but after seeing the implementation of .notify on objects being written to the space, I was wondering if there is something better.
Is there a way to be notified of a Lease expiration? Or is there some sort of accepted solution on how to poll for these expirations?
I have read several sources (e.g. http://www.javacoffeebreak.com/books/extracts/jini/Lease.html), and they mention ways to be notified of this expiration but I cannot see any examples. JavaDocs hint that LeaseRenewalManager might be of use, but my initial tests haven't really given me anything.
After continued research into this, it appears nothing exists for this purpose aside from monitoring Leases via some sort of polling timer (bleh).
Unfortunately this just does not appear to be possible.
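For completeness, a minimal sketch of that polling-timer approach, assuming the standard net.jini.core.lease.Lease interface (the check interval and callback are arbitrary choices for the example):

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;
    import net.jini.core.lease.Lease;

    public class LeaseExpirationWatcher {
        private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

        /** Polls the lease and runs the callback once its expiration time has passed. */
        public void watch(Lease lease, Runnable onExpired) {
            scheduler.scheduleAtFixedRate(() -> {
                // Lease.getExpiration() returns the absolute expiration time in millis
                if (System.currentTimeMillis() >= lease.getExpiration()) {
                    onExpired.run();
                    scheduler.shutdown(); // stop polling once expired
                }
            }, 0, 5, TimeUnit.SECONDS);
        }
    }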

How to properly throttle web requests to external systems?

My Java web application pulls some data from external systems (JSON over HTTP), both live (whenever a user of my application requests it) and in batch (nightly updates for cases where no user has requested it). The data changes often, so caching options are likely exhausted.
The external systems have some throttling in place, the exact parameters of which I don't know, and which likely change depending on system load (e.g., at peak times 10 requests per second from one IP address, at off-peak times 100 requests per second from one IP address). If the requests are too frequent, they time out or return HTTP 503.
Right now I attempt each request 5 times with a 2000 ms delay between attempts, giving up if an error is received every time. This is not optimal: sometimes at peak times nearly all requests fail, and I could avoid making those requests and perhaps let at least some others succeed instead.
My goals are a somewhat simple, reliable design, and enough flexibility that I can both pull metrics from the throttler to understand how well the external systems are responding (and thus adjust how often they are invoked), and auto-adjust the interval at which I call them (individually per system) so that it is optimal during both off-peak and peak hours.
My infrastructure is Java with RabbitMQ over MongoDB over Linux.
I'm thinking of three main options:
Since I already use RabbitMQ for batch processing, I could introduce a queue to which the web processes send their requests for external systems; worker processes would then read from that queue, throttle themselves as needed, and return the results. This would allow running multiple parallel worker processes on more servers if needed. My main concerns are that it isn't a very simple solution, and how to handle peak-hour throughput being low, with the web processes waiting a long while. It also turns RabbitMQ into a critical single point of failure: if it dies, the whole system stops (as opposed to the nightly batch processes just not running any more, which is less critical). I suppose RPC is the correct RabbitMQ usage pattern here, but I'm not sure. Edit: I've posted a related question, How to properly implement RabbitMQ RPC from Java servlet web container?, on how to implement this.
Introduce nginx (e.g., ngx_http_limit_req_module), HAProxy (link), or other proxy software into the mix (as reverse proxies?) and have it take care of the throttling through some configuration magic. The pro is that I don't have to make code changes. The cons are that it adds another technology to the stack, and one I've not used before, so the chances of misconfiguring something are quite high. It would also likely not be easy to do dynamic throttling based on external server load, to prioritize live requests over batch requests, or to get statistics on how the throttling is doing. Also, most documentation and examples will likely cover throttling incoming requests, not outgoing ones.
Write a pure-Java solution (e.g., a leaky bucket implementation). It would be simple in the sense that it is "just code", but the devil is in the details; debugging all the deadlocks, starvation, and race conditions isn't always fun. A sketch of what this might look like is below.
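For illustration only, here is a minimal sketch of option 3 as a token bucket (a close relative of the leaky bucket); the class name and parameters are made up for the example:

    import java.util.concurrent.TimeUnit;

    /** A simple thread-safe token bucket: permits refill at a fixed rate up to a cap. */
    public class TokenBucket {
        private final double refillPerNano; // tokens added per nanosecond
        private final double capacity;
        private double tokens;
        private long lastRefill = System.nanoTime();

        public TokenBucket(double tokensPerSecond, double capacity) {
            this.refillPerNano = tokensPerSecond / TimeUnit.SECONDS.toNanos(1);
            this.capacity = capacity;
            this.tokens = capacity;
        }

        /** Blocks until a token is available, then consumes it. */
        public synchronized void acquire() throws InterruptedException {
            while (true) {
                refill();
                if (tokens >= 1) {
                    tokens -= 1;
                    return;
                }
                wait(10); // re-check shortly; releases the lock while waiting
            }
        }

        private void refill() {
            long now = System.nanoTime();
            tokens = Math.min(capacity, tokens + (now - lastRefill) * refillPerNano);
            lastRefill = now;
        }
    }

Each external system would get its own bucket (e.g., bucket.acquire() before every HTTP call), and adjusting tokensPerSecond at runtime is the hook for the auto-tuning mentioned above; Guava's RateLimiter is a ready-made variant of this idea.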
What am I missing here?
Which is the best solution in this case?
P.S. A somewhat related question: what's the proper approach to logging all the external system invocations, so that statistics are collected on how often I invoke them and what the success rate is?
E.g., after every invocation I'd invoke something like .logExternalSystemInvocation(externalSystemName, wasSuccessful, elapsedTimeMills), and then get some aggregate data out of it whenever needed.
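For illustration, the call site I have in mind could be backed by simple in-memory counters like this (all names here are hypothetical):

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.atomic.LongAdder;

    /** Hypothetical sketch: per-system success/failure counts and total elapsed time. */
    public class InvocationStats {
        static class Stats {
            final LongAdder successes = new LongAdder();
            final LongAdder failures = new LongAdder();
            final LongAdder totalMillis = new LongAdder();
        }

        private final Map<String, Stats> bySystem = new ConcurrentHashMap<>();

        public void logExternalSystemInvocation(
                String externalSystemName, boolean wasSuccessful, long elapsedTimeMillis) {
            Stats stats = bySystem.computeIfAbsent(externalSystemName, k -> new Stats());
            (wasSuccessful ? stats.successes : stats.failures).increment();
            stats.totalMillis.add(elapsedTimeMillis);
        }

        public double successRate(String externalSystemName) {
            Stats stats = bySystem.getOrDefault(externalSystemName, new Stats());
            long total = stats.successes.sum() + stats.failures.sum();
            return total == 0 ? 0.0 : (double) stats.successes.sum() / total;
        }
    }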
Is there a standard library/tool to use, or do I have to roll my own?
If I use option 1 with RabbitMQ, is there a way to organize the flow so that I get this out of the box from the RabbitMQ console? I wouldn't want to send all failed messages to a poison queue; it would fill up too quickly, and in most cases there is no need to re-process these failed requests, as the user has sadly already moved on.
Perhaps this open source system can help you a little: http://code.google.com/p/valogato/

When, where & how should queues be used?

I'm new to enterprise Java development, although I'm sure this question equally applies to any language or platform, such as .NET.
For the first time ever I'm now dealing with message queues, and I'm very intrigued by them (specifically, we're using ActiveMQ). My tech lead wants ActiveMQ queues to be the front-runners to all of our databases and internal web services; thus, instead of a database query being fired off from the client and going directly to the database, it gets queued up first.
My question is this: are queues the way to go with every major processing component? Do best practices dictate putting them in front of system components that usually get hit with large amounts of requests? Are there situations where queues should not be used?
Thanks for any insight here!
Here are some examples where a message queue might be useful.
Limited resources
Let's say you have a large number of users making requests to a service. If the service can only handle a small number of requests concurrently, then you might use a queue as a buffer.
Service decoupling
A key enterprise integration concept is the decoupling of systems in, for example, a workflow. Instead of having systems talk directly to each other, they asynchronously post messages to queues. The integration component then routes and delivers each message to the appropriate system.
Message replay
In the above example, queues can also provide reliable delivery and processing of requests. If one component of the workflow breaks, the others are unaffected and can still operate and post messages to the broken component. When the broken component recovers, it can process all the queued-up messages.
The key concepts here are load throttling, loose coupling, reliability, and asynchronous operation.
As to whether they are the way to go for every major component, I would say no; this is not an automatic choice, and you must consider each component individually. A tiny sketch of the buffering and decoupling ideas follows.
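Here is an in-memory illustration using java.util.concurrent (a real system would use a broker such as ActiveMQ, but the shape is the same; names and sizes are arbitrary):

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    public class QueueDecouplingDemo {
        public static void main(String[] args) {
            // Bounded queue: the buffer in front of the limited resource
            BlockingQueue<String> queue = new ArrayBlockingQueue<>(100);

            // Producer: posts messages and moves on; knows nothing of the consumer
            Runnable producer = () -> {
                for (int i = 0; i < 5; i++) {
                    try {
                        queue.put("request-" + i); // blocks if the buffer is full
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        return;
                    }
                }
            };

            // Consumer: drains at its own pace; if it is down, messages simply wait
            Runnable consumer = () -> {
                try {
                    while (true) {
                        System.out.println("processing " + queue.take());
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            };

            new Thread(producer).start();
            new Thread(consumer).start();
        }
    }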
Queues are indeed a very powerful and useful tool, but like every tool you should only use it for the job it is intended.
IMO they are not the way to go for every major processing component.
As a general rule I would use a queue where the requesting resource does not require an immediate, synchronous response. I would not use a queue where the timeliness and order of processing is vital.
Where asynchronous processing is allowable and you wish to regulate the amount of traffic to a service then a queue may be the way to go.
See @Qwerky's answer too; he (or she) makes some good points.
Please check out this:
http://code.google.com/p/disruptor/
Queues are not the only thing out there to solve these kinds of problems.
Answering your question: queues in this case will introduce asynchronous behavior in access to your databases, so it is more a question of whether you can afford such a great impact on your legacy systems. It just might be too much of a change to push everything through queues. Please describe the general purpose of your systems; then it will be easier to answer your question fully.
Message queues are fundamentally an asynchronous communication system. In this case, it means that aside from the queue that links the sender and receiver, both sender and receiver operate independently; a receiver of a message does not (and should not) require interaction with the sender. Similarly, a sender of a message does not (and should not) require interaction with receiver.
If the sender needs to wait for the result of processing a message, then a message queue may not be a good solution, as this would force an asynchronous system to be synchronous, against the core design. It might be possible to construct a synchronous communication system on top of a message queue, but the fundamental asynchronous nature of a message queue would make this conversion awkward.

Prevent client from overloading server?

I have a Java servlet that's getting overloaded by client requests during peak hours. Some clients spawn concurrent requests, and sometimes the number of requests per second is just too great.
Should I implement application logic to restrict the number of requests a client can send per second? Does this need to be done at the application level?
The two most common ways of handling this are to turn away requests when the server is too busy, or handle each request slower.
Turning away requests is easy; just run a fixed number of instances. The OS may or may not queue up a few connection requests, but in general the users will simply fail to connect. A more graceful way of doing it is to have the service return an error code indicating the client should try again later.
Handling requests more slowly is a bit more work, because it requires separating the servlet that handles the requests from the class doing the work in a different thread. You can have a larger number of servlets than worker bees. When a request comes in, the servlet accepts it, waits for a worker bee, grabs it and uses it, frees it, then returns the results.
The two can communicate through one of the classes in java.util.concurrent, like LinkedBlockingQueue or ThreadPoolExecutor. If you want to get really fancy, you can use something like a PriorityBlockingQueue to serve some customers before others. A sketch follows.
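A minimal sketch of that servlet/worker split, using a bounded ThreadPoolExecutor (the pool size and queue length are arbitrary):

    import java.util.concurrent.Future;
    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    public class WorkerBeePool {
        // 4 worker bees; up to 50 requests may wait in the queue
        private final ThreadPoolExecutor workers = new ThreadPoolExecutor(
            4, 4, 60, TimeUnit.SECONDS,
            new LinkedBlockingQueue<>(50),
            new ThreadPoolExecutor.AbortPolicy()); // reject (turn away) when full

        /** Called from the servlet: queue the work and wait for the result. */
        public String handle(String request) throws Exception {
            Future<String> result = workers.submit(() -> doExpensiveWork(request));
            return result.get(); // the servlet thread waits for a worker bee
        }

        private String doExpensiveWork(String request) {
            return "processed " + request; // stand-in for the real work
        }
    }

When the queue is full, submit throws RejectedExecutionException, which the servlet can translate into an HTTP 503, i.e., the "try again later" error code mentioned above.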
Me, I would throw more hardware at it like Anon said ;)
Some solid answers here. I think more hardware is the way to go. Having too many clients or traffic is usually a good problem to have.
However, if you absolutely must throttle clients, there are some options.
The most scalable solutions that I've seen revolve around a distributed caching system, like Memcached, and using integers to keep counts.
Figure out a rate at which your system can handle traffic. Either overall, or per client. Then put a count into memcached that represents that rate. Each time you get a request, decrement the value. Periodically increment the counter to allow more traffic through.
For example, if you can handle 10 requests/second, add a count of 50 every 5 seconds, up to a maximum of 50. That way you aren't refilling it all the time, but you can still handle a bit of bursting, limited to a window. You will need to experiment to find a good refresh rate. The key for this counter can either be global, or based on user ID if you need to restrict per user.
The nice thing about this system is that it works across an entire cluster AND the mechanism that refills the counters need not be in one of your current servers. You can dedicate a separate process for it. The loaded servers only need to check it and decrement it.
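A sketch of the decrement-and-refill idea; the CounterStore interface here is a hypothetical stand-in for a distributed cache client such as a memcached library:

    /** Hypothetical stand-in for a distributed atomic counter (e.g., backed by memcached). */
    interface CounterStore {
        long decrementAndGet(String key); // atomic decrement
        void set(String key, long value); // overwrite (used by the refiller)
        long get(String key);
    }

    public class ClusterRateLimiter {
        private final CounterStore store;
        private final String key;
        private final long maxTokens;

        public ClusterRateLimiter(CounterStore store, String key, long maxTokens) {
            this.store = store;
            this.key = key;
            this.maxTokens = maxTokens;
        }

        /** Called by each loaded server on every request. */
        public boolean tryAcquire() {
            return store.decrementAndGet(key) >= 0;
        }

        /** Called by a dedicated refiller process every few seconds. */
        public void refill(long tokensToAdd) {
            long current = Math.max(0, store.get(key));
            store.set(key, Math.min(maxTokens, current + tokensToAdd));
        }
    }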
All that being said, I'd investigate other options first. Throttling your customers is usually a good way to annoy them. Most probably NOT the best idea. :)
I'm assuming you're not in a position to increase capacity (either via hardware or software), and you really just need to limit the externally-imposed load on your server.
Dealing with this from within your application should be avoided unless you have very special needs that are not met by the existing solutions out there, which operate at HTTP server level. A lot of thought has gone into this problem, so it's worth looking at existing solutions rather than implementing one yourself.
If you're using Tomcat, you can configure the maximum number of simultaneous requests allowed via the maxThreads and acceptCount settings. Read the introduction at http://tomcat.apache.org/tomcat-6.0-doc/config/http.html for more info on these.
For more advanced controls (like per-user restrictions), if you're proxying through Apache, you can use a variety of modules to help deal with the situation. A few modules to google for are limitipconn, mod_bw, and mod_cband. These are quite a bit harder to set up and understand than the basic controls that are probably offered by your appserver, so you may just want to stick with those.
