Notifications of Lease expiration in Jini/Apache-River and JavaSpaces

I've been looking at the use of Leases, and specifically their expiration. I am a little confused about how to reflect this expiration in a client-side application. It's pretty trivial using some kind of polling mechanism - but after seeing how notify works for objects written to the space, I was wondering if there is something better.
Is there a way to be notified of a Lease expiration? Or is there some sort of accepted solution on how to poll for these expirations?
I have read several sources (e.g. http://www.javacoffeebreak.com/books/extracts/jini/Lease.html) that mention ways to be notified of this expiration, but I cannot find any examples. The Javadocs hint that LeaseRenewalManager might be of use, but my initial tests haven't turned up anything.

After continued research, it appears nothing exists for this purpose aside from monitoring Leases via some sort of polling timer (bleh).
Unfortunately, direct notification of expiration just does not appear to be possible.
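For completeness, here is a minimal sketch of that polling workaround, assuming you hold a net.jini.core.lease.Lease reference; the LeaseExpirationListener interface is hypothetical, not part of the Jini API:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

import net.jini.core.lease.Lease;

// Sketch of the polling workaround: check Lease.getExpiration() on a timer
// and fire a callback once the lease has lapsed. LeaseExpirationListener is
// a hypothetical interface, not part of the Jini API. Note that
// getExpiration() is an absolute time, so clock skew between client and
// service makes this check approximate.
public class LeaseExpirationPoller {

    public interface LeaseExpirationListener {
        void leaseExpired(Lease lease);
    }

    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    public void watch(Lease lease, LeaseExpirationListener listener, long pollMillis) {
        scheduler.scheduleAtFixedRate(() -> {
            if (System.currentTimeMillis() >= lease.getExpiration()) {
                listener.leaseExpired(lease);
                scheduler.shutdown(); // stop polling once the lease has expired
            }
        }, 0, pollMillis, TimeUnit.MILLISECONDS);
    }
}
```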

Related

What is the difference between scheduling a GET API call every second and using a subscriber API?

I am writing a Java application where an image should change when the data changes.
My colleagues are asking me to use a scheduler that calls a GET API every second.
My suggestion is to use pub-sub, so that the data is updated only when an event happens.
Are subscriber and scheduler one and the same?
Publish/subscribe is a nicer option, theoretically.
The differences:
Polling is a kind of busy waiting; with multiple clients it causes superfluous network traffic. The client is active.
Publish/subscribe needs an active server that pushes notifications to all subscribers. There is now sufficient support for this in HTML5/JavaScript and in Java. The server is active.
Unfortunately, publish/subscribe will probably be a bit harder to realize. It would be best to make a proof of concept in a separate application first; things like asynchronous Ajax might come into play.
Also, some publish/subscribe libraries might still use polling on the client side under the hood, instead of push notifications.
So your colleagues' advice might be based on the simpler, less problematic implementation.
Depending on the leeway you are given, and in the interest of architectural research: a prototype with a load test for both implementations would be fine. Hope never dies.
It's not the same:
A scheduler is when you explicitly choose when to make the request. You can do it every second, every minute, or whatever; every time, you create a new request.
Pub-sub is when you create a permanent connection to the source of events, and when an event is published you consume it. You don't have multiple requests here; it's more like a socket connection.
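To make the distinction concrete, here is a minimal sketch (all names are illustrative): the scheduler issues a fresh request every second regardless of changes, while the pub-sub publisher pushes to subscribers only when an event occurs:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class PollingVsPubSub {

    // Scheduler/polling: the client is active, issuing a new request every
    // second whether or not anything changed.
    static void schedulePolling(Runnable fetchData) {
        ScheduledExecutorService scheduler =
                Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(fetchData, 0, 1, TimeUnit.SECONDS);
    }

    // Pub-sub: subscribers register once; the publisher is active and pushes
    // to them only when an event actually occurs.
    static class Publisher {
        private final List<Runnable> subscribers = new CopyOnWriteArrayList<>();

        void subscribe(Runnable onEvent) {
            subscribers.add(onEvent);
        }

        void publish() {
            subscribers.forEach(Runnable::run);
        }
    }

    public static void main(String[] args) {
        schedulePolling(() -> System.out.println("GET /image-data")); // every second

        Publisher publisher = new Publisher();
        publisher.subscribe(() -> System.out.println("data changed, update image"));
        publisher.publish(); // only when the data actually changes
    }
}
```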

Queuing operations inbound from API

I've written a web service (DropWizard) that accepts requests via POST to perform operations that may take considerable time, anywhere from 1 to 5 minutes to complete.
That said, the caller doesn't need a response; a simple 200 to acknowledge receipt of the message is enough. It's actually a PayPal IPN webhook, for anybody who is curious.
I only want to perform one of these operations at a time (with the option to increase this in the future) so that my system doesn't overload.
What kind of queue mechanism should I consider using? This probably goes without saying, but I must assume that the API instance can be killed at any time, clearing memory, so I need a durable place to store the queue and resume where the server left off when restarted.
Thank you.
You could use Apache Kafka. The documentation is pretty clear and should help you out.
http://kafka.apache.org/
Hope that helps!
You can use ActiveMQ with persistence. It's very lightweight and easy to use. Have a look at http://activemq.apache.org/persistence.html; it will guide you through the process step by step.
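A rough sketch of the producer side, using the standard JMS API against a persistent ActiveMQ broker; the broker URL and queue name are illustrative:

```java
import javax.jms.Connection;
import javax.jms.DeliveryMode;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

import org.apache.activemq.ActiveMQConnectionFactory;

// Sketch: enqueue an IPN payload as a persistent JMS message so it survives
// a broker or application restart. The broker URL and queue name are
// illustrative.
public class IpnEnqueuer {

    public static void enqueue(String ipnPayload) throws Exception {
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        try {
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue("paypal-ipn");
            MessageProducer producer = session.createProducer(queue);
            producer.setDeliveryMode(DeliveryMode.PERSISTENT); // written to disk
            TextMessage message = session.createTextMessage(ipnPayload);
            producer.send(message);
        } finally {
            connection.close();
        }
    }
}
```

On the consuming side, a single consumer on one session processes messages serially, which matches the one-operation-at-a-time requirement, and the persistent store lets it resume where it left off after a restart.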

Java implementation for consensus algorithm

I am working on a distributed system and have to implement a consensus algorithm (preferably Paxos). I was looking for an API I could use to reach consensus, but I could only stumble upon Apache ZooKeeper, which provides this facility. However, I cannot use ZooKeeper because it fails when a majority of the servers are down, and that does not fit my problem. Is there any other API or open source project that can help me avoid coding the implementation from scratch?
You cannot solve consensus when a majority of servers is down, unless you have some way to tell with absolute certainty that they are indeed down, which is unlikely. ZooKeeper is thus correct, as it doesn't promise you the impossible.
Allow me to explain. Consider that you have 3 servers. One of them suspects that the remaining 2 have failed (e.g. missed some heartbeat) and proceeds to decide alone on the outcome of consensus. If the remaining 2 have not failed, they might decide differently, thus leading to inconsistency. This is safety violation, also known informally as the "split-brain" problem.
Note that even if you have a STONITH device that allows servers to shut down others, the previous situation might lead to everyone being shut down, thus making the system unavailable as a whole. This is a liveness violation.
Finally, if you have a really good STONITH device that never kills the last server standing, you don't need an algorithm to solve consensus. Just use the STONITH to try to kill everybody, and let the surviving server become the leader and decider. That STONITH is the consensus implementation.
So, stick with ZooKeeper.
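If you do stick with ZooKeeper, you don't have to build the recipes yourself; Apache Curator ships common ones. A minimal leader-election sketch with Curator's LeaderLatch, where the connection string and latch path are illustrative:

```java
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.recipes.leader.LeaderLatch;
import org.apache.curator.retry.ExponentialBackoffRetry;

// Sketch: leader election on top of ZooKeeper via Curator's LeaderLatch.
// Note that await() blocks while a ZooKeeper majority is unavailable,
// which is exactly the safety property discussed above, not a defect.
public class LeaderElectionSketch {

    public static void main(String[] args) throws Exception {
        CuratorFramework client = CuratorFrameworkFactory.newClient(
                "zk1:2181,zk2:2181,zk3:2181",
                new ExponentialBackoffRetry(1000, 3));
        client.start();

        LeaderLatch latch = new LeaderLatch(client, "/app/leader");
        latch.start();
        latch.await(); // returns only once this node has been elected leader

        System.out.println("Elected leader; safe to decide.");
    }
}
```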

Implementing a realtime, web dashboard

I'd like to implement a web-based dashboard with a variety of metrics, where one changes every minute and others change about twice a day. Via AJAX, the metrics should be updated as quickly as possible when a change occurs. This means the same page would be running for at least several hours.
What would be the most efficient way (technology-/implementation-wise) of dealing with this in the Java world?
Well, there are two obvious options here:
Comet, aka long polling: the AJAX request is held open by the server until it times out after a few minutes or until a change occurs, whichever happens first. The downside of this is that handling many connections can be tricky; aside from anything else, you won't want the typical "one thread per request, handling it synchronously" model which is common.
Frequent polling from the AJAX page, where each request returns quickly. This would probably be simpler to implement, but is less efficient in network terms (far more requests) and will be less immediate; you could send a request every 5 seconds for example, but if you have a lot of users you're going to end up with a lot of traffic.
The best solution will depend on how many users you've got. If there are only going to be a few clients, you may well want to go for the "poll every 5 seconds" approach - or even possibly long polling with a thread per request (although that will probably be slightly harder to implement). If you've got a lot of clients I'd definitely go with long polling, but you'll need to look at how to detach the thread from the connection in your particular server environment.
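If you go the long-polling route on a stock servlet container, Servlet 3.0's async support is one way to detach the thread from the held connection. A rough sketch; MetricsSource here is a hypothetical component that invokes a callback on the next metric change:

```java
import java.io.IOException;
import java.util.function.Consumer;

import javax.servlet.AsyncContext;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Sketch of Comet-style long polling with Servlet 3.0 async support. No
// thread is held while waiting; the container resumes the request when the
// callback fires.
@WebServlet(urlPatterns = "/metrics", asyncSupported = true)
public class MetricsLongPollServlet extends HttpServlet {

    private final MetricsSource metrics = new MetricsSource();

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
        AsyncContext ctx = req.startAsync();
        ctx.setTimeout(120_000); // give up after two minutes; the client re-polls

        metrics.onNextChange(json -> {
            try {
                ctx.getResponse().setContentType("application/json");
                ctx.getResponse().getWriter().write(json);
            } catch (IOException ignored) {
                // client went away; nothing to do
            } finally {
                ctx.complete();
            }
        });
    }

    // Hypothetical stand-in: a real implementation would hold the callbacks
    // and fire them when a metric is updated.
    static class MetricsSource {
        void onNextChange(Consumer<String> callback) {
        }
    }
}
```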
I think the time of Comet has gone; the newer Socket.IO protocol is gaining popularity. I suggest using netty-socketio, which supports both long-polling and WebSocket transports. JavaScript, iOS, and Android client libraries are also available.

Prevent client from overloading server?

I have a Java servlet that's getting overloaded by client requests during peak hours. Some clients spawn concurrent requests, and sometimes the number of requests per second is just too great.
Should I implement application logic to restrict the number of requests a client can send per second? Does this need to be done at the application level?
The two most common ways of handling this are to turn away requests when the server is too busy, or handle each request slower.
Turning away requests is easy; just run a fixed number of instances. The OS may or may not queue up a few connection requests, but in general the users will simply fail to connect. A more graceful way of doing it is to have the service return an error code indicating the client should try again later.
Handling requests more slowly is a bit more work, because it requires separating the servlet that handles the requests from the class doing the work, running in a different thread. You can have a larger number of servlets than worker bees. When a request comes in, the servlet accepts it, waits for a worker bee, grabs and uses it, frees it, then returns the results.
The two can communicate through one of the classes in java.util.concurrent, like LinkedBlockingQueue or ThreadPoolExecutor (see the sketch at the end of this answer). If you want to get really fancy, you can use something like a PriorityBlockingQueue to serve some customers before others.
Me, I would throw more hardware at it like Anon said ;)
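A rough illustration of that hand-off; the pool size and queue capacity are arbitrary:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Sketch: a small pool of "worker bees" behind a bounded queue. Servlet
// threads hand work off here; when the backlog is full, the submission is
// rejected, which maps naturally to a "try again later" response.
public class WorkerPool {

    private final ThreadPoolExecutor workers = new ThreadPoolExecutor(
            4, 4,                                  // fixed number of workers
            0L, TimeUnit.MILLISECONDS,
            new ArrayBlockingQueue<>(100),         // bounded backlog
            new ThreadPoolExecutor.AbortPolicy()); // reject when full

    /** Returns true if the job was accepted, false if the server is too busy. */
    public boolean submit(Runnable job) {
        try {
            workers.execute(job);
            return true;
        } catch (RejectedExecutionException busy) {
            return false; // caller should return an error code, e.g. HTTP 503
        }
    }
}
```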
Some solid answers here. I think more hardware is the way to go. Having too many clients or traffic is usually a good problem to have.
However, if you absolutely must throttle clients, there are some options.
The most scalable solutions that I've seen revolve around a distributed caching system, like Memcached, and using integers to keep counts.
Figure out a rate at which your system can handle traffic. Either overall, or per client. Then put a count into memcached that represents that rate. Each time you get a request, decrement the value. Periodically increment the counter to allow more traffic through.
For example, if you can handle 10 requests per second, add 50 to the count every 5 seconds, up to a maximum of 50. That way you aren't refilling it all the time, but you can still handle a bit of bursting, limited to a window. You will need to experiment to find a good refresh rate. The key for this counter can either be a global key or based on user ID if you need to restrict per user.
The nice thing about this system is that it works across an entire cluster, AND the mechanism that refills the counters need not live on one of your current servers; you can dedicate a separate process to it. The loaded servers only need to check the counter and decrement it (a single-JVM sketch follows at the end of this answer).
All that being said, I'd investigate other options first. Throttling your customers is usually a good way to annoy them. Most probably NOT the best idea. :)
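That sketch, with an AtomicInteger standing in for the memcached counter; in a cluster you would use memcached's atomic incr/decr operations instead:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Single-JVM sketch of the decrement-and-refill scheme described above.
// In a cluster, the count would live in memcached and the AtomicInteger
// operations would become memcached's atomic incr/decr calls.
public class RequestThrottle {

    private static final int MAX_TOKENS = 50; // burst window: 10 req/s average
    private final AtomicInteger tokens = new AtomicInteger(MAX_TOKENS);

    public RequestThrottle() {
        ScheduledExecutorService refiller =
                Executors.newSingleThreadScheduledExecutor();
        // Refill the full window every 5 seconds, capped at the maximum.
        refiller.scheduleAtFixedRate(
                () -> tokens.updateAndGet(t -> Math.min(MAX_TOKENS, t + MAX_TOKENS)),
                5, 5, TimeUnit.SECONDS);
    }

    /** Returns true if the request may proceed, false if it should be rejected. */
    public boolean tryAcquire() {
        while (true) {
            int current = tokens.get();
            if (current <= 0) {
                return false; // over the limit
            }
            if (tokens.compareAndSet(current, current - 1)) {
                return true;
            }
        }
    }
}
```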
I'm assuming you're not in a position to increase capacity (either via hardware or software), and you really just need to limit the externally-imposed load on your server.
Dealing with this from within your application should be avoided unless you have very special needs that are not met by the existing solutions out there, which operate at HTTP server level. A lot of thought has gone into this problem, so it's worth looking at existing solutions rather than implementing one yourself.
If you're using Tomcat, you can configure the maximum number of simultaneous requests allowed via the maxThreads and acceptCount settings. Read the introduction at http://tomcat.apache.org/tomcat-6.0-doc/config/http.html for more info on these.
For more advanced controls (like per-user restrictions), if you're proxying through Apache, you can use a variety of modules to help deal with the situation. A few modules to google for are limitipconn, mod_bw, and mod_cband. These are quite a bit harder to set up and understand than the basic controls that are probably offered by your appserver, so you may just want to stick with those.
