I just read that it is recommended to use asynchronous method calls on the server via promises when executing long running requests. The documentation says this is because the Play server will block on the request and not be able to handle concurrent requests.
Does this mean all of my web requests should be asynchronous?
I'm just thinking that if I want to reduce my web pages' rendering times, I would make a series of Ajax calls to fetch the needed page regions concurrently. Since I would potentially make multiple Ajax calls, my Play controller methods need to be asynchronous.
Am I understanding this correctly? The syntax is quite verbose, so I want to make certain I don't take this concept overboard. It seems strange to me that I have to do this, given that other web servers such as GlassFish or IIS handle thread pooling automatically.
Here are some detailed docs on Play's thread pools: the various configurations, how to tune them, best practices, and so on:
http://www.playframework.com/documentation/2.2.x/ThreadPools
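For reference, an asynchronous action in Play 2.2's Java API looks roughly like this (a minimal sketch following the pattern in those docs; `intensiveComputation` is a stand-in for your long-running work):

```java
import play.libs.F.Function;
import play.libs.F.Function0;
import play.libs.F.Promise;
import play.mvc.Controller;
import play.mvc.Result;

public class Application extends Controller {

    // Returning Promise<Result> lets Play complete the HTTP response later,
    // without parking a request thread on the long-running computation.
    public static Promise<Result> index() {
        Promise<Integer> promiseOfInt = Promise.promise(
            new Function0<Integer>() {
                public Integer apply() {
                    return intensiveComputation(); // runs on a separate execution context
                }
            }
        );
        return promiseOfInt.map(
            new Function<Integer, Result>() {
                public Result apply(Integer i) {
                    return ok("Got result: " + i);
                }
            }
        );
    }

    private static int intensiveComputation() {
        return 42; // stand-in for real work
    }
}
```

The verbosity largely comes from the anonymous inner classes; the concept itself is just "return a promise of a result instead of the result".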
Related
Hi, I am using Spring 4's AsyncRestTemplate to make 10k REST API calls to a web service. I have a method that creates the request object and a method that calls the web service. I am using ListenableFuture, and the two methods to create and call are enclosed in another method where the response is handled in the future. Any useful links for such a task would be greatly appreciated.
First, set up your testing environment.
Then benchmark what you have.
Then adjust your code and compare (repeat as necessary).
Whatever you do, there is a cost associated with it. You need to be sure that your costs are measured and understood, every step of the way.
A simple Tomcat application might outperform a Spring application or be equivalent depending on what aspects of Spring's inversion of control are being leveraged. Using a Future might be fast or slow, depending on what it is being compared to. Using non-NIO might be faster or slower, depending on the implementation and the data being processed.
I have been searching SO and the web and I don't seem to find a concrete example (or maybe it's just me not getting it), so maybe you can help me out.
I have created a servlet that extends HttpServlet, running on Tomcat 7. The doGet successfully accesses a file, performs a long write operation, and returns the results to the requester.
Now my goal is to handle requests that come in at the same time, i.e., queue them and execute them one after the other.
Any idea how to do that? Any example to follow?
Thank you
Tomcat will handle multiple incoming requests automatically; server.xml has a maxThreads value you can configure. Note that there will be only one instance of the servlet, so make sure it doesn't hold any shared mutable state.
On a related note, you generally shouldn't run long tasks on the request thread; put them on a separate thread instead. Servlet 3.0 allows for much easier asynchronous processing, so the Tomcat threads will be free to handle more requests. If async processing in servlets is new to you, check out this introduction: http://www.javaworld.com/article/2077995/java-concurrency/asynchronous-processing-support-in-servlet-3-0.html
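A minimal sketch of that pattern (the servlet name and pool size are illustrative; assumes a Servlet 3.0 container such as Tomcat 7):

```java
import java.io.IOException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import javax.servlet.AsyncContext;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet(urlPatterns = "/longtask", asyncSupported = true)
public class LongTaskServlet extends HttpServlet {

    // Our own worker pool, so Tomcat's request threads are released immediately
    private ExecutorService workers;

    @Override
    public void init() {
        workers = Executors.newFixedThreadPool(4);
    }

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
        final AsyncContext ctx = req.startAsync();
        workers.submit(new Runnable() {
            public void run() {
                try {
                    // ... long file-writing operation goes here ...
                    ctx.getResponse().getWriter().write("done");
                } catch (IOException e) {
                    // log the failure, then fall through to complete()
                } finally {
                    ctx.complete(); // finishes the response
                }
            }
        });
    }

    @Override
    public void destroy() {
        workers.shutdown();
    }
}
```

If you really do want requests executed one after the other, as the question asks, swapping in `Executors.newSingleThreadExecutor()` gives you exactly that queue.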
I have a number of low-level methods in my play! 2.0 application (Java) that are calling an external Web Service (Neo4j via REST, to be specific). Each of them returns a Promise<WS.Response>. To test these methods I am currently installing a Callback<WS.Response> on their return values via onRedeem. The Callbacks contain the assertions to perform on individual WS.Responses. Each test relies on some specific fixtures that I am installing/removing via setUpClass and tearDownClass, respectively.
The problem that I am facing is that due to my test code being fully asynchronous, the tear-down logic ends up getting called before all of the Callbacks have had a chance to run. As a result, not all fixtures are being removed, and the database is left in a state that is different from the state it was in before running the tests.
One way to fix this problem would be to call get() with some arbitrary timeout on the Promise objects returned by the functions that are being tested, but that solution seems fairly brittle and unreliable to me. (What if, for some reason not under my application's control, the Web Service calls do not complete within the timeout? In that case, my tests would fail or error out even though my code is actually correct.)
So my question is: Is there a way of testing code that calls external Web Services that is non-blocking and still ensures database consistency? And if there isn't, which of the two approaches outlined above is the "canonical"/accepted way of testing this kind of code?
What if, for some reason not under my application's control, the Web Service calls do not complete within the timeout?
That is a problem for any test that calls external web services, whether asynchronous or not. That is why you should mock out your web service calls in some way, either using a fake web service or a fake implementation of the code that accesses the web service.
You can use e.g. Betamax for that.
I have written testing code for asynchronous code before and I believe your "brittle" approach is actually the right one.
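For what it's worth, the blocking variant stays short; a minimal sketch, assuming Play 2.x's `F.Promise.get(long, TimeUnit)` and a hypothetical `NodeService.createNode` as the method under test:

```java
import static org.junit.Assert.assertEquals;

import java.util.concurrent.TimeUnit;

import org.junit.Test;

import play.libs.F.Promise;
import play.libs.WS;

public class NodeServiceTest {

    @Test
    public void createNodeReturnsCreated() {
        // method under test (hypothetical); returns Promise<WS.Response>
        Promise<WS.Response> promise = NodeService.createNode("test-node");

        // Block with a generous timeout so tearDownClass cannot run before
        // the assertion does; a timeout fails the test loudly instead of
        // leaving stale fixtures behind.
        WS.Response response = promise.get(30, TimeUnit.SECONDS);

        assertEquals(201, response.getStatus());
    }
}
```

A generous timeout is not the same brittleness as a short arbitrary one: if the (ideally mocked) service doesn't answer within 30 seconds, something is genuinely wrong and the test should fail.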
I'm developing an MVC spring web app, and I would like to store the actions of my users (what they click on, etc.) in a database for offline analysis. Let's say an action is a tuple (long userId, long actionId, Date timestamp). I'm not specifically interested in the actions of my users, but I take this as an example.
I expect a lot of actions from a lot of (different) users per minute (or even per second), so processing time is crucial.
In my current implementation, I've defined a datasource with a connection pool to store the actions in a database. I call a service from the request method of a controller, and this service calls a DAO which saves the action into the database.
This implementation is not efficient because the response to the user has to wait for the whole call chain, from the controller all the way down to the database, to complete. I was therefore thinking of wrapping this "action saving" in a thread, so that the response to the user is faster; the thread does not need to finish before the response is sent.
I have no experience with these massive, concurrent, time-critical applications, so any feedback or comments would be very helpful.
Now my questions are:
How would you design such a system?
Would you implement a service and then wrap it in a thread that is called on every action?
What should I use?
I checked Spring Batch and its JobLauncher, but I'm not sure it is the right thing for me.
What happens when there are concurrent accesses at the controller, service, DAO, and datasource levels?
In more general terms, what are the best practices for designing such applications?
Thank you for your help!
Take a singleton object at the application level and update it with every user action.
This singleton should hold a HashMap (or similar collection) that is flushed to the database as a Spring Batch job once it reaches a threshold, say 10,000 entries.
Also, periodically clean it up to the last record processed; you could also re-initialize the singleton instance weekly or monthly. Remember that this can lead to consistency issues if your app is deployed across multiple JVMs, since each JVM gets its own singleton. You should also prevent the singleton from being cloned (e.g., by throwing CloneNotSupportedException).
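A rough sketch of that buffering idea (class names, the threshold, and the DAO are assumptions; a concurrent queue is used instead of a raw HashMap so appends need no locking):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

public class ActionBuffer {

    private static final ActionBuffer INSTANCE = new ActionBuffer();
    private static final int THRESHOLD = 10000;

    // UserAction is the (long userId, long actionId, Date timestamp)
    // tuple from the question
    private final Queue<UserAction> buffer = new ConcurrentLinkedQueue<UserAction>();

    private ActionBuffer() { }

    public static ActionBuffer getInstance() {
        return INSTANCE;
    }

    // Called from the controller; cheap, never touches the database
    public void record(UserAction action) {
        buffer.add(action);
        if (buffer.size() >= THRESHOLD) {
            flush();
        }
    }

    // Drain the queue and hand the batch to the persistence layer in one go
    private synchronized void flush() {
        List<UserAction> batch = new ArrayList<UserAction>();
        UserAction next;
        while ((next = buffer.poll()) != null) {
            batch.add(next);
        }
        if (!batch.isEmpty()) {
            ActionDao.batchInsert(batch); // hypothetical DAO / Spring Batch entry point
        }
    }
}
```

Note that `ConcurrentLinkedQueue.size()` is O(n); a real implementation would track the count in an `AtomicInteger` instead.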
Here's what I did for that:
Used AspectJ to mark all the user actions I wanted to collect.
Then I sent these to log4j with an asynchronous DB appender.
This lets you turn it on or off with the log4j logging level.
It works perfectly.
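In case it's useful, the aspect side of that can be as small as this (a sketch using Spring AOP's AspectJ annotations; the pointcut expression and logger name are assumptions, and the "user.actions" logger would be bound to an asynchronous appender in the log4j configuration):

```java
import org.apache.log4j.Logger;
import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.AfterReturning;
import org.aspectj.lang.annotation.Aspect;
import org.springframework.stereotype.Component;

@Aspect
@Component
public class UserActionLoggingAspect {

    // Dedicated logger category, attached to an asynchronous DB appender
    private static final Logger ACTIONS = Logger.getLogger("user.actions");

    // Fires after every controller method in the (hypothetical) web package
    @AfterReturning("execution(* com.example.web..*Controller.*(..))")
    public void logAction(JoinPoint jp) {
        // The level check makes collection a no-op when disabled in log4j config
        if (ACTIONS.isInfoEnabled()) {
            ACTIONS.info(jp.getSignature().toShortString());
        }
    }
}
```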
If you are interested in the actions your users take, you should be able to figure that out from the HTTP requests they send, so you might be better off logging the incoming requests in an Apache webserver that forwards to your application server. Putting a cluster of web servers in front of application servers is a typical practice (they're good for serving static content) and they are usually logging requests anyway. That way the logging will be fast, your application will not have to deal with it, and the biggest work will be writing a script to slurp the logs into a database where you can do analysis.
Typically it is considered bad form to spawn your own threads in a Java EE application.
A better approach would be to write to a local queue via JMS and then have a separate component, e.g., a message driven bean (pretty easy with EJB or Spring) which persists it to the database.
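Sketched with Spring's JMS support (the queue name and DAO are assumptions, UserAction is the tuple from the question, and it would need to implement Serializable for the default message converter):

```java
import java.util.Date;

import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.ObjectMessage;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.jms.core.JmsTemplate;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.ResponseBody;

@Controller
public class ActionController {

    @Autowired
    private JmsTemplate jmsTemplate;

    // Fire-and-forget: the response returns as soon as the message is enqueued
    @RequestMapping("/action")
    @ResponseBody
    public String action(@RequestParam long userId, @RequestParam long actionId) {
        jmsTemplate.convertAndSend("user.actions",
                new UserAction(userId, actionId, new Date()));
        return "ok";
    }
}

// Registered with a DefaultMessageListenerContainer on the same queue;
// persistence happens here, completely off the request thread.
class ActionPersister implements MessageListener {

    @Autowired
    private ActionDao actionDao; // hypothetical DAO

    public void onMessage(Message message) {
        try {
            UserAction action = (UserAction) ((ObjectMessage) message).getObject();
            actionDao.save(action);
        } catch (JMSException e) {
            throw new RuntimeException(e); // real code would use a dead-letter queue
        }
    }
}
```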
Another approach would be to just write to a log file and then have a process read the log file and write to the database once a day or whenever.
The things to consider are:
How up-to-date do you need the information to be?
How critical is the information, can you lose some?
How reliable does the order need to be?
All of these will factor into how many threads you have processing your queue/log file, whether you need a persistent JMS queue and whether you should have the processing occur on a remote system to your main container.
Hope this answers your questions.
I am working on a web application in Java which gets data from servlets via AJAX calls.
This application features several page elements which get new data from the server at fairly rapid intervals.
With a lot of users, the demand on the server has a potential to get fairly high, so I am curious:
Which approach offers the best performance:
Many servlets (one for each type of data request)?
Or:
a single servlet that can handle all of the requests?
There is no performance reason to have more than one servlet. In a web application, only a single instance of a servlet class is instantiated, no matter how many requests arrive. Requests are not serialized; they are handled concurrently, hence the need for your servlet to be thread-safe.
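To illustrate the thread-safety point with a toy example (both counters are purely for demonstration):

```java
import java.io.IOException;
import java.util.concurrent.atomic.AtomicInteger;

import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class CounterServlet extends HttpServlet {

    // BAD: one servlet instance serves all requests, so this field is
    // shared mutable state; concurrent increments can be lost
    private int unsafeHits = 0;

    // OK: a thread-safe holder, when sharing really is intended
    private final AtomicInteger safeHits = new AtomicInteger();

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        unsafeHits++;                        // race condition under load
        int n = safeHits.incrementAndGet();  // correct
        resp.getWriter().write("hit " + n);
    }
}
```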
The Struts framework uses one servlet for everything in your app; your stuff plugs into that one servlet. If it works for them, it will probably work for you.
One possible reason to have multiple services is that if you need to expand to multiple servers to handle future load, it is easier to move a separate service to its own server than to split things "behind the scenes" if everything is coming out of one service.
That being said, there is extra maintenance overhead with multiple servlets, so it is a matter of balancing future flexibility against maintainability.
There is, as such, no performance gain from using multiple servlets, since each request to a servlet is handled in its own thread anyway (provided the servlet is not single-threaded, i.e. it does not implement SingleThreadModel).
But for modularity and separation of code, you can have multiple servlets.
Like Tony said, there really isn't a reason to use more than one servlet, unless you need to break up a complex Java Servlet class or perhaps implement an intercepting filter.
I'm sure you know that you can have multiple instances of the same servlet as long as you declare separate entries in the web.xml file for your app -- that is, assuming you want to do that.
Other than that, from what I understand, you might benefit from a Comet architecture -- http://en.wikipedia.org/wiki/Comet_(programming).
There are already Comet implementations on some servlet containers -- here's one look at how to use Ajax and Comet: http://www.ibm.com/developerworks/java/library/j-jettydwr/. It's worth studying before deciding on your architecture.