I have written a servlet program which calls a Java class to generate random strings. I have also written an HTML file which has 1000 IFRAMEs pointing to the servlet address. If multiple requests go to a servlet, each request is processed in a separate thread, so in this case 1000 threads are created (1000 requests). The problem is that it takes too much time to process, cannot handle more than 1000 requests, and becomes even slower if I do complex calculation at the back end. What changes need to be made at the servlet level (multithreading) or at the Tomcat level (if possible) to get a faster response? Any suggestion?
Servlet Code
#WebServlet("/test")
public class MyServlet extends HttpServlet {
private static final long serialVersionUID = 1L;
private static PrintWriter out=null;
private UserOperation operation=null;
private static int counter=0;
public MyServlet() {
super();
operation=new UserOperation();
}
protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
counter++;
out=response.getWriter();
String output=operation.getResult();
System.out.println(counter+":"+output);
out.print(output);
return;
}
protected void doPost(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
doGet(request, response);
}
}
UserOperation.java
import java.util.Random;

public class UserOperation {
    private Random rand = null;
    private int max = 9;
    private int min = 0;
    private static final String[] RESULT = {"ONE","TWO","THREE","FOUR","FIVE","SIX","SEVEN","EIGHT","NINE","TEN"};

    public UserOperation() {
        rand = new Random();
    }

    public String getResult() {
        int randNum = rand.nextInt((max - min) + 1) + min;
        return RESULT[randNum];
    }
}
HTML File
The IFRAME is called 1000 times; the line below is copy-pasted 1000 times in the HTML file.
<IFRAME src="http://localhost:8080/MultipleRequest/test" width="30px" height="30px"></IFRAME>
You can use an async servlet with servlet 3.0; there's an example here:
https://www.javaworld.com/article/2077995/java-concurrency/java-concurrency-asynchronous-processing-support-in-servlet-3-0.html
There is still the issue that an ongoing connection occupies a socket (a file descriptor) on the server. A request in flight will hold that socket whether or not it also holds a thread; the thread part you can mitigate with event-driven code using the async servlet, but the socket cost remains regardless. The only way to mitigate a socket shortage is to scale your application out to multiple servers.
First of all, your servlet is dangerous and not thread safe. Fix that first.
#WebServlet("/test")
public class MyServlet extends HttpServlet {
private static final long serialVersionUID = 1L;
private final AtomicInteger counter = new AtomicInteger(0);
private final UserOperation operation=null;
public MyServlet() {
super();
operation=new UserOperation();
}
protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
int count = this.counter.incrementAndGet();
PrintWriter out=response.getWriter();
String output=operation.getResult();
System.out.println(count + ":" + output);
out.print(output);
return;
}
protected void doPost(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
doGet(request, response);
}
}
Now that that is thread safe: your assumption that 1000 threads are created is wrong. Tomcat has a default request-handling thread pool with a size of 200, so it will handle 200 requests concurrently, not 1000 as you think.
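If you genuinely need more concurrency on the Tomcat side, the pool is configured on the connector in conf/server.xml; maxThreads is the attribute to raise (the numbers below are purely illustrative, not a recommendation):
<!-- conf/server.xml -->
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           maxThreads="400"
           acceptCount="100" />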
Next to that, you shouldn't be using 1000 iframes for testing, as your browser is generally limited to about 6 concurrent connections to the same server. So I actually doubt you ever have 1000 concurrent requests; the browser issues only about 6 at a time. Chrome may even limit it to 1 at a time, since it sees the same URL and tries to cache the results one by one.
So, in short, I doubt that your servlet is slow; your test method makes it appear slow. Use a proper tool like Apache JMeter to send 1000 requests to your server and then measure the performance.
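If you'd rather not set up JMeter, even a small stand-alone client gives far more realistic numbers than iframes. A minimal sketch (the URL is taken from the question; the pool size of 50 is an arbitrary assumption):
import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class LoadTest {
    public static void main(String[] args) throws InterruptedException {
        final int requests = 1000;
        ExecutorService pool = Executors.newFixedThreadPool(50); // 50 truly concurrent clients
        final CountDownLatch done = new CountDownLatch(requests);
        long start = System.currentTimeMillis();
        for (int i = 0; i < requests; i++) {
            pool.submit(new Runnable() {
                public void run() {
                    try {
                        HttpURLConnection con = (HttpURLConnection)
                                new URL("http://localhost:8080/MultipleRequest/test").openConnection();
                        InputStream in = con.getInputStream();
                        while (in.read() != -1) ;   // drain the response body
                        in.close();
                    } catch (IOException e) {
                        e.printStackTrace();
                    } finally {
                        done.countDown();
                    }
                }
            });
        }
        done.await();
        System.out.println(requests + " requests in " + (System.currentTimeMillis() - start) + " ms");
        pool.shutdown();
    }
}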
Try implementing the java.lang.Runnable interface or extending the java.lang.Thread class in your UserOperation class for multithreading.
Also try increasing the heap size of the JVM by adding the -Xmx parameter with an appropriate limit.
#WebServlet(urlPatterns={"/AsyncServlet"}, asyncSupported=true)
public class AsyncServlet extends HttpServlet {
private static PrintWriter out=null;
private UserOperation operation=null;
private static int counter=0;
public AsyncServlet() {
super();
operation=new UserOperation();
}
#Override
public void doGet(HttpServletRequest request,
HttpServletResponse response) {
response.setContentType("text/html;charset=UTF-8");
final AsyncContext acontext = request.startAsync();
acontext.start(new Runnable() {
public void run() {
counter++;
try {
HttpServletResponse response = (HttpServletResponse) acontext.getResponse();
out=response.getWriter();
String param = acontext.getRequest().getParameter("param");
String output = operation.getResult();
System.out.println(counter + ":"+output);
out.print(output);
acontext.complete();
} catch (IOException e) {
e.printStackTrace();
}
}
});
}
public class UserOperation {
private Random rand=null;
private int max=9;
private int min=0;
private final String [] RESULT= {"ONE","TWO","THREE","FOUR","FIVE","SIX","SEVEN","EIGHT","NINE","TEN"};
public UserOperation() {
rand=new Random();
}
public String getResult() {
int randNum=rand.nextInt((max-min)+1)+min;
return RESULT[randNum];
}
}
}
Related
I have seen lots of questions about chunked streams in Netty, but most of them were about solutions for outbound streams, not inbound streams.
I would like to understand how can I get the data from the channel and send it as an InputStream to my business logic without loading all the data in memory first.
Here's what I was trying to do:
public class ServerRequestHandler extends MessageToMessageDecoder<HttpObject> {

    private HttpServletRequest request;
    private PipedOutputStream os;
    private PipedInputStream is;

    @Override
    public void handlerAdded(ChannelHandlerContext ctx) throws Exception {
        super.handlerAdded(ctx);
        this.os = new PipedOutputStream();
        this.is = new PipedInputStream(os);
    }

    @Override
    public void handlerRemoved(ChannelHandlerContext ctx) throws Exception {
        super.handlerRemoved(ctx);
        this.os.close();
        this.is.close();
    }

    @Override
    protected void decode(ChannelHandlerContext ctx, HttpObject msg, List<Object> out)
            throws Exception {
        if (msg instanceof HttpRequest) {
            this.request = new CustomHttpRequest((HttpRequest) msg, this.is);
            out.add(this.request);
        }
        if (msg instanceof HttpContent) {
            ByteBuf body = ((HttpContent) msg).content();
            if (body.readableBytes() > 0)
                body.readBytes(os, body.readableBytes());
            if (msg instanceof LastHttpContent) {
                os.close();
            }
        }
    }
}
And then I have another handler that will get my CustomHttpRequest and send it to what I call a ServiceHandler, where my business logic will read from the InputStream.
public class ServiceRouterHandler extends SimpleChannelInboundHandler<CustomHttpRequest> {
    ...
    @Override
    public void channelRead0(ChannelHandlerContext ctx, CustomHttpRequest request) throws IOException {
        ...
        future = serviceHandler.handle(request, response);
        ...
This does not work because when my handler forwards the CustomHttpRequest to the ServiceHandler and it tries to read from the InputStream, the thread blocks, and the HttpContent is never handled in my decoder.
I know I can try to create a separate thread for my Business Logic, but I have the impression I am overcomplicating things here.
I looked at ByteBufInputStream, but it says that
Please note that it only reads up to the number of readable bytes
determined at the moment of construction.
So I don't think it will work for chunked HTTP requests. Also, I saw ChunkedWriteHandler, which seems fine for outbound chunks, but I couldn't find anything like a ChunkedReadHandler...
So my question is: what's the best way to do this? My requirements are:
- Do not keep data in memory before sending it to the ServiceHandlers;
- The ServiceHandlers API should be netty agnostic (that's why I use my CustomHttpRequest, instead of Netty's HttpRequest);
UPDATE
I have got this to work using a more reactive approach on the CustomHttpRequest. Now the request does not provide an InputStream for the ServiceHandlers to read from (which was blocking); instead, the CustomHttpRequest has a readInto(OutputStream) method that returns a Future, and the service handler is only executed once this OutputStream has been filled. Here is how it looks:
public class CustomHttpRequest {
    ...constructors and other methods hidden...
    private final SettableFuture<Void> writeCompleteFuture = SettableFuture.create();
    private final SettableFuture<OutputStream> outputStreamFuture = SettableFuture.create();
    private ListenableFuture<Void> lastWriteFuture = Futures.transform(outputStreamFuture, x -> null);

    public ListenableFuture<Void> readInto(OutputStream os) throws IOException {
        outputStreamFuture.set(os);
        return this.writeCompleteFuture;
    }

    ListenableFuture<Void> writeChunk(byte[] buf) {
        this.lastWriteFuture = Futures.transform(lastWriteFuture, (AsyncFunction<Void, Void>) (os) -> {
            outputStreamFuture.get().write(buf);
            return Futures.immediateFuture(null);
        });
        return lastWriteFuture;
    }

    void complete() {
        ListenableFuture<Void> future =
                Futures.transform(lastWriteFuture, (AsyncFunction<Void, Void>) x -> {
                    outputStreamFuture.get().close();
                    return Futures.immediateFuture(null);
                });
        addFinallyCallback(future, () -> {
            this.writeCompleteFuture.set(null);
        });
    }
}
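For illustration, a hypothetical caller on the ServiceHandler side could then look like this (the file sink is only an example, and Futures.addCallback here is the older two-argument Guava form):
// Hand the request an OutputStream and run the business logic once the
// body has been fully streamed into it, instead of blocking on an InputStream.
OutputStream sink = new FileOutputStream("/tmp/upload.bin"); // illustrative sink
ListenableFuture<Void> done = request.readInto(sink);
Futures.addCallback(done, new FutureCallback<Void>() {
    public void onSuccess(Void v) {
        // the entire request body is now in the sink
    }
    public void onFailure(Throwable t) {
        t.printStackTrace();
    }
});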
And my updated ServerRequestHandler looks like this:
public class ServerRequestHandler extends MessageToMessageDecoder<HttpObject> {

    private NettyHttpServletRequestAdaptor request;

    @Override
    public void handlerAdded(ChannelHandlerContext ctx) throws Exception {
        super.handlerAdded(ctx);
    }

    @Override
    public void handlerRemoved(ChannelHandlerContext ctx) throws Exception {
        super.handlerRemoved(ctx);
    }

    @Override
    protected void decode(ChannelHandlerContext ctx, HttpObject msg, List<Object> out)
            throws Exception {
        if (msg instanceof HttpRequest) {
            HttpRequest request = (HttpRequest) msg;
            this.request = new CustomHttpRequest(request, ctx.channel());
            out.add(this.request);
        }
        if (msg instanceof HttpContent) {
            ByteBuf buf = ((HttpContent) msg).content();
            byte[] bytes = new byte[buf.readableBytes()];
            buf.readBytes(bytes);
            this.request.writeChunk(bytes);
            if (msg instanceof LastHttpContent) {
                this.request.complete();
            }
        }
    }
}
This works pretty well, but note that everything here is still done in a single thread; for large data I might want to spawn a new thread to free that thread up for other channels.
You're on the right track - if your serviceHandler.handle(request, response); call is doing a blocking read, you need to create a new thread for it. Remember, there are supposed to be only a small number of Netty worker threads, so you shouldn't do any blocking calls in worker threads.
The other question to ask is, does your service handler need to be blocking? What does it do? If it's shoveling the data over the network anyway, can you incorporate it into the Netty pipeline in a non-blocking way? That way, everything is async all the way, no blocking calls and extra threads required.
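If the handler must stay blocking, Netty itself can move it off the I/O loop: ChannelPipeline.addLast has an overload that takes an EventExecutorGroup, so the blocking handler runs on a separate pool while the codec stays on the event loop. A rough sketch using the handler names from the question (the pool size of 16 is an arbitrary choice):
import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.codec.http.HttpServerCodec;
import io.netty.util.concurrent.DefaultEventExecutorGroup;
import io.netty.util.concurrent.EventExecutorGroup;

public class ServerInitializer extends ChannelInitializer<SocketChannel> {

    // Dedicated pool for handlers that may block, so event-loop threads stay free.
    private static final EventExecutorGroup blockingGroup = new DefaultEventExecutorGroup(16);

    @Override
    protected void initChannel(SocketChannel ch) {
        ch.pipeline().addLast(new HttpServerCodec());
        ch.pipeline().addLast(new ServerRequestHandler());
        // ServiceRouterHandler may block on the InputStream, so bind it to the
        // separate executor group instead of the channel's event loop:
        ch.pipeline().addLast(blockingGroup, "serviceRouter", new ServiceRouterHandler());
    }
}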
The code below is a Servlet 3.1 non-blocking IO demo:
UploadServlet:
#WebServlet(name = "UploadServlet", urlPatterns = {"/UploadServlet"}, asyncSupported=true)
public class UploadServlet extends HttpServlet {
protected void doPost(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
AsyncContext context = request.startAsync();
// set up async listener
context.addListener(new AsyncListener() {
public void onComplete(AsyncEvent event) throws IOException {
event.getSuppliedResponse().getOutputStream().print("Complete");
}
public void onError(AsyncEvent event) {
System.out.println(event.getThrowable());
}
public void onStartAsync(AsyncEvent event) {
}
public void onTimeout(AsyncEvent event) {
System.out.println("my asyncListener.onTimeout");
}
});
ServletInputStream input = request.getInputStream();
ReadListener readListener = new ReadListenerImpl(input, response, context);
input.setReadListener(readListener);
}
protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
}
}
ReadListenerImpl:
public class ReadListenerImpl implements ReadListener {
    private ServletInputStream input = null;
    private HttpServletResponse res = null;
    private AsyncContext ac = null;
    private Queue<String> queue = new LinkedBlockingQueue<String>();

    ReadListenerImpl(ServletInputStream in, HttpServletResponse r, AsyncContext c) {
        input = in;
        res = r;
        ac = c;
    }

    public void onDataAvailable() throws IOException {
        System.out.println("Data is available");
        StringBuilder sb = new StringBuilder();
        int len = -1;
        byte b[] = new byte[1024];
        while (input.isReady() && (len = input.read(b)) != -1) {
            String data = new String(b, 0, len);
            sb.append(data);
        }
        queue.add(sb.toString());
    }

    public void onAllDataRead() throws IOException {
        System.out.println("Data is all read");
        // now all data are read, set up a WriteListener to write
        ServletOutputStream output = res.getOutputStream();
        WriteListener writeListener = new WriteListenerImpl(output, queue, ac);
        output.setWriteListener(writeListener);
    }

    public void onError(final Throwable t) {
        ac.complete();
        t.printStackTrace();
    }
}
WriteListenerImpl:
public class WriteListenerImpl implements WriteListener {
    private ServletOutputStream output = null;
    private Queue<String> queue = null;
    private AsyncContext context = null;

    WriteListenerImpl(ServletOutputStream sos, Queue<String> q, AsyncContext c) {
        output = sos;
        queue = q;
        context = c;
    }

    public void onWritePossible() throws IOException {
        while (queue.peek() != null && output.isReady()) {
            String data = queue.poll();
            output.print(data);
        }
        if (queue.peek() == null) {
            context.complete();
        }
    }

    public void onError(final Throwable t) {
        context.complete();
        t.printStackTrace();
    }
}
The code above works fine. I want to know how it differs from a blocking IO servlet, and how the code above actually works.
Reading input data:
In the blocking scenario when you read data from the input stream each read blocks until data is available. This could be a long time for a remote client sending large data which means the thread is held for a long time.
For example consider inbound data being received over 2 minutes at regular intervals in 13 chunks. In blocking read you read the first chunk, hold the thread for ~10 seconds, read the next chunk, hold the thread for ~10 seconds etc. In this case the thread might spend less than a second actually processing data and almost 120 seconds blocked waiting for data. Then if you have a server with 10 threads you can see that you would have a throughput of 10 clients every 2 minutes.
In the non-blocking scenario the ReadListener reads data while isReady() returns true (it must check isReady() before each call to read data), but when isReady() returns false the ReadListener returns and the thread is relinquished. Then, when more data arrives, onDataAvailable() is called and the ReadListener reads data again until isReady() returns false.
In the same example, this time the thread reads the data and returns, is woken up 10 seconds later, reads the next data and returns, is woken up 10 seconds later, reads data and returns, etc. This time, while it has still taken 2 minutes to read the data, the thread(s) needed to do this were only active for less than a second and were available for other work. So while the specific request still takes 2 minutes, the server with 10 threads can now process many more requests every 2 minutes.
Sending response data:
The scenario is similar for sending data and is useful when sending large responses. For example sending a large response in 13 chunks may take 2 minutes to send in the blocking scenario because the client takes 10 seconds to acknowledge receipt of each chunk and the thread is held while waiting. However in the non-blocking scenario the thread is only held while sending the data and not while waiting to be able to send again. So, again for the particular client the response is not sent any more quickly but the thread is held for a fraction of the time and the throughput of the server which processes the request can increase significantly.
So the examples here are contrived but used to illustrate a point. The key being that non-blocking i/o does not make a single request any faster than with blocking i/o, but increases server throughput when the application can read input data faster than the client can send it and/or send response data faster than the client can receive it.
I have a Java web application that is expected to have many users who will put a heavy load on it. At the same time, there are scheduled tasks that require a lot of processing, and I was looking for an automated way to start and pause this processing according to the load of the web requests. Are there any ready-made solutions for this task?
Use a javax.servlet.Filter with a static counter, incremented and decremented by the filter. This way you know the current load (= the number of requests currently being processed).
Use @Startup with @Singleton and @Schedule to have a regular task (or any other scheduler like Quartz), for example every 5 minutes.
In that task, check the load. If it's low enough, start the real task(s).
You could monitor the current load in running tasks as well, and pause or exit, for example.
This works, if the real task is processing the contents of a queue for example.
Otherwise you possibly have to do some book-keeping, if the frequency of the first task is higher than the frequency of the real task, or if you have to make sure that the real task is only run once per day (or at least once a day).
Example code:
The filter:
#WebFilter("/*")
public class LoadFilter implements Filter {
private final static Logger log = Logger.getLogger(LoadFilter.class
.getName());
private final static AtomicInteger load = new AtomicInteger();
public static int getLoad() {
return load.get();
}
public void init(final FilterConfig fc) throws ServletException {
log.info("Hello from init()");
}
public void doFilter(ServletRequest req, ServletResponse resp,
FilterChain chain) throws IOException, ServletException {
final int currentLoad = load.incrementAndGet();
try {
log.info("Current load (enter): " + currentLoad);
chain.doFilter(req, resp);
} finally {
final int newLoad = load.decrementAndGet();
log.info("Current load (exit): " + newLoad);
}
}
public void destroy() {
log.info("Bye from destroy()");
}
}
and the EJB
@Singleton
@Startup
public class StartupSingleton {

    @Schedule(second = "*/10", minute = "*", hour = "*", persistent = false)
    public void execute() {
        // Check load; if low, run task(s)
        if (LoadFilter.getLoad() < 10) {
            // Run tasks
        }
    }
}
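If the real task must run at most once per day, the book-keeping mentioned above can be as small as a date stamp. A minimal sketch (the load threshold of 10 and the schedule are only assumptions):
import java.time.LocalDate;

@Singleton
@Startup
public class StartupSingleton {

    // Book-keeping: remember the last day the real task ran.
    private volatile LocalDate lastRun = null;

    @Schedule(second = "*/10", minute = "*", hour = "*", persistent = false)
    public void execute() {
        // Only start the real task when the load is low AND it has not run today.
        if (LoadFilter.getLoad() < 10 && !LocalDate.now().equals(lastRun)) {
            lastRun = LocalDate.now();
            // Run tasks
        }
    }
}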
I have been trying to use the timeout feature of the async context, but the behavior is highly intermittent. Sometimes the timeout happens, and many times it doesn't. I am pasting my code here.
#WebServlet(name = "TestServlet", urlPatterns = {"/test"},asyncSupported = true)
public class TestServlet extends HttpServlet{
private static final long serialVersionUID = 1L;
private static PriorityBlockingQueue<Runnable> pq = new PriorityBlockingQueue<Runnable>(1000);
private static ThreadPoolExecutor threadPoolExecutor = new ThreadPoolExecutor(1,1,10, TimeUnit.SECONDS,pq);
public void service(final ServletRequest servletRequest, final ServletResponse response)
throws ServletException, IOException {
TestListener listener = new TestListener();
final AsyncContext asyncContext = servletRequest.startAsync();
asyncContext.addListener(listener);
asyncContext.setTimeout(100);
Handler handler = new Handler(asyncContext);
threadPoolExecutor.execute(handler);
}
}
The listener and the handler code is included below.
public class TestListener implements AsyncListener {

    public void onComplete(AsyncEvent event) throws IOException {
        System.out.println("Event completed");
    }

    public void onError(AsyncEvent event) throws IOException {
        event.getAsyncContext().complete();
    }

    public void onStartAsync(AsyncEvent event) throws IOException {
        // TODO Auto-generated method stub
    }

    public void onTimeout(AsyncEvent event) {
        System.out.println("Timeout ");
        event.getAsyncContext().complete();
    }
}
public class Handler implements Runnable {

    private AsyncContext asyncContext;

    public Handler(AsyncContext asyncContext) {
        this.asyncContext = asyncContext;
    }

    public void run() {
        try {
            long currtime = System.currentTimeMillis();
            Thread.sleep(500);
            System.out.println("slept for " + (System.currentTimeMillis() - currtime));
        } catch (InterruptedException e) {
            System.out.println("Error in thread ");
        }
        try {
            if (asyncContext != null) {
                System.out.println("Completing async context " + " timeout is " + asyncContext.getTimeout());
                asyncContext.complete();
            }
        } catch (Exception e) {
            System.out.println("Exception in completing async context ");
        }
    }
}
And the output is intermittent. Including the same here -
[ops@root combinedlogs]$ time curl "http://localhost:9001/mockresponse/test"
real    0m0.506s
user    0m0.001s
sys     0m0.003s
[ops@root combinedlogs]$ time curl "http://localhost:9001/mockresponse/test"
real    0m0.159s
user    0m0.001s
sys     0m0.003s
Catalina logs -
slept for 500
Completing async context timeout is 100
Event completed
Timeout
Event completed
slept for 500
Exception in completing async context
I don't understand why this is happening. Please help! Thanks for your time.
PS: The tomcat version is 7.0.37
Try increasing the timeout and the sleep interval to more than 1 second. For example, try a timeout of 2 seconds and a sleep of 5 seconds. It is possible that the servlet container does not detect timeouts of less than 1 second consistently. There were a couple of bugs (marginally) related to such sub-second timeouts in Tomcat earlier, like this one. I understand you are using a later version of Tomcat than mentioned in that bug, but it's still worth a try.
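Applied to the code above, the change would be (the values are purely illustrative):
// In TestServlet.service(): use a timeout the container can detect reliably
asyncContext.setTimeout(2000);   // 2 seconds instead of 100 ms

// In Handler.run(): sleep longer than the timeout so onTimeout() should fire every time
Thread.sleep(5000);              // 5 seconds instead of 500 ms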
Is it possible to take an HttpServletRequest away from its thread, dissolve this thread (i.e., return it to the pool), but keep the underlying connection with the browser open until I get the results from a time-consuming operation (say, processing an image)? When the result data is ready, another method should be called asynchronously and be given the request as well as the data as parameters.
Usually, long polling works in a pretty blocking fashion, where the current thread is not dissolved, which reduces the scalability of the server-side app in terms of concurrent connections.
Yes, you can do this with Servlet 3.0.
Below is a sample that writes the alert after 30 seconds (not tested).
@WebServlet(urlPatterns = "/asyncAlert", asyncSupported = true) // a URL pattern is required; "/asyncAlert" is a placeholder
public class AsyncServlet extends HttpServlet {

    Timer timer = new Timer("ClientNotifier");

    public void doGet(HttpServletRequest req, HttpServletResponse res) {
        final AsyncContext aCtx = req.startAsync(req, res);
        // Suspend the request for 30 secs
        timer.schedule(new TimerTask() {
            public void run() {
                try {
                    // read unread alerts count
                    int unreadAlertCount = alertManager.getUnreadAlerts(username);
                    // write unread alerts count
                    aCtx.getResponse().getWriter().print(unreadAlertCount);
                    aCtx.complete();
                } catch (Exception e) {
                    aCtx.complete();
                }
            }
        }, 30000);
    }
}
Below is a sample that writes based on an event. The alertManager has to be implemented so that it notifies the AlertNotificationHandler when the client has to be alerted.
@WebServlet(urlPatterns = "/asyncAlert2", asyncSupported = true) // a URL pattern is required; "/asyncAlert2" is a placeholder
public class AsyncServlet extends HttpServlet {

    public void doGet(HttpServletRequest req, HttpServletResponse res) {
        final AsyncContext asyncCtx = req.startAsync(req, res);
        alertManager.register(new AlertNotificationHandler() {
            public void onNewAlert() { // Notified on new alerts
                try {
                    int unreadAlertCount = alertManager.getUnreadAlerts();
                    ServletResponse response = asyncCtx.getResponse();
                    // Write unread alerts count
                    writeResponse(response, unreadAlertCount);
                } catch (Exception ex) {
                    // Closes the response
                    asyncCtx.complete();
                }
            }
        });
    }
}
Yes, it's possible using Servlet spec ver. 3.0. An implementation I can recommend is the Jetty server. See here.
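For reference, Jetty's (pre-Servlet-3.0) Continuations API does exactly this detach-and-resume. A rough sketch, assuming Jetty 7/8 and a hypothetical asynchronous imageProcessor callback:
import org.eclipse.jetty.continuation.Continuation;
import org.eclipse.jetty.continuation.ContinuationSupport;

// Inside doGet(): park the request without holding a pool thread.
Continuation continuation = ContinuationSupport.getContinuation(request);
continuation.suspend(response); // thread returns to the pool, connection stays open

imageProcessor.process(image, result -> { // hypothetical async API
    // Called later from a worker thread when the image is done:
    try {
        continuation.getServletResponse().getWriter().print(result);
    } catch (IOException e) {
        e.printStackTrace();
    }
    continuation.complete(); // finishes the response
});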