I have set the keep-alive timeout of the Spring Boot embedded Tomcat server to 30 seconds, using the following bean in Application.java:
@Bean
public EmbeddedServletContainerFactory getEmbeddedServletContainerFactory() {
    TomcatEmbeddedServletContainerFactory containerFactory = new TomcatEmbeddedServletContainerFactory();
    containerFactory.addConnectorCustomizers(new TomcatConnectorCustomizer() {
        @Override
        public void customize(Connector connector) {
            ((AbstractProtocol) connector.getProtocolHandler()).setKeepAliveTimeout(30000);
        }
    });
    return containerFactory;
}
Then I sleep the request thread for 40 seconds in my REST controller. But when I make a request via Postman, it still returns HTTP status 200 instead of the gateway timeout error I expected.
I tried both setConnectionTimeout and setKeepAliveTimeout, and neither worked.
What am I missing here?
Edit: my initial problem
Let me explain the original problem that led me to ask the question above.
I have a long-poll process which normally runs for more than 5 minutes.
What happens is that when I call the REST API for the long poll, I get a 504 HTTP error in the browser after about 2.2 minutes.
I am using an AWS environment, where I have an ELB and an HAProxy installed on an AWS EC2 instance.
According to the AWS documentation, the default idle connection timeout of the ELB is 60 seconds, so I increased it to 30 minutes.
Moreover, it says:
If you use HTTP and HTTPS listeners, we recommend that you enable the keep-alive option for your EC2 instances. You can enable keep-alive in your web server settings or in the kernel settings for your EC2 instances.
So I increased the embedded Tomcat keep-alive timeout, as in the code snippet above, to 30.2 minutes.
Now I expect my long-poll request to complete without a 504 error, but I still get a 504 in the browser. Why?
Ref: AWS dev guide
It looks like you want to close abandoned HTTP connections which might occur on mobile devices.
@RestController
@SpringBootApplication
public class DemoApplication {

    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }

    @Bean
    public EmbeddedServletContainerFactory getEmbeddedServletContainerFactory() {
        TomcatEmbeddedServletContainerFactory containerFactory = new TomcatEmbeddedServletContainerFactory();
        containerFactory.addConnectorCustomizers(new TomcatConnectorCustomizer() {
            @Override
            public void customize(Connector connector) {
                ((AbstractProtocol) connector.getProtocolHandler()).setConnectionTimeout(100);
            }
        });
        return containerFactory;
    }

    @RequestMapping
    public String echo(@RequestBody String body) {
        return body;
    }
}
The connection timeout has been set to 100 milliseconds in order to run my tests fast. Data is sent in chunks; between chunks the sending thread is suspended for x milliseconds.
@RunWith(SpringJUnit4ClassRunner.class)
@SpringApplicationConfiguration(classes = DemoApplication.class)
@WebIntegrationTest("server.port:19000")
public class DemoApplicationTests {

    private static final int CHUNK_SIZE = 1;
    private static final String HOST = "http://localhost:19000/echo";

    @Rule
    public ExpectedException expectedException = ExpectedException.none();

    @Test
    public void slowConnection() throws Exception {
        final HttpURLConnection connection = openChunkedConnection();
        OutputStreamWriter out = new OutputStreamWriter(connection.getOutputStream());
        writeAndWait(500, out, "chunk1");
        writeAndWait(1, out, "chunk2");
        out.close();
        expectedException.expect(IOException.class);
        expectedException.expectMessage("Server returned HTTP response code: 400 for URL: " + HOST);
        assertResponse("chunk1chunk2=", connection);
    }

    @Test
    public void fastConnection() throws Exception {
        final HttpURLConnection connection = openChunkedConnection();
        OutputStreamWriter out = new OutputStreamWriter(connection.getOutputStream());
        writeAndWait(1, out, "chunk1");
        writeAndWait(1, out, "chunk2");
        out.close();
        assertResponse("chunk1chunk2=", connection);
    }

    private void assertResponse(String expected, HttpURLConnection connection) throws IOException {
        Scanner scanner = new Scanner(connection.getInputStream()).useDelimiter("\\A");
        Assert.assertEquals(expected, scanner.next());
    }

    private void writeAndWait(int millis, OutputStreamWriter out, String body) throws IOException, InterruptedException {
        out.write(body);
        Thread.sleep(millis);
    }

    private HttpURLConnection openChunkedConnection() throws IOException {
        final URL url = new URL(HOST);
        final HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        connection.setDoOutput(true);
        connection.setChunkedStreamingMode(CHUNK_SIZE);
        return connection;
    }
}
Set log level for package org.apache.catalina.core to DEBUG
logging.level.org.apache.catalina.core=DEBUG
and you will see a SocketTimeoutException for the slowConnection test.
I don't know why you expect HTTP status code 504 as the error response. HTTP 504 says:
The 504 (Gateway Timeout) status code indicates that the server, while acting as a gateway or proxy, did not receive a timely response from an upstream server it needed to access in order to complete the request.
The client (Postman) calls your server application directly; there is no gateway or proxy in between.
If you condensed your question to a bare minimum and in reality want to build a proxy of your own, you might consider using Netflix Zuul.
Update 23.03.2016:
This is the root cause of the OP's problem, as described on Stack Overflow:
What I did for long polling was: from the service API, I sleep the thread for some time, wake it up, and repeat until some DB status is completed.
That implementation blocks a Tomcat worker thread and prevents it from processing new HTTP requests, so your request throughput drops with every additional long-running operation.
I propose offloading the long-running operation to a separate thread. The client (browser) then initiates a new request to fetch the result.
Depending on the processing status, the server returns either the result or a notification/error/warning.
Here's a very simple example:
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import static org.springframework.http.HttpStatus.CREATED;
import static org.springframework.http.HttpStatus.NOT_FOUND;
import static org.springframework.http.HttpStatus.OK;

@RestController
@SpringBootApplication
public class DemoApplication {

    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }

    private ExecutorService executorService = Executors.newFixedThreadPool(10);

    private Map<String, String> results = new ConcurrentHashMap<>();

    @RequestMapping(path = "put/{key}", method = RequestMethod.POST)
    public ResponseEntity<Void> put(@PathVariable String key) {
        executorService.submit(() -> {
            try {
                // simulate a long-running process
                Thread.sleep(10000);
                results.put(key, "success");
            } catch (InterruptedException e) {
                results.put(key, "error " + e.getMessage());
                Thread.currentThread().interrupt();
            }
        });
        return new ResponseEntity<>(CREATED);
    }

    @RequestMapping(path = "get/{key}", method = RequestMethod.GET)
    public ResponseEntity<String> get(@PathVariable String key) {
        final String result = results.get(key);
        return new ResponseEntity<>(result, result == null ? NOT_FOUND : OK);
    }
}
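From the client's side, usage could look like the sketch below (the base URL, port, and job key are made up for illustration): start the job with a POST, then poll with short GET requests until the result exists, so no single request ever approaches a proxy or ELB idle timeout.

import org.springframework.http.ResponseEntity;
import org.springframework.web.client.HttpClientErrorException;
import org.springframework.web.client.RestTemplate;

public class PollingClient {

    private static final String BASE = "http://localhost:8080"; // assumed port

    public static void main(String[] args) throws InterruptedException {
        RestTemplate rest = new RestTemplate();

        // kick off the long-running job; the server answers 201 immediately
        rest.postForEntity(BASE + "/put/job-42", null, Void.class);

        // poll for the result; each request is short-lived
        while (true) {
            try {
                ResponseEntity<String> response =
                        rest.getForEntity(BASE + "/get/job-42", String.class);
                System.out.println("result: " + response.getBody());
                break;
            } catch (HttpClientErrorException e) {
                // 404 means the result is not ready yet; wait a second and retry
                Thread.sleep(1000);
            }
        }
    }
}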
Related
I use RabbitMQ as the broker for asynchronous communication between services. Service A sends messages to the queue, and I checked the queue: the messages from Service A have arrived.
I am trying to create a listener in Service B in order to consume the messages produced by Service A. I verified, as shown below, whether Service B is connected to RabbitMQ, and it seems to be connected successfully.
The problem is that Service B starts successfully but is not receiving any messages from RabbitMQ.
Below is the implementation of the listener:
@Slf4j
@Component
public class EventListener {

    public static final String QUEUE_NAME = "events";

    @RabbitListener(
            bindings = {
                    @QueueBinding(
                            value = @Queue(QUEUE_NAME),
                            exchange = @Exchange("exchange")
                    )
            }
    )
    public void handleTaskPayload(@Payload String payload) {
        System.out.println(payload);
    }
}
I verified the queue and exchange information in RabbitMQ and they are correct.
Everything appears to work and no error is thrown in Service A or Service B, which makes this problem much harder to debug.
I retrieved a message from the queue using RabbitMQ's Get Message feature, and it looks like this:
{"id":"1","name":"Test","created":null}
I would appreciate any help or guidance towards a solution to this problem.
Best Regards,
Rando.
P.S.
I created a new test queue as shown below and published some messages to it:
I modified the listener code as below and still wasn't able to trigger the listener for the queue's events:
@Slf4j
@Component
public class RobotRunEventListener {

    public static final String QUEUE_NAME = "test";

    @RabbitListener(
            bindings = {
                    @QueueBinding(
                            value = @Queue(QUEUE_NAME),
                            key = "test",
                            exchange = @Exchange("default")
                    )
            }
    )
    public void handleTaskPayload(@Payload String payload) {
        System.out.println(payload);
    }
}
Try this approach:
@RabbitListener(queues = "test")
public void receive(String in, @Headers Map<String, Object> headers) throws IOException {
}
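If the simplified listener still prints nothing, it can also help to publish a test message from inside Service B itself, which rules out producer-side routing issues. A minimal sketch (the bean name and payload are mine; RabbitTemplate.convertAndSend and CommandLineRunner are standard Spring APIs):

import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.CommandLineRunner;
import org.springframework.stereotype.Component;

// Hypothetical smoke test: publish one message at startup so the listener above
// should print it if the consumer side is wired up correctly.
@Component
public class TestPublisher implements CommandLineRunner {

    @Autowired
    private RabbitTemplate rabbitTemplate;

    @Override
    public void run(String... args) {
        // convertAndSend(routingKey, payload) uses the default exchange, which
        // routes directly to the queue whose name equals the routing key
        rabbitTemplate.convertAndSend("test", "hello from TestPublisher");
    }
}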
The problem was that the Spring Boot app I was working on had a @Conditional(Config.class) that prevented the creation of the bean below:
@Slf4j
@Conditional(Config.class)
@EnableRabbit
public class InternalRabbitBootstrapConfiguration {

    @Bean
    public RabbitListenerContainerFactory rabbitListenerContainerFactory(ConnectionFactory connectionFactory) {
        SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
        factory.setConnectionFactory(connectionFactory);
        factory.setMaxConcurrentConsumers(5);
        return factory;
    }
    ...
which resulted in the Spring Boot app not listening to RabbitMQ events. Config.class requires a specific profile to be active in order for the app to listen to RabbitMQ events.
public class DexiModeCondition implements Condition {

    @Override
    public boolean matches(ConditionContext context, AnnotatedTypeMetadata metadata) {
        String[] activeProfiles = context.getEnvironment().getActiveProfiles();
        // "mode" (not shown in this excerpt) holds the profile name that enables the listener
        return activeProfiles[0].equalsIgnoreCase(mode);
    }
}
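For completeness, a sketch of one way to make the condition match: activate the required profile explicitly at startup. The profile name "rabbit-enabled-mode" and the ServiceBApplication class are stand-ins for your real names, which are not shown in the excerpt.

import org.springframework.boot.SpringApplication;

// Hypothetical launcher: activates the profile that DexiModeCondition checks for,
// so the @EnableRabbit configuration above is actually created.
public class Launcher {

    public static void main(String[] args) {
        SpringApplication app = new SpringApplication(ServiceBApplication.class);
        app.setAdditionalProfiles("rabbit-enabled-mode");
        app.run(args);
    }
}

Setting spring.profiles.active on the command line or in application.properties achieves the same thing.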
I have a Java HttpServer which hasn't given me any issues so far.
This is my code right now:
import java.io.*;
import java.net.*;

import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpHandler;
import com.sun.net.httpserver.HttpServer;

public class Main {

    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8000), 0);
        server.createContext("/", new IndexHandler());
        server.setExecutor(null);
        server.start();
    }

    static class IndexHandler implements HttpHandler {
        @Override
        public void handle(HttpExchange t) throws IOException {
            String response = "test";
            t.sendResponseHeaders(200, response.length());
            OutputStream os = t.getResponseBody();
            os.write(response.getBytes());
            os.close();
            System.out.println(String.format("%s %s %d %d -", t.getRequestMethod(), t.getRequestURI(), t.getResponseCode(), response.length()));
        }
    }
}
The code logs the request method, URI, status code, and response length. How do I measure the time taken from the start of a request until it closes? I have studied the documentation and there is no built-in implementation for this.
You have to wrap your request handling with a middleware and log the response time there. We use a metrics middleware to do exactly that. How to do it depends on the server you are using; we use http4s (hence the Scala below), otherwise you have to write a custom middleware yourself.
val middleware: HttpMiddleware[F] = {
{ service: HttpRoutes[F] =>
metrics(service)
} compose { service: HttpRoutes[F] =>
hadoopLoggerMiddleware(service)
} compose { service: HttpRoutes[F] =>
kafkaTrafficForwarder(service)
}
}
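The same middleware idea carries over to the JDK's built-in server via com.sun.net.httpserver.Filter. A minimal sketch (the class name and log format are mine; Filter, Chain, and getFilters() are the JDK API):

import java.io.IOException;

import com.sun.net.httpserver.Filter;
import com.sun.net.httpserver.HttpExchange;

public class TimingFilter extends Filter {

    @Override
    public void doFilter(HttpExchange exchange, Chain chain) throws IOException {
        long start = System.nanoTime();
        try {
            chain.doFilter(exchange); // run the actual handler (IndexHandler)
        } finally {
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.println(String.format("%s %s took %d ms",
                    exchange.getRequestMethod(), exchange.getRequestURI(), elapsedMs));
        }
    }

    @Override
    public String description() {
        return "logs how long each exchange takes";
    }
}

Register it on the context created in main, e.g. HttpContext ctx = server.createContext("/", new IndexHandler()); ctx.getFilters().add(new TimingFilter());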
I'm trying to set up a Spring SseEmitter to send a sequence of updates of the status of a running job. It seems to be working but:
Whenever I call emitter.complete() in my Java server code, the JavaScript EventSource client calls the registered onerror function and then calls my Java endpoint again with a new connection. This happens in both Firefox and Chrome.
I can probably send an explicit "end-of-data" message from Java and then detect that and call eventSource.close() on the client, but is there a better way?
What is the purpose of emitter.complete() in that case?
Also, if I always have to terminate the connection on the client end, then I guess every connection on the server side will be terminated by either a timeout or a write error, in which case I probably want to manually send back a heartbeat of some kind every few seconds?
It feels like I'm missing something if I'm having to do all this.
I added the following to my Spring Boot application to trigger the SSE connection close():
Server Side:
Create a simple controller which returns SseEmitter.
Wrap the backend logic in a single thread executor service.
Send your events to the SseEmitter.
On completion, send an event of type COMPLETE via the SseEmitter.
@RestController
public class SearchController {

    @Autowired
    private SearchDelegate searchDelegate;

    @GetMapping(value = "/{customerId}/search")
    @ResponseStatus(HttpStatus.OK)
    @ApiOperation(value = "Search Sources", notes = "Search Sources")
    @ApiResponses(value = {
            @ApiResponse(code = 201, message = "OK"),
            @ApiResponse(code = 401, message = "Unauthorized")
    })
    @ResponseBody
    public SseEmitter search(@ApiParam(name = "searchCriteria", value = "searchCriteria", required = true) @ModelAttribute @Valid final SearchCriteriaDto searchCriteriaDto) throws Exception {
        return searchDelegate.route(searchCriteriaDto);
    }
}
@Slf4j
@Service
public class SearchDelegate {

    public static final String SEARCH_EVENT_NAME = "SEARCH";
    public static final String COMPLETE_EVENT_NAME = "COMPLETE";
    public static final String COMPLETE_EVENT_DATA = "{\"name\": \"COMPLETED_STREAM\"}";

    @Autowired
    private SearchService searchService;

    private ExecutorService executor = Executors.newCachedThreadPool();

    public SseEmitter route(SearchCriteriaDto searchCriteriaDto) throws Exception {
        SseEmitter emitter = new SseEmitter();
        executor.execute(() -> {
            try {
                if (!searchCriteriaDto.getCustomerSources().isEmpty()) {
                    searchCriteriaDto.getCustomerSources().forEach(customerSource -> {
                        try {
                            SearchResponse searchResponse = searchService.search(searchCriteriaDto);
                            emitter.send(SseEmitter.event()
                                    .id(customerSource.getSourceId())
                                    .name(SEARCH_EVENT_NAME)
                                    .data(searchResponse));
                        } catch (Exception e) {
                            log.error("Error while executing query for source {}, Caused by {}",
                                    customerSource.getSourceId(), e.getMessage());
                        }
                    });
                } else {
                    log.debug("No available customerSources for the specified customer");
                }
                emitter.send(SseEmitter.event()
                        .id(String.valueOf(System.currentTimeMillis()))
                        .name(COMPLETE_EVENT_NAME)
                        .data(COMPLETE_EVENT_DATA));
                emitter.complete();
            } catch (Exception ex) {
                emitter.completeWithError(ex);
            }
        });
        return emitter;
    }
}
Client Side:
Since we specified event names on our SseEmitter, an event will be dispatched in the browser to the listener registered for that event name; the client code should use addEventListener() to listen for named events. (Note: the onmessage handler is only called when a message has no event name.)
Close the EventSource on the COMPLETE event to release the client connection.
https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events
var sse = new EventSource('http://localhost:8080/federation/api/customers/5d96348feb061d13f46aa6ce/search?nativeQuery=true&queryString=*&size=10&customerSources=1,2,3&start=0');
sse.addEventListener("SEARCH", function(evt) {
var data = JSON.parse(evt.data);
console.log(data);
});
sse.addEventListener("COMPLETE", function(evt) {
console.log(evt);
sse.close();
});
According to the HTML standard for Server-sent events
Clients will reconnect if the connection is closed; a client can be told to stop reconnecting using the HTTP 204 No Content response code.
So Spring's SseEmitter behaves as expected, and the purpose of complete() is to make sure all events have been sent and then close the connection.
You need to either implement server-side logic that returns a 204 HTTP code on subsequent requests (e.g. by checking a session id), or send a special event and close the connection from the client side after receiving it, as suggested by Ashraf Sarhan.
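On the heartbeat idea raised in the question: one option is to schedule a periodic SSE comment, which the browser's EventSource ignores but which keeps the connection from sitting idle. This is only a sketch with my own class name and interval; SseEmitter.event().comment(...) and the onCompletion/onTimeout callbacks are Spring's API.

import java.io.IOException;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

import org.springframework.web.servlet.mvc.method.annotation.SseEmitter;

// Hypothetical helper: push an SSE comment every 15 seconds so intermediaries
// do not drop an idle connection before the real events arrive.
public class SseHeartbeat {

    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    public void keepAlive(SseEmitter emitter) {
        ScheduledFuture<?> task = scheduler.scheduleAtFixedRate(() -> {
            try {
                // comment lines are ignored by EventSource but keep the stream busy
                emitter.send(SseEmitter.event().comment("keep-alive"));
            } catch (IOException e) {
                // the client is gone; stop the stream
                emitter.completeWithError(e);
            }
        }, 15, 15, TimeUnit.SECONDS);

        // stop the heartbeat when the emitter finishes or times out
        emitter.onCompletion(() -> task.cancel(true));
        emitter.onTimeout(() -> task.cancel(true));
    }
}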
I need to inform all users when a new Record is added to the database.
So I have the following code.
Application.java - here is where I placed the socket handler method:
public WebSocket<JsonNode> sockHandler() {
return WebSocket.withActor(ResponseActor::props);
}
Then I opened the connection
$(function() {
    var WS = window['MozWebSocket'] ? MozWebSocket : WebSocket;
    var socket = new WS("@routes.Application.sockHandler().webSocketURL(request)");
    socket.onmessage = function(event) {
        console.log(event);
        console.log(event.data);
        console.log(event.responseJSON);
    };
});
My Actor class
public class ResponseActor extends UntypedActor {

    private final ActorRef out;

    public ResponseActor(ActorRef out) {
        this.out = out;
    }

    public static Props props(ActorRef out) {
        return Props.create(ResponseActor.class, out);
    }

    @Override
    public void onReceive(Object response) throws Exception {
        out.tell(Json.toJson(response), self());
    }
}
And lastly, I think I need to invoke the actor from my response controller:
public Result addPost() {
    Map<String, String[]> request = request().body().asFormUrlEncoded();
    Response response = new Response(request);
    Map<String, String> validationMap = ResponseValidator.validate(response.responses);
    if (validationMap.isEmpty()) {
        ResponseDAO.create(response);
        ActorRef responseActorRef = Akka.system().actorOf(ResponseActor.props(outRef));
        responseActorRef.tell(response, ActorRef.noSender());
        return ok();
    } else {
        return badRequest(Json.toJson(validationMap));
    }
}
My question is: what is the ActorRef out, and where do I get it in my controller?
Could you please clarify the logic for sending an update to all clients through WebSockets?
I'm working on a similar problem myself, though in Scala, so I'll see if I can assist based on what I've learned so far (I'm having my own problems with getting the message to my actor after the socket opens).
Accepting a WebSocket connection with an actor isn't done with the typical request/response model, like making a GET request to the server for a page. Instead, you need to use Play's WebSocket API:
import akka.actor.*;
import play.libs.F.*;
import play.mvc.WebSocket;

public static WebSocket<String> socket() {
    return WebSocket.withActor(ResponseActor::props);
}
The Play WebSockets documentation should be able to help you from there better than I can:
https://www.playframework.com/documentation/2.4.x/JavaWebSockets
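On the broadcasting part of the question: a common pattern (only a sketch, with names of my own; Props, UntypedActor, and ActorRef are Akka's API) is to keep a registry of every connected socket's out actor and send the new Record to all of them, rather than creating a fresh ResponseActor in the controller:

import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

import akka.actor.ActorRef;
import akka.actor.Props;
import akka.actor.UntypedActor;
import play.libs.Json;

public class ResponseActor extends UntypedActor {

    // every connected client's "out" actor; Play passes it in when the socket opens
    private static final Set<ActorRef> clients = ConcurrentHashMap.newKeySet();

    private final ActorRef out;

    public ResponseActor(ActorRef out) {
        this.out = out;
        clients.add(out);
    }

    public static Props props(ActorRef out) {
        return Props.create(ResponseActor.class, out);
    }

    // call this from the controller after ResponseDAO.create(response)
    public static void broadcast(Object response) {
        for (ActorRef client : clients) {
            client.tell(Json.toJson(response), ActorRef.noSender());
        }
    }

    @Override
    public void onReceive(Object message) throws Exception {
        // messages sent by the browser over the socket arrive here; handle or ignore them
    }

    @Override
    public void postStop() throws Exception {
        clients.remove(out); // the socket closed, drop the reference
    }
}

With something like this, the controller would call ResponseActor.broadcast(response) instead of the actorOf/tell pair.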
I use a RestTemplate in a Java client to log onto a server and receive the headers required to upgrade a connection to a secure WebSocket.
Here is my code:
private static void loginAndSaveJsessionIdCookie(final String user, final String password, final HttpHeaders headersToUpdate) {
    String url = "http://localhost:" + port + "/websocket-services/login.html";
    new RestTemplate().execute(url, HttpMethod.POST,
            new RequestCallback() {
                @Override
                public void doWithRequest(ClientHttpRequest request) throws IOException {
                    System.out.println("start login attempt");
                    MultiValueMap<String, String> map = new LinkedMultiValueMap<>();
                    map.add("username", user);
                    map.add("password", password);
                    new FormHttpMessageConverter().write(map, MediaType.APPLICATION_FORM_URLENCODED, request);
                }
            },
            new ResponseExtractor<Object>() {
                @Override
                public Object extractData(ClientHttpResponse response) throws IOException {
                    System.out.println("got login response");
                    headersToUpdate.add("Cookie", response.getHeaders().getFirst("Set-Cookie"));
                    return null;
                }
            });
}
This usually works, but occasionally (especially after the WebSocket connection has timed out) there is no response from the server, and my client stops responding while the method hangs awaiting the response.
Could anyone suggest a fix or workaround for this? It causes the client to freeze completely, requiring a force close.
The best way to make any code asynchronous is to run it on a separate thread, and with an ExecutorService you can specify whatever timeout you wish. The following two options are available, depending on your needs (please check the API to see the difference between them):
<T> List<Future<T>> invokeAll(Collection<? extends Callable<T>> tasks,
long timeout, TimeUnit unit)
throws InterruptedException;
OR
<T> T invokeAny(Collection<? extends Callable<T>> tasks,
long timeout, TimeUnit unit)
throws InterruptedException, ExecutionException, TimeoutException;
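As a concrete sketch of that idea (the class, timeout value, and the WebSocketClient holder for the question's loginAndSaveJsessionIdCookie method are my own assumptions):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

import org.springframework.http.HttpHeaders;

public class LoginWithTimeout {

    private final ExecutorService executor = Executors.newSingleThreadExecutor();

    // bounds the login call so a silent server can no longer freeze the client
    public void loginOrFail(String user, String password, HttpHeaders headers) {
        Future<?> login = executor.submit(
                () -> WebSocketClient.loginAndSaveJsessionIdCookie(user, password, headers));
        try {
            login.get(10, TimeUnit.SECONDS); // wait at most 10 seconds
        } catch (TimeoutException e) {
            login.cancel(true);              // interrupt the hanging request
            throw new IllegalStateException("login did not complete in time", e);
        } catch (Exception e) {
            throw new IllegalStateException("login failed", e);
        }
    }
}

Alternatively, constructing the RestTemplate with a SimpleClientHttpRequestFactory and calling setConnectTimeout/setReadTimeout on it bounds the wait without an extra thread.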