I have a web service that I access from multiple domains. For reasons I'm unable to fathom, the session seems to be shared between different sites.
So, I make a request from WebAppA to the API. This works.
Then I make the same request from WebAppB to the same web service. This is reported as blocked by the CORS policy, e.g.:
The 'Access-Control-Allow-Origin' header has a value 'WebAppA' that is not equal to the supplied origin.
Origin 'WebAppB' is therefore not allowed access.
But the Tomcat code for the web service claims that it allows CORS:
I have this in my web.xml:
<param-name>Access-Control-Allow-Origin</param-name>
<param-value>*</param-value>
and this in the java class that handles requests:
if (StringUtils.isNotBlank(origin)) {
response.setHeader("Access-Control-Allow-Origin", origin);
}
Logically, this should be allowing the requests from WebAppB through, but instead it still sees WebAppA as the only permitted origin. Given the snippet above, one option that springs to mind is that the Origin header might be blank. But if it were, then surely it wouldn't say that WebAppB isn't allowed access, because it wouldn't know that the origin was WebAppB!?
Clearing the cache fixes the issue, so it's clearly session-associated somehow, but I can't see any cookies that look like they're relevant.
Question: How can I fix this so that both WebAppA and WebAppB can access the same web service, without clearing the cache in between?
Disclaimer: This is a follow-on from Possible CORS issue. What's going on and how can I fix it?, but I've done a lot more investigation since, so I can define the issue more clearly (I hope).
I suspect an error in org/intermine/webservice/server/WebService.java.
It says
origin = StringUtils.defaultIfBlank(
webProperties.getProperty("ws.response.origin"),
request.getHeader("Origin"));
The method parameters (src, default) are supplied in the wrong order, which causes the server to always return the default value in Access-Control-Allow-Origin instead of considering the actual current request...
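If that is the case, a minimal sketch of the presumably intended call (prefer the request's Origin header and fall back to the configured property) would be:
// Sketch of the presumed intent: use the request's Origin header when present,
// falling back to the configured ws.response.origin property.
origin = StringUtils.defaultIfBlank(
        request.getHeader("Origin"),
        webProperties.getProperty("ws.response.origin"));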
Related
Is there any reason to write
corsFilter.setAllowedOrigins(new HashSet<String>(Arrays.asList("*")));
where the definition of allowedOrigins in the Restlet framework is
private Set<String> allowedOrigins = SetUtils.newHashSet("*");
Another question: when I write the above line, I get an error running my app.
For some reason I get a duplicate origin, and the client refuses to accept it: in the response I can see both "*" and the domain name the request was sent from.
How can this duplication happen, and what is the best way to deal with it?
You're right, there is no need to provide this value as it is already the default one. Could you tell me where you read that such a value must be set?
I don't understand what really happens in the second part of your question, as I'm not able to reproduce it (with either CorsFilter or CorsService).
Could you try using the CorsService instead? This service helps configure the CORS feature and is registered in the list of services of either the Application or the Component, for example in the constructor of the application:
public TestCorsApplication() {
CorsService corsService = new CorsService();
corsService.setAllowedCredentials(true);
corsService.setSkippingResourceForCorsOptions(true);
getServices().add(corsService);
}
I was playing around with Jetty (9.2.3.v20140905), connecting a WebSocket endpoint where I tried to use my own ServerEndpointConfig, when I looked at Jetty's code to see how it was used.
I noticed that it is used in JsrCreator when a WebSocket object is created:
Object createWebSocket(ServletUpgradeRequest req, ServletUpgradeResponse resp){
...
// modify handshake
configurator.modifyHandshake(config,hsreq,hsresp);
...
}
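For context, a custom configurator of the kind being plugged in here looks roughly like the following sketch (the class name, origin value, and header are illustrative):
import java.util.Collections;
import javax.websocket.HandshakeResponse;
import javax.websocket.server.HandshakeRequest;
import javax.websocket.server.ServerEndpointConfig;

public class MyConfigurator extends ServerEndpointConfig.Configurator {

    @Override
    public boolean checkOrigin(String originHeaderValue) {
        // restrict to a single origin (illustrative value)
        return "https://example.com".equals(originHeaderValue);
    }

    @Override
    public void modifyHandshake(ServerEndpointConfig sec,
                                HandshakeRequest request,
                                HandshakeResponse response) {
        // any header set here can still be overwritten by the container afterwards,
        // which is the behaviour discussed below
        response.getHeaders().put("X-Custom-Header", Collections.singletonList("value"));
    }
}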
I read the javadoc of modifyHandshake on ServerEndpointConfig.Configurator (javax.websocket-api 1.0), which states:
Called by the container after it has formulated a handshake response resulting from a well-formed handshake request. The container has already checked that this configuration has a matching URI, determined the validity of the origin using the checkOrigin method, and filled out the negotiated subprotocols and extensions based on this configuration. Custom configurations may override this method in order to inspect the request parameters and modify the handshake response that the server has formulated. If the developer does not override this method, no further modification of the request and response are made by the implementation.
So, according to that javadoc, the origin validation (and the URI checking also) is supposed to have happened already by the time modifyHandshake is called.
Here's what Jetty does:
Object createWebSocket(ServletUpgradeRequest req, ServletUpgradeResponse resp){
...
// modify handshake
configurator.modifyHandshake(config,hsreq,hsresp);
// check origin
if (!configurator.checkOrigin(req.getOrigin())){...}
...
resp.setAcceptedSubProtocol(subprotocol);
...
resp.setExtensions(configs);
}
As you can see, the origin is checked after the configurator has been called, and the response is modified after the configurator has been called as well.
The method acceptWebSocket of WebSocketServerFactory makes a call to the WebSocketCreator:
Object websocketPojo = creator.createWebSocket(sockreq, sockresp);
And after that calls:
private boolean upgrade(HttpConnection http, ServletUpgradeRequest request, ServletUpgradeResponse response, EventDriver driver)
which also modifies the response via HandshakeRFC6455:
// build response
response.setHeader("Upgrade","WebSocket");
response.addHeader("Connection","Upgrade");
response.addHeader("Sec-WebSocket-Accept",AcceptHash.hashKey(key));
So I have no way of modifying the response using only my configurator, because Jetty will change it anyway.
It seems to me Jetty does not comply with JSR 356, the Java API for WebSocket, does it?
Ah, one of the many ambiguous and ill-defined parts of the JSR-356 spec.
You might want to read the open bugs against the spec.
There are many real-world examples of scenarios that are rendered impossible if the original 1.x spec is followed exactly.
Now, to tackle the specific details of your question:
Why is checkOrigin called after modifyHandshake in the Jetty implementation?
This is because there are valid scenarios (especially with CDI and Spring) where the information needed by an end user's checkOrigin implementation is not valid, or does not exist, until modifyHandshake has been called.
Basically, the endpoint Configurator is created, modifyHandshake is called, and at that point all of the library integration (CDI, Spring, etc.) starts; that's when the endpoint enters the WebSocket (RFC 6455) CONNECTING state. (Once the endpoint's onOpen is called, the WebSocket RFC 6455 state moves to OPEN.)
As you have probably noticed, there are no definitions in the spec of the scopes and lifetimes of objects when CDI (or Spring) is involved.
The 1.x JSR 356 spec actually distances itself from servlet-container-specific behavior; this was done to make the spec as generic as possible, with the ability to have non-servlet WebSocket server containers too. Unfortunately, that also means there are many use cases in a servlet container that don't mesh with the 1.x JSR 356 spec.
Once the JSR356 spec is updated to properly define the CDI scopes on WebSocket, then this quirk of checkOrigin after modifyHandshake can be fixed.
Why is the implementation modifying the response after modifyHandshake?
The implementation has to modify the response, otherwise the response is invalid for an HTTP/1.1 upgrade; the implementation's need to cooperate with the endpoint and its configuration for sub-protocols and extensions makes this a reality. (Notice that the JSR 356 spec punts on extensions?)
This is also an area that is promised to be corrected in the next JSR356 revision.
The current work on the WebSocket over HTTP/2 spec makes this even more interesting, as it isn't (currently) using the HTTP/1.1 upgrade semantic. It just comes into existence with a handshake only (no Upgrade, no Origin, etc).
Is it possible to instruct a Jetty server not to update the session's last access time when a particular servlet is accessed?
Our use case is an HTML page that sends asynchronous requests in the background every 5 minutes to refresh its contents. The session's timeout is set to 30 minutes.
The unfortunate problem with this configuration is that when a user leaves that page open in a browser's tab, the session never expires because the access time of the session is updated by every asynchronous request.
For correctness' sake I have to admit that I haven't tried anything yet, because I wasn't able to find any help for my issue on the Internet. If what I'm asking for is not possible, I'm thinking of storing the access time in a session variable that is controlled directly by the application. This value would have to be checked early, before a request is processed (in the doGet and doPost methods of the servlets), and the session would need to be invalidated manually. Is there a better solution?
The servlet can't distinguish whether a request was generated by a script or by a human, since both requests come from the same browser and consequently send the same JSESSIONID. So you have to mark those requests in order to distinguish their source; you can mark them with a header or a request parameter.
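For example, a minimal way to mark and detect them (a sketch; the header name is made up, and the page's background refresh code would have to set it):
// Client side (illustrative): xhr.setRequestHeader("X-Background-Refresh", "true");
private boolean isUser(HttpServletRequest request) {
    // treat any request without the marker header as a "real" user interaction
    return request.getHeader("X-Background-Refresh") == null;
}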
I like your idea of storing the access time in a session variable (it will piggy-back on servlet session expiry).
Your algorithm in this case would be something like:
if (isUser(request)) {
    // real user activity: clear the marker so the normal session timeout applies
    session.removeAttribute("lastRobotAccess");
} else {
    Long lastRobotAccess = (Long) session.getAttribute("lastRobotAccess");
    if (lastRobotAccess == null) {
        session.setAttribute("lastRobotAccess", System.currentTimeMillis());
    } else if (System.currentTimeMillis() - lastRobotAccess
            > session.getMaxInactiveInterval() * 1000L) {
        session.invalidate();
    }
}
When a request arrives at the servlet container, it is first processed by the filters (if you have defined any) and then by the servlet. Filters are useful precisely for this kind of thing:
A common scenario for a filter is one in which you want to apply preprocessing or postprocessing to requests or responses for a group of servlets, not just a single servlet. If you need to modify the request or response for just one servlet, there is no need to create a filter—just do what is required directly in the servlet itself.
Since you can reach the session from a filter, filters are a more suitable place for your logic. You won't pollute the servlet's logic with additional checking, and you can apply it to other servlets. Filters are also part of the servlet specification, so this will work in any container.
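A minimal sketch of such a filter, reusing the marker-header idea and the lastRobotAccess attribute from above (names are illustrative):
import java.io.IOException;
import javax.servlet.*;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpSession;

public class BackgroundRefreshFilter implements Filter {

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpSession session = request.getSession(false);
        if (session != null) {
            if (isUser(request)) {
                // real user activity: clear the marker so the normal timeout applies again
                session.removeAttribute("lastRobotAccess");
            } else {
                Long last = (Long) session.getAttribute("lastRobotAccess");
                long now = System.currentTimeMillis();
                if (last == null) {
                    session.setAttribute("lastRobotAccess", now);
                } else if (now - last > session.getMaxInactiveInterval() * 1000L) {
                    session.invalidate();
                }
            }
        }
        chain.doFilter(req, res);
    }

    private boolean isUser(HttpServletRequest request) {
        return request.getHeader("X-Background-Refresh") == null; // same marker as above
    }

    @Override
    public void init(FilterConfig filterConfig) { }

    @Override
    public void destroy() { }
}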
You already knew these things, but I've just put them on "paper" :-D
We would like to implement a "fault barrier" strategy for managing exceptions in our applications. One thing our applications have is the concept of a "passback" response, basically a no-op, which we'd like to return in preference to throwing 500, 400, etc. HTTP status codes. In other words, our external-facing applications should always return a valid response, even if an underlying exception was thrown; we'd like to handle that internally and still return a valid no-op response.
Our first implementation was a Servlet Filter which would wrap all requests in a try/catch block, and return the default response from the catch block, e.g.:
try {
    chain.doFilter(request, response);
} catch (Throwable t) {
    generatePassbackResponse((HttpServletRequest) request, (HttpServletResponse) response);
}
While this mostly works, and feels nice and clean (we can return nice text, set the content type appropriately, etc.), the one problem seems to be that when an Exception is thrown the response still comes through with Status-Code: 500.
HttpServletResponse.setStatus(200) has no effect, and the javadoc does say it only applies to normal requests.
Our second thought is that we may have to forward to another page, or plug an error page into web.xml and manually sendError to that page - though we're interested in whether anyone has a specific recommendation.
There are two methods for setting the HTTP status of a response (illustrated briefly below):
setStatus() will just set the status
sendError() will set the status and trigger the <error-page> mechanism
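Roughly, inside a servlet (where response is the HttpServletResponse and an <error-page> for 500 is configured):
// either: sets the status line only; the body is still yours to write, and the response is not committed
response.setStatus(HttpServletResponse.SC_INTERNAL_SERVER_ERROR);

// or: sets the status, commits the response, and dispatches to the matching <error-page>
response.sendError(HttpServletResponse.SC_INTERNAL_SERVER_ERROR);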
Javadoc for sendError says the response should be considered to be committed after calling sendError (this could explain the behavior of your appserver).
Implementing a custom HttpServletResponseWrapper would allow you to enforce the behavior you need for sendError (and maybe buffer the whole response in memory, so that you can send "passbacks" for exceptions occurring after the point where the response would usually be committed).
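A minimal sketch of such a wrapper (illustrative; it simply downgrades sendError to setStatus so the container does not commit the response or trigger an <error-page>):
import java.io.IOException;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpServletResponseWrapper;

public class PassbackResponseWrapper extends HttpServletResponseWrapper {

    public PassbackResponseWrapper(HttpServletResponse response) {
        super(response);
    }

    @Override
    public void sendError(int sc) throws IOException {
        setStatus(sc); // record the status but leave the response uncommitted
    }

    @Override
    public void sendError(int sc, String msg) throws IOException {
        setStatus(sc);
    }
}
A filter around the servlet could then pass new PassbackResponseWrapper(response) down the chain instead of the raw response.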
If I remember correctly, you should not be calling chain.doFilter() if you do not want anything else to process the request. The filter will get executed in every case, but chain.doFilter() will ensure that all other filters are called. In order to properly block an exception from getting to the user, you need to stop the request/response handling.
You could take a different route as well by using a framework like Spring and its Interceptors (just like a Filter). Spring gives you a lot of control over the Interceptors and how responses get handled. Granted, this is a bit heavy of a solution to your question.
In response to the comment, according to http://java.sun.com/products/servlet/Filters.html:
The most important method in the Filter interface is the doFilter method... This method usually performs some of the following actions:
If the current filter is the last filter in the chain that ends with the target servlet, the next entity is the resource at the end of the chain; otherwise, it is the next filter that was configured in the WAR. It invokes the next entity by calling the doFilter method on the chain object (passing in the request and response it was called with, or the wrapped versions it may have created). Alternatively, it can choose to block the request by not making the call to invoke the next entity. In the latter case, the filter is responsible for filling out the response.
The idea being that this "fault barrier" needs to stop all other filters from executing, and just handle the request/response in the manner it deems necessary.
Can you not just use the standard web.xml configuration:
<error-page>
<error-code>500</error-code>
<location>/error.jsp</location>
</error-page>
<error-page>
<exception-type>java.lang.Exception</exception-type>
<location>/error.jsp</location>
</error-page>
I can't see what else you're trying to do that this doesn't already cater for. If it's just the error code, then I think you can set that using the response object.
The RESTEasy framework allows you to select what your response code will be using ExceptionMappers. It may be that you're reluctant to use it, but I've found it to be very quick and efficient.
The JBoss documentation covering ExceptionMappers
http://docs.jboss.org/resteasy/docs/2.0.0.GA/userguide/html_single/index.html#ExceptionMappers
My blog article showing a snippet of general purpose RESTEasy code
http://gary-rowe.com/agilestack/2010/08/22/my-current-development-stack/
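For illustration, a general-purpose mapper along those lines might look like this (a sketch; the JSON passback body is an assumption about the application's no-op format):
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;
import javax.ws.rs.ext.ExceptionMapper;
import javax.ws.rs.ext.Provider;

@Provider
public class PassbackExceptionMapper implements ExceptionMapper<Exception> {

    @Override
    public Response toResponse(Exception exception) {
        // swallow the failure and return the application's "no-op" passback with a 200
        return Response.ok("{}", MediaType.APPLICATION_JSON).build();
    }
}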
I have a need to create an HttpSession (via a cookie) whenever a client invokes a particular UI.
Assumptions:
Let's assume that I'm not going to worry about any deep OAuth-like authentication dance. JSESSIONID cookie impersonation is not an issue for now.
The server is Tomcat, so a JSESSIONID cookie is sent down to the client if a new session is created.
Design issues:
I'm grappling with how to design the URI. What is actually the REST resource? I already have /users and /users/{someuserid}. I wanted to use /auth/login, but in one previous SO question a cited article says that we should not have verbs in the URL. I've noticed that even Google makes the same mistake with https://www.google.com/accounts/OAuthGetRequestToken. So, in your opinion, are /auth/login/johndoe (login) and /auth/logout/johndoe (logout) good options?
UPDATE:
I've changed my design. I'm now thinking of using the URI /session/johndoe (PUT for login, DELETE for logout). Is this still within the limits of the REST ethos?
Aren't sessions irrelevant in REST Style Architecture?
http://www.prescod.net/rest/mistakes/
I am in the midst of creating a REST endpoint that recognizes sessions. I've standardized on:
POST /sessions => returns Location: http://server/api/sessions/1qazwsxcvdf
DELETE /sessions/1qazwsxcvdf => invalidates session
It is working well.
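A minimal sketch of such a resource (JAX-RS style; the paths, class name and created URI are illustrative, not the poster's actual code):
import java.net.URI;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpSession;
import javax.ws.rs.DELETE;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.Response;

@Path("/sessions")
public class SessionsResource {

    @POST
    public Response login(@Context HttpServletRequest request) {
        // creating the container session makes Tomcat/Jetty send the JSESSIONID cookie
        HttpSession session = request.getSession(true);
        return Response.created(URI.create("/api/sessions/" + session.getId())).build();
    }

    @DELETE
    @Path("/{id}")
    public Response logout(@PathParam("id") String id, @Context HttpServletRequest request) {
        HttpSession session = request.getSession(false);
        if (session != null) {
            session.invalidate(); // logs the caller out
        }
        return Response.noContent().build();
    }
}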