Enforce SSL on Play! Framework - java

I'm currently using Play! 1.2.2 and its new Netty-based framework.
I haven't found a straightforward method of enforcing SSL, although I can get HTTP and HTTPS served simultaneously. Does anyone who has worked with Play! have a straightforward method of enforcing SSL? I'm not sure whether I need to create redirects or whether this can be solved quickly in a conf file.

There are a couple of ways to enforce SSL.
Firstly, you can make individual actions use HTTPS by building their URLs with the .secure() method on a reverse-routed action, for example for a link to the index page.
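A hedged sketch of that first approach, assuming Play 1.x where Router.reverse("Application.index") returns an ActionDefinition and (as I recall) its secure() method turns the reverse-routed URL into an absolute https one; the action name is illustrative:

import play.mvc.Controller;
import play.mvc.Router;

public class Application extends Controller {

    public static void index() {
        // Build an https URL back to this action, e.g. for use in a template link.
        String secureIndexUrl = Router.reverse("Application.index").secure().url;
        renderText(secureIndexUrl);
    }
}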
Alternatively, and probably the best way, you can do this via a frontend HTTP server, such as Apache, Nginx or Lighttpd.
The idea of the frontend HTTP server is that your application runs on port 9000 but is not accessible from the outside network. The HTTP server is responsible for all incoming requests and is configured to accept only HTTPS. The HTTPS is handled by the HTTP server, and the request is then forwarded on to Play.
This leaves your entire Play application working as normal, and the SSL is offloaded to another application.
The same method can be applied with a load balancer rather than an HTTP server, but I am guessing the majority of people will go with the far cheaper alternative of an HTTP server, unless running in a corporate environment.
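One consequence worth noting: when SSL terminates at the frontend server, the forwarded request reaches Play over plain HTTP, so request.secure will be false. A hedged sketch of a check that also trusts a forwarded-protocol header, assuming your proxy sets X-Forwarded-Proto (a common but not universal convention):

import play.mvc.Before;
import play.mvc.Controller;
import play.mvc.Http;

public class ProxyAwareSSL extends Controller {

    @Before
    static void verifyForwardedSSL() {
        // Play 1.x stores header names lower-cased in request.headers.
        Http.Header forwardedProto = request.headers.get("x-forwarded-proto");
        boolean secure = request.secure
                || (forwardedProto != null && "https".equalsIgnoreCase(forwardedProto.value()));
        if (!secure) {
            redirect("https://" + request.host + request.url);
        }
    }
}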

In the controller you can check against request.secure and either do a redirect or return 403/access denied.
You can force SSL for a whole controller by doing this:
public class ForceSSL extends Controller
{
    @Before
    static void verifySSL()
    {
        // Redirect any plain-HTTP request to the same URL over HTTPS.
        if (!request.secure)
        {
            redirect("https://" + request.host + request.url);
        }
    }
}
... and annotate another controller:
@With(ForceSSL.class)
public class Foo extends Controller
{
    ....
}
See also http://groups.google.com/group/play-framework/browse_thread/thread/7b9aa36be85d0f7b

Related

Keycloak public client and authorization

We are using keycloak-adapter with Jetty for authentication and authorization using Keycloak.
As per Keycloak doc for OIDC Auth flow:
Another important aspect of this flow is the concept of a public vs. a confidential client. Confidential clients are required to provide a client secret when they exchange the temporary codes for tokens. Public clients are not required to provide this client secret. Public clients are perfectly fine so long as HTTPS is strictly enforced and you are very strict about what redirect URIs are registered for the client.
HTML5/JavaScript clients always have to be public clients because there is no way to transmit the client secret to them in a secure manner.
We have webapps which connect to Jetty and use auth, so we created a public client, and it works great for webapp/REST authentication.
The problem is that as soon as we enable authorization, the client type gets converted from Public to Confidential, and Keycloak does not allow resetting it to Public. Now we are stuck: we cannot have a public client once authorization is enabled, and we cannot connect our webapps to a confidential client.
This seems contradictory to us. Any idea why the client needs to be confidential for authorization? Any help on how we can overcome this issue?
Thanks.
As far as I understand, you have your frontend and backend applications separated. If your frontend is a static web app not served by the same backend application (server), and your backend is a simple REST API, then you would have two Keycloak clients configured:
public client for the frontend app. It would be responsible for acquiring JWT tokens.
bearer-only client, which would be attached to your backend application.
To enable authorization you would create roles (either realm or client scoped; start at the realm level as it's easier to comprehend). Every user would then be assigned a role (or roles) in the Keycloak admin UI. Based on this you would set up your Keycloak adapter configuration (on the backend).
All things considered, in order to talk to your REST API you would attach a JWT token to each HTTP request in the Authorization header (a minimal sketch follows after this list). Depending on your frontend framework, you can use either of these:
Keycloak js adapter
Other bindings (angular, react)
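A minimal sketch of attaching such a token from plain Java, in case the caller is not a browser app; the URL is a placeholder and the token is assumed to have already been obtained via the public client:

import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

public class ApiClient {

    public static int callBackend(String accessToken) throws IOException {
        HttpURLConnection conn = (HttpURLConnection)
                new URL("https://backend.example.com/api/items").openConnection();
        conn.setRequestMethod("GET");
        // The bearer-only client on the backend validates this token and its roles.
        conn.setRequestProperty("Authorization", "Bearer " + accessToken);
        return conn.getResponseCode();
    }
}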
P.S. For debugging I have just written a CLI tool called brauzie that would help you fetch and analyse your JWT tokens (scopes, roles, etc.). It could be used for both public and confidential clients. You could as well use Postman and https://jwt.io
HTH :)
I think you are referring to the "Authorization Enabled" switch in the Keycloak admin console when creating a client. If you hover over the question mark next to the label, you'll see the hint "Enable/Disable fine grained authorization support for a client."
(Screenshot: Create client in Keycloak admin console, v6.0.1)
This is meant for when you create a client for a backend application that serves as a resource server. In that case, the client will be confidential.
If you want to create a client for a frontend app, to authenticate a user and obtain a JWT, then you don't need this.
See also: https://www.keycloak.org/docs/latest/authorization_services/index.html
After much deliberation, we found that authorization does not actually need to be enabled on the public client you connect to. When a request comes to the public client, it only does the authentication part. The authorization part is done when the actual request lands on the resource server (in our case, Jetty) using the confidential client (as Jetty has knowledge of the confidential client configured in it).

Spring websockets SockJS fallback protocols are not working out of the box?

I've made an app with Spring Boot with websockets support. Everything is working great. I'm using SockJS + Stomp. No worries, it's just working. But now I want to support SockJS's ability to fall back to its alternative transport protocols, and it seems that this is not working out of the box.
Here's how I added the endpoint:
@Override
public void registerStompEndpoints(StompEndpointRegistry registry) {
    // withSockJS() enables the HTTP-based fallback transports on the /ws endpoint
    registry.addEndpoint("/ws").withSockJS();
}
And that's it, no further configuration.
Now when I disable websockets in the browser and try to launch my app, I get a 404 for the transport that SockJS tries to use as the fallback.
See? First GET /ws/iframe.html returns 404, then POST /ws/**/xhr_send?t=... also returns 404.
What does this mean? Do I have to develop something else so that the SockJS fallback protocols will work?
The xhr_send endpoint returns 404 if the session id is not found.
If your websocket service is scaled to multiple instances, the root cause can be that the session was initiated on one instance and the other instances are not aware of it.
Possible fixes:
sticky sessions - route all requests from a client to the same instance
distributed state - share user sessions between instances
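Sticky sessions are mostly a load-balancer concern, but there is a related knob on the Spring side. A hedged sketch, assuming SockJsServiceRegistration.setSessionCookieNeeded is available in your Spring version (it controls the cookie_needed flag in the /ws/info response, which cookie-based sticky load balancers typically key on; depending on the version it may already default to true):

import org.springframework.context.annotation.Configuration;
import org.springframework.web.socket.config.annotation.EnableWebSocketMessageBroker;
import org.springframework.web.socket.config.annotation.StompEndpointRegistry;
import org.springframework.web.socket.config.annotation.WebSocketMessageBrokerConfigurer;

@Configuration
@EnableWebSocketMessageBroker
public class WebSocketConfig implements WebSocketMessageBrokerConfigurer {

    @Override
    public void registerStompEndpoints(StompEndpointRegistry registry) {
        registry.addEndpoint("/ws")
                .withSockJS()
                // Advertise that a session cookie (typically JSESSIONID) is expected,
                // so SockJS clients send it and the load balancer can pin the session.
                .setSessionCookieNeeded(true);
    }
}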

IntelliJ Static Web Project to Tomcat or Angular CORs

I have a static web Angular project in IntelliJ IDEA. The static page gets deployed to http://localhost:63342/Calculator/app/index.html. I have run into a problem: when I try to post some data to a server to get a response back, I get this error:
XMLHttpRequest cannot load <url>. No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://localhost:63342' is therefore not allowed access. The response had HTTP status code 401.
Here is my post angular code:
WebIdServer.prototype.getId = function(id) {
  var _this = this;
  var request = {
    method: 'POST',
    url: 'https://<url>',
    headers: {
      'Authorization': 'Bearer QWE234J234JNSDFMNNKWENSN2M3',
      'Content-Type': 'application/json'
    },
    data: {
      id: id
    }
  };
  _this.$log.debug(request);
  return _this.$http(request)
    .success(function(data, status, headers, config) {
      _this.$log.debug("Successful request.");
      /* called for result & error because 200 status */
      _this.uid = data.id;
      _this.$log.debug(_this.uid);
    })
    .error(function(data, status, headers, config) {
      _this.$log.debug("Something went wrong with the request.");
      _this.$log.debug(data);
      /* handle non-200 statuses */
    });
};
I know for a fact that the POST works, because I tried it on a local URL of my application running on a different port.
So my question is: since I can't post from localhost, would deploying this to a Tomcat server fix things? If so, how do you deploy this static web project to a Tomcat server? If that's not necessary, how do I get around this problem?
There are a few things to know about CORS. It's the web browser telling you that you cannot make a particular call. This is purely a front-end restriction; a script running on a server can call any API regardless of its location. Three different options:
without config; same hosts
Without any configuration on your server, your front end's AJAX requests need to match both the domain and the port of the service you're calling. In your case, your angular app at http://localhost:63342 should be calling a service also hosted on http://localhost:63342 and then you're sweet. No CORS issues.
with server side config; different hosts
If the API is hosted elsewhere, you'll have to configure the API host. Most servers will let you configure access controls, to allow a particular domain to bypass the CORS block. If you have access to the server you're trying to call, see if you can set it up. The enable CORS website has examples for most servers. Usually this is pretty simple.
Create a proxy
This is your Tomcat idea. CORS is only a problem if your front end calls another service. A server script can call anything it likes. So, you could use Tomcat (or Apache, or NGINX, or NodeJS...) to host a script that'll pass on the request. Essentially, all it needs to do is add Access-Control-Allow-Origin: * to the response of the API.
I have never used Tomcat myself, but here's a blog post that might have some useful info on how to do that. Combine it with the info on enable CORS and you should be able to route anything to anywhere.
This process is common. Just look at the popularity of a Node package like CORS Anywhere, which does exactly what your Tomcat proxy would do.
As a disclaimer, how good of an idea this is depends on how you can pass along the credentials and tokens. You don't really want to create a service that'll just blindly call someone else's API with your credentials.
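If the proxying script ends up being a servlet behind Tomcat, a minimal hedged sketch of the header-adding part (javax.servlet API; registering the filter in web.xml or via @WebFilter is left out, and * is exactly as permissive as it sounds):

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

public class CorsFilter implements Filter {

    @Override
    public void init(FilterConfig filterConfig) throws ServletException {
        // no configuration needed for this sketch
    }

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        // Tell the browser that any origin may read this response.
        ((HttpServletResponse) response).setHeader("Access-Control-Allow-Origin", "*");
        chain.doFilter(request, response);
    }

    @Override
    public void destroy() {
        // nothing to clean up
    }
}

Note that a request carrying an Authorization header also triggers a preflight OPTIONS request, whose response needs Access-Control-Allow-Headers and Access-Control-Allow-Methods as well.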

Can nginx decide whether to proxy to a url, based on a previous proxy call?

I have the following situation:
A backend system which is hidden for outside access
A thin extension, written in Play Framework, which does some external work with the data passed to the backend.
An nginx instance intercepting all public calls, and deciding to which system to proxy
The idea is the following:
If a specific call comes in, I want nginx to proxy it to the Play app and, based on the result of the Play app, decide whether to proxy it on to the backend or to return the Play app's result to the web client. The result of the Play app could be either some JSON or directly playing with the response codes, so when the request can be proxied further it will return 200, if not 500, etc.
Is it possible?
In this scenario you use nginx as a reverse proxy in front of your Play app.
Your Play controller would handle the request, and you can then apply your business logic to decide whether or not to forward the request to your backend application.
Responses from Play can be standard http responses or JSON (or many other formats).
You can connect to your back end application by making Web Service requests (from WS in Play) or MQ messages (RabbitMQ plugin) or custom protocol.
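A hedged sketch of what that gatekeeping Play action might look like, assuming Play 1.x as elsewhere on this page; the action name, parameter and rule are illustrative, and the nginx side (for example via its auth_request mechanism) is not shown:

import play.mvc.Controller;

public class Gatekeeper extends Controller {

    public static void check() {
        // Illustrative business logic: decide whether the call may go through.
        String payload = params.get("payload");
        boolean allowed = payload != null && !payload.isEmpty();

        if (allowed) {
            ok();              // 200: nginx can pass the original request on to the backend
        } else {
            error("rejected"); // 500: nginx returns this result to the web client instead
        }
    }
}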

Can I write a Java loader class that will hook HTTP requests in the loaded class?

I have a class that I want to hook and redirect HTTP requests in.
I also have a loader class already written, but all it does is replace the functions that contain the HTTP requests I want to change.
Is there a way to hook HTTP requests in Java so that I can redirect them all more easily?
Sort of like a proxy-wrapper.
Clarification:
The app sends out a GET or POST request to a URL.
I need the content to remain the same, just change the URL.
DNS redirects won't work, the Host HTTP header needs to be correct for the new server.
PS: This is a Desktop App, not a server script.
A cumbersome but reliable way of doing this would be to make your application use a proxy server, and then write a proxy server which makes the changes you need. The proxy server could be in-process in your application; it wouldn't need to be a separate program.
To use a proxy, set a couple of system properties: http.proxyHost and http.proxyPort. Requests made via HttpURLConnection will then use that proxy (unless they specifically override the default proxy settings). Requests made using some other method, like Apache HttpClient, will not, I think, be affected, but hopefully all your requests are using HttpURLConnection.
To implement the proxy, if you're using a Sun JRE, then you should probably use the built-in HTTP server; set up a single handler mapped to the path "/", and this will pick up all requests being sent by your app, and can then determine the right URL to send them to, and make a connection to that URL (with all the right headers too). To make the connection, use URL.openConnection(Proxy.NO_PROXY) to avoid making a request to the proxy and so getting caught in an infinite loop. You'll then need to pump input and output between the two sockets.
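A heavily simplified, hedged sketch of that in-process proxy idea (JDK built-in com.sun.net.httpserver; Java 9+ for InputStream.transferTo; newhost.example.com and port 8123 are placeholders; response-header copying and bodiless statuses such as 204/304 are glossed over):

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.Proxy;
import java.net.URL;
import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpServer;

public class RewritingProxy {

    public static void main(String[] args) throws Exception {
        // In the real application this setup would run once at startup.
        HttpServer server = HttpServer.create(new InetSocketAddress("127.0.0.1", 8123), 0);
        server.createContext("/", RewritingProxy::handle);
        server.start();

        // Point the default HttpURLConnection stack at our local proxy.
        System.setProperty("http.proxyHost", "127.0.0.1");
        System.setProperty("http.proxyPort", "8123");
    }

    private static void handle(HttpExchange exchange) throws IOException {
        // Proxied requests carry an absolute URI; keep everything but swap the host,
        // so HttpURLConnection sets the Host header for the new server automatically.
        URL original = new URL(exchange.getRequestURI().toString());
        URL target = new URL(original.getProtocol(), "newhost.example.com",
                original.getPort(), original.getFile());

        // NO_PROXY avoids routing this outgoing call back through ourselves.
        HttpURLConnection conn = (HttpURLConnection) target.openConnection(Proxy.NO_PROXY);
        conn.setRequestMethod(exchange.getRequestMethod());
        exchange.getRequestHeaders().forEach((name, values) ->
                values.forEach(value -> conn.addRequestProperty(name, value)));

        if (!"GET".equals(exchange.getRequestMethod())) {
            conn.setDoOutput(true);
            try (InputStream in = exchange.getRequestBody();
                 OutputStream out = conn.getOutputStream()) {
                in.transferTo(out);
            }
        }

        int status = conn.getResponseCode();
        exchange.sendResponseHeaders(status, 0);
        InputStream body = status >= 400 ? conn.getErrorStream() : conn.getInputStream();
        try (InputStream in = body; OutputStream out = exchange.getResponseBody()) {
            if (in != null) {
                in.transferTo(out);
            }
        }
        exchange.close();
    }
}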
The only other way I can think of to do this would be to override HttpURLConnection with a new handler which steers requests to your desired destination; you'd need to find a way to persuade the URL class to use your handler instead of the default one. I don't know how you'd do that in a clean way.
While it's an older post, this should give some ideas of the kinds of bytecode injection which can be performed: Java Programming: Bytecode Injection. Another tool is Javassist, and you may be able to find some links from the Aspect-oriented programming wiki article (look at the bytecode weavers section).
There are some products which extensively dynamically modify code.
Depending upon what is desired, there may be ... less painful ... methods. If you simply want to 'hook' HTTP requests, another option is just to use a proxy (which could be an external process) and funnel through that. Using a proxy would likely require control over the name resolution used.
You can use servlet filters, which intercept requests; the requests can then be wrapped, redirected, forwarded or completed from there.
http://www.oracle.com/technetwork/java/filters-137243.html
Do you control all of the code? If so, I suggest using Dependency Injection to inject the concrete implementation you want, which would allow you to instead inject a proxy class.
If you can change the source code, just change it and add your extra code on each HTTP request.
If you can't change the source code, but it uses dependency injection, perhaps you can inject something to catch requests.
Otherwise: use aspect-oriented programming and hook the URL class, or whatever you use to make HTTP requests. @AspectJ (http://www.eclipse.org/aspectj/doc/next/adk15notebook/ataspectj.html) is quite easy and powerful.
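For the @AspectJ route, a hedged sketch of hooking the URL class (this assumes compile-time or load-time weaving of your own classes is set up; JDK classes themselves are not woven, so only call sites in application code are intercepted, and newhost.example.com is a placeholder):

import java.net.URL;
import java.net.URLConnection;
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;

@Aspect
public class UrlRedirectAspect {

    @Around("call(java.net.URLConnection java.net.URL.openConnection())"
            + " && target(original) && !within(UrlRedirectAspect)")
    public URLConnection redirect(ProceedingJoinPoint pjp, URL original) throws Throwable {
        // Keep protocol, port, path and query, but point the request at the new host.
        URL replacement = new URL(original.getProtocol(), "newhost.example.com",
                original.getPort(), original.getFile());
        return replacement.openConnection();
    }
}

Calls made through other clients (for example Apache HttpClient or java.net.http.HttpClient) do not go through URL.openConnection() and would need their own pointcuts.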
