Google App Engine and HTTPS Strategy - java

I am designing my first GAE app and obviously need to use HTTPS for the login functionality (can't be sending my users' UIDs and passwords in cleartext!).
But I'm confused/nervous about how to handle requests after the initial login. The way I see it, I have 2 strategies:
Use HTTPS for everything
Switch back from HTTPS (for login) to plain ole' HTTP
The first option is more secure, but might introduce performance overhead (?) and possibly send my service bill through the roof. The second option is quicker and easier, but less secure.
The other factor here is that this would be a "single-page app" (using GWT), and certain sections of the UI will be able to accept payment and will require the secure transmission of financial data. So some AJAX requests could be HTTP, but others must be HTTPS.
So I ask:
GAE has a nifty table explaining incoming/outgoing bandwidth resources, but never concretely defines how much I/O bandwidth can be dedicated for HTTPS. Does anybody know the restrictions here? I'm planning on using "Billing Enabled" and paying a little bit for the app (and for higher resource limits).
Is it possible to have a GWT/single-page app where some portions of the UI use HTTP while others utilize HTTPS? Or is it "all or nothing"?
Is there any real performance overhead to utilizing an all-HTTPS strategy?
Understanding these will help me decide between an HTTP/S hybrid solution and a pure HTTPS solution. Thanks in advance!

If you start mixing HTTP and HTTPS requests, you are only as secure as you would be using plain HTTP, because any HTTP request can be intercepted and can introduce possible XSS attacks.
If you are serious about your security, read up on it; assuming that you only require HTTPS for sensitive data and transmitting the rest over HTTP will bring you a lot of trouble.

You pay the same for HTTP and HTTPS incoming bandwidth, and you shouldn't see any difference in instance hours. The only difference is the monthly fee you need to pay for SNI or VIP SSL.

Related

HTTP requests for real time application, performance tips

I'm using Java's OkHttp3 to send multiple POST requests to the same REST endpoint, which is a third party AWS server on the same region as mine. I need those requests to be processed as fast as possible (even 1ms counts).
Right now the only performance tips I'm applying are very basic: I'm using HTTP2 so the connection socket is reused and I'm sending the requests asynchronously so it doesn't wait for any response until all requests are sent.
What are other tips I should consider to improve the performance?
EDIT: In case this is important for any reason, I'm currently passing all params through the URL, the body of the requests is empty. I may pass them as part of the body but I arbitrarily decided not to.
OkHttp is a good choice for low-latency. Netty may be a better choice for high-concurrency, but that's not your use-case.
You may want to measure what happens when you disable gzip. You'll need to remove the accept-encoding request header in a network interceptor. That might make things faster, but only because you're on a fast link.
One other thing to research is disabling Nagle’s algorithm. You'll need to call Socket.setTcpNoDelay() which you can do with a custom SocketFactory.
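A minimal sketch of such a factory (the class name is illustrative, and this simply delegates to the platform default); OkHttp can then be pointed at it via OkHttpClient.Builder's socketFactory(...):

```java
import java.io.IOException;
import java.net.InetAddress;
import java.net.Socket;
import javax.net.SocketFactory;

// Sketch: a SocketFactory that disables Nagle's algorithm on every
// socket it creates, by delegating to the default factory and setting
// TCP_NODELAY before the socket is used.
public class NoDelaySocketFactory extends SocketFactory {
    private final SocketFactory delegate = SocketFactory.getDefault();

    private Socket configure(Socket socket) throws IOException {
        socket.setTcpNoDelay(true); // disable Nagle's algorithm
        return socket;
    }

    @Override
    public Socket createSocket() throws IOException {
        return configure(delegate.createSocket());
    }

    @Override
    public Socket createSocket(String host, int port) throws IOException {
        return configure(delegate.createSocket(host, port));
    }

    @Override
    public Socket createSocket(String host, int port, InetAddress localHost, int localPort) throws IOException {
        return configure(delegate.createSocket(host, port, localHost, localPort));
    }

    @Override
    public Socket createSocket(InetAddress host, int port) throws IOException {
        return configure(delegate.createSocket(host, port));
    }

    @Override
    public Socket createSocket(InetAddress address, int port, InetAddress localAddress, int localPort) throws IOException {
        return configure(delegate.createSocket(address, port, localAddress, localPort));
    }
}
```

OkHttp uses the no-arg createSocket() and connects the socket itself, so setting the option up front covers its connections too.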
The next release of OkHttp will support unencrypted HTTP/2. If you're okay with this (it is almost always a bad idea), removing TLS might buy you a (small) gain. Be very careful here; plaintext comms are bad news.

I am sending user credentials in header part using POST method

I know it is very basic question but I need a solid answer to clear my thoughts on it.
I am sending user credentials, key etc. in the header part of a POST method.
Is it a good way? If not, then why?
It's a bad way of doing things like these, since if somebody could intercept your request they would get your credentials easily. Better to avoid it, or at least encrypt these kinds of requests.
One of the most popular solutions nowadays is to use OAuth 2.0 (or, even better, OpenID Connect). They will bring some complexity to your system, but the cool thing is that your application doesn't have to deal with passwords at all. Everything is delegated to the Authorization Server. And there are a lot of authorization servers ready to use, for instance Keycloak (we have been using it and it was a really good experience for us).

SPNEGO: Subsequent Calls after a Successful Negotiation and Authentication

Over the last few days I have built a proof-of-concept demo using the GSS-API and SPNEGO. The aim is to give users single-sign-on access to services offered by our custom application server via Http RESTful web-services.
A user holding a valid Kerberos Ticket Granting Ticket (TGT) can call the SPNEGO enabled web-service, the Client and Server will negotiate,
the user will be authenticated (both by Kerberos and on application level), and will (on successful authentication) have a Service Ticket for my Service Principal in his Ticket Cache.
This works well using CURL with the --negotiate flag as a test client.
On a first pass CURL makes a normal HTTP request with no special headers. This request is rejected by the Server,
which adds "WWW-Authenticate: Negotiate" to the response headers, suggesting negotiation.
CURL then gets a Service Ticket from the KDC, and makes a second request, this time with Negotiate + the wrapped Service Ticket in the request header (NegTokenInit).
The Server then unwraps the ticket, authenticates the user, and (if authentication was successful) executes the requested service (e.g. login).
The question is, what should happen on subsequent service calls from the client to the server? The client now has a valid Kerberos Service Ticket, yet additional calls via CURL using SPNEGO makes the same two passes described above.
As I see it, I have a number of options:
1) Each service call repeats the full 2-pass SPNEGO negotiation (as CURL does).
While this may be the simplest option, at least in theory there will be some overhead: both the client and the server are creating and tearing down GSS contexts, and the request is being sent twice over the network. Probably OK for GETs, less so for POSTs, as discussed in the questions below:
Why does the Authorization line change for every firefox request?
Why Firefox keeps negotiating kerberos service tickets?
But is the overhead* noticeable in real-life? I guess only performance testing will tell.
2) The first call uses SPNEGO negotiation. Once successfully authenticated, subsequent calls use application level authentication.
This would seem to be the approach taken by Websphere Application Server, which uses Lightweight Third Party Authentication (LTPA) security tokens for subsequent calls.
https://www.ibm.com/support/knowledgecenter/SS7JFU_8.5.5/com.ibm.websphere.express.doc/ae/csec_SPNEGO_explain.html
Initially this seems to be a bit weird. Having successfully negotiated that Kerberos can be used, why fall back to something else? On the other hand, this approach might be valid if GSS-API / SPNEGO can be shown to cause noticeable overhead / performance loss.
3) The first call uses SPNEGO negotiation. Once successfully authenticated and trusted, subsequent calls use GSS-API Kerberos.
In which case, I might as well do option 4) below.
4) Dump SPNEGO, use "pure" GSS-API Kerberos.
I could exchange the Kerberos tickets via custom Http headers or cookies.
Is there a best practice?
As background: Both the client and server applications are under my control, both are implemented in java, and I know both "speak" Kerberos.
I chose SPNEGO as "a place to start" for the proof-of-concept, and for worming my way into the world of Kerberos and Single Sign On, but it is not a hard requirement.
The proof-of-concept runs on OEL Linux servers with FreeIPA (because that is what I have in our dungeons), but the likely application will be Windows / Active Directory.
* or significant compared to other performance factors such as the database, use of XML vs JSON for the message bodies etc.
** If in the future we wanted to allow browser based access to the web services, then SPNEGO would be the way to go.
To answer your first question, GSS-SPNEGO may include multiple round trips. It is not limited to just two. You should implement session handling and, upon successful authentication, issue a session cookie that the client presents on each request. When this cookie is invalid, the server forces you to re-authenticate. This way you only incur the negotiation cost when it is really needed.
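A sketch of that per-request decision logic (class, cookie format, and return values are all hypothetical; real code would validate the Negotiate token with GSSContext.acceptSecContext() and look sessions up in a server-side store):

```java
import java.util.Map;
import java.util.Set;

// Sketch of the flow described above: accept a valid session cookie
// without renegotiating; otherwise demand SPNEGO negotiation.
public class SpnegoGate {
    private final Set<String> validSessions;

    public SpnegoGate(Set<String> validSessions) {
        this.validSessions = validSessions;
    }

    /** Decides the response for a request, given its lower-cased header map. */
    public String check(Map<String, String> headers) {
        String cookie = headers.get("cookie");
        if (cookie != null && cookie.startsWith("SESSION=")) {
            String id = cookie.substring("SESSION=".length());
            if (validSessions.contains(id)) {
                return "200 OK"; // session still valid, no negotiation needed
            }
        }
        String authz = headers.get("authorization");
        if (authz != null && authz.startsWith("Negotiate ")) {
            // Real code: feed the token to GSSContext.acceptSecContext(),
            // then create a session and set the cookie on the response.
            return "200 OK; Set-Cookie: SESSION=<new>";
        }
        // No session, no token: challenge the client to negotiate.
        return "401 Unauthorized; WWW-Authenticate: Negotiate";
    }
}
```

The key property is that the GSS context is only established when the session cookie is missing or stale.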
Depending on your application design you can choose different approaches for authentication. In FreeIPA we have been recommending to have a front-end authentication and allow applications to re-use the fact that front-end did authenticate the user. See http://www.freeipa.org/page/Web_App_Authentication for detailed description of different approaches.
I would recommend you to read the link referenced above and also check materials done by my colleague: https://www.adelton.com/ He is author of a number of Apache (and nginx) modules that help to decouple authentication from actual web applications when used with FreeIPA.
On re-reading my question, the question I was really asking was:
a) Is the overhead of SPNEGO significant enough that it makes sense to use it for authentication only, and that "something else" should be used for subsequent service calls?
or
b) Is the overhead of SPNEGO NOT significant in the greater scheme of things, and can be used for all service calls?
The answer is: It depends on the case; and key to finding out, is to measure the performance of service calls with and without SPNEGO.
Today I ran 4 basic test cases using:
a simple "ping" type web-service, without SPNEGO
a simple "ping" type web-service, using SPNEGO
a call to the application's login service, without SPNEGO
a call to the application's login service, using SPNEGO
Each test was called from a ksh script looping for 60 seconds, and making as many calls via CURL as possible in that time.
In the first test runs I measured:
Without SPNEGO
15 pings
11 logins
With SPNEGO
8 pings
8 logins
This initially indicated that using SPNEGO I could only make half as many calls. However, on reflection, the overall volume of calls measured seemed low, even given the poorly specified Virtual Machines being used.
After some googling and tuning, I reran all the tests calling CURL with the -4 flag for IPV4 only. This gave the following:
Without SPNEGO
300+ pings
100+ logins
With SPNEGO
19 pings
14 logins
This demonstrates a significant difference with and without SPNEGO!
While I need to do further testing with real-world services that do some backend processing, this suggests that there is a strong case for "using something else" for subsequent service calls after authentication via SPNEGO.
In his comments Samson documents a precedent in the Hadoop world, and adds the additional architectural consideration of highly distributed / available Service Principals.

How to determine which user is making Rest requests from android application?

I am trying to build a Spray REST web server for my mobile application on Android (eventually iPhones too). Currently, I am wondering how to determine from the server side which user is making REST method requests. After some research I understand that Android's SharedPreferences or an OAuth protocol can be utilized to handle user authentication. Still I am unsure how to create the entire picture of "this user is requesting some information". The message responses will be in JSON text; should the requests be in JSON as well?
I greatly appreciate all of your help, eagerly awaiting responses.
Currently, I am wondering how to determine from the server side which user is making REST Method requests.
The most adequate way to do this is to add an auth layer to your server. There are many ways of doing this exactly, depending on the security concerns you have to deal with.
Here is the list of auth schemes I would be looking into first of all:
Http basic auth (most simple one, stick with it if you can)
OpenID or OAuth (harder to implement, solves some problems that are out of scope of simply authenticating user. Note: OAuth is about authorization but can also be used to do authentication)
home-grown auth (may be implemented on different network layers, lots of options here. I do not recommend going here unless you really need to).
You can send more info from the client side when the user sends a request, by including unique info (the Android ID or advertising ID, for example) as a parameter:
server.address/request?command=abc&uid=UID...
Another way is to read the user's IP address and manage it with a session.

Single Sign On without cookies in Java

I keep facing this question from my manager: how will SSO work if the client disables cookies? But I don't have any answer. We are currently using JOSSO for single sign-on. Do we have any open source framework which supports single sign-on without using a cookie mechanism?
In the absence of cookies, you're going to have to embed some parameter in each url request. e.g. after logging in you assign some arbitrary id to a user and embed that in every link such as http://mydomain.com/main?sessionid=123422234235235. It could get pretty messy since every link would have to be fixed up before it went out the door which slows down your content. It also has security, logging and session history implications which are not such a huge deal when the state is in a cookie.
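A rough sketch of that rewriting (the `sessionid` parameter name follows the example above; real code would also handle URL encoding and fragments):

```java
// Sketch: append a session id to every outgoing link when cookies are
// unavailable, using "?" or "&" depending on whether the URL already
// has a query string.
public class UrlRewriter {
    public static String withSession(String url, String sessionId) {
        String separator = url.contains("?") ? "&" : "?";
        return url + separator + "sessionid=" + sessionId;
    }
}
```

Every link emitted by the application would have to pass through something like this, which is exactly the "fixed up before it went out the door" cost mentioned above.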
It may be simpler to do a simple cookie test on logged in users and send them off to an error page if they do not have cookies enabled.
The CAS project passes a "ticket" from the sign-on server to the consuming application as a URL query parameter; the consuming app then makes a back-channel request to the sign-on server to validate the ticket's authenticity. This negates the need for cookies and therefore works across domains, however it is a bit "chatty".
Another arguably more robust solution is to use a product based on SAML which is an industry standard for cross domain single sign on. There are a couple of open source products out there which use SAML and CAS itself has a SAML extension however they are typically quite complex to setup. Cloudseal is also based on SAML and is much simpler to use. The Cloudseal platform itself is delivered as a managed service but all the client libraries are open source
Of course with all these solutions you are simply passing a security context from one server to another; the consuming application will no doubt create its own local session, so you would then need to use URL rewriting instead of cookies.
Disclaimer: I work for Cloudseal :)
