I have a web-service endpoint and an HTTP connector on port X.
At some point this endpoint needs to switch to https, but on the same port!
(I know this is not the normal way of doing things, but this is what my clients expect from an old server they are using...)
Is there a way to do it in tomcat?
This is not possible with Tomcat. The HTTPS connector will accept SSL connections only.
We have such a proxy developed in house. It's not that hard to do: you just inspect the first incoming packet, looking for the pattern of an SSL handshake (we only look for CLIENT_HELLO). Once you've figured out the protocol, you forward the connection accordingly.
This is really ugly. You shouldn't do it if at all possible. We have to do it because our legacy clients do this and it's impossible to upgrade them all.
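For what it's worth, the sniffing logic itself is small. Below is a rough sketch of the idea (not our actual code): peek at the first byte of each connection and forward it to either a plain-HTTP or an HTTPS backend. The ports 9090/8080/8443 are placeholders for whatever listener and connectors you actually run.

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.io.PushbackInputStream;
import java.net.ServerSocket;
import java.net.Socket;

// Rough sketch of a protocol-sniffing front proxy: peek at the first byte of each
// incoming connection; 0x16 is the TLS "handshake" record type (the start of a
// ClientHello), anything else is treated as plain HTTP.
public class ProtocolSniffingProxy {

    public static void main(String[] args) throws IOException {
        try (ServerSocket listener = new ServerSocket(9090)) {
            while (true) {
                Socket client = listener.accept();
                new Thread(() -> handle(client)).start();
            }
        }
    }

    private static void handle(Socket client) {
        try {
            PushbackInputStream in = new PushbackInputStream(client.getInputStream(), 1);
            int first = in.read();
            if (first == -1) { client.close(); return; }
            in.unread(first);
            int backendPort = (first == 0x16) ? 8443 : 8080;   // TLS handshake vs. plain HTTP
            Socket backend = new Socket("localhost", backendPort);
            pump(in, backend.getOutputStream());                       // client -> backend
            pump(backend.getInputStream(), client.getOutputStream());  // backend -> client
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    // Copy bytes in a background thread until the source closes.
    private static void pump(InputStream in, OutputStream out) {
        new Thread(() -> {
            byte[] buf = new byte[8192];
            try {
                int n;
                while ((n = in.read(buf)) != -1) {
                    out.write(buf, 0, n);
                    out.flush();
                }
            } catch (IOException ignored) {
                // connection dropped; fall through to close
            } finally {
                try { out.close(); } catch (IOException ignored2) { }
            }
        }).start();
    }
}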
There is such a thing as an HTTPS upgrade, whereby a plaintext HTTP connection is upgraded to TLS by mutual agreement after it has been formed. Is that what you mean? If so, Tomcat doesn't seem to support it out of the box, and neither does Java. You can probably write yourself a Tomcat Connector that will do it; on the client end you have a more interesting problem ;-)
But I would ask why? Ports aren't so expensive that you can't use two.
You don't need to run HTTP and HTTPS on the same port. Configure Tomcat to redirect requests to HTTPS in the server.xml file.
Well, I wonder why they are NOT usually on the same port! Wouldn't that be easier?
The reason is probably that the related Java APIs (javax.net.ssl) don't allow it; you must have different server sockets. Are there any alternative SSL implementations for Java? I'm not aware of any.
I have a Spring application inside a Tomcat 8 container. This application has both local (intranet) and remote (internet) services. I would like to serve the local services with plain HTTP and the remote ones with HTTPS. Is this possible by editing the Tomcat configuration alone, without filtering requests inside the application?
I would distinguish local from remote requests by their IP addresses.
You shouldn't need to. Your local network should be protected by a firewall, and you simply configure the firewall to only allow the secure port through.
Local traffic from the intranet doesn't go through the firewall, so it can access the HTTP port (80, 8080, ...).
External traffic comes in through the firewall, and it will block the HTTP port and allow the HTTPS port (443, 8443, ...).
Often with HTTPS, you don't even let Tomcat handle that, but instead put IIS (Windows) or Apache (Linux) in front of it. In that case you only have an AJP connector on localhost, and nobody can talk directly to Tomcat. The frontend web server will then do the required filtering and SSL/TLS handshake.
If you have anything that's worth using https for, I'd opt to go https all the way: otherwise you'll sooner or later have information leaks because you've missed some crucial part of the configuration. HTTPS is no black magic anymore, and the performance impact is low, if it exists at all.
In fact, the typical use case that I see described is exactly the opposite of yours: intranet usage is typically more protected than internet access (which is often thought of as anonymous, though that depends on the nature of the site). However, an intranet is typically authenticated (more so than the internet side), and I'd expect it to be quite important to protect that authentication. The only mixed-mode solution (http/https) that I could come up with for this situation is: use HSTS as soon as a user logs in, and don't bother otherwise.
You're asking for the opposite of what I typically see, but my actual preferred solution (in all cases) is: force https everywhere and use HSTS. And don't worry any more. Easier to maintain, easier to set up, and hard to get wrong.
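If you do go the HSTS route and Tomcat itself terminates TLS, a minimal sketch of a servlet filter that sets the header could look like this (the max-age of one year is just an example value; map the filter to /* in web.xml):

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

// Adds the HSTS header to every response served over HTTPS, telling browsers to
// keep using https:// for this host.
public class HstsFilter implements Filter {

    public void init(FilterConfig config) { }

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        if (req.isSecure() && res instanceof HttpServletResponse) {
            ((HttpServletResponse) res).setHeader(
                    "Strict-Transport-Security", "max-age=31536000; includeSubDomains");
        }
        chain.doFilter(req, res);
    }

    public void destroy() { }
}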
I am attempting to connect to URLs in Java to see if they are valid, and I am wondering if I need to connect to HTTPS (port 443?) or if connecting to just HTTP (port 80) will be enough.
Does connecting to HTTP for an HTTPS website work? Is there anything with firewalls I should watch out for that wouldn't allow me to do this?
Thanks.
If you want to check that URLs are "valid" I think you want to know if they respond with a 200 status code to a GET request.
You'll need to check http and https separately if you want to know whether they both work. They're two different protocols, and servers handle them differently. Some servers mirror the same content over both protocols, but many of them redirect HTTP to HTTPS, etc.
Also, not every server supports SSL connections, so HTTPS might not be available.
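To make the "check both separately" point concrete, here is a minimal sketch in Java (example.com is a placeholder; a URL counts as "working" here only if a GET returns HTTP 200):

import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

// Check the same host over both protocols. Note that HttpURLConnection does not
// follow redirects across protocols (e.g. http -> https), so the two checks stay
// independent.
public class UrlChecker {

    static boolean respondsWith200(String url) {
        try {
            HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
            conn.setRequestMethod("GET");
            conn.setConnectTimeout(5000);
            conn.setReadTimeout(5000);
            return conn.getResponseCode() == 200;
        } catch (IOException e) {
            return false;   // no connection, TLS failure, timeout, ...
        }
    }

    public static void main(String[] args) {
        System.out.println("http:  " + respondsWith200("http://example.com/"));
        System.out.println("https: " + respondsWith200("https://example.com/"));
    }
}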
Since you rephrased your question, I'll update my answer according to that.
To stay with your example:
Checking URLs on port 80 is totally independent of checking URLs on port 443. Maybe port 80 leads to the same content as port 443. Maybe port 80 leads to the end-user content, while port 443 leads to the admin login.
Maybe apache operates on port 80 while nginx operates on port 443.
So to get all of the content, you need to scan both ports. Additionally, be prepared to sometimes find two different kinds of content that have nothing to do with each other. Admittedly this will happen rarely, but it can happen.
Regarding firewalls:
If a web-service is intended to be public, firewalls will happily allow you to connect to the service. If a web-service is intended to be private and you can connect to it nonetheless, the firewall admin made a mistake :)
HTH
I'm trying to communicate with a CUPS print server that has "Encryption Required" set for all its connections. This means that, when you try to establish a connection to it, it asks to upgrade the connection to a TLS-encrypted one, and neither Cups4j nor Jspi seems to be able to handle that.
Is there any way to connect to such a server from a Java application (using either these libraries or others)?
Your main problem is that CUPS/IPP is one of the rare protocols that use an HTTP to TLS upgrade, as described in RFC 2817. (https:// doesn't use that at all, see RFC 2818.) A consequence of that is that you'll find far less support in existing libraries for this upgrade.
In principle, upgrading a plain Socket into an SSLSocket isn't too difficult. However, since IPP relies on HTTP, it's likely that the HTTP library your IPP library uses doesn't support this, since few HTTP libraries support RFC 2817.
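The wrapping step itself might look roughly like this (a sketch of the client side only; the plain-text HTTP Upgrade negotiation that has to happen first is not shown):

import java.io.IOException;
import java.net.Socket;
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

// Once the plain-text "Upgrade: TLS/1.0" exchange (RFC 2817) has been agreed,
// wrap the existing socket in an SSLSocket and keep talking over the same
// connection, now encrypted.
public final class TlsUpgrade {

    static SSLSocket upgrade(Socket plain, String host, int port) throws IOException {
        SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
        // 'true' = close the underlying socket when the SSLSocket is closed
        SSLSocket tls = (SSLSocket) factory.createSocket(plain, host, port, true);
        tls.setUseClientMode(true);
        tls.startHandshake();
        return tls;
    }
}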
I haven't looked at Cups4J, but Jspi clearly relies on Apache HTTP Client (probably version 3.x).
Support for RFC 2817 was discussed in 2011 on the Apache HTTP Client mailing list, but it's not clear whether any of this made its way into the library. In any case, the Jspi code is older than that, so it's fair to assume it's not going to work.
A possible workaround:
Some IPP servers seem to support both TLS via an upgrade (RFC 2817) and TLS from the initial connection (RFC 2818, the traditional https:// way). Perhaps yours does too. Check whether it listens on another port for TLS connections (e.g. by pointing an HTTPS client at it). (This could also be the same port, if the server uses port unification.)
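A quick way to probe that from Java (the host name "printserver" and the standard IPP port 631 are assumptions for your setup; an untrusted certificate will also make this fail, which is a separate problem):

import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

// Does the print server accept a direct TLS connection (RFC 2818 style)?
public class DirectTlsProbe {

    public static void main(String[] args) throws Exception {
        SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
        try (SSLSocket socket = (SSLSocket) factory.createSocket("printserver", 631)) {
            socket.startHandshake();
            System.out.println("Direct TLS works, protocol: " + socket.getSession().getProtocol());
        }
    }
}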
If this works, a quick patch to IppHttpConnection.java in Jspi should enable you to make it use https:// connections instead of http:// connections:
private static URI toHttpURI(URI uri) {
    if (uri.getScheme().equals("ipp")) {
        // original Jspi code replaces "ipp" with "http"; patched here to use "https"
        String uriString = uri.toString().replaceFirst("ipp", "https");
        return URI.create(uriString);
    }
    return uri;
}
I'm not sure if ipps:// is standard, but you could use the same trick and replace ipps:// with https:// in the scheme. The rest should automatically be handled by the underlying HTTP library. (You might have to make sure your certificate is trusted too, but that's a different problem.)
Thrift provides several different non-blocking server models, like TNonblockingServer, THsHaServer, and TThreadedSelectorServer. But I'd like to enable SSL on the server, and it seems SSL only works with the blocking servers in Thrift.
Does anyone have any clues about how to build a non-blocking SSL server in Thrift? A Java example would be highly appreciated.
One alternative to worrying about SSL in your Java App is to stand up something like nginx (http://wiki.nginx.org/SSL-Offloader) as a reverse proxy.
This has the upside of your application not needing to care about SSL but does require one more layer in your stack.
Clients will connect to the nginx server instead of directly to your server, and nginx will forward those connections to your Thrift server.
You don't necessarily need two different servers for this approach, just configure your Thrift server to only listen on localhost (127.0.0.1 for ipv4) and have nginx listen on your external interfaces and forward to localhost.
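A minimal sketch of that setup on the Thrift side (ExampleService and ExampleHandler are placeholders for your own generated service and its implementation; port 9090 is likewise just an example):

import java.net.InetSocketAddress;
import org.apache.thrift.server.TNonblockingServer;
import org.apache.thrift.transport.TNonblockingServerSocket;

// Bind the non-blocking Thrift server to the loopback address only, so the only
// way in from outside is through the TLS-terminating nginx in front of it.
public class LocalhostOnlyThriftServer {

    public static void main(String[] args) throws Exception {
        TNonblockingServerSocket transport =
                new TNonblockingServerSocket(new InetSocketAddress("127.0.0.1", 9090));
        ExampleService.Processor<ExampleHandler> processor =
                new ExampleService.Processor<>(new ExampleHandler());
        TNonblockingServer server =
                new TNonblockingServer(new TNonblockingServer.Args(transport).processor(processor));
        server.serve();
    }
}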
This might be one of those "huh, why?" questions, but I figured it would be worth the try.
How would one, from a server-side application, use the client's IP address as the application's IP address when talking to another website? The basic idea is that any work the server-side application does is seen as the client itself doing the work, and not the server's static IP.
I am not sure if changing HTTP headers would work, but I could be wrong. Is there any documentation out there on this?
Thanks,
Kyle
Utterly, utterly impossible. You won't even be able to open a TCP connection because the other website's server will try to handshake with the client, and fail.
An IP address isn't just any old ID; it's the actual address that servers will send any response to. Spoofing it basically only makes sense if you can fit your request into a single IP packet (which rules out TCP and thus HTTP) and are not interested in the response. Even then it can fail, because your ISP's routers may have anti-spoofing rules that drop packets with "outside" IP addresses originating from "inside" networks.
Why on earth would a legitimate application want to spoof its IP address?
Changing HTTP headers might cut it, but most likely it won't. Depends on how naive the other server is.
It sounds like you're trying to do something the wrong way, can you give a bit more information as to what exactly the use-case is?
If there's no processing to be done in between, you can do port forwarding on your server's IP firewall, so the client connects to your server but ends up talking to the other server.
If there's more involvement of your server, then the correct thing to do would be to pass the client's IP to the other server as part of the URL (if it's a web app) or elsewhere in the data (if not) so the receiving server can know and correctly log the process without any need for fakery. Of course this would also call for a change in the other app.
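For the HTTP case, the usual convention for "elsewhere in the data" is the X-Forwarded-For header. A sketch (the URL is a placeholder, and, as said above, the receiving app has to be changed to read and trust the header):

import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

// Instead of faking the source IP, pass the original client's address along in
// the conventional X-Forwarded-For header so the other application can log and
// use it.
public class ForwardClientIp {

    public static int callOtherSite(String clientIp) throws IOException {
        HttpURLConnection conn =
                (HttpURLConnection) new URL("http://other-site.example/api").openConnection();
        conn.setRequestProperty("X-Forwarded-For", clientIp);
        try {
            return conn.getResponseCode();   // fires the request
        } finally {
            conn.disconnect();
        }
    }
}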
Again assuming we're talking about HTTP, another idea that came to my mind would be to redirect your client to the other server. As long as all necessary data is in the URI, you could advise the client's browser to connect to the other server with a URI of your own creation that could carry whatever extra value your server's processing adds to the request.
Decades ago, the designers of the internet asked, "how can we prevent Kyle Rozendo from doing such a devious thing?"
If the client is cooperating, you can install some software on the client machine and do the work from there. For example, a signed Java applet on your page. [kidding]If the client is not cooperating, install some trojan virus[/kidding]