We have a spring-boot application that runs perfectly fine by itself on both Java 11 and Java 17.
The spring-boot application is packaged as a docker container and runs inside gcp/gke kubernetes.
We use the nginx ingress to forward the traffic with tls-passthrough.
We use a Let's Encrypt certificate for our application.
The nginx does not have access to it (AFAICT), but considers it valid.
When using Java 11 everything works fine.
However, when using Java 17 the first (few) requests pass fine, but then I get a certificate error. The nginx has/generates a default ingress certificate that it serves for those later requests, but I don't understand why it serves that (sometimes) in the first place.
The error is reproducible with browsers and Java applications.
I did not manage to reproduce it with curl/openssl, though.
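For anyone who wants to observe this from Java, a minimal probe along these lines (the hostname is a placeholder for our ingress host) prints the certificate chain that is actually served; when the untrusted default ingress certificate comes back, startHandshake throws an SSLHandshakeException instead:

import java.security.cert.Certificate;
import java.security.cert.X509Certificate;
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

public class CertProbe {
    public static void main(String[] args) throws Exception {
        String host = "app.example.com"; // placeholder for the ingress hostname
        try (SSLSocket socket = (SSLSocket) SSLSocketFactory.getDefault()
                .createSocket(host, 443)) {
            socket.startHandshake(); // fails if the served certificate is untrusted
            for (Certificate cert : socket.getSession().getPeerCertificates()) {
                System.out.println(((X509Certificate) cert).getSubjectX500Principal());
            }
        }
    }
}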
After a short time/few minutes the error vanishes for the next (few) requests before it emerges again.
When adding the ingress certificate to the browsers' trusted certs, I can see that the requests served with the ingress certificate are upgraded to HTTP/2, while the first few HTTP/1.1 requests all use the correct certificate.
We tried with different java 17 base images (openjdk/eclipse-temurin + alpine/ubuntu).
We tried to explicitly disable HTTP/2 in Java and in the browser.
Nothing seems to work, except adding the self-signed certificate to the trust store (which is obviously a no-go for production).
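For reference, with the JDK's built-in java.net.http client, forcing HTTP/1.1 looks roughly like this (the URL is a placeholder):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class Http1Probe {
    public static void main(String[] args) throws Exception {
        // Force HTTP/1.1 so the client never negotiates h2 via ALPN.
        HttpClient client = HttpClient.newBuilder()
                .version(HttpClient.Version.HTTP_1_1)
                .build();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://app.example.com/")) // placeholder host
                .build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.version() + " " + response.statusCode());
    }
}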
We weren't able to reproduce this locally, but that might be because our local dev setup is only a simplified version of the cloud environment.
If I use kubectl port-forward into the java app container, I cannot reproduce the issue.
We use the following versions:
nginx-ingress-1.41.3
gke v1.21.6-gke.1500
eclipse-temurin 17
spring-boot 2.6.3 with the default tomcat
TLDR: The nginx-ingress sometimes does not tls-passthrough to our Java 17 app correctly and thus serves an invalid certificate for those requests. (All responses contain the expected/same/valid content except for the certificate).
Does anyone have an idea what is happening and how to fix/avoid it?
Related
I have a React + Java API application on an Apache Tomcat 9 server, currently all on the same server. I want to separate the React.JS UI onto one server and the Java API onto a second server. The server OS is Ubuntu 20.
I am facing the challenges below and need your help to complete a POC project:
How to develop, build, and deploy the React.JS/Node.JS app on the first server so that it points to the Java API (Apache Tomcat 9) on the second server.
Please suggest how to resolve these issues, or any articles to read.
So you'll have two servers:
first, with FrontEnd, handled by NodeJS (or even Nginx, why not?)
second, with BackEnd, handled by Tomcat
Your FE should have a configuration with the host of the BE. Such a configuration is usually made with environment variables. A request coming from the user's browser goes to NodeJS, and then, using the environment variable, NodeJS passes the request through to the BE.
Another option: the FE may go to the BE directly, but that means the FE and BE work on different hosts, so you'll have to configure CORS on your BE, as in the sketch below.
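For example, a minimal sketch of such a CORS configuration on a Spring BE (the origin and path pattern are placeholders):

import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.config.annotation.CorsRegistry;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurer;

@Configuration
public class CorsConfig implements WebMvcConfigurer {

    // Allow the FE host (placeholder origin) to call this BE from the browser.
    @Override
    public void addCorsMappings(CorsRegistry registry) {
        registry.addMapping("/api/**")
                .allowedOrigins("https://fe.example.com")
                .allowedMethods("GET", "POST", "PUT", "DELETE");
    }
}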
I'm setting up a new dev environment on a Windows 10 Pro installation. I am packaging my spring-boot applications as .jar files and starting each as a Windows service on its own port.
Spring boot app 1 on port 10001
Spring boot app 2 on port 10002
and so on
I already opened those ports in my firewall and everything seems to be working fine.
When I log into the application on port 10001, everything seems fine as well. However, as soon as I log into another application (10002), I automatically get logged out of the 10001 application.
To sum it up, I am only able to be logged into one application at a time.
I am using a MySQL 8 server installation. All applications have their own database schema. Additionally, I am using spring security for authentication.
Because all those applications run perfectly fine on our production server (Jelastic web hosting), it should be something about my dev environment rather than a code issue.
I'm happy you solved your problem. I don't think that using SSL and subdomains is the simplest solution to your problem, though; especially if you are running automated tests in that environment, SSL might slow you down a bit.
There is a well-known address you can bind your application to: 127.0.0.1. However, most people don't know that your loopback device is actually listening on 127.0.0.1/8, in other words 127.0.0.1 with a netmask of 255.0.0.0, which means you can bind your services to any address in a whole class A subnet.
TLDR: try binding application 1 to 127.0.0.2 and application 2 to 127.0.0.3. That should help with the cookies, and later on, if you add monitoring ports, it will make managing port numbers easier.
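A minimal sketch of that binding, assuming Spring Boot 2.x (the address is per app; the same effect is available via server.address in application.properties):

import java.net.InetAddress;
import java.net.UnknownHostException;
import org.springframework.boot.web.server.WebServerFactoryCustomizer;
import org.springframework.boot.web.servlet.server.ConfigurableServletWebServerFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class LoopbackBinding {

    // Bind app 1 to 127.0.0.2; use 127.0.0.3 in the second app, and so on.
    @Bean
    public WebServerFactoryCustomizer<ConfigurableServletWebServerFactory> bindAddress() {
        return factory -> {
            try {
                factory.setAddress(InetAddress.getByName("127.0.0.2"));
            } catch (UnknownHostException e) {
                throw new IllegalStateException(e);
            }
        };
    }
}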
As already mentioned in my comment above, the problem is not related to any software bug; instead, it's just how HTTP is defined:
"Cookies do not provide isolation by port. If a cookie is readable by a service running on one port, the cookie is also readable by a service running on another port of the same server."
Are HTTP cookies port specific?
I solved my issue by using SSL encryption and different subdomains.
Short Background
We have two servers (Windows Server 2008). ServerA is an IBM WebSphere Application Server and ServerB is an IIS 7 webserver that points to applications on ServerA. It currently works. We want to upgrade ServerB to Server 2012, but cannot do an in place upgrade, so we are installing it on a new server (ServerC) and replacing ServerB with it.
We cannot use Tomcat, and the original setup works properly (Internet <--> ServerB (WebServer) <--> ServerA (Application Server).
My questions are (All of these apply to what happens after we swap out ServerB with ServerC):
1) Is there a way to test if a webserver is correctly configured to serve the websphere apps? I think my biggest barrier is that we cannot use the server machine to browse to any sites (I believe it is a group policy...but again, I'm just a software dev and not as knowledgeable about server configurations and system administration). The applications that we can use on the server are very limited, but I have seen some things about using Snoop (which I do not know how to use, but could find out...but I don't think we are allowed to install it on the machine anyway.)
2) When I navigate to a site hosted on IIS that points to a WebSphere application that redirects me to a Login.jsp page, why does the browser try to download the .jsp file instead of displaying it as a web page? I have not been able to find good google/stackoverflow/serverfault results explaining why a site hosted on IIS pointing to a WebSphere application server prompts to download .jsp files instead of displaying them.
3) When I try to navigate to some sites hosted on IIS that point to a WebSphere application, why would I receive a 403 Access Denied error on the new IIS server but not the old server? The folders that the web apps have access to are located either on the local machine (separate drive letter) or on the WebSphere application server. All of the local folders on the new server have been configured the same way as on the old server, and all of the local users and groups are set up the same.
Setup Information (More Detailed)
In this part, I would like to show our setup: We have two servers (Windows Server 2008). ServerA is an IBM WebSphere Application Server and ServerB is an IIS 7 webserver. This setup was around before anyone that is currently working at my organization (including myself) started. There are 7 sites configured/setup on IIS with virtual paths (that is, the site is named www.site_name.ourorg.domain). We have an IP address configured on the outward facing NIC for each of the sites and each site has a binding to its specific ip address with port 80 and port 443 (with valid certificates) and their own application pools. We do not have access to configure the domain controller (we are given the IPs to use and someone at a different organization manages our domain server). All of the sites are currently in production and in use on a daily basis.
The Goal
Our goal is to stand up a new Windows Server 2012 webserver (and eventually an application server as well). Unfortunately, we cannot do an in-place upgrade, so our System Admin decided that the best route would probably be to set up a new server (ServerC): do a clean install of Windows Server 2012, install IIS 7 using the same features and roles that are on ServerB, install the IBM WebSphere Plugins, and use the same plugin-cfg.xml file. (Later on, when this failed, we reinstalled the WebSphere Plugins as well as the Configuration Tool and created a new configuration with it, per the instructions on the WebSphere site noted below.) Then, once it is installed and everything appears to be configured the same, disable the outward facing NIC on the existing webserver (ServerB), rename it (since we use Active Directory) to a new name (ServerB-o), rename ServerC to ServerB, and enable the NIC on ServerC (now called ServerB) using the same IP and configuration as the old ServerB (ServerB-o).
The Issue
After we do all of this, we can access IIS (default page, which will be disabled after testing), and it looks like the sites pointing to WebSphere are responding to requests, but we are running into two issues:
1) Some of the sites are returning a 403 Access Denied. The application pools are running as ApplicationPoolIdentity and all of the ApplicationPools (IIS APPPOOL\www.site_name.ourorg.domain) are added to the IUSR group. One peculiarity: when we are setting up the sePlugins virtual folder (for example) and choosing "Connect As...", we cannot use .\localadmin or localadmin (both are admin users on the webserver); it tells us that the account name or password is incorrect. The old server is configured like this, though.
2) For any site that does not give the 403 error, instead of displaying the translated .jsp page, the browser prompts to download the .jsp file.
Other Information and Attempts
After trying to change the configuration on IIS and the WebSphere plugin multiple times, using a service account (on our AD) instead of .\localadmin, and a few days of research, I have realized that I do not know enough about how to configure servers, especially in this setup, to be of any more help. When we do the reverse (disable the NIC on the new ServerB, rename it to ServerC, rename ServerB-o back to ServerB, and re-enable the NIC), the sites come back up after somewhere between 15 minutes and 3 hours...
I just remembered that there was a part where I had to compare the ApplicationHost.config files; I found that the ISAPI filters were not properly set on the new server, but I am pretty sure I got everything on the new IIS configured the same as on the old IIS. The only thing that didn't get installed was HipIISEngineStub.dll, which seems to be a McAfee-related dll (host intrusion prevention). It is on the old webserver, but not the new one.
We have tried standing up the new server 3 times, and I have done more research in between each issue and was able to resolve all of them but this one. Each time we try to stand up the new server, we have to take down production for the remainder of the day, so I would prefer to be able to find a way to test it without taking production down.
One More Note
One last note: the most recent thing I was able to do was set up the configuration on ServerC, leave the outward facing NIC disabled, and create a new site using the same physical path and configuration setup, except that it binds all unassigned IP addresses and an unused port (let's say 11111, for example) to one of the apps. I added the sePlugins virtual directory to it, and tested it from another workstation on the same domain by going to https://ServerC:11111. That successfully redirected me to https://www.site_name.ourorg.domain/app_sub/Login.jsp <- which is being served by the old machine. I don't really know what this test means, other than that the new IIS is able to read the configuration file and perform the appropriate redirect steps.
Resources
When installing WebSphere on the new webserver, I followed the steps at IBM's Site.
I have seen countless resources for the other issues I had, such as adding the AppPools to the IUSR group, configuring an app pool to run as a specific identity, how multiple IPs on a NIC bound to sites in IIS work, and other manner of sys admin stuff that I am not familiar with, nor fully grasp.
I would greatly appreciate any assistance with getting a new server set up to properly serve JSP pages using WebSphere. Even if you have a resource for completely uninstalling and reinstalling WebSphere on the new machine. I am hesitant to make any configuration changes on the WebSphere Application Server itself, since we can easily roll back to using the old webserver and the sites come back up. However, I am open to suggestions if that is where the issue is.
Once again, I apologize for posting a question that seemed to have too large a scope. I was able to get in contact with IBM support. The short answer is that while I had many other configuration issues, there were two main items preventing me from successfully serving the websites.
First, I had installed only the application server (Base installation) instead of the Network Deployment installation. This meant that more steps were needed so that the application server could serve multiple applications through the web server. This was resolved by following the steps in this tech note. It involved setting up one plugin_root\bin\AppName and one plugin_root\config\AppName folder (and optionally one in the log folder as well) on the web server for each WebSphere application, as well as modifying the configuration files (plugin_root\bin\plugin-cfg.loc and plugin_root\config\plugin-cfg.xml) for each specific app. Then, I needed to remove the ISAPI filter entry at the server level (in IIS Manager) and add the entry to the ISAPI filters for each site. I also needed to change the permissions on the Handler Mappings in the sePlugins virtual folder to allow Read and Execute (one of the main reasons for the 403 errors).
The second issue was that I needed to add the ports that I was using for the test sites to the virtual hosts list using the Administrative Console, regenerate the plugins, and copy them to the web server (to the appropriate folders).
After getting everything up and running (and taking a snapshot of the server), I uninstalled everything and reinstalled WebSphere using the instructions listed in the resources section of my question, except I installed and configured it using the Network Deployment installation. This meant that I could have the ISAPI filter set at the server level in IIS manager, and have one folder to hold the iisWASPlugin file and associated loc, config, and log files. It turns out that I needed to set the permissions in the Handler Mappings for the sePlugins folder for each app to Read, Script, and Execute (having it just Script and Execute did not work for our setup), as well as making sure the ports were added to the Virtual Hosts list (therefore adding them to the config file).
I hope this helps someone in the future.
I have a Spring Boot java app that uses a self-signed certificate to communicate with the android front-end.
I use a tomcat server as my container for the app:
compile 'org.springframework.boot:spring-boot-starter-tomcat'
Now, I have enabled https / ssl:
TomcatEmbeddedServletContainerFactory tomcat = (TomcatEmbeddedServletContainerFactory) container;
tomcat.addConnectorCustomizers(connector -> {
    connector.setPort(Integer.parseInt(serverPort)); // serverPort is injected from configuration
    connector.setSecure(true);
    connector.setScheme("https");
});
I have to enable SSL because I want my android frontend to communicate with my server securely. I use the technique called certificate pinning, which means I add the same self-signed certificate to both my server and my android app. The app then accepts only TLS connections from a server presenting that exact certificate, so all traffic between the two is encrypted and the server and android app can trust one another.
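As an aside, on the android side a library like OkHttp supports pinning directly; a minimal sketch, where the hostname matches the Heroku app from the log below and the sha256 value is a placeholder for the real hash:

import okhttp3.CertificatePinner;
import okhttp3.OkHttpClient;

public class PinnedClient {
    public static OkHttpClient create() {
        // Placeholder pin: SHA-256 of the server certificate's public key.
        CertificatePinner pinner = new CertificatePinner.Builder()
                .add("app.herokuapp.com", "sha256/AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=")
                .build();
        // The client rejects any server that presents a different certificate.
        return new OkHttpClient.Builder()
                .certificatePinner(pinner)
                .build();
    }
}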
When I load it into Heroku, I get errors each time I try to call the server:
2015-12-11T20:04:32.424629+00:00 heroku[router]: at=error code=H13 desc="Connection closed without response" method=GET path="/getfood?postid=566348364a918a12046ce96f" host=app.herokuapp.com request_id=bf975c13-69f3-45f5-9e04-ca6817b6c410 fwd="197.89.172.181" dyno=web.1 connect=0ms service=4ms status=503 bytes=0
According to this blog by Julie: http://juliekrueger.com/blog/
"As a side note, Heroku apps are https enabled by default. The server I was installing had Tomcat configured to use https, and trying to access an endpoint was returning a code=H13 desc="Connection closed without response" error. After I removed that configuration the error went away."
I can fix the error by just removing the ssl / https from my tomcat server, but as I mentioned, I want to use the certificate pinning technique for secure communications.
I was wondering whether it was possible to disable the SSL on Heroku's side but keep my tomcat server's SSL active, but I already contacted Heroku and they told me that disabling the piggyback SSL that comes standard with their service is not possible.
I also looked at the paid alternative here, called SSL Endpoint, but it seems only useful for custom domains. Since all endpoints are coded within my android app and are not visible to the user, it makes no sense for me to use a custom domain. Furthermore, I don't think it will solve my problem, as its sole objective seems to be to enable the custom domain:
SSL Endpoint is only useful for custom domains. All default appname.herokuapp.com domains are already SSL-enabled and can be accessed by using https, for example, https://appname.herokuapp.com.
I have googled for a few days now and cannot seem to come up with a solution. Disabling SSL on my tomcat side would not be acceptable in my mind, as it poses too many risks. I would even consider other services (Azure etc.) if this would solve my problem.
Any ideas on how I can solve this?
With Heroku, in order to use your own custom SSL certificate, you need to use a custom domain and the SSL Endpoint addon. It probably won't make sense for your case, but it is the only way to use your own certificate.
I haven't tried all the providers out there, but with the ones I have tried, the scenario is exactly the same: it is possible to use a custom SSL cert only if you are using a custom domain.
Also, browsing google a bit, I found this blog post which illustrates how to use an intermediate DNS service to communicate with Heroku. In the communication between the DNS service and Heroku, the provided Heroku SSL cert is used, but from the client to the DNS service a different certificate is used, so it might be of some help.
Update: A possible solution would be to use Amazon Web Services, where you rent VMs and are allowed to set up your own environment, meaning that you can install your own tomcat and use your own custom SSL certificate.
Update 2: There is also CloudFront on AWS, where you can use your own certificates, as explained here.
I am currently implementing a single signon solution for a customer that is based on Java, Tomcat and Kerberos.
Users are to access the URL of an intranet Tomcat application from their client browsers, the Tomcat application acquires the users' credentials via Kerberos and redirects them to the actual web application.
Our customer's environment is a typical mixture of a Windows AD server acting as the KDC and Linux Tomcat application servers. The SSO functionality is supposed to be used from both Windows and Linux clients. This is what appears to be different from most answers I can find on the net where people have Linux web application servers but only use Windows clients.
Now, in my local setup I get some strange behaviour. My development environment is a Tomcat 7.0.26 running from MyEclipse 8.6 under Windows 7. My test environment is a Tomcat 7.0.26 or 7.0.53 behind an Apache web server on a Centos 6 machine. I have set up the AD server correctly, generated the necessary keytab files etc. and everything is running smoothly in the development environment. I can access the Tomcat application from both Linux and Windows clients using IE and Firefox, Kerberos authentication proceeds and I get redirected properly.
When deploying the Tomcat application on the test server this keeps working when trying to sign on from Windows clients. However, when I try to access the test server from a Linux client (I have tried from Linux Mint 13 and Ubuntu 13.10), I get the following error:
javax.servlet.ServletException: GSSException: No credential found for: 1.3.6.1.5.2.5 usage: Accept
net.sourceforge.spnego.SpnegoHttpFilter.doFilter(SpnegoHttpFilter.java:233)
I have to admit that I do not properly understand this message. Does it point to a problem with the credentials supplied by the client or a problem with the application server negotiating with the KDC? I have done some research on this problem and have found out that the indicated oid 1.3.6.1.5.2.5 stands for GSS_IAKERB_MECHANISM and not GSS_KRB5_MECHANISM or GSS_SPNEGO_MECHANISM which I find strange. Also, nobody else appears to have exactly the same problem.
I have tried switching from MIT Kerberos to Heimdal Kerberos and back. I have tried Firefox and Chromium, on the application server I have switched between Tomcat 7.0.26 and 7.0.53, the problem still persists. I am using the latest spnego.jar.
Now: Calls from Linux to the Tomcat running on the Windows development machine succeed and calls from Linux clients to the Linux application server fail with the same error message for both browsers tried.
Any ideas on this one?
GSS_IAKERB_MECHANISM means that the client is not able to determine the realm/KDC needed to create a service ticket, and asks the server to serve as an intermediate to the target KDC. Check the traffic with Wireshark. Your task now is to analyze why the client is not able to create a service ticket for that SPN. I have observed this issue with Heimdal on FreeBSD against a Microsoft KDC.
So the problem is not your Tomcat instance.
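If it helps to test this from the Linux client itself, a small JGSS probe (run after kinit, typically with -Djavax.security.auth.useSubjectCredsOnly=false; the SPN hostname is a placeholder) shows whether a plain Kerberos 5 context can even be initiated for the SPN:

import org.ietf.jgss.GSSContext;
import org.ietf.jgss.GSSManager;
import org.ietf.jgss.GSSName;
import org.ietf.jgss.Oid;

public class SpnProbe {
    public static void main(String[] args) throws Exception {
        GSSManager manager = GSSManager.getInstance();
        // Service principal of the Tomcat host (placeholder hostname).
        GSSName serverName = manager.createName(
                "HTTP@tomcat.example.com", GSSName.NT_HOSTBASED_SERVICE);
        // 1.2.840.113554.1.2.2 is GSS_KRB5_MECHANISM.
        GSSContext context = manager.createContext(
                serverName, new Oid("1.2.840.113554.1.2.2"),
                null, GSSContext.DEFAULT_LIFETIME);
        // Fails here if no service ticket can be created for the SPN.
        byte[] token = context.initSecContext(new byte[0], 0, 0);
        System.out.println("Initial token: " + token.length + " bytes");
        context.dispose();
    }
}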