How does a remote HTTP front-end server communicate with WebSphere?
I have read that the WAS plugin installed on the HTTP front-end server will route requests to WebSphere based on the plugin-cfg.xml settings.
Will the routing use the HTTP protocol or some other binary protocol?
What are the implications for firewall settings in this case? What ports should be kept open on the application server machine?
Will the routing use the HTTP protocol or some other binary protocol?
The plugin uses the HTTP/HTTPS protocol to communicate with WebSphere. HTTPS is used if the request comes in via HTTPS and the plugin is configured to communicate with WebSphere over HTTPS (i.e. the WebSphere root certificate has been added to the plugin's trusted signers).
What are the implications for firewall settings in this case?
What ports should be kept open on the application server machine?
After you generate plugin-cfg.xml, you will see a fragment like the following for each server that the plugin needs to communicate with:
<Server CloneID="s111111" LoadBalanceWeight="1" ConnectTimeout="0" ExtendedHandshake="false" MaxConnections="-1" Name="custTestNode_server1" ServerIOTimeout="0" WaitForContinue="false">
  <Transport Hostname="server1" Port="9080" Protocol="http"/>
  <Transport Hostname="server1" Port="9443" Protocol="https">
    <Property Name="keyring" Value="/config/webserver1/plugin-key.kdb"/>
    <Property Name="stashfile" Value="/config/webserver1/plugin-key.sth"/>
  </Transport>
</Server>
The ports listed there, in this case 9080 and 9443, are the ones used to communicate with that server, and those are what need to be opened in the firewall.
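For example, on a Linux application server using firewalld, opening those two ports could look like the following (just a sketch; use whatever ports your generated plugin-cfg.xml actually lists):
# open the WebSphere HTTP and HTTPS transport ports from plugin-cfg.xml
firewall-cmd --permanent --add-port=9080/tcp
firewall-cmd --permanent --add-port=9443/tcp
firewall-cmd --reload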
I have a C# MVC application (.NET Framework 4.6.2) with a WCF (SOAP-based) web service located at /webservice inside the application. The WCF web service is for a 3rd-party vendor to call and push their data to.
We have the application in a test environment on a Windows Server 2016 server with ports 80 and 443 open, and our certs are valid and not self-signed. When we test the service using SoapUI, we can reach the WCF web service and post the test data to the server, but when our vendor posts the data from their Java application they get "Connection Reset". We've removed all authentication and are just trying to get them to reach the WCF service, but our IIS logs and application logs don't even show them hitting our server. SoapUI (both inside and outside our network/firewall) is able to hit the service correctly.
Our web.config looks like this:
<system.serviceModel>
  <diagnostics>
    <messageLogging logEntireMessage="true" logMalformedMessages="true" logMessagesAtServiceLevel="true" logMessagesAtTransportLevel="true" maxMessagesToLog="3000"/>
  </diagnostics>
  <bindings>
    <basicHttpBinding>
      <binding name="basicBinding" textEncoding="utf-8" openTimeout="00:03:00" closeTimeout="00:03:00"/>
    </basicHttpBinding>
  </bindings>
  <services>
    <service behaviorConfiguration="serviceBehavior" name="WebServiceUniqueName">
      <endpoint address="/endpoint/soap" binding="basicHttpBinding" bindingConfiguration="basicBinding" name="soapEndpoint" bindingNamespace="https://test.site.com/webservice" contract="Our.Namespace.ISoapContract"/>
      <endpoint address="mex" binding="mexHttpBinding" name="mexEndpoint" contract="IMetadataExchange"/>
      <host>
        <baseAddresses>
          <add baseAddress="/webservice/servicename"/>
        </baseAddresses>
      </host>
    </service>
  </services>
  <behaviors>
    <serviceBehaviors>
      <behavior name="serviceBehavior">
        <serviceMetadata externalMetadataLocation="https://test.site.com/webservice/content.xml" httpGetEnabled="true" />
        <serviceDebug httpHelpPageEnabled="false" includeExceptionDetailInFaults="true" />
      </behavior>
    </serviceBehaviors>
  </behaviors>
  <serviceHostingEnvironment aspNetCompatibilityEnabled="true" multipleSiteBindingsEnabled="true"/>
</system.serviceModel>
and the code for our WCF looks like this:
[AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
[ServiceBehavior(Namespace = "https://test.site.com/webservice")]
public class MyService : ISoapContract
{
    public DataResponse SubmitData(DataRequest input)
    {
        // Code here
    }
}

namespace Our.Namespace
{
    [ServiceContract(Namespace = "https://test.site.com/webservice")]
    [XmlSerializerFormat]
    public interface ISoapContract
    {
        [OperationContract(Name = "SubmitData")]
        [XmlSerializerFormat]
        DataResponse SubmitData(DataRequest input);
    }
}
Our server works with TLS 1.2 and falls back to 1.1 (exactly what the vendor is expecting). Our firewall isn't showing anything being blocked, and the "Connection Reset" message appears within the first few seconds of their request. The 3rd party is able to access the WSDL from their browsers, so all of this leads me to believe something is failing during the handshake. SoapUI comes through, and that runs on Java, so we are really stumped at this point. Does Java calling a C# WCF application require something extra? Is there a way to capture a handshake attempt?
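(One thing that might work for capturing the handshake attempt, since Windows Server has a built-in packet capture via netsh, is roughly the following; the trace file path is just an example:)
rem Sketch only: capture traffic on the server while the vendor retries their call
netsh trace start capture=yes tracefile=C:\temp\handshake.etl
rem ... have the vendor retry, then:
netsh trace stop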
Update after more testing:
We took Sambit's advice and used the Microsoft web service client, and that worked without any problems. We created another test WCF service and also created an app that calls our server, put both in Azure, and had no problems: we could reach our web service, but the vendor still can't reach the server. We added more logging and looked at the firewall; the traffic from the vendor was getting through the firewall and to the server, but was being reported as "TCP reset from server".
The 3rd-party vendor's application is hosted in a shared environment; they are able to run commands on their server but can't change any code to log extra information. They were able to ping our server and run the following command:
nc -zv (server_url) 443
And that connected successfully but when they attempted to get the cert from the server, that failed:
openssl s_client -tls1_2 -showcerts -connect (server_url):443
CONNECTED(00000003)
write:errno=104
---
no peer certificate available
---
No client certificate CA names sent
After help from a lot of really smart people on both sides, the problem ended up being Server Name Indication (SNI):
https://en.wikipedia.org/wiki/Server_Name_Indication
The vendor's application is running an old version of Java that doesn't understand/support SNI and they aren't able to upgrade at this time.
Our server admins dedicated an IP on our Windows Server for the domain being called by the vendor and disabled SNI for that particular domain. We are now able to receive the vendor's web service calls without any problems.
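For anyone debugging something similar: the earlier openssl failure is consistent with no server name being sent, which is effectively what the vendor's old Java runtime was doing. With a reasonably recent openssl you can check how the server behaves when SNI is sent by adding -servername explicitly ((server_url) is the same placeholder as above):
openssl s_client -tls1_2 -showcerts -connect (server_url):443 -servername (server_url)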
I have a JHipster monolithic application (Angular + Java Spring Boot + Tomcat container, everything together) deployed successfully on an EC2 instance. I was able to set the security groups to allow incoming requests on 8443 to the public DNS, and I can access it from any browser.
After that, I requested a public certificate from Amazon for a domain I had already acquired with Route53.
So the idea was to use 443 instead of 8443, and the real domain (instead of the public DNS provided by AWS). To that end I created an ELB (all in the same VPC, security group and hosted zone). This ELB listens on 443 and has a redirect to 8443 as its default action.
But ERR_CONNECTION_REFUSED is what the browser shows.
It is important to mention that, since AWS does not let us download the certificate (at least I don't see any option for that in the console), I installed a custom certificate (generated with keytool) in the JDK of the EC2 instance where the app runs, in order to have Tomcat use it on the already mentioned port 8443.
I also tried running on 8080 instead of 8443 (and of course updating the security groups), but no change.
Could you give me a clue about what I'm missing? So far the only way I see is to create a new EC2 instance with NGINX acting as a reverse proxy (maybe with a rewrite policy) behind the ELB, but I'd prefer to avoid additional complexity unless absolutely needed.
Additional data:
Tomcat server configuration:
server:
  port: 8443
server.ssl.key-store: keystore.p12
server.ssl.key-store-password: thePassword
server.ssl.keyStoreType: PKCS12
server.ssl.keyAlias: theKeyAlias
Security group inbound rules:
Custom TCP 8443 from 172.31.0.0/16 (the same range as the ELB)
HTTPS TCP 443 with 0.0.0.0/0 and ::/0
Also the AWS Certificate is enabled and already issued (CNAME record set was created in Route53)
**UPDATE 1 - 04 February 2019 22:21 (GMT-3)**
Guys, I finally decided to put NGINX behind the ELB. I also realized that the communication between NGINX and the app server can be plain HTTP, so my app is going to listen on port 8080, simplifying the scheme a bit. I realized as well that I only need one certificate in order to get the "browser padlock" and encrypt all traffic between clients and the ELB, so it doesn't matter that the certificate can't be downloaded (it doesn't need to be installed on NGINX or on the app server).
At the NGINX level you should add a listener on port 443 which proxies requests to port 8443. This will make sure that all incoming requests on port 443 of the domain are passed to the application running on port 8443 of the server:
listen 443;
location / {
    proxy_pass http://127.0.0.1:8443;
}
Finally, issue RESOLVED. I got NGINX working fine, and I also had to change a few other things:
I switched from an Application Load Balancer to a Classic Load Balancer. The final scheme is as I explained in the UPDATE of this topic, i.e.:
The user connects via HTTP or HTTPS through the Classic LB, which then goes to NGINX on the EC2 instance listening on port 80.
Then from NGINX to the web app I used proxy_pass in this way:
location / {
proxy_pass http://172.x.y.z:8080;
}
And finally, a redirect in NGINX so that HTTPS is used exclusively:
proxy_set_header X-Forwarded-Proto $scheme;
if ( $http_x_forwarded_proto != 'https' ) {
    return 301 https://$host$request_uri;
}
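Putting the pieces together, the whole NGINX server block ends up looking roughly like this (a sketch only: the upstream address reuses the placeholder above, and the ELB is assumed to terminate TLS and set X-Forwarded-Proto):
server {
    listen 80;

    # The ELB terminates TLS; send anything that arrived as plain HTTP back as HTTPS
    if ( $http_x_forwarded_proto != 'https' ) {
        return 301 https://$host$request_uri;
    }

    location / {
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://172.x.y.z:8080;
    }
}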
Lijo Abraham, your answer helped me get a clear direction, and this post shows exactly the solution applied (that's why I will green-tick this post).
Many thanks and regards.
**UPDATE 1 - 10 February 2019 17:21 (GMT-3)** Finally I redid everything, this time using an Application ELB instead of the Classic ELB (the latter being deprecated), and everything works as expected. I don't know why the Classic ELB didn't work in the beginning (probably some error in the security group rules configuration or something like that).
I am deploying Angular on NGINX and Apache HTTP Server (as reverse-proxy web servers) in my UAT environment, with the backend being Spring Boot on Apache Tomcat (the Java REST APIs are served over HTTPS). I have noticed that NGINX was much easier to configure as a reverse proxy than Apache, but that was largely because Apache didn't trust the Java APIs' certificate (it is self-signed, so this seems correct).
Can someone explain why this happened? I trust that NGINX is secure, but I want to know why it accepted this self-signed certificate while Apache blocked it by default (and only allowed it with SSLProxyVerify none)?
Nginx config (related part):
location /api {
    proxy_pass https://192.168.170.78:7002/;
}
Apache config (the related part):
# SSL proxy config
SSLProxyEngine on
# Why must this be present for Apache to connect to the backend, but not for nginx?
SSLProxyCheckPeerName off
# the (proxy) redirection rules for the server
ProxyPass /api/ https://192.168.170.78:7002/
ProxyPassReverse /api/ https://192.168.170.78:7002/
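For what it's worth, NGINX's proxy_ssl_verify directive is off by default, so the location above connects without checking the backend certificate at all, whereas the Apache proxy performs peer checks such as the peer-name check that had to be disabled with SSLProxyCheckPeerName off. If I wanted NGINX to verify the backend the way Apache does, a sketch would look like this (the CA bundle path is hypothetical):
location /api {
    proxy_pass https://192.168.170.78:7002/;
    # Verify the backend's certificate instead of accepting it blindly
    proxy_ssl_verify on;
    proxy_ssl_trusted_certificate /etc/nginx/certs/backend-ca.pem;
    proxy_ssl_verify_depth 2;
    # If the certificate's name doesn't match the IP, proxy_ssl_name / proxy_ssl_server_name may also be needed
}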
We are using WebSockets in one of our projects. Our setup has an Apache web server and a separate server with a Tomcat instance. Apache is behind TLS, but the Tomcat instance is not.
We are trying to tunnel the WebSockets through Apache (wss) to the Tomcat instance (ws).
Is this possible? The initial handshake is successful and we get a 101 response status, but after that, when we try to send data through the WebSocket, it does not reach the Tomcat instance.
Any help would be greatly appreciated.
Below is the section of configuration used for WebSockets from the httpd.conf file.
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_wstunnel_module modules/mod_proxy_wstunnel.so
LoadModule proxy_connect_module modules/mod_proxy_connect.so
LoadModule ssl_module modules/mod_ssl.so
ProxyPass wss://apache/ws/connect ws://tomcatinstance/wsapp/connect
ProxyPassReverse wss://apache/ws/connect ws://tomcatinstance/wsapp/connect
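For comparison, ProxyPass normally takes a local URL path as its first argument rather than a full wss:// URL; a more conventional mod_proxy_wstunnel mapping for the same paths would be something like this (just a sketch, untested here — TLS terminates at Apache, the tunnel to Tomcat stays plain ws):
ProxyPass        /ws/connect ws://tomcatinstance/wsapp/connect
ProxyPassReverse /ws/connect ws://tomcatinstance/wsapp/connect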
I am using HAProxy for port forwarding to Bitbucket Server's SSH. Here's my haproxy config:
frontend sshd
    bind *:7999
    default_backend ssh
    timeout client 1h

backend ssh
    mode tcp
    server localhost-bitbucket-ssh 127.0.0.1:7999 check port 7999
However, if I run:
sudo haproxy -f haproxy.cfg
I get the following error:
[ALERT] 305/201411 (4168) : http frontend 'sshd' (haproxy.cfg:38) tries to use incompatible tcp backend 'ssh' (haproxy.cfg:43) as its default backend (see 'mode').
[ALERT] 305/201411 (4168) : Fatal errors found in configuration.
But I was following the official Atlassian guide: https://confluence.atlassian.com/bitbucketserver/setting-up-ssh-port-forwarding-776640364.html. Are they wrong?
Also, if I start HAProxy before Bitbucket Server, Bitbucket Server cannot start on port 7999. I am totally confused. I have paid for this software, and I've now been trying to figure out how to configure it on my own for more than two days...
UPDATE
It was UFW, as Thomj mentioned. But then what do I need HAProxy for, if I can't bind Bitbucket's SSH to port 22? I don't like having to set a port number.
The frontend configuration defaults to mode http, which can't use a backend configured for tcp. Try adding 'mode tcp' to the frontend:
frontend sshd
    bind *:7999
    default_backend ssh
    timeout client 1h
    mode tcp
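As a side note on the port clash mentioned in the question: HAProxy and Bitbucket cannot both bind 7999 on the same interface, so the usual layout is to give HAProxy its own public port (or port 22, with the system sshd moved elsewhere) and keep Bitbucket's SSH on 7999. A sketch, where 7922 is just an example port:
frontend sshd
    bind *:7922
    mode tcp
    timeout client 1h
    default_backend ssh

backend ssh
    mode tcp
    timeout server 1h
    server localhost-bitbucket-ssh 127.0.0.1:7999 check port 7999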