I have a netty-socketio server running on ports 8191 and 8190. Nginx is configured to listen on port 443 for the netty-socketio clients (both services are exposed on 443) and to proxy requests to the netty-socketio server. The Netty version being used is 4.1.49.Final.
Since two services cannot listen on the same port on one machine, we have configured Nginx to listen on port 443 and forward requests to 8191 or 8190 depending on the call being made.
If I connect directly to the socket server, everything works as expected; however, when connecting via Nginx, connections drop and never recover, even though both client and server are still running.
Attaching the configurations of the netty server, client and Nginx.
Netty client
Netty server
Nginx conf
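Since the attached configs are not reproduced here, this is only a minimal sketch of the kind of Nginx setup described, assuming path-based routing to the two backends (the server name, certificate paths, and location prefixes are placeholders). The proxy_http_version and Upgrade/Connection headers are what typically keep socket.io/WebSocket connections from dropping behind Nginx:

server {
    listen 443 ssl;
    server_name example.com;                  # placeholder
    ssl_certificate     /etc/nginx/cert.pem;  # placeholder
    ssl_certificate_key /etc/nginx/key.pem;   # placeholder

    location /serviceA/ {
        proxy_pass http://127.0.0.1:8191;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_read_timeout 3600s;   # keep long-lived socket connections open
    }

    location /serviceB/ {
        proxy_pass http://127.0.0.1:8190;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_read_timeout 3600s;
    }
}

If the drops only happen behind Nginx, a missing Upgrade/Connection header or a short proxy_read_timeout is the usual culprit.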
Related
On one system I have a Docker container, and inside it a server socket is ready to accept client connections. How can I connect to that server socket from another system?
The container IP and port are 171.18.1.4:9090.
The server socket port is 3333.
How can I connect the client socket to the server socket?
Note: I am using Java for this program.
You should be able to reach it by sending HTTP requests, for example via curl:
curl -X GET http://your.ip.address.sth:9090
Bear in mind that the address here is the IP address of your machine, not of the Docker container. Docker operates on a private network and publishes a port onto your computer's network. So just google "what is my ip" and Google will tell you your machine's public IP.
I am assuming 9090 is the port your Docker container has published.
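If you want to connect from Java rather than curl, here is a minimal sketch under the same assumption, i.e. that the container's port 3333 was published to the host (for example with docker run -p 9090:3333). The host IP below is a placeholder; use the Docker host's address, not the container's 171.18.1.4:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

public class DockerSocketClient {
    public static void main(String[] args) throws IOException {
        String dockerHostIp = "203.0.113.10"; // placeholder: the HOST machine's IP, not the container IP
        int publishedPort = 9090;             // host port mapped to the container's 3333

        try (Socket socket = new Socket(dockerHostIp, publishedPort);
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
             BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()))) {
            out.println("hello");                                   // whatever your protocol expects
            System.out.println("server replied: " + in.readLine());
        }
    }
}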
I have just put my socket server on Amazon EC2 and the server is up and running. The server socket is created with ss = new ServerSocket(30001);. What do I set the socket port inside my client class to? It currently points at localhost: socket = new Socket("localhost", 30001);. The Amazon EC2 address is
ec2-user@ec2-34-253-76-28.eu-west-1.compute.amazonaws.com
Do I just replace localhost with this?
If the client is remote (not on the same host as the server) then, yes, use the host's DNS name or public IP address.
You will also have to allow inbound connections to the EC2 instance hosting your server application. Ensure that port 30001 is open for ingress from your client's public IP address (or from anywhere, by using 0.0.0.0/0 as the source CIDR). You do this in AWS via Security Groups.
If you expose your server to the world, then you should implement (at least) some form of authentication for your clients.
A few things to check, if your client cannot connect:
Is your server socket bound to 0.0.0.0 (or the public IP associated with the EC2 instance)?
Is your server app running?
Does netstat show your server app listening on port 30001?
Did you add a security group to the EC2 instance with an ingress rule allowing inbound traffic from your client IP (or the world) to port 30001?
Is the client running on a network (e.g. corporate) that blocks outbound port 30001?
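Putting that together, a minimal client sketch (the DNS name is the one from the question; drop the ec2-user@ SSH prefix and keep only the hostname):

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

public class Ec2Client {
    public static void main(String[] args) throws IOException {
        // The public DNS name of the EC2 instance replaces "localhost"; the port must match
        // both new ServerSocket(30001) on the server and the Security Group ingress rule.
        String host = "ec2-34-253-76-28.eu-west-1.compute.amazonaws.com";
        try (Socket socket = new Socket(host, 30001);
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
             BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()))) {
            out.println("ping");                                    // whatever your protocol expects
            System.out.println("server replied: " + in.readLine());
        }
    }
}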
We are facing the problem that our Java EJB3 client is behind a firewall that allows only outgoing traffic to port 80. The client communicates with a Glassfish server for EJB calls and JMS messages. Therefore we have to somehow direct all traffic (IIOP & JMS) through this one single port. Does anybody know how to do this?
We are using Glassfish 4.1 as a server. I have heard of JProxy but that seems to be inactive by now.
We could theoretically use SSH port forwarding but that would bypass the Glassfish authentication.
From the server's side (even though your problem is on the client side), you can change the IIOP port either through the admin console or by editing your domain.xml:
<iiop-service>
  <orb use-thread-pool-ids="thread-pool-1"></orb>
  <iiop-listener address="0.0.0.0" port="3700" lazy-init="true" id="orb-listener-1"></iiop-listener>
  <iiop-listener address="0.0.0.0" port="3820" id="SSL" security-enabled="true">
    <ssl classname="com.sun.enterprise.security.ssl.GlassfishSSLImpl" cert-nickname="s1as"></ssl>
  </iiop-listener>
</iiop-service>
The thing is that you need to pass the IIOP traffic through port 80 and then hit the actual remote server. I think you need to check your options for creating an SSL tunnel; see here.
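On the client side, the ORB endpoint is just a host/port pair, so once something listening on port 80 forwards to the ORB listener, the EJB client only needs to point at it. A minimal sketch, assuming the GlassFish client libraries (gf-client) are on the classpath; the gateway host and JNDI name below are hypothetical:

import java.util.Properties;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;

public class Ejb3ClientLookup {
    public static void main(String[] args) throws NamingException {
        Properties props = new Properties();
        // Hypothetical endpoint: the public host whose port 80 is forwarded to the
        // GlassFish ORB listener (default 3700, or whatever domain.xml configures).
        props.setProperty("org.omg.CORBA.ORBInitialHost", "gateway.example.com");
        props.setProperty("org.omg.CORBA.ORBInitialPort", "80");

        Context ctx = new InitialContext(props);
        // Hypothetical portable JNDI name; use the one GlassFish logs at deployment.
        Object bean = ctx.lookup("java:global/myapp/MyBeanRemote");
        System.out.println("Looked up: " + bean);
    }
}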
I'm trying to set up a Solaris KSSL proxy (http://www.c0t0d0s0.org/archives/5575-Less-known-Solaris-Features-kssl.html) as a frontend to a Jetty web server.
I'm able to make KSSL work with the Apache web server, so that KSSL redirects all incoming SSL traffic from port 443 to an Apache web server listening on port 28080.
However, the same configuration does not work when Jetty is listening on port 28080. I verified that the KSSL requests do not even reach Jetty, or at least I cannot see them in the access log. Furthermore, even if I set up a simple Java class that just listens on a server socket, KSSL cannot redirect requests to it.
My question is: what are the prerequisites for a web server to be able to receive requests from KSSL?
Best regards,
Lior
There are 2 very common gotchas when working with kssl.
The first is that the Apache listening IP has to be the same as in your ksslcfg command. So if you have Listen 123.123.123.123:28080 in the httpd.conf file, then you must use a ksslcfg command with the same IP. You cannot have it listening on ANY (*) and then list an IP in ksslcfg, or listen on an IP and leave out the IP in ksslcfg. Whatever IP netstat shows listening on port 28080 must match the IP used in ksslcfg (or, if it is listening on *, don't use an IP in ksslcfg).
The second is that you must do the operations in this order:
ksslcfg
restart apache
It does not work if ksslcfg is run without restarting Apache afterward.
I've seen many people on the web testing with something like localhost in their ksslcfg command. It won't work unless you also have localhost as the Listen IP in the Apache configuration.
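Applied to the original Jetty case, the same rule means the Jetty connector has to bind to the exact address given to ksslcfg rather than to 0.0.0.0. A minimal embedded-Jetty sketch, assuming a Jetty 9.x-style API and reusing the placeholder IP from above:

import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.ServerConnector;
import org.eclipse.jetty.server.handler.DefaultHandler;

public class KsslBackend {
    public static void main(String[] args) throws Exception {
        Server server = new Server();
        ServerConnector connector = new ServerConnector(server);
        connector.setHost("123.123.123.123"); // must match the IP used in the ksslcfg command
        connector.setPort(28080);             // the cleartext port kssl forwards decrypted traffic to
        server.addConnector(connector);
        server.setHandler(new DefaultHandler()); // replace with your real webapp/handler
        server.start();
        server.join();
    }
}

With this binding, netstat shows the same IP:port pair that ksslcfg points at, which is the prerequisite the Apache case satisfied.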
We have a Tomcat application server set up on port 8080 and an Apache HTTP Server on port 80.
httpd redirects all traffic on port 80 to port 8080.
I am looking at the tomcat server console for our site and I see several requests on port 8009. These requests stay alive for as long as 100 to 150 seconds.
We aren't making any requests to that port. Where then are these requests coming from? Why don't they finish?
8009 is the port commonly used by AJP.
The Apache JServ Protocol (AJP) is a binary protocol that can proxy inbound requests from a web server through to an application server that sits behind the web server.
Here's more info on AJP and its usage/configuration within Tomcat.
The AJP Connector element represents a Connector component that communicates with a web connector via the AJP protocol. This is used for cases where you wish to invisibly integrate Tomcat 5 into an existing (or new) Apache installation, and you want Apache to handle the static content contained in the web application, and/or utilize Apache's SSL processing.
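In practice, that connector is the one declared in Tomcat's conf/server.xml; httpd (via mod_jk or mod_proxy_ajp) opens persistent connections to it, which is why they stay alive long after an individual request finishes. A sketch of what it typically looks like (the connectionTimeout value is an illustrative addition; by default the AJP connector keeps idle connections open indefinitely):

<!-- conf/server.xml -->
<Connector port="8009" protocol="AJP/1.3" redirectPort="8443"
           connectionTimeout="600000" /> <!-- drop idle AJP connections after 10 minutes -->

If your httpd actually forwards to port 8080 over plain HTTP and nothing should be using AJP, the connector can be commented out entirely.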