I am new to Consul. I have multiple client instances running and a Java application.
To connect to a client I have given the IP of one of the clients, but I believe this is wrong. Should I give the IP of a load balancer that connects to the clients, or something else?
Consul operates as a distributed system, and is designed for client agents to serve as the primary access point for applications which need to interface with Consul's DNS or HTTP APIs.
Consul clients should be deployed on every node/server in your environment. Applications which are running on a given server should send HTTP/DNS queries to the Consul agent which is running on the same server. This is typically achieved by configuring the Consul agent to listen on localhost (via the -client/client_addr option), and configuring your applications to connect to Consul over the same address.
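For example, a minimal sketch of an agent invocation (the bind address, join address, and data directory are placeholders for your own values):

consul agent -client=127.0.0.1 -bind=10.0.0.5 -retry-join=10.0.0.2 -data-dir=/opt/consul

With -client=127.0.0.1, applications on that node reach Consul's HTTP API at http://127.0.0.1:8500 and its DNS interface at 127.0.0.1:8600 (the default ports), so they never need to know the address of any other Consul node.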
Related
Currently, I have two GCE auto-scaled groups of servers, bootstrapped by Chef: the 1st runs Redis servers (db), the 2nd runs Java servers (app).
Any app server can talk to any db server. Every db server needs to be served by an app server, and there should never be a situation where one app server holds connections to two separate db servers.
So, I need to figure out whether I can pair a newly created app server with a newly created db server (all in the same network) using Consul.
All in all, I need to automatically pair newly up-scaled servers by adding the appropriate db server IP or hostname to the command that starts Java on the app server.
I'm very new to service discovery and related tooling, so any help is greatly appreciated.
Answer after edit:
So if I understand you correctly, any new app server can talk to any new redis server, but once it has picked a db server to talk to, it should stick with that server.
I can see a few ways to achieve that with Consul:
1. Map each app server 1-to-1 to a redis server and expose each one as a different service in Consul, with domain names like app1.service.consul and redis1.service.consul. The drawback here is that you cannot scale your app servers independently from your redis servers.
2. Use Redis Sentinel, let it abstract the sharding of the data for you, and just expose it under one domain name in Consul: redis.service.consul.
I would suggest looking into the second option, since it allows you to scale your app and db servers independently.
Old answer:
It sounds like you have two services in your network: an app service and a db service. You would then typically make Consul serve as the DNS server for both of them.
This can be achieved by creating a service file for each of them.
On the server where your app is running you would create a service file in /usr/local/etc/consul.d/my_app.json:
{
  "service": {
    "name": "my_app",
    "port": 1234
  }
}
Here you replace 1234 with the port your app is listening on.
You then need to reload Consul with consul reload. You can check that the changes were applied correctly by running consul monitor.
Your app should now be reachable from my_app.service.consul on your internal network.
You can check this by issuing a DNS query with dig my_app.service.consul. This should return the IP address of the app server in the ANSWER section of the DNS response.
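For example (the returned address is illustrative; if Consul is not yet your system's DNS resolver, point dig at the agent's DNS port instead: dig @127.0.0.1 -p 8600 my_app.service.consul):

$ dig my_app.service.consul

;; ANSWER SECTION:
my_app.service.consul. 0 IN A 10.0.0.5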
You then have to repeat these steps on your database server where you need to create another service file for the database with the appropriate port and service name.
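A sketch of what that service file could look like for Redis, e.g. in /usr/local/etc/consul.d/my_db.json (the service name is a placeholder; 6379 is Redis's default port):

{
  "service": {
    "name": "my_db",
    "port": 6379
  }
}

After another consul reload on that server, the database becomes reachable at my_db.service.consul.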
I am busy with a project where I'm creating a basic client/server chat application. It allows a user to create a server on their local network on a port of their choice, and then have multiple clients connect to that server by specifying the IP and port number of the server (so far so good).
I would like to know how clients can see all the possible servers they can connect to on their local network when multiple servers are running over different ports, and then allow them to connect to one. I am using basic Java socket programming for this project.
You could either:
1. use a UDP-based protocol where each server publishes its IP:port every second (see the sketch below), or
2. have a service where each server registers. You could chat with that service to get the list of all servers.
The nice thing about the latter option is that you can use one of your chat servers for server discovery. When you want the list, you send a message to a channel on that server which all the servers are listening to, and they respond with a chat message.
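A minimal sketch of the first option in plain Java sockets (the discovery port 4446 and the chat port 9000 are assumptions; any free ports would do):

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

public class ServerAnnouncer {
    public static void main(String[] args) throws Exception {
        int chatPort = 9000; // assumed port this chat server accepts clients on
        try (DatagramSocket socket = new DatagramSocket()) {
            socket.setBroadcast(true);
            byte[] payload = ("CHAT_SERVER " + InetAddress.getLocalHost().getHostAddress()
                    + ":" + chatPort).getBytes(StandardCharsets.UTF_8);
            DatagramPacket packet = new DatagramPacket(payload, payload.length,
                    InetAddress.getByName("255.255.255.255"), 4446); // LAN broadcast
            while (true) {
                socket.send(packet); // publish IP:port every second
                Thread.sleep(1000);
            }
        }
    }
}

// Client side: listen on the discovery port and collect announcements.
class ServerListener {
    public static void main(String[] args) throws Exception {
        try (DatagramSocket socket = new DatagramSocket(4446)) {
            byte[] buf = new byte[256];
            while (true) {
                DatagramPacket packet = new DatagramPacket(buf, buf.length);
                socket.receive(packet); // blocks until a server announces itself
                System.out.println("Discovered: " + new String(packet.getData(), 0,
                        packet.getLength(), StandardCharsets.UTF_8));
            }
        }
    }
}

Each client can collect these announcements for a couple of seconds and then present the resulting list of servers to the user.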
Amazon RDS databases require that I supply the IP address of any machine that should be permitted to make connections. On my local Apache Tomcat development server, my Java application is able to connect to my Amazon RDS database; I had to supply my computer's IP address to allow this connection.
Fast forward to deploying my application to OpenShift. My application is deployed successfully and I can get to my login page. I created a test page in the application to output the IP address of the OpenShift server on which my application is running. I added that IP address to the security group on Amazon RDS just like I did for my local machine. However, the deployed application on OpenShift is still not successfully making a connection to my Amazon RDS database.
I'm using the free OpenShift account. I'm wondering if the free account doesn't permit external database connections? Or am I not capturing the correct IP address of the OpenShift server where my application is hosted?
In general, you can conduct an experiment:
Create a small function in your application that fetches some URL on another server you control, i.e., one whose access.log you can read and which accepts connections from any IP (see the sketch after these steps).
Then run this function (by accessing your test page, using a remote shell, or by scheduling a cron job).
And check the access.log.
This way you will determine the outbound IP address (in case it is changed by some kind of proxy).
If nothing is logged, then it seems external connections are blocked.
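A minimal sketch of such a test function in Java (the URL is a placeholder for a server whose access.log you can read):

import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class OutboundIpTest {
    public static void main(String[] args) throws IOException {
        // Fetch a URL on a server you control; its access.log will then show
        // which IP address your outbound connections appear to come from.
        URL url = new URL("http://my-test-server.example.com/whoami");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setConnectTimeout(5000); // fail fast if outbound traffic is blocked
        conn.setReadTimeout(5000);
        try (InputStream in = conn.getInputStream()) {
            System.out.println("HTTP status: " + conn.getResponseCode());
        } finally {
            conn.disconnect();
        }
    }
}

If this times out instead of producing a log entry on your server, outbound connections are probably blocked.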
You may also need to determine the IP address of OpenShift's Web Proxy.
See the notes about the Web Proxy and ports 8000 and 8443 at https://developers.openshift.com/en/managing-port-binding-routing.html
If there are many such IPs, you can create an SSH tunnel and forward one port, so that your connection to the database is effectively local.
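For example (all host names here are placeholders), run on the machine where your application runs, with gateway.example.com standing for a host whose IP is whitelisted in RDS:

ssh -N -L 3306:mydb-instance.xxxxx.us-east-1.rds.amazonaws.com:3306 user@gateway.example.com

Your application then connects to localhost:3306, and the traffic reaches RDS from the gateway's whitelisted IP (3306 assumes MySQL; use your engine's port).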
I'm running a server-side application on a remote server, using a particular port - call this port 9000. Using a separate laptop, I've been able to telnet to a simple Java hello-world TCP server and to access an HTTP server through my browser; both listened on port 9000 and were made using the standard Java libraries and com.sun.net.httpserver. However, when I use Node.js to create an application (i.e. server.listen(9000, '0.0.0.0')), I cannot connect to that application.
Is there something additional I should do to create a successfully listening HTTP server using Node.js? Any additional dependencies? As per above, assume there are no firewall issues between my laptop and my server.
For a larger context, the program I'm trying to run is etherpad-lite, which uses Node.js to create a server.
Don't include the IP address 0.0.0.0; this tells the server to listen only for requests to that 'hostname'.
Just use
server.listen(9000);
I need to identify the remote IP and port of the clients that register with my service. Also, when a client web app goes down, it un-registers itself from my web service.
I am using HttpServletRequest.getRemoteAddr() and HttpServletRequest.getRemotePort() to identify the clients.
But the problem is that when I test on the same machine, I get different ports from the same client web app.
I am running a JAX-WS web service on GlassFish, and the client web app is installed in the same container. Also, I am running a Fedora 14 VirtualBox VM.
Yes, that's correct: the port used by a connection is never guaranteed to be the same, and as you see, it varies.
The port is decided when the connection is made from the client to the server, and if multiple requests come in over multiple connections, multiple ports appear.
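To see this from the client side, a small sketch (example.com:80 stands in for any reachable server):

import java.net.Socket;

public class EphemeralPortDemo {
    public static void main(String[] args) throws Exception {
        // Two connections to the same server: the OS picks a fresh ephemeral
        // local port for each one, and that local port is exactly what the
        // server observes as getRemotePort().
        try (Socket first = new Socket("example.com", 80);
             Socket second = new Socket("example.com", 80)) {
            System.out.println("First connection's local port:  " + first.getLocalPort());
            System.out.println("Second connection's local port: " + second.getLocalPort());
        }
    }
}

If you need a stable client identity, have the client send an identifier in the registration payload instead of relying on its source port.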